What are the disadvantages of using the master database in SSMS when querying?
One of the managers here was running his query using the master database and it took 56 minutes to finish, but when we ran it directly against the database (sdbfile.dbo.) it took only 32 seconds.
Usually users have the public role to connect to the master database. The disadvantage is that SQL Server has to delegate again after authentication to the specific database referenced in the query.
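A hedged illustration (the table name below is hypothetical): either switch the connection's context to the target database, or fully qualify objects with a three-part name, rather than relying on resolution from the master context:

USE sdbfile;
GO
SELECT COUNT(*) FROM dbo.Orders;          -- resolved in the current database

-- or, from any context (including master), qualify fully:
SELECT COUNT(*) FROM sdbfile.dbo.Orders;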
I have a scheduled task on my warehouse server that runs a batch file, which triggers a command that runs a sproc on my SQL server. The task is set up to run as ACCOUNTS\sqlservice.
I recently made some updates to my linked server objects to prevent warehouse users from querying data through them, by mapping only the user(s) that should have query access in the linked server security settings. While mapping a local SQL Server login to a SQL Server login works, I can't seem to map a domain account between the two servers successfully; that is, ACCOUNTS\sqlservice, which has sa on both servers.
Any ideas on how I can give the sqlservice account access to query the linked server object? Thank you!
One solution: for the remote mapping, use an appropriate alternative user that is a local SQL Server login rather than a domain account.
There may be another way to do this between domain accounts, but I haven't had the time to investigate that.
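A minimal sketch of that remote mapping, assuming a linked server named WAREHOUSE and a SQL Server login named remote_query already created on the remote server (both names are hypothetical):

EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = N'WAREHOUSE',            -- the linked server to map against
    @useself     = 'FALSE',                 -- do not forward the caller's own credentials
    @locallogin  = N'ACCOUNTS\sqlservice',  -- the local domain login being mapped
    @rmtuser     = N'remote_query',         -- a SQL Server login on the remote server
    @rmtpassword = N'placeholder_password'; -- hypothetical; use the real login's password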
In SQL+, I first connect to the server I have been given:
CONNECT/
Connected.
However, when trying to create a database I get the following:
CREATE DATABASE Project3;
CREATE DATABASE failed
database already mounted
I've also tried STARTUP NOMOUNT, but it only states that I have insufficient privileges.
Is there something I'm doing wrong here?
In case this is Oracle you are talking about (Oracle's default CLI is called SQL*Plus), this means that a database has already been configured on the server. Oracle uses only a single database per server instance. Inside that database are schemas, and that is where your database objects will be stored.
See the below quote from: http://docs.oracle.com/cd/B28359_01/server.111/b28301/install.htm#ADMQS002
After you create a database, either during installation or as a standalone
operation, you do not need to create another. Each Oracle instance works
with a single database only. Rather than requiring that you create
multiple databases to accommodate different applications, Oracle Database
uses a single database, and accommodates multiple applications by enabling
you to separate data into different schemas within the single database.
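So instead of creating a second database, you create a user (and therefore a schema) inside the existing one. A minimal sketch, assuming you can connect as a privileged (DBA) account; the user name and password are illustrative:

CREATE USER project3 IDENTIFIED BY change_me;
GRANT CREATE SESSION, CREATE TABLE TO project3;
ALTER USER project3 QUOTA UNLIMITED ON users;
-- tables created while connected as PROJECT3 now live in the PROJECT3 schema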
We have a server with SQL databases (8 databases) working on a LAN. Now we are planning to set up a backup server connected through the LAN.
What we need is: when a user enters data, it should be saved in both databases, so that we have all the data in both places.
I am a newbie, so please give me some detailed information. I have seen some replication options; is that a better option for us?
We have SQL Server 2005.
Which database engine are you using?
There are several ways to build a distributed/replicated database. I'm sure you'll find out how to do it by reading your engine's documentation, but we cannot help here without more info.
Yes, you can back up your database from one server to another in multiple ways, including the following:
1) Script the complete database with schema and data. (Technique is here.)
2) Export your database to an Excel file and import it on the other server (use only when the situation requires it).
3) Connect the two servers with a linked server. (Technique here.)
Now, if you want to write data to two different servers, that depends on the code logic written for saving the data and the connection string provided for it. By adding a trigger to one server's database, you can copy data to a different server or database; a sketch of such a trigger follows.
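A minimal sketch of such a trigger, assuming a linked server named BACKUPSRV and an identically structured table on the backup server (all names are hypothetical):

CREATE TRIGGER trg_Orders_MirrorToBackup
ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- copy the freshly inserted rows to the backup server over the linked server
    INSERT INTO BACKUPSRV.BackupDb.dbo.Orders (OrderId, CustomerId, OrderDate)
    SELECT OrderId, CustomerId, OrderDate
    FROM inserted;
END;

Bear in mind that a synchronous trigger like this makes every insert depend on the backup server being reachable; the replication options you mention are generally the more robust choice.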
Is it possible to have SQL Server create a temp table inside a particular database upon a user connecting to the database in such a way that the connecting user is the only one with access to the contents in this table (or even better, the connecting user is the only one that can even see the table)?
I tried using a logon trigger (including a 'with execute as caller' clause), but although this creates the temp table, the connecting user can never see it or select from it.
All of this has to run inside SQL Server and require no user interaction at all...
Basically, this is the scenario I want to support:
user connects
a temp table is created inside a particular DB in SQL Server (by SQL Server itself, kicked off by the establishment of the connection)
some specific information is populated inside the table
for the duration of the connection, the user has (read) access to the contents of this table; the information in this table is used by a subsystem inside a particular database
user disconnects
the temp table and all its contents are dropped by SQL Server
Thanks
First thoughts:
modify your client code to create the table on connection? Then it can be done only when needed, not all the time
use a common, persisted table with a SessionID based on a GUID? This will provide some audit + troubleshooting information too (see the sketch after this answer)
use table-valued parameters to send data on demand rather than have any server-side caching
And what I'd probably do:
create the table and populate it when I need it. The user can connect to the database for a variety of reasons (I assume), so "connection" should be decoupled from "CREATE TABLE".
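A minimal sketch of the second bullet, the session-keyed persisted table, assuming the application generates a GUID per connection (all names are hypothetical):

CREATE TABLE dbo.SessionScratch (
    SessionId uniqueidentifier NOT NULL,
    ItemKey   int              NOT NULL,
    ItemValue nvarchar(200)    NULL,
    CreatedAt datetime2        NOT NULL DEFAULT SYSUTCDATETIME()
);

-- on connect: the application stamps its rows with a fresh GUID
DECLARE @session uniqueidentifier = NEWID();
INSERT INTO dbo.SessionScratch (SessionId, ItemKey, ItemValue)
VALUES (@session, 1, N'per-connection data');

-- during the connection: read back only this session's rows
SELECT ItemKey, ItemValue
FROM dbo.SessionScratch
WHERE SessionId = @session;

-- on disconnect (or via a cleanup job): remove the session's rows
DELETE FROM dbo.SessionScratch WHERE SessionId = @session;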
Using temp tables for this would not be the right approach if your data access is properly designed to open a connection, do an operation or query, and close the connection. The moment you closed the connection, the temp table would be destroyed. It would be better to use a view or stored procedure to filter the information to which the user should have access; a sketch follows. The structure of that view will depend greatly on how users connect to the database. Do users connect to the database using their own personal Windows authentication accounts, or do they connect indirectly through another account, as many web servers do?
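For the case where users connect under their own logins, a minimal sketch of such a filtering view (the table and column names are hypothetical; SUSER_SNAME() returns the caller's login name):

CREATE VIEW dbo.MyVisibleRows
AS
SELECT ItemKey, ItemValue
FROM dbo.AllRows
WHERE OwnerLogin = SUSER_SNAME();  -- each caller sees only rows tagged with their own login

Grant users SELECT on the view only, not on the underlying table, so the filter cannot be bypassed.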
IMO, the better approach is the second bullet point of gbn's answer: a common persisted table with an indicator as to the session or user.
I have 2 big tables in SQL Server that I need to sync to MySQL.
Now, I need that as an ongoing process.
The tables are 1 GB each and get a new/updated/deleted row every 0.1 seconds.
Can you recommend a tool that can do this that is not resource-expensive?
You can suggest open source as well as commercial tools.
Thanks
You could create a linked server instance in SQL Server, pointing to the MySQL instance. This article gives the step-by-step process. Once that is in place, provided you grant the MySQL user you connect on behalf of the proper permissions, you can write to the MySQL instance as you like. So you could easily update stored procedures to do an additional step to insert records into MySQL.
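A minimal sketch of such an extra step, assuming a linked server named MYSQL pointing at a MySQL schema called targetdb (both names are hypothetical); OPENQUERY is used because four-part names are often unreliable against MySQL providers:

-- inside the existing stored procedure, after the local write:
INSERT INTO OPENQUERY(MYSQL, 'SELECT id, payload FROM targetdb.big_table')
SELECT @id, @payload;

Here @id and @payload stand in for whatever parameters the procedure already writes locally.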