Does it affect the performance of my web application if many users use a single table of a SQL Server database simultaneously?
This question is very general; it depends on the number of concurrent users per second and on the design of the application code, particularly with an eye on atomicity.
I'm building a Microsoft SQL Server database that initially served only one client, but I'm now looking to support many (up to several thousand if things go well). The entire structure will be the same for each client, with only the data within each table being client specific.
I am thinking of adding a ClientID column to almost all tables and referencing it in all procedures (basically a WHERE ClientID = @ClientID on every statement), along with a Clients table that gains a new entry for every new client.
The alternative is a CREATE DATABASE [Client_Name] script that fires whenever a new client joins the server, creating another client-specific database with all its associated structure and procedures.
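To make the first option concrete, here's a minimal sketch of the shared-schema approach (the Orders table, column names, and index are illustrative, not from the question):

```sql
-- Shared-schema (single database): every tenant table carries a ClientID.
CREATE TABLE Clients (
    ClientID   INT IDENTITY(1,1) PRIMARY KEY,
    ClientName NVARCHAR(200) NOT NULL
);

CREATE TABLE Orders (
    OrderID   INT IDENTITY(1,1) PRIMARY KEY,
    ClientID  INT NOT NULL REFERENCES Clients(ClientID),
    OrderDate DATETIME NOT NULL
);

-- Lead every index with ClientID so the per-tenant filter stays cheap.
CREATE INDEX IX_Orders_ClientID ON Orders (ClientID, OrderDate);

-- Every statement is scoped to one tenant, as described above.
SELECT OrderID, OrderDate
FROM Orders
WHERE ClientID = @ClientID;
```

The database-per-client alternative replaces the ClientID filter with a CREATE DATABASE step at sign-up, at the cost of maintaining one schema copy per client.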
Is there any advantage performance wise to either option?
The decision on how to structure such a database should not be made only on performance issues. In fact, that is probably the least of the issues. Some things to consider:
How will you manage updates to your application? Multiple databases can make this easier or harder.
Will individual clients have customizations? This favors multiple databases.
What are the security requirements for the data? This can go either way.
What are the replication and recovery requirements for the data? This would tend to be easier with one database, but not in all scenarios.
Will concurrent usage by different clients interfere with each other?
Will clients be responsible for managing their own data or is this part of your offering?
Is any data shared among clients? How will you maintain common reference tables?
In general, performance is going to be better with a single database (think half-filled data pages occupying memory). Maintenance and development will be easier with a single database (managing multiple client databases is cumbersome). But actual requirements on the application should be driving such a decision.
In our company we use many DB servers in different cities. Sometimes data in one server must be synchronized with another. For example, in the table "Monitor" the values "status" and "date" may be updated very often. My problem is that when these values are updated on server A, they should also be updated on server B:
UPDATE Monitor SET date = '2013-06-13'
and then
UPDATE Monitor SET status = 4
On server A updating both values succeeds, but on server B (usually the most heavily loaded), sometimes, in approximately 0.03% of cases, only the date value is updated and the status is still old. Can anybody explain whether this is possible on a DB server under high load?
It's hard to explain without looking at the boxes, logs and workload each is doing; there are a thousand things that would cause server "B" to miss data, including table and row locks, requests dropped by the network, unfinished transactions and the like. To find out exactly, you'd have to turn on the logging and compare the requests on "A" versus "B". The first thing I'd do, however, would be to look for errors in the SQL logs.
But in general, keeping databases synchronized across regions is doable using existing technologies available from Microsoft and Oracle. One scenario involves using a master, central DB to receive all requests. It then distributes inserts, updates, deletes, and queries out to the regional DBs using SSIS or regular DB connectivity over a WAN.
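One simple mitigation for the partial-update symptom, assuming the two statements really are issued separately as shown in the question: combine them into a single statement, or at least one explicit transaction, so whatever ships the change to server B delivers both columns or neither. A sketch:

```sql
-- Single statement: both columns change atomically, and replication
-- ships them as one change rather than two independent ones.
UPDATE Monitor
SET [date] = '2013-06-13',
    status = 4;

-- Or, if they must remain separate statements, wrap them in one transaction:
BEGIN TRANSACTION;
UPDATE Monitor SET [date] = '2013-06-13';
UPDATE Monitor SET status = 4;
COMMIT TRANSACTION;
```

This doesn't explain why B misses updates, but it removes the window in which only one of the two changes exists.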
Here's a high-level guide to the technology solution available in SQL Server.
http://msdn.microsoft.com/en-us/library/hh868047.aspx
You were probably looking for a simple answer, but I don't think there is one.
Is there any way to give a particular SQL login higher priority for running queries? We have one server that has multiple databases; unfortunately one of the databases occasionally runs very intensive queries (which aren't too time-dependent), and it slows down the rest of the databases on the server.
I'd like to be able to tell the server to run queries from a particular login on a higher priority to avoid slow down for other systems.
I understand that typically there would be issues with locking - however in this case, there is one database table that all the databases reference (user information) that is read only - so there wouldn't be any of these issues.
We can't separate out the databases, and we can't add more servers - any ideas?
Thanks
The only way to partition resources in SQL Server 2005 is to create separate instances; however, this only hides memory/CPU from the other instances, and it doesn't allow under-utilized instances to share their memory/CPU with busy instances.
In SQL Server 2008 they added the Resource Governor, which can prioritize CPU and memory based on users or databases (http://msdn.microsoft.com/en-us/library/bb933866.aspx).
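A minimal Resource Governor sketch (SQL Server 2008+); the pool limits and the ReportingLogin name are illustrative assumptions, and the classifier function must live in master:

```sql
-- Cap the intensive workload rather than trying to boost everyone else.
CREATE RESOURCE POOL LowPriorityPool
    WITH (MAX_CPU_PERCENT = 20, MAX_MEMORY_PERCENT = 20);

CREATE WORKLOAD GROUP LowPriorityGroup
    USING LowPriorityPool;
GO

-- Classifier: route one login into the limited group; everyone else
-- falls through to the default group.
CREATE FUNCTION dbo.rg_classifier() RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'ReportingLogin'   -- illustrative login name
        RETURN N'LowPriorityGroup';
    RETURN N'default';
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```

Note this caps the noisy login's resources rather than raising anyone's priority, which is usually the safer framing of the same goal.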
Thanks,
Matt
SQL Server 2008 database design problem.
I'm defining the architecture for a service where site users would manage a large volume of data on multiple websites that they own (100MB average, 1GB maximum per site). I am considering whether to split the databases up such that the core site management tables (users, payments, contact details, login details, products etc) are held in one database, and the database relating to the customer's own websites is held in a separate database.
I see a possible gain in that I can distribute the hardware architecture to give more muscle to the heavy lifting done in the websites database, leaving the site management database in a more appropriate area. But I'm also conscious of losing the ability to directly relate the sites to the customers through a foreign key (as far as I know this can't be done cross-database?).
So, the question is two fold - in general terms should data in this sort of scenario be split out into multiple databases, or should it all be held in a single database?
If it is split into multiple, is there a recommended way to protect the integrity and security of the system at the database layer to ensure that there is a strong relationship between the two?
Thanks for your help.
This question, and thus my answer, may be close to the gray line of subjective, but at the least I think it would be common practice to separate the 'admin' tables into their own DB for what it sounds like you're doing. If you can tie a client to a specific server and database instance, having separate databases opens up easy paths for adding servers as you add clients. A single DB would require you to monkey with various clustering approaches if you got too big.
[edit] Building in the idea early that each client gets its own DB also sets the tone for how you develop, while it's still easy to make structural and organizational changes. Discovering two years from now that you need to do it will be a lot more painful. I've worked with split DBs plenty of times in the past, and it really isn't hard to deal with as long as you can establish some idea of what the context is. Here it sounds like you already have the idea that the client is the context.
Just my two cents, like I said, you could be close to subjective on this one.
Single Database Pros
One database to maintain. One database to rule them all, and in the darkness - bind them...
One connection string
Can use Clustering
Separate Database per Customer Pros
Support for customization on per customer basis
Security: No chance of customers seeing each other's data
Conclusion
The separate database approach would be valid if you plan to support per-customer customization. I don't see the value otherwise.
You can use a linked server (or cross-database queries, when both databases are on the same instance) to connect the databases.
Your architecture is smart.
If you can't use a link, you can always replicate critical data from the users database to the website database in read-only mode.
Concerning security: the best approach is to have a service layer between the ASP (or other web language) and the database, so your databases stay pretty much isolated.
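Since a foreign key can't span databases, one common workaround at the database layer is a trigger that checks the other database on write. A sketch, where CoreDB, Sites, and Customers are hypothetical names standing in for the management database and its tables:

```sql
-- In the websites database: reject rows whose CustomerID has no match
-- in the management database. This replaces the missing cross-db FK.
CREATE TRIGGER trg_Sites_CheckCustomer
ON dbo.Sites
AFTER INSERT, UPDATE
AS
BEGIN
    IF EXISTS (
        SELECT 1
        FROM inserted i
        WHERE NOT EXISTS (
            SELECT 1
            FROM CoreDB.dbo.Customers c
            WHERE c.CustomerID = i.CustomerID
        )
    )
    BEGIN
        RAISERROR('Unknown CustomerID in CoreDB', 16, 1);
        ROLLBACK TRANSACTION;
    END
END;
```

This only guards one direction; deletes in the management database would need a matching trigger (or a soft-delete convention) to keep the relationship honest.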
If you expect to have to split the databases across different hardware in the future because of heavy load, I'd say split it now. You can use replication to push copies of some of the tables from the main database to the site management databases. For now, you can run both databases on the same instance of SQL Server and later on, when you need to, you can move some of the databases to a separate machine as your volume grows.
Imagine we had infinitely fast computers: would you split your databases? Of course not. The only reason we split them is to make it easy to scale out at some point. You don't really have much choice here; 100MB-1000MB per client is huge.
Is there a way to tell MS SQL that a query is not too important and that it can (and should) take its time?
Likewise is there a way to tell MS SQL that it should give higher priority to a query?
Not in versions below SQL Server 2008. In SQL Server 2008 there's the Resource Governor. Using that, you can assign logins to workload groups based on properties of the login (login name, application name, etc.). The groups can then be assigned to resource pools, and limits or restrictions in terms of resources can be applied to those pools.
SQL Server does not have any form of resource governor in versions before 2008. There is a SET option called QUERY_GOVERNOR_COST_LIMIT, but it's not quite what you're looking for: it prevents queries from executing based on their estimated cost rather than controlling the resources they get.
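For completeness, here's what that option looks like; note it refuses to run expensive statements outright rather than deprioritizing them:

```sql
-- Applies to the current session only.
SET QUERY_GOVERNOR_COST_LIMIT 300;  -- illustrative limit, in estimated cost units

-- From here on, any statement whose estimated cost exceeds 300
-- fails with an error instead of running slowly.
```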
I'm not sure if this is what you're asking, but I had a situation where a single UI click added 10,000 records to an email queue (with lots of data in each body). The email went out over the next several days, so it didn't need to be high priority; in fact it would bog down the server every time it happened.
I split the procedure into 10,000 individual calls, ran the process from the UI on a different thread (set to low priority), and had it sleep for a second after each procedure call. It took a while, but I had very granular control over exactly what it was doing.
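The same throttling idea can also live server-side in T-SQL: process the queue in small batches and pause between them. A sketch, where EmailQueue, its columns, and the batch size are illustrative:

```sql
-- Drain EmailQueue gently: small batches with a pause between them,
-- so the work never monopolizes the server.
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    ;WITH batch AS (
        SELECT TOP (100) *
        FROM dbo.EmailQueue
        WHERE Sent = 0
        ORDER BY QueuedAt
    )
    UPDATE batch SET Sent = 1;   -- stand-in for the real send step

    SET @rows = @@ROWCOUNT;
    WAITFOR DELAY '00:00:01';    -- yield the server for a second per batch
END;
```

Small batches keep locks short-lived, and the WAITFOR gives other workloads room between batches.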
btw, this was NOT spam, so don't flame me thinking it was.