Microsoft SQL Server Management Studio Express: store DB in memory?

I have a database-intensive test I'm running that uses a small database, ~100 MB.
Is there a way to have Microsoft SQL Server Management Studio Express store the database in memory instead of on the hard drive? Is there some option I can select for it to do this?
I'm also thinking about a RAM drive, but if there is an option in SSMSE I'd rather do that.

Management Studio has nothing to do with how the database is stored. The SQL database engine will, given sufficient memory, cache appropriately to speed up queries. You really shouldn't need to do anything special. You'll see that the initial query is a bit slower than the ones that run after the cache is populated; that's normal.
Don't mess with a RAM drive; you'll be taking memory away from SQL Server to do it and will probably end up less efficient. If you have a critical need for fast disk, you'll need to look at either a properly configured array or solid-state drives.
There are ways to performance tune SQL to specific applications, but it's very involved and requires a deep knowledge of the specific SQL server product. You're better off looking at database design and query optimization first.
Realistically, databases around 100 MB are tiny and shouldn't require special handling if properly designed.
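If you want to verify how much of the database the engine has already cached, here's a rough sketch against the buffer-pool DMV (available in SQL Server 2005 and later; requires VIEW SERVER STATE permission):

    -- Roughly how many data pages of each database currently sit in
    -- the buffer pool (each page is 8 KB).
    SELECT DB_NAME(database_id) AS database_name,
           COUNT(*) * 8 / 1024   AS cached_mb
    FROM sys.dm_os_buffer_descriptors
    GROUP BY database_id
    ORDER BY cached_mb DESC;

Run your test twice and compare; a ~100 MB database will usually be fully cached after the first pass.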

Related

What database strategy to choose for a large web application

I have to rewrite a large database application running on 32 servers. The hardware is up to date; each machine has two quad-core Xeons and 32 GB of RAM.
The database is multi-tenant: each customer has their own database file, around 5 to 10 GB each, and I run around 50 databases on this hardware. The app is open to the web, so I have no control
over the load. There are no really complex queries, so SQL is not required if there is a better solution.
The databases get updated via FTP every day at midnight; otherwise they are read-only.
C# is my favourite language and I want to use ASP.NET MVC.
I thought about the following options:
Use two big database servers running SQL Server 2012 to serve the 32 servers with data; the 32 servers run IIS and provide REST services.
Denormalize the database and use Redis on each web server, with BookSleeve as the Redis client.
Use a combination of SQL Server and Redis.
Use SQL Server 2012 together with Hadoop.
Use Hadoop without SQL Server.
What is the best way, for a read-only database, to get the best performance without losing maintainability? Does MapReduce make sense at all in such a scenario?
The reason for the rewrite is that the old app, written in C++ on ISAM technology, is too slow, and its interfaces are old-fashioned and not nice to use from a website, especially when using AJAX.
The app uses a relational data model with many tables, but it is possible to build one accelerator table that all queries can be run against, with all other information from the other tables reachable by a simple key lookup.
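A minimal T-SQL sketch of that accelerator-table idea (all table, column, and index names here are hypothetical):

    -- One wide, denormalized table carries the searchable columns;
    -- detail rows in the normalized tables are reached by key lookup.
    CREATE TABLE dbo.Accelerator (
        TenantId   int           NOT NULL,
        ItemKey    bigint        NOT NULL,
        SearchText nvarchar(200) NOT NULL,
        Category   int           NOT NULL,
        CONSTRAINT PK_Accelerator PRIMARY KEY (TenantId, ItemKey)
    );

    -- Because the data is read-only between nightly loads, covering
    -- indexes can be added freely without worrying about write cost.
    CREATE INDEX IX_Accelerator_TenantCategory
        ON dbo.Accelerator (TenantId, Category)
        INCLUDE (SearchText);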
A few questions: what problems have come up that make you rewrite this? What do the query patterns look like? It sounds like you would be most comfortable with SQL Server plus caching (memcached) to address whatever issues are causing the rewrite. Redis is good, but you won't need its data-structure features with the DB handling queries, and you don't need persistence if it's only being used as a cache. Without knowing more about the problem, I guess I'd look at MongoDB to handle data sharding, redundant storage, and caching all in one solution. There are no special machines in this setup, redundancy can be configured, and the load should balance well.
This question is almost an opinion piece. I'd personally prefer Oracle RAC with TimesTen for caching if performance is of the utmost importance and the volume of concurrent reads is high during the day.
There's a white paper here...
http://www.oracle.com/us/products/middleware/timesten-in-memory-db-504865.pdf
The specs of the disk subsystem and the organization of indexes and data files across physical disks are probably the most important factors, though.

Relational DB in-memory?

I have a simpleton question on Redis. If the key to its performance is that it's in-memory, why can't that be done on a regular SQL DB?
Any DBMS can be run "in memory"; consider the use of a RAM disk. However, most DBMSs (those with SQL) are not designed to run entirely in memory and put a lot of effort into minimizing disk I/O and paging: a DBMS works very hard to keep the "relevant data" hot (in memory and in cache), because I/O is slow, slow, slow.
This is because database data is often [and has historically been] significantly larger than main memory. That, and main memory is volatile :-) [ACID DBMSs do a lot of work with write-ahead logging -- to a non-volatile store -- and other techniques to ensure data is never corrupted, even in the case of an unexpected shutdown.]
Some databases, like SQLite, use the same format for the disk and memory stores, even though they explicitly support an in-memory store. Support for other [in-memory] back-ends and memory-usage tuning vary by provider.
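Since SQLite was mentioned: its in-memory store is exposed through the special ':memory:' filename. A tiny sketch, assuming a table named t already exists in the file-backed database:

    -- SQLite: attach a fresh in-memory database alongside the main
    -- file-backed one, then copy a table into RAM.
    ATTACH DATABASE ':memory:' AS mem;
    CREATE TABLE mem.t AS SELECT * FROM main.t;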
Happy coding.
You may be interested in VoltDB.
The key is not only that it's in memory, but also that it has simpler operations than a SQL DB. Redis offers simple operations such as GET and SET (and so on) backed by hash tables and other optimized data structures.
SQL databases generally take longer to compute; however, they are far more flexible and in most cases more powerful in terms of the kinds of queries they support. You most certainly cannot run JOIN queries in Redis, for example.
You may be interested in TimesTen (which is now owned by Oracle).
In 11g its SQL has improved significantly, though it is still not as powerful as Oracle's.
You can do that natively with some SQL database management systems, but there are risks.
You stand to lose data if the server fails, for example, and I don't think you can get ACID-compliant transactions: any log file would have to be written to disk to survive a server failure. (I imagine it's possible for an in-memory SQL DBMS to still write log files to disk, but I've never run across that myself. Not that I've looked much.)
On DBs in RAM: traditional databases will eventually wind up in RAM:
Traditional database data — records of human transactional activity [...] — will not grow as fast as Moore’s Law makes computer chips cheaper.
And that point has a straightforward corollary, namely:
It will become ever more affordable to put traditional database data entirely into RAM.

Single logical SQL Server possible from multiple physical servers?

With Microsoft SQL Server 2005, is it possible to combine the processing power of multiple physical servers into a single logical SQL Server? Is it possible in SQL Server 2008?
I'm thinking that if the database files were located on a SAN and somehow one of the SQL Servers acted as a kind of master, then processing could be spread out over multiple physical servers, for instance even allowing simultaneous updates where there was no overlap, and with no limit in the case of read-only queries on unlocked tables.
We have an application that is limited by the speed of our SQL Server, and we're probably stuck with SQL Server 2005 for now. Is the only option to get a single, more powerful physical server?
Sorry, I'm not an expert; I'm not sure if the question is a stupid one.
TIA
Before rushing out and buying new hardware, find out where your bottlenecks really are. Many locking problems can be solved with the appropriate indexes for your workload.
For example, I've seen instances where placing tempdb on an SSD solved performance issues and saved the client from buying an expensive new server.
Analyse your workload: How Can I Log and Find the Most Expensive Queries?
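A hedged starting point for that analysis, using the plan-cache statistics DMVs that ship with SQL Server 2005 and later:

    -- Top 10 statements by total CPU time in the plan cache.
    SELECT TOP (10)
           qs.total_worker_time / 1000 AS total_cpu_ms,
           qs.execution_count,
           SUBSTRING(st.text,
                     (qs.statement_start_offset / 2) + 1,
                     ((CASE qs.statement_end_offset
                         WHEN -1 THEN DATALENGTH(st.text)
                         ELSE qs.statement_end_offset
                       END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;

Note that these counters reset when the server restarts or plans are evicted, so capture them during representative load.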
With SQL Server 2008 you can utilise the Management Data Warehouse (MDW) to capture your workload.
White Paper: SQL Server 2008 Performance and Scale
Also: please be aware that a SAN solution is not necessarily a faster I/O solution than directly attached storage. It depends on the SAN, the number of physical disks in a LUN, LUN subscription and usage, the speed of the HBAs, and several other hardware factors...
Optimizing the app may be a big job of going through all the business logic and lines of code, but looking for the most expensive queries can quickly locate the bottleneck area. Maybe it only involves a couple of the biggest tables, views, or stored procedures, and adding or fine-tuning an index may help right away. If bumping up the RAM is possible, try that option as well; it is cheap and easy to configure.
Good luck.
You might want to Google for "SQL Server scalable shared database". Yes, you can store your DB files on a SAN and use multiple servers, but you're going to have to meet some pretty rigid criteria for it to be a performance boost or even useful (a high ratio of reads to writes, a dataset small enough to fit in memory or a fast enough SAN, multiple concurrent accessors, etc.).
Clustering is complicated and probably much more expensive in the long run than a bigger server, and far less effective than properly optimized application code. You should definitely make sure your app is well optimized.

SQL Server 2K5 and memory assignment

Two Part Question:
What kinds of actions does SQL Server process in RAM? The ones I know of are table variables and CTEs. My colleague also mentioned COUNTs and indexes; I'm not sure how accurate this is.
How do I control what kind of data is stored in RAM? I know this is dynamically assigned by SQL Server, and it probably does a good job of it. But for academic reasons, does anyone know the guidelines governing this?
Roughly speaking (and this is hiding some of the details), there are two types of memory use: one is for data pages and the other is for cached query plans. It's obviously more complicated than that, but beyond this point you start to need to know quite a bit about SQL Server's internals.
You don't control what is stored in RAM. The system does it on your behalf.
In an ideal setup, all of the active databases' hot data pages would be in RAM.
For details:
Dynamic Memory Management
Memory Management Architecture
Memory Used by SQL Server Objects Specifications
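To see roughly how that split looks on a live instance, a sketch against the memory-clerk DMV (the column names shown are the SQL Server 2005/2008 ones; they were consolidated into pages_kb in SQL Server 2012):

    -- Compare buffer pool (data page) memory with plan-cache memory.
    SELECT type,
           SUM(single_pages_kb + multi_pages_kb) / 1024 AS memory_mb
    FROM sys.dm_os_memory_clerks
    WHERE type IN ('MEMORYCLERK_SQLBUFFERPOOL',
                   'CACHESTORE_SQLCP',
                   'CACHESTORE_OBJCP')
    GROUP BY type
    ORDER BY memory_mb DESC;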
You could force a table to stay in the cache using DBCC PINTABLE; this command told SQL Server not to flush the pages for the table from memory. Note, however, that DBCC PINTABLE is deprecated and has no effect from SQL Server 2005 onward, so on SQL 2005 it is effectively a no-op.
http://msdn.microsoft.com/en-us/library/ms178015%28SQL.90%29.aspx
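For reference, the legacy (SQL Server 2000-era) usage looked like this; on 2005 and later the statement is accepted but does nothing:

    -- Legacy example only: pin a table's pages in the buffer pool.
    DECLARE @db_id int, @object_id int;
    SELECT @db_id = DB_ID('pubs'),
           @object_id = OBJECT_ID('pubs..authors');
    DBCC PINTABLE (@db_id, @object_id);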

Moving SQL2005 app to SQL2008

I will be moving our production SQL 2005 application to SQL 2008 soon. Anything to look out for before/after the move? Any warnings or advice?
Thank you!
Change the compatibility level of the database after moving it to the 2008 server. By default, it will stay at the old compatibility level; raising it lets you use the new goodies in SQL 2008 for that database.
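For example (level 100 is SQL Server 2008; the database name here is hypothetical):

    -- Raise the moved database to the 2008 compatibility level.
    ALTER DATABASE YourDb SET COMPATIBILITY_LEVEL = 100;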
If you're using the Enterprise Edition of SQL 2008 and you're not running at 80-90% CPU on the box, turn on data compression and compress all of your objects. There's a big performance gain on that. Unfortunately, you have to do it manually for every single object - there's not a single switch to throw.
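A sketch of what that per-object work looks like (table and index names are hypothetical):

    -- Enterprise-only: rebuild a table and one of its indexes with
    -- page compression; repeat for every object you want compressed.
    ALTER TABLE dbo.Orders REBUILD WITH (DATA_COMPRESSION = PAGE);
    ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
        REBUILD WITH (DATA_COMPRESSION = PAGE);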
If you're not using Enterprise, after upping the compatibility level, rebuild all of your indexes. (This holds pretty much true for any version upgrade.)
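A quick-and-dirty sketch of the rebuild, using the undocumented sp_MSforeachtable helper (a proper maintenance script is a better fit for production):

    -- Rebuild every index on every user table in the current database.
    EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REBUILD';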
The Upgrade Advisor can also help.
Look at the execution plans with production data in the database.
Though my best advice is to test, test, test.
When people started moving from 2000 to 2005, it wasn't the breaking features that were show-stoppers; it was the change in how queries performed with the new optimizer.
Queries that had been heavily optimized for 2000 now performed poorly, or worse, erratically, leading people to chase down non-problems and generally lowering the confidence of the end users.