How to control a growing SQL database day by day

I have a SQL database whose main Orders table gets 2-5 new rows per day.
The other table that receives daily records is the Log table. It gets a new row every time a user accesses the login page of the web site, recording the time and the user's IP address. For now it gets 10-15 new rows per day.
While monitoring the daily SQL backups, I realized they are growing by about 2-3 MB per day. I have enough storage, but it worries me. Is the Log table causing this growth? I deleted about 150 rows, but the .bak file didn't get any smaller; it actually got bigger! I haven't shrunk the database and I don't want to.
I'm not sure what to do about it. Is there a better way of logging user access?

I typically export the rows from the production server, import them into a database on a non-production server (such as my local machine), and then delete the exported rows from the production table. I also run an optimize on the production table afterwards so its size is recalculated. This is somewhat manual, but it keeps the production table small, and the export/import process is quite quick.
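A minimal sketch of that move-and-delete step in T-SQL, assuming the Log table has a LogDate column and an archive database is reachable from the production instance (table, column, and database names here are placeholders, not the poster's actual schema); the "optimize" step maps to an index rebuild on SQL Server:

    -- Copy rows older than 30 days into the archive, then remove them
    -- from production. All object names below are placeholders.
    DECLARE @cutoff DATETIME;
    SET @cutoff = DATEADD(DAY, -30, GETDATE());

    INSERT INTO ArchiveDb.dbo.Log (LogDate, IpAddress)
    SELECT LogDate, IpAddress
    FROM dbo.Log
    WHERE LogDate < @cutoff;

    DELETE FROM dbo.Log
    WHERE LogDate < @cutoff;

    -- Rebuild the indexes so the table's size is recalculated.
    ALTER INDEX ALL ON dbo.Log REBUILD;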

Does a snapshot database have instant records, the same as the source database, in SQL?
This is a small production database.
We are considering having a snapshot database on the same server purely for reporting purposes. I wonder whether the snapshot database will have up-to-the-instant records or a time lag in the records.
I have worked with replicated databases, which take about 5 or 10 minutes to get fresh data records.
No, a database snapshot is purely a point-in-time view of your active database. Not only is it not instant, it will never catch up; it is a static view of the data as it was.
In other words, the more time that elapses between when the snapshot is taken and when your query runs against it, the greater the potential difference between the snapshot and the source database.
This is also evident in how the snapshot is managed on disk. Snapshots maintain their point-in-time view by referencing the original copies of database pages; as modifications come in after the snapshot, a copy of each affected page is made to preserve the snapshot's state. Hence a snapshot is very small on disk at the moment it is taken, but it grows larger and larger as time passes, since it must keep an exact version of the database's state at the moment the snapshot was taken.
As the documentation puts it: "A database snapshot is a read-only, static view of a SQL Server database (the source database)."
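For reference, creating a snapshot is a single statement. A minimal sketch, assuming a source database named Sales whose data file has the logical name SalesData (both names are assumptions; check sys.master_files for the real ones):

    -- Create a point-in-time, read-only snapshot of Sales.
    -- NAME must match the logical name of the source data file; the
    -- sparse .ss file stores copies of pages changed after this moment.
    CREATE DATABASE Sales_Reporting_Snapshot
    ON ( NAME = SalesData, FILENAME = 'C:\Snapshots\Sales_Reporting.ss' )
    AS SNAPSHOT OF Sales;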

ASPdotnetstorefront dbo.SecurityLog row deleting

I've acquired the maintenance task for an aspdotnetstorefront website whose admin-panel maintenance scripts appear not to have been run for several years.
I've since run each of them, but this only freed up roughly 50 MB. There is a table in the database (dbo.SecurityLog) that still has rows from several years ago (2012). I assume these rows are what displays under Maintenance > Security, but because of the sheer size of the table the application times out ("Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.").
Would deleting the older rows from 2012 be safe, or could this cause issues with the application?
So I resolved this based on the following link:
Is my SQL Server Express 2008 R2 database about to hit the size limit and crash my website?
The contributors to that post discuss several causes of database growth in aspdotnetstorefront, along with methods to clear out rows, one of which targets the SecurityLog table. I cleared this table up to a specified date and all is well; it freed up 300 MB+.
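A sketch of that clear-out, done in batches so the delete holds locks only briefly and keeps transaction-log growth bounded. The CreatedOn column name is an assumption; check the actual SecurityLog schema before running anything like this:

    -- Delete old SecurityLog rows in batches of 5000 until none remain.
    -- 'CreatedOn' is a hypothetical column name, not the known schema.
    DECLARE @rows INT;
    SET @rows = 1;
    WHILE @rows > 0
    BEGIN
        DELETE TOP (5000) FROM dbo.SecurityLog
        WHERE CreatedOn < '20130101';
        SET @rows = @@ROWCOUNT;
    END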

Data synchronization between two databases

I need to synchronize two data sources:
I have a web service running on the net. It continuously gathers data from the net and stores it in the database. It also serves that data to clients on request. I want to keep a repository of the data as objects for faster service.
On the client side, there is a Windows service that calls the web service mentioned above and synchronizes its local database with the server.
A few of my restrictions:
The web service has a very small buffer limit and can only transfer fewer than 200 records per call, which is not enough for the data collected in a day.
I also can't copy the database files, since the database structures are very different (one is SQL Server and the other is Access).
The data is updated on an hourly basis, and there will be a large amount of data that needs to be transferred.
Syncing by date or some other grouping alone is not possible given the size limitation. Paging could be done, but the remote repository keeps changing (and I don't know how to take a chunk of data from the middle of a table in a SQL database).
How do I keep the repository of recent data, or the full database, in sync under this limitation?
A better approach to the problem, or an improvement on the current approach, will be accepted as the right answer.
You mentioned that syncing by date or by group wouldn't work because the number of records would be too big, but what about syncing by date (or group, or whatever) and then paging within that? The benefit is that you then have a defined batch of records, and you can page over it because that batch won't change.
For example, if you need to pull data down hourly, then as each hour elapses (so, when the clock goes from 8:59 am to 9:00 am) you begin pulling down the data that was added between 8 am and 9 am, in chunks of 200 or whatever size the service can handle.
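A sketch of paging within such a closed hour window on SQL Server 2005 (which predates OFFSET/FETCH) using ROW_NUMBER(). The table and column names (dbo.Collected, Id, CollectedAt, Payload) are hypothetical stand-ins for the poster's schema:

    -- Page through a finished one-hour window. Because the window lies
    -- in the past, its contents no longer change, so page boundaries
    -- stay stable from one web-service call to the next.
    DECLARE @from DATETIME, @to DATETIME, @page INT, @pageSize INT;
    SET @from = '20090601 08:00'; SET @to = '20090601 09:00';
    SET @page = 1; SET @pageSize = 200;

    WITH Windowed AS (
        SELECT Id, CollectedAt, Payload,
               ROW_NUMBER() OVER (ORDER BY Id) AS rn
        FROM dbo.Collected
        WHERE CollectedAt >= @from AND CollectedAt < @to
    )
    SELECT Id, CollectedAt, Payload
    FROM Windowed
    WHERE rn BETWEEN (@page - 1) * @pageSize + 1 AND @page * @pageSize;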

SQL 2005 Partitioning

I have a database with 200 million records and I need to support 200 write transactions per second. How many partitions do you recommend using?
One. Don't bother: partitioning will slow your writes down.
It's far more important for writes to have a dedicated, fast volume for that database's transaction log file (the LDF file) alone. Don't add log files either: one LDF on one volume only.
This is because of write-ahead logging: One and Two. Simply put, a data page may not be written to disk immediately, but the associated log entry must be confirmed as written for any given transaction.
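If the log isn't on its own volume yet, moving it is a one-statement change plus a file move. A sketch assuming a database named MyDb whose log has the logical name MyDb_log and a dedicated volume mounted as L: (all placeholder names; check sys.master_files for the real logical name):

    -- Repoint the log file at the dedicated volume. The change takes
    -- effect after you set the database OFFLINE, move the physical
    -- .ldf file to the new path, and bring the database ONLINE again.
    ALTER DATABASE MyDb
    MODIFY FILE (NAME = MyDb_log, FILENAME = 'L:\Logs\MyDb_log.ldf');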

How to replicate database A to B, then truncate data on database A, leaving B alone?

I am having a problem with my SQL Server 2005 database. The database must handle 1000 inserts a second constantly, which is proving very difficult because it must also support reporting on the data, and therefore carry indexes. It slows down after a couple of days, achieving only 300 inserts per second, and by 10 days it is almost non-functional.
The requirement is to store 14 days' worth of data. So far I can only manage 3 or 4 before everything falls apart. Is there a simple solution to this problem?
I was thinking I could replicate the primary database, letting the new database act as the reporting database and store the 14 days' worth of data, then truncate the primary database daily. Would this work?
It is unlikely you will want reporting running against a database capturing 1000 records per second. I'd suggest two databases: one handling the constant stream of inserts, and a second reporting database that loads records only at an interval, either by querying the first for the finite set added since the last load or by caching the incoming data and loading it separately.
However, reporting in near real time against a database capturing 86 million rows per day (1000 rows/sec × 86,400 seconds) and carrying approximately 1.2 billion rows will require significant planning and hardware. Further, on the back end, as you reach day 14 and start removing old data, you will put additional load on the database. If you can run with minimal logging, that will help the primary system, but the reporting system, with its indexing demands, will still require some pretty significant performance work.
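A sketch of that interval load, assuming the capture table has an ever-increasing identity column and the reporting database keeps a watermark of the last row it loaded (every name here, from CaptureDb to LastLoadedId, is a hypothetical placeholder):

    -- Copy only rows added since the last load into the reporting DB,
    -- then advance the watermark. All object names are placeholders.
    DECLARE @last BIGINT, @upto BIGINT;
    SELECT @last = LastLoadedId FROM ReportDb.dbo.LoadWatermark;
    SELECT @upto = MAX(Id) FROM CaptureDb.dbo.Readings;

    INSERT INTO ReportDb.dbo.Readings (Id, ReadAt, Value)
    SELECT Id, ReadAt, Value
    FROM CaptureDb.dbo.Readings
    WHERE Id > @last AND Id <= @upto;

    UPDATE ReportDb.dbo.LoadWatermark SET LastLoadedId = @upto;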
If the server has multiple hard drives, I would try to split the database (or even the tables) into partitions across them.
Yeah, you don't need to copy a database over and then truncate/delete the live database on the fly. My guess is that the slowness is because your transaction logs are growing like crazy?
I think what you're really after is keeping the database from ballooning. If you are using the FULL recovery model, backing up the transaction log regularly will mark the log space as reusable and keep the log from growing out of control (note that it won't physically shrink files that have already grown).
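For reference, a scheduled log backup is a single statement; a sketch assuming a database named CaptureDb and a backup folder that already exists (both are placeholders):

    -- Back up the transaction log so its space can be reused instead of
    -- growing the file. Run on a schedule, e.g. every 15 minutes.
    BACKUP LOG CaptureDb
    TO DISK = 'E:\Backups\CaptureDb_log.trn';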