Transaction log gets full [closed] - sql

We are on SQL Server 2016. Our recovery mode is FULL. Auto-growth is set to 4GB.
Drive size is 1TB. Transaction log backup frequency is 2 hours.
We have an issue with the transaction log getting full very frequently. Our data size is approximately 1.2TB.
Can someone please suggest what we could do to resolve this issue? Are there any additional settings we could change or check?
PS: I'm a beginner in this field, so I would appreciate any kind of help.
Thanks.

The log must be sized to accommodate all activity between log backups at a minimum. The log backup frequency should be driven by your recovery point objective (RPO), which is the maximum acceptable data loss as defined by the business.
However, you may need to schedule log backups more frequently to keep the transaction log size reasonable. Two hours is apparently not often enough in your environment, so you need to either increase the log size or increase the log backup frequency.
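A quick way to see how full the log actually is, why it cannot be truncated yet, and to back it up more often is sketched below ('YourDatabase' and the backup path are placeholders for your own names):

```sql
-- Show current log size and percent used for every database.
DBCC SQLPERF(LOGSPACE);

-- Show why the log cannot be reused yet (e.g. LOG_BACKUP, ACTIVE_TRANSACTION).
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'YourDatabase';          -- placeholder name

-- Take a log backup; scheduling this every 15-30 minutes instead of every
-- 2 hours lets log space be reused much sooner.
BACKUP LOG YourDatabase
TO DISK = N'X:\Backups\YourDatabase_log.trn';   -- placeholder path
```

If log_reuse_wait_desc reports something other than LOG_BACKUP (a long-running transaction or replication, for example), more frequent backups alone will not solve the problem.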

Related

What if a block size limit in a certain blockchain is exceeded and a new block isn't yet created? [closed]

What happens if the block size limit in the Bitcoin blockchain (or any other blockchain) is reached and a new block hasn't yet been mined at that point? I have been thinking about this lately and I haven't found any article pertaining to it.
Thanks in advance.
In the context of Bitcoin, you are confusing the mempool size with the block size.
Transactions are stored in the mempool, which acts as (obviously) a pool; miners take transactions out of the pool, put them into a block, and try to mine it (find a nonce for it).
So: if a new block is not mined, new transactions pile up in the mempool. Miners may let the mempool grow for a while, but at some point they will not have the resources and will be forced to reject new transactions.
This would have a negative effect on the network, would drive down the price, and would disincentivise miners from trying further... slowly the blockchain would die,
unless a hard fork happens.

Tempdb becomes too big in size [closed]

I was executing a normal stored procedure, and afterwards my tempdb had grown to 80 GB of disk space when it was only 8 MB before. Why did this happen, and how can I overcome it?
It happened because you did something in the SP that needed tempdb: sorting under certain conditions, or the dreaded DISTINCT that needs to see all the data, for example.
You can overcome this by rewriting your SQL so that it does not rely on tempdb. The current size you can simply fix: redefine the file size and restart the server, and tempdb is recreated.
Depending on the database, by the way, I would NOT consider 80 GB excessive on a decent modern server. It depends on WHAT you do, obviously.
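If you want to see what is actually consuming tempdb and put its files back to a sensible size, here is a minimal sketch (the 1024 MB size is illustrative, and 'tempdev' is the default logical name of the primary tempdb data file; check sys.master_files for yours):

```sql
-- Break down how tempdb space is being used right now.
SELECT
    SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,
    SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
    SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb,
    SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_mb
FROM tempdb.sys.dm_db_file_space_usage;

-- Redefine the data file size; tempdb is rebuilt at this size the next time
-- the SQL Server service restarts.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 1024MB);
```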

Can we restrict our database to not autogrow? [closed]

Can we make our database (whatever its size) not auto-grow at all (data and log files)?
If we proceed with this choice, maybe we will face problems when the database becomes full during business hours.
Typically the way you prevent growth events from occurring during business hours is by pre-allocating the data and log files to a large enough size to minimize or completely eliminate auto-growth events in the first place. This may mean making the files larger than they need to be right now, but large enough to handle all of the data and/or your largest transactions across some time period x.
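For example, pre-sizing the files and setting a fixed growth increment looks roughly like this (the database name, logical file names, sizes, and increments are placeholders; query sys.master_files for your actual logical names):

```sql
-- Pre-allocate data and log files so auto-growth rarely (or never) fires
-- during business hours; change one property per MODIFY FILE statement.
ALTER DATABASE YourDatabase MODIFY FILE (NAME = YourDatabase_Data, SIZE = 200GB);
ALTER DATABASE YourDatabase MODIFY FILE (NAME = YourDatabase_Data, FILEGROWTH = 4GB);
ALTER DATABASE YourDatabase MODIFY FILE (NAME = YourDatabase_Log,  SIZE = 50GB);
ALTER DATABASE YourDatabase MODIFY FILE (NAME = YourDatabase_Log,  FILEGROWTH = 4GB);
```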
Other things you can do to minimize the impact of growth events:
- Balance the growth size so that growth events are rare, but still don't take a lot of time individually. You don't want the defaults of 10% and 1MB that come from the model database, but there is no one-size-fits-all answer for what your settings should be.
- Ensure you are in the right recovery model. If you don't need point-in-time recovery, put your database in SIMPLE. If you do, put it in FULL, but make sure you are taking frequent log backups (see the sketch after this list).
- Ensure you have instant file initialization enabled. This won't help with log files, but when your data file grows, it should be near instantaneous, up to a certain size (again, no one-size-fits-all here).
- Get off of slow storage.
Much more info here:
How do you clear the SQL Server transaction log?
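On the recovery-model point from the list above, checking and changing it is straightforward ('YourDatabase' is a placeholder; only switch to SIMPLE if the business does not need point-in-time recovery):

```sql
-- See which recovery model each database is using.
SELECT name, recovery_model_desc
FROM sys.databases;

-- Switch to SIMPLE only when point-in-time recovery is not required.
ALTER DATABASE YourDatabase SET RECOVERY SIMPLE;
```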

Database size much more than database stats [closed]

I am trying to export a database from PHPMyAdmin in SQL format.
I checked the database stats and they showed the size of the database as 285 MB. I started to download the export, but it has already crossed 500 MB with no sign of the download completing.
What could be the reason for this?
Exporting to SQL converts the data to text and adds additional text between fields and rows.
For example, a TINYINT takes 1 byte of storage, but as SQL text it takes 1-4 bytes ('0' to '-127').
While mysqldump exports data to a file, it doesn't operate on the raw binary content. It creates SQL statements that recreate your database from scratch and fill it with INSERTs.
So your dump contains the text of SQL statements (CREATE, INSERT, etc.), comments, connection-settings commands, and so on. All your binary data is represented as strings as well.
That's why your dump file is much bigger than the actual data size in the database.
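To make the size difference concrete, here is a small made-up example (the table and column names are purely illustrative): a row that occupies about 5 bytes in the table is written to the dump as a full text INSERT statement of 40+ characters, on top of the CREATE TABLE text, comments, and session-settings commands that mysqldump adds.

```sql
-- Hypothetical table: each row stores a TINYINT (1 byte) and an INT (4 bytes),
-- i.e. about 5 bytes of raw data per row.
CREATE TABLE status_log (
  flag    TINYINT NOT NULL,
  user_id INT     NOT NULL
);

-- In the dump, that 5-byte row becomes a full text statement like this one,
-- roughly 45 characters of SQL on its own.
INSERT INTO status_log VALUES (-127, 1000000);
```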

What is the estimated S3 file loss percentage [closed]

I am trying to communicate to a client the likelihood of losing files in S3. I would also like to know if it is possible to lose an entire bucket from S3. So, I would like to know the following:
Is there a documented expected file loss percentage in S3?
Is there a documented expected bucket loss percentage in S3?
When I say "lose" a file. I mean a file that is lost, damaged or otherwise unable to be pulled from S3. This "loss" is caused by a failure on S3. It is not caused by a tool or other user error.
Amazon doesn't give any kind of SLA or data loss guarantees for data stored on S3, but as far as I know nobody has ever lost any data on S3 aside from user/tool errors.
I would say the probability of user / coder error causing data loss is substantially greater than data loss through some kind of failure on S3. So you may wish to consider some kind of backup strategy to mitigate that.