I have an application that stores some images in IndexedDB.
In IE11 I run into the following problem:
When I create a database, the size of the Internet.edb file (C:\Users\%USERNAME%\AppData\Local\Microsoft\Internet Explorer\Indexed DB), where the database is stored, obviously increases, but it does not shrink when I delete the database, either programmatically
var request = window.indexedDB.deleteDatabase("databaseName");
or manually through "Tools-> Internet Options -> Settings (in Browsing history section) -> Caches and databases tab" and delete the entry.
As a result, the file reaches the maximum limit and I get a QuotaExceededError or a "Not enough storage is available to complete this operation" error message when I try to store a few hundred MB of data. I have already set the limit to its maximum (1 GB).
Any idea why Internet.edb size does not reduce after deleting the database?
Related
We are using Azure SQL Database with 32 GB of storage. I checked the storage utilization in the Azure portal and it shows 68% used (22 GB). However, when I try to create an index, I get the error below.
The database has reached its size quota. Partition or delete data,
drop indexes, or consult the documentation for possible resolutions.
Thanks.
Make sure the database does not have a maximum size set, using the following query. This size limit is independent of the size limit of the tier.
SELECT DATABASEPROPERTYEX('db1', 'MaxSizeInBytes') AS DatabaseDataMaxSizeInBytes
You can change that max size to the limit of the current tier the database is using.
ALTER DATABASE CURRENT MODIFY (MAXSIZE = 100 MB);
Also verify that the Azure SQL database is not reaching the maximum size for tempdb. Please see the documentation for the current limit for the service tier the database is using.
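As a quick check before digging into the docs, you can look at the current tempdb file sizes directly (standard catalog view; `size` is in 8 KB pages) and compare the total against the documented limit for your tier:

```sql
-- Current tempdb file sizes, in MB (size is stored in 8 KB pages)
SELECT name,
       type_desc,
       size * 8 / 1024 AS size_mb,
       max_size
FROM tempdb.sys.database_files;
```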
The Issue
I've been running a particularly large query, generating millions of records to be inserted into a table. Each time I run the query I get an error reporting that the transaction log file is full.
I've managed to get a test query to run with a reduced result set by using SELECT INTO instead of INSERT INTO a pre-built table. This reduced result set generated a 20 GB table with 838,978,560 rows.
When trying to INSERT into the pre-built table, I've also tried it with and without a clustered index. Both failed.
Server Settings
The server is running SQL Server 2005 (Full not Express).
The database being used is set to the SIMPLE recovery model, and there is around 100 GB of free space on the drive that holds the file.
The transaction log file is set to grow in 250 MB increments up to a maximum of 2,097,152 MB.
The log file appears to grow as expected until it reaches 4729 MB.
When the issue first appeared the file grew to a lower value; however, I've since reduced the size of other log files on the same server, and this appears to let this transaction log file grow further by the same amount as the reduction in the other files.
I've now run out of ideas of how to solve this. If anyone has any suggestion or insight into what to do it would be much appreciated.
First, you want to avoid auto-growth whenever possible; auto-growth events are HUGE performance killers. If you have 100GB available why not change the log file size to something like 20GB (just temporarily while you troubleshoot this). My policy has always been to use 90%+ of the disk space allocated for a specific MDF/NDF/LDF file. There's no reason not to.
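Pre-sizing the log is a one-line ALTER. A minimal sketch, assuming the database is called YourDb and its log's logical file name is YourDb_Log (look the real name up first, as shown):

```sql
-- Find the logical name of the log file (type_desc = 'LOG')
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;

-- Grow the log once, up front, instead of paying for repeated
-- auto-growth events mid-query (names here are assumptions)
ALTER DATABASE YourDb
MODIFY FILE (NAME = YourDb_Log, SIZE = 20GB);
```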
If you are using SIMPLE recovery, SQL Server is supposed to manage the task of returning unused space, but sometimes it does not do a great job. Before running your query, check the available free log space. You can do this by:
right-click the DB > go to Tasks > Shrink > Files.
change the type to "Log"
This will help you understand how much unused space you have. You can set "Reorganize pages before releasing unused space > Shrink file to" to 0. Going forward, you can also release unused space using CHECKPOINT; this may be worth including as a first step before your query runs.
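The same GUI steps can be scripted in T-SQL, which is handy if you want the checkpoint-then-shrink to run right before the big query. A sketch, assuming the log's logical file name is YourDb_Log (not a name from the question; check sys.database_files):

```sql
-- Log size and percent used, for every database on the instance
DBCC SQLPERF (LOGSPACE);

-- In SIMPLE recovery, a checkpoint allows the inactive log to be truncated
CHECKPOINT;

-- Release unused space back to the OS (0 = shrink to smallest possible)
DBCC SHRINKFILE (YourDb_Log, 0);
```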
First, sorry for my approximate English.
I'm a little lost with HSQLDB.
I need to save a large amount of data (3 GB+) to a local database in as little time as possible.
So I did the following:
CREATE CACHED TABLE ...; to store the data in the .data file
SET FILES LOG FALSE; to skip writing to the .log file and save time
SHUTDOWN COMPACT; to persist the records to the local disk
I know there are other parameters to tune to increase the .data file size and speed up data access, such as:
hsqldb.cache_scale=
hsqldb.cache_size_scale=
SET FILES NIO SIZE xxxx
But I don't know how to set them for large storage.
Thanks for your help.
When you use SET FILES LOG FALSE, data changes are not saved until you execute SHUTDOWN or CHECKPOINT.
The other parameters can be left to their default values. If you want to use more memory and gain some speed, you can multiply the default values of the parameters by 2 or 4.
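In HSQLDB 2.x the cache settings are available as SQL statements, so "multiply the defaults by 2 or 4" could look like the sketch below. The values shown are illustrative (roughly the documented defaults times four); check the HSQLDB Guide for your version before relying on them:

```sql
-- Illustrative values: approximately the defaults multiplied by 4
SET FILES CACHE ROWS 200000;   -- max rows of CACHED tables held in memory
SET FILES CACHE SIZE 40000;    -- max memory used by the row cache, in KB
SET FILES NIO SIZE 1024;       -- map up to this many MB of the .data file
```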
After deleting data from a SQL database, is the deleted data kept in the log file, or is it permanently deleted from the database?
Why does the log file grow in size when data is deleted from the database?
After shrinking the database, the file size is reduced.
Edit 1:
In your fifth point you describe why the log file grows, but after the DELETE command completes, why doesn't it free the disk space? The records remain in the log file. Is it possible to delete data without writing it to the log file? I deleted about 2 million rows and it grew the log file by about 16 GB of disk space.
Since, as you described, the log shrinks in size after a shrink operation, your database uses the SIMPLE recovery model.
And there is no copy of the deleted data left in the log file.
If you are asking how to recover that data: there is no conventional way.
If you are asking about data security and are worried about deleted data lingering in the DB: yes, some remains; see ghosted records. And obviously some data can be recovered by third-party tools from both database file types, data and log.
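If residual deleted data is the concern, SQL Server (2008 and later) ships a system procedure that overwrites ghost records and other residue left on freed pages; the database name here is a placeholder:

```sql
-- Overwrite residue (ghost records, etc.) on freed pages
EXEC sp_clean_db_free_space @dbname = N'YourDb';
```

Note this touches every free page, so it generates significant I/O on a large database.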
The log grows in size because it holds all the data being deleted until the DELETE command finishes: if the command fails, SQL Server must restore all the partially deleted data to honor the atomicity guarantee.
Additional answers:
No, it is not possible to delete data without writing it to the log file.
Once the log file has grown, it does not shrink automatically. To reduce the size of files in the DB, you have to perform shrink operations, which are strongly discouraged in a production environment.
Try deleting in smaller chunks; see the example.
Instead of deleting all the data at once like this:
DELETE FROM YourTable
delete in small chunks:
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM YourTable;
    IF @@ROWCOUNT = 0 BREAK;
END
I need to push a large SQL table from my local instance to SQL Azure. The transfer is a simple, 'clean' upload - simply push the data into a new, empty table.
The table is extremely large (~100 million rows) and consists only of GUIDs and other simple types (no timestamps or anything).
I created an SSIS package using the Data Import/Export Wizard in SSMS. The package works great.
The problem is when the package is run over a slow or intermittent connection. If the internet connection goes down halfway through, then there is no way to 'resume' the transfer.
What is the best approach to engineering an SSIS package to upload this data, in a resumable fashion? i.e. in case of connection failure, or to allow the job to be run only between specific time windows.
Normally, in a situation like that, I'd design the package to enumerate through batches of size N (1k rows, 10M rows, whatever) and log the last successfully transmitted batch to a processing table. However, with GUIDs you can't quite partition them out into buckets.
In this particular case, I would modify your data flow to look like Source -> Lookup -> Destination. In your lookup transformation, query the Azure side and only retrieve the keys (SELECT myGuid FROM myTable). Here, we're only going to be interested in rows that don't have a match in the lookup recordset as those are the ones pending transmission.
A full cache is going to cost about 1.5 GB (100M * 16 bytes) of memory, assuming the Azure side was fully populated, plus the associated data transfer costs. That cost will be less than truncating and re-transferring all the data, but I just want to make sure I called it out.
Just ORDER BY your GUID when uploading, and use MAX(guid) from the Azure side as your starting point when recovering from a failure or restart.
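A minimal sketch of that high-water-mark approach, reusing the placeholder names myTable and myGuid from the answer above. One caveat: SQL Server sorts uniqueidentifier by its own byte-group order, not the textual order, but that order is consistent between the local instance and Azure, which is all that matters here:

```sql
-- Run against the Azure side: the last GUID that arrived (NULL if empty)
DECLARE @last uniqueidentifier = (SELECT MAX(myGuid) FROM dbo.myTable);

-- Source query for the data flow: only rows past the high-water mark,
-- in the same order they will be compared on resume
SELECT *
FROM dbo.myTable
WHERE @last IS NULL OR myGuid > @last
ORDER BY myGuid;
```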