Setting JVM memory parameter does not prevent out-of-memory errors as HSQLDB in-memory database grows in size - jvm

I set the JVM memory (JRE parameter) size to 1024 MB; by default it is 256 MB. I inserted data into HSQLDB tables (size ~220 MB) and I am getting an out-of-memory error on a Windows 7 machine, even though I set the size to 1024 MB. Please let me know how to resolve this issue, as this database is about to move to a production site.
Any suggestion is greatly appreciated.

How do you know the size of the HSQLDB tables?
The size of the files that contain the database is not the same as the total size of the database's Java objects in memory. You can use CACHED tables for your largest tables to restrict the number of objects loaded into memory.
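As a sketch (the table and column names are hypothetical), a table can be created as CACHED, or an existing MEMORY table converted in place, so that only a subset of its rows is held in memory while the rest stays in the .data file:

```sql
-- Hypothetical table; CACHED tables keep only part of their rows in memory,
-- backed by the .data file on disk.
CREATE CACHED TABLE big_table (
    id      INTEGER PRIMARY KEY,
    payload VARCHAR(1024)
);

-- An existing MEMORY table can also be converted in place:
SET TABLE big_table TYPE CACHED;
```

The exact syntax should be checked against the HSQLDB version in use.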

Related

Azure Database cannot reduce the sizing

Azure Database cannot reduce the size from 750 GB to 500 GB.
Overall sizing, as checked in the Azure dashboard:
Used space is 248.29 GB.
Allocated space is 500.02 GB.
Maximum storage size is 750 GB.
The validation message when I try to reduce the size:
The storage size of your database cannot be smaller than the currently
allocated size. To reduce the database size, the database first needs
to reclaim unused space by running DBCC SHRINKDATABASE (XXX_Database
Name). This operation can impact performance while it is running and
may take several hours to complete.
What should I do?
Best regards.
To reduce the maximum database size, the new maximum cannot be smaller than the currently allocated space. So, as the message says, you first need to reclaim unused allocated space. To do that, you can run the following command:
-- Shrink database data space allocated.
DBCC SHRINKDATABASE (N'db1')
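To see how much unused allocated space there is before and after the shrink, a standard query over the current database's files can help (sizes are stored in 8 KB pages, so dividing by 128 gives MB):

```sql
-- Allocated vs. used space for each file of the current database.
SELECT name,
       size / 128.0                                     AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128.0          AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0 AS free_mb
FROM sys.database_files;
```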
For more details, please refer to the documentation.
I got this error via the CLI after also disabling read scale; the solution was to remove --max-size 250GB from the command:
az sql db update -g groupname -s servername -n dbname --edition GeneralPurpose --capacity 1 --max-size 250GB --family Gen5 --compute-model Serverless --no-wait

Approximate disk space consumption of rows on SQL Server

I'd like to understand what determines the size of a SQL Server 12 database. The .mdf file is 21.5 GB. Using the "Disk Usage by Top Tables" report in SQL Server Management Studio, I can see that 15.4 GB are used by the "Data" of one table. This table has 1,691 rows in 4 columns (int, varchar(512), varchar(512), image). I assume the image column is responsible for most of the consumption. But
Select (sum(datalength(<col1>)) + ... )/1024.0/1024.0 as MB From <Table>
only gives 328.9 MB.
What might be the reason behind this huge discrepancy?
Additional information:
For some rows the image column is updated regularly.
This is a screenshot of the report:
If we can trust it, indices or unused space should not be the cause.
Maybe you are using a lot of indexes per table; these all add up. Maybe your auto-growth settings are wrong.
The reason was a long-running transaction in another, unrelated database (!) on the same SQL Server instance. The read committed snapshot isolation level filled the version store. Disconnecting the other application reduced the space usage to a sensible amount.
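If you suspect the same cause, the version store size and the longest-running snapshot transactions can be inspected with DMVs available since SQL Server 2005:

```sql
-- Total space reserved by the version store (8 KB pages -> MB).
SELECT SUM(version_store_reserved_page_count) / 128.0 AS version_store_mb
FROM sys.dm_db_file_space_usage;

-- Longest-running snapshot transactions, which prevent version-store cleanup.
SELECT TOP (5) transaction_id, elapsed_time_seconds
FROM sys.dm_tran_active_snapshot_database_transactions
ORDER BY elapsed_time_seconds DESC;
```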

Why will my SQL Transaction log file not auto-grow?

The Issue
I've been running a particularly large query, generating millions of records to be inserted into a table. Each time I run the query I get an error reporting that the transaction log file is full.
I've managed to get a test query to run with a reduced set of results by using SELECT INTO instead of INSERT INTO a pre-built table. This reduced result set generated a 20 GB table of 838,978,560 rows.
When trying to INSERT into the pre-built table, I've also tried it both with and without a clustered index. Both failed.
Server Settings
The server is running SQL Server 2005 (Full not Express).
The database being used is set to SIMPLE recovery, and there is space available (around 100 GB) on the drive the file sits on.
The transaction log file is set to grow in 250 MB increments up to a maximum of 2,097,152 MB.
The log file appears to grow as expected until it reaches 4729 MB.
When the issue first appeared, the file stopped growing at a lower value; however, I've since reduced the size of other log files on the same server, and this appears to let this transaction log file grow further, by the same amount as the reduction in the other files.
I've now run out of ideas of how to solve this. If anyone has any suggestion or insight into what to do it would be much appreciated.
First, you want to avoid auto-growth whenever possible; auto-growth events are HUGE performance killers. If you have 100 GB available, why not change the log file size to something like 20 GB (just temporarily, while you troubleshoot this)? My policy has always been to use 90%+ of the disk space allocated for a specific MDF/NDF/LDF file. There's no reason not to.
If you are using SIMPLE recovery, SQL Server is supposed to manage the task of reusing log space, but sometimes it does not do a great job. Before running your query, check the available free log space. You can do this by:
right-click the DB > go to Tasks > Shrink > Files.
change the type to "Log"
This will help you understand how much unused space you have. You can set "Reorganize pages before releasing unused space > Shrink file to" to 0. Moving forward, you can also let the inactive part of the log be reused by issuing CHECKPOINT; this may be something to include as a first step before your query runs.
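The same check is available without the GUI; for example:

```sql
-- Log size and percentage used for every database on the instance.
DBCC SQLPERF (LOGSPACE);

-- In SIMPLE recovery, a manual checkpoint marks the inactive part of
-- the log as reusable:
CHECKPOINT;
```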

Configuration HSQLDB big storage

First, sorry for my approximate English.
I'm a little lost with HSQLDB.
I need to save a large amount of data (3 GB+) to a local database, in as little time as possible.
So I did the following:
CREATE CACHED TABLE ...; to save the data in the .data file
SET FILES LOG FALSE; to skip writing the .log file and save time
SHUTDOWN COMPACT; to persist the records to local disk
I know there are other parameters to tune to increase the .data file size and speed up data access, such as:
hsqldb.cache_scale=
hsqldb.cache_size_scale=
SET FILES NIO SIZE xxxx
But I don't know how to set these for large storage.
Thanks for your help.
When you use SET FILES LOG FALSE, data changes are not saved until you execute SHUTDOWN or CHECKPOINT.
The other parameters can be left at their default values. If you want to use more memory and gain some speed, you can multiply the defaults by 2 or 4.
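As a sketch, assuming HSQLDB 2.x statement names (the values are illustrative multiples of the documented defaults, not tuned recommendations; check them against the HSQLDB version in use):

```sql
-- Default cache memory limit is 10000 (in KB units); quadruple it.
SET FILES CACHE SIZE 40000;
-- Default maximum number of cached rows is 50000; quadruple it.
SET FILES CACHE ROWS 200000;
-- Memory-map up to 2 GB of the .data file for faster access.
SET FILES NIO SIZE 2048;
```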

Moving data from one table to another in Sql Server 2005

I am moving around 10 million rows from one table to another in SQL Server 2005. The purpose of the transfer is to take the old data offline.
After some time it throws the error: "The LOG FILE FOR DATABASE 'tempdb' IS FULL.".
My tempdb and its log are placed on a drive (other than C:) which has around 200 GB free. My tempdb size is set to 25 GB.
As I understand it, I will have to increase the size of tempdb from 25 GB to 50 GB and set the log file auto-growth option to "unrestricted file growth (MB)".
Please let me know of other factors to consider. I cannot experiment much, as I am working on a production database, so please let me know whether these changes will have any other impact.
Thanks in advance.
Thanks in Advance.
You know the solution. It seems you are just moving part of the data to make your queries faster.
I agree with your solution: increase the size of tempdb from 25 GB to 50 GB and set the log file auto-growth option to "unrestricted file growth (MB)".
Go ahead.
My guess is that you're trying to move all of the data in a single batch; can you break it up into smaller batches, committing fewer rows per insert? Also, as noted in the comments, you may be able to set your destination database to the SIMPLE or BULK_LOGGED recovery model.
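A minimal sketch of batched copying, valid on SQL Server 2005 (table and column names are hypothetical, and the batch size is arbitrary):

```sql
-- Copy rows in batches of 100k so each transaction, and therefore the log
-- space it needs, stays small. Table and column names are hypothetical.
DECLARE @batch INT;
SET @batch = 100000;

WHILE 1 = 1
BEGIN
    INSERT INTO archive_table (id, payload)
    SELECT TOP (@batch) s.id, s.payload
    FROM   source_table s
    WHERE  NOT EXISTS (SELECT 1 FROM archive_table a WHERE a.id = s.id);

    IF @@ROWCOUNT = 0 BREAK;  -- nothing left to copy

    -- Remove the rows that have just been archived.
    DELETE s
    FROM   source_table s
    WHERE  EXISTS (SELECT 1 FROM archive_table a WHERE a.id = s.id);
    -- In SIMPLE recovery, log space is reusable after each batch commits.
END
```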
Why are you using the log file at all? Copy your data (data file and log file), then set the recovery model to SIMPLE and run the transfer again.