Mnesia: DAT file huge, remove old records to resize it

I'm trying to reduce the size of the DAT file for a Mnesia table (disc_copies), but I haven't found a solution yet. I can't truncate the file or create a new one; I have to keep working with this table while the system is running.
The approach I considered was to delete the oldest records, but this isn't enough: the DAT file stays the same size until I restart the node.
Is there a way to force the sync to disk without restarting the entire node?


Can I copy data table folders in QuestDb to another instance?

I am running QuestDB on a production server that constantly writes data to a table, 24x7. The table is partitioned by day.
I want to copy the data to another instance and update it there incrementally, since data from old days never changes. Sometimes the copy works, but sometimes the data gets corrupted, reading from the second instance fails, and I have to retry copying all the table data, which is huge and takes a lot of time.
Is there a way to back up / restore QuestDB without interrupting continuous data ingestion?
QuestDB appends data in the following sequence:
1. Append to column files inside the partition directory
2. Append to symbol files inside the root table directory
3. Mark the transaction as committed in the _txn file
There is no ordering between steps 1 and 2, but step 3 always happens last. To incrementally copy data to another box, you should copy in the opposite order:
1. Copy the _txn file first
2. Copy the root symbol files
3. Copy the partition directories
Do this while your slave QuestDB server is down; then on startup the table should have data up to the point when you started copying the _txn file.
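Separately from the manual copy procedure above, recent QuestDB versions also ship a SQL BACKUP statement. It takes a full (not incremental) copy, so it may not suit a huge table, but it is worth knowing about. A minimal sketch, assuming cairo.sql.backup.root is set in server.conf and my_table is a placeholder name:

-- Full backup of one table into the configured backup root
BACKUP TABLE my_table;

-- Or back up every table at once
BACKUP DATABASE;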

Index corruption on large table

I have a large table with around 123 million records in CrateDB. I noticed that during a snapshot to S3 (or indeed to the file system) an index corruption occurs on each shard, which causes a partial snapshot. Once Crate is restarted, the table doesn't load because of the corrupted index. I have to remove the corrupted file and a file lock from the index folder before the table heals. I have tried to recreate the table by moving everything to another table and swapping (using the ALTER CLUSTER SWAP TABLE command), but the corruption still occurs on the new table as well.
Is there anything else I can try to fully snapshot the cluster and avoid corruption?
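For context, a minimal sketch of the snapshot flow in question; the repository, bucket, and snapshot names are illustrative:

-- One-time setup: register an S3 repository for snapshots
CREATE REPOSITORY backups TYPE s3 WITH (bucket = 'my-crate-backups');

-- Snapshot every table into that repository
CREATE SNAPSHOT backups.snap_001 ALL WITH (wait_for_completion = true);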
The Crate team found a bug: https://github.com/crate/crate/pull/9318
Resolved in CrateDB 4.0.8.

Undo Log error: No more space left over in system tablespace for allocating UNDO log pages

I am importing a CSV file into a table in a MySQL database using the LOAD DATA INFILE command. The CSV file is pretty big (around 10 GB). In the middle of the import, I get the following error:
Undo Log error: No more space left over in system tablespace for allocating UNDO log pages. Please add new data file to the tablespace or check if filesystem is full or enable auto-extension for the tablespace
What does this error mean?
MySQL uses the UNDO log to roll back changes; it is also used for consistency (consistent reads). With a large import, that log can grow quickly and fill up, and then you get this error. The point of the UNDO log is to be able to undo the last command, much like making changes to an image in a paint program and then pressing Ctrl-Z.
To take an undo tablespace out of active use so that it can be truncated, you can mark it as inactive:
ALTER UNDO TABLESPACE tablespace_name SET INACTIVE;
You can also drop the undo tablespace altogether (not recommended) or enable auto-truncation, which may be slow; auto-truncation reclaims space as required.
For more information, see the MySQL documentation on undo tablespaces.
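A minimal sketch of giving the undo log more room, assuming MySQL 8.0.14+ (the tablespace and file names are illustrative; on older versions the undo log lives in the system tablespace and is sized via innodb_data_file_path):

-- Add an extra undo tablespace so a large import has more undo room
CREATE UNDO TABLESPACE undo_003 ADD DATAFILE 'undo_003.ibu';

-- Let InnoDB truncate undo tablespaces that grow past a threshold
SET GLOBAL innodb_undo_log_truncate = ON;
SET GLOBAL innodb_max_undo_log_size = 2147483648;  -- 2 GB

-- Inspect the undo tablespaces that exist
SELECT TABLESPACE_NAME, FILE_NAME
FROM INFORMATION_SCHEMA.FILES
WHERE FILE_TYPE = 'UNDO LOG';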

Why will my SQL Transaction log file not auto-grow?

The Issue
I've been running a particularly large query, generating millions of records to be inserted into a table. Each time I run the query I get an error reporting that the transaction log file is full.
I've managed to get a test query to run with a reduced set of results by using SELECT INTO instead of INSERT INTO a pre-built table. This reduced set of results generated a 20 GB table with 838,978,560 rows.
When trying to INSERT into the pre-built table, I've also tried it with and without a clustered index. Both failed.
Server Settings
The server is running SQL Server 2005 (Full not Express).
The database is set to the SIMPLE recovery model, and there is space available (around 100 GB) on the drive the file sits on.
The transaction log file is set to grow in 250 MB increments up to a maximum of 2,097,152 MB.
The log file appears to grow as expected until it reaches 4729 MB.
When the issue first appeared, the file stopped growing at a lower value; however, after I reduced the size of other log files on the same server, this transaction log file was able to grow further, by about the same amount as the reduction in the other files.
I've now run out of ideas for how to solve this. If anyone has any suggestions or insight into what to do, it would be much appreciated.
First, you want to avoid auto-growth whenever possible; auto-growth events are huge performance killers. If you have 100 GB available, why not set the log file size to something like 20 GB (just temporarily, while you troubleshoot this)? My policy has always been to use 90%+ of the disk space allocated for a specific MDF/NDF/LDF file; there's no reason not to.
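For example, pre-sizing the log might look like this (the database and logical log file names are illustrative):

-- Grow the log once up front, instead of paying for many 250 MB auto-growth events
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, SIZE = 20480MB);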
If you are using SIMPLE recovery, SQL Server is supposed to manage the task of reclaiming unused log space, but sometimes it does not do a great job. Before running your query, check the available free log space. You can do this by:
Right-clicking the DB > Tasks > Shrink > Files
Changing the type to "Log"
This will show you how much unused space you have. You can set "Reorganize pages before releasing unused space > Shrink File" to 0. Moving forward, you can also release unused space using CHECKPOINT; this may be something to include as a first step before your query runs.
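The same checks can be scripted; a short sketch (the logical log file name is illustrative):

-- Log size and percent used for every database on the instance
DBCC SQLPERF(LOGSPACE);

-- In SIMPLE recovery, a checkpoint allows inactive log space to be reused
CHECKPOINT;

-- Release unused space at the end of the file (what the Shrink File dialog does)
DBCC SHRINKFILE (MyDatabase_log, 0);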

SQL: deleted data stored in log file or permanently deleted from database?

After deleting data from a SQL database, is the deleted data stored in the log file, or is it permanently deleted from the database?
Why does the log file grow in size when deleting data from the database?
After shrinking the database, the file size is reduced.
Edit 1:
In your fifth point you describe why the log file grows, but after the DELETE command completes, why doesn't it free the disk space? The records remain in the log file as they were. Is it possible to delete data without storing it in the log file? I deleted roughly 2 million records, and the log grew by about 16 GB of disk space.
Since, as you describe, the log shrinks in size after a shrink operation, your database uses the SIMPLE recovery model.
In that case there is no copy of the deleted data left in the log file.
If you're asking how to recover that data: there is no conventional way.
If you're asking about data security and are worried about deleted data remaining somewhere in the DB: yes, it can remain; see ghosted records. And obviously some data can be recovered by third-party tools from both DB file types, data and log.
The log grows in size because it holds all the data being deleted until the DELETE command finishes; if the command fails, SQL Server must restore all the partially deleted data to honor the atomicity guarantee.
Additional answers:
No, it is not possible to delete data without writing it to the log file.
Once the log file has grown, it does not shrink automatically. To reduce the size of the files in the DB, you would have to perform shrink operations, which is strongly discouraged in a production environment.
Try deleting in smaller chunks instead; see the example below.
instead of deleting all the data like this:
DELETE FROM YourTable
delete in small chunks:
WHILE 1 = 1
BEGIN
    -- Each small batch commits separately, keeping the log from ballooning
    DELETE TOP (10000) FROM YourTable;
    -- @@ROWCOUNT is 0 once there is nothing left to delete
    IF @@ROWCOUNT = 0 BREAK;
END