Does HSQLDB have some mechanism to save in-memory data to a file? - hsqldb

Does HSQLDB have some mechanism for saving in-memory data to a file?
As far as I know, once the server is shut down, all in-memory data becomes inaccessible, so I want to save all in-memory data to a file.
Unfortunately, I can't use the BACKUP mechanism, because it cannot be applied to in-memory data.

HSQLDB databases are of different types. The all-in-memory databases do not store the data to disk. These databases have URLs in the form jdbc:hsqldb:mem:<name>.
If your database URL is in the form jdbc:hsqldb:file:<file path> and your tables are the default MEMORY tables, the data is all in memory but the changes are written to a set of disk files.
With all types of database, including all-in-memory, you can use the SQL statement SCRIPT <file path> to save the full database to a file. If you save the data to a file with the .script extension, you can open the file as a file database.
When you run a server, the URLs are used without the jdbc:hsqldb prefix, for example server.database.0=file:C:/myfile
See the guide here: http://hsqldb.org/doc/2.0/guide/running-chapt.html#running_db-sect
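For illustration, here is a minimal Java sketch of that approach, assuming the HSQLDB driver is on the classpath; the table, data and file path are made up:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SaveInMemoryDb {
    public static void main(String[] args) throws Exception {
        // Connect to an all-in-memory HSQLDB database (nothing is written to disk).
        try (Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:mydb", "SA", "");
             Statement st = conn.createStatement()) {

            // Hypothetical table and row, just so there is something to save.
            st.execute("CREATE TABLE t (id INT PRIMARY KEY, name VARCHAR(50))");
            st.execute("INSERT INTO t VALUES (1, 'example')");

            // Write the whole database (DDL plus data) to a script file.
            st.execute("SCRIPT '/tmp/mydb.script'");
        }
    }
}

As noted above, the resulting .script file can then be opened as a file database, e.g. with a URL like jdbc:hsqldb:file:/tmp/mydb.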

There is an SQL command for that. Try this:
SCRIPT '/tmp/data.sql'
See the HSQLDB documentation for details.

Related

How to store files in sqlite3 database

Is there a way to store .txt or .pdf files within a table of my sqlite3 database?
Yes, it is possible to store files in SQLite.
But I suggest you don't do that, because if you store files in the database it becomes heavy and queries get slow.
The more manageable approach is to save the file on the file system and store only its location in the database.
Yes, you can store files in a database in two ways:
Store the file as binary data (BLOBs).
Store the file on the file system and keep only its path in the database.
But both have some disadvantages.
With binary/BLOB storage, the database gets heavy and query performance suffers.
With only a file path in the database, when you back up and restore the database somewhere else you also have to move the physical files, and anybody can delete a file directly from the folder.
If your requirements are small, go with binary; otherwise choose for yourself.
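As an illustration of the BLOB option, here is a minimal Java sketch; it assumes the xerial sqlite-jdbc driver on the classpath, and the table, column and file names are made up:

import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class SqliteBlobExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical input file to be stored in the database.
        byte[] content = Files.readAllBytes(Paths.get("report.pdf"));

        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:files.db")) {
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE IF NOT EXISTS documents (id INTEGER PRIMARY KEY, name TEXT, data BLOB)");
            }

            // Store the file contents as a BLOB.
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO documents (name, data) VALUES (?, ?)")) {
                ps.setString(1, "report.pdf");
                ps.setBytes(2, content);
                ps.executeUpdate();
            }

            // Read it back and write it out to disk again.
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT data FROM documents WHERE name = ?")) {
                ps.setString(1, "report.pdf");
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        Files.write(Paths.get("report_copy.pdf"), rs.getBytes("data"));
                    }
                }
            }
        }
    }
}

For the path-only option you would instead copy the file to a known directory and insert just its path into a TEXT column.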

Dump HANA Database using "SAP HANA Web-based Development Workbench"

I'd like to get dump of a HANA DB using the browser based "SAP HANA Web-based Development Workbench".
I'm especially interested in exporting:
the structure of the tables including primary and foreign key constraints
the data inside the tables
Once I log into the "SAP HANA Web-based Development Workbench", I'm able to open the "catalog" and execute SQL commands like e.g. SELECT * FROM MY_TABLE;. This allows me to download the data from one table as a CSV. But is there also something similar to pg_dump in postgres, a command that exports both table structure and data as for example a tar-compressed .sql file?
You can right-click on the database which you would like to back up and select Export.
Be sure to activate the checkbox Including data. I am not sure if it is also necessary to check the Including dependencies checkbox.
You get a zip file which contains the SQL commands to create the tables and separate data files which contain the content of the tables. Each table is saved in a separate directory.
The export command seems relevant.
The server will generate .sql files for structure and .csv for data.
If the database is a managed service such as HANA Cloud, you don't have access to the filesystem and should dump the files to an S3 bucket or an Azure blob store.
Otherwise, just grab the files from the server box.
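A minimal sketch of running that export command over JDBC follows (the same statement can also be pasted into the catalog's SQL console); the SAP HANA JDBC driver (ngdbc.jar), the exact EXPORT ... AS CSV INTO syntax shown, and the host, credentials, schema and path are all assumptions here:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HanaExportSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical host, credentials, schema and server-side target directory.
        String url = "jdbc:sap://hana-host:30015/";
        try (Connection conn = DriverManager.getConnection(url, "MY_USER", "MY_PASSWORD");
             Statement st = conn.createStatement()) {
            // Writes table definitions (.sql) and data (.csv) for every object
            // in the schema to a directory on the HANA server's filesystem.
            st.execute("EXPORT \"MY_SCHEMA\".\"*\" AS CSV INTO '/tmp/my_schema_dump' WITH REPLACE");
        }
    }
}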

Can you use the attach database command with a db stored remotely?

I want to store a database in different files and have each of these files stored in a different computer. Then from any of these computers I want to be able to access the other databases and attach them together with the command ATTACH DATABASE to create a single local database with all the information.
Is this possible or does ATTACH DATABASE require that the databases are stored locally?
You can attach a file to SQLite as long as it's accessible through the filesystem; both Unix and Windows can mount remote files that way.
Writing anything to such remotely mounted database files is strongly discouraged. It should be OK for reading, though.
No idea about MySQL, but probably the same or stricter limitations apply.
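As a small sketch of what that looks like in practice (Java with the xerial sqlite-jdbc driver; the mount point, file and table names are made up), the remote file is attached exactly like a local one once it is reachable through a mount:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class AttachRemoteDb {
    public static void main(String[] args) throws Exception {
        // Open the local database file.
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:/data/local.db");
             Statement st = conn.createStatement()) {

            // The "remote" file is only reachable because it is mounted into the
            // local filesystem (e.g. via NFS or an SMB share).
            st.execute("ATTACH DATABASE '/mnt/other-host/share/other.db' AS remote");

            // Query across both databases; per the note above, treat the attached
            // file as read-only.
            try (ResultSet rs = st.executeQuery(
                    "SELECT l.id, r.name FROM main.items l JOIN remote.items r ON r.id = l.id")) {
                while (rs.next()) {
                    System.out.println(rs.getInt("id") + " " + rs.getString("name"));
                }
            }
        }
    }
}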

Import large table to azure sql database

I want to transfer one table from my SQL Server instance database to a newly created database on Azure. The problem is that the insert script is 60 GB.
I know that one approach is to create a backup file, load it into storage, and then run an import on Azure. But the problem is that when I try to do so, I get an error while importing on Azure:
Could not load package.
File contains corrupted data.
File contains corrupted data.
The second problem is that using this approach I can't copy only one table; the whole database has to be in the backup file.
So is there any other way to perform such an operation? What is the best solution? And if the backup is the best option, then why do I get this error?
You can use tools out there that make this very easy (point and click). If it's a one-time thing, you can use virtually any tool (Red Gate, BlueSyntax...). You always have BCP as well. Most of these approaches will allow you to back up or restore a single table.
If you need something more repeatable, you should consider using a backup API or code this yourself using the SQLBulkCopy class.
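The SQLBulkCopy class mentioned here is a .NET API; the Microsoft JDBC driver ships a comparable SQLServerBulkCopy class. A rough Java sketch of a single-table copy with it, with made-up connection strings and table name (the mssql-jdbc driver must be on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import com.microsoft.sqlserver.jdbc.SQLServerBulkCopy;

public class CopyTableToAzure {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection strings for the on-premises source and the Azure SQL target.
        String sourceUrl = "jdbc:sqlserver://localhost;databaseName=SourceDb;user=sa;password=secret";
        String targetUrl = "jdbc:sqlserver://myserver.database.windows.net:1433;"
                + "databaseName=TargetDb;user=admin@myserver;password=secret;encrypt=true";

        try (Connection source = DriverManager.getConnection(sourceUrl);
             Connection target = DriverManager.getConnection(targetUrl);
             Statement st = source.createStatement();
             ResultSet rs = st.executeQuery("SELECT * FROM dbo.BigTable")) {

            SQLServerBulkCopy bulkCopy = new SQLServerBulkCopy(target);
            try {
                bulkCopy.setDestinationTableName("dbo.BigTable");
                // Streams rows in bulk instead of replaying a 60 GB script of single INSERTs.
                bulkCopy.writeToServer(rs);
            } finally {
                bulkCopy.close();
            }
        }
    }
}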
I don't know that I'd ever try to execute a 60 GB script. Scripts generally do single-row inserts, which aren't very optimized. Have you explored the various bulk import/export options?
http://msdn.microsoft.com/en-us/library/ms175937.aspx/css
http://msdn.microsoft.com/en-us/library/ms188609.aspx/css
If this is a one-time load, using an IaaS VM to do the import into the SQL Azure database might be a good alternative. The data file, once exported, could be compressed/zipped and uploaded to blob storage. Then pull that file back out of storage into your VM so you can operate on it.
Have you tried using BCP in the command prompt?
As explained here: Bulk Insert Azure SQL.
You basically create a text file with all your table data in it and bulk copy it to your Azure SQL database by using the BCP command in the command prompt.

Storing files in SQL server vs something like Amazon S3

What's the advantage/disadvantage of storing files as a byte array in a SQL table versus using something like Amazon S3 to store them? What advantage does S3 have that makes it worth using instead of SQL?
Pros for storing files in the database:
transactional consistency
security (assuming you need it and that your database isn't wide open anyway)
Cons for storing files in the database:
much larger database files + backups (which can be costly if you are hosting on someone else's storage)
much more difficult to debug (you can't say "SELECT doc FROM table" in Management Studio and have Word pop up)
more difficult to present the documents to users (and allow them to upload) - instead of just presenting a link to a file on the file system, you must build an app that takes the file and stores it in the database, and pulls the file from the database to present it to the user.
typically, database file storage and I/O are charged at a much higher premium than flat file storage