I was wondering: what is the simplest and easiest way to back up and restore a database in SQLite 3? I have read around and there are lots of articles detailing methods for complicated situations, but I am struggling to find a basic procedure.
I have one simple database on a site which is basically a news reel of a company's recent activities. The site is just about to be deployed and will have new posts added on a roughly daily basis. I am hoping to write a number of posts before the site goes online, then upload the database onto the live server. From then on, new posts will be added online but it would be nice to have a backup in case something goes wrong.
So, essentially my question is:
Is there a simple way to back up a database in SQLite3 and also to upload a database? I am aware that I could possibly use seeds as a way to upload the data initially, but ideally I would rather just copy the development database (if possible...) and upload it onto the production server.
Apologies for my ignorance...
I would read the backup documentation here. There are some potential risks in doing file copies, but especially for the initial launch, this approach would be fine. I have done this on a couple of low traffic sites for a number of years and never run into any issues.
The nice thing about sqlite3 is that it's a file-based database exclusively. So long as you can prevent an application from using the database for a bit, backing up and restoring is as simple as copying the database file itself.
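For example, the sqlite3 command-line shell has a .backup command that takes a consistent snapshot even while the application is running. A minimal sketch (the file names are placeholders):
# take a consistent snapshot of the live database
sqlite3 production.db ".backup 'backup.db'"
# restoring (with the application stopped) is just copying the file back
cp backup.db production.db
Uploading the development database for the initial launch is the same idea: stop anything that writes to it, then copy the file to the live server.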
I have very little database management experience; I took a single class as an undergrad. I wanted to get others' input on the best way to set up the database.
I have developed a Docker application (web scraping, PostGIS database). The web scraper scrapes multiple websites every day, then uploads to the database, checking for duplicates before uploading.
However, I don't want the research assistants to be able to change things in the original tables, since a lot of the web scraper depends on the structure of those tables. I gave them SELECT access, but I want them to be able to share their data on the database, as this is a collaborative project.
My original thought was to create a new, empty database they have full permissions on, with only SELECT access to the webscraper database. I don't know if this is the best way to do it.
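Roughly, this is what I had in mind (a sketch in PostgreSQL terms; the role and database names are made up):
# read-only access to the webscraper database for the assistants
psql -d scraper_db -c "GRANT USAGE ON SCHEMA public TO assistants;"
psql -d scraper_db -c "GRANT SELECT ON ALL TABLES IN SCHEMA public TO assistants;"
# a separate, empty database they fully own for their own work
psql -c "CREATE DATABASE assistants_db OWNER assistants;"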
What are your thoughts?
Also to note: this is a contract job for a university project under a grant, so I won't be maintaining the database after the contract ends. The project also isn't big enough to hire a person with Docker & database experience just to maintain the database, so I am trying to bulletproof this as much as possible.
Here is the use-case: we need to back up some of the tables from a client server, copy them to our servers, restore them, then run some queries using ODBC.
I managed to do this process for the entire database by using probkup for backup, prorest for restore and proserve to make it accessible for SQL queries.
However, some of the databases are big (> 8 GB), so we are looking for a solution that backs up only the tables we need. I couldn't find anything in the probkup documentation about how this can be done.
Progress only supports full database backups.
To get the effect that you are looking for you could dump (export) the tables that you want and then load them into an empty database.
"proutil dump" and "proutil load" are where you want to start digging.
The details will vary depending on exactly what you want to do and what resources and capabilities you have available to you.
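A rough sketch of the dump/load cycle with proutil (untested; the table names here are made up and the exact syntax varies by version, so check the docs):
# dump just the tables you need from the source database
proutil sourcedb -C dump customer /backup/dir
proutil sourcedb -C dump order /backup/dir
# load them into an empty target database, then rebuild the indexes
proutil targetdb -C load /backup/dir/customer.bd
proutil targetdb -C load /backup/dir/order.bd
proutil targetdb -C idxbuild all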
Another option would be to replicate the tables in question to a partial database. Progress has a product called "pro2" that can help with that. It is usually pointed at SQL targets but you could also point it at a Progress database.
Or, if you have programming skills, you could put together a solution using replication triggers (under the covers that's what pro2 does...)
probkup and prorest are block-level programs and can't do a backup or restore by table.
To do what you're asking for, you'll need to dump the data from the source db's tables and then load it into the target db.
If your objective is simply to maintain a copy of the DB, you might also try incremental backups. Depending upon your situation, that might speed things up a bit.
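A sketch of what an incremental cycle might look like (the paths are placeholders; check the probkup docs for your version):
# one full backup, then smaller incrementals in between
probkup online sourcedb /backups/full.bak
probkup online sourcedb incremental /backups/incr1.bak
# restoring: the full backup first, then each incremental in order
prorest copydb /backups/full.bak
prorest copydb /backups/incr1.bak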
Other options include various forms of DB replication, which allow you to keep real- or near-real-time copies of your database.
OpenEdge Replication. With the correct license, you can do query-only access on the replication target, which is good for reporting and analysis.
Third-party replication products. These can be more flexible in terms of both target DBs and limiting the tables to be replicated.
Home-grown replication (by copying and applying AI files). This is not terribly complicated, but you have to factor the cost of doing the work and maintaining the system. There are some scripts out there that can get you started.
Or, as Tom said, you can get clever with replication via triggers.
We are not hosting our databases. Right now, one person manually creates a .bak file from the production server, and the .bak is then copied to each developer's PC. Is there a better approach that would make this process easier? I am working on a build project for our team right now, and I am thinking about adding the .bak file to SVN so each person has the correct local version. I tried generating a SQL script, but it has no data, just the schema.
Developers can't share a single dev database?
Adding the .bak file to SVN sounds bad. That's going to keep every version of it forever - you'd be better off (in most cases) leaving it on a network share visible by all developers and letting them copy it down.
You might want to use SSIS packages to let developers make ad hoc copies of production.
You might also be interested in the Data Publishing Wizard, an open source project that lets you script databases with their data. But I'd lean towards SSIS if developers need their own copy of the database.
If the production server has online connectivity to your site you can try the method called "log shipping".
This entails creating a baseline copy of your production database, then taking chunks of the transaction log written on the production server and applying the actions contained in the log chunks to your copy. This ensures that, after a certain delay, your backup database will be in the same state as the production database.
Detailed information can be found here: http://msdn.microsoft.com/en-us/library/ms187103.aspx
As you mentioned SQL 2008 among the tags: as far as I remember, SQL 2008 has some built-in automation to set this up.
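Under the hood, the repeated step amounts to something like this, run on a schedule (a simplified sketch; the database name and path are placeholders):
-- on the production server: back up the transaction log
BACKUP LOG [mydb] TO DISK = N'\\fileserver\logs\mydb_001.trn'
-- on the standby server: apply it, leaving the copy ready for further logs
RESTORE LOG [mydb] FROM DISK = N'\\fileserver\logs\mydb_001.trn' WITH NORECOVERY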
You can create a scheduled backup and restore.
You don't have to back up to a developer PC, because SQL Server has its own backup folder you can use.
You can also have a restore script generated for each PC from one location, if developers want to keep the database on their local systems.
RESTORE DATABASE [xxxdb] FROM
DISK = N'\\xxxx\xxx\xxx\xxxx.bak'
WITH FILE = 1, NOUNLOAD, REPLACE, STATS = 10
GO
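The backup side can be scheduled the same way (a sketch; [xxxdb] and the path are the same placeholders as above):
BACKUP DATABASE [xxxdb] TO
DISK = N'\\xxxx\xxx\xxx\xxxx.bak'
WITH INIT, STATS = 10
GO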
Check out SQL Source Control from RedGate; it can be used to keep schema and data in sync with a source control repository (the docs say it supports SVN). It supports the database on a centrally deployed server, or on many developer machines as well.
Scripting out the data probably won't be a fun time for everyone, depending on how much data there is, but you can also select which tables you're going to script (like lookups) and populate any larger business entity tables using SSIS (or a data generator for testing).
I am developing an Adobe AIR application which stores data locally using a SQLite database.
At any time, I want the end user to synchronize his/her local data to a central MySQL database.
Any tips, advice for getting this right?
Performance and stability are key (besides security ;))
I can think of a couple of ways:
Periodically, dump your MySQL database and create a new SQLite database from the dump. You can then serve the SQLite database (a SQLite database is contained in a single file) for your users' clients to download and replace the current database.
Create a diff script that generates the necessary statements to bring the current database up to date (various INSERT, UPDATE and DELETE statements). To do this, you must record the time of each change in your database (the creation and update time for each row) and keep a history of deleted rows.
The user's client will download the diff file (a text file of the various statements) and apply it to the local database.
Both approaches have their own pros and cons. By dumping the entire database, you make sure all the data gets through. It is also much easier than creating the diff; however, it might put more load on the server, depending on how often the database gets updated between dumps.
On the other hand, diffing between the databases will give you just the data that changed (hopefully), but it is more open to logical errors. It will incur additional overhead on the client as well, since it will have to create/update all the necessary records instead of just copying a file.
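For the diff approach, mysqldump can generate most of the statements for you. A sketch (it assumes each synced table has an updated_at column; the dump output usually needs some massaging before SQLite will accept it):
# on the server: rows created or changed since the client's last sync,
# written as REPLACE statements so they update-or-insert on the SQLite side
mysqldump --no-create-info --replace --skip-add-locks \
  --where="updated_at > '2009-01-01 00:00:00'" appdb news > diff.sql
# on the client: apply the diff to the local database
sqlite3 local.db < diff.sql
Deletes would still have to be generated separately, from the deleted-rows history mentioned above.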
If you're just sync'ing from the server to client, Eran's solution should work.
If you're just sync'ing from the client to the server, just reverse it.
If you're sync'ing both ways, have fun. You'll at minimum probably want to keep change logs, and you'll need to figure out how to deal with conflicts.
The source database is quite large. The target database doesn't grow automatically. They are on different machines.
I'm coming from an MS SQL Server / MySQL background, and IDS 11 seems overly complex (I am sure, with good reason).
One way to move data from one server to another is to back up the database using the dbexport command.
Then after copying the backup files to the destination server run the dbimport command.
To create a new database, you need to create the DBSpace for it using the onmonitor tool; at this point you could use the existing files from the other server.
You will then need to create the database on the destination server using the dbaccess tool. The dbaccess tool has a database option that allows you to create a database. When creating the database you specify what DBSpace to use.
The source database may be made up of many chunks which you will also need to copy and attach to the new database.
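In its simplest form the export/import looks something like this (a sketch; the names are placeholders):
# on the source server: export the database as ASCII files plus a schema
dbexport -o /backup/mydb mydb
# copy /backup/mydb to the destination server, then import it,
# placing the new database in the dbspace you created
dbimport -i /backup/mydb -d dbspace1 mydb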
The easiest way is dbexport/dbimport, as others have mentioned.
The fastest way is using onpload, the High Performance Loader. If you have lots of data, but not a ridiculous number of tables, this is definitely worth pursuing. There are some bits and pieces on the IIUG site that may be of assistance in scripting the HPL to generate all the config you'll need.
You have a few choices.
dbexport/dbimport
onunload/onload
HPL (high performance loader) options.
I have personally used onunload/onload and dbexport/dbimport. I have not used HPL. I'm using IDS 10.
onunload/onload (IBM docs):
Backs up the raw database to disk or tape in page-size chunks.
Faster (especially if you go to disk).
Issues if the database servers are on different operating systems or hardware, or if they just have different page sizes.
dbexport/dbimport (IBM docs):
Backs up the database as delimited ASCII files.
Writes an ASCII schema of the database including all users, tables, views, indexes, etc. Everything about the structure of the database goes into one huge plain text file.
Separate plain text files for each table of the database as well.
Not so fast.
Issues on dbimport with any table that has bad data, any view with incorrect syntax, etc. (This can be a good thing: an opportunity to identify and clean.)
DO NOT LEAVE THIS TAPE ON THE FRONT SEAT OF YOUR CAR WHEN YOU RUN INTO THE STORE FOR AN ICE CREAM (or you'll be on the news). Also read ... Not a very secure way to be moving data around. :)
Limitation: Requires exclusive access to the source database.
Here is a good place to start in the docs --> Migration of Data Between Database Servers
Have you used the export tool? There used to be a way: if you first put the DBs into quiescent mode, you could actually copy the DBSpaces across (with the dbspaces tool, I think... it's been a few years now).
With Informix you used to be able to specify the DBSpace(s) to be used for a table (maybe even in ALTER TABLE?).
Check the dbaccess tool; there is an export command.
Put the DBs into quiescent mode or shut down, copy the dbspaces, and then attach the table, telling it to point to the new dbspaces file. (The dbspaces tool could be worthwhile looking at... I have manuals around here; they are for 9.2, but it shouldn't have changed too much.)
If both machines use the same version of IDS, then another option would be to use ontape to take a backup on one machine and restore it on another. You can use the STDIO option and just stream the backup onto the other machine, where the restore reads from STDIO.
From the "Data Replication for High Availability and Distribution" redbook:
ontape -s -L 0 -F | rsh secondary_server "ontape -p"
You could also create a passwordless ssh connection between the hosts and transfer in a more secure way.
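For example (the same idea as the one-liner above, just over ssh; the user and host names are placeholders):
ontape -s -L 0 -F | ssh user@secondary_server "ontape -p"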