I've just started using SQLite and I want to write all my application data to a file, not knowing if the file exists already; with 'normal' files, this is straightforward, but with SQLite I can't create a table if it already exists, and I can't insert a row if the primary key already exists.
I basically want to do something like "CREATE TABLE IF NOT EXISTS table....else...DELETE FROM table". There must be a way to do it, I suspect that there are some ways that are more efficient than others. You'd think, for example, that it would be better to use an existing table rather than deleting and recreating, but that depends on what's involved in checking if it exists and deleting its contents.
Alternatively, is there any way to write a database to memory (sqlite3_open(":memory:", &db)), but then get hold of its contents - as a byte array or something - to write to a file?
For any database system it will almost always be more efficient to DROP the table first and then re-CREATE it. Using DELETE requires index updates etc., whereas a simple DROP removes the indexes too and will not involve making transaction log entries. For SQLite, you can use DROP TABLE IF EXISTS to drop the table conditionally.
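For example, a minimal sketch (table and column names are hypothetical):
DROP TABLE IF EXISTS app_data;
CREATE TABLE app_data (
    id    INTEGER PRIMARY KEY,
    value TEXT
);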
What about having an empty, fully designed database file ready somewhere. Then, if the intended app data database does not exist, create it by copying the empty db file?
If you're going to read the application data in at some point before writing it back out - user settings for instance - you'll know whether the data exists or not. Try calls to tables wrapped in exception handling and you'll know if they exist - alternatively use the database tables to tell you what exists:
SELECT name FROM sqlite_master
WHERE type='table'
ORDER BY name;
If you only ever want to overwrite an existing db file, you can simply delete the db file from disk and then run your creation code - no messing around with drop calls anywhere, and no worries about having changes in the db between versions.
Related
We have a Postgres database in a test system which we want to 'refresh' with the data from the production system. However, there are some tables with test configuration data that I want to preserve in the test database. Note that the tables I want to preserve are referred to with foreign key constraints in other tables that are not preserved.
To refresh the test database, we usually rename it to '..._old' and then re-create the database from a dump of the production data.
Now there's a few ways to try to preserve the test configuration data, but I'm wondering if anyone has any brilliant ideas that are better/faster. Hoping that we can script this somehow to make it easy each time we do this.
A straight pg_dump/pg_restore won't work, because it will only INSERT not UPDATE the matching records. Or am I missing something?
I had thought about doing it by:
Renaming the tables involved with '..._test'
Using pg_dump to dump just those renamed tables to a file
Re-create the 'refreshed' database
Restore the renamed tables from file into the new database
Perform an UPDATE table_a SET (......) = (SELECT * FROM table_a_test) to overwrite refreshed data with preserved test data
Note that the number of records in the production data and the test data may not be the same.
The content of these tables is not huge, so I had also thought about generating UPDATE scripts for all the records within the preserved data.
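One way to write step 5 so that it only overwrites matching rows is an UPDATE ... FROM join; a minimal sketch, assuming a primary key id and columns col1/col2 (all hypothetical):
UPDATE table_a AS a
SET    col1 = t.col1,
       col2 = t.col2
FROM   table_a_test AS t
WHERE  a.id = t.id;
Rows that exist only in table_a_test would still need a separate INSERT, since the UPDATE only touches matching keys.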
Can anyone think of a better way to do this?
We have a table in our Oracle Database that was created from an actual script.
Ex:
CREATE TABLE new_table AS SELECT * FROM source_table;
I was hoping to recover the original script the table was created from, as the data in the table is quite old and the table needs to be refreshed. It was created with data from another live table in our database, so if there is a way to refresh it without the original query, I'm all ears. Any solutions are welcome!
Thanks!
I suppose you could also do a column-by-column comparison of this table against all others to see which one (if any) matches it. Of course, this would only be a guess.
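One rough way to start that comparison is structurally, via the data dictionary; a sketch, where MYSTERY_TABLE is a placeholder for the table in question:
SELECT c.table_name
FROM   all_tab_columns c
JOIN   all_tab_columns m
       ON  m.column_name = c.column_name
       AND m.data_type   = c.data_type
WHERE  m.table_name = 'MYSTERY_TABLE'
AND    c.table_name <> 'MYSTERY_TABLE'
GROUP  BY c.table_name
HAVING COUNT(*) = (SELECT COUNT(*)
                   FROM   all_tab_columns
                   WHERE  table_name = 'MYSTERY_TABLE');
-- lists tables sharing every column name and type with MYSTERY_TABLE
A data comparison would then only need to run against the short list this returns.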
That would require the object to actually be a materialized view instead of a table. Otherwise you are probably left with exploring logs. Beyond that, I doubt there is any way to recover the original SELECT statement used to create that table.
Is there a way in Oracle to create a table that only exists while the database is running and is only stored in memory? So if the database is restarted I will have to recreate the table?
Edit:
I want the data to persist across sessions. The reason being that the data is expensive to recreate but is also highly sensitive.
Using a temporary table would probably help performance compared to what happens today, but its still not a great solution.
You can create a 100% ephemeral table, usable for the duration of a session (typically shorter than the lifetime of the database instance), called a TEMPORARY table. The entire purpose of a table in memory is to make it faster to read from. You will have to re-populate the table for each session, as the table will be forgotten (both structure and data) once the session completes.
Not exactly, no.
Oracle has the concept of a "global temporary table". With a global temporary table, you create the table once, as with any other table. The table definition will persist permanently, as with any other table.
The contents of the table, however, will not be permanent. Depending on how you define it, the contents will persist for either the life of the session (ON COMMIT PRESERVE ROWS) or the life of the transaction (ON COMMIT DELETE ROWS).
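For example, a minimal sketch (table and column names hypothetical):
CREATE GLOBAL TEMPORARY TABLE session_scratch (
    id      NUMBER,
    payload VARCHAR2(100)
) ON COMMIT PRESERVE ROWS;  -- rows survive until the session ends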
See the documentation for all the details:
http://docs.oracle.com/cd/E11882_01/server.112/e25494/tables003.htm#ADMIN11633
Hope that helps.
You can use Oracle's trigger mechanism to invoke a stored procedure when the database starts up or shuts down.
That way you could have the startup trigger create the table, and the shutdown trigger drop it.
You'd probably also want the startup trigger to handle cases where the table exists and truncate it just in case the server stopped suddenly and the shutdown trigger wasn't called.
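A rough sketch of the startup side (trigger, table, and column names all hypothetical):
CREATE OR REPLACE TRIGGER trg_create_scratch
AFTER STARTUP ON DATABASE
BEGIN
    EXECUTE IMMEDIATE 'CREATE TABLE scratch_data (id NUMBER, payload VARCHAR2(4000))';
EXCEPTION
    WHEN OTHERS THEN
        IF SQLCODE = -955 THEN
            -- ORA-00955: the table already exists, so just empty it
            EXECUTE IMMEDIATE 'TRUNCATE TABLE scratch_data';
        ELSE
            RAISE;
        END IF;
END;
/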
Oracle trigger documentation
Using Oracle's Global Temporary Tables, you can create a table in memory and have it delete the data at the end of the transaction, or the end of the session.
If I understand correctly, you have some data that needs to be processed when the database is brought online and left available only as long as the database is online. The only use-case I can think of that would require this is if you're encrypting some data and you want to ensure that the unencrypted data is never written to disk.
If this is actually your use-case, I would recommend forgetting about trying to create your own solution for this and, instead, make use of Oracle's encrypted tablespaces or Transparent Data Encryption.
I have a handful or so of permanent tables that need to be re-built on a nightly basis.
In order to keep these tables "live" for as long as possible, and also to offer the possibility of having a backup of just the previous day's data, another developer vaguely suggested taking a route similar to this when the nightly build happens:
create a permanent table (a build version; e.g., tbl_build_Client)
re-name the live table (tbl_Client gets re-named to tbl_Client_old)
rename the build version to become the live version (tbl_build_Client gets re-named to tbl_Client)
To rename the tables, sp_rename would be used.
http://msdn.microsoft.com/en-us/library/ms188351.aspx
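The swap itself might look something like this, using the table names from the steps above:
EXEC sp_rename 'dbo.tbl_Client', 'tbl_Client_old';
EXEC sp_rename 'dbo.tbl_build_Client', 'tbl_Client';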
Do you see any more efficient ways to go about this, or any serious pitfalls in the approach? Thanks in advance.
Update
Trying to flesh out gbn's answer and his recommendation to use synonyms, would this be a rational approach, or am I getting some part horribly wrong?
Three real tables for "Client":
1. dbo.build_Client
2. dbo.hold_Client
3. dbo.prev_Client
Because "Client" is how other procs reference the "Client" data, the default synonym is
CREATE SYNONYM Client
FOR dbo.hold_Client
Then take these steps to refresh the data yet keep uninterrupted access.
(1.a.) TRUNCATE dbo.prev_Client (it had yesterday's data)
(1.b.) INSERT INTO dbo.prev_Client the records from dbo.build_Client, as dbo.build_Client still had yesterday's data
(2.a.) TRUNCATE dbo.build_Client
(2.b.) INSERT INTO dbo.build_Client the new data build from the new data build process
(2.c.) change the synonym
DROP SYNONYM Client
CREATE SYNONYM Client
FOR dbo.build_Client
(3.a.) TRUNCATE dbo.hold_Client
(3.b.) INSERT INTO dbo.hold_Client the records from dbo.build_Client
(3.c.) change the synonym
DROP SYNONYM Client
CREATE SYNONYM Client
FOR dbo.hold_Client
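Put together, the whole refresh might look roughly like this (a sketch of the steps above; the build step itself is elided):
TRUNCATE TABLE dbo.prev_Client;
INSERT INTO dbo.prev_Client SELECT * FROM dbo.build_Client;  -- (1) keep yesterday's data
TRUNCATE TABLE dbo.build_Client;
-- (2.b) run the nightly build into dbo.build_Client here
DROP SYNONYM Client;
CREATE SYNONYM Client FOR dbo.build_Client;  -- (2.c)
TRUNCATE TABLE dbo.hold_Client;
INSERT INTO dbo.hold_Client SELECT * FROM dbo.build_Client;  -- (3.b)
DROP SYNONYM Client;
CREATE SYNONYM Client FOR dbo.hold_Client;  -- (3.c)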
Use indirection to avoid manipulating tables directly:
Have 3 real tables: Client1, Client2, Client3, with all indexes, constraints, triggers etc.
Use synonyms to hide the real tables, e.g. Client, ClientOld, ClientToLoad
To generate the new table, you truncate/write to "ClientToLoad"
Then you DROP and CREATE the synonyms in a transaction so that
Client -> what was ClientToLoad
ClientOld -> what was Client
ClientToLoad -> what was ClientOld
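For instance, if Client currently points at Client1, ClientOld at Client3, and ClientToLoad at Client2, the swap might be sketched as:
BEGIN TRANSACTION;
DROP SYNONYM Client;
DROP SYNONYM ClientOld;
DROP SYNONYM ClientToLoad;
CREATE SYNONYM Client       FOR dbo.Client2;  -- was ClientToLoad
CREATE SYNONYM ClientOld    FOR dbo.Client1;  -- was Client
CREATE SYNONYM ClientToLoad FOR dbo.Client3;  -- was ClientOld
COMMIT;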
You can use SELECT base_object_name FROM sys.synonyms WHERE name = 'Client' to work out what the current indirection is
This works on all editions of SQL Server; the other way is "partition switching", which requires Enterprise Edition.
Some things to keep in mind:
Replication - if you use replication, I don't believe you'll be able to easily implement this strategy
Indexes - make sure that any indexes you have on the tables are carried over to your new/old tables as needed
Logging - I don't remember whether or not sp_rename is fully logged, so you may want to test that in case you need to be able to roll back, etc.
Those are the possible drawbacks I can think of off the top of my head. It otherwise seems to be an effective way to handle the situation.
Except for a missing step 0 - drop tbl_Client_old if it exists - the solution seems fine, especially if you run it in an explicit transaction. There is no backup of any previous data, however.
The other solution, without renames and drops, which I personally would prefer, is to:
Copy all rows from tbl_Client to tbl_Client_old;
Truncate tbl_Client.
(Optional) Remove obsolete records from tbl_Client_old.
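As a sketch (the date column in the optional pruning step is hypothetical):
INSERT INTO tbl_Client_old SELECT * FROM tbl_Client;
TRUNCATE TABLE tbl_Client;
DELETE FROM tbl_Client_old
WHERE  created_at < DATEADD(day, -7, GETDATE());  -- optional: keep only the last week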
It's better in the sense that you can control how much of the old data you keep in tbl_Client_old. Which solution will be faster depends on how much data is stored in the tables and what indexes they have.
If you use SQL Server 2008, why not try horizontal partitioning? All the data is contained in one table, but new and old data are kept in separate partitions.
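A minimal sketch of what that could look like (function, scheme, and column names all hypothetical):
CREATE PARTITION FUNCTION pf_ClientAge (int)
AS RANGE LEFT FOR VALUES (0);  -- values <= 0 in one partition, > 0 in the other

CREATE PARTITION SCHEME ps_ClientAge
AS PARTITION pf_ClientAge ALL TO ([PRIMARY]);

CREATE TABLE dbo.Client (
    ClientId int NOT NULL,
    Name     nvarchar(100),
    is_old   int NOT NULL  -- 0 = new data, 1 = old data
) ON ps_ClientAge (is_old);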
So I'm trying to do something I thought would have been straightforward. I have a table in the DB named "Images". Its 'Description' column is of type nvarchar(50). I simply want to make it nvarchar(250). Every time I try, it says it can't save because some tables would have to be dropped and re-created. I can't just delete the table (I think), because there's already data in it, and I can't lose that.
EDIT::
Exact error message
"Saving changes is not permitted. The
changes you have made require the
following tables to be dropped and
re-created. You have either made
changes to a table that can't be
re-created or enabled the option
Prevent saving changes that require
the table to be re-created."
Should I just disable the 'Prevent saving changes that require table re-creation' option and save it from there?
This KB article explains it.
Do you have any tables referencing the "Description" column? That would prevent you from changing the data type/length.
Were you doing this from the SSMS GUI or were you running a script using alter table to make the change?
If you did it through the designer, I believe it creates another table, drops the original, and renames the new table. If that table is in a PK/FK relationship, it can't drop the table. Never make table changes except by using a script. You also need scripts to put the changes in source control properly.
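For this particular change, a script along these lines should work (assuming the column is currently nullable; adjust NULL/NOT NULL to match the existing definition):
ALTER TABLE dbo.Images
ALTER COLUMN Description nvarchar(250) NULL;  -- widen from nvarchar(50)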