When I create a table that defines its foreign keys directly in the CREATE TABLE command and the target table does not exist yet, I get an error.
Can the check for the target table's existence somehow be suspended?
My DBMS is Postgres.
Example (simplified):
CREATE TABLE "Bar" (
    foo_id   integer REFERENCES "Foo" (id),
    someattr text
);

CREATE TABLE "Foo" (
    id integer PRIMARY KEY
);
The example is in the wrong order, which is why it won't run.
I'm trying to recreate a database in batch, based on definitions in many SQL files.
The best ways to deal with this are likely:
Create your tables in the correct order, or
Create the constraints outside the table creation, after all tables are created (see the sketch below).
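For the second approach, a minimal sketch using the "Foo"/"Bar" tables from the question (the constraint name is arbitrary) could look like this:
-- Create both tables without the foreign key first
CREATE TABLE "Foo" (
    id integer PRIMARY KEY
);

CREATE TABLE "Bar" (
    foo_id   integer,
    someattr text
);

-- Then add the constraint once both tables exist
ALTER TABLE "Bar"
    ADD CONSTRAINT bar_foo_id_fkey
    FOREIGN KEY (foo_id) REFERENCES "Foo" (id);
This way the order in which the individual CREATE TABLE scripts run no longer matters.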
Brute force is always an option.
Keep on running your DDL scripts until you get a run with no errors.
More elegance requires a sequential structuring of your scripts.
Adding existence checks is possible, but I am not too familiar with the Postgres metadata.
Related
In the project I have been recently working on, many (PostgreSQL) database tables are just used as big lookup arrays. We have several background worker services, which periodically pull the latest data from a server, then replace all contents of a table with the latest data. The replacing has to be atomic because we don't want a partially completed table to be seen by lookup-ers.
I thought the simplest way to do the replacing is something like this:
BEGIN;
DELETE FROM some_table;
COPY some_table FROM 'source file';
COMMIT;
But I found a lot of production code use this method instead:
BEGIN;
CREATE TABLE some_table_tmp (LIKE some_table);
COPY some_table_tmp FROM 'source file';
DROP TABLE some_table;
ALTER TABLE some_table_tmp RENAME TO some_table;
COMMIT;
(I omit some logic, such as changing the owner of a sequence, etc.)
I just can't see any advantage of this method, especially after some discoveries and experiments: SQL statements like ALTER TABLE and DROP TABLE acquire an ACCESS EXCLUSIVE lock, which blocks even a SELECT.
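To make that concrete, a minimal two-session sketch of the blocking behaviour in Postgres (some_table is from the example above; tmp_col is just a throwaway column):
-- Session 1: ALTER TABLE takes an ACCESS EXCLUSIVE lock and holds it until COMMIT
BEGIN;
ALTER TABLE some_table ADD COLUMN tmp_col integer;
-- (transaction intentionally left open)

-- Session 2: even a plain read now waits until session 1 commits or rolls back
SELECT count(*) FROM some_table;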
Can anyone explain what problem the latter SQL pattern is trying to solve? Or is it wrong, and should we avoid using it?
Is it possible to copy a table (with definition, constraints, identity) to a new table?
Generate a CREATE script based on the table
Modify the script to create a different table name
Perform an INSERT from selecting everything from the source table
No, not really; you have to script it out, then change the names.
You can do this:
SELECT *
INTO NewTable
FROM OldTable
WHERE 1 = 2  -- if you only want the table structure without data
but it won't copy any constraints.
It's not the most elegant solution, but you could use a tool like the free Database Publishing Wizard from Microsoft.
It creates an SQL script of the table definition, including data and indexes. But you would have to alter the script manually to change the table name...
Another possibility:
I just found this old answer on SO.
That script is an example of scripting the constraints of all tables, but you can easily change it to select only the constraints of "your" table.
So, you could do the following:
Create the new table with data like SQLMenace said (SELECT * INTO NewTable FROM OldTable)
Add constraints, indexes and so on by adapting that SQL script (see the sketch below)
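A rough sketch of those two steps (all table, column, constraint and index names here are hypothetical):
SELECT *
INTO NewTable
FROM OldTable;   -- copies structure and data, but not constraints or indexes

-- Re-add constraints and indexes afterwards, by hand or from a generated script
ALTER TABLE NewTable
    ADD CONSTRAINT PK_NewTable PRIMARY KEY (id);

CREATE INDEX IX_NewTable_SomeColumn ON NewTable (SomeColumn);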
I am migrating an MS Access application (which has linked tables to an MS SQL Server) to MySQL.
As a means to overcome some MS Access table naming problems, I am seeking a solution to add a MySQL table alias that will point to an existing table in the MySQL database. Ideally I would like to create the alias 'dbo_customers' in MySQL that would point to the customers table, also in MySQL.
To be clear I am not wanting to alias a table name inside a query like this:
SELECT * FROM customers AS dbo_customers
But rather I would like to be able to issue the following query:
SELECT * FROM dbo_customers
and have it return data from the customers table.
Off the top of my head
CREATE VIEW dbo_customers AS
SELECT * FROM customers
Maybe not the best solution, but it should work, as the view is updatable. It will definitely work for read-only access.
You can create a View.
CREATE VIEW dbo_customers AS SELECT * FROM customers;
If that doesn't work for you, you could try creating a shadow copy of the table and use triggers to keep the tables synced.
For example:
CREATE TABLE t1( id serial primary key, field varchar(255) not null );
CREATE TABLE dbo_t1( id serial primary key, field varchar(255) not null );
DELIMITER //

-- INSERT trigger
CREATE TRIGGER t1_dbo_insert AFTER INSERT ON t1
FOR EACH ROW BEGIN
    -- No need to specify the ID, it should stay in sync
    INSERT INTO dbo_t1 SET field = NEW.field;
END //

-- UPDATE trigger
CREATE TRIGGER t1_dbo_update AFTER UPDATE ON t1
FOR EACH ROW BEGIN
    UPDATE dbo_t1 SET field = NEW.field WHERE id = NEW.id;
END //

-- DELETE trigger
CREATE TRIGGER t1_dbo_delete AFTER DELETE ON t1
FOR EACH ROW BEGIN
    DELETE FROM dbo_t1 WHERE id = OLD.id;
END //

DELIMITER ;
Not exactly an 'alias', and far from perfect. But it is an option if all else fails.
There is a simpler solution for MySQL via the MERGE table engine.
Imagine we have a table named rus_vacancies and need its English equivalent:
create table eng_vacancies select * from rus_vacancies;
delete from eng_vacancies;
alter table eng_vacancies ENGINE=MERGE;
alter table eng_vacancies UNION=(rus_vacancies);
Now the table rus_vacancies is equivalent to the table eng_vacancies for any read-write operations.
One limitation: the original table must use ENGINE=MyISAM (this can easily be done with "ALTER TABLE rus_vacancies ENGINE=MyISAM").
You could create a view named dbo_customers which is backed by the customers table.
@OMG Ponies said in a comment:
Why not rename the table?
...and it seems the obvious answer to me.
If you create an ODBC linked table for the MySQL table customers, it will be called customers, and then all you have to do is rename the linked table to dbo_customers. There is absolutely no need that I can see to create a view in MySQL for this purpose.
That said, I'd hate to have an Access app that was using SQL Server table names when the MySQL tables were not named the same thing -- that's just confusing and will lead to maintenance problems (i.e., it's simpler for the linked tables in the Access front end to have the same names as the MySQL tables, wherever possible). If I were in your position, I'd get a search and replace utility and replace all the SQL Server table names with the MySQL table names throughout the entire Access front end. You'd likely have to do it one table at a time, but in my opinion, the time it takes to do this now is going to be more than made up for in clarity going forward with development of the Access front end.
I always rename my "linked to SQL" tables in Access from {dbo_NAME} to {NAME}.
The link creates the table name as {dbo_NAME}, but Access occasionally has problems with the dbo_ prefix.
Aliases would be nice, yet MySQL does NOT have such a feature.
One option that may serve your needs, besides creating a view, is to use the FEDERATED storage engine locally.
CREATE TABLE dbo_customers (
    id INT(20) NOT NULL AUTO_INCREMENT,
    name VARCHAR(32) NOT NULL DEFAULT '',
    PRIMARY KEY (id)
)
ENGINE=FEDERATED
DEFAULT CHARSET=latin1
CONNECTION='mysql://fed_user@localhost:9306/federated/customers';
There are currently some limitations with the FEDERATED storage engine. Here are a couple especially important ones:
FEDERATED tables do not support transactions
FEDERATED tables do not work with the query cache
I'd like to mention a bad solution I explored (and abandoned), which was to use hardlinks on the .frm, .MYD and .MYI files corresponding to my table in /var/lib/mysql/{name_of_my_database}/.
It does, however, NOT work. For InnoDB tables, it simply cannot (even if you hardlink the .idb file) because tables are also referenced in ibdata1.
For MyISAM tables, it kind of works, except it doesn't, because in memory the tables are still distinct and thus do not share their cache. So if you write a row to original_table, it won't immediately appear in aliased_table. You would have to flush tables first… which defeats the purpose and even causes data loss (if you insert a row in both the original and the alias before flushing, only one row is kept).
I thought my experiment was worth mentioning as a cautionary tale.
Is it possible to create more than one table at a time using a single CREATE TABLE statement?
For MySQL, you can use multi-query to execute multiple SQL statements in a single call. You'd issue two CREATE TABLE statements separated by a semicolon.
But each CREATE TABLE statement individually can create only one table. The syntax supported by MySQL does not allow multiple tables to be created simultaneously.
@bsdfish suggests using transactions, but DDL statements like CREATE TABLE cause implicit transaction commits. There's no way to execute multiple CREATE TABLE statements in a single transaction in MySQL.
I'm also curious why you would need to create two tables simultaneously. The only idea I could come up with is if the two tables have cyclical dependencies, i.e. they reference each other with foreign keys. The solution to that is to create the first table without that foreign key, then create the second table, then add the foreign key to the first table with ALTER TABLE ADD CONSTRAINT. Dropping either table requires a similar process in reverse.
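For example, a minimal sketch with two hypothetical tables a and b that reference each other:
-- Create a without its foreign key to b, since b does not exist yet
CREATE TABLE a (
    id   INT PRIMARY KEY,
    b_id INT
);

-- b can reference a directly, because a already exists
CREATE TABLE b (
    id   INT PRIMARY KEY,
    a_id INT,
    FOREIGN KEY (a_id) REFERENCES a (id)
);

-- Finally, add the remaining foreign key to a
ALTER TABLE a
    ADD CONSTRAINT fk_a_b FOREIGN KEY (b_id) REFERENCES b (id);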
Not with MS SQL Server. Not sure about MySQL.
Can you give more info on why you'd want to do this? Perhaps there's an alternative approach.
I don't know, but I don't think you can do that. Why do you want to do this?
Not in standard SQL using just the 'CREATE TABLE' statement. However, you can write multiple statements inside a CREATE SCHEMA statement, and some of those statements can be CREATE TABLE statements. Next question: does your DBMS support CREATE SCHEMA? And does it have any untoward side effects?
Judging from the MySQL manual pages, it does support CREATE SCHEMA as a synonym for CREATE DATABASE. That would be an example of one of the 'untoward side-effects' I was referring to.
(Did you know that standard SQL does not provide a 'CREATE DATABASE' statement?)
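To illustrate the standard-SQL form, a sketch with hypothetical names (it should run in PostgreSQL, for example, but not in MySQL, where CREATE SCHEMA just creates a database):
-- One statement, several tables: the CREATE TABLE clauses are
-- sub-elements of CREATE SCHEMA, so there is no semicolon between them.
CREATE SCHEMA sales
    CREATE TABLE customers (
        id   integer PRIMARY KEY,
        name text NOT NULL
    )
    CREATE TABLE orders (
        id          integer PRIMARY KEY,
        customer_id integer
    );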
I don't think it's possible to create more than one table with a 'CREATE TABLE' command. Everything really depends on what you want to do. If you want the creation to be atomic, transactions are probably the way to go. If you create all your tables inside a transaction, it will act as a single create statement from the perspective of anything going on outside the transaction.
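For instance, in a DBMS with transactional DDL such as PostgreSQL, a sketch with hypothetical table names might look like this (it would not help in MySQL, where each CREATE TABLE commits implicitly):
BEGIN;
CREATE TABLE parent (id integer PRIMARY KEY);
CREATE TABLE child  (id        integer PRIMARY KEY,
                     parent_id integer REFERENCES parent (id));
COMMIT;
-- Until COMMIT, other sessions see neither table; on ROLLBACK, neither is created.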
I'm attempting to make a table for the first time using Postgres, and the examples I'm seeing are kind of throwing me off. When it comes to creating a schema, I have a schema.sql file that contains my schema as follows:
CREATE TABLE IF NOT EXISTS orders
(
order_id INTEGER NOT NULL,
order_amount INTEGER NOT NULL
);
COMMENT ON COLUMN orders.order_id IS 'The order ID';
COMMENT ON COLUMN orders.order_amount IS 'The order amount';
Now I'd upload that schema by doing the following:
psql -d mydb -f /usr/share/schema.sql
Now when it comes time to create the table, I'm supposed to do something like this:
create table schema.orders(
order_id INT NOT NULL,
order_amount INT NOT NULL
);
The uploading of the schema.sql file is what confuses me. What is all the information inside the file used for? I thought by uploading the schema I'm providing the model to create the table, but running create table schema.orders seems to be doing just that.
What you call "upload" is actually executing a script file (with SQL DDL commands in it).
I thought by uploading the schema I'm providing the model to create the table
You are creating the table by executing that script. The second CREATE TABLE command is almost but not quite doing the same. Crucial difference (besides the missing comments): A schema-qualified table name. And your schema happens to be named "schema", which is a pretty bad idea, but allowed.
Now, the term "schema" is used for two different things:
The general database structure created with SQL DDL commands.
A SCHEMA which is similar to a directory in a file system.
The term just happens to be the same for either, but one has nothing to do with the other.
Depending on the schema search path, the first invocation of CREATE TABLE may or may not have created another table in a different schema. You need to understand the role of the search path in Postgres:
How does the search_path influence identifier resolution and the "current schema"
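To illustrate, a minimal, standalone sketch (hypothetical names, assuming a fresh database):
SHOW search_path;                          -- typically: "$user", public

CREATE SCHEMA demo;
CREATE TABLE orders      (id integer);     -- unqualified: lands in the first existing schema on the search_path (usually public)
CREATE TABLE demo.orders (id integer);     -- schema-qualified: lands in demo

SET search_path TO demo, public;
SELECT * FROM orders;                      -- now resolves to demo.orders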