postgresql: how to get primary keys of rows inserted with a bulk copy_from?

The goal is this: I have a set of values to go into table A, and a set of values to go into table B. The values going into B reference values in A (via a foreign key), so after inserting the A values I need to know how to reference them when inserting the B values. I need this to be as fast as possible.
I made the B values insert with a bulk copy from:
from cStringIO import StringIO
from psycopg2.extensions import adapt

def bulk_insert_copyfrom(cursor, table_name, field_names, values):
    if not values: return
    print "bulk copy from prepare..."
    # one tab-separated line per row, quoted/escaped by psycopg2's adapt()
    str_vals = "\n".join("\t".join(adapt(val).getquoted() for val in cur_vals)
                         for cur_vals in values)
    strf = StringIO(str_vals)
    print "bulk copy from execute..."
    cursor.copy_from(strf, table_name, columns=tuple(field_names))
This was far faster than doing an INSERT VALUES ... RETURNING id query. I'd like to do the same for the A values, but I need to know the ids of the inserted rows.
Is there any way to execute a bulk copy from in this fashion, but to get the id field (primary key) of the rows that are inserted, such that I know which id associates with which value?
If not, what would the best way to accomplish my goal?
EDIT: Sample data on request:
a_val1 = [1, 2, 3]
a_val2 = [4, 5, 6]
a_vals = [a_val1, a_val2]
b_val1 = [a_val2, 5, 6, 7]
b_val2 = [a_val1, 100, 200, 300]
b_val3 = [a_val2, 9, 14, 6]
b_vals = [b_val1, b_val2, b_val3]
I want to insert the a_vals, then insert the b_vals, using foreign keys instead of references to the list objects.

Generate the IDs yourself.
1. BEGIN a transaction
2. Lock table a
3. Call nextval() - that's your first ID
4. Generate your COPY with the IDs in place
5. Do the same for table b
6. Call setval() with your final ID + 1
7. COMMIT the transaction
At step 2 you probably want to lock the sequence's relation too: if other code calls nextval() and stashes the ID somewhere, that ID might already be in use by the time the code gets around to using it.
Slightly off-topic fact: there is a "cache" setting that you can set if you have lots of backends doing lots of inserts; it makes each backend increment the counter in blocks.
http://www.postgresql.org/docs/9.1/static/sql-createsequence.html
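For concreteness, a minimal psql sketch of that recipe, assuming table a has columns (id, x, y, z) with id defaulting to nextval('a_id_seq'); the table, sequence, and column names are all assumptions:
BEGIN;
LOCK TABLE a IN EXCLUSIVE MODE;       -- step 2: keep other writers out

SELECT nextval('a_id_seq');           -- step 3: suppose this returns 42

-- step 4: write the ids into the COPY data yourself
-- (columns are tab-separated)
COPY a (id, x, y, z) FROM STDIN;
42	1	2	3
43	4	5	6
\.

-- step 6: the last id used was 43, so point the sequence at 44;
-- with false, the next nextval() returns exactly 44
SELECT setval('a_id_seq', 44, false);

COMMIT;
If many backends insert concurrently, the "cache" setting mentioned above (CREATE SEQUENCE ... CACHE n) hands each backend a block of values at a time.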

Actually you can do it differently. What you need is:
1. Start a transaction
2. Create a temp table with the same (or almost the same) schema
3. COPY the data to that temp table
4. Perform a regular INSERT INTO ... FROM temp_table ... RETURNING id, other_columns
5. Commit
Taken from here (in C#, but the algorithm is the same).
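A sketch of those steps in SQL, with assumed names (a_staging for the temp table; target table a with a generated id and columns x, y, z):
BEGIN;

-- staging table: same columns as the target, minus the generated id
CREATE TEMP TABLE a_staging (x int, y int, z int) ON COMMIT DROP;

-- the fast bulk load (columns are tab-separated)
COPY a_staging (x, y, z) FROM STDIN;
1	2	3
4	5	6
\.

-- move the rows into the real table; RETURNING hands back each
-- generated id together with the values it belongs to
INSERT INTO a (x, y, z)
SELECT x, y, z FROM a_staging
RETURNING id, x, y, z;

COMMIT;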

Related

BigQuery insert values AS, assume nulls for missing columns

Imagine there is a table with 1000 columns.
I want to add a row with values for 20 columns and assume NULLs for the rest.
INSERT VALUES syntax can be used for that:
INSERT INTO `tbl` (
  date,
  p,
  ... # 18 more names
)
VALUES (
  DATE('2020-02-01'),
  'p3',
  ... # 18 more values
)
The problem with it is that it is hard to tell which value corresponds to which column. And if you need to change/comment out some value then you have to make edits in two places.
INSERT SELECT syntax can also be used:
INSERT INTO `tbl`
SELECT
  DATE('2020-02-01') AS date,
  'p3' AS p,
  ... # 18 more value AS column
  ... # 980 more NULL AS column
Then if I need to comment out some column, only one line has to be commented out.
But obviously having to set 980 NULLs is an inconvenience.
What is the way to combine both approaches? To achieve something like:
INSERT INTO `tbl`
SELECT
  DATE('2020-02-01') AS date,
  'p3' AS p,
  ... # 18 more value AS column
The query above doesn't work, the error is Inserted row has wrong column count; Has 20, expected 1000.
Your first version is really the only one you should ever be using for SQL inserts. It ensures that every target column is explicitly mentioned, and is unambiguous with regard to where the literals in the VALUES clause should go. You can use the version which does not explicitly mention column names. At first, it might seem that you are saving yourself some code. But realize that there is a column list which will be used, and it is the list of all the table's columns, in whatever their positions from definition are. Your code might work, but appreciate that any addition/removal of a column, or changing of column order, can totally break your insert script. For this reason, most will strongly advocate for the first version.
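To see why this matters, a small illustration with a hypothetical table (mydataset.t is made up, not from the question):
CREATE TABLE mydataset.t (a INT64, b INT64);
INSERT INTO mydataset.t VALUES (1, 2);       -- fine today: a = 1, b = 2

ALTER TABLE mydataset.t ADD COLUMN c INT64;  -- the schema evolves

INSERT INTO mydataset.t VALUES (1, 2);       -- now fails: wrong column count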
You can try the following solution; it is a combination of the two approaches you highlighted above:
INSERT INTO `tbl` (date, p, ... # 18 more column names)
SELECT
  DATE('2020-02-01') AS date,
  'p3' AS p,
  ... # 18 more value AS column
A couple of things to consider here:
The other 980 columns must be nullable, i.e. able to hold NULL values.
The columns in the INSERT list and in the SELECT must be in the same order, so the data lands in the correct columns.
To avoid any confusion, use aliases in the SELECT that match the INSERT column names; that removes the ambiguity.
Hopefully this works for you.
In BigQuery, the best way to do what you're describing is to first load to a staging table. I'll assume you can get the values you want to insert into JSON format with keys that correspond to the target column names.
values.json
{"date": "2020-01-01", "p": "p3", "column": "value", ... }
Then generate a schema file for the target table and save it locally
bq show --schema project:dataset.tbl > schema.json
Load the new data to the staging table using the target schema. This gives you "named" null values for each column present in the target schema but missing from your json, bypassing the need to write them out.
bq load --replace --source_format=NEWLINE_DELIMITED_JSON \
project:dataset.stg_tbl values.json schema.json
Now the insert select statement works every time:
insert into `project:dataset.tbl`
select * from `project:dataset.stg_tbl`
Not a pure SQL solution but I managed this by loading my staging table with data then running something like:
from google.cloud import bigquery

client = bigquery.Client()

table1 = client.get_table(f"{project_id}.{dataset_name}.table1")
table1_col_map = {field.name: field for field in table1.schema}

table2 = client.get_table(f"{project_id}.{dataset_name}.table2")
table2_col_map = {field.name: field for field in table2.schema}

# table1's fields win for any column name present in both tables
combined_schema = {**table2_col_map, **table1_col_map}

table1.schema = list(combined_schema.values())
client.update_table(table1, ["schema"])
Explanation:
This retrieves the schemas of both tables and converts each into a dictionary with the column name as key and the actual field info from the SDK as value. Then both are combined with dictionary unpacking (the order of unpacking determines which table's columns take precedence when a column is common to both). Finally the combined schema is assigned back to table 1 and used to update the table, adding the missing columns with NULLs.

What's wrong with my Postgres insert or update query

I want to insert or update data in a table. The column "GROUP" is UNIQUE and the ID of the group should remain constant.
There is a fiddle:
http://sqlfiddle.com/#!17/551ea/3
On insert, everything is okay.
The update also works for "GROUP" = 'TEST01'.
But when I insert a new group and then update, the ID changes (press "Run SQL" multiple times).
This is my insert query:
INSERT INTO GROUPS ("GROUP", SERVER, PATH, SHARE)
VALUES ('TEST04', 4, 4, 4)
ON CONFLICT("GROUP") DO UPDATE
SET SERVER = 11,
PATH = 11,
SHARE = 11
WHERE GROUPS."GROUP" = 'TEST01'
The ID will be used in other tables, so it should be created only once, for the first entry.
and this is the general structure:
CREATE SEQUENCE gid START 1;
CREATE TABLE GROUPS (
ID integer NOT NULL DEFAULT nextval('gid') PRIMARY KEY,
"GROUP" VARCHAR NOT NULL UNIQUE,
SERVER integer,
PATH integer,
SHARE integer
);
Look at this fiddle
Each time there is a conflict on insert, the sequence value that was already fetched for the id DEFAULT is discarded - the ON CONFLICT DO UPDATE path never uses it. So initially you start with 1; then you insert 3 tuples, so the final value of the sequence is 3. Then you try to insert a new tuple, but there is a conflict, so the value of the sequence is now 4. Then you insert a new tuple, and it gets the value 5 from the sequence.
If you keep running the two inserts, the sequence will keep incrementing. SQLfiddle probably uses persistent connections or some connection pooling that does not properly reset the sequence when rebuilding the schema.
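To make the mechanism concrete, a sketch against the question's schema ('TEST05' is a hypothetical new group):
-- the DEFAULT nextval('gid') is evaluated before the conflict is
-- detected, so the sequence advances even though no row is inserted
INSERT INTO GROUPS ("GROUP", SERVER, PATH, SHARE)
VALUES ('TEST01', 11, 11, 11)
ON CONFLICT ("GROUP") DO UPDATE
SET SERVER = 11, PATH = 11, SHARE = 11;   -- conflict: one gid value burned

INSERT INTO GROUPS ("GROUP", SERVER, PATH, SHARE)
VALUES ('TEST05', 5, 5, 5);               -- gets a higher ID than expected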

SQL Server Unique Composite Key of Two Field With Second Field Auto-Increment

I have the following problem: I want a composite primary key like:
PRIMARY KEY (`base`, `id`);
such that when I insert a base, the id is auto-incremented based on the previous id for the same base.
Example:
base  id
A     1
A     2
B     1
C     1
Is there a way when I say:
INSERT INTO table(base) VALUES ('A')
to insert a new record with id 3 because that is the next id for base 'A'?
The resulting table should be:
base  id
A     1
A     2
B     1
C     1
A     3
Is it possible to do this entirely in the DB, since doing it programmatically could cause race conditions?
EDIT
The base currently represents a company and the id represents an invoice number. There should be auto-incrementing invoice numbers for each company, but two companies could have invoices with the same number. Users logged in under a company should be able to sort, filter and search by those invoice numbers.
Ever since someone posted a similar question, I've been pondering this. The first problem is that DBs don't provide "partitionable" sequences (ones that would restart/remember based on different keys). The second is that the SEQUENCE objects that are provided are geared around fast access, and can't be rolled back (ie, you will get gaps). This essentially rules out using a built-in utility... meaning we have to roll our own.
The first thing we're going to need is a table to store our sequence numbers. This can be fairly simple:
CREATE TABLE Invoice_Sequence (base CHAR(1) PRIMARY KEY CLUSTERED,
invoiceNumber INTEGER);
In reality the base column should be a foreign-key reference to whatever table/id defines the business(es)/entities you're issuing invoices for. In this table, you want entries to be unique per issued-entity.
Next, you want a stored proc that will take a key (base) and spit out the next number in the sequence (invoiceNumber). The set of keys necessary will vary (ie, some invoice numbers must contain the year or full date of issue), but the base form for this situation is as follows:
CREATE PROCEDURE Next_Invoice_Number @baseKey CHAR(1),
                                     @invoiceNumber INTEGER OUTPUT
AS
BEGIN
    DECLARE @out TABLE (invoiceNumber INTEGER);
    MERGE INTO Invoice_Sequence Stored
    USING (VALUES (@baseKey)) Incoming(base)
       ON Incoming.base = Stored.base
    WHEN MATCHED THEN
        UPDATE SET Stored.invoiceNumber = Stored.invoiceNumber + 1
    WHEN NOT MATCHED BY TARGET THEN
        -- the first invoice for a new base starts at 1
        INSERT (base, invoiceNumber) VALUES (@baseKey, 1)
    OUTPUT INSERTED.invoiceNumber INTO @out;
    SELECT @invoiceNumber = invoiceNumber FROM @out;
END
Note that:
You must run this in a serialized transaction
The transaction must be the same one that's inserting into the destination (invoice) table.
That's right, you'll still get blocking per-business when issuing invoice numbers. You can't avoid this if invoice numbers must be sequential, with no gaps - until the row is actually committed, it might be rolled back, meaning that the invoice number wouldn't have been issued.
Now, since you don't want to have to remember to call the procedure for the entry, wrap it up in a trigger:
CREATE TRIGGER Populate_Invoice_Number ON Invoice INSTEAD OF INSERT
AS
BEGIN
    DECLARE @base CHAR(1), @invoiceNumber INTEGER
    -- note: this handles single-row inserts; multi-row INSERTs would
    -- need a loop or a set-based rewrite
    SELECT @base = base FROM Inserted
    EXEC Next_Invoice_Number @base, @invoiceNumber OUTPUT
    INSERT INTO Invoice (base, invoiceNumber)
    VALUES (@base, @invoiceNumber)
END
(obviously, you have more columns, including others that should be auto-populated - you'll need to fill them in)
...which you can then use by simply saying:
INSERT INTO Invoice (base) VALUES('A');
So what have we done? Mostly, all this work was about shrinking the number of rows locked by a transaction. Until this INSERT is committed, there are only two rows locked:
The row in Invoice_Sequence maintaining the sequence number
The row in Invoice for the new invoice.
All other rows for a particular base are free - they can be updated or queried at will (deleting information out of this kind of system tends to make accountants nervous). You probably need to decide what should happen when queries would normally include the pending invoice...
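A usage sketch tying the pieces together, honoring the serialized-transaction notes above (the isolation-level placement is an assumption):
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
    -- the INSTEAD OF trigger calls Next_Invoice_Number, which holds the
    -- Invoice_Sequence row for 'A' locked until this commits
    INSERT INTO Invoice (base) VALUES ('A');
COMMIT TRANSACTION;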
You can use a trigger before insert and assign the next value by taking the max(id) with a "base" filter (which is "A" in this case). That will give you max(id) = 2; increment it to max(id) + 1 and push the new value into the "id" field before the insert. I think this may help you:
MSSQL Triggers: http://msdn.microsoft.com/en-in/library/ms189799.aspx
Test Table
CREATE TABLE MyTable
( base CHAR(1),
  id   INT
)
GO
Trigger Definition
CREATE TRIGGER dbo.tr_Populate_ID
ON dbo.MyTable
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO MyTable (base, id)
    SELECT i.base, ISNULL(MAX(mt.id), 0) + 1 AS NextValue
    FROM inserted i
    LEFT JOIN MyTable mt ON i.base = mt.base
    GROUP BY i.base
END
Test
Execute the following statement multiple times and you will see that the next value available in each group is assigned to the ID.
INSERT INTO MyTable VALUES
('A'),
('B'),
('C')
GO
SELECT * FROM MyTable
GO

Find Last Inserted Record MS SQL SERVER

I ran 12 lakh (1.2 million) INSERT commands against a single table, but after some time the query terminated. How can I find the last inserted record?
a) The table doesn't have a created-date column.
b) I can't apply an ORDER BY clause because the primary key values are generated manually.
c) LAST() is not a built-in function in MS SQL.
Is there any way to find the last executed query? There must be some way, but I'm not able to figure it out.
The table contains only a primary key constraint, no other constraints.
As per the comment request, here is a quick and dirty manual solution, assuming you've got the list of INSERT statements (or the corresponding data) in the same sequence as the issued INSERTs. For this example I assume 1 million records.
INSERT ... VALUES (1, ...)
...
INSERT ... VALUES (250000, ...)
...
INSERT ... VALUES (500000, ...)
...
INSERT ... VALUES (750000, ...)
...
INSERT ... VALUES (1000000, ...)
You just have to find the last PK that was inserted. Luckily in this case there is one. So you do a manual binary search on the table, issuing
SELECT pk FROM myTable WHERE pk = 500000
If you get a row back, you know the inserts got at least that far. Continue checking with pk = 750000, and then again, if that row is there, with pk = 875000. If 750000 is not there, the INSERTs must have stopped earlier, so check pk = 625000. In this case the process stops after about 20 steps.
It's just plain manual divide and conquer.
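If you would rather not halve by hand, here is a sketch of the same divide and conquer as a T-SQL loop; the table and column names are assumptions, and it relies on PKs 1..1000000 having been inserted in order:
DECLARE @lo INT = 0, @hi INT = 1000000, @mid INT;

-- invariant: pk = @lo exists (or nothing was inserted at all),
-- and everything above @hi is known to be missing
WHILE @lo < @hi
BEGIN
    SET @mid = (@lo + @hi + 1) / 2;
    IF EXISTS (SELECT 1 FROM myTable WHERE pk = @mid)
        SET @lo = @mid;       -- the inserts got at least this far
    ELSE
        SET @hi = @mid - 1;   -- they stopped before @mid
END

SELECT @lo AS last_inserted_pk;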
There is a way.
Unfortunately you have to set it up in advance for it to help you.
So if, by any chance, you still have the PRIMARY KEYs you inserted at hand, go ahead and delete all rows that have those keys:
DELETE FROM tableName WHERE ID IN (id1, id2, ...., idn)
Then you enable Change Data Capture for your database (with the database already selected):
EXEC sys.sp_cdc_enable_db;
Now you also need to enable Change Data Capture for that table; in an example that I tried, I could just run:
EXEC sys.sp_cdc_enable_table @source_schema = N'dbo', @source_name = N'tableName', @role_name = null
Now you are almost set up! Check your system services and verify that SQL Server Agent is running for your DBMS; if it is not running, capturing will not happen.
Now when you insert something into your table you can select data changes from a new table called [cdc].[dbo_tableName_CT]:
SELECT [__$start_lsn]
,[__$end_lsn]
,[__$seqval]
,[__$operation]
,[__$update_mask]
,[ID]
,[Value]
FROM [cdc].[dbo_tableName_CT]
GO
You can order by __$seqval; that should give you the order in which the rows were inserted.
NOTE: this feature seems not to be present in SQL Server Express

Row number in Sybase tables

Sybase DB tables do not have a concept of self-updating row numbers. However, for one of the modules I require a row number corresponding to each row in the database, such that max(column) always tells me the number of rows in the table.
I thought I'd introduce an int column and keep updating it to track the row number. However, I'm having problems updating this column on deletes. What SQL should I use in a delete trigger to maintain it?
You can easily assign a unique number to each row by using an identity column. The identity can be a numeric or an integer (in ASE12+).
This will almost do what you require. There are certain circumstances in which you will get a gap in the identity sequence (these are called "identity gaps"; the best discussion on them is here). Also, deletes will cause gaps in the sequence, as you've identified.
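A minimal sketch of the identity-column option (table and column names are assumptions):
-- ASE assigns rownum automatically on insert
CREATE TABLE myTable
( rownum NUMERIC(10,0) IDENTITY,
  data   VARCHAR(40)
)

INSERT INTO myTable (data) VALUES ('first')   -- rownum = 1
INSERT INTO myTable (data) VALUES ('second')  -- rownum = 2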
Why do you need to use max(col) to get the number of rows in the table, when you could just use count(*)? If you're trying to get the last row from the table, then you can do
select * from table where column = (select max(column) from table).
Regarding the delete trigger to update a manually managed column: I think this would be a potential source of deadlocks and many performance issues. Imagine you have 1 million rows in your table and you delete row 1; that's 999,999 rows you now have to update to subtract 1 from the id.
Delete trigger
CREATE TRIGGER tigger ON myTable FOR DELETE
AS
  -- shift each remaining id down by the number of deleted rows below it
  update t
  set id = id - (select count(*) from deleted d where d.id < t.id)
  from myTable t
To avoid locking problems
You could add an extra table (which joins to your primary table) like this:
CREATE TABLE rowCounter
(id int, -- foreign key to main table
rownum int)
... and use the rownum field from this table.
If you put the delete trigger on this table then you would hugely reduce the potential for locking problems.
Approximate solution?
Does the table need to keep its rownumbers up to date all the time?
If not, you could have a job which runs every minute or so, which checks for gaps in the rownum, and does an update.
Question: do the rownumbers have to reflect the order in which rows were inserted?
If not, you could do far fewer updates, but only updating the most recent rows, "moving" them into gaps.
Leave a comment if you would like me to post any SQL for these ideas.
I'm not sure why you would want to do this. You could experiment with using temporary tables and "select into" with an Identity column like below.
create table test
(
col1 int,
col2 varchar(3)
)
insert into test values (100, "abc")
insert into test values (111, "def")
insert into test values (222, "ghi")
insert into test values (300, "jkl")
insert into test values (400, "mno")
select rank = identity(10), col1 into #t1 from Test
select * from #t1
delete from test where col2="ghi"
select rank = identity(10), col1 into #t2 from Test
select * from #t2
drop table test
drop table #t1
drop table #t2
This would give you a dynamic id (of sorts).