Insert into a BigQuery table with a template suffix using the web-ui - google-bigquery

Using the BigQuery streaming API, it's possible to partition tables with a template suffix:
<targeted_table_name> + <templateSuffix>
e.g. targettable_suffix
How can this be done from the web UI in an INSERT statement?
For example:
insert into `project123.dataset123.targettable_suffix`
(`id`, `value`) values ('123', 'abc')
(A table called targettable exists, but the suffixed table has not yet been created.)

The INSERT statement requires that the target table has been created in advance. Use a CREATE TABLE statement first to create it, then use INSERT. Note, however, that the BigQuery team strongly recommends using partitioned tables instead of multiple tables that share a prefix; if you use a partitioned table, you only need to create it once.
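For example, assuming the suffixed table should have the two STRING columns implied by the question (the column types are an assumption here):
CREATE TABLE `project123.dataset123.targettable_suffix` (
    id STRING,
    value STRING
);
INSERT INTO `project123.dataset123.targettable_suffix` (id, value)
VALUES ('123', 'abc');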

Related

How to append unique values from temp_tbl into original_tbl (SQL Server)?

I have a table that I'm trying to append unique values to. Every month I get a list of user logins to import into this table. I would like to keep all the original values and just append the new, unique values onto the existing table. Both the table and the flat file have a single column with unique values, built like this:
_____
login
abcde001
abcde002
...
_____
I'm bulk ingesting the flat file into a temp table, with this:
IF OBJECT_ID('tempdb..#FLAT_FILE_TBL') IS NOT NULL
    DROP TABLE #FLAT_FILE_TBL;

CREATE TABLE #FLAT_FILE_TBL
(
    ntlogin2 nvarchar(15)
);

BULK INSERT #FLAT_FILE_TBL
FROM 'C:\ImportFiles\logins_Dec2021.csv'
WITH (FIELDTERMINATOR = ' ');
Is there a join that would give me the table with the existing values plus the new unique values appended? I'd rather not hard-code a loop to evaluate it line by line.
Something like (pseudocode):
append unique {login} from temp_tbl into original_tbl
Hopefully it's an easy answer for someone out there.
Thanks!
A poster on Reddit's r/sql provided this answer, which I'm pursuing:
Merge statement?
It looks like using a MERGE statement will do exactly what I want. Thanks to those who already posted replies.
You can check whether a record exists using an EXISTS clause and insert only if it doesn't exist in the target table. You can also use a MERGE statement to achieve the same. Depending on what you want to do to the existing records in the target table, you can modify the MERGE statement; since you only want to insert new records here, you need to specify only what happens when a new record comes in. Here is an example:
MERGE original_tbl T
USING temp_tbl S
    ON T.login = S.login
WHEN NOT MATCHED THEN
    INSERT (login)
    VALUES (S.login);
Another solution would be to left join the target table to the temp table and insert only when the record doesn't exist.
INSERT INTO original_tbl (login)
SELECT S.login
FROM temp_tbl S
LEFT JOIN original_tbl T
    ON S.login = T.login
WHERE T.login IS NULL;
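For completeness, the EXISTS approach mentioned above is a third equivalent option; a minimal sketch against the same two tables:
INSERT INTO original_tbl (login)
SELECT S.login
FROM temp_tbl S
WHERE NOT EXISTS (
    SELECT 1
    FROM original_tbl T
    WHERE T.login = S.login
);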

How can I INSERT data into two tables simultaneously with only one sql script db2?

How would I insert into multiple tables with one SQL script in DB2?
For example, insert a row into T1 DOCK_DOOR and then insert into T2 DOCK_DOOR_LANE multiple times, based on the dock_door_sysid from the first table.
My first approach, shown below, was to use a WITH clause containing three inserts. On the other hand, doing two separate inserts on the second table is not an option if this can be automated with one insert.
Thanks for any feedback.
SQL example:
WITH ins AS (
    INSERT INTO DBF1.DOCK_DOOR (DOCK_DOOR_SYSID, DOOR_NUMBER, DOOR_NAME, DOCK_SYSID, DOOR_SEQ, ENCRYPTION_CODE, RFID_ENBLD_FLAG, LANES_COUNT, CMNT_TEXT, CREATE_TS, CREATE_USERID, UPDATE_TS, UPDATE_USERID, VER_NUMBER, ACTIVE_FLAG, STATUS_SYSID, DOOR_TYPE_SYSID)
    VALUES (NEXTVAL FOR DBF1.DOCK_DOOR_SEQ, '026', 'DOOR025', 61, 25, NULL, 'N', '2', NULL, CURRENT TIMESTAMP, 'SQL_INSERT', CURRENT TIMESTAMP, 'SQL_INSERT', 0, NULL, 1723, 1142)
    RETURNING door_number, dock_door_sysid),
ins2 AS (
    INSERT INTO SIT.DOCK_DOOR_LANE (DOCK_DOOR_LANE_SYSID, DOOR_LANE_ID, DOCK_DOOR_SYSID, LANE_ID, CREATE_TS, CREATE_USERID, UPDATE_TS, UPDATE_USERID, VER_NUMBER)
    VALUES (NEXTVAL FOR DBF1.DOCK_DOOR_LANE_SEQ, door_number || '' || 'A', dock_door_sysid, 'A', CURRENT TIMESTAMP, 'SQL_INSERT', CURRENT TIMESTAMP, 'SQL_INSERT', 0)
    SELECT door_number, dock_door_sysid FROM DBF1.DOCK_DOOR
    RETURNING door_number, dock_door_sysid)
INSERT INTO DBF1.DOCK_DOOR_LANE (DOCK_DOOR_LANE_SYSID, DOOR_LANE_ID, DOCK_DOOR_SYSID, LANE_ID, CREATE_TS, CREATE_USERID, UPDATE_TS, UPDATE_USERID, VER_NUMBER)
VALUES (NEXTVAL FOR DBF1.DOCK_DOOR_LANE_SEQ, door_number || '' || 'B', dock_door_sysid, 'B', CURRENT TIMESTAMP, 'SQL_INSERT', CURRENT TIMESTAMP, 'SQL_INSERT', 0)
SELECT door_number, dock_door_sysid FROM DBF1.DOCK_DOOR;
Table 1 = dock_door
Table 2 = Dock_door_lane
You could do it with a trigger on the dock_door table.
However, if you're on a recent version of IBM i, you might be able to make use of a data-change-table-reference.
Your statement would look something like this:
insert into dock_door_lane
select <....>
from final table (insert into dock_door <...>)
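Filled in with the column lists and values from the question, that might look roughly like the following (untested; the concatenation of door_number and the lane letter mirrors the original attempt, and one such statement would be needed per lane):
INSERT INTO DBF1.DOCK_DOOR_LANE
    (DOCK_DOOR_LANE_SYSID, DOOR_LANE_ID, DOCK_DOOR_SYSID, LANE_ID,
     CREATE_TS, CREATE_USERID, UPDATE_TS, UPDATE_USERID, VER_NUMBER)
SELECT NEXTVAL FOR DBF1.DOCK_DOOR_LANE_SEQ, T.DOOR_NUMBER || 'A', T.DOCK_DOOR_SYSID, 'A',
       CURRENT TIMESTAMP, 'SQL_INSERT', CURRENT TIMESTAMP, 'SQL_INSERT', 0
FROM FINAL TABLE (
    INSERT INTO DBF1.DOCK_DOOR
        (DOCK_DOOR_SYSID, DOOR_NUMBER, DOOR_NAME, DOCK_SYSID, DOOR_SEQ,
         ENCRYPTION_CODE, RFID_ENBLD_FLAG, LANES_COUNT, CMNT_TEXT, CREATE_TS,
         CREATE_USERID, UPDATE_TS, UPDATE_USERID, VER_NUMBER, ACTIVE_FLAG,
         STATUS_SYSID, DOOR_TYPE_SYSID)
    VALUES (NEXTVAL FOR DBF1.DOCK_DOOR_SEQ, '026', 'DOOR025', 61, 25, NULL, 'N', '2',
            NULL, CURRENT TIMESTAMP, 'SQL_INSERT', CURRENT TIMESTAMP, 'SQL_INSERT',
            0, NULL, 1723, 1142)
) T;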
I'm not sure it will work, as this article indicates that, at least a couple of years ago, DB2 for i didn't support the secondary insert required.
This old SO question also seems to confirm that, at least as of v7.1, the double insert isn't supported.
If I get a chance, I'll run a test on a 7.2 system Monday.

How to insert multiple JSON files into postgresql table at a time?

I have multiple JSON files; they all have the same format, but the values differ per transaction. I want to migrate this data to a PostgreSQL table. What is the best way to proceed?
Right now, I am using the following query:
CREATE TABLE TEST (MULTIPROCESS VARCHAR(20), HTTP_REFERER VARCHAR(50));
INSERT INTO TEST SELECT MULTIPROCESS, HTTP_REFERER FROM json_populate_record(NULL::test, '{"multiprocess": true,"http_referer": "http://localhost:9000/"}');
But once the number of files becomes large, it becomes very difficult to use this technique. Is there any other way to do this effectively?
You could use a LATERAL join to insert more than one row at a time:
WITH json AS (
    VALUES ('{"multiprocess": true,"http_referer": "http://localhost:9000"}')
         , ('{"multiprocess": false,"http_referer": "http://localhost:9001/"}')
         , ('{"multiprocess": true,"http_referer": "http://localhost:9002/"}')
)
INSERT INTO test
SELECT multiprocess, http_referer
FROM json, LATERAL json_populate_record(NULL::test, json.column1::json);
Or you could insert into a staging table first and then populate your other table.
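A minimal sketch of that staging-table route (the raw_json table and the \copy loading step are assumptions, and \copy only works here if each JSON file is a single line with no tabs or backslashes):
CREATE TABLE raw_json (doc jsonb);

-- From psql, load one file per row (repeat or script per file):
-- \copy raw_json (doc) FROM 'transaction1.json'

INSERT INTO test
SELECT multiprocess, http_referer
FROM raw_json, LATERAL jsonb_populate_record(NULL::test, doc);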

Insert overwrite in Hive

I am trying to use INSERT OVERWRITE in Hive. Basically, I would like to overwrite not the complete partition but only a few records in the partition. I am not finding any way to do it (nor any way to insert overwrite into the destination table based on a filter on a non-partition column).
Is there any way I can achieve this?
Hive is not a regular RDBMS. If you want to update records, simply do INSERT OVERWRITE TABLE table_name ...: stage your changed data in a temporary table (or with a WITH clause) and then insert overwrite. Combined with table partitioning, this is safe.
QUERY [HIVE]:
WITH TEMP_TABLE AS (SELECT * FROM SOURCE_TABLE_NAME)
INSERT OVERWRITE TABLE TARGET_TABLE_NAME
SELECT * FROM TEMP_TABLE;
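To rewrite only a few records, the same pattern can union the changed rows with the untouched ones before overwriting; a sketch with made-up table and column names:
WITH changed AS (
    SELECT id, 'new_value' AS val FROM target_table WHERE id = 123
),
untouched AS (
    SELECT id, val FROM target_table WHERE id <> 123
)
INSERT OVERWRITE TABLE target_table
SELECT * FROM changed
UNION ALL
SELECT * FROM untouched;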
Hive is not an RDBMS, and what you are trying to achieve is not recommended with Hive; Hive is better suited to batch processing over very large sets of immutable data.
However, from what I can deduce, you are trying to update an existing record in your table. To do so, enable ACID support on the table that needs to be updated, and your update queries will start working.
UPDATE <TABLE>
SET <COL1> = 'Value1',
    <COL2> = 'Value2'
WHERE <some condition that only selects the rows you need updated>;
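For reference, enabling ACID typically involves transaction settings plus a transactional ORC table; the exact properties vary by Hive version, so treat this as a sketch with made-up names:
SET hive.support.concurrency = true;
SET hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;

CREATE TABLE target_table (id INT, val STRING)
CLUSTERED BY (id) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional' = 'true');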

SQL insert into 2 tables in one query

I have the following query in SQLRPGLE for DB2:
INSERT INTO ITEMS2 (PROGRAM, VLDFILE, VLDFLD, SELFILE, SELFLD)
VALUES (:SCAPP, 'CSTMR', 'CYC', 'BYC', 'BYCC');
I would like this query to be run in two libraries, as in FIRST/ITEMS2 and SECOND/ITEMS2,
where FIRST and SECOND are the library names. Can this be achieved in one query?
For those who have no understanding of iSeries: the above insert statement would be similar to having an insert query for two tables.
The INSERT statement does not support inserting into multiple tables.
However you could create a trigger on FIRST/ITEMS2 to automatically insert/update/delete the record into SECOND/ITEMS2.
See the CREATE TRIGGER statement for more information.
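A sketch of such a trigger (the trigger name is made up, and SQL naming is assumed, hence the dotted qualifiers):
CREATE TRIGGER FIRST.ITEMS2_COPY
AFTER INSERT ON FIRST.ITEMS2
REFERENCING NEW AS N
FOR EACH ROW
INSERT INTO SECOND.ITEMS2 (PROGRAM, VLDFILE, VLDFLD, SELFILE, SELFLD)
VALUES (N.PROGRAM, N.VLDFILE, N.VLDFLD, N.SELFILE, N.SELFLD);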
If this will be run often, consider making the INSERT into a stored procedure, and then setting the target schema via SET SCHEMA:
set schema=first;
call my_insert_proc(:scapp);
set schema=second;
call my_insert_proc(:scapp);
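The procedure itself might look something like this (my_insert_proc is the hypothetical name from above; this assumes the unqualified ITEMS2 reference resolves against the current schema at call time, which depends on your naming convention):
CREATE OR REPLACE PROCEDURE my_insert_proc (IN p_scapp VARCHAR(10))
LANGUAGE SQL
BEGIN
    INSERT INTO ITEMS2 (PROGRAM, VLDFILE, VLDFLD, SELFILE, SELFLD)
    VALUES (p_scapp, 'CSTMR', 'CYC', 'BYC', 'BYCC');
END;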
You could create a QM query like this:
INSERT INTO &LIB/ITEMS2
(PROGRAM, VLDFILE, VLDFLD, SELFILE, SELFLD)
VALUES (&SCAPP, 'CSTMR', 'CYC', 'BYC', 'BYCC');
Then
STRQMQRY myQmQry SETVAR(('LIB' 'FIRSTLIB')('SCAPP' &VAR))
STRQMQRY myQmQry SETVAR(('LIB' 'SECONDLIB')('SCAPP' &VAR))
From IBM's syntax diagram for INSERT ( http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=%2Fdb2%2Frbafzbackup.htm ), I'd say you have to go with two queries.
But after executing the query the first time, you can try changing the current library ( http://publib.boulder.ibm.com/infocenter/iadthelp/v7r1/topic/com.ibm.etools.iseries.langref2.doc/chglibl.html ).