I have a file in Linux; the file looks something like this:
(I have millions of rows)
date number name id state
20131110 1089 name1 123 start
20131110 1080 name2 122 start
20131110 1082 name3 121 start
20131114 1089 name1 120 end
20131115 1082 name3 119 end
And I have a table in Oracle with the following fields:
init_table
start_date
end_date
number
name
id
The problem: I read that I can load the data with SQL*Loader (and since I have millions of rows, creating a temporary table just to insert into and then updating the real table with a trigger is not a good option). Each number has a start date and, later in the file, an end date; for example, number 1089 has the start date 20131110 and the end date 20131114. So I need to insert the start_date into my table first, and later, when I find the end_date, update the row for the number I am loading (1089 in my example) with its end date (20131114).
How can I do this with a .ctl file, or with something else?
Can anyone help me? Thanks.
What version of Oracle?
I would use an external table. Define an external table that exactly matches your flat file. Then, you should be able to solve this with two passes, one insert, one update.
Something like this should do it:
insert into init_table select to_date(date,'YYYYMMDD'),null,number,name,id from external_table where state='start';
update init_table set end_date=(select to_date(date,'YYYYMMDD') from external_table where state='end' and init_table.number=external_table.number);
Note that you can't actually have columns named 'date' or 'number', so, the sql above isn't actually going to work as written. You'll have to change those column names.
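For completeness, here is a rough sketch of what that external table definition might look like (the directory object, file name, and the renamed columns are assumptions, per the note above about reserved words):
CREATE TABLE external_table (
  load_date VARCHAR2(8),
  num       NUMBER,
  name      VARCHAR2(30),
  id        NUMBER,
  state     VARCHAR2(10)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir      -- a directory object pointing at the folder holding the file
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY WHITESPACE
  )
  LOCATION ('yourfile.txt')       -- placeholder file name
)
REJECT LIMIT UNLIMITED;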
Hope that helps...
If you use an external table approach then you can join the data in the external table to itself to produce a single record that can then be inserted. Although the join is expensive, overall it ought to be an efficient process as long as the hash join I'd expect to be used stays in memory.
So something like ...
insert into init_table (
start_date,
end_date,
number,
name,
id)
select
s.date,
e.date,
s.number,
s.name,
s.id
from external_table s
join external_table e on s.number = e.number
where
s.state = 'start' and
e.state = 'end'
That assumes that there will always be an end date for every start date, and that the number does not already exist in the table -- if either of those conditions is not true then an outer join would be required in the former case, and a merge required in the latter.
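For what it's worth, here is a hedged sketch of the merge variant (the outer join also covers numbers that have no end record yet; the column names num and load_date are stand-ins, since DATE and NUMBER cannot be used as Oracle column names):
merge into init_table t
using (
  select
    to_date(s.load_date, 'YYYYMMDD') as start_date,
    to_date(e.load_date, 'YYYYMMDD') as end_date,
    s.num,
    s.name,
    s.id
  from external_table s
  left join external_table e
    on s.num = e.num and e.state = 'end'
  where s.state = 'start'
) x
on (t.num = x.num)
when matched then update set
  end_date = x.end_date
when not matched then insert
  (start_date, end_date, num, name, id)
values
  (x.start_date, x.end_date, x.num, x.name, x.id);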
$ cat sqlldrNew.ctl
load data
infile '/home/Sameer/employee.txt'
into table employee
fields terminated by X'9'
( date, number, name, id, state ) -- for number, the idea is: select number from employee where name = [name from the file record]
$ sqlldr scott/tiger control=/home/Sameer/sqlldrNew.ctl
I think this should work.
In Google BigQuery, I would like to delete a subset of records, based on the value of a specific column. It's a query that I need to run repeatedly and that I would like to run automatically.
The problem is that this specific column is of the form STRUCT<column_1 ARRAY<STRING>, column_2 ARRAY<STRING>, ... >, and I don't know how to use such a column in the WHERE clause of a DELETE statement.
Here is basically what I am trying to do (this code does not work):
DELETE
FROM dataset.table t
LEFT JOIN UNNEST(t.category.column_1) AS type
WHERE t.partition_date = '2020-07-22'
AND type = 'some_value'
The error that I'm getting is: Syntax error: Expected end of input but got keyword LEFT at [3:1]
If I replace the DELETE with SELECT *, it does work:
SELECT *
FROM dataset.table t
LEFT JOIN UNNEST(t.category.column_1) AS type
WHERE t.partition_date = '2020-07-22'
AND type = 'some_value'
Does somebody know how to use such a column to delete a subset of records?
EDIT:
Here is some code to create a reproducible example with some silly data (fill in your own dataset and table name in all queries):
Suppose you want to delete all rows where category.type contains the value 'food'.
1 - create a table:
CREATE TABLE <DATASET>.<TABLE_NAME>
(
article STRING,
category STRUCT<
color STRING,
type ARRAY<STRING>
>
);
2 - Insert data into the new table:
INSERT <DATASET>.<TABLE_NAME>
SELECT "apple" AS article, STRUCT('red' AS color, ['fruit','food'] as type) AS category
UNION ALL
SELECT "cabbage" AS article, STRUCT('blue' AS color, ['vegetable', 'food'] as type) AS category
UNION ALL
SELECT "book" AS article, STRUCT('red' AS color, ['object'] as type) AS category
UNION ALL
SELECT "dog" AS article, STRUCT('green' AS color, ['animal', 'pet'] as type) AS category;
3 - Show that select works (return all rows where category.type contains the value 'food'; these are the rows I want to delete):
SELECT *
FROM <DATASET>.<TABLE_NAME>
LEFT JOIN UNNEST(category.type) type
WHERE type = 'food'
Initial Result
4 - My attempt at deleting rows where category.type contains 'food' does not work:
DELETE
FROM <DATASET>.<TABLE_NAME>
LEFT JOIN UNNEST(category.type) type
WHERE type = 'food'
Syntax error: Unexpected keyword LEFT at [3:1]
Desired Result
This is the code I used to delete the desired records (the records where category.type contains the value 'food'.)
DELETE
FROM <DATASET>.<TABLE_NAME> t1
WHERE EXISTS(SELECT 1 FROM UNNEST(t1.category.type) t2 WHERE t2 = 'food')
The embarrassing thing is that I've seen this kind of answer on similar questions (for example on UPDATE queries). But I come from Oracle SQL, where I thought you were required to connect the subquery to the main query in the subquery's WHERE clause (i.e. connect t1 with t2), so I didn't understand those answers. That's why I posted this question.
However, I learned that BigQuery automatically understands how to connect table t1 and 'table' t2; you don't have to explicitly connect them.
Now it is possible to still do this (perhaps even recommended?):
DELETE
FROM <DATASET>.<TABLE_NAME> t1
WHERE EXISTS (SELECT 1 FROM <DATASET>.<TABLE_NAME> t2 LEFT JOIN UNNEST(t2.category.type) AS type WHERE type = 'food' AND t1.article=t2.article)
but a second difficulty for me was that the ID in my actual data is buried inside an array-of-structs construction, so I got stuck connecting t1 and t2. Fortunately, that explicit connection is not always strictly necessary.
Since you did not provide any sample data I am going to explain using some dummy data. In case you add your sample data, I can update the answer.
First, according to your description, you have only a STRUCT, not an ARRAY<STRUCT<col_1, col_2>>. For this reason, you do not need to use UNNEST to access the values within the data. Below is an example of how to access a particular field within a STRUCT.
WITH data AS (
SELECT 1 AS id, STRUCT("Alex" AS name, 30 AS age, "NYC" AS city) AS info UNION ALL
SELECT 1 AS id, STRUCT("Leo" AS name, 18 AS age, "Sydney" AS city) AS info UNION ALL
SELECT 1 AS id, STRUCT("Robert" AS name, 25 AS age, "Paris" AS city) AS info UNION ALL
SELECT 1 AS id, STRUCT("Mary" AS name, 28 AS age, "London" AS city) AS info UNION ALL
SELECT 1 AS id, STRUCT("Ralph" AS name, 45 AS age, "London" AS city) AS info
)
SELECT * FROM data
WHERE info.city = "London"
Notice that the STRUCT is named info and the field we accessed is city, which is then used in the WHERE clause.
Now, in order to delete the rows that contain a specific value within the STRUCT (in your case I assume it would be your_struct.column_1), you can use either DELETE, or MERGE with a DELETE clause. I have saved the above data in a table to run the examples below, which produce the same output.
First method: DELETE
DELETE FROM `project.dataset.table`
WHERE info.city = "Sydney"
Second method: MERGE and DELETE
MERGE `project.dataset.table` a
USING (SELECT * FROM `project.dataset.table` WHERE info.city = "Sydney") b
ON a.info.city = b.info.city
WHEN MATCHED AND b.id = 1 THEN
  DELETE
And the output for both queries:
Row id info.name info.age info.city
1 1 Alex 30 NYC
2 1 Robert 25 Paris
3 1 Ralph 45 London
4 1 Mary 28 London
As you can see the row where info.city = "Sydney" was deleted in both cases.
It is important to point out that the deleted rows are permanently removed from your source table, so you should be careful.
Note: since you want to run this process every day, you could use a scheduled query within the BigQuery console, appending or overwriting the results after each run. Also, it is good practice not to delete data from your source table; consider instead creating a new table from your source table without the rows you do not want, as sketched below.
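A minimal sketch of that "new table instead of DELETE" approach, reusing the example schema from the question (the _filtered table name is just a placeholder):
CREATE OR REPLACE TABLE <DATASET>.<TABLE_NAME>_filtered AS
SELECT *
FROM <DATASET>.<TABLE_NAME> t
WHERE NOT EXISTS (SELECT 1 FROM UNNEST(t.category.type) x WHERE x = 'food');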
[UPDATE: 2/20/19]
I figured out a pretty trivial solution to this problem.
CREATE TRIGGER TriggerClaims_History on Claims
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON
INSERT INTO Claims_History
SELECT name, status, claim_date
FROM Claims
EXCEPT SELECT name, status, claim_date FROM Claims_History
END
GO
I am standing up a SQL Server database for a project I am working on. Important info: I have 3 tables - enrollment, cancel, and claims. There are files located on a server that populate these tables every day. These files are NOT deltas (i.e. each new file placed on server every day contains data from all previous files) and because of this, I am able to simply drop all tables, create tables, and then populate tables from files each day. My question is regarding my claims table - since tables will be dropped and created each night, I need a way to keep track of all the different status changes.
I'm struggling to figure out the best way to go about this.
I was thinking of creating a claims_history table that is NOT dropped each night. Essentially I'd want my claims_history table to be populated each time an initial new record is added to the claims table. Then I'd want to scan the claims table and add a row to the claims_history table if and only if there was a change in the status column (i.e. claims.status != claims_history.status).
Day 1:
select * from claims
id | name | status
1 | jane doe | received
select * from claims_history
id | name | status | timestamp
1 | jane doe | received | datetime
Day 2:
select * from claims
id | name | status
1 | jane doe | processed
select * from claims_history
id | name | status | timestamp
1 | jane doe | received | datetime
1 | jane doe | processed | datetime
Is there a SQL script that can do this? I'd also like the timestamp field in the claims_history table to be populated automatically each time a new row is added (status change). I know I could write a Python script to handle something like this, but I'd like to keep it in SQL if at all possible. Thank you.
According to your question, you need to create a trigger that fires after an update of the claims.status column. It is very simple to do; see this link for an example of how to create a simple SQL Server trigger.
Then, since manipulating datetime values in queries can be a pain, I would suggest storing Unix time instead of a datetime: keep the date as a number (BIGINT). To get the current time you can simply use SELECT UNIX_TIMESTAMP() (that function is MySQL's; the SQL Server equivalent is DATEDIFF(SECOND, '1970-01-01', GETUTCDATE())).
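A minimal sketch of the trigger this answer describes (the column names are assumptions taken from the question, and id is assumed to identify a claim):
CREATE TRIGGER TriggerClaims_StatusChange ON Claims
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- log a history row only when the status actually changed
    INSERT INTO Claims_History (name, status, claim_date)
    SELECT i.name, i.status, i.claim_date
    FROM inserted i
    JOIN deleted d ON d.id = i.id
    WHERE ISNULL(d.status, '') <> ISNULL(i.status, '');
END
GO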
A very common approach is to use a staging table and a production (or final) table. All your ETLs truncate and load the staging table (volatile), and then you execute a stored procedure that adds only the new records to your final table. This requires that all the data you handle this way has some form of key that uniquely identifies a row.
What happens if your files suddenly change format or are badly formatted? You would drop your table and not be able to load it back until you fix your ETL. This approach saves you from that, since the process will fail while loading the staging table and won't impact the final table. You can also keep deleted records for historical reasons instead of losing them.
I prefer to separate the staging tables into their proper schema, for example:
CREATE SCHEMA Staging
GO
CREATE TABLE Staging.Claims (
ID INT,
Name VARCHAR(100),
Status VARCHAR(100))
Now you do all your loads from your files into these staging tables, truncating them first:
TRUNCATE TABLE Staging.Claims
BULK INSERT Staging.Claims
FROM '\\SomeFile.csv'
WITH
--...
Once this table is loaded you execute a specific SP that applies the delta between the staging content and your final table. You can add whatever logic you want here, such as inserting only new records, or updating records that already exist but have changed. For example:
CREATE TABLE dbo.Claims (
ClaimAutoID INT IDENTITY PRIMARY KEY,
ClaimID INT,
Name VARCHAR(100),
Status VARCHAR(100),
WasDeleted BIT DEFAULT 0,
ModifiedDate DATETIME,
CreatedDate DATETIME DEFAULT GETDATE())
GO
CREATE PROCEDURE Staging.UpdateClaims
AS
BEGIN
BEGIN TRY
BEGIN TRANSACTION
-- Update changed values
UPDATE C SET
Name = S.Name,
Status = S.Status,
ModifiedDate = GETDATE()
FROM
Staging.Claims AS S
INNER JOIN dbo.Claims AS C ON S.ID = C.ClaimID -- This has to be by the key columns
WHERE
ISNULL(C.Name, '') <> ISNULL(S.Name, '') OR
ISNULL(C.Status, '') <> ISNULL(S.Status, '')
-- Insert new records
INSERT INTO dbo.Claims (
ClaimID,
Name,
Status)
SELECT
ClaimID = S.ID,
Name = S.Name,
Status = S.Status
FROM
Staging.Claims AS S
WHERE
NOT EXISTS (SELECT 'not yet loaded' FROM dbo.Claims AS C WHERE S.ID = C.ClaimID) -- This has to be by the key columns
-- Mark deleted records as deleted
UPDATE C SET
WasDeleted = 1,
ModifiedDate = GETDATE()
FROM
dbo.Claims AS C
WHERE
NOT EXISTS (SELECT 'not anymore on files' FROM Staging.Claims AS S WHERE S.ID = C.ClaimID) -- This has to be by the key columns
COMMIT
END TRY
BEGIN CATCH
DECLARE @v_ErrorMessage VARCHAR(MAX) = ERROR_MESSAGE()
IF @@TRANCOUNT > 0
ROLLBACK
RAISERROR (@v_ErrorMessage, 16, 1)
END CATCH
END
This way you always work with dbo.Claims and the records are never lost (just updated or inserted).
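As a usage note, after each nightly bulk load into Staging.Claims the job would simply execute the procedure defined above:
EXEC Staging.UpdateClaims;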
If you need to check the last status of a particular claim you can create a view:
CREATE VIEW dbo.vClaimLastStatus
AS
WITH ClaimsOrdered AS
(
SELECT
C.ClaimAutoID,
C.ClaimID,
C.Name,
C.Status,
C.ModifiedDate,
C.CreatedDate,
DateRanking = ROW_NUMBER() OVER (PARTITION BY C.ClaimID ORDER BY C.CreatedDate DESC)
FROM
dbo.Claims AS C
)
SELECT
C.ClaimAutoID,
C.ClaimID,
C.Name,
C.Status,
C.ModifiedDate,
C.CreatedDate
FROM
ClaimsOrdered AS C
WHERE
DateRanking = 1
I have a table set up in the following manner.
CustomerNumber June15_Order June15_Billing July15_Order July15_Billing August15_Order August15_Billing
12345 5 55 3 45
5431 6 66 5 67
I would prefer it to be:
CustomerNumber Date Order Billing
12345 01/06/2015 5 55
12345 01/07/2015 3 45
5431 01/06/2015 6 66
5431 01/07/2015 5 67
Any thoughts as to how I would accurately transpose this table?
If you're just trying to get the old data into the new, you'll basically need to use brute force:
INSERT INTO NewTable
(CustomerNumber, [Date], [Order], Billing)
(
SELECT CustomerNumber, '06/15/2015', June15_Order, June15_Billing
FROM OldTable
UNION
SELECT CustomerNumber, '07/15/2015', July15_Order, July15_Billing
FROM OldTable
UNION
SELECT CustomerNumber, '08/15/2015', August15_Order, August15_Billing
FROM OldTable
)
Presuming there can be columns for any month and any year, this gets ugly really fast. If the columns are fixed and hard-coded, use @John Pasquet's solution (+1). If you need the ability to work with any set of columns of the form MonthYY_Type, here's an outline.
First pass:
Write a SELECT... UNPIVOT... query to transform the table
Map the resulting "label" column to a Date datatype and a "Type" (Order, Billing)
However, mapping result set column names of "July15" to "Jul 1, 2015" (or 01/07/2015) is hard, if not crazy hard. This leads to a second pass:
Build a "lookup" list of columns from sys.tables and sys.columns
Pick out those that are to be unpivoted
Figure out the dates and types for each of them
Build the SELECT... UNPIVOT... in dynamic SQL, dumping the results to a temp table
Join this temp table to the lookup list by original column name, which (via the join) gets you the prepared date and type values
Seriously, this could get ridiculously complex. The smart money is on rebuild the tables with columns for date and type.
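For what it's worth, here is a rough sketch of the static UNPIVOT shape that the dynamic SQL would have to generate; the table name OldTable and the column list are assumptions taken from the sample data in the question:
SELECT
    u.CustomerNumber,
    [Date] = CASE LEFT(u.ColName, CHARINDEX('_', u.ColName) - 1)
                 WHEN 'June15'   THEN CONVERT(date, '2015-06-01')
                 WHEN 'July15'   THEN CONVERT(date, '2015-07-01')
                 WHEN 'August15' THEN CONVERT(date, '2015-08-01')
             END,
    [Type] = SUBSTRING(u.ColName, CHARINDEX('_', u.ColName) + 1, 20),   -- 'Order' or 'Billing'
    u.Val
FROM OldTable
UNPIVOT (Val FOR ColName IN (
    June15_Order, June15_Billing,
    July15_Order, July15_Billing,
    August15_Order, August15_Billing)) AS u;
Getting Order and Billing back onto a single row per customer and date would then take one more conditional aggregation (or PIVOT) on top of this.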
First create a new table with the desired structure; after that you will need a stored procedure for the task, which will iterate over all rows.
For the columns that map directly from old to new, just take the value and save it in a variable. For the others you will need a condition per month (like "column name contains June") and save the result in two variables, date and value. Then, each time you find a month with a value > 0, perform an insert into the new table with all the variables.
I am working on SQL Server and I want to assign unique IDs to rows being pulled from three tables, but the IDs should not overlap.
Let's say table one contains car data, table two contains house data, and table three contains city data. I want to pull all this data into a single table with a unique ID for each row, say cars from 1-100, houses from 101-200 and cities from 300-400.
How can I achieve this using only SELECT queries? I can't use INSERT statements.
To be more precise,
I have one table with computer system/server host information, which has IDs from 500-700.
I have two other tables: storage devices (IDs from 200-600) and routers (IDs from 700-900). I have already collected the systems data. Now I want to pull the storage and router data in such a way that the consolidated data on my end has a unique ID for every record. This needs to be done only by using SELECT queries.
I was using SELECT ABS(CAST(CAST(NEWID() AS VARBINARY) AS INT)) AS UniqueID and storing it in temp tables (separate for storage and routers). But I believe that this may lead to some overlapping. Please suggest any other way to do this.
An extension to this question:
Creating a consistent integer from a string:
All I have is various strings like this
String1
String2Hello123
String3HelloHowAreYou
I need to convert them into positive integers, something like:
String1 = 12
String2Hello123 = 25
String3HelloHowAreYou = 4567
Note that I am not expecting the numbers in any particular order. The only requirement is that the number generated for one string must not conflict with the number generated for another.
Now, later, after a reboot, suppose I no longer have the 2nd string and instead there is a new string:
String1 = 12
String3HelloHowAreYou = 4567
String2Hello123HowAreyou = 28
Note that the number 25 generated earlier for the 2nd string cannot be reused for the new string.
Using extra storage (temp tables) is not allowed
If you don't care which table the data comes from:
with dat as (
select 't1' src, id from table1
union all
select 't2' src, id from table2
union all
select 't3' src, id from table3
)
select *
, id2 = row_number() over( order by _some_column_ )
from dat
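If the fixed, non-overlapping ranges from the question matter, a hedged variation is to number the rows within each source and add an arbitrary per-source offset (the offsets below assume each table has fewer than 1000 rows):
with dat as (
  select 't1' src, id from table1
  union all
  select 't2' src, id from table2
  union all
  select 't3' src, id from table3
)
select *
     , id2 = case src when 't1' then 0 when 't2' then 1000 else 2000 end
             + row_number() over( partition by src order by id )
from dat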
Is there a way to know the nth-from-last id in a table without scanning it completely? (Just go to the end of the table and read the id value.)
table
id fieldvalue
1 2323
2 4645
3 556
... ...
100000000 1232
So, for example, here n = 100000000 (100 million).
EDIT:
So which one of the proposed queries would be more efficient?
SELECT MAX(id) FROM <tablename>
Assuming ID is the IDENTITY for the table, you could use SELECT IDENT_CURRENT('TABLE NAME').
See here for more info.
One thing to note about this approach: If you have INSERTs that fail but increment the IDENTITY counter, then you will get back a result that is higher than the result returned by SELECT MAX(id) FROM <tablename>
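A quick way to see that difference on a given table (the table name is a placeholder):
SELECT IDENT_CURRENT('dbo.YourTable')       AS last_identity_value,
       (SELECT MAX(id) FROM dbo.YourTable)  AS max_id_in_table;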
You can also use the system catalog views to get the last values of all identity columns in the database:
select
OBJECT_NAME(object_id) + '.' + name as col_name
, last_value
from
sys.identity_columns
order by last_value desc
In the case where table1 rows are inserted first, and then rows that depend on ids from table1 are inserted into table2, you can use a SELECT inside the INSERT:
INSERT INTO `table2` (`some_id`, `some_value`)
VALUES ((SELECT some_id
FROM `table1`
WHERE `other_key_1` = 'xxx'
AND `other_key_2` = 'yyy'),
'some value abc abc 123 123 ...');
Of course, this can work only if there are other identifiers that can uniquely identify rows from table1
First of all, you want to access the table in DESCENDING order by ID.
Then you would select the TOP N records.
At this point, you want the last record of the set which hopefully is obvious. Assuming that the id field is indexed, this would at most retrieve the last N records of the table and most likely would end up being optimized into a single record fetch.
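A minimal sketch of that approach in T-SQL, assuming the table and column names from the question and hard-coding N to keep it short:
SELECT TOP (1) id
FROM (
    SELECT TOP (100000000) id     -- N = 100 million, as in the question
    FROM your_table
    ORDER BY id DESC
) AS last_n
ORDER BY id ASC;                  -- the last record of that set, i.e. the nth id from the end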
Select Ident_Current('Your Table Name') gives the last Id of your table.