db2 snapshot at an instance - sql

I have a requirement where I would like to get a snapshot of the data at a given instant in the database.
For example:
At a given time T1, the EMP table in the DB has the following values:

Col1  Col2
----  ----
1     ABC
2     DEF
3     GHI
However, that data has since been modified by another resource (or resources).
So when I checked at time T2:

Col1  Col2
----  ----
1     LMN
2     PQR
3     XYZ

Is there any command available in DB2/Oracle (or any other database) where, if I provide a timestamp, I can retrieve the state of the data at that timestamp?
Thanks

Both DB2 and Oracle allow you to do that by issuing queries similar to
SELECT * FROM EMP AS OF <timestamp> WHERE ...
In DB2 the table must be set up as a system-period temporal table (as described here, for example) before you can query it like that.
In Oracle this is the Flashback Query feature; it relies on undo data, so the database's undo retention must cover the period you want to query.
In either case the corresponding feature must be in place before the data changes to allow you to query the original state of the data. You cannot make the table EMP system temporal today and query yesterday's state; you had to enable the feature yesterday.
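As a minimal sketch (assuming a recent DB2 release; the history table name and the timestamps are illustrative), setting EMP up as a system-period temporal table and then querying it as of T1 could look like this:

CREATE TABLE EMP (
  COL1 INT NOT NULL,
  COL2 VARCHAR(10),
  -- system-period columns maintained automatically by DB2
  SYS_START TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
  SYS_END   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
  TRANS_ID  TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
  PERIOD SYSTEM_TIME (SYS_START, SYS_END)
);

-- history table that receives the superseded row versions
CREATE TABLE EMP_HIST LIKE EMP;
ALTER TABLE EMP ADD VERSIONING USE HISTORY TABLE EMP_HIST;

-- after the changes at T2, retrieve the state as of T1
SELECT * FROM EMP FOR SYSTEM_TIME AS OF TIMESTAMP '2024-01-01 10:00:00';

The Oracle equivalent would be a Flashback Query: SELECT * FROM EMP AS OF TIMESTAMP TO_TIMESTAMP('2024-01-01 10:00:00', 'YYYY-MM-DD HH24:MI:SS').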

Related

Using Job schedule to input current date and time into columns

I have 2 databases, the first being Oracle and the second being SQL Server. I am able to connect the 2 databases together, and data sync between them is possible. However, I would like to know if I can automatically insert the database's system date and time upon each entry into the table.
For example, in this case, when the data gets transferred over from Oracle, I would like the TIMEUPDATE data to mirror the real timing in SQL Server and not the data from Oracle.
I have already scripted this to create the database. All the data in my Oracle database is successfully transferred to my SQL Server. The TimeUpdate column is created.
Sample data:
ID  NAME   TIMEUPDATE
--  -----  ---------------------------
1   John   2019-09-13 04:42:31.1320000
22  Mary   2019-09-09 04:42:43.6570000
3   Tommy  2019-09-17 04:42:47.0220000
4   Jill   2019-09-06 04:42:50.1170000
5   Sam    2019-09-25 04:42:51.9230000
Query:
SELECT *
INTO Customers
FROM OPENQUERY(NEWTABLE, 'SELECT ID, NAME, TIMEUPDATE FROM CUSTOMER')
Try this
SELECT *
INTO Customers
FROM OPENQUERY(NEWTABLE, 'SELECT ID, NAME, SYSDATE AS TIMEUPDATE FROM CUSTOMER')
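Note that SYSDATE in the pass-through query above is evaluated by Oracle, so the timestamp reflects the Oracle server's clock. If you specifically want SQL Server's clock, a hedged alternative (still assuming NEWTABLE is the linked server name, as above) is to add the column on the SQL Server side:

-- SYSDATETIME() runs on SQL Server and returns datetime2(7),
-- matching the precision of the sample TIMEUPDATE values
SELECT q.ID, q.NAME, SYSDATETIME() AS TIMEUPDATE
INTO Customers
FROM OPENQUERY(NEWTABLE, 'SELECT ID, NAME FROM CUSTOMER') AS q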

Adding column to sqlite database and distribute rows based on primary key

I have some data elements containing a timestamp and information about Item X sales related to this timestamp.
e.g.
timestamp | items X sold
------------------------
1         | 10
4         | 40
7         | 20
I store this data in an SQLite table. Now I want to add to this table, in particular when I get data about another item Y.
The item Y data may or may not share timestamps with the existing rows, but I want to insert it into the existing table so that it looks like this:
timestamp | items X sold | items Y sold
---------------------------------------
1         | 10           | 5
2         | NULL         | 10
4         | 40           | NULL
5         | NULL         | 3
7         | 20           | NULL
Later on, additional sales data (columns) must be added following the same scheme.
Is there an easy way to accomplish this with SQLite?
In the end I want to fetch data by timestamp and get an overview of which items were sold at that time. Most examples cover the use case of adding a complete row (one record), or a complete column that perfectly matches the other columns.
Or is SQLite the wrong tool altogether, and should I rather use CSV or Excel?
(I am using Python's sqlite3 package to create and manipulate the DB.)
Thanks!
Dynamically adding columns is not a good design. You could add a column using
ALTER TABLE your_table ADD COLUMN the_column_name TEXT
For existing rows the new column would be populated with NULLs, although you could specify a DEFAULT value, in which case the existing rows would be populated with that value.
e.g. the following demonstrates the above :-
DROP TABLE IF EXISTS soldv1;
CREATE TABLE IF NOT EXISTS soldv1 (timestamp INTEGER PRIMARY KEY, items_sold_x INTEGER);
INSERT INTO soldv1 VALUES(1,10),(4,40),(7,20);
SELECT * FROM soldv1 ORDER BY timestamp;
ALTER TABLE soldv1 ADD COLUMN items_sold_y INTEGER;
UPDATE soldv1 SET items_sold_y = 5 WHERE timestamp = 1;
INSERT INTO soldv1 VALUES(2,null,10),(5,null,3);
SELECT * FROM soldv1 ORDER BY timestamp;
resulting in the first query returning:

timestamp  items_sold_x
---------  ------------
1          10
4          40
7          20

and the second query returning:

timestamp  items_sold_x  items_sold_y
---------  ------------  ------------
1          10            5
2          NULL          10
4          40            NULL
5          NULL          3
7          20            NULL
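Had the new column been added with a DEFAULT instead, the pre-existing rows would have been filled with that value rather than NULL; a hedged variant of the ALTER statement (the default of 0 is illustrative):

ALTER TABLE soldv1 ADD COLUMN items_sold_y INTEGER DEFAULT 0;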
However, as stated, the above is not considered good design, as the schema is dynamic.
You could alternatively achieve the equivalent of the above by adding a new column that also becomes part of the primary key, or by prefixing/suffixing the timestamp with a type.
Consider, as an example, the following :-
DROP TABLE IF EXISTS soldv2;
CREATE TABLE IF NOT EXISTS soldv2 (type TEXT, timestamp INTEGER, items_sold INTEGER, PRIMARY KEY(timestamp,type));
INSERT INTO soldv2 VALUES('x',1,10),('x',4,40),('x',7,20);
INSERT INTO soldv2 VALUES('y',1,5),('y',2,10),('y',5,3);
INSERT INTO soldv2 VALUES('z',1,15),('z',2,5),('z',9,25);
SELECT * FROM soldv2 ORDER BY timestamp;
This replicates your original data and additionally adds another type (what would have been column items_sold_z in the first design) without changing the table's schema, and without the extra complication of having to UPDATE rather than INSERT, as was needed when applying timestamp 1 / items_sold_y = 5.
The result of the query is (rows that share a timestamp may appear in any order):

type  timestamp  items_sold
----  ---------  ----------
x     1          10
y     1          5
z     1          15
y     2          10
z     2          5
x     4          40
y     5          3
x     7          20
z     9          25
Or is SQLite the wrong tool altogether, and should I rather use CSV or Excel?
SQLite is a valid tool. Whatever you then do with the data can probably be done as easily as in Excel (perhaps more simply), and probably much more simply than trying to process the data in CSV format.
For example, say you wanted the total items sold per timestamp, and how many types were sold, then:
SELECT timestamp, count(items_sold) AS number_of_item_types_sold, sum(items_sold) AS total_sold FROM soldv2 GROUP BY timestamp ORDER BY timestamp;
would result in:

timestamp  number_of_item_types_sold  total_sold
---------  -------------------------  ----------
1          3                          30
2          2                          15
4          1                          40
5          1                          3
7          1                          20
9          1                          25
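If you later want the wide, one-column-per-item view for display, you can pivot the tall soldv2 table in a query rather than in the schema; a minimal sketch using conditional aggregation (the aliases are chosen to mirror soldv1's columns):

SELECT timestamp,
       sum(CASE WHEN type = 'x' THEN items_sold END) AS items_sold_x,
       sum(CASE WHEN type = 'y' THEN items_sold END) AS items_sold_y,
       sum(CASE WHEN type = 'z' THEN items_sold END) AS items_sold_z
FROM soldv2
GROUP BY timestamp
ORDER BY timestamp;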

Sequential update statements

When using multiple SETs in a single UPDATE query, like
update table set col1=value1,col2=col1
is there an order of execution that decides the outcome when the same column appears on both the left and the right of an equals sign? As far as I've tested, when a column is used to the right of an equals sign as a data source, its value from BEFORE the update is used, even if that column is assigned a new value elsewhere in the same statement (by being to the left of an equals sign).
I believe that SQL Server always uses the old values when performing an UPDATE. This would best be explained by showing some sample data for your table:
col1 | col2
1    | 3
2    | 8
3    | 10
update table set col1=value1,col2=col1
At the end of this UPDATE, the table should look like this:
col1   | col2
value1 | 1
value1 | 2
value1 | 3
This behavior for UPDATE is part of the ANSI-92 SQL standard, as this SO question discusses:
SQL UPDATE read column values before setting
Here is another link which discusses this problem with an example:
http://dba.fyicenter.com/faq/sql_server/Using_Old_Values_to_Define_New_Values_in_UPDATE_Statements.html
You can assume that, in general, SQL Server locks the affected rows during an UPDATE and evaluates every right-hand side against a snapshot of the old values, held for the entire UPDATE statement.
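A minimal runnable sketch of this behavior (the table name t is illustrative):

CREATE TABLE t (col1 INT, col2 INT);
INSERT INTO t (col1, col2) VALUES (1, 3), (2, 8), (3, 10);

-- every right-hand side sees the OLD values, so this even swaps the columns
UPDATE t SET col1 = col2, col2 = col1;

SELECT * FROM t;
-- returns (3, 1), (8, 2), (10, 3)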

Database trigger

I have no experience writing database triggers, but I need one in my current project.
My use case is the following: I have two tables, Table1 and Table2.
These tables have a 1:m relation.
If all records in Table1 that reference a row in Table2 have "VALUE2", then the value in that Table2 row should be updated to VALUE2.
So if the value of the record with ID 3 in Table1 is updated to VALUE2, then the value in Table2 should also be updated to VALUE2.
It would be great if someone could help me. Thanks a lot!
TABLE1:

ID  FK_Table2  VALUE
--  ---------  ----------
1   77         VALUE2
2   77         VALUE2
3   77         VALUE1
4   54         OTHERVALUE

TABLE2:

ID  VALUE
--  ------
77  VALUE1
So you need to learn and try a basic trigger first.
CREATE OR REPLACE TRIGGER trigger_name
AFTER UPDATE ON TABLE1
FOR EACH ROW
BEGIN
  /* trigger code goes here... */
  /* for this particular case you need to update the value in TABLE2 */
  UPDATE TABLE2 SET VALUE = :new.VALUE WHERE TABLE2.ID = :new.FK_Table2;
END;
Try and write some code. If you get stuck, come back and let us know.
No matter which system, there are some basic rules or best practices you should know. One is that it is bad form (and outright prohibited in many systems) for a trigger to reach back out and query the very table the trigger is written for. Your use case requires the trigger on Table1 to go back out and read from Table1 during the Update operation. Not good.
One available option is to use a stored procedure to handle all the updates to this table. They are more awkward to work with (for example: if a parameter is NULL, does that mean put a NULL in the corresponding field or leave it unmodified?). For that reason, and with the understanding that this is based on the limited amount of information in the question, I would recommend one of two alternatives.
One is to have a stored procedure that is used only to change the VALUE field. That field is not changed in a vacuum, but as part of a larger process. The step in the process that actually ends up changing the field could then call the SP.
Another is to front the table with a view with an "instead of" trigger and perform all DML through the view. This is the method I prefer, at least on those systems that allow triggers on views. The view trigger may query the underlying table as needed.
As for the logic (SP or trigger) here is some pseudo code:
-- Make the update
update table1 set value = #somevalue
where id = #someid;

-- Get the group that id is in
select FK_Table2 into #somegroupid
from Table1
where id = #someid;

-- Are all the values in that group the same?
select count(*) into #OtherValues
from Table1
where FK_Table2 = #somegroupid
and value <> #somevalue;

-- If so, notify the other table.
if #OtherValues = 0 then
  update table2 set value = #somevalue
  where id = #somegroupid;
end if;
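For concreteness, here is a hedged PL/SQL sketch of the stored-procedure route described above, matching the Oracle-style syntax used in the earlier answer (the procedure and variable names are illustrative):

CREATE OR REPLACE PROCEDURE set_table1_value (
  p_id    IN TABLE1.ID%TYPE,
  p_value IN TABLE1.VALUE%TYPE
) AS
  v_group_id     TABLE1.FK_Table2%TYPE;
  v_other_values INTEGER;
BEGIN
  -- Make the update
  UPDATE TABLE1 SET VALUE = p_value WHERE ID = p_id;

  -- Get the group that id is in
  SELECT FK_Table2 INTO v_group_id FROM TABLE1 WHERE ID = p_id;

  -- Are all the values in that group now the same?
  SELECT COUNT(*) INTO v_other_values
  FROM TABLE1
  WHERE FK_Table2 = v_group_id AND VALUE <> p_value;

  -- If so, propagate the value to the other table.
  IF v_other_values = 0 THEN
    UPDATE TABLE2 SET VALUE = p_value WHERE ID = v_group_id;
  END IF;
END;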
I hope this answers your immediate question. However, based on what you have shown us here, the major cause of the problem seems to be poor design. Let us know the higher-level requirement you are trying to fulfill and I'll bet we could come up with some modeling changes that would make this a whole lot easier, without having to get really clever with SPs or triggers.

SQL Command to copy data from 1 column in table and 1 column in another table into a new table?

I had to make a new table to get the Include statement working in Entity Framework, since EF was looking for a table called be_PostTagbe_Posts. I was using EF Code First from DB. But now the question is about SQL. I added one row of data and now the Include works. What I am looking for is a SQL command that can copy data from one column in one table, and one column in another table, into the new be_PostTagbe_Posts table. I need the data in PostRowID (from be_Posts) to go into be_Posts_PostRowID, and PostTagID to go into be_PostTag_PostTagID. Both be_PostTag_PostTagID and be_Posts_PostRowID are in the new be_PostTagbe_Posts table. I am not very good with SQL, so I am not sure how to do this.
Edit: Thanks for the answers. I tried 2 separate queries, but data was only inserted into be_PostTag_PostTagID while be_Posts_PostRowID remained NULL.
And I tried this query which returned The multi-part identifier "be_PostTag.PostID" could not be bound.
INSERT INTO be_PostTagbe_Posts(be_PostTag_PostTagID, be_Posts_PostRowID)
SELECT be_PostTag.PostTagID, be_Posts.PostRowID
WHERE be_PostTag.PostID = be_Posts.PostID
EDIT:
This only inserted half the data; even 2 separate inserts leave one column NULL:
INSERT INTO be_PostTagbe_Posts (be_Posts_PostRowID)
SELECT PostRowID FROM be_Posts;
INSERT INTO be_PostTagbe_Posts (be_PostTag_PostTagID)
SELECT PostTagID FROM be_PostTag;
And yet Management Studio tells me the query executed successfully, but one column is still NULL. Weird.
Here are screenshots of the tables:
SELECT PostTagID AS be_PostTag_PostTagID, PostRowID AS be_Posts_PostRowID
INTO be_PostTagbe_Posts
FROM be_PostTag
INNER JOIN be_Posts
ON be_PostTag.PostID = be_Posts.PostID
That command created the new table with the 2 columns populated.
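Note that SELECT ... INTO creates be_PostTagbe_Posts as a brand-new table, and fails if the table already exists. If the table does already exist, as the question describes, the same join can feed an INSERT instead; a sketch along those lines:

INSERT INTO be_PostTagbe_Posts (be_PostTag_PostTagID, be_Posts_PostRowID)
SELECT be_PostTag.PostTagID, be_Posts.PostRowID
FROM be_PostTag
INNER JOIN be_Posts
ON be_PostTag.PostID = be_Posts.PostID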
If I understand you correctly, you want to copy Table Z's column A to Table X, and Table Z's column B to Table Y.
If so, your question does not make the structure of TableX and TableY clear.
Assuming TableX and TableY are single-column tables (apart from an identity column), the queries would be:
INSERT INTO TableX
SELECT ColumnA FROM TableZ
INSERT INTO TableY
SELECT ColumnB FROM TableZ
Otherwise, please post the entire structure of your tables to get more help, because these queries are based on assumptions.
There's not enough information in your question to give you a working example, but this would be the general syntax for INSERTing into a different table using a query SELECTing from two other tables.
INSERT INTO destination_table (wanted_value_1, wanted_value_2)
SELECT table_1.source_field_1, table_2.source_field_1
FROM table_1, table_2
WHERE table_1.matching_field = table_2.matching_field
There has to be some sort of relationship between the two tables for the WHERE clause in that statement to work. I'm guessing, based on the little information you provided, that there is a PostRowID field somewhere in the table that contains the tags, so that your data would look similar to this in the tag table:
PostRowID  PostTagID
---------  ---------
1          1
1          2
1          3
1          4
2          1
2          2
3          3
4          4
It sounds like you should use two SQL statements:
Insert into `be_PostTagbe_Posts` (`be_PostTag_PostTagID`)
select `PostTagID` from POSTTAGIDTABLE
and
Insert into `be_PostTagbe_Posts` (`be_Posts_PostRowID`)
select `PostRowID` from POSTTAGIDTABLE
unless the items have some sort of relationship; in that case, if you have a SELECT statement that selects the merged data in two columns, you can just do:
Insert into `be_PostTagbe_Posts` (`be_PostTag_PostTagID`,`be_Posts_PostRowID`)
(select statement that selects the two items)