Create table on the fly with Insert Into - sql

Is this possible? I got a big .sql file full of INSERT INTO statements without the database schema. Can I just create the tables on the fly?
Here is an example:
INSERT INTO [g_fuel_site] ([SiteID], ... ,[EMVEnabled])
VALUES('Sep 23 2011 3:05:51:000PM', ... ,0)
EDIT: There are no tables! The script assumes I already have them!

Aaron beat me by 20 seconds.
For example, change the first insert from:
INSERT INTO [g_fuel_site] ([SiteID],[CurrentOperatingLevelID],[CurrentPriceBookID],[NumberFuelSaleBuffers],[LinearUnitOfMeasure],[VolumeUnitOfMeasure],[PreAuthAllowed],[StackedSalesAllowed],[MaxLiveDispensers],[AllowedZeroPPUs],[MaxPPU],[MinPPU],[InitialConfigDone],[DispenserOptionModeID],[GenAuthEnabled],[PendingPriceBookID],[AllowPresetWithHandleUp],[UseFixedGradeName],[UseFixedServiceLevelName],[UseFixedGradeProductCodes],[TokenAttendantRcptCtl],[TokenAttendantNtwrkRcptCtl],[TokenAttendantPrpayRcptCtl],[RunAttendantInBufferedMode],[AllowAttendantBalanceQuery],[TokenOrStandardOperation],[TokenPrefix],[EnablePostPayLimit],[PostPayLimit],[EMVEnabled])
VALUES('Sep 23 2011 3:05:51:000PM',1,1,2,'CM','L',1,1,12,0,9.9990,0.7500,1,1,1,2,1,0,0,0,0,0,0,0,0,0,'',0,100.0000,0)
to be:
SELECT
'Sep 23 2011 3:05:51:000PM' [SiteID],
1 [CurrentOperatingLevelID],
1 [CurrentPriceBookID],
2 [NumberFuelSaleBuffers],
'CM' [LinearUnitOfMeasure],
'L' [VolumeUnitOfMeasure],
1 [PreAuthAllowed],
1 [StackedSalesAllowed],
12 [MaxLiveDispensers],
0 [AllowedZeroPPUs],
9.9990 [MaxPPU],
0.7500 [MinPPU],
1 [InitialConfigDone],
1 [DispenserOptionModeID],
1 [GenAuthEnabled],
2 [PendingPriceBookID],
1 [AllowPresetWithHandleUp],
0 [UseFixedGradeName],
0 [UseFixedServiceLevelName],
0 [UseFixedGradeProductCodes],
0 [TokenAttendantRcptCtl],
0 [TokenAttendantNtwrkRcptCtl],
0 [TokenAttendantPrpayRcptCtl],
0 [RunAttendantInBufferedMode],
0 [AllowAttendantBalanceQuery],
0 [TokenOrStandardOperation],
'' [TokenPrefix],
0 [EnablePostPayLimit],
100.0000 [PostPayLimit],
0 [EMVEnabled]
INTO g_fuel_site
After this the table will exist. SQL Server infers each column's type from the literal it is created from, so this will only work if the first SELECT ... INTO contains all the columns that the later inserts expect.
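If you need control over the inferred types (an empty string literal, for instance, becomes a one-character column), you can wrap the literals in CONVERT in that first SELECT. A minimal sketch, with made-up types and a hypothetical table name:
SELECT
CONVERT(datetime, 'Sep 23 2011 3:05:51:000PM') [SiteID], -- assumed to really be a date
CONVERT(int, 1) [CurrentOperatingLevelID],
CONVERT(varchar(20), '') [TokenPrefix] -- wider than the '' it came from
INTO g_fuel_site_typed -- hypothetical name, for illustration only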

If you change the first one (and only the first one) to SELECT INTO, yes, assuming the first INSERT has correctly deducible data types. Note that it won't magically create keys, indexes, constraints, computed columns, etc. for you.
However, your example also includes a leading DELETE, which leads me to believe the table already exists. DELETE deletes all of the rows from the table; it doesn't drop the table. If the table doesn't exist, then your script should (a) check whether it exists before running a delete and (b) run the first command as a SELECT INTO so that it creates the table. However, you will probably want to define data types explicitly (I also find it hard to believe that SiteID is a DATETIME).
IF OBJECT_ID('dbo.g_fuel_site') IS NOT NULL
BEGIN
    DELETE dbo.g_fuel_site;
END
ELSE
BEGIN
    SELECT SiteID = CONVERT(INT, 1), ...
    INTO dbo.g_fuel_site
    WHERE 1 = 0; -- creates the table but with 0 rows
END
INSERT dbo.g_fuel_site(SiteID, ...) VALUES(...); -- first row
INSERT ...
INSERT ...
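Once the script has run, you can check what types SELECT INTO actually inferred; a quick sanity check using the standard metadata views (nothing here is specific to this schema):
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'g_fuel_site';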

Related

How to set flag based on values in previous columns in same table? (Oracle)

I'm creating a new table and carrying over several columns from a previous table. One of the new fields I need to create is a flag that will have values 0 or 1, whose value needs to be determined based on 6 previous fields in the table.
The 6 previous columns have preexisting values of 0 or 1 stored in each one. This new field needs to check whether any of the 6 columns contains a 1, and if so, set the flag to 0. If all 6 fields are 0, then set it to 1.
Hopefully this makes sense. How can I get this done in Oracle? I assume a case statement and some sort of for loop?
You can use the greatest() function:
create table t_new
as
select
    case when greatest(c1,c2,c3,c4,c5,c6) = 1 -- at least one of them contains 1
         then 0
         else 1
    end c_new
from t_old;
Or even shorter:
create table t_new
as
select
1-greatest(c1,c2,c3,c4,c5,c6) as c_new
from t_old;
When greatest(...) = 1, the result is 1-1 = 0; otherwise it is 1-0 = 1.
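A quick way to convince yourself of the arithmetic (the values here are made up):
select 1 - greatest(0,0,1,0,0,0) as flag_a, -- a 1 is present, so 1-1 = 0
       1 - greatest(0,0,0,0,0,0) as flag_b  -- all zeros, so 1-0 = 1
from dual;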
You can use a virtual column with a case expression; something like:
flag number generated always as (
case when val_1 + val_2 + val_3 + val_4 + val_5 + val_6 = 0 then 1 else 0 end
) virtual
or the same thing with greatest() as @Sayan suggested.
Using a virtual column means the flag will be right for newly-inserted rows, and if any of the other values are updated; you won't have to recalculate or update the flag column manually.
I've assumed the other six columns can't be null and are constrained to only be 0 or 1, as the question suggests. If they can be null you can add nvl() or coalesce() to each term in the calculation.
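For instance, a null-safe variant of the virtual column might look like this (a sketch only, reusing the val_1..val_6 names from above):
flag number generated always as (
    case when nvl(val_1,0) + nvl(val_2,0) + nvl(val_3,0)
            + nvl(val_4,0) + nvl(val_5,0) + nvl(val_6,0) = 0
         then 1 else 0 end
) virtual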

Update query just showing zero values when there exist non-zero values (Access)

I have been struggling with this for hours. I am trying to update all rows that have the same 'SHORT#'. If a 'SHORT#' is in 017_PolWpart2, I want its value to update the corresponding 'SHORT#' row in 017_WithdrawalsYTD_changelater. This update query is just displaying zeroes, but these values are in fact non-zero.
So say 017_WithdrawalsYTD_changelater looks like this:
SHORT# WithdrawalsYTD
1 0
2 0
3 0
4 0
5 0
and 017_PolWpart2 looks like this:
SHORT# Sum_MTD_AGG
3 50
5 12
I want this:
SHORT# WithdrawalsYTD
1 0
2 0
3 50
4 0
5 12
But I get this:
SHORT# WithdrawalsYTD
1 0
2 0
3 0
4 0
5 0
I have attached the SQL for the Query below.
Thanks!
UPDATE 017_WithdrawalsYTD_changelater
INNER JOIN 017b_PolWpart2 ON [017_WithdrawalsYTD_changelater].[SHORT#] =
[017b_PolWpart2].[SHORT#]
SET [017_WithdrawalsYTD_changelater].WithdrawalsYTD = [017b_PolWpart2].[Sum_MTD_AGG];
EDIT:
As I must aggregate on the fly, I have tried to do so. Still getting all kinds of errors. Note the table 17a_PolicyWithdrawalMatch is of the form:
SHORT# MTG_AGG WithdrawalPeriod PolDurY
1 3 1 1
1 5 1 0
2 2 1 1
2 22 1 1
So I aggregate:
SHORT# MTG_AGG
1 3
2 24
And put these aggregated values in 017_WithdrawalsYTD_changelater.
I tried to do this like so:
SELECT [017a_PolicyWithdrawalMatch].[SHORT#], Sum([017a_PolicyWithdrawalMatch].MTD_AGG) AS Sum_MTD_AGG
WHERE ((([017a_PolicyWithdrawalMatch].WithdrawalPeriod)=[017a_PolicyWithdrawalMatch].[PolDurY]))
GROUP BY [017a_PolicyWithdrawalMatch].[SHORT#]
UPDATE 017_WithdrawalsYTD_changelater INNER JOIN 017a_PolicyWithdrawalMatch ON [017_WithdrawalsYTD_changelater].[SHORT#] = [017a_PolicyWithdrawalMatch].[SHORT#] SET 017_WithdrawalsYTD_changelater.WithdrawalsYTD =Sum_MTD_AGG;
I am getting no luck... I get told the SELECT statement is using a reserved word... :(
Consider heeding @June7's comments and avoid saving aggregate data in a table: it redundantly uses storage, since such data can easily be queried in real time, and the saved values immediately become historical figures sitting in a static table.
In MS Access, update queries must be sourced from updateable objects, which aggregate queries are not (they are read-only). Hence, they cannot be used in UPDATE statements.
However, if you really, really, really need to store aggregate data, consider using domain functions such as DSUM inside the UPDATE. Below assumes SHORT# is a string column.
UPDATE [017_WithdrawalsYTD_changelater] c
SET c.WithdrawalsYTD = DSUM("MTD_AGG", "[017a_PolicyWithdrawalMatch]",
"[SHORT#] = '" & c.[SHORT#] & "' AND WithdrawalPeriod = [PolDurY]")
Nonetheless, the aggregate value can be queried and refreshed to current values as needed. Also, notice the use of table aliases to reduce length of long table names:
SELECT m.[SHORT#], SUM(m.MTD_AGG) AS Sum_MTD_AGG
FROM [017a_PolicyWithdrawalMatch] m
WHERE m.WithdrawalPeriod = m.[PolDurY]
GROUP BY m.[SHORT#]
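If the figures are needed for display rather than storage, the aggregate above can be saved as a query (the name qry_Sum_MTD_AGG below is hypothetical) and joined on the fly:
SELECT c.[SHORT#],
       NZ(q.Sum_MTD_AGG, 0) AS WithdrawalsYTD
FROM [017_WithdrawalsYTD_changelater] AS c
LEFT JOIN qry_Sum_MTD_AGG AS q
       ON c.[SHORT#] = q.[SHORT#];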

Oracle SQL statement to update column values based on specific condition

I have a table with 3 columns: PID, LOCID, ISMGR. In the existing scenario, a person is set to ISMGR=true for some locations, based on the location ID.
But as per the new requirement, we have to set ISMGR=true for every row of any person who has at least one ISMGR=true (meaning if he is a manager for any one location, he should be a manager for all locations).
Table Data before running the script:
PID|LOCID|ISMGR
1 1 1
1 2 0
1 3 0
2 1 0
2 2 1
Table Data after running the script:
PID|LOCID|ISMGR
1 1 1
1 2 1
1 3 1
2 1 1
2 2 1
Any help will be highly appreciated. Thanks in advance.
I would be inclined to write this using exists:
update t
set ismgr = 1
where ismgr = 0 and
exists (select 1 from t t2 where t2.pid = t.pid and t2.ismgr = 1);
exists should be more efficient than doing a subquery with an aggregation.
This will work best with indexes on t(pid, ismgr) and t(ismgr).
This is not an answer but a test of the two solutions offered so far - I will call them the "EXISTS" and the "AGGREGATE" solutions or approaches.
Details of the tests are below, but here are two overall conclusions:
Both approaches have comparable execution times; on average the AGGREGATE approach worked a little faster than the EXISTS approach, but by a very small margin (smaller than the differences between running times from one trial to the next). Without indexes on any columns, the run times were (first number for EXISTS, second for AGGREGATE):
Trial 1: 8.19s 8.08s
Trial 2: 8.98s 8.22s
Trial 3: 9.46s 9.55s
Note - estimated optimizer costs should be used only to compare different execution plans for the same statement, not different solutions using different approaches. Even so, someone will inevitably ask; so: for the EXISTS approach the lowest cost the optimizer found was 4766; for AGGREGATE, 2665. Again, though, this is completely meaningless.
If a lot of rows need to be updated, indexes will hurt performance much more than they help it: when rows are updated, the indexes must be updated as well. If only a small number of rows must be updated, the indexes help, because most of the time is spent finding the rows to update and only a little time is spent in the updates themselves. In my example almost 25% of rows had to be updated... and with the indexes in place, the AGGREGATE solution took 51.2 seconds and the EXISTS solution took 59.3 seconds!
RECOMMENDATION: If you expect that a large number of rows may need to be updated, and you already have indexes on the table, you may be better off dropping them and re-creating them after the updates! Or perhaps there are other solutions to this problem; I am not an expert (keep that in mind!)
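A sketch of that drop-and-recreate pattern, using the index names from the test setup further down:
drop index pid_ismgr_idx;
drop index ismgr_ids;

update tbl t
set    ismgr = 1
where  ismgr = 0
and    exists (select 1 from tbl t2 where t2.pid = t.pid and t2.ismgr = 1);

create index pid_ismgr_idx on tbl(pid, ismgr);
create index ismgr_ids on tbl(ismgr);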
To test properly, after I created the test table and committed, I ran each solution by itself, then I rolled back and, logged in as SYS (in a different session), I ran alter system flush buffer_cache to make sure performance is not randomly helped by cache hits or hurt by misses. In all cases everything is done from disk storage.
I created a table with id's from 1 to 1.2 million and a random integer between 1 and 3, with probabilities 40%, 40% and 20% respectively (see the use of dbms_random below). Then from this prep data I created the test table: each pid was included one, two or three times based on this random integer; and a random 0 or 1 was added as ismgr (with 50-50 probability) in each row. I also added a random integer between 1 and 4 as locid just to simulate the actual data; I didn't worry about duplicate locid since that column plays no role in the problem.
Of the 1.2 million pids, approximately 480,000 (40%) appear just once in the test table, another ~480,000 appear twice, and ~240,000 three times. Total rows should be about 2,160,000; that's the cardinality of the base table (in reality it ended up being 2,160,546).
Then: none of the ~480,000 rows with a unique pid need to be changed; half of the 480,000 pids with a count of 2 will have the same ismgr in both rows (so no change) and the other half will be split, so we will need to change 240,000 rows from these; and a simple combinatorial argument shows that 3/8, or 270,000, of the 720,000 rows for pids that appear three times must be changed. So we should expect 510,000 rows to be changed. In fact the update statements resulted in 510,132 rows updated (same for both solutions). These sanity checks show that the test was probably set up correctly. Below I also show a small sample from the base table, as another sanity check.
CREATE TABLE statement:
create table tbl as
with prep ( pid, dup ) as (
    -- 1.2 million pids; dup = 1, 2 or 3 with probabilities 40%, 40%, 20%
    select level,
           round( dbms_random.value(0.5, 3) ) as dup
    from dual
    connect by level <= 1200000
)
select pid,
       round( dbms_random.value(0.5, 4.5) ) as locid,  -- random 1 to 4
       round( dbms_random.value(0, 1) )     as ismgr   -- random 0 or 1
from prep
connect by level <= dup                  -- repeat each pid "dup" times
       and prior pid = pid
       and prior sys_guid() is not null  -- forces re-evaluation per row
;
commit;
Sanity checks:
select count(*) from tbl;
COUNT(*)
----------
2160546
select * from tbl where pid between 324720 and 324730;
PID LOCID ISMGR
---------- ---------- ----------
324720 4 1
324721 1 0
324721 4 1
324722 3 0
324723 1 0
324723 3 0
324723 3 1
324724 3 1
324724 2 0
324725 4 1
324725 2 0
324726 2 0
324726 1 0
324727 3 0
324728 4 1
324729 1 0
324730 3 1
324730 3 1
324730 2 0
19 rows selected
UPDATE statements:
update tbl t
set ismgr = 1
where ismgr = 0 and
exists (select 1 from tbl t2 where t2.pid = t.pid and t2.ismgr = 1);
rollback;
update tbl
set ismgr = 1
where ismgr = 0
and pid in ( select pid
from tbl
group by pid
having max(ismgr) = 1);
rollback;
-- statements to create indexes, used in separate testing:
create index pid_ismgr_idx on tbl(pid, ismgr);
create index ismgr_ids on tbl(ismgr);
Why PL/SQL? All you need is a plain SQL statement. For example:
update your_table t -- enter your actual table name here
set ismgr = 1
where ismgr = 0
and pid in ( select pid
from your_table
group by pid
having max(ismgr) = 1)
;
The existing solutions are perfectly fine, but I prefer to use merge any time I'm updating rows from a correlated sub-query. I find it more readable, and the performance is typically comparable to the exists method.
MERGE INTO t
USING (SELECT DISTINCT pid
FROM t
WHERE ismgr = 1) src
ON (t.pid = src.pid)
WHEN MATCHED THEN
UPDATE SET ismgr = 1
WHERE ismgr = 0;
As @mathguy pointed out, in this case using group by and having is more efficient than distinct. To use that with merge is just a matter of changing the sub-query:
MERGE INTO t
USING (SELECT pid
FROM t
GROUP BY pid
HAVING MAX(ismgr) = 1) src
ON (t.pid = src.pid)
WHEN MATCHED THEN
UPDATE SET ismgr = 1
WHERE ismgr = 0;
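One design note: Oracle's MERGE requires that each target row match at most one source row (otherwise it fails with ORA-30926), and both variants satisfy that, since DISTINCT and GROUP BY pid each guarantee one source row per pid.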

SQL Server Stored Procedure SELECT DISTINCT

I'm working on deciphering some stored procedures and have minimal vocabulary on the subject. Can someone please explain to me what role this '1' serves in the below statement? I cannot find any DISTINCT syntax tutorial that explains it. I'm referring to the literal 1 in the statement.
USE TEST
GO
SET ANSI_NULLS, QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].sp_F_SQL
(#Id int)
WITH ENCRYPTION AS
SELECT DISTINCT
dbo.MAP_SQL.rID,
dbo.MAP_SQL.lID,
dbo.MAP_SQL.cID,
1 as RESPFACT, -- <-- the 1 in question
dbo.MAP_SQL.Longitude,
dbo.MAP_SQL.Latitude,
dbo.MAP_SQL.Altitude,
...
The 1 has nothing to do with DISTINCT. It just adds an output column titled RESPFACT that has the value 1 for all rows. I suspect whatever is consuming the output needs that column.
SELECT DISTINCT only returns the "distinct" rows from the output - meaning rows where ALL column values are equal.
e.g. if your output without distinct was
1 2 ABC DEF
2 3 GHI JLK
2 1 ABC DEF
1 2 ABC DEF
Then rows 1 and 4 would be seen as "equal" and only one would be returned:
1 2 ABC DEF
2 3 GHI JLK
2 1 ABC DEF
Note that rows 1 and 3 are NOT equal even though 3 of the 4 column values match.
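A runnable T-SQL version of that toy example, if you want to try it yourself:
SELECT DISTINCT a, b, c, d
FROM (VALUES (1, 2, 'ABC', 'DEF'),
             (2, 3, 'GHI', 'JLK'),
             (2, 1, 'ABC', 'DEF'),
             (1, 2, 'ABC', 'DEF')) AS v(a, b, c, d);
-- rows 1 and 4 collapse into one, so 3 rows come back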
The 1 generates a column called RESPFACT. This always has the value of 1.
I cannot say why this is important for the sp_F_SQL procedure.
The distinct returns unique rows. If there are duplicate values for the columns in the select then only one row is returned. Clearly, the RESPFACT column is the same in all rows, so it does not affect the rows being returned.

Modifying a column value depending on other column value for all the rows

I have a Teradata table. Now I need to add a column, say Flag, and populate it based on another column, say Sales: Flag = 1 if Sales is greater than some value x, otherwise Flag = 0.
Here is the structure of the table at present:
Sales Date
10 11/11/1987
20 12/13/1987
I want it to look like this:
Sale Date Flag
10 11/11/1987 0
20 12/13/1987 1
I tried to look for similar problems on the forums but couldn't find any. Excuse me if you find any duplicate problems.
What you want to use here is a CASE expression:
UPDATE teradata_table
SET flag = CASE WHEN sales > 10 THEN 1 ELSE 0 END;
After adding the column, run the update statement:
Update <table>
set Flag = case when Sale <= 10 then 0
                else 1
           end;
ALTER TABLE MYTABLE ADD FLAG NUMBER(1);

UPDATE MYTABLE SET FLAG = 1 WHERE SALE > 10;
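If keeping the stored flag in sync isn't worth the trouble (it goes stale whenever Sales changes), the same CASE logic can be computed on the fly instead; a sketch using the column names from the question, quoting the Date column since DATE is a reserved word in Teradata:
SELECT Sales, "Date",
       CASE WHEN Sales > 10 THEN 1 ELSE 0 END AS Flag
FROM MYTABLE;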