Creating a table populated with record counts from other tables - SQL

I want to validate that 2 (or more) tables have the same row count daily. I figured executing a SELECT count against each table and reporting the results as values in their own named columns within a temp table would be efficient, even if just to eyeball the numbers in the results window.
CREATE TABLE #sync_count (mst_cnt AS varchar(5), info_cnt AS varchar(5));
INSERT INTO #sync_count (mst_cnt, info_cnt)
VALUES (
SELECT Sum(SRV_MST.SRV_MACID) AS mst_cnt FROM SRV_MST,
SELECT Sum(SRV_INFO.SRV_MACID) AS info_cnt FROM SRV_INFO,
))
DELETE #sync_count
Desired output
SRV_MST | SRV_INFO
===================
22 | 22
===================
Not sure exactly where I went wrong. Any tips? Many thanks

You need parentheses around subqueries:
CREATE TABLE #sync_count (mst_cnt varchar(5), info_cnt varchar(5));
----------------------------------^ as is not used in table declarations.
INSERT INTO #sync_count (mst_cnt, info_cnt)
VALUES ( (SELECT Sum(SRV_MST.SRV_MACID) AS mst_cnt FROM SRV_MST),
(SELECT Sum(SRV_INFO.SRV_MACID) AS info_cnt FROM SRV_INFO)
);
You also had an extra comma before the last paren.
I would strongly, strongly, strongly recommend that you define the counts as numbers and not strings!
CREATE TABLE #sync_count (mst_cnt int, info_cnt int);
In fact, you can dispense with the table declaration and just use:
SELECT (SELECT Sum(SRV_MST.SRV_MACID) FROM SRV_MST) AS mst_cnt,
(SELECT Sum(SRV_INFO.SRV_MACID) FROM SRV_INFO) AS info_cnt
INTO #sync_count ;

You could write this as:
INSERT INTO #sync_count (mst_cnt, info_cnt)
SELECT
(SELECT Sum(SRV_MST.SRV_MACID) FROM SRV_MST),
(SELECT Sum(SRV_INFO.SRV_MACID) FROM SRV_INFO)
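Since the stated goal is comparing row counts rather than sums, a COUNT(*) variant may be closer to the intent — a minimal sketch against the same two tables:
SELECT (SELECT Count(*) FROM SRV_MST) AS mst_cnt,
       (SELECT Count(*) FROM SRV_INFO) AS info_cnt;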

Related

Inserting results of different queries of different tables into a newly created table in Postgres via PGadmin4

So I have created a table with multiple columns to collect some information about a database:
CREATE TABLE DATENBANKEN (
ID serial,
name VARCHAR(20),
Erstellt timestamp,
Datenbankgröße VARCHAR(20),
"Collation" VARCHAR (20),
Beschreibung VARCHAR (50)
)
and with the following INSERT statement I was able to fill the rows with the desired information:
INSERT INTO DATENBANKEN (id, name, Erstellt, Datenbankgröße, "Collation")
SELECT pg_database.oid,
pg_database.datname,
(pg_stat_file('base/'||pg_database.oid ||'/PG_VERSION')).modification,
pg_size_pretty(pg_database_size(datname)),
pg_database.datcollate datcollate from pg_database
This is the result.
All the values above were captured from one table (pg_database).
Now the last value, "Beschreibung", is located in another table named pg_shdescription,
so in this case I had to make another INSERT statement specifically for the column "Beschreibung":
INSERT INTO DATENBANKEN (Beschreibung)
select pg_shdescription.description from pg_shdescription
As you can see, the rows in the column "Beschreibung" were not inserted beside the first three rows as I expected, but were added as additional rows with no connection to the data above.
This is the table pg_shdescription, and as you can see, for every objoid there is a specific description. So 1 is "default template for new databases".
So here the 4th row in the column "Beschreibung" should have been in the second row, where the database name is "template1".
What did I do wrong here, or what is the best way to insert certain data from different tables into a new table so that the rows stay linked together?
I really appreciate any help 🙂
I tried an INNER JOIN in the statement, but it did not work:
INSERT INTO DATENBANKEN (Beschreibung)
select pg_shdescription.description from pg_shdescription
INNER JOIN on Datenbanken.id = pg_shdescription.objoid
You should join the tables before you insert; otherwise you need to UPDATE DATENBANKEN rather than INSERT INTO:
INSERT INTO DATENBANKEN (id, name, Erstellt, Datenbankgröße, "Collation",Beschreibung )
SELECT pg_database.oid,
pg_database.datname,
(pg_stat_file('base/'||pg_database.oid ||'/PG_VERSION')).modification,
pg_size_pretty(pg_database_size(datname)),
pg_database.datcollate datcollate,
pg_shdescription.description
from pg_database JOIN pg_shdescription
ON pg_database.oid = pg_shdescription.objoid
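One caveat worth adding (my note, not part of the answer above): an inner JOIN drops databases that have no entry in pg_shdescription. If those rows should still appear, with a NULL Beschreibung, a LEFT JOIN variant would look like:
INSERT INTO DATENBANKEN (id, name, Erstellt, Datenbankgröße, "Collation", Beschreibung)
SELECT pg_database.oid,
       pg_database.datname,
       (pg_stat_file('base/'||pg_database.oid ||'/PG_VERSION')).modification,
       pg_size_pretty(pg_database_size(datname)),
       pg_database.datcollate,
       pg_shdescription.description
FROM pg_database
LEFT JOIN pg_shdescription
       ON pg_database.oid = pg_shdescription.objoid;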

DB2: Using a WITH statement to keep the last ID

In my project I need to create a script that inserts data with an auto-generated value for the primary key and then reuses this number as a foreign key in other tables.
I'm trying to use the WITH statement in order to keep that value.
For instance, I'm trying to do this:
WITH tmp as (SELECT ID FROM (INSERT INTO A ... VALUES ...))
INSERT INTO B ... VALUES tmp.ID ...
But I can't make it work.
Is it at least possible to do it or am I completely wrong???
Thank you
Yes, it is possible, if your DB2-server version supports the syntax.
For example:
create table xemp(id bigint generated always as identity, other_stuff varchar(20));
create table othertab(xemp_id bigint);
SELECT id FROM FINAL TABLE
(INSERT INTO xemp(other_stuff)
values ('a'), ('b'), ('c'), ('d')
) ;
The above snippet of code gives the result below:
ID
--------------------
1
2
3
4
4 record(s) selected.
If you want to re-use the ID to populate another table:
with tmp1(id) as
( SELECT id FROM new TABLE
    (INSERT INTO xemp(other_stuff)
     values ('a1'), ('b1'), ('c1'), ('d1')
    ) tmp3
)
, tmp2 as
( select * from new table
    (insert into othertab(xemp_id) select id from tmp1) tmp4
)
select * from othertab;
As per my understanding, you will have to create an auto-increment field with a sequence object (this object generates a number sequence).
You can use CREATE SEQUENCE to achieve the auto-increment value:
CREATE SEQUENCE seq_person
MINVALUE 1
START WITH 1
INCREMENT BY 1
CACHE 10
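To connect this back to the original question, here is a sketch of how the sequence could be used to insert a row and then reuse its ID as a foreign key (tables A and B and their columns are hypothetical, since the question elides them):
-- Hypothetical tables/columns; draw the next value once, then reuse it.
-- DB2 also accepts NEXTVAL FOR / PREVVAL FOR as synonyms.
INSERT INTO A (id, some_col)
VALUES (NEXT VALUE FOR seq_person, 'x');
INSERT INTO B (a_id, other_col)
VALUES (PREVIOUS VALUE FOR seq_person, 'y');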

Split one large, denormalized table into a normalized database

I have a large (5 million row, 300+ column) csv file I need to import into a staging table in SQL Server, then run a script to split each row up and insert data into the relevant tables in a normalized db. The format of the source table looks something like this:
(fName, lName, licenseNumber1, licenseIssuer1, licenseNumber2, licenseIssuer2..., specialtyName1, specialtyState1, specialtyName2, specialtyState2..., identifier1, identifier2...)
There are 50 licenseNumber/licenseIssuer columns, 15 specialtyName/specialtyState columns, and 15 identifier columns. There is always at least one of each of those, but the remaining 49 or 14 could be null. The first identifier is unique, but is not used as the primary key of the Person in our schema.
My database schema looks like this
People(ID int Identity(1,1))
Names(ID int, personID int, lName varchar, fName varchar)
Licenses(ID int, personID int, number varchar, issuer varchar)
Specialties(ID int, personID int, name varchar, state varchar)
Identifiers(ID int, personID int, value)
The database will already be populated with some People before adding the new ones from the csv.
What is the best way to approach this?
I have tried iterating over the staging table one row at a time with select top 1:
WHILE EXISTS (Select top 1 * from staging)
BEGIN
INSERT INTO People Default Values
SET @LastInsertedID = SCOPE_IDENTITY() -- might use the output clause to get this instead
INSERT INTO Names (personID, lName, fName)
SELECT top 1 @LastInsertedID, lName, fName from staging
INSERT INTO Licenses(personID, number, issuer)
SELECT top 1 @LastInsertedID, licenseNumber1, licenseIssuer1 from staging
IF (select top 1 licenseNumber2 from staging) is not null
BEGIN
INSERT INTO Licenses(personID, number, issuer)
SELECT top 1 @LastInsertedID, licenseNumber2, licenseIssuer2 from staging
END
-- Repeat the above 49 times, etc...
DELETE top (1) from staging
END
One problem with this approach is that it is prohibitively slow, so I refactored it to use a cursor. This works and is significantly faster, but has me declaring 300+ variables for Fetch INTO.
Is there a set-based approach that would work here? That would be preferable, as I understand that cursors are frowned upon, but I'm not sure how to get the identity from the INSERT into the People table for use as a foreign key in the others without going row-by-row from the staging table.
Also, how could I avoid copy and pasting the insert into the Licenses table? With a cursor approach I could try:
FETCH INTO ...@LicenseNumber1, @LicenseIssuer1, @LicenseNumber2, @LicenseIssuer2...
INSERT INTO #LicenseTemp (number, issuer) Values
(@LicenseNumber1, @LicenseIssuer1),
(@LicenseNumber2, @LicenseIssuer2),
... Repeat 48 more times...
.
.
.
INSERT INTO Licenses(personID, number, issuer)
SELECT @LastInsertedID, number, issuer
FROM #LicenseTemp
WHERE number is not null
There still seems to be some redundant copy and pasting there, though.
To summarize the questions, I'm looking for idiomatic approaches to:
Break up one large staging table into a set of normalized tables, retrieving the Primary Key/identity from one table and using it as the foreign key in the others
Insert multiple rows into the normalized tables that come from many repeated columns in the staging table with less boilerplate/copy and paste (Licenses and Specialties above)
Short of complete answers, I'd also be very happy with pointers towards resources and references that could assist me in figuring this out.
Ok, I'm not an SQL Server expert, but here's the "strategy" I would suggest.
Calculate the personId on the staging table
As @Shnugo suggested before me, calculating the personId in the staging table will ease the next steps.
Use a sequence for the personID
From SQL Server 2012 you can define sequences. If you use one for every person insert, you'll never risk overlapping IDs. If you have (as it seems) personIDs that were loaded before the sequence, you can create the sequence with the first free personID as its starting value.
Create a numbers table
Create a utility table holding the numbers from 1 to n (you need n to be at least 50; you can look at this question for some implementations).
Use set logic to do the insert
I'd avoid cursors and row-by-row logic: you are right that it is better to limit the number of accesses to the table, but I'd say that you should strive to limit it to one pass per target table.
You could proceed like this:
People:
INSERT INTO People (personID)
SELECT personId from staging;
Names:
INSERT INTO Names (personID, lName, fName)
SELECT personId, lName, fName from staging;
Licenses:
here we'll need the Number table
INSERT INTO Licenses (personId, number, issuer)
SELECT * FROM (
SELECT personId,
case nbrs.n
when 1 then licenseNumber1
when 2 then licenseNumber2
...
when 50 then licenseNumber50
end as licenseNumber,
case nbrs.n
when 1 then licenseIssuer1
when 2 then licenseIssuer2
...
when 50 then licenseIssuer50
end as licenseIssuer
from staging
cross join
(select n from numbers where n>=1 and n<=50) nbrs
) AS t WHERE licenseNumber is not null;
Specialties:
INSERT INTO Specialties(personId, name, state)
SELECT * FROM (
SELECT personId,
case nbrs.n
when 1 then specialtyName1
when 2 then specialtyName2
...
when 15 then specialtyName15
end as specialtyName,
case nbrs.n
when 1 then specialtyState1
when 2 then specialtyState2
...
when 15 then specialtyState15
end as specialtyState
from staging
cross join
(select n from numbers where n>=1 and n<=15) nbrs
) AS t WHERE specialtyName is not null;
Identifiers:
INSERT INTO Identifiers(personId, value)
SELECT * FROM (
SELECT personId,
case nbrs.n
when 1 then identifier1
when 2 then identifier2
...
when 15 then identifier15
end as value
from staging
cross join
(select n from numbers where n>=1 and n<=15) nbrs
) AS t WHERE value is not null;
Hope it helps.
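A side note of my own, not part of the answer above: on SQL Server specifically, the long CASE expressions can be replaced by a CROSS APPLY (VALUES ...) unpivot. A sketch for Licenses, assuming the staging column names from the question:
INSERT INTO Licenses (personId, number, issuer)
SELECT s.personId, v.number, v.issuer
FROM staging AS s
CROSS APPLY (VALUES
    (s.licenseNumber1, s.licenseIssuer1),
    (s.licenseNumber2, s.licenseIssuer2)
    -- ...one row per pair, up to (s.licenseNumber50, s.licenseIssuer50)
) AS v(number, issuer)
WHERE v.number IS NOT NULL;
The same pattern covers Specialties and Identifiers with their 15 column pairs.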
You say: but the staging table could be modified
I would
add a PersonID INT NOT NULL column and fill it with DENSE_RANK() OVER(ORDER BY fname,lname)
add an index to this PersonID
use this ID in combination with GROUP BY to fill your People table
do the same with your names table
And then use this ID for a set-based insert into your three side tables
Do it like this
SELECT AllTogether.PersonID, AllTogether.TheValue
FROM
(
SELECT PersonID,SomeValue1 AS TheValue FROM StagingTable
UNION ALL SELECT PersonID,SomeValue2 FROM StagingTable
UNION ALL ...
) AS AllTogether
WHERE AllTogether.TheValue IS NOT NULL
UPDATE
You say: might cause a conflict with IDs that already exist in the People table
You did not tell us anything about the existing People...
Is there any sure and unique mark to identify them? Use a simple
UPDATE StagingTable SET PersonID=xyz WHERE ...
to set existing PersonIDs into your staging table and then use something like
UPDATE StagingTable
SET PersonID=DENSE_RANK() OVER(...) + MaxExistingID
WHERE PersonID IS NULL
to set new IDs for PersonIDs still being NULL.
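Since window functions are not allowed directly in a SET clause, one way to express that update is through an updatable CTE — a sketch, assuming @MaxExistingID has been set to the current maximum ID beforehand:
WITH numbered AS
(
    SELECT PersonID,
           DENSE_RANK() OVER(ORDER BY fName, lName) AS rnk
    FROM StagingTable
    WHERE PersonID IS NULL
)
UPDATE numbered
SET PersonID = rnk + @MaxExistingID;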

SQL query with pivot tables?

I'm trying to wrap my brain around how to use pivot tables for a query I need. I have 3 database tables. Showing relevant columns:
TableA: Columns = pName
TableB: Columns = GroupName, GroupID
TableC: Columns = pName, GroupID
TableA contains a list of names (John, Joe, Jack, Jane)
TableB contains a list of groups with an ID#. (Soccer|1, Hockey|2, Basketball|3)
TableC contains a list of the names and the group they belong to (John|1, John|3, Joe|2, Jack|1, Jack|2, Jack|3, Jane|3)
I need to create a matrix-like grid view using a SQL query that would return a list of all the names from TableA (Y-axis) and a list of all the possible groups (X-axis). The cell values would be true or false depending on whether the name belongs to the group.
Any help would be appreciated. I couldn't quite find an existing answer that helped.
You might try it like this.
Here I set up an MCVE; please try to create this yourself in your next question...
DECLARE @Name TABLE (pName VARCHAR(100));
INSERT INTO @Name VALUES('John'),('Joe'),('Jack'),('Jane');
DECLARE @Group TABLE(gName VARCHAR(100),gID INT);
INSERT INTO @Group VALUES ('Soccer',1),('Hockey',2),('Basketball',3);
DECLARE @map TABLE(pName VARCHAR(100),gID INT);
INSERT INTO @map VALUES
('John',1),('John',3)
,('Joe',2)
,('Jack',1),('Jack',2),('Jack',3)
,('Jane',3);
This query will collect the values and perform the PIVOT:
SELECT p.*
FROM
(
SELECT n.pName
,g.gName
,'x' AS IsInGroup
FROM @map AS m
INNER JOIN @Name AS n ON m.pName=n.pName
INNER JOIN @Group AS g ON m.gID=g.gID
) AS x
PIVOT
(
MAX(IsInGroup) FOR gName IN(Soccer,Hockey,Basketball)
) as p
This is the result.
pName Soccer Hockey Basketball
Jack x x x
Jane NULL NULL x
Joe NULL x NULL
John x NULL x
Some hints:
You might use 1 and 0 instead of x, as SQL Server does not have a real boolean type; a sketch of that follows below.
You should add a pID to your names. Never join tables on real data (unless it is something unique and unchangeable [which means never, actually!!!]).
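A sketch of that first hint: let the inner query emit 1, then wrap each pivoted column in ISNULL(..., 0) so missing memberships show as 0 instead of NULL:
SELECT p.pName,
       ISNULL(p.Soccer, 0) AS Soccer,
       ISNULL(p.Hockey, 0) AS Hockey,
       ISNULL(p.Basketball, 0) AS Basketball
FROM
(
    SELECT n.pName, g.gName, 1 AS IsInGroup
    FROM @map AS m
    INNER JOIN @Name AS n ON m.pName=n.pName
    INNER JOIN @Group AS g ON m.gID=g.gID
) AS x
PIVOT
(
    MAX(IsInGroup) FOR gName IN(Soccer,Hockey,Basketball)
) AS p;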
UPDATE: dynamic SQL (thx to @djlauk)
If you want a query which deals with any number of groups, you have to do this dynamically. But please be aware that you lose the option to use this in ad-hoc SQL like a VIEW or inline TVF, which is quite a big drawback...
CREATE TABLE #Name(pName VARCHAR(100));
INSERT INTO #Name VALUES('John'),('Joe'),('Jack'),('Jane');
CREATE TABLE #Group(gName VARCHAR(100),gID INT);
INSERT INTO #Group VALUES ('Soccer',1),('Hockey',2),('Basketball',3);
CREATE TABLE #map(pName VARCHAR(100),gID INT);
INSERT INTO #map VALUES
('John',1),('John',3)
,('Joe',2)
,('Jack',1),('Jack',2),('Jack',3)
,('Jane',3);
DECLARE @ListOfGroups VARCHAR(MAX)=
(
STUFF
(
(
SELECT DISTINCT ',' + QUOTENAME(gName)
FROM #Group
FOR XML PATH('')
),1,1,''
)
);
DECLARE @sql VARCHAR(MAX)=
(
'SELECT p.*
FROM
(
SELECT n.pName
,g.gName
,''x'' AS IsInGroup
FROM #map AS m
INNER JOIN #Name AS n ON m.pName=n.pName
INNER JOIN #Group AS g ON m.gID=g.gID
) AS x
PIVOT
(
MAX(IsInGroup) FOR gName IN(' + @ListOfGroups + ')
) as p');
EXEC(@sql);
GO
DROP TABLE #map;
DROP TABLE #Group;
DROP TABLE #Name;
I suspect it may be laborious to keep the pivot up to date if categories are added. Or maybe I just prefer Excel (if you ignore one major advantage). The following approach could be helpful too, assuming you do have Office 365.
I added the three tables using 3 CREATE TABLE statements and 3 INSERT statements based on the code I saw above. (The solutions make use of temporary tables to insert specific values, but I believe you already have the data in your three tables, called TableA, TableB, TableC).
CREATE TABLE TestName (pName VARCHAR(100));
INSERT INTO TestName VALUES('John'),('Joe'),('Jack'),('Jane');
CREATE TABLE TestGroup (gName VARCHAR(100),gID INT);
INSERT INTO TestGroup VALUES ('Soccer',1),('Hockey',2),('Basketball',3);
CREATE TABLE Testmap (pName VARCHAR(100),gID INT);
INSERT INTO Testmap VALUES
('John',1),('John',3)
,('Joe',2)
,('Jack',1),('Jack',2),('Jack',3)
,('Jane',3);
Then, in MS Excel, I added the three tables as queries (From Database > From SQL Server Database; there may be a shorter sequence, but I'm still exploring). After adding them, I added all three to the Data Model (I can elaborate if you ask).
I then inserted PivotTable from the ribbon, chose External data source, but opened the Tables tab (instead of Connections tab), to find my data model (mine was top of the list) and I clicked Open. At some point Excel prompted me to create relationships between tables and it did a good job of auto generating them for me.
After minor tweaks my PivotTable came out like this (I could also ask Excel to show the data as a PivotChart).
Pivot showing groups as columns and names as rows.
The advantage is that you don't have to revisit the PIVOT code in SQL if the list (of groups) changes. As I think someone else mentioned, consider using ids for pName as well, or another way to ensure that you are not stuck the next day if you have two persons named John or Jack.
In Excel you can choose when to refresh the data (or the pivot) and, after refresh, any additional categories will be added and counted.

Inserting a Row in a Table from Select Query

I'm using two stored procedures and two tables in my SQL Server.
First Table Structure.
Second Table Structure.
When a customer books an order, the data will be inserted into Table 1.
I'm using a SELECT query on another page which selects the details from the second table.
If a row with a billno from the first table is not present in the second table, I want to insert it into the second table with some default values in the SELECT query. How can I do this?
If you want to insert in the same query, you will have to create a stored procedure. There you'll query whether the row exists in the second table and, if not, insert a new entity into the second table.
Your code should look something like this:
-- --------------------------------------------------------------------------------
-- Routine DDL
-- Note: comments before and after the routine body will not be stored by the server
-- --------------------------------------------------------------------------------
DELIMITER $$
CREATE DEFINER=`table`@`%` PROCEDURE `insertBill`(IN billNo int, val1 int, val2 int, val3 int)
BEGIN
DECLARE totalres INT DEFAULT 0;
select count(*) from SECOND_TABLE where Bill_Number = billNo INTO totalres;
IF totalres < 1 THEN
INSERT into SECOND_TABLE values(val1,val2,val3);
END IF;
END
val1, val2 and val3 are the values to be inserted into the second table.
Hope this helps.
What you do is LEFT JOIN the two tables and then select only the rows where the second table had no row to join, meaning the bill number was missing.
In the example below, you can replace @default_inform_status and @default_response_status with your default values.
INSERT INTO second_table (Bill_Number, Rest_Inform_Status, Rest_Response_Status)
SELECT ft.Bill_Number, @default_inform_status, @default_response_status
FROM first_table ft
LEFT JOIN second_table st
ON st.Bill_Number = ft.Bill_number
WHERE st.Bill_Number IS NULL
If it is possible to have duplicates of the same Bill_Number in the first table, you should also add a DISTINCT after the SELECT. But considering the fact that it is a primary key, this is no issue for you.
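For completeness, an equivalent NOT EXISTS formulation (my own sketch, using the same assumed column names and variables) reads closer to the intent for some; like the LEFT JOIN, it would still need a DISTINCT or GROUP BY if first_table could repeat a Bill_Number:
INSERT INTO second_table (Bill_Number, Rest_Inform_Status, Rest_Response_Status)
SELECT ft.Bill_Number, @default_inform_status, @default_response_status
FROM first_table ft
WHERE NOT EXISTS
(
    SELECT 1 FROM second_table st
    WHERE st.Bill_Number = ft.Bill_Number
);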