SQL: Insert only the rows that have data in a specific column

I am basically a noob at this and have gotten this far from Google searches alone. This is an Access VBA and SQL inventory database.
I have a table that I populate with a barcode scanner. It looks like the following:
PartNo | SerialNo | Qty | Vehicle
-------+----------+-----+--------
test   |          | 1   | H2
test2  |          | 1   | H2
test3  | test3s/n | 1   | H2
test3  | test4s/n | 1   | H2
test   |          | 1   | H2
I am trying to update 2 tables from this, or insert if the PartNo doesn't exist.
tblPerm2 has PartNo as primary key
tblPerm1 has PartNo, SerialNo, Qty and Vehicle
PartNo must exist in tblPerm2 to be added to tblPerm1
I can get the PartNo inserted into tblPerm2 no problem, but I'm running into problems with tblPerm1.
I'm following user Parfait's example here: Update Existing Access Records from CSV Import, native to MS Access or in VB.NET.
I've tried a plain INSERT and an INSERT with a join. The code below adds everything to tblPerm1, including rows with no SerialNo. How can I insert only the rows from tblTemp that have a serial number?
INSERT INTO tblPerm1 (PartNo, SerialNo, Qty, Vehicle)
SELECT tblTemp.PartNo, tblTemp.SerialNo, tblTemp.Qty, tblTemp.Vehicle
FROM tblTemp
WHERE tblTemp.SerialNo IS NOT NULL;
I expect this to only insert the 2 'test3' rows, but all rows are inserted.
SELECT DISTINCT behaves the same way, except there is only one entry for 'test'.
Once this is done, I'll delete from tblTemp and continue updating and inserting. Maybe there is a better way?
Thanks in advance

Are the SerialNo values actually empty strings instead of NULL?
If this works, then yes they are:
INSERT INTO tblPerm1 (PartNo, SerialNo, Qty, Vehicle)
SELECT tblTemp.PartNo, tblTemp.SerialNo, tblTemp.Qty, tblTemp.Vehicle
FROM tblTemp
WHERE tblTemp.SerialNo <> '';
See How to check for Is not Null And Is not Empty string in SQL server? for more on checking for empty strings, with or without counting whitespace (though details may vary depending on which database engine you are running).
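If the table can contain both NULLs and zero-length strings (Access treats the two as distinct), it may be safest to filter out both at once. Here is a minimal sketch, assuming this runs inside Access where expression-service functions like Len() and Trim() are available to the query engine; the Trim() also discards whitespace-only scans:
INSERT INTO tblPerm1 (PartNo, SerialNo, Qty, Vehicle)
SELECT tblTemp.PartNo, tblTemp.SerialNo, tblTemp.Qty, tblTemp.Vehicle
FROM tblTemp
WHERE tblTemp.SerialNo IS NOT NULL
  AND Len(Trim(tblTemp.SerialNo)) > 0;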

Transform Row Values to Column Names

I have a table of customer contacts and their role. Simplified example below.
customer | role        | userid
---------+-------------+-------
1        | Support     | 123
1        | Support     | 456
1        | Procurement | 567
...
desired output
customer | Support1 | Support2 | Support3 | Support4 | Procurement1 | Procurement2
---------+----------+----------+----------+----------+--------------+-------------
1        | 123      | 456      | null     | null     | 567          | null
2        | 123      | 456      | 12333    | 45776    | 888          | 56723
So I want to dynamically create the number of required columns based on how many users are in that role. It's a small number of roles, and I can assume a max of 5 users in the same role, which means that worst case I need to generate 5 columns per role. The userids don't need to be in any particular order.
My current approach gets one userid per role/customer. Then a second query pulls another id that wasn't part of the first result set, and so on. But that way I have to statically create 5 queries. It works, but I was wondering whether there is a more efficient way of dynamically creating the needed columns.
Example of pulling one user per role:
SELECT customer, role,
       (SELECT TOP 1 userid
        FROM temp AS tmp1
        WHERE tmp1.customer = tmp2.customer AND tmp1.role = tmp2.role) AS userid
FROM temp AS tmp2
GROUP BY customer, role
ORDER BY customer, role
SQL to create the table with dummy data:
create table temp
(
customer int,
role nvarchar(20),
userid int
)
insert into temp values (1,'Support',123)
insert into temp values (1,'Support',456)
insert into temp values (1,'Procurement',567)
insert into temp values (2,'Support',123)
insert into temp values (2,'Support',456)
insert into temp values (2,'Procurement',888)
insert into temp values (2,'Support',12333)
insert into temp values (2,'Support',45776)
insert into temp values (2,'Procurement',56723)
You may need to adapt your approach slightly if you want to avoid getting into the realm of programming user-defined table functions, which is what you would need in order to generate columns dynamically.
You don't mention which SQL database variant you are using (SQL Server, PostgreSQL, ...?). I'm going to assume it supports some form of string aggregation (they pretty much all do), but the syntax varies, so you will probably have to adjust the code to your circumstances.
You mention that the number of roles is small (5-ish?). The proposed solution generates a comma-separated list of user ids, one list per role, using common table expressions (CTEs) and the LISTAGG function (variously named STRING_AGG, GROUP_CONCAT, etc. in other databases).
WITH tsupport AS (
    SELECT customer,
           LISTAGG(userid, ',') AS "Support"
    FROM temp
    WHERE role = 'Support'
    GROUP BY customer),
tprocurement AS (
    SELECT customer,
           LISTAGG(userid, ',') AS "Procurement"
    FROM temp
    WHERE role = 'Procurement'
    GROUP BY customer)
-- tnextrole AS (SELECT ... LISTAGG ... for additional roles)
SELECT a.customer,
       "Support",
       "Procurement"
       -- , "Next Role" etc.
FROM tsupport a
JOIN tprocurement b
  ON a.customer = b.customer
-- JOIN tnextrole ...
Fiddle is here with a result that appears as below based on your dummy data:
customer | Support             | Procurement
---------+---------------------+-------------
1        | 123,456             | 567
2        | 123,456,12333,45776 | 888,56723
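If your database happens to be SQL Server 2017+ (or anything else whose STRING_AGG skips NULL inputs), a single pass with conditional aggregation avoids writing one CTE per role. A sketch, assuming the temp table from the question; the explicit CAST just keeps the concatenated type predictable:
SELECT customer,
       STRING_AGG(CASE WHEN role = 'Support'     THEN CAST(userid AS varchar(20)) END, ',') AS Support,
       STRING_AGG(CASE WHEN role = 'Procurement' THEN CAST(userid AS varchar(20)) END, ',') AS Procurement
FROM temp
GROUP BY customer
ORDER BY customer;
If you truly need the separate Support1...Support5 columns from your desired output, the same CASE trick combined with ROW_NUMBER() and MAX() would pivot them out, at the cost of one expression per column.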

An SQL query that uses values from two columns in a Between Operator and adds these two columns as a class for the result

In one table, I have a column that contains a letter and another that contains a letter of a later alphabetical order, like 'A' for the former and 'R' for the latter, for example. I want to use these two columns in a BETWEEN operator to search for words in another table that start with a letter from the first column and end with a letter from the second. So in my example, 'Air' would fit this requirement. The problem is I also need to add these two columns to the results, so that for my example, the query would return 'Air' with 'A' and 'R' from the other table as two columns in my results. Sorry I can't be more explicit as the data is sensitive.
Based on what you have described, here is one way to get the output.
create table t(id int, start_letter varchar(1), end_letter varchar(1));
create table search_data(words varchar(50));
insert into t values(1,'A','R');
begin
    insert into search_data values('Air');
    insert into search_data values('Amour');
    insert into search_data values('Arogant');
end;
select *
from search_data a
join t b
on lower(substring(a.words,1,1))=lower(b.start_letter)
and lower(substring(reverse(a.words),1,1))=lower(b.end_letter)
+-------+----+--------------+------------+
| words | id | start_letter | end_letter |
+-------+----+--------------+------------+
| Air | 1 | A | R |
| Amour | 1 | A | R |
+-------+----+--------------+------------+
db fiddle link
https://dbfiddle.uk/?rdbms=sqlserver_2019&fiddle=82cf80f4b76cb740ae56db8f236bfd46
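Since SQL Server also has LEFT and RIGHT, the join condition can be written without the REVERSE trick. A sketch of an equivalent query, assuming the same tables as above:
select a.words, b.id, b.start_letter, b.end_letter
from search_data a
join t b
  on lower(left(a.words, 1)) = lower(b.start_letter)
 and lower(right(a.words, 1)) = lower(b.end_letter);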

How can I remove duplicate rows from a table but keeping the summation of values of a column

Suppose there is a table which has several identical rows. I can copy the distinct values by
SELECT DISTINCT * INTO DESTINATIONTABLE FROM SOURCETABLE
but if the table has a column named value, and for the sake of simplicity its value is 1 for one particular item in that table, and that row has another 9 duplicates, then the summation of the value column for that particular item is 10. Now I want to remove the 9 duplicates (or copy the distinct values as I mentioned), and for that item the value should now show 10, not 1. How can this be achieved?
item | value
-----+------
A    | 1
A    | 1
A    | 1
A    | 1
B    | 1
B    | 1
I want to show this as below
item | value
-----+------
A    | 4
B    | 2
Thanks in advance
You can try to use SUM and GROUP BY:
SELECT item, SUM(value) AS value
FROM T
GROUP BY item
SQLfiddle: http://sqlfiddle.com/#!18/fac26/1
[Results]:
| item | value |
|------|-------|
| A | 4 |
| B | 2 |
Broadly speaking, you can just use SUM with a GROUP BY clause.
Something like:
SELECT column1, SUM(column2) AS Count
FROM SOURCETABLE
GROUP BY column1
Here it is in action: Sum + Group By
Since your table probably isn't just two columns of data, here is a slightly more complex example showing how to do this to a larger table: SQL Fiddle
Note that I've selected the columns individually so that I can access the necessary data, rather than using SELECT *. And I have achieved this result without the need for selecting data into another table.
EDIT 2:
Further to your comments, it sounds like you want to alter the actual data in your table rather than just querying it. There may be a more elegant way to do this, but a simple way is to use the above query to populate a temporary table, delete the contents of the existing table, then move all the data back. To do this with my existing example:
WITH MyQuery AS (
SELECT name, type, colour, price, SUM(number) AS number
FROM MyTable
GROUP BY name, type, colour, price
)
SELECT * INTO MyTable2 FROM MyQuery;
DELETE FROM MyTable;
INSERT INTO MyTable(name, type, colour, price, number)
SELECT * FROM MyTable2;
DROP TABLE MyTable2;
WARNING: If you're going to try this, please use a development environment first (i.e. one you don't mind breaking!) to ensure it does exactly what you want it to do. It's imperative that your initial query captures ALL the data you want.
Here is the SQL Fiddle of this example in action: SQL Fiddle
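For what it's worth, here is a slightly safer variation of the same idea, assuming SQL Server: a local temp table (#collapsed is a made-up name) plus a transaction, so a failure partway through can't leave MyTable empty:
BEGIN TRANSACTION;

-- collapse duplicates into one summed row per group
SELECT name, type, colour, price, SUM(number) AS number
INTO #collapsed
FROM MyTable
GROUP BY name, type, colour, price;

DELETE FROM MyTable;

INSERT INTO MyTable (name, type, colour, price, number)
SELECT name, type, colour, price, number FROM #collapsed;

DROP TABLE #collapsed;

COMMIT TRANSACTION;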

INSERT SELECT with differing table/column structure

I am trying to create an INSERT SELECT statement that inserts and converts data from Imported_table to Destination_table.
Imported_table
+------------------+-----------------------+
| Id (varchar(10)) | genre (varchar(4000)) |
+------------------+-----------------------+
| 6 | Comedy |
+------------------+-----------------------+
| 5 | Comedy |
+------------------+-----------------------+
| 1 | Action |
+------------------+-----------------------+
Destination_table (how it should look)
+-----------------------------+----------------------------+
| genre_name (PK,varchar(50)) | description (varchar(255)) |
+-----------------------------+----------------------------+
| Comedy | Description of Comedy |
+-----------------------------+----------------------------+
| Action | Description of Action |
+-----------------------------+----------------------------+
Imported_table.Id isn't used at all but is still in this (old) table.
Destination_table.genre_name is a primary key and should be unique (distinct).
Destination_table.description is compiled with CONCAT('Description of ', genre).
My best try
INSERT INTO testdb.dbo.Destination_table (genre_name, description)
SELECT DISTINCT Genre,
LEFT(Genre,50) AS genre_name,
CAST(CONCAT('Description of ',Genre) AS varchar(255)) AS description
FROM MYIMDB.dbo.Imported_table
Gives the error: The select list for the INSERT statement contains more items than the insert list. The number of SELECT values must match the number of INSERT columns.
Thanks in advance.
The main error in your query is that you are trying to insert 3 columns into a destination table that has only two. That being said, I would just use LEFT for both inserted values and take as much space as the new table can hold:
INSERT INTO testdb.dbo.Destination_table (genre_name, description)
SELECT DISTINCT
LEFT(Genre, 50),
'Description of ' + LEFT(Genre, 240) -- 240 + 15 = 255
FROM MYIMDB.dbo.Imported_table;
As a side note, the original genre field is 4000 characters wide, and your new table structure runs the risk of throwing away a lot of information. It is not clear whether you are concerned with this, but it is worth pointing out.
The error means your SELECT list (Genre, genre_name, description) and INSERT list (genre_name, description) don't match. You need to SELECT the same number of fields as you specify in your INSERT.
Try this:
INSERT INTO testdb.dbo.Destination_table (genre_name, description)
SELECT DISTINCT Genre,
CAST(CONCAT('Description of ',Genre) AS varchar(255)) AS description
FROM MYIMDB.dbo.Imported_table
You have 3 columns in your SELECT, try:
INSERT INTO testdb.dbo.Destination_table (genre_name, description)
SELECT DISTINCT LEFT(Genre,50) AS genre_name,
CAST(CONCAT('Description of ',Genre) AS varchar(255)) AS description
FROM MYIMDB.dbo.Imported_table
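One more caveat: genre_name is the primary key, so re-running the import against a non-empty Destination_table will fail on duplicate keys. A sketch of a guarded version, assuming the same tables and following the truncation pattern from the first answer:
INSERT INTO testdb.dbo.Destination_table (genre_name, description)
SELECT DISTINCT LEFT(i.Genre, 50),
       CAST(CONCAT('Description of ', i.Genre) AS varchar(255))
FROM MYIMDB.dbo.Imported_table AS i
WHERE NOT EXISTS (SELECT 1
                  FROM testdb.dbo.Destination_table AS d
                  WHERE d.genre_name = LEFT(i.Genre, 50));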

SQL Multiple Row Insert w/ multiple selects from different tables

I am trying to do a multiple-row insert based on values that I am pulling from another table. Basically I need to give all existing users who had access to one service access to a different one. Table1 will take the data and run a job to do this.
INSERT INTO Table1 (id, serv_id, clnt_alias_id, serv_cat_rqst_stat)
SELECT
    (SELECT Max(id) + 1
     FROM Table1),
    '33',          -- the new service id
    clnt_alias_id,
    'PI'           -- the code to let the job know to grant access
FROM Table2
WHERE serv_id = '11' -- the old service id
I am getting a Primary key constraint error on id.
Please help.
Thanks,
Colin
This query is impossible. The max(id) sub-select will evaluate only ONCE and return the same value for all rows in the parent query:
MariaDB [test]> create table foo (x int);
MariaDB [test]> insert into foo values (1), (2), (3);
MariaDB [test]> select *, (select max(x)+1 from foo) from foo;
+------+----------------------------+
| x | (select max(x)+1 from foo) |
+------+----------------------------+
| 1 | 4 |
| 2 | 4 |
| 3 | 4 |
+------+----------------------------+
3 rows in set (0.04 sec)
You will have to run your query multiple times, once for each record you're trying to copy. That way the max(id) will get the ID from the previous query.
Is there a requirement that Table1.id be incremental ints? If not, just add the clnt_alias_id to Max(id). This is a nasty workaround though, and you should really try to get that column's type changed to auto_increment, like Marc B suggested.
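If your database supports window functions, another option is to generate the new ids in a single statement by adding ROW_NUMBER() to the current maximum. A sketch, assuming the table names from the question; the usual caveat about concurrent inserts racing on MAX(id) still applies, so an auto_increment/identity column remains the better long-term fix:
INSERT INTO Table1 (id, serv_id, clnt_alias_id, serv_cat_rqst_stat)
SELECT (SELECT MAX(id) FROM Table1)
           + ROW_NUMBER() OVER (ORDER BY clnt_alias_id), -- unique id per inserted row
       '33',           -- the new service id
       clnt_alias_id,
       'PI'            -- grant-access code
FROM Table2
WHERE serv_id = '11';  -- the old service id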