Insert data into a table where data in only two columns varies - SQL

I have a list of names and I need to insert them into a table with a primary key that is auto-generated and three other columns that will hold the same data for every name. Is there any way to achieve this in a single query?
| ID  | Name  | Age | Class | In-Charge |
| 121 | Luc   | 12  | Five  | 47855     |
| 122 | Wayne | 12  | Five  | 47855     |
| 123 | Lih   | 12  | Five  | 47855     |

You can use something like this, where you SELECT the name from your list and supply the other columns as static values:
insert into yourtable (Name, Age, Class, [In-Charge])
select Name, 12, 'Five', 47855
from yourlist
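If the names are not already stored in a table, the same pattern works with an inline row constructor. A minimal sketch, assuming SQL Server 2008 or later and the same yourtable placeholder:
-- 'yourtable' and the literal names are placeholders
insert into yourtable (Name, Age, Class, [In-Charge])
select v.Name, 12, 'Five', 47855
from (values ('Luc'), ('Wayne'), ('Lih')) as v(Name)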


How can I convert a varchar list of ids (1,2,3) to the correlating name values in SQL Server 2014?

I have an issue with converting a varchar column filled with ids (foreign keys) into another string with names; these names are stored in another table with their correlating ids.
Data
x-----x------------------------x
| Id  | Foreign Keys (varchar) |
x-----x------------------------x
| 1   | 1,2,3                  |
| 2   | 2,3,4                  |
| 3   | 4                      |
x-----x------------------------x
Names
x-----x-----------------x
| Id  | Names (varchar) |
x-----x-----------------x
| 1   | Rick            |
| 2   | Steven          |
| 3   | Charly          |
| 4   | Tom             |
x-----x-----------------x
Basically I need to UPDATE the values in the Data table to a varchar like 'Rick, Steven, Charly'.
I am working in SQL Server 2014, so I can't use the function STRING_SPLIT.
Help would be really appreciated
Thanks
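One approach that works on SQL Server 2014 is to skip splitting entirely: match each id against the comma-delimited list with LIKE (wrapping both sides in commas so id 1 cannot match 11) and rebuild the string with STUFF and FOR XML PATH. A sketch assuming the tables are named Data and Names and the list column is called ForeignKeys (the question shows it with a space):
SELECT d.Id,
       -- STUFF strips the leading ', ' left by the concatenation
       STUFF((SELECT ', ' + n.Names
              FROM Names AS n
              WHERE ',' + d.ForeignKeys + ',' LIKE '%,' + CAST(n.Id AS varchar(10)) + ',%'
              ORDER BY n.Id
              FOR XML PATH('')), 1, 2, '') AS NameList
FROM Data AS d;
The same subquery can be used as the right-hand side of an UPDATE if the column really must be overwritten in place.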

How do I get values that are themselves not unique, but are linked to unique fields (in SQL)?

I can't give the actual table, but my problem is something like this:
Assuming that there is a table called Names with entries like these:
+-------+------+
| name  | id   |
+-------+------+
| Jack  | 1001 |
| Jack  | 1022 |
| John  | 1010 |
| Boris | 1092 |
+-------+------+
I need to select all the unique names from that table and display them (only names, not ids). But if I do:
SELECT DISTINCT name FROM Names;
Then it will return:
+-------+
| name |
+-------+
| Jack |
| John |
| Boris |
+-------+
But as you can see in the table, the 2 people named "Jack" are different, since they have different ids. How do I get an output like this one:
+-------+
| name |
+-------+
| Jack |
| Jack |
| John |
| Boris |
+-------+
?
Assume that some ids can or will be repeated (the column is not marked as a primary key in the question).
Also, in the question the result should have 1 column and some number of rows (the exact number is given: 18,013). Is there a way to check whether I have the right number of rows? I know I can use COUNT(), but while selecting the unique values I used GROUP BY, so COUNT() would return how many names share each unique id, as in:
SELECT COUNT(name), id FROM Names GROUP BY id;
+-------------+------+
| COUNT(name) | id   |
+-------------+------+
| 2           | 1001 |
| 1           | 1022 |
| 1           | 1092 |
| 3           | 1003 |
+-------------+------+
So, is there something to help me verify my output?
You can use group by:
select name
from names
group by name, id;
You can get all the distinct persons with:
SELECT DISTINCT name, id
FROM names
and you can select only the names from the above query (note that the derived table needs an alias):
SELECT name
FROM (
    SELECT DISTINCT name, id
    FROM names
) AS t
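As for verifying the row count, you can wrap the de-duplicated query and count its rows; the total should match the expected 18,013. A sketch using the same table names:
SELECT COUNT(*) AS row_count
FROM (
    SELECT name, id
    FROM names
    GROUP BY name, id
) AS distinct_people;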

Create a column based on values in another column in Redshift

Suppose I have the following table:
|-----|
| id  |
|-----|
| 12  |
| 390 |
| 13  |
|-----|
And I want to create another column based on a map of the id column, for example:
12 -> qwert
13 -> asd
390 -> iop
So I basically want a query to create a column based on that map, my final table would be:
|-----|-------|
| id  | col   |
|-----|-------|
| 12  | qwert |
| 390 | iop   |
| 13  | asd   |
|-----|-------|
I have this map in a python dictionary.
Is this possible?
(It is basically pandas.map)
It appears that you wish to "fix" some data that is already in your Redshift database.
You could include the mapping inline using this technique (note that table is a reserved word, so a real table name, mytable here, is needed):
WITH foo (id, col) AS (
    VALUES (12, 'qwert'), (13, 'asd'), (390, 'iop')
)
SELECT t.id, foo.col
FROM mytable AS t
JOIN foo ON foo.id = t.id
You could do it as an UPDATE statement, but it gets tricky. It would probably be easier to craft a SELECT statement that has everything you want, then use CREATE TABLE new_table AS SELECT...
See: CREATE TABLE AS - Amazon Redshift
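A sketch of that CREATE TABLE AS route, written with UNION ALL for the inline map in case your Redshift version does not accept a VALUES list outside of INSERT; mytable and mytable_new are placeholders:
-- 'mytable' and 'mytable_new' are placeholders
CREATE TABLE mytable_new AS
SELECT t.id, m.col
FROM mytable AS t
LEFT JOIN (SELECT 12 AS id, 'qwert' AS col
           UNION ALL SELECT 13, 'asd'
           UNION ALL SELECT 390, 'iop') AS m
  ON m.id = t.id;
The LEFT JOIN keeps ids that are missing from the map (their col comes back NULL); switch to an inner JOIN to drop them instead.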

Pivot SSRS Dataset

I have a dataset which looks like so
ID | PName | Node | Val   |
1  | Tag   | Name | XBA   |
2  | Tag   | Desc | Dec1  |
3  | Tag   | unit | Int   |
6  | Tag   | tids | 100   |
7  | Tag   | post | AAA   |
1  | Tag   | Name | XBB   |
2  | Tag   | Desc | Des9  |
3  | Tag   | unit | Float |
7  | Tag   | post | BBB   |
6  | Tag   | tids | 150   |
I would like the result in my report to be
Name | Desc | Unit  | Tids | Post |
XBA  | Dec1 | int   | 100  | AAA  |
XBB  | Des9 | Float | 150  | BBB  |
I have tried using a SSRS Matrix with
Row: PName
Data: Node
Value: Val
The results were simply one row with Name, the next row with Desc, the next with unit, and so on. It's not all in the same row, and the second record was also missing. This is possibly because there is no grouping on the dataset.
What is a good way of achieving the expected results?
I would not recommend this for a production scenario, but if you need to knock out a report quickly you can try this. I would just not feel comfortable that the order of the records you get will always be what you expect.
You COULD insert the results of the SP into a table (a regular table, temp table, or table variable; it doesn't matter, as long as you can add an identity column). Assuming the rows always come out in the correct order (which is probably not a valid assumption 100% of the time), add an identity column to the table to get a unique row number for each row. From there you should be able to write some math logic to "group" your values together and then pivot out what you want.
create table #temp (ID int, PName varchar(100), Node varchar(100), Val varchar(100))
insert into #temp exec (your stored proc)
alter table #temp add UniqueID int identity
then use UniqueID (integer division by 5, since each record spans five rows) to group records together and then pivot, as in the sketch below
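A sketch of that math logic, assuming every logical record arrives as exactly five consecutive rows, so integer division of the row number assigns a record group that PIVOT can collapse:
select [Name], [Desc], [unit], [tids], [post]
from (
    -- (UniqueID - 1) / 5 maps rows 1-5 to group 0, rows 6-10 to group 1, etc.
    select (UniqueID - 1) / 5 as RecordNo, Node, Val
    from #temp
) as src
pivot (max(Val) for Node in ([Name], [Desc], [unit], [tids], [post])) as p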

SQL deleting rows with duplicate dates conditional upon values in two columns

I have data on approximately 1000 individuals, where each individual can have multiple rows with multiple dates, and where the columns indicate the program admitted to and a code number.
I need each row to contain a distinct date, so I need to delete the rows with duplicate dates from my table. Where there are multiple rows with the same date, I need to keep the row that has the lowest code number. Where more than one row has both the same date and the same lowest code, I need to keep the row that has also been in program (prog) B. For example:
| ID | DATE       | CODE | PROG |
---------------------------------
| 1  | 1996-08-16 | 24   | A    |
| 1  | 1997-06-02 | 123  | A    |
| 1  | 1997-06-02 | 123  | B    |
| 1  | 1997-06-02 | 211  | B    |
| 1  | 1997-08-19 | 67   | A    |
| 1  | 1997-08-19 | 23   | A    |
So my desired output would look like this:
| ID | DATE       | CODE | PROG |
---------------------------------
| 1  | 1996-08-16 | 24   | A    |
| 1  | 1997-06-02 | 123  | B    |
| 1  | 1997-08-19 | 23   | A    |
I'm struggling to come up with a solution to this, so any help greatly appreciated!
Microsoft SQL Server 2012 (X64)
The following works with your test data (ID has to be part of the GROUP BY, and mytable stands in for your real table name, since table is a reserved word). Be aware that MIN(code) and MAX(prog) are computed independently, so in general they can come from different rows:
SELECT ID, date, MIN(code), MAX(prog) FROM mytable
GROUP BY ID, date
You can then use the results of this query to create and populate a new table, or to delete all records not returned by it.
SQLFiddle http://sqlfiddle.com/#!9/0ebb5/5
You can use the min() function:
select ID, DATE, min(CODE), max(PROG)
from mytable
group by ID, DATE
I assume that your table has a valid primary key; however, I would recommend taking ID as the primary key. Hope this helps.
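Since both answers can mix CODE and PROG values from different rows, here is a sketch that enforces the stated rules directly with ROW_NUMBER() (available in SQL Server 2012); mytable is a placeholder. Rows are ranked within each ID and DATE by lowest CODE, with PROG = 'B' winning ties, and everything after the first-ranked row is deleted:
-- 'mytable' is a placeholder for the real table name
WITH ranked AS (
    SELECT *,
           ROW_NUMBER() OVER (
               PARTITION BY ID, [DATE]
               ORDER BY CODE,
                        CASE WHEN PROG = 'B' THEN 0 ELSE 1 END
           ) AS rn
    FROM mytable
)
DELETE FROM ranked
WHERE rn > 1;
Against the sample data this keeps (1996-08-16, 24, A), (1997-06-02, 123, B), and (1997-08-19, 23, A), which matches the desired output.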