Is there any way to unpivot (turn columns into rows) data in PostgreSQL?
For example, if I have a table like:
ID Name Age
1 Alice 16
2 Bob 21
3 Carl 18
I want to get an output like
ID Column_Name Column_Value
1 Name Alice
1 Age 16
2 Name Bob
2 Age 21
3 Name Carl
3 Age 18
I know I could do it like this (SQL Fiddle with data to try it):
select
U.ID,
unnest(array['Name', 'Age']) as Column_Name,
unnest(array[U.Name, U.Age::text]) as Column_Value
from Users as U
But is there any way I could do it for all columns in table without explicitly specifying column names?
For example, for SQL Server I know of at least 2 ways to do it: dynamic SQL, or turning the data into XML and parsing the XML (SQL Server : Columns to Rows). Maybe there's some XML trick in PostgreSQL too?
With the hstore extension:
SELECT id, skeys(hstore(users)) AS column, svals(hstore(users)) AS value FROM users;
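A portable alternative, if you don't mind listing the columns, is an unpivot with UNION ALL; it works on any SQL database. A minimal sketch using SQLite through Python's sqlite3 module (table and column names follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (ID INTEGER, Name TEXT, Age INTEGER)")
conn.executemany("INSERT INTO Users VALUES (?, ?, ?)",
                 [(1, "Alice", 16), (2, "Bob", 21), (3, "Carl", 18)])

# One SELECT per column, stacked with UNION ALL; Age is cast to text so
# both branches have the same type.
rows = conn.execute("""
    SELECT ID, 'Name' AS Column_Name, Name AS Column_Value FROM Users
    UNION ALL
    SELECT ID, 'Age', CAST(Age AS TEXT) FROM Users
    ORDER BY ID, Column_Name DESC
""").fetchall()

for r in rows:
    print(r)
```

Like the `unnest(array[...])` version, this still names every column explicitly; only the catalog-based approaches (hstore, dynamic SQL) avoid that.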
I am new to SQL in Atlassian and have a question, as there are some differences between the SQL I use daily and the SQL used in the Table Transformer macro in Atlassian Confluence.
I want to create an SQL query that can be used in the Table Transformer macro in Atlassian Confluence. It should sum up the column values of two tables having the same header name and full-join them using another common column as a key. Let's say I have the 2 tables given below:
Table 1
Key   num
katie 23
Jack  41
June  43
Table 2
Key   num
paty  20
Jack  21
June  4
And I want to obtain the below table through an "Atlassian-valid" SQL:
Key   num
Katie 23
paty  20
Jack  62
June  47
Can you please help me get this?
You can try this (in plain SQL; I do not know Atlassian products):
SELECT
  `key`,
  SUM(Num) AS Num
FROM (
  SELECT `key`, Num FROM Table1
  UNION ALL
  SELECT `key`, Num FROM Table2
) x
GROUP BY `key`
ORDER BY `key`
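A quick check of the UNION ALL + GROUP BY pattern, run in SQLite through Python's sqlite3 module with the sample data from the question (the `key` column is quoted because it is a keyword in several dialects):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE Table1 ("key" TEXT, Num INTEGER)')
conn.execute('CREATE TABLE Table2 ("key" TEXT, Num INTEGER)')
conn.executemany("INSERT INTO Table1 VALUES (?, ?)",
                 [("katie", 23), ("Jack", 41), ("June", 43)])
conn.executemany("INSERT INTO Table2 VALUES (?, ?)",
                 [("paty", 20), ("Jack", 21), ("June", 4)])

# Stack both tables, then sum per key.
rows = conn.execute("""
    SELECT "key", SUM(Num) AS Num
    FROM (SELECT "key", Num FROM Table1
          UNION ALL
          SELECT "key", Num FROM Table2) x
    GROUP BY "key"
    ORDER BY "key"
""").fetchall()
print(rows)
```

Keys present in only one table (katie, paty) survive with their single value, which is what the FULL JOIN in the question was after.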
I am encountering a problem with joining a table in SQL Server. I get more rows than I need in my view.
The table I join looks something like this:
ID  other ID  Name
1   1         Bob
2   1         Max
3   2         Jim
4   2         Tom
5   2         Ron
The new table should look like this:
other ID  Names
1         Bob,Max
2         Jim,Tom,Ron
That way I don't get a new row every time a new Name comes up for the same "other ID".
Can someone please help me solve this problem? Thank you.
If your DBMS is SQL Server (2017+) or Postgres, you can use the STRING_AGG aggregate function to collect the names associated with the same id into one row. Here is an example:
SELECT other_id,
STRING_AGG(Name, ',') AS Names
FROM table_name
GROUP BY other_id
ORDER BY other_id
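The same aggregation can be checked in SQLite through Python's sqlite3 module, where the equivalent of STRING_AGG is group_concat (the table name `table_name` is taken from the answer above; SQLite does not guarantee the order of names inside each group):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_name (ID INTEGER, other_id INTEGER, Name TEXT)")
conn.executemany("INSERT INTO table_name VALUES (?, ?, ?)",
                 [(1, 1, "Bob"), (2, 1, "Max"),
                  (3, 2, "Jim"), (4, 2, "Tom"), (5, 2, "Ron")])

# group_concat collapses all Names per other_id into one comma-separated string.
rows = conn.execute("""
    SELECT other_id, group_concat(Name, ',') AS Names
    FROM table_name
    GROUP BY other_id
    ORDER BY other_id
""").fetchall()
print(rows)
```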
You basically have to use GROUP BY with a string aggregate function:
SELECT t1.id, STRING_AGG(t2.name, ',')
FROM table1 t1
JOIN table2 t2 ON t1.id = t2.id
GROUP BY t1.id
Just wondering, is it possible to insert records into a table from 2 different sources in SQL?
Example:
Table 1
Number
1
2
Table 2
Name
Alex
Amy
I want to insert records into table 3 from table 1 and table 2 and the result for table 3 should be:
Number Name
1 Alex
2 Alex
1 Amy
2 Amy
Any way that I can do it in SQL Server?
Try a CROSS JOIN and a SELECT ... INTO. The join pairs every row with every row, and SELECT ... INTO fills the result into a new table on the fly:
SELECT Nrs.Nr
,Nms.Name
INTO dbo.TheNewTable
FROM dbo.NumberTable AS Nrs
CROSS JOIN dbo.NameTable AS Nms;
See the result:
SELECT * FROM dbo.TheNewTable;
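The same pattern can be sketched in SQLite through Python's sqlite3 module; SQLite has no SELECT ... INTO, so CREATE TABLE ... AS SELECT plays that role here (table names follow the answer above, minus the dbo schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE NumberTable (Nr INTEGER)")
conn.execute("CREATE TABLE NameTable (Name TEXT)")
conn.executemany("INSERT INTO NumberTable VALUES (?)", [(1,), (2,)])
conn.executemany("INSERT INTO NameTable VALUES (?)", [("Alex",), ("Amy",)])

# CROSS JOIN pairs every number with every name; the result lands in a new table.
conn.execute("""
    CREATE TABLE TheNewTable AS
    SELECT Nrs.Nr, Nms.Name
    FROM NumberTable AS Nrs
    CROSS JOIN NameTable AS Nms
""")
rows = conn.execute("SELECT * FROM TheNewTable ORDER BY Name, Nr").fetchall()
print(rows)
```

With 2 numbers and 2 names the new table holds 2 × 2 = 4 rows, matching the expected output in the question.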
Open two connections.
Use a script, not only SQL: Java, for example.
Use a HashMap and a HashSet ...
Insert into a temporary table (ON COMMIT DROP, for example).
Copy into table 3.
Don't forget to close the connections.
I have a simple query, a vanilla SELECT statement, to which I want to apply filters on values provided by the user.
SELECT A,B,C,D,E FROM TAB
WHERE ....
Here the WHERE is not fixed, i.e. the user may input values for C, so only C should be filtered, or for D, or E, and so on. The problem is that the user says "filter callerID between 1 and 10", etc., but the database column has a different name, so to form a working query I have to map callerID to the column name. As this would be in a procedure, I've thought of passing a CSV of user-input column names, a CSV of DB column names, and the filter's begin and end values, then laboriously extracting the values, matching the correct DB column name, and forming the query. This works, but it is very cumbersome and not clean. Could there be a better way of doing this?
Do the column names in the table change?
Or are columns in the table added/removed?
If not, you can generate a number to map to each column in the table like:
SQL> SELECT column_name, ROW_NUMBER() OVER (PARTITION BY 1 ORDER BY column_name) "COLUMN_NUMBER"
2 FROM dba_tab_columns
3 WHERE table_name='DBA_USERS'
4
SQL> /
COLUMN_NAME COLUMN_NUMBER
------------------------------ -------------
ACCOUNT_STATUS 1
CREATED 2
DEFAULT_TABLESPACE 3
EXPIRY_DATE 4
EXTERNAL_NAME 5
INITIAL_RSRC_CONSUMER_GROUP 6
LOCK_DATE 7
PASSWORD 8
PROFILE 9
TEMPORARY_TABLESPACE 10
USERNAME 11
USER_ID 12
12 rows selected.
Then when the user selects column 9, you know it maps to the "PROFILE" column.
If the column names can change, or if columns are added/dropped dynamically, then this won't work though.
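The same idea works on any database that exposes its catalog; as a sketch, here it is in SQLite through Python's sqlite3 module, using PRAGMA table_info in place of dba_tab_columns (the table and its columns are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, created TEXT, profile TEXT)")

# PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk).
cols = conn.execute("PRAGMA table_info(users)").fetchall()

# Number the columns 1..n in alphabetical order, like the ROW_NUMBER query above.
names = sorted(row[1] for row in cols)
mapping = {i + 1: name for i, name in enumerate(names)}
print(mapping)
```

When the user selects column 2, the procedure looks up `mapping[2]` to get the real column name. The same caveat applies: if columns are added or dropped, the numbering shifts.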
I have a table consisting of the columns id, name, job,
and I have rows stored with this data:
id: 1
name : jason
job: 11,12
id: 2
name : mark
job: 11,14
I want to write a SQL command to fetch the names which have the value "14" stored in the job column of this table.
How do I do that?
Thanks
You can do:
WHERE FIND_IN_SET(14, job)
But that is really not the correct way. The correct way is to normalize your database and separate the job field into its own table. Check this answer for extra information:
PHP select row from db where ID is found in group of IDs
You shouldn't be storing multiple job ids in the same field. You want to normalise your data model. Remove the 'job' column from your names table, and have a second JOB table defined like this:
id | name_id | job_id
1  | 1       | 11
2  | 1       | 12
3  | 2       | 11
4  | 2       | 14
where name_id is the primary id ('id') of the entry in the names table.
Then you can do:
SELECT name_id, job_id FROM JOB WHERE name_id = 1;
for example. As well as making your data storage far more extensible - you can now assign unlimited numbers of job_ids to each name for example - it'll also be much faster to execute queries as all your entries are now ints and no string processing is required.
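A quick check of the normalised layout, using SQLite through Python's sqlite3 module: the original question ("which names have job 14?") becomes a plain join, with no string matching at all (the `names` table name is assumed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE names (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE job (id INTEGER PRIMARY KEY, name_id INTEGER, job_id INTEGER)")
conn.executemany("INSERT INTO names VALUES (?, ?)", [(1, "jason"), (2, "mark")])
conn.executemany("INSERT INTO job VALUES (?, ?, ?)",
                 [(1, 1, 11), (2, 1, 12), (3, 2, 11), (4, 2, 14)])

# Integer equality on an indexed key, instead of LIKE over a comma list.
rows = conn.execute("""
    SELECT n.name
    FROM names n
    JOIN job j ON j.name_id = n.id
    WHERE j.job_id = 14
""").fetchall()
print(rows)
```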
SELECT *
FROM MyTable
WHERE job LIKE '14,%' OR job LIKE '%,14' OR job LIKE '%,14,%'
EDIT: Thanks to onedaywhen
SELECT *
FROM MyTable
WHERE (',' + job + ',') LIKE '%,14,%'
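The delimiter-wrapping trick can be verified in SQLite through Python's sqlite3 module; note SQLite concatenates strings with `||` rather than `+` (sample rows follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (id INTEGER, name TEXT, job TEXT)")
conn.executemany("INSERT INTO MyTable VALUES (?, ?, ?)",
                 [(1, "jason", "11,12"), (2, "mark", "11,14")])

# Wrapping job in commas means every id, including the first and last,
# is surrounded by delimiters, so one pattern covers all positions.
rows = conn.execute("""
    SELECT name FROM MyTable
    WHERE ',' || job || ',' LIKE '%,14,%'
""").fetchall()
print(rows)
```

jason's "11,12" is skipped even though it contains the digits "1" and "4" elsewhere, which is exactly the false match the naked `LIKE '%14%'` approach would suffer from.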