Concatenate data from three columns into one column in Postgres - sql

Can anyone tell me how to concatenate data from three columns into one column in a PostgreSQL database?
e.g.
If the columns are
begin | Month | Year
12    | 1     | 1988
13    | 3     | 1900
14    | 4     | 2000
15    | 5     | 2012
the result should look like:
Begin
12-1-1988
13-3-1900
14-4-2000
15-5-2012

Just use the concatenation operator ||: http://www.sqlfiddle.com/#!1/d66bb/2
select begin || '-' || month || '-' || year as begin
from t;
Output:
| BEGIN |
-------------
| 12-1-1988 |
| 13-3-1900 |
| 14-4-2000 |
| 15-5-2012 |
If you want to change the begin column itself, the begin column must be of a string type first; then do this: http://www.sqlfiddle.com/#!1/13210/2
update t set begin = begin || '-' || month || '-' || year ;
Output:
| BEGIN |
-------------
| 12-1-1988 |
| 13-3-1900 |
| 14-4-2000 |
| 15-5-2012 |
UPDATE
About this comment:
"but I'm not getting a date value for the column"
Use this:
select (begin || '-' || month || '-' || year)::date as begin
from t;
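Note that || returns NULL if any operand is NULL, so a single NULL month would make the whole result NULL. If that can happen in your data, PostgreSQL's concat_ws() skips NULL arguments (a minimal sketch against the same table t):
select concat_ws('-', begin, month, year) as begin
from t;
-- concat_ws('-', 12, NULL, 1988) yields '12-1988' instead of NULL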

Have a look at 9.4. String Functions and Operators in the PostgreSQL documentation.

This is an old post, but I just stumbled upon it. Doesn't it make more sense to produce a value of the date data type? You can do that using:
select make_date(year, month, begin)
A date seems more useful than a string (and you can even format it however you like using to_char()).
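For example, a sketch against the question's table t (make_date() requires PostgreSQL 9.4 or later; the to_char() format mask zero-pads day and month):
select make_date(year, month, begin) as begin_date,
       to_char(make_date(year, month, begin), 'DD-MM-YYYY') as begin_text
from t;
-- begin_date is a proper date (e.g. 1988-01-12); begin_text renders it as '12-01-1988'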

Related

How to transpose columns to rows of a table dynamically in DB2 SQL, where columns may increase over time, without changing the code?

I have a table whose columns will increase over time. I want to write a query that transposes the table even when columns are added later, without adding extra lines of code to achieve it. I need to transpose only those columns where the value is 'Y'.
Eg: Source data on day 1
| Emp_ID | DOC 1 | DOC 2 |
| ------ | ----- | ----- |
| 001    | Y     | Y     |
| 002    | N     | Y     |
Day 1 output
| Emp_ID | Transposed |
| ------ | ---------- |
| 001    | DOC 1      |
| 001    | DOC 2      |
| 002    | DOC 2      |
Now the columns may increase over time, and I want the same query block to handle it without any change to the code. Can we do that?
Source data on day 2
| Emp_ID | DOC 1 | DOC 2 | DOC 3 |
| ------ | ----- | ----- | ----- |
| 001    | Y     | Y     | N     |
| 002    | N     | Y     | Y     |
| 003    | N     | N     | N     |
Day 2 output
| Emp_ID | Transposed |
| ------ | ---------- |
| 001    | DOC 1      |
| 001    | DOC 2      |
| 002    | DOC 2      |
| 002    | DOC 3      |
**Note: only DOC columns having 'Y' as a value are considered. Thanks in advance.**
You need to construct the following statement dynamically for a given base table
MYSCHEMA.MYTAB (EMP_ID INT, DOC1 CHAR, ..., DOCn CHAR):
SELECT T.EMP_ID, V.COLNAME
FROM
  MYSCHEMA.MYTAB T
, (
    SELECT COLNAME
    FROM SYSCAT.COLUMNS
    WHERE TABSCHEMA = 'MYSCHEMA' AND TABNAME = 'MYTAB'
      AND COLNAME <> 'EMP_ID' AND TYPENAME LIKE '%CHAR%'
  ) V
WHERE
     V.COLNAME = 'DOC1' AND T.DOC1 = 'Y'
  ...
  OR V.COLNAME = 'DOCn' AND T.DOCn = 'Y'
Only the WHERE part of the statement is dynamic. The sub-select on SYSCAT.COLUMNS returns all the table columns to transpose (all table columns except EMP_ID).
The following SELECT INTO statement inside the table function generates the final statement for whatever number of such columns.
CREATE OR REPLACE FUNCTION MYFUNC ()
RETURNS TABLE (EMP_ID INT, TRANSPOSED VARCHAR (128))
BEGIN
  DECLARE V_SQL VARCHAR (4000);
  DECLARE V_EMP_ID INT;
  DECLARE V_TRANSPOSED VARCHAR (128);
  DECLARE SQLSTATE CHAR (5);
  DECLARE C1 CURSOR FOR S1;
  -- Assemble the full statement: the static SELECT/FROM part plus one
  -- "V.COLNAME = 'DOCx' AND T.DOCx = 'Y'" predicate per DOC column
  SELECT
     'SELECT T.EMP_ID, V.COLNAME '
  || 'FROM '
  || '  MYSCHEMA.MYTAB T '
  || ', ( '
  || 'SELECT COLNAME '
  || 'FROM SYSCAT.COLUMNS '
  || 'WHERE TABSCHEMA = ''MYSCHEMA'' AND TABNAME = ''MYTAB'' AND COLNAME <> ''EMP_ID'' AND TYPENAME LIKE ''%CHAR%'''
  || ' ) V '
  || 'WHERE '
  || LISTAGG ('V.COLNAME = ''' || COLNAME || ''' AND T.' || COLNAME || ' = ''Y''', ' OR ')
  INTO V_SQL
  FROM SYSCAT.COLUMNS
  WHERE TABSCHEMA = 'MYSCHEMA' AND TABNAME = 'MYTAB' AND COLNAME <> 'EMP_ID' AND TYPENAME LIKE '%CHAR%';
  PREPARE S1 FROM V_SQL;
  OPEN C1;
  L1: LOOP
    FETCH C1 INTO V_EMP_ID, V_TRANSPOSED;
    -- SQLSTATE '02000': no more rows
    IF SQLSTATE = '02000' THEN LEAVE L1; END IF;
    -- Stream the row back to the caller
    PIPE (V_EMP_ID, V_TRANSPOSED);
  END LOOP L1;
  CLOSE C1;
  RETURN;
END#
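Assuming the function was created with # as the statement terminator, it can then be queried like a table (a sketch):
SELECT * FROM TABLE (MYFUNC ()) AS T ORDER BY T.EMP_ID;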

index match in SQL

Sorry in advance. I am very new to SQL. I don't know how to do a simple task like the equivalent of Excel's INDEX MATCH in SQL.
I have 2 tables in SQL (please ignore the dashes, I was using them to align the columns).
Table1
| Name   | Limit1 | Limit2 |
| First  | A      | 05     |
| Second | B      | 10     |
| Third  |        | 10     |
Table2
| Limit1Key | Limit1Value | Limit2Key | Limit2Value |
| A         | 20,000      | 02        | 2,000,000   |
| B         | 50,000      | 05        | 5,000,000   |
|           |             | 10        | 10,000,000  |
I want to get a final table looking like below.
Result Table
| Name   | Limit1 | Limit2     |
| First  | 20,000 | 5,000,000  |
| Second | 50,000 | 10,000,000 |
| Third  |        | 10,000,000 |
If there is already another post similar to this, please guide me to it.
Thank you!
If I understand correctly, you just want two joins (left joins, so rows like "Third", which has no Limit1, are kept):
select t1.*, t2_1.limit1value, t2_2.limit2value
from table1 t1
left join table2 t2_1 on t2_1.limit1key = t1.limit1
left join table2 t2_2 on t2_2.limit2key = t1.limit2;

Check string for substring existence

How can I check whether a certain substring (for instance 18UT) is part of a string in a column?
Redshift's SUBSTRING function allows me to "cut" out a substring based on a starting index and the length of the substring, but not to check whether a specific substring exists in the column's value.
Example:
+------------------+
| col |
+------------------+
| 14TH, 14KL, 18AB |
| 14LK, 18UT, 15AK |
| 14AB, 08ZT, 18ZH |
| 14GD, 52HG, 18UT |
+------------------+
Desired result:
+------------------+------+
| col | 18UT |
+------------------+------+
| 14TH, 14KL, 18AB | No |
| 14LK, 18UT, 15AK | Yes |
| 14AB, 08ZT, 18ZH | No |
| 14GD, 52HG, 18UT | Yes |
+------------------+------+
Here is one option:
select col,
       case when ', ' || col || ', ' like '%, 18UT, %' then 'yes' else 'no' end as has_18ut
from mytable
While this will solve your immediate problem, it should be noted that storing delimited lists in a database table is bad practice and should be avoided. Each value should go in a separate row instead.
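The padding with ', ' on both sides of col is what makes the match exact. A plain containment test is shorter but also matches items like '118UT' (a sketch using position(), which Redshift supports; it returns 0 when the substring is absent):
select col,
       case when position('18UT' in col) > 0 then 'Yes' else 'No' end as has_18ut
from mytable;
-- caveat: unlike the delimiter-padded LIKE above, this also reports 'Yes' for '118UT'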

Oracle regexp_replace: remove 'pp' from values

I need help with removing "pp" from search results where it appears at the beginning of the text. Values in the search results contain spaces and also apostrophes ('). I need to remove only the "pp" from the beginning.
This sounds like:
select regexp_replace(col, '^pp', '')
Or a case expression:
select (case when col like 'pp%' then substr(col, 3) else col end)
You don't need regular expressions and can use simple string functions.
If you want to use SELECT then:
SELECT value,
       CASE
         WHEN value LIKE 'pp%'
         THEN SUBSTR( value, 3 )
         ELSE value
       END AS replaced_value
FROM   table_name
Outputs:
VALUE | REPLACED_VALUE
:---- | :-------------
pp123 | 123
pp1pp | 1pp
123pp | 123pp
12345 | 12345
and, if you want to UPDATE the table:
UPDATE table_name
SET value = SUBSTR( value, 3 )
WHERE value LIKE 'pp%';
Then:
SELECT * FROM table_name;
Outputs:
| VALUE |
| :---- |
| 123 |
| 1pp |
| 123pp |
| 12345 |

How to perform the same aggregation on every column, without listing the columns?

I have a table with N columns. Let's call them c1, c2, c3, c4, ... cN. Across multiple rows, I want to get a single row with count(distinct cX) for each X in [1, N].
c1 | c2 | ... | cn
0 | 4 | ... | 1
Is there a way I can do this (in a stored procedure) without writing every column name into the query manually?
Why?
We've had a problem where bugs in application servers meant that good column values were later overwritten with garbage. To solve this, I'm storing the information log-structured, where each row represents a logical UPDATE query. Then, when given a signal that the record is complete, I can determine if any values were (erroneously) overwritten.
An example of a single correct record spread across multiple rows: there is at most one distinct value for each column.
| id | initialize_time | start_time | end_time |
| 1 | 12:00am | NULL | NULL |
| 1 | 12:00am | 1:00pm | NULL |
| 1 | 12:00am | NULL | 2:00pm |
Reconciled row:
| 1 | 12:00am | 1:00pm | 2:00pm |
An example of an irreconcilable record that I want to detect:
| id | initialize_time | start_time | end_time |
| 1 | 12:00am | NULL | NULL |
| 1 | 12:00am | 1:00pm | NULL |
| 1 | 9:00am | 1:00pm | 2:00pm | -- New initialize time => irreconcilable!
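For a fixed, known set of columns, the check can be written statically, which shows what any dynamic version has to generate (a sketch; event_log and the column names are hypothetical, taken from the example above). An id is irreconcilable when any column holds more than one distinct value:
select id
from event_log
group by id
having count(distinct initialize_time) > 1
    or count(distinct start_time) > 1
    or count(distinct end_time) > 1;
-- count(distinct ...) ignores NULLs, so rows that merely leave a column NULL are fine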
You need dynamic SQL for that, which means you have to create a function or run a DO command. Since you cannot return values directly from the latter, a plpgsql function it is:
CREATE OR REPLACE FUNCTION f_count_all(_tbl text
                                     , OUT columns text[]
                                     , OUT counts bigint[])
  RETURNS record
  LANGUAGE plpgsql AS
$func$
BEGIN
   EXECUTE (
      SELECT 'SELECT ARRAY[' || string_agg('''' || quote_ident(attname) || '''', ', ') || ']
                   , ARRAY[' || string_agg('count(' || quote_ident(attname) || ')', ', ') || ']
              FROM ' || _tbl
      FROM   pg_attribute
      WHERE  attrelid = _tbl::regclass
      AND    attnum >= 1        -- exclude tableoid & friends (neg. attnum)
      AND    NOT attisdropped   -- exclude dropped columns
      GROUP  BY attrelid
   )
   INTO columns, counts;
END
$func$;
Call:
SELECT * FROM f_count_all('myschema.mytable');
Returns:
   columns    |   counts
--------------+------------
 {c1, c2, c3} | {17, 1, 0}
More explanation and links about dynamic SQL and EXECUTE can be found in related questions here on SO.
Related:
Count values for every column in a table
You could even try to return a polymorphic record type to get single columns dynamically, but that's rather complex and advanced. Probably too much effort for your case.