How to merge two columns and make a new column in SQL

I have to merge two columns into one column that has different values.
I wrote this code but I can't continue:
SELECT Title, ...
FROM BuyItems
input:
| title | UK | US | total |
|-------|----|----|-------|
| coca  | 3  | 5  | 8     |
| cake  | 2  | 0  | 2     |
output:
| title | Origin | Total |
|-------|--------|-------|
| coca  | UK     | 3     |
| coca  | US     | 5     |
| cake  | UK     | 2     |

You can use CROSS APPLY and a table value constructor to do this:
-- EXAMPLE DATA START
WITH BuyItems AS
( SELECT x.title, x.UK, x.US
FROM (VALUES ('coca', 3, 5), ('cake', 2, 0)) x (title, UK, US)
)
-- EXAMPLE DATA END
SELECT bi.Title, upvt.Origin, upvt.Total
FROM BuyItems AS bi
CROSS APPLY (VALUES ('UK', bi.UK), ('US', bi.US)) upvt (Origin, Total)
WHERE upvt.Total <> 0;
Alternatively, you can use the UNPIVOT operator:
-- EXAMPLE DATA START
WITH BuyItems AS
( SELECT x.title, x.UK, x.US
FROM (VALUES ('coca', 3, 5), ('cake', 2, 0)) x (title, UK, US)
)
-- EXAMPLE DATA END
SELECT upvt.Title, upvt.Origin, upvt.Total
FROM BuyItems
UNPIVOT (Total FOR Origin IN (UK, US)) AS upvt
WHERE upvt.Total <> 0;
My preference is usually for the former, as it is much more flexible: you can use explicit casting to combine columns of different types, or unpivot multiple columns at once. UNPIVOT works just fine, and there is no reason not to use it, but since UNPIVOT covers a limited set of scenarios while CROSS APPLY/VALUES covers them all, I go for the latter by default.
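For instance, here is a minimal sketch of that flexibility, reusing the example data from above: all three columns are cast to nvarchar, so the text column and the int columns can share one output column, something UNPIVOT would reject:
-- EXAMPLE DATA START
WITH BuyItems AS
( SELECT x.title, x.UK, x.US
FROM (VALUES ('coca', 3, 5), ('cake', 2, 0)) x (title, UK, US)
)
-- EXAMPLE DATA END
SELECT upvt.Attribute, upvt.Val
FROM BuyItems AS bi
CROSS APPLY (VALUES
    ('Title', CAST(bi.title AS nvarchar(20))),
    ('UK',    CAST(bi.UK    AS nvarchar(20))),
    ('US',    CAST(bi.US    AS nvarchar(20)))
) upvt (Attribute, Val);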

Use apply:
select v.*
from t cross apply
(values (t.title, 'UK', uk), (t.title, 'US', us)
) v(title, origin, total)
where v.total > 0;

This is a simple unpivot:
SELECT YT.title,
V.Origin,
V.Total
FROM dbo.YourTable YT
CROSS APPLY (VALUES('UK',UK),
('US',US))V(Origin,Total);

Related

What would be a good way to pivot this data?

I have data in the following granularity:
CityID | Name | Post_Science | Pre_Science | Post_Reading | Pre_Reading | Post_Writing | Pre_Writing
123    | Bob  | 2.0          | 1.0         | 2.0          | 4.0         | 1.0          | 1.0
I'll be calling those <Post/Pre>_XXXXXX columns as Labels. Basically, these column names without the 'Pre' or 'Post' text map to a Label in another table.
I want to pivot the data in a way so that the pre and post values of the same Label are in the same row, for each group of CityID, Name, Label. So it would look like this:
CityID | Name | Pre Category | Post Category | Label
123 | Bob | 1.0 | 2.0 | Science
123 | Bob | 4.0 | 2.0 | Reading
123 | Bob | 1.0 | 1.0 | Writing
The Label comes from a separate table via a join. Hopefully that doesn't confuse anyone. If it does, ignore the column for now.
So there are much more of these categories - Science, Reading, and Writing are just a few I picked for example.
I've thought of two options to get the data into this format:
1. Unpivot all the data into one long list of values, grouped by CityID, Name, Label. Then parse the Label name and pivot the pre and post values of each category back into one row.
2. Do a bunch of unions: select all the Science rows in one select statement, all the Reading rows in another, and union them. There are about 50 pairings, so 50 union statements.
I'm imagining the first option is cleaner than the latter. Any other options, though?
This is unpivoting and I strongly recommend apply:
select t.CityId, t.Name, v.*
from t cross apply
(values (t.Post_Science, t.Pre_Science, 'Science'),
(t.Post_Reading, t.Pre_Reading, 'Reading'),
(t.Post_Writing, t.Pre_Writing, 'Writing')
) v(postcategory, precategory, label) ;
UNPIVOT is very particular syntax to do one thing. APPLY introduces lateral joins, which are very powerful for this and many other purposes.
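For instance, here is a small sketch building on the query above (the derived change column is my own illustrative addition): the lateral rowset can be combined with ordinary expressions in the select list, which plain UNPIVOT cannot express:
select t.CityId, t.Name, v.label,
       v.precategory, v.postcategory,
       v.postcategory - v.precategory as change
from t cross apply
     (values (t.Post_Science, t.Pre_Science, 'Science'),
             (t.Post_Reading, t.Pre_Reading, 'Reading'),
             (t.Post_Writing, t.Pre_Writing, 'Writing')
     ) v(postcategory, precategory, label);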
Clearly Gordon's solution is the more performant, but if you have many or variable columns, here is an option that will dynamically UNPIVOT your data without actually using dynamic SQL.
Example
Select A.CityID
      ,A.Name
      ,PreCat  = max(case when Item like 'Pre%'  then Value end)
      ,PostCat = max(case when Item like 'Post%' then Value end)
      ,Label   = substring(Item, charindex('_', Item + '_') + 1, 50)
 From  YourTable A
 Cross Apply (values (cast((Select A.* for XML RAW) as xml))) B(XMLData)
 Cross Apply (
              Select Item  = xAttr.value('local-name(.)', 'varchar(100)')
                    ,Value = xAttr.value('.', 'varchar(max)')
              From   B.XMLData.nodes('//@*') xNode(xAttr)
              Where  xAttr.value('local-name(.)', 'varchar(100)') not in ('CityId','Name','Other-Columns','To-Exclude')
             ) C
 Group By A.CityID
         ,A.Name
         ,substring(Item, charindex('_', Item + '_') + 1, 50)
Returns
CityID  Name  PreCat  PostCat  Label
123     Bob   4.0     2.0      Reading
123     Bob   1.0     2.0      Science
123     Bob   1.0     1.0      Writing

BINARY_CHECKSUM - different result depending on number of rows

I wonder why the BINARY_CHECKSUM function returns different results for the same input:
SELECT *, BINARY_CHECKSUM(a,b) AS bc
FROM (VALUES(1, NULL, 100),
(2, NULL, NULL),
(3, 1, 2)) s(id,a,b);
SELECT *, BINARY_CHECKSUM(a,b) AS bc
FROM (VALUES(1, NULL, 100),
(2, NULL, NULL)) s(id,a,b);
Output:
+-----+----+------+-------------+
| id | a | b | bc |
+-----+----+------+-------------+
| 1 | | 100 | -109 |
| 2 | | | -2147483640 |
| 3 | 1 | 2 | 18 |
+-----+----+------+-------------+
-- -109 vs 100
+-----+----+------+------------+
| id | a | b | bc |
+-----+----+------+------------+
| 1 | | 100 | 100 |
| 2 | | | 2147483647 |
+-----+----+------+------------+
And for the second sample I get what I would anticipate:
SELECT *, BINARY_CHECKSUM(a,b) AS bc
FROM (VALUES(1, 1, 100),
(2, 3, 4),
(3,1,1)) s(id,a,b);
SELECT *, BINARY_CHECKSUM(a,b) AS bc
FROM (VALUES(1, 1, 100),
(2, 3, 4)) s(id,a,b);
Output for the first two rows in both cases:
+-----+----+------+-----+
| id | a | b | bc |
+-----+----+------+-----+
| 1 | 1 | 100 | 116 |
| 2 | 3 | 4 | 52 |
+-----+----+------+-----+
db<>fiddle demo
It has strange consequences when I want to compare two tables/queries:
WITH t AS (
SELECT 1 AS id, NULL AS a, 100 b
UNION ALL SELECT 2, NULL, NULL
UNION ALL SELECT 3, 1, 2 -- comment this out
), s AS (
SELECT 1 AS id ,100 AS a, NULL as b
UNION ALL SELECT 2, NULL, NULL
UNION ALL SELECT 3, 2, 1 -- comment this out
)
SELECT t.*,s.*
,BINARY_CHECKSUM(t.a, t.b) AS bc_t, BINARY_CHECKSUM(s.a, s.b) AS bc_s
FROM t
JOIN s
ON s.id = t.id
WHERE BINARY_CHECKSUM(t.a, t.b) = BINARY_CHECKSUM(s.a, s.b);
db<>fiddle demo2
For 3 rows I get a single result:
+-----+----+----+-----+----+----+--------------+-------------+
| id | a | b | id | a | b | bc_t | bc_s |
+-----+----+----+-----+----+----+--------------+-------------+
| 2 | | | 2 | | | -2147483640 | -2147483640 |
+-----+----+----+-----+----+----+--------------+-------------+
but for 2 rows I also get id = 1:
+-----+----+------+-----+------+----+-------------+------------+
| id | a | b | id | a | b | bc_t | bc_s |
+-----+----+------+-----+------+----+-------------+------------+
| 1 | | 100 | 1 | 100 | | 100 | 100 |
| 2 | | | 2 | | | 2147483647 | 2147483647 |
+-----+----+------+-----+------+----+-------------+------------+
Remarks:
I am not searching for alternatives like HASHBYTES/MD5/CHECKSUM.
I am aware that BINARY_CHECKSUM can lead to collisions (two different calls producing the same output), but the scenario here is a bit different.
The documentation says:
For this definition, we say that null values, of a specified type, compare as equal values. If at least one of the values in the expression list changes, the expression checksum can also change. However, this is not guaranteed. Therefore, to detect whether values have changed, we recommend use of BINARY_CHECKSUM only if your application can tolerate an occasional missed change.
It is strange to me that a hash function returns different results for the same input arguments.
Is this behaviour by design, or is it some kind of glitch?
EDIT:
As @scsimon points out, it works for materialized tables but not for CTEs.
db<>fiddle actual table
Metadata for the CTE:
SELECT name, system_type_name
FROM sys.dm_exec_describe_first_result_set('
SELECT *
FROM (VALUES(1, NULL, 100),
(2, NULL, NULL),
(3, 1, 2)) s(id,a,b)', NULL,0);
SELECT name, system_type_name
FROM sys.dm_exec_describe_first_result_set('
SELECT *
FROM (VALUES(1, NULL, 100),
(2, NULL, NULL)) s(id,a,b)', NULL,0)
-- working workaround
SELECT name, system_type_name
FROM sys.dm_exec_describe_first_result_set('
SELECT *
FROM (VALUES(1, cast(NULL as int), 100),
(2, NULL, NULL)) s(id,a,b)', NULL,0)
In all cases all the columns are INT, but with an explicit CAST it behaves as it should.
db<>fiddle metadata
This has nothing to do with the number of rows. It is because the values in one of the columns of the 2-row version are always NULL. The default type of NULL is int and the default type of a numeric constant (of this length) is int, so these should be comparable. But from a values() derived table, these are (apparently) not exactly the same type.
In particular, a column with only typeless NULLs from a derived table is not comparable, so it is excluded from the binary checksum calculation. This does not occur in a real table, because all columns have types.
The rest of the answer illustrates what is happening.
The code behaves as expected with type conversions:
SELECT *, BINARY_CHECKSUM(a, b) AS bc
FROM (VALUES(1, cast(NULL as int), 100),
(2, NULL, NULL)
) s(id,a,b);
Here is a db<>fiddle.
Actually creating tables with the values suggests that columns with only NULL values have exactly the same type as columns with explicit numbers. That suggests that the original code should work. But an explicit cast also fixes the problem. Very strange.
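To check against a real table, here is a quick sketch (the temp table name is my own); with both columns declared as int, the checksums should match the three-row output above (-109 and -2147483640) regardless of how many rows are present:
CREATE TABLE #bc_test (id int, a int, b int);
INSERT INTO #bc_test VALUES (1, NULL, 100), (2, NULL, NULL);

SELECT *, BINARY_CHECKSUM(a, b) AS bc
FROM #bc_test;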
This is really, really strange. Consider the following:
select v.*, checksum(a, b), checksum(c,b)
FROM (VALUES(1, NULL, 100, NULL),
(2, 1, 2, 1.0)
) v(id, a, b, c);
The change in type for "c" affects the binary_checksum() for the second row, but not for the first.
This is my conclusion. When all the values in a column are untyped NULLs, binary_checksum() is aware of this and the column falls into the category of "noncomparable data type". The checksum is then based on the remaining columns.
You can validate this by seeing the error when you run:
select v.*, binary_checksum(a)
FROM (VALUES(1, NULL, 100, NULL),
(2, NULL, 2, 1.0)
) v( id,a, b, c);
It complains:
Argument data type NULL is invalid for argument 1 of checksum function.
Ironically, this is not a problem if you save the results into a table and use binary_checksum(). The issue appears to be some interaction with values() and data types -- but something that is not immediately obvious in the information_schema.columns table.
The happyish news is that the code should work on tables, even if it does not work on values() generated derived tables -- as this SQL Fiddle demonstrates.
I also learned that a column filled with NULLs really is typeless. The assignment of the int data type in a select into seems to happen when the table is being defined. The "typeless" type is converted to an int.
For the literal NULL without the CAST (and without any typed values in the column) it entirely ignores it and just gives you the same result as BINARY_CHECKSUM(b).
This seems to happen very early on. The initial tree representation output from
SELECT *, BINARY_CHECKSUM(a,b) AS bc
FROM (VALUES(1, NULL, 100),
(2, NULL, NULL)) s(id,a,b)
OPTION (RECOMPILE, QUERYTRACEON 8605, QUERYTRACEON 3604);
Already shows that it has decided to just use one column as input to the function
ScaOp_Intrinsic binary_checksum
ScaOp_Identifier COL: Union1008
This compares with the following output for your first query
ScaOp_Intrinsic binary_checksum
ScaOp_Identifier COL: Union1011
ScaOp_Identifier COL: Union1010
If you try and get the BINARY_CHECKSUM with
SELECT *, BINARY_CHECKSUM(a) AS bc
FROM (VALUES(1, NULL, 100)) s(id,a,b)
It gives the error
Msg 8184, Level 16, State 1, Line 8
Error in binarychecksum. There are no comparable columns in the binarychecksum input.
This is not the only place where an untyped NULL constant is treated differently from an explicitly typed one.
Another case is
SELECT COALESCE(CAST(NULL AS INT),CAST(NULL AS INT))
vs
SELECT COALESCE(NULL,NULL)
I'd err on the side of "glitch" in this case rather than "by design", though, as the columns from the derived table are supposed to be int before they get to the checksum function.
SELECT COALESCE(a,b)
FROM (VALUES(NULL, NULL)) s(a,b)
Does work as expected without this glitch.

Using dynamic unpivot with columns with different types

I have a table with around 100 columns named F1, F2, ..., F100.
I want to query the data row-wise, like this:
F1: someVal1
F2: someVal2
...
I am doing all this inside a SP, therefore, I am generating the sql dynamically.
I have successfully generated the following sql:
select CAST(valname as nvarchar(max)), CAST(valvalue as nvarchar(max))
from tbl_name
unpivot
(
valvalue for valname in ([form_id], [F1],[F2],[F3],[F4],[F5],[F6],[F7],[F8],[F9],[F10],[F11],[F12],[F13],[F14],[F15],[F16],[F17],[F18],[F19],[F20],[F21],[F22],[F23],[F24],[F25],[F26],[F27],[F28],[F29],[F30],[F31],[F32],[F33],[F34],[F35],[F36],[F37],[F38],[F39],[F40],[F41],[F42],[F43],[F44],[F45],[F46],[F47],[F48],[F49],[F50],[F51],[F52],[F53],[F54],[F55],[F56],[F57],[F58],[F59],[F60],[F61],[F62],[F63],[F64],[F65],[F66],[F67],[F68],[F69],[F70],[F71],[F72],[F73],[F74],[F75],[F76],[F77],[F78],[F79],[F80],[F81],[F82],[F83],[F84],[F85])
) u
But on executing this query, I get this exception:
The type of column "F3" conflicts with the type of other columns
specified in the UNPIVOT list.
I guess this is because F3 is varchar(100) while form_id, F1 and F2 are varchar(50). According to my understanding, I shouldn't be getting this error because I am casting all the results to nvarchar(max) in the select statement.
This table has all kinds of columns like datetime, smallint and int.
Also, all the columns of this table except one have the SQL_Latin1_General_CP1_CI_AS collation.
What is the fix for this error?
The solution is to use a subquery to cast all the columns to the same type and length first.
CAST the values in a subquery and then unpivot that, instead of casting in the outer select:
select valname, valvalue
from (
SELECT
CAST([form_id] as nvarchar(max)) form_id,
CAST([F1] as nvarchar(max)) F1,
CAST([F2] as nvarchar(max)) F2,
CAST([F3] as nvarchar(max)) F3,
CAST([F4] as nvarchar(max)) F4,
....
FROM tbl_name
) t1 unpivot
(
valvalue for valname in ([form_id], [F1],[F2],[F3],[F4],[F5],[F6],[F7],[F8],[F9],[F10],[F11],[F12],[F13],[F14],[F15],[F16],[F17],[F18],[F19],[F20],[F21],[F22],[F23],[F24],[F25],[F26],[F27],[F28],[F29],[F30],[F31],[F32],[F33],[F34],[F35],[F36],[F37],[F38],[F39],[F40],[F41],[F42],[F43],[F44],[F45],[F46],[F47],[F48],[F49],[F50],[F51],[F52],[F53],[F54],[F55],[F56],[F57],[F58],[F59],[F60],[F61],[F62],[F63],[F64],[F65],[F66],[F67],[F68],[F69],[F70],[F71],[F72],[F73],[F74],[F75],[F76],[F77],[F78],[F79],[F80],[F81],[F82],[F83],[F84],[F85])
) u
In the simplest way, I would use CROSS APPLY with VALUES to do the unpivot:
SELECT *
FROM tbl_name CROSS APPLY (VALUES
(CAST([form_id] as nvarchar(max))),
(CAST([F1] as nvarchar(max))),
(CAST([F2] as nvarchar(max))),
(CAST([F3] as nvarchar(max))),
(CAST([F4] as nvarchar(max))),
....
) v (valvalue)
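If you also want the column name next to each value (the F1: someVal1 layout from the question), the rowset constructor can carry it as a string literal; here is a sketch of the same pattern with both columns:
SELECT v.valname, v.valvalue
FROM tbl_name CROSS APPLY (VALUES
    ('form_id', CAST([form_id] as nvarchar(max))),
    ('F1',      CAST([F1] as nvarchar(max))),
    ('F2',      CAST([F2] as nvarchar(max))),
    ('F3',      CAST([F3] as nvarchar(max)))
    -- ... and so on for the remaining columns
) v (valname, valvalue);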
Here is a sample using CROSS APPLY with VALUES to do the unpivot.
We can see there are many different types in the People table.
We can cast them to varchar(max) so the columns are all the same type.
CREATE TABLE People
(
IntVal int,
StringVal varchar(50),
DateVal date
)
INSERT INTO People VALUES (1, 'Jim', '2017-01-01');
INSERT INTO People VALUES (2, 'Jane', '2017-01-02');
INSERT INTO People VALUES (3, 'Bob', '2017-01-03');
Query 1:
SELECT *
FROM People CROSS APPLY (VALUES
(CAST(IntVal AS VARCHAR(MAX))),
(CAST(StringVal AS VARCHAR(MAX))),
(CAST(DateVal AS VARCHAR(MAX)))
) v (valvalue)
Results:
| IntVal | StringVal | DateVal | valvalue |
|--------|-----------|------------|------------|
| 1 | Jim | 2017-01-01 | 1 |
| 1 | Jim | 2017-01-01 | Jim |
| 1 | Jim | 2017-01-01 | 2017-01-01 |
| 2 | Jane | 2017-01-02 | 2 |
| 2 | Jane | 2017-01-02 | Jane |
| 2 | Jane | 2017-01-02 | 2017-01-02 |
| 3 | Bob | 2017-01-03 | 3 |
| 3 | Bob | 2017-01-03 | Bob |
| 3 | Bob | 2017-01-03 | 2017-01-03 |
Note
When you use UNPIVOT, you need to make sure the unpivoted columns' data types are the same.
Many ways a cat can skin you, or vice versa.
Jokes apart, what D-Shih suggested is what you should start with, and it may get you home and dry.
In a majority of cases:
Essentially the UNPIVOT operation is concatenating the data from multiple rows, so starting with a CAST operation is the best way forward, as it makes the data types identical (preferably a string type like varchar or nvarchar). It is also a good idea to go with the same length for all UNPIVOTed columns, in addition to the same type.
In other cases:
If this still does not solve the problem, then you need to look deeper and check whether the ANSI_PADDING setting is ON or OFF across all columns of the table. In later versions of SQL Server this is ON by default, but some developers may customise certain columns to have ANSI_PADDING set to OFF. If you have a mixed setup like this, it is best to move the data to another table with ANSI_PADDING set to ON, then try the same UNPIVOT query on that table; it should work.
Check ANSI_Padding Status
SELECT name
      ,CASE is_ansi_padded
            WHEN 1 THEN 'ANSI_Padding_On'
            ELSE 'ANSI_Padding_Off'
       END AS [ANSI_Padding_Check]
FROM sys.all_columns
WHERE object_id = object_id('yourschema.yourtable')
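If the check reveals a mixed setup, here is a sketch of the copy-out approach described above (the new table name is invented, and only the first few F columns are shown, with types guessed from the question); creating the table while the session setting is ON gives every string column ANSI_PADDING on:
SET ANSI_PADDING ON;

CREATE TABLE yourschema.yourtable_padded
(
    form_id varchar(50),
    F1      varchar(50),
    F2      varchar(50),
    F3      varchar(100)
    -- ... remaining columns to match the original table
);

INSERT INTO yourschema.yourtable_padded (form_id, F1, F2, F3 /* ... */)
SELECT form_id, F1, F2, F3 /* ... */
FROM yourschema.yourtable;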
Many situations may be better suited to CROSS APPLY with VALUES. It all depends on you, the jockey, to choose horses for courses.
Cheers.

Reuse a query for LIMIT and OFFSET operations in PostgreSQL

Using PostgreSQL 9.4, I have a table like this:
CREATE TABLE products
AS
SELECT id::uuid, title, kind, created_at
FROM ( VALUES
( '61c5292d-41f3-4e86-861a-dfb5d8225c8e', 'foo', 'standard' , '2017/04/01' ),
( 'def1d3f9-3e55-4d1b-9b42-610d5a46631a', 'bar', 'standard' , '2017/04/02' ),
( 'cc1982ab-c3ee-4196-be01-c53e81b53854', 'qwe', 'standard' , '2017/04/03' ),
( '919c03b5-5508-4a01-a97b-da9de0501f46', 'wqe', 'standard' , '2017/04/04' ),
( 'b3d081a3-dd7c-457f-987e-5128fb93ce13', 'tyu', 'other' , '2017/04/05' ),
( 'c6e9e647-e1b4-4f04-b48a-a4229a09eb64', 'ert', 'irregular', '2017/04/06' )
) AS t(id,title,kind,created_at);
I need to split the data into n same-size parts. If this table had a regular id it would be easier, but since it has a uuid I can't use modulo operations (as far as I know).
So far I did this:
SELECT * FROM products
WHERE kind = 'standard'
ORDER BY created_at
LIMIT(
SELECT count(*)
FROM products
WHERE kind = 'standard'
)/2
OFFSET(
(
SELECT count(*)
FROM products
WHERE kind = 'standard'
)/2
)*1;
It works fine, but running the same query three times doesn't seem like a good idea; the count is not "expensive", but every time someone wants to modify/update the query, they will need to do it in all three sections.
Note that currently n is set to 2 and the offset multiplier is set to 1, but both can take other values. Also, LIMIT rounds down, so there may be a missing value; I can fix that by other means, but having it in the query would be nice.
You can see the example here
Just to dispel a myth: you can never use a serial and modulus to get equal parts, because a serial isn't guaranteed to be gapless. You can use row_number() though.
SELECT row_number() OVER () % 3 AS parts, * FROM products;
parts | id | title | kind | created_at
-------+--------------------------------------+-------+-----------+------------
1 | 61c5292d-41f3-4e86-861a-dfb5d8225c8e | foo | standard | 2017/04/01
2 | def1d3f9-3e55-4d1b-9b42-610d5a46631a | bar | standard | 2017/04/02
0 | cc1982ab-c3ee-4196-be01-c53e81b53854 | qwe | standard | 2017/04/03
1 | 919c03b5-5508-4a01-a97b-da9de0501f46 | wqe | standard | 2017/04/04
2 | b3d081a3-dd7c-457f-987e-5128fb93ce13 | tyu | other | 2017/04/05
0 | c6e9e647-e1b4-4f04-b48a-a4229a09eb64 | ert | irregular | 2017/04/06
(6 rows)
This won't produce equal parts unless the number of parts divides the row count evenly.
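If you want n parts of (nearly) equal size, with the count effectively computed only once, a different approach is ntile() (also available in 9.4); here is a sketch with n = 2, where part = 2 plays the role of the OFFSET multiplier:
SELECT id, title, kind, created_at
FROM (
    SELECT p.*, ntile(2) OVER (ORDER BY created_at) AS part
    FROM products p
    WHERE kind = 'standard'
) t
WHERE part = 2;
ntile() distributes the rows into buckets whose sizes differ by at most one, so nothing is lost to the rounding-down of LIMIT.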

Get even / odd / all numbers between two numbers

I want to display all the numbers (even / odd / mixed) between two numbers (1-9; 2-10; 11-20) in one (or two) columns.
Example initial data:
| rang  |      | r1 | r2 |
|-------|      |----|----|
| 1-9   |      | 1  | 9  |
| 2-10  |  or  | 2  | 10 |
| 11-20 |      | 11 | 20 |
CREATE TABLE initialtableone(rang TEXT);
INSERT INTO initialtableone(rang) VALUES
('1-9'),
('2-10'),
('11-20');
CREATE TABLE initialtabletwo(r1 NUMERIC, r2 NUMERIC);
INSERT INTO initialtabletwo(r1, r2) VALUES
('1', '9'),
('2', '10'),
('11', '20');
Result:
| output |
----------------------------------
| 1,3,5,7,9 |
| 2,4,6,8,10 |
| 11,12,13,14,15,16,17,18,19,20 |
Something like this:
create table ranges (range varchar);
insert into ranges
values
('1-9'),
('2-10'),
('11-20');
with bounds as (
select row_number() over (order by range) as rn,
range,
(regexp_split_to_array(range,'-'))[1]::int as start_value,
(regexp_split_to_array(range,'-'))[2]::int as end_value
from ranges
)
select rn, range, string_agg(i::text, ',' order by i.ordinality)
from bounds b
cross join lateral generate_series(b.start_value, b.end_value) with ordinality i
group by rn, range
This outputs:
rn | range | string_agg
---+-------+------------------------------
3 | 2-10 | 2,3,4,5,6,7,8,9,10
1 | 1-9 | 1,2,3,4,5,6,7,8,9
2 | 11-20 | 11,12,13,14,15,16,17,18,19,20
Building on your first example, simplified, but with PK:
CREATE TABLE tbl1 (
tbl1_id serial PRIMARY KEY -- optional
, rang text -- can be NULL ?
);
Use split_part() to extract lower and upper bound. (regexp_split_to_array() would be needlessly expensive and error-prone). And generate_series() to generate the numbers.
Use a LATERAL join and aggregate the set immediately to simplify aggregation. An ARRAY constructor is fastest in this case:
SELECT t.tbl1_id, a.output -- array; added id is optional
FROM (
SELECT tbl1_id
, split_part(rang, '-', 1)::int AS a
, split_part(rang, '-', 2)::int AS z
FROM tbl1
) t
, LATERAL (
SELECT ARRAY( -- preserves rows with NULL
SELECT g FROM generate_series(a, z, CASE WHEN (z-a)%2 = 0 THEN 2 ELSE 1 END) g
) AS output
) a;
AIUI, you want every number in the range only if the upper and lower bounds are a mix of even and odd numbers. Else, only return every 2nd number, resulting in even / odd numbers for those cases. This expression implements the calculation of the step:
CASE WHEN (z-a)%2 = 0 THEN 2 ELSE 1 END
For example, '1-9' gives z - a = 8 (even), so the step is 2 and the output is 1,3,5,7,9; '11-20' gives z - a = 9 (odd), so the step is 1.
Result as desired:
output
-----------------------------
1,3,5,7,9
2,4,6,8,10
11,12,13,14,15,16,17,18,19,20
You do not need WITH ORDINALITY in this case, because the order of elements is guaranteed.
The aggregate function array_agg() makes the query slightly shorter (but slower) - or use string_agg() to produce a string directly, depending on your desired output format:
SELECT a.output -- string
FROM (
SELECT split_part(rang, '-', 1)::int AS a
, split_part(rang, '-', 2)::int AS z
FROM tbl1
) t
, LATERAL (
SELECT string_agg(g::text, ',') AS output
FROM generate_series(a, z, CASE WHEN (z-a)%2 = 0 THEN 2 ELSE 1 END) g
) a;
Note a subtle difference when using an aggregate function or ARRAY constructor in the LATERAL subquery: normally, rows with rang IS NULL are excluded from the result because the LATERAL subquery returns no row.
If you aggregate the result immediately, "no row" is transformed into one row with a NULL value, so the original row is preserved. I added demos to the fiddle.
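Here is a minimal demonstration of that difference with toy inline data (my own example rows): string_agg() over an empty set still returns one row, so the rang IS NULL row survives:
SELECT t.rang, a.output
FROM  (VALUES ('1-3'), (NULL)) t(rang)
    , LATERAL (
   SELECT string_agg(g::text, ',') AS output
   FROM   generate_series(split_part(t.rang, '-', 1)::int
                        , split_part(t.rang, '-', 2)::int) g
   ) a;
With the bare set-returning function in place of the aggregate, generate_series() returns no row for NULL bounds and the rang IS NULL row disappears from the result.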
SQL Fiddle.
You do not need a CTE for this, which would be more expensive.
Aside: The type conversion to integer removes leading / trailing white space automatically, so a string like ' 1 - 3' works for rang as well.