I want to generate an ID in SQL Server 2014 that will be varchar(7). I am aware of identity columns, but what I want is to create IDs based on the SEQ_NO of a table. The ID should start with C and be 7 characters long. Below is an example:
TABLE A
+--------+------+
| SEQ_NO | NAME |
+--------+------+
| 1      | a    |
| 12     | b    |
| 30     | c    |
| 401    | d    |
+--------+------+
Output required is:
+-----+---------+------+
| SEQ | ID      | NAME |
+-----+---------+------+
| 1   | C000001 | a    |
| 12  | C000012 | b    |
| 30  | C000030 | c    |
| 401 | C000401 | d    |
+-----+---------+------+
I thought of using REPLICATE and an identity column, but that does not get me the desired output. Any thoughts?
Since you are on 2014, yet another option is Format().
To be clear, Format() has some great features, but it is not known to be a high performer.
Select 'C'+format(SEQ_NO,'000000')
Returns (for example)
C000012
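For instance, applied to the sample data (a sketch only; the table name TABLE_A is an assumption, since the question just calls it "TABLE A"):

-- Sketch: FORMAT() pads SEQ_NO to six digits, then the literal 'C' is prepended
SELECT 'C' + FORMAT(SEQ_NO, '000000') AS ID,
       NAME
FROM   TABLE_A;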
You can get it using REPLICATE.
SELECT SEQ_NO AS SEQ,
       CONCAT('C', REPLICATE('0', 6 - LEN(SEQ_NO)), CAST(SEQ_NO AS VARCHAR(10))) AS ID,
       NAME
FROM YOUR_TABLE;
using right()
select 'C'+right('000000'+convert(varchar(6),seq_no),6)
demo:
create table a (seq_no int, name varchar(16))
insert into a values
(1 ,'a') ,(12 ,'b') ,(30 ,'c') ,(401,'d')
select *
, Id = 'C'+right('000000'+convert(varchar(6),seq_no),6)
from a
rextester demo: http://rextester.com/XVQ74081
returns:
+--------+------+---------+
| seq_no | name | Id |
+--------+------+---------+
| 1 | a | C000001 |
| 12 | b | C000012 |
| 30 | c | C000030 |
| 401 | d | C000401 |
+--------+------+---------+
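If the generated ID needs to live in the table rather than being computed in every query, one option is a computed column. A sketch against the demo table above (not part of the original answer):

-- Sketch: persist the derived ID as a computed column on the demo table
ALTER TABLE a
    ADD Id AS ('C' + RIGHT('000000' + CONVERT(varchar(6), seq_no), 6)) PERSISTED;

After this, SELECT seq_no, Id, name FROM a returns the same values as the query above without writing the expression by hand.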
I have a table with lots of rows like:
ID | Attributes
1 | {"Rank":1, "LoadLocation": London, "Driver":Tom}
2 | {"Rank":2, "LoadLocation": Southampton, "Driver":Dick}
3 | {"Rank":3, "DischargeLocation": Stratford}
There isn't a template for the JSON - it's a dumping ground for any number of attributes of the ID rows.
For use in a join I'd like to get these into a table like this:
ID | Attribute Name | Attribute Value
1 | 'Rank' | 1
1 | 'LoadLocation' | 'London'
1 | 'Driver' | 'Tom'
2 | 'Rank' | 2
2 | 'LoadLocation' | 'Southampton'
2 | 'Driver' | 'Dick'
3 | 'Rank' | 3
3 | 'DischargeLocation'| 'Stratford'
I can see that I probably need to use OpenJSON, but also that I likely need to know the explicit structure for that. I don't know the structure, even to the point of each row having a different number of attributes.
Any help gratefully received!
If you are on SQL Server 2016 or above, you can use OPENJSON with CROSS APPLY:
DECLARE @TestData TABLE (ID INT, Attributes VARCHAR(500))

INSERT INTO @TestData VALUES
(1, '{"Rank":1, "LoadLocation": "London", "Driver":"Tom"}'),
(2, '{"Rank":2, "LoadLocation": "Southampton", "Driver":"Dick"}'),
(3, '{"Rank":3, "DischargeLocation": "Stratford"}')

SELECT T.ID, X.[key] AS [Attribute Name], X.value AS [Attribute Value]
FROM @TestData T
CROSS APPLY (SELECT * FROM OPENJSON(T.Attributes)) AS X
Result:
ID Attribute Name Attribute Value
----------- -------------------- -------------------
1 Rank 1
1 LoadLocation London
1 Driver Tom
2 Rank 2
2 LoadLocation Southampton
2 Driver Dick
3 Rank 3
3 DischargeLocation Stratford
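The question mentions wanting these rows for use in a join; the unpivoted output can be joined like any other derived table. A rough sketch (the Drivers table and its columns are hypothetical, only there to show the shape of the join):

-- Sketch: join the unpivoted attribute rows to a hypothetical Drivers table
SELECT T.ID, X.value AS DriverName, D.PhoneNumber
FROM @TestData T
CROSS APPLY OPENJSON(T.Attributes) AS X
INNER JOIN Drivers D ON D.Name = X.value
WHERE X.[key] = 'Driver'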
I have event data that looks like this:
id | instance_id | value
1 | 1 | a
2 | 1 | ap
3 | 1 | app
4 | 1 | appl
5 | 2 | b
6 | 2 | bo
7 | 1 | apple
8 | 2 | boa
9 | 2 | boat
10 | 2 | boa
11 | 1 | appl
12 | 1 | apply
Basically, each row is a user typing a new letter. They can also delete letters.
I'd like to create a dataset that looks like this, let's call it data
id | instance_id | value
7 | 1 | apple
9 | 2 | boat
12 | 1 | apply
My goal is to extract all the complete words in each instance, accounting for deletion as well - so it's not sufficient to just get the longest word or the most recently typed.
To do so, I was planning to do a regex operation like so:
select * from data d
where not exists (select * from data d2 where d2.value ~ (d.value || '.'))
Effectively I'm trying to build a dynamic regex that adds matches one character more than is present, and is specific to the row it's matching against.
The code above doesn't seem to work. In Python, I can "compile" a regex pattern before I use it. What is the equivalent in PostgreSQL to dynamically build a pattern?
Try a simple LIKE operator instead of regex patterns:
SELECT * FROM data d1
WHERE NOT EXISTS (
SELECT * FROM data d2
WHERE d2.value LIKE d1.value ||'_%'
)
Demo: https://dbfiddle.uk/?rdbms=postgres_9.6&fiddle=cd064c92565639576ff456dbe0cd5f39
Create an index on the value column; this should speed up the query a bit.
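A sketch of that index (table and column names taken from the question); under a non-C locale, the text_pattern_ops operator class is what typically lets PostgreSQL use a btree index for LIKE 'prefix%' searches:

-- Sketch: btree index usable for the LIKE prefix comparisons above
CREATE INDEX idx_data_value ON data (value text_pattern_ops);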
To find peaks in sequential data, window functions are a good choice. You just need to compare each value with the previous and next ones using the lag() and lead() functions:
with cte as (
select
*,
length(value) > coalesce(length(lead(value) over (partition by instance_id order by id)),0) and
length(value) > coalesce(length(lag(value) over (partition by instance_id order by id)),length(value)) as is_peak
from data)
select * from cte where is_peak order by id;
Demo
I have a table with a column GetDup and I'd like to duplicate records based on the value of this column. For example, if the value in GetDup is 1, then duplicate the record once. If the value is 2, then duplicate the record twice, and so on; I assumed this would have to be done in a looping statement.
What would be a good way to write a stored procedure for this? Please help.
Input:
+--------+--------------+---------------+
| Getdup | CustomerName | CustomerAdd |
+--------+--------------+---------------+
| 1 | John | 123 SomeWhere |
| 2 | Bob | 987 SomeWhere |
+--------+--------------+---------------+
What I want:
+--------+--------------+---------------+
| Getdup | CustomerName | CustomerAdd |
+--------+--------------+---------------+
| 1 | John | 123 SomeWhere |
| 1 | John | 123 SomeWhere |
| 2 | Bob | 987 SomeWhere |
| 2 | Bob | 987 SomeWhere |
| 2 | Bob | 987 SomeWhere |
+--------+--------------+---------------+
Answer #2 After Clarification
Number Table to the Rescue!
The number table in my example (or tally table, if you want to call it that), is both temporary and very small. To make it bigger, just add more values to z and add more CROSS JOINs. In my opinion, a number table and a calendar table are both things that should be in every database you have. They are extremely useful.
SQL Fiddle
MS SQL Server 2017 Schema Setup:
CREATE TABLE mytable ( Getdup int, CustomerName varchar(10), CustomerAdd varchar(20) ) ;
INSERT INTO mytable (Getdup, CustomerName, CustomerAdd)
VALUES (1,'John','123 SomeWhere'), (2,'Bob','987 SomeWhere')
;
Query 1:
;WITH z AS (
SELECT *
FROM ( VALUES(0),(0),(0),(0) ) v(x)
)
, numTable AS (
SELECT num
FROM (
SELECT ROW_NUMBER() OVER (ORDER BY z1.x)-1 num
FROM z z1
CROSS JOIN z z2
) s1
)
SELECT t1.Getdup, t1.CustomerName, t1.CustomerAdd
FROM mytable t1
INNER JOIN numTable ON t1.getdup >= numTable.num
ORDER BY CustomerName, CustomerAdd
Results:
| Getdup | CustomerName | CustomerAdd |
|--------|--------------|---------------|
| 2 | Bob | 987 SomeWhere |
| 2 | Bob | 987 SomeWhere |
| 2 | Bob | 987 SomeWhere |
| 1 | John | 123 SomeWhere |
| 1 | John | 123 SomeWhere |
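If you would rather keep a permanent number table around (as suggested above) instead of building one inline each time, a rough sketch follows; sys.all_objects is only used as a convenient row source, and the table name is illustrative:

-- Sketch: a persistent number table holding 0..9999
CREATE TABLE Numbers (num int NOT NULL PRIMARY KEY);

INSERT INTO Numbers (num)
SELECT TOP (10000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1
FROM sys.all_objects a
CROSS JOIN sys.all_objects b;

The join in Query 1 then works unchanged with numTable replaced by Numbers.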
--------------------------------------------------------------------------
ORIGINAL ANSWER
EDIT: After further clarification of the problem, this won't duplicate rows, this will only duplicate the data in a column.
Something like one of these might work.
T-SQL
SELECT replicate(mycolumn,getdup) AS x
FROM mytable
MySQL
SELECT repeat(mycolumn,getdup) AS x
FROM mytable
Oracle SQL
SELECT rpad(mycolumn,getdup*length(mycolumn),mycolumn) AS x
FROM mytable
PostgreSQL
SELECT repeat(mycolumn,getdup+1) AS x
FROM mytable
If you can provide more details for exactly what you want and what you're working with, we might be able to help you better.
NOTE 2: Depending on what you need, you may need to do some math magic. You say above that if GetDup is 1 then you want one duplicate. If that means the output should contain the value twice (the original plus one copy), then you'll want to add one inside the repeat(), replicate() or rpad() functions, i.e. replicate(mycolumn, getdup+1). Oracle SQL will be a little different, since it uses rpad().
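As a concrete illustration of that note, a T-SQL sketch using the sample columns (the +1 is an assumption about the desired output, not part of the original answer):

-- Sketch: +1 so that Getdup = 1 yields the value twice (original plus one copy)
SELECT REPLICATE(CustomerName, Getdup + 1) AS x
FROM mytable;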
In standard SQL you can use a recursive CTE:
with recursive cte as (
select t.dup, . . .
from t
union all
select cte.dup - 1, . . .
from cte
where cte.dup > 1
)
select *
from cte;
Of course, not all databases support recursive CTEs (and the recursive keyword is not used in some of them).
So, you want a recursive solution:
with t as (
select Getdup, CustomerName, CustomerAdd, 0 as id
from table
union all
select Getdup, CustomerName, CustomerAdd, id + 1
from t
where id < getdup
)
insert into table (col1, col2, col3)
select Getdup, CustomerName, CustomerAdd
from t
order by getdup
option (maxrecursion 0);
I'm new to AWS Athena and trying to pivot some rows into columns, similar to the top answer in this StackOverflow post.
However, when I tried:
SELECT column1, column2, column3
FROM data
PIVOT
(
MIN(column3)
FOR column2 IN ('VALUE1','VALUE2','VALUE3','VALUE4')
)
I get the error: mismatched input '(' expecting {',', ')'} (service: amazonathena; status code: 400; error code: invalidrequestexception
Does anyone know how to accomplish what I am trying to achieve in AWS Athena?
Extending @kadrach's answer.
Assuming a table like this:
uid | key | value1 | value2
----+-----+--------+--------
1 | A | 10 | 1000
1 | B | 20 | 2000
2 | A | 11 | 1001
2 | B | 21 | 2001
Single column PIVOT works like this
SELECT
uid,
kv1['A'] AS A_v1,
kv1['B'] AS B_v1
FROM (
SELECT uid, map_agg(key, value1) kv1
FROM vtable
GROUP BY uid
)
Result:
uid | A_v1 | B_v1
----+------+-------
1 | 10 | 20
2 | 11 | 21
Multi column PIVOT works like this
SELECT
uid,
kv1['A'] AS A_v1,
kv1['B'] AS B_v1,
kv2['A'] AS A_v2,
kv2['B'] AS B_v2
FROM (
SELECT uid,
map_agg(key, value1) kv1,
map_agg(key, value2) kv2
FROM vtable
GROUP BY uid
)
Result:
uid | A_v1 | B_v1 | A_v2 | B_v2
----+------+------+------+-----
1 | 10 | 20 | 1000 | 2000
2 | 11 | 21 | 1001 | 2001
You can do a single-column PIVOT in Athena using map_agg.
SELECT
uid,
kv['c1'] AS c1,
kv['c2'] AS c2,
kv['c3'] AS c3
FROM (
SELECT uid, map_agg(key, value) kv
FROM vtable
GROUP BY uid
) t
Credit goes to this website. Unfortunately I've not found a clever way to do a multi-column pivot this way (I nest the query, which is not pretty).
I had the same issue with the PIVOT function. However, I used a workaround to obtain a similarly formatted data set:
select
  columnToGroupOn,
  min(if(colToPivot = 'VALUE1', column3, null)) as VALUE1,
  min(if(colToPivot = 'VALUE2', column3, null)) as VALUE2,
  min(if(colToPivot = 'VALUE3', column3, null)) as VALUE3
from data
group by columnToGroupOn
ID | Col2 | Col3 | SequenceNum
--------------------------------
1 | x | 12 | 5
2 | y | 11 | 6
3 | a | 45 | 7
100 | b | 23 | 8
101 | a | 16 | 9
102 | b | 28 | 10
4 | a | 9 | 11
5 | b | 26 | 12
6 | x | 100 | 13
I have an SSRS report at the moment where you can enter IDs and it'll show you data for those IDs. For example, say you enter start ID 2 and end ID 5: it'll report back 2, 3, 4, 5 with Col2 and Col3 data.
But what I really want is for it to return 2, 3, 100, 101, 102, 4, 5.
I believe there may be some way to cross-reference the SequenceNum column, but I'm fairly new to SQL and SSRS. Can anyone help?
So a user would enter parameters:
start ID = 2, which has a SequenceNum of 6
end ID = 5, which has a SequenceNum of 12
Extract your starting and ending sequence numbers from the values supplied by the starting and ending IDs respectively, and use them in the WHERE condition as below:
DECLARE @StartingSeqNum INT, @EndingSeqNum INT
SELECT @StartingSeqNum = SeqNum FROM tableName WHERE ID = @start_id
SELECT @EndingSeqNum = SeqNum FROM tableName WHERE ID = @end_id

SELECT Col2, Col3
FROM tableName
WHERE SeqNum BETWEEN @StartingSeqNum AND @EndingSeqNum
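If you prefer to avoid the variables, the same range filter can be written as a single statement; a sketch using the same placeholder names:

-- Sketch: single-statement version of the range filter above
SELECT Col2, Col3
FROM tableName
WHERE SeqNum BETWEEN (SELECT SeqNum FROM tableName WHERE ID = @start_id)
                 AND (SELECT SeqNum FROM tableName WHERE ID = @end_id)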
As you are using SSRS you can specify a Value and a Label for your parameters.
Create a dataset with the following SQL as the source:
select distinct ID as Label
,SequenceNum as Value
from YourTable
order by SequenceNum
And then in the properties for your parameter, under Available Values, select Get values from a query and then select the dataset above. Set the Value field and Label field to your value and label columns respectively, then click OK. You will need to do this for both your start and end parameters, using the same dataset.
Your parameters will now be drop down menus that display the ID value to the user, but passes the SequenceNum value to your query. You can then use these to filter your main dataset.
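For the main dataset, that filtering typically ends up as a simple range predicate on SequenceNum; a sketch (the parameter names are whatever you defined for the report):

-- Sketch: main dataset filtered by the SequenceNum values the parameters pass in
SELECT ID, Col2, Col3, SequenceNum
FROM YourTable
WHERE SequenceNum BETWEEN @StartSeq AND @EndSeq
ORDER BY SequenceNum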