Adding Row Numbers to a SELECT Query Result in SQL Server Without Using the ROW_NUMBER() Function - sql

I need to add row numbers to a SELECT query without using the ROW_NUMBER() function,
and without using user-defined functions or stored procedures.
Select (obtain the row number) as [Row], field1, field2, fieldn from aTable
UPDATE
I am using the SAP B1 DI API to make a query; this system does not allow the use of the ROW_NUMBER() function in the SELECT statement.
Bye.

I'm not sure if this will work for your particular situation or not, but can you execute this query with a stored procedure? If so, you can:
A) Create a temp table with all your normal result columns, plus a Row column as an auto-incremented identity.
B) Select-Insert your original query, sans the row column (SQL will fill it in automatically for you).
C) Select * on the temp table for your result set.
Not the most elegant solution, but it will accomplish the row numbering you want.
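A minimal sketch of those steps, assuming the source table aTable with columns field1 and field2 from the question's pseudo-query (names and types are placeholders):
-- A) Temp table with an IDENTITY column to generate row numbers
CREATE TABLE #numbered
(
    [Row] INT IDENTITY(1,1),
    field1 INT,
    field2 VARCHAR(50)
);
-- B) Insert the original query's result; [Row] is filled in automatically
INSERT INTO #numbered (field1, field2)
SELECT field1, field2 FROM aTable;
-- C) Return the numbered result set
SELECT [Row], field1, field2 FROM #numbered;
DROP TABLE #numbered;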

This query will give you the row number:
SELECT
(SELECT COUNT(*) FROM @table t2 WHERE t2.field <= t1.field) AS row_number,
field,
otherField
FROM @table t1
but there are some restrictions on when you can use it. Your table must have one column (field in the example) that is unique and orderable, so it can serve as a reference. For example:
DECLARE @table TABLE
(
field INT,
otherField VARCHAR(10)
)
INSERT INTO @table(field,otherField) VALUES (1,'a')
INSERT INTO @table(field,otherField) VALUES (4,'b')
INSERT INTO @table(field,otherField) VALUES (6,'c')
INSERT INTO @table(field,otherField) VALUES (7,'d')
SELECT * FROM @table
returns
field | otherField
------------------
1 | a
4 | b
6 | c
7 | d
and
SELECT
(SELECT COUNT(*) FROM @table t2 WHERE t2.field <= t1.field) AS row_number,
field,
otherField
FROM @table t1
returns
row_number | field | otherField
-------------------------------
1 | 1 | a
2 | 4 | b
3 | 6 | c
4 | 7 | d
This is a solution without functions or stored procedures, but as noted above it has restrictions. Still, it may be enough for your case.

RRUZ, you might be able to hide the use of a function by wrapping your query in a View. It would be transparent to the caller. I don't see any other options, besides the ones already mentioned.
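A minimal sketch of that idea, again assuming a source table aTable with columns field1 and field2 (placeholder names): the ROW_NUMBER() call lives inside the view definition, so the caller's query never mentions it.
-- The function call is hidden in the view; callers just SELECT from it
CREATE VIEW dbo.aTableNumbered AS
SELECT ROW_NUMBER() OVER (ORDER BY field1) AS [Row],
       field1, field2
FROM aTable;
-- The caller's query contains no ROW_NUMBER() at all
SELECT [Row], field1, field2 FROM dbo.aTableNumbered;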

Related

Pivot with column name in Postgres

I have the following table tbl:
column1 | column2 | column3
-----------------------------------
1 | 'value1' | 3
2 | 'value2' | 4
How do I "pivot" with column names to produce output like:
column1 | 1 | 2
column2 | 'value1' |'value2'
column3 | 3 | 4
As has been commented, the issue of data types is undefined in the question.
If you are OK with all result columns being type text (every data type can be converted to text), you can use one of these:
Plain SQL
WITH cte AS (
SELECT nu.*
FROM tbl t
, LATERAL (
VALUES
(1, t.column1::text)
, (2, t.column2)
, (3, t.column3::text)
) nu(rn, c)
)
SELECT *
FROM (TABLE cte OFFSET 0 LIMIT 3) c1
JOIN (TABLE cte OFFSET 3 LIMIT 3) c2 USING (rn);
The same with useful column names:
WITH cte AS (
SELECT nu.*
FROM tbl t
, LATERAL (
VALUES
('column1', t.column1::text)
, ('column2', t.column2)
, ('column3', t.column3::text)
) nu(rn, c)
)
SELECT * FROM (
SELECT *
FROM (TABLE cte OFFSET 0 LIMIT 3) c1
JOIN (TABLE cte OFFSET 3 LIMIT 3) c2 USING (rn)
) t (key, row1, row2);
Works in any modern version of Postgres.
The SQL string has to be adapted to the number of rows and columns. See fiddles below!
Using a document type as stepping stone
Makes for shorter code.
With many rows and many columns, performance of the SQL solution may scale better because the intermediate derived table is smaller.
(The approach is limited in that you can't have more than ~1600 table columns in Postgres.)
Since everything is converted to text anyway, hstore seems most efficient. See:
Key value pair in PostgreSQL
SELECT key
, arr[1] AS row1
, arr[2] AS row2
FROM (
SELECT x.key, array_agg(x.value) AS arr
FROM tbl t, each(hstore(t)) x
GROUP BY 1
) sub
ORDER BY 1;
Technically speaking, we would have to enforce the right sort order in array_agg(), but it should work without an explicit ORDER BY. To be absolutely sure you can add one: array_agg(x.value ORDER BY t.ctid), using ctid for lack of a better ordering column.
You can do the same with JSON functions (Postgres 9.3+). Just replace each(hstore(t)) with json_each_text(row_to_json(t)). The rest is identical.
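Spelled out for clarity (same structure as the hstore query above, only the row-decomposing function changes):
SELECT key
     , arr[1] AS row1
     , arr[2] AS row2
FROM (
   SELECT x.key, array_agg(x.value) AS arr
   FROM   tbl t, json_each_text(row_to_json(t)) x
   GROUP  BY 1
   ) sub
ORDER  BY 1;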
These fiddles demonstrate how to scale each query:
Original example with 2 rows of 3 columns:
db<>fiddle here
Scaled up to 3 rows of 4 columns:
db<>fiddle here

Rotate rows into columns with column names not coming from the row

I've looked at some answers but none of them seem to be applicable to me.
Basically I have this result set:
RowNo | Id  | OrderNo
---------------------
1     | 101 | 1
2     | 101 | 10
I just want to convert this to
Id  | OrderNo_0 | OrderNo_1
---------------------------
101 | 1         | 10
I know I should probably use PIVOT, but the syntax is just not clear to me. There are always exactly two order numbers per Id.
And if you want to use PIVOT then the following works with the data provided:
declare @Orders table (RowNo int, Id int, OrderNo int)
insert into @Orders (RowNo, Id, OrderNo)
select 1, 101, 1 union all select 2, 101, 10
select Id, [1] OrderNo_0, [2] OrderNo_1
from (
select RowNo, Id, OrderNo
from @Orders
) SourceTable
pivot (
sum(OrderNo)
for RowNo in ([1],[2])
) as PivotTable
Reference: https://learn.microsoft.com/en-us/sql/t-sql/queries/from-using-pivot-and-unpivot?view=sql-server-2017
Note: to build each row in the result set, PIVOT groups by the columns not being pivoted. Therefore you need an aggregate function on the column that is being pivoted. You won't notice it in this instance because the rows are unique to start with, but if you had multiple rows with the same RowNo and Id you would see the aggregation come into play.
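A quick illustration of that point, using a hypothetical extra row that is not in the question's data:
-- Add a duplicate RowNo/Id combination
insert into @Orders (RowNo, Id, OrderNo) values (1, 101, 5)
-- Re-running the PIVOT query now returns OrderNo_0 = 6 (1 + 5),
-- because SUM aggregates the two rows sharing RowNo = 1 and Id = 101.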
As you say there are only ever two order numbers per ID, you could join the result set to itself on the ID column. For the purposes of the example below, I'm assuming your result set is merely selecting from a single Orders table, but it should be easy enough to replace this with your existing query.
SELECT o1.ID, o1.OrderNo AS [OrderNo_0], o2.OrderNo AS [OrderNo_1]
FROM Orders AS o1
INNER JOIN Orders AS o2
-- < rather than <> prevents each pair appearing twice in mirrored order
ON (o1.ID = o2.ID AND o1.OrderNo < o2.OrderNo)
Given your sample data, the simplest option is to use the MIN and MAX functions:
SELECT Id,min(OrderNo) OrderNo_0,MAX(OrderNo) OrderNo_1
FROM T
GROUP BY Id

Select exactly 5 rows from a table

I have an odd requirement which ideally should be solved in SQL, not the surrounding app.
I need to select exactly 5 rows regardless of how many are actually available. In practice the number of rows available will usually be less than 5 and on some rare occasions it will be more than 5. The "extra" rows should have null in every column.
The app is written in a technology that isn't Turing Complete. This requirement is much more difficult to solve in the app's code than you might imagine! To describe it, the app is effectively a transformer: It takes in a bunch of queries and spits out a report. So please understand the app is NOT written in a "programming language" in the traditional sense.
So for example, if I have a table:
A | B
-----
1 | X
2 | Y
3 | Z
Then a valid result would be
A | B
-----------
2 | Y
1 | X
3 | Z
null | null
null | null
I know this is an unusual requirement. Sadly it can't be solved in the application due to the technology being used.
Ideally this shouldn't require changes to the database but if there is no other way that changes can be arranged.
Any suggestions?
You can do something like this:
select top 5 a, b
from (select a, b, 1 as priority from t
      union all
      select null, null, 2
      from (values (1), (2), (3), (4), (5)) v(n)
     ) x
order by priority;
That is, create dummy rows, append them, and then choose the first five.
I do think that this work should be done in the app, but you can do it in SQL.
Create Table #Test (A int, B int)
Insert #Test Values (1,1)
Insert #Test Values (2,1)
Insert #Test Values (3,1)
Select Top 5 * From
(
Select A, B From #Test
Union All
Select Null, Null
Union All
Select Null, Null
Union All
Select Null, Null
Union All
Select Null, Null
Union All
Select Null, Null
) A
Wrap this in a stored proc:
declare @rowcount int
select top 5 * from dbo.test
set @rowcount = @@rowcount
if @rowcount < 5
Begin
select * from dbo.test
union all
-- dbo.test has two columns, so the padding rows need two NULLs
select null, null from dbo.numbers where n <= 5 - @rowcount
End
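Here dbo.numbers is assumed to be a pre-existing numbers (tally) table; a minimal sketch of one, sufficient for this query:
-- One row per integer 1..5
CREATE TABLE dbo.numbers (n int PRIMARY KEY);
INSERT INTO dbo.numbers (n) VALUES (1), (2), (3), (4), (5);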
If you use some sort of tally table (the numbers themselves do not matter, only that the table has enough records), you can use it to create the dummy rows, e.g. using sys.columns:
select top 5 a,b from
(
select a, b, 0 ord from yourTable
union all
select null a,null b, 1 from sys.columns
) t
order by ord
The advantage of the tally table is that if you need a different number of rows in the future, you only need to change the TOP x (provided the tally table has enough rows).
Get those 3 records from your table, keep a counter variable, and then from your application code append NULL rows until the counter reaches 5.

Display duplicate value of two columns in different rows

I have a table in which there can be two newspaper publishing dates for a particular value, both inserted into a single column, NewsPaperDate. All the remaining column values are duplicated across the two rows. I need to write a query that shows the two NewsPaperDate values in a single row under two columns, NewsPaperDate1 and NewsPaperDate2, along with the remaining values. Can anyone help with this? The database is SQL Server.
The table structure is shown in the screenshot (not reproduced here).
You need to join the table to itself. There are different ways of doing this but based on your screenshot you could do:
select
a.yonja_no,
a.newspaper_date as newspaperdate1,
b.newspaper_date as newspaperdate2
from newspapertable a
join newspapertable b
  on a.yonja_no = b.yonja_no
 and a.newspapere_s > b.newspapere_s;
Here is an example with sample data:
create table tab1(newspaperDate int, b int, c int);
INSERT INTO tab1 VALUES(1,2,3);
INSERT INTO tab1 VALUES(2,2,3);
INSERT INTO tab1 VALUES(3,3,4);
SELECT t1.newspaperDate AS date1,t2.newspaperDate AS date2 , t1.b AS b1,t1.c AS c1 FROM tab1 t1 , tab1 t2
WHERE t1.newspaperDate < t2.newspaperDate AND t1.b=t2.b ;
OUTPUT
| DATE1 | DATE2 | B1 | C1 |
---------------------------
| 1 | 2 | 2 | 3 |
Joining a table to itself is the best approach for your query. Read this:
http://www.thunderstone.com/site/texisman/joining_a_table_to_itself.html

Is it possible to use WHERE IN along with LIKE?

If I have to search for some data I can use wildcards and use a simple query -
SELECT * FROM TABLE WHERE COL1 LIKE '%test_string%'
And, if I have to look through many values I can use -
SELECT * FROM TABLE WHERE COL1 IN (Select col from AnotherTable)
But is it possible to use both together? That is, a query that doesn't just perform a WHERE IN but also performs something similar to a WHERE LIKE: one that doesn't just look through a set of values but searches with wildcards through that set.
If this isn't clear I can give an example. Let me know. Thanks.
Example -
let's consider -
AnotherTable -
id | Col
------|------
1 | one
2 | two
3 | three
Table -
Col | Col1
------|------
aa | one
bb | two
cc | three
dd | four
ee | one_two
bb | three_two
Now, if I use
SELECT * FROM TABLE WHERE COL1 IN (Select col from AnotherTable)
This gives me -
Col | Col1
------|------
aa | one
bb | two
cc | three
But what if I need -
Col | Col1
------|------
aa | one
bb | two
cc | three
ee | one_two
bb | three_two
I guess this should help you understand what I mean by using WHERE IN and LIKE together.
SELECT *
FROM TABLE A
INNER JOIN AnotherTable B on
A.COL1 = B.col
WHERE COL1 LIKE '%test_string%'
Based on the example code provided, give this a try. The final select statement presents the data as you have requested.
create table #AnotherTable
(
ID int IDENTITY(1,1) not null primary key,
Col varchar(100)
);
INSERT INTO #AnotherTable(col) values('one')
INSERT INTO #AnotherTable(col) values('two')
INSERT INTO #AnotherTable(col) values('three')
create table #Table
(
Col varchar(100),
Col1 varchar(100)
);
INSERT INTO #Table(Col,Col1) values('aa','one')
INSERT INTO #Table(Col,Col1) values('bb','two')
INSERT INTO #Table(Col,Col1) values('cc','three')
INSERT INTO #Table(Col,Col1) values('dd','four')
INSERT INTO #Table(Col,Col1) values('ee','one_two')
INSERT INTO #Table(Col,Col1) values('ff','three_two')
SELECT * FROM #AnotherTable
SELECT * FROM #Table
SELECT * FROM #Table WHERE COL1 IN(Select col from #AnotherTable)
SELECT distinct A.*
FROM #Table A
INNER JOIN #AnotherTable B on
A.col1 LIKE '%'+B.Col+'%'
DROP TABLE #Table
DROP TABLE #AnotherTable
Yes. Use the keyword AND:
SELECT * FROM TABLE WHERE COL1 IN (Select col from AnotherTable) AND COL1 LIKE '%test_string%'
But in this case, you are probably better off using JOIN syntax:
SELECT TABLE.* FROM TABLE JOIN AnotherTable on TABLE.COL1 = AnotherTable.col WHERE TABLE.COL1 LIKE '%test_string%'
No, because each element matched with LIKE needs its own wildcard pattern, and there's no way to express that with the IN clause.
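A sketch of what that means in practice, using the question's sample values; each set element needs its own LIKE predicate:
SELECT * FROM TABLE
WHERE COL1 LIKE '%one%'
   OR COL1 LIKE '%two%'
   OR COL1 LIKE '%three%'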
The pattern matching operators are:
IN, against a list of values,
LIKE, against a pattern,
REGEXP/RLIKE against a regular expression (which includes both wildcards and alternatives, and is thus closest to "using wildcards through a set of values", e.g. (ab)+a|(ba)+b will match all strings aba...ba or bab...ab; see the sketch after this list),
FIND_IN_SET to get the index of a string in a set (which is represented as a comma separated string),
SOUNDS LIKE to compare strings based on how they're pronounced and
MATCH ... AGAINST for full-text matching.
That's about it for string matching, though there are other string functions.
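A hedged sketch of the REGEXP option against a set of values, assuming MySQL (the operator list above is MySQL's) and ignoring regex metacharacter escaping:
-- Build one alternation pattern from all values in AnotherTable,
-- e.g. 'one|two|three', and match COL1 against it
SELECT t.*
FROM `Table` t
WHERE t.COL1 REGEXP (SELECT GROUP_CONCAT(Col SEPARATOR '|')
                     FROM AnotherTable);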
For the example, you could try joining on Table.Col1 LIKE CONCAT(AnotherTable.Col, '%'), though performance will probably be dreadful (assuming it works).
Try a cross join, so that you can compare every row in AnotherTable to every row in Table:
SELECT DISTINCT t.Col, t.Col1
FROM AnotherTable at
CROSS JOIN Table t
WHERE t.col1 LIKE ('%' + at.col + '%')
To make it safe, you'll need to escape any LIKE wildcards occurring in at.col first.
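A sketch of that escaping, assuming SQL Server, where square brackets neutralize LIKE metacharacters ([ must be escaped before % and _):
SELECT DISTINCT t.Col, t.Col1
FROM AnotherTable at
CROSS JOIN Table t
-- Wrap each wildcard character in [] so it matches literally
WHERE t.Col1 LIKE '%' + REPLACE(REPLACE(REPLACE(at.col,
                    '[', '[[]'),
                    '%', '[%]'),
                    '_', '[_]') + '%'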
If I understand the question correctly, you want the rows from "Table" when "Table.Col1" is IN "AnotherTable.Col", and you also want the rows where Col1 is LIKE '%some_string%'.
If so you want something like:
SELECT
t.*
FROM
[Table] t
LEFT JOIN
[AnotherTable] at ON t.Col1 = at.Col
WHERE (at.Col IS NOT NULL
OR t.Col1 LIKE '%some_string%')
Something like this?
SELECT * FROM TABLE
WHERE
COL1 IN (Select col from AnotherTable)
AND COL1 LIKE '%test_string%'
Are you thinking about something like EXISTS?
SELECT * FROM TABLE t WHERE EXISTS (SELECT 1 FROM AnotherTable t2 WHERE t.COL1 LIKE '%' + t2.col + '%')