I have a table with the following structure:
Id  Division  Details
1   A         some text
2   A         some text
3   B         some text
4   B         some text
5   B         some text
I need to add a new column of integer type named "Order" with some data as described below:
Id  Division  Details    Order
1   A         some text  1
2   A         some text  2
3   B         some text  1
4   B         some text  2
5   B         some text  3
As you can see, the sequence in the "Order" column has to reset whenever the "Division" value changes. There are thousands of rows in the table, with more than 100 different divisions.
How can I achieve this?
You can use row_number():
select t.*,
       row_number() over (partition by division order by id) as rn
from mytable t
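If you actually need to persist the values in the new column rather than compute them at query time, one option (sketched for SQL Server, since the question has no database tag; mytable is the placeholder name from the query above) is to add the column and then update it through a CTE:

-- Sketch for SQL Server: "Order" is a reserved word, so it has to be bracket-quoted
ALTER TABLE mytable ADD [Order] int;
GO  -- run the ALTER in its own batch so the UPDATE can see the new column

WITH numbered AS (
    SELECT [Order],
           ROW_NUMBER() OVER (PARTITION BY division ORDER BY id) AS rn
    FROM mytable
)
UPDATE numbered
SET [Order] = rn;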
I used standard SQL to insert data from one table to another in BigQuery using a Jupyter Notebook.
For example I have two tables:
table1
ID Product
0 1 book1
1 2 book2
2 3 book3
table2
ID Product Price
0 5 book5 8.0
1 6 book6 9.0
2 4 book4 3.0
I used the following queries:
INSERT test_data.table1
SELECT *
FROM test_data.table2
ORDER BY Price;
SELECT *
FROM test_data.table1
I got
ID Product
0 1 book1
1 3 book3
2 2 book2
3 5 book5
4 6 book6
5 4 book4
I expected the result to appear in the order of ID 1 2 3 4 5 6, where 4, 5, 6 are the newly inserted rows ordered by Price.
It also seems that INSERT and/or SELECT FROM return records in a different, seemingly random order on each run.
How do I control the sort order of the SELECT output without having to include the 'Price' column in the output table?
The same thing happens when I import a CSV file to create a new table: the record order is random when I use SELECT FROM to display the rows.
The ORDER BY clause specifies a column or expression as the sort criterion for the result set.
If an ORDER BY clause is not present, the order of the results of a query is not defined.
Column aliases from a FROM clause or SELECT list are allowed. If a query contains aliases in the SELECT clause, those aliases override names in the corresponding FROM clause.
So, you most likely wanted something like below
SELECT *
FROM test_data.table1
ORDER BY Price DESC
LIMIT 100
Note the use of LIMIT; it is an important part. If you are sorting a very large number of values, use a LIMIT clause to avoid a "resources exceeded" type of error.
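If the destination table does have the Price column (as the query above implies) but you don't want it in the output, BigQuery's SELECT * EXCEPT can drop it from the select list while still sorting by it; a minimal sketch under that assumption:

-- exclude Price from the output while still ordering by it
SELECT * EXCEPT (Price)
FROM test_data.table1
ORDER BY Price DESC
LIMIT 100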
I have a data set right now with 3 columns.
Column 1 is the order number; it is sequential in its own right and a foreign key.
Column 2 is the batch number; it is a sequence all of its own.
Column 3 is a timestamp.
The problem I have is as follows:
Order Batch TimeStamp
1 1
2 2
1 3
3 4
2 5
1 6
I am trying to work out the time differences between batches on a per order basis.
Usually I get a sequence number PER order id, but this isn't the case here. I am trying to create a view that will do that, but my first obstacle is translating those batch sequences into a sequence number PER order.
My ideal output:
Order Batch SequenceNumber TimeStamp
1 1 1
2 2 1
1 3 2
3 4 1
2 5 2
1 6 3
All help is appreciated!!
This is what row_number() does:
select t.*, row_number() over (partition by "order" order by batch) as seqnum
from t;
Note: the column name order has to be escaped (here with ANSI double quotes) because it is a SQL reserved word. It is best not to use reserved words for column names.
row_number() is ANSI-standard functionality available in most databases (your question doesn't have a database tag). There are other ways to do this, but row_number() is the simplest.
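The exact quoting syntax depends on the database; a few common variants of the same query (t is the placeholder table name from the answer above):

-- ANSI standard / PostgreSQL / Oracle: double quotes
select t.*, row_number() over (partition by "order" order by batch) as seqnum from t;

-- SQL Server: square brackets
select t.*, row_number() over (partition by [order] order by batch) as seqnum from t;

-- MySQL 8+: backticks
select t.*, row_number() over (partition by `order` order by batch) as seqnum from t;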
I have a table named conversion with the columns shown below. I want to multiply the length and width elements of the 'dimensions' column (l * w) per area and display the results in another new table.
Please let me know if anything changes for the same logic in MS Access.
It is probably simple, but I don't know the exact query to solve the problem. Waiting for your solutions.

ID  area  length/width  dimensions  **new column (L*W) here**
1   1     l             3           3*5=15
2   1     w             5
3   2     l             4
4   2     w             8
5   3     l             6
6   3     w             10
7   4     l             12
8   4     w             13
9   4     W             10

Waiting for your reply.
You could query the table twice: once for lengths and once for widths and then join by area and multiply the values:
select length.area, length.dimension * width.dimension
from
(select area, dimension from conversion where lenwidth = 'l') length
inner join
(select area, dimension from conversion where lenwidth = 'w') width
on length.area = width.area;
Two remarks:
I suppose it is a typo that you have two width entries for area 4? Otherwise you would have to decide which value to take in the above select statement.
It would not be a good idea to keep the old table and have a new table holding the results. What if you change a value? You would have to remember to change the result accordingly every time. So either ditch the old table or use a view instead of a new table.
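For the view option in the second remark, a minimal sketch (area_products is a made-up view name; the lenwidth / dimension column names are the ones used in the query above):

-- area_products is a hypothetical name for the view
create view area_products as
select length.area, length.dimension * width.dimension as product
from
(select area, dimension from conversion where lenwidth = 'l') length
inner join
(select area, dimension from conversion where lenwidth = 'w') width
on length.area = width.area;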
Try this
select *,
dimensions*(lead(dimensions) over(order by id)) product
from table1;
Or, if you want the product computed only for each area's l/w pair, then:
select *,
case when length_width='l' and (lead(length_width) over(order by id))='w'
then dimensions*(lead(dimensions) over(order by id))
else 0
end as product
from table1;
We are currently using SQL2005. I have a SQL table that stores serial numbers in a single column. Thus 10 000 serial numbers mean 10 000 rows. When these are printed on an invoice, one serial number per row is being printed due to how the information is stored. We currently use the built-in invoice in our ERP system but will change to SSRS if I can get the printing of serials sorted.
How can I read the serial numbers and display them (either in a view or a stored procedure), maybe 10 at a time per row? So if I am reading 18 serials it will be two rows (the 1st row with 10 serials and the 2nd row with 8 serials). If I am reading 53 serials, it will be 6 rows. Getting this right will cut the paper needed for invoice printing to roughly a tenth of what is currently required!
Just to be clear... the serials are currently stored and printed like this:
Ser1
Ser2
Ser3
Ser4
Ser5
I would prefer them to print like this:
Ser1 Ser2 Ser3 Ser4 Ser5 Ser6 Ser7 Ser8 Ser9 Ser10
Ser11 Ser12 Ser13 Ser14 Ser15 Ser16....etc
Thanks
You can use row_number() to assign a unique number to each row. That allows you to group by rn / 10, giving you groups of 10 rows.
Here's an example using 3 instead of 10 serials per row:
select max(case when rn % 3 = 0 then serialno end) as sn1
, max(case when rn % 3 = 1 then serialno end) as sn2
, max(case when rn % 3 = 2 then serialno end) as sn3
from (
select row_number() over (order by serialno) -1 as rn
, serialno
from #t
) as SubQueryAlias
group by
rn / 3
See it working at SQL Fiddle.
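If you want to try the query locally, a minimal setup for the #t table it references (SQL Server 2005 syntax, so one INSERT per row; the sample serial numbers are made up):

-- illustrative data only; #t is the temp table name used in the query above
create table #t (serialno varchar(20));
insert into #t (serialno) values ('Ser01');
insert into #t (serialno) values ('Ser02');
insert into #t (serialno) values ('Ser03');
insert into #t (serialno) values ('Ser04');
insert into #t (serialno) values ('Ser05');

With these five rows, the 3-per-row version returns two rows: Ser01-Ser03 on the first and Ser04-Ser05 (with sn3 NULL) on the second.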
I am a long time fan of Stack Overflow but I've come across a problem that I haven't found addressed yet and need some expert help.
I have a query that is sorted chronologically with a date-time compound key (unique, never deleted) and several pieces of data. What I want to know is whether there is a way to find the start (or end) of a region where a value changes. For example:
DateTime someVal1 someVal2 someVal3 target
1 3 4 A
1 2 4 A
1 3 4 A
1 2 4 B
1 2 5 B
1 2 5 A
I want the query to return rows 1, 4 and 6: it should find the change in column 5 (target) from A to B and then from B back to A. I have tried the find-duplicates method and using Min and Max in the Totals property, but that gives me the first and last values overall instead of the local max and min. Has anyone run into a similar problem?
I didn't see any purpose for the someVal1, someVal2, and someVal3 fields, so I left them out. I used an autonumber as the primary key instead of your date/time field; but this approach should also work with your date/time primary key. This is the data in my version of your table.
pkey_field target
1 A
2 A
3 A
4 B
5 B
6 A
I used a correlated subquery to find the previous pkey_field value for each row.
SELECT
    m.pkey_field,
    m.target,
    (SELECT Max(pkey_field)
     FROM YourTable
     WHERE pkey_field < m.pkey_field) AS prev_pkey_field
FROM YourTable AS m;
Then put that in a subquery which I joined to another copy of the base table.
SELECT
    sub.pkey_field,
    sub.target,
    sub.prev_pkey_field,
    prev.target AS prev_target
FROM
    (SELECT
         m.pkey_field,
         m.target,
         (SELECT Max(pkey_field)
          FROM YourTable
          WHERE pkey_field < m.pkey_field) AS prev_pkey_field
     FROM YourTable AS m) AS sub
    LEFT JOIN YourTable AS prev
        ON sub.prev_pkey_field = prev.pkey_field
WHERE
    sub.prev_pkey_field Is Null
    OR prev.target <> sub.target;
This is the output from that final query.
pkey_field target prev_pkey_field prev_target
1 A
4 B 3 A
6 A 5 B
Here is a first attempt,
SELECT t1.Row, t1.target
FROM t1
WHERE t1.target <> NZ(
    (SELECT TOP 1 t2.target
     FROM t1 AS t2
     WHERE t2.DateTimeId < t1.DateTimeId
     ORDER BY t2.DateTimeId DESC), "X");