Postgres and Oracle include the generate_series / CONNECT BY constructs, which make it easy to advance a sequence by more than 1. I need to increment a sequence by a variable amount before row insertion. For example, in Postgres this would look like the query below:
select nextval('mytable_seq') from generate_series(1,3);
What would be the recommended way to accomplish this in Microsoft SQL Server?
There is a stored procedure, sp_sequence_get_range, that you can use to reserve a whole range of sequence values in one call. Alternatively, you could set up some sort of WHILE loop that calls NEXT VALUE FOR multiple times and caches the values for later use.
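For reference, a rough sketch of the range-reservation route, assuming a sequence named dbo.mytable_seq (the procedure and its parameters are built into SQL Server; the sequence name is just an example):
-- Reserve 3 consecutive values from the sequence in a single call.
DECLARE @first_value sql_variant;
EXEC sys.sp_sequence_get_range
    @sequence_name = N'dbo.mytable_seq',
    @range_size = 3,
    @range_first_value = @first_value OUTPUT;
-- The reserved range covers @first_value through @first_value + 2.
SELECT CONVERT(bigint, @first_value) AS FirstReservedValue;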
Old question - new answer:
Assuming you've defined your sequence as:
create sequence dbo.IdSequence
as bigint
start with 1
...you can just include the phrase next value for dbo.IdSequence as a column in a select statement. When I have sequence values I want paired to a result set, I'll do something like:
select
next value for dbo.IdSequence as Seq,
someSource.Col1,
someSource.Col2 --> ... and so on
from
dbo.someSource
If I need a specific number of sequence values, I'll select from some kind of table-valued function that generates dummy rows:
select
next value for dbo.IdSequence Seq
from
dbo.FromTo( 1, 5 )
Note that if you include two columns requesting values from the same sequence, they'll return the same value in both columns for each row, which is probably not what you want:
select
next value for dbo.IdSequence Seq1,
next value for dbo.IdSequence Seq2
from
dbo.FromTo( 1, 5 )
...returns something like:
Seq1 Seq2
--------------------------
549 549
550 550
551 551
552 552
553 553
FromTo is a simple table-valued function that generates a range of numbers. There are lots of great examples of such functions in the (many) answers to this question.
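For completeness, here is a minimal sketch of what a FromTo-style function could look like; this is an assumption about its shape, not the definition the answer actually used:
-- Hypothetical inline table-valued function returning the integers @from .. @to.
create function dbo.FromTo( @from int, @to int )
returns table
as
return
(
    select top (case when @to >= @from then @to - @from + 1 else 0 end)
           @from - 1 + row_number() over (order by (select null)) as N
    from sys.all_objects
);
For ranges larger than the row count of sys.all_objects you would need to cross join it with itself; the answers referred to above show sturdier versions.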
Related
I have the data below; it is a kind of key/value pair such as 116=0.2875, and BigQuery has stored it as a string. What I need to do is extract the key, i.e. 116, from each row.
To make things more complicated, if a row has more than one key/value pair, the pair to extract is the one with the highest number on the right, e.g. for {1=0.1,2=0.8} the extracted key would be 2.
I am struggling to do this in SQL, particularly as some rows have one pair and some have several.
This is as close as I have managed to get: the query below extracts the highest right-hand value (which I don't actually need), but I can't seem to get either the whole key/value pair (which would be fine) or just the key (which would be ideal):
select
  column,
  (SELECT MAX(CAST(Values AS NUMERIC))
   FROM UNNEST(JSON_EXTRACT_ARRAY(REPLACE(REPLACE(REPLACE(column,"{","["),"}","]"),"=",","))) AS Values
   WHERE Values LIKE "%.%") AS Highest
from `table`
Here is some sample data:
1 {99=0.25}
2 {99=0.25}
3 {99=0.25}
4 {116=0.2875, 119=0.6, 87=0.5142857142857143}
5 {105=0.308724832214765}
6 {105=0.308724832214765}
7 {139=0.5712754555198284}
8 {127=0.5767967894928858}
9 {134=0.2530120481927711, 129=0.29696599825632086, 73=0.2662459427947186}
10 {80=0.21242613001118038}
Any help on this conundrum would be greatly appreciated!
Consider below approach
select column,
  ( select cast(split(kv, '=')[offset(0)] as int64)
    from unnest(regexp_extract_all(column, r'(\d+=\d+\.\d+)')) kv
    order by cast(split(kv, '=')[offset(1)] as float64) desc
    limit 1
  ) key
from your_table
If applied to the sample data in your question, the output is the key paired with the highest value in each row: 99, 99, 99, 119, 105, 105, 139, 127, 129, 80.
I have the following SQL problem. Given a table with a specific column, e.g. tableX:
col1
123
321
456
321
982
666
100
...
The number of rows in this table can vary (it can even be 0).
What I need is to turn the rows into comma-separated text.
For example, I could put all values into one string using this:
SET @LIST = (SELECT ',''' + col1 + '''' FROM tableX FOR XML PATH(''))
('123','321','456','321',...) -- < can be too long
but the problem is that @LIST gets too long. Therefore I want to divide it, based on the number of entries, into multiple (sub)lists. For example, a fixed size (e.g. a maximum of 3 elements) would always take three elements at a time until nothing is left.
I was thinking of using some kind of loop:
// some kind of loop
('123,321,456')
// end of loop
and in the next iteration
('321,982,666')
and finally (if fewer than 3 remain) only
('100')
How can I achieve this?
Edit: the database is an MSSQL db. If necessary I could sort the entries, but they also contain characters (not only digits); in fact the order doesn't matter.
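No answer was included above, so here is a rough sketch of one way to do the chunking on SQL Server 2017 or later, using ROW_NUMBER to assign every row to a bucket of @chunkSize elements and STRING_AGG to build each sub-list (table/column names follow the question; the quoting mirrors the FOR XML example):
declare @chunkSize int = 3;  -- maximum elements per sub-list

;with numbered as
(
    -- assign every row to a bucket of @chunkSize rows
    select col1,
           (row_number() over (order by (select null)) - 1) / @chunkSize as chunkNo
    from tableX
)
-- build one quoted, comma-separated list per bucket
select chunkNo,
       '(''' + string_agg(col1, ''',''') + ''')' as sublist
from numbered
group by chunkNo
order by chunkNo;
On versions before SQL Server 2017 the STRING_AGG line can be replaced with the FOR XML PATH trick from the question, applied per chunkNo; an empty table simply yields zero sub-lists.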
I have a column that stores 2 values. Example below:
| Column 1 |
|some title1 =ExtractThis ; Source Title12 = ExtractThis2|
I want to extract 'ExtractThis' into one column and 'ExtractThis2' into another column. I've tried using SUBSTRING, but it doesn't work because the data in Column 1 is variable in length, so it doesn't always carve out my intended values. SQL below:
SELECT substring(d.Column1,13,24) FROM dbo.Table d
This returns 'ExtractThis', but for other rows it takes either too much or too little. Is there a function, or combination of functions, that will let me split consistently on those characters? They are consistent in my column, unlike the lengths.
select substring(col1,CHARINDEX('=',col1)+1,CHARINDEX (';',col1)-CHARINDEX ('=',col1)-1) Val1,
substring(col1,CHARINDEX('=',col1,CHARINDEX (';',col1))+1,LEN(col1)) Val2
from #data
There is duplicated calculation here that could be reduced from five CHARINDEX calls to three per row, but I would like to believe SQL Server performs this simple optimization itself.
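If you did want to avoid the repeated calls, a hedged sketch of one way is to compute the delimiter positions once with CROSS APPLY and reuse them (same #data / col1 as above):
-- Compute each delimiter position once, then reuse it.
select substring(col1, eq1 + 1, semi - eq1 - 1) as Val1,   -- between the first '=' and ';'
       substring(col1, eq2 + 1, len(col1))      as Val2    -- after the '=' that follows ';'
from #data
cross apply (select charindex('=', col1) as eq1,
                    charindex(';', col1) as semi) p1
cross apply (select charindex('=', col1, semi) as eq2) p2;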
I have a column in SQL Server called "Ordinal" that is used to indicate the display order of the rows. It starts at 0 and skips 10 for each next row, so we have something like this:
Id Ordinal
1 0
2 20
3 10
It skips 10 because we wanted to be able to move an item in between other items (based on ordinal) without having to reassign ordinal numbers for the entire table.
As you can imagine, eventually the ordinal numbers will need to be reassigned somehow for a move-in-between operation, either on the surrounding rows or for the entire table, once the unused ordinal numbers between the target items are used up.
Is there an algorithm I can use to reorder the ordinal numbers for the move operation, taking into consideration long-term maintainability of the table and minimizing update operations?
You can re-number the sequences using a somewhat complicated UPDATE statement:
UPDATE u
SET u.sequence = 10 * (c.num_below - 1)
FROM test u
JOIN (
    SELECT t.id, COUNT(*) AS num_below
    FROM test t
    JOIN test tr ON tr.sequence <= t.sequence
    GROUP BY t.id
) c ON c.id = u.id
The idea is to obtain a count of items whose sequence value is less than or equal to that of the current row, subtract one, multiply by ten, and assign the result as the new sequence value.
The content of test before the UPDATE:
ID Sequence
__ ________
1 0
2 10
3 20
4 12
The content of test after the UPDATE:
ID Sequence
__ ________
1 0
2 10
3 30
4 20
Now the sequence numbers are evenly spread again, so you can continue inserting in the middle until you run out of new sequence numbers; then you can re-number again.
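On SQL Server 2005 and later the same renumbering can also be written with ROW_NUMBER through an updatable CTE; this is a variant of the answer above, not part of it, and it breaks ties arbitrarily where the COUNT-based version would assign equal values:
;with ranked as
(
    select sequence,
           row_number() over (order by sequence) as rn
    from test
)
update ranked
set sequence = 10 * (rn - 1);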
These won't answer your question directly--I just thought I might suggest some other approaches:
One possibility--don't try to do it by hand. Have your software manage the numbers. If they need re-writing, just save them with new numbers.
A second: use a "linked list" instead. In each record store the index of the next record you want displayed, then have your code load that directly into a linked list.
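As a sketch of that linked-list idea (the table and column names here are made up for illustration):
-- Each row points at the row that should be displayed after it; NULL marks the last row.
create table dbo.LinkedItems
(
    Id     int identity primary key,
    Name   nvarchar(100) not null,
    NextId int null references dbo.LinkedItems(Id)
);
Moving an item then means rewriting at most three NextId values, regardless of table size.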
Yet another simple approach. Let's say you're inserting a new record with an ordinal equal to x.
First, check whether there's already a row whose ordinal value equals x. If there is, update all the records whose ordinal value is equal to or greater than x, increasing them by y. Then you are safe to insert the new record.
This way you can be sure you won't run the update every time, and of course you'll keep the order.
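A rough T-SQL sketch of that approach, assuming the question's Id/Ordinal columns live in a table called dbo.MyTable (the table name and the shift amount y = 10 are illustrative):
declare @x int = 10,  -- the ordinal the new record should get
        @y int = 10;  -- how far to push colliding rows

-- Only shift when the slot is already occupied.
if exists (select 1 from dbo.MyTable where Ordinal = @x)
begin
    update dbo.MyTable
    set Ordinal = Ordinal + @y
    where Ordinal >= @x;
end;

-- Insert the new row (other columns omitted for brevity).
insert into dbo.MyTable (Ordinal) values (@x);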
I have an SQL query giving me X results. I want the query output to have a column called count, making the output something like this:
count id section
1 15 7
2 3 2
3 54 1
4 7 4
How can I make this happen?
So in your example, "count" is the derived sequence number? I don't see what pattern is used to determine the count must be 1 for id=15 and 2 for id=3.
count id section
1 15 7
2 3 2
3 54 1
4 7 4
If id contained unique values, and you order by id you could have this:
count id section
1 3 2
2 7 4
3 15 7
4 54 1
Looks to me like mikeY's DSum approach could work. Or you could use a different approach to a ranking query as Allen Browne described at this page
Edit: You could use DCount instead of DSum. I don't know how the speed would compare between the two, but DCount avoids creating a field in the table simply to store a 1 for each row.
DCount("*","YourTableName","id<=" & [id]) AS counter
Whether you go with DCount or DSum, the counter values can include duplicates if the id values are not unique. If id is a primary key, no worries.
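For illustration, dropped into a complete Access query the expression might look like this (the table and field names are placeholders):
SELECT id,
       section,
       DCount("*", "YourTableName", "id<=" & [id]) AS counter
FROM YourTableName
ORDER BY id;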
I frankly don't understand what it is you want, but if all you want is a sequence number displayed on your form, you can use a control bound to the form's CurrentRecord property. A control with the ControlSource =CurrentRecord will have an always-accurate "record number" that is in sequence, and that will update when the form's Recordsource changes (which may or may not be desirable).
You can then use that number to navigate around the form, if you like.
But this may not be anything like what you're looking for -- I simply can't tell from the question you've posted and the "clarifications" in comments.
The only trick I have seen is this: if you have a sequential id field, you can create a new field in which the value for each record is 1, and then do a running sum of that field.
Add to your query
DSum("[New field with 1 in it]","[Table Name]","[ID field]<=" & [ID Field])
as counterthing
That should produce a sequential count in Access which is what I think you want.
HTH.
(Stolen from Rob Mills here:
http://www.access-programmers.co.uk/forums/showthread.php?p=160386)
Alright, I guess this comes close enough to constitute an answer: the following link specifies two approaches: http://www.techrepublic.com/blog/microsoft-office/an-access-query-that-returns-every-nth-record/
The first approach assumes that you have an ID value and uses DCount (similar to #mikeY's solution).
The second approach assumes you're OK creating a VBA function that will run once for EACH record in the recordset, and will need to be manually reset (with some VBA) every time you want to run the count - because it uses a "static" value to run its counter.
As long as you have a reasonable number of records (hundreds, not thousands), the second approach looks like the easiest/most powerful to me.
This function can be called for each record if it is available from a module.
Example: incrementingCounterTimeFlaged(10, [anyField]) should give your query rows an int incrementing from 0.
'provides incrementing int values 0 to n
'resets to 0 some seconds after the first call
Function incrementingCounterTimeFlaged(resetAfterSeconds As Integer, anyfield As Variant) As Integer
    Static resetAt As Date
    Static i As Integer

    'if the reset date is in the past, set a new one and return 0
    If DateDiff("s", resetAt, Now()) > 0 Then
        resetAt = DateAdd("s", resetAfterSeconds, Now())
        i = 0
        incrementingCounterTimeFlaged = i
    'if the reset date is still in the future, increment and return
    Else
        i = i + 1
        incrementingCounterTimeFlaged = i
    End If
End Function
autoincrement in SQL
SELECT (SELECT COUNT(*) FROM myTable A WHERE A.id <= B.id) AS [count], B.id, B.Section FROM myTable AS B ORDER BY B.id ASC
You can use ROW_NUMBER(), which is available in SQL Server 2005 and later:
SELECT ROW_NUMBER() OVER (ORDER BY ID DESC) AS RowNum,
ID,
Section
FROM myTable
RowNum then displays the sequence of row numbers.