Is it possible to do keyset pagination for an nvarchar(max) column in SQL Server?
I understand that it's possible for int and date columns, since we can check < value or > value.
Suppose I have a table
Someid  Name
1       Abc
2       Cde
Currently it's written something like this:
SELECT * FROM table
WHERE 'somelogic'
ORDER BY Name ASC OFFSET 1 ROWS FETCH NEXT 1 ROWS ONLY
The problem is that it's currently ordered by Name, because that's the requirement.
Around page 100 it takes a long time to fetch the result.
Edit: Currently there is no filter on Name (there is no WHERE Name = 'somename'); the results are only ordered by Name ASC. This is offset pagination, if I'm not wrong. I want to know whether I can do keyset pagination for that.
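For reference, a minimal keyset (seek) pagination sketch on a non-unique Name column, using the Someid/Name columns from the example table and a hypothetical MyTable; @lastName and @lastSomeid are the values from the last row of the previous page. Note that nvarchar(max) cannot be an index key column, so this only performs well if Name is (or can be shortened to) an indexable type with an index on (Name, Someid):
-- Tie-break on Someid so rows with identical names are neither skipped nor repeated.
-- @lastName / @lastSomeid come from the last row of the previous page (hypothetical).
SELECT TOP (50) Someid, Name
FROM MyTable
WHERE Name > @lastName
   OR (Name = @lastName AND Someid > @lastSomeid)
ORDER BY Name ASC, Someid ASC;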
I have a SQL Server table with just 3 columns, one of which is of type varbinary. The data in this column is actually a JSON document which, among other properties, contains information about when the data was last modified. Unfortunately the SQL table itself does not contain information about when its rows were modified.
Now when doing sorting and filtering of the data, I of course don't want to fetch all rows in order to find e.g. the latest 100 entries.
So my question is: does SQL Server somehow remember when a row was added/modified? I have tried adding a timestamp column, and it is applied to all existing rows, but the values seem to be assigned arbitrarily, because sorting on it doesn't work. I don't need a datetime or anything; I just want to be able to sort the records based on when they were last modified.
Thanks
For those looking to add a timestamp column of type DATETIME to an existing DB table, you can do it like so:
ALTER TABLE TestTable
ADD DateInserted DATETIME NOT NULL DEFAULT (GETDATE());
The existing records will automatically get a value equal to the date/time of the moment the column was added.
New records will get the current date/time upon insertion.
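With that column in place, the newest rows can then be fetched directly; a minimal sketch, reusing the TestTable/DateInserted names above (an index on DateInserted keeps this from scanning the whole table):
-- Fetch the 100 most recently inserted rows.
SELECT TOP (100) *
FROM TestTable
ORDER BY DateInserted DESC;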
SQL Server will not track historically when a row was inserted or modified, so you need to rely on the JSON data to figure that out yourself. You are going to need a new column to make this efficient to query. Once you have your new column, you have some options:
Loop through all your records, populating the new column with the relevant value from the JSON data.
If your version of SQL Server is recent enough (2016 or later), you can query the JSON data directly and populate the column using a query like this:
UPDATE MyTable
SET MyNewColumn = JSON_VALUE(JsonDataColumn, '$.Customer.DateCreated')
The downside of this method is that you need to maintain this column yourself whenever the JSON data changes.
Make SQL Server compute the value from the JSON automatically, for example:
ALTER TABLE MyTable
ADD MyNewColumn AS JSON_VALUE(JsonDataColumn, '$.Customer.DateCreated')
And create an index to make it efficient:
CREATE INDEX IX_MyTable_MyNewColumn
ON MyTable(MyNewColumn)
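Two practical notes: JSON_VALUE expects a string input, so a varbinary column (as in the question) needs a cast first, e.g. JSON_VALUE(CAST(JsonDataColumn AS nvarchar(max)), '$.Customer.DateCreated'); and JSON_VALUE returns nvarchar(4000), so ordering on the computed column is a string sort, which is only chronological if the dates are stored in ISO 8601 format. A minimal usage sketch with the MyTable/MyNewColumn names above:
-- If the JSON dates are ISO 8601 (yyyy-MM-ddTHH:mm:ss), the string sort is chronological
-- and the index on MyNewColumn can satisfy the ORDER BY.
SELECT TOP (100) *
FROM MyTable
ORDER BY MyNewColumn DESC;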
Use a new column CreatedDate and store the datetime every time you make an insert. You could use GETDATE() for inserting the date into the column. An UpdatedDate column can be used for updates. You can then sort on these columns in order to find e.g. the latest 100 entries.
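Defaults only fire on insert, so keeping UpdatedDate current means either setting it explicitly in every UPDATE statement or using a trigger. A minimal trigger sketch, with MyTable, Id and UpdatedDate as hypothetical names:
-- Hypothetical table/column names; stamps UpdatedDate on every update.
-- Recursive triggers are off by default, so this does not re-fire itself.
CREATE TRIGGER trg_MyTable_SetUpdatedDate
ON MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE t
    SET UpdatedDate = GETDATE()
    FROM MyTable t
    INNER JOIN inserted i ON i.Id = t.Id;
END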
Timestamp (nowadays called rowversion) is indeed what you need.
It's an ever-increasing value that is updated automatically on every insert and update, so you are always able to find the most recently modified/inserted rows.
Here is an example:
create table dbo.test1 (id int);
insert into dbo.test1 values(1), (2), (3);
alter table dbo.test1 add ts timestamp;
update dbo.test1
set id = 10
where id = 2;
select top 1 *
from dbo.test1
order by ts desc;
--id ts
--10 0x000000001FCFABD2
insert into dbo.test1 (id)
values (100);
select top 1 *
from dbo.test1
order by ts desc;
--id ts
--100 0x000000001FCFABD3
As you see, you always get the last modified/inserted row.
For your purpose just use
select top 100 *
...
order by ts desc;
Thanks. Apparently I didn't look hard enough before I posted this question. The question has been asked a couple of times before, and the answer is: nope! There is no easy solution to this.
SQL Server does not keep track of when a record was created or modified, which is essentially what I was looking for. So I will go for the next best solution, which is probably to create a datetime column, retrieve the modified date from the JSON document and then update the record. Or rather, the 1.4 million records :-(
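A minimal sketch of that backfill, batched so the transaction log stays manageable for ~1.4 million rows. The LastModified column name and the JSON path are placeholders, and the varbinary column is cast so JSON_VALUE can read it:
-- Placeholders: LastModified (new datetime2 column), JsonDataColumn (varbinary JSON),
-- '$.lastModified' (the JSON property holding the modified date).
-- Use varchar(max) instead of nvarchar(max) in the CAST if the JSON bytes are UTF-8.
WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) MyTable
    SET LastModified = TRY_CONVERT(datetime2,
            JSON_VALUE(CAST(JsonDataColumn AS nvarchar(max)), '$.lastModified'))
    WHERE LastModified IS NULL
      AND JSON_VALUE(CAST(JsonDataColumn AS nvarchar(max)), '$.lastModified') IS NOT NULL;

    IF @@ROWCOUNT = 0 BREAK;
END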
Here is my situation:
I have 2 tables.
The 1st table has all records, and it has IDs.
The 2nd table has new records, and it doesn't have IDs yet.
I want to generate IDs for the 2nd table starting from max(id) + 1 of the 1st table.
When I do this, it gives all rows the same ID number, but I want each row to get a unique, incrementing number.
E.g. select max(id) from table1 gives '997040'.
I want the second table's rows to be like:
id
997041
997042
997043
997044
I think I need to use a cursor or a while loop, or both, but I could not create the actual query.
Sorry about the bad explanation; I am quite confused now.
Use ROWNUM to generate incrementing row numbers. E.g.:
SELECT someConstant + ROWNUM FROM source.
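ROWNUM is Oracle-specific; if this is SQL Server, ROW_NUMBER() plays the same role. A minimal sketch assuming the table1/table2/id names from the question and that table2 already has an (empty) id column:
-- Number table2's rows and offset them by the current max id from table1.
DECLARE @maxId int = (SELECT MAX(id) FROM table1);

WITH numbered AS
(
    SELECT id, ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
    FROM table2
)
UPDATE numbered
SET id = @maxId + rn;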
CREATE TABLE table_name
(
ID int IDENTITY(997041,1) PRIMARY KEY
)
I hope this SQL query works!
Or refer to http://www.w3schools.com/sql/sql_autoincrement.asp
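A brief usage note on the IDENTITY seed above: since the table as defined has only the ID column, DEFAULT VALUES is used here; with real data columns you would insert those instead.
-- Each insert receives the next identity value automatically.
INSERT INTO table_name DEFAULT VALUES;  -- ID = 997041
INSERT INTO table_name DEFAULT VALUES;  -- ID = 997042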
I'm trying to update a single field in a Fusion Table using App Inventor. I have successfully obtained the rowid using a select query, stored this value and displayed it in a label.
I then want to update a field of this row using the rowid obtained, but the rowid is being stored as 'rowid 1001' and not just '1001'.
Any suggestions on how I can get just the value of the rowid, without the column heading, will be greatly appreciated.
Snippet = Do It Result: UPDATE SET 'Name'='Tim' WHERE ROWID = 'rowid 1001'
Many Thanks
The result you get back from the Fusion Table is always a table in CSV format; in your case it is a 1-column CSV table which looks like this:
rowid
1001
The first row is the header row; the second row is the rowid you are looking for.
Now just split the result using the split block at \n (new line) to get a list with 2 items. The rowid you are looking for is the second item in that list.
I have a table which I dynamically fill with some data I want to create some statistics for. One column has values following a certain pattern, so I created an additional column where I map those values to other values so I can group them.
Now, before I run my statistics, I need to check whether I have to remap these values, which means checking whether there are null values in that column.
I can do a select like this:
select distinct 1
from my-table t
where t.status_rd is not null
;
The disadvantage is that this returns exactly one row but still has to perform a full scan. Is there some way to stop the select at the first encounter? I'm not interested in the exact row, because when there is at least one row I have to run an update on all of them anyway, but I would like to avoid running the update unnecessarily every time.
In Oracle I would do it with ROWNUM, but that doesn't exist in SQLite:
select 1
from my-table t
where t.status_rd is not null
and rownum <= 1
;
Use LIMIT 1 to select the first row returned:
SELECT 1
FROM my_table t
WHERE t.status_rd IS NULL
LIMIT 1
Note: I changed the where clause from IS NOT NULL to IS NULL based on your problem description. This may or may not be correct.
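As a follow-up, an equivalent formulation that also stops at the first match and returns a 0/1 flag directly, shown as a sketch using the my_table/status_rd names above:
-- Returns 1 if at least one row has a NULL status_rd, otherwise 0.
SELECT EXISTS (
    SELECT 1
    FROM my_table
    WHERE status_rd IS NULL
);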
I'm phrasing the question title poorly as I'm not sure what to call what I'm trying to do, but it really should be simple.
I have a link / join table with two ID columns. I want to run a check before saving new rows to the table.
The user can save attributes through a webpage, but I need to check that the same combination doesn't already exist before saving it. With one record it's easy: you just check whether that attributeId is already in the table, and if it is, don't allow them to save it again.
However, if the user chooses a combination of that attribute and another one then they should be allowed to save it.
Here's an image of what I mean:
So if a user now tried to save an attribute with an ID of 1 it will stop them, but I need it to also stop them if they tried IDs of 1 and 10, so long as both 1 and 10 have the same productAttributeId.
I'm muddling this in my explanation, but I'm hoping the image will clarify what I need to do.
This should be simple so I presume I'm missing something.
If I understand the question properly, you want to prevent the combination of AttributeId and ProductAttributeId from being reused. If that's the case, simply make them a combined primary key, which is by nature UNIQUE.
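A sketch of that combined key, with MyJoinTable and the two column names as hypothetical stand-ins for the real table:
-- Prevents the same (AttributeId, ProductAttributeId) pair from being inserted twice.
ALTER TABLE MyJoinTable
ADD CONSTRAINT PK_MyJoinTable PRIMARY KEY (AttributeId, ProductAttributeId);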
If that's not feasible, create a stored procedure that runs a query against the join for instances of the AttributeId. If the query returns 0 instances, insert the row.
Here's some light code to present the idea (may need to be modified to work with your database):
DECLARE @existing int;

SELECT @existing = COUNT(1)
FROM MyJoinTable
WHERE AttributeId = @RequestedID;

IF @existing = 0
BEGIN
    INSERT INTO MyJoinTable ...
END
You can control your inserts via a stored procedure. My understanding is that
users can select a combination of Attributes, such as
just 1
1 and 10 together
1,4,5,10 (4 attributes)
These need to enter the table as a single "batch" against a (new?) productAttributeId
So if (1,10) was chosen, this needs to be blocked because 1-2 and 10-2 already exist.
What I suggest
The stored procedure should take the attributes as a single list, e.g. '1,2,3' (comma separated, no spaces, just integers)
You can then use a string splitting UDF or an inline XML trick (as shown below) to break it into rows of a derived table.
Test table
create table attrib (attributeid int, productattributeid int)
insert attrib select 1,1
insert attrib select 1,2
insert attrib select 10,2
Here I use a variable, but you can incorporate it as a stored procedure input parameter.
declare @t nvarchar(max) = '1,2,10';

with input as
(
    -- split the comma-separated list into attributeid rows via the XML trick
    select a.attributeid
    from (select convert(xml, '<a>' + replace(@t, ',', '</a><a>') + '</a>') x) x
    cross apply x.x.nodes('a') n(c)
    cross apply (select n.c.value('.', 'int')) a(attributeid)
)
select top(1)
    t.productattributeid,
    count(t.productattributeid)  as count_attrib,  -- input attributes matched by this productattributeid
    (select count(*) from input) as count_input    -- attributes supplied in @t
from input i
left join attrib t on t.attributeid = i.attributeid
group by t.productattributeid
order by count_attrib desc;
Output
productattributeid count_attrib count_input
2 2 3
The 1st column gives you the productattributeid that has the most matches
The 2nd column gives you how many attributes were matched using the same productattributeid
The 3rd column is how many attributes exist in the input
If you compare the last 2 columns:
match - you can use that productattributeid to attach to the product, since it already has all these attributes
don't match - you need to do an insert to create a new combination (see the sketch below)
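A minimal sketch of that 'no match' branch, reusing the attrib table and the XML-split trick above; @t is the same comma-separated input, and the next productattributeid is simply max + 1 (inside a real stored procedure you would wrap this in a transaction):
-- Create a brand-new combination for the attribute list in @t.
declare @newpaid int = (select isnull(max(productattributeid), 0) + 1 from attrib);

insert into attrib (attributeid, productattributeid)
select a.attributeid, @newpaid
from (select convert(xml, '<a>' + replace(@t, ',', '</a><a>') + '</a>') x) x
cross apply x.x.nodes('a') n(c)
cross apply (select n.c.value('.', 'int')) a(attributeid);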