How to do paging in Pervasive SQL (version 9.1)? I need to do something similar to:
//MySQL
SELECT foo FROM table LIMIT 10, 10
But I can't find a way to define an offset.
Tested query in PSQL:
select top n *
from tablename
where id not in(
select top k id
from tablename
)
where n = the number of records you need to fetch at a time,
and k = a multiple of n (e.g. n = 5; k = 0, 5, 10, 15, ...).
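For example, with a page size of n = 5, the third page (k = 10) would be fetched roughly like this (tablename and id stand in for your own table and key column):
-- third page of 5 rows: n = 5, k = 10
select top 5 *
from tablename
where id not in (
    select top 10 id
    from tablename
)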
Our paging required that we be able to pass in the current page number and page size (along with some additional filter parameters) as variables. Since a select top @page_size doesn't work in MS SQL, we came up with creating a temporary or variable table to assign each row's primary key an identity that can later be filtered on for the desired page number and size.
** Note that if you have a GUID primary key or a compound key, you just have to change the object id on the temporary table to a uniqueidentifier or add the additional key columns to the table.
The downside to this is that it still has to insert all of the results into the temporary table, but at least it is only the keys. This works in MS SQL, but should be able to work for any DB with minimal tweaks.
DECLARE @page_number int, @page_size int
-- add any additional search parameters here

--create the table variable with the identity column and the id
--of the record that you'll be selecting. This is an in-memory
--table, so if the number of rows you'll be inserting is greater
--than 10,000, then you should use a temporary table in tempdb
--instead. To do this, use
--CREATE TABLE #temp_table (row_num int IDENTITY(1,1), objectid int)
--and change all the references to @temp_table to #temp_table
DECLARE @temp_table TABLE (row_num int IDENTITY(1,1), objectid int)

--insert into the table variable the ids of the records
--we want to return. It's critical to make sure the ORDER BY
--reflects the order of the records to return so that the row_num
--values are set in the correct order and we are selecting the
--correct records based on the page
INSERT INTO @temp_table (objectid)
/* Example: Select that inserts
   records into the table variable
SELECT person.personid
FROM person WITH (NOLOCK)
INNER JOIN degree WITH (NOLOCK) ON degree.personid = person.personid
WHERE person.lastname = @last_name
ORDER BY person.lastname asc, person.firstname asc
*/

--get the total number of rows that we matched
DECLARE @total_rows int
SET @total_rows = @@ROWCOUNT

--calculate the total number of pages based on the number of
--rows that matched and the page size passed in as a parameter
DECLARE @total_pages int
--add @page_size - 1 to the total number of rows to
--calculate the total number of pages. This is because SQL
--always rounds down for division of integers
SET @total_pages = (@total_rows + @page_size - 1) / @page_size

--return the result set we are interested in by joining
--back to @temp_table and filtering by row_num
/* Example: Selecting the data to return. If the
   insert was done properly, then you should always be joining
   the table that contains the rows to return to the objectid
   column on @temp_table
SELECT person.*
FROM person WITH (NOLOCK)
INNER JOIN @temp_table tt ON person.personid = tt.objectid
*/
--return only the rows in the page that we are interested in
--and order by the row_num column of @temp_table to make sure
--we are selecting the correct records
WHERE tt.row_num < (@page_size * @page_number) + 1
  AND tt.row_num > (@page_size * @page_number) - @page_size
ORDER BY tt.row_num
I face this problem in MS SQL too... no LIMIT or ROWNUMBER functions. What I do is insert the keys for my final query result (or sometimes the entire list of fields) into a temp table with an identity column... then I delete from the temp table everything outside the range I want... then use a join against the keys and the original table to bring back the items I want. This works if you have a nice unique key - if you don't, well... that's a design problem in itself.
An alternative with slightly better performance is to skip the deleting step and just use the row numbers in your final join (see the sketch after the pseudo-code below). Another performance improvement is to use the TOP operator so that, at the very least, you don't have to grab the stuff past the end of what you want.
So... in pseudo-code... to grab items 80-89...
create table #keys (rownum int identity(1,1), [key] varchar(10))
insert #keys ([key])
select TOP 89 [key] from myTable ORDER BY whatever
delete #keys where rownum < 80
select <columns> from #keys join myTable on #keys.[key] = myTable.[key]
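And a rough sketch of the no-delete variant mentioned above, filtering on the row number in the final join instead (same illustrative names):
create table #keys (rownum int identity(1,1), [key] varchar(10))

insert #keys ([key])
select TOP 89 [key] from myTable ORDER BY whatever

-- no delete: keep rows 1-89 and pick out 80-89 in the join
select <columns>
from #keys
join myTable on #keys.[key] = myTable.[key]
where #keys.rownum between 80 and 89
order by #keys.rownum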
I ended up doing the paging in code. I just skip the first records in a loop.
I thought I had come up with an easy way of doing the paging, but it seems that Pervasive SQL doesn't allow ORDER BY clauses in subqueries. But this should work on other DBs (I tested it on Firebird):
select *
from (select top [rows] *
      from (select top [rows * pagenumber] * from mytable order by id)
      order by id desc)
order by id
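For example, with 10 rows per page, page 3 of the pattern above would expand to something like this (derived-table aliases added for portability; as noted, the ORDER BY in the subqueries means it will not run on Pervasive):
select *
from (select top 10 *
      from (select top 30 * from mytable order by id) as newest_30
      order by id desc) as page_rows
order by id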
Related
I am writing a query to import data from one table to a new table. I need to insert records that do not exist in the new table, and update records that do exist. I am trying to use a MERGE "upsert" approach.
I have some unique problems due to the client's database and application structure. The table I am inserting into has a unique ID field that increments by 1 for each new row, but the table does not do the auto-incrementing itself; the insert statement needs to pull the highest ID in the target table and add 1 for the new record.
From my research, I can't figure out how to do that with MERGE. I do not have database permissions to create a sequence. I have tried a lot of things, but currently my query looks like:
MERGE
dbo.targetTable as target
USING
dbo.sourceTable AS source
ON
target.account_no = source.account_ID
WHEN NOT MATCHED THEN
INSERT (
ID,
FIELD1,
FIELD2,
FIELD3
) VALUES (
(SELECT MAX(ID) + 1 FROM dbo.targetTable),
'field1',
'field2',
'field3'
)
The problem I am running into with this code is that it appears to evaluate the SELECT for the new ID only once. That is, if the highest ID in the target table was 10, it would insert every new record with ID 11. That won't work, as I'm getting a
"Violation of PRIMARY KEY constraint. Cannot insert duplicate key in object" error. I've been doing a ton of googling and trying different things but haven't been able to figure this one out. Any help is appreciated, thank you.
EDIT: For clarification, the unique ID column does not auto-populate. If I do not insert a value for the ID column, I get
Cannot insert the value NULL into column 'ID', table 'dbo.targetTable'; column does not allow nulls. UPDATE fails.
And again, as I mentioned originally I do not have permissions to create sequences. It just throws an error and says I do not have permission to do that.
I agree that changing the ID column to auto-increment automatically would be perfect, but I do not have the capability to modify the table like that either.
If you don't need the IDs to be consecutive, you can add the last available ID to a ROW_NUMBER() to generate new, non-repeated IDs.
BEGIN TRANSACTION
DECLARE @NextAvailableID INT = (SELECT ISNULL(MAX(ID), 0) FROM dbo.targetTable WITH (TABLOCKX))
;WITH SourceWithNewIDs AS
(
SELECT
S.*,
NewID = @NextAvailableID + ROW_NUMBER() OVER (ORDER BY S.account_ID)
FROM
dbo.sourceTable AS S
)
MERGE
dbo.targetTable as target
USING
SourceWithNewIDs AS source
ON
target.account_no = source.account_ID
WHEN NOT MATCHED THEN
INSERT (
ID,
FIELD1,
FIELD2,
FIELD3
) VALUES (
NewID,
'field1',
'field2',
'field3'
)
COMMIT
Keep in mind that this example is missing proper error handling with a rollback, and the lock used to retrieve the max ID will block all other operations until the transaction is committed or rolled back.
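If it helps, here is a minimal sketch of one way to add that error handling around the same statements (the MERGE body is elided; THROW requires SQL Server 2012 or later):
BEGIN TRY
    BEGIN TRANSACTION

    -- DECLARE @NextAvailableID ... ;WITH SourceWithNewIDs AS (...) MERGE ... (same statements as above)

    COMMIT
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK;
    THROW;  -- re-raise the original error to the caller
END CATCH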
If you need the new rows to have consecutive IDs, then you can use this same approach with a regular INSERT (with WHERE NOT EXISTS ...) instead of a MERGE (you will have to write the UPDATE separately), as sketched below.
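A rough sketch of that alternative, reusing the hypothetical table and column names from the question (a separate UPDATE plus an INSERT guarded by NOT EXISTS, so the generated IDs stay consecutive):
BEGIN TRANSACTION

DECLARE @NextAvailableID INT = (SELECT ISNULL(MAX(ID), 0) FROM dbo.targetTable WITH (TABLOCKX))

-- update rows that already exist in the target
UPDATE T
SET T.FIELD1 = 'field1',
    T.FIELD2 = 'field2',
    T.FIELD3 = 'field3'
FROM dbo.targetTable AS T
INNER JOIN dbo.sourceTable AS S ON T.account_no = S.account_ID

-- insert the missing rows with consecutive IDs starting at MAX(ID) + 1
INSERT INTO dbo.targetTable (ID, FIELD1, FIELD2, FIELD3)
SELECT @NextAvailableID + ROW_NUMBER() OVER (ORDER BY S.account_ID),
       'field1',
       'field2',
       'field3'
FROM dbo.sourceTable AS S
WHERE NOT EXISTS (SELECT 1 FROM dbo.targetTable AS T WHERE T.account_no = S.account_ID)

COMMIT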
This is just a different way without using a MERGE. Permissions aren't required for temp tables, so I would use one to hold the account numbers that need to be inserted, with an identity field to help with traversal. A WHILE loop can traverse the identity, inserting the values for each source account_no into the target table. Since the insert is done in a loop, the MAX function should grab the target table's MAX(ID) correctly on each iteration.
DECLARE @tempTable TABLE (pkindex int IDENTITY(1,1) PRIMARY KEY, account_no int)
DECLARE @current int = 1
       ,@endcount int = 0
--account_no's that should be inserted
INSERT INTO @tempTable(account_no)
SELECT account_no
FROM sourceTable
WHERE account_no NOT IN (SELECT account_no FROM targetTable)
SET @endcount = (SELECT COUNT(*) FROM @tempTable)
--looping condition, should select the MAX(ID) with each subsequent loop
WHILE (@endcount > 0) AND (@current <= @endcount)
BEGIN
    INSERT INTO dbo.targetTable(ID, FIELD1, FIELD2, FIELD3)
    SELECT (SELECT MAX(T2.ID) + 1 FROM dbo.targetTable T2) AS MAXID
          ,S.field1
          ,S.field2
          ,S.field3
    FROM @tempTable T INNER JOIN sourceTable S ON T.account_no = S.account_no
    WHERE T.pkindex = @current --traversing the temp table by its identity
    SET @current += 1
END
I would like to update rows with values chosen randomly from a set of possible values.
Ideally I would be able to provide these values at runtime, using JdbcTemplate from a Java application.
Example:
In a table, the column "name" can contain any name. The goal is to run through the table and change all names to either "Bob" or "Alice".
I know that this can be done by creating an SQL function. I tested that and it was fine, but I wonder if it is possible to just use a simple query?
This does not work; it seems that the value is computed once and applied to all rows:
UPDATE test.table
SET first_name =
    (SELECT a.name
     FROM (SELECT a.name, RAND() idx
           FROM (VALUES('Alice'), ('Bob')) AS a(name)
           ORDER BY idx
           FETCH FIRST 1 ROW ONLY) AS a)
;
I tried using MERGE INTO, but it won't even run (possible_names is not found in the SET query). I have yet to figure out why:
MERGE INTO test.table
USING
(SELECT
names.fname
FROM
(VALUES('Alice'), ('Bob'), ('Rob')) AS names(fname)) AS possible_names
ON ( test.table.first_name IS NOT NULL )
WHEN MATCHED THEN
UPDATE SET
-- select random name
first_name = (SELECT fname FROM possible_names ORDER BY idx FETCH FIRST 1 ROW ONLY)
;
EDIT: If possible, I would like to only focus on fields being updated and not depend on knowing primary keys and such.
Db2 seems to be optimizing away the subselect that returns your supposedly random name, materializing it only once, hence all rows in the target table receive the same value.
To force subselect execution for each row you need to somehow correlate it to the table being updated, for example:
UPDATE test.table
SET first_name =
(SELECT a.name
FROM (VALUES('Alice'), ('Bob')) AS a(name)
ORDER BY RAND(ASCII(SUBSTR(first_name, 1, 1)))
FETCH FIRST 1 ROW ONLY)
or maybe even
UPDATE test.table
SET first_name =
(SELECT a.name
FROM (VALUES('Alice'), ('Bob')) AS a(name)
ORDER BY first_name, RAND()
FETCH FIRST 1 ROW ONLY)
Now that the result of the subselect seems to depend on the value of the corresponding row in the target table, there's no choice but to execute it for each row.
If your table has a primary key, this would work. I've assumed the PK is column id.
UPDATE test.table t
SET first_name =
    (SELECT name
     FROM (SELECT *, ROW_NUMBER() OVER (PARTITION BY id ORDER BY R) AS RN
           FROM (SELECT *, RAND() R
                 FROM test.table, TABLE(VALUES('Alice'), ('Bob')) AS d(name))
          ) AS u
     WHERE t.id = u.id AND rn = 1)
;
There might be a nicer/more efficient solution, but I'll leave that to others.
FYI I used the following DDL and data to test the above.
create table test.table(id int not null primary key, first_name varchar(32));
insert into test.table values (1,'Flo'),(2,'Fred'),(3,'Sue'),(4,'John'),(5,'Jim');
So, I'm looking at a stored procedure here, which has more than one line like the following pseudocode:
if(select count(*) > 0)
...
on tables having a unique id (or identifier, to make it more general).
Now, in terms of performance, is it more performant to change this clause
to
if(select count([uniqueId]) > 0)
...
where uniqueId is, e.g., an Idx containing double values?
An example:
Consider a table like Idx (double) | Name (String) | Address (String)
Now the 'Idx' is a foreign key which I want to join in a stored procedure.
So, in terms of performance: what is better here?
if(select count(*) > 0)
...
or
if(select count(Idx) > 0)
...
? Or does the SQL engine change select count(*) to select count(Idx) internally, so we do not have to bother about this? At first sight, I'd say that select count(Idx) would be more performant.
The two are slightly different. count(*) counts rows. count([uniqueid]) counts the number of non-NULL values for uniqueid. Because a unique constraint allows a NULL value, SQL Server actually needs to read the column. This could add microseconds of time to a query, particularly if the page with the id is not already in memory. This also gives SQL Server more opportunities to optimize count(*).
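A tiny demo of that difference, using a hypothetical table variable with one NULL value:
DECLARE @t TABLE (uniqueid int NULL)
INSERT INTO @t (uniqueid) VALUES (1), (2), (NULL)

SELECT COUNT(*) AS all_rows,        -- 3: counts every row
       COUNT(uniqueid) AS non_null  -- 2: skips the NULL
FROM @t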
As @lad2025 writes in a comment, the performant solution is to use if (exists ...).
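For example, using the hypothetical Idx table from the question (@someIdx is just an illustrative value), the count check could be rewritten as:
DECLARE @someIdx float = 1.0;  -- illustrative value to look for

-- instead of: IF (SELECT COUNT(*) FROM myTable WHERE Idx = @someIdx) > 0
IF EXISTS (SELECT 1 FROM myTable WHERE Idx = @someIdx)
BEGIN
    -- do the work; EXISTS can stop at the first matching row
    PRINT 'found';
END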
SELECT t1.*
FROM Table1 t1
JOIN Table2 t2 ON t2.idx = t1.idx
will give you only the rows in t1 that match an idx value in Table2. I'm not sure there is a good reason to do an if(select count...).
If you are really interested in the performance of something like this, just create a temp table with a million rows and give it a go:
CREATE TABLE #TempTable (id int identity, txt varchar(50))
GO
INSERT #TempTable (txt) VALUES (@@IDENTITY)
GO 1000000
I want to add a column to my table with a random number using a seed.
If I use RAND:
select *, RAND(5) as random_id from myTable
I get the same value (0.943597390424144, for example) for all the rows in the random_id column. I want this value to be different for every row - and, whenever I pass in the same seed value (0.5, for example), I want to get the same set of values again (as a seed should work...).
How can I do this?
(
For example, in PostgreSQL I can write
SELECT setseed(0.5);
SELECT t.* , random() as random_id
FROM myTable t
And I will get different values in each row.
)
Edit:
After I saw the comments here, I managed to work this out somehow - but it's not efficient at all.
If someone has an idea how to improve it, that would be great. If not, I will have to find another way.
I used the basic idea of the example in here.
Creating a temporary table with a blank seed value:
select * into t_myTable from (
select t.*, -1.00000000000000000 as seed
from myTable t
) as temp
Adding a random number for each seed value, one row at a time (this is the bad part...):
USE CPatterns;
GO
DECLARE @seed float;
DECLARE @id int;
DECLARE VIEW_CURSOR CURSOR FOR
select id
from t_myTable t;
OPEN VIEW_CURSOR;
FETCH NEXT FROM VIEW_CURSOR
into @id;
set @seed = RAND(5);
WHILE @@FETCH_STATUS = 0
BEGIN
set @seed = RAND();
update t_myTable set seed = @seed where id = @id
FETCH NEXT FROM VIEW_CURSOR
into @id;
END;
CLOSE VIEW_CURSOR;
DEALLOCATE VIEW_CURSOR;
GO
Creating the view using the seed value and ordering by it
create view my_view AS
select row_number() OVER (ORDER BY seed, id) AS source_id ,t.*
from t_myTable t
I think the simplest way to get a repeatable random id in a table is to use row_number() or a fixed id on each row. Let me assume that you have a column called id with a different value on each row.
The idea is just to use this as a seed:
select rand(id*1) as random_id
from mytable;
Note that the seed for the id is an integer and not a floating point number. If you wanted a floating point seed, you could do something with checksum():
select rand(checksum(id*0.5)) as random_id
. . .
If you are doing this for sampling (where you would say random_id < 0.1 for a 10% sample, for instance), then I often use modulo arithmetic on row_number():
with t as (
      select t.*, row_number() over (order by id) as seqnum
      from mytable t
     )
select *
from t
where ((seqnum * 17 + 71) % 101) < 10
This returns about 10% of the rows (okay, really 10/101). And you can adjust the sample by fiddling with the constants.
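For instance, against the same hypothetical mytable, raising the threshold to 20 gives roughly a 20% sample (20/101):
with t as (
      select t.*, row_number() over (order by id) as seqnum
      from mytable t
     )
select *
from t
where ((seqnum * 17 + 71) % 101) < 20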
Someone suggested a similar query using newid(), but here is the solution that works for me.
There's a workaround that uses newid() instead of rand(). You can execute it individually or as a column in a select statement; it will produce a random value per row rather than the same value for every row.
If you need a random number from 0 to N, just change 100 to the desired number.
SELECT TOP 10 [Flag forca]
,1+ABS(CHECKSUM(NEWID())) % 100 AS RANDOM_NEWID
,RAND() AS RANDOM_RAND
FROM PAGSEGURO_WORK.dbo.jobSTM248_tmp_leitores_iso
So, in case it helps someone someday, here's what I eventually did.
I generate the random seeded values on the server side (Java in my case), and then create a table with two columns: the id and the generated random_id.
Now I create the view as an inner join between that table and the original data.
The generated SQL looks something like this:
CREATE TABLE SEED_DATA(source_id INT PRIMARY KEY, random_id float NOT NULL);
select Rand(5);
insert into SEED_DATA values(1,Rand());
insert into SEED_DATA values(2, Rand());
insert into SEED_DATA values(3, Rand());
.
.
.
insert into SEED_DATA values(1000000, Rand());
and
CREATE VIEW DATA_VIEW
as
SELECT row_number() OVER (ORDER BY random_id, id) AS source_id,column1,column2,...
FROM
( select * from SEED_DATA tmp
inner join my_table i on tmp.source_id = i.id) TEMP
In addition, I create the random numbers in batches of 10,000 or so (it may be higher), so it does not weigh heavily on the server side, and I insert each batch into the table in a separate execution.
All of that because I couldn't find a good way to do what I want purely in SQL. Updating row after row is really not efficient.
My own conclusion from this story is that SQL Server is sometimes really annoying...
You could compute a random number from a seed:
rand(row_number() over (order by ___, ___, ___))
Then cast that as a varchar, then use the last 3 characters as another seed.
That would give you a nice random value:
rand(right(cast(rand(row_number() over (order by ___, ___, ___)) as varchar(15)), 3))
How do you write one SQL query that selects a column from a table but returns two columns, where the additional one contains an index of the row (a new one, going from 1 to n)? It must be done without using functions that do exactly that (like row_number()).
Any ideas?
Edit: it must be a one-select query
You can do this on any database:
SELECT (SELECT COUNT (1) FROM field_company fc2
WHERE fc2.field_company_id <= fc.field_company_id) AS row_num,
fc.field_company_name
FROM field_company fc
SET NOCOUNT ON
DECLARE @item_table TABLE
(
row_num INT IDENTITY(1, 1) NOT NULL PRIMARY KEY, --THE IDENTITY STATEMENT IS IMPORTANT!
field_company_name VARCHAR(255)
)
INSERT INTO @item_table
SELECT field_company_name FROM field_company
SELECT * FROM @item_table
If you are using Oracle or another database that supports sequence objects, create a new sequence for this purpose. Next create a view, and run something like:
insert into view_name select column_name, sequence_name.nextval from table_name
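A rough Oracle-flavored sketch of that idea (the sequence, table, and column names below are made up for illustration):
CREATE SEQUENCE row_idx_seq START WITH 1 INCREMENT BY 1;

-- number the rows while copying them into the work table behind the view
INSERT INTO numbered_copy (row_idx, column_name)
SELECT row_idx_seq.NEXTVAL, column_name
FROM source_table;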
In MySQL you can do:
SELECT Row, Column1
FROM (SELECT @row := @row + 1 AS Row, Column1
      FROM table1, (SELECT @row := 0) AS init) AS derived1
I figured out a hackish way to do this that I'm a bit ashamed of. On Postgres 8.1:
SELECT generate_series, (SELECT username FROM users LIMIT 1 OFFSET generate_series) FROM generate_series(0,(SELECT count(*) - 1 FROM users));
I believe this technique will work even if your source table does not have unique ids or identifiers.
On SQL Server 2005 and higher, you can use OVER to accomplish this:
SELECT rank() over (order by company_id) as rownum
, company_name
FROM company