I'm unable to write a working LIKE query for a field containing the German sharp-s (ß) in a case-insensitive text field.
Using HSQLDB 2.2.9, create a table with a case-sensitive field and a case-insensitive field.
CREATE CACHED TABLE MYTABLE (MYKEY LONGVARCHAR NOT NULL, PRIMARY KEY (MYKEY));
ALTER TABLE MYTABLE ADD COLUMN SEN LONGVARCHAR;
ALTER TABLE MYTABLE ADD COLUMN INSEN VARCHAR_IGNORECASE;
Write 2 records.
INSERT INTO MYTABLE (MYKEY, SEN, INSEN) VALUES ('1', 'Strauß', 'Strauß');
INSERT INTO MYTABLE (MYKEY, SEN, INSEN) VALUES ('2', 'Strauss', 'Strauss');
Verify.
SELECT * FROM MYTABLE
MYKEY, SEN, INSEN
'1', 'Strauß', 'Strauß'
'2', 'Strauss', 'Strauss'
The problem query:
SELECT * FROM MYTABLE WHERE INSEN LIKE '%ß%'
WRONG, RETURNS RECORD 2 NOT RECORD 1
These queries work as expected:
SELECT * FROM MYTABLE WHERE SEN LIKE '%ß%'
OK, RETURNS RECORD 1
SELECT * FROM MYTABLE WHERE UCASE(INSEN) LIKE '%ß%'
OK, RETURNS RECORDS 1 AND 2
SELECT * FROM MYTABLE WHERE UCASE(SEN) LIKE '%ß%'
OK, RETURNS NOTHING
SELECT * FROM MYTABLE WHERE SEN='Strauß'
OK, RETURNS RECORD 1
SELECT * FROM MYTABLE WHERE INSEN='Strauß'
OK, RETURNS RECORD 1
SELECT * FROM MYTABLE WHERE SEN='Strauss'
OK, RETURNS RECORD 2
SELECT * FROM MYTABLE WHERE INSEN='Strauss'
OK, RETURNS RECORD 2
Thanks!
I have to run a query with around 30 columns in the where clause. Each column has 1000+ values to compare. I know the IN clause is not the best way to do this. Can anyone suggest how to run this query without a processing error? E.g. below
select *
from table
where column1 not in (1,2,3,4......1000+ )
and column2 not in (1,2,3,4......1000+ )
and column3 not in (1,2,3,4......1000+ )
and so on up to column30.
I am getting this error:
SQL Server query processor ran out of internal resources.
I explored other links but did not find a solution or a suggestion for the best way to implement this.
Create a temp table holding all the possible values:
CREATE TABLE #temp(column1 int, column2 int ....)
INSERT INTO #temp values
(1,1,1...),
(1,2,2...),
.
.
Now apply the SELECT query filter accordingly:
Select * from table where column1 not in (SELECT column1 FROM #temp)
and column2 not in (SELECT column2 from #temp)
There is one more approach, using left outer joins:
Select * from table as t
LEFT OUTER JOIN #temp as t1
on t.column1 = t1.column1
LEFT OUTER JOIN #temp as t2
on t.column2 = t2.column2
.
.
WHERE t1.column1 IS NULL AND t2.column2 IS NULL
There are a couple of routes you can take, based on assumptions.
Assumption 1: the 1,2,3,4......1000+ values are the same across all columns' NOT IN comparisons
With this assumption, I believe the problem is that SQL Server's resources are exhausted in a query like this:
Note that I am calling your table MYTABLE.
select *
from MYTABLE -- <---- Remember the name I gave your table
where column1 not in (2, 5, 100, 22, 44, ... thousand other values)
and column2 not in (2, 5, 100, 22, 44, ... thousand other values)
and column3 not in (2, 5, 100, 22, 44, ... thousand other values)
Notice the values we are comparing. All columns are compared to the same set of values.
Solution
Create a table called values_to_compare like so:
create table values_to_compare (comparison_value int primary key);
Just remember that it might be much, much easier to populate the values_to_compare table by importing a text file of values into it rather than typing out a giant insert statement. But if you choose to write an insert statement, here's how you'd write it. You might have to break your insert statement up into batches of a few hundred entries if SQL Server complains about a large insert statement.
insert into values_to_compare values (2), (5), (100), (22), (44), ... thousand other values;
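If you do go the import route, here is a minimal sketch using BULK INSERT (the file name, path, and one-value-per-line layout are assumptions; adjust for your environment):
-- assumes C:\data\values.txt holds one integer per line
BULK INSERT values_to_compare
FROM 'C:\data\values.txt'
WITH (ROWTERMINATOR = '\n');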
Then, create an index on MYTABLE like so:
create index ix_mytable_column1 on MYTABLE (column1);
Then, write your query in pieces. First, do this:
select *
from MYTABLE
where not exists (select 1 from values_to_compare where comparison_value = MYTABLE.column1);
Hopefully, this will run fast. If that runs fast enough for you, it's time to add 4 more indexes:
create index ix_mytable_column2 on MYTABLE (column2);
create index ix_mytable_column3 on MYTABLE (column3);
create index ix_mytable_column4 on MYTABLE (column4);
create index ix_mytable_column5 on MYTABLE (column5);
Then, add 4 more lines so that the query looks like this:
select *
from MYTABLE
where not exists (select 1 from values_to_compare where comparison_value = MYTABLE.column1)
and not exists (select 1 from values_to_compare where comparison_value = MYTABLE.column2)
and not exists (select 1 from values_to_compare where comparison_value = MYTABLE.column3)
and not exists (select 1 from values_to_compare where comparison_value = MYTABLE.column4)
and not exists (select 1 from values_to_compare where comparison_value = MYTABLE.column5);
If this works well, add indexes on each of the remaining 25 columns in MYTABLE. Then, expand the query above to compare all 30 columns. I believe SQL Server will perform well.
Assumption 2: the values you compare with column1..30 are all different
With this assumption, I believe the problem is that SQL Server's resources are exhausted in a query like this:
select *
from MYTABLE
where column1 not in (2, 5, 100, 22, 44, ... thousand other values)
and column2 not in (1, 225, 5619, 8, 9000, ... thousand other values)
and column3 not in (2024, 5223, 0, 552, 4564, ... thousand other values)
Notice the values we are comparing. Each column is compared to a different set of values.
Solution
Create a table called values_to_compare like so:
create table values_to_compare (compare_column varchar(50), comparison_value int, primary key (compare_column, comparison_value));
Just remember that it might be much, much easier to populate the values_to_compare table by importing a text file of values into it rather than typing out a giant insert statement. But if you choose to write an insert statement, here's how you'd write it. You might have to break your insert statement up into batches of a few hundred entries if SQL Server complains about a large insert statement.
insert into values_to_compare values
('column1', 2), ('column1', 5), ('column1', 100), ('column1', 22), ('column1', 44), ... thousand other values ...
, ('column2', 1), ('column2', 225), ('column2', 5619), ('column2', 8), ('column2', 9000), ... thousand other values ...
, ('column3', 2024), ('column3', 5223), ('column3', 0), ('column3', 552), ('column3', 4564) ... thousand other values ...
;
Then, create an index on MYTABLE like so:
create index ix_mytable_column1 on MYTABLE (column1);
Then, write your query in pieces. First, do this:
select *
from MYTABLE
where not exists (select 1 from values_to_compare where comparison_value = MYTABLE.column1 and compare_column = 'column1');
Hopefully, this will run fast. If that runs fast enough for you, it's time to add 4 more indexes:
create index ix_mytable_column2 on MYTABLE (column2);
create index ix_mytable_column3 on MYTABLE (column3);
create index ix_mytable_column4 on MYTABLE (column4);
create index ix_mytable_column5 on MYTABLE (column5);
Then, add 4 more lines so that the query looks like this:
select *
from MYTABLE
where not exists (select 1 from values_to_compare where comparison_value = MYTABLE.column1 and compare_column = 'column1')
and not exists (select 1 from values_to_compare where comparison_value = MYTABLE.column2 and compare_column = 'column2')
and not exists (select 1 from values_to_compare where comparison_value = MYTABLE.column3 and compare_column = 'column3')
and not exists (select 1 from values_to_compare where comparison_value = MYTABLE.column4 and compare_column = 'column4')
and not exists (select 1 from values_to_compare where comparison_value = MYTABLE.column5 and compare_column = 'column5');
If this works well, add indexes on each of the remaining 25 columns in MYTABLE. Then, expand the query above to compare all 30 columns. I believe SQL Server will perform well.
Give this a shot.
EDIT
Based on the new information that SQL Server can handle comparisons on up to ~20 columns, we can split the operation.
select * into MYTABLE_TEMP from MYTABLE where 1=2;
We now have an empty work table with the same structure to store data. Then, execute the query comparing only the first 15 columns and dump the output into MYTABLE_TEMP.
insert into MYTABLE_TEMP
select *
from MYTABLE
where not exists (select 1 from values_to_compare where comparison_value = MYTABLE.column1)
and not exists (select 1 from values_to_compare where comparison_value = MYTABLE.column2)
and not exists (select 1 from values_to_compare where comparison_value = MYTABLE.column3)
...
...
and not exists (select 1 from values_to_compare where comparison_value = MYTABLE.column15);
Create 15 indexes on MYTABLE_TEMP.
create index ix_mytable_temp_column16 on MYTABLE_TEMP (column16);
create index ix_mytable_temp_column17 on MYTABLE_TEMP (column17);
...
...
create index ix_mytable_temp_column30 on MYTABLE_TEMP (column30);
Then, run a query on MYTABLE_TEMP.
select *
from MYTABLE_TEMP
where not exists (select 1 from values_to_compare where comparison_value = MYTABLE_TEMP.column16)
and not exists (select 1 from values_to_compare where comparison_value = MYTABLE_TEMP.column17)
and not exists (select 1 from values_to_compare where comparison_value = MYTABLE_TEMP.column18)
...
...
and not exists (select 1 from values_to_compare where comparison_value = MYTABLE_TEMP.column30);
See if that helps.
I want to query a hugeblob attribute in a table. I have tried the query below, but it doesn't return any data.
select DBMS_LOB.substr(mydata, 1000,1) from mytable;
Is there any other way to do this?
DBMS_LOB.substr() is the right function to use. Ensure that there is data in the column.
Example usage:
-- create table
CREATE TABLE myTable (
id INTEGER PRIMARY KEY,
blob_column BLOB
);
-- insert couple of rows
insert into myTable values(1,utl_raw.cast_to_raw('a long data item here'));
insert into myTable values(2,null);
-- select rows
select id, blob_column from myTable;
ID BLOB_COLUMN
1 (BLOB)
2 null
-- select rows
select id, DBMS_LOB.substr(blob_column, 1000,1) from myTable;
ID DBMS_LOB.SUBSTR(BLOB_COLUMN,1000,1)
1 61206C6F6E672064617461206974656D2068657265
2 null
-- select rows
select id, UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.substr(blob_column,1000,1)) from myTable;
ID UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(BLOB_COLUMN,1000,1))
1 a long data item here
2 null
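If the result above is empty, it may simply be that the rows are NULL; one quick check is DBMS_LOB.getlength (a sketch against the example table above):
-- returns the LOB length in bytes, or null for null rows
select id, DBMS_LOB.getlength(blob_column) from myTable;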
Is it possible to run a select query, check if a row exists, and then insert some values? I would like to do that in one query. I was thinking of SELECT .. CASE .. THEN, for example:
SELECT user_id, CASE when user_id > 0 then (INSERT INTO another_table ...) ELSE return 0 END
FROM users WHERE user_id = 10
Right now I do that with 2 queries: first the SELECT, and then the INSERT (if the first query returns something).
Thanks!
In general the construct is:
INSERT INTO another_table
SELECT value1,value2..etc
where exists (SELECT user_id FROM users WHERE user_id = 10)
or in this particular case:
INSERT INTO another_table
SELECT value1,value2..etc
FROM users WHERE user_id = 10
If there is no such user, no rows will be selected, and so none will be inserted.
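A concrete sketch of the second form (another_table and its columns here are assumptions, purely for illustration):
-- hypothetical target table and values
INSERT INTO another_table (user_id, note)
SELECT user_id, 'user 10 exists'
FROM users
WHERE user_id = 10;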
A single row in a table has a column with an integer value >= 1 and must be selected however many times the column says. So if the column had '2', I'd like the select query to return the single-row 2 times.
How can this be accomplished?
Don't know why you would want to do such a thing, but...
CREATE TABLE testy (a int,b text);
INSERT INTO testy VALUES (3,'test');
SELECT testy.*,generate_series(1,a) from testy; --returns 3 rows
You could make a table that is just full of numbers, like this:
CREATE TABLE numbers
(
num INT NOT NULL
, CONSTRAINT numbers_pk PRIMARY KEY (num)
);
and populate it with as many numbers as you need, starting from one:
INSERT INTO numbers VALUES(1);
INSERT INTO numbers VALUES(2);
INSERT INTO numbers VALUES(3);
...
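If you are on PostgreSQL (as the other answers here assume), you can also populate it in one statement instead of row by row; a minimal sketch, assuming you need numbers up to 1000:
INSERT INTO numbers (num) SELECT generate_series(1, 1000);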
Then, if you had a table "mydata" that had to repeat based on the column "repeat_count", you would query it like so:
SELECT mydata.*
FROM mydata
JOIN numbers
ON numbers.num <= mydata.repeat_count
WHERE ...
Of course you need to know the maximum repeat count up front, and have your numbers table go that high.
No idea why you would want to do this though. Care to share?
You can do it with a recursive query; check out the examples in the PostgreSQL docs.
Something like:
WITH RECURSIVE t(cnt, id, field2, field3) AS (
SELECT 1, id, field2, field3
FROM foo
UNION ALL
SELECT t.cnt+1, t.id, t.field2, t.field3
FROM t, foo f
WHERE t.id = f.id and t.cnt < f.repeat_cnt
)
SELECT id, field2, field3 FROM t;
The simplest way is a simple select, like this:
SELECT generate_series(1,{xTimes}), a.field1, a.field2 FROM my_table a;
I was wondering if it is possible to move all rows of data from one table to another, that match a certain query?
For example, I need to move all table rows from Table1 to Table2 where their username = 'X' and password = 'X', so that they will no longer appear in Table1.
I'm using SQL Server 2008 Management Studio.
Should be possible using two statements within one transaction, an insert and a delete:
BEGIN TRANSACTION;
INSERT INTO Table2 (<columns>)
SELECT <columns>
FROM Table1
WHERE <condition>;
DELETE FROM Table1
WHERE <condition>;
COMMIT;
This is the simplest form. If you have to worry about new matching records being inserted into Table1 between the two statements, you can add an AND EXISTS check against Table2 to the DELETE, as sketched below.
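A sketch of that guard (the <key> column is a placeholder for whatever uniquely identifies a row in both tables):
DELETE FROM Table1
WHERE <condition>
AND EXISTS (SELECT 1 FROM Table2 WHERE Table2.<key> = Table1.<key>);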
This is an ancient post, sorry, but I only came across it now and I wanted to give my solution to whoever might stumble upon this one day.
As some have mentioned, performing an INSERT and then a DELETE might lead to integrity issues, so perhaps a way to get around it, and to perform everything neatly in a single statement, is to take advantage of the deleted pseudo-table exposed by the OUTPUT clause.
DELETE FROM [source]
OUTPUT [deleted].<column_list>
INTO [destination] (<column_list>)
All these answers run the same query for the INSERT and DELETE. As mentioned previously, this risks the DELETE picking up records inserted between statements and could be slow if the query is complex (although clever engines "should" make the second call fast).
The correct way (assuming the INSERT is into a fresh table) is to do the DELETE against table1 using the key field of table2.
The delete should be:
DELETE FROM tbl_OldTableName WHERE id in (SELECT id FROM tbl_NewTableName)
Excuse my syntax, I'm jumping between engines but you get the idea.
A cleaner representation of what some other answers have hinted at:
DELETE sourceTable
OUTPUT DELETED.*
INTO destTable (Comma, separated, list, of, columns)
WHERE <conditions (if any)>
Yes it is. First INSERT + SELECT and then DELETE the originals.
INSERT INTO Table2 (UserName,Password)
SELECT UserName,Password FROM Table1 WHERE UserName='X' AND Password='X'
Then delete the originals:
DELETE FROM Table1 WHERE UserName='X' AND Password='X'
You may want to preserve UserID or some other primary key; you can use IDENTITY_INSERT to keep the key values (sketched below).
See SET IDENTITY_INSERT on MSDN for more.
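A sketch of that, assuming UserID is the identity column on Table2 (note that IDENTITY_INSERT requires an explicit column list):
SET IDENTITY_INSERT Table2 ON;
INSERT INTO Table2 (UserID, UserName, Password)
SELECT UserID, UserName, Password
FROM Table1 WHERE UserName = 'X' AND Password = 'X';
SET IDENTITY_INSERT Table2 OFF;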
You should be able to, with a subquery in the INSERT statement.
INSERT INTO table2(column1, column2) SELECT column1, column2 FROM table1 WHERE ...;
followed by deleting from table1.
Remember to run it as a single transaction so that if anything goes wrong you can roll the entire operation back.
Use this single SQL statement; it is safe and needs no commit/rollback across multiple statements.
INSERT Table2 (
username,password
) SELECT username,password
FROM (
DELETE Table1
OUTPUT
DELETED.username,
DELETED.password
WHERE username = 'X' and password = 'X'
) AS RowsToMove ;
This works on SQL Server; make the appropriate changes for MySQL.
Try this
INSERT INTO TABLE2 (Cols...) SELECT Cols... FROM TABLE1 WHERE Criteria
Then
DELETE FROM TABLE1 WHERE Criteria
You could try this:
SELECT * INTO tbl_NewTableName
FROM tbl_OldTableName
WHERE Condition1=@Condition1Value
Then run a simple delete:
DELETE FROM tbl_OldTableName
WHERE Condition1=@Condition1Value
You may use "logical partitioning" to switch data between tables: by updating the partition column, the data will automatically move to the other table.
Here is a sample:
CREATE TABLE TBL_Part1
(id INT NOT NULL,
val VARCHAR(10) NULL,
PartitionColumn VARCHAR(10) CONSTRAINT CK_Part1 CHECK(PartitionColumn = 'TBL_Part1'),
CONSTRAINT TBL_Part1_PK PRIMARY KEY(PartitionColumn, id)
);
CREATE TABLE TBL_Part2
(id INT NOT NULL,
val VARCHAR(10) NULL,
PartitionColumn VARCHAR(10) CONSTRAINT CK_Part2 CHECK(PartitionColumn = 'TBL_Part2'),
CONSTRAINT TBL_Part2_PK PRIMARY KEY(PartitionColumn, id)
);
GO
CREATE VIEW TBL(id, val, PartitionColumn)
WITH SCHEMABINDING
AS
SELECT id, val, PartitionColumn FROM dbo.TBL_Part1
UNION ALL
SELECT id, val, PartitionColumn FROM dbo.TBL_Part2;
GO
--Insert sample to TBL ( will be inserted to Part1 )
INSERT INTO TBL
VALUES(1, 'rec1', 'TBL_Part1');
INSERT INTO TBL
VALUES(2, 'rec2', 'TBL_Part1');
GO
--Query sub table to verify
SELECT * FROM TBL_Part1
GO
--move the data to table TBL_Part2 by Logical Partition switching technique
UPDATE TBL
SET
PartitionColumn = 'TBL_Part2';
GO
--Query sub table to verify
SELECT * FROM TBL_Part2
Here is how to do it with a single statement (PostgreSQL):
WITH deleted_rows AS (
DELETE FROM source_table WHERE id = 1
RETURNING *
)
INSERT INTO destination_table
SELECT * FROM deleted_rows;
EXAMPLE:
postgres=# select * from test1 ;
id | name
----+--------
1 | yogesh
2 | Raunak
3 | Varun
(3 rows)
postgres=# select * from test2;
id | name
----+------
(0 rows)
postgres=# WITH deleted_rows AS (
postgres(# DELETE FROM test1 WHERE id = 1
postgres(# RETURNING *
postgres(# )
postgres-# INSERT INTO test2
postgres-# SELECT * FROM deleted_rows;
INSERT 0 1
postgres=# select * from test2;
id | name
----+--------
1 | yogesh
(1 row)
postgres=# select * from test1;
id | name
----+--------
2 | Raunak
3 | Varun
If the two tables use the same ID or have a common UNIQUE key:
1) Insert the selected records into table2
INSERT INTO table2 SELECT * FROM table1 WHERE (conditions)
2) Delete the selected records from table1 if present in table2
DELETE A FROM table1 AS A INNER JOIN table2 AS B ON A.ID = B.ID WHERE (A.conditions)
This will create a new table and copy all the data from the old table into it:
SELECT * INTO event_log_temp FROM event_log
Then you can clear the old table's data:
DELETE FROM event_log
For some scenarios, it might be easiest to script out Table1, rename the existing Table1 to Table2, and run the script to recreate Table1.