Embed a delete in a select statement in postgres - sql

I am trying to embed a DELETE operation in a SELECT in Postgres. I tried the following command, but it's not working:
select * from tasks where title ilike '%
Delete from tasks where title ilike `%Re%` returning (
Select title from tasks where title ilike `%smoke%`)%'
where the actual query in TypeScript looks like:
select * from tasks where title ilike '%${filter}%'
I'm trying to fit the
Delete from tasks where title ilike '%Re%' returning (
Select title from tasks where title ilike '%smoke%')
in the place of '%${filter}%'.
I am getting errors all around and am not able to perform the operation.

You're trying to execute an invalid SQL statement. If you want to return rows from the delete statement, you can use a CTE:
WITH del AS (
-- it will be better to get the task id here
DELETE FROM tasks WHERE title ILIKE '%Re%' RETURNING title
)
SELECT title FROM del WHERE title ILIKE '%smoke%'
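If the goal is instead to delete the '%Re%' rows and, in the same statement, select the remaining '%smoke%' rows, a data-modifying CTE can do both at once. A rough sketch, assuming tasks has an id primary key (as the comment above already hints); the outer SELECT still sees the pre-delete snapshot, so the deleted ids are excluded explicitly:
WITH del AS (
    DELETE FROM tasks
    WHERE title ILIKE '%Re%'
    RETURNING id
)
SELECT t.*
FROM tasks t
WHERE t.title ILIKE '%smoke%'
  AND t.id NOT IN (SELECT id FROM del);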

You can use a function and LATERAL to achieve that.
CREATE TEMP TABLE tasks (
    title text
  , misc numeric DEFAULT random()::numeric
);
INSERT INTO tasks (title)
VALUES ('foo bar');
INSERT INTO tasks (title)
VALUES ('Re')
     , ('Re_1')
     , ('smoke_1')
     , ('smoke_2')
     , ('smoke_3')
     , ('foo bar')
     , ('brown');
A simple function that uses a case-insensitive regex pattern match to delete rows:
CREATE OR REPLACE FUNCTION delete_tasks_title_pattern (x text)
RETURNS void
AS $func$
DELETE FROM tasks
WHERE title ~* x;
$func$
LANGUAGE sql
STRICT;
Then the following query will do the select (title ILIKE '%smoke%') and the delete (title ~* 'foo') in one statement:
SELECT sub.*
FROM (
    SELECT *
    FROM tasks
    WHERE title ILIKE '%smoke%'
) sub
, LATERAL delete_tasks_title_pattern('foo');

If I understand your question correctly, you are trying to exploit SQL injection by replacing '%${filter}%' with some sub-query that deletes rows.
You can't have a sub-query that runs a DML statement, but you can use the well-known Little Bobby Tables approach to inject a DELETE statement.
Assuming that ${filter} will be replaced at runtime with the value provided, and you pass the following string as the filter parameter:
';delete from tasks where true or title = '
then the query is turned into two queries:
select * from tasks where title ilike '%';
delete from tasks where true or title = '%'

Related

Checking if a field contains multiple strings in SQL Server

I am working on a SQL database which will provide data to a grid. The grid will enable filtering, sorting and paging, but there is also a strict requirement that users can enter free text into a text input above the grid, for example
'Engine 1001 Requi', and that the result will contain only rows in which some columns contain all the pieces of the text. So one column may contain Engine, another column may contain 1001, and some other will contain Requi.
I created a technical column (let's call it myTechnicalColumn) in the table (let's call it myTable) which will be updated each time someone inserts or updates a row, and it will contain all the values of all the columns combined and separated with spaces.
Now, to use it with Entity Framework, I decided to use a table-valued function which accepts one parameter @searchQuery and handles it like this:
CREATE FUNCTION myFunctionName(@searchText NVARCHAR(MAX))
RETURNS @Result TABLE
( ... here come columns )
AS
BEGIN
    DECLARE @searchToken TokenType
    INSERT INTO @searchToken(token) SELECT value FROM STRING_SPLIT(@searchText, ' ')
    DECLARE @searchTextLength INT
    SET @searchTextLength = (SELECT COUNT(*) FROM @searchToken)
    INSERT INTO @Result
    SELECT
        ... here come columns
    FROM myTable
    WHERE (SELECT COUNT(*) FROM @searchToken WHERE CHARINDEX(token, myTechnicalColumn) > 0) = @searchTextLength
    RETURN;
END
Of course the solution works fine, but it's kind of slow. Any hints on how to improve its efficiency?
You can use an inline Table Valued Function, which should be quite a lot faster.
This would be a direct translation of your current code
CREATE FUNCTION myFunctionName(@searchText NVARCHAR(MAX))
RETURNS TABLE
AS RETURN
(
    WITH searchText AS (
        SELECT token
        FROM STRING_SPLIT(@searchText, ' ') s(token)
    )
    SELECT
        ... here come columns
    FROM myTable t
    WHERE (
        SELECT COUNT(*)
        FROM searchText s
        WHERE CHARINDEX(s.token, t.myTechnicalColumn) > 0
    ) = (SELECT COUNT(*) FROM searchText)
);
GO
You are using a form of query called Relational Division Without Remainder and there are other ways to cut this cake:
CREATE FUNCTION myFunctionName(@searchText NVARCHAR(MAX))
RETURNS TABLE
AS RETURN
(
    WITH searchText AS (
        SELECT token
        FROM STRING_SPLIT(@searchText, ' ') s(token)
    )
    SELECT
        ... here come columns
    FROM myTable t
    WHERE NOT EXISTS (
        SELECT 1
        FROM searchText s
        WHERE CHARINDEX(s.token, t.myTechnicalColumn) = 0
    )
);
GO
This may be faster or slower depending on a number of factors; you need to test.
Since there is no data to test with, I am not sure if the following will solve your issue:
-- Replace the last INSERT portion
INSERT INTO @Result
SELECT
    ... here come columns
FROM myTable T
JOIN @searchToken S ON CHARINDEX(S.token, T.myTechnicalColumn) > 0
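Note that a plain JOIN like this returns a row whenever any token matches, so to keep the original "all tokens must match" semantics you would still need to group and count. A rough sketch of that variant, assuming myTable has a primary key column id (a hypothetical name here) and reusing @searchToken and @searchTextLength from the question:
-- Sketch only: keep the ids of rows matched by every token, then join back for the full rows.
INSERT INTO @Result
SELECT M.*   -- ... here come columns
FROM myTable M
JOIN (
    SELECT T.id
    FROM myTable T
    JOIN @searchToken S ON CHARINDEX(S.token, T.myTechnicalColumn) > 0
    GROUP BY T.id
    HAVING COUNT(DISTINCT S.token) = @searchTextLength
) hits ON hits.id = M.id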

PostgreSQL - check if column exists and nest condition statement

The transaction column names in the code below are dynamically generated (which means that sometimes a particular name/column doesn't exist). This select finishes successfully only when every one of those names exists; if not, I get an error like this (example):
Error(s), warning(s): 42703: column "TransactionA" does not exist
SELECT
*,
((CASE WHEN "TransactionA" IS NULL THEN 0 ELSE "TransactionA" END) -
(CASE WHEN "TransactionB" IS NULL THEN 0 ELSE "TransactionB" END) +
(CASE WHEN "TransactionC" IS NULL THEN 0 ELSE "TransactionC" END)) AS "Account_balance"
FROM Summary ORDER BY id;
Could you please tell me how I can first check if the column exists, and then how I can nest another CASE statement or other conditional statement to make it work correctly?
You can build any query dynamically with information from the Postgres catalog tables. pg_attribute in your case. Alternatively, use the information schema. See:
Query to return output column names and data types of a query, table or view
How to check if a table exists in a given schema
Basic query to see which of the given columns exist in a given table:
SELECT attname
FROM pg_attribute a
WHERE attrelid = 'public.summary'::regclass -- tbl here
AND NOT attisdropped
AND attnum > 0
AND attname IN ('TransactionA', 'TransactionB', 'TransactionC'); -- columns here
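If you prefer the information schema over the catalog tables, a roughly equivalent check looks like this (a sketch; the information schema is more portable but a bit slower):
SELECT column_name
FROM   information_schema.columns
WHERE  table_schema = 'public'
AND    table_name   = 'summary'
AND    column_name  IN ('TransactionA', 'TransactionB', 'TransactionC');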
Building on this, you can have Postgres generate your whole query. While being at it, look up whether columns are defined NOT NULL, in which case they don't need COALESCE:
CREATE OR REPLACE FUNCTION f_build_query(_tbl regclass, _columns json)
RETURNS text AS
$func$
DECLARE
_expr text;
BEGIN
SELECT INTO _expr
string_agg (op || CASE WHEN attnotnull
THEN quote_ident(attname)
ELSE format('COALESCE(%I, 0)', attname) END
, '')
FROM (
SELECT j->>'name' AS attname
, CASE WHEN j->>'op' = '-' THEN ' - ' ELSE ' + ' END AS op
FROM json_array_elements(_columns) j
) j
JOIN pg_attribute a USING (attname)
WHERE attrelid = _tbl
AND NOT attisdropped
AND attnum > 0;
IF NOT FOUND THEN
RAISE EXCEPTION 'No column found!'; -- or more info
END IF;
RETURN
'SELECT *,' || _expr || ' AS "Account_balance"
FROM ' || _tbl || '
ORDER BY id;';
END
$func$ LANGUAGE plpgsql;
The table itself is parameterized, too. May or may not be useful for you. The only assumption is that every table has an id column for the ORDER BY. Related:
Table name as a PostgreSQL function parameter
I pass column names and the associated operator as a JSON document for flexibility. Only + or - are expected as operators. Input is safely concatenated to make SQL injection impossible. About json_array_elements():
Query for element of array in JSON column
Example call:
SELECT f_build_query('summary', '[{"name":"TransactionA"}
, {"name":"TransactionB", "op": "-"}
, {"name":"TransactionC"}]');
Returns the according valid query string, like:
SELECT *, + COALESCE("TransactionA", 0) - COALESCE("TransactionB", 0) AS "Account_balance"
FROM summary
ORDER BY id;
"TransactionC" isn't there in this case. If both existing columns happen to be NOT NULL, you get instead:
SELECT *, + "TransactionA" - "TransactionB" AS "Account_balance"
FROM summary
ORDER BY id;
db<>fiddle here
You could execute the generated query in the function immediately and return result rows directly. But that's hard as your return type is a combination of a table rows (unknown until execution time?) plus additional column, and SQL demands to know the return type in advance. For just id and sum (stable return type), it would be easy ...
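If returning the query text is acceptable, one convenient way to run it is psql's \gexec, which executes every result cell as its own statement. A sketch of that workflow, reusing the example call from above:
-- In psql: build the statement, then execute the returned text with \gexec
SELECT f_build_query('summary', '[{"name":"TransactionA"}
                                , {"name":"TransactionB", "op": "-"}
                                , {"name":"TransactionC"}]') \gexec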
It's odd that your CaMeL-case column names are double-quoted, but the CaMeL-case table name is not. By mistake? See:
Are PostgreSQL column names case-sensitive?
How to pass column names containing single quotes?
Addressing additional question from comment.
If someone used column names containing single quotes by mistake:
CREATE TABLE madness (
id int PRIMARY KEY
, "'TransactionA'" numeric NOT NULL -- you wouldn't do that ...
, "'TransactionC'" numeric NOT NULL
);
For the above function, the JSON value is passed as quoted string literal. If that string is enclosed in single-quotes, escape contained single-quotes by doubling them up. This is required on top of valid JSON format:
SELECT f_build_query('madness', '[{"name":"''TransactionA''"}
, {"name":"TransactionB", "op": "-"}
, {"name":"TransactionC"}]'); --
("''TransactionA''" finds a match, "TransactionC" does not.)
Or use dollar quoting instead:
SELECT f_build_query('madness', $$[{"name":"'TransactionA'"}
, {"name":"TransactionB", "op": "-"}
, {"name":"TransactionC"}]$$);
db<>fiddle here with added examples
See:
Insert text with single quotes in PostgreSQL
Assuming that id is a unique id in summary, you can use the following trick:
SELECT s.*,
(COALESCE("TransactionA", 0) -
COALESCE("TransactionB", 0) +
COALESCE("TransactionC", 0)
) AS Account_balance
FROM (SELECT id, . . . -- All columns except the TransactionX columns
FROM (SELECT s.*,
(SELECT TransactionA FROM summary s2 WHERE s2.id = s.id) as TransactionA,
(SELECT TransactionB FROM summary s2 WHERE s2.id = s.id) as TransactionB,
(SELECT TransactionC FROM summary s2 WHERE s2.id = s.id) as TransactionC
FROM Summary s
) s CROSS JOIN
(VALUES (NULL, NULL, NULL)) v(TransactionA, TransactionB, TransactionC)
) s
ORDER BY s.id;
The trick here is that the correlated subqueries do not qualify TransactionA. If the value is defined for summary, then that will be used. If not, it will come from the VALUES() clause in the outer query.
This is a bit of a hack, but it can be handy under certain circumstances.
Check this example:
UPDATE yourtable1
SET yourcolumn = (
    CASE
        WHEN setting.value IS NOT NULL
        THEN CASE
                 WHEN replace(setting.value, '"', '') <> ''
                 THEN replace(setting.value, '"', '')
                 ELSE NULL
             END
        ELSE NULL
    END
)::TIME
FROM (SELECT value FROM yourtable2 WHERE key = 'ABC') AS setting;

SQL joining huge tables by excluding just one column in select statement [duplicate]

I'm trying to use a select statement to get all of the columns from a certain MySQL table except one. Is there a simple way to do this?
EDIT: There are 53 columns in this table (NOT MY DESIGN)
Actually there is a way; you need to have permissions for doing this, of course...
SET @sql = CONCAT('SELECT ', (SELECT REPLACE(GROUP_CONCAT(COLUMN_NAME), '<columns_to_omit>,', '')
                              FROM INFORMATION_SCHEMA.COLUMNS
                              WHERE TABLE_NAME = '<table>' AND TABLE_SCHEMA = '<database>'),
                  ' FROM <table>');
PREPARE stmt1 FROM @sql;
EXECUTE stmt1;
Replacing <table>, <database> and <columns_to_omit>
(Do not try this on a big table; the result might be... surprising!)
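As a concrete sketch (table, column and database names here are made up), omitting a password column from a users table in database mydb, and cleaning up the prepared statement afterwards; the caveats discussed further down still apply (for example, the trailing-comma trick fails if the omitted column happens to be last):
SET @sql = CONCAT('SELECT ',
                  (SELECT REPLACE(GROUP_CONCAT(COLUMN_NAME), 'password,', '')
                   FROM INFORMATION_SCHEMA.COLUMNS
                   WHERE TABLE_NAME = 'users' AND TABLE_SCHEMA = 'mydb'),
                  ' FROM users');
PREPARE stmt1 FROM @sql;
EXECUTE stmt1;
DEALLOCATE PREPARE stmt1;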
TEMPORARY TABLE
DROP TABLE IF EXISTS temp_tb;
CREATE TEMPORARY TABLE temp_tb ENGINE=MEMORY SELECT * FROM orig_tb;
ALTER TABLE temp_tb DROP col_a, DROP col_f, DROP col_z;  # MySQL
SELECT * FROM temp_tb;
The DROP syntax may vary for databases @Denis Rozhnev
Would a View work better in this case?
CREATE VIEW vwTable
as
SELECT
col1
, col2
, col3
, col..
, col53
FROM table
You can do:
SELECT column1, column2, column4 FROM table WHERE whatever
without getting column3, though perhaps you were looking for a more general solution?
If you are looking to exclude the value of a field, e.g. for security concerns / sensitive info, you can retrieve that column as null.
e.g.
SELECT *, NULL AS salary FROM users
To the best of my knowledge, there isn't. You can do something like:
SELECT col1, col2, col3, col4 FROM tbl
and manually choose the columns you want. However, if you want a lot of columns, then you might just want to do a:
SELECT * FROM tbl
and just ignore what you don't want.
In your particular case, I would suggest:
SELECT * FROM tbl
unless you only want a few columns. If you only want four columns, then:
SELECT col3, col6, col45, col52 FROM tbl
would be fine, but if you want 50 columns, then any code that makes the query would become (too?) difficult to read.
While trying the solutions by @Mahomedalid and @Junaid I found a problem. If a column name has spaces or hyphens, like check-in, then the query will fail. The simple workaround is to use backticks around column names. The modified query is below:
SET @SQL = CONCAT('SELECT ', (SELECT GROUP_CONCAT(CONCAT("`", COLUMN_NAME, "`")) FROM
INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'users' AND COLUMN_NAME NOT IN ('id')), ' FROM users');
PREPARE stmt1 FROM @SQL;
EXECUTE stmt1;
If the column that you didn't want to select had a massive amount of data in it, and you didn't want to include it due to speed issues, and you select the other columns often, I would suggest that you create a new table with the one field that you don't usually select, with a key to the original table, and remove the field from the original table. Join the tables when that extra field is actually required, as sketched below.
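A minimal sketch of that split, with made-up names (tbl keeps the columns you query often, tbl_blob holds the heavy one):
CREATE TABLE tbl_blob (
    tbl_id INT PRIMARY KEY,        -- key back to the original table
    huge_column LONGBLOB,
    FOREIGN KEY (tbl_id) REFERENCES tbl (id)
);

-- Everyday queries touch only tbl; join the heavy column in on demand:
SELECT t.*, b.huge_column
FROM tbl t
JOIN tbl_blob b ON b.tbl_id = t.id;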
You could use DESCRIBE my_table and use the results of that to generate the SELECT statement dynamically.
My main problem is the many columns I get when joining tables. While this is not the answer to your question (how to select all but certain columns from one table), I think it is worth mentioning that you can specify table.* to get all columns from a particular table, instead of just specifying *.
Here is an example of how this could be very useful:
select users.*, phone.meta_value as phone, zipcode.meta_value as zipcode
from users
left join user_meta as phone
on ( (users.user_id = phone.user_id) AND (phone.meta_key = 'phone') )
left join user_meta as zipcode
on ( (users.user_id = zipcode.user_id) AND (zipcode.meta_key = 'zipcode') )
The result is all the columns from the users table, and two additional columns which were joined from the meta table.
I liked the answer from @Mahomedalid, besides the fact noted in the comment from @Bill Karwin. The possible problem raised by @Jan Koritak is true; I faced it, but I have found a trick for it and just want to share it here for anyone facing the issue.
We can replace the REPLACE function with a WHERE clause in the sub-query of the prepared statement, like this:
Using my table and column name
SET @SQL = CONCAT('SELECT ', (SELECT GROUP_CONCAT(COLUMN_NAME) FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'users' AND COLUMN_NAME NOT IN ('id')), ' FROM users');
PREPARE stmt1 FROM @SQL;
EXECUTE stmt1;
So, this is going to exclude only the field id but not company_id
Yes, though it can be high I/O depending on the table, here is a workaround I found for it.
SELECT *
INTO #temp
FROM table
ALTER TABLE #temp DROP COLUMN column_name
SELECT *
FROM #temp
It is good practice to specify the columns that you are querying even if you query all the columns.
So I would suggest you write the name of each column in the statement (excluding the one you don't want).
SELECT
col1
, col2
, col3
, col..
, col53
FROM table
I agree with the "simple" solution of listing all the columns, but this can be burdensome, and typos can cause lots of wasted time. I use a function "getTableColumns" to retrieve the names of my columns suitable for pasting into a query. Then all I need to do is to delete those I don't want.
CREATE FUNCTION `getTableColumns`(tablename varchar(100))
RETURNS varchar(5000) CHARSET latin1
BEGIN
    DECLARE done INT DEFAULT 0;
    DECLARE res VARCHAR(5000) DEFAULT "";
    DECLARE col VARCHAR(200);
    DECLARE cur1 CURSOR FOR
        SELECT COLUMN_NAME FROM information_schema.columns
        WHERE TABLE_NAME = tablename AND TABLE_SCHEMA = "yourdatabase" ORDER BY ORDINAL_POSITION;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
    OPEN cur1;
    REPEAT
        FETCH cur1 INTO col;
        IF NOT done THEN
            SET res = CONCAT(res, IF(LENGTH(res) > 0, ",", ""), col);
        END IF;
    UNTIL done END REPEAT;
    CLOSE cur1;
    RETURN res;
END
Your result returns a comma delimited string, for example...
col1,col2,col3,col4,...col53
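A quick usage sketch (the database name is hard-coded inside the function above; the table name is passed in):
SELECT getTableColumns('my_big_table');
-- paste the returned list into your SELECT and delete the columns you don't want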
I agree that it isn't sufficient to just SELECT *: if the column you don't need, as mentioned elsewhere, is a BLOB, you don't want that overhead creeping in.
I would create a view with the required data; then you can SELECT * in comfort, if the database software supports them. Otherwise, put the huge data in another table.
At first I thought you could use regular expressions, but as I've been reading the MYSQL docs it seems you can't. If I were you I would use another language (such as PHP) to generate a list of columns you want to get, store it as a string and then use that to generate the SQL.
Based on @Mahomedalid's answer, I have done some improvements to support "select all columns except some in MySQL":
SET @database = 'database_name';
SET @tablename = 'table_name';
SET @cols2delete = 'col1,col2,col3';
SET @sql = CONCAT(
    'SELECT ',
    (
        SELECT GROUP_CONCAT( IF(FIND_IN_SET(COLUMN_NAME, @cols2delete), NULL, COLUMN_NAME) )
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = @tablename AND TABLE_SCHEMA = @database
    ),
    ' FROM ',
    @tablename);
SELECT @sql;
If you do have a lot of columns, use this SQL to change group_concat_max_len:
SET @@group_concat_max_len = 2048;
I agree with @Mahomedalid's answer, but I didn't want to do something like a prepared statement and I didn't want to type all the fields, so what I had was a silly solution.
Go to the table in phpMyAdmin -> SQL -> SELECT; it dumps the query: copy, replace and done! :)
While I agree with Thomas' answer (+1 ;)), I'd like to add the caveat that I'll assume the column that you don't want contains hardly any data. If it contains enormous amounts of text, xml or binary blobs, then take the time to select each column individually. Your performance will suffer otherwise. Cheers!
Just do
SELECT * FROM table WHERE whatever
Then drop the column in your favourite programming language: PHP
while (($data = mysql_fetch_array($result, MYSQL_ASSOC)) !== FALSE) {
unset($data["id"]);
foreach ($data as $k => $v) {
echo"$v,";
}
}
The answer posted by Mahomedalid has a small problem:
Inside the REPLACE function, the code was replacing "<columns_to_delete>," with "". This replacement has a problem if the field to replace is the last one in the concatenated string, because the last one doesn't have a trailing comma "," and so is not removed from the string.
My proposal:
SET @sql = CONCAT('SELECT ', (SELECT REPLACE(GROUP_CONCAT(COLUMN_NAME),
                                             '<columns_to_delete>', '\'FIELD_REMOVED\'')
                              FROM INFORMATION_SCHEMA.COLUMNS
                              WHERE TABLE_NAME = '<table>'
                              AND TABLE_SCHEMA = '<database>'), ' FROM <table>');
Replacing <table>, <database> and <columns_to_delete>.
The removed column is replaced by the string "FIELD_REMOVED". In my case this works because I was trying to save memory (the field I was removing is a BLOB of around 1 MB).
You can use SQL to generate SQL if you like and evaluate the SQL it produces. This is a general solution as it extracts the column names from the information schema. Here is an example from the Unix command line.
Substituting
MYSQL with your mysql command
TABLE with the table name
EXCLUDEDFIELD with excluded field name
echo $(echo 'select concat("select ", group_concat(column_name) , " from TABLE") from information_schema.columns where table_name="TABLE" and column_name != "EXCLUDEDFIELD" group by "t"' | MYSQL | tail -n 1) | MYSQL
You will really only need to extract the column names in this way once, to construct the column list excluding that column, and then just use the query you have constructed.
So something like:
column_list=$(echo 'select group_concat(column_name) from information_schema.columns where table_name="TABLE" and column_name != "EXCLUDEDFIELD" group by "t"' | MYSQL | tail -n 1)
Now you can reuse the $column_list string in queries you construct.
I wanted this too so I created a function instead.
public function getColsExcept($table, $remove) {
    $res = mysql_query("SHOW COLUMNS FROM $table");
    while ($arr = mysql_fetch_assoc($res)) {
        $cols[] = $arr['Field'];
    }
    if (is_array($remove)) {
        $newCols = array_diff($cols, $remove);
        return "`" . implode("`,`", $newCols) . "`";
    } else {
        $length = count($cols);
        for ($i = 0; $i < $length; $i++) {
            if ($cols[$i] == $remove)
                unset($cols[$i]);
        }
        return "`" . implode("`,`", $cols) . "`";
    }
}
So how it works is that you enter the table, then a column you don't want, or several of them as an array: array("id","name","whatevercolumn")
So in select you could use it like this:
mysql_query("SELECT ".$db->getColsExcept('table',array('id','bigtextcolumn'))." FROM table");
or
mysql_query("SELECT ".$db->getColsExcept('table','bigtextcolumn')." FROM table");
Maybe I have a solution to the discrepancy pointed out by Jan Koritak:
SELECT CONCAT('SELECT ',
( SELECT GROUP_CONCAT(t.col)
FROM
(
SELECT CASE
WHEN COLUMN_NAME = 'eid' THEN NULL
ELSE COLUMN_NAME
END AS col
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'employee' AND TABLE_SCHEMA = 'test'
) t
WHERE t.col IS NOT NULL) ,
' FROM employee' );
Table :
SELECT table_name,column_name
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'employee' AND TABLE_SCHEMA = 'test'
================================
table_name column_name
employee eid
employee name_eid
employee sal
================================
Query Result:
'SELECT name_eid,sal FROM employee'
I use this workaround, although it may be "off topic": using MySQL Workbench and the query builder.
Open the columns view
Shift-select all the columns you want in your query (in your case all but one, which is what I do)
Right-click and select Send to SQL Editor -> name short.
Now you have the list and you can then copy paste the query to where ever.
If it's always the same one column, then you can create a view that doesn't have it in it.
Otherwise, no I don't think so.
I would like to add another point of view in order to solve this problem, especially if you have a small number of columns to remove.
You could use a DB tool like MySQL Workbench in order to generate the select statement for you, so you just have to manually remove those columns for the generated statement and copy it to your SQL script.
In MySQL Workbench the way to generate it is:
Right click on the table -> send to Sql Editor -> Select All Statement.
The accepted answer has several shortcomings.
It fails where the table or column names require backticks
It fails if the column you want to omit is last in the list
It requires listing the table name twice (once for the select and another for the query text) which is redundant and unnecessary
It can potentially return column names in the wrong order
All of these issues can be overcome by simply including backticks in the SEPARATOR for your GROUP_CONCAT and using a WHERE condition instead of REPLACE(). For my purposes (and I imagine many others') I wanted the column names returned in the same order that they appear in the table itself. To achieve this, here we use an explicit ORDER BY clause inside of the GROUP_CONCAT() function:
SELECT CONCAT(
'SELECT `',
GROUP_CONCAT(COLUMN_NAME ORDER BY `ORDINAL_POSITION` SEPARATOR '`,`'),
'` FROM `',
`TABLE_SCHEMA`,
'`.`',
TABLE_NAME,
'`;'
)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE `TABLE_SCHEMA` = 'my_database'
AND `TABLE_NAME` = 'my_table'
AND `COLUMN_NAME` != 'column_to_omit';
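The statement above only prints the query text. To actually run it from SQL, you can feed it into a prepared statement, roughly like this (a sketch reusing the same WHERE conditions, with the schema and table hard-coded in the literal so the query stays valid under ONLY_FULL_GROUP_BY):
SET @sql = (SELECT CONCAT('SELECT `',
                          GROUP_CONCAT(COLUMN_NAME ORDER BY ORDINAL_POSITION SEPARATOR '`,`'),
                          '` FROM `my_database`.`my_table`')
            FROM INFORMATION_SCHEMA.COLUMNS
            WHERE TABLE_SCHEMA = 'my_database'
              AND TABLE_NAME   = 'my_table'
              AND COLUMN_NAME != 'column_to_omit');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;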
I have a suggestion but not a solution.
If some of your columns have larger data sets, then you should try the following:
SELECT *, LEFT(col1, 0) AS col1, LEFT(col2, 0) as col2 FROM table
If you use MySQL Workbench you can right-click your table, click Send to SQL Editor and then Select All Statement. This will create a statement where all fields are listed, like this:
SELECT `purchase_history`.`id`,
`purchase_history`.`user_id`,
`purchase_history`.`deleted_at`
FROM `fs_normal_run_2`.`purchase_history`;
SELECT * FROM fs_normal_run_2.purchase_history;
Now you can just remove the ones that you don't want.

How to minimize sql select?

I have an array of words like this one:
$word1 = array('test1','test2','test3','test4','test5',...,'test20');
I need to search in my table every row that has at least one of these words in the text column. So far, I have this sql query:
SELECT * FROM TABLE WHERE text LIKE '$word1[0]' OR text LIKE '$word1[1]'
OR ... OR text LIKE '$word1[20]'
But I see that this design isn't very efficient. Is there any way I can shorten this query, in such a way that I don't need to write out every word in the where clause?
Example SELECT * FROM TABLE WHERE text IN ($word1)
P.S.: this is an example of what I'm looking for, not an actual query I can run.
If you use a table variable instead of a list to store your words then you can use something like:
DECLARE @T TABLE (Word VARCHAR(255) NOT NULL);
INSERT @T (Word)
VALUES ('test1'), ('test2'), ('test3'), ('test4'), ('test5'), ('test20');

SELECT *
FROM TABLE t
WHERE EXISTS
    ( SELECT 1
      FROM @T
      WHERE t.Text LIKE '%' + Word + '%'
    );
You can also create a table type to store this, then you can pass this as a parameter to a stored procedure if required:
CREATE TYPE dbo.StringList AS TABLE (Value VARCHAR(MAX) NOT NULL);
GO
CREATE PROCEDURE dbo.YourProcedure @Words dbo.StringList READONLY
AS
SELECT *
FROM TABLE t
WHERE EXISTS
    ( SELECT 1
      FROM @Words w
      WHERE t.Text LIKE '%' + w.Value + '%'
    );
GO
DECLARE @T dbo.StringList;
INSERT @T (Value)
VALUES ('test1'), ('test2'), ('test3'), ('test4'), ('test5'), ('test20');
EXECUTE dbo.YourProcedure @T;
For more on this see table-valued Parameters on MSDN.
EDIT
I may have misunderstood your requirements, as you used LIKE but with no wildcard operator, in which case you can just use IN; however, I would still recommend using a table to store your values:
DECLARE @T TABLE (Word VARCHAR(255) NOT NULL);
INSERT @T (Word)
VALUES ('test1'), ('test2'), ('test3'), ('test4'), ('test5'), ('test20');

SELECT *
FROM TABLE t
WHERE t.Text IN (SELECT Word FROM @T);
You can use a SELECT like this without declaring an array:
SELECT * FROM TABLE WHERE text IN ('test1', 'test2', 'test3', 'test4', 'test5')
One solution could be:
Create a table in the database with the searched words in a column called word (for example), using wildcards if you need them.
Then use this kind of request:
SELECT *
FROM TABLE, FILTER_TABLE
WHERE TABLE.text LIKE FILTER_TABLE.word
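A minimal sketch of that setup (using articles in place of the question's placeholder TABLE, and storing the words with the wildcards already included):
CREATE TABLE FILTER_TABLE (word VARCHAR(255) NOT NULL);
INSERT INTO FILTER_TABLE (word)
VALUES ('%test1%'), ('%test2%'), ('%test3%');

-- DISTINCT avoids duplicate rows when several words match the same row
SELECT DISTINCT a.*
FROM articles a
JOIN FILTER_TABLE f ON a.text LIKE f.word;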
Although I don't have access to SQL Server 2008 at the moment and SQLfiddle seems sick, it would seem you can use a table value constructor to simplify the expression somewhat;
SELECT * FROM test
JOIN (SELECT w FROM (VALUES('word1'), ('word2'), ('word3'), ('word4')) AS a(w)) a
ON test.text LIKE '%'+a.w+'%';
...which will search the text column in the test table for the words listed as values. If you don't want duplicates of rows where multiple words match, you can just add a DISTINCT to the select.
Note though that you may want to look into full-text indexing if you're doing extensive searches; a LIKE query to find words in a string in this way will not use any indexes, and will most likely be quite slow unless the data is already in memory.
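For reference, a rough sketch of the full-text route on SQL Server, assuming the full-text feature is installed and the test table has a unique key index named pk_test (names here are illustrative):
CREATE FULLTEXT CATALOG ft_catalog AS DEFAULT;
CREATE FULLTEXT INDEX ON test(text) KEY INDEX pk_test;

-- CONTAINS searches for any of the listed words and can use the full-text index
SELECT *
FROM test
WHERE CONTAINS(text, '"test1" OR "test2" OR "test3"');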

postgresql: select returning ARRAY

I have a table as:
CREATE TABLE tbl_temp (id serial, friend_id int, name varchar(32));
I wish I could run the following SQL:
PREPARE x AS SELECT {$1,friend_id} FROM tbl_temp WHERE id = ANY($2);
EXECUTE x(33, ARRAY[1,2,3,4])
I am basically looking for a statement that will return me an array of two ints, the first of which will be user input and the second will come from a table column like friend_id.
Is it really possible in PostgreSQL?
The results from SELECT ($1, friend_id) FROM tbl_temp; executed as EXECUTE x(44); look like this:
row
--------
(44,1)
(44,2)
(44,3)
(3 rows)
If I use PQgetvalue(PGres, 0, 0), what will the result look like: {44,45} or (44,45)?
I think you want to use the array constructor syntax:
SELECT ARRAY[$1, friend_id] FROM tbl_temp WHERE id = ANY($2)
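A sketch of how that fits the prepared statement from the question; note that the text form PQgetvalue returns for an array should be the curly-brace form, e.g. {33,1}, whereas the row expression in the question yields the parenthesized form:
PREPARE x (int, int[]) AS
    SELECT ARRAY[$1, friend_id] FROM tbl_temp WHERE id = ANY($2);

EXECUTE x(33, ARRAY[1,2,3,4]);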
I'm not sure I understand what you want...
To return an array, do this:
SELECT (44, "friend_id") FROM "tbl_temp" WHERE id = ANY(ARRAY[1,2,3,4]);