I work with QGIS and PostgreSQL with PostGIS. I need help with dynamic queries for PostgreSQL.
Information is structured in tables that contain votes for parties, and other types of information like geographic area or election date.
Some columns contain values that have to be split among several parties. For example, we can have a column named "PartyA_PartyB" with a value of 10, and it should be split 5 votes to PartyA and 5 votes to PartyB. Additionally, we have independent (separate) columns for PartyA and PartyB, so we need to compute a column where we allocate the original PartyA + PartyA_PartyB/2.
So, for example, given the tables "election_results" and "parties":
create table election_results ("Country" text, "PartyA" text, "PartyB" text, "PartyC" text, "PartyA_PartyB" text);
insert into election_results
VALUES
('Argentina', 100, 10, 20, 2),
('Uruguay', 3, 5, 1, 0),
('Chile', 40, 200, 50, 10)
;
create table parties (party text);
insert into parties
VALUES
('PartyA'),
('PartyB'),
('PartyC'),
('PartyD'),
('PartyE')
;
I need to create a new table with a column where 'new' PartyA = PartyA + PartyA_PartyB/2 and 'new' PartyB = PartyB + PartyA_PartyB/2
So with the previous data the desired result is:
Country   | PartyA | PartyB | PartyC
----------|--------|--------|-------
Argentina |    101 |     11 |     20
Uruguay   |      3 |      5 |      1
Chile     |     45 |    205 |     50
In all cases the special character that separates the names to be split is '_'.
We can have n parties in a column name (for example PartyA_PartyB_PartyD_PartyE). Votes have to be split among the n parties.
With my limited understanding I think iterating over the columns could be a solution: look for the '_' character and recalculate.
Note: Please store your values not as text but as a numeric type.
demo: db<>fiddle (2 joined columns)
demo: db<>fiddle (additional 3 joined columns)
Create your new table (note that this example reuses the name parties, so drop the question's single-column parties table first):
CREATE TABLE parties (
"Country" text,
"PartyA" numeric,
"PartyB" numeric,
"PartyC" numeric
);
Copy values for the "single" columns:
INSERT INTO parties
SELECT "Country", "PartyA", "PartyB", "PartyC"
FROM election_results;
Update the columns with a function:
SELECT * FROM split_and_update_parties();
The function could look like this:
CREATE OR REPLACE FUNCTION split_and_update_parties()
RETURNS void
LANGUAGE plpgsql AS
$func$
DECLARE
i record;
j text;
n integer;
BEGIN
FOR i in
SELECT
column_name, -- 1
string_to_array(column_name, '_') -- 2
FROM information_schema.columns
WHERE table_name = 'election_results'
AND column_name ~ 'Party'
LOOP
n = cardinality(i.string_to_array); -- 3
IF n > 1 THEN
FOREACH j in array i.string_to_array LOOP
EXECUTE format('
UPDATE parties p -- 4
SET %I = p.%I + s.val / %s
FROM (
SELECT %I as val, "Country"
FROM election_results
) s
WHERE p."Country" = s."Country"
', j, j, n, i.column_name);
END LOOP;
END IF;
END LOOP;
END
$func$;
Explanation:
1. Fetch the column names from the internal information schema.
2. Immediately split the names and convert them into arrays.
3. Count the elements of each array to know the divisor needed later in the calculation.
4. Loop through all multiple-party arrays/columns (those with more than one element), fetch the original values from the election_results table, and update the single-party columns in the new table.
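As a sanity check, the new table should now match the expected output from the question:
SELECT * FROM parties ORDER BY "Country";
-- Argentina | 101 |  11 | 20
-- Chile     |  45 | 205 | 50
-- Uruguay   |   3 |   5 |  1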
I’m trying to wrap my head around a problem but I’m hitting a blank. I know SQL quite well, but I’m not sure how to approach this.
My problem:
Given a string and a table of possible substrings, I need to find the number of occurrences.
The search table consists of a single column:
searchtable
| pattern TEXT PRIMARY KEY|
|-------------------------|
| my |
| quick |
| Earth |
Given the string "Earth is my home planet and where my friends live", the expected outcome is 3 (2x "my" and 1x "Earth").
In my function, I have variable bodytext which is the string to examine.
I know I can do IN (SELECT pattern FROM searchtable) to get the list of substrings, and I could possibly use a LIKE ANY clause to get matches, but how can I count occurrences of the substrings in the table within the search string?
This is easily done without a custom function:
select count(*)
from (values ('Earth is my home planet and where my friends live')) v(str) cross join lateral
regexp_split_to_table(v.str, ' ') word join
searchtable p
on word = p.pattern
Just break the original string into "words". Then match on the words.
Another method uses regular expression matching:
select (select count(*) from regexp_matches(v.str, p.rpattern, 'g'))
from (values ('Earth is my home planet and where my friends live')) v(str) cross join
(select string_agg(pattern, '|') as rpattern
from searchtable
) p;
This stuffs all the patterns into a single regular expression. Note that this version does not take word breaks into account.
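If word breaks do matter, one variant (a sketch, using Postgres' \y word-boundary escape and the question's searchtable) is to wrap the alternation in boundaries:
select (select count(*) from regexp_matches(v.str, p.rpattern, 'g'))
from (values ('Earth is my home planet and where my friends live')) v(str) cross join
     (select '\y(' || string_agg(pattern, '|') || ')\y' as rpattern
      from searchtable
     ) p;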
Here is a db<>fiddle.
I solved the problem with the following code:
CREATE OR REPLACE FUNCTION count_matches(body TEXT, OUT matches INTEGER) AS $$
DECLARE
results INTEGER := 0;
matchlist RECORD;
BEGIN
FOR matchlist IN (SELECT pattern FROM searchtable)
LOOP
results := results + (SELECT LENGTH(body) -
LENGTH(REPLACE(body, matchlist.pattern, ''))) /
LENGTH(matchlist.pattern);
END LOOP;
matches := results;
END;
$$ LANGUAGE plpgsql;
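A quick check with the sample data from the question:
SELECT count_matches('Earth is my home planet and where my friends live');
-- returns 3 (2x "my" and 1x "Earth")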
I have a database table (mysql/pgsql) with the following format:
id|text
1| the cat is black
2| a cat is a cat
3| a dog
I need to select the line that contains nth match of a word:
eg: "Select the 3rd match for the word cat, that is the number 2 entry."
Results: the 2nd row from the result where the 3rd word is cat
The only solution I could find is to search for all entries that contain the text cat, load them into memory, and find the match by counting. But this is not efficient for a big number of matches (>1 million).
How would you handle this in an efficient way? Is there anything you can do directly in the database? Maybe using other technologies, like Lucene?
Update: having 1 million strings in memory might not be a big issue but the expectation of the application is to have between 1k-50k active users that might do this operation concurrently.
Consider creating another table with the below structure
Table : index_table
columns :
index_id , word, occurrence, id(foreign key to your original table)
Do a one-time indexing process: iterate over each entry in your original table, split the text into words, and for each word look it up in the new table. If it is not present, insert a new entry with occurrence set to 1; if it already exists, insert a new entry with occurrence = existing occurrence + 1. In Postgres this can also be done set-based, as sketched below.
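A set-based sketch for Postgres, assuming the original table is original_table(id, text) and occurrences are numbered in id order:
INSERT INTO index_table (word, occurrence, id)
SELECT word,
       row_number() OVER (PARTITION BY word ORDER BY id) AS occurrence,
       id
FROM (
   SELECT id, unnest(string_to_array(text, ' ')) AS word
   FROM original_table
) w;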
Once you have done this one-off indexing, your selects become pretty simple.
For example, the 3rd match for cat would be:
SELECT *
FROM original_table o, index_table idx
WHERE idx.word = 'cat'
AND idx.occurrence = 3
AND o.id = idx.id
You do not need Lucene for this job. Furthermore, if you have a large number of positive matches, the effort to pump all required data out of your DB will well exceed the computational cost.
Here's a simple solution:
Index: we require two properties:
efficiently access the words for each id
efficiently access all IDs in ascending order
as follows:
create index i_words on example_data (id, string_to_array(txt, ' '));
Query: find the ID associated with the nth match with the following query:
select id
from (
select id, unnest(string_to_array(txt, ' ')) as word
from example_data
) words
where word = :w -- :w = 'cat'
offset :n - 1 -- :n = 3
limit 1;
Executes in 2ms on 1 million rows.
Here's the full PostgreSQL setup if you'd rather try for yourself than take my word for it:
drop table if exists example_data;
create table example_data (
id integer primary key,
txt text not null
);
insert into example_data
(select generate_series(1, 1000000, 3) as id, 'the cat is black' as txt
union all
select generate_series(2, 1000000, 3), 'a cat is a cat'
union all
select generate_series(3, 1000000, 3), 'a dog'
order by id);
commit;
drop index if exists i_words;
create index i_words on example_data (id, string_to_array(txt, ' '));
select id
from (
select id, unnest(string_to_array(txt, ' ')) as word
from example_data
) words
where word = 'cat'
offset 3 - 1
limit 1;
The same query, this time also returning the matched word:
select
id, word
from (
select id, unnest(string_to_array(txt, ' ')) as word
from example_data
) words
where word = 'cat'
offset 3 - 1
limit 1;
Note that I'm still unsure what exactly "Select the 3rd match for the word cat, that is the number 2 entry" is supposed to mean.
Possible meanings:
1. the 2nd row from the result where the 3rd word is "cat"
2. the 3rd row where the 2nd word is "cat"
3. from all rows where "cat" appears at least 3 times, take the second row
4. from all rows where "cat" appears at least 2 times, take the third row
If it's 1 or 2, I think this could be done at an acceptable speed by using a trigram index to reduce the possible number of matching lines. A trigram index (supplied by the module pg_trgm) allows Postgres to use an index for a condition like like '%cat%'.
Assuming that only a small number of rows will satisfy that condition, the resulting lines can then be split into arrays and checked for the nth word.
Something like this:
with matching_rows as (
select id, line,
row_number() over (order by id) as rn
from the_table
where line like '%cat%' -- this hopefully reduces the result to only very few rows
)
select *
from matching_rows
where rn = 3 --<< "the third match for the word cat"
and (string_to_array(line, ' '))[2] = 'cat' -- "the second word is cat"
Note that a trigram index does have disadvantages as well. Maintaining such an index is much more expensive (=slower) than maintaining a regular b-tree index. So if your table is heavily updated, this might not be a good solution - but you need to test that for yourself.
Also, if the condition like '%cat%' doesn't really reduce the number of rows substantially, this is probably not going to perform well either.
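For reference, a minimal sketch of setting up such a trigram index, assuming the table/column names from the query above:
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX the_table_line_trgm_idx ON the_table USING gin (line gin_trgm_ops);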
Some more information on trigram indexes:
http://www.depesz.com/index.php/2011/02/19/waiting-for-9-1-faster-likeilike/
http://www.postgresonline.com/journal/archives/212-PostgreSQL-9.1-Trigrams-teaching-LIKE-and-ILIKE-new-tricks.html
Another option would be to filter out the "relevant" rows using Postgres' full text search instead of a plain LIKE condition.
Whatever algorithm you come up with for the database as-it-is is likely to be slow for this kind of data. You need an efficient text-based search; Lucene-based solutions like Solr or Elasticsearch will do nicely here. It would be the best option, though matching against the 3rd token in a string is not something I know how to build without further research.
You can also write a job in your db which will let you build a reverse map, string -> id, like this:
rownum | id | text
-------|----|------------------
1      | 1  | the cat is black
2      | 3  | nice cat
to
key     | rownum | id
--------|--------|---
1_the   | 1      | 1
2_cat   | 1      | 1
3_is    | 1      | 1
4_black | 1      | 1
1_nice  | 2      | 3
2_cat   | 2      | 3
If you can order by id you don't need rownum. You should also name the column something other than rownum; I leave it like that for clarity.
Now you can search for the first row where the word cat is the 2nd word, like this:
SELECT id WHERE key = '2_cat' ORDER BY rownum LIMIT 1
Provided you created an (id, key) or (key, id) index, your searches should be pretty quick.
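In Postgres, such a reverse map could be built set-based; a sketch with the hypothetical names the_table(id, text) and reverse_map(key, rownum, id), requiring 9.4+ for WITH ORDINALITY:
INSERT INTO reverse_map (key, rownum, id)
SELECT w.ord || '_' || w.word,              -- e.g. '2_cat'
       dense_rank() OVER (ORDER BY t.id),   -- row counter over the source rows
       t.id
FROM the_table t
CROSS JOIN LATERAL unnest(string_to_array(t.text, ' ')) WITH ORDINALITY AS w(word, ord);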
If you can fit all that data into memory, then you can use a simple Map<MyKey, Long> to do your search. MyKey would be, more or less Pair<Long,String> with proper equals and hashCode (and/or Comparable, if you use TreeMap) implementations.
(Thanks to Daniel Grosskopf for pointing out that I initially misinterpreted the question.)
This query will give you what you want with just SQL. It gets a running total of the counts of the occurrences of a word (e.g. 'cat') within the text, and then it returns the first row that hits the threshold that you want (e.g. 3).
SELECT id, text
FROM (SELECT entries.*,
SUM((SELECT COUNT(*)
FROM regexp_split_to_table(text, E'\\s+') AS words(word)
WHERE word = 'cat')) OVER (ORDER BY id) AS running_count
FROM entries) AS entries_with_running_count
WHERE running_count >= 3
LIMIT 1
See it in action in SQL Fiddle
How would you handle this in an efficient way? Is there any trick you can do directly in the database?
You are not specifying what other restrictions/requirements you may have, or what your definition of a big number of matches is.
As a general answer I would say that doing string manipulation in the database is not an efficient approach.
It is too slow and imposes much work on your DB which is usually a shared resource.
IMO you should do this programmatically.
A way to do this could be to keep metadata in another table i.e. indexes of rows that contain the text cat and where in the sentence.
You can query this meta-table in order to figure the rows to query from your main table.
This extra table is more efficient than searching your main table, because a LIKE with a leading wildcard cannot use an index, so you would end up with sequential scans and very slow performance.
Solution for the Postgres database:
Add a new column to your table:
alter table my_table add text_as_array text[];
This column will contain the sentence spliced into words:
"the cat is black" -> ["the","cat","is","black"]
Populate this column with values from current records:
update my_table set text_as_array = string_to_array(text,' ');
(and don't forget to set its value to string_to_array(text, ' ') when inserting new records)
Create a gin index on it:
create index my_table_text_as_array_index on my_table using gin (text_as_array);
analyze my_table;
Then all you need is run a fast query as simple as this:
select *
from my_table
where text_as_array @> ARRAY['cat']
and text_as_array[3] = 'cat' -- third word in sentence
order by id
limit 1
offset 2 -- second occurrence
It took 11ms to search over ~2,400,000 records in tests I did in my machine.
Explain:
Limit (cost=11252.08..11252.08 rows=1 width=104)
-> Sort (cost=11252.07..11252.12 rows=19 width=104)
Sort Key: id
-> Bitmap Heap Scan on my_table (cost=48.21..11251.83 rows=19 width=104)
Recheck Cond: (text_as_array @> '{cat}'::text[])
Filter: (text_as_array[3] = 'cat'::text)
-> Bitmap Index Scan on my_table_text_as_array_index (cost=0.00..48.20 rows=3761 width=0)
Index Cond: (text_as_array @> '{cat}'::text[])
A "directly in the database" solution seems preferable from an efficiency standpoint as most types of abstraction layer or loading/processing elsewhere are likely to incur additional overheads.
If the source text can be massaged such that only spaces separate the words (as mentioned in the comments - perhaps by pre-processing to suitably replace all non-alphabetical characters?), the following (My)SQL-only solution will work:
#############################################################
SET @searchWord = 'cat', # Search word: Must be lower case  #
    @n = 1,              # n where nth match is to be found #
#############################################################
    @matches = 0;        # Initialise local variable
SELECT s.*
FROM sentence s
WHERE id =
(SELECT subq.id
FROM
(SELECT *,
@matches AS prevMatches,
(@matches := @matches + LENGTH(`text`) - LENGTH(
REPLACE(LOWER(`text`),
CONCAT(' ', @searchWord, ' '),
CONCAT(@searchWord, ' ')))
+ CASE WHEN LEFT(LOWER(`text`), 4) = CONCAT(@searchWord, ' ') THEN 1 ELSE 0 END
+ CASE WHEN RIGHT(LOWER(`text`), 4) = CONCAT(' ', @searchWord) THEN 1 ELSE 0 END)
AS matches
FROM sentence) AS subq
WHERE subq.prevMatches < @n AND @n <= subq.matches);
Explanation
All instances of ' cat ' on each line are replaced with a word that is one letter shorter. The difference in length is then calculated to find the number of instances. Finally, the single possibilities of 'cat ' and ' cat' appearing at the start and end of the line are respectively catered for. Having done this, a cumulative total of matches is maintained for each line. This is bundled up into a subquery from which the nth match can be picked by finding the row where the cumulative number of matches is at least n but the previous total is less than n.
Further potential improvements
The above could of course be slightly simplified by making the source text lower case (which seems sensible if it is being pre-processed) and removing all calls to LOWER().
The subquery calculates a cumulative total number of matches. If it is likely that the same search terms will be reused, it might conceivably be possible to cache these results in another table and use triggers to maintain this whenever records are updated, inserted or deleted - however this would greatly add to the complexity and data storage requirements.
I would search for all rows with "cat" but limit the rows by n. This should give you a reasonably sized subset of your data that is guaranteed to contain the row you are looking for. The SQL would look similar to this:
select id, text
from your_table
where text ~* 'cat'
order by id
limit 3 --nth time cat appears
I would then implement your solution as a pl/pgsql function to get the id that contains the nth occurrence of your word:
CREATE OR REPLACE FUNCTION your_schema.row_with_nth_occurrence(character varying, integer)
RETURNS integer AS
$BODY$
Declare
arg_search_word ALIAS FOR $1;
arg_occurrence ALIAS FOR $2;
v_sql text;
v_sql2 text;
v_count integer;
v_count_total integer;
v_record your_table%ROWTYPE;
BEGIN
v_sql := 'select id, text
from your_table
where text ~* ' || quote_literal(arg_search_word) || '
order by id
limit ' || arg_occurrence || ';';
v_count := 0;
v_count_total := 0;
FOR v_record IN EXECUTE v_sql LOOP
v_sql2 := 'SELECT count(*)
FROM regexp_split_to_table(' || quote_literal(v_record.text) || ', E''\\s+'') a
WHERE a = ' || quote_literal(arg_search_word) || ';';
EXECUTE v_sql2 INTO v_count;
v_count_total := v_count_total + v_count;
IF v_count_total >= arg_occurrence THEN
RETURN v_record.id;
END IF;
END LOOP;
RAISE EXCEPTION '% does not occur % times in the database.', arg_search_word, arg_occurrence;
END;
$BODY$
LANGUAGE plpgsql;
All this function does is loop through the subset of rows potentially containing the desired word, counts the number of times it occurs in each row, and then returns the Id when it finds the row with the nth occurrence of the word.
Solution one:
Keep the rows in memory but centralized. All clients loop over the same list. Probably fast enough and reasonably memory friendly.
Solution two:
Use the streaming ResultSet technique from the JDBC driver; e.g.
Statement select = connection.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
select.setFetchSize(Integer.MIN_VALUE);
ResultSet result = select.executeQuery(sql);
As explained in http://dev.mysql.com/doc/connector-j/en/connector-j-reference-implementation-notes.html, scroll down to Resultset. This should be memory friendly.
Now simply count on the result rows until satisfied and close the result.
I am having trouble understanding your statement:
eg: "Select the 3rd match for the word cat, that is the number 2
entry." Results: the 2nd row from the result where the 3rd word is cat
I will assume that you mean, you want to search for entries where the 3rd word of the text is "cat", and from those entries you want to second entry.
Since you mentioned that your problem lies with the concurrent access and the speed, you will need to somehow build an index which is optimized for your query. You could use anything for this, database, lucene, etc. My suggestion would be to build the index in-memory. Just think of it as a warm up for your service before it could start serving request.
In your case, you would want some kind of map with the word and word position as the key. This key will then map to a list of row numbers which is matching the key. So in the end, you will just have to do a lookup twice, first is to get a list of row numbers where it matches, then the row number which you want. So the performance you will need in the end will be a simple map lookup + array list lookup (constant).
I've provided a very simple example below. It's untested code, but it should roughly give you an idea.
You could also save the index to a file after it has been built, if you want. After you have built the index and loaded it into memory, this will be very, very fast.
// text entry from the DB
public class TextEntry {
    private int rowNb;
    private String text;
    // getters & setters
}
// your index class
public class Index {
    private Map<Key, List<Integer>> indexMap;
    // getters and setters
    public static class Key {
        private int wordPosition;
        private String word;
        // getters and setters, plus equals() and hashCode() so it works as a map key
    }
}
// your searcher class
public class Searcher {
    private static Index index = null;
    private static List<TextEntry> allTextEntries = null;
    public static void init() {
        // init all data with some synchronization check
        // synchronization check whether index has been built
        Map<Index.Key, List<Integer>> indexMap = index.getIndexMap();
        allTextEntries.forEach(entry -> {
            // split the words, and build the index based on the word position and the word
            String[] words = entry.getText().split(" ");
            for (int i = 0; i < words.length; i++) {
                Index.Key key = new Index.Key(i + 1, words[i]);
                int rowNumber = entry.getRowNb();
                // if the key is already there, just add the row number if it's not the last one
                if (indexMap.containsKey(key)) {
                    List<Integer> entryMatch = indexMap.get(key);
                    if (entryMatch.get(entryMatch.size() - 1) != rowNumber) {
                        entryMatch.add(rowNumber);
                    }
                } else {
                    // if key is not there, add a new one
                    List<Integer> entryMatch = new ArrayList<>();
                    entryMatch.add(rowNumber);
                    indexMap.put(key, entryMatch);
                }
            }
        });
    }
    public static TextEntry search(String word, int wordPosition, int resultNb) {
        // call init if not yet called, do some check
        int rowNb = index.getIndexMap().get(new Index.Key(wordPosition, word)).get(resultNb - 1);
        return allTextEntries.get(rowNb - 1); // row numbers start at 1
    }
}
In MySQL
We need a function that counts the number of occurrences of a given substring in a field.
Create the function (it will count occurrences of the substring in a given column):
CREATE FUNCTION substrCount(
x varchar(255), delim varchar(12)) returns int
return (length(x)-length(REPLACE(x,delim, '')))/length(delim);
This function should be able to find how many times 'cat' was present in text.
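For example:
SELECT substrCount('a cat is a cat', 'cat');
-- returns 2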
Please bear with me on the syntax, as the code may not be fully functional (correct as required).
I will break this problem into 3 parts, which we can handle with the help of a stored procedure:
1. Select all the rows containing the string 'cat' (or any other input). This should select a maximum of n rows (n = number of occurrences), so we will use LIMIT in our query.
2. With a cursor, iterate over the matched rows in a loop.
3. Increment the per-row occurrence matches in a count variable and exit once the required number of matches is found (the match should be found within 1 to n loops).
Create the stored procedure. Assuming a proper index, this should be fast.
DELIMITER $$
CREATE PROCEDURE find_match(INOUT string_to_match varchar(100),
INOUT occurence_count INTEGER, OUT match_field varchar(100))
BEGIN
DECLARE v_count INTEGER DEFAULT 0;
DECLARE v_finished INTEGER DEFAULT 0;
DECLARE v_text varchar(100) DEFAULT "";
-- declare cursor and select by the order you want.
DECLARE matcher_cursor CURSOR FOR
SELECT textField FROM myTable
WHERE textField LIKE CONCAT('%', string_to_match, '%')
ORDER BY id
LIMIT 0, occurence_count;
-- declare NOT FOUND handler
DECLARE CONTINUE HANDLER
FOR NOT FOUND SET v_finished = -1;
OPEN matcher_cursor;
get_matching_occurence: LOOP
FETCH matcher_cursor INTO v_text;
IF v_finished = -1 THEN
LEAVE get_matching_occurence;
END IF;
-- use the substring count function
SET v_count = v_count + substrCount(v_text, string_to_match);
-- if count is equal to or greater than occurence_count, the matching row is found
IF (v_count >= occurence_count) THEN
SET match_field = v_text;
LEAVE get_matching_occurence;
END IF;
END LOOP get_matching_occurence;
CLOSE matcher_cursor;
END$$
DELIMITER ;
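A usage sketch (variable names are illustrative):
SET @word = 'cat', @n = 3;
CALL find_match(@word, @n, @match_field);
SELECT @match_field; -- text of the row containing the 3rd occurrence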
I tested this on a table with 1.2 million rows and it returns data in less than a second. I am using a split function (a modified form of Jeff Moden's splitter function) from here: http://sqlperformance.com/2012/08/t-sql-queries/splitting-strings-follow-up
-- Step 1. Create table
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[Sentence](
[id] [int] IDENTITY(1,1) NOT NULL,
[Text] [varchar](250) NULL,
CONSTRAINT [PK_Sentence] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
Step 2. Create a split function
CREATE FUNCTION [dbo].[SplitSentence]
(
@CSVString NVARCHAR(MAX),
@Delimiter NVARCHAR(255)
)
RETURNS TABLE
WITH SCHEMABINDING AS
RETURN
WITH E1(N) AS ( SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1),
E2(N) AS (SELECT 1 FROM E1 a, E1 b),
cteTally(N) AS (SELECT 0
UNION ALL
SELECT TOP (DATALENGTH(ISNULL(@CSVString,1))) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E2),
cteStart(N1) AS (SELECT t.N+1
FROM cteTally t
WHERE (SUBSTRING(@CSVString,t.N,1) = @Delimiter OR t.N = 0))
SELECT Word = SUBSTRING(@CSVString, s.N1, ISNULL(NULLIF(CHARINDEX(@Delimiter,@CSVString,s.N1),0)-s.N1,50))
FROM cteStart s;
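A quick sanity check of the splitter:
SELECT Word FROM dbo.SplitSentence('the cat is black', ' ');
-- returns one row per word: the, cat, is, black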
Step 3. Create a sql script to return the required data
DECLARE @n int = 3
DECLARE @Word varchar(50) = 'cat'
;WITH myData AS
(SELECT TOP (@n)
id
,[Text]
,sp.word
,ROW_NUMBER() OVER (ORDER BY Id) RowNo
FROM
Sentence
CROSS APPLY (SELECT * FROM SplitSentence(Sentence.[Text],' ')) sp
WHERE Word = @Word)
SELECT
*
FROM
myData
WHERE
RowNo = @n
Assumptions:
1. The sentence has a max length of 250 characters. If needed this can be modified in the create table statement.
2. The sentence will not have more than a 100 words. If more than 100 words are needed, the split function will have to be modified.
3. Any word in the sentence has a max length of 50 characters.
SQL Fiddle demo here: http://sqlfiddle.com/#!3/0a1d0/1
Notes:
I am aware that the original requirement is for MySQL/pgsql, but I have limited knowledge of these, and therefore my solution has been created/tested in MSSQL.
I would simply count the number of words in each line and then do a cumulative sum. I'm not sure what the most efficient way is to count words, but a difference of lengths might win:
select t.*
from (select t.*, sum(cnt) over (order by id) as cumecnt
      from (select t.*,
                   (length(' ' || str || ' ') - length(replace(' ' || str || ' ', ' cat ', ''))) / length(' cat ') as cnt
            from t
           ) t
      where cnt > 0
     ) t
where cumecnt >= 3 and cumecnt - cnt < 3;
You would simply replace "3" and "cat" with the appropriate strings.
This method requires scanning the strings a handful of times in each row (once for each of the lengths and once for the replace). My guess is that this is faster than various array operations, regular expressions, or text search. If you have a more complicated definition of what a word is, then you probably need to use regular expression replace, as sketched below.
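For example, assuming a "word" is anything delimited by word boundaries (Postgres \m and \M escapes):
select t.*,
       (length(str) - length(regexp_replace(str, '\mcat\M', '', 'g'))) / length('cat') as cnt
from t;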
Doing the work in the database is usually a big win. However, if you are looking for the 6th match out of one million rows, it might be faster to read back the values from the subquery and do the accumulation in the application. I don't think there is a way to short-circuit the database calculation to stop just on the "6th" row.
I've been doing some research on how to replace substrings in a single row based on the values of columns from the rows of another table, but was not able to do so, since the update only applied the first row's values from the other table. So I'm planning to put this in a loop in a plpgsql function.
Here are the snippets of my tables. Main table:
Table "public.tbl_main"
Column | Type | Modifiers
-----------------------+--------+-----------
maptarget | text |
expression | text |
maptarget | expression
-----------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
43194-0 | 363787002:70434600=(386053000:704347000=(414237002:704320005=259470008,704318007=118539007,704319004=50863008),704327008=122592007,246501002=703690001,370132008=30766002)
Look-up table:
Table "public.tbl_values"
Column | Type | Modifiers
-----------------------+--------+-----------
conceptid | bigint |
term | text |
conceptid | term
-----------+------------------------------------------
386053000 | Patient evaluation procedure (procedure)
363787002 | Observable entity (observable entity)
704347000 | Observes (attribute)
704320005 | Towards (attribute)
704318007 | Property type (attribute)
I want to create a function that will replace all numeric values in the tbl_main.expression column with their corresponding tbl_values.term, using tbl_values.conceptid as the link to each numeric value in the expression string.
I'm currently stuck on the looping part, since I'm a newbie with plpgsql loops. Here is the rough draft of my function.
--create first a test table
drop table if exists tbl_test;
create table tbl_test as select * from tbl_main limit 1;
--
create or replace function test ()
RETURNS SETOF tbl_main
LANGUAGE plpgsql
AS $function$
declare
resultItem tbl_main;
v_mapTarget text;
v_expression text;
ctr int;
begin
v_mapTarget:='';
v_expression:='';
ctr:=1;
for resultItem in (select * from tbl_test) loop
v_mapTarget:=resultItem.mapTarget;
select into v_expression expression from tbl_test;
raise notice 'parameter used: %',v_mapTarget;
raise notice 'current expression: %',v_expression;
update tbl_test set expression=replace(v_expression, conceptid, term) from (select conceptid::text, term from tbl_values offset ctr limit 1) b ;
ctr:=ctr+1;
raise notice 'counter: %', ctr;
v_expression:= (select expression from tbl_test);
resultItem.expression:= v_expression;
raise notice 'current expression: %',v_expression;
return next resultItem;
end loop;
return;
end;
$function$;
Any further information will be much appreciated.
My Postgres version:
PostgreSQL 9.3.6 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu 4.8.2-19ubuntu1) 4.8.2, 64-bit
PL/pgSQL function with dynamic SQL
Looping is always a measure of last resort. Even in this case it is substantially cheaper to concatenate a query string using a query, and execute it once:
CREATE OR REPLACE FUNCTION f_make_expression(_expr text, OUT result text) AS
$func$
BEGIN
EXECUTE (
SELECT 'SELECT ' || string_agg('replace(', '') || '$1,'
|| string_agg(format('%L,%L)', conceptid::text, v.term), ','
ORDER BY conceptid DESC)
FROM (
SELECT conceptid::bigint
FROM regexp_split_to_table($1, '\D+') conceptid
WHERE conceptid <> ''
) m
JOIN tbl_values v USING (conceptid)
)
USING _expr
INTO result;
END
$func$ LANGUAGE plpgsql;
Call:
SELECT *, f_make_expression(expression) FROM tbl_main;
However, if not all conceptid values have the same number of digits, the operation could be ambiguous. Replace conceptids with more digits first to avoid that - ORDER BY conceptid DESC does that - and make sure that replacement strings do not introduce ambiguity (numbers that might be replaced in the next step). Related answer with more on these pitfalls:
Replace a string with another string from a list depending on the value
The token $1 is used two different ways here, don't be misled:
regexp_split_to_table($1, '\D+')
This one references the first function parameter _expr. You could as well use the parameter name.
|| '$1,'
This concatenates into the SQL string a reference to the first expression passed via the USING clause to EXECUTE. Parameters of the outer function are not visible inside EXECUTE; you have to pass them explicitly.
It's pure coincidence that $1 (_expr) of the outer function is passed as $1 to EXECUTE. You might as well hand over $7 as the third expression in the USING clause, where it would be $3 ...
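For illustration, if the expression contained just the conceptids 704347000 and 386053000, the generated statement would look roughly like this (executed with _expr bound to $1 via USING):
SELECT replace(replace($1, '704347000', 'Observes (attribute)'),
               '386053000', 'Patient evaluation procedure (procedure)')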
I added a debug function to the fiddle. With a minor modification you can output the generated SQL string to inspect it.
SQL function
Here is a pure SQL alternative. Probably also faster:
CREATE OR REPLACE FUNCTION f_make_expression_sql(_expr text)
RETURNS text AS
$func$
SELECT string_agg(CASE WHEN $1 ~ '^\d'
THEN txt || COALESCE(v.term, t.conceptid)
ELSE COALESCE(v.term, t.conceptid) || txt END
, '' ORDER BY rn) AS result
FROM (
SELECT *, row_number() OVER () AS rn
FROM (
SELECT regexp_split_to_table($1, '\D+') conceptid
, regexp_split_to_table($1, '\d+') txt
) sub
) t
LEFT JOIN tbl_values v ON v.conceptid = NULLIF(t.conceptid, '')::int
$func$ LANGUAGE sql STABLE;
In Postgres 9.4 this can be much more elegant with two new features:
ROWS FROM to replace the old (weird) technique of syncing set-returning functions
WITH ORDINALITY to get row numbers on the fly reliably:
PostgreSQL unnest() with element number
CREATE OR REPLACE FUNCTION f_make_expression_sql(_expr text)
RETURNS text AS
$func$
SELECT string_agg(CASE WHEN $1 ~ '^\d'
THEN txt || COALESCE(v.term, t.conceptid)
ELSE COALESCE(v.term, t.conceptid) || txt END
, '' ORDER BY rn) AS result
FROM ROWS FROM (
regexp_split_to_table($1, '\D+')
, regexp_split_to_table($1, '\d+')
) WITH ORDINALITY AS t(conceptid, txt, rn)
LEFT JOIN tbl_values v ON v.conceptid = NULLIF(t.conceptid, '')::int
$func$ LANGUAGE sql STABLE;
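Call it the same way as the plpgsql version:
SELECT *, f_make_expression_sql(expression) FROM tbl_main;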
SQL Fiddle demonstrating all for Postgres 9.3.
There's also another way, without creating functions: using WITH RECURSIVE. I used it with a lookup table of thousands of rows.
You'll need to change the following table and column names to yours:
tbl_main, strsourcetext, strreplacedtext;
lookuptable, strreplacefrom, strreplaceto.
WITH RECURSIVE replaced AS (
(SELECT
strsourcetext,
strreplacedtext,
array_agg(strreplacefrom ORDER BY length(strreplacefrom) DESC, strreplacefrom, strreplaceto) AS arrreplacefrom,
array_agg(strreplaceto ORDER BY length(strreplacefrom) DESC, strreplacefrom, strreplaceto) AS arrreplaceto,
count(1) AS intcount,
1 AS intindex
FROM tbl_main, lookuptable WHERE tbl_main.strsourcetext LIKE '%' || strreplacefrom || '%'
GROUP BY strsourcetext)
UNION ALL
SELECT
strsourcetext,
replace(strreplacedtext, arrreplacefrom[intindex], arrreplaceto[intindex]) AS strreplacedtext,
arrreplacefrom,
arrreplaceto,
intcount,
intindex+1 AS intindex
FROM replaced WHERE intindex<=intcount
)
SELECT strsourcetext,
(array_agg(strreplacedtext ORDER BY intindex DESC))[1] AS strreplacedtext
FROM replaced
GROUP BY strsourcetext