Postgres use index with `split_part` - sql

Context:
I have a test table:
=> \d+ test
                                         Table "public.test"
    Column     |          Type          | Collation | Nullable | Default | Storage  | Stats target | Description
---------------+------------------------+-----------+----------+---------+----------+--------------+-------------
 id            | character varying(255) |           |          |         | extended |              |
 configuration | jsonb                  |           |          |         | extended |              |
The configuration column contains "well-defined" json which has a key called source_url (other, non-relevant keys are skipped below). An example value for the configuration column is:
{
"source_url": "https://<resource-address>?Signature=R1UzTGphWEhrTTFFZnc0Q4qkGRxkA5%2BHFZSfx3vNEvRsrlDcHdntArfHwkWiT7Qxi%2BWVJ4DbHJeFp3GpbS%2Bcb1H3r1PXPkfKB7Fjr6tFRCetDWAOtwrDrVOkR9G1m7iOePdi1RW%2Fn1LKE7MzQUImpkcZXkpHTUgzXpE3TPgoeVtVOXXt3qQBARpdSixzDU8dW%2FcftEkMDVuj4B%2Bwiecf6st21MjBPjzD4GNVA%2F6bgvKA6ExrdYmM5S6TYm1lz2e6juk81%2Fk4eDecUtjfOj9ekZiGJVMyrD5Tyw%2FTWOrfUB2VM1uw1PFT2Gqet87jNRDAtiIrJiw1lfB7Od1AwNxIk0Rqkrju8jWxmQhvb1BJLV%2BoRH56OHdm5nHXFmQdldVpyagQ8bQXoKmYmZPuxQb6t9FAyovGMav3aMsxWqIuKTxLzjB89XmgwBTxZSv5E9bkWUbom2%2BWq4O3%2BCrVxYwsqg%3D%3D&Expires-At=1569340020&Issued-At=1568293200"
  ...
}
The URL contains a query param Expires-At
Problem:
There is a scheduled job that runs every 24 hours. This job should find all records which are expired or about to expire (and then do something about it).
Solution:
I have this query to get my job done:
select * from test where to_timestamp(split_part(split_part(configuration->>'source_url', 'Expires-At=', 2), '&', 1)::bigint) <= now() + interval '24 hours';
Explanation:
The query first splits the source_url at Expires-At= and picks the part to the right of it, then splits the resulting string on & and picks the left part, thus extracting the exact epoch time needed, as text.
The same query also works for the corner case where Expires-At is the last query param in the source_url.
Once it has the epoch time as text, it converts it to a bigint, then to a Postgres timestamp; that timestamp is then compared to see whether it is less than or equal to the time 24 hours from now().
All rows passing the above condition are selected.
So, at the end of each run, the scheduler refreshes all the URLs that will expire in the next 24 hours (including the ones which have already expired).
Questions:
Though this solves my problem, I really don't like this solution. It involves a lot of string manipulation, which I find unclean. Is there a cleaner way to do this?
If we "have" to go with the above solution, can we even use indices for this kind of query? I know that functions like lower() and upper() can be indexed, but I really can't think of a way to index this query.
Alternatives:
Unless there is a really clean solution, I am going to go with this:
I would introduce a new key inside the configuration json called expires_at, making sure it gets filled with the correct value every time a row is inserted.
And then directly query this newly added field (with an index on the configuration column).
I admit that this way I am repeating the information in Expires-At, but of all the possible solutions I could think of, this is the one I find cleanest.
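For illustration, that alternative could look something like this (storing the epoch as a number under an expires_at key; the key, the index name and the exact casts are assumptions, not tested code):
CREATE INDEX test_expires_at_idx ON test (((configuration->>'expires_at')::bigint));
SELECT * FROM test
WHERE (configuration->>'expires_at')::bigint
      <= extract(epoch from now() + interval '24 hours')::bigint;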
Is there a better way than this that you folks can think of?
EDIT:
Updated the query to use substring() with regex instead of inner split_part():
select * from test where to_timestamp(split_part(substring(configuration->>'source_url' from 'Expires-At=\d+'), '=', 2)::bigint) <= now() + interval '24 hours';

Given your current data model, I don't find your WHERE condition that bad.
You can index it with
CREATE INDEX ON test (
    to_timestamp(
        split_part(
            split_part(
                configuration->>'source_url',
                'Expires-At=',
                2
            ),
            '&',
            1
        )::bigint
    )
);
Essentially, you have to index the whole expression on the left side of the <=. You can only do that if all functions and operators involved are IMMUTABLE, which I think they are in your case.
I would change the data model though. First, I don't see the value of having a jsonb column with a single value in it. Why not have the URL as a text column instead?
You could go further and split the URL into individual parts which are stored in columns.
Whether all this is a good idea depends on how you use the value in the database: it is often a good idea to split off those parts of the data that you use in WHERE conditions and the like, and leave the rest "in a lump". This is to some extent a matter of taste.
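A sketch of what that split could look like (column names and the backfill expression are illustrative, assuming Expires-At is always present in the URL):
ALTER TABLE test
    ADD COLUMN source_url text,
    ADD COLUMN expires_at timestamp with time zone;

UPDATE test
SET source_url = configuration->>'source_url',
    expires_at = to_timestamp(split_part(split_part(configuration->>'source_url',
                                                    'Expires-At=', 2), '&', 1)::bigint);

CREATE INDEX ON test (expires_at);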

You can use a URI-parsing module, if the string manipulation is the part you find unclean. You could use plperl or plpythonu with whatever URI parser library you prefer. But if your json really is "well defined", I don't see much point. Unless you are already using plperl or plpythonu, adding those dependencies probably adds more "dirt" than it removes.
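For what it's worth, a minimal sketch of the plpython route (the function name is made up, plpython3u must be installed, and there is no error handling for URLs without the parameter):
CREATE FUNCTION url_expires_at(url text) RETURNS bigint AS $$
# Parse the query string and pull out the Expires-At value
from urllib.parse import urlparse, parse_qs
qs = parse_qs(urlparse(url).query)
return int(qs['Expires-At'][0])
$$ LANGUAGE plpython3u IMMUTABLE;
Declaring it IMMUTABLE would even let you index url_expires_at(configuration->>'source_url'), assuming the function really does always return the same result for the same input.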
You can build an index:
create index on test (to_timestamp(split_part(split_part(configuration->>'source_url', 'Expires-At=', 2), '&', 1)::bigint));
set enable_seqscan TO off;
explain select * from test where to_timestamp(split_part(split_part(configuration->>'source_url', 'Expires-At=', 2), '&', 1)::bigint) <= now() + interval '24 hours';
                                                                                             QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using test_to_timestamp_idx1 on test  (cost=0.13..8.15 rows=1 width=36)
   Index Cond: (to_timestamp(((split_part(split_part((configuration ->> 'source_url'::text), 'Expires-At='::text, 2), '&'::text, 1))::bigint)::double precision) <= (now() + '24:00:00'::interval))
"I would introduce a new key inside configuration json called expires_at, making sure, this gets filled with the correct value, every time a row is inserted."
Isn't that just re-arranging the dirt? It makes the query look nicer at the expense of making the insert uglier. Perhaps you could put it in an INSERT OR UPDATE trigger.
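A minimal sketch of such a trigger, assuming the epoch is stored as a number under an expires_at key (all names are illustrative, untested):
CREATE FUNCTION fill_expires_at() RETURNS trigger AS $$
BEGIN
    -- Copy the Expires-At epoch out of the URL into a top-level json key
    NEW.configuration := jsonb_set(
        NEW.configuration,
        '{expires_at}',
        to_jsonb(split_part(split_part(NEW.configuration->>'source_url',
                                       'Expires-At=', 2), '&', 1)::bigint)
    );
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER test_fill_expires_at
    BEFORE INSERT OR UPDATE ON test
    FOR EACH ROW EXECUTE FUNCTION fill_expires_at();  -- EXECUTE PROCEDURE before PG 11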

Related

Meaning of these two queries (sql injection)

Can someone explain why these two queries (sometimes) cause errors? I googled some explanations but none of them were right. I don't want to fix them; these queries are actually meant to be used for an SQL injection attack (error-based SQL injection, I think). The triggered error should be "duplicate entry". I'm trying to find out why they sometimes cause errors.
Thanks.
select count(*)
from information_schema.tables
group by concat(version(), floor(rand()*2));

select count(*), concat(version(), floor(rand()*2)) x
from information_schema.tables
group by x;
It seems the second one is trying to guess which database the victim of the injection is using.
The second one is giving me this:
+----------+------------------+
| count(*) | x                |
+----------+------------------+
|       88 | 10.1.38-MariaDB0 |
|       90 | 10.1.38-MariaDB1 |
+----------+------------------+
Okay, I'm going to post an answer - and it's more of a frame challenge to the question itself.
Basically: this query is silly, and it should be rewritten; find out what it's supposed to do and rewrite it in a way that makes sense.
What does the query currently do?
It looks like it's getting a count of the tables in the current database... except it's grouping by a calculated column. And that column takes version() and appends either a '0' or a '1' to it (chosen randomly).
So the end result? Two rows, each with a numerical value, which together add up to the total number of tables in the current database. If there are 30 tables, you might get 13/17 one time, 19/11 the next, followed by 16/14.
I have a hard time believing that this is what the query is supposed to do. So instead of just trying to fix the "error" - dig in and figure out what piece of data it should be returning - and then rewrite the proc to do it.
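If, say, all that is really wanted is the table count alongside the server version, a straightforward rewrite (my guess at the intent, not something taken from the original query) would be:
select count(*) as table_count, version() as server_version
from information_schema.tables;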

Index for comparing to beginning of every word in a column

So I have a table
id | name            | gender
---+-----------------+-------
 0 | Markus Meskanen | M
 1 | Jack Jackson    | M
 2 | Jane Jackson    | F
And I've created an index
CREATE INDEX people_name_idx ON people (LOWER(name));
And then I query with
SELECT * FROM people WHERE LOWER(name) LIKE LOWER('Jack%');
Here 'Jack' is the user's input. However, it currently matches only at the beginning of the whole column, while I'd like it to match the beginning of any of the words. I'd prefer not to use '%Jack%' since it would also produce invalid results from the middle of a word.
Is there a way to create an index so that each word gets a separate row?
Edit: If the name is something long like 'Michael Jackson's First Son Bob', it should match the beginning of any of the words, i.e. Mich would match Michael and Fir would match First, but ackson wouldn't match anything since it's not at the start of a word.
Edit 2: We have 3 million rows, so performance is an issue; that's why I'm mostly looking at indexes.
Postgres has two index types to help with full text searches: GIN and GiST indexes (GIN is, I think, the more commonly used one).
There is a brief overview of the indexes in the documentation, more extensive documentation for each index class, and plenty of blogs on the subject.
These can speed up the searches that you are trying to do.
The pg_trgm module does exactly what you want.
You need to create either:
CREATE INDEX people_name_idx ON people USING GIST (name gist_trgm_ops);
Or:
CREATE INDEX people_name_idx ON people USING GIN (name gin_trgm_ops);
See the pg_trgm documentation for the difference between them.
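Note that pg_trgm ships with Postgres but is not enabled by default; it has to be installed once per database:
CREATE EXTENSION IF NOT EXISTS pg_trgm;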
After that, these queries could use one of the indexes above:
SELECT * FROM people WHERE name ILIKE '%Jack%';
SELECT * FROM people WHERE name ~* '\mJack';
As @GordonLinoff answered, full text search is also capable of searching by prefix matches. But FTS is not designed to do that efficiently; it is best at matching lexemes. Though if you want to achieve the best performance, I advise you to give it a try too and measure each. In FTS, your query looks something like this:
SELECT * FROM people WHERE to_tsvector('english', name) @@ to_tsquery('english', 'Jack:*');
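With 3 million rows you would also want the matching expression index for this query (standard FTS indexing; the index name is illustrative):
CREATE INDEX people_name_fts_idx ON people USING GIN (to_tsvector('english', name));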
Note: if your query filter (Jack) comes from user input, both of the queries above need some protection: in the ILIKE one you need to escape the % and _ characters, in the regexp one you need to escape a lot more, and in the FTS one you'll need to parse the input with some parser and generate a valid tsquery yourself, because to_tsquery() will throw an error if its parameter is not valid (and with plainto_tsquery() you cannot use a prefix-matching query).
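For the ILIKE case, the escaping could look something like this (a sketch; $1 stands for the raw user input, e.g. a prepared-statement parameter; backslash first, then the wildcards):
SELECT * FROM people
WHERE name ILIKE '%' || replace(replace(replace($1, '\', '\\'),
                                        '%', '\%'),
                                '_', '\_') || '%';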
Note 2: the regexp variant with name ~* '\mJack' will work best with English names. If you want to cover the whole range of Unicode (i.e. names containing characters like æ), you'll need a slightly different pattern. Something like:
SELECT * FROM people WHERE name ~* '(^|\s|,)Jack';
This will work with most names, and it will even behave like a real prefix match with some old names, like O'Brian.
You can use regular expressions to find text inside name:
create table ci(id int, name text);

insert into ci values
    (1, 'John McEnroe Blackbird Petrus'),
    (2, 'Michael Jackson and Blade');

select id, name
from ci
where name ~ 'Pe+';
Returns:
1 John McEnroe Blackbird Petrus
Or you can use something similar, like where substring(name, <regex exp>) is not null.
Check it here: http://rextester.com/LHA16094
If you know that the words are space separated, you can do
SELECT * FROM people WHERE LOWER(name) LIKE LOWER('Jack%') OR LOWER(name) LIKE LOWER('% Jack%');
For more control you can use RegEx with MySQL,
see https://dev.mysql.com/doc/refman/5.7/en/regexp.html

PostgreSQL, find strings differ by n characters

Suppose I have a table like this
id data
1 0001
2 1000
3 2010
4 0120
5 0020
6 0002
sql fiddle demo
id is the primary key; data is a fixed-length string whose characters can be 0, 1 or 2.
Is there a way to build an index so I could quickly find strings which differ by n characters from a given string? E.g. for the string 0001 and n = 1, I want to get row 6.
Thanks.
There is the levenshtein() function, provided by the additional module fuzzystrmatch. It does exactly what you are asking for:
SELECT *
FROM a
WHERE levenshtein(data, '1110') = 1;
SQL Fiddle.
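fuzzystrmatch is an additional module shipped with Postgres, so it has to be enabled once per database:
CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;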
But it is not very fast: it is slow with big tables, because it can't use an index.
You might get somewhere with the similarity or distance operators provided by the additional module pg_trgm. Those can use a trigram index, as detailed in the linked manual pages. I did not get anywhere with them, though; the module uses a different definition of "similarity".
Generally the problem seems to fit in the KNN ("k nearest neighbours") search pattern.
If your case is as simple as the example in the question, you can use LIKE in combination with a trigram GIN index, which should be reasonably fast with big tables:
SELECT *
FROM a
WHERE data <> '1110'
AND  (data LIKE '_110' OR
      data LIKE '1_10' OR
      data LIKE '11_0' OR
      data LIKE '111_');
Obviously, this technique quickly becomes infeasible with longer strings and more than one difference.
However, since the string is so short, any query will match a rather big percentage of the base table. Therefore, index support will hardly buy you anything. Most of the time it will be faster for Postgres to scan sequentially.
I tested with 10k and 100k rows, with and without a trigram GIN index. Since ~19% of rows match the criteria for the given test case, a sequential scan is faster and levenshtein() still wins. For more selective queries matching less than around 5% of the rows (it depends), a query using an index is (much) faster.

Order by a field containing Numbers and Letters

I need to extract data from an existing Paradox database under Delphi XE2 (yes, more than 10 years divide them...).
I need to order the result by a field (id in the example) containing values such as: '1', '2 a', '100', '1 b', '50 bis'... and get this:
- 1
- 1 b
- 2 a
- 50 bis
- 100
Maybe something like this could do it, but those keywords don't exist:
SELECT id, TRIM(TRIM(ALPHA FROM id)) as generated, TRIM(TRIM(NUMBER FROM id)) as generatedbis, etc
FROM "my.db"
WHERE ...
ORDER BY generated, generatedbis
How could I achieve such an ordering with Paradox...?
Try this:
SELECT id, CAST('0' + id AS INTEGER) A
FROM "my.db"
ORDER BY A, id
These ideas spring to mind:
- Create a sort function in Delphi that does the sort client-side, using a comparison/mapping function that rearranges each string into something that is comparable, maybe lexicographically.
- Add a column to the table whose data you wish to sort, containing a modification of the values that can be compared with a standard string comparison and thus will work with ORDER BY.
- Add a stored function to Paradox that performs that modification of the values, and use this function in the ORDER BY clause.
By modification, I mean something like: separate the string into components, and re-join them with each numeric component left-padded (and each text component right-padded) with enough spaces that all components land at the same positions in the string; this is illustrated in the sketch below. It will only work reliably if you can say with confidence that no component value in the database exceeds a certain length.
I am making these suggestions with little/no knowledge of Paradox or Delphi, so you will have to take them with a grain of salt.
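To illustrate the padding idea concretely (in Postgres-flavored SQL purely for readability; Paradox SQL has neither lpad() nor split_part(), so treat this as pseudocode):
SELECT id,
       -- '1'      -> '    1 '
       -- '2 a'    -> '    2 a'
       -- '50 bis' -> '   50 bis'
       -- '100'    -> '  100 '
       lpad(split_part(id, ' ', 1), 5) || ' ' || split_part(id, ' ', 2) AS sort_key
FROM "my.db"
ORDER BY sort_key;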

How to handle string ordering in order by clause?

Suppose I want to order records by a string field called STORY_LENGTH. This field is multi-valued, and I represent the multiple values with commas: for example, record1 holds "1", record2 holds "1,3" and record3 holds "1,2". When I order the records by STORY_LENGTH, they come out as record1 > record3 > record2; ORDER BY ... ASC is clearly sorting the value as a string. And here comes the problem: when record4 holds "10" and record5 holds "2", the ordering puts record4 before record5, which I obviously don't want, since numerically 2 < 10. I am stuck with a string format only because of the multiple values in the field.
So, can anybody help me out here? I need a good idea to fix this.
thanks
Multi-valued fields as you describe mean your data model is broken and should be normalized.
Once this is done, querying becomes much simpler.
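A sketch of what the normalized model could look like (table and column names are made up, and MIN() stands in for "first value"):
CREATE TABLE story_lengths (
    record_id    INT NOT NULL,  -- points back to the original record
    story_length INT NOT NULL   -- one value per row instead of '1,3'
);

-- Numeric ordering then needs no string parsing:
SELECT record_id, MIN(story_length) AS first_length
FROM story_lengths
GROUP BY record_id
ORDER BY first_length;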
From what I've understood, you want to sort items by the second or first number in comma-separated values stored in a VARCHAR field. The implementation depends on the database used; for example, in MySQL it would look like:
SELECT * FROM stories
ORDER BY CAST(COALESCE(SUBSTRING_INDEX(story_length, ',', -1), '0') AS SIGNED)
Yet it is generally not good to use such sorting, for performance reasons: the sort requires scanning the whole table instead of using an index on the field.
Edit: After the edits it looks like you want to sort on the first value and ignore the value(s) after the comma. As changes to the database design are not an option according to a comment above, just use the following code for sorting:
SELECT * FROM stories
ORDER BY CAST(COALESCE(NULLIF(SUBSTRING_INDEX(story_length, ',', 1), ''), '0') AS SIGNED)