PostgreSQL IS NULL and length - sql

I am trying to get all records from a table where a specific column is NULL. But I am not getting any records. On the other hand, there are many records where the length(field) is indeed 0.
select count(*) from article a where length(for_interest) =0;
count
-------
9
(1 row)
select count(*) from article a where for_interest is NULL ;
count
-------
0
(1 row)
Is there something about NULLs I didn't get right? More info:
select count(*) from article AS a where for_interest is NOT NULL ;
count
-------
15
(1 row)
select count(*) from article ;
count
-------
15
(1 row)
PostgreSQL version is 9.3.2.
Adding sample data, table description, etc. (a new sample table was created with just 2 records for this):
test=# \d sample_article
Table "public.sample_article"
Column | Type | Modifiers
--------------+------------------------+-----------
id | integer |
title | character varying(150) |
for_interest | character varying(512) |
test=# select * from sample_article where length(for_interest)=0;
id | title | for_interest
----+-----------------------------------------------------------------+--------------
8 | What is the Logic Behind the Most Popular Interview Questions? |
(1 row)
test=# select * from sample_article where for_interest IS NOT NULL;
id | title | for_interest
----+-----------------------------------------------------------------+--------------
7 | Auto Expo 2014: Bike Gallery | Wheels
8 | What is the Logic Behind the Most Popular Interview Questions? |
(2 rows)
test=# select * from sample_article where for_interest IS NULL;
id | title | for_interest
----+-------+--------------
(0 rows)

Character types can hold the empty string '', which is not a NULL value.
The length of an empty string is 0. The length of a NULL value is NULL.
Most functions return NULL for NULL input.
SELECT length(''); --> 0
SELECT length(NULL); --> NULL
SELECT NULL IS NULL; --> TRUE
SELECT '' IS NULL; --> FALSE
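If the goal is to treat empty strings the same as NULLs, a simple option (a sketch against the table from the question) is to match both explicitly, or to normalize empty strings to NULL with NULLIF:
-- counts rows where for_interest is NULL or the empty string
SELECT count(*)
FROM article
WHERE for_interest IS NULL
   OR for_interest = '';
-- equivalent, folding '' into NULL on the fly
SELECT count(*)
FROM article
WHERE NULLIF(for_interest, '') IS NULL;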

Related

Count string occurrences within a list column - Snowflake/SQL

I have a table with a column that contains a list of strings like below:
EXAMPLE:
STRING User_ID [...]
"[""null"",""personal"",""Other""]" 2122213 ....
"[""Other"",""to_dos_and_thing""]" 2132214 ....
"[""getting_things_done"",""TO_dos_and_thing"",""Work!!!!!""]" 4342323 ....
QUESTION:
I want to get a count of the number of times each unique string appears (the strings within the STRING column are separated by commas), but I only know how to do the following:
SELECT u.STRING, count(u.USERID) as cnt
FROM table u
group by u.STRING
order by cnt desc;
However, the above method doesn't work, as it only counts the number of user IDs that use a specific grouping of strings.
The ideal output using the example above would look like this:
DESIRED OUTPUT:
STRING COUNT_Instances
"null" 1223
"personal" 543
"Other" 324
"to_dos_and_thing" 221
"getting_things_done" 146
"Work!!!!!" 22
Based on your description, here is my sample table:
create table u (user_id number, string varchar);
insert into u values
(2122213, '"[""null"",""personal"",""Other""]"'),
(2132214, '"[""Other"",""to_dos_and_thing""]"'),
(2132215, '"[""getting_things_done"",""TO_dos_and_thing"",""Work!!!!!""]"' );
I used SPLIT_TO_TABLE to split each string into rows, and then REGEXP_SUBSTR to clean the data. So here's the query and output:
select REGEXP_SUBSTR( s.VALUE, '""(.*)""', 1, 1, 'i', 1 ) extracted, count(*) from u,
lateral SPLIT_TO_TABLE( string , ',' ) s
GROUP BY extracted
order by count(*) DESC;
+---------------------+----------+
| EXTRACTED | COUNT(*) |
+---------------------+----------+
| Other | 2 |
| null | 1 |
| personal | 1 |
| to_dos_and_thing | 1 |
| getting_things_done | 1 |
| TO_dos_and_thing | 1 |
| Work!!!!! | 1 |
+---------------------+----------+
SPLIT_TO_TABLE https://docs.snowflake.com/en/sql-reference/functions/split_to_table.html
REGEXP_SUBSTR https://docs.snowflake.com/en/sql-reference/functions/regexp_substr.html
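The output above keeps to_dos_and_thing and TO_dos_and_thing as separate groups. If case variants should be counted together, as the desired output suggests, one option (a sketch on the same sample table) is to fold case before grouping:
select LOWER( REGEXP_SUBSTR( s.VALUE, '""(.*)""', 1, 1, 'i', 1 ) ) extracted, count(*) from u,
lateral SPLIT_TO_TABLE( string , ',' ) s
GROUP BY extracted
order by count(*) DESC;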

Counting the total number of rows with SELECT DISTINCT ON without using a subquery

I have been performing some queries using PostgreSQL's SELECT DISTINCT ON syntax. I would like the query to return the total number of rows alongside every result row.
Assume I have a table my_table like the following:
CREATE TABLE my_table(
id int,
my_field text,
id_reference bigint
);
I then have a couple of values:
id | my_field | id_reference
----+----------+--------------
1 | a | 1
1 | b | 2
2 | a | 3
2 | c | 4
3 | x | 5
Basically my_table contains some versioned data. The id_reference is a reference to a global version of the database. Every change to the database increases the global version number; changes always add new rows to the tables (instead of updating or deleting values) and record the new version number.
My goal is to perform a query that will only retrieve the latest values in the table, alongside with the total number of rows.
For example, in the above case I would like to retrieve the following output:
| total | id | my_field | id_reference |
+-------+----+----------+--------------+
| 3 | 1 | b | 2 |
+-------+----+----------+--------------+
| 3 | 2 | c | 4 |
+-------+----+----------+--------------+
| 3 | 3 | x | 5 |
+-------+----+----------+--------------+
My attempt is the following:
select distinct on (id)
count(*) over () as total,
*
from my_table
order by id, id_reference desc
This returns almost the correct output, except that total is the number of rows in my_table instead of being the number of rows of the resulting query:
total | id | my_field | id_reference
-------+----+----------+--------------
5 | 1 | b | 2
5 | 2 | c | 4
5 | 3 | x | 5
(3 rows)
As you can see it has 5 instead of the expected 3.
I can fix this by using a subquery and count as an aggregate function:
with my_values as (
select distinct on (id)
*
from my_table
order by id, id_reference desc
)
select count(*) over (), * from my_values
Which produces my expected output.
My question: is there a way to avoid using this subquery and have something similar to count(*) over () return the result I want?
You are looking at my_table 3 ways:
to find the latest id_reference for each id
to find my_field for the latest id_reference for each id
to count the distinct number of ids in the table
I therefore prefer this solution:
select
c.id_count as total,
a.id,
a.my_field,
b.max_id_reference
from
my_table a
join
(
select
id,
max(id_reference) as max_id_reference
from
my_table
group by
id
) b
on
a.id = b.id and
a.id_reference = b.max_id_reference
join
(
select
count(distinct id) as id_count
from
my_table
) c
on true;
This is a bit longer (especially the long, thin way I write SQL) but it makes it clear what is happening. If you come back to it in a few months' time (somebody usually does) then it will take less time to understand what is going on.
The "on true" at the end is a deliberate cartesian product because there can only ever be exactly one result from the subquery "c" and you do want a cartesian product with that.
There is nothing necessarily wrong with subqueries.

Merging Complicated Tables

I'm trying to merge tables where rows correspond to a many:1 relationship with "real" things.
I'm writing a blackjack simulator that stores game history in a database with a new set of tables generated each run. The tables are really more like templates, since each game gets its own set of the 3 mutable tables (players, hands, and matches). Here's the layout, where suff is a user-specified suffix to use for the current run:
- cards
- id INTEGER PRIMARY KEY
- cardValue INTEGER NOT NULL
- suit INTEGER NOT NULL
- players_suff
- whichPlayer INTEGER PRIMARY KEY
- aiType TEXT NOT NULL
- hands_suff
- id BIGSERIAL PRIMARY KEY
- whichPlayer INTEGER REFERENCES players_suff(whichPlayer) *
- whichHand BIGINT NOT NULL
- thisCard INTEGER REFERENCES cards(id)
- matches_suff
- id BIGSERIAL PRIMARY KEY
- whichGame INTEGER NOT NULL
- dealersHand BIGINT NOT NULL
- whichPlayer INTEGER REFERENCES players_suff(whichPlayer)
- thisPlayersHand BIGINT NOT NULL **
- playerResult INTEGER NOT NULL --AKA who won
Only one cards table is created because its values are constant.
So after running the simulator twice you might have:
hands_firstrun
players_firstrun
matches_firstrun
hands_secondrun
players_secondrun
matches_secondrun
I want to be able to combine these tables if you used the same AI parameters for both of those runs (i.e. players_firstrun and players_secondrun are exactly the same). The problem is that the way I'm inserting hands makes this really messy: whichHand can't be a BIGSERIAL because the relationship of hands_suff rows to "actual hands" is many:1. matches_suff is handled the same way because a blackjack "game" actually consists of a set of games: the set of pairs of each player vs. the dealer. So for 3 players, you actually have 3 rows for each round.
Currently I select the largest whichHand in the table, add 1 to it, then insert all of the rows for one hand. I'm worried this "query-and-insert" will be really slow if I'm merging 2 tables that might both be arbitrarily huge.
When I'm merging tables, I feel like I should be able to (entirely in SQL) query the largest values of whichHand and whichGame once, then use them to combine the tables, incrementing them for each unique whichHand and whichGame in the table being merged.
(I saw this question, but it doesn't handle using a generated ID in 2 different places). I'm using Postgres and it's OK if the answer is specific to it.
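To illustrate, something roughly like the following is what I have in mind for the hands tables (just a sketch; the matches tables would need the analogous offsets applied to whichGame, dealersHand and thisPlayersHand):
-- append the second run's hands, shifting whichHand past the current maximum
INSERT INTO hands_firstrun (whichPlayer, whichHand, thisCard)
SELECT whichPlayer,
       whichHand + (SELECT COALESCE(MAX(whichHand), -1) + 1 FROM hands_firstrun),
       thisCard
FROM hands_secondrun;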
* sadly postgres doesn't allow parameterized table names so this had to be done by manual string substitution. Not the end of the world since the program isn't web-facing and no one except me is likely to ever bother with it, but the SQL injection vulnerability does not make me happy.
** matches_suff(whichPlayersHand) was originally going to reference hands_suff(whichHand) but foreign keys must reference unique values. whichHand isn't unique because a hand is made up of multiple rows, with each row "holding" one card. To query for a hand you select all of those rows with the same value in whichHand. I couldn't think of a more elegant way to do this without resorting to arrays.
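For reference, the array version I decided against might look roughly like this (just a sketch, not the schema actually used; note that PostgreSQL cannot enforce foreign keys on individual array elements):
-- one row per hand, so whichHand is unique and could be referenced by matches_suff
CREATE TABLE hands_suff (
    whichHand   BIGSERIAL PRIMARY KEY,
    whichPlayer INTEGER REFERENCES players_suff(whichPlayer),
    cards       INTEGER[] NOT NULL  -- values point into cards(id), not enforced as FKs
);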
EDIT:
This is what I have now:
thomas=# \dt
List of relations
Schema | Name | Type | Owner
--------+----------------+-------+--------
public | cards | table | thomas
public | hands_first | table | thomas
public | hands_second | table | thomas
public | matches_first | table | thomas
public | matches_second | table | thomas
public | players_first | table | thomas
public | players_second | table | thomas
(7 rows)
thomas=# SELECT * FROM hands_first
thomas-# \g
id | whichplayer | whichhand | thiscard
----+-------------+-----------+----------
1 | 0 | 0 | 6
2 | 0 | 0 | 63
3 | 0 | 0 | 41
4 | 1 | 1 | 76
5 | 1 | 1 | 23
6 | 0 | 2 | 51
7 | 0 | 2 | 29
8 | 0 | 2 | 2
9 | 0 | 2 | 92
10 | 0 | 2 | 6
11 | 1 | 3 | 101
12 | 1 | 3 | 8
(12 rows)
thomas=# SELECT * FROM hands_second
thomas-# \g
id | whichplayer | whichhand | thiscard
----+-------------+-----------+----------
1 | 0 | 0 | 78
2 | 0 | 0 | 38
3 | 1 | 1 | 24
4 | 1 | 1 | 18
5 | 1 | 1 | 95
6 | 1 | 1 | 40
7 | 0 | 2 | 13
8 | 0 | 2 | 84
9 | 0 | 2 | 41
10 | 1 | 3 | 29
11 | 1 | 3 | 34
12 | 1 | 3 | 56
13 | 1 | 3 | 52
(13 rows)
thomas=# SELECT * FROM matches_first
thomas-# \g
id | whichgame | dealershand | whichplayer | thisplayershand | playerresult
----+-----------+-------------+-------------+-----------------+--------------
1 | 0 | 0 | 1 | 1 | 1
2 | 1 | 2 | 1 | 3 | 2
(2 rows)
thomas=# SELECT * FROM matches_second
thomas-# \g
id | whichgame | dealershand | whichplayer | thisplayershand | playerresult
----+-----------+-------------+-------------+-----------------+--------------
1 | 0 | 0 | 1 | 1 | 0
2 | 1 | 2 | 1 | 3 | 2
(2 rows)
I'd like to combine them to have:
hands_combined table:
id | whichplayer | whichhand | thiscard
----+-------------+-----------+----------
1 | 0 | 0 | 6 --Seven of Spades
2 | 0 | 0 | 63 --Queen of Spades
3 | 0 | 0 | 41 --Three of Clubs
4 | 1 | 1 | 76
5 | 1 | 1 | 23
6 | 0 | 2 | 51
7 | 0 | 2 | 29
8 | 0 | 2 | 2
9 | 0 | 2 | 92
10 | 0 | 2 | 6
11 | 1 | 3 | 101
12 | 1 | 3 | 8
13 | 0 | 4 | 78
14 | 0 | 4 | 38
15 | 1 | 5 | 24
16 | 1 | 5 | 18
17 | 1 | 5 | 95
18 | 1 | 5 | 40
19 | 0 | 6 | 13
20 | 0 | 6 | 84
21 | 0 | 6 | 41
22 | 1 | 7 | 29
23 | 1 | 7 | 34
24 | 1 | 7 | 56
25 | 1 | 7 | 52
matches_combined table:
id | whichgame | dealershand | whichplayer | thisplayershand | playerresult
----+-----------+-------------+-------------+-----------------+--------------
1 | 0 | 0 | 1 | 1 | 1
2 | 1 | 2 | 1 | 3 | 2
3 | 2 | 4 | 1 | 5 | 0
4 | 3 | 6 | 1 | 7 | 2
Each value of "thiscard" represents a playing card in the range [1..104]--52 playing cards with an extra bit representing if it's face up or face down. I didn't post the actual table for space reasons.
So player 0 (aka the dealer) had a hand of (Seven of Spades, Queen of Spades, 3 of Clubs) in the first game.
I think you're not using PostgreSQL the way it's intended to be used, plus your table design may not be suitable for what you want to achieve. Whilst it was difficult to understand what you want your solution to achieve, I wrote this, which seems to solve everything you want using a handful of tables only, and functions that return recordsets for simulating your requirement for individual runs. I used Enums and complex types to illustrate some of the features that you may wish to harness from the power of PostgreSQL.
Also, I'm not sure what parameterized table names are (I have never seen anything like it in any RDBMS), but PostgreSQL does allow something perfectly suitable: recordset returning functions.
CREATE TYPE card_value AS ENUM ('1', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K');
CREATE TYPE card_suit AS ENUM ('Clubs', 'Diamonds', 'Hearts', 'Spades');
CREATE TYPE card AS (value card_value, suit card_suit, face_up bool);
CREATE TABLE runs (
run_id bigserial NOT NULL PRIMARY KEY,
run_date timestamptz NOT NULL DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE players (
run_id bigint NOT NULL REFERENCES runs,
player_no int NOT NULL, -- 0 can be assumed as always the dealer
ai_type text NOT NULL,
PRIMARY KEY (run_id, player_no)
);
CREATE TABLE matches (
run_id bigint NOT NULL REFERENCES runs,
match_no int NOT NULL,
PRIMARY KEY (run_id, match_no)
);
CREATE TABLE hands (
hand_id bigserial NOT NULL PRIMARY KEY,
run_id bigint NOT NULL REFERENCES runs,
match_no int NOT NULL,
hand_no int NOT NULL,
player_no int NOT NULL,
UNIQUE (run_id, match_no, hand_no),
FOREIGN KEY (run_id, match_no) REFERENCES matches,
FOREIGN KEY (run_id, player_no) REFERENCES players
);
CREATE TABLE deals (
deal_id bigserial NOT NULL PRIMARY KEY,
hand_id bigint NOT NULL REFERENCES hands,
card card NOT NULL
);
CREATE OR REPLACE FUNCTION players(int) RETURNS SETOF players AS $$
SELECT * FROM players WHERE run_id = $1 ORDER BY player_no;
$$ LANGUAGE SQL;
CREATE OR REPLACE FUNCTION matches(int) RETURNS SETOF matches AS $$
SELECT * FROM matches WHERE run_id = $1 ORDER BY match_no;
$$ LANGUAGE SQL;
CREATE OR REPLACE FUNCTION hands(int) RETURNS SETOF hands AS $$
SELECT * FROM hands WHERE run_id = $1 ORDER BY match_no, hand_no;
$$ LANGUAGE SQL;
CREATE OR REPLACE FUNCTION hands(int, int) RETURNS SETOF hands AS $$
SELECT * FROM hands WHERE run_id = $1 AND match_no = $2 ORDER BY hand_no;
$$ LANGUAGE SQL;
CREATE OR REPLACE FUNCTION winner_player (int, int) RETURNS int AS $$
SELECT player_no
FROM hands
WHERE run_id = $1 AND match_no = $2
ORDER BY hand_no DESC
LIMIT 1
$$ LANGUAGE SQL;
CREATE OR REPLACE FUNCTION next_player_no (int) RETURNS int AS $$
SELECT CASE WHEN EXISTS (SELECT 1 FROM runs WHERE run_id = $1) THEN
COALESCE((SELECT MAX(player_no) FROM players WHERE run_id = $1), 0) + 1 END
$$ LANGUAGE SQL;
CREATE OR REPLACE FUNCTION next_match_no (int) RETURNS int AS $$
SELECT CASE WHEN EXISTS (SELECT 1 FROM runs WHERE run_id = $1) THEN
COALESCE((SELECT MAX(match_no) FROM matches WHERE run_id = $1), 0) + 1 END
$$ LANGUAGE SQL;
CREATE OR REPLACE FUNCTION next_hand_no (int) RETURNS int AS $$
SELECT CASE WHEN EXISTS (SELECT 1 FROM runs WHERE run_id = $1) THEN
COALESCE((SELECT MAX(hand_no) + 1 FROM hands WHERE run_id = $1), 0) END
$$ LANGUAGE SQL;
CREATE OR REPLACE FUNCTION card_to_int (card) RETURNS int AS $$
SELECT ((SELECT enumsortorder::int-1 FROM pg_enum WHERE enumtypid = 'card_suit'::regtype AND enumlabel = ($1).suit::name) * 13 +
(SELECT enumsortorder::int-1 FROM pg_enum WHERE enumtypid = 'card_value'::regtype AND enumlabel = ($1).value::name) + 1) *
CASE WHEN ($1).face_up THEN 2 ELSE 1 END
$$ LANGUAGE SQL; -- SELECT card_to_int(('3', 'Spades', false))
CREATE OR REPLACE FUNCTION int_to_card (int) RETURNS card AS $$
SELECT ((SELECT enumlabel::card_value FROM pg_enum WHERE enumtypid = 'card_value'::regtype AND enumsortorder = ((($1-1)%13)+1)::real),
(SELECT enumlabel::card_suit FROM pg_enum WHERE enumtypid = 'card_suit'::regtype AND enumsortorder = (((($1-1)/13)::int%4)+1)::real),
$1 > (13*4))::card
$$ LANGUAGE SQL; -- SELECT i, int_to_card(i) FROM generate_series(1, 13*4*2) i
CREATE OR REPLACE FUNCTION deal_cards(int, int, int, int[]) RETURNS TABLE (player_no int, hand_no int, card card) AS $$
WITH
hand AS (
INSERT INTO hands (run_id, match_no, player_no, hand_no)
VALUES ($1, $2, $3, next_hand_no($1))
RETURNING hand_id, player_no, hand_no),
mydeals AS (
INSERT INTO deals (hand_id, card)
SELECT hand_id, int_to_card(card_id)::card AS card
FROM hand, UNNEST($4) card_id
RETURNING hand_id, deal_id, card
)
SELECT h.player_no, h.hand_no, d.card
FROM hand h, mydeals d
$$ LANGUAGE SQL;
CREATE OR REPLACE FUNCTION deals(int) RETURNS TABLE (deal_id bigint, hand_no int, player_no int, card int) AS $$
SELECT d.deal_id, h.hand_no, h.player_no, card_to_int(d.card)
FROM hands h
JOIN deals d ON (d.hand_id = h.hand_id)
WHERE h.run_id = $1
ORDER BY d.deal_id;
$$ LANGUAGE SQL;
INSERT INTO runs DEFAULT VALUES; -- Add first run
INSERT INTO players VALUES (1, 0, 'Dealer'); -- dealer always zero
INSERT INTO players VALUES (1, next_player_no(1), 'Player 1');
INSERT INTO matches VALUES (1, next_match_no(1)); -- First match
SELECT * FROM deal_cards(1, 1, 0, ARRAY[6, 63, 41]);
SELECT * FROM deal_cards(1, 1, 1, ARRAY[76, 23]);
SELECT * FROM deal_cards(1, 1, 0, ARRAY[51, 29, 2, 92, 6]);
SELECT * FROM deal_cards(1, 1, 1, ARRAY[101, 8]);
INSERT INTO matches VALUES (1, next_match_no(1)); -- Second match
SELECT * FROM deal_cards(1, 2, 0, ARRAY[78, 38]);
SELECT * FROM deal_cards(1, 2, 1, ARRAY[24, 18, 95, 40]);
SELECT * FROM deal_cards(1, 2, 0, ARRAY[13, 84, 41]);
SELECT * FROM deal_cards(1, 2, 1, ARRAY[29, 34, 56, 52]);
SELECT * FROM deals(1); -- This is the output you need (hands_combined table)
-- This view can be used to retrieve the list of all winning hands
CREATE OR REPLACE VIEW winning_hands AS
SELECT DISTINCT ON (run_id, match_no) *
FROM hands
ORDER BY run_id, match_no, hand_no DESC;
SELECT * FROM winning_hands;
Wouldn't using the UNION operator work?
For the hands relation:
SELECT * FROM hands_first
UNION ALL
SELECT * FROM hands_second
For the matches relation:
SELECT * FROM matches_first
UNION ALL
SELECT * FROM matches_second
As a more long-term solution, I'd consider restructuring the DB, because it will quickly become unmanageable with this schema. Why not improve normalization by introducing a games table?
In other words, games have many matches, matches have many players for each game, and players have many hands for each match.
I'd recommend drawing the UML for the entity relationships on paper (http://dawgsquad.googlecode.com/hg/docs/database_images/Database_Model_Diagram(Title).png), then improving the schema so it can be queried using normal SQL operators.
Hope this helps.
EDIT:
In that case you can use a subquery on the union of both tables with the row_number() PG window function to generate the row number:
SELECT
row_number() OVER () AS id,
whichplayer,
whichhand,
thiscard
FROM
(
SELECT * FROM hands_first
UNION ALL
SELECT * FROM hands_second
) AS t;
The same principle would apply to the matches table (see the sketch below). Obviously this doesn't scale well to even a small number of tables, so I would prioritize normalizing your schema.
Docs on some PG functions: http://www.postgresql.org/docs/current/interactive/functions-window.html
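Applied to the matches tables, the same pattern might look like this (a sketch; it only regenerates the surrogate id, so whichgame and the hand references would still need offsetting as described in the question):
SELECT
row_number() OVER () AS id,
whichgame,
dealershand,
whichplayer,
thisplayershand,
playerresult
FROM
(
SELECT * FROM matches_first
UNION ALL
SELECT * FROM matches_second
) AS t;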
To build a new table with all rows of the two tables, do:
CREATE TABLE hands AS
select 1 as hand, id, whichplayer, whichhand, thiscard
from hands_first
union all
select 2 as hand, id, whichplayer, whichhand, thiscard
from hands_second
After that, to insert the data of a new match (run), create a sequence that starts at the current last value + 1:
CREATE SEQUENCE matche START 3;
Before inserting, read the sequence value and use it in your inserts:
SELECT nextval('matche');
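A hypothetical usage sketch, assuming the sequence supplies the run tag stored in the hand column created above (the column values are made up purely for illustration):
-- nextval('matche') returned 3, so the third run's rows are tagged with it
INSERT INTO hands (hand, id, whichplayer, whichhand, thiscard)
VALUES (currval('matche'), 1, 0, 0, 42);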
Your database structure is not great, and creating tables on the fly is not a scalable approach. There are performance drawbacks to creating physical tables instead of using an existing structure. I suggest you refactor your DB structure if you can.
You can however use the UNION operator to merge your data.

Get previous and next row from rows selected with (WHERE) conditions

For example I have this statement:
my name is Joseph and my father's name is Brian
This statement is split into words, like this table:
------------------------------
| ID | word |
------------------------------
| 1 | my |
| 2 | name |
| 3 | is |
| 4 | Joseph |
| 5 | and |
| 6 | my |
| 7 | father's |
| 8 | name |
| 9 | is |
| 10 | Brian |
------------------------------
I want to get the previous and next word of each word.
For example, I want to get the previous and next word of "name":
--------------------------
| my | name | is |
--------------------------
| father's | name | is |
--------------------------
How could I get this result?
You didn't specify your DBMS, so the following is ANSI SQL:
select prev_word, word, next_word
from (
select id,
lag(word) over (order by id) as prev_word,
word,
lead(word) over (order by id) as next_word
from words
) as t
where word = 'name';
SQLFiddle: http://sqlfiddle.com/#!12/7639e/1
Why did nobody give the simple answer?
SELECT LAG(word) OVER ( ORDER BY ID ) AS PreviousWord ,
word ,
LEAD(word) OVER ( ORDER BY ID ) AS NextWord
FROM words;
Without subqueries:
SELECT a.word
FROM my_table AS a
JOIN my_table AS b
ON b.word = 'name' AND abs(a.id - b.id) <= 1
ORDER BY a.id
Use a JOIN to get the expected result (SQL Server 2005 and later):
create table words (id integer, word varchar(20));
insert into words
values
(1 ,'my'),
(2 ,'name'),
(3 ,'is'),
(4 ,'joseph'),
(5 ,'and'),
(6 ,'my'),
(7 ,'father'),
(8 ,'name'),
(9 ,'is'),
(10,'brian');
SELECT A.Id , C.word AS PrevName ,
A.word AS CurName ,
B.word AS NxtName
FROM words AS A
LEFT JOIN words AS B ON A.Id = B.Id - 1
LEFT JOIN words AS C ON A.Id = C.Id + 1
WHERE A.Word = 'name'
Result:
SQL Fiddle Demo
Try this
SELECT *
FROM tablename a
WHERE ID IN(SELECT ID - 1
FROM tablename
WHERE word = 'name') -- will fetch previous rows of word `name`
OR ID IN(SELECT ID + 1
FROM tablename
WHERE word = 'name') -- will fetch next rows of word `name`
OR word = 'name' -- to fetch the rows where word = `name`
Here's a different approach, if you want the selects to be fast. It takes a bit of preparation work.
- Create a new column (e.g. "phrase") in the database that will contain the words you want (i.e. the previous, the current and the next).
- Write a trigger that, on insert, appends the new word to the previous row's phrase and prepends the previous row's word to the new row's phrase.
- If the individual words can change, you'll need a trigger on update to keep the phrase in sync.
Then just select the phrase. You get much better speed, but at the cost of extra storage, slower inserts and harder maintainability. Obviously you have to update the phrase column for the existing records, but you have the SQL to do that in the other answers.
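A minimal sketch of such an insert trigger in PL/pgSQL, assuming a table words(id, word, phrase) and that rows are inserted in increasing id order (names and details here are illustrative, not a drop-in implementation):
CREATE OR REPLACE FUNCTION maintain_phrase() RETURNS trigger AS $$
DECLARE
    prev_word text;
BEGIN
    -- word on the most recent earlier row, if any
    SELECT w.word INTO prev_word
    FROM words w
    WHERE w.id < NEW.id
    ORDER BY w.id DESC
    LIMIT 1;

    -- new row's phrase starts as "previous current"; the next insert appends its word
    NEW.phrase := concat_ws(' ', prev_word, NEW.word);

    -- complete the previous row's phrase with the word that follows it
    UPDATE words
    SET phrase = concat_ws(' ', phrase, NEW.word)
    WHERE id = (SELECT max(id) FROM words WHERE id < NEW.id);

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER words_phrase_trg
BEFORE INSERT ON words
FOR EACH ROW EXECUTE PROCEDURE maintain_phrase();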

PostgreSQL does not return rows for specific numeric criteria

I have a PostgreSQL database with a table that looks like the following:
+-------------+---------------------+--------------+
| id (bigint) | name (varying(255)) | price (real) |
+-------------+---------------------+--------------+
| 1           | name 1              | 0.33         |
| 1           | name 2              | 1.33         |
| 1           | name 3              | 1            |
+-------------+---------------------+--------------+
And here are the results of my queries:
SELECT * FROM my_table WHERE price = 1 -- OK (one row returned)
SELECT * FROM my_table WHERE price = 1.0 -- OK (one row returned)
SELECT * FROM my_table WHERE price = 1.33 -- FAIL (no row returned)
SELECT * FROM my_table WHERE price = 0.33 -- FAIL (no row returned)
Whenever the value cannot be cast to a whole (non-floating) number, no rows are returned by PostgreSQL.
I can't figure out why. Have you seen the same problem? How can I fix this?
One solution I see is to use an explicit cast to the real data type:
SELECT * FROM my_table WHERE price = 0.33::real;
id | name | price
----+--------+-------
1 | name 1 | 0.33
SELECT * FROM my_table WHERE price = 1.33::real;
id | name | price
----+--------+-------
1 | name 2 | 1.33
According to the documentation:
A numeric constant that contains neither a decimal point nor an
exponent is initially presumed to be type integer if its value fits in
type integer (32 bits); otherwise it is presumed to be type bigint if
its value fits in type bigint (64 bits); otherwise it is taken to be
type numeric. Constants that contain decimal points and/or exponents
are always initially presumed to be type numeric.
Note that:
SELECT 1.33::numeric = 1.33::real;
?column?
----------
f
(1 row)
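The underlying issue is that real is a 4-byte binary float, so 1.33 is stored only approximately, while the literal 1.33 is treated as exact numeric. If exact decimal comparisons matter for a price column (an assumption about the use case, not something stated in the question), a longer-term option is to switch the column to numeric:
-- store prices as exact decimals instead of binary floats
ALTER TABLE my_table ALTER COLUMN price TYPE numeric(10, 2);
-- after which the original comparisons work without explicit casts
SELECT * FROM my_table WHERE price = 1.33;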