Query for a json property with the highest (max) date as value in a jsonb column - sql

I have a Postgres table with a jsonb column that is inserted/updated as soon as a status is reached. I would like to query the latest status together with its date.
Given I have the following jsonb status column.
{
"pending": "2018-01-12T12:34:41.785945+00:00",
"started":"2018-01-10T15:52:41.785945+00:00",
"processed":"2018-01-18T12:52:41.785945+00:00"
}
What would be the query to get the following?
"processed":"2018-01-18T12:52:41.785945+00:00"...
Basically, select the property with the max date. Is this possible at all? And if so, how would the query look?
EDIT: the status JSON can have different properties at different times, so it is not always the three keys 'pending', 'started', 'processed'. There could be only 'pending', or other keys in the future. The question is how to get the latest status with its date.

Let's create the table with couple of sample data.
DROP TABLE IF EXISTS test;
CREATE TABLE test(
id integer,
status json
);
INSERT INTO test VALUES (1, '{
"pending": "2018-01-12T12:34:41.785945+00:00",
"started":"2018-01-10T15:52:41.785945+00:00",
"processed":"2018-01-18T12:52:41.785945+00:00"
}');
INSERT INTO test VALUES (2, '{
"pending": "2018-01-12T12:31:41.785945+00:00",
"started":"2018-01-20T15:55:41.785945+00:00",
"processed":"2018-01-10T12:20:41.785945+00:00"
}');
I managed to get the max date property by extracting the values from the json into separate columns, then applying the usual max function and a case statement to get the corresponding property of interest. This probably defeats the purpose of using the json type (maybe try json_each(json) instead?)
--- unpack values and put them into different columns
WITH t1 AS(
SELECT id, status->>'pending' AS pending,
status->>'started' AS started,
status->>'processed' AS processed
FROM test),
--- find maximum time corresponding to each_id
t2 AS (
SELECT id, GREATEST(t1.pending, t1.processed, t1.started) AS latest
FROM t1
)
--- join to get matching column that gave latest time
SELECT t1.id, CASE WHEN t1.pending = t2.latest THEN 'pending'
WHEN t1.started = t2.latest THEN 'started'
WHEN t1.processed = t2.latest THEN 'processed'
END AS max_status,
t2.latest AS max_time
FROM t1
JOIN t2
ON t1.id = t2.id
;
You can carry along other columns of interest for each id. The result is:
id | max_status | max_time
----+------------+----------------------------------
1 | processed | 2018-01-18T12:52:41.785945+00:00
2 | started | 2018-01-20T15:55:41.785945+00:00
(2 rows)
EDIT:
After your edit, I made some changes to handle the arbitrary presence/absence of particular keys in your json field. Let's use json_each(json) to unpack the key and value for each id into separate columns:
SELECT id, (json_each(status)).*
FROM test
WHERE id=1;
id | key | value
----+-----------+------------------------------------
1 | pending | "2018-01-12T12:34:41.785945+00:00"
1 | started | "2018-01-10T15:52:41.785945+00:00"
1 | processed | "2018-01-18T12:52:41.785945+00:00"
(3 rows)
Not quite what we want, but almost there. The first thing to notice is the column names key and value, which we can use to transform the data. Second, the values are json objects, so we have to take care of the double quotes after casting to text; trim handles this nicely. The rest is to partition the data by id, get the maximum time, and filter the rows to get the corresponding status.
WITH t1 AS(
SELECT id, key as status, trim(both '"' from value::text) as time_of
FROM test, json_each(status)
),
t2 as(
SELECT id, status,
to_timestamp(time_of, 'YYYY-MM-DD"T"HH24:MI:SS') as time_of,
MAX(to_timestamp(time_of, 'YYYY-MM-DD"T"HH24:MI:SS'))
OVER(PARTITION by id) AS max_time
FROM t1)
SELECT id, status, max_time
FROM t2
WHERE time_of = max_time;
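Since the question actually mentions a jsonb column, a shorter sketch of the same idea is possible with jsonb_each_text and DISTINCT ON (this assumes the status column is jsonb; with the json sample table above, cast it with status::jsonb or use json_each_text instead):
SELECT DISTINCT ON (t.id)
       t.id,
       e.key   AS status,
       e.value::timestamptz AS max_time
FROM test t,
     jsonb_each_text(t.status) AS e(key, value)
ORDER BY t.id, e.value::timestamptz DESC;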

Related

How to delete records in BigQuery based on values in an array?

In Google BigQuery, I would like to delete a subset of records, based on the value of a specific column. It's a query that I need to run repeatedly and that I would like to run automatically.
The problem is that this specific column is of the form STRUCT<column_1 ARRAY (STRING), column_2 ARRAY (STRING), ... >, and I don't know how to use such a column in the where-clause when using the delete-command.
Here is basically what I am trying to do (this code does not work):
DELETE
FROM dataset.table t
LEFT JOIN UNNEST(t.category.column_1) AS type
WHERE t.partition_date = '2020-07-22'
AND type = 'some_value'
The error that I'm getting is: Syntax error: Expected end of input but got keyword LEFT at [3:1]
If I replace the DELETE with SELECT *, it does work:
SELECT *
FROM dataset.table t
LEFT JOIN UNNEST(t.category.column_1) AS type
WHERE t.partition_date = '2020-07-22'
AND type = 'some_value'
Does somebody know how to use such a column to delete a subset of records?
EDIT:
Here is some code to create a reproducible example with some silly data (fill in your own dataset and table name in all queries):
Suppose you want to delete all rows where category.type contains the value 'food'.
1 - create a table:
CREATE TABLE <DATASET>.<TABLE_NAME>
(
article STRING,
category STRUCT<
color STRING,
type ARRAY<STRING>
>
);
2 - Insert data into the new table:
INSERT <DATASET>.<TABLE_NAME>
SELECT "apple" AS article, STRUCT('red' AS color, ['fruit','food'] as type) AS category
UNION ALL
SELECT "cabbage" AS article, STRUCT('blue' AS color, ['vegetable', 'food'] as type) AS category
UNION ALL
SELECT "book" AS article, STRUCT('red' AS color, ['object'] as type) AS category
UNION ALL
SELECT "dog" AS article, STRUCT('green' AS color, ['animal', 'pet'] as type) AS category;
3 - Show that select works (return all rows where category.type contains the value 'food'; these are the rows I want to delete):
SELECT *
FROM <DATASET>.<TABLE_NAME>
LEFT JOIN UNNEST(category.type) type
WHERE type = 'food'
Initial Result
4 - My attempt at deleting rows where category.type contains 'food' does not work:
DELETE
FROM <DATASET>.<TABLE_NAME>
LEFT JOIN UNNEST(category.type) type
WHERE type = 'food'
Syntax error: Unexpected keyword LEFT at [3:1]
Desired Result
This is the code I used to delete the desired records (the records where category.type contains the value 'food'.)
DELETE
FROM <DATASET>.<TABLE_NAME> t1
WHERE EXISTS(SELECT 1 FROM UNNEST(t1.category.type) t2 WHERE t2 = 'food')
The embarrassing thing is that I've seen this kind of answer on similar questions (for example on update queries). But I come from Oracle SQL, where I believe you are required to connect the subquery with the main query in the WHERE clause of the subquery (i.e. connect t1 with t2), so I didn't understand these answers. That's why I posted this question.
However, I learned that BigQuery automatically understands how to connect table t1 and 'table' t2; you don't have to explicitly connect them.
Now it is possible to still do this (perhaps even recommended?):
DELETE
FROM <DATASET>.<TABLE_NAME> t1
WHERE EXISTS (SELECT 1 FROM <DATASET>.<TABLE_NAME> t2 LEFT JOIN UNNEST(t2.category.type) AS type WHERE type = 'food' AND t1.article=t2.article)
but a second difficulty for me was that the ID in my actual data is hidden inside an ARRAY<STRUCT> construction, so I got stuck connecting t1 and t2. Fortunately this is not always an absolute necessity.
Since you did not provide any sample data I am going to explain using some dummy data. In case you add your sample data, I can update the answer.
Firstly, according to your description, you have only a STRUCT, not an ARRAY<STRUCT<col_1, col_2>>. For this reason, you do not need to use UNNEST to access the values within the data. Below is an example of how to access particular data within a STRUCT.
WITH data AS (
SELECT 1 AS id, STRUCT("Alex" AS name, 30 AS age, "NYC" AS city) AS info UNION ALL
SELECT 1 AS id, STRUCT("Leo" AS name, 18 AS age, "Sydney" AS city) AS info UNION ALL
SELECT 1 AS id, STRUCT("Robert" AS name, 25 AS age, "Paris" AS city) AS info UNION ALL
SELECT 1 AS id, STRUCT("Mary" AS name, 28 AS age, "London" AS city) AS info UNION ALL
SELECT 1 AS id, STRUCT("Ralph" AS name, 45 AS age, "London" AS city) AS info
)
SELECT * FROM data
WHERE info.city = "London"
Notice that the STRUCT is named info and the data we accessed is city and used it in the WHERE clause.
Now, in order to delete the rows that contain a specific value within the STRUCT (in your case, I assume it would be your_struct.column_1), you can use either DELETE, or MERGE and DELETE. I have saved the above data in a table to execute the examples below, both of which have the same output.
First method: DELETE
DELETE FROM `project.dataset.table`
WHERE info.city = "Sydney"
Second method: MERGE and DELETE
MERGE `project.dataset.table` a
USING (SELECT * FROM `project.dataset.table` WHERE info.city = "Sydney") b
ON a.info.city = b.info.city
WHEN MATCHED AND b.id = 1 THEN
DELETE
The output of both queries:
Row id info.name info.age info.city
1 1 Alex 30 NYC
2 1 Robert 25 Paris
3 1 Ralph 45 London
4 1 Mary 28 London
As you can see the row where info.city = "Sydney" was deleted in both cases.
It is important to point out that the deleted data is permanently removed from your source table, so you should be careful.
Note: Since you want to run this process every day, you could use a Scheduled Query within the BigQuery Console, appending or overwriting the results after each run. Also, it is good practice not to delete data from your source table, so consider creating a new table from your source table without the rows you do not want.
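A minimal sketch of that non-destructive approach, reusing the reproducible example above (the name of the new table is assumed):
CREATE OR REPLACE TABLE <DATASET>.<TABLE_NAME_FILTERED> AS
SELECT *
FROM <DATASET>.<TABLE_NAME> t
WHERE NOT EXISTS (SELECT 1 FROM UNNEST(t.category.type) x WHERE x = 'food');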

How can I fetch the last N rows, WITHOUT ordering the table

I have tables with millions of rows and need to fetch the last rows for specific IDs,
for example the last row which has device_id = 123 AND the last row which has device_id = 1234.
Because the tables are so huge and ordering takes so much time, is it possible to select the last 200 rows without ordering the table, and then just order those 200 and fetch the rows I need?
How would I do that?
Thank you in advance for your help!
UPDATE
My PostgreSQL version is 9.2.1
sample data:
time | device_id | data | data | ...
"2013-03-23 03:58:00-04" | "001EC60018E36" | 66819.59 | 4.203
"2013-03-23 03:59:00-04" | "001EC60018E37" | 64277.22 | 4.234
"2013-03-23 03:59:00-04" | "001EC60018E23" | 46841.75 | 2.141
"2013-03-23 04:00:00-04" | "001EC60018E21" | 69697.38 | 4.906
"2013-03-23 04:00:00-04" | "001EC600192524"| 69452.69 | 2.844
"2013-03-23 04:01:00-04" | "001EC60018E21" | 69697.47 | 5.156
....
See SQLFiddle of this data
So if device_id = 001EC60018E21
I would want the most recent row with that device_id.
It is guaranteed that the last row with that device_id is the row I want, but it may or may not be the last row of the table.
Personally I'd create a composite index on device_id and descending time:
CREATE INDEX table1_deviceid_time ON table1("device_id","time" DESC);
then I'd use a subquery to find the highest time for each device_id and join the subquery results against the main table on device_id and time to find the relevant data, eg:
SELECT t1."device_id", t1."time", t1."data", t1."data1"
FROM Table1 t1
INNER JOIN (
SELECT t1b."device_id", max(t1b."time") FROM Table1 t1b GROUP BY t1b."device_id"
) last_ids("device_id","time")
ON (t1."device_id" = last_ids."device_id"
AND t1."time" = last_ids."time");
See this SQLFiddle.
It might be helpful to maintain a trigger-based materialized view of the highest timestamp for each device ID. However, this will cause concurrency issues if more than one connection can insert data for a given device ID, due to the connections fighting for update locks. It's also a pain if you don't know when new device IDs will appear, as you have to do an upsert - something that's very inefficient and clumsy. Additionally, the extra write load and autovacuum work created by the summary table may not be worth it; it might be better to just pay the price of the more expensive query.
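For illustration, a rough sketch of such a trigger-maintained summary table (names are invented; note that INSERT ... ON CONFLICT needs PostgreSQL 9.5+, not the 9.2 from the question, which is exactly why the answer calls the upsert clumsy):
CREATE TABLE device_latest (
    device_id   text PRIMARY KEY,
    latest_time timestamptz NOT NULL
);

CREATE FUNCTION update_device_latest() RETURNS trigger AS $$
BEGIN
    -- keep the highest time seen so far for this device_id
    INSERT INTO device_latest (device_id, latest_time)
    VALUES (NEW.device_id, NEW."time")
    ON CONFLICT (device_id)
    DO UPDATE SET latest_time = GREATEST(device_latest.latest_time, EXCLUDED.latest_time);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER table1_latest_trg
AFTER INSERT ON table1
FOR EACH ROW EXECUTE PROCEDURE update_device_latest();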
BTW, time is a terrible name for a column because it's a built-in data type name. Use something more appropriate if you can.
The general way to get the "last" row for each device_id looks like this.
select *
from Table1
inner join (select device_id, max(time) max_time
from Table1
group by device_id) T2
on Table1.device_id = T2.device_id
and Table1.time = T2.max_time;
Getting the "last" 200 device_id numbers without using an ORDER BY isn't really practical, but it's not clear why you might want to do that in the first place. If 200 is an arbitrary number, then you can get better performance by taking a subset of the table that's based on an arbitrary time instead.
select *
from Table1
inner join (select device_id, max(time) max_time
from Table1
where time > '2013-03-23 12:03'
group by device_id) T2
on Table1.device_id = T2.device_id
and Table1.time = T2.max_time;

Selecting most recent and specific version in each group of records, for multiple groups

The problem:
I have a table that records data rows in foo. Each time the row is updated, a new row is inserted along with a revision number. The table looks like:
id rev field
1 1 test1
2 1 fsdfs
3 1 jfds
1 2 test2
Note: the last record is a newer version of the first row.
Is there an efficient way to query for the latest version of a record and for a specific version of a record?
For instance, a query for rev=2 would return the 2, 3 and 4th row (not the replaced 1st row though) while a query for rev=1 yields those rows with rev <= 1 and in case of duplicated ids, the one with the higher revision number is chosen (record: 1, 2, 3).
I would prefer not to retrieve the result in an iterative way.
To get only latest revisions:
SELECT * from t t1
WHERE t1.rev =
(SELECT max(rev) FROM t t2 WHERE t2.id = t1.id)
To get a specific revision, in this case 1 (and if an item doesn't have the revision yet the next smallest revision):
SELECT * from foo t1
WHERE t1.rev =
(SELECT max(rev)
FROM foo t2
WHERE t2.id = t1.id
AND t2.rev <= 1)
It might not be the most efficient way to do this, but right now I cannot figure a better way to do this.
Here's an alternative solution that incurs an update cost but is much more efficient for reading the latest data rows as it avoids computing MAX(rev). It also works when you're doing bulk updates of subsets of the table. I needed this pattern to ensure I could efficiently switch to a new data set that was updated via a long running batch update without any windows of time where we had partially updated data visible.
Aging
Replace the rev column with an age column
Create a view of the current latest data with filter: age = 0
To create a new version of your data (a sketch follows this list) ...
INSERT: new rows with age = -1 - This was my slow long running batch process.
UPDATE: UPDATE table-name SET age = age + 1 for all rows in the subset. This switches the view to the new latest data (age = 0) and also ages older data in a single transaction.
DELETE: rows having age > N in the subset - Optionally purge old data
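A small sketch of the pattern (table, view and column names are made up for illustration; T-SQL flavoured, with each DDL statement run in its own batch):
-- versioned storage table; -1 = staged, 0 = current, >0 = older versions
CREATE TABLE foo_history (
    id    int          NOT NULL,
    age   int          NOT NULL DEFAULT -1,
    field nvarchar(10) NOT NULL
);

-- the "latest data" view the application reads (age = 0 filter)
CREATE VIEW foo AS
    SELECT id, field FROM foo_history WHERE age = 0;

-- after the slow batch has inserted its rows with age = -1,
-- publish them and age everything else in a single statement
UPDATE foo_history SET age = age + 1;

-- optionally purge versions older than N (here N = 5)
DELETE FROM foo_history WHERE age > 5;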
Indexing
Create a composite index on age and then id so the view will be nice and fast and can also be used to look up by id. Although this key is effectively unique, it's temporarily non-unique while you're ageing the rows (during UPDATE SET age = age + 1), so you'll need to make it non-unique and, ideally, the clustered index. If you need to find all versions of a given id ordered by age, you may need an additional non-unique index on id then age.
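In SQL Server terms, the indexing for the sketch above might look like this (index names assumed):
-- serves the age = 0 view and per-age scans; non-unique, clustered
CREATE CLUSTERED INDEX IX_foo_history_age_id ON foo_history (age, id);
-- optional: all versions of one id, ordered by age
CREATE NONCLUSTERED INDEX IX_foo_history_id_age ON foo_history (id, age);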
Rollback
Finally ... Lets say you're having a bad day and the batch processing breaks. You can quickly revert to a previous data set version by running:
UPDATE table-name SET age = age - 1 -- Roll back a version
DELETE table-name WHERE age < 0 -- Clean up bad stuff
Existing Table
Suppose you have an existing table that now needs to support aging. You can use this pattern by first renaming the existing table, then add the age column and indexing and then create the view that includes the age = 0 condition with the same name as the original table name.
This strategy may or may not work depending on the nature of technology layers that depended on the original table but in many cases swapping a view for a table should drop in just fine.
Notes
I recommend naming the age column RowAge to indicate this pattern is being used, since it makes clearer that it's a database-related value, and it complements SQL Server's RowVersion naming convention. It also won't conflict with a column or view that needs to return a person's age.
Unlike other solutions, this pattern works for non SQL Server databases.
If the subsets you're updating are very large then this might not be a good solution as your final transaction will update not just the current records but all past version of the records in this subset (which could even be the entire table!) so you may end up locking the table.
This is how I would do it. ROW_NUMBER() requires SQL Server 2005 or later
Sample data:
DECLARE @foo TABLE (
id int,
rev int,
field nvarchar(10)
)
INSERT @foo VALUES
( 1, 1, 'test1' ),
( 2, 1, 'fdsfs' ),
( 3, 1, 'jfds' ),
( 1, 2, 'test2' )
The query:
DECLARE @desiredRev int
SET @desiredRev = 2
SELECT * FROM (
SELECT
id,
rev,
field,
ROW_NUMBER() OVER (PARTITION BY id ORDER BY rev DESC) rn
FROM @foo WHERE rev <= @desiredRev
) numbered
WHERE rn = 1
The inner SELECT returns all relevant records, and within each id group (that's the PARTITION BY), computes the row number when ordered by descending rev.
The outer SELECT just selects the first member (so, the one with highest rev) from each id group.
Output when @desiredRev = 2:
id rev field rn
----------- ----------- ---------- --------------------
1 2 test2 1
2 1 fdsfs 1
3 1 jfds 1
Output when @desiredRev = 1:
id rev field rn
----------- ----------- ---------- --------------------
1 1 test1 1
2 1 fdsfs 1
3 1 jfds 1
If you want all the latest revisions of each field, you can use
SELECT C.rev, C.field FROM (
SELECT MAX(A.rev) AS rev, A.id
FROM yourtable A
GROUP BY A.id)
AS B
INNER JOIN yourtable C
ON B.id = C.id AND B.rev = C.rev
In the case of your example, that would return
rev field
1 fsdfs
1 jfds
2 test2
SELECT
MaxRevs.id,
revision.field
FROM
(SELECT
id,
MAX(rev) AS MaxRev
FROM revision
GROUP BY id
) MaxRevs
INNER JOIN revision
ON MaxRevs.id = revision.id AND MaxRevs.MaxRev = revision.rev
SELECT foo.* from foo
left join foo as later
on foo.id=later.id and later.rev>foo.rev
where later.id is null;
How about this?
select id, max(rev), field from foo group by id
For querying specific revision e.g. revision 1,
select id, max(rev), field from foo where rev <= 1 group by id

Get last record of a table in Postgres

I'm using Postgres and cannot manage to get the last record of my table:
my_query = client.query("SELECT timestamp,value,card from my_table");
How can I do that, knowing that timestamp is a unique identifier of the record?
If under "last record" you mean the record which has the latest timestamp value, then try this:
my_query = client.query("
SELECT TIMESTAMP,
value,
card
FROM my_table
ORDER BY TIMESTAMP DESC
LIMIT 1
");
you can use
SELECT timestamp, value, card
FROM my_table
ORDER BY timestamp DESC
LIMIT 1
assuming you want also to sort by timestamp?
Easy way: ORDER BY in conjunction with LIMIT
SELECT timestamp, value, card
FROM my_table
ORDER BY timestamp DESC
LIMIT 1;
However, LIMIT is not standard, and, as stated by Wikipedia, "The SQL standard's core functionality does not explicitly define a default sort order for Nulls." Finally, only one row is returned when several records share the maximum timestamp.
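If standard syntax matters, the same query can be written with FETCH FIRST, which PostgreSQL supports since 8.4:
SELECT timestamp, value, card
FROM my_table
ORDER BY timestamp DESC
FETCH FIRST 1 ROW ONLY;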
Relational way:
The typical way of doing this is to check that no row has a higher timestamp than any row we retrieve.
SELECT timestamp, value, card
FROM my_table t1
WHERE NOT EXISTS (
SELECT *
FROM my_table t2
WHERE t2.timestamp > t1.timestamp
);
It is my favorite solution, and the one I tend to use. The drawback is that our intent is not immediately clear when having a glimpse on this query.
Instructive way: MAX
To circumvent this, one can use MAX in the subquery instead of the correlation.
SELECT timestamp, value, card
FROM my_table
WHERE timestamp = (
SELECT MAX(timestamp)
FROM my_table
);
But without an index, two passes over the data will be necessary, whereas the previous query can find the solution with only one scan. That said, we should not take performance into consideration when designing queries unless necessary, as we can expect optimizers to improve over time. However, this particular kind of query is used quite often.
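In practice a plain index on the timestamp column (a one-line sketch, index name assumed) lets either form avoid a full scan:
CREATE INDEX my_table_timestamp_idx ON my_table (timestamp);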
Show off way: Windowing functions
I don't recommend doing this, but maybe you can make a good impression on your boss or something ;-)
SELECT DISTINCT
first_value(timestamp) OVER w,
first_value(value) OVER w,
first_value(card) OVER w
FROM my_table
WINDOW w AS (ORDER BY timestamp DESC);
Actually this has the virtue of showing that a simple query can be expressed in a wide variety of ways (there are several others I can think of), and that picking one or the other form should be done according to several criteria such as:
portability (Relational/Instructive ways)
efficiency (Relational way)
expressiveness (Easy/Instructive way)
If your table has no id column (such as an auto-incrementing integer) and no timestamp, you can still get the last row of a table with the following query:
select * from <tablename> offset ((select count(*) from <tablename>)-1)
For example, that could allow you to search through an updated flat file, find/confirm where the previous version ended, and copy the remaining lines to your table.
The last inserted record can be queried using this assuming you have the "id" as the primary key:
SELECT timestamp,value,card FROM my_table WHERE id=(select max(id) from my_table)
Assuming every new row inserted will use the highest integer value for the table's id.
If you accept a tip, create an id column in this table of type serial. The default of this field will be:
nextval('table_name_field_seq'::regclass).
Then you can query for the last inserted record. Using your example:
pg_query($connection, "SELECT currval('table_name_field_seq') AS id;");
I hope this tip helps you.
To get the last row:
Get the last row in sorted order (in case the table has a column specifying time / a primary key):
1. Using the LIMIT clause
SELECT * FROM USERS ORDER BY CREATED_TIME DESC LIMIT 1;
2. Using the FETCH clause - Reference
SELECT * FROM USERS ORDER BY CREATED_TIME DESC FETCH FIRST ROW ONLY;
Get the last row in insertion order (in case the table has no column specifying time or any unique identifier):
3. Using the CTID system column, where ctid represents the physical location of the row in the table - Reference
SELECT * FROM USERS WHERE CTID = (SELECT MAX(CTID) FROM USERS);
Consider the following table,
userid |username | createdtime |
1 | A | 1535012279455 |
2 | B | 1535042279423 | //as per created time, this is the last row
3 | C | 1535012279443 |
4 | D | 1535012212311 |
5 | E | 1535012254634 | //as per insertion order, this is the last row
Queries 1 and 2 return:
userid |username | createdtime |
2 | B | 1535042279423 |
while query 3 returns:
userid |username | createdtime |
5 | E | 1535012254634 |
Note: when an existing row is updated, Postgres removes the old row version and inserts the updated data as a new row in the table. So the ctid-based query above returns the tuple whose data was modified most recently.
Now, updating a row using
UPDATE USERS SET USERNAME = 'Z' WHERE USERID='3'
the table becomes as,
userid |username | createdtime |
1 | A | 1535012279455 |
2 | B | 1535042279423 |
4 | D | 1535012212311 |
5 | E | 1535012254634 |
3 | Z | 1535012279443 |
Now the query 3 returns,
userid |username | createdtime |
3 | Z | 1535012279443 |
Use the following
SELECT timestamp, value, card
FROM my_table
ORDER BY timestamp DESC
LIMIT 1
These are all good answers but if you want an aggregate function to do this to grab the last row in the result set generated by an arbitrary query, there's a standard way to do this (taken from the Postgres wiki, but should work in anything conforming reasonably to the SQL standard as of a decade or more ago):
-- Create a function that always returns the last non-NULL item
CREATE OR REPLACE FUNCTION public.last_agg ( anyelement, anyelement )
RETURNS anyelement LANGUAGE SQL IMMUTABLE STRICT AS $$
SELECT $2;
$$;
-- And then wrap an aggregate around it
CREATE AGGREGATE public.LAST (
sfunc = public.last_agg,
basetype = anyelement,
stype = anyelement
);
It's usually preferable to do select ... limit 1 if you have a reasonable ordering, but this is useful if you need to do this within an aggregate and would prefer to avoid a subquery.
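Usage could look like this (a sketch; the ORDER BY inside the aggregate call is what defines which row counts as 'last'):
SELECT public.last(value ORDER BY timestamp) AS last_value,
       public.last(card  ORDER BY timestamp) AS last_card
FROM my_table;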
See also this question for a case where this is the natural answer.
The column you sort on (in descending order) plays the important role here:
select <COLUMN_NAME1, COLUMN_NAME2> from <TABLENAME> ORDER BY <COLUMN_NAME THAT MENTIONS TIME> DESC LIMIT 1;
For example: the table below (user_details) contains the column 'created_at', which holds the timestamp for each row.
SELECT userid, username FROM user_details ORDER BY created_at DESC LIMIT 1;
In Oracle SQL,
select * from (select row_number() over (order by rowid desc) rn, emp.* from emp) where rn=1;
select * from table_name LIMIT 1;

How to select 10 rows below the result returned by the SQL query?

Here is the SQL table:
KEY | NAME | VALUE
---------------------
13b | Jeffrey | 23.5
F48 | Jonas | 18.2
2G8 | Debby | 21.1
Now, if I type:
SELECT *
FROM table
WHERE VALUE = 23.5
I will get the first row.
What I need to accomplish is to get the first and the next two rows below. Is there a way to do it?
Columns are not sorted and WHERE condition doesn't participate in the selection of the rows, except for the first one. I just need the two additional rows below the returned one - the ones that were entered after the one which has been returned by the SELECT query.
Without a date column or an auto-increment column, you can't reliably determine the order the records were entered.
The physical order with which rows are stored in the table is non-deterministic.
You need to define an order to the results to do this. There is no guaranteed order to the data otherwise.
If by "the next 2 rows after" you mean "the next 2 records that were inserted into the table AFTER that particular row", you will need to use an auto incrementing field or a "date create" timestamp field to do this.
If each row has an ID column that is unique and auto incrementing, you could do something like:
SELECT * FROM table WHERE id > (SELECT id FROM table WHERE value = 23.5)
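To cap that at the matched row plus the next two (a sketch, assuming id reflects insertion order; LIMIT syntax shown, use TOP 3 or FETCH FIRST 3 ROWS ONLY depending on the database):
SELECT *
FROM table
WHERE id >= (SELECT id FROM table WHERE value = 23.5)
ORDER BY id
LIMIT 3;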
If I understand correctly, you're looking for something like:
SELECT * FROM table WHERE value <> 23.5
You can obviously write a program to do that, but I am assuming you want a query. What about using a UNION? You would also have to create a new column called value_id or something along those lines which is incremented sequentially (probably using a sequence). The idea is that value_id is incremented on every insert, and using it you can write a WHERE clause to return the remaining two rows you want.
For example:
Select * from table where value = 23.5
Union
Select * from table where value_id > 2 limit 2;
LIMIT 2 because you already got the first row in the first query.
You need an order if you want to be able to think in terms of "before" and "after".
Assuming you have one you can use ROW_NUMBER() (see more here http://msdn.microsoft.com/en-us/library/ms186734.aspx) and do something like:
WITH MyTable AS
(select row_number() over (order by key) as n, key, name, value
from table)
select key, name, value
from MyTable
where n >= (select n from MyTable where value = 23.5)