Unable to use stream UDFs on MAPKEYS index - aerospike

I have a bin with a map as its datatype and created a secondary index on MAPKEYS. Now I want to run a UDF with a filter on the MAPKEYS index, but it gives the error AEROSPIKE_ERR_INDEX_NOT_FOUND.
This is my aql query:
aql> aggregate test.check_password('hii') on test.user in MAPKEYS where pids = 'test2'
Error: (201) AEROSPIKE_ERR_INDEX_NOT_FOUND
whereas the normal query works
aql> select * from test.user in MAPKEYS where pids = 'test2'
returns some data
Sample data inserted for testing; in the actual case it will be a Map of String to Object:

aql> INSERT INTO test.user (PK, pids, test2, test1) VALUES ('k1', MAP('{"test1": "t1", "test2": "t2", "test3":"t3", "test4":"t4", "test5":"t5"}'), "t2bin", "t1bin")
aql> INSERT INTO test.user (PK, pids, test2, test1) VALUES ('k2', MAP('{"test1": "t1", "test3":"t3", "test4":"t4", "test5":"t5"}'), "t2b", "t1b")
aql> INSERT INTO test.user (PK, pids, test2, test1) VALUES ('k3', MAP('{"test1": "t1", "test2":"t22", "test4":"t4", "test5":"t5"}'), "t2b", "t1b")
aql> CREATE MAPKEYS INDEX pidIndex ON test.user (pids) STRING
OK, 1 index added.
aql> select * from test.user in MAPKEYS where pids="test2"
+--------------------------------------------------------------------------------+---------+---------+
| pids | test2 | test1 |
+--------------------------------------------------------------------------------+---------+---------+
| MAP('{"test2":"t22", "test4":"t4", "test5":"t5", "test1":"t1"}') | "t2b" | "t1b" |
| MAP('{"test2":"t2", "test3":"t3", "test4":"t4", "test5":"t5", "test1":"t1"}') | "t2bin" | "t1bin" |
+--------------------------------------------------------------------------------+---------+---------+
I inserted three records in your format; one (k2) did not have the test2 key in its map. I then created the secondary index on the map keys and ran the query, which gave me the desired result.
AGGREGATE is used to run a stream User Defined Function on this result set of records. What is the UDF code that you want to run?
(AGGREGATE test.check_password("hii") ... implies you have a registered test.lua module with a check_password() stream function that takes a string argument.)
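For illustration, here is a minimal sketch of what such a stream UDF could look like. This is an assumption, not the asker's actual code: the 'password' bin and the projected bins are made up for the example.

-- test.lua
-- Sketch only: the real check_password() is not shown in the question.
local function password_matches(pwd)
  return function(rec)
    -- hypothetical check against a 'password' bin
    return rec['password'] == pwd
  end
end

local function to_result(rec)
  -- project the bins of interest into a map
  return map{ pids = rec['pids'], test1 = rec['test1'] }
end

function check_password(stream, pwd)
  return stream : filter(password_matches(pwd)) : map(to_result)
end

The module has to be registered before the aggregation can find it:
aql> register module 'test.lua'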
You must create the secondary index on the map keys first; the error means the index was not found. To check whether you have the index, you can do:
aql> show indexes
+--------+--------+-----------+--------+-------+------------+--------+------------+----------+
| ns | bin | indextype | set | state | indexname | path | sync_state | type |
+--------+--------+-----------+--------+-------+------------+--------+------------+----------+
| "test" | "pids" | "MAPKEYS" | "user" | "RW" | "pidIndex" | "pids" | "synced" | "STRING" |
+--------+--------+-----------+--------+-------+------------+--------+------------+----------+
1 row in set (0.000 secs)
OK

Related

How can I update a list attribute in aql (aerospike)?

I have an aql table that looks like this:
+-----------+------------------------------------------+
| id        | range                                    |
+-----------+------------------------------------------+
| "testId"  | LIST('[{"start":"1000", "end":"2999"}]') |
+-----------+------------------------------------------+
I've been trying, without success, to update the range using aql.
I tried this command:
insert into dsc.testTable (pk,'range')
values ('testId', LIST('[{"start":"500", "end":"1000"}]'))
But no luck. Help?
In your command, replace LIST with JSON.
See my example on namespace test, set demo below:
aql> insert into test.demo (pk,'range') values ('testId', json('[{"start":"500", "end":"1000"}]'))
OK, 1 record affected.
aql> select * from test.demo where pk='testId'
+-----------------------------------------+
| range |
+-----------------------------------------+
| LIST('[{"start":"500", "end":"1000"}]') |
+-----------------------------------------+
1 row in set (0.001 secs)
OK

postgres insert data from an other table inside array type columns

I have two tables on Postgres 11, like so, with some ARRAY-type columns.
CREATE TABLE test (
    id INT UNIQUE,
    category TEXT NOT NULL,
    quantitie NUMERIC,
    quantities INT[],
    dates INT[]
);
INSERT INTO test (id, category, quantitie, quantities, dates) VALUES (1, 'cat1', 33, ARRAY[66], ARRAY[123678]);
INSERT INTO test (id, category, quantitie, quantities, dates) VALUES (2, 'cat2', 99, ARRAY[22], ARRAY[879889]);
CREATE TABLE test2 (
    idweb INT UNIQUE,
    quantities INT[],
    dates INT[]
);
INSERT INTO test2 (idweb, quantities, dates) VALUES (1, ARRAY[34], ARRAY[8776]);
INSERT INTO test2 (idweb, quantities, dates) VALUES (3, ARRAY[67], ARRAY[5443]);
I'm trying to update data from table test2 into table test, but only on rows with the same id, appending to the arrays of table test while keeping the original values.
I'm using INSERT ... ON CONFLICT; how do I update only the 2 columns quantities and dates?
Running the SQL below, I also get an error whose origin I don't understand:
Schema Error: error: column "quantitie" is of type numeric but expression is of type integer[]
INSERT INTO test (SELECT * FROM test2 WHERE idweb IN (SELECT id FROM test))
ON CONFLICT (id)
DO UPDATE
SET
quantities = array_cat(EXCLUDED.quantities, test.quantities),
dates = array_cat(EXCLUDED.dates, test.dates);
https://www.db-fiddle.com/f/rs8BpjDUCciyZVwu5efNJE/0
Is there a better way to update table test from table test2, or where am I going wrong with the SQL?
Update, to show the result needed in table test:
Schema (PostgreSQL v11)
| id | quantitie | quantities | dates | category |
| --- | --------- | ---------- | ----------- | --------- |
| 2 | 99 | 22 | 879889 | cat2 |
| 1 | 33 | 34,66 | 8776,123678 | cat1 |
Basically, your query fails because the structures of the tables do not match - so you cannot insert into test select * from test2.
You could work around this by adding "fake" columns to the select list, like so:
insert into test
select idweb, 'foo', 0, quantities, dates from test2 where idweb in (select id from test)
on conflict (id)
do update set
quantities = array_cat(excluded.quantities, test.quantities),
dates = array_cat(excluded.dates, test.dates);
But this looks much more convoluted than needed. Essentially, you want an update statement, so I would just recommend:
update test
set
dates = test2.dates || test.dates,
quantities = test2.quantities || test.quantities
from test2
where test.id = test2.idweb
Note that this uses the || concatenation operator instead of array_cat(); it is shorter to write.
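For instance, both spellings produce the same result:

SELECT array_cat(ARRAY[34], ARRAY[66]);  -- {34,66}
SELECT ARRAY[34] || ARRAY[66];           -- {34,66}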
Demo on DB Fiddle:
id | category | quantitie | quantities | dates
---+----------+-----------+------------+--------------
 2 | cat2     |        99 | {22}       | {879889}
 1 | cat1     |        33 | {34,66}    | {8776,123678}

Do UPSERT based on specific value of JSON in Postgres 10

I have a Postgres table messages as follows:
Column  | Type                     | Collation | Nullable
--------+--------------------------+-----------+----------
id      | integer                  |           | not null
message | jsonb                    |           |
date    | timestamp with time zone |           | not null

Sample data:

id | message                       | date
---+-------------------------------+-------------------------------
 1 | {"name":"alpha", "pos":"x"}   | 2020-02-11 12:31:44.658667+00
 2 | {"name":"bravo", "pos":"y"}   | 2020-02-11 12:32:43.123678+00
 3 | {"name":"charlie", "pos":"z"} | 2020-02-11 12:38:37.623535+00
What I would like to do is an UPSERT based on the value of the name key, i.e., if there is an insert with the same name value, then the other value pos is updated; otherwise a new entry is created.
I did CREATE UNIQUE INDEX message_name ON messages((message->>'name'));
I found INSERT ... ON CONFLICT in Postgres 9.5+, but I can't understand how to use the unique index with it.
I don't know if this is the correct approach to do it in the first place so if there is a better way to do this, I would appreciate the input.
You need to repeat the expression from the index:
insert into messages (message)
values ('{"name":"alpha", "pos":"new pos"}')
on conflict ((message->>'name'))
do update
set message = jsonb_set(messages.message, '{pos}'::text[], excluded.message -> 'pos', true)
;
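To see what jsonb_set() does in isolation (values borrowed from the question; the fourth argument asks for the key to be created when missing):

SELECT jsonb_set('{"name":"alpha", "pos":"x"}'::jsonb,
                 '{pos}'::text[], '"new pos"'::jsonb, true);
-- {"name": "alpha", "pos": "new pos"}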
If you have more keys in the JSON and want to replace (or add) all of them, you can use this:
insert into messages (message)
values ('{"name":"alpha", "pos":"new pos", "some key": 42}')
on conflict ((message->>'name'))
do update
set message = messages.message || (excluded.message - 'name')
;
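Here || merges the two documents (keys from the right-hand side win) after - 'name' strips the conflict key; in isolation:

SELECT '{"name":"alpha", "pos":"x"}'::jsonb
       || ('{"name":"alpha", "pos":"new pos", "some key": 42}'::jsonb - 'name');
-- {"name": "alpha", "pos": "new pos", "some key": 42}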

Querying on key of a Map in Aerospike

I'm trying to store a map in Aerospike and fetch the data based on the key of the map.
First I created an index on the bin where I'm storing the map:
aql> create mapkeys index status on test.myset (state) String
aql> show indexes
+--------+---------+-----------+---------+-------+-----------+---------+------------+----------+
| ns | bin | indextype | set | state | indexname | path | sync_state | type |
+--------+---------+-----------+---------+-------+-----------+---------+------------+----------+
| "test" | "state" | "MAPKEYS" | "myset" | "RW" | "status" | "state" | "synced" | "STRING" |
+--------+---------+-----------+---------+-------+-----------+---------+------------+----------+
1 row in set (0.000 secs)
OK
Then I used the Java client to store the map:
AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);
WritePolicy writePolicy = new WritePolicy();
writePolicy.timeout = 500;
for (int i = 1; i < 10; i++) {
    Key key = new Key("test", "myset", "" + i);
    client.delete(writePolicy, key);
    HashMap<String, String> map = new HashMap<String, String>();
    map.put("key1", "string1");
    map.put("key2", "string2");
    map.put("key3", "string3");
    Bin bin = new Bin("state", map); // the map goes into the indexed 'state' bin
    client.put(writePolicy, key, bin);
}
I checked the data through aql, and it is clearly present:
aql> select * from test.myset
+--------------------------------------------------------+
| state |
+--------------------------------------------------------+
| {"key1":"string1", "key2":"string2", "key3":"string3"} |
| {"key1":"string1", "key2":"string2", "key3":"string3"} |
| {"key1":"string1", "key2":"string2", "key3":"string3"} |
| {"key1":"string1", "key2":"string2", "key3":"string3"} |
| {"key1":"string1", "key2":"string2", "key3":"string3"} |
| {"key1":"string1", "key2":"string2", "key3":"string3"} |
| {"key1":"string1", "key2":"string2", "key3":"string3"} |
| {"key1":"string1", "key2":"string2", "key3":"string3"} |
| {"key1":"string1", "key2":"string2", "key3":"string3"} |
| {"key1":"string1", "key2":"string2", "key3":"string3"} |
+--------------------------------------------------------+
10 rows in set (0.019 secs)
Now when I try to query based on the index I created, it gives:
aql> select * from test.myset where status = 'key1'
0 rows in set (0.000 secs)
Error: (204) AEROSPIKE_ERR_INDEX
aql> select * from test.myset where state = 'key1'
0 rows in set (0.000 secs)
Error: (201) AEROSPIKE_ERR_INDEX_NOT_FOUND
Can someone help me with this? I searched for that error but found no information. Thank you.
Secondary indexes on map keys, map values, and lists are supported by Aerospike, in addition to the plain Numeric, String, and Geo2DSphere index types.
For your scenario, you can query on the map key as follows:
select * from test.myset in mapkeys where state='key1'
This should return the results.
In AQL, if you type help, you should see the following query forms:
SELECT <bins> FROM <ns>[.<set>]
SELECT <bins> FROM <ns>[.<set>] WHERE <bin> = <value>
SELECT <bins> FROM <ns>[.<set>] WHERE <bin> BETWEEN <lower> AND <upper>
SELECT <bins> FROM <ns>[.<set>] WHERE PK = <key>
SELECT <bins> FROM <ns>[.<set>] IN <indextype> WHERE <bin> = <value>
SELECT <bins> FROM <ns>[.<set>] IN <indextype> WHERE <bin> BETWEEN <lower> AND <upper>
Similarly, you can run a query on the map values as well; a sketch follows.
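For example (a sketch only; the index name mapValIndex is made up here), create a MAPVALUES index on the same bin and query one of the stored values:

aql> CREATE MAPVALUES INDEX mapValIndex ON test.myset (state) STRING
aql> select * from test.myset in mapvalues where state = 'string1'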
Update:
As of Aerospike 3.8.1, secondary indexes on List and Map are officially supported.
Original Response:
Queries by secondary index on map keys, map values, or list values are not officially supported yet.
That said, the functionality and syntax are somewhat available. You need to:
Create a secondary index with type MAPKEYS, MAPVALUES or LIST (you're using type STRING at the moment)
Select as follows (you're missing the IN MAPKEYS part):
SELECT * FROM namespace.setname IN MAPKEYS WHERE bin = 'keyValue'
The query syntax, as well as some other bits, is available if you type help while in the AQL console.

"Invalid combination field1 field2 field3" error message while trying to insert record into postgresql database

I'm trying to restore a database from server A to server B. For some reason, the import fails on 3 specific INSERT statements:
INSERT INTO tbl1 (device_id, group_name, param_id, value) VALUES (15, 'regX', 13, '4323');
INSERT INTO tbl1 (device_id, group_name, param_id, value) VALUES (15, 'device', 1, 'aatd');
INSERT INTO tbl1 (device_id, group_name, param_id, value) VALUES (15, 'regX', 14, 'ttdf');
The error returned is:
ERROR: Invalid combination of device, group, and parameter
It's the same error for each record.
Here's what the table definition looks like:
testdb=# \d+ tbl1;
Table "public.tbl1"
Column | Type | Modifiers | Storage | Stats target | Description
------------+------------------------+-----------+----------+--------------+-------------
device_id | integer | | plain | |
group_name | character varying(255) | | extended | |
param_id | integer | | plain | |
value | character varying(255) | | extended | |
Other records that look similar work, with no issues. For example:
INSERT INTO tbl1 (device_id, group_name, param_id, value) VALUES (103, 'regX', 13, '130');
In fact, the database / import file has over 900 records and these are the only 3 that fail.
How I created the dump file / How I'm importing the dump:
To export:
pg_dump --create -U postgres origdb > outputfile.sql
And then on the new server, to import:
psql -f outputfile.sql -U postgres
What I've Tried So Far:
I've confirmed that in the original database, these records exist, and match what was generated by the dump command.
Here's what the data looks like in the original database:
origdb=# select * from tbl1 where device_id = 15;
device_id | group_name | param_id | value
-----------+------------+----------+--------------
15 | regX | 13 | 4323
15 | device | 1 | aatd
15 | regX | 14 | ttdf
(3 rows)
I've tried to import these records manually on the new server vs. importing the entire dump file. I get the same error message.
I've also been checking to see what primary keys have been defined:
testdb=# SELECT
pg_attribute.attname,
format_type(pg_attribute.atttypid, pg_attribute.atttypmod)
FROM pg_index, pg_class, pg_attribute, pg_namespace
WHERE
pg_class.oid = 'tbl1'::regclass AND
indrelid = pg_class.oid AND
nspname = 'public' AND
pg_class.relnamespace = pg_namespace.oid AND
pg_attribute.attrelid = pg_class.oid AND
pg_attribute.attnum = any(pg_index.indkey)
AND indisprimary;
attname | format_type
---------+-------------
(0 rows)
Questions:
I'm not quite sure where it's getting the names "device, group, and parameter" in the error message. What do these correspond to? I assume field names, but how can I verify this?
Any suggestions on what else to check to troubleshoot? I'm hunting around for any foreign keys on this table and the like, but any suggestions would be appreciated.
I didn't make this database, so I'm not sure of all the relations.
Thanks.
This looks like a trigger that blocks these specific inserts and raises a custom message.
The trigger may be disabled in the original database but not in the new one.
See your user-created triggers with this command:
SELECT * FROM pg_trigger;
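pg_trigger also lists internal constraint triggers, so a filtered variant is usually easier to read. A sketch:

-- only user-created triggers, with the table each one is attached to
SELECT tgname, tgrelid::regclass AS table_name, tgenabled
FROM pg_trigger
WHERE NOT tgisinternal;
-- if appropriate, a trigger can then be disabled with:
-- ALTER TABLE tbl1 DISABLE TRIGGER trigger_name;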