How can I update a list attribute in aql (Aerospike)?

I have an Aerospike set that looks like this in aql:
+----------+------------------------------------------+
| id       | range                                    |
+----------+------------------------------------------+
| "testId" | LIST('[{"start":"1000", "end":"2999"}]') |
+----------+------------------------------------------+
I've been trying, unsuccessfully, to update the range bin using aql.
I tried this command:
insert into dsc.testTable (pk,'range')
values ('testId', LIST('[{"start":"500", "end":"1000"}]'))
But no luck. Help?

In your command, replace LIST with JSON.
See my example on namespace test, set demo below:
aql> insert into test.demo (pk,'range') values ('testId', json('[{"start":"500", "end":"1000"}]'))
OK, 1 record affected.
aql> select * from test.demo where pk='testId'
+-----------------------------------------+
| range                                   |
+-----------------------------------------+
| LIST('[{"start":"500", "end":"1000"}]') |
+-----------------------------------------+
1 row in set (0.001 secs)
OK
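Applied to the command from the question (keeping the dsc.testTable names), the fix looks like this; since Aerospike writes behave as upserts by default, re-inserting with the same primary key overwrites the existing range bin, which is effectively the update:
aql> insert into dsc.testTable (pk, 'range') values ('testId', JSON('[{"start":"500", "end":"1000"}]'))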

Related

TiDB cannot fuzzy query with LIKE '_' for a double-byte character?

In my program, I want to look up "测试" with:
select * from test where name like '测_';
In my testing I found that MySQL matches it, but TiDB doesn't. Why?
This seems to be working fine for me. What character set is your table, column and connection? What TiDB version are you using?
mysql> CREATE TABLE t1 (id char(2) character set utf8mb4 primary key);
Query OK, 0 rows affected (0.18 sec)
mysql> INSERT INTO t1 VALUES('测试'),('测x');
Query OK, 2 rows affected (0.12 sec)
Records: 2 Duplicates: 0 Warnings: 0
mysql> SELECT * FROM t1 WHERE id LIKE '测_';
+--------+
| id     |
+--------+
| 测x    |
| 测试   |
+--------+
2 rows in set (0.11 sec)
mysql> SELECT tidb_version();
+--------------------------------------------------------------------+
| tidb_version()                                                      |
+--------------------------------------------------------------------+
| Release Version: v5.4.0
Edition: Community
Git Commit Hash: 55f3b24c1c9f506bd652ef1d162283541e428872
Git Branch: heads/refs/tags/v5.4.0
UTC Build Time: 2022-01-25 08:39:26
GoVersion: go1.16.4
Race Enabled: false
TiKV Min Version: v3.0.0-60965b006877ca7234adaced7890d7b029ed1306
Check Table Before Drop: false |
+--------------------------------------------------------------------+
1 row in set (0.11 sec)
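If the pattern still fails for you, the character-set questions above can be checked directly; a quick sketch (table name taken from the question). With a binary or single-byte column or connection, '_' matches one byte rather than one character, which is a common cause of this symptom.
mysql> SHOW CREATE TABLE test;
mysql> SHOW VARIABLES LIKE 'character_set%';
mysql> SHOW VARIABLES LIKE 'collation%';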

HIVE SQL: Select rows whose values contain string in a column

I want to select rows whose values contain a string in a column.
For example, I want to select all rows whose values contain the string '123' in the column 'app'.
table:
app         id
123helper   xdas
323helper   fafd
2123helper  dsaa
3123helper  fafd
md5321      asdx
md5123      dsad
result:
app         id
123helper   xdas
2123helper  dsaa
3123helper  fafd
md5123      dsad
I am not familiar with SQL queries.
Could anyone help me?
Thanks in advance.
In a number of ways:
like:
select * from table
where app like '%123%'
rlike:
...
where app rlike '123'
instr:
...
where instr(app, '123')>0
locate:
...
where locate('123', app)>0
Or invent your own way. Read the manual: String Functions and Operators. A combined sketch of the first two options follows below.
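For illustration only (the table name is a placeholder), the LIKE and RLIKE variants applied to this question's data would look like:
-- wildcard substring match
select app, id from your_table where app like '%123%';
-- regular-expression match; no wildcards needed
select app, id from your_table where app rlike '123';
Both return the four rows containing '123'.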
Try the following using like:
select *
from yourTable
where app like '%123%'
Output:
| app        | id   |
| ---------- | ---- |
| 123helper  | xdas |
| 2123helper | dsaa |
| 3123helper | fafd |
| md5123     | dsad |
Please use the query below:
select app, id from table where app like '%123%';
A few additional notes (each is illustrated against the sample data right after this list):
like '123%' --> Starts with 123
like '%123' --> Ends with 123
like '%123%'--> Contains 123 anywhere in the string
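Run against the sample data in the question (placeholder table name), the three patterns behave like this:
select app from your_table where app like '123%';   -- only 123helper
select app from your_table where app like '%123';   -- only md5123
select app from your_table where app like '%123%';  -- 123helper, 2123helper, 3123helper, md5123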

Unable to use stream UDFs on MAPKEYS index

I have a bin with map as its datatype and created a secondary index on MAPKEYS. Now I want to run a UDF with a filter on the MAPKEYS index. It gives the error AEROSPIKE_ERR_INDEX_NOT_FOUND.
This is my aql query:
aql> aggregate test.check_password('hii') on test.user in MAPKEYS where pids = 'test2'
Error: (201) AEROSPIKE_ERR_INDEX_NOT_FOUND
whereas the normal query works
aql> select * from test.user in MAPKEYS where pids = 'test2'
returns some data
Sample data inserted for testing; in the real case it will be a Map of String to Object:
aql> INSERT INTO test.user (PK, pids, test2, test1) VALUES ('k1', MAP('{"test1": "t1", "test2": "t2", "test3":"t3", "test4":"t4", "test5":"t5"}'), "t2bin", "t1bin")
aql> INSERT INTO test.user (PK, pids, test2, test1) VALUES ('k2', MAP('{"test1": "t1", "test3":"t3", "test4":"t4", "test5":"t5"}'), "t2b", "t1b")
aql> INSERT INTO test.user (PK, pids, test2, test1) VALUES ('k3', MAP('{"test1": "t1", "test2":"t22", "test4":"t4", "test5":"t5"}'), "t2b", "t1b")
aql> CREATE MAPKEYS INDEX pidIndex ON test.user (pids) STRING
OK, 1 index added.
aql> select * from test.user in MAPKEYS where pids="test2"
+-------------------------------------------------------------------------------+---------+---------+
| pids                                                                          | test2   | test1   |
+-------------------------------------------------------------------------------+---------+---------+
| MAP('{"test2":"t22", "test4":"t4", "test5":"t5", "test1":"t1"}')              | "t2b"   | "t1b"   |
| MAP('{"test2":"t2", "test3":"t3", "test4":"t4", "test5":"t5", "test1":"t1"}') | "t2bin" | "t1bin" |
+-------------------------------------------------------------------------------+---------+---------+
I inserted three records in your format; one of them (k2) did not have the test2 key in its map. I then created the secondary index on the MAPKEYS and ran the query, and it gave me the desired result.
AGGREGATE is used to run a stream User Defined Function on this result set of records. What is the UDF code that you want to run?
(AGGREGATE test.check_password('hii') ... implies you have a test.lua file which has a check_password() function that takes a string argument.)
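One more thing worth checking (a sketch; the file path is an assumption, the rest uses the names from the question): the Lua module has to be registered with the server before AGGREGATE can find check_password().
aql> REGISTER MODULE './test.lua'
aql> SHOW MODULES
aql> AGGREGATE test.check_password('hii') ON test.user IN MAPKEYS WHERE pids = 'test2'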
You must create the secondary index on the map keys first; the error says the index was not found. To check whether you have the index, you can do:
aql> show indexes
+--------+--------+-----------+--------+-------+------------+--------+------------+----------+
| ns     | bin    | indextype | set    | state | indexname  | path   | sync_state | type     |
+--------+--------+-----------+--------+-------+------------+--------+------------+----------+
| "test" | "pids" | "MAPKEYS" | "user" | "RW" | "pidIndex" | "pids" | "synced" | "STRING" |
+--------+--------+-----------+--------+-------+------------+--------+------------+----------+
1 row in set (0.000 secs)
OK

How to spool three columns from a table and check the summation of the third column in a UNIX shell script

I have created a query which returns three columns. I fetch the details into a spool file and, based on that, check whether the sum of all the (numerical) values in the third column is 0 or not. If it is not 0, then the complete result should be included in the mail.
The issues I am facing are:
1) When I write a simple SELECT query for the three columns, the results do not come out as three columns with a single row per record. Instead, each column value is displayed on its own row.
i.e. in TOAD the result is:
| Column_name_1 | Column_name_2 | Column_name_3 |
+---------------+---------------+---------------+
| text_1        | text_2        | num_1         |
| text_3        | text_4        | num_2         |
But in the spool file, I am getting the result as:
|text_1 |
|text_2 |
|num_1 |
| text_3 |
| text_4 |
| num_2 |
2) The other issue is that I am not getting any header in the spool file.
Can anyone please look into this and let me know how to proceed?
Try adding SET RECSEP OFF to fix issue 1; that will solve your problem with record separation.
Add SET HEADING ON to print the column headers.
See this link for more background.
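Putting it together, a minimal SQL*Plus sketch; the spool path, the LINESIZE/PAGESIZE values and the query itself are placeholders, not taken from the question:
REM sketch only: adjust sizes, path and query to your environment
SET RECSEP OFF
SET HEADING ON
SET LINESIZE 200
SET PAGESIZE 50000
SET TRIMSPOOL ON
SPOOL /tmp/report.lst
SELECT column_name_1, column_name_2, column_name_3 FROM your_table;
SPOOL OFF
LINESIZE needs to be wide enough for all three columns to fit on one line, and a large PAGESIZE keeps the header from repeating on every page.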

No Changed Rows Produced by this MySQL UPDATE query. Why?

I am at a loss this morning. Maybe my coffee was drugged? Simple problem: get the existing ids into this temp table for an export.
Tables like so:
Table person
+--------+-----------------------+
| id     | email                 |
+--------+-----------------------+
| 142755 | xxxxxxx#xxxxxxxxx.com |
+--------+-----------------------+
Table no_dma
+--------+-----------------------+
| person | email                 |
+--------+-----------------------+
| 0      | xxxxxxx#xxxxxxxxx.com |
+--------+-----------------------+
Query:
UPDATE person, no_dma
SET no_dma.person = person.id
WHERE person.email = no_dma.email;
I have verified the existence of at least some matching email addresses in the two tables but the update produces
Query OK, 0 rows affected (9.31 sec)
Rows matched: 0 Changed: 0 Warnings: 0
Clearly I have a little dain bramamge today.
Help me out? What am I doing incorrectly?
// EDIT
Per comments below I made these queries:
mysql> select person, email from no_dma limit 0,1;
+--------+------------------------+
| person | email |
+--------+------------------------+
| 0 | tom_r1989#xxxxxxx.com
+--------+------------------------+
1 row in set (0.00 sec)
mysql> select email from no_dma where email = 'tom_r1989#xxxxxxx.com';
Empty set (0.00 sec)
mysql> select email from no_dma where TRIM(email) = 'tom_r1989#xxxxxxx.com';
Empty set (0.46 sec)
Both tables have email field stored as varchar with collation set to latin1_swedish_ci.
And this query, WTH?
mysql> SELECT CONCAT('"',email,'"') from no_dma limit 0,3;
+-----------------------+
| CONCAT('"',email,'"') |
+-----------------------+
" |amjor308#xxx.com
" |utt#xxx.com
" |00000000#xxx.com
+-----------------------+
mysql> SELECT email from no_dma limit 0,3;
+--------------------+
| email |
+--------------------+
|+amjor308#xxx.com
|mutt#xxx.com
|000000000#xxx.com
+--------------------+
What is going on there? Looks like newlines but I thought TRIM() handled those?
mysql> SELECT TRIM(email) from no_dma limit 0,3;
+--------------------+
| TRIM(email) |
+--------------------+
|+amjor308#aol.com
|mutt#excite.com
|000000000#aol.com
+--------------------+
3 rows in set (0.00 sec)
UPDATE: FOUND THE ISSUE
The import was done on a Windows-generated CSV, but mysqlimport was given the arg
--lines-terminated-by='\n'
Reimported data works fine.
Sorry to have wasted folks' time.
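For reference: MySQL's TRIM() strips only spaces by default, which is why TRIM(email) above still looked wrong; the stray character was the carriage return ('\r') from the Windows line endings, so the import needed --lines-terminated-by='\r\n'. A hedged way to confirm and clean this up in place instead of reimporting:
-- count rows whose email still ends in a carriage return
select count(*) from no_dma where email like '%\r';
-- strip the trailing carriage return in place
update no_dma set email = trim(trailing '\r' from email);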
Table no_dma has a trailing space. The data is not the same.
Edit:
SET ANSI_PADDING?
Is it really a space, or is it character 160 (a non-breaking space)?
What does a hash or checksum of each value reveal?
What are the string lengths?
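A quick way to act on those last two questions (table and column names taken from the question):
-- byte length vs. character length, plus the raw bytes; this exposes trailing \r, 0xA0, etc.
select email, length(email) as bytes, char_length(email) as chars, hex(email) as raw_bytes
from no_dma
limit 3;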
The statement itself is fine, I think, because I tested it and it worked.