After upgrading to Karate v1.0.1, the table data type has an issue. Here is my scenario and the error returned:
Scenario: table test
* table table1 =
| column |
| 'row1' |
| 'row2' |
* print table1
* print table1
>>>> js failed:
01: karate.log('[print]',table1)
<<<<
org.graalvm.polyglot.PolyglotException: ReferenceError: "table1" is not defined
- <js>.:program(Unnamed:1)
I downgraded to v0.9.6 to check, and this issue did not occur there.
The = symbol is not required.
* table table1
| column |
| 'row1' |
| 'row2' |
* print table1
You are welcome to update the docs via an open-source PR.
Related
I'm trying to filter a table with a list of strings as a parameter, but since I want to make the parameter optional (in a Python SQL use case) I can't use the IN operator.
With postgresql I was able to build the query like this:
SELECT *
FROM table1
WHERE (id = ANY(ARRAY[%(param_id)s]::INT[]) OR %(param_id)s IS NULL)
;
Then in Python one can choose to pass a list for param_id or just None, which returns all rows from table1. E.g.
pandas.read_sql(query, con=con, params={"param_id": [id_list or None]})
However, I couldn't do the same with Snowflake, because even the following query fails:
SELECT *
FROM table1
WHERE id = ANY(param_id)
;
Does Snowflake not have the ANY operator? It is in their docs.
If the parameter is a single string literal like '1,2,3', it first needs to be parsed into multiple rows with SPLIT_TO_TABLE:
SELECT *
FROM table1
WHERE id IN (SELECT s.value
FROM TABLE (SPLIT_TO_TABLE(%(param_id)s, ',')) AS s);
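Since Python builds the query anyway, a common client-side alternative is to expand one placeholder per list element and skip the filter entirely when the list is empty. A minimal sketch using Python's built-in sqlite3 module for illustration (table1 and the ids are made-up sample data, not from the question):

```python
import sqlite3

def fetch(con, id_list=None):
    """Return rows from table1, optionally filtered by a list of ids."""
    if id_list:
        # One "?" placeholder per id, so the driver still binds safely
        placeholders = ", ".join("?" for _ in id_list)
        sql = f"SELECT id FROM table1 WHERE id IN ({placeholders})"
        return con.execute(sql, id_list).fetchall()
    # No ids given: omit the filter and return everything
    return con.execute("SELECT id FROM table1").fetchall()

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (id INTEGER)")
con.executemany("INSERT INTO table1 VALUES (?)", [(1,), (2,), (3,)])

print(fetch(con, [1, 2]))  # filtered
print(fetch(con))          # all rows
```

The same idea works with any DB-API driver, including the Snowflake connector, since only the placeholder syntax differs.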
Agree with #Yuya. This is not very clear in the documentation. As per the doc:
"IN is shorthand for = ANY, and is subject to the same restrictions as ANY subqueries."
However, it does not work this way: IN works with an IN list, whereas ANY only works with a subquery.
Example:
select * from values (1,2),(2,3),(4,5);
+---------+---------+
| COLUMN1 | COLUMN2 |
|---------+---------|
| 1 | 2 |
| 2 | 3 |
| 4 | 5 |
+---------+---------+
IN works fine with a list of literals:
select * from values (1,2),(2,3),(4,5) where column1 in (1,2);
+---------+---------+
| COLUMN1 | COLUMN2 |
|---------+---------|
| 1 | 2 |
| 2 | 3 |
+---------+---------+
The following gives an error (though per the doc, IN and = ANY are the same):
select * from values (1,2),(2,3),(4,5) where column1 = ANY (1,2);
002076 (42601): SQL compilation error:
Invalid query block: (.
Using ANY with a subquery runs fine:
select * from values (1,2),(2,3),(4,5) where column1 = ANY (select column1 from values (1),(2));
+---------+---------+
| COLUMN1 | COLUMN2 |
|---------+---------|
| 1 | 2 |
| 2 | 3 |
+---------+---------+
Would it not make more sense, for both Snowflake and PostgreSQL, to have two functions/stored procedures, with one and two parameters respectively? Then the one with the "default" simply does not ask this artificial question (is the IN/ANY list empty or None) and is simpler. That said, your question is interesting.
I've created two tables in Google BigQuery: "conversion_log_" in project1 and "test_table" in project2.
I have been trying to select and insert data from conversion_log_ into test_table. I want to transfer the orderid (STRING) of conversion_log_ rows whose pgid and luid match the pgid and luid in test_table, but I got this error: "Unrecognized name: hitobito_test at [6:10]". I'm sure my table name is correct.
I can't find the cause of this error. Could anyone tell me?
Sorry, I'm a beginner at BigQuery, so if I overlooked something, please let me know.
insert into hitobito_test.test_table(orderid)
select orderid
from
`kuzen-198289.conversion_log.conversion_log_` as p
where
p.pgid = hitobito_test.test_table.pgid
AND
p.luid = hitobito_test.test_table.luid
test_table
pgid | luid | cv_date | orderid
4587 | U2300 | null | null
4444 | U7777 | null | null
conversion_log_
pgid | luid | cv_date | orderid |
3232 | U5454 | 2020-08-01 | xcdf23
9786 | U3745 | 2020-08-02 | fgtd43
4587 | U2300 | 2020-08-02 | aaav3 ⬅︎ I need to send this orderid to the first line in test_table
If I add the project name like below, I get this message:
"Syntax error: Expected end of input but got identifier "hitobito_test" at [6:33]"
insert into galvanic-ripsaw-281806.hitobito_test.test_table(orderid)
select orderid
from
`kuzen-198289.conversion_log.conversion_log_` as p
where
p.pgid = galvanic-ripsaw-281806.hitobito_test.test_table.pgid
AND
p.luid = hitobito_test.test_table.luid
Please try this:
INSERT INTO `galvanic-ripsaw-281806.hitobito_test.test_table`(orderid)
SELECT
orderid
FROM
`kuzen-198289.conversion_log.conversion_log_` AS p
WHERE
EXISTS (
SELECT 1
FROM
`galvanic-ripsaw-281806.hitobito_test.test_table` h
WHERE
p.pgid = h.pgid AND p.luid = h.luid)
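The correlated EXISTS used above is standard SQL, so the pattern can be sketched with Python's built-in sqlite3 module (the table names and sample rows below mirror the question; this is an illustration, not BigQuery itself). Note that, like the statement above, it inserts new rows with only orderid populated:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE conversion_log_ (pgid INT, luid TEXT, orderid TEXT);
CREATE TABLE test_table (pgid INT, luid TEXT, orderid TEXT);
INSERT INTO conversion_log_ VALUES
  (3232, 'U5454', 'xcdf23'),
  (4587, 'U2300', 'aaav3');
INSERT INTO test_table (pgid, luid) VALUES
  (4587, 'U2300'),
  (4444, 'U7777');
""")

# Copy only the orderids whose (pgid, luid) pair already exists in test_table
con.execute("""
INSERT INTO test_table (orderid)
SELECT p.orderid
FROM conversion_log_ AS p
WHERE EXISTS (
    SELECT 1 FROM test_table h
    WHERE p.pgid = h.pgid AND p.luid = h.luid)
""")

inserted = con.execute(
    "SELECT orderid FROM test_table WHERE orderid IS NOT NULL").fetchall()
print(inserted)  # only the matching row's orderid arrives
```

If the goal is to fill in orderid on the existing rows rather than add new ones, an UPDATE (or MERGE in BigQuery) with the same EXISTS correlation would be the direction to explore.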
I am trying to print the column names from a table called 'meta', and I also need their data types.
I tried this query
SELECT meta FROM INFORMATION_SCHEMA.TABLES;
but it throws an error saying no information schema is available. Could you please help me? I am a beginner in SQL.
Edit:
select tables.name from tables join schemas on
tables.schema_id=schemas.id where schemas.name='sprl_db';
This query gives me all the tables in database 'sprl_db'
You can use the monetdb catalog:
select c.name, c.type, c.type_digits, c.type_scale
from sys.columns c
inner join sys.tables t on t.id = c.table_id and t.name = 'meta';
As you are using MonetDB, you can get that by using sys.columns.
sys.columns returns all information related to table columns.
You can also check the schema, table and columns documentation for MonetDB.
In SQL Server, the equivalent is: exec sp_columns TableName
If I understand correctly, you need to see the columns and types of a table called meta that you (or some other user) defined?
There are at least two ways to do this:
First (as #GMB mentioned in their answer) you can query the SQL catalog: https://www.monetdb.org/Documentation/SQLcatalog/TablesColumns
SELECT * FROM sys.tables WHERE NAME='meta';
+------+------+-----------+-------+------+--------+---------------+--------+-----------+
| id | name | schema_id | query | type | system | commit_action | access | temporary |
+======+======+===========+=======+======+========+===============+========+===========+
| 9098 | meta | 2000 | null | 0 | false | 0 | 0 | 0 |
+------+------+-----------+-------+------+--------+---------------+--------+-----------+
1 tuple
So this gets all the relevant information about the table meta. We are mostly interested in the value of the column id because this uniquely identifies the table.
(Please note that this id will probably be different in your system)
After we have this information we can query the columns table with this table id:
SELECT * FROM sys.columns WHERE table_id=9098;
+------+------+------+-------------+------------+----------+---------+-------+--------+---------+
| id | name | type | type_digits | type_scale | table_id | default | null | number | storage |
+======+======+======+=============+============+==========+=========+=======+========+=========+
| 9096 | i | int | 32 | 0 | 9098 | null | true | 0 | null |
| 9097 | j | clob | 0 | 0 | 9098 | null | true | 1 | null |
+------+------+------+-------------+------------+----------+---------+-------+--------+---------+
2 tuples
Since you are only interested in the names and types of the columns, you can modify this query as follows:
SELECT name, type FROM sys.columns WHERE table_id=9098;
+------+------+
| name | type |
+======+======+
| i | int |
| j | clob |
+------+------+
2 tuples
You can combine the two queries above with a join:
SELECT col.name, col.type FROM sys.tables as tab JOIN sys.columns as col ON tab.id=col.table_id WHERE tab.name='meta';
+------+------+
| name | type |
+======+======+
| i | int |
| j | clob |
+------+------+
2 tuples
The second, and preferred, way to get this information, if you are using the mclient utility of MonetDB, is the \d meta-command. When used without arguments it lists the tables defined in the current database; when given the name of a table it prints that table's SQL definition:
sql>\d
TABLE sys.data
TABLE sys.meta
sql>\d sys.meta
CREATE TABLE "sys"."meta" (
"i" INTEGER,
"j" CHARACTER LARGE OBJECT
);
You can use the \? meta-command to see a list of all meta-commands in mclient:
sql>\?
\? - show this message
\<file - read input from file
\>file - save response in file, or stdout if no file is given
\|cmd - pipe result to process, or stop when no command is given
\history - show the readline history
\help - synopsis of the SQL syntax
\D table - dumps the table, or the complete database if none given.
\d[Stvsfn]+ [obj] - list database objects, or describe if obj given
\A - enable auto commit
\a - disable auto commit
\e - echo the query in sql formatting mode
\t - set the timer {none,clock,performance} (none is default)
\f - format using renderer {csv,tab,raw,sql,xml,trash,rowcount,expanded,sam}
\w# - set maximal page width (-1=unlimited, 0=terminal width, >0=limit to num)
\r# - set maximum rows per page (-1=raw)
\L file - save client-server interaction
\X - trace mclient code
\q - terminate session and quit mclient
For MySQL:
SELECT column_name,
data_type
FROM information_schema.columns
WHERE table_schema = 'yourdatabasename'
AND table_name = 'yourtablename';
Output:
+-------------+-----------+
| COLUMN_NAME | DATA_TYPE |
+-------------+-----------+
| Id | int |
| Address | varchar |
| Money | decimal |
+-------------+-----------+
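For completeness, SQLite exposes the same information through a pragma rather than catalog tables; a small Python sketch for comparison (the meta table with columns i and j is just sample data mirroring the MonetDB example):

```python
import sqlite3

# SQLite's equivalent of the catalog query: the table_info pragma
# lists each column's position, name, declared type, and constraints.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE meta (i INTEGER, j TEXT)")
cols = con.execute("PRAGMA table_info(meta)").fetchall()

for cid, name, ctype, notnull, default, pk in cols:
    print(name, ctype)
```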
I have a table with md5 sums for files and use the following query to find the files which exist in one hashing-run and not in the other (oldt vs newt):
SELECT *
FROM md5_sums as oldt
WHERE NOT EXISTS (SELECT *
FROM md5_sums as newt
WHERE oldt.file = newt.file
and oldt.relpath = newt.relpath
and newt.starttime = 234)
and oldt.starttime = 123
Now I want to set a flag in an extra column with an UPDATE statement, like
update md5_sums
set only_in_old = 'X'
where
and there I want to reference the query above as a subquery, but I cannot find a proper way. Is it possible to use the results of the query above in the WHERE clause of the UPDATE?
(I have now added some table screenshots with simple table data.)
Table Description
Table Data before UPDATE
desired Table Data after UPDATE
SQLite does not support aliasing the updated table.
In your case you don't need that: you can use the table's name md5_sums inside the subquery, since you aliased the table in the SELECT statement as newt.
UPDATE md5_sums
SET only_in_old = 'X'
WHERE NOT EXISTS (
SELECT 1 FROM md5_sums AS newt
WHERE md5_sums.file = newt.file
AND md5_sums.relpath = newt.relpath
AND newt.starttime = 234
)
AND starttime = 123
See the demo.
Results:
| file | relpath | starttime | only_in_old |
| ------- | -------- | --------- | ----------- |
| abc.txt | /var/tmp | 123 | |
| abc.txt | /var/tmp | 234 | |
| def.txt | /tmp | 123 | X |
| xyz.txt | /tmp | 234 | |
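Since this is SQLite, the statement and the demo data above can be verified end to end with Python's built-in sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE md5_sums (file TEXT, relpath TEXT, starttime INT, only_in_old TEXT);
INSERT INTO md5_sums (file, relpath, starttime) VALUES
  ('abc.txt', '/var/tmp', 123),
  ('abc.txt', '/var/tmp', 234),
  ('def.txt', '/tmp',     123),
  ('xyz.txt', '/tmp',     234);
""")

# Flag rows from run 123 that have no counterpart in run 234
con.execute("""
UPDATE md5_sums
SET only_in_old = 'X'
WHERE NOT EXISTS (
    SELECT 1 FROM md5_sums AS newt
    WHERE md5_sums.file = newt.file
      AND md5_sums.relpath = newt.relpath
      AND newt.starttime = 234
)
AND starttime = 123
""")

flagged = con.execute(
    "SELECT file FROM md5_sums WHERE only_in_old = 'X'").fetchall()
print(flagged)  # def.txt is the only file in run 123 missing from run 234
```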
I hope this helps you convert the SELECT statement into an UPDATE statement:
UPDATE md5_sums
SET only_in_old = 'X'
WHERE NOT EXISTS (SELECT *
FROM md5_sums newt
WHERE file = newt.file
and relpath = newt.relpath
and newt.starttime = 1551085649.7764235)
and starttime = 1551085580.009046
I'm trying to update a table with data from another table (using PostgreSQL). My table is something like:
+-------------------+-------------------+-----------------------+
| id_location | user_location | social_sec_number |
+-------------------+-------------------+-----------------------+
| 00000000001 | Jason (null) | 812.539.037 |
+-------------------+-------------------+-----------------------+
| 00000000002 | Jennifer (null) | 066.307.382 |
+-------------------+-------------------+-----------------------+
| 00000000003 | Albert (null) | 560.732.535 |
+-------------------+-------------------+-----------------------+
And I want to turn it into this:
+-------------------+---------------------------+-----------------------+
| id_location | user_location | social_sec_number |
+-------------------+---------------------------+-----------------------+
| 00000000001 | Jason (812.539.037) | 812.539.037 |
+-------------------+---------------------------+-----------------------+
| 00000000002 | Jennifer (066.307.382) | 066.307.382 |
+-------------------+---------------------------+-----------------------+
| 00000000003 | Albert (560.732.535) | 560.732.535 |
+-------------------+---------------------------+-----------------------+
Columns id_location and user_location are in the same table TableLocation, but social_sec_number is in another table.
My attempt to update them (this code does not reflect the example tables shown above):
WITH tb_cpf
AS (
SELECT doc.nr_documento_identificacao
FROM core.tb_usuario_localizacao ul
INNER JOIN core.tb_localizacao l ON ul.id_localizacao = l.id_localizacao
INNER JOIN client.tb_pess_doc_identificacao doc ON ul.id_usuario = doc.id_pessoa
WHERE l.ds_localizacao ilike '%(null)%'
AND doc.cd_tp_documento_identificacao = 'CPF'
)
UPDATE core.tb_localizacao AS l
SET l.ds_localizacao = REPLACE(l.ds_localizacao, '(null)', tb_cpf)
WHERE l.id_localizacao IN (
SELECT l.id_localizacao
FROM core.tb_usuario_localizacao ul
INNER JOIN core.tb_localizacao l ON ul.id_localizacao = l.id_localizacao
INNER JOIN client.tb_pess_doc_identificacao doc ON ul.id_usuario = doc.id_pessoa
WHERE l.ds_localizacao ilike '%(null)%'
AND doc.cd_tp_documento_identificacao = 'CPF'
)
AND tb_pess_doc_identificacao.id_pessoa = tb_usuario_localizacao.id_usuario
AND tb_usuario_localizacao.id_localizacao = tb_localizacao.id_localizacao;
And I receive this ugly error:
ERROR: column "tb_cpf" does not exist
LINE 11: ... of l.ds_localizacao = REPLACE (l.ds_localization, '(null)', tb_cpf)
^
********** Error **********
ERROR: column "tb_cpf" does not exist
SQL state: 42703
Character: 433
That way, is it possible for each record containing "(null)" to have it replaced by the social security number (for example) in every table?
You're misunderstanding how the WITH clause works. Think of it like a view or a subquery: it behaves like an in-memory table, and you have to expose every column you are going to use.
But why are you complicating it so much? Why don't you do something as simple as this?
update mytable m1
set mycolumn = replace( m1.mycolumn,
                        '(null)',
                        (
                          select mycolumn2
                          from mytable2 m2
                          where m2.id = m1.id
                        )
                      )
where m1.mycolumn like '%(null)'
Or, if you really want to use WITH (for performance reasons):
with t1 as
 (select t.id,
         t.mycolumn
  from mytable t
 )
update mytable2 t2
set mycolumn = replace (t2.mycolumn, '(null)', t1.mycolumn)
from t1
where t2.id = t1.id
and t2.mycolumn like '%(null)'
The tb_cpf from the CTE works like any other relation in your main UPDATE statement, but you must reference it in a FROM clause, and you are missing a column identifier: add .nr_documento_identificacao to tb_cpf and the update should work:
WITH tb_cpf AS (
    ...
)
UPDATE core.tb_localizacao AS l
SET ds_localizacao = REPLACE(l.ds_localizacao, '(null)', tb_cpf.nr_documento_identificacao)
FROM tb_cpf
WHERE ...
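The simpler correlated-subquery form is portable enough to try locally. A minimal sketch with Python's built-in sqlite3 module, using made-up table and column names (locations/documents stand in for the real tables; the sample rows mirror the question's example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE locations (id INT, label TEXT);
CREATE TABLE documents (id INT, ssn TEXT);
INSERT INTO locations VALUES (1, 'Jason (null)'), (2, 'Jennifer (null)');
INSERT INTO documents VALUES (1, '812.539.037'), (2, '066.307.382');
""")

# Replace the '(null)' marker with the matching row's ssn, in parentheses
con.execute("""
UPDATE locations
SET label = replace(label, '(null)',
                    '(' || (SELECT d.ssn FROM documents d
                            WHERE d.id = locations.id) || ')')
WHERE label LIKE '%(null)%'
""")

result = con.execute("SELECT label FROM locations ORDER BY id").fetchall()
print(result)
```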