I am using the bq command-line tool to read data from multiple tables with similar names, and I have run into a subsidiary query problem.
Simple example:
bq query --append=true --destination_table=xxxxxxxxxxxx:my_table.result
SELECT udid FROM (TABLE_QUERY(xxxxxxxxxxxx:my_table,'table_id
CONTAINS "data_2014_05_05"'))
When I run that query in the BigQuery web UI I get the results. However, when I run it from
the command line I get: "Error evaluating subsidiary query".
In addition, if I test only the subsidiary query from the command line:
bq query "SELECT * FROM xxxxxxxxxxxx:my_table.__TABLES__
WHERE table_id CONTAINS 'data_2014_05_05'"
it works fine and I get the tables' info.
So why is there "Error evaluating subsidiary query" in the main query?
Is there a problem with subsidiary queries in the bq command-line tool?
There is no example whatsoever online or in the documentation.
Remove or escape the special characters, such as the quotes, in your query when passing it to the command-line tool; otherwise the shell consumes them before bq ever sees the query.
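As a rough illustration (using printf in place of bq, since the real command depends on your project; the dataset name ds and the filter value x are placeholders), here is what the shell actually passes through under the two quoting styles:

```shell
# Double-quoting the whole query lets the shell strip the inner double
# quotes, so bq would receive mangled SQL:
printf '%s\n' "SELECT udid FROM (TABLE_QUERY(ds, 'table_id CONTAINS "x"'))"
# -> SELECT udid FROM (TABLE_QUERY(ds, 'table_id CONTAINS x'))

# Single-quoting passes everything through verbatim, with the inner string
# quoted (and escaped) at the SQL level instead:
printf '%s\n' 'SELECT udid FROM (TABLE_QUERY(ds, "table_id CONTAINS \"x\""))'
# -> SELECT udid FROM (TABLE_QUERY(ds, "table_id CONTAINS \"x\""))
```

The first form is what silently breaks the subsidiary query: the filter string arrives at BigQuery without its quotes.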
I am running simple select query in BigQuery GCP Console and it works perfectly fine. But, when I run the same query using BQ CLI, it fails.
When I run the same query without the "WHERE" clause, it works.
SELECT field_path FROM `GCP_PROJECT_ID.MY_DATASET.INFORMATION_SCHEMA.COLUMN_FIELD_PATH`
WHERE table_name="MY_TABLE_NAME"
Below is the error message
Error in query string: Error processing job 'GCP_project_ID:jobidxxxxx': Unrecognized name:
MY_DATASET at [1:1xx]
I have tried the following "WHERE" clauses as well; none of them works either.
... WHERE table_name IN ("MY_TABLE_NAME")
... WHERE table_name like "%MY_TABLE_NAME%"
I have reproduced your query on my own datasets using the command-line tool and it worked fine. This is the command I ran:
bq query --use_legacy_sql=false 'SELECT field_path FROM `<projectName>.<DatasetName>.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS` WHERE table_name="<TableName>"'
Note the --use_legacy_sql=false flag: the INFORMATION_SCHEMA views are only available in standard SQL, and bq defaults to legacy SQL.
I'm attempting to execute a 236 line query in DataGrip in an attached BigQuery console. When I select the whole script to run, it always only executes up to the 19th line. Because of that, I get this error
[HY000][100032] [Simba][BigQueryJDBCDriver](100032) Error executing query job. Message: Syntax error: Unexpected end of script at [19:49] com.simba.googlebigquery.support.exceptions.GeneralException: [Simba][BigQueryJDBCDriver](100032) Error executing query job. Message: Syntax error: Unexpected end of script at [19:49]
I've tried running it as a SQL file as well, but that resulted in the same error. I know that this query is valid because it returns the desired results when I run it directly in the Google Cloud query editor. Has anyone else run into this issue, and is there a fix?
We introduced the BigQuery dialect recently, and it does not apply to a previously created data source after the update, since that data source was created with a custom driver that defaults to the 'Generic' dialect. You need to change the dialect in the driver options; then all related consoles and data sources will use the correct one.
I execute a SQL query (for PostgreSQL) via psql.exe inside a Windows batch. I get an error I can't explain, saying that a FROM clause is missing for a table that is not called within the query (see below). When I search in the batch file for geo_c3_0_3_mo table, the string is not found...
Any idea on this kind of issue?
EDIT :
If I copy-paste the query from the batch file into a pgAdminIII SQL query window, the query runs perfectly and no error message is returned.
When I remove one of the subqueries, the error either disappears or mentions another badly written table name (for instance: missing FROM-clause for table "geoc__0_3_mo"). It seems more and more that the issue comes from the length of the line (19,413 characters!). As far as I can tell, it is not possible to write the query across several lines within a batch file the way you can inside a pgAdminIII SQL query window. The solution would be to keep the query inside a *.sql file and to call that file from the batch file.
Write the query to a tempfile in your batch, then execute it with psql -f. This will bypass command-line length issues.
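A minimal sketch of that approach (shown in POSIX shell for brevity; the same idea works in a Windows batch file with redirection or a .sql file kept alongside the batch). The table names and connection parameters are placeholders, and the psql call is commented out since it depends on your environment:

```shell
# Write the long query to a temp file, one readable line at a time,
# instead of cramming 19,000+ characters onto a single command line.
SQLFILE=$(mktemp /tmp/query.XXXXXX)
cat > "$SQLFILE" <<'SQL'
SELECT t.id, t.value
FROM my_table t
WHERE t.id IN (SELECT id FROM other_table);
SQL

# Then execute the file (hypothetical connection parameters):
# psql -U my_user -d my_db -f "$SQLFILE"
cat "$SQLFILE"
```

Because the query now lives in a file, the shell (or cmd.exe) never has to parse it at all, so line-length limits and quoting rules stop mattering.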
I'm trying to run queries in standard SQL mode via the command line. I'm using a recent version, bigquery-2.0.17.
firstly I tried to use bq query command:
bq query --use_legacy_sql=False "SELECT string_col FROM `source_name`"
And get exception
FATAL Flags parsing error: Unknown command line flag 'use_legacy_sql'
Afterwards, I ran it in shell mode:
project_name> query --use_legacy_sql=False "SELECT string_col FROM `source_name`"
And get:
float() argument must be a string or a number
Could you advise me how to run the query from the command line?
Thanks
The commands you are running are all correct. The only thing I do differently is I read the queries from a file so I don't have problems with my bash trying to interpret the ` sign, like so:
cat query.sql | bq query --use_legacy_sql=False
(You won't have this issue in the bq shell interpreter).
Assuming your queries are correct and follow the standard SQL syntax (you can also run a simple query like "select [1, 2]" to see if it works; if it does, then maybe your source_name has some issue going on), the only other thing that comes to mind is reinstalling or updating gcloud (currently we are at bq version 2.0.24), as this seems to be related to the environment rather than the command syntax.
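To see why the file-based approach helps: inside double quotes, bash treats the backticks around `source_name` as command substitution and tries to execute it as a command. A here-document with a quoted delimiter (or a separate .sql file) keeps them literal. A small sketch with a placeholder table name:

```shell
# The <<'SQL' (quoted delimiter) disables all shell substitution,
# so the backticks reach the file, and therefore bq, untouched.
cat > /tmp/q.sql <<'SQL'
SELECT string_col FROM `myproject.mydataset.source_name`
SQL
cat /tmp/q.sql
# Then: cat /tmp/q.sql | bq query --use_legacy_sql=False   (assumes bq is on PATH)
```

This is exactly why the piped-from-file command above works where the inline double-quoted query fails.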
You need to use the bq that ships with the gcloud installation instead of the standalone bq tool.
Thanks
We have a huge SQL script involving tens of tables, subqueries, and hundreds of attributes. It works perfectly in the test database but returns a "subquery returns more than 1 row" error when run against the production database. The script was working perfectly up until now. The problem is, all I get is the one-line error above, with no clue as to which exact subquery causes it, which makes it nearly impossible to debug. The question is, how am I supposed to know which line of the SQL causes the error? Is there any way to "debug" it line by line as you would in a programming language?
I am using TOAD with Oracle 11g.
Add DBMS_OUTPUT.PUT_LINE calls to your script to print progress messages, and/or use exception handlers in the script. Possibly add some variables that count or label which statement you are at, and output that in the exception handler.
http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/errors.htm
Once you have found the query that causes the problem, convert it to a similar query with an appropriate group by and having count(*) > 1 so that you can see what data caused the problem. For instance if you have a correlated subquery that looks like:
(select name from names where id=foo.id)
then write a similar query
select id from names group by id having count(*) > 1
to identify the offending data.
If you have multiple subqueries in the query that produces the error, you could temporarily convert the subqueries to use temporary tables and search them all for duplicates.