I'm trying to run queries in standard SQL mode via the command line. I'm using a recent version of the BigQuery CLI, bq 2.0.17.
First I tried the bq query command:
bq query --use_legacy_sql=False "SELECT string_col FROM `source_name`"
And got this exception:
FATAL Flags parsing error: Unknown command line flag 'use_legacy_sql'
Then I ran it in shell mode:
project_name> query --use_legacy_sql=False "SELECT string_col FROM `source_name`"
And got:
float() argument must be a string or a number
Could you advise me how I can run a standard SQL query from the command line?
Thanks
The commands you are running are all correct. The only thing I do differently is I read the queries from a file so I don't have problems with my bash trying to interpret the ` sign, like so:
cat query.sql | bq query --use_legacy_sql=False
(You won't have this issue in the bq shell interpreter).
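For example, a minimal sketch of that file-based approach (the fully qualified table path is a placeholder):
cat > query.sql <<'EOF'
SELECT string_col FROM `my_project.my_dataset.source_name`
EOF
cat query.sql | bq query --use_legacy_sql=False
The quoted 'EOF' heredoc keeps the shell from treating the backticks as command substitution.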
Supposing your queries are all correct and follow the standard SQL syntax (you can also run a simple query like "select [1, 2]" to see if it works; if it does, then maybe your source_name has some issue going on), the only thing that comes to mind is to reinstall gcloud and update it (currently we are at bq version 2.0.24), as this seems to be more related to environment than to command syntax.
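If you go the reinstall/update route, a minimal sketch (this assumes the Cloud SDK is already installed; bq ships with it):
gcloud components update
bq version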
I needed to use the gcloud installation instead of the standalone bq.
Thanks
Related
I am running a simple SELECT query in the BigQuery GCP Console and it works perfectly fine. But when I run the same query using the BQ CLI, it fails.
When I run the same query without the "WHERE" clause, it works.
SELECT field_path FROM `GCP_PROJECT_ID.MY_DATASET.INFORMATION_SCHEMA.COLUMN_FIELD_PATH`
WHERE table_name="MY_TABLE_NAME"
Below is the error message:
Error in query string: Error processing job 'GCP_project_ID:jobidxxxxx': Unrecognized name:
MY_DATASET at [1:1xx]
I have tried the following "WHERE" clauses as well. None of these work either.
... WHERE table_name IN ("MY_TABLE_NAME")
... WHERE table_name like "%MY_TABLE_NAME%"
I have reproduced your query on my own datasets using the Command Line tool and it worked fine. This is the command I ran:
bq query --use_legacy_sql=false 'SELECT field_path FROM `<projectName>.<DatasetName>.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS` WHERE table_name="<TableName>"'
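If shell quoting gets in the way here too, the file trick from the first thread applies; a minimal sketch (project, dataset, and table names are placeholders):
cat > check_columns.sql <<'EOF'
SELECT field_path
FROM `myproject.mydataset.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS`
WHERE table_name = "mytable"
EOF
bq query --use_legacy_sql=false < check_columns.sql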
I have been able to run an SQL script, script.sql, from within SQL*Plus as follows:
SQL> @script.sql
I did that after starting SQL*Plus and logging in from within PowerShell as follows:
C:\projects\temp> sqlplus myuser/mypassword@my.tns.address
Now what I'd like to do is run the script above directly from Powershell. It is my understanding that I should be able to do that like this:
C:\projects\temp> sqlplus myuser/mypassword@my.tns.address script.sql
I expected this to work fine, especially considering I'm using the same user and password, the same TNS, and calling the same script.
Instead of running the script, however, all this command does is output the help text for SQL*Plus, i.e. the same text that is shown when running sqlplus -H.
I'm assuming there is some syntax error, or something else wrong with the command above, but the problem is finding out what that error is.
Is there something obviously wrong with the last command above? Or is there some way for me to turn up the verbosity, so that I can get a hint about what could be wrong?
The "#" is missing to invoke the script. Try this instead
C:\projects\temp> sqlplus myuser/mypassword#my.tns.address #script.sql
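As a side note, SQL*Plus also forwards any arguments placed after the script name into the script, where they are available as the substitution variables &1, &2, and so on; a minimal sketch (the argument value is a placeholder):
C:\projects\temp> sqlplus myuser/mypassword@my.tns.address @script.sql 42
Inside script.sql the value can then be referenced as &1, for example in a WHERE clause.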
Is there a programmatic way to validate HiveQL statements for errors like basic syntax mistakes? I'd like to check statements before sending them off to Elastic Map Reduce in order to save debugging time.
Yes there is!
It's pretty easy actually.
Steps:
1. Get a hive thrift client in your language.
I'm in ruby so I use this wrapper - https://github.com/forward/rbhive (gem install rbhive)
If you're not in ruby, you can download the hive source and run thrift on the included thrift configuration files to generate client code in most languages.
2. Connect to hive on port 10001 and run a describe query
In ruby this looks like this:
RBHive.connect(host, port) do |connection|
  connection.fetch("describe select * from categories limit 10")
end
If the query is invalid, the client will throw an exception with details of why the syntax is invalid. Describe will return a query tree if the syntax IS valid (which you can ignore in this case).
Hope that helps.
"describe select * from categories limit 10" didn't work for me.
Maybe this is related to the Hive version one is using.
I'm using Hive 0.8.1.4
After doing some research I found a similar solution to the one Matthew Rathbone provided:
Hive provides an EXPLAIN command that shows the execution plan for a query. The syntax for this statement is as follows:
EXPLAIN [EXTENDED] query
So for everyone who's also using rbhive:
RBHive.connect(host, port) do |c|
  c.execute("explain select * from categories limit 10")
end
Note that you have to substitute c.fetch with c.execute, since explain won't return any results if it succeeds; with fetch, rbhive would throw an exception whether or not your syntax is correct.
execute will throw an exception if you've got a syntax error or if the table/column you are querying doesn't exist. If everything is fine, no exception is thrown, but you'll also receive no results, which is not an evil thing.
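The same check can also be done from the shell without a thrift client, assuming the hive CLI is on the PATH (the table name is a placeholder):
hive -e 'explain select * from categories limit 10'
hive exits with a non-zero status when the statement fails to compile, so this can also be dropped into a pre-submit check.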
The latest version, Hive 2.0, comes with the hplsql tool, which allows us to validate Hive commands without actually running them.
Configuration:
Add the XML below to the hive/conf folder and restart Hive:
https://github.com/apache/hive/blob/master/hplsql/src/main/resources/hplsql-site.xml
To run hplsql and validate a query, use the commands below.
To validate a single query:
hplsql -offline -trace -e 'select * from sample'
(or)
To validate an entire file:
hplsql -offline -trace -f samplehql.sql
If the query syntax is correct, the response from hplsql would be something like this:
Ln:1 SELECT // type
Ln:1 select * from sample // command
Ln:1 Not executed - offline mode set // execution status
If the query syntax is wrong, the syntax issue in the query will be reported.
If the Hive version is older, we need to manually place the hplsql jars inside hive/lib and proceed.
I have the following problem: in a script that will run before the new version is rolled out, I need to put the SQL code that enables pgAgent in PostgreSQL. However, this code should be run on the maintenance database (postgres), and the database where we run the script file is a different one.
I remember that in SQL Server there is a "USE" command, so you could do something like:
use foo
-- some code
use bar
-- more code
Is there something similar in PostgreSQL?
You can put in your file something like:
\c first_db_name
select * from t; --- your sql
\c second_db_name
select * from t; --- your sql
...
Are you piping these commands through the psql command? If so, \c databasename is what you want.
psql documentation
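For example, a minimal sketch (connection details and the file name are placeholders), where migrate.sql contains the \c lines shown above:
cat migrate.sql | psql -U postgres postgres
Each \c reconnects before the statements that follow it are run.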
You can't switch databases in Postgres in this way. You actually have to reconnect to the other database.
PostgreSQL doesn't have the USE command. You would most likely use psql with the --dbname option to accomplish this; --dbname takes the database name as a parameter. See this link for details on the other options you can pass in; you will also want to check out the --file option as well: http://www.postgresql.org/docs/9.0/interactive/app-psql.html
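Applied to the original question, that could look like the following sketch (the file names are hypothetical):
psql --dbname=postgres --file=enable_pgagent.sql
psql --dbname=my_app_db --file=upgrade.sql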
Well, after looking on the web for some time, I found this, which was what I needed:
http://www.postgresonline.com/journal/archives/44-Using-DbLink-to-access-other-PostgreSQL-Databases-and-Servers.html
I have an Oracle script that I am trying to convert to valid DB2 syntax. Within this SQL file I have various calls to other SQL files, passing in a parameter using the '@' syntax.
e.g.
@script1 param1
@script2 param2
Can anyone help me with valid DB2 equivalent statements? Is there an equivalent run command in DB2? Is it possible to pass parameters to a SQL script in DB2?
thanks,
smauel
The thing you are after is the DB2 Command Line Processor (CLP).
If you want to execute a script, you would execute in the CLP:
db2 -vtf script1
-f tells the CLP to run command input from the given file.
Here's the full list of options.
Unfortunately db2 doesn't support passing parameters to a script. You would have to combine your db2 -vtf commands with other scripting commands (such as sed) to generate the scripts for you, as in this example.
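A minimal sketch of that approach (the placeholder token and file names are assumptions, not a DB2 convention):
sed "s/:param1/actual_value/g" script1.template.sql > script1.sql
db2 -vtf script1.sql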
1) Place the filename.sql file in SQLLIB/BIN.
2) Run db2cmd.
3) Execute this to connect to the required DB:
db2 connect to *dbname* user *userid* using *password*
4) Execute this command:
db2 -vtf *filename.sql*
This should execute the SQL statements in the file one by one. The SQL statements must end with a semicolon.
There is an easier way of passing in parameters that works fine for us (it might not work with complex multiline SQL statements).
Convert your SQL script into a shell script by adding 'db2 ' at the beginning of each line. Then you can use the standard variable replacement syntax of your shell in your scripts.
so instead of
insert ...
update ...
you will have
db2 insert ...
db2 update ...
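Putting it together, a hedged sketch of such a wrapper script (connection details, table, and statements are placeholders):
#!/bin/sh
# $1 is the parameter passed on the command line, e.g. ./run.sh my_table
db2 connect to mydb user myuser using mypassword
db2 "insert into $1 (id) values (1)"
db2 "update $1 set id = 2 where id = 1"
db2 connect reset
Consecutive db2 invocations in the same shell session reuse the connection opened by db2 connect.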
1) Place the file in one directory.
2) Open db2cmd.exe as administrator.
3) Navigate to the directory where you have placed the script.
4) Type db2 -vtf *filename.sql*