Hopefully this is a simple question. I want to write a shell script that calls a SQL script to run some queries against a Rocket UniVerse database. I am doing this from the server command line (the same machine where the database resides).
In SQL Server I might do something like the following:
sqlcmd -S myServer\instanceName -i C:\myScript.sql
In Oracle like this:
SQL> @/dir/test.sql
In UV I can't figure it out:
uvsh ??? some.sql file
So in the test.sql file I might have something like the following:
"SELECT ID, COL1, COL2 FROM PRODUCT WHERE #ID=91;"
"SELECT ID, COL1, COL2 FROM PRODUCT WHERE #ID=92;"
"SELECT ID, COL1, COL2 FROM PRODUCT WHERE #ID=93;"
So, can this be done, or am I going about it the wrong way? Maybe a different method would work better? -- Thanks!
You can send the list of commands to the UniVerse process with the following command.
C:\U2\UV\HS.SALES>type test.sql | ..\bin\uv
You will not need the double quotes around each statement that you described.
For the HS.SALES account the following SQL commands should work:
SELECT #ID, FNAME, LNAME FROM CUSTOMER WHERE #ID='2';
SELECT #ID, FNAME, LNAME FROM CUSTOMER WHERE #ID='3';
SELECT #ID, FNAME, LNAME FROM CUSTOMER WHERE #ID='4';
Note that this may not do exactly what you want: it will display the results to standard out. Also, use caution when sending commands to UniVerse in this manner; if all of the UniVerse licenses are in use, the uv command will fail and the SQL commands will never execute.
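If you are on a Unix server, the same pipe works from a shell script. A minimal sketch, assuming UniVerse is installed under /usr/uv and your account directory is /usr/uv/HS.SALES (both paths are placeholders; adjust them for your installation):

#!/bin/sh
# Hypothetical paths - adjust for your UniVerse installation.
UVHOME=/usr/uv
ACCOUNT=$UVHOME/HS.SALES

# uv must be started from the account directory so the files resolve.
cd "$ACCOUNT" || exit 1

# Feed the SQL statements to the uv process; results go to standard out,
# so redirect them to a file if you want to capture them.
"$UVHOME/bin/uv" < test.sql > results.txt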
Related
Just as we can take a backup of a database using the pg_dump command, can we take a backup of a SELECT query's result?
For example, if I have a query select * from tablename; I want to back up the result of that query so that it can be restored somewhere.
You can use something like
copy (select * from tablename) to '/path/to/file' with csv;
It will write the results to a CSV file in much the same manner as pg_dump does (in fact, in plain mode pg_dump actually emits COPY commands). Note that server-side COPY TO requires an absolute path and superuser privileges; from psql you can use the client-side \copy variant instead.
Update:
And if you want the DDL as well, you can
create table specname as select * from tablename;
and then dump just that table's schema (pg_dump takes the database name as its argument; -t selects the table):
pg_dump -s -t specname dbname
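Putting the two pieces together from the shell, a minimal sketch (the database, table, and file names are placeholders; \copy is psql's client-side variant of COPY, so it needs no superuser rights):

# Back up the query result to a CSV file
psql -d mydb -c "\copy (select * from tablename) to 'backup.csv' with csv"

# Restore it later into a table with a matching column layout
psql -d mydb -c "\copy tablename_restored from 'backup.csv' with csv"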
Many times a day I have to write similar queries to get a single record:
select t.*
from some_table t
where t.Id = 123456
Maybe there is a shortcut for retrieving a single record? Like entering just the id and the table, and SQL Server generates the rest of the code automatically.
In SQL Server Management Studio, go to
Tools -> Options -> Environment -> Keyboard
There you will find the query shortcuts; you can define your own as well as see the standard ones.
You can set a shortcut for a fully executable query like
select * from table where id = 20
but not for an incomplete fragment like
select * from
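For example, under Query Shortcuts you could map Ctrl+3 to a complete statement such as

select top 10 * from some_table

(some_table being a placeholder for whatever you query most often). Note also that SSMS appends any text you have highlighted in the editor to the shortcut's query before executing it (this is how the built-in Alt+F1 sp_help shortcut works), so mapping a key to a query prefix and then highlighting an id or table name can get close to the "enter an id and generate the rest" workflow. This behavior varies between SSMS versions, so verify it in yours.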
I am working with Impala and fetching the list of tables from a database that match some pattern, like below.
Assume I have a database bank, and the tables under this database are as follows:
cust_profile
cust_quarter1_transaction
cust_quarter2_transaction
product_cust_xyz
....
....
etc
Now I am filtering like
show tables in bank like '*cust*'
It returns the expected results: the tables that have the word cust in their names.
Now my requirement is: I want all the tables which have cust in their name but do not have quarter2 in it.
Can someone please help me solve this?
Execute from the shell and then filter
impala-shell -q "show tables in bank like '*cust*'" | grep -v 'quarter2'
Or query the Hive metastore directly (this assumes a MySQL-backed metastore):
mysql -u root -p -e "select TBL_NAME from metastore.TBLS where TBL_NAME like '%cust%' and TBL_NAME not like '%quarter2%'";
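If you need the filtered list inside a script, a small sketch building on the first approach (bank and the patterns come from the question; -B and --quiet are impala-shell flags that strip the pretty-printing so the output is one bare table name per line):

#!/bin/sh
tables=$(impala-shell -B --quiet -q "show tables in bank like '*cust*'" | grep -v quarter2)

for t in $tables; do
    echo "Processing bank.$t"
    # e.g. impala-shell -q "describe bank.$t"
done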
Is it possible to append the results of running a query to a table using the bq command-line tool? I can't see any flags available to specify this, and when I run it, it fails and states "table already exists".
bq query --allow_large_results --destination_table=project:DATASET.table "SELECT * FROM [project:DATASET.another_table]"
BigQuery error in query operation: Error processing job '':
Already Exists: Table project:DATASET.table
Originally BigQuery did not support the standard SQL idiom
INSERT INTO foo SELECT a, b, c FROM bar WHERE d > 0;
and you had to do it their way with --append_table.
But according to @Will's answer below, it works now.
Originally with bq, there was
bq query --append_table ...
You can see it in the help for the bq query command:
$ bq query --help
The output lists an append_table option near the top:
Python script for interacting with BigQuery.
USAGE: bq.py [--global_flags] <command> [--command_flags] [args]
query Execute a query.
Examples:
bq query 'select count(*) from publicdata:samples.shakespeare'
Usage:
query <sql_query>
Flags for query:
/home/paul/google-cloud-sdk/platform/bq/bq.py:
--[no]allow_large_results: Enables larger destination table sizes.
--[no]append_table: When a destination table is specified, whether or not to
append.
(default: 'false')
--[no]batch: Whether to run the query in batch mode.
(default: 'false')
--destination_table: Name of destination table for query results.
(default: '')
...
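So, applying that flag to the command from the question, something like this should work (all names kept from the question):

bq query --allow_large_results --append_table --destination_table=project:DATASET.table "SELECT * FROM [project:DATASET.another_table]"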
Instead of appending two tables together, you might be better off with a UNION ALL, which is SQL's version of concatenation.
In BigQuery's legacy SQL, the comma operator between two tables, as in SELECT something FROM tableA, tableB, is a UNION ALL, not a JOIN (or at least it was the last time I looked).
Just in case someone ends up finding this question on Google: BigQuery has evolved a lot since this post, and it now supports standard SQL.
If you want to append the results of a query to a table using the DML syntax of the standard version, you can do something like:
INSERT dataset.Warehouse (warehouse, state)
SELECT *
FROM UNNEST([('warehouse #1', 'WA'),
('warehouse #2', 'CA'),
('warehouse #3', 'WA')])
As presented in the docs.
For the command-line tool the same idea applies; you just need to add the flag --use_legacy_sql=False, like so:
bq query --use_legacy_sql=False "insert into dataset.table (field1, field2) select field1, field2 from table"
According to the documentation (current as of March 2018): https://cloud.google.com/bigquery/docs/loading-data-local#appending_to_or_overwriting_a_table_using_a_local_file
you should add:
--noreplace or --replace=false
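In the context of that page the flags belong to bq load. A sketch, with a placeholder source file and schema (the dataset and table names follow the question):

bq load --noreplace DATASET.table ./data.csv field1:STRING,field2:INTEGER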
I have written a BCP process with the queryout option. As a result, the query is executed and the results are (or should be) written to the designated output file. The query being used has been confirmed in SQL Query Analyzer (using MS SQL 2000) to generate a known result set.
However, when I execute the batch file with the BCP command it returns zero rows (I get the response "0 rows copied"). Yet I can take this same query and run it outside of the BCP process (in Query Analyzer) and get 42,745 rows. I can also create a view and have a simpler query against it work using the bcp queryout option. The query I am using joins information from two tables:
bcp "select obj_id, loc_code, CONVERT(VARCHAR(20), create_date, 20) AS build_date,
model_id, (len(build_string)/4) as feature_count, build_string
from my_db..builds a, my_db..models b
where a.model_id = b.model_id and obj_id like '_________C%' and obj_id not like '1G0%'" queryout z:\test.txt -U %1 -P %2 -S SQLSVR\VM_PROD -c
As you can see, the query is more complex than "select * from my_db..builds". Essentially, if I create a view using the more complex query and then run bcp queryout with a simple, straightforward query to retrieve the data from the view, it works fine. I can't figure out why the more complex query doesn't work in the BCP command, though. Could it be timing out before returning results, or is it that BCP doesn't know how to handle a complex join query?
I find you can usually avoid issues with the bcp utility if you create a view of the query you want to execute and then select from that instead. In this case:
CREATE VIEW [MyOutputView] as
select obj_id, loc_code, CONVERT(VARCHAR(20), create_date, 20) AS build_date,
...
...
and obj_id not like '1G0%'
GO
Then the bcp command becomes:
bcp "SELECT * FROM MyDb.dbo.MyOutputView" queryout z:\test.txt -U %1 -P %2 -S SQLSVR\VM_PROD -c