I am new to using beeline and I am using a statement like the following:
beeline -u 'jdbc:hive2://myserver' --outputformat=csv2 -f export.sql > results.csv
The statement works as desired, but the resulting file is very verbose, with INFO log lines preceding and following the actual CSV data. How can I modify the statement so that the only thing in the CSV is the actual data and none of the other output? I just want a clean dataset in results.csv.
You can do that by setting --silent=true and --verbose=false:
beeline -u 'jdbc:hive2://myserver' --outputformat=csv2 --silent=true --verbose=false -f export.sql > results.csv
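If INFO lines from the Hive driver still show up, they may be going to stderr rather than stdout (this varies by version and logging configuration), so discarding stderr is a common extra step:
beeline -u 'jdbc:hive2://myserver' --outputformat=csv2 --silent=true --verbose=false -f export.sql 2>/dev/null > results.csv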
I want to replace the latest partition of my BQ table with data that is already available in an adhoc table.
Could anyone please help me with this?
The command below is not doing it:
bq query \
--use_legacy_sql=false \
--replace \
--destination_table 'mydataset.table1$20160301' \
'SELECT
column1,
column2
FROM
mydataset.mytable'
I guess you need to use bq cp instead of bq query
From here:
To copy a partition, use the bq command-line tool's bq cp (copy) command with a partition decorator ($date) such as $20160201.
Optional flags can be used to control the write disposition of the destination partition:
-a or --append_table appends the data from the source partition to an existing table or partition in the destination dataset.
-f or --force overwrites an existing table or partition in the destination dataset and doesn't prompt you for confirmation.
-n or --no_clobber returns the following error message if the table or partition exists in the destination dataset: Table 'project_id:dataset.table or table$date' already exists, skipping. If -n is not specified, the default behavior is to prompt you to choose whether to replace the destination table or partition.
bq --location=location cp \
-f \
project_id:dataset.source_table$source_partition \
project_id:dataset.destination_table$destination_partition
So don't forget to use the -f flag to overwrite the old partition.
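Applied to the tables from the question (a sketch reusing the names you posted; adjust the project and dataset as needed), and with the partition decorator quoted so the shell doesn't expand $20160301 as a variable:
bq cp -f 'mydataset.mytable' 'mydataset.table1$20160301'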
Does anyone here know how to execute multiple SQL files from the bq command line? For example, if I have two SQL files named test1.sql and test2.sql, how should I do it?
If I do this:
bq query --use_legacy_sql=false < test1.sql
this only executes the test1.sql.
What I want to do is to execute both test1.sql and test2.sql.
There isn't a way to do that with a single bq query invocation.
The best option if you want one line is to use the && operator:
bq query --use_legacy_sql=false < test1.sql && bq query --use_legacy_sql=false < test2.sql
There is an alternative way, which is using a shell script in order to loop through all of the files:
#!/bin/bash
FILES="/path/to/sqls/*.sql"   # glob pattern so the loop runs each .sql file, not the directory path
for f in $FILES
do
bq query --use_legacy_sql=false < "$f"
done
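Assuming the script is saved as run_sqls.sh (the name is just an example), make it executable and run it:
chmod +x run_sqls.sh
./run_sqls.sh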
Using psql we can export a query output to a csv file.
psql -d somedb -h localhost -U postgres -p 5432 -c "\COPY (select * from sometable ) TO 'sometable.csv' DELIMITER ',' CSV HEADER;"
However I need to export the query output to a new table in a new sqlite3 database.
I also looked at pg_dump, but I haven't been able to figure out a way to do it with that.
The reason I want to export it as a new table in a new sqlite3 db without any intermediate CSV conversion is that the query output is going to run into GBs and I have disk space constraints, so rather than exporting to CSV and then creating a new sqlite3 db from it, I need to do this in one shot.
My solution is to use standard INSERT SQL statements.
It requires the same table schema on the SQLite side. The grep command removes the lines SQLite can't parse, such as SET statements, -- comment lines, and blank lines.
pg_dump --data-only --inserts --table=sometable DBNAME | grep -v -e '^SET' -e '^$' -e '^--' | sqlite3 ./target.db
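Note that sometable has to already exist in target.db with a matching schema, so create it first; for example (the column list here is only a placeholder, match it to your real table):
sqlite3 ./target.db 'CREATE TABLE sometable (id INTEGER, name TEXT);'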
I hope this will help you.
I want to generate a CSV file from a select query in SQL Server.
The code below works correctly in MySQL:
select * into outfile 'd:/report.csv' fields terminated by ',' from tableName;
It generated the CSV file.
Does anybody know how I can create a CSV file from a select query in SQL Server?
Will this do the job?
sqlcmd -S server -U loginid -P password -d DBname -Q "select * from tablename" -o output.csv
EDIT:
Use the -i option if you want to execute a SQL script, like -i sql_script_filename.sql:
SQLCMD -S MyInstance -E -d sales -i query_file.sql -o output_file.csv -s","
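If the output still isn't clean comma-separated data, combining the separator with -W (which strips the trailing padding sqlcmd adds to each column) usually gets closer; this is a sketch reusing the names above:
sqlcmd -S MyInstance -E -d sales -Q "select * from tablename" -s"," -W -o output_file.csv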
You can use OPENROWSET() to read from a CSV file within a T-SQL query but AFAIK you can't write to one. This is really what SSIS/DTS is for.
If you're working with it interactively in SQL Server Management Studio you could export a grid to a file.
I am trying to do a mysql dump of a few rows in my database. I can then use the dump to upload those few rows into another database. The code I have is working, but it dumps everything. How can I get mysqldump to only dump certain rows of a table?
Here is my code:
mysqldump --opt --user=username --password=password lmhprogram myResumes --where=date_pulled='2011-05-23' > test.sql
Just fix your --where option. It should be a valid SQL WHERE clause, like:
--where="date_pulled='2011-05-23'"
You have the column name outside of the quotes.
You need to quote the "where" clause.
Try
mysqldump --opt --user=username --password=password lmhprogram myResumes --where="date_pulled='2011-05-23'" > test.sql
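To load those rows into the other database afterwards (otherdb is just a placeholder name for the target database):
mysql --user=username --password=password otherdb < test.sql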
Use this command to dump specific table rows, using a LIKE condition.
mysqldump -u root -p sel_db_server case_today --where="date_created LIKE '%2018%'" > few_rows_dump.sql