I have a massive database, and I found an error: whenever the data contains a lone double quote ("), the migration returns the error:
ERROR: extra data after last expected column
My data is:
...
0,direccion N"16, 109, 420000
0,otra direccion N"32", 109, 320000
...
My command to migrate:
$ psql -U user sat -c "copy table FROM '/file.csv' WITH (FORMAT CSV, DELIMITER(','));"
The strange thing is that when I erase the double quotes, I can migrate. Is there some way to escape or ignore the " character?
Double quotes are the default quotation character for the COPY command. Use the QUOTE option to modify this:
psql -U user sat -c "copy table FROM '/file.csv' WITH (QUOTE '~', FORMAT CSV, DELIMITER(','));"
See the PostgreSQL COPY documentation.
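To sanity-check the effect, you could load into a throwaway table first (a minimal sketch; the table name copy_test and the column types are assumptions based on the sample rows):
psql -U user sat -c "CREATE TABLE copy_test (a integer, b text, c integer, d integer);"
psql -U user sat -c "copy copy_test FROM '/file.csv' WITH (QUOTE '~', FORMAT CSV, DELIMITER(','));"
psql -U user sat -c "SELECT * FROM copy_test LIMIT 5;"
With QUOTE set to a character that never occurs in your data (here ~), the stray " is treated as ordinary data rather than as a quoting character.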
I am trying to create a format file to bulk import a .csv file, but I am getting an error.
The command I used:
"BCP -SMSSQLSERVER01.[Internal_Checks].[Jan_Flat] format out -fC:\Desktop\exported data\Jan_FlatFormat.fmt -c -T -Uasda -SMSSQLSERVER01 -PPASSWORD"
I am getting this error:
"A valid table name is required for in, out, or format options."
Can anyone suggest what I need to do?
According to the bcp Utility documentation the first parameter should be a [Database.]Schema.{Table | View | "query"}, so don't put -SMSSQLSERVER01 where you've got it. Also use format nul instead of format out.
Try using:
bcp.exe [Internal_Checks].[Jan_Flat] format nul "-fC:\Desktop\exported data\Jan_FlatFormat.fmt" -c -SMSSQLSERVER01 -T -Uasda -PPASSWORD
Note the quotes " around the -f switch because your path name contains space characters.
Also note that the -c switch causes single-byte characters (ASCII/OEM/codepage with SQLCHAR) to be written out. If your table contains nchar, nvarchar or ntext columns you should consider using the -w switch instead so as to write out UTF-16 encoded data (using SQLNCHAR).
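For example, the same command with -w instead of -c (a sketch reusing the table, server, and format-file path from above):
bcp.exe [Internal_Checks].[Jan_Flat] format nul "-fC:\Desktop\exported data\Jan_FlatFormat.fmt" -w -SMSSQLSERVER01 -T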
I am trying to pull all the JDK packages installed on a set of hosts by sending a SQL SELECT statement to osquery in the Linux shell via pssh.
Here is the query:
pssh -h myhosts -i 'echo "SELECT name FROM rpm_packages where name like '%jdk%';"| osqueryi --json'
but the "%" character is giving me the error below.
Error: near line 1: near "%": syntax error
I tried to escape the %, but the error remains the same. Any ideas how to overcome this error?
You aren't getting this error from your shell but from the query parser, and it's not actually caused by the % character but by the ' that immediately precedes it. Look at where you have quotes:
'echo "SELECT name FROM rpm_packages where name like '%jdk%';"| osqueryi --json'
^----------------------------------------------------^ ^-------------------^
These quotes are consumed by the shell when it parses the argument. Single quotes tell the shell to ignore any otherwise-special characters inside and treat what is within the quotes as part of the argument -- but not the quotes themselves.
After shell parsing finishes, the actual, verbatim argument that gets sent to pssh looks like this:
echo "SELECT name FROM rpm_packages where name like %jdk%;"| osqueryi --json
Note that all of the single quotes have been erased. The result is that your query tool sees the % (presumably modulus) operator in a place that it doesn't expect -- right after another operator (like) which makes about as much sense to the parser as name like * jdk. The parser doesn't understand what it means to have two consecutive binary operators, so it complains about the second one: %.
In order to get a literal ' there, you need to jump through this hoop:
'\''
^^^^- start quoting again
|||
|\+-- literal '
|
\---- stop quoting
So, to fix this, replace all ' instances inside the string with '\'':
pssh -h myhosts -i 'echo "SELECT name FROM rpm_packages where name like '\''%jdk%'\'';"| osqueryi --json'
osqueryi accepts a single statement on the command line. Eliminating the echo can make quoting a bit simpler:
osqueryi --json "SELECT name FROM rpm_packages WHERE name LIKE '%jdk%'"
You will, however, need the quotes to pass through your pssh command line.
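Putting that together, the pssh invocation might look like this (a sketch; the quoting assumes pssh hands the string to a remote shell, so the escaped double quotes survive the local shell and group the statement remotely):
pssh -h myhosts -i "osqueryi --json \"SELECT name FROM rpm_packages WHERE name LIKE '%jdk%'\""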
While osqueryi is great for short simple things, if you're building a frequent polling service, osqueryd with scheduled queries is generally simpler.
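For reference, a scheduled query in the osqueryd JSON config might look like this (a sketch; the query name jdk_packages and the interval are placeholders):
{
  "schedule": {
    "jdk_packages": {
      "query": "SELECT name FROM rpm_packages WHERE name LIKE '%jdk%';",
      "interval": 3600
    }
  }
}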
I created a text file named "test.txt"; its content is the first part below. I also created a script named insert.sh.
I run the command with ./insert.sh test.txt.
If the words/strings are in single quotes, it inserts them into the columns; it also inserts numbers without single quotes. But the CSV that I will eventually use won't have single quotes, and I don't want to change the data.
How can I wrap the content of the variable in single quotes inside the INSERT INTO command?
I am using psql.
Text file, test.txt:
'one','ten','hundred'
'two','twenty','twohundred'
Script, insert.sh:
#!/bin/bash
while read cell
do
name=$cell
echo "$cell"
####Insert from txt into table####
sudo -u username -H -- psql -d insert_test -c "
INSERT INTO first (ten, hundred, thousend) VALUES ($cell);
"
done < $1
something like this:
INSERT INTO first (ten, hundred, thousend) VALUES (INSERT" $cell "QUOTES);
UPDATE:
I changed the code. I added the single quotes around $cell as you suggested.
#!/bin/bash
while read cell
do
name=$cell
echo "$cell"
####Insert from txt into table####
sudo -u username -H -- psql -d insert_test -c "
INSERT INTO first (ten, hundred, thousend) VALUES ('$cell');
"
done < $1
and I removed the quotes from the text file, since the CSV file that I want to use later won't have any single quotes.
New text file:
one,ten,hundred
two,twenty,twohundred
and I'm getting the error:
one,two,three
ERROR: INSERT has more target columns than expressions
LINE 2: INSERT INTO first (ten, hundred, thousend) VALUES ('one,two,...
You need to set the $IFS (Internal Field Separator) variable so Bash knows where to split each line. Since you are reading a CSV-like file, set it to the , character: IFS=,. Note that if you need to do other things in your script afterwards, you should restore $IFS to its original state, so store it in a temporary variable first, e.g. OLDIFS=$IFS.
read reads the entire line and splits it into fields according to $IFS, so you need to supply as many variables as there are words on the line. For example, if a line has three words (foo,baz,bar), give read three variables: read -r word1 word2 word3. If you don't supply enough variables, read stores the rest of the words in the last one; that is your problem.
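A quick way to see that splitting in action (a standalone sketch you can paste into a shell):
printf 'foo,baz,bar\n' | while IFS=, read -r word1 word2 word3; do echo "$word2"; done
# prints: baz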
So, a solution to your problem would be:
#!/bin/bash
OLDIFS=$IFS # Save the original IFS if you need to do more stuff later.
while IFS=, read -r word1 word2 word3
do
sudo -u username -H -- psql -d insert_test -c \
"INSERT INTO first (ten, hundred, thousend) VALUES ('${word1}', '${word2}', '${word3}');"
done < "$1"
IFS=$OLDIFS # Restore the original IFS saved on the second line.
# ...
NOTE: This is insecure because it can easily lead to SQL injection. If you use it, only use it against a local database that doesn't hold any sensitive data.
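If the end goal is simply to load the CSV into the table, a safer route (a sketch, assuming the same table and file as above) is to let psql parse the file itself with \copy instead of building SQL strings in the shell:
sudo -u username -H -- psql -d insert_test -c "\copy first (ten, hundred, thousend) FROM 'test.txt' WITH (FORMAT csv)"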
I am exporting query results from Hive using Beeline; here is my command:
beeline -u 'jdbc:hive2://myhost.com:10000/mydb;principal=hive/myhost.COM' --incremental=true --silent=true --outputformat=dsv --disableQuotingForSV=true --delimiterForDSV=\, --showHeader=false --nullemptystring=true -f myquery.hql --hiveconf DT_ID=${DT_ID} > ${spoolFile}
This is my query :
SELECT id, concat('"',c_name,'"'), app_name from mytab where dt_id='${hiveconf:DT_ID}';
But for fields that contain my field separator (,) in the column value, I get results like this:
66,**^#**"(Chat\, Social\, Music\, Utilities)"**^#**,Default
Note the ^#. Why does it appear? How can I avoid it? What is that character? If it is a quote, I am happy to keep it, so that I can remove the concat in my query. I tried playing with --disableQuotingForSV=true/false, but that did not help.
I'm trying to run a BigQuery query from the command line, but because my query is very long I've written it in a text file. The query works from the GUI, and I'm overwriting a table that already exists.
bq query --allow_large_results --replace --destination_table=me.Tbl_MyTable '`cat query.txt`'
However, I'm getting an error:
Error in query string: Error processing job
'dev:bqjob_r_00000123456789456123_1': Encountered "
"\'cat query.txt\' "" at line 1, column 1.
Was expecting: EOF
Do I need to put the entire file path in the .txt filename? (this doesn't seem to make a difference)
Are there any characters I need to be careful with in the text file (e.g. "\" or quotation marks) ?
I'm using where clauses and group by clauses - is that an issue?
Instead of cat, just pipe the input from the file. The command would be:
bq query --allow_large_results --replace --destination_table=me.Tbl_MyTable < query.txt
This will send the contents of query.txt to the bq tool.
Elliot is right; if you want to use cat, sed, or anything else, pipe it:
cat query.txt | bq query
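With the flags from the question, that would be something like (a sketch reusing the original options):
cat query.txt | bq query --allow_large_results --replace --destination_table=me.Tbl_MyTable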