Pass varchar param to sql file via batch file

I have a batch file which runs multiple sql files and passes parameters to them.
The input parameters are stored in a txt file which is read by the batch file.
I am trying to pass varchar values to the IN clause of the SQL.
For example, in my input file I have this entry:
var1="'tommy','jim'"
In the batch file:
<code to read file, assuming %1 has the var1 value>
set param=%~1
sqlplus %DBCONN% @%programFolder%\test.sql %param%
test.sql (name is a varchar2 column):
select * from table where name in (&1);
This gives an error saying invalid number,
as it tries to run
select * from table where name in (tommy);
If I echo the variable right before the sqlplus call, it displays 'tommy','jim'
but in SQL jim and the single quotes are being stripped ...
Please help!
Now I edited the entry in the input file as
var1="'''tommy''','''jim'''"
and it goes in as select * from table where name in ('tommy');
But it truncates the second value.
Any clue how to include the comma?

Finally found a way:
input file -
var1=('tommy','jim')
Remove the parentheses from the sql file:
select * from table where name in &1;
This works. I have no idea why passing a comma was such an issue; there should have been some way to pass it!
If anyone finds out, please let me know.
Thanks

The tilde (~) in %~1 strips the surrounding quotes from the variable.
Try using var1='tommy','jim' in your input file and set param=%1 in your batch script.
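A minimal sketch of how those pieces fit together (run.bat, input.txt, test.sql and %DBCONN% are placeholder names, not from the original post):
rem run.bat - read var1 from input.txt and pass it to SQL*Plus
rem input.txt contains a line like: var1='tommy','jim'
for /f "tokens=2 delims==" %%v in (input.txt) do set param=%%v
rem plain %param% (no tilde) keeps the single quotes intact for the IN clause
sqlplus %DBCONN% @test.sql %param%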

ERROR: extra data after last expected column on PostgreSQL while the number of columns is the same

I am new to PostgreSQL and I need to import a set of csv files, but some of them weren't imported successfully. I got the same error with these files: ERROR: extra data after last expected column. I have investigated this error and learned that it can occur when the number of columns in the table is not equal to the number of columns in the file. But I don't think I am in that situation.
For example, I create this table:
CREATE TABLE cast_info (
id integer NOT NULL PRIMARY KEY,
person_id integer NOT NULL,
movie_id integer NOT NULL,
person_role_id integer,
note character varying,
nr_order integer,
role_id integer NOT NULL
);
And then I want to copy the csv file:
COPY cast_info FROM '/private/tmp/cast_info.csv' WITH CSV HEADER;
Then I got the error:
ERROR: extra data after last expected column
CONTEXT: COPY cast_info, line 8801: "612,207,2222077,1,"(segments \"Homies\" - \"Tilt A Whirl\" - \"We don't die\" - \"Halls of Illusions..."
The complete row in this csv file is as follows:
612,207,2222077,1,"(segments \"Homies\" - \"Tilt A Whirl\" - \"We don't die\" - \"Halls of Illusions\" - \"Chicken Huntin\" - \"Another love song\" - \"How many times?\" - \"Bowling balls\" - \"The people\" - \"Piggy pie\" - \"Hokus pokus\" - \"Let\"s go all the way\" - \"Real underground baby\")/Full Clip (segments \"Duk da fuk down\" - \"Real underground baby\")/Guy Gorfey (segment \"Raw deal\")/Sugar Bear (segment \"Real underground baby\")",2,1
You can see that there are exactly 7 columns, just as the table has.
The strange thing is, I found that the error lines of all these files contain a backslash followed by a quotation mark (\"). But these rows are not the only rows that contain \" in the files, and I wonder why the error doesn't appear for the other rows. Because of that, I am not sure if this is the problem.
After modifying these rows (e.g. replacing the \" or deleting the content while keeping the commas), there is a new error, ERROR: invalid input syntax, on line 2 of every file. It occurs because the data in the last column of these rows has had three semicolons (;;;) appended for no reason. But when I open these csv files, I can't see the three semicolons in those rows.
For example, after deleting the content in the fifth column of this row:
612,207,2222077,1,,2,1
I got the error:
ERROR: invalid input syntax for type integer: "1;;;"
CONTEXT: COPY cast_info, line 2, column role_id: "1;;;"
while line 2 does not actually contain three semicolons:
2,2,2163857,1,,25,1
In principle, I hope the problem can be solved without any modification to the data itself. Thank you for your patience and help!
The CSV format protects quotation marks by doubling them, not by backslashing them. You could use the text format instead, except that that doesn't support HEADER, and also it would then not remove the outer quote marks. You could instead tweak the files on the fly with a program:
COPY cast_info FROM PROGRAM 'sed s/\\\\/\"/g /private/tmp/cast_info.csv' WITH CSV;
This works with the one example you gave, but might not work for all cases.
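For reference, the same field written in standards-conforming CSV would double the embedded quotes rather than backslash them, roughly like this (shortened here for illustration):
612,207,2222077,1,"(segments ""Homies"" - ""Tilt A Whirl"" - ""We don't die"" - ...)",2,1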
"ERROR: invalid input syntax for line 2 of every file. And the errors occur because the data in the last column of these rows have been added three semicolons (;;;) for no reason. But when I open these csv files, I can't see the three semicolons in those rows"
How are you editing and viewing these files? Sounds like you are using something that isn't very good at preserving formatting, like Excel.
Try actually naming the columns you want processed in the copy statement:
copy cast_info (id, person_id, movie_id, person_role_id, note, nr_order, role_id) from ...
According to a friend's suggestion, I needed to specify the backslash as the escape character:
copy <table_name> from '<csv_file_path>' csv escape '\';
and then the problem is solved.
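Spelled out against the example above, the accepted answer's option in the newer WITH (...) COPY syntax would look something like this (a sketch; the path is the one from the question):
COPY cast_info FROM '/private/tmp/cast_info.csv' WITH (FORMAT csv, HEADER, ESCAPE '\');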

Pass quoted value as sqlplus param

I have a shell script script.sh, and a sql script modify_database.sql.
In script.sh, I launch the sql script with some params using:
sqlplus user/password ... @modify_database.sql "param_id" "param_newValue"
And in the sql script, I get the parameter values with:
UPDATE T_TEST SET C1 = &param_newValue WHERE ID = &param_id
But I have a lot of problems with the param_newValue parameter.
In fact, this parameter can contain spaces, slashes, single quotes and double quote characters.
For example: L’avion du pilote était surnommé le « Redoutable »
I have a bunch of values to update.
Each new value is stored in a txt file, like this:
id;value
1; L’avion du pilote était surnommé le « Redoutable »
2; Another example of a « test » that’s it
How can I pass this param to sqlplus and set the value like that? The quoted part is giving me a hard time :/
EDIT:
For example, in the input file I have:
123;;;"aaaa/vvvv/COD_039/fff=Avion d'office";"aaaa/vvvv/COD_039/fff=Hello d'orien";
I read this line, then do a cut command to get the first column (the ID), the 4th column (the text to replace), and the 5th column (the replacement).
I have an XML document with a node which contains the 4th column's text, and it needs to be replaced by the 5th column's content.
So I do:
sqlplus -s $DBUSER/$DBPASSWORD@$DBHOST:$DBPORT/$DBSCHEMA @majConfXML.sql "$id" "$newcode"
update T_TEST set XML = updatexml(xmltype(XML_CONF),
'//A[@name="XXX"]//B[@name="YYY"]//pkValue','&2').getClobVal()
But the param contains some quotes.
So when I run the update with '&2', the statement executes but does not work properly.
How can I escape/pass the param value into the updatexml call?
Thank you
Finally, I found the problem in my bash script, before the sqlplus call.
In fact, it was the xargs command I use which interprets single quotes. I simply added double quotes around each string containing slashes and quotes in my input file.
Then, before calling sqlplus, I replace every single quote (') by two single quotes ('').
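A rough sketch of that pre-processing in bash (the variable names match the question, but the exact commands are my guess, not the poster's script):
# double every single quote so the value survives SQL*Plus substitution
newcode=$(printf '%s' "$newcode" | sed "s/'/''/g")
sqlplus -s $DBUSER/$DBPASSWORD@$DBHOST:$DBPORT/$DBSCHEMA @majConfXML.sql "$id" "$newcode"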

Line contains invalid enclosed character data or delimiter at position

I was trying to load data from a csv file into Oracle SQL Developer, and while inserting the data I encountered an error which says:
Line contains invalid enclosed character data or delimiter at position
I am not sure how to tackle this problem!
For example:
INSERT INTO PROJECT_LIST (Project_Number, Name, Manager, Projects_M,
Project_Type, In_progress, at_deck, Start_Date, release_date, For_work, nbr,
List, Expenses) VALUES ('5770','"Program Cardinal
(Agile)','','','','','',to_date('', 'YYYY-MM-DD'),'','','','','');
The errors shown were:
--Insert failed for row 4
--Line contains invalid enclosed character data or delimiter at position 79.
--Row 4
I've had success when converting the csv file to Excel via "Save As", changing the format to .xlsx, and then loading the .xlsx version in SQL Developer. I think the conversion forces some of the bad formatting out. It worked at least on my last 2 files.
I fixed it by using the concatenate function in my CSV file first and then uploading it to SQL, which worked.
My guess is that it doesn't like to_date('', 'YYYY-MM-DD'); it's missing a date to format. Is that an actual input of your data?
But it could also be the unbalanced double quote in "Program Cardinal (Agile). Though I don't see why that would get picked up as an invalid character.
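If those guesses are right, a version of row 4 with both suspects fixed (NULL instead of the empty to_date(), stray quote removed) should at least parse; whether the remaining empty strings are valid depends on the column types:
INSERT INTO PROJECT_LIST (Project_Number, Name, Manager, Projects_M,
Project_Type, In_progress, at_deck, Start_Date, release_date, For_work, nbr,
List, Expenses) VALUES ('5770','Program Cardinal (Agile)','','','','','',
NULL,'','','','','');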

Sqlcmd trailing spaces in output file

Here is my simplified scenario:
I have a table in SQL Server 2005 with a single column of type varchar(500). The data in the column is always 350 characters in length.
When I run a select on it in the SSMS query editor and copy & paste the result set into a text file, the line length in the file is 350, which matches the actual data length.
But when I use sqlcmd with the -o parameter, the resulting file has a line length of 500, which matches the max length of varchar(500).
So the question is: without using any string functions in the select, is there a way to tell sqlcmd not to treat it like char(500)?
You can use the sqlcmd formatting option -W to remove trailing spaces from the output file.
See the sqlcmd documentation on MSDN for more.
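For example (server, database and query are placeholders):
sqlcmd -S myserver -d mydb -Q "SELECT col1 FROM dbo.MyTable" -W -o C:\temp\out.txt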
-W only works with the default size of 256 for variable-size columns. If you want more than that, you have to use the -y modifier, which sqlcmd will tell you is mutually exclusive with -W. Basically you are out of luck, and in my case the file grew from 0.5 MB to 172 MB. You have to strip the whitespace some other way after the file is generated, with some PowerShell command or something.
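One possible post-generation cleanup along those lines, as a sketch (file paths are placeholders, and note that slurping a 172 MB file this way is itself memory-hungry):
(Get-Content C:\temp\out.txt) | ForEach-Object { $_.TrimEnd() } | Set-Content C:\temp\out_trimmed.txt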

How to output data from iSQL to csv file with headings?

I'm trying to query a Sybase ASA 8 database with the iSQL client and export the query results to a text file in CSV format. However, the column headings are not exported to the file. There is no special option to request that, either in the iSQL settings or in the OUTPUT statement.
The query and output statement look like this:
SELECT * FROM SomeTable;
OUTPUT TO 'C:\temp\sometable.csv' FORMAT ASCII DELIMITED BY ';' QUOTE ''
The result is a file like
1;Miller;Steve;1980-06-28
2;Jones;Martha;1965-11-02
3;Waters;Richard;1979-10-15
while I'd like to have
ID;LASTNAME;FIRSTNAME;DOB
1;Miller;Steve;1980-06-28
2;Jones;Martha;1965-11-02
3;Waters;Richard;1979-10-15
Any hints?
I would have suggested starting with another statement:
SELECT 'ID;LASTNAME;FIRSTNAME;DOB' FROM dummy;
OUTPUT TO 'C:\temp\sometable.csv' FORMAT ASCII DELIMITED BY ';' QUOTE '';
and adding the APPEND option to your query... but I can't get APPEND to work (though I'm using an ASA 11 engine).
Try this one:
SELECT 'ID','LASTNAME','FIRSTNAME','DOB' UNION
SELECT string(ID),LASTNAME,FIRSTNAME,DOB FROM SomeTable;
OUTPUT TO 'C:\temp\sometable.csv' FORMAT ASCII DELIMITED BY ';' QUOTE '';
Simply add the option
WITH COLUMN NAMES
to your statement and it adds a header line with the column names.
The complete statement is therefore:
SELECT * FROM SomeTable; OUTPUT TO 'C:\temp\sometable.csv' FORMAT ASCII DELIMITED BY ';' QUOTE '' WITH COLUMN NAMES
See the Sybase documentation.
I am able to use the isql command to output quoted CSV.
Example
$ isql $DATABASE $USERNAME $PASSWORD -b -d, -q -c
select username, fullname from users
gives the result:
username,fullname
"jdoe","Jane Doe"
"msmith","Mark Smith"
Command-line flags
(copied from the man page)
-b: Run isql in non-interactive batch mode. In this mode, isql processes its standard input, expecting one SQL command per line.
-dDELIMITER: Delimits columns with delimiter.
-c: Output the names of the columns on the first row. Only has an effect in combination with the -d or -x options.
-q: Wrap the character fields in double quotes.
Escaping Issue
You might run into problems if the query results contain double-quotes, though. The quotes aren't escaped properly, so they result in invalid CSV:
> select 'string","with"quotes' as quoted_string
quoted_string
"string","with"quotes"
You are already familiar with the OUTPUT options; there is no option that gives you what you want.
OK, the problem is that the receiving end does not accept standard CSV files; it needs semicolons.
If you are scripting, then you are better off getting the output in the format that is closest to what you need and then awk-ing the output file. Very fast, and you can change anything you need. I think your best option is the ASCII or default output format, which will produce comma (not semicolon) separated values in an ASCII text file, including the column headers. Then use a single awk command to convert the commas to semicolons.
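For instance, a naive one-liner along those lines (my sketch; it assumes no field itself contains a comma, which quoted CSV fields can break):
awk 'BEGIN { FS=","; OFS=";" } { $1=$1; print }' sometable.csv > sometable_semi.csv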
Found an easier solution: place the headers in one file, say header.txt (it will contain a single line, "col_1|col_2|col_3"), then combine the header file and your output file by running:
cat header.txt my_table.txt > my_table_wth_head.txt
isql -S<Server> -D<Database> -U<UserName> -s \; -P<password>\$\1 -w 10000 -i name.sql > output.csv
If you use the FORMAT EXCEL option, it will output the rows with the column names in the first row. Then once you get it into Excel you can save it in another format if you need to.
SELECT * FROM SOMETABLE;
OUTPUT TO 'C:\temp\sometable.xls' FORMAT EXCEL DELIMITED BY ';' QUOTE ''
Recently I needed to solve a similar issue with a prehistoric ASA 7 which does not support WITH COLUMN NAMES for .csv output.
The solution for me was the .dbf file format, which carries the column structure in it and can be processed automatically, much better than .xls:
SELECT * FROM SomeTable;
OUTPUT TO 'C:\temp\sometable.dbf' FORMAT DBASEIII;