Pass byte[] as a parameter to a SQL INSERT script

I am trying to upload the byte[] contents of a zip archive to my database. I used Get-Content -Encoding Byte -ReadCount 0 to read the data into a variable. I want to use this variable in an INSERT statement. Unfortunately, sqlcmd doesn't like the size of the variable and gives me this error:
Program 'SQLCMD.EXE' failed to run: The filename or extension is too long. At line:1 char:1.
I have tried using the -Q option to run the query, and also -i to run a .sql file.
DECLARE @data varbinary(MAX)
SET @data = '$(data_stuff)'
INSERT INTO MyTable
(v1,v2,v3,v4,v5)
VALUES
(v1,v2,v3,v4,@data)
sqlcmd -S servername -E -i .\file.sql -v data = "$binarydata"
Is there a workaround for doing this?

In a SQL query/batch/.sql file, binary/varbinary/image literal data values must be in hexadecimal format with a 0x prefix:
INSERT INTO tableName ( binaryColumn ) VALUES ( 0x1234567890ABCDEF )
I don't know what the maximum length of a binary literal is, but I suspect things might stop working, or become very slow, once you go beyond a few hundred kilobytes.
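If the archive is small, one way around the command-line length limit (a rough sketch, not tested; the file path and the first four values are placeholders) is to build the hex literal in PowerShell and write the whole INSERT to a .sql file, so the data never appears on the command line:
$bytes = [System.IO.File]::ReadAllBytes('C:\path\to\archive.zip')   # placeholder path
$hex = '0x' + [System.BitConverter]::ToString($bytes).Replace('-', '')
"INSERT INTO MyTable (v1,v2,v3,v4,v5) VALUES (1, 2, 3, 4, $hex)" | Set-Content .\insert.sql   # 1-4 are placeholder values
sqlcmd -S servername -E -i .\insert.sql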
I recommend using ADO.NET directly via PowerShell, which will also let you use binary parameter values (SqlParameter): How do you run a SQL Server query from PowerShell?
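For example, a minimal sketch (server, database, and the first four column values are placeholders; adjust to your environment):
# Read the zip file as a byte[] and send it as a varbinary(MAX) parameter
$bytes = [System.IO.File]::ReadAllBytes('C:\path\to\archive.zip')
$conn = New-Object System.Data.SqlClient.SqlConnection('Server=servername;Database=MyDatabase;Integrated Security=True')
$cmd = $conn.CreateCommand()
$cmd.CommandText = 'INSERT INTO MyTable (v1,v2,v3,v4,v5) VALUES (@v1,@v2,@v3,@v4,@data)'
[void]$cmd.Parameters.AddWithValue('@v1', 1)   # placeholder values
[void]$cmd.Parameters.AddWithValue('@v2', 2)
[void]$cmd.Parameters.AddWithValue('@v3', 3)
[void]$cmd.Parameters.AddWithValue('@v4', 4)
$p = $cmd.Parameters.Add('@data', [System.Data.SqlDbType]::VarBinary, -1)   # -1 means MAX
$p.Value = $bytes
$conn.Open()
[void]$cmd.ExecuteNonQuery()
$conn.Close()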

PSQL lo_import in client side script

We have a simple SQL script we maintain that sets up the schema and populates a set of text/example values - so it's just a series of create table and insert into statements - and we run it with a simple shell script which calls psql.
One of our tables requires files - what I wanted to do was just keep the files in the same directory as the script and do something like insert into repository (id, picture) values ('first', lo_import('first.jpg')).
But I get errors saying I must be superuser to use server-side lo_import. Is there any way I can achieve this? I have just a .sql file and a bunch of image files - can I import them just by running psql against the file?
Running as superuser is not an option.
Using psql, you could write a shell script like
oid=`psql -At -c "\lo_import 'first.jpg'" | tail -1 | cut -d " " -f 2`
psql -Aqt -c "INSERT INTO repository (id, picture) values ('first', $oid)"
because comments can't have code - thanks to Laurenz, I got it "working" like this:
drop table if exists some_landing_table;
create table some_landing_table( load_time timestamp, filename varchar, data bytea);
\set the_file 'example.jpg';
\lo_import 'example.jpg';
insert into some_landing_table
select now(), 'example.jpg', string_agg(data,decode('','escape') order by pageno)
from
pg_largeobject
where
loid = (select max(loid) from pg_largeobject);
select lo_unlink( max(loid) ) from pg_largeobject;
However, that is ugly for three reasons:
I don't seem to be able to get the result of \lo_import into a variable in any way. Even though select \lo_import filename works, select \lo_import filename into x doesn't.
I can't use a variable - if I do \lo_import :the_file - it just says example.jpg doesn't exist, even though if I put the filename in directly it works perfectly.
I can't find a simpler way of providing a zero-length bytea value than decode('','escape').

Why does a database query using sqlcmd produce just an empty file while the same query works in Management Studio?

I recently made a huge MSSQL script which was working until the server environment changed and some queries are no longer allowed. So I had to make a .bat file which executes this query from the command line.
I get no error or anything else, I just get a file with no entries. But I receive a lot of entries if I use the same query in Management Studio.
Does somebody see where the mistake in my command line is?
I inserted some line breaks to make the code easier to read. Everything except the PAUSE command is on one line in the batch file.
EDIT: I figured out that the problem is the LIKE operator in the last WHERE clause. If I take out the LIKE operator it works. It has nothing to do with the % character, it is definitely the LIKE operator. Does anybody know how to fix that?
sqlcmd -S connection\string -U user -P password -d dbName -s";" -Q "SET NOCOUNT ON;
SELECT [per_nummer] as EmployeeID,
[per_id] as System_nr,
[per_pid] as PID,
[per_anrede] as Gender,
[per_vname] as Vorname,
[per_name] as Name,
[per_telEx] as Telefon,
[per_email] as Email,
[per_instradierungHauptort] as Instradierung,
[per_gebnr] as Gebaeudenummer,
(SELECT mobileTelephoneNumber FROM [dbName].[dbo].[import_zuko_GLDAP_DUMP] WHERE UserID = per_pid ) as Mobile,
[per_business_area] as Business_Area,
(SELECT csgdivision FROM [dbName].[dbo].[import_zuko_GLDAP_DUMP] WHERE UserID = per_pid ) as Division,
NULL as Bereich,
(SELECT csgCompany FROM [dbName].[dbo].[import_zuko_GLDAP_DUMP] WHERE UserID = per_pid ) as Firma,
[per_sprache] as Korespondenzsprache,
(SELECT roomnumber FROM [dbName].[dbo].[import_zuko_GLDAP_DUMP] WHERE UserID = per_pid ) as Bueronummer,
[per_floor] as Etage,
(SELECT convert(varchar, convert(date,[per_eintrittsdatum]), 104)) as Eintritt_per,
(SELECT convert(varchar, convert(date,[per_austrittsdatum]), 104)) as Austritt_per,
CONVERT(VARCHAR(10), GETDATE(), 104) AS letzte_mutation,
per_lm as Linemanager,
(SELECT TOP 1 per_pid FROM [dbName].[dbo].[person] WHERE per_nummer = master_table.per_lm) AS lm_pid,
(SELECT convert(varchar, convert(date,[per_lm_von]), 104)) as lm_von,
(SELECT convert(varchar, convert(date,[per_lm_bis]), 104)) as lm_bis,
NULL FROM [dbName].[dbo].[person] as master_table
WHERE
[per_nummer] IS NOT NULL AND per_pidStatus = 'A' AND
([per_pid] LIKE('A%')OR [per_pid] LIKE('F%')OR [per_pid] LIKE('W%'))"
-w 1000 -W -o "\\servername\G$\path\to\file.csv"
PAUSE
Different default options
Have you carefully read the sqlcmd Utility documentation by Microsoft?
Right at the top there is this important information:
Because different default options may apply, you might see different behavior when you execute the same query in SQL Server Management Studio in SQLCMD Mode and in the sqlcmd utility.
Therefore the empty file could be caused by different default options.
Space between option and value of option
Further, the documentation also states near the top:
Currently, sqlcmd does not require a space between the command line option and the value. However, in a future release, a space may be required between the command line option and the value.
Double quotes inside an argument string are often problematic; see the answer on Why double quotes should be always only at beginning and end of an argument string?
Therefore I suggest following the advice of Microsoft and changing -s";" to -s ";" with a space character after -s, as done for all the other options, too.
Escaping percentage sign
Have you tried escaping each % in the query string with an additional %, i.e. using %% in the three places it occurs in the query string?
The command-line interpreter cmd.exe can interpret the string between two % as a reference to an environment variable and, as there is no such environment variable, remove everything between the two % signs from the command line.
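For example, the last clause of the query string in the batch file would then read (matching the three LIKE patterns above):
([per_pid] LIKE('A%%') OR [per_pid] LIKE('F%%') OR [per_pid] LIKE('W%%'))"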
Update: It turned out that the unescaped percent signs indeed resulted in a wrong query string and therefore in an empty results file.
Command with path and extension
General hint:
It is always advisable to specify commands like sqlcmd in batch files by their full name, which means with path and file extension, as this makes the execution of the application independent of the values of the environment variables PATH and PATHEXT.

Passing variables between a batch file and a .sql file for MS SQL Server

I am attempting to create a batch file in Windows that will take a user's input and pass it along to a .sql file containing the following query, so that I can set a siteid:
exec sp_addlinkedserver [sqlserver1]
select * from [sqlserver1].onesource.dbo.admsites where siteid = '123'
I then want to take the result of this query, particularly the admsiteid, and insert that into the originatorid using another .sql file:
Use Onesource
update OSCsettings set originatorid = 'whatever-the-admsiteid-is'
How would I go about passing along these variables?
Use sqlcmd with the -v command-line option:
-v var = "value"
You can specify multiple variables in the list.
See:
http://msdn.microsoft.com/en-us/library/ms162773.aspx
and
http://msdn.microsoft.com/en-us/library/ms188714.aspx
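For example, a rough PowerShell sketch (the same commands work from a .bat file); the file names getsite.sql and update.sql, the server name, and the output handling with -h -1 -W are assumptions, and the [sqlserver1] linked server is assumed to already exist:
# getsite.sql returns the admsiteid for the siteid passed in via -v;
# sqlcmd replaces $(siteid) and $(originatorid) before running the scripts
@'
SET NOCOUNT ON;
SELECT admsiteid FROM [sqlserver1].onesource.dbo.admsites WHERE siteid = '$(siteid)';
'@ | Set-Content .\getsite.sql

@'
USE Onesource;
UPDATE OSCsettings SET originatorid = '$(originatorid)';
'@ | Set-Content .\update.sql

# Run the first script, capture the single value it returns, and feed it to the second
$admSiteId = sqlcmd -S yourserver -E -h -1 -W -i .\getsite.sql -v siteid="123" | Select-Object -First 1
sqlcmd -S yourserver -E -i .\update.sql -v originatorid="$admSiteId"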

Cyrillic symbols in SQL code are not stored correctly after insert

I use SQL Server 2005 and I am trying to store Cyrillic characters, but I can't when I run this SQL code in SQL Server:
INSERT INTO Assembly VALUES('Македонски парлиамент број 1','','');
The same problem happens from C#, but inserting/updating the column directly in SQL Server works and the text is stored normally.
The datatype of the column is nvarchar.
You have to add the N prefix before your string.
A string literal without a prefix is treated as varchar by default. Adding the prefix N denotes that the subsequent string is Unicode (nvarchar).
INSERT INTO Assembly VALUES(N'Македонски парлиамент број 1','','');
Here is some reading:
http://databases.aspfaq.com/general/why-do-some-sql-strings-have-an-n-prefix.html
https://msdn.microsoft.com/en-IN/library/ms186939.aspx
What is the meaning of the prefix N in T-SQL statements?
I'm not sure if you are doing a static stored procedure or scripting, but maybe the text is not being encoded properly when you save it to disk. I ran into this, and my problem was solved in PowerShell by correcting the encoding of the SQL that I saved to disk for osql processing:
Out-File -FilePath "MyFile.sql" -InputObject $MyRussianSQL -Encoding "Unicode" -Force;
& osql -U myuser -P password -i "MyFile.sql";

Execute SQL from file in bash

I'm trying to load SQL from a file in bash and execute the loaded SQL. The SQL file needs to stay versatile, meaning it cannot be altered just to make it easier to run from bash (e.g. by escaping special characters like *).
So I have run into some problems:
If I read my sample.sql
SELECT * FROM SAMPLETABLE
to a variable with
ab=`cat sample.sql`
and execute it
db2 `echo $ab`
I receive an SQL error, because the * has been replaced by the names of all the files in the directory of sample.sql.
An easy solution would be to replace * with \*, but I cannot do this because the file needs to stay usable in programs like DB Visualizer etc.
Could someone give me a hint in the right direction?
The DB2 command line processor has options that accept a filename as input, so you shouldn't need to load statements from a text file into a shell variable.
This command will execute all SQL statements in the file, with newline treated as the statement terminator:
db2 -f sample.sql
This command will execute all SQL statements in the file, with semicolon treated as the statement terminator:
db2 -t -f sample.sql
Other useful CLP flags are:
-x : Suppress the column headings
-v : Echo the statement text immediately before execution
-z : Tee a copy of all CLP output to the filename immediately following this flag
Redirect stdin from the file.
db2 < sample.sql
In case you have a variable used in your script and want it to be replaced by the shell before the SQL is executed by DB2, use this approach:
Contents of File.sql:
cat <<xEOF
insert into ${MY_SCHEMA}.${MY_TABLE} values(1,2);
select * from ${MY_SCHEMA}.${MY_TABLE};
xEOF
In command prompt do:
export MY_SCHEMA='STAR'
export MY_TABLE='DIMENSION'
Then you are all good to get it executed in DB2:
sh File.sql | db2 +p -t
The shell will expand the exported variables when File.sql is run, and DB2 will then execute the resulting SQL.
Hope it helps.