SQL Server export stored procedure results to CSV file keeping char(9) format

I'm having a problem with exporting the results of a stored procedure to a .csv file while keeping each result as a 9-character string. The stored procedure returns a simple one-column result set that looks fine when executed in SSMS, but in the returned .csv any values that have leading zeros come back without the zeros. The table column is of type varchar(13), and I have added a convert to try to keep the leading zeros from being dropped, but no luck.
Here is the stored procedure:
SELECT DISTINCT
    CONVERT(char(8), n.NIIN)
FROM IMMS_ELEC.dbo.NIINList n
Here is the simple BCP script I'm using:
DECLARE @string AS NVARCHAR(4000)
SELECT @string = 'BCP "exec CPLINK_Dev.dbo.spSelectLOG_NiinDistinct" queryout C:\data.csv -c -T -t,'
exec master.dbo.xp_cmdshell @string

Excel loves to think it knows how to format your data better than you do... Here's a trick you can use to outsmart it: open a new spreadsheet, select all cells and change their type from General to Text, then copy your data from Notepad (or SSMS), paste it into Excel, and use Text to Columns if you have to. Excel should stop messing with your formats then.
You can probably also do the same thing with Excel's Import Data feature, but I find the first approach much more effective, and you can use it to copy and paste directly from SSMS grid results as well.
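As an aside (my addition, not part of the original answer): you can also make the values Excel-proof at the source by emitting each one as a quoted formula, which Excel always treats as text. A minimal sketch against the query above:
SELECT DISTINCT
    '="' + CONVERT(char(8), n.NIIN) + '"' -- Excel renders ="00123456" as the text 00123456
FROM IMMS_ELEC.dbo.NIINList n
The leading zeros then survive a plain double-click open of the .csv, at the cost of the raw file containing the ="..." wrappers.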

Related

Save output into Varbinary(max) format

Using this SQL statement, I pull a locally stored Excel sheet into SQL Server.
SELECT *
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'Excel 12.0;Database=C:\SampleTestFiles\outputFile.xlsx;HDR=YES',
                [Sheet1$])
And I get the complete Excel sheet contents, with multiple rows, in the output console.
Now I want to save the Excel file itself into a database column of datatype varbinary(max).
Can anyone please let me know how to do that?
Use application code, like C#, to read the file and then insert the binary data into your varbinary(max) column in the database.
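Alternatively, if you would rather stay inside T-SQL, OPENROWSET with the BULK provider and SINGLE_BLOB reads a whole file as one varbinary(max) value. A minimal sketch (dbo.ExcelFiles and its columns are hypothetical):
-- hypothetical target: CREATE TABLE dbo.ExcelFiles (FileName varchar(260), FileData varbinary(max));
INSERT INTO dbo.ExcelFiles (FileName, FileData)
SELECT 'outputFile.xlsx', BulkColumn
FROM OPENROWSET(BULK 'C:\SampleTestFiles\outputFile.xlsx', SINGLE_BLOB) AS f;
Note that the file is opened by the SQL Server service, so the path and read permissions are evaluated on the server, not on your workstation.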

Export out CSV Files from SQL Based on Dates

I am trying to figure out how to generate multiple .csv exports from a SQL table based on its date field. The date differs across many records, and I would like to export all records that share the same date to one .csv: every time a different date is found, a .csv is exported containing the selected columns for that date. How can I go about creating a script to perform this type of action? Is there a way to have the script go through the date column automatically and, for each set of rows sharing a date, export the four selected fields to their own .csv?
Example:
SELECT Box, Highprice, Lowprice, Date
FROM YourDB.dbo.YourTbl
WHERE Date = '2016-01-31'
As others suggest, this is not a good method, and there are a few issues related to it:
By default, SQL Server does not grant permission to execute xp_cmdshell.
The files are generated on the server, so you need the proper access privileges there.
I gave you the way to do it below for a static date; you may need to write a cursor to loop over all the days that are needed (a sketch of such a loop follows the code).
DECLARE @sql VARCHAR(4000)
SELECT @sql = 'bcp "SELECT Box, Highprice, Lowprice, Date FROM YourDB.dbo.YourTbl WHERE Date = ''2016-01-31''" queryout c:\temp\CSV2016-01-31.csv -c -t, -T -S' + @@servername
exec master..xp_cmdshell @sql
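A minimal sketch of such a cursor loop, using the same hypothetical table and columns as above (assumes xp_cmdshell is enabled and the bcp output path is writable):
DECLARE @d VARCHAR(10), @sql VARCHAR(4000);

DECLARE dates_cur CURSOR FOR
    SELECT DISTINCT CONVERT(VARCHAR(10), Date, 120) FROM YourDB.dbo.YourTbl;

OPEN dates_cur;
FETCH NEXT FROM dates_cur INTO @d;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- one csv per distinct date, named after that date
    SELECT @sql = 'bcp "SELECT Box, Highprice, Lowprice, Date FROM YourDB.dbo.YourTbl'
                + ' WHERE Date = ''' + @d + '''" queryout c:\temp\CSV' + @d
                + '.csv -c -t, -T -S' + @@servername;
    EXEC master..xp_cmdshell @sql;
    FETCH NEXT FROM dates_cur INTO @d;
END;

CLOSE dates_cur;
DEALLOCATE dates_cur;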

BULK INSERT with two row terminators

I am trying to import a text file so that the result is just the words, each in a separate row of one column.
For example a text:
'Hello Mom,
we meet again'
should give 5 records:
'Hello'
'Mom,'
'we'
'meet'
'again'
I tried to accomplish this with BULK INSERT with ROWTERMINATOR = ' ', but the problem is that a new line needs to be treated as a terminator too, and I get 'Mom,we' in one of the results.
From what I know, there is no way to add a second ROWTERMINATOR to BULK INSERT (true?).
What is the best way you know to achieve the result as specified above?
The file cannot be preprocessed outside of SQL Server, and the method should be useful for hundreds of files with thousands of lines of words, imported at different times, not just once.
Given:
The file cannot be preprocessed outside of SQL Server
Option 1
Why not use OPENROWSET(BULK...)? This would allow you to import/insert (which takes care of the row terminator) while at the same time splitting (which takes care of the field terminator). Depending on whether or not you can create a Format File, it should look something like one of the following:
Format File = split each row
INSERT INTO dbo.TableName (ColumnName)
SELECT split.SplitVal
FROM OPENROWSET(BULK 'path\to\file.txt',
                FORMATFILE='Path\to\FormatFile.XML') data(eachrow)
CROSS APPLY SQL#.String_Split(data.eachrow, N' ', 2) split;
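For reference, a minimal sketch of what that XML format file could look like (hypothetical; it simply describes each line of the text file as one varchar column named eachrow):
<?xml version="1.0"?>
<BCPFORMAT xmlns="http://schemas.microsoft.com/sqlserver/2004/bulkload/format"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <RECORD>
    <!-- one field per line, terminated by the line break -->
    <FIELD ID="1" xsi:type="CharTerm" TERMINATOR="\r\n" MAX_LENGTH="8000"/>
  </RECORD>
  <ROW>
    <COLUMN SOURCE="1" NAME="eachrow" xsi:type="SQLVARYCHAR"/>
  </ROW>
</BCPFORMAT>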
No Format File = split entire file as a single row
INSERT INTO dbo.TableName (ColumnName)
SELECT split.SplitVal
FROM OPENROWSET(BULK 'path\to\file.txt', SINGLE_CLOB) data(allrows)
CROSS APPLY SQL#.String_Split(
REPLACE(data.allrows, NCHAR(10), N' '),
N' ',
2 -- remove empty entries
) split;
Notes:
For both methods you need to use a string splitter. SQLCLR-based splitters are the fastest, and in the examples above I used one from the SQL# library (which I created, but the String_Split function is available in the Free version). You can also write your own. If you do write your own and are not using a Format File, it might be a good idea to allow for multiple split characters so you can pass in both " " and "\n" and get rid of the REPLACE().
If you can write your own SQLCLR string splitter, then it might be even better to just write a SQLCLR stored procedure that accepts an input parameter for @FilePath, reads the file, does the splitting, and spits out the words as many rows of a single column:
INSERT INTO dbo.TableName(ColumnName)
EXEC dbo.MySQLCLRproc(N'C:\path\to\file.txt');
If you are not using (or cannot use) a Format File, then be sure to use the proper "SINGLE_" option, as you can do either SINGLE_CLOB (returns VARCHAR(MAX) for a standard ASCII file) or SINGLE_NCLOB (returns NVARCHAR(MAX) for a Unicode file).
Even if you can create a Format File, it might be more efficient to pull in the entire file as a single string, depending on the size of the files: splitting one large string can be done rather quickly and is a single function call, whereas a file of thousands of short lines means thousands of function calls that are also fast, but likely not 1000 times faster in total than the single call. But if the file is 1 MB or larger, I would probably still opt for the Format File and processing it as many short lines.
Option 2
If by "pre-processed" you mean altered, but that there is no restriction on simply reading them and inserting the data from something external to SQL Server, you should write a small .NET app which reads the rows, splits the lines, and inserts the data by calling a Stored Procedure that accepts a Table-Valued Parameter (TVP). I have detailed this approach in another answer here on S.O.:
How can I insert 10 million records in the shortest time possible?
This could be compiled as a Console App and used in batch (i.e. .CMD / .BAT) scripts and even scheduled as a Windows Task.
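For reference, a minimal sketch of the SQL side of that TVP approach (the type, procedure, and target names are hypothetical; the .NET app passes the already-split words as the @Words parameter):
CREATE TYPE dbo.WordList AS TABLE (Word NVARCHAR(4000));
GO
CREATE PROCEDURE dbo.ImportWords (@Words dbo.WordList READONLY)
AS
BEGIN
    SET NOCOUNT ON;
    -- single set-based insert, no matter how many words the app streams in
    INSERT INTO dbo.TableName (ColumnName)
    SELECT Word
    FROM @Words;
END;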

Wildcard character in field delimiter - reading .csv file

A .csv file gets dumped nightly onto my FTP server by an external company. (Thus I have no control over its format, nor will they change it as it's used by several other companies.)
My mission is to create a job that runs to extract the info from the file (and then delete it) and insert the extracted data into a SQL table.
This is the format of the info contained within the .csv file:
[Message]Message 1 contains, a comma,[Cell]27747642512,[Time]3:06:10 PM,[Ref]144721721
[Message]Message 2 contains,, 2 commas,[Cell]27747642572,[Time]3:06:10 PM,[Ref]144721722
[Message],[Cell]27747642572,[Time]3:06:10 PM,[Ref]144721723
I have a SQL Server 2012 table with the following columns:
Message varchar(800)
Cell varchar(15)
Time varchar(10)
Ref varchar(50)
I would like to use something like the SQL bulk insert (see below) to read from the .csv file and insert into the SQL table above.
BULK INSERT sms_reply
FROM 'C:\feedback.csv'
WITH
(
FIRSTROW = 2,
FIELDTERMINATOR = ',', --CSV field delimiter
ROWTERMINATOR = '\n', --Use to shift the control to next row
TABLOCK
)
The delimiter in the .csv file is not common. I was wondering if there is a way for me to use a wildcard character when specifying the delimiter, e.g. '[%]'?
This would then ignore whatever was between the square brackets and extract the info in between.
I cannot simply use the commas to delimit the fields as there could be commas in the fields themselves as illustrated in the .csv example above.
Or if anyone has any other suggestions, I'd really appreciate it.
TIA.
My approach would be:
Bulk load the .csv file into a staging table, with all data columns as varchar.
Do a SQL update to get rid of the brackets and their contents.
Do whatever other processing is required.
Insert the data from your staging table into your production table.
A sketch of the load-and-parse steps follows.
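A minimal sketch of those steps, assuming the sms_reply table from the question (the staging table name is hypothetical). The trick is to parse on the unique ',[Tag]' markers rather than on bare commas, so commas inside the message cannot break the split:
CREATE TABLE dbo.sms_staging (line VARCHAR(1000));

-- load each whole line as a single column: no FIELDTERMINATOR at all
BULK INSERT dbo.sms_staging
FROM 'C:\feedback.csv'
WITH (ROWTERMINATOR = '\n', TABLOCK);

-- split on the bracketed tags (assumes the message text never contains ',[Cell]')
INSERT INTO dbo.sms_reply (Message, Cell, Time, Ref)
SELECT
    SUBSTRING(line, 10, CHARINDEX(',[Cell]', line) - 10),  -- value after '[Message]'
    SUBSTRING(line, CHARINDEX(',[Cell]', line) + 7,
              CHARINDEX(',[Time]', line) - CHARINDEX(',[Cell]', line) - 7),
    SUBSTRING(line, CHARINDEX(',[Time]', line) + 7,
              CHARINDEX(',[Ref]', line) - CHARINDEX(',[Time]', line) - 7),
    SUBSTRING(line, CHARINDEX(',[Ref]', line) + 6, 50)
FROM dbo.sms_staging;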
I've managed to overcome this by using a handy function I found here, which comes from an article located here.
This function returns each line in the .csv file and then I use string manipulation to extract the fields between the "[variable_name]" delimiters and insert directly into my SQL table.
Clean, simple and quick.
NOTE: You have to enable OLE Automation Procedures on your SQL Server:
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'Ole Automation Procedures', 1;
GO
RECONFIGURE;
GO

Blank lines in a .csv file created through the SSMS :OUT command

I'm using code like this in SQL Server 2008 R2 Management Studio to write the results of a SELECT statement to a .csv file on a network share:
SET NOCOUNT ON;
GO
:OUT \\163.123.45.678\SomeFolder\mydata.csv
SELECT id, name, surname FROM sometable;
GO
This creates the mydata.csv file at the correct location, but there is an extra blank line at the end of the file. How do I prevent that blank line from being created?
Is the above the best way to write the output of a SQL query to a text file? I can't use BCP.
Thanks.
I think you can use an SSIS package to get the query result into .csv format. Sometimes you may get a blank line or spaces in between two lines; in that case, use a 'Derived Column' transformation task to remove or trim the unwanted spaces and lines.
Can you try the following from the command prompt?
C:\>sqlcmd -S SQLServerHost -W -Q "SET NOCOUNT ON; SELECT TOP 5 date_id, previous_date, next_date FROM dates WHERE month_key = 201212" > testNoLine.txt
The result:
The generated file has no extra line at the bottom (unlike the SSMS execution).
In the end, using an SSIS package was a much better solution.