SQL Server 2008 - TSQL Read CSV file

I am working on a project that basically entails importing a CSV file into a SQL Server 2008 R2 database. The CSV file is generated from an Excel file that a "manager" populates with PR hours for his employees. It also includes some additional information, such as which job and phase the employees were working on, and the number of hours for any equipment (if used).
Once you generate a CSV file from that, it's not exactly the usual straightforward "column"-based CSV file. It's more of a "row"-based CSV file, with each row being somewhat unique. Because of this caveat, I cannot do a straight dump (using BULK INSERT or OPENROWSET) to SQL, which would essentially create a (temp) table with the appropriate column-filled data.
I am looking to use the fields within the CSV file based on the "location" of each field in the row.
So, basically, the positions of the data will remain the same, since every CSV is based on a TEMPLATE file - all I have to do is navigate through the CSV file using SQL code to find the right field based on its position in the ROW. I hope that gives you guys a better understanding of what I am trying to achieve here. Sorry for the long wall of text.
I researched a bit and here's what I have come up with so far:
Reads CSV files into a temp table through a custom SQL function (Reading lines from a file)
https://www.simple-talk.com/sql/t-sql-programming/reading-and-writing-files-in-sql-server-using-t-sql/
This one is interesting. Dumps the whole file as a BLOB and then you can sift through the data.
http://www.mssqltips.com/sqlservertip/1643/using-openrowset-to-read-large-files-into-sql-server/
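The gist of that one, as far as I can tell, is a one-liner (the path below is just a placeholder):

    -- Pull the whole file in as a single value: SINGLE_CLOB for text,
    -- SINGLE_BLOB for binary. The path is hypothetical.
    SELECT BulkColumn AS FileContents
    FROM OPENROWSET(BULK 'C:\import\pr_hours.csv', SINGLE_CLOB) AS f;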
Finally, this one essentially splits out the rows and creates separate records per row. Interesting.
http://ask.sqlservercentral.com/questions/17408/how-to-read-a-text-file.html
If anyone has any suggestions or steps that I could follow to get through this, I would greatly appreciate it.
To the Mods: If I have posted something (especially the links) that shouldn't be here, please feel free to remove it. I apologize if I did.
Thanks much.. Hope to hear some positive responses! :)
Warm Regards,
Pranav

If the file is not too large, another option is to post-process the file in Excel using a VBA macro. Of course, you'd need to come up to speed with the Excel object model and VBA, but the macro recorder makes that fairly simple. One advantage of the VBA approach is that it seems you really do want row-by-row processing, and VBA is better for that, whereas SQL is better for set-based operations.
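If you'd rather stay in T-SQL, the staging-table route from the third link can be sketched as follows. This is a minimal sketch, assuming a one-column staging table; the file path, terminators, and target field position are placeholders, not values from the actual template.

    -- Minimal sketch: stage whole lines, then pick a field by position.
    CREATE TABLE #RawLines (LineText NVARCHAR(4000));

    BULK INSERT #RawLines
    FROM 'C:\import\pr_hours.csv'       -- hypothetical path
    WITH (ROWTERMINATOR = '\n',         -- or '\r\n', depending on the export
          FIELDTERMINATOR = '\0');      -- a character that never occurs, so
                                        -- each whole line lands in one column

    -- Walk one line and print its Nth comma-separated field (here the 3rd).
    DECLARE @line NVARCHAR(4000);
    SELECT TOP (1) @line = LineText FROM #RawLines;

    DECLARE @padded NVARCHAR(4000) = @line + ',';   -- sentinel comma
    DECLARE @pos INT = 1, @fieldNo INT = 0, @next INT;
    WHILE @pos <= LEN(@padded)
    BEGIN
        SET @next = CHARINDEX(',', @padded, @pos);
        SET @fieldNo = @fieldNo + 1;
        IF @fieldNo = 3                             -- the field you're after
            PRINT SUBSTRING(@padded, @pos, @next - @pos);
        SET @pos = @next + 1;
    END

Once each raw line is a row in a table, picking a field by its position in the row is plain string work, which fits the "navigate by position" requirement.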

Related

Outgrew Google Sheets but do not have expertise in SQL. Is there an interim solution?

Our nonprofit uses Google Sheets to transform data. The first file has the raw data, which comes to us as a CSV. Data gets passed from one file to another with =importrange. Intermediate files transform various parts of it with a lot of Google Sheets formulas such as =split, =vlookup, =if, =textjoin, =concatenate, etc. The final file has the data in the form we can use to create pages on our website.
The first file has about 150 columns. The new 10M-cell limit should let us get about 60k rows, but even that number freezes up, and we need to get up into the millions of rows. All of the transformer files together add up to about 3k columns.
We assume that the ultimate solution is to re-create it all in a SQL database, but we do not have any expertise of that type, nor the funding to hire someone.
Is there an easy way to transform a Google Sheet (with formulas) into a SQL file?
Is there an easy interim solution, which we can use for a while?

What is the best way to approach transferring an 'empty row delimited' Excel spreadsheet into two tables in a SQL database?

I have a collection of Excel spreadsheets that are formatted... less than ideally.
I'm testing out some solutions involving SQLBulkCopy and OleDB, but I'm a bit concerned about how to handle the format of this sheet.
I was considering writing a custom Insert statement, but would like to see if there may be some easier way to implement a heuristic.
Below is a sample of the data I will be parsing:
The highlighted columns are the ones I'll be loading into the two tables. One table will hold order #s, and the other table will hold all the lines below that order number.
Any suggestions on tackling this would be lovely. The Excel sheets are hand-entered, so some weird cases exist (e.g., one order number with multiple carriers), which raises the question of whether I should treat the first row with the order number as a line in the database structure I designed.
I'm implementing this importer in VB.NET, to my dismay, to avoid being looked at funny by my coworkers :).
One approach would be to save the worksheet to a text file (e.g., CSV) and then use AWK to split it at the empty row. Some examples are in this SO answer: Bash how to split file on empty line with awk
You could then import the CSV files directly into the database.
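The import of each split file could then be a plain BULK INSERT; the table name and path here are illustrative only:

    -- Load one of the files AWK produced (hypothetical names/paths).
    BULK INSERT dbo.OrderLines
    FROM 'C:\import\orders_block1.csv'
    WITH (FIELDTERMINATOR = ',',
          ROWTERMINATOR = '\n');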
Amusingly, if I wrote anything in VB.NET I'd definitely get looked at funny by my coworkers.
So I'd use a library called EPPlus to read the Excel file and not have to worry about converting it. How you do the blank-line detection is an open question; checking that the Value of ten cells on the row is Nothing or Empty would suffice. Then take the next row as your parent, and proceed with subsequent rows as children until the next blank row.
Take a look at this answer for more info on how to detect blank rows in Excel; if you get stuck turning any of the C# into VB, shoot us a question. Online converters exist because the two languages are the same thing under the hood.

Filter certain SQL data formatted in one column into a new column

Before I begin: of all the research I have done, I found this to be the most relevant.
How to split the data from one column into separate columns using the contents of another column in SQL
Attached are pictures of my progress so far. How can I display this information as it is shown in the Excel file, without disrupting the GROUP BY in my query?
It's a Fishbowl database, newest version. I am running the queries through FlameRobin, which you can see in the picture. I'm trying to organize the query so it displays correctly, so I can format it in 'iReports' and export it to an Excel spreadsheet like the one shown. Maybe some part of this would be better done in Excel?
Notice that the numbers for Qty are different; that's OK right now.
My reputation is too low to post pictures, I am sorry. Here are the two JPGs in my Dropbox. I really appreciate the help.
https://www.dropbox.com/sh/r2rw5r2awsyvzs9/AAAXXg27CMPOYtZFqPX3Dx6la?dl=0

Help Importing CSV file with Variable Columns per Row into SQL Table using Import tool or SSIS

I am stuck with a CSV file of over 100,000 rows that contains product image data from a provider. Here are the details of the issue; I would really appreciate some tips to help resolve this. Thanks.
The file has one row per product and the following four columns:
ID,URL,HEIGHT,WIDTH
example: 1,http://i.img.com,100,200
The problem starts when a product has multiple images.
Instead of having one row per image, the file has more columns in the same row.
example:
1,http://i.img.com,100,200,//i.img.com,20,100,//i.img.com,30,50
Note that only the first image has "http://"; the remaining images start with "//".
There is no telling how many images there are per product, hence no way to tell how many total columns per row, or the maximum number of columns.
How can I import this using SSIS or the SQL import wizard?
Also, I need to do this at regular intervals.
Thank you for your help.
I don't think you can use any standard SSIS task or wizard to do this. You're going to have to write some custom code that parses each line. You can do this in SSIS using VB code, or you can import the file into a staging table that's just a single column holding each row and do the parsing in SQL. SSIS will probably be faster for this kind of operation.
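To make that staging-table route concrete, here is a minimal T-SQL sketch under stated assumptions: all table and column names are made up, the file has already been loaded one raw line per row, and the fields never contain embedded commas. It splits each line on commas with a recursive CTE, then folds field 1 into the product ID and every following group of three into one image row (URL, HEIGHT, WIDTH).

    -- Stage: one raw CSV line per row (all names here are hypothetical).
    CREATE TABLE #RawLines (LineId   INT IDENTITY(1,1) PRIMARY KEY,
                            LineText VARCHAR(8000));
    -- ... load the file one line per row (SSIS, or BULK INSERT with a
    --     format file that skips the LineId column) ...

    CREATE TABLE #Fields (LineId INT, Ordinal INT, Value VARCHAR(8000));

    -- Split every line into ordered fields with a recursive CTE.
    WITH Split AS
    (
        SELECT LineId,
               Ordinal = 1,
               Value   = LEFT(LineText, CHARINDEX(',', LineText + ',') - 1),
               Rest    = STUFF(LineText, 1, CHARINDEX(',', LineText + ','), '')
        FROM #RawLines
        UNION ALL
        SELECT LineId,
               Ordinal + 1,
               LEFT(Rest, CHARINDEX(',', Rest + ',') - 1),
               STUFF(Rest, 1, CHARINDEX(',', Rest + ','), '')
        FROM Split
        WHERE Rest > ''
    )
    INSERT INTO #Fields (LineId, Ordinal, Value)
    SELECT LineId, Ordinal, Value
    FROM Split
    OPTION (MAXRECURSION 0);

    -- Fold: ordinal 1 is the product ID; ordinals 2-4, 5-7, ... are
    -- (URL, HEIGHT, WIDTH) triplets, one output row per image.
    SELECT ProductId = id.Value,
           Url       = MAX(CASE WHEN (f.Ordinal - 2) % 3 = 0 THEN f.Value END),
           Height    = MAX(CASE WHEN (f.Ordinal - 2) % 3 = 1 THEN f.Value END),
           Width     = MAX(CASE WHEN (f.Ordinal - 2) % 3 = 2 THEN f.Value END)
    FROM #Fields AS f
    JOIN #Fields AS id
      ON id.LineId = f.LineId AND id.Ordinal = 1
    WHERE f.Ordinal > 1
    GROUP BY id.Value, f.LineId, (f.Ordinal - 2) / 3;

Since this needs to run at regular intervals, the whole thing can live in a stored procedure that a SQL Agent job (or an SSIS Execute SQL task) calls after each file arrives.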
Another possibility is to preprocess the file using a regex or a search-and-replace command. Try to get double quotes around the image list; then you should be able to import the whole file fine, with the quoted part going into a single column. Catching the start of the string should be easy enough given the "http://" you can search for. Determining where the end quote goes might be more of a problem.
A third potential solution would be to get the source to fix the data. Even if you can't get the images in separate rows (or another file with separate rows, which would be ideal), maybe you can get the double-quotes added from the source as part of the export. This would likely be less error-prone than using the search-and-replace method.
Good luck!

How can you parse an Excel (.xls) file stored in a varbinary in MS SQL 2005?

Problem
How best to parse/access/extract "Excel file" data stored as binary data in a SQL 2005 field?
(So that all the data can ultimately be stored in other fields of other tables.)
Background
Basically, our customer requires a large volume of verbose data from their users. Unfortunately, our customer cannot require any kind of DB export from their users, so our customer must supply some sort of UI for their users to enter the data. The UI our customer decided would be acceptable to all of their users was Excel, as it has a reasonably robust UI. Given all that, our customer needs this data parsed and stored in their DB automatically.
We've tried to convince our customer that the users will do this exactly once and then insist on DB export! But the customer cannot require DB export of their users.
Our customer is requiring us to parse an Excel file
The customer's users are using Excel as the "best" user interface to enter all the required data
The users are given blank Excel templates that they must fill out
These templates have a fixed number of uniquely named tabs
These templates have a number of fixed areas (cells) that must be completed
These templates also have areas where the user will insert up to thousands of identically formatted rows
When complete, the Excel file is submitted by the user via a standard HTML file upload
Our customer stores this file raw in their SQL database
Given
A standard Excel (".xls") file (native format, not comma- or tab-separated)
The file is stored raw in a varbinary(max) SQL 2005 field
The Excel file data may not necessarily be "uniform" between rows -- i.e., we can't just assume one column is all the same data type (e.g., there may be row headers, column headers, empty cells, different "formats", ...)
Requirements
Code completely within SQL 2005 (stored procedures, SSIS?)
Be able to access values on any worksheet (tab)
Be able to access values in any cell (no formula data or dereferencing needed)
Cell values must not be assumed to be "uniform" between rows -- i.e., we can't just assume one column is all the same data type (e.g., there may be row headers, column headers, empty cells, formulas, different "formats", ...)
Preferences
No filesystem access (no writing temporary .xls files)
Retrieve values in a defined format (e.g., an actual date value instead of a raw number like 39876)
My thought is that anything can be done, but there is a price to pay. In this particular case, the price seems to be too high.
I don't have a tested solution for you, but I can share how I would take a first crack at a problem like this.
My first approach would be to install Excel on the SQL Server machine, write some assemblies that consume the file from your rows using the Excel API, and then load them into SQL Server as CLR procedures.
As I said, this is just an idea; I don't have the details, but I'm sure others here can complement or criticize it.
But my real advice is to rethink the whole project. It makes no sense to read tabular data from binary files stored in a cell of a row of a database table.
This looks like an "I wouldn't start from here" kind of a question.
The "install Excel on the server and start coding" answer looks like the only route, but it simply has to be worth exploring alternatives first: it's going to be painful, expensive and time-consuming.
I strongly feel that we're looking at a "requirement" that is the answer to the wrong problem.
What business problem is creating this need? What's driving that? Try the Five Whys as a possible way to explore the history.
It sounds like you're trying to store an entire database table inside a spreadsheet and then inside a single table's field. Wouldn't it be simpler to store the data in a database table to begin with and then export it as an XLS when required?
Without opening up an instance of Excel and having Excel resolve worksheet references, I'm not sure it's doable at all.
Could you write the varbinary to a Raw File Destination? And then use an Excel Source as your input to whatever step is next in your precedence constraints.
I haven't tried it, but that's what I would try.
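Along the same lines, if SSIS isn't available, one assumed alternative for getting the bytes onto disk (it does go against the asker's no-filesystem preference) is bcp queryout driven from T-SQL. This is a sketch only:

    -- Sketch: dump the stored workbook back to an .xls file with bcp.
    -- Requires xp_cmdshell; every name and path here is a placeholder, and
    -- exporting varbinary needs a small format file describing one SQLBINARY
    -- field with a zero-length prefix and no terminator.
    DECLARE @cmd VARCHAR(4000);
    SET @cmd = 'bcp "SELECT FileData FROM MyDb.dbo.Documents WHERE DocumentId = 42" '
             + 'queryout "C:\temp\doc42.xls" -T -S MYSERVER -f "C:\temp\rawbin.fmt"';
    EXEC master..xp_cmdshell @cmd;

Once the .xls exists on disk, an SSIS Excel Source, or OPENROWSET with the Jet/ACE provider, can read it.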
Well, the whole setup seems a bit twisted :-) as others have already pointed out.
If you really cannot change the requirements and the whole setup: why don't you explore components such as Aspose.Cells or Syncfusion XlsIO, native .NET components that allow you to read and interpret native Excel (XLS) files? I'm pretty sure that with either of the two, you should be able to read your binary Excel file into a MemoryStream, feed that into one of those Excel-reading components, and off you go.
So with a bit of .NET development and SQL CLR, I guess this should be doable - not sure if it's the best way to do it, but it should work.
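For reference, the T-SQL plumbing for that SQL CLR route would look roughly like this; it's a sketch only, and the assembly, class, and procedure names are all made up:

    -- Register the compiled assembly (built against e.g. Aspose.Cells) and
    -- expose its entry point as a CLR stored procedure. All names are
    -- hypothetical; the database must be TRUSTWORTHY (or the assembly
    -- signed) before an UNSAFE assembly will load.
    CREATE ASSEMBLY XlsParser
    FROM 'C:\clr\XlsParser.dll'
    WITH PERMISSION_SET = UNSAFE;

    CREATE PROCEDURE dbo.ParseStoredWorkbook
        @DocumentId INT
    AS EXTERNAL NAME XlsParser.[XlsParser.Procedures].ParseStoredWorkbook;

    -- The CLR method would read the varbinary(max) into a MemoryStream,
    -- hand it to the Excel-reading component, and write the cell values
    -- out to ordinary tables.
    EXEC dbo.ParseStoredWorkbook @DocumentId = 42;

The actual parsing lives in the .NET method; the T-SQL above is just the wiring that makes it callable from the database.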