Access randomly drops decimals when inserting from txt file - sql

I have a number of .csv files in the following format with example data listed:
ID,Lat,Long
1,-43.76120167,158.0299917
2,-43.76119833,158.03
3,-43.7612,158.0299983
4,-43.76120167,158.0299967
The values change from file to file, but they're always in the same format and of similar magnitude. What you see above is exactly how it appears in the .csv file (when opened with a text editor like Notepad, not Excel, so we can rule out any Excel issues right away).
However, when I run the following INSERT statement as an Access SQL query:
INSERT INTO Table1 SELECT * FROM [Text;Database=C:\;Hdr=Yes].[ImportFile.csv];
This is what shows up in my Access database table:
ID,Lat,Long
1,-43,158
2,-43,158
3,-43,158
4,-43,158
Now, I know what you're thinking. Let me just say that my table design in Access is set up such that ID is a Number/Long Integer, and both my Lat and Long fields are Number/Double with 4 decimal places. I've double-checked this many times, and the table design can't be the culprit because not all input files exhibit this problem.
This is what is troubling me... where are all the digits after my decimal point going? I need to have them.
What confuses me even more is that some files read just fine and the decimal points stay in there just fine... same table, same insert query. Every file is generated from the same source and formatted the same for what it's worth.
However, if I fire up Access itself and run the Import Text Wizard, the values come through just fine. Access automatically makes the fields Doubles with Auto decimal places (I have also tried Auto decimal places in my target table, to no avail).
Anyone have any idea what is going on?
Thanks!

This is a well-known problem.
The Jet database engine infers the column data types from the data source. One solution is to create a Schema.ini file for the import.
Also keep in mind that only the first few rows are scanned to determine each column's data type.
For more information, see the documentation on Schema.ini files.
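For example, a Schema.ini placed in the same folder as the CSV (C:\ in the connection string above) can declare the column types explicitly instead of letting Jet guess them from the first rows. The entries below are only a sketch matching the sample data in the question:

[ImportFile.csv]
Format=CSVDelimited
ColNameHeader=True
Col1=ID Long
Col2=Lat Double
Col3=Long Double

Setting MaxScanRows=0 (scan every row) in Schema.ini is another option, but explicit Col entries avoid the type guessing altogether.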

Related

SSIS importing percentage column from Excel displaying as NULL in database

I have an ETL process set up to take data from an Excel spreadsheet and store it in a database using SSIS. However, one of the columns in the Excel file is formatted as a percentage, and it will sometimes erroneously be stored as a NULL value in the database, as if there were some sort of translation error.
Interestingly, these percent values do load properly on some days, but for some reason one particular Excel sheet I was given as an example of this issue will not load any of them at all when put through the SSIS processor.
In Excel, these values will show up like "50.00%", and when the SSIS processor is able to translate them properly it will display as the decimal equivalent in the database, "0.5", which is what I want instead of the NULL values. The data type I am using in SSIS for this is Unicode string [DT_WSTR], and it is saved as an NVARCHAR in the database.
Any insight as to why these values will sometimes not display/translate as intended? I have tried messing around with the data types in SSIS/SQL Server, but it has either resulted in no change or error. When I put test values in the Excel sheet, such as "test" to see if it is importing anything at all from this column, it does seem to work (just not for the percent numbers that I need).
The issue was caused by the "mixed data types" that were present in the first few rows of my data (the "mixed" part being blank fields), which would explain why some sheets would work and others wouldn't.
https://stackoverflow.com/a/542573/11815822
Adjusting the connection string to account for this fixed the issue.
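In practice (per the linked answer), that usually means adding IMEX=1 to the Excel connection's Extended Properties so that mixed-type columns are read as text rather than guessed from the first few rows. A sketch, with a placeholder path and provider version:

Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Reports\Hours.xlsx;Extended Properties="Excel 12.0 XML;HDR=YES;IMEX=1";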

Changing the length of Text fields in an Access linked table

I am exporting a file from a system as .csv. My aim is to link to this file as a table (which matches the output field for field) and then run the queries and export.
The problem I am having is that, upon import, all the fields are 255 bytes wide rather than what they need to be.
Here's what I've tried so far:
I've looked at ALTER TABLE but I cannot run multiple ALTER TABLE statements in one macro.
I've also tried appending the table into another table with the correct structure but it seems to overwrite the structure.
I've also tried using the Left function with the appropriate field length, but when I try to export, I pretty much just see 5 bytes per column.
What I would like is a suggestion as to what is the best path to take given my situation. I am not able to amend the initial .csv export, and I would like to avoid VBA if possible, as I am not at all familiar with it.
You don't really need to worry about the size of Text fields in an Access linked table that is connected to a CSV file. Access simply assigns each Text field the largest possible maximum size: 255. It does not mean that every value is actually 255 characters long, it just means that any values in those fields can be at most 255 characters long.
Even if you could change the structure of the linked table (which you can't), it wouldn't make any real difference except to possibly truncate longer Text values, and you could easily do that with a String function. For example, if a particular field had to be restricted to 15 characters then you could simply use Left([fieldName], 15) as a query column or as the control source in a report.
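For example, an Access query over the linked table could truncate on the way out; the table and field names here are placeholders, not from the original post:

SELECT ID, Left([Description], 15) AS DescriptionShort
FROM LinkedCsvTable;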
In the end, as the data set is not that large, I have set this up to append from my source data into a table with the correct structure. I can now run my processes against this table as per normal.

SQL Server 2008 - TSQL Read CSV file

I am working on a project that basically entails importing a CSV file into a SQL Server 2008 R2 database. The CSV file is generated from an Excel file that is populated by a "manager" with PR hours for his employees. It also includes some additional information, such as which job and phase the employees were working on, as well as the number of hours for equipment (if used).
Once you generate a CSV file from that, it's not exactly the usual straightforward "column"-based CSV file. It's more like a "row"-based CSV file, with each row being somewhat unique. Because of this caveat, I cannot do a straight dump (using BULK INSERT or OPENROWSET) into SQL, which would essentially create a (temp) table with the appropriate columns filled.
I am looking to use the fields within the CSV file based on the "location" of that field in the row.
So, basically the positions of the data will remain the same, since every CSV is based on a TEMPLATE file - so all I have to do is navigate through the CSV file using SQL code to find the right field based on its position in the row. I hope that gives you guys a better understanding of what I am trying to achieve here. Sorry for the long wall of text.
I researched a bit and here's what I have come up with so far:
Reads CSV files into a temp table through a custom SQL function (Reading lines from a file)
https://www.simple-talk.com/sql/t-sql-programming/reading-and-writing-files-in-sql-server-using-t-sql/
This one is interesting. Dumps the whole file as a BLOB and then you can sift through the data.
http://www.mssqltips.com/sqlservertip/1643/using-openrowset-to-read-large-files-into-sql-server/
Finally, this one essentially splits out the rows and creates separate records per row (a rough sketch of that staging approach follows these links). Interesting.
http://ask.sqlservercentral.com/questions/17408/how-to-read-a-text-file.html
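To make that last idea concrete, here is a minimal T-SQL sketch; the file path, table name, and the assumption that every line contains at least two commas (per the fixed template) are placeholders, not part of the original question:

-- Stage the file one line per row; FIELDTERMINATOR is set to a character
-- that never appears in the file, so each line stays a single string.
CREATE TABLE #RawLines (LineText VARCHAR(MAX));

BULK INSERT #RawLines
FROM 'C:\Import\TimesheetExport.csv'   -- hypothetical path
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n');

-- Carve the second comma-separated field out of each line by position
-- (assumes every line has at least two commas).
SELECT SUBSTRING(LineText,
                 CHARINDEX(',', LineText) + 1,
                 CHARINDEX(',', LineText, CHARINDEX(',', LineText) + 1)
                     - CHARINDEX(',', LineText) - 1) AS SecondField
FROM #RawLines;

The OPENROWSET(BULK ..., SINGLE_CLOB) route from the second link instead pulls the entire file into one value, which you would then split yourself.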
If anyone has any suggestions or steps that I could follow to get through this, I would greatly appreciate it.
To the Mods: If I have posted something (especially the links) that shouldn't be here, please feel free to remove it. I apologize if I did.
Thanks much.. Hope to hear some positive responses! :)
Warm Regards,
Pranav
If the file is not too large, another option is to post-process the file in Excel using a VBA macro. Of course, you'd need to come up to speed using the Excel object model and VBA, but the recording function makes it fairly simple. One advantage of the VBA approach is that it seems you really do want to do row by row processing, and VBA is better for that, whereas SQL is better for set-based operations.

SQL solution to handle the order of cells in Toad

I have a flat file with a lot of columns that I need to import into a database through Toad, so I spent all morning manually creating the table before importing the file. Then the person who extracts the data from the other database told me that he had sent me the same file again with corrected data in each column, because the first file had strange characters (/#! etc.) in some places.
When I opened the new file, I realized that the column order is not the same as in the first file he sent me, which is the one I based the table structure on!
So when I try to load the file, the columns no longer line up with the data!
The first option is to change the whole structure of the table:
ID INTEGER NOT NULL,
NAME VARCHAR2(100 CHAR) NOT NULL,
etc. (250+ more columns)
but that takes far too long.
Another option is to drag the columns into the right order by hand:
I am sure there is a better solution than the ones I already know, for example recognizing the columns by name and making the change in a few steps instead of wasting all day on something this simple!
Yes. Right-click on the table name in the Schema Browser and you'll find the Import Data option. I don't have Toad in front of me right now, but I am almost sure the Toad importer offers a better way to map and organize columns.
However, the Toad importer has seemed slow to me.

how can you parse an excel (.xls) file stored in a varbinary in MS SQL 2005?

problem
how to best parse/access/extract "excel file" data stored as binary data in an SQL 2005 field?
(so all the data can ultimately be stored in other fields of other tables.)
background
basically, our customer is requiring a large volume of verbose data from their users. unfortunately, our customer cannot require any kind of db export from their users. so our customer must supply some sort of UI for their users to enter the data. the UI our customer decided would be acceptable to all of their users was excel, as it has a reasonably robust UI. given all that, our customer needs this data parsed and stored in their db automatically.
we've tried to convince our customer that the users will do this exactly once and then insist on db export! but the customer can not require db export of their users.
our customer is requiring us to parse an excel file
the customer's users are using excel as the "best" user interface to enter all the required data
the users are given blank excel templates that they must fill out
these templates have a fixed number of uniquely named tabs
these templates have a number of fixed areas (cells) that must be completed
these templates also have areas where the user will insert up to thousands of identically formatted rows
when complete, the excel file is submitted by the user via standard html file upload
our customer stores this file raw into their SQL database
given
a standard excel (".xls") file (native format, not comma or tab separated)
file is stored raw in a varbinary(max) SQL 2005 field
excel file data may not necessarily be "uniform" between rows -- i.e., we can't just assume one column is all the same data type (e.g., there may be row headers, column headers, empty cells, different "formats", ...)
requirements
code completely within SQL 2005 (stored procedures, SSIS?)
be able to access values on any worksheet (tab)
be able to access values in any cell (no formula data or dereferencing needed)
cell values must not be assumed to be "uniform" between rows -- i.e., we can't just assume one column is all the same data type (e.g., there may be row headers, column headers, empty cells, formulas, different "formats", ...)
preferences
no filesystem access (no writing temporary .xls files)
retrieve values in defined format (e.g., actual date value instead of a raw number like 39876)
My thought is that anything can be done, but there is a price to pay. In this particular case, the price seems to be too high.
I don't have a tested solution for you, but I can share how I would give my first try on a problem like that.
My first approach would be to install Excel on the SQL Server machine and write a .NET assembly that uses the Excel API to consume the file stored in your rows, then load it into SQL Server as a CLR assembly exposing stored procedures.
As I said, this is just an idea; I don't have the details, but I'm sure others here can complement or criticize it.
But my real advice is to rethink the whole project. It makes no sense to read tabular data on binary files stored on a cell of a row of a table on database.
This looks like an "I wouldn't start from here" kind of a question.
The "install Excel on the server and start coding" answer looks like the only route, but it simply has to be worth exploring alternatives first: it's going to be painful, expensive and time-consuming.
I strongly feel that we're looking at a "requirement" that is the answer to the wrong problem.
What business problem is creating this need? What's driving that? Try the Five Whys as a possible way to explore the history.
It sounds like you're trying to store an entire database table inside a spreadsheet and then inside a single table's field. Wouldn't it be simpler to store the data in a database table to begin with and then export it as an XLS when required?
Without opening up an instance of Excel and having Excel resolve worksheet references, I'm not sure it's doable at all.
Could you write the varbinary to a Raw File Destination? And then use an Excel Source as your input to whatever step is next in your precedence constraints.
I haven't tried it, but that's what I would try.
Well, the whole setup seems a bit twisted :-) as others have already pointed out.
If you really cannot change the requirements and the whole setup, why not explore components such as Aspose.Cells or Syncfusion XlsIO, native .NET components that can read and interpret native Excel (XLS) files? I'm pretty sure that with either of the two you could read your binary Excel data into a MemoryStream, feed that into the Excel-reading component, and off you go.
So with a bit of .NET development and SQL CLR, I guess this should be doable - not sure if it's the best way to do it, but it should work.
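If that route is taken, the T-SQL side can stay thin. Purely as an illustration, assuming a hypothetical CLR scalar function dbo.GetExcelCellValue built on one of those components (none of these object or column names exist out of the box):

SELECT dbo.GetExcelCellValue(w.FileData, 'Sheet1', 'B2') AS RequestedCell
FROM dbo.UploadedWorkbooks AS w   -- hypothetical table holding the varbinary(max) column
WHERE w.UploadId = 42;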