I am using OPENROWSET and Microsoft.ACE.OLEDB.12.0 to retrieve data in SSMS from an Excel 2007 (.xlsx) file. When I execute the query, I only get the title of the first column (Excel cell A1), F2, F3, etc. for the remaining columns, and no data rows. The Excel file contains data; however, OPENROWSET is not retrieving anything other than cell A1.
I have tried both
SELECT *
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0','Excel 12.0;Database=C:\filepath.xlsx','SELECT * FROM [Sheet1$]')
and
SELECT *
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0','Excel 12.0;Database=C:\filepath.xlsx',[Sheet1$])
I get the same result for both queries.
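For completeness, one variant worth trying (a sketch, not a confirmed fix: HDR and IMEX are documented ACE Extended Properties, and IMEX=1 forces mixed-type columns to be read as text, which sometimes changes what the driver returns):

```sql
-- Sketch: same query as above, with HDR and IMEX added to Extended Properties.
SELECT *
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'Excel 12.0;Database=C:\filepath.xlsx;HDR=YES;IMEX=1',
                'SELECT * FROM [Sheet1$]');
```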
I have also tried using the Import/Export Wizard (64-bit). When I am on the 'Select Source Tables and Views' page and use Edit Mappings..., I receive the following error:
Column information for the source and destination data could not be retrieved.
Unexpected error from external database driver (1).
Additional information:
Unexpected error from external database driver (1). (Microsoft Access Database Engine)
I am using a 64-bit version of SQL Server. Also, this is an Excel file that I receive from an outside source. The other Excel files I receive from outside sources do not give me any problems when retrieving data using the same method. Additionally, I am using SQL Server 2008 R2, and the Excel file is located on the same drive as SQL Server.
Any help is greatly appreciated. It would be nice if I could get this working so I can handle moving the data from these files over to an SSIS package.
Using the following SQL statement, I pull a locally stored Excel sheet into SQL Server.
SELECT *
FROM OPENROWSET( 'Microsoft.ACE.OLEDB.12.0','Excel 12.0;Database=C:\SampleTestFiles\outputFile.xlsx;HDR=YES', [Sheet1$])
And I get the complete Excel sheet contents, with multiple rows, in the output.
Now, I want to save the Excel file into a database column of datatype varbinary(max).
Can anyone let me know how to do that?
Use application code such as C# to read the file as bytes, and then insert the binary data into your varbinary(max) column with a parameterized INSERT.
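If you would rather stay in T-SQL, consistent with the OPENROWSET approach you are already using, the BULK ... SINGLE_BLOB form can load the whole file in one statement (a sketch; the table name and column here are hypothetical, and the SQL Server service account needs read access to the path):

```sql
-- Sketch: load the entire .xlsx as a single binary value into a varbinary(max) column.
-- Assumes a table like: CREATE TABLE dbo.ExcelFiles (Id int IDENTITY PRIMARY KEY, FileData varbinary(max));
INSERT INTO dbo.ExcelFiles (FileData)
SELECT BulkColumn
FROM OPENROWSET(BULK 'C:\SampleTestFiles\outputFile.xlsx', SINGLE_BLOB) AS f;
```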
Initial situation:
I'm stuck on a simple task (in my opinion, it should be simple...).
I have a collection of data which should be exported weekly to Excel.
This export contains 104 columns, of which 57 are nvarchar(max) and contain item descriptions and other information in different languages for our sales guys.
The report will have something around 2000 to 8000 rows.
I am using SQL Server 2017 CU16.
My intention:
I intended to build an SSIS job with an Excel template where the columns are predefined (width, data type, and so on).
This job would have something like those steps:
Delete existing Excel file
Copy the Excel template to a new Excel file
DataFlowTask using SQL Server as the source and Excel destination as the target
What I already tried:
If I use the Excel template with only headers, I get the following error for each of the nvarchar(max) columns:
[Excel Destination [2]] Error: An error occurred while setting up a
binding for the "ColumnName" column. The binding status was
"DT_NTEXT".
When I prepare the template prefilled with one row that has long text (more than 255 characters) in the columns that are nvarchar(max) in the source, everything runs fine, but this dummy row remains in the output.
I also tried dropping the sheet using an "Execute SQL Task" against the Excel file connection and recreating the sheet using a CREATE TABLE statement in another "Execute SQL Task" on the same connection. I get the same error as above, although I'm using NTEXT as the data type for the relevant columns.
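One detail worth checking (an assumption to verify, not a confirmed fix): in the Jet/ACE SQL dialect that the Excel connection uses for DDL, the memo type that holds more than 255 characters is spelled LongText rather than NTEXT. The CREATE TABLE in the Execute SQL Task might therefore look like this (column names are hypothetical placeholders):

```sql
-- Sketch: Jet/ACE DDL for the Excel connection; LongText is the memo type (>255 chars).
CREATE TABLE `Sheet1` (
    `ItemNo` VarChar(50),
    `Description_EN` LongText,
    `Description_DE` LongText
)
```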
Question:
How can I export data seamlessly into a preformatted Excel file whose columns must hold NTEXT data?
Thank you very much in advance for any assistance.
I just started working with MS SQL Server Management Studio to run a repetitive task of:
Change the source table in the FROM section
Execute the query
Save the results as a CSV file that can be opened in grid form in MS Excel
I know this is far-fetched, but is it possible for me to write a query in the form of:
list = {a,b,...,n}
FOR i = 1 to n
SELECT *
FROM [Server].[list{i}] as Table
WHERE *conditions*
SAVE AS list{i}.csv
NEXT i
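There is no built-in loop like that in Management Studio, but the pseudocode above can be approximated in T-SQL by generating one bcp command per table and running it through xp_cmdshell. A sketch, under stated assumptions: xp_cmdshell must be enabled on the server, and the table list, database name, WHERE clause, and output path below are all placeholders:

```sql
-- Sketch: export each table in @tables to its own CSV via bcp + xp_cmdshell.
DECLARE @tables TABLE (name sysname);
INSERT INTO @tables VALUES ('TableA'), ('TableB');  -- your list {a, b, ..., n}

DECLARE @name sysname, @cmd varchar(4000);
DECLARE c CURSOR FOR SELECT name FROM @tables;
OPEN c;
FETCH NEXT FROM c INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Build: bcp "SELECT ..." queryout "C:\exports\<table>.csv" -c -t, -T -S .
    SET @cmd = 'bcp "SELECT * FROM MyDb.dbo.[' + @name + '] WHERE <conditions>" '
             + 'queryout "C:\exports\' + @name + '.csv" -c -t, -T -S .';
    EXEC master.dbo.xp_cmdshell @cmd;
    FETCH NEXT FROM c INTO @name;
END
CLOSE c; DEALLOCATE c;
```

Note that bcp's -c output is tab/comma-delimited text without headers, so the resulting files open in Excel but won't carry column names unless you add them.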
Thanks!
Just make sure you get yourself a read-only user so you can't drop tables or delete data or anything.
There are tons of examples and tutorials for using Excel with ODBC and SQL Server, like this one:
https://www.youtube.com/watch?v=joi2HTh45YQ
So I have a relatively simple SSIS job that does the following:
Execute SQL Task - Drop Sheet from Excel File
drop table `Table`
Execute SQL Task - Create Sheet in Excel File
CREATE TABLE `Table` (
`Col1` VarChar (255) ,
`Col2` Long ,
`Col3` VarChar (84) ,
`Col4` VarChar (60) ,
`Col5` VarChar (255) ,
`Col6` VarChar (20) ,
`Col7` VarChar (255) ,
`Col8` VarChar (255) ,
`Col9` VarChar(255))
Data Flow Task - Export Data from SQL to Excel
This just runs a SQL query [OLE DB Source], converts everything to Unicode strings, and exports the data to an Excel Destination.
NOTE: This job executes perfectly with no errors in BIDS 2005. However, when I initially tried running it in BIDS 2008 (32 bit mode), I got the following error on both the Drop Sheet and Create Sheet Execute SQL Tasks mentioned above:
Warning: Multiple-step OLE DB operation generated errors. Check each
OLE DB status value, if available. No work was done.
I found I could fix this by changing the ConnectionType property of my Execute SQL Tasks to ADO, and using the following connection string:
Data Source=\\<filelocation>\<ExcelFileName>.xls;Provider=Microsoft.ACE.OLEDB.12.0;Extended Properties=Excel 8.0;
At this point, I'm back to the package executing in BIDS 2008 with no errors. Everything runs great!
Then I tried to update the job from exporting to an Excel 97-2003 .xls file to exporting to an Excel 2007 .xlsx file.
So, per Microsoft, I had to change my Execute SQL Tasks to use the following connection string:
Data Source=\\<filelocation>\<ExcelFileName>.xlsx;Provider=Microsoft.ACE.OLEDB.12.0;Extended Properties=Excel 12.0 XML;
I also had to update the connection manager for my Excel Destination step (which DOES support the Excel 2007 format in BIDS 2008 per Microsoft) to the following:
Provider=Microsoft.ACE.OLEDB.12.0;Data Source=\\<file location>\<Excel filename>.xlsx;Extended Properties="Excel 12.0 XML;HDR=YES";
So at this point, all the connections test good, and everything APPEARS like it should work. However, when I execute the package I have the following problems:
1. The Drop Sheet Execute SQL Task completes successfully according to BIDS. However, it doesn't actually drop the sheet in the Excel file anymore. It DOES delete all the data contained in the sheet, though.
2. Because of #1, the Create Sheet Execute SQL Task fails because the sheet was never actually dropped.
Any ideas why this isn't working anymore? Honestly I've looked all over the internet and SO, and I have yet to see someone explain how to do this. Is there some new command to drop a sheet in Excel 2007?
For anyone else who may find this:
I fixed the issue with exporting data to XLSX files by replacing the two Execute SQL Tasks with two File System Tasks: one that deletes the existing file, and another that copies a "template" Excel file (basically just an empty spreadsheet with column headers) to the reporting directory.
Then I export the data to the new template file using the data flow task.
Not my ideal solution; I much preferred the old method of having one file and just re-creating the table. However, apparently that's no longer an option with Excel 2007 files, and the File System Task method using a template file works.
The two most helpful resources I found when working on this job are:
SSIS - Excel 2007 Connection Guide
Copy/Rename Files Using SSIS File System Task
I am trying to import Excel 2003 data into a SQL table in SQL Server 2008.
I tried to add a linked server, but have met with little success.
Now I am trying to check whether there's a way to use the BCP utility to do a BULK INSERT, or a BULK operation with OPENROWSET, using a format file to handle the Excel mapping.
First of all, how can I create a format file for a table that has differently named columns than the Excel spreadsheet columns?
Next, how do I use this format file to import data from, say, a file at C:\Folder1\Excel1.xls into table Table1?
Thank you.
There are some examples here that demonstrate what the data file should look like (CSV) and what the format file should look like. Unless you need to do this often, I'd just hand-craft the format file, save the Excel data as CSV, then try using bcp or OPENROWSET.
The format file specifies the column names for the destination. The data file doesn't have column headings, so you don't need to worry about the Excel (source) columns being named differently.
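To make that concrete, a minimal non-XML format file for a hypothetical three-column Table1 might look like this (a sketch; field lengths, column names, and collation are placeholders you'd adjust):

```
10.0
3
1  SQLCHAR  0  100  ","      1  Col1  SQL_Latin1_General_CP1_CI_AS
2  SQLCHAR  0  100  ","      2  Col2  SQL_Latin1_General_CP1_CI_AS
3  SQLCHAR  0  100  "\r\n"   3  Col3  SQL_Latin1_General_CP1_CI_AS
```

The import itself would then be something like `bcp MyDb.dbo.Table1 in C:\Folder1\Excel1.csv -f Excel1.fmt -S <server> -T`, assuming the Excel sheet was first saved as CSV as suggested above.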
If you need to do more mapping etc, then create an SSIS package. You can use the data import wizard to get you started, then save as SSIS package, then edit to your heart's content.
If it's a one-off, I'd use the SQL data import wizard, from right-click on the database in Management Studio. If you just have a few rows to import from Excel, I typically open a query with Edit Top 200 Rows, edit the query to match the columns I have in Excel, then copy and paste the rows from Excel into Management Studio. It doesn't handle errors very well, but it's quick.