My insert statement has been working, but it randomly stops inserting until I open the workbook I'm inserting into, manually change something, and save.
The db workbook currently has around 15,000 rows that have been previously inserted, but now when I run the macro, it doesn't insert anything and there are no errors. I've stepped through the code, and every line executes normally.
I need help figuring out why nothing is being inserted, when it worked before.
Sub Insert()
    Dim con As ADODB.Connection
    Dim InsertSQL As String
    ' dbpath holds the full path to the db workbook (set elsewhere in the project)

    ' Build the INSERT statement for the target sheet
    InsertSQL = "INSERT INTO [Sheet1$] ([FIELD1],[FIELD2],[FIELD3],[FIELD4],[FIELD5]) " & _
                "VALUES (1234567895, 9350.00, #9/12/2019#, 'username', #9/12/2019 10:05 AM#)"

    ' Open the db workbook as a data source and execute the insert
    Set con = New ADODB.Connection
    con.Open "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" & dbpath & _
             ";Extended Properties=""Excel 12.0;HDR=Yes;IMEX=0"";Mode=ReadWrite;"
    con.Execute InsertSQL
    con.Close
End Sub
Edit:
I've set default values in the first 8 rows of my db workbook so as not to cause any limitations in data types as specified in:
This is an issue with the Jet OLEDB provider. It looks at the first 8 rows of the spreadsheet to determine the data type in each column. If the column does not contain a field value over 256 characters in the first 8 rows, then it assumes the data type is text, which has a character limit of 256. The following KB article has more information on this issue: http://support.microsoft.com/kb/281517
It's possible this issue is due to some blank rows in the worksheet, but I can't confirm.
I am using Excel with a macro that pastes information using a SQL query, but in the table that holds the information, several columns have the same repeated header name, and those names must stay as they are.
The query the macro uses is the following:
"Select [code], [name], [PCR] from [Book$B2:H]"
In the table I want to get the information from, I need the query to copy the information that is in bold, but I have PCR in 3 columns, so it's only getting the first one.
If you're using ADO in your VBA code, then you can change the connection string to say that your data doesn't have headers. This then allows you to refer to fields by their position rather than their name. To do this, add HDR=No into the Extended Properties of the connection string.
Your SQL query could then be something like this:
SELECT F1, F2, F4, F6, F8 FROM [Book$C2:H]
Setting up the connection string would be something like this:
' Set up connection
Dim cn As Object
Set cn = CreateObject("ADODB.Connection")

' Connection string for Excel 2007 onwards .xlsm files
With cn
    .Provider = "Microsoft.ACE.OLEDB.12.0"
    .ConnectionString = "Data Source=" & ThisWorkbook.FullName & ";" & _
                        "Extended Properties=""Excel 12.0 Macro;HDR=No"";"
    .Open
End With
This assumes that your VBA code is in the same workbook as the data - if that's not the case, then just change the value for the Data Source. See connectionstrings.com for any other potential variations you might need to make for different types of Excel files.
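For example, a minimal sketch if the data sat in a separate workbook (the path and file name here are just placeholders, and "Excel 12.0 Xml" assumes a plain .xlsx file):
' Point the Data Source at the external workbook instead of ThisWorkbook
cn.ConnectionString = "Data Source=C:\Data\SourceBook.xlsx;" & _
                      "Extended Properties=""Excel 12.0 Xml;HDR=No"";"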
The easiest solution is likely to involve:
Inserting a new header row.
In C3: =IF(C1="",B1&"."&C2,C1&"."&C2), and dragging it across.
But to fit that into the bigger picture we would need to know more about the bigger picture.
I create a named range that covers the data I need to query using ADODB:
SourceWB.Names.Add Name:=SOME_RANGE_NAME, RefersTo:=SOME_RANGE
I set up a connection and run a SQL query:
sConn = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" & sSourceName_ & "; Extended Properties=""Excel 12.0 Macro;HDR=YES"";"
Set oConn_ = New ADODB.Connection
oConn_.Open sConn
Dim oRs As New ADODB.Recordset
oRs.Open sSQL, oConn_, adOpenStatic, adLockReadOnly, adCmdText
The SQL query is:
SELECT * FROM [SOME_RANGE_NAME] WHERE ....
The problem is that these commands run in a loop, and on each iteration SOME_RANGE_NAME may refer to a different range. As long as the range changes within one sheet, everything is OK. As soon as SOME_RANGE_NAME references a range on another sheet, I get the following error:
no value given for one or more required parameters
The solution was proper closing of connections!
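A minimal sketch of the cleanup at the end of each loop iteration, reusing the object names from the snippets above:
' Release the recordset and connection before the named range is
' redefined for the next pass of the loop
If oRs.State = adStateOpen Then oRs.Close
Set oRs = Nothing
If oConn_.State = adStateOpen Then oConn_.Close
Set oConn_ = Nothing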
I have an Excel file (raw data) which I have to import into another Excel file (visualization interface). But before importing the raw data file, I have to filter the data based on some columns. Can I write an SQL query within my import file's VBA code?
For instance, I want to filter out the blank values from one column and see only numbers greater than 10 from another column.
This is possible. In your visualization interface file (viz.xls), create a new module and add a reference to the Microsoft ActiveX Data Objects 2.8 Library using Tools >> References.
The following code will get you in the ballpark:
Sub getFromRawData()
    Dim rawFile As String
    Dim adoConn As ADODB.Connection
    Dim adoRS As ADODB.Recordset
    Dim strSQL As String

    'The xlsx file to treat as a database
    rawFile = "c:\myFolder\myrawdatafile.xlsx"

    'Open a connection to the workbook
    Set adoConn = New ADODB.Connection
    adoConn.Open "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" & rawFile & _
                 ";Extended Properties=""Excel 12.0 Xml;HDR=YES"";"

    'Write the SQL necessary to get the data you want (assuming Sheet1 and making up
    'column names, which will be the first row of the sheet by default)
    strSQL = "SELECT * FROM [Sheet1$] WHERE [onecolumn] IS NOT NULL AND [anotherColumn] > 10;"

    'Now we open up our recordset using the connection and the sql statement
    Set adoRS = New ADODB.Recordset
    adoRS.Open strSQL, adoConn

    'Last, we dump the results into this viz sheet
    ThisWorkbook.Worksheets("Sheet1").Range("A2").CopyFromRecordset adoRS

    'If you want the header row, you'll have to get that from the recordset:
    Dim adoField As ADODB.Field
    Dim intHdrCol As Long

    'initial column to start writing the header
    intHdrCol = 1

    'loop through the fields in the recordset and write the column names
    For Each adoField In adoRS.Fields
        ThisWorkbook.Worksheets("Sheet1").Cells(1, intHdrCol).Value = adoField.Name
        intHdrCol = intHdrCol + 1
    Next adoField

    'Close the recordset and the connection
    adoRS.Close
    adoConn.Close
End Sub
Depending on your version of Excel you'll have to fiddle with that connection string. The SQL statement will have to be modified as well, obviously, but with some tinkering this thing is ready to rock and roll.
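For instance, if the raw data file were an older .xls workbook, the connection would typically use the Jet provider instead (a sketch reusing the rawFile variable from the code above):
'Older .xls files use the Jet 4.0 provider and the Excel 8.0 property
adoConn.Open "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & rawFile & _
             ";Extended Properties=""Excel 8.0;HDR=YES"";"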
I am using ADODB to query data from a worksheet in the active workbook. The data resides on its own sheet and has column headers. I've defined the table as an Excel ListObject - Excel's automatic table formatting construct.
I open the connection like this:
Set cn = CreateObject("ADODB.Connection")
Set rs = CreateObject("ADODB.Recordset")
strCon = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & ThisWorkbook.Path & "\" & _
ThisWorkbook.Name & ";Extended Properties=""Excel 8.0;HDR=Yes;IMEX=1"";"
cn.Open strCon
Then I can fetch a recordset using a simple SQL statement:
strSQL = "SELECT * from [sheet1$]
rs.Open strSQL, cn, 0, 1 'cursortype = adOpenForwardOnly, locktype = adOpenReadonly
This all works fine... until I insert a new row in the table on sheet1. The new row is not included in subsequent queries, even if I close, set to nothing, and re-open both the connection and recordset variables in my code.
If I save and close the workbook, and then re-open it, the new records ARE included in the query, which leads me to believe this might be a caching issue. I've searched for ADODB Cache Flush etc, but most results appear to be related to PHP or Access. I've also tried a variety of other options for Cursor Type and Lock Type, with no difference.
Can anyone suggest how I can ensure that each time I run my query I get all the rows, even after I insert new rows in the table?
Figured out a solution:
Since I'm using Excel 2010, I discovered that I can use a newer version of ADODB.
So, instead of defining my connection string like this:
"Provider=Microsoft.Jet.OLEDB.4.0;Data Source="...
I changed it to this:
"Provider=Microsoft.ACE.OLEDB.12.0;Data Source="...
and the problem is solved. New inserts and edits are now showing up immediately after I make them. This also removes the issue of the known memory leak in Jet OLEDB 4.0, so that's a bonus.
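Put together, the revised open looks something like this (a sketch; the "Excel 12.0 Macro" property assumes the workbook is an .xlsm file):
' Same self-referencing Data Source as before, but with the ACE provider
strCon = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" & ThisWorkbook.Path & "\" & _
         ThisWorkbook.Name & ";Extended Properties=""Excel 12.0 Macro;HDR=Yes;IMEX=1"";"
cn.Open strCon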
I have an SSIS package to upload data from an Excel file into a SQL Server 2005 table.
The Excel file will have a varying amount of data, ranging from 20k to 30k lines.
The upload works fine when all the data is correct, but it obviously fails when there is a small problem in even a single row - for example, mandatory values left null, inconvertible values (data type mismatches), etc.
I want to validate the Excel file before the upload and tell the user which row and column contain the error.
Any idea how to accomplish this without consuming much time and resources?
Thanks
It might be easiest to load into a temporary table that does not have any mandatory values etc and check that before appending it to the main table.
EDIT re comment
Dim cn As ADODB.Connection
Dim rs As ADODB.Recordset
''This is not necessarily the best way to get the workbook name
''that you need
strFile = Workbooks(1).FullName
''Note that if HDR=No, F1,F2 etc are used for column names,
''if HDR=Yes, the names in the first row of the range
''can be used.
''This is the Jet 4 connection string, you can get more
''here : http://www.connectionstrings.com/excel
strCon = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & strFile _
& ";Extended Properties=""Excel 8.0;HDR=Yes;IMEX=1"";"
Set cn = CreateObject("ADODB.Connection")
Set rs = CreateObject("ADODB.Recordset")
cn.Open strCon
''Note that HDR=Yes
''Pick one:
strSQL = "SELECT Frst, Secnd FROM TheRange WHERE SomeField Is Null" ''Named range
strSQL = "SELECT Frst, Secnd FROM [Sheet1$C3:C67] WHERE Val(Secnd)=0" ''Range
strSQL = "SELECT Frst, Secnd FROM [Sheet1$] WHERE First<Date()" ''Sheet
rs.Open strSQL, cn
Sheets("Sheet2").Cells(2, 1).CopyFromRecordset rs
I have recently been working on a number of similar packages in SSIS, and the only way that I have been able to get around this is to have a holding table, similar to Remou's suggestion.
This table is extremely generic: all fields are NULLable and VARCHAR(255). I then have a validation Stored Procedure that checks things such as typing, the existence of data, etc. before I move the data into a "live" situation. Although it may not be the most elegant of solutions, it gives you a lot of control over the way you check the data and also means that you shouldn't have to worry about converting the file(s) to .CSV first.