I am doing some analysis on the US consumer credit card complaints database - see https://catalog.data.gov/dataset/consumer-complaint-database
Saving the file to a CSV and running the following code leads to an error:
Sub test()
    Dim strfilename As String
    Dim strFileContent As String
    strfilename = "C:\Work\VB\complaints.csv"
    Open strfilename For Input As #1
    strFileContent = Input(LOF(1), #1)
    Close #1
End Sub
Run-time error '5':
Invalid procedure call or argument
Is this an error with Visual Basic on my installation? I'm running VBA in Excel with Microsoft Office Professional Plus 2019 on Windows 10.
The file is simply too large to read in one chunk - it is 1.5 GB and holds more than 3 million rows.
You could rewrite your routine to read the file in chunks or row by row, but before you do so, you need to make up your mind what you want to do with the data. You cannot store it in Excel - its maximum row count is a little above 1 million (1,048,576 to be precise), and even within that limit it's nearly impossible to work with this much data. If Excel doesn't crash outright, it will be painfully slow.
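To illustrate the row-by-row variant, here is a minimal sketch using Line Input; what you do with each line (parse it, filter it, forward it to a database) is the part you still need to decide:

Sub ReadLineByLine()
    Dim strfilename As String
    Dim strtextline As String
    strfilename = "C:\Work\VB\complaints.csv"
    Open strfilename For Input As #1
    Do While Not EOF(1)
        Line Input #1, strtextline
        'Process strtextline here instead of accumulating 1.5 GB in memory
    Loop
    Close #1
End Sub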
My advice would be to store the data in a real database. There are plenty of free database systems around, e.g. SQL Server Express, PostgreSQL, MariaDB... You will of course need to learn about databases and SQL to handle that, but I don't see any alternative - Excel, as said, is neither built for nor capable of handling that amount of data.
I am trying to connect my .xls to an Access database. The code below works great when I have the full Access program installed on my machine. The problem is when I try to use it on a machine that only has the Run-time version of Access installed.
I am using these references:
Visual Basic For Applications
Microsoft Excel 14.0 Object Library
OLE Automation
Microsoft Office 14.0 Object Library
Microsoft Forms 2.0 Object Library
When I try to run the code below I get the error: ActiveX component can't create object or return reference to this object (Error 429)
Sub mcGetPromoFromDB()
    Application.ScreenUpdating = False
    Dim daoDB As DAO.Database
    Dim daoQueryDef As DAO.QueryDef
    Dim daoRcd As DAO.Recordset
    'Error on the line below
    Set daoDB = Application.DBEngine.OpenDatabase("K:\DR04\Groups\Functional\DC_Magazyn\Sekcja_Kontroli_Magazynu\Layout\dbDDPiZ.accdb")
    Set daoRcd = daoDB.OpenRecordset("kwPromoIDX", dbOpenDynaset)
    Dim tempTab() As Variant
    For Each Article In collecArticle
        daoRcd.FindNext "IDX = " & Article.Index
        Article.PromoName = daoRcd.Fields(1).Value
        Article.PromoEnd = "T" & Format(daoRcd.Fields(2).Value, "ww", vbMonday, vbUseSystem)
    Next
    Application.ScreenUpdating = True
End Sub
Is IDX an indexed field?
Is kwPromoIDX optimized for this specific purpose? I mean does it only contain the fields required for this update, or are you pulling extra useless fields? Perhaps something like "SELECT IDX, [Field1Name], [Field2Name] FROM kwPromoIDX" would be more efficient.
Since you are only reading the table records and don't seem to need to edit them, use dbOpenSnapshot instead of dbOpenDynaset.
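For example, a sketch combining both suggestions (PromoName and PromoEnd are placeholder field names - substitute whatever kwPromoIDX actually contains):

Dim sql As String
sql = "SELECT IDX, PromoName, PromoEnd FROM kwPromoIDX"
Set daoRcd = daoDB.OpenRecordset(sql, dbOpenSnapshot)  'read-only, less overhead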
Just throwing out an idea here, you'd have to test to see if it made any difference, but perhaps you could try to reverse your logic. Loop through the recordset 1 by 1 and locate the IDX within your worksheet.
Another thing I've done in the past is use .CopyFromRecordset and copied the entire recordset into a temporary worksheet and done the juggling back and forth entirely within Excel, to eliminate the back and forth.
Lastly, another approach can be to quickly loop through the entire recordset once and populate an array or collection, and then work with that instead of Access. This way the data is all in memory and you eliminate the repeated round trips to Access.
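As a sketch of that last idea, DAO's GetRows can pull the whole recordset into a Variant array in one call:

Dim data As Variant
Set daoRcd = daoDB.OpenRecordset("kwPromoIDX", dbOpenSnapshot)
daoRcd.MoveLast   'fully populate the recordset so RecordCount is accurate
daoRcd.MoveFirst
data = daoRcd.GetRows(daoRcd.RecordCount)  'zero-based array: data(field, row)
daoRcd.Close
'Now match IDX values against data() entirely in memory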
You'll need to do some testing to see what works best in your situation.
I want to split a multi-page MS Publisher 2010 document into a set of separate documents, one per page.
The starting document is from a mail-merge, and I am trying to produce a set of numbered and named tickets as PDFs to send to people for an event (this is for a charity). The mail-merge seems to work fine and I can save the merged document and it looks OK with e.g. a list of fifty people giving me a 50-page document.
Ideally the result would be a set of PDFs.
I have tried to create some simple VBA code to do this, but it is not working consistently. If I try the very simple macro below, I get the correct number of documents, but only perhaps 1 or 2 documents out of every five have the correct contents. Most of the documents are completely empty.
Sub splitter()
    Dim i As Integer
    Dim Source As Document
    Dim Target As Document
    Set Source = ActiveDocument
    For i = 1 To Source.Pages.Count
        Set Target = Documents.Add
        Source.Pages(i).Shapes.Range.Copy
        Target.Pages(1).Shapes.Paste
        Target.SaveAs Filename:="C:\Temp\Ticket_" & i
        Target.Close
        Set Target = Nothing
    Next i
End Sub
I did sometimes get an error that the clipboard is busy, but not always.
Another approach might be to start with the master document, loop over the separate merge records, fill in the personal details for each person's ticket, and directly produce the PDFs. But that seems more complex, and I am not a VB programmer (though I've been doing C++ etc. for 20+ years, so I can program :-)).
A final annoyance is that it seems to keep opening a new Publisher window for each document. It takes a while to then close 50+ copies of publisher, and the laptop starts to crawl...
Please advise how best to get around these issues. I am probably missing something trivial, being a relative VB(A) newbie.
Thanks in advance for any suggestions
Try coding something like this:
Open the Publisher application (CreateObject()?)
Open the Publisher document (doc.Open(filename))
Store the total number of pages in a variable (doc.Pages.Count)
Close the document (doc.Close())
Then loop over the pages, doing the following for each page X:
Copy the .pub file and rename the copy to name & "page" & X
Open the new .pub file
Remove all pages except page X from it
doc.Save()
doc.Close()
Copying files with VBA is easy, but copying pages in Publisher VBA is quite a hassle, so this should be easier to achieve - see the sketch below.
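A rough, untested sketch of that approach (late-bound, so no reference is needed; the method names are from the Publisher object model as I recall it, so verify them against your version):

Sub SplitByFileCopy()
    Dim pubApp As Object, doc As Object
    Dim srcFile As String, newFile As String
    Dim pageCount As Long, i As Long, j As Long

    srcFile = "C:\Temp\Tickets.pub"
    Set pubApp = CreateObject("Publisher.Application")

    'Read the page count once, then close the original untouched
    Set doc = pubApp.Open(srcFile)
    pageCount = doc.Pages.Count
    doc.Close

    For i = 1 To pageCount
        'Copy the .pub file itself, then strip every page except page i
        newFile = "C:\Temp\Ticket_page" & i & ".pub"
        FileCopy srcFile, newFile
        Set doc = pubApp.Open(newFile)
        For j = pageCount To 1 Step -1
            If j <> i Then doc.Pages(j).Delete
        Next j
        doc.Save
        doc.Close
    Next i

    pubApp.Quit
End Sub

Deleting pages from last to first avoids renumbering problems, and reusing one Publisher instance avoids the window-per-document annoyance.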
I hope this is an appropriate place to ask this question.
I have recently built a data analysis tool in Excel that works by submitting inputs to a SAS Stored Process (as an 'input stream'), running the processes and displaying the results in Excel.
I also use some code to check for and remove all active stored processes from the workbook before running the process again.
This runs successfully the first 2 times, but fails on the third attempt. It always fails on the third attempt and I can't figure out why.
Is there some kind of memory allocation for Excel VBA that's exhausted by this stage? Or some other buffer that's maxed out? I've stepped into every line of the VBA code and it appears to hang (on the third run) at the following line:
SAS.InsertStoredProcess processLoc, _
outputSheet.Range("A1"), , , inputStream
Code used to initiate SAS Add-in for Microsoft Office:
Dim SAS As SASExcelAddIn
Set SAS = Application.COMAddIns.Item("SAS.ExcelAddIn").Object
Code used to delete stored processes from target output sheet:
Dim Processes As SASStoredProcesses
Set Processes = SAS.GetStoredProcesses(outputSheet)
Dim i As Integer
'Delete from the end backwards so removing an item doesn't shift the ones still to visit
For i = Processes.Count To 1 Step -1
    ' MsgBox Processes.Item(i).DisplayName
    Processes.Item(i).Delete
Next i
Code used to insert and run stored process:
Dim inputStream As SASRanges
Set inputStream = New SASRanges
inputStream.Add "Prompts", inputSheet.Range("DrillDown_Input")
SAS.InsertStoredProcess processLoc, _
outputSheet.Range("A1"), , , inputStream
Cheers
On reflection, my theory here is that you are hitting the limit of multibridge connections. Each multibridge connection represents a port, and the more ports you have, the more parallel connections are enabled. By default there are three; perhaps you have two, or you are kicking off another STP at the same time?
This would explain the behaviour. I had a spreadsheet that called STPs and it would always fail on the fourth call because the first three were still running. You can get around this by either a) increasing the number of multibridge connections or b) chaining your processes so they run sequentially.
I don't know if this'll be useful, but I had a similar problem running a DLL written in C++ through VBA. The problem occurred because the function in the DLL returned a double value, which I didn't need, so the VBA code did
Call SomeProcessFromDLL()
But the process was returning a double floating-point value which was 'filling up' some buffer memory in VBA, and VBA has a limited buffer (I think it gave up after 8 tries). So the solution for me was
Dim TempToDiscard as Double
TempToDiscard = SomeProcessFromDLL()
Maybe looking at the documentation of the process being called would help here, especially if it returns some value that's meant to be discarded anyway, like a
Return 0;
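For completeness, a minimal sketch of the workaround with a declared function (the library name and export here are hypothetical; PtrSafe requires Office 2010 or later, so omit it on older versions):

'Hypothetical import - substitute your DLL's actual name and export
Private Declare PtrSafe Function SomeProcessFromDLL Lib "SomeLib.dll" () As Double

Sub RunProcessSafely()
    Dim TempToDiscard As Double
    TempToDiscard = SomeProcessFromDLL()  'capture the return value, then just ignore it
End Sub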
I never liked using the IOM in VBA, mainly due to issues with references and having to do client installs when rolling out applications. Last year I found a MUCH better way to connect Excel and SAS - using the Stored Process web application. Simply set up your server-side SAS process with streaming output, and pass your inputs via an Excel web query. No client installs, no worries about SAS version upgrades, hardly any code - I am surprised it's not used more often!
See: http://rawsas.blogspot.co.uk/2016/11/sas-as-service-easy-way-to-get-sas-into.html
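As a sketch of what the Excel side can look like (the URL and parameter names are placeholders for your own Stored Process web application):

Sub PullStpOutput()
    Dim stpUrl As String
    stpUrl = "http://yourserver/SASStoredProcess/do?_program=/Your/Path/YourSTP&prompt1=value"
    With ActiveSheet.QueryTables.Add( _
            Connection:="URL;" & stpUrl, _
            Destination:=ActiveSheet.Range("A1"))
        .RefreshStyle = xlOverwriteCells
        .Refresh BackgroundQuery:=False  'pull the streamed output synchronously
    End With
End Sub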
I'm experiencing an intermittent issue with an application that's Excel 2010 front-end, Access 2010 back end. It's in use by 5-10 users simultaneously. Recently, users have started intermittently receiving the following error:
Run-time error '3035': System resource exceeded.
Sometimes the Debug button is grayed out so I can't jump to the code that caused the error, but when it's available to click, it takes me to the following code:
'Open connection to back-end DB
Set db = OpenDatabase(dbPath)
'Open a recordset on a table
Set RS = db.OpenRecordset(Tbl)
'Loop through rows in a 2D array
For i = FR To LR
    RS.AddNew
    'Loop through columns of the 2D array
    For j = 1 To LC
        'Set values for various fields in the new record, using values from the array
    Next
    RS.Update
Next
Here, the RS.Update is marked as the line that's causing the error.
What's odd is that this problem comes and goes; users will repeatedly receive it when attempting to submit a certain data set, then, several hours later, when they try to submit the same data set again, the operation succeeds without the error. It's also perplexing that sometimes the Debug button is available and sometimes it isn't.
One issue might be the size of the Access back end; it's currently ~650 MB, and we didn't start getting these messages until it grew to around 600 MB.
Any ideas as to what could be causing this? Various Google hits indicate that this problem sometimes happens when a join query has too many fields, but this is just a recordset of a table, not a join query.
Ahh, methinks this is one of those strange errors that pop up when you write a lot to a back-end database that can't keep up with the management of the lock file.
The solution is to make sure you keep a connection open to the back-end database from each of your clients, and that you hold onto that connection until you close the client.
Just open a recordset to a table (say a dummy table with only one record), and keep that recordset open until you close the application.
Resource-wise, keeping this connection open will have no detrimental effect on performance or memory consumption, but it ensures that the lock file is not continuously created and deleted every time a connection is opened and then closed.
Keeping that connection open will also substantially increase the performance of your data access.
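A minimal sketch, reusing dbPath from the question and assuming a one-record dummy table (here called tblDummy) in the back end:

'Module-level, so the connection survives between procedure calls
Private dbPersistent As DAO.Database
Private rsPersistent As DAO.Recordset

Sub OpenPersistentConnection()  'call once when the client starts
    Set dbPersistent = OpenDatabase(dbPath)
    Set rsPersistent = dbPersistent.OpenRecordset("tblDummy", dbOpenTable)
End Sub

Sub ClosePersistentConnection()  'call once when the client shuts down
    rsPersistent.Close
    dbPersistent.Close
    Set rsPersistent = Nothing
    Set dbPersistent = Nothing
End Sub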
Edit:
You should be more explicit when using recordsets and specify exactly the mode of operation you need:
Set RS = db.OpenRecordset(Tbl, dbOpenTable, dbFailOnError)
or, faster if you are only appending data:
Set RS = db.OpenRecordset(Tbl, dbOpenTable, dbAppendOnly + dbFailOnError)
Also make absolutely sure you close the recordset once you've finished appending the data:
Set RS = db.OpenRecordset(Tbl, dbOpenTable, dbAppendOnly + dbFailOnError)
With RS
    'Loop through rows in a 2D array
    For i = FR To LR
        .AddNew
        'Loop through columns of the 2D array
        For j = 1 To LC
            'Set values for various fields in the new record,
            'using values from the array
        Next
        .Update
    Next
    .Close
End With
Set RS = Nothing
This is caused by running out of available virtual memory (VM), aka the swap file. A 32-bit app cannot use more than 2 GB, and for some reason Access uses a lot of VM; when it needs more and cannot get any, you run out of system resources.
The solution is to make sure your VM is at least 4 times the RAM, and to restart your PC at least daily - only that clears out the VM garbage left lying around by other apps.
You would never have had this issue on a 32-bit OS; it's only now, with 64-bit OSes, that this happens.
Short Story
My 3035 error was caused by an AddNew on a table without a primary key.
Composing a copy of the table with a primary key assigned, and replacing the key-less table with it, appears to have corrected the problem.
Long Story
I had been experiencing the 3035 error on an AddNew to an existing back-end table with more than 81,000 records. After searching the web for ideas and coming up dry, I reflected on possible issues.
I compacted/repaired the back-end files to no effect. Then I decided to check the table design. It turned out there was no primary key assigned.
Assigning a primary key to the AutoNumber ID field caused the same 3035 error! So I copied the data structure to a new table, assigned a primary key in the new table, and then ran an append query from the original table to the new one. Finally I renamed the tables.
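The append step can be a one-liner; a sketch with hypothetical table names:

'tblOriginal = the key-less table, tblWithKey = the new copy with a primary key
db.Execute "INSERT INTO tblWithKey SELECT * FROM tblOriginal", dbFailOnError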
Using the new table appears to be working.
I need to get the Folder size and display the info on a report (SSRS). I need to do this for a number of Databases (loop!). These DB's are websites' backends.
Are any samples available for this? Are xp_filesize and the like the right solution?
Looking at the question and Tomalak's response, I'm assuming the reporting server will be able to reach the folder paths held in the DB:
First, set up the query to get back the result set of paths - I assume you'll have no trouble with this part. Next, you'll need to add a custom code function to your report: http://msdn.microsoft.com/en-us/library/ms155798.aspx - this function will take the folder path as a parameter and pass back the size of the folder. You'll have to write it in VB.NET if you want to embed the code in the report, or you could code up a DLL and bring that in.
An example VB.NET code block (remember you may need to prefix objects with System.IO.): http://www.freevbcode.com/ShowCode.asp?ID=4287
Public Shared Function GetFolderSize(ByVal DirPath As String, _
        Optional ByVal IncludeSubFolders As Boolean = True) As Long
    Dim lngDirSize As Long
    Dim objFileInfo As FileInfo
    Dim objDir As DirectoryInfo = New DirectoryInfo(DirPath)
    Dim objSubFolder As DirectoryInfo
    Try
        'Add the length of each file
        For Each objFileInfo In objDir.GetFiles()
            lngDirSize += objFileInfo.Length
        Next
        'Call recursively to get subfolders; if you don't
        'want this, set the optional parameter to False
        If IncludeSubFolders Then
            For Each objSubFolder In objDir.GetDirectories()
                lngDirSize += GetFolderSize(objSubFolder.FullName)
            Next
        End If
    Catch Ex As Exception
        'Note: swallowing exceptions here means inaccessible folders count as 0
    End Try
    Return lngDirSize
End Function
Now, in your report's table, the cell that shows the folder size would have an expression something like:
=Code.GetFolderSize(Fields!FolderPath.Value)
I doubt this approach will be performant for an interactively-viewed report, but you might get away with it for small result sets, or for a scheduled report delivered by email.
Oh, and this piece suggests you 'may' run into permissions issues using System.IO from within RS: http://blogs.sqlxml.org/bryantlikes/pages/824.aspx
Could you clarify who should do what in your scenario? Do you want SQL Server to get the info, or do you want Reporting Services to do it?
What exactly do you mean by "folder size"? Is "one folder, sum up each file" enough, or does it need to be recursive? Either way, I'd go for a little custom .NET function that uses System.IO.Directory and its relatives.
I'd consider splitting this into two pieces: maybe a Windows Service to scan the directories and aggregate the data into a database, then use SSRS to report on the database as usual.
The reason I suggest this is that to use master..xp_filesize and its kin, the account the SQL Server service runs under needs access to the paths to be scanned. Once this turns into accessing paths on other machines, I'd be less comfortable with the security implications.
Hope this helps
In SSRS you can do this with the help of a custom data extension. You need to give the folder path as your data source, and it will retrieve your files and their related information for display.
For further reference, and for the custom DLL, use this:
http://www.devx.com/dbzone/Article/31336/0/page/4
I have done this before.
Note: you have to make the related changes to the Report Designer and Report Manager configuration files.