I get an error when reading an Excel file using openpyxl (version 2.4.1)

The Excel file is 200 MB with more than 1,700 worksheets. Is there any limitation in openpyxl?
TypeError: __init__() got an unexpected keyword argument 'applyFillId'
I tried with a smaller file and it works so far.

File size is limited only by how much memory your computer has, though a file of that size should probably be opened in read-only mode. The error you're seeing has probably been fixed in a more recent version.
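For a workbook that size, read-only mode is the usual approach; a minimal Python sketch, assuming the file sits at a placeholder path like big.xlsx:

from openpyxl import load_workbook

# read_only=True streams rows instead of building the full in-memory
# object tree, so memory use stays roughly constant regardless of size.
# "big.xlsx" is a placeholder path, not a file from the question.
wb = load_workbook("big.xlsx", read_only=True)
for ws in wb.worksheets:
    for row in ws.iter_rows(values_only=True):
        pass  # process each row here
wb.close()  # read-only workbooks hold the file handle until closed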

Related

Issue Importing .xlsx into Oracle SQL Developer

I am trying to import a .xlsx file into Oracle via the import wizard. However, when I select the .xlsx file nothing happens; usually when I import a .csv I then specify the format etc., but here I am just brought back to the home screen. The file is quite small, so I don't see why this wouldn't work. Does anyone have any advice?
The fastest way is to convert your Excel data into CSV and import it as usual. Depending on the file size, SQL Developer version, and operating system, there seem to be memory problems (especially on 64-bit systems with a 64-bit JDK) even though the file looks small.
Some reports say the .xlsx import succeeded after increasing SQL Developer's virtual memory limit by adding a line like AddVMOption -Xmx1280M (or larger) to the SQLDeveloper.conf file.
Converting .xlsx to CSV is easy, fast, and less stressful than messing with the config file.
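For reference, the memory change is a single line in the conf file; the exact path varies by install, so the one in the comment is an assumption:

# SQLDeveloper.conf (often under sqldeveloper\bin in the install directory)
# Raise the JVM heap limit so the import wizard has room for the workbook:
AddVMOption -Xmx1280M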

How do I make an ACCDE file when the original project is too large

I have a client who needs to ensure that the system cannot be compromised by a 'disgruntled employee', i.e. taking a copy of the 'front end' (the data is not a problem; it is the actual workings and code that need to be secured).
The current system is too large to make an ACCDE file (40,000 KB).
I have tried reducing, compacting, etc. No joy.
I also tried creating a brand new copy and re-importing all modules and objects. No joy.
I then tried creating a blank database with only one form and some code to undertake the importing of the objects. The code worked fine.
The file with just the form and code worked in creating the ACCDE file, but the code then failed to run because it is 'transferring objects', and ACCDE files will not allow that.
Are there any alternatives to solve the original problem?
Thank you
When you are creating the ACCDE file, first check whether the project compiles (Debug > Compile in the VBA editor).
After this, you'll see any errors that prevent you from creating the ACCDE file.
The other point is that if you are using other files as source libraries (through References), you should compile them as well. An ACCDE will accept only other ACCDE files as Access library sources.
Anyway, try the first suggestion; I think it should help, because 40 MB is not a problematic size for an ACCDE. The only limitations I know of are no more than 1,000 objects in a database, and a 2 GB maximum database size for 32-bit Access.
Van Ng - Thank you.
I missed the obvious - after checking the project using 'Compile Project...' I found some old, redundant code that I had left in, with a variable that had not been declared.
I have fixed that and it compiled.
Problem Solved - many thanks.
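As an aside not from the thread: Option Explicit at the top of each module makes Compile flag undeclared variables like that one. A minimal VBA sketch, with hypothetical names:

Option Explicit  ' force every variable to be declared

Public Sub ImportFrontEndObjects()  ' hypothetical procedure name
    Dim dbPath As String            ' declared, so this compiles
    dbPath = "frontend.accdb"       ' hypothetical value
    ' oldCount = 1  ' undeclared: with Option Explicit, Debug > Compile
    '               ' rejects this instead of silently making a Variant
End Sub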

Source files constantly getting appended with garbage when saving on USB drive/stick

I'm a hardware/software developer and prefer to keep my source files on an external USB drive or thumb drive. The problem I've been facing for what seems like years is with saving these files. When I save a file and then compile, I occasionally get syntax errors on lines that don't even exist in my source; for example, it will complain about line 83 when my source file is only 80 lines long. Let's say I have the file open in a particular editor (I prefer Programmer's Notepad or Notepad++) and see only 80 lines... I can save, but if I then open the same file in parallel using something like WordPad, sure enough, I see the extra lines, and it looks like the last few lines of real code got munged and then appended to the end of the file.
At first I thought it was the editor, but I have found that it happens fairly frequently no matter what editor I use. I recently upgraded to a new laptop and OS (from Win7 to Win10) and, lo and behold, the same problem exists there. I've tried different USB drives/sticks... same deal. Am I missing something? This problem has been driving me bonkers! :P

Can EPPlus read sections of Excel worksheets?

I use the following code to read an Excel file into a package:
' Requires Imports OfficeOpenXml and Imports System.IO
Public _package As ExcelPackage = Nothing
Dim flInfo As New FileInfo(FILENAME) ' FILENAME is the path to the workbook
_package = New ExcelPackage(flInfo)  ' load the entire Excel file into memory
This code is great when I want to load an entire file, but sometimes, whether due to my local hardware issues or limitations in EPPlus itself, the import process crashes if the file size exceeds 100 MB.
Is there a way of loading part of an Excel file into an EPPlus package? In fact, is it possible to pick out only certain portions of the Excel file?
Many thanks
How many rows and columns of data do you have? EPPlus struggles when the file gets too big. Version 4 improved things considerably but did not solve it. Unfortunately, there is no great workaround:
EPPlus, handling big ExcelWorksheet
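For completeness, the usual mitigation is to load once inside a Using block and touch only the range you need; a VB.NET sketch with a hypothetical sheet name and range (note EPPlus still parses the whole file on load, so this bounds work, not peak memory):

' Assumes Imports OfficeOpenXml and Imports System.IO
Using package As New ExcelPackage(New FileInfo("big.xlsx"))
    Dim ws As ExcelWorksheet = package.Workbook.Worksheets("Sheet1")
    ' Pull just the block you care about as a 2-D array in one call.
    Dim values As Object(,) = CType(ws.Cells("A1:D100").Value, Object(,))
End Using ' disposes the package and releases its memory promptly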

Out of memory error when merging large numbers of PDFs using Zend_PDF

We're using the Zend_PDF module in SugarCRM to merge PDF invoices that our system generates. I have been able to successfully merge a number of PDFs (around 10 to 30 in my tests), but we're getting memory errors when we try to merge larger numbers of PDF files. The error looks something like this:
[30-Jan-2012 14:10:20] PHP Fatal error: Allowed memory size of 268435456 bytes exhausted at /usr/local/src/php-5.3.8/Zend/zend_operators.c:1265 (tried to allocate 68134 bytes) in /srv/www/htdocs/sugar6_mf/Zend/Pdf/Element/Object/Stream.php on line 442
The above error was generated when we tried to merge 457 pdf files - that's files, not pages. We're going to need to merge 5,000 and more at a time eventually.
Can anyone offer any help/advice on how to address this?
If needed, ask, and I'll post the code on how the merged pdf is being generated.
Thanks.
I should preface this answer by saying that I know nothing about SugarCRM - my response is based solely on my knowledge of Zend_Pdf.
If my understanding is correct, you have a PHP script (hopefully not running inside Apache considering the length of time it will take to process 5,000 files) that is taking multiple PDF files as input using the Zend_Pdf::load() method and then iterating through the pages of each PDF object and adding them to one target instance of Zend_Pdf, which you are then writing to a file using the save() method.
Using this approach, even if you unset() each of the source PDF objects after you've added its pages to the target PDF object, you'll still need enough memory to hold the entire output file. If you blew through 256 MB with only 457 files (268,435,456 bytes ÷ 457 ≈ 575 KB per file), your input PDFs probably average around 500 KB, so your output file is going to be absolutely huge and you are still going to run out of memory.
My advice would be to ditch this method entirely and use pdftk instead, which you could invoke using the exec() function. I'm sure there's a limit to the size of the arguments you can provide to exec(), so it will probably be a multi-step process with several intermediate files, but ultimately I think this will be a faster, more robust solution.
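A rough PHP sketch of that multi-step batching (paths, batch size, and filenames are illustrative assumptions):

// Merge in batches so no single pdftk command line gets too long,
// then merge the intermediate files in a final pass.
$files = glob('/path/to/invoices/*.pdf');  // hypothetical location
$intermediates = array();
foreach (array_chunk($files, 100) as $i => $batch) {
    $out = "/tmp/merge_batch_$i.pdf";
    exec('pdftk ' . implode(' ', array_map('escapeshellarg', $batch)) .
         ' cat output ' . escapeshellarg($out));
    $intermediates[] = $out;
}
exec('pdftk ' . implode(' ', array_map('escapeshellarg', $intermediates)) .
     ' cat output /tmp/merged.pdf');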
And just to reiterate an earlier point: I would not run this process within Apache. I would set up a cron job that runs at the appropriate intervals and drops the output file into a secure area on your web/file server.