Splitting Large Files With VB.NET - vb.net

Does anyone know the best way to split up large files in VB.NET? These files can be in excess of 10GB. I have spent all day googling and found several approaches that almost work, but what I really want to know is: what is the most efficient way to do this?
Many thanks

I don't know whether you have already stumbled on the following topics, for example. If so, did they not meet your needs, or would you like other alternatives?
How to split a large file into smaller files using VB.NET 2003?
Split large file into smaller files by number of lines in C#?
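For reference, the buffered-copy pattern those threads describe can be sketched language-neutrally. Here is a minimal Python version (the `split_file` helper and file names are illustrative, not from either thread); in VB.NET the same loop maps onto `FileStream.Read`/`FileStream.Write` with a fixed byte buffer, which is what keeps memory flat for a 10GB input:

```python
import os

def split_file(path, part_size, out_dir, buf_size=64 * 1024):
    """Split the file at `path` into parts of at most `part_size` bytes.

    Copies through a small fixed buffer, so memory use stays constant
    regardless of how large the input file is.
    """
    os.makedirs(out_dir, exist_ok=True)
    parts = []
    with open(path, "rb") as src:
        part_num = 0
        while True:
            first = src.read(min(buf_size, part_size))
            if not first:
                break  # end of input, nothing left to write
            part_path = os.path.join(out_dir, f"part{part_num:04d}.bin")
            with open(part_path, "wb") as dst:
                dst.write(first)
                written = len(first)
                # keep copying buffer-sized chunks until this part is full
                while written < part_size:
                    buf = src.read(min(buf_size, part_size - written))
                    if not buf:
                        break
                    dst.write(buf)
                    written += len(buf)
            parts.append(part_path)
            part_num += 1
    return parts
```

The important design point is that the buffer size (64 KB here) is independent of the part size, so even gigabyte-sized parts never require a gigabyte of RAM.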

Related

Best way to export nvarchar(max) data with line breaks?

I have yet to find a reasonable solution to this problem. I need to export a table that has two columns of nvarchar(max) data. The data will contain new lines, line breaks and sometimes Unicode characters, so a plain flat-file format doesn't make much sense.
What is the optimal way to achieve this? I have previously attempted it in C#, but the time taken was too slow. I am open to solutions in any reasonable tool, e.g. SQL Server Management Studio, C#, PowerShell, etc.
Thanks
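One note on the "flat file doesn't make sense" point: a delimited format can still round-trip embedded line breaks if every field is quoted and the file is written as UTF-8. A minimal sketch of that idea (illustrative data, with Python's csv module standing in for whatever exporter ends up being used):

```python
import csv
import io

# Quote every field so embedded newlines and commas stay inside the
# field, and keep the data as Unicode strings so non-ASCII characters
# survive. (The rows below are made-up sample data.)
rows = [
    ["id-1", "line one\nline two"],          # value with an embedded newline
    ["id-2", "Ünïcode text, with commas"],   # non-ASCII value
]

buf = io.StringIO(newline="")
writer = csv.writer(buf, quoting=csv.QUOTE_ALL)
writer.writerows(rows)

buf.seek(0)
read_back = list(csv.reader(buf))  # the quoted fields parse back intact
```

Whether this is fast enough for the table in question is a separate matter, but it shows the format itself is not the obstacle.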

Is there a way to automatically import data into a form field in Adobe Acrobat Pro?

I'm open to other solutions as well.
My issue is this: we have 500+ different PDFs (and growing) that need certain information (company info, phone numbers, etc.) added to form fields dynamically. This needs to be dynamic because the information changes regularly, and we do not want to update all 500 PDFs each time it changes. So I am looking for a way to set up the PDFs so that they all read from a single external source (which could be something as simple as a text file) dynamically upon opening in Acrobat Pro.
I have done some on-the-fly PDF creation in the past through PHP; however, that does not seem like the best solution here, as the PDFs need to be edited a lot by non-programmers. I'd prefer not to go that route and instead find a way to get a few lines of data into the PDFs they create.
I've researched this a bit and it seems... possible, but confusing? This is the best thing I could find so far:
http://www.pdfscripting.com/public/department48.cfm
But the three solutions that it offers near the bottom all sound convoluted. Just wondering if there is something simpler that I am missing. All I really need to do is have the PDF import a few small chunks of text. Seems like it should be easy...
I think you can give http://www.codeproject.com/Tips/679606/Filling-PDF-Form-using-iText-PDF-Library a try. Hopefully it fulfills your needs.

What is the best way to read Excel File using VB.NET?

As far as I know, there are two ways to read Excel files using VB.NET:
OLEDB
COM component
I am using both, but I want to know which is the more efficient method when it comes to reading Excel data.
I am currently extracting a specific record from a spreadsheet with a massive amount of data: thousands of rows and still counting.
Any help would be much appreciated

How to store and handle a big wordlist?

I have a .txt file with 200k+ words in it. I will use this file to check whether a specific word exists or not. What would be the best way to store this so that I can read it fast? I will not change the content of the file, just read from it.
I'm guessing it's not ideal to keep it in a .txt file and load it into some sort of array. I've heard something about using a database; is that the way to go here?
I just need to be pointed in the right direction.
Have you tried loading it into memory? It might not be that bad. By loading it into memory I mean putting it inside an NSArray, as you already suggested.
The other way to go could be a database. SQLite is the way to go on iOS (Core Data would be a bit of overkill for this problem). I am not going to explain how to use SQLite because there are lots of tutorials on the internet :)
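The in-memory suggestion can be made concrete with a hash set rather than a plain array: 200k short strings are only a few megabytes, and set membership checks are O(1) on average instead of a linear scan. A minimal sketch (file name and helper names are illustrative):

```python
def load_wordlist(path):
    """Load one-word-per-line text into a set for fast lookups.

    Lower-casing both sides makes the check case-insensitive; drop the
    .lower() calls if exact-case matching is wanted.
    """
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def contains(words, candidate):
    # Average O(1) membership test against the pre-built set.
    return candidate.lower() in words
```

The same idea in iOS terms is an NSSet instead of an NSArray; the database route only starts to pay off once the list is too big for memory or needs prefix/partial matching.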

sql server openrowset to read huge xml files in one step

It's my first post ever... and I really need help on this one, so anyone who has some knowledge on the subject - please help!
What I need to do is read an XML file into SQL Server data tables. I have looked all over for solutions and have actually found a few. The problem is the size of the XML being loaded: it weighs 2 GB (and there will be 10 GB ones). I have managed to do this, but I saw one particular solution that seems great to me, and I cannot figure it out.
OK, let's get to the point. Currently I do it this way:
I read the entire XML using OPENROWSET into a variable (this takes up all of the RAM...).
Next I use the .nodes() method to pull out the data and fill the tables with it.
That's a two-step process. I was wondering if I could do it in only one step. I saw that there are things like format files, and there are numerous examples of how to use them to pull data out of flat files or even Excel documents in a record-based manner (instead of sucking the whole thing into a variable), but I CANNOT find any example showing how to read a huge XML into a table while parsing the data on the fly (based on the format file). Is it even possible? I would really appreciate some help, or guidance on where to find a good example.
Pardon my English - it's been a while since I had to write so much in that language :-)
Thanks in advance!
For very large files, you could use SSIS: Loading XML data into SQL Server 2008
It gives you the flexibility of transforming the XML data, as well as reducing your memory footprint for very large files. Of course, it might be slower compared to using OPENROWSET in BULK mode.
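Outside SQL Server, the record-at-a-time, parse-on-the-fly idea the question asks about is exactly what streaming XML parsers provide: each record element is handed over as soon as it is complete, and clearing consumed elements keeps memory flat no matter how large the file is. A minimal sketch in Python (the `<row>` and child tag names are illustrative); SSIS's XML source and .NET's `XmlReader` follow the same pattern:

```python
import xml.etree.ElementTree as ET

def stream_rows(xml_file, row_tag="row"):
    """Yield one dict per completed `row_tag` element without ever
    holding the whole document in memory."""
    for event, elem in ET.iterparse(xml_file, events=("end",)):
        if elem.tag == row_tag:
            yield {child.tag: child.text for child in elem}
            elem.clear()  # free the subtree we just consumed
```

Each yielded dict could then be inserted (or batched) into the destination table, which is the one-step pipeline the question describes.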