Open, Read, Write Files on Network Attached Storage via VBScript

I have thousands of small CSV files I want to aggregate (with a little munging in-script first). They are on a NAS device, a "SNAP" server to be more exact. I've had some success with VBA from Excel, doing about 700 files in about a minute, if I recall (this was a month ago). Actually, it was only half a success: the Snap Server is home to 80% PDFs and some proprietary-format files, and only 20% CSVs. The loop to test for file type took the execution time north of 2 hours, and the script apparently completely ignored the date filtering I put in. The quick result, or 'success', was on 700 copies of the CSVs I made and put on my C: drive. I've been doing VBA scripting for almost 20 years and I think I'm decent at it; I've done a lot of CSV reading and writing from VBA over the last 9 years. So my question is more about your experience with Snap Servers, or NAS generally.
Can I not treat the snap server more or less like any drive/folder with VBA?
Would VBScript be more appropriate? (already using FileSystemObject, after all)
If I can use VBS, can I store the script on the NAS and run it using Task Scheduler?
I'd appreciate any tips or gotchas from you folks who have experience with snap servers!

Some thoughts on the choice of language:
VBScript is more lightweight than VBA in that it does not require MS Office to be installed. The syntax is similar, so there is no real productivity difference.
Moving forward, PowerShell is strongly recommended for Windows system admin tasks, general text file processing, etc.
Some thoughts on using the NAS server:
a) If running your script on a workstation, you should be able to use a UNC path like \\myserver\myshare to connect to a share on the NAS. If not, you may need to map a drive letter to that share before your script runs.
b) If you want to run your script on the NAS itself, there are two things to consider: whether the NAS OS is locked down so that you cannot add your own scheduled task, and whether it runs Linux or some flavor of Windows. Many NAS products use embedded Linux, so running a VBA or VBScript solution directly on the NAS will not work unless the device is based on something like Embedded XP and you have access to Scheduled Tasks, etc.
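If it helps, here is a minimal VBScript sketch of the aggregation loop, run from a workstation against the share. The share path, output path, and cutoff date are placeholders to adapt; the point is to do the cheap extension test first and never open the PDFs and proprietary files at all.
Option Explicit
Dim fso, src, outFile, stream, f, cutoff
Set fso = CreateObject("Scripting.FileSystemObject")
cutoff = DateSerial(2010, 1, 1)                       ' placeholder date filter
Set outFile = fso.CreateTextFile("C:\temp\aggregate.csv", True)
Set src = fso.GetFolder("\\myserver\myshare\csvdrop") ' placeholder UNC path
For Each f In src.Files
    ' Cheap extension check first; only then look at the date
    If LCase(fso.GetExtensionName(f.Name)) = "csv" Then
        If f.DateLastModified >= cutoff Then
            Set stream = f.OpenAsTextStream(1)        ' 1 = ForReading
            If Not stream.AtEndOfStream Then outFile.Write stream.ReadAll
            stream.Close
        End If
    End If
Next
outFile.Close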
Hope this helps...

Related

Running Malware In VirtualBox

For a project I am working on, I want to collect data on malware in a VirtualBox VM for 30 seconds, then revert the VM back to its original state, and repeat this process 500 times for 500 different malware links that I have in a txt file. Before I revert to the clean VirtualBox state, I want to collect data from a program that is monitoring that malware. What is the best way to do this?
Edit: I'd also like to point out that I have code to read the opcodes being used by the application. All I would like to do is automate this process for VirtualBox.
I am not aware of such a feature in VirtualBox or VMware, but you can always use third-party tools to compare the state of the different parts (like the registry) before and after the execution of the malware.
I have heard Ashampoo UnInstaller is a great tool for the job, but I have personally never tested it.
Another option is to use sandboxes like Sandboxie or Cuckoo Sandbox to capture the changes.
Another option is to use online sandboxes like Hybrid Analysis, which is well suited to what you want to do.
Just keep in mind that most malware uses anti-VM techniques to prevent execution in VMs, so you probably will not be able to capture all of its behavior.
Hope it helps.
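For the revert-and-repeat part of the question specifically, VBoxManage can drive the snapshot cycle from the host. A rough VBScript sketch follows; the VM name ("analysis"), snapshot name ("clean"), and file paths are all placeholders, and the steps for launching the sample and pulling the monitor's output out of the guest are left as comments because they depend on your setup.
Option Explicit
Dim sh, fso, links, link
Set sh = CreateObject("WScript.Shell")
Set fso = CreateObject("Scripting.FileSystemObject")
Set links = fso.OpenTextFile("C:\work\malware_links.txt", 1)   ' 500 links, one per line
Do Until links.AtEndOfStream
    link = Trim(links.ReadLine)
    If Len(link) > 0 Then
        ' Roll the (powered-off) VM back to its pristine snapshot, then boot it headless
        sh.Run "VBoxManage snapshot ""analysis"" restore ""clean""", 0, True
        sh.Run "VBoxManage startvm ""analysis"" --type headless", 0, True
        ' ...hand the link to the sample runner / opcode monitor inside the guest...
        WScript.Sleep 30000                                    ' let it run for 30 seconds
        ' ...copy the monitor's output out of the guest before the revert discards it...
        sh.Run "VBoxManage controlvm ""analysis"" poweroff", 0, True
    End If
Loop
links.Close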

cfprint issues with large spool files (ColdFusion)

I am using cfprint from ColdFusion to print multiple PDFs from a directory. The problem I am having is that when the files are spooled to the printer, the size of the file dramatically increases and slows everything down. The file in the folder is 125 KB, but in the printer spool it increases to 15.7 MB. Here is the ColdFusion code:
<cfprint
source="[FILELOCATION]/[FILE].pdf"
color="yes"
printer="[printer name]">
The files will eventually print but it can take upwards of 15-20 minutes. Does anyone have any solutions for this issue? I have tried with both CF generated PDFs and ones that I have created from scratch. Thanks
Queue up two to five at a time. Pause to allow processing. Mark them as printed, move or delete them, then move on to the next batch. Time this out yourself to see how much time you need to allow. That way you don't compound a bunch of work for the server and create a bottleneck on your CF server.
If you are doing this with just one server, consider adding a secondary, low-priority server running a fully paid-for, EULA-compliant, registered edition of ColdFusion (or Railo), and dedicate that server to printing so your other server can do useful things.
Edit
So the OP has a ColdFusion print bottleneck. On the server that does the printing (the same as your CF server, I assume?), and if it is a Windows server (I'm not sure of your server version), there is a print queue folder. Provided you have access to this folder, you can do a few things. You can create a method for FTPing your files to this folder (or copying them, if it is the same server). The printer will queue up the job and off it goes. You can add some checks, such as polling the print queue folder for its file count: if the count is greater than zero, check back in 15 minutes; if it is zero, copy over a few more files.
You can create a scheduled task in your CF Admin and automate this. There is a getPrinterInfo() function, so you can check whether the printer is offline and do other things, like checking for another printer somewhere else if you need to reroute print jobs. You can also set up several print servers with printers attached, hit several print servers, and check their print queue folders.
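As a sketch of that polling idea, something like the following VBScript, run every few minutes by the Windows Task Scheduler rather than by ColdFusion, would drain an outbox a few PDFs at a time. The folder paths and batch size are placeholders, and how a file actually gets submitted to the queue (copy, FTP, or a print command) depends on your setup, so that step is left as a comment.
Option Explicit
Const BATCH_SIZE = 5                                              ' guess; tune to taste
Dim fso, spool, outbox, f, i, j
Dim batch()
Set fso = CreateObject("Scripting.FileSystemObject")
Set spool = fso.GetFolder("C:\Windows\System32\spool\PRINTERS")   ' default Windows spool folder
Set outbox = fso.GetFolder("D:\pdf_outbox")                       ' PDFs waiting to print
' Only feed the queue once it has drained; otherwise the next scheduled run retries
If spool.Files.Count = 0 And outbox.Files.Count > 0 Then
    ' Pick the next few files first, so the folder is not modified while being walked
    ReDim batch(BATCH_SIZE - 1)
    i = 0
    For Each f In outbox.Files
        If i >= BATCH_SIZE Then Exit For
        batch(i) = f.Path
        i = i + 1
    Next
    For j = 0 To i - 1
        ' ...submit batch(j) here: copy/FTP it to the queue folder as described above...
        fso.MoveFile batch(j), "D:\pdf_printed\" & fso.GetFileName(batch(j))  ' mark as printed
    Next
End If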
The magic is endless; the goal is to offload work to something other than your ColdFusion server.
So to recap:
Separate concerns by not doing cfprint.
Create escape routes to other printers if you can.
If you must use ColdFusion, then queue up a dedicated ColdFusion server for the print management work.
Use getPrinterInfo() and dump out things to see what you can use, trap, etc.
Ben Forta has a tool that can check for several printers; consider incorporating this.
Next, use cfftp (or cffile if you are on the same server), provided you have access, and copy files to the print queue folders, doing no cfprint at all.
Here is a link on print spool stuff (another link in the doc shows you how you can change the spool location).
When it is over, you are going to be the ColdFusion print master, with escape routes and checks and everything.

Extracting Data from a VERY old unix machine

Firstly, apologies if this question seems like a wall of text; I can't think of a better way to format it.
I have a machine with valuable data on it (circa 1995). The machine is running Unix (SCO OpenServer 6) with some sort of database stored on it.
The data is normally accessed via a software package of which the license has expired and the developers are no longer trading.
The software package connects to the machine via telnet to retrieve data and modify data (the telnet connection no longer functions due to the license being changed).
I can access the machine via an ODBC driver (SeaODBC.dll) over a network, and this was how I was planning to extract the data. But so far I have retrieved 300,000 rows in just over 24 hours; I estimate there will be around 50,000,000 rows in total, so at the current speed it will take 6 months!
I need either a quicker way to extract the data from the machine via ODBC or a way to extract the entire DB locally on the machine to an external drive/network drive or other external source.
I've played around with the Unix interface, and the only large files I can find are in a massive matrix of single-character folders (e.g. A\G\data.dat, A\H\Data.dat, etc.).
Does anyone know how to find out the installed DB systems on the machine? Hopefully it is a standard and I'll be able to find a way to export everything into a nicely formatted file.
Edit
Digging around the file system, I have found a folder under root > L which contains lots of single-letter folders; each single-letter folder contains more single-letter folders.
There are also files which are named after the table I need (e.g. "ooi.r") which have the following format:
<Id>
[]
l for ooi_lno, lc for ooi_lcno, s for ooi_invno, id for ooi_indate
require l="AB"
require ls="SO"
require id=25/04/1998
{<id>} is s
sort increasing Id
I do not recognize those kinds of filenames, A\G\data.dat and so on (filenames with backslashes in them?), and it is likely to be a proprietary format, so I wouldn't expect much from that avenue. You can try running the Unix file command on them to see whether they are in any recognized format, just to see...
I would suggest improving the speed of data extraction over ODBC by virtualizing the system. A modern computer will have faster memory, faster disks, and a faster CPU and may be able to extract the data a lot more quickly. You will have to extract a disk image from the old system in order to virtualize it, but hopefully a single sequential pass at reading everything off its disk won't be too slow.
I don't know what the architecture of this system is, but I guess it is x86, which means it might not be too hard to virtualize (depending on how well SCO OpenServer 6 agrees with the virtualization). You will have to use a hypervisor that supports full virtualization (not paravirtualization).
I finally solved the problem: running the query with another tool (not through MS Access or MS Excel) worked massively faster. I ended up using DaFT (Database Fishing Tool) to SELECT INTO a text file, and it processed all 50 million rows in a few hours.
It seems the DLL driver I was using doesn't work well with any MS products.
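For anyone hitting the same wall without a tool like DaFT, the same dump-to-text approach can be scripted with plain ADO from VBScript, which at least keeps Access and Excel out of the path. A rough sketch; the DSN, table name, and output path are placeholders, and whether CacheSize buys anything depends on the driver:
Option Explicit
Dim fso, out, conn, rs, i, line
Set fso = CreateObject("Scripting.FileSystemObject")
Set out = fso.CreateTextFile("C:\dump\ooi.txt", True)
Set conn = CreateObject("ADODB.Connection")
conn.Open "DSN=myOldUnixBox"                 ' placeholder DSN built on SeaODBC.dll
Set rs = CreateObject("ADODB.Recordset")
rs.CacheSize = 1000                          ' fetch in big chunks, not row by row
rs.Open "SELECT * FROM ooi", conn, 0, 1      ' 0,1 = forward-only, read-only cursor
Do Until rs.EOF
    line = ""
    For i = 0 To rs.Fields.Count - 1
        If i > 0 Then line = line & ","
        line = line & rs.Fields(i).Value     ' Null concatenates as an empty string
    Next
    out.WriteLine line
    rs.MoveNext
Loop
rs.Close
conn.Close
out.Close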

VbaProject.OTM deployment

I came across this page and was thinking about the best method to distribute my VbaProject.OTM file (located in %appdata%\Microsoft\Outlook\) to a bunch of ~30 users at my office. Is it better to simply copy the OTM file onto the network and then copy it back to all users' computers (manually or with a .bat), OR would it be better to use the method described in the link above to generate an OPS file and import it back with Proflwiz.exe? What's the difference?
We are all on Microsoft Office Outlook 2003 at the moment; we might upgrade to 2007 one day, but that is still years away.
I finally came up with some elements for deploying an Outlook VBA project. There are a lot of ways to do this, but the easiest way to do it without installing anything, while keeping the same methodology, is to run an OTM file directly from a server. I found out that the outlook.exe process has a parameter, /altvba, that allows you to specify another path to run the OTM file from. Here is an example:
outlook.exe /altvba "\\myServer\myFolder\myFile.otm"
This allows me to update only one file to get all computers updated. Obviously, if the file is big and the server's ping is on the high side, it may delay the launch of Outlook. The other problem with this method is that everybody has to shut down Outlook if you want to update the OTM file on the server (and if you work in an office where everyone uses Outlook, you know that it is impossible to get everyone to shut it down at the same time, unless you eventually code a macro to do so).
To prevent both of those problems, I could set up a batch file to copy the server's OTM file client-side every time there is a new version (I just have to check the NTFS last-modified attribute). This way, Outlook boots with a local file, the batch file takes 2-3 seconds to copy the file if needed (or launches Outlook instantaneously), and there is no problem updating the OTM file on the server. Users will have to start Outlook with the batch file (or with the slightly different outlook.exe path using the /altvba parameter; either way, they need a different shortcut/file to start off the first time). One other advantage of /altvba is that it is still easy for the user to run Outlook without it (to see whether the VBA is the problem in case Outlook is sluggish), and the file remains unchanged after an Outlook reinitialization.
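Since date comparisons are awkward in a plain .bat file, that copy-if-newer launcher is easy to sketch in VBScript instead (the server path, local file name, and the Office 11 install path are placeholders):
Option Explicit
Dim fso, sh, srv, loc, needCopy
Set fso = CreateObject("Scripting.FileSystemObject")
Set sh = CreateObject("WScript.Shell")
srv = "\\myServer\myFolder\myFile.otm"
loc = sh.ExpandEnvironmentStrings("%APPDATA%") & "\Microsoft\Outlook\myFile.otm"
' Refresh the local copy only when the server copy is newer (NTFS last-modified)
needCopy = True
If fso.FileExists(loc) Then
    If fso.GetFile(srv).DateLastModified <= fso.GetFile(loc).DateLastModified Then
        needCopy = False
    End If
End If
If needCopy Then fso.CopyFile srv, loc, True
' Boot Outlook 2003 against the local copy
sh.Run """C:\Program Files\Microsoft Office\OFFICE11\OUTLOOK.EXE"" /altvba """ & loc & """"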
Other solutions include a COM add-in, which can be developed in a lot of languages, including VB6 (no conversion needed from VBA). There is also a bunch of tools included in Microsoft Office XP Developer that could help get the job done (not free, however, especially if you need the most up-to-date version).

How do I distribute updates to an Access database front end?

I've got an Access 2007 database that I developed, which connects to SQL Server for the actual data storage. I used the Package Solution Wizard to create a distributable installer that included the Access runtime (with an ACCDE file), which I went around and installed on 15 or so PCs. Anyway, my question is: what is the best way to distribute updates to this database? Right now I'd need to go around and remove and reinstall. That's not a problem... I was just wondering if there was another way.
I've tried leaving the front end on a network share, but it seems that most people suggest storing the front end on the local machine, which makes sense. The problems I've run into when I leave it on a network share (at least with Access 2003 MDBs) are that I find myself needing to compact and repair often, and I also have to kill the open sessions (users who have the file open) when upgrading. I would imagine it could also hypothetically create an unnecessary bottleneck if the user is not on the local network.
Automating front-end distribution is trivial. It's a problem that has been solved repeatedly. Tony Toews's http://autofeupdater.com is one such solution that is extremely easy to implement and completely transparent to the end user.
We developed a VBScript 'launcher' for our Access apps. That is what is linked to on the Start menu of users' PCs, and it does the following:
It checks a version.txt file located on a network share to see whether it contains different text to a locally stored copy
If the text is different, it copies the Access MDB and the new version.txt to the user's hard drive.
Finally, it runs the MDB in Access.
In order to distribute an update to the user's pc all that is required is to change the text in version.txt on the network share.
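A minimal sketch of that launcher, with the share path, local folder, and file names as placeholders:
Option Explicit
Dim fso, sh, remoteVer, localVer
Set fso = CreateObject("Scripting.FileSystemObject")
Set sh = CreateObject("WScript.Shell")
remoteVer = ReadAllText("\\server\apps\version.txt")
localVer = ""
If fso.FileExists("C:\apps\version.txt") Then localVer = ReadAllText("C:\apps\version.txt")
' A changed version.txt on the share triggers a fresh copy of the front end
If remoteVer <> localVer Then
    fso.CopyFile "\\server\apps\frontend.mdb", "C:\apps\frontend.mdb", True
    fso.CopyFile "\\server\apps\version.txt", "C:\apps\version.txt", True
End If
sh.Run """C:\Program Files\Microsoft Office\Office12\MSACCESS.EXE"" ""C:\apps\frontend.mdb"""

Function ReadAllText(path)
    Dim ts
    Set ts = fso.OpenTextFile(path, 1)   ' 1 = ForReading
    If ts.AtEndOfStream Then ReadAllText = "" Else ReadAllText = ts.ReadAll
    ts.Close
End Function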
Perhaps you can implement something similar to this
Make a batch file on the server (network drive).
Create a shortcut link to that batch file.
Copy the shortcut to User's Desktop.
When user double-clicks on shortcut, it will copy a fresh copy from network to local.
Replace the old database.adp on the server drive when you release a new version.
Each user gets a copy of database.adp on their machine.
A way to remove the security warning when opening a file from a network share is described here.
Batch File
@ECHO OFF
REM copy from network drive to local
REM ("echo F |" answers xcopy's file-or-directory prompt when the local copy does not exist yet)
echo F | xcopy "Your_Network_Drive\database.adp" "C:\User\database.adp" /Y /R /F
REM call your database file - Access 2007
"C:\Program Files\Microsoft Office\Office12\MSAccess.EXE" "C:\User\database.adp"
This is a very old post, but I used the autofeupdater until it stopped working, so I wrote one of my own; it has evolved over the last few years into something that I have used with many clients. It's very simple to use and there is no interface: just an EXE and a very simple config file.
Please check it out here. I can also help with custom solutions if none of the configurations work for your needs. http://www.dafran.ca/MS-Access-Front-End-Loader.aspx
After trying all of the solutions above (not exactly these solutions but these are the common suggestions in the Access community), I developed a system entirely within Access using VBA that allows an admin DB to create and publish objects to client DBs without the need for user intervention or management of multiple DB files.
This approach has several benefits:
1. It simplifies the development process by having a dedicated environment (admin DB) for development and testing totally separate from the client DBs.
2. It simplifies the update/distribution process by allowing a developer to push out updates in real time that client DBs can implement in the background, without involving users. Can also allow devs to roll back to previous versions if desired.
3. It could be used as a kind of change management system within Access for developers who want to commit multiple changes to objects and modules and retain past changes.
4. It allows for easier user access control by allowing an admin to easily assign certain objects to specific users/roles without needing to maintain multiple versions of the DB.
I will hopefully post the code to GitHub soon, I just have to get clearance from my workplace to release it. I will edit this post to include the link when I have.
We have usually kept the Access front ends on network drives, and just put up with the need to compact and repair on a regular basis. You will probably find you need to do that even when they are installed locally, anyway.
If you must have it installed locally, there are various tools which will enable you to "push out" software updates, and the guys over on ServerFault would have more information on those. Assuming such tools aren't available, the only other option I can think of is to write a small loader program that checks the local .MDB against a master copy on the server, and re-copies it across if they are different, before then launching the MDB.