I know that I can share files using Shared Folders in Virtual PC, but this method seems to have pretty poor performance. Is there another method to share files that provides better performance? (Besides using something other than Virtual PC)
The best way to do it is probably to set up a proper bridged network connection between the host machine and the VM.
Using VirtualBox, I had problems setting up shared folders (I tried setting them up, and it wasn't working intuitively right away, so I got fed up with it). Instead, I just FTPed to the host OS (which I already had set up since I was on Linux) and transferred the file that way.
I would suggest timing transferring a reasonably sized file via shared folders, and then time it again using FTP... if it's faster, that's your solution :-)
Sorry I can't give actual performance metrics on that!
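To make the comparison concrete, here is a minimal timing harness (a Python sketch; the destination path here is just a local file standing in for the mounted share, and the FTP callable is something you would substitute from your own setup):

```python
import os
import shutil
import tempfile
import time

def time_transfer(copy_fn, label):
    """Time a single file transfer performed by copy_fn."""
    start = time.perf_counter()
    copy_fn()
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed:.2f} s")
    return elapsed

# Create a reasonably sized test file (16 MiB).
src = tempfile.NamedTemporaryFile(delete=False)
src.write(b"\0" * (16 * 1024 * 1024))
src.close()

# Shared-folder case: a plain copy to the share's mount point
# (here just a local path, standing in for the share).
dst = src.name + ".copy"
shared_time = time_transfer(lambda: shutil.copyfile(src.name, dst),
                            "shared folder")
# For the FTP case, pass a lambda that uploads via ftplib instead,
# then compare the two numbers.
```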
Related
I was running Linux on an ARM-based detailed CPU model in FS mode, taking a checkpoint after the first boot of the CPU model. However, I need to frequently transfer files to the ARM Linux guest, so I am wondering if there is any way to do so without relaunching the CPU model (e.g., transferring files directly to Linux through SFTP, or mounting the host file system)? Many thanks!
Currently, I just added the files to the Linux disk image, and relaunched the CPU model from scratch (which takes more than 1.5 hours).
Here are the possibilities that I'm aware of:
use 9P. A semi-outdated patch is at http://gem5.org/WA-gem5, but it is easy to get working again.
9P is designed explicitly to mount host directories on guest, and is therefore the nicest solution.
See also: https://github.com/cirosantilli2/gem5-issues/issues/24
QEMU example.
use a second disk image, normally squashfs which is easy to generate quickly and conveniently.
unmount, make changes to image, remount. So a bit annoying, but doable, and possibly the easiest to get working.
Not currently exposed on fs.py, patch mentioned at: How to attach multiple disk images in a simulation with gem5 fs.py?
m5 readfile + zip.
OK, this is likely going to be slow, just mentioning it ;-)
guest-to-host networking: as you mentioned, if that were possible, you could transfer files over FTP or mount network shares
However I don't think it is supported, see: How to do port forwarding from guest to host and vice versa in gem5?
Also it would require messing with NFS setups on host / guest, which is always a bad thing.
With QEMU, as usual, it is possible.
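The second-disk-image option above can be scripted. Here is a hedged sketch that just builds the mksquashfs command line (the directory and image names are illustrative; actually running it requires squashfs-tools to be installed):

```python
import subprocess

def squashfs_cmd(src_dir, image_path):
    """Build the mksquashfs command packing src_dir into image_path.

    -noappend overwrites an existing image instead of appending to it.
    """
    return ["mksquashfs", src_dir, image_path, "-noappend"]

cmd = squashfs_cmd("./files_for_guest", "extra.sqsh")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment once squashfs-tools is installed
```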
The current situation is a mess. The main reason is that it is a bit hard to nicely integrate 9P / multidisk into fs.py. But I'm certain it is possible, we just need a brave soul.
Related thread about how to expand an existing disk image if space is your concern: https://www.mail-archive.com/gem5-users@gem5.org/msg16494.html
Mailing list thread: https://www.mail-archive.com/gem5-users@gem5.org/msg16477.html
I need to see the workings of a piece of malware (Stuxnet) for a course reading assignment. I have its source code in C, but I don't know how to observe it in action. I thought of running it in a virtual machine in Ubuntu, but I am not sure whether it will infect my computer.
How should I run, rather, test it?
To see how it works, you don't need to be an expert in C, but it will help if you are familiar with the language.
To test the file, compile it to a Windows or Linux binary and then run it from inside a virtual machine (VM). Malware can sometimes break out of the VM and infect the host machine, so backing up your data or using a separate computer is a good idea.
If I recall correctly, Stuxnet had a very specific start routine and wouldn't start unless a number of specific hardware IDs existed.
Can someone provide an example (or a link to one) illustrating how to sync system files (not database) between a local computer and a remote computer/server not on the same network?
Syncing files within the same pc and syncing files between pc's within the same network is straightforward and rather simple. I have those scenarios working nicely.
I need to sync files from "C:\FilesToSync" to a remote endpoint or IP address. A WCF, HTTP, FTP, or TCP implementation is fine; I just need to learn how the sync needs to be set up for any of those.
Thanks
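For the FTP option, a minimal one-way push using Python's standard library looks like this (the hostname, credentials, and paths are placeholders, and it uploads files only, without recursion or conflict detection):

```python
import ftplib
import os

def push_dir(host, user, password, local_dir, remote_dir):
    """Upload every regular file in local_dir to remote_dir over FTP."""
    with ftplib.FTP(host) as ftp:
        ftp.login(user, password)
        ftp.cwd(remote_dir)
        for name in os.listdir(local_dir):
            path = os.path.join(local_dir, name)
            if os.path.isfile(path):
                with open(path, "rb") as f:
                    ftp.storbinary(f"STOR {name}", f)

# push_dir("203.0.113.10", "user", "secret", r"C:\FilesToSync", "/sync")
```

A real sync would also compare timestamps or hashes before uploading, and handle files deleted on either side.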
This is a really good question. For 'remote' front-end developers/designers like me, it's hard to choose a well-recommended solution: in some places it's Git (OS-independent), in certain .NET projects it's TortoiseSVN with Visual Studio, and searching online turns up tons of hard-to-choose-between options like Bitbucket and GitHub's Fork.
I have also been looking for the right 'generic' solution for months now.
Example:
1. working on a solution on one PC, with large, heavy files (XML and PSD types, for instance)
2. uploading/synchronising with my remote hosting server for clients
3. a backup server in another location, for comparing/collaborating/synchronising
Some people might say that different projects require different solutions. In the end, I use Notepad++ for remote files and my localhost, and Google Drive only for small doc files; meanwhile, the winning dinosaur-style combo for the layman in this field remains a USB drive and the last-modified date in My Computer.
How can I prevent my Cocoa app from using any virtual memory, or if that is not possible, securely clear the virtual memory contents (on the hard drive) after usage?
I'm worried about this because, say, I'm creating an app like 1Password that stores passwords. While the passwords are temporarily shown to the user and held in memory, what if they get paged out to virtual memory? Then I run the risk of having the actual passwords exposed on the hard drive for intruders to find!
Another example would be encryption software. A file is put in, and an encrypted file comes out. If virtual memory is used, the unencrypted file contents may be exposed on the hard drive. This is very bad, because the user expects only the original file itself to be exposed, not copies of its contents left lying on the hard drive by virtual memory. In fact, the user shouldn't have to worry about such things.
Apple provides a system-level feature that solves this problem, called Secure Virtual Memory, which is on by default on newer versions of Mac OS X (Snow Leopard onwards, I think). You can turn it on and off from the Security pane of System Preferences.
As far as I know, there's no easy way to do this at the application level, although you could certainly encourage your users to enable it.
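The usual partial mitigation at the application level is to pin sensitive buffers in RAM with mlock(2) so they are never paged out, and to zero them before freeing. The same libc calls are available from Cocoa/Objective-C; the sketch below uses Python's ctypes purely for illustration (mlock can fail if the RLIMIT_MEMLOCK limit is low, so its result is checked):

```python
import ctypes
import ctypes.util

# Load the C library (POSIX libc assumed).
libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

# A sensitive buffer; mlock pins its pages so they cannot be swapped out.
secret = ctypes.create_string_buffer(b"hunter2", 64)
locked = libc.mlock(ctypes.byref(secret), ctypes.sizeof(secret)) == 0

# ... use the secret here ...

# Scrub the buffer before unlocking/freeing so no stale copy lingers.
ctypes.memset(ctypes.byref(secret), 0, ctypes.sizeof(secret))
if locked:
    libc.munlock(ctypes.byref(secret), ctypes.sizeof(secret))
print(secret.raw == b"\0" * 64)  # → True
```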
Okay so I want to make an application that launches other applications. However, the goal here is to make the app "portable" in that I can go from one windows desktop to another while using the same application from a usb drive. So here is a different rundown of what I mean:
I have application X. I use it on machine 1 and I want to use it on machine 2. However, machine 2 is my buddy's, and he does not want me installing things on it. So, I take all the files that the installer put on my system and put them into folders. App X put files in the Windows folder that it expects to find when it is launched. If I merely run the app, it will look in the Windows directory and not find the files, and I do not have (or want) the ability to put files in the Windows directory. I want to tell the app to look in folder A for the files it would normally look for in folder B. I could then use this program on any machine without having to modify the machine in any way.
Is this doable? If so what is it called so I can look it up?
EDIT: the Windows directory was just an example. I would like the app to be self-contained in a folder on the thumb drive, and to redirect where the app looks for files to a folder I specify.
This can be done, but how easily depends entirely on the program that you are launching.
The sorts of things that applications will do are:
Just run happily being executed anywhere (no dependencies). These are very easy!
Require some environment variables to be set up. This is easy to do - you can launch a new process with a modified environment if you wish.
Read files from disk. When loading things like DLLs, Windows usually searches the application folder (next to the .exe) and then the PATH, so the DLLs can be copied into the application folder and the app will run happily on any system. However, some applications use fixed (or at least less flexible) paths, making them harder to launch successfully.
Read registry settings. This is trickier. You need to know what state is required by the application, have your launcher record the old registry state, change it and run the application, then wait for application exit to restore the original state. This has to be bullet-proof to avoid corruption of the user's registry.
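The environment-variable case in the list above can be sketched like this (Python for illustration; APP_HOME and the path are made-up names standing in for whatever the target app actually reads):

```python
import os
import subprocess
import sys

def launch_with_env(cmd, extra_env):
    """Launch cmd with the current environment plus extra_env overrides."""
    env = os.environ.copy()
    env.update(extra_env)
    return subprocess.run(cmd, env=env, capture_output=True, text=True)

# The child process sees APP_HOME pointing at the thumb-drive folder.
result = launch_with_env(
    [sys.executable, "-c", "import os; print(os.environ['APP_HOME'])"],
    {"APP_HOME": "E:/portable/appx"},  # illustrative path
)
print(result.stdout.strip())  # → E:/portable/appx
```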
Ultimately you'll need to investigate, for each app you want to launch, just what it needs to run.
If the apps are commercial, then be careful that you are not breaking any licensing (EULA) terms by running them in this way.
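The registry dance described above boils down to a save/modify/restore pattern that must survive crashes. Here is a generic sketch using a dict as a stand-in for the registry (on Windows you would make the same moves with the winreg module):

```python
import contextlib

@contextlib.contextmanager
def scoped_setting(store, key, new_value):
    """Save store[key], set new_value, and restore the original on exit,
    even if the launched application fails (hence the try/finally)."""
    missing = object()
    old = store.get(key, missing)
    store[key] = new_value
    try:
        yield
    finally:
        if old is missing:
            del store[key]
        else:
            store[key] = old

# Demo with a dict standing in for the registry.
settings = {"InstallDir": "C:/Program Files/AppX"}
with scoped_setting(settings, "InstallDir", "E:/portable/appx"):
    assert settings["InstallDir"] == "E:/portable/appx"
print(settings["InstallDir"])  # → C:/Program Files/AppX
```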
Another alternative would be to set up a virtual PC image and simply execute that on the host PC so there is no need to worry about any special cases for each application. Depending on the VPC software you have available you may need to install software on the host PC to allow a virtual PC session to be run though, which may defeat the purpose/intent.
I think the system you describe is U3 (more info at http://en.wikipedia.org/wiki/U3). It requires the application to follow the U3 protocol, but if the application does, then it can be run off of a U3 flash drive without any install or admin permissions required on the host machine.
It's a proprietary technology, and supported by only a few vendors that I've seen.
If you really want portability and power, consider VMware Player, and carry an entire machine, customized to your needs, on the flash drive. Of course, your friend would probably have to allow you to install VMware Player.