I am running Linux on an ARM-based detailed CPU model in FS mode, and I take a checkpoint after the first boot. However, I frequently need to transfer files to the ARM Linux guest, so I am wondering whether there is any way to do so without relaunching the CPU model (e.g., transferring files directly to Linux through SFTP, or mounting the host file system)? Many thanks!
Currently, I just add the files to the Linux disk image and relaunch the CPU model from scratch, which takes more than 1.5 hours.
Here are the possibilities that I'm aware of:
Use 9P. A semi-outdated patch lives at http://gem5.org/WA-gem5, but it is easy to get working again.
9P is designed explicitly for mounting host directories on the guest, and is therefore the nicest solution.
See also https://github.com/cirosantilli2/gem5-issues/issues/24, which has a QEMU example.
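For reference, once the 9p transport is wired up, the guest-side mount is typically a one-liner. A minimal sketch, assuming a virtio transport and a mount tag of gem5 (both depend on how the patch configures the device):
$ mkdir -p /mnt/9p
$ mount -t 9p -o trans=virtio,version=9p2000.L gem5 /mnt/9p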
Use a second disk image, normally squashfs, which is quick and convenient to generate.
Unmount, make your changes to the image, remount. A bit annoying, but doable, and possibly the easiest to get working (see the sketch below).
A second disk is not currently exposed in fs.py; a patch is mentioned at: How to attach multiple disk images in a simulation with gem5 fs.py?
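A minimal sketch of the squashfs route, assuming the second image shows up in the guest as /dev/sdb (the device name depends on how the disk is attached):
# host: pack the files into a fresh squashfs image
$ mksquashfs ./files-for-guest second.img -noappend
# guest: mount it read-only
$ mkdir -p /mnt/second
$ mount -o ro /dev/sdb /mnt/second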
m5 readfile + zip.
OK, this is likely going to be slow, just mentioning it ;-)
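The readfile idea, sketched with the classic fs.py --script option (which is what populates the readfile; the exact option name may differ in your setup):
# host: point the simulation at the archive
$ ./build/ARM/gem5.opt configs/example/fs.py --script=files.zip
# guest: dump the readfile and unpack it
$ m5 readfile > /tmp/files.zip
$ unzip /tmp/files.zip -d /tmp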
Guest-to-host networking: as you mentioned, if that were possible, you could move files around over FTP or similar.
However, I don't think it is supported; see: How to do port forwarding from guest to host and vice versa in gem5?
It would also require messing with NFS setups on host/guest, which is always a bad thing.
With QEMU, as usual, it is possible.
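For contrast, in QEMU this is a single user-mode networking flag; the sketch below forwards host port 2222 to guest port 22 (the machine and device flags are illustrative and depend on your setup):
$ qemu-system-arm ... -netdev user,id=n0,hostfwd=tcp::2222-:22 -device virtio-net-device,netdev=n0
$ scp -P 2222 myfile user@localhost: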
The current situation is a mess. The main reason is that it is a bit hard to integrate 9P/multidisk nicely into fs.py, but I'm certain it is possible; we just need a brave soul.
Related thread about how to expand an existing disk image, if space is your concern: https://www.mail-archive.com/gem5-users@gem5.org/msg16494.html
Mailing list thread: https://www.mail-archive.com/gem5-users@gem5.org/msg16477.html
Related
I have installed the Dropbox Python client for Linux and I noticed the sync bandwidth is quite limited:
$ dropbox status
Syncing (252,088 files remaining, 18 days left)
Downloading 252,088 files (35.1 KB/sec, 18 days left)
Is there a way to make it faster?
Note: yes, I have a 100 Mbit/s Internet connection...
Firstly, check whether the 75% bandwidth cap is enabled, as mentioned here.
If it isn't, then it's probably your Internet connection: try switching to a different network source (from wireless to wired) or use a different Internet connection. I had the same issue before and it was solved by changing to a different Internet connection; yes, I have 100 Mbit/s too, but it didn't help.
Alternatively
If you already have another synced-up Dropbox and you're just trying to get the initial sync done, just copy the files over to the new install of Dropbox.
Also take a look at LAN Sync, a feature in Dropbox.
This honestly isn't an SO question, because it isn't really a programming question; a forum like Superuser.com might be better suited.
Edit: saw that you already have a Superuser account, my bad. :)
This question is related to "Upload Arduino code on virtual serial port through Arduino IDE". The main problem is being able to upload code to a virtual COM port instead of to an Arduino, so that I can take the binary code output and use it in some other application. The problem is that the upload process is tied to the bootloader on the Arduino, which is why the upload never reaches 100%. The suggested solution was either to implement a bootloader in my application or to use something that is already out there.
My question now is: can I make use of the different programmer modes in the Arduino IDE to bypass the bootloader, so that the upload process can reach 100% and the code actually reaches the virtual COM port?
Any help would be highly appreciated. Thanks!
Sounds like your virtual serial port driver is getting stuck on some timeouts or buffers. The IDE calls avrdude with a specific protocol to match what is built into the Arduino's bootloader, loaded on the AVR. There are other bootloaders (in fact many, too many to mention), some of which may have different timing and such, but using them would basically no longer be Arduino. To see the possibilities, run .\avrdude.exe -c ? (an invalid programmer ID makes avrdude list the valid ones).
If you are just trying to get a dump of what is going over the serial port, I have used Virtual Serial Ports Emulator. It is very versatile in that it's modular, allowing you to build up what you want.
Also, as mentioned in the other threads about this, note that the data over the serial port is formatted on top of the STK500 protocol. You also mention in the other thread that you don't want to use another tool to get the data. But in order to use another protocol you would need to change the source (compiler.java) and rebuild the project so that it calls avrdude with the new protocol, so you might as well just get the data with another tool. See below; this will tie back in.
You can get the raw binary from what is being fed to avrdude, though it may not be initially obvious: avrdude is handed an Intel HEX file, not a raw BIN. The Arduino IDE contains all of avr-gcc and its tools, and avr-objcopy, located in .\Arduino\hardware\tools\avr\bin, can convert the IDE's output that is funneled into avrdude into the binary you likely desire. No need for scoping the serial port stream.
To do this by hand, you need to locate the IDE's temp working directory by enabling the IDE's verbose compile output, and you will likely also want avr-objcopy on your PATH. Then simply call it as in the example below, substituting your sketch's filename in place of mine.
C:\Users\mflaga\AppData\Local\Temp\build6135656488044319492.tmp>avr-objcopy -I ihex FilePlayer.cpp.hex -O binary FilePlayer.cpp.bin
You could also replace avrdude.exe with a batch file that calls both avrdude and avr-objcopy to automate this, roughly as sketched below.
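An untested sketch of such a wrapper; the paths and the TEMP_BUILD variable are assumptions you would adapt to your install and to the IDE's temp build directory:
@echo off
REM wrapper.bat: run the real avrdude, then convert the IDE's hex to raw binary
set TOOLS=C:\Arduino\hardware\tools\avr\bin
"%TOOLS%\avrdude.exe" %*
"%TOOLS%\avr-objcopy.exe" -I ihex "%TEMP_BUILD%\FilePlayer.cpp.hex" -O binary "%TEMP_BUILD%\FilePlayer.cpp.bin"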
My current role requires me to set up environments which mimic the customer's and perform various checks to replicate and then analyze problems.
I often find working with Windows environments such as XP, Server 2003, and Server 2008 a bit painful without a handy Linux-style shell and command-line programming languages such as Perl.
Of course I can just install everything onto each new system and then start working, but that is a bit time-consuming and boring.
So I am wondering which is the better way of working around this?
I could certainly use QEMU to create a portable Linux image which doesn't require any interference with the host system, not even a reboot. The weakness of this is that I would have to figure out a way to transfer files between the hosting Windows and the embedded Linux. The good part is that I could use all the weapons in Linux's arsenal.
Or I could start looking for a properly portable programming language, such as Movable Python, some variant of Perl, or even Lua as an embedded language. Pros: familiarity with the language. Cons: having to use scripts to do everything.
My day-to-day activities involve, but are not limited to:
Checking text logs and/or XML.
Grepping important sections from logs for further analysis.
Some automated processes, like application server configuration, etc.
Automated functional testing and result comparison.
Some sysadmin jobs: networking diagnostics, checking processes and services, etc.
Any good ideas? Thanks a lot in advance!
While I am a die-hard Linux fan, in your case I would recommend looking at Cygwin, preferably on a USB drive or similar. It can live in a single directory and be started with a simple script, and you end up with (almost) all the Unix goodness while still being able to access all of the host platform's resources.
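For reference, the launcher can be a tiny batch file. A sketch, assuming the Cygwin tree sits next to the script (%~dp0 expands to the directory the batch file lives in, so this works from a USB drive too):
@echo off
REM start a Cygwin login shell from wherever this folder lives
chdir /d %~dp0cygwin\bin
bash --login -i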
There are the usual warts related to / vs \, and, even worse, the case-insensitive but case-preserving filenames with lots of spaces in them, but that's equally obnoxious on any other command line.
There is also MinGW, but I found its scope more limited. It works exceedingly well in a couple of selected target areas, but less so as a general-purpose, wide-ranging Unix-like environment.
I have had a Cygwin folder on all my Windows machines (and the ones I had to use/repair/maintain) for a very long time now.
Okay, so I want to make an application that launches other applications. However, the goal is to make the app "portable", in that I can go from one Windows desktop to another while using the same application from a USB drive. Here is a different rundown of what I mean:
I have application X. I use it on machine 1 and I want to use it on machine 2. However, machine 2 is my buddy's, and he does not want me installing things on it. So I take all the files that the installer made on my system and put them into folders. App X put files in the Windows folder that it expects to find when it is launched; if I merely run the app and it looks in the Windows dir, it will not find the files. I do not have (or want) the ability to put files in the Windows dir. I want to tell the app to look in folder A for the files it expects in folder B, instead of where it would normally look. I could then use this program on any machine without having to modify the machine in any way.
Is this doable? If so, what is it called, so I can look it up?
EDIT: the Windows dir was just an example. I would like the app to be self-contained in a folder on the thumb drive, and to redirect where the app looks for files to a folder I specify.
This can be done, but how easy it is depends entirely on the program you are launching.
The sorts of things that applications will do are:
Just run happily when executed anywhere (no dependencies). These are very easy!
Require some environment variables to be set up. This is easy: you can launch a new process with a modified environment if you wish (see the sketch after this list).
Read files from disk. Usually, when loading things like .dlls, applications will search the PATH, so the .dlls can be copied into the application folder (next to the .exe) and the app will run happily on any system. In some cases, however, applications use fixed (or at least less flexible) paths, which makes them harder to launch successfully.
Read registry settings. This is trickier: you need to know what state the application requires, have your launcher record the old registry state, change it, run the application, then wait for the application to exit and restore the original state. This has to be bullet-proof to avoid corrupting the user's registry.
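For the first three cases, the launcher is mostly environment bookkeeping. A sketch, assuming a bin/data folder layout on the drive; the variable names are examples, not anything App X actually reads:
@echo off
REM launch.bat: run App X from the thumb drive with an adjusted environment
REM %~dp0 expands to the directory this script lives in (the USB folder)
set APPX_DATA=%~dp0data
set PATH=%~dp0bin;%PATH%
start "" /wait "%~dp0bin\appx.exe"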
Ultimately you'll need to investigate, for each app you want to launch, just what it needs to run.
If the apps are commercial, then be careful that you are not breaking any licensing (EULA) terms by running them in this way.
Another alternative would be to set up a virtual PC image and simply execute that on the host PC, so there is no need to worry about special cases for each application. Depending on the VPC software you have available, you may need to install software on the host PC to allow a virtual PC session to run, though, which may defeat the purpose/intent.
I think the system you describe is U3 (more info at http://en.wikipedia.org/wiki/U3). It requires the application to follow the U3 protocol, but if it does, it can be run off a U3 flash drive without any install or admin permissions on the host machine.
It's a proprietary technology, and supported by only a few vendors that I've seen.
If you really want portability and power, consider VMware Player and carry an entire machine, customized to your needs, on the flash drive. Of course, your friend would probably have to allow you to install VMware Player.
I know that I can share files using Shared Folders in Virtual PC, but this method seems to have pretty poor performance. Is there another method to share files that provides better performance? (Besides using something other than Virtual PC)
The best way to do it is probably to set up a proper bridged network connection between the host machine and the VM.
Using VirtualBox, I had problems setting up shared folders (I tried setting them up, it wasn't working intuitively right away, and I got fed up with it). So I just FTP'ed to the host OS (which I already had set up, since I was on Linux) and transferred the file that way.
I would suggest timing the transfer of a reasonably sized file via shared folders, and then timing it again using FTP; if FTP is faster, that's your solution :-)
Sorry I can't give actual performance metrics on that!
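A rough way to measure it yourself, assuming curl is available in the guest, an FTP server runs on the host, and the shared folder is mounted at /mnt/shared (the IP, credentials, and paths are placeholders):
$ time cp bigfile /mnt/shared/
$ time curl -T bigfile ftp://user:pass@192.168.1.10/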