I need to use FreeRDP to connect from overseas to a Windows desktop located in Canada (I live in Belgium). I also need to connect through an OpenVPN connection before using RDP.
The issue I'm having is lag when moving around the remote desktop. As a developer, this makes my work quite frustrating.
Here's a video link demo'ing this lag:
https://www.youtube.com/watch?v=cpO26YUF2qg
This is the command I use to connect:

    xfreerdp /u:username /v:192.168.x.x /bpp:8 -grab-keyboard -wallpaper
I tried various /bpp settings, but they don't seem to change much.
Is there a way I could tweak the RDP connection even more? I only need this to code in IDEs, manage databases, etc. I do not need super high-quality graphics or anything (I won't be playing videos from here ;)). I think 256 colors should be fine, but I'm wondering if there are other compression settings I could use?
While I do everything I can on the RDP side, my LAN admin is also looking into tweaking the VPN connection...
Thanks a million for your time.
Pat
You may try FreeRDP's H.264 mode (version 2.0 recommended/required). In general, H.264 uses less data to transfer the image, but it highly depends on what you are doing, YMMV. It is worth trying; the option is:

    /gfx-h264[:[[AVC420|AVC444],mask:<value>]]

/gfx-h264:AVC444 works best for me. There are also several additional /gfx-* configuration options you may test with.
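For example, applied to the command from your question (username and address left as placeholders), dropping /bpp since the gfx pipeline manages the encoding itself:

    xfreerdp /u:username /v:192.168.x.x /gfx-h264:AVC444 -grab-keyboard -wallpaper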
First of all, I don't mean version control such as git.
I do use git locally, but I'm trying to determine the best way to back up source code (as well as other app assets) in case of hardware failure or the like.
I was thinking I could set up a script to tar my project folders and encrypt them with gpg. I would then save the encrypted tar to external hard drives and to one or more off-site locations using a service such as Amazon Drive or Dropbox.
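Roughly, the sketch I have in mind looks like this (all paths and the gpg recipient are placeholders):

    #!/bin/sh
    # Tar the projects folder, encrypt it with gpg, write it to the backup drive.
    # $HOME/projects, /mnt/backup and the recipient key are placeholders.
    set -e
    STAMP=$(date +%Y-%m-%d)
    tar -czf - "$HOME/projects" \
        | gpg --encrypt --recipient you@example.com \
              --output "/mnt/backup/projects-$STAMP.tar.gz.gpg"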
Currently I'm a sole developer, so my thinking was that this method should be okay. But I wanted to get some input to make sure I'm doing this in the most reliable way possible.
If there is a better approach that would also suit small teams, please let me know; I'm more than happy to do the extra work to implement it.
There are many ways of doing that.
But if you always work locally and need a simple solution, you could run a script whenever a specific USB device is plugged in.
That means a simple tar backup script would run whenever you plug in your backup HDD.
Take a look at udev rules in Linux.
udev is a generic device manager running as a daemon on a Linux system and listening (via a netlink socket) to uevents the kernel sends out if a new device is initialized or a device is removed from the system. The udev package comes with an extensive set of rules that match against exported values of the event and properties of the discovered device. A matching rule will possibly name and create a device node and run configured programs to set up and configure the device.
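A minimal sketch of such a rule, assuming you match the drive by its filesystem UUID (the UUID and script path below are placeholders):

    # /etc/udev/rules.d/90-backup.rules -- UUID and script path are placeholders
    ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_UUID}=="1234-ABCD", RUN+="/usr/local/bin/backup.sh"

Note that udev expects RUN+= programs to return quickly and may kill long-running ones, so for a real tar job the rule should only kick off a detached job (a systemd unit, an at job, etc.) rather than run the backup inline.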
Take a look at these posts:
https://unix.stackexchange.com/questions/65891/how-to-execute-a-shellscript-when-i-plug-in-a-usb-device
&
https://askubuntu.com/questions/401390/running-a-script-on-connecting-usb-device
If you plan to go further, to extend the team or even just to keep your code around for a while (in other words, if you want to be professional), I would go with a scalable and reliable tool designed for this: use a real backup-and-restore tool, not scripts. A lot of people and small (even not-so-small) companies do it with scripts, and they end up in trouble: maintenance, scalability, updates, and so on.
There are plenty of backup & restore tools for different purposes, platforms, prices and so on. https://en.wikipedia.org/wiki/List_of_backup_software would be a good start :)
Cheers
Werlan
For a project I am working on, I want to run malware in a VirtualBox VM and collect data for 30 seconds, then revert the VM to its original state, and repeat this process 500 times for the 500 different malware links I have in a txt file. Before I revert the VM to its clean state, I want to collect data from a program that is monitoring the malware. What is the best way to do this?
Edit: I'd also like to point out that I already have code to read the opcodes being used by the application. All I would like to do is automate this process for VirtualBox.
I am not aware of such a feature in VirtualBox or VMware, but you can always use third-party tools to compare the state of different parts of the system (like the registry) before and after executing the malware.
I've heard Ashampoo Uninstaller is a great tool for the job, but I have personally never tested it.
Another option is to use sandboxes like Sandboxie or Cuckoo Sandbox to capture the changes.
Yet another option is to use online sandboxes like hybrid-analysis, which is perfect for what you want to do.
Just keep in mind that most malware uses anti-VM techniques to prevent execution in VMs, so you probably will not be able to capture all of its behavior.
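As for automating the revert/run/collect loop itself, a rough sketch with the VBoxManage CLI (the VM name "WinVM", the snapshot name "clean" and collect_data.sh are placeholders for your own setup):

    #!/bin/sh
    # For each malware link: restore the clean snapshot, boot, run for 30 s,
    # collect the monitor's output, then power off and repeat.
    while read -r url; do
        VBoxManage snapshot "WinVM" restore "clean"
        VBoxManage startvm "WinVM" --type headless
        # ... trigger the sample inside the guest here ...
        sleep 30
        ./collect_data.sh "$url"   # placeholder: pull data from your monitor
        VBoxManage controlvm "WinVM" poweroff
    done < malware_links.txt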
Hope it helps.
What I want to be able to do is run a VMware machine in Workstation, Windows XP or 7 for instance, without any of the changes I make while the machine is running (creating a file, installing something, etc.) being written to the system/image. It should act as if the image itself is sandboxed: when I shut down the machine, the image stays the same.
Now, I know about the snapshot functionality, but I basically want to save the time spent reverting an image on every power-down. Instead, the changes shouldn't be written to the image/system in the first place (they should go to something like memory or a temporary location), so there is no need to revert when the system is powered off.
Is this possible to achieve with just VMware Workstation itself? If not, is it possible with some third-party tool, and if so, which tool specifically? Or is it possible using any other concept, say RAM disks, or anything at all really?
Any help at all is really appreciated!
If I understand you correctly, defining the VM's disks as nonpersistent should help.
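A sketch of what that looks like in the VM's .vmx file, assuming the disk is attached as scsi0:0 (edit with the VM powered off, and use your actual controller:device entry):

    scsi0:0.mode = "independent-nonpersistent"

With an independent-nonpersistent disk, writes go to a redo log that is discarded at power-off, so the base image never changes and there is nothing to revert. The same option is also exposed in the Workstation GUI under the disk's advanced settings.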
Most use cases I've seen with xperf involve using xperfview on the same computer. Recording remotely and playing back locally doesn't seem to work well for me: symbols are not resolved correctly. Is there a known issue with remote recording and local playback with xperf/xperfview?
Why do you try a remote connection? If you use xperf -d to stop logging, the ETL contains all the metadata, so the symbols can be loaded from any PC you want. Copy it from PC A to PC B and view the ETL there.
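For example (the kernel flags and symbol path below are just typical choices, not specific to your setup):

    :: On PC A, from an elevated prompt: record, then stop and merge
    xperf -on PROC_THREAD+LOADER+PROFILE -stackwalk Profile
    :: ... reproduce the scenario ...
    xperf -d trace.etl

    :: On PC B: set a symbol path, then open the copied file
    set _NT_SYMBOL_PATH=srv*C:\symbols*https://msdl.microsoft.com/download/symbols
    xperfview trace.etl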
Now that the 8.1 version of WPT is out, the recommended way to record traces is not with xperf.exe but with wprui.exe. This makes trace recording much simpler and much less error prone. See this blog post for details:
http://randomascii.wordpress.com/2013/04/20/xperf-basics-recording-a-trace-the-easy-way/
And yes, you absolutely should be able to record traces on one machine and view them on another.
I wrote a few iPhone apps using Core Data for persistent storage. Everything is working great, but I would like to add the ability for users to back up their data to a PC (via WiFi to a PC app) or to a web server.
This is new to me and I can't seem to figure out where to begin researching the problem. I don't want to overcomplicate the issue if there is an easy way to implement this.
Is anyone familiar enough with what I am looking to do to point me in the right direction or give me a high level overview of what I should be considering?
The data is all text and could be stored perfectly well in .csv files, if that matters.
Unfortunately, I don't think there's a good all-purpose solution under the current SDK. Here are some ideas:
If you only want backup, you could just back up the whole sqlite file to the server or over wifi, but you then can't really use it with anything other than Core Data (and you might even run into trouble with iPhone-Mac compatibility, e.g. between 32-bit and 64-bit types).
A very robust solution would be to implement cloud storage with a REST API and sync the iPhone and desktop app to the server (this is what the Evernote app does, for instance), but that is obviously much more work.
You could also manually convert your data to a .csv and send that to the server or desktop, but parsing it could be problematic (and you'd have to worry about the data getting corrupted). If you did want to go that route, here is a tutorial.