I am a grad student, and I am considering setting up my dream home workstation/art tool/entertainment device/all-purpose everything. I'm wondering whether what I want to do is possible (and practical), and if so, I'd like some suggestions and warnings from people who know more about virtualization and hypervisors than I do:
Aim: Set up a 2-4 headed computing station that is optimized for using different OSes for different tasks. I want to keep my work/play streams separated, and have control over the resources that each one is allowed. For example, one head would be Windows 10 for audiovisual work, media playing, and maybe some gaming. Another head would use Linux and be used mainly for data science (mostly R and Python), plus some hosting for purely local use (such as running an instance of the Galaxy bioinformatics server, which I only plan to access locally). Finally, I want a VM that is purely devoted to web browsing, probably a lightweight Linux distro.
I want each OS to have its own keyboard and monitor(s), but ideally I want to copy and paste between OSes. The idea is to just swivel my chair to move between operating systems, or even to have one person using each.
What I think I need:
A hypervisor with PCI, USB, and network controller pass-through.
Two video cards, one each for the Windows and Linux workstations (with the web-browsing VM using the CPU's integrated graphics). Obviously, a motherboard and CPU that support full virtualization, including VT-d/AMD-Vi for device passthrough.
A USB card with multiple separate controllers, so that I can use a different controller for each OS. Something similar for network interface cards.
Separate SSDs for each OS and its apps.
Some sort of storage pool (probably ZFS-based) to hold the bulk of my files, shared so I can access them from either guest. Ideally, I'd like it to be in a separate enclosure, but I don't trust eSATA cables (they seem to fail frequently) and I care about database access speed, so I'll probably put the drives inside the main case, even though that will make future migration more annoying.
Something like SPICE for KVM, so that I can copy and paste freely between OSes.
Is there anything I am overlooking?
What hypervisor or similar solution is best for what I want to do? I am leaning towards KVM, but am far from committed. I will consider paid solutions if there is a compelling reason to use them.
What are some pitfalls I should be wary of?
KVM will work well here; there are a lot of tutorials, and plenty of Intel-based configurations work like a charm.
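To give a flavor of how the passthrough piece works under KVM/libvirt, below is a minimal sketch in C against the libvirt API that persistently attaches the host PCI device at 01:00.0 (say, one of your GPUs) to a guest. The guest name "win10-av" and the PCI address are hypothetical placeholders; in practice you would more often drop the same <hostdev> element into the domain XML or use virsh, but the programmatic route shows exactly what's involved.

    /* Minimal libvirt sketch: pass the host PCI device at 01:00.0 (e.g. a GPU)
     * through to the guest "win10-av". The guest name and PCI address are
     * hypothetical placeholders. Build with: gcc passthru.c -lvirt */
    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpen("qemu:///system");
        if (conn == NULL) {
            fprintf(stderr, "could not connect to the hypervisor\n");
            return 1;
        }

        virDomainPtr dom = virDomainLookupByName(conn, "win10-av");
        if (dom == NULL) {
            fprintf(stderr, "no guest named win10-av\n");
            virConnectClose(conn);
            return 1;
        }

        /* The same <hostdev> element you would place in the domain XML;
         * managed='yes' lets libvirt detach/reattach the host driver. */
        const char *hostdev =
            "<hostdev mode='subsystem' type='pci' managed='yes'>"
            "  <source>"
            "    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>"
            "  </source>"
            "</hostdev>";

        /* VIR_DOMAIN_AFFECT_CONFIG persists the change across guest reboots. */
        if (virDomainAttachDeviceFlags(dom, hostdev, VIR_DOMAIN_AFFECT_CONFIG) < 0)
            fprintf(stderr, "attach failed (is VT-d/IOMMU enabled?)\n");

        virDomainFree(dom);
        virConnectClose(conn);
        return 0;
    }

The same <hostdev> pattern covers the per-head USB controllers and NICs you mentioned.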
ZFS by itself can't share your data between guests; you need an NFS or Samba share on the host machine (on Linux, ZFS can even manage the export for you via its sharenfs/sharesmb dataset properties).
For copy and paste between machines, Synergy is the software for you.
TL;DR
A noob wants to set up a dev machine/workspace on old hardware running Windows 10 and load up 5+ programs with file size and disk impact similar to Visual Studio, and wants to reduce the impact these programs have on his already resource-scarce laptop. Buying new hardware is the last resort; what is a viable workaround?
I have a laptop that I use for school, and I am looking into using it as a development workspace (Visual Studio, SSMS, .NET, JetBrains, GitHub Desktop, Infragistics Studio, and the works). However, I also don't want these programs to slow down my regular student workflow (Word, Excel, browser) and take up resources. Additionally, some of the development programs I intend to only test-drive during their trial period, so I don't want them to stick around in my file system. A lot of what these programs do overlaps, so eventually I will be removing the ones that are not a good fit for what I am doing (training for web development).
My area of concern is that memory usage per Task Manager floats around 50% and disk usage hits 99% on a regular basis. My goal is to reduce the impact of loading even more software onto this computer. It currently has only the basic office programs for school, but I think the cause of the bloat is that it is a 4-year-old computer (Lenovo IdeaPad Z370: Intel Core i5-2410M dual-core, 4GB DDR3-1333 RAM, 500GB 5400RPM drive), which may not be the most optimal hardware to run Windows 10 on.
To address this problem, could I just load my development programs onto an external hard drive and then connect it to the laptop only when I am in "developer workflow"?
I've done some initial looking into this, and that solution is said to be non-viable because programs vary in portability. If that's the case, could you propose alternatives, such as loading the programs into a VM and connecting to it when I need them? What other solutions to my resource problem are possible?
I have a Dropbox account, a OneDrive account, and a $25 Azure credit provided by the school at my disposal. The solution should be cost-effective; the goal is to squeeze the last ounce of value out of the current hardware before upgrading.
Thanks in Advance! #noob
Hello all, I found what I was looking for!
Azure Cloud has a "Developer Ready" image: a VM with Visual Studio and other helpful tools preloaded. However, you need an MSDN subscription and a Windows 10 Professional product key. I had neither, so I went with another option: a VM with SQL Server preloaded. From there I was able to load up all the demoware and tools, as well as SSMS. I can now access my tools through RDP from work, home, school, or any other Windows machine. Best of all, I now don't need to buy new hardware, and the pay-per-minute billing keeps the price within my allotted Azure credits.
TL;DR
Free VM to tap into my dev space and develop from anywhere
All,
If I were to develop a kiosk app using Windows Presentation Foundation, C#, and .NET, what hardware would I need? I plan on making it a standalone desktop app. It would contain images and about 1-2 minutes of video. What kind of CPU (Pentium or dual-core? what clock speed?), graphics card, and memory?
What if I made the kiosk a web app? What hardware requirements would I need?
Thanks,
Rohit
Not an exact answer, but I would go with the cheapest current PC configuration. Your software will probably run on a $300 budget PC.
If you need more proof and want to go with an older PC, test it on a relative's PC; I'm sure you could find someone with a P4. ;-) Otherwise, just buy one used (probably for $50).
ASIDE: Yes, this can be considered a subjective question, but I hope to draw conclusions from the statistics of the responses.
There is a broad spectrum of computing devices. They range in physical size, computational power, and electrical power consumption. I would like to know what embedded developers think is the determining factor(s) that makes a system "embedded." I have my own determination that I will withhold for a week so as not to influence the responses.
I would say "embedded" is any device on which the end user doesn't normally install custom software of their choice. So PCs, laptops and smartphones are out, while XM radios, robot controllers, alarm clocks, pacemakers, hearing aids, the doohickey in your engine that regulates fuel injection etc. are in.
You might just start with Wikipedia for a definition:
http://en.wikipedia.org/wiki/Embedded_system
"An embedded system is a computer system designed to perform one or a few dedicated functions, often with real-time computing constraints. It is embedded as part of a complete device often including hardware and mechanical parts. "
Coming up with a concrete set of rules for what an embedded system is is to a large degree pointless. It's a term that means different things to different people - maybe even different things to the same people at different times.
There are some things that are pretty much never considered an embedded system, for example a Windows desktop machine. However, there are companies that put their software on a Windows box - even a bog-standard PC (maybe a laptop) - set things up so their application loads automatically and hides the desktop, and sell that as a single-purpose machine that many people would call an embedded system (but many people wouldn't). Microsoft even sells a set of tools called Windows Embedded that helps enable these kinds of applications, though it's targeted more at OEMs who will customize the system at least somewhat rather than just putting it on a standard PC. Windows Embedded is used for ATMs and many other devices. I think that most people would consider an ATM an embedded system.
But go into a 7-11 with an ATM that has a keyboard (I honestly don't know what the keyboard is for), press the right shift key 5 times and you'll get a nice Windows "StickyKeys" messagebox (I wonder if there's an exploit there - I sure hope not). So there's a Windows system there, just hidden and with some functionality removed - maybe not as much as the manufacturer would like. If you could convince it to open up notepad.exe somehow does the ATM suddenly stop being an embedded system?
Many, many people consider something like the iPhone or the iTouch an embedded system, but they have nearly as much functionality as a desktop system in many ways.
I think most people's definition of an embedded system might be similar to Justice Potter Stewart's definition of hard-core pornography:
I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it...
I consider an embedded system one where the software is rarely developed directly on the target system. This definition includes sophisticated embedded systems like the iPhone, and excludes primitive desktop systems like the Commodore 64. Not having the development tools on the target means you have to add 'reprogram device' to the edit-compile-run cycle. Debugging is also made more complicated. This encompasses most of the embedded "feel."
Software implemented in a device not intended as a general purpose computing device is an "embedded system".
Typically the system is intended for a single purpose, and the software is static.
Often the system interacts with non-human environmental inputs (sensors) and mechanical actuators, or communicates with other non-human systems.
That's off the top of my head. Other views can be read at this embedded.com article
Main factors:
Installed in a fixed place somewhere (you can't carry the device itself around, only the thing it's built into)
They run a long time (often years) with little maintenance
They don't get patched often
They are small, use little power
Small or no display
+1 for a great question.
Like many things there is a spectrum.
At the "totally embedded" end you have devices designed for a single purpose. Alarm clocks, radios, cameras. You can't load new software and make it do something else. THere is no support for changing the hardware,
At the "totally non-embedded" end you have your classic PCs where everything, both HW and SW, can be replaced.
There's still a lot in between those extremes. Laptops and netbooks, for example, have minimally expandable HW, typically only memory and hard disk can be upgraded. But, the SW can be whatever you want.
My education was as a computer engineer, so my definition of embedded is hardware-oriented. I draw the line at the MMU (memory management unit). If a chip has an MMU, it usually has off-chip RAM and runs an OS. If a chip does NOT have an MMU, it usually has on-board RAM and runs an RTOS, microkernel, or custom executive (see the sketch below).
This means I usually dismiss anything running Linux, which is shortsighted. I admit my answer is biased towards where I tend to work: microcontroller firmware. So I am glad I asked this question and got a full spectrum of responses.
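To make the MMU line concrete, here is a minimal sketch of what the no-MMU side looks like: a "superloop" custom executive in C, with hardware reached through memory-mapped registers. The register addresses are invented for illustration; real ones come from the microcontroller's datasheet.

    /* Sketch of a "custom executive" on an MMU-less microcontroller: one
     * superloop, no OS, hardware reached via memory-mapped registers.
     * The addresses below are made up for illustration only. */
    #include <stdint.h>

    #define TIMER_TICK_FLAG (*(volatile uint32_t *)0x40001000u)
    #define LED_PORT        (*(volatile uint32_t *)0x40002000u)

    static void poll_sensors(void)   { /* read ADC channels, debounce inputs */ }
    static void update_outputs(void) { LED_PORT ^= 1u; /* toggle a status LED */ }

    int main(void)
    {
        for (;;) {                      /* the superloop never exits */
            if (TIMER_TICK_FLAG) {      /* hardware timer sets this each tick */
                TIMER_TICK_FLAG = 0u;   /* acknowledge the tick */
                poll_sensors();
                update_outputs();
            }
        }
    }

Everything sits at fixed physical addresses in on-chip flash and RAM, which is also why the edit-compile-run cycle grows the "reprogram device" step mentioned in an earlier answer.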
Quoting a paragraph I've written before:
An embedded system for our purposes is a computer system that has a specific and deterministic functionality \cite{LamieReal}. Typically, processors for embedded systems contain elements such as onboard RAM, special-purpose processing elements such as a digital signal processor, analog-to-digital and digital-to-analog converters. Since the processors have more flexibility than a straightforward CPU, a common term is microcontroller.
Our network team is thinking of setting up a virtual desktop environment (via a Windows 2008 virtual host) for each developer.
So we are going to have dumb terminals/laptops and should be using the virtual desktops for all of our work.
Ours is a Microsoft shop, and we work with all versions of the .NET Framework. Not having the development environments on the laptops makes the team uncomfortable.
Are there any potential problems with that kind of setup? Is there any reason to be worried about this setup?
Unless there's a very good development-oriented reason for doing this, I'd say don't.
Your developers are going to work best in an environment they want to work in. Unless your developers are the ones suggesting it and pushing for it, you shouldn't be instituting radical changes in their work environments without very good reasons.
I personally am not at all a fan of remote virtualized instances for development work, either. They're often slower, you have to deal with network issues and latency, you often don't have as much control as you would on your own machine. The list goes on and on, and little things add up to create major annoyances.
What happens when the network goes down? Are your devs just supposed to sit on their hands? Or maybe they could bring cards and play real solitaire...
Seriously, though: unless you have virtually 100% network uptime, and your devs never work off-site (say, from home), I'm on the "this is a Bad Idea" side.
One option is to get rid of your network team.
Seriously though, I have worked with this same type of setup through VMware, and it wasn't much fun. The only reason I did it was that my boss thought it might be worth a try, and since I was newly hired, I didn't object. However, after several months of programming this way, I told him that I preferred to have my development studio on my own machine, and he agreed.
First, the graphical interface isn't really crisp with a virtual workstation, since images are sent over the network rather than rendered by your own video card's driver. Constantly viewing this gave me a headache.
Secondly, any install of components or tools required the network administrator's help which meant I had to hurry up and wait.
Third, your own computer is going to process one application faster than the server is going to process many apps, and besides that, the server has to send the rendered image over the network. It doesn't sound like it slows you down, but it does. Again, hurry up and wait.
Fourth, this may be specific to VMware, but the virtual disk size was fixed at 4GB, which my network guy seemed to think was enough. It filled up rather quickly. To expand the drive, I had to wait for the network admin to run Partition Magic on it, which screwed it up, and I had to have him rebuild my installation.
There are several more reasons, but I would strongly encourage you to protest if you can. Your company is probably trying to implement this because it's a new fad and can be a way to save money. However, your wasted productivity needs to be counted as a cost, too.
Bad Idea. You're taking the most critical tool in your developers' arsenal and making it run much, much, much slower than it needs to, and introducing several critical dependencies along the way.
It's good if you ever have to develop on-site: you can move your dev environment to a laptop and hit the road.
I could see it being required for some highly confidential multi-client work: it gives you proof that you didn't leak any test data or debug files from one customer to another.
Downsides:
Few VMs support multiple monitors - without multiple monitors you can't be a productive developer.
Only VirtualBox 3 gets close to being able to develop for OpenGL/ActiveX on a VM.
In my experience, virtual environments are ideal as test environments (for testing deployments), not development environments. They are great as a blank slate / clean sheet for testing. I think the risk of alienating your developers is high if you pursue this route. Developers should have the best tools at their disposal, i.e. a high-spec laptop or desktop; this keeps morale and productivity high.
Going down this route also precludes any home working, which may or may not be an issue. Virtual environments are by their nature slower than dedicated environments, and you may also have issues with multiple-monitor setups on a VM.
If you go that route, make sure you bench the system aggressively before any serious commitment.
My experience of remote desktops is that it's ok for occasional use, but seldom sufficient for intensive computations and compilation typical of development work, especially at crunch time when everyone needs resources at the same time.
Not sure if this will affect you, but both VMware and Virtual PC are very slow when viewed via Remote Desktop. For some reason Radmin (http://www.radmin.com/) does a much better job.
I regularly work with remote development environments and it is OK (although it takes some time to get used to keeping track of which system you're working on at the moment ;) ) - but most of the time I'm alone on the system.