Is it possible, by any stable method, to enable ReadyBoost on Windows Server 2008? [closed] - windows-server-2008

I know the standard answer is no. However, hear out the reasons for wanting it, and then we'll consider whether it is possible to achieve the same effect as ReadyBoost, either by enabling (and installing) ReadyBoost itself or by using third-party software.
Reasons for using Windows Server 2008 as a development environment on a laptop:
64-bit, so you get full use of 4 GB of RAM.
SharePoint developer, so you can run SharePoint locally and debug successfully.
Hyper-V, so you get hardware virtualisation of test environments and the ability to demo full solutions stored in Hyper-V on the road.
So all of that equals: Windows Server 2008 (x64) on a laptop.
Now, because we are running Hyper-V, we require a large volume of disk space. This means we are using a 5,000 rpm, 250 GB HDD.
So: we are on a laptop, we cannot use a solid-state drive, and we have only 4 GB of RAM and the throughput of a laptop motherboard rather than a server one. All of which means we are not flying; this thing isn't a sluggard, but it's not zippy either.
Windows Server 2008 is based on the same code base as Vista. Vista features ReadyBoost, which enables USB 2.0 flash devices to be used as a weak cache for system files, visibly improving Vista's performance. As the code bases are similar, it should be possible for ReadyBoost to work on WS2008; however, Microsoft has not shipped or enabled ReadyBoost in WS2008.
Given that we are running WS2008 on a laptop as a development environment, how can we achieve the performance gains of ReadyBoost through the use of flash devices in Windows Server 2008?
For the answer to be accepted, it must outline an end-to-end process for achieving the performance gain.
Answers of 'no' will not be accepted, as I understand some third-party tools achieve some of the functionality, but I haven't seen a full end-to-end description of how to get going with them.

With virtual machines, the answer to "do you really need so much memory" is a resounding YES. Trying to run 4-6 virtual machines, each configured with 512 MB or more, really stresses out the system.
The ability to use ANYTHING as additional virtual memory is key.

Is everything that's installed 64-bit?
Do you have hardware virtualization capabilities, and is it turned on in the BIOS?
Have you enabled SuperFetch? (See the sketch after this list.)
Turn off Desktop Experience.
And last but not least, have a look at this article and see if it gives you any pointers.
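For the SuperFetch item above: WS2008 reportedly ships with the service present but disabled. Here is a minimal sketch, assuming the commonly cited approach (the SysMain service plus the PrefetchParameters registry values) applies to your install. Run it from an elevated prompt; Python is used for brevity, but the three commands work just as well typed by hand.

import subprocess

# Registry key that controls the prefetcher/SuperFetch (documented for
# Vista; assumed to behave the same on WS2008 given the shared code base).
PREFETCH_KEY = (r"HKLM\SYSTEM\CurrentControlSet\Control"
                r"\Session Manager\Memory Management\PrefetchParameters")

def run(cmd):
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 3 = cache both boot files and applications (the Vista default).
run(["reg", "add", PREFETCH_KEY, "/v", "EnableSuperfetch",
     "/t", "REG_DWORD", "/d", "3", "/f"])
# Set the SuperFetch service (service name SysMain) to start automatically.
run(["sc", "config", "SysMain", "start=", "auto"])
run(["net", "start", "SysMain"])

If sc reports that the service does not exist, installing the Desktop Experience feature first is the usual suggestion.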
To add: it doesn't look like there is a reasonable way of using ReadyBoost on WS2008.

OK, so this isn't quite ReadyBoost, but the end result should be quite similar. Here is a video on YouTube you can follow for how to do this on Vista; WS2008 should be no different.
http://www.youtube.com/watch?v=A0bNFvCgQ9w
Also, you may want to upgrade the hard drive on your laptop:
Recommended: the ST9500420ASG, 500 GB, 7200 rpm, 16 MB cache, SATA, with G-Shock sensor.
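Before dedicating a flash stick as a cache, it is worth checking it against the bar Microsoft set for ReadyBoost devices (roughly 2.5 MB/s of 4 KB random reads and 1.75 MB/s of 512 KB random writes). A minimal sketch of the read half, assuming the stick is mounted as E:\ (a placeholder path):

import os, random, time

PATH = r"E:\readyboost_probe.bin"   # placeholder: a path on the flash drive
SIZE = 64 * 1024 * 1024             # 64 MB scratch file
BLOCK = 4096                        # 4 KB reads, as ReadyBoost uses
READS = 2000

with open(PATH, "wb") as f:         # create the scratch file
    f.write(os.urandom(SIZE))

# Note: the OS may serve some reads from its own cache, so treat the
# result as optimistic.
fd = os.open(PATH, os.O_RDONLY | getattr(os, "O_BINARY", 0))
start = time.time()
for _ in range(READS):
    os.lseek(fd, random.randrange(0, SIZE - BLOCK), os.SEEK_SET)
    os.read(fd, BLOCK)
elapsed = time.time() - start
os.close(fd)
os.remove(PATH)

rate = READS * BLOCK / elapsed / (1024 * 1024)
print("4 KB random reads: %.2f MB/s" % rate)
print("meets the ~2.5 MB/s ReadyBoost bar:", rate >= 2.5)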

Related

Reducing the impact on disk space when loading new software on a dev machine.

TL;DR
Noob wants to set up a dev machine/workspace on old hardware running Windows 10 and load up 5+ programs, each with a file size and disk impact similar to Visual Studio. He wants to reduce the impact these programs have on his already resource-scarce laptop. Buying new hardware is the last resort; what is a viable workaround?
I have a laptop that I use for school, and I am looking into using it as a development workspace (Visual Studio, SSMS, .NET, JetBrains, GitHub Desktop, Infragistics Studio, and the works). However, I also don't want these programs to slow down my regular student workflow (Word, Excel, browser) and take up resources. Additionally, some of the development programs I intend to only test-drive during their trial period, so I don't want them to stick around in my file system. A lot of what these programs do overlaps, so eventually I will be removing some of the programs that are not a good fit for what I am doing (training for web development).
My area of concern is that memory usage per Task Manager floats around 50% and disk hits 99% on a regular basis. My goal is to reduce the impact of loading even more software onto my computer. It currently has the basic office programs for school, but I think the cause of it being bloated is that it is a 4-year-old computer (Lenovo IdeaPad Z370: Intel Core i5-2410M dual-core, 4 GB DDR3-1333 RAM, 500 GB 5400 rpm), which may not be the most optimal hardware for running Windows 10.
To address this problem, could I just load my development programs onto an external hard drive and then connect it to the laptop only when I am in "developer workflow"?
I've done some initial looking into this, and that solution is said to be non-viable because programs vary in portability. If this is the case, could you propose alternatives, such as loading the programs into a VM and connecting to it when I need them? What are other possible solutions to my resource problem?
I have a Dropbox account, a OneDrive account, and a $25 Azure credit provided by the school at my disposal. The solution should be cost-effective. The goal is to squeeze the last ounce of value out of the current hardware before upgrading.
Thanks in Advance! #noob
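One way to confirm which processes are behind that 99% disk figure before moving anything is the third-party psutil library (pip install psutil); a minimal sketch:

import psutil

print("memory: %d%% used" % psutil.virtual_memory().percent)

# Rank processes by cumulative bytes read + written since they started.
totals = []
for p in psutil.process_iter(["name", "io_counters"]):
    io = p.info["io_counters"]          # None where access is denied
    if io:
        totals.append((io.read_bytes + io.write_bytes,
                       p.info["name"] or "?"))

for nbytes, name in sorted(totals, reverse=True)[:10]:
    print("%8d MB  %s" % (nbytes // (1024 * 1024), name))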
Hello all, I found what I was looking for!
Azure has a "developer ready" image. The VM comes with Visual Studio and other helpful tools preloaded. However, you need an MSDN subscription and a Windows 10 Professional product key. I had neither, so I went with another option: a VM with SQL Server preloaded. From there I was able to load up all the demoware and tools, as well as SSMS. I can now access my tools through RDP from work, home, school, or any other MS machine. Best of all, I now don't need to buy new hardware, and the pay-per-minute use keeps the price within my allotted Azure credits.
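Once such a VM exists, opening the RDP session can be scripted around the built-in mstsc client; a minimal sketch, with a placeholder host name rather than any real VM's address:

import subprocess

HOST = "my-dev-vm.eastus.cloudapp.azure.com"  # placeholder DNS name

# /v: tells the stock Remote Desktop client which machine to connect to.
subprocess.run(["mstsc", "/v:%s" % HOST])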
TL;DR
Free VM to tap into my dev space and develop from anywhere

Dualcore vs Quadcore for Development [closed]

Recently I have been thinking about upgrading my current development laptop to a newer machine. I have been working under a Linux/Windows dual boot all along, doing development work on both.
My current development platform includes Java (Eclipse), Ruby/RoR (Gvim/Atom), Blender (learning), Erlang, ANSI C (VS/gcc), Android Studio for Android development, and VirtualBox running Windows for the Microsoft Office suite, C# development, and MSSQL development. Sometimes I also need to debug under VirtualBox Windows by running Eclipse. Natively on Linux, I use MySQL/PostgreSQL for development and testing. I'm interested in exploring 3D and game programming as well.
Occasionally I play some 3D games on Windows, such as Modern Warfare 4, BF4, etc.
Now, for the new year, I am thinking of upgrading to a MacBook Pro, but I'm undecided between dual-core and quad-core. Is there any benefit to using a quad-core for development purposes?
I googled and found the links below, but they date from 2007.
http://blog.codinghorror.com/choosing-dual-or-quad-core/
http://blog.codinghorror.com/quad-core-desktops-and-diminishing-returns/
I understand that utilizing multiple cores is mostly the responsibility of the software or the OS, and that software is the easier layer to update to take advantage of that extra firepower.
So, as of 2015/2016, does it still matter for a development machine to have a quad-core CPU? I've already settled on 16 GB of RAM, but not on the CPU choice.
If you are looking for an upgrade, I would recommend you first look for a laptop that comes with an SSD, because disk I/O is the typical performance bottleneck.
As for whether you should go for dual or quad cores... I personally think it doesn't matter much, because not every piece of software is written to fully utilize all CPU cores. It really depends on how the software is implemented. For instance, if it is a multi-threaded or multi-process program, then you will benefit heavily; otherwise you probably won't see much of a difference. But the speed of each CPU core will definitely make a difference, though.
I see you are also into games programming and some serious FPS gaming like BF4, so you will definitely want a powerful quad-core chip and also an excellent graphics card. Otherwise, if it is just for pure web/server development (not games), a good dual-core should do it.
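To see the "only parallel software benefits" point concretely, here is a minimal Python sketch that times the same CPU-bound work with one worker and then with one worker per core; the speedup appears only because the work is split across processes:

import time
from multiprocessing import Pool, cpu_count

def burn(n):
    # Pure-Python CPU-bound loop standing in for a compile job.
    s = 0
    for i in range(n):
        s += i * i
    return s

if __name__ == "__main__":
    jobs = [2_000_000] * cpu_count()        # one chunk of work per core
    for workers in (1, cpu_count()):
        start = time.perf_counter()
        with Pool(workers) as pool:
            pool.map(burn, jobs)
        print("%d worker(s): %.2f s" % (workers, time.perf_counter() - start))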

Do virtual machines need swap partitions? [closed]

I am running Ubuntu on my physical machine, and VirtualBox to run various virtual ones on top.
Most of the time I was doing "straightforward" installations, but today I wanted to be "smart" and checked out the partitions that the Fedora or Ubuntu installers create on my virtual disks.
And sure enough, when going with the defaults, some GBs of my virtual disks will be used as "swap space".
Now I am wondering: assuming that I have plenty of physical memory (so I can assign 6 or 8 GB to a VM), is there any sense in dedicated swap space for a virtual machine?
This is answered at ServerFault:
TL;DR: use swap: 1. it avoids out-of-memory errors; 2. the guest OS is better at memory management.
Ignoring the fact that people are dealing with OS-specific reasons, I have two reasons why it's a bad idea not to run with a swap partition/file.
If you have 1.5 GB of RAM allocated to a VM with no swap file/partition and it wants to use 1.5 GB + 1 MB, it will report an out-of-memory error. With the swap space, it will be able to swap data out of active memory and onto the disk.
The guest OS does a much better job of memory management than the host. This is why technology like memory ballooning exists: the host can make educated guesses about what memory isn't needed right now, but the guest knows at a much more intelligent level (this keeps OS memory from being swapped out, which could kill your performance).
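To check from inside a guest whether it is actually dipping into swap, a minimal sketch using the third-party psutil library (pip install psutil):

import psutil

swap = psutil.swap_memory()
print("swap total: %d MB" % (swap.total // 2**20))
print("swap used : %d MB (%.1f%%)" % (swap.used // 2**20, swap.percent))
# sin/sout: bytes swapped in from disk / out to disk since boot; nonzero
# sout under load is the sign that the guest really needed the swap.
print("swapped in/out since boot: %d / %d MB"
      % (swap.sin // 2**20, swap.sout // 2**20))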
Swap partitions are used to relieve your physical memory when it runs out of space. On modern machines with plenty of memory, it depends on the type of applications you will be running. If you are planning to run memory-intensive programs such as video editors, high-end games, or something of that sort, virtual memory or swap space is an asset. If that is not the case, then you are safe to avoid swap space, provided you have enough memory. But it is safer to have a fallback.
That depends on what programs you are running on your host system along with the virtual machine, or what programs you are running within the virtual machine.
The only upper bound on memory that software can consume is the total memory (physical or virtual) available to it. There are plenty of programs that require large amounts of memory when behaving normally, and plenty of circumstances that cause a program to consume large amounts of memory (e.g. loading of input files). There are also plenty of faulty programs that unintentionally consume large amounts of memory.
You can often get an idea by examining requirements or recommendations (e.g. memory and drive space) of the programs you run. Failing that, try it out.

Why would you need to know about each processor in particular? [closed]

I'm curious to understand what could be the motivation behind the fine-grained detail of each virtual processor that the Windows 8 task manager seems to be focusing on.
Here's a screenshot (from here):
I know this setup could only exist in a non-standard, costly, important server environment (1TB RAM!), but what is the use of a heatmap? Or, setting processor affinity:
What I'm asking is: under what circumstances would a developer care that specific processor X is being used more than processor Y (instead of just knowing that a single non-multithreaded process is maxing out a core, which would be better shown as a process heat map than a processor heat map), or care whether a process will use this or that processor (which I can't expect a human to guess better than an auto-balancing algorithm)?
In most cases, it doesn't matter, and the heatmap does nothing more than look cool.
Big servers, though, are different. Some systems have a NUMA (Non-Uniform Memory Access) architecture. In these cases, some processor cores are able to access some chunks of memory faster than other cores, and adjusting the process affinity to keep the process on the cores with faster memory access might prove useful. Also, if a processor has per-core caches (as many do), there might be a performance cost if a thread were to jump from one core to another. The Windows scheduler should do a good job avoiding switches like these, but I could imagine that in some strange workloads you might need to force it.
These settings could also be useful if you want to limit the number of cores an application is using (say to keep some other cores free for another dedicated task.) It might also be useful if you're running a stress test and you are trying to determine if you have a bad CPU core. It also could work around BIOS/firmware bugs such as the bugs related to high-performance timers that plagued many multi-core CPUs from a few years back.
I can't give you a good use case for this heat map (except that it looks super awesome), but I can tell you a sad story about how we used CPU affinity to fix something.
We were automating some older version of MS Office to do some batch processing of Word documents, and Word was occasionally crashing. After a while of troubleshooting and desperation, we tried setting the Word process's affinity to just one CPU to reduce concurrency and hence reduce the likelihood of race conditions. It worked: Word stopped crashing.
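That fix can also be scripted; a minimal sketch using the third-party psutil library (pip install psutil), with the process name and core number as illustrative placeholders:

import psutil

TARGET = "winword.exe"   # illustrative process name

for proc in psutil.process_iter(["name"]):
    if (proc.info["name"] or "").lower() == TARGET:
        try:
            proc.cpu_affinity([0])   # restrict the process to CPU 0 only
            print("pinned PID %d to CPU 0" % proc.pid)
        except psutil.AccessDenied:
            print("no permission to pin PID %d" % proc.pid)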
One possible scenario would be a server that is running multiple VMs where each client is paying to have access to their VM.
The administrator may set the processor affinities so that each VM has guaranteed access to X number of cores (and would charge the client appropriately).
Now, suppose that the administrator notices that the cores assigned to ABC Company Inc.'s VMs are registering highly on the heatmap. This would be a perfect opportunity to upsell ABC Company Inc and get them to pay for more cores.
Both the administrator and ABC Company Inc win - the administrator makes more money, and ABC Company Inc experience better performance.
In this way, the heat map can function as a decision-support system that helps ABC Company Inc. decide whether their needs merit more cores, and helps the administrator better target the customers that can benefit.

Is it feasible to virtualize developer machines? [closed]

It's budgeting time, and Corporate is balking at the cost of replacing the machine of a coworker who is due for it, needs it, and deserves it.
Our group is a small ISV/SAAS that exists as a division of a larger media group. We are not a cost center, we make money, even this year. We are owned by a mid-size media group whose business model is quite different, and seems driven only by reducing costs.
Our software stack is Visual Studio 2008, SQL 2008, on Windows Server 2008 (so that multiple root websites can be hosted and debugged on each dev's machine). Our target hardware is 3GHz quad-core workstation, 4GB RAM, and RAID 1 mirrored hard drives so that we are protected against the productivity loss of losing a developer hard drive.
Corporate wants to give us a couple powerful, but hand-me-down, decommissioned servers, and then each developer would have a virtual workstation on that server. The computers sitting on our desktops would be dumb terminals at $400-500 each.
I'm trying to be neutral but I doubt it's hard to discern my bias. I'd like to see real developer reactions to this, and I figure this is the best place to get that.
Please include arguments for or against, evidence if you've seen this tried and how well (or not) it has gone.
This sounds like a well intentioned idea, but:
In my experience you need multiple cores, lots of memory, and fast disks to be productive in today's modern IDE's. I don't see that happening in a virtual environment with any economy. Individual boxes are still better.
It's also an issue of control. In a virtual environment I can imagine all kinds of restrictions. Will you still be able to install your own tools, for example?
Ultimately, it's misguided. If this idea increases build times by any substantial amount, any savings in hardware will quickly be erased by lost productivity. Conversely, money that is spent on decent individual machines for developers will quickly pay for itself over and over in reduced build times.
Good quality individual machines are an investment, not a cost.
Development is disk-bound, i.e., you spend your time waiting for builds, which are a disk-bound process most of the time. If you're all sharing a machine, build times will become much worse.
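A crude way to put numbers on that claim is to compare raw disk throughput on a real workstation against a proposed shared VM; a minimal sketch (the file name is arbitrary, and fsync keeps the OS cache from flattering the result):

import os, time

PATH = "io_probe.bin"               # arbitrary scratch file name
CHUNK = b"x" * (4 * 1024 * 1024)    # 4 MB per write
CHUNKS = 64                         # 256 MB in total

start = time.perf_counter()
with open(PATH, "wb") as f:
    for _ in range(CHUNKS):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())            # force the data onto the disk
elapsed = time.perf_counter() - start
os.remove(PATH)

print("sequential write: %.1f MB/s"
      % (CHUNKS * len(CHUNK) / (1024 * 1024) / elapsed))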
Aside from all of the givens (performance, disk space, etc.):
I would be OK with this as long as I still had multiple monitor support.
Without that, it is a no-go.
Basic failure to understand what a developer box is actually doing much of the time:
When building, it's chewing through processor and disk - especially disk.
When testing, you're talking about having one or more instances of Visual Studio running (once you get past two, things start to get interesting), a database server, websites/services, plus all the other stuff (browsers with a lot of tabs open, notebook software, and heaven only knows what else), all spread across multiple monitors (at least two). Lots of cores and lots of memory, please!
I can quite happily accept that there's an argument for virtualisation - a good dev box should be able to host multiple, concurrent VMs in order to isolate some of the above and to provide "clean" environments for testing. Note that that's the box for ONE developer hosting multiple VMs solely for the benefit of that one developer...
Our team has been developing on a remote server (no GUI stuff, plain old vim) for quite some time without problems. Granted, it requires a rather powerful server, and sometimes it starts to be a bit on the slow side if everyone starts to compile at the same time.
But as a bonus, you are very mobile in terms of where you can develop from (we all have laptops), be it the office, home, or a sunny beach (that last one was probably an overstatement).
But yeah, that might not work well for graphics-heavy apps, of course.
It sounds like your group is not offering the solutions that you have considered in a well documented format, otherwise corporate would not be shoving decisions down your throat. If you have a documented process for development, corporate might want to discuss changing the process with you, but as soon as you say, "this change would break our process and we would have to retool our development workflow", they will see the pain of the $$ in reworking the process and most likely back off. That said, once your process is documented, you should internally be ruthless about trying to make it more efficient and cost effective, and have an open mind about corporate's suggestions.
I assume you have machines already for SVN / TRAC, your Continuous Integration server, product demos, testing, etc. and that the only possible use your team could make of these servers is for personal VMs.
I do many things that peg my processor at 100%. Compiles certainly achieve this. Now imagine having to share that processor with 10 other developers. The loss in productivity will become quite apparent. If you have a multi-core PC, this won't be as painful. Get an Intel i7 and you probably won't even notice it when 8 people are logged in. Most programs (including my compiler) can't use more than 1 processor anyway.
That said, it's a viable solution to reduce costs. I used to work at a company that has since switched to these dumb terminals, and it works fine. My university had HP UNIX machines that were dumb terminals; they logged into a server that split processor ownership among however many people were logged in. What people would do is log into a server and check the number of people logged in; if there were too many, they'd search for the next one, because build times were noticeably slower. I'd never log into the easy-to-remember server names. =)
It definitely works, but also reduces productivity due to longer build times, especially when multiple people are building at the same time. Since productivity is such a difficult thing to quantify, it might be hard to argue your point.
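The slowdown is easy to demonstrate; a minimal sketch that simulates N developers "building" at once by running more CPU-bound workers than cores and timing each one:

import time
from multiprocessing import Pool, cpu_count

def fake_build(n):
    # Each "build" is a CPU-bound loop; returns its own wall-clock time.
    start = time.perf_counter()
    s = 0
    for i in range(n):
        s += i * i
    return time.perf_counter() - start

if __name__ == "__main__":
    for users in (1, cpu_count() * 2):      # alone vs. 2x oversubscribed
        with Pool(users) as pool:
            times = pool.map(fake_build, [3_000_000] * users)
        print("%2d concurrent build(s): avg %.2f s each"
              % (users, sum(times) / len(times)))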
Graphics acceleration might also be an issue if you need to do anything with animation, video, or image editing. You can't really test video playback through an RDP session since the framerate and/or color depth isn't high enough.
Regardless of performance, at my company we are moving to laptops as developer machines. The main advantage is that developers can bring their computers to meetings, conferences, etc. Also being able to sit next to a colleague when you're helping him with a problem, and having your own development environment available, is very valuable.