ADAM Load Balancing - windows-server-2008

I'm interested in load balancing an ADAM instance but don't even know where to begin. Documentation on the web is pretty scarce. Has anyone done this before, and more importantly, can anyone point me in the right direction for a how-to?

As far as I remember, you just need to install another ADAM instance. During the first part of the installation, you indicate that it's not a new directory but a second server for an existing directory (a replica).
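If you'd rather script this than click through the wizard, ADAM (AD LDS on Server 2008) also supports unattended installs via an answer file passed to adaminstall.exe. The key names below are from memory and should be checked against the sample answer files that ship with the role; all values are placeholders:

```ini
; replica.txt -- unattended ADAM/AD LDS replica install (key names approximate, values placeholders)
[ADAMInstall]
InstallType=Replica
InstanceName=MyDirectory
LocalLDAPPortToListenOn=50000
SourceServer=existing-adam-host.example.com
SourceLDAPPort=50000
```

Run it with something like `adaminstall.exe /answer:replica.txt` on the second server.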

Related

OpenStack Nova modification

I know the structure of OpenStack and the basic idea of how it works. Could someone explain how I would go about modifying the scheduler for Nova, though? I was thinking that I could download the code from GitHub and then change some code around. The only problem is that I cannot run anything because of the whole setup with the rest of the modules. Could someone give me a general high-level overview of how, or where, I could start?
First you need a devstack setup where you can run, test, and modify code. Here is a link that can help you get a devstack up:
https://wiki.openstack.org/wiki/NovaVMware/DeveloperGuide
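To make the first step concrete: devstack is driven by a `local.conf` file in the checkout. A minimal sketch (password values are placeholders; clone devstack with `git clone https://opendev.org/openstack/devstack`, drop this file into the checkout, then run `./stack.sh`):

```shell
# Sketch: write a minimal devstack local.conf (placeholder passwords -- change them)
mkdir -p devstack
cat > devstack/local.conf <<'EOF'
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
EOF
# Show what stack.sh will read
cat devstack/local.conf
```

Once `./stack.sh` completes, the Nova source lives under `/opt/stack/nova`, where you can edit the scheduler code and restart the service to test changes.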

How to share a dll in a network directory not the same as the application directory

I'll admit up front that I'm still learning good methods of deployment and that I don't have a size 12 brain to accomplish the following task. Now with that being said...
We have around eight EXEs on a network drive that reference a DLL in the same share/folder. We'd like to have a common network share in a different location that would contain this DLL and any future DLLs we create, so we'd have only one place to make changes (presuming the apps do not need to be recompiled). I've not found a satisfactory answer for why a DLL should not be shared on a network, so I'm wondering what the best practice would be for doing this. If this is something that is acceptable and routinely done, what steps are necessary to accomplish it? Thanks in advance for any help you can give.
If this shared DLL has any chance of being updated during a future release, what you have now, a local copy stored against each application, is far better.
Your stated objective is to be able to make one change to the DLL and have all the apps pick it up. I've heard the positive side of this case made before.
"We get to roll out improvements to all our apps"
Which, seen from a glass-half-empty perspective, is:
"We get to introduce common bugs to all our apps".
Even at eight projects, imagine what your launches will be like.
"Er, hello QA. Can you test this one new app? (plus the seven others that we've done nothing to but might be broken as a result)".
Libraries should only be shared if they are mature and unlikely to change. Sorry to be so up-front about it, but I've faced a zealot who absolutely believed as you do. It was only when our bottom-line (predictably) nose-dived that my concerns were listened to. Dragons ahead. Be warned!
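For completeness, if you do decide to go down this road anyway: the .NET mechanism for pointing an application at an assembly outside its own folder is a `codeBase` entry in each app's `.config` file. The assembly must be strong-named for this to work from a remote location, and older runtimes apply restricted trust to code loaded from network shares. Assembly name, token, and path below are placeholders:

```xml
<!-- MyApp.exe.config -- redirect an assembly load to a shared network copy (placeholder names) -->
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Common.Utilities"
                          publicKeyToken="0123456789abcdef"
                          culture="neutral" />
        <codeBase version="1.0.0.0"
                  href="file://server/shared-libs/Common.Utilities.dll" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```

Each of the eight apps would need its own config entry, which is part of why the answer above recommends local copies instead.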

Best Practices for Development Backups

I am an intermediate-level developer (I think). I almost always work alone.
I have always just saved my code to my hard drive and then published it to my server. I almost always overwrite old code. If I make a big mistake, I have a backup restored from my web host. Obviously this can be a pain and costs time.
I know there must be a better way. I guess I could just save copies each time I change a file, but that seems like it could get confusing too if I end up with 1000 different versions from every minor tweak.
What is the best solution? It seems Git-type services may be more hassle than they are worth in my situation.
You're working the wrong way...
Try a version-control system (VCS); that kind of software manages versions and changes for you.
Read about how to implement an SVN solution; that will help you a lot.
Git, CodePlex, and other repository services are built on that kind of tool.
The problem, in steps:
Source control
Google has a full list of articles; if you use Git, this is my favorite:
http://nvie.com/posts/a-successful-git-branching-model/
and also see this one:
www.bignerdranch.com/blog/you-need-source-code-control-now/
But source control isn't just a backup.
See
Is there a fundamental difference between backups and version control?
Back up your source
You need a copy of your code on a backup server, site, or external device.
Git or Subversion is for source control, not for backup copies.
See "Backup Best Practices: Read This First!"
and set up a tool to do this work periodically.
Software configuration management
You need to set up a system for managing software change, step by step, starting from user needs and covering the whole methodology.
See Redmine...
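For a solo developer, the day-to-day version-control loop is genuinely small. A minimal sketch with Git (the project and file names are arbitrary examples):

```shell
# One-time: turn a project folder into a Git repository
git init -q myproject
git -C myproject config user.email "dev@example.com"   # needed on machines with no global Git identity
git -C myproject config user.name "Dev"

# First version of a file: stage it and commit
printf 'print("hello")\n' > myproject/app.py
git -C myproject add app.py
git -C myproject commit -q -m "Add first version of app.py"

# Make a change and commit again -- each commit is a restorable snapshot
printf 'print("hello, world")\n' > myproject/app.py
git -C myproject commit -q -am "Tweak greeting"

# Full history, newest first
git -C myproject log --oneline
```

Restoring any earlier version is then `git checkout <commit> -- app.py`, which replaces the "ask the web host for a backup" step entirely.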

Why is it better to use Vagrant instead of sharing the VM's Disk itself?

I was searching for the best way to distribute a development environment to my team and found Vagrant.
After some reading and testing, here is what I can say about it
Pro:
Automates the process of creating a new VirtualBox VM (for a new user).
Con:
Nullifies the only pro I found by requiring the new user to learn a new tool.
The learning curve is far more time-expensive than simply copying the VDI and configuring a new VirtualBox VM to use it.
(Copying isn't free either, but it still costs less than learning a new tool.)
I honestly don't understand why using Vagrant makes distributing a development environment so different from just creating a normal VirtualBox VM and sharing the disk.
Maybe I'm missing a point here; can anyone enlighten me?
I can only speak from my own experience: the learning curve is well worth it.
As a programmer, I love being able to set up separate isolated environments for development, testing, and deployment.
Sure, you can copy the VDI around and whatever, but it's easier-- in my opinion-- to execute a few command line programs and it's all ready to go. It just seems dirty to me to copy around VDIs or other types of images.
Also, I can make packaged up vagrant boxes that I can send to my coworkers or distribute on the Internet.
There are a lot more advantages to vagrant than just that:
100% reproducible environments, everywhere the same.
Can be stored in source control (git/svn/...) and versioned.
Saves disk space. This can be important on laptops; right now I have 92 Vagrantfiles on my system, not all of them instantiated. Imagine them all being full-blown 20 GB VMs...
Experimentation becomes trivial. A quick vagrant init precise64, vagrant up, try some stuff, vagrant destroy, vagrant up, retry some stuff, vagrant destroy - and done.
It requires some discipline not to install required packages manually (or to update the common Vagrantfile when you do), but once you have a workflow worked out, it's simply brilliant.
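To make the "stored in source control" point concrete: the entire shareable VM definition is a few lines of Ruby checked in next to the code. A sketch (box name, port, and memory size are example values):

```ruby
# Vagrantfile -- the whole VM definition lives in this one file (example values)
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise64"                     # base box fetched automatically
  config.vm.network "forwarded_port", guest: 80, host: 8080 # reach the guest's web server
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 1024
  end
  # Provisioning keeps setup reproducible instead of hand-installed
  config.vm.provision "shell", inline: "apt-get update -y"
end
```

A teammate who checks out the repository just runs `vagrant up` in the same directory and gets an identical machine, which is the part that copying a VDI around cannot version or diff.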
I think the major advantage of Vagrant is that it lets you have one base package that can be re-configured for different purposes covering dev, testing, management, operations, etc., simply by changing the manifest/cookbook. It's just more convenient to share and start up.
Less major, but still nice, is that it lets you start over effortlessly. I know you can use snapshots in VirtualBox, but sometimes it becomes annoying to keep going back and forth between them. With Vagrant, you can start a box, test some things, and destroy it. Then you can start the same box again and test something else. You can easily start the same VM twice, test slightly differently in each, and destroy them both.
Also check out the answer here: https://superuser.com/questions/584100/why-should-i-use-vagrant-instead-of-just-virtualbox
EDIT: In my first answer, I was thinking of Packer, not Vagrant, my mistake.
We are discussing the same question in the office... right now, as it happens!
When I first got in touch with Vagrant I was skeptical, but the advantages I could see, beyond personal opinion, are:
you can start from a common configuration and automate many "forks" without snapshot chaining (which is a big pain when you need to delete a snapshot or two, I might say);
the process of cloning (packaging) a machine is much faster than exporting an appliance;
using Vagrant to set up a machine still lets you use the VirtualBox interface, so there's no problem with making a starting point based on Vagrant's features and letting teammates decide whether they'll use it or not;
as said, automation.
We are still running some tests... As for personal opinion, the main points for me are replication and the fact that packing and unpacking are faster than exports.

Which library to install on target?

I'm new to Qt Quick, so this might sound like a dumb question, but I'm struggling with this.
I want to develop a complete UI for my embedded system using Qt Quick, so I need QML to run on my system.
Now, which library should I install on my target embedded Linux system?
I've seen this page: http://qt-project.org/downloads but it shows the library at 228 MB, which would bloat my system size abnormally. I expect my system to be around 50 MB only! I think this comes with a lot of things I may not want.
I will use QML; for internet browsing, parts of WebKit (the WebKit module for Qt Quick); and XML.
So, can you please tell me which to install, and how?
Thanks & Regards
inblueswithu
Check http://qt-project.org/doc/qt-4.8/qt-embedded-install.html for initial documentation.
Please note that this will install everything, meaning all Qt modules. You might be able to strip some of them away; also, you might not need all the image plugins. However, as a start this should work for you.
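Stripping modules is done at build time: Qt's configure script accepts switches that exclude whole modules from the build. A sketch for a Qt 4.8 embedded build is below; the exact flag set varies by version, so verify every switch against `./configure -help` for your source tree before relying on it. WebKit is deliberately left in here, since the question mentions browsing:

```shell
# Sketch: trim a Qt 4.8 embedded build to roughly what a QML UI needs
# (flag names should be verified with ./configure -help for your Qt version)
./configure -embedded arm \
    -opensource -confirm-license \
    -no-qt3support -no-phonon -no-multimedia \
    -no-scripttools \
    -nomake demos -nomake examples
make && make install
```

After installing, deploying only the shared libraries your application actually links against (checked with `ldd` on the target binary) is another common way to keep the target image small.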