Can we use the WASPreUpgrade and WASPostUpgrade scripts to migrate from one machine to another running the same WebSphere version v8.5.5.17 - migration

Trying to move the Dmgrs and nodes to new machines with a different OS.
The WAS install paths and profile locations are different on the source and target servers.
Both source and target machines are on version 8.5.5.17.
Different OS for source (Intel) and target (Power).
I wanted to check whether we can use the WASPreUpgrade and WASPostUpgrade scripts for this kind of migration too and follow the steps listed here:
https://www.ibm.com/docs/en/was-zos/8.5.5?topic=SS7K4U_8.5.5/com.ibm.websphere.migration.nd.doc/ae/tmig_migrate_remote_commandline.html
OR
can we use the following approach to allow for different install locations:
do a new install, create new profiles, and then use the
importWASConfig.py and exportWASConfig.py scripts to move the configuration from the old environment to the new environment?
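For reference, the general form of the two commands being asked about looks roughly like this (the install paths, backup directory, and profile names below are placeholders for illustration, not values from this environment):

# On the source machine: snapshot the existing configuration into a backup directory
/opt/IBM/WebSphere/AppServer/bin/WASPreUpgrade.sh /tmp/was_backup /opt/IBM/WebSphere/AppServer
# Copy the backup directory to the target machine, then run on the target:
/usr/IBM/WebSphere/AppServer/bin/WASPostUpgrade.sh /tmp/was_backup -profileName Dmgr01 -oldProfile Dmgr01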

No, those scripts require you to upgrade to a newer major version of WebSphere.
Someone has figured out a procedure to move profiles over; maybe this achieves what you need to do: http://akhilesh-humbe.blogspot.com/2013/06/moving-profile-to-new-host-in-websphere.html
Otherwise you will have to request support from the vendor to help you figure out how to move the profiles.

Related

How to install firebird server using a wix installer

I am required to install Firebird SuperServer on Windows as a service, as part of my application installation through WiX for Windows machines.
The machines might already have another Firebird instance running (usually the default instance), so mine must be installed on a different port. The user should not see any dialogs, and the installation should happen in the background.
I am able to do the installation through instsvc, installing Firebird on a different port with a new instance name. However, on Windows you get the file execution security warning when instsvc runs. So I was looking into the http://www.mwasoftware.co.uk/firebird-msm merge modules, but they do not provide information on how to install on a different port/service name (if required).
Could you please provide info on how to install Firebird using WiX, so that it installs Firebird as part of my WiX installation, on a specified port, without disturbing existing installations and with no interaction from the user?
The Firebird installer only installs the service as the default instance with the default port. If you want to run on a different port and use a different service name, then you need to change the port in firebird.conf yourself and execute instsvc with an alternative service name.
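As a rough illustration of that answer (the port number and instance name are placeholders; check the instsvc options for your Firebird version):

# In the second instance's firebird.conf, pick a non-default port
RemoteServicePort = 3051
# Then register and start the service under an alternative name, from that instance's bin directory
instsvc install -n SecondInstance
instsvc start -n SecondInstance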
This is what I got from the merge module developer. While I haven't tried the solution yet, it seems to be straightforward.
The build scripts, including the WiX scripts, are all available for download. The direct link is:
http://www.mwasoftware.co.uk/download-msm/download/8-current-version/130-msm-build
To install a (possible) second server, you should do two things:
Build using a modified firebird.conf
Change all the UUIDs so that the package is unique.
You will also need to copy the build251.bat script and update the environment variables to the Firebird version you are using. See also readme.htm.

GitLab import 6.8.1 into 7.10.4

We have a production GitLab 6.8.1 running. I've set up a parallel VM with GitLab 7.10.4. Now I want to move all data from the old installation to the new one. I've already found a way to move the bare repositories, but I have no clue how to import the user account information, issues, etc.
EDIT: The thing is further complicated by the fact that the original installation was built from source, was running on Debian, used MySQL as the database, and the whole installation was pretty much messed up. That's why I didn't manage to migrate the old server and decided to set up a new one. The new server is an Ubuntu machine with GitLab installed from an apt-get package (I think that's Omnibus, but I'm not sure what that means). The new installation seems to use PostgreSQL.
FYI: You haven't specified whether the old or new server is running a source installation or Omnibus, or whether you're running a MySQL or PostgreSQL database. Instructions differ depending on these factors, so please clarify and I will update my answer.
The first thing is that you will need your old and new servers to be on the same version of GitLab. You cannot migrate anything other than repos without having synchronized versions.
Depending on your reply to the above, you will either follow instructions similar to the backup and restore tasks or run the backup and restore tasks directly. Both options generally require you to manually copy configuration files or migrate settings from multiple files into a single new file (in the case of going from a source install to Omnibus). The Omnibus upgrade guide above lists the configuration files that need to be migrated depending on your environment.
Update based on edited question: There's a guide specifically for that scenario in this section of the Omnibus upgrade guide, using Option 2. You still need to have the same version on both old and new servers, though, I believe.
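As a hedged illustration of the backup and restore tasks mentioned above (the exact commands depend on install type, and both servers must be on the same GitLab version before restoring):

# On the old server (source install), as the git user in the GitLab directory:
bundle exec rake gitlab:backup:create RAILS_ENV=production
# Copy the resulting tarball into the new server's backup directory, then on the new (Omnibus) server:
sudo gitlab-rake gitlab:backup:restore BACKUP=<timestamp_of_backup>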

How to obtain all versions of KRE?

Problem
I want to use both stable versions of KRE and the bleeding-edge nightly-built KRE. One ASP.NET 5 application may be on beta2, but another I may want on beta4. So what I did was install both in PowerShell as described here.
What happened is that the stable KVM installed into C:/Users/derp/.kre and the nightly-build KVM installed into C:/Users/derp/.k
Worse yet, I can only see this now
Attempts
I tried kvm install KRE-CLR-x86.1.0.0-beta2 and it failed.
Shall I try moving the packages from the /.kre folder to the /.k folder? This seems hacky and like a really bad idea.
RTFM: I tried to use the install feature, including the -a switch, but it failed.
I'm doing something the hard way and can't see the obvious.
I searched on here.
I feel if there is an answer to what I am trying to do above, it is worth being on here for others to find as well. Thank you all for your patience.
ASP.NET 5 is under development and there is no guarantee that changes between different pre-release versions are backward compatible (sorry!).
The /.kre -> /.k rename is not backward compatible and you cannot have both the old and the new kvm on the PATH simultaneously. However, you can have two versions of kvm on your machine, but you will have to use the full path for at least one of them.
I think the key is the PATH environment variable of your system. You have to use two sets of kvm, one for nightly builds and one for the public beta, to download the runtimes and set the correct PATH.
For instance, I got one kvm from the Entity Framework 7 repository, which can download and use beta 4 builds. I also have another kvm from the Home repository, which can download and use public beta builds.
You can use either kvm with the "upgrade" or "use" command to set the correct PATH, then run your application on the runtime you need. I think even Visual Studio 2015 CTP runs your projects based on the runtime specified in your PATH. For the time being, only beta 3 runtimes show up in the project property dialog of VS 2015 CTP, but when hitting Ctrl+F5 my website loads the beta 4 runtime and assemblies (I can see it in the output window); I think this is because the .k folder comes before the .kre folder in my PATH.
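A hedged sketch of juggling the two copies (the paths come from the question; exact kvm subcommand syntax varied between pre-release drops, so check the help output of each copy):

# Older kvm under .kre: pin the stable runtime for this shell
C:\Users\derp\.kre\bin\kvm.cmd install 1.0.0-beta2
C:\Users\derp\.kre\bin\kvm.cmd use 1.0.0-beta2
# Newer kvm under .k: fetch and activate a newer or nightly runtime in another shell
C:\Users\derp\.k\bin\kvm.cmd upgrade
C:\Users\derp\.k\bin\kvm.cmd list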
Can you try the following?
$cmd-prompt>kpm Install KRE-CLR-x86
It worked for me.

Cross platform patching

I have a program that I intend to install on Linux and Windows machines. I have it cross-compiling fine (with autotools), but at some point I would like the program to be able to update its binaries. The only ways I can think of doing this are:
Give users write access to "C:\Program Files\Foo Program" or "/usr/bin/foo_program".
or
Install the program to the user's profile/home directory.
Neither of these seems like a good idea. What would you do?
You need to give us more details on what you are trying to do - I don't understand the link between cross-platform, patching, and your question.
If you need to be able to auto-update the program, on Linux at least, the best solution is to provide a binary package (rpm, deb, whatever, depending on your target) which is updated regularly, so that new versions will be picked up by the package manager. On Windows and Mac OS X, things are usually more decentralized; each program has its own update manager. The best technical solution depends on the technology (C/C++/Python/whatever). One exception I can think of on Linux is VMware Player, which tells you when there is a new version, but you still have to install the new version yourself.
If the program binary is writable, you could download the patch or the new bits to %TEMP% or /tmp and then apply them to the binary. I don't think you need to be able to create new files in the directory. But you're going to run into problems on Windows with the file being in use while you try to patch it.
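A minimal sketch of that idea for the Linux side (the URL and install path are hypothetical, and this assumes the updater has write access to the install directory):

# Download the new build next to the installed binary, then swap it in;
# the rename works even while the old binary is running, since the old inode stays alive.
curl -o /usr/bin/foo_program.new https://example.com/downloads/foo_program
chmod +x /usr/bin/foo_program.new
mv /usr/bin/foo_program.new /usr/bin/foo_program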

How to automate development environment setup? [closed]

Every time a new developer joins the team or the computer a developer is using changes, the developer needs to do a lot of work to set up the local development environment so the current project works. As a Scrum team we are trying to automate everything, including deployment and tests, so what I am asking is: is there a tool or practice to automate the setup of the local development environment?
For example, to set up my environment, first I had to install Eclipse, then SVN, Apache, Tomcat, MySQL, and PHP. After that I populated the DB and had to make minor changes to the various configuration files, etc. Is there a way to reduce this labor to one click?
There are several options, and sometimes a combination of these is useful:
automated installation
disk imaging
virtualization
source code control
Details on the various options:
Automated installation: tools for automating installation and configuration of a workstation's various services, tools, and config files:
Puppet has a learning curve but is powerful. You define classes of machines (development box, web server, etc.) and it then does what is necessary to install, configure, and keep the box in the proper state. You asked for one-click, but Puppet by default is zero-click, as it checks your machine periodically to make sure it is still configured as desired. It will detect when a file or mode has been changed, and fix the problem. I currently use this to maintain a handful of RedHat Linux boxes, though it's capable of handling thousands. (Does not support Windows as of 2009-05-08).
Cfengine is another one. I've seen this used successfully at a shop with 70 engineers using RedHat Linux. Its limitations were part of the reason for Puppet.
SmartFrog is another tool for configuring hosts. It does support Windows.
Shell scripts. RightScale has examples of how to configure an Amazon EC2 image using shell scripts.
Install packages. On a Unix box it's possible to do this entirely with packages, and on Windows msi may be an option. For example, RubyWorks provides you with a full Ruby on Rails stack, all by installing one package that in turn installs other packages via dependencies.
Disk images: then of course there are also disk imaging tools for storing an image of a configured host so that it can be restored to another host. As with virtualization, this is especially nice for test boxes, since it's easy to restore things to a clean slate. Keeping things continuously up to date is still an issue: is it worth making new images just to propagate a configuration file change?
Virtualization is another option, for example making copies of a Xen, VirtualPC, or VMWare image to create new hosts. This is especially useful with test boxes, as no matter what mess a test creates, you can easily restore to a clean, known state. As with disk imaging tools, keeping hosts up-to-date requires more manual steps and vigilance than if an automated install/config tool is used.
Source code control: once you've got the necessary tools installed and configured, doing builds should be a matter of checking out what's needed from a source code repository and building it.
Currently I use a combination of the above to automate the process as follows:
Start with a barebones OS install on a VMWare guest
Run a shell script to install Puppet and retrieve its configs from source code control (a sketch of such a bootstrap script is below)
Puppet to install tools/components/configs
Check out files from source code control to build and deploy our web application
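A rough sketch of the bootstrap script from the "install Puppet" step above (the repository URL and manifest path are made up, and Puppet's command names have changed across versions):

#!/bin/sh
# Install Puppet, pull this node's configuration from source control, then apply it once;
# after that the Puppet agent keeps the box in the desired state.
yum -y install puppet
svn checkout http://svn.example.com/ops/puppet /etc/puppet
puppet apply /etc/puppet/manifests/site.pp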
I stumbled across this question and was very surprised that no one has mentioned Vagrant yet.
As Pete TerMaat and others have mentioned, virtualization is a great way to manage and automate development environments. Vagrant basically takes the pain away from setting up these virtual boxes.
Within minutes you can have a completely fresh copy of your favourite Linux distro up and running, and provisioned exactly the same way your production server is.
No more fighting with OSX or Windows to get PHP, MySQL, etc. installed. All software lives and runs inside the virtual machine. You can even SSH in with vagrant ssh. If you make a mistake or break something, just vagrant destroy it, and vagrant up to start over fresh.
Vagrant automatically creates a synced folder with your local file system, meaning you don't need to develop inside the virtual machine (e.g. using Vim over SSH). Use whatever editor you prefer.
I now create a new "Vagrant box" for almost every project I do. All my settings are saved into the project repository, so it's easy to bring on another team member. They simply have to pull the repo, and run vagrant up, and they are literally ready to go.
This also makes it much easier to handle projects that have different software requirements. Maybe you have some projects that rely on PHP 5.3, but some newer ones that run PHP 5.4. Just install the version you want for that project.
Check it out!
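As a hedged sketch, the new-team-member workflow described above looks roughly like this (the repository URL is hypothetical):

git clone https://example.com/acme/project.git
cd project
vagrant up        # downloads the base box and provisions it on first run
vagrant ssh       # shell into the running VM
# and when the environment gets into a bad state:
vagrant destroy -f && vagrant up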
One important point is to set up your projects in source control such that you can immediately build, deploy and run after checkout.
That means you should also checkin helper infrastructure, such as Makefiles, ant buildfiles etc., and settings for the tools, such as IDE project files.
That should take care of the setup hassle for individual projects.
For the basic machine setup, you could use a standard image. Another option is to use your platform's tools to automate installation. Under Linux, you could create a meta-package that depends on all the packages you need. Under Windows, a similar thing should be possible using MSI or the like.
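For the Linux meta-package idea, one low-effort sketch uses Debian's equivs tool (the package name and dependency list here are invented, and the generated .deb filename depends on the version you set in the control file):

apt-get install equivs
equivs-control acme-dev-env.control
# edit acme-dev-env.control and set, for example:
#   Package: acme-dev-env
#   Depends: eclipse, subversion, apache2, tomcat6, mysql-server, php5
equivs-build acme-dev-env.control
dpkg -i acme-dev-env_1.0_all.deb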
Edit:
Ideally, instead of checking in helper infrastructure, you check in the information that allows the build to generate the helper infrastructure. This is the approach taken by e.g. the GNU build system (autotools etc.), or by Maven. This is even more elegant, because you can (theoretically) generate infrastructure for any (supported) build environment, thus you are not bound to e.g. one specific IDE, and settings in the helper infrastructure (paths etc.) don't need to duplicate the main project settings.
However, this is also a more complex approach, so if you can't get it to work, I believe checking in stuff like IDE files directly is acceptable.
I like to use Virtual PC or VMware to virtualize the development environment. This provides a standard "dev environment" that can be shared among developers. You don't have to worry about software that the user could add to their system that may conflict with your development environment. It also gives me a way to work on two projects where the development environments can't both be on one system (e.g. using two different versions of a core technology).
Use puppet to configure both your development and production environment. Using a top-notch automation system is the only way to scale your ops.
There's always the option of using virtual machines (see e.g. VMWare Player). Create one environment and copy it over for each new employee with minimal configuration needed.
At a prior place we had everything (and I mean EVERYTHING) in SCM (ClearCase, then SVN). When a new developer came in, they installed ClearCase/SVN and sucked down the repository. This also handles the case when you need to update a particular lib/tool, as you can just have the dev teams update their environment.
We used two repos for this, so code and tools/config lived in separate places.
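In practice the new-developer step was little more than a couple of checkouts; a hedged example with made-up repository URLs:

svn checkout http://svn.example.com/tools/trunk ~/tools
svn checkout http://svn.example.com/code/trunk ~/work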
I highly recommend Blueprint from DevStructure. It's open-source and your use case is actually the exact reason we originally wrote the software. Our goals have somewhat changed, but it still is the perfect tool for what you are describing. In short, you can create reusable server configs - dead simple configuration management. I hope this helps!
https://github.com/devstructure/blueprint (Blueprint # Github)
I've been thinking about this myself. There are some other technologies that you could throw into the mix. Here's what I'm currently setting up:
PXE based pre-seeded installation images (Debian Squeeze). You can start up a bare-metal machine (or new virtual appliance) and select the image from the PXE boot menu. This has the major advantage of being able to install your environment on physical machines (in addition to virtual appliances).
Someone already mentioned Puppet. I use CFEngine but it's a similar deal. Essentially your configuration is documented and centralized in policy files which are continually enforced by an agent on the client.
If you don't want a rigid environment (i.e. developers may choose a combination of toolsets), you can roll your own .deb packages so new devs can type sudo apt-get install acmecorp-eclipse-env or sudo apt-get install acmecorp-intellij-env, for example.
Slightly off-topic, but if you run a Debian based environment (i.e. Ubuntu), consider installing apt-cacher (package proxy). In addition to saving bandwidth, it will make your installations much faster (since packages are cached on your local network).
If you're using OS X and working with Rails, I'd suggest either:
https://github.com/platform45/let-there-be-light
https://github.com/thoughtbot/laptop
If you use machines in a standard configuration, you can image the disk with a fresh, perfectly configured install; that's a very popular approach in many corporations (and not just for developers, either). If you need separately configured OSes, you can tar-bz2 all the added and changed files once a configured OS is turned into your desired setup, and just untar it as root to recreate your desired environment from scratch.
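A minimal sketch of that tar-bz2 approach (the file list is purely illustrative):

# On the configured machine: capture the added and changed files
tar cjf dev-setup.tar.bz2 /etc/httpd /etc/my.cnf /opt/eclipse /usr/local/bin
# On a freshly installed machine: restore them as root (GNU tar strips the leading / on create, so -C / puts them back)
sudo tar xjf dev-setup.tar.bz2 -C /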
If you're using a Linux flavor, you've probably got a package management system: think .rpm for Fedora/Red Hat, or .deb for Ubuntu/Debian. Many of the things you describe already have packages available: svn, Eclipse, etc. You could roll your own packages for company-specific software, create a repository (perhaps only available on the local network), and then your setup could be reduced to a single bash script which would add the company repo to /etc/apt/sources.list (Debian/Ubuntu) and then call a command like,
/home/newhire$ apt-get update && apt-get install some complete package list
You could then use buildbot to automate regular builds for company packages that change often.
Try out DevScript at http://nsnihalsahu.github.io/devscript .
It's one command, like
devscript lamp, devscript laravel, or devscript django. The stack is set up in around a few minutes, depending on the speed of your internet connection.