We have an iOS and Android app that we deployed to our MFP server v7.0.
I know that maintaining many versions is hard, and users who installed older versions may miss new features if we don't get them onto the latest version.
I want to know what other implications there are if we maintain a lot of versions on the server. For example, will the server take longer to restart, or might it experience downtime?
Also, is there a good number of versions to maintain, if there is such a number?
I want to know what other implications there are if we maintain a lot of versions on the server. For example, will the server take longer to restart, or might it experience downtime?
Yes, it can take longer for the server to start if it has to maintain a lot of resources; I would say this is expected.
You will want to look at the following doc: https://mobilefirstplatform.ibmcloud.com/learn-more/scalability-and-hardware-sizing-7-1/
Also, is there a good number of versions to maintain, if there is such a number?
This is really up to your development, marketing, and so on. You probably want your customers on the latest version, so you could keep 2-3 versions active and use Remote Disable on those you decide not to support anymore.
We want to start working with Liferay, but the server is heavy and the developers' computers don't have enough RAM, so we want to centralize the server instance.
In other words, we want to build a development server where all developers can connect, develop directly in their web browser, compile, view the result, and push the code to a git repository.
I found a good cloud IDE (Eclipse Che) and a good Maven archetype for Liferay projects, so I can build the project with Maven. Now I want to know whether it is possible to configure Liferay so that every developer can work without disturbing the others, and if so, how.
The developers could share the same database and use different ports. Maybe the server could generate temporary URLs, like some online cloud editors do.
I found this post, Liferay With Multiple Server Instances, but I don't think it's the best approach because it creates one server per project, which seems too heavy.
If necessary, we have Kubernetes in our infrastructure.
Liferay's tomcat bundle, by default, is configured to take a maximum of 2.5G for the process, but it can run with far less - the default only recently was bumped up, because many people never change the default and then wonder why production systems run out of memory. For 1 concurrent user (the sole developer) on a machine, I guess that the previous default of 1G heap space is enough. Are you saying that that's too much for your developers' machines?
Having many developers on a shared server poses one problem: yes, you may deploy different code from different machines, but how about setting a breakpoint? Can you connect with multiple debuggers? If something fails, how do you know whose recent deployment caused the failure?
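To make the debugging concern concrete: a JVM opens a single JDWP debug socket, so a shared server cannot serve several debugging developers at once. A sketch (the port number is an arbitrary example):

```shell
# The shared JVM would be started with one JDWP agent, e.g. in JAVA_OPTS:
JAVA_OPTS="$JAVA_OPTS -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000"
# Only one debugger can be attached to that socket at a time, so developers
# on a shared server would have to take turns setting breakpoints.
```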
Sharing a server is an integration technique, not a development technique. If your developers don't have enough memory available for running their own Liferay server next to their IDE, it's a lot cheaper to upgrade their machines than to slow them down when everybody is accessing the same server and nobody can properly debug. You pay for the memory once, but for your waiting developers by the hour.
Is it possible to share one server? Sure it is.
Is it possible to share one server without troubling each other? I doubt it.
When you say you think it's too heavy: what are you basing that assumption on? What does the actual developer machine look like, and what keeps you from investing in the extra memory?
It's trivial to share some infrastructure - i.e. have all of them connect to the same database server (and give everyone their own schema). But just the extra effort and setup might require you to pay the developers by the hour as much as you'd otherwise pay for a couple of memory chips.
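For the shared-database variant, a per-developer schema is just a different JDBC URL and account in each developer's portal-ext.properties. The keys below are standard Liferay JDBC properties; the host, schema, and credentials are hypothetical placeholders:

```properties
# portal-ext.properties on developer "alice"'s instance -- a sketch.
# Only the values (host, schema, credentials) are placeholders.
jdbc.default.driverClassName=com.mysql.cj.jdbc.Driver
jdbc.default.url=jdbc:mysql://db.internal:3306/lportal_alice?useUnicode=true&characterEncoding=UTF-8
jdbc.default.username=alice
jdbc.default.password=changeme
```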
And yet another option is: Run Liferay on a remote server, but keep 1 instance per developer. This way you don't need the local memory, but can have the memory in the cloud. Calculate if you pay more for remote cloud machines than for local memory - that decision is up to you.
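Since you mention having Kubernetes available, the one-instance-per-developer option could be sketched as one small Deployment per developer. The names, image tag, and memory sizes below are assumptions, not recommendations:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: liferay-alice          # one Deployment per developer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: liferay-alice
  template:
    metadata:
      labels:
        app: liferay-alice
    spec:
      containers:
      - name: liferay
        image: liferay/portal:latest   # placeholder tag -- pin a real one
        resources:
          requests:
            memory: "1Gi"              # roughly the 1G heap mentioned above
          limits:
            memory: "2Gi"
```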
We are planning to migrate WebSphere from 7.0 to 9 and from 8.5 to 9.
Can anyone help me with the detailed process?
The migration here is in place (it will be done on the same servers that host the old installations).
If any migration tools need to be used, please provide clear info on them.
Any documentation or video references would be appreciated.
OS used: RHEL
Current versions: WAS 7.x and 8.5
Migrating to: WAS 9.0
It sounds like you're in the very beginning stages of doing this migration. Therefore, I highly recommend you take some time to plan this out, especially to figure out the exact steps you'll be taking and how you'll handle something going wrong. For WebSphere, there is a collection of documents from IBM that discuss planning and executing the upgrade. Look around there for documentation on the tools and step by step guides for different kinds of topologies. The step by step guide for an in place migration of a cell is here.
You should make sure to take good backups before you start the process so you can restore back to before the migration if you need to.
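For a traditional WAS profile, the bundled backupConfig command does this; a sketch, with the profile path as a placeholder:

```shell
# Run on each node before migrating; restoreConfig.sh reverses it.
PROFILE_HOME=/opt/IBM/WebSphere/AppServer/profiles/AppSrv01   # placeholder path
"$PROFILE_HOME/bin/backupConfig.sh" /backups/AppSrv01-pre-migration.zip -nostop
```

The `-nostop` option keeps the servers running during the backup; omit it if you prefer a quiesced backup.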
In addition to doing the upgrade, an important part is to also make sure your applications are going to work on the new version if you haven't already. IBM provides this tool to scan applications and identify potential issues that developers will have to fix. There is documentation for the tool at that link as well.
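That scanner (the Migration Toolkit for Application Binaries) runs against the packaged application. A sketch, with the archive path as a placeholder and the flag values reflecting my assumption of the 7.0-to-9.0 case:

```shell
# Produces a report of APIs that may behave differently on WAS 9.
java -jar binaryAppScanner.jar /path/to/myapp.ear \
     --sourceAppServer=was70 --targetAppServer=was90 --analyze
```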
If you are in the planning phase, I'd strongly suggest you consider migrating to WebSphere Liberty instead of traditional WAS v9. All the migration tools (the toolkit for binaries and the Eclipse migration toolkit) support both migration scenarios.
Choosing Liberty might be a bit more work at the beginning, but you will gain more deployment flexibility and speed up future development. Liberty is also much better fitted for any cloud/containers environments, as it is much more lightweight, so in the future, if you would like to move to containers, it would be much easier.
Check the tutorial Migrate traditional WebSphere apps to WebSphere Liberty on IBM Cloud Private by using Kubernetes. Although it shows the steps to migrate to Liberty on ICP, the beginning is the same: analyzing the applications to see whether they are fit for Liberty, then migrating them. If you don't have access to IBM Cloud or ICP, you can use the standalone version of the Transformation Advisor that was released recently - IBM Cloud Transformation Advisor.
Having said all that, some apps use old or proprietary traditional WebSphere APIs; in that case it may be easier and cheaper to migrate them to WAS v9 temporarily and modernize them later.
I'm searching for the main differences between OpenShift Origin and OpenShift Enterprise. I know that the first is open source and the latter is the commercial version. Does OpenShift Enterprise have features that the open source version lacks?
Thanks in advance.
Update 3/21/2018: If you find this old answer of mine in the future, Enterprise is called "OpenShift Container Platform" now.
The community version goes faster, but with change comes some risk. If you would like to be an early adopter Origin could be your choice. Note: support is best effort by the community, but I have found very helpful people on IRC and on the project's github page.
Link: https://github.com/openshift/origin
The enterprise version has the advantage of professional support for your money. While you won't get features as early, in exchange there is a focus on stability and streamlining, which may be important for enterprises. Some solutions and examples may not work exactly the same way; for instance, application templates and utilities come as part of RHEL packages. It also comes with entitlements for things like RHEL and CloudForms integration.
I tried installing a one master, one node small cluster with both, and found them just as good.
In short, stability or early adoption. Oh, and bugfixes.
Personally I prefer to go with Origin, as you can monitor the state of the project yourself and you are not forced to jump on every coming train. Update when suitable.
OpenShift Origin is the open source community version of OpenShift Enterprise. To understand what this means, you need to understand what open source software is: software developed via a collaborative model from many individual contributors. Origin updates as often as open source developers contribute via git, a version control system, sometimes several times per week.
OpenShift Enterprise 3 integrates with Red Hat Enterprise Linux and is tested via Red Hat's QA process to offer a stable, supportable product for customers who want their own private or onsite cloud. Enterprise updates roughly every six months, maintaining stability across minor updates, and provides timely professional support for every query, from installation/POC to production.
Our application is using Hibernate Search for indexing some of its data. The application is running on two JBoss EAP 6.2 application servers for load distribution and failover. We need changes made on one machine to be immediately visible on the other. The index is a central part of the application and needs to be consistent with the database data. Completely rebuilding it takes a long time so it is important that it remains intact even in the case of a server crash. Also, the index is expected to grow too large to keep all of it in memory.
Our current solution is to use the standard filesystem directory with a shared filesystem (NFS) and the JGroups backend to ensure that only one server writes to a given index at any time. This works more or less, but sometimes we have problems with index updates taking very long (up to 20 seconds) or failing completely. Due to some other reasons we need to migrate away from the currently used file system, so we are evaluating alternatives for the current setup.
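For reference, the setup described above boils down to a few Hibernate Search properties. This is a sketch for the Hibernate Search 4.x generation shipped around the EAP 6.2 era (property names changed in later major versions), with the NFS path as a placeholder:

```properties
# Index files on the shared (NFS) filesystem, used by all nodes.
hibernate.search.default.directory_provider = filesystem
hibernate.search.default.indexBase = /mnt/nfs/search-indexes
# JGroups backend: index-change messages go through the channel so that
# only the elected master node actually writes to the index.
hibernate.search.default.worker.backend = jgroups
```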
One thing we tried is the Infinispan directory with a file cache store for persistence, but we had some problems there regarding OutOfMemoryErrors (see also my post in the Infinispan forums https://developer.jboss.org/thread/253732). Also, performance was still not acceptable in our first tests (about 3 seconds for an index update with two clustered servers set up on my developer machine), though that may be due to configuration issues.
I think this is not such an uncommon requirement, but I couldn't find much information on best practices to implement it.
Who has experiences with similar setups? Does the Infinispan directory work for you? Can anybody suggest a working configuration or how to proceed to arrive at one? What alternatives have you tried and which work?
You need to be careful about which versions are being used. The Infinispan version bundled with JBoss EAP is not intended (i.e. not tested as extensively as for other purposes) for storing the Lucene index.
When JBoss EAP 6.2 was released, the bundled Infinispan was considered good to go for the internal needs of the application server, but as you might have discovered, the feature of index storage was having at least some performance issues.
In recent developments of Infinispan we applied many improvements to the index storage feature, fixing some bugs and getting very significant performance improvements out of it. I hope you would be willing to try Infinispan 7.2.0.Beta1.
All of these improvements are also being backported to JBoss Data Grid; version 6.5 will make them available as a supported product. Note that storing a Hibernate Search index wasn't supported before - it will be a new feature of JDG 6.5.
Modules from JDG 6.5 will be compatible with JBoss EAP, you'll just have to make sure you'll use the Infinispan build provided by JDG and not the one meant for internal usage of EAP.
Performance improvements are still being worked on. It's already much better - especially compared to that older version - but we won't stop there, so if you could try the latest bleeding-edge versions of Infinispan 7.2.x (another release is scheduled for tomorrow), I'd highly appreciate your feedback to keep pushing it.
I am trying to get a better idea on this as so far I have had mixed answers in person.
I am a solo dev in a five-person IT department at a health-care-related business. My developer machine runs Win 7 RC1 (x64), but my users all run Win XP Pro (x86). Is this a big deal? What kind of pitfalls should I be aware of? Is having a VM of the user image enough?
Should my environment completely mirror my end user's?
Your development environment doesn't need to mirror your user's environment, but your testing environment certainly should!
Having a VM of the users' image for testing would probably be good enough.
First and foremost, as a developer, your machine will never look like your client's machine. Just accept that.
You will have tools and utilities installed that they won't have. That will fundamentally change the configuration of your machine from the outset. You have DLLs, applications, services, and possibly drivers installed that your users have never even heard of (and likely never will).
As far as the OS is concerned, Win7 and WinXP, despite claims to the contrary, are not the same animal. Don't believe the hype. Having said that, don't believe the anti-hype, either. Just be aware, as you well should, that any piece of software developed under one version of an OS is not guaranteed to behave the same way under another.
The short of it: Yes, it's important that your environment is different. Should you panic about it? No. Should you account for it in testing? Absolutely. As rigorously as possible.
Is this a big deal?
Yes, it is. Your OS is two generations ahead of the one your users have, and on top of that you are running a non-release version.
What kind of pitfalls should I be aware of?
It depends on what you are developing. Some libraries you have may be missing on user machines, the versions may differ, etc.
Should my environment completely mirror my end user's?
Not necessarily, but you definitely need to have a testing environment that corresponds to the one the users have.
If you were developing web applications, all that would not have been an issue (well, unless you used some fancy fonts that are not present in a clean OS by default).
It's unreasonable to develop on exactly the same type of system as your users. If nothing else, your life is made much easier by installing all sorts of developer tools your end users have no reason to install. I hear Visual Studio in particular likes to squirrel away a number of potential dependencies onto a system.
However, you do need to test on a system more in line with that of your end users. If you have access to an image of such a system, your VM approach should be sufficient. If nothing else, aim for a staged (or better yet, beta/trial) release so as to avoid pushing a completely broken app out the door.
In short, don't fret about the development environment but put some thought into your testing one!
It's effectively impossible for any two machines to be set up the same, so development and production environments will always be different. One advantage in them being VERY different is that you will be more aware of possible deployment problems.
The type of environment that you need to use for testing entirely depends on what you're developing.
If you're writing web applications, having a VM with the user's standard image should be more than enough (just make sure the VM contains all the browsers your users might be using). Web development is much easier in this respect. (I'm also running Windows 7 and have a couple of VMs to test various environments.)
If you're writing a full desktop application, you'll probably want to ask for a second computer to test on (even if only before a final release), because of differences in hardware. Just imagine if something runs fine for you but slows down the user's computer so much that everything else is unusable. Conversely, you might spend hours trying to make something faster in the VM when it would have run just fine on a user's computer.