Tomcat test and production environment - testing

What is the best design for having many environments for one web-app? Is it better to have multiple Tomcat instances, or multiple web-app instances deployed on one Tomcat server?

If one server can handle the load, I would say it's better to have just one Tomcat instance and deploy the web-app multiple times if necessary.
This way:
You'll have only one server to take care of (secure, administer, back up).
You share hardware resources (RAM, disk, CPU) among applications.

The idea of deploying the same web-app several times in order to reduce the administration burden is sound.
But in my opinion, this isn't an acceptable solution: suppose you deploy a web-app twice, once for a TEST environment and a second time for a PRODUCTION environment. The web-app may encounter exceptions/errors (typically, memory-related issues) that can bring down the whole Tomcat server. In such a situation, problems encountered in one environment would make the other one unavailable.
Therefore, I would rather install as many Tomcat instances as there are environments.
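A common way to run several isolated Tomcat instances from a single install is the CATALINA_BASE mechanism: one shared CATALINA_HOME for the binaries, one private base directory per environment. A minimal sketch, where the paths and port offsets are assumptions to adapt to your layout:

```shell
#!/bin/sh
# Sketch: one Tomcat binary (CATALINA_HOME), one CATALINA_BASE per environment.
CATALINA_HOME=/opt/tomcat            # shared binaries (assumed location)

# Derive distinct ports so the instances don't collide:
# TEST uses offset 0, PROD uses offset 100.
ports_for() {                        # $1 = port offset
  echo "HTTP=$((8080 + $1)) SHUTDOWN=$((8005 + $1)) AJP=$((8009 + $1))"
}

ports_for 0      # TEST
ports_for 100    # PROD

# Each environment gets its own conf/, logs/, temp/, webapps/ under a private
# base directory (with the derived ports written into its conf/server.xml),
# then starts with CATALINA_BASE pointing at it, e.g.:
#   CATALINA_BASE=/srv/tomcat/test "$CATALINA_HOME/bin/startup.sh"
#   CATALINA_BASE=/srv/tomcat/prod "$CATALINA_HOME/bin/startup.sh"
```

A crash or OutOfMemoryError in the TEST JVM then cannot take PROD down, since each base directory runs in its own process.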

Ideally, you should keep all production code in a completely separate environment, as much as possible, just to avoid mistakes and for security reasons.
Depending on your resources and team size, you might, for example, have an enclave for production: web server, database, mail server. It should have rules that disallow any development resources from accessing production resources, and vice versa. If your dev resources are compromised, or a script runs against the wrong resource, there is a layer of protection.
Yes, this is all inconvenient, but it could save you from big headaches in the long run.

Related

Deploying ASP.NET Core application to ElasticBeanstalk without temporary HTTP 404

Currently, ElasticBeanstalk supports ASP.NET Core applications only on Windows platforms (when using the web role), and with Windows-based platform, you can't have Immutable updates or even RollingWithAdditionalBatch for whatever reason. If the application is running with a single instance, you end up with the situation that the only running instance is being updated. (Possible reasons for running a single instance: saving cost because it is just a small backend service, or it might be a service that requires a lot of RAM in comparison to CPU time, so it makes more sense to run one larger instance vs. multiple smaller instances.)
As a result, during deployment of a new application version, for a period of up to 30 seconds, you first get HTTP 503, then HTTP 404, later HTTP 502 Bad Gateway, before the new application version actually becomes available. Obviously this is much worse compared to e.g. using WebDeploy on a single server in a "classic" environment.
Possible workarounds I can think of:
Blue/Green deployments: slow (because it depends on DNS changes), and it seems like it is more suitable for "supervised" deployments, not for automated deploy pipelines.
Modify the Auto Scaling group to enforce 2 active instances before deployment (so that EB can do its normal rolling update), then change back. However, it is far from ideal to mess with resources created and managed by EB (like the Auto Scaling group), and it requires a fairly complex script (you need to wait for the second instance to become active, wait for the rolling deployment, etc.).
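The second workaround could be scripted roughly as below. The ASG name, environment name, and version label are placeholders, the polling steps are elided, and a DRY_RUN switch just prints the aws commands instead of executing them:

```shell
#!/bin/sh
# Sketch of the "temporarily scale to 2 instances" workaround (names are placeholders).
DRY_RUN=${DRY_RUN:-1}

run() {                                  # execute the aws command, or just print it
  if [ "$DRY_RUN" = 1 ]; then echo "aws $*"; else aws "$@"; fi
}

deploy_with_temp_scale() {               # $1 = ASG name, $2 = env name, $3 = version label
  # 1. Scale out so a healthy instance keeps serving during the rolling update.
  run autoscaling set-desired-capacity --auto-scaling-group-name "$1" --desired-capacity 2
  # ... wait here until the second instance is InService (polling elided) ...
  # 2. Trigger the normal EB rolling deployment.
  run elasticbeanstalk update-environment --environment-name "$2" --version-label "$3"
  # ... wait for the environment health to go back to Green ...
  # 3. Scale back in once the deployment is done.
  run autoscaling set-desired-capacity --auto-scaling-group-name "$1" --desired-capacity 1
}

deploy_with_temp_scale my-eb-asg my-env v42
```

The elided waits are the fragile part the answer warns about: you have to poll both the instance state and the environment health before each step.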
I can't believe that these are the only options. Any other ideas? The minimal viable workaround for me would be to at least get rid of the temporary 404s, because these could seriously mislead API clients (or think of the SEO effect in the case of a website, if a search-engine spider gets a 404 for every URL). As long as it is a 5xx, at least everybody knows it is just a temporary error.
Finally, in February 2019, AWS released Elastic Beanstalk Windows Server platform v2, which supports Immutable and Rolling with additional batch deployments and platform updates (as their Linux-based stacks have for ages):
https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-02-21-windows-v2.html
This solves the problem even for environments (normally) running just one instance.

What is the difference between Restarting JBoss server and redeploying it

I noticed that both options are available while running JBoss, and they both recompile the project (I noticed 'make' running with both). I did see this question; the accepted answer made sense, but I wasn't sure what hot-swapping means. What is an example of a change that could be picked up without needing to restart the server?
Your question needs more details to answer completely, but here are some basic concepts:
Hot-swapping is simply replacing the files of your project in the deployment folder of the application server (unpackaged, i.e. not the .war/.ear but all separate files). It is usually faster because the changes are immediately visible in the web application. But it is not always possible/supported by application servers, and often if you hot-swap .jar files the application server doesn't pick them up, or ends up confused.
Restarting JBoss will stop all existing services (EJBs, pooling, queues, messaging, ...) and restart them. It is almost the cleanest way to run your application (the cleanest would be undeploy, restart, and deploy).
Redeploying means your application and its services are first removed from JBoss, while other services set up at the server level (messaging, pools, JMX, ... depending on your actual settings) remain deployed. Then the application is deployed again (copied from your dev folder or .WAR/.EAR to the JBoss webapp directory).
Typically, you can safely hot-swap (even manually) .(x)html/.jsp/.jsf/images/.js/.css files, as JBoss doesn't need to process them.
Changes to Java classes deployed as .class files in WEB-INF/classes can often be hot-swapped.
Changes to Java code deployed in a .jar will almost always need at least a redeployment. Some OSGi-enabled application servers, properly configured, are more flexible at hot-swapping a complete application (I know GlassFish does this, but I don't know what specific setting is needed).
Finally, in development, multiple redeployments sometimes lead to memory leaks or an unstable application server (often you'll see an OutOfMemoryError in the logs); then you need to clean up (undeploy, stop, start, then deploy).
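As a rough illustration of the safe hot-swap case, copying a static resource into an exploded deployment. The paths are assumptions (a classic JBoss AS layout), and this only applies to files the server serves as-is:

```shell
#!/bin/sh
# Sketch: hot-swapping a resource into an exploded deployment (paths are assumptions).
# Only safe for files JBoss doesn't process (.jsp/.xhtml/.css/.js/images);
# .class changes often work too, but .jar changes usually need a redeploy.
DEPLOY_DIR=${DEPLOY_DIR:-/opt/jboss/server/default/deploy/myapp.war}  # exploded dir, not an archive
SRC_DIR=${SRC_DIR:-src/main/webapp}

hotswap() {                        # $1 = path relative to the webapp root, e.g. index.jsp
  cp "$SRC_DIR/$1" "$DEPLOY_DIR/$1" && echo "hot-swapped $1"
}

# Usage:
#   hotswap index.jsp
#   hotswap css/site.css
```

No restart or redeploy is involved; the next request simply picks up the new file.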

Load Balanced Deployments

I have an application that is load balanced across two web servers (soon to be three), and deployments are a real pain. If I do the database side first, it breaks the running production code; if I do the code first, the database side isn't ready, and so on.
What I'm curious about is how everyone here deploys to a load balanced cluster of X servers. Since publishing the code from test to prod takes roughly 10 minutes per server (multiple services and multiple sites) I'm hoping someone has some insight into the best practice.
If this was the wrong site to ask (meta definitely didn't apply - wasn't sure if serverfault did as I'm a dev doing the deployment) I'm willing to re-ask elsewhere.
I use NAnt scripts and PsExec to execute them.
Basically, in the farm there's a master server that copies the app and DB scripts locally and then executes a deployment script on each server in the farm. That script copies the code locally, modifies it if needed, takes the app offline, deploys the code, and brings the app back online.
Usually the app is off for about 20 seconds (5 nodes).
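The per-server flow described above can be sketched as a rolling loop. The server names are placeholders, and the echoes stand in for the real copy and offline/online steps (e.g. an app_offline.htm file, or removing the node from the load balancer pool):

```shell
#!/bin/sh
# Sketch of a rolling deployment across a farm, one node at a time,
# so the remaining nodes keep serving traffic. Names are placeholders.
SERVERS=${SERVERS:-"web1 web2 web3"}

deploy_one() {                          # $1 = server name
  echo "[$1] taking app offline"       # e.g. drop app_offline.htm / leave the LB pool
  echo "[$1] copying code"             # e.g. robocopy or rsync from the master share
  echo "[$1] bringing app online"      # remove app_offline.htm / rejoin the LB pool
}

for s in $SERVERS; do
  deploy_one "$s"
done
```

Database migrations are the hard part of this pattern: to avoid the chicken-and-egg problem in the question, schema changes generally have to be made backward-compatible (applied first, with code that tolerates both shapes), then cleaned up in a later release.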
Also, I haven't tried it but I hear a lot about MSDeploy.
Hope this helps
Yeah, if you want to do this with no downtime you should look into HA (High Availability) techniques. Check out a book by Paul Bertucci - I think it's called SQL Server High Availability or some such.
Otherwise, put up your "maintenance" page, take all your app servers down, do the DB and one app server first, then go live and do the other two offline.

GlassFish multiple EARs

I have an EAR that I deploy as production, in context "/".
I'd like to deploy a test version of the application on the same GlassFish instance.
Is it possible to deploy the application under a different context and port in the same instance?
If so, besides changing the context in application.xml, do I need to change anything else?
Usually you can deploy a test version of the application by altering the context root, and deploying it as a whole new application.
However, you must take the application's design into consideration. If the application uses a database, more often than not you'll need a test database instance. All JNDI names (this includes datasources and EJBs, if any) that the test and production applications use must not conflict. It is ill-advised to run multiple instances of the same application that all reference the same JNDI names.
Finally, it is a standard accepted practice to separate your test and production environments, and even have separate machines for the same, in the case of mission critical apps and the like. This is done usually to prevent accidental overwriting of one environment (usually the production one) by another.

Apache and the c10k

How does Apache fare with respect to the c10k problem under normal conditions?
Say, while running very small scripts with little data, or do I need to scale out if I use Apache?
In the background, the heavy lifting is done by a few servers running specialized software that processes the requests, but I'd like to use Apache as a front end. Is this a viable plan?
I consider Apache to be more of an origin server - running something like mod_php or mod_perl to generate the content and being smart about routing to the appropriate system.
If you are getting thousands of concurrent hits to the front of your site, with a mix of types of data (static and dynamic) being returned, you may find it useful to put a more optimised system in front of it though.
The classic post-optimisation problem with Apache isn't generating the dynamic content (or at least, that can be optimised for early in the process), but simply waiting for a slow client to be able to receive the bytes that are being sent. It can therefore be a significant advantage to put a reverse proxy, in the form of Squid or Nginx, in front of the servers to take over the 'spoon-feeding' of the slow network clients, while allowing the content production to happen at full speed, and at local network speeds - 100Mb/sec or even gigabit speeds - if it even has to traverse a network at all.
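A minimal reverse-proxy sketch along those lines, assuming nginx in front of an Apache origin on 127.0.0.1:8080 (names and ports are assumptions):

```nginx
# Sketch: nginx in front, spoon-feeding slow clients while the Apache origin
# generates content at local-network speed. Names and ports are assumptions.
events {}

http {
    upstream apache_origin {
        server 127.0.0.1:8080;       # Apache with mod_php/mod_perl (assumed port)
    }

    server {
        listen 80;

        location / {
            proxy_pass http://apache_origin;
            proxy_buffering on;      # nginx absorbs the full response, freeing the
            proxy_buffers 16 64k;    # Apache worker while slow clients drain it
        }
    }
}
```

With proxy_buffering on, the Apache worker is released as soon as nginx has buffered the response, which is exactly the 'spoon-feeding' handoff described above.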
I'm assuming you've probably seen this data, but if not, it might give you some idea.
Imagine that you are running a web server with 10K simultaneous connections. How could that come about?
You've got many, many connections per second
Dynamic content
Are you sure your CPU can handle that many PHP sessions, for example? I guess not, so why are you thinking about the C10K problem? :D
Static content: small files
And still so many connections? On a single server? Then you probably have networking/throughput problems too, or you are a future competitor of Google. Use lighttpd, which addresses the C10K problem and is stable: fly light. Using Apache for only static files on large sites is the obvious choice.
Your clients are downloading large files over long periods: static content
ISO images, archives, etc.
If you are serving these via a web server, FTP may be more appropriate.
Video streaming
Use lighttpd or specialized software. And still... what about the other resources?
I am using Linux Virtual Server as a load balancer in front of Apache servers (with specific patches for LVS-NAT) and I am happy :) That last sentence is the answer you want to hear.