JRuby Warbler executable performance - Glassfish

I am developing a JRuby on Rails app that needs to be deployed to clients' servers. We want to be able to compile the app so that the source cannot be read and copied (easily). From what I've read, Warbler seems to be the way to go.
My concern is the performance of the app in standalone mode, meaning just running it as "java -jar MyApp.war" as opposed to using Glassfish, Tomcat, etc. The distributed app won't be high traffic, maybe 20-30 users max. If anything it'd be heavier on the database side, which is a separate issue.
So how does this type of scenario compare performance-wise with running on an actual server?

Using Glassfish, Tomcat, JBoss (or TorqueBox) will perform just as well, as long as the JVM has enough memory.
You will need to tweak loading/compiling of the assets depending on the deployment server.
If it's supposed to be a web app, then you will need the WAR/Tomcat option. If it should be a desktop app, then just use the JAR version.

Related

Liferay Cloud IDE, multiple developers working on the same Liferay server

We want to start working with Liferay, but the server is too heavy and the developers' computers don't have enough RAM. We want to centralize the server instance.
In other words, we want to build a development server where all developers can connect and directly develop in their web browser, compile, view the result and push the code to a git repository.
I found some good cloud IDEs like Eclipse Che and a good Maven archetype for Liferay projects, so I can build the project with Maven. But now I want to know whether it is possible to configure Liferay so that every developer can work without troubling the others, and if so, how?
The developers can share the same database and can use different ports. Maybe the server can generate temporary URLs like some online cloud editors do.
I found this post, Liferay With Multiple Server Instances, but I don't think it is the best way because it creates one server per project. I think that is too heavy.
If necessary, we have Kubernetes in our IS.
Liferay's Tomcat bundle, by default, is configured to take a maximum of 2.5G for the process, but it can run with far less - the default was only recently bumped up, because many people never change the default and then wonder why production systems run out of memory. For 1 concurrent user (the sole developer) on a machine, I guess that the previous default of 1G heap space is enough. Are you saying that that's too much for your developers' machines?
Having many developers on a shared server poses one problem: yes, you may deploy different code from different machines, but how about setting a breakpoint? Can you connect with multiple debuggers? If something fails, how do you know whose recent deployment caused the failure?
Sharing a server is an integration technique, not a development technique. If your developers don't have enough memory available for running their own Liferay server next to their IDE, it's a lot cheaper to upgrade their machines than to slow them down when everybody is accessing the same server and they can't properly debug. You pay for the memory once, but you pay your waiting developers by the hour.
Is it possible to share one server? Sure it is.
Is it possible to share one server without troubling each other? I doubt it.
When you say you think it's too heavy: what are you basing that assumption on? What does the actual developer machine look like, and what keeps you from investing in the extra memory?
It's trivial to share some infrastructure - i.e. have all of them connect to the same database server (and give everyone their own schema). But just the extra effort and setup might require you to pay the developers by the hour as much as you'd otherwise pay for a couple of memory chips.
And yet another option is: Run Liferay on a remote server, but keep 1 instance per developer. This way you don't need the local memory, but can have the memory in the cloud. Calculate if you pay more for remote cloud machines than for local memory - that decision is up to you.

How to rapidly publish a web role cloud service, uploading only the binaries and avoiding a full VM restart?

Possible ways to accomplish it:
Creating a dedicated WCF service for this purpose (currently my favorite option)
Using the REST API?
Azure PowerShell?
Explanation:
Publishing a web-role cloud-service takes about 10 minutes. It's much too long during development - I try to do as much as I can offline, unit-test-ish and modular, but it's just impossible to avoid development cycles with the VM altogether.
Apparently, the long time is mostly a result of the machine being wholly restarted, so I'm trying to find an automatic solution, like uploading and installing the binaries.
What is the best way to accomplish it?
What do you think? would it cut at least 50% of the publishing time?
Do you expect any critical problems?
The solutions proposed below are definitely against best practices and should NEVER-EVER be used in a production environment.
If your objective is to quickly test your changes in your development environment, there are two ways you can go about it.
Enable RDP: You could enable Remote Desktop on your web role and copy your modified binaries or other files manually into the appropriate folders on the VM.
Use Web Deploy: This will only work for the web roles in your project, but you could enable Web Deploy on your web roles and use that for faster deployments. Please see this link for more details on how to use this feature: https://msdn.microsoft.com/en-us/library/azure/ff683672.aspx.

Why test EJB3 in an embedded container?

It could be a stupid question since almost everyone prefers the embedded container technique to test EJBs, but I have to clarify this because of my lack of experience.
Also, some may argue that embedded containers may not reproduce the real-life situation of deploying to a real app server.
So, when testing EJB3, why is it recommended to use embedded containers instead of a standalone container?
Thanks in advance.
Time.
Testing EJBs in full-blown application servers usually takes up a lot of time because the app server has to "spin up" whenever changes are made, so a lot of time is wasted. Because of that, embedded containers such as OpenEJB can save you a lot of time. Embedded GlassFish is also an option these days, although I haven't personally tried it.
Zero turnaround is a kind of holy grail in Java EE.
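For illustration, here is a minimal sketch of such a test using the standard javax.ejb.embeddable.EJBContainer API (implemented by OpenEJB and embedded GlassFish, among others). CalculatorBean is a hypothetical bean used only for this example, and the JNDI module name depends on how your classpath is packaged.

    import javax.ejb.Stateless;
    import javax.ejb.embeddable.EJBContainer;
    import javax.naming.Context;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical bean under test (normally in src/main, in its own file).
    @Stateless
    public class CalculatorBean {
        public int add(int a, int b) {
            return a + b;
        }
    }

    public class CalculatorBeanTest {

        @Test
        public void addsTwoNumbers() throws Exception {
            // Boots an embedded EJB container inside the test JVM; no external
            // application server process has to be started or deployed to.
            EJBContainer container = EJBContainer.createEJBContainer();
            try {
                Context ctx = container.getContext();
                // The module name ("classes" here) depends on your build layout.
                CalculatorBean calculator = (CalculatorBean)
                        ctx.lookup("java:global/classes/CalculatorBean");
                assertEquals(5, calculator.add(2, 3));
            } finally {
                container.close();
            }
        }
    }

The whole container starts and stops inside the test run, which is what makes this approach so much faster than redeploying to a standalone server.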
Here are the most relevant arguments that I've found. Please comment beside this, or add your own reasons about testing with embeddable containers vs. a real application server container. Thank you.
Using an embedded container testing technique ensures flexibility (you just need to add the new libs to the classpath). As far as I understand, if we want to be able to deliver the testing project for several application servers, the test implementation must not be bound to one application server's container. Some app servers use specific annotations or deployment descriptors; if they are used, then you are bound to that app server.
Embedded containers are lighter, which means reduced time for running the tests. Real app servers have difficulties starting and stopping automatically, or can hang, so building a fully automated testing process around a real app server could be too difficult.
Another problem is the stateless nature of most Java EE applications. After a method invocation across a transaction boundary (for example, a stateless session bean), all JPA entities become detached and the client loses its state. This forces you to transport the entire context back and forth between the client and the server - a heavy load - and every change of the client's state has to be merged with the server (see the sketch after this list).
With an embedded container you have one process that runs everything (tests and EJBs); with a real app server you have to coordinate two processes (app server and tests).
For full testing, of course, you also need tests on a real app server: different servers have their own particularities, for example class loading. Embedded containers, however, help with testing the logic (unit testing and integration of units), so for daily automated testing they can be enough and are easier.
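To make the detachment point above concrete, here is a rough sketch with hypothetical Order/OrderService classes: the entity returned from the stateless bean is detached in the client, and client-side changes have to be merged back.

    import javax.ejb.Stateless;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    // Hypothetical stateless session bean; Order is an assumed JPA entity.
    @Stateless
    public class OrderService {

        @PersistenceContext
        private EntityManager em;

        public Order load(long id) {
            // Managed only inside this call: once the transaction commits at the
            // bean boundary, the returned Order is detached in the client.
            return em.find(Order.class, id);
        }

        public Order update(Order changed) {
            // Every change made on the client side has to be merged back.
            return em.merge(changed);
        }
    }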
An embedded container is much faster to execute (start/stop) than a full container -> this affects the developer for sure. Setup/configuration is easier to automate, especially with continuous integration. On the other hand, as some core features are disabled in an embedded container, you can't test everything.
You may want to investigate http://www.jboss.org/arquillian to have both options. From the site:
Arquillian enables you to test your business logic in a remote or embedded container. Alternatively, it can deploy an archive to the container so the test can interact as a remote client.
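For reference, an Arquillian test looks roughly like the following sketch (GreeterBean is a hypothetical stateless bean; the ShrinkWrap archive defines exactly what gets deployed to whichever container adapter, embedded or remote, is on the classpath).

    import javax.ejb.EJB;
    import javax.ejb.Stateless;
    import org.jboss.arquillian.container.test.api.Deployment;
    import org.jboss.arquillian.junit.Arquillian;
    import org.jboss.shrinkwrap.api.ShrinkWrap;
    import org.jboss.shrinkwrap.api.asset.EmptyAsset;
    import org.jboss.shrinkwrap.api.spec.JavaArchive;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import static org.junit.Assert.assertEquals;

    // Hypothetical bean under test (normally in its own file).
    @Stateless
    public class GreeterBean {
        public String greet(String name) {
            return "Hello, " + name;
        }
    }

    @RunWith(Arquillian.class)
    public class GreeterBeanTest {

        // A micro-deployment containing only the classes under test; Arquillian
        // deploys it to the configured container (embedded, managed or remote).
        @Deployment
        public static JavaArchive createDeployment() {
            return ShrinkWrap.create(JavaArchive.class)
                    .addClass(GreeterBean.class)
                    .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml");
        }

        @EJB
        private GreeterBean greeter; // injected by the container the test runs in

        @Test
        public void greets() {
            assertEquals("Hello, world", greeter.greet("world"));
        }
    }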
In the end, it depends on the kind of EJBs you want to test. Certain complex scenarios will not work in an embedded container without mocks for some external services. In my projects we test EJBs with a custom mock container we created (ultra fast and easy to use) and, if all proceeds well, we test in the real thing, a full JBoss, using a remote control API pretty much like Arquillian's.
Hope it helps.

Is it normal that my Grails application is using more than 200 MB of memory at startup?

My Grails application is running in a development environment. I haven't gone into production yet, but in any case, is it normal that my Grails application requires 230 MB at startup alone (with an empty bootstrap and no requests handled so far)?
Do you know why this is the case, how to improve memory usage in development mode and, most importantly, whether it is reduced in a production environment?
To answer your questions, yes - it is normal. It's especially normal if you have a lot of GSPs in your application. GSPs are runtime compiled, so you can speed up their generation by increasing your permgen space.
You can improve memory use and performance in general by making sure that you are passing the '-server' flag when you start your server JVM.
I wouldn't blame all that memory usage just on Grails. Because it uses an embedded Tomcat (Jetty in older versions) there will be a decent amount of overhead even when running an empty application.
IMO, 230 MB is a lot of memory use for a Java application, but high memory usage is just part of life when writing JVM-based applications.
My online Grails applications run in a VPS with only 512 MB (which includes a Drupal CMS, Apache, the email services, ... and the Tomcat that runs Grails), so you can definitely tune your application to use less memory.

Why use Glassfish instead of Apache? What are its strengths and weaknesses?

Sorry for my ignorance here, but when I hear the word webserver, I immediately imagine Apache, although I know people use Microsoft's IIS too. However, since I've been hanging out here at Stack Overflow I've noticed lots of people use Glassfish.
That made me wonder why I would want to use Glassfish (in the sense that I'm interested, but I don't really understand why it might make my life easier). From what I read it's Sun's open-source derivative of Apache's Tomcat, so I imagine it's a good (or great) quality product. But since I don't know its strengths and weaknesses, I don't know when it would be wise to choose Glassfish over another server. Could anyone elaborate?
GlassFish is an Application Server which can also be used as a Web Server (Http Server).
A web Server means: Handling HTTP requests (usually from browsers).
A Servlet Container (e.g. Tomcat) means: It can handle servlets & JSP.
An Application Server (e.g. GlassFish) means: It can manage Java EE applications (usually both servlet/JSP and EJBs).
You should use GlassFish for Java EE enterprise applications.
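As a rough illustration of that difference, here is a minimal sketch with hypothetical names (GreetingService, GreetingServlet): the servlet on its own would run in any servlet container such as Tomcat, but the @EJB injection requires a full application server such as GlassFish.

    import java.io.IOException;
    import javax.ejb.EJB;
    import javax.ejb.Stateless;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // GreetingService.java - an EJB, managed by the application server.
    @Stateless
    public class GreetingService {
        public String greet(String name) {
            return "Hello, " + name;
        }
    }

    // GreetingServlet.java - the servlet part alone runs in a servlet container,
    // but injecting the EJB below needs an application server like GlassFish.
    @WebServlet("/hello")
    public class GreetingServlet extends HttpServlet {

        @EJB
        private GreetingService service; // injected by the container

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            resp.getWriter().println(service.greet("world"));
        }
    }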
A separate web server is mostly needed in a production environment; you would normally find an application server sufficient for most of your development needs. A web server is capable of holding a larger number of active sessions and connections, thus providing the necessary balance without performance costs.
Stick to a simple web server if you are only working with servlets/JSPs. It is also worth noting that in a NetBeans environment, GlassFish has better support than other app servers. In the context of Eclipse, though, WSAD and JBoss seem to be the preferred options.
Glassfish will soon release its modular kernel.
This means that the containers you need start up and shut down as you need them, i.e. if no EAR is deployed, the EJB container won't start up. This seems to have made it very good for development, as it can start and stop very quickly. That takes it a lot closer to development environments like Rails (where redeployment is a massive part of your development).
I have used the GlassFish server for developing web services.
It provides a very interactive Admin Console where an admin can test the web services.
I really find it helpful while developing web services.