When running JBoss 7.1 as a Windows service (or not), it occasionally takes more than one try to successfully deploy a WAR file. This is not a problem when starting JBoss manually, since restarts are easy. However, when JBoss runs as a Windows service and is restarted automatically (due to a Windows patch), JBoss itself may launch, but the WAR may not deploy.
Is there any way to make JBoss retry deploying the WAR after it fails the first time - for example, by changing a setting in standalone.xml?
There are two ways to fix your problem.
1) Go to standalone.xml (or whichever configuration file you are running), find the deployment-scanner element and add or modify the deployment-timeout attribute, which is given in seconds (see the sketch after this list).
2) Deploy your application as a managed deployment. You can do that by deploying through the admin console or via the CLI with the deploy command. The deployment is then "managed": it will always be deployed and will not go through the deployment scanner and its timeouts.
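A minimal sketch of option 1, assuming the standard deployment-scanner subsystem in standalone.xml (the namespace version and the 1200-second timeout are just examples and may differ in your installation):

<!-- standalone.xml: give slow deployments more time before the scanner gives up -->
<subsystem xmlns="urn:jboss:domain:deployment-scanner:1.1">
    <deployment-scanner path="deployments"
                        relative-to="jboss.server.base.dir"
                        scan-interval="5000"
                        deployment-timeout="1200"/>
</subsystem>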
I recommend the managed deployment approach, as the deployment scanner is not really recommended for production environments: it adds additional I/O load on the filesystem.
It is great for development/testing scenarios but should be avoided in production if possible.
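For option 2, a managed deployment can be pushed from the JBoss CLI roughly like this (controller address and WAR path are placeholders; the management port shown is the AS 7.1 default):

# connect to the running server, then deploy the WAR as a managed deployment
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=localhost:9999
[standalone@localhost:9999 /] deploy /path/to/myapp.war

Because the content is then stored in the server's content repository, it comes back up together with the server after an automatic restart instead of depending on the scanner.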
I have two Apache NiFi servers, development and production, hosted on AWS. Currently the migration between development and production is done manually. I would like to know whether it is possible to automate this process and ensure that people do not develop in production.
I thought about putting the entire NiFi configuration in GitHub and having it deploy the new NiFi on the production server, but I don't know whether that would be the correct approach.
One option is to use NiFi Registry: store the flows in the Registry and share it between the development and production environments. You can then promote the latest version of a flow from dev to prod.
As you say, another option is to use Git to share the flow.xml.gz between environments, together with a deploy script; a sketch of such a script is below. The flow.xml.gz stores the data flow configuration/canvas. You can use parameterized flows (https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#Parameters) to point NiFi at different external dev/prod services (e.g. the NiFi dev processor uses a dev database URL, NiFi prod points to the prod database URL).
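A rough sketch of such a deploy script, assuming flow.xml.gz is versioned in Git and NiFi lives under /opt/nifi on the production host (repository URL, host and paths are placeholders):

# promote the flow definition to production and restart NiFi so it picks it up
git clone https://github.com/yourorg/nifi-flows.git
scp nifi-flows/flow.xml.gz nifi@prod-host:/opt/nifi/conf/flow.xml.gz
ssh nifi@prod-host '/opt/nifi/bin/nifi.sh restart'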
One more option is to export all or part of the NiFi flow as a template and upload the template to your production NiFi; however, the Registry is probably a better way of handling this. More info on templates here: https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#templates.
I believe the original design plan behind NiFi was not necessarily to have different environments, and to allow live changes in production. I guess you would build your initial data flow using some test data in production and then once it's ready start the live data flow. But I think it's reasonable to want to have separate environments.
Application information:
Spring Cloud Data Flow Server Cloudfoundry 1.0.0.RELEASE (DIY built with Spring Cloud Config Server dependencies)
Spring Cloud Config Server
PCF Elastic Runtime 1.7.x
I'm curious about the extent to which applications and the deployer depend on the Git repo and Maven artifact repository I'm binding my SCDF instance and my Spring Cloud Config Server instance to in PCF.
My suspicion is that the Maven repo is only used at deployment time, when an artifact needs to be downloaded for installation and deployment in the PCF space. I'm also thinking the Git repo is probably cloned by the Config Server whenever an application initialization or refresh event occurs that requires re-reading the configuration information stored in Git.
Is this true, or are there ongoing dependencies that would require high availability for these external resources? My question is related to disaster recovery planning activities, and how quickly these specific resources need to be recovered for Spring Cloud Data Flow and its deployed streams to continue working under adverse conditions.
My suspicion is that the Maven repo is only used at deployment time, when an artifact needs to be downloaded for installation and deployment in the PCF space.
Yes - the applications are resolved and downloaded upon a stream deployment request, and the resolved apps are cached and reused upon redeployments.
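For illustration, apps are typically registered by Maven coordinates in the Data Flow shell, and those coordinates are only resolved against the Maven repo when a stream is deployed (app names and versions below are placeholders; shell command names may differ slightly between releases):

dataflow:> app register --name http --type source --uri maven://org.springframework.cloud.stream.app:http-source-rabbit:1.0.0.RELEASE
dataflow:> stream create --name httpIngest --definition "http | log" --deploy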
I'm thinking the Git repo is probably cloned by the Config Server whenever an application initialization
True - For a given URI of a configuration source, the server will clone the repository and make its configurations available to all the client applications bound to it.
These two capabilities are driven by the application bootstrap event. As for the Config Server, if you're running it as a service in Cloud Foundry, it's up to the platform to reliably serve the properties to the bound applications.
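A minimal sketch of the corresponding Config Server settings (the repository URI is a placeholder; cloneOnStart is optional and may not be available in older Config Server versions):

# application.properties of the Config Server
spring.cloud.config.server.git.uri=https://github.com/yourorg/config-repo.git
# clone at startup instead of waiting for the first client request
spring.cloud.config.server.git.cloneOnStart=true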
I noticed that both options are available while running JBoss, and they both recompile the project (I noticed 'make' running with both). I did see this question; the accepted answer made sense, but I wasn't sure what hot-swapping means. What is a possible example of a change that could be registered without needing to restart the server?
Your question needs more details to answer completely, but here are some basic concepts:
Hot-swapping is simply replacing the files of your project in the deployment folder of the application server (unpackaged, i.e. not the .war/.ear but all separate files). It is usually faster because the changes are immediately visible in the web application. But it is not always possible/supported by application servers, and often if you hot-swap .jar files the application server doesn't pick them up or ends up confused.
Restarting JBoss will stop all existing services (EJBs, pooling, queues, messaging...) and restart them. It is almost the cleanest way to run your application (the cleanest would be undeploy, restart and deploy).
Redeploy means your application and its services are first removed from JBoss, but other services set up at server level (messaging, pools, JMX... depends on your actual settings) stay deployed. Then the application is deployed again (copied from your dev folder or .WAR/.EAR to the JBoss webapp directory).
Typically, you can hot-swap (possibly manually) .(x)html/.jsp/.jsf/images/.js/.css files safely, as JBoss doesn't need to process them.
Changing code in Java classes deployed as .class files in WEB-INF/classes can often be hot-swapped.
Changing code in Java files deployed as a .jar will almost always need at least a redeployment. Some OSGi-enabled application servers, properly configured, are more flexible about hot-swapping a complete application (I know GlassFish does this, but I don't know what specific setting is needed).
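As a rough illustration (paths and app name are hypothetical, assuming an exploded deployment on JBoss AS 7), hot-swapping a view file is just a copy, while a class or jar change means asking the server to redeploy the whole application:

# hot-swap: copy the changed view into the exploded deployment, no redeploy needed
cp src/main/webapp/home.xhtml $JBOSS_HOME/standalone/deployments/myapp.war/home.xhtml

# redeploy: after changing classes or jars, tell the AS 7 scanner to redeploy the app
touch $JBOSS_HOME/standalone/deployments/myapp.war.dodeploy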
Finally, in development, multiple redeployments sometimes lead to memory leaks or an unstable application server (often you'll get an OutOfMemory error in the logs); then you need to clean up (undeploy, stop, start, then deploy).
When I start a WebLogic instance with a deployed application, the deployment is sometimes left in the prepared state, not the active state. I have to go to the WebLogic console and start the deployment manually, which is slow and annoyingly repetitive work. Since this is done on a development machine (sometimes 50 times a day), there are no security implications, as the server is only visible on the local network. Is there some way to have it always start the deployment as active?
Note that I'm not redeploying the application; instead I have it "constantly deployed" and stop/start the WebLogic instance using the scripts in the bin directory.
If you are running WebLogic in development mode, you can use the autodeploy folder for your app. See details here: http://download.oracle.com/docs/cd/E11035_01/wls100/deployment/autodeploy.html#wp1021620
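A minimal sketch, assuming a development-mode domain at $DOMAIN_HOME (path and WAR name are placeholders); anything dropped into autodeploy is deployed and started automatically:

# copy the application into the domain's autodeploy directory
cp target/myapp.war $DOMAIN_HOME/autodeploy/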
I think this should solve your problem.
How can I start/stop a remote Tomcat using Maven? I am using the Cargo plugin, which helps me deploy the application, but it doesn't provide the functionality to start/stop the remote Tomcat.
Indeed, you can NOT start and stop a remotely running Tomcat using Cargo; you can only deploy and undeploy your web application.
Actually, to my knowledge, there is currently nothing that allows you to do this out of the box.
As explained here, the only way to make server "A" start or stop a service like Tomcat when the request comes from client "B" is for yet another service to be available and already running on server "A". [...] and I don't know if such a service is available.
In this message, someone describes such a solution (based on a socket listener) that you could maybe use (by doing some telnet through Maven), but the message is quite old, so it's likely outdated, and the link pointing to the code seems to be dead. I didn't check the whole thread; maybe there are other ideas.
If you are using Windows, remote service sharing is another possible solution, as described here. But, again, this would require some work on your side.
From a security standpoint, it's only possible in this way...
Linux: use SCP or a script via an SSH client (PuTTY), then run '$CATALINA_HOME/bin/shutdown.sh'
Windows: use the sc command, e.g. "sc \\192.168.10.10 stop tomcat6"
Quick and clean!
You can try the Maven Tomcat plugin, or, if it does not give you everything you need, you can always use an Ant task; here is a reference on the task.
You can use the Cargo Daemon web application. It runs on the remote machine and can start/stop Tomcat for you (as well as deploy an app). You just need to configure the Cargo plugin and call the cargo:daemon-start goal. Here is the link: http://cargo.codehaus.org/Cargo+Daemon. It's easier to start with the provided Cargo Daemon archetype: http://cargo.codehaus.org/Maven2+Archetypes#Maven2Archetypes-daemon
Try this useful plugin.
Afterwards, try:
mvn tomcat:start
and
mvn tomcat:stop
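A hedged pom sketch for the old Codehaus Tomcat Maven plugin (coordinates, manager URL and the credentials server id are placeholders; note that these goals start/stop the deployed web application through Tomcat's manager, not the Tomcat process itself):

<!-- pom.xml: point the plugin at the remote Tomcat manager application -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>tomcat-maven-plugin</artifactId>
  <version>1.1</version>
  <configuration>
    <url>http://remote-host:8080/manager</url>
    <server>remote-tomcat</server> <!-- matches a <server> entry in settings.xml -->
    <path>/myapp</path>
  </configuration>
</plugin>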