Every time I want to test a Java project written with the Eclipse IDE, I need to upload the .jar file (which is heavy) to a testing environment via SSH, and that process takes almost 15 minutes. It's uncomfortable to test any piece of code there because I have to wait that long every time. The problem is that the upload speed cannot be increased (the connection belongs to the client). How can I reduce my deployment time for this kind of task?
Testing Server: Red Hat Enterprise Linux Server release 6.5
Does the jar contain an embedded app server like Tomcat, or is it deployed to an application library path?
If you can decompose the module and split part of it into a separate jar (since jar creation is within your control), you can keep the jar that changes frequently small and transfer only that one, which makes the upload much quicker.
I am evaluating AWS Device Farm for running mobile web tests. These are the steps I am anticipating:
1. Create sample tests (Java)
2. Package them as a zip file
3. Go through the AWS Device Farm console and upload the test zip only
4. Manually select the configuration and other settings
5. Manually execute the test and evaluate the results
The things I need help with are:
a) What if the tests need some changes? Do I need to go through the JAR package creation for every run? Can I run the tests from my IDE and, only once everything works, package and upload them to AWS Device Farm?
b) To do (a), I noticed they have an API to automate steps 3-5 and run the tests, but I am wondering whether there is an easier way to do it.
The steps you've listed are the correct sequence of tasks that need to be performed to run tests on a device. With AWS Device Farm, you have to perform the extra step of uploading the tests and application to the service. As you stated, every time you change your tests, you will need to rebuild the JAR and upload it. Most customers set up a continuous build/integration pipeline using a tool like Jenkins to perform this task automatically.
If you are running in Android Studio, you can use the Device Farm Gradle plugin, which will do the work for you.
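Regarding (b): if you are not in Android Studio, the same steps 3-5 can also be scripted from Java against the Device Farm API. Below is only a rough sketch using the AWS SDK for Java 1.x - the ARNs are placeholders, and the upload/test type strings assume an Appium Java TestNG web test, so adjust them (and verify the exact SDK method names against your SDK version) for your setup:

```java
import com.amazonaws.services.devicefarm.AWSDeviceFarm;
import com.amazonaws.services.devicefarm.AWSDeviceFarmClientBuilder;
import com.amazonaws.services.devicefarm.model.CreateUploadRequest;
import com.amazonaws.services.devicefarm.model.ScheduleRunRequest;
import com.amazonaws.services.devicefarm.model.ScheduleRunTest;
import com.amazonaws.services.devicefarm.model.Upload;

public class DeviceFarmRunner {

    public static void main(String[] args) {
        AWSDeviceFarm client = AWSDeviceFarmClientBuilder.defaultClient();

        // Step 3: register the test package upload (placeholder project ARN).
        Upload upload = client.createUpload(new CreateUploadRequest()
                .withProjectArn("arn:aws:devicefarm:us-west-2:123456789012:project:EXAMPLE")
                .withName("tests.zip")
                .withType("APPIUM_WEB_JAVA_TESTNG_TEST_PACKAGE"))
                .getUpload();

        // The returned upload carries a pre-signed S3 URL; PUT the local
        // tests.zip to upload.getUrl() with any HTTP client, then poll the
        // upload until its status is SUCCEEDED.

        // Steps 4-5: schedule the run on a device pool using the uploaded package.
        client.scheduleRun(new ScheduleRunRequest()
                .withProjectArn("arn:aws:devicefarm:us-west-2:123456789012:project:EXAMPLE")
                .withDevicePoolArn("arn:aws:devicefarm:us-west-2:123456789012:devicepool:EXAMPLE")
                .withTest(new ScheduleRunTest()
                        .withType("APPIUM_WEB_JAVA_TESTNG")
                        .withTestPackageArn(upload.getArn())));
    }
}
```

Wrapping something like this in a small command-line tool or a Jenkins job gives you the build-upload-run loop without touching the console.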
I would like to speed up the compile/build process when working on a Maven-based Vaadin 8 app in IntelliJ 2017.1, as well as avoid chewing up my flash-based storage needlessly, by writing intermediate and final products to a RAM drive.
How can I configure IntelliJ or my project to use a RAM disk?
I am currently running the Vaadin app in the built-in Jetty servlet container via the included Maven task. In the future I expect to have IntelliJ coordinate with a separate installation of the Tomcat servlet container.
Some perusing revealed:
Project Structure > Project > Project compiler output
Project Structure > Modules > Paths (tab) > Compiler output
Project Structure > Artifacts
Are any or all of those required to be redirected to the RAM disk?
Is there a faster, easier, or simpler way to configure output to a ram disk? Perhaps some trick with the Maven POM file?
By comparison, many iOS/macOS developers using Xcode take this approach, diverting their DerivedData folder to a RAM disk to speed up compilation.
I am looking at creating a Jython application and deploying it via Java Web Start.
My query relates to a concern that, for Web Start deployment, we have to distribute the standard Jython jar along with our application jar.
That is what I gather from all the web resources, and the concern is that it will make the application's download time significantly longer, as the Jython jar is nearly 9 MB.
If any of you have deployed a Jython app through Web Start, can you clarify whether we need to bundle the Jython jar along with our application files, or only the application files in a standalone jar (which would solve my problem)?
Regards
Shyam
OK, as I figured out, I have to package the Jython jar along with the application jar to make it work.
The reason is that the application jar contains Python code, which the client JVM has no way to interpret unless it uses the Jython jar.
From what I hear, Jython currently has no support for compiling Python code into standalone Java classes. Until that is possible, the Jython jar has to be included.
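To make the runtime dependency concrete: the Python sources inside your application jar are executed through Jython's interpreter classes, which live in the Jython jar. A minimal sketch (the embedded script here is made up):

```java
import org.python.util.PythonInterpreter;

public class JythonWebStartLauncher {

    public static void main(String[] args) {
        // PythonInterpreter is provided by the Jython jar; if that jar is not
        // on the Web Start classpath, the bundled .py sources cannot run.
        PythonInterpreter interpreter = new PythonInterpreter();
        interpreter.exec("print 'application started via Java Web Start'");
        // A real launcher would exec or import the application's .py modules
        // packaged inside the application jar.
    }
}
```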
I would like to know what your Hadoop development environment looks like.
Do you deploy jars to test cluster, or run jars in local mode?
What IDE do you use and what plugins do you use?
How do you deploy completed projects to be run on servers?
What other recommendations do you have for setting up my own Hadoop development/test environment?
It's extremely common to see people writing Java MR jobs in an IDE like Eclipse or IntelliJ. Some even use plugins like Karmasphere's dev tools, which are handy. As for testing, the normal process is to unit test business logic as you normally would. You can unit test some of the surrounding MR infrastructure using the MRUnit classes (see Hadoop's contrib). The next step is usually testing in the local job runner, but note there are a number of caveats here: the distributed cache doesn't work in local mode, and you're single-threaded (so static variables are accessible in ways they won't be in production). The next step (and most common test environment) is pseudo-distributed mode - all daemons running, but on a single box. This runs code in different JVMs with multiple tasks in parallel and will reveal most developer errors.
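For example, a mapper unit test with MRUnit's MapDriver looks roughly like the sketch below (WordCountMapper is a hypothetical mapper that emits a (token, 1) pair per word):

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mrunit.mapreduce.MapDriver;
import org.junit.Before;
import org.junit.Test;

public class WordCountMapperTest {

    private MapDriver<LongWritable, Text, Text, IntWritable> mapDriver;

    @Before
    public void setUp() {
        // Wraps the mapper under test; no cluster or local job runner needed.
        mapDriver = MapDriver.newMapDriver(new WordCountMapper());
    }

    @Test
    public void emitsOneCountPerToken() throws Exception {
        mapDriver.withInput(new LongWritable(0), new Text("hadoop hadoop"))
                 .withOutput(new Text("hadoop"), new IntWritable(1))
                 .withOutput(new Text("hadoop"), new IntWritable(1))
                 .runTest();
    }
}
```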
MR job jars are distributed to the client machine in different ways. Usually custom deployment processes are seen here. Some folks use tools like Capistrano or config management tools like Chef or Puppet to automate this.
My personal development is usually done in Eclipse with Maven. I build jars using Maven's Assembly plugin (it packages all dependencies in a single jar for easier deployment, at the cost of fatter jars). I regularly test using MRUnit and then pseudo-distributed mode; the local job runner isn't very useful in my experience. Deployment is almost always via a configuration management system. Testing can be automated with a CI server like Hudson.
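To give an idea of what goes into the assembled jar, a typical driver uses the Tool/ToolRunner pattern, so the exact same jar can be pointed at the local job runner, a pseudo-distributed cluster, or production purely via the configuration passed on the command line. This is a generic sketch against the newer MapReduce API; WordCountMapper and WordCountReducer are hypothetical:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() already contains any -conf/-D/-fs/-jt overrides parsed by
        // ToolRunner, which is what lets the same jar run locally or on a cluster.
        Job job = Job.getInstance(getConf(), "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new WordCountDriver(), args));
    }
}
```

On the pseudo-distributed box, running it is then just a matter of `hadoop jar <assembled-jar> WordCountDriver <input> <output>`.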
Hope this helps.
This is my second question on Bamboo (My First One). My understanding after reading the suggested info is that I need a build tool, like NAnt or MSBuild, to write a script that gets the source code and builds it (I am working on a .NET 3.5 project with Silverlight). Afterwards, when deploying, I need to write scripts to move my files to the different servers. Please tell me whether I am going in the right direction or not. Can I use Ant, Maven, or bash scripts to do the same with a .NET project?
Yes, that is true:
Bamboo is the central management server which coordinates all work
Bamboo itself has interfaces and plugins for lots of types of work
Bamboo basically needs to first get your source from a source repository (lots of plugins here for a variety of systems)
Then it needs to do the build - that can be done by using MSBuild to build your Visual Studio solution, or it could be a batch file that calls your XYZ compiler and linker to create your app - whatever it is you have and use
Once your solution or project is built, you have "artifacts" (build results, e.g. executable app, config files, etc.) lying around
With those results, you can do additional things:
zip them up into a ZIP file and copy them somewhere
run an installer builder on them and create an MSI
install them on a test server to make sure everything installs just fine
The sky's the limit! :-)
But in general: Bamboo is just the "orchestrator" - the coordinator. The actual work is done either by direct Bamboo plugins (of which there are plenty) or by calling external command-line apps from a Unix shell script or Windows batch file.
Marc