Subclipse - Repository Exploring Fetching Children Takes Forever - eclipse-plugin

I recently got the latest version of Eclipse Indigo for Java Developers - then installed Subclipse using the update site: http://subclipse.tigris.org/update_1.6.x
Now when I go to the SVN Repository Exploring perspective (my repository location has already been added) and expand the repository, it says "Pending..." and the tasks view shows "Fetching children of xxxx...".
Note that I tried both the JavaHL and SVNKit SVN interfaces. Sometimes it does finally show me the directory (after a few minutes), but other times it just sits there forever (one time I let it sit overnight).
Most of the time it never shows anything, no matter how long I let it sit. I know this has worked in the past, though.
I even created a separate installation with the Subversive plugin installed instead of Subclipse. That one seems to be much faster. However, I find that synchronizing with Subversive is painful because it tries to refresh the synchronization any time I do anything. Subclipse seemed to work a lot better for synchronizing, so I would really like to get it working.
Here is a copy of my installation:
Apache Ivy 2.2.0.final_20100923230623
Apache IvyDE 2.1.0.201008101807-RELEASE
Eclipse IDE for Java Developers
Eclipse Java Web Developer Tools 3.3.0.v201103310009-7F7AFO-C25TohFunnht_0yz0s92kZCb4ufuz0TLG
Eclipse Web Developer Tools 3.3.0.v201102200555-7O7IFhJEMiB5vNMYta56_GonLeahqrwnYjv2mBz-
JavaScript Development Tools 1.3.0.v201103031824-7F78FXPFBBoPbXRIcIgs0z0
JNA Library 3.2.7
PHP Development Tools (PDT) SDK Feature 3.0.0.v20110516-1100-77--84_23JBVgSVXO7XGJz0VLa9O
Subclipse (Required) 1.6.18
Subversion Client Adapter (Required) 1.6.12
Subversion JavaHL Native Library Adapter (Required) 1.6.17
Subversion Revision Graph 1.0.9
SVNKit Client Adapter (Not required) 1.6.15
SVNKit Library 1.3.5.7406

Create an old Play 2.3.1 project (current is 2.4.3)

Problem
I'm trying to create a Play 2.3.1 project because of the lack of info on how to get started with 2.4.3. Apparently so much has changed that the tutorials on YouTube are useless, and I can't get it to work.
Question
How do I do this?
I have tried going to https://www.playframework.com/download#older-versions but all versions yield the same link, https://downloads.typesafe.com/typesafe-activator/1.3.6/typesafe-activator-1.3.6-minimal.zip,
which installs the newest Play Framework, 2.4.3.
Please tell me someone knows how to do this.
Also, why should I bother using 2.4.3 over 2.3.1 if I'm only creating a simple mobile app with a database? Security reasons, or is it just "easier"?
Same question for IntelliJ 14 over IntelliJ 13.
https://www.playframework.com/download#older-versions is the link you need.
When you're new to Play! it can be quite confusing so I think a bit of terminology is needed.
SBT - Scala build tool. This is a build tool that is baked into every Play! project but totally independent of the Play! framework, i.e. many Scala projects use it to manage their builds without ever using Play! It's just the Scala equivalent of Maven, Gradle or Ant. Nothing special.
Activator - This is Play!'s command line, like a build-tool++. It's a command-line tool with a superset of the SBT commands (clean, compile, etc.), plus Play-specific ones like new and run. It actually amounts to not much more than a script (.sh/.bat) which bootstraps SBT with some extra goodness for running Play commands. In earlier versions like 1.x this command was named play. Version 2.x was practically a rewrite, so you can ignore all advice relating to 1.x.
Play - the Play framework itself is just a regular jar (and all its dependencies). It is declared in project/plugins.sbt.
So the reason all the download links point to activator-1.3.6 is that 1.3.6 is just the version of the command-line tool. It will default to the latest Play version: 2.4.x.
When you perform an activator new you get a choice of templates. If you REALLY REALLY want to use 2.3.x, you can choose the hello-play-2_3-scala template when prompted.
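For what it's worth, the version pin ends up in the generated project/plugins.sbt. A minimal sketch, assuming a 2.3.1 project (this is the standard Play sbt-plugin coordinate; the template writes the line for you):

    // project/plugins.sbt
    // The sbt-plugin version is what pins the Play version for the whole project.
    addSbtPlugin("com.typesafe.play" % "sbt-plugin" % "2.3.1")

Upgrading later is then largely a matter of bumping this one version and following the migration notes.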
But I don't suggest you do that because:
The documentation for 2.4.x is comprehensive and there are walkthrough guides; it won't take any longer than a YouTube video.
There are bug fixes and new features in 2.4.x
2.4.x introduced dependency injection, which means it will be harder to upgrade once you've developed everything in 2.3.x.
Apart from dependency injection, most stuff works the same in 2.4.x.
IntelliJ:
Use 14. Play support is improving all the time. If you can, use the Early Access Program and the latest version of the Scala plugin.
Don't run activator idea - it's deprecated. File -> Open Project from IntelliJ should be enough.

How to ensure an Eclipse plugin has its required bundles available?

I'm just starting to develop a new eclipse plugin where I want a web application server running in Eclipse. I found a nice blog, OSGi as a Web Application Server, that describes how to do this. The author suggests creating a target environment for my bundle requirements, and some of those bundles get pulled in from the Equinox Project SDK (now called Equinox Target Components in Juno). I notice that the tutorial project runs fine when my target platform is the platform I created in the tutorial, but fails to start when it is the default platform. So, now for my question...
If I need bundles that are not part of the default, how will my plugin project get access to those bundles? Will I need to deploy them along with my plugin? How would I know if the user's eclipse does or does not already have those required bundles?
You weren't very clear about what kind of application you are developing. Running a web server inside the Eclipse IDE as a plugin doesn't make sense to me; this kind of server application is best run directly on top of Equinox.
Anyway, the right path is to create a "Product Configuration" file and add the categories that contain the needed bundles (go to File / Plug-in Development / Product Configuration).
With this file you can run an instance of the product (inside the IDE) and export it (creating a zip containing all the needed bundles).
And if you want to let your users install the plugin inside their IDE, you must create a p2 repository (using a Target Definition file) and expose the exported directory via an HTTP server. You could look into Tycho to build these kinds of components Maven-style.
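As for knowing whether the user's Eclipse already has the required bundles: your plugin declares them in its MANIFEST.MF, and p2 resolves them at install time - pulling them from your repository if they are not already present, or refusing to install with an explicit error. A minimal sketch, with bundle names picked to match the web-server scenario rather than taken from the tutorial:

    Manifest-Version: 1.0
    Bundle-ManifestVersion: 2
    Bundle-SymbolicName: com.example.webapp.plugin;singleton:=true
    Bundle-Version: 1.0.0.qualifier
    Require-Bundle: org.eclipse.equinox.http.jetty,
     org.eclipse.equinox.http.registry,
     javax.servlet

If a required bundle is not part of the standard install (as with the Equinox HTTP bundles here), include it in your p2 repository so the resolver can pull it in alongside your plugin.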
Well, I'm not sure that re-inventing the wheel yet again is really a good idea.
You might take a look at Pax Web for inspiration on how to do it, or have a look at Apache Karaf as an OSGi container (it uses Pax Web). Or, even better, start contributing to one of the two :-)

IntelliJ Datanucleus Enhancer plugin not working

The project I'm developing uses DataNucleus 2.0.3, so I'm using those libraries for enhancement (the plugin is configured to use the module dependencies as well). IntelliJ version 12.0.1 on an Ubuntu 12.04 machine. I know 2.0.3 is ancient history, but upgrading it, at least for now, is not an option for me.
From Gradle everything works fine. I imported my project into IntelliJ, and when I ran the tests from JUnit I got the usual ClassNotPersistenceCapableException, so I recalled that I need a plugin for this.
I installed the newest plugin (I tried both the beta and the last stable version) and configured it to enhance this one module. I chose JDO and applied; it discovered all the classes annotated for persistence. I rebuilt the whole project, ran the tests again, and the same error occurred.
Some things I've noticed / checked:
- the Enhancer is ticked in "Build / Datanucleus Enhancer"
- I looked for multiple DataNucleus jars, but there is only one
- I haven't seen any message in the IntelliJ Event Log saying it has done the enhancing (the Gradle enhancer logs such a message)
- I haven't seen any error messages in IntelliJ saying enhancement failed, and I didn't find any log files outside IntelliJ either (should there be any?)
- when I manually added the Gradle-built classes at the top of the test classpath, the tests passed - but that is no good
- the module has the following DataNucleus 2.0.3 jars on its classpath: datanucleus-core, datanucleus-enhancer, datanucleus-connectionpool, datanucleus-rdbms, plus asm-3.1.jar (the dependency range says 3.0-4.0, so this one should fit)
I have no idea why it sees the classes but doesn't enhance them - or maybe it tries and silently fails, but then I don't know how to diagnose the problem.
No other ideas come to mind - please advise what to check or what to try.
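For what it's worth, one quick way I know to check whether enhancement actually ran: an enhanced class implements javax.jdo.spi.PersistenceCapable. A small sketch (the entity class name is a placeholder for one of your own annotated classes):

    import javax.jdo.spi.PersistenceCapable;

    public class EnhancementCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder name - substitute one of your own persistence-annotated classes.
            Class<?> entity = Class.forName("com.example.model.MyEntity");
            boolean enhanced = PersistenceCapable.class.isAssignableFrom(entity);
            System.out.println(entity.getName() + " enhanced: " + enhanced);
        }
    }

Running this against the IntelliJ-built classes and then against the Gradle-built ones would at least confirm whether the plugin is touching the class files at all.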

Hadoop CDH4 and Eclipse Juno

Has anyone been successful in building an Eclipse plugin for Juno against a CDH4 installation?
I've seen CDH3 all over the net. Looking for CDH4.
Thanks much.
I'm not sure if you're referring to the Hadoop Eclipse plugin or a plugin to develop code against CDH4. I'll answer both questions.
Developing against CDH4 in Juno:
By far the easiest way to write applications against CDH4 components in Eclipse (any version) is to use m2eclipse[1] and add the Cloudera Maven repository to your pom.xml. In fact, a significant portion of folks at Cloudera (including myself) do this regularly. Recently, one of our engineers (Natty) wrote a nice blog post about getting started with CDH4, Maven, and Eclipse[2] (and other IDEs). Otherwise, nothing special is required to write apps against CDH4 other than having the JARs around. You can also browse the Cloudera Maven repository here[3].
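For illustration, the pom.xml additions look roughly like this - a sketch only; the repository URL and the artifact version follow CDH4's usual scheme, but check Natty's post[2] for the exact current values:

    <repositories>
      <repository>
        <id>cloudera</id>
        <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
      </repository>
    </repositories>

    <dependencies>
      <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.0.0-cdh4.0.0</version>
      </dependency>
    </dependencies>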
The Hadoop Plugin:
Long ago, a plugin for Eclipse existed that allowed MR job execution and some other bits. It has, however, been unmaintained for a very long time (at least two to three years now). I don't think anyone ever updated it to work with CDH4, let alone Juno itself.
Hope this helps.
[1] http://bit.ly/UUGmlB
[2] http://bit.ly/O6rkp6
[3] http://bit.ly/UUGwcC
I followed the instructions found at: http://iredlof.com/part-4-compile-hadoop-v1-0-4-eclipse-plugin-on-ubuntu-12-10/
System: Local: Windows 7, Eclipse Juno (4.2.2), hadoop-1.2.1. Remote: Debian 7.1 with the same hadoop version.
I should mention that I built the plugin against vanilla hadoop-1.2.1 freshly downloaded from apache.
Not everything works with the plugin: I can add a new MR location (remote in my case) and I can browse/upload/download/delete files in DFS, but I cannot run my code (using Run As ... Run on Hadoop). The console writes "ClassNotFoundException: WordCountReducer".
A good thing is that the jar generated by Eclipse can be manually uploaded to the MR master and started from the command line.
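In case it helps anyone hitting the same error: that ClassNotFoundException usually means the job jar is not shipped to the cluster when submitting from Eclipse, and job.setJarByClass(...) in the driver is the usual fix. A minimal, self-contained sketch against the Hadoop 1.x API (class and path names are generic, not taken from the asker's project):

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        public static class WordCountMapper
                extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context ctx)
                    throws IOException, InterruptedException {
                StringTokenizer it = new StringTokenizer(value.toString());
                while (it.hasMoreTokens()) {
                    word.set(it.nextToken());
                    ctx.write(word, ONE);
                }
            }
        }

        public static class WordCountReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> vals, Context ctx)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : vals) sum += v.get();
                ctx.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = new Job(new Configuration(), "word count");
            // The crucial line for remote execution: it tells Hadoop which jar
            // to ship to the cluster so the mapper/reducer classes can be found.
            job.setJarByClass(WordCount.class);
            job.setMapperClass(WordCountMapper.class);
            job.setReducerClass(WordCountReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

This is also consistent with the command-line route working: hadoop jar registers the jar for you, so the classes are found on the remote side.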
You can get a Hadoop Eclipse plugin from this GitHub repository: https://github.com/winghc/hadoop2x-eclipse-plugin
This post describes how to integrate CDH5 and Eclipse Luna: http://speedy-elephant.blogspot.com/2015/08/the-real-getting-started-guide-cloudera.html

Maven, Hudson and Dynamic Clearcase Views

This followed on from a question asking whether Apache Maven and IBM Rational ClearCase integrate well. I thought I should write up what I found out - it will require various edits, but I hope to eventually get round to adding it all.
Environment
ClearCase - Version 7.0.1.2 of ClearCase.
Maven - All of them, from the Maven website.
Hudson - Version 1.307 downloaded straight from the Hudson website
Questions
Does Maven run from a VOB?
I installed all the versions of Maven2 into a VOB 'stacked', i.e. I added version 2.0, labelled it and locked the label, then added 2.0.1 on top.
To prevent extraneous files being left around, I used the -rmname flag in clearfsimport.
This way, I could simply use a label to specify the version of Maven I wanted access to in my configuration spec, but still keep the same path for the maven executable - /maven/bin/mvn.
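As an illustration, the relevant lines of a config spec would look something like this (the path pattern and label name are invented for the example):

    element .../maven/... MAVEN_2_0_1
    element * /main/LATEST

Changing the label changes the Maven version selected, while the /maven/bin/mvn path stays the same.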
Once all the versions were installed, I had no problem running Maven from there via a dynamic view. Repositories are downloaded from an internal installation of Nexus to the user's home directory as normal - and this avoids any problems with checking in and out.
A benefit of keeping the tool in source control is that you can set company-wide settings (such as pointing to an internal repository), then run that single instance of Maven from the VOB on any platform, which retains the settings you originally set!
In Maven projects, I only kept the src directory and the pom.xml in source control, as everything else can be auto-generated afterwards.
Does Hudson work with ClearCase?
I had no problem setting up Hudson to run with ClearCase Dynamic Views. All it took was a symlink from the working directory for Hudson to the root of the view (in this case /view/xxx). The ClearCase plugin successfully ran ct lshistory to find if there had been any changes in the integration branch that developers merge into.
I did write a small script to set-up the initial environment for a job - just the config.xml and dynamic view symlink - so that the correct view was listed in the job and the initial settings were correct. Any enhancements by the users afterwards were then changes to the default template, rather than them setting it up themselves.
In the overall settings of Hudson, I used the $CLEARCASE_VIEW environment variable to set the path to the Maven executable. That way, the version of Maven depended on the version set in the configuration specification - rather than the one they selected within Hudson.
This saves extra administration both for me (the admin) and for my users.
What Internal Repository Manager did you use?
I set up Sonatype Nexus to be the Internal Repository Manager - primarily because I read in the Sonatype blog that Hudson was going to get more integrated with Nexus, and we may as well be prepared for new enhancements in the future. I also believed, when I got it set up and tried it, that it was more prepared for a large commercial environment because you could tune the groups within the repository manager to be more flexible - useful for a great number of projects.
I have some Maven repositories outside of ClearCase, as a referential for some third-party libraries.
But I have never used Maven with ClearCase, since they follow a different logic (Maven needs versioned file names, like myfile-1.2.jar, whereas ClearCase stores only myfile.jar and records the fact that it is labeled version 1.2).
That may have changed with the Maven2 ClearCase plugin reported by romaintaz, but there are still some bugs in this new product, as shown by this thread, when one runs it a second time without undoing the checkout (unco) of the pom file: Maven gets through the checkout fine but is not able to complete whatever the next step is.
[INFO] Checking out file: /opt/viewstore/common/maven/my_lf_ss/vobs/test_alm/LF_Build/pom.xml
[ERROR] BUILD FAILURE
[INFO] Unable to enable editing on the POM
Provider message:
The cleartool command failed.
Command output:
cleartool: Error: Element "/opt/viewstore/common/maven/my_lf_ss/vobs/test_alm/LF_Build/pom.xml" is already checked out to view "my_lf_ss".
I am not using this SCM myself, but there is a Maven 2 plugin called SCM that handles ClearCase.
I had a team building with Maven 2 and using Clearcase as the version control system. We used Archiva as the repository for built artifacts so the development team did not need to use the SCM plugin.
However, the continuous integration server was Continuum, and it relied on the SCM information in the POM. We had problems with the ClearCase SCM grabbing snapshot views using our branching strategy. One of my developers had to tweak the ClearCase SCM code to get it to work with our branches. We both moved on before we got round to contributing his fix.