ServiceMix -> NetBeans OpenESB?

I've picked up a project that needs to import some (old) JBI components that were developed using ServiceMix about three years ago. I need to bring these into a modern GlassFish environment. So far, it's not very clear what I should do or how I should do it. Any tips or pointers?
My worst case scenario is to wrap the JBI component call in a POJO class, stripping out the ServiceMix bits, to see if that will at least get the gears spinning again.
I note elsewhere that the JBI code in ServiceMix is apparently not JBI certified, so maybe that's an indication this whole idea is a non-starter.
TIA!
Andrew

I would think, re-think, and then re-think once more before moving or importing anything into OpenESB. The OpenESB project has basically been left without funding since Oracle bought Sun.

I did one better. I wrote my own data flow processing system from scratch. Everything else available was just too heavyweight and complex.
My new system, code-named LightRail, works great. All connectivity is component-driven and defined through a single JSON configuration file; all processing and flow control is handled through a single BeanShell script.
I've already deployed 10 different data flows in the last 10 months, connecting to IMAP, SFTP, FTP, flat files, and a database or two. Life is good again...
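Purely for illustration (LightRail is my own private system, so the keys and component names below are invented for the example, not the real schema), a flow definition in that spirit might look roughly like this:

    {
      "flow": "invoice-import",
      "components": [
        { "type": "imap", "host": "mail.example.com", "folder": "INBOX" },
        { "type": "sftp", "host": "files.example.com", "path": "/incoming" },
        { "type": "jdbc", "url": "jdbc:postgresql://db.example.com/erp" }
      ],
      "script": "flows/invoice-import.bsh"
    }

The components declare the endpoints; the referenced BeanShell script decides what moves where and when.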
Andrew

Software testing with version control

So, as a developer, you probably write a small amount of code and then test it to see if it works before you move on to something else. This is because you don't want to write thousands of lines of code and then find that they don't work. Stating the obvious here. So I and a few others (soon) are working on a PHP application where I want to implement some form of version control, most likely Subversion, since we all know how to use it, somewhat. My question is: how do I fit version control into that write-then-test process?
My idea was to set each developer up with their own workstation including a web server, PHP/MySQL, etc., so they can check out the repo and then test on their own computer as they write. I'm really looking for some direction here. Currently we aren't using version control: there are only two developers, and we simply use a shared directory that's located on the web server. When we make changes, we can view them immediately on the web server. Any input on this? What's the best way to handle multiple developers during the development of an application?
There are a number of different ways to approach this:
1) Each developer has a whole web server stack on their machine, deploys to it, and tests there, then checks in working code.
2) There's a separate test/integration machine. Developers take turns deploying to that machine, do their testing, then check in working code.
3) You use branches in Subversion. Development happens on a branch, and it's OK to check in broken code on a branch. There may be a branch for each developer, or a branch for each feature, or whatever. The developer checks in code onto the branch, checks it out on the separate test machine, tests, fixes, then checks the working code onto the trunk.
Which one is right depends on how big your team is and how complex your server setup is. Choose one that makes sense for your team.
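For option 3, a rough sketch of the Subversion commands involved (assuming a standard trunk/branches layout and Subversion 1.5+ merge tracking; the URLs and branch names are placeholders):

    # create a branch for the feature (or for the developer)
    svn copy https://svn.example.com/repo/trunk \
             https://svn.example.com/repo/branches/feature-login \
             -m "Create branch for the login feature"

    # point your working copy at the branch and commit freely there
    svn switch https://svn.example.com/repo/branches/feature-login
    # ...edit, commit, deploy to the test box, repeat...

    # once it works, fold the branch back into trunk
    svn switch https://svn.example.com/repo/trunk
    svn merge --reintegrate https://svn.example.com/repo/branches/feature-login
    svn commit -m "Merge the login feature into trunk"

Broken intermediate commits stay on the branch; trunk only ever receives the tested merge.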
You'd need to start thinking about a build server, using a piece of software like CruiseControl which monitors for source code changes and is then able to build, run tests, and deploy your code (in a manner as close to live-like as possible!).
I'd highly recommend integrating a build server as soon as possible, otherwise you'll find out down the line that automating something that somebody has been doing manually for the last 5 years is somewhat difficult :)
You might also find that each dev ends up with their own deployment method and custom environment; it'd be far better to centralise this in one place and then have the other devs use those scripts if they want to run the same process locally.
Configuration management is something you want to get right to begin with!
One important thing with CI: You only want to push working code to the central repository. This requires a private repository for each developer, but has the advantage that you never break trunk.
Git and Mercurial are the most obvious tools, and can work with svn as a central repository.
There's one trick to avoid merge conflicts and keep broken code out of central: always pull/merge from central first, and do it frequently, prior to pushing:
http://martinfowler.com/bliki/FeatureBranch.html
And have a look at our sponsor: http://hginit.com for examples of workflows with multiple developers.
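As a rough sketch of that pull-before-push habit with Mercurial (the same idea works with git; the repository URL is a placeholder):

    # bring in everyone else's work first, and do this often
    hg pull https://hg.example.com/central
    hg merge                 # or `hg update` if there is nothing to merge
    hg commit -m "Merge central into my work"

    # run the build/tests locally, and only then publish
    hg push https://hg.example.com/central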

How to create an online rebol console?

Where can I find the code for creating an online Rebol console like the one here?
http://tryrebol.esperconsultancy.nl/
Update: for the sandbox system on the server, can't Rebol manage it itself with some security wrapper and its security options?
As for the console itself, I don't know Ruby, so I don't want to use TryRuby, and why would I need it? Can't I mimic the Rebol console itself by "remoting" it somehow? Why can't RT or Esper Consultancy make an open-source version? There's no value in keeping it closed source. Rebol needs to prove it's more open than in the past.
In my opinion, you should aim higher with something like the already open-sourced Try Ruby. You'd type in expressions and it would guide you. Their showcase site is at tryruby.org and is fairly slick.
I modified TryRuby to work with Rebol and it wound up looking like this:
But I'm not going to run it on my server because I didn't want to belabor the necessary sandboxing/etc. or protections against someone running an infinite loop. I can give you what I've got so far if you want it.
I started a tutorial script here that no one seemed interested in helping me with, so I wandered off to other tasks:
http://www.rebol.net/wiki/Interactive_tutorial_script
I'm not sure what exactly you want. You mention you want a remote REBOL shell instead of a tutoring setup, but that's what the Try REBOL site is. There are several reasons it's not open source:
It's in heavy development. I'm currently changing the code regularly.
So it's not in a release state. Preparing it for release, documenting and publishing it would take a lot of extra work, as with most projects.
It's written in my CMS that's also in heavy development. Even if the Try REBOL site were open source, it wouldn't run. The CMS is not planned to be open sourced soon.
It's not meant as a generic REBOL remoting tool, but as a one-off demo site. If that site is running, what's the use of more of them?
As others have answered, there are many generic solutions for remoting that you could use. Also, most parts of the Try REBOL site are readily available as open source:
Syllable Server, produced and published by us.
The Cheyenne web server.
The HTML source of the web client can be viewed, including my simple JavaScript command service bus.
Syllable Server is an essential part of the site, as the sandboxing is not done with REBOL facilities (except some extra limits in the R3 backend), but with standard Linux facilities.
A truly air tight (do I mean silica tight?) sandbox is close to impossible with R2.
R3 (still in alpha) is looking a lot more promising. The deep technical discussions in flight right now (see CureCode and the AltME/REBOL3 Proposals group regarding unwinds and protect, which even occasionally mention sandboxes) should lead to an excellent sandbox capability.
Right now, the big advance that makes Kaj's tryREBOL possible is R3's secure policy settings, which allow an alpha/demo sandbox to be constructed (with some careful wrapper code).
To answer your precise question ("where can I find the code..."), you could try asking Kaj for his :)
I'm new to StackOverflow. I'm not sure if this is going to end up as a reply to your comment, or as a new answer.
The somewhat common idea that any project can be open sourced and contributed to by others is a naive view. In the case of my Try REBOL site, it makes no sense. It's not just in heavy development; it's written in a CMS that's also in heavy development. Basically no one could contribute to it at this point, because I'm the only one who knows my CMS, or at least its newest features, which I develop by developing Try REBOL and other example sites. So developing Try REBOL means developing the CMS at the same time, and by definition, I'm the only one who can do that.
More generally, my projects are bleeding-edge, innovative technology with a strong vision. The vision is mine, and to teach it to others, I have to build it to show how I intended it to work. So there's a catch-22: to enable others to contribute, I have to finish my projects first, because people typically don't understand them until I show them how they work.
There certainly are other projects where mass contribution makes more sense. Still, only the top projects get the contributors. We found that out the hard way. We created Syllable Desktop and Syllable Server with surrounding infrastructure for contributions. These are fairly classic, well understood operating systems that many people could work on in parallel. However, despite years of begging, we get very few contributions.
So, if you feel a burning need to contribute to our projects, please pick one of the many tasks in Syllable to execute. :-)

How can multiple developers efficiently work on one force.com application?

The company I work for is building a managed force.com application as an integration with the service we provide.
We are having issues working concurrently on the same set of files due to the shoddy tooling that is provided with the force.com Eclipse plugin. If 2 developers are working on the same file, one is given a message that he can't save; once he merges, he has to manually force the plugin to push his changes to the server and click through 2 'Are you really sure?' prompts.
Basically, the tooling does a shoddy job of merging in changes and forces minutes of work every time the developer wants to save if another person has modified the file he's working on.
We're currently working around this by basically 'locking' individual files by letting co-workers know who is editing a file.
It feels like there has got to be a better way in this day and age. Does anyone know of a different toolset we could use, process we could change, or anything we can do to make this easier?
When working with the Force.com platform my current organisation has found a number of different approaches can work depending on the situation. We all use the Eclipse Force.com plugin without issues and have found the following set ups to work well.
We have a centralised version control system that we deploy from, using a series of ant commands, to a developer org instance. Then, depending on the scope of the work, we either separate it off into chunks, with each developer having their own development org and merging and testing the changes regularly, or we work together in a single development org (which, if you have 2 developers, should be no major problem), allowing you to have almost instant integration.
If you are both trying to work on the same file you should be pair programming anyway, but if working on two components of a similar system together, sharing the same org can allow you to develop in a fast and flexible manner by creating the skeleton of the system you wish to use and then individually fleshing out the detail.
I have used both methods extensively and, as I say, they work really well depending on the situation.
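If you go the "central version control plus ant deploys" route, the deploy step is typically done with the Force.com Migration Tool (ant-salesforce.jar). A minimal sketch, with credentials, target names and paths as placeholders:

    <!-- build.xml: assumes ant-salesforce.jar is available to Ant -->
    <project xmlns:sf="antlib:com.salesforce" default="deployToDevOrg">
      <target name="deployToDevOrg">
        <!-- pushes the metadata under ./src (with its package.xml) to the org -->
        <sf:deploy username="${sf.username}"
                   password="${sf.password}"
                   serverurl="https://login.salesforce.com"
                   deployRoot="src"/>
      </target>
    </project>

Each developer can then run the same target against their own development org after updating from version control, which keeps the orgs reproducible from one source of truth.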
Each developer could work in a separate development sandbox (if you have Enterprise Edition, I think 10 sandboxes with full config and a limited amount of data are included in the fee?). From time to time you would merge your changes (a diff tool from any version control system should be enough) and test them in an integration environment. The development -> integration -> system test -> QA -> production chain can be useful for other reasons too.
Here's a separate trick to consider if, for example, 2 guys work on the same trigger. I learned it on the "DEV 401" course for developers.
Move all your logic to classes. Seriously. They will be simpler to unit test too.
Add a custom field (multi-select picklist) to the User object. Values should correspond to each separate feature people are working on. It can hold up to 500 values, so you should be safe.
For the User account of developer 1, set "feature1" in the picklist; set "feature2" for the other guy.
In the trigger, write an if that tests for the presence of each picklist value and calls (or skips) the relevant class. This wastes 1 query, but you can be sure that only the code you want will be called (see the sketch below).
Each developer keeps on working in his own class file.
For an integration test of both features, simply set the multi-select to contain both values.
I found this trick especially useful when the other guy's code turned out to be non-optimal and ate too many resources. I just disabled his feature on my user account and kept on working.
This trick can, to some extent, be applied to Visualforce pages too (if you can divide them into components).
If you don't want to waste the query, use some logic like "user's first name contains X" ;)
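As a rough sketch of steps 3 and 4 (the object, field and handler class names are hypothetical; multi-select picklist values come back semicolon-separated):

    trigger OpportunityRouter on Opportunity (before insert, before update) {
        // Enabled_Features__c is the hypothetical multi-select picklist on User;
        // reading it is the one extra query mentioned above
        User me = [SELECT Enabled_Features__c FROM User
                   WHERE Id = :UserInfo.getUserId() LIMIT 1];
        Set<String> enabled = (me.Enabled_Features__c == null)
            ? new Set<String>()
            : new Set<String>(me.Enabled_Features__c.split(';'));

        if (enabled.contains('feature1')) {
            Feature1Handler.run(Trigger.new);   // developer 1's logic, in its own class
        }
        if (enabled.contains('feature2')) {
            Feature2Handler.run(Trigger.new);   // developer 2's logic, in its own class
        }
    }

The trigger stays a thin router, and each developer only ever edits his own handler class.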
We had/have the exact same problem. We have a team of 10 devs working on a force.com application that has loads of Apex classes (>300) and VF pages (>300).
We started using the Eclipse plugin but found it:
too slow: working outside of the USA, each save takes > 5 seconds
too prone to merge issues with a team of 10 developers
Next we tried developing in our own individual sandboxes and then merging code. This is OK for a small project, but when you have lots of files and need changes to be pushed between sandboxes it becomes impossible to manage, as the only thing worse than force.com's development tooling is force.com's deployment/build tools. No automation; it's all manual. There's no easy way to move data between sandboxes either.
Our third approach was to just edit all our VF pages and Apex code in the browser (not using the embedded editor that shows up in the bottom half of the page, because that is buggy and slow, but the regular editor under Setup > Develop > Apex Classes). This worked OK. To supplement this, we also had a scheduled job that would download all our code and save it into our SVN repository, and we built a tool that allows us to click a folder on our desktop, zip its contents, and deploy it as static resources.
However, this approach still has its shortcomings: it is slow and painful to develop in the cloud, and their (Salesforce's) idea of Development as a Service is crazy. Also, we have no real SCM; it only acts as a backup.
The bottom line is that force.com is a CRM and not a development platform. If you can, run, flee, get away from it as fast as you can. Using it for anything apart from a CRM is more trouble than it is worth. Even their slogan "No Software" makes me laugh every time.
I'm not familiar with force.com, but couldn't you use source control and pull all the files down from force.com into your repository? Then you could all do your work and merge your changes back into the mainline. Then, whenever necessary, push the mainline up to force.com?
Take a look at the "Development Lifecycle Guide: Enterprise Development on the Force.com Platform". You can find it on developer.force.com's documentation page.
You might want to consider working on separate static resources and pages, and then just being careful when editing objects, classes, etc. If most of your development happens in client-side code (pages, static resources, Lightning components/apps), you might be interested in this project: https://github.com/bvellacott/salesforce-build . In any case, I strongly suggest using version control; if not on a server, then at least locally on your machine, in case your peers overwrite your work.

Rich Internet application solutions

I am evaluating Rich Internet application solutions to use in our next project. I have heard of the following solutions:
Adobe Flex
extJS
Jboss Richfaces
IceFaces
Oracle ADF
JavaFX
Silverlight
GWT
I want to know if there are more solutions available.
I would appreciate if you can provide any valuable feedback on the above solutions.
IT Mill Toolkit is a "server-driven" framework built on top of GWT.
Comment: coming from a heavy PHP and Java-hostile background, I found the Toolkit very pleasing to use pretty quickly. Being able to write nothing but (strongly typed, nicely OO-oriented) Java is nice, considering that what you change in the code is almost instantly reflected in what you see in the browser.
It's a bit tricky to set up, but IT Mill has an Eclipse plugin that supposedly helps with that. The only thing is that the plugin itself is a tad unintuitive to use :)
0.02€
Reply to comment: The biggest difference between GWT and IMT is that GWT operates entirely inside the browser (a hostile/exploitable environment with e.g. Firebug), while IMT uses GWT only to render the server-side state. So, while you can edit any values you want in the browser with both GWT and IMT, GWT will happily accept the user-edited variable values, whereas IMT keeps track of the values server-side and doesn't allow any discrepancies between the client and server.
Another big difference is that GWT widgets need to be compiled every time you make any changes to them, using the relatively time-consuming GWT cross-compiler (which compiles Java to JavaScript). IMT, on the other hand, only needs to be redeployed to the servlet container and the changes are there, because the GWT widgets inside IMT don't need to be recompiled. With Tomcat, it's virtually instantaneous (i.e. as soon as Tomcat notices that Eclipse has recompiled the classes on the fly).
#the_drow: Not being familiar with Dijit, here's an answer: Dojo is JavaScript only, meaning it's client-side only. Vaadin (née IT Mill Toolkit) lives partly on the server side too (it calls itself "server-driven"), so you can't hack the client side just by changing JavaScript variable values. There's a chart that compares Vaadin with other comparable products. Dojo isn't included, but jQuery is, which is vaguely similar to Dojo.
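To make "server-driven" concrete, here's a minimal sketch in the Vaadin 6-era API (class names from memory, so treat the details as approximate): the click handler below runs on the server, and the browser only renders the resulting state.

    import com.vaadin.Application;
    import com.vaadin.ui.Button;
    import com.vaadin.ui.Label;
    import com.vaadin.ui.Window;

    // all UI state lives server-side; the client is just a renderer
    public class HelloApplication extends Application {
        @Override
        public void init() {
            final Window main = new Window("Hello");
            setMainWindow(main);

            Button button = new Button("Click me");
            button.addListener(new Button.ClickListener() {
                public void buttonClick(Button.ClickEvent event) {
                    // executed on the server; the browser receives the new UI state
                    main.addComponent(new Label("Handled on the server"));
                }
            });
            main.addComponent(button);
        }
    }

Tampering with the JavaScript side only affects the rendering; the authoritative component tree never leaves the servlet container.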
I had an experience with Spring Web Flow + RichFaces, with mixed results: the time to get results on screen is really short, but it's a pain to fine-tune the presentation part.
I.e. if you are building some tech-oriented/backend/standard GUIs it's OK; if you are going to build a frontend used by millions of web 2.0-ers, you end up messing with the presentation part (CSS/JavaScript) big time.
After evaluating and reading about various RIA solutions, I finally selected GWT and GWT-Ext. I see these benefits for me and my team:
We are used to Eclipse so this is an advantage.
Ability to use Java Debugger in Eclipse is extremely helpful.
GWT hosted mode in Eclipse, so no compile and deploy required on every change.
Big developer community
Lots of ready to use components
Prior Java knowledge helpful
Similar to Swing, and the team has worked on Swing in an earlier project.
Look and feel is also good
Maven support is also available.
Can write JUnit test cases.
No other language knowledge required apart from Java.
Easy RPC configuration based on annotations.
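On the last point, here's a minimal sketch of what annotation-based GWT RPC wiring looks like (the service and method names are made up for the example; the server side would be a servlet extending RemoteServiceServlet, mapped in web.xml):

    // --- shared: OrderService.java ---
    import com.google.gwt.user.client.rpc.RemoteService;
    import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

    @RemoteServiceRelativePath("orders")   // relative URL of the RPC servlet
    public interface OrderService extends RemoteService {
        String findOrderStatus(String orderId);
    }

    // --- client: OrderServiceAsync.java (the async twin GWT requires) ---
    import com.google.gwt.user.client.rpc.AsyncCallback;

    public interface OrderServiceAsync {
        void findOrderStatus(String orderId, AsyncCallback<String> callback);
    }

    // --- somewhere in client code (needs com.google.gwt.core.client.GWT) ---
    OrderServiceAsync service = (OrderServiceAsync) GWT.create(OrderService.class);
    service.findOrderStatus("42", new AsyncCallback<String>() {
        public void onSuccess(String status)    { /* update the UI */ }
        public void onFailure(Throwable caught) { /* show an error */ }
    });

No XML endpoint wiring beyond the servlet mapping; the annotation plus the generated proxy do the rest.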

Managing ABAP Source Code in Source Control

Our product currently spans a large number of technologies, including Java, PL/SQL, VB.Net and ABAP. We have a fairly mature source control and build system set up for all of the languages except ABAP, which is still in the stone ages. Since SAP has a build system set up within it, our engineers do all of their development in an SAP environment, export transports, and check those into source control. Since we support a number of SAP versions, it becomes very difficult to track versions and migrate code across 4.6, 4.7, 5.0, etc.
My ideal process would be to check the ABAP code into source control in text files, and then load it into SAP and generate the transports as part of the build process. The SAP engineers don't think there are tools to support this model.
If you are managing ABAP code in a source control system, what does your process look like? Are there tools available (preferably command-line) for loading ABAP code into SAP? How do your engineers manage the code/test/debug cycle? Do they code in SAP and then export the code when finished, or edit in an external editor?
I used SAPlink (mentioned in a previous answer) for that purpose. There is also a related project called "zake" that supposedly can automate some of the tasks, but I never used it. I simply exported my code manually to so-called slinkees (they contain single objects like function groups; nuggets, on the other hand, contain several objects).
Reasons to use some external source control system:
correlation with non-ABAP source code (as our software consisted of .NET and ABAP code)
hosting/maintaining SAP was not something we were exactly good at, so it was good to know you had your code in a safe place
One thing though: you need at least WAS 6.20 in order to use SAPlink.
I'm interested as to what the benefit is of version control outside the ABAP stack of the SAP system.
I've never seen anyone use external source code control for ABAP, as it's built right in. I've never seen anyone code ABAP outside the SAP system either. It really doesn't fit the model.
SAP's ABAP stack is a single-development system environment. All the developers log on to the one system and develop there. The system records versions automatically, and groups changed objects into transports. A transport is just a list of changed objects. Once you export the transport, the version numbers are incremented for each object and you get the package for the other systems.
The ABAP stack also doesn't really have a "build" concept as such. Everything you do is a patch.
Also check out SAP CTS+, which is used for managing transports and version control of ABAP- and Java-based components.
https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/e0249083-c0ab-2a10-78b8-b7a7854b1070
At the very least, modifications should be done and tested in an SAP development system. Nobody uses an external editor with ABAP. (SAP Java on the other hand...) There is no reason why you can't keep backups of the SAP code, either directly, as text files, or, (preferably) with SAPLink or transport dump files. (Ask your BASIS people about the transport files). Realize that if you go the text file route, you might miss out on things like field text, etc., which are stored elsewhere in the database.
Hi,
As Dom told you, SAP has its own version management. However, in order to make regular saves between transport releases, you might use tools like:
SAPlink (as Wili aus Rohr said)
ZAPLink
These tools can be used to extract ABAP components into XML. I really do not advise making automatic imports into SAP, for many reasons:
these tools come with no guarantees
not every ABAP component can be handled this way
you will lose SAP's support if you do this on a productive SAP system
But it might be interesting to use tools like (Google Code) to display software changes in detail, which can be more complicated with ABAP Objects.
I developed this on the ZAPLink framework, with a ZAPLINK_EXTRACTOR program that exports SAP components into XML when they have changed. This prevents the XML file from changing (new file but same content) and being detected as a change by tools such as Mercurial.
Hope it helps.
Keep in mind that you should use SAP tools to change SAP components. An SAP consultant can explain it to you in detail.
Taryck.
[http://www.steria.com Steria (France)]
The 2020 answer to the question is simply: Use AbapGit.
It gives you all the advantages of modern version control, is fully documented, open source and works like a charm.