I understand what Elasticsearch is, but I have no clue how to write a plugin for it. Can anyone give me guidelines for writing Elasticsearch plugins?
Found.no (an Elasticsearch hosting service) has a very good write-up on the Elasticsearch plugin development process. It's as detailed as I've seen out there and is fairly recent (Sept 2013), so it should be reasonably up to date. If I were going to build a plugin from scratch, that's where I would start:
https://www.found.no/foundation/writing-a-plugin/
The other approach is to dig around in existing plugins on GitHub:
https://github.com/mobz/elasticsearch-head
https://github.com/elasticsearch/elasticsearch-river-twitter
A list of other plugins:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-plugins.html
Between the tutorial and the source code of existing plugins, you should have a solid foundation.
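To give a rough idea of what the tutorial walks you through, here is a minimal sketch of a plugin class for the 1.x-era Elasticsearch the write-up targets. The package and class names are made up for illustration, and the exact packaging (e.g. the es-plugin.properties descriptor that points at the plugin class) is covered in the tutorial:

    // Minimal 1.x-era Elasticsearch plugin skeleton (illustrative names only).
    package org.example.esplugin;

    import org.elasticsearch.plugins.AbstractPlugin;

    public class ExamplePlugin extends AbstractPlugin {

        @Override
        public String name() {
            // Identifier reported by the node and the plugin manager.
            return "example-plugin";
        }

        @Override
        public String description() {
            return "A do-nothing plugin that only shows the skeleton.";
        }
    }

From there the plugin can register custom modules, REST handlers, analyzers, and so on, which is where the tutorial and the existing plugins linked above are most useful.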
It depends on what you want to do with your plugin!
For example, you can write a plugin in Java that sends data to Elasticsearch. You can do whatever you want there.
I think a plugin gives you freedom: you can use the Elasticsearch APIs or not, and you can add new functionality when your plugin (written in Java, for example) is connected to Elasticsearch.
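If by "send data" you mean indexing documents from Java, a minimal sketch using the 1.x-era TransportClient looks roughly like this (the host, port, index, and type names are placeholders, not anything from the question):

    import org.elasticsearch.client.Client;
    import org.elasticsearch.client.transport.TransportClient;
    import org.elasticsearch.common.transport.InetSocketTransportAddress;

    public class IndexExample {
        public static void main(String[] args) {
            // Connect to a local node over the transport protocol (ES 1.x API).
            Client client = new TransportClient()
                    .addTransportAddress(new InetSocketTransportAddress("localhost", 9300));
            try {
                // Index one JSON document into a placeholder index/type.
                client.prepareIndex("myindex", "mytype")
                        .setSource("{\"field\":\"value\"}")
                        .execute()
                        .actionGet();
            } finally {
                client.close();
            }
        }
    }

A plugin runs inside the node, so it would typically use an injected Client rather than opening a TransportClient, but the indexing calls look the same.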
I'm trying to build a simple JavaFX application in the IntelliJ environment using Gradle and OSGi, but I couldn't find a simple working example anywhere.
Most of the solutions I've found are not Gradle-based, rely on additional tools, are outdated and simply don't run, or import some magical "hack code" from GitHub, etc.
The tools I've found for a similar purpose, e.g. e(fx)clipse and bndtools, are probably not relevant if I use IntelliJ. Moreover, the bndtools tutorial is very wordy and I couldn't find a good starting point or quickstart to try those things out.
I know the basics of Gradle and OSGi, and according to the information I've found, it does not seem to be an easy task.
Are there any (good) tutorial(s) or quickstart(s) about how to start this kind of project properly? A simple working example would be very useful.
The e(fx)clipse project is a good starting point and provides many useful features for using JavaFX and OSGi.
See http://www.eclipse.org/efxclipse/index.html.
The blog of one of the developers also has many useful tips: https://tomsondev.bestsolution.at/.
I've used the DCEVM hotswap technology in the Eclipse and IntelliJ IDEs, and it is a pretty cool feature. Using DCEVM in the IDE, I can change the source code, i.e. add/remove/edit methods, classes, and properties at runtime without restarting the program.
Now my question is:
I want the same capability in my running application, which is launched without any IDE. To be more specific, the running application's source code (compiled code) can change on the fly. How do I deploy that bytecode to DCEVM for runtime hotswapping?
What I've found is:
We can do hotswapping without an IDE by writing our own JNI code to hook directly into JVMTI and trigger a hotswap.
Any idea/help would be much appreciated. Thanks.
Fortunately, I found the solution. We can use the HotSwapper plugin to solve this kind of problem. The same question was asked in the DCEVM discussion forum:
https://groups.google.com/forum/#!topic/hotswapagent/Uk3cUdkHNYQ
The information at https://news.ycombinator.com/item?id=3198497 is very helpful, but that question was asked four years ago, which is where I got stuck.
DCEVM has become much smarter now. It supports various plugins such as Hotswapper, AnonymousClassPatch, WatchResources, Hibernate, Spring, Jersey2, Jetty, Tomcat, ZK, Logback, JSF, Seam, ELResolver, and OsgiEquinox, and we can even write our own plugins, which are quite easy to develop.
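If you want to trigger the swap yourself rather than relying on the HotswapAgent plugins, you don't necessarily need hand-written JNI/JVMTI code: the standard java.lang.instrument API can push new bytecode into a running VM. Below is a rough sketch (class and path names are placeholders); on a stock HotSpot JVM redefinition is limited to method-body changes, and DCEVM is what lifts that restriction:

    import java.lang.instrument.ClassDefinition;
    import java.lang.instrument.Instrumentation;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    // Packaged as a Java agent; the jar manifest needs
    // "Premain-Class: SimpleRedefineAgent" and "Can-Redefine-Classes: true".
    public class SimpleRedefineAgent {

        private static volatile Instrumentation instrumentation;

        // Called by the JVM when the agent is loaded via -javaagent:... at startup.
        public static void premain(String args, Instrumentation inst) {
            instrumentation = inst;
        }

        // Replace the bytecode of an already-loaded class with a freshly
        // compiled .class file from disk (e.g. produced by recompiling a source file).
        public static void redefine(Class<?> target, String classFilePath) throws Exception {
            byte[] newBytecode = Files.readAllBytes(Paths.get(classFilePath));
            instrumentation.redefineClasses(new ClassDefinition(target, newBytecode));
        }
    }

Your application (or a small file-watcher thread) can then call SimpleRedefineAgent.redefine(...) whenever it notices a recompiled class file, which is roughly the job the HotSwapper plugin automates for you.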
I have just recently come across graph databases and Tinkerpop.
I am somewhat confused about how/what to install to use TinkerPop 2.5.0/2.6.0. Does it have to be installed on each database separately (as you would a plugin), or can I set it up once and then use it to access the different supported databases?
My goal is to use it to try out 2 (possibly more) different databases (mainly Neo4j and OrientDB or perhaps Titan) and be able to query them using Gremlin.
How you use TinkerPop is entirely dependent on what you intend to do with it. If you are just getting started, I suggest you simply download the Gremlin distribution, unpackage it and start the console with bin/gremlin.sh. Working in the REPL will help you learn quickly as the feedback time for trying things out is basically instantaneous. Even as your Gremlin code makes its way to production, you will find the Gremlin Console to be a good friend as it provides a way to try out ideas before committing them to code. It also provides a mechanism for maintaining/administering your database with Gremlin.
If you intend to use TinkerPop in a JVM-based application then you will want to use a dependency management tool like Maven and reference the appropriate TinkerPop dependencies you'd like to use. Alternatively, I suppose you could try to manually manage the dependencies by downloading them individually from Maven Central and adding them to your path (though I wouldn't recommend that for obvious reasons). I guess my point in suggesting that is just to make it clear that the TinkerPop library is just a set of jars that can be included in your JVM development tools like any other.
How you work with a particular database depends on the one that you choose, but again the process is little different from what I described above. Neo4j is packaged with the Gremlin Console, so you can work with it right away there. For OrientDB, you will want to copy its dependencies into the Gremlin Console path (i.e. the /lib directory). If you are building an application, then Maven is again your friend: you simply reference the Neo4j or OrientDB Maven coordinates and all required dependencies will come with them.
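To make the application path concrete, here is a short sketch against the TinkerPop 2.x Blueprints/Gremlin-Java APIs, assuming the blueprints-neo4j-graph dependency is on the classpath (the database path is a placeholder; swap in OrientGraph and its dependency for OrientDB):

    import com.tinkerpop.blueprints.Graph;
    import com.tinkerpop.blueprints.Vertex;
    import com.tinkerpop.blueprints.impls.neo4j.Neo4jGraph;
    import com.tinkerpop.gremlin.java.GremlinPipeline;

    public class GremlinJavaExample {
        public static void main(String[] args) {
            // Open (or create) an embedded Neo4j database at a placeholder path.
            Graph graph = new Neo4jGraph("/tmp/example-neo4j-db");
            try {
                // Create a tiny graph: alice -knows-> bob.
                Vertex alice = graph.addVertex(null);
                alice.setProperty("name", "alice");
                Vertex bob = graph.addVertex(null);
                bob.setProperty("name", "bob");
                graph.addEdge(null, alice, bob, "knows");

                // The same traversal you would type in the Gremlin Console,
                // expressed with the Gremlin-Java fluent API.
                GremlinPipeline<Vertex, Object> friendNames =
                        new GremlinPipeline<Vertex, Object>(alice).out("knows").property("name");
                for (Object name : friendNames) {
                    System.out.println(name);
                }
            } finally {
                graph.shutdown();
            }
        }
    }

Because both Neo4jGraph and OrientGraph implement the same Blueprints Graph interface, the traversal code stays the same when you switch databases; only the line that opens the graph changes.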
Some implementations, like Titan, have separate prerequisites (e.g. install cassandra or hbase). In those cases, you will need to refer to their documentation for specifics on how to set them up.
All that said, if you are just getting started, I recommend that you look into TinkerPop3. It is the next major line of development for TinkerPop and quite different from its previous incarnations. It does not yet have all of the implementations in play, but database vendors are at work to bring them online. Everything I wrote about TinkerPop 2.x "installation" above generally applies to TinkerPop3; however, the TinkerPop3 Gremlin Console does have a plugin system that can make it a little easier to bring in external dependencies, so you don't have to deal with them manually.
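For a feel of how different TinkerPop3 is, here is roughly the same traversal against the in-memory TinkerGraph, written with the current Apache TinkerPop 3.x coordinates (which post-date this answer, so treat it as a sketch):

    import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
    import org.apache.tinkerpop.gremlin.structure.Graph;
    import org.apache.tinkerpop.gremlin.structure.Vertex;
    import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph;

    public class Tp3Example {
        public static void main(String[] args) {
            // TinkerGraph is the in-memory reference implementation.
            Graph graph = TinkerGraph.open();
            Vertex alice = graph.addVertex("name", "alice");
            Vertex bob = graph.addVertex("name", "bob");
            alice.addEdge("knows", bob);

            // Traversals are built from a GraphTraversalSource rather than a pipeline.
            GraphTraversalSource g = graph.traversal();
            g.V().has("name", "alice").out("knows").values("name")
                    .forEachRemaining(System.out::println);
        }
    }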
I am working on an Erlang project which uses Google Protocol Buffers via https://github.com/basho/erlang_protobuffs
After some time I don't have such a good impression of it (I've found using this technology in Erlang very clumsy and inconvenient). But of course, I know that this is because I haven't figured out how to use it properly.
Which open source Erlang projects are using erlang_protobuffs? I am interested in best (or at least sufficient) practices for its usage.
I assume that you mean the http://github.com/basho/erlang_protobuffs library.
Among major open source projects, I know of only Basho's Riak using this library, although a GitHub code search turns up a lot of different projects.
Note that this library is not the only one; take a look at this post.
Where can I get an XD (cross-domain) version of the Dojo source like the one hosted on Google? What I want to do is host the Dojo source on my local CDN, and my custom Dojo modules in my web application. Is this a good practice, or might I as well just include the Dojo source in my web app and run a custom build?
Thanks,
You can build an XD version of Dojo from the source code.
Here are instructions on how to do it:
http://dojotoolkit.org/reference-guide/1.7/quickstart/custom-builds.html
See the section on "doing xdomain builds".
In our organization (a large one), we have a CDN version of Dojo deployed on an internal CDN, mainly because some of our web apps are not allowed to access the extranet (firewall issues).
For performance, though, a custom build gives the biggest boost since it is customized to the modules you need/use; once the custom build is done, you only need to ship a single compressed JS output file and a small number of supporting files.
When doing your custom build, you can use xdDojoPath and loader=xdomain if you wish to use cross-domain Dojo to load your optimized JS; see http://osdir.com/ml/cometd-users/2011-08/msg00050.html for some notes on this.
Also see related SO question: Dojo on a CDN vs own install
The good news is that with Dojo 1.7+ and the new loader, you don't have to do anything special for a cross-domain build (good answer above from @Vijay Agrawal, but I think that reference guide link may need some updating for 1.7). Just write your code to the new AMD format, use async: true, run the build tools to create layers, and deploy them on any server. AMD makes use of callbacks and many of the tricks the old Dojo xd builder used to employ, but in a much simpler way.
To support older code, there is a legacy cross domain mode mentioned in the loader docs.