I was wondering: what kinds of software and hardware components can fail in an RMI program?
For example, in a client-server model, what can fail when a client invokes an object on the server?
thanks
Here's a quick list of RMI-related failures:
Networking-related failures. This list is rather large. If this is Java on Windows, watch out for DNS-related problems; I've seen some rather nasty challenges on large corporate LANs.
Version control between client and server
I've seen scaling problems with Java RMI and the default ServerSocket handler.
In general, I'm not a big fan of Java RMI. While it takes care of some problems, its implementation in traditional enterprise networks has caused me more problems than it solves. There are usually other choices out there.
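To make those failure points concrete, here is a hedged sketch of a plain RMI client (the Greeter interface, host, and binding name are hypothetical): the lookup is where DNS and registry problems bite, and client/server version skew typically surfaces as an unmarshalling error.

    import java.rmi.Naming;
    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // Hypothetical remote interface; the server would export an implementation.
    interface Greeter extends Remote {
        String greet(String name) throws RemoteException;
    }

    public class GreeterClient {
        public static void main(String[] args) {
            try {
                // The lookup is where DNS and registry failures bite.
                Greeter greeter = (Greeter) Naming.lookup("rmi://server.example.com/Greeter");
                // Every remote call can fail at runtime with a RemoteException.
                System.out.println(greeter.greet("world"));
            } catch (java.rmi.UnmarshalException e) {
                // Typically a client/server version (serialVersionUID) mismatch.
                e.printStackTrace();
            } catch (Exception e) {
                // Networking or DNS trouble, registry down, name not bound, ...
                e.printStackTrace();
            }
        }
    }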
I'm managing a rather widely distributed software application in a semi-industrial environment. At its heart the software is based on SOA, and it employs OPC-UA for communication between important processes (on local or LAN-based machines). These processes are either servers (e.g. an outer network-management server, hardware-manager servers, etc.), clients (customer panels), or both (servers talking to each other).
OPC-UA has the following problems:
Configuring and maintaining the configurations is a hard job (just the config-file settings take a lot of time).
The security measures are more detailed than my needs call for (certificate management, and sudden invalidation of certificates on customer systems).
Modeling and networking overheads in the library make it hard to work with for my communications (high data rates usually end with the server and client disconnecting).
Unspecified and weird errors, like the UA Discovery Server stopping working or responding, etc., which I have reported to the OPC GitHub forum many times.
Troubleshooting the internal parts of OPC UA is nearly impossible.
Overall, its performance and stability are not reliable enough for me. I am willing to sacrifice features for better performance and reliability. I've even considered writing sockets from the bottom up for my inter-process communication (IPC) needs; that way I could at least trace errors to their core. Since I do not need its most advertised feature (PLC support), I'm desperate to find a good alternative. My main requirements are:
OPC-UA-like data-modeling support that enables me to provide a clean interface to customers and other teams (something like an IDL).
Publish/Subscribe, Remote Commands, Update Notifications and Node Based Behavior.
Security is not a concern, though, as my network is closed.
High performance for data rates up to 1Gbps (this could mean UDP support).
I am working entirely in the .NET Framework, so the C# support that OPC-UA offers is a great help for me.
I've looked at DDS (lacks commands and update notifications), WCF (lacks cross-platform support), and many more.
This link also mentions MQTT: Alternative to OPC-UA
What about Google's gRPC + protobufs?
https://grpc.io/
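gRPC defines services in a protobuf IDL (which covers the data-modeling requirement), has first-class C# codegen, and its server-streaming RPCs map naturally onto publish/subscribe-style update notifications. As a hedged sketch (in Java here; the Telemetry service and its messages are hypothetical, as if generated by protoc from a telemetry.proto):

    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;

    public class UpdateSubscriber {
        public static void main(String[] args) {
            // Closed network, so plaintext instead of TLS (matching the
            // "security is not a concern" requirement above).
            ManagedChannel channel = ManagedChannelBuilder
                    .forAddress("hw-manager.local", 50051) // hypothetical host/port
                    .usePlaintext()
                    .build();
            // TelemetryGrpc would be generated from a hypothetical proto:
            //   rpc Subscribe (SubscribeRequest) returns (stream Update);
            TelemetryGrpc.TelemetryBlockingStub stub = TelemetryGrpc.newBlockingStub(channel);
            // The server pushes Update messages as they occur: a
            // publish/subscribe-style stream of update notifications.
            stub.subscribe(SubscribeRequest.newBuilder().setNodeId("panel-1").build())
                .forEachRemaining(update -> System.out.println(update));
        }
    }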
This could be a stupid question, since almost everyone prefers the embedded-container technique for testing EJBs, but I have to ask because of my lack of experience.
Also, some may argue that embedded containers may not reproduce the real-life situation of deploying to a real app server.
So, when testing EJB3, why is it recommended to use embedded containers instead of a standalone container?
Thanks in advance.
Time.
Testing EJBs in full-blown application servers usually takes a lot of time, because the app server has to "spin up" whenever changes are made, so a lot of time is wasted. Because of that, embedded containers such as OpenEJB can save you a lot of time. Embedded GlassFish is also an option these days, although I haven't personally tried it.
Zero turnaround is a kind of holy grail in Java EE.
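For reference, the EJB 3.1 embeddable API makes this painless to boot: a minimal sketch (the Calculator bean and the JNDI name are hypothetical, and the name format varies between containers):

    import javax.ejb.embeddable.EJBContainer;
    import javax.naming.Context;

    // Hypothetical local business interface of a stateless bean on the classpath.
    interface Calculator { int add(int a, int b); }

    public class CalculatorBeanTest {
        public static void main(String[] args) throws Exception {
            // Boots an embeddable EJB container (OpenEJB, embedded GlassFish, ...)
            // inside the current JVM: no external server to start or deploy to.
            EJBContainer container = EJBContainer.createEJBContainer();
            try {
                Context ctx = container.getContext();
                Calculator calc = (Calculator) ctx.lookup("java:global/classes/CalculatorBean");
                System.out.println(calc.add(1, 2)); // expect 3
            } finally {
                container.close();
            }
        }
    }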
Here are the most relevant arguments that I've found. Please comment on them, or add your own reasons, about testing with embeddable containers vs. a real application-server container. Thank you.
Using an embedded-container testing technique gives you flexibility (you just need to add the new libs to the classpath). As far as I understand, if we want to be able to deliver the testing project for several application servers, we must not be bound to an application-server container in the test implementation. Some app servers use specific annotations or deployment descriptors; if those are used, you are bound to that app server.
Embedded containers are lighter, which means reduced time for running the tests. Real app servers have difficulty starting and stopping automatically, or can hang, so building a fully automated testing process around a real app server can be too difficult.
Another problem is the stateless nature of most Java EE applications. After a method invocation crosses a transaction boundary (for example, a stateless session bean), all JPA entities become detached and the client loses its state. This forces you to transport the entire context back and forth between client and server (a heavy load), and every change to the client's state has to be merged back on the server.
With an embedded container you have one process that runs everything (tests and EJBs); with a real app server you have to coordinate two processes (the app server and the tests).
For full testing you of course also need tests on a real app server, since different servers have their own particularities (class loading, for example). Embedded containers, however, help with testing the logic (unit tests and integration of units), so for daily automated testing they can be enough, and easier.
An embedded container is much faster to execute (start/stop) than a full container, which certainly affects the developer. Setup and configuration are also easier to automate, especially with continuous integration. On the other hand, since some core features are disabled in an embedded container, you can't test everything.
You may want to investigate http://www.jboss.org/arquillian to have both options. From the site:
Arquillian enables you to test your business logic in a remote or embedded container. Alternatively, it can deploy an archive to the container so the test can interact as a remote client.
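A typical Arquillian test looks roughly like this (a hedged sketch; GreeterBean and the assertion are hypothetical):

    import javax.ejb.EJB;

    import org.jboss.arquillian.container.test.api.Deployment;
    import org.jboss.arquillian.junit.Arquillian;
    import org.jboss.shrinkwrap.api.ShrinkWrap;
    import org.jboss.shrinkwrap.api.spec.JavaArchive;
    import org.junit.Assert;
    import org.junit.Test;
    import org.junit.runner.RunWith;

    @RunWith(Arquillian.class)
    public class GreeterBeanTest {

        @Deployment
        public static JavaArchive createDeployment() {
            // Arquillian deploys this archive to whichever container adapter
            // (embedded or remote) is on the test classpath.
            return ShrinkWrap.create(JavaArchive.class)
                             .addClass(GreeterBean.class);
        }

        @EJB
        private GreeterBean greeter; // hypothetical stateless bean

        @Test
        public void greets() {
            Assert.assertEquals("hello", greeter.greet());
        }
    }

The same test can then run against an embedded adapter for speed or a remote container for realism, just by swapping the adapter on the classpath.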
In the end, it depends on the kind of EJBs you want to test. Certain complex scenarios will not work in an embedded container without mocks for some external services. In my projects we test EJBs with a custom mock container we created (ultra fast and easy to use) and, if all goes well, we then test in the real thing, a full JBoss, using a remote-control API much like Arquillian's.
Hope it helps.
I am working with an electronics appliance manufacturer to embed LAN based control systems into the products. The idea is to serve up a system configuration/control interface through a web browser so clients never need to install software. We can communicate with the appliance by sending and receiving serial data through the embedded module. Since the appliance can also be controlled from a front panel UI, it creates a challenge to keep a remote web interface in sync with very low latency. It seems like websockets or some sort of Push is what we need for handling real time events from the server to clients.
I am using a Lantronix MatchPort AR embedded device server. Out of the box the unit will serve up any custom HTML and Java servlets/applets. We have the option to install a lightweight Linux distro if we need more flexibility. I am not sure how to implement any server-side apps, since the device is not running standard Apache; I believe it is using Boa.
Can anyone guide me in the right direction of how to do this?
Some general info... The WebSocket protocol (draft spec here) is a simple layer on top of TCP. What this means is that, if you already have a TCP server for your platform, implementing WebSocket support is just a matter of hours. The protocol specifies a handshake and two ways of sending data frames.
I strongly suggest you start by reading the 39-page spec.
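To illustrate the handshake step: the server answers the client's Sec-WebSocket-Key with a Sec-WebSocket-Accept value computed as base64(SHA-1(key + fixed GUID)), where the GUID is a constant defined in the spec. A minimal Java sketch:

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.Base64;

    public class HandshakeUtil {
        // Sec-WebSocket-Accept = base64(SHA-1(Sec-WebSocket-Key + magic GUID)).
        static String acceptKey(String secWebSocketKey) throws Exception {
            String magic = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"; // constant from the spec
            byte[] sha1 = MessageDigest.getInstance("SHA-1")
                    .digest((secWebSocketKey + magic).getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(sha1);
        }
    }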
As Tihauan already mentioned, start by reading the spec, and also note that there are still some ongoing changes, although WebSockets is more stable now than it was a year ago.
A key point for me was the requirement that WebSocket data be entirely UTF-8 text (the final protocol also allows binary frames), which lends itself nicely to JSON-based message definitions.
Our system runs a form of embedded Linux, so we added and made use of the following libraries:
"libwebsockets" from:
http://git.warmcat.com/cgi-bin/cgit/libwebsockets/
"jansson" from:
http://www.digip.org/jansson/
Using the above as support libraries, we created an internal lightweight "client/server" that allowed our other software modules to register for certain, applicable, websocket messages, and respond as needed. Worked great.
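The register-and-respond pattern is roughly the following (a hedged Java sketch of the idea; our actual implementation was C on top of libwebsockets and jansson, and all names here are hypothetical):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Consumer;

    // Modules register a handler per JSON message "type"; incoming websocket
    // frames are parsed and routed to whichever module asked for that type.
    final class MessageBus {
        private final Map<String, Consumer<String>> handlers = new ConcurrentHashMap<>();

        void register(String type, Consumer<String> handler) {
            handlers.put(type, handler);
        }

        void dispatch(String type, String jsonPayload) {
            Consumer<String> h = handlers.get(type);
            if (h != null) h.accept(jsonPayload);
        }
    }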
Good luck and best regards,
I'm a bit late, but Mozilla posted a guide entitled "Writing WebSocket servers", which literally guides you through writing a websocket server.
You will need to already know how HTTP works and have medium programming experience. Depending on language support, knowledge of TCP sockets may be required. The scope of this guide is to present the minimum knowledge you need to write a WebSocket server.
https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API/Writing_WebSocket_servers
There seems to be a lot of enmity against DCOM, and I'm curious to understand why. For a company still writing to the Win32 SDK using C++, is there any real reason not to use DCOM in current or future development? Is some future version of Windows not going to support it? Is it too fragile, failing to work often? Is it too complicated to implement compared to other technologies? What's the deal?
The security model, especially when computers are not in the same domain (or aren't in a domain at all).
Automation interfaces were modeled for Visual Basic (the original, not .NET); they are obsolete and not pretty to use from other languages.
If you only want to develop in C++ and deploy on a controlled network, it may still be a good choice.
I dislike COM/DCOM because "Catastrophic failure" is the most unhelpful error message in the history of error messages.
Well, DCOM is a distributed version of COM, and COM is very complex by itself; it's very easy to do something wrong unintentionally (see this recent question and the answer to it for examples). With DCOM you have even more ways to hurt yourself.
Other than that, it works and is, for example, a good way to host in-proc COM components in a separate process.
If you're trying to build a client-server application and want the communication to cross network boundaries (for example, the Internet), then DCOM can be problematic due to firewalls.
I worked on a very successful server application that was distributed using DCOM. We let the system handle most of the complexity by creating COM+ Server Applications and exporting Application Proxies. In this case it worked very well, as long as all of our versions were synced up.
I implemented a large system using DCOM in the late '90s. Although it worked pretty well, there were a couple of issues. For starters, it uses unpredictable, dynamically allocated port numbers for communication, which makes firewall configuration painful. It is not scalable, and you are much better off using WCF than DCOM.
I think momentum has shifted to SOAP and other web-service technologies because:
it is easier to deploy systems in the presence of firewalls
there is no vendor lock-in
I've never used DCOM myself, so I can't really comment on its general quality or fitness.
I have two Java programs, each running in its own JVM instance. Can they communicate with each other using an IPC technique like shared memory or pipes? Is there a way to do it?
Yes; D-BUS and Pipes are both easy to use, and cross-platform. D-BUS is useful for general message-passing IPC, and pipes for sending bulk data.
You can also open a TCP or UDP socket on localhost, if you need to support multiple clients connecting to a central server.
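For example, a minimal loopback TCP exchange between two JVMs might look like this (the port number is arbitrary; run one JVM with the argument "server" first, then the other without it):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.InetAddress;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class LoopbackIpc {
        public static void main(String[] args) throws Exception {
            if (args.length > 0 && args[0].equals("server")) {
                // JVM #1: listen on loopback only, so nothing off-machine can connect.
                try (ServerSocket server = new ServerSocket(9090, 0, InetAddress.getLoopbackAddress());
                     Socket peer = server.accept();
                     BufferedReader in = new BufferedReader(new InputStreamReader(peer.getInputStream()))) {
                    System.out.println("received: " + in.readLine());
                }
            } else {
                // JVM #2: connect to the same port and send a line.
                try (Socket s = new Socket(InetAddress.getLoopbackAddress(), 9090);
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println("hello from the other JVM");
                }
            }
        }
    }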
I also found an implementation of Unix domain sockets in Java, though it requires JNI.
http://java.sun.com/javase/technologies/core/basic/rmi/index.jsp
Java Remote Method Invocation (Java RMI) enables the programmer to create distributed Java-to-Java applications, in which the methods of remote Java objects can be invoked from other Java virtual machines, possibly on different hosts. RMI uses object serialization to marshal and unmarshal parameters and does not truncate types, supporting true object-oriented polymorphism.
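As a minimal sketch of the server side (all names hypothetical): a remote interface extends Remote and declares RemoteException on each method; an implementation is exported and bound in a registry for clients to look up:

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    // Hypothetical remote interface.
    interface Clock extends Remote {
        long now() throws RemoteException;
    }

    public class ClockServer implements Clock {
        public long now() { return System.currentTimeMillis(); }

        public static void main(String[] args) throws Exception {
            // Export the object so the RMI runtime can marshal calls to it,
            // then bind it in a local registry under a well-known name.
            Clock stub = (Clock) UnicastRemoteObject.exportObject(new ClockServer(), 0);
            Registry registry = LocateRegistry.createRegistry(1099); // default RMI port
            registry.rebind("Clock", stub);
            System.out.println("Clock server ready");
        }
    }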
Sure. Have a look at RMI, or a shared-memory concept like JavaSpaces.
Use a memory-mapped file (a MappedByteBuffer obtained from FileChannel.map in Java NIO) for sharing memory between processes.
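A minimal sketch of the writer side, assuming both JVMs agree on a file path and layout (a reader maps the same file and calls getInt(0); real use needs some coordination, such as a flag word or file lock):

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class SharedMemoryWriter {
        public static void main(String[] args) throws Exception {
            // Both JVMs map the same file; the OS page cache acts as the shared memory.
            try (RandomAccessFile file = new RandomAccessFile("/tmp/ipc-demo", "rw");
                 FileChannel ch = file.getChannel()) {
                MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
                buf.putInt(0, 42); // visible to any other process mapping this file
            }
        }
    }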
There is a fairly new initiative for language-agnostic IPC of columnar (i.e. array-based) data from Apache called Plasma.
As of yet (Sept '17) there are no JVM bindings, but as the project is backed by the likes of Spark, I think it won't be long before we see an implementation.
My understanding, however, is that it is not a general IPC system: it is geared toward sharing arrays of primitives like double and long for scientific computing, rather than classes/objects; though I could be wrong here.
On the plus side, it's also language-agnostic, so you could use it to communicate with another (non-JVM) runtime. However, the OP did ask for Java IPC so this could be irrelevant.