I am working on an application to perform CRUD operations on an LDAP directory.
This web application uses the IdentityStore class to communicate with the LDAP.
Now, some operations, like periodically switching a role from one user to another or deleting users, need to be schedulable.
So my first idea was to use BPEL services to connect a database (which records which operation to execute and when) to the LDAP.
I know it can be done, but I have no clue how BPEL works.
Is there another way? Keep in mind this project will be deployed on a server alongside other applications, so it needs to be lightweight.
The solution I ended up using:
With my limited knowledge of BPEL, I created some services to which I added embedded Java code that uses an internal library I wrote in Java (built on the IdentityStore module) to communicate with the LDAP.
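For the scheduled operations themselves, the library boils down to ordinary directory calls. A rough sketch, using plain JNDI instead of the IdentityStore wrapper; the host, DNs and the "role" attribute below are made-up placeholders:

```java
// Rough sketch only: plain JNDI instead of the IdentityStore wrapper.
// All hosts, DNs and attribute names are placeholders for illustration.
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.BasicAttribute;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.ModificationItem;

public class LdapScheduledOps {

    private DirContext connect() throws NamingException {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389");          // placeholder
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=admin,dc=example,dc=com");     // placeholder
        env.put(Context.SECURITY_CREDENTIALS, "secret");                       // placeholder
        return new InitialDirContext(env);
    }

    /** Move a role attribute from one user entry to another. */
    public void switchRole(String fromUserDn, String toUserDn, String role) throws NamingException {
        DirContext ctx = connect();
        try {
            ctx.modifyAttributes(fromUserDn, new ModificationItem[] {
                new ModificationItem(DirContext.REMOVE_ATTRIBUTE, new BasicAttribute("role", role))
            });
            ctx.modifyAttributes(toUserDn, new ModificationItem[] {
                new ModificationItem(DirContext.ADD_ATTRIBUTE, new BasicAttribute("role", role))
            });
        } finally {
            ctx.close();
        }
    }

    /** Delete a user entry. */
    public void deleteUser(String userDn) throws NamingException {
        DirContext ctx = connect();
        try {
            ctx.destroySubcontext(userDn);
        } finally {
            ctx.close();
        }
    }
}
```

The BPEL services only decide *when* to run; methods like these do the actual directory work.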
Note:
I found that you can add an LDAP adapter to the composite (right click -> Insert -> LDAP) to connect to it directly. I'd advise you to research that if you want to do this in a 'cleaner' way.
I added the @loopback/health component to my LoopBack 4 server, but I don't understand what it bases itself on to decide that my server is up. I searched https://loopback.io/doc/en/lb4/Health.html#add-custom-live-and-ready-checks and Google, but I can't find any information about how it works.
Thanks for any insight!
Without any additional custom checks configured, @loopback/health only sets up a Startup Check that tracks when the REST server (which is a LifeCycleObserver) is started and shut down. This is useful for infrastructure with existing tooling that consumes health endpoints (e.g. Kubernetes, Cloud Foundry), or if the LoopBack 4 project does more than run a REST server.
It is still an experimental package, and there are intentions to expand its scope to cover other LifeCycleObservers of the LoopBack 4 app, such as DataSources.
I need to install WSO2 Identity Server on several machines with a custom configuration (some service providers, roles, a user datasource, etc.). I have seen that almost everything can be done using the exposed web services, but I'm looking for something else.
Is there a tool to collect these configurations into some sort of script and run it on a newly created instance, or is the only way to write something that calls the web services?
If you have the ability to create a cluster of IS instances, everything you do through the UI (Management Console) will be automatically replicated/synced to the other IS instances. You can create a cluster by following the link below.
https://docs.wso2.com/display/CLUSTER44x/Clustering+Identity+Server+5.1.0
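If clustering is not an option and you do end up scripting the setup against the admin web services instead, the calls are plain SOAP over HTTPS with basic auth. A rough sketch; the endpoint path, namespace, operation and credentials below are illustrative and must be checked against the admin service WSDL of your IS version:

```java
// Illustrative only: verify the endpoint, namespace, operation names and
// payload structure against the actual admin service WSDL before using.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class IsConfigClient {

    public static void main(String[] args) throws Exception {
        // Placeholder endpoint of a user/role management admin service.
        String endpoint = "https://localhost:9443/services/RemoteUserStoreManagerService";

        // Placeholder SOAP body: namespace and element names must come from the WSDL.
        String soap =
              "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\""
            + "                  xmlns:ser=\"http://service.ws.um.carbon.wso2.org\">"
            + "  <soapenv:Body>"
            + "    <ser:addRole>"
            + "      <ser:roleName>app-admin</ser:roleName>"   // hypothetical role
            + "    </ser:addRole>"
            + "  </soapenv:Body>"
            + "</soapenv:Envelope>";

        // NOTE: the default IS certificate is self-signed, so the JVM trust store
        // must contain it (or a custom SSLSocketFactory must be configured).
        HttpURLConnection con = (HttpURLConnection) new URL(endpoint).openConnection();
        con.setRequestMethod("POST");
        con.setDoOutput(true);
        con.setRequestProperty("Content-Type", "text/xml; charset=UTF-8");
        con.setRequestProperty("SOAPAction", "urn:addRole");
        String auth = Base64.getEncoder()
                .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8)); // placeholder credentials
        con.setRequestProperty("Authorization", "Basic " + auth);

        OutputStream out = con.getOutputStream();
        out.write(soap.getBytes(StandardCharsets.UTF_8));
        out.close();
        System.out.println("HTTP " + con.getResponseCode());
    }
}
```

A series of such calls can be collected into a setup script and replayed against each new instance.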
I want to know if there is any way to invoke a Windows service from a SQL query.
Note that the Windows service is installed on the same machine as the SQL Server instance.
Thank you
No, because Windows services cannot be invoked. A service is just a process that is started by the SCM and responds to SCM commands; there is no such concept as 'invoking' a service. What you can interact with is the service's API, which implies a communication protocol with the service. Depending on the API and protocol used, that may be possible to do from T-SQL. E.g. COM-based APIs can be driven using sp_OAcreate, managed APIs may be driven using SQLCLR, and so on. But it is always a Bad Idea. T-SQL is not the proper layer to introduce dependencies on external services: it can impact SQL performance dramatically and, most importantly, it is almost always incorrect because it does not properly account for rollbacks.
Do any/all service interaction in the application layer, where it belongs.
I use Sesame for a project with a local NativeStore file repository. Everything is fine, but when multiple clients use my application simultaneously, the repository locks. How can I deal with this parallel-connections problem?
A Sesame Native Store assumes it has sole, unique access to its datadir. This means that you can not create two NativeStore objects that use the same datadir, as this will cause inconsistencies and potential deadlocks. So, you need to share a single NativeStore object.
In a single JRE, this can easily be achieved by using a RepositoryManager. See this article for an explanation and code examples. If your setup requires several independent client applications to connect to Sesame, you will either have to implement your own server app for those clients to connect to, or use a Sesame Server and have each client connect via an HTTPRepository.
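A minimal sketch of the shared-manager approach (the data directory and repository ID are placeholders, and the config classes shown are those of Sesame 2.x):

```java
// Sketch only: one LocalRepositoryManager, and therefore one NativeStore, per JVM.
import java.io.File;

import org.openrdf.repository.Repository;
import org.openrdf.repository.RepositoryConnection;
import org.openrdf.repository.config.RepositoryConfig;
import org.openrdf.repository.manager.LocalRepositoryManager;
import org.openrdf.repository.sail.config.SailRepositoryConfig;
import org.openrdf.sail.nativerdf.config.NativeStoreConfig;

public class SharedRepository {

    private static LocalRepositoryManager manager;

    public static synchronized Repository getRepository() throws Exception {
        if (manager == null) {
            manager = new LocalRepositoryManager(new File("/data/sesame"));   // placeholder base dir
            manager.initialize();
            if (!manager.hasRepositoryConfig("myrepo")) {
                RepositoryConfig config = new RepositoryConfig(
                        "myrepo", new SailRepositoryConfig(new NativeStoreConfig()));
                manager.addRepositoryConfig(config);
            }
        }
        return manager.getRepository("myrepo");
    }

    public static void main(String[] args) throws Exception {
        // Every client thread asks the manager for the same repository and opens
        // its own connection; no second NativeStore is ever created on the datadir.
        RepositoryConnection con = getRepository().getConnection();
        try {
            System.out.println("statements: " + con.size());
        } finally {
            con.close();
        }
    }
}
```

The key design point is that connections are cheap and per-request, while the repository (and its lock on the datadir) is a process-wide singleton managed in one place.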
We are working on developing a Java EE based application. Our application is Java 1.5 compatible and will be deployed to WAS ND 6.1.0.21 with the EJB 3.0 and Web Services feature packs. The configuration is currently one cell with two clusters. Each cluster will have two nodes.
Our application, or our system, as I should rather say, comes in two or three parts.
Part 1: An ear deployed to one cluster that contains 3rd party vendor code combined with customization code. Their code is EJB 2.0 compliant and has a lot of Remote Home interfaces.
Part 2: An ear deployed to the same cluster as the first ear. This ear contains EJB 3s that make calls into the EJB 2s supplied by the vendor and the custom code. These EJB 3s are used by the JSF UI that is also packaged with the EAR, and some of them are also exposed as web services (JAX-WS 2.0 with SOAP 1.2 compliance) for other clients.
Part 3: There may be other services that do not depend on our vendor/custom code app. These services will be EJB 3.0 beans and web services deployed to the other cluster.
Per a recommendation from some IBM staff on site here, communication between nodes in a cluster can be EJB RMI. But if we are going across clusters and/or other cells, then the communication should be web services.
That said, some of us are wondering about performance and about optimizing the speed of communication for the applications that will use our web services and EJBs. Right now most EJBs are exposed as remote (and our vendor set theirs up that way, rather than also exposing local home interfaces). We are wondering whether WAS does any optimizations between apps in the same node/cluster node space. If two apps are installed in the same area and call each other via a remote home interface, is WAS smart enough to make it a local home interface call?
Are there other optimization techniques? Should we consider them? Should we not? What are the costs/benefits? Here is the question from one of our team members as sent in their email:
The question is: Supposing we develop our EJBs as remote EJBs, where our UI controller code is talking to our EXT java services via EJB3...what are our options for performance optimization when both the EJB server and client are running in the same container?
As one point of reference, Google has given me some very old WebSphere performance tuning documentation from 2000 that explains a tuning option you can set to enable call by reference for EJB communication when both parties are in the same application server JVM. It states the following:
Because EJBs are inherently location independent, they use a remote programming model. Method parameters and return values are serialized over RMI-IIOP and returned by value. This is the intrinsic RMI "Call By Value" model.
WebSphere provides the "No Local Copies" performance optimization for running EJBs and clients (typically servlets) in the same application server JVM. The "No Local Copies" option uses "Call By Reference" and does not create local proxies for called objects when both the client and the remote object are in the same process. Depending on your workload, this can result in a significant overhead savings.
Configure "No Local Copies" by adding the following two command line parameters to the application server JVM:
* -Djavax.rmi.CORBA.UtilClass=com.ibm.CORBA.iiop.Util
* -Dcom.ibm.CORBA.iiop.noLocalCopies=true
CAUTION: The "No Local Copies" configuration option improves performance by changing "Call By Value" to "Call By Reference" for clients and EJBs in the same JVM. One side effect of this is that Java object derived (non-primitive) method parameters can actually be changed by the called enterprise bean.
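In plain Java terms, the side effect that caution describes looks roughly like this (hypothetical class and method names; an ordinary in-JVM call stands in for the colocated EJB invocation):

```java
// Hypothetical illustration of the "No Local Copies" side effect.
import java.util.ArrayList;
import java.util.List;

public class NoLocalCopiesDemo {

    // Stand-in for a colocated EJB method that mutates its parameter.
    static void applyDiscount(List<Double> prices) {
        for (int i = 0; i < prices.size(); i++) {
            prices.set(i, prices.get(i) * 0.9);
        }
    }

    public static void main(String[] args) {
        List<Double> prices = new ArrayList<Double>();
        prices.add(100.0);

        applyDiscount(prices);

        // This plain in-JVM call behaves like the call-by-reference case: the caller
        // sees 90.0. Under the default RMI-IIOP call-by-value, the bean would have
        // received a serialized copy and the caller's list would still hold 100.0.
        System.out.println(prices.get(0));
    }
}
```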
Also, we will be using Process Server 6.2 and WESB 6.2 in the future. Any ideas or recommendations?
Thanks
The only automatic optimization that can really be done for remote EJBs is if they are colocated (accessed from within the same JVM). In that case, the ORB will short-circuit some of the work that would otherwise be required if the request needed to go across the wire. There will still be some necessary ORB overhead including object serialization (unless you turn on noLocalCopies, with all the caveats it brings).
Alternatively, if you know that the UI controller is colocated, your method calls do not rely on parameter or return value copying, and your interface does not rely on the exception differences between local and remote views, then you could create and expose a local subinterface that will be much faster than remote access through the ORB.
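A sketch of that local-view arrangement in EJB 3 (bean and interface names are illustrative; each type would normally live in its own source file):

```java
// Illustrative names only: the same bean exposes both a remote and a local view,
// so a colocated JSF/servlet client can use the local interface and skip the ORB.
import javax.ejb.Local;
import javax.ejb.Remote;
import javax.ejb.Stateless;

@Remote
interface AccountServiceRemote {
    String lookupAccountName(long accountId);
}

@Local
interface AccountServiceLocal {
    // Same operation, but pass-by-reference semantics and no remote call overhead.
    String lookupAccountName(long accountId);
}

@Stateless
public class AccountServiceBean implements AccountServiceLocal, AccountServiceRemote {
    public String lookupAccountName(long accountId) {
        return "account-" + accountId;   // placeholder implementation
    }
}

// A colocated client (e.g. a JSF managed bean packaged in the same EAR) would
// inject the local view instead of the remote one:
//
//     @EJB
//     private AccountServiceLocal accountService;
```

Remote clients keep using AccountServiceRemote unchanged; only the colocated callers switch to the local interface.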