Polymer, Deepstream.IO and RabbitMQ

We'd like to set up a notification engine that uses AMQP. To achieve this, we're using RabbitMQ. That's fine; the server is installed and configured.
Now we'd like to access the RabbitMQ message queues from a browser, so we need a wrapper around AMQP. For this we found deepstream.io. That suits us especially well, because we use Polymer as the frontend, which is supported by deepstream.io.
We configured deepstream.io to use RabbitMQ as the backend, but the connection from Polymer to deepstream.io does not work:
The connection is established (we can see this in the deepstream server log as INCOMING_CONNECTION), but the component seems to be the problem. After a long timeout the log file reports a CONNECTION_AUTHENTICATION_TIMEOUT.
How can I set the user name and password specified in the deepstream.io config file in the component?
Thank you!

According to the ds-tutorial-polymer repo you connect to deepstream as follows:
<ds-connection
    url="localhost:6020"
    ds="{{ds}}">
</ds-connection>
<template is="dom-if" if="[[ds]]">
    <ds-login
        auto-login
        ds="[[ds]]">
    </ds-login>
    <todos-list
        name="polymer_example/todos"
        ds="[[ds]]">
    </todos-list>
</template>
This exposes deepstream as a global ds for you to pass to other records and lists.
If you switch off auto-login within the ds-login you will need to call the login method on the prototype. An example (and the rest of the documentation) can be seen here:
http://deepstreamio.github.io/deepstream.io-tools-polymer/components/deepstream.io-tools-polymer/#ds-login
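If you set the credentials yourself (with auto-login switched off), a minimal sketch looks like the following; it assumes the deepstream JavaScript client's login(authParams, callback) signature, and the user name and password are placeholders for the values from your deepstream.io config:
// ds is the global deepstream client exposed by <ds-connection> above.
// Replace the placeholder credentials with the ones from your deepstream.io config.
ds.login({ username: 'my-user', password: 'my-password' }, function (success, data) {
    if (!success) {
        console.error('deepstream login failed', data);
    }
});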

Related

H2-console in r2dbc-h2 driver

I am using the R2DBC-H2 driver, and my URL is spring.r2dbc.url=r2dbc:h2:mem:///customer
Using this configuration, Spring Boot starts fine; however, I cannot access the h2-console.
Does anybody know why, and how I can fix it?
If I understand the source code of H2ConsoleAutoConfiguration correctly, the H2 console auto-configuration from Spring Boot does not work in a reactive environment:
...
@ConditionalOnWebApplication(type = Type.SERVLET)
...
public class H2ConsoleAutoConfiguration {
You can confirm this yourself by changing the type of your web application to SERVLET (for example, by adding spring-boot-starter-web as a dependency), which activates the route to the H2 console (if enabled in the application properties). The h2-console endpoint will start working again.
As the whole code seems very servlet-specific, I don't know how to properly fix this problem.
The H2 Console depends on traditional JDBC drivers and is not compatible with the Spring WebFlux stack.
If you are developing a WebFlux application, you can run H2 as a standalone database and use the H2 Console freely:
Follow the official Getting Started guide to start the H2 database and the H2 Console.
Set your spring.r2dbc.url to the URL of the database you started in the first step.
NOTE: Do not use an in-memory DB here.

How Ambari detects a service state

I'm adding a new custom service to Ambari.
I have successfully created the service and installed it through the Ambari web UI. After starting the master component of my new service, Ambari claims that the master is in stopped status; however, the master is running successfully on the intended node and I can use its API.
I wonder how Ambari checks a component status?
Does it use the status function which I have provided in the component definition? I don't see logs of calling my status function in the Ambari logs.
Or does it use the PID file? My component does not have a PID file.
@TailofGodzilla (cool name btw), when I make custom services, I start with existing open-source examples and then finally create management packs. You can easily reverse-engineer these, including the service status function.
I checked three of these services (Hue, Elk, NiFi) and all of them use a PID file, with entries for the status function and the status_params file.

Why can we not access the GUI user in jUDDI

After configuring the jUDDI server in Eclipse and creating the environment variable,
we get a problem accessing the GUI user and admin pages and the Tomcat interface:
I think you are looking at something like:
message java.lang.IllegalStateException: No output folder
I would check the Tomcat logs, the permissions of the user you are running Tomcat under, and the directory you have installed Tomcat into.
Do not even try to use UDDI these days. People are moving towards semantic web services; UDDI is out of the scene.
WSMO and OWL-S are major initiatives for semantic web services. These solutions can provide more precise results.
Here are a few:
mDNS/Bonjour/Avahi - can be used to share endpoint information for a web service, or anything else using a TXT record
WS-Discovery - supported by CXF and WCF, shares implementation of a specific interface
ebXML - had a component similar to UDDI
Visit this link.

Debugging Parse Cloud-Code

What would be the best way to debug Parse Cloud Code? Currently it's a mess of logging to the console and checking logs. Does anyone have a good workable solution?
During development, you should begin by testing against a locally hosted server. For example, I use VS Code; you can set breakpoints and watch variables for their values. You can set up a tool like ngrok to get a remote URL for your local endpoint so you can test with non-locally hosted clients if you'd like.
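For reference, a minimal sketch of such a locally hosted parse-server; all IDs, keys, paths and the database URI are placeholders, and it assumes an older parse-server release where the ParseServer instance mounts directly as Express middleware. Start it with node --inspect and attach the VS Code debugger:
// index.js
const express = require('express');
const { ParseServer } = require('parse-server');

const app = express();
const api = new ParseServer({
    databaseURI: 'mongodb://localhost:27017/dev', // placeholder database
    cloud: __dirname + '/cloud/main.js',          // path to your cloud code entry point
    appId: 'myAppId',                             // placeholder
    masterKey: 'myMasterKey',                     // placeholder
    serverURL: 'http://localhost:1337/parse'
});

app.use('/parse', api);
app.listen(1337, function () { console.log('parse-server listening on 1337'); });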
We also use Slack extensively. We've created our own Slack bot, and it has several channels it reports relevant information to, triggered from our parse-server. One of these is a dev-error channel. Instead of console.logs, which are hard to sift through to find what you're looking for, we push important information to Slack. We don't switch every single console.log to a Slack message, just the important "Hey, something went wrong, here's the information" messages. This brings them to our attention so we can identify and resolve them much faster. Slack is awesome. I recommend using Slack, even on a solo project.
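As an illustration only, pushing such a message to a Slack incoming webhook from cloud code looks roughly like this; the environment variable name is made up, and it assumes Parse.Cloud.httpRequest is available in your parse-server version:
// Hypothetical helper; SLACK_DEV_ERRORS_WEBHOOK is a placeholder env variable
// holding your Slack incoming-webhook URL.
async function reportToSlack(text) {
    await Parse.Cloud.httpRequest({
        method: 'POST',
        url: process.env.SLACK_DEV_ERRORS_WEBHOOK,
        headers: { 'Content-Type': 'application/json' },
        body: { text: text }
    });
}
// Usage: reportToSlack('Hey, something went wrong: ' + error.message);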
At the moment you can access the logs written with console.log() or console.error() in your functions, along with the general logs of everything that happens with your app; at Back4App you can access these via Server Settings -> Logs -> Settings -> Server System Log.
For the logs generated by Parse Server from within your functions, use request.log.info() and request.log.error(); at Back4App you can access these via Dashboard -> Logs.
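A short sketch of those logger calls inside a cloud function; the function name and messages are made up for illustration:
Parse.Cloud.define('placeOrder', async (request) => {
    request.log.info('placeOrder called with ' + JSON.stringify(request.params));
    try {
        // ... your business logic here ...
        return { ok: true };
    } catch (error) {
        request.log.error('placeOrder failed: ' + error.message);
        throw error;
    }
});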

Re-route/Divert some WL.Client adapter invocation traffic to the WL Server through a different URL (for PCI payment and security requirements)?

Worklight 5.0.6.1
We have a specific requirement from our client about using a PCI appliance from Intel (http://info.intel.com/rs/intel/images/Intel_Expressway_Tokenization_Broker.pdf) to avoid a PCI audit for the application and server.
Therefore, the Adapter calls that have something to do with payment data would need to go through this hardware appliance before hitting the worklight server. All other adapter calls should go to the worklight server directly (to not overload the appliance).
The idea is to have two different URLs but the same worklight server in the background. It is assumed that the calls through the appliance will be transparent for the worklight server, so worklight functionality should not be impacted.
My questions around this would be:
Is there a Worklight best practice for having two different URLs for the same Worklight server and alternating between those URLs from the client for adapter invocations (only; not direct update or anything else, since we assume this is executed natively)?
Is it possible to dynamically overwrite the Worklight server URL that is used for an adapter invocation through JavaScript code in the client, e.g. by overwriting a specific JS function that gets/returns the Worklight URL from somewhere before the WL.Client AJAX adapter invocation?
We are also looking into having a load balancer switch the route based on a regex of the adapter name that is being invoked. But it is not clear right now whether that is possible and what the performance impact would be.
Though possible, this is not something supported by WL, and you will not be able to get help from support in case something goes wrong (and it will). Keep in mind that all server cookies (e.g. the session id) are per domain, so when you dynamically change the server URL you will lose them, and the WL server will treat your request as a new session, unrelated to the old (existing) one. This is not something specific to WL; this is how HTTP works.
WL keeps server URLs in two global properties - WL.AppProp.WORKLIGHT_ROOT_URL and WL.AppProp.APP_SERVICES_URL. You can override them, thus changing the server URLs.
The first one is used for all requests triggered by the developer (init, connect, login, etc.). The second one is used for miscellaneous internal functionality (e.g. the encrypted cache).
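A minimal sketch of that override, assuming both properties hold absolute URLs; the function name and the appliance URL are placeholders:
// Swap only the scheme://host:port part of the two URLs mentioned above,
// keeping the paths the app was built with. applianceBaseUrl is a placeholder.
function routeThroughAppliance(applianceBaseUrl) {
    WL.AppProp.WORKLIGHT_ROOT_URL = WL.AppProp.WORKLIGHT_ROOT_URL.replace(/^https?:\/\/[^\/]+/, applianceBaseUrl);
    WL.AppProp.APP_SERVICES_URL = WL.AppProp.APP_SERVICES_URL.replace(/^https?:\/\/[^\/]+/, applianceBaseUrl);
}
routeThroughAppliance("https://pci-appliance.example.com");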
Once again: this is a hack, definitely not a solution. Use with caution, if at all. :)
How about this: we define our own function that updates the static properties?
function changeServerUrl(serverURL) {
    WL.StaticAppProps.APP_SERVICES_URL = serverURL + WL.StaticAppProps.POSTFIX_APP_SERVICES_URL;
    WL.StaticAppProps.WORKLIGHT_ROOT_URL = serverURL + WL.StaticAppProps.POSTFIX_WORKLIGHT_ROOT_URL;
    WL.StaticAppProps.WORKLIGHT_BASE_URL = serverURL;
}
and call it:
changeServerUrl("http://" + yourServerIP + ":" + PORT);
If you dig into the worklight.js file, there is a function setWLUrl(url) that can be used to change the server URL.
Call it like this and it's done:
setWLUrl("http://" + yourServerIP + ":" + PORT);
It's kind of a hack, but I think it should not cause any issue since it's a function within their API.
Good luck.