How to specify JMS username and password within URL - activemq

I have an application which connects to ActiveMQ using a "failover" URL string. The admins are adding authentication to the brokers. Is it possible to put jms.userName and jms.password into the URL string? An example with dummy values would be most helpful.

Yes, it works exactly as you specified. The jms. prefix configures any of the setters on the ActiveMQConnectionFactory:
failover:(tcp://127.0.0.1:61616)?jms.userName=admin&jms.password=admin
Log confirmation:
09:41:53.429 INFO [ActiveMQ Task-1] Successfully connected to tcp://127.0.0.1:61616
09:41:53.481 INFO [Blueprint Event Dispatcher: 1] Route: route1 started and consuming from: amq://queue:VQ.ORDER.VT.ORDER.EVENT
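For illustration, here is a minimal sketch of the same setup done programmatically, using the dummy broker address and credentials from above; the jms.* URL options simply map onto the factory's setters, so both variants below are equivalent:

import javax.jms.Connection;
import javax.jms.JMSException;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverAuthExample {
    public static void main(String[] args) throws JMSException {
        // Credentials embedded in the broker URL via the jms.* options...
        ActiveMQConnectionFactory viaUrl = new ActiveMQConnectionFactory(
                "failover:(tcp://127.0.0.1:61616)?jms.userName=admin&jms.password=admin");
        Connection c1 = viaUrl.createConnection();

        // ...which is equivalent to calling the corresponding setters directly.
        ActiveMQConnectionFactory viaSetters =
                new ActiveMQConnectionFactory("failover:(tcp://127.0.0.1:61616)");
        viaSetters.setUserName("admin");
        viaSetters.setPassword("admin");
        Connection c2 = viaSetters.createConnection();

        c1.close();
        c2.close();
    }
}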

Related

Confusion on AsyncAPI AMQP binding for subscribe operation

I have a server which publishes RabbitMQ messages on an exchange, so I tried to create the following AsyncAPI spec for it:
asyncapi: 2.3.0
info:
  title: Hello World
  version: 1.0.0
  description: Get Hello World Messages
  contact: {}
servers:
  local:
    url: amqp://rabbitmq
    description: RabbitMQ
    protocol: amqp
    protocolVersion: 0.9.1
defaultContentType: application/json
channels:
  hellow_world:
    subscribe:
      operationId: HelloWorldSubscriber
      description:
      message:
        $ref: '#/components/messages/HellowWorldEvent'
      bindings:
        amqp:
          ack: true
          cc: ["hello_world_routing_key"]
          bindingVersion: 0.2.0
    bindings:
      amqp:
        is: routingKey
        exchange:
          name: hello_world_exchange
          type: direct
          durable: true
          vhost: /
        bindingVersion: 0.2.0
components:
  messages:
    HellowWorldEvent:
      payload:
        type: object
        properties: []
Based on my understanding, this means that MyApp will publish a HellowWorldEvent message on the hello_world_exchange exchange using the routing key hello_world_routing_key.
Questions -
How can a consumer/subscriber define which queue it will use for consuming this message?
Do I need to define a new schema for the subscriber and define the queue element there?
I could define queue.* elements in the channel bindings, but that can only specify one queue; if there is more than one subscriber/consumer, how can we specify different queues for them?
Reference -
https://github.com/asyncapi/bindings/tree/master/amqp
I see you have not yet approved any of the responses as a solution. Is this still an issue? Are you using the AsyncAPI generator to generate your code stubs?
If so, the generator creates a consumer/subscriber. If you want different processing/business logic, you would generate new stubs and configure the queues they listen on. The queue is an implementation detail. I had an issue with the Node.js generator for AMQP and RabbitMQ, so I decided to test the spec against Python to see if it was me or the generator.
Try the generator, and you can also try my gist: https://gist.github.com/adrianvolpe/27e9f02187c5b31247aaf947fa4a7360. I did this for version 2.2.0, so hopefully it works for you.
I also did a test with the Python pika library; however, I did not assign a binding to the queue.
I noticed in the above spec you are setting your exchange type to Direct. You can have the same binding with multiple consumers with both Direct and Topic exchanges; however, you may want Topic, as quoted from the RabbitMQ docs:
https://www.rabbitmq.com/tutorials/tutorial-five-python.html
Topic exchange is powerful and can behave like other exchanges.
When a queue is bound with "#" (hash) binding key - it will receive all the messages, regardless of the routing key - like in fanout exchange.
When special characters "*" (star) and "#" (hash) aren't used in bindings, the topic exchange will behave just like a direct one.
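For illustration, here is a minimal Java sketch (using the RabbitMQ Java client rather than the generator) of how each consumer declares its own queue and binds it to the exchange and routing key from your spec; the queue name here is a hypothetical choice made by that consumer:

import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class HelloWorldSubscriber {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("rabbitmq"); // host taken from the server URL in the spec

        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // Exchange as described in the channel binding above.
        channel.exchangeDeclare("hello_world_exchange", "direct", true);

        // Each consumer declares its own queue (an implementation detail)
        // and binds it with the routing key from the operation binding.
        String queue = "my-service.hello-world"; // hypothetical queue name
        channel.queueDeclare(queue, true, false, false, null);
        channel.queueBind(queue, "hello_world_exchange", "hello_world_routing_key");

        DeliverCallback onMessage = (consumerTag, delivery) ->
                System.out.println("Received: " + new String(delivery.getBody(), StandardCharsets.UTF_8));
        channel.basicConsume(queue, true, onMessage, consumerTag -> { });
    }
}

A second consumer would simply declare a different queue and bind it the same way; that is how several subscribers each get their own queue on the same exchange and routing key.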
Best of luck!

Spring Cloud Config Basic Security throwing 401 error

I have the following configuration on the server side:
server:
  port: 8888
spring:
  profiles:
    active: native
  cloud:
    config:
      server:
        native:
          search-locations: "classpath:/config"
  security:
    user:
      name: test
      password: test
And the following configuration on the client side:
spring:
  cloud:
    config:
      fail-fast: true
      profile: "${spring.profiles.active}"
      uri: "${SPRING_CLOUD_CONFIG_URI:http://localhost:8888/}"
      username: test
      password: test
I can successfully access the properties from a browser using test/test as the username/password, but when my client tries to fetch them, it fails with a 401 error:
INFO 7620 --- [5cee934b64bfd92] c.c.c.ConfigServicePropertySourceLocator : Fetching config from server at : http://localhost:8888
WARN 7620 --- [5cee934b64bfd92] c.c.c.ConfigServicePropertySourceLocator : Could not locate PropertySource: 401 null
I tried setting the log level for Spring Cloud to DEBUG, but nothing additional got logged, so I have no clue why I'm getting a 401 from the client while I can access the properties successfully via the browser using the same credentials.
I've also tried removing security from both the server and the client, and it worked perfectly, which means the rest of the configuration is fine. So the question is: what am I overlooking when I apply basic security, and why does it fail with a 401?
Try checking these configurations:
spring.cloud.config.username
spring.cloud.config.password
Both properties should be defined in bootstrap.properties (not application.properties), for example:
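A minimal client-side bootstrap.properties with the values from your configuration could look like this (a sketch; adjust the URI to your environment):
spring.cloud.config.uri=${SPRING_CLOUD_CONFIG_URI:http://localhost:8888/}
spring.cloud.config.username=test
spring.cloud.config.password=test
spring.cloud.config.fail-fast=true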
Please check whether the way you are specifying the profile name is correct and whether it is resolved properly in the Java code. You can implement a CommandLineRunner and print the active profiles from the Environment, as sketched below.
If you specified the property spring.profiles.active as native in pom.xml, you can resolve it in the application properties/YAML file as #spring.profiles.active#.
If you specified the property as a VM argument, then it should work with the current implementation.
If you did not specify spring.profiles.active in the pom or as a VM argument, it will resolve to the default profile, not the native profile. The profile in the config client and the config server should be the same.
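A minimal sketch of such a check (the class name is arbitrary):

import java.util.Arrays;
import org.springframework.boot.CommandLineRunner;
import org.springframework.core.env.Environment;
import org.springframework.stereotype.Component;

@Component
public class ActiveProfilesLogger implements CommandLineRunner {

    private final Environment environment;

    public ActiveProfilesLogger(Environment environment) {
        this.environment = environment;
    }

    @Override
    public void run(String... args) {
        // Prints the profiles actually resolved at runtime,
        // which should match the profile the config server expects.
        System.out.println("Active profiles: " + Arrays.toString(environment.getActiveProfiles()));
    }
}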
Yes, we have to use bootstrap properties, because when the Spring Cloud application starts, it creates a bootstrap context. The bootstrap context searches for a bootstrap.properties or bootstrap.yaml file, whereas the application context searches for an application.properties or application.yaml file. The bootstrap context is the parent context for the main application.

Apache Camel / MQTT through SSL: Failed to create Producer for endpoint (java.lang.NullPointerException)

I'm trying to publish to an MQTT topic using the Apache Camel MQTT component.
So in my Spring context XML I have the following:
<camel:to uri="mqtt:test?host=ssl://myhost:8883&publishTopicName=test&userName=test&password=test"/>
But I'm getting the following error at startup:
Failed to create Producer for endpoint:
Endpoint[mqtt:test?host=ssl://myhost:8883&publishTopicName=test&userName=test&password=test]. Reason: java.lang.NullPointerException
Everything works fine when not using SSL; the following configuration (regular TCP instead of SSL) works well:
<camel:to uri="mqtt:test?host=tcp://myhost:1883&publishTopicName=test&userName=test&password=test"/>
I've added the javax.net.ssl.trustStore JVM property pointing to my certificate store, but without any effect.
Has anyone already met this issue? Is there something specific to add in the Spring DSL configuration file when using the Camel MQTT component with SSL?
EDIT:
The stack trace of the NPE:
Caused by: java.lang.NullPointerException
    at org.fusesource.hawtdispatch.transport.SslTransport.connecting(SslTransport.java:194)
    at org.fusesource.mqtt.client.CallbackConnection.createTransport(CallbackConnection.java:285)
    at org.fusesource.mqtt.client.CallbackConnection.connect(CallbackConnection.java:138)
    at org.apache.camel.component.mqtt.MQTTEndpoint.connect(MQTTEndpoint.java:305)
    at org.apache.camel.component.mqtt.MQTTProducer.doStart(MQTTProducer.java:38)
    at org.apache.camel.support.ServiceSupport.start(ServiceSupport.java:61)
    at org.apache.camel.impl.DefaultCamelContext.startService(DefaultCamelContext.java:3219)
    at org.apache.camel.impl.DefaultCamelContext.doAddService(DefaultCamelContext.java:1209)
    at org.apache.camel.impl.DefaultCamelContext.addService(DefaultCamelContext.java:1170)
    at org.apache.camel.impl.ProducerCache.doGetProducer(ProducerCache.java:442)
    ... 33 more
Debugging with javax.net.debug=ssl was useful.
Actually, there was an issue in the java.security file where the security.provider property was not set properly; it had been manually changed for testing purposes related to another application.
Since fixing that, everything works fine. Sorry for the post; it was an internal, environment-specific mistake.
Alex.
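For reference, the trust store and the SSL debugging mentioned above are controlled with standard JVM system properties; the paths, password, and jar name below are placeholders:
java -Djavax.net.ssl.trustStore=/path/to/truststore.jks \
     -Djavax.net.ssl.trustStorePassword=changeit \
     -Djavax.net.debug=ssl \
     -jar my-camel-app.jar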

ActiveMQ; how to make a broker distribute messages among several transportConnectors

Is it possible to make the ActiveMQ broker distribute messages received on one transportConnector to the other transportConnectors as well?
The concrete use case is this: I have a Java client sending messages using the openwire transportConnector and I would like to be able to read them on the mqtt transportConnector.
I use the sample jndi.properties file that is on the ActiveMQ page http://activemq.apache.org/jndi-support.html:
java.naming.factory.initial = org.apache.activemq.jndi.ActiveMQInitialContextFactory
# use the following property to configure the default connector
java.naming.provider.url = tcp://localhost:61616
# use the following property to specify the JNDI name the connection factory
# should appear as.
#connectionFactoryNames = connectionFactory, queueConnectionFactory, topicConnectionFactry
# register some queues in JNDI using the form
# queue.[jndiName] = [physicalName]
queue.MyQueue = example.MyQueue
# register some topics in JNDI using the form
# topic.[jndiName] = [physicalName]
topic.MyTopic = example.MyTopic
I had to replace the default 'vm' transportConnector with the 'tcp' one because it did not execute using 'vm'.
The messages are pushed to my Java MessageListener instance, but my MQTT client does not show them. I tried different topic names, from 'example.MyTopic' up to '/example/MyTopic'.
Any help would be much appreciated.
Many thanks,
Roman
The broker does that by default, so something in your setup is not right; check the admin console for the producers and consumers registered on the given destinations to see what is going on. Remember that a Topic consumer will not receive messages sent to that Topic unless it was online at the time they were sent, or you had previously created a durable topic subscription.
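For illustration, here is a minimal sketch of a durable topic subscriber built on the jndi.properties above; the client ID and subscription name are arbitrary example values:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.naming.InitialContext;

public class DurableTopicSubscriber {
    public static void main(String[] args) throws Exception {
        // Reads jndi.properties from the classpath.
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Topic topic = (Topic) ctx.lookup("MyTopic"); // maps to example.MyTopic

        Connection connection = factory.createConnection();
        connection.setClientID("example-client"); // required for durable subscriptions
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // Messages published to the topic while this subscription exists are kept
        // by the broker even if the subscriber is temporarily offline.
        MessageConsumer consumer = session.createDurableSubscriber(topic, "my-durable-sub");
        Message message = consumer.receive();
        if (message instanceof TextMessage) {
            System.out.println("Received: " + ((TextMessage) message).getText());
        }
        connection.close();
    }
}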

Can't connect SonarQube to LDAP directory for user mapping

I'm trying to set up the user mapping on SonarQube (Latest) so it can fetch the organizational structure from LDAP.
I already installed the LDAP plugin on Sonar (1.5.1), and created a minimal configuration to connect the two:
# General Configuration
sonar.security.realm=LDAP
ldap.url=ldap://ldap:389
# User Mapping
ldap.user.baseDn=ou=users,ou=udd,dc=example,dc=com
ldap.user.request=(&(objectClass=inetOrgPerson)(uid={uid}))
ldap.user.realNameAttribute=cn
ldap.user.emailAttribute=mail
All my users are under the example.com domain.
But then, when I try to log in to Sonar using the LDAP credentials, I get the following error in the logs:
Error from external users provider: exception Java::OrgSonarApiUtils::SonarException: Unable to retrieve details for user dev1 in <default>
This is pretty frustrating, since all those properties are set in the configuration file above.
Any ideas about the source of this issue?
EDIT:
I found this when I increased the log level to DEBUG:
2016.01.25 05:54:27 DEBUG web[o.s.p.l.LdapContextFactory] Initializing LDAP context {java.naming.provider.url=ldap://ldap:389, java.naming.factory.initial=com.sun.jndi.ldap.LdapCtxFactory, com.sun.jndi.ldap.connect.pool=true, java.naming.security.authentication=simple, java.naming.referral=follow}
2016.01.25 05:54:27 DEBUG web[o.s.p.l.LdapUsersProvider] integer expected inside {}: (&(objectClass=inetOrgPerson)(uid={uid}))
javax.naming.directory.InvalidSearchFilterException: integer expected inside {}: (&(objectClass=inetOrgPerson)(uid={uid}))
at com.sun.jndi.toolkit.dir.SearchFilter.format(SearchFilter.java:602) ~[na:1.7.0_95]
at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1785) ~[na:1.7.0_95]
...
I don't see why an integer is supposed to be expected inside the {}, and that doesn't make much sense given my LDAP structure.
Try setting ldap.user.request to (&(objectClass=inetOrgPerson)(uid={login})) instead of using (uid={uid}).
Details:
The LDAP Plugin does not recognise {uid} and therefore doesn't know what to do with it. It then passes it to the LDAP javax.naming API, which chokes on this. This behaviour is made explicit at SonarQube startup (logs in my case):
INFO web[o.s.p.l.LdapSettingsManager] User mapping: LdapUserMapping{baseDn=cn=employees,dc=example,dc=org, request=(&(objectClass=inetOrgPerson)(uid={uid})), realNameAttribute=cn, emailAttribute=mail}
Using {login} instead (the keyword shown in the documented default values) lets the LDAP Plugin build a well-formed request with a {0}:
INFO web[o.s.p.l.LdapSettingsManager] User mapping: LdapUserMapping{baseDn=cn=employees,dc=example,dc=org, request=(&(objectClass=inetOrgPerson)(uid={0})), realNameAttribute=cn, emailAttribute=mail}
The javax.naming API will then replace this {0} with a parameter that SonarQube sets to the actual username you enter in the login form.
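With the configuration from the question, the corrected user mapping section would therefore look like this:
# User Mapping
ldap.user.baseDn=ou=users,ou=udd,dc=example,dc=com
ldap.user.request=(&(objectClass=inetOrgPerson)(uid={login}))
ldap.user.realNameAttribute=cn
ldap.user.emailAttribute=mail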