By following https://activemq.apache.org/rest.html, I'm able to push messages via the REST API (e.g. curl -u admin:admin -d "body=message" http://localhost:8161/api/message/TEST?type=queue works, and I can see the message in the admin console). However, I'd like to be able to use HTTPS. I found https://activemq.apache.org/http-and-https-transports-reference.html and http://troyjsd.blogspot.co.uk/2013/06/activemq-https.html but couldn't manage to make it work. Based on these two outdated/incomplete links:
I added to conf/activemq.xml
Imported self-signed certificate into JDK keystore (per http://troyjsd.blogspot.co.uk/2013/06/activemq-https.html)
Copied xstream and httpclient jars from lib/optional to lib/ (both under ActiveMQ directory, obviously)
So,
How can I set up ActiveMQ so that it exposes an HTTPS REST endpoint?
Assuming I did step 1, how can I test it (a similar curl command example like the above)?
I use ActiveMQ 5.9.1 and Mac OS 10.9.4
Uncomment the following section of conf/jetty.xml.
<!--
Enable this connector if you wish to use https with web console
-->
<!--
<bean id="SecureConnector" class="org.eclipse.jetty.server.ssl.SslSelectChannelConnector">
<property name="port" value="8162" />
<property name="keystore" value="file:${activemq.conf}/broker.ks" />
<property name="password" value="password" />
</bean>
-->
Jetty powers not only the web console, but all HTTP transports in ActiveMQ.
It should work out of the box for testing, but you will probably want to roll your own keystore/certificate for real use.
You can use curl as before on port 8162 with HTTPS, provided you supply the "insecure" flag -k.
Otherwise, you need to create a trust store in PEM format and supply it - see this SO answer for details. Curl accepts the argument --cacert <filename.pem> with your certificate or the issuing CA in it.
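Putting that together, the same REST POST from the question can be sent over TLS on port 8162. A sketch, assuming the default demo keystore from jetty.xml (self-signed, hence -k to skip verification) and the same TEST queue as above:

```shell
# Post a message over HTTPS; -k skips certificate verification for the self-signed demo cert
curl -k -u admin:admin -d "body=message" "https://localhost:8162/api/message/TEST?type=queue"

# With a real certificate, supply the issuing CA in PEM format instead of -k
curl --cacert ca.pem -u admin:admin -d "body=message" "https://localhost:8162/api/message/TEST?type=queue"
```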
Note: This is not a question, I'm providing information that may help others.
Hi all,
I recently spent way too much time beating my head against the keyboard trying to work out how to connect Nifi to a Nifi registry in a corporate environment. After eventually working it out I thought I'd post my findings here to save the next poor soul that comes along seeking help with Nifi and Nifi registry.
Apologies in advance for the long post, but I thought the details would be useful.
I had a requirement to setup containerised instances of Nifi and Nifi-registry, both backed by LDAP, leveraging corporate SSL certificates and using an internal Container registry (no direct Internet access). As of this morning, this is now working, here's an overview of how I got it to work on RHEL 8 servers:
In a corporate environment the hosts need SSL certs setup for HTTPS, and to ensure they can communicate securely.
SSL Cert setup
Generate SSL private keys for each host in a Java keystore on the respective machines
Generate CSRs from the keystores, with appropriate SANs as required
Get CSRs Signed - Ensure that the "Client Auth" and "Server Auth" Extended Key Usage attributes are set for the Nifi cert (This is required for Nifi to successfully connect to a Nifi Registry). The registry cert just needs the Server Auth attribute.
Import corporate CA chain into the keystores, to ensure full trust chain of the signed cert is resolvable
Create a Java keystore (truststore) containing the CA cert chain
I can provide further details of the above steps if needed
Now that we have some SSL certs, the steps to setup the containers were as follows:
Container setup
Install podman (or docker if you prefer)
For Podman - Update the /etc/containers/registries.conf to turn off the default container registries
For Podman - Update /usr/share/containers/libpod.conf to replace the path to the pause container with the path to the container in our internal registry
Setup folders for the containers, ensuring they have an SELinux file context of "container_file_t", and have permissions of 1000:1000 (UID & GID of nifi user in the containers).
Setup an ENV file to define all of the environment variables to pass to the containers (there's a lot for Nifi and the Registry, they each share this info). This saves a lot of CLI parameters, and stops passwords appearing in the process list (note password encryption for nifi is possible, but not covered in this post).
KEYSTORE_PATH=/path/to/keystore.jks
TRUSTSTORE_PATH=/path/to/truststore.jks
KEYSTORE_TYPE=JKS
TRUSTSTORE_TYPE=JKS
KEYSTORE_PASSWORD=InsertPasswordHere
TRUSTSTORE_PASSWORD=InsertPasswordHere
LDAP_AUTHENTICATION_STRATEGY=LDAPS
LDAP_MANAGER_DN=CN=service account,OU=folder its in,DC=domain,DC=com
LDAP_MANAGER_PASSWORD=InsertPasswordHere
LDAP_TLS_KEYSTORE=/path/to/keystore.jks
LDAP_TLS_TRUSTSTORE=/path/to/truststore.jks
LDAP_TLS_KEYSTORE_TYPE=JKS
LDAP_TLS_TRUSTSTORE_TYPE=JKS
LDAP_TLS_KEYSTORE_PASSWORD=InsertPasswordHere
LDAP_TLS_TRUSTSTORE_PASSWORD=InsertPasswordHere
LDAP_TLS_PROTOCOL=TLSv1.2
INITIAL_ADMIN_IDENTITY=YourUsername
AUTH=ldap
LDAP_URL=ldaps://dc.domain.com:636
LDAP_USER_SEARCH_BASE=OU=user folder,DC=domain,DC=com
LDAP_USER_SEARCH_FILTER=cn={0}
LDAP_IDENTITY_STRATEGY=USE_USERNAME
Start both the Nifi & Nifi-Registry containers, and copy out the contents of their respective conf folders to the host (/opt/nifi-registry/nifi-registry-current/conf and /opt/nifi/nifi-current/conf). This allows us to customise and persist the configuration.
Modify the conf/authorizers.xml file for both Nifi and the Nifi-registry to set up LDAP authentication, and add a composite auth provider (allowing both local & LDAP users). We need both in order to add local user accounts for any Nifi nodes connecting to the registry (this can be done via LDAP, but is easier this way).
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<authorizers>
<userGroupProvider>
<identifier>file-user-group-provider</identifier>
<class>org.apache.nifi.authorization.FileUserGroupProvider</class>
<property name="Users File">./conf/users.xml</property>
<property name="Legacy Authorized Users File"></property>
<!--<property name="Initial User Identity 1"></property>-->
</userGroupProvider>
<userGroupProvider>
<identifier>ldap-user-group-provider</identifier>
<class>org.apache.nifi.ldap.tenants.LdapUserGroupProvider</class>
<property name="Authentication Strategy">LDAPS</property>
<property name="Manager DN">CN=service account,OU=folder its in,DC=domain,DC=com</property>
<property name="Manager Password">InsertPasswordHere</property>
<property name="TLS - Keystore">/path/to/keystore.jks</property>
<property name="TLS - Keystore Password">InsertPasswordHere</property>
<property name="TLS - Keystore Type">JKS</property>
<property name="TLS - Truststore">/path/to/truststore.jks</property>
<property name="TLS - Truststore Password">InsertPasswordHere</property>
<property name="TLS - Truststore Type">JKS</property>
<property name="TLS - Client Auth">WANT</property>
<property name="TLS - Protocol">TLS</property>
<property name="TLS - Shutdown Gracefully">true</property>
<property name="Referral Strategy">FOLLOW</property>
<property name="Connect Timeout">10 secs</property>
<property name="Read Timeout">10 secs</property>
<property name="Url">ldaps://dc.domain.com:636</property>
<property name="Page Size"/>
<property name="Sync Interval">30 mins</property>
<property name="User Search Base">OU=user folder,DC=domain,DC=com</property>
<property name="User Object Class">user</property>
<property name="User Search Scope">ONE_LEVEL</property>
<property name="User Search Filter"/>
<property name="User Identity Attribute">cn</property>
<property name="Group Search Base">OU=group folder,DC=domain,DC=com</property>
<property name="Group Object Class">group</property>
<property name="Group Search Scope">ONE_LEVEL</property>
<property name="Group Search Filter"/>
<property name="Group Name Attribute">cn</property>
<property name="Group Member Attribute">member</property>
<property name="Group Member Attribute - Referenced User Attribute"/>
</userGroupProvider>
<userGroupProvider>
<identifier>composite-user-group-provider</identifier>
<class>org.apache.nifi.authorization.CompositeConfigurableUserGroupProvider</class>
<property name="Configurable User Group Provider">file-user-group-provider</property>
<property name="User Group Provider 1">ldap-user-group-provider</property>
</userGroupProvider>
<accessPolicyProvider>
<identifier>file-access-policy-provider</identifier>
<class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
<property name="User Group Provider">composite-user-group-provider</property>
<property name="Authorizations File">./conf/authorizations.xml</property>
<property name="Initial Admin Identity">YourUsername</property>
<property name="Legacy Authorized Users File"></property>
<property name="Node Identity 1">DN of Nifi Instance (OPTIONAL - more details on this later)</property>
<property name="Node Group"></property>
</accessPolicyProvider>
<authorizer>
<identifier>managed-authorizer</identifier>
<class>org.apache.nifi.authorization.StandardManagedAuthorizer</class>
<property name="Access Policy Provider">file-access-policy-provider</property>
</authorizer>
</authorizers>
Performance Mod - Optional - Modify conf/bootstrap.conf to increase the Java Heap Size (if required). Also update Security limits (files & process limits).
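For the heap change, the relevant conf/bootstrap.conf lines look like the following (the sizes are examples only; pick values that fit your host):

```properties
# conf/bootstrap.conf - JVM memory settings
java.arg.2=-Xms2g
java.arg.3=-Xmx4g
```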
Extract the OS Java keystore from the containers, and add the corporate cert chain to it. Note: Nifi and nifi-registry java keystores are in slightly different locations in the containers. I needed to inject CA certs into these keystores to ensure Nifi processors can resolve SSL trust chains (I needed this primarily for a number of custom nifi processors we wrote which interrogated LDAP).
Run the containers, mounting volumes for persistent data and include your certs folder and the OS Java keystores:
podman run --name nifi-registry \
--hostname=$(hostname) \
-p 18443:18443 \
--restart=always \
-v /path/to/certs:/path/to/certs \
-v /path/to/OS/Java/Keystore:/usr/local/openjdk-8/jre/lib/security/cacerts:ro \
-v /path/to/nifi-registry/conf:/opt/nifi-registry/nifi-registry-current/conf \
-v /path/to/nifi-registry/database:/opt/nifi-registry/nifi-registry-current/database \
-v /path/to/nifi-registry/extension_bundles:/opt/nifi-registry/nifi-registry-current/extension_bundles \
-v /path/to/nifi-registry/flow_storage:/opt/nifi-registry/nifi-registry-current/flow_storage \
-v /path/to/nifi-registry/logs:/opt/nifi-registry/nifi-registry-current/logs \
--env-file /path/to/.env/file \
-d \
corporate.container.registry/apache/nifi-registry:0.7.0
podman run --name nifi \
--hostname=$(hostname) \
-p 443:8443 \
--restart=always \
-v /path/to/certs:/path/to/certs \
-v /path/to/certs/cacerts:/usr/local/openjdk-8/lib/security/cacerts:ro \
-v /path/to/nifi/logs:/opt/nifi/nifi-current/logs \
-v /path/to/nifi/conf:/opt/nifi/nifi-current/conf \
-v /path/to/nifi/database_repository:/opt/nifi/nifi-current/database_repository \
-v /path/to/nifi/flowfile_repository:/opt/nifi/nifi-current/flowfile_repository \
-v /path/to/nifi/content_repository:/opt/nifi/nifi-current/content_repository \
-v /path/to/nifi/provenance_repository:/opt/nifi/nifi-current/provenance_repository \
-v /path/to/nifi/state:/opt/nifi/nifi-current/state \
-v /path/to/nifi/extensions:/opt/nifi/nifi-current/extensions \
--env-file /path/to/.env/file \
-d \
corporate.container.registry/apache/nifi:1.11.4
Note: Please ensure that the SELinux contexts (if applicable to your OS), and permissions (1000:1000) are correct for the mounted volumes prior to starting the containers.
Configuring the Containers
Browse to https://hostname.domain.com/nifi (we redirected 8443 to 443) and https://hostname2.domain.com:18443/nifi-registry
Login to both as the initial admin identity you provided in the config files
Add a new user account using the full DN of the SSL certificate, e.g. CN=machinename, OU=InfoTech, O=Big Company, C=US. This account is needed on both ends for Nifi & the registry to connect, and getting the name exactly right is important. There's probably an easier way to determine the DN, but I reverse engineered it by inspecting the cert in a browser: I took everything listed under the "Subject Name" heading and wrote it out from the bottom entry up.
Set permissions for the account in nifi, adding "Proxy User Request", "Access the controller (view)" and "Access the controller (modify)".
Set permissions for account in nifi registry, adding "Can proxy user request", "Read buckets".
Set other user/group permissions as needed
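Regarding determining the certificate DN for the account above: openssl can print a certificate's subject in exactly that bottom-up order via -nameopt RFC2253. A sketch using a throwaway self-signed cert as a stand-in for the real Nifi cert (hostnames and DN values are placeholders; the commented s_client variant reads the cert from a live server instead):

```shell
# Create a throwaway self-signed cert to demonstrate (stands in for the real Nifi cert)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/C=US/O=Big Company/OU=InfoTech/CN=machinename" \
  -keyout /tmp/demo.key -out /tmp/demo.crt

# RFC2253 ordering matches the "bottom entry up" DN form Nifi expects
openssl x509 -in /tmp/demo.crt -noout -subject -nameopt RFC2253

# Against a live server:
# echo | openssl s_client -connect hostname.domain.com:8443 2>/dev/null \
#   | openssl x509 -noout -subject -nameopt RFC2253
```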
Setup and Connect to the Registry
Create a bucket in Nifi Registry
In Nifi (Controller Settings -> Registry Clients), add the URL of the registry: https://hostname.domain.com:18443.
Select a Processor or Process group, right-click, Version -> Start Version Control
That should be it!
I found that Nifi is terrible at communicating errors when connecting to the registry. I got a range of errors whilst attempting to connect. The only way to get useful errors is to add a new entry to conf/bootstrap.conf on the Nifi registry (replace XX with the next unused argument number):
java.arg.XX=-Djavax.net.debug=ssl,handshake
After restarting the Nifi Registry container you should start seeing SSL debug information in logs/nifi-registry-bootstrap.log.
e.g. When Nifi was reporting "Unknown Certificate", the Nifi Registry debug logs contained:
INFO [NiFi logging handler] org.apache.nifi.registry.StdOut sun.security.validator.ValidatorException: Extended key usage does not permit use for TLS client authentication
I hope this is helpful.
I have ActiveMQ 5.15.13 running in my localhost with jolokia without any problem:
# wget --user admin --password admin --header "Origin: http://localhost" --auth-no-challenge http://localhost:8161/api/jolokia/read/org.apache.activemq:type=Broker,brokerName=localhost
--2020-06-22 14:49:15-- http://localhost:8161/api/jolokia/read/org.apache.activemq:type=Broker,brokerName=localhost
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:8161... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/plain]
Saving to: ‘org.apache.activemq:type=Broker,brokerName=localhost.2’
org.apache.activemq:type=Broker,brokerName=localhost.2 [ <=> ] 2,24K --.-KB/s in 0s
2020-06-22 14:49:15 (175 MB/s) - ‘org.apache.activemq:type=Broker,brokerName=localhost.2’ saved [2291]
Hawtio 2.10.0 looks like it's ok, but when I try to connect to ActiveMQ I receive this message:
This Jolokia endpoint is unreachable. Please check the connection details and try again.
I checked the network inspector and I guess this is the problem:
Request URL: http://localhost:8161/hawtio/proxy/http/localhost/8161/api/jolokia/
After some changes to the URL I noticed that there's a hardcoded part of the URL:
http://localhost:8161/hawtio/proxy/
That part is always there, no matter what I do, and the other part:
http/localhost/8161/api/jolokia/
changes whenever I change the settings, but for some reason it gets appended to the proxy path instead of being used as the expected URL:
http://localhost:8161/api/jolokia/
These are the options I'm using:
ACTIVEMQ_OPTS="$ACTIVEMQ_OPTS_MEMORY -Dhawtio.disableProxy=true -Dhawtio.realm=activemq -Dhawtio.role=admins -Dhawtio.rolePrincipalClasses=org.apache.activemq.jaas.GroupPrincipal -Djava.util.logging.config.file=logging.properties -Djava.security.auth.login.config=$ACTIVEMQ_CONF/login.config"
How can I fix this issue?
Thanks in advance.
After reviewing a lot of "the same" procedures for installing Hawtio with ActiveMQ, every question I could find, and the documentation for both ActiveMQ and Hawtio, I finally found some information, from 6 years ago, that suggested an "extra step" when using Hawtio with ActiveMQ, and it fixed my issue.
I may be wrong, but from my point of view Hawtio has a bug where it uses the HOST URL as the base instead of the connection URL you set up. To fix the problem, just add the following lines to <ACTIVEMQ PATH>/conf/jetty.xml:
<bean class="org.eclipse.jetty.webapp.WebAppContext">
<property name="contextPath" value="/hawtio" />
<property name="resourceBase" value="${activemq.home}/webapps/hawtio" />
<property name="logUrlOnStart" value="true" />
</bean>
These lines should go inside:
<bean id="secHandlerCollection" class="org.eclipse.jetty.server.handler.HandlerCollection">
<property name="handlers">
<list>
<ref bean="rewriteHandler"/>
I have a ProGet server that currently uses SSL and requires a client certificate in order to communicate with it. We would like to be able to use this server directly from the command line or within the Visual Studio package manager.
When accessed via a browser there are no issues with viewing the repository. When using nuget.exe on the command line the result is 403 Forbidden. I have used Fiddler to monitor the request and it highlights that the server is asking for a client certificate, Fiddler allows you to inject the required certificate and the nuget request is then successful.
Is it possible to provide a client certificate when using NuGet:
nuget install PackageName -Source https://myhost -Cert ???
Or, with a setup like this, are we going to have to fall back to using an API key to gain access?
Are we able to provide the certificate when using Visual Studio?
Starting from NuGet 5.7.2 you can use the client-cert feature.
Configuration example:
<configuration>
...
<packageSources>
<add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />
<add key="Contoso" value="https://contoso.com/packages/" />
<add key="Example" value="https://example.com/bar/packages/" />
</packageSources>
...
<clientCertificates>
<storeCert packageSource="Contoso"
storeLocation="currentUser"
storeName="my"
findBy="thumbprint"
findValue="4894671ae5aa84840cc1079e89e82d426bc24ec6" />
<fileCert packageSource="Example"
path=".\certificate.pfx"
password="..." />
<fileCert packageSource="Bar"
path=".\certificate.pfx"
clearTextPassword="..." />
</clientCertificates>
...
</configuration>
You can also use the nuget client-certs CLI command for configuration.
Some years later, I have realised that I never posted the answer to this issue. In order to get NuGet to use certificates, the certificate had to be added to the Credential Manager in Windows as a certificate-based credential. NuGet then automatically picked it up when communicating with a matching URL.
I'm trying to use SSL with the JMX connector that Active MQ creates, but with no success. I'm able to get SSL working with the JVM platform JMX connector, but that requires storing keystore and truststore passwords plaintext, which is a no-go for our project.
Using the instructions here, I set up managementContext in activemq.xml as follows:
<managementContext>
<managementContext createConnector="true">
<property xmlns="http://www.springframework.org/schema/beans" name="environment">
<map xmlns="http://www.springframework.org/schema/beans">
<entry xmlns="http://www.springframework.org/schema/beans"
key="javax.net.ssl.keyStore"
value="${activemq.base}/conf/keystore.jks"/>
<entry xmlns="http://www.springframework.org/schema/beans"
key="javax.net.ssl.keyStorePassword"
value="${keystore.password}"/>
<entry xmlns="http://www.springframework.org/schema/beans"
key="javax.net.ssl.trustStore"
value="${activemq.base}/conf/truststore.jks"/>
<entry xmlns="http://www.springframework.org/schema/beans"
key="javax.net.ssl.trustStorePassword"
value="${truststore.password}"/>
</map>
</property>
</managementContext>
</managementContext>
This section seems to be completely ignored when the connector starts up. I can connect without credentials. I also tried using username and password authentication instead of ssl for JMX, as seen here, and that worked fine.
Has anyone seen this before? Any ideas? Thanks!
Have you enabled JMX SSL in the ActiveMQ launch scripts? On Windows, in the activemq-admin or activemq batch files, uncomment and modify the SUNJMX settings.
JMX authentication is independent of whether SSL is used. It is controlled by the authenticate attribute. By default it will use the JMX access files in your JRE, so re-point them with the system properties shown below. You may get an error message stating that the files themselves must be access-controlled, so set permissions with chmod on Unix or cacls on Windows. I would suggest turning off SSL and getting authentication to work first. You can test with jconsole over a remote connection to confirm that it wants credentials. Then follow up with the SSL stuff.
set SUNJMX=-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port=1199 -Dcom.sun.management.jmxremote.authenticate=true -Dcom.sun.management.jmxremote.ssl=true -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.password.file=%ACTIVEMQ_BASE%/conf/access/jmx.password -Dcom.sun.management.jmxremote.access.file=%ACTIVEMQ_BASE%/conf/access/jmx.access
I had the same issue regarding the ActiveMQ SSL configuration (keystore & password) in the XML not working.
My requirement was to enable remote JMX monitoring of ActiveMQ with SSL and authentication through a firewall.
I resolved it using a custom JMX connector (via a Java Agent), rather than using the JMX connector that Active MQ creates.
see: JMX connectivity through a firewall for an example (JMXAgent.java)
The important entries for configuring SSL in the JMXAgent.java are:
Map<String, Object> env = new HashMap<String, Object>();
// SSL-enabled RMI socket factories force TLS on the JMX connector
SslRMIClientSocketFactory csf = new SslRMIClientSocketFactory();
SslRMIServerSocketFactory ssf = new SslRMIServerSocketFactory();
env.put(RMIConnectorServer.RMI_CLIENT_SOCKET_FACTORY_ATTRIBUTE, csf);
env.put(RMIConnectorServer.RMI_SERVER_SOCKET_FACTORY_ATTRIBUTE, ssf);
You can also specify your authentication files in the env Map:
env.put("jmx.remote.x.password.file", System.getProperty("password.file","<default_path>"));
env.put("jmx.remote.x.access.file", System.getProperty("access.file","<default_path>"));
The Java Agent needs to be compiled and put into a jar with a valid manifest file as described here
Add the following to the ActiveMQ launch configuration (depending on your ActiveMQ version/environment) and run ActiveMQ:
-javaagent:<full_path_to_agent_jar_file> \
-Dpassword.file=<full_path_to_jmx.password_file> \
-Daccess.file=<full_path_to_jmx.access_file> \
-Djavax.net.ssl.keyStore=<full_path_to_keystore_file> \
-Djavax.net.ssl.keyStorePassword=<password>
You should then be able to connect through jconsole (with correct security parameters)
The remote JMX connection URL will be something like:
service:jmx:rmi://<host>:<rmi_server_port>/jndi/rmi://<host>:<port>/jmxrmi
Note - ports can be configured in the Java Agent.
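To connect jconsole through the agent with SSL, point its JVM at the truststore that trusts the broker's certificate; paths, ports and the password below are placeholders matching the agent configuration above:

```shell
jconsole \
  -J-Djavax.net.ssl.trustStore=/path/to/truststore.jks \
  -J-Djavax.net.ssl.trustStorePassword=password \
  "service:jmx:rmi://broker-host:1199/jndi/rmi://broker-host:1099/jmxrmi"
```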
Is there a way I can have multiple ssl certificates point to a single inputendpoint in a service definition? For example, lets say I have two url's.
service.foo.net/Service.svc
service.doo.net/Service.svc
I want both of these addresses to resolve to my windows azure service, but I'm not sure how to configure this in the service definition.
<Certificates>
<Certificate name="service.foo.net" storeLocation="LocalMachine" storeName="My" />
<Certificate name="service.doo.net" storeLocation="LocalMachine" storeName="My" />
</Certificates>
<Endpoints>
<InputEndpoint name="HttpsIn" protocol="https" port="443" certificate="service.foo.net" />
</Endpoints>
According to this MSDN article, each input endpoint must have a unique port. Is there any way to specify more than once certificate for this endpoint?
Unfortunately this is not possible. Azure is re-exposing an SSL limitation. The SSL limitation is interesting, and the reason you can't use v-hosts over SSL. Let's walk through an example:
You connect to https://ig2600.blogspot.com
That resolves to some ip address - say 8.8.8.8
Your browser now connects to 8.8.8.8
8.8.8.8 must present a certificate before your browser will send any data
the browser verifies the certificate presented is for ig2600.blogspot.com
You send the http request, which contains your domain name.
Since the server needs to present a certificate before you tell it the host name you want to talk to, the server can't know which certificate to use if multiple are present, thus you can only have a single cert.
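The SNI TLS extension (mentioned in the next answer) works around this by sending the hostname in the ClientHello, before the server picks a certificate. You can observe the difference with openssl; hostnames here are illustrative, and -noservername needs OpenSSL 1.1.1+:

```shell
# No SNI: the server has no hostname, so it must fall back to a default certificate
openssl s_client -connect ig2600.blogspot.com:443 -noservername </dev/null 2>/dev/null \
  | openssl x509 -noout -subject

# With SNI: the hostname travels in the ClientHello, so the server
# can choose the matching certificate among several
openssl s_client -connect ig2600.blogspot.com:443 -servername ig2600.blogspot.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject
```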
"Oliver Bock"'s answer may work for you, and "Igor Dvorkin"'s answer is no longer valid, since IIS 8 on Windows Server 2012 supports SNI, which enables you to add a "hostheader" to HTTPS bindings and have multiple SSL certificates for different domains listening on the same HTTPS port.
You need to automate the process of installing the certificates on the machine and add HTTPS bindings to IIS.
I'm a Microsoft Technical Evangelist and I have posted a detailed explanation and a sample "plug & play" source-code at:
http://www.vic.ms/microsoft/windows-azure/multiples-ssl-certificates-on-windows-azure-cloud-services/
This post indicates you will need a "multi domain certificate", which seems to be a certificate that can match multiple DNS names in step 5 of Igor's answer. I have not tried it, but presumably this certificate can be uploaded to Azure in the usual way.