Arquillian Cube and ArquillianResource url - jboss-arquillian

I have been trying to get the URL for my Tomcat server (running in a Docker container) that is used in my JUnit test, rather than just localhost. This matters because the test runs fine locally, but when it runs on our Jenkins node, which itself runs in Docker, localhost does not work.
I have configured the node to use Docker-on-Docker configs. I need the URL to use the IP of the parent Docker host. The odd thing is that the JMX URL works fine for deploying the test WAR; it is the URL used by the unit test itself that has the issue. I rewrote the test with the IP hard-coded and that worked, but it is not a good solution if devs here want to run the test locally.
I also tried using @CubeIp, @DockerUrl and @HostIp, but they either returned localhost or null, saying it cannot find the container "tomcat".
Any ideas?
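For reference, here is roughly what the injection points look like in the test (a trimmed sketch; the class name and archive are placeholders):
import java.net.URL;

import org.arquillian.cube.CubeIp;
import org.arquillian.cube.HostIp;
import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.arquillian.test.api.ArquillianResource;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class PerfStatsIT {

    @Deployment(testable = false)
    public static WebArchive createDeployment() {
        // The real archive adds classes and resources; trimmed here.
        return ShrinkWrap.create(WebArchive.class, "test.war");
    }

    // Injected by Arquillian; on the Jenkins node this resolves to localhost
    // rather than the parent docker host's IP, which is the problem above.
    @ArquillianResource
    private URL baseUrl;

    // The alternative injections I tried; these came back as localhost or null
    // ("cannot find the container tomcat").
    @HostIp
    private String hostIp;

    @CubeIp(containerName = "tomcat")
    private String tomcatIp;

    @Test
    public void hitsTheDeployedWar() throws Exception {
        // HTTP calls against baseUrl go here; hard-coding the parent host's IP
        // works but is not portable for devs running the test locally.
    }
}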
Here is my arquillian.xml
<extension qualifier="cube">
  <property name="connectionMode">STARTORCONNECTANDLEAVE</property>
</extension>
<extension qualifier="docker">
  <property name="serverVersion">1.14</property>
  <property name="serverUri">unix:///var/run/docker.sock</property>
  <!--<property name="serverUri">localhost:2375</property>-->
  <property name="dockerInsideDockerResolution">false</property>
  <property name="definitionFormat">CUBE</property>
  <property name="dockerContainersFile">docker-compose.yml</property>
  <property name="dockerRegistry">https://internalnexus.com:5000/</property>
  <property name="username">user</property>
  <property name="password">pass</property>
  <property name="email">email</property>
</extension>
<container qualifier="tomcat" default="true">
  <configuration>
    <property name="host">10.0.20.1</property>
    <property name="httpPort">8080</property>
    <property name="user">user</property>
    <property name="pass">pass</property>
  </configuration>
</container>
And here is my docker compose file
tomcat:
  image: internalnexus.com:5000/perf-tomcat:latest
  exposedPorts: [8080/tcp, 8089/tcp]
  alwaysPull: false
  await:
    strategy: polling
  env: [TOMCAT_PASS=mypass, JAVA_OPTS=-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=8089 -Dcom.sun.management.jmxremote.rmi.port=8089 -Dcom.sun.management.jmxremote.ssl=false -Dspring.config.location=/usr/local/tomcat/conf/application.properties]
  portBindings: [8089/tcp, 8080/tcp]
  links:
    - database:database
database:
  image: internalnexus.com:5000/perfstats-sqlserver:latest
  exposedPorts: [1433/tcp]
  env: [SA_PASSWORD=pass, ACCEPT_EULA=Y]
  await:
    strategy: log
    match: 'ms sql server is done'
    stdOut: true
    stdErr: true
    timeout: 30
  portBindings: [1433/tcp]

I have one question: why do you need the parent docker host IP?
As I understand it, the parent docker host IP is the IP of the docker host the Jenkins node is using, and that IP is what external callers need to reach Jenkins. Inside that docker instance, are you running another docker host, or are you reusing the docker host where Jenkins is running?

Related

Connecting a containerised LDAP-backed Nifi to a containerised Nifi Registry via third party SSL certificates

Note: This is not a question, I'm providing information that may help others.
Hi all,
I recently spent way too much time beating my head against the keyboard trying to work out how to connect Nifi to a Nifi registry in a corporate environment. After eventually working it out I thought I'd post my findings here to save the next poor soul that comes along seeking help with Nifi and Nifi registry.
Apologies in advance for the long post, but I thought the details would be useful.
I had a requirement to set up containerised instances of Nifi and Nifi-registry, both backed by LDAP, leveraging corporate SSL certificates and using an internal container registry (no direct Internet access). As of this morning it is working; here's an overview of how I got it running on RHEL 8 servers.
In a corporate environment the hosts need SSL certs set up for HTTPS and to ensure they can communicate securely.
SSL Cert setup
Generate SSL private keys for each host in a Java keystore on the respective machines
Generate CSRs from the keystores, with appropriate SANs as required
Get CSRs Signed - Ensure that the "Client Auth" and "Server Auth" Extended Key Usage attributes are set for the Nifi cert (This is required for Nifi to successfully connect to a Nifi Registry). The registry cert just needs the Server Auth attribute.
Import corporate CA chain into the keystores, to ensure full trust chain of the signed cert is resolvable
Create a Java keystore (truststore) containing the CA cert chain
I can provide further details of the above steps if needed
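A quick way to sanity-check the signed certs before going further is a small Java sketch like the one below (not part of the original setup; the keystore path and password are passed on the command line). It prints whether each certificate in a keystore carries the Server Auth and Client Auth Extended Key Usage OIDs:
import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.X509Certificate;
import java.util.Collections;
import java.util.Enumeration;
import java.util.List;

public class CheckEku {
    // OIDs for the Extended Key Usage attributes mentioned above.
    private static final String SERVER_AUTH = "1.3.6.1.5.5.7.3.1";
    private static final String CLIENT_AUTH = "1.3.6.1.5.5.7.3.2";

    public static void main(String[] args) throws Exception {
        String path = args[0];                    // e.g. /path/to/keystore.jks
        char[] password = args[1].toCharArray();  // keystore password

        KeyStore ks = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(path)) {
            ks.load(in, password);
        }
        for (Enumeration<String> aliases = ks.aliases(); aliases.hasMoreElements(); ) {
            String alias = aliases.nextElement();
            if (!(ks.getCertificate(alias) instanceof X509Certificate)) continue;
            X509Certificate cert = (X509Certificate) ks.getCertificate(alias);
            List<String> eku = cert.getExtendedKeyUsage();   // null if no EKU extension present
            if (eku == null) eku = Collections.emptyList();
            System.out.printf("%s -> serverAuth=%s clientAuth=%s%n",
                    cert.getSubjectX500Principal().getName(),
                    eku.contains(SERVER_AUTH), eku.contains(CLIENT_AUTH));
        }
    }
}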
Now that we have some SSL certs, the steps to set up the containers were as follows:
Container setup
Install podman (or docker if you prefer)
For Podman - Update the /etc/containers/registries.conf to turn off the default container registries
For Podman - Update /usr/share/containers/libpod.conf to replace the path to the pause container with the path to the container in our internal registry
Set up folders for the containers, ensuring they have an SELinux file context of "container_file_t" and permissions of 1000:1000 (UID & GID of the nifi user in the containers).
Set up an ENV file defining all of the environment variables to pass to the containers (there are a lot for Nifi and the Registry; they share this info). This saves a lot of CLI parameters and stops passwords appearing in the process list (password encryption for Nifi is possible, but not covered in this post):
KEYSTORE_PATH=/path/to/keystore.jks
TRUSTSTORE_PATH=/path/to/truststore.jks
KEYSTORE_TYPE=JKS
TRUSTSTORE_TYPE=JKS
KEYSTORE_PASSWORD=InsertPasswordHere
TRUSTSTORE_PASSWORD=InsertPasswordHere
LDAP_AUTHENTICATION_STRATEGY=LDAPS
LDAP_MANAGER_DN=CN=service account,OU=folder its in,DC=domain,DC=com
LDAP_MANAGER_PASSWORD=InsertPasswordHere
LDAP_TLS_KEYSTORE=/path/to/keystore.jks
LDAP_TLS_TRUSTSTORE=/path/to/truststore.jks
LDAP_TLS_KEYSTORE_TYPE=JKS
LDAP_TLS_TRUSTSTORE_TYPE=JKS
LDAP_TLS_KEYSTORE_PASSWORD=InsertPasswordHere
LDAP_TLS_TRUSTSTORE_PASSWORD=InsertPasswordHere
LDAP_TLS_PROTOCOL=TLSv1.2
INITIAL_ADMIN_IDENTITY=YourUsername
AUTH=ldap
LDAP_URL=ldaps://dc.domain.com:636
LDAP_USER_SEARCH_BASE=OU=user folder,DC=domain,DC=com
LDAP_USER_SEARCH_FILTER=cn={0}
LDAP_IDENTITY_STRATEGY=USE_USERNAME
Start both the Nifi & Nifi-Registry containers, and copy out the contents of their respective conf folders to the host (/opt/nifi-registry/nifi-registry-current/conf and /opt/nifi/nifi-current/conf). This allows us to customise and persist the configuration.
Modify the conf/authorizers.xml file for both Nifi and the Nifi-registry to set up LDAP authentication and add a composite auth provider (allowing both local and LDAP users). We need both in order to add local user accounts for any Nifi nodes connecting to the registry (this can be done via LDAP, but is easier this way):
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<authorizers>
  <userGroupProvider>
    <identifier>file-user-group-provider</identifier>
    <class>org.apache.nifi.authorization.FileUserGroupProvider</class>
    <property name="Users File">./conf/users.xml</property>
    <property name="Legacy Authorized Users File"></property>
    <!--<property name="Initial User Identity 1"></property>-->
  </userGroupProvider>
  <userGroupProvider>
    <identifier>ldap-user-group-provider</identifier>
    <class>org.apache.nifi.ldap.tenants.LdapUserGroupProvider</class>
    <property name="Authentication Strategy">LDAPS</property>
    <property name="Manager DN">CN=service account,OU=folder its in,DC=domain,DC=com</property>
    <property name="Manager Password">InsertPasswordHere</property>
    <property name="TLS - Keystore">/path/to/keystore.jks</property>
    <property name="TLS - Keystore Password">InsertPasswordHere</property>
    <property name="TLS - Keystore Type">JKS</property>
    <property name="TLS - Truststore">/path/to/truststore.jks</property>
    <property name="TLS - Truststore Password">InsertPasswordHere</property>
    <property name="TLS - Truststore Type">jks</property>
    <property name="TLS - Client Auth">WANT</property>
    <property name="TLS - Protocol">TLS</property>
    <property name="TLS - Shutdown Gracefully">true</property>
    <property name="Referral Strategy">FOLLOW</property>
    <property name="Connect Timeout">10 secs</property>
    <property name="Read Timeout">10 secs</property>
    <property name="Url">ldaps://dc.domain.com:636</property>
    <property name="Page Size"/>
    <property name="Sync Interval">30 mins</property>
    <property name="User Search Base">OU=user folder,DC=domain,DC=com</property>
    <property name="User Object Class">user</property>
    <property name="User Search Scope">ONE_LEVEL</property>
    <property name="User Search Filter"/>
    <property name="User Identity Attribute">cn</property>
    <property name="Group Search Base">OU=group folder,DC=domain,DC=com</property>
    <property name="Group Object Class">group</property>
    <property name="Group Search Scope">ONE_LEVEL</property>
    <property name="Group Search Filter"/>
    <property name="Group Name Attribute">cn</property>
    <property name="Group Member Attribute">member</property>
    <property name="Group Member Attribute - Referenced User Attribute"/>
  </userGroupProvider>
  <userGroupProvider>
    <identifier>composite-user-group-provider</identifier>
    <class>org.apache.nifi.authorization.CompositeConfigurableUserGroupProvider</class>
    <property name="Configurable User Group Provider">file-user-group-provider</property>
    <property name="User Group Provider 1">ldap-user-group-provider</property>
  </userGroupProvider>
  <accessPolicyProvider>
    <identifier>file-access-policy-provider</identifier>
    <class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
    <property name="User Group Provider">composite-user-group-provider</property>
    <property name="Authorizations File">./conf/authorizations.xml</property>
    <property name="Initial Admin Identity">YourUsername</property>
    <property name="Legacy Authorized Users File"></property>
    <property name="Node Identity 1">DN of Nifi Instance (OPTIONAL - more details on this later)</property>
    <property name="Node Group"></property>
  </accessPolicyProvider>
  <authorizer>
    <identifier>managed-authorizer</identifier>
    <class>org.apache.nifi.authorization.StandardManagedAuthorizer</class>
    <property name="Access Policy Provider">file-access-policy-provider</property>
  </authorizer>
</authorizers>
Performance Mod - Optional - Modify conf/bootstrap.conf to increase the Java Heap Size (if required). Also update Security limits (files & process limits).
Extract the OS Java keystore from the containers, and add the corporate cert chain to it. Note: Nifi and nifi-registry java keystores are in slightly different locations in the containers. I needed to inject CA certs into these keystores to ensure Nifi processors can resolve SSL trust chains (I needed this primarily for a number of custom nifi processors we wrote which interrogated LDAP).
Run the containers, mounting volumes for persistent data and including your certs folder and the OS Java keystores:
podman run --name nifi-registry \
--hostname=$(hostname) \
-p 18443:18443 \
--restart=always \
-v /path/to/certs:/path/to/certs \
-v /path/to/OS/Java/Keystore:/usr/local/openjdk-8/jre/lib/security/cacerts:ro \
-v /path/to/nifi-registry/conf:/opt/nifi-registry/nifi-registry-current/conf \
-v /path/to/nifi-registry/database:/opt/nifi-registry/nifi-registry-current/database \
-v /path/to/nifi-registry/extension_bundles:/opt/nifi-registry/nifi-registry-current/extension_bundles \
-v /path/to/nifi-registry/flow_storage:/opt/nifi-registry/nifi-registry-current/flow_storage \
-v /path/to/nifi-registry/logs:/opt/nifi-registry/nifi-registry-current/logs \
--env-file /path/to/.env/file \
-d \
corporate.container.registry/apache/nifi-registry:0.7.0
podman run --name nifi \
--hostname=$(hostname) \
-p 443:8443 \
--restart=always \
-v /path/to/certs:/path/to/certs \
-v /path/to/certs/cacerts:/usr/local/openjdk-8/lib/security/cacerts:ro \
-v /path/to/nifi/logs:/opt/nifi/nifi-current/logs \
-v /path/to/nifi/conf:/opt/nifi/nifi-current/conf \
-v /path/to/nifi/database_repository:/opt/nifi/nifi-current/database_repository \
-v /path/to/nifi/flowfile_repository:/opt/nifi/nifi-current/flowfile_repository \
-v /path/to/nifi/content_repository:/opt/nifi/nifi-current/content_repository \
-v /path/to/nifi/provenance_repository:/opt/nifi/nifi-current/provenance_repository \
-v /path/to/nifi/state:/opt/nifi/nifi-current/state \
-v /path/to/nifi/extensions:/opt/nifi/nifi-current/extensions \
--env-file /path/to/.env/file \
-d \
corporate.container.registry/apache/nifi:1.11.4
Note: Please ensure that the SELinux contexts (if applicable to your OS), and permissions (1000:1000) are correct for the mounted volumes prior to starting the containers.
Configuring the Containers
Browse to https://hostname.domain.com/nifi (we redirected 8443 to 443) and https://hostname2.domain.com:18443/nifi-registry
Login to both as the initial admin identity you provided in the config files
Add a new user account using the full DN of the SSL certificate, e.g. CN=machinename, OU=InfoTech, O=Big Company, C=US. This account is needed on both ends for Nifi and the registry to connect, and getting the name exactly right is important. There's probably an easier way to determine the DN, but I reverse-engineered it after inspecting the cert in a browser: I took everything listed under the "Subject Name" heading and wrote it out from the bottom entry up (see the short sketch after these steps for another way to print it).
Set permissions for the account in nifi, adding "Proxy User Request", "Access the controller (view)" and "Access the controller (modify)".
Set permissions for the account in nifi registry, adding "Can proxy user request" and "Read buckets".
Set other user/group permissions as needed
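As mentioned above, one way to print the certificate's DN is a few lines of Java (a sketch; it reads a DER/PEM cert exported from the browser, path passed on the command line). Note the exact whitespace Nifi expects may differ slightly, so compare the output against what appears in the application/user logs when the connection is rejected:
import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class PrintCertDn {
    public static void main(String[] args) throws Exception {
        // args[0] = path to the cert exported from the browser, e.g. nifi-node.cer
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        try (FileInputStream in = new FileInputStream(args[0])) {
            X509Certificate cert = (X509Certificate) cf.generateCertificate(in);
            // Prints the subject DN in RFC 2253 form,
            // e.g. CN=machinename,OU=InfoTech,O=Big Company,C=US
            System.out.println(cert.getSubjectX500Principal().getName());
        }
    }
}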
Setup and Connect to the Registry
Create a bucket in Nifi Registry
In Nifi (Controller Settings -> Registry Clients), add the url of the registry: https://hostname.domain.com:18443.
Select a Processor or Process group, right-click, Version -> Start Version Control
That should be it!
I found that Nifi is terrible at communicating errors when connecting to the registry; I got a range of errors whilst attempting to connect. The only way to get useful errors is to add a new entry to conf/bootstrap.conf on the nifi registry (XX is any unused java.arg number):
java.arg.XX=-Djavax.net.debug=ssl,handshake
After restarting the Nifi Registry container you should start seeing SSL debug information in logs/nifi-registry-bootstrap.log.
e.g. When Nifi was reporting "Unknown Certificate", the Nifi Registry debug logs contained:
INFO [NiFi logging handler] org.apache.nifi.registry.StdOut sun.security.validator.ValidatorException: Extended key usage does not permit use for TLS client authentication
I hope this is helpful.

Ignite Thin Client in Kubernetes

I'm trying to set up a distributed cache using Ignite and my java app through a thin client in a Kubernetes environment.
In my Kubernetes cluster, I have 2 pods with the java app and 2 pods of ignite. For the java pods to communicate with the ignite pods, I have configured a thin client to connect to the ignite kubernetes service. With this configuration, I was expecting the load balancing to be handled on the kubernetes side. Here's what I have done in java code:
ClientConfiguration cfg = new ClientConfiguration()
    .setAddresses("ignite-service.default.svc.cluster.local:10800")
    .setUserName("user")
    .setUserPassword("password");
IgniteClient igniteClient = Ignition.startClient(cfg);
While storing and getting objects from ignite, I deleted one of the ignite pods and, after a while, I was getting errors saying that "Ignite cluster is unavailable":
org.apache.ignite.client.ClientConnectionException: Ignite cluster is unavailable
With this behavior, I assume that the setAddresses method in the ClientConfiguration class resolves one of the pod IPs and channels all communication to that pod.
Is this what's happening in this method?
Ignite version 2.7
Kubernetes version 1.12.3
You need to pass several addresses to enable failover (i.e. automatic reconnect) on the thin client end. Find more details here.
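For example, mirroring the snippet from the question (a sketch: the two per-pod addresses assume a headless service in front of an Ignite StatefulSet; plain node IPs or several services work just as well):
import org.apache.ignite.Ignition;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

ClientConfiguration cfg = new ClientConfiguration()
    // Listing every reachable endpoint lets the thin client reconnect to a
    // surviving node when the one it is currently using goes away.
    .setAddresses("ignite-0.ignite-service.default.svc.cluster.local:10800",
                  "ignite-1.ignite-service.default.svc.cluster.local:10800")
    .setUserName("user")
    .setUserPassword("password");

IgniteClient igniteClient = Ignition.startClient(cfg);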
You may well have resolved the issue since the question was posted a long time back, but I am still putting an answer here for others.
With Apache Ignite 2.7+, you can modify your deployment to use the Kubernetes IP finder. With this, Kubernetes will take care of discovering and connecting all server and client nodes.
The TcpDiscoveryKubernetesIpFinder module will help you achieve this.
This is the discovery SPI that needs to be added to your configuration (replace the namespace and service name with your own):
<property name="discoverySpi">
  <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
    <property name="ipFinder">
      <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
        <constructor-arg>
          <bean class="org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration">
            <property name="namespace" value="default" />
            <property name="serviceName" value="ignite" />
          </bean>
        </constructor-arg>
      </bean>
    </property>
  </bean>
</property>
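If you build the configuration in Java rather than Spring XML, the equivalent looks roughly like this (a sketch; it needs the ignite-kubernetes module on the classpath, and the namespace/service name must match your deployment):
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder;

public class IgniteK8sNode {
    public static void main(String[] args) {
        KubernetesConnectionConfiguration k8sCfg = new KubernetesConnectionConfiguration();
        k8sCfg.setNamespace("default");   // replace with your namespace
        k8sCfg.setServiceName("ignite");  // replace with your Ignite service name

        // Discovery SPI backed by the Kubernetes IP finder, matching the XML above.
        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi()
            .setIpFinder(new TcpDiscoveryKubernetesIpFinder(k8sCfg));

        IgniteConfiguration cfg = new IgniteConfiguration().setDiscoverySpi(discoverySpi);
        Ignite ignite = Ignition.start(cfg);
    }
}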
Official documentation can be found here - https://ignite.apache.org/docs/latest/installation/kubernetes/amazon-eks-deployment

Apache Ignite SQLClient Connection from outside cluster

Apache Ignite is running on a 5-node Hadoop cluster. The Ignite Visor top command shows all the recognized nodes accurately. Only one node is exposed outside the cluster, as an edge node with an external IP. I am unable to connect to the Apache Ignite cluster from outside the cluster using the exposed IP of the edge node.
Working within cluster : jdbc:ignite:thin://127.0.0.1/
Working within cluster : jdbc:ignite:thin://internal-ip.labs.net/
Not Working Outside cluster : jdbc:ignite:thin://external-ip.labs.net/
Please advise whether any additional configuration is needed on the edge node to make the JDBC URL work with the external IP address as well. I am trying to do this in order to connect to the Ignite cluster from outside with a SQL client so that I can run SQL queries.
My Current Config
<bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
      <property name="ipFinder">
        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.sharedfs.TcpDiscoverySharedFsIpFinder">
          <property name="path" value="/storage/softwares/ignite/addresses"/>
        </bean>
      </property>
    </bean>
  </property>
</bean>
The Apache Ignite JDBC driver operates over port 10800 by default. You need to forward that port from the external IP to your Ignite node to be able to connect to the cluster using JDBC.
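Once that port is forwarded, the same thin-driver URL should work with the port made explicit. A minimal sketch (it assumes ignite-core on the classpath and reuses the placeholder hostname from the question):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class IgniteJdbcSmokeTest {
    public static void main(String[] args) throws Exception {
        // Register the Ignite thin JDBC driver (optional on modern JDKs, harmless otherwise).
        Class.forName("org.apache.ignite.IgniteJdbcThinDriver");

        // Port 10800 on the edge node's external IP must be forwarded to an Ignite server node.
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://external-ip.labs.net:10800/");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            rs.next();
            System.out.println("Connected, SELECT 1 returned " + rs.getInt(1));
        }
    }
}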

Apache Ignite - Node running on remote machine not discovered

Apache Ignite Version is: 2.1.0
I am using TcpDiscoveryVmIpFinder to configure the nodes in an Apache Ignite cluster to set up a compute grid. Below is my configuration, which is just the example-default.xml edited for the IP addresses:
<property name="discoverySpi">
  <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
    <property name="ipFinder">
      <!--
        Ignite provides several options for automatic discovery that can be used
        instead of static IP based discovery. For information on all options refer
        to our documentation: http://apacheignite.readme.io/docs/cluster-config
      -->
      <!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
      <!--<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">-->
      <!-- <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder"> -->
      <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
        <property name="addresses">
          <list>
            <!-- In distributed environment, replace with actual host IP address. -->
            <value>xxx.40.16.yyy:47500..47509</value>
            <value>xx.40.16.zzz:47500..47509</value>
          </list>
        </property>
      </bean>
    </property>
  </bean>
</property>
If I start multiple nodes on an individual machine, the nodes on that machine discover each other and form a cluster, but nodes on different machines do not discover each other.
Any advice would be helpful.
First of all, make sure that you really use this config file and not a default config. With default configuration, nodes can find each other only on the same machine.
Once you've checked that, you also need to test that it's possible to connect from host 106.40.16.64 to 106.40.16.121 (and vice versa) on ports 47500..47509. It's possible that a firewall is blocking connections or that these ports are simply closed.
For example, it's possible to check it with netcat, run this from 106.40.16.64 host:
nc -z 106.40.16.121 47500
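If netcat is not available on those hosts, the same reachability check can be done with a few lines of Java (the host and port below are the discovery address being tested):
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) {
        String host = "106.40.16.121";  // remote node from the answer above
        int port = 47500;               // first port of the discovery range
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 3000); // 3 second timeout
            System.out.println(host + ":" + port + " is reachable");
        } catch (IOException e) {
            System.out.println("Cannot reach " + host + ":" + port + " - " + e.getMessage());
        }
    }
}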

SharedRDD code for ignite works on setup of single server but fails with exception when additional server added

I have 2 server nodes running collocated with Spark workers. I am using a shared Ignite RDD to save my dataframe. My code works fine when only one server node is started; if I start both server nodes, the code fails with:
Grid is in invalid state to perform this operation. It either not started yet or has already being or have stopped [gridName=null, state=STOPPING]
The DiscoverySpi is configured as below:
<property name="discoverySpi">
  <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
    <property name="ipFinder">
      <!--
        Ignite provides several options for automatic discovery that can be used
        instead of static IP based discovery. For information on all options refer
        to our documentation: http://apacheignite.readme.io/docs/cluster-config
      -->
      <!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
      <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
        <!--<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">-->
        <property name="shared" value="true"/>
        <property name="addresses">
          <list>
            <!-- In distributed environment, replace with actual host IP address. -->
            <value>v-in-spark-01:47500..47509</value>
            <value>v-in-spark-02:47500..47509</value>
          </list>
        </property>
      </bean>
    </property>
  </bean>
</property>
I know this exception generally means that the Ignite instance is either not started yet or already stopped when the operation is attempted, but I don't think that is the case here: with a single server node it works fine, and I am not explicitly closing the Ignite instance in my program.
Also, in my code flow I perform operations in a transaction, which works. The sequence is:
create cache1: works fine
create cache2: works fine
put value in cache1: works fine
igniteRDD.saveValues on cache2: this step fails with the above-mentioned exception
Use this link for the complete error trace.
The "Caused by" part is also pasted below:
Caused by: java.lang.IllegalStateException: Grid is in invalid state to perform this operation. It either not started yet or has already being or have stopped [gridName=null, state=STOPPING]
at org.apache.ignite.internal.GridKernalGatewayImpl.illegalState(GridKernalGatewayImpl.java:190)
at org.apache.ignite.internal.GridKernalGatewayImpl.readLock(GridKernalGatewayImpl.java:90)
at org.apache.ignite.internal.IgniteKernal.guard(IgniteKernal.java:3151)
at org.apache.ignite.internal.IgniteKernal.getOrCreateCache(IgniteKernal.java:2739)
at org.apache.ignite.spark.impl.IgniteAbstractRDD.ensureCache(IgniteAbstractRDD.scala:39)
at org.apache.ignite.spark.IgniteRDD$$anonfun$saveValues$1.apply(IgniteRDD.scala:164)
at org.apache.ignite.spark.IgniteRDD$$anonfun$saveValues$1.apply(IgniteRDD.scala:161)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:883)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:883)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1897)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1897)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
... 3 more
It looks like the node embedded in the executor process is stopped for some reason while you are still trying to run the job. To my knowledge the only way for this to happen is to stop the executor process. Can this be the case? Is there anything in the log except the trace?