Failure on installing S3 connector for AEM 6.3 - amazon-s3

I am trying to connect an S3 data store following these instructions. I am getting the exact error described in this SOF question.
Steps:
Created a vanilla AEM 6.3 instance and was able to upload images to the DAM
Downloaded the S3 connector and copied all .jar files into the crx-quickstart/install folder
Copied the org.apache.jackrabbit.oak.segment.SegmentNodeStoreService.config file and set customBlobStore=B"true"
Copied the org.apache.jackrabbit.oak.plugins.blob.datastore.S3DataStore.config file, which looks like this:
accessKey="scribed" connectionTimeout="120000" maxConnections="40" maxErrorRetry="10" s3Bucket="myproj-s3bucket" s3Region="ap-southeast-1" s3EndPoint="https://scribed.signin.aws.amazon.com/console" secretKey="scribed" socketTimeout="120000" writeThreads="30" cacheSize="16GB" cachePurgeTrigFactory="1"
(The key and secret are redacted; "scribed" is a placeholder.)
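For reference, the expected layout is roughly the following (a sketch, assuming the .config files also go into crx-quickstart/install, which is where the Adobe documentation places the OSGi configs for the connector):

crx-quickstart/install/<connector .jar files>
crx-quickstart/install/org.apache.jackrabbit.oak.segment.SegmentNodeStoreService.config
crx-quickstart/install/org.apache.jackrabbit.oak.plugins.blob.datastore.S3DataStore.config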
When I restart AEM, none of the consoles start. It throws:
HTTP ERROR: 503 Problem accessing /. Reason: AuthenticationSupport service missing. Cannot authenticate request.
This is the exception trace:
15.05.2017 07:42:56.156 *INFO* [FelixStartLevel] org.apache.jackrabbit.oak.blob.cloud.s3.Utils Configuring Amazon Client from property file.
15.05.2017 07:42:59.401 *INFO* [FelixStartLevel] org.apache.jackrabbit.oak.blob.cloud.s3.Utils S3 service endpoint [https://170564245278.signin.aws.amazon.com/console]
15.05.2017 07:43:04.292 *ERROR* [FelixStartLevel] org.apache.jackrabbit.oak-blob-cloud [org.apache.jackrabbit.oak.plugins.blob.datastore.S3DataStore(2946)] The activate method has thrown an exception (java.lang.NullPointerException: null value in entry: component.id=null)
java.lang.NullPointerException: null value in entry: component.id=null
at com.google.common.collect.CollectPreconditions.checkEntryNotNull(CollectPreconditions.java:33)
at com.google.common.collect.ImmutableMap.entryOf(ImmutableMap.java:135)
at com.google.common.collect.ImmutableMap$Builder.put(ImmutableMap.java:206)
at com.google.common.collect.Maps.fromProperties(Maps.java:1187)
at org.apache.jackrabbit.oak.blob.cloud.s3.S3Backend.init(S3Backend.java:166)
at org.apache.jackrabbit.oak.plugins.blob.AbstractSharedCachingDataStore.init(AbstractSharedCachingDataStore.java:163)
at org.apache.jackrabbit.oak.plugins.blob.datastore.AbstractDataStoreService.activate(AbstractDataStoreService.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.felix.scr.impl.inject.BaseMethod.invokeMethod(BaseMethod.java:224)
at org.apache.felix.scr.impl.inject.BaseMethod.access$500(BaseMethod.java:39)
at org.apache.felix.scr.impl.inject.BaseMethod$Resolved.invoke(BaseMethod.java:617)
at org.apache.felix.scr.impl.inject.BaseMethod.invoke(BaseMethod.java:501)
at org.apache.felix.scr.impl.inject.ActivateMethod.invoke(ActivateMethod.java:302)
at org.apache.felix.scr.impl.inject.ActivateMethod.invoke(ActivateMethod.java:294)
at org.apache.felix.scr.impl.manager.SingleComponentManager.createImplementationObject(SingleComponentManager.java:298)
at org.apache.felix.scr.impl.manager.SingleComponentManager.createComponent(SingleComponentManager.java:109)
at org.apache.felix.scr.impl.manager.SingleComponentManager.getService(SingleComponentManager.java:906)
at org.apache.felix.scr.impl.manager.SingleComponentManager.getServiceInternal(SingleComponentManager.java:879)
at org.apache.felix.scr.impl.manager.AbstractComponentManager.activateInternal(AbstractComponentManager.java:749)
at org.apache.felix.scr.impl.manager.AbstractComponentManager.enableInternal(AbstractComponentManager.java:675)
at org.apache.felix.scr.impl.manager.AbstractComponentManager.enable(AbstractComponentManager.java:430)
at org.apache.felix.scr.impl.manager.ConfigurableComponentHolder.enableComponents(ConfigurableComponentHolder.java:657)
at org.apache.felix.scr.impl.BundleComponentActivator.initialEnable(BundleComponentActivator.java:341)
at org.apache.felix.scr.impl.Activator.loadComponents(Activator.java:390)
at org.apache.felix.scr.impl.Activator.access$200(Activator.java:54)
at org.apache.felix.scr.impl.Activator$ScrExtension.start(Activator.java:265)
at org.apache.felix.utils.extender.AbstractExtender.createExtension(AbstractExtender.java:259)
at org.apache.felix.utils.extender.AbstractExtender.modifiedBundle(AbstractExtender.java:232)
at org.osgi.util.tracker.BundleTracker$Tracked.customizerModified(BundleTracker.java:482)
at org.osgi.util.tracker.BundleTracker$Tracked.customizerModified(BundleTracker.java:415)
at org.osgi.util.tracker.AbstractTracked.track(AbstractTracked.java:232)
at org.osgi.util.tracker.BundleTracker$Tracked.bundleChanged(BundleTracker.java:444)
at org.apache.felix.framework.util.EventDispatcher.invokeBundleListenerCallback(EventDispatcher.java:916)
at org.apache.felix.framework.util.EventDispatcher.fireEventImmediately(EventDispatcher.java:835)
at org.apache.felix.framework.util.EventDispatcher.fireBundleEvent(EventDispatcher.java:517)
at org.apache.felix.framework.Felix.fireBundleEvent(Felix.java:4542)
at org.apache.felix.framework.Felix.startBundle(Felix.java:2173)
at org.apache.felix.framework.Felix.setActiveStartLevel(Felix.java:1372)
at org.apache.felix.framework.FrameworkStartLevelImpl.run(FrameworkStartLevelImpl.java:308)
at java.lang.Thread.run(Thread.java:745)
15.05.2017 07:43:04.308 *INFO* [FelixStartLevel] com.day.cq.cq-compat-codeupgrade BundleEvent RESOLVED
15.05.2017 07:43:04.310 *INFO* [FelixStartLevel] com.day.cq.cq-compat-codeupgrade BundleEvent STARTING
15.05.2017 07:43:04.310 *INFO* [FelixStartLevel] com.day.cq.cq-compat-codeupgrade BundleEvent STARTED
Am I missing any steps or configuration? Please help out.

I got the answer to my question with my lead's help, by comparing the working config against the failing one. This parameter was incorrect:
s3EndPoint="https://scribed.signin.aws.amazon.com/console"
This can be left blank, as the connector will rebuild the endpoint from s3Region; otherwise it should be the regional S3 endpoint (e.g. s3.ap-southeast-1.amazonaws.com), not the console sign-in URL. Since the error logs were throwing irrelevant errors, I was misled. Removing this one parameter made the difference.
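For reference, a minimal sketch of the S3DataStore.config that worked for this setup (same bucket and region as above; the key placeholders are hypothetical, and the s3EndPoint line is simply omitted so the connector derives the endpoint from s3Region):

accessKey="<your-access-key>"
secretKey="<your-secret-key>"
s3Bucket="myproj-s3bucket"
s3Region="ap-southeast-1"
connectionTimeout="120000"
socketTimeout="120000"
maxConnections="40"
maxErrorRetry="10"
writeThreads="30"
cacheSize="16GB"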
A second observation: while starting up, AEM initially does throw the error, but eventually it starts. You need to wait 3-4 minutes. In the logs I see connection refused during startup, but on subsequent requests, once all the config is loaded, it is able to connect and upload successfully.

Related

Unable to run s3 sink connector for persisting kafka data on Minio

I have a Kubernetes cluster running with minikube. Inside the cluster there is one Kafka pod, one ZooKeeper pod, and one Minio pod, each with its own service, and everything looks to be working properly. I have a topic called minio-topic working on Kafka, and Minio has one bucket called kafka-bucket. I have tried to run the s3-sink connector with these properties:
name=s3-sink
connector.class=io.confluent.connect.s3.S3SinkConnector
tasks.max=1
topics=minio_topic
s3.region=us-east-1
s3.bucket.name=kafka-bucket
s3.part.size=5242880
flush.size=3
store.url=http://l27.0.0.1:9000/
storage.class=io.confluent.connect.s3.storage.S3Storage
#format.class=io.confluent.connect.s3.format.avro.AvroFormat
schema.generator.class=io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator
format.class=io.confluent.connect.s3.format.json.JsonFormat
partitioner.class=io.confluent.connect.storage.partitioner.DefaultPartitioner
schema.compatibility=NONE
After running the connector, I am getting this error:
[2020-05-06 18:00:46,238] ERROR WorkerSinkTask{id=s3-sink-0} Task threw an uncaught and unrecoverable
exception (org.apache.kafka.connect.runtime.WorkerTask:179)
org.apache.kafka.connect.errors.ConnectException: java.lang.reflect.InvocationTargetException
at io.confluent.connect.storage.StorageFactory.createStorage(StorageFactory.java:55)
at io.confluent.connect.s3.S3SinkTask.start(S3SinkTask.java:99)
at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:301)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:189)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at io.confluent.connect.storage.StorageFactory.createStorage(StorageFactory.java:50)
... 10 more
Caused by: java.lang.IllegalArgumentException: hostname cannot be null
at com.amazonaws.util.AwsHostNameUtils.parseRegion(AwsHostNameUtils.java:79)
at com.amazonaws.util.AwsHostNameUtils.parseRegionName(AwsHostNameUtils.java:59)
at com.amazonaws.AmazonWebServiceClient.computeSignerByURI(AmazonWebServiceClient.java:277)
at com.amazonaws.AmazonWebServiceClient.setEndpoint(AmazonWebServiceClient.java:229)
at com.amazonaws.services.s3.AmazonS3Client.setEndpoint(AmazonS3Client.java:688)
at com.amazonaws.client.builder.AwsClientBuilder.setRegion(AwsClientBuilder.java:362)
at
com.amazonaws.client.builder.AwsClientBuilder.configureMutableProperties(AwsClientBuilder.java:337)
at com.amazonaws.client.builder.AwsSyncClientBuilder.build(AwsSyncClientBuilder.java:38)
at io.confluent.connect.s3.storage.S3Storage.newS3Client(S3Storage.java:96)
at io.confluent.connect.s3.storage.S3Storage.<init>(S3Storage.java:65)
... 15 more
The credentials are well defined in .aws/credentials. Does anyone know what the mistake in the configuration could possibly be?
The key error is: hostname cannot be null at com.amazonaws.util.AwsHostNameUtils.parseRegion
I suggest you read up on the Minio blog post about setting store.url correctly, and verify which region your Minio cluster thinks it's running in.
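As a sketch of the likely fix (assuming Minio really is listening locally on port 9000): the store.url in the question starts with a lowercase letter 'l' instead of the digit '1', which by itself would make the hostname unresolvable. The corrected property would be:

store.url=http://127.0.0.1:9000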

What's the right setup to replicate a remote docker registry on artifactory

I've got two Artifactory instances, with one serving as the primary Docker registry behind an Apache2 proxy. Now I'd like to have the second one also act as a Docker registry, but with a remote registry which points to the primary instance.
When trying that, I got this message when testing the active replication:
Error testing pull replication config: Unknown host 'api: Name or service not known
Here is the full stacktrace in the logs:
2017-10-26 15:30:58,004 [art-exec-3] [ERROR] (o.a.a.c.BasicStatusHolder:212) - Error occurred while performing folder replication for 'private-docker-registry:': api
java.net.UnknownHostException: api
at java.net.InetAddress.getAllByName0(InetAddress.java:1280) ~[na:1.8.0_121]
at java.net.InetAddress.getAllByName(InetAddress.java:1192) ~[na:1.8.0_121]
at java.net.InetAddress.getAllByName(InetAddress.java:1126) ~[na:1.8.0_121]
at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45) ~[httpclient-4.5.1.jar:4.5.1]
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:111) ~[httpclient-4.5.1.jar:4.5.1]
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353) ~[httpclient-4.5.1.jar:4.5.1]
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380) ~[httpclient-4.5.1.jar:4.5.1]
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) ~[httpclient-4.5.1.jar:4.5.1]
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184) ~[httpclient-4.5.1.jar:4.5.1]
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88) ~[httpclient-4.5.1.jar:4.5.1]
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) ~[httpclient-4.5.1.jar:4.5.1]
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184) ~[httpclient-4.5.1.jar:4.5.1]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:71) ~[httpclient-4.5.1.jar:4.5.1]
at org.jfrog.client.http.CloseableHttpClientDecorator.doExecute(CloseableHttpClientDecorator.java:90) ~[jfrog-http-client-1.2.4.jar:na]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82) ~[httpclient-4.5.1.jar:4.5.1]
at org.artifactory.repo.HttpRepo.doExecuteMethod(HttpRepo.java:493) ~[artifactory-core-5.2.0.jar:na]
at org.artifactory.repo.HttpRepo.executeMethod(HttpRepo.java:510) ~[artifactory-core-5.2.0.jar:na]
at org.artifactory.repo.HttpRepo.executeMethod(HttpRepo.java:461) ~[artifactory-core-5.2.0.jar:na]
at org.artifactory.addon.replication.core.context.RemoteReplicationRequestExecutor.execute(RemoteReplicationRequestExecutor.java:28) ~[artifactory-addon-replication-5.2.0.jar:na]
at org.artifactory.addon.replication.core.context.server.TargetServerInfoResolver.executeRequestAndSetDetails(TargetServerInfoResolver.java:92) ~[artifactory-addon-replication-5.2.0.jar:na]
at org.artifactory.addon.replication.core.context.server.TargetServerInfoResolver.resolveTargetInfo(TargetServerInfoResolver.java:49) ~[artifactory-addon-replication-5.2.0.jar:na]
at org.artifactory.addon.replication.core.BaseReplicationProducer.resolveTargetInfo(BaseReplicationProducer.java:92) ~[artifactory-addon-replication-5.2.0.jar:na]
at org.artifactory.addon.replication.core.BaseReplicationProducer.run(BaseReplicationProducer.java:78) ~[artifactory-addon-replication-5.2.0.jar:na]
at org.artifactory.addon.replication.core.remote.RemoteReplicator.replicate(RemoteReplicator.java:56) [artifactory-addon-replication-5.2.0.jar:na]
at org.artifactory.addon.replication.core.remote.RemoteReplicator.replicate(RemoteReplicator.java:29) [artifactory-addon-replication-5.2.0.jar:na]
at org.artifactory.addon.replication.core.ReplicationAddonImpl.performRemoteReplication(ReplicationAddonImpl.java:91) [artifactory-addon-replication-5.2.0.jar:na]
at org.artifactory.repo.replication.RemoteReplicationJob.onExecute(RemoteReplicationJob.java:101) [artifactory-core-5.2.0.jar:na]
at org.artifactory.schedule.quartz.QuartzCommand.execute(QuartzCommand.java:52) [artifactory-storage-common-5.2.0.jar:na]
at org.quartz.core.JobRunShell.run(JobRunShell.java:202) [quartz-2.2.1.jar:na]
at org.artifactory.schedule.ArtifactoryConcurrentExecutor$RunnableWrapper.run(ArtifactoryConcurrentExecutor.java:104) [artifactory-storage-common-5.2.0.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
What am I doing wrong?
The same thing occurs when trying to replicate my private account from Docker Hub.
Thank you for your help.
EDIT:
OK, now I've managed to partially get this working. I'm saying "partially" because it's no longer displaying a stack trace, and the different image references are synced, but for some reason the layers themselves are not copied over to the remote. Looking closer at the logs, it looks like something is wrong with permissions:
2017-11-26 23:30:08,611 [replication-consumer-1511713803259-0] [WARN ] (o.a.r.s.RepositoryServiceImpl:901) - Cannot set properties on 'remote-docker-cache:my/irc-base/0.1.0-b25-09e84a42/sha256__096495a59c0e938508a5c9d4cb003d5e4556e0fb8f1befd9469903a6d446e797': Item not found.
2017-11-26 23:30:08,611 [replication-consumer-1511713803259-0] [ERROR] (o.a.a.c.BasicStatusHolder:214) - Unable to set properties for remote-docker-cache:my/image/0.1.0-b25-09e84a42/sha256__096495a59c0e938508a5c9d4cb003d5e4556e0fb8f1befd9469903a6d446e797
2017-11-26 23:30:10,947 [replication-consumer-1511713803259-0] [WARN ] (o.a.r.s.RepositoryServiceImpl:901) - Cannot set properties on 'remote-docker-cache:my/image/0.1.0-b25-09e84a42/sha256__5523a881c6c86f188888bba730867591402f40db1be718c64726b1723c5abbf5': Item not found.
2017-11-26 23:30:10,947 [replication-consumer-1511713803259-0] [ERROR] (o.a.a.c.BasicStatusHolder:214) - Unable to set properties for remote-docker-cache:my/image/0.1.0-b25-09e84a42/sha256__5523a881c6c86f188888bba730867591402f40db1be718c64726b1723c5abbf5
2017-11-26 23:30:12,145 [replication-consumer-1511713803259-0] [WARN ] (o.a.r.s.RepositoryServiceImpl:901) - Cannot set properties on 'remote-docker-cache:my/image/0.1.0-b25-09e84a42/sha256__6b888ef3098531f0c7000584ce049b24e4559cfab8c4141fcf62bcfd60f6b177': Item not found.
2017-11-26 23:30:12,145 [replication-consumer-1511713803259-0] [ERROR] (o.a.a.c.BasicStatusHolder:214) - Unable to set properties for remote-docker-cache:my/image/0.1.0-b25-09e84a42/sha256__6b888ef3098531f0c7000584ce049b24e4559cfab8c4141fcf62bcfd60f6b177
In the same time on the registry hosting the layers:
20171126173008|1|REQUEST|192.168.210.102|admin|GET|/api/storage/docker/remote-docker/my/image/0.1.0-b25-09e84a42/sha256__096495a59c0e938508a5c9d4cb003d5e4556e0fb8f1befd9469903a6d446e797|HTTP/1.0|200|0
20171126173008|1|REQUEST|192.168.210.102|anonymous|HEAD|/api/docker/remote-docker/my/image/0.1.0-b25-09e84a42/sha256__5523a881c6c86f188888bba730867591402f40db1be718c64726b1723c5abbf5|HTTP/1.0|403|0
20171126173010|0|REQUEST|192.168.210.102|non_authenticated_user|GET|/api/storage/docker/remote-docker/my/image/0.1.0-b25-09e84a42/sha256__5523a881c6c86f188888bba730867591402f40db1be718c64726b1723c5abbf5|HTTP/1.0|401|0
20171126173010|0|REQUEST|192.168.210.102|admin|GET|/api/storage/docker/remote-docker/my/image/0.1.0-b25-09e84a42/sha256__5523a881c6c86f188888bba730867591402f40db1be718c64726b1723c5abbf5|HTTP/1.0|200|0
20171126173011|0|REQUEST|192.168.210.102|anonymous|HEAD|/api/docker/docker/remote-docker/my/image/0.1.0-b25-09e84a42/sha256__6b888ef3098531f0c7000584ce049b24e4559cfab8c4141fcf62bcfd60f6b177|HTTP/1.0|403|0
20171126173011|0|REQUEST|192.168.210.102|non_authenticated_user|GET|/api/storage/docker/remote-docker/my/image/0.1.0-b25-09e84a42/sha256__6b888ef3098531f0c7000584ce049b24e4559cfab8c4141fcf62bcfd60f6b177|HTTP/1.0|401|0
I'm trying to do the replication with the admin user, which should have full access to the different registries.
Now the funny thing (maybe not so funny) is that if I allow the Anonymous user access to the registry, the replication works just fine. However, from a security point of view, I cannot just allow Anonymous access to these private registries.
Thanks again for your help.
All credit goes to Yonatan from JFrog support.
I finally managed to solve the issue thanks to JFrog support. Here is what I was doing wrong and how it should be solved:
In the remote repository settings, I had put the following as the target URL:
https://myregistry.example.com
JFrog support then kindly suggested that I append /api/docker/myregistry to it, giving the following as the target URL:
https://myregistry.example.com/api/docker/myregistry
More information can be found here.
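As a quick sanity check (a sketch, not from the original answer; it assumes your repository key really is myregistry and that your user can authenticate), you can verify the api/docker prefix before saving the replication config by listing the repository's catalog through the Docker v2 API that Artifactory exposes under that path:

curl -u admin https://myregistry.example.com/api/docker/myregistry/v2/_catalog

If the prefix is correct, this should return a JSON list of image names rather than a 404.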
Edit:
Here is the exact reply from JFrog support (which may well be more accurate than my attempt to paraphrase what I understood):
"The issue you are experiencing is due to a misconfiguration in the target URL.
For some packaging formats, when using the corresponding client to access a repository through Artifactory, the repository key in the URL needs to be prefixed with api/ in the path. For example, in the case of Docker repositories, the repository key should be prefixed with api/docker.
Nevertheless, there are exceptions to this rule. For example, when replicating Maven repositories, you do not need to add a prefix to the remote repository path. (This is the reason why you did not encounter issues when replicating Maven repositories.)
You can find the full list here.
With regards to your scenario, please try to configure the following URL:
https://myregistry.example.com/api/docker/myregistry
Please note you have to add the target repository name."
Thank you, Yonatan.

How to configure Apache NiFi for a Kerberized Hadoop Cluster

I have Apache NiFi running standalone, and it's working fine. But when I try to set up Apache NiFi to access Hive or HDFS on a Kerberized Cloudera Hadoop cluster, I run into issues.
Can someone point me to documentation for setting up HDFS/Hive/HBase access (with Kerberos)?
Here is the configuration I gave in nifi.properties
# kerberos #
nifi.kerberos.krb5.file=/etc/krb5.conf
nifi.kerberos.service.principal=pseeram@JUNIPER.COM
nifi.kerberos.keytab.location=/uhome/pseeram/learning/pseeram.keytab
nifi.kerberos.authentication.expiration=10 hours
I referenced various links like the ones below, but none of them were helpful.
(Since the first link said there were issues in NiFi version 0.7.1, I tried NiFi version 1.1.0. I had the same bitter experience.)
https://community.hortonworks.com/questions/62014/nifi-hive-connection-pool-error.html
https://community.hortonworks.com/articles/4103/hiveserver2-jdbc-connection-url-examples.html
Here are the errors I am getting in the logs:
ERROR [Timer-Driven Process Thread-7] o.a.nifi.processors.hive.SelectHiveQL
org.apache.nifi.processor.exception.ProcessException: org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (Could not open client transport with JDBC Uri: jdbc:hive2://ddas1106a:10000/innovate: Peer indicated failure: Unsupported mechanism type PLAIN)
at org.apache.nifi.dbcp.hive.HiveConnectionPool.getConnection(HiveConnectionPool.java:292) ~[nifi-hive-processors-1.1.0.jar:1.1.0]
at sun.reflect.GeneratedMethodAccessor191.invoke(Unknown Source) ~[na:na]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_51]
at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_51]
at org.apache.nifi.controller.service.StandardControllerServiceProvider$1.invoke(StandardControllerServiceProvider.java:177) ~[na:na]
at com.sun.proxy.$Proxy83.getConnection(Unknown Source) ~[na:na]
at org.apache.nifi.processors.hive.SelectHiveQL.onTrigger(SelectHiveQL.java:158) ~[nifi-hive-processors-1.1.0.jar:1.1.0]
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) [nifi-api-1.1.0.jar:1.1.0]
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099) [nifi-framework-core-1.1.0.jar:1.1.0]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.1.0.jar:1.1.0]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.1.0.jar:1.1.0]
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.1.0.jar:1.1.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_51]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_51]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_51]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_51]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_51]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_51]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_51]
Caused by: org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (Could not open client transport with JDBC Uri: jdbc:hive2://ddas1106a:10000/innovate: Peer indicated failure: Unsupported mechanism type PLAIN)
at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1549) ~[commons-dbcp-1.4.jar:1.4]
at org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1388) ~[commons-dbcp-1.4.jar:1.4]
at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044) ~[commons-dbcp-1.4.jar:1.4]
at org.apache.nifi.dbcp.hive.HiveConnectionPool.getConnection(HiveConnectionPool.java:288) ~[nifi-hive-processors-1.1.0.jar:1.1.0]
... 18 common frames omitted
Caused by: java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://ddas1106a:10000/innovate: Peer indicated failure: Unsupported mechanism type PLAIN
at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:231) ~[hive-jdbc-1.2.1.jar:1.2.1]
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:176) ~[hive-jdbc-1.2.1.jar:1.2.1]
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105) ~[hive-jdbc-1.2.1.jar:1.2.1]
at org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38) ~[commons-dbcp-1.4.jar:1.4]
at org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582) ~[commons-dbcp-1.4.jar:1.4]
at org.apache.commons.dbcp.BasicDataSource.validateConnectionFactory(BasicDataSource.java:1556) ~[commons-dbcp-1.4.jar:1.4]
at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1545) ~[commons-dbcp-1.4.jar:1.4]
... 21 common frames omitted
Caused by: org.apache.thrift.transport.TTransportException: Peer indicated failure: Unsupported mechanism type PLAIN
at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:199) ~[hive-exec-1.2.1.jar:1.2.1]
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:307) ~[hive-exec-1.2.1.jar:1.2.1]
at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37) ~[hive-exec-1.2.1.jar:1.2.1]
at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:204) ~[hive-jdbc-1.2.1.jar:1.2.1]
... 27 common frames omitted
WARN [NiFi Web Server-29] o.a.nifi.dbcp.hive.HiveConnectionPool HiveConnectionPool[id=278beb67-0159-1000-cffa-8c8534c285c8] Configuration does not have security enabled, Keytab and Principal will be ignored
What you've added in the nifi.properties file is used for Kerberizing the NiFi cluster itself. In order to access a Kerberized Hadoop cluster, you need to provide the appropriate config files and keytab in NiFi's HDFS processors.
For example, if you are using PutHDFS to write to a Hadoop cluster, set the following (a concrete sketch follows the list):
Hadoop Configuration Resources: paths to core-site.xml and hdfs-site.xml
Kerberos Principal: your principal for accessing the Hadoop cluster
Kerberos Keytab: path to the keytab generated using the Hadoop cluster's krb5.conf; nifi.kerberos.krb5.file in nifi.properties must point to the appropriate krb5.conf file.
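For instance, reusing the principal and keytab from the question (the core-site.xml/hdfs-site.xml paths are hypothetical; substitute your cluster's actual locations), the PutHDFS processor properties might look like:

Hadoop Configuration Resources: /etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
Kerberos Principal: pseeram@JUNIPER.COM
Kerberos Keytab: /uhome/pseeram/learning/pseeram.keytab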
Regardless of whether NiFi is inside the Kerberized Hadoop cluster or not, this post might be useful:
https://community.hortonworks.com/questions/84659/how-to-use-apache-nifi-on-kerberized-hdp-cluster-n.html
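As for the Hive error in the question: "Peer indicated failure: Unsupported mechanism type PLAIN" typically means the JDBC client attempted a plain SASL login against a Kerberized HiveServer2. In that case the Hive Connection Pool's Database Connection URL needs the HiveServer2 Kerberos principal appended, along with the keytab and principal set on the controller service. A sketch, using the host from the question and a hypothetical service principal:

jdbc:hive2://ddas1106a:10000/innovate;principal=hive/_HOST@JUNIPER.COM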

Step ‘Run Tests on AWS Device Farm’ aborted due to exception: javax.net.ssl.SSLHandshakeException: Could not generate secret

I am trying to integrate Jenkins with AWS Device Farm to automate mobile device testing.
So I created an IAM user and attached the Device Farm policy to the user.
I generated an AWS Access Key ID and AWS Secret Access Key and provided them in the Jenkins manage configuration.
But the post-build action 'Run Tests on AWS Device Farm' leads to the exception below. Could anyone help me?
The exception stack trace is as below:
[AWSDeviceFarm] Using Project 'BMS_OPDVO'
[AWSDeviceFarm] Using DevicePool 'LG Nexus5'
[AWSDeviceFarm] Using App '**/target/resources/org.wordpress.android.5.0.apk'
[AWSDeviceFarm] Archiving artifact 'org.wordpress.android.5.0.apk'
[AWSDeviceFarm] Uploading org.wordpress.android.5.0.apk to S3
ERROR: Step ‘Run Tests on AWS Device Farm’ aborted due to exception:
javax.net.ssl.SSLHandshakeException: Could not generate secret
at sun.security.ssl.ECDHCrypt.getAgreedSecret(ECDHCrypt.java:103)
at sun.security.ssl.ClientHandshaker.serverHelloDone(ClientHandshaker.java:1067)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:348)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:979)
at sun.security.ssl.Handshaker.process_record(Handshaker.java:914)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1062)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1403)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1387)
at org.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:290)
at org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:259)
at org.apache.http.impl.conn.HttpClientConnectionOperator.connect(HttpClientConnectionOperator.java:125)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:319)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:363)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:219)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:195)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:86)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:108)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
at org.jenkinsci.plugins.awsdevicefarm.AWSDeviceFarm.upload(AWSDeviceFarm.java:359)
at org.jenkinsci.plugins.awsdevicefarm.AWSDeviceFarm.upload(AWSDeviceFarm.java:330)
at org.jenkinsci.plugins.awsdevicefarm.AWSDeviceFarm.upload(AWSDeviceFarm.java:317)
at org.jenkinsci.plugins.awsdevicefarm.AWSDeviceFarm.uploadApp(AWSDeviceFarm.java:211)
at org.jenkinsci.plugins.awsdevicefarm.AWSDeviceFarmRecorder.perform(AWSDeviceFarmRecorder.java:287)
at hudson.tasks.BuildStepMonitor$3.perform(BuildStepMonitor.java:45)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:785)
at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:726)
at hudson.maven.MavenModuleSetBuild$MavenModuleSetBuildExecution.post2(MavenModuleSetBuild.java:1037)
at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:671)
at hudson.model.Run.execute(Run.java:1766)
at hudson.maven.MavenModuleSetBuild.run(MavenModuleSetBuild.java:529)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:408)
Caused by: java.security.InvalidKeyException: ECDH key agreement requires ECPublicKey for doPhase
at org.bouncycastle.jcajce.provider.asymmetric.ec.KeyAgreementSpi.engineDoPhase(Unknown Source)
at javax.crypto.KeyAgreement.doPhase(KeyAgreement.java:567)
at sun.security.ssl.ECDHCrypt.getAgreedSecret(ECDHCrypt.java:100)
... 34 more
The exception is basically because the SSL handshake between Jenkins and the S3 server is not happening.
I installed the Skip Certificate Check plugin and restarted the Jenkins server, which ignored the SSL handshake error and proceeded with the operation.

Spring Session, Embedded Redis Server Error

I am unable to start the embedded Redis server; it's giving the following error. What could be the possible reason? I'm working on WildFly, on Ubuntu. The stack trace follows.
... 25 more
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'redisServer' defined in org.egov.infra.config.session.RedisHttpSessionConfiguration: Invocation of init method failed; nested exception is java.lang.RuntimeException: Can't start redis server. Check logs for details.
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1566)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:539)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:476)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:303)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:299)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:116)
at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:606)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:462)
at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:139)
at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:93)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) [rt.jar:1.8.0_31]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) [rt.jar:1.8.0_31]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) [rt.jar:1.8.0_31]
at java.lang.reflect.Constructor.newInstance(Constructor.java:408) [rt.jar:1.8.0_31]
at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:147)
... 27 more
Caused by: java.lang.RuntimeException: Can't start redis server. Check logs for details.
at redis.embedded.AbstractRedisInstance.awaitRedisServerReady(AbstractRedisInstance.java:66)
at redis.embedded.AbstractRedisInstance.start(AbstractRedisInstance.java:37)
at redis.embedded.RedisServer.start(RedisServer.java:11)
at org.egov.infra.config.redis.EmbeddedRedisServer.afterPropertiesSet(EmbeddedRedisServer.java:20)
This is a bug that has been reported here: https://github.com/spring-projects/spring-session/issues/150
In my case I was running the embedded redis-server on port 1337, and that port got locked and stuck in a loop some time back while I was running my test cases in debug mode. After that I also started a Spring Boot app, which created another server connection on port 6379, but I failed to terminate the server running on port 1337. Since then, whenever I tried to execute test cases, I kept getting the exception "Can't start redis server. Check logs for details.", since 1337 was locked. Debugging the AbstractRedisInstance class and its awaitRedisServerReady method line by line revealed "1337 Already in use", which was never logged at all. I killed the process on that port, re-ran the test cases, and I was on the fly again. Hope this helps.
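In case it helps someone hitting the same thing, a quick way to find and kill whatever is holding the stale port (a sketch, assuming Linux and the port 1337 from my case):

# show the process listening on the embedded Redis port
lsof -i :1337
# terminate it using the PID that lsof reports
kill <pid>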