MobileFirst Platform 8.0 push notification failed on macOS High Sierra - ibm-mobilefirst

I created a push notification app (a Cordova app), and MFPPush.registerDevice succeeded. However, "send notification" through the MFP Console (http://localhost:9080/mfpconsole/index.html#/mfp/push/echoview2.MFP8Sample#sendPush) failed.
I suspect the MFP Server may be using an IPv6 address as its own server IP address.
Can anyone suggest a workaround?
Environment
+ OS: macOS High Sierra
+ MFP8: Product version: 8.0.0.00-20180315-134705
[err] com.ibm.mobile.analytics.sdk.events.AnalyticArgumentException: java.lang.IllegalArgumentException: MSAN018E: The supplied value was invalid: fe80:0:0:0:9787:db29:a3d:b9e9%utun0 for serverIpAddress.
[err] at com.ibm.mobile.analytics.sdk.model.PushNotification.setServerIpAddress(PushNotification.java:197)
[err] at com.ibm.mobile.analytics.sdk.events.PushNotification.<init>(PushNotification.java:37)
[err] at com.ibm.mfp.push.server.analytics.plugin.AnalyticsPlugin.sendNotificationDispatchEvent(AnalyticsPlugin.java:172)
[err] at com.ibm.mfp.push.server.notification.Mediator.fireNotificationDispatchEvent(Mediator.java:236)
[err] at com.ibm.mfp.push.server.notification.apns.ApplicationConnection.sendNotification(ApplicationConnection.java:146)
[err] at com.ibm.mfp.push.server.notification.apns.APNSMediator.sendNotification(APNSMediator.java:124)
[err] at com.ibm.mfp.push.server.notification.Mediator$2.run(Mediator.java:105)
[err] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[err] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[err] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[err] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[err] at java.lang.Thread.run(Thread.java:748)
[err] Caused by: java.lang.IllegalArgumentException: MSAN018E: The supplied value was invalid: fe80:0:0:0:9787:db29:a3d:b9e9%utun0 for serverIpAddress.
[err] ... 12 more
[ERROR ] Couldn't send message after 3 retries.Message(Id=2; Token=3BAE6CECD27D2DF48920AF230BE02A7FC3699480529AAAD17D1C27235B6C33C5; Payload={"payload":"{\"nid\":\"46403e8\",\"tag\":\"Push.ALL\"}","aps":{"alert":{"action-loc-key":null,"body":"Foo"}}})
Connection closed by remote host
[ERROR ] FPWSE1083E: Failure sending Apple Push Notification Service (APNS) notification with identifier 2, device token: 3BAE6CECD27D2DF48920AF230BE02A7FC3699480529AAAD17D1C27235B6C33C5.
Connection closed by remote host
[err] Exception in thread "pool-9-thread-3"
[err] com.notnoop.exceptions.NetworkIOException: java.net.SocketException: Connection closed by remote host
[err] at com.notnoop.apns.internal.Utilities.wrapAndThrowAsRuntimeException(Utilities.java:277)
[err] at com.ibm.mfp.push.server.notification.apns.ApnsConnectionImpl.sendMessage(ApnsConnectionImpl.java:319)
[err] at com.ibm.mfp.push.server.notification.apns.ApnsConnectionImpl.sendMessage(ApnsConnectionImpl.java:292)
[err] at com.notnoop.apns.internal.ApnsPooledConnection$2.run(ApnsPooledConnection.java:47)
[err] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[err] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[err] at java.lang.Thread.run(Thread.java:748)
[err] Caused by: java.net.SocketException: Connection closed by remote host
[err] at sun.security.ssl.SSLSocketImpl.checkWrite(SSLSocketImpl.java:1565)
[err] at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:124)
[err] at java.io.OutputStream.write(OutputStream.java:75)
[err] at com.ibm.mfp.push.server.notification.apns.ApnsConnectionImpl.sendMessage(ApnsConnectionImpl.java:302)
[err] ... 5 more

This looks like a configuration error. MFP Push needs to communicate with Apple's push notification servers (APNS) in order to send notifications successfully.
It appears that MobileFirst in your case is unable to reach the APNS servers, so you need to configure this explicitly. Instructions on configuring a push notification proxy for both APNS and GCM (Google Cloud Messaging for Android) can be found here.
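Separately, since the analytics exception above rejects an IPv6 link-local address (fe80:...%utun0) for serverIpAddress, one untested workaround worth trying is to make the server JVM prefer IPv4. Assuming the MFP development server runs on WebSphere Liberty, that means adding the standard JVM system property below to the server's jvm.options file (the path shown is illustrative):

```
# <liberty-install>/usr/servers/mfp-server/jvm.options  (path is illustrative)
-Djava.net.preferIPv4Stack=true
```

Note this only addresses the analytics-side serverIpAddress error; the APNS "Connection closed by remote host" failure is still a connectivity/proxy question.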

Related

Apache Beam pipeline running on Dataflow failed to read from KafkaIO: SSL handshake failed

I'm building an Apache Beam pipeline to read from Kafka as an unbounded source.
I was able to run it locally using direct runner.
However, the pipeline fails with the attached exception stack trace when run on the cloud using the Google Cloud Dataflow runner.
It seems it's ultimately the Conscrypt Java library that's throwing javax.net.ssl.SSLException: Unable to parse TLS packet header. I'm not really sure how to address this issue.
java.io.IOException: Failed to start reading from source: org.apache.beam.sdk.io.kafka.KafkaUnboundedSource@33b5ff70
com.google.cloud.dataflow.worker.WorkerCustomSources$UnboundedReaderIterator.start(WorkerCustomSources.java:783)
com.google.cloud.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.start(ReadOperation.java:360)
com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:193)
com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:158)
com.google.cloud.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:75)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1227)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:135)
com.google.cloud.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:966)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
org.apache.beam.sdk.io.kafka.KafkaUnboundedReader.start(KafkaUnboundedReader.java:126)
com.google.cloud.dataflow.worker.WorkerCustomSources$UnboundedReaderIterator.start(WorkerCustomSources.java:778)
com.google.cloud.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.start(ReadOperation.java:360)
com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:193)
com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:158)
com.google.cloud.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:75)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1227)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:135)
com.google.cloud.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:966)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
java.util.concurrent.FutureTask.report(FutureTask.java:122)
java.util.concurrent.FutureTask.get(FutureTask.java:206)
org.apache.beam.sdk.io.kafka.KafkaUnboundedReader.start(KafkaUnboundedReader.java:112)
com.google.cloud.dataflow.worker.WorkerCustomSources$UnboundedReaderIterator.start(WorkerCustomSources.java:778)
com.google.cloud.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.start(ReadOperation.java:360)
com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:193)
com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:158)
com.google.cloud.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:75)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1227)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:135)
com.google.cloud.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:966)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
Caused by: javax.net.ssl.SSLException: Unable to parse TLS packet header
org.conscrypt.ConscryptEngine.unwrap(ConscryptEngine.java:782)
org.conscrypt.ConscryptEngine.unwrap(ConscryptEngine.java:723)
org.conscrypt.ConscryptEngine.unwrap(ConscryptEngine.java:688)
org.conscrypt.Java8EngineWrapper.unwrap(Java8EngineWrapper.java:236)
org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:464)
org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:328)
org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:255)
org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:79)
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:460)
org.apache.kafka.common.network.Selector.poll(Selector.java:398)
org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460)
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:238)
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:214)
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:190)
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:219)
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:205)
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.fetchCommittedOffsets(ConsumerCoordinator.java:468)
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.refreshCommittedOffsetsIfNeeded(ConsumerCoordinator.java:450)
org.apache.kafka.clients.consumer.KafkaConsumer.updateFetchPositions(KafkaConsumer.java:1772)
org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:1411)
org.apache.beam.sdk.io.kafka.KafkaUnboundedReader.setupInitialOffset(KafkaUnboundedReader.java:641)
org.apache.beam.sdk.io.kafka.KafkaUnboundedReader.lambda$start$0(KafkaUnboundedReader.java:106)
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
java.util.concurrent.FutureTask.run(FutureTask.java:266)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
Conscrypt appears to cause SSL errors in many contexts like this. The Dataflow worker in Beam 2.9.0 has an option to disable it; please try --experiment=disable_conscrypt_security_provider. Alternatively, you can try Beam 2.4.x, which does not enable Conscrypt.
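For reference, passing that option at launch might look like the sketch below (the main class and project ID are placeholders; note that Beam/Dataflow documentation spells the option --experiments, while the answer above uses the singular form):

```shell
# Hypothetical launch command; adjust main class and GCP project to your pipeline.
mvn compile exec:java \
  -Dexec.mainClass=com.example.MyKafkaToDataflowPipeline \
  -Dexec.args="--runner=DataflowRunner \
               --project=my-gcp-project \
               --experiments=disable_conscrypt_security_provider"
```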

Mobilefirst Push Notifications for IOS Native

I have to implement push notifications in a native iOS mobile application, and I am currently doing a small POC using MobileFirst 8.0. I have set up all the configuration per the IBM MFP 8.0 standard, and I am trying the MobileFirst-based sample app downloaded from the IBM MFP documentation website. I am able to log in and register users, but when I send notifications, they are not received.
I checked the log messages on my development server; it is throwing the error below:
[ERROR ] Couldn't send message after 3 retries.Message(Id=2; Token=D60FFA8C91A465400B203424095CD5530DF2D223EC72BAE386982C0905C1AE1B; Payload={"payload":"{\"nid\":\"59ef5d3\",\"tag\":\"Push.ALL\"}","aps":{"alert":{"action-loc-key":null,"body":"hi"}}})
Received fatal alert: internal_error
[ERROR ] FPWSE1083E: Failure sending Apple Push Notification Service (APNS) notification with identifier 2, device token: D60FFA8C91A465400B203424095CD5530DF2D223EC72BAE386982C0905C1AE1B.
Received fatal alert: internal_error
[err] Exception in thread "pool-9-thread-3"
[err] com.notnoop.exceptions.NetworkIOException: javax.net.ssl.SSLException: Received fatal alert: internal_error
[err] at com.notnoop.apns.internal.Utilities.wrapAndThrowAsRuntimeException(Utilities.java:277)
[err] at com.ibm.mfp.push.server.notification.apns.ApnsConnectionImpl.sendMessage(ApnsConnectionImpl.java:319)
[err] at com.ibm.mfp.push.server.notification.apns.ApnsConnectionImpl.sendMessage(ApnsConnectionImpl.java:292)
[err] at com.notnoop.apns.internal.ApnsPooledConnection$2.run(ApnsPooledConnection.java:47)
[err] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[err] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[err] at java.lang.Thread.run(Thread.java:748)
[err] Caused by: javax.net.ssl.SSLException: Received fatal alert: internal_error
[err] at sun.security.ssl.Alerts.getSSLException(Alerts.java:208)
[err] at sun.security.ssl.Alerts.getSSLException(Alerts.java:154)
[err] at sun.security.ssl.SSLSocketImpl.recvAlert(SSLSocketImpl.java:2023)
[err] at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1125)
[err] at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375)
[err] at sun.security.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:747)
[err] at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:123)
[err] at java.io.OutputStream.write(OutputStream.java:75)
[err] at com.ibm.mfp.push.server.notification.apns.ApnsConnectionImpl.sendMessage(ApnsConnectionImpl.java:302)
[err] ... 5 more

IBM MFP 8.0 409 Response from Push Tag POST

In a Java adapter, when we create a new tag using Push Tag (POST), we always get a 409 response ("The target resource 'PushTag' already exists") and an error in the log, even though the new tag is created successfully.
Is this a known issue? Is there a workaround?
In the logs the sequence looks like this:
[INFO ] 2017-10-16-07:51:48.318 [1363] PostRequest.call http://localhost:9080/imfpush/v1/apps/com.ibm.mfpstartercordova/tags/TAG2203137
GET
Authorization: Bearer eyJhbGci...
[ERROR ] FPWSE0010E: Internal server error. FPWSE0001E: Not Found - The target resource 'PushTag' does not exist. Check the 'TAG2203137' parameter.
[ERROR ] FPWSE0010E: Internal server error. FPWSE0001E: Not Found - The target resource 'PushTag' does not exist. Check the 'TAG2203137' parameter.
http://localhost:9080/imfpush/v1/apps/com.ibm.mfpstartercordova/tags/TAG2203137
GET
404
{OkHttp-Sent-Millis=1508158308325, Date=Mon, 16 Oct 2017 12:51:48 GMT, Content-Length=125, OkHttp-Received-Millis=1508158308332, Connection=Close, Content-Type=application/json, X-Powered-By=Servlet/3.1}
{"message":"Not Found - The target resource 'PushTag' does not exist. Check the 'TAG2203137' parameter.","code":"FPWSE0001E"}
[INFO ] 2017-10-16-07:51:48.333 [1363] PostRequest.call http://localhost:9080/imfpush/v1/apps/com.ibm.mfpstartercordova/tags
POST
Authorization: Bearer eyJhbGci...
Content-Type: application/json
{"description":"TAG2203137","name":"TAG2203137"}
[err] com.ibm.mfp.push.server.exceptions.PushWorksEntityExistsException: FPWSE0002E: Conflict - The target resource 'PushTag' already exists. Check the 'TAG2203137' parameter.
[err] at com.ibm.mfp.push.server.rest.resources.TagResource.createTag(TagResource.java:283)
[err] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[err] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
[err] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[err] at java.lang.reflect.Method.invoke(Method.java:606)
[err] at org.apache.wink.server.internal.handlers.InvokeMethodHandler.handleRequest(InvokeMethodHandler.java:63)
[err] at org.apache.wink.server.handlers.AbstractHandler.handleRequest(AbstractHandler.java:33)
[err] at org.apache.wink.server.handlers.RequestHandlersChain.handle(RequestHandlersChain.java:26)
[err] at org.apache.wink.server.handlers.RequestHandlersChain.handle(RequestHandlersChain.java:22)
[err] at org.apache.wink.server.handlers.AbstractHandlersChain.doChain(AbstractHandlersChain.java:67)
etc, etc
[err] at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1287)
[err] at [internal classes]
[err] at com.ibm.mfp.push.server.rest.SecurityFilter.doFilter(SecurityFilter.java:93)
[err] at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:207)
[err] at [internal classes]
[err] at com.ibm.mfp.push.server.rest.filter.RequestDetailLogger.doFilter(RequestDetailLogger.java:94)
[err] at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:207)
[err] at [internal classes]
[err] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
[err] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
[err] at java.lang.Thread.run(Thread.java:745)
[ERROR ] FPWSE0010E: Internal server error. FPWSE0002E: Conflict - The target resource 'PushTag' already exists. Check the 'TAG2203137' parameter.
http://localhost:9080/imfpush/v1/apps/com.ibm.mfpstartercordova/tags
POST
409
{OkHttp-Sent-Millis=1508158308345, Date=Mon, 16 Oct 2017 12:51:48 GMT, Content-Length=124, OkHttp-Received-Millis=1508158308372, Connection=Close, Content-Type=application/json, X-Powered-By=Servlet/3.1}
{"message":"Conflict - The target resource 'PushTag' already exists. Check the 'TAG2203137' parameter.","code":"FPWSE0002E"}
[err] java.io.IOException: Could not create http://localhost:9080/imfpush/v1/apps/com.ibm.mfpstartercordova/tags
[err] at com.onemain.mfp.push.PushMessage$oauthInstance.pushToTag(PushMessage.java:153)
[err] at com.onemain.mfp.push.pushAdapterApplication.handleMessage(pushAdapterApplication.java:99)
[err] at com.onemain.mfp.push.pushAdapterApplication.waitForMessage(pushAdapterApplication.java:127)
[err] at com.onemain.mfp.push.pushAdapterApplication.pollLocalMQ(pushAdapterApplication.java:158)
[err] at com.onemain.mfp.push.pushAdapterApplication.access$200(pushAdapterApplication.java:48)
[err] at com.onemain.mfp.push.pushAdapterApplication$1.run(pushAdapterApplication.java:255)
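No answer is recorded here, but given that the log shows the POST returning 409 while the tag does get created, one pragmatic workaround on the adapter side is to treat 409 ("already exists") as a non-fatal outcome instead of raising the IOException seen in pushToTag. A minimal sketch of that status handling (the function name and return values are illustrative, not part of any MFP API):

```python
def interpret_tag_create(status_code):
    """Map the imfpush 'create tag' HTTP status to an outcome.

    409 (FPWSE0002E, "PushTag already exists") is treated as success,
    since the tag is present on the server either way.
    """
    if status_code in (200, 201):
        return "created"
    if status_code == 409:
        return "already-exists"  # safe to continue pushing to this tag
    raise IOError("Could not create tag, HTTP status %d" % status_code)
```

With something like this, the adapter's pushToTag flow would continue past the 409 instead of aborting.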

How to configure Apache NiFi for a Kerberized Hadoop Cluster

I have Apache NiFi running standalone and it works fine. However, when I try to set up Apache NiFi to access Hive or HDFS on a Kerberized Cloudera Hadoop cluster, I run into issues.
Can someone point me to documentation for setting up HDFS/Hive/HBase access (with Kerberos)?
Here is the configuration I gave in nifi.properties
# kerberos #
nifi.kerberos.krb5.file=/etc/krb5.conf
nifi.kerberos.service.principal=pseeram@JUNIPER.COM
nifi.kerberos.keytab.location=/uhome/pseeram/learning/pseeram.keytab
nifi.kerberos.authentication.expiration=10 hours
I referenced various links like the ones below, but none of them were helpful.
(Since the first link below said there were issues in NiFi 0.7.1, I tried NiFi 1.1.0 and had the same bitter experience.)
https://community.hortonworks.com/questions/62014/nifi-hive-connection-pool-error.html
https://community.hortonworks.com/articles/4103/hiveserver2-jdbc-connection-url-examples.html
Here are the errors I am getting in the logs:
ERROR [Timer-Driven Process Thread-7] o.a.nifi.processors.hive.SelectHiveQL
org.apache.nifi.processor.exception.ProcessException: org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (Could not open client transport with JDBC Uri: jdbc:hive2://ddas1106a:10000/innovate: Peer indicated failure: Unsupported mechanism type PLAIN)
at org.apache.nifi.dbcp.hive.HiveConnectionPool.getConnection(HiveConnectionPool.java:292) ~[nifi-hive-processors-1.1.0.jar:1.1.0]
at sun.reflect.GeneratedMethodAccessor191.invoke(Unknown Source) ~[na:na]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_51]
at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_51]
at org.apache.nifi.controller.service.StandardControllerServiceProvider$1.invoke(StandardControllerServiceProvider.java:177) ~[na:na]
at com.sun.proxy.$Proxy83.getConnection(Unknown Source) ~[na:na]
at org.apache.nifi.processors.hive.SelectHiveQL.onTrigger(SelectHiveQL.java:158) ~[nifi-hive-processors-1.1.0.jar:1.1.0]
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) [nifi-api-1.1.0.jar:1.1.0]
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099) [nifi-framework-core-1.1.0.jar:1.1.0]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.1.0.jar:1.1.0]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.1.0.jar:1.1.0]
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.1.0.jar:1.1.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_51]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_51]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_51]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_51]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_51]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_51]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_51]
Caused by: org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (Could not open client transport with JDBC Uri: jdbc:hive2://ddas1106a:10000/innovate: Peer indicated failure: Unsupported mechanism type PLAIN)
at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1549) ~[commons-dbcp-1.4.jar:1.4]
at org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1388) ~[commons-dbcp-1.4.jar:1.4]
at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044) ~[commons-dbcp-1.4.jar:1.4]
at org.apache.nifi.dbcp.hive.HiveConnectionPool.getConnection(HiveConnectionPool.java:288) ~[nifi-hive-processors-1.1.0.jar:1.1.0]
... 18 common frames omitted
Caused by: java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://ddas1106a:10000/innovate: Peer indicated failure: Unsupported mechanism type PLAIN
at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:231) ~[hive-jdbc-1.2.1.jar:1.2.1]
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:176) ~[hive-jdbc-1.2.1.jar:1.2.1]
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105) ~[hive-jdbc-1.2.1.jar:1.2.1]
at org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38) ~[commons-dbcp-1.4.jar:1.4]
at org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582) ~[commons-dbcp-1.4.jar:1.4]
at org.apache.commons.dbcp.BasicDataSource.validateConnectionFactory(BasicDataSource.java:1556) ~[commons-dbcp-1.4.jar:1.4]
at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1545) ~[commons-dbcp-1.4.jar:1.4]
... 21 common frames omitted
Caused by: org.apache.thrift.transport.TTransportException: Peer indicated failure: Unsupported mechanism type PLAIN
at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:199) ~[hive-exec-1.2.1.jar:1.2.1]
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:307) ~[hive-exec-1.2.1.jar:1.2.1]
at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37) ~[hive-exec-1.2.1.jar:1.2.1]
at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:204) ~[hive-jdbc-1.2.1.jar:1.2.1]
... 27 common frames omitted
WARN [NiFi Web Server-29] o.a.nifi.dbcp.hive.HiveConnectionPool HiveConnectionPool[id=278beb67-0159-1000-cffa-8c8534c285c8] Configuration does not have security enabled, Keytab and Principal will be ignored
What you've added in the nifi.properties file is for Kerberizing the NiFi cluster itself. In order to access a Kerberized Hadoop cluster, you need to provide the appropriate config files and keytab in NiFi's HDFS processors.
For example, if you are using PutHDFS to write to a Hadoop cluster:
Hadoop Configuration Resources: paths to core-site.xml and hdfs-site.xml
Kerberos Principal: your principal for accessing the Hadoop cluster
Kerberos Keytab: path to the keytab generated using the krb5.conf of the Hadoop cluster; nifi.kerberos.krb5.file in nifi.properties must point to the appropriate krb5.conf file.
Regardless of whether NiFi is inside the Kerberized Hadoop cluster or not, this post might be useful:
https://community.hortonworks.com/questions/84659/how-to-use-apache-nifi-on-kerberized-hdp-cluster-n.html
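As a sketch, the PutHDFS processor settings described above might look like this (all paths and the principal are illustrative for your environment):

```
Hadoop Configuration Resources : /etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
Kerberos Principal             : nifi@EXAMPLE.COM
Kerberos Keytab                : /etc/security/keytabs/nifi.keytab
```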

Google Cloud Messaging SSL error peer not authenticated

I have an issue regarding GCM. When my server app tries to send a message to GCM, it sometimes throws an error:
16-01-20 18:13:47,993 ERROR [com.chopper.ivolley.server.association.gcm.GcmNoticifationClient] (pool-5-thread-2) GCM returned an error: javax.ws.rs.ProcessingException: Unable to invoke request
at org.jboss.resteasy.client.jaxrs.engines.ApacheHttpClient4Engine.invoke(ApacheHttpClient4Engine.java:287) [resteasy-client-3.0.10.Final.jar:]
at org.jboss.resteasy.client.jaxrs.internal.ClientInvocation.invoke(ClientInvocation.java:407) [resteasy-client-3.0.10.Final.jar:]
at org.jboss.resteasy.client.jaxrs.internal.ClientInvocation.invoke(ClientInvocation.java:450) [resteasy-client-3.0.10.Final.jar:]
at org.jboss.resteasy.client.jaxrs.internal.ClientInvocation$5.call(ClientInvocation.java:513) [resteasy-client-3.0.10.Final.jar:]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [rt.jar:1.8.0_65]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_65]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_65]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_65]
Caused by: javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
at sun.security.ssl.SSLSessionImpl.getPeerCertificates(SSLSessionImpl.java:431) [jsse.jar:1.8.0_65]
at org.apache.http.conn.ssl.AbstractVerifier.verify(AbstractVerifier.java:128)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:572)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:640)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:479)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at org.jboss.resteasy.client.jaxrs.engines.ApacheHttpClient4Engine.invoke(ApacheHttpClient4Engine.java:283) [resteasy-client-3.0.10.Final.jar:]
... 7 more
I imported the certificate from android.googleapis.com into my TrustStore, and my app uses it, because I can also see log messages for successful messages.
I have no idea why this happens only from time to time. Could anybody help?
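For reference, the certificate import described in the question is typically done along these lines (a hedged sketch: the truststore path and alias are illustrative, and changeit is only the default cacerts password):

```shell
# Fetch the certificate currently presented by the GCM endpoint...
openssl s_client -connect android.googleapis.com:443 </dev/null \
  | openssl x509 -outform PEM > gcm.pem

# ...and import it into the JVM truststore used by the server app.
keytool -importcert -noprompt -alias gcm -file gcm.pem \
  -keystore "$JAVA_HOME/jre/lib/security/cacerts" -storepass changeit
```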