How to make akka-tree work with an akka.net cluster?

I am currently looking for a visualizer for akka.net (the .NET version of Akka). I found none, so I am wondering if there is any way to make akka-tree (Scala) and an akka.net cluster work together. I guess if both frameworks follow the same spec, it should work. Is that the case?
I asked this question on the akka-tree issue tracker but got no response, so I am trying my luck here:
https://github.com/nraychaudhuri/akka-tree/issues/15
I am not familiar with Scala, but I made the following changes.
I tried adding this UDP configuration to my akka.net sample, because I believe the visualizer uses UDP:
helios.udp {
port = 9003 # needs to be on a different port or IP than TCP
hostname = localhost
}
Then I tried changing the IP address in akka-tree\visualizer\app\controllers\Application.scala:
val group = InetAddress.getByName("127.0.0.1");
But it doesn't work. Any idea how to make it work? Thanks!
Update:
I tried using TCP instead, but that doesn't work either.
I get the exception below when I access the page at localhost:9000. I am not familiar with Scala, but I think it has something to do with the Scala installation. I did install Scala and Java on my machine, though.
[info] Compiling 1 Scala source to D:\git\akka-tree\visualizer\target\scala-2.11\classes...
[info] play - Application started (Dev)
[error] application -
! Internal server error, for (GET) [/] ->
java.lang.ExceptionInInitializerError: null
at Routes$$anonfun$routes$1$$anonfun$applyOrElse$1$$anonfun$apply$1.apply(routes_routing.scala:72) ~[na:na]
at Routes$$anonfun$routes$1$$anonfun$applyOrElse$1$$anonfun$apply$1.apply(routes_routing.scala:72) ~[na:na]
at play.core.Router$HandlerInvokerFactory$$anon$13$$anon$14.call(Router.scala:217) ~[play_2.11-2.3.7.jar:2.3.7]
at play.core.Router$Routes$TaggingInvoker.call(Router.scala:464) ~[play_2.11-2.3.7.jar:2.3.7]
at Routes$$anonfun$routes$1$$anonfun$applyOrElse$1.apply(routes_routing.scala:72) ~[na:na]
Caused by: java.net.SocketException: Not a multicast address
at java.net.MulticastSocket.joinGroup(Unknown Source) ~[na:1.8.0_51]
at controllers.Application$.<init>(Application.scala:16) ~[na:na]
at controllers.Application$.<clinit>(Application.scala) ~[na:na]
at Routes$$anonfun$routes$1$$anonfun$applyOrElse$1$$anonfun$apply$1.apply(routes_routing.scala:72) ~[na:na]
at Routes$$anonfun$routes$1$$anonfun$applyOrElse$1$$anonfun$apply$1.apply(routes_routing.scala:72) ~[na:na]
[error] application - Error while rendering default error page
scala.MatchError: java.lang.ExceptionInInitializerError (of class java.lang.ExceptionInInitializerError)
at play.api.GlobalSettings$class.onError(GlobalSettings.scala:148) ~[play_2.11-2.3.7.jar:2.3.7]
at play.api.DefaultGlobal$.onError(GlobalSettings.scala:206) [play_2.11-2.3.7.jar:2.3.7]
at play.core.server.Server$class.logExceptionAndGetResult$1(Server.scala:63) [play_2.11-2.3.7.jar:2.3.7]
at play.core.server.Server$$anonfun$getHandlerFor$4.apply(Server.scala:73) [play_2.11-2.3.7.jar:2.3.7]
at play.core.server.Server$$anonfun$getHandlerFor$4.apply(Server.scala:71) [play_2.11-2.3.7.jar:2.3.7]
[error] application -
! Internal server error, for (HEAD) [/] ->
java.lang.NoClassDefFoundError: Could not initialize class controllers.Application$
at Routes$$anonfun$routes$1$$anonfun$applyOrElse$1$$anonfun$apply$1.apply(routes_routing.scala:72) ~[na:na]
at Routes$$anonfun$routes$1$$anonfun$applyOrElse$1$$anonfun$apply$1.apply(routes_routing.scala:72) ~[na:na]
at play.core.Router$HandlerInvokerFactory$$anon$13$$anon$14.call(Router.scala:217) ~[play_2.11-2.3.7.jar:2.3.7]
at play.core.Router$Routes$TaggingInvoker.call(Router.scala:464) ~[play_2.11-2.3.7.jar:2.3.7]
at Routes$$anonfun$routes$1$$anonfun$applyOrElse$1.apply(routes_routing.scala:72) ~[na:na]
[error] application - Error while rendering default error page
scala.MatchError: java.lang.NoClassDefFoundError: Could not initialize class controllers.Application$ (of class java.lang.NoClassDefFoundError)
at play.api.GlobalSettings$class.onError(GlobalSettings.scala:148) ~[play_2.11-2.3.7.jar:2.3.7]
at play.api.DefaultGlobal$.onError(GlobalSettings.scala:206) [play_2.11-2.3.7.jar:2.3.7]
at play.core.server.Server$class.logExceptionAndGetResult$1(Server.scala:63) [play_2.11-2.3.7.jar:2.3.7]
at play.core.server.Server$$anonfun$getHandlerFor$4.apply(Server.scala:73) [play_2.11-2.3.7.jar:2.3.7]
at play.core.server.Server$$anonfun$getHandlerFor$4.apply(Server.scala:71) [play_2.11-2.3.7.jar:2.3.7]
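The root cause is visible in the trace: java.net.SocketException: Not a multicast address. The visualizer joins a UDP multicast group, and MulticastSocket.joinGroup only accepts group addresses in the 224.0.0.0/4 range; 127.0.0.1 is a unicast loopback address, so that change cannot work. A minimal Scala sketch of the kind of call Application.scala makes (the group 230.0.0.1 is only an example, not akka-tree's actual default, and the port is taken from the helios.udp snippet above):
object JoinGroupSketch extends App {
  import java.net.{InetAddress, MulticastSocket}

  // Multicast groups live in 224.0.0.0/4 (224.0.0.0 - 239.255.255.255);
  // 127.0.0.1 is unicast loopback, hence "Not a multicast address".
  val group = InetAddress.getByName("230.0.0.1") // illustrative group address
  val socket = new MulticastSocket(9003)         // port from the question's helios.udp config
  socket.joinGroup(group)                        // throws SocketException for non-multicast addresses
  println(s"Joined multicast group $group")
  socket.leaveGroup(group)
  socket.close()
}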

I'm not the author, but there's a 'very very alpha version' of the visualizer ported to .NET (from Aug '16) available on NuGet (https://www.nuget.org/profiles/corneliutusnea), with source here: https://github.com/corneliutusnea/Akka.Visualizer

The problem is that you're trying to span your cluster across both the .NET and JVM Akka implementations. They are not compatible with each other at the moment. The reasons are differences in the .NET/JVM socket transport layers (such as little- vs. big-endian byte ordering) as well as incompatible message serialization (the JVM uses the built-in JavaSerializer, while .NET uses JSON.NET by default).
There are probably some other minor issues as well, but the conclusion is that, at present, you cannot build an Akka cluster spanning .NET and JVM nodes.
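To make the byte-ordering point concrete, here is a standalone Scala sketch (illustrative only, not akka-tree or Akka.NET code). The JVM's ByteBuffer defaults to big-endian, while .NET's BitConverter follows the CPU (little-endian on x86), so the same integer produces different raw bytes on the wire unless one side explicitly normalizes:
object ByteOrderSketch extends App {
  import java.nio.{ByteBuffer, ByteOrder}

  // JVM default: network byte order (big-endian)
  val bigEndian = ByteBuffer.allocate(4).putInt(1).array()
  println(bigEndian.mkString(","))    // 0,0,0,1

  // What a little-endian peer (e.g. .NET on x86) would expect instead
  val littleEndian = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN).putInt(1).array()
  println(littleEndian.mkString(",")) // 1,0,0,0
}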

Related

opentelemetry Connection refused: localhost/0:0:0:0:0:0:0:1:4317

I am using OpenTelemetry for tracing purposes. Below is the command I run, followed by the error I get. Can anyone suggest what I am doing wrong here?
java -Dotel.traces.exporter=jaeger -Dotel.exporter.jaeger.endpoint=host:14250 -Dotel.resource.attributes=service.name=app-name \
-javaagent:./opentelemetry-javaagent-all.jar -jar app-1.0.0.jar
[opentelemetry.auto.trace 2021-03-17 12:41:19:593 +0530] [IntervalMetricReader-1] WARN io.opentelemetry.exporter.otlp.metrics.OtlpGrpcMetricExporter - Failed to export metrics
io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
at io.grpc.Status.asRuntimeException(Status.java:534)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:617)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:803)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:782)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/0:0:0:0:0:0:0:1:4317
Caused by: java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:779)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
As far as I understand, OTel has two modules: traces and metrics. The error is complaining about the export of metrics, while your traces might actually work fine. It tries to use the default metrics exporter (not Jaeger), which writes metrics to a local OTel collector (https://github.com/open-telemetry/opentelemetry-collector) at localhost/0:0:0:0:0:0:0:1:4317.
At the time of writing, metrics are still marked as alpha; see https://github.com/open-telemetry/opentelemetry-java/blob/v1.0.1/QUICKSTART.md
I have not used Jaeger; perhaps it supports metrics as well. Try -Dotel.metrics.exporter=jaeger and see whether it works.
If you just want to remove the warning, consider adding the flag -Dotel.metrics.exporter=none, which should disable the export of metrics while traces continue to function.
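For example, here is the command from the question with metrics export disabled and everything else unchanged:
java -Dotel.traces.exporter=jaeger -Dotel.exporter.jaeger.endpoint=host:14250 -Dotel.resource.attributes=service.name=app-name \
-Dotel.metrics.exporter=none \
-javaagent:./opentelemetry-javaagent-all.jar -jar app-1.0.0.jar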

Unable to run s3 sink connector for persisting kafka data on Minio

I have a Kubernetes cluster running with minikube. Inside the cluster are one Kafka pod, one ZooKeeper pod, and one MinIO pod, each with its own service, and everything appears to be working properly. I have a Kafka topic called minio_topic, and MinIO has one bucket called kafka-bucket. I have tried to run the S3 sink connector with these properties:
name=s3-sink
connector.class=io.confluent.connect.s3.S3SinkConnector
tasks.max=1
topics=minio_topic
s3.region=us-east-1
s3.bucket.name=kafka-bucket
s3.part.size=5242880
flush.size=3
store.url=http://l27.0.0.1:9000/
storage.class=io.confluent.connect.s3.storage.S3Storage
#format.class=io.confluent.connect.s3.format.avro.AvroFormat
schema.generator.class=io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator
format.class=io.confluent.connect.s3.format.json.JsonFormat
partitioner.class=io.confluent.connect.storage.partitioner.DefaultPartitioner
schema.compatibility=NONE
After running the connector, I am getting this error:
[2020-05-06 18:00:46,238] ERROR WorkerSinkTask{id=s3-sink-0} Task threw an uncaught and unrecoverable
exception (org.apache.kafka.connect.runtime.WorkerTask:179)
org.apache.kafka.connect.errors.ConnectException: java.lang.reflect.InvocationTargetException
at io.confluent.connect.storage.StorageFactory.createStorage(StorageFactory.java:55)
at io.confluent.connect.s3.S3SinkTask.start(S3SinkTask.java:99)
at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:301)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:189)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at io.confluent.connect.storage.StorageFactory.createStorage(StorageFactory.java:50)
... 10 more
Caused by: java.lang.IllegalArgumentException: hostname cannot be null
at com.amazonaws.util.AwsHostNameUtils.parseRegion(AwsHostNameUtils.java:79)
at com.amazonaws.util.AwsHostNameUtils.parseRegionName(AwsHostNameUtils.java:59)
at com.amazonaws.AmazonWebServiceClient.computeSignerByURI(AmazonWebServiceClient.java:277)
at com.amazonaws.AmazonWebServiceClient.setEndpoint(AmazonWebServiceClient.java:229)
at com.amazonaws.services.s3.AmazonS3Client.setEndpoint(AmazonS3Client.java:688)
at com.amazonaws.client.builder.AwsClientBuilder.setRegion(AwsClientBuilder.java:362)
at com.amazonaws.client.builder.AwsClientBuilder.configureMutableProperties(AwsClientBuilder.java:337)
at com.amazonaws.client.builder.AwsSyncClientBuilder.build(AwsSyncClientBuilder.java:38)
at io.confluent.connect.s3.storage.S3Storage.newS3Client(S3Storage.java:96)
at io.confluent.connect.s3.storage.S3Storage.<init>(S3Storage.java:65)
... 15 more
The credentials are defined in .aws/credentials. Does anyone know what the mistake in the configuration might be?
hostname cannot be null at com.amazonaws.util.AwsHostNameUtils.parseRegion
I suggest you read up on the MinIO blog about setting store.url correctly, and verify which region your MinIO cluster thinks it's running in.
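Beyond that, note that store.url in the posted properties reads http://l27.0.0.1:9000/, with a lowercase letter 'l' where the digit '1' should be, so the hostname does not point at the loopback address you presumably intended. Assuming MinIO is meant to be reached on the loopback interface from the Connect worker, the corrected line would be:
store.url=http://127.0.0.1:9000/
(Inside Kubernetes you would more likely point this at the MinIO service's DNS name rather than 127.0.0.1.)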

Revolution slider with video: Connection closed

We have a JSF application running on Payara Server (5.182) that incorporates web pages with Revolution Slider (5.4.6.4).
Some of the slides are set up with video backgrounds, and for the most part it works exactly as designed; however, we intermittently see the following WARNING trace in the server logs:
[2018-08-08T09:15:20.329-0400] [Payara 5.182] [WARNING] [] [javax.enterprise.web] [tid: _ThreadID=31 _ThreadName=http-thread-pool::http-listener-1(5)] [timeMillis: 1533734120329] [levelValue: 900] [[
StandardWrapperValve[default]: Servlet.service() for servlet default threw exception
java.io.IOException: Connection closed
at org.glassfish.grizzly.asyncqueue.TaskQueue.onClose(TaskQueue.java:307)
at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.onClose(AbstractNIOAsyncQueueWriter.java:477)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.closeConnection(TCPNIOTransport.java:388)
at org.glassfish.grizzly.nio.NIOConnection.doClose(NIOConnection.java:642)
at org.glassfish.grizzly.nio.NIOConnection$6.run(NIOConnection.java:608)
at org.glassfish.grizzly.nio.DefaultSelectorHandler.execute(DefaultSelectorHandler.java:213)
at org.glassfish.grizzly.nio.NIOConnection.terminate0(NIOConnection.java:602)
at org.glassfish.grizzly.nio.transport.TCPNIOConnection.terminate0(TCPNIOConnection.java:267)
at org.glassfish.grizzly.nio.transport.TCPNIOAsyncQueueWriter.writeCompositeRecord(TCPNIOAsyncQueueWriter.java:173)
at org.glassfish.grizzly.nio.transport.TCPNIOAsyncQueueWriter.write0(TCPNIOAsyncQueueWriter.java:68)
at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.processAsync(AbstractNIOAsyncQueueWriter.java:320)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:84)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:53)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:524)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:89)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:94)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.executeIoEvent(WorkerThreadIOStrategy.java:80)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.executeIoEvent(AbstractIOStrategy.java:66)
at org.glassfish.grizzly.nio.SelectorRunner.iterateKeyEvents(SelectorRunner.java:391)
at org.glassfish.grizzly.nio.SelectorRunner.iterateKeys(SelectorRunner.java:360)
at org.glassfish.grizzly.nio.SelectorRunner.doSelect(SelectorRunner.java:324)
at org.glassfish.grizzly.nio.SelectorRunner.run(SelectorRunner.java:255)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:569)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:549)
at java.lang.Thread.run(Unknown Source)
Caused by: java.io.IOException: An established connection was aborted by the software in your host machine
at sun.nio.ch.SocketDispatcher.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(Unknown Source)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(Unknown Source)
at sun.nio.ch.IOUtil.write(Unknown Source)
at sun.nio.ch.SocketChannelImpl.write(Unknown Source)
at org.glassfish.grizzly.nio.transport.TCPNIOUtils.flushByteBuffer(TCPNIOUtils.java:125)
at org.glassfish.grizzly.nio.transport.TCPNIOAsyncQueueWriter.writeCompositeRecord(TCPNIOAsyncQueueWriter.java:165)
... 16 more
We assume this occurs when the entire video has not yet loaded and the slider rotates to the next slide, but that's really just a guess. We have tried modifying the logger settings on the server just to suppress the warning, but that doesn't solve the root problem.
Does anyone have any insight into what we might change, either in the code of the slider page or on the server console, to remedy this?
Thank you in advance.
As the error says, "An established connection was aborted by the software in your host machine", the problem isn't caused by anything running in the JVM. It's not related to Payara or even Java at all.
You should check what is causing the connection to be aborted. I would suspect a firewall configuration, network-control software, or in the worst case a virus.

How to configure Apache NiFi for a Kerberized Hadoop Cluster

I have Apache NiFi running standalone, and it's working fine. But when I try to set up Apache NiFi to access Hive or HDFS on a Kerberized Cloudera Hadoop cluster, I run into issues.
Can someone point me to documentation for setting up HDFS/Hive/HBase access (with Kerberos)?
Here is the configuration I put in nifi.properties:
# kerberos #
nifi.kerberos.krb5.file=/etc/krb5.conf
nifi.kerberos.service.principal=pseeram@JUNIPER.COM
nifi.kerberos.keytab.location=/uhome/pseeram/learning/pseeram.keytab
nifi.kerberos.authentication.expiration=10 hours
I referenced various links like the ones below, but none of them were helpful.
(Since the first link said it had issues with NiFi 0.7.1, I tried NiFi 1.1.0 and had the same bitter experience.)
https://community.hortonworks.com/questions/62014/nifi-hive-connection-pool-error.html
https://community.hortonworks.com/articles/4103/hiveserver2-jdbc-connection-url-examples.html
Here are the errors I am getting in the logs:
ERROR [Timer-Driven Process Thread-7] o.a.nifi.processors.hive.SelectHiveQL
org.apache.nifi.processor.exception.ProcessException: org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (Could not open client transport with JDBC Uri: jdbc:hive2://ddas1106a:10000/innovate: Peer indicated failure: Unsupported mechanism type PLAIN)
at org.apache.nifi.dbcp.hive.HiveConnectionPool.getConnection(HiveConnectionPool.java:292) ~[nifi-hive-processors-1.1.0.jar:1.1.0]
at sun.reflect.GeneratedMethodAccessor191.invoke(Unknown Source) ~[na:na]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_51]
at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_51]
at org.apache.nifi.controller.service.StandardControllerServiceProvider$1.invoke(StandardControllerServiceProvider.java:177) ~[na:na]
at com.sun.proxy.$Proxy83.getConnection(Unknown Source) ~[na:na]
at org.apache.nifi.processors.hive.SelectHiveQL.onTrigger(SelectHiveQL.java:158) ~[nifi-hive-processors-1.1.0.jar:1.1.0]
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) [nifi-api-1.1.0.jar:1.1.0]
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099) [nifi-framework-core-1.1.0.jar:1.1.0]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.1.0.jar:1.1.0]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.1.0.jar:1.1.0]
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.1.0.jar:1.1.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_51]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_51]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_51]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_51]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_51]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_51]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_51]
Caused by: org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (Could not open client transport with JDBC Uri: jdbc:hive2://ddas1106a:10000/innovate: Peer indicated failure: Unsupported mechanism type PLAIN)
at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1549) ~[commons-dbcp-1.4.jar:1.4]
at org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1388) ~[commons-dbcp-1.4.jar:1.4]
at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044) ~[commons-dbcp-1.4.jar:1.4]
at org.apache.nifi.dbcp.hive.HiveConnectionPool.getConnection(HiveConnectionPool.java:288) ~[nifi-hive-processors-1.1.0.jar:1.1.0]
... 18 common frames omitted
Caused by: java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://ddas1106a:10000/innovate: Peer indicated failure: Unsupported mechanism type PLAIN
at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:231) ~[hive-jdbc-1.2.1.jar:1.2.1]
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:176) ~[hive-jdbc-1.2.1.jar:1.2.1]
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105) ~[hive-jdbc-1.2.1.jar:1.2.1]
at org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38) ~[commons-dbcp-1.4.jar:1.4]
at org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582) ~[commons-dbcp-1.4.jar:1.4]
at org.apache.commons.dbcp.BasicDataSource.validateConnectionFactory(BasicDataSource.java:1556) ~[commons-dbcp-1.4.jar:1.4]
at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1545) ~[commons-dbcp-1.4.jar:1.4]
... 21 common frames omitted
Caused by: org.apache.thrift.transport.TTransportException: Peer indicated failure: Unsupported mechanism type PLAIN
at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:199) ~[hive-exec-1.2.1.jar:1.2.1]
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:307) ~[hive-exec-1.2.1.jar:1.2.1]
at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37) ~[hive-exec-1.2.1.jar:1.2.1]
at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:204) ~[hive-jdbc-1.2.1.jar:1.2.1]
... 27 common frames omitted
WARN [NiFi Web Server-29] o.a.nifi.dbcp.hive.HiveConnectionPool HiveConnectionPool[id=278beb67-0159-1000-cffa-8c8534c285c8] Configuration does not have security enabled, Keytab and Principal will be ignored
What you've added to the nifi.properties file is used for Kerberizing the NiFi cluster itself. In order to access a Kerberized Hadoop cluster, you need to provide the appropriate config files and keytab in NiFi's HDFS processors.
For example, if you are using PutHDFS to write to a Hadoop cluster, you set (example values shown after this list):
Hadoop Configuration Resources: paths to core-site.xml and hdfs-site.xml
Kerberos Principal: your principal for accessing the Hadoop cluster
Kerberos Keytab: path to the keytab generated using the krb5.conf of the Hadoop cluster; nifi.kerberos.krb5.file in nifi.properties must point to the appropriate krb5.conf file
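A sketch with concrete values, reusing the principal and keytab path from the question (the Hadoop config file paths are assumptions; point them at your cluster's actual files):
Hadoop Configuration Resources : /etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
Kerberos Principal             : pseeram@JUNIPER.COM
Kerberos Keytab                : /uhome/pseeram/learning/pseeram.keytab
Also note the WARN at the end of your log ("Configuration does not have security enabled, Keytab and Principal will be ignored"): it usually means the core-site.xml handed to the processor does not set hadoop.security.authentication to kerberos, in which case the principal and keytab are ignored.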
Regardless of whether NiFi sits inside the Kerberized Hadoop cluster or not, this post might be useful:
https://community.hortonworks.com/questions/84659/how-to-use-apache-nifi-on-kerberized-hdp-cluster-n.html
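As for the Hive error itself: "Peer indicated failure: Unsupported mechanism type PLAIN" typically means the client attempted a plain (non-Kerberos) SASL handshake against a Kerberized HiveServer2, which happens when the JDBC URL carries no Kerberos principal. A sketch based on the URL from your log (the hive/_HOST service principal is an assumption and must match what your HiveServer2 actually runs as):
jdbc:hive2://ddas1106a:10000/innovate;principal=hive/_HOST@JUNIPER.COM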

SecurityConstraint.class not found Tomcat 8.0.30

I am trying to upgrade my application server from Tomcat 6 to Tomcat 8, which uses a custom realm. After changing the server.xml file to point to our custom realm, I started getting this exception:
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:485)
Caused by: java.lang.NoClassDefFoundError: org.apache.catalina.deploy.SecurityConstraint
at java.lang.Class.getMethods(Class.java:1357)
at org.apache.tomcat.util.modeler.modules.MbeansDescriptorsIntrospectionSource.createManagedBean(MbeansDescriptorsIntrospectionSource.java:297)
at org.apache.tomcat.util.modeler.modules.MbeansDescriptorsIntrospectionSource.execute(MbeansDescriptorsIntrospectionSource.java:77)
at org.apache.tomcat.util.modeler.modules.MbeansDescriptorsIntrospectionSource.loadDescriptors(MbeansDescriptorsIntrospectionSource.java:70)
at org.apache.tomcat.util.modeler.Registry.load(Registry.java:582)
at org.apache.tomcat.util.modeler.Registry.findManagedBean(Registry.java:485)
at org.apache.tomcat.util.modeler.Registry.registerComponent(Registry.java:614)
at org.apache.catalina.util.LifecycleMBeanBase.register(LifecycleMBeanBase.java:161)
at org.apache.catalina.util.LifecycleMBeanBase.initInternal(LifecycleMBeanBase.java:61)
at org.apache.catalina.realm.RealmBase.initInternal(RealmBase.java:1214)
at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:102)
... 18 more
Caused by: java.lang.ClassNotFoundException: org.apache.catalina.deploy.SecurityConstraint
at java.net.URLClassLoader.findClass(URLClassLoader.java:607)
at java.lang.ClassLoader.loadClassHelper(ClassLoader.java:844)
at java.lang.ClassLoader.loadClass(ClassLoader.java:823)
at java.lang.ClassLoader.loadClass(ClassLoader.java:803)
at java.lang.Class.getVirtualMethodsImpl(Native Method)
I tried checking catalina.jar in TOMCAT/lib; when extracted, it does not contain SecurityConstraint.class.
Any idea:
1) why is it not there?
2) how do we fix this issue so that we can deploy the application?
The SecurityConstraint class has moved to org.apache.tomcat.embed:tomcat-embed-core.
The other answer mentions that SecurityConstraint was moved, but the location it gives is only relevant to the embedded version of Tomcat. For the regular version of Tomcat 8, the class was moved to
org.apache.tomcat.util.descriptor.web.SecurityConstraint
inside tomcat-util-scan.jar.
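So if you have the source of the custom realm, the fix is usually just updating the import from org.apache.catalina.deploy.SecurityConstraint to org.apache.tomcat.util.descriptor.web.SecurityConstraint and recompiling against the Tomcat 8 jars; tomcat-util-scan.jar ships in a stock TOMCAT/lib, so no extra deployment should be needed.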