Getting SparkUncaughtExceptionHandler when running spark-perf testing

We have set up a distributed Spark cluster (version 1.5.0) and are trying to run spark-perf, but we get the following error and have no idea how to fix it.
15/10/05 20:14:37 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[appclient-registration-retry-thread,5,main]
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@43ff6bf rejected from java.util.concurrent.ThreadPoolExecutor@36077c7[Running, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:110)
at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anonfun$tryRegisterAllMasters$1.apply(AppClient.scala:96)
at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anonfun$tryRegisterAllMasters$1.apply(AppClient.scala:95)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
at org.apache.spark.deploy.client.AppClient$ClientEndpoint.tryRegisterAllMasters(AppClient.scala:95)
at org.apache.spark.deploy.client.AppClient$ClientEndpoint.org$apache$spark$deploy$client$AppClient$ClientEndpoint$$registerWithMaster(AppClient.scala:121)
at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2$$anonfun$run$1.apply$mcV$sp(AppClient.scala:132)
at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1119)
at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2.run(AppClient.scala:124)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
15/10/05 20:14:37 INFO DiskBlockManager: Shutdown hook called
15/10/05 20:14:37 INFO ShutdownHookManager: Shutdown hook called
15/10/05 20:14:37 INFO ShutdownHookManager: Deleting directory /tmp/spark-c5a4a63b-3dc5-4c52-bd2b-e6df22a0c19f

Please check the variable SPARK_CLUSTER_URL in config/config.py.
SPARK_CLUSTER_URL = "spark://Master_Ip:7077"
PS: Master_Ip is the master's IP address, not its hostname.

You have not entered your Spark master URL correctly; it could be a letter-case error. Make sure the value of hibench.spark.master in the file conf/99-user_defined_properties.conf is correct. You should be able to connect to the Spark shell by running the following command:
MASTER=<YOUR-SPARK-MASTER-URL-HERE> bin/spark-shell
In Spark's standalone mode this URL should look something like:
spark://<master-machine-IP>:7077
Generally it is better to use the IP address of the master node rather than the alphabetic hostname the Spark master reports, e.g. spark://Macs-MacBook-Pro.local:7077.

I replaced the master IP with its hostname in the spark-submit command, and this solved the error.
--master "spark://hostname:7077"

Related

install odl-ovsdb-southbound-impl exception

In OpenDaylight Carbon, when installing
feature:install odl-ovsdb-southbound-impl
I get the following exception
Exception in thread "Thread-133" java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:128)
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:554)
at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1258)
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:502)
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:487)
at io.netty.handler.logging.LoggingHandler.bind(LoggingHandler.java:191)
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:502)
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:487)
at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:980)
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:250)
at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:365)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:445)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:748)
I also get it each time I start ODL.
The java.net.BindException: Address already in use means that something on the host you are running on already holds one of the ports OpenDaylight wants to open. This can also happen if you start OpenDaylight twice. Often the actual port is shown in this kind of error, and then you can use netstat (on Linux) to find which process is already on that port. In this case, for some reason, the port is not shown; I don't know why. So you could do some trial and error: stop the other things you are running, one after another, and retry ODL until you find the culprit.
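For example, a typical invocation on Linux (assuming the net-tools package is installed) lists listening sockets together with the owning process, so a conflicting one can be spotted and stopped:
netstat -tulpn | grep LISTEN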

Connection refused: connect whilst running Apache Zeppelin

I have downloaded Spark 2.1.0 and Apache Zeppelin 0.7.0. I have edited my environment variables for Spark, Java and Winutils (as I am using Windows 10). I have edited zeppelin-env.sh and zeppelin-site.xml to export my JAVA_HOME and SPARK_HOME, and I edited zeppelin-site.xml to change the port number, as 8080 was already in use. I opened the command line as an administrator, changed to the zeppelin-0.7.0-bin-all directory, and ran the command bin\zeppelin.cmd. I opened the browser and navigated to localhost:8090. Zeppelin opens, and the green light in the top right corner appears, suggesting I am connected to the server. But when I run the "load data into table" tutorial I receive an error:
java.net.ConnectException: Connection refused: connect
at java.net.DualStackPlainSocketImpl.connect0(Native Method)
at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:79)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.apache.thrift.transport.TSocket.open(TSocket.java:182)
at org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:51)
at org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:37)
at org.apache.commons.pool2.BasePooledObjectFactory.makeObject(BasePooledObjectFactory.java:60)
at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:861)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:435)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:363)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.getClient(RemoteInterpreterProcess.java:90)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.init(RemoteInterpreter.java:209)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getFormType(RemoteInterpreter.java:375)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.getFormType(LazyOpenInterpreter.java:105)
at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:365)
at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:329)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
No logs are available in the logs folder; everything is printed on the command line. The %md interpreter works, as the code from "Welcome to Zeppelin" runs fine.
I have tried altering the memory as per ZEPPELIN-305 and ZEPPELIN-449 on issues.apache.org. I have tried turning off Windows Firewall. I have uninstalled and reinstalled to make sure it wasn't an error I caused myself, but nothing seems to work.
Has anybody had the same or a similar problem? Any help would be much appreciated.
Your issue might be that you are editing the wrong config file. Since you are on Windows 10, you want to edit zeppelin-env.cmd (not zeppelin-env.sh, which is intended for Linux). See the Zeppelin install guide.
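For example, zeppelin-env.cmd would contain lines like the following (the paths are illustrative; adjust them to your actual install locations):
set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_121
set SPARK_HOME=C:\spark-2.1.0-bin-hadoop2.7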

JDBC client to Hive - No data or no sasl data in the stream Exception

We have a Kerberised cluster and I'm trying to run a Java action in Oozie where I make a JDBC connection to Hive. This JDBC connection works fine on the Sandbox without Kerberos.
The connection string is as simple as the following, where I'm providing username and password in it:
Connection con = DriverManager.getConnection("jdbc:hive2://W12345:10000/control;principal=hive/W12345.companynet.net@COMPANYNET.NET","user123","passw123");
The Oozie action (strangely) completes successfully, and the Java action log does not show any error:
1742 [main] INFO org.apache.hive.jdbc.Utils - Supplied authorities: W12345:10000
1742 [main] INFO org.apache.hive.jdbc.Utils - Resolved authority: W12345:10000
1766 [main] INFO org.apache.hive.jdbc.HiveConnection - Will try to open client transport with JDBC Uri: jdbc:hive2://W12345:10000/control;principal=hive/W12345.companynet.net@COMPANYNET.NET
<<< Invocation of Main class completed <<<
Oozie Launcher ends
1785 [main] INFO org.apache.hadoop.mapred.Task - Task:attempt_1464245290012_0129_m_000000_0 is done. And is in the process of committing
1847 [main] INFO org.apache.hadoop.mapred.Task - Task attempt_1464245290012_0129_m_000000_0 is allowed to commit now
1854 [main] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - Saved output of task 'attempt_1464245290012_0129_m_000000_0' to hdfs://danskehadoop/user/user123/oozie-oozi/0000013-160527101253015-oozie-oozi-W/JavaAction--java/output/_temporary/1/task_1464245290012_0129_m_000000
1909 [main] INFO org.apache.hadoop.mapred.Task - Task 'attempt_1464245290012_0129_m_000000_0' done.
But in reality the Java main does not complete its execution correctly (and does not execute the needed queries), because the JDBC connection fails with an exception that I can see only in the Hive log:
ERROR [HiveServer2-Handler-Pool: Thread-78363]: server.TThreadPoolServer (TThreadPoolServer.java:run(296)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:739)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:736)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1637)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:736)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:328)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
... 10 more
I'm actually connected to the cluster and have already done a further kinit for my username.
Does anybody know what the cause of this exception could be?
Thanks in advance for the help!
Antonio
This happened to me on the MapR Hadoop distribution.
In my case it was Keepalived checking the Hive port every 5 seconds and producing this error: I had simply used the nc command to check whether the Hive port was in use, and that probe does not use any authentication method. Later I switched to the maprcli command, which uses SASL authentication, and the error was gone.
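The flip side of this: when a client genuinely has to open a connection to a Kerberized HiveServer2 (as in the question above), it must perform a Kerberos login before connecting; the username/password arguments on the URL are ignored. A minimal sketch, assuming keytab-based login is available (the client principal and keytab path here are hypothetical):
import java.sql.Connection;
import java.sql.DriverManager;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

// Switch the Hadoop security layer to Kerberos and log in from a keytab.
Configuration conf = new Configuration();
conf.set("hadoop.security.authentication", "kerberos");
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab(
        "user123@COMPANYNET.NET",      // hypothetical client principal
        "/path/to/user123.keytab");    // hypothetical keytab path

// The principal in the URL is the HiveServer2 service principal.
Connection con = DriverManager.getConnection(
        "jdbc:hive2://W12345:10000/control;principal=hive/W12345.companynet.net@COMPANYNET.NET");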

'Address already in use' exception thrown in com.gemstone.gemfire.internal.cache.tier.sockets.AcceptorImpl

I am trying to run a *.sh file on a Linux system; this *.sh file is used to start a Java application, which uses GemFire as its distributed cache system. It seems that I cannot open a new TCP connection for GemFire. Does anyone know how to solve this problem?
Here is the exception:
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:437)
at sun.nio.ch.Net.bind(Net.java:429)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at com.gemstone.gemfire.internal.cache.tier.sockets.AcceptorImpl.<init>(AcceptorImpl.java:378)
at com.gemstone.gemfire.internal.cache.BridgeServerImpl.start(BridgeServerImpl.java:297)
at spark.cache.CacheServicePoint.enableServer(CacheServicePoint.java:197)
at orion.di.service.profile.ProfileService.initialize(ProfileService.java:108)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeCustomInitMethod(AbstractAutowireCapableBeanFactory.java:1544)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1485)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1417)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
Address already in use means that an instance of your application is already running and listening on that port. If you are an Ubuntu user:
netstat -tulpn | grep YOURAPPNAME
This will give you the process ID of the instance running on your system. Find the process ID, kill the instance, and start the application once again.
To kill it:
kill -9 PROCESSID
Are you starting two processes of your Java application on the same box? A GemFire server by default opens port 40404 to listen for client connections, so when you start more than one server on the same box, the second one gets an Address already in use exception. Look at the script used to start your application; you will need to provide a different port for each GemFire server you are trying to start. Using the GemFire shell (gfsh), this can be done like so:
gfsh>start server --name=server1 --server-port=4045
Or, if there are no clients (i.e. a peer-to-peer deployment of GemFire), you can disable listening for clients like so:
gfsh>start server --name=server1 --disable-default-server
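If the server is embedded in your own Java application (as the AcceptorImpl stack trace suggests) rather than started through gfsh, the port can also be set programmatically. A minimal sketch using the classic com.gemstone.gemfire API (the port value is just an example of a free port):
import com.gemstone.gemfire.cache.Cache;
import com.gemstone.gemfire.cache.CacheFactory;
import com.gemstone.gemfire.cache.server.CacheServer;

// Create (or connect to) the cache for this member, then add a
// cache server on a non-default port so a second server on the
// same box does not collide with the default 40404.
Cache cache = new CacheFactory().create();
CacheServer server = cache.addCacheServer();
server.setPort(40405);   // any free port
server.start();          // throws IOException if the port is taken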

Spring Session, Embedded Redis Server Error

Unable to start the embedded Redis server; it's giving the following error. What could be the possible reason? I'm working on WildFly on Ubuntu. Following is the stack trace.
... 25 more
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'redisServer' defined in org.egov.infra.config.session.RedisHttpSessionConfiguration: Invocation of init method failed; nested exception is java.lang.RuntimeException: Can't start redis server. Check logs for details.
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1566)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:539)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:476)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:303)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:299)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:116)
at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:606)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:462)
at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:139)
at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:93)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) [rt.jar:1.8.0_31]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) [rt.jar:1.8.0_31]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) [rt.jar:1.8.0_31]
at java.lang.reflect.Constructor.newInstance(Constructor.java:408) [rt.jar:1.8.0_31]
at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:147)
... 27 more
Caused by: java.lang.RuntimeException: Can't start redis server. Check logs for details.
at redis.embedded.AbstractRedisInstance.awaitRedisServerReady(AbstractRedisInstance.java:66)
at redis.embedded.AbstractRedisInstance.start(AbstractRedisInstance.java:37)
at redis.embedded.RedisServer.start(RedisServer.java:11)
at org.egov.infra.config.redis.EmbeddedRedisServer.afterPropertiesSet(EmbeddedRedisServer.java:20)
This is a bug that has been reported here: https://github.com/spring-projects/spring-session/issues/150
In my case I was running an embedded redis-server on port 1337, and that port was locked: it had gone into a loop some time back when I was running my test cases in debug mode. After that I also started a Spring Boot app, which created another server connection on port 6379, but I failed to terminate the server running on port 1337. Since then I had been getting the exception "Can't start redis server. Check logs for details." whenever I tried to execute test cases, since 1337 was locked. Debugging line by line through the AbstractRedisInstance class and its awaitRedisServerReady method revealed "1337 Already in use", which was never logged at all. I killed the process on that port, re-ran the test cases, and I was flying again. Hope this helps.
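For anyone hitting this, the usual hygiene with embedded-redis is to start the server on a port you know is free and to stop it unconditionally in teardown, so a crashed debug run cannot leave the port locked. A minimal sketch (the port number is arbitrary):
import redis.embedded.RedisServer;

RedisServer redisServer = new RedisServer(6380); // a port known to be free
redisServer.start();
try {
    // exercise the code under test against localhost:6380
} finally {
    redisServer.stop(); // always release the port, even when a test fails
}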