Invalid broker URI when starting Apache ActiveMQ

I downloaded Apache ActiveMQ 5.9 on Windows 7, then opened a command prompt, typed ACTIVEMQDIR> .\bin\activemq, and got an "Invalid Broker URI" error.
What configuration information do I need to provide so that ActiveMQ works on Windows 7?
EDIT:
Here's the stack trace:
C:\activemq\bin>activemq start /p
Java Runtime: Oracle Corporation 1.7.0_79 C:\Program Files\Java\jdk1.7.0_79\jre
Heap sizes: current=1005568k free=989808k max=1005568k
JVM args: -Dcom.sun.management.jmxremote -Xms1G -Xmx1G -Djava.util.logging.config.file=logging.properties -Djava.security.auth.login.config=C:\activemq\bin\..\conf\login.config -Dactivemq.classpath=C:\activemq\bin\..\conf;C:\activemq\bin\../conf;C:\activemq\bin\../conf; -Dactivemq.home=C:\activemq\bin\.. -Dactivemq.base=C:\activemq\bin\.. -Dactivemq.conf=C:\activemq\bin\..\conf -Dactivemq.data=C:\activemq\bin\..\data -Djava.io.tmpdir=C:\activemq\bin\..\data\tmp
Extensions classpath:
[C:\activemq\bin\..\lib,C:\activemq\bin\..\lib\camel,C:\activemq\bin\..\lib\optional,C:\activemq\bin\..\lib\web,C:\activemq\bin\..\lib\extra]
ACTIVEMQ_HOME: C:\activemq\bin\..
ACTIVEMQ_BASE: C:\activemq\bin\..
ACTIVEMQ_CONF: C:\activemq\bin\..\conf
ACTIVEMQ_DATA: C:\activemq\bin\..\data
Loading message broker from: /p
ERROR: java.lang.RuntimeException: Failed to execute start task. Reason: java.lang.IllegalArgumentException: Invalid broker URI, no scheme specified: /p
java.lang.RuntimeException: Failed to execute start task. Reason: java.lang.IllegalArgumentException: Invalid broker URI, no scheme specified: /p
    at org.apache.activemq.console.command.StartCommand.runTask(StartCommand.java:91)
    at org.apache.activemq.console.command.AbstractCommand.execute(AbstractCommand.java:57)
    at org.apache.activemq.console.command.ShellCommand.runTask(ShellCommand.java:150)
    at org.apache.activemq.console.command.AbstractCommand.execute(AbstractCommand.java:57)
    at org.apache.activemq.console.command.ShellCommand.main(ShellCommand.java:104)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.activemq.broker.BrokerFactory.createBroker(BrokerFactory.

The problem is that the trailing argument ('/p' here, as shown by "Loading message broker from: /p") is being interpreted as the broker URI, and a bare path has no scheme. Run the command without arguments:
activemq

Just execute "activemq" by itself and remove the extra arguments.
For example, run it like the line below:
C:\activemq\bin>activemq
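If you do want to pass an argument, the start command expects a broker URI with a scheme. A sketch of both forms (the config file path is an assumption; adjust it to your install):
C:\activemq\bin>activemq start xbean:file:C:/activemq/conf/activemq.xml
C:\activemq\bin>activemq start broker:(tcp://localhost:61616)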

Related

Flink 1.10 not connecting to minio (s3)

I'm running flink and minio locally with docker compose.
When I try to connect to minio, I always get the following error:
caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 's3'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
It seems that the plugin isn't loaded correctly.
My flink config (flink-conf.yaml):
state.backend: filesystem
s3.endpoint: http://minio:9000
s3.path.style.access: true
s3.access-key: minio
s3.secret-key: minio123
presto.s3.access-key: minio
presto.s3.secret-key: minio123
presto.s3.endpoint: http://minio:9000
presto.s3.path-style-access: true
I've copied the required plugin as follows:
mkdir -p plugins/s3-fs-presto
cp opt/flink-s3-fs-presto-*.jar plugins/s3-fs-presto
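If the copy above happened on the host only, note that the plugin directory also has to exist inside the Flink containers (the official images look in /opt/flink/plugins). A minimal docker-compose sketch of one service, where the service name, image tag, and host path are assumptions:
jobmanager:
  image: flink:1.10.0
  volumes:
    - ./plugins/s3-fs-presto:/opt/flink/plugins/s3-fs-presto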
Any suggestions?
Stack trace:
The program finished with the following exception:
org.apache.flink.client.program.ProgramInvocationException: Job failed. (JobID: 7ae6657256719d8c32d76ba113fb35f0)
at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:262)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:326)
at org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:62)
at org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:820)
at org.apache.flink.api.java.DataSet.collect(DataSet.java:413)
at org.apache.flink.api.java.DataSet.print(DataSet.java:1652)
at org.apache.flink.examples.java.wordcount.WordCount.main(WordCount.java:96)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:604)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:466)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1008)
at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1081)
at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1081)
Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:259)
... 21 more
Caused by: java.io.IOException: Error opening the Input Split s3://test/test.txt [0,3243]: Could not find a file system implementation for scheme 's3'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
at org.apache.flink.api.common.io.FileInputFormat.open(FileInputFormat.java:824)
at org.apache.flink.api.common.io.DelimitedInputFormat.open(DelimitedInputFormat.java:470)
at org.apache.flink.api.common.io.DelimitedInputFormat.open(DelimitedInputFormat.java:47)
at org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:173)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 's3'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:450)
at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:362)
at org.apache.flink.api.common.io.FileInputFormat$InputSplitOpenThread.run(FileInputFormat.java:995)
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
at org.apache.flink.core.fs.UnsupportedSchemeFactory.create(UnsupportedSchemeFactory.java:58)
at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:446)
... 2 more

Nutch problems executing crawl

I am trying to get Nutch 1.11 to execute a crawl. I am using Cygwin to run these commands on Windows 7.
Nutch is running; I get results from running bin/nutch, but I keep getting error messages when I try to run a crawl.
I get the following error when I try to execute a crawl with Nutch:
Error running: /cygdrive/c/Users/User5/Documents/Nutch/apache-nutch-1.11/runtime/local/bin/nutch inject TestCrawl/crawldb C:/Users/User5/Documents/Nutch/apache-nutch-1.11/runtime/local/urls/seed.txt
Failed with exit value 127.
I have my JAVA_HOME classpath set, and I have altered the hosts file to include 127.0.0.1 as localhost.
I am wondering whether I am passing the right directory, and whether that is the problem.
The full printout looks like:
User5#User5-PC /cygdrive/c/Users/User5/Documents/Nutch/apache-nutch-1.11/runtime/local
$ bin/crawl -i -D solr.server.url=http://localhost:8983/solr/ C:/Users/User5/Documents/Nutch/apache-nutch-1.11/runtime/local/urls/ TestCrawl/ 2
Injecting seed URLs
/cygdrive/c/Users/User5/Documents/Nutch/apache-nutch-1.11/runtime/local/bin/nutch inject TestCrawl//crawldb C:/Users/User5/Documents/Nutch/apache-nutch-1.11/runtime/local/urls/
Injector: starting at 2015-12-23 17:48:21
Injector: crawlDb: TestCrawl/crawldb
Injector: urlDir: C:/Users/User5/Documents/Nutch/apache-nutch-1.11/runtime/local/urls
Injector: Converting injected urls to crawl db entries.
Injector: java.lang.NullPointerException
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1012)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:445)
at org.apache.hadoop.util.Shell.run(Shell.java:418)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:739)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:722)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:633)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:421)
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:281)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:125)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:348)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:833)
at org.apache.nutch.crawl.Injector.inject(Injector.java:323)
at org.apache.nutch.crawl.Injector.run(Injector.java:379)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.crawl.Injector.main(Injector.java:369)
Error running:
/cygdrive/c/Users/User5/Documents/Nutch/apache-nutch-1.11/runtime/local/bin/nutch inject TestCrawl//crawldb C:/Users/User5/Documents/Nutch/apache-nutch-1.11/runtime/local/urls/
Failed with exit value 127.
The entry in the Hadoop log that I think may have something to do with the error is:
2016-01-07 12:24:40,360 ERROR util.Shell - Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:318)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:333)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:326)
at org.apache.hadoop.util.GenericOptionsParser.preProcessForWindows(GenericOptionsParser.java:432)
at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:478)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:170)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:153)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:64)
at org.apache.nutch.crawl.Injector.main(Injector.java:369)
2016-01-07 12:24:40,450 ERROR crawl.Injector - Injector: java.lang.IllegalArgumentException: java.net.URISyntaxException: Illegal character in scheme name at index 15: solr.server.url=http://localhost:8983/solr
at org.apache.hadoop.fs.Path.initialize(Path.java:206)
at org.apache.hadoop.fs.Path.<init>(Path.java:172)
at org.apache.nutch.crawl.Injector.run(Injector.java:379)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.crawl.Injector.main(Injector.java:369)
Caused by: java.net.URISyntaxException: Illegal character in scheme name at index 15: solr.server.url=http://localhost:8983/solr
at java.net.URI$Parser.fail(URI.java:2848)
at java.net.URI$Parser.checkChars(URI.java:3021)
at java.net.URI$Parser.parse(URI.java:3048)
at java.net.URI.<init>(URI.java:746)
at org.apache.hadoop.fs.Path.initialize(Path.java:203)
... 4 more
You are running Linux commands from Cygwin, and there is no C:\ path on Linux systems. The correct command should be something like:
/cygdrive/c/Users/User5/Documents/Nutch/apache-nutch1.11/runtime/local/bin/nutch inject TestCrawl/crawldb /cygdrive/c/Users/User5/Documents/Nutch/apache-nutch1.11/runtime/local/urls/seed.txt
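If you are unsure of a conversion, Cygwin ships a cygpath utility that does it for you:
$ cygpath -u 'C:\Users\User5\Documents\Nutch\apache-nutch-1.11\runtime\local\urls\seed.txt'
/cygdrive/c/Users/User5/Documents/Nutch/apache-nutch-1.11/runtime/local/urls/seed.txt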
You have the answer to your problem in this message:
2016-01-07 12:24:40,360 ERROR util.Shell - Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
This is happening because the Hadoop version included with Nutch 1.11 is designed to work on Linux out of the box, not on Windows.
I had the same situation, and I ended up running Nutch 1.11 in an Ubuntu VirtualBox VM.
A hadoop-core jar is needed when you are working with Nutch.
The hadoop-core jar compatible with Nutch 1.11 is 0.20.0.
Please download the jar from this link:
http://www.java2s.com/Code/Jar/h/Downloadhadoop0200corejar.htm
Paste that jar into the "C:\cygwin64\home\apache-nutch-1.11\lib" folder, and it will run successfully.
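If you would rather pull the jar from a Maven repository than from java2s (assuming an artifact is published for that exact version, which I have not verified), the coordinates would look like this:
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-core</artifactId>
    <version>0.20.0</version>
</dependency>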
The problem is pretty clear. According to your Hadoop log, it cannot find the winutils.exe file. Put winutils.exe in your %HADOOP_HOME%\bin folder.
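For that to work, HADOOP_HOME itself must be set; the "null\bin\winutils.exe" in the log means it currently is not. A sketch, assuming you place winutils.exe under C:\hadoop\bin (the location is your choice):
set HADOOP_HOME=C:\hadoop
set PATH=%PATH%;%HADOOP_HOME%\bin
rem winutils.exe must now exist at C:\hadoop\bin\winutils.exe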

Unable to start GlassFish 4 server in Eclipse using Java 8

I am configuring my computer to work with PrimeFaces and PrimeFaces Mobile. I installed Apache Tomcat, GlassFish 4, and Java 8 (JDK 1.8). I configured GlassFish in Eclipse, but when I try to start it, I receive this message:
Starting GlassFish 4 at localhost [domains1] has encountered a problem :
Unable to start server due following issues:
Launch process failed with exit code 1
and the log file gives me this:
Launching GlassFish on Felix platform
ERROR: Unable to create cache directory: C:\Program Files\glassfish-4.1\glassfish\domains\domain1\osgi-cache\felix
ERROR: Error creating bundle cache. (java.lang.RuntimeException: Unable to create cache directory.)
java.lang.RuntimeException: Unable to create cache directory.
    at org.apache.felix.framework.cache.BundleCache.<init>(BundleCache.java:131)
    at org.apache.felix.framework.Felix.init(Felix.java:640)
    at com.sun.enterprise.glassfish.bootstrap.osgi.OSGiFrameworkLauncher$1.run(OSGiFrameworkLauncher.java:88)
Exception in thread "Thread-1" java.lang.RuntimeException: org.osgi.framework.BundleException: Error creating bundle cache.
    at com.sun.enterprise.glassfish.bootstrap.osgi.OSGiFrameworkLauncher$1.run(OSGiFrameworkLauncher.java:90)
Caused by: org.osgi.framework.BundleException: Error creating bundle cache.
    at org.apache.felix.framework.Felix.init(Felix.java:645)
    at com.sun.enterprise.glassfish.bootstrap.osgi.OSGiFrameworkLauncher$1.run(OSGiFrameworkLauncher.java:88)
Caused by: java.lang.RuntimeException: Unable to create cache directory.
    at org.apache.felix.framework.cache.BundleCache.<init>(BundleCache.java:131)
    at org.apache.felix.framework.Felix.init(Felix.java:640)
    ... 1 more
Exception in thread "main" java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at com.sun.enterprise.glassfish.bootstrap.GlassFishMain.main(GlassFishMain.java:97)
    at com.sun.enterprise.glassfish.bootstrap.ASMain.main(ASMain.java:54)
Caused by: org.glassfish.embeddable.GlassFishException: java.lang.NullPointerException
    at com.sun.enterprise.glassfish.bootstrap.osgi.OSGiGlassFishRuntimeBuilder.build(OSGiGlassFishRuntimeBuilder.java:170)
    at org.glassfish.embeddable.GlassFishRuntime._bootstrap(GlassFishRuntime.java:157)
    at org.glassfish.embeddable.GlassFishRuntime.bootstrap(GlassFishRuntime.java:110)
    at com.sun.enterprise.glassfish.bootstrap.GlassFishMain$Launcher.launch(GlassFishMain.java:112)
    ... 6 more
Caused by: java.lang.NullPointerException
    at com.sun.enterprise.glassfish.bootstrap.osgi.OSGiGlassFishRuntimeBuilder.newFramework(OSGiGlassFishRuntimeBuilder.java:241)
    at com.sun.enterprise.glassfish.bootstrap.osgi.OSGiGlassFishRuntimeBuilder.build(OSGiGlassFishRuntimeBuilder.java:135)
    ... 9 more
Error stopping framework: java.lang.NullPointerException
java.lang.NullPointerException
    at com.sun.enterprise.glassfish.bootstrap.GlassFishMain$Launcher$1.run(GlassFishMain.java:203)
Java HotSpot(TM) Client VM warning: ignoring option MaxPermSize=192m; support was removed in 8.0
Can anyone suggest a solution, please?
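The log suggests one thing to check (a guess, not a verified fix): Felix cannot create domain1\osgi-cache under C:\Program Files, a directory ordinary users usually cannot write to. Running Eclipse as administrator, installing GlassFish outside Program Files, or granting write access to the domains directory might change the error, for example:
icacls "C:\Program Files\glassfish-4.1\glassfish\domains" /grant Users:(OI)(CI)M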

Why does the jmap command fail with java.lang.reflect.InvocationTargetException? [duplicate]

I get the following exception when I take a heap dump using:
jmap -F -dump:format=b,file=/tmp/heapdump/before.hprof 10737
Attaching to process ID 10737, please wait...
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at sun.tools.jmap.JMap.runTool(JMap.java:179)
at sun.tools.jmap.JMap.main(JMap.java:110)
Caused by: java.lang.RuntimeException: Type "nmethodBucket*", referenced in VMStructs::localHotSpotVMStructs in the remote VM, was not present in the remote VMStructs::localHotSpotVMTypes table (should have been caught in the debug build of that VM). Can not continue.
at sun.jvm.hotspot.HotSpotTypeDataBase.lookupOrFail(HotSpotTypeDataBase.java:361)
at sun.jvm.hotspot.HotSpotTypeDataBase.readVMStructs(HotSpotTypeDataBase.java:252)
at sun.jvm.hotspot.HotSpotTypeDataBase.<init>(HotSpotTypeDataBase.java:87)
at sun.jvm.hotspot.bugspot.BugSpotAgent.setupVM(BugSpotAgent.java:568)
at sun.jvm.hotspot.bugspot.BugSpotAgent.go(BugSpotAgent.java:494)
at sun.jvm.hotspot.bugspot.BugSpotAgent.attach(BugSpotAgent.java:332)
at sun.jvm.hotspot.tools.Tool.start(Tool.java:163)
at sun.jvm.hotspot.tools.HeapDumper.main(HeapDumper.java:77)
Does anyone know how to resolve this?
I was seeing the same error because the path to my jmap wasn't the same as the path of the running java process (i.e., I was targeting two different versions).
Running jmap with the full path to my JDK resolved it.
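A quick way to see which binary the target process is actually running (on Linux; PID 10737 as in the question, and the output path below is only an example):
$ readlink -f /proc/10737/exe
/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
Then invoke the jmap that ships with that same JDK:
$ /usr/lib/jvm/java-8-openjdk-amd64/bin/jmap -F -dump:format=b,file=/tmp/heapdump/before.hprof 10737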
If OpenJDK is used, it requires installation of debuginfo packages.
On CentOS this works with
- sudo debuginfo-install java-1.8.0-openjdk
- or sudo yum install java-1.8.0-openjdk-debuginfo.x86_64
See
- https://bugzilla.redhat.com/show_bug.cgi?id=1010786#c15
- amazon linux - install openjdk-debuginfo?

Jenkins build failed on OS X

I am trying to build my project using Jenkins and deploy the artifacts to Nexus. Jenkins is set up on my Mac (OS X).
Below is the error I am getting:
Parsing POMs
[maventest] $ /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home/bin/java -Xmx512m -XX:MaxPermSize=128m -Dfile.encoding=UTF-8 -cp /Users/Shared/Jenkins/Home/plugins/maven-plugin/WEB-INF/lib/maven3-agent-1.3.jar:/usr/share/maven/boot/plexus-classworlds-2.4.jar org.jvnet.hudson.maven3.agent.Maven3Main /usr/share/maven /Users/Shared/Jenkins/Home/war/WEB-INF/lib/remoting-2.26.jar /Users/Shared/Jenkins/Home/plugins/maven-plugin/WEB-INF/lib/maven3-interceptor-1.3.jar 59985
<===[JENKINS REMOTING CAPACITY]===>channel started
channel stopped
ERROR: Failed to parse POMs
java.io.IOException: Remote call on Channel to Maven [/System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home/bin/java, -Xmx512m, -XX:MaxPermSize=128m, -Dfile.encoding=UTF-8, -cp, /Users/Shared/Jenkins/Home/plugins/maven-plugin/WEB-INF/lib/maven3-agent-1.3.jar:/usr/share/maven/boot/plexus-classworlds-2.4.jar, org.jvnet.hudson.maven3.agent.Maven3Main, /usr/share/maven, /Users/Shared/Jenkins/Home/war/WEB-INF/lib/remoting-2.26.jar, /Users/Shared/Jenkins/Home/plugins/maven-plugin/WEB-INF/lib/maven3-interceptor-1.3.jar, 59985] failed
    at hudson.remoting.Channel.call(Channel.java:727)
    at hudson.maven.ProcessCache$MavenProcess.call(ProcessCache.java:156)
    at hudson.maven.MavenModuleSetBuild$MavenModuleSetBuildExecution.doRun(MavenModuleSetBuild.java:770)
    at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:586)
    at hudson.model.Run.execute(Run.java:1593)
    at hudson.maven.MavenModuleSetBuild.run(MavenModuleSetBuild.java:491)
    at hudson.model.ResourceController.execute(ResourceController.java:88)
    at hudson.model.Executor.run(Executor.java:247)
Caused by: java.lang.InternalError: Can't connect to window server - not enough permissions.
    at java.lang.ClassLoader$NativeLibrary.load(Native Method)
    at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1827)
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1724)
    at java.lang.Runtime.loadLibrary0(Runtime.java:823)
    at java.lang.System.loadLibrary(System.java:1045)
    at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:50)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.awt.Toolkit.loadLibraries(Toolkit.java:1605)
    at java.awt.Toolkit.<clinit>(Toolkit.java:1627)
    at java.awt.Color.<clinit>(Color.java:263)
    at hudson.util.ColorPalette.<clinit>(ColorPalette.java:39)
    at hudson.model.BallColor.<clinit>(BallColor.java:56)
    at hudson.model.Result.<clinit>(Result.java:51)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:171)
    at com.sun.proxy.$Proxy8.<clinit>(Unknown Source)
    at sun.reflect.GeneratedSerializationConstructorAccessor41.newInstance(Unknown Source)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at java.io.ObjectStreamClass.newInstance(ObjectStreamClass.java:929)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1759)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1327)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1969)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1775)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1327)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:349)
    at java.util.HashMap.readObject(HashMap.java:1030)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:979)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1871)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1775)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1327)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1969)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1775)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1327)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:349)
    at hudson.remoting.UserRequest.deserialize(UserRequest.java:182)
    at hudson.remoting.UserRequest.perform(UserRequest.java:98)
    at hudson.remoting.UserRequest.perform(UserRequest.java:48)
    at hudson.remoting.Request$2.run(Request.java:326)
    at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
    at java.lang.Thread.run(Thread.java:680)
Finished: FAILURE
I already tried the solution below, but it didn't work:
http://jenkins-ci.361315.n4.nabble.com/JIRA-Created-HUDSON-5584-java-io-IOException-Remote-call-on-Channel-to-Maven-td1475049.html
Configurations I have:
MAVEN_OPTS:-Xmx1024m
-XX:MaxPermSize=128m
-Dfile.encoding=UTF-8
-Djava.awt.headless=true
output of ps -ef | grep java: /usr/bin/java -Djava.awt.headless=true -jar /Applications/Jenkins/jenkins.war
build command: clean deploy -DaltDeploymentRepository=central::default::http://<user>:<pwd>#<host>:<port>/nexus/content/groups/public/
The solution I used was to switch to Java 7. What you want to do is add JDK 1.7 to Jenkins. Following these steps, I was able to successfully build my project:
Go to the Oracle Java page and download the 1.7_51 JDK for Mac.
Open the dmg and run the installer.
On the Mac, this installs the JDK to /Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/
In Jenkins, go to 'Manage Jenkins' > 'Configure System'.
Under the JDK heading, click the button that says JDK Installations.
Under Name, type 'JDK 1.7.0_51'.
For JAVA_HOME, type '/Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home/'.
Select Save.
Go to your project and select Configure.
You should now have a JDK drop-down near the top of the page.
Select the JDK you just configured under 'Manage Jenkins'.
Run the build.
After doing this, my build ran successfully without the 'Can't connect to window server - not enough permissions' error.
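To confirm the JDK landed where Jenkins expects it, you can check from a terminal first (same path as in the steps above; the exact build line in the output will vary):
$ /Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home/bin/java -version
java version "1.7.0_51"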
This line looks curious:
hudson.model.Executor.run(Executor.java:247)
Caused by: java.lang.InternalError: Can't connect to window server - not enough permissions.
    at java.lang.ClassLoader$NativeLibrary.load(Native Method)
I would start with a simpler project and then add complexity from there, just to verify that your basic assumptions are correct.
You might need to set the JVM property -Djava.awt.headless=true. Doing so disables the (most likely unnecessary) GUI libraries that are trying to load.
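Note that the forked Maven JVM in your log is launched with only -Xmx512m -XX:MaxPermSize=128m -Dfile.encoding=UTF-8, so the MAVEN_OPTS you listed (which include the headless flag) are evidently not reaching it. One place to set it (a suggestion, not verified against this exact Jenkins version) is the MAVEN_OPTS field in the job configuration, or the global MAVEN_OPTS under Manage Jenkins > Configure System:
MAVEN_OPTS=-Xmx1024m -XX:MaxPermSize=128m -Dfile.encoding=UTF-8 -Djava.awt.headless=true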