How to add a flow table to each of two switches? (Mininet) - SDN

I set up the topology as can be seen in the image below. I want to add different flow tables to each switch. But if I type
dpctl add-flow in_port=1,nw_dst=10.0.0.2,actions=output:3
the flow is added to both s1 and s2!
How can I add a different flow table to each switch?

You can't do this with the dpctl command; you have to use the "sh ovs-ofctl" command instead.
Mininet also answered a related question about dpctl at this link.
Here is what I did:
yavuz@ubuntu:~$ sudo mn --topo linear,2,1 --switch ovsk --controller=remote
*** Creating network
*** Adding controller
Connecting to remote controller at 127.0.0.1:6653
*** Adding hosts:
h1 h2
*** Adding switches:
s1 s2
*** Adding links:
(h1, s1) (h2, s2) (s2, s1)
*** Configuring hosts
h1 h2
*** Starting controller
c0
*** Starting 2 switches
s1 s2 ...
*** Starting CLI:
Let's dump the flows:
mininet> dpctl dump-flows
*** s1 ------------------------------------------------------------------------
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=10.979s, table=0, n_packets=21, n_bytes=1674, idle_age=1, priority=0 actions=CONTROLLER:65535
*** s2 ------------------------------------------------------------------------
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=10.974s, table=0, n_packets=21, n_bytes=1674, idle_age=1, priority=0 actions=CONTROLLER:65535
Add a flow to s1:
mininet> sh ovs-ofctl add-flow s1 in_port=5,nw_dst=10.0.0.5,actions=output:5
2017-08-03T16:06:41Z|00001|ofp_util|INFO|normalization changed ofp_match, details:
2017-08-03T16:06:41Z|00002|ofp_util|INFO| pre: in_port=5,nw_dst=10.0.0.5
2017-08-03T16:06:41Z|00003|ofp_util|INFO|post: in_port=5
Now, as seen in the flow dump, the flows on each switch are different:
mininet> dpctl dump-flows
*** s1 ------------------------------------------------------------------------
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=2.644s, table=0, n_packets=0, n_bytes=0, idle_age=2, in_port=5 actions=output:5
cookie=0x0, duration=20.971s, table=0, n_packets=21, n_bytes=1674, idle_age=11, priority=0 actions=CONTROLLER:65535
*** s2 ------------------------------------------------------------------------
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=20.965s, table=0, n_packets=21, n_bytes=1674, idle_age=11, priority=0 actions=CONTROLLER:65535
mininet>
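Note that in the add-flow above, ovs-ofctl silently dropped the nw_dst match (that is what the "normalization changed ofp_match" messages mean): matching on an IP destination only takes effect when dl_type=0x0800 is also given. For the linear,2 topology in the question, per-switch rules could look roughly like the sketch below; the port numbers (h1 on s1 port 1, the s1-s2 link on port 2 of each switch, and so on) are assumptions and should be checked first with "sh ovs-ofctl show s1":
mininet> sh ovs-ofctl show s1
mininet> sh ovs-ofctl add-flow s1 priority=10,dl_type=0x0800,in_port=1,nw_dst=10.0.0.2,actions=output:2
mininet> sh ovs-ofctl add-flow s2 priority=10,dl_type=0x0800,in_port=2,nw_dst=10.0.0.2,actions=output:1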

If a link has already been created, how do we set the priority of the link from the hosts' xterms?


RabbitMQ is dead

I don't really know how to describe my problem well in the title. I have a local install of RabbitMQ through Homebrew (on a Mac), and it suddenly died: I became unable to send messages to the queue. Unfortunately I can't post the error message, because I have since tried a few other steps, including resetting, uninstalling, and reinstalling RabbitMQ. After the reinstall, I can't start my RabbitMQ server; it gets stuck after this:
sudo rabbitmq-server start
Password:
2022-08-05 11:14:17.972308-04:00 [info] <0.221.0> Feature flags: list of feature flags found:
2022-08-05 11:14:17.979492-04:00 [info] <0.221.0> Feature flags: [x] classic_mirrored_queue_version
2022-08-05 11:14:17.979531-04:00 [info] <0.221.0> Feature flags: [x] implicit_default_bindings
2022-08-05 11:14:17.979546-04:00 [info] <0.221.0> Feature flags: [x] maintenance_mode_status
2022-08-05 11:14:17.979568-04:00 [info] <0.221.0> Feature flags: [x] quorum_queue
2022-08-05 11:14:17.979583-04:00 [info] <0.221.0> Feature flags: [x] stream_queue
2022-08-05 11:14:17.979599-04:00 [info] <0.221.0> Feature flags: [x] user_limits
2022-08-05 11:14:17.979611-04:00 [info] <0.221.0> Feature flags: [x] virtual_host_metadata
2022-08-05 11:14:17.979672-04:00 [info] <0.221.0> Feature flags: feature flag states written to disk: yes
2022-08-05 11:14:18.203961-04:00 [notice] <0.44.0> Application syslog exited with reason: stopped
2022-08-05 11:14:18.204012-04:00 [notice] <0.221.0> Logging: switching to configured handler(s); following messages may not be visible in this log output
  ##  ##      RabbitMQ 3.10.7
  ##  ##
  ##########  Copyright (c) 2007-2022 VMware, Inc. or its affiliates.
  ######  ##
  ##########  Licensed under the MPL 2.0. Website: https://rabbitmq.com
Erlang: 25.0.3 [jit]
TLS Library: OpenSSL - OpenSSL 1.1.1q 5 Jul 2022
Doc guides: https://rabbitmq.com/documentation.html
Support: https://rabbitmq.com/contact.html
Tutorials: https://rabbitmq.com/getstarted.html
Monitoring: https://rabbitmq.com/monitoring.html
Logs: /usr/local/var/log/rabbitmq/rabbit@localhost.log
/usr/local/var/log/rabbitmq/rabbit@localhost_upgrade.log
<stdout>
Config file(s): (none)
Starting broker... completed with 7 plugins.
After this it just hangs forever.
I would like to completely uninstall RabbitMQ from my computer and reinstall it fresh; when I first installed it, it worked like a charm, but since then something has gone belly-up. Can someone help me with this?
Also, yes, the obvious thing to do is brew rm rabbitmq, but that's what got me into this situation. It can't be that simple.
I got it working. Some combination of these steps worked:
Change the ownership of the Homebrew files:
sudo chown -R $(whoami) $(brew --prefix)/*
Reload the launchctl config:
launchctl unload -w ~/Library/LaunchAgents/homebrew.mxcl.rabbitmq.plist
launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.rabbitmq.plist
Restart the service:
brew services restart rabbitmq
And then it worked.
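For anyone who still wants the completely clean reinstall asked about above, a rough sketch is below. The paths assume the default /usr/local Homebrew prefix visible in the log output (use /opt/homebrew on Apple Silicon), and removing them deletes all queue data, users, and vhosts:
brew services stop rabbitmq
brew uninstall rabbitmq
rm -rf /usr/local/var/lib/rabbitmq
rm -rf /usr/local/var/log/rabbitmq
rm -rf /usr/local/etc/rabbitmq
brew install rabbitmq
brew services start rabbitmq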

Error modifying web-context from /auth to /ddu-auth via jboss-cli in OpenShift

I am trying to deploy a Keycloak container in an OpenShift environment. A keycloak-bootstrap.sh script runs automatically when the container starts and sets the context path to "/ddu-auth". I have been struggling with this issue for the last 4-5 days and haven't found a solution; any help would be appreciated.
If I remove the bootstrap.sh file, the container is stable; otherwise it restarts automatically every 2-3 minutes.
Dockerfile for image creation
FROM jboss/keycloak-openshift
ADD keycloak-bootstrap.sh /usr/bin/
ADD openshift-entrypoint.sh /usr/bin/
USER root
RUN chmod +x /usr/bin/openshift-entrypoint.sh && \
chmod +x /usr/bin/keycloak-bootstrap.sh && \
chmod +x /opt/jboss/ddu.sh
USER 1000
EXPOSE 8080
**#keycloak-bootstrap.sh**
/opt/jboss/keycloak/bin/add-user-keycloak.sh -u admin -p ert246yui
/opt/jboss/keycloak/bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin --password ert246yui
/opt/jboss/keycloak/bin/kcadm.sh update realms/master -s sslRequired=NONE
sleep 50
/opt/jboss/keycloak/bin/jboss-cli.sh --connect --command="/subsystem=keycloak-server/:write-attribute(name="web-context",value=ddu-auth)"
/opt/jboss/keycloak/bin/jboss-cli.sh --connect --command=:reload
**Error details**
#########################################################
13:09:54,018 INFO  [org.keycloak.services] (ServerService Thread Pool -- 73) KC-SERVICES0001: Loading config from standalone.xml or domain.xml
13:09:54,055 INFO  [org.jboss.as.server] (Thread-2) WFLYSRV0236: Suspending server with no timeout.
13:09:54,056 INFO  [org.jboss.as.ejb3] (Thread-2) WFLYEJB0493: EJB subsystem suspension complete
13:09:54,058 INFO  [org.jboss.as.server] (Thread-2) WFLYSRV0220: Server shutdown has been requested via an OS signal
13:09:54,062 ERROR [org.jboss.msc.service.fail] (ServerService Thread Pool -- 73) MSC000001: Failed to start service jboss.undertow.deployment.default-server.default-host./ddu-auth: org.jboss.msc.service.StartException in service jboss.undertow.deployment.default-server.default-host./ddu-auth: java.lang.RuntimeException: RESTEASY003325: Failed to construct public org.keycloak.services.resources.KeycloakApplication(javax.servlet.ServletContext,org.jboss.resteasy.core.Dispatcher)
    at org.wildfly.extension.undertow.deployment.UndertowDeploymentService$1.run(UndertowDeploymentService.java:81)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
    at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1985)
    at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1487)
    at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1378)
    at java.lang.Thread.run(Thread.java:748)
    at org.jboss.threads.JBossThread.run(JBossThread.java:485)
Caused by: java.lang.RuntimeException: RESTEASY003325: Failed to construct public org.keycloak.services.resources.KeycloakApplication(javax.servlet.ServletContext,org.jboss.resteasy.core.Dispatcher)
    at org.jboss.resteasy.core.ConstructorInjectorImpl.construct(ConstructorInjectorImpl.java:162)
    at org.jboss.resteasy.spi.ResteasyProviderFactory.createProviderInstance(ResteasyProviderFactory.java:2676)
    at org.jboss.resteasy.spi.ResteasyDeployment.createApplication(ResteasyDeployment.java:361)
    at org.jboss.resteasy.spi.ResteasyDeployment.startInternal(ResteasyDeployment.java:274)
    at org.jboss.resteasy.spi.ResteasyDeployment.start(ResteasyDeployment.java:86)
    at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.init(ServletContainerDispatcher.java:119)
    at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.init(HttpServletDispatcher.java:36)
    at io.undertow.servlet.core.LifecyleInterceptorInvocation.proceed(LifecyleInterceptorInvocation.java:117)
    at org.wildfly.extension.undertow.security.RunAsLifecycleInterceptor.init(RunAsLifecycleInterceptor.java:78)
    at io.undertow.servlet.core.LifecyleInterceptorInvocation.proceed(LifecyleInterceptorInvocation.java:103)
    at io.undertow.servlet.core.ManagedServlet$DefaultInstanceStrategy.start(ManagedServlet.java:300)
    at io.undertow.servlet.core.ManagedServlet.createServlet(ManagedServlet.java:140)
    at io.undertow.servlet.core.DeploymentManagerImpl$2.call(DeploymentManagerImpl.java:584)
    at io.undertow.servlet.core.DeploymentManagerImpl$2.call(DeploymentManagerImpl.java:555)
    at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:42)
    at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
    at org.wildfly.extension.undertow.security.SecurityContextThreadSetupAction.lambda$create$0(SecurityContextThreadSetupAction.java:105)
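One low-risk way to narrow this down is to check what the write-attribute call actually stored before issuing the reload. A sketch, run inside the container with the same paths as the script above:
/opt/jboss/keycloak/bin/jboss-cli.sh --connect --command="/subsystem=keycloak-server/:read-attribute(name=web-context)"
A successful change should report "result" => "ddu-auth"; anything else points at the CLI command (for example its quoting) rather than at the Keycloak deployment itself.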

redis-cli unix socket - import rcmd failing

I am using redis server 3.0.6 on ubuntu 16.04 desktop. Installed using
apt-get install redis-server
It was working fine for a long time. Today, when I tried to import an rcmd file through the unix socket with redis-cli, I got an out-of-memory error.
$ sudo redis-cli -s /run/redis/redis.sock < a1.rcmd
Error: Out of memory
a1.rcmd contents:
SET Api:Server:Name "server1"
/etc/redis/redis.conf was configured with the unix socket /run/redis/redis.sock. All other configuration is default.
GET and SET commands over the unix socket work fine.
If I run without the unix socket, the redis-cli < a1.rcmd command works fine.
What causes this error?
Will upgrading to the latest version of Redis resolve this? I don't see an apt-get way to install the latest Redis version 5.
Any help would be really appreciated.
** EDIT **
Some more inputs and GDB logs.
$ which redis-server
/usr/bin/redis-server
$ sudo service redis-server status
● redis-server.service - Advanced key-value store
Loaded: loaded (/lib/systemd/system/redis-server.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2018-12-17 11:29:14 EST; 16min ago
Docs: http://redis.io/documentation,
man:redis-server(1)
Process: 8986 ExecStopPost=/bin/run-parts --verbose /etc/redis/redis-server.post-down.d (code=exited, status=0/SUCCESS)
Process: 8981 ExecStop=/bin/kill -s TERM $MAINPID (code=exited, status=0/SUCCESS)
Process: 8977 ExecStop=/bin/run-parts --verbose /etc/redis/redis-server.pre-down.d (code=exited, status=0/SUCCESS)
Process: 9001 ExecStartPost=/bin/run-parts --verbose /etc/redis/redis-server.post-up.d (code=exited, status=0/SUCCESS)
Process: 8997 ExecStart=/usr/bin/redis-server /etc/redis/redis.conf (code=exited, status=0/SUCCESS)
Process: 8992 ExecStartPre=/bin/run-parts --verbose /etc/redis/redis-server.pre-up.d (code=exited, status=0/SUCCESS)
Main PID: 9000 (redis-server)
Tasks: 3
Memory: 884.0K
CPU: 304ms
CGroup: /system.slice/redis-server.service
└─9000 /usr/bin/redis-server 127.0.0.1:637
$ sudo gdb /usr/bin/redis-server 9000
GNU gdb (Ubuntu 7.11.1-0ubuntu1~16.5) 7.11.1
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/bin/redis-server...(no debugging symbols found)...done.
Attaching to program: /usr/bin/redis-server, process 9000
[New LWP 9002]
[New LWP 9003]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007fe8f5158a13 in epoll_wait () at ../sysdeps/unix/syscall-template.S:84
84 ../sysdeps/unix/syscall-template.S: No such file or directory.
(gdb) continue
Continuing.
After this step I ran the following command in another terminal:
$ sudo redis-cli -s /run/redis/redis.sock < a1.rcmd
Segmentation fault (core dumped)
(gdb) bt
#0 0x00007fe8f5158a13 in epoll_wait () at ../sysdeps/unix/syscall-template.S:84
#1 0x000055b238abe15f in aeProcessEvents ()
#2 0x000055b238abe58b in aeMain ()
#3 0x000055b238abd204 in main ()
(gdb) info registers
rax 0xfffffffffffffffc -4
rbx 0x7fe8f48e5300 140638512108288
rcx 0x7fe8f5158a13 140638520969747
rdx 0x1060 4192
rsi 0x7fe8f4910000 140638512283648
rdi 0x3 3
rbp 0x7fe8f487b068 0x7fe8f487b068
rsp 0x7ffd9a5ca920 0x7ffd9a5ca920
r8 0x284 644
r9 0x284 644
r10 0x64 100
r11 0x293 659
r12 0x7ffd9a5caae8 140727193217768
r13 0x7fe8f480e1a0 140638511227296
r14 0x7fe8f48101f0 140638511235568
r15 0xfffffffe 4294967294
rip 0x7fe8f5158a13 0x7fe8f5158a13 <epoll_wait+51>
eflags 0x293 [ CF AF SF IF ]
cs 0x33 51
ss 0x2b 43
ds 0x0 0
es 0x0 0
fs 0x0 0
gs 0x0 0
(gdb)
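Before reading too much into the gdb trace, it may help to confirm what the server reports over the same socket; the commands below are standard redis-cli/CONFIG operations and only a sketch of the checks:
$ sudo redis-cli -s /run/redis/redis.sock ping
$ sudo redis-cli -s /run/redis/redis.sock info memory
$ sudo redis-cli -s /run/redis/redis.sock config get maxmemory
If these interactive commands succeed over the socket while piping a1.rcmd still fails, that separates a genuine server memory limit from a problem in how this redis-cli build handles stdin together with -s.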

Apache Traffic Server forward proxy basic authentication

I'm trying to set up a forward proxy server with basic proxy authentication using Apache Traffic Server (ATS) on CentOS 6. I've already successfully deployed both SQUID and Apache httpd mod_proxy forward proxies with basic proxy authentication, and want to do the same with ATS to compare performance.
I'm trying to use the basic-auth plugin example provided by ATS, and I'm running into multiple issues.
I add the latest epel repo for CentOS 6 and install both trafficserver and trafficserver-devel (required to use the ATS compiler, tsxs) packages. I copy the basic-auth.c file from source to my user directory and attempt to compile:
# tsxs -v -o /root/basic-auth.so -c /root/basic-auth.c
Whereupon I get a file-not-found error for ts/ink_defs.h.
This file is generated by running autoreconf -if and configure on the source code, so I went ahead and cloned the trafficserver git repo and ran through those steps to generate the few hundred files in /opt/ts/. I copied these into the directory that tsxs looks at, /usr/include/ts/ (the default location when installing via trafficserver-devel; when I previously had only installed Traffic Server from source, tsxs would not run at all).
With the files now in place, I ran the compiler again on basic-auth.c. This time I get errors in ts.h because of an sdk_version parameter:
# tsxs -v -o basic-auth.so basic-auth.c
compiling basic-auth.c -> basic-auth.lo
cc -I/usr/include -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -feliminate-unused-debug-symbols -fno-strict-aliasing -mcx16 -fpic -c basic-auth.c -o basic-auth.lo
In file included from basic-auth.c:30:
/usr/include/ts/ts.h:158: error: expected ‘)’ before ‘sdk_version’
In file included from /usr/include/ts/ink_defs.h:28,
from basic-auth.c:31:
/usr/include/ts/ink_config.h:41:26: error: ink_autoconf.h: No such file or directory
basic-auth.c: In function ‘TSPluginInit’:
basic-auth.c:222: warning: implicit declaration of function ‘TSPluginRegister’
tsxs: compilation failed: cc -I/usr/include -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -feliminate-unused-debug-symbols -fno-strict-aliasing -mcx16 -fpic -c basic-auth.c -o basic-auth.lo
I investigated the code for ts.h and compared it with the latest source. You can see that line 156 changes:
trafficserver-devel:
tsapi TSReturnCode TSPluginRegister(TSSDKVersion sdk_version, TSPluginRegistrationInfo plugin_info);
source:
tsapi TSReturnCode TSPluginRegister(TSPluginRegistrationInfo *plugin_info);
Hence I'm assuming there's some issue with the versioning. I replaced my version of ts.h with the latest source and attempted the compile again: it works!
I copy the .so file to the plugins directory and modify plugin.config and records.config accordingly. Alas, when I try to start up Traffic Server, it fails with a segmentation fault:
# /usr/bin/traffic_server
traffic_server: using root directory '/usr'
[Jul 15 16:19:21.224] Server {0x7fd9458ba7e0} DEBUG: (dns) ink_dns_init: called with init_called = 0
[Jul 15 16:19:21.227] Server {0x7fd9458ba7e0} DEBUG: (dns) localhost=vmProxy1
[Jul 15 16:19:21.227] Server {0x7fd9458ba7e0} DEBUG: (dns) Round-robin nameservers = 1
traffic_server: Segmentation fault (Signal sent by the kernel [(nil)])traffic_server - STACK TRACE:
/usr/bin/traffic_server(_Z19crash_logger_invokeiP7siginfoPv+0x99)[0x4a5209]
/lib64/libpthread.so.0[0x35b600f710]
/lib64/libc.so.6[0x35b5d3362f]
/usr/lib64/trafficserver/libtsutil.so.5(_xstrdup+0x6d)[0x7fd945f2b6cd]
/usr/bin/traffic_server(TSPluginRegister+0x7c)[0x4bcb6c]
/usr/lib64/trafficserver/plugins/basic-auth.so(TSPluginInit+0x2f)[0x7fd942334e1f]
/usr/bin/traffic_server(_Z11plugin_initb+0x322)[0x4dab22]
/usr/bin/traffic_server(main+0x1424)[0x4d2754]
/lib64/libc.so.6(__libc_start_main+0xfd)[0x35b5c1ed5d]
/usr/bin/traffic_server[0x4942a9]
Segmentation fault (core dumped)
I tried to use gdb to get a better debug log, but I don't see anything useful. There's another mention of sdk_version - but I'm starting to think that hopping around files and replacing them isn't how it's meant to work...
Starting program: /usr/bin/traffic_server
[Thread debugging using libthread_db enabled]
traffic_server: using root directory '/usr'
[New Thread 0x7ffff7704700 (LWP 19967)]
[Jul 15 16:18:28.841] Server {0x7ffff77777e0} DEBUG: (dns) ink_dns_init: called with init_called = 0
[New Thread 0x7ffff68ff700 (LWP 19968)]
[New Thread 0x7ffff67fe700 (LWP 19969)]
[Jul 15 16:18:28.844] Server {0x7ffff77777e0} DEBUG: (dns) localhost=vmProxy1
[Jul 15 16:18:28.844] Server {0x7ffff77777e0} DEBUG: (dns) Round-robin nameservers = 1
[New Thread 0x7ffff46f5700 (LWP 19970)]
[New Thread 0x7ffff44f3700 (LWP 19971)]
Program received signal SIGSEGV, Segmentation fault.
__strlen_sse42 () at ../sysdeps/x86_64/multiarch/strlen-sse4.S:32
32 pcmpeqb (%rdi), %xmm1
Missing separate debuginfos, use: debuginfo-install tcl-8.5.7-6.el6.x86_64
(gdb) bt
#0 __strlen_sse42 () at ../sysdeps/x86_64/multiarch/strlen-sse4.S:32
#1 0x00007ffff7de86cd in _xstrdup (str=0xd46e3934ae7d6389 <Address 0xd46e3934ae7d6389 out of bounds>, length=-1)
at ink_memory.cc:231
#2 0x00000000004bcb6c in TSPluginRegister (sdk_version=<value optimized out>, plugin_info=0x7fffffffcc50)
at InkAPI.cc:1803
#3 0x00007ffff41f1e1f in TSPluginInit (argc=<value optimized out>, argv=<value optimized out>) at /root/basic-auth.c:222
#4 0x00000000004dab22 in plugin_load (validateOnly=false) at Plugin.cc:114
#5 plugin_init (validateOnly=false) at Plugin.cc:265
#6 0x00000000004d2754 in main (argv=<value optimized out>) at Main.cc:1714
Any hints or tips on what I might be doing wrong are very much appreciated.
Yeah, this is somewhat unfortunate, but the examples in the source tree are not intended to be compiled with tsxs. You would need to make a few changes in the code to make them work. For example, see this git commit I made to the version.c example:
diff --git a/example/version/version.c b/example/version/version.c
index f5c8126..4020a0c 100644
--- a/example/version/version.c
+++ b/example/version/version.c
@@ -24,10 +24,9 @@
#include <stdio.h>
#include "ts/ts.h"
-#include "ts/ink_defs.h"
void
-TSPluginInit(int argc ATS_UNUSED, const char *argv[] ATS_UNUSED)
+TSPluginInit(int argc , const char *argv[])
{
TSPluginRegistrationInfo info;
As for the version information, this was removed for ATS v6.0.0, which means older plugins also need to be modified to remove it. This also means previously built binaries are not compatible. There are probably better tools to use than tsxs as well, including the pkg-config support and traffic_layout.
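After applying changes along those lines to basic-auth.c, the remaining build and install steps on the EPEL layout from the question look roughly like this (the plugin directory comes from the stack trace above, the stock config file is plugin.config, and the service name may differ on your install):
# tsxs -v -o basic-auth.so -c basic-auth.c
# cp basic-auth.so /usr/lib64/trafficserver/plugins/
# echo 'basic-auth.so' >> /etc/trafficserver/plugin.config
# service trafficserver restart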

Jenkins SSH Slave Configuration

I am trying to configure a slave for my Jenkins master. I did the steps below:
Enabled passwordless auth to the remote host (GNU/Linux)
Configured the slave on the master
I can see slave.jar being copied to the remote host folder, but it is failing with the error below:
Expanded the channel window size to 4MB
[11/07/14 19:11:54] [SSH] Starting slave process: cd "/test/app/abc/slavetest" && /usr/java /jdk1.6.0_29 -XX:MaxPermSize=2048m -Xmx2048m -jar slave.jar
bash: /usr/java/jdk1.6.0_29: is a directory
hudson.util.IOException2: Slave JVM has terminated. Exit code=126
at hudson.plugins.sshslaves.SSHLauncher.startSlave(SSHLauncher.java:953)
at hudson.plugins.sshslaves.SSHLauncher.access$400(SSHLauncher.java:133)
at hudson.plugins.sshslaves.SSHLauncher$2.call(SSHLauncher.java:711)
at hudson.plugins.sshslaves.SSHLauncher$2.call(SSHLauncher.java:696)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.EOFException: unexpected stream termination
at hudson.remoting.ChannelBuilder.negotiate(ChannelBuilder.java:200)
at hudson.remoting.Channel.<init>(Channel.java:419)
at hudson.remoting.Channel.<init>(Channel.java:398)
at hudson.remoting.Channel.<init>(Channel.java:394)
at hudson.remoting.Channel.<init>(Channel.java:383)
at hudson.remoting.Channel.<init>(Channel.java:375)
at hudson.slaves.SlaveComputer.setChannel(SlaveComputer.java:344)
at hudson.plugins.sshslaves.SSHLauncher.startSlave(SSHLauncher.java:945)
... 7 more
[11/07/14 19:11:54] Launch failed - cleaning up connection
[11/07/14 19:11:54] [SSH] Connection closed.
Any idea what I am doing wrong?
You have your slave's path to the java executable misconfigured:
/usr/java /jdk1.6.0_29 -XX:MaxPermSize=2048m -Xmx2048m -jar slave.jar
The blank space should be removed, and the full path should be
/usr/java/jdk1.6.0_29/bin/java
I just ran into this as well. It's best to check the Docker container/slave's java path by logging into the container and running whereis java.
The java paths of the host and the agent are probably not the same, and both slave.jar and the java command are executed from within the agent.
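A quick way to confirm the correct value before editing the node configuration is to resolve the JVM on the agent over the same SSH connection; the host name below is a placeholder and the paths are taken from the log above:
$ ssh user@slavehost 'ls /usr/java/jdk1.6.0_29/bin/java && /usr/java/jdk1.6.0_29/bin/java -version'
With the path fixed in the node's settings, the launch command Jenkins runs should end up looking like:
cd "/test/app/abc/slavetest" && /usr/java/jdk1.6.0_29/bin/java -XX:MaxPermSize=2048m -Xmx2048m -jar slave.jar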