I need a step-by-step procedure to uninstall Rundeck. I am facing a stack overflow issue that I wasn't able to resolve, so I want to uninstall it and install from scratch.
Stack error:
[2020-06-05 18:48:44.098] ERROR StackTrace --- [tp1284944245-71] Full Stack Trace:
org.grails.taglib.GrailsTagException: [views/layouts/base.gsp:184] Error executing tag <g:render>: [views/common/_sidebar.gsp:128] Error executing tag <g:ifMenuItems>: Method 'java.util.Set com.dtolabs.rundeck.core.authorization.providers.EnvironmentalContext.forProject(java.lang.String)' must be InterfaceMethodref constant
at org.grails.gsp.GroovyPage.throwRootCause(GroovyPage.java:473)
at org.grails.gsp.GroovyPage.invokeTag(GroovyPage.java:415)
at jdk.internal.reflect.GeneratedMethodAccessor217.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:98)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.in...
WAR based instance:
Make sure that the Rundeck process is down: identify it with ps aux | grep -i rundeck and use kill -9 <PID> to shut it down.
Wipe the instance: you can delete the directory (and all its content) defined in $RDECK_BASE, since all configuration and files live inside it. If your system has an init script to launch Rundeck, ensure that script no longer references Rundeck. A command sketch follows these steps.
Re-install following this.
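A minimal command sketch of this cleanup, assuming $RDECK_BASE is set in your environment:
# make sure the Rundeck process is down before wiping anything
ps aux | grep -i rundeck
kill -9 <PID>
# wipe the instance: everything lives under $RDECK_BASE
rm -rf "$RDECK_BASE"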
RPM-based (CentOS, RHEL, Fedora) instance:
Shut down the Rundeck service: # systemctl stop rundeckd.
Make sure that the process is down: # systemctl status rundeckd.
Remove the package: # yum remove rundeck.
Some files remain on the system; check and wipe the following paths:
/etc/rundeck, /var/lib/rundeck and /var/log/rundeck (a combined command sketch follows the DEB steps below).
Re-install following this.
DEB-based (Debian, Ubuntu, Mint) instance:
Shut down the Rundeck service: # systemctl stop rundeckd.
Make sure that the process is down: # systemctl status rundeckd.
Remove the package: # apt-get purge rundeck.
Some files remain on the system; check and wipe the following paths:
/etc/rundeck, /var/lib/rundeck and /var/log/rundeck.
Re-install following this.
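Put together, both package-based cleanups look like this (run as root; the leftover paths are the ones listed above):
# RPM-based (CentOS, RHEL, Fedora)
systemctl stop rundeckd
yum remove rundeck
# DEB-based (Debian, Ubuntu, Mint)
systemctl stop rundeckd
apt-get purge rundeck
# both families: wipe the leftovers (back them up first if in doubt)
rm -rf /etc/rundeck /var/lib/rundeck /var/log/rundeck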
In any case, I recommend making a backup of your instance/configuration before wiping it.
For testing, the best option is to run the Rundeck Docker image; it saves a lot of time.
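For example (a minimal sketch; rundeck/rundeck is the official Docker Hub image, but the tag here is an assumption, pick a current release):
docker run -d --name rundeck -p 4440:4440 rundeck/rundeck:3.4.10
# then browse to http://localhost:4440 (this image's default login is admin/admin)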
About the error: check your Rundeck version; maybe you're facing this issue.
Related
I saw a few other posts about it (in particular this one), but they are from last year. I still have this issue right now. I opened the Preview features pane from the User settings, but I can't turn off this feature.
My pipelines use an SSH connection to run some commands on a virtual machine (basically, pulling a Docker image).
All my pipelines are failing. How can I fix it or update the SSH connections?
Update
I set up the Service connection, and I use it in my pipelines with this YAML code:
- task: SSH@0
  displayName: 'SSH: stop shinyproxy'
  inputs:
    sshEndpoint: $(server)
    commands: |
      echo $(pwd) | sudo -S docker stop shinyproxy
    failOnStdErr: false
  continueOnError: true
All pipelines, new and old, get the same error:
##[error]Failed to connect to remote machine. Verify the SSH service connection details. Error: Error: All configured authentication methods failed
at doNextAuth (/home/vsts/work/_tasks/SSH_91443475-df55-4874-944b-39253b558790/0.213.0/node_modules/ssh2/lib/client.js:803:21)
at tryNextAuth (/home/vsts/work/_tasks/SSH_91443475-df55-4874-944b-39253b558790/0.213.0/node_modules/ssh2/lib/client.js:993:7)
at USERAUTH_FAILURE (/home/vsts/work/_tasks/SSH_91443475-df55-4874-944b-39253b558790/0.213.0/node_modules/ssh2/lib/client.js:373:11)
at 51 (/home/vsts/work/_tasks/SSH_91443475-df55-4874-944b-39253b558790/0.213.0/node_modules/ssh2/lib/protocol/handlers.misc.js:337:16)
at Protocol.onPayload (/home/vsts/work/_tasks/SSH_91443475-df55-4874-944b-39253b558790/0.213.0/node_modules/ssh2/lib/protocol/Protocol.js:2025:10)
at AESGCMDecipherNative.decrypt (/home/vsts/work/_tasks/SSH_91443475-df55-4874-944b-39253b558790/0.213.0/node_modules/ssh2/lib/protocol/crypto.js:987:26)
at Protocol.parsePacket [as _parse] (/home/vsts/work/_tasks/SSH_91443475-df55-4874-944b-39253b558790/0.213.0/node_modules/ssh2/lib/protocol/Protocol.js:1994:25)
at Protocol.parse (/home/vsts/work/_tasks/SSH_91443475-df55-4874-944b-39253b558790/0.213.0/node_modules/ssh2/lib/protocol/Protocol.js:293:16)
at Socket. (/home/vsts/work/_tasks/SSH_91443475-df55-4874-944b-39253b558790/0.213.0/node_modules/ssh2/lib/client.js:713:21)
at Socket.emit (node:events:527:28) {
level: 'client-authentication'
}
I have never had this issue before.
Based on the other post, highlighted by Antonia, the solution has to be applied on the Ubuntu machine.
To fix it, open a terminal and edit /etc/ssh/sshd_config, adding the line from the linked post at the end of the file.
After that, restart the SSH service. It is working for me.
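If the linked post is the usual one about the SSH task's ssh2 0.213 update, the fix re-enables RSA/SHA-1 public keys on the server; treat the exact directive below as an assumption and verify it against the post:
# assumption: re-enable ssh-rsa public keys (on OpenSSH older than 8.5
# the directive is PubkeyAcceptedKeyTypes instead)
echo 'PubkeyAcceptedAlgorithms +ssh-rsa' | sudo tee -a /etc/ssh/sshd_config
sudo systemctl restart sshd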
I am able to log in to a TaskRun pod while the task is executing, using:
kubectl exec -it $POD_NAME -- /bin/bash
However, if a task has failed or completed, I am unable to log in with kubectl exec, since it states that it "cannot login to a completed task".
If I need to debug a failed task, is there any way to attach to the console of a failed/completed task in Tekton?
I am running on a minikube environment.
Tekton Tasks are Pods. When they complete, or when they fail, that pod exits, which leaves you unable to get in.
For troubleshooting, you may edit your Task to catch the error and start some "sleep" command, which can help you figure things out.
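For example, a step's command could be wrapped like this (a minimal sketch; run-build.sh is a hypothetical stand-in for the step's real command):
# if the real command fails, keep the container alive for an hour
# so you can kubectl exec into it and look around
./run-build.sh || sleep 3600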
Or, without risking an impact on other jobs, I would usually prefer to re-create the Pod corresponding to my failed task:
$ kubectl get pods -n <ci-namespace> | grep <taskrun-name>
NAME
<taskrun>-xxx-yyy
$ kubectl get pod -n <ci-namespace> <taskrun>-xxx-yyy -o yaml > check.yaml
Then edit that YAML file. Remove all metadata except name/namespace. Change metadata.name, making sure your pod has its own name. Remove the status block. Catch failure where it's needed and add your "sleep". Then kubectl create that file and enter your pod.
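With hypothetical names, that last part looks like this (check.yaml is the edited copy from above; <taskrun>-debug is whatever you set metadata.name to):
kubectl create -f check.yaml -n <ci-namespace>
kubectl exec -it <taskrun>-debug -n <ci-namespace> -- /bin/sh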
Depending on what you're troubleshooting, it may be easier to add a PVC workspace to your task and make sure your working directories, logs, built assets, ... end up in a volume that you can mount from a separate container, should you need to troubleshoot it.
Or, if you're fast enough: just re-run your pipeline/task, enter its container while it starts, and try troubleshooting it before it fails.
I'm trying to use RabbitMQ for a Django tutorial, but when I want to start the server I get this error:
~$ sudo rabbitmq-server
Configuring logger redirection
14:49:57.041 [error]
14:49:57.044 [error] BOOT FAILED
BOOT FAILED
14:49:57.044 [error] ===========
===========
14:49:57.044 [error] ERROR: could not bind to distribution port 25672, it is in use by another node: rabbit#wss
ERROR: could not bind to distribution port 25672, it is in use by another node: rabbit#wss
14:49:57.045 [error]
14:49:58.046 [error] Supervisor rabbit_prelaunch_sup had child prelaunch started with rabbit_prelaunch:run_prelaunch_first_phase() at undefined exit with reason {dist_port_already_used,25672,"rabbit","wss"} in context start_error
14:49:58.046 [error] CRASH REPORT Process <0.153.0> with 0 neighbours exited with reason: {{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,"rabbit","wss"}}},{rabbit_prelaunch_app,start,[normal,[]]}} in application_master:init/4 line 138
{"Kernel pid terminated",application_controller,"{application_start_failure,rabbitmq_prelaunch,{{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,\"rabbit\",\"wss\"}}},{rabbit_prelaunch_app,start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,rabbitmq_prelaunch,{{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,"rabbit","wss"}}},{rabbit_prelau
Crash dump is being written to: erl_crash.dump...done
I've searched to see whether the port is in use: I used lsof -i :25672 and got nothing.
I don't know much about these things, so if you need anything else, please tell me.
Try:
sudo lsof -i :25672
sudo kill <PID>
sudo rabbitmq-server
Where <PID> is the process ID that is occupying port 25672.
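For illustration, a session might look like this (the PID and output columns are made-up examples; RabbitMQ's Erlang VM usually shows up as beam.smp):
$ sudo lsof -i :25672
COMMAND    PID     USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
beam.smp  1301 rabbitmq   18u  IPv4  31337      0t0  TCP *:25672 (LISTEN)
$ sudo kill 1301
$ sudo rabbitmq-server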
I have encountered this issue. I figured out that it happens because the rabbitmq-server is already running on the machine.
I used the following command to learn the status of the server:
rabbitmqctl.bat status
This helped me to know whether the server is up or down. If it is up, that could be the reason you are getting the error you specified in your post.
You can issue the following command to bring the server down:
rabbitmqctl.bat stop
Now you can try starting the rabbitmq-server by issuing:
rabbitmq-server start
Note that I am using Windows, and I executed these commands with the command prompt pointed at C:\Program Files\RabbitMQ\rabbitmq_server-3.8.14\sbin, as my RabbitMQ installation directory is C:\Program Files\RabbitMQ\rabbitmq_server-3.8.14.
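Put together, the whole session looks like this (the versioned sbin path is from my machine; adjust it to your installation):
cd "C:\Program Files\RabbitMQ\rabbitmq_server-3.8.14\sbin"
rabbitmqctl.bat status
rabbitmqctl.bat stop
rabbitmq-server start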
I have encountered this before. Here is what caused it and how I fixed it:
This is one of those commands that requires the magic word sudo (i.e. it needs superuser privileges).
If you forget to add sudo to the command, it begins the process but later fails when it hits a superuser-only roadblock. This leaves you with an incomplete process. When you then decide to add sudo, it attempts the same process again but finds that someone without the right privilege has made a mess or is still messing around.
So the solution is to cancel out whatever the first command started, and try again.
sudo lsof -i :25672
This lists details about port 25672.
You will see the PID (process ID), e.g. 1301.
Then stop the process on that port with:
sudo kill <PID>
for example, sudo kill 1301.
Make sure you are killing the right process, otherwise you may get into trouble.
Now, retry the command with sudo:
sudo rabbitmq-server
ALSO: in most cases this error occurs because, unless you deliberately stop the rabbitmq-server, it keeps running even after you restart your system.
Another way to stop the RabbitMQ server on Windows: press Win+R, type "services.msc", find RabbitMQ in the list, select it, and stop it from the top-left corner.
Then re-run your rabbitmq-server.
I am putting up an answer that can help Googlers run multiple rabbitmq-server instances on the same machine. Trying to achieve that, I ran into a similar error to the one reported here and solved it by defining:
export RABBITMQ_DIST_PORT=anything_other_than_25672
as stated in the documentation:
https://www.rabbitmq.com/networking.html#epmd-inet-dist-port-range
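For example, a minimal sketch for a second node on the same machine (all values are examples; RABBITMQ_NODENAME and RABBITMQ_NODE_PORT are the companion variables documented on the same page):
export RABBITMQ_NODENAME=rabbit2      # each node needs a unique name
export RABBITMQ_NODE_PORT=5673        # AMQP listener port
export RABBITMQ_DIST_PORT=25673       # the distribution port from the error
rabbitmq-server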
If you are using Windows, go to Task Manager and stop RabbitMQ from running, then reload the rabbitmq-server.
For Linux, others have answered, but on Windows you should press Ctrl+Alt+Delete, select Task Manager, and there end the processes that depend on Erlang.
Note that this requires Administrator privileges.
Now enter this command to start rabbitmq-server:
rabbitmq-server start
Every time you restart your computer you will need to repeat these steps. To avoid doing that, remove the RabbitMQ service from your startup services.
I went through the same problem on Windows: after installation it is already running as a service.
So just enable the management plugin from the RabbitMQ command line:
rabbitmq-plugins enable rabbitmq_management
then go to localhost:15672 and you're good to go.
This means that your port 25672 is already in use.
Try:
sudo lsof -i :25672
sudo kill <PID>
and now start your RabbitMQ server using:
sudo rabbitmq-server
I was installing phpmyadmin following this tutorial.
I missed the warning in step 1 and did not select apache2. I exited the command line, and when I try to start from the beginning I get this error:
E: Could not get lock /var/lib/dpkg/lock-frontend - open (11: Resource temporarily unavailable)
E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?
I've been searching for answers, but couldn't find one that helps.
What should I do here?
Thanks
One of the running processes is still using the apt package manager. You can find the apt process using the following command:
ps aux | grep apt
and kill it:
sudo kill -9 PID
Don't forget to replace PID with the actual process ID.
Probably a background process is using, or holding a lock on, the package database. You could run
ps
or
ps -e
to view the running processes and stop/kill the one using dpkg.
I ran into this error once after updating my Linux Mint Tara and couldn't use dpkg. I had to restart the whole system, which worked fine.
You could try that too: restarting.
It means that something else (a process) is installing or removing software and has locked the apt database while it performs the action (probably the Software Center or the Update Manager). The safest way (without crashing your system) is to reboot Ubuntu and then try to install phpmyadmin again.
Killing the process might not always work, because there could be no process involved at all!
So the best solution would be:
sudo rm /var/lib/apt/lists/lock
sudo rm /var/cache/apt/archives/lock
sudo rm /var/lib/dpkg/lock*
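After removing stale lock files, dpkg may be left with half-configured packages; it is usually worth letting it finish the interrupted work (a standard dpkg recovery step, not something the lock removal does for you):
sudo dpkg --configure -a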
I had your same error messages [both of them] and I solved the issue by running:
sudo rm /var/lib/apt/lists/lock
sudo rm /var/cache/apt/archives/lock
sudo rm /var/lib/dpkg/lock
as said in this post which explains that "the root cause is the lock file. Lock files are used to prevent two or more processes from using the same data. When apt or apt-commands are run, it creates lock files in a few places. When the previous apt command was not terminated properly, the lock files were not deleted and hence they prevent any new instances of apt/apt-get commands"
Redis went quiet on me.
user#mycomputer:~$ redis-cli
Could not connect to Redis at 127.0.0.1:6379: Connection refused
I try to restart the service by doing this
sudo /etc/init.d/redis_6379 stop
/var/run/redis/redis.pid exists, process is already running or crashed
But no luck. The logs didn't show an error either.
Got it fixed by backing up the redis.rdb file. Mine is located at
/var/lib/redis
Check your config file /etc/redis/redis.conf for the RDB file's location, then do this:
sudo mv /var/lib/redis/redis.rdb /var/lib/redis/redis_backup.rdb
Then recreate the redis.rdb file:
sudo touch /var/lib/redis/redis.rdb
Run redis-server with the conf and it should work:
sudo redis-server /etc/redis/redis.conf
Get it fixed in a tidy way: recreating the redis.rdb file, as suggested in one of the answers here, will purge all the cache recorded so far, and Redis will start up fresh with no cached data.
"/var/run/redis/redis.pid exists, process is already running or crashed" is a warning message indicating a system crash or improper shutdown.
Just delete the /var/run/redis/redis.pid file and restart the server again.
Note: you might have lost the latest cache changes due to the untidy shutdown, since they weren't flushed to disk. This data loss can be minimized with a frequent disk-flush configuration in the Redis conf file (in my case /etc/redis/6379.conf):
save 900 1
save 300 10
save 60 10000
Or try AOF persistence; more details in the Redis persistence documentation.
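For example, a minimal AOF setup in the same conf file (appendonly and appendfsync are standard Redis directives; everysec is just one sensible trade-off):
appendonly yes          # log every write operation to the append-only file
appendfsync everysec    # fsync once per second: at most ~1 second of writes lost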
Depending on how you installed Redis, the pid file can be found at /var/run/redis_6379.pid.
What happened is that Redis crashed, but the pid file is still there. So you just have to delete it:
sudo rm -f /var/run/redis_6379.pid
Then start Redis again:
sudo /etc/init.d/redis_6379 start
If you can't find it, I suggest installing Redis "more properly": follow the Redis quickstart guide, in the "Installing Redis more properly" section.
You can find it here:
https://redis.io/topics/quickstart
Run redis-server with your config:
sudo redis-server redis.conf