Metasploitable 3 - System error 67 - windows-server-2008

I am trying to set up Metasploitable 3 (VirtualBox) on my Ubuntu 16.04 machine.
I have followed the maintainers' guidelines (https://github.com/rapid7/metasploitable3) for dependencies and so on.
However, when I try to start it (via vagrant up --provision win2k8) I get this nasty little error that I just can't fix.
It always says:
win2k8: System error 67 has occurred.
win2k8: The network name cannot be found.
The following WinRM command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
cmd /q /c "c:\tmp\vagrant-shell.bat"
Stdout from the command:
CMDKEY: Credential added successfully.
Stderr from the command:
System error 67 has occurred.
The network name cannot be found.
I just can't find anything about it on the internet. I only "know" that it has something to do with network settings, but I don't know what to do now.
I'd appreciate some help!
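For reference, these are roughly the steps I followed, condensed from the repository README (exact commands may differ between versions of the repo, and the README also covers building the base box with Packer before this step):
git clone https://github.com/rapid7/metasploitable3.git
cd metasploitable3
vagrant up --provision win2k8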

Related

Why can't you start ufw on wsl2?

I have this error. I have spent days trying to solve it, but I cannot find the answer. If something similar has happened to anyone, I would appreciate your help.
ERROR: problem running ufw-init
iptables-restore v1.8.4 (legacy): Couldn't load match `limit':No such file or directory
Error occurred at line: 63
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
iptables-restore v1.8.4 (legacy): Couldn't load match `limit':No such file or directory
Error occurred at line: 8
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
Problem running '/etc/ufw/before.rules'
Problem running '/etc/ufw/user.rules'
ERROR: problem running ufw-init
Adding a firewall to WSL2 might not be very useful, because you can gain root access to WSL2 with a simple command: "wsl -u root".
Check out this post (https://superuser.com/a/1626513) for more details.
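For example, from PowerShell or cmd (the distribution name below is just an illustration):
wsl -u root                  # opens the default distribution as root, no password prompt
wsl -d Ubuntu-20.04 -u root  # the same for a specific distribution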

Setting {WSL::Bash} as default shell throws an error in cmder

note: backend error output: -v: -c: line 0: unexpected EOF while looking for matching `''
-v: -c: line 1: syntax error: unexpected end of file
ConEmuC: Root process was alive less than 10 sec, ExitCode=0.
Press Enter or Esc to close console...
This is the error I am getting.
I have also set fish as the default shell in WSL.
For WSL1 on a Windows 10 build later than 1909 (yes, WSL2 is available to me, but for corporate reasons I can't use it):
Try setting your command to wsl.exe -new_console:d:C:\_stuff\code -cur_console:p5 and the task parameters to /dir "c:/_stuff/code" /icon "c:/_distros/ubuntu/ubuntu1804.exe"
You may need to change the file locations to make the command and parameters suitable for your setup; c:/_stuff/code is where I keep all my repositories and c:/_distros/ubuntu is where I have installed Ubuntu. An example is shown below.
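For instance, if your repositories lived in c:\dev instead (an illustrative path), the task fields under Cmder's Settings > Startup > Tasks would look like this:
Command:    wsl.exe -new_console:d:C:\dev -cur_console:p5
Parameters: /dir "c:/dev" /icon "c:/_distros/ubuntu/ubuntu1804.exe"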

can't start rabbitmq-server after installation

I'm trying to use RabbitMQ for a Django tutorial, but when I try to start the server I get this error:
~$ sudo rabbitmq-server
Configuring logger redirection
14:49:57.041 [error]
14:49:57.044 [error] BOOT FAILED
BOOT FAILED
14:49:57.044 [error] ===========
===========
14:49:57.044 [error] ERROR: could not bind to distribution port 25672, it is in use by another node: rabbit#wss
ERROR: could not bind to distribution port 25672, it is in use by another node: rabbit#wss
14:49:57.045 [error]
14:49:58.046 [error] Supervisor rabbit_prelaunch_sup had child prelaunch started with rabbit_prelaunch:run_prelaunch_first_phase() at undefined exit with reason {dist_port_already_used,25672,"rabbit","wss"} in context start_error
14:49:58.046 [error] CRASH REPORT Process <0.153.0> with 0 neighbours exited with reason: {{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,"rabbit","wss"}}},{rabbit_prelaunch_app,start,[normal,[]]}} in application_master:init/4 line 138
{"Kernel pid terminated",application_controller,"{application_start_failure,rabbitmq_prelaunch,{{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,\"rabbit\",\"wss\"}}},{rabbit_prelaunch_app,start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,rabbitmq_prelaunch,{{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,"rabbit","wss"}}},{rabbit_prelau
Crash dump is being written to: erl_crash.dump...done
I've checked whether the port is in use with lsof -i :25672 and I get nothing.
I don't know much about these things, so if you need anything else please tell me.
Try:
sudo lsof -i :25672
sudo kill <PID>
sudo rabbitmq-server
Where <PID> is the ID of the process occupying port 25672.
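If lsof shows nothing, you can also check whether a stray Erlang node is still registered with the port mapper (additional checks; they may not apply to every setup):
sudo ss -lntp | grep 25672   # alternative way to list listeners on the port
epmd -names                  # lists Erlang nodes (e.g. "rabbit") still registered locally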
I have encountered this issue. I figured out that it occurs because the rabbitmq-server is already running on the machine.
I used the following command to check the status of the rabbitmq-server; it tells you whether the server is up or down:
rabbitmqctl.bat status
If it is up, that could be the reason you are getting the error described in your post.
You can bring the server down with the following command:
rabbitmqctl.bat stop
Now you can try starting the rabbitmq-server by issuing the following command:
rabbitmq-server start
Note that I am using Windows, and I executed these commands with the command prompt pointed at C:\Program Files\RabbitMQ\rabbitmq_server-3.8.14\sbin, as my RabbitMQ installation directory is C:\Program Files\RabbitMQ\rabbitmq_server-3.8.14. The full sequence is shown below.
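Put together, the sequence looks like this (assuming the installation path above; adjust it for your version):
cd "C:\Program Files\RabbitMQ\rabbitmq_server-3.8.14\sbin"
rabbitmqctl.bat status
rabbitmqctl.bat stop
rabbitmq-server start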
I have encountered this before. Here is what caused it and how I fixed it:
This is one of those commands which requires the magic word sudo (i.e. it needs superuser privileges).
If you forget to add sudo to the command, it begins the process but later fails when it hits a superuser-only roadblock. This leaves you with an incomplete process. Now when you decide to add sudo, it attempts the same process again but finds that someone without the right privileges has made a mess or is still messing around.
The solution is to cancel whatever the first command started and try again.
sudo lsof -i :25672
This lists details about port 25672.
You will see the PID (process ID), e.g. 1301.
Then stop the process on that port with:
sudo kill <PID>
for example, sudo kill 1301
Make sure you are killing the right process; otherwise you may get into trouble.
Now, retry the command with sudo:
sudo rabbitmq-server
Also, in most cases this error occurs because, unless you deliberately stop the rabbitmq-server, it keeps running even after you restart your system.
Another way to stop the RabbitMQ server on Windows: press Windows+R, type "services.msc", find RabbitMQ in the list, select it, and stop it from the top-left corner.
Then re-run your RabbitMQ server.
I am putting up an answer that can help Googlers run multiple rabbitmq-server nodes on the same machine. Trying to achieve that, I ran into the same error reported here and solved it by defining:
export RABBITMQ_DIST_PORT=anything_other_than_25672
as stated in the documentation:
https://www.rabbitmq.com/networking.html#epmd-inet-dist-port-range
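For example, a second node on the same machine might be started like this (the node name and port numbers are only illustrations):
export RABBITMQ_NODENAME=rabbit2
export RABBITMQ_NODE_PORT=5673
export RABBITMQ_DIST_PORT=25673   # anything other than 25672
rabbitmq-server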
If you are using Windows, go to Task Manager and stop RabbitMQ from running, then reload the rabbitmq-server.
Others have answered for Linux, but on Windows you should press Ctrl+Alt+Delete, open Task Manager, and there end the processes that depend on Erlang.
Note that this requires Administrator privileges.
Now enter this command to start rabbitmq-server:
rabbitmq-server start
You will have to repeat these steps every time you restart your computer. To avoid doing them again, stop the RabbitMQ service in the startup services, for example as shown below.
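From an elevated PowerShell prompt this can also be done like so (the service name "RabbitMQ" is the default; yours may differ):
Stop-Service RabbitMQ
Set-Service RabbitMQ -StartupType Manual   # stops it from starting automatically on boot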
I went through the same problem on Windows; the server is already running as a service after installation.
Just enable the management plugin from the RabbitMQ command line:
rabbitmq-plugins enable rabbitmq_management
Then go to localhost:15672 and you are good to go.
This means that your port 25672 is already in use.
Try:
sudo lsof -i :25672
sudo kill <PID>
and now start your rabbitmq server using
sudo rabbitmq-server

How to fix vagrant up error related to NFS issue?

I ran into a very strange issue this morning. When I rebooted my machine and tried to run vagrant up, I got this error:
==> default: Mounting NFS shared folders...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mount -o vers=3,rw,tcp,nolock,noacl,async 10.0.1.1:/Users/me/code /vagrant
Stdout from the command:
Stderr from the command:
mount.nfs: requested NFS version or transport protocol is not supported
I didn't change any configuration settings or update my machine or anything; as far as I know, nothing has changed. What gives? Does anyone have any ideas as to what the issue is and what I can do to fix it?
For anyone who runs into this same issue and finds that none of the other solutions work: my problem was that 127.0.0.1 localhost was missing from my /etc/hosts file. I'm not sure how or why it went missing, but adding it back fixed the issue.
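In other words, /etc/hosts should contain at least a line like this (the other host entries will vary per machine):
127.0.0.1   localhost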

How to fail gitlab CI build?

I am trying to fail a build in GitLab CI and get an email notification about it.
My build script is this:
echo "Listing files!"
ls -la
echo "##########################Preparing build##########################"
mkdir build
cd build
echo "Generating make files"
cmake -G "Unix Makefiles" -D CMAKE_BUILD_TYPE=Release -D CMAKE_VERBOSE_MAKEFILE=on ..
echo "##########################Building##########################"
make
I have committed code that breaks the build. However, instead of finishing, the build seems to be stuck in the "running" state after make exits. The last line is:
make: *** [all] Error 2
I also get no notifications.
How can I diagnose what is happening?
Update: in the runner, the following is repeated in the log:
Submitting build <..> to coordinator...response error: 500
In production.log and sideq.log of gitlab_ci, the following is written:
ERROR: Error connecting to Redis on localhost:6379 (ECONNREFUSED)
The full message with stacktrace is here: pastebin.
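One quick sanity check for the Redis error above (the service name may differ per distribution):
redis-cli -h localhost -p 6379 ping   # should reply PONG if Redis is up
sudo service redis-server status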
I have the same problem; I can offer you a workaround while I try to fix it fully.
1. Most of the time it hangs, but the job keeps going and actually finishes; you can see the processes inside the machine. For example, in my case it compiles and at the end uses Docker to publish the build, so the docker process doesn't exist until it reaches that phase.
2. To work around this issue, you have to make the data persistent and "retry" the download over and over again until it downloads everything it needs.
PS: Stating what kind of OS you are using always helps.