remote ssh command issue

Team,
I am having difficulty running commands on a remote machine, and I cannot understand why ssh treats the command I pass as a host.
ssh -tt -i /root/.ssh/teamuser.pem teamuser@myserver 'cd ~/bin && ./ssh-out.sh'
|-----------------------------------------------------------------|
| This system is for the use of authorized users only.            |
| Individuals using this computer system without authority, or in |
| excess of their authority, are subject to having all of their   |
| activities on this system monitored and recorded by system      |
| personnel.                                                      |
|                                                                 |
| In the course of monitoring individuals improperly using this   |
| system, or in the course of system maintenance, the activities  |
| of authorized users may also be monitored.                      |
|                                                                 |
| Anyone using this system expressly consents to such monitoring  |
| and is advised that if such monitoring reveals possible         |
| evidence of criminal activity, system personnel may provide the |
| evidence of such monitoring to law enforcement officials.       |
|-----------------------------------------------------------------|
ssh: Could not resolve hostname cd: No address associated with hostname
Connection to myserver closed.
It works absolutely fine if I don't pass a command. It simply logs me in. Any ideas?

man ssh says:
If command is specified, it is executed on the remote host instead of
a login shell.
The thing is that cd is a shell built-in (run type cd in your terminal), not an executable on PATH, so the remote side needs a shell to run it. Note also what the error actually says: "Could not resolve hostname cd" means ssh ended up parsing cd as the destination host, which typically happens when the quoting around the command is lost before ssh runs (for example, when the line is assembled inside another script).
You should invoke ssh something like this:
ssh user@host -t 'bash -l -c "cd ~/bin && ./ssh-out.sh"'
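A minimal sketch of how the reported error can arise (a hypothetical wrapper; the variables are illustrative, not from the question). If both the host and the command expand unquoted and the host variable happens to be empty, the first word of the command slides into the host slot:

#!/usr/bin/env bash
# Hypothetical reproduction: HOST is accidentally empty and both
# variables expand unquoted, so ssh takes "cd" as the destination host
# and fails with: ssh: Could not resolve hostname cd
HOST=""
CMD='cd ~/bin && ./ssh-out.sh'
ssh -tt -i /root/.ssh/teamuser.pem $HOST $CMD

# Safe pattern: quote the host and pass the whole command as a single
# argument to a login shell on the remote side
HOST="teamuser@myserver"
ssh -tt -i /root/.ssh/teamuser.pem "$HOST" "bash -l -c '$CMD'"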

using google gcloud to ssh tunnel into linux machine inside network

I have an Ubuntu 16.04 VirtualBox machine (i.e. machine A) running on OSX connected to a university campus network. I would like to occasionally ssh into the machine from my laptop to remotely assist my colleagues, and I looked at different options.
It seems one of the options is "reverse ssh" (related to "port forwarding" or "ssh tunnelling"). My laptop does not have a fixed IP, so I can't do straight reverse ssh; a possible solution is to use a proxy machine. The idea is that when I need to assist my colleagues, they run the connection instructions on machine A, this creates a running GCP instance, and I can then connect to machine A from the outside through this bridging (proxy?) GCP machine.
                   / Academic intranet
+----------+      |
|   GCE    |      |   +----------+
| instance |<-----|---| Machine A|
+----------+      |   +----------+
                  |
                   \

                                           / Academic intranet
                        +----------+      |
+-------------+   ssh   |   GCE    | ssh  |   +----------+
| Laptop dynIP|-------->| instance |------|-->| Machine A|
+-------------+         +----------+      |   +----------+
                                          |
                                           \
We have a Google Cloud account and gcloud installed on machine A. From what I can tell, GCP already has a very simple way to set up a tunnel:
https://cloud.google.com/community/tutorials/ssh-tunnel-on-gce
I tried it and it works, which makes me guess that the final step should be possible on GCP as well: opening an SSH browser window on the running GCP instance so that I can ssh into machine A from there.
Any ideas?
EDITED:
Here is how far I got following the ssh tunnel on gce instructions:
On machine A:
gcloud compute instances create --zone us-west1-a tunnel
gcloud compute ssh --zone us-west1-a tunnel -- -N -p 22 -D localhost:2210
On my laptop, I can open https://console.cloud.google.com/compute/instances and then open a browser window to SSH connect.
From the GCP instance (hostname tunnel), I guess I am missing something like:
ssh-into-machine-A-from-here
This is the last command that I am missing. Or maybe the ssh tunnel in gcloud needs extra flags/parameters.
0) Create an instance on GCP with a command like:
gcloud compute instances create --zone us-west1-a tunnel
0b) Click on the 'SSH' link on https://console.cloud.google.com/compute/instances to open a browser window.
0c) In the browser window, edit /etc/ssh/sshd_config to set GatewayPorts yes (and restart sshd so it takes effect).
0d) Set up gcloud CLI and connect the first time as shown below:
gcloud compute ssh --zone us-west1-a tunnel
This will create the ssh keys in $HOME/.ssh/google_compute_engine. Disconnect from it. Now that the keys are created, follow the next steps.
1) To establish forwarding from GCE to machine A, run the following on machine A:
ssh -i ~/.ssh/google_compute_engine -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -f -N -R 2022:*:22 gce_user@gce_address
2) Now, to connect to machine A from your laptop, you can use the browser window with the GCP instance and do:
ssh -p 2022 A_machine_user#localhost
This should ask for A_machine_user's password and then connect you to machine A.
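Before trying step 2, a quick sanity check from the GCE instance's browser SSH window (assuming the ss utility is installed; 2022 is the port given to -R in step 1):

ss -tln | grep 2022

If nothing is listed, the reverse tunnel from machine A did not come up.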
I am not 100% sure that I got your exact question, but as far as I understood it, creating a VPN should be the best solution for you: the best and safest way of connecting your GCE instance with machine A.
You can find here a discussion of the same kind of implementation.
Another option in the same spirit is to run an OpenSSH server on machine A. Here is a guide on how to set that up and configure it.

Postgresql User not connecting to Database (Nginx Django Gunicorn)

For almost a month now I have been struggling with this issue. Whenever I try to access my Django Admin page on production I get the following error:
OperationalError at /admin/login/
FATAL: password authentication failed for user "vpusr"
FATAL: password authentication failed for user "vpusr"
My production.py settings file is as follows:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'vpdb',
        'USER': 'vpusr',
        'PASSWORD': os.environ["VP_DB_PASS"],
        'HOST': 'localhost',
    }
}
NOTE: the environment variable is set correctly; even if I hard-code the actual password in there, it doesn't work.
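A quick way to test that end-to-end is to run the exact same host/user/password combination through psql, from the same shell environment that runs gunicorn (a sketch; SELECT 1 is just a no-op query):

PGPASSWORD="$VP_DB_PASS" psql -h localhost -U vpusr -d vpdb -c 'SELECT 1'

If this fails with the same FATAL message, the problem is on the PostgreSQL side rather than in Django.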
Here is the list of databases with their owner:
                                 List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 vpdb      | vpusr    | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =Tc/vpusr            +
           |          |          |             |             | vpusr=CTc/vpusr
And here is the list of users:
                                   List of roles
 Role name |                         Attributes                         | Member of
-----------+------------------------------------------------------------+-----------
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
 vpusr     | Superuser, Create DB                                       | {}
As you can see, I have also tried adding the Superuser and Create DB roles to vpusr, but that did not have any effect.
Even when I try to connect through the terminal like this I get the same error:
sudo -u postgres psql -U vpusr vpdb
I still get the error: psql: FATAL: Peer authentication failed for user "vpusr"
When I do this command:
psql -U vpusr -h localhost vpdb
I properly connect to psql as vpusr.
A few more notes: I deleted the database and the user and re-created them. I made sure the password was correct.
I use Gunicorn, Nginx, Virtualenv, Django, Postgres on an Ubuntu Server from Digital Ocean.
Thank you in advance for taking the time to read this and helping me out!
EDIT: I have noticed that there are no migrations in my apps' migrations folders! Could it be that Django, my user, or postgres does not have permission to write the files?
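(For reference, regenerating and applying missing migrations uses the standard Django commands below; if permissions are the issue, the apps' migrations directories must be writable by the user running them.)

python manage.py makemigrations
python manage.py migrate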
EDIT: NOTE: I CHANGED THE USER TO TONY
In my postgres log file the following errors are found:
2017-09-09 18:09:55 UTC [29909-2] LOG: received fast shutdown request
2017-09-09 18:09:55 UTC [29909-3] LOG: aborting any active transactions
2017-09-09 18:09:55 UTC [29914-2] LOG: autovacuum launcher shutting down
2017-09-09 18:09:55 UTC [29911-1] LOG: shutting down
2017-09-09 18:09:55 UTC [29911-2] LOG: database system is shut down
2017-09-09 18:09:56 UTC [2711-1] LOG: database system was shut down at 2017-09-09 18:09:55 UTC
2017-09-09 18:09:56 UTC [2711-2] LOG: MultiXact member wraparound protections are now enabled
2017-09-09 18:09:56 UTC [2710-1] LOG: database system is ready to accept connections
2017-09-09 18:09:56 UTC [2715-1] LOG: autovacuum launcher started
2017-09-09 18:09:57 UTC [2717-1] [unknown]@[unknown] LOG: incomplete startup packet
2017-09-09 18:10:17 UTC [2740-1] tony@vpdb LOG: provided user name (tony) and authenticated user name (postgres) do not match
2017-09-09 18:10:17 UTC [2740-2] tony@vpdb FATAL: Peer authentication failed for user "tony"
2017-09-09 18:10:17 UTC [2740-3] tony@vpdb DETAIL: Connection matched pg_hba.conf line 90: "local all all peer"
EDIT:
pg_hba.conf file:
# Database administrative login by Unix domain socket
local   all             postgres                                peer
# TYPE  DATABASE        USER            ADDRESS                 METHOD
# "local" is for Unix domain socket connections only
local   all             all                                     peer
# IPv4 local connections:
host    all             all             127.0.0.1/32            password
# IPv6 local connections:
host    all             all             ::1/128                 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local   replication     postgres                                peer
#host    replication     postgres        127.0.0.1/32            md5
#host    replication     postgres        ::1/128                 md5
What can you tell from this?
Your application is trying to connect to PostgreSQL with password authentication, but the connection is matching an md5 line in your pg_hba.conf, so the server expects md5 authentication instead. We can see this in your log messages:
2017-09-01 11:42:17 UTC [16320-1] vpusr@vpdb FATAL: password authentication failed for user "vpusr"
2017-09-01 11:42:17 UTC [16320-2] vpusr@vpdb DETAIL: Connection matched pg_hba.conf line 92: "host all all 127.0.0.1/32 md5"
Locate the pg_hba.conf file inside your PostgreSQL data directory, open it in an editor, and change the line
host all all 127.0.0.1/32 md5
to
host all all 127.0.0.1/32 password
then restart your PostgreSQL service
[root@server] service postgresql restart
and try to authenticate again.
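On a systemd-based Ubuntu like the poster's, the equivalent is the command below; note that a pg_hba.conf change only needs the configuration to be re-read, not a full restart:

sudo systemctl reload postgresql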
To expand on the other messages you are seeing: when you run the command
sudo -u postgres psql -U vpusr vpdb
you are not passing the -h <host> parameter, so the connection goes over the local Unix socket and matches the line
local all all <method>
(local lines have no address column), so you will need to check which method is expected for local connections and authenticate that way. Alternatively, pass the -h <host> parameter, and the connection will match your line
host all all 127.0.0.1/32 password
which means you can enter your password when prompted, or supply it via the environment (no sudo needed for a TCP connection):
PGPASSWORD=<password> psql -h localhost -U vpusr vpdb
From the documentation:
db_user_namespace (boolean)
This parameter enables per-database user names. It is off by default. This parameter can only be set in the postgresql.conf file or on the server command line.
If this is on, you should create users as username@dbname. When username is passed by a connecting client, @ and the database name are appended to the user name and that database-specific user name is looked up by the server. Note that when you create users with names containing @ within the SQL environment, you will need to quote the user name.
With this parameter enabled, you can still create ordinary global users. Simply append @ when specifying the user name in the client, e.g. joe@. The @ will be stripped off before the user name is looked up by the server.
db_user_namespace causes the client's and server's user name representation to differ. Authentication checks are always done with the server's user name so authentication methods must be configured for the server's user name, not the client's. Because md5 uses the user name as salt on both the client and server, md5 cannot be used with db_user_namespace.
Although this doesn't explain why psql does the right thing, it's worth looking into.
Another possibility is that psycopg2 links against a different libpq, which in turn links against a different, FIPS-compliant OpenSSL. It would then have no way to do md5 hashing, as that OpenSSL build doesn't contain the md5 algorithm. I would expect a different error message, but this bug is anything but obvious.
UPDATE: This looks like a red herring. Apparently psycopg2 brings its own crypto version.
Last thing to check would be character encoding. Test with a password that contains only ascii characters, like abcdefghijkl. If Django works then, look into the LANG_* and LC_* variables in the environment.
To fix password authentication failed for user "vpusr", try adding the password as-is to the settings, to test os.environ["VP_DB_PASS"].
Change the engine:
'ENGINE': 'django.db.backends.postgresql_psycopg2'
Install it if needed:
pip install psycopg2
To fix psql: FATAL: Peer authentication failed for user "vpusr", simply add the host:
psql -h localhost -U vpusr vpdb
#    ^^^^^^^^^^^^

rabbitmqadmin list vhosts show messages but there are no queues

rabbitmqadmin list vhosts shows messages, but there are no queues. Why is this possible?
When I run Celery, it still somehow receives messages. How can I see the name of the queue where the messages are stored? What am I missing?
dmugtasimov@dmugtasimov-ThinkPad-Edge-E440 ~ $ rabbitmqadmin -u guest -p guest list vhosts
+---------+----------+----------------+-------------------------+----------+----------+---------+
|  name   | messages | messages_ready | messages_unacknowledged | recv_oct | send_oct | tracing |
+---------+----------+----------------+-------------------------+----------+----------+---------+
| myvhost | 1        | 1              | 0                       | 231903   | 229228   | False   |
+---------+----------+----------------+-------------------------+----------+----------+---------+
dmugtasimov@dmugtasimov-ThinkPad-Edge-E440 ~ $ rabbitmqadmin -u guest -p guest list queues
No items
dmugtasimov@dmugtasimov-ThinkPad-Edge-E440 ~ $ sudo rabbitmqctl list_queues
Listing queues ...
...done.
dmugtasimov@dmugtasimov-ThinkPad-Edge-E440 ~ $ rabbitmqadmin -u guest -p guest -V myvhost get queue=celery requeue=true count=10
*** Access refused: /api/queues/myvhost/celery/get
Please suggest what extra information is required to answer the question.
I ran into a similar problem; the difference is that my message was "unacknowledged".
E.g. I found my queue had a message:
$ rabbitmqadmin list queues name node messages
+----------------------------+----------------+----------+
|            name            |      node      | messages |
+----------------------------+----------------+----------+
| my_queue_name              | rabbit@xx-2    | 1        |
but when I ran the "get" command to show its content, rabbitmq told me "there's no item",
so I queried it with this command:
$ rabbitmqadmin list queues name node messages messages_ready messages_unacknowledged
+----------------------------+----------------+----------+----------------+-------------------------+
|            name            |      node      | messages | messages_ready | messages_unacknowledged |
+----------------------------+----------------+----------+----------------+-------------------------+
| my_queue_name              | rabbit@xxxxx-2 | 1        | 0              | 1                       |
+----------------------------+----------------+----------+----------------+-------------------------+
I don't know why; I just restarted the rabbitmq server and everything seems fine again.
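Two general notes (standard RabbitMQ behavior, not something established in this thread): unacknowledged messages belong to a consumer channel and are requeued as ready only when that channel closes, which is why restarting the broker, or just the consumer, makes them reappear. And the asker's Access refused on /api/queues/myvhost/celery/get usually means the guest user has no permissions on myvhost, which can be checked and fixed with rabbitmqctl:

sudo rabbitmqctl list_permissions -p myvhost
sudo rabbitmqctl set_permissions -p myvhost guest ".*" ".*" ".*"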

How to start apache-spark slave instance on a standalone environment?

These are the steps I've done so far:
Download spark-1.4.1-bin-hadoop2.6.tgz
Extract it
./spark-1.4.1-bin-hadoop2.6/sbin/start-all.sh
The master is working but the slave doesn't start.
This is the output:
[ec2-user@ip-172-31-24-107 ~]$ sudo ./spark-1.4.1-bin-hadoop2.6/sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /home/ec2-user/spark-1.4.1-bin-hadoop2.6/sbin/../logs/spark-root-org.apache.spark.deploy.master.Master-1-ip-172-31-24-107.out
localhost: Permission denied (publickey).
[ec2-user@ip-172-31-24-107 ~]$
This is the secure log:
Aug 9 00:09:30 ip-172-31-24-107 sudo: ec2-user : TTY=pts/0 ; PWD=/home/ec2-user ; USER=root ; COMMAND=./spark-1.4.1-bin-hadoop2.6/sbin/start-all.sh
Aug 9 00:09:32 ip-172-31-24-107 sshd[4828]: Connection closed by 127.0.0.1 [preauth]
I believe the problem is with SSH, but I haven't been able to find a solution on Google...
Any idea how to fix my SSH issue?
You need to set up passwordless ssh. Try:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Then restart the cluster. If that does not work, please post the new error message(s).
It's in fact a two-step process.
Generate a public/private rsa key pair:
ubuntu@master:~$ ssh-keygen -t rsa -P ""
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ubuntu/.ssh/id_rsa):
Your identification has been saved in /home/ubuntu/.ssh/id_rsa.
Your public key has been saved in /home/ubuntu/.ssh/id_rsa.pub.
The key fingerprint is:
80:4d:40:f6:3a:09:32:07:74:25:cc:cd:f2:b3:75:10 ubuntu@master.flexilogix
The key's randomart image is:
+--[ RSA 2048]----+
|o.o+Bo.E. |
| ..=.B . |
|o o + + . |
| + . = o . |
| + + S |
| o |
| |
| |
| |
+-----------------+
Set up passwordless ssh:
ubuntu@master:~$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
Once both steps are done, you should be able to verify it:
ubuntu@master:~$ ssh localhost
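If ssh localhost still prompts for a password or is refused after this, a common extra step (not shown above) is fixing the file permissions sshd insists on, since it silently ignores a group- or world-writable authorized_keys:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys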

2nd server in RabbitMQ cluster not participating, shows no uptime

I have a two-server RabbitMQ cluster behind a load balancer, but right now only the first node seems to be fielding traffic.
When I do:
> rabbitmqadmin list nodes name type running uptime
+----------------------+------+---------+------------+
|         name         | type | running |   uptime   |
+----------------------+------+---------+------------+
| rabbit@n2-rabbitmq-1 | disc | True    | 3899164848 |
| rabbit@n2-rabbitmq-2 | disc | True    |            |
+----------------------+------+---------+------------+
The second node shows no uptime. A cluster_status shows:
> sudo rabbitmqctl cluster_status
Cluster status of node 'rabbit#n2-rabbitmq-1' ...
[{nodes,[{disc,['rabbit@n2-rabbitmq-1','rabbit@n2-rabbitmq-2']}]},
 {running_nodes,['rabbit@n2-rabbitmq-2','rabbit@n2-rabbitmq-1']},
 {cluster_name,<<"rabbit@n2-rabbitmq-1">>},
 {partitions,[]}]
...done.
What am I doing wrong or what should I look for?
Maybe for some reason one of your nodes went down, and when it came back up it did not sync with the 'master' node (the one still running). To handle that, set the following in the configuration file /etc/rabbitmq/rabbitmq.config:
{cluster_partition_handling, autoheal}
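For context, rabbitmq.config is an Erlang term list, so the setting sits inside the rabbit section. A minimal sketch (merge it into any existing config rather than replacing the file, and restart RabbitMQ afterwards):

[
  {rabbit, [
    {cluster_partition_handling, autoheal}
  ]}
].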
I also recommend enabling the web management plugin for better observation:
$ rabbitmq-plugins enable rabbitmq_management
From the main (overview) page, served on port 15672 by default, you can see the status of the nodes in the cluster (connected, partitioned, ...).
In any case, you had better show your procedure for configuring the cluster (every step) with more information. If my guess above is wrong, please show me what you see in the web interface.