I've modified some of the NFS server functions by placing an intermediary server between the client and the NFS server.
I would like to test the ReadDir function for NFS, but whenever I try to test it, the command sent is ReadDirPlus (ls, ls -l, etc.).
Is there a specific terminal (bash) command that makes the client issue a ReadDir request to the NFS server?
Use NFS version 2 instead of 3. There's no READDIRPLUS in version 2, so the client has to issue a READDIR and then do individual GETATTRs to retrieve the attributes for an ls -l.
If you're using Linux, simply pass nfsvers=2 to the mount command.
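For example, a mount invocation of this shape forces protocol version 2 (server name and paths are placeholders, and this assumes your client kernel still has NFSv2 support compiled in):

```shell
# Mount the export with NFSv2 so the client cannot use READDIRPLUS
# (server:/export and /mnt/nfs2 are hypothetical names).
sudo mount -t nfs -o nfsvers=2 server:/export /mnt/nfs2

# Any directory listing now triggers READDIR followed by GETATTRs:
ls -l /mnt/nfs2
```

You can confirm which call is actually sent by watching the traffic, e.g. with `tcpdump -i any port 2049`.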
I am trying to run MPI on an external server. As part of my goal, I'm attempting to run something in parallel across multiple nodes.
However, this external server has a bad default configuration file that is read-only, so when I try to ssh to another external server without using ssh <server> -F ~/.ssh/config, it simply returns four different "Bad configuration option" errors. However, -F is not an option that I can pass to mpirun, and I don't know whether there is any way to make mpirun use a different ssh configuration file.
What should I do?
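One avenue worth trying, as a sketch: Open MPI (if that is the MPI implementation in use) exposes an MCA parameter, plm_rsh_agent, that replaces the command used to launch remote daemons, so the -F flag can be folded into it (the host names and program here are hypothetical):

```shell
# Tell Open MPI's rsh/ssh launcher to invoke ssh with a custom config
# file; plm_rsh_agent replaces the remote-launch command wholesale.
mpirun --mca plm_rsh_agent "ssh -F $HOME/.ssh/config" \
       -np 4 --host node1,node2 ./my_parallel_program
```

The same parameter can also be set through the environment variable OMPI_MCA_plm_rsh_agent, which avoids changing the mpirun invocation itself.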
I am writing a test case in Robot Framework wherein I have to either copy a file from the local machine (Windows) to the remote server (Linux) or create a new one at the target location.
I have used multiple sudo su - commands to switch to the root user and reach the desired host. As a result, I am not able to use the Put File keyword from SSHLibrary to upload the file. I have reached the desired folder location by executing the commands with the Write keyword.
Since there is no option left (that's what I realize with my limited knowledge of Robot Framework), I started creating a new file with the vi <filename> command. I have also reached the INSERT mode of the file, BUT I am not able to enter text into the file.
Can someone please suggest how I can either
Copy the file from the local Windows machine to the remote Linux server AFTER multiple su commands (switch user), or
Create a new text file and enter the content.
Please note: the new file being created / copied is a certificate file, hence I do not wish to write the entire content of the certificate in my test suite file.
The entire test case looks something like this
First Jump1
Log Starting the connection to AWS VM
# Connection to VM with Public Key
Connection To VM ${hostname} ${username}
Send Command sudo su -
Send Command su - <ServiceUser1>
# Reached the Destination server
Send Command whoami
Send Command ss -tln | grep 127.0.0.1:40
# Connecting to Particular ZIP
Send Command sudo -u <ServiceUser2> /usr/bin/ssh <ServiceUser2>@localhost -p <port>
Send Command sudo su -
# Check Auth Certificate
Send Command mosquitto_pub -h ${mq_host} -p ${mq_port} -u ${mq_username} -P ${mq_password}
In the step Check Auth Certificate, the certificate is checked for presence: if present, delete the current certificate and create a new one (either create a new file or upload from local); if it is not there, create a new certificate.
Though it might not be ideal, I was able to achieve what I wanted with
echo "content" > newFileName
echo "update content" >> newFileName
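Since the file is a certificate, one way to avoid pasting its full contents into the test suite (a sketch; the file names below are made up for illustration) is to base64-encode it into a single line, send that one line through the Write keyword, and decode it back into a file on the remote side:

```shell
# Create a sample "certificate" standing in for the real file.
printf 'BEGIN CERT\nabc123\nEND CERT\n' > /tmp/mycert.pem

# Encode to a single line that survives being typed into a shell
# through SSHLibrary's Write keyword (GNU base64; -w0 disables wrapping).
CERT_B64=$(base64 -w0 /tmp/mycert.pem)

# On the remote side (after the su chain), decode it back into a file.
echo "$CERT_B64" | base64 -d > /tmp/restored.pem
```

The decoded file is byte-for-byte identical to the original, so this is safe for binary certificate formats as well as PEM.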
I need to execute ikeyman on an IBM HTTP Server machine. Since I don't want to install a full-blown UI on the server, I use MobaXterm with X forwarding from my Windows workstation. When executed as a regular user (e.g. /opt/IBM/HTTPServer/bin/ikeyman) it works. Because of the permissions required on certain folders, ikeyman needs to run as root:
sudo -i
/opt/IBM/HTTPServer/bin/ikeyman
Exception in thread "main" java.awt.HeadlessException:
No X11 DISPLAY variable was set, but this program performed an operation which requires it.
at java.awt.GraphicsEnvironment.checkHeadless(GraphicsEnvironment.java:217)
at java.awt.Window.<init>(Window.java:547)
at java.awt.Frame.<init>(Frame.java:431)
at java.awt.Frame.<init>(Frame.java:396)
at javax.swing.JFrame.<init>(JFrame.java:200)
at com.ibm.gsk.ikeyman.gui.KeymanFrame.<init>(KeymanFrame.java)
at com.ibm.gsk.ikeyman.gui.KeymanFrame.<init>(KeymanFrame.java)
at com.ibm.gsk.ikeyman.Ikeyman.main(Ikeyman.java)
Not working:
sudo DISPLAY=localhost:10.0 /opt/IBM/HTTPServer/bin/ikeyman (the DISPLAY value was copied from the regular user)
xauth add $(xauth -f /home/user/.Xauthority list | tail -1)
export DISPLAY=localhost:10.0
/opt/IBM/HTTPServer/bin/ikeyman
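A common way to make a forwarded X session usable under sudo (a sketch, assuming the regular user's cookie lives in /home/user/.Xauthority and DISPLAY is localhost:10.0 as above) is to keep DISPLAY set and point root at the regular user's authority file instead of merging cookies:

```shell
# Run the GUI as root, but reuse the regular user's X forwarding:
# DISPLAY names the forwarded socket, XAUTHORITY supplies the cookie
# that authorizes connections to it (root can read any user's file).
sudo env DISPLAY=localhost:10.0 \
         XAUTHORITY=/home/user/.Xauthority \
         /opt/IBM/HTTPServer/bin/ikeyman
```

The earlier attempt likely failed because sudo's clean environment dropped XAUTHORITY, so even with DISPLAY set, root had no valid cookie for the display.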
I've got:
My laptop
A remote server I can SSH into which has a Docker volume inside of which are some files I'd like to copy to my laptop.
What is the best way to copy these files over? Bonus points for using things like rsync, etc., which are fast, can resume, and show progress, without writing any temporary files.
Note: my user on the remote server does not have permission to just scp the data straight out of the volume mount in /var/lib/docker, although I can run any containers on there.
Having this problem, I created dvsync which uses ngrok to establish a tunnel that is being used by rsync to copy data even if the machine is in a private VPC. To use it, you first start the dvsync-server locally, pointing it at the source directory:
$ docker run --rm -e NGROK_AUTHTOKEN="$NGROK_AUTHTOKEN" \
--mount source=MY_DIRECTORY,target=/data,readonly \
quay.io/suda/dvsync-server
Note: you need the NGROK_AUTHTOKEN, which can be obtained from the ngrok dashboard. Then start the dvsync-client on the target machine:
docker run -e DVSYNC_TOKEN="$DVSYNC_TOKEN" \
--mount source=MY_TARGET_VOLUME,target=/data \
quay.io/suda/dvsync-client
The DVSYNC_TOKEN can be found in the dvsync-server output; it's a base64-encoded private key plus tunnel info. Once the data has been copied, the client will exit.
I'm not sure about the best way of doing so, but if I were you I would run a container sharing the same volume (read-only, as it seems you just want to download the files within the volume) and download them from there.
This container could be running rsync, as you wish.
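A minimal sketch of that idea, without rsync: stream a tar of the volume's contents from a throwaway container over the existing SSH connection (my-volume, the host name, and the local directory are hypothetical names):

```shell
# On the laptop: have the remote Docker daemon run a throwaway
# container that mounts the volume read-only and tars it to stdout,
# then unpack the stream locally. No temporary files are written.
ssh user@remote-server \
    "docker run --rm -v my-volume:/data:ro alpine tar -cf - -C /data ." \
  | tar -xf - -C ./restored-files
```

This gives neither resume nor progress; for those you could instead run an rsync daemon inside such a container and tunnel its port over ssh, which is essentially what dvsync automates.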
Our Docker images ship closed-source code, and we need to store them somewhere safe, using our own private Docker registry.
We're looking for the simplest way to deploy a private Docker registry with a simple authentication layer.
I found:
this manual way: http://www.activestate.com/blog/2014/01/deploying-your-own-private-docker-registry
and the shipyard/docker-private-registry Docker image, based on stackbrew/registry and adding basic auth via Nginx: https://github.com/shipyard/docker-private-registry
I'm thinking of using shipyard/docker-private-registry, but is there another, better way?
I'm still learning how to run and use Docker, consider this an idea:
# Run the registry on the server, allow only localhost connection
docker run -p 127.0.0.1:5000:5000 registry
# On the client, setup ssh tunneling
ssh -N -L 5000:localhost:5000 user@server
The registry is then accessible at localhost:5000; authentication is done through ssh, which you probably already know and use.
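With the tunnel up, pushing and pulling works against the local endpoint (the image name is a placeholder; note that Docker treats localhost registries as a special case, so no TLS configuration is needed here):

```shell
# Tag an existing local image for the tunnelled registry and push it.
docker tag myimage localhost:5000/myimage
docker push localhost:5000/myimage

# Pulling from any client with the same tunnel works the same way.
docker pull localhost:5000/myimage
```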
Sources:
https://blog.codecentric.de/en/2014/02/docker-registry-run-private-docker-image-repository/
https://docs.docker.com/userguide/dockerlinks/
You can also use an Nginx front-end with Basic Auth and an SSL certificate.
Regarding the SSL certificate, I spent a couple of hours trying to get a working self-signed certificate, but Docker wasn't able to work with the registry. To solve this I got a free signed certificate, which works perfectly. (I used StartSSL, but there are others.)
Also be careful when generating the certificate. If you want the registry running at the URL registry.damienroch.com, you must request the certificate for that exact URL, including the sub-domain, otherwise it's not going to work.
You can perform all this setup using Docker and my nginx-proxy image (See the README on Github: https://github.com/zedtux/nginx-proxy).
This means that if you have installed nginx using the distribution's package manager, you will replace it with a containerised nginx.
Place your certificate (.crt and .key files) on your server in a folder (I'm using /etc/docker/nginx/ssl/ and the certificate names are private-registry.crt and private-registry.key)
Generate a .htpasswd file and upload it on your server (I'm using /etc/docker/nginx/htpasswd/ and the filename is accounts.htpasswd)
Create a folder where the images will be stored (I'm using /etc/docker/registry/)
Run my nginx-proxy image using docker run
Run the Docker registry with some environment variables that nginx-proxy will use to configure itself.
Here is an example of the commands to run for the previous steps:
sudo docker run -d --name nginx -p 80:80 -p 443:443 \
    -v /etc/docker/nginx/ssl/:/etc/nginx/ssl/ \
    -v /var/run/docker.sock:/tmp/docker.sock \
    -v /etc/docker/nginx/htpasswd/:/etc/nginx/htpasswd/ \
    zedtux/nginx-proxy:latest
sudo docker run -d --name registry \
    -e VIRTUAL_HOST=registry.damienroch.com -e MAX_UPLOAD_SIZE=0 \
    -e SSL_FILENAME=private-registry -e HTPASSWD_FILENAME=accounts \
    -e DOCKER_REGISTRY=true \
    -v /etc/docker/registry/data/:/tmp/registry registry
The first line starts nginx and the second one the registry. It's important to do it in this order.
When both are up and running you should be able to login with:
docker login https://registry.damienroch.com
I have created an almost ready-to-use, and certainly ready-to-function, setup for running a docker-registry: https://github.com/kwk/docker-registry-setup.
Maybe it helps.
Everything (registry, auth server, and LDAP server) runs in containers, which makes the parts replaceable as soon as you're ready. The setup is fully configured to make it easy to get started. There are even demo certificates for HTTPS, but they should be replaced at some point.
If you don't want LDAP authentication but simple static authentication, you can disable it in auth/config/config.yml and put in your own combination of usernames and hashed passwords.
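As an illustration of producing such hashed passwords (a sketch, assuming the auth server accepts Apache-style bcrypt hashes; check the repository's README for the exact format it expects), htpasswd can generate the pair without touching any file:

```shell
# Print a bcrypt-hashed user:hash pair to stdout (-n), taking the
# password from the command line (-b) and selecting bcrypt (-B).
# htpasswd ships in apache2-utils / httpd-tools; the credentials
# here are made-up examples.
htpasswd -nbB alice 's3cret'
```

The resulting user:hash line can then be pasted into the static-auth section of auth/config/config.yml.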