Here I am creating a test machine (dev) using Docker Machine:
$ docker-machine create -d virtualbox dev
Creating CA: C:\Users\xxx\.docker\machine\certs\ca.pem
Creating client certificate: C:\Users\xxx\.docker\machine\certs\cert.pem
Creating VirtualBox VM...
Creating SSH key...
Starting VirtualBox VM...
Starting VM...
The VM gets created and runs without flaws.
And here is the error when I run the following command:
$ docker-machine env dev
open C:\Users\xxx\.docker\machine\machines\dev\ca.pem: The system cannot find the file specified.
I have no idea how to deal with this problem. I have tried restarting boot2docker.
You should try running docker-machine regenerate-certs dev. The problem, I think, is that somehow your .pem file got deleted or was never created. I had the same issue and regenerating the certs fixed the problem (a reboot did not help, by the way).
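A minimal run looks like this (the exact prompts vary a bit across docker-machine versions):
$ docker-machine regenerate-certs dev
Regenerate TLS machine certs?  Warning: this is irreversible. (y/n): y
Regenerating TLS certificates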
I guess you are getting the Docker-machine: ca.pem not found error even when you use docker info or any other docker command.
Try this command: docker-machine env -u
The output will be similar to:
unset DOCKER_TLS_VERIFY
unset DOCKER_HOST
unset DOCKER_CERT_PATH
unset DOCKER_MACHINE_NAME
# Run this command to configure your shell:
# eval $(docker-machine env -u)
Now enter eval $(docker-machine env -u).
This should do the trick. Finally, run docker info to be sure.
I was getting the exact same error. It turned out to be the Cisco AnyConnect client affecting my networking settings. It's not enough to quit AnyConnect, you have to reboot your machine to restore your settings.
If someone knows more about how AnyConnect is affecting things and if there are solutions better than rebooting, I'd love to hear about it!
Copy the certificates from "C:\Users\xxx\.docker\machine\certs"
Paste the certificates into "C:\Users\xxx\.docker\machine\machines\dev"
NOTE: This error was on Docker for Windows 10.
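In cmd.exe that copy amounts to something like the following (a sketch; the user directory and the machine name dev are taken from the paths above):
copy "C:\Users\xxx\.docker\machine\certs\*.pem" "C:\Users\xxx\.docker\machine\machines\dev\"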
Here was my error:
#user ➜ git-repo git(users/user/dev) ✗ docker
unable to resolve docker endpoint: open C:\Users\user\.docker\ca.pem: The system cannot find the file specified.
Here is the link to the shell file I used to recreate the certificates; I named it generate_docker_cert.sh:
https://gist.github.com/bradrydzewski/a6090115b3fecfc25280
So I went to the directory from the error output:
cd C:\Users\user\.docker\
Created that file:
notepad generate_docker_cert.sh
Copied the values from the link into there and saved.
Then ran that .sh file:
.\generate_docker_cert.sh
Then the docker command worked:
#user ➜ git-repo git(users/user/dev) ✗ docker
Usage: docker [OPTIONS] COMMAND
A self-sufficient runtime for containers
...
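For context, a certificate-generation script like the one in the gist essentially boils down to a handful of openssl calls (a sketch under assumed defaults, not the gist verbatim; the subject names are placeholders):
#!/bin/sh
# Create a CA, then a client key and certificate signed by it,
# producing the file names the docker CLI looks for (ca.pem, cert.pem, key.pem).
openssl genrsa -out ca-key.pem 2048
openssl req -new -x509 -days 365 -key ca-key.pem -out ca.pem -subj "/CN=docker-ca"
openssl genrsa -out key.pem 2048
openssl req -new -key key.pem -out cert.csr -subj "/CN=docker-client"
echo "extendedKeyUsage = clientAuth" > extfile.cnf
openssl x509 -req -days 365 -in cert.csr -CA ca.pem -CAkey ca-key.pem \
    -CAcreateserial -out cert.pem -extfile extfile.cnf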
I'm trying to launch my app on Google Compute Engine, and I get the following error:
Sep 26 22:46:09 debian google_guest_agent[411]: ERROR non_windows_accounts.go:199 Invalid ssh key entry - unrecognized format: ssh-rsa AAAAB...
I'm having a hard time interpreting it. I have the following startup script:
# Talk to the metadata server to get the project id
PROJECTID=$(curl -s "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google")
REPOSITORY="github_sleepywakes_thunderroost"
# Install logging monitor. The monitor will automatically pick up logs sent to
# syslog.
curl -s "https://storage.googleapis.com/signals-agents/logging/google-fluentd-install.sh" | bash
service google-fluentd restart &
# Install dependencies from apt
apt-get update
apt-get install -yq ca-certificates git build-essential supervisor
# Install nodejs
mkdir /opt/nodejs
curl https://nodejs.org/dist/v16.15.0/node-v16.15.0-linux-x64.tar.gz | tar xvzf - -C /opt/nodejs --strip-components=1
ln -s /opt/nodejs/bin/node /usr/bin/node
ln -s /opt/nodejs/bin/npm /usr/bin/npm
# Get the application source code from the Google Cloud Repository.
# git requires $HOME and it's not set during the startup script.
export HOME=/root
git config --global credential.helper gcloud.sh
git clone https://source.developers.google.com/p/${PROJECTID}/r/${REPOSITORY} /opt/app/github_sleepywakes_thunderroost
# Install app dependencies
cd /opt/app/github_sleepywakes_thunderroost
npm install
# Create a nodeapp user. The application will run as this user.
useradd -m -d /home/nodeapp nodeapp
chown -R nodeapp:nodeapp /opt/app
# Configure supervisor to run the node app.
cat >/etc/supervisor/conf.d/node-app.conf << EOF
[program:nodeapp]
directory=/opt/app/github_sleepywakes_thunderroost
command=npm start
autostart=true
autorestart=true
user=nodeapp
environment=HOME="/home/nodeapp",USER="nodeapp",NODE_ENV="production"
stdout_logfile=syslog
stderr_logfile=syslog
EOF
supervisorctl reread
supervisorctl update
# Application should now be running under supervisor
My instance shows I have 2 public SSH keys. The second begins like this one in the error, but after about 12 characters it is different.
Any idea why this might be occurring?
Thanks in advance.
Once you have deployed your VM instance, the default setting is that no SSH key is configured yet, but you can also configure the SSH key when deploying the VM instance.
To elaborate on the answer of @JohnHanley, I tried to test this in my environment.
Created a VM instance and verified the SSH configuration. By default there is no SSH key configured; as I said earlier, you can configure an SSH key when deploying the VM.
Created an SSH key pair via the CLI; you can use this link for instruction details.
Navigate to your VM instance, then Turn off > EDIT > Security > Add Item > SSH key 1 - copy+paste the generated SSH key pair > Save > Power ON the VM instance.
Then test whether the VM instance is accessible.
Documentation link How to Add SSH keys to project metadata.
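If it helps, the CLI side of those steps looks roughly like this (a sketch; the key file name, username, instance name, and zone are placeholders):
# Generate an SSH key pair locally.
ssh-keygen -t rsa -f ~/.ssh/gce_key -C myuser
# GCE expects metadata entries in the form "USERNAME:PUBLIC_KEY".
echo "myuser:$(cat ~/.ssh/gce_key.pub)" > ssh_keys.txt
gcloud compute instances add-metadata my-instance --zone=us-central1-a \
    --metadata-from-file ssh-keys=ssh_keys.txt
An entry that does not follow that USERNAME:KEY format is one common cause of the "Invalid ssh key entry - unrecognized format" message from the question.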
I am trying to use the SSHOperator to SSH into a remote machine and run an external application through the command line. I have set up the SSH connection via the admin page.
This section of code is used to define the commands and the SSH connection to the external machine.
# Airflow 1.x import path; in Airflow 2+ SSHHook comes from the SSH provider package.
from airflow.contrib.hooks.ssh_hook import SSHHook

sshHook = SSHHook(ssh_conn_id='remote_comp')
command_1 = """
cd /files/232-065/Rans
bash run.sh
"""
Where 'run.sh' runs the shell script:
#!/bin/sh
starccm+ -batch run_export.java Rans_Model.sim
Which simply runs the commercial software starccm+ with some options I have specified.
This section defines the task:
# Airflow 1.x import path; in Airflow 2+ SSHOperator comes from the SSH provider package.
from airflow.contrib.operators.ssh_operator import SSHOperator

inlet_profile = SSHOperator(
    task_id='inlet_profile',
    ssh_hook=sshHook,
    command=command_1
)
I have confirmed the SSH connection works by giving a simple 'ls' command and checking the output.
The error that I get is:
bash run.sh, error: run.sh: line 2: starccm+: command not found
The command in 'run.sh' works when I am logged into the machine (it does not require a GUI). This makes me think that the SSH session Apache Airflow opens is not the same as my login session, but I am not sure how to solve this problem.
Does anyone have any experience with this?
There is no issue with the SSH connection (at least judging from the error message). The issue is with the starccm+ installation path.
Please check the installation path of starccm+.
Check whether the installation path is part of the $PATH environment variable:
$ echo $PATH
If not, then install it in a standard location like /bin or /usr/bin (provided they are included in the $PATH variable), or export the installation directory into the PATH variable like this:
$ export PATH=$PATH:/<absolute_path>
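Also note that the SSHOperator runs a non-interactive shell, which may not source the same profile files as your interactive login, so a PATH set in ~/.bashrc may not apply. One workaround (a sketch; the install directory is an assumption, substitute your real one) is to export the path at the top of run.sh itself:
#!/bin/sh
# Make starccm+ resolvable in the non-interactive SSH session.
export PATH=$PATH:/opt/starccm/bin   # assumed install directory
starccm+ -batch run_export.java Rans_Model.sim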
It is not ideal, but if you struggle with setting the PATH variable you can run starccm+ by giving the full path, like:
/directory/where/star/is/installed/starccm+ -batch run_export.java Rans_Model.sim
I am running a virtual environment on CentOS with podman.
When I use the --net option of the podman run command, I get an error.
[user@server ~]$ podman run --net slirp4netns:port_handler=slirp4netns -p 1080:80 -d --name web nginx
Error: cannot join CNI networks if running rootless: invalid argument
Is this option unavailable?
Or is there a problem with the way the options are specified?
Please tell me the solution.
I used this site as a reference for the command.
This is the configuration of the server.
[user@server ~]$ cat /etc/redhat-release
CentOS Linux release 8.2.2004 (Core)
[user@server ~]$ podman -v
podman version 2.0.6
The port_handler option requires Podman >= 2.1.0, which isn't released at this moment: https://github.com/containers/podman/commit/d86bae2a01cb855d5964a2a3fbdd41afe68d62c8
You can use that option if you compile Podman from its master branch.
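In the meantime, rootless port publishing in Podman 2.0.x already uses slirp4netns by default, so simply dropping the port_handler option should work (a sketch under that assumption):
[user@server ~]$ podman run -p 1080:80 -d --name web nginx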
I found these links quite helpful for understanding rootless communication:
https://www.redhat.com/sysadmin/container-networking-podman
https://podman.io/getting-started/network
I am not sure if you have seen these links before, or even if they are helpful to you in this instance. But, in the interest of helping others out, I think the posts quote the following helpful statements:
Note: All podman network commands are for rootfull containers only.
Technically, the container itself does not have an IP address, because without root privileges, network device association cannot be achieved
When using Podman as a rootless user, the network is setup automatically. The container itself does not have an IP Address, because without root privileges, network association is not allowed. You will also see some other limitations.
I'm new to Chef and I am stuck on a problem. I'm using AWS Chef Automate Server and an EC2 Ubuntu instance as the Chef node. My workstation is a local Windows machine where I have installed ChefDK. I have successfully configured the Chef server with ChefDK.
When I bootstrap the node using the knife bootstrap command, it bootstraps the Ubuntu node but shows this error at the end: cannot create /etc/chef/trusted_certs/opsworks-cm-ca-2016-root.pem: Directory nonexistent
The command I used here is knife bootstrap myEC2PublicIPHere -N UmaidNode1 -x ubuntu --sudo --run-list "recipe[nginx]" -i .chef/my_key.pem.
After that I added some other cookbooks on the server and ran the knife ssh command from my Windows workstation to run chef-client on the node, but this command is not working. I have tried it with different attributes, but I always get a similar issue: FATAL: 1 node found, but does not have the required attribute to establish the connection. Try setting another attribute to open the connection using --attribute.
The command I tried here is knife ssh 'name:*' --attribute myEC2PublicIpHere -x ubuntu -i .chef/my_key.pem 'sudo chef-client'.
Further, upon running the command knife node show UmaidNode1, it shows the node data with the IP blank. I don't know why it is not getting the IP here. The output is:
Node Name:   UmaidNode1
Environment: _default
FQDN:
IP:
Run List:    recipe[nginx], recipe[apache]
Roles:
Recipes:
Platform:
Tags:
The issue is finally resolved. I don't know why, but the problem was with the ChefDK version. I was using the latest version, 4.8.23. It always creates the directory /etcchef, but Chef searches for all its files in the directory /etc/chef, so it was unable to find files like client.rb.
NOTE: I even made the required /etc/chef directory myself, but it didn't work.
I installed an older version of ChefDK and now it's working fine.
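For example, on Windows an older release can be pinned via Chocolatey (the version number here is only an illustration of the syntax; pick whichever pre-4.8.23 release suits you):
choco install chefdk --version 4.7.73 --force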
I am rather new to Docker images and am trying to set up a selenium/standalone-firefox image linked to a local folder.
I'm running Docker version 19.03.2, build 6a30dfc, on Windows 10 and have unsuccessfully tried to figure out the correct use of the docker run -v syntax, because the documentation is either unspecific (i.e. gives too little context for me to make sense of it) or for the wrong platform.
Running Docker as admin in cmd, I used docker run -d -v LOCAL_PATH:C:\Users\Public.
This throws docker: Error response from daemon: invalid mode: \Users\Public as an error message.
I want to bind the running container to the folder C:\Users\Public (or another folder on the host machine - this is for illustration purposes).
Can someone point me to the (I fear obvious) mistake I'm making? I essentially want to achieve the container's output data (for later scraping) being stored in the host machine's folder C:\Users\Public. The container's output folder should be named myfolder.
** EDIT **
Digging around, I found this (see Volume Mapping).
I have thus tried the following code:
>docker run -d -p 4444:4444 --name selenium-hub selenium/hub
>docker run -d --link selenium-hub:hub -v C:/Users/Public:/home/seluser/Downloads selenium/node-chrome
While the former works fine (it only runs the container), the latter throws the error:
docker: Error response from daemon: Drive has not been shared.
Docker for Windows (and Mac) requires you to share drives to be able to volume mount - https://docs.docker.com/docker-for-windows/ (under Shared Drives).
You should be able to find it under your Docker Settings > Shared Drives. Ensure your C:\ is selected and restart the daemon. After that, you can run:
docker run -d --link selenium-hub:hub -v C:/Users/Public:/home/seluser/Downloads selenium/node-chrome
Based on the documentation:
https://github.com/SeleniumHQ/docker-selenium
this path does not exist in the container, and it is a Linux container:
"C:\Users\Public\Documents\TMP_DOCKERS\firefox selenium/standalone-firefo"