openshift create app asks for password - openshift-origin

Hi all. When I try 'rhc create-app demo python-2.7', I run into an issue where it is not able to check out the git repo. The system asks for the password of the cartridge or something, but in fact I have uploaded the default key from the OpenShift console.
Here is what I have done:
install OpenShift from Puppet
oo-diagnostics check passes
create app
Then I removed the default files in /root/.ssh, removed the key file from the OpenShift console, recreated the SSH key, and ran rhc setup again to upload the key. Then I created the app again, but it failed again.
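I guess the next thing I could check is whether the key the broker has actually matches the one on disk (assuming the default key path ~/.ssh/id_rsa.pub):
rhc sshkey list
ssh-keygen -lf ~/.ssh/id_rsa.pub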

In the Broker Virtual Machine, while running - oo-register-dns -h node -d domainX.example.com XXX.XXX.XXX.XXX -k /var/named/domainX.example.com.key,
the XXX.XXX.XXX.XXX should be your Node Virtual Machine's IP address (I think most probably you have used the Broker's IP address). Change it accordingly and run this command again.
It will work.
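In other words, keep exactly the same syntax as above and only swap in the node's address (here <NODE_IP> is a placeholder for whatever IP your Node Virtual Machine actually has):
oo-register-dns -h node -d domainX.example.com <NODE_IP> -k /var/named/domainX.example.com.key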

Can you try with a different (main) domain name instead of example.com? I think this might be the issue, as per the Wikipedia explanation:
Example.com, example.net, example.org, and example.edu are second-level domain names reserved for documentation purposes and examples of the use of domain names.
Even if you've masked it with your hosts file or local DNS, it still might be confusing OpenShift's DNS.

Jenkins won't use SSH key

I'm sorry to have to ask this question, but I feel like I've tried every answer so far on SO with no luck.
I have my local machine and my remote server. Jenkins is up and running on my server.
If I open up a terminal and do something like scp /path/to/file user@server:/path/to/wherever then my SSH works fine without requiring a password
If I run this command inside of my Jenkins job I get 'Host Key Verification Failed'
So I know my SSH is working correctly the way I want, but why can't I get Jenkins to use this SSH key?
Interesting thing is, it did work fine when I first set up Jenkins and the key, then I think I restarted my local machine, or restarted Jenkins, then it stopped working. It's hard to say exactly what caused it.
I've also tried several options regarding ssh-agent and ssh-add but those don't seem to work.
I verified the local machine's .pub key is on the server in the /user/.ssh folder and is also in the authorized_keys file. The folder is owned by user.
Any thoughts would be much appreciated and I can provide more info about my problem. Thanks!
Update:
Per Kenster's suggestion I did su - jenkins, then ssh server, and it asked me to add to known hosts. So I thought this was a step in the right direction. But the same problem persisted afterward.
Something I did not notice before: I can ssh server without a password when using my myUsername account. But if I switch to the jenkins user, it asks me for my password when I do ssh server.
I also tried ssh-keygen -R server as suggested to no avail.
Try
su jenkins
ssh-keyscan YOUR-HOSTNAME >> ~/.ssh/known_hosts
The SSH Slaves Plugin doesn't support ECDSA. The command above should add the RSA key for the ssh slave.
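If ssh-keyscan returns several key types on your system and you only want the RSA one, you can restrict it; this is just a hedged variation of the command above:
ssh-keyscan -t rsa YOUR-HOSTNAME >> ~/.ssh/known_hosts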
Host Key Verification Failed
ssh is complaining about the remote host key, not the local key that you're trying to use for authentication.
Every SSH server has a host key which is used to identify the server to the client. This helps prevent clients from connecting to servers which are impersonating the intended server. The first time you use ssh to connect to a particular host, ssh will normally prompt you to accept the remote host's host key, then store the key locally so that ssh will recognize the key in the future. The widely used OpenSSH ssh program stores known host keys in a file .ssh/known_hosts within each user's home directory.
In this case, one of two things is happening:
The user ID that Jenkins is using to run these jobs has never connected to this particular remote host before, and doesn't have the remote host's host key in its known_hosts file.
The remote host key has changed for some reason, and it no longer matches the key which is stored in the Jenkins user's known_hosts file.
You need to update the known_hosts file for the user that Jenkins is using to run these ssh operations. You need to remove any old host key for this host from the file, then add the host's new host key to the file. The simplest way is to use su or sudo to become the Jenkins user, then run ssh interactively to connect to the remote server:
$ ssh server
If ssh prompts you to accept a host key, say yes, and you're done. You don't even have to finish logging in. If it prints a big scary warning that the host key has changed, run this to remove the existing host from known_hosts:
$ ssh-keygen -R server
Then rerun the ssh command.
One thing to be aware of: you can't use a passphrase when you generate a key that you're going to use with Jenkins, because Jenkins gives you no opportunity to enter one (it runs automated jobs with no human intervention).
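A minimal sketch of generating such a key as the Jenkins user with an empty passphrase (-N "" is what makes it passphrase-less; the key path is just the OpenSSH default):
su - jenkins
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa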

Docker: What is the simplest way to secure a private registry?

Our Docker images ship closed source code, so we need to store them somewhere safe, using our own private Docker registry.
We are looking for the simplest way to deploy a private Docker registry with a simple authentication layer.
I found:
this manual way: http://www.activestate.com/blog/2014/01/deploying-your-own-private-docker-registry
and the shipyard/docker-private-registry Docker image, based on stackbrew/registry and adding basic auth via Nginx - https://github.com/shipyard/docker-private-registry
I am thinking of using shipyard/docker-private-registry, but is there another, better way?
I'm still learning how to run and use Docker, consider this an idea:
# Run the registry on the server, allow only localhost connection
docker run -p 127.0.0.1:5000:5000 registry
# On the client, setup ssh tunneling
ssh -N -L 5000:localhost:5000 user@server
The registry is then accessible at localhost:5000, authentication is done through ssh that you probably already know and use.
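As a usage sketch (the image name myimage is just a placeholder, and this assumes the tunnel above is running), you would tag and push through the tunnel like this:
docker tag myimage localhost:5000/myimage
docker push localhost:5000/myimage
docker pull localhost:5000/myimage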
Sources:
https://blog.codecentric.de/en/2014/02/docker-registry-run-private-docker-image-repository/
https://docs.docker.com/userguide/dockerlinks/
You can also use an Nginx front-end with a Basic Auth and an SSL certificate.
Regarding the SSL certificate, I spent a couple of hours trying to get a working self-signed certificate, but Docker wasn't able to work with the registry. To solve this I used a free signed certificate, which works perfectly. (I used StartSSL, but there are others.)
Also be careful when generating the certificate. If you want the registry running at the URL registry.damienroch.com, you must generate the certificate for that URL, sub-domain included, otherwise it's not going to work.
You can perform all this setup using Docker and my nginx-proxy image (See the README on Github: https://github.com/zedtux/nginx-proxy).
This means that if you have installed nginx using the distribution package manager, you will replace it with a containerised nginx.
Place your certificate (.crt and .key files) on your server in a folder (I'm using /etc/docker/nginx/ssl/ and the certificate names are private-registry.crt and private-registry.key)
Generate a .htpasswd file and upload it to your server (I'm using /etc/docker/nginx/htpasswd/ and the filename is accounts.htpasswd; a sketch of generating this file is shown after the login example below)
Create a folder where the images will be stored (I'm using /etc/docker/registry/)
Run my nginx-proxy image using docker run
Run the Docker registry with some environment variables that nginx-proxy will use to configure itself.
Here is an example of the commands to run for the previous steps:
sudo docker run -d --name nginx -p 80:80 -p 443:443 -v /etc/docker/nginx/ssl/:/etc/nginx/ssl/ -v /var/run/docker.sock:/tmp/docker.sock -v /etc/docker/nginx/htpasswd/:/etc/nginx/htpasswd/ zedtux/nginx-proxy:latest
sudo docker run -d --name registry -e VIRTUAL_HOST=registry.damienroch.com -e MAX_UPLOAD_SIZE=0 -e SSL_FILENAME=private-registry -e HTPASSWD_FILENAME=accounts -e DOCKER_REGISTRY=true -v /etc/docker/registry/data/:/tmp/registry registry
The first line starts nginx and the second one the registry. It's important to do it in this order.
When both are up and running you should be able to login with:
docker login https://registry.damienroch.com
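For step 2 above (the .htpasswd file), a minimal sketch using the htpasswd tool from apache2-utils; the username someuser is just a placeholder, and you will be prompted for the password:
htpasswd -c /etc/docker/nginx/htpasswd/accounts.htpasswd someuser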
I have created an almost-ready-to-use, but certainly ready-to-function, setup for running a docker-registry: https://github.com/kwk/docker-registry-setup .
Maybe it helps.
Everything (registry, auth server, and LDAP server) runs in containers, which makes the parts replaceable as soon as you're ready. The setup is fully configured to make it easy to get started. There are even demo certificates for HTTPS, but they should be replaced at some point.
If you don't want LDAP authentication but simple static authentication you can disable it in auth/config/config.yml and put in your own combination of usernames and hashed passwords.

unable to connect to the aws ec2 instance, Host key verification failed

I had set up an Ubuntu instance with the Rails package and also deployed my app; it is working fine.
But when I try to SSH in, it does not allow me to log in remotely and throws errors like host key verification failed.
The problem seems to persist; kindly recommend a solution. I have attached an Elastic IP to the instance and I am not able to see the public DNS; my instance is running in the Singapore region.
You may need to turn off StrictHostKeyChecking by adding this option to the ssh command line:
-o StrictHostKeyChecking=no
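For example, a full command might look like this (the key file and address are placeholders; ubuntu is the default user on Ubuntu AMIs):
ssh -o StrictHostKeyChecking=no -i /path/to/key.pem ubuntu@<instance-public-ip>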
As answered in more detail in your cross posted question on ServerFault: https://serverfault.com/questions/342228/unable-to-connect-to-the-aws-ec2-instance-host-key-verification-failed/342696#342696
Basically, your EC2 Elastic IP has previously been used with another server instance, and your SSH client's known_hosts file does not match the new host key for this IP. Remove the offending line from the known_hosts file. (More detail in the Server Fault answer.)
You need to log in to your instance with the private key that you set it up to use.
Depending on your instance, the user might vary:
ssh -i [private key file] [user]@[host]
Where user could be one of the following in my experience (or possibly others)
root
ec2-user
ec2user
bitnami
ubuntu

gitolite with non-default port

To clone a repository managed by gitolite, one usually uses the following syntax:
git clone gitolite@server:repository
This tells the SSH client to connect to port 22 of server, using gitolite as the user name. When I try it with the port number:
git clone gitolite@server:22:repository
Git complains that the repository 22:repository is not available. What syntax should be used if the SSH server uses a different port?
The “SCP style” Git URL syntax (user@server:path) does not support including a port. To include a port, you must use an ssh:// “Git URL”. For example:
ssh://gitolite@server:2222/repository
Note: compared to gitolite@server:repository, this presents a slightly different repository path to the remote end (the absolute path /repository instead of the relative path repository); Gitolite accepts both types of paths, but other systems may vary.
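If you have already cloned with the SCP-style URL and just want to switch the existing remote over, something like this should work (assuming the remote is named origin):
git remote set-url origin ssh://gitolite@server:2222/repository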
An alternative is to use a Host entry in your ~/.ssh/config (see your ssh_config(5) manpage). With such an entry, you can create an “SSH host nickname” that incorporates the server name/address, the remote user name, and the non-default port number (as well as any other SSH options you might like):
Host gitolite
User gitolite
HostName server
Port 2222
Then you can use very simple Git URLs like gitolite:repository.
If you have to document (and or configure) this for multiple people, I would go with ssh:// URLs, since there is no extra configuration involved.
If this is just for you (especially if you might end up accessing multiple repositories from the same server), it might be nice to have the SSH host nickname to save some typing.
It is explained in great detail here: https://github.com/sitaramc/gitolite/blob/pu/doc/ssh-troubleshooting.mkd#_appendix_4_host_aliases
Using a "host" para in ~/.ssh/config lets you nicely encapsulate all this within ssh and give it a short, easy-to-remember, name. Example:
host gitolite
user git
hostname a.long.server.name.or.annoying.IP.address
port 22
identityfile ~/.ssh/id_rsa
Now you can simply use the one word gitolite (which is the host alias we defined here) and ssh will infer all those details defined under it -- just say ssh gitolite and git clone gitolite:reponame and things will work.

A way to specify a different host in an SSH tunnel from the host in use

I am trying to setup an SSH tunnel to access Beanstalk (to bypass an annoying proxy server).
I can get this to work, but with one caveat: I have to map my Beanstalk host URL (username.svn.beanstalkapp.com) in my hosts file to 127.0.0.1 (and use the ip in place of the domain when setting up the tunnel).
The reason (I think) is that I am creating the tunnel using the local SSH instance (on Snow Leopard), and if I use localhost or 127.0.0.1 when talking to Beanstalk, it rejects the authorisation credentials. I believe this is because Beanstalk uses the hostname specified in a request to determine which account the username/password combination should be checked against. If localhost is used, I think this information is missing (in some manner which Beanstalk requires) from the requests.
At the moment I dig the IP for username.svn.beanstalkapp.com, map username.svn.beanstalkapp.com to 127.0.0.1 in my hosts file, then for the tunnel I use the command:
ssh -L 8080:ip:443 -p 22 -l tom -N 127.0.0.1
I can then tell Subversion that the repo is located at:
https://username.svn.beanstalkapp.com:8080/repo-name
This uses my tunnel and the username and password are accepted.
So, my question is if there is an option when setting up the SSH tunnel which would mean I wouldn't have to use my hosts file workaround?
I would add an entry to your hosts file that maps 127.0.0.1 to the hostname you need and then use the hostname to connect to your tunnel.
Update
The hosts file is IMO your best option.
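For reference, the entry being suggested (the same mapping used as a workaround in the question) would look like this in /etc/hosts:
127.0.0.1   username.svn.beanstalkapp.com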