In my ~/.ssh/config I added the following:
Include /Path/to/ssh.config
And it gives this error:
ssh remoteEc-2
/Users/Me/.ssh/config: line 1: Bad configuration option: include
/Users/Me/.ssh/config: terminating, 1 bad configuration options
ssh -V gives:
OpenSSH_6.9p1, LibreSSL 2.1.8
I am on OS X El Capitan.
Include is not a valid option until version 7.3...
See: https://www.openssh.com/txt/release-7.3
New Features
[...]
ssh(1): Add an Include directive for ssh_config(5) files.
Also, see this answer.
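For what it's worth, on 7.3 or later the directive also accepts globs, and relative paths are resolved against ~/.ssh, so something like this at the top of ~/.ssh/config works (the path is just an example):
Include config.d/*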
If you can't or don't want to update, then you could collate your configuration files using the following:
cat ${CONFIG_1} ${CONFIG_2} ${CONFIG_3} > ~/.ssh/config
You'd need to run it every time you update any of the parts...
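If you go that route, a tiny wrapper script saves retyping the command; a minimal sketch, with purely hypothetical fragment names:
#!/bin/sh
# Rebuild ~/.ssh/config from its fragments; re-run after editing any of them.
cat ~/.ssh/config.d/base ~/.ssh/config.d/work ~/.ssh/config.d/aws > ~/.ssh/config
chmod 600 ~/.ssh/config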
Same problem, except I'm on 7.4
Turns out, the Include directive needs to go into /etc/ssh/ssh_config, not /etc/ssh/sshd_config (note the d in the filename).
Wasn't obvious to me. Hope this saves whoever finds this some time.
I have a script that uses SCP to pull a file from a remote Linux host on AWS. After running the same code nightly for about 6 months without issue, it started failing today with protocol error: filename does not match request. I reproduced the issue on some simpler filenames below:
$ scp -i $IDENT $HOST_AND_DIR/"foobar" .
# the file is copied successfully
$ scp -i $IDENT $HOST_AND_DIR/"'foobar'" .
protocol error: filename does not match request
# used to work, I swear...
$ scp -i $IDENT $HOST_AND_DIR/"'foobarbaz'" .
scp: /home/user_redacted/foobarbaz: No such file or directory
# less surprising...
The reason for my single quotes was that I was grabbing a file with spaces in the name originally. To deal with the spaces, I had done $HOST_AND_DIR/"'foo bar'" for many months, but starting today, it would only accept $HOST_AND_DIR/"foo\ bar". So, my issue is fixed, but I'm still curious about what's going on.
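To make the before/after concrete (same placeholder variables as above):
$ scp -i $IDENT $HOST_AND_DIR/"'foo bar'" .
# quoted remotely; this is what used to work and is now rejected
$ scp -i $IDENT $HOST_AND_DIR/"foo\ bar" .
# escaped space; this is what the stricter client accepts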
I Googled the error message, but I don't see any real mentions of it, which surprises me.
Both hosts involved have OpenSSL 1.0.2g in the output of ssh -v localhost, and bash --version says GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu)
Any ideas?
I ended up having a look through the source code and found the commit where this error is thrown:
GitHub Commit
[...] remote->local directory copies satisfy the wildcard specified by the user.
This checking provides some protection against a malicious server
sending unexpected filenames, but it comes at a risk of rejecting
wanted files due to differences between client and server wildcard
expansion rules.
For this reason, this also adds a new -T flag to disable the check.
They added a new -T flag that skips this new check, so the old behaviour is still available. However, it's worth finding out why the filenames we're using get flagged in the first place.
In my case, I had [] characters in the filename that needed to be escaped using one of the options listed here. For example:
scp USERNAME@IP_ADDR:"/tmp/foo\[bar\].txt" /tmp
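Alternatively, if you trust the server, the -T flag added in that commit disables the client-side check; the brackets are then left unescaped, so the remote shell treats them as a glob (which is fine as long as no other remote file happens to match it):
scp -T USERNAME@IP_ADDR:"/tmp/foo[bar].txt" /tmp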
When I try to run the command ScriptAlias, I always get the error:
ScriptAlias: command not found
I have made sure that the alias module is enabled via a2enmod alias, and I have run apt-get update a few times as well. Does anyone know what could be causing this?
I realized that I did not understand how the module worked: I was trying to use the directive directly on the command line. I needed to put it in the 000-default.conf file instead.
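For reference, ScriptAlias is an Apache configuration directive rather than a shell command, so it belongs in the site configuration; a minimal sketch of what the line might look like inside the <VirtualHost> block of /etc/apache2/sites-available/000-default.conf (the paths are just examples):
ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
After editing, reload Apache (e.g. sudo systemctl reload apache2) for the change to take effect.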
I am using Puppet Enterprise 3.7.2, and on one of my nodes I created the file:
[root@vii-osc4-mgmt-001 ~]# cat /etc/profile.d/POD_prefix.sh
export FACTER_pod_prefix=vii-osc4
Then I rebooted that node and logged back in and verified that
the FACTER_pod_prefix gets set and facter pod_prefix outputs the
expected value.
[root@vii-osc4-mgmt-001 ~]# env | grep FACTER_pod_prefix
FACTER_pod_prefix=vii-osc4
[root@vii-osc4-mgmt-001 ~]# facter pod_prefix
vii-osc4
On my PE 3.7 Puppet master I created /var/lib/hiera/vii-osc4.yaml from the /var/lib/hiera/defaults.yaml
file that I had been using, like so:
# cp /var/lib/hiera/defaults.yaml /var/lib/hiera/vii-osc4.yaml
This file has a bunch of class parameter values. For example there is this
line in the file:
controller_vip_name: vii-osc4.example.com
Then I changed my hiera.yaml file to look like this:
[root@osc4-ppt-001 ~]# cat /etc/puppetlabs/puppet/hiera.yaml
---
:backends:
- yaml
:hierarchy:
- "%{pod_prefix}"
- defaults
- "%{clientcert}"
- "%{environment}"
- global
:yaml:
# datadir is empty here, so hiera uses its defaults:
# - /var/lib/hiera on *nix
# - %CommonAppData%\PuppetLabs\hiera\var on Windows
# When specifying a datadir, make sure the directory exists.
  :datadir:
Then I restarted my pe-httpd service like so (RHEL7):
# systemctl restart pe-httpd
Then I made a small change to /var/lib/hiera/vii-osc4.yaml; for example, I changed the line ...
controller_vip_name: vii-osc4.example.com
... to ...
controller_vip_name: VII-osc4.example.com
But when I run puppet agent -t --noop on my node, vii-osc4-mgmt-001, I do not see the change
that I expected to see. If I make the change in the /var/lib/hiera/defaults.yaml and then
run puppet agent -t --noop on my node I do see the expected changes. What am I doing wrong here?
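(One way to check how the hierarchy resolves, without an agent run, is the hiera command-line tool on the master; depending on the setup it may need -c to point at the right hiera.yaml:
hiera -c /etc/puppetlabs/puppet/hiera.yaml controller_vip_name pod_prefix=vii-osc4
If that prints the value from vii-osc4.yaml, the hierarchy itself resolves and the problem lies elsewhere.)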
UPDATE: using /etc/facter/facts.d method of setting custom facts.
I looked into using /etc/facter/facts.d for what I am trying to do. What I am trying to do is set a custom fact "pod_prefix". I want to use this fact in my hiera.yaml like so ...
---
:backends:
- yaml
:hierarchy:
- "%{::pod_prefix}"
- defaults
- "%{clientcert}"
- "%{environment}"
- global
:yaml:
# datadir is empty here, so hiera uses its defaults:
# - /var/lib/hiera on *nix
# - %CommonAppData%\PuppetLabs\hiera\var on Windows
# When specifying a datadir, make sure the directory exists.
  :datadir:
... so that nodes that have pod_prefix set to vii-osc4 will obtain their class parameters from the file /var/lib/hiera/vii-osc4.yaml, and hosts that have pod_prefix set to ix-xyz will get their class params from /var/lib/hiera/ix-xyz.yaml. I do not see how creating the file /etc/facter/facts.d/pod_prefix.txt on my puppet master that contains something like this ...
# cat pod_prefix.txt
pod_prefix=vii-osc4
... could possibly be a solution to my problem. I guess I must be misunderstanding something here. Can someone help?
UPDATE 2.
The /etc/facter/facts.d/pod_prefix.txt file goes on my nodes.
I think my biggest problem was that just executing systemctl restart pe-httpd was not sufficient; things didn't start working until I did a full reboot of my puppet master. I need to go look at the docs and figure out the correct way to restart the puppet master.
The very approach of managing custom facts through environment variables is quite brittle. In this case, I suspect it does not work because you changed the environment of login shells via /etc/profile.d. System services don't run in such shells, though.
A clean approach would be to define your fact value in /etc/facter/facts.d instead.
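A minimal sketch of that approach, run on the node itself and using the fact name from the question:
mkdir -p /etc/facter/facts.d
echo 'pod_prefix=vii-osc4' > /etc/facter/facts.d/pod_prefix.txt
facter pod_prefix   # prints vii-osc4, with no /etc/profile.d involvement
External facts in key=value .txt files under /etc/facter/facts.d are picked up by Facter no matter how the process was started, which avoids the login-shell problem entirely.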
My httpd.conf file contains the following configuration:
SSLPassPhraseDialog builtin
#SSLPassPhraseDialog exec:/root/passphrase.sh
When the line enabling automatic reading of the passphrase is commented out, it works fine.
But when I change it to:
#SSLPassPhraseDialog builtin
SSLPassPhraseDialog exec:/root/passphrase.sh
it fails, with just a generic "failed" message.
Contents of the passphrase file:
#!/bin/bash
echo "xyz123"
Check the permissions of the passphrase file.
Possible causes (quick checks for each are sketched below):
Sometimes users put the passphrase script under the /etc/httpd/conf.d directory. Don't do that.
Sometimes the script needs execute permission (chmod +x).
Maybe /bin/bash doesn't work on the current Linux distro; then you have to use /bin/sh instead.
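A quick way to rule those causes out, using the path from the question:
chmod 700 /root/passphrase.sh    # make it executable (and readable only by root)
ls -l /root/passphrase.sh        # confirm owner and mode
/root/passphrase.sh              # run it by hand: it should just print xyz123
head -n 1 /root/passphrase.sh    # check that the shebang interpreter actually exists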
I've narrowed down this question - regarding a MySQL over SSH connection only working once - to a conflicting line in my host computer's known_hosts file.
Essentially, I cannot get into my database GUI of choice because the key is different for the same IP address (after re-provisioning, reloading, etc.).
Once I delete any offending lines, I can get in just fine.
So, through Vagrant's shell command (that I'm provisioning with) how can I modify the host machine's ~/.ssh/known_hosts file?
EDIT:
I found a temporary fix that involves adding/creating a ~/.ssh/config file (this assumes a private IP address):
Host 192.168.*.*
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
This should let you in. It's not a real fix, as disabling host key checking can be a security concern; look below for a much better answer.
Sorry for taking you away from what you need!
Changing HOST machine files from the Vagrantfile:
What you actually want is very simple. The Vagrantfile is interpreted by Vagrant each time you run a vagrant command. It is regular Ruby code, so if you want to change a file on the host, all you need to do is put Ruby code that performs the change into the Vagrantfile. Here's the example code I've put at the end of my Vagrantfile:
require 'tempfile'
require 'fileutils'

# File.open does not expand '~', so expand the path explicitly
path = File.expand_path('~/.ssh/known_hosts')
temp_file = Tempfile.new('known_hosts')
begin
  # copy every line except the ones matching the regex into the temp file
  File.open(path, 'r') do |file|
    file.each_line do |line|
      if line !~ /REGEX_OF_LINE_YOU_WANT_TO_EXCLUDE/
        temp_file.puts line
      end
    end
  end
  temp_file.flush
  # replace known_hosts with the filtered copy
  FileUtils.mv(temp_file.path, path)
ensure
  temp_file.close
  temp_file.unlink
end
Note: edit the code above and put your own value in place of REGEX_OF_LINE_YOU_WANT_TO_EXCLUDE.
Hope I at least partially fixed my mistake by providing this answer :)
Original Answer
For anyone's (mine as well) future reference, I'm leaving part of the answer that refers to changing GUEST OS files or copying files to GUEST OS:
Vagrant provides a couple of provisioners.
Solution number 1: For a simple file copy you may use the Vagrant file provisioner. The following code in your Vagrantfile would copy the file ~/known_hosts.template from your host system to the VM's /home/vagrant/.ssh/known_hosts:
# Context -------------------------------------------------------
Vagrant.configure('2') do |config|
  # ...
  # This is the section to add ----------------------------------
  config.vm.provision :file do |file|
    file.source = '~/known_hosts.template'
    file.destination = '/home/vagrant/.ssh/known_hosts'
  end
  #--------------------------------------------------------------
end
The file provisioner is poorly documented on the Vagrant site, and we've got to thank @tmatilai, who answered a similar question on Server Fault.
Keep in mind that you should use absolute paths in the destination field, and that the copying is performed by the vagrant user, so the file will have vagrant's owner:group.
Solution number 2: If you need to copy a file with root privileges, or really have to change the file without using templates, consider using the well-documented shell provisioner. File copying in this case works only if the file is placed in a folder visible from within the VM (guest OS), but you have all the power of the shell; a sketch of such a script follows below.
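For instance, the script the shell provisioner runs inside the guest could be as simple as this sketch (it assumes the default /vagrant synced folder and example paths):
#!/bin/sh
# runs inside the guest with root privileges via the shell provisioner
cp /vagrant/known_hosts.template /root/.ssh/known_hosts
chown root:root /root/.ssh/known_hosts
chmod 600 /root/.ssh/known_hosts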
Solution number 3: Though it would be overkill in this case, you might use the very powerful Chef or Puppet as provisioners and perform the action via one of those frameworks. I know nothing about Puppet and can speak only about Chef. The cookbook would be very simple: create a template file (.erb) with the desired content, and your recipe just places the file where necessary. Of course you'll need a box with the Chef packages in it.
I use plain ssh to enter my machines in order to do provisioning:
$ ssh-add ~/.vagrant.d/insecure_private_key
With this setup the known_hosts file is bound to give problems, but I do not want to turn off host key checking, as I use that also for external hosts. Given that my hosts' names include the pattern foo, I did this in the shell:
$ sed -i '' '/foo/d' ~/.ssh/known_hosts
Remove the empty '' argument after -i if you have a GNU/Linux host instead of BSD/macOS.
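For example, on a GNU/Linux host that becomes:
$ sed -i '/foo/d' ~/.ssh/known_hosts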
You can then install the vagrant trigger plugin:
$ vagrant plugin install vagrant-triggers
And add the above snippet to the Vagrantfile (mind the backticks):
config.trigger.after :destroy do
  puts "Removing known host entries"
  `sed -i '' '/foo/d' ~/.ssh/known_hosts`
end
This is what I do:
I define the IP_ADDRESS and DOMAIN_NAME variables at the top of the Vagrantfile.
Then inside Vagrant.configure I add:
config.trigger.after :destroy do |trigger|
  trigger.info = "Removing known_hosts entries"
  trigger.run = {inline: "ssh-keygen -R #{IP_ADDRESS}"}
end
# trigger.run holds a single command, so a second trigger handles the other entry
config.trigger.after :destroy do |trigger|
  trigger.run = {inline: "ssh-keygen -R #{DOMAIN_NAME}"}
end