After several days of hammering at this, I have a working CentOS 6.3 system bound to an AD domain running Windows Server 2008 R2. My method is sssd-based PAM using Kerberos authentication. Directory information is accessed on the domain controller via LDAP, and the LDAP bind is also Kerberized.
On my client (Mac OS X 10.8) I am able to ssh into the CentOS system with all the pieces seemingly clicking into place. The Mac gets a ticket, GSSAPI key exchange takes place, and gssapi-keyex authentication follows. So the setup is working, but I'm having an issue with slow logins -- about 10 seconds from start to finish. My experience is that Kerberized ssh should be near-instantaneous, so something is still not right.
I've monitored the communication between CentOS and the DC using tcpdump, and CentOS gets an immediate response to anything it requests from the DC. The hang actually occurs before it tries to contact the DC at all; it looks like the GSSAPI key exchange itself is what is slow. If I watch the ssh connection in debug mode, the two points where it hangs are
debug1: SSH2_MSG_KEXINIT sent
and
debug1: Doing group exchange
Once it gets to authentication method: gssapi-keyex it flies through. Does anybody have any ideas as to what would cause the key exchange to run slowly? Could something be wrong on my client? On the Mac my ~/.ssh/config file is set as follows:
GSSAPIAuthentication yes
GSSAPIKeyExchange yes
GSSAPIDelegateCredentials yes
GSSAPITrustDNS yes
GSSAPIClientIdentity username@MYDOMAIN.COM
I figured it out: Kerberos makes many, many DNS calls, and you must have a caching DNS server installed on the CentOS host if you want practical login speeds. Simply install and set up BIND and you'll be good to go.
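A minimal sketch of that fix, assuming a stock CentOS 6 host (the bind package ships a caching-only configuration by default, so installing and starting named is usually enough):

```shell
# Install BIND and start it as a local caching-only resolver (CentOS 6 style):
yum install -y bind bind-utils
chkconfig named on
service named start

# Then point the system resolver at the local cache first, e.g. in
# /etc/resolv.conf (the second nameserver is a fallback -- substitute your
# DC's real address for the placeholder below):
#   nameserver 127.0.0.1
#   nameserver 192.168.1.10
```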
Vulnerability scan shows that my server (server1821) is currently vulnerable to TLS ROBOT
Server is AIX.
How do I check for this vulnerability, and how do I fix it?
I checked with my vendor and got this reply:
Does the scan report which ports are vulnerable? Those applications using TLS protocol with RSA ciphers need to be altered so they no longer use RSA. We need to do this at the application level.
Not sure about this suggestion.
The TLS ROBOT advisory site (https://robotattack.org/) doesn't have any answers with respect to AIX.
A simple check shows this:
serverl1821 2 % cat /etc/ssh/sshd_config |grep -i rsa
#HostKey /etc/ssh/ssh_host_rsa_key
serverl1821 3 %
Can anyone help me here?
Your application vendor is absolutely correct. First of all, ask your security team where they found the vulnerability -- not only the server name, but also the port.
The problem may then be in one of the following components:
OpenSSH
OpenSSL
IBM GSKit
Java
Each of these components requires different tuning to disable RSA ciphers.
To make it more complex, every application can ship with its own SSL/TLS library and its own set of settings.
The vulnerability may have nothing to do with ssh at all. You should update the GSKit package; this is the package that implements SSL/TLS on AIX. And do not forget to restart the web/application server afterwards.
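ROBOT only affects cipher suites that use RSA key transport, so one quick check is to see whether the flagged port still accepts an RSA key exchange. A sketch, assuming the openssl command-line tool is available on a machine that can reach the server; the host and port are examples and should come from your scan report:

```shell
# List the cipher suites that use RSA key transport -- the ones ROBOT targets:
openssl ciphers 'kRSA'

# Probe the flagged host/port; if this handshake succeeds, the server still
# accepts RSA key exchange and remains in scope for ROBOT:
#   openssl s_client -connect server1821:443 -cipher 'kRSA' < /dev/null
```

If the s_client handshake fails for every kRSA suite, that port is not exploitable via ROBOT regardless of which library sits behind it.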
I am trying to better understand how ssh does host authentication. I am ssh'ing from a MacBook Pro (macOS 10.14.6) to several CentOS 8.1 servers. There are several files in /etc/ssh/ on the remote CentOS servers that are used for host authentication (e.g. ssh_host_ed25519_key.pub, ssh_host_dsa_key.pub, ssh_host_rsa_key.pub).
If I look at my macbook's local ~/.ssh/known_hosts, I see entries that use ssh-rsa which corresponds to /etc/ssh/ssh_host_rsa_key.pub. I also see entries for ecdsa-sha2-nistp256 which correspond to /etc/ssh/ssh_host_ecdsa_key.pub.
Question :
When I ssh into my remote server, is there a way for me to force ssh to use a particular algorithm for the host authentication or is this something that I'll have to change by hand in known_hosts? E.g. force it to use ssh_host_ecdsa_key.pub instead of ssh_host_rsa_key.pub.
How does ssh by default decide which algorithm to use for host authentication?
You can use the -o flag to specify options for SSH. One of these options is HostKeyAlgorithms, which controls which host-key algorithms your client offers; see https://man.openbsd.org/ssh.
If you run ssh with the -vv flag you can see the offer made by your client. The server then chooses the first algorithm offered by the client that it supports. I would guess that your different servers support different algorithms, which is why you see both ssh-rsa and ecdsa-sha2-nistp256 entries in known_hosts.
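Concretely, for the first question: you can pin the host-key algorithm per connection or per host without hand-editing known_hosts. A sketch (the host name below is a placeholder):

```shell
# Force ECDSA host authentication for one connection (placeholder host):
#   ssh -o HostKeyAlgorithms=ecdsa-sha2-nistp256 user@server.example.com
#
# Or pin it per host in ~/.ssh/config:
#   Host server.example.com
#       HostKeyAlgorithms ecdsa-sha2-nistp256

# Inspect what the client would offer, without actually connecting
# (-G only prints the resolved client configuration):
ssh -G -o HostKeyAlgorithms=ecdsa-sha2-nistp256 server.example.com
```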
I was updating the ssh port of an Oracle Cloud Infrastructure machine
I changed /etc/ssh/sshd_config
The port was
#Port 22
I changed it to
Port 40531
Then I:
restarted the sshd service: systemctl restart sshd
opened the port in the OCI web console
However, now I cannot connect:
ssh -vvv -p 40531 -i ~/.ssh/vm.key opc@129.xxx.xxx.xxx
OpenSSH_8.2p1, OpenSSL 1.1.1e 17 Mar 2020
debug1: Reading configuration data /etc/ssh/ssh_config
debug2: resolve_canonicalize: hostname 129.xxx.xxx.xxx is address
debug2: ssh_connect_direct
debug1: Connecting to 129.xxx.xxx.xxx [129.xxx.xxx.xxx] port 40531.
debug1: connect to address 129.xxx.xxx.xxx port 40531: Connection timed out
ssh: connect to host 129.xxx.xxx.xxx port 40531: Connection timed out
I saw a Cloud Shell, but I'm not sure whether it can be used to connect to the machine to perform maintenance tasks.
Is there a way to connect to the VM from the OCI web interface to fix ssh issues?
I used to use a VPS service that had a web console from which you could log in to fix problems like this -- is there something similar in OCI?
Note:
SELinux was disabled on the machine.
If you are about to do this on your own machine, remember to update the SELinux configuration before restarting the sshd service or you will be locked out; another option is to disable SELinux entirely (this is what I did).
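If you would rather keep SELinux enforcing than disable it, you can label the new port for sshd instead. A sketch, assuming the semanage tool (from the policycoreutils-python package on Oracle Linux/CentOS) is installed, using the port from the question:

```shell
# Allow sshd to bind to the non-standard port under SELinux:
semanage port -a -t ssh_port_t -p tcp 40531

# Confirm the label is in place, then restart sshd:
semanage port -l | grep ssh_port_t
systemctl restart sshd
```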
The changes described above worked well. The only thing causing issues on my side (I didn't know why at first) was that I was connected through a VPN.
After I disconnected from the VPN and tried to connect again, it worked.
Update:
I figured out why ssh on a different port was not working. The VPN I use is a corporate VPN with very strict inbound and outbound rules, and its outbound rules blocked TCP to port 40xxx.
Update:
If you are struggling with a VM, you can connect using the instructions below.
Creating the Instance Console Connection
Before you can connect to the serial console or VNC console, you need to create the instance console connection.
To create the console connection for an instance
Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
Click the instance that you're interested in.
Under Resources, click Console Connection.
Click Create Console Connection.
Upload the public key (.pub) portion for the SSH key. You can browse to a public key file on your computer or paste your public key into the text box.
Click Create Console Connection.
When the console connection has been created and is available, the state changes to Active.
Thanks to @bmuthuv for the info.
You can connect to the Serial Console of the VM, where you can get access to the GRUB menu during a reboot operation. From GRUB you can use the usual Linux recovery steps to get to a shell, and then undo whatever change locked you out.
The Serial Console connection can be created in the OCI web console, on the instance's page.
I am attempting to create an SSH server (using Paramiko, but that's mostly irrelevant here). I am wondering if there is any way to determine the hostname that the SSH client requested. For example, if I were to connect with
ssh user@example.com
but I also had a CNAME record that pointed to the same server so I could also connect with
ssh user@foo.com
then I would like the server to know in the first case the user requested example.com and in the second, foo.com.
I have been reading through SSH protocol documents like:
https://www.rfc-editor.org/rfc/rfc4253
https://www.rfc-editor.org/rfc/rfc4252
But cannot find out if there is a way to do this.
In general, the ssh protocol does not support this. It's possible that a given ssh client may send an environment variable that gives you a hint, but that would happen after key exchange and user authentication, which would be far later than you'd want the information. It happens that if you were using Kerberos authentication via one of the ssh GSS-API mechanisms described in RFC 4462, you would get the hostname the user requested as part of the GSS exchange. That almost certainly doesn't help you, but it happens to be the only case I'm aware of where this information is sent.
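To illustrate the environment-variable hint mentioned above (and it is only a workaround: the value arrives after authentication, and the variable name here is made up for illustration), OpenSSH clients from 7.8 onwards support SetEnv, and the server has to opt in explicitly:

```shell
# Client side, in ~/.ssh/config (SSH_REQUESTED_HOST is a hypothetical name):
#   Host example.com
#       SetEnv SSH_REQUESTED_HOST=example.com
#   Host foo.com
#       SetEnv SSH_REQUESTED_HOST=foo.com
#
# Server side: the server must accept the variable, e.g. in sshd_config:
#   AcceptEnv SSH_REQUESTED_HOST
# In a Paramiko server you would read it from the client's env channel
# request instead.
```

This only works for clients you control, which is why dedicated IPs or ports remain the general answer.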
For ssh virtual hosting you're going to need to dedicate an IP address or port for each virtual host. Take a look at port sharing or IPv6 as possibilities for your application.
I have an odd issue, and have managed to replicate this problem at different locations with different installs of Squid.
I will describe the problem as it occurs on my Squid server at home.
It runs Fedora 20 (32-bit) with Squid 3.3.11; the firewall and iptables are uninstalled/disabled. The network is IPv4. I have a couple of Windows 7 machines with IE11, and one Windows 8.1 machine with IE11.
My problem is that on my Windows 8.1 machine (with IPv6 protocols turned off), trying to load SSL-based web pages (such as https://www.google.co.uk or https://www.facebook.com), the initial page load results in an error. Subsequent loads either fail, partly fail (i.e. the main body of the site loads, but further SSL connections such as image loads fail), or allow the page to load. Oddly enough, I don't recall having a problem with my bank's website, so some websites seem to struggle more than others.
A friend of mine also managed to replicate the fault on a Squid server he set up, using Windows 8.1. He commented that with another browser such as Firefox the problem goes away, so it seems limited to Windows 8.1 with IE11.
Using Wireshark during the failed attempts, I can see that towards the end my machine sends back a load of TCP RSTs.
However, loading the same websites on my Windows 7 machines using IE11, or Windows XP with IE8, the problem does not appear; until I moved to Windows 8.1 I had zero problems with my Squid server.
My Squid config is fairly basic, as I just use it for filtering adverts with a block list via SquidGuard, although an experiment ruled out SquidGuard as the problem when I removed the relevant line from squid.conf.
Thanks for reading and hope we can get to the bottom of this!
Copy of my squid config.
#squid normally listens to port 3128
http_port 3128
#Allow local machine
#acl manager proto cache_object
acl localhost src 192.168.20.6
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
http_access allow localhost
# Define Local network
acl localnet src 192.168.20.0/24
http_access allow localnet
#Redirect for SquidGuard
redirect_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf
# And finally deny all other access to this proxy
http_access deny all
I found it: disable the SPDY/3 protocol in IE11 (under Tools → Internet Options → Advanced).
It might be related to IE11 disabling RC4 and using TLS 1.2 in the initial handshake; see http://blogs.msdn.com/b/ie/archive/2013/11/12/ie11-automatically-makes-over-40-of-the-web-more-secure-while-making-sure-sites-continue-to-work.aspx. Unfortunately lots of hosts still require RC4 and may fail if the client does not offer this cipher. Other hosts choke on TLS 1.2 itself, e.g. they just close the connection (like bmwi.de currently) or silently drop the request (usually an older F5 BIG-IP load balancer in front).
I know from Chrome that it retries a failed TLS 1.2 request immediately with TLS 1.1. Maybe IE11 does this differently, e.g. it does not retry immediately but only remembers the failure and works around it when the user retries. Or it behaves this way only when used with a proxy.
That you sometimes get the full page and sometimes only the HTML on the second request might happen if the images are loaded from a different host name, e.g. images.whatever instead of www.whatever. In that case the first handshake with www.whatever fails; after a successful retry with downgraded SSL the browser receives the HTML, which includes content from images.whatever. It then accesses images.whatever, runs into the same SSL problems, and the page stays without images until another retry.
If my theory is correct, you should see in Wireshark, on the initial failed connect, an SSL Client Hello with version TLS 1.2 that does not have RC4 in the list of offered ciphers. On the second (successful) connect you should see RC4 in the cipher list of the Client Hello, and maybe it will also use TLS 1.1 or even TLS 1.0 instead of TLS 1.2.
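One way to check that theory from a capture on the command line (a sketch: the field names are 'ssl.*' in Wireshark versions of that era and 'tls.*' in current ones, and capture.pcap is a placeholder for your own trace):

```shell
# Print the destination, offered TLS version, and cipher suites from each
# Client Hello in the capture:
tshark -r capture.pcap -Y 'ssl.handshake.type == 1' \
    -T fields -e ip.dst -e ssl.handshake.version -e ssl.handshake.ciphersuite
```

Comparing the first (failed) and second (successful) Client Hello to the same destination should show whether the version or the cipher list changed.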