WebSEAL virtual junction not forwarding packets to server

I have configured a virtual host junction on WebSEAL with the following command:
pdadmin sec_master> server task default-webseald-abkale2l virtual host create -t tcp -h abkale2l -p 7002 -v abkale2l:7002 -f vmachine1
The WebSEAL server and the application server are on the same machine.
When I hit the URL https://abkale2l:7002 I expect WebSEAL to intercept the request and present an authentication challenge, but that is not happening.
Any help is appreciated.

Related

GitLab CI runner with SSH ProxyJump

I have the following settings in my /etc/ssh/ssh_config file:
Host serverA
User idA
Host serverB
User idB
ProxyJump serverA
I’ve also copied the public keys, so if I locally run ssh serverB I’m correctly connected to serverB as idB through serverA.
Now, here’s my runner configuration in /etc/gitlab-runner/config.toml:
[[runners]]
name = "ssh-runner-1"
url = "http://my-cicd-server"
token = "xxxxxxxxxxxxxxxx"
executor = "ssh"
[runners.custom_build_dir]
[runners.cache]
[runners.cache.s3]
[runners.cache.gcs]
[runners.cache.azure]
[runners.ssh]
user = "idB"
host = "serverB"
identity_file = "/home/gitlab-runner/.ssh/id_ed25519"
When I run a CI/CD job on this runner I get a "connection refused" error:
ERROR: Preparation failed: ssh command Connect() error: ssh Dial() error: dial tcp xxx.xxx.xxx.xxx:22: connect: connection refused
I conclude that the ProxyJump configuration is not being applied, and since the machine running the runner can't reach serverB directly, the connection is refused.
How can I configure the runner to apply the proxy jump configuration?
The GitLab runner uses a Go-based SSH client. It does not respect your system SSH configuration and does not offer the same configurability as the standard ssh client (usually OpenSSH) that you typically find installed in operating system distributions.
The Go client does not support the ProxyJump configuration.
Your best bet would probably be to configure a tunneled connection where your entrypoint does not require SSH configuration options that are not supported.
Local port forwarding
One way might be to open a local port-forwarding tunnel, then in your GitLab configuration, specify the host as localhost and port as the forwarded port.
For example:
Open the tunnel -- local port 2222 forwards to port 22 on ServerB via an SSH connection through ServerA
ssh -L 2222:ServerB:22 -N ServerA
Configure the runner to use the tunnel:
...
[runners.ssh]
host = "localhost"
port = 2222
...
With this approach, you may have to write some automation on your server to restore the tunnel connection in the event it is broken. How you might do this depends on your operating system and preferred service manager, or you can use a tool like autossh, as sketched below.
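For example, a rough autossh invocation might look like this (assuming key-based authentication to ServerA is already set up; the ports mirror the example above):
autossh -M 0 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -N -L 2222:ServerB:22 ServerA
autossh restarts the underlying ssh process whenever the keepalive checks detect that the tunnel has dropped.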
This is basically how the ProxyJump configuration works under the hood.
IP/Port forwarding system
A similar approach would be to have your jump system automatically forward connections to the desired destination. This might be something like a software firewall rule (e.g. using iptables routing rules). That way the forwarding occurs transparently. Then simply tell the runner to target ServerA and the traffic will be transparently moved to ServerB.
This approach is more reliable, since you won't have to do anything to keep the tunnel alive if it ever drops. However, the configuration is much more complex and requires a static IP for ServerB.
For example, on ServerA, assuming the IP of ServerB is 10.10.10.10 the following iptables configuration could be used:
iptables -t nat -A PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.10:22
iptables -t nat -A POSTROUTING -j MASQUERADE
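Note that for the DNAT rule to forward traffic to another host, kernel IP forwarding must also be enabled on ServerA (an assumption about its current configuration, since many distributions disable it by default):
sysctl -w net.ipv4.ip_forward=1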
Then the GitLab runner configuration:
...
[runners.ssh]
host = "ServerA"
port = 2222
...
Lastly, it may also be useful to know that disable_strict_host_key_checking is an undocumented configuration option for the runner as well, in the event you need this.
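If you do need it, it would presumably sit alongside the other [runners.ssh] keys, for example (a sketch; the option is undocumented, so treat the exact placement as an assumption):
...
[runners.ssh]
host = "ServerA"
port = 2222
disable_strict_host_key_checking = true
...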

Error: Connection reset by peer while connecting to ElastiCache using the stunnel method

I am using a single-node ElastiCache Redis shard (Redis 4.0 or later).
I enabled in-transit encryption and set a Redis AUTH token.
I created a bastion host with stunnel using this link:
https://aws.amazon.com/premiumsupport/knowledge-center/elasticache-connect-redis-node/
I am able to connect to the ElastiCache Redis node in the following way:
redis-cli -h hostname -p 6379 -a mypassword
and I can telnet to it as well.
BUT
when I run PING (expecting the response "PONG") in redis-cli after connecting, it returns
"Error: Connection reset by peer"
I checked the security groups on both sides.
Any idea?
The bastion host is an Ubuntu 16.04 machine.
As I mentioned in the question, I was running the command like this:
redis-cli -h hostname -p 6379 -a mypassword
The correct way to connect to an ElastiCache cluster through stunnel is to use "localhost" as the host address, like this:
redis-cli -h localhost -p 6379 -a mypassword
Here is the explanation for using the localhost address:
When you create a tunnel between your bastion server and the ElastiCache host through stunnel, the program starts a service that listens on a local TCP port (6379), encapsulates the communication using the SSL protocol, and transfers the data between the local server and the remote host.
You need to start stunnel, check that the service is listening on the localhost address (127.0.0.1), and connect using "localhost" as the destination address:
Start stunnel. (Make sure you have installed stunnel using this link https://aws.amazon.com/premiumsupport/knowledge-center/elasticache-connect-redis-node/)
$ sudo stunnel /etc/stunnel/redis-cli.conf
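For reference, a rough sketch of what /etc/stunnel/redis-cli.conf contains for this setup (the endpoint below is a placeholder; follow the AWS guide linked above for the exact settings):
fips = no
setuid = root
setgid = root
pid = /var/run/stunnel.pid
[redis-cli]
client = yes
accept = 127.0.0.1:6379
connect = <your-elasticache-endpoint>:6379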
Use the netstat command to confirm that the tunnels have started:
$ netstat -tulnp | grep -i stunnel
You can now use the redis-cli to connect to the encrypted Redis node using the local endpoint of the tunnel:
$ redis-cli -h localhost -p 6379 -a MySecretPassword
localhost:6379> set foo "bar"
OK
localhost:6379> get foo
"bar"
Most probably the ElastiCache Redis instance is using encryption in transit and encryption at rest, and by design the stock redis-cli is not compatible with that encryption.
You need to set up stunnel to connect to the Redis cluster:
https://datanextsolutions.com/blog/how-to-fix-redis-cli-error-connection-reset-by-peer/
"Error: Connection reset by peer" indicates that Redis is killing your connection without sending any response.
One possible cause is that you are trying to connect to the Redis node without using SSL, as your connection will get rejected by the Redis server without a response [1]. Make sure you are connecting through the correct port in your tunnel proxy. If you are connecting directly from the bastion host, you should be using localhost.
Another option is that you have incorrectly configured your stunnel to not include a version of SSL that is supported by Redis. You should double check the config file is exactly the same as the one provided in the support doc.
If that doesn't solve your problem, you can try building the CLI included in the AWS open-source contribution [2]. You'll need to check out the repository, follow the instructions in the README, and then build with make BUILD_SSL=yes to get an SSL-enabled redis-cli.
[1] https://github.com/madolson/redis/blob/unstable/src/ssl.c#L464
[2] https://github.com/madolson/redis/blob/unstable/SSL_README.md
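Roughly, the build steps would be (a sketch; see the linked SSL_README [2] for the authoritative instructions and any additional dependencies such as the OpenSSL headers):
git clone https://github.com/madolson/redis.git
cd redis
make BUILD_SSL=yes
# the SSL-enabled redis-cli binary ends up under src/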

SSH port forwarding in Docker

I have these two containers, say backend (CentOS) and mongo. What I would like is that from within the backend container I can connect to the mongo database as if it were running locally, $> mongo localhost:27017
Anyway, as far as I understand all this, you can map the port localhost:27017 to mongo:27017 like this:
$backend> ssh -L 27017:mongo:27017 root@mongo
However, if I do this I have to provide the root password, and after that it logs me into the mongo container and no port forwarding happens.
Background: I want to do this because I'm running a Java program which connects to a Mongo database on localhost and I cannot change that.
I found the correct SSH port-forwarding command:
$> ssh root@mongo -L 27017:localhost:27017 -Nf
Normally the idea with this command is that you map a non-public port, through a public server, to your own server/computer.
* `root@mongo` - the public server
* `-L <port on your server>:<third server address>:<port>` - the forwarding specification
* `-Nf` - do not run a remote command and go to the background (no login shell)
Because the public server and the third server are the same computer/container here, you have to use localhost :)
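Once the tunnel is up, the backend container can reach Mongo as if it were local, which is exactly what the Java program expects:
$backend> mongo localhost:27017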

Accessing a CentOS 7 (minimal) server running on VirtualBox from outside

Is it possible to access my Apache server from outside VirtualBox, in the Google Chrome browser? It's running on CentOS 7 in VirtualBox.
I have tried connecting to the IP address of the CentOS virtual machine but it didn't work. It's using 'Bridged Adapter' networking in the VM settings, and I checked the IP address using the 'ip addr' command. Thanks.
Of course you can, though you need to add a tunnel to allow access to your CentOS 7 machine's web service from the host machine.
For example, my VM's bridge IP address (the interface that connects to the world) is 192.168.1.38 and its interface is enp0s3. Let's say I'm running the web service on my second interface, enp0s8 with IP 192.168.100.101 on port 8000. Here's how you create the tunnel:
iptables -t nat -A PREROUTING -p tcp -i enp0s3 --dport 80 -j DNAT --to-destination 192.168.100.101:8000
service iptables save
That's it. You should be able to go to your host's Chrome browser and type in the url 192.168.1.38 and be presented with your web service. If it's still not working I'd suggest looking into your iptables rules to see if any is blocking this traffic.
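To check whether another rule is interfering, you can list the rules along with their packet counters (standard iptables inspection commands, nothing specific to this setup):
iptables -L -n -v
iptables -t nat -L -n -v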

SSH port forwarding not working for websocket

I'm running a web service on serverA:8890; this includes the regular HTTP service and the websocket services. I'm trying to set up SSH port forwarding from serverB to serverA, so I can access serverA's service through an SSH tunnel.
Here is my command:
ssh -f user@serverA -i user.pem -L 2000:serverB:8890 -N
When I connect to ServerB:2000, I can see all the regular web services, but the websocket part is not working.
Any idea how to solve this?
Thanks
I believe your tunnel needs to be:
ssh -f user@serverA -i user.pem -L 2000:serverA:8890 -N
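One quick way to verify the websocket part through the tunnel is a command-line client such as wscat (the URL path below is a placeholder for whatever endpoint your service actually exposes):
npm install -g wscat
wscat -c ws://serverB:2000/<your-websocket-path>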