Activate an SSH SOCKS5 proxy in a Terragrunt before_hook

I am trying to activate an SSH SOCKS5 proxy before applying an RDS Terraform stack.
To do this, I configure the ssh command in a Terragrunt before_hook block; example below:
before_hook "ssh_tunnel_start" {
  commands = ["init", "plan", "apply"]
  execute  = ["ssh", "-D", "3306", "-M", "-S", "/tmp/ssh-control-socket", "-fnNTC", "<bastion_host>"]
}
If I execute the ssh command manually in my terminal, it works as expected: ssh binds the local port and then detaches. But when executed from the Terragrunt hook, the proxy comes up, yet the ssh command does not detach, so the Terragrunt process cannot continue and gets stuck on the hook command.

I have found a working solution using screen; maybe there is a better one.
before_hook "ssh_tunnel_start" {
  commands = ["init", "plan", "apply"]
  execute  = ["screen", "-d", "-m", "ssh", "-D", "3306", "-M", "-S", "/tmp/ssh-control-socket", "-fnNTC", "<bastion_host>"]
}
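Since the command already sets up an SSH control socket (-M -S /tmp/ssh-control-socket), a matching after_hook can tear the tunnel down once Terragrunt finishes; a minimal sketch (the hook name is illustrative, and it assumes the session started above is still running):

after_hook "ssh_tunnel_stop" {
  commands     = ["init", "plan", "apply"]
  run_on_error = true
  # -O exit tells the master ssh process listening on the control socket to shut down
  execute      = ["ssh", "-S", "/tmp/ssh-control-socket", "-O", "exit", "<bastion_host>"]
}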

Related

Terraform - Failed to set up SSH tunneling for host

Hello, I am trying to deploy RKE k8s with Terraform, but I am not able to connect to the desired host via ssh:
time="2022-02-28T11:17:38+01:00" level=warning msg="Failed to set up SSH tunneling for host [poc-k8s.my-domain.com]: Can't retrieve Docker Info: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.24/info\": Unable to access node with address [poc-k8s.my-domain.com:22] using SSH. Please check if you are able to SSH to the node using the specified SSH Private Key and if you have configured the correct SSH username. Error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain"
and this is the .tf file I am using:
terraform {
  required_providers {
    rke = {
      source  = "rancher/rke"
      version = "1.3.0"
    }
  }
}
provider "rke" {
  log_file = "rke_debug.log"
}
resource "rke_cluster" "cluster" {
  nodes {
    address = "poc-k8s.my-domain.com"
    user    = "root"
    role    = ["controlplane", "worker", "etcd"]
    ssh_key = file("~/.ssh/root_key")
  }
  nodes {
    address = "poc-k8s.my-domain.com"
    user    = "root"
    role    = ["worker", "etcd"]
    ssh_key = file("~/.ssh/root_key")
  }
  addons_include = [
    "https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml",
    "https://gist.githubusercontent.com/superseb/499f2caa2637c404af41cfb7e5f4a938/raw/930841ac00653fdff8beca61dab9a20bb8983782/k8s-dashboard-user.yml",
  ]
}
resource "local_file" "kube_cluster_yaml" {
  filename = "~/.kube/kube_config_cluster.yml"
  # this should be a reference, not a quoted string literal
  sensitive_content = rke_cluster.cluster.kube_config_yaml
}
The key is of course correct, and I am able to connect to the desired host:
ssh -i ~/.ssh/root_key root@poc-k8s.my-domain.com
What am I missing here?
[Update]
The cluster resource has a delay_on_creation property that can be used:
resource "rke_cluster" "cluster" {
delay_on_creation = 180
(...)
}
I'm facing a similar issue. On the second run of terraform apply it works correctly. In my case the issue is that Docker is not up fast enough for the RKE provider.
I've found the following workaround from citynetwork/citycloud-examples:
resource "rke_cluster" "cluster" {
(...)
depends_on = [null_resource.wait-for-docker]
}
resource "null_resource" "wait-for-docker" {
provisioner "local-exec" {
command = "sleep 180"
}
depends_on = [
# list of servers docker being installed on
(...)
]
}
It waits for 180s which is not ideal, though.
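The fixed sleep can be replaced with an active poll, so the cluster resource proceeds as soon as the daemon is up; a minimal sketch using a remote-exec provisioner, assuming the node from the question is reachable over SSH with the same key (connection details are illustrative):

resource "null_resource" "wait-for-docker" {
  provisioner "remote-exec" {
    connection {
      host        = "poc-k8s.my-domain.com"
      user        = "root"
      private_key = file("~/.ssh/root_key")
    }
    # block until the Docker daemon answers instead of sleeping a fixed 180s
    inline = [
      "until docker info > /dev/null 2>&1; do sleep 5; done",
    ]
  }
}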

How to use `ioredis` to connect to a Redis instance (AWS ElastiCache) across an SSH tunnel with SSL?

This seems to be something about ioredis and its support for TLS. This is all on a Mac (Catalina).
I have an ElastiCache Redis instance running inside a VPC. I tunnel to it with ssh:
ssh -L 6379:clustercfg.my-test-redis.amazonaws.com:6379 -N MyEC2
The following doesn't work with Node 12.9, ioredis 4.19.4:
> const Redis = require("ioredis");
> const redis = new Redis('rediss://127.0.0.1:6379');
[ioredis] Unhandled error event: Error [ERR_TLS_CERT_ALTNAME_INVALID]: Hostname/IP does not match certificate's altnames: IP: 127.0.0.1 is not in the cert's list:
at Object.checkServerIdentity (tls.js:287:12)
<repeated ... many times>
This doesn't work either:
> const Redis = require("ioredis");
> const redis = new Redis('redis://127.0.0.1:6379');
> redis.status
'connect'
> redis.set('fooo','barr').then(console.log).catch(console.error)
Promise { <pending> }
> redis.status
'connect'
Is there a way to let me do this with ioredis? This is just for debugging. If the first form is correct, is there a setting to allow "non-strict" validation of the cert or something?
This works (on a Mac):
% openssl s_client -connect localhost:6379
set "fred" "Mary"
+OK
get "fred"
$4
Mary
This works (with redis installed via pip3):
#!/usr/bin/env python3
import redis
r = redis.Redis(host='127.0.0.1', ssl=True, port=6379)
r.set('foo', 'bar')
print(r.get('foo'))
While I wouldn't recommend this for production, you said this was for debugging.
You need to disable the server identity check. You can do that by overriding the function in the configuration with a no-op:
const Redis = require("ioredis");
const redis = new Redis('rediss://127.0.0.1:6379', {
  tls: {
    checkServerIdentity: () => undefined,
  }
});
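An alternative that keeps certificate validation enabled is to tell the TLS layer which hostname the certificate should be checked against, instead of the tunnel's 127.0.0.1; a sketch, assuming the ElastiCache endpoint name from the question is what appears in the cert's SAN list:

const Redis = require("ioredis");
const redis = new Redis('rediss://127.0.0.1:6379', {
  tls: {
    // validate the cert against the real ElastiCache hostname, not the tunnel IP
    servername: 'clustercfg.my-test-redis.amazonaws.com',
  }
});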

MassTransit cannot access host machine RabbitMQ from a Docker container

I created a simple .NET Core console application with Docker support. The following MassTransit code fails to connect to the RabbitMQ instance on the host machine, but a similar implementation using RabbitMQ.Client is able to connect to it.
MassTransit throws:
MassTransit.RabbitMqTransport.RabbitMqConnectionException: Connect
failed: ctas@192.168.0.9:5672/ --->
RabbitMQ.Client.Exceptions.BrokerUnreachableException:
Host machine IP: 192.168.0.9
Using MassTransit:
string rabbitMqUri = "rabbitmq://192.168.0.9/";
string userName = "ctas";
string password = "ctas#123";
string assetServiceQueue = "hello";
var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    var host = cfg.Host(new Uri(rabbitMqUri), hst =>
    {
        hst.Username(userName);
        hst.Password(password);
    });
    cfg.ReceiveEndpoint(host, assetServiceQueue, e =>
    {
        e.Consumer<AddNewAssetReceivedConsumer>();
    });
});
bus.Start();
Console.WriteLine("Service Running.... Press enter to exit");
Console.ReadLine();
bus.Stop();
Using RabbitMQ.Client:
public static void Main()
{
    var factory = new ConnectionFactory();
    factory.UserName = "ctas";
    factory.Password = "ctas#123";
    factory.VirtualHost = "watcherindustry";
    factory.HostName = "192.168.0.9";
    using (var connection = factory.CreateConnection())
    using (var channel = connection.CreateModel())
    {
        channel.QueueDeclare(queue: "hello",
                             durable: false,
                             exclusive: false,
                             autoDelete: false,
                             arguments: null);
        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (model, ea) =>
        {
            var body = ea.Body;
            var message = Encoding.UTF8.GetString(body);
            Console.WriteLine(" [x] Received {0}", message);
        };
        channel.BasicConsume(queue: "hello",
                             autoAck: true,
                             consumer: consumer);
        Console.WriteLine(" Press [enter] to exit.");
        Console.ReadLine();
    }
}
Dockerfile:
FROM microsoft/dotnet:1.1-runtime
ARG source
WORKDIR /app
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "TestClient.dll"]
I created an example and was able to connect to my host using the preview package from MassTransit.
Start RabbitMQ in Docker and expose its ports on the host:
docker run -d -p 5672:5672 -p 15672:15672 --hostname my-rabbit --name some-rabbit rabbitmq:3-management
Build and run the console app:
docker build -t dotnetapp .
docker run -d -e RABBITMQ_URI=rabbitmq://guest:guest@172.17.0.2:5672 --name some-dotnetapp dotnetapp
To verify you are receiving messages, run:
docker logs some-dotnetapp --follow
You should see the following output:
Application is starting...
Connecting to rabbitmq://guest:guest@172.17.0.2:5672
Received: Hello, World [08/12/2017 04:35:53]
Received: Hello, World [08/12/2017 04:35:58]
Received: Hello, World [08/12/2017 04:36:03]
Received: Hello, World [08/12/2017 04:36:08]
Received: Hello, World [08/12/2017 04:36:13]
...
Notes:
172.17.0.2 was the my-rabbit container's IP address, but you can replace it with your machine's IP address.
http://localhost:15672 is the RabbitMQ management console; log in with guest as the username and password.
Lastly, portainer.io is a very useful application for visually inspecting your local Docker environment.
Thanks for the response. I managed to resolve this issue. My findings are as follows.
To connect to a RabbitMQ instance in another Docker container, both containers have to be connected to the same network. To do this:
Create a network:
docker network create -d bridge my_bridge
Connect both the app and RabbitMQ containers to the same network:
docker network connect my_bridge <container name>
For the MassTransit URI, use the RabbitMQ container's IP on that network, or the container name.
To connect to the host machine's RabbitMQ instance from an app in a Docker container, the MassTransit URI should include the machine name (I tried the IP; that did not work). A combined sketch of these steps follows.
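Putting those findings together as shell commands (the network name is from the steps above; the container names reuse the earlier example):

# create a user-defined bridge network
docker network create -d bridge my_bridge
# attach both the broker and the app container to it
docker network connect my_bridge some-rabbit
docker network connect my_bridge some-dotnetapp
# user-defined networks provide DNS by container name, so the app can use e.g.
# RABBITMQ_URI=rabbitmq://guest:guest@some-rabbit:5672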
Try using the virtual host in the MassTransit configuration too; not sure why you decided to omit it:
var host = cfg.Host("192.168.0.9", "watcherindustry", hst =>
{
    hst.Username(userName);
    hst.Password(password);
});
Look at Alexey Zimarev's comment on your question. If your RabbitMQ runs in a container, then it should be in your docker-compose file, and you should use that entry in your endpoint definition to connect to it, because Docker creates an internal network that keeps your source code agnostic of actual addresses:
rabbitmq:
  container_name: "rabbitmq-yournode01"
  hostname: rabbit
  image: rabbitmq:3.6.6-management
  environment:
    - RABBITMQ_DEFAULT_USER=yourusergoeshere
    - RABBITMQ_DEFAULT_PASS=yourpasswordgoeshere
    - RABBITMQ_DEFAULT_VHOST=vhost
  volumes:
    - rabbit-volume:/var/lib/rabbitmq
  ports:
    - "5672:5672"
    - "15672:15672"
In your app settings you should have something like:
"ConnectionString": "host=rabbitmq:5672;virtualHost=vhost;username=yourusergoeshere;password=yourpasswordgoeshere;timeout=0;prefetchcount=1",
And if you use EasyNetQ you could do:
_bus = RabbitHutch.CreateBus(_connectionString); // The one above
I hope it helps,
Juan

Redis-cli command to restart the redis server

I terminated the Redis server using SHUTDOWN from redis-cli. Now the prompt shows 'not connected>'.
The only way I found to restart the server was to exit the redis-cli prompt and then restart the Redis service.
My question is: is there any way to restart the server from the redis-cli prompt using any Redis commands, WITHOUT EXITING the redis-cli prompt?
While you don't have to exit the cli, the server cannot be restarted from it once it is shut down.
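Restarting therefore has to happen outside redis-cli, for example from another shell; the exact service name varies by platform, but common forms are:

sudo systemctl restart redis-server   # Debian/Ubuntu with systemd
brew services restart redis           # macOS with Homebrew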
I agree with Itamar Haber's answer, and I will uncover the details.
After the server restarts, if you type any command at this 'not connected>' prompt, redis-cli will attempt to connect again if sending the command failed:
while (1) {
    config.cluster_reissue_command = 0;
    if (cliSendCommand(argc, argv, repeat) != REDIS_OK) {
        cliConnect(1); /* try to reconnect to the redis server if sending the command failed */
        if (cliSendCommand(argc, argv, repeat) != REDIS_OK) { /* after reconnecting, send the command again */
            cliPrintContextError();
            return REDIS_ERR;
        }
    }
}
After redis-server restarts successfully, it listens for socket events; when a connection arrives, the server accepts it here:
void acceptTcpHandler(aeEventLoop *el, int fd, void *privdata, int mask) {
    ......some code.......
    while (max--) {
        cfd = anetTcpAccept(server.neterr, fd, cip, sizeof(cip), &cport); /* accept the connection */
        if (cfd == ANET_ERR) {
            if (errno != EWOULDBLOCK)
                serverLog(LL_WARNING,
                    "Accepting client connection: %s", server.neterr);
            return;
        }
        serverLog(LL_VERBOSE, "Accepted %s:%d", cip, cport);
        acceptCommonHandler(cfd, 0, cip);
    }
}

Gulp task to SSH and then mysqldump

So I've got this scenario where I have separate web and MySQL servers, and I can only connect to the MySQL server from the web server.
So basically, every time I have to go like:
step 1: 'ssh -i ~/somecert.pem ubuntu@1.2.3.4'
step 2: 'mysqldump -u root -p'password' -h 6.7.8.9 database_name > output.sql'
I'm new to gulp, and my aim was to create a task that could automate all this, so that running one gulp task would automatically deliver me the SQL file.
This would make the developer's life a lot easier, since it would take just one command to download the latest DB dump.
This is how far I got (gulpfile.js):
////////////////////////////////////////////////////////////////////
// Run: 'gulp download-db' to get latest SQL dump from production //
// File will be put under the 'dumps' folder //
////////////////////////////////////////////////////////////////////
// Load stuff
'use strict'
var gulp = require('gulp')
var GulpSSH = require('gulp-ssh')
var fs = require('fs');
// Function to get home path
function getUserHome() {
  return process.env.HOME || process.env.USERPROFILE;
}
var homepath = getUserHome();
///////////////////////////////////////
// SETTINGS (change if needed)       //
///////////////////////////////////////
var config = {
  // SSH connection
  host: '1.2.3.4',
  port: 22,
  username: 'ubuntu',
  //password: '1337p4ssw0rd', // Uncomment if needed
  privateKey: fs.readFileSync(homepath + '/certs/somecert.pem'), // Comment out if not needed
  // MySQL connection
  db_host: 'localhost',
  db_name: 'clients_db',
  db_username: 'root',
  db_password: 'dbp4ssw0rd',
}
////////////////////////////////////////////////
// Core script, don't need to touch from here //
////////////////////////////////////////////////
// Set up SSH connector
var gulpSSH = new GulpSSH({
  ignoreErrors: true,
  sshConfig: config
})
// Run the mysqldump
gulp.task('download-db', function () {
  return gulpSSH
    // runs the mysqldump on the remote host
    .exec(['mysqldump -u ' + config.db_username + ' -p\'' + config.db_password + '\' -h ' + config.db_host + ' ' + config.db_name], {filePath: 'dump.sql'})
    // pipes the output into a local folder
    .pipe(gulp.dest('dumps'))
})
// Run search/replace "optional"
SSHing into the web server works fine, but when trying to get the mysqldump I get this message:
events.js:85
throw er; // Unhandled 'error' event
^
Error: Warning:
If I try the same mysqldump command manually over SSH on the server, I get:
Warning: mysqldump: unknown variable 'loose-local-infile=1'
followed by the correct MySQL dump output.
So I think this warning message is messing up my script. I would like to ignore warnings in cases like this, but I don't know how to do that or whether it's possible.
Also, I read that using the password directly on the command line is not really good practice.
Ideally, I would like to have all the config vars loaded from another file, but this is my first gulp task and I'm not really familiar with how I would do that.
Can someone with experience in Gulp orient me towards a good way of getting this done? Or do you think I shouldn't be using Gulp for this at all?
Thanks!
As I suspected, that warning message was preventing the gulp task from finalizing. I got rid of it by commenting out loose-local-infile=1 in /etc/mysql/my.cnf.
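As for loading the config vars from another file, as asked above, a minimal sketch; the file name is hypothetical, and it should be kept out of version control since it holds credentials:

// config.json (hypothetical, next to gulpfile.js), e.g.:
// { "host": "1.2.3.4", "port": 22, "username": "ubuntu", "db_password": "dbp4ssw0rd", ... }
// in gulpfile.js, replace the inline settings object with:
var config = require('./config.json');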