Registering a Fargate service ELB target with correct port using CDK? - aws-fargate

I'm trying to add a Fargate service as an Application Load Balancer target, but the registration keeps using the wrong container port. The task definition has two containers: an app on port 8080 and an nginx reverse proxy on port 443. When I wire these together via the CDK, the target registration always gets port 8080. I can't find a method or set of props that lets me tell the CDK which container's port to use. Or maybe I am setting it and it's being ignored? What am I missing?
Here's a trimmed-down example construct:
export class CdkFargateElbStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const vpc = new ec2.Vpc(this, 'Vpc', { maxAzs: 2 });

    const cluster = new ecs.Cluster(this, 'Cluster', {
      vpc: vpc,
    });

    const taskDef = new FargateTaskDefinition(this, 'TaskDefinition');

    const appContainer = new ContainerDefinition(this, 'AppContainer', {
      image: ContainerImage.fromRegistry(APP_IMAGE),
      taskDefinition: taskDef,
    });
    appContainer.addPortMappings({
      hostPort: 8080,
      containerPort: 8080,
    });

    const proxyContainer = new ContainerDefinition(this, 'ProxyContainer', {
      image: ContainerImage.fromRegistry(PROXY_IMAGE),
      taskDefinition: taskDef,
    });
    proxyContainer.addPortMappings({
      hostPort: 443,
      containerPort: 443,
    });

    const service = new FargateService(this, 'Service', {
      cluster: cluster,
      taskDefinition: taskDef,
      assignPublicIp: true,
      desiredCount: 1,
      vpcSubnets: vpc.selectSubnets({
        subnetType: ec2.SubnetType.PUBLIC,
      }),
    });

    const alb = new elb.ApplicationLoadBalancer(this, 'LoadBalancer', {
      vpc: vpc,
      internetFacing: true,
      ipAddressType: elb.IpAddressType.IPV4,
      vpcSubnets: vpc.selectSubnets({
        subnetType: ec2.SubnetType.PUBLIC,
      }),
    });

    const tg = new ApplicationTargetGroup(this, 'TargetGroup', {
      protocol: elb.ApplicationProtocol.HTTPS,
      port: 443,
      vpc: vpc,
      targetType: elb.TargetType.IP,
      targets: [service],
    });

    const listener = alb.addListener('Listener', {
      protocol: elb.ApplicationProtocol.HTTPS,
      port: 443,
      certificateArns: [CERTIFICATE_ARN],
      defaultTargetGroups: [tg],
    });

    const rule = new ApplicationListenerRule(this, 'rule', {
      listener,
      priority: 1,
      pathPattern: '*',
      targetGroups: [tg],
    });
  }
}
Here are the resulting target registrations. I need the port here to be 443.

According to the documentation for ecs.FargateService.loadBalancerTarget, the container selected as the ALB target is the first essential container in the task definition.
To target a different container, create a load balancer target reference like this:
const sTarget = service.loadBalancerTarget({
  containerName: 'MyContainer',
  containerPort: 1234,
});
Then register sTarget with the target group instead of the service itself.
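Applied to the construct above, the target group wiring might look like this (a sketch: `ProxyContainer` is assumed to be the container name, which the CDK defaults to the construct id used when the container was defined):

```typescript
// Register the nginx container (port 443) as the ALB target explicitly,
// instead of letting the CDK pick the first essential container.
const tg = new ApplicationTargetGroup(this, 'TargetGroup', {
  protocol: elb.ApplicationProtocol.HTTPS,
  port: 443,
  vpc: vpc,
  targetType: elb.TargetType.IP,
  targets: [
    service.loadBalancerTarget({
      containerName: 'ProxyContainer', // assumed: defaults to the construct id
      containerPort: 443,
    }),
  ],
});
```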

Related

Cannot connect NestJS Bull to Elasticache (Redis)

I stuck when connecting NestJS Bull to AWS Elasticache on deployment
On local I easily connect to Redis by
import { Module } from '@nestjs/common';
import { BullModule } from '@nestjs/bull';

@Module({
  imports: [
    BullModule.forRoot({
      redis: {
        host: 'localhost',
        port: 6379,
        password: 'secret',
      },
    }),
  ],
})
export class AppModule {}
I even tried https://app.redislabs.com/, the official Redis cloud, and it works there too.
But on deployment with Elasticache there is no error on startup, yet the queue does not work as expected.
This code worked last year, but now there is no response:
import Redis from 'ioredis';
import { Module } from '@nestjs/common';
import { BullModule } from '@nestjs/bull';

@Module({
  imports: [
    BullModule.forRoot({
      createClient: () => {
        return config.get('redis.cluster.host')
          ? new Redis.Cluster([
              {
                port: +config.get('redis.cluster.port'),
                host: config.get('redis.cluster.host'),
              },
            ])
          : new Redis(+config.get('redis.standalone.port'), config.get('redis.standalone.host'));
      },
    }),
    FeeQueue,
  ],
  providers: [],
  exports: [],
})
export class QueuesModule {}
Could you spare some time to help me? Thanks.
I don't know if it'll be the same for you, but I just ran into a similar issue. The queue wasn't working, but no error was logged. After a lot of testing, I finally got it to log an error saying that enableReadyCheck and maxRetriesPerRequest can't be used for bclient and subscriber connections. So I unset them:
BullModule.forRoot({
  createClient: (type) => {
    // bull requests three connection types: 'client', 'subscriber', 'bclient'
    const opts =
      type !== 'client'
        ? { enableReadyCheck: false, maxRetriesPerRequest: null }
        : {};
    return config.get('redis.cluster.host')
      ? new Redis.Cluster([{ host, port }], opts)
      : new Redis({ host, port, ...opts });
  },
})
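The per-connection-type option selection can be checked in isolation. This is just the selector logic from the snippet above pulled into a plain function (`optsFor` is a hypothetical name, not part of bull's API):

```javascript
// bull passes the connection type to createClient; only the 'client'
// connection may keep ready checks and a request retry limit.
function optsFor(type) {
  return type !== 'client'
    ? { enableReadyCheck: false, maxRetriesPerRequest: null }
    : {};
}

console.log(JSON.stringify(optsFor('client')));  // {}
console.log(JSON.stringify(optsFor('bclient'))); // both options disabled
```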

ioredis infinite loop of connect event

The ready event is never triggered. The connect event fires multiple times, but ready never does. What am I doing wrong? The connect event should also fire only once.
Implementation:
const client = new Cluster(
  [
    {
      host: '127.0.0.1',
      port: 7000,
    },
  ],
  {
    // resolve cluster node hostnames as-is (useful for local Docker clusters)
    dnsLookup: (address, callback) => callback(null, address),
    redisOptions: {},
  },
);

client.on('ready', () => {
  log.info('Ready to use Redis');
});
client.on('connect', () => {
  log.info('Connected to Redis');
});
client.on('error', (x) => {
  log.error(`Disconnected from Redis`);
});
Docker Compose service (grokzen/redis-cluster image):
redis-cluster:
  image: grokzen/redis-cluster
  environment:
    MASTERS: 1
    SLAVES_PER_MASTER: 1
  ports:
    - "7000:7000"
ioredis version: 4.26.0

VueSocketIO offer fallback connection url

I am using Vuetify together with Vuex and VueSocketIO for my WebApp, and here is an example part of the code:
Vue.use(new VueSocketIO({
  reconnection: true,
  debug: true,
  connection: SocketIO(`http://${process.env.ip}:4000`),
  vuex: {
    store,
    actionPrefix: 'SOCKET_',
  },
}));
If I understand it correctly, using Vuex and VueSocketIO together makes it possible to use only one socket like this at a time.
In some cases Vue might not be able to connect to the socket specified at connection.
I was wondering whether it is possible to first let Vue try to connect to one socket (with some number of reconnection attempts) and then, as a fallback, switch to another connection value and try that one?
Thank you in advance!
Final solution
const options = {
  reconnection: true,
  reconnectionAttempts: 2,
  reconnectionDelay: 10,
  reconnectionDelayMax: 1,
  timeout: 300,
};
let connection = new SocketIO(`http://${process.env.ip}:4000`, options);
const instance = new VueSocketIO({
  debug: true,
  connection,
  vuex: {
    store,
    actionPrefix: 'SOCKET_',
  },
  options,
});
const options2 = {
  reconnection: true,
  reconnectionAttempts: 4,
};
connection.on('reconnect_failed', () => {
  connection = new SocketIO(`http://${process.env.fallback}:4000`, options2);
  instance.listener.io = connection;
  instance.listener.register();
  Vue.prototype.$socket = connection;
});
To specify the number of reconnection attempts you can set the reconnectionAttempts option.
Example Code:
const url = `http://${process.env.ip}:4000`
const options = {
  reconnectionAttempts: 3
}
Vue.use(new VueSocketIO({
  debug: true,
  connection: new SocketIO(url, options),
  vuex: { ... }
}))
But switching to another connection is not easy, since neither vue-socket.io nor socket.io-client was designed for that.
First, we have to listen for the reconnect_failed event, which fires when the number of reconnection attempts is exceeded.
Then we have to create a new connection to the fallback url.
The VueSocketIO instance has two important properties, emitter and listener. We cannot create a new emitter, since the old one may already be in use in some components (via the subscribe function), so we keep the old emitter but need a new listener.
Unfortunately, we cannot import the Listener class directly from the vue-socket.io package, so we reuse the old listener, point its io property at the new connection, and manually call its register method.
Finally, bind Vue.prototype.$socket to the new connection for future use.
Example Code:
const url = `http://${process.env.ip}:4000`
const fallbackUrl = `http://${process.env.ip}:4001`
const options = {
  reconnectionAttempts: 3
}
const connection = new SocketIO(url, options)
const instance = new VueSocketIO({
  debug: true,
  connection,
  vuex: {
    store,
    actionPrefix: 'SOCKET_'
  },
  options
})
connection.on('reconnect_failed', error => {
  const connection = new SocketIO(fallbackUrl, options)
  instance.listener.io = connection
  instance.listener.register()
  Vue.prototype.$socket = connection
})
Vue.use(instance)
Vue.use(instance)

Devserver Proxy w/ Axios

I cannot seem to get the devServer: proxy setting working in my vue / express app.
My vue.config.js file is in the root of my client folder and looks like:
module.exports = {
  devServer: {
    proxy: {
      'api': {
        target: 'http://localhost:5000'
      }
    }
  },
  transpileDependencies: [
    'vuetify'
  ]
}
I'm sending a request from the frontend using axios like this:
const response = await http.get("/api/auth/authenticate");
My express app is running on localhost:5000 and I've configured endpoints as such:
...other endpoints
app.use("/api/auth", authController);
The request appears in my network tab as:
Request URL: http://localhost:8080/api/auth/authenticate
and returns a 404 error.
What am I missing here?
Right now the request is not reaching your backend; it is being answered as static content (it hits port 8080, which must be where vue is running). Try the following so the request gets redirected through the proxy:
proxy: {
  '^/api': {
    target: 'http://localhost:5000',
    ws: false,
    changeOrigin: true
  }
}
Or just '/api' instead of '^/api'.
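For reference, the full vue.config.js with the corrected proxy key might look like this (a sketch; only the proxy key changes relative to the original config, which used 'api' without the leading slash):

```javascript
// vue.config.js
module.exports = {
  devServer: {
    proxy: {
      // '^/api' matches request paths beginning with /api and
      // forwards them to the express server on port 5000
      '^/api': {
        target: 'http://localhost:5000',
        ws: false,
        changeOrigin: true
      }
    }
  },
  transpileDependencies: [
    'vuetify'
  ]
}
```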

Where should be the location for coturn or ice setting for sipjs 0.11.0?

I am moving from sipjs 0.7x to sipjs 0.11.
After reading the Git issue https://github.com/onsip/SIP.js/pull/426#issuecomment-312065734
and
https://sipjs.com/api/0.8.0/sessionDescriptionHandler/
I found that the ice options (coturn, turn, stun) are no longer set on the User Agent,
but the problem is that I do not quite understand where I should use
setDescription(sessionDescription, options, modifiers)
I have seen that the ice servers are set in options, using
options.peerConnectionOptions.rtcConfiguration.iceServers
Below is what I have tried:
session.on('trackAdded', function () {
  // We need to check the peer connection to determine which track was added
  var modifierArray = [
    SIP.WebRTC.Modifiers.stripTcpCandidates,
    SIP.WebRTC.Modifiers.stripG722,
    SIP.WebRTC.Modifiers.stripTelephoneEvent
  ];
  var options = {
    peerConnectionOptions: {
      rtcConfiguration: {
        iceServers: [{
          urls: 'turn:35.227.67.199:3478',
          username: 'leon',
          credential: 'leon_pass'
        }]
      }
    }
  };
  session.setDescription('trackAdded', options, modifierArray);
  var pc = session.sessionDescriptionHandler.peerConnection;
  // Gets remote tracks
  var remoteStream = new MediaStream();
  pc.getReceivers().forEach(function (receiver) {
    remoteStream.addTrack(receiver.track);
  });
  remoteAudio.srcObject = remoteStream;
  remoteAudio.play();
  // Gets local tracks
  // var localStream = new MediaStream();
  // pc.getSenders().forEach(function(sender) {
  //   localStream.addTrack(sender.track);
  // });
  // localVideo.srcObject = localStream;
  // localVideo.play();
});
I have tried this, and it seems the traffic is not going through the coturn server.
I used Trickle ICE (https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/) to test the coturn server itself and it works fine there, yet my calls still send no traffic through it.
There is not even a demo on the official website showing how to use setDescription(sessionDescription, options, modifiers). Can I please ask for some recommendations?
Configure STUN/TURN servers in the parameters passed to new UserAgent.
Here is a sample; it seems to work on v0.17.1:
const userAgentOptions = {
  // ...
  sessionDescriptionHandlerFactoryOptions: {
    peerConnectionConfiguration: {
      iceServers: [{
        urls: "stun:stun.l.google.com:19302"
      }, {
        urls: "turn:TURN_SERVER_HOST:PORT",
        username: "USERNAME",
        credential: "PASSWORD"
      }]
    },
  },
  // ...
};
const userAgent = new SIP.UserAgent(userAgentOptions);
When using SimpleUser, pass it inside SimpleUserOptions:
const simpleUser = new Web.SimpleUser(url, { userAgentOptions })