How to create a socket over HTTPS in NestJS? - ssl

I have created a socket gateway which works very smoothly over an HTTP request. Now I am trying to connect the socket through an HTTPS request in NestJS, but it doesn't work for me.
I have also tried passing extra parameters in @WebSocketGateway(5058, { origin: "*:*", secure: true }).
I have also checked the official NestJS documentation for working with SSL on sockets, but found nothing.
Below is my code, which I created as per the documentation.
import { InternalServerErrorException, BadRequestException } from '@nestjs/common';
import { WebSocketGateway, OnGatewayConnection, OnGatewayInit, OnGatewayDisconnect } from '@nestjs/websockets';
import { SocketService } from './socket/socket.service';
import { Server, Socket } from 'socket.io';

@WebSocketGateway(5058, { origin: '*:*' })
export class AppGateway implements OnGatewayConnection, OnGatewayInit, OnGatewayDisconnect {
  constructor(private socketService: SocketService) {}

  public userIds = [];

  afterInit(server: Server) {
    console.log('Socket server started');
    this.socketService.socket = server;
  }

  async handleConnection(client: Socket) {
    try {
      console.log(client.id);
      this.socketService.socket.to(client.id).emit('status', 'connected = ' + client.id);
    } catch (error) {
      throw new InternalServerErrorException(
        'Oops, something went wrong, please try again later',
      );
    }
  }

  async handleDisconnect(client: Socket) {
    this.userIds = this.userIds.filter(user => user.conn_socket_id !== client.id);
  }
}
Edit:
I can start the server and access the socket over an HTTP request, but I am not able to access the socket over an HTTPS request.
For example, http://example.com:5058 works for me, but https://example.com:5058 does not.

I fixed it by using a reverse proxy in front of the socket port: if my socket URL is https://example.com:5058, the request should be handled by the virtual host, which proxies it through to the socket server.
The reason it does not work directly is that the SSL certificate is applied to the web server listening on port 443; the socket server on the additional port is still plain HTTP, so hitting that port with an https:// URL will not work and will show an error.
Reference for Apache reverse proxy: Link
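For reference, a minimal sketch of such a virtual host (assuming Apache with mod_proxy, mod_proxy_wstunnel and mod_rewrite enabled; the domain, port and certificate paths are placeholders):

<VirtualHost *:443>
    ServerName example.com
    SSLEngine on
    SSLCertificateFile /path/to/fullchain.pem
    SSLCertificateKeyFile /path/to/privkey.pem

    # Upgrade WebSocket requests; proxy everything else as plain HTTP
    RewriteEngine On
    RewriteCond %{HTTP:Upgrade} websocket [NC]
    RewriteCond %{HTTP:Connection} upgrade [NC]
    RewriteRule ^/socket.io/(.*) ws://localhost:5058/socket.io/$1 [P,L]

    ProxyPass /socket.io/ http://localhost:5058/socket.io/
    ProxyPassReverse /socket.io/ http://localhost:5058/socket.io/
</VirtualHost>

With this in place the client connects to https://example.com (no explicit port), and Apache forwards the upgraded connection to the plain-HTTP socket server on 5058.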

Related

ICE failed on Firefox but PeerJS's example works fine

I'm trying to make a simple chat with PeerJS to learn how to use peer-to-peer connections.
When I try to connect two clients, this error happens on both of them:
WebRTC: ICE failed, your TURN server appears to be broken, see about:webrtc for more details
PeerJS: iceConnectionState is failed, closing connections
The PeerJS demo works fine on Firefox (even when I run it locally), and my code seems to work correctly on Edge.
I tried in a private window with add-ons disabled.
My media.peerconnection.ice.proxy_only is set to false.
Any setting related to ICE or peers is set to its default.
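(Since the error message points at the TURN server, it may be worth noting that PeerJS accepts a config option that is passed through to the underlying RTCPeerConnection, so you can supply your own STUN/TURN servers instead of relying on PeerJS's defaults. A sketch; the server URLs and credentials are placeholders:)

import { Peer } from 'peerjs';

// Sketch: explicit ICE servers; replace the TURN entry with a server
// and credentials you control
const peer = new Peer({
  debug: 3,
  config: {
    iceServers: [
      { urls: 'stun:stun.l.google.com:19302' },
      { urls: 'turn:turn.example.com:3478', username: 'user', credential: 'pass' },
    ],
  },
});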
My code
Both clients have the same code; they try to connect to the client in the URL.
Example: I open http://localhost:5173/room, which gets a peerId and renames itself to http://localhost:5173/room?roomId=198ec396-1691-48cf-b6ea-16d2102c4917, then I copy-paste this link into another tab.
<script lang="ts">
import { page } from '$app/stores';
import { Peer } from 'peerjs';
import { onMount } from 'svelte';
let peer: Peer;
onMount(() => {
const url = $page.url.searchParams;
peer = new Peer({ debug: 3 });
peer.on('open', function (id) {
console.log('open| id :', id);
if (url.has('roomId')) {
let conn = peer.connect(url.get('roomId')!, {
reliable: true
});
conn.on('open', function () {
console.log('test 1');
conn.send('test send 1');
});
}
// change url to be easily copied
window.history.replaceState({}, '', 'room?roomId=' + id);
});
peer.on('connection', function (connexion) {
console.log('test 2');
connexion.send('test send 2');
});
});
</script>
Firefox log and Edge log attached as screenshots (Edge seems to connect, but the test's send() is never logged).

Set up Ktor authenticated shutdown route

I'd like to add a shutdown route to my Ktor server but I need it to require authentication.
I'm trying to put the shutdown URL in my authenticated routes like so:
// application.conf
ktor {
    deployment {
        port = 8080
        host = 127.0.0.1
        shutdown.url = "/shutdown"
    }
}

// Application.kt
routing {
    root()
    authenticate("authenticated-routes") {
        test1()
        test2()
        shutdown()
    }
}

// Routes.kt
fun Route.shutdown() {
    get("/shutdown") {
        // shutting down
    }
}
But somehow the shutdown route does not require authentication for shutting down the server (something to do with the config overriding the route defined in Routes.kt?).
The docs unfortunately do not give any hints as to how to make the shutdown route authenticated. Any ideas on how I could make sure that not just anyone can call the shutdown route and shut down my server?
The ShutDownUrl plugin has nothing to do with Routing, which is why you can't integrate it with the Authentication plugin. To solve your problem, you can manually create an instance of the ShutDownUrl class and execute its doShutdown method in a route that requires authentication. Here is an example:
import io.ktor.application.*
import io.ktor.auth.*
import io.ktor.response.*
import io.ktor.routing.*
import io.ktor.server.engine.*
import io.ktor.server.netty.*

fun main() {
    val shutdown = ShutDownUrl("") { 1 }

    embeddedServer(Netty, port = 3333) {
        install(Authentication) {
            basic {
                realm = "Access for shutting the server down"
                validate { credentials ->
                    if (credentials.name == "jetbrains" && credentials.password == "foobar") {
                        UserIdPrincipal(credentials.name)
                    } else {
                        null
                    }
                }
            }
        }

        routing {
            get("/") {
                call.respondText { "hello" }
            }
            authenticate {
                get("/shutdown") {
                    shutdown.doShutdown(call)
                }
            }
        }
    }.start()
}
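Note: the first argument to ShutDownUrl (the trigger URL) is effectively unused here, since the plugin is never installed as an interceptor; doShutdown is invoked directly from the authenticated route, and the lambda supplies the process exit code (1 in this example).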

Server does not connect to the right port for my DB when it makes a request

I am running a Node/Express + React app, with MongoDB as my database, served on an AWS EC2 instance. I am using MobaXterm to SSH into my server. The problem I am encountering: when I go to sign up / log in and the app makes the HTTP request, it is not sending the request to port 8081, where my DB is. My front end is being hosted on port 3000 (see the screenshot of the URL with the port).
The home page of my app should display public content, but instead it displays raw HTML (see the screenshots). I am using PM2 to run both the front end and the back end; they are 2 separate instances. Here is my ecosystem.config.js file that gets called to start the front end:
module.exports = {
  apps: [
    {
      name: 'frontend',
      script: 'npx',
      args: 'serve -s build -l 3000 -n',
      interpreter: 'none',
      env: {
        NODE_ENV: 'production',
      },
    },
  ],
}
Here is my http-common.js file that sets up axios
import axios from 'axios';

const apiUrl = process.env.NODE_ENV === 'production' ? process.env.REACT_APP_PRODUCTION : process.env.REACT_APP_DEV;

export default axios.create({
  baseURL: apiUrl,
  headers: {
    "Content-type": "application/json"
  }
});
Here is where the public content for my home page is fetched:
import http from '../http-common';
import authHeader from './auth-header';

class UserService {
  getPublicContent() {
    return http.get('api/test/all');
  }
  getUserBoard() {
    return http.get('api/test/user', { headers: authHeader() });
  }
  getModeratorBoard() {
    // for axios post/put the config object is the third argument,
    // after the request body
    return http.post('api/test/mod', {}, { headers: authHeader() });
  }
  getAdminBoard() {
    return http.put('api/test/admin', {}, { headers: authHeader() });
  }
}

export default UserService;
and my .env.production file
REACT_APP_PRODUCTION=http://EC2-INSTANCE-URL-NOT-REAL:8081/
And you can see in the screenshot (GET request to the wrong port) that instead of the request using port 8081, it sends the request to port 3000.
Also, port 8081 is open in the security group for the EC2 instance.
I'm new to the backend, so I'm not sure what to do or how to continue. I really want to have my project's CI/CD working before I try to make any more progress.
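(Not an authoritative diagnosis, but one mechanism worth ruling out, assuming this is a Create React App build: REACT_APP_* variables are inlined into the bundle at build time, so if .env.production was missing or unread when npm run build ran, apiUrl ends up undefined and axios falls back to the page's own origin, which is exactly a request to port 3000. A diagnostic sketch:)

// http-common.js — diagnostic sketch, assuming Create React App
import axios from 'axios';

const apiUrl =
  process.env.NODE_ENV === 'production'
    ? process.env.REACT_APP_PRODUCTION
    : process.env.REACT_APP_DEV;

// Prints the value that was inlined when `npm run build` ran.
// If the deployed bundle logs `undefined`, .env.production was not
// picked up at build time, and requests fall back to port 3000.
console.log('apiUrl baked into bundle:', apiUrl);

export default axios.create({
  baseURL: apiUrl,
  headers: { 'Content-type': 'application/json' },
});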

What's the right way to publish a Redis message from a Redis-based NestJS microservice?

I built a sample Redis-based microservice with NestJS. It's fantastic and works great. In my microservice, after processing the message received over Redis (pub/sub), we publish our result to another Redis channel for a different microservice to pick up.
What's the right way to publish? Are there any samples?
For my work, I used the redis package and published with it directly (as opposed to ClientProxyFactory). Works fine and gets the job done.
import {
  ClientProxy,
  ClientProxyFactory,
  Transport,
} from '@nestjs/microservices';
import { Injectable, Logger } from '@nestjs/common';
import * as redis from 'redis';
import { NVResponseDTO } from "../dto/nv.dto";

@Injectable()
export class NVPersistService {
  logger = new Logger('NVPersistService');
  private client: redis.RedisClient;

  constructor() {
    this.client = redis.createClient({ port: 6379, host: 'localhost' });
    this.logger.log('Successfully created client for publish');
  }

  async publish(result: NVResponseDTO) {
    const channel = 'persistence';
    try {
      await this.client.publish(channel, JSON.stringify(result));
      this.logger.log(`Message sent`);
    } catch (e) {
      this.logger.error(e);
    }
  }
}
But is this the way to do it, or should I use something like the code below?
this.client = ClientProxyFactory.create({
  transport: Transport.REDIS,
  options: {
    url: 'redis://localhost:6379',
  }
});
await this.client.connect();

const channel = 'persistence';
const status = await this.client.send<string, NVResponseDTO>(channel, result);
this.logger.log(`Status of send - ${status}`);
Note: the above code did not work for me, hence I used Redis directly. Any guidance would be much appreciated.
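(One hedged guess as to why the ClientProxy attempt failed: send() implements request-response and waits for a reply from a @MessagePattern handler on the other side, so it stalls if nothing answers, whereas emit() is fire-and-forget event publishing, which is what a pub/sub result broadcast needs. A sketch; the option shape may vary between Nest versions:)

import { ClientProxy, ClientProxyFactory, Transport } from '@nestjs/microservices';
import { NVResponseDTO } from '../dto/nv.dto';

const client: ClientProxy = ClientProxyFactory.create({
  transport: Transport.REDIS,
  // older Nest versions take `url: 'redis://localhost:6379'` instead
  options: { host: 'localhost', port: 6379 },
});

async function publishResult(result: NVResponseDTO) {
  await client.connect();
  // emit() returns a hot Observable and dispatches the event immediately;
  // a consumer would receive it in an @EventPattern('persistence') handler
  client.emit<void, NVResponseDTO>('persistence', result);
}

One caveat: Nest's Redis transport wraps messages in its own JSON envelope, so events published raw through the redis package will not be understood by @EventPattern handlers on the receiving side.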

Application not connecting to Worklight server

I am using WL.Client.connect(options) to connect to the Worklight server from my hybrid app, and it's returning this error (please refer to the attachment).
I have checked the WL console; my app and adapters are deployed without any issues.
I could not find any error messages in the logs.
function performWlConnect() {
  return new Promise(function (resolve, reject) {
    // Worklight server connection callback configuration
    var wlConnectOptions = {
      onSuccess: resolve,
      onFailure: reject
    };

    logger.info('Connecting to the worklight server side...');

    // Perform connection to the server side
    WL.Client.connect(wlConnectOptions);
  });
}
Try a simple connect and print the error you're getting; it might help you better diagnose your flow.
function wlCommonInit() {
  WL.Client.connect({ onSuccess: onConnectSuccess, onFailure: onConnectFailure });
}

function onConnectSuccess() {
  ...
}

function onConnectFailure(response) {
  WL.Logger.debug(response);
}