Is there a way for multiple browsers to communicate with each other without peers, or to keep communicating after the peer connection is lost? - gun

Is there a way for multiple browsers to communicate with each other without peers, or to keep communicating after the peer connection is lost?
I created a sample with gun.js like below:
server.js:
const express = require('express')
const Gun = require('gun')

const app = express()
const port = 8000

app.use(Gun.serve)

const server = app.listen(port, () => {
  console.log("Listening at: http://localhost:" + port)
})

Gun({web: server})
test.ts in the Angular demo:
gun = GUN({
  peers: ['http://localhost:8000/gun']
});
data: any;

initDate(): void {
  this.gun.get('mark').put({
    name: "Mark",
    email: "mark@gun.eco",
  });
}

listenDate(): void {
  this.gun.get('mark').on((data, key) => {
    console.log("realtime updates:", data);
    this.data = data;
  });
}

submit(): void {
  this.gun.get('mark').get('live').put(Math.random());
}
I start server.js as a peer and start the Angular app, open two browsers with the same URL, and the two browsers communicate well.
But after I stop server.js, the two browsers are unable to communicate with each other.
Is there a way for the two browsers to communicate with each other without server.js, or to keep communicating after I stop server.js?
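One thing worth noting about the snippet above: the peers option is an array, so more than one relay can be listed and the browsers can keep syncing as long as at least one listed peer is still reachable. A minimal, hedged sketch only (the second relay URL is a hypothetical placeholder, not part of the original setup):
// Hedged sketch: gun accepts several relay peers; losing one relay does not
// necessarily cut the browsers off while another listed peer stays up.
// The second URL below is hypothetical.
gun = GUN({
  peers: [
    'http://localhost:8000/gun',
    'https://other-relay.example.com/gun'
  ]
});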

express and moralis token price API not fetching json data in console

I've been following a video from Moralis Web3 (YouTube), but I got stuck when I need to fetch data using the Moralis token price API. I want the price details to be printed in the console when I run
npm start
The expected output in the console is:
{
  nativePrice: {
    value: '13851123944545175839',
    decimals: 18,
    name: 'Ether',
    symbol: 'ETH'
  },
  usdPrice: 23176.58785953117,
  exchangeAddress: '0x1f98431c8ad98523631ae4a59f267346ea31f984',
  exchangeName: 'Uniswap v3'
}
On localhost it should return an empty JSON object '{}', but when I open the same route on localhost it shows:
Cannot GET /tokenPrice
I tried a different method provided in the Moralis docs and it works fine, but when I do the same as the tutorial it throws this error:
const express = require("express");
const Moralis = require("moralis").default;
const { EvmChain } = require("@moralisweb3/common-evm-utils");
const app = express();
const cors = require("cors");

require("dotenv").config();

const port = 3001;

app.use(cors());
app.use(express.json());

app.get("./tokenPrice", async (req, res) => {
  const { query } = req;

  const responseOne = await Moralis.EvmApi.token.getTokenPrice({
    address: query.addressOne,
  });
  const responseTwo = await Moralis.EvmApi.token.getTokenPrice({
    address: query.addressTwo,
  });

  console.log(responseOne.raw);
  console.log(responseTwo.raw);

  return res.status(200).json({});
});

Moralis.start({
  apiKey: process.env.MORALIS_KEY,
}).then(() => {
  app.listen(port, () => {
    console.log(`Listening for API Calls`);
  });
});
I also want to know what {query} and addressOne mean here, as I've never declared any variables like these before in my code.
I want to know whether {query} and addressOne are Express.js properties or Moralis ones,
and why and where the error occurred and how to resolve it.
addressOne and addressTwo are the query parameters used in your request. If you check the video you can see him showcasing an example of what the request should look like:
http://localhost:3001/tokenPrice?addressOne=0x51491...cf986ca&addressTwo=0xdac17f...6994597
Use the addresses of the two tokens you wish to get the price for. You can indeed extend the logic to use more addresses.
In the video at minute 45:00 you can see what the code should look like. Please make sure you complete your code for it to work properly.
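As for { query }: that is just JavaScript object destructuring of Express's req.query, which holds the parsed query-string parameters, so addressOne and addressTwo come straight from a URL like the one above. A minimal sketch of that mechanism, assuming the same route and parameter names as in the question:
const express = require("express");
const app = express();

// GET /tokenPrice?addressOne=0x...&addressTwo=0x...
app.get("/tokenPrice", (req, res) => {
  // req.query holds the parsed query-string parameters
  const { query } = req; // equivalent to: const query = req.query;
  console.log(query.addressOne, query.addressTwo);
  res.status(200).json({});
});

app.listen(3001);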

Redis issue on module-redis-fork

Issue
Hi everyone,
I have an issue while trying to interact with Redis under these conditions:
a Redis instance with the RediSearch module,
a node-redis client created before the Redis module fork starts,
a Redis module fork is ongoing.
The behaviour that I get is that "send_command" stays idle until the fork stops.
When the fork ends I get this error:
debug mode ->
Redis connection is gone from end event
client error ->
AbortError: Redis connection lost and command aborted. It might have been processed.
After I get this error, commands from the same client (without creating a new client) go back to working fine.
I get the same behaviour on every fork.
Additional Info:
keys: 37773168,
used_memory_human: '87.31G'
Code Example:
This is a simple Express app:
'use strict';

const express = require('express');
const Redis = require('redis');
// Redis.debug_mode = true;

const router = express.Router();
let client = null;

router.get('/redisearch/connect', async (req, res, next) => {
  const conf = {
    'host': '127.0.0.1',
    'port': 6379,
    'db': 0,
  };
  try {
    if (!client) client = Redis.createClient(conf.port, conf.host, { db: conf.db });
    res.send('Connected');
  } catch (err) {
    res.send(err);
  }
});

router.get('/redisearch/d', async (req, res, next) => {
  const num = 10;
  const dArgs = ['testIndexName', `@ic:[${num} ${num}]`, 'GROUPBY', 1, '@d'];
  try {
    client.send_command('FT.AGGREGATE', dArgs, (err, reply) => {
      if (err) {
        return res.send({ err: err });
      }
      res.send({ d: reply });
    });
  } catch (err) {
    res.send(err);
  }
});

module.exports = router;
This is the simplest way I have to replicate the problem.
I don't know if there is a way to force Redis to fork; in my case it happens after a massive search on the index followed by deleting and inserting records.
Redis itself works normally during these operations (insert/delete);
I can run commands from redis-cli.
If I create a new instance of the node-redis client while the fork is present, everything works normally, and when the fork goes away everything keeps working.
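For what it's worth, that last observation can be turned into a small workaround: drop the client when the abort error shows up so that the next request builds a fresh connection. This is a hedged sketch only, assuming node-redis 3.x and the same conf values as above; getClient is a hypothetical helper, not part of the original report:
'use strict';
const Redis = require('redis');

const conf = { host: '127.0.0.1', port: 6379, db: 0 };
let client = null;

// Hypothetical helper: lazily (re)create the client and discard it when a
// command is aborted, so the next call opens a fresh connection.
function getClient() {
  if (!client) {
    client = Redis.createClient(conf.port, conf.host, { db: conf.db });
    client.on('error', (err) => {
      if (err && err.name === 'AbortError') {
        client.end(true); // drop the broken connection without waiting for replies
        client = null;
      }
    });
  }
  return client;
}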
Environment
Node.js Version: v14.15.1
Redis Version: 6.0.4
redisearch Version: 1.6.15
node-redis Version: 3.2
Platform: Server 128GB RAM, 8 Core, Debian

Serving public and private ports using Nestjs

I'm building a server that aims to serve a mobile application. Besides serving the client, it will have several back-office functionalities.
We are using Swagger, and we do want to be able to access the Swagger docs of our back-office endpoints. However, we do not want to expose all of our endpoints publicly.
Assuming that having all endpoints public is a bad option, one solution we are thinking of is letting our server serve two ports and then only exposing one port to the public. We have created a small sample repo that serves a client module and a back-office module on two different ports.
The main.ts looks like the following:
import { NestFactory } from '@nestjs/core';
import { ClientModule } from './modules/client/client.module';
import * as express from 'express';
import * as http from 'http';
import { ExpressAdapter } from '@nestjs/platform-express';
import { BackOfficeModule } from './modules/backoffice/backoffice.module';
import { SwaggerModule, DocumentBuilder } from '@nestjs/swagger';

async function bootstrap() {
  const clientServer = express();
  const clientApp = await NestFactory.create(
    ClientModule,
    new ExpressAdapter(clientServer),
  );
  const clientOptions = new DocumentBuilder()
    .setTitle('ClientServer')
    .setDescription('The client server API description')
    .setVersion('1.0')
    .addTag('client')
    .build();
  const clientDocument = SwaggerModule.createDocument(clientApp, clientOptions);
  SwaggerModule.setup('api', clientApp, clientDocument);
  await clientApp.init();

  const backOfficeServer = express();
  const backOfficeApp = await NestFactory.create(
    BackOfficeModule,
    new ExpressAdapter(backOfficeServer),
  );
  const backOfficeOptions = new DocumentBuilder()
    .setTitle('BackOffice')
    .setDescription('The back office API description')
    .setVersion('1.0')
    .addTag('backOffice')
    .build();
  const backOfficeDocument = SwaggerModule.createDocument(backOfficeApp, backOfficeOptions);
  SwaggerModule.setup('api', backOfficeApp, backOfficeDocument);
  await backOfficeApp.init();

  http.createServer(clientServer).listen(3000); // The public port (load balancer will route traffic to this port)
  http.createServer(backOfficeServer).listen(4000); // The private port (will be accessed through a bastion host or similar)
}
bootstrap();
bootstrap();
Another option would be to create a bigger separation of the codebase and infrastructure; however, as this is a very early stage we feel that is unnecessary.
Our question to the Nest community is thus: has anyone done this? If so, what is your experience? What are the drawbacks of separating our backend code like this?
Disclaimer: this solution is for the express+REST combination.
Routing
Even though NestJS can't separate controllers based on port, it can separate them based on host. Using that, you can add a reverse proxy in front of your application that modifies the host header based on the port. Or, to make things even simpler, you can do that in an Express middleware. This is what I did:
async function bootstrap() {
  const publicPort = 3000
  const privatePort = 4000

  const server = express()
  server.use((req, res, next) => {
    // act as a proper reverse proxy and set X-Forwarded-Host header if it hasn't been set
    req.headers['x-forwarded-host'] ??= req.headers.host
    switch (req.socket.localPort) {
      case publicPort:
        req.headers.host = 'public'
        break
      case privatePort:
        req.headers.host = 'private'
        break
      default:
        // this shouldn't be possible
        res.sendStatus(500)
        return
    }
    next()
  })

  const app = await NestFactory.create(AppModule, new ExpressAdapter(server))

  http.createServer(server).listen(publicPort)
  http.createServer(server).listen(privatePort)
}
Controllers:
@Controller({ path: 'cats', host: 'public' })
export class CatsController {...}

@Controller({ path: 'internal', host: 'private' })
export class InternalController {...}

Alternatively, you can simplify this by creating your own PublicController and PrivateController decorators:
// decorator for public controllers, also sets guard
export const PublicController = (path?: string): ClassDecorator => {
  return applyDecorators(Controller({ path, host: 'public' }), UseGuards(JwtAuthGuard))
}

// decorator for private controllers
export const PrivateController = (path?: string): ClassDecorator => {
  return applyDecorators(Controller({ path, host: 'private' }))
}

@PublicController('cats')
export class CatsController {...}

@PrivateController('internal')
export class InternalController {...}
Swagger
For Swagger, SwaggerModule.createDocument has an "include" option, which accepts a list of modules to include in the Swagger docs. With a bit of effort we can also turn the Swagger-serving part into an Express Router, so both the private and public Swagger can be served on the same path, on the different ports:
async function bootstrap() {
  const publicPort = 3000
  const privatePort = 4000

  const server = express()
  server.use((req, res, next) => {
    // act as a proper reverse proxy and set X-Forwarded-Host header if it hasn't been set
    req.headers['x-forwarded-host'] ??= req.headers.host
    switch (req.socket.localPort) {
      case publicPort:
        req.headers.host = 'public'
        break
      case privatePort:
        req.headers.host = 'private'
        break
      default:
        // this shouldn't be possible
        res.sendStatus(500)
        return
    }
    next()
  })

  const app = await NestFactory.create(AppModule, new ExpressAdapter(server))

  // setup swagger
  const publicSwaggerRouter = await createSwaggerRouter(app, [CatsModule])
  const privateSwaggerRouter = await createSwaggerRouter(app, [InternalModule])

  server.use('/api', (req: Request, res: Response, next: NextFunction) => {
    switch (req.headers.host) {
      case 'public':
        publicSwaggerRouter(req, res, next)
        return
      case 'private':
        privateSwaggerRouter(req, res, next)
        return
      default:
        // this shouldn't be possible
        res.sendStatus(500)
        return
    }
  })

  http.createServer(server).listen(publicPort)
  http.createServer(server).listen(privatePort)
}

async function createSwaggerRouter(app: INestApplication, modules: Function[]): Promise<Router> {
  const swaggerConfig = new DocumentBuilder().setTitle('MyApp').setVersion('1.0').build()
  const document = SwaggerModule.createDocument(app, swaggerConfig, { include: modules })

  const swaggerUi = loadPackage('swagger-ui-express', 'SwaggerModule', () => require('swagger-ui-express'))
  const swaggerHtml = swaggerUi.generateHTML(document)

  const router = Router()
    .use(swaggerUi.serveFiles(document))
    .get('/', (req: Request, res: Response, next: NextFunction) => {
      res.send(swaggerHtml)
    })
  return router
}
That's OK, but if you want to run two servers on one host, I would recommend creating two files like main-client.ts and main-back-office.ts and running them in different processes, because in that case a failure of one server would not affect the other.
Also, if you do not run this in Docker, I would suggest tools like forever, pm2, supervisor, or my own very small library, workers-cluster (a pm2 sketch follows below).
If you run it in Docker and don't want a big refactoring, I would recommend creating a single Dockerfile and running different CMD or ENTRYPOINT commands.
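As a hedged illustration of the pm2 variant (the file paths are assumptions: dist/main-client.js and dist/main-back-office.js would be the compiled outputs of the two entry files suggested above):
// ecosystem.config.js - hypothetical pm2 process file running the two entry points
module.exports = {
  apps: [
    { name: 'client', script: 'dist/main-client.js' },
    { name: 'back-office', script: 'dist/main-back-office.js' },
  ],
};
Starting both with pm2 start ecosystem.config.js gives each server its own process, so one crashing does not take the other down.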
The NestJS docs cover how to let one server serve multiple ports:
https://docs.nestjs.com/faq/multiple-servers#multiple-simultaneous-servers
The following recipe shows how to instantiate a Nest application that listens on multiple ports (for example, on a non-HTTPS port and an HTTPS port) simultaneously.
const httpsOptions = {
  key: fs.readFileSync('./secrets/private-key.pem'),
  cert: fs.readFileSync('./secrets/public-certificate.pem'),
};

const server = express();
const app = await NestFactory.create(
  ApplicationModule,
  new ExpressAdapter(server),
);
await app.init();

http.createServer(server).listen(3000);
https.createServer(httpsOptions, server).listen(443);

Where should the coturn or ICE settings go for sipjs 0.11.0?

I am moving from sipjs 0.7.x to sipjs 0.11.
After reading the GitHub issue https://github.com/onsip/SIP.js/pull/426#issuecomment-312065734
and
https://sipjs.com/api/0.8.0/sessionDescriptionHandler/
I have found that the ICE options (coturn, turn, stun) are not in the User Agent anymore,
but the problem is that I don't quite understand where I should use
setDescription(sessionDescription, options, modifiers)
I have seen that the ICE servers are set in options, using
options.peerConnectionOptions.rtcConfiguration.iceServers
Below is what I have tried:
session.on('trackAdded', function () {
  // We need to check the peer connection to determine which track was added
  var modifierArray = [
    SIP.WebRTC.Modifiers.stripTcpCandidates,
    SIP.WebRTC.Modifiers.stripG722,
    SIP.WebRTC.Modifiers.stripTelephoneEvent
  ];
  var options = {
    peerConnectionOptions: {
      rtcConfiguration: {
        iceServers: [{
          urls: 'turn:35.227.67.199:3478',
          username: 'leon',
          credential: 'leon_pass'
        }]
      }
    }
  };
  session.setDescription('trackAdded', options, modifierArray);

  var pc = session.sessionDescriptionHandler.peerConnection;
  // Gets remote tracks
  var remoteStream = new MediaStream();
  pc.getReceivers().forEach(function (receiver) {
    remoteStream.addTrack(receiver.track);
  });
  remoteAudio.srcObject = remoteStream;
  remoteAudio.play();

  // Gets local tracks
  // var localStream = new MediaStream();
  // pc.getSenders().forEach(function(sender) {
  //   localStream.addTrack(sender.track);
  // });
  // localVideo.srcObject = localStream;
  // localVideo.play();
});
I have tried this and it seems that the traffic is not going to the coturn server.
I have used the Trickle ICE test page (https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/) to test and it is fine, but I have found there is no traffic going through the coturn server. You could also use that one, I do not mind.
There is not even a demo on the official website showing how to use setDescription(sessionDescription, options, modifiers). In this case, can I please ask for some recommendations?
Configure the STUN/TURN servers in the parameters passed to new UserAgent.
Here is a sample; it seems to work on v0.17.1:
const userAgentOptions = {
  ...
  sessionDescriptionHandlerFactoryOptions: {
    peerConnectionConfiguration: {
      iceServers: [{
        urls: "stun:stun.l.google.com:19302"
      }, {
        urls: "turn:TURN_SERVER_HOST:PORT",
        username: "USERNAME",
        credential: "PASSWORD"
      }]
    },
  },
  ...
};
const userAgent = new SIP.UserAgent(userAgentOptions);
When using SimpleUser, pass it inside SimpleUserOptions:
const simpleUser = new Web.SimpleUser(url, { userAgentOptions })

Stomp over WebSocket in React Native

Has anyone tried using the Stomp protocol over the WebSocket implementation in React Native? We are using Stomp for the web application, and it would be great if I did not have to build a separate back end for the web and mobile applications.
I haven't found a good way to integrate Stomp with the React Native WebSockets.
I have wrapped the latest TypeScript/JS @stomp/stompjs STOMP client into a React HOC, making use of the SockJS library to simulate the websocket. Feel free to check it out.
// Assumed imports (not shown in the original snippet): SockJS plus a STOMP
// client exposing Stomp.over(); the send() argument order used below
// (destination, body, headers) matches webstomp-client.
import SockJS from 'sockjs-client';
import Stomp from 'webstomp-client';

var connected = false;
var socket = null;
var stompClient = null;

// Send a message to the server once the STOMP connection is up
const send = (param) => {
  let send_message = param;
  if (stompClient && stompClient.connected) {
    const msg = { name: send_message };
    stompClient.send("/app/hello", JSON.stringify(msg), {});
  }
}

// Open the SockJS socket and connect the STOMP client over it
const connect = () => {
  socket = new SockJS("your endpoint");
  stompClient = Stomp.over(socket);
  stompClient.connect(
    {},
    frame => {
      connected = true;
      stompClient.subscribe("/topic/greetings", tick => {
      });
    },
    error => {
      console.log(error);
      connected = false;
    }
  );
}

const disconnect = () => {
  if (stompClient) {
    stompClient.disconnect();
  }
  connected = false;
}

// Toggle the connection state
const tickleConnection = () => {
  connected ? disconnect() : connect();
}