Extend the timeout of an http request in Electron

I am building an application with Electron and I have the following problem:
I need to make an http request to receive data from a PHP page, but the timeout is shorter than the response time, so the request gets cancelled before anything is delivered to me.
Does anyone know how to lengthen the wait time of an http request?
Here is my code:
var http = require('http');

var options = {
    host: localStorage.getItem('server'),
    port: localStorage.getItem('port'),
    path: localStorage.getItem('directori') + '?nosession=1&call=ciberFiSessio&numSerie=' + localStorage.getItem("pc")
};

http.get(options, function(res) {
    alert("hola");
    if (res.statusCode == 200) {
        //reinicia();
        res.on('data', function(chunk) {
            str = chunk;
            alert(str);
            var myJSON = JSON.parse(str);
            //alert(myJSON.fi);
            if (parseInt(myJSON.fi) == 0) {
                alert("Hi ha hagut un problema!");
            } else {
                reinicia();
            }
        });
    } else {
        alert("El lloc ha caigut!");
        alert(res.statusCode);
    }
}).on('error', function(e) {
    alert("Hi ha un error: " + e.message);
});

I assume you want to extend the timeout of the Node http request, so that it waits longer for the PHP server to respond.
You can set the timeout property of the http request in milliseconds.
Just add the property to your options object, for example:
var http = require('http');

var options = {
    timeout: 1000, // timeout of 1 second
    host: localStorage.getItem('server'),
    port: localStorage.getItem('port'),
    path: localStorage.getItem('directori') + '?nosession=1&call=ciberFiSessio&numSerie=' + localStorage.getItem("pc")
};

http.get(options, ...)
From the official documentation:
timeout: A number specifying the socket timeout in milliseconds. This will set the timeout before the socket is connected.
Read more in the documentation for http.request (http.get accepts the same options).
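Note that, per the Node documentation, this option only sets the socket timeout; when it fires, the request is not aborted automatically, so you also want to listen for the 'timeout' event on the request and abort it yourself. A rough sketch (with placeholder host, port and path instead of the localStorage values from the question):

var http = require('http');

var options = {
    timeout: 30000,        // give the PHP server up to 30 seconds
    host: 'example.com',   // placeholder for localStorage.getItem('server')
    port: 80,              // placeholder for localStorage.getItem('port')
    path: '/endpoint'      // placeholder for the real path and query string
};

var req = http.get(options, function(res) {
    var body = '';
    res.on('data', function(chunk) { body += chunk; });
    res.on('end', function() {
        console.log('response:', body);
    });
});

req.on('timeout', function() {
    // the socket has been idle for `timeout` ms; abort so the 'error' handler fires
    req.abort();
});

req.on('error', function(e) {
    console.log('request error: ' + e.message);
});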

Related

.Net Core 3.1 SignalR Client Starts and Stops receiving messages at specific intervals

Does an Azure VM throttle SignalR messages being sent? Running locally, the client receives every message, but when hosted on the VM, clients only receive messages about 30% of the time.
This question is about the Microsoft.AspNet.SignalR NuGet package for SignalR in a .NET Core 3.1 API back end, with a Vue.js SPA front end, all hosted on an Azure Windows Server 2016 VM using IIS 10.
On my local machine, SignalR works perfectly; messages get sent and received instantaneously, every time. Then I publish to the VM, and when (if) the WebSocket connection is successful, the client only receives messages for the first 5 or so seconds at most, then stops receiving them.
I've set up a dummy page that sends a message to my API, which then sends that message back down to all connections. It's a simple input form and "Send" button. After the first few seconds of submitting and receiving messages, I have to rapidly submit the form (even hold down the "enter" button to submit) and send what should be a constant stream of messages back until, lo and behold, several seconds later messages begin to be received again, but only for a few seconds.
I've actually held down the submit button for a constant stream and timed how long it takes to start getting messages, then how long it takes to stop receiving them. My small sample shows that ~30 messages get received, then the next ~70 messages are skipped (not received) until another ~30 messages come in. The pattern persists: no messages for several seconds, then ~30 messages over a few seconds.
Production environment, continuously sending 1000 messages (screenshot):
Same test in the local environment, sending 1000 messages (screenshot):
If I stop the test, no matter how long I wait, when I hold the enter button (repeated submit) it takes a few seconds to get back into the 3-second/2-second pattern. It's almost as if I need to keep pressuring the server to send the messages back to the client, otherwise the server gets lazy and doesn't do any work at all. If I slow down the message submits, it's rare that the client receives any messages at all. I really need to persistently and quickly send messages in order to start receiving them again.
FYI, during the time that I am holding down submit, or rapidly submitting, I receive no errors for API calls (initiating messages) and no errors for the socket connection or for receiving messages. All the while, with the client-side SignalR log level set to Trace, I see ping requests being sent and received successfully every 10 seconds.
Here is the Socket Config in .Net:
services.AddSignalR()
    .AddHubOptions<StreamHub>(hubOptions => {
        hubOptions.EnableDetailedErrors = true;
        hubOptions.ClientTimeoutInterval = TimeSpan.FromHours(24);
        hubOptions.HandshakeTimeout = TimeSpan.FromHours(24);
        hubOptions.KeepAliveInterval = TimeSpan.FromSeconds(15);
        hubOptions.MaximumReceiveMessageSize = 1000000;
    })
    .AddJsonProtocol(options =>
    {
        options.PayloadSerializerOptions.PropertyNamingPolicy = null;
    });

// Adding Authentication
services.AddAuthentication(options =>
{
    options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
    options.DefaultScheme = JwtBearerDefaults.AuthenticationScheme;
})
// Adding Jwt Bearer
.AddJwtBearer(options =>
{
    options.SaveToken = true;
    options.RequireHttpsMetadata = false;
    options.TokenValidationParameters = new TokenValidationParameters()
    {
        ValidateIssuer = true,
        ValidateAudience = true,
        ValidAudience = Configuration["JWT:ValidAudience"],
        ValidIssuer = Configuration["JWT:ValidIssuer"],
        IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(Configuration["JWT:Secret"])),
        ClockSkew = TimeSpan.Zero
    };

    // Sending the access token in the query string is required due to
    // a limitation in Browser APIs. We restrict it to only calls to the
    // SignalR hub in this code.
    // See https://learn.microsoft.com/aspnet/core/signalr/security#access-token-logging
    // for more information about security considerations when using
    // the query string to transmit the access token.
    options.Events = new JwtBearerEvents
    {
        OnMessageReceived = context =>
        {
            var accessToken = context.Request.Query["access_token"];

            // If the request is for our hub...
            var path = context.HttpContext.Request.Path;
            if (!string.IsNullOrEmpty(accessToken) && (path.StartsWithSegments("/v1/stream")))
            {
                // Read the token out of the query string
                context.Token = accessToken;
            }
            return Task.CompletedTask;
        }
    };
});
I use this endpoint to send back messages:
[HttpPost]
[Route("bitcoin")]
public async Task<IActionResult> SendBitcoin([FromBody] BitCoin bitcoin)
{
    await this._hubContext.Clients.All.SendAsync("BitCoin", bitcoin.message);
    return Ok(bitcoin.message);
}
Here is the socket connection in JS and the button click that calls the message API:
this.connection = new signalR.HubConnectionBuilder()
    .configureLogging(process.env.NODE_ENV.toLowerCase() == 'development' ? signalR.LogLevel.None : signalR.LogLevel.None)
    .withUrl(process.env.VUE_APP_STREAM_ROOT, { accessTokenFactory: () => this.$store.state.auth.token })
    .withAutomaticReconnect({
        nextRetryDelayInMilliseconds: retryContext => {
            if (retryContext.retryReason && retryContext.retryReason.statusCode == 401) {
                return null
            }
            else if (retryContext.elapsedMilliseconds < 3600000) {
                // If we've been reconnecting for less than 60 minutes so far,
                // wait between 0 and 10 seconds before the next reconnect attempt.
                return Math.random() * 10000;
            } else {
                // If we've been reconnecting for more than 60 minutes so far, stop reconnecting.
                return null;
            }
        }
    })
    .build()

// connection timeout of 10 minutes
this.connection.serverTimeoutInMilliseconds = 1000 * 60 * 10

this.connection.reconnectedCallbacks.push(() => {
    let alert = {
        show: true,
        text: 'Data connection re-established!',
        variant: 'success',
        isConnected: true,
    }
    this.$store.commit(CONNECTION_ALERT, alert)
    setTimeout(() => {
        this.$_closeConnectionAlert()
    }, this.$_appMessageTimeout)
    // this.joinStreamGroup('event-'+this.event.eventId)
})

this.connection.onreconnecting((err) => {
    if (!!err) {
        console.log('reconnecting:')
        this.startStream()
    }
})

this.connection.start()
    .then((response) => {
        this.startStream()
    })
    .catch((err) => {
    });
startStream() {
    // ---------
    // Call client methods from hub
    // ---------
    if (this.connection.connectionState.toLowerCase() == 'connected') {
        this.connection.methods.bitcoin = []
        this.connection.on("BitCoin", (data) => {
            console.log('messageReceived:', data)
        })
    }
}

buttonClick() {
    this.$_apiCall({ url: 'bitcoin', method: 'POST', data: { message: this.message } })
        .then(response => {
            // console.log('message', response.data)
        })
}
For the case when the socket connection fails:
On page refresh, the WebSocket connection sometimes fails, but there are multiple, almost identical calls to the socket endpoint, where one returns a 404 and another returns a 200 result.
Failed request (screenshot): this is the request that failed; the only difference from the request that succeeded (below) is the content-length in the response headers (highlighted). The request headers are identical.
Successful request to the socket endpoint (screenshot).
Identical request headers (screenshot).
What could be so different about the configuration on my local machine vs. the configuration on my Azure VM? Why would the client stop and start receiving messages like this? Might it be the configuration on my VM? Are the messages getting blocked somehow? I've exhausted myself trying to figure this out!!
Update:
KeepAlive messages are being sent correctly, but the continuous stream of messages being sent (and expected to be received) only comes through periodically.
Here we see that the KeepAlive messages are being sent and received every 15 seconds, as expected (screenshot).

Why is Axios sending an extra 204 preflight request with my POST request?

Whenever I send a POST request using Vue.js (3.x), an additional request to the same URL is made with HTTP status code 204 and type "preflight".
What is this preflight request, and how can I stop it from being sent as a duplicate?
Register.vue
async submit() {
    this.button = true;
    try {
        const response = await axios.post(`register`, this.form);
        if (response.data.success == false) {
            console.log(response.data.message);
        } else {
            this.$router.push('/');
        }
    }
    catch (error) {
        let { errors } = error.response.data;
        this.button = false;
        this.errors = {};
        Object.keys(errors).forEach(element => {
            this.errors[element] = errors[element][0];
        });
    }
},
This is not an issue; it is controlled by the browser by design.
It is not something Axios or any other HTTP client decides to send.
A preflight request is a CORS OPTIONS request that browsers send automatically, specifically to check whether the server would accept the call you are trying to make in terms of method, headers and origin.
You can safely ignore these requests as long as they do not fail, since that means the server will not reject your request on the basis of the factors above.
Your actual issue relates to the endpoint not existing, as you are getting a 404 Not Found error: check that the endpoint exists and that you are calling it correctly.
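If the preflight itself ever fails (a CORS error rather than the harmless 204 shown here), the fix belongs on the server, which has to answer the OPTIONS request with the right CORS headers. A minimal sketch, assuming a Node/Express backend for the register route (the question does not say what the real backend is), using the cors middleware:

const express = require('express');
const cors = require('cors');

const app = express();

// Answer OPTIONS preflights with 204 and the CORS headers for this origin.
// 'http://localhost:8080' is an assumed Vue dev-server origin.
app.use(cors({ origin: 'http://localhost:8080' }));
app.use(express.json());

// Hypothetical handler; if this route is missing, the POST itself returns 404.
app.post('/register', (req, res) => {
    res.json({ success: true });
});

app.listen(3000);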

I am working on a chat app with socket.io and Node cluster, where all incoming sockets must be handled by a particular Node instance (required by the app)

In my master process, a TCP server intercepts all requests, and I want to separate normal http requests from the websocket ones. I need the websocket connections to be handled by one particular worker at a time (say worker 5 handles all connections until the number of socket connections reaches 10, and the next 10 are handled by worker 6). For that the master inspects the http packet and sends the socket to the appropriate worker via IPC, but I am unable to get the request object fired for my Express routes.
The server in the worker is receiving the connection event, as indicated in the app.js file at the line marked :LABEL, but none of my Express routes work.
Cluster.js
// cluster.js
const cluster = require('cluster');
const net = require('net');
const os = require('os');

if (cluster.isMaster) {
    var workers = [];
    const num_processes = os.cpus().length;

    var spawn = function(i) {
        workers[i] = cluster.fork();
        // Optional: Restart worker on exit
        workers[i].on('exit', function(worker, code, signal) {
            // logger.log('respawning worker', i);
            console.log('respawning worker', i);
            spawn(i);
        });
    };

    // Spawn workers.
    for (var i = 0; i < num_processes; i++) {
        spawn(i);
    }

    var server = net.createServer(function(c) { // 'connection' listener
        console.log('client connected');
        c.on('end', function() {
            console.log('client disconnected');
        });
        c.on('data', function(data) {
            console.log(data.toString());
            c.pause();
            workers[1].send('sticky-session:connection', c);
        });
    });

    server.listen(5000, function() { // 'listening' listener
        console.log('server bound');
    });

    // var server = net.createServer({ pauseOnConnect: true }, function (connection) {
    //     // Incoming request processing
    //     var remote = connection.remoteAddress;
    //     var local = connection.localAddress;
    //     var cIp = remote + local;
    //     var ip = cIp.match(/[0-9]+/g)[0].replace(/,/g, '');
    //     // var wIndex = (ip+Math.floor(Math.random() * 10)) % num_processes;
    //     var wIndex = 2;
    //     connection.on('end', function() {
    //         console.log('client disconnected');
    //     });
    //     connection.on('data', function(data) { console.log(data.toString()); });
    //     var worker = workers[wIndex];
    //     console.log("Message to work " + worker + ", remote: " + remote + ", local: " + local + ", ip: " + ip + ", index: " + wIndex);
    //     // logger.log("Message to work "+ worker+", remote: "+ remote+ ", local: "+ local+", ip: "+ ip +", index: "+ wIndex);
    //     worker.send('sticky-session:connection', connection);
    // });
    // server.maxConnections = Infinity;
    // server.listen(5000, function() { //'listening' listener
    //     console.log('server bound');
    // });

    // net.createServer({ pauseOnConnect: true }, (connection) => {
    //     console.log(connection);
    // }).listen(5000);
} else {
    require('./app');
}
server.js
const http = require('http');
const express = require("express");

const hostname = '0.0.0.0';
const port = process.env.PORT || 5001;

const app = express();
const server = http.createServer(app);

process.on('message', function(message, connection) {
    if (message !== 'sticky-session:connection') {
        return;
    }
    // console.log(connection);
    // Emulate a connection event on the server by emitting the
    // event with the connection the master sent us.
    server.emit('connection', connection);
    server.on('connection', (sock) => { console.log(sock) }); // :LABEL
    connection.resume();
});
I would also like suggestions on whether this approach makes sense, i.e. sending the request to a specific worker programmatically instead of leaving it to the Node cluster. The other way I know of to handle socket.io together with Node cluster is the Redis adapter, but for ease of work I am considering this approach.
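For reference, the worker side of this indirection pattern (the master pauses the socket and passes it over IPC, and the worker emits it on its own HTTP server) is usually written roughly as below; this is only a sketch with a hypothetical route, not the asker's app. Note that the http server registers its own 'connection' handler when it is created, so emitting the event is enough; also, reading 'data' in the master consumes the first bytes of the request, which may be why the worker's HTTP parser never dispatches the Express routes.

// app.js (worker) - sketch of the worker side of the indirection pattern
const http = require('http');
const express = require('express');

const app = express();

// hypothetical route, only here to verify that routing works in the worker
app.get('/hello', (req, res) => {
    res.send('handled by worker ' + process.pid);
});

const server = http.createServer(app);

// Bind a random local port just so the server is in a listening state;
// real connections arrive from the master over IPC.
server.listen(0, 'localhost');

process.on('message', (message, connection) => {
    if (message !== 'sticky-session:connection' || !connection) {
        return;
    }
    // Hand the socket to the HTTP server; the 'connection' handler that
    // http.createServer registered will parse the request and dispatch
    // the Express routes, so no extra server.on('connection') is needed.
    server.emit('connection', connection);
    connection.resume();
});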

How to access data in Parse Server via Cloud Code?

I am trying to make queries using the Cloud Code feature of our Parse Server. Unfortunately we cannot retrieve any data from the database. Our code looks as follows:
main.js:
Parse.Cloud.define('test', function(request, response) {
    var user = request.user;
    var token = user.getSessionToken();

    var query = new Parse.Query('Carpark');
    query.first({ sessionToken: token }) // pass the session token to first()
        .then(function(messages) {
            response.success(messages);
        }, function(error) {
            response.error(error);
        });
});
index.js:
var express = require('express');
var ParseServer = require('parse-server').ParseServer;
var path = require('path');

process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0";

var api = new ParseServer({
    databaseURI: 'mongodb://parse-server:[...]#localhost:27017/[...]',
    cloud: __dirname + '/cloud/main.js',
    appId: '[...]',
    masterKey: '[...]', // Add your master key here. Keep it secret!
    serverURL: 'https://backend.[...]/parse', // Don't forget to change to https if needed
    publicServerURL: 'https://backend.[...]/parse', // Don't forget to change to https if needed
    liveQuery: {
        classNames: ["Posts", "Comments"] // List of classes to support for query subscriptions
    }
});

// Client-keys like the javascript key or the .NET key are not necessary with parse-server
// If you wish to require them, you can set them as options in the initialization above:
// javascriptKey, restAPIKey, dotNetKey, clientKey

var app = express();

// var basicAuth = require('basic-auth-connect');
// app.use(basicAuth('triveme', 'triveme'));

app.use('/', express.static(path.join(__dirname, '/public')));
app.use('/parse', api);

app.get('/test', function(req, res) {
    res.sendFile(path.join(__dirname, '/public/test.html'));
});

var port = 61004;
var httpServer = require('http').createServer(app);
httpServer.listen(port, function() {
    console.log('parse-server-example running on port ' + port + '.');
});

// This will enable the Live Query real-time server
ParseServer.createLiveQueryServer(httpServer);
Example request from iOS-App:
PFCloud.callFunction(inBackground: "test", withParameters: nil) {
    (response, error) -> Void in
    if let response = response {
        let result = response
        print("Cloud data:", result)
    }
    if error != nil {
        print(error ?? "default cloud function error")
    }
}
I don't get any feedback from the server (no response and no error). What could be the cause of this issue?
Log request with verbose = 1:
REQUEST for [POST] /parse/functions/test: {} method=POST, url=/parse/functions/test, host=localhost:61004, accept=*/*, x-parse-session-token=[...], x-parse-application-id=[...].platform.dev2, x-parse-installation-id=[...], x-parse-os-version=10.2 (16D32), accept-language=en-us, accept-encoding=gzip, deflate, x-parse-client-version=i1.14.2, user-agent=trive.park/8 CFNetwork/808.2.16 Darwin/16.4.0, x-parse-app-build-version=8, x-parse-app-display-version=1.0, x-forwarded-for=[...], x-forwarded-host=backend.[...], x-forwarded-server=backend.[...], connection=Keep-Alive, content-length=0,
Log Response:
4|trive-pa | error: Failed running cloud function test for user nZ76ZimELw with:
4|trive-pa | Input: {}
4|trive-pa | Error: {"code":141,"message":{"code":100,"message":"XMLHttpRequest failed: \"Unable to connect to the Parse API\""}} functionName=test, code=141, code=100, message=XMLHttpRequest failed: "Unable to connect to the Parse API", user=nZ76ZimELw
4|trive-pa | error: Error generating response. ParseError {
4|trive-pa | code: 141,
4|trive-pa | message:
4|trive-pa | ParseError {
4|trive-pa | code: 100,
4|trive-pa | message: 'XMLHttpRequest failed: "Unable to connect to the Parse API"' } } code=141, code=100, message=XMLHttpRequest failed: "Unable to connect to the Parse API"
4|trive-pa | [object Object]
parse-server version: 2.2.23
self hosted on Apache Server
MongoDB

How to set timeout to JSONP?

I have some JSONP requests in my application, and I want to set a timeout for them. How can I do that?
It could maybe look something like this, if it is possible:
Ext.util.JSONP.request({
    url: mhid.dashboard.hbf.controller + '/get_dashboard',
    method: 'POST',
    timeout: 50000, // TIMEOUT
    callbackKey: 'jsonp_callback',
    params: {
        'site': site,
        'fleet': fleet,
        'sn': sn,
        'format': 'json'
    },
    callback: function(response, opts) {
        var obj = response;
        tpl.overwrite('content', obj);
        loadMask.hide();
    },
    failure: function(response, opts) {
        alert('Failure');
    }
});
Thanks in advance
I don't think this is possible using Ext.util.JSONP; you should take a look at the Ext.Ajax class instead. Also note that POST is not possible using JSONP (refer to the Sencha forum).
If you stay with JSONP, you will need to implement your own timeout and execute the failure callback yourself:
var failureCallback = function() {
    // MUST remove the current request,
    // or it will not dispatch any subsequent new JSONP request
    // unless you refresh your page
    delete Ext.util.JSONP.current;
    alert('Failure');
};

Ext.util.JSONP.request({ .. }); // as usual

var failureTimeoutId = setTimeout(function() {
    failureCallback();
}, 30000);
Remember to discard the timeout if your request succeeds, and to discard the current request if it has timed out, as in the sketch below.
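Putting both points together, a sketch against the Ext.util.JSONP API used in the question (the method: 'POST' option is dropped since JSONP only supports GET, and the failure handling is done manually as described above):

var TIMEOUT_MS = 30000;
var failureTimeoutId = null;
var timedOut = false;

var failureCallback = function() {
    timedOut = true;
    // MUST remove the current request, or no subsequent JSONP request
    // will be dispatched until the page is refreshed
    delete Ext.util.JSONP.current;
    alert('Failure');
};

Ext.util.JSONP.request({
    url: mhid.dashboard.hbf.controller + '/get_dashboard',
    callbackKey: 'jsonp_callback',
    params: {
        'site': site,
        'fleet': fleet,
        'sn': sn,
        'format': 'json'
    },
    callback: function(response, opts) {
        if (timedOut) {
            return; // the response arrived after we already gave up
        }
        clearTimeout(failureTimeoutId); // discard the timeout on success
        tpl.overwrite('content', response);
        loadMask.hide();
    }
});

failureTimeoutId = setTimeout(failureCallback, TIMEOUT_MS);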