So I use this function to switch context to the web view in JavaScript:
let contexts = driver.getContexts();
console.log(contexts)
let webview = driver.switchContext(contexts[1])
But after a long wait I get a timeout. Does anyone know the issue?
I have added driver.pause(100000) as well, but the result is the same: a timeout.
Error: function timed out, ensure the promise resolves within 60000 milliseconds
at Timeout.
at listOnTimeout (internal/timers.js:555:17)
at processTimers (internal/timers.js:498:7)
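One likely cause (an assumption, not confirmed in the thread): in WebdriverIO, getContexts() and switchContext() return promises, so without await, contexts is a pending promise and contexts[1] is undefined. A minimal sketch of the awaited flow, using a stub driver object so it runs standalone (the context names and the stub itself are illustrative):

```javascript
// Stub standing in for WebdriverIO's `driver` (getContexts and
// switchContext are real WebdriverIO commands, but this stub and the
// context names are assumptions for illustration).
const driver = {
  context: 'NATIVE_APP',
  async getContexts() { return ['NATIVE_APP', 'WEBVIEW_com.example']; },
  async switchContext(name) { this.context = name; },
};

async function switchToWebview() {
  const contexts = await driver.getContexts();   // await, don't index a promise
  const webview = contexts.find(c => c.includes('WEBVIEW'));
  await driver.switchContext(webview);
  return driver.context;
}

switchToWebview().then(ctx => console.log(ctx)); // logs the WEBVIEW context
```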
I'm using the @aspnet/signalr package in React Native to connect to my server.
Everything works correctly while the app is in the foreground: if I lose the connection, I can reconnect without receiving errors.
But when I open the app after a long time in the background, I can reconnect to my server immediately, yet I receive the error Error: Connection disconnected with error 'Error: Server timeout elapsed without receiving a message from the server.'
How can I intercept this error?
This is a piece of my code:
connection = new signalR.HubConnectionBuilder()
    .withUrl("http://192.168.xxx.xxx/notificationHub?userId=" + authInfo.userId)
    .build();

connection.on("receiveMessage", data => {
    console.log('*** MESSAGE RECEIVED ***');
    Alert.alert(data);
});

connection.start()
    .then(() => console.log("Connection established"))
    .catch((err) => console.log(err));

// onclose takes a callback; it does not return a promise
connection.onclose(() => connection.start());
Thanks
Error: Connection disconnected with error 'Error: Server timeout elapsed without receiving a message from the server.'
The default value of serverTimeoutInMilliseconds is 30,000 milliseconds (30 seconds). If this timeout elapses without receiving any messages from the server, the connection is terminated with the above error.
To troubleshoot the issue, please check whether you updated the KeepAliveInterval setting of your SignalR hub without also changing the serverTimeoutInMilliseconds value on the client side.
The recommended serverTimeoutInMilliseconds value is double the KeepAliveInterval value.
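That relationship can be sketched as follows (the numbers are the SignalR defaults: the hub pings every 15 s, so the client should allow 30 s; the builder lines are commented out so this runs standalone):

```javascript
// Sketch, assuming the @aspnet/signalr client. The doubling rule is
// the point; the hub URL and variable names are illustrative.
const keepAliveIntervalMs = 15000;               // hub-side KeepAliveInterval
const serverTimeoutMs = keepAliveIntervalMs * 2; // client-side: double it

// const connection = new signalR.HubConnectionBuilder()
//     .withUrl("/notificationHub")
//     .build();
// connection.serverTimeoutInMilliseconds = serverTimeoutMs;

console.log(serverTimeoutMs); // 30000
```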
Update:
Is there a way to intercept this error and handle it without the warning?
If you do not want the SignalR client to log this error in the browser console tab, you can modify the LogLevel to None:
.configureLogging(signalR.LogLevel.None)
Then manage the error in the onclose callback, like below.
connection.onclose(error => {
//...
console.log("Connection Disconnected");
});
"peerConnection new connection state: connected"
{
"janus": "webrtcup",
"session_id": 3414770196795261,
"sender": 4530256184020316
}
{
"janus": "media",
"session_id": 3414770196795261,
"sender": 4530256184020316,
"type": "audio",
"receiving": true
}
... 1 minute passes
"peerConnection new connection state: disconnected"
{
"janus": "timeout",
"session_id": 3414770196795261
}
"peerConnection new connection state: failed"
See pastebin for the full logs.
I'm trying to join a videoroom on my Janus server. All requests seem to succeed, and my device shows a connected WebRTC status for around one minute before the connection is canceled because of a timeout.
The WebRTC connection breaking off seems to match up with the WebSocket connection to Janus' API breaking.
I tried adding a heartbeat WebSocket message every 10 seconds, but that didn't help. I'm
joining the room
receiving my local SDP plus candidates
configuring the room with said SDP
receiving an answer from Janus
accepting that answer with my WebRTC peer connection.
Not sure what goes wrong here.
I also tried setting a STUN server inside the Janus config, to no avail. Same issue.
Added the server logs to the pastebin too.
RTFM: Janus' websocket connections require a keepalive every <60s.
An important aspect to point out is related to keep-alive messages for WebSockets Janus channels. A Janus session is kept alive as long as there's no inactivity for 60 seconds: if no messages have been received in that time frame, the session is torn down by the server. A normal activity on a session is usually enough to prevent that; for a more prolonged inactivity with respect to messaging, on plain HTTP the session is usually kept alive through the regular long poll requests, which act as activity as far as the session is concerned. This aid is obviously not possible when using WebSockets, where a single channel is used both for sending requests and receiving events and responses. For this reason, an ad-hoc message for keeping alive a Janus session should be triggered on a regular basis. Link.
You need to send a 'keepalive' message with the same 'session_id' to keep the session going. Janus closes the session after 60 seconds.
Look for the implementation: https://janus.conf.meetecho.com/docs/rest.html
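The same keepalive in JavaScript might look like this (a sketch: the field names follow the Janus WebSocket API, while `ws` and the session id are whatever your client already holds):

```javascript
// Build the keepalive payload; field names follow the Janus WS API.
function buildKeepAlive(sessionId) {
  return JSON.stringify({
    janus: 'keepalive',
    session_id: sessionId,
    transaction: Math.random().toString(36).slice(2), // random transaction id
  });
}

// Fire it on a timer comfortably under Janus' 60 s session timeout.
function startKeepAlive(ws, sessionId, intervalMs = 30000) {
  const timer = setInterval(() => ws.send(buildKeepAlive(sessionId)), intervalMs);
  return () => clearInterval(timer); // call this when the socket closes
}
```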
Or do it my way: I send it every 30 seconds from a Runnable posted to a Handler.
private Handler mHandler;

private Runnable fireKeepAlive = new Runnable() {
    @Override
    public void run() {
        String transactionId = getRandomStringId();
        JSONObject request = new JSONObject();
        try {
            request.put("janus", "keepalive");
            request.put("session_id", yourSessionId);
            request.put("transaction", transactionId);
        } catch (JSONException e) {
            e.printStackTrace();
        }
        myWebSocketConnection.sendTextMessage(request.toString());
        mHandler.postDelayed(fireKeepAlive, 30000);
    }
};
Then in onCreate():
mHandler = new Handler();
then call this where the WebSocket connection opens:
mHandler.post(fireKeepAlive);
Be sure to remove the callback in onDestroy():
mHandler.removeCallbacks(fireKeepAlive);
To test my app, I open multiple connections to a SignalR hub running on localhost in my browser. They connect fine up until one client calls the Host function, at which point the other clients throw an error:
[2020-04-23T18:17:18.374Z] Debug: HubConnection connected successfully.
Utils.ts:178 [2020-04-23T18:17:18.511Z] Debug: HttpConnection.stopConnection(Error: WebSocket closed with status code: 1011 ().) called while in state Connected.
Utils.ts:168 [2020-04-23T18:17:18.511Z] Error: Connection disconnected with error 'Error: WebSocket closed with status code: 1011 ().'.
Utils.ts:178 [2020-04-23T18:17:18.512Z] Debug: HubConnection.connectionClosed(Error: WebSocket closed with status code: 1011 ().) called while in state Connected.
Here's the function on the server side
public async Task Host(string lobbyId)
{
    // generate _lobbies here...
    await Clients.Others.SendAsync("ReceiveLobbies",
        new { lobbies = new Lobby[] { _lobbies } });
}
The client-side function "ReceiveLobbies" doesn't get called since the connection is closed. I did a quick test and know that I can have multiple clients on localhost, so why is this happening?
You can try to install the Microsoft.AspNetCore.SignalR.Protocols.NewtonsoftJson NuGet package to switch to Newtonsoft.Json; then you can make it ignore circular references rather than throwing an exception by setting ReferenceLoopHandling, like below.
services.AddSignalR().AddNewtonsoftJsonProtocol(opt => {
    opt.PayloadSerializerSettings.ReferenceLoopHandling = Newtonsoft.Json.ReferenceLoopHandling.Ignore;
});
After debugging the SignalR app I realized I was getting a JSON serialization error because the depth of the lobby variable exceeded limits, for the now-obvious reason that my Lobby object contains a list of Player objects that each have their own Lobby property, ergo an infinite recursion during serialization.
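The failure mode is easy to reproduce in plain JavaScript (the object names here are illustrative, mirroring the Lobby/Player shape described above):

```javascript
// A lobby whose players each point back at the lobby cannot be
// serialized naively: JSON.stringify detects the cycle and throws.
const lobby = { id: 'lobby1', players: [] };
lobby.players.push({ name: 'p1', lobby });   // back-reference creates a cycle

let threw = false;
try {
  JSON.stringify(lobby);
} catch (e) {
  threw = true;                              // TypeError: circular structure
}
console.log(threw); // true
```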
I am using a WebSocket to push data from a process (running in the background) to my Electron application (the renderer; it's an electron-vue app). Mostly this works great: the data is received and displayed instantly.
In some cases, however, I noticed that the WebSocket client seemed to buffer incoming messages and only trigger the receive event after some delay, so messages arrived as a batch.
To verify that the server isn't buffering anything, I ran a second connection and simply logged the data (Chrome add-on); there, all the data is received and processed instantly, while my Electron application delays the messages.
I am using ReconnectingWebsocket but also tried a plain WebSocket:
let webSocket = new WebSocket('ws://0.0.0.0:7700')
webSocket.onopen = function(openEvent) {
console.log('WebSocket OPEN: ' + JSON.stringify(openEvent, null, 4))
}
webSocket.onclose = function(closeEvent) {
console.log('WebSocket CLOSE: ' + JSON.stringify(closeEvent, null, 4))
}
webSocket.onerror = function(errorEvent) {
console.log('WebSocket ERROR: ' + JSON.stringify(errorEvent, null, 4))
}
webSocket.onmessage = function(messageEvent) {
var wsMsg = messageEvent.data
console.log('WebSocket MESSAGE: ' + wsMsg)
}
The WebSocket MESSAGE: line is only displayed after some delay. Is there any configuration option, like buffering on the client side, or must the render process be called more often?
Not sure of the solution but we have a demo app using Vue + Electron at https://github.com/firesharkstudios/butterfly-server-dotnet/tree/master/Butterfly.Example.Todo that also uses WebSockets. I've never seen a delay or buffering like you are seeing. Maybe you can compare the implementations to find a cause.
It turns out that it wasn't the websocket implementation but electron blocking the renderer process completely while scrolling, thus the delayed reception. I had to move the websocket connection out of the renderer and tunnel all messages using the IPC system.
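A minimal sketch of that workaround (names are assumptions, not the poster's actual code): keep the socket in the main process and forward each message over IPC, so a temporarily blocked renderer catches up on queued IPC events instead of a stalled socket callback. The forwarding core is just:

```javascript
// Generic forwarder: pipe every 'message' event from a socket-like
// emitter into a send function. In Electron, `send` would be
// win.webContents.send from the main process, and the renderer would
// listen with ipcRenderer.on('ws-message', ...). The channel name
// 'ws-message' is an illustrative assumption.
function bridge(socket, send) {
  socket.on('message', data => send('ws-message', data.toString()));
}
```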
We are seeing a surprising scenario when we are on a slow network connection and our calls to the WL Server time out.
This happens at WL.Client.connect as well as on invokeProcedure:
we execute the call with a timeout of 10 seconds
the network connection is slow so the call times out
the defined onFailure procedure associated to that call is executed
the WL Server responds with a valid response after the timeout
the onSuccess procedure associated to that call is executed
Is this the designed and intended behavior of the WL Client Framework? Is this specified in the InfoCenter documentation or somewhere?
All developers in our team expected these two procedures to be exclusive and our code was implemented based on this assumption. We are now investigating options on how to match a timed-out/failed response to a success response to make sure we achieve an exclusive execution of onFailure or onSuccess code/logic in our app.
Note: we did not test this with connectOnStartup=true, and since the initOptions does not provide an onSuccess procedure (WL handles that internally), it might be even harder to implement an exclusive execution in that case.
That seems like expected behavior, but don't quote me on that.
You can get the behavior you want (only run failure logic when it fails, and only run success logic when it succeeds) using jQuery.Deferred. There are ways of creating these deferred objects with Dojo and other libraries, but I just tested with jQuery's implementation, which ships with every version of IBM Worklight.
$(function () {
    var WL = {};
    WL.Client = {};

    // Mock that fires both callbacks, like the timeout scenario above
    WL.Client.invokeProcedureMock = function (options) {
        options.onFailure('failure');
        options.onSuccess('success');
    };

    var dfd = $.Deferred();
    var options = {
        onSuccess: dfd.resolve,
        onFailure: dfd.reject
    };

    WL.Client.invokeProcedureMock(options);

    dfd
        .done(function (msg) {
            // handle invokeProcedure success
            console.log(msg);
        })
        .fail(function (msg) {
            // handle invokeProcedure failure
            console.log(msg);
        });
});
I put the code above in a JSFiddle; notice that even though the mock calls the onSuccess callback, it has no effect because the failure callback was already called (which rejected the deferred). You would add your application logic inside the .done or .fail blocks.
This is just a suggestion, there are likely many ways to approach your issue.
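The same idea works without jQuery (a sketch; invokeProcedureMock again stands in for the real WL.Client.invokeProcedure): a native Promise settles exactly once, so whichever callback fires first wins and the late one is ignored.

```javascript
// Wrap a callback-style invoke in a Promise; resolve/reject are
// one-shot, so a success arriving after a timeout-driven failure
// (or vice versa) has no effect.
function invokeExclusive(invoke, options) {
  return new Promise((resolve, reject) => {
    invoke({ ...options, onSuccess: resolve, onFailure: reject });
  });
}

// Mock that fires both callbacks, like the timeout scenario above.
const invokeProcedureMock = o => { o.onFailure('failure'); o.onSuccess('success'); };

invokeExclusive(invokeProcedureMock, {})
  .then(msg => console.log('success:', msg),
        msg => console.log('failure:', msg)); // only the failure branch runs
```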