On the backend I have the following modules:
Express, used for an event-stream from backend -> frontend
and for GET/POST request/response from frontend -> backend.
Frontend: Electron + Vue + axios.
When developing the application, I run both parts separately from each other and everything works well.
But in production, the backend runs as a child process of the frontend (Electron):
const child = spawn('backend.exe', {
detached: true
});
child.unref();
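(For reference, spawn defaults to piped stdio; if the parent never reads those pipes, a child that writes a lot to stdout/stderr can eventually block once the pipe buffer fills. A variant of the same call with the pipes explicitly disabled, in case unread output is a factor here, only adds the stdio option:)

const child = spawn('backend.exe', {
  detached: true,
  stdio: 'ignore' // no pipes back to the parent, so unread output cannot block the child
});
child.unref();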
Over time, and sometimes almost immediately, the backend stops responding.
When I open DevTools, I see that all GET/POST requests end up in the "canceled" status, but the child process is still alive, and if the parent process is closed, the child is also closed by handling:
app.on('will-quit', () => {
  child.kill('SIGTERM');
  process.exit(0);
});
If I terminate the backend child process and instead run the backend separately, outside the parent process, then everything works fine.
I also tried starting the backend through exec; in that case everything also works fine, but exec does not give me a handle for killing the process when the Electron application is updated.
How to deal with this spawn behavior?
Why do requests stop coming?
How can I terminate a child process started with exec? child.kill('SIGTERM') in the app.on('will-quit', () => {...}) handler doesn't work for a process started with exec.
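One possibility that comes to mind (Windows-specific, since backend.exe is involved, and only a sketch): exec starts the command through a shell, so child.pid refers to that shell, and killing the whole process tree by PID is one way around it.

const { exec, execSync } = require('child_process');

const child = exec('backend.exe');

app.on('will-quit', () => {
  // /T kills the whole process tree under that PID, /F forces termination (Windows taskkill)
  execSync(`taskkill /pid ${child.pid} /T /F`);
});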
Related
I'm using Ionic/Vue, and when I run the application in my browser it works well, but when I compile and run it on an Android emulator I get the error below. Please help.
Error: Request aborted at e.exports (chunk-vendors.85bc3696.js:1) at XMLHttpRequest.m.onabort (chunk-vendors.85bc3696.js:26)
That error indicates that one of your Vue components gets destroyed before the AJAX request completes. Check over all your code: everywhere you invoke a remote service, make sure that you await the result before allowing anything to switch to another route, since navigation leads to component disposal. Sometimes you also need to 'unsubscribe' from (i.e. cancel) the HTTP request when the component is disposed; see the sketch below.
Actually, that is probably not an error: it is normal for a request to be aborted if its result is no longer needed.
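For example, a minimal sketch of cancelling an in-flight axios request when the component is disposed (assuming Vue 3's onUnmounted hook and an axios version that supports AbortController signals; the endpoint and names are illustrative):

import axios from 'axios';
import { ref, onUnmounted } from 'vue';

export default {
  setup() {
    const product = ref(null);
    // one controller per component; aborting it cancels the pending request
    const controller = new AbortController();

    axios.get('/api/product/123', { signal: controller.signal })
      .then(res => { product.value = res.data; })
      .catch(err => {
        // an aborted request is expected if the component was torn down first
        if (!axios.isCancel(err)) console.error(err);
      });

    // cancel the request when the component is destroyed
    onUnmounted(() => controller.abort());

    return { product };
  }
};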
We are "successfully" running our gherkin-testcafe build headless on EC2 against Chromium. The final issue we are dealing with is that at a certain point in the test a CTA button shows "...loading" instead of "Add to Bag", presumably because a service call that gets the status of the product (out of stock, in stock, no longer carried, etc.) is failing. The tests work locally, of course, and there we have the luxury of debugging by opening Chrome's dev tools and inspecting the network calls. But all we can do on EC2 is take a video and see where it fails. Is there a way to view the logs of all the calls being made by TestCafe's proxy browser so we can confirm which one is failing and why? We are using
const rlogger = RequestLogger(/.*/, {
  logRequestHeaders: true,
  logResponseHeaders: true
});
to log our headers, but we are not getting very explicit reasons why calls are not working.
TestCafe uses the debug module for its internal logging. So, to view the TestCafe proxy logs, you can set the DEBUG environment variable as follows:
export DEBUG='hammerhead:*'
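In addition, the RequestLogger instance itself records the status code of every response, which can be printed from within a test to see which call is failing. A rough sketch, assuming the logger is attached with requestHooks (the fixture name, page URL, and logResponseBody option are illustrative):

import { RequestLogger } from 'testcafe';

const rlogger = RequestLogger(/.*/, {
    logRequestHeaders: true,
    logResponseHeaders: true,
    logResponseBody: true
});

fixture('Product page')
    .page('https://example.com/product')
    .requestHooks(rlogger);

test('Add to Bag becomes available', async t => {
    // print every request that came back with an error status
    rlogger.requests
        .filter(r => r.response && r.response.statusCode >= 400)
        .forEach(r => console.log(r.request.url, r.response.statusCode));
});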
MFP Product version: 8.0.0.00-20180220-083852
MFP Client Version: 8.0.2018080605
I have an app which is using RequireJS, Backbone & jQuery.
I am loading the main js like this:
<script data-main="js/main" src="js/lib/require/require.js"></script>
I am making sure the call to main.js is made inside wlCommonInit. The app is loaded with all its dependencies.
function wlCommonInit() {
  main(); // main.js has a single method named main
}
I have a call to WL.Client.connect at the end of the main function, which just executes and does nothing.
A subsequent call to "WL.Client.connect" returns the following error message:
Failed to connect to Worklight Server:
{"responseHeaders":{},
"responseText":"undefined",
"errorCode":"CONNECTION_IN_PROGRESS"}
What could be the reason for the above error? We make the call to WL.Client.connect inside wlCommonInit, so presumably the whole WL API has loaded by the time wlCommonInit is invoked.
I tried different MFP client SDK versions other than the ones mentioned above; I don't see any change.
The reason for the response
{"responseHeaders":{},"responseText":"undefined","errorCode":"CONNECTION_IN_PROGRESS"}
is that you have fired another connect() call before the first WL.Client.connect() has succeeded or failed.
Wait until the first call succeeds, fails, or times out from inactivity.
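For illustration only, a guard against overlapping connect calls might look like the sketch below (this assumes the classic WL.Client.connect(options) signature with onSuccess/onFailure callbacks, which can differ between MFP client SDK versions; connectInProgress and connectToServer are made-up names):

var connectInProgress = false;

function connectToServer() {
  if (connectInProgress) {
    return; // a connect() is already in flight; do not fire another one
  }
  connectInProgress = true;
  WL.Client.connect({
    onSuccess: function () {
      connectInProgress = false;
      // safe to start invoking adapters here
    },
    onFailure: function (error) {
      connectInProgress = false;
      console.log('Connect failed: ' + JSON.stringify(error));
    }
  });
}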
I am currently running some Google Cloud Functions (in TypeScript) that require a connection to a Redis instance in order to LPUSH into the queue (on other instances, I am using Redis as a queue worker).
Everything is fine, except I am getting a huge number of ECONNRESET and connection-timeout related errors despite everything working properly.
The following code executes successfully on the cloud function, but I am still seeing constant errors related to the connection to Redis.
I think it is somehow related to how I am importing my client (ioredis). I have utils/index.ts and utils/redis.js, and inside redis.js I have:
const Redis = require('ioredis');
module.exports = new Redis(6380, 'MYCACHE.redis.cache.windows.net', { tls: true, password: 'PASS' });
Then I am importing this in my utils/index.ts like so: code missing
And exporting some async function like: code missing
When executing in the GCF environment, I get the expected number of results in results.length, and I can see (by monitoring Redis directly) that the list was pushed to the queue as expected.
Nevertheless, these errors continue to appear incessantly.
[ioredis] Unhandled error event: Error: read ECONNRESET at _errnoException (util.js:1022:11) at TLSWrap.onread (net.js:628:25)
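For reference, the missing pieces might look roughly like this (the import path, queue key, and lpushToQueue name are illustrative guesses, not the actual code; an 'error' listener is also shown, since ioredis prints "Unhandled error event" when no listener is registered):

const redis = require('./redis');

// Without a listener, ioredis reports dropped connections as
// "[ioredis] Unhandled error event" even if commands still succeed after a retry.
redis.on('error', (err) => {
  console.error('Redis connection error:', err.message);
});

// Illustrative helper: push a batch of results onto a queue list
async function lpushToQueue(results) {
  const payloads = results.map((r) => JSON.stringify(r));
  return redis.lpush('my-queue', ...payloads); // hypothetical key name
}

module.exports = { lpushToQueue };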
I'm having a hard time trying to get my task to stay persistent and run indefinitely from a WCF service. I may be doing this the wrong way and am willing to take suggestions.
I have a task that starts to process any incoming requests that are dropped into a BlockingCollection. From what I understand, the GetConsumingEnumerable() method is supposed to allow me to persistently pull data as it arrives. It works with no problem by itself. I was able to process dozens of requests without a single error or flaw, using a Windows Forms app to fill out the requests and submit them. Once I was confident in this process, I wired it up to my site via an asmx web service and used jQuery ajax calls to submit requests.
The site submits requests based on a URL that is submitted; the web service downloads the HTML content from the URL and looks for other URLs within it. It then creates a request for each URL it finds and submits it to the BlockingCollection. Within the WCF service, if the application is Online (i.e. the task has started), it pulls the requests from GetConsumingEnumerable via a Parallel.ForEach and processes each one.
This works for the first few submissions, but then the task just stops unexpectedly. Of course, this is doing 10x more requests than I could simulate in testing, but I expected it to just throttle. I believe the issue is in my method that starts the task:
public void Start()
{
Online = true;
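// run the consumer loop on a dedicated long-running task instead of a normal pool thread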
Task.Factory.StartNew(() =>
{
tokenSource = new CancellationTokenSource();
CancellationToken token = tokenSource.Token;
ParallelOptions options = new ParallelOptions();
options.MaxDegreeOfParallelism = 20;
options.CancellationToken = token;
try
{
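// block on the collection and process incoming requests, up to 20 in parallel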
Parallel.ForEach(FixedWidthQueue.GetConsumingEnumerable(token), options, (request) =>
{
Process(request);
options.CancellationToken.ThrowIfCancellationRequested();
});
}
catch (OperationCanceledException e)
{
Console.WriteLine(e.Message);
return;
}
}, TaskCreationOptions.LongRunning);
}
I've thought about moving this into a WF4 service, wiring it up as a workflow, and using Workflow Persistence, but I'm not willing to learn WF4 unless necessary. Please let me know if more information is needed.
The code you have shown is correct by itself.
However, there are a few things that can go wrong:
If an exception occurs, your task stops (of course). Try adding a try-catch and log the exception.
If you start worker threads in a hosted environment (ASP.NET, WCF, SQL Server), the host can decide arbitrarily (without warning) to shut down any worker process. For example, if your ASP.NET site is inactive for some time, the app is shut down. The hosts I just mentioned are not made to have custom threads running. You will probably have more success using a dedicated application (.exe) or even a Windows Service.
It turns out the cause of this issue was the WCF binding configuration. The task suddenly stopped because WCF killed the connection due to an open timeout. The open timeout setting is the time a request will wait for the service to open a connection before timing out. In certain situations it reached the limit of 10 max connections, which caused incoming connections to get backed up waiting for a free one. I made sure that I closed all connections to the host after the transactions were complete, and I also gave in and raised the max connections and the open timeout period. After this, it ran flawlessly.