"version:redis-3.0.2, file:rdb.c, method: int rdbSave(char * filename)", there're some UPDATE action to the global varaible "server":
server.dirty = 0;
server.lastsave = time(NULL);
server.lastbgsave_status = REDIS_OK;
I wonder, how can a child process update a variable in the parent process? Theoretically, it can't.
rdbSave is run in the foreground on the main event loop thread, hence the update isn't done by a child process.
Look at rdbSaveBackground for the fork implementation.
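To see why a child's writes could never reach the parent anyway, here is a minimal standalone C sketch (not Redis code; the dirty variable merely stands in for a field of server): after fork() each process has its own copy-on-write copy of the address space, so an assignment in the child never shows up in the parent.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Stand-in for a global like server.dirty; not the Redis struct. */
static long long dirty = 42;

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: this write only touches the child's private copy. */
        dirty = 0;
        _exit(0);
    }
    /* Parent: reap the child, then look at its own, unchanged copy. */
    waitpid(pid, NULL, 0);
    printf("parent still sees dirty = %lld\n", dirty); /* prints 42 */
    return 0;
}

That is also why, in the background-save path, it has to be the parent that updates these fields after the forked child reports back.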
I have the following code and I need to check the child process's status, but the sleep() confuses me. I think that the child becomes a zombie for a period of time (until the parent finishes sleeping and waits). If this is correct, then what happens if the parent sleeps for 1 second instead of 1000? Will the child become an orphan for a period of time? Or does the process finish correctly since the parent waits?
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid) {          /* parent */
        sleep(1000);
        wait(NULL);
    }
    else {              /* child */
        sleep(5);
        printf("Hello!\n");
    }
    return 0;
}
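If the goal is to check the child's status without blocking in wait(), one option (my suggestion, not part of the original snippet) is to poll with waitpid() and WNOHANG; a minimal sketch using the same fork layout as above:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid) {
        int status = 0;
        /* WNOHANG makes waitpid return immediately: 0 while the child is
         * still running, the child's pid once it has exited (reaping it). */
        while (waitpid(pid, &status, WNOHANG) == 0) {
            puts("child has not exited yet");
            sleep(1);
        }
        if (WIFEXITED(status))
            printf("child exited with status %d\n", WEXITSTATUS(status));
    }
    else {
        sleep(5);
        printf("Hello!\n");
    }
    return 0;
}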
I have an Eclipse plugin which has some performance issues. Looking into the Progress view, sometimes there are multiple jobs waiting, and from the code most of its architecture is based on classes which extend WorkspaceJob, mixed with Guava EventBus events. The current solution also involves nested jobs...
I read the documentation, I understand their purpose, but I don't get it: why would I use a workspace job when I could run syncExec/asyncExec from methods which get triggered when an event is sent on the bus?
For example, instead of creating 3 jobs which wait for one another, I could create an event which triggers what Job 1 would have executed; then, when the method is finished, it would send a different event type which triggers a method that does what Job 2 would have done, and so on...
So instead of:
WorkspaceJob Job1 = new WorkspaceJob("Job1");
Job1.schedule();
WorkspaceJob Job2 = new WorkspaceJob("Job2");
Job2.schedule();
WorkspaceJob Job1 = new WorkspaceJob("Job3");
Job3.schedule();
I could use:
@Subscribe
public void replaceJob1(StartJob1Event event) {
    // do what runInWorkspace() of Job1 would have done
    com.something.getStaticEventBus().post(new Job1FinishedEvent());
}

@Subscribe
public void replaceJob2(Job1FinishedEvent event) {
    // do what runInWorkspace() of Job2 would have done
    com.something.getStaticEventBus().post(new Job2FinishedEvent());
}

@Subscribe
public void replaceJob3(Job2FinishedEvent event) {
    // do what runInWorkspace() of Job3 would have done
    com.something.getStaticEventBus().post(new Job3FinishedEvent());
}
I haven't tried it yet because I simplified the ideas as much as I could and the problem is more complex than that, but I think that the EventBus would win in terms of performance over the WorkspaceJobs.
Can anyone confirm my idea or tell me why I shouldn't try this (apart from the fact that I must have a good architecture for my events)?
WorkspaceJob delays resource change events until the job finishes. This prevents components listening for resource changes from receiving half-completed changes. This may or may not be important to your application.
I can't comment on the Guava code as I don't know anything about it - but note that if your code is long running you must make sure it runs in a background thread (which WorkspaceJob does).
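For reference, the shape of a WorkspaceJob looks roughly like this (the class and job label are illustrative); resource changes made inside runInWorkspace are batched and broadcast as one delta after the method returns:

import org.eclipse.core.resources.WorkspaceJob;
import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Status;

public class ExampleWorkspaceJob extends WorkspaceJob {

    public ExampleWorkspaceJob() {
        super("Example job");
    }

    @Override
    public IStatus runInWorkspace(IProgressMonitor monitor) throws CoreException {
        // Modify workspace resources here; this runs in a background thread,
        // and resource-change listeners see the resulting deltas only after it returns.
        return Status.OK_STATUS;
    }
}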
I am using Hangfire to trigger a database retrieval operation as a background job.
This operation is only supposed to happen once, and can be triggered in multiple ways (for example, in the UI whenever a user drags and drops a tool, I need to fire that job in the background; but if another tool is dragged and dropped, I don't want to fire the background job again, as the data has already been prefetched from the database).
This is what my code looks like now:
var jobId = BackgroundJob.Enqueue<BackgroundModelHelper>( (x) => x.PreFetchBillingByTimePeriods(organizationId) );
What I want is some kind of check before I execute the above statement, to find out if a background job has already been fired; if yes, then do not fire another, and if not, then enqueue this one.
for example:
bool prefetchIsFired = false;
// find out if a background job has already been fired. If yes, set prefetchIsFired to true.
if (!prefetchIsFired)
var jobId = BackgroundJob.Enqueue<BackgroundModelHelper>( (x) => x.PreFetchBillingByTimePeriods(organizationId, null) );
You can use a filter (DisableMultipleQueuedItemsFilter) on your job method, as shown here: https://discuss.hangfire.io/t/how-do-i-prevent-creation-of-duplicate-jobs/1222/4
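The filter is the robust option because the duplicate check lives in Hangfire itself. If a purely in-process guard is enough, a minimal sketch of the "check before enqueue" idea from the question could look like this (PrefetchGuard and the int type for organizationId are my assumptions, not from the original code):

using System.Collections.Concurrent;
using Hangfire;

public static class PrefetchGuard
{
    // Remembers which organizations already have the prefetch job enqueued.
    // Note: this only covers the current process; the filter linked above
    // also covers jobs enqueued from other processes/servers.
    private static readonly ConcurrentDictionary<int, bool> Enqueued =
        new ConcurrentDictionary<int, bool>();

    public static void EnqueuePrefetchOnce(int organizationId)
    {
        // TryAdd is atomic: only the first caller for a given organization wins.
        if (Enqueued.TryAdd(organizationId, true))
        {
            BackgroundJob.Enqueue<BackgroundModelHelper>(
                x => x.PreFetchBillingByTimePeriods(organizationId));
        }
    }
}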
I'm using the following MOC setup:
Parent - using persistent store coordinator (Main queue)
Child1 - using parent (Private queue)
Child2 - using parent (Private queue)
Child3 - using parent (Private queue)
Children periodically save their changes and respawn as new snapshots of the main MOC when needed.
All works fine until I try to save the main MOC. This is the error message I get: Cannot update object that was never inserted
Unresolved error Error Domain=NSCocoaErrorDomain Code=134030 "The operation couldn’t be completed. (Cocoa error 134030.)" UserInfo=0x1758e200 {NSAffectedObjectsErrorKey=(
" (entity: Event; id: 0x1767d3d0 ; data: {\n dateBegin = nil;\n dateEnd = nil;\n identifier = nil;\n identifierBegin = 0;\n isProcessed = 1;\n nPhotos = 0;\n name = nil;\n photos = \"\";\n})"
), NSUnderlyingException=Cannot update object that was never inserted.},
It doesn't happen all the time, and removing time-consuming operations makes it happen less frequently. I also noticed that during the exception the other MOCs are busy saving or querying. I use performBlock or performBlockAndWait for all MOC-related operations so they run on the right queue.
If relevant, child1 imports base objects, child2 creates events, and child3 processes the events and updates both the Event and the base object. parent is used to persist the data to disk and update the UI. Eliminating the thread that uses child3 solves the problem, but I'm not convinced it isn't a timing issue.
Any ideas why this happens?
Edit
I think I found the source of the problem. Now looking for solutions.
child3 updated an Event and tried to save the change to parent while child2 decided to delete the Event and had already saved this change. So the save is trying to update a non-existent object. Strangely, the error occurs only when I try to save parent to the PSC.
I've just got the same error message, and in my case the problem was storing a reference to a temporary ObjectID (not the permanent one, which is generated after saving the context), getting the object from another context (objectWithID:) with that ID, and then performing changes on that object. The save method gave me the same result.
Are you by any chance accessing objects between contexts by referencing their ObjectIDs?
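If that's the case, here is one hedged sketch of the usual fix (childContext and mainContext are placeholders for whichever pair of contexts you pass IDs between): obtain a permanent ID before handing the objectID to another context, and resolve it there with existingObjectWithID: so a missing object fails loudly instead of blowing up at save time.

#import <CoreData/CoreData.h>

// Assumes childContext and mainContext are NSManagedObjectContext instances
// wired as described in the question.
[childContext performBlock:^{
    NSManagedObject *event =
        [NSEntityDescription insertNewObjectForEntityForName:@"Event"
                                       inManagedObjectContext:childContext];

    // Freshly inserted objects carry temporary IDs; get a permanent ID
    // before passing the ID to another context.
    NSError *error = nil;
    if (![childContext obtainPermanentIDsForObjects:@[event] error:&error]) {
        NSLog(@"Could not obtain permanent ID: %@", error);
        return;
    }
    NSManagedObjectID *eventID = event.objectID;

    [mainContext performBlock:^{
        NSError *lookupError = nil;
        NSManagedObject *sameEvent =
            [mainContext existingObjectWithID:eventID error:&lookupError];
        if (sameEvent != nil) {
            // ...modify the object on mainContext's own queue...
        }
    }];
}];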
First of all please don't consider this question as a duplicate of this question
I have set up an environment which uses Celery with Redis as the broker and result_backend. My question is: how can I make sure that when the Celery workers crash, all the scheduled tasks are retried once the Celery worker is back up?
I have seen advice on using CELERY_ACKS_LATE = True, so that the broker will re-deliver the tasks until it gets an ACK, but in my case it's not working. Whenever I schedule a task, it immediately goes to the worker, which holds it until the scheduled time of execution. Let me give an example:
I am scheduling a task like this: res = test_task.apply_async(countdown=600), but immediately in the celery worker logs I can see something like: Got task from broker: test_task[a137c44e-b08e-4569-8677-f84070873fc0] eta:[2013-01-...]. Now when I kill the celery worker, these scheduled tasks are lost. My settings:
BROKER_URL = "redis://localhost:6379/0"
CELERY_ALWAYS_EAGER = False
CELERY_RESULT_BACKEND = "redis://localhost:6379/0"
CELERY_ACKS_LATE = True
Apparently this is how celery behaves.
When a worker is abruptly killed (but the dispatching process isn't), the message will be considered 'failed' even though you have acks_late=True.
The motivation (to my understanding) is that if the consumer was killed by the OS due to an out-of-memory condition, there is no point in redelivering the same task.
You may see the exact issue here: https://github.com/celery/celery/issues/1628
I actually disagree with this behaviour. IMO it would make more sense not to acknowledge.
I've had the issue where I was using some open-source C libraries that went totally amok and crashed my worker ungracefully without throwing an exception. Whatever the reason, one can simply wrap the content of a task in a child process and check its status in the parent.
n = os.fork()
if n > 0:  # inside the parent process
    status = os.wait()  # wait until the child terminates
    print("Signal number that killed the child process:", status[1])
    if status[1] > 0:  # the child was killed by a signal rather than exiting cleanly
        # here one can do whatever they want, like restart or throw an exception
        self.retry(exc=SomeException(), countdown=2 ** self.request.retries)
else:  # here comes the actual task content with its respective return
    return myResult  # make sure there are no returns in child and parent at the same time
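For completeness, here is how that idea could look as a bound Celery task. The broker URL mirrors the question's settings, while guarded_task, do_risky_work, and the os._exit calls are my own filler, not part of the original snippet:

import os

from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")

def do_risky_work():
    # Placeholder for the crash-prone C-library call.
    pass

@app.task(bind=True, max_retries=5)
def guarded_task(self):
    pid = os.fork()
    if pid == 0:
        # Child: run the dangerous work, then leave without returning
        # into the worker's own code path.
        try:
            do_risky_work()
            os._exit(0)
        except Exception:
            os._exit(1)
    # Parent: wait for the child and inspect how it ended.
    _, status = os.waitpid(pid, 0)
    if os.WIFSIGNALED(status) or os.WEXITSTATUS(status) != 0:
        # Child crashed or failed: retry with exponential backoff.
        raise self.retry(countdown=2 ** self.request.retries)
    return "ok"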