Error when saving parent NSManagedObjectContext - objective-c

I'm using the following MOC setup:
Parent - using persistent store coordinator (Main queue)
Child1 - using parent (Private queue)
Child2 - using parent (Private queue)
Child3 - using parent (Private queue)
Children periodically save their changes and respawn as new snapshots of the main MOC when needed.
All works fine until I try to save the main MOC. This is the error message I get: Cannot update object that was never inserted
Unresolved error Error Domain=NSCocoaErrorDomain Code=134030 "The operation couldn’t be completed. (Cocoa error 134030.)" UserInfo=0x1758e200 {NSAffectedObjectsErrorKey=(
" (entity: Event; id: 0x1767d3d0 ; data: {\n dateBegin = nil;\n dateEnd = nil;\n identifier = nil;\n identifierBegin = 0;\n isProcessed = 1;\n nPhotos = 0;\n name = nil;\n photos = \"\";\n})"
), NSUnderlyingException=Cannot update object that was never inserted.},
It doesn't happen all the time, and removing time-consuming operations makes it happen less frequently. I also noticed that during the exception the other MOCs are busy saving or querying. I use performBlock or performBlockAndWait for all MOC-related operations so they run on the right queue.
If relevant: child1 imports base objects, child2 creates events, and child3 processes the events and updates both the Event and the base object. The parent is used to persist the data to disk and to update the UI. Eliminating the thread that uses child3 solves the problem, but I'm not convinced it isn't a timing issue.
Any ideas why this happens?
Edit
I think I found the source of the problem. Now looking for solutions.
child3 updated an Event and tried to save the change to the parent while child2 had decided to delete the Event and had already saved that change. So the save is trying to update a non-existing object. Strangely, the error occurs only when I try to save the parent to the PSC.

I've just got the same error message, and in my case the problem was storing a reference to a temporary ObjectID (not the permanent one, which is generated after saving the context), fetching the object in another context with objectWithID: using that ID, and then making changes to that object. Saving gave me the same result.
Are you by any chance accessing objects between contexts by referencing their ObjectIDs?

Related

Chronicle Queue: despite minutely roll cycle and deleting the chronicle file after processing, the file stays in the open list (lsof) and memory is not released

I am using Chronicle Queue version 5.20.123 and OpenJDK 11 on Linux Ubuntu 20.04. When we recycle the current cycle on the minutely roll, I listen for StoreFileListener onReleased and delete the file, yet the file remains open, memory is not released, and the file does not actually get deleted.
Please guide me on what needs to be done in order to make it work.
The StoreFileListener is implemented like this:
storeFileListener = new StoreFileListener() {
    @Override
    public void onReleased(int cycle, File file) {
        file.delete();
    }
};
The Chronicle Queue is created as follows:
eventStore = SingleChronicleQueueBuilder.binary(GlobalConstants.CURRENT_DIR
        + GlobalConstants.PATH_SEPARATOR + EventBusConstants.EVENT_DIR
        + GlobalConstants.PATH_SEPARATOR + eventType)
    .rollCycle(RollCycles.MINUTELY)
    .storeFileListener(storeFileListener)
    .build();
tailer = eventStore.createTailer();
appender = eventStore.acquireAppender();
previousCycle = tailer.cycle();
The previous cycle is recycled when processing completes:
var store = eventStore.storeForCycle(previousCycle,0,false,null);
eventStore.closeStore(store);
lsof output showing the deleted Chronicle Queue files still held open (screenshot not reproduced).
Manually getting hold of the store and trying to close it will do nothing but interfere with reference counting - you increase and then decrease the number of references.
Chronicle Queue will automatically release resources for a given store after all appenders and tailers using that store are done with it. In your case it's unclear what you do with your tailer, but if it already reads from the new file, the old one will be released along with the resources associated with it - although this is done in the background and might not happen immediately.
PS: file.delete() returns a boolean, and it's always a good idea to check the return value to see whether the delete was successful (in your case it can be seen that it was, but checking is still considered good practice).
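As a minimal sketch of that advice (written in Kotlin for brevity, assuming Chronicle Queue 5.x where the listener interface is net.openhft.chronicle.queue.impl.StoreFileListener; the error logging is illustrative only):
import net.openhft.chronicle.queue.impl.StoreFileListener
import java.io.File

// Delete released cycle files, but check the boolean returned by
// delete() instead of silently discarding it.
val storeFileListener = StoreFileListener { cycle: Int, file: File ->
    if (!file.delete()) {
        System.err.println("Failed to delete released cycle file $file (cycle $cycle)")
    }
}
Pass this listener to storeFileListener(...) on the builder as in the question, and let Chronicle release the store in the background rather than closing it manually.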

Kafka streams: groupByKey and reduce not triggering action exactly once when error occurs in stream

I have a simple Kafka Streams scenario where I am doing a groupByKey, then a reduce, and then an action. There could be duplicate events in the source topic, hence the groupByKey and reduce.
The action could error, and in that case I need the streams app to reprocess that event. In the example below I'm always throwing an error to demonstrate the point.
It is very important that the action happens no more than once and at least once - in other words, exactly once.
The problem I'm finding is that when the streams app reprocesses the event, the reduce function is called, and as it returns null the action doesn't get called again.
As only one event is produced to the source topic TOPIC_NAME, I would expect the reduce to have no prior value and to skip down to the mapValues.
val topologyBuilder = StreamsBuilder()
topologyBuilder.stream(
        TOPIC_NAME,
        Consumed.with(Serdes.String(), EventSerde())
    )
    .groupByKey(Grouped.with(Serdes.String(), EventSerde()))
    .reduce { current, _ ->
        println("reduce hit")
        null
    }
    .mapValues { _, v ->
        println("Id: ${v.correlationId}")
        throw Exception("simulate error")
    }
To cause the issue I run the streams app twice. This is the output:
First run
Id: 90e6aefb-8763-4861-8d82-1304a6b5654e
11:10:52.320 [test-app-dcea4eb1-a58f-4a30-905f-46dad446b31e-StreamThread-1] ERROR org.apache.kafka.streams.KafkaStreams - stream-client [test-app-dcea4eb1-a58f-4a30-905f-46dad446b31e] All stream threads have died. The instance will be in error state and should be closed.
Second run
reduce hit
As you can see, .mapValues doesn't get called on the second run, even though the first run errored and caused the streams app to reprocess the same event.
Is it possible to have a streams app re-process an event through a reduce step as if it had never seen the event before? Or is there a better approach to what I'm doing?
I was missing a property setting for the streams app.
props["processing.guarantee"]= "exactly_once"
By setting this, it will guarantee that any state created from the point of picking up the event will rollback in case of a exception being thrown and the streams app crashing.
The problem was that the streams app would pick up the event again to re-process but the reducer step had state which has persisted. By enabling the exactly_once setting it ensures that the reducer state is also rolled back.
It now successfully re-processes the event as if it had never seen it before
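For completeness, a minimal sketch of how the property fits into the app's configuration - the application id matches the "test-app" visible in the log output above, the bootstrap address is an assumed placeholder, and topologyBuilder is the builder from the question; newer clients (Kafka Streams 3.0+) prefer StreamsConfig.EXACTLY_ONCE_V2:
import java.util.Properties
import org.apache.kafka.streams.KafkaStreams
import org.apache.kafka.streams.StreamsConfig

val props = Properties()
props[StreamsConfig.APPLICATION_ID_CONFIG] = "test-app"          // app id from the log output
props[StreamsConfig.BOOTSTRAP_SERVERS_CONFIG] = "localhost:9092" // assumed broker address
// With exactly-once, state store updates (the reduce store) and output
// records are committed atomically with the input offsets, so a crash
// rolls the reducer state back together with the consumed offsets.
props[StreamsConfig.PROCESSING_GUARANTEE_CONFIG] = StreamsConfig.EXACTLY_ONCE
val streams = KafkaStreams(topologyBuilder.build(), props)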

Airflow/SQLAlchemy Error - Loading context has changed within a load/refresh handler

I am attempting to use clairvoyant's db-cleanup DAG to clear metadata in our xcom table, but when I run it I receive the following warning, printed thousands of times before I manually stop the job in order to not take down our MySQL instance:
SAWarning: Loading context for <BaseXCom at 0x7f26f789b370> has changed within a load/refresh handler, suggesting a row refresh operation took place. If this event handler is expected to be emitting row refresh operations within an existing load or refresh operation, set restore_load_context=True when establishing the listener to ensure the context remains unchanged when the event handler completes.
The other cleanup tasks work fine, but it is the xcom table in particular I am having trouble with. We have hundreds/thousands of active DAGs, so the xcom table is being written to nearly every second or two. I think that is what is causing this warning: the data is continually changing while it is being queried.
I have been unable to find the cause of this or any examples of how it can be resolved. I tried adding a "restore_load_context": True line as per the SQLAlchemy docs, but it did not work.
Here are the snippets I attempted to add to the database object and the cleanup task:
{
    "airflow_db_model": XCom,
    "age_check_column": XCom.execution_date,
    "keep_last": False,
    "keep_last_filters": None,
    "keep_last_group_by": None,
    "restore_load_context": True,
},
....
def cleanup_function(**context):
    logging.info("Retrieving max_execution_date from XCom")
    max_date = context["ti"].xcom_pull(
        task_ids=print_configuration.task_id, key="max_date"
    )
    max_date = dateutil.parser.parse(max_date)  # stored as iso8601 str in xcom
    airflow_db_model = context["params"].get("airflow_db_model")
    state = context["params"].get("state")
    age_check_column = context["params"].get("age_check_column")
    keep_last = context["params"].get("keep_last")
    keep_last_filters = context["params"].get("keep_last_filters")
    keep_last_group_by = context["params"].get("keep_last_group_by")
    restore_load_context = context["params"].get("restore_load_context")
In order not to paste too much code here: aside from these snippets, I am using the same code as in the db-cleanup DAG. Has anyone encountered this and found a way to resolve it?
I am very inexperienced with SQLAlchemy and am entirely unsure where else to place this code or how to go about it.

Should events be propagated to records in hasMany relationship?

I have the following relationship between two DS.Model classes:
App.DocumentType = DS.Model.extend
  ...
  propertyTypeJoins: DS.hasMany("App.DocumentTypePropertyType")

App.DocumentTypePropertyType = DS.Model.extend
  documentType: DS.belongsTo('App.DocumentType')
The child records are always embedded and never saved individually:
App.Adapter.map 'App.DocumentType',
  propertyTypeJoins:
    embedded: 'always'
When I commit a transaction with a documentType record and n related DocumentTypePropertyType records, I get the following error:
"Attempted to handle event 'didCommit' on <App.DocumentTypePropertyType:ember1806:38072> while in state rootState.loaded.updated.uncommitted. Called with undefined"
Looking into the code I realized that the adapter's didSaveRecord method sends a didCommit event to each embedded record. This seems perfectly fine, since the children are declared to be saved together with the parent (see embedded: 'always' above).
The error is raised because the willCommit event is not propagated to the children; they are therefore still in the uncommitted state and can't handle didCommit in that state. The parent itself was transitioned to inFlight, and hence no error is thrown there.
In my opinion, the observed behavior is inconsistent. Either all events should be sent to the children or none. Otherwise all sorts of inconsistent behaviors can arise.
It seems that I'm working against, and not with, ember-data so I stopped to ponder what I'm doing wrong.
Can you tell me?

NServiceBus - How to control message handler ordering when Bus.Send() occurs on different threads / processes?

Scenario:
I have a scenario where audit messages are sent via NServiceBus. The handlers insert and update a row in a preexisting database table, which we have no remit to change. The requirement is that we control the order in which messages are handled, so that the audit data reflects the correct system state. Messages processed out of order may cause the audit data to reflect an incorrect state.
Some of the audit data is expected in a specific order; however, some can be received at any time after the initial message, such as a status update which will be sent several times during the process.
In my test project I have been testing using a server (specifically the ISpecifyMessageHandlerOrdering functionality), with the endpoint configured as follows:
public class MyServer : IConfigureThisEndpoint, AsA_Server, ISpecifyMessageHandlerOrdering
{
    public void SpecifyOrder(Order order)
    {
        order.Specify(First<PrimaryCommand>.Then<SecondaryCommand>());
    }
}
Because the explicit order of messages is not known, one message, StartAuditMessage, is the initial message and inherits from PrimaryCommand.
Other messages which are allowed to be received at a later stage inherit from SecondaryCommand.
public class StartAuditMessage : PrimaryCommand
public class UpdateAudit1Message : SecondaryCommand
public class UpdateAudit2Message : SecondaryCommand
public class ProcessUpdateMessage : SecondaryCommand
This works for controlling the handling order of messages when they are sent from the same thread.
It breaks down, however, if the messages are sent from separate threads or processes, which makes sense, as there is nothing linking the messages as related.
How can I link the messages, say through an ID of some sort, so that they are not processed out of order when sent from separate threads? Is this a use case for sagas?
Also, with regard to status update messages, how can I ensure that messages of the same type are processed in the order in which they were sent?
Whenever you have a requirement for ordered processing, you cannot avoid the conclusion that at some point you need to restrict the processing down to a single thread. The single thread guarantees the order in which things are processed.
In some cases you can "scale out" the single thread into multiple threads by splitting the processing by a correlating identifier. The correlation ID lets you define a logical grouping of messages within which order must be maintained. This allows you to have concurrent threads, each performing ordered processing, which is more efficient; a sketch of the idea follows.
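To make that concrete, here is a minimal, framework-agnostic sketch in Kotlin rather than NServiceBus API - OrderedDispatcher and its worker pool are hypothetical names invented for illustration. Each correlation ID is hashed to one of N single-threaded workers, so messages sharing an ID are handled in the order they were dispatched, while unrelated IDs proceed concurrently:
import java.util.concurrent.ExecutorService
import java.util.concurrent.Executors

class OrderedDispatcher(workerCount: Int) {
    // One single-threaded executor per partition; the single thread is
    // what guarantees ordering within a partition.
    private val workers: List<ExecutorService> =
        List(workerCount) { Executors.newSingleThreadExecutor() }

    fun dispatch(correlationId: String, handler: Runnable) {
        // A stable hash sends every message with the same correlation ID
        // to the same worker, preserving their relative order.
        val index = Math.floorMod(correlationId.hashCode(), workers.size)
        workers[index].execute(handler)
    }

    fun shutdown() = workers.forEach { it.shutdown() }
}
The worker count trades throughput against how many independent orderings are maintained concurrently; how you obtain the correlation ID (message header, saga data, etc.) is left open.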