I have created a custom SharePoint 2010 ItemAdded event receiver on a document library. I also log any exceptions thrown during the event receiver.
This event receiver fires fine almost every time. There are, however, a few cases in which the event receiver does not fire, and since it does not fire I see no exception log. These failures are scattered rather than concentrated in a particular timespan of the day.
My question is whether there is some log I can check on the SharePoint server that will tell me why the event receiver did not fire, or at least what went wrong. Thanks in advance.
In my experience, event receivers will just stop when an uncaught exception occurs, so they will not log the exception themselves.
What you should do is make your event receiver logic exception-proof: catch everything you can and log it yourself to the ULS.
I've read the Camunda docs, but I can't find anything about it.
I know it doesn't make sense to throw something that nobody will catch, but is it possible?
https://docs.camunda.org/manual/7.7/reference/bpmn20/events/signal-events/
https://camunda.com/bpmn/reference/#events-signal
In the Business Process Model and Notation 2.0 specification (available at https://www.omg.org/spec/BPMN/2.0/), p. 253, Table 10.89 - Intermediate Event Types in Normal Flow:
(Signal) This type of Event is used for sending or receiving Signals. A Signal is
for general communication within and across Process levels, across
Pools, and between Business Process Diagrams. A BPMN Signal is
similar to a signal flare that shot into the sky for anyone who might be
interested to notice and then react. Thus, there is a source of the Signal,
but no specific intended target.
Hope that helps.
Yes, this is possible. You can model a throwing signal event even when there are no receivers. The event will simply throw the signal and continue along the normal flow (without anyone ever consuming the signal).
By contrast, catching signal events cannot be used without a throwing signal event. If you use a catching signal event without a corresponding throwing signal event, the process will stop at that event and never be able to continue.
I am unclear as to whether it is permissible in a BPMN 2.0 model for a timer to be the Start Event for an event sub-process, such as in the simplified example below:
The BPMN 2.0 documentation (version 2.0.1 dated 2013-09-02) on page 174 (section 10.3.5, Event Sub-processes) suggests this is not permissible:
The Start Event of an Event Sub-Process MUST have a defined trigger. The Start Event trigger (EventDefinition) MUST be from the following types: Message, Error, Escalation, Compensation, Conditional, Signal, and Multiple (see page 259 for more details)
On page 241 (section 10.5.2, Start Event), the specification states that a Timer is allowed as a Start Event:
A Start Event can also initiate an inline Event Sub-Process (see page 174). In that case, the same Event types as for boundary Events are allowed (see Table 10.86), namely: Message, Timer, Escalation, Error, Compensation, Conditional, Signal, Multiple, and Parallel.
Which of these sections would apply in the case of the above example?
I'm not a BPMN expert, but I have some experience using BPMN 2.0, so I'll give this a go.
The example you posted doesn't look like a completely spec-approved way of doing it, but I can't be entirely sure. I see a few different ways to do this that should be within bounds.
Here are my two suggestions:
Unless you want to model a third event like "Out of stock" I would prefer option A for its simplicity.
Also, I'd like to throw out a recommendation for "BPMN Method and Style, 2nd ed." by Bruce Silver.
I'm going to conclude that this is almost certainly an error in §10.5.2 of the spec, and that a timer as the start event of an event sub-process is allowed.
Tables 10.86 and 10.93 are both explicit in that the timer can be the trigger for an event sub-process.
The non-interrupting timer start event is only useful in an event sub-process. That symbol would have no use if a timer event were not allowed to trigger an event sub-process.
Section 10.5.6 consistently allows the use of the timer as the start event trigger.
The issue was reported to OMG in 2010 (Issue 15532), although no further action was taken.
The same principle applies to Parallel Multiple events, which are similarly omitted from the same list in §10.5.2, but permitted in other sections.
I don't remember the terminology now, but what I would do to achieve what you want is to put purchase parts + unpack parts in a subprocess (or sub-task?) and put a timer on it. This seems easier and clearer to read, and it does what you want.
Regarding the documentation: I would say one part talks about the trigger and the other about the start event of the event sub-process. So a timer can't trigger the event sub-process, but the start event of the event sub-process can be a timer.
For some reason my feature event receiver still runs after deactivation and uninstall. I have a feature event receiver on my SharePoint 2010 server that runs on a survey list to prevent users from deleting survey responses. After I deactivated the feature, I still couldn't delete responses. Even after an uninstall and an iisreset, the event receiver was still running. My solution was to run a PowerShell command that physically removes the event receiver from the survey list. Once I did that, I was able to delete survey responses from an existing survey. Any idea why, even after deactivation and uninstall, I still have an event receiver attached to my survey list?
If you added the Event Receiver to a list in FeatureActivated, you should remove it from that list in FeatureDeactivating.
I'm having a mysterious problem with handlers being called more than once per event, which appears to be correlated with events built up via interface inheritance.
We are using only interfaces for our messages and using NServiceBus.MessageInterfaces.MessageMapper.Reflection.MessageMapper().CreateInstance() to create instances to put on the bus.
Our interfaces:
IOperationOccured - Contains basic operation information, subscribers to this event act on things in a fairly generic way. This event is never raised directly.
ISpecificOperationOccured - Inherits IOperationOccured. Contains more specific information. Subscribers to this event are able to do more specific things since the event is more specific.
The problem is that when ISpecificOperationOccured is raised, the handlers for IOperationOccured are called, the handlers for ISpecificOperationOccured are called and then the message appears to get processed again, calling the handlers again.
What am I misunderstanding? I'd expect the handlers for IOperationOccured to get called once per event and the handlers for ISpecificOperationOccured to get called once per event.
Late answer, I know, but hopefully this will help others.
This happens when your separate handlers for IOperationOccured and ISpecificOperationOccured are deployed in the same endpoint. e.g.
Endpoint1 (Raises ISpecificOperationOccured)
Endpoint2 (Handles both IOperationOccured and ISpecificOperationOccured)
Endpoint1.Subscriptions will contain entries for:
IOperationOccured -> Endpoint2
ISpecificOperationOccured -> Endpoint2
So when ISpecificOperationOccured is published, it will be sent to Endpoint2 twice. The recommended approach is to have separate endpoints for the handling of different message types, e.g.
Endpoint1 (Raises ISpecificOperationOccured)
Endpoint2 (Handles IOperationOccured)
Endpoint3 (Handles ISpecificOperationOccured)
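To make the double delivery concrete, here is an illustrative model in JavaScript (this is not NServiceBus code; the table layout and function names are assumptions for demonstration only) of how one subscription entry per message type, combined with interface inheritance, yields two deliveries to the same endpoint:

```javascript
// Illustrative subscription store held by the publisher (Endpoint1):
// one entry per message type the subscriber registered for.
const subscriptions = {
  IOperationOccured: ['Endpoint2'],
  ISpecificOperationOccured: ['Endpoint2'],
};

// ISpecificOperationOccured inherits IOperationOccured, so a published
// message is compatible with both subscribed types.
const typeHierarchy = {
  ISpecificOperationOccured: ['ISpecificOperationOccured', 'IOperationOccured'],
};

// Publishing matches every compatible type and yields one delivery per
// matching subscription entry.
function publish(messageType) {
  const deliveries = [];
  for (const type of typeHierarchy[messageType] || [messageType]) {
    for (const endpoint of subscriptions[type] || []) {
      deliveries.push(endpoint);
    }
  }
  return deliveries;
}

console.log(publish('ISpecificOperationOccured'));
// Endpoint2 appears twice, so its handlers run twice
```

Splitting the handlers across Endpoint2 and Endpoint3, as above, leaves each endpoint with exactly one matching subscription entry per publish.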
So I'm trying to write a simple TCP socket server that broadcasts information to all connected clients. So when a user connects, they get added to the list of clients, and when the stream emits the close event, they get removed from the client list.
This works well, except that sometimes I'm sending a message just as a user disconnects.
I've tried wrapping stream.write() in a try/catch block, but no luck. It seems like the error is uncatchable.
The solution is to add a listener for the stream's 'error' event. This might seem counter-intuitive at first, but the justification for it is sound.
stream.write() sends data asynchronously. By the time node has realized that writing to the socket has raised an error, your code has moved on past the call to stream.write, so there's no way for it to raise the error there.
Instead, what node does in this situation is emit an 'error' event from the stream. EventEmitter is coded such that if there are no listeners for an 'error' event, the error is raised as a top-level exception and the process ends.
Peter is quite right, and there is also another way: you can install a catch-all error handler with
process.on('uncaughtException', function (error) {
  // process error
});
This will catch everything that is thrown.
It's usually better to do it Peter's way if possible; however, if you were writing, say, a test framework, it may be a good idea to use process.on('uncaughtException', ...).
Here is a gist which covers (I think) all the different ways of handling errors in Node.js:
http://gist.github.com/636290
I had the same problem with the time server example from here.
My clients get killed, and the time server then tries to write to a closed socket.
Setting an error handler did not work, as the error event only fires on reception. The time server does no receiving (see the stream event documentation).
My solution is to set a handler on the stream close event.
stream.on('close', function() {
  subscribers.remove(stream);
  stream.end();
  console.log('Subscriber CLOSE: ' + subscribers.length + " total.\n");
});