How to change the port bound by a user's client in Locust (UDP)

Scenario
I am testing the performance of a UDP server. The test is implemented with Locust plus a custom client.
When the custom client detects packet loss, it records the sent data and raises an exception to Locust.
When Locust catches the exception, it stops executing the remaining tasks in the task set, skips the current task set, and selects a new task set to execute.
If there is only one task set, the current task set is re-executed.
Problem
I want to rebind the user's socket to a new port when Locust re-executes the task set.
However, the port cannot be changed once the socket is bound (I could not find any example on Google of changing it).
I tried binding a new custom client inside the task set, but Locust raised an error.
Is there any way to achieve this?
Or is there another angle to approach it from?
For example:
After Locust catches the exception, end the current user and then start a new one.
Finally
English is not my native language, so please forgive my grammar mistakes.
Thanks for reading! It would be great if you could help.

Related

Catch the event of a blocked instance only after a timeout

I have a program in which I start several process instances using a cron. Each process instance has a maximum time, and if the execution time exceeds it, I have to treat the instance as failed and run some specific methods.
For now, all I did was check, once a process instance had finished, whether the elapsed time exceeded the given maximum time.
But what if my process instance gets blocked for some reason (e.g. the server is not responding)? I need to catch this event and perform the failure operations as soon as the process gets blocked and the timeout is exceeded.
How can I catch these two conditions?
I had a look at FlowableEngineEventType, but there is no PROCESS_BLOCKED/SUSPENDED event type. And even if there were, how would I fire it only after a certain amount of time has passed?
I assume that this is the same question as this from the Flowable Forum.
If you are using the Flowable HTTP Task, have a look at the documentation to see how you can set timeouts on it and how you can react to errors there. If you are firing GET requests from your own code, you will need to write your own business logic that throws some kind of BpmnError, which you then handle in your process.
A Flowable process instance does not have the concept of being blocked; you have to model that manually.
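As a rough illustration, a service task delegate along these lines could convert a hung or slow call into a BpmnError that a boundary error event in your model can catch. This is a minimal sketch; the delegate name, error codes, and time limit are assumptions, not Flowable defaults:

import org.flowable.engine.delegate.BpmnError;
import org.flowable.engine.delegate.DelegateExecution;
import org.flowable.engine.delegate.JavaDelegate;

public class TimedHttpCallDelegate implements JavaDelegate {

    private static final long MAX_MILLIS = 30_000; // assumed per-instance limit

    @Override
    public void execute(DelegateExecution execution) {
        long start = System.currentTimeMillis();
        try {
            callExternalServer(); // your own HTTP call, with connect/read timeouts set
        } catch (Exception e) {
            // a boundary error event on the service task can catch this error code
            throw new BpmnError("SERVER_NOT_RESPONDING", "Server did not respond in time");
        }
        if (System.currentTimeMillis() - start > MAX_MILLIS) {
            throw new BpmnError("MAX_TIME_EXCEEDED", "Instance exceeded its maximum time");
        }
    }

    private void callExternalServer() throws Exception {
        // e.g. java.net.http.HttpClient with timeouts; omitted here
    }
}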

AXON framework synchronous response

I am new to the Axon framework and am using it for our development. We have a requirement where a command (on the command side) is created for persisting data, and for it an event is triggered, which is consumed on the query side. Now we need a response back from the query side to the command side that says whether the record was persisted successfully (a custom success message) or, if it failed, the reason for the failure (a custom exception message as the response). Kindly help if there is any way to achieve such a scenario.
Here the command side and query side are two different microservices, and we are using RabbitMQ for the event-driven part.
Thanks in advance.
I think what you are asking is whether there is a way for the command and the event to be processed in a single transaction?
If you use a subscribing event processor, running in the same JVM, the event is processed synchronously and the whole transaction is rolled back in case of an exception in an event handler. This is not the case here, because you have loosely coupled separate services, which is good.
It's best practice for the aggregate with the command handler to have all the information available to decide whether or not the command can successfully be processed, and when an event is applied, this is a signal that it has happened, and the other services (the query side in this case) have to be informed. It's not good practice for a query module to overrule this ("you say it happened, I say it didn't"). If there is an error in the query side, you fix it, and replay the event.
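For the "fix it and replay" part, Axon 4's tracking event processors can be rewound. A minimal sketch, assuming an Axon 4 Configuration object and a processing group named "user-projection" (both names are assumptions):

// Obtain the processor for the query-side projection and reset its token,
// which causes past events to be replayed through its handlers on restart.
configuration.eventProcessingConfiguration()
        .eventProcessorByProcessingGroup("user-projection", TrackingEventProcessor.class)
        .ifPresent(processor -> {
            processor.shutDown();
            processor.resetTokens(); // rewind to the start of the event stream
            processor.start();
        });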
If it really is an error in the event handler that the whole system must know about, that is really a separate event. You can publish such an event directly on the event bus and notify the whole system. Something like this:
@Autowired
private EventBus eventBus;

// ... then, wherever the failure is detected:
CatastrophicFailureEvent failureEvent = new CatastrophicFailureEvent("OH NO!");
eventBus.publish(GenericEventMessage.asEventMessage(failureEvent));
I think you might need to reconsider your architecture. Keep in mind that events should encapsulate the irreversible state changes of your system. These state changes should not be questioned after they have happened. Your query side should only need to care about projecting these valid state changes that your command side has decided on.
If you need to check whether a user already exists, you need to do this on the command side, in your aggregate. The aggregate can keep a list of all the existing usernames and throw an exception if an invalid command is given. The command response (tip: the sendAndWait() method on the CommandGateway returns a response) can then be used to inform your user about the success or failure of the action.
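In code that could look roughly like this (a sketch; commandGateway is assumed to be injected, and CreateUserCommand is a hypothetical command class standing in for your own):

try {
    // blocks until the aggregate's command handler returns or throws
    commandGateway.sendAndWait(new CreateUserCommand("alice"));
    // report success to the user
} catch (Exception e) {
    // an exception thrown in the command handler surfaces here,
    // e.g. "user already exists"; report it as the failure reason
}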
The following flow might solve your problem, but keep in mind that the user will get a callback on the success of the action even though the query side might not have processed its result yet. This part is eventually consistent.
Command side:
1. A request from the frontend is handled by a Controller class, which creates a corresponding command.
2. The command is dispatched and handled by a command handler, which creates the corresponding event or throws an exception if the user already exists (see the sketch after this list).
3. The invoker of the command is informed about the success of the command, or the exception is handled and the error shown to the user.
4. If the command was successful, the event is published through the RabbitMQ event bus.
Query side:
5. The event published in step 4 is consumed by the event handler on the query side. No checks or validations should be necessary, since they were already handled on the command side.
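A hypothetical aggregate for step 2 might look like this. The annotations are Axon's; UserNameRepository is an assumed injectable lookup of existing usernames (Axon can resolve extra command-handler parameters from Spring beans), and the command/event classes are stand-ins:

@Aggregate
public class UserAggregate {

    @AggregateIdentifier
    private String username;

    protected UserAggregate() {
        // required by Axon
    }

    @CommandHandler
    public UserAggregate(CreateUserCommand cmd, UserNameRepository existingNames) {
        if (existingNames.contains(cmd.getUsername())) {
            // this exception travels back to the sendAndWait() caller
            throw new IllegalStateException("User already exists: " + cmd.getUsername());
        }
        AggregateLifecycle.apply(new UserCreatedEvent(cmd.getUsername()));
    }

    @EventSourcingHandler
    public void on(UserCreatedEvent event) {
        this.username = event.getUsername();
    }
}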
@Mzzl
Series of activities
Command side:
1. A request from the frontend is handled by a Controller class, which creates a corresponding command.
2. The command is dispatched and handled by a command handler, which in turn creates the corresponding event.
3. The event is then published through the RabbitMQ event bus.
Query side:
4. The event published in step 3 is consumed by the event handler on the query side.
5. The event handler has the logic to perform the DB transaction (let's assume adding a user). Once the user is added, a success message, or a failure message (let's assume the user is already in the DB, so a duplicate entry could not be created), should flow from the query side back to the command side and eventually back to the UI as a response.
I'm not sure I've fully understood your issue (especially the microservice part :)),
but if your problem is having the query side up to date after the command execution, then you can have a look at this project.
In this example, you can see that he uses a SubscriptionQueryResult in conjunction with a QueryUpdateEmitter (see here).
Basically, you subscribe to query-side changes before the command is issued, and after the command execution you block until the query side sends a notification that it is up to date.
This way you can avoid the eventual consistency.
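Sketched with Axon 4's API, and with FindUserQuery/UserView as hypothetical query and projection classes, the pattern looks roughly like this:

SubscriptionQueryResult<UserView, UserView> result = queryGateway.subscriptionQuery(
        new FindUserQuery("alice"),
        ResponseTypes.instanceOf(UserView.class),   // initial result type
        ResponseTypes.instanceOf(UserView.class));  // update type
try {
    commandGateway.send(new CreateUserCommand("alice"));
    // blocks until the query side calls QueryUpdateEmitter.emit(...) for this query
    UserView view = result.updates().blockFirst(Duration.ofSeconds(5));
} finally {
    result.close();
}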

Camunda: how to model task that can be cancelled?

I want to model a process that is initiated by the receipt of a message (which will be done via a REST call). The process leads to a task that is assigned to a user. The user supplies some extra information and then the process terminates.
However, I also want to model the case where additional information is received after the first information. Receipt of this extra information via REST should terminate the process.
The overall model represents a computer system that monitors a flow of information; if it detects a problem, it creates a task for someone to investigate. However, if further information becomes available, the task should be terminated.
What is the best way of modelling this in BPMN and Camunda, please?
What I have at the moment:
(MSE) --> (UT) --> (TEE)
(RT) --> (TEE)
Where:
MSE = Message Start Event
UT = User Task
TEE = Terminate End Event
RT = Receive Task
I can successfully start a process by using curl to post a message representing the start message. This adds a process, and the task is allocated to a user.
However, I can't seem to get the receive task to correlate with the process; it just seems to add a new process. The cancel message that the receive task is supposed to represent should cancel the particular process instance it belongs to, not any old process.
There are different ways to model this.
You could use an interrupting message boundary event: if the extra info is received, the user task is cancelled by the boundary event.
Another approach would be to use an interrupting event subprocess.
If the message with the extra information is received, the event subprocess is triggered and will cancel the process.
You could also use a parallel gateway and a terminate end event,
but I would recommend one of the methods mentioned above.
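On the correlation problem: the message has to be correlated with the running instance rather than used to start a new one. A minimal sketch with Camunda's Java API; the message name "ExtraInfoReceived" and the caseId/payload variables are assumptions for illustration:

// Deliver the "cancel" message to one specific process instance instead of
// starting a new one; the business key ties the message to that instance.
runtimeService.createMessageCorrelation("ExtraInfoReceived")
        .processInstanceBusinessKey(caseId)
        .setVariable("extraInfo", payload)
        .correlate();

Over REST, the equivalent is Camunda's POST /message endpoint with messageName and businessKey in the request body.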

How do I handle "Receive" calls being made out of order?

I have a WF4 service that emulates a sales funnel. It works by starting with a "Registration" receive call. After that, there are 10 similar stages (comprising 2 receives at each stage). You can't advance past a stage until the current stage validates the data received. What I'm unsure about, though, is how I can make my workflow prevent anyone from calling the receive operations out of order (even though my client app wouldn't allow it). In my test console app, I let the user call any receive operation (just because I wanted to see what happens).
For example, if I call the Register first and then the "AddQualification" receive before the "AddProspect" receive, the test app returns with an exception like this:
Operation 'AddQualification|{http://tempuri.org/}IZSalesFunnelService' on service instance with identifier '1984c927-402b-4fbb-acd4-edfe4f0d8fa4' cannot be performed at this time. Please ensure that the operations are performed in the correct order and that the binding in use provides ordered delivery guarantees
Two things come from this that I don't know how to do:
First, how do I handle the FaultException to notify the client in a meaningful way, and...
Second, because I'm using persistence (and property promotion), when I make the out-of-order call, the promoted properties unload. They are not promoted again after the client gets the exception.
Any thoughts?
Sorry, my server is playing up a little so the blog keeps going off the air temporarily.
With regard to your second question, you need to make sure that your workflow service is set to Abandon for unhandled exceptions. Here is the AppFabric documentation for this setting:
Abandon. The service host aborts the workflow service instance in memory. The state of the instance in the database remains “Active”. The Workflow Management Service recovers the abandoned workflow instance from the last persistence point saved in the persistence database.
Abandon and suspend. The service host aborts the workflow service instance in memory and sets the state of the instance in the persistence database to “Suspended”. A suspended instance can be resumed or terminated later by using IIS Manager. These instances are not recovered by the Workflow Management Service automatically.
Terminate. The service host aborts the workflow service instance in memory, and sets the state of the instance in the persistence database to “Completed (Terminated)”. A terminated instance cannot be resumed later.
Cancel. The service host cancels the workflow service instance causing all the cancellation handlers to be invoked so that a workflow terminates in a graceful manner, and sets the state of the instance in the persistence database to “Completed (Cancelled)”.
Abandon is the only setting that will hold onto your workflow in the persistence store so that you can then call it again.
Hope this helps.
Regarding your first question, I'd look at Rory Primrose's post on how to shield content correlation failures: Managing Content Correlation Failures. In it, he translates the exception into a valid business exception.

Is it possible to have asynchronous processing

I have a requirement where I need to send continuous updates to my clients. The client is a browser in this case. We have some data that updates every second, so once a client connects to our server, we maintain a persistent connection and keep pushing data to the client.
I am looking for suggestions for this implementation at the server end. Basically what I need is this:
1. A client connects to the server. I maintain the socket and metadata about the socket. The metadata contains what updates need to be sent to this client.
2. The server process now waits for new client connections.
3. One other process has the list of all the opened sockets and goes through each of them, sending the updates if required.
Can we do something like this in an Apache module?
1. An Apache process gets the new connection. It maintains the state for the connection, keeps the state in some global memory, and returns to the root process to signify that it is done, so that the root process can accept a new connection.
2. Although the Apache process has returned its status to the root process, it keeps executing in parallel, going through its global store and sending updates to the client, if any.
So can an Apache process do these things:
1. Have more than one connection associated with it?
2. Asynchronously wait for new connections while at the same time processing the previous connections?
This is a complicated and inefficient model of updating. Your server will try to update clients that have closed down, and the server has to maintain all that client data and metadata (last update time, etc.).
Usually, continuous updates are done with Ajax in a polling model. The client has a JavaScript timer that, when it fires, hits a service that provides updated data. The client keeps getting updates at regular intervals without your having to write an Apache module.
Would this model work for your scenario?
More reasons to opt for poll instead of push
Periodic_Refresh
With a little patch to resume a SUSPENDED mpm_event connection, I've got an asynchronous Apache module working. With this you can do improved polling:
1. JavaScript connects to Apache and asks for an update;
2. if there are no updates, then instead of answering immediately the module uses SUSPENDED;
3. some time later, after an update or a timeout happens, a callback fires;
4. the callback gives an update (or a "no updates" message) to the client and resumes the connection;
5. the client goes back to step 1, repeating the poll, which with Keep-Alive will use the same connection.
That way the number of round trips between the client and the server is decreased, and the client receives the update immediately. (This is known as Comet, or Reverse Ajax, AFAIK.)