How do I wait till a db query is completed? - wcf

Currently I'm executing db queries this way:
_svc = new Service1Client();
_svc.GetStateCompleted += new EventHandler<GetStateCompletedEventArgs>(_svc_GetStateCompleted);

private void _svc_GetStateCompleted(object sender, GetStateCompletedEventArgs e)
{
    //some code
}
Calling the query function:
_svc.GetStateAsync(args);
//more code
Is there any way I can wait after GetStateAsync until the service function returns a value?

Typically you do not wait for a long-running operation to complete. There are ways to achieve that, but normally, if you do not want users to interact with the application until the operation completes, you disable all inputs and show some sort of progress indicator until it finishes.
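For example, a minimal sketch of that pattern (the control names, args, and the progress indicator are placeholders, not from your code):

private void LoadState()
{
    // Block input and show progress while the call is in flight.
    submitButton.IsEnabled = false;
    progressBar.Visibility = Visibility.Visible;

    var svc = new Service1Client();
    svc.GetStateCompleted += (s, e) =>
    {
        // Runs when the service call returns; re-enable the UI here.
        progressBar.Visibility = Visibility.Collapsed;
        submitButton.IsEnabled = true;
        if (e.Error == null)
        {
            // use e.Result
        }
    };
    svc.GetStateAsync(args);
}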


Is this the correct way to stop a function and restart it with new data?

I have a function which does a bunch of searching and sets some fields. Now I want it to stop doing that and restart as soon as I change anything; is this the correct way of doing that?
The plan is that no work is done that is no longer needed, and that slower updates don't finish after a more recent, faster update.
private var autoUpdateJob: Job = Job()
(...)
// Something happens (e.g. an onTextChangedListener on an EditText fires)
launch {
    autoUpdateJob.cancelAndJoin()
    autoUpdateJob = launch { flight = autoValues() }
}

Custom command to go back in a process instance (execution)

I have a process with 3 sequential user tasks (something like Task 1 -> Task 2 -> Task 3). So, to validate Task 3, I have to validate Task 1, then Task 2.
My goal is to implement a workaround to go back in an execution of a process instance with a Command, as suggested in this link. The problem is that I started to implement the command, but it does not work as I want. The algorithm should be something like:
Retrieve the task with the passed id
Get the process instance of this task
Get the historic tasks of the process instance
From the list of the historic tasks, deduce the previous one
Create a new task from the previous historic task
Make the execution to point to this new task
Maybe clean the task pointed before the update
So the code of my command looks like this:
public class MoveTokenCmd implements Command<Void> {

    protected String fromTaskId = "20918";

    public MoveTokenCmd() {
    }

    public Void execute(CommandContext commandContext) {
        // Retrieve the task with the passed id
        HistoricTaskInstanceEntity currentUserTaskEntity = commandContext.getHistoricTaskInstanceEntityManager()
                .findHistoricTaskInstanceById(fromTaskId);

        ExecutionEntity currentExecution = commandContext.getExecutionEntityManager()
                .findExecutionById(currentUserTaskEntity.getExecutionId());

        // Get the process instance
        HistoricProcessInstanceEntity historicProcessInstanceEntity = commandContext
                .getHistoricProcessInstanceEntityManager()
                .findHistoricProcessInstance(currentUserTaskEntity.getProcessInstanceId());

        // Get the historic tasks of the process instance
        HistoricTaskInstanceQueryImpl historicTaskInstanceQuery = new HistoricTaskInstanceQueryImpl();
        historicTaskInstanceQuery.processInstanceId(historicProcessInstanceEntity.getId()).orderByExecutionId().desc();
        List<HistoricTaskInstance> historicTaskInstances = commandContext.getHistoricTaskInstanceEntityManager()
                .findHistoricTaskInstancesByQueryCriteria(historicTaskInstanceQuery);

        // From the list of historic tasks, deduce the previous one
        int index = 0;
        for (HistoricTaskInstance historicTaskInstance : historicTaskInstances) {
            if (historicTaskInstance.getId().equals(currentUserTaskEntity.getId())) {
                break;
            }
            index++;
        }

        if (index > 0) {
            // Create a new task from the previous historic task
            HistoricTaskInstance previousTask = historicTaskInstances.get(index - 1);
            TaskEntity newTaskEntity = createTaskFromHistoricTask(previousTask, commandContext);
            currentExecution.addTask(newTaskEntity);
            commandContext.getTaskEntityManager().insert(newTaskEntity);
            AtomicOperation.TRANSITION_CREATE_SCOPE.execute(currentExecution);
        } else {
            // TODO: find the last task of the previous process instance
        }

        // To overcome the "Task cannot be deleted because is part of a running process" error
        TaskEntity currentUserTask = commandContext.getTaskEntityManager().findTaskById(fromTaskId);
        if (currentUserTask != null) {
            currentUserTask.setExecutionId(null);
            commandContext.getTaskEntityManager().deleteTask(currentUserTask, "jumped to another task", true);
        }

        return null;
    }

    private TaskEntity createTaskFromHistoricTask(HistoricTaskInstance historicTaskInstance,
            CommandContext commandContext) {
        TaskEntity newTaskEntity = new TaskEntity();
        newTaskEntity.setProcessDefinitionId(historicTaskInstance.getProcessDefinitionId());
        newTaskEntity.setName(historicTaskInstance.getName());
        newTaskEntity.setTaskDefinitionKey(historicTaskInstance.getTaskDefinitionKey());
        // The process instance id, not the execution id
        newTaskEntity.setProcessInstanceId(historicTaskInstance.getProcessInstanceId());
        newTaskEntity.setExecutionId(historicTaskInstance.getExecutionId());
        return newTaskEntity;
    }
}
The problem is that I can see my task is created, but the execution does not point to it; it still points to the current one.
I had the idea of using the activity (via the ActivityImpl object) and setting it on the execution, but I don't know how to retrieve the activity of my new task.
Can someone help me, please?
Unless something has changed significantly in the engine, the code in the link you reference should still work (I have used it on a number of projects).
That said, when scanning your code I don't see the most important call.
Once you have the current execution, you can move the token by setting the current activity.
Like I said, the code in the referenced article used to work and still should.
Greg
Referring to the same link in your question, I would personally recommend working with the design of your process: use an exclusive gateway to decide whether the process should end or return to the previous task. If the generation of tasks is dynamic, you can point back to the same task and delete the local variables. Activiti has constructs that save you from implementing this yourself :).

How to limit JProfiler to a subtree

I have a method called com.acmesoftware.shared.AbstractDerrivedBean.getDerivedUniqueId(). When I profile the application with JProfiler, this method, getDerivedUniqueId(), is essentially buried 80 methods deep, as expected. The method is invoked on behalf of every bean in the application. I'm trying to record the CPU call tree starting with this method down to a leaf node (i.e., one of the excluded classes).
I tried the following, but it didn't produce the expected outcome:
Find a method above the method targeted for profiling, e.g., markForDeletion().
Set a trigger to start recording at getDerivedUniqueId().
Set a trigger to STOP recording at markForDeletion().
I was expecting to see only everything below markForDeletion(), but I saw everything up to but not INCLUDING getDerivedUniqueId(), which is the opposite of my intended goal. Worse yet, even with 5 ms sampling, this trigger increased the previous running time from 10 minutes to "I terminated it after 3 hours of running". It seems the trigger adds a giant amount of overhead on top of the existing overhead. Hence, even if I figure out how to correctly enable the trigger, the added overhead would seem to render it ineffective.
The reason I need to limit the recording to just this method is: when running in 5 ms sampling mode, the application completes in 10 minutes. When I run it with full instrumentation, I've waited 3 hours and it still hasn't completed. Hence, I need to turn on full instrumentation ONLY after getDerivedUniqueId() is invoked and pause profiling when getDerivedUniqueId() exits.
-- Update:
Thank you Ingo Kegel for your assistance.
I am likely not clear on how to use triggers. In the code below, I set the triggers as shown after the code. My expectation is that when I profile the application (both sampling and full instrumentation) with the triggers configured below, if the boolean isCollectMetrics is false, I should see 100% or 99.9% of CPU time in filtered classes. However, that is not the case. The CPU tree seems not to take the triggers into account.
Secondly, when isCollectMetrics is true, I expect the JProfiler call tree to start at startProfiling() and end at stopProfiling(). Again, this is not the case either.
The method contains() is the bottleneck. It eventually calls one of 150 implementations of getDerivedUniqueId(). I am trying to pinpoint which getDerivedUniqueId() is causing the performance degradation.
public static final AtomicLong doEqualContentTime = new AtomicLong();
public static final AtomicLong instCount = new AtomicLong();

protected boolean contentsEqual(final InstanceSetValue that) {
    if (isCollectMetrics) {
        // initialization code removed for clarity
        // ..........
        // ..........
        final Set<Instance> c1 = getReferences();
        final Set<Instance> c2 = that.getReferences();
        long st = startProfiling(); /// <------- start here
        for (final Instance inst : c1) {
            instCount.incrementAndGet();
            if (!c2.contains(inst)) {
                long et = stopProfiling(); /// <------- stop here
                doEqualContentTime.addAndGet(et - st);
                return false;
            }
        }
        long et = stopProfiling(); /// <------- stop here
        doEqualContentTime.addAndGet(et - st);
        return true;
    } else {
        // same code path as above but without the profiling; code removed for brevity
        // ......
        // ......
        return true;
    }
}

public long startProfiling() {
    return System.nanoTime();
}

public long stopProfiling() {
    return System.nanoTime();
}

public static void reset() {
    doEqualContentTime.set(0);
    instCount.set(0);
}
The enabled triggers:
startProfiling trigger: [screenshot of the trigger configuration]
stopProfiling trigger: [screenshot of the trigger configuration]
I've tried the 'Start Recordings' and 'Record CPU' buttons separately to capture only the call tree.
If the overhead with instrumentation is large, you should refine your filters. With good filters, the instrumentation overhead can be very small.
As for the trigger setup, the correct actions are:
"Start recording" with CPU data selected
"Wait for the event to finish"
"Stop recording" with CPU data selected

Asynchronously start only one Task to process a static Queue, stopping when it's done

Basically, I have a static custom queue of objects I want to process. From multiple threads, I need to kick off a single Task that will process the queued objects, stopping when all items are dequeued.
Some pseudocode:
static CustomQueue _customqueue;
static Task _processQueuedItems;

public static void EnqueueSomething(object something) {
    _customqueue.Enqueue(something);
    StartProcessingQueue();
}

static void StartProcessingQueue() {
    // Create the task only if it doesn't exist yet.
    if(_processQueuedItems == null) {
        _processQueuedItems = new Task(() => {
            while(_customqueue.Any()) {
                var stuffToDequeue = _customqueue.Dequeue();
                /* do stuff */
            }
        });
        _processQueuedItems.Start();
    }
    if(_processQueuedItems.Status != TaskStatus.Running) {
        _processQueuedItems.Start();
    }
}
If it makes a difference, my custom queue essentially holds items until they reach a certain age, then allows them to be dequeued. Every time an item is touched, its timer starts again. I know this piece works fine.
The part I'm struggling with is the parallelism. (Clearly, I don't know what I'm doing here.) What I want is one thread that processes the queue until it's complete and then goes away. If another call comes in, it doesn't start a new thread unless it has to.
I hope that explains my issue okay.
You might want to consider using BlockingCollection<T> here. You could make your custom queue implement IProducerConsumerCollection<T>, in which case BlockingCollection could use it directly.
You'd then just need to start a long-running Task that calls blockingCollection.GetConsumingEnumerable() and processes the items in a foreach. The task will automatically block when the collection is empty and resume when a new item is added.
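A minimal sketch of that approach (the names are illustrative, and a plain BlockingCollection<object> stands in for your custom age-based queue):

using System.Collections.Concurrent;
using System.Threading.Tasks;

static class QueueProcessor
{
    // Wraps a ConcurrentQueue<object> by default, so Add/Take are thread-safe.
    static readonly BlockingCollection<object> _items = new BlockingCollection<object>();

    // Call once at startup: a single long-running consumer drains the collection.
    public static void StartConsumer()
    {
        Task.Factory.StartNew(() =>
        {
            // Blocks while the collection is empty, resumes when items arrive,
            // and only ends after CompleteAdding() is called.
            foreach (var item in _items.GetConsumingEnumerable())
            {
                /* do stuff */
            }
        }, TaskCreationOptions.LongRunning);
    }

    public static void EnqueueSomething(object something)
    {
        _items.Add(something);
    }
}

To keep the age-based behavior, your custom queue would implement IProducerConsumerCollection<object> and be passed to the BlockingCollection<object> constructor.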

Why isn't this transaction isolated?

I have a few methods - a couple of calls to SQL Server and some business logic to generate a unique value. These methods are all contained inside a parent method:
GenerateUniqueValue()
{
    //1. Call to db for last value
    //2. Business logic to create new value
    //3. Update db with new value created
}
I want the call to GenerateUniqueValue to be isolated, i.e., when two clients call it simultaneously, the second client must wait for the first one to finish.
Originally, I made my service a singleton; however, I have to anticipate future changes that may include load balancing, so I believe a singleton approach is out. Next I decided to try the transaction approach by decorating my service:
[ServiceBehavior(TransactionIsolationLevel = IsolationLevel.Serializable, TransactionTimeout = "00:00:30")]
And my GenerateUniqueValue with:
[OperationBehavior(TransactionScopeRequired = true)]
The problem is that a test of simultaneous hits to the service method results in an error:
"System.ServiceModel.ProtocolException: The transaction under which this method call was executing was asynchronously aborted."
Here is my client test code:
private static void Main(string[] args)
{
    List<Client> clients = new List<Client>();
    for (int i = 1; i < 20; i++)
    {
        clients.Add(new Client());
    }

    foreach (var client in clients)
    {
        Thread thread = new Thread(new ThreadStart(client.GenerateUniqueValue));
        thread.Start();
    }

    Console.ReadLine();
}
If the transaction is supposed to be isolated, why do multiple threads calling the method clash?
A transaction treats multiple actions as a single atomic action. So if you want the second thread to wait for the first thread's completion, you have to deal with concurrency, not transactions.
Try the ConcurrencyMode property of System.ServiceModel.ServiceBehaviorAttribute with the Single or Reentrant concurrency mode. I guess that's what you are expecting.
[ServiceBehavior(ConcurrencyMode=ConcurrencyMode.Reentrant)]
I guess you got the exception because IsolationLevel.Serializable lets the second thread read the volatile data but not change it. You are perhaps doing a change operation, which is not permitted with this isolation level.
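For illustration, a sketch of how the behaviors might be combined (the service and contract names are made up; with InstanceContextMode.Single plus ConcurrencyMode.Single, WCF dispatches one call at a time across all clients):

using System.ServiceModel;
using System.Transactions;

[ServiceBehavior(
    InstanceContextMode = InstanceContextMode.Single,   // one service instance
    ConcurrencyMode = ConcurrencyMode.Single,           // one call at a time
    TransactionIsolationLevel = IsolationLevel.Serializable,
    TransactionTimeout = "00:00:30")]
public class UniqueValueService : IUniqueValueService
{
    [OperationBehavior(TransactionScopeRequired = true)]
    public void GenerateUniqueValue()
    {
        //1. Call to db for last value
        //2. Business logic to create new value
        //3. Update db with new value created
    }
}

Note that this serializes calls within a single host only; once the service is load balanced, you would still need a database-level guard (for example, the Serializable transaction or an application lock) around the read-then-update.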