Batch printing exception - vb.net

I get this error while printing multiple .xps documents to a physical printer
Dim defaultPrintQueue As PrintQueue = GetForwardPrintQueue(My.Settings.SelectedPrinter)
Dim xpsPrintJob As PrintSystemJobInfo
xpsPrintJob = defaultPrintQueue.AddJob(JobName, Document, False)
Documents are spooled successfully until a print job exception occurs.
The InnerException is "Insufficient memory to continue the execution of the program."
The source is PresentationCore.dll.
Where should I start searching?

When attempting to perform tasks that may fail due to temporary or permanent restrictions on some resource, I tend to use a back-off strategy. I have used this strategy on things as diverse as message queuing and socket opens.
The general process for such a strategy is as follows.
set maxdelay to 16 # maximum time period between attempts
set maxtries to 10 # maximum attempts
set delay to 0
set tries to 0
while more actions needed:
    if delay is not 0:
        sleep delay
    attempt action
    if action failed:
        add 1 to tries
        if tries is greater than maxtries:
            exit with permanent error
        if delay is 0:
            set delay to 1
        else:
            double delay
        if delay is greater than maxdelay:
            set delay to maxdelay
    else:
        set delay to 0
        set tries to 0
This allows the process to run at full speed in the vast majority of cases but backs off when errors start occurring, hopefully giving the resource provider time to recover. The gradual increase in delays gives more serious resource shortages time to clear, and the maximum-tries limit catches what you would term permanent errors (or errors that are taking too long to recover).
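Applied to the question, this might look roughly like the following VB.NET sketch; it assumes the path-based AddJob overload used in the question, treats PrintJobException as the retryable failure, and uses illustrative limits rather than tuned values.

Imports System.Printing
Imports System.Threading

Module BackoffPrinting

    Private Const MaxDelaySeconds As Integer = 16   ' maximum delay between attempts
    Private Const MaxTries As Integer = 10          ' maximum attempts

    ' Sketch of the pseudocode above wrapped around PrintQueue.AddJob.
    Public Sub AddJobWithBackoff(queue As PrintQueue, jobName As String, documentPath As String)
        Dim delaySeconds As Integer = 0
        Dim tries As Integer = 0

        Do
            If delaySeconds > 0 Then
                Thread.Sleep(TimeSpan.FromSeconds(delaySeconds))
            End If

            Try
                queue.AddJob(jobName, documentPath, False)
                Exit Sub                              ' success, we are done
            Catch ex As PrintJobException
                tries += 1
                If tries > MaxTries Then
                    Throw                             ' too many failures: treat as permanent
                End If
                ' back off: 1, 2, 4, 8, 16, 16, ... seconds
                delaySeconds = If(delaySeconds = 0, 1, Math.Min(delaySeconds * 2, MaxDelaySeconds))
            End Try
        Loop
    End Sub

End Module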
I actually prefer this try-it-and-catch-failure approach to the check-if-okay-then-try one since the latter can still often fail if something changes between the check and the try. This is called the "better to seek forgiveness than ask permission" method, which also works quite well with bosses most of the time, and wives a little less often :-)
One particularly useful case was a program which opened a separate TCP session for each short-lived transaction. On older hardware, the closed sockets (those in the TCP TIME_WAIT state) eventually disappeared before they were needed again.
But as the hardware got faster, we found that we could open sessions and do work much more quickly, and Windows started running out of TCP handles (even when the limit was increased to the maximum).
Rather than having to re-engineer the communications protocol to maintain sessions, this strategy was implemented to allow graceful recovery in the event handles were starved.
Granted, it's a bit of a kludge, but this was legacy software approaching end-of-life, where bug fixes are often just enough to get it working, and it wasn't deemed strategic enough to warrant spending a lot of money on fixing it properly.
Update: It may be that there's a (more permanent) problem with PresentationCore. This KB article states that there's a memory leak in WPF in .NET 3.5 SP1 (of which your print driver may be a client).
If the back-off strategy doesn't fix your problem (it may not if it's a leak in a long-lived process), you might want to try applying the hotfix. Me, I'd replicate the problem in a virtual machine and then patch that to test it (but I'm an extreme paranoid).
I found it by googling "PresentationCore Insufficient memory to continue the execution of the program" and checking the first link. Search for the string "hotfix that relates to this issue" on that page.

Before adding a new job to the queue, you should check the queue state. See the PrintQueue.IsOutOfMemory property and the related properties that can be queried to verify that the queue is not in an error state.
Of course, pax's hint to use a defensive strategy when accessing resources like printers is best practice. For starters, you may want to put the line that adds the job into a Try block, as in the sketch below.
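A hedged sketch of that pre-flight check: the PrintQueue error flags are the real System.Printing properties, while the way a bad state is handled here (simply returning Nothing) is only illustrative.

Imports System.Printing

Module QueueGuard

    ' Sketch only: refresh the queue, inspect its error flags, and only then call
    ' AddJob inside a Try block. How you react to a bad state (wait, alert, retry)
    ' is up to the application.
    Public Function TryAddJob(queue As PrintQueue, jobName As String, documentPath As String) As PrintSystemJobInfo
        queue.Refresh()

        If queue.IsOutOfMemory OrElse queue.IsInError _
           OrElse queue.IsPaperJammed OrElse queue.IsOutOfPaper Then
            Return Nothing   ' queue is in an error state; do not pile on more jobs
        End If

        Try
            Return queue.AddJob(jobName, documentPath, False)
        Catch ex As PrintJobException
            ' log and decide whether to retry (see the back-off strategy above)
            Return Nothing
        End Try
    End Function

End Module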

You might want to consider launching a new process to handle the printing of each document; the overhead should be low compared to the effort of printing the documents.
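For illustration only, a sketch of that idea follows; PrintOne.exe is a hypothetical helper executable (not something from the question) that prints a single document to the named printer and exits, so each job gets a fresh, short-lived process.

Imports System.Collections.Generic
Imports System.Diagnostics

Module PerProcessPrinting

    ' Sketch only: any memory leaked by the print path dies with the helper process.
    Public Sub PrintEachInOwnProcess(documentPaths As IEnumerable(Of String), printerName As String)
        For Each path As String In documentPaths
            Dim args As String = String.Format("""{0}"" ""{1}""", path, printerName)
            Dim psi As New ProcessStartInfo("PrintOne.exe", args)
            psi.UseShellExecute = False

            Using proc As Process = Process.Start(psi)
                proc.WaitForExit()   ' print one at a time; drop this to overlap jobs
            End Using
        Next
    End Sub

End Module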

Related

CXSYNC_PORT wait type in Azure Sql Database

I'm facing this issue intermittently now, where a query (called from a stored procedure) ends up in the CXSYNC_PORT wait type and remains there for a long time (sometimes 8 hours at a stretch). I have to kill the process and then rerun the procedure. The procedure is called every 2 hours from an ADF pipeline.
What's the reason for this behavior and how do I fix the issue?
I searched a lot and there is no Microsoft documentation that talks about the wait type CXSYNC_PORT. Others have asked the same question, still without more details.
Most suggestions are to ask the same question in other forums, or to ask a support engineer for help so that they can deal with your problem separately and confidentially.
Ask Azure support for detailed help: https://learn.microsoft.com/en-us/azure/azure-portal/supportability/how-to-create-azure-support-request
And here's the same question, where a Microsoft engineer gave more details about the issue:
As part of a fix, CXPACKET waits were further broken down into CXSYNC_CONSUMER and CXSYNC_PORT (with data transfer waits still reported as CXPACKET) so as to distinguish between different wait times for a correct diagnosis of the problem.
Basically, CXPACKET is divided into 3: CXPACKET, CXSYNC_PORT, CXSYNC_CONSUMER. CXPACKET is used for data transfer sync, while CXSYNC_* are used for other synchronizations. CXSYNC_PORT is used for synchronizing the opening/closing of the exchange port between the consuming thread and the producing thread. Long waits here may indicate server load and a lack of available threads. Plans containing a sort may contribute to this wait type because the complete sort may occur before the port is synchronized.
Please refer to this link: What is causing wait type CXSYNC_PORT and what to do about it? for more useful information. But for now, there isn't an exact solution.
Use the query hint OPTION (MAXDOP 1).
This will run your long-running query on a single thread and you won't get the CX-type waits. In my experience this can give a massive 10-20x decrease in execution time and will free up CPU for other tasks, as there will be no context switching and thread coordination activity.
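To show where the hint goes, here is a hedged sketch in VB.NET (the language used elsewhere on this page); the table, columns and connection string are invented, and in the original scenario the same OPTION (MAXDOP 1) clause would simply be appended to the offending statement inside the stored procedure.

Imports System.Data.SqlClient

Module MaxdopExample

    ' Sketch only: the point is that OPTION (MAXDOP 1) sits at the end of the statement.
    Public Sub RunSingleThreaded(connectionString As String)
        Dim sql As String = "SELECT CustomerId, SUM(Amount) AS Total " & _
                            "FROM dbo.Sales GROUP BY CustomerId " & _
                            "OPTION (MAXDOP 1);"

        Using conn As New SqlConnection(connectionString)
            Using cmd As New SqlCommand(sql, conn)
                conn.Open()
                Using reader As SqlDataReader = cmd.ExecuteReader()
                    While reader.Read()
                        ' consume the rows...
                    End While
                End Using
            End Using
        End Using
    End Sub

End Module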

Weblogic Stuck thread impacts other runnable threads in it

I am using WebLogic 10.3.6 with 8 managed servers, configured with a session timeout of 600 seconds. The issue with my application is that when a session times out after 600 seconds (I receive STUCK thread alerts, which are also configured), I see slowness in my application. My question is:
Will all threads be impacted because of one STUCK thread (the STUCK thread was due to a DB transaction timeout)?
I assume they will not be, but wanted to confirm.
It depends on your application. In general, no, but if, for example, the stuck thread is holding a lock on an object (database, file, etc.) needed by other requests, those may be affected too. Also, depending on what the stuck thread is doing, it may use excessive resources (CPU, memory, disk, etc.). I suggest investigating why the thread is taking so long and whether that can be fixed.

How to create a distributed 'debounce' task to drain a Redis List?

I have the following use case: multiple clients push to a shared Redis List. A separate worker process should drain this list (process and delete). Wait/multi-exec is in place to make sure this goes smoothly.
For performance reasons I don't want to call the 'drain'-process right away, but after x milliseconds, starting from the moment the first client pushes to the (then empty) list.
This is akin to a distributed underscore/lodash debounce function, for which the timer starts to run the moment the first item comes in (i.e.: 'leading' instead of 'trailing')
I'm looking for the best way to do this reliably in a fault tolerant way.
Currently I'm leaning to the following method:
Use Redis SET with the NX and PX options. This allows:
setting a value (a mutex) on a dedicated key only if it doesn't yet exist; this is what the NX argument is used for
expiring the key after x milliseconds; this is what the PX argument is used for
This command returns 1 if the value could be set, meaning no value previously existed. It returns 0 otherwise. A 1 means the current client is the first client to run the process since the Redis List was last drained. Therefore,
this client puts a job on a distributed queue which is scheduled to run in x milliseconds.
After x milliseconds, the worker that receives the job starts draining the list.
This works on paper, but feels a bit complicated. Are there other ways to make this work in a distributed, fault-tolerant way?
By the way: Redis and a distributed queue are already in place, so I don't consider it an extra burden to use them for this issue.
Sorry about that, but a proper response requires a bunch of text/theory. Because your question is good, you've already written most of a good answer yourself :)
First of all, we should define the terms. 'Debounce' in the underscore/lodash sense is best understood through the explanation in David Corbacho's article:
Debounce: Think of it as "grouping multiple events in one". Imagine that you go home, enter in the elevator, doors are closing... and suddenly your neighbor appears in the hall and tries to jump on the elevator. Be polite! and open the doors for him: you are debouncing the elevator departure. Consider that the same situation can happen again with a third person, and so on... probably delaying the departure several minutes.
Throttle: Think of it as a valve, it regulates the flow of the executions. We can determine the maximum number of times a function can be called in certain time. So in the elevator analogy you are polite enough to let people in for 10 secs, but once that delay passes, you must go!
You are asking about debounce starting from the moment the first element is pushed to the list:
So, by analogy with the elevator: the elevator should go up 10 minutes after the first person got in. It does not matter how many more people cram into the elevator after that.
In the case of a distributed, fault-tolerant system, this should be viewed as a set of requirements:
Processing of the new list must begin within X time after the first element is inserted (i.e. after the creation of the list).
A worker crash should not break anything.
Deadlock free.
The first requirement must be fulfilled regardless of the number of workers, be it 1 or N.
That is, you need to know (in a distributed way) whether the group of workers has to wait, or whether you can start processing the list. As soon as we utter the words "distributed" and "fault-tolerant", these concepts always bring their friends along:
Atomicity (e.g. by locking)
Reservation
In practice
In practice, I am afraid that your system needs to be a little bit more complicated (maybe you just haven't written it down, and you already have it).
Your method:
Pessimistic locking with a mutex via SET NX PX. NX is the guarantee that only one process at a time does the work (atomicity). PX ensures that if something happens to that process, the lock is released by Redis (one part of fault tolerance, regarding deadlocks).
All workers try to grab the one mutex (per list key), so only one succeeds and will process the list after X time. That process can extend the TTL of the mutex (if it needs more time than originally planned). If the process crashes, the mutex is unlocked after the TTL and can be grabbed by another worker.
My suggestion
Fault-tolerant, reliable queue processing in Redis is built around RPOPLPUSH:
RPOPLPUSH an item from the list being processed to a special list (per worker per list).
Process the item.
Remove the item from the special list.
How this meets the requirements
So, if a worker crashes, we can always return the broken item from the special list to the main list, and Redis guarantees the atomicity of RPOPLPUSH/RPOP. That is, the only remaining problem is for the group of workers to wait a while.
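A hedged sketch of that reliable-queue step, written with the StackExchange.Redis client in VB.NET just to keep one language on this page; the key names are invented and any Redis client exposes the same commands.

Imports StackExchange.Redis

Module ReliableDrain

    ' Sketch only: atomically move one item to a per-worker processing list,
    ' handle it, then delete it. If the worker dies mid-way, the item is still
    ' sitting in "myList:processing:worker1" and can be pushed back to "myList".
    Public Sub DrainOne(db As IDatabase)
        Dim item As RedisValue = db.ListRightPopLeftPush("myList", "myList:processing:worker1")
        If item.IsNull Then Return   ' nothing to drain

        ' ... process the item here ...

        db.ListRemove("myList:processing:worker1", item)
    End Sub

End Module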
Then there are two options. First, if you have many clients and fewer workers, use locking on the worker side: try to lock the mutex in the worker and, if successful, start processing.
And vice versa: use SET NX PX each time you execute LPUSH/RPUSH (to get a "wait N time before popping from me" marker, when you have many workers and few push clients). So the push is:
SET myListLock 1 PX 10000 NX
LPUSH myList value
And each worker just checks: if myListLock exists, it should wait for the key's remaining TTL before setting the processing mutex and starting to drain.
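For illustration, the push side of those two commands as a StackExchange.Redis sketch in VB.NET (an assumed client choice, not something from the question); the key names and the 10-second window mirror the commands above, and how the drain job is actually scheduled is left to your queue.

Imports StackExchange.Redis

Module DebouncedPush

    ' Sketch only: take the debounce marker only if it does not exist yet (NX),
    ' let Redis expire it after 10 s (PX), then push the value.
    Public Sub PushWithDebounce(db As IDatabase, value As String)
        Dim firstInWindow As Boolean = db.StringSet("myListLock", 1, TimeSpan.FromMilliseconds(10000), [When].NotExists)

        db.ListLeftPush("myList", value)

        If firstInWindow Then
            ' we were the first pusher of this window: schedule the drain worker
            ' to run once the 10 s debounce window has elapsed
        End If
    End Sub

End Module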

The CLR has been unable to transition from COM context 0x22c4f60 to COM context 0x22c51b0 for 60 seconds

I am developing an application that calls a web service, which deletes information from a database (the web service was developed by a third party vendor). On the first run approximately 100,000 records are deleted.
I have tested the routine a few times and this appears in Visual Studio occasionally:
"The CLR has been unable to transition from COM context 0x22c4f60 to COM context 0x22c51b0 for 60 seconds.
The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages.
This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time.
To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations."
I assume that the web service is taking more than sixty seconds to pass control back to the .NET Forms app. Please see the following quote from the message: "To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations". As this is a Windows Forms app, does this mean that I do not need to do anything to allow for this?
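One common way to satisfy the advice in that message is to move the slow call off the UI (STA) thread so it keeps pumping messages. The sketch below uses BackgroundWorker; serviceClient.DeleteRecords() is a hypothetical stand-in for the vendor's web-service call, not an API from the question.

Imports System.ComponentModel
Imports System.Windows.Forms

Public Class MainForm
    Inherits Form

    Private WithEvents deleteWorker As New BackgroundWorker()

    ' Sketch only: the slow call runs on a thread-pool thread, so the UI thread
    ' keeps pumping messages instead of blocking for minutes.
    Private Sub StartDelete()
        deleteWorker.RunWorkerAsync()
    End Sub

    Private Sub deleteWorker_DoWork(sender As Object, e As DoWorkEventArgs) Handles deleteWorker.DoWork
        ' runs on a background thread; no message pumping is needed here
        ' serviceClient.DeleteRecords()
    End Sub

    Private Sub deleteWorker_RunWorkerCompleted(sender As Object, e As RunWorkerCompletedEventArgs) Handles deleteWorker.RunWorkerCompleted
        ' back on the UI thread: update the form, inspect e.Error, and so on
    End Sub

End Class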
Sometimes this issue may also occur due to a wrong server name: if you specify SERVER/SQLEXPRESS instead of SERVER\SQLEXPRESS, this error is also displayed, which was my case.
Check and be sure, if you are using a reader, that you have not included try/catches. You should make an effort to resolve your solution's issue better than that. A try/catch will cause a timeout, especially in a while loop.

TADOStoredProc/TADOQuery CommandTimeout...?‏

On a client, a "Timeout" error is being raised when running certain commands against the database.
My first test option for a fix is to increase the CommandTimeout to 99999 ... but I am afraid that this treatment will generate further problems.
Has anyone experienced this?
I wonder whether my concern is relevant, and/or whether there is another, more robust and elegant correction.
You are correct to assume that upping the timeout is not the correct approach. Typically, I look for long-running queries that are running up against the timeouts. They will typically stand out in the areas of duration and reads.
Then I'll work to reduce the query run time using this method:
https://www.simple-talk.com/sql/performance/simple-query-tuning-with-statistics-io-and-execution-plans/
If it's a report causing issues and you can't get it running faster, you may need to start thinking about setting up a reporting database.
CommandTimeout is the time that the client waits for a response from the server. If the query is run in the main VCL thread, the whole application is "frozen" and might be marked "not responding" by Windows. So, would you expect your users to wait at a frozen app for 99999 seconds?
Generally, leave the timeout values at their defaults and concentrate on tuning the queries instead, as Sam suggests. If you happen to have long-running queries (i.e. some background data movement, calculations etc. in stored procedures), set the CommandTimeout to 0 (= INFINITE) but run them in a separate thread.
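As a hedged illustration of that last suggestion, here is the same pattern sketched in ADO.NET/VB.NET rather than Delphi (the TADO components expose an equivalent CommandTimeout property); the stored procedure name and connection handling are invented.

Imports System.Data
Imports System.Data.SqlClient
Imports System.Threading

Module LongRunningJob

    ' Sketch only: CommandTimeout = 0 means "wait forever", which is only acceptable
    ' because the work runs on a background thread and never blocks the UI.
    Public Sub StartNightlyRecalc(connectionString As String)
        Dim worker As New Thread(
            Sub()
                Using conn As New SqlConnection(connectionString)
                    Using cmd As New SqlCommand("dbo.RecalculateBalances", conn)
                        cmd.CommandType = CommandType.StoredProcedure
                        cmd.CommandTimeout = 0   ' infinite; background work only
                        conn.Open()
                        cmd.ExecuteNonQuery()
                    End Using
                End Using
            End Sub)
        worker.IsBackground = True
        worker.Start()
    End Sub

End Module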