TimeOut in Thread with a query from io.vertx.ext.sql.SQLClient

Well, I'm a new developer with Vert.x, and I have a problem with a database connection implementation.
In one (or several) of my queries I have a lot of information, around 160K records; those records are put into a JSON object and returned through GraphQL. So when the query time goes over 30000 ms, the console says:
Thread Thread[vert.x-eventloop-thread-1,5,main] has been blocked for 5026 ms, time limit is 2000 ms
io.vertx.core.VertxException: Thread blocked
I've investigated this, but I can't find a way to resolve it, or to raise the time limit so the query can run until it finishes and returns all the records.

This question is actually covered in detail in the official documentation.
you can’t call blocking operations directly from an event loop, as
that would prevent it from doing any other useful work
That's what you're doing at the moment - calling a blocking operation.
An alternative way to run blocking code is to use a worker verticle. A
worker verticle is always executed with a thread from the worker pool.
Run your "slow" code in a worker verticle. Communicate between event-loop verticles and workers using the EventBus. As long as you're inside the same VM, passing even large collections over the EventBus has no overhead.
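A minimal sketch of that pattern, using plain Java executors rather than the Vert.x API so it stands alone (class and method names are illustrative; in Vert.x the worker pool would be a worker verticle or executeBlocking): the "event loop" thread only submits the blocking query and registers a callback, it never waits on the query itself.

```java
import java.util.concurrent.*;

public class WorkerOffload {
    // Stand-in for the slow SQLClient/JDBC query that returns 160K records
    static int slowQuery() throws InterruptedException {
        Thread.sleep(100); // pretend this is the long-running fetch
        return 160_000;
    }

    // Offload the blocking call to a worker pool; the result comes back
    // via a callback instead of blocking the calling thread.
    static CompletableFuture<Integer> fetchOffloaded(ExecutorService workerPool) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                return slowQuery();
            } catch (InterruptedException e) {
                throw new CompletionException(e);
            }
        }, workerPool);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService workerPool = Executors.newFixedThreadPool(4);
        // The "event loop" thread only registers a callback; it never blocks:
        fetchOffloaded(workerPool).thenAccept(rows ->
                System.out.println("rows=" + rows));
        workerPool.shutdown();
        workerPool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

With this shape the event loop stays free no matter how long the query takes; only the worker thread is tied up.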

Related

boost::asio and boost::thread_group where each thread has its own libpqxx connection

I'm trying to combine boost::asio and boost::thread_group so that each thread has its own libpqxx (Postgres) connection to the database. I can't seem to find any examples of asio/thread_group where the thread a task runs on carries connection-specific information. Asio seems to assume that the task itself contains all the information required to run it. Am I looking at the wrong combination to solve my problem?
I have a lot of requests coming in to my program, and each of them requires SQL commands to be run against the DB (TimescaleDB in my case). These requests must be run over a limited number of connections to the DB (normally 8 in total).
My plan was to set up a thread_group of 8 threads, each with its own connection to the DB and each running the Asio event loop, so that I could submit new queries with asio::post and get a callback via signals2 when the result comes in.
Asio "hides" the threads, and thanks to asio::strand you can more or less avoid concurrency issues. In short, you just hand tasks to Asio; as soon as a thread is available, your task runs. But Asio has a learning curve, as does concurrency in general...
As you describe your problem, thread-local storage is the answer.
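Sketched in Java rather than C++ so the example is self-contained (ThreadLocal plays the role of thread_local or boost::thread_specific_ptr, and the Connection class is a stand-in for a pqxx::connection):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class PerThreadConnection {
    // Stand-in for a libpqxx/Postgres connection (illustrative only)
    static class Connection {
        final int id;
        Connection(int id) { this.id = id; }
    }

    static final AtomicInteger opened = new AtomicInteger();

    // Thread-local storage: each pool thread lazily opens its own connection
    static final ThreadLocal<Connection> conn =
            ThreadLocal.withInitial(() -> new Connection(opened.incrementAndGet()));

    // Post `tasks` queries onto a pool of `threads` workers; every task uses
    // its own thread's private connection, so at most `threads` connections
    // are ever opened, with no sharing and no locking.
    static int run(int tasks, int threads) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < tasks; i++) {
            pool.submit(() -> conn.get()); // run SQL on this thread's connection
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return opened.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("connections opened: " + run(100, 8));
    }
}
```

The same shape works with Asio: open the connection in the thread function before calling io_context::run, or keep it in a thread_specific_ptr.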

Compute task with query from cache

I'm new to Apache Ignite (using 2.7) and I'm looking to create a set of compute tasks that also query data from a cache. I see in the docs the concept of collocated processing but I don't see any examples in the repo. Couple of things I'm unclear on:
1) I want to query the cache from within the task. Do I need to create another instance of the cache using Ignite.start or client mode from within the task, or is there some implicit variable I can use from the context to query the cache?
2) Specifically, I'd like to execute this task as the result of a Continuous Query callback; are there any examples detailing that?
thanks
You should inject an instance of Ignite into your task - this is the preferred approach.
This may be tricky - make sure not to run the task synchronously, since you should not acquire any locks from a Continuous Query callback. Maybe the Async() methods are OK. The preferred approach is to schedule the task onto your own thread pool to handle the processing later, and return from the callback. Make sure you don't end up waiting when that pool is exhausted (a common rejection strategy is to run the task synchronously on the caller when the pool is full).
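The hand-off described above can be sketched like this (plain Java with illustrative names; a real local listener would receive Ignite cache events rather than a String key):

```java
import java.util.concurrent.*;

public class CallbackHandoff {
    // Our own pool for the heavy work triggered by cache events.
    // A fixed pool with an unbounded queue means a busy pool queues work
    // rather than running it synchronously on the callback thread (the
    // caller-runs trap the answer warns about).
    static final ExecutorService handlerPool = Executors.newFixedThreadPool(2);

    // What the Continuous Query callback should do: record the event,
    // schedule the processing, and return immediately.
    static Future<String> onCacheEvent(String key) {
        return handlerPool.submit(() -> {
            // Heavy processing / further cache queries happen here, on our
            // own pool, so no callback-side locks are held while we work.
            return "processed:" + key;
        });
    }

    public static void main(String[] args) throws Exception {
        Future<String> f = onCacheEvent("k1");
        System.out.println(f.get());
        handlerPool.shutdown();
    }
}
```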

VB.Net multiple background workers leads to high CPU usage

I've got a VB.Net application that has two background workers. The first one connects to a device and reads a continuous stream of data from it into a structure. This runs and utilises around 2% of CPU.
From time to time new information comes in that's incomplete so I have another background worker which sits in a loop waiting for a global variable to be anything other than null. This variable is set when I want it to look up the missing information.
When both are running CPU utilisation goes to 30-50% of CPU.
I thought that offloading the lookup to its own thread would be a good move, as the lookup process may block (it's querying a URL), and this would prevent the first background worker from getting stuck, since it needs to read the incoming data in real time. However, just commenting out the code in worker 2 to leave only the loop shown below still results in the same high CPU.
Do While lookupRunning = True
    If lookup <> "" Then
        ' Query a URL and get data
    End If
Loop
The problem is clearly that I'm running a busy loop on worker 2. Putting Application.DoEvents in the loop doesn't seem to make much difference, and it seems to be frowned upon in any case. The only alternative I can think of is dumping this idea and doing the lookup on the main thread with a very short timeout in case the web service fails to respond.
Is there are better way to do this ?
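A lower-CPU alternative to the busy-wait is a blocking hand-off: in .NET that would be a BlockingCollection(Of String) or an AutoResetEvent; the same pattern is sketched here in Java with a BlockingQueue, where take() consumes essentially no CPU while waiting (names are illustrative):

```java
import java.util.concurrent.*;

public class LookupWorker {
    // Instead of polling a global variable, the reader thread posts lookup
    // requests onto a queue; take() blocks idly until work arrives.
    static final BlockingQueue<String> requests = new LinkedBlockingQueue<>();
    static final String POISON = "__stop__"; // sentinel to end the worker

    static Thread startWorker(BlockingQueue<String> results) {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String lookup = requests.take(); // sleeps until an item arrives
                    if (lookup.equals(POISON)) return;
                    results.add("resolved:" + lookup); // stand-in for the URL query
                }
            } catch (InterruptedException ignored) { }
        });
        worker.start();
        return worker;
    }

    public static void main(String[] args) throws Exception {
        BlockingQueue<String> results = new LinkedBlockingQueue<>();
        Thread worker = startWorker(results);
        requests.add("ABC123");        // what "setting the global variable" becomes
        System.out.println(results.take());
        requests.add(POISON);
        worker.join();
    }
}
```

The producer (worker 1) enqueues an item whenever it sees an incomplete record; the consumer (worker 2) wakes only when there is actually something to look up.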

Run long-running sproc (that doesn't need to return) from ASP.NET page

I would like to know how you would run a stored procedure from a page and just "let it finish" even if the page is closed. It doesn't need to return any data.
A database-centric option would be:
Create a table that will contain a list (or queue) of long-running jobs to be performed.
Have the application add an entry to the queue if, when, and as desired. That's all it does; once logged and entered, no web session or state data need be maintained.
Have a SQL Agent job configured to check every 1, 2, 5, whatever minutes to see if there are any jobs to run.
If there are as-yet unstarted items, mark the most recent one as started, and start it.
When it's completed, mark it as completed, or just delete it.
Check if there are any other items to run. If there are, repeat; if not, exit the job.
Depending on capacity, you could have several (differently named) copies of this job running, concurrently processing items from the list.
(I've used this method for very long-running methods. It's more an admin-type trick, but it may be appropriate for your situation.)
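A minimal sketch of the queue table and the claim step in T-SQL (table and column names are illustrative; the UPDLOCK and READPAST hints let several concurrent copies of the Agent job claim rows without grabbing the same one):

```sql
-- Illustrative schema; adapt to your own
CREATE TABLE JobQueue (
    JobId     INT IDENTITY(1,1) PRIMARY KEY,
    Payload   NVARCHAR(MAX) NOT NULL,
    Status    TINYINT NOT NULL DEFAULT 0,  -- 0 = queued, 1 = started, 2 = done
    StartedAt DATETIME NULL
);

-- Inside the Agent job: atomically claim one queued item
UPDATE TOP (1) JobQueue WITH (UPDLOCK, READPAST)
SET Status = 1, StartedAt = GETDATE()
OUTPUT inserted.JobId, inserted.Payload
WHERE Status = 0;
```

The job then runs the long stored procedure for the claimed row, sets Status = 2 (or deletes the row), and loops until the claim query returns nothing.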
Prepare the command first, then queue it on the thread pool. Just make sure the thread does not depend on the HttpContext or any other HTTP intrinsic object; if your request finishes before the thread, the context might be gone.
See Asynchronous procedure execution. This is the only method that guarantees execution even if the ASP.NET process crashes. It is also self-tuning and can handle spikes of load: requests are queued up and processed as resources become available.
The gist of the solution is leveraging the SQL Server Activation concept, which allows you to run a stored procedure in a background thread in SQL Server without a client connection.
Solutions based on SqlClient async methods or on the CLR thread pool are unreliable; the calls are lost when the ASP.NET process is recycled, and besides, they build up in-memory queues of requests that can themselves trigger a process recycle due to memory consumption.
Solutions based on tables and Agent jobs are better, as they are reliable, but they lack the self-tuning of activation-based solutions.

Does Debug.Writeline in VB.NET stop thread execution?

I have a VB.NET application that uses threads to asynchronously process some tasks in a "Scheduled Task" (console application).
We are limiting this application to run 10 threads at once, like so:
(pseudo-code)
- Create a generic list of 10 threads
- Spawn off the threadproc for each one
- Do a thread.join statement for each thread to wait for the longest running one to complete.
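That pseudo-code corresponds roughly to this pattern (sketched in Java rather than VB.NET; the task body is a stand-in):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class TenThreads {
    static int runAll(int count) throws InterruptedException {
        AtomicInteger done = new AtomicInteger();
        List<Thread> threads = new ArrayList<>();
        // 1) create the list of threads
        for (int i = 0; i < count; i++) {
            threads.add(new Thread(() -> {
                // thread proc: process one task
                done.incrementAndGet();
            }));
        }
        // 2) spawn off the thread proc for each one
        threads.forEach(Thread::start);
        // 3) join each thread - this returns once the
        //    longest-running one has completed
        for (Thread t : threads) t.join();
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("completed: " + runAll(10));
    }
}
```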
I am finding that if the code called by the threadproc contains any "Debug.Writeline" or "Trace.Traceinformation" statements, the thread hangs. I can see the thread in the Debug - Windows - Threads window and switch to it, but it highlights the Debug.Writeline statement and never gets past it.
Is there something special about the Debug or Trace statements that make them non-thread-safe?
Why would this hang things up? If I leave the debug statement in, the thread never completes. If I take the debug statement out, the thread completes in less than 5 seconds.
Yes and no.
Internally, Debug.WriteLine ends up calling into TraceInternal.WriteLine. This particular function does not explicitly stop thread execution but it does acquire a process global lock during the execution of the method. This lock protects both the list of trace listeners and serializes the processing of WriteLine commands.
It's possible for two threads to hit this WriteLine statement simultaneously, so one thread may pause for a short period of time. It is also possible for a custom trace listener to be doing a very long-lived or blocking operation, which would essentially freeze all other tracing threads for a noticeable period of time.
Use Visual Studio to check and see what other threads are currently broken in this function. See if that gives you a clue as to what is holding up this process.
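The serialization described above is easy to reproduce. Here is a Java sketch (not the .NET internals, just the same shape) where every "WriteLine" takes one process-wide lock, so at most one thread is ever inside it, and a slow listener holding that lock stalls everyone else:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class SerializedLogging {
    static final Object traceLock = new Object(); // stand-in for the global trace lock
    static final AtomicInteger inside = new AtomicInteger();
    static volatile int maxConcurrent = 0;

    // Every "WriteLine" takes the same lock; a long-lived custom listener
    // running inside it would block every other logging thread.
    static void writeLine(String msg) {
        synchronized (traceLock) {
            int now = inside.incrementAndGet();
            if (now > maxConcurrent) maxConcurrent = now;
            // ... listeners would run here while the lock is held ...
            inside.decrementAndGet();
        }
    }

    static int run(int threads, int writesPerThread) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < writesPerThread; i++) writeLine("msg");
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return maxConcurrent; // always 1: the lock fully serializes the writes
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("max threads inside writeLine at once: " + run(8, 1000));
    }
}
```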
You may have a custom trace listener that is interfering with Debug.WriteLine.
There is a custom trace listener in this application. Once I commented it out, my locking problems were solved. Now if I could only track down the original developer to find out what they were doing with this custom listener...