Concurrency Thread Group
Ultimate thread group
Both are quite similar when it comes to defining a concurrency-based load pattern, however:
In the Ultimate Thread Group you can have more than one row, so you're more flexible when it comes to advanced ramp-up and especially ramp-down behaviour; for example, you can add the same load pattern one more time as an extra row.
On the other hand, the Concurrency Thread Group can be connected to the Throughput Shaping Timer via the Feedback Function, so you can control the throughput of your test execution and define it as, say, 100 requests per second; the Concurrency Thread Group will then add more threads if the current number is not enough to reach or maintain the target throughput.
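For example (the names and numbers here are placeholders), the Target Concurrency field of the Concurrency Thread Group can be set to the Throughput Shaping Timer's feedback function:

    ${__tstFeedback(shaper-name,1,100,10)}

where shaper-name is whatever you named your Throughput Shaping Timer element, 1 and 100 are the starting and maximum concurrency, and 10 is the number of spare threads to keep around; check the jmeter-plugins documentation for the exact parameter semantics.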
My understanding of the Bulkhead pattern is that it's a way of isolating thread pools. Hence, interactions with different services use different thread pools: if the same thread pool is shared, one service timing out constantly might exhaust the entire thread pool, taking down the communication with the other (healthy) services. By using different ones, the impact is reduced.
Given my understanding, I don't see any reason to apply this pattern to non-blocking applications as threads don't get blocked and, therefore, thread pools wouldn't get exhausted either way.
I would appreciate it if someone could clarify this point in case I'm missing something.
EDIT (explain why it's not a duplicate):
There's another (more generic) question asking why one would use the Circuit-Breaker and Bulkhead patterns with Reactor. That question was answered in a very generic way, explaining why all Resilience4J decorators are relevant when working with Reactor.
My question, on the other hand, is specific to the Bulkhead pattern, as I don't understand its benefits in scenarios where threads don't get blocked.
The Bulkhead pattern is not only about isolating thread pools.
Think of Little's law: L = λ * W
Where:
L – the average number of concurrent tasks in a queuing system
λ – the average number of tasks arriving at a queuing system per unit of time
W – the average service time a task spends in a queuing system
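A quick worked example with made-up numbers: if tasks arrive at λ = 50 per second and each spends W = 0.2 seconds in the system on average, then

    L = λ * W = 50 * 0.2 = 10

so roughly 10 tasks are in flight at any moment, and a bulkhead sized slightly above that (say 15 permits) bounds L without rejecting normal traffic.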
The Bulkhead pattern is more about controlling L in order to prevent resource exhaustion. This can be done by using:
bounded queues + thread pools
semaphores
Even non-blocking applications require resources per concurrent task which you might want to restrict. Semaphores could help to restrict the number of concurrent tasks.
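To make that concrete, here is a minimal sketch using Resilience4j's semaphore-based Bulkhead; the name "backendA", the limit of 20 and the callBackend() stub are just example values:

    import io.github.resilience4j.bulkhead.Bulkhead;
    import io.github.resilience4j.bulkhead.BulkheadConfig;

    import java.time.Duration;
    import java.util.function.Supplier;

    public class BulkheadExample {
        public static void main(String[] args) {
            // Allow at most 20 concurrent calls; callers wait up to 10 ms for a permit
            BulkheadConfig config = BulkheadConfig.custom()
                    .maxConcurrentCalls(20)
                    .maxWaitDuration(Duration.ofMillis(10))
                    .build();

            Bulkhead bulkhead = Bulkhead.of("backendA", config);

            // Decorate the call; the 21st concurrent call is rejected instead of
            // piling up and exhausting resources
            Supplier<String> decorated =
                    Bulkhead.decorateSupplier(bulkhead, () -> callBackend());

            System.out.println(decorated.get());
        }

        private static String callBackend() {
            return "response"; // stand-in for the real remote call
        }
    }

No thread pool is involved here: the Bulkhead is just a counting semaphore around the call, which is why it still makes sense for non-blocking code.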
The RateLimiter pattern is about controlling λ, and the TimeLimiter is about controlling the maximum time a task is allowed to spend.
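Continuing with Resilience4j, a minimal configuration sketch of those two decorators (all names and numbers are arbitrary examples): the RateLimiter caps λ, the TimeLimiter caps how long a single call may take:

    import io.github.resilience4j.ratelimiter.RateLimiter;
    import io.github.resilience4j.ratelimiter.RateLimiterConfig;
    import io.github.resilience4j.timelimiter.TimeLimiter;
    import io.github.resilience4j.timelimiter.TimeLimiterConfig;

    import java.time.Duration;

    public class RateAndTimeLimits {
        public static void main(String[] args) {
            // RateLimiter: at most 100 calls per second (controls the arrival rate λ)
            RateLimiterConfig rateConfig = RateLimiterConfig.custom()
                    .limitForPeriod(100)
                    .limitRefreshPeriod(Duration.ofSeconds(1))
                    .timeoutDuration(Duration.ofMillis(25))
                    .build();
            RateLimiter rateLimiter = RateLimiter.of("backendA", rateConfig);

            Runnable limited = RateLimiter.decorateRunnable(
                    rateLimiter, () -> System.out.println("call"));
            limited.run();

            // TimeLimiter: a single call may take at most 2 seconds (controls W);
            // it is normally used to wrap a Future or CompletionStage
            TimeLimiterConfig timeConfig = TimeLimiterConfig.custom()
                    .timeoutDuration(Duration.ofSeconds(2))
                    .build();
            TimeLimiter timeLimiter = TimeLimiter.of(timeConfig);
        }
    }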
An adaptive Bulkhead can even replace RateLimiters. Have a look at this awesome talk "Stop Rate Limiting! Capacity Management Done Right" by Jon Moore.
We are currently developing an AdaptiveBulkhead in Resilience4j which adapts the concurrency limit of tasks dynamically. The implementation is comparable to TCP congestion control algorithms, which use an additive increase/multiplicative decrease (AIMD) scheme to dynamically adapt a congestion window.
But the AdaptiveBulkhead is of course protocol-agnostic.
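To illustrate the AIMD scheme itself (this is a toy sketch of the idea, not the Resilience4j AdaptiveBulkhead implementation or API):

    // Toy AIMD concurrency limiter, only to illustrate the scheme described above;
    // it is NOT the Resilience4j AdaptiveBulkhead.
    public class AimdLimit {
        private double limit = 10;           // current concurrency limit
        private final double min = 1;
        private final double max = 200;
        private final double increase = 1;   // additive increase per healthy call
        private final double decrease = 0.5; // multiplicative decrease factor

        public synchronized void onResult(boolean healthy) {
            if (healthy) {
                limit = Math.min(max, limit + increase);  // additive increase
            } else {
                limit = Math.max(min, limit * decrease);  // multiplicative decrease
            }
        }

        public synchronized int currentLimit() {
            return (int) limit;
        }
    }

Healthy, fast calls let the limit creep up; a slow or failed call halves it, just like a congestion window reacting to packet loss.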
This is an interview question I encountered today. I have some knowledge about OSes but I'm not really proficient with them. I think maybe there is a limit to the number of threads each process can create?
Any ideas will help.
This question can be viewed [at least] in two ways:
Can your process get more CPU time by creating many threads that need to be scheduled?
or
Can your process get more CPU time by creating threads to allow processing to continue when another thread(s) is blocked?
The answer to #1 is largely system dependent. However, any rationally designed system is going to protect against rogue processes trying this. Generally, the answer here is NO. In fact, some older systems only schedule processes, not threads. In those cases, the answer is always NO.
The answer to #2 is generally YES. One of the reasons to use threads is to allow a process to continue processing while it has to wait on some external event.
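A trivial, made-up Java illustration of #2: while one thread is blocked waiting for an external event, another thread of the same process keeps doing useful work, so the process as a whole gets more done per unit of wall-clock time:

    public class OverlapBlockingWork {
        public static void main(String[] args) throws InterruptedException {
            // This thread simulates waiting on an external event (e.g. slow I/O)
            Thread waiter = new Thread(() -> {
                try {
                    Thread.sleep(2000); // stand-in for a blocking call
                    System.out.println("external event arrived");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            waiter.start();

            // Meanwhile the main thread keeps computing instead of sitting idle
            long sum = 0;
            for (int i = 0; i < 100_000_000; i++) {
                sum += i;
            }
            System.out.println("computed while waiting: " + sum);

            waiter.join();
        }
    }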
The number of threads that can run in parallel depends on the number of CPUs on your machine.
It also depends on the characteristics of the processes you're running: if they're CPU-bound, it won't be efficient to run more threads than the number of CPUs on your machine; on the other hand, if they do a lot of I/O, or any other kind of task that blocks a lot, it can make sense to increase the number of threads.
As for the question "how many" - you'll have to tune your app, make measurements and decide based on actual data.
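As a starting point for that tuning, one common rule of thumb (adapted from Java Concurrency in Practice; the wait/compute numbers below are made up) sizes the pool from the ratio of time spent blocked to time spent computing:

    public class PoolSizing {
        public static void main(String[] args) {
            int cpus = Runtime.getRuntime().availableProcessors();

            // For CPU-bound work: roughly one thread per core
            int cpuBoundThreads = cpus;

            // For work that blocks: threads ≈ cpus * (1 + waitTime / computeTime)
            double waitTime = 80;    // ms spent blocked (I/O), example value
            double computeTime = 20; // ms spent on the CPU, example value
            int ioBoundThreads = (int) (cpus * (1 + waitTime / computeTime));

            System.out.println("CPU-bound pool size: " + cpuBoundThreads);
            System.out.println("I/O-bound pool size: " + ioBoundThreads);
        }
    }

Measure first, then adjust: the formula only tells you where to start looking.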
Short answer: Depends on the OS.
I'd say it depends on how the OS scheduler is implemented.
From personal experience with my hobby OS, it can certainly happen.
In my case, the scheduler is implemented with a round-robin algorithm, per thread, independent of which process the threads belong to.
So, if process A has 1 thread and process B has 2 threads, and they are all busy, process B will get 2/3 of the CPU time.
There are certainly a variety of approaches. Check Scheduling_(computing)
Throw in priority levels per process and per thread, and it really depends on the OS.
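Coming back to the plain per-thread round-robin case, a tiny simulation (a sketch, not how any real scheduler is written) reproduces the 1/3 vs 2/3 split:

    import java.util.List;

    public class RoundRobinShare {
        public static void main(String[] args) {
            // Each entry is a runnable thread, tagged with the process it belongs to
            List<String> readyThreads = List.of("A", "B", "B");

            int slices = 3000;           // total time slices handed out
            int aTime = 0, bTime = 0;

            // Round robin over threads, ignoring which process owns them
            for (int i = 0; i < slices; i++) {
                String owner = readyThreads.get(i % readyThreads.size());
                if (owner.equals("A")) aTime++; else bTime++;
            }

            System.out.println("Process A share: " + (double) aTime / slices); // ~0.33
            System.out.println("Process B share: " + (double) bTime / slices); // ~0.67
        }
    }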
I am currently pursuing an undergraduate-level course in Operating Systems. I'm somewhat confused about the functions of the dispatcher and the scheduler in process scheduling. Based on what I've learnt, the medium-term scheduler selects the process for swapping out and in, and once the processes are selected, the actual swap operation is performed by the dispatcher via context switching. Also, the short-term scheduler is responsible for scheduling the processes and allocating them CPU time, based on the scheduling algorithm followed.
Please correct me if I'm wrong. I'm really confused about the functions of medium term scheduler vs dispatcher, and differences between Swapping & context switching.
You're describing things in system-specific terms.
The scheduler and the dispatcher could be all the same thing. However, they frequently are divided so that the scheduler maintains a queue of processes and the dispatcher handles the actual context switch.
If you divide the scheduler into long term, medium term, and short term, that division (if it exists at all) is specific to the operating system.
Swapping is the process of removing a process from memory. A process can be made non-executable through a context switch but may not be swapped out. Swapping is generally independent of scheduling. However, a process must be swapped in to run, and the memory manager will try to avoid swapping out executing processes.
A scheduler evaluates the requirements of the requests to be serviced and thus imposes an ordering.
Basically, whatever you know about the scheduler and dispatcher is correct. Sometimes they are referred to as the same unit, or the scheduler (short-term in this case) contains the dispatcher as a single unit, and together they are responsible for allocating a process to the CPU for execution. Sometimes they are treated as two separate units: the scheduler selects a process according to some algorithm, and the dispatcher is the software responsible for the actual context switching.
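A purely conceptual sketch of that division of labour, written as Java-flavoured pseudocode rather than any real kernel's code: the scheduler only decides, the dispatcher only switches:

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Conceptual only: shows the split in responsibilities, not a real kernel.
    public class SchedulerAndDispatcher {

        // Scheduler: maintains the ready queue and applies the scheduling policy
        static class Scheduler {
            private final Queue<String> readyQueue = new ArrayDeque<>();
            void add(String process) { readyQueue.add(process); }
            String selectNext()      { return readyQueue.poll(); } // FIFO policy here
        }

        // Dispatcher: carries out the actual context switch to the chosen process
        static class Dispatcher {
            void dispatch(String process) {
                // In a real OS: save the old context, load the new one, switch mode
                System.out.println("context switch to " + process);
            }
        }

        public static void main(String[] args) {
            Scheduler scheduler = new Scheduler();
            Dispatcher dispatcher = new Dispatcher();
            scheduler.add("P1");
            scheduler.add("P2");

            String next;
            while ((next = scheduler.selectNext()) != null) {
                dispatcher.dispatch(next);
            }
        }
    }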
This question is probably opposite of what every developer wants their system to do.
I am creating a software that looks into a directory for specific files and reads them in and does certain things. This can create high CPU load. I use GCD to make threads which are put into NSOperationQueue. What I was wondering, is it possible to make this operation not take this huge CPU load? I want to run things way slower, as speed is not an issue, but that the app should play very nice in the background is very important.
In short. Can I make NSOperationQueue or threads in general run slowly without using things like sleep?
The app traverses a directory structure, finds all images and creates thumbnails. Just the traversing of the directories makes the CPU load quite high.
Process priority: nice / renice.
See:
https://superuser.com/questions/42817/is-there-any-way-to-set-the-priority-of-a-process-in-mac-os-x#42819
but you can also do it programmatically.
Your threads are being CPU-intensive. This leads to two questions:
Do they need to be so CPU-intensive? What are they doing that's CPU-intensive? Profile the code. Are you using (say) a quadratic algorithm when you could be using a linear one?
Playing nicely with other processes on the box. If there's nothing else on the box then you /want/ to use all of the available CPU resource: otherwise you're just wasting time. However, if there are other things running then you want to defer to them (within reason), which means giving your process a lower priority (i.e. a /higher/ nice value) than other processes. Processes by default have nice value 0, so just make it bigger (say +10). You have to be root to give a process negative niceness.
The Operation Queues section of the Concurrency Programming Guide describes the process for changing the priority of an NSOperation:
Changing the Underlying Thread Priority
In OS X v10.6 and later, it is possible to configure the execution priority of an operation’s underlying thread. Thread policies in the system are themselves managed by the kernel, but in general higher-priority threads are given more opportunities to run than lower-priority threads. In an operation object, you specify the thread priority as a floating-point value in the range 0.0 to 1.0, with 0.0 being the lowest priority and 1.0 being the highest priority. If you do not specify an explicit thread priority, the operation runs with the default thread priority of 0.5.
To set an operation’s thread priority, you must call the setThreadPriority: method of your operation object before adding it to a queue (or executing it manually). When it comes time to execute the operation, the default start method uses the value you specified to modify the priority of the current thread. This new priority remains in effect for the duration of your operation’s main method only. All other code (including your operation’s completion block) is run with the default thread priority. If you create a concurrent operation, and therefore override the start method, you must configure the thread priority yourself.
Having said that, I'm not sure how much of a performance difference you'll really see from adjusting the thread priority. For more dramatic performance changes, you may have to use timers, sleep/suspend the thread, etc.
If you're scanning the file system looking for changed files, you might want to refer to the File System Events Programming Guide for guidance on lightweight techniques for responding to file system changes.
I have been doing some research on priority scheduling algorithms, and although I find priority aging to be a very basic (and seemingly sound) strategy, I can barely find information about it. Could someone please let me know the issues and advantages of implementing such an algorithm? Thanks!
It appears that priority aging changes the priority of a task (usually lowering it) depending on how long the task has been running and/or how many resources the task consumes.
IBM has an explanation of priority aging in DB2 version 9.7 for Linux, UNIX, and Windows.
The biggest advantage of priority aging comes from the IBM explanation:
A simple approach that you can use to help short queries to run faster is to define a series of service classes with successively lower levels of resource priority and threshold actions that move activities between the service subclasses. Using this setup, you can decrease, or age, the priority of longer-running work over time and perhaps improve response times for shorter-running work without having detailed knowledge of the activities running on your data server.
The biggest disadvantage is that priority aging is harder to implement than a first-in, first-out queue, and may not provide any response time improvement.
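As an illustration of the scheme the IBM quote describes (a toy sketch, nothing to do with DB2's actual implementation): each task starts in the highest-priority tier and is demoted the longer it runs, so a short task that arrives later overtakes long-running work:

    import java.util.Comparator;
    import java.util.PriorityQueue;

    // Illustrative sketch of priority aging: the longer a task runs, the lower
    // its priority tier, so short newcomers overtake long-running work.
    public class PriorityAgingSketch {
        static class Task {
            final String name;
            int tier = 0;            // 0 = highest priority tier
            int remainingSlices;
            Task(String name, int remainingSlices) {
                this.name = name;
                this.remainingSlices = remainingSlices;
            }
        }

        public static void main(String[] args) {
            PriorityQueue<Task> ready =
                    new PriorityQueue<>(Comparator.comparingInt(t -> t.tier));
            ready.add(new Task("long-report", 8));

            for (int slice = 0; !ready.isEmpty(); slice++) {
                if (slice == 3) {
                    ready.add(new Task("short-query", 2)); // arrives later, at tier 0
                }
                Task t = ready.poll();                     // run the highest tier
                System.out.println("slice " + slice + ": " + t.name + " (tier " + t.tier + ")");
                if (--t.remainingSlices > 0) {
                    t.tier++;                              // age: demote after each slice
                    ready.add(t);
                }
            }
        }
    }

Running it shows the short-query finishing within two slices of arriving, while the already-demoted long-report waits, which is exactly the response-time effect the IBM text describes.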
Advantages of Priority Scheduling:
Simplicity.
Reasonable support for priority.
Suitable for applications with varying time and resource requirements.
Disadvantages of Priority Scheduling:
Indefinite blocking or starvation.
Priority scheduling can leave some low-priority processes waiting indefinitely for the CPU.
If the system eventually crashes, then all unfinished low-priority processes get lost.
Depending on how the aging process works, the worst case behavior can be just as bad as a simple queue.