
Why can't I mine in more than one pool at the same time in pool mining?
I thought I could send one share to more than one pool if the difficulty was the same.
That way, it would be possible to increase my reward.

When you mine in a pool, you must try to mine blocks that reward the members of that pool. That's what it means to mine in a pool.
An attempt to mine a block must be an attempt to mine some particular block. If you're attempting to mine a block that pays rewards to the members of pool A, you aren't attempting to mine a block that pays rewards to the members of pool B.
Why can't I mine in more than one pool at the same time in pool mining?
Because no two pools ever try to mine the same block. To mine in a pool, you must be trying to mine the block that pool wishes to mine.
I thought I could send one share to more than one pool if the difficulty was the same.
Nope. If you aren't actually trying to produce the blocks that a pool wants to mine, you aren't entitled to shares.
That way, it would be possible to increase my reward.
No, it won't. Pool A won't reward you with shares for trying to mine blocks that pay to pool B's members. Why would they? And Pool B will only pay you for trying to mine blocks that pay to their members. Why would they pay you to mine any other blocks?

You certainly can mine in more than one pool at the same time, but your hashrate doesn't speed up, because mining pools use different methodologies to assign work to miners. Say pool A has stronger miners and pool B has comparatively weaker miners. The pooling algorithm running on the pool server should be efficient enough to distribute the mining tasks evenly across those subgroups.
Your mining device can only work on one job at a time, distributed by either pool A or pool B. Even though the difficulty is the same, the mining jobs from pool A and pool B are different.
You can check out the Bitcoin Stratum mining protocol for more information.
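One way to see why the jobs differ: each pool's block template contains a coinbase transaction paying that pool's members, the coinbase feeds into the merkle root, and the merkle root is part of the block header you hash. A minimal Java sketch of the consequence (the header strings are illustrative placeholders, not a real block layout):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

public class WhyJobsDiffer {
    // Bitcoin-style double SHA-256.
    static byte[] sha256d(byte[] data) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        return sha.digest(sha.digest(data));
    }

    public static void main(String[] args) throws Exception {
        // Same chain tip, same difficulty; only the coinbase payout differs,
        // which changes the merkle root and therefore the whole header.
        String headerForPoolA = "prevHash|merkleRoot(coinbase pays pool A)|bits|nonce";
        String headerForPoolB = "prevHash|merkleRoot(coinbase pays pool B)|bits|nonce";

        // The values you grind nonces against are completely unrelated,
        // so a share found for one job proves no work on the other.
        HexFormat hex = HexFormat.of();
        System.out.println(hex.formatHex(sha256d(headerForPoolA.getBytes(StandardCharsets.UTF_8))));
        System.out.println(hex.formatHex(sha256d(headerForPoolB.getBytes(StandardCharsets.UTF_8))));
    }
}
```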

Can a Deadlock occur with CPU as a resource?

I am in my fourth year of Software Engineering and we are covering the topic of Deadlocks.
The usual generalization is that a deadlock occurs when two processes A and B each hold one of two resources X and Y and wait for the other process to release its resource before releasing their own.
My question would be, given that the CPU is a resource in itself, is there a scenario where there could be a deadlock involving CPU as a resource?
My first thought on this problem is that you would require a system where a process cannot be released from the CPU by timed interrupts (it could just be an FCFS algorithm). You would also require no waiting queues for resources, because getting into a queue would release the resource. But then I also ask: can there be deadlocks when there are queues?
A CPU scheduler can be implemented in any way; you could build one that uses an FCFS algorithm and allows processes to decide when to relinquish control of the CPU. But that kind of implementation is neither practical nor reliable, since the CPU is the single most important resource an operating system has. Allowing a process to take control of it in such a way that it may never be preempted effectively makes that process the owner of the system, which contradicts the basic idea that the operating system should always be in control of the system.
As far as contemporary operating systems (Linux, Windows etc) are concerned, this will never happen because they don't allow such situations.
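For reference, the textbook two-resource deadlock that the question generalizes from can be reproduced in a few lines of Java (a minimal sketch; the CPU-as-resource variant would additionally need the kind of non-preemptive scheduler described above):

```java
// Classic two-resource deadlock: each thread holds one resource and
// waits forever for the other. Running this normally hangs.
public class DeadlockDemo {
    private static final Object resourceX = new Object();
    private static final Object resourceY = new Object();

    public static void main(String[] args) {
        Thread a = new Thread(() -> {
            synchronized (resourceX) {
                pause();                      // give B time to grab Y
                synchronized (resourceY) { }  // waits on B forever
            }
        });
        Thread b = new Thread(() -> {
            synchronized (resourceY) {
                pause();                      // give A time to grab X
                synchronized (resourceX) { }  // waits on A forever
            }
        });
        a.start();
        b.start();
    }

    private static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException ignored) { }
    }
}
```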

What's the purpose of applying the Bulkhead pattern on a non-blocking application?

My understanding of the Bulkhead pattern is that it's a way of isolating thread pools. Hence, interactions with different services use different thread pools: if the same thread pool is shared, one service timing out constantly might exhaust the entire thread pool, taking down the communication with the other (healthy) services. By using different ones, the impact is reduced.
Given my understanding, I don't see any reason to apply this pattern to non-blocking applications as threads don't get blocked and, therefore, thread pools wouldn't get exhausted either way.
I would appreciate it if someone could clarify this point in case I'm missing something.
EDIT (to explain why it's not a duplicate):
There's another (more generic) question asking why one would use the Circuit-Breaker and Bulkhead patterns with Reactor. That question was answered in a very generic way, explaining why all Resilience4j decorators are relevant when working with Reactor.
My question, on the other hand, is specific to the Bulkhead pattern, as I don't understand its benefits in scenarios where threads don't get blocked.
The Bulkhead pattern is not only about isolating thread pools.
Think of Little's law: L = λ * W
Where:
L – the average number of concurrent tasks in a queuing system
λ – the average number of tasks arriving at a queuing system per unit of time
W – the average time a task spends in a queuing system
The Bulkhead pattern is more about controlling L in order to prevent resource exhaustion. For example, with λ = 200 tasks per second and W = 0.5 s, you get L = 100 concurrent tasks in flight. This can be done by using:
bounded queues + thread pools
semaphores
Even non-blocking applications require resources per concurrent task, which you might want to restrict. Semaphores can help restrict the number of concurrent tasks.
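As a concrete sketch of the semaphore variant, Resilience4j's semaphore-based Bulkhead looks roughly like this (signatures recalled from its docs, so treat the exact API details as assumptions):

```java
import io.github.resilience4j.bulkhead.Bulkhead;
import io.github.resilience4j.bulkhead.BulkheadConfig;

import java.time.Duration;
import java.util.function.Supplier;

public class BulkheadSketch {
    public static void main(String[] args) {
        // Cap L (concurrent tasks) at 25 and reject immediately when full.
        BulkheadConfig config = BulkheadConfig.custom()
                .maxConcurrentCalls(25)
                .maxWaitDuration(Duration.ZERO)
                .build();
        Bulkhead bulkhead = Bulkhead.of("backendA", config);

        // The decorated supplier acquires a permit before the call and
        // releases it afterwards; no thread pool is involved, so it works
        // for non-blocking callers too.
        Supplier<String> guarded =
                Bulkhead.decorateSupplier(bulkhead, () -> "response");

        System.out.println(guarded.get());
    }
}
```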
The RateLimiter pattern is about controlling λ, and the TimeLimiter pattern about controlling the maximum time a task is allowed to spend.
An adaptive Bulkhead can even replace RateLimiters. Have a look at this awesome talk "Stop Rate Limiting! Capacity Management Done Right" by Jon Moore.
We are currently developing an AdaptiveBulkhead in Resilience4j which adapts the concurrency limit of tasks dynamically. The implementation is comparable to TCP congestion control algorithms, which use an additive-increase/multiplicative-decrease (AIMD) scheme to dynamically adapt a congestion window.
But the AdaptiveBulkhead is of course protocol-agnostic.
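A minimal sketch of that AIMD idea (illustrative only, not the actual Resilience4j implementation):

```java
// Adjusts a concurrency limit the way TCP congestion control adjusts
// its window: creep up while latency is healthy, halve on trouble.
public class AimdLimit {
    private double limit = 20;                      // current concurrency limit
    private final double minLimit = 1, maxLimit = 200;

    // Called after each completed call with its observed latency.
    void onResult(long latencyMillis, long latencyThresholdMillis) {
        if (latencyMillis <= latencyThresholdMillis) {
            limit = Math.min(maxLimit, limit + 1);   // additive increase
        } else {
            limit = Math.max(minLimit, limit * 0.5); // multiplicative decrease
        }
    }

    int concurrencyLimit() {
        return (int) limit;
    }
}
```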

Understanding BLOCKED_TIME in PerfView

We suspect that we're experiencing thread pool starvation on a server that is running a couple of ASP.NET Core APIs and a couple of .NET Core console apps.
I ran PerfView on one of our servers where we suspect problems with thread pool starvation. However, I'm having a bit of trouble analyzing the results.
I ran PerfView /threadTime collect for about 60 seconds. This is the result I got (I chose to look at one of our ASP.NET Core APIs):
Looking at "By Name" we can see that a lot of time is spent in BLOCKED_TIME. If I double-click, I'm taken to the following view, where I can expand one of the nodes (the redacted part is the name of our API process):
What does that tell me? Shouldn't I be able to see what exactly is blocking? And does it look like the problem is that a lot of threads are blocking, each for a small amount of time?
Are there any other conclusions we can draw from this?
BLOCKED_TIME generally means a period when the thread wasn't doing anything at all. This could be periods of I/O, where network or other types of latency are involved, or time spent waiting on locks such as semaphores. In short, this doesn't necessarily tell you anything, as there are perfectly standard and reasonable reasons for a thread to be idle. However, a large amount of time spent blocked can be an indication of an underlying problem. Perhaps you have too much network latency. Perhaps you're trying to do too much file system work on a slow drive. It may or may not indicate a problem, and even if it does, it doesn't really tell you what the problem is.
In general, if you're experiencing thread starvation, the first thing you should look at is thread pool utilization. Are you using async everywhere you can? Are you doing things that are big no-nos in web apps, such as using Task.Run, Task.Factory.StartNew or, worse, Thread.Start? Work scheduled that way competes with request handling for the same thread pool, proportionally reducing your server throughput.
There's an all too common pattern of attempting to schedule long-running jobs by shuffling them to new threads. That's death to a web application. All threads in the pool are there to service requests, not long-running jobs, and as such, requests should be handled quickly and efficiently so that each thread can be returned to the pool in short order to field other requests. If you need to do background work, you need to truly background it, by offloading it to another process or even a different machine entirely, as sketched below.
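The details above are .NET-specific, but the shape of the fix is runtime-agnostic: the request handler records the job somewhere durable and returns immediately, and a separate worker picks it up. A Java sketch of the handler side (the in-memory queue here is just a stand-in for the external broker or job table a real deployment would use):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class OffloadSketch {
    // Stand-in for an external job queue; in production this would be
    // a message broker or database table consumed by a separate process.
    static final BlockingQueue<String> jobQueue = new LinkedBlockingQueue<>();

    // Hypothetical request handler: enqueue and return at once, so the
    // request thread goes straight back to the pool.
    static String handleRequest(String jobPayload) throws InterruptedException {
        jobQueue.put(jobPayload);
        return "202 Accepted";
    }

    public static void main(String[] args) throws Exception {
        System.out.println(handleRequest("nightly-report"));
    }
}
```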
Short of all that, maybe you're just getting more load than the server can handle in general. That's always a possibility. Perhaps you need to vertically scale your system resources (and the thread pool with it). Perhaps you need to horizontally scale by replicating this server with a load balancer in front. Given that you're running multiple different things on the same server, an easy way to horizontally scale is to simply divvy out these things to their own machines. That alone would probably help tremendously. However, scaling, either vertically or horizontally, should be your last resort. Make sure you're using resources efficiently first, before throwing more resources at your inefficient things.

Is it useful to create many threads for a specific process to increase the chance of this process being executed?

This is an interview question I encountered today. I have some knowledge about operating systems but I'm not really proficient. I think maybe there is a limit to how many threads each process can create?
Any ideas will help.
This question can be viewed [at least] in two ways:
Can your process get more CPU time by creating many threads that need to be scheduled?
or
Can your process get more CPU time by creating threads to allow processing to continue when another thread(s) is blocked?
The answer to #1 is largely system dependent. However, any rationally-designed system is going to protect against rogue processes trying this. Generally, the answer here is NO. In fact, some older systems only schedule processes, not threads. In those cases, the answer is always NO.
The answer to #2 is generally YES. One of the reasons to use threads is to allow a process to continue processing while it has to wait on some external event.
The number of threads that can run in parallel depends on the number of CPUs on your machine.
It also depends on the characteristics of the processes you're running: if they're CPU-bound, it won't be efficient to run more threads than the number of CPUs on your machine; on the other hand, if they do a lot of I/O, or any other kind of task that blocks a lot, it can make sense to increase the number of threads.
As for the question "how many" - you'll have to tune your app, make measurements and decide based on actual data.
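A toy harness for that kind of measurement, with sleep standing in for blocking I/O (all numbers illustrative):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadCountTuning {
    // Simulated I/O-bound task: mostly waiting, almost no CPU.
    static void ioTask() {
        try { Thread.sleep(50); } catch (InterruptedException ignored) { }
    }

    static long runWith(int threads, int tasks) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CountDownLatch done = new CountDownLatch(tasks);
        long start = System.nanoTime();
        for (int i = 0; i < tasks; i++) {
            pool.submit(() -> { ioTask(); done.countDown(); });
        }
        done.await();
        pool.shutdown();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        // For blocking workloads, throughput keeps improving well past
        // the core count, until some other bottleneck takes over.
        for (int threads : new int[] {1, 4, 16, 64}) {
            System.out.println(threads + " threads: " + runWith(threads, 256) + " ms");
        }
    }
}
```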
Short answer: Depends on the OS.
I'd say it depends on how the OS scheduler is implemented.
From personal experience with my hobby OS, it can certainly happen.
In my case, the scheduler is implemented with a round-robin algorithm, per thread, independent of which process the threads belong to.
So, if process A has 1 thread, and process B has 2 threads, and they are all busy, Process B would be getting 2/3 of the CPU time.
There are certainly a variety of approaches. Check Scheduling_(computing)
Throw in priority levels per process and per thread, and it really depends on the OS.
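A toy simulation of that per-thread round robin makes the 2/3 split visible (illustrative only):

```java
import java.util.HashMap;
import java.util.Map;

public class RoundRobinSim {
    public static void main(String[] args) {
        // Runnable threads tagged by owning process:
        // process A has 1 thread, process B has 2.
        String[] threads = {"A:1", "B:1", "B:2"};

        Map<String, Integer> timePerProcess = new HashMap<>();
        int quanta = 9000; // total time slices handed out

        // Per-thread round robin, blind to process boundaries.
        for (int i = 0; i < quanta; i++) {
            String owner = threads[i % threads.length].split(":")[0];
            timePerProcess.merge(owner, 1, Integer::sum);
        }

        // Prints {A=3000, B=6000}: process B gets 2/3 of the CPU time.
        System.out.println(timePerProcess);
    }
}
```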

Apache Commons Pool LIFO vs FIFO

I'm wondering what the advantages are of a LIFO stack vs a FIFO queue in the implementation of a pool with Apache Commons Pool. Wouldn't it be more "secure" to default to FIFO to avoid getting timed-out connections (opened at start but not used until peak hours) and probably avoid having to test on idle?
I'd appreciate any opinions. Thank you very much.
Some advantages to LIFO (the default) can be:
The idle object evictor will work more effectively if turned on
Work may be concentrated on a smaller number of instances, reusing more recently used resources.
Whether or not these are benefits depends on what the pooled objects are, what the load distribution is, how important it is to keep workload concentrated on a small number of instances and how beneficial it is to reuse more recently used resources.
You are correct that using LIFO can cause some instances to sit idle in the pool for longer periods. If keeping the pool trimmed down and concentrating load are not advantages in your case, timeouts are a problem, and the load distribution is such that FIFO access keeps instances fresh, that configuration can make sense. This is why the configuration option is there.
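For completeness, switching Commons Pool to FIFO is a one-line configuration change. A hedged sketch against the Commons Pool 2 API (the pooled resource and factory here are placeholders):

```java
import org.apache.commons.pool2.BasePooledObjectFactory;
import org.apache.commons.pool2.PooledObject;
import org.apache.commons.pool2.impl.DefaultPooledObject;
import org.apache.commons.pool2.impl.GenericObjectPool;
import org.apache.commons.pool2.impl.GenericObjectPoolConfig;

public class FifoPoolSketch {
    // Placeholder pooled resource standing in for a connection.
    static class Conn { }

    static class ConnFactory extends BasePooledObjectFactory<Conn> {
        @Override
        public Conn create() { return new Conn(); }

        @Override
        public PooledObject<Conn> wrap(Conn c) { return new DefaultPooledObject<>(c); }
    }

    public static void main(String[] args) throws Exception {
        GenericObjectPoolConfig<Conn> config = new GenericObjectPoolConfig<>();
        config.setLifo(false);         // FIFO: rotate through all idle instances
        config.setTestWhileIdle(true); // validate idlers when the evictor runs

        GenericObjectPool<Conn> pool = new GenericObjectPool<>(new ConnFactory(), config);
        Conn c = pool.borrowObject();
        pool.returnObject(c);
        pool.close();
    }
}
```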