Repast - Is batch run already parallelized? - repast-simphony

If my computer has 12 cores and my model has 15 scenario runs, does the batch run automatically distribute the first 12 runs across the cores and run them concurrently to save time? If yes, I'd also like to know whether I can control how many cores are used, e.g. limit the batch to 8 cores at a time to prevent out-of-memory errors when a single run is large scale.

Take a look at section 2.3 Host Panel here. You'll see that the Instances property determines how many independent workers will be used to process the scenarios you define. For example, if you set Instances: 8, then 8 workers using 8 cores will process your 15 scenarios.

Related

Optaplanner - multithreading

I am using OptaPlanner 8.17.FINAL with Java 17.0.2 inside a Kubernetes cluster; my server has 32 cores plus hyper-threading. My app scales to 14 pods and I use moveThreadCount = 4. On a single run everything works fine, but on parallel runs OptaPlanner's speed drops. With 7 launches the drop is insignificant, 5-10%, but with 14 launches the speed drops by about 50%. Of course, you could say there are not enough physical cores, but I'm not sure hyper-threading works like that. In resource monitoring I see that 60 logical cores are busy with 14 launches, so why does the speed drop by half?
I've tried increasing the heap size and changing the garbage collector (G1GC, SerialGC, ParallelGC), but it has little effect.
I am not an expert on hyper-threading by any means, but perhaps OptaPlanner, by fully utilizing each core, cannot benefit much from HT. If so, you simply don't have enough CPU cores to run that many solvers in parallel: 14 solvers with 4 move threads each (plus a solver thread per solver) is already well beyond 32 physical cores, which leads to context switching and, as a result, a performance drop.
You can test that by adding more cores. If it helps, it means there is no artificial bottleneck for this number of tasks.
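For reference, the setting in question is just a per-solver property. A minimal sketch, assuming the OptaPlanner 8 public API and a placeholder "solverConfig.xml" resource:

    import org.optaplanner.core.api.solver.SolverFactory;
    import org.optaplanner.core.config.solver.SolverConfig;

    public class MoveThreadExample {
        public static void main(String[] args) {
            // "solverConfig.xml" is a placeholder for your existing solver config resource.
            SolverConfig config = SolverConfig.createFromXmlResource("solverConfig.xml");
            // Number of move threads per solver; also accepts "AUTO" or "NONE".
            config.setMoveThreadCount("4");
            // Build Solver instances from this factory as usual.
            SolverFactory<?> factory = SolverFactory.create(config);
            System.out.println("configured move threads: " + config.getMoveThreadCount());
        }
    }

Lowering this value (or the pod count) so that solvers x (move threads + 1) stays at or below the number of physical cores is a cheap way to test the context-switching theory.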

Usages of cores in Spark SQL Execution

I am new to Spark SQL queries and am trying to understand how they work under the hood.
I have come across the term "core" in the Spark vocabulary but am still struggling to get a hold of it.
I know that 1 core = 1 task.
My questions:
Can anyone please explain what exactly a core means?
Does the Spark UI show the number of cores currently allocated to my job? If yes, where can I see it?
If I find in the Spark UI that the number of running tasks is low, is there a way to increase the number of cores allocated to my job, so that Spark can submit more tasks and make it run faster?
Please advise.
Yes, you are right in a way.
In Spark, tasks are distributed across executors, and on each executor the number of tasks running in parallel equals the number of cores on that executor. So basically a core is what executes your task, and a task is the most granular piece of work that needs to be carried out.
JOB => STAGE => TASK
Yes, the Spark UI shows the number of tasks currently running on each of your executors. You can check them under the Executors tab, which gives a detailed view of your task allocation against the number of available cores, along with many other details.
Yes, you can increase the number of cores by passing an argument to the spark-submit command:
--executor-cores n
Here n is the number of cores you want per executor; a commonly cited sweet spot is about 5.
More cores do not automatically make your job run faster.
Your tasks need to be distributed evenly across all the available cores to run faster.
If you provide more cores than required, they will remain idle most of the time.
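If you prefer to set this in code rather than on the spark-submit command line, the same knob is the spark.executor.cores property. A minimal sketch with placeholder values (spark.executor.instances applies on YARN/Kubernetes):

    import org.apache.spark.sql.SparkSession;

    public class CoresExample {
        public static void main(String[] args) {
            // Equivalent to: spark-submit --executor-cores 5 --num-executors 4
            // The numbers are placeholders, not a recommendation for every cluster.
            SparkSession spark = SparkSession.builder()
                    .appName("cores-example")
                    .config("spark.executor.cores", "5")      // parallel tasks per executor
                    .config("spark.executor.instances", "4")  // number of executors
                    .getOrCreate();

            System.out.println("default parallelism: " + spark.sparkContext().defaultParallelism());
            spark.stop();
        }
    }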

single thread process on multi cpu and threads

Let's say I have single-threaded processes and 2 CPUs, each with 2 cores.
How many processes can I run at any moment, 2 or 4? I couldn't find a clear answer to this.
Is the CPU bound to the process, so that a core is wasted and only 2 processes can run at the same time, or are there optimizations that let 4 processes run at the same time on the 4 cores even though we only have 2 CPUs?
There is no limit. The number of cores or CPUs has no connection whatsoever to the number of processes you can run.
I'm typing this answer to you on a machine with 8 cores that's currently executing 218 processes with a total of 524 threads.
Is the CPU bound to the process, so that a core is wasted and only 2 processes can run at the same time, or are there optimizations that let 4 processes run at the same time on the 4 cores even though we only have 2 CPUs?
A CPU has no idea what a process is and doesn't care whether a thread it's executing is associated with a process or not. Processes are OS concepts and CPUs don't know or care about them.
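As a small illustration of that point, here is a sketch in Java (threads inside one JVM rather than separate processes, but the scheduler treats runnable threads the same way) that deliberately starts more CPU-bound workers than there are cores; all of them run to completion, just time-sliced:

    import java.util.concurrent.*;

    public class Oversubscribe {
        public static void main(String[] args) throws Exception {
            int cores = Runtime.getRuntime().availableProcessors();
            int workers = cores * 4; // deliberately more workers than cores

            ExecutorService pool = Executors.newFixedThreadPool(workers);
            for (int i = 0; i < workers; i++) {
                final int id = i;
                pool.submit(() -> {
                    long x = 0;
                    for (long n = 0; n < 200_000_000L; n++) x += n; // CPU-bound busy work
                    System.out.println("worker " + id + " finished (" + x + ")");
                });
            }
            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.MINUTES);
            System.out.println(cores + " cores ran " + workers + " CPU-bound workers");
        }
    }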

How To Let A Program Use All CPU Power In VB.NET?

I'm working on a password list generator program. This program needs to be as fast as possible, but it only uses about 13% of the CPU.
What should I do to make it use all the CPU power available?
Heh, I thought it might be 8 cores. The reason is that your app is running on one thread, so only one core is being used; 13% is about 1/8 of 100 :)
If you can split the work into 8 separate threads, it will use the other 7 cores as well.
Obviously your program is only using one thread, and because of this not all cores of your CPU are used.
You have to convert your program into something multithreaded.
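As a language-agnostic sketch of that split (shown in Java here; in VB.NET the same pattern is usually written with Parallel.For or Tasks from System.Threading.Tasks), partitioning the candidate space across one worker per core might look like this, with a placeholder 4-character lowercase alphabet:

    import java.util.concurrent.*;

    public class ParallelGenerator {
        static final char[] ALPHABET = "abcdefghijklmnopqrstuvwxyz".toCharArray();

        public static void main(String[] args) throws Exception {
            int cores = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cores);

            // Partition by first character: each worker handles every candidate
            // whose first letter falls in its slice of the alphabet.
            for (int w = 0; w < cores; w++) {
                final int worker = w;
                pool.submit(() -> {
                    long count = 0;
                    for (int i = worker; i < ALPHABET.length; i += cores)
                        for (char c2 : ALPHABET)
                            for (char c3 : ALPHABET)
                                for (char c4 : ALPHABET) {
                                    // a real generator would build the candidate string here
                                    // and append it to a per-worker output file
                                    count++;
                                }
                    System.out.println("worker " + worker + " generated " + count + " candidates");
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
        }
    }

Writing each worker's output to its own file and merging at the end avoids making a shared, synchronized writer the new bottleneck.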

How to ensure multiple redis instances running on different cores?

I have a 4-core server and I want to run Redis on it. To fully utilize the 4 cores, it is expected to launch 4 Redis instances, since Redis is designed to be single-threaded.
However, I'm curious how to ensure that the 4 instances are running on 4 different cores. How can an instance choose the core it runs on when it is launched?
Redis itself does not provide such a guarantee.
If you launch 4 instances, there will be 4 different processes that the operating system has to schedule on the 4 cores. It is up to the OS to perform this load balancing and optimize the performance of the system.
Now, if you really want to bind each instance to a specific core, modern operating systems usually provide tools to enforce the execution of a process on a specific CPU core.
On Linux, for instance, you can have a look at the taskset and numactl commands.
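For illustration, a typical invocation (config paths and ports are placeholders) pins one instance per core while leaving one core free for the background work described below:

    # Pin three instances to cores 0-2 on a 4-core box; core 3 stays free
    # for background saves / AOF rewrites.
    taskset -c 0 redis-server /etc/redis/redis-6379.conf
    taskset -c 1 redis-server /etc/redis/redis-6380.conf
    taskset -c 2 redis-server /etc/redis/redis-6381.conf

    # numactl achieves an equivalent binding (and can also control NUMA memory placement):
    numactl --physcpubind=2 redis-server /etc/redis/redis-6381.conf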
In practice, you need to be careful with this, because once you launch Redis on a specific core (setting a CPU mask), all of its threads and child processes inherit that CPU mask. So when Redis triggers a background save operation or a background AOF rewrite, it will seriously impact the performance of the instance, because the main Redis thread has to share the CPU core with the background operation (which is typically CPU-consuming).
If you really want to play with CPU binding (but is it really a good idea?), you need to bind N Redis instances to N+1 CPU cores, keeping one core free for the background operations, and make sure at most one background operation can run at the same time for these instances.