Apache Ignite - Pool size in Executor

I am trying to use the cluster-based executor service.
// Get cluster-enabled executor service.
ExecutorService exec = ignite.executorService();
Is there any way to set the number of threads in the executor service pool?
I hope jobs will be executed on each node in the cluster in a round-robin fashion.
Thanks

Jobs submitted to the distributed executor service are executed in the public thread pool. Its size can be configured via the IgniteConfiguration.publicThreadPoolSize property. Note that the size is specified per node.
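As an illustrative sketch only, this is how the property might be set programmatically; the pool size of 16 and the submitted job are placeholders, not values from the question:
import java.util.concurrent.ExecutorService;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.lang.IgniteRunnable;

public class ExecutorPoolSizeExample {
    public static void main(String[] args) throws Exception {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // The public pool executes jobs submitted through the distributed executor.
        // The value is per node, so each node runs at most 16 job threads here.
        cfg.setPublicThreadPoolSize(16);

        try (Ignite ignite = Ignition.start(cfg)) {
            // Cluster-enabled executor service; jobs are load-balanced across nodes.
            ExecutorService exec = ignite.executorService();
            exec.submit((IgniteRunnable) () ->
                System.out.println("Hello from the cluster")).get();
        }
    }
}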

Related

RabbitMq Consumer not working because micronaut assigns executor threads to kafka consumer

I am running Kafka in Micronaut v3.4.3 in Kotlin, and recently I integrated RabbitMQ with the server using micronaut-rabbitmq v3.4.0. The docs mention specifying the executors for the RabbitMQ consumers in application.yml.
Now when the server starts, the Kafka listeners are already occupying the executor threads indefinitely, so the RabbitMQ consumers are not able to acquire those threads.
So, is there a way to segregate the consumer executor threads for Kafka and RabbitMQ?

Jedis pool configuration for get and set/flush operations

I am new to Redis and am using the Jedis client in my application. I have gone through a couple of threads and did not find conclusive answers.
I have two questions where I need clarity.
For production use I want to set separate timeouts for Jedis get and set operations: 2000 ms for every set operation and 100 ms for gets. I have implemented the configuration below.
JedisPoolConfig poolConfig = new JedisPoolConfig();
poolConfig.setMaxIdle(30);          // max idle connections kept in the pool
poolConfig.setMinIdle(10);          // min idle connections kept ready
poolConfig.setMaxWaitMillis(2000);  // max time (ms) to wait for a connection from the pool
jedisPool = new JedisPool(poolConfig, RedisDBUrl, port, 100); // 100 = timeout (ms)
Will the above configuration do the job? I am setting the read timeout to 100 ms and maxWait to 2000 ms.
Let me know if my understanding is correct.
At times I get JedisConnectionException: java.net.SocketTimeoutException: Read timed out, and sometimes a connect timeout.
Is the connect timeout thrown when my application is not able to make a connection to Redis within the configured time?
Secondly, does the read timeout occur when the application is connected to Redis but operations (get/set) are taking too long, or for some other reason?
Lastly, how do I configure the read timeout and the connect timeout separately?
After much trial and error and some test runs, I found out that you cannot set separate timeouts for Jedis get and set operations.
Maybe you can use an external library to achieve this (like Google Guava's SimpleTimeLimiter).
Further, from what I have observed, the connect timeout occurs when Jedis tries to connect to the Redis server. In my case the Redis server latency from my system is ~120-125 ms, so if I set timeout=100ms in the Jedis constructor I get "connect timed out".
Whereas "read timed out" comes when you are connected to the Redis server but the Redis operation doesn't return in the specified time. To test this scenario I set the timeout in the constructor to 180 ms and ran the flushall operation (which takes a long time); here I got a read timeout.
I am still not sure, though, what the significance of poolConfig.setMaxWaitMillis() is.
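For completeness: although get and set cannot be given different timeouts, the connect timeout and the read (socket) timeout can be configured separately. A minimal sketch, assuming a Jedis version whose JedisPool constructor accepts distinct connectionTimeout and soTimeout arguments; the host, port, and pool settings are placeholders:
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

JedisPoolConfig poolConfig = new JedisPoolConfig();
poolConfig.setMaxIdle(30);
poolConfig.setMinIdle(10);
// Max time (ms) a caller blocks while waiting to borrow a connection from the pool.
poolConfig.setMaxWaitMillis(2000);

int connectionTimeout = 2000; // ms allowed to establish the TCP connection
int soTimeout = 100;          // ms allowed for a single Redis command to respond

JedisPool jedisPool = new JedisPool(
        poolConfig, "redis.example.com", 6379,
        connectionTimeout, soTimeout,
        null,   // password (none in this sketch)
        0,      // database index
        null);  // client name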

What is the difference between YARN and Hive2 queues?

What is the difference between yarn.scheduler.capacity.root.queues and hive.server2.tez.default.queues?
In short:
The hive.server2.tez.default.queues values are a subset of the yarn.scheduler.capacity.root.queues values (assuming the capacity scheduler is configured in YARN; otherwise, of whichever scheduler is in use).
Detailed answer:
hive.server2.tez.default.queues (default: empty): A list of comma-separated values corresponding to YARN queues of the same name. When HiveServer2 is launched in Tez mode, this configuration needs to be set for multiple Tez sessions to run in parallel on the cluster.
This does NOT mean that queries can't be issued to other "existing" queues defined in the capacity scheduler. (source)
yarn.scheduler.capacity.root.queues: The CapacityScheduler has a pre-defined queue called root. All queues in the system are children of the root queue. Further queues can be set up by configuring yarn.scheduler.capacity.root.queues with a list of comma-separated child queues. (source, setting up the capacity scheduler)
So the scope of hive.server2.tez.default.queues covers Hive queries only, while yarn.scheduler.capacity.root.queues applies to all components in the cluster (such as MapReduce and Spark) that use YARN as the resource manager.
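For illustration, a minimal sketch of how the two properties might relate; the queue names default and analytics are hypothetical:
<!-- capacity-scheduler.xml: defines the YARN queues (hypothetical names) -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>default,analytics</value>
</property>

<!-- hive-site.xml: the subset of those YARN queues used for HiveServer2's parallel Tez sessions -->
<property>
  <name>hive.server2.tez.default.queues</name>
  <value>analytics</value>
</property>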

Amazon EMR managing my spark cluster

I have a Spark setup on Amazon EC2 machines with 2 worker machines running. It reads data from Cassandra, does some processing, and writes to SQL Server. I have heard about Amazon EMR and read about it. I want a managed system where worker machines are automatically added to my cluster if my job is taking more time, and shut down when my job is completed.
Can I achieve this through Amazon EMR?
The requirements are:
Worker machines are automatically added to my cluster if my job is taking more time.
The cluster shuts down when my job is completed.
No. 2 is definitely possible if your job is launched from steps. There is an option that auto-terminates the cluster after the last step is completed. Alternatively, this could also be done programmatically with the SDK.
No. 1 is a little more difficult, but EMR has three classes of nodes: master, core, and task. Task nodes can be added after cluster creation. The trigger for that would probably have to be done programmatically or by using another Amazon service, like Lambda.
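As a rough sketch of the programmatic route (not a definitive recipe), resizing a task instance group with the AWS SDK for Java v1 might look like this; the cluster ID, instance group ID, and target count are placeholders, and in practice the trigger would likely come from CloudWatch metrics or a Lambda function:
import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduce;
import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClientBuilder;
import com.amazonaws.services.elasticmapreduce.model.InstanceGroupModifyConfig;
import com.amazonaws.services.elasticmapreduce.model.ModifyInstanceGroupsRequest;

public class ResizeTaskGroup {
    public static void main(String[] args) {
        AmazonElasticMapReduce emr = AmazonElasticMapReduceClientBuilder.defaultClient();

        // Resize the TASK instance group of a running cluster.
        // "j-CLUSTERID", "ig-TASKGROUPID" and the count of 4 are placeholders.
        emr.modifyInstanceGroups(new ModifyInstanceGroupsRequest()
                .withClusterId("j-CLUSTERID")
                .withInstanceGroups(new InstanceGroupModifyConfig()
                        .withInstanceGroupId("ig-TASKGROUPID")
                        .withInstanceCount(4)));
    }
}
For No. 2, when the cluster is created programmatically, setting KeepJobFlowAliveWhenNoSteps to false on the JobFlowInstancesConfig has the same effect as the console's auto-terminate option: the cluster shuts down after its last step completes.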

What is the correct way to use the timeout manager with the distributor in NServiceBus 3+?

Pre-version 3, the recommendation was to run a timeout manager as a standalone process on your cluster, beside the distributor. (As detailed here: http://support.nservicebus.com/customer/portal/articles/965131-deploying-nservicebus-in-a-windows-failover-cluster)
After the inclusion of the timeout manager as a satellite assembly, what is the correct way to use it when scaling out with the distributor?
Should each worker of Service A run with the timeout manager enabled, or should only the distributor process for Service A be configured to run a timeout manager?
If each worker runs it, do they share the same Raven instance for storing the timeouts? (And if so, how do you make sure that two or more workers don't pick up the same expired timeout at the same time?)
Allow me to answer this clearly myself.
After a lot of digging and with help from Andreas Öhlund on the NSB team (http://tech.groups.yahoo.com/group/nservicebus/message/17758), the correct answer to this question is:
As Udi Dahan mentioned, by design ONLY the distributor/master node should run a timeout manager in a scale-out scenario.
Unfortunately in early versions of NServiceBus 3 this is not implemented as designed.
You have the following 3 issues:
1) Running with the Distributor profile does NOT start a timeout manager.
Workaround:
Start the timeout manager on the distributor yourself by including this code on the distributor:
// Start the timeout manager on the distributor when the Distributor profile is activated.
class DistributorProfileHandler : IHandleProfile<Distributor>
{
    public void ProfileActivated()
    {
        Configure.Instance.RunTimeoutManager();
    }
}
If you run the Master profile this is not an issue as a timeout manager is started on the master node for you automatically.
2) Workers running with the Worker profile DO each start a local timeout manager.
This is not as designed and messes up the polling against the timeout store and the dispatching of timeouts. All workers poll the timeout store with "give me the imminent timeouts for MASTERNODE". Notice they ask for the timeouts of MASTERNODE, not for W1, W2, etc. So several workers can end up fetching the same timeouts from the timeout store concurrently, leading to conflicts against Raven when deleting the timeouts from it.
The dispatching always happens through the LOCAL .timeouts/.timeoutsdispatcher queues, while it SHOULD go through the queues of the timeout manager on the MasterNode/Distributor.
Workaround, you'll need to do both:
a) Disable the timeout manager on the workers. Include this code on your workers:
// Disable the local timeout manager when the Worker profile is activated.
class WorkerProfileHandler : IHandleProfile<Worker>
{
    public void ProfileActivated()
    {
        Configure.Instance.DisableTimeoutManager();
    }
}
b) Reroute NServiceBus on the workers to use the .timeouts queue on the MasterNode/Distributor.
If you don't do this, any call to RequestTimeout or Defer on the worker will die with an exception saying that you have forgotten to configure a timeout manager. Include this in your worker config:
<UnicastBusConfig TimeoutManagerAddress="{endpointname}.Timeouts#{masternode}" />
3) Erroneous "Ready" messages back to the distributor.
Because the timeout manager dispatches the messages directly to the workers' input queues without removing an entry from the available workers in the distributor's storage queue, the workers send erroneous "Ready" messages back to the distributor after handling a timeout. This happens even if you have fixed 1 and 2, and it makes no difference whether the timeout was fetched from a local timeout manager on the worker or from one running on the distributor/MasterNode. The consequence is a build-up of an extra entry in the storage queue on the distributor for each timeout handled by a worker.
Workaround:
Use NServiceBus 3.3.15 or later.
In version 3+ we created the concept of a master node which hosts inside it all the satellites like the distributor, timeout manager, gateway, etc.
The master node is very simple to run - you just pass a /master flag to the NServiceBus.Host.exe process and it runs everything for you. So, from a deployment perspective, where you used to deploy the distributor, now you deploy the master node.