AWS EMR Presto cluster terminated abruptly with error: All slaves in the job flow were terminated due to Spot

I am having trouble with AWS EMR PrestoDB.
I launched a cluster with the master node as the Presto coordinator and the core nodes as workers. The core nodes were Spot instances, but the master node was On-Demand. About 5 weeks after the cluster launch, I got this error message:
Terminated with errors: All slaves in the job flow were terminated due to Spot
Does terminating all of the slaves make the cluster itself terminate?
I looked at the Spot pricing history, and it never reached the max price I set.
What have I already done?
I have checked the logs that are dumped to S3, but I didn't find any information about the cause of the termination. They just said
Failed to visit ... <many directories>

I am answering my own question. As per the Presto community, there must be at least one master node up and running in an AWS EMR Presto cluster, but since it got terminated, the whole cluster got terminated.
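If you want to confirm the reason EMR gives for the teardown, the state-change reason is also available from the AWS CLI. A minimal sketch (the cluster ID j-XXXXXXXXXXXXX is a placeholder):
# Returns the same state-change code and message that the console shows
aws emr describe-cluster --cluster-id j-XXXXXXXXXXXXX --query 'Cluster.Status.StateChangeReason'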

To avoid data loss because of Spot pricing/interruption, the data needs to be backed up by either a snapshot, frequent copies to S3, or leaving the EBS volume behind.
Ref: https://aws.amazon.com/premiumsupport/knowledge-center/spot-instance-terminate/
Your cluster should still be up, but without task nodes. Under Cluster -> Details -> Hardware you can add the task nodes.
Adding task nodes
Similar scenario: AWS EMR Error : All slaves in the job flow were terminated
For using Spot, you might want to use the instance termination notice and also set up a max price:
https://aws.amazon.com/blogs/compute/new-amazon-ec2-spot-pricing/
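A minimal sketch of what the termination notice looks like in practice (run on the Spot instance itself): EC2 exposes a pending interruption through the instance metadata service, so an agent or cron job on each worker can poll it and checkpoint or drain work before the two-minute warning expires.
# Returns 404 while no interruption is scheduled; otherwise returns the action and time
curl -s http://169.254.169.254/latest/meta-data/spot/instance-action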

Related

EMR : All slaves in the job flow were terminated due to Spot

We are having an issue with EMR and Spot instances.
We have clusters in different environments (different AWS accounts) in the same region:
One master node with market type: On-Demand
Two core nodes with market type: Spot
When the Spot instances are terminated (over my maximum bid, out of capacity, or whatever), the cluster terminates and I only get this message:
All slaves in the job flow were terminated due to Spot
After some research, I found that people have already had this issue, but it was due to a master node with market type Spot, which is not my case:
AWS EMR Presto Cluster Terminated abruptly Error: All slaves in the job flow were terminated due to Spot (though this one is curious, because it presents an On-Demand master node in the question but then explains the problem by the termination of the master node)
AWS EMR Error : All slaves in the job flow were terminated
I tried to find an answer in the AWS documentation, but it all says the opposite of what we suspect: that terminating the two core nodes terminates the cluster.
Regards,
This has happened because you have chosen the core nodes to be of Spot type. If you read the best practices for instance types in AWS EMR, you will find that they suggest using at least one On-Demand instance for the core nodes. Remember that this will come at an extra cost.
You can use the instance fleet option for the core nodes and add both Spot and On-Demand instance types to that instance fleet (see the sketch after the links below).
So the general rule of thumb is: keep master and core instances On-Demand and task instances on Spot.
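As a minimal sketch of that rule with the AWS CLI and uniform instance groups (the cluster name, key pair, instance types, and the 0.10 bid price are placeholders, not recommendations):
# MASTER and CORE default to On-Demand; specifying BidPrice makes the TASK group Spot
aws emr create-cluster \
    --name "presto-on-demand-core" \
    --release-label emr-5.29.0 \
    --applications Name=Presto \
    --use-default-roles \
    --ec2-attributes KeyName=my-key \
    --log-uri s3://my-emr-logs/ \
    --instance-groups \
        InstanceGroupType=MASTER,InstanceType=m5.xlarge,InstanceCount=1 \
        InstanceGroupType=CORE,InstanceType=m5.xlarge,InstanceCount=2 \
        InstanceGroupType=TASK,InstanceType=m5.xlarge,InstanceCount=2,BidPrice=0.10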
I am adding a few links where you can read more about this and configure your cluster accordingly.
Link1: Cluster configuration and Best Practices
Link2: Types of nodes in EMR
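For the instance fleet option mentioned above, a hedged sketch is to put the fleet definitions in a JSON file and pass it to the CLI; the instance types, capacities, and timeout below are placeholder values to tune for your workload:
aws emr create-cluster \
    --name "presto-instance-fleets" \
    --release-label emr-5.29.0 \
    --applications Name=Presto \
    --use-default-roles \
    --ec2-attributes KeyName=my-key \
    --instance-fleets file://fleets.json

# fleets.json: an On-Demand master plus a core fleet that mixes On-Demand and Spot
# and switches to On-Demand if Spot capacity cannot be provisioned within the timeout
[
  {
    "InstanceFleetType": "MASTER",
    "TargetOnDemandCapacity": 1,
    "InstanceTypeConfigs": [{ "InstanceType": "m5.xlarge" }]
  },
  {
    "InstanceFleetType": "CORE",
    "TargetOnDemandCapacity": 1,
    "TargetSpotCapacity": 2,
    "InstanceTypeConfigs": [
      { "InstanceType": "m5.xlarge" },
      { "InstanceType": "m4.xlarge" }
    ],
    "LaunchSpecifications": {
      "SpotSpecification": {
        "TimeoutDurationMinutes": 20,
        "TimeoutAction": "SWITCH_TO_ON_DEMAND"
      }
    }
  }
]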

Apache Ignite Cache Read Issue (All affinity node left the grid)

Ignite Version: 2.5
Ignite Cluster Size: 10 nodes
One of our Spark jobs writes data to an Ignite cache every hour; the total is 530 million records per hour. Another Spark job reads the cache, but when it tries to read it we get the error "Failed to execute the query (all affinity nodes left the grid)".
Any pointers will be helpful.
If you are using an "embedded mode" deployment, it means that nodes are started when jobs run and stopped when jobs finish. If you do not have enough backups, you will lose data when this happens. Any chance this may be your problem? Be sure to connect to the Ignite cluster with client=true.
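A hedged sketch of the alternative, assuming a standard Apache Ignite binary distribution: run dedicated server nodes outside the Spark jobs so the cached data does not disappear when an executor JVM exits, and have the Spark jobs join only as clients (client=true).
# On each Ignite host: start a standalone server node with your cache configuration
$IGNITE_HOME/bin/ignite.sh /path/to/ignite-config.xml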

Minimum amount of Nodes in Redis Cluster [duplicate]

I know I'm asking something very obvious about cluster failover.
I read on redis.io that if any master node in the cluster fails, it affects the other master nodes until a slave takes over. In my setup I'm not defining any slaves and am just working with 3 masters.
I'm thinking of modifying the redis-trib.rb file so that it removes the failed server and starts the cluster with the other 2 nodes. I'm confused about a couple of things:
1) Resharding
Not possible until the failed server comes back online
2) Minimum 3-node limitation for creating a cluster
As far as I understand, redis-trib.rb does not allow me to create a cluster with two nodes
There might be some solution in the code file :)
3) An automatic way to re-create the structure with the live nodes
From a programmer's point of view, I'm looking for something automatic for my system: something that triggers one command when the Redis cluster fails and then some tasks happen internally, like
Shut down all the other Redis cluster servers
Remove the nodes-[port].conf files from all cluster node folders
Start the Redis cluster servers
Run "redis-trib.rb create ip:port ip:port"
I'm just trying to minimize administration work :). Otherwise I need to implement some other "data consistency" algorithm here.
If any of you guys have any solution or idea, kindly share.
Thanks,
Sanjay Mohnani
In a cluster with only master nodes, if a node fails, data is lost. Therefore no resharding is possible, since it is not possible to migrate the data (hash slots) out of the failed node.
To keep the cluster working when a master fails, you need slave nodes (one per master). This way, when a master fails, its slave fails over (becomes the new master with the same copy of the data).
The redis-trib.rb script does not handle cluster creation with fewer than 3 masters; however, in Redis Cluster a cluster can be of any size (at least one node).
Therefore adding slave nodes can be considered an automatic solution to your problem.
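As a concrete sketch (IPs and ports are placeholders): create the cluster with one replica per master, which needs six nodes in total, so that a failing master is taken over by its replica instead of taking the cluster down.
# 3 masters + 3 slaves; redis-trib assigns one replica to each master
./redis-trib.rb create --replicas 1 \
    127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 \
    127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005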

How to terminate/remove a job flow in Amazon EMR?

I created a job flow using Amazon Elastic MapReduce (Amazon EMR) and it failed for some unknown reason. I then tried to terminate the job flow through the AWS Management Console, but the 'Terminate' button was disabled. I then tried to terminate the job flow using the CLI; it showed that the job flow is terminated, but it still shows as failed in the job flow list, both in the CLI and in the Elastic MapReduce tab of the Management Console.
Please let me know how to remove the job flow from the list.
When I tried to debug the job flow, it showed two errors:
The debugging functionality is not available for this job flow because you did not specify an Amazon S3 Log Path when you created it.
Job flow failed with reason: Invalid bucket name 'testBucket': buckets names must contain only lowercase letters, numbers, periods (.), and dashes (-).
You are facing two issues here:
Job Flow Failure
First and foremost, the problem triggering the termination state of the Amazon EMR job flow that's irritating you can be remedied immediately:
I created a job flow using Amazon Elastic MapReduce (Amazon EMR) and
it failed due to some unknown reasons.
The reason for the job flow failure can actually be inferred from error 2 in the listing you provided:
Job flow failed with reason: Invalid bucket name 'testBucket': buckets
names must contain only lowercase letters, numbers, periods (.), and
dashes (-). [emphasis mine]
Your bucket name 'testBucket' clearly violates the stated lowercase naming requirement, thus changing the name to lowercase only (e.g. 'testbucket' or 'test-bucket') will allow you to run the job flow as desired.
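As a quick sketch with the current AWS CLI (the successor of the old job flow commands; the bucket name here is a placeholder and must be globally unique), you could create a compliant bucket and point the logs at it on the next run:
# Create a lowercase bucket, then pass it as the log location when relaunching
aws s3 mb s3://test-bucket-emr-logs
aws emr create-cluster --name "retry" --release-label emr-5.29.0 \
    --applications Name=Hadoop --use-default-roles \
    --instance-type m5.xlarge --instance-count 3 \
    --log-uri s3://test-bucket-emr-logs/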
Termination State
Furthermore, the Job Flow termination state is presumably no problem at all. While it can happen in rare cases that Amazon EC2 instances or other resources actually get stuck in some state, what you are seeing is perfectly reasonable and normal at first sight:
It may take a while to completely terminate a job flow in the first place, see TerminateJobFlows:
The call to TerminateJobFlows is asynchronous. Depending on the
configuration of the job flow, it may take up to 5-20 minutes for
the job flow to completely terminate and release allocated
resources, such as Amazon EC2 instances. [emphasis mine]
Even terminated EC2 resources may be listed for quite a while still, see e.g. the AWS team response to EC2 Instance stuck in "terminated" state:
Terminated means "gone forever"; although sometimes it hangs around in
the UI for several hours. [emphasis mine]
I regularly see this behavior with EC2 instances, which usually vanish from the instance listing only several hours later. Consequently, I suspect that the terminated job flow has meanwhile vanished from your job flow list as well.
Update
I had actually suspected this to be the case, but am still unable to find related information in the official documentation; however, terminated job flows are apparently visible one way or another for up to two months even, see e.g. the AWS team response to Console not showing jobs older than a month:
While the console lists all running job flows, it only displays
terminated job flows launched in the last month. Alternatively, you
can use the Ruby CLI to list all job flows launched in the last two
months with the following command: [...] [emphasis mine]
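With the current AWS CLI (which replaced the old job flow commands), a hedged equivalent for checking on terminated or failed clusters would be (the date and cluster ID are placeholders):
# List clusters that ended up terminated, optionally narrowed by creation date
aws emr list-clusters --terminated --created-after 2013-01-01
# Inspect one of them, including its state-change reason
aws emr describe-cluster --cluster-id j-XXXXXXXXXXXXX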
If your application is running on Hadoop YARN, you can always use YARN to manage your application:
yarn application -list
yarn application -kill <application_id>