How to install sqoop in Amazon EMR? - hive

I've created a cluster in Amazon EMR using emr-4.0.0, with Hadoop distribution Amazon 2.6.0 and Hive 1.0.0. I need to install Sqoop so that I can move data between Hive and Redshift. What are the steps to install Sqoop on an EMR cluster? Thank You!

Note that in EMR 4.0.0 hadoop fs -copyToLocal will throw errors.
Use aws s3 cp instead.
To be more specific than Amal:
Download the latest version of Sqoop and upload it to an S3 location. I am using sqoop-1.4.4.bin__hadoop-2.0.4-alpha and it seems to work just fine with EMR 4.0.0.
Download the JDBC connector JAR for Redshift and upload it to the same S3 location. This page might help.
Upload a script similar to the one below to S3
#!/bin/bash
# Install Sqoop and the Redshift JDBC connector. Store in S3 and load
# as a bootstrap step.
bucket_location='s3://your-sqoop-jars-location/'
sqoop_jar='sqoop-1.4.4.bin__hadoop-2.0.4-alpha'
sqoop_jar_gz=$sqoop_jar.tar.gz
redshift_jar='RedshiftJDBC41-1.1.7.1007.jar'
cd /home/hadoop
aws s3 cp $bucket_location$sqoop_jar_gz .
tar -xzf $sqoop_jar_gz
aws s3 cp $bucket_location$redshift_jar .
cp $redshift_jar $sqoop_jar/lib/
Set SQOOP_HOME and add SQOOP_HOME to the PATH to be able to call sqoop from anywhere. These entries should be made in /etc/bashrc; otherwise you will have to use the full path, in this case: /home/hadoop/sqoop-1.4.4.bin__hadoop-2.0.4-alpha/bin/sqoop
I am using Java to programmatically launch my EMR cluster. To configure bootstrap steps in Java I create a BootstrapActionConfigFactory:
public final class BootstrapActionConfigFactory {
private static final String bucket = Config.getBootstrapBucket();
// make class non-instantiable
private BootstrapActionConfigFactory() {
}
/**
* Adds an install Sqoop step to the job that corresponds to the version set in the Config class.
*/
public static BootstrapActionConfig newInstallSqoopBootstrapActionConfig() {
return newInstallSqoopBootstrapActionConfig(Config.getHadoopVersion().charAt(0));
}
/**
* Adds an install Sqoop step to the job that corresponds to the version specified in the parameter
*
* @param hadoopVersion the main version number for Hadoop. E.g.: 1, 2
*/
public static BootstrapActionConfig newInstallSqoopBootstrapActionConfig(char hadoopVersion) {
return new BootstrapActionConfig().withName("Install Sqoop")
.withScriptBootstrapAction(
new ScriptBootstrapActionConfig().withPath("s3://" + bucket + "/sqoop-tools/hadoop" + hadoopVersion + "/bootstrap-sqoop-emr4.sh"));
}
}
Then when creating the job:
Job job = new Job(Region.getRegion(Regions.US_EAST_1));
job.addBootstrapAction(BootstrapActionConfigFactory.newInstallSqoopBootstrapActionConfig());
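For anyone launching the cluster from Python instead of Java, the equivalent bootstrap wiring can be sketched with boto3. This is only a sketch: the bucket name is a placeholder, and the run_job_flow call is left commented out because it would actually launch a cluster.

```python
# Sketch: build the same "Install Sqoop" bootstrap action for boto3's
# run_job_flow. The bucket name is a placeholder.
def sqoop_bootstrap_action(bucket, hadoop_version):
    """BootstrapActions entry pointing at the install script in S3."""
    path = "s3://{}/sqoop-tools/hadoop{}/bootstrap-sqoop-emr4.sh".format(
        bucket, hadoop_version)
    return {"Name": "Install Sqoop",
            "ScriptBootstrapAction": {"Path": path}}

action = sqoop_bootstrap_action("my-bootstrap-bucket", "2")
print(action["ScriptBootstrapAction"]["Path"])

# The action would then be passed when launching the cluster, e.g.:
# import boto3
# emr = boto3.client("emr", region_name="us-east-1")
# emr.run_job_flow(Name="cluster-with-sqoop",
#                  BootstrapActions=[action], ...)
```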

Download the tarball of Sqoop and keep it in an S3 bucket. Create a bootstrap script that performs the following activities:
Download the sqoop tarball to the required instances
extract the tarball
set SQOOP_HOME and add SQOOP_HOME to the PATH. These entries should be made in /etc/bashrc
Add the required connector jars to the lib of SQOOP.
Keep this script in S3 and point to it in the bootstrap actions.

Note that from emr-4.4.0 AWS added support for Sqoop 1.4.6 to EMR clusters. Installation is done with a couple of clicks during setup; there is no need for manual installation.
References:
https://aws.amazon.com/blogs/aws/amazon-emr-4-4-0-sqoop-hcatalog-java-8-and-more/
http://docs.aws.amazon.com/ElasticMapReduce/latest/ReleaseGuide/emr-sqoop.html

Related

AWS GLUE Not able to write Delta lake in s3

I am working on AWS Glue and created an ETL job for upserts. I have an S3 bucket with a CSV file in a folder. I am reading the file from S3 and want to write it back to S3 as a Delta Lake (Parquet) table using this code:
from delta import *
from pyspark.sql.session import SparkSession
spark = SparkSession.builder \
.config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension") \
.config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog") \
.getOrCreate()
inputDF = spark.read.format("csv").option("header", "true").load('s3://demonidhi/superstore/')
print(inputDF)
# Write data as DELTA TABLE
inputDF.write.format("delta").mode("overwrite").save("s3a://demonidhi/current/")
# Generate MANIFEST file for Athena/Catalog
deltaTable = DeltaTable.forPath(spark, "s3a://demonidhi/current/")
I am using a Delta jar named 'delta-core_2.11-0.6.1.jar', which is in an S3 bucket folder; I gave its path in the Python library path and in the Dependent jars path while creating my job.
Up to the reading part the code works just fine, but after that the writing and manifest generation fail with some error which I am not able to see in the Glue terminal. I tried to follow several different approaches, but was not able to figure out how I can resolve this. Any help would be appreciated.
Using the spark.config() notation will not work in Glue, because the abstraction that Glue uses (the GlueContext) will override those parameters.
What you can do instead is provide the config as a parameter to the job itself, with the key --conf and the value spark.delta.logStore.class=org.apache.spark.sql.delta.storage.S3SingleDriverLogStore --conf spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension --conf spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog
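As a concrete sketch, those settings can be packed into the job's --conf default argument when creating the job with boto3. The job name, role, and script location below are placeholders, and the create_job call is commented out since it would create a real job.

```python
# Sketch: Delta Lake Spark settings passed through Glue's --conf job
# parameter. The first setting rides on the "--conf" key itself; the
# remaining ones are chained inside the value.
delta_conf = (
    "spark.delta.logStore.class="
    "org.apache.spark.sql.delta.storage.S3SingleDriverLogStore"
    " --conf spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension"
    " --conf spark.sql.catalog.spark_catalog="
    "org.apache.spark.sql.delta.catalog.DeltaCatalog"
)
default_args = {"--conf": delta_conf}

# import boto3
# glue = boto3.client("glue")
# glue.create_job(Name="delta-upsert-job", Role="MyGlueRole",
#                 Command={"Name": "glueetl",
#                          "ScriptLocation": "s3://my-bucket/scripts/job.py"},
#                 DefaultArguments=default_args)
```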

Flink on EMR cannot access S3 bucket from "flink run" command

I'm prototyping the use of AWS EMR for a Flink-based system that we're planning to deploy. My cluster has the following versions:
Release label: emr-5.10.0
Hadoop distribution: Amazon 2.7.3
Applications: Flink 1.3.2
Both the documentation provided by Amazon (Amazon flink documentation) and the documentation from Flink (Apache flink documentation) mention directly using S3 resources as an integrated file system with the s3://<bucket>/<file> pattern. I have verified that all the correct permissions are set; I can use the AWS CLI to copy S3 resources to the Master node with no problem, but attempting to start a Flink job using a jar from S3 does not work.
I am executing the following step:
JAR location : command-runner.jar
Main class : None
Arguments : flink run -m yarn-cluster -yid application_1513333002475_0001 s3://mybucket/myapp.jar
Action on failure: Continue
The step always fails with
JAR file does not exist: s3://mybucket/myapp.jar
I have spoken to AWS support, and they suggested having a previous step copy the S3 file to the local Master node and then referencing it with a local path. While this would obviously work, I would rather get the native S3 integration working.
I have also tried using the s3a filesystem and get the same result.
You need to download your jar from S3 so that it is available locally:
aws s3 cp s3://mybucket/myapp.jar myapp.jar
and then run: flink run -m yarn-cluster myapp.jar
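If you would rather automate that copy-then-run workaround, the pair of commands can be submitted as EMR steps. A boto3 sketch, where the cluster id, bucket, and jar name are placeholders:

```python
# Sketch: two command-runner steps -- copy the jar from S3 to the master,
# then submit it to the running YARN session. All names are placeholders.
def jar_steps(bucket, jar, app_id):
    copy_step = {
        "Name": "Copy Flink jar from S3",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["aws", "s3", "cp",
                     "s3://{}/{}".format(bucket, jar),
                     "/home/hadoop/{}".format(jar)],
        },
    }
    run_step = {
        "Name": "Run Flink job",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["flink", "run", "-m", "yarn-cluster",
                     "-yid", app_id, "/home/hadoop/{}".format(jar)],
        },
    }
    return [copy_step, run_step]

# import boto3
# emr = boto3.client("emr")
# emr.add_job_flow_steps(JobFlowId="j-XXXXXXXX",
#                        Steps=jar_steps("mybucket", "myapp.jar",
#                                        "application_1513333002475_0001"))
```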

Could we use AWS Glue just to copy a file from one S3 folder to another S3 folder?

I need to copy a zipped file from one AWS S3 folder to another and would like to make that a scheduled AWS Glue job. I cannot find an example for such a simple task. Please help if you know the answer. Maybe the answer is in AWS Lambda or some other AWS tool.
Thank you very much!
You can do this, and there may be a reason to use AWS Glue: if you have chained Glue jobs and glue_job_#2 is triggered on the successful completion of glue_job_#1.
The simple Python script below moves a file from one S3 folder (source) to another folder (target) using the boto3 library, and optionally deletes the original copy in the source directory.
import boto3
bucketname = "my-unique-bucket-name"
s3 = boto3.resource('s3')
my_bucket = s3.Bucket(bucketname)
source = "path/to/folder1"
target = "path/to/folder2"
for obj in my_bucket.objects.filter(Prefix=source):
    source_filename = (obj.key).split('/')[-1]
    copy_source = {
        'Bucket': bucketname,
        'Key': obj.key
    }
    target_filename = "{}/{}".format(target, source_filename)
    s3.meta.client.copy(copy_source, bucketname, target_filename)
    # Uncomment the line below if you wish to delete the original source file
    # s3.Object(bucketname, obj.key).delete()
Reference: Boto3 Docs on S3 Client Copy
Note: I would use f-strings for generating the target_filename, but f-strings are only supported in Python >= 3.6, and I believe the default AWS Glue Python interpreter is still 2.7.
Reference: PEP on f-strings
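For what it's worth, the two spellings build the same string:

```python
# str.format (works on Python 2.7) vs an f-string (Python >= 3.6 only)
target = "path/to/folder2"
source_filename = "data.zip"

with_format = "{}/{}".format(target, source_filename)
with_fstring = f"{target}/{source_filename}"

print(with_format)  # path/to/folder2/data.zip
assert with_format == with_fstring
```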
I think you can do it with Glue, but wouldn't it be easier to use the CLI?
You can do the following:
aws s3 sync s3://bucket_1 s3://bucket_2
You could do this with Glue but it's not the right tool for the job.
Far simpler would be to have a Lambda job triggered by a S3 created-object event. There's even a tutorial on AWS Docs on doing (almost) this exact thing.
http://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html
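A minimal sketch of such a handler, assuming the standard S3 notification event shape; the target prefix is a made-up placeholder, and the actual copy call is left commented out:

```python
import urllib.parse

def extract_copy(event, target_prefix="copied/"):
    """Pull bucket and key out of an S3 created-object event and build a
    destination key under target_prefix (a placeholder prefix)."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])
    return bucket, key, target_prefix + key.split("/")[-1]

def lambda_handler(event, context):
    bucket, key, dest = extract_copy(event)
    # import boto3
    # s3 = boto3.client("s3")
    # s3.copy_object(Bucket=bucket, Key=dest,
    #                CopySource={"Bucket": bucket, "Key": key})
    return {"copied": dest}
```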
We ended up using Databricks to do everything.
Glue is not ready. It returns error messages that make no sense. We created tickets and waited five days, and still got no reply.
The S3 API lets you issue a COPY command (really a PUT with a header indicating the source URL) to copy objects within or between buckets. It is regularly used to fake rename()s, but you could initiate the call yourself, from anything.
There is no need to download any data; within the same S3 region the copy has a bandwidth of about 6-10 MB/s.
AWS CLI cp command can do this.
You can do that by downloading your zip file from S3 to the tmp/ directory and then re-uploading it to S3.
s3 = boto3.resource('s3')
Download file to local spark directory tmp:
s3.Bucket(bucket_name).download_file(DATA_DIR+file,'tmp/'+file)
Upload file from local spark directory tmp:
s3.meta.client.upload_file('tmp/'+file,bucket_name,TARGET_DIR+file)
Now you can write a Python shell job in Glue to do it. Just set Type to Python Shell in the Glue job creation wizard; you can run a normal Python script in it.
Nothing extra is required; I believe AWS Data Pipeline is the best option. Just use a command-line activity. Scheduled runs are also possible. I already tried it and it worked successfully.

External checkpoints to S3 on EMR

I am trying to deploy a production cluster for my Flink program. I am using a standard hadoop-core EMR cluster with Flink 1.3.2 installed, using YARN to run it.
I am trying to configure RocksDB to write my checkpoints to an S3 bucket, following these docs: https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/aws.html#set-s3-filesystem. The problem seems to be getting the dependencies working correctly. I receive this error when trying to run the program:
java.lang.NoSuchMethodError: org.apache.hadoop.conf.Configuration.addResource(Lorg/apache/hadoop/conf/Configuration;)V
at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.initialize(EmrFileSystem.java:93)
at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.initialize(HadoopFileSystem.java:328)
at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:350)
at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:389)
at org.apache.flink.core.fs.Path.getFileSystem(Path.java:293)
at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory.<init>(FsCheckpointStreamFactory.java:99)
at org.apache.flink.runtime.state.filesystem.FsStateBackend.createStreamFactory(FsStateBackend.java:282)
at org.apache.flink.contrib.streaming.state.RocksDBStateBackend.createStreamFactory(RocksDBStateBackend.java:273
I have tried both adjusting core-site.xml and leaving it as is. I have tried setting HADOOP_CLASSPATH to /usr/lib/hadoop/share, which contains (what I assume are) most of the JARs described in the above guide. I tried downloading the Hadoop 2.7.2 binaries and copying them into the flink/lib directory. All resulted in the same error.
Has anyone successfully gotten Flink being able to write to S3 on EMR?
EDIT: My cluster setup
AWS Portal:
1) EMR -> Create Cluster
2) Advanced Options
3) Release = emr-5.8.0
4) Only select Hadoop 2.7.3
5) Next -> Next -> Next -> Create Cluster ( I do fill out names/keys/etc)
Once the cluster is up I ssh into the Master and do the following:
1 wget http://apache.claz.org/flink/flink-1.3.2/flink-1.3.2-bin-hadoop27-scala_2.11.tgz
2 tar -xzf flink-1.3.2-bin-hadoop27-scala_2.11.tgz
3 cd flink-1.3.2
4 ./bin/yarn-session.sh -n 2 -tm 5120 -s 4 -d
5 Change conf/flink-conf.yaml
6 ./bin/flink run -m yarn-cluster -yn 1 ~/flink-consumer.jar
My conf/flink-conf.yaml I add the following fields:
state.backend: rocksdb
state.backend.fs.checkpointdir: s3:/bucket/location
state.checkpoints.dir: s3:/bucket/location
My program's checkpointing setup:
env.enableCheckpointing(getCheckpointRate,CheckpointingMode.EXACTLY_ONCE)
env.getCheckpointConfig.enableExternalizedCheckpoints(ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION)
env.getCheckpointConfig.setMinPauseBetweenCheckpoints(getCheckpointMinPause)
env.getCheckpointConfig.setCheckpointTimeout(getCheckpointTimeout)
env.getCheckpointConfig.setMaxConcurrentCheckpoints(1)
env.setStateBackend(new RocksDBStateBackend("s3://bucket/location", true))
If there are any steps you think I am missing, please let me know
I assume that you installed Flink 1.3.2 yourself on the EMR YARN cluster, because Amazon does not yet offer Flink 1.3.2, right?
Given that, it seems as if you have a dependency conflict. The method org.apache.hadoop.conf.Configuration.addResource(Configuration) was only introduced with Hadoop 2.4.0. Therefore I assume that you have deployed a Flink 1.3.2 version which was built against Hadoop 2.3.0. Please deploy a Flink version which was built against the Hadoop version running on EMR. This will most likely resolve all dependency conflicts.
Putting the Hadoop dependencies into the lib folder seems to not reliably work because the flink-shaded-hadoop2-uber.jar appears to have precedence in the classpath.

how to copy file from amazon server to s3 bucket

I am working with an S3 bucket. I need to copy an image from my Amazon server to the S3 bucket. Any idea how I can do it? I saw some sample code, but I don't know how to use it:
if (S3::copyObject($sourceBucket, $sourceFile, $destinationBucket, $destinationFile, S3::ACL_PRIVATE)) {
echo "Copied file";
} else {
echo "Failed to copy file";
}
It seems that this code only copies between buckets, not from the server?
Thanks for the help.
Copy between S3 Buckets
AWS released a command line interface for copying between buckets.
http://aws.amazon.com/cli/
$ aws s3 sync s3://mybucket-src s3://mybucket-target --exclude *.tmp
This will copy from one bucket to another.
I have not tested this, but I believe it will operate in series, downloading the files to your system and then uploading them to the target bucket.
See the documentation here : S3 CLI Documentation
I've used s3cmd for several years, and it's been very reliable. If you're using Ubuntu it's available with:
apt-get install s3cmd
You can also use one of the SDKs to develop your own tool.
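As a sketch of the SDK route (Python here; the path, prefix, and bucket name are all placeholders), uploading a file from the server is a single call:

```python
import os

def s3_key_for(local_path, prefix="images/"):
    """Destination key for a local file; keeps just the filename.
    The prefix is a placeholder."""
    return prefix + os.path.basename(local_path)

key = s3_key_for("/var/www/uploads/photo.jpg")
print(key)  # images/photo.jpg

# import boto3
# s3 = boto3.client("s3")
# s3.upload_file("/var/www/uploads/photo.jpg", "my-bucket", key)
```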