After running a Spark job on an Amazon EMR cluster, I deleted the output files directly from S3 and tried to rerun the job. I received the following error when trying to write Parquet output to S3 using sqlContext.write:
'bucket/folder' present in the metadata but not s3
at com.amazon.ws.emr.hadoop.fs.consistency.ConsistencyCheckerS3FileSystem.getFileStatus(ConsistencyCheckerS3FileSystem.java:455)
I tried running
emrfs sync s3://bucket/folder
which did not appear to resolve the error, even though it did remove some records from the DynamoDB table that tracks the metadata. I'm not sure what else to try. How do I resolve this error?
It turned out that I needed to run
emrfs delete s3://bucket/folder
before running sync. Running delete followed by sync resolved the error.
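For reference, the full sequence that resolved it, as a minimal sketch (the bucket path is a placeholder for your own output prefix):
# Placeholder path; run delete first, then sync to rebuild consistent metadata
emrfs delete s3://bucket/folder
emrfs sync s3://bucket/folder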
This consistency problem usually comes from the retry logic in Spark and Hadoop. When a process that creates a file on S3 fails after the corresponding entry has already been written to DynamoDB, and Hadoop then retries the operation, the entry is already present in DynamoDB and the consistency error is thrown.
If you want to delete the S3 metadata stored in DynamoDB for objects that have already been removed, these are the steps:
Delete all the metadata
This deletes the metadata entries for all objects under the path. Note that emrfs delete uses a hash function to find the records, so it may also delete entries you did not intend to remove; that is why the import and sync steps follow.
emrfs delete s3://path
Retrieve the metadata for the objects that are physically present in S3 back into DynamoDB:
emrfs import s3://path
Sync the data between S3 and the metadata:
emrfs sync s3://path
After all of these operations, to check whether a particular object is present in both S3 and the metadata:
emrfs diff s3://path
http://docs.aws.amazon.com/emr/latest/ManagementGuide/emrfs-cli-reference.html
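Putting the steps above together, a minimal sketch of the whole sequence (the path is a placeholder; substitute your own prefix):
# Placeholder prefix; adjust to your bucket/path
EMRFS_PATH=s3://path

emrfs delete "$EMRFS_PATH"   # remove the metadata entries (may match more entries than intended)
emrfs import "$EMRFS_PATH"   # re-import metadata for objects that actually exist in S3
emrfs sync "$EMRFS_PATH"     # reconcile S3 and the metadata store
emrfs diff "$EMRFS_PATH"     # verify: should report no differences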
I arrived at this page because I was getting the error "key is marked as directory in metadata but is file in s3" and was very puzzled. I think what happened is that I accidentally created both a file and a directory with the same name. Deleting the file solved my issue:
aws s3 rm s3://bucket/directory_name_without_trailing_slash
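If you want to confirm the collision before deleting anything, listing the key without the trailing slash should show both entries (the bucket and key names are placeholders, as in the command above):
aws s3 ls s3://bucket/directory_name_without_trailing_slash
# Expect both a "PRE directory_name_without_trailing_slash/" line and a plain object of the same name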
I wanted to copy an S3 bucket onto Kubernetes nodes as a DaemonSet, so that a new node also gets a copy of the S3 bucket as soon as it launches.
I prefer copying to the Kubernetes node rather than copying from S3 directly into each pod via the AWS API, because that would mean multiple calls (one per pod that needs the data) and it would take time to copy the content every time a pod launches.
Assuming that your S3 content is static and doesn't change often, I believe a one-time Job makes more sense than a DaemonSet for copying the whole S3 bucket to local disk. It's not clear how you would signal the kube-scheduler that your node is not ready until the S3 bucket is fully copied, but perhaps you can taint your node before the job finishes and remove the taint after it finishes.
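A rough sketch of that taint-while-copying idea, assuming the node name, bucket, and target directory are placeholders and that the aws CLI is available wherever the copy runs (e.g. in the Job's container or on the node):
# Keep regular pods off the node until the copy completes (placeholder names throughout)
kubectl taint nodes my-node s3-copy=pending:NoSchedule

# Copy the bucket contents to the node's local disk
aws s3 sync s3://my-bucket /data/s3-cache

# Allow scheduling again once the copy has finished
kubectl taint nodes my-node s3-copy=pending:NoSchedule-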
Note also that S3 is inherently slow and meant to be used for processing (reading/writing) single files at a time, so if your bucket has a large amount of data it would take a long time to copy to the node disk.
If your S3 content changes dynamically (constantly), it would be more challenging, since you would have to keep the files in sync. Your apps would probably need a caching architecture where they look for files on the local disk first and, if the files are not there, make a request to S3.
Is there a way to make gsutil rsync remove synced files?
As far as I know, normally it is done by passing --remove-source-files, but it does not seem to be an option with gsutil rsync (documentation).
Context:
I have a script that produces a large number of CSV files (100 GB+). I want those files to be transferred to Cloud Storage (and removed from my HDD once transferred).
Ended up using gcsfuse.
Per documentation:
Local storage: Objects that are new or modified will be stored in
their entirety in a local temporary file until they are closed or
synced.
One work-around for small buckets is to delete all bucket contents and re-sync periodically.
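If the goal is to remove the local files once they have been transferred, another workaround is to chain the sync and the local cleanup; a minimal sketch, assuming the rsync exit status is a sufficient signal that everything was copied (paths and bucket are placeholders):
# Upload the CSVs, then delete the local copies only if the rsync succeeded
gsutil -m rsync -r /data/csv_out gs://my-bucket/csv_out && rm -rf /data/csv_out/*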
I extensively use S3 to store encrypted and compressed backups of my workstations. I use the aws cli to sync them to S3. Sometimes, the transfer might fail when in progress. I usually just retry it and let it finish.
My question is: does S3 have some kind of check to make sure that the previously failed transfer didn't leave corrupted files? Does anyone know if syncing again is enough to fix the previously failed transfer?
Thanks!
Individual files uploaded to S3 are never partially uploaded. Either the entire file completes and S3 stores it as an S3 object, or the upload is aborted and no S3 object is stored.
Even in the multipart upload case, multiple parts can be uploaded, but they never form a complete S3 object unless all of the pieces are uploaded and the "Complete Multipart Upload" operation is performed. So there is no need to worry about corruption via partial uploads.
Syncing will certainly be enough to fix the previously failed transfer.
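In practice that just means re-running the same command; aws s3 sync only transfers files that are missing from the destination or that differ in size or modification time, so a retry picks up where the failed run left off (the bucket and path here are placeholders):
aws s3 sync /backups s3://my-backup-bucket/backups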
Yes, it looks like the AWS CLI does validate what it uploads and takes care of corruption scenarios by employing MD5 checksums.
From https://docs.aws.amazon.com/cli/latest/topic/s3-faq.html
The AWS CLI will perform checksum validation for uploading and downloading files in specific scenarios.
The AWS CLI will calculate and auto-populate the Content-MD5 header for both standard and multipart uploads. If the checksum that S3 calculates does not match the Content-MD5 provided, S3 will not store the object and instead will return an error message back to the AWS CLI.
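If you want to spot-check a single object yourself, one option is to compare the local MD5 with the object's ETag; note that this only holds for objects uploaded in a single part (the bucket, key, and file name below are placeholders):
# For single-part uploads, the ETag is the MD5 of the object's content
md5sum backup.tar.gz
aws s3api head-object --bucket my-backup-bucket --key backups/backup.tar.gz --query ETag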
Is there a command in the AWS CLI to restore versioned files?
I've been developing a web server using Django. One day I found that image files were being randomly deleted from S3. I think Django's sorl-thumbnail is deleting them, and I tried to fix that but failed.
So I thought of a temporary solution: AWS S3 provides versioning, and I use it to recover the files manually every day. This is very annoying to do, so I am writing a script, but I could not find a way to restore a file that has a delete marker.
Does anyone know how to handle the situation above?
Thank you!
Recovering objects is a bit tricky in S3. Per the AWS documentation at http://docs.aws.amazon.com/AmazonS3/latest/dev/DeletingObjects.html:
When you delete an object from a versioned bucket, S3 creates a new object called a delete marker, which has its own, new version ID.
If you delete that "version" of the object (the delete marker itself), it will restore your object's visibility.
You can use this command
aws s3api delete-object --bucket <bucket> --key <key> --version-id <version_id_of_delete_marker>
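To script this over many files, you can list the delete markers under a prefix and remove each one; a minimal sketch, assuming the bucket and prefix are placeholders and that at least one delete marker exists:
# Find the latest delete markers under a prefix and remove them, restoring object visibility
aws s3api list-object-versions --bucket my-bucket --prefix images/ \
  --query 'DeleteMarkers[?IsLatest==`true`].[Key,VersionId]' --output text |
while read -r key version_id; do
  aws s3api delete-object --bucket my-bucket --key "$key" --version-id "$version_id"
done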
I'm running Hive on EMR and need to copy some files to all of the EMR instances.
One way, as I understand it, is to copy the files to the local file system on each node; the other is to copy the files to HDFS. However, I haven't found a simple way to copy straight from S3 to HDFS.
What is the best way to go about this?
The best way to do this is to use Hadoop's distcp command. Example (on one of the cluster nodes):
% ${HADOOP_HOME}/bin/hadoop distcp s3n://mybucket/myfile /root/myfile
This would copy a file called myfile from an S3 bucket named mybucket to /root/myfile in HDFS. Note that this example assumes you are using the S3 file system in "native" mode; this means that Hadoop sees each object in S3 as a file. If you use S3 in block mode instead, you would replace s3n with s3 in the example above. For more info about the differences between native S3 and block mode, as well as an elaboration on the example above, see http://wiki.apache.org/hadoop/AmazonS3.
I found that distcp is a very powerful tool. In addition to being able to use it to copy a large amount of files in and out of S3, you can also perform fast cluster-to-cluster copies with large data sets. Instead of pushing all the data through a single node, distcp uses multiple nodes in parallel to perform the transfer. This makes distcp considerably faster when transferring large amounts of data, compared to the alternative of copying everything to the local file system as an intermediary.
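On more recent Hadoop versions, the s3n and s3 schemes have largely been superseded by s3a, so the equivalent command today would typically look like this (same placeholder bucket and file as the example above):
hadoop distcp s3a://mybucket/myfile /root/myfile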
Now Amazon itself has a wrapper implemented over distcp, namely s3distcp:
S3DistCp is an extension of DistCp that is optimized to work with Amazon Web Services (AWS), particularly Amazon Simple Storage Service (Amazon S3). You use S3DistCp by adding it as a step in a job flow. Using S3DistCp, you can efficiently copy large amounts of data from Amazon S3 into HDFS where it can be processed by subsequent steps in your Amazon Elastic MapReduce (Amazon EMR) job flow. You can also use S3DistCp to copy data between Amazon S3 buckets or from HDFS to Amazon S3.
Example: Copy log files from Amazon S3 to HDFS
The following example illustrates how to copy log files stored in an Amazon S3 bucket into HDFS. In this example the --srcPattern option is used to limit the data copied to the daemon logs.
elastic-mapreduce --jobflow j-3GY8JC4179IOJ --jar \
s3://us-east-1.elasticmapreduce/libs/s3distcp/1.latest/s3distcp.jar \
--args '--src,s3://myawsbucket/logs/j-3GY8JC4179IOJ/node/,\
--dest,hdfs:///output,\
--srcPattern,.*daemons.*-hadoop-.*'
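The elastic-mapreduce CLI used above is the legacy tool; on current EMR releases a roughly equivalent step can be added with the aws CLI. This sketch assumes s3-dist-cp is available through command-runner.jar (which, as far as I know, is the case on EMR 4.x and later) and reuses the cluster ID and paths from the example above:
# Add an S3DistCp step to an existing cluster via command-runner.jar
aws emr add-steps --cluster-id j-3GY8JC4179IOJ \
  --steps 'Type=CUSTOM_JAR,Name=S3DistCp,Jar=command-runner.jar,Args=[s3-dist-cp,--src,s3://myawsbucket/logs/j-3GY8JC4179IOJ/node/,--dest,hdfs:///output,--srcPattern,.*daemons.*-hadoop-.*]'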
Note that according to Amazon, at http://docs.amazonwebservices.com/ElasticMapReduce/latest/DeveloperGuide/FileSystemConfig.html "Amazon Elastic MapReduce - File System Configuration", the S3 Block FileSystem is deprecated and its URI prefix is now s3bfs:// and they specifically discourage using it since "it can trigger a race condition that might cause your job flow to fail".
According to the same page, HDFS is now a 'first-class' file system alongside S3, although it is ephemeral (it goes away when the Hadoop job ends).