AWS S3 stops uploading from my Lenovo® ix2-dl

I have a Lenovo® ix2-dl NAS drive that I set up to back up to AWS S3. It connected fine, but for some reason it only uploads 5% of the data on the drive. How can I get it to upload all of my data?

I updated my NAS to the latest firmware, 4.1.218.34037.
I recently had issues with the S3 backup feature, where the uploads simply stopped working: no errors, nothing in the logs to indicate an issue. I tested my AWS S3 access key and secret with another method and was able to upload files just fine.
To resolve the issue, I had to create a new AWS S3 bucket, then go into the S3 setup on the Lenovo and provide the required info. I think what made this work for me was that I made sure the bucket name contained nothing other than letters and numbers. My old bucket name was similar to lastname.family.pics; my new bucket, which works, is similar to lastname123.
Hope this helps. This feature had worked fine for a long time, so perhaps an update came down with different requirements for the API.
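If you want to sanity-check a bucket name before pointing the NAS at it, here is a minimal Python sketch of S3's DNS-compliant naming rules. That the ix2-dl firmware actually requires these stricter names is an assumption based on what worked above; the character rules themselves come from the S3 documentation.

    import re

    def looks_dns_compliant(name: str) -> bool:
        # 3-63 characters, lowercase letters, digits and hyphens only,
        # starting and ending with a letter or digit.
        if not 3 <= len(name) <= 63:
            return False
        # Dots are technically allowed by S3, but they break wildcard-SSL
        # virtual-hosted-style access, which may be why "lastname.family.pics"
        # failed while "lastname123" worked.
        return re.fullmatch(r"[a-z0-9](?:[a-z0-9-]*[a-z0-9])?", name) is not None

    print(looks_dns_compliant("lastname.family.pics"))  # False
    print(looks_dns_compliant("lastname123"))           # True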

Related

Mount S3 bucket as an NFS share on an EC2 instance

I'm a long-time reader and I've usually been able to find the answers I'm looking for in existing posts, but this time I haven't.
I am essentially teaching myself AWS CDK from scratch. I've only really just started with it, so not finding anything that helps me on my mission may be a result of not yet knowing enough to ask the right questions... so please bear with me.
So far I've used the AWS CDK with Python to create a stack which creates an S3 bucket and also fires up an EC2 instance with an AWS Storage Gateway file gateway AMI loaded on it (so running Amazon Linux). This deploys and runs fine. However, I'd now like to programmatically set up the S3 bucket to be accessed via an NFS share on the EC2 instance. From what I've seen I'd assumed it is, or should be, fairly trivial, but I keep getting a bit lost in documentation and internet hunts, and I'm not quite sure I'm looking in the right places or asking search engines the right questions to unlock the path to achieve this.
It looks like I should be able to script something up to make it happen when the instance starts, using user data, but I'm a bit lost. Is anyone able to throw me some crumbs to follow towards a good way of achieving this, or a better way of achieving what I want (which is basically accessing the S3 bucket contents as though they were files on an EC2 instance)? Or, if it's trivial enough, just tell me how to do it.
Much appreciated :)
Dan
You are on the right track; user_data can be used for that.
I don't have full code to give you as it's use-case specific (e.g. which OS are you using?), but the user_data would have to download and install s3fs:
s3fs allows Linux and macOS to mount an S3 bucket via FUSE. s3fs preserves the native object format for files, allowing use of other tools like AWS CLI.
However, S3 is an object storage system and can't really be mounted on an instance the way NFS or EBS storage can. But with s3fs-fuse you can mimic such behaviour, and for some use cases that will be sufficient.
So what you can do is set up the user_data script through the console, verify that it works, and then basically copy and paste it into CDK. It's more of a trial-and-see approach, but this is a good way to learn.
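To make that a bit more concrete, here is a minimal CDK (Python) sketch of the idea: a bucket, an instance, and user_data that installs s3fs-fuse and mounts the bucket at boot. The construct names, the mount path and the EPEL/yum install commands are assumptions for Amazon Linux 2 and will differ on other AMIs, so treat it as a starting point rather than a drop-in answer.

    from aws_cdk import Stack, aws_ec2 as ec2, aws_s3 as s3
    from constructs import Construct

    class S3MountStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)

            bucket = s3.Bucket(self, "DataBucket")
            vpc = ec2.Vpc(self, "Vpc", max_azs=1)

            instance = ec2.Instance(
                self, "MountHost",
                vpc=vpc,
                instance_type=ec2.InstanceType("t3.micro"),
                machine_image=ec2.MachineImage.latest_amazon_linux2(),
            )

            # Give the instance role access so s3fs can pick it up via iam_role=auto.
            bucket.grant_read_write(instance.role)

            # Install s3fs-fuse and mount the bucket at boot. The package/repo
            # commands are assumptions for Amazon Linux 2.
            instance.user_data.add_commands(
                "amazon-linux-extras install -y epel",
                "yum install -y s3fs-fuse",
                "mkdir -p /mnt/data",
                f"s3fs {bucket.bucket_name} /mnt/data -o iam_role=auto -o allow_other",
            )

Deploying this and then checking /mnt/data on the instance is the quickest way to verify the mount before wiring anything else on top of it.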

Amazon S3 suddenly stopped working with EC2 but working from localhost

Creating folders and uploading files to my S3 bucket stopped working.
The remote server returned an error: (403) Forbidden.
Everything seemed to work previously, and I did not change anything recently.
After days of testing, I see that I am able to create folders in my bucket from localhost, but the same code doesn't work on the EC2 instance.
I must resolve the issue ASAP.
Thanks
diginotebooks
Does your EC2 instance have a role? If yes, what is this role? Is it possible that someone detached or modified a policy that was attached to it?
If your instance doesn't have a role, how do you upload files to S3? Using the AWS CLI tools? Same questions for the IAM profile used.
If you did not change anything - are you using the same IAM credentials from the server and localhost? May be related to this.
Just random thoughts...
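One quick way to compare the two environments is to ask STS which identity the code is actually running as, from both localhost and the instance. A minimal Python/boto3 sketch of the check (whatever SDK the site itself uses); the bucket name and key are hypothetical placeholders.

    import boto3

    # Print the IAM identity in use: an IAM user on localhost vs. an assumed
    # instance-role session on EC2 will show different ARNs.
    sts = boto3.client("sts")
    print(sts.get_caller_identity()["Arn"])

    # Then retry the same kind of write that is failing with a 403.
    s3 = boto3.client("s3")
    s3.put_object(Bucket="my-example-bucket", Key="test/probe.txt", Body=b"hello")

If the ARNs differ, the 403 almost certainly comes down to the permissions attached to the instance's identity rather than the code itself.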

Amazon S3 problems with S3fox

I have created an Amazon S3 account and am trying to upload some files with the S3fox add-on.
I have added S3fox and logged in with my access key and secret key credentials.
Now, I created a bucket by right-clicking and selecting "create a directory", and selected the option to put the bucket in Europe. When I try to drill down into my folder, I keep getting an error message saying "Error connecting! - Temporary Redirect", and I also cannot transfer any files.
But if I create the bucket without selecting the option to put it in Europe, then I am able to drill down into the bucket.
I would like my bucket to be in Europe as I am from the UK. Please suggest what I am missing and how I can resolve this issue.
Thanks
Sreekanth
I have the same problem - it still doesn't work after an hour. To save waiting I've installed Cloudberry (freeware for Windows), which seems to be a better alternative anyway (it looks more user-friendly and has more options): http://www.cloudberrylab.com/free-amazon-s3-explorer-cloudfront-IAM.aspx
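For anyone hitting the same "Temporary Redirect" today: it usually means the client is still talking to the default US endpoint while the bucket lives in another region, and pointing the client at the bucket's own region (or simply waiting for DNS to propagate) makes it go away. S3fox is a Firefox add-on, so the Python/boto3 snippet below is only a sketch of the idea; the bucket name and region are assumptions.

    import boto3

    # Talk to the bucket's own regional endpoint instead of the default US one.
    s3 = boto3.client("s3", region_name="eu-west-1")
    print(s3.list_objects_v2(Bucket="my-eu-bucket", MaxKeys=10).get("KeyCount"))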

Allowing users to download files as a batch from AWS S3 or CloudFront

I have a website that allows users to search for music tracks and download those they select as MP3s.
I have the site on my server and all of the MP3s on S3, distributed via CloudFront. So far so good.
The client now wishes for users to be able to select a number of music tracks and then download them all in bulk, as a batch, instead of one at a time.
Usually I would place all the files in a zip and then present the user with a link to that new zip file to download. In this case, as the files are on S3, that would require me to first copy all the files from S3 to my web server, process them into a zip, and then serve the download from my server.
Is there any way I can create a zip on S3 or CloudFront, or is there some way to batch/group files into a zip?
Maybe I could set up an EC2 instance to handle this?
I would greatly appreciate some direction.
Best
Joe
I am afraid you won't be able to create the batches without additional processing. Firing up an EC2 instance might be an option, to create a batch per user.
I am facing the exact same problem. So far the only thing I was able to find is the AWS CLI's s3 sync command:
https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html
In my case, I am using Rails plus its Paperclip add-on, which means that I have no way to easily download all of a user's images in one go, because the files are scattered across a lot of subdirectories.
However, if you can group your users' files in a better way, say like this:
/users/<ID>/images/...
/users/<ID>/songs/...
...etc., then you can solve your problem right away with:
aws s3 sync s3://<your_bucket_name>/users/<user_id>/songs /cache/<user_id>
Do bear in mind you'll have to give your server the proper credentials so the S3 CLI tools can work without prompting for usernames/passwords.
And that should sort you out.
Additional discussion here:
Downloading an entire S3 bucket?
S3 is based on single HTTP requests: each object is fetched with its own request.
So the answer is to use threads to achieve the same thing.
The Java API has TransferManager for this:
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManager.html
You can get great performance with multiple threads.
There is no bulk download, sorry.
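Building on the answers above, here is a minimal sketch of the EC2/server-side approach: download the selected keys in parallel threads, write them into a zip, and push the archive back to S3 so the user gets a single link. It is written in Python with boto3; the bucket name, key list and output key are hypothetical placeholders.

    import io
    import zipfile
    from concurrent.futures import ThreadPoolExecutor

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "my-music-bucket"
    selected_keys = ["tracks/song1.mp3", "tracks/song2.mp3"]  # chosen by the user

    def fetch(key):
        # Each object is a single HTTP request, so a thread pool speeds this up.
        return key, s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()

    buffer = io.BytesIO()
    with ThreadPoolExecutor(max_workers=8) as pool, zipfile.ZipFile(buffer, "w") as zf:
        for key, data in pool.map(fetch, selected_keys):
            zf.writestr(key.split("/")[-1], data)

    # Upload the finished zip so the user downloads one file (e.g. via a
    # presigned URL or CloudFront).
    buffer.seek(0)
    s3.put_object(Bucket=BUCKET, Key="zips/batch.zip", Body=buffer.getvalue())

For large batches you would stream to a temporary file on disk instead of an in-memory buffer, but the shape of the solution is the same.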

Broken pipe error with Rails 3 while trying to upload data to AWS S3

I am trying to upload some static data to my AWS S3 account.
I am using the aws/s3 gem for this purpose.
I have a simple upload button on my webpage which hits the controller, where it creates the AWS connection and tries uploading data to AWS S3.
The connection to AWS is successful; however, while trying to store data in S3, I always get the following error: Errno::EPIPE: Broken pipe.
I tried running the same piece of code from s3sh (the S3 shell) and I am able to execute all calls properly.
Am I missing something here? I've been facing this issue for quite some time now.
My configuration is: Ruby 1.8, Rails 3, Mongrel, S3 bucket region US.
Any help will be great.
I think the broken pipe error could mean a lot of things. I was experiencing it just now, and it was because the bucket name in my s3.yml configuration file didn't match the name of the bucket I created on Amazon (a typo).
So for people running into this answer in the future, it could be something as silly and simple as that.
In my case the problem was the file size. S3 puts a 5 GB limit on a single PUT upload. Chopping the file up into several 500 MB files worked for me.
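As an aside on the 5 GB point: that limit applies to a single PUT, and the usual way around it is a multipart upload rather than manually chopping up the file. The gem in the question is Ruby, so the snippet below is only an illustration of the idea in Python/boto3; the file path and bucket name are hypothetical.

    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")
    # Switch to multipart upload for anything over 100 MB; boto3 splits and
    # retries the parts for you, so files well beyond 5 GB work with one call.
    config = TransferConfig(multipart_threshold=100 * 1024 * 1024)
    s3.upload_file("backup.tar", "my-example-bucket", "backups/backup.tar", Config=config)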
I also had this issue uploading my application.css, which had a compiled file size of over 1.1 MB. I set the fog region with:
config.fog_region = 'us-west-2'
and that seems to have fixed the issue for me...