I'm building in Cloud9 to deploy to Lambda. My function works fine in Cloud9 but when I go to deploy I get the error
Unzipped size must be smaller than 262144000 bytes
Running du -h | sort -h shows that my biggest offenders are:
/debug at 291M
/numpy at 79M
/pandas at 47M
/botocore at 41M
My function is extremely simple: it calls a service, uses pandas to format the response, and sends it on.
What is in debug and how do I slim it down/eliminate it from the deploy package?
How do others use libraries at all if they eat up most of the size limit?
A brief background to understand the root cause of the problem
The problem is not with your function but with the size of your deployment package. As per the AWS documentation, a zipped package larger than about 3 MB can no longer be edited in the Lambda console, and a package uploaded directly to Lambda must stay under 50 MB zipped. Since exceeding these limits is almost inevitable once a library pulls in many dependencies, consider uploading the zipped package to an AWS S3 bucket and deploying from there. Note, however, that the unzipped deployment package must still stay under 262144000 bytes (roughly 250 MB). The error message you posted, Unzipped size must be smaller than 262144000 bytes, refers to the unzipped size of the deployment package, i.e. your function code plus its libraries.
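For illustration, here is a rough Python sketch for checking the unzipped size of a package directory against that 262144000-byte limit before deploying (the directory name is just a placeholder for whatever you are about to zip):

import os

def unzipped_size(path):
    # Sum the size of every file under path.
    total = 0
    for dirpath, _, filenames in os.walk(path):
        for name in filenames:
            total += os.path.getsize(os.path.join(dirpath, name))
    return total

# "my-deployment-package" is a placeholder for your package folder.
print(unzipped_size("my-deployment-package"), "bytes; the limit is 262144000")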
Now, understand some facts about how this works on AWS:
Lambda containers start out empty: nothing you installed in Cloud9 is automatically available to your function.
Lambda containers run a Linux kernel, so your packages must be Linux-compiled.
AWS Cloud9 is only an IDE, like RStudio or PyCharm; the packages you install there live in the Cloud9 environment, and you have to stage them yourself (for example in an S3 bucket) before Lambda can use them.
This means you'll need to do the following:
identify the package and its related dependencies
extract the Linux-compiled packages from Cloud9 and save them into a folder structure like python/lib/python3.6/site-packages/
A workable solution to this problem
Overcome the error by reducing the deployment package size, as described below.
Reducing the deployment package size
Manual method: delete the folders within each library folder that are named *.dist-info, *.egg-info, and __pycache__. You'll need to look into each library folder manually to find and delete them.
Automatic method: I still have to figure out the exact command (work in progress); a rough sketch of how it could be scripted appears after this list.
Use Layers
In the AWS console, go to Lambda and create a layer.
Point the layer at the S3 object containing the zipped python package folder, and ensure the Lambda function's IAM role has permission to access the S3 bucket.
Make sure the unzipped folder size is less than 262144000 bytes (roughly 250 MB). If it is larger, it cannot be attached as a layer and you'll get the error Failed to create layer version: Unzipped size must be smaller than 262144000 bytes
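Here is the rough sketch mentioned above for the automatic clean-up, assuming your layer contents sit under python/lib/python3.6/site-packages/ as described earlier; it simply deletes __pycache__, *.dist-info, and *.egg-info folders before you zip:

import os
import shutil

def slim_packages(site_packages):
    # Walk bottom-up and delete metadata/cache folders that Lambda does not need at runtime.
    removed = 0
    for dirpath, dirnames, _ in os.walk(site_packages, topdown=False):
        for d in dirnames:
            if d == "__pycache__" or d.endswith(".dist-info") or d.endswith(".egg-info"):
                shutil.rmtree(os.path.join(dirpath, d), ignore_errors=True)
                removed += 1
    return removed

print(slim_packages("python/lib/python3.6/site-packages"), "folders removed")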
The instance operating system is Ubuntu 16.04.
I was uploading using the instance's upload file option.
The file size was 2.24 GB.
I didn't find anything useful on the internet.
Thanks
The file "xyz.zip.ccsupload" is the file with the partial upload. Once the upload is complete, then the file will have the proper name. You cannot resume the upload from where it left off. If it fails, then you will have to attempt uploading the file again.
The upload most likely failed because of the file size. For a file that large, I would suggest using the "gcloud compute scp" command to upload the file to the VM instance, as documented here.
I'm having trouble getting Drupal to upload files (images, configuration, I guess anything).
The error message is:
The file could not be saved because the upload did not complete.
File upload error. Could not move uploaded file.
When I go to log it says:
Upload error. Could not move uploaded file multimedia.svg to destination public://2017-03/multimedia.svg.
I've already read about permissions, and here they are:
sites/default/files - 770
sites/default/files/2017-03 - 770
/tmp - 1777
However, I'm able to upload & install themes and modules without any problems at all.
So what could it be? And how do I fix this?
Switch the PHP mode from "Apache module (mod_php)" to "FastCGI (mod_fcgid)".
In core/includes/file.inc, the function file_prepare_directory() should return true. See this patch, or just temporarily put
return true;
instead of
return $writable;
in the function.
I have a 27GB file that I am trying to move from an AWS Linux EC2 instance to S3. I've tried both the 's3put' command and the 's3cmd put' command. Both work with a test file. Neither works with the large file. No errors are given; the command returns immediately, but nothing happens.
s3cmd put bigfile.tsv s3://bucket/bigfile.tsv
Though you can upload objects to S3 with sizes up to 5TB, S3 has a size limit of 5GB for an individual PUT operation.
In order to upload files larger than 5GB (or even files larger than 100MB) you are going to want to use the multipart upload feature of S3.
http://docs.amazonwebservices.com/AmazonS3/latest/dev/UploadingObjects.html
http://aws.typepad.com/aws/2010/11/amazon-s3-multipart-upload.html
(Ignore the outdated description of a 5GB object limit in the above blog post. The current limit is 5TB.)
The boto library for Python supports multipart upload, and the latest boto software includes an "s3multiput" command line tool that takes care of the complexities for you and even parallelizes part uploads.
https://github.com/boto/boto
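With the current boto3 SDK (rather than the legacy boto mentioned above), multipart upload is handled for you by upload_file once the file crosses a configurable threshold. A minimal sketch, reusing the bucket and file names from the question's command; the 100 MB threshold/chunk size and concurrency of 8 are arbitrary choices:

import boto3
from boto3.s3.transfer import TransferConfig

# Files above the threshold are uploaded as a multipart upload, with parts in parallel.
config = TransferConfig(multipart_threshold=100 * 1024 * 1024,
                        multipart_chunksize=100 * 1024 * 1024,
                        max_concurrency=8)

s3 = boto3.client("s3")
s3.upload_file("bigfile.tsv", "bucket", "bigfile.tsv", Config=config)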
The file did not exist, doh. I realised this after running the s3 commands in verbose mode by adding the -v flag:
s3cmd put -v bigfile.tsv s3://bucket/bigfile.tsv
s3cmd version 1.1.0 supports multipart upload as part of the "put" command, but it's still in beta (currently).
When I try to upload a folder with subfolders to S3 through the AWS console, only the files are uploaded not the subfolders.
You also can't select a folder. It always requires opening the folder first before you can select anything.
Is this even possible?
I suggest you use the AWS CLI, as it is very easy to do this from the command line with awscli:
aws s3 cp SOURCE_DIR s3://DEST_BUCKET/ --recursive
or you can use sync:
aws s3 sync SOURCE_DIR s3://DEST_BUCKET/
Remember that you have to install the AWS CLI and configure it with your Access Key ID and Secret Access Key:
pip install --upgrade --user awscli
aws configure
You don't need the Enhanced Uploader (which I believe no longer exists) or any third-party software (which always carries the risk that someone could steal your private data or access keys from the S3 bucket, or even from all your AWS resources).
Since the new AWS S3 Web Upload manager supports drag'n'drop for files and folders, just log in to https://console.aws.amazon.com/s3/home, start the uploading process as usual, then drag the folder from your desktop directly onto the S3 page.
The Amazon S3 Console now supports uploading entire folder hierarchies. Enable the Enhanced Uploader in the Upload dialog and then add one or more folders to the upload queue.
http://console.aws.amazon.com/s3
Normally I use the Enhanced Uploader available via the AWS Management Console. However, since that requires Java, it can cause problems. I found s3cmd to be a great command-line replacement. Here's how I used it:
s3cmd --configure # enter access keys, enable HTTPS, etc.
s3cmd sync <path-to-folder> s3://<path-to-s3-bucket>/
Execute something similar to the following command:
aws s3 cp local_folder_name s3://s3_bucket_name/local_folder_name/ --recursive
I was having trouble finding the Enhanced Uploader tool for uploading a folder and the subfolders inside it to S3. But rather than finding a tool, I was able to upload the folders, along with the subfolders inside them, by simply dragging and dropping them into the S3 bucket.
Note: This drag and drop feature doesn't work in Safari. I've tested it in Chrome and it works just fine.
After you drag and drop the files and folders, a final screen opens up to upload the content.
Solution 1:
var AWS = require('aws-sdk');
var path = require("path");
var fs = require('fs');
const uploadDir = function(s3Path, bucketName) {
    let s3 = new AWS.S3({
        accessKeyId: process.env.S3_ACCESS_KEY,
        secretAccessKey: process.env.S3_SECRET_KEY
    });

    // Recursively walk the local directory tree, invoking the callback for every file.
    function walkSync(currentDirPath, callback) {
        fs.readdirSync(currentDirPath).forEach(function (name) {
            var filePath = path.join(currentDirPath, name);
            var stat = fs.statSync(filePath);
            if (stat.isFile()) {
                callback(filePath, stat);
            } else if (stat.isDirectory()) {
                walkSync(filePath, callback);
            }
        });
    }

    walkSync(s3Path, function(filePath, stat) {
        // Use the path relative to the root folder as the S3 object key.
        let bucketPath = filePath.substring(s3Path.length + 1);
        let params = { Bucket: bucketName, Key: bucketPath, Body: fs.readFileSync(filePath) };
        s3.putObject(params, function(err, data) {
            if (err) {
                console.log(err);
            } else {
                console.log('Successfully uploaded ' + bucketPath + ' to ' + bucketName);
            }
        });
    });
};
uploadDir("path to your folder", "your bucket name");
Solution 2:
aws s3 cp SOURCE_DIR s3://DEST_BUCKET/ --recursive
Custom endpoint
If you have a custom endpoint implemented by your IT department, try this:
aws s3 cp <local-dir> s3://bucket-name/<destination-folder>/ --recursive --endpoint-url https://<s3-custom-endpoint.lan>
It's worth mentioning that if you are simply using S3 for backups, you should just zip the folder and then upload that. This will save you upload time and costs.
If you are not sure how to do efficient zipping from the terminal, have a look here for OSX.
On OSX or Linux: $ zip -r archive_name.zip folder_to_compress
Alternatively, a client such as 7-Zip would be sufficient for Windows users.
I do not see Python answers here.
You can script folder upload using Python/boto3.
Here's how to recursively get all file names from a directory tree:
import os

def recursive_glob(treeroot, extension):
    results = [os.path.join(dirpath, f)
               for dirpath, dirnames, files in os.walk(treeroot)
               for f in files if f.endswith(extension)]
    return results
Here's how to upload a file to S3 using Python/boto:
from boto.s3.key import Key

# bucket is an existing boto bucket object; progress is a callback(bytes_sent, bytes_total)
k = Key(bucket)
k.key = s3_key_name
k.set_contents_from_file(file_handle, cb=progress, num_cb=20, reduced_redundancy=use_rr)
I used these ideas to write Directory-Uploader-For-S3
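Since the snippet above uses the legacy boto API, here is a minimal boto3 sketch of the same folder-upload idea; the folder path and bucket name are placeholders:

import os
import boto3

def upload_dir(local_dir, bucket_name):
    s3 = boto3.client("s3")
    for dirpath, _, filenames in os.walk(local_dir):
        for name in filenames:
            file_path = os.path.join(dirpath, name)
            # Use the path relative to local_dir as the S3 object key.
            key = os.path.relpath(file_path, local_dir).replace(os.sep, "/")
            s3.upload_file(file_path, bucket_name, key)

upload_dir("path/to/your/folder", "your-bucket-name")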
I ended up here when trying to figure this out. With the console version that's up there right now, you can drag and drop a folder into it and it works, even though it doesn't allow you to select a folder when you open the upload dialog.
You can drag and drop those folders. Drag and drop functionality is supported only for the Chrome and Firefox browsers.
Please refer to this link:
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/upload-objects.html
You can use the Transfer Manager to upload multiple files, directories, etc.
More info at:
https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/examples-s3-transfermanager.html
You can upload files by dragging and dropping or by pointing and clicking. To upload folders, you must drag and drop them. Drag and drop functionality is supported only for the Chrome and Firefox browsers
Drag and drop is only usable for a relatively small set of files. If you need to upload thousands of them in one go, then the CLI is the way to go. I managed to upload 2,000,00+ files using 1 command...