How can I get the custom storage filename for uploaded images using UploadCare?

I want to use my own S3 storage and display the image that was just uploaded. How can I get the filename that was uploaded to S3?
For example, I can upload a JPG image to UploadCare.
This is the output I get:
fileInfo.cdnUrl: "https://ucarecdn.com/6314bead-0404-4279-9462-fecc927935c9/"
fileInfo.name: "0 (3).jpg"
But if I check my S3 bucket this is the file name that was actually uploaded to S3: https://localdevauctionsite-us.s3.us-west-2.amazonaws.com/6314bead-0404-4279-9462-fecc927935c9/03.jpg
Here is the JavaScript I have so far:
var widget = uploadcare.Widget('[role=uploadcare-uploader]');
widget.onChange(group => {
  group.files().forEach(file => {
    file.done(fileInfo => {
      // Try to list the file from AWS S3
      console.log('CDN url:', fileInfo.cdnUrl);
      // https://uploadcare.com/docs/file_uploader_api/files_uploads/
      console.log('File name: ', fileInfo.name);
    });
  });
});

Filenames are sanitized before copying to S3, so the output file name can contain only the following characters:
abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_
The following function can be used to get a sanitized file name:
function sanitize(filename) {
  var extension = '.' + filename.split('.').pop();
  var name = filename.substring(0, filename.length - extension.length);
  // Strip every character that is not a letter, digit, or underscore
  return name.replace(/[^A-Za-z0-9_]+/g, '') + extension;
}
The final S3 URL can be composed of the S3 base URL, the file UUID (available from the fileInfo object), and the sanitized name of the uploaded file.
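Putting this together, here is a minimal sketch; the bucket base URL is a placeholder taken from the question, so substitute your own:
var S3_BASE_URL = 'https://localdevauctionsite-us.s3.us-west-2.amazonaws.com';

widget.onChange(group => {
  group.files().forEach(file => {
    file.done(fileInfo => {
      // fileInfo.uuid is the same UUID that appears in fileInfo.cdnUrl
      var s3Url = S3_BASE_URL + '/' + fileInfo.uuid + '/' + sanitize(fileInfo.name);
      console.log('S3 url:', s3Url);
    });
  });
});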

Related

Use Terraform to create folder and subfolder in S3 bucket

How can I create a folder and a subfolder in an S3 bucket using Terraform?
This is what my current code looks like.
resource "aws_s3_bucket_object" "Fruits" {
bucket = "${aws_s3_bucket.s3_bucket_name.id}"
key = "${var.folder_fruits}/"
content_type = "application/x-directory"
}
variable "folder_fruits" {
type = string
}
I need a folder structure like fruits/apples.
Folders in S3 are simply objects that end with a / character. You should be able to create the fruits/apples/ folder with the following Terraform code:
variable "folder_fruits" {
type = string
}
resource "aws_s3_bucket_object" "fruits" {
bucket = "${aws_s3_bucket.s3_bucket_name.id}"
key = "${var.folder_fruits}/"
content_type = "application/x-directory"
}
resource "aws_s3_bucket_object" "apples" {
bucket = "${aws_s3_bucket.s3_bucket_name.id}"
key = "${var.folder_fruits}/apples/"
content_type = "application/x-directory"
}
It is likely that this would also work without the fruits folder.
For more information, see https://docs.aws.amazon.com/AmazonS3/latest/user-guide/using-folders.html
You can create a null object with a key that ends with '/'. All objects in a bucket are at the same hierarchy level, but AWS displays them like folders, using '/' as the separator.
resource "aws_s3_bucket_object" "fruits" {
bucket = "your-bucket"
key = "fruits/"
source = "/dev/null"
resource "aws_s3_bucket_object" "apples" {
bucket = "your-bucket"
key = "fruits/apples/"
source = "/dev/null"
}
It creates the following folder-like structure:
s3://your-bucket/fruits/
s3://your-bucket/fruits/apples/

AWS S3 filenaming when using MediaConvert

I am currently uploading files to Amazon S3 and processing them with MediaConvert via Lambda functions. Part of my processing is to create thumbnail images from the uploaded videos. To do this I am using the AmazonMediaConvertClient and creating a job request.
However, the generated files have a suffix of 0000000 applied to them, which from what I can gather refers to the captured frame number.
I do not want this suffix on the filename. Is there any way to ensure that the filename created for a video thumbnail is exactly what I specify, with no suffix?
var jpgOutput = new Output
{
    NameModifier = "-Medium",
    ContainerSettings = new ContainerSettings { Container = ContainerType.RAW },
    Extension = "jpg",
    VideoDescription = new VideoDescription
    {
        CodecSettings = new VideoCodecSettings()
        {
            Codec = VideoCodec.FRAME_CAPTURE,
            FrameCaptureSettings = new FrameCaptureSettings()
            {
                MaxCaptures = 1,
                Quality = 100
            }
        },
        Height = thumbnail.Height,
        Width = thumbnail.Width
    }
};
In the code snippet above, the file is created as 1-Medium.000000.jpg, but I want 1-Medium.jpg.

Handling Streaming TarArchiveEntry to S3 Bucket from a .tar.gz file

I am using AWS Lambda to decompress and traverse tar.gz files, then uploading the extracted entries back to S3, retaining the original directory structure.
I am running into an issue streaming a TarArchiveEntry to an S3 bucket via a PutObjectRequest. While the first entry is successfully streamed, calling getNextTarEntry() on the TarArchiveInputStream throws a NullPointerException because the underlying GzipCompressorInputStream's inflater is null; it had an appropriate value prior to the s3.putObject(new PutObjectRequest(...)) call.
I have not been able to find documentation on how or why the gzip input stream's inflater attribute is being set to null after the stream has partially been sent to S3.
EDIT: Further investigation has revealed that the AWS call appears to be closing the input stream after completing the upload of the specified content length. I have not been able to find out how to prevent this behavior.
Below is essentially what my code looks like. Thanks in advance for your help, comments, and suggestions.
public String handleRequest(S3Event s3Event, Context context) {
    TarArchiveInputStream tarInput = null;
    try {
        S3Event.S3EventNotificationRecord s3EventRecord = s3Event.getRecords().get(0);
        String bucketName = s3EventRecord.getS3().getBucket().getName();
        // Object key may have spaces or unicode non-ASCII characters.
        String srcKey = s3EventRecord.getS3().getObject().getKey();
        System.out.println("Received valid request from bucket: " + bucketName + " with srcKey: " + srcKey);
        String bucketFolder = srcKey.substring(0, srcKey.lastIndexOf('/') + 1);
        System.out.println("File parent directory: " + bucketFolder);
        final AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();
        tarInput = new TarArchiveInputStream(new GzipCompressorInputStream(getObjectContent(s3Client, bucketName, srcKey)));
        TarArchiveEntry currentEntry = tarInput.getNextTarEntry();
        while (currentEntry != null) {
            String fileName = currentEntry.getName();
            System.out.println("For path = " + fileName);
            // Checking if looking at a file (vs a directory)
            if (currentEntry.isFile()) {
                System.out.println("Copying " + fileName + " to " + bucketFolder + fileName + " in bucket " + bucketName);
                ObjectMetadata metadata = new ObjectMetadata();
                metadata.setContentLength(currentEntry.getSize());
                // Contents are properly and successfully sent to S3
                s3Client.putObject(new PutObjectRequest(bucketName, bucketFolder + fileName, tarInput, metadata));
                System.out.println("Done!");
            }
            currentEntry = tarInput.getNextTarEntry(); // NPE here: the underlying gz inflater is null
        }
        return "Success";
    } catch (Exception e) {
        e.printStackTrace();
        return "Error";
    } finally {
        IOUtils.closeQuietly(tarInput);
    }
}
That's true: AWS closes an InputStream provided to PutObjectRequest, and I don't know of a way to instruct AWS not to do so.
However, you can wrap the TarArchiveInputStream in a CloseShieldInputStream from Commons IO, like this:
InputStream shieldedInput = new CloseShieldInputStream(tarInput);
s3Client.putObject(new PutObjectRequest(bucketName, bucketFolder + fileName, shieldedInput, metadata));
When AWS closes the provided CloseShieldInputStream, the underlying TarArchiveInputStream will remain open.
PS. I don't know what ByteArrayInputStream(tarInput.getCurrentEntry()) does, but it looks very strange. I have ignored it for the purposes of this answer.

KeystoneJS: S3 file type path for every item

I want to save my files to S3, and every item should have its own path, such as base/item._id/filename. How does one accomplish this?
It seems the filename is auto-generated, and the option doesn't work.
Keystone's Types.S3File type has a filename option that you can set to a custom function in order to determine your own file name. Documentation Link
{
  type: Types.S3File,
  filename: function(item, filename) {
    // prefix file name with object id
    return item._id + '-' + filename;
  }
}
item has all the properties of the current item, so you could use something like item.name in the file name if that value exists.
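For context, here is a minimal sketch of how that option might sit inside a full model definition. The Item list and its fields are hypothetical, and the S3 bucket credentials are assumed to be configured elsewhere in your Keystone setup:
var keystone = require('keystone');
var Types = keystone.Field.Types;

// Hypothetical list; adjust to your own model
var Item = new keystone.List('Item');

Item.add({
  name: { type: Types.Text },
  file: {
    type: Types.S3File,
    filename: function(item, filename) {
      // base/<item id>/<original filename> layout from the question
      return 'base/' + item._id + '/' + filename;
    }
  }
});

Item.register();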

List directories in Amazon S3 with AWS SDK

I am trying to list folders in S3:
string delimiter = "/";
string folder = "a/";
ListObjectsResponse r = s3Client.ListObjects(new Amazon.S3.Model.ListObjectsRequest()
{
    BucketName = BucketName,
    Prefix = folder,
    MaxKeys = 1000,
    Delimiter = delimiter
});
and I expect a list of directories such as:
a/Folder1
a/Folder2
....
a/FolderN
but my actual result is only one object:
'a1'
Folders are not treated as objects in S3.
Instead, you need to read the response's CommonPrefixes property (a list of strings), which contains the subfolder prefixes.
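As a sketch building on the request above, the subfolder names can be read from CommonPrefixes like this (r is the ListObjectsResponse from the snippet in the question):
// Each entry is one "subfolder" under the prefix,
// e.g. "a/Folder1/", "a/Folder2/", ...
foreach (string commonPrefix in r.CommonPrefixes)
{
    Console.WriteLine(commonPrefix);
}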