I am trying to read files from an S3 bucket using the MinIO client.
https://docs.min.io/docs/java-client-quickstart-guide.html
I am able to make a connection with this client and access the bucket. Now I need to access a file inside a folder in the bucket, but I am not sure how to do it. I thought that once I had access to the bucket I could list the file names using the File library, but I am not able to do it.
File path: s3 bucket endpoint/4275/input/test.csv
Code:
public void listS3BucketObject() {
    MinioClient minioClient =
        MinioClient.builder()
            .endpoint(s3BucketEndpoint)
            .credentials(s3BucketAccessKey, s3BucketSecretKey)
            .build();

    String fileUrl = s3BucketEndpoint + "/" + "4275" + "/" + "input";
    File[] fileList = new File(fileUrl).listFiles();
    for (File file : fileList) {
        System.out.println("File name: " + file.getName()); // getting null exception here
    }
}
To list a "folder" (called a prefix in S3 terms), use the listObjects call.
See this for an example: https://docs.min.io/docs/java-client-api-reference.html#listObjects
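As a rough sketch with a recent (8.x) MinIO Java client, using the same builder-style API as in the question: assuming from the path in the question that "4275" is the bucket name and "input/" is the folder (adjust if 4275 is actually a folder inside another bucket), listing the objects could look something like this:

import io.minio.ListObjectsArgs;
import io.minio.MinioClient;
import io.minio.Result;
import io.minio.messages.Item;

public void listS3BucketObjects() throws Exception {
    MinioClient minioClient =
        MinioClient.builder()
            .endpoint(s3BucketEndpoint)
            .credentials(s3BucketAccessKey, s3BucketSecretKey)
            .build();

    // List every object whose key starts with "input/" in the "4275" bucket.
    Iterable<Result<Item>> results = minioClient.listObjects(
        ListObjectsArgs.builder()
            .bucket("4275")        // assumed bucket name, taken from the path in the question
            .prefix("input/")      // the "folder" (prefix)
            .recursive(true)
            .build());

    for (Result<Item> result : results) {
        Item item = result.get(); // throws if the listing request failed
        System.out.println("Object name: " + item.objectName());
    }
}

Note that there is no java.io.File involved at all: S3/MinIO objects are not local files, so you enumerate them through the client, not the filesystem.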
I have created an S3 object resource in Terraform using the code below:
resource "aws_s3_bucket_object" "object" {
bucket = <bucket_name>
acl = "private"
key = "prefix"
source = "/dev/null"
}
After deployment I accidentally created a file in the prefix named /, and now when I try to apply new changes with terraform apply it throws this error:
Error: error reading S3 Object (prefix/): Forbidden: Forbidden.
Hello, I am new here on AWS. I was trying to upload a CSV file to my S3 bucket, but when the file is larger than 10 MB it returns "{"message":"Request Entity Too Large"}". I am using Postman to do this. Below is the current code I created; in the future I will add some validation to change the name of the uploaded file to my own format. Is there any way to do this with this kind of code, or do you have any suggestions that could help with the issue I have encountered?
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const bucket = process.env.UploadBucket;
const prefix = "csv-files/";
const filename = "file.csv";

exports.handler = (event, context, callback) => {
    let data = event.body;

    // API Gateway delivers the body base64-encoded; decode it back to text.
    let buff = Buffer.from(data, 'base64');
    let text = buff.toString('ascii');
    console.log(text);

    let textFileSplit = text.split('?');

    // get filename split
    let getfilename = textFileSplit[0].split('"');
    console.log(textFileSplit[0]);
    console.log(textFileSplit[1]);

    // remove lower number on csv
    let csvFileSplit = textFileSplit[1].split('--');

    const params = {
        Bucket: bucket,
        Key: prefix + getfilename[3],
        Body: csvFileSplit[0]
    };

    s3.upload(params, function (err, data) {
        if (err) {
            console.log('error uploading');
            return callback(err);
        }
        console.log("Uploaded");
        callback(null, "Success");
    });
};
For scenarios like this one, we normally use a different approach.
Instead of sending the file to lambda through API Gateway, you send the file directly to S3. This will make your solution more robust and cost you less because you don't need to transfer the data to API Gateway and you don't need to process the entire file inside the lambda.
The question is: how do you do this in a secure way, without opening your S3 bucket so that anyone on the internet can upload anything to it? You use S3 signed URLs. Signed URLs are a feature of S3 that lets you bake into the URL the exact permission needed to upload an object to a secured bucket.
In summary the process will be:
Frontend sends a request to API Gateway;
API Gateway forwards the request to a Lambda function;
The Lambda function generates a signed URL with permission to upload the object to a specific S3 bucket;
API Gateway sends the Lambda function's response back to the frontend, and the frontend uploads the file directly to the signed URL.
To generate the signed URL you will need to use the normal aws-sdk in your Lambda function, where you call the getSignedUrl method (the exact signature depends on your language); a rough sketch is shown below. You can find more information about signed URLs here.
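For illustration only, here is a minimal sketch with the AWS SDK for Java v1, where the equivalent method is generatePresignedUrl; the class name and parameters are placeholders, not part of the original answer:

import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

import java.net.URL;
import java.util.Date;

public class PresignedUploadUrl {

    // Returns a URL that lets the caller PUT an object to the given bucket/key
    // for the next 15 minutes, without needing their own AWS credentials.
    public static URL create(String bucket, String key) {
        AmazonS3 s3 = AmazonS3ClientBuilder.standard().build();
        Date expiration = new Date(System.currentTimeMillis() + 15 * 60 * 1000);
        return s3.generatePresignedUrl(bucket, key, expiration, HttpMethod.PUT);
    }
}

The frontend then performs a plain HTTP PUT of the file body against the returned URL, so the large payload never passes through API Gateway or Lambda.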
I am working on Terraform and I am facing an issue downloading a zip file from S3 to my local machine using Terraform.
I am creating a Lambda function from that zip file. Can anyone please help with this?
I believe you can use the aws_s3_bucket_object data source. This allows you to read the contents of an object in an S3 bucket. A sample code snippet is shown below:
data "aws_s3_bucket_object" "secret_key" {
bucket = "awesomecorp-secret-keys"
key = "awesomeapp-secret-key"
}
resource "aws_instance" "example" {
## ...
provisioner "file" {
content = "${data.aws_s3_bucket_object.secret_key.body}"
}
}
Hope this helps!
If you want to create a Lambda function using a file in an S3 bucket, you can simply reference it in your resource:
resource "aws_lambda_function" "lambda" {
  function_name = "my_function"
  s3_bucket     = "some_bucket"
  s3_key        = "lambda.zip"
  # ...
}
I want to upload multiple files to AWS S3 from a specific folder on my local device, but I am running into an error.
Here is my Terraform code:
resource "aws_s3_bucket" "testbucket" {
bucket = "test-terraform-pawan-1"
acl = "private"
tags = {
Name = "test-terraform"
Environment = "test"
}
}
resource "aws_s3_bucket_object" "uploadfile" {
bucket = "test-terraform-pawan-1"
key = "index.html"
source = "/home/pawan/Documents/Projects/"
}
How can I solve this problem?
As of Terraform 0.12.8, you can use the fileset function to get a list of files for a given path and pattern. Combined with for_each, you should be able to upload every file as its own aws_s3_bucket_object:
resource "aws_s3_bucket_object" "dist" {
for_each = fileset("/home/pawan/Documents/Projects/", "*")
bucket = "test-terraform-pawan-1"
key = each.value
source = "/home/pawan/Documents/Projects/${each.value}"
# etag makes the file update when it changes; see https://stackoverflow.com/questions/56107258/terraform-upload-file-to-s3-on-every-apply
etag = filemd5("/home/pawan/Documents/Projects/${each.value}")
}
See terraform-providers/terraform-provider-aws : aws_s3_bucket_object: support for directory uploads #3020 on GitHub.
Note: This does not set metadata like content_type, and as far as I can tell there is no built-in way for Terraform to infer the content type of a file. This metadata is important for things like HTTP access from the browser working correctly. If that's important to you, you should look into specifying each file manually instead of trying to automatically grab everything out of a folder.
You are trying to upload a directory, whereas Terraform expects a single file in the source field; uploading a whole folder to an S3 bucket is not yet supported.
However, you can invoke awscli commands using a null_resource provisioner, as suggested here.
resource "null_resource" "remove_and_upload_to_s3" {
provisioner "local-exec" {
command = "aws s3 sync ${path.module}/s3Contents s3://${aws_s3_bucket.site.id}"
}
}
Since June 9, 2020, Terraform has had a built-in way to infer the content type (and a few other attributes) of a file, which you may need when uploading to an S3 bucket.
HCL format:
module "template_files" {
source = "hashicorp/dir/template"
base_dir = "${path.module}/src"
template_vars = {
# Pass in any values that you wish to use in your templates.
vpc_id = "vpc-abc123"
}
}
resource "aws_s3_bucket_object" "static_files" {
for_each = module.template_files.files
bucket = "example"
key = each.key
content_type = each.value.content_type
# The template_files module guarantees that only one of these two attributes
# will be set for each file, depending on whether it is an in-memory template
# rendering result or a static file on disk.
source = each.value.source_path
content = each.value.content
# Unless the bucket has encryption enabled, the ETag of each object is an
# MD5 hash of that object.
etag = each.value.digests.md5
}
JSON format:
{
  "resource": {
    "aws_s3_bucket_object": {
      "static_files": {
        "for_each": "${module.template_files.files}"
        #...
      }
    }
  }
  #...
}
Source: https://registry.terraform.io/modules/hashicorp/dir/template/latest
My objective was to make this dynamic, so that whenever I create a folder in a directory, Terraform automatically uploads that new folder and its contents to the S3 bucket with the same key structure.
Here's how I did it.
First you have to build a local variable with a list of each folder and the files under it. Then we can loop through that list to upload each source file to the S3 bucket.
Example: I have a folder called "Directories" with two subfolders called "Folder1" and "Folder2", each with its own files.
- Directories
- Folder1
* test_file_1.txt
* test_file_2.txt
- Folder2
* test_file_3.txt
Step 1: Get the local var.
locals {
  folder_files = flatten([for d in flatten(fileset("${path.module}/Directories/*", "*")) : trim(d, "../")])
}
Output looks like this:
folder_files = [
  "Folder1/test_file_1.txt",
  "Folder1/test_file_2.txt",
  "Folder2/test_file_3.txt",
]
Step 2: Dynamically upload the S3 objects.
resource "aws_s3_object" "this" {
for_each = { for idx, file in local.folder_files : idx => file }
bucket = aws_s3_bucket.this.bucket
key = "/Directories/${each.value}"
source = "${path.module}/Directories/${each.value}"
etag = "${path.module}/Directories/${each.value}"
}
This loops over the local var, so in your S3 bucket you end up with the same structure as the local Directories folder, its subdirectories, and files:
- Directories
  - Folder1
    * test_file_1.txt
    * test_file_2.txt
  - Folder2
    * test_file_3.txt
I have a 900 MB file that I'd like to download from S3 to disk if it isn't already there. Is there an easy way to download the file only if it isn't already in place? I know S3 supports querying the MD5 checksum of objects, but I'm hoping not to have to build this logic myself.
You can use AWS CLI's s3 sync command.
Syncs directories and S3 prefixes. Recursively copies new and updated files from the source directory to the destination.
According to this forum thread, you can use sync to synchronize only one file:
aws s3 sync s3://bucket/path/ local/path/ --exclude "*" --include "File.txt"
It says: sync the given paths, exclude all files, but include "File.txt" - so it will sync only "File.txt" under those given paths.
Or with the Java SDK:
According to the javadoc, there is a getObjectMetadata method which will return information about an S3 object (file) without downloading its contents.
The method returns an ObjectMetadata object which can give you some useful information:
getLastModified method:
Gets the value of the Last-Modified header, indicating the date and time at which Amazon S3 last recorded a modification to the associated object.
getContentMD5 method:
Gets the base64 encoded 128-bit MD5 digest of the associated object (content - not including headers) according to RFC 1864.
getETag method:
Gets the hex encoded 128-bit MD5 digest of the associated object according to RFC 1864.
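Putting that together, a rough sketch of an "only download if changed" check with the AWS SDK for Java v1 could compare the object's ETag against the MD5 of the local copy. The class and helper names here are just illustrative, and the ETag equals the plain MD5 only for single-part, unencrypted uploads, so treat this as a heuristic:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ObjectMetadata;

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ConditionalDownload {

    // Download only if the local copy is missing or its MD5 differs from the S3 ETag.
    public static boolean needsDownload(AmazonS3 s3, String bucket, String key, File local)
            throws IOException, NoSuchAlgorithmException {
        if (!local.exists()) {
            return true;
        }
        ObjectMetadata metadata = s3.getObjectMetadata(bucket, key); // HEAD request, no body transfer
        return !metadata.getETag().equalsIgnoreCase(md5Hex(local));
    }

    private static String md5Hex(File file) throws IOException, NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance("MD5");
        try (FileInputStream in = new FileInputStream(file)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                digest.update(buffer, 0, read);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : digest.digest()) {
            hex.append(String.format("%02x", b & 0xff));
        }
        return hex.toString();
    }
}

If needsDownload returns true, fetch the object with getObject or the TransferManager, as in the answer below.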
I have used the code below to download only those S3 files whose timestamp is newer than the local folder's timestamp. It first checks whether any files in the S3 folder have a last-modified time later than the local folder's; if so, it downloads only those files.
String bucket = "bucket";
TransferManager transferManager = TransferManagerBuilder.standard().build();
AmazonS3 amazonS3 = AmazonS3ClientBuilder.standard().build();

// Last-modified time of the local folder, used as the cut-off for downloads.
Path location = Paths.get("/data/test/");
FileTime lastModifiedTime = null;
try {
    lastModifiedTime = Files.getLastModifiedTime(location, LinkOption.NOFOLLOW_LINKS);
} catch (IOException e) {
    e.printStackTrace();
}
Date lastUpdatedTime = new Date(lastModifiedTime.toMillis());

// List the objects under the prefix and download only those modified after the local folder.
ObjectListing listing = amazonS3.listObjects(bucket, "test-folder");
List<S3ObjectSummary> summaries = listing.getObjectSummaries();
for (S3ObjectSummary os : summaries) {
    if (os.getLastModified().after(lastUpdatedTime)) {
        try {
            String fileName = "/data/test/" + os.getKey();
            Download multipleFileDownload = transferManager.download(bucket, os.getKey(), new File(fileName));
            while (!multipleFileDownload.isDone()) {
                Thread.sleep(1000);
            }
        } catch (InterruptedException i) {
            LOG.error("Exception occurred while downloading the file ", i);
        }
    }
}