get file name boto3 python - amazon-s3

I'm new to Flask and boto3 and want to create a simple upload form for Amazon S3. I need to save the file to S3 with its existing filename and return a link to that file.
Two issues:
1) In the example below the file is always uploaded with the name 'test'. If the uploaded file is named 'file.pdf', I need it saved to S3 under that same name ('file.pdf').
I believe this can be done with request, but I don't know exactly how. How can it be done?
2) How do I return a link to the file I have just uploaded? (I have no idea.)
Below is my code:
@app.route('/')
def index():
    return '''
    <form method="post" enctype="multipart/form-data" action="upload">
        <input type="file" name="file" multiple>
        <input type="submit">
    </form>
    '''

@app.route('/upload', methods=['POST', 'GET'])
def upload():
    s3 = boto3.resource('s3')
    s3.Bucket('dimkzn').put_object(='test', Body=request.files['file'])
    return 'file saved to S3'

if __name__ == '__main__':
    app.run(debug=True)

In your upload function you are missing the Key parameter name where you have ='test', which is why every file is being saved as 'test'. You can pass the filename through from the file object within request.files:
@app.route('/upload', methods=['POST', 'GET'])
def upload():
    s3 = boto3.resource('s3')
    s3.Bucket('dimkzn').put_object(Key=request.files['file'].filename, Body=request.files['file'])
    return 'file saved to S3'
The URL path will then be https://s3-<region>.amazonaws.com/dimkzn/<filename>
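For example, a minimal sketch of the upload view that returns that link (not the answer's exact code; the region value is an assumption, and the object must be publicly readable for the plain URL to work):
import boto3
from flask import Flask, request

app = Flask(__name__)

REGION = 'us-east-1'  # assumption: replace with your bucket's actual region

@app.route('/upload', methods=['POST'])
def upload():
    f = request.files['file']
    boto3.resource('s3').Bucket('dimkzn').put_object(Key=f.filename, Body=f)
    # Build the link from the region, bucket and key, as described above.
    return f"https://s3-{REGION}.amazonaws.com/dimkzn/{f.filename}"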
If you wish to upload the file into a subfolder: folders don't actually exist within S3, but you can create a folder-like structure by changing the key of the file. For example:
@app.route('/upload', methods=['POST', 'GET'])
def upload():
    s3 = boto3.resource('s3')
    s3.Bucket('dimkzn').put_object(Key=f"media/example/{request.files['file'].filename}", Body=request.files['file'])
    return 'file saved to S3'
This will upload the file to:
https://s3-<region>.amazonaws.com/dimkzn/media/example/<filename>

I think your problem is in the line:
s3.Bucket('dimkzn').put_object(='test', Body=request.files['file'])
You're missing an argument name before the ='test'. I can't see from your code what request.files['file'] returns: is that the filename or the file content?
The correct syntax is:
object = bucket.put_object(
    Body=b'bytes'|file,
    Key='filename'
)
...where Body is the content of the file and Key is the filename.
Once successful, you can construct the link yourself as follows:
https://s3-<region>.amazonaws.com/<bucketname>/<filename>
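If the bucket isn't public, an alternative (not mentioned in this answer) is to ask boto3 for a presigned URL instead of building the link by hand. A minimal sketch, assuming the bucket from the question and a key of 'file.pdf':
import boto3

s3_client = boto3.client('s3')
# Presigned URL for the uploaded object; the one-hour expiry is an arbitrary choice.
url = s3_client.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'dimkzn', 'Key': 'file.pdf'},
    ExpiresIn=3600,
)
print(url)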
Good luck!

You're missing Key; it should be like this:
s3.Bucket('dimkzn').put_object(Key='test', Body=request.files['file'])

Related

List out files inside folder in S3 bucket using minio

I am trying to read files from an S3 bucket using the MinIO client.
https://docs.min.io/docs/java-client-quickstart-guide.html
I am able to make a connection using this client and can access the bucket as well. Now I need to access a file inside a folder in the bucket, but I am not sure how to do it. I thought that once I had access to the bucket I could list the file names using the File library, but I am not able to.
File path: s3 bucket endpoint/4275/input/test.csv
Code:
public void listS3BucketObject() {
    MinioClient minioClient =
        MinioClient.builder()
            .endpoint(s3BucketEndpoint)
            .credentials(s3BucketAccessKey, s3BucketSecretKey)
            .build();

    String fileUrl = s3BucketEndpoint + "/" + "4275" + "/" + "input";
    File[] fileList = new File(fileUrl).listFiles();
    for (File file : fileList) {
        System.out.println("File name: " + file.getName()); // getting null exception here
    }
}
To list a "folder" (called a prefix in S3 terms), use the listObjects call.
See this for an example: https://docs.min.io/docs/java-client-api-reference.html#listObjects
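For comparison only (the question uses the Java client): MinIO speaks the S3 API, so the same prefix-based listing can be sketched in Python with boto3 pointed at the MinIO endpoint. The endpoint, credentials, and the assumption that "4275" is the bucket name and "input/" the prefix are all placeholders here:
import boto3

# Placeholders standing in for s3BucketEndpoint, s3BucketAccessKey and s3BucketSecretKey.
endpoint = "http://minio.example.com:9000"
s3 = boto3.client(
    's3',
    endpoint_url=endpoint,
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# List everything under the "folder" by passing it as a key prefix.
response = s3.list_objects_v2(Bucket="4275", Prefix="input/")
for obj in response.get("Contents", []):
    print("File name:", obj["Key"])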

how to download zip file from aws s3 using terraform

I am working on Terraform and I am facing an issue downloading a zip file from S3 to local using Terraform. I am creating the Lambda function from the zip file. Can anyone please help with this?
I believe you can use the aws_s3_bucket_object data source. This allows you to read the contents of an object in an S3 bucket. A sample code snippet is shown below:
data "aws_s3_bucket_object" "secret_key" {
bucket = "awesomecorp-secret-keys"
key = "awesomeapp-secret-key"
}
resource "aws_instance" "example" {
## ...
provisioner "file" {
content = "${data.aws_s3_bucket_object.secret_key.body}"
}
}
Hope this helps!
If you want to create a Lambda function using a file in an S3 bucket, you can simply reference it in your resource:
resource "aws_lambda_function" "lambda" {
  function_name = "my_function"
  s3_bucket     = "some_bucket"
  s3_key        = "lambda.zip"
  # ...
}

Uploading Multiple files in AWS S3 from terraform

I want to upload multiple files to AWS S3 from a specific folder on my local device. I am running into the following error.
Here is my Terraform code:
resource "aws_s3_bucket" "testbucket" {
bucket = "test-terraform-pawan-1"
acl = "private"
tags = {
Name = "test-terraform"
Environment = "test"
}
}
resource "aws_s3_bucket_object" "uploadfile" {
bucket = "test-terraform-pawan-1"
key = "index.html"
source = "/home/pawan/Documents/Projects/"
}
How can I solve this problem?
As of Terraform 0.12.8, you can use the fileset function to get a list of files for a given path and pattern. Combined with for_each, you should be able to upload every file as its own aws_s3_bucket_object:
resource "aws_s3_bucket_object" "dist" {
for_each = fileset("/home/pawan/Documents/Projects/", "*")
bucket = "test-terraform-pawan-1"
key = each.value
source = "/home/pawan/Documents/Projects/${each.value}"
# etag makes the file update when it changes; see https://stackoverflow.com/questions/56107258/terraform-upload-file-to-s3-on-every-apply
etag = filemd5("/home/pawan/Documents/Projects/${each.value}")
}
See terraform-providers/terraform-provider-aws : aws_s3_bucket_object: support for directory uploads #3020 on GitHub.
Note: This does not set metadata like content_type, and as far as I can tell there is no built-in way for Terraform to infer the content type of a file. This metadata is important for things like HTTP access from the browser working correctly. If that's important to you, you should look into specifying each file manually instead of trying to automatically grab everything out of a folder.
You are trying to upload a directory, whereas Terraform expects a single file in the source field. It is not yet supported to upload a folder to an S3 bucket.
However, you can invoke awscli commands using null_resource provisioner, as suggested here.
resource "null_resource" "remove_and_upload_to_s3" {
provisioner "local-exec" {
command = "aws s3 sync ${path.module}/s3Contents s3://${aws_s3_bucket.site.id}"
}
}
Since June 9, 2020, Terraform has had a built-in way to infer the content type (and a few other attributes) of a file, which you may need as you upload to an S3 bucket.
HCL format:
module "template_files" {
source = "hashicorp/dir/template"
base_dir = "${path.module}/src"
template_vars = {
# Pass in any values that you wish to use in your templates.
vpc_id = "vpc-abc123"
}
}
resource "aws_s3_bucket_object" "static_files" {
for_each = module.template_files.files
bucket = "example"
key = each.key
content_type = each.value.content_type
# The template_files module guarantees that only one of these two attributes
# will be set for each file, depending on whether it is an in-memory template
# rendering result or a static file on disk.
source = each.value.source_path
content = each.value.content
# Unless the bucket has encryption enabled, the ETag of each object is an
# MD5 hash of that object.
etag = each.value.digests.md5
}
JSON format:
{
  "resource": {
    "aws_s3_bucket_object": {
      "static_files": {
        "for_each": "${module.template_files.files}"
        #...
      }
    }
  }
  #...
}
Source: https://registry.terraform.io/modules/hashicorp/dir/template/latest
My objective was to make this dynamic, so whenever I create a folder in a directory, Terraform automatically uploads that new folder and its contents to the S3 bucket with the same key structure.
Here's how I did it.
First you have to get a local variable with a list of each folder and the files under it. Then we can loop through that list to upload each source file to the S3 bucket.
Example: I have a folder called "Directories" with two subfolders called "Folder1" and "Folder2", each with its own files.
- Directories
  - Folder1
    * test_file_1.txt
    * test_file_2.txt
  - Folder2
    * test_file_3.txt
Step 1: Get the local var.
locals {
  folder_files = flatten([for d in flatten(fileset("${path.module}/Directories/*", "*")) : trim(d, "../")])
}
Output looks like this:
folder_files = [
  "Folder1/test_file_1.txt",
  "Folder1/test_file_2.txt",
  "Folder2/test_file_3.txt",
]
Step 2: Dynamically upload the S3 objects.
resource "aws_s3_object" "this" {
for_each = { for idx, file in local.folder_files : idx => file }
bucket = aws_s3_bucket.this.bucket
key = "/Directories/${each.value}"
source = "${path.module}/Directories/${each.value}"
etag = "${path.module}/Directories/${each.value}"
}
This loops over the local var, so in your S3 bucket you will end up with the local directory, its subdirectories, and its files uploaded under the same key structure:
Directories
- Folder1
  - test_file_1.txt
  - test_file_2.txt
- Folder2
  - test_file_3.txt

Can't read web2py uploaded .txt from the shell

I have a simple table:
db.define_table('myfiles',
                Field('title', 'string'),
                Field('myfile', 'upload'))
Then I run my app from the shell:
python web2py.py -S myapp -M
I choose my file_path:
file_path = os.path.join(request.folder,'upload',db.myfiles[1].myfile)
But when I try to read my uploaded file, I get "File not open for reading":
with open(file_path, 'wb') as f: data = f.readlines()
I even tried the same process after copy-pasting my file into the private folder, but I still get the same error.
First, the default folder for uploaded files is "uploads", not "upload":
file_path = os.path.join(request.folder, 'uploads', db.myfiles[1].myfile)
Second, you should open the file for reading rather than writing:
with open(file_path, 'rb') as f:
    data = f.readlines()

how to remove the file after download in specified path

filepath = self.class.instance_variable_get(:@filename)
# puts "#{:@filename}"
qget = params['clientquery']
if !qget.nil? then
  begin
    systemCmd = "bash /home/abc/t.sh \"#{qget}\" \"#{filepath}\""
    puts systemCmd
    output = system("#{systemCmd} 2>&1")
    data = File.read(filepath)
    send_data data, filename: File.basename(filepath),
                    type: 'application/csv',
                    disposition: 'attachment'
  ensure
    # delfile = File.basename("/tmp/download.csv")
    FileUtils.remove_entry_secure File.basename("/tmp/download.csv")
    # File.delete(delfile)
    # redirect_to '/report'
  end
end
With the line FileUtils.remove_entry_secure File.basename("/tmp/download.csv") I try to remove the file after it has been downloaded, but it is not working.
If I comment out the line FileUtils.remove_entry_secure File.basename("/tmp/download.csv"), the file downloads, but I want to remove that file after it has been downloaded.
I think it is a permission problem. Could you please verify the permissions on the /tmp folder?
The FileUtils.remove_entry_secure method checks all permissions (user and group) before it removes the entry.
Please refer: click here