Serving static files in development and from Amazon S3 together - amazon-s3

I would like to serve static files from Amazon S3 and from a local server. I also don't know how to set up MEDIA_URL, STATIC_ROOT, and MEDIA_ROOT.
Context:
I am serving my static files from Amazon S3 using django-boto, and my settings/base.py contains:
STATICFILES_LOCATION = 'assets'
STATICFILES_STORAGE = 'custom_storages.StaticStorage'
STATIC_URL = "https://%s/%s/" % (AWS_S3_CUSTOM_DOMAIN, STATICFILES_LOCATION)
MEDIAFILES_LOCATION = 'media'
MEDIA_URL = "https://%s/%s/" % (AWS_S3_CUSTOM_DOMAIN, MEDIAFILES_LOCATION)
DEFAULT_FILE_STORAGE = 'custom_storages.MediaStorage'
My custom_storages.py file content is:
from django.conf import settings
# from storages.backends.s3boto3 import S3Boto3Storage
from storages.backends.s3boto import S3BotoStorage

class StaticStorage(S3BotoStorage):
    location = settings.STATICFILES_LOCATION

class MediaStorage(S3BotoStorage):
    location = settings.MEDIAFILES_LOCATION
And all of this is working fine.
When I execute collectstatic my static files are being uploaded to my bucket in Amazon S3.
The problem I have is that every time I make a change in my CSS or JS files, I need to run the collectstatic command.
How can I set up my project settings so that static files are served from S3 in production and from my local Django server in development?
I have a settings/development.py file in which I am overriding the following settings:
STATIC_URL = '/assets/'
STATICFILES_LOCATION = 'assets'
MEDIAFILES_LOCATION = 'media/'
MEDIA_URL = MEDIAFILES_LOCATION
STATIC_ROOT = os.path.join(BASE_DIR, "assets")
MEDIA_ROOT = os.path.join(BASE_DIR, "media")
And in my main urls.py file I have this condition:
if settings.DEBUG:
    urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
    urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
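One possible approach (not from the original post, just a hedged sketch) is to keep the S3 backends in settings/base.py and override the storage backends in settings/development.py with Django's stock local backends, so runserver serves the files directly while DEBUG is on:

# settings/development.py -- minimal sketch; assumes this module imports and then overrides base.py
import os
from .base import *  # pulls in BASE_DIR and everything else, then overrides storage below

# Use Django's default local backends instead of the S3-backed custom_storages classes
STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.StaticFilesStorage'
DEFAULT_FILE_STORAGE = 'django.core.files.storage.FileSystemStorage'

STATIC_URL = '/assets/'
MEDIA_URL = '/media/'
STATIC_ROOT = os.path.join(BASE_DIR, "assets")
MEDIA_ROOT = os.path.join(BASE_DIR, "media")

With something like that in place, collectstatic only needs to run when deploying to S3, and local CSS/JS edits should show up immediately under runserver without re-running it.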

Related

Save Pandas or Pyspark dataframe from Databricks to Azure Blob Storage

Is there a way I can save a Pyspark or Pandas dataframe from Databricks to a blob storage without mounting or installing libraries?
I was able to achieve this after mounting the storage container into Databricks and using the library com.crealytics.spark.excel, but I was wondering if I can do the same without the library or without mounting, because I will be working on clusters that don't have these two permissions.
Here is the code for saving the dataframe locally to DBFS.
# export
from os import path

folder = "export"
name = "export"
file_path_name_on_dbfs = path.join("/tmp", folder, name)

# Writing to DBFS
# .coalesce(1) is used to generate only 1 file; if the dataframe is too big this won't work,
# so you'll have multiple files and you need to copy them later one by one
sampleDF \
    .coalesce(1) \
    .write \
    .mode("overwrite") \
    .option("header", "true") \
    .option("delimiter", ";") \
    .option("encoding", "UTF-8") \
    .csv(file_path_name_on_dbfs)
# path of destination, which will be sent to az storage
dest = file_path_name_on_dbfs + ".csv"

# Renaming part-000...csv to our file name
target_file = list(filter(lambda file: file.name.startswith("part-00000"), dbutils.fs.ls(file_path_name_on_dbfs)))
if len(target_file) > 0:
    dbutils.fs.mv(target_file[0].path, dest)
    dbutils.fs.cp(dest, f"file://{dest}")  # this line is added for community edition only because /dbfs is not recognized, so we copy the file locally
    dbutils.fs.rm(file_path_name_on_dbfs, True)
The code that will send the file to Azure storage:
import requests
sas="YOUR_SAS_TOKEN_PREVIOUSLY_CREATED" # follow the link below to create SAS token (using sas is slightly more secure than raw key storage)
blob_account_name = "YOUR_BLOB_ACCOUNT_NAME"
container = "YOUR_CONTAINER_NAME"
destination_path_w_name = "export/export.csv"
url = f"https://{blob_account_name}.blob.core.windows.net/{container}/{destination_path_w_name}?{sas}"
# here we read the content of our previously exported df -> csv
# if you are not on community edition you might want to use /dbfs + dest
payload = open(dest).read()
headers = {
    'x-ms-blob-type': 'BlockBlob',
    'Content-Type': 'text/csv'  # you can change the content type according to your needs
}
response = requests.request("PUT", url, headers=headers, data=payload)
# if response.status_code is 201 it means your file was created successfully
print(response.status_code)
Follow this link to set up a SAS token.
Remember that anyone who has the SAS token can access your storage, depending on the permissions you set when creating it.
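For reference, a SAS token can also be generated programmatically; below is a rough sketch using the azure-storage-blob package, where the account name, key, container, permissions, and expiry are all placeholders/assumptions to adjust:

# Sketch: generate a short-lived, write-only container SAS with azure-storage-blob (v12+)
from datetime import datetime, timedelta
from azure.storage.blob import generate_container_sas, ContainerSasPermissions

sas = generate_container_sas(
    account_name="YOUR_BLOB_ACCOUNT_NAME",   # placeholder
    container_name="YOUR_CONTAINER_NAME",    # placeholder
    account_key="YOUR_STORAGE_ACCOUNT_KEY",  # placeholder: storage account access key
    permission=ContainerSasPermissions(write=True, create=True),
    expiry=datetime.utcnow() + timedelta(hours=1),  # token stops working after 1 hour
)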
Code for Excel export version (using com.crealytics:spark-excel_2.12:0.14.0)
Saving the dataframe:
data = [
    ('a', 25, 'ast'),
    ('b', 15, 'phone'),
    ('c', 32, 'dlp'),
    ('d', 45, 'rare'),
    ('e', 60, 'phq')
]
columns = ["column1", "column2", "column3"]
sampleDF = spark.createDataFrame(data=data, schema=columns)
sampleDF.show()
# export
from os import path

folder = "export"
name = "export"
file_path_name_on_dbfs = path.join("/tmp", folder, name)

# Writing to DBFS
sampleDF.write.format("com.crealytics.spark.excel") \
    .option("header", "true") \
    .mode("overwrite") \
    .save(file_path_name_on_dbfs + ".xlsx")

# excel
dest = file_path_name_on_dbfs + ".xlsx"
dbutils.fs.cp(dest, f"file://{dest}")  # this line is added for community edition only because /dbfs is not recognized, so we copy the file locally
Uploading the file to Azure storage:
import requests
sas="YOUR_SAS_TOKEN_PREVIOUSLY_CREATED" # follow the link below to create SAS token (using sas is slightly more secure than raw key storage)
blob_account_name = "YOUR_BLOB_ACCOUNT_NAME"
container = "YOUR_CONTAINER_NAME"
destination_path_w_name = "export/export.xlsx"
# destination_path_w_name = "export/export.csv"
url = f"https://{blob_account_name}.blob.core.windows.net/{container}/{destination_path_w_name}?{sas}"
# here we read the content of our previously exported df -> xlsx
# if you are not on community edition you might want to use /dbfs + dest
# payload = open(dest).read()
payload = open(dest, 'rb').read()
headers = {
    'x-ms-blob-type': 'BlockBlob',
    # 'Content-Type': 'text/csv'
    'Content-Type': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
}
response = requests.request("PUT", url, headers=headers, data=payload)
# if response.status_code is 201 it means your file was created successfully
print(response.status_code)
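If installing the azure-storage-blob SDK is an option, the same upload can be done without hand-building the PUT request; a rough sketch reusing the url (which already contains the SAS) and payload from the snippet above:

# Sketch: upload the exported file with the azure-storage-blob SDK instead of requests
from azure.storage.blob import BlobClient, ContentSettings

blob = BlobClient.from_blob_url(url)  # the SAS token in the url acts as the credential
blob.upload_blob(
    payload,
    overwrite=True,
    content_settings=ContentSettings(
        content_type="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"
    ),
)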

Storing cache results

I have activated the scrapy.extensions.httpcache.FilesystemCacheStorage cache storage backend so that scraped responses are stored as gzipped cache files in a folder. However, I get the following error:
raise BadGzipFile('Not a gzipped file (%r)' % magic)
gzip.BadGzipFile: Not a gzipped file (b'\x80\x04')
I think the issue is with how the cached file names are saved.
My settings.py:
HTTPCACHE_ENABLED = True
HTTPCACHE_DIR = 'httpcache'
HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
HTTPCACHE_DBM_MODULE = 'dbm'
HTTPCACHE_GZIP = True
How do I correctly activate the extension and store the cache as files in my working directory?
example scraper:
import scrapy

class email_spider(scrapy.Spider):
    name = 'email'
    start_urls = ['http://quotes.toscrape.com/tag/humor/']

    def parse(self, response):
        content = response.xpath("(//div[@class='col-md-8'])[2]//div")
        for stuff in content:
            yield {
                'stuff': stuff.xpath(".//a/@href").get(),
            }
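For what it's worth, FilesystemCacheStorage writes one directory per request under .scrapy/httpcache/<spider_name>/, containing files such as pickled_meta and response_body, and with HTTPCACHE_GZIP = True each of those files is gzip-compressed. The b'\x80\x04' in the error is the pickle magic number, which suggests those entries were written without gzip (for example before HTTPCACHE_GZIP was enabled), so clearing HTTPCACHE_DIR and re-crawling may help. A rough sketch for inspecting one cached entry (the hash directory is a placeholder):

# Sketch: read one cache entry written by FilesystemCacheStorage with HTTPCACHE_GZIP = True
import gzip
import pickle

entry_dir = ".scrapy/httpcache/email/00/00112233445566778899aabbccddeeff00112233"  # placeholder path
with gzip.open(f"{entry_dir}/pickled_meta", "rb") as f:
    meta = pickle.load(f)   # dict with keys like 'url', 'status', 'timestamp'
with gzip.open(f"{entry_dir}/response_body", "rb") as f:
    body = f.read()         # the cached HTML bytes

print(meta["url"], meta["status"], len(body))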

Access denied for s3 bucket for terraform backend

My terraform code is as below:
# PROVIDERS
provider "aws" {
  profile = var.aws_profile
  region  = var.region
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 1.0.4"
    }
  }
}

terraform {
  backend "s3" {
    bucket = "terraform-backend-20200102"
    key    = "test.tfstate"
  }
}

# DATA
data "aws_availability_zones" "available" {}

data "template_file" "public_cidrsubnet" {
  count    = var.subnet_count
  template = "$${cidrsubnet(vpc_cidr,8,current_count)}"

  vars = {
    vpc_cidr      = var.network_address_space
    current_count = count.index
  }
}

# RESOURCES
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  name    = var.name
  version = "2.62.0"

  cidr            = var.network_address_space
  azs             = slice(data.aws_availability_zones.available.names, 0, var.subnet_count)
  public_subnets  = []
  private_subnets = data.template_file.public_cidrsubnet[*].rendered

  tags = local.common_tags
}
However, when I run terraform init, it gives me an error.
$ terraform.exe init -reconfigure
Initializing modules...
Initializing the backend...

  region
  AWS region of the S3 Bucket and DynamoDB Table (if used).

  Enter a value: ap-southeast-2

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Error refreshing state: AccessDenied: Access Denied
    status code: 403, request id: A2EB50094A12E22F, host id: JFwXo11eiAW3N0JL1Yoi/i1k03aqzSIwj34NOgMT/ScgmBEC/nncjsK/GKik0SFIT6Ym8Mr/j6U=

/vpc_create
$ aws s3 ls --profile=tcp-aws-sandbox-31
2020-11-02 23:05:48 terraform-backend-20200102
Do note that I can list my bucket with the aws s3 ls command, so why does Terraform have any issue?
P.S.: I am trying to fall back to the local state file, hence I commented out the backend block, but it is still giving me an error. Please assist.
# terraform {
#   backend "s3" {
#     bucket = "terraform-backend-20200102"
#     key    = "test.tfstate"
#   }
# }
Ran aws configure and then it worked.
For some reason it was picking up the wrong account, even though I had set the correct AWS profile in the ~/.aws/credentials file.
The way I realized it was using the wrong account was by running terraform apply after export TF_LOG=DEBUG.
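A quick way to see which account a profile actually resolves to (outside Terraform) is an STS get-caller-identity call; a small sketch with boto3, using the profile name from the question as a placeholder:

# Sketch: print the AWS account and ARN a profile resolves to, to catch wrong-account issues early
import boto3

session = boto3.Session(profile_name="tcp-aws-sandbox-31")  # placeholder: profile to check
identity = session.client("sts").get_caller_identity()
print(identity["Account"], identity["Arn"])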

how to download zip file from aws s3 using terraform

I am working on Terraform and facing an issue downloading a zip file from S3 to my local machine using Terraform. I am creating a Lambda function from that zip file. Can anyone please help with this?
I believe you can use the aws_s3_bucket_object data source. This allows you to read the contents of an object in an S3 bucket. A sample code snippet is shown below:
data "aws_s3_bucket_object" "secret_key" {
bucket = "awesomecorp-secret-keys"
key = "awesomeapp-secret-key"
}
resource "aws_instance" "example" {
## ...
provisioner "file" {
content = "${data.aws_s3_bucket_object.secret_key.body}"
}
}
Hope this helps!
If you want to create a Lambda function using a file in an S3 bucket, you can simply reference it in your resource:
resource "aws_lambda_function" "lambda" {
  function_name = "my_function"
  s3_bucket     = "some_bucket"
  s3_key        = "lambda.zip"
  # ...
}
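If you really do need the zip on local disk (rather than pointing aws_lambda_function at the bucket as above), one option is a small script outside Terraform; a sketch with boto3, with the bucket and key as placeholders:

# Sketch: download a zip from S3 to the local machine with boto3 (outside Terraform)
import boto3

s3 = boto3.client("s3")
s3.download_file("some_bucket", "lambda.zip", "./lambda.zip")  # bucket, key, local path are placeholders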

Uploading Multiple files in AWS S3 from terraform

I want to upload multiple files to AWS S3 from a specific folder on my local device. I am running into the following error.
Here is my terraform code.
resource "aws_s3_bucket" "testbucket" {
bucket = "test-terraform-pawan-1"
acl = "private"
tags = {
Name = "test-terraform"
Environment = "test"
}
}
resource "aws_s3_bucket_object" "uploadfile" {
bucket = "test-terraform-pawan-1"
key = "index.html"
source = "/home/pawan/Documents/Projects/"
}
How can I solve this problem?
As of Terraform 0.12.8, you can use the fileset function to get a list of files for a given path and pattern. Combined with for_each, you should be able to upload every file as its own aws_s3_bucket_object:
resource "aws_s3_bucket_object" "dist" {
for_each = fileset("/home/pawan/Documents/Projects/", "*")
bucket = "test-terraform-pawan-1"
key = each.value
source = "/home/pawan/Documents/Projects/${each.value}"
# etag makes the file update when it changes; see https://stackoverflow.com/questions/56107258/terraform-upload-file-to-s3-on-every-apply
etag = filemd5("/home/pawan/Documents/Projects/${each.value}")
}
See terraform-providers/terraform-provider-aws : aws_s3_bucket_object: support for directory uploads #3020 on GitHub.
Note: This does not set metadata like content_type, and as far as I can tell there is no built-in way for Terraform to infer the content type of a file. This metadata is important for things like HTTP access from the browser working correctly. If that's important to you, you should look into specifying each file manually instead of trying to automatically grab everything out of a folder.
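If you do want content types without listing every file by hand (and without the module shown further down), one option is to generate the lookup outside Terraform, for example as a JSON variable; a rough Python sketch using the standard mimetypes module, with the folder path taken from the question as a placeholder:

# Sketch: build a file -> content-type map that Terraform could consume (e.g. via a JSON variable)
import json
import mimetypes
import os

root = "/home/pawan/Documents/Projects/"  # placeholder: same folder as in the question
content_types = {}
for name in os.listdir(root):
    if os.path.isfile(os.path.join(root, name)):
        guessed, _ = mimetypes.guess_type(name)
        content_types[name] = guessed or "application/octet-stream"

print(json.dumps(content_types, indent=2))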
You are trying to upload a directory, whereas Terraform expects a single file in the source field. It is not yet supported to upload a folder to an S3 bucket.
However, you can invoke awscli commands using null_resource provisioner, as suggested here.
resource "null_resource" "remove_and_upload_to_s3" {
provisioner "local-exec" {
command = "aws s3 sync ${path.module}/s3Contents s3://${aws_s3_bucket.site.id}"
}
}
Since June 9, 2020, Terraform has had a built-in way to infer the content type (and a few other attributes) of a file, which you may need as you upload to an S3 bucket.
HCL format:
module "template_files" {
source = "hashicorp/dir/template"
base_dir = "${path.module}/src"
template_vars = {
# Pass in any values that you wish to use in your templates.
vpc_id = "vpc-abc123"
}
}
resource "aws_s3_bucket_object" "static_files" {
for_each = module.template_files.files
bucket = "example"
key = each.key
content_type = each.value.content_type
# The template_files module guarantees that only one of these two attributes
# will be set for each file, depending on whether it is an in-memory template
# rendering result or a static file on disk.
source = each.value.source_path
content = each.value.content
# Unless the bucket has encryption enabled, the ETag of each object is an
# MD5 hash of that object.
etag = each.value.digests.md5
}
JSON format:
{
  "resource": {
    "aws_s3_bucket_object": {
      "static_files": {
        "for_each": "${module.template_files.files}"
        #...
      }
    }
  }
}
Source: https://registry.terraform.io/modules/hashicorp/dir/template/latest
My objective was to make this dynamic, so that whenever I create a folder in a directory, Terraform automatically uploads that new folder and its contents to the S3 bucket with the same key structure.
Here's how I did it.
First you have to get a local variable with a list of each folder and the files under it. Then we can loop through that list to upload each source to the S3 bucket.
Example: I have a folder called "Directories" with 2 subfolders called "Folder1" and "Folder2", each with their own files.
- Directories
  - Folder1
    * test_file_1.txt
    * test_file_2.txt
  - Folder2
    * test_file_3.txt
Step 1: Get the local var.
locals {
  folder_files = flatten([for d in flatten(fileset("${path.module}/Directories/*", "*")) : trim(d, "../")])
}
Output looks like this:
folder_files = [
  "Folder1/test_file_1.txt",
  "Folder1/test_file_2.txt",
  "Folder2/test_file_3.txt",
]
Step 2: Dynamically upload the S3 objects.
resource "aws_s3_object" "this" {
  for_each = { for idx, file in local.folder_files : idx => file }

  bucket = aws_s3_bucket.this.bucket
  key    = "/Directories/${each.value}"
  source = "${path.module}/Directories/${each.value}"
  etag   = filemd5("${path.module}/Directories/${each.value}")
}
This loops over the local var, so in your S3 bucket you will have the local Directory with its sub-directories and files uploaded in the same structure:
Directory
- Folder1
  - test_file_1.txt
  - test_file_2.txt
- Folder2
  - test_file_3.txt