Can't read web2py uploaded .txt from the shell - file-upload

I have a simple table:
db.define_table('myfiles',
    Field('title', 'string'),
    Field('myfile', 'upload'))
Then I run my app from the shell:
python web2py.py -S myapp -M
and set my file_path:
file_path = os.path.join(request.folder,'upload',db.myfiles[1].myfile)
but when I try to read my uploaded file, I get "File not open for reading":
with open(file_path, 'wb') as f: data = f.readlines()
I even tried the same process after copying the file into the private folder, but I still get the same error.

First, the default folder for uploaded files is "uploads", not "upload":
file_path = os.path.join(request.folder, 'uploads', db.myfiles[1].myfile)
Second, you should open the file for reading rather than writing:
with open(file_path, 'rb') as f:
    data = f.readlines()
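Putting both fixes together (a minimal sketch, assuming the record with id 1 exists and the app uses the default uploads folder):
import os
row = db.myfiles[1]  # assumes this record exists
file_path = os.path.join(request.folder, 'uploads', row.myfile)
with open(file_path, 'rb') as f:
    data = f.readlines()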

Related

Save Pandas or Pyspark dataframe from Databricks to Azure Blob Storage

Is there a way I can save a PySpark or Pandas dataframe from Databricks to blob storage without mounting it or installing libraries?
I was able to achieve this after mounting the storage container into Databricks and using the library com.crealytics.spark.excel, but I was wondering if I can do the same without the library and without mounting, because I will be working on clusters that don't have these two permissions.
Here is the code for saving the dataframe locally to DBFS.
# export
from os import path
folder = "export"
name = "export"
file_path_name_on_dbfs = path.join("/tmp", folder, name)
# Writing to DBFS
# .coalesce(1) is used to generate only 1 file; if the dataframe is too big this won't work,
# so you'll have multiple part files and will need to copy them later one by one
sampleDF \
    .coalesce(1) \
    .write \
    .mode("overwrite") \
    .option("header", "true") \
    .option("delimiter", ";") \
    .option("encoding", "UTF-8") \
    .csv(file_path_name_on_dbfs)
# path of destination, which will be sent to az storage
dest = file_path_name_on_dbfs + ".csv"
# Renaming part-000...csv to our file name
target_file = list(filter(lambda file: file.name.startswith("part-00000"), dbutils.fs.ls(file_path_name_on_dbfs)))
if len(target_file) > 0:
    dbutils.fs.mv(target_file[0].path, dest)
    dbutils.fs.cp(dest, f"file://{dest}")  # this line is added for community edition only cause /dbfs is not recognized, so we copy the file locally
    dbutils.fs.rm(file_path_name_on_dbfs, True)
The code that will send the file to Azure storage:
import requests
sas = "YOUR_SAS_TOKEN_PREVIOUSLY_CREATED"  # follow the link below to create a SAS token (using SAS is slightly more secure than the raw storage key)
blob_account_name = "YOUR_BLOB_ACCOUNT_NAME"
container = "YOUR_CONTAINER_NAME"
destination_path_w_name = "export/export.csv"
url = f"https://{blob_account_name}.blob.core.windows.net/{container}/{destination_path_w_name}?{sas}"
# here we read the content of our previously exported df -> csv
# if you are not on community edition you might want to use /dbfs + dest
payload = open(dest).read()
headers = {
    'x-ms-blob-type': 'BlockBlob',
    'Content-Type': 'text/csv'  # you can change the content type according to your needs
}
response = requests.request("PUT", url, headers=headers, data=payload)
# if response.status_code is 201, your file was created successfully
print(response.status_code)
Follow this link to set up a SAS token.
Remember that anyone who has the SAS token can access your storage, depending on the permissions you set while creating it.
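If the azure-storage-blob package happens to be available wherever you generate the token (it is not needed on the restricted cluster itself), the SAS token can also be created programmatically. A rough sketch, not part of the original answer, with every account value being a placeholder:
from datetime import datetime, timedelta
from azure.storage.blob import generate_container_sas, ContainerSasPermissions
# all values below are placeholders for your own account details
sas = generate_container_sas(
    account_name="YOUR_BLOB_ACCOUNT_NAME",
    container_name="YOUR_CONTAINER_NAME",
    account_key="YOUR_STORAGE_ACCOUNT_KEY",
    permission=ContainerSasPermissions(write=True, create=True),
    expiry=datetime.utcnow() + timedelta(hours=1),  # keep the lifetime short
)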
Code for Excel export version (using com.crealytics:spark-excel_2.12:0.14.0)
Saving the dataframe:
data = [
    ('a', 25, 'ast'),
    ('b', 15, 'phone'),
    ('c', 32, 'dlp'),
    ('d', 45, 'rare'),
    ('e', 60, 'phq')
]
columns = ["column1", "column2", "column3"]
sampleDF = spark.createDataFrame(data=data, schema=columns)
sampleDF.show()
# export
from os import path
folder = "export"
name = "export"
file_path_name_on_dbfs = path.join("/tmp", folder, name)
# Writing to DBFS
sampleDF.write.format("com.crealytics.spark.excel") \
    .option("header", "true") \
    .mode("overwrite") \
    .save(file_path_name_on_dbfs + ".xlsx")
# excel
dest = file_path_name_on_dbfs + ".xlsx"
dbutils.fs.cp(dest, f"file://{dest}")  # this line is added for community edition only cause /dbfs is not recognized, so we copy the file locally
Uploading the file to Azure storage:
import requests
sas="YOUR_SAS_TOKEN_PREVIOUSLY_CREATED" # follow the link below to create SAS token (using sas is slightly more secure than raw key storage)
blob_account_name = "YOUR_BLOB_ACCOUNT_NAME"
container = "YOUR_CONTAINER_NAME"
destination_path_w_name = "export/export.xlsx"
# destination_path_w_name = "export/export.csv"
url = f"https://{blob_account_name}.blob.core.windows.net/{container}/{destination_path_w_name}?{sas}"
# here we read the content of our previously exported df -> csv
# if you are not on community edition you might want to use /dbfs + dest
# payload=open(dest).read()
payload = open(dest, 'rb').read()
headers = {
    'x-ms-blob-type': 'BlockBlob',
    # 'Content-Type': 'text/csv'
    'Content-Type': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
}
response = requests.request("PUT", url, headers=headers, data=payload)
# if response.status_code is 201 it means your file was created successfully
print(response.status_code)

Reading an XLSX file in Azure File Storage

I need to read an Excel file that is located in a folder in Azure File Storage. This is not in blob storage.
I cannot download the file to a local drive, since there is none.
I cannot seem to get started on how to access the file or read it in place.
Can someone help to get me started?
thanks
This is the basic code that works!
ShareClient share = new ShareClient(connectionString, shareName);
ShareDirectoryClient directory = share.GetDirectoryClient(dirName);
ShareFileClient file = directory.GetFileClient(filename);
OpenSettings openSettings = new OpenSettings();
using (Stream stream = file.OpenRead())
{
    using (SpreadsheetDocument document = SpreadsheetDocument.Open(stream, false, openSettings))
    {
        readRow();
    }
}
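If you need to do the same from Python rather than C#, a rough equivalent (not from the original answer) would stream the file into memory with the azure-storage-file-share package and hand it to pandas/openpyxl; the connection details below are placeholders:
import io
import pandas as pd
from azure.storage.fileshare import ShareFileClient
# placeholder values; substitute your own share details
file_client = ShareFileClient.from_connection_string(
    conn_str="YOUR_CONNECTION_STRING",
    share_name="YOUR_SHARE_NAME",
    file_path="some-folder/workbook.xlsx",
)
data = file_client.download_file().readall()  # read in place, no local drive needed
df = pd.read_excel(io.BytesIO(data))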

SSH - How to delete all .htaccess files inside the wp-content folders?

I have thousands of .htaccess files added by malware. I need to log in via SSH and delete them, but only when they are inside a wp-content folder; outside that folder they need to stay.
What's the cmd please?
Note: this is for Python.
Try this:
import os
file_type = input("Enter file type: ")      # e.g. ".htaccess"
folder_path = input("Enter folder path: ")  # e.g. the wp-content directory
# collect the path of every file under folder_path
filelist = []
for root, dirs, files in os.walk(folder_path):
    for file in files:
        filelist.append(os.path.join(root, file))
# delete every file whose name ends with the requested type
deleted = False
for name in filelist:
    if name.endswith(file_type):
        os.remove(name)
        print("File deleted successfully:", name)
        deleted = True
if not deleted:
    print("No files found.")

A few errors when attempting to write a script that uploads data from an EC2 instance to an S3 bucket using boto

I have been trying to back up my EC2 instance to an S3 bucket but have continuously come across a few errors when I run the file, the most notable being S3ResponseError: 403 Forbidden.
FYI, I am using my AWS access key ID and secret access key from Rossetahub (provided by the school).
Below is the code I have written:
import boto
import boto.s3
import os.path
import sys

AWS_ACCESS_KEY_ID = ''
AWS_ACCESS_KEY_SECRET = ''
bucket_name = 'bucketpoly'
sourceDir = 'example_files/'
destDir = 'example_files1/'
conn = boto.connect_s3(AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY_SECRET)
bucket = conn.get_bucket(bucket_name)
uploadFileNames = []
for (sourceDir, dirname, filename) in os.walk(sourceDir):
    uploadFileNames.extend(filename)
    break

def percent_cb(complete, total):
    sys.stdout.write('.')
    sys.stdout.flush()

for filename in uploadFileNames:
    sourcepath = os.path.join(sourceDir, filename)
    destpath = os.path.join(destDir, filename)
    print('Uploading %s to Amazon S3 bucket %s' % (sourcepath, bucket_name))
    print("singlepart upload")
    k = boto.s3.key.Key(bucket)
    k.key = destpath
    k.set_contents_from_filename(sourcepath, cb=percent_cb, num_cb=10)
This is the resulting error:
Traceback (most recent call last):
  File "/home/student/Desktop/PROJECT FILES/testing2.py", line 11, in <module>
    bucket = conn.get_bucket(bucket_name)
  File "/usr/local/lib/python3.5/dist-packages/boto/s3/connection.py", line 509, in get_bucket
    return self.head_bucket(bucket_name, headers=headers)
  File "/usr/local/lib/python3.5/dist-packages/boto/s3/connection.py", line 542, in head_bucket
    raise err
boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
What do you think might be the problem?
The 403 Forbidden is caused by insufficient IAM permissions. From the error, the IAM user doesn't have the ListObject and ListBucket permissions.
This is a link you can use to get started with S3 permissions:
https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html
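If you cannot change the IAM policy yourself (for example, with school-provided credentials), one possible workaround is to skip the bucket-existence check that get_bucket() performs, since that HEAD request is what needs the ListBucket permission; the upload itself then only needs PutObject. A minimal sketch against the script above:
# validate=False skips the HEAD request issued by get_bucket(), which is the
# call that requires s3:ListBucket; the upload still requires s3:PutObject.
bucket = conn.get_bucket(bucket_name, validate=False)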

How to remove the file after download in a specified path

filepath = self.class.instance_variable_get(:@filename)
# puts "#{:@filename}"
qget = params['clientquery']
if !qget.nil? then
  begin
    systemCmd = "bash /home/abc/t.sh \"#{qget}\" \"#{filepath}\""
    puts systemCmd
    output = system("#{systemCmd} 2>&1")
    data = File.read(filepath)
    send_data data, filename: File.basename(filepath),
                    type: 'application/csv',
                    disposition: 'attachment'
  ensure
    # delfile = File.basename("/tmp/download.csv")
    FileUtils.remove_entry_secure File.basename("/tmp/download.csv")
    # File.delete(delfile)
    # redirect_to '/report'
  end
end
Using FileUtils.remove_entry_secure File.basename("/tmp/download.csv") I try to remove the file after downloading, but it is not working.
If I comment out the line FileUtils.remove_entry_secure File.basename("/tmp/download.csv"),
the file is downloaded, but I want to remove that file after it has been downloaded.
I think it is a permission problem. Could you please verify the permissions for the /tmp folder?
The FileUtils.remove_entry_secure method checks the permissions, user, and group before it removes the entry.
Please refer to the documentation: click here