How to reduce the execution time of uploading many files to Dropbox in Python? - dropbox

I have 100 files (.csv) inside a folder and I want to upload these files to Dropbox. I have successfully done this with the following code, but the problem is that the execution takes a long time. So my question is: how can I reduce the execution time of uploading these files to Dropbox in Python?
With my thanks and appreciation
import os
import dropbox

dbx = dropbox.Dropbox('Access token')
filename = os.listdir('path')
for fn in filename:
    with open(fn, 'rb') as f:
        # read the whole file and push it to Dropbox in a single call
        dbx.files_upload(f.read(), '/sendTOcloud/' + fn + '.csv', mute=True)
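One common way to cut the total time is to run the uploads concurrently, since each files_upload call spends most of its time waiting on the network. Below is a minimal sketch with the standard-library ThreadPoolExecutor, assuming the Dropbox client can be shared across threads (the token, folder path, and worker count are placeholders; creating one client per thread is a safe alternative):

import os
import dropbox
from concurrent.futures import ThreadPoolExecutor

dbx = dropbox.Dropbox('Access token')   # placeholder token
folder = 'path'                         # placeholder local folder

def upload_one(fn):
    # Read the local file and upload it to Dropbox in one call.
    with open(os.path.join(folder, fn), 'rb') as f:
        dbx.files_upload(f.read(), '/sendTOcloud/' + fn, mute=True)

with ThreadPoolExecutor(max_workers=8) as pool:   # worker count is a guess
    pool.map(upload_one, os.listdir(folder))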

Related

Micropython/Circuitpython --> Writing directly to an FTP server?

Is it possible to write to a file directly on an FTP server without first writing the file locally? In other words: writing to a remote file from local memory.
Board: ESP32-CAM
IDE: Thonny
Language: MicroPython (Lemariva's firmware)
from ftplib import FTP
import camera
ftp = FTP('192.168.1.65', '2121')
ftp.login('user', '12345')
ftp.cwd("/Cam/")
filename = "test.jpeg"
camera.init(0, format=camera.JPEG, fb_location=camera.PSRAM)
camera.quality(10)
camera.framesize(camera.FRAME_240X240)
pic = camera.capture()
fh = open(filename, 'rwb')
ftp.storbinary('STOR '+filename, fh)
fh.close()
I'm assuming that I have to convert the camera.capture() object into a Byte array? But how do I do so without first capturing the image and thus writing to disk?
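If camera.capture() returns the JPEG frame as a bytes object, you can wrap it in an in-memory buffer and hand that to storbinary, so nothing is ever written to flash. A minimal sketch, assuming the firmware's io module provides BytesIO and its ftplib port accepts file-like objects the way CPython's does:

import io
from ftplib import FTP
import camera

camera.init(0, format=camera.JPEG, fb_location=camera.PSRAM)
camera.quality(10)
camera.framesize(camera.FRAME_240X240)

pic = camera.capture()        # assumed to be a bytes object holding the JPEG
buf = io.BytesIO(pic)         # file-like object backed by RAM, not flash

ftp = FTP('192.168.1.65', '2121')   # connection details as in the question
ftp.login('user', '12345')
ftp.cwd("/Cam/")
ftp.storbinary('STOR test.jpeg', buf)
ftp.quit()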

In Google Colab I get IOPub data rate exceeded

IOPub data rate exceeded.
The notebook server will temporarily stop sending output
to the client in order to avoid crashing it.
To change this limit, set the config variable
--NotebookApp.iopub_data_rate_limit.
Current values:
NotebookApp.iopub_data_rate_limit=1000000.0 (bytes/sec)
NotebookApp.rate_limit_window=3.0 (secs)
An IOPub error usually occurs when you try to print a large amount of data to the console. Check your print statements: if you're trying to print a file that exceeds 10 MB, it's likely that this caused the error. Try to read smaller portions of the file/data.
I faced this issue while reading a file from Google Drive to Colab.
I used this link https://colab.research.google.com/notebook#fileId=/v2/external/notebooks/io.ipynb and the problem was in this block of code:
# Download the file we just uploaded.
#
# Replace the assignment below with your file ID
# to download a different file.
#
# A file ID looks like: 1uBtlaggVyWshwcyP6kEI-y_W3P8D26sz
file_id = 'target_file_id'

import io
from googleapiclient.http import MediaIoBaseDownload

request = drive_service.files().get_media(fileId=file_id)
downloaded = io.BytesIO()
downloader = MediaIoBaseDownload(downloaded, request)
done = False
while done is False:
    # _ is a placeholder for a progress object that we ignore.
    # (Our file is small, so we skip reporting progress.)
    _, done = downloader.next_chunk()

downloaded.seek(0)

# Remove this print statement
# print('Downloaded file contents are: {}'.format(downloaded.read()))
I had to remove the last print statement since it exceeded the 10MB limit in the notebook - print('Downloaded file contents are: {}'.format(downloaded.read()))
Your file will still be downloaded and you can read it in smaller chunks or read a portion of the file.
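If you still need to look at the contents, read and process the buffer in small pieces instead of printing it all at once, so only a short summary goes to the console. A rough sketch (the chunk size is arbitrary):

downloaded.seek(0)
chunk_size = 64 * 1024                 # arbitrary: 64 KB at a time
while True:
    chunk = downloaded.read(chunk_size)
    if not chunk:
        break
    # process the chunk here instead of printing it
print('downloaded %d bytes in total' % downloaded.tell())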
The above answer is correct; I just commented out the print statement and the error went away. Just keeping this here so someone might find it useful. Suppose you are reading a CSV file from Google Drive: just import pandas and add pd.read_csv(downloaded) and it will work just fine.
file_id = 'FILEID'

import io
import pandas as pd
from googleapiclient.http import MediaIoBaseDownload

request = drive_service.files().get_media(fileId=file_id)
downloaded = io.BytesIO()
downloader = MediaIoBaseDownload(downloaded, request)
done = False
while done is False:
    # _ is a placeholder for a progress object that we ignore.
    # (Our file is small, so we skip reporting progress.)
    _, done = downloader.next_chunk()

downloaded.seek(0)
df = pd.read_csv(downloaded)
Maybe this will help, via sv1997: IOPub Error on Google Colaboratory in Jupyter Notebook.
The IOPub error occurs in Colab because you are trying to display very large output on the console itself (e.g. with print() statements).
The error is most likely related to a print call.
So delete or comment out the print statement; that should resolve the error.
%cd darknet
!sed -i 's/OPENCV=0/OPENCV=1/' Makefile
!sed -i 's/GPU=0/GPU=1/' Makefile
!sed -i 's/CUDNN=0/CUDNN=1/' Makefile
!sed -i 's/CUDNN_HALF=0/CUDNN_HALF=1/' Makefile
!apt update
!apt-get install libopencv-dev
It's important to update your Makefile, and also keep your input file name correct.

PySpark - Spark clusters EC2 - unable to save to S3

I have set up a Spark cluster with a master and 2 slaves (I'm using Spark Standalone). The cluster works well with some of the examples, but not with my application. My application workflow is: read the CSV -> extract each line of the CSV along with the header -> convert to JSON -> save to S3. Here is my code:
def upload_func(row):
    f = row.toJSON()
    f.saveAsTextFile("s3n://spark_data/" + row.name + ".json")
    print(f)
    print(row.name)

if __name__ == "__main__":
    spark = SparkSession \
        .builder \
        .appName("Python Spark SQL data source example") \
        .getOrCreate()
    df = spark.read.csv("sample.csv", header=True, mode="DROPMALFORMED")
    df.rdd.map(upload_func)
I have also exported AWS_Key_ID and AWS_Secret_Key into the EC2 environment. However, with the above code my application does not work. Below are the issues:
The JSON files are not saved to S3. I have tried running the application a few times and reloading the S3 page, but there is no data. The application completes without any error in the log. Also, print(f) and print(row.name) are not printed in the log. What do I need to fix to get the JSON saved to S3, and is there any way for me to print to the log for debugging?
Currently I need to put the CSV file on the worker node so the application can read it. How can I put the file somewhere else, say on the master node, so that when the application runs it splits the CSV file across all the worker nodes and they do the upload in parallel as a distributed system?
Help is really appreciated. Thanks for your help in advance.
UPDATED
After adding a logger to debug, I identified that the map function upload_func() is not being called, or the application cannot get inside this function (the logger printed messages before and after the function call). Please help if you know the reason why.
You need to force the map to be evaluated; Spark only executes work on demand.
df.rdd.map(upload_func).count() should do it.
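Since upload_func is called only for its side effect, an action such as foreach is a slightly more idiomatic way to force execution, and it avoids collecting the mapped results. A small sketch; note that print output from inside the function shows up in the executor logs, not on the driver console:

# An action triggers execution; map() alone only builds the lineage.
df.rdd.foreach(upload_func)

# Equivalent effect, as suggested above:
# df.rdd.map(upload_func).count()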

Unzip a file to s3

I am looking for a simple way to extract a zip/gzip file that is present in an S3 bucket to the same bucket location and delete the parent zip/gzip file after extraction.
I have been unable to achieve this with any of the APIs so far.
I have tried native boto, pyfilesystem (fs), and s3fs.
The source and destination links seem to be an issue for these functions.
(Using Python 2.x/3.x and Boto 2.x)
I see there is an API for node.js (unzip-to-s3) to do this job, but none for Python.
A couple of implementations I can think of:
A simple API to extract the zip file within the same bucket.
Use s3 as a filesystem and manipulate data
Use a data pipeline to achieve this
Transfer the zip to EC2, extract, and copy back to S3.
Option 4 would be the least preferred, to minimise the architecture overhead of the EC2 add-on.
I need support in getting this feature implemented, with integration into Lambda at a later stage. Any pointers to these implementations are greatly appreciated.
Thanks in Advance,
Sundar.
You could try https://www.cloudzipinc.com/, which unzips/expands several different formats of archive from S3 into a destination in your bucket. I used it to unzip components of a digital catalog into S3.
I solved this by using an EC2 instance: copy the S3 files to a local directory on EC2, extract them there, and copy that directory back to the S3 bucket.
Sample code to unzip to a local directory on the EC2 instance:
import os
import tarfile
import zipfile

# s3Conn and logger_s3 are assumed to be defined elsewhere in the module
# (a boto S3 connection and a configured logger).

def s3Unzip(srcBucket, dst_dir):
    '''
    function to decompress the s3 bucket contents to the local machine
    Args:
        srcBucket (string): source bucket name
        dst_dir (string): destination location in the local/ec2 file system
    Returns:
        None
    '''
    s3 = s3Conn
    path = ''
    bucket = s3.lookup(srcBucket)
    for key in bucket:
        # download the object to the local destination directory
        path = os.path.join(dst_dir, key.name)
        key.get_contents_to_filename(path)
        # pick the right opener based on the archive extension
        if path.endswith('.zip'):
            opener, mode = zipfile.ZipFile, 'r'
        elif path.endswith('.tar.gz') or path.endswith('.tgz'):
            opener, mode = tarfile.open, 'r:gz'
        elif path.endswith('.tar.bz2') or path.endswith('.tbz'):
            opener, mode = tarfile.open, 'r:bz2'
        else:
            raise ValueError('unsupported format')
        try:
            os.mkdir(dst_dir)
            print("local directories created")
        except Exception:
            logger_s3.warning("Exception in creating local directories to extract zip file / folder already existing")
        cwd = os.getcwd()
        os.chdir(dst_dir)
        try:
            file = opener(path, mode)
            try:
                file.extractall()
            finally:
                file.close()
            logger_s3.info('(%s) extracted successfully to %s' % (key, dst_dir))
        except Exception:
            logger_s3.error('failed to extract (%s) to %s' % (key, dst_dir))
        os.chdir(cwd)
    s3.close()
Sample code to upload to a MySQL instance.
Use the "LOAD DATA LOCAL INFILE" query to load the CSV data into MySQL directly:
# connect() and logger_rds are assumed to be defined elsewhere
# (a MySQL connection factory and a configured logger).

def upload(file_path, timeformat):
    '''
    function to upload csv file data to mysql rds
    Args:
        file_path (list): local paths of the csv files to load
        timeformat (string): str_to_date format of the datetime column
    Returns:
        None
    '''
    for file in file_path:
        try:
            con = connect()
            cursor = con.cursor()
            qry = """LOAD DATA LOCAL INFILE '%s' INTO TABLE xxxx
                     FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'
                     (col1, col2, col3, @datetime, col4)
                     SET datetime = str_to_date(@datetime, '%s');""" % (file, timeformat)
            cursor.execute(qry)
            con.commit()
            logger_rds.info("Loading file: " + file)
        except Exception:
            logger_rds.error("Exception in uploading " + file)
            # Rollback in case there is any error
            con.rollback()
        cursor.close()
        # disconnect from server
        con.close()
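For context, a hypothetical call sequence tying the two helpers together; the bucket name, extraction directory, and datetime format below are placeholders:

import glob

# download the archives from the bucket and extract them on the EC2 instance
s3Unzip('my-source-bucket', '/tmp/extracted')

# load every extracted CSV into MySQL; the format string feeds str_to_date()
upload(glob.glob('/tmp/extracted/*.csv'), '%Y-%m-%d %H:%i:%s')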
Lambda function:
You can use a Lambda function that reads the zipped files into a buffer, gzips the individual files, and re-uploads them to S3. Then you can either archive the original files or delete them using boto.
You can also set an event-based trigger that runs the Lambda automatically every time a new zipped file lands in S3. Here's a full tutorial for exactly this: https://betterprogramming.pub/unzip-and-gzip-incoming-s3-files-with-aws-lambda-f7bccf0099c9
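As a rough illustration of that approach (not the tutorial's exact code), the sketch below unzips an archive from S3 entirely in memory with boto3 and uploads the extracted members back to the same bucket; the bucket name, key, and target prefix are placeholders, and the gzip step from the tutorial is omitted for brevity:

import io
import zipfile
import boto3

s3 = boto3.client('s3')

def unzip_to_s3(bucket, zip_key, target_prefix):
    # Read the whole zip into memory (fine for small archives; large ones
    # would need streaming or more Lambda memory).
    body = s3.get_object(Bucket=bucket, Key=zip_key)['Body'].read()
    with zipfile.ZipFile(io.BytesIO(body)) as archive:
        for name in archive.namelist():
            # Upload each extracted member under the target prefix.
            s3.put_object(Bucket=bucket,
                          Key=target_prefix + name,
                          Body=archive.read(name))
    # Remove the parent zip after extraction, as the question asks.
    s3.delete_object(Bucket=bucket, Key=zip_key)

# hypothetical invocation
unzip_to_s3('my-bucket', 'incoming/archive.zip', 'extracted/')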

Jasper Reports taking long time in Server, But not in local

I am trying to generate PDF files with DynamicReports, which uses JasperReports in the background. I tried generating 50 PDF files on my local machine. It worked fine; it took around 35 seconds to generate the files and upload them to my AWS S3 client.
After uploading the JAR to the server, the server takes 50 seconds per PDF file, so in total it takes 2500 seconds for 50 files to be generated and uploaded to S3. Each file is around 3 or 4 KB.
Updated:
I tried to debug more on the server. As far as I can tell, the toPdf function takes most of the time. I am not sure whether this is because of toPdf itself or because of storing the file stream on the server.
Code Snippet:
try {
    String directory = "/home/ubuntu/generatedFiles";
    for (Invoice eachInvoice : invoiceList) {
        JasperReportBuilder report = design.build(eachInvoice);
        String fileName = "invoice_" + eachInvoice.getId() + ".pdf";
        String filePath = directory + "/" + fileName;
        FileOutputStream stream = new FileOutputStream(filePath);
        report.toPdf(stream);
        client.uploadDocument(filePath, fileName);
    }
    // Delete all the files in the generatedFiles folder
    File file = new File(directory);
    String[] invoiceFiles;
    if (file.isDirectory()) {
        // Delete the files from machine
    }
}
catch (DRException e) {
    e.printStackTrace();
}
If anyone can help me through this, it would be awesome!