MicroPython/CircuitPython --> Writing directly to an FTP server? - camera

Is it possible to write to a file directly on an FTP server without first writing the file locally? In other words: writing to a remote file from local memory.
Board: ESP32-CAM
IDE: Thonny
Language: MicroPython (Lemariva's firmware)
from ftplib import FTP
import camera
ftp = FTP('192.168.1.65', '2121')
ftp.login('user', '12345')
ftp.cwd("/Cam/")
filename = "test.jpeg"
camera.init(0, format=camera.JPEG, fb_location=camera.PSRAM)
camera.quality(10)
camera.framesize(camera.FRAME_240X240)
pic = camera.capture()
fh = open(filename, 'rwb')
ftp.storbinary('STOR '+filename, fh)
fh.close()
I'm assuming that I have to convert the camera.capture() object into a byte array? But how do I do that without first saving the image and thus writing to disk?
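A minimal sketch of the in-memory approach (untested), reusing ftp and filename from the snippet above and assuming camera.capture() returns a bytes object and that this ftplib port exposes storbinary() like CPython's:

import io

# wrap the captured bytes in an in-memory stream so nothing is written to flash
pic = camera.capture()
buf = io.BytesIO(pic)
ftp.storbinary('STOR ' + filename, buf)
buf.close()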

Save the output as a csv file from VS Code programmatically

My requirement: save the SQL query output (the returned data) to a local drive in CSV file format.
My OS: Windows 10 64-bit
VS Code: 1.67.1
I have installed the following extensions to connect to the Snowflake data warehouse:
SQLTools
Snowflake driver for SQLTools
I have successfully connected to my Snowflake (cloud) data warehouse and can see the query results in VS Code.
What I want is to save (export) that output to a local file (for example D:\result\result.csv).
How can I achieve that?
Image attached for your reference.
Thank you all.
pmk
If you can run a bit of Python, this does pretty much what you need:
import snowflake.connector
import csv

conn = snowflake.connector.connect(
    user='',
    password='',
    account='',
    warehouse='',
    database='',
    schema='',
    role=''
)

results = conn.cursor().execute("""MY QUERY""").fetchall()

csvfile = open(r"PATH TO FILE", 'a', newline='')
output = csv.writer(csvfile)
output.writerows(results)
csvfile.close()
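If the optional pandas extra of snowflake-connector-python happens to be installed, a shorter variation (just a sketch along the same lines, not part of the snippet above) is to fetch the result set as a DataFrame and let it write the CSV:

import snowflake.connector

conn = snowflake.connector.connect(user='', password='', account='')
cur = conn.cursor()
cur.execute("""MY QUERY""")
df = cur.fetch_pandas_all()   # needs snowflake-connector-python[pandas]
df.to_csv(r"D:\result\result.csv", index=False)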

Using pandas to open Excel files stored in GCS from command line

The following code snippet is from a Google tutorial; it simply prints the names of the files in a given GCP bucket:
from google.cloud import storage

def list_blobs(bucket_name):
    """Lists all the blobs in the bucket."""
    # bucket_name = "your-bucket-name"
    storage_client = storage.Client()
    # Note: Client.list_blobs requires at least package version 1.17.0.
    blobs = storage_client.list_blobs(bucket_name)
    for blob in blobs:
        print(blob.name)

list_blobs('sn_project_data')
Now from the command line I can run:
$ python path/file.py
And in my terminal the files in said bucket are printed out. Great, it works!
However, this isn't quite my goal. I'm looking to open a file and act upon it. For example:
df = pd.read_excel(filename)
print(df.iloc[0])
However, when I pass the path to the above, the error returned reads "invalid file path." So I'm sure there is some sort of GCP specific function call to actually access these files...
What command(s) should I run?
Edit: This video https://www.youtube.com/watch?v=ED5vHa3fE1Q shows a trick for opening files that uses StringIO in the process. But that doesn't support Excel files, so it's not an effective solution.
read_excel() does not support a Google Cloud Storage file path as of now, but it can read data as bytes.
pandas.read_excel(io, sheet_name=0, header=0, names=None,
index_col=None, usecols=None, squeeze=False, dtype=None, engine=None,
converters=None, true_values=None, false_values=None, skiprows=None,
nrows=None, na_values=None, keep_default_na=True, na_filter=True,
verbose=False, parse_dates=False, date_parser=None, thousands=None,
comment=None, skipfooter=0, convert_float=True, mangle_dupe_cols=True,
storage_options=None)
Parameters: io : str, bytes, ExcelFile, xlrd.Book, path object, or
file-like object
What you can do is use the blob object and call download_as_bytes() to get the object's contents as bytes.
Download the contents of this blob as a bytes object.
For this example I just used a random sample xlsx file and read the 1st sheet:
from google.cloud import storage
import pandas as pd
bucket_name = "your-bucket-name"
blob_name = "SampleData.xlsx"
storage_client = storage.Client()
bucket = storage_client.bucket(bucket_name)
blob = bucket.blob(blob_name)
data_bytes = blob.download_as_bytes()
df = pd.read_excel(data_bytes)
print(df)
Test done:

Unzip a file to s3

I am looking for a simple way to extract a zip/gzip present in an S3 bucket to the same bucket location and delete the parent zip/gzip file after extraction.
I am unable to achieve this with any of the APIs currently.
I have tried native boto, pyfilesystem (fs), and s3fs.
The source and destination links seem to be an issue for these functions.
(Using Python 2.x/3.x & Boto 2.x)
I see there is an API for Node.js (unzip-to-s3) to do this job, but none for Python.
A couple of implementations I can think of:
A simple API to extract the zip file within the same bucket.
Use s3 as a filesystem and manipulate data
Use a data pipeline to achieve this
Transfer the zip to ec2 , extract and copy back to s3.
Option 4 would be the least preferred, to minimise the architectural overhead of an EC2 add-on.
I need help getting this implemented, with integration into Lambda at a later stage. Any pointers to these implementations are greatly appreciated.
Thanks in Advance,
Sundar.
You could try https://www.cloudzipinc.com/, which unzips/expands several different archive formats from S3 into a destination in your bucket. I used it to unzip components of a digital catalog into S3.
I have solved this by using an EC2 instance:
copy the S3 files to a local directory on EC2, extract them, and copy that directory back to the S3 bucket.
Sample code to unzip to a local directory on the EC2 instance:
import os
import zipfile
import tarfile

def s3Unzip(srcBucket, dst_dir):
    '''
    function to decompress the s3 bucket contents to the local machine
    Args:
        srcBucket (string): source bucket name
        dst_dir (string): destination location in the local/EC2 file system
    Returns:
        None
    '''
    s3 = s3Conn                  # existing boto S3 connection, created elsewhere
    path = ''
    bucket = s3.lookup(srcBucket)
    # make sure the local destination directory exists before downloading
    try:
        os.mkdir(dst_dir)
        print("local directories created")
    except Exception:
        logger_s3.warning("Exception in creating local directories to extract zip file / folder already exists")
    for key in bucket:
        path = os.path.join(dst_dir, key.name)
        key.get_contents_to_filename(path)
        if path.endswith('.zip'):
            opener, mode = zipfile.ZipFile, 'r'
        elif path.endswith('.tar.gz') or path.endswith('.tgz'):
            opener, mode = tarfile.open, 'r:gz'
        elif path.endswith('.tar.bz2') or path.endswith('.tbz'):
            opener, mode = tarfile.open, 'r:bz2'
        else:
            raise ValueError('unsupported format')
        cwd = os.getcwd()
        os.chdir(dst_dir)
        try:
            file = opener(path, mode)
            try:
                file.extractall()
            finally:
                file.close()
            logger_s3.info('(%s) extracted successfully to %s' % (key, dst_dir))
        except Exception as e:
            logger_s3.error('failed to extract (%s) to %s' % (key, dst_dir))
        os.chdir(cwd)
    s3.close()
Sample code to upload to a MySQL instance.
Use the "LOAD DATA LOCAL INFILE" query to upload to MySQL directly:
def upload(file_path, timeformat):
    '''
    function to upload csv file data to mysql rds
    Args:
        file_path (string): list of local file paths
        timeformat (string): datetime format used by str_to_date
    Returns:
        None
    '''
    for file in file_path:
        try:
            con = connect()
            cursor = con.cursor()
            qry = """LOAD DATA LOCAL INFILE '%s' INTO TABLE xxxx FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n' (col1, col2, col3, @datetime, col4) set datetime = str_to_date(@datetime, '%s');""" % (file, timeformat)
            cursor.execute(qry)
            con.commit()
            logger_rds.info("Loading file: " + file)
        except Exception:
            logger_rds.error("Exception in uploading " + file)
            # rollback in case there is any error
            con.rollback()
        cursor.close()
        # disconnect from server
        con.close()
Lambda function:
You can use a Lambda function where you read the zipped files into a buffer, gzip the individual files, and re-upload them to S3. Then you can either archive the original files or delete them using boto.
You can also set an event-based trigger that runs the Lambda automatically every time there is a new zipped file in S3. Here's a full tutorial for exactly this: https://betterprogramming.pub/unzip-and-gzip-incoming-s3-files-with-aws-lambda-f7bccf0099c9
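For reference, a rough sketch of that approach with boto3 (my own illustration, not the tutorial's code; the bucket and key names are placeholders):

import io
import gzip
import zipfile
import boto3

s3 = boto3.client('s3')

def unzip_and_gzip(bucket, key):
    """Read a zip object from S3 into memory, gzip each member, and re-upload."""
    zipped = s3.get_object(Bucket=bucket, Key=key)['Body'].read()
    with zipfile.ZipFile(io.BytesIO(zipped)) as zf:
        for name in zf.namelist():
            out = io.BytesIO()
            with gzip.GzipFile(fileobj=out, mode='wb') as gz:
                gz.write(zf.read(name))
            out.seek(0)
            s3.upload_fileobj(out, bucket, name + '.gz')
    # optionally remove the original archive afterwards
    # s3.delete_object(Bucket=bucket, Key=key)

In an actual Lambda handler the bucket and key would come from the S3 event record that triggered the invocation.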

displaying all cmd.exe text to textbox

I've searched high and low for a way to display all the text from FTP.exe in a RichTextBox. So far all I've been able to do is display the output shown below. The idea is to run the test, display the output, and capture it to a file, which hasn't been a problem, except I can't seem to display all the text as you would see it in a command prompt. Hoping to see all the text when done. Please help!!
Here is the code:
Private Sub Rectangle1_Click(sender As Object, e As EventArgs) Handles Rectangle1.Click
    Dim p As New Process()
    With p
        .StartInfo.Arguments = " -s:c:\dsl\ftptest\speed1.txt 65.40.220.20"
        .StartInfo.CreateNoWindow = True
        .StartInfo.FileName = "ftp"
        .StartInfo.RedirectStandardError = True
        .StartInfo.RedirectStandardOutput = True
        .StartInfo.UseShellExecute = False
        .Start()
        Dim StErr As StreamReader = .StandardError
        Dim StOut As StreamReader = .StandardOutput
        While (Not StOut.EndOfStream)
            Me.RichTextBox1.AppendText(String.Format("{0}", StOut.ReadLine() & vbCrLf))
        End While
        .WaitForExit()
    End With
End Sub
End Class
Here is the output from the code:
User (65.40.220.20:(none)): Hash mark printing On ftp: (2048 bytes/hash mark) .
hash
get test.1meg
#
cd upload
put test.1meg
#
close
bye
Here is What I'm looking for:
C:\DSL\FTPTEST>call FTP -s:c:\dsl\FTPtest\speed1.txt 65.40.220.20
Connected to 65.40.220.20.
220-
This server is provided as a EMBARQ Speedtest server for DSL customers only.
Any other use is prohibited.
You may login using anonymous ftp and download the test files to determine your speed.
You may upload the same files to the upload directory to test your upload speed.
You may only upload the files that you previously downloaded from this server.
You cannot download anything from the upload directory.
Remember, some ftp programs measure speed in bytes per second.
DSL speeds are measured in bits per second. There are 8 bits in a byte.
If you can download at 64 kilobytes per second then that is the same as
512 kilobits per second.
220 65.40.220.20 FTP server ready
User (65.40.220.20:(none)):
331 Anonymous login ok, send your complete email address as your password.
230-
This server is provided as a EMBARQ Speedtest server for DSL customers only.
Any other use is prohibited.
You may login using anonymous ftp and download the test files to determine your speed.
You may upload the same files to the upload directory to test your upload speed.
You may only upload the files that you previously downloaded from this server.
You cannot download anything from the upload directory.
Remember, some ftp programs measure speed in bytes per second.
DSL speeds are measured in bits per second. There are 8 bits in a byte.
If you can download at 64 kilobytes per second then that is the same as
512 kilobits per second.
230 Anonymous access granted, restrictions apply.
ftp> hash
Hash mark printing On ftp: (2048 bytes/hash mark) .
ftp> get test.1meg
200 PORT command successful
150 Opening ASCII mode data connection for test.1meg (1048576 bytes)
#
#
#
ftp: 1048576 bytes received in 5.96Seconds 175.94Kbytes/sec.
ftp>
ftp> cd upload
250 CWD command successful
ftp> put test.1meg
200 PORT command successful
150 Opening ASCII mode data connection for test.1meg
#
#
#
226 Transfer complete.
ftp: 1048576 bytes sent in 5.98Seconds 175.23Kbytes/sec.
ftp>
ftp>
I think that you might be able to redirect the output of your command to a file, e.g. at the end of the command add (assuming that you have a directory c:\temp):
your command here > c:\temp\TestOutput.text
Then in your program, add a FileSystemWatcher to watch that file and load it into the textbox when it changes. If you're doing this lots of times then you might have to dynamically generate a filename and delete the files when no longer needed.

How to read text files transferred as binary

My code copies files from FTP (using text transfer mode) to a local disk and then tries to process them.
All files contain only text, and values are separated by new lines. Sometimes files are moved to this FTP server using binary transfer mode, and it looks like this messes up the line endings.
Using a hex editor, I compared the line endings depending on the transfer mode used to send files to the FTP server:
using text mode: line endings are 0D 0A
using binary mode: line endings are 0D 0D 0A
Is it possible to modify my code so it could read files in both cases?
Code from a job that illustrates my problem and shows how I'm reading the file
(here I use the same file, which contains 14 rows of data):
int i;
container con;
container files = ["c:\\temp\\axa_keio\\ascii.txt", "c:\\temp\\axa_keio\\binary.txt"];
boolean purchLineFirstRow;
IO inFile;
;
for (i = 1; i <= conlen(files); i++)
{
    inFile = new AsciiIO(conpeek(files, i), "R");
    inFile.inFieldDelimiter('\n');
    con = inFile.read();
    info(int2str(conlen(con)));
}
Files come from a Unix system to a Windows system.
Not sure, but maybe the question could be: "Which inFieldDelimiter values should I use to read both Unix and Windows line endings?"
Use inRecordDelimiter:
inFile.inRecordDelimiter('\n');
instead of:
inFile.inFieldDelimiter('\n');
There may still be a dangling CR on the last field, which you may wish to remove:
strRem(conpeek(con, conlen(con)), '\r')
See also: http://en.wikipedia.org/wiki/Line_endings