How to upload images taken by a Raspberry Pi to AWS IoT - amazon-s3

I am trying to program a Raspberry Pi so it can take a picture every 10 seconds and upload it to DynamoDB through AWS IoT. So far I have programmed the Pi to take a picture every 10 minutes, but I have not been able to send anything to AWS IoT. I have been working on this for weeks now. Can anybody please help me? I would really appreciate it. I am very new to programming. Thank you in advance.
Things I have already done:
I have created a thing in AWS
I have also created certificate and that kind of stuff.
I have also created a table in DynamoDB
I need help with what code I need to add to what I have right now, so that the pictures taken by the Pi are uploaded to DynamoDB instead of being saved on the Pi. If you can direct me to other websites or places where I can get help, that would be really appreciated.
Here is my code
# Time-lapse capture loop: read the current roll number and save stills to SAVEDIR
ROLL=$(cat /var/tlcam/series)
SAVEDIR=/var/tlcam/stills

while true; do
  # Name each file with the roll number and a UTC timestamp
  filename="$ROLL-$(date -u +"%d%m%Y_%H%M-%S").jpg"
  /opt/vc/bin/raspistill -o "$SAVEDIR/$filename"
  sleep 4
done

I believe you want to use S3 instead of DynamoDB. The item size limit in DynamoDB is 64 KB, which would make for a very small picture. S3 will let you store an object up to 5 TB in size. (Storing a lot of images: S3 vs DynamoDB)
S3 has a couple of SDKs available (aws.amazon.com/code), but since you are using a Raspberry Pi I assume you will want to use Python or the CLI. You can find some Python examples using S3 here: boto3.readthedocs.org/en/latest/guide/s3.html. You can also find examples for using the CLI here: docs.aws.amazon.com/cli/latest/reference/s3api/index.html
These SDKs will let you upload images to S3 and download them from S3 (say, to a web interface or app).
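For example, here is a minimal sketch of uploading the captured stills to S3 with boto3. The bucket name and key prefix are placeholders, and it assumes AWS credentials are already configured on the Pi (e.g. via aws configure) and that boto3 is installed:

# Minimal sketch: upload captured stills to S3 with boto3.
# Assumes credentials are configured (e.g. ~/.aws/credentials) and the bucket already exists.
import os
import boto3

s3 = boto3.client("s3")

BUCKET = "my-timelapse-bucket"   # placeholder bucket name
SAVEDIR = "/var/tlcam/stills"

def upload_still(filename):
    local_path = os.path.join(SAVEDIR, filename)
    # The "stills/" key prefix is just an example layout
    s3.upload_file(local_path, BUCKET, "stills/" + filename)

# Example: upload every .jpg currently in the stills directory
for name in os.listdir(SAVEDIR):
    if name.endswith(".jpg"):
        upload_still(name)

You could call something like this from the capture loop after each raspistill run, or as a separate script on a schedule.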

Related

Backend framework that can use ImageMagick and S3

I'm new to the web development field, and this is the first time I am designing a system. Any advice would be appreciated.
Currently, I am working on a video-hosting platform using AWS S3, so the backend framework needs to be connected to S3. When a user downloads an image from the backend, they should be able to edit the image with ImageMagick or something similar.
Here is my question:
Is there a good backend framework that makes uploading images to S3 easier? I think Rails Active Storage works really well in this regard, but the problem with Rails is that it can't use the full set of ImageMagick functions.
What I have to care about is the connection to S3 and the ability to use an image-editing system.
For now, I plan to use Ruby on Rails, but I would like to know other options and their pros and cons.
Thank you.
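Framework aside, the underlying flow any of these frameworks wraps is the same: fetch the object from S3, run it through an image library, and write the result back. A rough, framework-agnostic sketch in Python, using boto3 and Pillow as a stand-in for ImageMagick (bucket and key names are placeholders):

# Rough sketch of the S3 download -> edit -> re-upload flow that any backend framework wraps.
# boto3 talks to S3; Pillow stands in for ImageMagick here. Bucket/key names are placeholders.
import io
import boto3
from PIL import Image

s3 = boto3.client("s3")
BUCKET = "my-video-platform-assets"  # placeholder

def edit_image(key, out_key, size=(800, 600)):
    # Download the original object into memory
    obj = s3.get_object(Bucket=BUCKET, Key=key)
    img = Image.open(io.BytesIO(obj["Body"].read())).convert("RGB")

    # Example edit: resize (replace with whatever processing you need)
    img.thumbnail(size)

    # Upload the edited version back to S3
    buf = io.BytesIO()
    img.save(buf, format="JPEG")
    buf.seek(0)
    s3.put_object(Bucket=BUCKET, Key=out_key, Body=buf, ContentType="image/jpeg")

edit_image("uploads/original.jpg", "edited/resized.jpg")

Whichever framework you pick, the question is mostly how much of this plumbing it hides for you.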

How to permanently upload files onto Google Colab such that it can be directly accessed by multiple people?

My friend and I are working on a project together on Google Colab, for which we require a dataset, but we keep running into the same problem while uploading it.
What we're doing right now is uploading it to Drive, giving each other access, and then mounting Google Drive each time. This becomes time-consuming and irritating, as we need to authorize and mount every time.
Is there a better way, so that we can upload the dataset to the home directory and access it directly each time? Or is that not possible because we're assigned a different machine each time?
If you create a new notebook, you can set it to mount automatically, so there is no need to authenticate every time.
See this demo.
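For reference, mounting Drive from a cell looks like the sketch below; once both accounts have access to the shared folder, the same path works for both of you each session (the dataset path is a placeholder):

# Mount Google Drive into the Colab VM; files shared between both accounts
# appear under the same path once the drive is mounted.
from google.colab import drive
drive.mount('/content/drive')

# Example: read the shared dataset without re-uploading it each session
import pandas as pd
df = pd.read_csv('/content/drive/MyDrive/shared_project/dataset.csv')  # placeholder path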

How to access data from machine using google colab

I want to use Google Colab, but my data is pretty huge, so I want to access my data directly from my machine in Google Colab. I also want to save files directly to my machine's directories. Is there a way I can do that? I can't seem to find any.
Look at how to use a local runtime here:
https://research.google.com/colaboratory/local-runtimes.html
Otherwise, you can store your data on Google Drive, GCS, or S3. Then you can just mount it; there is no need to upload it every time.
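As an illustration, once the data is in a GCS bucket you can read it straight from Colab without copying it onto the VM first. A rough sketch using gcsfs (project, bucket, and file names are placeholders; install gcsfs first if it is not already available):

# Read a file straight from a GCS bucket without downloading it to the Colab VM first.
# Project, bucket, and file names are placeholders.
from google.colab import auth
import gcsfs
import pandas as pd

auth.authenticate_user()   # grants the notebook access to your GCP resources

fs = gcsfs.GCSFileSystem(project="my-project")           # placeholder project ID
with fs.open("my-bucket/datasets/big_file.csv") as f:    # placeholder path
    df = pd.read_csv(f)

print(df.shape)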

Extract data from MarkLogic 8.0.6 to AWS S3

I'm using MarkLogic 8.0.6 and we also have JSON documents in it. I need to extract a lot of data from MarkLogic and store it in AWS S3. We tried to run mlcp locally and then upload the data to AWS S3, but it's very slow because it generates a lot of files.
Our MarkLogic platform is already connected to S3 to perform backups. Is there a way to extract a specific database to AWS S3?
It would be fine for me to end up with one big file with one JSON document per line.
Thanks,
Romain.
I don't know about getting it to s3, but you can use CORB2 to extract MarkLogic documents to one big file with one JSON document per line.
S3:// is supported as a native file path in MarkLogic, so you can also iterate through all your docs and export them with xdmp:save("s3://...").
If you want to build aggregates, you may want to combine this idea with Sam's suggestion of CORB2 to control the process and help group your whole database into multiple manageable aggregate documents, then use a post-back task to run xdmp:save.
Thanks guys for your answers. I did not know about CORB2; this is a great solution! But unfortunately, due to bad I/O, I would prefer a solution that writes directly to S3.
I can use a basic MarkLogic query and dump to s3:// with the native connector, but I always hit memory errors, even when launching with the "spawn" function to generate a background process.
Do you have any XQuery example for extracting each document to S3 one by one without running into memory errors?
Thanks

Change the resolution of image files stored on S3

Is there a way to run ImageMagick or some other tool on S3 servers to resize the images?
The way I know is to first download all the image files to my machine, convert them, and re-upload them to S3. The problem is that there are more than 10,000 files, and I don't want to download them all to my local machine.
Is there a way to convert them on the S3 server itself?
Take a look at this: https://github.com/Turistforeningen/node-s3-uploader.
It is a library that provides some features for S3 uploading, including resizing as you want.
Another option is NOT to change the resolution, but to use a service that can convert the images on-the-fly when they are accessed, such as:
Cloudinary
imgix
Also check out the following article on Amazon's compute blog. I found myself here because I had the same question. I think I'm going to implement this in Lambda so I can just specify the size and see if that helps. My problem is that I have image files on S3 that are 2 MB. I don't want them at full resolution, because I have an app that retrieves them, and it sometimes takes a while for a phone to pull down a 2 MB image. But I don't mind storing them at full resolution if I can get a different size just by specifying it in the URL. Easy!
https://aws.amazon.com/blogs/compute/resize-images-on-the-fly-with-amazon-s3-aws-lambda-and-amazon-api-gateway/
S3 does not, alone, enable arbitrary compute (such as resizing) on the data.
I would suggest looking into AWS Lambda (available in the AWS console), which will allow you to set up a little program (which they call a Lambda function) to run when certain events occur in an S3 bucket. You don't need to set up a VM; you only need to supply a few files with a particular entry point. The program can be written in a few languages, namely Node.js, Python, and Java. You can do it all from the console's web GUI.
Usually these are set up to compute things on newly uploaded files. To trigger the program for files that are already in place on S3, you have to "force" S3 to emit one of the events you can hook into for the files you already have. The list is here. Forcing an S3 copy might be sufficient (copy A to B, then delete B); an S3 rename operation (rename A to A.tmp, then rename A.tmp back to A) or the creation of new S3 objects would also work. You essentially just poke your existing files in a way that causes your Lambda to fire. You may also invoke your Lambda manually.
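For example, here is a rough sketch of "poking" the existing objects so the bucket emits events for them, by copying each object onto itself with replaced metadata (bucket name and prefix are placeholders):

# Copy each existing object onto itself (with replaced metadata) so S3 emits a new
# ObjectCreated event for it, which in turn fires the Lambda. Names are placeholders.
import boto3

s3 = boto3.client("s3")
BUCKET = "my-image-bucket"  # placeholder

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix="images/"):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        s3.copy_object(
            Bucket=BUCKET,
            Key=key,
            CopySource=f"{BUCKET}/{key}",
            MetadataDirective="REPLACE",          # required when copying an object onto itself
            Metadata={"reprocessed": "true"},
        )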
This example shows how to automatically generate a thumbnail out of an image on S3, which you could adapt to your resizing needs and reuse to create your Lambda:
http://docs.aws.amazon.com/lambda/latest/dg/walkthrough-s3-events-adminuser-create-test-function-create-function.html
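Roughly, the handler in such a Lambda ends up looking like the sketch below, here using Pillow for the resizing. The destination bucket, the size, and the output prefix are placeholders, and it assumes the Lambda's execution role can read the source bucket and write the destination:

# Rough sketch of an S3-triggered Lambda that writes a resized copy of each uploaded image.
# Pillow must be bundled with the deployment package; names and sizes are placeholders.
import io
import urllib.parse
import boto3
from PIL import Image

s3 = boto3.client("s3")
DEST_BUCKET = "my-image-bucket-resized"  # placeholder output bucket
MAX_SIZE = (800, 800)

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Download the original, resize it in memory, and upload the smaller copy
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        img = Image.open(io.BytesIO(body)).convert("RGB")
        img.thumbnail(MAX_SIZE)

        buf = io.BytesIO()
        img.save(buf, format="JPEG")
        buf.seek(0)
        s3.put_object(Bucket=DEST_BUCKET, Key="resized/" + key, Body=buf,
                      ContentType="image/jpeg")

Writing the output to a separate bucket (or a prefix excluded from the trigger) avoids the Lambda re-firing on its own output.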
Also, here is the walkthrough on how to configure your lambda with certain S3 events:
http://docs.aws.amazon.com/lambda/latest/dg/walkthrough-s3-events-adminuser.html