Creating a RESTful server for Raspberry Pi 4 to display images - API

I got a task and have absolutely no clue on how to do it at the moment.
I watched a couple of tutorials on REST APIs, but none of them are applicable to my application. I don't intend to use localhost, but if it's required then sure.
What is this task?
So there are two parts.
PC (client)
Raspberry Pi 4 (server)
Here’s the sequence:
The PC is the client and sends a request to the server, which is the Raspberry Pi 4, to display an image, let's say image1.jpg. The rpi4 is connected to an external monitor via HDMI.
The server/Raspberry Pi 4 receives the request and opens image1.jpg, which is then displayed full screen on the monitor through HDMI.
Perhaps there is a better solution than using a RESTful API to solve this. If there is, please give me recommendations.

There are 3 parts to this:
capturing an image
displaying an image
telling RasPi to do both the above
In order to capture an image you can use raspistill or libcamera utils in newer versions of Raspberry Pi OS.
If you aren't capturing pictures with the camera, you must presumably be supplying them from the PC. So you can either use scp to copy one across from the PC:
scp SOMEIMAGE.JPG raspberrypi:image.jpg
Or you can use a Windows share to share a directory between the PC and the RasPi. In Windows you'd use the "Share Folder" option, and on the RasPi you can use smbclient or cifs-utils to mount it.
In order to display an image, either use raspistill's built-in options, or use fbi, fim, or feh, depending on how things are connected and whether you are running an X11 server or not.
In order to tell the RasPi to do the above, just use ssh (or PuTTY on Windows) like this:
ssh user@raspberrypi 'raspistill ... -o /tmp/image.jpg; fim /tmp/image.jpg'
Note that the RasPi runs Avahi, so if your Raspberry Pi's hostname is set to simon, you should be able to talk to it under the name simon.local on your network, and the command above would become:
ssh user@simon.local '...'
where user is your username that you login to your RasPi with.
You can set your RasPi hostname with:
sudo raspi-config
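If you want to drive this from the PC side with a small script instead of typing the commands by hand, here is a minimal Python sketch of the same scp + ssh idea. It is not part of the answer above: the hostname, username, and viewer are placeholders (it assumes the Pi is reachable as simon.local, you log in as pi with key-based SSH, feh is installed, and an X11 session is running on the Pi's HDMI output).

import subprocess
import sys

PI = "pi@simon.local"   # placeholder: your login and the Pi's Avahi name

def show_on_pi(local_image):
    # Copy the image across, then display it full screen on the Pi's monitor.
    subprocess.run(["scp", local_image, PI + ":/tmp/image.jpg"], check=True)
    # Kill any previous viewer, then start feh in the background so ssh returns.
    # DISPLAY=:0 assumes an X11 session; use fbi/fim on the console instead.
    remote_cmd = ("pkill feh; "
                  "DISPLAY=:0 nohup feh --fullscreen /tmp/image.jpg "
                  ">/dev/null 2>&1 &")
    subprocess.run(["ssh", PI, remote_cmd], check=True)

if __name__ == "__main__":
    show_on_pi(sys.argv[1])   # e.g. python show_on_pi.py image1.jpg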

Related

Raspberry Pi Zero no data over SCP at low data transmission

I have a Raspberry Pi Zero connected to a SIM7600G-H 4G HAT with a camera module attached. I want to use it as a webcam that takes a picture at a defined interval and sends it via scp to a web server, which displays it on a homepage. The shell script I created is started via a cron job every 2 hours.
The whole setup works very well when I have a good, strong SIM connection. However, as soon as I operate the setup at the desired location, strange behavior appears.
At the location where I run the webcam I only have a relatively poor 3G connection. If I run the scp command from a connected laptop it works fine, so I can assume that the problem has nothing to do with the SIM module.
The Raspbian shows two peculiar behaviors.
Even though I created a key and gave it to the web server, every now and then it asks me to enter the password when I run the scp command. This does not happen when I connect directly to the web server via ssh.
The Raspbian uploads the first few images to the web server without problems using the scp command, but then it suddenly stops working.
I send two pictures each time. One replaces an existing image on the web server; this is the image that is displayed on the homepage. The other one I put in an archive folder named after the timestamp. It looks like this:
scp foo.jpg <username>@webserver:dir/to/folder/default.jpg
FILENAME=`date +"%Y-%m-%d_%H-%M-%S"`
scp foo.jpg <username>@webserver:dir/to/archive_folder/${FILENAME}.jpg
Because of the password issue I installed an additional tool called sshpass and prefixed the scp command with the following:
sshpass -p <password>
However, it seems like the issue is not related to sshpass, since it also happens if I use only the scp command and enter the password myself.
In the end, for the "new" file which goes into the archive folder, the Raspbian creates the file name on the web server but does not transmit the file's data, so the file remains empty.
The file which should be replaced, "default.jpg", is not touched at all.
I tried to find out what happens via the debug output, but there is no useful information. It always stops at the line that shows the transmission state, at 0% and 0 KB/s.
I have now spent several days on a solution. I also took the setup home, where everything suddenly worked smoothly again, but as soon as I mounted it on site again, the problem reappeared.
Does anyone know of a bug with the Raspberry Pi Zero where it can no longer transfer files via scp when the data transfer rate is low? One image is about 300 KB, and my laptop takes about 20 seconds to transfer it over the same connection as the Raspberry.
After countless attempts, my simplest solution was to set up a cron job which restarts the Raspberry shortly before it takes a photo for the webcam. It then searches for a new network connection and finds one very reliably.

How to programmatically download a file from a remote desktop if I have the data required to configure a Jump Desktop (remote desktop) connection?

I want to programmatically download a file from a remote machine.
So, I know the host's IP and port, as well as the login data.
I also know that it creates an SSH tunnel.
Any suggestions? Is it even possible knowing just that data?
My knowledge on that topic is very scarce.
My answer focuses on SSH usage. In order to download a file via SSH, you need to run the scp command, like
scp yourusername@server.url:/the/path/to/the/file.extension ./
That's enough to download the file. However, it is possible that this will not work by itself. First, the remote machine needs to know about your SSH key, so on that machine you will need to
vim ~/.ssh/authorized_keys
hit Insert and paste your public SSH key at the end. Don't remove anything. If it is still not working, then SSH might be disallowed on the server and you will need to enable it. Example for Ubuntu: https://linuxize.com/post/how-to-enable-ssh-on-ubuntu-18-04/
Your user needs access to the file you want to download, otherwise this won't work.
Alternatively, you could set up an SFTP connection and use that.
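If you want to do that from a script rather than from a shell, here is a rough Python sketch of the SFTP route using the paramiko library (paramiko is not mentioned in the answer above; the host, port, user, and paths are placeholders):

import os
import paramiko   # third-party: pip install paramiko

HOST = "server.url"        # placeholder: the remote machine
PORT = 22                  # placeholder: the port you were given
USER = "yourusername"      # placeholder: your SSH user
REMOTE = "/the/path/to/the/file.extension"
LOCAL = "./file.extension"

client = paramiko.SSHClient()
client.load_system_host_keys()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# Authenticates with your key; pass password="..." instead if you only have a password.
client.connect(HOST, port=PORT, username=USER,
               key_filename=os.path.expanduser("~/.ssh/id_rsa"))

sftp = client.open_sftp()
sftp.get(REMOTE, LOCAL)    # download the remote file
sftp.close()
client.close()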

Way to pass parameters or share a directory/file to a qemu-kvm-launched VM on CentOS 7.0

I need to be able to pass some parameters to my virtual machine during its bootup so it sets itself up properly. To do that I either have to bake the info into the image or somehow pass it as parameters to my qemu-kvm command. These parameters are just a few, and if it were VMware, we would just pass them as OVA params and, when the VM launches, call the OVA environment to get them. But launching it from qemu-kvm I have no such options. I did some homework and found that I could use the virtio-9p driver for sharing files between host and guest. Unfortunately RHEL/CentOS has decided not to support 9p.
With no option of rebuilding my RHEL kernel with the 9p options enabled, how do I solve my problem? Either solution would work: pass/share some kind of JSON file to the VM (pre-populated on the host), which it will read and use for its setup, OR set some kind of "environment variables" which I can query from within the VM to get these params and continue with setup. Any pointers would help.
If your version of QEMU supports it, you could use its -fw_cfg option to pass information to the guest. If that guest is running a Linux kernel with CONFIG_FW_CFG_SYSFS enabled, you will be able to read out the information from sysfs. An example:
If you launch your VM like so:
qemu-system-x86_64 <OPTIONS> -fw_cfg name=opt/com.example.test,string=qwerty
From inside the guest, you can then get the value back from sysfs:
cat /sys/firmware/qemu_fw_cfg/by_name/opt/com.example.test/raw
There appears to be some driver for Windows as well, but I've never used it.
When you boot your guest with -kernel and -initrd you should be able to pass such parameters with -append: they end up on the kernel command line, which the guest can read back from /proc/cmdline.
The downside is that you have to keep track of your current kernel and initrd outside of your disk image.
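A rough guest-side sketch of that approach (the myapp_* names below are made up for illustration): if the VM was started with, say, -append "console=ttyS0 myapp_role=worker myapp_db=10.0.0.5", the guest can recover the values like this:

# Guest-side sketch: recover KEY=value parameters passed via -append.
# The myapp_* names are illustrative, not a real convention.
with open("/proc/cmdline") as f:
    tokens = f.read().split()

params = dict(tok.split("=", 1) for tok in tokens if "=" in tok)

print(params.get("myapp_role"))   # -> "worker"
print(params.get("myapp_db"))     # -> "10.0.0.5"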
Other possibilities could be a small prepared disk image (as you said) or via network/dhcp or a serial link into your guest or ... this really depends on your environment.
I was just searching to see if this situation had improved and came across this question. Apparently it has not improved.
What I do is output my variable data to a temp file (e.g. /tmp/xxFoo). Usually I write text or a tar straight to that file, then truncate it to a minimum size that is a 512-byte multiple, like 64K, otherwise the disk controller won't configure it. Then the VM starts with that file as a raw drive. After the VM is started, the temp file is deleted. From within the guest you can read/cat the raw block device and get the variable data (in BSD, use the c partition as the raw drive).
In Windows guests it's tricky to get to the data. In theory you can read \\.\PhysicalDriveN, but I have never been able to get that to work. Cygwin can do it and it works like Linux. The other option is to make your temp file a partitioned and formatted image, but that's a pain to create and update.
As far as sharing a folder I use Samba which works in just about anything. I usually use several instances of smbd running with different configurations.
One option is to create an ISO file and pass it as a parameter. This works for both Windows and Ubuntu hosts and guests. You can read the mounted CD-ROM inside the guest OS:
qemu-system-x86_64 -drive file=c:/qemuiso/winlive1.qcow2,format=qcow2 -m 8G -drive file=c:\qemuiso\sample.iso,index=1,media=cdrom
On a Linux guest, mount the CD-ROM (Ubuntu):
blkid                           # check that the media is there
sudo mkdir /mnt/cdrom
sudo mount /dev/sr0 /mnt/cdrom  # this step can also be put in a crontab
cd /mnt/cdrom

How to figure out port information in mininet

I use Python to create a custom Mininet topology. Knowing the topology in detail is not important for the question.
I use Ryu as the controller; specifically, I use the app "ofctl_rest.py". This controller does not install rules in the switch on its own; you have to issue REST commands to establish rules. In every REST request (rule) you have to specify an outgoing port, and to specify this port I need information about the topology of the network.
I need to know which link is connected to a port and which interface the port runs on. It would also be helpful to know the foreign interface, foreign switch/host, and foreign port corresponding to the actual port. How can I retrieve this information?
Please help me. I am really frustrated right now, because I do not know how to figure it out.
Inside the mininet CLI you can use the net command to find out about the topology. The nodes command will show you a list of nodes.
You can also use the dump command to display the interface details.
For information on the 'hosts', such as they are, you can run normal Linux commands on each host, e.g.
mn> h1 ifconfig
will run ifconfig on host h1, showing you some of the network configuration for that host.
Given that you seem to be running mininet from a custom script, you could start the CLI at the end of your script (if that's possible) e.g.
from mininet.net import Mininet
from mininet.cli import CLI

net = Mininet(your_topo)
net.start()
CLI(net)    # drops into the interactive mininet> prompt
net.stop()
Otherwise, you can use the Mininet Python APIs to find much of the information.
The dump* functions in mininet.util will print out lots of information.
topo.links() will give you a list of the links in the topology.
topo.linkInfo() might give you some extra info.
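If what you need is the mapping from switch ports to interfaces and link endpoints, a small sketch along these lines should work (assuming net is the started Mininet object from your custom script; the function name is just illustrative):

def dump_port_map(net):
    # For every link, print both ends: node name, port number, and interface name.
    for link in net.links:
        ends = []
        for intf in (link.intf1, link.intf2):
            node = intf.node
            port = node.ports[intf]   # this node's port number for the interface
            ends.append("%s port %d (%s)" % (node.name, port, intf.name))
        print(" <-> ".join(ends))

# e.g. after net.start():
# dump_port_map(net)

These port numbers usually correspond to the OpenFlow port numbers you would use in your ofctl_rest rules, but it is worth verifying with ovs-ofctl show.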
For flow information you can either run ovs-dpctl, ovs-ofctl etc. outside of mininet (in a normal shell), or run the equivalents without the ovs- prefix inside the mininet CLI.

Access broccoli server from other device

I'm using broccoli to develop a simple app and I'm trying to access it from a mobile phone.
However, even though I'm able to access the site from my computer at localhost:42000, I'm not able to do so when I point the browser of my phone at 10.0.1.8:4200.
Is it possible to access the broccoli server from other devices? If not, how do you suggest I handle this issue?
Thanks
Well, localhost is 127.0.0.1, which is not accessible from outside your computer by default, but... You can use a development proxy like Pow to fix this.
Pow is very simple to setup and configure to your needs and apart from that awesomeness it can serve your apps to other devices on the same network.
What you need to know before you proceed with the steps below:
Your computer's IP address on your LAN.
The name of your app.
For the example steps below let's use the following values.
IP: 10.0.1.8
App name: magicapp
Steps to happiness (if you run OS X or Linux):
Install Pow with:
$ curl get.pow.cx | sh
Tell Pow to look for your app at localhost port 4200 with:
echo 4200 > ~/.pow/magicapp
Voilà. If all went well, you should be able to access your app from other devices on the same network at: http://magicapp.10.0.1.8.xip.io/
Hope this helps.
You could simply launch the broccoli hosting server via
broccoli serve --host YOUR_IP --port YOUR_PORT