Raspberry Pi Zero: no data transferred over scp on a slow connection

I have a Raspberry Pi Zero connected to a SIM7600G-H 4G HAT, with a camera module attached. I want to use it as a webcam that takes a picture at a defined interval and sends it via scp to a web server, which displays it on a homepage. The shell script I created is started via a cron job every 2 hours.
The whole setup works very well when I have a good, strong mobile connection. However, as soon as I operate it at the intended location, strange behavior appears.
At that location I only have a relatively poor 3G connection. If I run the scp command from a connected laptop it works fine, so I assume the problem has nothing to do with the SIM module.
Raspbian shows two peculiar behaviors.
Even though I created a key and installed it on the web server, every now and then scp still asks me for the password. This does not happen when I connect directly to the web server via ssh.
The first few images upload to the web server without problems using the scp command, but then suddenly it stops working.
I send two pictures each time. One replaces an existing file on the web server; that is the image displayed on the homepage. The other goes into an archive folder and is named after the timestamp. It looks like this:
scp foo.jpg <username>@webserver:dir/to/folder/default.jpg
FILENAME=$(date +"%Y-%m-%d_%H-%M-%S")
scp foo.jpg <username>@webserver:dir/to/archive_folder/${FILENAME}.jpg
Because of the password issue I installed an additional tool called sshpass and prefixed the scp commands with it:
sshpass -p <password> scp foo.jpg <username>@webserver:dir/to/folder/default.jpg
However, the issue does not seem to be related to sshpass, since it also happens if I use scp alone and type the password myself.
In the end, for the "new" file that goes into the archive folder, the Raspberry creates the file name on the web server but does not transmit the file's contents, so the file remains empty.
The file that should be replaced, default.jpg, is not touched at all.
I tried to find out what happens via the debug output, but there is no useful information. It always stops at the line that shows the transfer progress, stuck at 0% and 0KB/s.
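For reference, verbose output and keepalives can be requested directly on the scp command line; this is only a diagnostic sketch, and the timeout values are arbitrary examples:
# -v prints the ssh-level debug log; the keepalive options make a dead link fail instead of hanging forever
scp -v -o ConnectTimeout=30 -o ServerAliveInterval=15 -o ServerAliveCountMax=4 \
    foo.jpg <username>@webserver:dir/to/folder/default.jpg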
I have now spent several days looking for a solution. I even took the setup home, where everything suddenly worked smoothly again, but as soon as I mounted it back at the location, the problem reappeared.
Does anyone know of a bug with the Raspberry Pi Zero where it can no longer transfer files via scp when the data rate is low? One image is about 300 KB, and my laptop takes about 20 seconds to transfer it over the same connection the Raspberry uses.

After countless attempts, my simplest solution was to set up a cron job that restarts the Raspberry shortly before it takes a photo for the webcam. It then searches for a new network and finds one very reliably.
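A minimal sketch of the two crontab entries, assuming the capture script lives at /home/pi/webcam.sh and runs on even hours; the five-minute lead time is arbitrary:
# root's crontab: reboot five minutes before every even hour
55 1,3,5,7,9,11,13,15,17,19,21,23 * * * /sbin/shutdown -r now
# pi's crontab: take and upload the pictures every two hours
0 */2 * * * /home/pi/webcam.sh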

Related

Creating RESTful server for Raspberry Pi 4 to display images

I got a task and have absolutely no clue on how to do it at the moment.
I watched a couple of tutorials on REST APIs, but none of them are applicable to my application. I don't intend to use localhost, but if it's required then sure.
What is this task?
So there are two parts.
PC (client)
Raspberry Pi 4 (server)
Here’s the sequence:
The PC is the client and sends a request to the server, the Raspberry Pi 4, to display an image, let's say image1.jpg. The Pi 4 is connected to an external monitor via HDMI.
The server/Raspberry Pi 4 receives the request and opens image1.jpg, which is then displayed full screen on the monitor over HDMI.
Perhaps there is a better solution than to use RESTful API to solve this. If there is please give me recommendations.
There are 3 parts to this:
capturing an image
displaying an image
telling RasPi to do both the above
In order to capture an image you can use raspistill or libcamera utils in newer versions of Raspberry Pi OS.
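For example (the output path is just an illustration):
# legacy camera stack
raspistill -o /tmp/image.jpg
# libcamera stack on newer Raspberry Pi OS releases
libcamera-still -o /tmp/image.jpg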
If you aren't capturing pictures with the camera, you must presumably be supplying them from the PC. So you can either use scp to copy one across from the PC:
scp SOMEIMAGE.JPG raspberrypi:image.jpg
Or you can use a Windows SHARE to share a directory between the PC and the RasPi. In Windows you'd use the "Share Folder" option, and on the RasPi you can use smbclient or cifs-utils to mount it.
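A rough sketch with cifs-utils; the share name, mount point and Windows username are placeholders for your own values:
sudo apt install cifs-utils
sudo mkdir -p /mnt/pcshare
# mount the Windows share; you will be prompted for the Windows password
sudo mount -t cifs //PC_HOSTNAME/SharedFolder /mnt/pcshare -o username=WINDOWSUSER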
In order to display an image, either use raspistill's built-in preview options, or use fbi, fim or feh, depending on how things are connected and whether you are running an X11 server or not.
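For instance, with an example path, on the framebuffer console versus under X11:
# framebuffer console (no X11 running); -T 1 selects virtual terminal 1, -a autozooms
sudo fbi -T 1 -a /tmp/image.jpg
# under X11
feh --fullscreen /tmp/image.jpg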
In order to tell RasPi to do the above, just use ssh (or Putty on Windows) like this:
ssh user@raspberrypi 'raspistill ... -o /tmp/image.jpg; fim /tmp/image.jpg'
Note that RasPi implements avahi, so if your Raspberry Pi's hostname is set to simon, you should be able to talk to it under the name simon.local on your network, so the command above would become:
ssh user@simon.local '...'
where user is your username that you login to your RasPi with.
You can set your RasPi hostname with:
sudo raspi-config

Google Cloud SSH inconsistent

I have created 4 instances in two separate instance groups based on two vm templates.
Initially I was using the "SSH" button within the Google Cloud console, and I noticed it would actually work only about 40% of the time. I would often have to stop and restart the machines in order for SSH to work. After a day or so, the SSH button stopped working altogether. I figured this was just a silly bug, and that having actual SSH keys and logging in via normal SSH would work fine.
Well today I configured normal ssh keys, and I was getting the following on 3 of 4 instances:
Permission denied (publickey).
I logged into the cloud console and clicked the SSH button on all 4 instances and, lo and behold, only 1 of 4 works.
So my question is... why do I have to keep rebooting instances just to keep SSH working? I have never had this problem on any other cloud server before.
Note: I created a base Ubuntu instance from their available images, built a generic server, then used that as the base template and forked it to create the other two instance group templates.
I am thinking that the ssh daemon might be crashing, but how the heck can I tell, and how can I fix it?
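One way to look, without being able to ssh in, is the VM's serial console output; once you do get a shell (for example via the browser SSH button when it cooperates), systemd can show the daemon's state. A sketch; the instance name and zone are placeholders, and on Ubuntu the unit is usually called ssh:
# read the boot/console log from outside the VM
gcloud compute instances get-serial-port-output INSTANCE_NAME --zone ZONE
# from a shell on the VM, check the daemon itself
sudo systemctl status ssh
sudo journalctl -u ssh --since "1 hour ago"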
I took the silence from the community as an indicator that the problem was only affecting me. It turns out the stock image I had chosen as the base template had a buggy SSH daemon. It was a fairly quick process to rebuild my templates from a different stock image, and since then I have had no problems connecting to my machines via SSH.

gcloud compute ssh connects but shows wrong instance name

I'm pretty new to the Gcloud environment, but getting the hang of it.
With our first project live on an instance, I've been shuffling some static IPs, instances and snapshots around for an optimal deployment workflow. But what's going on now I can't understand:
I have two instances, say live-1 and dev-2.
Now I can connect to live-1 using gcloud compute ssh live-1 and it's okay.
When I try to connect to dev-2 using gcloud compute ssh dev-2, it logs me in to live-1.
The first time I tried to ssh to dev-2 it took longer than usual. After that it just connects me to the wrong instance immediately.
The goal was (as you might have guessed) to copy the live environment to a testing one. I created an image of live-1 and cloned it to set up dev-2. In my earlier experience, this was possible and worked as expected.
Whenever I use the Compute Console in the browser and use the online SSH tool from the instance list, it connects to dev-2 properly. But the aforementioned command on my local machine connects me to live-1.
I already removed the IP for dev-2 from my known hosts, figuring it's cached somewhere, but no luck. What am I missing here?
Edit: I just found out that the instances are in fact separate even though they are 'named' the same; if I log in to dev-2, I do see myuser@live-1: in the shell, but it is running as a separate instance. I created a dummy file on the supposed dev-2, and it doesn't show up on the actual live-1 machine.
So this is very confusing; I rely on the user@host prefix in front of every shell line to know where and what I'm actually working on, and having two instances with the same hostname but different environments is confusing.
Ok, it was dead simple. Just run sudo hostname [desiredhostname] in the terminal, and restart it.
So in my case I logged in to dev-2 and ran sudo hostname dev-2.
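Note that a hostname set with the hostname command does not necessarily survive a reboot. A rough sketch of making it persistent, using dev-2 from the example above; depending on the image, Google's guest environment may reset the hostname from instance metadata at boot, so verify it sticks after a restart:
# persistent on systemd-based images
sudo hostnamectl set-hostname dev-2
# or, on older images, update the files directly
echo dev-2 | sudo tee /etc/hostname
sudo sed -i 's/live-1/dev-2/g' /etc/hosts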

Accessing an external hard drive after logging into a remote machine using the ssh command

I am doing an intensive computing project with a very old C program. The program requires a library called the Sun Performance Library, which is commercial software. Instead of purchasing the library myself, I run the program by logging in to a Solaris machine in our computer lab with the ssh command, while the working directory that stores the output data is still on my local Mac.
Now a problem has occurred: the program uses a large amount of disk space to save intermediate results, and the space on my local Mac fills up quickly (50 GB per user, as prescribed by the administrator). These results are necessary for the next stage of the computation, and I cannot delete any of them before the program finally produces its output. Therefore, I have to move the working directory to an external hard drive in order to continue. Obviously,
cd /Volumes/VOLNAME
is not the correct way to do it because the remote machine will give me a prompt saying
/Volumes/VOLNAME: No such file or directory.
So, what is the correct way to do it?
sshfs recently added support for "slave mode", which allows you to do this. Assuming you have sshfs on Solaris (I'm not sure about that), the following command, run from your Mac, will do what you want:
dpipe /usr/lib/openssh/sftp-server = ssh SOLARISHOSTNAME sshfs MACHOSTNAME:/Volumes/VOLNAME MOUNTPOINT -o slave
This will result in the MOUNTPOINT directory on the server being backed by your local external drive. Note that I'm not sure whether macOS has dpipe. If it doesn't, you can replace it with one of the equivalent solutions from "How to make a bidirectional pipe between two programs?". Also, if your SFTP server binary is somewhere else, substitute its path.
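If dpipe is indeed missing on the Mac, one classic stand-in is a named pipe; a rough sketch, assuming the macOS sftp-server binary lives at /usr/libexec/sftp-server:
# the fifo carries one direction, the ordinary pipe the other,
# which wires the local sftp-server to the remote slave-mode sshfs
mkfifo /tmp/sshfs.fifo
/usr/libexec/sftp-server < /tmp/sshfs.fifo \
    | ssh SOLARISHOSTNAME sshfs MACHOSTNAME:/Volumes/VOLNAME MOUNTPOINT -o slave \
    > /tmp/sshfs.fifo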
The common way to mount a remote volume in Solaris is via NFS, but that usually requires root permissions.
Another approach would be to make your application read its data from stdin and output its results to stdout, without using the file system directly. Then you could just redirect the data from/to your local machine through ssh. For instance:
ssh user@host PROGRAM </Volumes/VOLNAME/input.data >/Volumes/VOLNAME/output.data

Detecting SD card presence for a web page

I'm working on a (web) control panel of sorts for a fanless PC running Debian. The device collects data into MySQL for a research study. The database tables live on an SD card. For the control panel I need to detect the presence or absence of the SD card to determine whether the system is set up and running properly.
I've been working with blkid to get the list of attached block devices. After some searching I found the -c /dev/null parameter to avoid the cached values (without it the results are inaccurate: removing the SD card while the system is running isn't reflected in the output). This works great when I'm running as root. But the web server runs as www-data, and if I run blkid -c /dev/null under that account it returns nothing, just an empty list. Running the same command as root gives me the expected values. The same goes for fdisk -l (my backup plan).
Running apache2 as root isn't an option.
So I'm hunting around for suggestions to either:
1. find an accurate list of devices that I can use in php
2. figure out why these commands don't return any values when run as the apache2 account
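One possible avenue, as a rough sketch: lsblk only reads /sys and /proc, so it works for an unprivileged user such as www-data and can be called from PHP via shell_exec. The device name below is an assumption; on a USB card reader it might be sdb rather than mmcblk0:
#!/bin/sh
# report whether the SD card's block device is currently present;
# /dev/mmcblk0 is a guess - substitute the device your card reader exposes
if lsblk -dn -o NAME | grep -qx 'mmcblk0'; then
    echo "present"
else
    echo "absent"
fi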