How do you set the fps to 60 for the Coral MIPI camera?

I am trying to set the fps of the Coral MIPI camera to 60 fps. The datasheet says it can run 720p at 60 fps, so I know the camera is capable of it.
I have set the resolution using the following command, which works:
v4l2-ctl --device=/dev/video1 --set-fmt-video=width=1280,height=720
but when I set the fps to 60, the maximum seems to be capped at 30:
mendel@mocha-calf:~$ v4l2-ctl --device=/dev/video1 -p 60
Frame rate set to 30.000 fps
There is no option to set exposure or gain. Would I have to rebuild the driver to get these options?
Regards
Paul

60 FPS is not supported. Please see the supported formats below.
mendel@tuned-rabbit:~$ v4l2-ctl --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
Type: Video Capture
[0]: 'YUYV' (YUYV 4:2:2)
Size: Discrete 640x480
Interval: Discrete 0.033s (30.000 fps)
Size: Discrete 720x480
Interval: Discrete 0.033s (30.000 fps)
Size: Discrete 1280x720
Interval: Discrete 0.033s (30.000 fps)
Size: Discrete 1920
And for changing/setting exposure and gain, you may need to modify the current driver code and rebuild the kernel image. The driver code can be found at:
https://coral.googlesource.com/linux-imx/+/refs/heads/master/drivers/media/platform/mxc/capture/ov5645_mipi_v2.c#2774
And instructions to build the kernel image can be found at: https://coral.googlesource.com/docs/+/refs/heads/master/GettingStarted.md
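For what it's worth, you can also confirm what the driver actually negotiates from Python. This is just a sketch, assuming OpenCV with the V4L2 backend is available on the board and the camera is /dev/video1:

import cv2

# Ask for 1280x720 @ 60 fps and print what the driver actually accepts.
cap = cv2.VideoCapture(1, cv2.CAP_V4L2)        # /dev/video1
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
cap.set(cv2.CAP_PROP_FPS, 60)                  # request 60 fps
print(cap.get(cv2.CAP_PROP_FPS))               # should print 30.0 with the stock driver
cap.release()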

Related

Rigaya's NVEnc encodes file with no video or audio track

My source video file (a 1h 30min movie) is playable in both PotPlayer and VLC: H.264, 8-bit color, 7755 kb/s bitrate.
The NVEnc command I'm using is this:
.\nvencc\NVEncC64.exe --avhw -i "input.mkv" --codec hevc --preset quality --bframes 4 --ref 7 --cu-max 32 --cu-min 8 --output-depth 10 --audio-copy --sub-copy -o "output.mkv"
Encoding works fine (I believe):
NVEncC (x64) 5.26 (r1786) by rigaya, Jan 31 2021 09:23:04 (VC 1928/Win/avx2)
OS Version Windows 10 x64 (19042)
CPU AMD Ryzen 5 1600 Six-Core Processor [3.79GHz] (6C/12T)
GPU #0: GeForce GTX 1660 (1408 cores, 1830 MHz)[PCIe3x16][457.51]
NVENC / CUDA NVENC API 11.0, CUDA 11.1, schedule mode: auto
Input Buffers CUDA, 21 frames
Input Info avcuvid: H.264/AVC, 1920x800, 24000/1001 fps
AVSync vfr
Vpp Filters cspconv(nv12 -> p010)
Output Info H.265/HEVC main10 @ Level auto
1920x800p 1:1 23.976fps (24000/1001fps)
avwriter: hevc, eac3, subtitle#1 => matroska
Encoder Preset quality
Rate Control CQP I:20 P:23 B:25
Lookahead off
GOP length 240 frames
B frames 4 frames [ref mode: disabled]
Ref frames 7 frames, MultiRef L0:auto L1:auto
AQ off
CU max / min 32 / 8
Others mv:auto
encoded 142592 frames, 219.97 fps, 1549.90 kbps, 1098.83 MB
encode time 0:10:48, CPU: 8.7%, GPU: 5.2%, VE: 98.3%, VD: 21.5%, GPUClock: 1966MHz, VEClock: 1816MHz
frame type IDR 595
frame type I 595, avgQP 20.00, total size 39.44 MB
frame type P 28519, avgQP 23.00, total size 471.93 MB
frame type B 113478, avgQP 25.00, total size 587.45 MB
but when I try to play it in either PotPlayer or VLC it says there is no video track or it just doesn't play at all.
MediaInfo doesn't show any video, audio, or subtitle tracks either; just the name of the file and the file size. Am I missing something?
Switching --avhw to --avsw solved the problem.
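For reference, this is the original command with only the decoder flag changed:
.\nvencc\NVEncC64.exe --avsw -i "input.mkv" --codec hevc --preset quality --bframes 4 --ref 7 --cu-max 32 --cu-min 8 --output-depth 10 --audio-copy --sub-copy -o "output.mkv"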

Loading a large set of images kills the process

Loading 1500 images of size (1000, 1000, 3) crashes the code with kill 9 and no further error. Memory used before this line of code is 16% of total system memory. The total size of the images directory is 7.1 G.
X = np.asarray(images).astype('float64')
y = np.asarray(labels).astype('float64')
System spec:
OS: macOS Catalina
Processor: 2.2 GHz 6-Core Intel Core i7
Memory: 16 GB 2400 MHz DDR4
Update:
I am getting the error below while running the code on a machine with 32 vCPUs and 120 GB of memory.
MemoryError: Unable to allocate 14.1 GiB for an array with shape (1200, 1024, 1024, 3) and data type float32
You would have to provide some more info/details for an exact answer, but assuming this is a memory error (incredibly likely): the size of the images on disk does not represent the size they occupy in memory, so that figure is not very useful on its own. Decoded images held in memory will occupy far more space, especially once converted to float64. Intuitively I would say that 16 GB of RAM is nowhere near enough to load a 7 GB image directory this way. It's impossible to tell you exactly how much you would need, but from experience I would say you'd need to bump it up to around 64 GB. If you are using Keras, I would suggest looking into the DirectoryIterator.
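To put rough numbers on that (shapes taken from the question; this is just dtype size times element count):

import numpy as np

# 1500 images of (1000, 1000, 3) converted to float64, as in the np.asarray call above
n, h, w, c = 1500, 1000, 1000, 3
itemsize = np.dtype('float64').itemsize          # 8 bytes per value
print(n * h * w * c * itemsize / 2**30)          # ~33.5 GiB, far beyond 16 GB of RAM
print(n * h * w * c * 1 / 2**30)                 # ~4.2 GiB if kept as uint8 instead

The later MemoryError is the same arithmetic: 1200 * 1024 * 1024 * 3 values at 4 bytes each (float32) is roughly 14.1 GiB.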
Edit:
As Cris Luengo pointed out, I missed the fact that you stated the size of the images.
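If you do go the DirectoryIterator route, a minimal sketch with Keras' ImageDataGenerator (the directory layout, target size and batch size here are assumptions) could look like this:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical layout: images/<class_name>/*.jpg, one subfolder per class
gen = ImageDataGenerator(rescale=1.0 / 255)
batches = gen.flow_from_directory(
    'images',
    target_size=(1000, 1000),
    batch_size=32,
    class_mode='categorical',
)
# model.fit(batches, ...)  # batches are loaded lazily, never the whole 7 GB at once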

Vulkan swapchain image extent capabilities

I am querying the swapchain capabilities, checking the currentExtent, minImageExtent and maxImageExtent properties of VkSurfaceCapabilitiesKHR.
For window size of 128x128 I am getting:
currentExtent = 148x128
minImageExtent = 148x128
maxImageExtent = 148x128
But for window size 256x256 I am getting:
currentExtent = 256x256
minImageExtent = 256x256
maxImageExtent = 256x256
For 1280x720:
currentExtent = 1280x720
minImageExtent = 1280x720
maxImageExtent = 1280x720
I have two questions:
Why, for the 128x128 window, is the width not the same as the window width?
Why are currentExtent, minImageExtent and maxImageExtent all the same for the other window sizes?
My hardware: NVIDIA RTX 3000, Driver version 431.86, Windows 10
Q1: Feels like a bug (yours, or the driver's).
Q2: Because it works like that on some platforms. See the specification, e.g.:
With Win32, minImageExtent, maxImageExtent, and currentExtent must always equal the window size.

How to get NED velocity from GPS?

I have an Adafruit Ultimate GPS module which I am trying to fuse with a BNO055 IMU sensor. I'm trying to follow this Kalman filtering example: https://github.com/slobdell/kalman-filter-example. Although most of his code is pretty clear, I looked at his input json file (https://github.com/slobdell/kalman-filter-example/blob/master/pos_final.json) and saw that he's getting velocity north, velocity east and velocity down from the GPS module. I looked at the NMEA messages and none seem to give me that. What am I missing? How do I get these directional velocities?
Thanks!
pos_final.json is not the input file, but the output file. The input file is taco_bell_data.json and is found in the tar.gz archive. It contains the following variables:
"timestamp": 1.482995526836e+09,
"gps_lat": 0,
"gps_lon": 0,
"gps_alt": 0,
"pitch": 13.841609,
"yaw": 225.25635,
"roll": 0.6795258,
"rel_forward_acc": -0.014887575,
"rel_up_acc": -0.025188839,
"abs_north_acc": -0.0056906715,
"abs_east_acc": 0.00010974275,
"abs_up_acc": 0.0040153866
He measures position with a GPS and orientation/acceleration with an accelerometer. The NED velocities that are found in pos_final.json are estimated by the Kalman filter. That's one of the main tasks of a Kalman filter (and other observers): to estimate unknown quantities.
A GPS will often output velocities, but they will be relative to the body of the object. You can convert the body-relative velocities to NED velocities if you know the orientation of the body (roll, pitch and yaw). Say you have a drone moving at heading 030° and the GPS says the forward velocity is 1 m/s; the drone will then have the following north velocity:
vel_north = 1 m/s * cos(30°) = 0.86 m/s
and the following East velocity:
vel_east = 1 m/s * sin(30°) = 0.5 m/s
This doesn't take into account roll and pitch. To take roll and pitch into account you can take a look at rotation matrices or quaternions on Wikipedia.
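A minimal sketch of that yaw-only conversion (roll and pitch ignored, down velocity assumed zero; the function name is mine):

import math

def body_to_ned(forward_speed_mps, heading_deg):
    # Rotate a forward (body) velocity into north/east components using only the heading.
    heading = math.radians(heading_deg)
    return (forward_speed_mps * math.cos(heading),   # north
            forward_speed_mps * math.sin(heading),   # east
            0.0)                                     # down (unknown from a 2-D fix)

print(body_to_ned(1.0, 30.0))   # (0.866..., 0.5, 0.0) - the worked example above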
The velocities are usually found in the VTG telegram the GPS outputs. It's not always output, though: the GPS has to have that feature, and it has to be enabled. The RMC telegram can also be used.
The velocities from the GPS are often very noisy, which is why a Kalman filter is typically used instead of converting the body-relative velocities to NED-velocities with the method above. The GPS velocities will work fine in higher speeds though.
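For completeness, a small sketch of pulling speed and course out of a VTG sentence and converting them the same way (no checksum handling; the example sentence is illustrative, not taken from the question):

import math

def vtg_to_ned(sentence):
    # Field 1 is the true course in degrees, field 7 the speed over ground in km/h.
    fields = sentence.split(',')
    course = math.radians(float(fields[1]))
    speed_mps = float(fields[7]) / 3.6
    return speed_mps * math.cos(course), speed_mps * math.sin(course)

print(vtg_to_ned('$GPVTG,054.7,T,034.4,M,005.5,N,010.2,K*48'))   # (~1.64, ~2.31)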

Convert MP4 to HLS with AWS Elastic Transcoder

I am planning to convert MP4 (1920x1080; the bitrate may vary from file to file) to HLS at several resolutions.
The resolutions I am looking for:
1080p = 1920x1080
720p = 1280x720
480p = 854x480
360p = 640x360
To achieve the above, I have written a Lambda function in Node.js and used the "System Presets" below. The HLS output files are created, but the resolutions are not what I expect: they are sometimes correct for a few cases, but in general the value (WxH) is not constant.
HLS v3 and v4 (Apple HTTP Live Streaming), 400 kilobits/second, Video-only --------- 1351620000001-200055
HLS v3 and v4 (Apple HTTP Live Streaming), 600 kilobits/second, Video-only --------- 1351620000001-200045
HLS v3 and v4 (Apple HTTP Live Streaming), 1 megabit/second, Video-only --------- 1351620000001-200035
HLS v3 and v4 (Apple HTTP Live Streaming), 1.5 megabits/second, Video-only --------- 1351620000001-200025
I have tried but have not found a solution. I would appreciate any help with this.
Thanks, your question is very clear. I recently ran into the same kind of issue. Please find the solution below.
From what I understand, you want output files with specific resolutions.
You have to create new custom presets. I am describing one custom preset for 1080p; follow the same steps for the rest.
1080p = 1920x1080
Create a new preset:
First, choose one of the existing system presets, for example "System preset: HLS Video - 1.5M", and change the configuration values in the video section only, as per the settings below.
Name - Custom HLS Video Auto - 1080p
Container - ts
Codec - H.264
Codec Options - InterlacedMode:Progressive,MaxReferenceFrames:3,Level:3.1,ColorSpaceConversionMode:None,Profile:main
Max Bit Rate - left blank (optional)
Buffer Size - left blank (optional)
Maximum Number of Frames Between Keyframes - 90
Fixed Number of Frames Between Keyframes - true
Bit Rate - auto
Frame Rate - auto
Video Max Frame Rate - 30
Max Width - 1920
Max Height - 1080
Sizing Policy - Fit
Padding Policy - NoPad
Display Aspect Ratio - auto
These 3 settings are the important ones:
Max Width - 1920
Max Height - 1080
Sizing Policy - Fit
For the other resolutions, create new custom presets, changing only Max Width and Max Height. Everything else remains the same.
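If you prefer to create the custom preset from code rather than the console, here is a rough boto3 sketch of the same 1080p video settings (the region, preset name/description, and the omission of the Audio and Thumbnails sections are assumptions on my part; add those sections if your preset needs them):

import boto3

et = boto3.client('elastictranscoder', region_name='us-east-1')   # assumed region

preset = et.create_preset(
    Name='Custom HLS Video Auto - 1080p',
    Description='HLS video-only, 1080p, BitRate auto, Fit/NoPad',
    Container='ts',
    Video={
        'Codec': 'H.264',
        'CodecOptions': {
            'InterlacedMode': 'Progressive',
            'MaxReferenceFrames': '3',
            'Level': '3.1',
            'ColorSpaceConversionMode': 'None',
            'Profile': 'main',
        },
        'KeyframesMaxDist': '90',   # Maximum Number of Frames Between Keyframes
        'FixedGOP': 'true',         # Fixed Number of Frames Between Keyframes
        'BitRate': 'auto',
        'FrameRate': 'auto',
        'MaxFrameRate': '30',
        'MaxWidth': '1920',
        'MaxHeight': '1080',
        'SizingPolicy': 'Fit',
        'PaddingPolicy': 'NoPad',
        'DisplayAspectRatio': 'auto',
    },
)
print(preset['Preset']['Id'])   # reference this preset Id in the transcoding job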