How can I use one MediaPipe graph to process multiple cameras (RTSP streams)? - mediapipe

For example: 16 cameras and only one GPU on the server, so I can initialize at most 4 graphs to decode, run inference, and then encode. Each graph therefore needs to process 4 video streams, but I haven't found any configuration such as camera_id or source_id in MediaPipe yet.
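To make the intended setup concrete, here is a rough Python sketch (not an official MediaPipe feature) of one worker that owns a single graph instance and multiplexes several RTSP streams through it, tagging each frame with a camera_id in application code; process_frame_with_graph is a hypothetical placeholder for whatever graph invocation you use, since MediaPipe itself exposes no camera_id/source_id field:

import cv2

# Hypothetical camera IDs mapped to their RTSP URLs (placeholders).
RTSP_URLS = {
    0: "rtsp://camera-0/stream",
    1: "rtsp://camera-1/stream",
    2: "rtsp://camera-2/stream",
    3: "rtsp://camera-3/stream",
}

def process_frame_with_graph(graph, camera_id, frame):
    # Placeholder: feed one frame into your MediaPipe graph, keeping the
    # camera_id alongside it so results can be attributed to the right stream.
    pass

def run_worker(graph):
    # One worker = one graph instance serving several cameras.
    captures = {cid: cv2.VideoCapture(url) for cid, url in RTSP_URLS.items()}
    while True:
        for camera_id, cap in captures.items():
            ok, frame = cap.read()
            if not ok:
                continue  # stream hiccup; real code would reconnect
            process_frame_with_graph(graph, camera_id, frame)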

Related

Python TTS stops after 9 iterations. File is created, but no audio

Raspberry Pi, Python 3.7: the program reads a list of cities for weather conditions and is meant to speak just the city name, e.g. New York, Detroit, etc. The audio is fine for the first nine cities; after that the file is still created but has no audio. The program loop is 240 seconds, i.e. 15 TTS calls per hour, which is within the limit of 100 TTS calls per hour. The Python program keeps running with no errors, there is just no audio after the first nine cities. Restarting the program after the tenth city, the audio is okay again, but only for the first nine cities, then no audio.
I'd appreciate any thoughts on this.
As said, there are no Python errors; after a restart it works for nine cities, then no audio.
Below are the two routines which run in a loop for each city name. The routines are called from main, which sends data to a Nextion display.
import gtts
import vlc

# TTS Name: synthesize and save the city name
def tts_name(name):
    global saveit
    tts = gtts.gTTS(name)
    saveit = "Name.mp3"
    tts.save(saveit)

# Play City Name
def Play_Name():
    # Define the media player for the saved file
    media = vlc.MediaPlayer("/home/pi/Nextion/OpenWeather_10/" + saveit)
    # Set the volume
    media.audio_set_volume(90)
    # Play the name
    media.play()
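For context, a minimal sketch of how these two routines might be driven from main (the city list and the fixed delay below are illustrative; the real program also sends data to the Nextion display):

import time

cities = ["New York", "Detroit", "Chicago"]  # illustrative list

for city in cities:
    tts_name(city)   # create Name.mp3 for this city
    Play_Name()      # play it through VLC
    time.sleep(240)  # the real program loops every 240 seconds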

How to build a Yocto hddimg on i.MX7 to boot from a USB stick

I have an i.MX7 SOM. I want to build a Yocto image which I can dd onto a USB stick to boot from. I believe I want an hddimg image but cannot see how to create one (I have an sdimg which works perfectly).
I would appreciate advice.
I have set IMAGE_FSTYPES to "hddimg" but get "ERROR: Nothing PROVIDES 'syslinux'".
The SOM is the Technexion i.MX7. Layers are:
layer path priority
=======================================================
meta sources/poky/meta 5
meta-poky sources/poky/meta-poky 5
meta-oe sources/meta-openembedded/meta-oe 6
meta-multimedia sources/meta-openembedded/meta-multimedia 6
meta-freescale sources/meta-freescale 5
meta-freescale-3rdparty sources/meta-freescale-3rdparty 4
meta-freescale-distro sources/meta-freescale-distro 4
meta-powervault sources/meta-powervault 6
meta-python sources/meta-openembedded/meta-python 7
meta-networking sources/meta-openembedded/meta-networking 5
meta-virtualization sources/meta-virtualization 8
meta-filesystems sources/meta-openembedded/meta-filesystems 6
meta-cpan sources/meta-cpan 10
meta-mender-core sources/meta-mender/meta-mender-core 6
meta-mender-freescale sources/meta-mender/meta-mender-freescale 10
Nope, you certainly do not want an hddimg, as this is a mostly deprecated format for x86 systems. On ARM, you almost never want syslinux :-)
Usually your SOM comes with a Board Support Package in the form of a layer, which includes the MACHINE definition, which in turn defines the IMAGE_FSTYPES that this machine likes to boot from. If in doubt, consult the manual or ask your vendor.
Having said that, if you specify the SOM and the layers in use, we can have a look if they are publicly accessible; without those details it is impossible to give a proper answer.
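As a rough illustration of where that setting normally lives (the machine name and image types below are placeholders, not taken from the Technexion BSP):

# The BSP's machine file usually sets a sensible default, e.g. in
# conf/machine/<your-machine>.conf:
#     IMAGE_FSTYPES ?= "wic.gz"
# and you can override it for your build in conf/local.conf:
MACHINE ?= "your-imx7-machine"
IMAGE_FSTYPES = "wic.gz"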

Flink: how to process the rest of a finite stream in combination with countWindowAll()

// assume the following logic
val source = listOf(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12) // 12 elements in total
val env = StreamExecutionEnvironment.createLocalEnvironment(1)
env.fromCollection(source)
    .countWindowAll(5)
    .aggregate(...) // pack them into a List<Int> for bulk upload to the DB
    .addSink(...)   // sends the bulk
env.execute()
When I execute it, only the first 10 elements are processed; the remaining 2 are thrown away and Flink shuts down without processing them.
The only workaround I see, since I fully control the source data, is to push some well-known IGNORABLE_VALUES into the source collection to pad out the window size and then ignore them in the sink... but I suspect there is a far more professional way to do this in Flink.
You have a finite stream of 12 elements and a window that triggers for every 5 elements. The first window collects 5 elements and fires, the next 5 arrive and it fires again, but when the last 2 arrive the job knows that no more elements are coming. Since there aren't 5 elements in that final window, the trigger never fires and nothing is done with them.

Code delivery to a stream: impact on another stream

If code is delivered to a stream, will it have any impact on another stream which has the same component?
For example:
Stream 1: Comp 1 - baseline 1
Stream 2: Comp 1 - baseline 1
If I create a repository workspace from Stream 2, make code changes, and deliver them to Stream 2, will the change be available in Stream 1?
Are the components the same or two different copies?
Are the components the same or two different copies?
They are the same component.
But each stream only displays the latest change sets delivered for that component.
That means delivering new change sets on Stream 2 (and making a new baseline) has no effect on the same component in Stream 1.

Getting the images produced by AzureML experiments back

I have created a toy example in Azure.
I have the following dataset:
amounts city code user_id
1 2.95 Colleferro 100 999
2 2.95 Subiaco 100 111
3 14.95 Avellino 101 333
4 14.95 Colleferro 101 999
5 14.95 Benevento 101 444
6 -14.95 Subiaco 110 111
7 -14.95 Sgurgola 110 555
8 -14.95 Roma 110 666
9 -14.95 Colleferro 110 999
I create an AzureML experiment that simply plots the amounts column.
The code into the R script module is the following:
data.set <- maml.mapInputPort(1) # class: data.frame
#-------------------
plot(data.set$amounts);
title("This title is a very long title. That is not a problem for R, but it becomes a problem when Azure manages it in the visualization.")
#-------------------
maml.mapOutputPort("data.set");
Now, if you click on the right output port of the R Script module and then on "Visualize", you will see the Azure page where the outputs are shown.
Then the following happens:
The plot is squeezed into a fixed space (for example, the title is cut off).
The image produced is low resolution.
The JSON produced by Azure is "dirty" (making the decoding in C# difficult).
It seems that this is not the best way to get the images produced by the AzureML experiment.
Possible solution: I would like to send the picture produced in my experiment to somewhere like Blob Storage.
This would also be a great solution when I have a web app and need to pick up the image produced by Azure and put it on my web app page.
Do you know if there is a way to send the image somewhere?
To save the images into Azure Blob Storage with R, you need to do two steps: get the images from the R Device output of Execute R Script, and upload them to Blob Storage.
There are two ways to implement the steps above.
You can publish the experiment as a webservice, then get the images as base64-encoded strings from the response of the webservice request and use the Azure Blob Storage REST API with R to upload the images. Please refer to the article How to retrieve R data visualization from Azure Machine Learning.
You can directly add a module in C# to get & upload the images from the output of Execute R Script. Please refer to the article Accessing a Visual Generated from R Code in AzureML.
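If the upload step itself is the unfamiliar part, here is a minimal sketch of pushing a saved PNG to Blob Storage with the azure-storage-blob Python SDK instead of R or C# (the connection string, container name, and file names are placeholders):

from azure.storage.blob import BlobServiceClient

# Placeholders: use your storage account's connection string and container.
CONNECTION_STRING = "<your-storage-connection-string>"
CONTAINER = "experiment-plots"

def upload_plot(local_path, blob_name):
    service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
    blob = service.get_blob_client(container=CONTAINER, blob=blob_name)
    with open(local_path, "rb") as f:
        blob.upload_blob(f, overwrite=True)  # upload the PNG produced by the experiment

upload_plot("myplot.png", "myplot.png")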
You can resize the image in the following way:
graphics.off()
png("myplot.png", width = 300, height = 300) ## Create a new plot device with the desired size
plot(data.set)
dev.off()                                    ## Close the device so the PNG is actually written
file.remove(Sys.glob("*rViz*png"))           ## Get rid of the default rViz file