Qt 6: Using QVideoSink with QCamera to process every frame

I'm digging through the Qt 6.2.2 camera example (camera.pro) and found the following line of code:
m_captureSession.setVideoOutput(ui->viewfinder);
This is the way frames are output to the UI.
I know that QVideoSink can be used to grab and process every frame.
I have replaced this line with m_captureSession.setVideoSink(&videoSink);
where videoSink is an instance of my class:
class MyVideoSinc : public QVideoSink
{
    Q_OBJECT
public:
    bool videoframeReady = false;

    MyVideoSinc()
    {
        connect(this, &QVideoSink::videoFrameChanged, this, &MyVideoSinc::hvideoFrameChanged);
    }

public Q_SLOTS:
    void hvideoFrameChanged(const QVideoFrame &frame)
    {
        videoframeReady = true;
    }
};
hvideoFrameChanged is raised for every frame in the Windows build, but only once in the Android application.
What is wrong here? How do I grab and process frames from QCamera in Qt 6? I don't want to show the frames with ui->viewfinder.
I need to process the frames myself.

I made a small repository to show how to process frames with QVideoSink in Qt 6. Have a look at it.
I use this approach in my own applications and have tested it on Android 9; everything works like a charm.
But be careful: on Linux the FPS can differ a lot from launch to launch. Sometimes the FPS is normal, but very often it is very low, only 1 FPS. This is a Qt 6 bug, and 6.2.3 is still affected. Have a look at the bug report.
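
For completeness, here is a minimal sketch of the general pattern (my own summary, not the exact code from the repository above): attach your own QVideoSink to the QMediaCaptureSession instead of a viewfinder and react to QVideoSink::videoFrameChanged. The FrameProcessor class and the processFrame slot are names I made up for illustration.

// Minimal sketch (Qt 6.2+). FrameProcessor and processFrame are illustrative names.
#include <QCamera>
#include <QImage>
#include <QMediaCaptureSession>
#include <QMediaDevices>
#include <QObject>
#include <QVideoFrame>
#include <QVideoSink>

class FrameProcessor : public QObject
{
    Q_OBJECT
public:
    explicit FrameProcessor(QObject *parent = nullptr)
        : QObject(parent), m_camera(QMediaDevices::defaultVideoInput())
    {
        // Route the camera into the capture session and attach our own sink
        // instead of a viewfinder widget.
        m_session.setCamera(&m_camera);
        m_session.setVideoSink(&m_sink);

        // videoFrameChanged is emitted for every frame delivered to the sink.
        connect(&m_sink, &QVideoSink::videoFrameChanged,
                this, &FrameProcessor::processFrame);

        m_camera.start();
    }

private Q_SLOTS:
    void processFrame(const QVideoFrame &frame)
    {
        // Convert the frame to a QImage for processing.
        const QImage image = frame.toImage();
        if (!image.isNull()) {
            // ... process the image here ...
        }
    }

private:
    QMediaCaptureSession m_session;
    QCamera m_camera;
    QVideoSink m_sink;
};

QVideoFrame::toImage() is the simplest way to get at the pixels; for performance-critical processing you can instead map() the frame and read the plane data directly, which avoids the extra conversion.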

Is it possible to make chromium instantly reload on tab crash?

We are running Chromium 83 on an embedded system and experience some random tab crashes.
Is it possible to directly reload a tab in Chromium if it crashes (without showing the "Aw, Snap!" page)?
We're currently trying to patch the source code to get it working, and these are our approaches so far
(both in sad_tab_helper.cc -> SadTabHelper::RenderProcessGone()):
Approach 1:
if (SadTab::ShouldShow(status)) {
  web_contents()->GetController().Reload(content::ReloadType::NORMAL, true);
}
Approach 2:
if (SadTab::ShouldShow(status)) {
  content::RunOrPostTaskOnThread(
      FROM_HERE,
      content::BrowserThread::ID::UI,
      base::BindOnce(
          [](content::WebContents* contents) {
            contents->GetController().Reload(content::ReloadType::NORMAL, true);
          },
          std::move(web_contents())));
}
Both changes ultimately lead to a crash of the entire browser.
It seems that Chromium tries to reload the page but, as said, it then crashes. The logs we get are:
[1663:1671:0321/090914.211931:VERBOSE1:network_delegate.cc(32)] NetworkDelegate::NotifyBeforeURLRequest: http://127.0.0.1/login
[1663:1671:0321/090919.082378:ERROR:broker_posix.cc(40)] Recvmsg error: Connection reset by peer (104)
After that, the entire browser crashes. Is there a way to do what we want, or are we at a dead end here?
The second approach is suboptimal; SadTabHelper::RenderProcessGone only runs on the UI thread anyway.
Initiating navigation while handling notifications from any WebContentsObserver (SadTabHelper is a WebContentsObserver) must be avoided; it leads to problems. Both approaches attempt to do exactly that. I suppose using base::PostTask instead of content::RunOrPostTaskOnThread should help:
if (SadTab::ShouldShow(status)) {
  base::PostTask(
      FROM_HERE,
      {content::BrowserThread::UI},
      base::BindOnce(
          [](content::WebContents* contents) {
            contents->GetController().Reload(content::ReloadType::NORMAL, true);
          },
          web_contents()));
}
My reputation is not good enough to comment, so I can't leave this comment where it should be.
If you come across this particular solution, the required includes are:
#include "base/task_scheduler/post_task.h"
#include "content/public/browser/browser_thread.h"
#include "base/bind.h"

HoloLens application stopping at splash screen

I'm working on a Unity project for HoloLens that uses the camera to capture pictures, sends them to a photo recognition API, and displays the result. The project works perfectly fine in Unity, but not on the emulator/HoloLens.
Unfortunately, I wrote a lot of code at once, so I don't know at what point this problem started. The problem shows up after building the project and running it on the HoloLens/emulator in Debug mode. On the HoloLens, I see the starting window (the one you see after you open any application). After I place it, I see "End showing splash screen." in the Output window in Visual Studio, and it just doesn't go any further (it doesn't freeze either, it just does nothing).
I don't know where it's coming from, since no exceptions are thrown, but I suspect the camera is the cause. Earlier, I had to comment out this line of code:
transform.position = Camera.main.ScreenToWorldPoint(new Vector3((CameraManager.Resolution.width * .5f), (CameraManager.Resolution.height * .5f), 10));
because the function ScreenToWorldPoint was throwing the following exception:
Screen position out of view frustum (screen pos 0.000000, 0.000000, 10.000000) (Camera rect 0 0 0 0)
As you can see, it says that the camera rect's size is 0. I even tried logging the camera's dimensions directly to make sure (Debug.Log(Camera.main.pixelWidth + ", " + Camera.main.pixelHeight)), and sure enough, they were (0, 0) on the HoloLens/emulator.
I made sure that webcam is supported and that my camera settings are all set, but that didn't help either.
So I'm not sure if that's the cause of the problem or simply a symptom. And I can't start anywhere, since neither the Output nor the Error window shows anything wrong. Any help or suggestions would be greatly appreciated.
Thanks for reading!
Edit: Here's the entire output log from beginning to end.
Edit 2: I don't know if this is significant, but if I pause execution (in Visual Studio), it always seems to be at Build/ProjectName/App.cs => Line 78:
[MTAThread]
static void Main(string[] args)
{
    var app = new App();
    CoreApplication.Run(app); // <===== Here
}
You might want to check all of your Start() methods; you might have some code that is CPU intensive. Even if it runs smoothly in Unity, that doesn't mean it will run easily on the HoloLens, since its CPU is not powerful.
Also, to avoid any camera problems, make sure to use the Camera prefab from this repository:
https://github.com/Microsoft/MixedRealityToolkit-Unity
Those are just some thoughts, hope it helps!
It turns out I didn't enable "Virtual Reality Supported" under Other Settings in Player Settings. It's really dumb, but I hope this helps someone.

ArcGIS for Java: Event when layer is completely loaded

I'm new to development with ArcGIS for Java. Currently, I've got an application which creates a JMap and loads 3 layers:
ArcGISTiledMapServiceLayer (World_Imagery)
ArcGISTiledMapServiceLayer (World_Transportation)
GraphicsLayer to display some GPS-Points as Polylines
When all the map content is fully loaded, I want to save the map as an image. At the moment this is done by writing a BufferedImage to a file. Because in the future the application should run automatically in the background, without showing the JFrame, I need some sort of event signaling when all content is loaded.
I searched the API reference, but couldn't find anything.
Is there any way to be notified at the exact moment when all the work is done? Is there a more elegant way to save the map as an image?
Thanks in advance!
For all people encountering the same problem: there is the ProgressEvent, "which is fired by the map and indicates the draw progress of the Map's layer collection".
Therefore, saving the fully loaded map as an image is possible once the progress has reached 100%:
jMap.addProgressEventListener(new ProgressEventListener()
{
    @Override
    public void progress(ProgressEvent event)
    {
        System.out.println(event.getProgress());
        if (event.getProgress() == 100)
        {
            SaveMap((JComponent) appWindow.getComponent(0));
        }
    }
});

MediaPlayer issues - audio file only plays back once

I'm trying to implement a basic MediaPlayer in an app, and have the button change states depending on whether the clip is playing, playback is completed, or playback is manually interrupted (by pressing the same button).
With the code below, I get the following results:
On first load, the audio plays back fine.
If I press the ImageButton a second time during playback, the playback stops. When I press it again, the playback resumes from the timestamp at which it was stopped (I thought this was strange behaviour, more typical of pause()).
Once playback is completed, the button change works perfectly; however, I cannot replay the audio file a second time. When I press start, it starts playing back, then immediately transitions to PlaybackCompleted, without actually playing the audio.
I've been scouring other posts, Google and the Android documentation, but haven't found a solution as yet.
So far I have also tried the following:
setLooping(true); - this had no effect at all, other than the setOnCompletionListener never being reached. The audio did not replay at all.
In the onCompletion method, setting seekTo() to several different values (0, 100) and using log messages including getCurrentPosition() to confirm it was actually doing it; but even when this confirms that it's starting from position 0 or 100, the result is still the same (no audio is heard and completion occurs immediately).
In the onCompletion method, several combinations of calling stop(), prepareAsync() or even prepare(). The results were the same; however, on subsequent playback attempts (i.e. attempts 2, 3, etc.), when the onCompletion method was called, I started getting various errors for calling the stop() / prepare() methods in the incorrect state.
final ImageButton pauseButton = (ImageButton) rootView.findViewById(R.id.playButton1);
final MediaPlayer mediaPlayer = MediaPlayer.create(getActivity().getApplicationContext(), R.raw.ch01_01);

mediaPlayer.setOnCompletionListener(new MediaPlayer.OnCompletionListener() {
    @Override
    public void onCompletion(MediaPlayer mp) {
        pauseButton.setImageResource(R.drawable.play_button);
    }
});

pauseButton.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        if (mediaPlayer.isPlaying()) {
            mediaPlayer.stop();
            mediaPlayer.prepareAsync();
            pauseButton.setImageResource(R.drawable.play_button);
        } else {
            mediaPlayer.start();
            pauseButton.setImageResource(R.drawable.stop_button);
        }
    }
});
Any help would be appreciated!
P.S. I'm using all of this code in the onCreateView method of a Fragment in my application, just in case anyone thinks that might be relevant.
Apparently this issue is specific to my Samsung Galaxy S2, as the same code works perfectly as expected on my Nexus 7 (2013) running Lollipop 5.0.1.
The issue on the Galaxy S2 includes the following:
Once a MediaPlayer is in the PlaybackCompleted state, it will not play the same file a second time when play is pressed.
Attempting to run methods in the PlaybackCompleted state (which should be valid according to the Android documentation) seems to do something (e.g. seekTo(0) will actually seek to 0), but the audio file will never play a second time.

Ti.Geolocation callbacks only work when running KitchenSink

I have a problem with Ti.Geolocation that drives me crazy. I'm only using the iOS platform so far. The goal is to get GPS callbacks with the highest possible accuracy when I move around with the phone.
The problem is that I have copied most of the code from geolocation.js in KitchenSink, with the relevant part shown below. It looks OK to me, but the behaviour I get is very strange.
I just don't get regular GPS callbacks when I walk around! The compass, however, works fine and sends me callbacks all the time. I have also tried without subscribing to 'heading' events, but there was no change in GPS behaviour.
There is only one event that can trigger a GPS callback with correct data, and that is running KitchenSink! Switching to KitchenSink and back also gives me a callback with an accuracy between 5 and 10. If I don't do that, my accuracy value can be as high as 1500-2500 (if I get a callback at all, that is).
KitchenSink seems to work fine, but I fail to see what that app does that I do not.
I have turned off Wi-Fi on the phone so it won't interfere. This problem is very frustrating and I have spent three days on it now; can someone please help? I have tried compiling against different SDKs too (normally 2.1.1GA but also down to 1.8.2). No change.
if (locationServicesAvailable()) {
    // APPLICATION LOGIC
    ui.init();

    Ti.Geolocation.purpose = "Get Lat/Long of your current position";
    Ti.Geolocation.accuracy = Ti.Geolocation.ACCURACY_BEST;
    Ti.Geolocation.distanceFilter = 10;
    Ti.Geolocation.frequency = 0; /* as fast as possible */
    Ti.Geolocation.preferredProvider = Ti.Geolocation.PROVIDER_GPS;

    if (Ti.Geolocation.hasCompass) {
        // TURN OFF ANNOYING COMPASS INTERFERENCE MESSAGE
        Ti.Geolocation.showCalibration = false;

        // SET THE HEADING FILTER (THIS IS IN DEGREES OF ANGLE CHANGE)
        // EVENT WON'T FIRE UNLESS ANGLE CHANGE EXCEEDS THIS VALUE
        Ti.Geolocation.headingFilter = 45;

        /*
        Ti.Geolocation.getCurrentHeading(handleCompass);
        Ti.Geolocation.addEventListener('heading', handleCompass);
        */
    }

    Ti.Geolocation.getCurrentPosition(handlePosition);
    Ti.Geolocation.addEventListener('location', handlePosition);

    ui.refresh.addEventListener('click', function(e) {
        Ti.Geolocation.getCurrentPosition(handlePosition);
    });
}
OK, it seems I have found an answer to my problem. It's as simple as setting accuracy to Ti.Geolocation.ACCURACY_NEAREST_TEN_METERS. I'm using the iOS platform, and I'm located in Sweden (in the countryside), if it matters. I don't know if this solution applies to everyone, but I have seen references to this problem (http://developer.appcelerator.com/question/130596/strange-behavior-of-the-gps) and the solution, so I know it exists.
No idea why ACCURACY_BEST doesn't work, but it doesn't for me. I'll test all the other settings too when I get the time, but at least now I can continue developing.