How to limit the frame rate in Vulkan

I know that the present mode of the swap chain can be used to sync the frame rate to the refresh rate of the screen (with VK_PRESENT_MODE_FIFO_KHR for example).
But is there a way of limiting the frame rate to a fraction of the monitor refresh rate? (e.g. I want my application to run at 30 FPS instead of 60.)
In other words, is there a way of emulating what wglSwapIntervalEXT(2) does for OpenGL?

Vulkan is a low-level API. It tries to give you the tools you need to build the functionality you want.
As such, when you present an image, the API assumes that you want the image presented as soon as possible (within the restrictions of the swapchain). If you want to delay presentation, then you delay presentation. That is, you don't present the image until it's near the time to present a new image, based on your own CPU timings.
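As an illustration of that kind of CPU-side pacing, here is the timing logic (sketched in JavaScript for readability; a real Vulkan renderer would run this loop in C/C++, with renderFrame() and presentFrame() standing in for your command submission and the vkQueuePresentKHR call):
const TARGET_FPS = 30;
const FRAME_INTERVAL_MS = 1000 / TARGET_FPS;
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

let nextPresentTime = performance.now();

async function frameLoop() {
  for (;;) {
    renderFrame();                         // record and submit this frame's work
    nextPresentTime += FRAME_INTERVAL_MS;  // absolute deadline, so error doesn't accumulate
    const waitMs = nextPresentTime - performance.now();
    if (waitMs > 0) {
      await sleep(waitMs);                 // the "delay presentation" step
    } else {
      nextPresentTime = performance.now(); // fell behind: resynchronize the deadline
    }
    presentFrame();                        // in real code: vkQueuePresentKHR
  }
}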

Related

Can I use only GPS to plot my current position instead of the cellular network

I am working on a scheduling and routing app that uses Google Maps to create my route to the appointment location. Upon arriving at the appointment location, the app sets up a geo-fence around that location. The app is designed to send an ETA text to the next appointment when I leave the geo-fence.
Since I am likely to drive through the geo-fence while missing the address or looking for a parking spot, I have a timer set for 3 minutes: I must stay inside the geo-fence for 3 minutes, and once the 3 minutes elapse, the app sends my ETA text to the next appointment as soon as I leave the fence.
My location on the map appears to be determined by triangulation of the cell towers. Here's the problem: I'm testing in a rural area with poor cell signal and few towers, which reduces the accuracy of my position as determined by tower triangulation and causes the icon representing my location to bounce in and out of the geo-fence without completing the 3-minute time requirement. This doesn't happen in urban areas, where there are more towers and a strong network. Is there a way for me to use only the GPS feature of the phone/Google Maps and stop the cell network from trying to position me?

Increase WebRTC frame rate to allow virtual desktop view

Is it possible to increase the WebRTC frame rate during screen sharing to allow usable viewing of a virtual desktop in VR? Current testing, with the request set to 30, shows the frame rate at about 20 fps for a desktop-to-desktop connection, 17 in A-Frame, and 13 when connecting to an Oculus Quest 2. At those speeds, the mouse, controlled by the source computer, lags behind its actual position in the shared screen view just enough to make it very difficult to use. Here is the current code that tries to set the frame rate:
const displayMediaOptions = {
  video: {
    frameRate: 30   // requested capture frame rate
  }
};
window.displayMediaStream = await navigator.mediaDevices.getDisplayMedia(displayMediaOptions);
I also tried minFrameRate and increasing the bit rate in the peer connection, per other posts, to no effect. Most of the posts discuss how to reduce the bit rate, and some, for example https://github.com/ant-media/Ant-Media-Server/wiki/How-to-improve-WebRTC-bit-rate%3F recommend 10-20 as the optimal frame rate, but can it be forced higher if necessary without breaking everything, or is another solution needed? Other virtual desktop solutions require a native app and/or cable link to the source computer - is that the solution?
WebRTC implements congestion control: it dynamically probes the network, and determines a rate that is safe to use. If the probed rate is too low, it will reduce the frame rate, reduce the resolution, or reduce the video quality.
Short of using a faster network, there are three ways to increase the frame rate at the cost of resolution or quality (all three are sketched in the snippet after this list):
you may decrease the capture resolution by passing the video.height and video.width constraints to getDisplayMedia;
you may request that the video be downsampled, by passing the scaleResolutionDownBy constraint to RTCRtpSender.setParameters;
you may request that rate control sacrifice resolution and quality rather than frame rate by setting the contentHint property of the video MediaStreamTrack.
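For concreteness, here is what the three knobs look like in browser JavaScript (a sketch, not production code: peerConnection stands for your RTCPeerConnection, and the numeric values are illustrative rather than recommendations):
// 1. Capture at a lower resolution (and the desired frame rate) up front.
const stream = await navigator.mediaDevices.getDisplayMedia({
  video: { width: 1280, height: 720, frameRate: 30 }
});
const [track] = stream.getVideoTracks();

// 2. Or ask the encoder to downsample the outgoing video.
const sender = peerConnection.getSenders().find((s) => s.track === track);
const params = sender.getParameters();
params.encodings[0].scaleResolutionDownBy = 2; // send at half width/height
await sender.setParameters(params);

// 3. Mark the track as motion content, so that under congestion the
// encoder preserves frame rate and sacrifices resolution/quality instead.
track.contentHint = 'motion';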

Getting HLS livestream in sync across devices

We are currently using ExoPlayer for one of our applications, which is very similar to the HQ Trivia app, and we use HLS as the streaming protocol.
Due to the nature of the game, we are trying to keep all the viewers of this stream to have the same latency, basically to keep them in sync.
We noticed that with the current backend configuration the latency is somewhere between 6 and 10 seconds. Based on this, we assumed it would be safe to “force” the player to play at a larger delay (15 seconds, further off the live edge), thereby achieving the same (constant) delay across all devices.
We’re using the EXT-X-PROGRAM-DATE-TIME tag to get the server time of the currently playing content, and we also have a master clock with the current time (NTP). We constantly compare the two clocks to check the current latency, pause the player until it reaches the desired delay, and then resume playback.
The problem with this solution is that the latency can get worse (accumulate delay) over time, and we have no choice but to restart the playback and redo the steps above whenever the delay steps over a specified threshold. Before restarting the player we also try to slightly increase the playback speed until it gets back to the specified delay.
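To make the intended behaviour concrete, this is the drift-correction loop we are effectively running (JavaScript-style pseudocode: ntpNowMs() and playingProgramDateTimeMs() stand for our NTP master clock and the EXT-X-PROGRAM-DATE-TIME of the content being rendered, the player methods map onto the corresponding ExoPlayer calls such as setPlaybackParameters, and the constants are illustrative):
const TARGET_DELAY_MS = 15000;    // the constant latency we want on every device
const RESYNC_THRESHOLD_MS = 5000; // beyond this drift we restart playback

function correctDrift(player) {
  const latencyMs = ntpNowMs() - playingProgramDateTimeMs();
  const driftMs = latencyMs - TARGET_DELAY_MS;

  if (Math.abs(driftMs) > RESYNC_THRESHOLD_MS) {
    restartAndReposition(player); // last resort: re-buffer and redo the initial pause step
  } else if (driftMs > 250) {
    player.setSpeed(1.05);        // behind the target: catch up slightly faster
  } else if (driftMs < -250) {
    player.pause();               // ahead of the target: hold until we fall back to it
  } else {
    player.setSpeed(1.0);         // within tolerance: play normally
    player.play();
  }
}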
The ExoPlayer instance is set up with a DefaultLoadControl, DefaultRenderersFactory and DefaultTrackSelector, and the media source uses a DefaultDataSourceFactory.
The server-side configuration is as follows:
cupertinoChunkDurationTarget: 2000 (default: 10000)
cupertinoMaxChunkCount: 31 (default: 10)
cupertinoPlaylistChunkCount: 15 (default: 3)
My first question would be whether this is even achievable with a protocol like HLS. Why does the player drift away, accumulating more and more delay?
Is there a better setup for the ExoPlayer instance considering our specific use case?
Is there a better way to achieve a constant playback delay across all the playing devices? How important are the parameters on the server side in trying to achieve such a behaviour?
I would really appreciate any kind of help because I have reached a dead-end. :)
Thanks!
The only solution for this is provided by:
https://netinsight.net/product/sye/
Their solution includes frame-accurate sync with no drift and stateful ABR. This probably can’t be done with HTTP-based protocols, hence their solution is built on UDP transport.

WebRTC video and photo at same time

I'm working on an application that transmits video in low quality using WebRTC. Periodically I want to send a single high-resolution frame from the same camera.
When I try to acquire another stream using getUserMedia I get the same low-quality one, and when I try to pass constraints to force a higher resolution, the operation fails with an OverconstrainedError (even though it works fine when no other stream is active).
Is it even possible to have multiple streams with different parameters from the same device at the same time? Or is it possible to acquire a high-resolution image without requesting a new stream?
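For reference, this is the shape of the second request that fails for us while the low-quality stream is live (the resolutions and the deviceId plumbing are illustrative):
// The low-quality stream currently being sent over WebRTC.
const lowStream = await navigator.mediaDevices.getUserMedia({
  video: { width: 640, height: 360 }
});
const deviceId = lowStream.getVideoTracks()[0].getSettings().deviceId;

// Second request from the same camera, forcing a higher resolution.
// While lowStream is active, this rejects with OverconstrainedError.
const highStream = await navigator.mediaDevices.getUserMedia({
  video: {
    deviceId: { exact: deviceId },
    width: { exact: 3840 },
    height: { exact: 2160 }
  }
});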

iBeacon ranging latency

I have been playing around with the new iBeacons in iOS 7. I have one device set up as a beacon, and the other device ranging to detect when I am near, far, immediate, etc. I'd like to know very quickly when I cross between these ranges. Is there any way to adjust the latency? I find that I have to move my device around very slowly or I will not know when I cross these thresholds.
No, you cannot adjust the beacon latency. As Apple says in the Region Monitoring Guide:
To prevent spurious notifications, iOS does not deliver region notifications until certain threshold conditions are met. Specifically, the user’s location must cross the region boundary and move away from that boundary by a minimum distance and remain at that minimum distance for at least 20 seconds before the notifications are reported.
Apple does not define what the latency is, but it seems it's not fast enough for your application.
As a tradeoff, you can implement beacon ranging yourself using Core Bluetooth: listen for the CBPeripheral advertisement events while scanning, and range using RSSI:
centralManager:didDiscoverPeripheral:advertisementData:RSSI:
If you are using a custom beacon, such as the Radius Networks VirtualiBeacon VM image, you can adjust the frequency of the advertisements. The flip side is that your app must run in the foreground, as opposed to Core Location, which delivers beacon events even when your app is not running.
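Whichever API delivers the advertisements, the ranging step itself is just a distance estimate from RSSI. Here is a minimal sketch of that estimate (in JavaScript for brevity; the calibrated txPower, the path-loss exponent, and the bucket thresholds are assumptions you would tune for your beacons and environment):
// Log-distance path-loss model: distance in meters from an RSSI reading.
// txPower is the beacon's calibrated RSSI at 1 m; n is the path-loss
// exponent (about 2 in free space, higher indoors).
function estimateDistanceMeters(rssi, txPower = -59, n = 2.5) {
  return Math.pow(10, (txPower - rssi) / (10 * n));
}

// Classify into Core Location-style proximity buckets.
function proximity(rssi) {
  const d = estimateDistanceMeters(rssi);
  if (d < 0.5) return 'immediate';
  if (d < 3.0) return 'near';
  return 'far';
}

// Raw RSSI is noisy: smooth it (e.g. with a short moving average over the
// readings from each advertisement callback) before classifying.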