I am using GLFW and I would like to know how to toggle full-screen windowed mode. Not changing the resolution, but instead setting the window to be on top and without decoration. If GLFW is not capable of doing this, then what cross platform library do you suggest to achieve this?
You can tell GLFW to open your window fullscreen:
glfwOpenWindow( width, height, 0, 0, 0, 0, 0, 0, GLFW_FULLSCREEN )
As far as I know, you would have to close and reopen the window to switch between windowed and fullscreen mode.
To keep GLFW from changing the screen resolution, you can use glfwGetDesktopMode to query the current desktop resolution and colour depth and then pass those into glfwOpenWindow:
// get the current desktop screen resolution and colour depth
GLFWvidmode desktop;
glfwGetDesktopMode( &desktop );

// open the window at the desktop resolution and colour depth
if ( !glfwOpenWindow(
        desktop.Width,
        desktop.Height,
        desktop.RedBits,
        desktop.GreenBits,
        desktop.BlueBits,
        8,  // alpha bits
        32, // depth bits
        0,  // stencil bits
        GLFW_FULLSCREEN
     ) ) {
    // failed to open the window: handle it here
}
Since version 3.2:
Windowed mode windows can be made full screen by setting a monitor
with glfwSetWindowMonitor, and full screen ones can be made windowed
by unsetting it with the same function.
http://www.glfw.org/docs/latest/window.html
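A minimal sketch of such a toggle with GLFW 3.2+ (assuming you track the saved windowed geometry yourself; choosing the monitor's current desktop video mode gives the "windowed full screen" behaviour the question asks for):

static int savedX, savedY, savedW, savedH;

void toggleFullscreen(GLFWwindow *window)
{
    if (glfwGetWindowMonitor(window) != NULL) {
        // Currently full screen: restore the saved windowed geometry.
        glfwSetWindowMonitor(window, NULL, savedX, savedY, savedW, savedH, 0);
    } else {
        // Save the windowed geometry, then switch to the monitor's
        // current (desktop) video mode so the resolution is unchanged.
        glfwGetWindowPos(window, &savedX, &savedY);
        glfwGetWindowSize(window, &savedW, &savedH);
        GLFWmonitor *monitor = glfwGetPrimaryMonitor();
        const GLFWvidmode *mode = glfwGetVideoMode(monitor);
        glfwSetWindowMonitor(window, monitor, 0, 0,
                             mode->width, mode->height, mode->refreshRate);
    }
}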
I have a camera view in my app with a resizable bounding box.
After taking the image I want to be able to keep only the part of the image that was inside the box, so I used ImageEditor from this react-native library.
The issue is that I am not getting consistent results with the cropping. I currently have the following values:
boxX: starting X position of the bounding box; boxY: starting Y position of the bounding box; boxWidth: width of the bounding box; boxHeight: height of the bounding box.
I used the following code at first:
ImageEditor.cropImage(image.uri, {
  offset: { x: boxX, y: boxY },
  size: { width: boxWidth, height: boxHeight },
});
This gives a very pixelated and very wrong crop of the image, and I don't know why. I then added some calculations with new variables, such as the image width and height and the device's width and height, and came up with this code:
ImageEditor.cropImage(data.uri, {
  offset: {
    x: (boxX / deviceWidth) * data.width,
    y: (boxY / deviceHeight) * data.height,
  },
  size: {
    width: (boxWidth / deviceWidth) * imageWidth,
    height: (boxHeight / deviceHeight) * imageHeight,
  },
});
This is much better, but the cropping is still wrong on Android, while on iOS it seems to work fine and accurately. My question is how I can achieve this; please let me know if there are any calculations I should do to get consistent results.
For image cropping I think you should try:
1) https://github.com/ivpusic/react-native-image-crop-picker
It is more widely used, looks better maintained, and could simplify your work.
or
2) a picker and https://github.com/prscX/react-native-photo-editor
if you want more complicated editing.
or
3) if you are satisfied with your current library on iOS, use one of the two above only for Android.
Note: this is a known issue with react-native-image-editor, especially on Android. Discussions and possible workarounds that work on some devices can be found here:
https://github.com/callstack/react-native-image-editor/issues/54#issuecomment-754688978
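For example, with option 1 the crop UI and the coordinate math are handled natively on both platforms. A sketch based on that library's README (image.uri is the photo from your camera view):

import ImagePicker from 'react-native-image-crop-picker';

// Open the library's native crop UI on the captured photo; it resolves
// with the cropped file's path and dimensions.
ImagePicker.openCropper({
  path: image.uri,
  width: 300,                 // desired output size; adjust to your needs
  height: 400,
  freeStyleCropEnabled: true, // resizable crop box on Android
})
  .then(cropped => console.log(cropped.path, cropped.width, cropped.height))
  .catch(err => console.warn(err));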
I have the following constraints, which work perfectly fine in Chrome on desktop (simulating a mobile resolution):
const constraints = {
  audio: false,
  video: {
    width: screen.width,
    height: screen.height
  }
};
navigator.mediaDevices.getUserMedia(constraints).then(stream => {});
However, when actually trying this on an iPhone in Safari, the camera doesn't respect the constraints at all and the video gets super small or distorted. Removing the width/height from the constraints gives a better ratio, but not full screen at all, just centered.
I've also tried min/max constraints, without luck.
Is there any way to get this working on iPhones?
I have built a few AR websites which are mobile-first. When you request a resolution, the browser checks whether that resolution exists and, if it doesn't, decides whether it should emulate the feed for you. Not all browsers do emulation (even though it is part of the spec), which is why it may work in some browsers and not others. Safari won't emulate the resolution you are asking for with the camera you have picked (I presume the front one).
You can read more about this here (a different problem, but it provides a deeper explanation): Why the difference in native camera resolution -vs- getUserMedia on iPad / iOS?
Solution
The way I tackled this is:
Without canvas
Ask for a 720p feed, falling back to a 480p feed if 720p gives an over-constrained error (see the sketch after these steps). This will work cross-browser.
Have a div element which is 100% width and height, fills the screen, and has overflow set to hidden.
Place the video element connected to the MediaStream inside it, and make it 100% of the container's height. The parent div's overflow: hidden will in effect crop the sides, with no feed distortion.
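A sketch of that fallback, assuming an exact-resolution request so an unsupported mode fails instead of being silently approximated (names are illustrative):

async function getCameraStream() {
  const request = (w, h) =>
    navigator.mediaDevices.getUserMedia({
      audio: false,
      // 'exact' makes an unsatisfiable resolution throw OverconstrainedError
      video: { width: { exact: w }, height: { exact: h } }
    });
  try {
    return await request(1280, 720);   // try 720p first
  } catch (err) {
    if (err.name === 'OverconstrainedError') return request(640, 480); // fall back to 480p
    throw err;
  }
}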
With canvas
Do not show the video element; use a canvas as the video view. Make the canvas the same size as your screen, or the same aspect ratio scaled with CSS to fill the screen (the latter is more performant).
Calculate the top, left, width and height values for drawing the video into the canvas, making sure your calculation centers the video, and make sure you do a cover calculation rather than a fill. The aim is to crop the parts of the video which do not need to be shown (i.e. like the object-fit values described in https://css-tricks.com/almanac/properties/o/object-fit). There is an example of how to draw video into a canvas here: http://html5doctor.com/video-canvas-magic/
This will give you the effect you are looking for. Production examples of something similar:
https://www.maxfactor.com/vmua/
https://demo.holitionbeauty.com/
P.s. when I get time I can code an example, short on hours this week.
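In the meantime, here is a minimal sketch of the cover-style canvas draw described above (assuming a playing video element and a canvas already sized to the screen; names are illustrative):

const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

function drawCover(video) {
  // Scale so the video covers the canvas, then center it; the overflow
  // is cropped at the canvas edges (like CSS object-fit: cover).
  const scale = Math.max(canvas.width / video.videoWidth,
                         canvas.height / video.videoHeight);
  const w = video.videoWidth * scale;
  const h = video.videoHeight * scale;
  ctx.drawImage(video, (canvas.width - w) / 2, (canvas.height - h) / 2, w, h);
  requestAnimationFrame(() => drawCover(video));
}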
There are a couple of quirks on mobile gUM() you need to know about.
First, if the device is in portrait orientation, things work weirdly and you need to swap the width and height. So, let's say you're on a 480x640 device (do those even exist? who cares? it's an example). To get the appropriately sized video you need:
const constraints = {
  audio: false,
  video: {
    width: screen.height,
    height: screen.width
  }
};
I can't figure out exactly why it's like this. But it is. On iOS and Android devices.
Second, it's hard to get the cameras to deliver exactly the same resolution as the device screen size. I tweak the width and height to make them divisible by eight and I get a decent result.
Third, I figure out the sizes I need by putting a <video ...> tag in my little web app with CSS that makes it fill the browser screen, then querying its size with:
const rect = videoElement.getBoundingClientRect()
const width = rect.width > rect.height ? rect.width : rect.height
const height = rect.width > rect.height ? rect.height : rect.width
This makes the mobile browser do the work of figuring out what size you actually need, and adapts nicely to the browser's various toolbars.
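Putting those quirks together, a sketch of the measure-then-request flow (the divisible-by-eight rounding is the tweak mentioned above; names are illustrative):

// Measure the full-screen <video> element, swap width/height for portrait,
// and round each dimension down to a multiple of eight.
const toMultipleOf8 = n => Math.floor(n / 8) * 8;
const rect = videoElement.getBoundingClientRect();
const constraints = {
  audio: false,
  video: {
    width: toMultipleOf8(Math.max(rect.width, rect.height)),
    height: toMultipleOf8(Math.min(rect.width, rect.height))
  }
};
const stream = await navigator.mediaDevices.getUserMedia(constraints);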
Is it possible to clear a preview window after the camera preview is done? I am using MFCaptureEngine, calling m_pPreview->SetRenderHandle(m_hwnd) to render the video. But when I stop the video I am not able to draw on the window; the last frame from the camera remains. I need to fill the window with a black brush and draw some text, but the image from the camera cannot be overdrawn.
It is not clear from your question what MFCaptureManager is, but from the code SetRenderHandle(m_hwnd) I can see that you use IMFCapturePreviewSink::SetRenderHandle. I faced a similar problem some time ago, and it is related to the difference between the old window system, which has existed since WinXP, and the current window system introduced with Vista. The code hands the window over to the renderer by calling IMFCapturePreviewSink::SetRenderHandle - for IMFCapturePreviewSink that renderer is DirectX 11 - and DirectX 11 gets FULL access to the window, which is then managed by the current window system. As a result, any attempt to fill the window with a black brush and draw some text via the old Windows API of the Win95-XP generation does nothing, because the window handle's context is LOCKED by DirectX 11.
There are three ways to resolve this problem:
Write a new UI with the new Microsoft DirectComposition API, which is based on DirectX 11, and set it via IMFCapturePreviewSink::SetRenderSurface.
Create an EVR media sink with MFCreateVideoRenderer - it creates a DirectX 9 video renderer which is compatible with the old Windows API of the Win95-XP generation - and set this IMFMediaSink via IMFCapturePreviewSink::SetCustomSink.
Write your own video renderer on top of DirectX 9 - see for example MFCaptureD3D/device.cpp - and draw the raw IMFSamples from the IMFCapturePreviewSink::SetSampleCallback callback.
Regards.
I've implemented it this way:
// Sink
CComPtr<IMFCaptureSink> pSink;
m_pEngine->GetSink(MF_CAPTURE_ENGINE_SINK_TYPE_PREVIEW, &pSink);
CComPtr<IMFMediaSink> pCustomSink;
::MFCreateVideoRenderer(IID_IMFMediaSink, (void**)&pCustomSink);
CComPtr<IMFCapturePreviewSink> pPreviewSink;
pSink.QueryInterface(&pPreviewSink);
pPreviewSink->SetCustomSink(pCustomSink);
// preview
pSink.QueryInterface(&m_pPreview); // or pPreviewSink.QueryInterface(&m_pPreview)
m_pPreview->SetRenderHandle(m_hwndPreview);
But the behaviour is still the same (the screen cannot be redrawn after the preview is stopped).
I'm capturing the screen on OS X with:
capturedImage = CGDisplayCreateImageForRect(displayID, CGRectMake(point.x - 4, point.y - 4, 8, 8));
This returns the portion of the screen under the cursor. Later on I'm setting a custom image cursor with:
[[[NSCursor alloc] initWithImage:img hotSpot:NSMakePoint(4, 4)] set];
The problem occurs after I set the cursor and attempt to capture the screen again: the cursor is included in the framebuffer. This makes the captured image the same as the image I've set as the cursor. I've tried hiding the cursor, then capturing the screen and then showing it again, but that doesn't work and it also makes the cursor flicker.
It gets even stranger: on one particular laptop the cursor image is not captured, but on other laptops running the same OS (Mountain Lion, Snow Leopard) the cursor image is captured.
What can cause the cursor to be included on the frame buffer? Is there a way to guarantee a screen capture without the custom cursor?
Thanks
It depends on who renders the cursor and whether it is purely hardware-accelerated or software-rendered; that differs from hardware to hardware and also between OS versions. Do what markk suggested: set it to the default before capturing.
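A hedged sketch of that workaround, reusing the calls from the question:

// Restore the default arrow cursor, grab the frame, then re-apply the custom cursor.
[[NSCursor arrowCursor] set];
CGImageRef capturedImage = CGDisplayCreateImageForRect(displayID,
    CGRectMake(point.x - 4, point.y - 4, 8, 8));
[[[NSCursor alloc] initWithImage:img hotSpot:NSMakePoint(4, 4)] set];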
I would like to emulate the default shadow applied to NSWindows with a CALayer shadow. I can't quite figure out the exact values for the following properties, though:
theLayer.shadowOffset = ?;
theLayer.shadowRadius = ?;
theLayer.shadowOpacity = ?;
I assume that the shadowColor is black (the default).
Does anyone have an idea what those values could be to get a native (Snow) Leopard window shadow?
EDIT:
To clarify, I'm asking if there's any system API that can give me those values. I don't want to hard-code them, as they have changed in the past and will probably change again at some point in the future.
First, it depends on whether a window is in the background or the foreground. Foreground windows have a bigger shadow than background windows.
For foreground windows you could try the following values:
Color: black
X-Offset: 0
Y-Offset: 4 pixels (downwards)
Opacity: 100%
Radius/Blur: 20 pixels
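Translated into CALayer properties, those values would look something like this (a sketch; the sign of the Y offset depends on whether the layer's geometry is flipped):

theLayer.shadowColor = CGColorGetConstantColor(kCGColorBlack);
theLayer.shadowOffset = CGSizeMake(0.0, -4.0); // 4 px downwards in unflipped coordinates
theLayer.shadowRadius = 20.0;
theLayer.shadowOpacity = 1.0;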
A word of warning: the window shadow values have changed before (from Leopard to Snow Leopard), so hardcoding values will likely end up looking off in future OS versions.