EDSDK LiveView zoom 10x

Using LiveView on an EOS camera is fun and helps to get objects into focus (especially with lenses that do not offer autofocus). Magnifying the LiveView image (stream) really helps with focusing.
On the camera itself, you can magnify the LiveView image 5x and 10x using the button with the magnifying-glass icon. That works well on my 600D.
When programming with the EDSDK, however, I ran into a problem:
It is possible to set the 5x zoom mode for LiveView programmatically.
But I did not succeed with the 10x mode.
Has anyone managed to zoom the LiveView image by more than 5x?
For a successful 5x LiveView zoom I used the following code on my 600D:
// Start LiveView, wait for the stream to appear on the screen and then do:
_iZoomStage = 5;
bool Success = _CameraHandler.SetSetting(EDSDK.PropID_Evf_Zoom, (UInt32)_iZoomStage);
That works fine, BUT:
If you try to set higher zoom factors, it fails: Success is returned as true, but no effect is visible on screen.
Zooming the LiveView on the camera itself works fine at 10x when pressing the "magnifier" button, but programmatically I did not succeed with values greater than 5.
Any ideas on this topic?

Well, thanks a lot for your answers.
In the meantime I implemented the following workaround, which seems to solve the problem: I simply crop and zoom the bitmap during LiveView streaming:
if (_zoomFactorOfEdsdk == true) // zoom factor is one the EDSDK handles itself (1 or 5)
    g.DrawImage(_LiveViewStreamedBmp, _LvOutput);
else // our own zoom factors, which the EDSDK does not support
{
    Int32 newWidth  = (Int32)(_LiveViewStreamedBmp.Width  / _zoomFactor);
    Int32 newHeight = (Int32)(_LiveViewStreamedBmp.Height / _zoomFactor);
    // Crop around the center of the original bitmap
    Int32 xOffset = (_LiveViewStreamedBmp.Width  - newWidth)  / 2;
    Int32 yOffset = (_LiveViewStreamedBmp.Height - newHeight) / 2;
    Rectangle rectSource = new Rectangle(xOffset, yOffset, newWidth, newHeight);
    Rectangle rectTarget = new Rectangle(0, 0, _LiveViewStreamedBmp.Width, _LiveViewStreamedBmp.Height);
    // Draw the cropped region scaled back up to full size (the zoomed output)
    g.DrawImage(_LiveViewStreamedBmp, rectTarget, rectSource, GraphicsUnit.Pixel);
}
Please note that "really good" results appear with a _zoomFactor below 5x
(i.e. something between 2.0 and 3.0). If you use too strong a zoom value here, you just get "pixels" and the image becomes far too large (you may not see anything useful).
Perhaps it would be better to define the _zoomFactor value differently, so that it matches Canon's understanding of "5x" or "10x", but for the moment this workaround serves its purpose.
Kind regards, Gerhard Kauer

I have stumbled into the same problem (on a 5D Mark IV): only the 5x zoom is actually possible, and for the 10x zoom you are supposed to zoom the returned bitmap yourself.
HOWEVER: it doesn't seem to be a bug, but rather a very badly documented feature (i.e. not documented at all). The SDK actually provides additional data hinting that you should do a software zoom, and it gives you the precise coordinates. This is how I understand it:
Say we have a sensor with a resolution of 1000 x 1000 pixels and we want to zoom in 10x at the center. Then this is what happens in my tests:
Reading kEdsPropID_Evf_ZoomRect returns the position 450:450 and size 100x100 - all as expected.
Reading kEdsPropID_Evf_ZoomPosition returns 450:450 - expected too.
But you will receive a bitmap of 200 x 200 pixels - "incorrectly", because this is the size for 5x zoom... you would expect 100 x 100, but this was observed on various cameras, so it is probably normal behaviour.
But by reading kEdsPropID_Evf_ImagePosition you can tell where this bitmap actually lies. It returns the position 400:400, which can be used to calculate the final crop and enlargement of the returned bitmap.
So while the code of user3856307 should work, there might be some limitations of the camera (such as returning bitmaps at positions divisible by 32), so incorporating kEdsPropID_Evf_ImagePosition should give more precise results in my opinion.
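For illustration, here is a minimal C# sketch of that calculation under the assumptions above. The method signature and the way the values are obtained are hypothetical and depend on your EDSDK wrapper; only the arithmetic reflects the behaviour described in this answer.

using System.Drawing;

// Sketch only: zoomRect comes from kEdsPropID_Evf_ZoomRect, imagePos from
// kEdsPropID_Evf_ImagePosition, liveViewBmp is the streamed (5x) bitmap and
// outputSize is the size of the control the image is drawn into.
void DrawSoftwareZoom(Graphics g, Bitmap liveViewBmp, Rectangle zoomRect, Point imagePos, Size outputSize)
{
    // The streamed bitmap starts at imagePos on the sensor (e.g. 400:400, 200 x 200 px),
    // while zoomRect describes the requested 10x region (e.g. 450:450, 100 x 100 px).
    // Translate the zoom rectangle into the bitmap's own coordinate system:
    Rectangle cropInBitmap = new Rectangle(
        zoomRect.X - imagePos.X,    // 450 - 400 = 50
        zoomRect.Y - imagePos.Y,    // 450 - 400 = 50
        zoomRect.Width,             // 100
        zoomRect.Height);           // 100

    // Enlarge the cropped region to fill the output area, as in the workaround above.
    Rectangle target = new Rectangle(0, 0, outputSize.Width, outputSize.Height);
    g.DrawImage(liveViewBmp, target, cropInBitmap, GraphicsUnit.Pixel);
}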

Related

Could someone tell me why everything vibrates when the camera in my game moves?

I'm not sure why, but whenever the camera in my game moves, everything except the character it's focusing on does this weird thing where the objects move like they should, but they almost vibrate and you can see a small trail behind them. Can someone tell me why this is happening? Here's the code:
x += (xTo - x) / camera_speed_width;
y += (yTo - y) / camera_speed_height;
x = clamp(x, CAMERA_WIDTH/2, room_width - CAMERA_WIDTH/2);
y = clamp(y, CAMERA_HEIGHT/2, room_height - CAMERA_HEIGHT/2);
if (follow != noone)
{
    xTo = follow.x;
    yTo = follow.y;
}
var _view_matrix = matrix_build_lookat(x, y, -10, x, y, 0, 0, 1, 0);
var _projection_matrix = matrix_build_projection_ortho(CAMERA_WIDTH, CAMERA_HEIGHT, -10000, 10000);
camera_set_view_mat(camera, _view_matrix);
camera_set_proj_mat(camera, _projection_matrix);
I can think of 2 options:
Your game runs at a low frame rate (30 FPS or lower); a higher FPS will render moving graphics more smoothly (60 FPS being the usual minimum).
Another possibility is that your camera is being set to a target multiple times; perhaps one part (or block of code) follows the player earlier than another. You can also let a viewport follow an object in the room editor, so perhaps that is set as well.
Try these and see if they help you out.
If your camera is low-resolution, you should consider rounding/flooring your camera coordinates - otherwise the instances are (relative to the camera) at fractional coordinates, at which point you are at the mercy of the GPU as to how they will be rendered. If the instances themselves also use fractional coordinates, you are going to get wobble as the combined fractions round to one number or the other.

Viola-Jones - what does the 24x24 window mean?

I'm learning about the Viola-Jones detection framework and I read that it uses a 24x24 base detection window[1][2]. I'm having problems understanding this base detection window.
Let's say I have an image of size 1280x960 pixels and 3 people in it. When I try to perform face detection on this image, will the algorithm:
Shrink the picture to 24x24 pixels,
Tile the picture with 24x24-pixel sections and then test each section, or
Position the 24x24 window in the top left of the image and then move it by 1px over the whole image area?
Any help is appreciated, even a link to another explanation.
Source: https://www.cs.cmu.edu/~efros/courses/LBMV07/Papers/viola-cvpr-01.pdf
[1] - page 2, last paragraph before Integral images
[2] - page 4, Results
Does this video help? It is 40 minutes long.
Adam Harvey Explains Viola-Jones Face Detection
The algorithm, also known as a Haar cascade, is very popular for face detection.
About halfway down that page is another video which shows a super slow-motion scan in progress, so you can see how the window starts small (although much larger than 24x24 for the purpose of demonstration) and shifts around the image pixel by pixel, then does it again and again on successively larger square portions. At each stage, it's still only looking at those windows as though they were resampled to the 24x24 size.
You can also see how it quickly rejects many of those windows and spends most of its time in areas that seem face-like, while it computes more and more complex comparisons that become more stringent. This is where the term "cascade" comes into play.
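As a rough illustration of that scan, here is a conceptual sketch in C#. The growth factor, the shift step and the looksLikeFace placeholder (standing in for the classifier cascade) are assumptions for illustration only, not the original implementation.

using System;
using System.Collections.Generic;
using System.Drawing;

// Conceptual sketch: scan the image with a square window that starts at 24x24
// and grows each pass; every window is evaluated as if resampled to 24x24.
// looksLikeFace stands in for the cascade of increasingly strict classifiers,
// which rejects most windows after only a few cheap checks.
static List<Rectangle> Detect(int imageWidth, int imageHeight, Func<int, int, int, bool> looksLikeFace)
{
    var detections = new List<Rectangle>();
    for (int size = 24; size <= Math.Min(imageWidth, imageHeight); size = (int)(size * 1.25))
    {
        int step = Math.Max(1, size / 24);                  // shift further for larger windows
        for (int y = 0; y + size <= imageHeight; y += step)
            for (int x = 0; x + size <= imageWidth; x += step)
                if (looksLikeFace(x, y, size))              // cascade decision for this window
                    detections.Add(new Rectangle(x, y, size, size));
    }
    return detections;
}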
I found this video that perfectly explains how the detection window moves and scales on a picture. I wanted to draw a flowchart of how this looks, but I think the video illustrates it better:
https://vimeo.com/12774628
Credits to the original author of the video.

libgdx tiledmap flicker with Nearest filtering

I am having strange artifacts on a tiled map while scrolling with the camera clamped to the player (who is a Box2D body).
Before getting this issue I used the linear filter for the tiled map, which prevents those strange artifacts from happening but results in texture bleeding (I loaded the tiled map straight from a .tmx file without padding the tiles).
Now I am using the Nearest filter instead, which gets rid of the bleeding, but when scrolling the map (by walking the character with the camera clamped to him) it seems like a lot of pixels are flickering around. The flickering can get better or worse depending on the camera's zoom value.
However, when I use the "OrthoCamController" class from the libgdx utilities, which allows scrolling the map by panning with the mouse/finger, I don't get these artifacts at all.
I assume that the flickering might be caused by bad camera-position values derived from the Box2D body's position.
One more thing I should add here: the game instance runs in 1280*720 display mode while my gamecam renders only 800*480. When I change the gamecam's render resolution to 1280*720 I don't get those artifacts, but then the tiles are way too tiny.
Has anyone experienced this issue or know how to fix it? :)
I had a similar problem with this, and found it was due to the camera position having a very small fractional (decimal) component.
I think what may be happening is some sort of rounding with certain tile columns/rows in the tilemap renderer.
I fixed this by rounding to a set accuracy, like so:
camera.position.x = Math.round(player.entity.getX() * scalePosition) / scalePosition;
Experiment with various values, but I got it working by using the tile size as the scalePosition value.
About tilesets, I posted a solution here: Getting gaps between tiled textures with libgdx
I've been using that method with Tiled itself. You will have to adjust "margin" and "spacing" when importing tilesets in Tiled to get the effect working.
It's 100% working for me :)

Why are the maximum X and Y touch coordinates on the Surface Pro different from the native display resolution?

I have noticed that the Surface Pro and, I believe, the Sony Vaio Duo 11 report maximum touch coordinates of 1366x768, which is surprising to me since their native display resolution is 1920x1080.
Does anyone know of a way to find out at runtime what the maximum touch coordinates are? I'm running a DirectX app underneath the XAML, so I have to scale the touch coordinates into my own world coordinates and I cannot do this without knowing what the scale factor is.
Here is the code that I'm running that looks at the touch coordinates:
From DirectXPage.xaml
<Grid PointerPressed="OnPointerPressed"></Grid>
From DirectXPage.xaml.cpp
void DirectXPage::OnPointerPressed(Platform::Object^ sender, Windows::UI::Xaml::Input::PointerRoutedEventArgs^ args)
{
    auto pointerPoint = args->GetCurrentPoint(nullptr);
    // the x value ranges between 0 and 1366
    auto x = pointerPoint->Position.X;
    // the y value ranges between 0 and 768
    auto y = pointerPoint->Position.Y;
}
Also, here is a sample project setup that can demonstrate this issue if run on a Surface Pro:
http://andrewgarrison.com/files/TouchTester.zip
Everything on the XAML side is measured in device-independent pixels (DIPs). Ideally you should never have to worry about actual physical pixels and should let WinRT do its magic in the background.
If for some reason you do need to find the current scale factor, you can use DisplayProperties.ResolutionScale and use it to convert DIPs into screen pixels.
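A minimal sketch of that conversion, shown here in C# (the same Windows.Graphics.Display API is callable from C++/CX); the pointerPoint variable is assumed to come from a handler like the one above:

using Windows.Graphics.Display;

// ResolutionScale reports the scale plateau as a percentage (100, 140 or 180).
double scale = (int)DisplayProperties.ResolutionScale / 100.0;

// Convert a XAML pointer position (DIPs) into physical screen pixels:
double physicalX = pointerPoint.Position.X * scale;   // e.g. 1366 * 1.4 ≈ 1912
double physicalY = pointerPoint.Position.Y * scale;   // e.g.  768 * 1.4 ≈ 1075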
their native display resolution is 1920x1080
That makes the display fit the HD Tablet profile, so everything is automatically scaled by 140%, with the opposite un-scaling of course applied to any reported touch positions. You should never get a position beyond 1371,771. This ensures that any Store app works on any device, regardless of the quality of its display and without the application code having to help, beyond providing bitmaps that still look sharp when the app is rescaled to 140% and 180%. You should therefore not do anything at all; it is unclear what problem you are trying to fix.
An excellent article that describes the automatic scaling feature is here.

OpenGL - animation stuttering when in full screen

I'm currently running into a problem regarding animation in OpenGL. I have between 200 and 10000 gears on the screen at a time all rotating. When the window is not in maximized view, my CPU runs at about 10-20 % consistently. No spikes, no stuttering in the animation, it runs perfectly smooth regardless of the number of gears on screen. When I maximize the window though, everything falls apart. My CPU maxes out, I begin getting weird spikes in CPU usage, the animation begins stuttering as a result, and it just looks really ugly, even when I have only 200 gears on screen.
My animation technique looks like this:
While Animating
    Calculate current rotation angle based on a running timer
    Draw image
    Call glFlush()
End While
If it helps, I'm using the Tao framework in VB.NET. I'm not performing any calculations other than the ones for the rotation angle mentioned above, plus a few glRotated and glScaled calls in the method that draws the image.
In addition, I guess I was under the impression that in an orthographic 2-dimensional drawing that scales with the window, the drawing would always take the same amount of time regardless of the window size. Is this a correct assumption?
Any help is greatly appreciated =)
Edit
Note that I've seen the animation run perfectly smoothly in full screen before. Every once in a while, OpenGL will decide it's happy and run perfectly at full screen, using between 10-20% of the CPU (same as when not maximized). I haven't pinpointed what causes this, though, because it will run perfectly one time, and then, without changing anything, I will run it again and encounter the choppiness. I simply want to pinpoint what causes the animation to slow down and eliminate it.
I've run dotTrace on my program and it says that the swapBuffers method is using 55% of my processing time, even though I'm never explicitly calling that method. Is the method called by something else that I can eliminate, or is this simply OpenGL's "dead time" used to limit the animation to 60 fps?
I was under the impression that in an orthographic 2-dimensional drawing that scales with the window, the drawing would always take the same amount of time regardless of the window size. Is this a correct assumption?
If only :)
More pixels require more memory bandwidth/shader units/etc. Look into fillrate.