Distortion using gvr-android-sdk 1.190.0

I am using the video360 sample from gvr-android-sdk 1.190.0. If I play a stereo video like the example video, I get distortion.
If I load the same video in the built-in player app that comes with the VR headset (Pico G2 4K), it displays correctly, with no distortion.
I have attached a screenshot where you can see the distortion on the white lines and also on the green square.
Can anyone tell me why that happens and how to fix it?
Edit:
I should note that the problem also occurs with a monoscopic video, but then the distortion is far less noticeable.

So I figured it out myself:
The following is written in MediaLoader.java:
/** The 360 x 180 sphere has 15 degree quads. Increase these if lines in your video look wavy. */
private static final int DEFAULT_SPHERE_ROWS = 12;
private static final int DEFAULT_SPHERE_COLUMNS = 24;
Increasing the values for rows and columns solved the problem.
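For example, tripling the tessellation gives 5-degree quads. These particular values are just one choice and not taken from the sample; use the smallest values that look smooth, since more quads mean more vertices per frame:
private static final int DEFAULT_SPHERE_ROWS = 36;    // was 12 (15-degree rows)
private static final int DEFAULT_SPHERE_COLUMNS = 72; // was 24 (15-degree columns)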

Related

CreateJS MovieClip performance issue

I'm using Adobe Animate HTML5 to create a board game that runs on a Smart TV (a low-performance machine).
All my previous games were done in AS3.
I quickly found there is no way to create a Sprite anymore (a movie clip with only one frame).
I created my board game (no code yet, just elements), which is basically movie clips inside other movie clips, all single frame.
When I checked the FPS on an LG TV, it had dropped from 60 to 20, on a static image.
After some research, I found that the advance method in the MovieClip class constantly checks whether the frame needs to be updated.
I added a change: if a MovieClip's total frame count equals 1, its mode is switched to single frame. This brought performance back to 60 FPS.
Who do I go to to have this checked and possibly fixed or added as a feature in the CreateJS code?
Thanks
Code issues or suggestions for CreateJS can be logged at https://github.com/CreateJS/EaselJS/issues. All the best.
In the HTML code, in the script part, there is a line:
createjs.Ticker.addEventListener("tick", stage);
Remove it and call the update manually when you need it (i.e. when something has changed):
stage.update();

EDSDK LiveView zoom 10x

Using LiveView on an EOS camera is fun and helps with getting objects into focus (for lenses that do not offer autofocus). Magnifying the LiveView image (stream) really helps with focusing.
On the camera itself, you can magnify the LiveView image 5x and 10x using the button with the magnifying-glass icon. That works well on my 600D.
Programming with the EDSDK, I ran into a problem:
It is possible to set the 5x zoom mode for LiveView programmatically, but I did not succeed with the 10x mode.
Has anyone succeeded in zooming the LiveView image programmatically by more than 5x?
For a successful 5x LiveView zoom I used the following code on my 600D:
// Start LiveView, wait for the stream to appear on the screen, and then do:
_iZoomStage = 5;
bool Success = _CameraHandler.SetSetting(EDSDK.PropID_Evf_Zoom, (UInt32)_iZoomStage);
That works fine, but:
If you try to set higher zoom factors, it fails: Success is returned as true, but no effect is visible on screen.
If you zoom LiveView on the camera itself, 10x works fine by pressing the magnifier button. But programmatically I did not succeed with values greater than 5.
Any ideas on this topic?
Well, thanks a lot for your answers.
In the meantime I implemented the following workaround, which seems to solve the problem: I simply crop and zoom the bitmap during LiveView streaming:
if (_zoomFactorOfEdskd == true) // the EDSDK-native zoom factors (1 and 5)
    g.DrawImage(_LiveViewStreamedBmp, _LvOutput);
else // our own zoom factors, which do not work via the EDSDK
{
    Int32 newWidth = (Int32)(_LiveViewStreamedBmp.Width / _zoomFactor);
    Int32 newHeight = (Int32)(_LiveViewStreamedBmp.Height / _zoomFactor);
    // Crop around the center of the original bitmap
    Int32 xOffset = (_LiveViewStreamedBmp.Width - newWidth) / 2;
    Int32 yOffset = (_LiveViewStreamedBmp.Height - newHeight) / 2;
    Rectangle rectSource = new Rectangle(xOffset, yOffset, newWidth, newHeight);
    Rectangle rectTarget = new Rectangle(0, 0, _LiveViewStreamedBmp.Width, _LiveViewStreamedBmp.Height);
    // Draw the cropped region stretched to the full output size (the zoomed view)
    g.DrawImage(_LiveViewStreamedBmp, rectTarget, rectSource, GraphicsUnit.Pixel);
}
Please note that really good results appear with a _zoomFactor below 5x (something between 2.0 and 3.0). If you use zoom values that are too strong here, you get visible pixels and the image becomes far too large (you may not see anything useful).
Perhaps it would be a good idea to define the _zoomFactor value differently, so that it matches Canon's understanding of "5x" or "10x" more closely. But for the moment this workaround will serve.
Kind regards Gerhard Kauer
I have stumbled into the same problem (on a 5D Mark IV): only the 5x zoom is actually possible, and for 10x zoom you are expected to zoom the returned bitmap yourself.
HOWEVER: it doesn't seem to be a bug, but a very badly documented feature (i.e. not documented at all). The SDK actually provides additional data to hint that you should do a software zoom, and it also gives precise coordinates. This is how I understand it:
Say we have a sensor with a resolution of 1000 x 1000 pixels and we want to zoom in 10x at the center. Then this happens in my tests:
Reading the kEdsPropID_Evf_ZoomRect returns the position 450:450 and size 100x100 - all as expected.
Reading the kEdsPropID_Evf_ZoomPosition returns 450:450 - expected too.
But you will receive a bitmap of 200 x 200 pixels - "incorrectly", because that is the size for 5x zoom... you would expect 100 x 100, but this was observed on various cameras, so it is probably normal.
However, by reading kEdsPropID_Evf_ImagePosition you can tell where this bitmap actually lies. It returns the position 400:400 and can therefore be used to calculate the final crop and enlargement of the returned bitmap.
So while the code of user3856307 should work, there might be some limitations of the camera (such as returning bitmaps at positions divisible by 32), so incorporating kEdsPropID_Evf_ImagePosition should give more precise results in my opinion.
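To make the relationship concrete, here is a small sketch of the crop arithmetic using the example numbers above (1000 x 1000 sensor, 10x zoom at the center). It is plain Java used only for illustration; the variable names are mine, and reading the actual EDSDK properties is not shown:
public class EvfZoomCropExample {
    public static void main(String[] args) {
        // From kEdsPropID_Evf_ZoomRect: the sensor region the user asked to see.
        int zoomRectX = 450, zoomRectY = 450;
        int zoomRectW = 100, zoomRectH = 100;
        // From kEdsPropID_Evf_ImagePosition: where the delivered bitmap starts on the sensor.
        int imagePosX = 400, imagePosY = 400;
        // The LiveView bitmap actually delivered (the 5x size, 200 x 200 here).
        int bitmapW = 200, bitmapH = 200;

        // Offset of the wanted region inside the delivered bitmap.
        int cropX = zoomRectX - imagePosX; // 50
        int cropY = zoomRectY - imagePosY; // 50

        // Crop that cropX/cropY/zoomRectW/zoomRectH region out of the bitmap and
        // stretch it to the output size to get the effective 10x view.
        System.out.printf("crop (%d,%d) %dx%d out of the %dx%d bitmap%n",
                cropX, cropY, zoomRectW, zoomRectH, bitmapW, bitmapH);
    }
}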

A camera zoom issue with libGDX

Hey guys, I'm working on an Android game with libGDX, and I have a zoom problem. It's basically a Ski Safari-style 2D game, and I want to implement a zoom in/out effect as the height changes. Is it possible to do this with OrthographicCamera? Or should I change the size of the objects in real time (because I still want to keep the background at a fixed size)?
If Camera.zoom does not solve your problem, you can draw your background with the batch unmodified, and use batch.getProjectionMatrix().cpy().scale(yourScaleVariableX, yourScaleVariableY, 0) to scale only the items you want, varying those variables. I'm not sure if this is what you're looking for; hopefully it helps.
A simple example:
Class fields:
    Matrix4 testMatrix;
    float yourScaleVariableX;
    float yourScaleVariableY;
Example render method:
    batch.begin();
    yourBackground.draw(...); // background drawn with the unmodified matrix
    batch.end();

    batch.begin();
    testMatrix = batch.getProjectionMatrix().cpy().scale(yourScaleVariableX, yourScaleVariableY, 0);
    batch.setProjectionMatrix(testMatrix);
    yourItem.draw(...); // items drawn with the scaled matrix
    batch.end();
I think it is more efficient to change the matrix, which rescales every object drawn with it, than to resize each object individually.
I hope I have explained it well.
Edit
I did not realize this when I wrote the answer: you can store the original matrix before editing it, for later use, or use batch.setProjectionMatrix(camera.combined); to restore it.
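A minimal sketch of the whole idea, with the restore included (the class and field names such as background, item and zoomScale are placeholders of mine, not from the original post):
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.g2d.TextureRegion;
import com.badlogic.gdx.math.Matrix4;

public class ZoomRenderer {
    private final SpriteBatch batch = new SpriteBatch();
    private final OrthographicCamera camera = new OrthographicCamera(800, 480);
    private TextureRegion background, item; // assumed to be loaded elsewhere
    private float zoomScale = 1f;           // update this as the player's height changes

    public void render() {
        camera.update();

        // Background: drawn with the unscaled camera matrix, so it keeps a fixed size.
        batch.setProjectionMatrix(camera.combined);
        batch.begin();
        batch.draw(background, 0, 0);
        batch.end();

        // Items: drawn with a scaled copy of the matrix to simulate the zoom.
        Matrix4 scaled = camera.combined.cpy().scale(zoomScale, zoomScale, 1f);
        batch.setProjectionMatrix(scaled);
        batch.begin();
        batch.draw(item, 100, 100);
        batch.end();

        // Restore the original matrix so the next frame's background is not affected.
        batch.setProjectionMatrix(camera.combined);
    }
}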

Why are the maximum X and Y touch coordinates on the Surface Pro different from the native display resolution?

I have noticed that the Surface Pro and, I believe, the Sony Vaio Duo 11 report maximum touch coordinates of 1366x768, which is surprising to me since their native display resolution is 1920x1080.
Does anyone know of a way to find out at runtime what the maximum touch coordinates are? I'm running a DirectX app underneath the XAML, so I have to scale the touch coordinates into my own world coordinates, and I cannot do this without knowing the scale factor.
Here is the code that I'm running that looks at the touch coordinates:
From DirectXPage.xaml
<Grid PointerPressed="OnPointerPressed"></Grid>
From DirectXPage.xaml.cpp
void DirectXPage::OnPointerPressed(Platform::Object^ sender, Windows::UI::Xaml::Input::PointerRoutedEventArgs^ args)
{
    auto pointerPoint = args->GetCurrentPoint(nullptr);
    // the x value ranges between 0 and 1366
    auto x = pointerPoint->Position.X;
    // the y value ranges between 0 and 768
    auto y = pointerPoint->Position.Y;
}
Also, here is a sample project that demonstrates the issue when run on a Surface Pro:
http://andrewgarrison.com/files/TouchTester.zip
Everything on the XAML side is measured in device-independent pixels. Ideally you should never have to worry about actual physical pixels and should let WinRT do its magic in the background.
If for some reason you do need to find the current scale factor, you can use DisplayProperties.ResolutionScale and use it to convert DIPs into screen pixels.
their native display resolution is 1920x1080
That makes the display fit the HD Tablet profile, so everything is automatically scaled by 140%, with the opposite un-scaling of course applied to any reported touch positions. You should never get a position beyond 1371x771. This ensures that any Store app works on any device, regardless of the quality of its display and without the application code having to help, beyond providing bitmaps that still look sharp when the app is rescaled to 140% and 180%. You should therefore not do anything at all. It is unclear what problem you are trying to fix.
An excellent article that describes the automatic scaling feature is here.
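For illustration only, the arithmetic behind that scaling looks like this (plain Java with the scale hard-coded; in a real app you would read it from DisplayProperties.ResolutionScale as mentioned above):
public class DipScalingExample {
    public static void main(String[] args) {
        double resolutionScale = 1.4; // 140% on the Surface Pro's 1920x1080 panel

        // The XAML/touch coordinate space in device-independent pixels (DIPs).
        double maxDipX = 1920 / resolutionScale; // ~1371
        double maxDipY = 1080 / resolutionScale; // ~771

        // Converting a reported touch position (in DIPs) back to physical pixels.
        double touchDipX = 1366, touchDipY = 768;
        double pixelX = touchDipX * resolutionScale; // ~1912
        double pixelY = touchDipY * resolutionScale; // ~1075

        System.out.printf("DIP space: %.0f x %.0f, touch at %.0f x %.0f px%n",
                maxDipX, maxDipY, pixelX, pixelY);
    }
}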

Cocos2d: how to get an object's position from a tile map, depending on whether retina is used or not?

Hey guys, I have an object sitting in my tile map as a spawn-point reference. The problem is that the -hd version is twice as big as the non-hd version, so doing:
(where width is the width of the character being spawned)
int spawnX = (width/2) + [tilemap spawnX];
gets the wrong position in HD mode, because the tile map is in pixels but cocos2d works in points.
I could test whether retina display is supported, but from what I hear that's a bit dodgy.
How can you do this properly?
Retina display is supported properly on cocos2d v2.0 rc2.
First, make sure to call [director_ enableRetinaDisplay:YES] at app launch, along with all the other cocos2d initialization.
Then, use CC_CONTENT_SCALE_FACTOR() * pointCount to get pixels out of it.
There are also other convenience macros, defined in the same header as the CC_CONTENT_SCALE_FACTOR() macro, to help you convert CGRects, etc. from points to pixels and vice versa.