A-Frame VR: Eye-Offset Settings - camera

This screenshot shows how the scene loads on my OnePlus 6 phone, and it is creating a problem with the stereoscopic vision: no one can focus with such a high offset between the left and right eye images.
Can anyone help me set the eye-offset parameters? I haven't been able to figure them out from the A-Frame documentation or from GitHub.
Or is there something I can do from the THREE.js camera?

The parameters for the OnePlus 6 (A6000) are missing from the webvr-polyfill device database. You can fork the webvr-polyfill and its database, add your device (you need to know its screen DPI and bezel width), and check whether the problem is solved in the polyfill examples. A-Frame will pick it up when we bump the polyfill version on the next release. In the meantime, you can do your own A-Frame build pointing to your webvr-polyfill fork.
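For reference, device entries in the polyfill's dpdb.json look roughly like the sketch below. The dpi and bw (bezel width in millimetres) numbers here are made-up placeholders for illustration; measure or look up the real values for your panel before submitting:
    {
      "type": "android",
      "rules": [
        { "mdmh": "OnePlus/*/ONEPLUS A6000/*" },
        { "ua": "ONEPLUS A6000" }
      ],
      "dpi": [ 401.0, 401.0 ],
      "bw": 4,
      "ac": 1000
    }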

Related

How to configure mocap mapping and expressions with MetaHuman and Live Link

As my title says, I am having trouble mapping face data from the Live Link app to a MetaHuman.
Here is what I have done so far:
Created a UE5 project (Film/Video & Live Events)
Imported a (custom-made) MetaHuman into the project
Added the required plugins to the project (Live Link, ARKit, Apple ARKit Face Support, etc.)
Connected the Live Link mobile application to the local network
Set the MetaHuman's animation controller to the Live Link feed
Calibrated the Live Link data within the Live Link application
The problem I am having:
Parts of the face are not responding at all (e.g. the MetaHuman's right eyebrow does not respond to me lifting my left eyebrow).
The left corner of the mouth seems to be stuck (e.g. when I try to open my mouth, all points respond except for that single point, which stays where it is).
The mapping/naming of facial components seems to be mirrored/off/labeled wrong (e.g. if I wink my right eye, my right eye closes and my right cheek presses upward; on the MetaHuman, the left eye blinks and the right cheek raises).
These issues are very frustrating, as I cannot seem to get past this basic calibration. I see people online using these same tools and getting really clean results with the MetaHuman's facial movements. Is there something I am missing? I know that after the MetaHuman has been calibrated I will create sequences; am I supposed to be modifying these values there? I am not sure. I have commented on every video I can find and have posted this question in the Unreal Discord (here) with basically no help.
Note: I don't need a full solution, I just need to be pointed in the right direction! Please let me know if there is anything I am missing in my setup or calibration workflow.
Thanks for reading.
Had the exact same problem on a customer project, and the issue was that the MetaHuman wasn't using the correct Animation Blueprint. Once we switched it to the correct Animation Blueprint for Live Link, the issue was immediately resolved. I remember that exact same facial expression; sorry, I know this was a frustrating one for the customer as well.
It seemed that I was doing everything right. After upgrading from 5.0.3 to 5.1 the issue stopped completely.

Blender texture doesn't show up correctly after baking on it repeatedly, even though the UV mapping fits perfectly

The problem occurs when I repeatedly bake lighting & reflections via a Principled BSDF (Cycles) onto an "Image Texture" node. The first few times I get the expected results, and then suddenly the mesh seems to be broken, as it keeps showing subsequent bakes incorrectly (image below).
Also, when I move an island in the UV map, nothing seems to change on the mesh in the 3D viewport. The UV texture looks unchanged no matter what I do, as if it has frozen.
My Blender version is 2.92; I get the same problem with 2.83.
I keep getting this problem over and over, and I just can't find a solution. Even if I export the mesh into another project, it "infects" the other project and I get the same problem there.
I can only repair it by completely starting over.
Please help me. I'm really frustrated with this; it has defeated my Blender project now for like the 4th time... :/
> Screenshot example here <
It appears as if the generated texture coordinates are being used for some reason instead of the UV map coordinates. If the Vector socket of the Image Texture node is unconnected, it should use the currently selected UV map.
This may actually be a bug if it's happening after multiple uses of the baking tool.
You should first try connecting the Image Texture node's Vector input to the UV output of a Texture Coordinate node to see if it has any effect. Alternatively, try connecting a UV Map node, as in the sketch below.
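If you would rather apply that fix from scripting, here is a minimal Python (bpy) sketch. The material name "Material", node name "Image Texture", and UV layer name "UVMap" are assumptions; adjust them to your file:
    import bpy

    mat = bpy.data.materials["Material"]      # assumed material name
    nodes = mat.node_tree.nodes
    links = mat.node_tree.links

    img_tex = nodes["Image Texture"]          # assumed Image Texture node name

    # Option 1: wire the UV output of a Texture Coordinate node into the Vector input.
    tex_coord = nodes.new("ShaderNodeTexCoord")
    links.new(tex_coord.outputs["UV"], img_tex.inputs["Vector"])

    # Option 2: pin an explicit UV layer with a UV Map node instead.
    # uv_node = nodes.new("ShaderNodeUVMap")
    # uv_node.uv_map = "UVMap"                # assumed UV layer name
    # links.new(uv_node.outputs["UV"], img_tex.inputs["Vector"])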

Screen recording on macOS using AVFoundation

I have been working on screen recording on macOS. I have working code based on Apple's documentation (https://developer.apple.com/library/content/qa/qa1740/_index.html). The problem is that the resolution of the recorded video is very low; according to the generated logs, SD 480x300 is the default resolution. I was unable to find any method to change the resolution of the video. Can somebody help me out here?
I found the solution to the problem. You can set the capture resolution with mSession.sessionPreset = AVCaptureSessionPreset1280x720;
There are several values for the sessionPreset, including
AVCaptureSessionPresetLow
AVCaptureSessionPresetMedium
AVCaptureSessionPresetHigh
AVCaptureSessionPreset320x240
AVCaptureSessionPreset352x288
AVCaptureSessionPreset640x480
AVCaptureSessionPreset960x540
AVCaptureSessionPreset1280x720
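For anyone driving this from Python instead of Objective-C, here is a minimal sketch using the PyObjC bindings (it assumes the pyobjc-framework-AVFoundation and pyobjc-framework-Quartz packages are installed; Apple's Objective-C code from QA1740 remains the reference):
    import AVFoundation
    from Quartz import CGMainDisplayID

    # Create the capture session and raise the preset from the SD default to 720p.
    session = AVFoundation.AVCaptureSession.alloc().init()
    session.setSessionPreset_(AVFoundation.AVCaptureSessionPreset1280x720)

    # Use the main display as the capture input.
    screen_input = AVFoundation.AVCaptureScreenInput.alloc().initWithDisplayID_(CGMainDisplayID())
    if session.canAddInput_(screen_input):
        session.addInput_(screen_input)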

Kinect 2 shows a black screen while capturing Infrared Basics

I am trying to use the Kinect 2 and SDK v2 to capture infrared images/videos.
The Kinect shows depth and RGB images properly, but when I try to visualize the Infrared Basics sample in Kinect for Windows, it does not show any image, just a black screen.
What is the reason for this? I reinstalled SDK v2, but still have the same problem. In a similar post someone suggested reinstalling a newer version, which I did, but the problem remains. Can anyone suggest a solution?
Thanks
It is better to use the "KinectConfigurationVerifierSetup" to test the system requirements. I also suggest you use the Infrared Basics-WPF sample in the SDK Browser; you could take that sample code and install it on your computer. If the infrared data source still does not show, you could test the Kinect on another computer.
I fixed my problem by updating the GPU driver. There was a conflict/bug in the older version, which Nvidia has since removed; if you install the new driver, it starts showing infrared images.
Pay attention to your graphics card settings; switching your computer to auto or to the Intel HD Graphics may work.

YouTube iFrame API quality parameter "vq" bug - video playback with black screen

There seems to be a huge issue with the iframe URL parameter "vq" (in this case "vq=hd720").
If you use this parameter, the video screen in the Flash player turns black.
This example URL worked until today:
http://www.youtube.com/embed/dFVDJlM6zLY?vq=hd720
(feel free to replace the example YouTube ID)
Now, this example works only without the "vq" parameter (vq=hd720):
http://www.youtube.com/embed/dFVDJlM6zLY
The problem is that we delivered this YouTube iframe on a lot of websites for a lot of clients, and it worked quite well for months (years?). Now every single site has black YouTube videos!
Does anyone know if this parameter was deprecated?
Was there a new YouTube API release today?
The parameter was already discussed and recommended in different forums:
e.g. Force youtube embed to start in 720p
Any ideas how to force Google (YouTube) to solve this problem?
It seems that the only way out of this (currently) is to remove the vq parameter or set it to auto. This seems like a widespread problem, though, that has occurred very recently. You may wish to star this issue at Google to make them take notice:
https://code.google.com/p/gdata-issues/issues/detail?id=6009
I was having a similar issue when I tried to force an embedded YouTube video to play in HD at a dimension smaller than the HD resolution.
I was able to get around the "black screen" HD issue by using the old embed code.
If you set the video size to the highest resolution, you can then use the vq=hd720 parameter and set the video width and height to a lower resolution:
<object width="1280" height="720"><param name="movie" value="//www.youtube.com/v/VIDEO_ID?hl=en_US&version=3&rel=0&vq=hd720"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="//www.youtube.com/v/VIDEO_ID?hl=en_US&version=3&rel=0&vq=hd720" type="application/x-shockwave-flash" width="560" height="315" allowscriptaccess="always" allowfullscreen="true"></embed></object>
Hope this helps!
Just had this exact problem. I found that changing it to vq=auto works in the code, but on the videos we're working with it comes out at a slightly lower quality than we'd like.
Nevertheless, this at least removes the blackout.
What are your 3D settings set to? Here's what I've noticed...
1) Taking away the vq=hd setting works.
2) Turning the 3D setting on the video player ON (or sometimes OFF then ON again) makes the video work.
3) Curious about number 2, I went to the video settings for the video (the admin settings) and changed the 3D settings under Advanced. I changed it from "Disable 3D for this video" to "Please make this video 3D." For some reason this works: the vq code is back to working the way it should.
HOWEVER, I don't know if I would recommend doing this! The reason I set all my videos to "Disable 3D for this video" in the first place was that if I left it on the default "No Preference," it often caused glitches in the video.
Has YouTube changed anything with the 3D settings recently? I think they may have, but am not certain.