Blender texture doesn't show up correctly after baking onto it repeatedly, even though the UV mapping fits perfectly - blender

The problem occurs when I repeatedly bake lighting & reflections via the Principled BSDF (Cycles) onto an "Image Texture" node. The first few times I get the expected results, but then suddenly the mesh seems to be broken, as it keeps showing all further bakes incorrectly (image below).
Also, when I move an island in the UV map, nothing seems to change on the mesh in the 3D Viewport. The UV texture looks unchanged no matter what I do, like it has frozen or something.
My Blender version is 2.92. I'm getting the same problem with 2.83.
I keep getting this problem over and over, and I just can't find a solution. Even if I export the mesh into another project, it just "infects" the other project and I get the same problem there.
I can only fix it by completely starting over.
Please help me. I'm really frustrated with this. This has defeated my Blender project now for like the 4th time... :/
> Screenshot example here <

It appears as if the generated texture coordinates are being used for some reason instead of the UV map coordinates. If the Vector socket of the Image Texture node is unconnected, it should use the currently selected UV map.
This may actually be a bug if it's happening after multiple uses of the baking tool.
You should first try connecting the image node's Vector input to the UV output of a Texture Coordinate node to see if it has any effect. Alternatively, try connecting a UV Map node.
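If you'd rather do that from the Python console, here is a minimal bpy sketch of the connection. It assumes the active object has a node-based material and that the bake target node is literally named "Image Texture"; adjust the name to match your node.

import bpy

mat = bpy.context.active_object.active_material
nodes = mat.node_tree.nodes
links = mat.node_tree.links

img_node = nodes["Image Texture"]  # the node you bake onto (name is an assumption)
coord_node = nodes.new("ShaderNodeTexCoord")

# Explicitly feed UV coordinates into the image texture,
# bypassing whatever coordinates (e.g. generated) were being used
links.new(coord_node.outputs["UV"], img_node.inputs["Vector"])

# Alternative: a UV Map node pinned to a specific map
# uv_node = nodes.new("ShaderNodeUVMap")
# uv_node.uv_map = "UVMap"
# links.new(uv_node.outputs["UV"], img_node.inputs["Vector"])

If the texture in the viewport changes once the UV output is wired in, the bakes were indeed sampling the wrong coordinates.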

Related

How to configure mocap mapping and expressions with MetaHuman and Live Link

As my title says, I am having trouble mapping face data from the Live Link app to a MetaHuman.
Here is what I have done so far:
Created a UE5 project (Film/Video & Live Event)
Imported a MetaHuman (custom-made) into the project
Added the required plugins to the project (Live Link, ARKit, Apple, etc.)
Connected the Live Link mobile application to the local network
Set the MetaHuman's animation controller to the Live Link feed
Calibrated the Live Link data within the Live Link application
The problem I am having:
Parts of the face are not responding at all (e.g., the MetaHuman's right eyebrow does not respond when I lift my left eyebrow).
The left corner of the mouth seems to be stuck (e.g., when I try to open my mouth, all points respond except for that single point, which stays where it is).
The mapping/naming of facial components seems to be mirrored/off/labeled wrong (e.g., if I wink my right eye, my right eye closes and my right cheek presses upward; on the MetaHuman, the left eye blinks and the right cheek raises.)
These issues are very frustrating, as I cannot seem to get past this basic calibration. I see people online using these same tools and getting really clean results with the MetaHuman's facial movements. Is there something I am missing? I know that after the MetaHuman has been calibrated, I will create sequences; am I supposed to be modifying these values there? I am not sure... I have commented on every video I can find, and I have posted this question in the Unreal Discord (here) with basically no help.
Note: I don't need a full solution, I just need to be pointed in the right direction! Please let me know if there is anything I am missing in my setup or calibration workflow.
Thanks for reading.
I had the exact same problem with a customer project, and the issue was that it wasn't using the correct Animation Blueprint. Once we switched it to the correct Animation Blueprint for Live Link, the issue was immediately resolved. I remember that exact same facial expression - sorry, as I know this was a frustrating one for the customer as well.
It seemed that I was doing everything right. After upgrading from 5.0.3 to 5.1, the issue stopped completely.

Nsight Graphics and RenderDoc cannot trace application

I am stuck writing a Vulkan renderer. The final output I see on the screen is only the clear color, animated over time, but no geometry. Even with all possible validation turned on, I don't get any errors / warnings / best-practices / performance hints, except for the best-practices warning "You are using VK_PIPELINE_STAGE_ALL_COMMANDS_BIT when vkQueueSubmit is called". I'm not actually sure I use all possible validation, but I have the Vulkan Configurator running and ticked all checkboxes under "VK_LAYER_KHRONOS_validation", after which the vkQueueSubmit hint showed up.
After poking around for some hours, I decided to look into using RenderDoc. I can start up the application just fine; however, RenderDoc says "Connection status: Established, Api: none" and I cannot capture a frame.
Quite confused, I thought I would look into using Nsight Graphics, only to find the same problem: I can run the application, but it says "Attachable process detected. Status: No Graphics API". I read somewhere that I can start the process first and then use the attach functionality to attach to the running process, which I did, unfortunately with the same outcome.
I read that there can be problems when not properly presenting every frame, which is why I change the clear color over time to make sure I actually present every frame, which I can confirm is the case.
I am quite lost at this point. Has anyone had similar experiences? Any ideas as to what I could do to get RenderDoc / Nsight Graphics working properly? Neither shows anything in its logs; I guess they just assume the process does not use any graphics API and thus won't trace it.
I am also thankful for ideas about why I cannot see my geometry, but I understand this is even harder to guess from your side. Still, some notes: I have forced depth and stencil tests off; although the vertices should be COUNTER_CLOCKWISE, I have also checked CLOCKWISE just to make sure; I have set the face cull mode off; checked the color write mask and rasterizerDiscard; and even set gl_Position to ignore the vertex positions and transform matrices completely and use random values in the range -1 to 1 instead. Basically everything that came to mind for "only clear color, but no errors", but all to no avail.
In case it helps with anything: I am on Win11 using either an RTX 3070 or an Intel UHD 770, both with the same outcome.
Small Update:
Using the Vulkan Configurator, I could force the VK_LAYER_RENDERDOC_capture layer on. After that, when running the application, I can see the overlay, and after pressing F12 it reads that it captured a frame. However, RenderDoc still cannot find a graphics API for this process, and I have no idea how to access that capture.
I then forced VK_LAYER_LUNARG_api_dump on and dumped the output into an HTML file, which I inspected, and I still cannot see anything wrong. I looked especially closely at the pipeline and render pass creation calls.
This left me thinking it might be the uniform / vertex buffer contents / offsets or whatever, so I removed all of that and used hardcoded vertex positions and fragment outputs, and still I can only see the clear color in the final image on the screen.
Thanks
It turns out confused me should have been converting the relative viewport that I expose to absolute values using my current camera's width and height, i.e. giving (0,0,1920,1080) to Vulkan instead of (0,0,1,1).
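For anyone hitting the same thing, a sketch of the conversion in plain Python just to show the arithmetic; the helper name is made up, and the resulting absolute values are what go into the VkViewport fields on the C side.

# Convert a normalized viewport rect (0..1) to the absolute pixel
# values Vulkan expects in VkViewport (x, y, width, height)
def to_absolute_viewport(rel, extent_w, extent_h):
    x, y, w, h = rel
    return (x * extent_w, y * extent_h, w * extent_w, h * extent_h)

print(to_absolute_viewport((0, 0, 1, 1), 1920, 1080))  # -> (0, 0, 1920, 1080)

With a (0,0,1,1) viewport, everything was being rasterized into a single pixel, which is why only the clear color showed up while the API calls all looked valid.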
Holymoly what a ride

Adobe Animate/CreateJS - this.myMovieClip.stop(); not working - no error in console

I'm just getting started in Adobe Animate/CreateJS. I'm trying to control the timeline of a movie clip ("myMovieClip") on frame 1 on the main stage, preventing it from playing. According to the documentation, this should work:
this.myMovieClip.stop();
But it does not... the "myMovieClip" clip plays immediately when testing in the browser, and I'm not getting any error love from the console. It's as if the above line of code wasn't even there.
This seems pretty basic. What am I missing? I ultimately want to stop all movie clips on the main timeline as well as a large group of nested ones. If there's a single command that does this, I'd love to hear about it.
I am fairly certain this is related to a bug in the Adobe Animate output that makes MovieClip timelines not immediately available.
You can get around this by forcing an update before you try to access the children:
this.gotoAndStop(0); // forces the main timeline to update so children are available
this.myMovieClip.stop(); // the child clip now exists and can be stopped
Hope that helps!

An unexpected error occurred during export

I am using Adobe Flash Professional CC to convert Flash games to HTML5.
When I export my animation, an error pops up saying "a JavaScript error occurred", and the output panel shows "An unexpected error occurred during export. [JSX]".
I searched for an answer and got a few results, including http://community.createjs.com/, but none of them solve the problem.
For me, the problem was due to some Advanced Color Effects applied to the symbol (the ones where you mix alpha and RGB channel settings together). If I use effect styles like Alpha or Brightness, it works OK.
I had the same problem. For me, the fix was just deleting one of my symbols from my stage. When it was only in my library, it published fine, but bringing it onto the stage gave me that error.
It seems that HTML5 Canvas really doesn't support certain art assets. I had the same problem, but I realised that some of my art elements had been created in Illustrator, with masks and paths. Once I got rid of them, things started working properly.
I recently struggled with this, desperately trying to work out what was causing the problem.
The answer was a custom easing curve set on a particular tween in a particular MovieClip.
It was incredibly hard to isolate; I did it by deleting aspects of the project one by one.
Apparently that curve didn't play well. The weird thing is that I'm pretty sure I never defined a custom easing curve there. This may be the result of a CC update changing curve presets?

Kinect shows only grayscale images instead of colored ones

I have a very weird problem and haven't found any answer yet, so maybe someone here has seen something like this?
When I start the NiViewer demo, it works and shows the depth image on the left, but a grayscale image on the right.
When I start the Sample-NiSimpleViewer demo, it won't work and gives the error:
The device image format must be RGB24
The weirdest thing is that about two weeks ago everything was fine - no errors, and the image was coloured.
I think the changes occurred after installing the ROS packages for openni, or maybe I broke something in the settings (however, I am a very inexperienced user, and I don't recall changing anything anywhere).
I thought that maybe the Kinect was broken, but when I start it under ROS (rviz or image_view), it actually shows a coloured image.
Anyone have any suggestions?
Have you tried openni_launch with RViz? RViz can also visualize RGB images / point clouds.
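If you want a quick sanity check of the RGB stream under ROS without opening RViz, a small rospy sketch like this should do; the topic name is the usual openni_launch default and may differ on your setup.

import rospy
from sensor_msgs.msg import Image

def on_image(msg):
    # a healthy color stream reports an encoding like "rgb8" / "bgr8"
    rospy.loginfo("got %dx%d frame, encoding=%s", msg.width, msg.height, msg.encoding)

rospy.init_node("rgb_probe")  # node name is arbitrary
rospy.Subscriber("/camera/rgb/image_color", Image, on_image)
rospy.spin()

If this prints color encodings while NiViewer still shows grayscale, the camera hardware is fine and the problem is in the OpenNI-side image format configuration.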