Socket added in UE4 is too big (skeleton exported from Blender)

I have looked into some of the things that could cause this, and after checking all of them I still have the same issue.
The scale for both the skeleton and the mesh in Blender has been applied and is at a value of 1 for all axes.
The unit system in Blender is set to Metric and the unit scale is 1.00.
I have tried both checking and unchecking "Apply Unit" in the export window.
The scale of the character is perfect when I open and use it in UE4; the issue only appears when adding a socket, which by default is giant compared to the skeleton.

You need to set Blender's scene unit scale to 0.01 if you're using the default Metric units.
Similar problem resolved here:
https://www.reddit.com/r/unrealengine/comments/afvjrp/sockets_are_huge_and_upscale_everything/
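If you'd rather set this from Blender's Python console than from the Scene properties panel, a minimal sketch looks like this (the export path and the apply_unit_scale option are my own assumptions about a typical FBX export, not something from this thread):

import bpy

# UE4 works in centimeters; with Metric units, a scene unit scale of 0.01
# makes 1 Blender unit correspond to 1 cm on the UE4 side.
scene = bpy.context.scene
scene.unit_settings.system = 'METRIC'
scene.unit_settings.scale_length = 0.01

# Hypothetical export call; adjust the path and options to your setup.
bpy.ops.export_scene.fbx(
    filepath="/tmp/character.fbx",
    apply_unit_scale=True,
)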

Related

Nsight Graphics and RenderDoc cannot trace application

I am stuck writing a Vulkan renderer. The final output I see on the screen is only the clear color, animated over time, but no geometry. Even with all possible validation turned on I don't get any errors / warnings / best-practices / performance hints etc., except for the best-practices warning "You are using VK_PIPELINE_STAGE_ALL_COMMANDS_BIT when vkQueueSubmit is called". I'm not actually sure I use all possible validation, but I have the Vulkan Configurator running and ticked all checkboxes under "VK_LAYER_KHRONOS_validation", after which the vkQueueSubmit hint showed up.
After poking around for some hours I decided to look into using RenderDoc. I can start the application just fine, however RenderDoc says "Connection status: Established, Api: none" and I cannot capture a frame.
Quite confused, I thought I would look into using Nsight Graphics, only to find the same problem: I can run the application, but it says "Attachable process detected. Status: No Graphics API". I read somewhere that I can start the process first and then use the attach functionality to attach to the running process, which I did, unfortunately with the same outcome.
I read there can be problems when not properly presenting every frame, which is why I change the clear color over time: to make sure I actually present every frame, which I can confirm is the case.
I am quite lost at this point. Has anyone had similar experiences? Any ideas as to what I could do to get RenderDoc / Nsight Graphics working properly? Neither shows anything in its logs; I guess they just assume the process does not use any graphics API and thus won't trace it.
I am also thankful for ideas about why I cannot see my geometry, though I understand this is even harder to guess from your side. Still, some notes: I have forced depth and stencil tests off; although the vertices should be COUNTER_CLOCKWISE I have also checked CLOCKWISE just to make sure; set the face cull mode off; checked the color write mask and rasterizerDiscard; and even set gl_Position to ignore the vertex positions and transform matrices completely and use some random values in the range -1 to 1 instead. Basically everything that came to my mind when I hear "only clear color, but no errors", but all to no avail.
In case it helps with anything: I am on Windows 11 using either an RTX 3070 or an Intel UHD 770, both with the same outcome.
Small Update:
Using the Vulkan Configurator I could force the VK_LAYER_RENDERDOC_capture layer on, after which, when running the application, I can see the overlay and, after pressing F12, read that it captured a frame. However, RenderDoc still cannot find a graphics API for this process and I have no idea how to access that capture.
I then forced VK_LAYER_LUNARG_api_dump on and dumped the output into an HTML file, which I inspected, and I still cannot see anything wrong. I looked especially closely at the pipeline and render pass creation calls.
This left me thinking it might be some uniform / vertex buffer contents or offsets, so I removed all of that and used hardcoded vertex positions and fragment outputs, and still I can only see the clear color in the final image on the screen.
Thanks
Maybe confused me should start converting the relative viewport that I expose to absolute values using my current camera's width and height, i.e. giving (0, 0, 1920, 1080) to Vulkan instead of (0, 0, 1, 1).
Holymoly what a ride
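For reference, the conversion described in the update above is just a scale by the framebuffer size: Vulkan's VkViewport wants absolute framebuffer coordinates such as (0, 0, 1920, 1080), not a normalized (0, 0, 1, 1) rectangle. A tiny sketch of that step (the helper name and the normalized-rectangle convention are made up for illustration):

def to_absolute_viewport(rel, fb_width, fb_height):
    """Convert a normalized viewport (x, y, w, h) in [0, 1] to pixel values."""
    x, y, w, h = rel
    return (x * fb_width, y * fb_height, w * fb_width, h * fb_height)

# The relative full-screen viewport from the question:
print(to_absolute_viewport((0.0, 0.0, 1.0, 1.0), 1920, 1080))
# -> (0.0, 0.0, 1920.0, 1080.0)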

Blender texture doesn't show up correctly after repeated baking, even though the UV mapping fits perfectly

The problem occurs when I repeatedly bake lighting & reflections via the Principled BSDF (Cycles) onto an "Image Texture" node. The first few times I get the expected results, and then suddenly the mesh seems to be broken, as it keeps showing subsequent bakes incorrectly (image below).
Also, when I move an island in the UV map, nothing seems to change on the mesh in the 3D viewport. The UV texture looks unchanged no matter what I do, like it has frozen or something.
My Blender version is 2.92. I'm getting the same problem with 2.83.
I keep getting this problem over and over and I just can't find a solution. Even if I export the mesh into another project, it just "infects" the other project and I get the same problem there.
I can only repair it if I completely start over.
Please help me. I'm really frustrated with this. This has defeated my Blender project now for like the 4th time... :/
> Screenshot example here <
It appears as if the generated texture coordinates are being used for some reason instead of the UV map coordinates. If the Vector socket of the image texture node is unconnected, it should use the currently selected UV map.
This may actually be a bug if it's happening after multiple uses of the baking tool.
You should first try connecting the image texture's Vector input to the UV output of a Texture Coordinate node to see if it has any effect. Alternatively, try connecting a UV Map node.
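If it is easier to try this from the Python console than in the shader editor, a sketch along these lines should create the link (the material and node names are placeholders for whatever yours are called):

import bpy

# Placeholder names; use your own material and image texture node here.
mat = bpy.data.materials["Material"]
nodes = mat.node_tree.nodes
links = mat.node_tree.links

image_node = nodes["Image Texture"]

# Option 1: Texture Coordinate node, UV output -> image Vector input.
tex_coord = nodes.new("ShaderNodeTexCoord")
links.new(tex_coord.outputs["UV"], image_node.inputs["Vector"])

# Option 2: a UV Map node pinned to an explicit UV layer instead.
# uv_node = nodes.new("ShaderNodeUVMap")
# uv_node.uv_map = "UVMap"   # name of your UV layer
# links.new(uv_node.outputs["UV"], image_node.inputs["Vector"])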

Why do GUI components in Qt5 show different sizes when deployed on a system with a different resolution?

I am developing an application in Qt5. When I deploy it on the machine on which it was developed, it looks fine. But when I deploy it on a laptop with a higher resolution, the size of components like QPushButtons etc. shrinks. Please help me, I have no idea why this is happening.
I'm not sure how you are specifying the size, but maybe you are using fixed sizes? If so, when deploying on a higher-resolution display the interface appears smaller but in fact has the same pixel size as on the lower-resolution display.
Try using relative sizes, or at least show how you are setting the sizes right now.
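To illustrate what relative sizes mean in practice, here is a minimal sketch using layouts instead of fixed geometries (written with PyQt5 for brevity; the same idea applies to a C++ Qt5 application):

import sys
from PyQt5.QtWidgets import (QApplication, QWidget, QVBoxLayout,
                             QPushButton, QLabel)

app = QApplication(sys.argv)

window = QWidget()
layout = QVBoxLayout(window)   # the layout manages the child widgets

layout.addWidget(QLabel("Buttons resize with the window:"))
layout.addWidget(QPushButton("OK"))
layout.addWidget(QPushButton("Cancel"))

# No setFixedSize()/setGeometry() calls: the layout computes sizes from
# font metrics and the available space, so the UI stays usable on
# displays with different resolutions.
window.resize(400, 200)
window.show()
sys.exit(app.exec_())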

GNU Radio audio underrun on FM radio capture, using the HackRF-compatible rad1o badge from CCC

I'm unable to finish the first lesson of http://greatscottgadgets.com/sdr/1/ successfully. The example runs, but instead of being able to capture the tuned radio station, I only get noise. GNU Radio Companion keeps printing audio underrun errors.
I'm using GNU Radio on a Kali VM on a Mac OS X machine with an i7 and 16 GB of RAM.
GNU Radio Companion keeps printing audio underrun errors.
Underrun means that your audio device is not getting the samples per second it needs. Maybe it doesn't support the sampling rate you configured it to, or maybe your VM is just running too slowly (audio emulation in VMs is especially problematic).
You should try using GNU Radio natively. GNU Radio has a live image that you can put on a DVD or USB stick and try out natively.
Also, try different audio sampling rates in the audio sink (you will need to adjust the audio rate/decimation in the demodulator, too!). 44100 works best, typically.
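To make the rate bookkeeping concrete, here is a rough Python sketch of how the demodulator's quadrature rate, its audio decimation and the audio sink rate have to line up (the specific rates are only examples, and a throttled test tone stands in for the real HackRF source):

import time
from gnuradio import analog, audio, blocks, gr

quad_rate = 480000                      # rate entering the WBFM demodulator
audio_decim = 10                        # 480000 / 10 = 48000
audio_rate = quad_rate // audio_decim   # must equal the audio sink rate

tb = gr.top_block()

# Stand-in for the SDR source: a throttled complex test tone.
src = analog.sig_source_c(quad_rate, analog.GR_COS_WAVE, 75e3, 0.5)
throttle = blocks.throttle(gr.sizeof_gr_complex, quad_rate)

demod = analog.wfm_rcv(quad_rate=quad_rate, audio_decimation=audio_decim)
sink = audio.sink(audio_rate, "", True)  # try 48000, 44100, 36000, ...

tb.connect(src, throttle, demod, sink)
tb.start()
time.sleep(5)
tb.stop()
tb.wait()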
I was able to find success with this by setting the Audio Sink to 36 kHz. Good luck.
This may be caused by PulseAudio; see the wiki.
Please try modifying the ~/.gnuradio/config.conf to:
[audio_alsa]
nperiods = 32
period_time = 0.010
verbose = false
Open the file in a text editor:
nano ~/.gnuradio/config.conf
Identify the [audio_alsa] section.
Check that the values match the text above.
This should resolve the choppy audio, but the complete lack of radio could mean your signal isn't strong enough or you're on the wrong frequency. Try moving the SDR aerial around, preferably near a window, to get a better signal.
For finding radio stations, it might help to replace Scott's channel_freq variable with a slider from GUI Widgets > QT > QT GUI Range. This gives you a slider that will control the frequency. Some suggested values (I don't know the full FM range):
Id: channel_freq
Label: Channel Frequency
Type: "float"
Default Value: 100e6
Start: 70e6
Stop: 120e6
Step: 1e3
Widget: "Counter + Slider"
Minimum Length: 200
For anyone else following Scott's guide in 2021: some of the features he uses have been deprecated, such as the WX GUIs. There are replacements for these under QT instead; in particular, the "WX GUI FFT Sink" can be replaced with the "QT GUI Sink".

Sample cairo applications to test cairo-gles backend in 1.12.14

I was able to successfully port, cross-compile and run the cairo gears application with the GLES backend on my embedded system target.
http://people.linaro.org/~afrantzis/cairogears-0~git20100719.2b01100+gles2.tar.gz
The ported samples trap, comp, text and shadow run well with cairo 1.12.3 and 1.12.4.
But I have problems running the same samples with 1.12.14: I could not run the texture-related samples like comp, text and shadow, and while trap plays well, the gradient is not displayed in the gradient sample.
I use the GLES backend and convert all image surfaces I load from PNG files to GL surfaces.
Let me know if there is something that should be done for the texture and gradient samples to work in 1.12.14.
thanks
Sundara raghavan
The problem was the need to convert GL_BGRA, the internal image format of cairo, to GL_RGBA for loading into GL textures (which were GL_RGBA by default). I solved it by applying an existing patch which uses BGRA GL textures and hence avoids the conversion. This was possible because my hardware is capable of both reading and creating BGRA textures.
The Patch was found here:
http://lists.freedesktop.org/archives/cairo/2013-February/024038.html
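For anyone hitting the same mismatch, the channel-order difference is easy to see from Python with pycairo: a CAIRO_FORMAT_ARGB32 surface stores each pixel as one native-endian 32-bit word, which on a little-endian machine lays out in memory as B, G, R, A, i.e. what GL calls BGRA rather than RGBA. A small sketch:

import cairo

# One-pixel ARGB32 surface painted pure opaque red.
surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 1, 1)
ctx = cairo.Context(surface)
ctx.set_source_rgba(1.0, 0.0, 0.0, 1.0)
ctx.paint()
surface.flush()

raw = bytes(surface.get_data())
print(raw)  # on little-endian hosts: b'\x00\x00\xff\xff' -> B, G, R, A

# So before handing such a buffer to a plain GL_RGBA glTexImage2D you either
# swizzle the bytes (the conversion mentioned above) or, as the patch does,
# upload it directly as a BGRA texture when the hardware supports that.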