What is the best way to set up basic player movement (UE4)?

I am using UE4 Blueprints for this. I followed the instructions in the UE4 documentation (https://docs.unrealengine.com/en-US/Gameplay/HowTo/CharacterMovement/Blueprints/index.html) and a YouTube tutorial (https://www.youtube.com/watch?v=OExuEFp5kE0), both of which use the same basic movement setup. I copied their setups, but my character cannot move or look around at all. Here are some screenshots of my project:
Movement Setup, Game Mode
I already attached print strings to the "Add Movement Input" nodes. This confirmed that the event nodes are firing properly and that all of my inputs and keys are set up correctly. My character can't look around either. From what I've seen (the YouTube tutorial, for example), characters are supposed to be able to look around by default. That did not work, and I don't know why.
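For reference, the Blueprint setup I copied corresponds roughly to this C++ (a sketch, not the tutorial's actual code; AMyCharacter is a hypothetical ACharacter subclass, and the axis names "MoveForward", "MoveRight", "Turn", and "LookUp" are assumptions that must match the mappings under Project Settings > Input):

// MyCharacter.cpp - rough C++ equivalent of the Blueprint movement setup.
// MyCharacter.h (assumed) declares MoveForward/MoveRight and inherits ACharacter.
#include "MyCharacter.h"
#include "Components/InputComponent.h"

void AMyCharacter::SetupPlayerInputComponent(UInputComponent* PlayerInputComponent)
{
    Super::SetupPlayerInputComponent(PlayerInputComponent);

    PlayerInputComponent->BindAxis("MoveForward", this, &AMyCharacter::MoveForward);
    PlayerInputComponent->BindAxis("MoveRight", this, &AMyCharacter::MoveRight);
    // Looking around "by default" relies on these two bindings plus a camera
    // that follows the control rotation.
    PlayerInputComponent->BindAxis("Turn", this, &APawn::AddControllerYawInput);
    PlayerInputComponent->BindAxis("LookUp", this, &APawn::AddControllerPitchInput);
}

void AMyCharacter::MoveForward(float Value)
{
    // If this fires (as print strings confirm) but the pawn still doesn't move,
    // the usual suspects are the Game Mode's Default Pawn Class not pointing at
    // this character, or the player controller not possessing the pawn.
    AddMovementInput(GetActorForwardVector(), Value);
}

void AMyCharacter::MoveRight(float Value)
{
    AddMovementInput(GetActorRightVector(), Value);
}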

Related

How to configure mocap mapping and expressions with MetaHuman and Live Link

As my title says, I am having trouble mapping face data from the Live Link app to a MetaHuman.
Here is what I have done so far:
Created a UE5 project (Film / Video & Live Events)
Imported a MetaHuman (custom-made) into the project
Added the required plugins to the project (Live Link, ARKit, Apple, etc.)
Connected the Live Link mobile app to my local network
Set the MetaHuman's animation controller to the Live Link feed
Calibrated the Live Link data within the Live Link app
The problem I am having:
Parts of the face are not responding at all (e.g. the MetaHuman's right eyebrow does not respond when I lift my left eyebrow).
The left corner of the mouth seems to be stuck (e.g. when I try to open my mouth, all points respond except that single point, which stays where it is).
The mapping/naming of facial components seems to be mirrored or labeled wrong (e.g. if I wink my right eye, my right eye closes and my right cheek presses upward; on the MetaHuman, the left eye blinks but the right cheek raises).
These issues are very frustrating, as I cannot seem to get past this basic calibration. I see people online using these same tools and getting really clean results with their MetaHumans' facial movements. Is there something I am missing? I know that after the MetaHuman has been calibrated I will create sequences; am I supposed to be modifying these values there? I am not sure... I have commented on every video I can find and I have posted this question in the Unreal Discord (here) with basically no help.
Note: I don't need a full solution, I just need to be pointed in the right direction! Please let me know if there is anything I am missing in my setup or calibration workflow.
Thanks for reading.
Had the exact same problem with a customer project, and the issue was that it wasn't using the correct Animation Blueprint. Once we switched it to the correct Animation Blueprint for Live Link, the issue was immediately resolved. I remember that exact same facial expression; sorry, as I know this was a frustrating one for the customer as well.
It seemed that I was doing everything right. After upgrading from 5.0.3 to 5.1 the issue stopped completely.

Nsight Graphics and RenderDoc cannot trace application

I am stuck writing a Vulkan renderer. The final output I see on the screen is only the clear color, animated over time, but no geometry. Even with all possible validation turned on I don't get any errors, warnings, best-practices or performance hints, except for the best-practices warning "You are using VK_PIPELINE_STAGE_ALL_COMMANDS_BIT when vkQueueSubmit is called". I'm not actually sure I use all possible validation, but I have Vulkan Configurator running and ticked all the checkboxes under "VK_LAYER_KHRONOS_validation", after which the vkQueueSubmit hint showed up.
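(For reference, ticking those boxes is roughly equivalent to enabling the layer and its best-practices checks at instance creation; a minimal sketch, assuming the Vulkan SDK's validation layer is installed:)

// Enable the Khronos validation layer plus best-practices checks in code.
const char* layers[] = { "VK_LAYER_KHRONOS_validation" };

VkValidationFeatureEnableEXT enables[] = {
    VK_VALIDATION_FEATURE_ENABLE_BEST_PRACTICES_EXT,
};
VkValidationFeaturesEXT features{};
features.sType = VK_STRUCTURE_TYPE_VALIDATION_FEATURES_EXT;
features.enabledValidationFeatureCount = 1;
features.pEnabledValidationFeatures = enables;

VkInstanceCreateInfo createInfo{};
createInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
createInfo.pNext = &features;           // consumed by the validation layer
createInfo.enabledLayerCount = 1;
createInfo.ppEnabledLayerNames = layers;
// ... fill pApplicationInfo and extensions, then vkCreateInstance(&createInfo, nullptr, &instance);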
After poking around for some hours I decided to look into using RenderDoc. I can start the application just fine; however, RenderDoc says "Connection status: Established, Api: none" and I cannot capture a frame.
Quite confused, I thought I would look into using Nsight Graphics, only to find the same problem: I can run the application, but it says "Attachable process detected. Status: No Graphics API". I read somewhere that I can start the process first and then use the attach functionality to attach to the running process, which I did, unfortunately with the same outcome.
I read there can be problems when not properly presenting every frame, which is why I change the clear color over time: to make sure I actually present every frame, which I can confirm is the case.
I am quite lost at this point. Has anyone had similar experiences? Any ideas as to what I could do to get RenderDoc / Nsight Graphics working properly? Neither shows anything in its logs; I guess they just assume the process does not use any graphics API and thus won't be traced.
I am also thankful for ideas about why I cannot see my geometry, though I understand this is even harder to guess from your side. Still, some notes: I have forced depth and stencil tests off; although the vertices should be COUNTER_CLOCKWISE, I have also checked CLOCKWISE just to make sure; I have set the face cull mode off; I have checked the color write mask and rasterizerDiscard; and I have even set gl_Position to ignore the vertex positions and transform matrices completely, using random values in the range -1 to 1 instead. Basically everything that came to my mind when I hear "only clear color, but no errors", but everything to no avail.
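(A rasterization state with all of the above disabled looks roughly like this; a sketch, not the project's actual code, using the standard Vulkan struct:)

// Rasterization state with everything that could discard geometry turned off.
VkPipelineRasterizationStateCreateInfo raster{};
raster.sType = VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO;
raster.rasterizerDiscardEnable = VK_FALSE;          // must be FALSE, or nothing reaches the framebuffer
raster.polygonMode             = VK_POLYGON_MODE_FILL;
raster.cullMode                = VK_CULL_MODE_NONE; // face culling off
raster.frontFace               = VK_FRONT_FACE_COUNTER_CLOCKWISE;
raster.depthClampEnable        = VK_FALSE;
raster.depthBiasEnable         = VK_FALSE;
raster.lineWidth               = 1.0f;
// Depth/stencil tests live in a separate VkPipelineDepthStencilStateCreateInfo.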
In case it helps with anything: I am on Windows 11 using either an RTX 3070 or an Intel UHD 770, both with the same outcome.
Small Update:
Using the Vulkan Configurator I could force the VK_LAYER_RENDERDOC_Capture layer on, after which, when running the application, I can see the overlay, and after pressing F12 it reports that it captured a frame. However, RenderDoc still cannot find a graphics API for this process, and I have no idea how to access that capture.
I then forced VK_LAYER_LUNARG_api_dump on and dumped the output into an HTML file, which I inspected, and I still cannot see anything wrong. I looked especially closely at the pipeline and render pass creation calls.
This left me thinking it might be the uniform / vertex buffer contents, offsets, or the like, so I removed all of that and used hardcoded vertex positions and fragment outputs, and still I can only see the clear color in the final image on the screen.
Thanks
Maybe confused me should start converting the relative viewport that I expose into absolute values using my current camera's width and height, i.e. giving (0, 0, 1920, 1080) to Vulkan instead of (0, 0, 1, 1).
Holymoly what a ride
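(In other words, a minimal sketch of the fix; swapchainExtent and commandBuffer stand in for whatever the renderer already has:)

// Vulkan viewports are specified in framebuffer pixels, not normalized 0..1
// coordinates. A (0, 0, 1, 1) viewport rasterizes all geometry into a single
// pixel, while the render-pass clear still fills the whole image: exactly the
// "only clear color, no errors" symptom.
VkViewport viewport{};
viewport.x        = 0.0f;
viewport.y        = 0.0f;
viewport.width    = (float)swapchainExtent.width;   // e.g. 1920.0f
viewport.height   = (float)swapchainExtent.height;  // e.g. 1080.0f
viewport.minDepth = 0.0f;
viewport.maxDepth = 1.0f;

VkRect2D scissor{};
scissor.offset = {0, 0};
scissor.extent = swapchainExtent;                   // also in pixels

// With dynamic viewport state; or bake these into
// VkPipelineViewportStateCreateInfo if the viewport is static.
vkCmdSetViewport(commandBuffer, 0, 1, &viewport);
vkCmdSetScissor(commandBuffer, 0, 1, &scissor);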

AviSynth AvsP Dissolve not working

I'm trying to make a 30-second video out of images and can't seem to find useful information on how to achieve this; if someone could point me to some examples, that would be great. Other than that, I have my script below. I think Dissolve should connect the images, is that right? And how do I connect the first message window and make it appear first? Basically, I want all three of these to play top to bottom as one video clip.
Script below:
MyMessage="Wireless Communications"
MessageClip(MyMessage, 320,240, text_color=color_antiquewhite,
\bg_color=color_blue)
Rails="C:\Users\me\Desktop\1.png"
RailClip=ImageReader(Rails,start=0,end=100,fps=25)
Info(RailClip)
PointResize(RailClip, 320,240, 0,20, 148,148)
Rails2="C:\Users\me\Desktop\3.jpg"
RailClip2=ImageReader(Rails2,start=101,end=200,fps=25)
Dissolve(RailClip+RailClip2,25)
error: 'frame sizes don't match'.
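A likely culprit, judging from the script: PointResize(RailClip, ...) returns a new clip rather than resizing RailClip in place, and its result is never assigned, so RailClip and RailClip2 reach Dissolve at their original, different sizes (the second image is never resized at all). A sketch of a corrected script under that assumption, with the message clip first and every clip forced to the same 320x240 RGB32 format:

MyMessage = "Wireless Communications"
Msg = MessageClip(MyMessage, 320, 240, text_color=color_antiquewhite,
\     bg_color=color_blue).AssumeFPS(25).ConvertToRGB32()
Rails = "C:\Users\me\Desktop\1.png"
RailClip = ImageReader(Rails, start=0, end=100, fps=25).PointResize(320, 240).ConvertToRGB32()
Rails2 = "C:\Users\me\Desktop\3.jpg"
RailClip2 = ImageReader(Rails2, start=101, end=200, fps=25).PointResize(320, 240).ConvertToRGB32()
# Dissolve overlaps each adjoining pair of clips by 25 frames (1 second at 25 fps);
# all clips must share the same size, pixel type and frame rate.
Dissolve(Msg, RailClip, RailClip2, 25)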

Displaying Webpages From RSS Feed in iOS Xcode

Currently, I have an RSS feed reader in a UITableView within a Navigation Controller. I would like to tap the links and open up a formatted page (containing all of the information from the website, formatted for the iOS screen). I'm not sure if I should do this using the RSS data and a UITextView? I'm currently attempting to use UITextView in the hope that I can separate the information (title, author, body), without anything looking promising. I want to be able to move the data around and format it to my liking in the actual application itself. I read around and noted that you can include HTML and custom CSS. Would this be the way to go? I'm not quite sure how to tackle this. I want the page that opens up to be entirely scrollable (like the IGN or Slashgear applications). Many apps for websites do this (and I am a bit new to this); how do they go about it? I also want to note that at some point I would like to cache the data, so the app can load what has already been loaded without being connected to the internet. Does anyone have any ideas?
Edit:
Ok, I believe I found the correct path to go down after playing around and a lot of googling (nothing directly says what a decent way of doing this is). My particular approach as of now is the route of a UIScrollView in general. The part I don't understand is how to divide up the long text into 'pages' for scrolling (I am using the paging feature). This situation has led me to this question: How To Separate Strings For UIScrollView/UITextView based on the size of the frame

Very confused by a binding issue between a Cocoa app and a Movie Loader patch in Quartz Composer

I've been programming for a while, but just recently decided to start developing for Mac OS X. I feel like I've come to grips with the basics of Objective-C and Cocoa development over the past week. I'm planning on making graphics apps, and as such am currently in the process of learning how to control Quartz compositions from a Cocoa app. I went through the tutorial that Apple offers (with the Mac Engravings composition) and was able to create that just fine. To make sure that I truly understood what I had learned, I decided to create my own composition and link it to a slightly more complicated Cocoa application.
Essentially, I have a composition that loads a movie or image through a Movie Loader patch, at which point it applies various filters to the frames before outputting them. In my Cocoa app, I've written code (or rather copied and pasted from other Apple examples) that lets a user pick a file using an NSOpenPanel object. The file path of the file they pick gets placed in a text box that I placed in the app's window using Interface Builder. I bound the value of said text box to the "Movie_Location" key in my composition, which is a published input on the Movie Loader patch that I'm using. However, movies and images aren't loaded into this composition no matter what I try. The only thing that gets displayed is the default image that I have saved in that input from Quartz Composer (or nothing, if I leave it blank before publishing).
I've added a Clear Color patch to the composition and bound that to a color well in my UI, and that successfully changes the color in my display, so I know that the composition and my Cocoa app are communicating. I've spent numerous hours at this point trying to figure out what's going on, and I've just about given up. Does the Movie Loader have any weird behaviors that I'm not aware of, or is there something obvious that I'm missing? I'd really appreciate any help or advice from anybody.
Thanks for reading through this...
Best,
Sami
There are two things I can think of as reasons why it is doing this:
The file path is formatted incorrectly. Try checking backslashes, colons, etc.
The box isn't updating the value. Try literally clicking in the text field and hitting enter.
That's all I can think of without seeing your Quartz composition and/or code.
EDIT:
Check the other "Continuous" box, in the general properties.
I figured this out yesterday. spudwaffle's second idea is what was going on: if I typed a file path in and hit enter, it would work just fine. I got it to work properly by removing the binding and instead using the setValue:keyInPath: method that a patch controller offers. That said, is there some way to force a text box to update? I remember seeing a "continuously update" (or similar) checkbox within the bindings sub-menu in the inspector, but my code didn't work with that checked either.
Thanks to those of you that tried to help me! I really appreciate it.
Best,
Sami