This is my first time modelling a character in Blender.
Link to pic: screenshot
The rest of the body is complete, but now I want to combine the hands with the arms, and as you can see the hands are a child of the body. I tried many things I found on Google, like pressing U, Alt+P, etc., but nothing worked. When I right-click and select Unlink, it says "not yet implemented". Everything I do to the hand happens to the body too. I want the hand to be a separate entity. Please tell me how I can do that. :'<
Pressing P worked. Thanks for your time anyway.
As my title says, I am having trouble mapping face data from the Live Link app to a MetaHuman.
Here is what I have done so far:
Created a UE5 project (Film/Video & Live Event)
Imported a metahuman (custom-made) into the project
Added the required plugins to the project (Live Link, Apple ARKit, etc.)
Connected Live Link mobile application to local network
Set the MetaHuman's animation controller to the Live Link feed
Calibrated the Live Link data within the Live Link application
The problem I am having:
Parts of the face are not responding at all (e.g., the MetaHuman's right eyebrow does not respond when I lift my left eyebrow).
The left corner of the mouth seems to be stuck (e.g., when I try to open my mouth, every point responds except that single point, which stays where it is).
The mapping/naming of facial components seems to be mirrored/off/labeled wrong (e.g., if I wink my right eye, my right eye closes and my right cheek presses upward; on the MetaHuman, the left eye blinks and the right cheek raises).
These issues are very frustrating, as I cannot seem to get past this basic calibration. I see people online using these same tools and getting really clean results with the MetaHuman's facial movements. Is there something I am missing? I know that after the MetaHuman has been calibrated I will create sequences; am I supposed to be modifying these values there? I am not sure. I have commented on every video I can find and I have posted this question in the Unreal Discord (here) with basically no help.
Note: I don't need a full solution, I just need to be pointed in the right direction! Please let me know if there is anything I am missing in my setup or calibration workflow.
Thanks for reading.
I had the exact same problem with a customer project, and the issue was that it wasn't using the correct Animation Blueprint. Once we switched it to the correct Animation Blueprint for Live Link, the issue was immediately resolved. I remember that exact same facial expression; sorry, I know this was a frustrating one for the customer as well.
It seemed that I was doing everything right. After upgrading from 5.0.3 to 5.1 the issue stopped completely.
I'm doing a science assignment that simulates the effects of different volcanoes. The user inputs information, and the program then uses that information to display the effects the volcano has. When I try to switch the 'slide' from the input page to the output page using a 'next' button, it won't work.
Here is the code: https://www.khanacademy.org/computer-programming/volcano-sim/6659613043589120
As you can see, the rectangle does pop up, but the rest of the content from the input page doesn't go away. Please help.
(Also, this is not exactly Processing.js; it's on a site called Khan Academy that uses a very close spin-off of Processing.js.)
I made a spin-off of your program. The slides work now and I cleared out the long and confusing logic. It's different, but if you look through it a bit you should be able to pick up what it's doing.
Hope you find it helpful. :) If you have any trouble with any of it, just post in the Tips and Thanks there.
https://www.khanacademy.org/computer-programming/total-rewrite-of-code-free-to-use/5501032388755456
I'm trying to set up a few image categorization tasks on the Mechanical Turk sandbox (developer version). When I try to view the HIT (the annotation image), it appears blank. I clicked the 'Accept HIT' button, but I still couldn't see anything.
To make sure that nothing was wrong with my project setup in particular, I signed in as a worker to accept HITs on other projects involving image categorization. I still see a blank image in their categorization projects, where the image to be annotated is supposed to be displayed.
Can anyone help with this problem? Thanks.
Problem solved: it was a simple browser incompatibility issue.
I know you have already answered this for yourself; however, for other requesters out there, I think this may be useful.
I was developing HITs and was also having issues viewing the HIT in the sandbox in Chrome and Firefox. I realized it had something to do with the script being blocked by the browser, and the way to fix this was to "unblock the content" (usually via a shield icon in the URL bar).
When further developing my HIT, I added information about how to see the HIT in its description box, so Turkers could read the instructions and then work on the HIT. To be absolutely clear to the Turkers, I added "(READ DESCRIPTION)" to the title so they would know where to look.
Hope this helps!
I've been programming for a while, but just recently decided to start developing for Mac OS X. I feel like I've come to grips with the basics of Objective-C and Cocoa development over the past week. I'm planning on making graphics apps, and as such am currently in the process of learning how to control Quartz compositions through a Cocoa app. I went through the tutorial that Apple offers (with the Mac Engravings composition) and was able to create that just fine. To make sure that I truly understood what I learned, I decided to create my own composition and link it to a slightly more complicated Cocoa application.
Essentially, I have a composition that loads a movie or image through a Movie Loader patch, at which point it applies various filters to the frames before outputting the result. In my Cocoa app, I've written code (or rather copied and pasted from other Apple examples) that lets the user pick a file using an NSOpenPanel object. The file path of the file they pick gets placed in a text field that I added to the app's window using Interface Builder. I bound the value of that text field to the "Movie_Location" key in my composition, which is a published input on the Movie Loader patch I'm using. However, no matter what I try, movies and images aren't loaded into this composition. The only thing that gets displayed is the default image that I have saved in that input from Quartz Composer (or nothing if I leave it blank before publishing).
I've added a Clear Color patch to the composition and bound it to a color well in my UI, and that successfully changes the color in my display, so I know that the composition and my Cocoa app are communicating. I've spent numerous hours at this point trying to figure out what's going on, and I've just about given up. Does the Movie Loader have any weird behaviors that I'm not aware of, or is there something obvious that I'm missing? I'd really appreciate any help or advice from anybody.
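In case it's useful, here's roughly what my file-picking action looks like (simplified, and the outlet names are just placeholders):

// Simplified sketch of the action wired to my "Choose..." button.
// The text field's value is bound in Interface Builder to
// patch.Movie_Location.value on the QCPatchController.
- (IBAction)chooseMovie:(id)sender
{
    NSOpenPanel *panel = [NSOpenPanel openPanel];
    [panel setAllowsMultipleSelection:NO];
    if ([panel runModal] == NSOKButton) {
        NSString *path = [[[panel URLs] objectAtIndex:0] path];
        // The binding is supposed to push this value into the composition.
        [movieLocationField setStringValue:path];
    }
}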
Thanks for reading through this...
Best,
Sami
There are two things I can think of that might be causing this:
The file path isn't formatted correctly. Try checking backslashes, colons, etc.
The box isn't updating the value. Try literally clicking in the text field and hitting enter.
That's all I can think of without seeing your Quartz composition and/or code.
EDIT:
Also check the other 'continuous' box in the general properties.
I figured this out yesterday. spudwaffle's second idea is what was going on: if I typed a file path in and hit Enter, it would work just fine. I got this working properly by removing the binding and instead using the setValue:forKeyPath: method that a patch controller offers (rough sketch below). That said, is there some way to force a text field to update? I remember seeing a "continuously update" (or something like that) checkbox within the bind sub-menu in the inspector, but my code didn't work with that checked either.
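For reference, the line I ended up using looks roughly like this (the controller outlet and key path obviously depend on your own published input name):

// Inside the same file-picking action, after getting the path:
[movieLocationField setStringValue:path];   // keep the UI in sync
[patchController setValue:path
               forKeyPath:@"patch.Movie_Location.value"];  // push it into the composition directly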
Thanks to those of you that tried to help me! I really appreciate it.
Best,
Sami
I have a navigation controller with a list of appointments in a table view, and an Add button in the right corner of the navigation bar. When the user taps the Add button, a view appears with text fields and buttons. The problem is that whenever the user adds an appointment and taps the Add button, the app has to hit the server and store that data. I want to do that using RestKit. Can anybody tell me how to hit the server and how to store the data?
Yes, Google can.
Here is the first result for the search 'restkit tutorial':
http://mobile.tutsplus.com/tutorials/iphone/restkit_ios-sdk/
And, in case that one is a little out of date, here's the second result for the search 'restkit tutorial':
http://mobile.tutsplus.com/tutorials/iphone/advanced-restkit-development_iphone-sdk/
The current documentation is only on GitHub. Most other sources are outdated, thanks to RestKit's rapid development.
The wiki has a lot of good information, and you can find documentation that's always up-to-date in the docs directory.
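As a very rough starting point, posting the new appointment could look something like this. This is only a sketch: it assumes the newer 0.20-style block API (older RestKit versions use a delegate-based RKObjectLoader API instead), and the base URL, path, and field names here are made up for the example.

#import <RestKit/RestKit.h>

// Configure the object manager once, e.g. in the app delegate.
RKObjectManager *manager =
    [RKObjectManager managerWithBaseURL:[NSURL URLWithString:@"https://example.com/api"]];

// In the Add button's action: send the form values to the server.
[manager postObject:nil
               path:@"/appointments"
         parameters:@{ @"title" : titleField.text,
                       @"date"  : dateField.text }
            success:^(RKObjectRequestOperation *operation, RKMappingResult *mappingResult) {
                NSLog(@"Appointment stored on the server");
            }
            failure:^(RKObjectRequestOperation *operation, NSError *error) {
                NSLog(@"Failed to store appointment: %@", error);
            }];

If you want RestKit to serialize your own Appointment objects and map the server's response back into them, you'd add request and response descriptors as described in the object mapping guide on the wiki.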
I recently wrote a detailed overview of RestKit with many pieces of code, and I think it can help you understand how it works and how to get things done.
http://blog.octo.com/en/overview-of-restkit-a-core-data-enabled-ios-macosx-framework-for-restful-apps/