I'm trying to make a 30-second video out of still images and can't seem to find useful information on how to achieve this; if someone could point me to some examples, that would be great. Other than that, my script is below. I think Dissolve is what should connect the images, is that right? And how do I connect the first message window so that it appears first? Basically, I want all three of these to play from top to bottom as one video clip.
Script below:
MyMessage="Wireless Communications"
MessageClip(MyMessage, 320,240, text_color=color_antiquewhite,
\bg_color=color_blue)
Rails="C:\Users\me\Desktop\1.png"
RailClip=ImageReader(Rails,start=0,end=100,fps=25)
Info(RailClip)
PointResize(RailClip, 320,240, 0,20, 148,148)
Rails2="C:\Users\me\Desktop\3.jpg"
RailClip2=ImageReader(Rails2,start=101,end=200,fps=25)
Dissolve(RailClip+RailClip2,25)
Running this gives the error: 'frame sizes don't match'.
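For what it's worth, the 'frame sizes don't match' error usually means the clips being joined have different dimensions. In the script above, PointResize returns a new clip rather than resizing RailClip in place, so unless its result is assigned back (or used), RailClip keeps the original image size while RailClip2 has the size of 3.jpg. Below is a rough, untested sketch of one way the three pieces could be lined up, using the same paths as above and forcing everything to 320x240, RGB32 and 25 fps so that Dissolve will accept them:

# Untested sketch: same file paths as the script above, everything normalised
# to 320x240 / RGB32 / 25 fps so the clips can be joined and cross-faded.
title = MessageClip("Wireless Communications", 320, 240, \
                    text_color=color_antiquewhite, bg_color=color_blue)
title = title.ConvertToRGB32().AssumeFPS(25)            # Trim()/Loop() to adjust its length
img1  = ImageSource("C:\Users\me\Desktop\1.png", end=249, fps=25)   # ~10 s of the still
img2  = ImageSource("C:\Users\me\Desktop\3.jpg", end=249, fps=25)
img1  = img1.ConvertToRGB32().BilinearResize(320, 240)
img2  = img2.ConvertToRGB32().BilinearResize(320, 240)
Dissolve(title, img1, img2, 25)                         # 25-frame (1 s) cross-fades between clips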
As my title says, I am having trouble mapping face data from the Live Link app to a MetaHuman.
Here is what I have done so far:
Created a UE5 project (Film/Video & Live Event)
Imported a metahuman (custom-made) into the project
Added the required plugins to the project (Live Link, ARKit, Apple, etc.)
Connected the Live Link mobile application to the local network
Set the MetaHuman's animation controller to the Live Link feed
Calibrated the Live Link data within the Live Link application
The problem I am having:
Parts of the face are not responding at all (e.g. the MetaHuman's right eyebrow does not respond when I lift my left eyebrow).
The left corner of the mouth seems to be stuck (e.g. when I try to open my mouth, every point responds except for that single point, which stays where it is).
The mapping/naming of the facial components seems to be mirrored/off/labeled wrong (e.g. if I wink my right eye, my right eye closes and my right cheek presses upward; on the MetaHuman, the left eye blinks and the right cheek raises).
These issues are very frustrating, as I cannot seem to get past this basic calibration. I see people online using these same tools and getting really clean results with the MetaHuman's facial movements. Is there something I am missing? I know that after the MetaHuman has been calibrated I will create sequences; am I supposed to be modifying these values there? I am not sure... I have commented on every video I can find and I have posted this question in the Unreal Discord (here) with basically no help.
Note: I don't need a full solution, I just need to be pointed in the right direction! Please let me know if there is anything I am missing in my setup or calibration workflow.
thanks for reading.
I had the exact same problem with a customer project, and the issue was that it wasn't using the correct Animation Blueprint. Once we switched it to the correct Animation Blueprint for Live Link, the issue was immediately resolved. I remember that exact same facial expression; sorry, I know this was a frustrating one for the customer as well.
It seemed that I was doing everything right. After upgrading from 5.0.3 to 5.1 the issue stopped completely.
I'm just getting started in Adobe Animate/CreateJS. I'm trying to control the timeline of a movie clip ("myMovieClip") on frame 1 of the main stage, preventing it from playing. According to the documentation, this should work:
this.myMovieClip.stop();
But it does not... the movie clip "myMovieClip" plays immediately when I test in the browser, and I'm not getting any errors in the console. It's as if the above line of code wasn't even there.
This seems pretty basic. What am I missing? I ultimately want to stop all movie clips on the main timeline as well as a large group of nested ones. If there's a single command that does this, I'd love to hear about it.
I am fairly certain this is related to a bug in the Adobe Animate output that makes MovieClip timelines not immediately available.
You can get around this by forcing an update before you try to access the children:
this.gotoAndStop(0); // forces the timeline to update so nested clips are available
this.myMovieClip.stop();
Hope that helps!
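On the "stop everything at once" part of the question: I'm not aware of a single built-in command for it, but a small recursive helper could walk the display list and stop every nested MovieClip. A sketch (stopAll is a made-up name, not a CreateJS API):

// Hypothetical helper: recursively stop every MovieClip nested under a display object.
function stopAll(displayObject) {
    if (displayObject instanceof createjs.MovieClip) {
        displayObject.stop();
    }
    if (displayObject.children) {
        displayObject.children.forEach(stopAll);
    }
}
stopAll(this); // e.g. on frame 1 of the main timeline, after the gotoAndStop(0) workaround above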
Currently, I have an RSS feed reader in a UITableView inside a navigation controller. I would like to tap the links and open a formatted page (containing all of the information from the website, formatted for the iOS screen). I'm not sure whether I should do this using the RSS data and a UITextView? I'm currently attempting to use a UITextView in the hope that I can separate out the information (title, author, body), but nothing is looking promising. I want to be able to move the data around and format it to my liking in the application itself. I read around and noted that you can include HTML and custom CSS. Would this be the way to go? I'm not quite sure how to tackle this.

I want the page that opens to be entirely scrollable (like the IGN or Slashgear applications). Many apps for websites do this (and I am a bit new to this); how do they go about it? I also want to note that at some point I would like to cache the data so it can load what has already been loaded without being connected to the internet. Does anyone have any ideas?
Edit:
OK, I believe I found the right path to go down after playing around and a lot of googling (nothing directly says what a decent way of doing this is). My particular approach as of now is the UIScrollView route in general. The part I don't understand now is how to divide the long text up into 'pages' for scrolling (I am using the paging feature). This situation has led me to this question: How To Separate Strings For UIScrollView/UITextView based on the size of the frame
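For reference, the HTML-plus-CSS idea mentioned in the question is a common way to get a formatted, fully scrollable article page: build an HTML string from the parsed feed item and load it into a web view, which also makes offline caching as simple as saving that HTML string to disk. A minimal Swift sketch, where ArticleItem and its fields are made-up names (and WKWebView post-dates the original question):

import UIKit
import WebKit

// Hypothetical model for one parsed feed entry (not from the original post).
struct ArticleItem {
    let title: String
    let author: String
    let body: String   // HTML body taken from the RSS item
}

final class ArticleViewController: UIViewController {
    private let webView = WKWebView()
    private let item: ArticleItem

    init(item: ArticleItem) {
        self.item = item
        super.init(nibName: nil, bundle: nil)
    }
    required init?(coder: NSCoder) { fatalError("not used") }

    override func loadView() { view = webView }

    override func viewDidLoad() {
        super.viewDidLoad()
        // Wrap the feed item in HTML + CSS so the app controls the layout,
        // and scrolling comes for free from the web view.
        let html = """
        <html><head>
        <meta name="viewport" content="width=device-width, initial-scale=1">
        <style>body { font-family: -apple-system; margin: 16px; } h1 { font-size: 1.4em; }</style>
        </head><body>
        <h1>\(item.title)</h1>
        <p><em>\(item.author)</em></p>
        \(item.body)
        </body></html>
        """
        webView.loadHTMLString(html, baseURL: nil)
    }
}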
I've got a terminal application that needs to take a webcam picture and then perform some processing on it, and I'm having trouble getting it to initialize. There's a fairly complete demo in the Apple docs, an app called MyRecorder that uses QTKit, which I was able to make work fine. I was also able to modify it to grab a single frame instead of a stream.
When I move this to a terminal application, calling startRunning on the QTCaptureSession simply does nothing. There are no errors, and everything reports success, but my webcam doesn't light up and no frames are captured.
Any idea what's going on here? Are there any security restrictions, or restrictions of some other kind, that would prevent the QTCaptureSession from working?
Switching to AVFoundation solved my problem. I'm still not certain what the issue was, but for now AVFoundation seems like the way to go, since it was designed to replace QTKit anyway.
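For anyone hitting the same thing: the usual culprit when a capture session silently does nothing in a command-line tool is that nothing keeps the process alive for the capture callbacks to fire (and on recent macOS, camera permission also has to be granted). A rough AVFoundation sketch in Swift (untested; FrameGrabber is a made-up name) that grabs a single frame from the default camera:

import Foundation
import AVFoundation
import CoreImage

// Sketch only: assumes macOS, a default video device, and that camera access
// has already been authorised for this binary.
enum GrabError: Error { case noCamera, cannotConfigure }

final class FrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let output = AVCaptureVideoDataOutput()
    private let queue = DispatchQueue(label: "frame-grabber")
    private var done = false

    func start() throws {
        guard let device = AVCaptureDevice.default(for: .video) else { throw GrabError.noCamera }
        let input = try AVCaptureDeviceInput(device: device)
        guard session.canAddInput(input), session.canAddOutput(output) else {
            throw GrabError.cannotConfigure
        }
        session.addInput(input)
        output.setSampleBufferDelegate(self, queue: queue)
        session.addOutput(output)
        session.startRunning()
    }

    // Called on `queue` for every captured frame; we only want the first one.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard !done, let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        done = true
        let frame = CIImage(cvPixelBuffer: pixelBuffer)
        print("Captured a frame: \(frame.extent.size)")
        // ...process the frame here, then exit.
        exit(0)
    }
}

let grabber = FrameGrabber()
try! grabber.start()   // crashing on setup failure is acceptable for a sketch
dispatchMain()         // keep the command-line process running so the delegate can be called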
I am developing an iPhone application where I want to display three images in each row of a scroll view, and I need a tap action on each image, like the photo album on the iPhone. I am not finding any sample code.
Hoping for help
subodh
There's plenty of sample code out there; I found this after only basic googling. You want to search for "UIImageView iPhone". It's also worth mentioning that Apple's own Developer Center is extremely well written and will teach you everything you need to know about iPhone programming.
Generally it is frowned upon to tell someone to look harder or read the documentation, but you really haven't looked at all, especially given Apple's own resources that tell you how to do almost anything, and certainly something like this. It's not something you can pick up in bits and pieces and expect to be successful with; it really should be learned starting from the beginning and moving forward. This is especially true if you've never programmed before or are unfamiliar with C/Objective-C.
Three20 has a photo browser that is open source and works similarly to the iPhone's photo browser, with some nice code examples. The images come from an image-source object that can point them at images in your app's bundle or at images on the web. It looks like its Google group is here. I think that to use images in your bundle you use a URL of the form bundle://image-name.png, rather than the typical use of the main bundle to get a path to a resource.
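For what it's worth, a rough Swift sketch of the basic layout asked about in the question (the asset names are placeholders, and a UICollectionView would be the more modern choice): three tappable UIImageViews per row inside a plain UIScrollView, with a tap handler that knows which photo was hit.

import UIKit

// Hypothetical sketch: image names below are placeholders for assets in the app bundle.
final class ThumbnailGridViewController: UIViewController {
    private let scrollView = UIScrollView()
    private let imageNames = (1...12).map { "photo\($0)" }   // assumed asset names

    override func viewDidLoad() {
        super.viewDidLoad()
        scrollView.frame = view.bounds
        view.addSubview(scrollView)

        let columns = 3
        let spacing: CGFloat = 8
        let side = (view.bounds.width - spacing * CGFloat(columns + 1)) / CGFloat(columns)

        for (index, name) in imageNames.enumerated() {
            let row = index / columns
            let col = index % columns
            let imageView = UIImageView(image: UIImage(named: name))
            imageView.frame = CGRect(x: spacing + CGFloat(col) * (side + spacing),
                                     y: spacing + CGFloat(row) * (side + spacing),
                                     width: side, height: side)
            imageView.isUserInteractionEnabled = true   // required for gesture recognizers
            imageView.tag = index                        // lets the tap handler identify the photo
            imageView.addGestureRecognizer(
                UITapGestureRecognizer(target: self, action: #selector(thumbnailTapped(_:))))
            scrollView.addSubview(imageView)
        }

        let rows = (imageNames.count + columns - 1) / columns
        scrollView.contentSize = CGSize(width: view.bounds.width,
                                        height: spacing + CGFloat(rows) * (side + spacing))
    }

    @objc private func thumbnailTapped(_ sender: UITapGestureRecognizer) {
        guard let index = sender.view?.tag else { return }
        print("Tapped photo at index \(index)")   // push a detail view controller here
    }
}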