An unexpected error occurred during export - createjs

I am using Adobe Flash Professional CC to convert Flash games to HTML5.
When I export my animation, an error pops up saying "a JavaScript error occurred", and the output panel shows "An unexpected error occurred during export. [JSX]".
I searched for an answer and found a few results, including http://community.createjs.com/, but none of them solves the problem.

For me, the problem was due to some Advanced Color Effects applied to a symbol (the ones where you mix alpha and RGB channel settings together). If I use simple effect styles like Alpha or Brightness, it works fine.

I had the same problem. For me the fix was just deleting one of my symbols from my stage. When it was in my library it published fine, but bringing it onto the stage gave me that error.

It seems that the HTML5 Canvas export really doesn't support certain art assets. I had the same problem, but I realised that some of my art elements had been created in Illustrator, with masks and paths. Once I got rid of them, things started working properly.

I recently struggled with this, desperately trying to work out what was causing the problem.
The answer was a custom easing curve set on a particular tween in a particular MovieClip.
It was incredibly hard to isolate; I had to delete aspects of the project one by one.
Apparently that curve didn't play well with the exporter. The weird thing is that I'm pretty sure I never defined a custom easing curve there; this may be the result of a CC update changing the curve presets?


How to configure mocap mapping and expressions with MetaHuman and Live Link

As my title says, I am having trouble mapping face data from the Live Link app to a MetaHuman.
Here is what I have done so far:
Created a UE5 project (Film/Video & Live Event)
Imported a MetaHuman (custom-made) into the project
Added the required plugins to the project (Live Link, ARKit, Apple, etc.)
Connected the Live Link mobile application to the local network
Set the MetaHuman's animation controller to the Live Link feed
Calibrated the Live Link data within the Live Link application
The problem I am having:
Parts of the face are not responding at all (e.g. the MetaHuman's right eyebrow does not respond to me lifting my left eyebrow).
The left corner of the mouth seems to be stuck (e.g. when I try to open my mouth, all points respond except that single point, which stays where it is).
The mapping/naming of the facial components seems to be mirrored/off/labeled wrong (e.g. if I wink my right eye, my right eye closes and my right cheek presses upward; on the MetaHuman, the left eye blinks and the right cheek raises).
These issues are very frustrating, as I cannot seem to get past this basic calibration. I see people online using these same tools and getting really clean results with their MetaHumans' facial movements. Is there something I am missing? I know that after the MetaHuman has been calibrated I will create sequences; am I supposed to be modifying these values there? I am not sure... I have commented on every video I can find, and I have posted this question in the Unreal Discord (here), with basically no help.
Note: I don't need a full solution, I just need to be pointed in the right direction! Please let me know if there is anything I am missing in my setup or calibration workflow.
Thanks for reading.
Had the exact same problem with a customer project, and the issue was that it wasn't using the correct Animation Blueprint. Once we switched it to the correct Animation Blueprint for Live Link, the issue was immediately resolved. I remember that exact same facial expression; sorry, I know this was a frustrating one for the customer as well.
It seemed that I was doing everything right; after upgrading from 5.0.3 to 5.1, the issue stopped completely.

Nsight Graphics and RenderDoc cannot trace application

I am stuck writing a Vulkan renderer. The final output I see on the screen is only the clear color, animated over time, but no geometry. Even with all possible validation turned on I don't get any errors, warnings, best-practices or performance hints, except for the best-practices warning "You are using VK_PIPELINE_STAGE_ALL_COMMANDS_BIT when vkQueueSubmit is called". I am not actually sure I use all possible validation, but I have Vulkan Configurator running and ticked all checkboxes under "VK_LAYER_KHRONOS_validation", after which the vkQueueSubmit hint showed up.
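For comparison, here is a minimal sketch of enabling the same validation programmatically at instance creation, so it does not depend on Vulkan Configurator being set up; the layer and feature names are standard Vulkan, and the rest of the setup is assumed boilerplate:

    #include <vulkan/vulkan.h>

    // Enable the Khronos validation layer plus best-practices and
    // synchronization checks directly at instance creation.
    VkInstance createInstanceWithValidation() {
        const char* layers[] = { "VK_LAYER_KHRONOS_validation" };

        VkValidationFeatureEnableEXT enables[] = {
            VK_VALIDATION_FEATURE_ENABLE_BEST_PRACTICES_EXT,
            VK_VALIDATION_FEATURE_ENABLE_SYNCHRONIZATION_VALIDATION_EXT,
        };
        VkValidationFeaturesEXT features{};
        features.sType = VK_STRUCTURE_TYPE_VALIDATION_FEATURES_EXT;
        features.enabledValidationFeatureCount = 2;
        features.pEnabledValidationFeatures = enables;

        VkApplicationInfo appInfo{};
        appInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
        appInfo.apiVersion = VK_API_VERSION_1_2;

        VkInstanceCreateInfo info{};
        info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
        info.pNext = &features;  // chain the validation-features struct
        info.pApplicationInfo = &appInfo;
        info.enabledLayerCount = 1;
        info.ppEnabledLayerNames = layers;

        VkInstance instance = VK_NULL_HANDLE;
        vkCreateInstance(&info, nullptr, &instance);
        return instance;
    }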
After poking around for some hours I decided to look into using RenderDoc. I can start the application just fine, but RenderDoc says "Connection status: Established, API: none" and I cannot capture a frame.
Quite confused, I thought I would look into Nsight Graphics, just to find the same problem: I can run the application, but it says "Attachable process detected. Status: No Graphics API". I read somewhere that I can start the process first and then use the attach functionality to attach to the running process, which I did, unfortunately with the same outcome.
I read that there can be problems when not properly presenting every frame, which was the reason I changed the clear color over time: to make sure I actually present every frame, which I can confirm is the case.
I am quite lost at this point. Has anyone had similar experiences? Any ideas as to what I could do to get RenderDoc / Nsight Graphics working properly? Neither of them shows anything in the logs; I guess they just assume the process does not use any graphics API and thus won't be traced.
I am also thankful for ideas about why I cannot see my geometry, but I understand this is even harder to guess from your side. Still, some notes: I have forced depth and stencil tests off; although the vertices should be COUNTER_CLOCKWISE I have also tried CLOCKWISE just to make sure; I set the face cull mode off; I checked the color write mask and rasterizerDiscard; and I even set gl_Position to ignore the vertex positions and transform matrices completely and use random values in the range -1 to 1 instead. Basically everything that came to my mind when I hear "only clear color, but no errors", but all to no avail.
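For reference, the "rule everything out" fixed-function state described above would look roughly like this sketch (the field names are core Vulkan; the rest of the pipeline creation is assumed):

    #include <vulkan/vulkan.h>

    // Deliberately permissive fixed-function state for debugging "clear color
    // only": nothing here should be able to cull, discard or mask geometry.
    void fillDebugPipelineState(VkPipelineRasterizationStateCreateInfo& raster,
                                VkPipelineDepthStencilStateCreateInfo& depth,
                                VkPipelineColorBlendAttachmentState& blend) {
        raster = {};
        raster.sType = VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO;
        raster.rasterizerDiscardEnable = VK_FALSE;  // must be FALSE, or nothing rasterizes
        raster.cullMode = VK_CULL_MODE_NONE;        // rules out winding-order mistakes
        raster.frontFace = VK_FRONT_FACE_COUNTER_CLOCKWISE;  // irrelevant with culling off
        raster.polygonMode = VK_POLYGON_MODE_FILL;
        raster.lineWidth = 1.0f;

        depth = {};
        depth.sType = VK_STRUCTURE_TYPE_PIPELINE_DEPTH_STENCIL_STATE_CREATE_INFO;
        depth.depthTestEnable = VK_FALSE;    // depth test forced off
        depth.stencilTestEnable = VK_FALSE;  // stencil test forced off

        blend = {};
        blend.blendEnable = VK_FALSE;
        blend.colorWriteMask =               // write all four channels
            VK_COLOR_COMPONENT_R_BIT | VK_COLOR_COMPONENT_G_BIT |
            VK_COLOR_COMPONENT_B_BIT | VK_COLOR_COMPONENT_A_BIT;
    }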
In case it helps with anything: I am on Windows 11, using either an RTX 3070 or an Intel UHD 770, both with the same outcome.
Small Update:
Using the Vulkan Configurator I could force the VK_LAYER_RENDERDOC_capture layer on, after which, when running the application, I can see the overlay and, after pressing F12, read that it captured a frame. However, RenderDoc still cannot find a graphics API for this process, and I have no idea how to access that capture.
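One way to get at such captures, sketched here under the assumption that the RenderDoc module actually ends up loaded in the process, is RenderDoc's documented in-application API from renderdoc_app.h; it can pin down where the .rdc files are written, and those can then be opened manually via File > Open Capture in the RenderDoc UI:

    #include <windows.h>
    #include "renderdoc_app.h"  // header ships with the RenderDoc installation

    static RENDERDOC_API_1_1_2* rdoc = nullptr;

    // Grab the in-application API if renderdoc.dll is present in this process.
    void initRenderDocApi() {
        if (HMODULE mod = GetModuleHandleA("renderdoc.dll")) {
            auto getApi = (pRENDERDOC_GetAPI)GetProcAddress(mod, "RENDERDOC_GetAPI");
            if (getApi) {
                getApi(eRENDERDOC_API_Version_1_1_2, (void**)&rdoc);
            }
        }
        if (rdoc) {
            // Captures land here as .rdc files.
            rdoc->SetCaptureFilePathTemplate("captures/myapp");
        }
    }

    void requestCapture() {
        if (rdoc) rdoc->TriggerCapture();  // captures the next presented frame
    }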
I then forced VK_LAYER_LUNARG_api_dump on and dumped the output into an HTML file, which I inspected, and I still cannot see anything wrong. I looked especially closely at the pipeline and render pass creation calls.
That left me thinking it might be the uniform or vertex buffer contents, offsets, or the like, so I removed all of that and used hardcoded vertex positions and fragment outputs, and still I can only see the clear color in the final image on the screen.
Thanks
It turns out confused me should have been converting the relative viewport that I expose into absolute values using the current camera's width and height, i.e. giving (0, 0, 1920, 1080) to Vulkan instead of (0, 0, 1, 1).
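For anyone else hitting this: VkViewport is specified in framebuffer pixels, not in normalized units. A minimal sketch of the fix, assuming dynamic viewport state and taking swapchainExtent and commandBuffer from the surrounding renderer:

    // A (0,0,1,1) viewport squeezes all geometry into a single pixel, which
    // looks exactly like "only clear color" while being perfectly legal API
    // usage, so no validation layer complains.
    void setFullViewport(VkCommandBuffer commandBuffer, VkExtent2D swapchainExtent) {
        VkViewport viewport{};
        viewport.x = 0.0f;
        viewport.y = 0.0f;
        viewport.width  = static_cast<float>(swapchainExtent.width);   // e.g. 1920.0f
        viewport.height = static_cast<float>(swapchainExtent.height);  // e.g. 1080.0f
        viewport.minDepth = 0.0f;
        viewport.maxDepth = 1.0f;

        VkRect2D scissor{ {0, 0}, swapchainExtent };
        vkCmdSetViewport(commandBuffer, 0, 1, &viewport);
        vkCmdSetScissor(commandBuffer, 0, 1, &scissor);
    }

With a static viewport, the same absolute values would go into VkPipelineViewportStateCreateInfo at pipeline creation instead.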
Holymoly what a ride

Blender texturing doesn't show up correctly after baking on it repeatedly, even though the UV mapping fits perfectly

The problem occurs when I repeatedly bake lighting & reflections via Principled BSDF (Cycles) onto an "Image Texture" node. The first few times I get the expected results, and then suddenly the mesh seems to be broken, as it keeps showing subsequent bakes incorrectly (image below).
Also, when I move an island in the UV map, nothing seems to change on the mesh in the 3D viewport. The UV texture looks unchanged no matter what I do, like it has frozen or something.
My Blender version is 2.92; I'm getting the same problem with 2.83.
I keep getting this problem over and over and I just can't find a solution. Even if I export the mesh into another project, it just "infects" the other project and I get the same problem there.
I can only repair it if I completely start over.
Please help me; I'm really frustrated with this. It has defeated my Blender project for like the 4th time now... :/
> Screenshot example here <
It appears as if the generated texture coordinates are being used for some reason instead of the UV map coordinates. If the Vector socket of the Image Texture node is unconnected, it should use the currently selected UV map.
This may actually be a bug if it's happening after multiple uses of the baking tool.
You should first try connecting the Image Texture node's Vector input to the UV output of a Texture Coordinate node to see if it has any effect. Alternatively, try connecting a UV Map node.

Blender: border rendering is not supported by sequencer

I'm using Blender to edit a video, and when I try to render it, it comes up with the error: "border rendering is not supported by sequencer".
Using Blender 2.80.
Disabling Render Region fixed the problem for me.
Turning off border rendering ("Render Region" in Blender 2.8) in Properties > Output Properties fixed it for me.
I recall playing with the menus, and my hypothesis is that I checked it somewhere along the way, which caused the problem in the first place.
I can't comment yet because of reputation, apparently, so I guess I have to leave my comment here. I do have an answer, but I'm not sure if it is a correct one.
I had the same problem, and I think I fixed it by unchecking the Sequencer box under Post Processing, in the same tab where you edit the format, encoding, etc. But now that it does let me render my video (just a picture with audio under it), the render result is audio only? That could be a separate problem or part of this solution, I don't know. Would unchecking that box help for you?

Very confused by a binding issue between a Cocoa app and a Movie Loader patch in Quartz Composer

I've been programming for a while, but just recently decided to start developing for Mac OS X. I feel like I've come to grips with the basics of Objective-C and Cocoa development over the past week. I'm planning on making graphics apps, and as such am currently in the process of learning how to control Quartz compositions from a Cocoa app. I went through the tutorial that Apple offers (with the Mac Engravings composition) and was able to build that just fine. To make sure that I truly understood what I had learned, I decided to create my own composition and link it to a slightly more complicated Cocoa application.
Essentially, I have a composition that loads a movie or image through a Movie Loader patch, then applies various filters to the frames before outputting them. In my Cocoa app, I've written code (or rather copied and pasted from other Apple examples) that lets a user pick a file using an NSOpenPanel object. The filepath of the file they pick gets placed in a text box that I added to the app's window using Interface Builder. I bound the value of said text box to the "Movie_Location" key in my composition, which is a published input on the Movie Loader patch. However, no matter what I try, movies and images aren't loaded into the composition. The only thing that gets displayed is the default image I have saved in that input from Quartz Composer (or nothing, if I leave it blank before publishing).
I've added a Clear Color patch to the composition and bound it to a color well in my UI, and that successfully changes the color in my display, so I know that the composition and my Cocoa app are communicating. I've spent numerous hours at this point trying to figure out what's going on, and I've just about given up. Does the Movie Loader have any weird behaviors that I'm not aware of, or is there something obvious that I seem to be missing? I'd really appreciate any help or advice from anybody.
Thanks for reading through this...
Best,
Sami
There are two things I can think of as reasons why it is doing this:
The file path is formatted incorrectly. Try checking backslashes, colons, etc.
The box isn't updating the value. Try literally clicking in the text field and hitting Enter.
That's all I can think of without seeing your Quartz composition and/or code.
EDIT:
Check the other continuous box, in the general properties.
I figured this out yesterday; spudwaffle's second idea is what was going on. If I typed a filepath in and hit Enter, it would work just fine. I got this to work properly by removing the binding and instead using the setValue:keyInPath: method that a patch controller offers. That said, is there some way to force a text box to update? I remember seeing a "continuously update" (or something like that) checkbox within the bind sub-menu in the inspector, but my code didn't work with that checked either.
Thanks to those of you who tried to help me! I really appreciate it.
Best,
Sami