I would like to create a MEL script that builds a scene, sets up nCloth and passive collider objects, and runs the simulation up to a certain frame.
In the Script Editor I can see how the scene is set up, but there is no command that starts the simulation.
The technique that @Andreas suggests is sometimes called "command harvesting". It is a great way to learn what Maya is doing and how. But to answer your specific question:
You can use cmds.play() to start playback in Maya. See the docs for options.
You may also want to set the start and end frames of the playback range using the cmds.playbackOptions() command. See the docs for options.
So you would do: (relevant explanatory comments added)
import maya.cmds as cmds

# e.g. to play from frame 1 to 120.
# Note that playbackSpeed is set to 0, which means "play every frame",
# and maxPlaybackSpeed is also set to 0 so that free playback is not clamped.
# Playback will not be realtime at this point, but it will be accurate.
# Dynamics and simulations must be played back like this or the nucleus will not evaluate properly.
cmds.playbackOptions(animationStartTime=1, animationEndTime=120, playbackSpeed=0, maxPlaybackSpeed=0)
# now start playback
cmds.play(forward=True)
EDIT: I just noticed that you had asked for MEL commands. Just take the above commands and MEL-ify them, like so:
playbackOptions -edit -animationStartTime 1 -animationEndTime 120 -playbackSpeed 0 -maxPlaybackSpeed 0;
play -forward true;
Suggestion: it is best to playblast this playback so you can watch it at the proper fps and playback speed.
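A minimal playblast sketch for that; the filename, format and quality values are just placeholders to adjust for your setup:
import maya.cmds as cmds

# write frames 1-120 of the active view to a movie file and open it when done
cmds.playblast(startTime=1, endTime=120,
               filename='movies/cloth_preview',  # placeholder path
               format='qt', percent=100, quality=70,
               viewer=True)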
Hope that was useful.
In the Script Editor you can tell Maya to "Echo all commands". If you enable that, then do something in the UI, it will output all the MEL commands in the Script Editor.
I'd try using bakeSimulation to convert the sim to ordinary vertex animation. You can then advance time to the desired frame and export your OBJ that way.
This disables the simulation once executed - it's good for getting the results, but not for editing them.
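A rough Python sketch of that workflow; the mesh names, frame numbers and OBJ options below are placeholders, cmds.bakeResults with simulation=True is the scripted equivalent of Bake Simulation, and the OBJ export assumes the objExport plugin is loaded:
import maya.cmds as cmds

# bake the simulated mesh down to per-vertex keyframes
# (controlPoints/shape make the bake operate on the vertices themselves)
cmds.bakeResults('clothMeshShape', simulation=True,
                 time=(1, 120), sampleBy=1,
                 controlPoints=True, shape=True)

# jump to the frame you want and export the mesh as OBJ
cmds.currentTime(60)
cmds.select('clothMesh')
cmds.file('scenes/cloth_frame60.obj', force=True, exportSelected=True,
          type='OBJexport',
          options='groups=1;ptgroups=1;materials=0;smoothing=1;normals=1')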
How are we supposed to implement the environment's render method in gym, so that Monitor's produced videos are not black (as they appear to me right now)? Or, alternatively, in which circumstances would those videos be black?
To give more context, I was trying to use gym's Monitor wrapper. This wrapper writes some .json files and an .mp4 file to a folder every once in a while (how often exactly?), and I suppose the .mp4 represents the trajectory followed by the agent (which trajectory exactly?). How is this .mp4 file generated? I suppose it's generated from what is returned by the render method. In my specific case, I am using a simple custom environment (a very simple grid world/maze), where I return a NumPy array that represents my environment's current state (or observation). However, the produced .mp4 files are black, while the array clearly is not black (I am also plotting it with matplotlib's imshow). So maybe Monitor doesn't produce those videos from the render method's return value. How exactly does Monitor produce those videos?
(In general, how should we implement render, so that we can produce nice animations of our environments? Of course, the answer to this question depends also on the type of environment, but I would like to have some guidance)
This might not be an exhaustive answer, but here's how I did it.
First I added rgb_array to the render.modes list in the metadata dictionary at the beginning of the class.
If you don't have such a thing, add the dictionary, like this:
class myEnv(gym.Env):
    """ blah blah blah """
    metadata = {'render.modes': ['human', 'rgb_array'], 'video.frames_per_second': 2}
    ...
You can change the desired framerate, of course; I don't know if every framerate will work, though.
Then I changed my render method. Depending on the mode parameter: if it is rgb_array, it returns a three-dimensional NumPy array, which is just a PIL.Image converted with np.asarray(im) (with im being a PIL.Image).
If mode is human, just plot the image or do whatever shows your environment in the way you like.
As an example, my code is
def render(self, mode='human', close=False):
    # Render the environment to the screen
    im = <obtain image from env>
    if mode == 'human':
        plt.imshow(np.asarray(im))
        plt.axis('off')
    elif mode == 'rgb_array':
        return np.asarray(im)
So basically return an rgb matrix.
Looking at the gym source code, it seems there are other approaches that work, but I'm not an expert in video rendering, so I can't help with those.
Regarding your question "how often exactly [are the videos saved]?", I can point you to this link that helped me with that; there is also a short sketch below of how that frequency can be controlled when wrapping the environment.
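This is roughly how I wrap my environment with Monitor; the environment id, output folder and recording schedule are placeholders, and video_callable is the parameter that decides which episodes get recorded:
import gym
from gym.wrappers import Monitor

env = gym.make('CartPole-v1')   # placeholder: use your own environment here
env = Monitor(env, './videos',  # folder that receives the .mp4 and .json files
              video_callable=lambda episode_id: episode_id % 10 == 0,  # record every 10th episode
              force=True)       # overwrite previous recordings in that folder

obs = env.reset()
done = False
while not done:
    obs, reward, done, info = env.step(env.action_space.sample())
env.close()  # close() finalizes the video file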
As a final side note, video saving with a gym Monitor wrapper is broken by a mis-indentation (as of today, 30/12/20, gym version 0.18.0); if you want to solve it, do what this guy did.
(I'm sorry if my English sometimes felt weird, feel free to harshly correct me)
I wrote some Python code to show a preview from my PiCamera. I have set the time to 10 seconds, after which it automatically turns off. However, I am unsure how I could use a keystroke to stop the camera and return to the previous screen.
At the moment I can view the preview for 10 seconds and nothing else; the usual Ctrl-C and various other keys do not work.
How would I integrate a keystroke into the following code to stop the script and return to the normal screen?
from picamera import PiCamera
from time import sleep
camera = PiCamera()
camera.start_preview()
sleep(10)
camera.stop_preview()
You can check out the subprocess module on the official page:
https://docs.python.org/2/library/subprocess.html#subprocess.Popen
A possible way to implement this with subprocess.Popen is here on SO:
Controlling a python script from another script
Another possibility is to use the multiprocessing or threading module. For instance, you can create a thread and keep a handle to it :-)
All of these possibilities will lead you to learn a bit more Python!
My suggestion would be to simply create a thread (https://docs.python.org/3/library/threading.html --> here for Python 3), keep its handle and let it run.
If you want to stop the running camera, terminate the thread :-)
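A minimal sketch of that idea, assuming the picamera package; since Python threads cannot be killed from outside, it uses a threading.Event to tell the worker to stop, keeping the original 10-second timeout as a fallback:
import threading
from picamera import PiCamera

stop_event = threading.Event()

def preview_worker():
    # run the preview until we are told to stop, or until 10 seconds pass
    camera = PiCamera()
    camera.start_preview()
    stop_event.wait(timeout=10)
    camera.stop_preview()
    camera.close()

worker = threading.Thread(target=preview_worker)
worker.start()

input("Press Enter to stop the preview...")  # a keystroke followed by Enter
stop_event.set()  # signal the worker to stop
worker.join()     # wait for it to clean up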
I've probably spent the last two weeks searching through Apple's QuickTime documentation to resolve what seems like a pretty basic question, but to no avail. Here's the issue...
I've got several short QTMovie files, each containing three tracks -- the usual two containing the sound and video, plus an extra video track holding subtitled text images (making two video tracks in all). If I select one of these tracks in QuickTime Pro and export it as a 'Movie to Image Sequence', I might find, say, 142 stills spread throughout the duration of the movie. My search method also reports the correct total number of frames.
Now, I've learned that you can take single-frame images, add them to a QTMovie, and set their individual on-screen display times to however long you want using 'attributeForKey:QTMovieDurationAttribute'. But how on earth do I go about accessing that data again? (Which essentially just seems like the reverse of that process.)
In pseudo-code what I'm trying to do is something like:
{
select video track #2 ...
call up the first image in the sequence ...
access and note its on-screen duration setting ...
call up the second image ...
access and note its duration ...
... repeat until done.
}
I'm not after editing this data or anything -- just finding out how I can get access to individual frames in a video track ... and where the HECK this timing info is hiding inside the bowels of QT.
As a senior citizen I think I'm starting to get too old for this sort of thing and Quicktime is a big, often complex and confusing framework, so if anyone can help me out with some advice or (ideally) just a few lines of sample code here I'd really appreciate it. Thanks in advance :-)
Okay. After several more days of digging and experimentation I finally found the answer to my own question. To get the (next) subtitle image I used:
// access the subtitle track
Track theTrack = [self getVideoSubtitleTrack];
// set flags to find the NEXT image (ignoring the current one)...
short myFlags = nextTimeStep;
TimeValue nextInterestingTime = 0;
// ... and search forward from the current playhead position.
GetTrackNextInterestingTime(theTrack, myFlags, playheadPos, fixed1, &nextInterestingTime, NULL);
This only finds the start of an image object though, and iterating through the track finds the leading edge of each successive image. But I wanted the end time of the current text too.
After getting the start time, I experimented with the seven available flags and found that setting the flag to 'nextTimeTrackEdit' gets the end time.
I'm writing a small custom player based on libvlc. I've used much of the code from https://github.com/hartror/python-libvlc-bindings/blob/master/examples/gtkvlc.py, which plays a single track just like I need.
Now I want to switch to another track after the previous one has finished. To do that I catch the "EventType.MediaPlayerEndReached" event, and in the callback handler I write:
def endCallback(self, event):
    fname = vlc_controller.GetNextTrack()['url']
    self.w.set_title(fname)
    self.vlc.player.set_mrl(fname)
    self.w.set_title('after set_mrl')
    self.vlc.player.play()
    self.w.set_title('after play')
Now when this code gets executed, it gets stuck on self.vlc.player.set_mrl(fname) and does not go any further, and as a result I see NO NEXT TRACK.
I've tried different variations of this code (vlc.stop(), vlc.set_media() instead of vlc.set_mrl()), but nothing works.
Finally...
I think the best choice is to make the Python application multithreaded.
2 threads:
Main thread - the gtk loop, displaying the video and some other things.
Additional thread - handles the media switching.
For some time I was afraid of multithreading, but now I know it was the best way to handle this task; a rough sketch of the idea is below.
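A minimal sketch of that idea, assuming the python-vlc bindings; get_next_track_url() and the first track name are placeholders for your own playlist logic. The key point is that the callback never touches the player directly - it only hands the work to another thread:
import threading
import vlc

instance = vlc.Instance()
player = instance.media_player_new()

def switch_track(url):
    # runs on a worker thread, never inside the libvlc callback
    player.stop()
    player.set_mrl(url)
    player.play()

def on_end_reached(event):
    url = get_next_track_url()  # placeholder: your own playlist logic
    threading.Thread(target=switch_track, args=(url,), daemon=True).start()

events = player.event_manager()
events.event_attach(vlc.EventType.MediaPlayerEndReached, on_end_reached)

player.set_mrl('first_track.mp4')  # placeholder first track
player.play()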
For those facing a similar issue in C# (Xamarin): per the VLC documentation, calling the VLC object from an event tied to that object may freeze the application, so they say to use the code below inside the triggering event.
ThreadPool.QueueUserWorkItem(_ => PlayNext());
An example would be calling the code above inside the EndReached event, where PlayNext() is your own custom method that handles the next action you want to execute.
I struggled with this for about a day before seeing that portion of the documentation.
I am aware of the ability to use an EdgeShapeTransformer to change the look of edges:
vv.getRenderContext().setEdgeShapeTransformer(new EdgeShape.Line()); // for example
However, I am looking for how to change the way the line looks while dragging from one node to another to create an edge interactively. By default the 'hovering' edge, which is not yet linked to another node, is a large curved line. See the example here for what I mean.
CubicCurveEdgeEffects is where it is done. There is an EdgeEffects interface that can be implemented to do other things instead. It is used by the SimpleEdgeSupport class via the EditingGraphMousePlugin.
(Credit to Tom Nelson, offline communication.)