In gym, how should we implement the environment's render method so that Monitor's produced videos are not black?

How are we supposed to implement the environment's render method in gym, so that Monitor's produced videos are not black (as they appear to me right now)? Or, alternatively, in which circumstances would those videos be black?
To give more context, I was trying to use gym's wrapper Monitor. This wrapper writes (every once in a while; how often exactly?) some .json files and an .mp4 file to a folder, which I suppose represents the trajectory followed by the agent (which trajectory exactly?). How is this .mp4 file generated? I suppose it's generated from what is returned by the render method.

In my specific case, I am using a simple custom environment (i.e. a very simple grid world/maze), where I return a NumPy array that represents my environment's current state (or observation). However, the produced .mp4 files are black, while the array clearly is not black (because I am also printing it with matplotlib's imshow). So maybe Monitor doesn't produce those videos from the render method's return value. How exactly does Monitor produce those videos, then?
(In general, how should we implement render, so that we can produce nice animations of our environments? Of course, the answer to this question also depends on the type of environment, but I would like to have some guidance.)

This might not be an exhaustive answer, but here's how I did it.
First I added rgb_array to the render.modes list in the metadata dictionary at the beginning of the class.
If you don't have such a thing, add the dictionary, like this:
class myEnv(gym.Env):
    """ blah blah blah """
    metadata = {'render.modes': ['human', 'rgb_array'], 'video.frames_per_second': 2}
    ...
You can change the desired frame rate, of course; I don't know whether every frame rate will work, though.
Then I changed my render method. Depending on the mode parameter: if it is 'rgb_array', the method returns a three-dimensional NumPy array, which is just a 'numpyed' PIL.Image (np.asarray(im), with im being a PIL.Image). If mode is 'human', just display the image or show your environment in whatever way you like.
As an example, my code is
def render(self, mode='human', close=False):
    # Render the environment to the screen
    im = <obtain image from env>  # e.g. a PIL.Image built from the current state
    if mode == 'human':
        plt.imshow(np.asarray(im))
        plt.axis('off')
    elif mode == 'rgb_array':
        return np.asarray(im)
So basically, return an RGB matrix.
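To make this concrete, here's a minimal self-contained sketch of the whole idea (my own illustration, with made-up names like GridEnv, not code from gym itself), written against the old gym 0.18-style API this answer refers to. One thing worth checking if your videos still come out black: the array you return should be uint8 with values in 0-255; a float array in [0, 1] will encode as an (almost) black frame. Also, if I remember correctly, Monitor only records some episodes by default (a cubic schedule: episodes 0, 1, 8, 27, ...), so make sure you look at a recorded one.

import gym
import numpy as np

class GridEnv(gym.Env):
    # Hypothetical toy grid world, for illustration only.
    metadata = {'render.modes': ['human', 'rgb_array'], 'video.frames_per_second': 2}

    def __init__(self):
        self.action_space = gym.spaces.Discrete(4)
        self.observation_space = gym.spaces.Box(0, 255, shape=(8, 8, 3), dtype=np.uint8)
        self.grid = np.zeros((8, 8, 3), dtype=np.uint8)

    def reset(self):
        self.grid[:] = 0
        self.grid[0, 0] = (255, 0, 0)  # agent in the top-left corner
        return self.grid

    def step(self, action):
        return self.grid, 0.0, False, {}

    def render(self, mode='human'):
        if mode == 'rgb_array':
            # Monitor grabs this: an HxWx3 uint8 array with values in 0-255.
            return self.grid

env = gym.wrappers.Monitor(GridEnv(), './video', force=True)
env.reset()
for _ in range(10):
    env.step(env.action_space.sample())
env.close()  # finalizes the .mp4 files in ./video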
Looking at the gym source code, it seems there are other ways that work, but I'm not an expert in video rendering, so for those other ways I can't help.
Regarding your question "how often exactly [are the videos saved]?", I can point you to this link that helped me with that.
As a final side note, video saving with a gym Monitor wrapper does not work because of a mis-indentation bug (as of today, 30/12/20, gym version 0.18.0); if you want to fix it, do what this guy did.
(I'm sorry if my English sometimes felt weird, feel free to harshly correct me)

Related

cytoscape show traffic between nodes along an animated path

I need to show things moving between nodes along their connection paths, similar to this project. I haven't been able to find any examples of it in Cytoscape, but I have used Cytoscape in the past and prefer to keep using it for this as well. I would appreciate recommendations on how to approach this problem.
You've got a few options...
The easiest is the Marquee visual style. It produces a "marching ants" illusion in the direction of directed edges. Simply go to the Styles tab in the Control Panel and select the "Marquee" style. In the EDGE tab, you can choose from 3 different Marquee Line Types. You could imagine mapping these 3 line types to 3 categories (or bins) of traffic density, for example. Or you could use color, thickness and/or transparency in combination with a marquee style to represent traffic density. You can see an example here:
https://youtu.be/MF0zsxEPoPc?t=44
There's also an app for animation! This takes the approach of interpolating any visual style (including position and existence) between any set of key frames you provide. So, for example, you would have a start and finish frame and then CyAnimator would make a movie file for you:
http://apps.cytoscape.org/apps/cyanimator
And yet another completely different approach: with the scripting capabilities of Cytoscape, you can pretty much do whatever you want. The unit tests for the RCy3 package, for example, end up being an almost psychedelic display of data-vis potential (and the unit tests aren't even at full coverage, shame). So you could direct your own animations in real time with a bit of scripting in R or Python. Here's the RCy3 unit test demo and links to the scripting libs (see also the small Python sketch after the links):
https://www.youtube.com/watch?v=IXqbdlUnzUE&t=1s (caution: flashing graphics)
https://bioconductor.org/packages/release/bioc/html/RCy3.html
https://py2cytoscape.readthedocs.io/en/latest/
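To give a flavor of the Python scripting route, here is a rough sketch of mine (not taken from the demo video) using py2cytoscape's cyREST client. It assumes Cytoscape is running locally with the cyREST API available, and the method names follow the py2cytoscape tutorial notebooks:

import networkx as nx
from py2cytoscape.data.cyrest_client import CyRestClient

cy = CyRestClient()  # talks to the Cytoscape REST API on localhost
cy.session.delete()  # start from a clean session

# Build a small directed 'traffic' graph and push it into Cytoscape.
g = nx.DiGraph()
g.add_edge('A', 'B', traffic=10)
g.add_edge('B', 'C', traffic=3)
net = cy.network.create_from_networkx(g)

cy.layout.apply(name='force-directed', network=net)

From here you could update the edge attributes in a loop and re-apply a style mapping (e.g. width or color keyed to the traffic column) to animate traffic density in near real time.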
I'm using cytoscape.js with meteor.js. My elements, stylesheet and vehicles (shown as red dots) are stored in mongo, and can be updated via an external process or edited on-screen. The graph can be restructured or reshaped on the fly, and the vehicles will discover the new least-cost route to reach their target. Moves are queued with eles.animate(); routing is handled by eles.floydWarshall().path(). This might be similar to what you had in mind.

Viewing QTMovie duration attributes?

I've probably spent the last 2 weeks searching through Apple's QuickTime documentation to resolve what seems like a pretty basic question, but to no avail. Here's the issue...
I've got several short QTMovie files, each containing three tracks -- the usual two containing the sound and video, plus an extra video track holding subtitled text images (making two video tracks in all). If I select one of these tracks in QuickTime Pro and export it as a 'Movie to Image Sequence', I might find, say, 142 stills spread throughout the duration of the movie. My search method also reports the correct total number of frames.
Now, I've learned that you can take single-frame images, add them to a QTMovie, and set their individual on-screen display times for however long you want using 'attributeForKey:QTMovieDurationAttribute'. But how on earth do I go about accessing that data again? (which essentially just seems like the reverse of this process).
In pseudo-code what I'm trying to do is something like:
{
    select video track #2 ...
    call up the first image in the sequence ...
    access and note its on-screen duration setting ...
    call up the second image ...
    access and note its duration ...
    ... repeat until done.
}
I'm not after editing this data or anything -- just finding out how I can get access to individual frames in a video track ... and where the HECK this timing info is hiding inside the bowels of QT.
As a senior citizen I think I'm starting to get too old for this sort of thing, and QuickTime is a big, often complex and confusing framework, so if anyone can help me out with some advice or (ideally) just a few lines of sample code here, I'd really appreciate it. Thanks in advance :-)
Okay. After several more days of digging and experimentation I finally found the answer to my own question. To get the (next) subtitle image I used:
// access the subtitle track
Track theTrack = [self getVideoSubtitleTrack];
// set flags to find NEXT image (ignoring the current one)...
myFlags = nextTimeStep;
// ... and search forward from current position.
GetTrackNextInterestingTime(theTrack, myFlags, playheadPos, fixed1, &nextInterestingTime, NULL);
This only finds the start of an image object though, and iterating through the track finds the leading edge of each successive image. But I wanted the end time of the current text too.
After getting the start time, I experimented with the seven available flags and found that setting the flag to 'nextTimeTrackEdit' gets the end time.

overriding matplotlib's panning tool (wx)

I'm using matplotlib housed in a wxPython panel to do some heavy-duty plotting. My issue comes when using the native panning tool - it appears as though matplotlib tries to constantly redraw the canvas as you drag the pan handle around. With the amount of data I'm plotting, this gets really choppy (I've already optimized the data with Collections, etc.).
In terms of performance I think it would be much preferable for the canvas to just draw once when the mouse is released at the end of a pan. I realise this will mean I have to extend the WxAgg NavigationToolbar2 class with my own, but I'm wondering if anyone has attempted something similar to this and can advise me on which functions to override?
many thanks
I've spent a lot of time making modifications to the matplotlib backends. I've never made this specific change, but I can show you one line of code to comment out that will stop the dynamic updating:
I presume you are using the WxAgg backend; if this is the case, open this file: C:\Python27\Lib\site-packages\matplotlib\backends\backend_wx.py
And comment out the line indicated here:
def dynamic_update(self):
    d = self._idle
    self._idle = False
    if d:
        #self.canvas.draw() #<--- Comment out to stop the redrawing during the Pan/Zoom
        self._idle = True
I tested this and it seems to nicely solve your issue. I did some quick digging and I didn't see any other functions calling this procedure so you might even be able to just change it to:
def dynamic_update(self):
    pass
...Which is the same code you'll find in the base NavigationToolbar2 class
(And of course, if you're happy with this change you can do a little more work to make your own custom backend with this kind of modification. Just to make sure you don't lose the change when upgrading matplotlib)
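For completeness, the subclassing route the question asks about might look roughly like this. This is an untested sketch that assumes the same era of matplotlib, where NavigationToolbar2 still defines dynamic_update() and release_pan(); the class name LazyPanToolbar is mine:

from matplotlib.backends.backend_wxagg import NavigationToolbar2WxAgg

class LazyPanToolbar(NavigationToolbar2WxAgg):
    # Untested sketch: suppress intermediate redraws, draw once at pan end.
    def dynamic_update(self):
        # Skip the redraws fired while the mouse is dragging.
        pass

    def release_pan(self, event):
        # Let the base class finish the pan, then redraw once. The extra
        # draw may be redundant in versions where release_pan already
        # redraws, but it is harmless.
        NavigationToolbar2WxAgg.release_pan(self, event)
        self.canvas.draw()

You would then instantiate LazyPanToolbar(canvas) in place of the stock toolbar when building your wx panel, which keeps the fix out of the installed matplotlib files.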

UILabels, Text Flow and Layouts

Consider the following: I have paragraph data being sent to a view, which needs to be placed over a background image that has fixed elements at the top and the bottom (Fig 1).
Fig1.
My thought was to split this into 4 labels (Fig 1, example 2). My question here is how I can get the text to flow through labels 1 - 4, given that labels 1, 2 & 3 are of fixed height. I assumed here that label 3 should be populated prior to 4, hence the layout in the attached diagram.
Can someone suggest the best way of doing this with maybe an example?
Thanks
Wish I could help more, but I think I can at least point you in the right direction.
First, your idea seems very possible, but it would involve lots of text-size calculations that would be ugly and might not produce ideal results. The way I see it working is a binary search, testing portions of your string with sizeWithFont: until you get the best guess for what text will fit into each label and still look "right". Then you have to actually break up the string and track it in pieces... it just seems wrong.
In iOS 6 (which unfortunately doesn't apply to you right now, but I'll post it as a potential benefit to others), you could probably use one UILabel and an NSAttributedString. There would be a couple of options to go with here (I haven't done it, so I'm not sure which would be best), but it seems that if you can format the page with HTML, you can initialize the attributed string that way.
From the docs:
You can create an attributed string from HTML data using the initialization methods initWithHTML:documentAttributes: and initWithHTML:baseURL:documentAttributes:. The methods return text attributes defined by the HTML as the attributes of the string. They return document-level attributes defined by the HTML, such as paper and margin sizes, by reference to an NSDictionary object, as described in “RTF Files and Attributed Strings.” The methods translate HTML as well as possible into structures of the Cocoa text system, but the Application Kit does not provide complete, true rendering of arbitrary HTML.
An alternative here would be to just use the available attributes, setting line indents and such according to the image size. I haven't worked with attributed strings at this level, so the best reference would be the developer videos and the programming guide for NSAttributedString. https://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/AttributedStrings/AttributedStrings.html#//apple_ref/doc/uid/10000036-BBCCGDBG
For older versions of iOS, you'd probably be better off becoming familiar with CoreText. In the end you'll be rewarded with a better-looking result, reusability/flexibility, the list goes on. For that, I would start with the CoreText programming guide: https://developer.apple.com/library/mac/#documentation/StringsTextFonts/Conceptual/CoreText_Programming/Introduction/Introduction.html
Maybe someone else can provide some sample code, but I think just looking through the docs will give you less of a headache than trying to calculate 4 labels like that.
EDIT:
I changed the link for CoreText
You have to go with CoreText: create your AttributedString and a CTFramesetter with it.
Then you can get a CTFrame for each of your textboxes and draw it in your graphics context.
https://developer.apple.com/library/mac/#documentation/Carbon/Reference/CTFramesetterRef/Reference/reference.html#//apple_ref/doc/uid/TP40005105
You can also use a UIWebView.

How do you assign a default Image to the Blob object in Play framework?

I followed this nice article
http://www.lunatech-research.com/playframework-file-upload-blob
and have a perfectly working image upload solution
My question is: if the user doesn't select any image, how do I assign a default image during save (probably one stored on the server)?
if (!user.photo)
    user.photo = ?;
user.save();
The one hack I can think of is to upload the default image, see which UID Play stores in the /tmp directory, and assign that above. Is there an elegant named* solution to this?
*When I say named, I mean I want the code to look like this (which means I know what I'm doing, and I can also write elegant automated code if there is more than one picture):
user.photo= "images/default/male.jpg"
rather than this (which means I'm just hacking and can't extend it elegantly to a list of pictures):
user.photo= "c0109891-8c9f-4b8e-a533-62341e713a21"
Thanks in advance
The approach I have always taken is to not change the model for empty images, but instead do something in the view to show a default image if the image does not exist. I think this is a better approach, because otherwise you are corrupting your model for display purposes, which is bad practice (you may want to be able to see all those who have not selected an image, for example).
To achieve this, in your view you can simply use the exists() method on the Blob field. The code would look like
#{if user.photo.exists()}
    <img src="#{userPhoto(user.id)}">
#{/if}
#{else}
    <img src="#{'public/images/defaultUserImage.jpg'}">
#{/else}
I have assumed in the above code that you are rendering the image using the action userPhoto as described in the Lunatech article.
I'd assume you can store the default image somewhere in your application's source folder and use
user.photo.set(new FileInputStream(photo), MimeTypes.getContentType(photo.getName()));
to save the data. Here photo is just a File object, so you can get a reference to your default image and use it.