Getting Matplotlib's GTK Agg backend to respect user theming

I'm writing a PyGTK/Twisted app that uses Matplotlib for graphing. It's easy enough to embed the plots in my widgets using FigureCanvasGTKAgg, but I notice that the background colour of the canvas (outside the plot area itself) does not match that of the rest of my application, and neither does the font (for labels, legends, etc.).
Is there a simple way to get my graphs to respect the user selected GTK theme?

You can set it with, for example, pylab.figure(facecolor=SOME_COLOR, ...) or matplotlib.rcParams['figure.facecolor'] = SOME_COLOR. It looks like the default value is hard-coded, so there is no way to tell Matplotlib to respect the GTK theme automatically.
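For instance, a minimal sketch of the global-default route (the colour value here is an arbitrary placeholder, not something read from the theme):
import matplotlib

# Placeholder colour; in practice you would read this from the GTK style,
# as the PyGTK example below does.
matplotlib.rcParams['figure.facecolor'] = '#ededed'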
Here's a concrete example of how to do this in PyGTK. Some of the information here was gleaned from "Get colors of current gtk style" and from the gdk.Color docs. I haven't gotten as far as setting the font, etc., but this shows the basic framework you need.
First, define the following function:
import gtk

def set_graph_appearance(container, figure):
    """
    Given a GTK container and a Matplotlib Figure object, set the figure
    background colour to be the same as the normal colour of the container.
    """
    # "bg" is the background "style helper" object. It contains five different
    # colours, for the five different widget states.
    bg_style = container.get_style().bg[gtk.STATE_NORMAL]
    gtk_color = (bg_style.red_float, bg_style.green_float, bg_style.blue_float)
    figure.set_facecolor(gtk_color)
You can then connect to the realize signal (maybe also the map-event signal; I didn't try) and re-colour the graph when the containing widget is created:
graph_panel.connect('realize', set_graph_appearance, graph.figure)
(Here, graph_panel is a gtk.Alignment and graph is a subclass of FigureCanvasGTKAgg that has a figure member as needed.)
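The same trick should extend to fonts, though I haven't tried it. Here is an untested sketch along the same lines (the set_graph_font name is mine; gtk.Style exposes the theme font as a pango.FontDescription, whose size is in Pango units):
import pango
import matplotlib

def set_graph_font(container):
    # The style's font_desc attribute is a pango.FontDescription
    # describing the widget's themed font.
    font_desc = container.get_style().font_desc
    matplotlib.rcParams['font.family'] = font_desc.get_family()
    # Pango stores sizes in units of 1/pango.SCALE of a point.
    matplotlib.rcParams['font.size'] = font_desc.get_size() / float(pango.SCALE)

You could connect this to the realize signal in the same way as set_graph_appearance above.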

In gym, how should we implement the environment's render method so that Monitor's produced videos are not black?

How are we supposed to implement the environment's render method in gym so that Monitor's produced videos are not black (as they appear to me right now)? Or, alternatively, under which circumstances would those videos be black?
To give more context, I was trying to use gym's Monitor wrapper. This wrapper writes some .json files and an .mp4 file to a folder (every once in a while; how often exactly?), which I suppose represents the trajectory followed by the agent (which trajectory exactly?). How is this .mp4 file generated? I suppose it's generated from what is returned by the render method. In my specific case, I am using a simple custom environment (i.e. a very simple grid world/maze), where I return a NumPy array that represents my environment's current state (or observation). However, the produced .mp4 files are black, while the array clearly is not black (because I am also plotting it with matplotlib's imshow). So maybe Monitor doesn't produce those videos from the render method's return value. How exactly does Monitor produce those videos, then?
(In general, how should we implement render, so that we can produce nice animations of our environments? Of course, the answer to this question depends also on the type of environment, but I would like to have some guidance)
This might not be an exhaustive answer, but here's how I did it.
First, I added 'rgb_array' to the render.modes list in the metadata dictionary at the beginning of the class.
If you don't have such a dictionary yet, add it like this:
class myEnv(gym.Env):
    """ blah blah blah """
    metadata = {'render.modes': ['human', 'rgb_array'],
                'video.frames_per_second': 2}
    ...
You can change the desired frame rate, of course; I don't know whether every frame rate will work, though.
Then I changed my render method. Depending on the mode parameter: if it is 'rgb_array', the method returns a three-dimensional NumPy array, which is just a PIL image converted via np.asarray(im) (with im being a PIL.Image).
If mode is 'human', just display the image or show your environment in whatever way you like.
As an example, my code is
import numpy as np
import matplotlib.pyplot as plt

def render(self, mode='human', close=False):
    # Render the environment to the screen
    im = <obtain image from env>
    if mode == 'human':
        plt.imshow(np.asarray(im))
        plt.axis('off')
    elif mode == 'rgb_array':
        return np.asarray(im)
So basically, return an RGB matrix.
Looking at the gym source code, it seems there are other approaches that work, but I'm not an expert in video rendering, so I can't help with those.
Regarding your question "how often exactly [are the videos saved]?", I can point you to this link, which helped me with that.
As a final side note, video saving with the gym Monitor wrapper is broken by a mis-indentation bug (as of today, 30/12/20, gym version 0.18.0); if you want to fix it, do what this guy did.
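For completeness, here is a minimal sketch of how the pieces fit together, assuming the myEnv class from above (the directory name is just an example). Monitor grabs its frames by calling env.render(mode='rgb_array') internally, so once that returns a proper RGB array the recorded video should no longer be black:
from gym.wrappers import Monitor

env = Monitor(myEnv(), directory='./videos', force=True)
obs = env.reset()
done = False
while not done:
    # Random actions, just to produce a trajectory worth recording.
    obs, reward, done, info = env.step(env.action_space.sample())
env.close()  # flushes the .mp4 and .json files to ./videos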
(I'm sorry if my English sometimes felt weird, feel free to harshly correct me)

Set up a ColumnChart from WinRTXamlToolkit

I'm trying to set up a ColumnChart from WinRTXamlToolkit.
Let me ask:
Is there any guide for the charts in this library? Or which way did you find easiest for learning to use the charts?
How can I put labels near the axes, at points like these in the image below:
P0: Times
P1: States
How do I move the labels above the chart (as in the image below)?
Thank you in advance!
I found "workarounds" (far away from perfection but works).
Below WinRTXamlToolkit = WXT
The fastest way (if you don't know WXT) is:
- hide the WXT elements:
  - the title (just don't set it)
  - the legend (see "Hide legend of WPF Toolkit chart with more than one data series")
- create your own equivalents of those elements using native XAML (TextBlocks, etc.) and place them wherever you like
To get the column colors for your legend, do:
void MethodInXamlBackingObject()
{
    // Each entry in the chart's Palette is a ResourceDictionary; its
    // "Background" entry holds the brush used to fill that series' columns.
    var paletteOfFirstColumn = ColumnChart.Palette[0];
    var columnFirstBrush = paletteOfFirstColumn["Background"];
}
BTW, some tips on where to learn WXT:
- analyse the sources of the samples in WXT - they are very detailed
- analyse WXT behaviour with the "WXT Debug Console" tool (included in the demo app) - very powerful
- read articles about WXT and the WPF Toolkit (from which WXT is a fork)

Use lualatex in matplotlib without the pgf backend

Basically, the title is the question:
I would like to use lualatex for all the text handling in a matplotlib plot, without using the pgf backend.
I need the fontenc package and customized fonts so that the fonts in my plots are identical to those in my LaTeX documents, but I do not want to use the pgf backend.
Is there a hidden option somewhere?
The context is the following: I have a mycommands.sty file where all my \newcommand definitions for math are stored. I use some specific fonts for, e.g., \mathscr{p}, which is not possible (for a lowercase letter) without the fontenc package.
Now I want to use these custom commands in different places in the plot (legend, labels, title, ...) and have them work and look exactly the same as in the document I write and compile with lualatex.
The only reason it is not possible is that matplotlib internally uses pdflatex for the compilation, which gives me errors when using fontenc, and therefore some of my commands do not work.
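For reference, this is roughly the setup that fails (a minimal sketch; the fontenc options and the contents of mycommands.sty are placeholders for my actual configuration):
import matplotlib

matplotlib.rcParams['text.usetex'] = True
# matplotlib passes this preamble to the (pdf)latex run it performs
# internally; I have found no option to swap that run out for lualatex.
matplotlib.rcParams['text.latex.preamble'] = r'\usepackage[T1]{fontenc}\usepackage{mycommands}'

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_title(r'$\mathscr{p}$')  # relies on fonts/commands from mycommands.sty
fig.savefig('plot.pdf')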
Thanks.

Using the mathtext parser to output an SVG file

Context
I'm looking for a simple way to import properly typeset mathematics (with LaTeX) into Blender. A solution for this has already been given, but it means getting out of Blender, using multiple tools, and then going back to Blender and importing the whole thing.
Blender comes with Python and can import SVG
I'd like to find another way, and Blender has a set of powerful tools based on Python. So I was thinking: can I make Python parse some TeX input and then generate a (virtual) SVG file inside Blender? That would solve the problem.
matplotlib "emulates" TeX
It is possible to install any Python library and use it inside Blender, so this made me think of a possible "hack" of matplotlib.
mathtext is a module that provides a parser for strings with TeX-like syntax for mathematical expressions. svg is one of the available "backends".
Consider the following snippet.
import matplotlib.mathtext as mathtext
parser = mathtext.MathTextParser('svg')
t = parser.parse(r'$\int_{0}^{t} x^2 dx = \frac{t^3}{3}$')
t is a tuple that holds all the information needed, but I can't find a way (in the backend API) to convert it to a (virtual) SVG file.
Any ideas?
Thanks
Matplotlib needs a figure (and currently also a canvas) to be able to render anything. So in order to produce an SVG file whose only content is a piece of text (a mathtext formula), you still need a figure and a canvas, and the text needs to actually reside inside the figure, which can be achieved with fig.text(..).
Then you can save the figure to SVG via fig.savefig(..). Using the bbox_inches="tight" option ensures the figure is clipped to the extent of the text, and setting the facecolor to a transparent color removes the figure's background patch.
from matplotlib.backends.backend_agg import FigureCanvasAgg
from matplotlib.figure import Figure

fig = Figure(figsize=(5, 4), dpi=100)
canvas = FigureCanvasAgg(fig)  # the figure cannot render without a canvas
fig.text(.5, .5, r'$\int_{0}^{t} x^2 dx = \frac{t^3}{3}$', fontsize=40)
# bbox_inches="tight" clips to the text; the transparent facecolor
# removes the background patch.
fig.savefig("output.svg", bbox_inches="tight", facecolor=(1, 1, 1, 0))

Cytoscape: show traffic between nodes along an animated path

I need to show things moving between nodes along their connection paths, similar to this project. I haven't been able to find any examples of this in Cytoscape, but I have used Cytoscape in the past and would prefer to keep using it here as well. I would appreciate recommendations on how to approach this problem.
You've got a few options...
The easiest is the Marquee visual style. It produces a "marching ants" illusion in the direction of directed edges. Simply go to the Styles tab in the Control Panel and select the "Marquee" style. In the EDGE tab, you can choose from 3 different Marquee Line Types. You could imagine mapping these 3 line types to 3 categories (or bins) of traffic density, for example. Or you could use color, thickness and/or transparency in combination with a marquee style to represent traffic density. You can see an example here:
https://youtu.be/MF0zsxEPoPc?t=44
There's also an app for animation! It takes the approach of interpolating any visual style (including position and existence) between any set of key frames you provide. So, for example, you would provide a start and a finish frame, and CyAnimator would make a movie file for you:
http://apps.cytoscape.org/apps/cyanimator
And yet another, completely different approach: with the scripting capabilities of Cytoscape you can do pretty much whatever you want. The unit tests for the RCy3 package, for example, end up being an almost psychedelic display of data-vis potential (and the unit tests aren't even at full coverage, shame). So you could direct your own animations in real time with a bit of scripting in R or Python. Here's the RCy3 unit test demo, plus links to the scripting libraries:
https://www.youtube.com/watch?v=IXqbdlUnzUE&t=1s (caution: flashing graphics)
https://bioconductor.org/packages/release/bioc/html/RCy3.html
https://py2cytoscape.readthedocs.io/en/latest/
I'm using cytoscape.js with meteor.js. My elements, stylesheet and vehicles (shown as red dots) are stored in Mongo and can be updated via an external process or edited on-screen. The graph can be restructured or reshaped on the fly, and the vehicles will discover the new least-cost route to reach their target. Moves are queued with eles.animate(), and routing is handled by eles.floydWarshall().path(). This might be similar to what you had in mind.