I'm trying to import a triangle mesh from a file (e.g., .3ds, .dae). However, it seems that some of the faces (triangles) are being ignored. If I scale the model by 10x before importing, the triangles are intact. Is there a way to force SketchUp to load all faces, even small ones?
Here's an example of loading a closed mesh (no boundaries) at its regular scale: SketchUp has ignored a few of the triangles, creating holes and dangling edges. If I shrink the model, the problems get much worse, but if I scale the model up enough, they go away.
Also, immediately after I import, my cursor is set to "move mode", so the object is placed wherever my cursor happens to be at the time. Is there a way to import the model exactly into the current coordinate system, without any mouse interaction?
Yes, it's a known problem that SketchUp doesn't import very small edges/faces correctly. You can automate the import of an upscaled model with this Ruby script, though:
model = Sketchup.active_model
# Import your .dwg file; pass true as the second argument if you want the summary screen
model.import 'C:\path\to\example.dwg', false
# Reset the selected tool
model.select_tool(nil)
# Get all imported faces
faces = model.entities.grep(Sketchup::Face)
# Create a new ComponentDefinition
definition = model.definitions.add "dwg"
# Add the points of every face to the definition
faces.each{|f| definition.entities.add_face f.vertices}
# Remove all entities
model.entities.clear!
# Create a new ComponentInstance, scaled by 0.5 to undo a 2x upscale of the source file
transformation = Geom::Transformation.new(0.5)
instance = model.entities.add_instance definition, transformation
# Explode the component to work with the model
instance.explode
This adds the component at the origin and takes care of scaling the imported model back down. If your model were a .skp file, you could even load it directly into a ComponentDefinition, but that doesn't work for .dwg files.
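For the .skp case, that direct route is short; a minimal sketch (the file path is hypothetical):

model = Sketchup.active_model
# Load the .skp file straight into the definitions list (only works for .skp files)
definition = model.definitions.load 'C:/path/to/example.skp'
# Place it at the origin with an identity transformation
model.entities.add_instance definition, Geom::Transformation.new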
How are we supposed to implement the environment's render method in gym so that Monitor's produced videos are not black (as they appear to me right now)? Or, alternatively, under which circumstances would those videos be black?
To give more context: I was trying to use gym's Monitor wrapper. This wrapper writes some .json files and an .mp4 file to a folder every once in a while (how often, exactly?), which I suppose represents the trajectory followed by the agent (which trajectory, exactly?). How is this .mp4 file generated? I suppose it's generated from what the render method returns. In my specific case, I am using a simple custom environment (a very simple grid world/maze) where I return a NumPy array that represents my environment's current state (or observation). However, the produced .mp4 files are black, while the array clearly is not (I am also printing it with matplotlib's imshow). So maybe Monitor doesn't produce those videos from the render method's return value. How exactly does Monitor produce those videos?
(In general, how should we implement render so that we can produce nice animations of our environments? Of course, the answer also depends on the type of environment, but I would like to have some guidance.)
This might not be an exhaustive answer, but here's how I did it.
First, I added rgb_array to the render.modes list in the metadata dictionary at the beginning of the class.
If you don't have such a dictionary yet, add it, like this:
class myEnv(gym.Env):
""" blah blah blah """
metadata = {'render.modes': ['human', 'rgb_array'], 'video.frames_per_second': 2 }
...
You can change the desired framerate, of course; I don't know whether every framerate will work, though.
Then I changed my render method. Depending on the mode parameter: if it is rgb_array, the method returns a three-dimensional NumPy array, which is just a PIL.Image converted via np.asarray(im) (with im being a PIL.Image).
If mode is human, just display the image, or do whatever shows your environment the way you like it.
As an example, my code is
# Assumes `import numpy as np` and `import matplotlib.pyplot as plt` at module level.
def render(self, mode='human', close=False):
    # Render the environment to the screen
    im = <obtain image from env>
    if mode == 'human':
        plt.imshow(np.asarray(im))
        plt.axis('off')
        plt.show()
    elif mode == 'rgb_array':
        return np.asarray(im)
So basically, return an RGB matrix.
Looking at the gym source code, it seems there are other ways that work, but I'm not an expert in video rendering, so I can't help with those.
Regarding your question "how often exactly [are the videos saved]?", I can point you to this link, which helped me with that.
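In short, Monitor decides per episode whether to record via its video_callable argument; by default it uses a capped cubic schedule (episodes 0, 1, 8, 27, ... up to 1000, then every 1000th). A minimal sketch forcing every episode to be recorded (the directory name is arbitrary, and CartPole-v1 stands in for your custom environment):

import gym
from gym.wrappers import Monitor

env = gym.make('CartPole-v1')  # stand-in for your custom environment
# video_callable receives the episode id and returns whether to record that episode
env = Monitor(env, './video', video_callable=lambda episode_id: True, force=True)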
As a final side note: as of today (30/12/20, gym version 0.18.0), video saving with the gym Monitor wrapper is broken by a mis-indentation; if you want to fix it, do what this guy did.
(I'm sorry if my English sometimes felt weird, feel free to harshly correct me)
Context
I'm looking for a simple way to import properly typeset mathematics (with LaTeX) into Blender. A solution for this has already been given, but it means getting out of Blender, using multiple tools, and then going back to Blender and importing the whole thing.
Blender comes with Python and can import svg
I'd like to find another way, and Blender has a set of powerful tools based on Python. I was thinking: can I make Python parse some TeX input and then generate an svg (virtual) file inside Blender? That would solve the problem.
matplotlib "emulates" TeX
It is possible to install any Python library and use it inside blender. So this made me think of a possible "hack" of matplotlib.
mathtext is a module that provides a parser for strings with TeX-like syntax for mathematical expressions. svg is one of the available "backends".
Consider the following snippet.
import matplotlib.mathtext as mathtext
parser = mathtext.MathTextParser('svg')
t = parser.parse(r'$\int_{0}^{t} x^2 dx = \frac{t^3}{3}$')
t is a tuple that has all the information needed, but I can't find a way (in the backend API) to convert it to a (virtual) svg file.
Any ideas?
Thanks
Matplotlib needs a figure (and currently also a canvas) to actually be able to render anything. So, in order to produce an svg file whose only content is a text (a mathtext formula), you still need a figure and a canvas, and the text needs to actually reside inside the figure, which can be achieved with fig.text(..).
Then you can save the figure to svg via fig.savefig(..). Using the bbox_inches="tight" option ensures the figure is clipped to the extent of the text, and setting the facecolor to a transparent color removes the figure's background patch.
from matplotlib.backends.backend_agg import FigureCanvasAgg
from matplotlib.figure import Figure

fig = Figure(figsize=(5, 4), dpi=100)
canvas = FigureCanvasAgg(fig)  # a canvas must be attached for rendering
fig.text(.5, .5, r'$\int_{0}^{t} x^2 dx = \frac{t^3}{3}$', fontsize=40)
# bbox_inches="tight" clips to the text; the (1, 1, 1, 0) facecolor is fully transparent
fig.savefig("output.svg", bbox_inches="tight", facecolor=(1, 1, 1, 0))
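The resulting file can then be brought into Blender with the bundled svg importer; a minimal sketch, assuming the script runs inside Blender and output.svg sits in the working directory:

import bpy

# Import the generated svg as curve objects (uses Blender's io_curve_svg add-on)
bpy.ops.import_curve.svg(filepath="output.svg")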
I am trying to scale my decision tree to fit the notebook, but it does not seem to scale properly; I have to keep scrolling to get a full view. Can I please have some help on how to fix this? Attached is a picture of how it looks.
from graphviz import Source
from sklearn import tree
from IPython.display import SVG
graph = Source(tree.export_graphviz(dt_classifier, out_file=None, feature_names=X.columns))
SVG(graph.pipe(format='svg'))
Perhaps it's not relevant any more, since this question has been open for about six months now. However, I just stumbled onto it, as apparently 83 other readers have, and I crafted my way around this. The easy way is to use the pydot package (pip install pydot) and then set the default size. I have also been using %matplotlib inline so that it displays nicely within the notebook, but without using the svg module. With your example:
%matplotlib inline
from graphviz import Source
from sklearn import tree
import pydot
dot_data = tree.export_graphviz(dt_classifier, out_file=None, feature_names=X.columns)
pdot = pydot.graph_from_dot_data(dot_data)
# Access element [0] because graph_from_dot_data actually returns a list of DOT elements.
pdot[0].set_graph_defaults(size = "\"15,15\"")
graph = Source(pdot[0].to_string())
graph
I also added rotate=True to export_graphviz so that the tree displays horizontally; that way the root of the tree is directly visible and it is easier to follow. Of course, you can play around with size to reach something acceptable to you.
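For reference, that variant only changes the export call; a sketch (dt_classifier and X are the asker's objects, and rotate was available in the scikit-learn versions of that era):

dot_data = tree.export_graphviz(dt_classifier, out_file=None,
                                feature_names=X.columns, rotate=True)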
I noticed that if I export my Blender project as an .obj file, I have the option to toggle "Export Animation", which will create a lot of files, one for each frame.
I wanted to use the Collada (.dae) format to export my animations instead. The problem is, when I load my Collada file, it says that NumAnimations == 0!
1) Why does a file that is supposed to store animation say 0 animation?
2) When I do get it to work, how do I swap between frames in Assimp?
1) Animation import should work; your problem is probably the export. Have you tried reading through your Collada file? Watch for <library_animations> and the like.
2) Assimp has no notion of frames. An aiAnimation consists of multiple channels (aiNodeAnim), each of which defines transformations (keyframes) for a node at specific ticks/times. To compute all transformations, one needs to interpolate between the correct keyframes depending on the current playback time and the mTicksPerSecond of the aiAnimation.
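To make the interpolation idea concrete, here is a minimal, library-free Python sketch; the keyframe list and function name are made up for illustration, not Assimp's actual API:

def interpolate_position(keys, playback_seconds, ticks_per_second):
    """keys: list of (time_in_ticks, (x, y, z)) pairs, sorted by time."""
    t = playback_seconds * ticks_per_second  # convert wall-clock time to animation ticks
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    # Find the pair of keyframes surrounding t and linearly interpolate between them.
    for (t0, p0), (t1, p1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return tuple(c0 + a * (c1 - c0) for c0, c1 in zip(p0, p1))

keys = [(0.0, (0.0, 0.0, 0.0)), (25.0, (1.0, 0.0, 0.0)), (50.0, (1.0, 2.0, 0.0))]
print(interpolate_position(keys, playback_seconds=1.5, ticks_per_second=25.0))
# -> (1.0, 1.0, 0.0), halfway between the second and third keyframes

Rotation keys would need quaternion slerp instead of this linear blend, but the keyframe lookup logic is the same.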
I'm writing a PyGTK/Twisted app that uses Matplotlib for graphing. It's easy enough to embed the plots in my widgets using the FigureCanvasGtkAgg, but I notice that the background colour of the canvas (outside the plot area itself) does not match that for the rest of my application, and neither does the font (for labels, legends, etc).
Is there a simple way to get my graphs to respect the user selected GTK theme?
You can set it with, for example, pylab.figure(facecolor=SOME_COLOR, ...) or matplotlib.rcParams['figure.facecolor'] = SOME_COLOR. It looks like the default value is hard-coded, so there is no way to tell MPL to respect the GTK theme automatically.
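For instance (the colour here is just a placeholder for whatever your theme actually uses):

import matplotlib
# Applies to every figure created after this line; '#ededed' is a placeholder colour
matplotlib.rcParams['figure.facecolor'] = '#ededed'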
Here's a concrete example of how to do this in PyGTK. Some of the information here was gleaned from "Get colors of current gtk style" and from the gdk.Color docs. I haven't gotten as far as setting the font, etc., but this shows the basic framework you need.
First, define the following function:
def set_graph_appearance(container, figure):
"""
Given a GTK container and a Matplotlib "figure" object, this will set the
figure background colour to be the same as the normal colour of the
container.
"""
# "bg" is the background "style helper" object. It contains five different
# colours, for the five different widget states.
bg_style = container.get_style().bg[gtk.STATE_NORMAL]
gtk_color = (bg_style.red_float, bg_style.green_float, bg_style.blue_float)
figure.set_facecolor(gtk_color)
You can then connect to the realize signal (maybe also the map-event signal, I didn't try) and re-colour the graph when the containing widget is created:
graph_panel.connect('realize', set_graph_appearance, graph.figure)
(Here, graph_panel is a gtk.Alignment and graph is a subclass of FigureCanvasGTKAgg that has a figure member as needed.)
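For completeness, a minimal self-contained sketch of the same idea; the window setup and plot data are invented, and a plain FigureCanvasGTKAgg stands in for the subclass above:

import gtk
from matplotlib.figure import Figure
from matplotlib.backends.backend_gtkagg import FigureCanvasGTKAgg

fig = Figure()
fig.add_subplot(111).plot([0, 1, 2], [2, 1, 3])  # invented sample data

window = gtk.Window()
window.connect('destroy', gtk.main_quit)
window.set_default_size(400, 300)
window.add(FigureCanvasGTKAgg(fig))
# Recolour once the window is realized and its theme colours are known.
window.connect('realize', set_graph_appearance, fig)
window.show_all()
gtk.main()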