Traces over image with gnuplot - matplotlib

Is it possible to produce a graphic in gnuplot of the sort that neuroscience journals like, where the traces (plots) of the electric measurements are drawn over an image (density) plot of another representation of the neural activity? To be clear, I have already managed to do it in matplotlib (pyplot), but it is too slow for the data sets that I have, and I think gnuplot could be faster and nicer. The example graphic is below.
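For reference, a minimal matplotlib sketch of the kind of figure described here (made-up data; the traces are offset vertically and drawn over an imshow of the density image):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical data: a 2D "activity" map plus a handful of traces
# recorded at different rows/depths of that map.
activity = np.random.rand(50, 200)                     # background density image
t = np.linspace(0, 1, 200)
traces = 0.3 * np.sin(2 * np.pi * 5 * t + np.arange(8)[:, None])

fig, ax = plt.subplots()
ax.imshow(activity, cmap='gray', extent=[0, 1, 0, 8], aspect='auto')
for i, trace in enumerate(traces):
    ax.plot(t, i + 0.5 + trace, color='C1', lw=0.8)    # offset each trace
plt.show()
```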


How to intersect two rasters in ArcMap?

I have two raster layers with different resolutions that I want to join (see this image). One has a higher resolution (transparent yellow) and the other has a lower resolution but a bigger extent (the whole earth) and contains information about different classes (drawn in different colors here). The resulting raster should have the higher resolution and the extent of the raster drawn in yellow here, but should be joined with the other raster, e.g. containing the information about which class it was lying within.
I really appreciate any help!
Cheers
You should use the Mosaic To New Raster tool to mosaic the rasters.
But first the rasters should have the same standard.
Use the Resample tool to change the resolution.
Your raster value systems should also be on the same scale. ArcMap will mosaic them anyway, but the result can be wrong.
Take LST rasters, for example: suppose one raster is in degrees Fahrenheit and the other in degrees Celsius.
In this case, even if the tools run and generate a new raster layer, the values will be incorrect.
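A rough sketch of that workflow in ArcPy (all paths and parameter choices here are hypothetical; adjust them to your data):

```python
import arcpy

# Hypothetical paths
high_res  = r"C:\data\high_res.tif"        # fine-resolution raster (yellow)
low_res   = r"C:\data\classes.tif"         # coarse global class raster
resampled = r"C:\data\classes_resampled.tif"

# Bring the class raster to the cell size of the high-resolution raster.
cell_size = arcpy.management.GetRasterProperties(high_res, "CELLSIZEX").getOutput(0)
arcpy.management.Resample(low_res, resampled, cell_size, "NEAREST")

# Mosaic both rasters into a new dataset.
arcpy.management.MosaicToNewRaster(
    [high_res, resampled],   # input rasters
    r"C:\data",              # output location
    "mosaic.tif",            # output raster name
    "",                      # coordinate system (take from inputs)
    "32_BIT_FLOAT",          # pixel type
    "",                      # cell size (take from inputs)
    1,                       # number of bands
    "LAST")                  # mosaic method
```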
I hope this answer was helpful.
Good luck.

Matplotlib: optimize vector image output, remove hidden vectors data

Assume I have a very big Matplotlib scatter plot with overlapping points.
Is there a way to ask Matplotlib to optimize ('compress') the plot (e.g. during savefig), so as not to export vectors which are invisible (overlapped by others)?
My current EPS/SVG/PDF savefig output is very big.
I can solve this by preprocessing the input data; however, the level of overlapping also depends on the plot properties (e.g. the size of the markers etc.).
If Matplotlib cannot do this, are there other tools which can optimize vector graphics (open-source/free tools, so not the Adobe suite)?
Thanks.
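Not a way to strip hidden vectors, but a common workaround is matplotlib's rasterized=True: the scatter is embedded as a bitmap inside the otherwise vector EPS/SVG/PDF, so axes, labels and text stay as true vectors while the file size stops growing with the number of points. A minimal sketch with made-up data:

```python
import numpy as np
import matplotlib.pyplot as plt

x, y = np.random.randn(2, 1_000_000)        # heavily overlapping points

fig, ax = plt.subplots()
# Only this artist becomes a bitmap; the rest of the figure stays vector.
ax.scatter(x, y, s=2, rasterized=True)
fig.savefig("scatter.pdf", dpi=300)         # dpi controls the bitmap quality
```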

How can a 3D game render an object without having a sprite for every single angle?

When learning to program simple 2D games, each object would have a sprite sheet with little pictures of how a player would look in every frame/animation. 3D models don't seem to work this way, or we would need one image for every possible view of the object!
For example, a rotating cube would need a lot of images depicting how it would look from every single side. So my question is, how are 3D model "images" represented and rendered by the engine when viewed from arbitrary perspectives?
Multiple methods
There are a number of methods for rendering and storing 3D graphics and models. There are even different methods for rendering 2D graphics! In addition to 2D bitmaps, you also have SVG. SVG uses numbers to define points in an image. These points make shapes. The points can also define curves. This allows you to make images without the need for pixels. The result can be smaller file sizes, along with the ability to transform the image (scale and rotate) without causing distortion. Most 3D graphics use a similar technique, except in 3D. What all of these methods have in common, however, is that they ultimately render the data to a 2D grid of pixels.
Projection
The most common method for rendering 3D models is projection. All of the shapes to be rendered are broken down into triangles before rendering. Why triangles? Because triangles are guaranteed to be coplanar. That saves a lot of work for the renderer since it doesn't have to worry about "coloring outside of the lines". One drawback to this is that most 3D graphics projection technologies don't support perfect spheres or other round surfaces. You have to use approximations and other tricks to make round surfaces (although there are some renderers which support round surfaces). The next step is to convert or project all of the 3D points into 2D points on the screen (as seen below).
From there, you essentially "color in" the triangles to make everything look solid. While this is pretty fast, another downside is that you can't really have things like reflections and refractions. Any time you see a refractive or reflective surface in a game, it is only trickery making the surface look like a reflective or refractive material. The same goes for lighting and shading.
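A rough sketch of that projection step (a simple pinhole projection with a perspective divide; the focal length and screen mapping are made up for illustration):

```python
def project(point, focal_length=1.0, screen_w=640, screen_h=480):
    """Project a 3D point in camera space (+z into the screen) to 2D pixels."""
    x, y, z = point
    # Perspective divide: points farther away (larger z) land closer to the center.
    ndc_x = focal_length * x / z
    ndc_y = focal_length * y / z
    # Map from the [-1, 1] range to pixel coordinates.
    px = (ndc_x + 1) * 0.5 * screen_w
    py = (1 - ndc_y) * 0.5 * screen_h      # flip y so +y points up on screen
    return px, py

# A triangle in front of the camera becomes three 2D points to "color in".
triangle_3d = [(0.0, 1.0, 3.0), (-1.0, -1.0, 3.0), (1.0, -1.0, 4.0)]
triangle_2d = [project(p) for p in triangle_3d]
```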
Here is an example of special coloring being used to make a sphere approximation look smooth. Notice that you can still see straight lines around the smoothed version:
Ray tracing
You can also render polygons using ray tracing. With this method, you basically trace the paths that the light takes to reach the camera. This allows you to make realistic reflections and refractions. However, I won't go into detail since it is currently too slow to realistically use in games. It is mainly used for 3D animations (like what Pixar makes). Simple scenes with low quality settings can be ray traced pretty quickly, but with complicated, realistic scenes, rendering can take several hours for a single frame (as is the case with Pixar movies). However, it does produce ultra-realistic images:
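The core primitive of a ray tracer is intersecting a ray with the scene. A minimal ray-sphere intersection (the standard quadratic solution; the names below are chosen just for illustration):

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    """Return the distance t along the ray to the first hit, or None if it misses."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t.
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx*dx + dy*dy + dz*dz
    b = 2.0 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - radius*radius
    disc = b*b - 4*a*c
    if disc < 0:
        return None                          # the ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2*a)       # nearest of the two roots
    return t if t >= 0 else None             # hit must be in front of the origin

# Example: a ray along +z hitting a sphere 5 units away
print(ray_sphere_intersect((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # -> 4.0
```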
Ray casting
Ray casting is not to be confused with the above-mentioned ray tracing. Ray casting does not trace the light paths. That means you only have flat, non-reflective surfaces. It also does not produce realistic lighting. However, it can be done relatively quickly, since in most cases you don't even need to cast a ray for every pixel. This is the method that was used for early games such as Doom and Wolfenstein 3D. In those games, ray casting was used for the maps, while the characters and other items were rendered as 2D sprites that always faced the camera. The sprites were drawn from a few different angles to make them look 3D. Here is an image of Wolfenstein 3D:
Castle Wolfenstein with JavaScript and HTML5 Canvas: Image by Martin Kliehm
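A toy ray caster in the same spirit can be sketched in a few lines: march one ray per screen column through a 2D grid map until it hits a wall, and draw that column taller the closer the hit is (the map layout and all constants below are made up):

```python
import math

MAP = [
    "#####",
    "#...#",
    "#.#.#",
    "#...#",
    "#####",
]

def cast_ray(px, py, angle, max_dist=20.0, step=0.02):
    """March along the ray until a wall cell ('#') is reached."""
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        x, y = px + dx * dist, py + dy * dist
        if MAP[int(y)][int(x)] == '#':
            return dist
        dist += step
    return max_dist

# One ray per column of a 40-pixel-wide "screen", 60-degree field of view.
player_x, player_y, facing = 2.5, 2.5, 0.0
fov = math.radians(60)
column_heights = []
for col in range(40):
    ray_angle = facing - fov / 2 + fov * col / 39
    d = cast_ray(player_x, player_y, ray_angle)
    column_heights.append(int(24 / max(d, 0.1)))   # nearer walls look taller
```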
Storing the data
3D data can be stored using multiple methods. It is not necessarily dependent on the rendering method that is used. The stored data doesn't mean anything by itself, so you have to render it using one of the methods that have already been mentioned.
Polygons
This is similar to SVG. It is also the most common method for storing model data. You define the geometry using 3D points. These points can have other properties, such as texture data (in the form of UV mapping), color data, and whatever else you might want.
The data can be stored using a number of file formats. A common file format that is used is COLLADA, which is an XML file that stores the 3D data. There are a lot of other formats though. Fundamentally, however, all file formats are still storing the 3D data.
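As a minimal illustration of the idea (not any particular file format), a mesh can be stored as a shared vertex list plus triangles that index into it, with per-vertex attributes kept in parallel arrays:

```python
# A unit cube: 8 shared vertex positions.
vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),   # back face corners (z = 0)
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),   # front face corners (z = 1)
]

# 12 triangles, each a triple of indices into the vertex list
# (wound counter-clockwise when viewed from outside the cube).
triangles = [
    (0, 2, 1), (0, 3, 2),   # back
    (4, 5, 6), (4, 6, 7),   # front
    (0, 1, 5), (0, 5, 4),   # bottom
    (3, 7, 6), (3, 6, 2),   # top
    (1, 2, 6), (1, 6, 5),   # right
    (0, 4, 7), (0, 7, 3),   # left
]

# Per-vertex attributes such as UV coordinates or colors are stored in
# lists indexed the same way as the positions.
uvs = [(0, 0), (1, 0), (1, 1), (0, 1)] * 2
```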
Here is an example of a polygon model:
Voxels
This method is pretty simple. You can think of voxel models like bitmaps, except they are a bunch of bitmaps layered together to make 3D bitmaps, so you have a 3D grid of pixels. One way of rendering voxels is converting the voxel points to 3D cubes. Note that voxels do not have to be rendered as cubes, however. Like pixels, they are only points that may have color data which can be interpreted in different ways. I won't go into much detail since this isn't too common, and you generally render the voxels with polygon methods (like when you render them as cubes). Here is an example of a voxel model:
Image by Wikipedia user Vossman
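A minimal illustration of the idea (assuming NumPy; the grid size and color are arbitrary):

```python
import numpy as np

# A voxel model is just a dense 3D grid; non-empty cells are filled.
# Here every cell stores an RGB color, with all-zero meaning empty space.
size = 16
grid = np.zeros((size, size, size, 3), dtype=np.uint8)
grid[4:12, 4:12, 4:12] = (200, 60, 60)       # a solid red block in the middle

# To render with polygon methods, emit a cube (or just its visible faces)
# for every filled cell.
filled = grid.any(axis=-1)
print(filled.sum(), "filled voxels")
```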
In the 2D world with sprite sheets, you are drawing one of the sprites depending on the state of the actor (visual representation of your object). In the 3D world you are rendering a model for your actor that is a series of polygons with a texture mapped to it. There are standardized model files (I am mostly familiar with Autodesk 3DS Max), in which the model and the assigned textures can be packaged together (a .3DS or .MAX file), providing everything your graphics library needs to render the object and its textures.
In a nutshell, you don't use images for each view of a 3D object, you have a model with a texture rendered on it, creating a dynamic view as it is rendered by the graphics library.

How to plot 2D vector field in single picture using Hue and Brightness method with Digital Micrograph script?

I would like to plot a 2D vector field in a single picture using the hue & brightness method, i.e., hue for direction (or, say, phase) and brightness for magnitude.
Such a method is often used to visualize, e.g., magnetic domains and vortices reconstructed from Lorentz microscopy.
As input, I have two images of size 1024×1024 whose pixels contain the magnitude of the X and Y components of the vector field.
Since DM does not support a native HSL color scheme, I think one should first use a group of self-defined functions to convert HSL to RGB...
You can only use RGB images in DigitalMicrograph, so you will have to do the conversion from HSB to RGB in your script code, and then create the corresponding RGB image.
Luckily, there is a demonstration script on the Gatan script resources webpage which does exactly that! You can basically use the script as it is shown there.
Gatan Script Resources
Link to script-file:
Display as HSB
Note that the script uses complex images as input, simply as a convenient container to combine two images into a single one. The test function demonstrates this, though.
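For reference, the hue/brightness mapping itself is simple. Here is a minimal sketch outside DigitalMicrograph (Python with NumPy/matplotlib, random arrays standing in for the two component images), showing direction mapped to hue and magnitude to brightness; the DM script has to perform the same HSB-to-RGB conversion in script code:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb

# Stand-ins for the two 1024x1024 component images.
vx = np.random.randn(1024, 1024)
vy = np.random.randn(1024, 1024)

angle = np.arctan2(vy, vx)                   # direction, -pi..pi
magnitude = np.hypot(vx, vy)

hsv = np.zeros(vx.shape + (3,))
hsv[..., 0] = (angle + np.pi) / (2 * np.pi)  # hue: direction mapped to 0..1
hsv[..., 1] = 1.0                            # full saturation
hsv[..., 2] = magnitude / magnitude.max()    # brightness: normalized magnitude

plt.imshow(hsv_to_rgb(hsv))
plt.axis('off')
plt.show()
```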

How does Blender calculate vertex normals?

I'm attempting to calculate vertex normals for various game assets. The normals I calculate are used for "inflating" the model (to draw behind the real model producing a thick outline).
I currently compute the normal for each face and average all of them (several other questions on Stack Overflow suggest this approach). However, this doesn't work for sharp corners like this one (adjacent faces' normals marked in orange, the normal I'm trying to calculate is outlined in green).
The object looks like a small pedestal and we're looking at the front-left corner. There are three adjoining faces (the bottom face isn't visible; its normal points straight down).
Blender computes an excellent normal that lies squarely in the middle of the three faces' normals; it seems like it somehow calculates a normal that has minimum rotation to each of the three face normals. Blender's normal also doesn't change when the quads are triangulated differently.
Averaging the faces' normals gives me a different normal that points slightly upward in the Z-axis (-0.45, -0.89, +0.08). Inflating my model this way doesn't produce a good outline because the bottom face of the outline is shifted up and doesn't enclose the original model.
I attempted to look at the Blender source code but couldn't find what I was looking for. If anyone can point me to the algorithm in the Blender source, I'd accept that also.
Weight each face normal by the angle the face subtends at the vertex where the faces join. This is common practice in surface rendering (see the discussion here: http://www.bytehazard.com/code/vertnorm.html), and it will ensure that your bottom face is weighted more strongly than the two slanted side faces. I don't know if Blender does it differently, but you should give it a try.
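A minimal sketch of angle-weighted vertex normals (NumPy; the function and variable names are mine, and whether Blender uses exactly this weighting is not confirmed here):

```python
import math
import numpy as np

def angle_weighted_vertex_normals(vertices, triangles):
    """Vertex normals where each face's contribution is weighted by the
    angle that face subtends at the vertex."""
    vertices = np.asarray(vertices, dtype=float)
    normals = np.zeros_like(vertices)
    for i0, i1, i2 in triangles:
        p0, p1, p2 = vertices[i0], vertices[i1], vertices[i2]
        face_n = np.cross(p1 - p0, p2 - p0)
        length = np.linalg.norm(face_n)
        if length == 0:
            continue                              # skip degenerate triangles
        face_n /= length
        # Accumulate the face normal at each corner, scaled by the corner angle.
        for vi, a, b in ((i0, p1 - p0, p2 - p0),
                         (i1, p2 - p1, p0 - p1),
                         (i2, p0 - p2, p1 - p2)):
            cos_a = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            angle = math.acos(np.clip(cos_a, -1.0, 1.0))
            normals[vi] += angle * face_n
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    lengths[lengths == 0] = 1.0
    return normals / lengths                      # unit-length vertex normals
```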