I need to generate a mesh file, from which I need to extract the following information:
the X, Y and Z coordinates of each node, plus the node tags
the list of all the elements, plus the element tags
I would also like to give each edge of my domain (both the elements and the nodes of the edges) an index, in order to use it in my code for the management of BCs, ICs and parameters.
Is there any preexisting code that would help me do that?
I tried gmsh, but I can't really understand the syntax of the .msh file, which is different from the explanation they give in section 9.1, "MSH file format".
I've created meshio for this purpose. Here's how to write a file:
import meshio
import numpy

points = numpy.array([
    [0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
cells = {
    "triangle": numpy.array([
        [0, 1, 2]
    ])
}
meshio.write_points_cells(
    "foo.vtk",
    points,
    cells,
    # Optionally provide extra data on points, cells, etc.
    # point_data=point_data,
    # cell_data=cell_data,
    # field_data=field_data,
)
Many different formats are supported.
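Reading works the same way, which also covers extracting the node coordinates and element connectivity from an existing file. A minimal sketch (exactly how the cells are grouped depends on your meshio version):
mesh = meshio.read("foo.msh")  # any supported format
print(mesh.points)  # (n, 3) array of node coordinates
print(mesh.cells)   # element connectivity, grouped by cell type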
My goal is to create a stratigraphic column (colored stacked rectangles) using matplotlib like the example below.
Data is in this format:
depth = [1,2,3,4,5,6,7,8,9,10] #depth (feet) below ground surface
lithotype = [4,4,4,5,5,5,6,6,6,2] #lithology type. 4 = clay, 6 = sand, 2 = silt
I tried matplotlib.patches.Rectangle, but it's cumbersome. I'm wondering if someone has another suggestion.
Imho, using Rectangle is neither difficult nor cumbersome.
from matplotlib.pyplot import show, subplots
from matplotlib.cm import get_cmap
from matplotlib.patches import Rectangle as r

# a simplification is to use, for the lithology types, a qualitative colormap
# here I use Paired, but other qualitative colormaps are displayed in
# https://matplotlib.org/stable/tutorials/colors/colormaps.html#qualitative
qcm = get_cmap('Paired')

# the data, augmented with type descriptions
# note that depths start from zero
depth = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]  # depth (feet) below ground surface
lithotype = [4, 4, 4, 5, 5, 5, 6, 1, 6, 2]  # lithology type
types = {1: 'swiss cheese', 2: 'silt', 4: 'clay', 5: 'silty sand', 6: 'sand'}

# prepare the figure
fig, ax = subplots(figsize=(4, 8))
w = 2  # a conventional width, used to size the x-axis and the rectangles
ax.set(xlim=(0, w), xticks=[])  # size the x-axis, no x ticks
ax.set_ylim(ymin=0, ymax=depth[-1])
ax.invert_yaxis()
fig.suptitle('Soil Behaviour Type')
fig.subplots_adjust(right=0.5)

# plot a series of dots that will eventually be covered by the Rectangles,
# so that we can draw a legend
for lt in set(lithotype):
    ax.scatter(lt, depth[1], color=qcm(lt), label=types[lt], zorder=0)
fig.legend(loc='center right')
ax.plot((1, 1), (0, depth[-1]), lw=0)  # invisible line, keeps the y-axis spanning the full depth

# do the rectangles
for d0, d1, lt in zip(depth, depth[1:], lithotype):
    ax.add_patch(
        r((0, d0),     # coordinates of the upper left corner
          w, d1 - d0,  # conventional width on x, thickness of the layer
          facecolor=qcm(lt), edgecolor='k'))

# That's all, folks!
show()
As you can see, placing the rectangles is not complicated; what is indeed cumbersome is properly preparing the Figure and the Axes.
I know that I omitted part of the qualifying details from my solution, but I hope these omissions won't stop you from profiting from my answer.
I made a package called striplog for handling this sort of data and making these kinds of plots.
The tool can read CSV, LAS, and other formats directly (though the formats are rather particular), but we can also construct a Striplog object manually. First let's set up the basic data:
depth = [1,2,3,4,5,6,7,8,9,10]
lithotype = [4,4,4,5,5,5,6,6,6,2]
KEY = {2: 'silt', 4: 'clay', 5: 'mud', 6: 'sand'}
Now you need to know that a Striplog is composed of Interval objects, each of which can have one or more Component elements:
from striplog import Striplog, Component, Interval

intervals = []
for top, base, lith in zip(depth, depth[1:], lithotype):
    comp = Component({'lithology': KEY[lith]})
    iv = Interval(top, base, components=[comp])
    intervals.append(iv)

s = Striplog(intervals).merge_neighbours()  # Merge like with like.
This results in Striplog(3 Intervals, start=1.0, stop=10.0). Now we'd like to make a plot using an appropriate Legend object.
from striplog import Legend
legend_csv = u"""colour, width, component lithology
#F7E9A6, 3, Sand
#A68374, 2.5, Silt
#99994A, 2, Mud
#666666, 1, Clay"""
legend = Legend.from_csv(text=legend_csv)
s.plot(legend=legend, aspect=2, label='lithology')
Which gives:
Admittedly the plotting is a little limited, but it's just matplotlib so you can always add more code. To be honest, if I were to build this tool today, I think I'd probably leave the plotting out entirely; it's often easier for the user to do their own thing.
Why go to all this trouble? Fair question. striplog lets you merge zones, make thickness or lithology histograms, make queries ("show me sandstone beds thicker than 2 m"), make 'flags', export LAS or CSV, and even do Markov chain sequence analysis. But even if it's not what you're looking for, maybe you can recycle some of the plotting code! Good luck.
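As one concrete example of those queries, finding all the sandy intervals looks roughly like this (from memory, so treat it as a sketch and check the striplog docs):
sands = s.find('sand')  # a new Striplog holding only the matching intervals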
I have a simple question that I can't find an answer for.
The sample is drawn from a bivariate normal distribution (X,Y) with given parameters like this:
import numpy as np
sample = np.random.multivariate_normal([1, 1], [[1, 0.2], [0.2, 0.8]], 10000)
Now I need to extract the marginal distributions from this joint distribution. I want to get two arrays, called fx_x and fy_y, that contain the marginal distributions of X and Y.
fx_x = [....]
fy_y = [....]
How do I do this?
Thanks.
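For what it's worth, a sample drawn from a joint distribution already contains samples of its marginals: each column of the (10000, 2) array is a draw from the corresponding marginal. So a minimal sketch is simply:
fx_x = sample[:, 0]  # samples of the X marginal, here N(1, 1)
fy_y = sample[:, 1]  # samples of the Y marginal, here N(1, 0.8)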
I am trying to generate a finite element mesh using PyGmsh, using the following code:
import pygmsh
geom = pygmsh.opencascade.Geometry(
    characteristic_length_min=0.1,
    characteristic_length_max=0.1,
)
rectangle = geom.add_rectangle([-1.0, -1.0, 0.0], 2.0, 2.0)
disk1 = geom.add_disk([-1.2, 0.0, 0.0], 0.5)
disk2 = geom.add_disk([+1.2, 0.0, 0.0], 0.5)
disk3 = geom.add_disk([0.0, -0.9, 0.0], 0.5)
disk4 = geom.add_disk([0.0, +0.9, 0.0], 0.5)
union = geom.boolean_union([rectangle, disk1, disk2])
diff = geom.boolean_difference([union], [disk3, disk4])
mesh = pygmsh.generate_mesh(geom, dim=2)
I can generate the following mesh:
However, I would like to add a crack to the mesh, something like:
The crack here is just an example, it would need to be defined before the meshing process.
I've tried creating two points (geom.add_point()) and a line (geom.add_line()), and then taking a geom.boolean_difference() between the final geometry and the line/crack, but this just does not work.
Any help would be greatly appreciated.
EDIT
The purpose of this type of mesh generation is to simulate a physical crack in a body. In the meshing process, the crack can be modeled through the element connectivity of the mesh (i.e. the elements on either side must have different nodes, creating the crack faces). For example, before applying any load, the crack is closed:
After applying the load, the crack opens since the element connectivity allows this:
You can achieve this by modeling a very narrow rectangle in that region; you can easily give it dimensions like 1e-10. I also modelled the crack tip with a very small circle, to collapse the nodes at a single point. It works quite well.
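Expressed with the question's own pygmsh calls (reusing the diff object built above; the positions and sizes are purely illustrative), a sketch of this idea:
crack = geom.add_rectangle([-0.5, 0.0, 0.0], 1.0, 1e-10)  # very narrow rectangle acting as the crack
tip = geom.add_disk([0.5, 0.0, 0.0], 1e-9)                # tiny circle at the crack tip
cracked = geom.boolean_difference([diff], [crack, tip])
mesh = pygmsh.generate_mesh(geom, dim=2)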
There is also a plugin for this now (the Crack plugin); it automatically separates the nodes at the specified crack line/surface.
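I haven't run it here, but from the Python API the usage should be roughly as follows (the crack curve is assumed to be in physical group 100, and the option names are from memory, so double-check them against the plugin's documentation):
gmsh.plugin.setNumber("Crack", "Dimension", 1)        # the crack is a 1D curve
gmsh.plugin.setNumber("Crack", "PhysicalGroup", 100)  # physical group of the crack curve
gmsh.plugin.run("Crack")                              # duplicates the nodes along the crack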
This can be achieved using the "embed" functionality. A minimal working example (in Python) is below.
import gmsh
gmsh.initialize()
gmsh.model.add("TestModel")
ms = 1 # mesh size at point
# square (plate) points
gmsh.model.geo.addPoint(0, 0, 0, ms, 1)
gmsh.model.geo.addPoint(8, 0, 0, ms, 2)
gmsh.model.geo.addPoint(8, 8, 0, ms, 3)
gmsh.model.geo.addPoint(0, 8, 0, ms, 4)
# square (plate) lines
gmsh.model.geo.addLine(1, 2, 1)
gmsh.model.geo.addLine(2, 3, 2)
gmsh.model.geo.addLine(3, 4, 3)
gmsh.model.geo.addLine(4, 1, 4)
# square (plate) curve loop
gmsh.model.geo.addCurveLoop([1, 2, 3, 4], 1)
# square (plate) surface
s = gmsh.model.geo.addPlaneSurface([1])
# "crack" geometry
a = gmsh.model.geo.addPoint(2, 2, 0, ms)
b = gmsh.model.geo.addPoint(6, 4, 0, ms/100)
l = gmsh.model.geo.addLine(a, b)
# synchronize
gmsh.model.geo.synchronize()
# embed "crack" on plate
gmsh.model.mesh.embed(1, [l], 2, s)
# generate mesh
gmsh.model.mesh.generate(2)
gmsh.fltk.run()
gmsh.finalize()
Output:
I am working with a multilabel classification problem, using Keras, scikit-learn, etc.
My dataframe contains 4000 microscopic oil samples, with images and 13 different labels for the problems found in those samples.
I have converted all the images and labels to numpy arrays.
Example of one labeled image:
[ 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0 ]
In this label, a position equal to 1 means that the sample has a specific problem, like certain particles in the oil, and as you can see, a sample can have more than one output.
The problem is that my dataframe is imbalanced and I need to apply a class-weight method, but looking at the labels, I think I first need to use something like [0, 1, 0, 0, ...], not the example I gave above.
To be clear, I can run my neural network code without class weights and it works, but I can't properly train the model with that imbalanced data.
I've already tried working with lists, unsuccessfully!
Of course, I also have problems with shapes: the images have shape (1000, 100, 200, 3), for example, and the labels (1000, 13); that's why I can't apply the class weights either.
There are a few problems I'm trying to fix.
I will post my code, because I'm stuck and I don't know what to do.
class_weight_list = compute_class_weight('balanced',np.unique(Y_train), Y_train)
class_weight = dict(zip(np.unique(Y_train), class_weight_list))
Y_train = to_categorical(Y_train,num_classes=len(np.unique(Y_train)))
main.py
dataset.py
models.py
What is the best strategy to work with labels in this case?
I'd appreciate it if someone could help me.
Thanks in advance!!
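For what it's worth, one common strategy for imbalanced multilabel data is to keep the multi-hot labels exactly as they are and move the weighting into the loss, instead of passing class_weight to fit(). A sketch, assuming Y_train is a multi-hot numpy array of shape (n_samples, 13) and the model ends in 13 sigmoid units (all names here are illustrative):
import numpy as np
import tensorflow as tf

pos = Y_train.sum(axis=0)              # positive count per label
neg = len(Y_train) - pos               # negative count per label
pos_weight = neg / np.maximum(pos, 1)  # up-weight the rare positive labels

def weighted_bce(y_true, y_pred):
    # binary cross-entropy with the positive term weighted per label
    w = tf.constant(pos_weight, dtype=tf.float32)
    eps = 1e-7
    return -tf.reduce_mean(
        w * y_true * tf.math.log(y_pred + eps)
        + (1.0 - y_true) * tf.math.log(1.0 - y_pred + eps))

# model.compile(optimizer='adam', loss=weighted_bce)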
I want to normalize the pixel values of an image to the range [0, 1] for each channel (R, G, B).
Minimal Example
#!/usr/bin/env python
import numpy as np
import scipy.misc
from sklearn import preprocessing

original = scipy.misc.imread('Crocodylus-johnsoni-3.jpg')
scipy.misc.imshow(original)
transformed = np.zeros(original.shape, dtype=np.float64)
scaler = preprocessing.MinMaxScaler()
for channel in range(3):
    transformed[:, :, channel] = scaler.fit_transform(original[:, :, channel])
scipy.misc.imsave("transformed.jpg", transformed)
What happens
Taking https://commons.wikimedia.org/wiki/File:Crocodylus-johnsoni-3.jpg,
I get the following "normalized" result:
As you can see there are lines from top to bottom at the right side. What happened there? It seems to me that the normalization went wrong. If so: How do I fix it?
In scikit-learn, a two-dimensional array with shape (m, n) is usually interpreted as a collection of m samples, with each sample having n features.
MinMaxScaler.fit_transform() transforms each feature, so each column of your array is transformed independently of the others. That results in the vertical "stripes" in the image.
It looks like you intended to scale each color channel independently. To do that using MinMaxScaler, reshape the input so that each channel becomes one column. That is, if the original image has shape (m, n, 3), reshape it to (m*n, 3) before passing it to the fit_transform() method, and then restore the shape of the result to create the transformed array.
For example,
ascolumns = original.reshape(-1, 3)
t = scaler.fit_transform(ascolumns)
transformed = t.reshape(original.shape)
With this, transformed looks like this:
The image looks exactly like the original, because it turns out that in the array original, the minimum and maximum are 0 and 255, respectively, in each channel:
In [41]: original.min(axis=(0, 1))
Out[41]: array([0, 0, 0], dtype=uint8)
In [42]: original.max(axis=(0, 1))
Out[42]: array([255, 255, 255], dtype=uint8)
So all fit_transform does in this case is transform all the input values to the floating point range [0.0, 1.0] uniformly. If the minimum or maximum was different in one of the channels, the transformed image would look different.
By the way, it is not difficult to perform the transform using pure numpy. (I'm using Python 3, so in the following, the division automatically casts the result to floating point. If you are using Python 2, you'll need to convert one of the arguments to floating point, or use from __future__ import division.)
In [58]: omin = original.min(axis=(0, 1), keepdims=True)
In [59]: omax = original.max(axis=(0, 1), keepdims=True)
In [60]: xformed = (original - omin)/(omax - omin)
In [61]: np.allclose(xformed, transformed)
Out[61]: True
(One potential problem with that method is that it will generate an error if one of the channels is constant, because then one of the values in omax - omin will be 0.)
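If that edge case matters, a small guard avoids the division by zero by mapping constant channels to 0 (a sketch, continuing from the arrays above):
denom = omax - omin
denom[denom == 0] = 1  # a constant channel then maps to 0 instead of dividing by 0
xformed = (original - omin)/denom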