Hi everybody, more than a year ago I developed a quality-plot library (for publication-ready figures).
Now there are these four lines in my base class:
font_dirs = ['/home/marco/.fonts', ]
font_files = font_manager.findSystemFonts(fontpaths=font_dirs)
font_list = font_manager.createFontList(font_files)
font_manager.fontManager.ttflist.extend(font_list)
but when I run any plotting class defined in this library (inherited from the base class with the four lines above) I get this message:
The createFontList function was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use FontManager.addfont instead.
/usr/lib/python3.7/_collections_abc.py:841: MatplotlibDeprecationWarning: Support for setting the 'text.latex.preamble' or 'pgf.preamble' rcParam to a list of strings is deprecated since 3.3 and will be removed two minor releases later; set it to a single string instead.
self[key] = other[key]
Can somebody help me understand what I have to modify in order to make everything work fine?
I had a similar problem with the preamble for the pgf and LaTeX settings of my plots:
I had to change the following code:
pgf_with_latex = { # setup matplotlib to use latex for output
"pgf.texsystem": "pdflatex", # change this if using xetex or lautex
"text.usetex": True, # use LaTeX to write all text
"font.family": "serif",
"font.serif": [], # blank entries should cause plots
"font.sans-serif": [], # to inherit fonts from the document
"font.monospace": [],
"axes.labelsize": 10, # LaTeX default is 10pt font.
"font.size": 10,
"legend.fontsize": 8, # Make the legend/label fonts
"xtick.labelsize": 8, # a little smaller
"ytick.labelsize": 8,
"figure.figsize": figsize(0.9), # default fig size of 0.9 textwidth
"pgf.preamble": [
r"\usepackage[utf8x]{inputenc}", # use utf8 fonts
r"\usepackage[T1]{fontenc}", # plots will be generated
r"\usepackage[detect-all,locale=DE]{siunitx}",
] # using this preamble
}
to:
pgf_with_latex = { # setup matplotlib to use latex for output
"pgf.texsystem": "pdflatex", # change this if using xetex or lualatex
"text.usetex": True, # use LaTeX to write all text
"font.family": "serif",
"font.serif": [], # blank entries should cause plots
"font.sans-serif": [], # to inherit fonts from the document
"font.monospace": [],
"axes.labelsize": 10, # LaTeX default is 10pt font.
"font.size": 10,
"legend.fontsize": 8, # Make the legend/label fonts
"xtick.labelsize": 8, # a little smaller
"ytick.labelsize": 8,
"pgf.preamble": r"\usepackage[detect-all,locale=DE]{siunitx} \usepackage[T1]{fontenc} \usepackage[utf8x]{inputenc}"}
You just have to rewrite your lists (here in pgf.preamble) as a single string.
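For the first warning in your question (createFontList), the message itself points at the fix; here is a minimal sketch, assuming Matplotlib 3.2+ and the same font directory as in your code:
from matplotlib import font_manager

font_dirs = ['/home/marco/.fonts']
font_files = font_manager.findSystemFonts(fontpaths=font_dirs)

# Register each font file individually; FontManager.addfont replaces the
# deprecated createFontList / ttflist.extend pattern.
for font_file in font_files:
    font_manager.fontManager.addfont(font_file)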
I use the sympy plotting backends library to create plots directly from sympy expressions, with matplotlib as the backend. Actually, I don't need to show a plot: I create the plot and then return it as an SVG string, in order to insert it into a web page later. Everything worked fine, but when I started to use a new virtual environment (with all necessary packages installed) I got an error. I use the following code:
# create two plots: one from init_expr (a sympy expression tree) and a second
# from fourier_expr (a sympy expression tree):
p1 = plot(
(init_expr, (var, left_bound, right_bound), label_1, line_color_1),
(fourier_expr, (var, left_bound, right_bound), label_2, line_color_2),
show = False)
# add axes decorations and legend:
p1.xlabel = var_name
p1.ylabel = "f(" + data.var_name + ")"
p1.legend = True
# buffer to write svg data:
f = io.StringIO()
# save the plot in SVG format into the buffer:
p1.save(f, format = "svg")
# return svg string:
return f.getvalue()
When execution reaches the line
p1.save(f, format="svg")
I get an error:
AttributeError("'NoneType' object has no attribute 'runner'")
Any ideas what I am doing wrong?
I've been using GEE to export some training patches from Sentinel-2 to be used in Python.
I could make it work by following the GEE guide https://developers.google.com/earth-engine/tfrecord and using the Export.image.toDrive function, and then I can parse the exported TFRecord file to reconstruct my tiles.
var image_export_options = {
'patchDimensions': [366, 366],
'maxFileSize': 104857600,
// 'kernelSize': [366, 366],
'compressed': true
}
Export.image.toDrive({
image: clipped_img.select(bands.concat(['classes'])),
description: 'PatchesExport',
fileNamePrefix: 'Oros_1',
scale: 10,
folder: 'myExportFolder',
fileFormat: 'TFRecord',
region: export_area,
formatOptions: image_export_options,
})
However, when I try to specify the kernelSize in the formatOptions (which is supposed to "overlap adjacent tiles by [kernelSize[0]/2, kernelSize[1]/2]", according to the guide), the files are exported but the '*mixer.json' doesn't reflect the increased number of patches and I am not able to iterate through the patches afterwards. The following command crashes the Google Colab session:
image_dataset = tf.data.TFRecordDataset(str(path/(file_prefix+'-00000.tfrecord.gz')), compression_type='GZIP')
first = next(iter(image_dataset))
first
The weird thing is that the problem happens only when I add the kernelSize to the formatOptions.
After some time trying to overcome this issue, I realized there is a not-well-documented behavior when one uses the kernel size to export patches from GEE.
Bundled with the exported TFRecord there is one JSON file, called the mixer.
It doesn't matter if we use:
'patchDimensions': [184, 184],
'kernelSize': [1, 1], #default for no overlapping
or
'patchDimensions': [184, 184],
'kernelSize': [184, 184], #half patch overlapping
The mixer file remains the same, with no mention of the kernel/overlap size:
{'patchDimensions': [184, 184],
'patchesPerRow': 8,
'projection': {'affine': {'doubleMatrix': [10.0,
0.0,
493460.0,
0.0,
-10.0,
9313540.0]},
'crs': 'EPSG:32724'},
'totalPatches': 40}
In the second case, if we try to parse the patches using tf.io.parse_single_example(example_proto, image_features_dict), where image_features_dict equals something like:
{'B2': FixedLenFeature(shape=[184, 184], dtype=tf.float32, default_value=None),
'B3': FixedLenFeature(shape=[184, 184], dtype=tf.float32, default_value=None),
'B4': FixedLenFeature(shape=[184, 184], dtype=tf.float32, default_value=None)}
it will raise the error:
_FallbackException: This function does not handle the case of the path where all inputs are not already EagerTensors.
Can't parse serialized Example. [Op:ParseExampleV2]
Instead, to parse these records which have kernelSize > 1, we have to consider patchDimensions + kernelSize as the resulting patch size, even though the mixer.json file says the contrary. In this example, our patch size would be 368 (original patch size + kernelSize). Be aware that for odd kernel sizes, the number to be added to the original patch size is kernelSize - 1.
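A minimal parsing sketch following that rule, reusing the patch size, kernel size and band names from this example (the file name is just illustrative):
import tensorflow as tf

PATCH = 184
KERNEL = 184
SIZE = PATCH + KERNEL  # 368: the size the patches actually have on disk

image_features_dict = {
    band: tf.io.FixedLenFeature(shape=[SIZE, SIZE], dtype=tf.float32)
    for band in ['B2', 'B3', 'B4']
}

def parse_example(example_proto):
    return tf.io.parse_single_example(example_proto, image_features_dict)

dataset = tf.data.TFRecordDataset('Oros_1-00000.tfrecord.gz', compression_type='GZIP')
patches = dataset.map(parse_example)  # each element maps band name -> 368x368 tensor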
If I write a simple Rnw document containing a figure, e.g.,
\documentclass[11pt]{article}
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
<<setup, include=FALSE, cache=FALSE>>=
opts_chunk$set(dev = "pdf", comment = NA, fig.path = "figure/", fig.align='center', cache=FALSE, message=FALSE, background='white')
options(replace.assign=TRUE,width=85, digits = 8)
knit_hooks$set(fig=function(before, options, envir){if (before) par(mar=c(4,4,.1,.1),cex.lab=.95,cex.axis=.9,mgp=c(2,.7,0),tcl=-.3)})
@
<<prepare-data, include=FALSE>>=
@
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\begin{document}
A simple plot
\begin{figure}
<< scat, echo = FALSE, fig.width = 4.5, fig.height=3>>=
plot(runif(10), runif(10), pch = 20)
@
\end{figure}
\end{document}
Why does knitr create a PDF file with the filename figure/scat-1.pdf instead of figure/scat.pdf?
The reason was explained in the v1.7 release notes. In the development version (to be v1.8), you can use fig_chunk() to obtain the figure filenames (see package NEWS). Also see a related discussion here.
I have the following scapy layers:
The base layer (which is in fact SCTPChunkData() from scapy.sctp, but below is a simplified version of it):
class BaseProto(Packet):
fields_desc = [ # other fields omitted...
FieldLenField("len", None, length_of="data", adjust = lambda pkt,x:x+6),
XIntField("protoId", None),
StrLenField("data", "", length_from=lambda pkt: pkt.len-6),
]
And my layer is defined like this:
MY_PROTO_ID = 19
class My_Proto(Packet):
fields_desc = [ ShortField ("f1", None),
ByteField ("f2", None),
ByteField ("length", None), ]
I want to dissect the data field from BaseProto as My_Proto if the protoId field from BaseProto equals MY_PROTO_ID.
I've tried using bind_layers() for this purpose, but then I realized that this function "tells" scapy how to dissect the payload of the base layer, not a specific field. In my example, the data field will actually store all the bytes that I want to decode as My_Proto.
Also, guess_payload_class() is not helping, as it's just a different (more powerful) version of bind_layers(), thus operating only at the payload level.
You have to chain the layers as BaseProto()/My_Proto() and use bind_layers(first_layer, next_layer, condition) to have scapy dissect them according to the condition.
Here's how it should look.
PROTO_IDS = {
19: 'my_proto',
# define all other proto ids
}
class BaseProto(Packet):
name = "BaseProto"
fields_desc = [ # other fields omitted...
FieldLenField("len", None, length_of="data", adjust = lambda pkt,x:x+6),
IntEnumField("protoId", 19, PROTO_IDS),
#StrLenField("data", "", length_from=lambda pkt: pkt.len-6), #<-- will be the next layer, extra data will show up as Raw or PADD
]
class My_Proto(Packet):
name = "MyProto Sublayer"
fields_desc = [ ShortField ("f1", None),
ByteField ("f2", None),
ByteField ("length", None), ]
# BIND TCP.dport==9999 => BaseProto and BaseProto.protoId==19 to My_Proto
bind_layers(TCP, BaseProto, dport=9999)
# means: if BaseProto.protoId==19: dissect as BaseProto()/My_Proto()
bind_layers(BaseProto, My_Proto, {'protoId':19})
#example / testing
bytestr = bytes(BaseProto()/My_Proto()) # build the packet into raw bytes
BaseProto(bytestr).show() # dissect
As a reference, have a look at the scapy-ssl_tls layer implementation, as it pretty much exercises everything you need.
For those who want to export a simple 3D numpy array (along with axes) to a .vtk (or .vtr) file for post-processing and display in Paraview or Mayavi, there's a little module called PyEVTK that does exactly that. The module supports structured and unstructured data, etc.
Unfortunately, even though the code works fine on Unix-based systems, I couldn't make it work (it keeps crashing) on any Windows installation, which simply makes things complicated. I've contacted the developer, but his suggestions did not work.
Therefore my question is:
How can one use the numpy_support module (from vtk.util import numpy_support) to export a 3D array (the function itself doesn't support 3D arrays) to a .vtk file? Is there a simple way to do it without creating vtkDatasets, etc.?
Thanks a lot!
It's been forever and I had entirely forgotten asking this question, but I ended up figuring it out. I've written a post about it on my blog (PyScience), providing a tutorial on how to convert between NumPy and VTK. Do take a look if interested:
pyscience.wordpress.com/2014/09/06/numpy-to-vtk-converting-your-numpy-arrays-to-vtk-arrays-and-files/
It's not a direct answer to your question, but if you have tvtk (if you have mayavi, you should have it), you can use it to write your data to vtk format. (See: http://code.enthought.com/projects/files/ETS3_API/enthought.tvtk.misc.html )
It doesn't use PyEVTK, and it supports a broad range of data sources (more than just structured and unstructured grids), so it will probably work where other things aren't.
As a quick example (Mayavi's mlab interface can make this much less verbose, especially if you're already using it):
import numpy as np
from enthought.tvtk.api import tvtk, write_data
data = np.random.random((10,10,10))
grid = tvtk.ImageData(spacing=(10, 5, -10), origin=(100, 350, 200),
dimensions=data.shape)
grid.point_data.scalars = data.ravel(order='F')
grid.point_data.scalars.name = 'Test Data'
# Writes legacy ".vtk" format if filename ends with "vtk", otherwise
# this will write data using the newer xml-based format.
write_data(grid, 'test.vtk')
And a portion of the output file:
# vtk DataFile Version 3.0
vtk output
ASCII
DATASET STRUCTURED_POINTS
DIMENSIONS 10 10 10
SPACING 10 5 -10
ORIGIN 100 350 200
POINT_DATA 1000
SCALARS Test%20Data double
LOOKUP_TABLE default
0.598189 0.228948 0.346975 0.948916 0.0109774 0.30281 0.643976 0.17398 0.374673
0.295613 0.664072 0.307974 0.802966 0.836823 0.827732 0.895217 0.104437 0.292796
0.604939 0.96141 0.0837524 0.498616 0.608173 0.446545 0.364019 0.222914 0.514992
...
...
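For comparison, the vtk.util.numpy_support route mentioned in the question can produce an equivalent file; here's a rough sketch (the array name and output filename are illustrative) that still goes through a vtkImageData object:
import numpy as np
import vtk
from vtk.util import numpy_support

data = np.random.random((10, 10, 10))

# Flatten in Fortran order so the first index varies fastest, matching VTK's point ordering.
vtk_array = numpy_support.numpy_to_vtk(data.ravel(order='F'), deep=True,
                                        array_type=vtk.VTK_DOUBLE)
vtk_array.SetName('Test Data')

image = vtk.vtkImageData()
image.SetDimensions(data.shape[0], data.shape[1], data.shape[2])
image.SetSpacing(1.0, 1.0, 1.0)
image.SetOrigin(0.0, 0.0, 0.0)
image.GetPointData().SetScalars(vtk_array)

# Legacy-format writer; vtkXMLImageDataWriter would give a .vti file instead.
writer = vtk.vtkStructuredPointsWriter()
writer.SetFileName('test_numpy_support.vtk')
writer.SetInputData(image)
writer.Write()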
TVTK of Mayavi has a beautiful way of writing vtk files. Here is a test example I have written for myself, following @Joe and the tvtk documentation. The advantage it has over evtk is the support for both ASCII and XML output. Hope it will help other people.
from tvtk.api import tvtk, write_data
import numpy as np
#data = np.random.random((3, 3, 3))
#
#i = tvtk.ImageData(spacing=(1, 1, 1), origin=(0, 0, 0))
#i.point_data.scalars = data.ravel()
#i.point_data.scalars.name = 'scalars'
#i.dimensions = data.shape
#
#w = tvtk.XMLImageDataWriter(input=i, file_name='spoints3d.vti')
#w.write()
points = np.array([[0,0,0], [1,0,0], [1,1,0], [0,1,0]], 'f')
(n1, n2) = points.shape
poly_edge = np.array([[0,1,2,3]])
print(n1, n2)
## Scalar Data
#temperature = np.array([10., 20., 30., 40.])
#pressure = np.random.rand(n1)
#
## Vector Data
#velocity = np.random.rand(n1,n2)
#force = np.random.rand(n1,n2)
#
##Tensor Data with
comp = 5
stress = np.random.rand(n1,comp)
#
#print stress.shape
## The TVTK dataset.
mesh = tvtk.PolyData(points=points, polys=poly_edge)
#
## Data 0 # scalar data
#mesh.point_data.scalars = temperature
#mesh.point_data.scalars.name = 'Temperature'
#
## Data 1 # additional scalar data
#mesh.point_data.add_array(pressure)
#mesh.point_data.get_array(1).name = 'Pressure'
#mesh.update()
#
## Data 2 # Vector data
#mesh.point_data.vectors = velocity
#mesh.point_data.vectors.name = 'Velocity'
#mesh.update()
#
## Data 3 additional vector data
#mesh.point_data.add_array( force)
#mesh.point_data.get_array(3).name = 'Force'
#mesh.update()
mesh.point_data.tensors = stress
mesh.point_data.tensors.name = 'Stress'
# Data 4 additional tensor Data
#mesh.point_data.add_array(stress)
#mesh.point_data.get_array(4).name = 'Stress'
#mesh.update()
write_data(mesh, 'polydata.vtk')
# XML format
# Method 1
#write_data(mesh, 'polydata')
# Method 2
#w = tvtk.XMLPolyDataWriter(input=mesh, file_name='polydata.vtk')
#w.write()
I know it is a bit late and I do love your tutorials, @somada141. This should work too.
import vtk

def numpy2VTK(img, spacing=[1.0, 1.0, 1.0]):
    # evolved from code from Stou S.,
    # on http://www.siafoo.net/snippet/314
    # This function, as the name suggests, converts a numpy array to VTK
    importer = vtk.vtkImageImport()

    img_data = img.astype('uint8')
    img_string = img_data.tostring()  # unsigned char bytes
    dim = img.shape

    importer.CopyImportVoidPointer(img_string, len(img_string))
    importer.SetDataScalarType(vtk.VTK_UNSIGNED_CHAR)
    importer.SetNumberOfScalarComponents(1)

    extent = importer.GetDataExtent()
    importer.SetDataExtent(extent[0], extent[0] + dim[2] - 1,
                           extent[2], extent[2] + dim[1] - 1,
                           extent[4], extent[4] + dim[0] - 1)
    importer.SetWholeExtent(extent[0], extent[0] + dim[2] - 1,
                            extent[2], extent[2] + dim[1] - 1,
                            extent[4], extent[4] + dim[0] - 1)

    importer.SetDataSpacing(spacing[0], spacing[1], spacing[2])
    importer.SetDataOrigin(0, 0, 0)

    return importer
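To actually write the imported data to disk, you could connect the returned importer to a writer; a minimal usage sketch, assuming an 8-bit array and an illustrative filename:
import numpy as np
import vtk

img = (np.random.random((10, 10, 10)) * 255).astype('uint8')
importer = numpy2VTK(img)

# Legacy .vtk output; the importer's output is a vtkImageData.
writer = vtk.vtkStructuredPointsWriter()
writer.SetFileName('imported.vtk')
writer.SetInputConnection(importer.GetOutputPort())
writer.Write()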
Hope it helps!
Here's a SimpleITK version with the function load_itk taken from here:
import sys

import SimpleITK as sitk
import numpy as np

if len(sys.argv) < 3:
    print('Wrong number of arguments.', file=sys.stderr)
    print('Usage: ' + __file__ + ' input_sitk_file' + ' output_sitk_file', file=sys.stderr)
    sys.exit(1)
def quick_read(filename):
# Read image information without reading the bulk data.
file_reader = sitk.ImageFileReader()
file_reader.SetFileName(filename)
file_reader.ReadImageInformation()
print('image size: {0}\nimage spacing: {1}'.format(file_reader.GetSize(), file_reader.GetSpacing()))
# Some files have a rich meta-data dictionary (e.g. DICOM)
for key in file_reader.GetMetaDataKeys():
print(key + ': ' + file_reader.GetMetaData(key))
def load_itk(filename):
# Reads the image using SimpleITK
itkimage = sitk.ReadImage(filename)
# Convert the image to a numpy array first and then shuffle the dimensions to get axis in the order z,y,x
data = sitk.GetArrayFromImage(itkimage)
# Read the origin of the ct_scan, will be used to convert the coordinates from world to voxel and vice versa.
origin = np.array(list(reversed(itkimage.GetOrigin())))
# Read the spacing along each dimension
spacing = np.array(list(reversed(itkimage.GetSpacing())))
return data, origin, spacing
def convert(data, output_filename):
image = sitk.GetImageFromArray(data)
writer = sitk.ImageFileWriter()
writer.SetFileName(output_filename)
writer.Execute(image)
def wait():
print('Press Enter to load & convert or exit using Ctrl+C')
input()
quick_read(sys.argv[1])
print('-'*20)
wait()
data, origin, spacing = load_itk(sys.argv[1])
convert(data, sys.argv[2])