I'm saving grayscale images in TFRecord files. The idea is then to color map them on my GPU (only using TF, of course) so they get three channels (they are going to be used with a pre-trained VGG-16 model, which requires three-channel input).
Does anyone have any idea how to do this properly?
I tried to do it with my homemade TF color-mapping script, using for-loops, tf.scatter_nd and a mapping array with shape = (256, 3)... but it took forever.
EDIT:
img_rgb = GRAY SCALE IMAGE WITH 3 CHANNELS
cmp = [[255,255,255],
       [255,255,253],
       [255,254,250],
       [255,254,248],
       [255,254,245],
       ...
       [4,0,0],
       [0,0,0]]
cmp = tf.convert_to_tensor(cmp, tf.int32)  # (256, 3)
hot = tf.zeros([224,224,3], tf.int32)
for i in range(img_rgb.shape[2]):
    for j in range(img_rgb.shape[1]):
        for k in range(img_rgb.shape[0]):
            indices = tf.constant([[k,j,i]])
            updates = tf.Variable([cmp[img_rgb[k,j,i],i]])
            shape = tf.constant([256, 3])
            hot = tf.scatter_nd(indices, updates, shape)
This was my attempt. I know it's not optimal in any way, but it was the only solution I could come up with.
Thanks to this work by jimfleming: https://gist.github.com/jimfleming/c1adfdb0f526465c99409cc143dea97b
import matplotlib
import matplotlib.cm
import tensorflow as tf
def colorize(value, vmin=None, vmax=None, cmap=None):
    """
    A utility function for TensorFlow that maps a grayscale image to a matplotlib
    colormap for use with TensorBoard image summaries.
    Arguments:
      - value: 2D Tensor of shape [height, width] or 3D Tensor of shape
        [height, width, 1].
      - vmin: the minimum value of the range used for normalization.
        (Default: value minimum)
      - vmax: the maximum value of the range used for normalization.
        (Default: value maximum)
      - cmap: a valid cmap name for use with matplotlib's `get_cmap`.
        (Default: 'gray')
    Example usage:
    ```
    output = tf.random_uniform(shape=[256, 256, 1])
    output_color = colorize(output, vmin=0.0, vmax=1.0, cmap='plasma')
    tf.summary.image('output', output_color)
    ```
    Returns a 3D tensor of shape [height, width, 3].
    """
    # normalize to the 0..1 range
    vmin = tf.reduce_min(value) if vmin is None else vmin
    vmax = tf.reduce_max(value) if vmax is None else vmax
    value = (value - vmin) / (vmax - vmin)
    # squeeze last dim if it exists
    value = tf.squeeze(value)
    # quantize to 256 levels
    indices = tf.to_int32(tf.round(value * 255))
    # gather rows of the colormap lookup table
    cm = matplotlib.cm.get_cmap(cmap if cmap is not None else 'gray')
    colors = tf.constant(cm.colors, dtype=tf.float32)
    value = tf.gather(colors, indices)
    return value
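One caveat worth noting: `cm.colors` only exists on ListedColormap objects (e.g. 'viridis', 'plasma'); segmented colormaps, including the default 'gray', have no such attribute. A small sketch of a lookup table that should work for any named colormap:
import numpy as np
import matplotlib.cm
import tensorflow as tf

# Sample the colormap at 256 points and drop the alpha channel, giving a
# (256, 3) table usable with tf.gather even for segmented maps like 'jet'.
cm = matplotlib.cm.get_cmap('jet')
lut = cm(np.linspace(0.0, 1.0, 256))[:, :3]
colors = tf.constant(lut, dtype=tf.float32)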
You could also try tf.image.grayscale_to_rgb, although that effectively offers only one choice of color map: gray.
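For instance (a minimal sketch; grayscale_to_rgb simply replicates the single channel three times, which may already be enough for a VGG-16 input):
import tensorflow as tf

# Replicate a single grayscale channel into three identical channels.
gray = tf.random_uniform([224, 224, 1])
rgb = tf.image.grayscale_to_rgb(gray)  # shape (224, 224, 3)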
We're here to help. If everyone wrote optimal code, there would be no need for Stack Overflow. :)
Here's how I would do it in place of the last 7 lines (untested code):
conv_img = tf.gather(params=cmp, indices=img_rgb[:, :, 0])
Basically, there's no need for the for-loops: TensorFlow will do that for you, and much more quickly. tf.gather() collects elements from cmp according to the indices provided, which here are the values of the 0th channel of img_rgb. Each collected element has the three channels from cmp, so putting them all together forms an image.
I don't have time to test right now, gotta run, sorry. Hope it works.
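For reference, a self-contained sketch of the tf.gather approach (the colormap and image here are random placeholders, not your data):
import numpy as np
import tensorflow as tf

# Illustrative (256, 3) colormap and a grayscale image stored with three
# identical channels, mirroring the setup in the question.
cmp = tf.constant(np.random.randint(0, 256, size=(256, 3)), dtype=tf.int32)
img_rgb = tf.constant(np.random.randint(0, 256, size=(224, 224, 3)), dtype=tf.int32)

# One gather replaces the three nested loops: every intensity in channel 0
# indexes a row of the colormap, producing a (224, 224, 3) color image.
conv_img = tf.gather(params=cmp, indices=img_rgb[:, :, 0])

with tf.Session() as sess:
    print(sess.run(conv_img).shape)  # (224, 224, 3)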
I'd like to know if it is possible to write the following code using PyTorch, where dtype = torch.cuda.FloatTensor. Here's the code in straight Python (using NumPy); basically I want to get the value of x that produces the minimum value of fitness.
import numpy as np

xmax, xmin = 5, -5
pop = 30
x = (xmax - xmin) * np.random.rand(pop, 1) + xmin
y = x**2
minz, indexmin = np.amin(y), np.argmin(y)
best = x[indexmin]
This is my attempt to do it:
import torch
dtype = torch.cuda.FloatTensor

def fit(x):
    return x**2

def main():
    pop = 30
    xmax, xmin = 5, -5
    x = (xmax - xmin) * torch.rand(pop, 1).type(dtype) + xmin
    y = fit(x)
    miny, indexmin = torch.min(y, 0)
    best = x[indexmin]

main()
The last part, where I define the variable best as the value of x whose index equals indexmin, doesn't work. What am I doing wrong here?
The following message appears: RuntimeError:
expecting vector of indices at /opt/conda/conda-bld/pytorch_1501971235237/work/pytorch-0.1.12/torch/lib/THC/generic/THCTensorIndex.cu:405
You can simply do as follows.
import torch
dtype = torch.cuda.FloatTensor

def main():
    pop, xmax, xmin = 30, 5, -5
    x = (xmax - xmin) * torch.rand(pop, 1).type(dtype) + xmin
    y = torch.pow(x, 2)
    miny, indexmin = y.min(0)
    best = x.squeeze()[indexmin]  # squeeze x to make it 1d

main()
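On more recent PyTorch versions (0.4 and later, as an assumption about your setup), torch.argmin does the same job in one step, and the squeeze becomes unnecessary if x is created 1-D:
import torch

# Sketch with modern PyTorch: a 1-D population avoids the indexing issue.
# Add x = x.cuda() if the computation should run on the GPU.
pop, xmax, xmin = 30, 5, -5
x = (xmax - xmin) * torch.rand(pop) + xmin
y = x**2
best = x[torch.argmin(y)]  # value of x that minimizes the fitness
print(best)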
I am trying to introduce a random 90-degree rotation to images as part of a training data pipeline. However, when I try to populate the k parameter of tf.image.rot90() with a scalar tensor, I get the following error:
TypeError: Fetch argument None has invalid type <class 'NoneType'>.
The function works as expected when k is a Python variable. The following demonstrates the problem:
import tensorflow as tf
import random
import numpy as np
from matplotlib import pyplot as plt

with tf.Session() as sess:
    image = np.reshape(np.arange(0., 4.), [2, 2, 1])
    print(image.shape)

    # this works
    k = random.randint(0, 3)
    print('k = ' + str(k))

    # this gives an error
    # k = random.randint(0, 3)
    # k = tf.convert_to_tensor(k, dtype=tf.int32)
    # k = tf.Print(k, [k], 'k = ')

    # this gives an error
    # k = tf.random_uniform([], minval=0, maxval=4, dtype=tf.int32)
    # k = tf.Print(k, [k], 'k = ')

    image2 = tf.image.rot90(image, k)
    img2 = sess.run(image2)

    plt.figure()
    plt.subplot(121)
    plt.imshow(np.squeeze(image), interpolation='nearest')
    plt.subplot(122)
    plt.imshow(np.squeeze(img2), interpolation='nearest')
    plt.show()
Is there a way to set k to a random value as part of the training pipeline? Or is this a bug in tf.image.rot90()?
The current implementation of tf.image.rot90() has a bug: if you pass a value that is not a Python integer, it will not return any value. I created an issue about this, and will get a fix in soon. In general, you should be able to draw a random integer scalar for k, but the current implementation isn't general enough to support that.
You could try using tf.case() to implement it yourself, but I intend to implement that in the fix, so it might be easier to wait :-).
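In the meantime, a workaround along these lines should be possible with tf.case() (an untested sketch, not the eventual fix):
import tensorflow as tf

image = tf.placeholder(tf.float32, shape=[None, None, 1])
k = tf.random_uniform([], minval=0, maxval=4, dtype=tf.int32)

# Dispatch on the runtime value of k; each branch calls rot90 with a
# Python integer, which the current implementation handles fine.
rotated = tf.case(
    [(tf.equal(k, 1), lambda: tf.image.rot90(image, 1)),
     (tf.equal(k, 2), lambda: tf.image.rot90(image, 2)),
     (tf.equal(k, 3), lambda: tf.image.rot90(image, 3))],
    default=lambda: image,
    exclusive=True)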
My goal is to generate a rotation matrix based on a rotation variable, theta.
Here's my code so far:
initial = 0.0
theta = tf.Variable(initial_value=initial, name='theta')
sin = tf.sin(theta)
cos = tf.cos(theta)
rot_matrix = tf.constant([[cos, -sin, 0], [sin, cos, 0]])
The above gives TypeError: List of Tensors when single Tensor expected for the fifth line. I'm getting this because cos and sin are tensors, but I can't find any way to extract a value from a tensor (only ways of extracting sub-tensors from tensors, with tf.slice()).
How can I properly create the rotation matrix?
You could make it a list of tensors and fetch that. Right now you have a mix of tensors and numbers, which you cannot fetch as is.
initial = 0.0
theta = tf.Variable(initial_value=initial, name='theta')
sin = tf.sin(theta)
cos = tf.cos(theta)
rot_matrix = [[cos, -sin, tf.constant(0)], [sin, cos, tf.constant(0)]]
sess = tf.Session()
sess.run(tf.initialize_all_variables())
sess.run(rot_matrix)
Alternatively you could turn it into a single Tensor using tf.pack(), which converts numbers (and lists and arrays of numbers) to tensors automatically.
rot_matrix = tf.pack([[cos, -sin, 0], [sin, cos, 0]])
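Note that from TensorFlow 1.0 onward tf.pack() is named tf.stack(), with the same semantics; a minimal sketch under that assumption:
import tensorflow as tf

theta = tf.Variable(0.0, name='theta')
sin, cos = tf.sin(theta), tf.cos(theta)

# tf.stack autoconverts the plain Python floats alongside the tensors.
rot_matrix = tf.stack([[cos, -sin, 0.0], [sin, cos, 0.0]])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(rot_matrix))  # approximately [[1., -0., 0.], [0., 1., 0.]]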
I have a masked array which is used by matplotlib's plt.contourf to project a temperature contour onto a global map. I was trying to smooth the contour, but unfortunately none of the proposed solutions seems able to handle masked arrays. I tested these solutions:
- scipy.ndimage.gaussian_filter
- moving averages
- scipy.ndimage.zoom
None of them works (they count in the masked values as well). Is there any way I can smooth my contour on a masked array?
I have added this part after trying the proposed 'inpaint' solution, and the results were unchanged. Here is the code (in case it helps):
import Scientific.IO.NetCDF as S
import mpl_toolkits.basemap as bm
import numpy.ma as MA
import numpy as np
import matplotlib.pyplot as plt
import inpaint

def main():
    fileobj = S.NetCDFFile('Bias.ANN.tas_A1_1.nc', mode='r')

    # take the values
    set1 = {'time', 'lat', 'lon'}
    set2 = set(fileobj.variables.keys())
    set3 = set2 - set1
    datadim = set3.pop()
    print "******************datadim: " + datadim
    data = fileobj.variables[datadim].getValue()[0,:,:]
    lon = fileobj.variables['lon'].getValue()
    lat = fileobj.variables['lat'].getValue()
    fileobj.close()

    data, lon = bm.shiftgrid(180., data, lon, start=False)
    data = MA.masked_equal(data, 1.0e20)
    #data2 = inpaint.replace_nans(data, 10, 0.25, 2, 'idw')

    #- Make 2-D longitude and latitude arrays:
    [lon2d, lat2d] = np.meshgrid(lon, lat)

    #- Set up map:
    mapproj = bm.Basemap(projection='cyl',
                         llcrnrlat=-90.0, llcrnrlon=-180.00,
                         urcrnrlat=90.0, urcrnrlon=180.0)
    mapproj.drawcoastlines(linewidth=.5)
    mapproj.drawmapboundary(fill_color='.8')
    #mapproj.drawparallels(np.array([-90, -45, 0, 45, 90]), labels=[1,0,0,0])
    #mapproj.drawmeridians(np.array([0, 90, 180, 270, 360]), labels=[0,0,0,1])
    lonall, latall = mapproj(lon2d, lat2d)

    cmap = plt.cm.Spectral

    #- Make a contour plot of the temperature:
    mymapf = plt.contourf(lonall, latall, data, 20, cmap=cmap)
    #plt.clabel(mymapf, fontsize=12)
    plt.title(cmap.name)
    plt.colorbar(mymapf, orientation='horizontal')
    plt.savefig('sample2.png', dpi=150, edgecolor='red', format='png',
                bbox_inches='tight', pad_inches=.2)
    plt.close()

if __name__ == "__main__":
    main()
I am comparing the output of this code (the first figure) with the output for the same data file from Panoply. Zooming in and looking more closely, it seems the issue is not smoothness: the pyplot version is one stripe slimmer, or its contours are cut earlier (the outer boundaries show this clearly, and the inner contours differ as a consequence). This makes the pyplot version look less smooth than the Panoply one. How can I get (nearly) the same plot? Am I reading this right?
I had a similar problem and Google pointed me to this: blog post. Basically it uses an inpaint algorithm to interpolate missing values and produce a valid array for filtering.
The code is at the end of the post; you can save it to site-packages (or elsewhere) and load it as a module (i.e. inpaint.py):
import inpaint
filled = inpaint.replace_nans(NANMask, 5, 0.5, 2, 'idw')
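One thing to watch: replace_nans looks for NaN elements, not for a masked array's mask, so a masked array needs converting first (a sketch, assuming data is the masked array from the question and inpaint is imported as above):
import numpy as np

# Turn masked points into NaNs so replace_nans treats them as missing.
NANMask = data.astype(float).filled(np.nan)
filled = inpaint.replace_nans(NANMask, 5, 0.5, 2, 'idw')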
I'm happy with the result, and I guess it will suit missing temperature values just fine. There is also a next version here: github, but the code will need some cleaning for general usage, as it's part of a project.
For reference, ease of use, and preservation's sake, I'll post the code (of the initial version) here:
# -*- coding: utf-8 -*-
"""A module for various utilities and helper functions"""
import numpy as np
#cimport numpy as np
#cimport cython
DTYPEf = np.float64
#ctypedef np.float64_t DTYPEf_t
DTYPEi = np.int32
#ctypedef np.int32_t DTYPEi_t
#@cython.boundscheck(False) # turn off bounds-checking for entire function
#@cython.wraparound(False)  # turn off wraparound-checking for entire function
def replace_nans(array, max_iter, tol, kernel_size=1, method='localmean'):
    """Replace NaN elements in an array using an iterative image inpainting algorithm.
    The algorithm is the following:
    1) For each element in the input array, replace it by a weighted average
       of the neighbouring elements which are not NaN themselves. The weights
       depend on the method type. If ``method=localmean``, weights are equal
       to 1/( (2*kernel_size+1)**2 - 1 ).
    2) Several iterations are needed if there are adjacent NaN elements.
       If this is the case, information is "spread" from the edges of the
       missing regions iteratively, until the variation is below a certain
       threshold.
    Parameters
    ----------
    array : 2d np.ndarray
        an array containing NaN elements that have to be replaced
    max_iter : int
        the number of iterations
    tol : float
        stop iterating once the mean square difference between successive
        replaced values falls below this tolerance
    kernel_size : int
        the size of the kernel, default is 1
    method : str
        the method used to replace invalid values. Valid options are
        `localmean`, `idw`.
    Returns
    -------
    filled : 2d np.ndarray
        a copy of the input array, where NaN elements have been replaced.
    """
    # cdef int i, j, I, J, it, n, k, l
    # cdef int n_invalids
    filled = np.empty([array.shape[0], array.shape[1]], dtype=DTYPEf)
    kernel = np.empty((2*kernel_size+1, 2*kernel_size+1), dtype=DTYPEf)
    # cdef np.ndarray[np.int_t, ndim=1] inans
    # cdef np.ndarray[np.int_t, ndim=1] jnans

    # indices where array is NaN
    inans, jnans = np.nonzero(np.isnan(array))

    # number of NaN elements
    n_nans = len(inans)

    # arrays which contain replaced values to check for convergence
    replaced_new = np.zeros(n_nans, dtype=DTYPEf)
    replaced_old = np.zeros(n_nans, dtype=DTYPEf)

    # depending on kernel type, fill kernel array
    if method == 'localmean':
        print 'kernel_size', kernel_size
        for i in range(2*kernel_size+1):
            for j in range(2*kernel_size+1):
                kernel[i, j] = 1
        print kernel, 'kernel'
    elif method == 'idw':
        # symmetric inverse-distance-weighting kernel
        kernel = np.array([[0,   0.5,  0.5,  0.5,  0],
                           [0.5, 0.75, 0.75, 0.75, 0.5],
                           [0.5, 0.75, 1,    0.75, 0.5],
                           [0.5, 0.75, 0.75, 0.75, 0.5],
                           [0,   0.5,  0.5,  0.5,  0]])
        print kernel, 'kernel'
    else:
        raise ValueError('method not valid. Should be one of `localmean`, `idw`.')

    # fill new array with input elements
    for i in range(array.shape[0]):
        for j in range(array.shape[1]):
            filled[i, j] = array[i, j]

    # make several passes until we reach convergence
    for it in range(max_iter):
        print 'iteration', it
        # for each NaN element
        for k in range(n_nans):
            i = inans[k]
            j = jnans[k]

            # initialize to zero
            filled[i, j] = 0.0
            n = 0

            # loop over the kernel
            for I in range(2*kernel_size+1):
                for J in range(2*kernel_size+1):
                    # if we are not out of the boundaries
                    if 0 <= i+I-kernel_size < array.shape[0]:
                        if 0 <= j+J-kernel_size < array.shape[1]:
                            # if the neighbour element is not NaN itself
                            # (NaN != NaN, so self-equality detects NaN)
                            if filled[i+I-kernel_size, j+J-kernel_size] == filled[i+I-kernel_size, j+J-kernel_size]:
                                # do not sum the centre point itself
                                if I-kernel_size != 0 or J-kernel_size != 0:
                                    # convolve kernel with original array
                                    filled[i, j] = filled[i, j] + filled[i+I-kernel_size, j+J-kernel_size]*kernel[I, J]
                                    n = n + 1*kernel[I, J]

            # divide value by effective number of added elements
            if n != 0:
                filled[i, j] = filled[i, j] / n
                replaced_new[k] = filled[i, j]
            else:
                filled[i, j] = np.nan

        # check if the mean square difference between values of replaced
        # elements is below a certain tolerance
        print 'tolerance', np.mean((replaced_new - replaced_old)**2)
        if np.mean((replaced_new - replaced_old)**2) < tol:
            break
        else:
            for l in range(n_nans):
                replaced_old[l] = replaced_new[l]

    return filled
def sincinterp(image, x, y, kernel_size=3):
    """Re-sample an image at intermediate positions between pixels.
    This function uses a cardinal interpolation formula which limits
    the loss of information in the resampling process. It uses a limited
    number of neighbouring pixels.
    The new image :math:`im^+` at fractional locations :math:`x` and :math:`y`
    is computed as:
    .. math::
        im^+(x,y) = \sum_{i=-\mathtt{kernel\_size}}^{i=\mathtt{kernel\_size}}
                    \sum_{j=-\mathtt{kernel\_size}}^{j=\mathtt{kernel\_size}}
                    \mathtt{image}(i,j) \,
                    \frac{\sin[\pi(i-\mathtt{x})]}{\pi(i-\mathtt{x})} \,
                    \frac{\sin[\pi(j-\mathtt{y})]}{\pi(j-\mathtt{y})}
    Parameters
    ----------
    image : np.ndarray, dtype np.int32
        the image array.
    x : two dimensions np.ndarray of floats
        an array containing fractional pixel row
        positions at which to interpolate the image
    y : two dimensions np.ndarray of floats
        an array containing fractional pixel column
        positions at which to interpolate the image
    kernel_size : int
        interpolation is performed over a ``(2*kernel_size+1)*(2*kernel_size+1)``
        submatrix in the neighbourhood of each interpolation point.
    Returns
    -------
    im : np.ndarray, dtype np.float64
        the interpolated value of ``image`` at the points specified
        by ``x`` and ``y``
    """
    # indices
    # cdef int i, j, I, J

    # the output array
    r = np.zeros([x.shape[0], x.shape[1]], dtype=DTYPEf)
    pi = np.pi
    # for each point of the output array
    for I in range(x.shape[0]):
        for J in range(x.shape[1]):
            # loop over all neighbouring grid points
            for i in range(int(x[I, J]) - kernel_size, int(x[I, J]) + kernel_size + 1):
                for j in range(int(y[I, J]) - kernel_size, int(y[I, J]) + kernel_size + 1):
                    # check that we are inside the image boundaries
                    if 0 <= i < image.shape[0] and 0 <= j < image.shape[1]:
                        if (i - x[I, J]) == 0.0 and (j - y[I, J]) == 0.0:
                            r[I, J] = r[I, J] + image[i, j]
                        elif (i - x[I, J]) == 0.0:
                            r[I, J] = r[I, J] + image[i, j] * np.sin(pi*(j - y[I, J]))/(pi*(j - y[I, J]))
                        elif (j - y[I, J]) == 0.0:
                            r[I, J] = r[I, J] + image[i, j] * np.sin(pi*(i - x[I, J]))/(pi*(i - x[I, J]))
                        else:
                            r[I, J] = r[I, J] + image[i, j] * np.sin(pi*(i - x[I, J]))*np.sin(pi*(j - y[I, J]))/(pi*pi*(i - x[I, J])*(j - y[I, J]))
    return r
#cdef extern from "math.h":
# double sin(double)
A simple smoothing function that works with masked data will solve this. One can then avoid the approaches that involve making up data (i.e. interpolating, inpainting, etc.); and making up data should always be avoided.
The main issue that arises when smoothing masked data is that, for each point, smoothing uses the neighboring values to calculate a new value at the center point, but when those neighbors are masked, the new value for the center point also becomes masked due to the rules of masked arrays. Therefore, one needs to do the calculation with unmasked data while explicitly accounting for the mask. That's easy to do, and is done in the function smooth below.
from numpy import *
import pylab as plt

# make a grid and a striped mask as test data
N = 100
x = linspace(0, 5, N, endpoint=True)
grid = 2. + 1.*(sin(2*pi*x)[:, newaxis]*sin(2*pi*x) > 0.)
m = resize((sin(pi*x) > 0), (N, N))

plt.imshow(grid.copy(), cmap='jet', interpolation='nearest')
plt.colorbar()
plt.title('original data')

def smooth(u, mask):
    m = ~mask
    r = u*m  # set all 'masked' points to 0. so they aren't used in the smoothing
    a = 4*r[1:-1,1:-1] + r[2:,1:-1] + r[:-2,1:-1] + r[1:-1,2:] + r[1:-1,:-2]
    b = 4*m[1:-1,1:-1] + m[2:,1:-1] + m[:-2,1:-1] + m[1:-1,2:] + m[1:-1,:-2]  # a divisor that accounts for masked points
    b[b==0] = 1.  # avoid divide-by-0 error (the region is masked so the value doesn't matter)
    u[1:-1,1:-1] = a/b

# run the data through the smoothing filter a few times
for i in range(10):
    smooth(grid, m)

mg = ma.array(grid, mask=m)  # put together the mask and the data

plt.figure()
plt.imshow(mg, cmap='jet', interpolation='nearest')
plt.colorbar()
plt.title('smoothed with mask')
plt.show()
The main point is that at the boundary of the mask, the masked values are not used in the smoothing. (This is also where the grid squares switch values, so it would be clear in the figure if the masked neighboring values were being used.)
We also just had this problem and the astropy package has us covered:
import numpy as np
import matplotlib.pyplot as plt
# Some Axes
x = np.arange(100)
y = np.arange(100)
#Some Interesting Shape
z = np.array(np.outer(np.sin((x+y)/10),np.sin(y/3)),dtype=float)
# some mask
mask = np.outer(np.sin((x+y)/20),np.sin(y/5))**2>.9
# masked data represent noise, so lets put in some trash into the masked points
z[mask] = (np.random.random(size = (100,100))*10)[mask]
# masked data
z_masked = np.ma.masked_array(z, mask)
# "Conventional" filter
filter_kernelsize = 2
import scipy.ndimage
z_filtered_bad = scipy.ndimage.gaussian_filter(z_masked,filter_kernelsize)
# Let's filter it with astropy's convolution, which interpolates over
# masked/NaN points instead of propagating them
from astropy.convolution import convolve, Gaussian2DKernel
k = Gaussian2DKernel(1.5)
z_filtered = convolve(z_masked, k, boundary='extend')
### Plots:
fig, axes = plt.subplots(2,2)
plt.sca(axes[0,0])
plt.title('Raw Data')
plt.imshow(z)
plt.colorbar()
plt.sca(axes[0,1])
plt.title('Raw Data Masked')
plt.imshow(z_masked)
plt.colorbar()
plt.sca(axes[1,0])
plt.title('ndimage filter (ignores mask)')
plt.imshow(z_filtered_bad)
plt.colorbar()
plt.sca(axes[1,1])
plt.title('astropy filter (uses mask)')
plt.imshow(z_filtered)
plt.colorbar()
plt.tight_layout()
(Figure: output plot of the code, showing the raw data, the masked data, the ndimage filter that ignores the mask, and the astropy filter that uses it.)