How to add a colormap renderer to a classified single-band QgsRasterLayer with PyQGIS

I'm trying to add a colormap to a TMS service that serves single-band PNGs with values ranging from 1 to 11. At the moment the layer renders in black (low values between 1 and 11), but I would like each of the 11 values to render with a specific color. This is for a QGIS plugin that adds a layer to the map.
Here is a sample of my code, any help would be very much appreciated!
# Create rlayer
urlWithParams = 'type=xyz&url=https://bucket_name.s3.ca-central-1.amazonaws.com/{z}/{x}/{-y}.png&zmax=19&zmin=0&crs=EPSG:3857'
layerName = 'Classified image'
rlayer = QgsRasterLayer(urlWithParams, layerName, 'wms')
# One of my attempts to create the renderer
fcn = QgsColorRampShader()
fcn.setColorRampType(QgsColorRampShader.Discrete)
lst = [QgsColorRampShader.ColorRampItem(1, QColor(0, 255, 0)),
       QgsColorRampShader.ColorRampItem(2, QColor(65, 123, 23)),
       QgsColorRampShader.ColorRampItem(3, QColor(123, 76, 34)),
       QgsColorRampShader.ColorRampItem(4, QColor(45, 234, 223)),
       QgsColorRampShader.ColorRampItem(5, QColor(90, 134, 23)),
       QgsColorRampShader.ColorRampItem(6, QColor(45, 255, 156)),
       QgsColorRampShader.ColorRampItem(7, QColor(245, 23, 123)),
       QgsColorRampShader.ColorRampItem(8, QColor(233, 167, 87)),
       QgsColorRampShader.ColorRampItem(9, QColor(123, 125, 23)),
       QgsColorRampShader.ColorRampItem(10, QColor(213, 231, 123)),
       QgsColorRampShader.ColorRampItem(11, QColor(255, 255, 0))]
fcn.setColorRampItemList(lst)
shader = QgsRasterShader()
shader.setRasterShaderFunction(fcn)
renderer = QgsSingleBandColorDataRenderer(rlayer.dataProvider(), 1, shader)
rlayer.setRenderer(renderer)
rlayer.triggerRepaint()
# Add rendered layer to QGIS map
QgsProject.instance().addMapLayer(rlayer)
It looks like the type of renderer is QgsSingleBandColorDataRenderer. Any idea how to make this work? Thanks!
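For what it's worth, a shader like this is usually attached through QgsSingleBandPseudoColorRenderer rather than QgsSingleBandColorDataRenderer. The sketch below shows that wiring, assuming `rlayer` is the layer from the question; note that XYZ/WMS layers are typically delivered as pre-rendered RGBA tiles rather than raw band values, in which case a band shader may have no visible effect at all (which could be why the layer renders black).

```python
from qgis.core import (QgsColorRampShader, QgsRasterShader,
                       QgsSingleBandPseudoColorRenderer)
from qgis.PyQt.QtGui import QColor

# One entry per class value; 'Exact' maps each value to exactly one colour
fcn = QgsColorRampShader()
fcn.setColorRampType(QgsColorRampShader.Exact)
fcn.setColorRampItemList([
    QgsColorRampShader.ColorRampItem(1, QColor(0, 255, 0), '1'),
    QgsColorRampShader.ColorRampItem(11, QColor(255, 255, 0), '11'),
    # ... remaining class values ...
])

shader = QgsRasterShader()
shader.setRasterShaderFunction(fcn)

# Pseudo-color renderer passes band 1 values through the shader
renderer = QgsSingleBandPseudoColorRenderer(rlayer.dataProvider(), 1, shader)
rlayer.setRenderer(renderer)
rlayer.triggerRepaint()
```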

in __getattr__ raise AttributeError(name) AttributeError: shape

I'm creating tissue masks for a bunch of pathology images, and in one of the preparation steps I had to change the black pixels to white.
My code works for one image, but when I apply it to the image files in a directory I receive this error:
I don't understand the error and don't know how to solve it.
File "/Users/sepideh/Library/CloudStorage/GoogleDrive-.../My Drive/Remove_empty_pixels/Remove_empty_pixels.py", line 108, in <module>
height, width, _ = img.shape
File "/Users/sepideh/opt/anaconda3/envs/myenv/lib/python3.9/site-packages/PIL/Image.py", line 529, in __getattr__
raise AttributeError(name)
AttributeError: shape
and this is my code :
height, width, _ = img.shape
white_px = np.asarray([255, 255, 255])
black_px = np.asarray([0, 0, 0])
img2 = np.array(img, copy=True)
for i in range(height):
    for j in range(width):
        px = img[i][j]
        if all(px == black_px):
            img2[i][j] = white_px
I want to understand the reason for this error and a solution for it.
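The error happens because a PIL `Image` object has no `.shape` attribute; that attribute belongs to NumPy arrays, so the code works only when `img` happens to already be an array. A minimal sketch of the fix, using a small synthetic image as a stand-in for one of the pathology images:

```python
from PIL import Image
import numpy as np

# Hypothetical stand-in for one of the loaded images (all black)
img = Image.new("RGB", (4, 3), color=(0, 0, 0))

arr = np.array(img)           # convert the PIL Image to a NumPy array first
height, width, _ = arr.shape  # now .shape exists

# Replace black pixels with white, vectorized instead of the double loop
mask = np.all(arr == [0, 0, 0], axis=-1)
arr2 = arr.copy()
arr2[mask] = [255, 255, 255]
```

The vectorized mask is also much faster than looping over every pixel, which matters when processing a whole directory of images.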

Reshape unbalanced data from wide to long using reshape function

I am currently working on longitudinal data and trying to reshape the data from the wide format to the long. The naming pattern of the time-varying variables is r*variable (for example, height data collected in wave 1 is r1height). The identifiers are hhid (household id) and pn (person id). The data itself is unbalanced. Some variables are observed from first wave to last wave, but others are only observed from the middle of the study (i.e., wave 3 to 5).
I have already reshaped the data using merged.stack from the splitstackshape package (see the code below).
df <- data.frame(hhid = c("10001", "10002", "10003", "10004"),
                 pn = c("001", "001", "001", "002"),
                 r1weight = c(56, 76, 87, 64),
                 r2weight = c(57, 75, 88, 66),
                 r3weight = c(56, 76, 87, 65),
                 r4weight = c(78, 99, 23, 32),
                 r5weight = c(55, 77, 84, 65),
                 r1height = c(151, 163, 173, 153),
                 r2height = c(154, 164, NA, 154),
                 r3height = c(NA, 165, NA, 152),
                 r4height = c(153, 162, 172, 154),
                 r5height = c(152, 161, 171, 154),
                 r3bmi = c(22, 23, 24, 25),
                 r4bmi = c(23, 24, 20, 19),
                 r5bmi = c(21, 14, 22, 19))
library(splitstackshape)
# Merge stack (this is what I want)
long1 <- merged.stack(df, id.vars = c("hhid", "pn"),
                      var.stubs = c("weight", "height", "bmi"),
                      sep = "var.stubs", atStart = F, keep.all = FALSE)
Now I want to know if I can use the "reshape" function to get the same results. I have tried, but failed: the reshape call shown below returns bizarre longitudinal data. I suspect the "sep" argument is causing the problem, but I don't know how to specify a pattern for my time-varying variables.
# Reshape (Wrong results)
library(reshape)
namelist <- names(df)
namelist <- namelist[namelist %in% c("hhid", "pn") == FALSE]
long2 <- reshape(data = df,
                 varying = namelist,
                 sep = "",
                 direction = "long",
                 idvar = c("hhid", "pn"))
Could anyone let me know how to address this problem?
Thanks
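The question is about R, but for readers coming from Python, the same unbalanced wide-to-long reshape can be sketched in pandas (the data frame below is a trimmed, illustrative version of the example): melt everything, split the `r<wave><stub>` naming pattern with a regex, then pivot the stubs back out into columns. Waves where a variable was not collected simply come out as missing values.

```python
import pandas as pd

# Trimmed stand-in for the question's data: weight in waves 1-2, bmi only in wave 3
df = pd.DataFrame({
    "hhid": ["10001", "10002"],
    "pn": ["001", "001"],
    "r1weight": [56, 76],
    "r2weight": [57, 75],
    "r3bmi": [22, 23],
})

# Wide -> fully long, one row per (id, variable)
long = df.melt(id_vars=["hhid", "pn"])

# Split "r<wave><stub>" into a wave number and a variable stub
long[["wave", "stub"]] = long["variable"].str.extract(r"r(\d+)(\w+)")

# Pivot the stubs back out: one row per (hhid, pn, wave)
out = (long.pivot_table(index=["hhid", "pn", "wave"],
                        columns="stub", values="value")
           .reset_index())
```

This avoids the stub-position problem entirely, since the regex (rather than a fixed separator) decides where the wave index ends and the variable name begins.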

How to set custom color ranges on deck.gl hexagon layer?

I want to assign colors to predefined ranges, e.g. red for 100-200 and blue for <100, with no success.
I can set the colors but not custom ranges.
How can I do this, or at least how does deck.gl quantize the color scale?
You can do something like this.
First of all, import scaleThreshold from the d3-scale package:
import { scaleThreshold } from 'd3-scale';
Now define your scale color function:
const colorScaleFunction = scaleThreshold()
  .domain([1, 2, 3, 4, 5, 6])
  .range([
    [65, 182, 196],
    [254, 178, 76],
    [253, 141, 60],
    [252, 78, 42],
    [227, 26, 28],
    [189, 0, 38],
  ]);
Then define your layer, for example a GeoJson layer.
Start with importing it:
import { GeoJsonLayer } from '@deck.gl/layers';
then define layer:
const geoJsonLayer = new GeoJsonLayer({
  id: 'geojson-layer-example',
  data: dataExample, /* just use some data */
  getFillColor: (d) => colorScaleFunction(d.properties.someValue),
  pickable: true,
});
Now you can render it:
return (
  <DeckGL
    layers={[geoJsonLayer]}
    initialViewState={YOUR_INITIAL_VIEW_STATE}
    controller={true}
  >
    <StaticMap
      reuseMaps
      mapStyle={YOUR_MAP_STYLE}
      preventStyleDiffing={true}
      mapboxApiAccessToken={YOUR_MAPBOX_TOKEN}
    />
  </DeckGL>
);
That's all, I think.

Can we visualize the embedding with multiple sprite images in tensorflow?

What I mean is, can I, for example, construct 2 different sprite images and be able to choose one of them while viewing embeddings in 2D/3D space using TSNE/PCA?
In other words, when using the following code:
embedding.sprite.image_path = "Path/to/the/sprite_image.jpg"
Is there a way to add another sprite image?
So, when training a conv net to distinguish between MNIST digits, I not only need to view the 1, 2, ..., 9, and 0 in 2D/3D space; I would also like to see where the 1s cluster in that space, and likewise for the 2s, 3s, and so on. So I need a unique color for the 1s, another for the 2s, etc. I want a view like the following image:
source
Any help is much appreciated!
There is an easier way to do this with filtering: you can just select the labels with a regex syntax.
If this is not what you are looking for, you could create a sprite image that assigns the same plain color image to each of your labels!
This functionality should come out of the box (without additional sprite images). See 'colour by' in the left sidepanel. You can toggle the A to switch sprite images on and off.
This run was produced with the example on the front page of the tensorboardX projector GitHub repo. https://github.com/lanpa/tensorboardX
You can also see a live demo with MNIST dataset (images and colours) at http://projector.tensorflow.org/
import torch
import torchvision.utils as vutils
import numpy as np
import torchvision.models as models
from torchvision import datasets
from tensorboardX import SummaryWriter

resnet18 = models.resnet18(False)
writer = SummaryWriter()
sample_rate = 44100
freqs = [262, 294, 330, 349, 392, 440, 440, 440, 440, 440, 440]

for n_iter in range(100):
    dummy_s1 = torch.rand(1)
    dummy_s2 = torch.rand(1)
    # data grouping by `slash`
    writer.add_scalar('data/scalar1', dummy_s1[0], n_iter)
    writer.add_scalar('data/scalar2', dummy_s2[0], n_iter)
    writer.add_scalars('data/scalar_group', {'xsinx': n_iter * np.sin(n_iter),
                                             'xcosx': n_iter * np.cos(n_iter),
                                             'arctanx': np.arctan(n_iter)}, n_iter)
    dummy_img = torch.rand(32, 3, 64, 64)  # output from network
    if n_iter % 10 == 0:
        x = vutils.make_grid(dummy_img, normalize=True, scale_each=True)
        writer.add_image('Image', x, n_iter)
        dummy_audio = torch.zeros(sample_rate * 2)
        for i in range(dummy_audio.size(0)):
            # amplitude of sound should be in [-1, 1]
            dummy_audio[i] = np.cos(freqs[n_iter // 10] * np.pi * float(i) / float(sample_rate))
        writer.add_audio('myAudio', dummy_audio, n_iter, sample_rate=sample_rate)
        writer.add_text('Text', 'text logged at step:' + str(n_iter), n_iter)
        for name, param in resnet18.named_parameters():
            writer.add_histogram(name, param.clone().cpu().data.numpy(), n_iter)
        # needs tensorboard 0.4RC or later
        writer.add_pr_curve('xoxo', np.random.randint(2, size=100), np.random.rand(100), n_iter)

dataset = datasets.MNIST('mnist', train=False, download=True)
images = dataset.test_data[:100].float()
label = dataset.test_labels[:100]
features = images.view(100, 784)
writer.add_embedding(features, metadata=label, label_img=images.unsqueeze(1))
# export scalar data to JSON for external processing
writer.export_scalars_to_json("./all_scalars.json")
writer.close()
There are some threads on the tensorboardX issue tracker mentioning that this currently fails beyond a threshold number of data points: https://github.com/lanpa/tensorboardX

How can I add points or nodes to a shape in VBA?

I am trying to add points or nodes to a shape so that, instead of having 4 points, I can have more.
Here is my code for adding shapes:
Set shap2 = w.Shapes.AddShape(1, 330, 55, 100, 40)
shap2.Nodes.Insert , 6, msoEditingAuto, 1, 1
I am getting an error when I try to insert the nodes. Any idea?