How to set custom color ranges on deck.gl hexagon layer? - deck.gl

I want to assign colors to predefined ranges, e.g. red for 100-200 and blue for <100, with no success.
I can set the colors but not custom ranges.
How can I do this, or at least, how does deck.gl quantize the color scale?

You can do something like this.
First, import scaleThreshold from the d3-scale package:
import { scaleThreshold } from 'd3-scale';
Now define your color scale function. Note that scaleThreshold expects one more range entry than domain entries: values below the first threshold map to the first color, values in [1, 2) to the second, and so on, so five thresholds split the data into these six color buckets:
const colorScaleFunction = scaleThreshold()
  .domain([1, 2, 3, 4, 5])
  .range([
    [65, 182, 196],
    [254, 178, 76],
    [253, 141, 60],
    [252, 78, 42],
    [227, 26, 28],
    [189, 0, 38],
  ]);
Then define your layer, for example a GeoJsonLayer. Start by importing it:
import { GeoJsonLayer } from '@deck.gl/layers';
Then define the layer:
const geoJsonLayer = new GeoJsonLayer({
  id: 'geojson-layer-example',
  data: dataExample, /* just use some data */
  getFillColor: (d) => colorScaleFunction(d.properties.someValue),
  pickable: true,
});
Now you can render it (here DeckGL comes from deck.gl and StaticMap from react-map-gl; note that the layers prop takes an array):
return (
  <DeckGL
    layers={[geoJsonLayer]}
    initialViewState={YOUR_INITIAL_VIEW_STATE}
    controller={true}
  >
    <StaticMap
      reuseMaps
      mapStyle={YOUR_MAP_STYLE}
      preventStyleDiffing={true}
      mapboxApiAccessToken={YOUR_MAPBOX_TOKEN}
    />
  </DeckGL>
);
That's all, I think. As for how deck.gl itself quantizes: as far as I know, the HexagonLayer maps its aggregated values onto the colorRange prop in equal-sized buckets, and you can pin those buckets to fixed values by also passing colorDomain.

Related

How to add colormap renderer from a classified single band "QgsRasterLayer" with PyQGIS

I'm trying to add a colormap to a TMS service which serves single-band PNGs with values ranging from 1 to 11. At this point the layer renders in black (low values between 1 and 11), but I would like it to render with a specific color for each of the 11 values. This is for a QGIS plugin that adds the layer to the map.
Here is a sample of my code; any help would be very much appreciated!
# Create rlayer (note the {z}/{x}/{-y} placeholders and the EPSG:3857 CRS)
urlWithParams = 'type=xyz&url=https://bucket_name.s3.ca-central-1.amazonaws.com/{z}/{x}/{-y}.png&zmax=19&zmin=0&crs=EPSG:3857'
layerName = 'Classified image'
rlayer = QgsRasterLayer(urlWithParams, layerName, 'wms')
# One of my attempts to create the renderer
fcn = QgsColorRampShader()
fcn.setColorRampType(QgsColorRampShader.Discrete)
lst = [QgsColorRampShader.ColorRampItem(1, QColor(0, 255, 0)),
       QgsColorRampShader.ColorRampItem(2, QColor(65, 123, 23)),
       QgsColorRampShader.ColorRampItem(3, QColor(123, 76, 34)),
       QgsColorRampShader.ColorRampItem(4, QColor(45, 234, 223)),
       QgsColorRampShader.ColorRampItem(5, QColor(90, 134, 23)),
       QgsColorRampShader.ColorRampItem(6, QColor(45, 255, 156)),
       QgsColorRampShader.ColorRampItem(7, QColor(245, 23, 123)),
       QgsColorRampShader.ColorRampItem(8, QColor(233, 167, 87)),
       QgsColorRampShader.ColorRampItem(9, QColor(123, 125, 23)),
       QgsColorRampShader.ColorRampItem(10, QColor(213, 231, 123)),
       QgsColorRampShader.ColorRampItem(11, QColor(255, 255, 0))]
fcn.setColorRampItemList(lst)
shader = QgsRasterShader()
shader.setRasterShaderFunction(fcn)
renderer = QgsSingleBandColorDataRenderer(rlayer.dataProvider(), 1, shader)
rlayer.setRenderer(renderer)
rlayer.triggerRepaint()
# Add rendered layer to QGIS map
QgsProject.instance().addMapLayer(rlayer)
It looks like the renderer type is QgsSingleBandColorDataRenderer. Any idea how to make this work? Thanks!
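One plausible fix, sketched below under the assumption that the provider really exposes the class values in band 1: QgsSingleBandColorDataRenderer is meant for bands that already encode ARGB color data and ignores raster shaders, whereas QgsSingleBandPseudoColorRenderer is the renderer that actually applies a QgsRasterShader. (Note also that XYZ tiles fetched through the 'wms' provider may arrive as pre-rendered RGB, in which case no per-value recoloring can be applied at all.)
from qgis.core import QgsRasterShader, QgsColorRampShader, QgsSingleBandPseudoColorRenderer
from qgis.PyQt.QtGui import QColor

# Reuse the discrete color ramp from the question...
fcn = QgsColorRampShader()
fcn.setColorRampType(QgsColorRampShader.Discrete)
fcn.setColorRampItemList([
    QgsColorRampShader.ColorRampItem(1, QColor(0, 255, 0), '1'),
    # ... classes 2 through 10 as in the question ...
    QgsColorRampShader.ColorRampItem(11, QColor(255, 255, 0), '11'),
])
shader = QgsRasterShader()
shader.setRasterShaderFunction(fcn)

# ...but feed it to the pseudo-color renderer, which looks band-1 values
# up in the shader instead of treating them as ready-made color data.
renderer = QgsSingleBandPseudoColorRenderer(rlayer.dataProvider(), 1, shader)
rlayer.setRenderer(renderer)
rlayer.triggerRepaint()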

Tensorflow object detection evaluation

I'd like to evaluate my object detection model with mAP (mean average precision). In https://github.com/tensorflow/models/tree/master/research/object_detection/utils/ there is object_detection_evaluation.py, which I want to use.
I use the following for the groundtruth boxes:
pascal_evaluator = object_detection_evaluation.PascalDetectionEvaluator(
    categories, matching_iou_threshold=0.1)
groundtruth_boxes = np.array([[10, 10, 11, 11]], dtype=float)
groundtruth_class_labels = np.array([1], dtype=int)
groundtruth_is_difficult_list = np.array([False], dtype=bool)
pascal_evaluator.add_single_ground_truth_image_info(
    'img2',
    {
        standard_fields.InputDataFields.groundtruth_boxes: groundtruth_boxes,
        standard_fields.InputDataFields.groundtruth_classes: groundtruth_class_labels,
        standard_fields.InputDataFields.groundtruth_difficult: groundtruth_is_difficult_list
    }
)
and this for the prediction boxes:
# Add detections
image_key = 'img2'
detected_boxes = np.array(
    [[100, 100, 220, 220], [10, 10, 11, 11]],
    dtype=float)
detected_class_labels = np.array([1, 1], dtype=int)
detected_scores = np.array([0.8, 0.9], dtype=float)
pascal_evaluator.add_single_detected_image_info(image_key, {
    standard_fields.DetectionResultFields.detection_boxes:
        detected_boxes,
    standard_fields.DetectionResultFields.detection_scores:
        detected_scores,
    standard_fields.DetectionResultFields.detection_classes:
        detected_class_labels
})
I print the results with
metrics = pascal_evaluator.evaluate()
print(metrics)
And my question:
If I use the prediction boxes in the order [100, 100, 220, 220], [10, 10, 11, 11], the result is:
{'PASCAL/Precision/mAP@0.1IOU': 1.0,
 'PASCAL/PerformanceByCategory/AP@0.1IOU/face': 1.0}
If I use [10, 10, 11, 11], [100, 100, 220, 220] (the other box order), I get the following result:
{'PASCAL/Precision/mAP@0.1IOU': 0.5,
 'PASCAL/PerformanceByCategory/AP@0.1IOU/face': 0.5}
Why is that so? Or is it a bug?
Cheers, Michael
Although your description is not entirely clear, I think I found the error in your code. You mention that you get different results for different orderings of the bounding boxes. That would be peculiar, and if it were true it would surely be a bug.
But since I tested the code myself: you probably did not swap the corresponding scores (detected_scores = np.array([0.8, 0.9], dtype=float)) along with the bounding boxes. That changes the problem itself, not just the order of the boxes: the highest score (0.9) now belongs to the box that does not match the ground truth, so it is counted as a false positive ranked ahead of the true positive, which is exactly what pushes the AP down to 0.5. If you keep each score paired with its box, the mAP is the same in both cases:
{'PascalBoxes_Precision/mAP@0.5IOU': 1.0,
 'PascalBoxes_PerformanceByCategory/AP@0.5IOU/person': 1.0}
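In other words, a minimal sketch of the paired swap, using the same data as in the question:
# Reversed box order: the scores must follow their boxes,
# since detected_scores[i] belongs to detected_boxes[i].
detected_boxes = np.array(
    [[10, 10, 11, 11], [100, 100, 220, 220]],
    dtype=float)
detected_scores = np.array([0.9, 0.8], dtype=float)  # swapped together with the boxes
detected_class_labels = np.array([1, 1], dtype=int)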

Formatting Manipulate output to have 2 cells in Mathematica

The following code outputs an array from the Manipulate statement. I would like to output the fit and the plot as two separate output cells that update dynamically. I think it should be pretty simple, but I am having trouble with it. I've tried using the CellPrint[] function, but did not get it to work.
Thanks,
Tal
temperatures (*mK*) = {300, 200, 150, 100, 75, 50, 25, 11, 10};
F[t_, \[Nu]_] := t^\[Nu];
rd (*uOhms*) = {27173.91304, 31250., 42372.88136, 200601.80542,
   1.05263*10^6, 1.33333*10^7, 1.33333*10^8, 2.*10^8, 2.1*10^8};
logRd = Log10[rd];
f[\[Nu]0_] := Module[{\[Nu]},
  \[Nu] = \[Nu]0;
  data = Transpose[{F[temperatures, \[Nu]]*10^3, logRd}];
  fitToHexatic = LinearModelFit[data[[4 ;; 6]], x, x];
  plota = Plot[fitToHexatic["BestFit"], {x, 0, data[[-1]][[1]]}, Axes -> False];
  plotb = ListPlot[data, Axes -> False];
  {fitToHexatic, Show[{plota, plotb}, Axes -> True]}
  ]
Manipulate[
 f[nu],
 {nu, -0.2, -1}
 ]
Screenshot of the output:
You don't need to use a Manipulate; you can get more control with lower-level functions, e.g.:
Slider[Dynamic[nu, (f[#]; nu = #) &], {-0.2, -1}]
Dynamic[Normal[fitToHexatic]]
Dynamic[Show[{plota, plotb}, Axes -> True]]
See also Prototypical Manipulate in lower level functions.

Is it compulsory for SVM to have two labels?

I have a question regarding Support Vector Machines. Does an SVM have to have two labels? Is it possible to have only one label, with prediction based on that label? For example, the following testData does not fit trainingData, so it should be labeled something other than 1. The dilemma is that I do not know the values for the worst-case scenario, because all values come from user input.
int labels[3] = {1, 1, 1};               // every training sample carries the same label
cv::Mat labelsMat(3, 1, CV_32S, labels);
float trainingData[3][3] = { { 25, 191, 19 }, { 24, 186, 17 }, { 25, 200, 19 } };
float testData[3] = {70, 500, 100};      // far outside the training range
SVM is a classification method: it separates the data into several classes, and it needs the labels to do so. You should train the SVM with more than one label and then test it on new inputs. (If only one label is available, the one-class SVM variant, cv::ml::SVM::ONE_CLASS in OpenCV, does novelty detection against a single class, but for ordinary classification you need at least two labels.)
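A rough sketch of that single-label route in Python with scikit-learn (my assumption, not the OpenCV API from the question): train a one-class SVM on the known-good samples and treat a -1 prediction as "does not fit the training data".
import numpy as np
from sklearn.svm import OneClassSVM

# The three training samples from the question, all sharing one implicit label.
training_data = np.array([[25, 191, 19], [24, 186, 17], [25, 200, 19]], dtype=float)
test_data = np.array([[70, 500, 100]], dtype=float)

# A one-class SVM learns the region occupied by the training data;
# predict() returns +1 for inliers and -1 for outliers/novelties.
clf = OneClassSVM(gamma='scale', nu=0.1).fit(training_data)
print(clf.predict(test_data))  # expected: [-1], i.e. it does not fit the training data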

Trouble setting up the SimpleVector encoder

Using the commits from breznak for the encoders (I wasn't able to figure out "git checkout ..." with GitHub, so I just carefully copied over the three files: base.py, multi.py, and multi_test.py).
I ran multi_test.py without any problems.
Then I adjusted my model parameters (MODEL_PARAMS) so that the encoders portion of 'sensorParams' looks like this:
'encoders': {
    'frequency': {
        'fieldname': u'frequency',
        'type': 'SimpleVector',
        'length': 5,
        'minVal': 0,
        'maxVal': 210
    }
},
I also adjusted the modelInput portion of my code, so it looked like this:
model = ModelFactory.create(model_params.MODEL_PARAMS)
model.enableInference({'predictedField': 'frequency'})
y = [1,2,3,4,5]
modelInput = {"frequency": y}
result = model.run(modelInput)
But I get the following error, regardless of whether I instantiate 'y' as a list or as a numpy.ndarray:
File "nta/eng/lib/python2.7/site-packages/nupic/encoders/base.py", line 183, in _getInputValue
return getattr(obj, fieldname)
AttributeError: 'list' object has no attribute 'idx0'
I also tried initializing a SimpleVector encoder inline with my modelInput, directly encoding my array, and then passing it through modelInput. That violated the input parameters of my SimpleVector, because I was now double encoding. So I removed the encoders portion of my model parameters dictionary, which also failed, because some part of my model was looking for that portion of the dictionary.
Any suggestions on what I should do next?
Edit: Here are the files I'm using with the OPF.
sendAnArray.py
import numpy
from nupic.frameworks.opf.modelfactory import ModelFactory
import model_params
class sendAnArray():

    def __init__(self):
        self.model = ModelFactory.create(model_params.MODEL_PARAMS)
        self.model.enableInference({'predictedField': 'frequency'})
        for i in range(100):
            self.run()

    def run(self):
        y = [1, 2, 3, 4, 5]
        modelInput = {"frequency": y}
        result = self.model.run(modelInput)
        anomalyScore = result.inferences['anomalyScore']
        print y, anomalyScore

sAA = sendAnArray()
model_params.py
MODEL_PARAMS = {
    'model': "CLA",
    'version': 1,
    'predictAheadTime': None,
    'modelParams': {
        'inferenceType': 'TemporalAnomaly',
        'sensorParams': {
            'verbosity': 0,
            'encoders': {
                'frequency': {
                    'fieldname': u'frequency',
                    'type': 'SimpleVector',
                    'length': 5,
                    'minVal': 0,
                    'maxVal': 210
                }
            },
            'sensorAutoReset': None,
        },
        'spEnable': True,
        'spParams': {
            'spVerbosity': 0,
            'globalInhibition': 1,
            'columnCount': 2048,
            'inputWidth': 5,
            'numActivePerInhArea': 60,
            'seed': 1956,
            'coincInputPoolPct': 0.5,
            'synPermConnected': 0.1,
            'synPermActiveInc': 0.1,
            'synPermInactiveDec': 0.01,
        },
        'tpEnable': True,
        'tpParams': {
            'verbosity': 0,
            'columnCount': 2048,
            'cellsPerColumn': 32,
            'inputWidth': 2048,
            'seed': 1960,
            'temporalImp': 'cpp',
            'newSynapseCount': 20,
            'maxSynapsesPerSegment': 32,
            'maxSegmentsPerCell': 128,
            'initialPerm': 0.21,
            'permanenceInc': 0.1,
            'permanenceDec': 0.1,
            'globalDecay': 0.0,
            'maxAge': 0,
            'minThreshold': 12,
            'activationThreshold': 16,
            'outputType': 'normal',
            'pamLength': 1,
        },
        'clParams': {
            'regionName': 'CLAClassifierRegion',
            'clVerbosity': 0,
            'alpha': 0.0001,
            'steps': '5',
        },
        'anomalyParams': {
            u'anomalyCacheRecords': None,
            u'autoDetectThreshold': None,
            u'autoDetectWaitRecords': 2184
        },
        'trainSPNetOnlyIfRequested': False,
    },
}
The problem seems to be that the SimpleVector class is accepting an array instead of a dict as its input, and then reconstructs it internally as {'list': {'idx0': 1, 'idx1': 2, ...}} (i.e. as if this dict had been the input). This is fine if it is done consistently, but your error shows that it breaks down somewhere. Have a word with @breznak about this.
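If that reading is right, one untested workaround sketch is to build the per-index dict yourself before handing the record to the model; note that the 'idx0', 'idx1', ... field names are inferred from the traceback, not from any documented API:
y = [1, 2, 3, 4, 5]
# Hand the encoder the dict shape it appears to expect internally,
# instead of a bare list (field names idx0..idx4 come from the traceback).
modelInput = {"frequency": dict(("idx%d" % i, v) for i, v in enumerate(y))}
result = model.run(modelInput)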
Working through the OPF was difficult. I wanted to input an array of indices into the temporal pooler, so I opted to interface directly with the algorithms (I relied heavily on hello_tp.py). I ignored SimpleVector altogether and instead worked through the BitmapArray encoder.
Subutai has a useful email on the nupic-discuss listserv, where he breaks down the three main areas of the NuPIC API: algorithms, networks/regions, and the OPF. That helped me understand my options better.