How do I access all the pixels for a Raster Source - openlayers-6

I am attempting to calculate some statistics for pixel values using OpenLayers 6.3.1, and I am having an issue iterating over all pixels. I have read the docs for the pixels array that gets passed to the operation callback, and they state:
For pixel type operations, the function will be called with an array of pixels, where each pixel is an array of four numbers ([r, g, b, a]) in the range of 0 - 255. It should return a single pixel array.
I have taken this to mean that the array passed contains all the pixels, but everything I try suggests that I only get the current pixel to work on.
if (this.rasterSource == null) {
  this.rasterSource = new Raster({
    sources: [this.imageLayer],
    operation: function (pixels, data) {
      data['originalPixels'] = pixels;
      if (!isSetUp) {
        // originalPixels = pixels as number[][];
        // const originalPixels = Array.from(pixels as number[][]);
        // let originals = generateOriginalHistograms(pixels as number[][]);
        isSetUp = true;
      }
      // console.log(pixels[0]);
      let pixel = pixels[0];
      pixel[data['channel']] = data['value'];
      return pixel;
    },
    lib: {
      isSetUp: isSetUp,
      numBins: numBins,
      // originalPixels: originalPixels,
      // originalRed: originalRed,
      // originalGreen: originalGreen,
      // originalBlue: originalBlue,
      generateOriginalHistograms: generateOriginalHistograms,
    }
  });
  this.rasterSource.on('beforeoperations', function (event) {
    event.data.channel = 0;
    event.data.value = 255;
  });
  this.rasterSource.on('afteroperations', function (event) {
    console.debug("After Operations");
  });
}
I have realised that I cannot pass arrays through the lib object, so I have stopped attempting that. These are the declarations I am currently using:
const numBins = 256;
var isSetUp: boolean = false;

function generateOriginalHistograms(pixels: number[][]) {
  let originalRed = new Array(numBins).fill(0);
  let originalGreen = new Array(numBins).fill(0);
  let originalBlue = new Array(numBins).fill(0);
  for (let i = 0; i < numBins; ++i) {
    originalRed[Math.floor(pixels[i][0])]++;
    originalGreen[Math.floor(pixels[i][1])]++;
    originalBlue[Math.floor(pixels[i][2])]++;
  }
  return { red: originalRed, blue: originalBlue, green: originalGreen };
}
and they are declared outside of the current Angular component that I am writing this in. I did have another question on this, but I have since realised that I was way off in what I could and couldn't use here.
This now runs and, with the code commented as above, will tint the image red. But data['originalPixels'] = pixels; only ever yields a single pixel. Can anyone tell me why this is, and what I need to do to access the whole pixel array? I have tried to slice and spread the array to no avail. If I uncomment the line // let originals = generateOriginalHistograms(pixels as number[][]); I get an error:
Uncaught TypeError: Cannot read properties of undefined (reading '0')
generateOriginalHistograms # blob:http://localhos…a7fa-b5a410582c06:6
(anonymous) # blob:http://localhos…7fa-b5a410582c06:76
(anonymous) # blob:http://localhos…7fa-b5a410582c06:62
(anonymous) # blob:http://localhos…7fa-b5a410582c06:83
And if I uncomment the line // console.log(pixels[0]); I get all the pixel values streaming into the console, but quite slowly.

The answer appears to be to change the operationType to 'image' and work with the resulting ImageData object.
this.rasterSource = new Raster({
  sources: [this.imageLayer],
  operationType: "image",
  operation: function (pixels, data) {
    let imageData = pixels[0] as ImageData;
    ...
I now have no issues calculating the stats I need.
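For illustration, a minimal sketch of what the 'image'-type operation might look like when building per-channel histograms. The data['histograms'] key is just a name chosen for this example (not part of the OpenLayers API), and the 256 bins mirror numBins above:

operation: function (pixels, data) {
  // With operationType 'image', pixels[0] is an ImageData for the rendered frame
  const imageData = pixels[0] as ImageData;
  const bytes = imageData.data; // Uint8ClampedArray, 4 bytes (RGBA) per pixel
  const red = new Array(256).fill(0);
  const green = new Array(256).fill(0);
  const blue = new Array(256).fill(0);
  for (let i = 0; i < bytes.length; i += 4) {
    red[bytes[i]]++;
    green[bytes[i + 1]]++;
    blue[bytes[i + 2]]++;
  }
  // illustrative key; values written to data here should be readable
  // from event.data in the 'afteroperations' handler
  data['histograms'] = { red: red, green: green, blue: blue };
  return imageData;
},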

Related

How do I extract the output from CGAL::poisson_surface_reconstruction_delaunay?

I am trying to convert a point cloud to a trimesh using CGAL::poisson_surface_reconstruction_delaunay() and extract the data inside the trimesh into an OpenGL-friendly format:
// The function below should set vertices and indices so that:
// triangle 0:     (vertices[indices[0]], vertices[indices[1]], vertices[indices[2]]),
// triangle 1:     (vertices[indices[3]], vertices[indices[4]], vertices[indices[5]]),
// ...
// triangle n - 1: ...
void reconstructPointsToSurfaceInOpenGLFormat(
        const std::list<std::pair<Kernel::Point_3, Kernel::Vector_3>>& points, // input: points and normals
        std::vector<glm::vec3>& vertices,     // output
        std::vector<unsigned int>& indices) { // output
    CGAL::Surface_mesh<Kernel::Point_3> trimesh;
    double spacing = 10;
    bool ok = CGAL::poisson_surface_reconstruction_delaunay(
        points.begin(), points.end(),
        CGAL::First_of_pair_property_map<std::pair<Kernel::Point_3, Kernel::Vector_3>>(),
        CGAL::Second_of_pair_property_map<std::pair<Kernel::Point_3, Kernel::Vector_3>>(),
        trimesh, spacing);
    // How do I set the vertices and indices values?
}
Please help me with iterating through the triangles in trimesh and setting the vertices and indices in the code above.
The class Polyhedron_3 is not index-based, so you need to provide an items class with ids, such as Polyhedron_items_with_id_3. You will then need to call CGAL::set_halfedgeds_items_id(trimesh) to initialise the ids. If you can't modify the Polyhedron type, you can use dynamic properties, but you will still need to initialise the ids.
Note that Surface_mesh is index-based and no particular handling is needed to get indices.
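A minimal, untested sketch of the Polyhedron_3 route described above (assuming the same Kernel as in the question; header set may need adjusting):

#include <CGAL/Polyhedron_3.h>
#include <CGAL/Polyhedron_items_with_id_3.h>

typedef CGAL::Polyhedron_3<Kernel, CGAL::Polyhedron_items_with_id_3> Polyhedron;

Polyhedron poly;
// ... build or reconstruct the surface into poly ...
CGAL::set_halfedgeds_items_id(poly); // initialise the vertex/halfedge/face ids
// afterwards each vertex and face carries a stable index, e.g. v->id()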
Based on sloriot's code from his answer:
// Assumes: using Mesh = CGAL::Surface_mesh<Kernel::Point_3>;
void mesh2GLM(CGAL::Surface_mesh<Kernel::Point_3>& trimesh,
              std::vector<glm::vec3>& vertices,
              std::vector<int>& indices) {
    std::map<size_t, size_t> meshIndex2Index;
    // Loop over all vertices in the mesh:
    size_t index = 0;
    for (Mesh::Vertex_index v : CGAL::vertices(trimesh)) {
        CGAL::Epick::Point_3 point = trimesh.point(v);
        std::size_t vi = v;
        vertices.push_back(glm::vec3(point.x(), point.y(), point.z()));
        meshIndex2Index[vi] = index;
        index++;
    }
    // Loop over all triangles (faces):
    for (Mesh::Face_index f : faces(trimesh)) {
        for (Mesh::Vertex_index v : CGAL::vertices_around_face(CGAL::halfedge(f, trimesh), trimesh)) {
            std::size_t vi = v;
            indices.push_back(meshIndex2Index[vi]);
        }
    }
}
Seems to work fine.

Single usage VkImageMemoryBarrier?

I've been learning Vulkan and following vulkan-tutorial, and right now I'm at the Texture mapping part. I'm loading an image and uploading it to host memory, but I'm having trouble understanding the layout transitions and barriers.
Consider this (pseudo)code for loading and transitioning an image (inspired by this), which will be sampled in a fragment shader:
auto texture = loadTexture(filePath);
auto stagingBuffer = createStagingBuffer(texture.pixels, texture.size);

// Create image with:
//   usage      - VK_IMAGE_USAGE_TRANSFER_DST_BIT | VK_IMAGE_USAGE_SAMPLED_BIT
//   properties - VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT
auto imageBuffer = createImage();

// -- begin single usage command buffer --
auto cb = beginCommandBuffer();

VkImageMemoryBarrier preCopyBarrier {
    // ...
    .image = imageBuffer.image,
    .srcAccessMask = 0,
    .dstAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
    .oldLayout = VK_IMAGE_LAYOUT_UNDEFINED,
    .newLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
    // ...
};
// PipelineBarrier (preCopyBarrier):
//   srcStageMask       = VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT
//   dstStageMask       = VK_PIPELINE_STAGE_TRANSFER_BIT
//   imageMemoryBarrier = &preCopyBarrier
vkCmdPipelineBarrier(cb, ...);

// Copy stagingBuffer.buffer to imageBuffer.image
//   dstImageLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL
vkCmdCopyBufferToImage(cb, ...);

VkImageMemoryBarrier postCopyBarrier {
    // ...
    .image = imageBuffer.image,
    .srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
    .dstAccessMask = VK_ACCESS_SHADER_READ_BIT,
    .oldLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
    .newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
    // ...
};
// PipelineBarrier (postCopyBarrier):
//   srcStageMask       = VK_PIPELINE_STAGE_TRANSFER_BIT
//   dstStageMask       = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT
//   imageMemoryBarrier = &postCopyBarrier
vkCmdPipelineBarrier(cb, ...);

endAndSubmitCommandBuffer(cb);
The preCopyBarrier is there because of the vkCmdCopyBufferToImage(...) command, and will be "used"/"activated" only once, during that command(?).
The postCopyBarrier is there because the image will be sampled in the fragment shader, so the layout transition
VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL -> VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL
has to happen every single time a frame is rendered (? Please correct me if I'm wrong).
But (assuming I'm correct, which I'm probably not) I'm having trouble wrapping my head around the fact that I'm creating a preCopyBarrier, which will be used only once, and a postCopyBarrier, which will be used continuously. If I were to load, for example, 200 textures, I'd have a bunch of their single-usage preCopyBarriers lying around. Isn't this a... waste?
This might be a stupid question and I'm probably missing/misunderstanding something important, but I feel like I shouldn't move on without understanding this concept correctly.

Tensorflow retrained graph in C# (Tensorflowsharp)

I'm just trying to use a retrained Inception model with TensorFlowSharp in Unity.
The retrained model was prepared with optimize_for_inference and works like a charm in Python.
But it is pretty inaccurate in C#.
The code works like this. First I get the picture:
// webcamtexture transformed to picture in jpg
var pic = _texture.EncodeToJpg();
// added picture to queue for the object detection thread
_detectedObjects.addTens(pic);
After that, a thread will handle each collected picture:
public void HandlePicture(byte[] picture)
{
    var tensor = ImageUtil.CreateTensorFromImageFile(picture);

    var runner = session.GetRunner();
    runner.AddInput(g_input, tensor).Fetch(g_output);
    var output = runner.Run();

    var bestIdx = 0;
    float best = 0;
    var result = output[0];
    var rshape = result.Shape;
    var probabilities = ((float[][])result.GetValue(jagged: true))[0];
    for (int r = 0; r < probabilities.Length; r++)
    {
        if (probabilities[r] > best)
        {
            bestIdx = r;
            best = probabilities[r];
        }
    }
    Debug.Log("Tensorflow thinks this is: " + labels[bestIdx] + " Prob : " + best * 100);
}
So my guess is:
1. It has something to do with retrained graphs (because I can't find any application/test where one is used and working).
2. It has something to do with how I transform the picture into a tensor (but if that is wrong I could use some help there; the code is further down).
To transform the picture I am also using a graph, as in the TensorFlowSharp example:
public static class ImageUtil
{
    // Convert the image in filename to a Tensor suitable as input to the Inception model.
    public static TFTensor CreateTensorFromImageFile(byte[] contents, TFDataType destinationDataType = TFDataType.Float)
    {
        // DecodeJpeg uses a scalar String-valued tensor as input.
        var tensor = TFTensor.CreateString(contents);

        TFGraph graph;
        TFOutput input, output;

        // Construct a graph to normalize the image
        ConstructGraphToNormalizeImage(out graph, out input, out output, destinationDataType);

        // Execute that graph to normalize this one image
        using (var session = new TFSession(graph))
        {
            var normalized = session.Run(
                inputs: new[] { input },
                inputValues: new[] { tensor },
                outputs: new[] { output });
            return normalized[0];
        }
    }

    // The Inception model takes as input the image described by a Tensor in a very
    // specific normalized format (a particular image size, shape of the input tensor,
    // normalized pixel values etc.).
    //
    // This function constructs a graph of TensorFlow operations which takes as
    // input a JPEG-encoded string and returns a tensor suitable as input to the
    // Inception model.
    private static void ConstructGraphToNormalizeImage(out TFGraph graph, out TFOutput input, out TFOutput output, TFDataType destinationDataType = TFDataType.Float)
    {
        // Some constants specific to the pre-trained model at:
        // https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip
        //
        // - The model was trained with images scaled to 224x224 pixels.
        // - The colors, represented as R, G, B in 1 byte each, were converted to
        //   float using (value - Mean)/Scale.
        const int W = 299;
        const int H = 299;
        const float Mean = 128;
        const float Scale = 1;

        graph = new TFGraph();
        input = graph.Placeholder(TFDataType.String);
        output = graph.Cast(
            graph.Div(
                x: graph.Sub(
                    x: graph.ResizeBilinear(
                        images: graph.ExpandDims(
                            input: graph.Cast(
                                graph.DecodeJpeg(contents: input, channels: 3), DstT: TFDataType.Float),
                            dim: graph.Const(0, "make_batch")),
                        size: graph.Const(new int[] { W, H }, "size")),
                    y: graph.Const(Mean, "mean")),
                y: graph.Const(Scale, "scale")),
            destinationDataType);
    }
}

Vulkan depth image binding error

Hi, I am trying to bind a depth memory buffer, but I get an error as below and I have no idea why it is popping up.
The depth format is VK_FORMAT_D16_UNORM and the usage is VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT. I have read online that the tiling shouldn't be linear, but then I get a different error. Thanks!!!
The code for creating and binding the image is as below.
VkImageCreateInfo imageInfo = {};

// If the depth format is undefined, fall back to a 16-bit depth format
if (Depth.format == VK_FORMAT_UNDEFINED) {
    Depth.format = VK_FORMAT_D16_UNORM;
}
const VkFormat depthFormat = Depth.format;

VkFormatProperties props;
vkGetPhysicalDeviceFormatProperties(*deviceObj->gpu, depthFormat, &props);
if (props.linearTilingFeatures & VK_FORMAT_FEATURE_DEPTH_STENCIL_ATTACHMENT_BIT) {
    imageInfo.tiling = VK_IMAGE_TILING_LINEAR;
}
else if (props.optimalTilingFeatures & VK_FORMAT_FEATURE_DEPTH_STENCIL_ATTACHMENT_BIT) {
    imageInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
}
else {
    std::cout << "Unsupported Depth Format, try other Depth formats.\n";
    exit(-1);
}

imageInfo.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
imageInfo.pNext = NULL;
imageInfo.imageType = VK_IMAGE_TYPE_2D;
imageInfo.format = depthFormat;
imageInfo.extent.width = width;
imageInfo.extent.height = height;
imageInfo.extent.depth = 1;
imageInfo.mipLevels = 1;
imageInfo.arrayLayers = 1;
imageInfo.samples = NUM_SAMPLES;
imageInfo.queueFamilyIndexCount = 0;
imageInfo.pQueueFamilyIndices = NULL;
imageInfo.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
imageInfo.usage = VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT;
imageInfo.flags = 0;

// Use the image create info to create the image object
result = vkCreateImage(deviceObj->device, &imageInfo, NULL, &Depth.image);
assert(result == VK_SUCCESS);

// Get the image memory requirements
VkMemoryRequirements memRqrmnt;
vkGetImageMemoryRequirements(deviceObj->device, Depth.image, &memRqrmnt);

VkMemoryAllocateInfo memAlloc = {};
memAlloc.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
memAlloc.pNext = NULL;
memAlloc.allocationSize = 0;
memAlloc.memoryTypeIndex = 0;
memAlloc.allocationSize = memRqrmnt.size;

// Determine the type of memory required with the help of memory properties
pass = deviceObj->memoryTypeFromProperties(memRqrmnt.memoryTypeBits, 0 /* No requirements */, &memAlloc.memoryTypeIndex);
assert(pass);

// Allocate the memory for image objects
result = vkAllocateMemory(deviceObj->device, &memAlloc, NULL, &Depth.mem);
assert(result == VK_SUCCESS);

// Bind the allocated memory
result = vkBindImageMemory(deviceObj->device, Depth.image, Depth.mem, 0);
assert(result == VK_SUCCESS);
Yes, linear tiling may not be supported for depth-usage images.
Consult the specification and the Valid Usage section of VkImageCreateInfo. The capability is queried with the vkGetPhysicalDeviceFormatProperties and vkGetPhysicalDeviceImageFormatProperties commands. Depth formats are "opaque" anyway, so there is not much reason to use linear tiling.
This you seem to be doing in your code already.
But the error informs you that you are trying to use a memory type that is not allowed for the given image. Use the vkGetImageMemoryRequirements command to query which memory types are allowed.
Possibly you have some error there (you are using 0x1, which is obviously not part of 0x84 per the message). You may want to reuse the example code in the Device Memory chapter of the specification. Provide your memoryTypeFromProperties implementation for a more specific answer.
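For illustration, a rough sketch of that kind of selection, reusing the gpu and memRqrmnt variables from the question's code and using VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT purely as an example requirement:

VkPhysicalDeviceMemoryProperties memProps;
vkGetPhysicalDeviceMemoryProperties(*deviceObj->gpu, &memProps);

uint32_t typeIndex = UINT32_MAX;
for (uint32_t i = 0; i < memProps.memoryTypeCount; ++i) {
    // The image may only be bound to memory types whose bit is set in memoryTypeBits
    const bool allowedForImage = (memRqrmnt.memoryTypeBits & (1u << i)) != 0;
    const bool deviceLocal = (memProps.memoryTypes[i].propertyFlags &
                              VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) != 0;
    if (allowedForImage && deviceLocal) {
        typeIndex = i;
        break;
    }
}
// typeIndex (if found) is what goes into memAlloc.memoryTypeIndex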
I accidentally set the typeIndex to 1 instead of i and it works now. In my defense, I have been writing Vulkan code the whole day and my eyes are bleeding :). Thanks for the help.
bool VulkanDevice::memoryTypeFromProperties(uint32_t typeBits, VkFlags requirementsMask, uint32_t *typeIndex)
{
    // Search memtypes to find the first index with those properties
    for (uint32_t i = 0; i < 32; i++) {
        if ((typeBits & 1) == 1) {
            // Type is available, does it match user properties?
            if ((memoryProperties.memoryTypes[i].propertyFlags & requirementsMask) == requirementsMask) {
                *typeIndex = i; // was set to 1 :(
                return true;
            }
        }
        typeBits >>= 1;
    }
    // No memory types matched, return failure
    return false;
}

Making sense of a list of GPS values in an iOS application

I have a web service that interfaces with the Google Maps API to generate a polygon on a Google map. The service takes the GPS values and stores them for retrieval.
The problem is that when I try to use these values in my iPhone app, the MKPolyline is either a mess or a bunch of zig-zag lines.
Is there a way to make sense of these values so I can reconstruct the polygon?
My current code looks like this:
private void GenerateMap()
{
    var latCoord = new List<double>();
    var longCoord = new List<double>();
    var pad = AppDelegate.Self.db.GetPaddockFromCrop(crop);

    mapMapView.MapType = MKMapType.Standard;
    mapMapView.ZoomEnabled = true;
    mapMapView.ScrollEnabled = false;

    mapMapView.OverlayRenderer = (m, o) =>
    {
        if (o.GetType() == typeof(MKPolyline))
        {
            var p = new MKPolylineRenderer((MKPolyline)o);
            p.LineWidth = 2.0f;
            p.StrokeColor = UIColor.Green;
            return p;
        }
        else
            return null;
    };

    scMapType.ValueChanged += (s, e) =>
    {
        switch (scMapType.SelectedSegment)
        {
            case 0:
                mapMapView.MapType = MKMapType.Standard;
                break;
            case 1:
                mapMapView.MapType = MKMapType.Satellite;
                break;
            case 2:
                mapMapView.MapType = MKMapType.Hybrid;
                break;
        }
    };

    if (pad.Boundaries != null)
    {
        var bounds = pad.Boundaries.OrderBy(t => t.latitude).ThenBy(t => t.longitude).ToList();
        foreach (var l in bounds)
        {
            double lat = l.latitude;
            double lon = l.longitude;
            latCoord.Add(lat);
            longCoord.Add(lon);
        }

        if (latCoord.Count != 0)
        {
            if (latCoord.Count > 0)
            {
                var coord = new List<CLLocationCoordinate2D>();
                for (int i = 0; i < latCoord.Count; ++i)
                {
                    var c = new CLLocationCoordinate2D();
                    c.Latitude = latCoord[i];
                    c.Longitude = longCoord[i];
                    coord.Add(c);
                }

                var line = MKPolyline.FromCoordinates(coord.ToArray());
                mapMapView.AddOverlay(line);
                mapMapView.SetVisibleMapRect(line.BoundingMapRect, true);
            }
        }
    }
}
MKPolygon / MKPolygonRenderer gives the same sort of random line mess. The OrderBy LINQ makes no difference other than to make the random lines a zig-zag going up or down the view.
Since you don't know the order the points were captured in, you can't trace the actual path traveled around the perimeter of the paddock; this is why your polylines are turning into silly-walks all over the map. Lacking that information, you can at best make an educated guess.
Some possible heuristics you might want to try:
- Take the average of all the points to get a "somewhere in the middle" point, then order by atan2(l.latitude - middle.latitude, l.longitude - middle.longitude). (Be careful, atan2 is undefined at (0, 0)!) A rough sketch of this ordering is shown after this list.
- Take the convex hull of the points captured: for a relatively small number of points you can get away with the simple quadratic-time Jarvis's march. This has the approximate effect of wrapping a notional rubber band around the outside of the map push-pins by discarding points that would form concavities, and should also give you the order of the remaining points.
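A rough C# sketch of the first heuristic, reusing the pad.Boundaries items and MapKit types from the question (illustrative only, and sensible mainly for roughly convex, star-shaped paddocks):

var bounds = pad.Boundaries.ToList();
double midLat = bounds.Average(b => b.latitude);
double midLon = bounds.Average(b => b.longitude);

// Order the points by their angle around the centroid instead of by latitude/longitude
var ordered = bounds
    .OrderBy(b => Math.Atan2(b.latitude - midLat, b.longitude - midLon))
    .ToList();

var coords = ordered
    .Select(b => new CLLocationCoordinate2D(b.latitude, b.longitude))
    .ToArray();
var line = MKPolyline.FromCoordinates(coords);
mapMapView.AddOverlay(line);

If the paddock boundary has concavities, the convex-hull approach will be the better starting point.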