How to implement post-processing for YOLO v3 or v4 ONNX models in ML.Net - yolo

I followed this Microsoft tutorial and there was no problem, but I wanted to change the model to YOLO v3 or v4. I got the YOLOv4 ONNX model from onnx/models and was able to get all three float-array outputs of the model, but the problem is the post-processing: I can't get proper bounding boxes from these outputs.
I changed everything in the tutorial's source code that needed changing for YOLOv4 (anchors, strides, output grid sizes, some functions, and so on), but I can't get proper results.
I checked all my code against a Python implementation, but I don't know where the problem is.
Does anyone have a link, or know how to implement YOLO v3 or v4 ONNX models in C# with ML.Net?
Any help will be appreciated.

I think it is not possible to directly port Microsoft's tutorial from YOLO v2 to v3, as it relies on the specific inputs and outputs of each model.
As a side note, I did a port of another YOLO v3 model to ML.Net in this GitHub repo: 'YOLOv3MLNet'. It contains a fully functioning ML.Net pipeline.
I've also made the code of this answer available here:
YOLO v3 with ML.Net
YOLO v4 with ML.Net
To come back to your models, I'll take YOLO v3 (available in the onnx/models repo) as an example. A good explanation of the model can be found here.
My first piece of advice would be to look at the model using Netron. Doing so, you will see the input and output layers; these layers are also described in the onnx/models documentation.
Netron's yolov3-10 screenshot
(I see in Netron that this particular YOLO v3 model already does part of the post-processing itself, namely the non-maximum suppression step.)
Input layer names: input_1, image_shape
Output layer names: yolonms_layer_1/ExpandDims_1:0, yolonms_layer_1/ExpandDims_3:0, yolonms_layer_1/concat_2:0
As per the model documentation, the input shapes are:
Resized image (1x3x416x416); original image size (1x2), which is [image.size[1], image.size[0]]
We first need to define the ML.Net input and output classes as follows:
public class YoloV3BitmapData
{
    [ColumnName("bitmap")]
    [ImageType(416, 416)]
    public Bitmap Image { get; set; }

    [ColumnName("width")]
    public float ImageWidth => Image.Width;

    [ColumnName("height")]
    public float ImageHeight => Image.Height;
}
public class YoloV3Prediction
{
    /// <summary>
    /// ((52 x 52) + (26 x 26) + (13 x 13)) x 3 = 10,647.
    /// </summary>
    public const int YoloV3BboxPredictionCount = 10_647;

    /// <summary>
    /// Boxes
    /// </summary>
    [ColumnName("yolonms_layer_1/ExpandDims_1:0")]
    public float[] Boxes { get; set; }

    /// <summary>
    /// Scores
    /// </summary>
    [ColumnName("yolonms_layer_1/ExpandDims_3:0")]
    public float[] Scores { get; set; }

    /// <summary>
    /// Concat
    /// </summary>
    [ColumnName("yolonms_layer_1/concat_2:0")]
    public int[] Concat { get; set; }
}
We then create the ML.Net pipeline and load the prediction engine:
// Define the scoring pipeline
var pipeline = mlContext.Transforms.ResizeImages(inputColumnName: "bitmap", outputColumnName: "input_1", imageWidth: 416, imageHeight: 416, resizing: ResizingKind.IsoPad)
    .Append(mlContext.Transforms.ExtractPixels(outputColumnName: "input_1", outputAsFloatArray: true, scaleImage: 1f / 255f))
    .Append(mlContext.Transforms.Concatenate("image_shape", "height", "width"))
    .Append(mlContext.Transforms.ApplyOnnxModel(
        shapeDictionary: new Dictionary<string, int[]>() { { "input_1", new[] { 1, 3, 416, 416 } } },
        inputColumnNames: new[]
        {
            "input_1",
            "image_shape"
        },
        outputColumnNames: new[]
        {
            "yolonms_layer_1/ExpandDims_1:0",
            "yolonms_layer_1/ExpandDims_3:0",
            "yolonms_layer_1/concat_2:0"
        },
        modelFile: @"D:\yolov3-10.onnx"));
// Fit on empty list to obtain input data schema
var model = pipeline.Fit(mlContext.Data.LoadFromEnumerable(new List<YoloV3BitmapData>()));
// Create prediction engine
var predictionEngine = mlContext.Model.CreatePredictionEngine<YoloV3BitmapData, YoloV3Prediction>(model);
NB: We need to define the shapeDictionary parameter because the input shapes are not completely defined in the model.
As per the model documentation, the output shapes are:
The model has 3 outputs:
boxes: (1 x 'n_candidates' x 4), the coordinates of all anchor boxes
scores: (1 x 80 x 'n_candidates'), the scores of all anchor boxes per class
indices: ('nbox' x 3), selected indices from the boxes tensor. The selected index format is (batch_index, class_index, box_index).
The function below will help you process the results; I leave it to you to fine-tune it.
public IReadOnlyList<YoloV3Result> GetResults(YoloV3Prediction prediction, string[] categories)
{
    if (prediction.Concat == null || prediction.Concat.Length == 0)
    {
        return new List<YoloV3Result>();
    }

    if (prediction.Boxes.Length != YoloV3Prediction.YoloV3BboxPredictionCount * 4)
    {
        throw new ArgumentException();
    }

    if (prediction.Scores.Length != YoloV3Prediction.YoloV3BboxPredictionCount * categories.Length)
    {
        throw new ArgumentException();
    }

    List<YoloV3Result> results = new List<YoloV3Result>();

    // Concat size is 'nbox' x 3 (batch_index, class_index, box_index)
    int resultsCount = prediction.Concat.Length / 3;
    for (int c = 0; c < resultsCount; c++)
    {
        var res = prediction.Concat.Skip(c * 3).Take(3).ToArray();

        var batch_index = res[0];
        var class_index = res[1];
        var box_index = res[2];

        var label = categories[class_index];
        var bbox = new float[]
        {
            prediction.Boxes[box_index * 4],
            prediction.Boxes[box_index * 4 + 1],
            prediction.Boxes[box_index * 4 + 2],
            prediction.Boxes[box_index * 4 + 3],
        };
        var score = prediction.Scores[box_index + class_index * YoloV3Prediction.YoloV3BboxPredictionCount];

        results.Add(new YoloV3Result(bbox, label, score));
    }

    return results;
}
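The YoloV3Result class is not shown above; here is a minimal sketch of what it could look like, inferred from how it is used in GetResults and in the drawing code further down (the exact class in the linked repo may differ):
public class YoloV3Result
{
    public YoloV3Result(float[] bbox, string label, float confidence)
    {
        BBox = bbox;
        Label = label;
        Confidence = confidence;
    }

    /// <summary>
    /// Box coordinates as (y1, x1, y2, x2), as returned by this model.
    /// </summary>
    public float[] BBox { get; }

    /// <summary>
    /// Class label of the detection.
    /// </summary>
    public string Label { get; }

    /// <summary>
    /// Confidence score of the detection.
    /// </summary>
    public float Confidence { get; }
}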
In this version of the model, there are 80 classes (see the model's GitHub documentation for the link).
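The classesNames array used below is just the list of these 80 class names; for example, it could be loaded from a plain-text file with one name per line (the file name and path here are assumptions):
// Hypothetical path: a text file containing the 80 COCO class names, one per line.
string[] classesNames = File.ReadAllLines(@"D:\coco_classes.txt");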
You can use the above like this:
// load image
string imageName = "dog_cat.jpg";
using (var bitmap = new Bitmap(Image.FromFile(Path.Combine(imageFolder, imageName))))
{
    // predict
    var predict = predictionEngine.Predict(new YoloV3BitmapData() { Image = bitmap });
    var results = GetResults(predict, classesNames);

    // draw predictions
    using (var g = Graphics.FromImage(bitmap))
    {
        foreach (var result in results)
        {
            var y1 = result.BBox[0];
            var x1 = result.BBox[1];
            var y2 = result.BBox[2];
            var x2 = result.BBox[3];

            g.DrawRectangle(Pens.Red, x1, y1, x2 - x1, y2 - y1);
            using (var brushes = new SolidBrush(Color.FromArgb(50, Color.Red)))
            {
                g.FillRectangle(brushes, x1, y1, x2 - x1, y2 - y1);
            }

            g.DrawString(result.Label + " " + result.Confidence.ToString("0.00"),
                new Font("Arial", 12), Brushes.Blue, new PointF(x1, y1));
        }

        bitmap.Save(Path.Combine(imageOutputFolder, Path.ChangeExtension(imageName, "_processed" + Path.GetExtension(imageName))));
    }
}
You can find a result example here.

Related

Passing image to tflite model

I loaded a TensorFlow Lite model in Flutter but I am having some problems passing the image to the model for prediction. The prediction method accepts any object, but Flutter reads images as Files since I am using the Image Picker class, and I can't find a way to convert the File into an image so that I can convert it into the (28, 28, 1) shape required by the model.
Thanks, any help would be appreciated.
Uint8List imageToByteListFloat32(
img.Image image, int inputSize, double mean, double std) {
var convertedBytes = Float32List(1 * inputSize * inputSize * 3);
var buffer = Float32List.view(convertedBytes.buffer);
int pixelIndex = 0;
for (var i = 0; i < inputSize; i++) {
for (var j = 0; j < inputSize; j++) {
var pixel = image.getPixel(j, i);
buffer[pixelIndex++] = (img.getRed(pixel) - mean) / std;
buffer[pixelIndex++] = (img.getGreen(pixel) - mean) / std;
buffer[pixelIndex++] = (img.getBlue(pixel) - mean) / std;
}
}
return convertedBytes.buffer.asUint8List();
}
classifyImage(File image) async {
var output = await Tflite.runModelOnBinary(
binary: imageToByteListFloat32(image, 224, 127.5, 127.5),// required
numResults: 10, // defaults to 5
threshold: 0.05, // defaults to 0.1
asynch: true // defaults to true
);
setState(() {
_loading = false;
_outputs = output;
});
}
You can use the Dart image package to decode an image from a file:
import 'package:image/image.dart' as imageLib;
var image = imageLib.decodeImage(await imageFile.readAsBytes());
Also, this GitHub repo from shaqian might be helpful for you:
https://github.com/shaqian/flutter_tflite/blob/master/example/lib/main.dart

Screen Size of the camera on the example object detection of tensorflow lite

In the TensorFlow Lite object detection example, the camera preview doesn't take up the whole screen, only part of it.
I tried to find some constant in the CameraActivity, CameraConnectionFragment and Size classes, but with no results.
So I just want a way to make the camera fill the whole screen, or at least an explanation.
Thank you.
I just found the solution; it's in the CameraConnectionFragment class:
protected static Size chooseOptimalSize(final Size[] choices, final int width, final int height) {
final int minSize = Math.max(Math.min(width, height), MINIMUM_PREVIEW_SIZE);
final Size desiredSize = new Size(1280, 720);
// Collect the supported resolutions that are at least as big as the preview Surface
boolean exactSizeFound = false;
final List<Size> bigEnough = new ArrayList<Size>();
final List<Size> tooSmall = new ArrayList<Size>();
for (final Size option : choices) {
if (option.equals(desiredSize)) {
// Set the size but don't return yet so that remaining sizes will still be logged.
exactSizeFound = true;
}
if (option.getHeight() >= minSize && option.getWidth() >= minSize) {
bigEnough.add(option);
} else {
tooSmall.add(option);
}
}
Just replace 1280, 720 with the size you want (for example, the device's screen resolution).

First Person Camera controls

I have recently started a 3D first-person shooter game in MonoGame and I am having some issues with the camera controls: I am unable to figure out how to make the camera slowly turn on its X axis when I hold down the left/right arrow keys.
At the minute, the code I have is as follows:
Matrix view = Matrix.CreateLookAt(new Vector3(60, 20, 10), new Vector3(0, 0, 0), Vector3.UnitZ);
Matrix projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45), 800f / 600f, 0.1f, 100f);
And then down in the update section I have this:
if (kb.IsKeyDown(Keys.Left))
{
view = Matrix.CreateLookAt(new Vector3(60, 20, 10), new Vector3(-2, -2, -2), Vector3.UnitZ);
}
The issue is that at the minute this code simply moves the camera to the side a little and then stops. I am unsure how to keep it moving until I let go of the key.
The entirety of my code is shown below in case I forgot something (the floor verts currently don't work, and the ship-related names come from the tutorial I was working from):
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
namespace Game1
{
/// <summary>
/// This is the main type for your game.
/// </summary>
public class Game1 : Game
{
GraphicsDeviceManager graphics;
SpriteBatch spriteBatch;
Model model;
Vector3 ship1Location = new Vector3(40, 0, 0);
Vector3 ship2Location = new Vector3(20, 0, 0);
Matrix view = Matrix.CreateLookAt(new Vector3(60, 20, 10), new Vector3(0, 0, 0), Vector3.UnitZ);
Matrix projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45), 800f / 600f, 0.1f, 100f);
VertexPositionTexture[] floorVerts;
BasicEffect effect;
public Game1()
{
graphics = new GraphicsDeviceManager(this);
Content.RootDirectory = "Content";
}
protected override void Initialize()
{
floorVerts = new VertexPositionTexture[6];
floorVerts[0].Position = new Vector3(-20, -20, 0);
floorVerts[1].Position = new Vector3(-20, 20, 0);
floorVerts[2].Position = new Vector3(20, -20, 0);
floorVerts[3].Position = floorVerts[1].Position;
floorVerts[4].Position = new Vector3(20, 20, 0);
floorVerts[5].Position = floorVerts[2].Position;
effect = new BasicEffect(graphics.GraphicsDevice);
base.Initialize();
}
protected override void LoadContent()
{
spriteBatch = new SpriteBatch(GraphicsDevice);
model = Content.Load<Model>("health2");
}
protected override void UnloadContent()
{
}
protected override void Update(GameTime gameTime)
{
// Allows the game to exit
if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
this.Exit();
Matrix ship1WorldMatrix = Matrix.CreateTranslation(ship1Location);
Matrix ship2WorldMatrix = Matrix.CreateTranslation(ship2Location);
if (IsCollision(model, ship1WorldMatrix, model, ship2WorldMatrix))
{
ship1Location = new Vector3(0, -20, 0);
}
KeyboardState kb = Keyboard.GetState();
if (kb.IsKeyDown(Keys.A))
{
ship1Location += new Vector3(-0.1f, 0, 0);
}
if (kb.IsKeyDown(Keys.Left))
{
view = Matrix.CreateLookAt(new Vector3(60, 20, 10), new Vector3(-2, -2, -2), Vector3.UnitZ);
}
ship2Location += new Vector3(0, 0, 0);
base.Update(gameTime);
}
private bool IsCollision(Model model1, Matrix world1, Model model2, Matrix world2)
{
for (int meshIndex1 = 0; meshIndex1 < model1.Meshes.Count; meshIndex1++)
{
BoundingSphere sphere1 = model1.Meshes[meshIndex1].BoundingSphere;
sphere1 = sphere1.Transform(world1);
for (int meshIndex2 = 0; meshIndex2 < model2.Meshes.Count; meshIndex2++)
{
BoundingSphere sphere2 = model2.Meshes[meshIndex2].BoundingSphere;
sphere2 = sphere2.Transform(world2);
if (sphere1.Intersects(sphere2))
return true;
}
}
return false;
}
protected override void Draw(GameTime gameTime)
{
GraphicsDevice.Clear(Color.CornflowerBlue);
DrawGround();
Matrix ship1WorldMatrix = Matrix.CreateTranslation(ship1Location);
Matrix ship2WorldMatrix = Matrix.CreateTranslation(ship2Location);
DrawModel(model, ship1WorldMatrix, view, projection);
DrawModel(model, ship2WorldMatrix, view, projection);
base.Draw(gameTime);
}
void DrawGround()
{
// The assignment of effect.View and effect.Projection
// are nearly identical to the code in the Model drawing code.
float aspectRatio =
graphics.PreferredBackBufferWidth / (float)graphics.PreferredBackBufferHeight;
float fieldOfView = Microsoft.Xna.Framework.MathHelper.PiOver4;
float nearClipPlane = 1;
float farClipPlane = 200;
effect.Projection = Matrix.CreatePerspectiveFieldOfView(
fieldOfView, aspectRatio, nearClipPlane, farClipPlane);
foreach (var pass in effect.CurrentTechnique.Passes)
{
pass.Apply();
graphics.GraphicsDevice.DrawUserPrimitives(
// We'll be rendering two triangles
PrimitiveType.TriangleList,
// The array of verts that we want to render
floorVerts,
// The offset, which is 0 since we want to start
// at the beginning of the floorVerts array
0,
// The number of triangles to draw
2);
}
}
private void DrawModel(Model model, Matrix world, Matrix view, Matrix projection)
{
foreach (ModelMesh mesh in model.Meshes)
{
foreach (BasicEffect effect in mesh.Effects)
{
effect.AmbientLightColor = new Vector3(2f, 0, 0);
effect.World = world;
effect.View = view;
effect.Projection = projection;
}
mesh.Draw();
}
}
}
}
Since the camera's position and the point it is looking at are necessary parameters to create a view matrix, you can simply rotate (think orbit) the LookAt camLookAt around the camPosition like this:
// declare class scope variables
Vector3 camPosition = new Vector3(60, 20, 10); // your starting camera position
Vector3 camLookAt = Vector3.Zero;              // your starting camera focus point (look at)
Vector3 camUp = Vector3.Up;
float camYawRate = 0.004f;                     // set to taste

// in the Update method
float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;

// later in the method...
if (kb.IsKeyDown(Keys.Left))
{
    // remove the - sign from camYawRate to rotate to the right (or vice versa)
    camLookAt = Vector3.Transform(camLookAt - camPosition, Matrix.CreateRotationY(-camYawRate * elapsed)) + camPosition;
    view = Matrix.CreateLookAt(camPosition, camLookAt, camUp);
}
And that's it, give it a shot. Add another similar block to rotate to the right.
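For example, a sketch of that second block, mirroring the left-rotation code above (only the key and the sign of camYawRate change):
if (kb.IsKeyDown(Keys.Right))
{
    // same orbit as above, but with a positive yaw rate so the camera turns the other way
    camLookAt = Vector3.Transform(camLookAt - camPosition, Matrix.CreateRotationY(camYawRate * elapsed)) + camPosition;
    view = Matrix.CreateLookAt(camPosition, camLookAt, camUp);
}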

Draw the closest bone (Shoulder - Head) of 2 tracked skeletons, Kinect SDK v1.6

I'm looking for a solution to draw the bone of the closest skeleton. I wrote code that draws the first tracked skeleton's bone. If the second person is closer than the first, I want to draw the second skeleton's bone instead. Maybe somebody has an idea how to do it?
void sensor_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())
{
// check for frame drop.
if (skeletonFrame == null)
{
return;
}
// copy the frame data in to the collection
skeletonFrame.CopySkeletonDataTo(totalSkeleton);
// get the first Tracked skeleton
skeleton = (from trackskeleton in totalSkeleton
where trackskeleton.TrackingState == SkeletonTrackingState.Tracked
select trackskeleton).FirstOrDefault();
if (skeleton != null)
{
if (skeleton.Joints[JointType.ShoulderCenter].TrackingState == JointTrackingState.Tracked && skeleton.Joints[JointType.Head].TrackingState == JointTrackingState.Tracked)
{
myCanvas.Children.Clear();
this.DrawHead();
}
}
}
}
// Draws the head.
private void DrawHead()
{
if (skeleton != null)
{
drawBone(skeleton.Joints[JointType.ShoulderCenter], skeleton.Joints[JointType.Head]);
}
}
// Draws the bone.
void drawBone(Joint trackedJoint1, Joint trackedJoint2)
{
Line skeletonBone = new Line();
skeletonBone.Stroke = Brushes.Black;
skeletonBone.StrokeThickness = 3;
Point joint1 = this.ScalePosition(trackedJoint1.Position);
skeletonBone.X1 = joint1.X;
skeletonBone.Y1 = joint1.Y;
Point joint2 = this.ScalePosition(trackedJoint2.Position);
skeletonBone.X2 = joint2.X;
skeletonBone.Y2 = joint2.Y;
myCanvas.Children.Add(skeletonBone);
}
/// <summary>
/// Scales the position.
/// </summary>
/// <param name="skeletonPoint">The skeleton point.</param>
/// <returns></returns>
private Point ScalePosition(SkeletonPoint skeletonPoint)
{
// return the depth points from the skeleton point
DepthImagePoint depthPoint = this.sensor.CoordinateMapper.MapSkeletonPointToDepthPoint(skeletonPoint, DepthImageFormat.Resolution640x480Fps30);
return new Point(depthPoint.X, depthPoint.Y);
}
Define a certain depth threshold and, if the bone is within that threshold, draw the joints' x,y coordinates on the canvas. You will have to add code to update the position of the sprite accordingly. One way to select the closest of the tracked skeletons is shown in the sketch below.
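A minimal sketch, assuming the same SkeletonFrameReady handler as in the question: instead of taking the first tracked skeleton, order the tracked skeletons by their Position.Z depth so the one closest to the sensor is drawn.
// Pick the tracked skeleton closest to the sensor (smallest Position.Z, in meters)
// instead of simply taking the first tracked one.
skeleton = (from trackskeleton in totalSkeleton
            where trackskeleton.TrackingState == SkeletonTrackingState.Tracked
            orderby trackskeleton.Position.Z
            select trackskeleton).FirstOrDefault();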

I need an algorithm that can fit n rectangles of any size in a larger one minimizing its area

I need an algorithm that would take n rectangles of any sizes and compute a rectangle big enough to fit them all, minimizing its area so that the wasted space is minimal, and also returning the position of each of the smaller rectangles within it.
The specific task I need this for is a sprite sheet compiler that takes individual PNG files and builds one large PNG containing all the images, so that individual frames can be blitted from this surface at run time.
A nice-to-have feature would be that it aims for a specific given width/height ratio, but it's not mandatory.
I'd prefer simple, generic code I can port to another language.
This is what I put together for my own needs. The T parameter is whatever object you want associated with the results (think of it like the Tag property). It takes a list of sizes and returns a list of Rects arranged within the computed bounding rectangle.
static class LayoutHelper
{
/// <summary>
/// Determines the best fit of a List of Sizes, into the desired rectangle shape
/// </summary>
/// <typeparam name="T">Holder for an associated object (e.g., window, UserControl, etc.)</typeparam>
/// <param name="desiredWidthToHeightRatio">the target rectangle shape</param>
/// <param name="rectsToArrange">List of sizes that have to fit in the rectangle</param>
/// <param name="lossiness">1 = non-lossy (slow). Greater numbers improve speed, but miss some best fits</param>
/// <returns>list of arranged rects</returns>
static public List<Tuple<T, Rect>> BestFitRects<T>(double desiredWidthToHeightRatio,
List<Tuple<Size, T>> rectsToArrange, int lossiness = 10)
{
// helper anonymous function that tests for rectangle intersections or boundary violations
var CheckIfRectsIntersect = new Func<Rect, List<Rect>, double, bool>((one, list, containerHeight) =>
{
if (one.Y + one.Height > containerHeight) return true;
return list.Any(two =>
{
if ((one.Top > two.Bottom) ||
(one.Bottom < two.Top) ||
(one.Left > two.Right) ||
(one.Right < two.Left)) return false; // no intersection
return true; // intersection found
});
});
// helper anonymous function for adding drop points
var AddNewPotentialDropPoints = new Action<SortedDictionary<Point, object>, Rect>(
(potentialDropPoints, newRect) =>
{
// Only two locations make sense for placing a new rectangle, underneath the
// bottom left corner or to the right of a top right corner
potentialDropPoints[new Point(newRect.X + newRect.Width + 1,
newRect.Y)] = null;
potentialDropPoints[new Point(newRect.X,
newRect.Y + newRect.Height + 1)] = null;
});
var sync = new object();
// the outer boundary that limits how high the rectangles can stack vertically
var containingRectHeight = Convert.ToInt32(rectsToArrange.Max(a => a.Item1.Height));
// always try packing using the tallest rectangle first, working down in height
var largestToSmallest = rectsToArrange.OrderByDescending(a => a.Item1.Height).ToList();
// find the maximum possible container height needed
var totalHeight = Convert.ToInt32(rectsToArrange.Sum(a => a.Item1.Height));
List<Tuple<T, Rect>> bestResults = null;
// used to find the best packing arrangement that approximates the target container dimensions ratio
var bestResultsProximityToDesiredRatio = double.MaxValue;
// try all arrangements for all suitable container sizes
Parallel.For(0, ((totalHeight + 1) - containingRectHeight) / lossiness,
//new ParallelOptions() { MaxDegreeOfParallelism = 1},
currentHeight =>
{
var potentialDropPoints = new SortedDictionary<Point, object>(Comparer<Point>.Create((p1, p2) =>
{
// choose the leftmost, then highest point as earlier in the sort order
if (p1.X != p2.X) return p1.X.CompareTo(p2.X);
return p1.Y.CompareTo(p2.Y);
}));
var localResults = new List<Tuple<T, Rect>>();
// iterate through the rectangles from largest to smallest
largestToSmallest.ForEach(currentSize =>
{
// check to see if the next rectangle fits in with the currently arranged rectangles
if (!potentialDropPoints.Any(dropPoint =>
{
var workingPoint = dropPoint.Key;
Rect? lastFittingRect = null;
var lowY = workingPoint.Y;
var highY = workingPoint.Y - 1;
var boundaryFound = false;
// check if it fits in the current arrangement of rects
do
{
// create a positioned rectangle out of the size dimensions
var workingRect = new Rect(workingPoint,
new Point(workingPoint.X + currentSize.Item1.Width,
workingPoint.Y + currentSize.Item1.Height));
// keep moving it up in binary search fashion until it bumps the higher rect
if (!CheckIfRectsIntersect(workingRect, localResults.Select(a => a.Item2).ToList(),
containingRectHeight + (currentHeight * lossiness)))
{
lastFittingRect = workingRect;
if (!boundaryFound)
{
highY = Math.Max(lowY - ((lowY - highY) * 2), 0);
if (highY == 0) boundaryFound = true;
}
else
{
lowY = workingPoint.Y;
}
}
else
{
boundaryFound = true;
highY = workingPoint.Y;
}
workingPoint = new Point(workingPoint.X, lowY - (lowY - highY) / 2);
} while (lowY - highY > 1);
if (lastFittingRect.HasValue) // found the sweet spot for this rect
{
var newRect = lastFittingRect.Value;
potentialDropPoints.Remove(dropPoint.Key);
// successfully found the best location for the new rectangle, so add it to the pending results
localResults.Add(Tuple.Create(currentSize.Item2, newRect));
AddNewPotentialDropPoints(potentialDropPoints, newRect);
return true;
}
return false;
}))
{
// this only occurs on the first square
var newRect = new Rect(0, 0, currentSize.Item1.Width, currentSize.Item1.Height);
localResults.Add(Tuple.Create(currentSize.Item2, newRect));
AddNewPotentialDropPoints(potentialDropPoints, newRect);
}
});
// layout is complete, now see if this layout is the best one found so far
var layoutHeight = localResults.Max(a => a.Item2.Y + a.Item2.Height);
var layoutWidth = localResults.Max(a => a.Item2.X + a.Item2.Width);
var widthMatchingDesiredRatio = desiredWidthToHeightRatio * layoutHeight;
double ratioProximity;
if (layoutWidth < widthMatchingDesiredRatio)
ratioProximity = widthMatchingDesiredRatio / layoutWidth;
else
ratioProximity = layoutWidth / widthMatchingDesiredRatio;
lock (sync)
{
if (ratioProximity < bestResultsProximityToDesiredRatio)
{
// this layout is the best approximation of the desired container dimensions, so far
bestResults = localResults;
bestResultsProximityToDesiredRatio = ratioProximity;
}
}
});
return bestResults ?? new List<Tuple<T, Rect>>() {Tuple.Create(rectsToArrange[0].Item2,
new Rect(new Point(0, 0), new Point(rectsToArrange[0].Item1.Width, rectsToArrange[0].Item1.Height))) };
}
}
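A usage sketch (the sprite names and sizes below are made up for illustration, and the same System.Windows Size/Rect types as the class above are assumed): pack a few sizes into a roughly square sheet and print where each one ended up.
// Pack three hypothetical sprite sizes into a sheet with a ~1:1 width/height ratio.
var sprites = new List<Tuple<Size, string>>
{
    Tuple.Create(new Size(64, 128), "player.png"),
    Tuple.Create(new Size(32, 32), "coin.png"),
    Tuple.Create(new Size(96, 48), "platform.png"),
};

List<Tuple<string, Rect>> packed = LayoutHelper.BestFitRects(1.0, sprites);

foreach (var item in packed)
{
    Console.WriteLine(item.Item1 + " -> " + item.Item2);
}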