openFrameworks ofSaveFrame() returning black images - rendering

I'm attempting to save individual frames of an openFrameworks sketch to compile into a movie later. I'm calling ofSaveFrame() in the draw() section of the code, but for some reason all the resulting .png files are black. Any ideas?
Thank you in advance for your help

You'll have to locate two files in your project: ofGLRenderer.cpp and ofGLProgrammableRenderer.cpp. I changed both ofGLRenderer::saveScreen() and ofGLProgrammableRenderer::saveScreen() to look like this:
#ifndef TARGET_OPENGLES
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    pixels.allocate(w, h, OF_PIXELS_RGB);
    if(isVFlipped()){
        y = sh - y;  // top/bottom issues: flip y for GL's bottom-left origin
        y -= h;
    }
    glReadPixels(x, y, w, h, GL_RGB, GL_UNSIGNED_BYTE, pixels.getData()); // read the memory....
#else
    // (OpenGL ES branch omitted here)
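If patching the renderer isn't appealing, a lighter-weight alternative is to grab the screen into an ofImage at the end of draw() and save that. This is a sketch, assuming a recent openFrameworks where ofImage has grabScreen() and save():

void ofApp::draw(){
    // ... draw your frame as usual ...

    // Grab the visible framebuffer into an image and save it;
    // ofToString(ofGetFrameNum(), 4, '0') zero-pads a per-frame filename.
    ofImage img;
    img.grabScreen(0, 0, ofGetWidth(), ofGetHeight());
    img.save("frames/frame_" + ofToString(ofGetFrameNum(), 4, '0') + ".png");
}

Saving every frame this way stalls the GPU pipeline, so expect a reduced frame rate while recording.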

Related

finding bounding box of centroid with limited information

I have detected blob keypoints in OpenCV C++. The centroid displays fine. How do I then draw a bounding box around the detected blob if I only have the blob's center coordinates? I can't work backwards from the center because there are too many unknowns (or so I believe).
threshold(imageUndistorted, binary_image, 30, 255, THRESH_BINARY);
Ptr<SimpleBlobDetector> detector = SimpleBlobDetector::create(params);
// Detect blob
detector->detect(binary_image, binary_keypoints);
drawKeypoints(binary_image, binary_keypoints, bin_image_keypoints, Scalar(0, 0, 255), DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
//draw BBox ?
What am I overlooking to draw the bounding box around the single blob?
I said:
I can't work backwards from center because of too many unknowns(or so I believe).
The information is not so limited if the blob size is used: each KeyPoint's size member gives the diameter of the blob in question. There can be inaccurate results with highly asymmetric or lopsided targets, but this worked well for me because I was tracking spheroid objects. Moments are probably the better approach for asymmetrical targets.
keypoints[i].size should not be confused with keypoints.size(): the latter counts the KeyPoint objects in the vector, while the former is the blob's diameter. I use both.
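To make the distinction concrete, a short sketch (assuming a std::vector<cv::KeyPoint> named keypoints with at least one detection):

size_t nBlobs   = keypoints.size();     // how many blobs were detected
float  diameter = keypoints[0].size;    // diameter of the first blob, in pixels
float  r        = diameter / 2.0f;      // radius, used for the corners below
float  ctr_x    = keypoints[0].pt.x;    // blob center
float  ctr_y    = keypoints[0].pt.y;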
Using the diameter I can then calculate the rest with no problem:
float TLx = ctr_x - r; // top-left corner
float TLy = ctr_y - r;
float BRx = ctr_x + r; // bottom-right corner
float BRy = ctr_y + r;
Point TLp(TLx - 10, TLy - 10); // works fine without the 10 px padding, but the box is more visible with it
Point BRp(BRx + 10, BRy + 10); // same here
std::cout << "Top Left: " << TLp << std::endl << "Bottom Right: " << BRp << std::endl;
cv::rectangle(bin_with_keypoints, TLp, BRp, cv::Scalar(0, 255, 0));
imshow("With Green Bounding Box", bin_with_keypoints);
TLp is the top-left point, with a 10 px adjustment to make the box bigger; BRp is the bottom-right point. TLx and TLy (and likewise BRx and BRy) are calculated from the blob's center coordinates. If you are going to track multiple targets I would suggest the contours approach (with moments), sketched below; I only have one or two blobs to keep track of, which is a lot easier and keeps resource usage down.
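For the contours route mentioned above, a sketch (assuming the same binary_image and drawing target) might look like this:

std::vector<std::vector<cv::Point>> contours;
cv::findContours(binary_image, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
for (const auto& c : contours) {
    cv::Rect box = cv::boundingRect(c);     // tight axis-aligned bounding box
    cv::Moments m = cv::moments(c);         // m.m00 is the contour area
    if (m.m00 == 0) continue;               // skip degenerate contours
    cv::Point centroid(int(m.m10 / m.m00), int(m.m01 / m.m00));
    cv::rectangle(bin_with_keypoints, box, cv::Scalar(0, 255, 0));
}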
The rectangle drawing function can also work with a Rect (where the width and height are the blob diameter, keypoints[i].size):
cv::Rect rect(TLp, BRp); // equivalently: cv::Rect rect(TLx, TLy, diameter, diameter)
cv::rectangle(bin_with_keypoints, rect, cv::Scalar(0, 255, 0));
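Putting it all together, a minimal end-to-end sketch; the file name and the threshold value 30 are assumptions, and SimpleBlobDetector::Params will need tuning for your images:

#include <opencv2/opencv.hpp>
using namespace cv;

int main() {
    Mat gray = imread("blobs.png", IMREAD_GRAYSCALE);
    Mat binary_image;
    threshold(gray, binary_image, 30, 255, THRESH_BINARY);

    SimpleBlobDetector::Params params;              // defaults; tune as needed
    Ptr<SimpleBlobDetector> detector = SimpleBlobDetector::create(params);
    std::vector<KeyPoint> keypoints;
    detector->detect(binary_image, keypoints);

    Mat out;
    cvtColor(binary_image, out, COLOR_GRAY2BGR);    // color image so the box can be green
    for (const KeyPoint& kp : keypoints) {
        float r = kp.size / 2.0f;                   // kp.size is the blob diameter
        Point TLp(cvRound(kp.pt.x - r), cvRound(kp.pt.y - r));
        Point BRp(cvRound(kp.pt.x + r), cvRound(kp.pt.y + r));
        rectangle(out, TLp, BRp, Scalar(0, 255, 0));
    }
    imshow("With Green Bounding Box", out);
    waitKey(0);
    return 0;
}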

How to calculate the length in mm of a string in a PDF document created with jsPDF library?

I use the jsPDF library to create and print a PDF document. The library exposes low-level methods, which are fine, but I have tons of fields to create, many of them similar, so I need to build higher-level abstractions.
For example, I have a createLabel function that I want to call instead of this low-level stuff:
var doc = new jsPDF('portrait', 'mm', 'a4');
doc.addFont('Arial', "sans-serif", "normal");
// name
doc.setFontSize(14);
doc.text(10, 19, "name:");
doc.setLineWidth(0.1);
doc.line(25, 19, 100, 19); // meaning x1, y1, x2, y2
// CUI
doc.setFontSize(14);
doc.text(10, 29, "CUI:");
doc.setLineWidth(0.1);
doc.line(21, 29, 100, 29);
// same stuff but use functions instead.
createLabel("name: ", 10,50, 100); // meaning (labelName, x, y, totalWidth)
createLabel("CUI: ", 10,60, 100);
As you can see, the lines for the second group of labels are not placed in the right position; they sit too far to the left. Their starting position is derived from the length of labelName, and that length calculation fails. How can I make this work properly? The code so far is:
function createLabel(name, x, y, totalWidth) {
    // draw name
    doc.setFontSize(14);
    doc.text(x, y, name);
    // draw line
    const nameLength = measureLength(name) + 2;
    doc.setLineWidth(0.1);
    // I want to start the line where the name ends + 2 mm,
    // and end it so that nameLength + lineLength == totalWidth of the component.
    doc.line(x + nameLength, y, x + totalWidth, y);
}
function measureLength(str) {
    let canvas = document.createElement('canvas'); // in-memory canvas, not rendered anywhere
    let ctx = canvas.getContext("2d");
    ctx.font = "14px Arial";
    let width = ctx.measureText(str).width;
    let mm = (width * 25.4) / 149; // meaning (px * 25.4) / screen DPI
    console.log(mm);
    return mm; // of course, this calculation turns out wrong..
}
How can I make this measureLength function work correctly? Most solutions I found involve the DOM, but this is a PDF.
Note: I use the same font (14px Arial) for the PDF document and for the canvas. jsPDF live demo.
Any insight is appreciated, thanks :)
This might resolve your problem:
function createLabel(name, x, y, totalWidth) {
    doc.setFontSize(14);
    doc.text(x, y, name);
    // draw line
    // getTextDimensions() measures in points: 1 pt = 1/72 inch and
    // 1 inch = 25.4 mm, so dividing by (72 / 25.4) converts points to mm.
    const nameLength = (doc.getTextDimensions(name).w / (72 / 25.4)) + 2;
    console.log('nameLength', nameLength); // todo remove
    doc.setLineWidth(0.1);
    // start the line where the name ends + 2 mm,
    // and end it so that nameLength + lineLength == totalWidth of the component.
    doc.line(x + nameLength, y, x + totalWidth, y);
}
Check how I calculate nameLength: it uses the built-in jsPDF getTextDimensions() function and converts the result from points to mm (mm = pt * 25.4 / 72, i.e. roughly pt / 2.835).
Helpful links:
how to calculate text size
why the calculation might sometimes be off by a few pixels
Remember that you pass x + totalWidth as the line's end coordinate, so the lines are longer by x compared to the manual example at the top.

How do I use the scanCrop property of a ZBar reader?

I am using the ZBar SDK for iPhone in order to scan a barcode. I want the reader to scan only a specific rectangle instead of the whole view; doing that requires setting the scanCrop property of the reader to the desired rectangle.
I'm having a hard time understanding the rectangle parameter that has to be set.
Can someone please tell me what rect I should give as an argument if, in the portrait view, its coordinates would be CGRectMake(A, B, C, D)?
From ZBar's ZBarReaderView class documentation:
CGRect scanCrop
The region of the video image that will be scanned, in normalized image coordinates. Note that the video image is in landscape mode (default {{0, 0}, {1, 1}})
All of the coordinate arguments are normalized floats in the range 0 to 1. In normalized values, theView.width is 1.0 and theView.height is 1.0; therefore the default rect is {{0,0},{1,1}}.
So, for example, suppose I have a transparent UIView named scanView as the scanning region for my readerView. Rather than doing:
readerView.scanCrop = scanView.frame;
we should normalize every argument first:
CGFloat x,y,width,height;
x = scanView.frame.origin.x / readerView.bounds.size.width;
y = scanView.frame.origin.y / readerView.bounds.size.height;
width = scanView.frame.size.width / readerView.bounds.size.width;
height = scanView.frame.size.height / readerView.bounds.size.height;
readerView.scanCrop = CGRectMake(x, y, width, height);
It works for me. Hope that helps.
You can set the scan crop area by doing this:
reader.scanCrop = CGRectMake(x, y, width, height);
For example:
reader.scanCrop = CGRectMake(0.25, 0.25, 0.5, 0.45);
I used this and it's working for me.
This is the right way to adjust the crop area; I had wasted tons of time on it:
readerView.scanCrop = [self getScanCrop:cropRect readerViewBounds:contentView.bounds];
- (CGRect)getScanCrop:(CGRect)rect readerViewBounds:(CGRect)rvBounds
{
    // The video image is in landscape while the view is in portrait, so the
    // x/y and width/height pairs swap roles when normalizing.
    CGFloat x, y, width, height;
    x = rect.origin.y / rvBounds.size.height;
    y = 1 - (rect.origin.x + rect.size.width) / rvBounds.size.width;
    width = rect.size.height / rvBounds.size.height;
    height = rect.size.width / rvBounds.size.width;
    return CGRectMake(x, y, width, height);
}
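To sanity-check that mapping, here is the same arithmetic as a tiny standalone C++ sketch (our own Rect struct, not CoreGraphics), with one worked example:

struct Rect { float x, y, w, h; };

// Portrait view coordinates -> normalized landscape image coordinates,
// mirroring getScanCrop: above.
Rect scanCropFor(Rect r, Rect view) {
    Rect out;
    out.x = r.y / view.h;                  // portrait y becomes landscape x
    out.y = 1.0f - (r.x + r.w) / view.w;   // portrait x, flipped
    out.w = r.h / view.h;                  // width comes from portrait height
    out.h = r.w / view.w;                  // height comes from portrait width
    return out;
}

// e.g. a (25, 50, 50, 100) crop in a 100x200 portrait view
// maps to {0.25, 0.25, 0.5, 0.5}.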

Can't correctly rotate cylinder in OpenGL to desired position

I've got a little Objective-C utility program that renders a convex hull. (This is to troubleshoot a bug in another program that calculates the convex hull in preparation for spatial statistical analysis.) I'm trying to render a set of triangles, each with an outward-pointing vector. I can get the triangles without problems, but the vectors are driving me crazy.
I'd like the vectors to be simple cylinders. The problem is that I can't just declare coordinates for where the top and bottom of the cylinders belong in 3D (as I can for the triangles); I have to create them and then rotate and translate them from their default position along the z-axis. I've read a ton about Euler angles, axis-angle rotations, and quaternions, most of which is relevant but not aimed at what I need: most people have a set of objects and need to rotate them in response to some input, whereas I need to place each object correctly in the 3D scene.
I'm using the Cocoa3DTutorial classes to help me out, and they work great as far as I can tell, but the rotation bit is killing me.
Here is my current effort. It gives me cylinders that are located correctly, but they all point along the z-axis, as in the attached image: we are looking in the -z direction; the triangle poking out behind is not part of the hull, just there for testing/debugging; the orthogonal cylinders are coordinate axes, more or less, and the spheres are there to check that the axes are located correctly, since I have to use rotation to place those cylinders too. (And by the way, when I use that algorithm, the out-vectors fail as well, although in a different way: they come out normal to the planes, but all pointing in +z instead of some in -z.)
from Render3DDocument.m:
// Make the out-pointing vector
C3DTCylinder *outVectTube;
C3DTEntity *outVectEntity;
Point3DFloat *sideCtr = [thisSide centerOfMass];
outVectTube = [C3DTCylinder cylinderWithBase:tubeRadius top:tubeRadius height:tubeRadius*10 slices:16 stacks:16];
outVectEntity = [C3DTEntity entityWithStyle:triColor geometry:outVectTube];
Point3DFloat *outVect = [[thisSide inVect] opposite];
Point3DFloat *unitZ = [Point3DFloat pointWithX:0 Y:0 Z:1.0f];
Point3DFloat *rotAxis = [outVect crossWith:unitZ];
double rotAngle = [outVect angleWith:unitZ];
[outVectEntity setRotationX:rotAxis.x Y:rotAxis.y Z:rotAxis.z W:rotAngle];
[outVectEntity setTranslationX:sideCtr.x - ctrX Y:sideCtr.y - ctrY Z:sideCtr.z - ctrZ];
[aScene addChild:outVectEntity];
(Note that Point3DFloat is basically a vector class, and that a Side (like thisSide) is a set of four Point3DFloats, one for each vertex, and one for a vector that points towards the center of the hull).
from C3DTEntity.m:
if (_hasTransform) {
glPushMatrix();
// Translation
if ((_translation.x != 0.0) || (_translation.y != 0.0) || (_translation.z != 0.0)) {
glTranslatef(_translation.x, _translation.y, _translation.z);
}
// Scaling
if ((_scaling.x != 1.0) || (_scaling.y != 1.0) || (_scaling.z != 1.0)) {
glScalef(_scaling.x, _scaling.y, _scaling.z);
}
// Rotation
glTranslatef(-_rotationCenter.x, -_rotationCenter.y, -_rotationCenter.z);
if (_rotation.w != 0.0) {
glRotatef(_rotation.w, _rotation.x, _rotation.y, _rotation.z);
} else {
if (_rotation.x != 0.0)
glRotatef(_rotation.x, 1.0f, 0.0f, 0.0f);
if (_rotation.y != 0.0)
glRotatef(_rotation.y, 0.0f, 1.0f, 0.0f);
if (_rotation.z != 0.0)
glRotatef(_rotation.z, 0.0f, 0.0f, 1.0f);
}
glTranslatef(_rotationCenter.x, _rotationCenter.y, _rotationCenter.z);
}
I added the bit in the above code that uses a single rotation around an axis (the "if (_rotation.w != 0.0)" bit), rather than a set of three rotations. My code is likely the problem, but I can't see how.
If your outvects don't all point in the correct direction, you might have to check your triangles' winding: are they all oriented the same way?
Additionally, it might be helpful to draw a line for each outvect: use the average of the three vertices of your triangle as the origin, and draw a line a few units long (depending on your scene's scale) in the direction of the outvect. This way you can be sure that all your vectors are oriented correctly.
How do you calculate your outvects?
The problem turned out to be that glRotatef() expects degrees and I was giving it radians. In addition, the sign of the rotation was wrong for my axis: I had computed it as outVect × unitZ rather than unitZ × outVect, so the angle had to be negated. This is the corrected code:
double rotAngle = -[outVect angleWith:unitZ]; // angleWith: returns radians
[outVectEntity setRotationX:rotAxis.x Y:rotAxis.y Z:rotAxis.z W:rotAngle * 180.0 / M_PI];
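For anyone building this from scratch, the geometry reduces to a few lines. Here is a plain C++ sketch (a minimal Vec3 of our own, not the Cocoa3DTutorial classes) that computes glRotatef-ready axis-angle values carrying the cylinder's default +Z axis onto a target direction v:

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// v is assumed normalized. Computing the axis as unitZ x v (in that order)
// avoids the sign flip described above: rotating +Z about unitZ x v by the
// angle between them carries +Z onto v under OpenGL's right-hand rule.
// (Degenerate when v is parallel to +-Z: the axis is zero; handle separately.)
static void axisAngleFromZ(Vec3 v, float* angleDeg, Vec3* axis) {
    const Vec3 unitZ = { 0.0f, 0.0f, 1.0f };
    *axis = cross(unitZ, v);
    *angleDeg = std::acos(dot(unitZ, v)) * 180.0f / (float)M_PI; // degrees for glRotatef
}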
I can now see that my other program has the inVects wrong (the outVects are poking through the hull instead of pointing out from each face), so I can now track down that bug in the other program... tomorrow.

Fragment-shader blur ... how does this work?

uniform sampler2D sampler0;
uniform vec2 tc_offset[9];

void blur()
{
    vec4 sample[9];
    for (int i = 0; i < 9; ++i)
        sample[i] = texture2D(sampler0, gl_TexCoord[0].st + tc_offset[i]);
    gl_FragColor = (sample[0] + (2.0 * sample[1]) + sample[2] +
                    (2.0 * sample[3]) + sample[4] + (2.0 * sample[5]) +
                    sample[6] + (2.0 * sample[7]) + sample[8]) / 13.0;
}
How does the sample[i] = texture2D(sampler0, ...) line work?
It seems like to blur an image I have to first generate the image, yet here I'm somehow querying the very image I'm generating. How does this work?
It applies a blur kernel to the image. tc_offset needs to be properly initialized by the application to form a 3x3 area of sampling points around the actual texture coordinate:
0 0 0
0 x 0
0 0 0
(assuming x is the original coordinate). The offset for the upper-left sampling point would be (-1/width, -1/height). The offset for the center point needs to be carefully aligned to the texel center (the off-by-0.5 problem). Also, the hardware bilinear filter can be used to cheaply increase the amount of blur (by sampling between texels).
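For illustration, the host-side setup might look like this (a C/C++ sketch; prog, texWidth and texHeight are assumptions standing in for your own program handle and texture size):

// Build the 3x3 grid of texture-coordinate offsets, one texel apart,
// and upload it to the shader's tc_offset uniform.
float tcOffset[9][2];
int idx = 0;
for (int dy = -1; dy <= 1; ++dy) {
    for (int dx = -1; dx <= 1; ++dx) {
        tcOffset[idx][0] = dx / (float)texWidth;
        tcOffset[idx][1] = dy / (float)texHeight;
        ++idx;
    }
}
glUseProgram(prog);  // the program must be bound before setting its uniforms
glUniform2fv(glGetUniformLocation(prog, "tc_offset"), 9, &tcOffset[0][0]);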
The rest of the shader weights the samples: each edge neighbour counts twice, the corners and the center once, and dividing by 13 (the sum of the weights) normalizes the result. Usually the weights are precomputed as well:
for (int i = 0; i < NUM_SAMPLES; ++i) {
    result += texture2D(sampler, texcoord + offsetscaling[i].xy) * offsetscaling[i].z;
}
One way is to render your original image to a texture rather than to the screen.
You then draw a full-screen quad using this shader, with that texture as its input, to post-process the image.
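In desktop OpenGL terms that usually means a framebuffer object. A rough sketch; width, height, drawScene and drawFullScreenQuadWithBlurShader are hypothetical placeholders for your own code:

// One-time setup: a texture to render into, attached to an FBO.
GLuint fbo, colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

// Each frame:
glBindFramebuffer(GL_FRAMEBUFFER, fbo);   // 1) render the scene into the texture
drawScene();
glBindFramebuffer(GL_FRAMEBUFFER, 0);     // 2) back to the default framebuffer,
glBindTexture(GL_TEXTURE_2D, colorTex);   //    bind the result as sampler0,
drawFullScreenQuadWithBlurShader();       //    and blur it onto the screen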
As you note, in order to make a blurred image, you first need to make an image, and then blur it. This shader does (just) the second step, taking an image that was generated previously and blurring it. There needs to be additional code elsewhere to generate the original non-blurred image.