uniform sampler2D sampler0;
uniform vec2 tc_offset[9];

void blur()
{
    vec4 sample[9];
    for (int i = 0; i < 9; ++i)
        sample[i] = texture2D(sampler0, gl_TexCoord[0].st + tc_offset[i]);

    gl_FragColor = (sample[0] + (2.0 * sample[1]) + sample[2] +
                    (2.0 * sample[3]) + sample[4] + (2.0 * sample[5]) +
                    sample[6] + (2.0 * sample[7]) + sample[8]) / 13.0;
}
How does the sample[i] = texture2D(sampler0, ...) line work?
It seems like to blur an image, I have to first generate the image, yet here I'm somehow sampling the very image I'm generating. How does this work?
It applies a blur kernel to the image. tc_offset needs to be properly initialized by the application to form a 3x3 area of sampling points around the actual texture coordinate:
0 0 0
0 x 0
0 0 0
(assuming x is the original coordinate). The offset for the upper-left sampling point would be -1/width,-1/height. The offset for the center point needs to be carefully aligned to texel center (the off-by-0.5 problem). Also, the hardware bilinear filter can be used to cheaply increase the amount of blur (by sampling between texels).
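For illustration, here is a minimal sketch of how the application might fill tc_offset for a texture of size width x height (the variable names and uniform lookup are assumptions about the host code, not part of the original question):
// One texel is 1/width wide and 1/height tall in texture-coordinate space, so the
// 3x3 neighbourhood is every combination of {-1, 0, +1} texel steps around the center.
float tcOffset[9][2];
int idx = 0;
for (int y = -1; y <= 1; ++y) {
    for (int x = -1; x <= 1; ++x) {
        tcOffset[idx][0] = (float)x / (float)width;
        tcOffset[idx][1] = (float)y / (float)height;
        ++idx;
    }
}
glUniform2fv(glGetUniformLocation(program, "tc_offset"), 9, &tcOffset[0][0]);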
The rest of the shader weights the samples by their distance from the center and sums them. Usually, this weighting is precomputed as well:
for (int i = 0; i < NUM_SAMPLES; ++i) {
    result += texture2D(sampler, texcoord + offsetscaling[i].xy) * offsetscaling[i].z;
}
One way is to render your original image to a texture instead of to the screen.
You then draw a full-screen quad using this shader, with that texture as its input, to post-process the image.
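As a rough sketch of that two-pass setup (desktop GL with framebuffer objects; drawScene(), drawFullScreenQuad(), blurProgram and the texture size are placeholders, not code from the question):
GLuint colorTex, fbo;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
// Pass 1: render the original, un-blurred image into the texture.
drawScene();
// Pass 2: back to the default framebuffer; draw a full-screen quad with the blur
// shader, sampling colorTex through sampler0.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glUseProgram(blurProgram);
glBindTexture(GL_TEXTURE_2D, colorTex);
drawFullScreenQuad();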
As you note, in order to make a blurred image, you first need to make an image, and then blur it. This shader does (just) the second step, taking an image that was generated previously and blurring it. There needs to be additional code elsewhere to generate the original non-blurred image.
I have detected blob keypoints in OpenCV (C++). The centroid displays fine. How do I then draw a bounding box around the detected blob if I only have the blob center coordinates? I can't work backwards from the center because there are too many unknowns (or so I believe).
threshold(imageUndistorted, binary_image, 30, 255, THRESH_BINARY);
Ptr<SimpleBlobDetector> detector = SimpleBlobDetector::create(params);
// Detect blob
detector->detect(binary_image, binary_keypoints);
drawKeypoints(binary_image, binary_keypoints, bin_image_keypoints, Scalar(0, 0, 255), DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
//draw BBox ?
What am I overlooking to draw the bounding box around the single blob?
I said:
I can't work backwards from center because of too many unknowns(or so I believe).
The information is not as limited as you think if you use the blob size: keypoint.size returns the diameter of the blob in question. There might be some inaccurate results with highly asymmetric or lopsided targets, but this worked well for me because I used spheroid objects. Moments are probably the better approach for asymmetrical targets.
keypoint.size should not be confused with keypoints.size(). The former is the diameter of an individual keypoint; the latter is the count of keypoints in the vector. I use both.
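For instance, with the vector produced by the detection code above:
float diameter = binary_keypoints[0].size;   // diameter of the first blob, in pixels
size_t count   = binary_keypoints.size();    // number of blobs detected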
Using the diameter I can then calculate the rest with no problem:
float TLx = (ctr_x - r);
float TLy = (ctr_y - r);
float BRx = (ctr_x + r);
float BRy = (ctr_y + r);
Point TLp(TLx - 10, TLy - 10); // works fine without, but the box is more visible with the extra 10 px
Point BRp(BRx + 10, BRy + 10); // same here
std::cout << "Top Left: " << TLp << std::endl << "Bottom Right: " << BRp << std::endl;
cv::rectangle(bin_with_keypoints, TLp, BRp, cv::Scalar(0, 255, 0));
imshow("With Green Bounding Box:", bin_with_keypoints);
TLp = top-left point, with a 10 px adjustment to make the box bigger.
BRp = bottom-right point.
TLx and TLy are calculated from the blob center coordinates, as are BRx and BRy. If you are going to track multiple targets, I would suggest the contours approach (with moments), sketched just below. I only have 1-2 blobs to keep track of, which is a lot easier and keeps resource usage down.
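If you do go the contours route, a minimal sketch (reusing binary_image and bin_with_keypoints from above; note that older OpenCV versions modify the input image inside findContours) could look like this:
std::vector<std::vector<cv::Point>> contours;
cv::findContours(binary_image, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
for (const std::vector<cv::Point> &c : contours) {
    cv::Rect box = cv::boundingRect(c);
    cv::rectangle(bin_with_keypoints, box, cv::Scalar(0, 255, 0));
}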
The rectangle drawing function can also work with a cv::Rect (diameter = keypoint.size):
cv::Rect rect(TLp, BRp); // built from the two corner points
// or, from the top-left corner and the diameter: cv::Rect rect(TLx, TLy, diameter, diameter);
cv::rectangle(bin_with_keypoints, rect, cv::Scalar(0, 255, 0));
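Putting the keypoint-based approach together, a short sketch that draws a box for every keypoint returned by the detector above:
// kp.pt is the blob center, kp.size its diameter.
for (const cv::KeyPoint &kp : binary_keypoints) {
    float r = kp.size / 2.0f;
    cv::Point tl(cvRound(kp.pt.x - r), cvRound(kp.pt.y - r));
    cv::Point br(cvRound(kp.pt.x + r), cvRound(kp.pt.y + r));
    cv::rectangle(bin_with_keypoints, tl, br, cv::Scalar(0, 255, 0));
}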
I am trying to read a 12-bit grayscale (DICOM:MONOCHROME2) image. I can read DICOM RGB files fine. When I attempt to load a grayscale image into NSBitmapImageRep, I get the following error message:
Inconsistent set of values to create NSBitmapImageRep
I have the following code fragment:
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
initWithBitmapDataPlanes : nil
pixelsWide : width
pixelsHigh : height
bitsPerSample : bitsStored
samplesPerPixel : 1
hasAlpha : NO
isPlanar : NO
colorSpaceName : NSCalibratedWhiteColorSpace
bytesPerRow : width * bitsAllocated / 8
bitsPerPixel : bitsAllocated];
With these values:
width = 256
height = 256
bitsStored = 12
bitsAllocated = 16
Nothing seems inconsistent to me. I have verified that the image data is width*height*2 bytes long, so I am pretty sure it is in a 2-byte grayscale format. I have tried many variations of the parameters, but nothing works. If I change bitsPerSample to 16, the error message goes away, but I get a solid black image. The closest I have come to success is setting bitsPerPixel to zero; when I do this, I produce an image, but it is clearly incorrectly rendered (you can barely make out the original). Any suggestions would be appreciated! I have tried for a long time to get this to work and have checked Stack Overflow and the web many times. Thanks very much for any help!
SOLUTION:
After the very helpful suggestions from LEADTOOLS Support, I was able to solve my problem. Here is the code fragment that works (assuming a MONOCHROME2 DICOM image):
// If, and only if, MONOCHROME2:
NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc]
initWithBitmapDataPlanes : &pixelData
pixelsWide : width
pixelsHigh : height
bitsPerSample : bitsAllocated /*bitsStored-this will not work*/
samplesPerPixel : samplesPerPixel
hasAlpha : NO
isPlanar : NO
colorSpaceName : NSCalibratedWhiteColorSpace
bytesPerRow : width * bitsAllocated / 8
bitsPerPixel : bitsAllocated];
// Stretch the 12-bit pixel values to the full 16-bit range so the image is visible.
int scale = USHRT_MAX / largestImagePixelValue;
uint16_t *ptr = (uint16_t *)imageRep.bitmapData;
for (int i = 0; i < width * height; i++) *ptr++ *= scale;
It is important to know the Transfer Syntax (0002,0010) and the Number of Frames in the dataset. Also, try to get the value length and VR of the Pixel Data (7FE0,0010) element. Using the value length of the Pixel Data element, you can validate your size calculation for an uncompressed image.
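As a sketch of that sanity check (the variable names simply stand for the tag values, read however your toolkit exposes them):
// Expected length of uncompressed Pixel Data (7FE0,0010):
// Rows x Columns x Samples per Pixel x (Bits Allocated / 8) x Number of Frames
size_t expectedBytes = (size_t)height * width * samplesPerPixel
                     * (bitsAllocated / 8) * numberOfFrames;
// Compare expectedBytes against the value length reported for (7FE0,0010).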
As for displaying the image, you will also need the values of High Bit (0028,0102) and Pixel Representation (0028,0103). An image could be 16-bit allocated, 12-bit stored, with the high bit set to 15 and one sample per pixel; that means the 4 least significant bits of each word do not contain pixel data. Pixel Representation set to 1 means the sign bit is the high bit of each pixel sample.
In addition, you may need to apply the modality LUT transformation (Rescale Slope and Rescale Intercept for a linear transformation) when present in the dataset to prepare the data for display. At the end, you apply the VOI LUT transformation (Window Center and Window Width) to display the image.
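Assuming a linear modality LUT and a linear VOI LUT, a hedged sketch of that mapping might look like this (the function and parameter names are mine; the window formula follows the usual DICOM convention):
#include <stdint.h>
// Map one stored pixel value to an 8-bit display value.
// slope/intercept: Rescale Slope (0028,1053) / Rescale Intercept (0028,1052)
// center/width:    Window Center (0028,1050) / Window Width (0028,1051)
static unsigned char displayValue(uint16_t stored, double slope, double intercept,
                                  double center, double width)
{
    double v = slope * (double)stored + intercept;     // modality LUT (linear)
    double lo = center - 0.5 - (width - 1.0) / 2.0;    // lower window edge
    double hi = center - 0.5 + (width - 1.0) / 2.0;    // upper window edge
    if (v <= lo) return 0;
    if (v >  hi) return 255;
    return (unsigned char)(((v - (center - 0.5)) / (width - 1.0) + 0.5) * 255.0);
}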
How can I apply a filter (maybe a Kalman filter) to a CMRotationMatrix? I need to reduce the noise in the CMRotationMatrix (transformFromCMRotationMatrix) so that the resulting matrix values change smoothly.
The matrix values are then converted to X, Y coordinates; in my case I'm simulating 3D on a 2D screen like this:
// Casting matrix to x, y
vec4f_t v;
multiplyMatrixAndVector(v, projectionCameraTransform, boxMatrix);
float x = (v[0] / v[3] + 1.0f) * 0.5f;
float y = (v[1] / v[3] + 1.0f) * 0.5f;
CGPointMake(x * self.bounds.size.width, self.bounds.size.height - (y * self.bounds.size.height));
code:
// define variable
mat4f_t cameraTransform;
// start the display link loop
- (void)startDisplayLink
{
displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(onDisplayLink:)];
[displayLink setFrameInterval:1];
[displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
}
// stop the display link loop
- (void)stopDisplayLink
{
[displayLink invalidate];
displayLink = nil;
}
// event of display link
- (void)onDisplayLink:(id)sender
{
CMDeviceMotion *d = motionManager.deviceMotion;
if (d != nil) {
CMRotationMatrix r = d.attitude.rotationMatrix;
transformFromCMRotationMatrix(cameraTransform, &r);
[self setNeedsDisplay];
}
}
// called from onDisplayLink: before [self setNeedsDisplay]
void transformFromCMRotationMatrix(mat4f_t mout, const CMRotationMatrix *m)
{
mout[0] = (float)m->m11;
mout[1] = (float)m->m21;
mout[2] = (float)m->m31;
mout[3] = 0.0f;
mout[4] = (float)m->m12;
mout[5] = (float)m->m22;
mout[6] = (float)m->m32;
mout[7] = 0.0f;
mout[8] = (float)m->m13;
mout[9] = (float)m->m23;
mout[10] = (float)m->m33;
mout[11] = 0.0f;
mout[12] = 0.0f;
mout[13] = 0.0f;
mout[14] = 0.0f;
mout[15] = 1.0f;
}
// Matrix-vector and matrix-matrix multiplication routines
void multiplyMatrixAndVector(vec4f_t vout, const mat4f_t m, const vec4f_t v)
{
vout[0] = m[0]*v[0] + m[4]*v[1] + m[8]*v[2] + m[12]*v[3];
vout[1] = m[1]*v[0] + m[5]*v[1] + m[9]*v[2] + m[13]*v[3];
vout[2] = m[2]*v[0] + m[6]*v[1] + m[10]*v[2] + m[14]*v[3];
vout[3] = m[3]*v[0] + m[7]*v[1] + m[11]*v[2] + m[15]*v[3];
}
In general I would distinguish between improving the signal-to-noise ratio and smoothing the signal.
Signal Improvement
If you really want to do better than Apple's Core Motion, which already has a sensor fusion algorithm implemented, be prepared for a long-term project with an uncertain outcome. In that case you would be better off taking the raw accelerometer and gyro signals and building your own sensor fusion algorithm, but then you have to deal with a lot of problems: drift, hardware dependency on the iPhone version, hardware differences between sensors of the same generation, ... So my advice: try everything to avoid it.
Smoothing
This just means interpolating two or more signals and building a kind of average. I don't know of any suitable approach for rotation matrices directly (maybe there is one), but you can use quaternions instead (more resources: OpenGL Tutorial Using Quaternions to represent rotation or Quaternion FAQ).
The resulting quaternion of such an interpolation can be multiplied with your vector to get the projection similarly to the matrix way (you may look at Finding normal vector to iOS device for more information).
Interpolation between two unit quaternions representing rotations can be accomplished with Slerp. In practice you will use what is described as Geometric Slerp in Wikipedia. If you have two points in time t1 and t2 and the corresponding quaternions q1 and q2 and the angular distance omega between them, the formula is:
q'(q1, q2, t) = sin((1 - t) * omega) / sin(omega) * q1 + sin(t * omega) / sin(omega) * q2
t should be 0.5 because you want the average of both rotations. omega can be calculated from the dot product:
cos(omega) = q1 . q2 = w1*w2 + x1*x2 + y1*y2 + z1*z2
If this approach using two quaternions still doesn't match your needs, you can repeat it by using slerp(slerp(q1, q2), slerp(q3, q4)). Some notes:
From a performance point of view it's not that cheap to perform three sin and one arccos calls in your run loop on every frame, so you should avoid using too many points.
In your case all signals are close to each other, especially when using high sensor frequencies. You have to take care with very small angles, where 1/sin(omega) blows up; in that case use the approximation sin(x) ≈ x.
As in other filters such as a low-pass filter, the more points in time you use, the more time delay you get. So if you have frequency f you will get about a 0.5/f s delay when using two points and 1.5/f s for the double slerp.
If something appears weird, check that your resulting quaternions are unit quaternions i.e. ||q|| = 1
If you are running into performance issues you might have a look at Hacking Quaternions
The C++ project pbrt at github contains a quaternion class to get some inspiration from.
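For reference, a minimal C++ sketch of such a slerp, including the small-angle fallback mentioned above (the Quat struct is an assumption, not tied to any particular library):
#include <cmath>
struct Quat { double w, x, y, z; };
// Geometric slerp between two unit quaternions; t = 0.5 gives the average rotation.
Quat slerp(Quat q1, Quat q2, double t)
{
    double dot = q1.w*q2.w + q1.x*q2.x + q1.y*q2.y + q1.z*q2.z;   // cos(omega)
    if (dot < 0.0) {                                  // take the shorter arc
        q2.w = -q2.w; q2.x = -q2.x; q2.y = -q2.y; q2.z = -q2.z;
        dot = -dot;
    }
    double a = 1.0 - t, b = t;                        // default: plain lerp for tiny omega
    if (dot < 0.9995) {
        double omega = std::acos(dot);
        double s = std::sin(omega);
        a = std::sin((1.0 - t) * omega) / s;
        b = std::sin(t * omega) / s;
    }
    Quat q = { a*q1.w + b*q2.w, a*q1.x + b*q2.x, a*q1.y + b*q2.y, a*q1.z + b*q2.z };
    double n = std::sqrt(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z);
    q.w /= n; q.x /= n; q.y /= n; q.z /= n;           // keep ||q|| = 1
    return q;
}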
Hi everyone,
I'm an Android developer. I want to scale my image from the center of the currently displayed part of the image, using a Matrix.
So I scaled my image with a matrix and then moved it by the calculated offset.
But the application does not work correctly: it doesn't find the correct center, so when it scales, the image shifts to the right.
Why is this? I can't find the problem.
The code follows.
matrix.reset();
curScale += 0.02f;
h = orgImage.getHeight();
w = orgImage.getWidth();
matrix.postScale(curScale, curScale);
rtnBitmap = Bitmap.createBitmap(orgImage, 0, 0, w, h, matrix, true);
curImageView.setImageBitmap(rtnBitmap);
Matrix curZoomOutMatrix = new Matrix();
pointerx =(int ) ((mDisplayWidth/2 - curPosX) * curScale);
curPosX = - pointerx;
pointery = (int)((mDisplayHeight/2 - curPosY) * curScale);
curPosY = - pointery;
Log.i("ZoomOut-> posX = ", Integer.toString(curPosX));
Log.i("ZoomOut-> posY = ", Integer.toString(curPosY));
curZoomOutMatrix.postTranslate(curPosX, curPosY);
curImageView.setImageMatrix(curZoomOutMatrix);
curImageView.invalidate();
Do you have any sample code for center zoom-in and zoom-out of an ImageView with a Matrix? Can anyone explain this? Please help me.
It was my own fault.
First, I scale the image from the original, so the image becomes (width, height) * scale. Then I calculate the absolute position of the point that is displayed at the center, and move my ImageView to the calculated position. My mistake was here: when I calculated the view position, I computed it from the already-scaled position. So after scaling, the position was not <original position> * <current scale>; it was <original position> * <scale> * <current scale>, which produced a strange position.
So I reworked it to calculate the center position from the original image. The method now looks like this:
public void calculate(float offset) {
    float tmpScale = curScale - offset;
    // Recover the point of the original (unscaled) image that is currently at the screen center.
    float orgWidth = (mDisplayWidth / 2 - curPosX) / tmpScale;
    float orgHeight = (mDisplayHeight / 2 - curPosY) / tmpScale;
    // Re-project that point with the new scale to get the new translation.
    int tmpPosX = (int)(mDisplayWidth / 2 - orgWidth * curScale);
    int tmpPosY = (int)(mDisplayHeight / 2 - orgHeight * curScale);
    curPosX = tmpPosX;
    curPosY = tmpPosY;
    Matrix matrix = new Matrix();
    matrix.postTranslate(tmpPosX, tmpPosY);
    curImageView.setImageMatrix(matrix);
    curImageView.invalidate();
}
Thank you, everyone.
I've got a little Objective-C utility program that renders a convex hull. (This is to troubleshoot a bug in another program that calculates the convex hull in preparation for spatial statistical analysis.) I'm trying to render a set of triangles, each with an outward-pointing vector. I can get the triangles without problems, but the vectors are driving me crazy.
I'd like the vectors to be simple cylinders. The problem is that I can't just declare coordinates for where the top and bottom of the cylinders belong in 3D (e.g., like I can for the triangles). I have to make them and then rotate and translate them from their default position along the z-axis. I've read a ton about Euler angles, and angle-axis rotations, and quaternions, most of which is relevant, but not directed at what I need: most people have a set of objects and then need to rotate the object in response to some input. I need to place the object correctly in the 3D "scene".
I'm using the Cocoa3DTutorial classes to help me out, and they work great as far as I can tell, but the rotation bit is killing me.
Here is my current effort. It gives me cylinders that are located correctly, but they all point along the z-axis (as in the image I had attached; we are looking in the -z direction). The triangle poking out behind is not part of the hull; it is there for testing/debugging. The orthogonal cylinders are coordinate axes, more or less, and the spheres are there to make sure the axes are located correctly, since I have to use rotation to place those cylinders correctly. (And by the way, when I use that algorithm, the out-vectors fail as well, although in a different way: they come out normal to the planes, but all pointing in +z instead of some in -z.)
from Render3DDocument.m:
// Make the out-pointing vector
C3DTCylinder *outVectTube;
C3DTEntity *outVectEntity;
Point3DFloat *sideCtr = [thisSide centerOfMass];
outVectTube = [C3DTCylinder cylinderWithBase: tubeRadius top: tubeRadius height: tubeRadius*10 slices: 16 stacks: 16];
outVectEntity = [C3DTEntity entityWithStyle:triColor
geometry:outVectTube];
Point3DFloat *outVect = [[thisSide inVect] opposite];
Point3DFloat *unitZ = [Point3DFloat pointWithX:0 Y:0 Z:1.0f];
Point3DFloat *rotAxis = [outVect crossWith:unitZ];
double rotAngle = [outVect angleWith:unitZ];
[outVectEntity setRotationX: rotAxis.x
Y: rotAxis.y
Z: rotAxis.z
W: rotAngle];
[outVectEntity setTranslationX:sideCtr.x - ctrX
Y:sideCtr.y - ctrY
Z:sideCtr.z - ctrZ];
[aScene addChild:outVectEntity];
(Note that Point3DFloat is basically a vector class, and that a Side (like thisSide) is a set of four Point3DFloats, one for each vertex, and one for a vector that points towards the center of the hull).
from C3DTEntity.m:
if (_hasTransform) {
glPushMatrix();
// Translation
if ((_translation.x != 0.0) || (_translation.y != 0.0) || (_translation.z != 0.0)) {
glTranslatef(_translation.x, _translation.y, _translation.z);
}
// Scaling
if ((_scaling.x != 1.0) || (_scaling.y != 1.0) || (_scaling.z != 1.0)) {
glScalef(_scaling.x, _scaling.y, _scaling.z);
}
// Rotation
glTranslatef(-_rotationCenter.x, -_rotationCenter.y, -_rotationCenter.z);
if (_rotation.w != 0.0) {
glRotatef(_rotation.w, _rotation.x, _rotation.y, _rotation.z);
} else {
if (_rotation.x != 0.0)
glRotatef(_rotation.x, 1.0f, 0.0f, 0.0f);
if (_rotation.y != 0.0)
glRotatef(_rotation.y, 0.0f, 1.0f, 0.0f);
if (_rotation.z != 0.0)
glRotatef(_rotation.z, 0.0f, 0.0f, 1.0f);
}
glTranslatef(_rotationCenter.x, _rotationCenter.y, _rotationCenter.z);
}
I added the bit in the above code that uses a single rotation around an axis (the "if (_rotation.w != 0.0)" bit), rather than a set of three rotations. My code is likely the problem, but I can't see how.
If your outvects don't all point in the correct direction, you might have to check your triangles' winding - are they all oriented the same way?
Additionally, it might be helpful to draw a line for each outvect: use the average of the three vertices of your triangle as the origin, and draw a line a few units long (depending on your scene's scale) in the direction of the outvect. This way, you can be sure that all your vectors are oriented correctly.
How do you calculate your outvects?
The problem appears to be that glRotatef() expects degrees and I was giving it radians. In addition, clockwise rotation is taken to be positive, so the sign of the rotation was wrong. This is the corrected code:
double rotAngle = -[outVect angleWith:unitZ]; // radians
[outVectEntity setRotationX: rotAxis.x
Y: rotAxis.y
Z: rotAxis.z
W: rotAngle * 180.0 / M_PI ];
I can now see that my other program has the inVects wrong (the outVects are poking through the hull instead of pointing out from each face), and I can now track down that bug in the other program... tomorrow.