OK, I haven't found this anywhere. What should I do if I want to draw with Core Graphics per pixel? Like… I want to draw a line to pixel (45,61) and then to (46,63), instead of drawing to point (23,31) or something like that. What should I do in this case?
Should I use something like:
CGContextAddLineToPoint(context,22.5,30.5);
CGContextAddLineToPoint(context,23,31.5);
Or is there some better way?
I know about contentScaleFactor, but should I use it like this (when plotting some function, for example)?
for (int x = bounds.origin.x; x <= bounds.origin.x + bounds.size.width * [self contentScaleFactor]; x++)
    CGContextAddLineToPoint(context, x / [self contentScaleFactor], y(x / [self contentScaleFactor]));
I know the example code is not superb, but I think you'll get the idea.
I'd be very thankful for help, because I'm a bit confused by all this scale factor stuff.
Sounds like you are doing Assignment 3 from the Stanford iOS course on iTunes U? :)
I think you are on the right track, as my implementation looks very similar:
for (int x = self.bounds.origin.x; x <= self.bounds.origin.x + (self.bounds.size.width * self.contentScaleFactor); x++) {
    // get the scaled X value to ask the dataSource
    CGFloat axeX = (x / self.contentScaleFactor - self.origin.x) / self.scale;
    // using axeX here for testing, which draws an x = y graph;
    // replace axeX with the dataSource Y-point calculation
    CGFloat axeY = -1 * ((axeX * self.scale) + self.origin.y);
    if (x == self.bounds.origin.x) {
        CGContextMoveToPoint(context, x / self.contentScaleFactor, axeY);
    } else {
        CGContextAddLineToPoint(context, x / self.contentScaleFactor, axeY);
    }
}
Tested on the iPhone 4 iOS Simulator (contentScaleFactor 1.0) and an iPhone 4S device (contentScaleFactor 2.0).
I'd be happy to hear about possible improvements from other readers, because I am still learning.
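One possible refinement along those lines, untested: centre each stroke on a device pixel instead of a point boundary. A minimal sketch, assuming scale is the view's contentScaleFactor; the helper pixelToPoint is made up for illustration:

#include <CoreGraphics/CoreGraphics.h>

// Hypothetical helper: map a device-pixel index to the point coordinate
// at the centre of that pixel. With scale 2.0, pixel 45 maps to point 22.75.
static CGFloat pixelToPoint(int pixel, CGFloat scale) {
    return (pixel + 0.5) / scale;
}

// Usage sketch: a one-pixel-wide line through pixels (45,61) and (46,63).
static void strokePixelLine(CGContextRef context, CGFloat scale) {
    CGContextSetLineWidth(context, 1.0 / scale);
    CGContextMoveToPoint(context, pixelToPoint(45, scale), pixelToPoint(61, scale));
    CGContextAddLineToPoint(context, pixelToPoint(46, scale), pixelToPoint(63, scale));
    CGContextStrokePath(context);
}

This matches the half-pixel offsets guessed at in the question: a one-pixel stroke centred on a whole-point boundary straddles two device pixels, while one centred inside a pixel fills exactly that pixel.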
I am stuck with a problem: how to recognize certain patterns in an image.
The image is of a sheet of paper that is pure white, and the patterns, in the four corners, are black.
I want to recognize the black patterns in the image.
I surfed a lot on the net and found OpenCV as an answer, but nothing I found describes how to use OpenCV to achieve this.
Please help me from a coding point of view, or provide a link I should follow, or the name of an open-source library I could use to achieve this.
The image for the pattern is below:
The image consists of a pure white background and four black patterns in the corners. I need to recognize these black patterns in all four corners and then process the image. One corner is circled with an oval to highlight it.
Any suggestions will be highly appreciated.
Thanks in advance!
I really don't understand your problem. If you say:

The image is of a sheet of paper that is pure white, and the patterns, in the four corners, are black.

then what's the problem with masking just those four regions of the image? After masking with four squares of side 40 pixels I got this:
To remove the small leftover areas you can use morphological operations. I got this:
Then just draw the contours (optional) on the input image. Here's the result:
To implement this algorithm I used the OpenCV library. I'm 100% sure that it works on iOS; the OpenCV team finally published an iOS version. So if you say:

I tried running the OpenCV-iOS link but the project does not run; it is showing errors.

then we can't help you with that, because we are not telepaths who can see your problem. Just a small suggestion: try to Google your problem. I'm 99% sure that will help.
And lest I forget, here's the C++ code:
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

Mat src = imread("input.png"), tmp;
// convert image to 1 bit
cvtColor(src, tmp, CV_BGR2GRAY);
threshold(tmp, tmp, 200, 255, THRESH_OTSU); // OTSU picks the threshold itself; 200 is ignored
// do masking: keep only the four DELTA x DELTA corner squares
#define DELTA 40
for (int i = 0; i < tmp.rows; i++)
{
    for (int j = 0; j < tmp.cols; j++)
    {
        if (!((i < DELTA && j < DELTA)
            || (i < DELTA && j > tmp.cols - DELTA)
            || (i > tmp.rows - DELTA && j < DELTA)
            || (i > tmp.rows - DELTA && j > tmp.cols - DELTA)))
        {
            // outside the corner squares: set color to white (background)
            tmp.at<uchar>(i, j) = 255;
        }
    }
}
bitwise_not(tmp, tmp);
// erosion and dilation to remove small areas:
Mat element = getStructuringElement(MORPH_RECT, Size(2, 2), Point(1, 1));
erode(tmp, tmp, element);
dilate(tmp, tmp, element);
// (optional) find contours and draw them:
vector<Vec4i> hierarchy;
vector<vector<Point2i> > contours;
findContours(tmp, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
for (size_t i = 0; i < contours.size(); i++)
{
    drawContours(src, contours, (int)i, Scalar(0, 0, 255), 1);
}
Maybe this question is helpful for you; the link to the Tennis Ball Recognizing Tutorial in particular seems to be pretty much what you are looking for.
Regarding how to use OpenCV on iOS you might want to take a look at OpenCV-iOS and Computer Vision with iOS.
I am a Flash game developer, and at present I have started writing games for iPhone using the cocos2d engine. I have implemented the separating axis theorem for collision detection, which works perfectly. All the polygons are drawn in OpenGL as follows.
Now I am trying to apply gravity to this 16x16 box. After much searching I found this tutorial, http://www.seinia.com/tutorials/Bounce/, and implemented the same in Objective-C.
The problem I am having is that after the square comes to rest, it keeps bouncing up and down in tiny fractions. I have tried a lot to fix this, but I couldn't control that tiny movement. I never had this problem in Flash, but here the floating-point values affect the square's position a lot.
Please let me know the right way to handle this issue; any reference URL would be helpful. I appreciate your help. Thanks!
0,16          16,16
  ------------
  |          |
  |          |
  |          |
  |          |
  ------------
0,0           16,0
Objective-C code:
if (square.points[0].y <= 0.1f) {
    velocity.vy *= -bounce;
    [square restInPeace:[Vector2D createVectorWithComponentX:velocity.vx Y:8.0f]];
    // landed
    if (fabs(velocity.vy) < 0.9f) {
        velocity.vy = 0.0f;
        [square restInPeace:[Vector2D createVectorWithComponentX:velocity.vx Y:8.0f]];
        isLanded = YES;
    }
}
Translating the object:
-(void) translateVelocity:(Vector2D *)velocity
{
    // The center as well as all of the vertices need to be
    // accommodated.
    center.x += velocity.vx;
    center.y += velocity.vy;
    for (int i = 0; i < count; i++)
    {
        points[i].x += velocity.vx;
        points[i].y += velocity.vy;
        // NSLog(@"velocity %f %f", points[i].x, points[i].y);
    }
}
When using a bounce algorithm, it is usually recommended to introduce a slight imperfection to make sure this endless micro-bouncing does not happen. You could also enlarge the range of what is accepted as "landed", but remember to then snap the object to the floor so there are no visual artifacts; see the sketch after the example line below.
By the imperfection I mean, for example:
velocity.vy *= (-bounce + 0.01f);
This should make your object always come to a halt.
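Putting the threshold and the snap-to-floor together, here is a minimal sketch in C++; all names and constants are illustrative, not from cocos2d or the question's code:

#include <cmath>

const float kBounce        = 0.8f;   // fraction of speed kept per bounce (assumed)
const float kRestThreshold = 0.9f;   // below this speed the box counts as landed

// Advance the vertical position one step; on floor contact, reflect the
// velocity with a slight imperfection, and once the rebound speed drops
// below the rest threshold, zero it and snap the box onto the floor so
// no fractional jitter remains.
void stepVertical(float &y, float &vy, float gravity, float dt, bool &landed) {
    vy -= gravity * dt;
    y  += vy * dt;
    if (y <= 0.0f) {
        y  = 0.0f;                        // snap: avoids visual artifacts
        vy = -vy * (kBounce - 0.01f);     // bounce with slight imperfection
        if (std::fabs(vy) < kRestThreshold) {
            vy = 0.0f;                    // at rest: stop bouncing entirely
            landed = true;
        }
    }
}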
I have a simple raytracer that only traces rays up to the first intersection. The scene looks OK with either of two light sources on its own, but when both lights are in the scene, there are dark bands where the lit area from one light ends, even in the middle of an area lit by the other light source (particularly noticeable on the green ball). The transition from the area lit by both light sources to the area lit by just one seems slightly darker than the area lit by just one light source.
The code where I'm adding the lighting effects is:
// trace lights
for (int l = 0; l < primitives.count; l++) {
    Primitive *p = [primitives objectAtIndex:l];
    if (p.light)
    {
        Sphere *lightSource = (Sphere *)p;
        // calculate diffuse shading
        Vector3 *light = [[Vector3 alloc] init];
        light.x = lightSource.centre.x - intersectionPoint.x;
        light.y = lightSource.centre.y - intersectionPoint.y;
        light.z = lightSource.centre.z - intersectionPoint.z;
        [light normalize];
        Vector3 *normal = [[primitiveThatWasHit getNormalAt:intersectionPoint] retain];
        if (primitiveThatWasHit.material.diffuse > 0)
        {
            float illumination = DOT(normal, light);
            if (illumination > 0)
            {
                float diff = illumination * primitiveThatWasHit.material.diffuse;
                // add diffuse component to ray colour
                colour.red += diff * primitiveThatWasHit.material.colour.red * lightSource.material.colour.red;
                colour.blue += diff * primitiveThatWasHit.material.colour.blue * lightSource.material.colour.blue;
                colour.green += diff * primitiveThatWasHit.material.colour.green * lightSource.material.colour.green;
            }
        }
        [normal release];
        [light release];
    }
}
How can I make it look right?
It's a perceptual effect called Mach banding.
You are also very likely viewing the images in the wrong color space. Your ray tracer is doing the lighting math in a "linear" space, but then you are almost certainly viewing those images on a display with a nonlinear response, and therefore not even seeing the correct results. This could easily be making the Mach bands much more prominent than if you were displaying them properly. Try learning about gamma correction.
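For illustration, here is a minimal sketch of that final encoding step, using the standard sRGB transfer function; the clamp also guards against a saturated channel exceeding 1.0:

#include <algorithm>
#include <cmath>

// Encode a linear-light channel value (0..1) into sRGB for display.
// The lighting math stays linear; only the final written pixel is encoded.
float linearToSRGB(float c) {
    c = std::min(1.0f, std::max(0.0f, c));   // clamp saturated channels
    return (c <= 0.0031308f) ? 12.92f * c
                             : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
}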
Your eyes are deceiving you. If you put the spheres from the three pictures side by side, you will see very clearly that the areas are the same colour when lit by a single light and brighter when doubly lit. If you want to make it look nicer, I suggest you add a whole arc of light sources between the current ones.
You've saturated one colour channel in the image; turn down the brightness a bit and see what happens.
Are you sure your lighting directions are both normalized?
It may be worth throwing an assert in there.
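For instance, a minimal generic sketch in C++; assertNormalized is an illustrative helper, not part of the poster's Vector3 class:

#include <cassert>
#include <cmath>

// A normalized vector dotted with itself is 1; assert within a tolerance.
void assertNormalized(float x, float y, float z) {
    float lengthSquared = x * x + y * y + z * z;
    assert(std::fabs(lengthSquared - 1.0f) < 1e-4f);
}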
I'm searching for a program which detects the border of an image;
for example, I have a square and the program detects its X/Y coordinates.
Example:
(Example image: http://img709.imageshack.us/img709/1341/22444641.png)
This is a very simple edge detector, suitable for binary images. It just calculates the differences between horizontally and vertically adjacent pixels, like image.pos[1,1] = image.pos[1,1] - image.pos[1,2], and the same for vertical differences. Bear in mind that you also need to normalize the result to the range 0..255.
But if you just need a program, use Adobe Photoshop.
Code written in C#.
public void SimpleEdgeDetection()
{
    // check the format before locking, so we never leave the bitmap locked
    if (image.PixelFormat != PixelFormat.Format8bppIndexed)
        return;
    BitmapData data = Util.SetImageToProcess(image);
    unsafe
    {
        byte* ptr1 = (byte*)data.Scan0;
        byte* ptr2;
        int offset = data.Stride - data.Width;
        int height = data.Height - 1;
        int px;
        for (int y = 0; y < height; y++)
        {
            ptr2 = (byte*)ptr1 + data.Stride;
            for (int x = 0; x < data.Width; x++, ptr1++, ptr2++)
            {
                // horizontal difference (skipped at the right edge) plus vertical difference
                int dx = (x < data.Width - 1) ? Math.Abs(ptr1[0] - ptr1[1]) : 0;
                px = dx + Math.Abs(ptr1[0] - ptr2[0]);
                if (px > Util.MaxGrayLevel) px = Util.MaxGrayLevel;
                ptr1[0] = (byte)px;
            }
            ptr1 += offset;
        }
    }
    image.UnlockBits(data);
}
The method from the Util class:
static public BitmapData SetImageToProcess(Bitmap image)
{
    if (image != null)
        return image.LockBits(
            new Rectangle(0, 0, image.Width, image.Height),
            ImageLockMode.ReadWrite,
            image.PixelFormat);
    return null;
}
If you need more explanation, or a different algorithm, just ask; include more information rather than being so general.
It depends on what you want to do with the border. If you just want the coordinates of the edges of the region, use a connected-component labeling algorithm; you must know the value of the region before running it. It will walk around the border and collect the outline of the region. If you are instead trying to detect just the outside lines, take the gradient of the image and it will reveal where the lines are; to do this, convolve the image with an edge-detection filter such as Prewitt or Sobel.
You can use any image processing library, such as OpenCV, which is available for C++ and Python.
You should look for edge detection functions, such as Canny edge detection (a minimal sketch follows below).
Of course, this will require some diving into image processing.
The example image you gave should be straightforward to handle; how noisy/varied are the images going to be?
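A minimal OpenCV sketch of that suggestion; the thresholds 50/150 and the file names are placeholders to adapt:

#include <opencv2/opencv.hpp>

int main() {
    // Canny expects a single-channel 8-bit image, so load as grayscale.
    cv::Mat img = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    cv::Mat edges;
    cv::Canny(img, edges, 50, 150);   // low/high hysteresis thresholds
    cv::imwrite("edges.png", edges);
    return 0;
}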
A shape recognition algorithm might help you out, provided the shape has a solid border of some kind and the background colour is solid.
From the sounds of it, you just want a blob extraction algorithm. After that, the lowest and highest x and y values in the blob give you the coordinates of the corners.
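For illustration, a minimal OpenCV sketch of that idea, assuming a dark blob on a light background; the threshold value 128 and the file name are guesses to adapt:

#include <cstdio>
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat img = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    cv::Mat bin;
    // Invert while thresholding so the dark blob becomes white foreground.
    cv::threshold(img, bin, 128, 255, cv::THRESH_BINARY_INV);
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    for (size_t i = 0; i < contours.size(); i++) {
        // The bounding box's min/max x and y are the blob's corner coordinates.
        cv::Rect box = cv::boundingRect(contours[i]);
        printf("corners: (%d,%d) to (%d,%d)\n",
               box.x, box.y, box.x + box.width, box.y + box.height);
    }
    return 0;
}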