Convert a grayscale IplImage to a colour IplImage in OpenCV [duplicate]

I want to composite a number of images into a single window in OpenCV. I had found I could create an ROI in one image and copy another colour image into this area without any problems.
Switching the source to an image I had carried out some processing on didn't work, though.
Eventually I found out that I'd converted the src image to greyscale, and that when using the copyTo method nothing was copied over.
I've answered this question with my basic solution, which only caters for greyscale to colour. If you use other Mat image types you'd have to carry out additional tests and conversions.

I realised my problem was that I was trying to copy a greyscale image into a colour image, so I had to convert it to the appropriate type first.
void drawIntoArea(Mat &src, Mat &dst, int x, int y, int width, int height)
{
    Mat scaledSrc;
    // Destination image for the converted src image.
    Mat convertedSrc(src.rows, src.cols, CV_8UC3, Scalar(0, 0, 255));

    // Convert the src image into the correct destination image type.
    // Could also use mixChannels here.
    // Expand to support a range of image source types.
    if (src.type() != dst.type())
    {
        cvtColor(src, convertedSrc, CV_GRAY2RGB);
    }
    else
    {
        src.copyTo(convertedSrc);
    }

    // Resize the converted source image to the desired target size.
    resize(convertedSrc, scaledSrc, Size(width, height), 1, 1, INTER_AREA);

    // Create a region of interest in the destination image to copy
    // the newly sized and converted source image into.
    Mat ROI = dst(Rect(x, y, scaledSrc.cols, scaledSrc.rows));
    scaledSrc.copyTo(ROI);
}
It took me a while to realise the image source types were different; I'd forgotten I'd converted the images to greyscale for some other processing steps.
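For illustration, a minimal, hypothetical sketch of how the helper might be used to tile a colour image and a greyscale processing result into one window (the file name, window name and layout are made up):

// Compose a 640x480 canvas and draw two sources side by side.
Mat canvas(480, 640, CV_8UC3, Scalar::all(0));
Mat photo = imread("photo.jpg");           // 8UC3 colour source
Mat edges;
cvtColor(photo, edges, CV_BGR2GRAY);       // 8UC1 greyscale source
Canny(edges, edges, 50, 150);

drawIntoArea(photo, canvas, 0,   0, 320, 480);   // same type: copied directly
drawIntoArea(edges, canvas, 320, 0, 320, 480);   // greyscale: converted first
imshow("composite", canvas);
waitKey(0);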

Related

Accessing pixel values from destination image of CopyMakeBorder function in Emgu CV

A little bit new to EmguCV here. Just want to ask a quick question about the CopyMakeBorder function: are the pixel values of the destination image accessible?
I want to process the destination image further, but when I try to access its pixel values, I only get 0 (even in locations that are not supposed to be 0, such as the central pixel). When I use Imshow, it shows that the image borders are perfectly processed; the problem only appears when I try to access the pixel values, which return 0 wherever the pixel location is.
This is not a problem when I use destination images from other EmguCV functions, such as the Threshold function.
Can anyone clarify? Thanks a lot!
I am using VB.net; here is the code. (I am away from my workstation for the weekend, so I am writing this from memory; some capital letters here and there may be mistyped, but I hope you get the gist.)
First I initialize the source image and destination image:
Dim img As Image(Of Gray, Byte) = New Image(Of Gray, Byte)("myimage.jpg")
Dim img1 As Image(Of Gray, Byte) = New Image(Of Gray, Byte)(img.Size)
Then the CopyMakeBorder function, extending 1 pixel to the top, bottom, left and right, with a constant border of value 0:
CvInvoke.CopyMakeBorder(img, img1, 1, 1, 1, 1, BorderType.Constant, New MCvScalar(0))
Accessing pixel values of the destination image, taking for example the pixel at x = 100, y = 100, channel 0 (as it is a grayscale image):
Console.WriteLine(img1.Data(100, 100, 0))
This gives a debug output of 0, and no matter where I take the pixel values it is still 0, even though when I show the image that specific pixel should not be 0 (it is not black):
CvInvoke.Imshow("test", img1)
You are trying to access the data through Image.Data; however, this doesn't include the added border(s), just the original bitmap.
The added border is in the Mat property, however. Through it the individual pixels can be accessed:
' Returns data from the original bitmap, without the border.
Console.WriteLine(img1.Data(100, 100, 0))
' Returns data from the modified bitmap, including the border.
Console.WriteLine(img1.Mat.GetData(100, 100)(0))

What is the right way to resize using NVIDIA NPP to exact destination dimensions?

I'm trying to use NVIDIA NPP to experiment with some image resizing routines. I want to resize to exact dimensions. I've been looking at image resizing using NVIDIA NPP, but all of its resize functions take scale factors for the X and Y dimensions, and I could not see any API that takes the destination dimensions directly.
As an example, this is one API:
NppStatus nppiResizeSqrPixel_8u_C1R(const Npp8u * pSrc, NppiSize oSrcSize, int nSrcStep, NppiRect oSrcROI,
                                    Npp8u * pDst, int nDstStep, NppiRect oDstROI,
                                    double nXFactor, double nYFactor, double nXShift, double nYShift,
                                    int eInterpolation);
I realize one way could be to find the appropriate scale factor for the destination dimensions, but we don't know exactly how the API derives the destination ROI from the scale factor (since it is floating-point math). We could reverse the calculation in the jpegNPP sample to find the scale factor, but the API itself does not make any guarantees, so I'm not sure how safe that is. Any ideas what the possibilities are?
As a side question, the API also takes two params, nXShift and nYShift, but the documentation just says "Source pixel shift in x-direction". I'm not exactly clear what shift is being talked about here. Do you have an idea?
If I wanted to map the whole SRC image to the smaller rectangle in the DST image, as shown in the figure below, I would use xFactor = yFactor = 0.5, xShift = 0.5 * DST.width and yShift = 0.
(figure: mapping the source image to a half-size region of the destination image)
In other words, the pixel at (x,y) in the SRC is mapped to the pixel (x',y') in the DST as
x' = xFactor * x + xShift
y' = yFactor * y + yShift
In this case, both the source and dest ROI could be the entire support of the respective images.
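Building on that mapping, here is a minimal sketch of resizing the whole source to an exact dstW x dstH. It mirrors the factor computation in the jpegNPP sample; the pointer, step and size variables are assumed to be set up already, and the API documents only the mapping above, not a rounding guarantee:

// Choose factors so that (srcW, srcH) maps exactly onto (dstW, dstH).
double nXFactor = (double)dstW / (double)srcW;   // x' = nXFactor * x
double nYFactor = (double)dstH / (double)srcH;   // y' = nYFactor * y

NppiSize oSrcSize = { srcW, srcH };
NppiRect oSrcROI  = { 0, 0, srcW, srcH };        // whole source image
NppiRect oDstROI  = { 0, 0, dstW, dstH };        // whole destination image

NppStatus status = nppiResizeSqrPixel_8u_C1R(
    pSrc, oSrcSize, srcStep, oSrcROI,
    pDst, dstStep, oDstROI,
    nXFactor, nYFactor,
    0.0, 0.0,                                    // no shift: map (0,0) -> (0,0)
    NPPI_INTER_LINEAR);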

Creating Image from pixel data with CGBitmapContextCreate

I am trying to write code that can crop an existing image down to some specified size/region. I am working with DICOM images, and the API I am using allows me to get pixel values directly. I've placed pixel values of the area of interest within the image into an array of floats (dstImage, below).
Where I'm encountering trouble is with the actual construction/creation of the new, cropped image file using this pixel data. The source image is grayscale, however all of the examples I have found online (like this one) have been for RGB images. I tried to follow the example in that link, adjusting for grayscale and trying numerous different values, but I continue to get errors on the CGBitmapContextCreate line of code and still do not clearly understand what those values are supposed to be.
My intensity values for the source image go above 255, so my impression is that this is not 8-bit grayscale but 16-bit grayscale.
Here is my code:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
CGContextRef context;
context = CGBitmapContextCreate(dstImage,      // pixel data from the region of interest
                                dstWidth,      // width of the region of interest
                                dstHeight,     // height of the region of interest
                                16,            // bits per component
                                2 * dstWidth,  // bytes per row
                                colorSpace,
                                kCGImageAlphaNoneSkipLast);
CFRelease(colorSpace);
CGImageRef cgImage = CGBitmapContextCreateImage(context);
CFURLRef url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
                                             CFSTR("test.png"),
                                             kCFURLPOSIXPathStyle,
                                             false);
CFStringRef type = kUTTypePNG;
CGImageDestinationRef dest = CGImageDestinationCreateWithURL(url, type, 1, 0);
CGImageDestinationAddImage(dest, cgImage, 0);
CFRelease(cgImage);
CFRelease(context);
CGImageDestinationFinalize(dest);
free(dstImage);
The error I keep receiving is:
CGBitmapContextCreate: unsupported parameter combination: 16 integer bits/component; 32 bits/pixel; 1-component color space; kCGImageAlphaNoneSkipLast; 42 bytes/row.
The ultimate goal is to create an image file from the pixel data in dstImage and save it to the hard drive. Help on this would be greatly appreciated as would insight into how to determine what values I should be using in the CGBitmapContextCreate call.
Thank you
First, you should familiarize yourself with the "Supported Pixel Formats" section of Quartz 2D Programming Guide: Graphics Contexts.
If your image data is in an array of float values, then it's 32-bits-per-component, not 16. Therefore, you have to use kCGImageAlphaNone | kCGBitmapFloatComponents.
However, I believe that Core Graphics will interpret floating-point components as being between 0.0 and 1.0. If your values are outside of that, you may need to convert them using something like (value - minimumValue) / (maximumValue - minimumValue). An alternative may be to use CGColorSpaceCreateCalibratedGray() or to create a CGImage using CGImageCreate() and specifying an appropriate decode parameter and then create a bitmap context from that using CGBitmapContextCreateImage().
In fact, if you're not drawing into your bitmap context, you should just be creating a CGImage instead, anyway.
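As an illustration only, a sketch of such a float-component context, assuming dstImage holds one 32-bit float per pixel and that minValue/maxValue (my names) are the known intensity extremes. This pixel format is listed as supported for gray colour spaces on the Mac; check availability on your target platform:

// Normalise raw intensities into Core Graphics' expected 0.0-1.0 range.
// minValue and maxValue are assumed to be computed elsewhere.
for (size_t i = 0; i < (size_t)dstWidth * dstHeight; i++)
    dstImage[i] = (dstImage[i] - minValue) / (maxValue - minValue);

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
CGContextRef context = CGBitmapContextCreate(dstImage,
                                             dstWidth,
                                             dstHeight,
                                             32,            // bits per component: float
                                             4 * dstWidth,  // bytes per row: one float per pixel
                                             colorSpace,
                                             kCGImageAlphaNone | kCGBitmapFloatComponents);
CGColorSpaceRelease(colorSpace);
CGImageRef cgImage = CGBitmapContextCreateImage(context);
// ...then write cgImage out with CGImageDestination exactly as before...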

create a new image with only masked part (without transparent area) with new size

I have a mask and an image on which mask is applied to get a portion of that image.
The problem is that when I apply the mask to the image, the resulting image is the same size as the original image, with the unmasked part transparent. What I need is an image which contains only the masked part of the original image; I don't want the transparent part to be in the image, so that the resulting image is smaller and contains only the masked part.
Thanks
You can:
Draw the image to a new CGBitmapContext at actual size, providing a buffer for the bitmap. CGBitmapContextCreate
Read alpha values from the bitmap to determine the transparent boundaries. You will have to determine how to read this based on the pixel data you have specified.
Create a new CGBitmapContext providing the external buffer, using some variation or combination of: a) a pixel offset, b) offset bytes per row, or c) manually move the bitmap's data (in place to reduce memory usage, if possible). CGBitmapContextCreate
Create a CGImage from the second bitmap context. CGBitmapContextCreateImage
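For illustration, a rough sketch of those steps, assuming `image` is the masked CGImageRef and that an RGBA layout with alpha in the fourth byte of each pixel is acceptable (buffer layout and variable names are mine):

// 1. Render the image into an RGBA buffer we can inspect.
size_t w = CGImageGetWidth(image), h = CGImageGetHeight(image);
uint8_t *buf = calloc(w * h * 4, 1);
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(buf, w, h, 8, w * 4, cs,
                                         kCGImageAlphaPremultipliedLast);
CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), image);

// 2. Scan the alpha bytes for the opaque bounding box.
size_t minX = w, minY = h, maxX = 0, maxY = 0;
for (size_t y = 0; y < h; y++)
    for (size_t x = 0; x < w; x++)
        if (buf[(y * w + x) * 4 + 3] != 0) {
            if (x < minX) minX = x;
            if (x > maxX) maxX = x;
            if (y < minY) minY = y;
            if (y > maxY) maxY = y;
        }

// 3./4. Crop to the bounding box. Buffer row 0 is the top of the image,
// matching the top-left origin CGImageCreateWithImageInRect expects.
CGRect box = CGRectMake(minX, minY, maxX - minX + 1, maxY - minY + 1);
CGImageRef cropped = CGImageCreateWithImageInRect(image, box);

CGContextRelease(ctx);
CGColorSpaceRelease(cs);
free(buf);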

Objective C - Detect a "path" drawing, inside a map image

I have a physical map (real world), for example, a little town map.
A "path" line is painted over the map, think about it like "you are here. here's how to reach the train station" :)
Now, let's suppose I can get an image of that scenario (e.g. from a photo), an image that looks like the one below.
(image: map photo with a red path line drawn over it)
My goal is not an easy one!
I want to GET the path OUT of the image, i.e., separate the two layers.
Is there a way to extract those red marks from the image?
Maybe using CoreGraphics? Maybe an external library?
It's not an Objective-C specific question, but I am working on Apple iOS.
I have already worked with something similar: face recognition.
Now, the answer I expect is: "What do you mean by PATH?"
Well, I really don't know. Maybe a line (see the image above) of a completely different colour from the "major" colours in the background.
Let's talk about it.
If you can use OpenCV then it becomes simpler. Here's a general method:
Separate the image into Hue, Saturation and Value (HSV colour space)
Here's the OpenCV code:
// Compute HSV image and separate into colors
IplImage* hsv = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 3 );
cvCvtColor( img, hsv, CV_BGR2HSV );
IplImage* h_plane = cvCreateImage( cvGetSize( img ), 8, 1 );
IplImage* s_plane = cvCreateImage( cvGetSize( img ), 8, 1 );
IplImage* v_plane = cvCreateImage( cvGetSize( img ), 8, 1 );
cvCvtPixToPlane( hsv, h_plane, s_plane, v_plane, 0 );
Deal with the Hue (h_plane) image only, as it gives just the hue without any change in value for a lighter or darker shade of the same colour
Check which pixels have a red hue (I think red is 0 degrees in HSV, but please check the OpenCV values); see the sketch below
Copy these pixels into a separate image
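A hedged sketch of those last two steps in the same old C API (the hue bounds are illustrative; OpenCV stores 8-bit hue as 0-179 and red wraps around 0, hence the two ranges):

// Mask pixels whose hue falls near red, testing both ends of the range.
IplImage* mask_low  = cvCreateImage( cvGetSize(img), 8, 1 );
IplImage* mask_high = cvCreateImage( cvGetSize(img), 8, 1 );
IplImage* red_mask  = cvCreateImage( cvGetSize(img), 8, 1 );
cvInRangeS( h_plane, cvScalar(0),   cvScalar(10),  mask_low );
cvInRangeS( h_plane, cvScalar(170), cvScalar(180), mask_high );
cvOr( mask_low, mask_high, red_mask, NULL );

// Copy only the masked pixels of the original image into a new image.
IplImage* path = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 3 );
cvZero( path );
cvCopy( img, path, red_mask );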
I'd strongly suggest using the OpenCV library if possible; it is basically made for such tasks.
You could filter the colour: define a threshold for what the colour red is, then filter everything else to alpha, and what you have left over is your "path".