I am using Assimp, and for FBX files with embedded textures it provides the embedded texture data through the aiTexture struct's pcData member. The documentation for pcData says:
Appropriate decoders (such as libjpeg, libpng, D3DX, DevIL) are
required to load these textures. aiTexture::mWidth specifies the size
of the texture data in bytes, aiTexture::pcData is a pointer to the
raw image data
I understand that pcData will contain the PNG header, chunks, etc., and that libpng can give me back the decoded image data along with its properties (width, height, etc.).
What is the native iOS/macOS API to do this instead of using libPNG?
For example, CGImageCreateWithPNGDataProvider describes its source parameter as "a data provider supplying PNG encoded data". I tried code like this, but it doesn't work:
CGDataProviderRef dataProvider = NULL;
dataProvider = CGDataProviderCreateWithData(NULL,
                                            (const void *)texture->pcData,
                                            texture->mWidth,
                                            rgbReleaseRampData);
if (dataProvider) {
    NSLog(@" ********* Created image data provider ");
}
// Fails at this line
CGImageRef imageRef = CGImageCreateWithPNGDataProvider(dataProvider,
                                                       NULL, false, kCGRenderingIntentDefault);
Well, at least for iOS/macOS, this turns out to be a problem in the Assimp library itself.
The following, if not the perfect fix, works:
Add a uint8_t pointer to hold the raw image data in texture.h:
uint8_t *rawImageData;
Then reinterpret pcData in FBXConverter.cpp like so:
out_tex->pcData = reinterpret_cast<aiTexel *>(const_cast<Video &>(video).RelinquishContent());
out_tex->rawImageData = reinterpret_cast<uint8_t *>(out_tex->pcData);
This fixes the problem. Without reinterpreting pcData, on iOS/macOS one gets an invalid memory address for the buffer.
With the above fix, generating an image object requires only the following:
const struct aiTexture *aiTexture = aiScene->mTextures[index];
NSData *imageData = [NSData dataWithBytes:aiTexture->pcData
                                   length:aiTexture->mWidth];
self.imageDataProvider =
    CGDataProviderCreateWithCFData((CFDataRef)imageData);
NSString *format = [NSString stringWithUTF8String:aiTexture->achFormatHint];
if ([format isEqualToString:@"png"]) {
    DLog(@" Created png embedded texture ");
    self.image = CGImageCreateWithPNGDataProvider(
        self.imageDataProvider, NULL, true, kCGRenderingIntentDefault);
}
if ([format isEqualToString:@"jpg"]) {
    DLog(@" Created jpg embedded texture ");
    self.image = CGImageCreateWithJPEGDataProvider(
        self.imageDataProvider, NULL, true, kCGRenderingIntentDefault);
}
See the GH issue for more details.
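As a possible alternative (a minimal, untested sketch): ImageIO can sniff the container format from the bytes themselves, which avoids branching on achFormatHint. This assumes, as above, that mWidth holds the byte size of the compressed blob:
#import <ImageIO/ImageIO.h>

CFDataRef blob = CFDataCreate(NULL,
                              (const UInt8 *)aiTexture->pcData,
                              aiTexture->mWidth);
CGImageSourceRef source = CGImageSourceCreateWithData(blob, NULL);
CGImageRef image = NULL;
if (source) {
    // ImageIO detects PNG vs. JPEG from the header bytes
    image = CGImageSourceCreateImageAtIndex(source, 0, NULL);
    CFRelease(source);
}
CFRelease(blob);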
I'm a newbie to the Intel RealSense SDK, coding in Visual Studio 2017 (C or C++) for the Intel RealSense D435 camera.
In my example I have the following:
static rs2::frameset current_frameset;
auto color = current_frameset.get_color_frame();
frame = cvQueryFrame(color);
I get an error on line 3: "cannot convert 'rs2::video_frame' to 'CvCapture'".
I haven't been able to find a solution to this issue; it's proving difficult, and my attempts have only resulted in more errors.
Does anyone know how I can overcome this problem?
Thanks for the help!
cvQueryFrame takes a CvCapture instance and is used to retrieve a frame from a camera. With librealsense, the frame you have already retrieved can be used directly; you don't have to fetch it again. Attached is a snippet from the OpenCV example in librealsense; you can refer to the complete code here:
rs2::colorizer color_map; // Declared in the full example; maps depth data to RGB for display
rs2::pipeline pipe;
// Start streaming with default recommended configuration
pipe.start();
using namespace cv;
const auto window_name = "Display Image";
namedWindow(window_name, WINDOW_AUTOSIZE);
while (waitKey(1) < 0 && cvGetWindowHandle(window_name))
{
rs2::frameset data = pipe.wait_for_frames(); // Wait for next set of frames from the camera
rs2::frame depth = color_map(data.get_depth_frame());
// Query frame size (width and height)
const int w = depth.as<rs2::video_frame>().get_width();
const int h = depth.as<rs2::video_frame>().get_height();
// Create OpenCV matrix of size (w,h) from the colorized depth data
Mat image(Size(w, h), CV_8UC3, (void*)depth.get_data(), Mat::AUTO_STEP);
// Update the window with new data
imshow(window_name, image);
}
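Since the question asks about the color frame rather than depth, here is a minimal sketch of the same pattern for color; it assumes the color stream is configured as BGR8 (e.g. via rs2::config) so the byte order matches OpenCV's default:
// Wrap the color frame in a cv::Mat without copying
rs2::video_frame color_frame = data.get_color_frame();
const int cw = color_frame.get_width();
const int ch = color_frame.get_height();
Mat colorImage(Size(cw, ch), CV_8UC3, (void*)color_frame.get_data(), Mat::AUTO_STEP);
imshow("Color Image", colorImage);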
In OpenGL, after a texture name is generated, the texture does not have storage. With glTexImage2D you can create storage for the texture.
How can you determine if a texture has storage?
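For illustration, storage is typically created along these lines (the size and format values here are arbitrary examples):
glBindTexture(GL_TEXTURE_2D, texId);
// Allocate 256x256 RGBA storage for mip level 0; passing NULL leaves
// the contents undefined until uploaded
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);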
You can't do exactly that in ES 2.0. In ES 3.1 and later, you can call:
GLint width = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
if (width > 0) {
    // texture has storage
}
The glIsTexture() call that is available in ES 2.0 may give you the desired information, depending on what exactly your requirements are. While it will not tell you whether the texture has storage, it will tell you whether the given id is valid and whether it was ever bound as a texture. For example:
GLuint texId = 0;
GLboolean isTex = glIsTexture(texId);
// Result is GL_FALSE because texId is not a valid texture name.
glGenTextures(1, &texId);
isTex = glIsTexture(texId);
// Result is GL_FALSE because, while texId is a valid name, it was never
// bound yet, so the texture object has not been created.
glBindTexture(GL_TEXTURE_2D, texId);
glBindTexture(GL_TEXTURE_2D, 0);
isTex = glIsTexture(texId);
// Result is GL_TRUE because the texture object was created when the
// texture was previously bound.
I believe you can use glGetTexLevelParameterfv to get the height (or width) of the texture. A value of zero for either of these parameters means the texture name represents the null texture.
Note I haven't tested this!
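A minimal sketch of that idea (untested, as noted; glGetTexLevelParameterfv is available in desktop GL, but on ES only from 3.1, as discussed above):
GLfloat height = 0.0f;
glBindTexture(GL_TEXTURE_2D, texId);
glGetTexLevelParameterfv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &height);
if (height == 0.0f) {
    // No storage allocated at mip level 0
}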
I am trying to resize a .tif image and then display it in the browser by converting it to a Base64 string. Since ImageIO doesn't support TIF images by default, I have added imageio_alpha-1.1.jar (got it here: http://www.findjar.com/jar/geoserver/jai/jars/jai_imageio-1.1-alpha.jar.html). Now ImageIO is able to register the plugin, which I checked by doing this:
String[] writerNames = ImageIO.getWriterFormatNames();
writerNames has TIF in it, which means ImageIO has registered the plugin.
I am resizing the image like this:
Map resizeImage(BufferedImage imageData, int width, int height, String imageFormat) {
    BufferedImage thumbnail = Scalr.resize(imageData, Scalr.Method.SPEED, Scalr.Mode.FIT_EXACT,
            width, height, Scalr.OP_ANTIALIAS)
    String[] writerNames = ImageIO.getWriterFormatNames()
    ByteArrayOutputStream baos = new ByteArrayOutputStream()
    ImageIO.write(thumbnail, imageFormat, baos)
    baos.flush()
    byte[] imageBytes = baos.toByteArray()
    baos.close()
    return [imageBytes: imageBytes, imageFormat: imageFormat]
}

String encodeImageToBase64(byte[] imageData) {
    return Base64.encodeBase64String(imageData)
}

BufferedImage getBufferedImage(byte[] imageData) {
    ByteArrayInputStream bais = new ByteArrayInputStream(imageData)
    BufferedImage bImageFromConvert = ImageIO.read(bais)
    bais.close()
    return bImageFromConvert
}

String resizeToDimensions(byte[] imageData, String imageFormat, int width, int height) {
    def bimg = getBufferedImage(imageData)
    Map resizedImageData = resizeImage(bimg, width, height, imageFormat)
    return encodeImageToBase64(resizedImageData.imageBytes)
}
Now I am displaying the image like this:
<img src="data:image/tif;base64,TU0AKgAAAAgADAEAAAMAAA...." /> With this I get a "failed to load URL" message (on hovering).
As far as I know, the Base64 string usually starts with /9j/ (maybe I am wrong). When I prepend /9j/, I get an error: "image corrupt or truncated". I am not able to figure out the problem here; please help.
At first glance your use of the Data URI format looks correct; try to narrow down exactly where the failure is.
I would recommend:
1. In your method where you return the string to the front end, print the entire thing out to the console so you have the raw data in Data URI format.
2. Take the Data URI string, create a sample HTML file with the hard-coded value in it, and try to load it. Does the image display? If so, great; then your problem is with how you are streaming it back to the front end or how you are trying to load it. (Likely a JavaScript/DOM issue.)
3. If that doesn't work, chop the Base64 section out of the example and save it into an example TXT file. In your Java code, load it, decode it, try to create an image out of it, and write it back out to a TIFF. If that doesn't work, then there is something wrong with your Base64 handling and the encoding is most likely invalid.
Getting that far should answer most of the questions.
Actually, now that I think about it: first use ImageIO to read the image into a BufferedImage, process it with imgscalr, then immediately call ImageIO.write to write it out to a new TIF someplace else, and make sure the ImageIO decoding/encoding round trip is working correctly.
Hope that helps!
I need to build an iPhone app that can look for an image pattern in an image (something like this).
After numerous Google searches, I feel that the only option I have is to use the template-matching function in OpenCV, which has been ported to Objective-C.
I found an excellent starting point for a simple OpenCV project in Objective-C in this GitHub code.
But it only uses the edge-detection and face-detection features of OpenCV. I need an Objective-C example that uses the template-matching function, cvMatchTemplate, on the iPhone.
Below is the code I have at the moment. (At least it is not giving me errors, but it returns a completely black image; I am expecting a result image where the matched area is brighter.)
IplImage *imgTemplate = [self CreateIplImageFromUIImage:[UIImage imageNamed:@"laughing_man.png"]];
IplImage *imgSource = [self CreateIplImageFromUIImage:imageView.image];
CvSize sizeTemplate = cvGetSize(imgTemplate);
CvSize sizeSrc = cvGetSize(imgSource);
CvSize sizeResult = cvSize(sizeSrc.width - sizeTemplate.width+1, sizeSrc.height-sizeTemplate.height + 1);
IplImage *imgResult = cvCreateImage(sizeResult, IPL_DEPTH_32F, 1);
cvMatchTemplate(imgSource, imgTemplate, imgResult, CV_TM_CCORR_NORMED);
cvReleaseImage(&imgSource);
cvReleaseImage(&imgTemplate);
imageView.image = [self UIImageFromIplImage:imgResult];
cvReleaseImage(&imgResult);
P.S. Or should I try to recognize the object using cvHaarDetectObjects?
The result from cvMatchTemplate is a 32-bit floating point image. In order to display the result, you'll need to convert that to an unsigned char, 8-bit image (IPL_DEPTH_8U).
The CV_TM_CCORR_NORMED method produces values in the range [0, 1], and cvConvertScale provides an easy way to do the scaling and type conversion. Try adding the following to your code:
IplImage* displayImgResult = cvCreateImage( cvGetSize( imgResult ), IPL_DEPTH_8U, 1);
cvConvertScale( imgResult, displayImgResult, 255, 0 );
imageView.image = [self UIImageFromIplImage:displayImgResult];
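As a follow-up, if the goal is to locate the match rather than just visualize the correlation map, cvMinMaxLoc finds the peak directly. A short sketch (run this before releasing imgResult):
double minVal = 0.0, maxVal = 0.0;
CvPoint minLoc, maxLoc;
cvMinMaxLoc(imgResult, &minVal, &maxVal, &minLoc, &maxLoc, NULL);
// For CV_TM_CCORR_NORMED the best match's top-left corner is at maxLoc;
// the matched region spans sizeTemplate.width x sizeTemplate.height from there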
I have RTF text that I show to the user on the Mac, and I need to replace some of it. The text contains some inline images. When I execute the following code, the images are lost. I am using C#, Mono, and Monobjc to run this on the Mac.
NSText _questionView;
// some initialisation code which I have skipped
//
NSRange range = NSRange.NSMakeRange(0, _questionView.TextStorage.Length);
NSData oldString = _questionView.RTFFromRange(range);
if (oldString != null)
{
string s = oldString.ConvertRTFToString();
_questionView.ReplaceCharactersInRangeWithRTF(range, s.ConvertToNSData());
_questionView.SelectedRange = NSRange.NSMakeRange(0,0);
// After this line the inline images are lost.
}
You are losing the images because you are converting the RTF content to NSString. NSString can only carry text, not attributes. You should consider using NSAttributedString instead for the text manipulation, in order to keep the RTF attributes (style, images, etc.).