Drawing a 16x16 grid of random pixels in Core Graphics - cocoa-touch

How would I draw a UIImage in Core Graphics with dimensions 16x16, filled with random pixels at random coordinates in random grayscale colors? This seems nearly impossible to do at the moment...
EDIT: Perhaps I should start with a diagonal-line texture instead? My problem is filling in each pixel one by one; that doesn't seem doable in Core Graphics.

Create a buffer as many bytes long as you want pixels (so, in this case, 16 * 16).
Fill this buffer by reading from /dev/random.
Wrap the buffer in a CGDataProvider and pass it to the CGImageCreate function with a grayscale color space and kCGImageAlphaNone.
Once you have created a CGImage, it is trivial to create a UIImage from it. Depending on your requirements, you can actually create up to eight “random” UIImages from the same CGImage by specifying different orientation values.
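For concreteness, a minimal C sketch of those steps (8-bit grayscale, one byte per pixel, error handling omitted; CreateRandomGrayPixelImage and FreePixelBuffer are just illustrative names):

#include <stdio.h>
#include <stdlib.h>
#include <CoreGraphics/CoreGraphics.h>

static void FreePixelBuffer(void *info, const void *data, size_t size)
{
    free((void *)data);
}

static CGImageRef CreateRandomGrayPixelImage(size_t side)   /* side = 16 here */
{
    size_t length = side * side;                 /* one byte per pixel */
    uint8_t *pixels = malloc(length);

    /* Fill the buffer by reading from /dev/random. */
    FILE *rng = fopen("/dev/random", "rb");
    fread(pixels, 1, length, rng);
    fclose(rng);

    /* Hand the buffer to CGImageCreate as 8-bit grayscale with no alpha. */
    CGDataProviderRef provider =
        CGDataProviderCreateWithData(NULL, pixels, length, FreePixelBuffer);
    CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
    CGImageRef image = CGImageCreate(side, side,
                                     8,     /* bits per component */
                                     8,     /* bits per pixel     */
                                     side,  /* bytes per row      */
                                     gray, kCGImageAlphaNone,
                                     provider, NULL, false,
                                     kCGRenderingIntentDefault);
    CGColorSpaceRelease(gray);
    CGDataProviderRelease(provider);
    return image;   /* wrap with UIImage's imageWithCGImage: on iOS */
}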
ETA: You might also try creating a two-byte-per-pixel buffer and image. Then, by using each of the endianness flags, you can create two “random” CGImages from the same buffer, for a total of 16 “random” UIImages. However, I don't know whether two-byte-per-pixel no-alpha grayscale is supported on any version of iOS; the Quartz 2D Programming Guide lists only Mac OS X version numbers.

Related

Rendering line art with constant screen width

I have a line art texture applied to an object in 3D space. The default behavior is for the object and the texture to receive perspective scaling based on the perspective model view projection matrix. Is there any established technique to keep the positioning and scaling of the 3D object, while keeping the line width constant relative to the screen? The desired effect is as though a pen (fixed screen width) were used to trace a path on the 3D object.
Would something like SDF-based font rendering help?
Or maybe some kind of projective texture mapping?
Or render the object and texture to a buffer and expand the lines using edge detection?
Unfortunately, I'm using OpenGL ES 2, so I can't use a geometry shader or anything like that.
The solution I came up with is inspired by procedural SDF generation, as @Felipe suggested, combined with Chris Green's Improved Alpha-Tested Magnification for Vector Textures and Special Effects.
Basically, I hand-draw shapes into textures using pure red, green, and blue. Then I render the scene using those textures and generate an SDF on the fly in a second render pass. The SDF generation uses Green's algorithm with a small spread to improve performance. The SDF is then passed to a final render pass that thresholds and antialiases it per Green's approach, using fwidth to maintain a constant line weight regardless of the object's distance from the camera.
Since the original question was just for the approach/concept, I'm not posting an example at the moment. But I'll see if I can put together a shadertoy sometime soon.
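For reference only (the author's shader isn't posted), the final thresholding pass described above usually boils down to something like the following GLSL ES 2 fragment shader, shown here as a C string; it assumes the SDF is sampled from the red channel and that 0.5 marks the edge:

/* Constant-screen-width thresholding of an SDF, per Green's approach.
   fwidth() requires the GL_OES_standard_derivatives extension on ES 2. */
static const char *kSdfOutlineFragSrc =
    "#extension GL_OES_standard_derivatives : enable\n"
    "precision mediump float;\n"
    "uniform sampler2D u_sdf;   /* distance field from the second pass */\n"
    "varying vec2 v_uv;\n"
    "void main() {\n"
    "    float d = texture2D(u_sdf, v_uv).r;\n"
    "    /* fwidth(d) is the screen-space rate of change of d, so the      */\n"
    "    /* smoothstep band stays roughly one pixel wide at any distance.  */\n"
    "    float w = fwidth(d);\n"
    "    float alpha = smoothstep(0.5 - w, 0.5 + w, d);\n"
    "    gl_FragColor = vec4(0.0, 0.0, 0.0, alpha);\n"
    "}\n";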
You could create the texture procedurally in a fragment shader and use the size of a pixel for interpolations.
See:
Fabrice Neyret's blog

Drawing a line using openGL es 2.0 and iphone touchscreen

This is the super-simple version of the question I posted earlier (which I think was too complicated).
How do I draw a line in OpenGL ES 2.0, using a stroke on the touch screen as a reference?
For example, if I draw a square with my finger on the screen, I want it to be drawn on the screen with OpenGL.
I have tried researching a lot but no luck so far.
(I only know how to draw objects that already have fixed vertex arrays; I have no idea how to draw one whose vertex array is constantly changing, nor how to implement that.)
You should use vertex buffer objects (VBOs) as the backing OpenGL structure for your vertex data. Then, the gesture must be converted to a series of positions (I don't know how that happens on your platform). These positions must then be pushed to the VBO with glBufferSubData if the existing VBO is large enough or glBufferData if the existing VBO is too small.
Using VBOs to draw lines or any other OpenGL shape is easy, and many tutorials show how to do it.
update
Based on your other question, you seem to be almost there! You already create VBOs as I mentioned, but they are probably not large enough. The current size is sizeof(Vertices), as specified in glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
You need to change the size given to glBufferData to something large enough to hold all the original vertices plus those added later. You should also use GL_STREAM_DRAW as the last argument (read up on the function).
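For example, the one-time allocation might look like this (MAX_VERTICES and vertexBuffer are illustrative names; pick a capacity that covers the longest stroke you expect):

#define MAX_VERTICES 4096   /* upper bound on points in a stroke */

glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
/* Reserve the full capacity up front with no initial data.
   GL_STREAM_DRAW is the usage hint suggested above;
   GL_DYNAMIC_DRAW also fits data that is updated repeatedly. */
glBufferData(GL_ARRAY_BUFFER, MAX_VERTICES * 3 * sizeof(GLfloat), NULL, GL_STREAM_DRAW);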
To add a new vertex, use something like this:
// note: both the offset and the size passed to glBufferSubData are in bytes
glBufferSubData(GL_ARRAY_BUFFER, current_nb_vertices*3*sizeof(float), nb_vertices_to_add*3*sizeof(float), newVertices);
current_nb_vertices += nb_vertices_to_add;
//...
// drawing lines
glDrawArrays(GL_LINE_STRIP, 0, current_nb_vertices);
You don't need the indices in the element array to draw lines.

How do I upload sub-rectangles of image data to an OpenGLES 2 framebuffer texture?

In my OpenGLES 2 application (on an SGX535 on Android 2.3, not that it matters), I've got a large texture that I need to make frequent small updates to. I set this up as a pair of FBOs, where I render updates to the back buffer, then render the entire back buffer as a texture to the front buffer to "swap" them. The front buffer is then used elsewhere in the scene as a texture.
The updates are sometimes solid color sub-rectangles, but most of the time, the updates are raw image data, in the same format as the texture, e.g., new image data is coming in as RGB565, and the framebuffer objects are backed by RGB565 textures.
Using glTexSubImage2D() is slow, as you might expect, particularly on a deferred renderer like the SGX. Not only that, using glTexSubImage2D on the back FBO eventually causes the app to crash somewhere in the SGX driver.
I tried creating new texture objects for each sub-rectangle, calling glTexImage2D to initialize them, then render them to the back buffer as textured quads. I preserved the texture objects for two FBO buffer swaps before deleting them, but apparently that wasn't long enough, because when the texture IDs were re-used, they retained the dimensions of the old texture.
Instead, I'm currently taking the entire buffer of raw image data and converting it to an array of structs of vertices and colors, like this:
struct rawPoint {
    GLfloat  x;
    GLfloat  y;
    GLclampf r;
    GLclampf g;
    GLclampf b;
};
I can then render this array to the back buffer using GL_POINTS. For a buffer of RGB565 data, this means allocating a buffer literally 10x bigger than the original data, but it's actually faster than using glTexSubImage2D()!
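For reference, the conversion being described is roughly the following (expandToPoints and the raw layout are illustrative); each 2-byte RGB565 pixel becomes a 20-byte struct, which is where the roughly 10x figure comes from:

#include <stdint.h>
#include <GLES2/gl2.h>   /* GLfloat, GLclampf */

/* Unpack a row-major RGB565 buffer into one rawPoint per pixel. */
void expandToPoints(const uint16_t *src, int width, int height, struct rawPoint *dst)
{
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            uint16_t p = src[y * width + x];
            struct rawPoint *out = &dst[y * width + x];
            out->x = (GLfloat)x;   /* still needs mapping into clip space */
            out->y = (GLfloat)y;
            out->r = ((p >> 11) & 0x1F) / 31.0f;   /* 5 bits of red   */
            out->g = ((p >>  5) & 0x3F) / 63.0f;   /* 6 bits of green */
            out->b = ( p        & 0x1F) / 31.0f;   /* 5 bits of blue  */
        }
    }
}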
I can't keep the vertices or the colors in their native unsigned short format, because OpenGL ES 2 only takes floats in vertex attributes and shader uniforms. I have to submit every pixel as a separate set of coordinates, because I don't have geometry shaders. Finally, I can't use the EGL_KHR_gl_texture_2D_image extension, since my platform doesn't support it!
There must be a better way to do this! I'm burning tons of CPU cycles just to convert image data into a wasteful floating point color format just so the GPU can convert it back into the format it started with.
Would I be better off using EGL Pbuffers? I'm not excited by that prospect, since it requires context switching, and I'm not even sure it would let me write directly to the image buffer.
I'm kind of new to graphics, so take this with a big grain of salt.
Create a native buffer the size of your texture, then use the native buffer to create an EGL image:
eglCreateImageKHR(eglGetCurrentDisplay(),
                  eglGetCurrentContext(),
                  EGL_GL_TEXTURE_2D_KHR,
                  buffer,
                  attr);
I know this uses EGL_GL_TEXTURE_2D_KHR. Are you sure your platform doesn't support this? I am developing on a platform that uses SGX535 as well, and mine seems to support it.
After that, bind the texture as usual. You can memcpy into your native buffer to update sub-rectangles very quickly, I believe.
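In that case the sub-rectangle update reduces to one memcpy per row (the names and the tightly packed RGB565 layout below are assumptions):

#include <stdint.h>
#include <string.h>

/* Copy a w x h block of RGB565 pixels into a larger pitch-linear buffer,
   placing its top-left corner at (dstX, dstY). Rows are not contiguous in
   the destination, so each row is copied separately. */
void copySubRect(uint16_t *dstBase, int dstStridePixels,
                 const uint16_t *src, int w, int h, int dstX, int dstY)
{
    for (int row = 0; row < h; ++row) {
        memcpy(dstBase + (dstY + row) * dstStridePixels + dstX,
               src + row * w,
               (size_t)w * sizeof(uint16_t));
    }
}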
I realize I'm answering a month old question, but if you need to see some more code or something, let me know.

Posterization effect when using CGContextDrawImage, float CGBitmapContext

To retrieve pixel values from a CGImage I use CGContextDrawImage (as described here: How to get pixel data from a UIImage (Cocoa Touch) or CGImage (Core Graphics)?). The only difference is that I create a 128 bpp float-component context, not the usual 32 bpp context. The source CGImage is obtained from a CGImageSource created with the kCGImageSourceShouldAllowFloat option. That way I hoped to get access to float pixel values color-matched to my bitmap context's color space and use them in further image processing. The problem is that the resulting image data seems to be losing dynamic range. This can be seen in shadows and plain blue sky areas: they become contoured and lack detail. Some investigation showed that the problem occurs in CGContextDrawImage (the source CGImage contains the full dynamic range; saving it through a CGImageDestination proves that), and that after CGContextDrawImage the context contents become posterized.
After some more investigation I found this:
http://lists.apple.com/archives/quartz-dev/2007/mar/msg00026.html
That led me to the conclusion that the problem is not in my code but in Core Graphics, or that this is intended behaviour.
My question is: what is the correct way to obtain floating-point data from an image using Core Graphics?
After some more investigation the problem is narrowed down to the following: posterization occurs when an 8-bit image is drawn into a 128 bpp float context created with a linear color space (kCGColorSpaceGenericRGBLinear). If I draw the same image into a context created with kCGColorSpaceGenericRGB, then retrieve a CGImage from that context and draw that second image into the linear color space context, everything is fine.
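A rough sketch of that two-step workaround in C (error handling omitted; CreateLinearFloatContext is an illustrative name, and the 128 bpp layout is the usual 32-bit-float RGBA configuration):

#include <CoreGraphics/CoreGraphics.h>

static CGContextRef CreateLinearFloatContext(CGImageRef sourceImage,
                                             size_t width, size_t height)
{
    /* Step 1: draw the 8-bit source into an ordinary GenericRGB context. */
    CGColorSpaceRef rgb = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    CGContextRef ctx8 = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                              rgb, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx8, CGRectMake(0, 0, width, height), sourceImage);
    CGImageRef intermediate = CGBitmapContextCreateImage(ctx8);

    /* Step 2: draw the intermediate image into a 128 bpp float, linear-space context. */
    CGColorSpaceRef linear = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGBLinear);
    CGContextRef ctxF = CGBitmapContextCreate(NULL, width, height, 32, width * 16,
                                              linear,
                                              kCGImageAlphaPremultipliedLast | kCGBitmapFloatComponents);
    CGContextDrawImage(ctxF, CGRectMake(0, 0, width, height), intermediate);

    /* CGBitmapContextGetData(ctxF) now returns RGBA data, 4 floats per pixel. */
    CGImageRelease(intermediate);
    CGContextRelease(ctx8);
    CGColorSpaceRelease(rgb);
    CGColorSpaceRelease(linear);
    return ctxF;   /* caller releases it when done with the float data */
}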
Another solution (workaround?) is to use Core Image: create a CIImage from the source CGImage and draw it into a CIContext created with the corresponding kCGColorSpaceGenericRGBLinear CGContext. But that option is only available on OS X (not on iOS).

Simple algorithm for tracking a rectangular blob

I have created an experimental fast rectangular object tracking system; it will be used for head tracking and controlling objects in a 3D engine (Ogre3D).
For now I am able to show the webcam any kind of brightly colored rectangle (text markers are good objects), and the system registers basic properties of the object (hue/value/lightness and its initial width and height at 0 degrees rotation).
After I have registered the trackable object, I do some simple frame processing to create a grayscale probability map.
So now I have 2 known things:
1) 4 corners for the last object position (it's always a rectangle but it may be rotated)
2) a pretty rectangular (but still far from perfect) blob which is the brightest region in the frame. I can get the coordinates of any point of the blob without problems; point detection is stable enough.
I can find a bounding rectangle of the object without problems, but I have a problem with detecting the object corners themselves.
I need the simplest possible (quick & dirty would be great) algorithm to scan the image starting from some known coordinates (a point inside the blob) and detect the 4 new x,y coordinates of the blob's corners (not the corners of a bounding box, but the corners of the rectangular blob itself).
A ready-to-use C++ function would be awesome, but somehow Google doesn't like me today :(
I think it would be overkill to use some complicated function from the OpenCV library just to extract 4 points of a single rectangular blob. But if you know a quick and efficient way to do it using OpenCV (it must be real-time and light on the CPU because I'll be running the 3D engine at the same time), I would be really grateful.
You can apply a Hough transform to the segmented image to detect lines. From the detected lines you can calculate their intersections to find the corner coordinates of the blob.
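As a sketch of the intersection step: each Hough line comes back as a (rho, theta) pair describing x*cos(theta) + y*sin(theta) = rho, so a pair of lines meets where the resulting 2x2 system is solved (plain C, no OpenCV types; intersectHoughLines is an illustrative name):

#include <math.h>

/* Intersect two lines given in Hough (rho, theta) form.
   Returns 0 if the lines are (nearly) parallel, 1 otherwise. */
int intersectHoughLines(double rho1, double theta1,
                        double rho2, double theta2,
                        double *x, double *y)
{
    double a1 = cos(theta1), b1 = sin(theta1);
    double a2 = cos(theta2), b2 = sin(theta2);
    double det = a1 * b2 - a2 * b1;
    if (fabs(det) < 1e-9)
        return 0;                            /* parallel lines never meet */
    *x = (rho1 * b2 - rho2 * b1) / det;      /* Cramer's rule */
    *y = (a1 * rho2 - a2 * rho1) / det;
    return 1;
}

Intersecting each of the two roughly vertical lines with each of the two roughly horizontal ones then yields the four corner coordinates.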