Draw waveform in NSView - objective-c

I need to draw a waveform in an NSView (I have all the samples in an array). The drawing must be efficient and really fast, with no clipping or flickering; it must be smooth. The waveform will "move" according to the song position, and changes to the samples (DSP processing) will be shown as a visual representation in the NSView in real time.
I'm familiar with drawing lines, arcs, etc. onto canvas objects, and I have developed apps that do such things, but not on Mac OS X ...
I want to ask if anyone can guide me on where to start: Core Animation, OpenGL, simply overriding the drawing methods, etc.? Which API would be the best practice to use?

I would keep it simple: create an NSView subclass with an audioData property and use plain Cocoa drawing. You could call [view setAudioData:waveArray], which would in turn call [self setNeedsDisplay:YES].
In your drawRect: method you could then iterate through the samples and use NSRectFill() accordingly. Here each sample's value is assumed to be between 0 and 1.
- (void)drawRect:(NSRect)dirtyRect {
    [[NSColor blueColor] set];
    NSRect bounds = [self bounds];
    NSUInteger count = [self.waveArray count];
    if (count == 0) return;
    CGFloat barWidth = NSWidth(bounds) / count;
    for (NSUInteger i = 0; i < count; i++) {
        id sample = [self.waveArray objectAtIndex:i];
        NSRect drawingRect;
        drawingRect.origin.x = NSMinX(bounds) + i * barWidth; // advance one bar per sample
        drawingRect.origin.y = NSMinY(bounds);
        drawingRect.size.width = barWidth;
        drawingRect.size.height = NSHeight(bounds) * [sample value]; // value in 0..1
        NSRectFill(drawingRect);
    }
}
This code isn't exact, and you should be sure to make it more efficient by drawing only the samples that fall inside dirtyRect.
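For completeness, the setter mentioned above could be as small as this minimal sketch (assuming the samples end up in the waveArray property that the drawing code reads; memory management details omitted):
- (void)setAudioData:(NSArray *)audioData {
    // Hypothetical setter: stash the samples and schedule a redraw.
    self.waveArray = audioData;
    [self setNeedsDisplay:YES];
}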

I would start with a really long, thin image to represent a single bar/column of the waveform.
My plan would be to have an NSTimer that moves all the bars of the wave one point to the left every 0.01 seconds.
So, something like this in the loop:
for (int x = 0; x < [WaveArray count]; x++)
{
    UIImageView *bar = [WaveArray objectAtIndex:x];
    [bar setCenter:CGPointMake(bar.center.x - 1, bar.center.y)];
}
Now all you have to do is create the objects at the correct height and add them to WaveArray, and they will all be moved to the left.
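For reference, the timer driving that loop might be scheduled like this (a minimal sketch; the moveTimer ivar and selector name are assumptions):
// Fires every 0.01 s; each tick runs the loop above to shift every bar one point left.
moveTimer = [NSTimer scheduledTimerWithTimeInterval:0.01
                                             target:self
                                           selector:@selector(moveWaveBars:)
                                           userInfo:nil
                                            repeats:YES];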

Related

How to have a tiled image for a portion of a window cocoa

I'm new to this, and it's hard for me to even ask my question right because I don't know the right terminology. I've done some Objective-C coding, so I'm a little beyond beginner, except when it comes to working with UIs.
I would like to know the best practices to accomplish this - i.e. the right way.
I have a window with some buttons at the top of it. Below that is a region that will hold an image or web view. This will be of variable size, so to make it look nice I'd like the area behind it to have a nice tiled pattern.
I've experimented with a few things that work but everything feels a bit hackish. Is there a control that automatically provides a tiled background and lets me put other controls inside of it? For that matter, is there any kind of control that allows putting other controls inside of it? (I'm used to this in GTK but it doesn't appear to be common in Cocoa)
Also, considering that the image can change size based on the buttons above, should I be using Core Animation and its layers (I've read about them but not used them)?
One fairly simple way to do this is to use a custom NSView subclass for the background view. In its -drawRect: method, write code to take the image and draw it repeatedly to fill the bounds of the view. The algorithm to do this is pretty simple. Start at the top left (or any corner really), draw the image, then increment the x position by the width of the image, and draw again. When the x position exceeds the maximum x coordinate of the view, increment y by the height of the image and draw the next row, and so on until you've filled the whole thing. This should do the trick:
@interface TiledBackgroundView : NSView
@end

@implementation TiledBackgroundView

- (void)drawRect:(NSRect)dirtyRect
{
    NSRect bounds = [self bounds];
    NSImage *image = ...
    NSSize imageSize = [image size];

    // start at max Y (top) so that resizing the window looks to be anchored at the top left
    for ( CGFloat y = NSHeight(bounds) - imageSize.height; y >= -imageSize.height; y -= imageSize.height ) {
        for ( CGFloat x = NSMinX(bounds); x < NSWidth(bounds); x += imageSize.width ) {
            NSRect tileRect = NSMakeRect(x, y, imageSize.width, imageSize.height);
            if ( NSIntersectsRect(tileRect, dirtyRect) ) {
                NSRect destRect = NSIntersectionRect(tileRect, dirtyRect);
                [image drawInRect:destRect
                         fromRect:NSOffsetRect(destRect, -x, -y)
                        operation:NSCompositeSourceOver
                         fraction:1.0];
            }
        }
    }
}

@end
No control automatically tiles a background for you.
Remember that NSViews (usually subclasses) do all the drawing - so, for instance, that gray area would be a subclass of NSView and you could put the images inside of it.
To actually draw the tiled image (by the NSView subclass), Madsen's method is usable, but not the most convenient. The easiest way is something along the lines of:
NSColor *patternColor = [NSColor colorWithPatternImage:[NSImage imageNamed:@"imageName"]];
[patternColor setFill];
NSRectFill(rectToDraw);
which you should put in the -drawRect: method of your custom view class. It creates an NSColor that represents a tiled image. Note that your custom view can also be a subclass of a scroll/clip view, etc.
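One caveat worth adding here (my addition, not part of the original answer): pattern fills are anchored to the window's coordinate system rather than the view's, so the tiles can appear to slide as the view moves. If that matters, set the pattern phase first; a minimal sketch:
[[NSGraphicsContext currentContext] saveGraphicsState];
// Anchor the pattern to this view's origin, expressed in window coordinates.
[[NSGraphicsContext currentContext] setPatternPhase:[self convertPoint:NSZeroPoint toView:nil]];
[patternColor setFill];
NSRectFill(rectToDraw);
[[NSGraphicsContext currentContext] restoreGraphicsState];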
I am not too familiar with Core Animation, but it is useful for manipulating views and might be a direction worth looking at for the view that draws the image (and that view only).

iOS - Math help - base image zooms with pinch gesture; need overlaid images to adjust X/Y coords relative to it

I have an iPad application that has a base image UIImageView (in this case a large building or site plan or diagram) and then multiple 'pins' can be added on top of the plan (visually similar to Google Maps). These pins are also UIImageViews and are added to the main view on tap gestures. The base image is also added to the main view in viewDidLoad.
I have the base image working with the pinch gesture for zooming, but obviously when you zoom the base image all the pins stay at the same x and y coordinates of the main view and lose their relative positioning on the base image (whose x, y and width, height have changed).
So far I have this...
- (IBAction)planZoom:(UIPinchGestureRecognizer *)recognizer
{
    recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform, recognizer.scale, recognizer.scale);
    recognizer.scale = 1;
    for (ZonePin *pin in planContainer.subviews) {
        if ([pin isKindOfClass:[ZonePin class]]) {
            CGRect pinFrame = pin.frame;
            // ****************************************
            // code to reposition the pins goes here...
            // ****************************************
            pin.frame = pinFrame;
        }
    }
}
I need help calculating the math to reposition the pins' x/y coordinates so they retain their relative positions on the zoomed-in or zoomed-out plan/diagram. The pins should not be scaled at all in terms of width or height; they just need new x and y coordinates relative to their initial positions on the plan.
I have tried to work out the math myself but have struggled to work it through and unfortunately am not yet acquainted with the SDK enough to know if there is provision available built in to help or not.
Help with this math related problem would be really appreciated! :)
Many thanks,
Michael.
InNeedOfMathTuition.com
First, you might try embedding your UIImageView in a UIScrollView so zooming is largely accomplished for you. You can then set the max and min scale easily, and you can scroll around the zoomed image as desired (especially if your pins are subviews of the UIImageView or something else inside the UIScrollView).
As for scaling the locations of the pins, I think it would work to store the original x and y coordinates of each pin (i.e. when the view first loads, when they are first positioned, at scale 1.0). Then when the view is zoomed, set x = (originalX * zoomScale) and y = (originalY * zoomScale).
I had the same problem in an iOS app a couple of years ago, and if I recall correctly, that's how I accomplished it.
EDIT: Below is more detail about how I accomplished this (I'm looking at my old code now).
I had a UIScrollView as a subview of my main view, and my UIImageView as a subview of that. My buttons were added to the scroll view, and I kept their original locations (at zoom 1.0) stored for reference.
In the -(void)scrollViewDidScroll:(UIScrollView *)scrollView method:
for (id element in myButtons)
{
    UIButton *theButton = (UIButton *)element;
    CGPoint originalPoint = // get the original location however you want
    [theButton setFrame:CGRectMake(
        (originalPoint.x - theButton.frame.size.width / 2) * scrollView.zoomScale,
        (originalPoint.y - theButton.frame.size.height / 2) * scrollView.zoomScale,
        theButton.frame.size.width, theButton.frame.size.height)];
}
For the -(UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView method, I returned my UIImageView. My buttons scaled in size, but I didn't include that in the code above. If you're finding that the pins are scaling in size automatically, you might have to store their original sizes as well as original coordinates and use that in the setFrame call.
UPDATE...
Thanks to 'Mr. Jefferson's' help in his answer above, albeit with a differing implementation, I was able to work this one through as follows...
I have a scrollView which has a plan/diagram image as a subview. The scrollView is set up for zooming/panning etc.; this includes adopting UIScrollViewDelegate in the ViewController.
When the user double-taps on the plan/diagram, a pin image is added as a subview to the scrollView at the touch point. The pin image is a custom 'ZonePin' class which inherits from UIImageView and has a couple of additional properties, including 'baseX' and 'baseY'.
The code for adding the pins...
- (IBAction)planDoubleTap:(UITapGestureRecognizer *)recognizer
{
    UIImage *image = [UIImage imageNamed:@"Pin.png"];
    ZonePin *newPin = [[ZonePin alloc] initWithImage:image];
    CGPoint touchPoint = [recognizer locationInView:planContainer];
    CGFloat placementX = touchPoint.x - (image.size.width / 2);
    CGFloat placementY = touchPoint.y - image.size.height;
    newPin.frame = CGRectMake(placementX, placementY, image.size.width, image.size.height);
    newPin.zoneRef = [NSString stringWithFormat:@"%@%d", @"BF", pinSeq++];
    newPin.baseX = placementX;
    newPin.baseY = placementY;
    [planContainer addSubview:newPin];
}
I then have two functions for handling the scrollView interaction and this handles the scaling/repositioning of the pins relative to the plan image. These methods are as follows...
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
{
    return planImage;
}

- (void)scrollViewDidScroll:(UIScrollView *)scrollView
{
    for (ZonePin *pin in planContainer.subviews) {
        if ([pin isKindOfClass:[ZonePin class]]) {
            CGFloat newX, newY;
            newX = (pin.baseX * scrollView.zoomScale) + (((pin.frame.size.width * scrollView.zoomScale) - pin.frame.size.width) / 2);
            newY = (pin.baseY * scrollView.zoomScale) + ((pin.frame.size.height * scrollView.zoomScale) - pin.frame.size.height);
            CGRect pinFrame = pin.frame;
            pinFrame.origin.x = newX;
            pinFrame.origin.y = newY;
            pin.frame = pinFrame;
        }
    }
}
For reference, the calculations for positioning the pins reflect the nature of a pin: the pin image is centred on the x-axis but bottom-aligned on the y-axis.
The only thing left for me to do is to reverse the calculations used in the scrollViewDidScroll method when I add pins while zoomed in. The code for adding pins above will only work properly when scrollView.zoomScale is 1.0.
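For what it's worth, inverting those formulas should give something like this untested sketch for the add-pin code (my assumption, not a verified solution; the zooming scroll view is assumed to be reachable here):
// Recover the base (zoomScale == 1.0) coordinates from a placement made while
// zoomed, by inverting the newX/newY formulas in scrollViewDidScroll above.
CGFloat z = scrollView.zoomScale;
newPin.baseX = (placementX - (((image.size.width * z) - image.size.width) / 2)) / z;
newPin.baseY = (placementY - ((image.size.height * z) - image.size.height)) / z;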
Other than that, it now works great! :)

Simple multi-touch drawing application on iOS: Too slow (because drawRect() not additive ?)

I am using Quartz 2D to make a simple multi-touch drawing iPad game. The game requires me to draw a new stroke at the finger position every 1/30th of a second.
As far as I know, there is basically no way to get drawRect() not to clear the context every time it is called (self.clearsContextBeforeDrawing = NO; does not work), so my solution was to create a back-buffer bitmap (or layer; I can use both), draw every new small stroke into that back buffer each iteration for each finger, and then copy the buffer onto the screen on every call to drawRect(). In other words:
backlayer = CGLayerCreateWithContext(context, CGSizeMake(W, H), NULL);
offctx = CGLayerGetContext(backlayer);
and then
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    // code here to draw small strokes from old finger position to new one
    CGContextDrawLayerInRect(context, [self bounds], backlayer);
}
This worked without problems while I was testing on my iPad 2, but yesterday I noticed that this same code runs much slower on the new iPad 3. The performance is abysmal, slowing my game down from 30 FPS all the way to about 5 or so, probably due to the larger Retina display. I have the same problem if I use a separate CGBitmapContext that I create, and then every iteration I create an ImageRef from it and paint it with CGContextDrawImage.
What approach could I take to address this? It seems like I must redraw everything every iteration, since it's not good enough to pass drawRect even a small rectangle of what has changed (every iteration there would need to be several rectangles, one per finger).
Thank you
I managed to resolve this as follows:
I created a new UIView subclass, with header and implementation files:
@interface fingerView : UIView {
}
@end
Then in my main view's header I declare 5 of these views:
fingerView* fview[5];
and in my main view's implementation I create 5 instances, one for each finger. Also, make sure to enable multi-touch for each of them and to set clearsContextBeforeDrawing to NO, since we will be updating tiny rects in each of them at a time and we don't want the system to clear our work.
for (int i = 0; i < 5; i++) {
    // Note: the class here must be the fingerView subclass declared above.
    fview[i] = [[fingerView alloc] initWithFrame:topFrame];
    [self addSubview:fview[i]];
    [self sendSubviewToBack:fview[i]];
    fview[i].opaque = NO;
    fview[i].clearsContextBeforeDrawing = NO;
    fview[i].multipleTouchEnabled = YES;
}
Now, inside every finger view, keep a large array (I use a simple C array, say 10,000 entries long) of the x and y positions the finger has drawn on. Whenever a finger moves, the main view detects it and calls [fview[i] updatePos:newx :newy], and, crucially, we command the view to update only a tiny portion of itself around these coordinates:
[fview[i] setNeedsDisplayInRect:fingerRect];
where fingerRect is a small rect centered at (newx, newy), e.g. CGRectMake(newx - 10, newy - 10, 20, 20). Inside the drawRect: method of every finger view,
- (void)drawRect:(CGRect)rect
{
    if (movep == 0) return;
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRGBStrokeColor(context, r, g, b, 1);
    CGContextSetLineWidth(context, linewidth);

    // paint finger
    CGContextBeginPath(context);
    CGFloat slack = 15;
    CGFloat minx = CGRectGetMinX(rect) - slack;
    CGFloat maxx = CGRectGetMaxX(rect) + slack;
    CGFloat miny = CGRectGetMinY(rect) - slack;
    CGFloat maxy = CGRectGetMaxY(rect) + slack;
    bool drawing = NO;
    for (int i = 0; i < movep; i++) {
        CGFloat xx = x[i];
        CGFloat yy = y[i];
        if (xx > minx && xx < maxx && yy > miny && yy < maxy) {
            if (drawing) {
                // continue the line
                CGContextAddLineToPoint(context, xx, yy);
            } else {
                // start drawing
                CGContextMoveToPoint(context, xx, yy);
                drawing = YES;
            }
        } else {
            drawing = NO;
        }
    }
    CGContextStrokePath(context);
}
and also, as I mentioned:
- (void)updatePos:(CGFloat)xnew :(CGFloat)ynew
{
    x[movep] = xnew;
    y[movep] = ynew;
    movep = movep + 1;
}
Hopefully you can figure out how this works. Every view looks at the rectangle that has been modified, checks which of its recorded finger positions fall near that rect, and draws only those. That comes down to very few strokes, so the entire thing runs very fast.
The overall lesson is that UIViews are extremely optimized. As much as possible, make a whole bunch of them, update them only locally if at all, and let Apple's magic blend it all together.

Using NSImageView to display multiple images in quick succession

I have an application where, in one window, there is an NSImageView. The user should be able to drag and drop ANY FILE/FOLDER (not only images) into the image view, so I subclassed NSImageView class to add support for those types.
The reason why I chose an NSImageView instead of a normal view is because I also wanted to display an animation (say an arrow pointing downwards and going up and down) when the user hovers over with files ready to drop. My question is this: what would be the best way (most efficient, quickest, least CPU usage, etc) to do this?
In fact, I have already done it; what made me ask this question is that when I set the images to change at a rate below 0.02 seconds, it starts to lag. Here is how I did it:
In the NSImageView subclass:
- have an ivar: NSTimer *animTimer;
- override awakeFromNib, calling [super awakeFromNib] and loading the images (about 45 of them) into an array using NSImage
- whenever the user enters with files, start animTimer with frequency = 0.025 (less and it lags) and a selector that sets the next image in the array (called drawNextImage)
- whenever the user exits or ends the drag and drop, call [animTimer invalidate] to stop updating images
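In code, those enter/exit hooks look roughly like this (a minimal sketch, assuming the standard NSDraggingDestination methods and the no-argument drawNextImage selector described above):
- (NSDragOperation)draggingEntered:(id <NSDraggingInfo>)sender {
    // Start flipping through the images while the drag hovers over us.
    animTimer = [NSTimer scheduledTimerWithTimeInterval:0.025
                                                 target:self
                                               selector:@selector(drawNextImage)
                                               userInfo:nil
                                                repeats:YES];
    return NSDragOperationCopy;
}

- (void)draggingExited:(id <NSDraggingInfo>)sender {
    [animTimer invalidate];
    animTimer = nil;
}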
Here is how I set the image in the subclass:
- (void)drawNextImage
{
    currentImageIndex++; // ivar; kNumberDNDImages is a constant defined as 46
    if (currentImageIndex >= kNumberDNDImages) { currentImageIndex = 0; }
    [super setImage:[imagesArray objectAtIndex:currentImageIndex]]; // imagesArray is an ivar
}
So, how would I do this quickly enough? I'd like the frequency to be about 0.01 seconds, but anything below 0.025 lags, so that is what I have set for the moment. Oh, and my images are the correct size (plus or minus a pixel) and they are PNGs (I need the transparency; JPEGs, for example, won't do it).
EDIT:
I have tried to follow NSResponder's suggestion, and have updated my method to this:
- (void)drawNextImage
{
    currentImageIndex++;
    if (currentImageIndex >= kNumberDNDImages) { currentImageIndex = 0; }
    NSRect smallImgRect;
    smallImgRect.origin = NSMakePoint(kSmallImageWidth * currentImageIndex, [self.bigDNDImage size].height); // Up left corner - ??
    smallImgRect.size = NSMakeSize(kSmallImageWidth, [self.bigDNDImage size].height);
    // Bottom left corner - ??
    NSPoint imgPoint = NSMakePoint(([self bounds].size.width - kSmallImageWidth) / 2, 0);
    [bigDNDImage drawAtPoint:imgPoint fromRect:smallImgRect operation:NSCompositeCopy fraction:1];
}
I have also moved this method and the other drag and drop methods from the NSImageView subclass to an NSView subclass I already had. Everything is exactly the same, except for the superclass and this method. I also modified some constants.
In my early testing of this, I got some error/warning messages that didn't stop execution, talking about NSGraphicsContext or something. They have disappeared now, but just so you know. I have absolutely no idea why they were appearing or what they mean. If they ever appear again I'll worry about them, not now :)
EDIT 2:
This is what I'm doing now:
- (void)drawNextImage
{
    currentImageIndex++;
    if (currentImageIndex >= kNumberDNDImages) { currentImageIndex = 0; }
    [self drawCurrentImage];
}

- (void)drawCurrentImage
{
    NSRect smallImgRect;
    smallImgRect.origin = NSMakePoint(kSmallImageWidth * currentImageIndex, 0); // Bottom left, for sure
    smallImgRect.size = NSMakeSize(kSmallImageWidth, [self.bigDNDImage size].height);
    // Bottom left as well
    NSPoint imgPoint = NSMakePoint(([self bounds].size.width - kSmallImageWidth) / 2, 0);
    [bigDNDImage drawAtPoint:imgPoint fromRect:smallImgRect operation:NSCompositeCopy fraction:1];
}
And the catch here is to call drawCurrentImage whenever drawRect: is called (see, it actually was easier to solve than I thought).
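That override is trivial (a minimal sketch):
- (void)drawRect:(NSRect)dirtyRect
{
    [self drawCurrentImage];
}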
Now, I must say I haven't tried this with the composite image, because I couldn't find a good and quick way to merge 40+ images the way I wanted (one next to the other). But for those interested, I modified this to do the same thing as my NSImageView subclass (reading 40+ images from an array and displaying them) and I found no speed bump: NSView is as laggy below 0.025 as NSImageView. I also found some problems when using Core Animation (the image is drawn in weird places instead of the place I tell it to) and some warnings about NSGraphicsContext, which I don't know how to solve at all (I'm a complete noob when it comes to drawing and such with Objective-C's tools). So for the time being I'm using NSImageView, unless I find a way to merge all those images and try it with NSView.
Core Animation would probably be quickest, since it'll do everything on the GPU. Create a layer for each image, setting each layer's contents to the CGImage you can make from each image, add them all as sublayers of a single top-level layer, host the top-level layer in a plain NSView, and then just toggle each image layer's hidden property in turn.
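A minimal sketch of that setup, assuming an imagesArray of NSImages, a plain hostView to display them in, and a currentImageIndex ivar:
// Build one layer per frame; only the first starts out visible.
CALayer *container = [CALayer layer];
NSMutableArray *frameLayers = [NSMutableArray array];
for (NSImage *image in imagesArray) {
    CALayer *frameLayer = [CALayer layer];
    frameLayer.frame = CGRectMake(0, 0, image.size.width, image.size.height);
    frameLayer.contents = (id)[image CGImageForProposedRect:NULL context:nil hints:nil];
    frameLayer.hidden = ([frameLayers count] > 0);
    [container addSublayer:frameLayer];
    [frameLayers addObject:frameLayer];
}
[hostView setLayer:container];
[hostView setWantsLayer:YES];

// Each timer tick, advance the animation by toggling hidden flags:
[[frameLayers objectAtIndex:currentImageIndex] setHidden:YES];
currentImageIndex = (currentImageIndex + 1) % [frameLayers count];
[[frameLayers objectAtIndex:currentImageIndex] setHidden:NO];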
I'd probably draw all of the component images into one long image, and draw segments into a view using -drawAtPoint:fromRect:operation:fraction:. I'm sure you could make it faster than that by resorting to OpenGL, though.
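Since the poster mentioned not finding a quick way to merge the 40+ images side by side, here is a minimal sketch of building that long image (it assumes all frames in imagesArray are the same size):
// Composite every frame into one long horizontal strip, left to right.
NSSize frameSize = [[imagesArray objectAtIndex:0] size];
NSImage *strip = [[NSImage alloc] initWithSize:NSMakeSize(frameSize.width * [imagesArray count], frameSize.height)];
[strip lockFocus];
for (NSUInteger i = 0; i < [imagesArray count]; i++) {
    [[imagesArray objectAtIndex:i] drawAtPoint:NSMakePoint(frameSize.width * i, 0)
                                      fromRect:NSZeroRect // the whole source image
                                     operation:NSCompositeSourceOver
                                      fraction:1.0];
}
[strip unlockFocus];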

Drawing a large number of lines (CGContextBeginPath) on an iPad

I'm trying to make an iPad application that draws a lot, and I really mean a lot, of lines on stage (10,000+).
Using this simple for loop, my iPad crashes after 40~60 seconds (without showing a result):
for ( int i = 0; i < 10000; i++ )
{
    int r_x = rand() % 750;
    int r_y = rand() % 1000;
    CGPoint pointpoint = CGPointMake(r_x, r_y);
    UIColor *st = [[GetColor alloc] getPixelColorAtLocation:pointpoint];
    DrawLine *drawview = [[DrawLine alloc] initWithFrame:CGRectMake(r_x, r_y, 20, 20) selectedcolor:st];
    [self.view addSubview:drawview];
    [drawview release];
    [DrawLine release];
    [GetColor release];
}
and this is my "DrawLine" class:
- (id)initWithFrame:(CGRect)frame selectedcolor:(UIColor *)colors {
    if ((self = [super initWithFrame:frame])) {
        selectedcolor_t = colors;
        self.backgroundColor = [UIColor clearColor];
    }
    return self;
}

- (void)drawRect:(CGRect)frame {
    CGContextRef c = UIGraphicsGetCurrentContext();
    const CGFloat *colors = CGColorGetComponents(selectedcolor_t.CGColor);
    CGContextSetStrokeColor(c, colors);
    CGContextBeginPath(c);
    CGContextMoveToPoint(c, 0.0f, 0.0f);
    CGContextAddLineToPoint(c, 20.0f, 20.0f);
    CGContextStrokePath(c);
}
How can I solve this problem? How can I draw this many subviews without crashing iOS?
Thanks so much!! :)
Please reconsider what you are doing there:
In line 4 of your loop, you alloc an instance of GetColor — which you never use again. Ask yourself: Does that make any sense from a design point of view?
In that same line, if you don't violate Cocoa's naming-conventions, you create a UIColor that is never released...
Then in line 8 you release the class-object of DrawLine (ditto that for the next line and the GetColor-class). This is terribly, horribly wrong!
Please visit the Memory Management Programming Guide at the iOS Dev-Center and read the first two sections (again)!
Besides that, re-evaluate your design:
Should GetColor really be a class, so that you create instances? Wouldn't a simple helper-function for color interpolation make more sense in this context?
If it should be a class, why not create just one instance of it outside of the loop and simply query it repeatedly for the colors?
Do you really need a subclass of UIView to draw a single straight, solid, single-colored line? If the lines need not be updated, you should (as Richard and nacho4d suggested) draw all of them in one object (e.g. by a custom UIView or by a delegate of CALayer implementing the drawLayer:inContext: method). If you need to update those lines later, you could simply (ab)use CALayer...
In the latter case, your problem then becomes:
1. Calculate your random coordinates.
2. Calculate your color.
3. Create an opaque CALayer with
a) that color as its backgroundColor,
b) a width of 20 * sqrt(2),
c) a height of whatever you want the width of that line to be,
d) your point as its origin and
e) a rotation of 45°.
4. Add that layer as a sublayer to self.view's layer.
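In code, that recipe comes out roughly like this (a minimal sketch; the color and lineWidth variables are assumptions, and the rotation is expressed in radians):
// One thin, rotated, opaque layer stands in for each 45° line segment.
CALayer *line = [CALayer layer];
line.backgroundColor = color.CGColor; // a UIColor computed for this point
line.opaque = YES;
line.anchorPoint = CGPointMake(0.0, 0.5); // rotate about the segment's start
line.bounds = CGRectMake(0, 0, 20.0 * sqrt(2.0), lineWidth);
line.position = CGPointMake(r_x, r_y); // the random start point
line.transform = CATransform3DMakeRotation(M_PI_4, 0, 0, 1); // 45 degrees
[self.view.layer addSublayer:line];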
Cheers
Daniel
If your lines are static (not moving later, not animating, etc.), as they seem to be, you could also draw all the lines in a single drawRect: in one view, without creating 1,000 CALayers.
I can't tell if this is faster than drawing 1,000 CALayers (because Core Animation is hardware-accelerated and Core Graphics is not), but it's surely lighter, since all the lines are flattened into a single bitmap (the backing store of your view).
Just move your for loop inside your drawRect: and follow danyowde's advice (you just need one color object or a helper function, not a new color created on each iteration).
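For completeness, a rough sketch of that single-view approach (the kLineCount, linePoints, and lineColors names are assumptions for the precomputed data):
// Draw every precomputed segment in one pass: no subviews, no layers.
- (void)drawRect:(CGRect)rect {
    CGContextRef c = UIGraphicsGetCurrentContext();
    for (int i = 0; i < kLineCount; i++) {
        CGContextSetStrokeColorWithColor(c, lineColors[i]); // CGColorRef array
        CGContextMoveToPoint(c, linePoints[i].x, linePoints[i].y);
        CGContextAddLineToPoint(c, linePoints[i].x + 20.0f, linePoints[i].y + 20.0f);
        CGContextStrokePath(c);
    }
}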
Good luck, hope it helps ;)