Drawing lat/lng points on an image - objective-c

I have the following:
An image - a hand drawn map - of an area of roughly 600x400 meters. The image is drawn on top of Google Maps tiles.
The latitude/longitude (from Google Maps) of the corners of this image. Or put differently, I have the north and south latitude and the east and west longitude of the image.
A latitude/longitude coordinate from iPhone's CoreLocation.
How do I draw a point on this image (or nothing if it's out of bounds), representing the coordinate from CoreLocation?
Added bonus: draw an arrow on the edge of the map, pointing to the coordinate, if the coordinate is out of bounds of the image.
I would like to do this without using a library like proj, in order to not have to bundle a large library, and understand what I'm doing and why.
As you probably guessed by now, I'm writing this in Objective-C. Your answer doesn't have to be in Objective-C, though.

If I understand it correctly, you need to do two things. The first is to put your custom image into a map view and have your custom tiles appear at the correct coordinates, then pan, zoom and so on. The second thing you need to do is to draw a point onto that image at a certain latitude and longitude.
What you need is custom overlays, available in iOS 4 and up. The best place to find out about custom overlays is the WWDC 2010 video called "Session 127 - Customizing Maps with Overlays". There is also custom code available for the video. In the video, the presenter creates a custom map and embeds it in an MKMapView. He also describes a tool which you can use to make your tiles (to cut them up, get their shapes into the Mercator projection and name them properly). His map is scanned from a marine map, then placed on top of the normal map view.
You would be able to use boundingMapRect to create a bounds rectangle by converting your custom map's bounds to points. You can convert between points and coordinates using MKMapPointForCoordinate and MKCoordinateForMapPoint.
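For example, a minimal sketch of boundingMapRect for such an overlay might look like this (the northLatitude/westLongitude etc. property names are illustrative assumptions, not part of MapKit):

```objectivec
// Build the overlay's bounding map rect from its corner coordinates.
// The four edge properties are assumed to exist on your overlay class.
- (MKMapRect)boundingMapRect {
    MKMapPoint nw = MKMapPointForCoordinate(
        CLLocationCoordinate2DMake(self.northLatitude, self.westLongitude));
    MKMapPoint se = MKMapPointForCoordinate(
        CLLocationCoordinate2DMake(self.southLatitude, self.eastLongitude));
    return MKMapRectMake(nw.x, nw.y, se.x - nw.x, se.y - nw.y);
}
```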
As for getting a point drawn on the map, you can do this a couple of ways. The easiest is to just use a custom MKAnnotationView with a dot as its image; that way the dot doesn't grow and shrink as you zoom in. If you want the dot to grow and shrink, you should use a custom overlay for that too. You could easily use an MKCircleView, which is a subclass of MKOverlayView.
For an arrow, you could use a normal view, place it on one edge of the screen, and rotate it according to the direction of your out-of-bounds point. Use MKMapPointForCoordinate and then calculate the direction from the centre of your view.
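As a rough sketch of that arrow rotation (untested; arrowView and the projected target point are illustrative assumptions):

```objectivec
// Rotate an arrow view to point from the screen centre toward an
// out-of-bounds location. 'arrowView' and 'target' are illustrative.
CGPoint center = self.view.center;
CGPoint target = CGPointMake(900.0, -150.0); // projected point, off-screen
CGFloat angle = atan2(target.y - center.y, target.x - center.x);
arrowView.transform = CGAffineTransformMakeRotation(angle);
```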
But your best source is going to be that video. He goes into great depth about the whole process and gives source for a working app which is 90% of what you need for your own map.

After some research, I wrote my own library: libPirateMap. It's not very polished, but it works.
In case the link goes down, I'll paste the relevant source code here.
Usage:
// .h
PirateMap *pirateMap;
PirateMapPoint *pirateMapPoint;
// .m
- (void)viewDidLoad {
[super viewDidLoad];
pirateMap = [[PirateMap alloc] initWithNorthLatitude:59.87822
andSouthLatitude:59.87428
andWestLongitude:10.79847
andEastLongitude:10.80375
andImageWidth:640
andImageHeight:960];
pirateMapPoint = [[PirateMapPoint alloc] init];
pirateMapPoint.pirateMap = pirateMap;
}
- (void)locationManager:(CLLocationManager *)manager didUpdateToLocation:(CLLocation *)newLocation fromLocation:(CLLocation *)oldLocation {
pirateMapPoint.coordinate = PirateMapCoordinate2DMake(newLocation.coordinate.latitude, newLocation.coordinate.longitude);
PirateMapPoint2D point = [pirateMapPoint pointOnImage];
// use point.x and point.y to place your view.
}
Relevant .m code:
#import "PirateMap.h"
static const double RAD_TO_DEG = 180 / M_PI;
static const double DEG_TO_RAD = M_PI / 180;
PirateMapCoordinate2D PirateMapCoordinate2DMake(double latitude, double longitude) {
return (PirateMapCoordinate2D) {latitude, longitude};
}
// atan2(y2-y1,x2-x1)
@implementation PirateMap
@synthesize northLatitude, southLatitude, westLongitude, eastLongitude,
imageWidth, imageHeight, latitudeImageToWorldRatio, longitudeImageToWorldRatio;
-(id)initWithNorthLatitude:(double)aNorthLatitude
andSouthLatitude:(double)aSouthLatitude
andWestLongitude:(double)aWestLongitude
andEastLongitude:(double)anEastLongitude
andImageWidth:(int)anImageWidth
andImageHeight:(int)anImageHeight{
if (self = [super init]) {
self.northLatitude = aNorthLatitude;
self.southLatitude = aSouthLatitude;
self.westLongitude = aWestLongitude;
self.eastLongitude = anEastLongitude;
self.imageWidth = anImageWidth;
self.imageHeight = anImageHeight;
self.latitudeImageToWorldRatio = [self computeLatitudeImageToWorldRatio];
self.longitudeImageToWorldRatio = [self computeLongitudeImageToWorldRatio];
}
return self;
}
-(double)computeLatitudeImageToWorldRatio {
return fabs(self.northLatitude - self.southLatitude) / self.imageHeight;
}
-(double)computeLongitudeImageToWorldRatio {
return fabs(self.eastLongitude - self.westLongitude) / self.imageWidth;
}
+(double)latitudeToMercatorY:(double)latitude {
static const double M_PI_TO_4 = M_PI / 4;
return RAD_TO_DEG * log(tan(M_PI_TO_4 + latitude * (DEG_TO_RAD / 2)));
}
@end
#import "PirateMapPoint.h"
PirateMapPoint2D PirateMapPoint2DMake(int x, int y) {
return (PirateMapPoint2D) {x, y};
}
@implementation PirateMapPoint
@synthesize pirateMap, coordinate;
-(id)initWithPirateMap:(PirateMap *)aPirateMap andCoordinate:(PirateMapCoordinate2D)aCoordinate {
if (self = [super init]) {
self.pirateMap = aPirateMap;
self.coordinate = aCoordinate;
}
return self;
}
-(PirateMapPoint2D)pointOnImage {
double xDelta = self.coordinate.longitude - self.pirateMap.westLongitude;
double yDelta = self.pirateMap.northLatitude - self.coordinate.latitude;
return PirateMapPoint2DMake(round(xDelta / self.pirateMap.longitudeImageToWorldRatio), round(yDelta / self.pirateMap.latitudeImageToWorldRatio));
}
@end

Have you looked into using MapKit? It has methods for converting map coordinates to view coordinates. Have a look at the convert family of methods.
http://developer.apple.com/library/ios/#documentation/MapKit/Reference/MKMapView_Class/MKMapView/MKMapView.html
If you are on 4.0 only, you might benefit from the overlay class as well.
http://developer.apple.com/library/ios/#documentation/MapKit/Reference/MKOverlayView_class/Reference/Reference.html
Cheers!

Related

SKShapeNode update physics body on touch

I'm currently developing an iOS game using SpriteKit.
I have a background which is an SKShapeNode. Basically the path of this shape is a bezier path with some curves. This path can be updated by the player via the touchesBegan or touchesMoved callbacks.
- (void) touchesMoved:(NSSet*)touches withEvent:(UIEvent*)event{
CGPoint touchPoint = [[touches anyObject] locationInView:self.view];
// Create path
UIBezierPath* newPath = ...
// Update path
[self.backgroundShape setPath:newPath.CGPath];
[self.backgroundShape setPhysicsBody:[SKPhysicsBody bodyWithEdgeLoopFromPath:newPath.CGPath]];
}
While updating the path of the background, some other SKNodes can enter inside my background.
Let's say we have the floor of a table at y0 (original vertical position) with some boxes on it (SKNodes). When updating the physics of the table to another vertical position (for instance y0 + deltaY), the boxes (which are affected by gravity) fall down at the bottom of the screen.
How can I prevent this? I want to update the physics of the table, but I want the boxes to stay on it.
Short video of the current issue
Thanks,
In your gamescene:
func keepBallOnTable() {
if ball.frame.minY < table.frame.maxY {
ball.position.y = table.frame.maxY + (ball.size.height / 2) + 1
}
}
override func didFinishUpdate() {
keepBallOnTable()
}
You will have to translate to objC :P
Video results of implementation
gist of the whole gamescene
A physics body and things like .moveTo and .position don't mix very well. You will have to fight the physics system every time you try to mix them. Hence, the function I created above :)

Trying to get my head around simulating momentum / inertia with a UIRotationGestureRecognizer

Okay, so I'm trying to make a 'Wheel of Fortune' type of effect with a wheel shape in iOS, where I can grab and spin a wheel. I can currently drag and spin the wheel around to my heart's content, but upon releasing my finger, it stops dead. I need to apply some momentum or inertia to it, to simulate the wheel spinning down naturally.
I've got the velocity calculation in place, so when I lift my finger up I NSLog out a velocity (between a minimum of 1 and a maximum of 100), which ranges from anywhere between 1 and over 1800 (at my hardest flick!), now I'm just trying to establish how I would go about converting that velocity into an actual rotation to apply to the object, and how I'd go about slowing it down over time.
My initial thoughts were something like: begin rotating full circles on a loop at the same speed as the velocity that was given, then on each subsequent rotation, slow the speed by some small percentage. This should give the effect that a harder spin goes faster and takes longer to slow down.
I'm no mathematician, so my approach may be wrong, but if anybody has any tips on how I could get this to work, at least in a basic state, I'd be really grateful. There's a really helpful answer here: iPhone add inertia/momentum physics to animate "wheel of fortune" like rotating control, but it's more theoretical and lacking in practical information on how exactly to apply the calculated velocity to the object, etc. I'm thinking I'll need some animation help here, too.
EDIT: I'm also going to need to work out if they were dragging the wheel clockwise or anti-clockwise.
Many thanks!
I have written something analogous for my program Bit, though my case is a bit more complex because I rotate in 3D: https://itunes.apple.com/ua/app/bit/id366236469?mt=8
Basically what I do is I set up an NSTimer that calls some method regularly. I just take the direction and speed to create a rotation matrix (as I said, 3D is a bit nastier :P ), and I multiply the speed with some number smaller than 1 so it goes down. The reason for multiplying instead of subtracting is that you don't want the object to rotate twice as long if the spin from the user is twice as hard since that becomes annoying to wait on I find.
As for figuring out which direction the wheel is spinning, just store that in the touchesEnded:withEvent: method where you have all the information. Since you say you already have the tracking working as long as the user has the finger down this should hopefully be obvious.
What I have in 3D is something like:
// MyView.h
@interface MyView : UIView {
NSTimer *animationTimer;
}
- (void) startAnimation;
@end
// MyAppDelegate.h
@implementation MyAppDelegate
- (void) applicationDidFinishLaunching:(UIApplication *)application {
[myView startAnimation];
}
@end
// MyView.m
GLfloat rotationMomentum = 0;
GLfloat rotationDeltaX = 0.0f;
GLfloat rotationDeltaY = 0.0f;
@implementation MyView
- (void)startAnimation {
animationTimer = [NSTimer scheduledTimerWithTimeInterval:(NSTimeInterval)((1.0 / 60.0) * animationFrameInterval) target:self selector:@selector(drawView:) userInfo:nil repeats:TRUE];
}
- (void) drawView:(id)sender {
addRotationByDegree(rotationMomentum);
rotationMomentum /= 1.05;
if (rotationMomentum < 0.1)
rotationMomentum = 0.1; // never stop rotating completely
[renderer render];
}
- (void)touchesBegan:(NSSet*)touches withEvent:(UIEvent*)event
{
}
- (void)touchesMoved:(NSSet*)touches withEvent:(UIEvent*)event
{
UITouch *aTouch = [touches anyObject];
CGPoint loc = [aTouch locationInView:self];
CGPoint prevloc = [aTouch previousLocationInView:self];
rotationDeltaX = loc.x - prevloc.x;
rotationDeltaY = loc.y - prevloc.y;
GLfloat distance = sqrt(rotationDeltaX*rotationDeltaX+rotationDeltaY*rotationDeltaY)/4;
rotationMomentum = distance;
addRotationByDegree(distance);
self->moved = TRUE;
}
- (void)touchesEnded:(NSSet*)touches withEvent:(UIEvent*)event
{
}
- (void)touchesCancelled:(NSSet*)touches withEvent:(UIEvent*)event
{
}
I've left out the addRotationByDegree function but what it does is that it uses the global variables rotationDeltaX and rotationDeltaY and applies a rotational matrix to an already stored matrix and then saves the result. In your example you probably want something much simpler, like (I'm assuming now that only movements in the X direction spin the wheel):
- (void)touchesMoved:(NSSet*)touches withEvent:(UIEvent*)event
{
UITouch *aTouch = [touches anyObject];
CGPoint loc = [aTouch locationInView:self];
CGPoint prevloc = [aTouch previousLocationInView:self];
GLfloat distance = loc.x - prevloc.x;
rotationMomentum = distance;
addRotationByDegree(distance);
self->moved = TRUE;
}
void addRotationByDegree(GLfloat distance) {
angleOfWheel += distance; // probably need to divide distance by something reasonable here to make the spin nicer
}
It's going to be a rough answer as I don't have any detailed example at hand.
If you have the velocity when you lift your finger already then it should not be hard.
The velocity you have is in pixels per second, or something like that.
First you need to convert that linear speed to an angular speed. Knowing the perimeter of the circle, 2*PI*radius, you can compute 2*PI/perimeter*velocity, which simplifies to velocity/radius, to get the angular speed in radians per second.
If your wheel had no friction in its axis, it would run forever at that speed. So pick an arbitrary value for this friction; it's an acceleration, expressed in pixels per second squared, or radians per second squared for an angular acceleration. Then it's just a matter of dividing the angular speed by this angular acceleration, and you get the time until the wheel stops.
With the animation time you can use the equation finalAngle = initialAngle + angularSpeed*animationTime - angularAcceleration/2*animationTime*animationTime to get the final angle your wheel is going to be at the end of the animation. Then just do an animation on the transformation and rotate it by that angle for the time you got and say that your animation should ease out.
This should look realistic enough. If not you'll need to give an animation path for the rotation property of your wheel based on some samples from the equation from above.
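The arithmetic above can be sketched in plain C (which compiles as-is inside an Objective-C file); the function names and the friction value are illustrative choices, not part of any API:

```c
/* Time (seconds) for the wheel to decelerate from angularSpeed (rad/s)
   to rest under a constant friction angularAcceleration (rad/s^2). */
double spinDownTime(double angularSpeed, double angularAcceleration) {
    return angularSpeed / angularAcceleration;
}

/* Final angle (radians) once the wheel has spun down, using
   finalAngle = initial + w*t - a/2*t^2 with t = w/a. */
double finalAngleAtRest(double initialAngle, double angularSpeed,
                        double angularAcceleration) {
    double t = spinDownTime(angularSpeed, angularAcceleration);
    return initialAngle + angularSpeed * t
         - angularAcceleration / 2.0 * t * t;
}

/* Linear release velocity (points/s) to angular speed (rad/s);
   same as 2*PI/perimeter * velocity. */
double angularSpeedFromVelocity(double velocity, double radius) {
    return velocity / radius;
}
```

For instance, a wheel of radius 100 points flicked at 600 points/s with a chosen friction of 2 rad/s² gives an angular speed of 6 rad/s, a 3 second spin-down, and a final angle of 9 radians, which you can feed into the ease-out animation described above.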

UIScrollView - Custom Map - Prevent marker subview on map from scaling with map

I have a custom map of a limited area, and have it set up to correctly show the users' location. The map is a 1600px square image within a UIScrollView.
I have a crosshair image to show the current location of the user, which at zoomScale 1.0 is the desired size. When I pinch and zoom the scrollView, the crosshair scales with it. I would like to have the subview remain the same size on screen.
I haven't been able to find any information on this, what would be the best way to go about this?
If there is anything I can provide you with to help the answer, please let me know.
Many thanks!
EDIT -
Having looked in to this further, there is a UIScrollViewDelegate method - (void)scrollViewDidEndZooming:(UIScrollView *)scrollView withView:(UIView *)view atScale:(float)scale which I tried using to take the marker's current center and size, then adjust, but this only scales at the end of the zoom. I would prefer to have the marker remain the same size while the user is zooming.
EDIT 2-
Cake has provided a great answer below, but I haven't been able to implement this in the way I imagined it would be.
I have the UIImageView as a placeholder, with alpha set to 0. This placeholder moves around relative to the map to show the user location. This operates as I expect it to. Unfortunately, this resizes with the map, as it is a subview of the map (so it stays in place).
Taking Cake's below answer, I have created the non-scaling crosshair image, and added it as a sibling subview to the scrollview. The maths, once Cake had pointed them out, were quite simple to get the new frame for the crosshair:
CGPoint ULPC = userLocationPlaceholder.center;
float zs = scrollView.zoomScale;
CGRect newFrame = CGRectMake(((ULPC.x * zs) - scrollView.contentOffset.x) - 20, ((ULPC.y * zs) - scrollView.contentOffset.y) - 20, 40, 40);
Where the image is 40points wide. This matches the centers perfectly.
The problem I now have is that I cannot get the crosshair image to stay locked to the placeholder.
I have tried using a self calling animation as such:
-(void)animateUserLocationAttachment
{
[UIView animateWithDuration:0.05
delay:0
options:(UIViewAnimationOptionAllowUserInteraction | UIViewAnimationOptionCurveLinear )
animations:^{
userLocationDotContainer.frame = newFrame;
} completion:^(BOOL finished){
// Call self
[self animateUserLocationAttachment];
}];
}
As soon as I start scrolling/zooming, this locks the animation so that the crosshair just sits in place until I release the scrolling/zooming, then it correctly updates its location.
Is there any way I can get around this, or an alternative method I can apply?
Many thanks
EDIT 3 -
I've re-accepted Cake's answer as it covers 90% of the issue. Further to his answer, I have implemented the UIScrollViewDelegate methods scrollViewWillBeginDragging: and scrollViewWillBeginDecelerating: to scale the placeholder to match the current size of the crosshair relative to the map, show the placeholder (which is a subview of the map image) and hide the crosshair image. The delegate method scrollViewWillBeginZooming:withView: does not show the placeholder because it scales with the map. As Cake recommends, I'll make a new question for this issue.
The counterpart methods (scrollViewDidEndZooming:withView:atScale:, scrollViewDidEndDragging:willDecelerate: and scrollViewDidEndDecelerating:) all hide the placeholder and re-show the crosshair.
The question is old, but for future similar questions: I recently resolved a similar problem by applying a hint from Andrew Madsen in another post.
I had a UIScrollView with a UIImageView in it. Attached to the UIImageView I had many MKAnnotationViews (the subviews that I didn't want scaling with the superview).
I subclassed UIImageView and implemented the setTransform: method like this:
#import "SLImageView.h"
@implementation SLImageView
- (void)setTransform:(CGAffineTransform)transform
{
[super setTransform:transform];
CGAffineTransform invertedTransform = CGAffineTransformInvert(transform);
for (id obj in self.subviews)
{
if ([obj isKindOfClass:[MKAnnotationView class]])
{
[((UIView *)obj) setTransform:invertedTransform];
}
}
}
@end
This works perfectly!
Mick.
Create another crosshair image that's associated with the view or view controller that contains the scrollview. Then have this one always snap to the center of the crosshair image you already have. Then, hide your original crosshair image. Then you can avoid having the scrollview scale the disassociated crosshair, and it should stay the same size.
Relative coordinate systems
Each view in Cocoa Touch has a frame property that has an origin. To position an object owned by one view properly relative to another view, all you have to do is figure out the difference in their origins. If one view is a subview of the other, this isn't too difficult.
Get the origin of the container view
Get the location of the subview inside of the container view
Get the origin of the subview
Calculate the difference in the positions of the origins
Get the location of the object you want to overlap (relative to the subview)
Calculate the location of the object you want to overlap relative to the container view
Move your crosshair to this position
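UIKit's convert methods can do most of that origin arithmetic for you; a minimal sketch of the steps above (the placeholder and crosshair view names are illustrative assumptions):

```objectivec
// Pin a non-scaling crosshair (a sibling of the scroll view) over a
// placeholder that lives inside the zooming view hierarchy.
// convertPoint:toView: accounts for zoom scale and content offset.
CGPoint p = [placeholder.superview convertPoint:placeholder.center
                                         toView:crosshair.superview];
crosshair.center = p;
```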
Swift equivalent for Mick's answer:
class MapContainerView:UIView {
@IBOutlet var nonScalingViews: [UIView]!
override var transform: CGAffineTransform {
didSet {
guard let nonScalingViews = nonScalingViews else {
return
}
let invertedTransform = CGAffineTransformInvert(transform)
for view in nonScalingViews {
view.transform = invertedTransform
}
}
}
}

iOS - Math help - base image zooms with pinch gesture need overlaid images adjust X/Y coords relative

I have an iPad application that has a base image UIImageView (in this case a large building or site plan or diagram) and then multiple 'pins' can be added on top of the plan (visually similar to Google Maps). These pins are also UIImageViews and are added to the main view on tap gestures. The base image is also added to the main view on viewDidLoad.
I have the base image working with the pinch gesture for zooming, but obviously when you zoom the base image all the pins stay at the same x and y coordinates in the main view and lose their relative positioning on the base image (whose x, y, width and height have changed).
So far I have this...
- (IBAction)planZoom:(UIPinchGestureRecognizer *)recognizer
{
recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform, recognizer.scale, recognizer.scale);
recognizer.scale = 1;
for (ZonePin *pin in planContainer.subviews) {
if ([pin isKindOfClass:[ZonePin class]]){
CGRect pinFrame = pin.frame;
// ****************************************
// code to reposition the pins goes here...
// ****************************************
pin.frame = pinFrame;
}
}
}
I need help with the math to reposition the pins' x/y coordinates so they retain their relative positions on the zoomed plan/diagram. The pins should not be scaled/zoomed at all in terms of width or height - they just need new x and y coordinates relative to their initial positions on the plan.
I have tried to work out the math myself but have struggled to work it through and unfortunately am not yet acquainted with the SDK enough to know if there is provision available built in to help or not.
Help with this math related problem would be really appreciated! :)
Many thanks,
Michael.
InNeedOfMathTuition.com
First, you might try embedding your UIImageView in a UIScrollView so zooming is largely accomplished for you. You can then set the max and min scale easily, and you can scroll around the zoomed image as desired (especially if your pins are subviews of the UIImageView or something else inside the UIScrollView).
As for scaling the locations of the pins, I think it would work to store the original x and y coordinates of each pin (i.e. when the view first loads, when they are first positioned, at scale 1.0). Then when the view is zoomed, set x = (originalX * zoomScale) and y = (originalY * zoomScale).
I had the same problem in an iOS app a couple of years ago, and if I recall correctly, that's how I accomplished it.
EDIT: Below is more detail about how I accomplished this (I'm looking my old code now).
I had a UIScrollView as a subview of my main view, and my UIImageView as a subview of that. My buttons were added to the scroll view, and I kept their original locations (at zoom 1.0) stored for reference.
In the -(void)scrollViewDidScroll:(UIScrollView *)scrollView method:
for (id element in myButtons)
{
UIButton *theButton = (UIButton *)element;
CGPoint originalPoint = //get original location however you want
[theButton setFrame:CGRectMake(
(originalPoint.x - theButton.frame.size.width / 2) * scrollView.zoomScale,
(originalPoint.y - theButton.frame.size.height / 2) * scrollView.zoomScale,
theButton.frame.size.width, theButton.frame.size.height)];
}
For the -(UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView method, I returned my UIImageView. My buttons scaled in size, but I didn't include that in the code above. If you're finding that the pins are scaling in size automatically, you might have to store their original sizes as well as original coordinates and use that in the setFrame call.
UPDATE...
Thanks to Mr. Jefferson's help in his answer above, albeit with a differing implementation, I was able to work this one through as follows...
I have a scrollView which has a plan/diagram image as a subview. The scrollView is setup for zooming/panning etc, this includes adding UIScrollViewDelegate to the ViewController.
On user double tapping on the plan/diagram a pin image is added as a subview to the scrollView at the touch point. The pin image is a custom 'ZonePin' class which inherits from UIImageView and has a couple of additional properties including 'baseX' and 'baseY'.
The code for adding the pins...
- (IBAction)planDoubleTap:(UITapGestureRecognizer *)recognizer
{
UIImage *image = [UIImage imageNamed:@"Pin.png"];
ZonePin *newPin = [[ZonePin alloc] initWithImage:image];
CGPoint touchPoint = [recognizer locationInView:planContainer];
CGFloat placementX = touchPoint.x - (image.size.width / 2);
CGFloat placementY = touchPoint.y - image.size.height;
newPin.frame = CGRectMake(placementX, placementY, image.size.width, image.size.height);
newPin.zoneRef = [NSString stringWithFormat:@"%@%d", @"BF", pinSeq++];
newPin.baseX = placementX;
newPin.baseY = placementY;
[planContainer addSubview:newPin];
}
I then have two functions for handling the scrollView interaction and this handles the scaling/repositioning of the pins relative to the plan image. These methods are as follows...
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
{
return planImage;
}
- (void)scrollViewDidScroll:(UIScrollView *)scrollView
{
for (ZonePin *pin in planContainer.subviews) {
if ([pin isKindOfClass:[ZonePin class]]){
CGFloat newX, newY;
newX = (pin.baseX * scrollView.zoomScale) + (((pin.frame.size.width * scrollView.zoomScale) - pin.frame.size.width) / 2);
newY = (pin.baseY * scrollView.zoomScale) + ((pin.frame.size.height * scrollView.zoomScale) - pin.frame.size.height);
CGRect pinFrame = pin.frame;
pinFrame.origin.x = newX;
pinFrame.origin.y = newY;
pin.frame = pinFrame;
}
}
}
For reference, the pin-positioning calculations reflect the nature of a pin: the image is centred on the x-axis but bottom-aligned on the y-axis.
The only thing left for me to do with this is to reverse the calculations used in the scrollViewDidScroll method when I add pins when zoomed in. The code for adding pins above will only work properly when the scrollView.zoomScale is 1.0.
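Inverting the scrollViewDidScroll: formulas gives one way to handle that; a hedged, untested sketch reusing the same names as the snippets above:

```objectivec
// Recover baseX/baseY for a pin dropped at the current zoom scale,
// inverting newX = baseX*zs + ((w*zs - w)/2) and the y counterpart.
CGFloat zs = scrollView.zoomScale;
CGFloat w = image.size.width;
CGFloat h = image.size.height;
newPin.baseX = (placementX - ((w * zs) - w) / 2.0) / zs;
newPin.baseY = (placementY - ((h * zs) - h)) / zs;
```

At zoomScale 1.0 this reduces to baseX = placementX and baseY = placementY, matching the existing code.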
Other than that, it now works great! :)

Draw waveform in NSView

I need to draw a waveform in an NSView. (I have all the samples in an array.) The drawing must be efficient and really fast, without clipping or flickering; it must be smooth. The waveform will "move" according to the song position, and some changes to the samples (DSP processing) will be shown as a visual representation in the NSView in real time.
I'm familiar with drawing lines, arcs, etc. onto canvas objects, and I have developed apps doing such things, but not on Mac OS X...
I want to ask where to start: Core Animation, OpenGL, simply overriding drawing methods, etc.? Which API would be the best practice to use?
I would keep it simple and create an NSView subclass with an audioData property that uses Cocoa drawing. You could call [view setAudioData:waveArray] which would in turn call [self setNeedsDisplay:YES].
In your drawRect: method you can then iterate through the samples and use NSRectFill() accordingly. Here each sample's value is between 0 and 1, and each sample gets a fixed-width vertical bar across the view:
- (void)drawRect:(NSRect)dirtyRect {
[[NSColor blueColor] set];
NSUInteger count = [self.waveArray count];
CGFloat barWidth = [self bounds].size.width / count;
for (NSUInteger i = 0; i < count; i++) {
NSRect drawingRect;
drawingRect.origin.x = [self bounds].origin.x + i * barWidth;
drawingRect.origin.y = [self bounds].origin.y;
drawingRect.size.width = barWidth;
drawingRect.size.height = [self bounds].size.height * [[self.waveArray objectAtIndex:i] value];
NSRectFill(drawingRect);
}
}
This code isn't exact, and you should be sure to make it more efficent by only drawing samples inside dirtyRect.
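With fixed-width bars, the range of sample indices intersecting dirtyRect can be computed directly; a sketch, assuming one bar per sample spread across the view's width:

```objectivec
// Only draw the bars whose x-range intersects dirtyRect.
CGFloat barWidth = [self bounds].size.width / [self.waveArray count];
NSUInteger first = (NSUInteger)floor(NSMinX(dirtyRect) / barWidth);
NSUInteger last = MIN([self.waveArray count],
                      (NSUInteger)ceil(NSMaxX(dirtyRect) / barWidth));
for (NSUInteger i = first; i < last; i++) {
    // fill the rect for sample i, as in the drawRect: loop
}
```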
I would start with a really long and thin image to represent a single bar/column for the waveform.
My plan would be to have a NSTimer that moves all bars of the wave one to the left every 0.01 seconds.
So something like this in the loop.
for (int x = 0; x < [WaveArray count]; x++)
{
UIImageView *bar = [WaveArray objectAtIndex:x];
[bar setCenter:CGPointMake(bar.center.x - 1, bar.center.y)];
}
Now all you have to do is create the objects at the correct height and add them to the WaveArray, and they will all be moved to the left.