Re-Position MKMapView's userLocationView? - objective-c

I'm making an app that has a series of trails in the forest. GPS is going to be off now and then by a few dozen meters, so I'm trying to write some code that will "snap" the userLocationView to the trail (which is where the user likely is).
I've got some code working that scans the nearby trails, determines the probable location of the user, and gets the CLLocationCoordinate2D for it.
Now I need to apply that location to the userLocationView. I thought that this would work, but it doesn't seem to:
userLocationAnnotation.coordinate = myDerivedCoordinate;
I realize I could probably create my own annotation, but I would like to avoid that if possible.

MKMapView gets the user location directly from CLLocationManager, and there isn't a documented way to intercept this interaction. Doing so would be more work than adding your own annotation, and a cause for App Store rejection. Also, it doesn't make sense to redefine correct information provided by the system; that's why mapView.userLocation is an MKUserLocation with a read-only location property.
So you have to create your own, roughly:
Set mapView.showsUserLocation = NO;
Create and start a location manager
self.locationManager = [CLLocationManager new];
self.locationManager.delegate = self;
[self.locationManager startUpdatingLocation];
Add/replace your own blue-dot annotation from the CLLocationManagerDelegate callback (locationManager:didUpdateLocations: on current systems; the older locationManager:didUpdateToLocation:fromLocation: is deprecated).
You can get the blue dot graphic (and probably the pulsating circles) with the UIKit-Artwork-Extractor. For the pulsating effect add to the MKAnnotationView a UIImageView with the different frames, or take one circle and use UIView animateWithDuration with a CGAffineTransformMakeScale for the view.transform property.
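The delegate callback above can be sketched roughly as follows. This is a minimal sketch, not the asker's actual code: the userDotAnnotation and mapView properties and the snapToTrail: method (the asker's trail-snapping routine) are assumed names.

```objc
#import <MapKit/MapKit.h>
#import <CoreLocation/CoreLocation.h>

- (void)locationManager:(CLLocationManager *)manager
     didUpdateLocations:(NSArray<CLLocation *> *)locations {
    CLLocation *raw = locations.lastObject;
    // snapToTrail: is assumed to be the asker's routine that projects
    // the raw GPS fix onto the nearest trail.
    CLLocationCoordinate2D snapped = [self snapToTrail:raw.coordinate];

    if (self.userDotAnnotation == nil) {
        self.userDotAnnotation = [MKPointAnnotation new];
        [self.mapView addAnnotation:self.userDotAnnotation];
    }
    // Updating the coordinate of an existing annotation moves the dot
    // in place instead of removing and re-adding it.
    self.userDotAnnotation.coordinate = snapped;
}
```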


How do I re-define size/origin of app window in code, overriding nib/xib file parameters, with Obj-C, Xcode 11.3.1 on Mac OS X 10.15.2 (Catalina)?

I'll try to keep it short. I want to create a 3D FPS game, just for myself, that can run on multiple platforms, but I figured that to keep it simple, perhaps it is best to start off with something that is exclusively for macOS. I opted for Objective-C because
(a) Window Application projects in Xcode can only be coded either in Obj-C or Swift (since we are dealing with Cocoa API) and
(b) Obj-C is closer to old-school than Swift.
But before I learn to draw/render 2D-shapes on the window's canvas by writing code, I have to learn to invoke an application window with its properties set to my liking. I've spent hours doing research and experimenting with chunks of code. This is what I've tried: I open with
#import <Cocoa/Cocoa.h>
int main(int argc, const char * argv[]) {
@autoreleasepool {
Then I go with ...
1)
NSWindow *window = [[[NSApplication sharedApplication] windows] firstObject];
NSRect frame = [window frame];
frame.origin.x = 100;
frame.origin.y = 200;
frame.size.width = 100;
frame.size.height = 500;
[window setFrame: frame display: YES];
... and close with ...
NSApplicationMain(argc, argv); // runs the win display function.
}
return (0) ;
}
But no visible changes. Nothing really gets reset. So instead of (1) I tried ...
2)
NSWindow *window = [[[NSApplication sharedApplication] windows] firstObject];
NSPoint newOrigin;
newOrigin.x = 400;
newOrigin.y = 100;
[window setFrameOrigin : newOrigin];
Still nothing. Then instead of (2) I tried:
3)
NSWindowController* controller = [[NSWindowController alloc]
initWithWindowNibName:@"MainMenu"];
[controller showWindow:nil];
Great. Now it's spitting out something I don't understand, especially since I'm new to Obj-C:
2020-02-08 21:53:49.782197-0800 tryout_macApp2[14333:939233] [Nib Loading] Failed to connect (delegate) outlet from (NSWindowController) to (AppDelegate): missing setter or instance variable
I remember dicing around with an ApplicationDelegate, with CGSizeMake(), etc., but it just made the experience really daunting and frustrating. Nothing happened. Then there are NSView, NSViewController, and other classes, which is really mind-boggling and begs the question: why are there so many classes when all I want to do is override the preset origin of the window and the dimensions preset by the MainMenu.xib file? (By the way, this project is derived from a Window Application project provided by Xcode.)
I really can't think of anything else to add to give you the entire picture of my predicament, so if you feel that something is missing, please chime in.
[Edit:] Moving forward to phase 2 of my project here: How do I paint/draw/render a dot or color a pixel on the canvas of my window with only a few lines in Obj-C on Mac OS X using Xcode?.
The short answer is that main() is too early to be trying to do this. Instead, implement -applicationDidFinishLaunching: on your app delegate class, and do it there. Leave main() as it was originally created by Xcode's template.
After that, I would say to obtain the window (if there's only going to be one main one), it's better to add an outlet to your app delegate and then, in the NIB, connect that outlet to the window. Then, you can use that outlet whenever you want to refer to the window.
Also, make sure that Visible at Launch is disabled for the window in the NIB. That's so you configure it as you want before showing it.
For a more complex app, it's probably better to not put a window into the Main Menu NIB. Instead, make a separate NIB for the window. Then, load it using a window controller object and ask that for its window.
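Putting the advice above together, a minimal sketch might look like this. It assumes the window outlet has been connected to the window in MainMenu.xib and that Visible At Launch is unchecked; the frame values are the ones the asker was trying to set.

```objc
#import <Cocoa/Cocoa.h>

@interface AppDelegate : NSObject <NSApplicationDelegate>
// Connect this outlet to the window in MainMenu.xib.
@property (weak) IBOutlet NSWindow *window;
@end

@implementation AppDelegate
- (void)applicationDidFinishLaunching:(NSNotification *)notification {
    // Configure the frame first, then show the window.
    [self.window setFrame:NSMakeRect(100, 200, 100, 500) display:YES];
    [self.window makeKeyAndOrderFront:nil];
}
@end
```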
I love Objective-C but also feel your pain, it has this testy ability to frustrate you endlessly.
I have not really developed a game but let me try and point you in the right direction. I think you need a UIViewController.
Each UIViewController has a built-in UIView that represents its visible portion. You can use this, or add a separate UIView and use that, depending on your implementation. For now I'd suggest adding a separate UIView and using that. Once you're comfortable, you can move the implementation to the UIViewController's view if you need to.
Anyhow, for now, create a UIView subclass, say MyGame or something, as for now all your code will end up there.
To do all of the above is not easy, especially if it's your first time. If you can follow some tutorial, that would be great. Even if the tutorial just adds a button, you can use it and replace the button with your view.
Anyhow, now that you've got that running and the view you've added shows up in green or some other neon colour just to verify that you can indeed change its properties, you're good to go.
Now you start. In MyGame, implement the
-(void)drawRect:(CGRect)rect
message, grab the context through
UIGraphicsGetCurrentContext
and start drawing lines and stuff on it, basically the stuff I understand you are interested in doing. You can also, through the same context, change the origin of what you are doing.
Hope this helps.
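The drawRect:/UIGraphicsGetCurrentContext steps above might look roughly like this. The MyGame class name comes from the answer; the stroke color and line endpoints are illustrative only.

```objc
#import <UIKit/UIKit.h>

@interface MyGame : UIView
@end

@implementation MyGame
- (void)drawRect:(CGRect)rect {
    // Grab the context that UIKit has already set up for this view.
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetStrokeColorWithColor(ctx, [UIColor greenColor].CGColor);
    CGContextSetLineWidth(ctx, 2.0);
    // Draw a single line as a stand-in for the game's rendering.
    CGContextMoveToPoint(ctx, 10.0, 10.0);
    CGContextAddLineToPoint(ctx, 100.0, 100.0);
    CGContextStrokePath(ctx);
}
@end
```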

Share Textures Between 2 OpenGL Contexts

I have an existing OpenGL 2.1 context. I am able to draw objects/textures/etc. no problem. However, now I want my application to launch a separate NSWindow, with an NSOpenGLView, that displays part of a texture I drew in the original renderer's view. After some reading, I eventually bumped into the topic of context sharing, which I think may be the route I have to take to pull this off.
My shared OpenGL context is of type CGLContextObj, but I don't know what to do with it, as my window resides in a different process. I've read the Apple documentation on rendering contexts, but I am unable to apply the concepts they laid out when there are barely any examples to go through. Any advice would be really appreciated; thank you in advance.
EDIT:
Perhaps I did not give enough description, my apologies. I subclass NSOpenGLView, and in its init I do the following:
// *** irrelevant initialization stuff above inside init *** //
// Get pixel format from first context to be used for NSOpenGLView when it's finally initialized later
_pixFormat = [[NSOpenGLPixelFormat alloc] initWithAttributes:(void*)_attribs];
// We will create a CGLPixelFormatObj from our C array of pixel format attributes
GLint nPix;
CGLPixelFormatObj myCgPixObj;
CGLChoosePixelFormat(_attribs, &myCgPixObj, &nPix);
// Now that we have the pixel format in CGLPixelFormatObj form, create a CGLContextObj
// that shares with the main rendering context
CGLContextObj newContext;
CGLCreateContext(myCgPixObj, mainRenderingContext, &newContext);
// Wrap it in an NSOpenGLContext object to feed into the NSOpenGLView
NSOpenGLContext *contextForGLView = [[NSOpenGLContext alloc] initWithCGLContextObj:newContext];
[contextForGLView setView:self];
[self setOpenGLContext:contextForGLView];
// We don't need this anymore
CGLDestroyPixelFormat(myCgPixObj);
return self;
I am able to draw objects in this view just fine. But I get a blank white rectangle whenever I try to use the textures created in the main rendering context. I'm a little lost on how to proceed from here, I have never dealt with shared contexts before.
Seems like I got it working, at least partially, since I had to force the view to redraw by moving my window around to actually render the texture from the main context (another problem for another time!). Anyway, here's how I did it:
My main rendering context is supplied by a host application (yes, I'm working on a plugin), and is of type CGLContextObj. I wrap that context in an NSOpenGLContext object via calling initWithCGLContextObj
Next step was to create an NSOpenGLPixelFormat object, initializing it with the pixel format attributes used by the host application's renderer. This step is important as it ensures that the rendering context that will be used in my view will have the same OpenGL core profile, along with other attributes used by the host application.
Then in my subclassed NSOpenGLView, I create a new NSOpenGLContext object, preferably in the prepareOpenGL method, by using initWithFormat:shareContext: for allocation. I used the NSOpenGLPixelFormat and NSOpenGLContext objects created previously to pass as parameters.
Upon assigning the newly created context to my view, I was able to render the textures from the main rendering context.
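The steps above can be sketched as follows. This is a minimal sketch under stated assumptions: hostCGLContext and hostAttribs stand in for whatever CGLContextObj and pixel format attributes the host application actually supplies.

```objc
#import <Cocoa/Cocoa.h>

- (void)prepareOpenGL {
    [super prepareOpenGL];

    // 1. Wrap the host's CGLContextObj in an NSOpenGLContext.
    NSOpenGLContext *hostContext =
        [[NSOpenGLContext alloc] initWithCGLContextObj:hostCGLContext];

    // 2. Pixel format built from the host renderer's attributes, so
    //    both contexts use the same profile.
    NSOpenGLPixelFormat *format =
        [[NSOpenGLPixelFormat alloc] initWithAttributes:hostAttribs];

    // 3. A new context that shares textures with the host context.
    NSOpenGLContext *shared =
        [[NSOpenGLContext alloc] initWithFormat:format
                                   shareContext:hostContext];
    [self setOpenGLContext:shared];
    [shared setView:self];
}
```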

Xcode (Obj-C) Updating a UIView (NSRect) after the window is already visible to the user

I am fairly new to creating Xcode projects using Objective-C, and I'm trying to make a simple 2D graphics program to show me how well my graphing code works before I implement it elsewhere.
I have gotten the drawing of everything right, but the problem comes when I want to clear the strokes and stroke new lines. From what I can see when I run the program, the view only displays what I have drawn once it hits that @end at the end of the implementation. The issue with this is that by the time it hits the @end, the code has already run. I can't figure out whether I need to re-call the class in a loop to update the view each time (or where or how to do this; perhaps something in main.m?), or whether I need to call a function to update the view before it exits the implementation, because right now all the lines just overwrite each other before the user can see anything.
Here is the interface in my header file (connected to a UIView):
@interface GraphView : NSView
- (void)drawRect:(NSRect)dirtyRect;
@end
Implementation file:
Here is how I am creating my rectangle:
- (void)drawRect:(NSRect)dirtyRect {
[self clearGrid:dirtyRect];
[self drawLines];
}
- (void)clearGrid:(NSRect)theRect {
//Wiping the slate clean
[super drawRect:theRect];
[self.window setBackgroundColor:[NSColor whiteColor]];
...
Here is what I am using to draw my lines:
NSBezierPath* eqLine = [NSBezierPath bezierPath];
[[NSColor greenColor] setStroke];
[eqLine setLineWidth:5.0];
[eqLine moveToPoint:NSMakePoint([self convertToPixels:previousX], [self convertToPixels:previousY])];
[eqLine lineToPoint:NSMakePoint([self convertToPixels:finalX], [self convertToPixels:finalY])];
[eqLine stroke];
I have been searching for the past few days now on how I could solve this but so far it hasn't turned up anything, perhaps i'm just not searching for the right thing. Any information is helpful even if it's just a point to a resource that I can look at. Let me know if any additional information is needed. Thanks!
It's clear that you're a newbie to this, which is not a crime. So there's a lot to address here. Let's take things one at a time.
To answer your basic question, the -drawRect: method is called whenever a view needs to draw its graphic representation, whatever that is. As long as nothing changes that would alter its graphical representation, the -drawRect: method will only be received once.
To inform Cocoa (or CocoaTouch) that the graphic representation has changed, you invalidate all, or a portion of, your view. This can be accomplished in many ways, but the simplest is by setting the needsDisplay property to YES. Once you do that, Cocoa will (at some point in the future) call the -drawRect: method again to modify the graphic representation of your view. Wash, rinse, repeat.
So here's the basic flow:
You create your class and add it to a view. Cocoa draws your view the first time when it appears by sending your class a -drawRect: message.
If you want your view to change, you invalidate a portion of the view (i.e. view.needsDisplay = YES). Then you just sit back and wait. Very soon, Cocoa will send -drawRect: again, your class draws the new and improved image, and it appears on the screen.
Many changes will cause your view to be invalidated, and subsequently redrawn, automatically—say, if it's resized. But unless you make a change that Cocoa knows will require your view to redraw itself, you'll have to invalidate the view yourself.
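The invalidate-then-redraw cycle above can be sketched in a minimal NSView subclass. The GraphView name comes from the question; the phase property and the advance method are illustrative (you might call advance from a timer or an action method).

```objc
#import <Cocoa/Cocoa.h>

@interface GraphView : NSView
@property CGFloat phase;
@end

@implementation GraphView
- (void)drawRect:(NSRect)dirtyRect {
    // Redraw the current state from scratch each time; the context
    // is already set up and cleared when this is called.
    [[NSColor greenColor] setStroke];
    NSBezierPath *line = [NSBezierPath bezierPath];
    [line setLineWidth:5.0];
    [line moveToPoint:NSMakePoint(0.0, 0.0)];
    [line lineToPoint:NSMakePoint(self.phase, self.bounds.size.height)];
    [line stroke];
}

- (void)advance {
    self.phase += 10.0;
    // Invalidate the view; Cocoa will call -drawRect: again soon.
    self.needsDisplay = YES;
}
@end
```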
Now for the nit-picking, which I hope you understand is all an effort to help you understand what's going on and improve your code...
Your subject line says UIView but your code example subclasses NSView so I'm actually not sure if you're writing a macOS or an iOS app. It doesn't matter too much, because at this level the differences are minimal.
Your -drawRect: calls [self clearGrid:..., which then calls [super drawRect:... Don't do this. As a rule, never use super except from within the overridden method of the same name. In other words, -drawRect: can use [super drawRect:, but no other method should. It's not "illegal", but avoiding it will save you grief in the long run.
Your -clearGrid: method sets the backgroundColor of the window. Don't do this. The window's background color is a property, and your -drawRect: method should only be drawing the graphical representation of your view—nothing more, nothing less.
You're calling [super drawRect: from within a direct subclass of NSView (or UIView). While that's OK, it's unnecessary. Both of these base classes clearly document that their -drawRect: method does nothing, so there's nothing to be gained by calling it. Again, it won't cause any harm, it's just pointless. When your -drawRect: method begins execution, a graphics context has already been set up, cleared, and is ready to draw into. Just start drawing.
@end is not a statement. It does not get "executed". It's just a word that tells the compiler that the source code for your class has come to an end. What gets executed are the methods, like -drawRect:.
In your #interface section you declared a -drawRect: method. This is superfluous, because the -drawRect: method is already declared in the superclass.

Apples ZoomingPDFViewer Example - Object creation

I'm currently working on an App which should display and allow users to zoom a PDF page.
Therefore I was looking on the Apple example ZoomingPDFViewer.
Basically I understand the sample code.
But a few lines are not obvious to me.
Link to the sample code:
http://developer.apple.com/library/ios/#samplecode/ZoomingPDFViewer/Introduction/Intro.html
in PDFView.m:
//Set the layer's class to be CATiledLayer.
+ (Class)layerClass {
return [CATiledLayer class];
}
What does the code above do?
And the second code snippet I don't understand in PDFView.m again:
self = [super initWithFrame:frame];
if (self) {
CATiledLayer *tiledLayer = (CATiledLayer *)[self layer];
...
I know it creates a CATiledLayer object. But how it will be created is not clear to me.
I hope someone could give me a short answer to my question because I don't want to use code which I don't understand.
Thank you!
TiledPDFView is a subclass of UIView, so you can look at what documentation UIView has on that method. According to the docs:
layerClass - Implement this method only if you want your view to use a different Core Animation layer for its backing store. For example, if you are using OpenGL ES to do your drawing, you would want to override this method and return the CAEAGLLayer class.
So it seems that it is asking the Core Animation system to use a tiled-layer. Further docs from CATiledLayer:
CATiledLayer is a subclass of CALayer providing a way to
asynchronously provide tiles of the layer's content, potentially
cached at multiple levels of detail.
As more data is required by the renderer, the layer's drawLayer:inContext: method is called on one or more background
threads to supply the drawing operations to fill in one tile of data.
The clip bounds and CTM of the drawing context can be used to
determine the bounds and resolution of the tile being requested.
Regions of the layer may be invalidated using the setNeedsDisplayInRect: method however the update will be asynchronous.
While the next display update will most likely not contain the updated
content, a future update will.

How can I add sprite image from a set of sprites which have different properties for each sprite?

In my application there is one player and there are 10 targets. Each target appears one after the other (from target1 to target10). It's a shooting game: if we hit the first target, then the second target will come. The targets have properties like name, speedOfGunDraw, probability to hit the player, and speedOfFire.
What should I do to make them appear one after the other with these properties? I am using CCMenuItem for the targets and a sprite for the player. Please give me an idea of how to do this.
Thank You.
To address your question: With Cocos2D, your scene would create the sprites. You can get the current running scene and send it a message ("I'm shot", for instance). This can be done through the director.
[[CCDirector sharedDirector] runningScene]; // returns a pointer to the running scene
[[[CCDirector sharedDirector] runningScene] someoneShotMe: self]; // will message the scene that you're shot.
Alternatively, if your scene is not controlling things, set the object your wish to be informed as a delegate, upon creation of the "enemy".
Enemy * enemy1 = [[Enemy alloc] init];
[enemy1 setDelegate: self];
// and then from your enemy object, you call a message on the delegate
[self->delegate someoneShotMe: self];
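The delegate wiring above can be made explicit with a formal protocol. This is a hedged sketch: the EnemyDelegate protocol name and the hit method are illustrative; only Enemy and someoneShotMe: come from the answer.

```objc
#import <Foundation/Foundation.h>

@class Enemy;

// Whoever wants to know about shots adopts this protocol.
@protocol EnemyDelegate <NSObject>
- (void)someoneShotMe:(Enemy *)enemy;
@end

@interface Enemy : NSObject
@property (assign) id<EnemyDelegate> delegate;
- (void)hit;
@end

@implementation Enemy
- (void)hit {
    // Notify the delegate instead of hard-wiring a receiver.
    [self.delegate someoneShotMe:self];
}
@end
```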
I think you're overcomplicating this because you aren't using MVC here.
You should not subclass sprites to give them more functionality beyond the "view".
Properties like probabilityToHitPlayer don't affect the view directly, so shouldn't be stored in a sprite.
Create a new class, like Enemy (subclass of NSObject), which contains a sprite, along with other properties like probabilityToHitPlayer
Enemy can then handle the logic (it is a controller) while Sprite handles the visible parts.
Also, using menu items because they have touch detection? Not pretty. Instead, look into CCTargetedTouchDelegate.
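The composition the answer suggests could be sketched like this. The property names come from the question; the CCSprite property and retain semantics (old Cocos2D used manual reference counting) are assumptions about the asker's setup.

```objc
#import <Foundation/Foundation.h>
#import "cocos2d.h" // Cocos2D, as used in the question

// Enemy is a controller: it owns a CCSprite (the view) plus the
// gameplay properties, instead of subclassing the sprite itself.
@interface Enemy : NSObject
@property (retain) CCSprite *sprite;
@property (copy) NSString *name;
@property float speedOfGunDraw;
@property float probabilityToHitPlayer;
@property float speedOfFire;
@end

@implementation Enemy
@end
```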