MKMapView and setRegion:animated: not updating the map visuals - iphone-sdk-3.0

Greetings! I'm attempting to use MKMapView without any Apple code samples, though there are a few others out there of varying clarity. (I know, "Read the friendly manual." I've done that but it's not 100% clear, so please bear with me on this one.)
Here's the situation. I have an MKMapView object, to which I've added a set of about ten MKPinAnnotation objects. So far, so good. Everything is alloced/released sanely and there don't appear to be any complaints from Instruments.
Upon initial display, I set up an MKCoordinateRegion object with the center point at our first pin location and an (arbitrary) span of 0.2 x 0.2. I then call:
[mapView setRegion:region animated:YES];
[mapView regionThatFits:region];
Wow! That worked well.
Meanwhile ... I also have a segmented control to allow for movement to each pin location. So as I tap through the list, the map animates to each new pin location with a new pair of calls to setRegion:animated: and regionThatFits: ... or at least that's the idea.
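For context, the segment handler looks roughly like this (a simplified sketch; names like pins are placeholders for my actual ivars):
- (IBAction)segmentChanged:(UISegmentedControl *)sender {
    // Center a fixed-size region on the selected pin and move the map there.
    id<MKAnnotation> pin = [pins objectAtIndex:sender.selectedSegmentIndex];
    MKCoordinateRegion region = MKCoordinateRegionMake(pin.coordinate, MKCoordinateSpanMake(0.2, 0.2));
    [mapView setRegion:region animated:YES];
    [mapView regionThatFits:region];
}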
While the map does "travel" to the new pin location, the map itself doesn't update underneath. Instead, I see my pin on a gray/blank-map background ... until I nudge the map in any direction, however slightly. Then the map shows through! (If I'm only moving within a short distance of the previous pin location, I'll usually see whatever part of the map was already loaded.)
I suspect I'm doing something dumb here, but I haven't been able to figure out what, at least not from the MapKit docs. Perhaps I'm using the wrong calls? (Well, I do need to set the region at least once, yes? Moving that around doesn't seem to help though.) I have also tried using setCenterCoordinate:animated: - same problem.
I'm assuming nothing at this point (no pun intended). Just trying to find my way.
Clues welcome/appreciated!
UPDATE: Calling setRegion:animated: and regionThatFits: the first time, followed by setCenterCoordinate:animated: while traversing the list, has no effect. Interesting finding, though: if I change animated to NO in both cases, the map updates! The problem occurs only when it's set to YES. (What's going on? Is animated: broken? That can't be... can it?)

It turns out that the map update doesn't work when using the SIMULATOR. When I try setCenterCoordinate:animated: on the device, I do get the map update underneath.
Bottom line: I was trusting the simulator to match the device in terms of map updating behavior. Alas, I was mistaken! Lesson learned. "Don't let this happen to you." :)

You need to invoke the setRegion:animated: call on the main thread.
Just do something like:
    // ... wherever you currently compute the new region:
    [self performSelectorOnMainThread:@selector(updateMyMap) withObject:nil waitUntilDone:NO];
}

- (void)updateMyMap {
    [myMap setRegion:myRegion animated:YES];
}
and it should work in any case (animated or not), with the map updated underneath.
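One side note on the snippet in the question: regionThatFits: doesn't change the map by itself, it only returns a region adjusted to the view's aspect ratio, so its result is normally passed straight into setRegion:animated: like this:
// regionThatFits: only computes an adjusted region; feed its result to setRegion:
[mapView setRegion:[mapView regionThatFits:region] animated:YES];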

Hmm, strange. The map updates on my Mac, even in the simulator. Maybe a network setting (a proxy or whatever) is preventing the map widget from downloading the tiles in the simulator?

Even though this is an old topic, I thought I'd chime in with my experience. It seems the map animation only fails on devices running iOS 3.1.x and the simulator running 3.1.x. My dev iPod touch with 3.1.3 fails to zoom if animation is on.

Related

iOS 8: how to determine the interface orientation from UIViewControllerTransitionCoordinator?

I'm debugging a project that was working fine in iOS 7.1 but does not lay out its content properly in iOS 8.0. I've narrowed the issue down to this method:
[self orientRootViewControllerForOrientation:rootViewController.interfaceOrientation];
iOS 8 no longer returns the correct UIInterfaceOrientation from rootViewController.interfaceOrientation; instead it returns "upside down".
Reading the documentation, I'm confronted with a cryptic message:
Use viewWillTransitionToSize:withTransitionCoordinator: to make interface-based adjustments.
Reading the documentation on UIViewControllerTransitionCoordinator, I don't see it having any properties. How can I modify my method, which expects an interface orientation, to get its orientation from UIViewControllerTransitionCoordinator?
Basically, you are now supposed to lay out your content based on the available size, not on the rotation, which makes supporting older rotation-based code painful.
The view controller transition coordinator provides a UIViewControllerTransitionCoordinatorContext that has a rotation factor. This could give you the hint you need.
This blog post mentions this too.
You also have the usual device ([UIDevice currentDevice].orientation) or status bar orientation.
In case you are working with the iOS 6 rotation methods, this post has relevant information too.
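As a rough illustration of the size-based approach (a sketch only; the landscape flag and the layout call are placeholders for whatever your orientRootViewControllerForOrientation: logic actually needs):
- (void)viewWillTransitionToSize:(CGSize)size
       withTransitionCoordinator:(id<UIViewControllerTransitionCoordinator>)coordinator {
    [super viewWillTransitionToSize:size withTransitionCoordinator:coordinator];
    [coordinator animateAlongsideTransition:^(id<UIViewControllerTransitionCoordinatorContext> context) {
        // Derive a coarse orientation from the target size instead of
        // reading interfaceOrientation, which iOS 8 discourages.
        BOOL landscape = size.width > size.height;
        // ... lay out the root view controller for `landscape` here ...
    } completion:nil];
}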

Hooking up Chipmunk bodies to UIKit components?

I'm trying to get to grips with using Chipmunk (not the Obj-C version) with UIKit components on iOS, and still struggling immensely.
I'm trying to establish how, in the ChipmunkColorMatch example in the documentation, the UIButton instances are actually hooked up to any of the physics calculations. I see that the UIButtons are created inside the Ball class, and some of their properties are set (type, image, etc.), but I'm not understanding where the cpBody or cpShape (whichever it is) actually gets attached to that UIButton. I assume it needs to be, else none of the physics will be reflected in the UI.
I've looked in the SimpleObjectiveChipmunk tutorial on the website too, but due to the fact that it uses libraries unavailable to me (the Obj-C libraries), I can't establish how it works there, either. Again, I see a UIButton being created and positioned on-screen, but I don't see how the cpBody (or in that case, ChipmunkBody) is linked to the button in any way.
Could anyone shed some light on how this works? Effectively what I'm going to need are some UIButton instances which can be flicked around, but I've not even got as far as working out how to create forces yet, since I can't get the bodies hooked up to the buttons.
Much obliged, thanks in advance.
EDIT: I should also point out that I am not using, and do not want to use, cocos2d in this project at all. I've seen tutorials using that, but that's a third layer of confusion to add in. Thanks!
Assuming this source is the project you're asking about, it looks like the magic happens in Ball's sync method -- it creates a CGAffineTransform representing the translation and rotation determined by the physics engine, and applies that to the button.
In turn, that method is called by the view controller's draw: method, which is timed to occur on every frame using CADisplayLink, and updates the physics engine before telling each Ball to sync.
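In rough sketch form (ivar names here are illustrative, the position getter is cpBodyGetPos or cpBodyGetPosition depending on the Chipmunk version, and the real sample builds a single CGAffineTransform for translation plus rotation rather than setting center and transform separately):
// On Ball: copy the body's position and rotation onto the UIButton each frame.
- (void)sync {
    cpVect pos = cpBodyGetPosition(_body);
    cpFloat angle = cpBodyGetAngle(_body);
    _button.center = CGPointMake(pos.x, pos.y);
    _button.transform = CGAffineTransformMakeRotation(angle);
}

// On the view controller: the CADisplayLink callback steps the physics,
// then asks every Ball to sync its button.
- (void)draw:(CADisplayLink *)link {
    cpSpaceStep(_space, 1.0 / 60.0);
    for (Ball *ball in _balls) [ball sync];
}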

Extend Gesture Length

I've come across a problem to which the simplest solution would be to extend the length of a UIGestureRecognizer. What I mean is that I need the iOS device to still believe the user has their finger on the screen for about 0.1 seconds after they release it. I need the device to think that the finger is in the exact same position for the 0.1 seconds as it was when the user released this.
Any help as to whether this is possible would be greatly appreciated!
Thank you!!!
EDIT:
Sorry for the late reply, I've been really busy with work.
To elaborate, I'm using a set of classes made by Alan Quartermain called AQGridView. It's a class that strongly resembles UITableView; however, it displays data in a grid instead of a list. There appears to be a bug where (if I understand correctly, and I may very well not) the grid's data is reloaded before the delegate method that is called when a user ends a UIGestureRecognizer has finished, if the user releases their finger very quickly while dragging a cell from one grid index to another. This causes a graphical glitch (which can be recreated in the springboard example that comes with the class set) where the dragging cell appears to settle one cell before or after its appropriate location, and then quickly jumps to its proper location. I believe this is because there is a brief period, when the user releases their finger, where the grid's count is -1 of what it is when the cell settles.
This is a poor explanation of the problem, but the best I could come up with. As well, I'm a relatively new developer and could be way off on the cause of the problem. That is why I believe the most appropriate fix would be to extend the gesture length by a very small amount. If anyone wants to take a look at the AQGridView classes (https://github.com/AlanQuatermain/AQGridView/) I would really appreciate it! But if possible, a simpler fix would just be to simulate the touch that the user input right before they released their finger, so that the desired animation occurs.
Inside the touchesEnded: method, start a timer for 0.1 seconds and then perform your selector.
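A minimal sketch of that idea (handleDelayedTouchEnd is a hypothetical method of your own, and whether the delay actually cures the AQGridView glitch is untested):
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesEnded:touches withEvent:event];
    // Defer the "finger lifted" handling by 0.1 s so the grid has time
    // to settle before the end-of-drag logic runs.
    [self performSelector:@selector(handleDelayedTouchEnd)
               withObject:nil
               afterDelay:0.1];
}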
You can even subclass UIGestureRecognizer and implement your own gestures; check the first answer to this question.
See: custom iOS gesture

iOS 3 - UITabBarItems disappear from UITabBar after a memory warning occurs

In response to a great number of requests from people using older iOS hardware, I'm currently refactoring and optimizing my app so it will work on iOS 3. That being said, I've got a glitch with my UITabBar that I can replicate on all of the iPhone 3G units I've tested it on.
The glitch appears to have been fixed in iOS 4, but I was wondering if before that time, anyone else had this glitch as well and had figured out a (relatively elegant) workaround for it.
The problem is this: after a memory warning occurs and all of the offscreen views are released, when I bring a view controller with a tab bar back on screen, all of the UITabBarItems that are supposed to be in it are gone. As far as I can see, they're not being drawn at all; i.e., tapping the tab bar has no effect. After setting breakpoints and examining the UITabBar and its items in memory, they're all still there (i.e., not getting released) - they're just not getting redrawn when the UITabBar is re-created in the controller's loadView method.
My app works similar to the official Twitter app in that I implemented my own version of UITabBarController so I could control the integration of it with a parent UINavigationController properly. I set it up as closely as possible to the original UITabBarController class though, with all of the child view controllers handling their own respective UITabBarItems and initializing them inside the class' init methods. Once the child view controllers are passed to my TabController object via an accessor method, the tabBarItems are accessed and added to the UITabBar view.
Has anyone seen this behaviour before and know of a way I can fix it? I'm hoping there's a really simple fix for this since it already works in iOS 4, so I don't want to hack it up too badly.
Thanks a lot!
After a bit of research, I think I found a solution to this. It's not the most elegant solution I was after, but it definitely works.
I'm guessing after a memory warning is triggered, something is happening to the UITabBarItem objects that basically renders them corrupt. I tried a lot of things (flushing out the UITabBar, re-creating the controllers array etc), but nothing worked.
I finally discovered that if you completely destroy the UITabBarItems and allocate new ones in their place, then those ones will work. :)
So my final solution to this was to add an extra condition in the viewDidLoad method of my controller that if the detected system was iOS 3, and there was already an array of UITabBarItems, it would go through each one, copy out all of the properties needed, destroy it, allocate a new one and then copy the old properties over to the new one.
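Roughly, the workaround looks like this (a simplified sketch; only the common UITabBarItem properties are copied, and tabViewControllers stands in for however your TabController stores its children):
// Rebuild each controller's tab bar item from scratch so the tab bar
// redraws them after a memory warning (pre-ARC, hence the explicit release).
for (UIViewController *controller in tabViewControllers) {
    UITabBarItem *oldItem = controller.tabBarItem;
    UITabBarItem *newItem = [[UITabBarItem alloc] initWithTitle:oldItem.title
                                                          image:oldItem.image
                                                            tag:oldItem.tag];
    newItem.badgeValue = oldItem.badgeValue;
    controller.tabBarItem = newItem;
    [newItem release];
}
// ...then hand the fresh items back to the UITabBar.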
I'm still going to keep an eye out for a better solution (I think there's a bit of overhead in this method), but thankfully at this stage, iOS 3 legacy support is becoming less and less of an issue. :)

Problems with an unusual NSOpenGLView setup

I'm trying to set up a subclassed NSOpenGLView in an unusual way and I am running into some problems. Basically, I am writing a program to perform a bioengineering simulation for my PhD and I need to be able to compile it under both Mac OS X and Unix (my machine is a Mac, but the sim will eventually run on a more powerful Unix machine). Since the code will get longer and longer over the next year and a half, I'd rather not have to keep track of two completely different versions of the program. So I'm hoping to be able to compile the Objective-C code under Unix by avoiding Objective-C 2.0 and keeping the interface optional (it will mostly be there to perform setup before the long simulations and to monitor things for the short ones during development).
The current version works well without the interface - the simulation is performed correctly, and the program is capable of rendering OpenGL frames and exporting them to image and video files without any problems. Since I am now adding the interface (right now just a simple window with an NSOpenGLView subclass and a "start" button) on top of that (so that I can run the code without it, using an alternate version of main()), I have to "wire" OpenGL together in a weird way, since the drawing code is not in the drawRect: function, or even anywhere in the subclassed view, but instead in the "basic" program.
What I've done so far is this:
The main program (using an object called "Lattice") performs all the simulations and rendering, correctly outputting images and video to files. This also contains the NSOpenGLContext and calls [renderContext flushBuffer];
A subclass of NSOpenGLView called PottsView contains an instance of a lattice, which is initialized together with the view like this:
- (id)initWithFrame:(NSRect)frame {
    if (!(self = [super initWithFrame:frame]))
        return nil;
    // code
    frameSize.width = WIN_WIDTH;
    frameSize.height = WIN_HEIGHT;
    [self setFrameSize:frameSize];
    init_genrand64(time(0));
    // Create and seed the lattice, keeping whatever object init returns.
    if (SEED_TYPE) {
        latt = [[Lattice alloc] initWithRandomSites];
    } else {
        latt = [[Lattice alloc] initWithEllipse];
    }
    [[latt context] makeCurrentContext];
    return self;
}
drawRect: is empty.
PottsController is the object instanced in the InterfaceBuilder which connects the start button to the view. The start button simply tells the lattice to run for a number of steps.
Now, pressing start results in the simulation running correctly (i.e. output to files and the terminal), but the PottsView is not working correctly. It remains white, but if I cmd+tab, parts of it change to sections of a rendered frame. The same happens if I press Exposé (F3).
I've tried several combinations of flushing, setNeedsDisplay, etc, but frankly speaking I'm lost. I haven't done any programming before this April and with this being (as far as I can tell) a completely backwards way of using NSOpenGLView I'm out of ideas. I'm hoping someone can suggest how I can make the current setup work or how to completely rewire the program (while still keeping the interface optional).
It's not clear how you think you have 'wired' the context and the view together. You can have as many OpenGL contexts as you like - just drawing into one won't make its contents show up in a random NSOpenGLView. Apologies if I have missed something.
NSOpenGLView is a fairly simple subclass of NSView that creates the context and pixel format. As you already have those you can do away with NSOpenGLView and use a custom NSView subclass.
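For illustration, a minimal sketch of that wiring (untested; it assumes the Lattice keeps its NSOpenGLContext as in the question, and the key call is NSOpenGLContext's setView:):
// In PottsView (or a plain NSView subclass): attach the existing context
// to this view once it is in a window, so flushBuffer presents here.
- (void)viewDidMoveToWindow {
    [super viewDidMoveToWindow];
    [[latt context] setView:self];
}

// In the Lattice rendering code, for every frame:
[[latt context] makeCurrentContext];
// ... existing GL drawing ...
[[latt context] flushBuffer];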
You should look at this documentation: http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/OpenGL-MacProgGuide/opengl_drawing/opengl_drawing.html
To draw to the screen you must flush the graphics context from -drawRect:
This will block the main thread while the GPU processes your instructions, which could be a problem if you have many instructions. It also cannot happen at more than 50 fps.
If you are already rendering your frames to files, wouldn't you be better off observing the output directory and drawing each image as a new one is added - no OpenGL required?