I have an iPad application with 3 table views that hold data such as labels, images, property lists, etc. When the device rotates, the views overlap and the rotation feels slow compared to native apps.
I have tried some methods like willRotateToInterfaceOrientation:, etc.
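For reference, this is roughly what I am doing at the moment (a minimal sketch; the table view names and frames are placeholders):

    // Called before the rotation animation starts; good for lightweight preparation.
    - (void)willRotateToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation
                                    duration:(NSTimeInterval)duration
    {
        // e.g. cancel in-flight image loads so they don't fight the animation
    }

    // Called inside the rotation animation block, so frame changes animate with the rotation.
    - (void)willAnimateRotationToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation
                                             duration:(NSTimeInterval)duration
    {
        if (UIInterfaceOrientationIsLandscape(toInterfaceOrientation)) {
            self.leftTableView.frame  = CGRectMake(0,   0, 341, 768);  // placeholder frames
            self.rightTableView.frame = CGRectMake(341, 0, 683, 768);
        } else {
            self.leftTableView.frame  = CGRectMake(0,   0, 768, 400);
            self.rightTableView.frame = CGRectMake(0, 400, 768, 624);
        }
    }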
Is there anything that would make the rotation faster or smoother?
To the best of my knowledge, the rotation duration is fixed by the system. I believe Apple set it that way so there is a uniform feel across apps. As with pretty much everything, I'm sure there is a way to change it, but I'm not sure whether Apple approves of it for App Store apps, or if it is just a hidden detail.
I currently have an iPhone app that uses XIBs as the graphical interface.
We are planning to make an iPad app using the same code, but with a completely different graphical interface.
I really like storyboards because of the nice flow and overview, and I was thinking of using one only for the iPad, but I'm also hesitant because:
Is it bad practice to have one for iPad but not for iPhone?
The segue and push/present can probably be confusing?
Having more developers in a storyboard can give merge nightmares?
Thanks in advance
Is it bad practice to have one for iPad but not for iPhone?
If you continue to use only XIBs, do the same for both if it's a universal app. Mixing approaches will be confusing for others who may work on your code, and you might end up with a lot of if/else code.
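For example, this is the sort of device check that tends to pile up when the two interfaces are loaded differently (a hypothetical snippet; MyViewController and the resource names are made up):

    UIViewController *viewController = nil;
    if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
        // iPad: load the storyboard-based interface
        UIStoryboard *iPadStoryboard = [UIStoryboard storyboardWithName:@"MainStoryboard_iPad" bundle:nil];
        viewController = [iPadStoryboard instantiateInitialViewController];
    } else {
        // iPhone: keep the existing XIB-based controller
        viewController = [[MyViewController alloc] initWithNibName:@"MyViewController_iPhone" bundle:nil];
    }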
The segue and push/present can probably be confusing?
Once you get used to it, it's really a useful way of navigating between controllers. If you have any specific questions, do point them out.
Having more developers in a storyboard can give merge nightmares?
It's just an XML document; any versioning system out there can handle merges pretty easily. As with any file, if two or more developers have worked on the same piece (in this case it could be a color change or a layout change), then yes, it will get conflicted, and developers should stay in sync. If one dev is changing the whole structure of the controllers, then yes, it will be a nightmare: you need to wait, get the latest working copy, and amend your changes.
And you don't have to put all your eggs in one basket: you can have multiple storyboards as well.
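For instance, a secondary flow can live in its own storyboard and be loaded on demand (a rough sketch; the storyboard name and identifier are made up):

    // Load a separate storyboard and instantiate one of its controllers by identifier.
    UIStoryboard *settingsStoryboard = [UIStoryboard storyboardWithName:@"Settings" bundle:nil];
    UIViewController *settingsVC =
        [settingsStoryboard instantiateViewControllerWithIdentifier:@"SettingsRoot"];
    [self.navigationController pushViewController:settingsVC animated:YES];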
I have an app that takes photos, crops them to fit the display, and uses each one as an overlay for the next photo. I use UIImagePickerController for that, and it works perfectly for me; however, I realized I need to take photos only in landscape orientation. As stated in the UIImagePickerController class reference, it supports portrait mode only. I know there are several workarounds that make it possible to use it in landscape, but I've read that there's a risk Apple will reject my app.
On the other hand, AVFoundation looks like a bit of overkill for my needs.
Do I really need to use AVFoundation?
I'd use AVFoundation since it's much more flexible than UIImagePickerController, and it's always a good idea to future-proof your design. It's not all that tough to set up an AVCaptureSession; just look at the demo apps and see how it's done.
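As a rough idea of what's involved, here is a minimal capture-session sketch (error handling and threading omitted; it assumes a view controller with a previewView property):

    #import <AVFoundation/AVFoundation.h>

    // Create the session and attach the default camera as input.
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetPhoto;

    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    if (input) {
        [session addInput:input];
    }

    // Still-image output for capturing photos.
    AVCaptureStillImageOutput *stillOutput = [[AVCaptureStillImageOutput alloc] init];
    [session addOutput:stillOutput];

    // Live preview layer; its orientation is under your control, unlike UIImagePickerController's UI.
    AVCaptureVideoPreviewLayer *preview = [AVCaptureVideoPreviewLayer layerWithSession:session];
    preview.frame = self.previewView.bounds;
    [self.previewView.layer addSublayer:preview];

    [session startRunning];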
I'm trying to create a navigation system similar to the iOS 7 native app switcher - in that I have a set of images along the bottom and a set of views above them, which scroll together, but at different rates.
I understand from a little digging that I need to implement a parallax scrolling effect, but what would be the best way to handle this? Should I set up two UIScrollView instances wide enough for the correct number of images/views, then set the contentOffset on one when I scroll the other? Is there a better way to ensure they stay in sync?
Apologies that this is something of a vague question, but specifically I'd like to know if there is a preferred means of implementing parallax while keeping the content in the two portions in sync in terms of both hitting the centre of the display at the same time, etc.
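For concreteness, this is the kind of wiring I had in mind (a rough sketch; the scroll view names are placeholders):

    // In the scroll-view delegate: when the main (cards) scroll view scrolls,
    // move the thumbnail scroll view proportionally so both reach their ends together.
    - (void)scrollViewDidScroll:(UIScrollView *)scrollView
    {
        if (scrollView == self.cardsScrollView) {
            CGFloat maxCards = self.cardsScrollView.contentSize.width - CGRectGetWidth(self.cardsScrollView.bounds);
            CGFloat maxIcons = self.iconsScrollView.contentSize.width - CGRectGetWidth(self.iconsScrollView.bounds);
            CGFloat ratio = (maxCards > 0) ? (maxIcons / maxCards) : 0;

            CGPoint offset = self.iconsScrollView.contentOffset;
            offset.x = scrollView.contentOffset.x * ratio;
            self.iconsScrollView.contentOffset = offset;
        }
    }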
I have the core of an iPad app made up; it's relatively simple. However, I want to add support for portrait mode (it currently works in landscape). Trouble is, it's quite a customised, unique interface, made of different UIImages, labels, etc.
So when the user rotates the iPad, how can I handle the movement of all these objects? What's the best way to do it?
Thanks.
If you have designed the interface in Interface Builder, you can set up the resizing relationships between GUI elements there and, as a first shot, see what it does with the rotation. This MAY work for you; I know there are interfaces I have designed where this is NOT an option, and others where it works just fine.
Otherwise, you may have to write all of the positioning code yourself. Probably the best way to do this, from my experience, is to use pre-processor directives to define the positions of all of your elements in each of the two orientations. That way you write the manipulation routines once, and the small tweaks you need after the first run are simply numeric tweaks in your pre-processor directives file (see the sketch after the links below).
http://bynomial.com/blog/?p=85
http://the.ichibod.com/kiji/how-to-handle-device-rotation-for-uiviews-in-ios/
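A minimal sketch of that pattern (the element names and coordinates are placeholders):

    // Positions.h - one place to tweak layout numbers per orientation
    #define kLogoFrameLandscape   CGRectMake(20.0f,  20.0f, 200.0f, 100.0f)
    #define kLogoFramePortrait    CGRectMake(20.0f,  40.0f, 160.0f,  80.0f)
    #define kTitleFrameLandscape  CGRectMake(240.0f, 20.0f, 500.0f,  60.0f)
    #define kTitleFramePortrait   CGRectMake(200.0f, 40.0f, 400.0f,  60.0f)

    // In the view controller: runs inside the rotation animation block.
    - (void)willAnimateRotationToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation
                                             duration:(NSTimeInterval)duration
    {
        BOOL landscape = UIInterfaceOrientationIsLandscape(toInterfaceOrientation);
        self.logoView.frame   = landscape ? kLogoFrameLandscape  : kLogoFramePortrait;
        self.titleLabel.frame = landscape ? kTitleFrameLandscape : kTitleFramePortrait;
    }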
Being a somewhat proficient iOS developer, I have just started working on a desktop OSX project in Cocoa and I'm running into issues that I just can't grasp. So this question is for the OSX developers out there.
I don't like Interface Builder much, so I tend to write my views in code. Most of my view layout code lives in a view controller's loadView method, and at least on iOS I use autoresizingMasks for everything. I try the view out small, large, rotated landscape and portrait, and if all is dandy, I continue with the next item on my list. Now on the desktop, the autoresizingMask works (or just looks) a little bit different. First of all the properties have different names, but their behavior also seems weird or unexpected.
When I ran into the issue below, I thought my code must be wrong, so after trying long enough I re-created it in Interface Builder just for confirmation's sake, and guess what: I got the exact same result. Take a view with four vertically stacked subviews. Set the middle two to have flexible heights and the outer ones to be fixed. When you run it, size it down and back up again, you get two completely different layouts before and after the resize. See image:
Now I can follow why this happens from a mathematical standpoint between run loops, but from the point of view of an 'autosizing' or 'autoresizing' feature, this makes absolutely no sense.
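This is roughly what I'm setting up in loadView (a trimmed sketch; the view names and sizes are arbitrary):

    // Four vertically stacked subviews: fixed top and bottom, flexible middle two.
    NSView *container = [[NSView alloc] initWithFrame:NSMakeRect(0, 0, 400, 400)];

    NSView *top    = [[NSView alloc] initWithFrame:NSMakeRect(0, 300, 400, 100)];
    NSView *upper  = [[NSView alloc] initWithFrame:NSMakeRect(0, 200, 400, 100)];
    NSView *lower  = [[NSView alloc] initWithFrame:NSMakeRect(0, 100, 400, 100)];
    NSView *bottom = [[NSView alloc] initWithFrame:NSMakeRect(0,   0, 400, 100)];

    [top    setAutoresizingMask:NSViewWidthSizable | NSViewMinYMargin];    // sticks to the top, fixed height
    [upper  setAutoresizingMask:NSViewWidthSizable | NSViewHeightSizable]; // flexible height
    [lower  setAutoresizingMask:NSViewWidthSizable | NSViewHeightSizable]; // flexible height
    [bottom setAutoresizingMask:NSViewWidthSizable | NSViewMaxYMargin];    // sticks to the bottom, fixed height

    [container addSubview:top];
    [container addSubview:upper];
    [container addSubview:lower];
    [container addSubview:bottom];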
Before I try to write the mother-of-all-resizing-topics here, might I ask you these questions? Feel free to elaborate some more on the resizing topic if you feel it adds to the post.
1. Am I a fool for not wanting to use the Interface Builder on desktop projects?
2. Should I depend on the autoresizingMask less than I would on iOS projects?
3. What are decent alternatives to making sure your layout lives up to standards without Interface Builder?
Cheers!
1. Yes, in my opinion. :)
2. You should depend on it when it does what you need. When it's insufficient, override resizeSubviewsWithOldSize: and/or resizeWithOldSuperviewSize: (or see below).
3. ???
If you can target 10.7, look at the new constraint-based layout system. Check out the Cocoa Autolayout video from WWDC 2011.
You could also set minSize on your NSWindow to something large enough to prevent the singularity.
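For the four-stacked-views case, here is the sort of thing the constraint-based system allows, as a rough sketch (10.7+; window is assumed, and topView, upperView, lowerView, bottomView are placeholder names):

    NSView *container = window.contentView;
    NSView *topView = [[NSView alloc] init], *upperView = [[NSView alloc] init];
    NSView *lowerView = [[NSView alloc] init], *bottomView = [[NSView alloc] init];
    for (NSView *v in @[topView, upperView, lowerView, bottomView]) {
        [v setTranslatesAutoresizingMaskIntoConstraints:NO];
        [container addSubview:v];
    }

    NSDictionary *views = NSDictionaryOfVariableBindings(topView, upperView, lowerView, bottomView);

    // Fixed 60pt top and bottom; the middle two split the remaining height equally.
    [container addConstraints:
        [NSLayoutConstraint constraintsWithVisualFormat:
            @"V:|[topView(60)][upperView][lowerView(==upperView)][bottomView(60)]|"
                                                options:0
                                                metrics:nil
                                                  views:views]];

    // Each view spans the full width of the container.
    for (NSString *name in views) {
        [container addConstraints:
            [NSLayoutConstraint constraintsWithVisualFormat:
                [NSString stringWithFormat:@"H:|[%@]|", name]
                                                    options:0
                                                    metrics:nil
                                                      views:views]];
    }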
1. I'm not sure I'd say "fool," but refusing to use Interface Builder on the Mac is a very…avant-garde choice.
2. You should definitely use autosizing on your views.
3. Be maniacally attentive and spend lots of time making sure everything is right. (This is why I don't recommend going without Interface Builder. In general, what you get is a lot of wasted time that you could have spent doing something else.)
In this case, I think the best approach would be to set a sensible minimum height for the window. Don't let it get too small to display what it needs to display.
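For example (contentMinSize is a standard NSWindow property; the numbers here are arbitrary):

    // Keep the content area from shrinking below what the stacked views need.
    [window setContentMinSize:NSMakeSize(480.0, 320.0)];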