User-controlled NSLocalizedString interface changes - objective-c

I am working on an app that has strings files for all the UI text. Depending on the language set in the iPad's International settings, users start the app with all UI elements in that language.
I want to add the ability for the user to change this within the app, so they could choose from the supported languages and the app would update the UI to the new choice. This way we could set up a kiosk where tourists can select their own language without assistance.
I'm not sure whether I should use NSLocalizedStringFromTable() and pull the strings into per-language tables like en.strings and zh.strings (instead of en.lproj/Localizable.strings), but that seems like a lot of unnecessary work.
Is there a way to use NSLocale to set the user language for the app and still use the NSLocalizedString() call?

There was an excellent answer to exactly this question in another thread:
Tutorial and example code for changing localization strings during app use
Seriously, go and upvote this guy. It is a shining example of the best SO has to offer!
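In case the link goes stale, here is a minimal sketch of one common approach (the helper names below are my own, not necessarily what the tutorial uses): keep a reference to the .lproj bundle for the language the user picked in the app and resolve strings against it instead of relying on the system locale.

    // Hypothetical helpers (assuming ARC): resolve strings against the .lproj
    // bundle for the user-selected language, falling back to the main bundle.
    #import <Foundation/Foundation.h>

    static NSBundle *gLanguageBundle = nil;

    // Call with a language code such as @"en" or @"zh-Hans" when the user picks one.
    void SetAppLanguage(NSString *languageCode) {
        NSString *path = [[NSBundle mainBundle] pathForResource:languageCode
                                                         ofType:@"lproj"];
        gLanguageBundle = (path != nil) ? [NSBundle bundleWithPath:path] : nil;
    }

    // Use in place of NSLocalizedString(key, comment).
    NSString *AppLocalizedString(NSString *key) {
        NSBundle *bundle = gLanguageBundle ?: [NSBundle mainBundle];
        return [bundle localizedStringForKey:key value:key table:nil];
    }

Note that after switching, views already on screen keep their old text; you still have to push the new strings into every visible label and button yourself.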

Related

wxWidgets: How to find what translations are currently available for the application?

The application should offer switching the human language (the translation) via its menu. Unlike in the internat sample, the list of available languages should be built dynamically, based on which translations are actually available. Is there any function to get that information?
The desired behaviour is that when someone adds a .mo catalog for another language, the user can then choose that language from the menu.
Thanks for your time and experience,
Petr
No, there is no way to get all the available catalogs right now. It would be nice to add this to wxTranslations, but for the moment it's not there.
Notice also that switching the language from a menu, as many Windows programs do, doesn't work very well with the gettext approach either, as you need to recreate your entire UI to reflect the language change. This is why the language is usually selected only at application launch anyhow.
Maybe this will help? wxTranslationHelper can display available catalogs.

What's better for update program - GUI or console?

I have created an update program for my project, and I'm wondering which is better: a GUI or a console app?
Here are the pros and cons of both:
GUI: user-friendly and easy, but feels like overkill for such a small program.
Console: simple and easy too, but not user-friendly.
EDIT:
Thanks for the answers! My dilemma is that a GUI is kind of too much for something so small: it will have buttons, labels, and progress bars, while with a console app you just run it and it's done. It's super easy!
Try to separate the update logic from the user interface. This makes it easy to try both of them.
You could have three separate projects in one solution. One class library containing the update logic. One console program and one WinForms program, both referencing the class library.
Well, the answer depends on exactly the question you asked: who do you want to do most of the work, you or the user? In most cases, the answer is 'you'. It's your job as a developer to give the user a usable product.
Also, remember that you only develop once, but the user uses your program again and again.
Just because it's a GUI doesn't mean that it has to be a complex GUI. You could have something as simple as a form with a label in it that says "Application Updating".
I personally would go the GUI route with an option for a non-interactive install (i.e. don't show the user interface form).
The reason for this approach is that at some point down the road, you may want/need additional options or user interaction and if you start with the console route, you may need to switch to GUI eventually or risk having the console UI becoming overly complicated.
For example, if you want to charge for an update because of massive improvements, or you want to have an advanced mode for the application that is purchasable, then you would probably want to obtain a key from the user to enable this. Collecting this information in a form could be much more user friendly than keying it in at the console.
You also may want to provide a hyperlink in the update form to link to the list of new features on your web site or in the install directory. Again, it would be more user friendly in a GUI.

How does Safari's reader feature work?

I want to add a similar feature to a tool I'm making, and I'm interested in how it works code-wise. I want to be able to take an HTML page and strip out everything but the article.
The Readability project does something similar for Chrome and iOS. I'm not sure how it detects the content automatically, but I know that Readability has an API for people who want to integrate its features. You might want to check that out.
http://www.readability.com/learn-more
If you're working with Ruby, you could use Pismo. It extracts an article from a given document.

How to develop an app for Mac OS X that keeps reading everything the user types in?

I'm here to ask if any of you know how to develop an app for Mac OS X that keeps reading everything the user types. An example of an app that implements this behavior is Text Expander.
Text Expander reads everything the user types, looking for abbreviations that were previously added to it. When one of these abbreviations is found, Text Expander replaces the abbreviation with the full content associated with it.
So, I would like to know which part of Objective-C or Cocoa lets you do this kind of thing.
P.S.: Just to mention, I'm not thinking about developing something like a key logger. I'm just curious and thinking about developing a snippet platform.
This can be done with CGEventTap, but it requires that your process is running as root or “access for assistive devices” is enabled.
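For reference, here is a minimal listen-only sketch of the CGEventTap approach (assuming the privilege requirement above is met; error handling kept to a minimum):

    // Listen-only key tap: observes key-down events system-wide and passes
    // them through unchanged. Build as a plain command-line tool.
    #include <stdio.h>
    #import <ApplicationServices/ApplicationServices.h>

    static CGEventRef KeyDownCallback(CGEventTapProxy proxy, CGEventType type,
                                      CGEventRef event, void *refcon) {
        if (type == kCGEventKeyDown) {
            int64_t keycode = CGEventGetIntegerValueField(event, kCGKeyboardEventKeycode);
            // Map the keycode to a character and feed your abbreviation matcher here.
            printf("key down: %lld\n", (long long)keycode);
        }
        return event; // listen-only: never swallow or modify the event
    }

    int main(void) {
        CFMachPortRef tap = CGEventTapCreate(kCGSessionEventTap, kCGHeadInsertEventTap,
                                             kCGEventTapOptionListenOnly,
                                             CGEventMaskBit(kCGEventKeyDown),
                                             KeyDownCallback, NULL);
        if (tap == NULL) return 1; // creation fails without root / assistive access
        CFRunLoopSourceRef source = CFMachPortCreateRunLoopSource(kCFAllocatorDefault, tap, 0);
        CFRunLoopAddSource(CFRunLoopGetCurrent(), source, kCFRunLoopCommonModes);
        CGEventTapEnable(tap, true);
        CFRunLoopRun(); // the callback fires on this run loop
        return 0;
    }

The actual replacement of an abbreviation (what Text Expander does after spotting one) is a separate problem, typically handled by posting synthetic keystrokes or pasting the snippet.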
Check up on Services:
http://homepage.mac.com/simx/technonova/tips/creating_a_service_for_mac_os_x.html
http://developer.apple.com/mac/library/documentation/Cocoa/Conceptual/SysServices/introduction.html
That is one way to achieve this.

Global object for Javascript to interact with Safari plug-in

The issue is that I've written a Safari plug-in (Growler) that allows web applications to send Growl notifications by calling JavaScript functions. However, the way it is currently written, people need to use <embed> to initialise the plug-in before JavaScript can begin using it (something I picked up from Apple's examples).
I was wondering if there is a way I could define something like window.<pluginName> so that they didn't have to embed it every time. That would allow a lot of sites to begin using it without changing any code.
I've looked at a lot of examples and documentation, and two things came up: 'WebView' and 'WebScriptObject'. I'm pretty new to this, so I'm not really sure what to do.
There's no way to write a WebKit plug-in that doesn't handle a content type. That's why so many Safari “plug-ins” or “extensions” (including GrowlSafari) are implemented as input manager hacks.
The way you've done it is the only reliable, safe, supported, and not doomed way to do it.
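For completeness, the 'WebScriptObject' lead mentioned in the question is how the <embed>-based approach exposes methods to page JavaScript: the plug-in view returns an object from -objectForWebScript, and that object opts its methods in via the WebScripting informal protocol. A rough sketch, with hypothetical class and method names:

    // Hypothetical scripting object the plug-in exposes to page JavaScript.
    #import <Cocoa/Cocoa.h>
    #import <WebKit/WebKit.h>

    @interface GrowlerScriptObject : NSObject
    - (void)notifyWithTitle:(NSString *)title;
    @end

    @implementation GrowlerScriptObject

    // Everything except -notifyWithTitle: stays hidden from JavaScript.
    + (BOOL)isSelectorExcludedFromWebScript:(SEL)selector {
        return selector != @selector(notifyWithTitle:);
    }

    // Expose it to JavaScript under the name "notify".
    + (NSString *)webScriptNameForSelector:(SEL)selector {
        return (selector == @selector(notifyWithTitle:)) ? @"notify" : nil;
    }

    - (void)notifyWithTitle:(NSString *)title {
        // Post the Growl notification here.
    }

    @end

    // In the plug-in view created for the <embed> element, hand the object to WebKit:
    // - (id)objectForWebScript { return self.scriptObject; } // a retained property

Page script then reaches it through the embedded element, e.g. document.getElementById('growler').notify('Hello'); there is still no supported way to inject a window.growler global from a plug-in, which is the point of the answer above.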