Let's say my iOS app already has a translated Localizable.strings for Japanese, e.g. "Continue" = "続ける";
However, I've added new NSLocalizedString calls to my code, and I want to use genstrings to pick up all the new strings without having to merge them into the existing translations manually.
Is there any way to do that?
There are tools that manage localization and automatically update translations based on changes to the base language (helping the translator make changes only to whatever has actually changed).
For example, www.gengo.com has a free online tool called Strings (which I haven't tried yet). There are also desktop apps that look very good, such as the Localization Manager that is part of Localization Suite, http://www.loc-suite.org/ (which I haven't tried properly yet either).
Localization agencies may have their own tools, too.
These tools are a must if you do a lot of updates and have several languages, but for smaller projects they can take a fair bit of getting used to. For an occasional task, or a small project with few languages, manually merging the changes from your base-language Localizable.strings files into your translated Localizable.strings files might be quicker.
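If you do go the manual route, you can at least automate spotting which keys are new: run genstrings into a scratch folder and compare the keys against your existing Japanese file. A rough sketch, assuming Objective-C sources and a ja.lproj folder (the paths and file patterns are examples; adjust them to your project):

# 1. Extract every NSLocalizedString key into a scratch folder (pass .swift files too if you have them)
mkdir -p /tmp/genstrings-out
genstrings -o /tmp/genstrings-out *.m

# 2. List keys present in the fresh output but missing from the translated file
#    (genstrings writes UTF-16, hence the iconv step; drop it if your files are already UTF-8)
keys_of() { iconv -f UTF-16 -t UTF-8 "$1" | grep -o '^"[^"]*"' | sort; }
comm -13 <(keys_of ja.lproj/Localizable.strings) <(keys_of /tmp/genstrings-out/Localizable.strings)

Every key printed by the comm step is a new entry to copy into ja.lproj/Localizable.strings and translate.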
Related
I am using a command line tool (ng-xi18n) to extract the i18n strings from an Angular 2 app I wrote. The output of this command is a messages.xlf file. Coming from a .po background and being unfamiliar with .xlf, I assumed that this file is the equivalent of a .pot file (correct me if I am wrong).
I then assumed that if I want to translate my app, I had to cp messages.xlf messages.de.xlf to have a copy (messages.de.xlf) of the template file (messages.xlf) where I can translate each message into German (hence the .de.xlf).
After translating some dummy texts and running the app, I saw that it worked as expected, so I stopped translating and continued developing the app. After some time I added more i18n strings and eventually realized I had to update my template. This is where things became hard to maintain: I updated the template messages.xlf file and quickly wondered how I could get the new strings into my already translated messages.de.xlf file without losing my progress.
When I was working with .po files this was no problem thanks to good tools like poEdit, but I didn't find anything comparable for .xlf. After trying some tools, I thought the best choice would be Lokalize, but I couldn't find a way to merge the template file into already translated (but outdated) files there either.
So far this has been more of an essay than a question, so here's a quick summary:
Is the workflow of dealing with .xlf files really comparable to .po as I initially thought (described above), or is it completely different?
How am I supposed to update my already translated files?
What are the best practices for dealing with .xlf files?
Which tools are proven to work well with .xlf?
Sidenotes:
The Lokalize handbook was not helpful at all. I see a lot of functions that sound promising, like:
"File" > "Update file from template". I did not find anything in the handbook to explain this function. If I click on this, nothing happens.
"Sync" > "Open file for sync/merge". This seems to be a function to merge two similar files (by multiple translators) rather than a tool to update the translation file from a template. Even though there is a tooltip in Lokalize's primary sync tab, notifying me about "x unmatched entries", I just couldn't find anything to append those unmatched entries to my .de.xlf file.
[Update] Turns out, I had similar issues as in this question. After downgrading my version of Lokalize to the suggested one, many issues (including the ones mentioned in the question) disappeared. However, now the "Update file from template" option is greyed out, and I don't know why.
I also tried OmegaT, which does not work at all on my platform (Ubuntu 16.04).
[Update] Virtaal works great for merging new strings from a template, but the UI in general is very poorly designed...
Googling did not help, as every hit seems to be related to Xcode or something.
Thanks in advance for any help, I really appreciate it.
I wrote a small npm command line tool called xliffmerge.
In principle it does the same thing that Roland Oldengarm does with the gulp tasks described in his blog article.
It is free and you can have a look at it at https://github.com/martinroob/ngx-i18nsupport#readme
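For context, a typical run looks roughly like this; treat the option names and the profile file name as illustrative and check the README above for the exact syntax:

# Merge freshly extracted trans-units from messages.xlf into the per-language files,
# keeping translations that already exist and flagging the new ones
./node_modules/.bin/xliffmerge --profile xliffmerge.json de
# xliffmerge.json tells the tool where messages.xlf lives and which languages
# (e.g. "de" -> messages.de.xlf) it should create and maintain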
The best workflow automation solution I have seen described so far is from Roland Oldengarm's blog entry "Angular 2: Automated i18n workflow using gulp". To summarize, in a few dozen lines of Gulp code he created the tooling to handle some of the challenges you faced. Specifically it runs ng-xi18n to extract the messages; creates an English translation with sources copied to targets; updates existing translations by adding new trans-units, keeping existing ones, and removing missing ones; and then exposes all xlf files as TypeScript string constants. These last strings can then be imported to supply the bootstrapModule with its translation provider options.
Caveat: I have not used this exact solution (and code) myself, but I was able to expose generated xlf as TypeScript strings and use them in an app in a manner similar to what he described. As for maintaining translations, I have leveraged IntelliJ IDEA (WebStorm) file comparison features and Counterparts Lite (for Mac) for that. My own efforts are still in early stages but are working end to end for an application that is in active development.
Official Angular docs are now updated for Internationalization (i18n) at https://angular.io/docs/ts/latest/cookbook/i18n.html including a section specifically for creating a translation source file with the ng-xi18n tool.
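For reference, the extraction step described there is just a CLI invocation along these lines (flag names vary between Angular versions, so treat this as a sketch):

# Regenerate the template file (messages.xlf) from the i18n-marked templates
./node_modules/.bin/ng-xi18n --i18nFormat=xlf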
I'm developing an app that, among other things, will play a large audio file (30MB).
I want to submit the app to the App Store in several countries. The audio file is different for each target country; the rest of the app remains the same (although localized).
I've created a target for each country, and a bash script takes care of copying the correct audio file into the compiled app based on the target; it works great.
I've also localized the resources (images and Localizable.strings) to make things easy to maintain.
Let's say I build my target for Sweden: I want to include only the Swedish localization, to force the app to always show Swedish (which matches the audio file).
Here's the actual question: how do I exclude all localizations from a target, or force a target to ONLY use a specific localization, regardless of the phone's settings?
Based on your comment in answer to Lvsti (where you say the reason you're doing this is that the translations for some of your languages aren't finished yet, but you want to release what you have), perhaps, as an alternative to deleting all the relevant localization files or messing with your build settings, you could try editing the list of languages in your Xcode project. It's per project rather than per target, but it might let you exclude languages you don't want in your build. See under Localizations in your project settings (there's a little - icon you can use to remove a language).
I think you might be able to pull it off by going to:
Target Settings => Info => Add a new row called Localizations => Add a new element to that array with the language you want (I think the default is English)
I haven't tested it; just let me know if it works.
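If you end up scripting this per target, the "Localizations" row corresponds to the CFBundleLocalizations key in the Info.plist. Here is a sketch using PlistBuddy; the plist path and the "sv" language code are assumptions, and you should verify that iOS actually restricts the displayed language based on this key before relying on it:

# Hypothetical per-target step: declare Swedish as the only localization
PLIST="$PROJECT_DIR/Sweden/Info.plist"
/usr/libexec/PlistBuddy -c "Delete :CFBundleLocalizations" "$PLIST" 2>/dev/null || true
/usr/libexec/PlistBuddy -c "Add :CFBundleLocalizations array" "$PLIST"
/usr/libexec/PlistBuddy -c "Add :CFBundleLocalizations:0 string sv" "$PLIST"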
If I understand your question correctly, you don't actually need a localized app, or at least not a fully localized one. If that is the case, I would use a Run Script build phase that is responsible for copying the appropriate non-localized but target-specific resources based on the current target. E.g. supposing you have an Audio folder in your project root with all the versions for the different languages, your script could look like:
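# Run Script build phase: $TARGETNAME, $TARGET_BUILD_DIR and $UNLOCALIZED_RESOURCES_FOLDER_PATH are all provided by Xcode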
cp "$PROJECT_DIR/Audio/$TARGETNAME.mp3" "$TARGET_BUILD_DIR/$UNLOCALIZED_RESOURCES_FOLDER_PATH/audio.mp3"
which would, for example, copy/rename "Swedish.mp3" to "audio.mp3", directly accessible from the bundle.
There's a program called PPStream which is currently only available in Chinese; it allows access to a myriad of ad-supported movies and TV series. The problem is that it is in Chinese and the menus are indecipherable.
Is it possible to hook into the part of the Mac OS API that puts text on the screen so that it routes the text through a word list first, translating it into English? Would such an API hook be able to differentiate between the different applications calling the API?
I have no experience at all with Mac APIs, just pondering on if this is worth pursuing or not.
Thanks.
Edit: The reason I would like to do this at API level is that I need to dynamically dispatch HTTP queries with a list of strings to be translated (movie titles Chinese -> English), and the edit-the-i18n-file approach wouldn't do. Any other suggestions?
I haven't downloaded, installed or run PPStream myself, so I'm speaking "out of my rear end" in a sense, but there are a number of ways an app's strings can be localized. To do it correctly, though, you really need access to the raw, uncompiled code and project.
The three most likely ways the string resources are saved are these:
1) The app may have a strings file from which it fetches the strings to be displayed in the interface. You may be able to make a copy of this strings file and set it to English or whatever language you choose.
2) The strings may be baked into the code itself. This is generally a no-no for commercial-grade Mac and iOS apps, but lazy and/or inexperienced developers do it, especially if they don't think their app will ever be used in other languages.
3) The most likely setup is that there will be a folder hidden in the application package, inside the "Resources" folder, named something like "en.lproj", "English.lproj", "de.lproj", "zh_CN.lproj" or "zh_TW.lproj" (these last two are especially likely if the app is only in Chinese). Inside those folders will be localized XIB (or older NIB) files. You could make a copy of one of these folders and then modify the newly made copy to add your new language (a rough Terminal sketch of this follows below).
Options 1 & 3 are the ones where you might be able to copy and then modify, but then again it might not work (especially these days, with code & app signing). I've never tried this without an accompanying project, so if you have success, comment on your question and/or this answer and let us know.
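For what it's worth, here is a rough sketch of the option 3 experiment from Terminal; the bundle path is a guess, and copying the Chinese folder is just a starting point for editing the XIB/strings files inside it:

# See which localizations the bundle actually ships
ls /Applications/PPStream.app/Contents/Resources/ | grep lproj
# Start an English localization as a copy of the Chinese one, then edit its contents
# (this will almost certainly invalidate any code signature)
cd /Applications/PPStream.app/Contents/Resources
sudo cp -R zh_CN.lproj en.lproj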
We're planning to launch a series of applications in the App Store. They will be for different kinds of journals, showing different content downloaded from a server via XML. All these applications will be built from exactly the same code (it's a universal application, so it will work on both iPhone and iPad).
My initial idea was to produce each application by changing only the images, logos and configuration (plist) files that make the app behave as a particular journal, and upload the resulting build to the App Store.
However, this has turned out to be a horrible method, one that invites failures and mistakes. If I forget to change some image, I can't see it in the compiled file (as it is bundled inside), so it will end up in the store (and I will then need four or five days to get the application changed).
I'm trying to find a better approach, one which keeps the projects as independent as possible. I would like to share the entire codebase (views, classes and nibs) and create a different project for every journal.
Which is the best method to achieve that? What structure would let me group both the logic (controllers, classes) and the UI and reuse them in the different projects?
I hope I've explained myself clearly.
As always, thank you very much.
You should keep most of your common code in a library project. Each final project should link against this library project and provide its own images/assets, along with code that points the common code at those assets. In my day job I write a common library too, which is used by 2 products/apps at my employer.
An Xcode project can have multiple Targets, all the Targets sharing code, but each Target getting its own resources (icons, images, text, plists, etc.) from a different subdirectory/folder within the same project directory/folder. Then you can check the whole thing, or just the shared source, into your source control repository.
You should also be testing each of your apps, built exactly the same way as any submission except for the codesigning, on a device before uploading to the store.
You can have a single Xcode project that creates multiple applications. You'll need to create a separate Info.plist with a different bundle identifier for each app.
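Whichever way you slice the targets, a small post-build sanity check helps with the "forgot to swap an image" problem from the question. Here's a sketch in shell; the product path, resource names and bundle identifier are all hypothetical:

# Fail loudly if a journal-specific resource or setting did not make it into the product
APP="build/Release-iphoneos/JournalA.app"
/usr/libexec/PlistBuddy -c "Print :CFBundleIdentifier" "$APP/Info.plist"   # expect com.example.journalA
for f in logo.png masthead.png Journal.plist; do
  [ -f "$APP/$f" ] || { echo "missing $f in $APP" >&2; exit 1; }
done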
If you are using a git repository, you can just create a branch for each app; that keeps track of all the differences, and if you need to switch which app you are working on, you just check out that branch. This allows the exact same structure with only minor differences in the actual code for each app.
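A minimal sketch of that branching workflow, with the branch names invented for the example:

# One branch per journal; the shared code lives on master
git checkout -b journal-sweden      # Sweden-specific images, plists, bundle id go here
git checkout master                 # back to the shared codebase
git checkout -b journal-norway      # another journal's variant
git merge master                    # run on a journal branch to pull in shared improvements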
The organization I currently work for is moving into the whole CMMI world of documenting everything. I was assigned (along with one other individual) the title of Configuration Manager. Congratulations to me, right?
Part of the duties is to perform, on a regular basis (they are still defining "regular basis"; it will be either quarterly or monthly), a physical configuration audit. This is basically a check of the source code versions deployed in production against what we believe to be the source code versions in production.
Our project is a relatively small web application written in Java. The file types we work with are Java, JSP, XML, property files, and SQL packages.
The problem I have (and have raised, but which seems to be getting ignored) is: how am I supposed to physically log on to the production server and verify file versions? And even if I could, it would take a ridiculous amount of time.
The file versions are not even currently in the files (i.e. in a comment or something). It was suggested that we also place visible version numbers on each screen visible to the users. I thought this was ridiculous too, since the screens themselves represent only a small fraction of the code we maintain.
The tools we currently use are NetBeans for our IDE and Serena Dimensions as our versioning tool.
I am specifically looking for ideas on how to perform this audit in a hopefully more automated way that will be both accurate and not time-consuming.
My current idea is to add a comment to the top of each file containing that file's version number, plus a script that runs when a production build is created and generates an XML file (or something similar) listing the file name and version of each file in the build. Then, when I need to do an audit, I go to the production server, grab the XML file with that info, compare it programmatically to what we believe to be in production, and output a report.
Any better ideas? I know this has to have been done already, and it seems crazy to me that I have not found any other resources.
You could compute a SHA1 hash of the source files on the production server, and compare that hash value to the versions stored in source control. If you can find the same hash in source control, then you know what version is in production. If you can't find the same hash in source control, then there are untracked modifications in production and your new job title is justified. :)
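Here's a rough sketch of that idea in shell, assuming you can export the baseline you believe is deployed from Serena Dimensions into a local folder (all paths and file patterns are illustrative):

# On the production server: hash every deployed artifact of interest
find /opt/app -type f \( -name '*.jsp' -o -name '*.xml' -o -name '*.properties' -o -name '*.sql' \) \
  -exec sha1sum {} \; | sed 's|/opt/app/||' | sort > prod-manifest.txt

# On a workstation: hash the exported baseline the same way (same file types),
# copy prod-manifest.txt over, then compare
cd /tmp/baseline-export
find . -type f \( -name '*.jsp' -o -name '*.xml' -o -name '*.properties' -o -name '*.sql' \) \
  -exec sha1sum {} \; | sed 's|\./||' | sort > baseline-manifest.txt
diff baseline-manifest.txt prod-manifest.txt   # every line of output is an audit finding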
The typical trap organizations fall into with the CMMI is trying to overdo everything. If I could suggest anything, it would be: start small and only do what you need. So consider any problems that you may have had in the CM area previously.
The CMMI describes WHAT an organisation should do, but leaves the HOW up to you. Chapter 2 of the CMMI specification is well worth a read: it describes the required, expected, and informative components of the specification. Basically, the goals are required, the practices are expected, and everything else is informative. This means there is only a small part of the specification which a CMMI appraiser can directly demand, namely the goals. At the practice level, it is permissible to have either the practices as described or acceptable alternatives to them.
In the case of configuration audits, goal SG3 is "Integrity of baselines is established and maintained". SP3.2 says "Perform configuration audits to maintain integrity of the configuration baselines." There is nothing stated here about how often these are done, or how long they may take.
In my previous organisation, FCA/PCA was usually only done as part of the product release process. We used ClearCase as the versioning tool, with labels applied across the codebase to define baselines. We didn't have version numbers in all the source files, nor did we have version numbers on all the product's screens; the CM activity was doing the right thing and was backed up by audits, and this was never an issue in any CMMI appraisal.
We could use the deltas between labels to see which files had changed, and perform diffs to see the actual code changes. An important part of the process is being able to link those changes back to the requirement/bug report/whatever reason initiated the change.
Our auditing did use scripts to automate the process, but these were in-house developed scripts specific to ClearCase: basically they would list all the files, their versions in the CM system, and the baseline/config item to which they belonged.
Can't you use your source control for this? If you tag your source control when you deploy a version, you can then verify what is on the server against the source control system.
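As an illustration of that approach with git (the same idea applies to baselines in Serena Dimensions; the tag name and paths are invented):

# At deployment time, record exactly what went out
git tag -a prod-2013-06-01 -m "deployed to production"
# At audit time, check out that tag and diff it against what the server is actually running
git clone --branch prod-2013-06-01 --depth 1 ssh://repo.example.com/app.git /tmp/audit-checkout
diff -r -q -x '.git' /tmp/audit-checkout /opt/app   # differences are audit findings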