Is there a way to use a *.properties file in PowerBuilder

Is there a way to read a properties file using any kind of resource bundle in PowerBuilder, as we do in Java? I tried searching in a couple of places but couldn't find any resolution. It would be great if someone could share their thoughts on this.

Speaking without knowing what "using" a properties file means in Java, you can at least use the FileOpen(), FileRead() and FileClose() set of functions to read the data.
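For illustration, a minimal PowerScript sketch along those lines (the file name and the key=value parsing here are just assumptions):

// Hypothetical sketch: read key=value lines from app.properties
integer li_file, li_rc
string ls_line, ls_key, ls_value
long ll_pos

li_file = FileOpen("app.properties", LineMode!, Read!)
if li_file <> -1 then
    li_rc = FileRead(li_file, ls_line)
    do while li_rc >= 0 // FileRead returns -100 at end of file
        ll_pos = Pos(ls_line, "=")
        if ll_pos > 0 then
            ls_key = Trim(Left(ls_line, ll_pos - 1))
            ls_value = Trim(Mid(ls_line, ll_pos + 1))
            // store the pair, e.g. in two parallel string arrays
        end if
        li_rc = FileRead(li_file, ls_line)
    loop
    FileClose(li_file)
end if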
Good luck,
Terry

Related

Workflow / best practices for XLIFF

I am using a command line tool (ng-xi18n) to extract the i18n strings from an Angular 2 app I wrote. The output of this command is a messages.xlf file. Coming from a .po background, and being unfamiliar with .xlf, I assumed that this file is the equivalent of the .pot file (correct me if I am wrong).
I then assumed that if I want to translate my app, I had to cp messages.xlf messages.de.xlf to have a copy (messages.de.xlf) of the template file (messages.xlf) where I can translate each message into German (hence the .de.xlf).
After translating some dummy texts and running the app, I saw that it worked as expected, so I quit translating and continued developing the app. After some time, I added more i18n strings and eventually realized that I had to update my template. And this is where things got hardly maintainable. I updated the template messages.xlf file and quickly found myself wondering how I could merge the new strings into my already translated messages.de.xlf file without losing my progress.
When I was developing using .po files, this was no problem thanks to good tools like poEdit, but I didn't find anything comparable for .xlf. After trying some tools, I thought that the best choice would be Lokalize, but I didn't find a way to merge the template file into already translated (but outdated) files either.
Up to now, this was rather an essay than a question, so here's a quick summary:
Is the workflow of dealing with .xlf files really comparable to .po as I initially thought (described above), or is it completely different?
How am I supposed to update my already translated files?
What are the best practices dealing with .xlf files?
What are proof of concept tools to work with .xlf?
Sidenotes:
The Lokalize handbook was not helpful at all. I see a lot of functions that sound promising, like:
"File" > "Update file from template". I did not find anything in the handbook to explain this function. If I click on this, nothing happens.
"Sync" > "Open file for sync/merge". This seems to be a function to merge two similar files (by multiple translators) rather than a tool to update the translation file from a template. Even though there is a tooltip in Lokalize's primary sync tab, notifying me about "x unmatched entries", I just couldn't find anything to append those unmatched entries to my .de.xlf file.
[Update] Turns out, I had similar issues as in this question. After downgrading my version of Lokalize to the suggested one, many issues (including the ones mentioned in the question) disappeared. However, now the "Update file from template" option is greyed out, and I don't know why.
I also tried OmegaT, which does not work at all on my platform (Ubuntu 16.04).
[Update] Virtaal works great for merging new strings from a template, but the UI in general is very poorly designed...
Googling did not help, as every hit seems to be related to XCode or something.
Thanks for any help in advance, I really appreciate it
I wrote a small npm command line tool called xliffmerge.
In principle it does the same thing that Roland Oldengarm does with the gulp tasks described in his blog article.
It is free and you can have a look at it at https://github.com/martinroob/ngx-i18nsupport#readme
The best workflow automation solution I have seen described so far is from Roland Oldengarm's blog entry "Angular 2: Automated i18n workflow using gulp". To summarize, in a few dozen lines of Gulp code he created the tooling to handle some of the challenges you faced. Specifically it runs ng-xi18n to extract the messages; creates an English translation with sources copied to targets; updates existing translations by adding new trans-units, keeping existing ones, and removing missing ones; and then exposes all xlf files as TypeScript string constants. These last strings can then be imported to supply the bootstrapModule with its translation provider options.
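As an illustration of that last step, here is a minimal sketch of wiring one of those generated TypeScript constants into the bootstrap call (the ./i18n/messages.de module path and the TRANSLATION_DE constant name are assumptions):

// main.ts -- JIT bootstrap with an embedded XLF translation
import { TRANSLATIONS, TRANSLATIONS_FORMAT, LOCALE_ID } from '@angular/core';
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app/app.module';
import { TRANSLATION_DE } from './i18n/messages.de'; // generated string constant (assumed name)

platformBrowserDynamic().bootstrapModule(AppModule, {
  providers: [
    { provide: TRANSLATIONS, useValue: TRANSLATION_DE },
    { provide: TRANSLATIONS_FORMAT, useValue: 'xlf' },
    { provide: LOCALE_ID, useValue: 'de' }
  ]
});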
Caveat: I have not used this exact solution (and code) myself, but I was able to expose generated xlf as TypeScript strings and use them in an app in a manner similar to what he described. As for maintaining translations, I have leveraged IntelliJ IDEA (WebStorm) file comparison features and Counterparts Lite (for Mac) for that. My own efforts are still in early stages but are working end to end for an application that is in active development.
Official Angular docs are now updated for Internationalization (i18n) at https://angular.io/docs/ts/latest/cookbook/i18n.html including a section specifically for creating a translation source file with the ng-xi18n tool.

How to extract info from a file

This may be a beginner's question. I've tried searching for info but couldn't find anything. Part of my work requires me to convert a specific, proprietary file type. Unfortunately the software is no longer supported and can't be found. I have no idea where to start on this. I would like to write a little utility that converts the file to a standard format. Question is, where do I start? Conceptually, what am I looking at here? Is this even possible?
You could start by understanding what is stored in the file: is there a pattern to the data, what is the pattern, how is it repeated, and so on.
Then open the file in binary mode and try to find whether there is indeed a pattern. If there is one, you should be able to see it, even in binary mode.
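For instance, a small Python sketch of that kind of inspection (the file name is a placeholder):

# Dump the first bytes of an unknown file as hex + ASCII to look for
# magic numbers, fixed-size records, or embedded text.
with open("mystery.dat", "rb") as f:  # placeholder name
    data = f.read(256)

for offset in range(0, len(data), 16):
    chunk = data[offset:offset + 16]
    hex_part = " ".join("%02x" % b for b in chunk)
    text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
    print("%08x  %-48s  %s" % (offset, hex_part, text))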
And lots of patience :-)

How to do File I/O in Opa?

After reading (nearly) the whole ebook and taking a look at the API, I am still asking myself how to realize "traditional" web server behaviour with Opa.
I understand (at least I believe) that Opa links external resources specified at compile time into the executable, making them immutable and permanent.
But what if, say, I wanted to change the stylesheet of an application without recompiling it?
There seem to be a few methods in the stdlib (apidoc), but they don't cover what I am used to from other programming languages.
A possible solution I can think of is making use of the internal database, but that looks like a bit of overkill for something as simple as traditional file I/O.
Edit: this blog post explains more about dealing with external resources in Opa.
Long story short: you'll rarely work with external files in Opa.
Let me try to break this down. Opa will indeed embed resources. But in development mode you just want to be able to tweak them (mainly CSS) and see changes immediately. If you compiled your program in a non-release mode then it will support this kind of action (try --help; below is an excerpt):
Debugging Resources : dynamic edition:
[...]
--debug-editable-css
Export the CSS files embedded in the server to the file
system, so that they can be viewed and edited during
execution of the application
For many other editable and changing resources one would indeed use the database.
And if you really need to work with files (again: with Opa you'll need that much less than with traditional web languages) then take a look at stdlib.io and, for advanced use, at the BslFile module with bindings to OCaml functions for file manipulation.
I think this module is for you:
http://opalang.org/resources/doc/index.html#file.opa.html/!/value_stdlib.io
import stdlib.io
my_css = File.content("css/file.css")
I don't see a way to write a file, but if you need to write, I think you should use the db.
For reading, though, I think this is the solution :)

Haskell IO Testing

I've been trying to figure out if there is already an accepted method for testing file I/O operations in Haskell, but I have yet to find any information that is useful for what I am trying to do.
I'm writing a small library that performs various file system operations (recursively traverse a directory and return a list of all files; sync multiple directories so that each directory contains the same files, using inodes as the equality test and hardlinks...) and I want to make sure that they actually work, but the only way I can think of to test them is to create a temporary directory with a known structure and compare the results of the functions executed on this temporary directory with the known results. The thing is, I would like to get as much test coverage as possible while keeping it mainly automated: I don't want to have to create the directory structure by hand.
I have searched Google and Hackage, but the packages that I have seen on Hackage do not use any testing -- maybe I just picked the wrong ones -- and anything I find on Google does not deal with IO testing.
Any help would be appreciated
Thanks, James
Maybe you can find a way to make this work for you.
EDIT:
the packages that I have seen on Hackage do not use any testing
I have found a unit testing framework for Haskell on Hackage. Using this framework, maybe you could write assertions to verify that the files you require are present in the directories where you want them, and that they correspond to their intended purpose.
HUnit is the usual library for IO-based tests. I don't know of a set of properties/combinators for file actions -- that would be useful.
There is no reason why your test code cannot create a temporary directory, and check its contents after running your impure code.
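For instance, a minimal sketch of that approach with HUnit and the temporary package (listFilesRecursive stands in for the library function under test; its shape here is an assumption):

import System.IO.Temp (withSystemTempDirectory)
import System.Directory (createDirectory, doesDirectoryExist, getDirectoryContents)
import System.FilePath ((</>))
import Test.HUnit

-- A throwaway traversal standing in for the function under test.
listFilesRecursive :: FilePath -> IO [FilePath]
listFilesRecursive dir = do
  names <- filter (`notElem` [".", ".."]) <$> getDirectoryContents dir
  paths <- mapM classify (map (dir </>) names)
  return (concat paths)
  where
    classify p = do
      isDir <- doesDirectoryExist p
      if isDir then listFilesRecursive p else return [p]

-- Build a known structure in a temp directory, run the code, check the result.
testTraversal :: Test
testTraversal = TestCase $
  withSystemTempDirectory "fs-test" $ \dir -> do
    createDirectory (dir </> "sub")
    writeFile (dir </> "a.txt") "a"
    writeFile (dir </> "sub" </> "b.txt") "b"
    files <- listFilesRecursive dir
    assertEqual "finds every file" 2 (length files)

main :: IO ()
main = do
  _ <- runTestTT testTraversal
  return ()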
If you want mainly automated testing of monadic code, you might want to look into Monadic QuickCheck. You can write down properties that you think should be true, such as
If you create a file with read permission, it will be possible to open the file for reading.
If you remove a file, it won't open.
Whatever else you think of...
QuickCheck will then generate random tests.
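A hedged sketch of what one such property could look like with Test.QuickCheck.Monadic (the round-trip property and the temp-file handling are illustrative assumptions):

import Test.QuickCheck
import Test.QuickCheck.Monadic (monadicIO, run, assert)
import System.IO (hClose)
import System.IO.Temp (withSystemTempFile)

-- What we write to a file should be what we read back.
-- ASCIIString keeps the generated data clear of encoding concerns.
prop_roundtrip :: ASCIIString -> Property
prop_roundtrip (ASCIIString s) = monadicIO $ do
  r <- run $ withSystemTempFile "qc" $ \path h -> do
         hClose h
         writeFile path s
         contents <- readFile path
         length contents `seq` return contents -- force before the file is deleted
  assert (r == s)

main :: IO ()
main = quickCheck prop_roundtrip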

Batch source-code aware spell check

What is a tool or technique that can be used to perform spell checks upon a whole source code base and its associated resource files?
The spell check should be source code aware meaning that it would stick to checking string literals in the code and not the code itself. Bonus points if the spell checker understands common resource file formats, for example text files containing name-value pairs (only check the values). Super-bonus points if you can tell it which parts of an XML DTD or Schema should be checked and which should be ignored.
Many IDEs can do this for the file you are currently working with. The difference in what I am looking for is something that can operate upon a whole source code base at once.
Something like a FindBugs or PMD type tool for misspellings would be ideal.
As you mentioned, many IDEs have this functionality already, and one such IDE is Eclipse. However, unlike many other IDEs, Eclipse is:
A) open source
B) designed to be programmable
For instance, here's an article on using Eclipse's code formatting functionality from the command line:
http://www.peterfriese.de/formatting-your-code-using-the-eclipse-code-formatter/
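For reference, a headless Eclipse invocation of that kind looks roughly like this (the application ID is Eclipse's Java code formatter; the paths are assumptions):

eclipse -application org.eclipse.jdt.core.JavaCodeFormatter -config .settings/org.eclipse.jdt.core.prefs src/MyFile.java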
In theory, you should be able to do something similar with its spell-checking mechanism. I know this isn't exactly what you're looking for, and if there is a program for doing spell-checking in code then obviously that would be better, but if not, Eclipse may be the next best thing.
This seems a little old, but it seems to do a good job:
Source Code Spell Checker