Is the Perl 6 documentation available for e-readers? - raku

I wonder whether the Perl 6 documentation is available in some format for e-readers (epub / mobi / fb2). I tried to make an epub out of this web page, where all the docs are conveniently combined into one file. Unfortunately, the wonderful four-level structure is ignored by the available converters, so I get a huge epub without bookmarks that is impossible to navigate.
So, does anyone know where to find an epub/mobi of the Perl 6 docs with bookmarks, or can anyone make one?
Quick googling leads me to the Python docs' epub. :)

Here you go: https://temp.perl6.party/pub/2017/PSix-Docs-Oct-30-2017.epub
I've used Calibre with the single-page docs you mentioned and, in "Convert individually" -> "Table of Contents", told it to use the h1, h2, and h3 selectors to figure out where the level 1, 2, and 3 headings are. Other than some rendering bugs, it seems to work.
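For anyone who prefers to script the conversion instead of clicking through the GUI, calibre's ebook-convert command exposes the same table-of-contents settings. A sketch (the file names are placeholders; the flags and XPath selectors are calibre's usual ones, so verify them against your calibre version):
ebook-convert perl6-docs.html perl6-docs.epub \
    --level1-toc "//h:h1" \
    --level2-toc "//h:h2" \
    --level3-toc "//h:h3"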
In 2018, I'm hoping we'll make a Pod::To::SQL that'll load our docs into an SQLite database, with each bit of info tagged appropriately. This will let us make a better dynamic site, make the p6doc tool more useful, and cater to more use cases like yours: creating a standalone version of (a subset of) the docs in various formats.

Related

Workflow / best practices for XLIFF

I am using a command line tool (ng-xi18n) to extract the i18n strings from an Angular 2 app I wrote. The output of this command is a messages.xlf file. Coming from a .po background and being unfamiliar with .xlf, I assumed that this file is the equivalent of the .pot file (correct me if I am wrong).
I then assumed that if I wanted to translate my app, I had to cp messages.xlf messages.de.xlf to get a copy (messages.de.xlf) of the template file (messages.xlf) in which I could translate each message into German (hence the .de.xlf).
After translating some dummy texts and running the app, I saw that it worked as expected, so I stopped translating and continued developing the app. After some time, I added more i18n strings and eventually realized that I had to update my template. This is where things became hardly maintainable: I updated the template messages.xlf file and quickly wondered how I could get the new strings into my already translated messages.de.xlf file without losing my progress.
When I was developing with .po files, this was no problem thanks to good tools like poEdit, but I didn't find anything comparable for .xlf. After trying some tools, I thought the best choice would be Lokalize, but I couldn't find a way to merge the template file into already translated (but outdated) files there either.
Up to now this has been more of an essay than a question, so here's a quick summary:
Is the workflow of dealing with .xlf files really comparable to .po as I initially thought (described above), or is it completely different?
How am I supposed to update my already translated files?
What are the best practices for dealing with .xlf files?
Which tools are proven to work with .xlf?
Sidenotes:
The Lokalize handbook was not helpful at all. I see a lot of functions that sound promising, like:
"File" > "Update file from template". I did not find anything in the handbook to explain this function. If I click on this, nothing happens.
"Sync" > "Open file for sync/merge". This seems to be a function to merge two similar files (by multiple translators) rather than a tool to update the translation file from a template. Even though there is a tooltip in Lokalize's primary sync tab, notifying me about "x unmatched entries", I just couldn't find anything to append those unmatched entries to my .de.xlf file.
[Update] Turns out I had issues similar to the ones in this question. After downgrading my version of Lokalize to the suggested one, many issues (including the ones mentioned in the question) disappeared. However, the "Update file from template" option is now greyed out, and I don't know why.
I also tried OmegaT, which does not work at all on my platform (Ubuntu 16.04).
[Update] Virtaal works great for merging new strings from a template, but the UI in general is very poorly designed...
Googling did not help, as every hit seems to be related to Xcode or something.
Thanks for any help in advance, I really appreciate it
I wrote a small npm command-line tool called xliffmerge.
In principle it does the same thing that Roland Oldengarm does with the gulp tasks described in his blog article.
It is free and you can have a look at it at https://github.com/martinroob/ngx-i18nsupport#readme
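A minimal sketch of how it slots into the workflow from the question, assuming the CLI still works the way the README describes (the flag names and the xliffmerge.json profile file are taken from that README, so double-check them there):
npm install -g ngx-i18nsupport
# xliffmerge.json points at the master messages.xlf and lists the target languages
xliffmerge --profile xliffmerge.json en de
# merges new/changed trans-units into messages.en.xlf / messages.de.xlf
# while keeping the units you have already translated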
The best workflow automation solution I have seen described so far is from Roland Oldengarm's blog entry "Angular 2: Automated i18n workflow using gulp". To summarize, in a few dozen lines of Gulp code he created the tooling to handle some of the challenges you faced. Specifically it runs ng-xi18n to extract the messages; creates an English translation with sources copied to targets; updates existing translations by adding new trans-units, keeping existing ones, and removing missing ones; and then exposes all xlf files as TypeScript string constants. These last strings can then be imported to supply the bootstrapModule with its translation provider options.
Caveat: I have not used this exact solution (and code) myself, but I was able to expose generated xlf as TypeScript strings and use them in an app in a manner similar to what he described. As for maintaining translations, I have leveraged IntelliJ IDEA (WebStorm) file comparison features and Counterparts Lite (for Mac) for that. My own efforts are still in early stages but are working end to end for an application that is in active development.
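His gulp code is JavaScript, but the shape of the pipeline reduces to a handful of steps. As a rough outline (only the first line is a real command; the comments stand in for his gulp tasks):
./node_modules/.bin/ng-xi18n    # 1. extract the i18n strings -> messages.xlf
# 2. create an English "translation" by copying each source to its target
# 3. for every existing translation file: add new trans-units, keep the
#    already-translated ones, and drop units that no longer exist
# 4. wrap each resulting .xlf file in a TypeScript string constant so it can
#    be imported and passed to bootstrapModule's translation providers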
Official Angular docs are now updated for Internationalization (i18n) at https://angular.io/docs/ts/latest/cookbook/i18n.html including a section specifically for creating a translation source file with the ng-xi18n tool.

XSL-FO tags not supported in Java

I'm trying to edit an XSL-FO document to be processed in Java via Apache FOP. Are there any tags that might not be supported or that have become deprecated?
I didn't write the original XSL-FO. It was originally a WordML document that I converted via Word2FO. I've gotten rid of junk characters and made sure all the tags are closed properly, so the only thing I can think of is that some of these tags might not be supported. In particular:
Tags with Microsoft-related properties, including fonts like Arial-Word-Unicode, references to Microsoft Office SmartTags, and other Word-related properties
SVG tags
External graphics tags containing huge streams of seemingly nonsense text (presumably the encoded image data) that represent an image.
I've been looking online for a list of these unsupported tags, but I can't seem to find anything.
I have found what I am looking for on this page.
http://xmlgraphics.apache.org/fop/compliance.html
Very useful for looking up what is/isn't supported.
I should really stop finding these solutions after asking these questions.
This question is answered, and the above link will probably help anyone who needs it answered too. Sorry to bother the rest of you.

dojo js library + jsdoc -> how to document the code?

I'd love to ask you: how do the guys developing dojo create the documentation?
From the nightly builds you can get the uncompressed js files with all the comments, and I'm sure there is some kind of documentation script that will generate some html or xml out of them.
I guess they use jsdoc, as it can be found in their util folder, but I have no idea how to use it. JsDoc Toolkit uses a different /** commenting */ notation than the original dojo files do.
Thanks for all your help
It's all done with a custom PHP parser and Drupal. If you look in util/docscripts/README and util/jsdoc/INSTALL you can get all the gory details about how to generate the docs.
It's different than jsdoc-toolkit or JSDoc (as you've discovered).
FWIW, I'm using jsdoc-toolkit, as it's much easier to generate static HTML with, and there's lots of documentation about the tags on the Google Code page.
Also, just to be clear, I don't develop dojo itself. I just use it a lot at work.
There are two parts to the "dojo jsdoc" process. There is a parser, written in PHP, which generates xml and/or json for the entirety of the listed namespaces (these are defined in util/docscripts/modules, so you can add your own namespaces; basic usage instructions are at the top of the file "generate.php"), and there is a Drupal part called "jsdoc" which installs as a Drupal module/plugin/whatever.
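As a starting point, something along these lines should kick off the parser (the arguments here are hypothetical; the authoritative usage notes are the ones at the top of generate.php):
cd util/docscripts
php generate.php dojo    # hypothetical invocation: the namespace(s) to parse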
The Drupal aspect of it is just Dojo's basic view of this data. A well-crafted XSLT or something that iterates over the json and produces html would work just the same, though neither of these is provided by default (we would love a contribution!). I shy away from the Drupal bit myself, though it has been running on api.dojotoolkit.org for some time now.
The doc parser is exposed so that you may use its inspection capabilities to write your own custom output as well. I use it to generate the Komodo .cix code completion in a [rather sloppy] PHP file, util/docscripts/makeCix.php, which dumps the information it finds into an XML doc crafted to match the spec there. This could be modified to generate any kind of output you choose with a little finagling.
The doc syntax is all defined on the style guideline page:
http://dojotoolkit.org/reference-guide/developer/styleguide.html

Autodocumentation type functionality for Fortran?

In the past I've used Doxygen for C and C++, but now I've been thrown onto a Fortran project and I would like to get a quick, all-encompassing look at the architecture.
In the past I've found reverse engineering tools to be useful where no documentation of the architecture exists.
So, is there a tool out there that will reverse engineer Fortran code?
I tried to use Doxygen but didn't have any luck. I will be working with two different projects: one is Fortran 90, and the other I think is Fortran 77.
Thanks for any insights and feedback.
Tools which may help with reverse engineering:
SciTools Understand
Link with some more tools (search "fortran")
Also, maybe some of these unit testing frameworks will be helpful (I haven't used them, so I cannot comment on the pros and cons of any of them):
FUnit
FRUIT
Ftnunit
(these links go to the Fortran Wiki, where you can find a tidbit on every one of them, along with links to their home sites).
Doxygen 1.6.1 will generate documentation, call graphs, etc. for Fortran source code in free-format (F90) format. You are out of luck for auto-documenting fixed-format (F77) code with doxygen.
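If you try doxygen on the free-format project, only a few Doxyfile settings matter. A minimal sketch (option names as in doxygen of that vintage; paths are assumptions):
doxygen -g Doxyfile             # write a default config file
# then edit Doxyfile:
#   OPTIMIZE_FOR_FORTRAN = YES
#   FILE_PATTERNS        = *.f90
#   EXTRACT_ALL          = YES  # helpful when the legacy code has no doc comments
doxygen Doxyfile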
All is not lost, however. The conversion from fixed to free format is straightforward and can be automated to a great degree: change the comment characters to '!', change the continuation characters to '&', and append '&' to lines that are to be continued. In fact, if the appended continuation character is placed in column 73, it should be ignored by standard F77 compilers (which still only recognize code in columns 1 through 72) but will be recognized by F9x/F2003/F2008 compilers. This allows the same code to be accepted in both fixed and free format, which lets you gracefully migrate from one format to the other.
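To show how mechanical those rules are, here is a rough awk sketch of the conversion. It handles only the common cases: it simply appends '&' rather than using the column-73 trick, and it assumes no tab-formatted source and no comment lines between a statement and its continuation (file names are placeholders):
awk '
  /^[Cc*]/ {                                  # comment character in column 1 -> "!"
    out[n++] = "!" substr($0, 2); next
  }
  length($0) >= 6 && substr($0, 6, 1) !~ /[ 0]/ {
    out[n-1] = out[n-1] " &"                  # mark the previous line as continued
    out[n++] = "      " substr($0, 7)         # drop the column-6 continuation mark
    next
  }
  { out[n++] = $0 }
  END { for (i = 0; i < n; i++) print out[i] }
' legacy.f > legacy.f90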
Conveniently, there are about a thousand small programs that will do this format adjustment to some degree or another. Realistically, if you're going to be maintaining the code, you might as well move it away from the 1928 spec for Hollerith (IBM) punched cards. :)

What are the relative merits of pdflatex?

Not sure this is a programming question, but we use LaTeX for all our API documentation and user documentation, so I hope it will go through.
Can someone please explain what are the relative merits of using pdflatex as opposed to the "classic" technique of
latex foo
dvips -Ppdf foo
ps2pdf foo.ps
From time to time I run into people who have difficulty because things don't work in pdflatex, and I know that using pdflatex gives up two things I have grown to value:
Can't use the very speedy xdvi viewer
Can't use the PStricks package
I should add that I typically get PDF with hyperlinks by using something on the order of
\usepackage[ps2pdf,colorlinks=true]{hyperref}
so it's not necessary to use pdflatex to get good PDF.
So
What are the advantages of pdflatex that I don't know about?
What are the disadvantages of the old tools that I've overlooked?
My favorite pdflatex feature is the microtype package, which is available only when using pdflatex to go directly to PDF, and really produces stunning results with no effort on my part. Apart from that, the only caveats I run into are image formats:
pdflatex supports PDF, PNG, and JPG images.
the postscript drivers support (at least) EPS.
Also, if you want to install fonts, the procedures are slightly different depending on what fonts that driver supports. (Hint: use XeTeX to instantly enable OpenType fonts.)
As it turns out, I recently read a post that shows the difference directly. Any document that uses tables or narrow columns will be improved automatically. I also find the inter-word spacing to be far more pleasing with pdflatex.
Is xdvi much faster than xpdf? I find the edit, TeX, view cycle to be very quick with pdflatex.
Have you tried MetaPost or MetaFun for graphics? I tend to put graphics creation in the hands of the capable, but MetaFun would likely be the package I'd use. Just reading the manuals is a pleasure.
Also, pdftex is the engine that is under active development (on the way towards luatex) and maintenance. I'm not sure the DVI counterparts are as actively maintained.
PStricks is supplanted by TikZ.
I haven't used xdvi in years, so pardon the trollish rhetorical questions: Does xdvi display vector fonts? Does it support SyncTeX (jumping to and from the code)? Is it as comfortable to use as PDF readers like Skim?
Taco Hoekwater is working on Escrito, a Postscript interpreter written in Lua, which would allow you to use pstricks in Luatex. He has an impressive project completion record: maybe I should have used "will" rather than "would" in the previous sentence.
I used pdflatex to generate the PDF for my ICFP 2009 paper. (I still needed to use standard latex to generate the PostScript file.) I did so for two reasons:
I couldn't seem to get ps2pdf to generate Letter rather than A4 output, no matter what command-line options I used.
For the printers, I needed to produce a version 1.3 PDF file, not 1.4. pdflatex made this easy to do. I set the PDF author and title information while I was at it.
Both of these problems may be fixable in some way, but as a first-time latex user I didn't find any obvious solutions, nor did the more experienced users I asked.
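For what it's worth, both of these turn out to be controllable on the ps2pdf side as well, since it passes flags through to Ghostscript's pdfwrite device. Something like the following, untested against that particular toolchain, would cover both the paper size and the PDF version:
ps2pdf -sPAPERSIZE=letter -dCompatibilityLevel=1.3 foo.ps foo.pdf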