wxStaticText inconsistently displays 'degree' character - wxwidgets

In the same application I have two different instances of wxStaticText. Each displays an angular value expressed in degrees. I've tested both instances for font name and font encoding. They are the same for both. I've tested that both strings passed to SetLabel() are using the same character value, decimal 176. Yet one displays the 'degree' character (small circle, up high) as expected and the other instead displays an odd character I'm not familiar with. How can this be? Is there some other property of wxStaticText I need to test?

I can't explain what you're seeing because obviously two identical controls must behave in the same way, but I can tell you that using decimal 176 is not a good way to encode the degree sign, unless you explicitly use wxConvISO8859_1 to create the corresponding wxString.
It is better to use wxString::FromUTF8("\xc2\xb0") instead or, preferably, make sure that your source files are UTF-8 encoded and just use wxString::FromUTF8("°").
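For illustration, a minimal sketch of both approaches (the label pointer is an assumed wxStaticText*, not from the question):

// Byte 176 (0xB0) is the degree sign only in Latin-1, so say so explicitly:
wxString fromLatin1("\xB0", wxConvISO8859_1);
// Or build the string from its UTF-8 byte sequence, which needs no locale:
wxString fromUtf8 = wxString::FromUTF8("\xc2\xb0");
label->SetLabel(fromUtf8); // 'label' is a hypothetical wxStaticText*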

Arghhhh! Found it. I was assuming SetLabel() was wxStaticText::SetLabel(), inherited from the wxWindow base class. It's not. We have a wrapper class of our own around wxStaticText that I was not aware of. It's the wrapper class that is bollixing the string value.
Moral: When debugging unfamiliar code, don't make assumptions, step ALL THE WAY in.

Related

How do I make iText 7 diacritic mark stacking work correctly?

I have run into a problem with iText 7 where diacritic marks are painted on top of one another instead of stacking properly when multiple marks are used on a single character. Is there a setting that makes them appear correctly, or is this a bug in iText 7? Any help greatly appreciated. This can be observed if you create a text object in your PDF like below. Obviously, replace the relevant bit with an actual font object, rather than what I have in there.
new Text("ḗ and ṓ are characters that display incorrectly").setFont(<UNICODE COMPATIBLE FONT LIKE CHARIS>);
While Bruno and Benoit correctly pointed out that for advanced typography features like stacking diacritical marks you need the pdfCalligraph module, there is a workaround you can try at your own risk. If your combinations of base glyph and diacritics are real, meaning they occur in real texts in some languages or other known contexts, then such combinations are most probably present in Unicode and have their own code points. For instance, in the text you provided, they are the U+1E17 and U+1E53 Unicode characters. Some fonts contain such glyphs, so there is a second option besides showing the base glyph and stacking diacritics on it: showing a single combined glyph. For example, ArialUni shipped with Windows does contain the above-mentioned glyphs.
To try this approach, you would need the following code for composing known Unicode base glyph + diacritics combinations into single glyphs:
import java.text.Normalizer;

// NFC composes base glyph + combining marks into single code points.
String originalStr = "ḗ and ṓ are characters that display incorrectly";
String normalizedStr = Normalizer.normalize(originalStr, Normalizer.Form.NFC);
new Text(normalizedStr); // use this normalized Text instance
The result that I got with ArialUni (screenshot omitted): the two characters render as single precomposed glyphs.
But again, as I mentioned, use this at your own risk: it only works if the necessary combinations are present both in Unicode and in the font. For guaranteed correct rendering you should still use pdfCalligraph.
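As a quick sanity check (plain Java, no iText required, and not part of the original answer), you can confirm that NFC collapses such a sequence into a single code point:

import java.text.Normalizer;

String decomposed = "e\u0304\u0301"; // e + combining macron + combining acute
String composed = Normalizer.normalize(decomposed, Normalizer.Form.NFC);
System.out.println(composed.length());                            // 1
System.out.println(Integer.toHexString(composed.codePointAt(0))); // 1e17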

Alt-Code Characters in F#

Edit/Update:
Thank you all for responding. I understand I was being too vague, but wasn't sure if posting naked lines of code would be useful in this case.
In my .vb file I have a pulldown control with its validation values as:
TempUnit.DataSource = {"°C", "°F", "°R", "K"}
...which is stored in a variable:
Dim unit As String = TempUnit.SelectedItem.ToString
...which gets passed into a function along with other variables:
Function xxx(..., ByVal unitT As String) As Double
... which finally calls the .fs file and gets evaluated using:
let tempConv t u =
    match u with
    | "°C" -> t * 9.0 / 5.0 + 32.0
    | "°R" -> t - 459.67
    | "K" -> t * 9.0 / 5.0 - 459.67
    | _ -> t
If any temperature unit other than Kelvin is selected, the match fails and defaults to the else case (which is Fahrenheit in this context). I ended up bypassing the degree symbol entirely by evaluating the substring instead:
Dim unit As String = TempUnit.SelectedItem.ToString.Substring(1)
The program is working again, but I have no idea what I changed, if anything, to make the string match stop working. The first thing I tried was to copy/paste from one file to the other to ensure they were identical strings, in addition to trying other symbols, but to no avail. The degree symbol is what caught my attention, but then I checked the pressure units and found the exact same issue with the micro prefix.
Thank you, Hans Passant, I had Unicode in mind as a possible solution, but it didn't seem like an easy fix in the heat of the moment. I appreciate your link.
Original Post:
I have a VB program referencing a function stored in an F# library file whose arguments include unit of measure strings containing special characters (e.g. "°C" "µBar").
The strings are identical in the .vb and .fs files, and there was no issue until the F# library file stopped recognizing the Alt-Code characters for reasons unbeknownst to me.
The program works as intended if I remove the offending Alt-Code character from the string definitions in the F# and VB files.
What would cause a match to fail between two identical strings that happen to contain an Alt-Code character?
What is the proper way to handle Alt-Code characters in F# (and VB for that matter)?
The µ glyph is a bit infamous. Unicode has two codepoints that look like that: U+03BC = "Greek small letter Mu" and U+00B5 = "Micro sign". One is a letter in the Greek alphabet, the other is a symbol that often appears in math and units.
Compare μ and µ. They look almost identical in most fonts (you can see the difference with Segoe UI) and very easily fool the human eye. Typographers insist they are not the same, particularly if they are Greek, I'd imagine. A computer does not consider them the same either, which is surely the problem you are dealing with.
Copy/paste or re-type to fix. The Charmap.exe applet in Windows is very handy to get this right.
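If retyping is not practical, one defensive option (my own sketch, not part of the answer) is to normalize both strings before matching; NFKC folds the micro sign into the Greek letter:

open System.Text

// NFKC maps compatibility characters such as U+00B5 onto their canonical
// counterparts (U+03BC here), so both spellings compare equal afterwards.
let nfkc (s: string) = s.Normalize(NormalizationForm.FormKC)

let greekMu = "\u03BCBar"   // Greek small letter Mu + "Bar"
let microSign = "\u00B5Bar" // Micro sign + "Bar"
printfn "%b" (greekMu = microSign)           // false: different code points
printfn "%b" (nfkc greekMu = nfkc microSign) // true after normalization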

Localizable.strings - Why do I need to put the placeholder in the key?

In a Localizable.strings file, why is it necessary to put placeholders in the key?
Assuming you use a dot notation like:
"welcome-back.label" = "welcome back, %@";
I've seen examples where they mix placeholders and dot notation, something like this:
"welcome-back %@.label" = "welcome back, %@";
^ The above might be incorrect.
But what I don't understand is why you even need the placeholder at all in the key when it's just a pointer to a value.
Can someone shed light on this?
Many thanks
You don't need it in the key; it's there to make life easier for people who read the code in the future, so they can easily tell that a parameter should be passed, what it's for, and therefore which variable should be used. If you want to use some other convention to indicate this, that's fine. If you want to make it super terse and hard to use, that's also fine, just discouraged...
NSLocalizedString will replace the string on the left hand side with the string on the right hand side. The string on the right hand side must obviously be the correct string for the situation, the string on the left hand side can be anything you want. You could use keys "1", "2", "3" etc and it would work (although you would go mad).
You can improve your life as a developer with the right strategies. I tend never to use plain English text as the key, because the same English word can have many different translations (for example "key" in German can be Taste, Schlüssel, Tonart and lots of other things). Instead I write some text that describes what the text is used for.
And to avoid problems when you type a key incorrectly, which the compiler has no chance of catching, I tend to use #define statements for the keys. It is much easier to keep just a list of #defines and your Localizable.strings in sync, and the compiler will tell you if you misspell a #defined constant.
And I tend to use the word "format" for strings that are format strings and not used directly. So I might have in one header file
#define kWelcomeBackLabelTitleFormat @"WelcomeBackLabelTitleFormat"
and in the localizable.strings file
"WelcomeBackLabelTitleFormat" = "welcome back, %#";
(The #define saves you if you used "WelcomebackLabelTitleFormat" by mistake with a lowercase b).
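To round this out, a minimal usage sketch (the label and userName names are assumed, not from the answer):

// Look the format string up by its #defined key, then fill the placeholder:
NSString *format = NSLocalizedString(kWelcomeBackLabelTitleFormat, nil);
label.text = [NSString stringWithFormat:format, userName]; // "welcome back, Alice"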

Trailing Ampersand in VB.NET hexadecimal?

This should be an easy one for folks. Google's got nothing except content farms linking to one blurb, and that's written in broken English. So let's get this cleared up here where it'll be entombed for all time.
What's the trailing ampersand on VB hexadecimal numbers for? I've read it forces conversion to an Int32 on the chance VB wants to try and store as an Int16. That makes sense to me. But the part I didn't get from the blurb was to always use the trailing ampersand for bitmasks, flags, enums, etc. Apparently, it has something to do with overriding VB's fetish for using signed numbers for things internally, which can lead to weird results in comparisons.
So to get easy points, what are the rules for VB.Net hexadecimal numbers, with and without the trailing ampersand? Please include the specific usage in the case of bitmasks/flags and such, and how one would also use it to force signed vs. unsigned.
No C# please :)
VB.NET will regard "&H"-notation hex constants in the range from 0x80000000-0xFFFFFFFF as negative numbers unless the type is explicitly specified as UInt32, Int64, or UInt64. Such behavior might be understandable if the numbers were written with precisely eight digits following the "&H", but for some reason I cannot fathom, VB.NET will behave that way even if the numbers are written with leading zeroes. In present versions of VB, one may force the number to be evaluated correctly by using a suffix of "&" (Int64), "L" (Int64), "UL" (UInt64), or "UI" (UInt32). In earlier versions of VB, the "problem range" was 0x8000-0xFFFF, and the only way to force numbers in that range to be evaluated correctly (as a 32-bit integer, which was then called a "Long") was a trailing ampersand.
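A small sketch of that behavior (my own illustration, not from the answer):

Dim asInt32 = &HFFFFFFFF    ' Integer: the bit pattern is reused, value is -1
Dim asInt64 = &HFFFFFFFF&   ' "&" suffix forces Long: value is 4294967295
Dim asUInt32 = &HFFFFFFFFUI ' "UI" suffix forces UInteger: value is 4294967295
Console.WriteLine(asInt32)  ' -1
Console.WriteLine(asInt64)  ' 4294967295
Console.WriteLine(asUInt32) ' 4294967295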
Visual Basic has the concept of Type Characters. These can be used to modify variable declarations and literals, although I'd not recommend using them in variable declarations - most developers are more familiar these days with As. E.g. the following declarations are equivalent:
Dim X&
Dim X As Long
But personally, I find the second more readable. If I saw the first, I'd actually have to go look up the Type Characters documentation, or use Intellisense, to work out what the variable's type is (not good if looking at the code on paper).

Asc(Chr(254)) returns 116 in .Net 1.1 when language is Hungarian

I set the culture to Hungarian language, and Chr() seems to be broken.
System.Threading.Thread.CurrentThread.CurrentCulture = New System.Globalization.CultureInfo("hu-US")
System.Threading.Thread.CurrentThread.CurrentUICulture = New System.Globalization.CultureInfo("hu-US")
Chr(254)
This returns "ţ" when it should be "þ"
However, Asc("ţ") returns 116.
This: Asc(Chr(254)) returns 116.
Why would Asc() and Chr() be different?
I checked and the 'wide' functions do work correctly: AscW(ChrW(254)) = 254
Chr(254) interprets its argument in a system-dependent way, by looking at the System.Globalization.CultureInfo.CurrentCulture.TextInfo.ANSICodePage property. See the MSDN article about Chr. You can check whether that value is what you expect. "hu-US" (the Hungarian locale as used in the US) might do something strange there.
As a side note, the current documentation for Asc() makes no promises about which code page it uses (that information was there until 3.0).
Generally I would stick to the Unicode variants (ending in -W) if at all possible, or use the Encoding class to specify the conversions explicitly.
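For instance (my own sketch, not part of the answer), both of these avoid the thread's ANSI code page entirely:

' ChrW/AscW operate on UTF-16 code units, so no code page is consulted:
Console.WriteLine(AscW(ChrW(254)))                    ' always 254
' Or make the conversion explicit with the Encoding class:
Dim latin1 = System.Text.Encoding.GetEncoding("ISO-8859-1")
Console.WriteLine(latin1.GetString(New Byte() {254})) ' "þ" in any culture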
My best guess is that your Windows tries to represent Chr(254)="ţ" as a combined letter, where the first letter is Chr(116)="t" and the second ("¸" or something like that) cannot be returned because Chr() only returns one letter.
Unicode text should not be handled character-by-character.
It sounds like you need to set the code page for the current thread -- the current culture shouldn't have any effect on Asc and Chr.
Both the Chr docs and the Asc docs have this line:
The returned character depends on the code page for the current thread, which is contained in the ANSICodePage property of the TextInfo class. TextInfo.ANSICodePage can be obtained by specifying System.Globalization.CultureInfo.CurrentCulture.TextInfo.ANSICodePage.
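As a quick check (my own sketch, not from the answer), you can print the code page that Chr and Asc will consult:

Dim cp = System.Globalization.CultureInfo.CurrentCulture.TextInfo.ANSICodePage
Console.WriteLine(cp) ' 1250 for Hungarian ("hu-HU"), 1252 for "en-US"

In code page 1250 (the Central European default used for Hungarian), byte 254 maps to "ţ", which matches what the question reports.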
I have seen several problems in VBA on the Mac where characters over 127 and some control characters are not treated properly.
This includes paragraph marks (especially in text copied from the internet or scanned), "¥", and "Ω".
They cannot always be searched for, cannot be used in file names (though they could in the past), and, when tested, come up as another ASCII number. I have had to write algorithms to change these when files open, as they often look like the right character but then crash some of my macros when they act strangely. The character will look and act right when I save the file, but may be changed when it is reopened.
I will eventually try to switch to unicode, but I am not sure if that will help this issue.
This may not be the issue that you are observing, but I would not rule out isolated problems with certain characters like this. I have sent notes to MS about this in the past but have received no joy.
If you cannot find another solution and the character looks correct when you type it in, then I recommend using a macro snippet like the one below, which I run when updating tables. You of course have to set up theRange as the area you are looking at; a whole file can take a while.
Dim aChar As Long
For aChar = 1 To theRange.Characters.Count
    theRange.Characters(aChar).Select
    ' The mangled character reports Asc() = 95 without being a real
    ' underscore, so retype it as the glyph it was meant to be.
    If Asc(Selection.Text) = 95 And Selection.Text <> "_" Then Selection.TypeText "Ω"
Next aChar