How to affix a pound sign (£) to a decimal (VB.NET)

Initially there was a lexicographic-ordering issue that caused a column of currency values to be sorted as strings.
To sort the column as currency within VB, the following code is used:
m_dtTemp.Columns("P/O Value").DataType = GetType(Decimal)
which works absolutely fine; the problem now is that no pound sign (£) is affixed to the values.
I can't see a way to include a pound sign without resorting back to type String, which would take me back to square one. The overall goal is to retain the numerical sort while still displaying a pound sign with the values.
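One way to keep the numeric sort is to leave the column as Decimal and apply a currency format to the display only. A minimal sketch, assuming the table is shown through a DataGridView (dgvOrders is a hypothetical grid bound to m_dtTemp):

' Keep the underlying data numeric so sorting stays numeric.
m_dtTemp.Columns("P/O Value").DataType = GetType(Decimal)
' Format only the displayed text of the bound grid column.
With dgvOrders.Columns("P/O Value").DefaultCellStyle
    .Format = "C2" ' currency with 2 decimal places
    .FormatProvider = New System.Globalization.CultureInfo("en-GB") ' renders as £1,234.50
End With

The cell value itself remains a Decimal, so clicking the column header still sorts numerically; only the rendered text carries the £.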

Related

Using and displaying more than 2 decimal places

I want to use more than 2 decimal places to make calculations with my MS Project project.
So far I haven't been able to find any resource that explains how to show more than 2 decimal places in a field (like Work or Cost, for example), nor how to truncate numbers instead of rounding them (say, USD 12.357 to USD 12.35 instead of USD 12.36).
Is there any way of doing this? It could be through VBA or any method you can come up with.
You can use more than 2 decimal places, just not in the user interface.
The UI truncates displayed and entered values to 2 decimals. However, values entered and accessed via VBA do not have this limitation.
For example, using the Immediate window (VBA), enter the cost for the first two tasks of the active project and then request the values to prove they are stored as entered, up to 16 digits:
ActiveProject.Tasks(1).Cost = 0.1234567890123456789
ActiveProject.Tasks(2).Cost = 123456789012.123456789
? ActiveProject.Tasks(1).Cost
0.123456789012346
? ActiveProject.Tasks(2).Cost
123456789012.123
To show the value in the UI as stored, customize a text field using the Format function:
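The example formula was cut off here. A plausible reconstruction, assuming a custom Text field (e.g. Text1) whose formula pulls from Cost, would be:

Format([Cost], "$#,##0.000000")

This uses the VBA Format function inside the custom-field formula, so the Text field displays the cost with six decimal places instead of the two the UI normally shows.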

Is format ####0.000000 different to 0.000000?

I am working on some legacy code at the moment and have come across the following:
FooString = String.Format("{0:####0.000000}", FooDouble)
My question is: is the format string here, ####0.000000, any different from simply 0.000000?
I'm trying to generalize the return type of the function that sets FooDouble, and I'm checking to make sure I don't break existing functionality; hence I'm trying to work out what the # placeholders add here.
I've run a couple of tests in a toy program and couldn't see how the result was any different, but maybe there's something I'm missing?
From MSDN:

The "#" custom format specifier serves as a digit-placeholder symbol. If the value that is being formatted has a digit in the position where the "#" symbol appears in the format string, that digit is copied to the result string. Otherwise, nothing is stored in that position in the result string.

Note that this specifier never displays a zero that is not a significant digit, even if zero is the only digit in the string. It will display zero only if it is a significant digit in the number that is being displayed.
Because both patterns have a single 0 before the decimal separator, the two format strings should return the same result: # never forces a digit to appear, and the integer part of a number is never truncated by a custom numeric format anyway.
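A quick sanity check (a minimal VB.NET sketch) shows the two patterns producing identical output:

Dim fooDouble As Double = 3.14159
Console.WriteLine(String.Format("{0:####0.000000}", fooDouble)) ' 3.141590
Console.WriteLine(String.Format("{0:0.000000}", fooDouble))     ' 3.141590
Console.WriteLine(String.Format("{0:####0.000000}", 0.5))       ' 0.500000
Console.WriteLine(String.Format("{0:0.000000}", 0.5))           ' 0.500000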

Negative dollars and iTextSharp

I am trying to assign a form field in a PDF through iTextSharp that has a negative dollar amount. The value is a simple string that starts with '-$'. Every time I add the value to the form using SetField, anything after the negative sign is lost. Positive dollar amounts are fine; only negative values are lost.
I am adding the value as such:
form.SetField(fieldName, fieldValue);
form is of type AcroFields; fieldName and fieldValue are both strings. I have traced down to the point where the string is being passed to SetField, and it's correct right there. I have also tried replacing '$' with its Unicode value, to no avail. Am I supposed to escape the dollar sign? And if so, does anyone know what the escape character is?
I fixed the issue, though I do not totally understand the cause. The field was defined as a multi-line text box even though it was being used as a single line. I unchecked the option for the box to be multi-line and the issue went away.
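For reference, the same fix can presumably be made in code before setting the value. This is an unverified sketch against the iTextSharp 5.x AcroFields API (file names and the sample amount are illustrative):

Imports System.IO
Imports iTextSharp.text.pdf

Dim reader As New PdfReader("form.pdf")
Dim stamper As New PdfStamper(reader, New FileStream("out.pdf", FileMode.Create))
Dim form As AcroFields = stamper.AcroFields
' Clear the multi-line flag on the field, then set the negative amount.
form.SetFieldProperty(fieldName, "clrfflags", PdfFormField.FF_MULTILINE, Nothing)
form.SetField(fieldName, "-$123.45")
stamper.Close()
reader.Close()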

What is the rationale behind "0xHHHHHHHH" formatted Microsoft error codes?

Why does Microsoft tend to report "error codes" as hexadecimal values?
Error codes are 32-bit double-word values (4-byte values). This is likely the raw integer return code of whatever C-style function reported the error.
However, why report the error to a user in hexadecimal? The "0x" prefix is worthless, and the savings in character length are minimal. These errors end up displayed to end users in Microsoft software and even on Microsoft websites.
For example:
0x80302010 is 10 characters long, and very cryptic.
2150637584 is the decimal equivalent, and much more user friendly.
Is there any description of the "standard" use of a 32-bit field as an error code mechanism (possibly dividing the field into multiple fields for developer interpretation) or of the logic behind presenting a hexadecimal code to end users?
We can only guess about the reason, so this question cannot be answered for sure. But let's guess:
One reason might be that with hex numbers, you know the number will have 8 digits. If it has more or fewer digits, the number is "corrupt" (for example, the customer mistyped it). With decimal numbers, the number of digits for the same value varies.
Also, to a developer, hex numbers are more convenient and natural than decimal numbers. For example, if some info is coded as bit flags, you can easily decipher them manually in hex, but not in decimal.
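For example (a small VB.NET illustration of the bit-flag point):

Dim flags As Integer = &H2A              ' hex makes the bit pattern visible: &H2 Or &H8 Or &H20
Console.WriteLine((flags And &H8) <> 0)  ' True - the &H8 flag is clearly set
' The same value in decimal, 42, gives no visual hint of which bits are set.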
It is a little bit subjective as to whether hexadecimal or decimal error codes are more user friendly. Here is a scenario where the hexadecimal error codes are significantly more convenient, which could be part of the reason that hexadecimal error codes are used in the first place.
Consider the documentation for Win32 Error Codes for Active Directory Service Interfaces: ADSI uses error codes with the format 0x8007XXXX, where the XXXX corresponds to a DWORD value that maps to a Win32 error code.
This makes it extremely easy to get the corresponding Win32 error code, because you can just strip off the last 4 digits. This would not be possible with a decimal error code representation.
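In code, the strip is a one-line mask. A minimal VB.NET sketch (the HRESULT value is just an example):

Dim hres As Integer = &H80070005          ' an 0x8007XXXX HRESULT wrapping a Win32 code
Dim win32Code As Integer = hres And &HFFFF
Console.WriteLine(win32Code)              ' 5 = ERROR_ACCESS_DENIED
' The unsigned decimal form, 2147942405, offers no such shortcut.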
The middle-ground answer to this would be that formatting the number like an IPv4 address would be more user-friendly while preserving some sort of formatting that helps the dev guys.
Although TBH I think hex is fine, the hypothetical non-technical user has no more idea what 0x1234ABCD means than 1234101112 or "Cracked gangle pin on fwip valve".

Objective-C: How to use both "." and "," as a decimal separator or at least convert one to another on-the-fly

I have an instance of NSTextField, e.g. someTextField, for which I will use the number formatter to limit the input to numbers only.
The problem comes with the localization combined with the specific keyboard layouts used.
I would like to allow both the, say, American and European users to enter their localized decimal separators. As you all know, in the USA that would be . and for a good part of Europe that would be , (and similarly with the thousands separator, etc., but let's put that to the side for now).
So I wrote the following line for the NSNumberFormatter instance:
[numberFormatter setLocale:[NSLocale currentLocale]];
The problem occurs when a user who has , set as the decimal separator AND the US keyboard layout switched on (fairly common here in Europe) presses the decimal-separator key on the numeric keypad. With the US keyboard layout on, that gives him the . as the decimal separator, but at the same time it is ignored in someTextField because of the system-wide localized settings. So, to type 1/2 using the numeric keypad only, you would type 0.5 (US keyboard layout) in the text field, and it would be read by the system as 0 because it recognizes only , as the decimal separator. This is how the program currently works, and I would like to improve it in this regard.
I would like to allow the user to type . in someTextField and for the system to recognize it as a decimal separator just as it would ,. This kind of behavior can be seen in Apple's own Calculator application: if you type . on the numeric keypad, it appears as , immediately on the screen (under the same conditions described previously).
Question is: is it possible for me to achieve this using an instance of NSNumberFormatter? If not, is it possible to set on-the-fly conversion of the numerical keyboard decimal separator key output to the decimal separator set system-wide? Or perhaps you have some other suggestions?
Thanks.
I don't have a specific answer to your question, but I'd say the right approach is not to muck about with the NSNumberFormatter at all and concentrate on trying to change the characters generated by the keyboard.
The default locale for number formatters is usually the system's default locale as set by the user in the internationalization settings. If you change that behaviour programmatically for UI elements, you are effectively telling the user "I know better than you how you want to input numbers". Arrogance of that sort on the part of the developer never gets them good marks with respect to UI design.
In fact, you could apply the same argument to remapping the dot button on the numeric keypad. How do you know that the user hasn't set US keyboard layout because it allows them to get a dot from that key? Maybe they consider it more important to be able to type the thousands separator from the keypad than the decimal separator. I'm not saying you shouldn't implement your feature, just make sure that the user has control over when it is enabled or disabled.
Anyway, you probably want to override the keyDown event on the control.
Take a look at the UITextFieldDelegate protocol. It allows your text field to ask its delegate whether it should accept a character the user just typed. The appropriate method would be textField:shouldChangeCharactersInRange:replacementString:. If the character in question is , or ., just let the delegate append the properly localized decimal separator "manually" and return NO.
I'm not quite sure if this will work if the text field is set to number mode; maybe the input is being filtered before the delegate method is called, leading to the method not being called at all if the "wrong" separator has been filtered out previously. If so, you might want to consider leaving the text field in alphanumerical mode and using the delegate method again to filter out anything that is not a number or a separator. However, in this case you should make sure the user is not allowed to type more than one decimal separator - either ignore the surplus ones or remove the first one and accept the new one.