I have the following code and I can't get it to format properly. It is ignoring my formats. For example, sOrderNo is not padded to 10 digits with zeros; it comes out as a single zero. What am I doing wrong here?
sKey = String.Format("{0:D10} {1:00} {2} {3} {4} {5} {6} {7} {8:000.0000} {9:000.0000}",
sOrderNo, sItem, sSpecies, sSpSort, rs.Fields("UsageCode").Value,
rs.Fields("Cure").Value, sGrade, sSurface,
rs.Fields("TkFactor").Value,rs.Fields("WdFactor").Value)
sOrderNo is not 10 zeros

The necessary format string for that is D10, not 10D. Idle curiosity: the documentation implies only up to D9 is possible, but it works with 10 and all the way up to 99:
Console.WriteLine(string.Format("Hello World {0:D99}", 12345));
I'm trying to convert a character variable to a numeric variable, but unfortunately I'm really struggling. Help would be appreciated!
I keep getting the following error: 'Invalid argument to function INPUT at line 3259 column 17'
Syntax:
Data want;
Set have;
Dosis_num = input(Dosis, best12.);
run;
I have also tried multiplying the variable by 1. This doesn't work either.
The variable looks like this:
Dosis
155
201
2.1
0.8
123.80
12.0
3333.4
00.6
Want:
Dosis_num
155.0
201.0
2.1
0.8
123.8
12.0
3333.4
0.6
Thanks a lot!
The code will work with the data you show. So either the values in the character variable are not what you think or you are not using the right variable name for the variable.
The code is trying to use only the first 12 bytes of the character variable. Normally you don't need to restrict the number of characters you ask the INPUT() function to use; in fact, the INPUT() function does not care if the width of the informat is larger than the length of the string being read. So just use 32. as the informat, since 32 is the maximum width that the normal numeric informat can read. Note that BEST is the name of a FORMAT; if you use it as the name of an informat, it is just an alias for the normal numeric informat.
If the variable has a length longer than 12, then perhaps there are leading spaces in it (note that the ODS output displays do not properly show leading spaces); use the LEFT() function to remove them.
Dosis_num = input(left(Dosis), 32.);
The typical thing to do here is to find out what's actually in the character variable. There is likely something in there that is causing the issue.
Try this:
data have;
input #1 Dosis $8.;
datalines;
155
201
2.1
0.8
123.80
12.0
3333.4
00.6
;;;;
run;
data check;
set have;
put dosis hex32.;
run;
What I get is this:
83 data check;
84 set have;
85 put dosis hex32.;
86 run;
3135352020202020
3230312020202020
322E312020202020
302E382020202020
3132332E38302020
31322E3020202020
333333332E342020
30302E3620202020
NOTE: There were 8 observations read from the data set WORK.HAVE.
NOTE: The data set WORK.CHECK has 8 observations and 1 variables.
NOTE: DATA statement used (Total process time):
real time 0.01 seconds
cpu time 0.01 seconds
All those 2020202020 are spaces, which should be there (all character strings are space-padded to their full length). The period/decimal point is 2E, and digits are 3x where x is the digit (because the ASCII code for '0' is hex 30, not for any other reason). So, for example, for the last one, 00.6: 30 means zero, 30 means zero, 2E means period, and 36 means 6.
Check to make sure that you don't have any characters other than digits (3x), period (2E), and space (20).
The other thing to verify is that your system is set to use . as the decimal separator and not , as many European systems are; otherwise this requires the COMMAw. informat. You can actually just try the COMMAw. informat anyway (COMMA12. is sufficient if 12 is plenty, and don't include anything after the period), as anything that 12. can read can also be read by COMMAw..
I have input comprising five upper-case English letters, e.g. ABCDE, and I need to convert this into a unique two-character ASCII output.
e.g. ABCDE and ZZZZZ should each give a different output
I have converted ABCDE into hex, which gives me 4142434445, but from this can I get to the two-character output value I require?
Example:
INPUT1 = ABCDE
Converted to hex = 4142434445
INPUT2 = 4142434445
OUTPUT = ?? Any 2 ASCII Characters
Other examples of INPUT1 =
BIRAL
BRMAL
KLAAX
So you're starting with a 5-digit base-26 number, and you want to squeeze that into some 2-digit scheme with base n?
All possible 1-5 digit base-26 numbers give you a number space of 26^5 = 11,881,376.
So you want the minimum n where n^2 >= 11,881,376.
Which gives you 3447.
Now it's up to you to go and find a suitable glyph block somewhere in Unicode where you can reliably block out 3447 separate characters to act as your new base/alphabet, and to construct a mapping from your 5-char base-26 ABCDE-type number onto your 2-char base-3447 weird-glyph number. Good luck with that.
There's not enough variety in ASCII to do this: ASCII has only 128 characters (and only 95 of those are printable). Limiting yourself to 2 characters of ASCII means you can only address a number space of 128^2 = 16,384.
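Purely as an illustration of that mapping, here is a minimal C# sketch; the helper name and the choice of U+4E00 as the start of the 3447-character alphabet are mine, picked only because that Unicode block easily holds 3447 consecutive code points:

// Treat the 5 letters as a base-26 number (0 .. 26^5 - 1), then write that
// number with two "digits" in base 3447 drawn from an arbitrary Unicode block.
static string EncodeFiveLetters(string fiveLetters)
{
    int value = 0;
    foreach (char c in fiveLetters)        // "ABCDE" -> 0*26^4 + 1*26^3 + 2*26^2 + 3*26 + 4
        value = value * 26 + (c - 'A');

    const int Radix = 3447;                // smallest n with n*n >= 26^5
    const int BlockStart = 0x4E00;         // any block with >= 3447 code points works
    char high = (char)(BlockStart + value / Radix);
    char low  = (char)(BlockStart + value % Radix);
    return new string(new[] { high, low });
}

// Decoding is the reverse: subtract BlockStart from each character,
// recombine as high * 3447 + low, then peel off five base-26 digits.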
I'm trying to assemble proper track data given a CVC3 and a bunch of positional parameters. But the EMV C-2 Kernel book is about as obtuse as you could imagine (would it kill somebody to include an example!?!). Can anyone help work this example:
9f62 - pcvc3(t1) - Position of CVC3 in track1: 0x38 (4-6?)
9f63 - punatc(t1) - Unpredictable Number Track1 Pos: 0x3C6 (2-3 7-10?)
9f64 - natc(t1) - Digits in track1 ATC: 4
9f65 - pcvc3(t2) - Position of CVC3 in track2: 0x38 (4-6)
9f66 - punatc(t2) - Unpredictable Number Track2 Pos: 0x3C6 (2-3 7-10?)
9f67 - natc(t2) - Digits in track2 ATC: 4
After successful checksum generation:
9f61 - track2 CVC3 - 2EF4
9f60 - track1 CVC3 - 609B
9f36 - ATC - 1E47
Assuming the discretionary data field starts out as all 0s, how does it end up? The spec says this:
Convert the binary encoded CVC3 (Track2) to the BCD encoding of the
corresponding number expressed in base 10. Copy the q least significant digits of the
BCD encoded CVC3 (Track2) in the eligible positions of the 'Discretionary Data' in
Track 2 Data. The eligible positions are indicated by the q non-zero bits in
PCVC3(Track2).
I read that as:
CVC3 = 0x609B = 24731 (so copy 731? What does BCD have to do with this? Or are they just saying "copy the 731 as bcd encoded to the byte array"?)
Yes, you are correct, it is rather obtuse. You are also correct that you would convert your P values (PCVC3 and PUNATC) to binary (0011 1000 and 0011 1100 0110 for your track1 P values); you then right-align the proper values with the discretionary data. Example:
Bxxxxxxxxxxxxxxxx^ /^14111014010000000000
                     00000000000000CCC000
                     0000000000AAAA000UU0
So you say your CVC3 for track 1 is 609B, which is 24,731; since your PCVC3 is asking for only 3 characters, you'd set 731 in. Your ATC is 1E47, which is 7,751; your NATC asks for 4 digits, so you'd use 7751. FYI, if the ATC has fewer digits than requested, you'd pad with 0's. Your unpredictable number is even more tricky: you make a 4-byte random number, convert it to a uint (base 10), and then mark the 8 most significant digits as 0. For example, let's say your random 4 bytes are 29A6 06AE; in base 10 that is 698,746,542; mark out the first 8 characters with 0 and you are left with 000,000,002, so you'd place 02 in for your unpredictable number placement. So, all that said, your track would look like this:
Bxxxxxxxxxxxxxxxx^ /^14111014017751731020
The last character is equal to the number of unpredictable number (numeric) digits, which was 2, so the last digit is 2, making your final track data:
%Bxxxxxxxxxxxxxxxx^ /^14111014017751731022;
Track2 is very similar. Good luck with it; I understand your frustration with this. :)
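To make the placement concrete, here is a rough C# sketch of the logic described above, using the worked values from this thread; the helper names and structure are my own, not from the spec:

using System;
using System.Collections.Generic;
using System.Linq;

class TrackDataSketch
{
    // 1-based positions of the set bits in a position mask, where
    // position 1 is the rightmost digit of the discretionary data.
    static List<int> SetBitPositions(uint mask)
    {
        var positions = new List<int>();
        for (int pos = 1; pos <= 32; pos++)
            if ((mask & (1u << (pos - 1))) != 0)
                positions.Add(pos);
        return positions;                              // ascending, least significant first
    }

    // Copies the least significant digits of `digits` into `disc` at the
    // given positions (lowest digit goes to the lowest position).
    static void PlaceDigits(char[] disc, string digits, IEnumerable<int> positions)
    {
        var pos = positions.OrderBy(p => p).ToList();
        string src = digits.PadLeft(pos.Count, '0');
        src = src.Substring(src.Length - pos.Count);   // keep only the last |pos| digits
        for (int i = 0; i < pos.Count; i++)
            disc[disc.Length - pos[i]] = src[src.Length - 1 - i];
    }

    static void Main()
    {
        // Discretionary data from the example above, eligible positions still 0
        char[] disc = "14111014010000000000".ToCharArray();

        uint pcvc3  = 0x38;   // 0011 1000       -> positions 4-6
        uint punatc = 0x3C6;  // 0011 1100 0110  -> positions 2-3 and 7-10
        int  natc   = 4;      // 4 of those positions carry ATC digits

        int cvc3 = 0x609B;    // 24731 -> last 3 digits: 731
        int atc  = 0x1E47;    // 7751
        var punatcPositions = SetBitPositions(punatc); // 2, 3, 7, 8, 9, 10
        int nUn = punatcPositions.Count - natc;        // 2 digits left for the UN
        string un = "02";     // the 4-byte random number, reduced as described above

        PlaceDigits(disc, cvc3.ToString(), SetBitPositions(pcvc3));    // CVC3 into 4-6
        PlaceDigits(disc, atc.ToString(),  punatcPositions.Skip(nUn)); // ATC into 7-10
        PlaceDigits(disc, un,              punatcPositions.Take(nUn)); // UN into 2-3
        disc[disc.Length - 1] = (char)('0' + nUn);     // last digit = number of UN digits

        Console.WriteLine(new string(disc));           // 14111014017751731022
    }
}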
Here's the content of my DataGrid
id
1
2
3A
4
5
6A
..
...
10V1
I want to get the max number from the datagrid. Then, I want to
display the next number (In this case: 11) in the textbox beside the grid
Expected Output
id
1
2
3A
4
5
6A
..
...
10V1
11
I tried the following code:
textbox1.text = gridList.Rows(gridlist.RowCount() - 1).Cells(1).Value + 1
It works if the previous row value is entirely numeric. However, if the value is alphanumeric, I am getting the following error:
Conversion from string "10V1" to type 'Double' is not valid.
Can someone help me solve this problem? I am looking for a solution in VB.Net
You may want to look into Regex to do that (based on what I understand from your question)
Here's a related question on this.
Regex.Match will return the part of the string that matches the expression. In your case, you want the number at the beginning of the string (try "^\d+" as your expression; it will find any series of digits at the start of the string). You can then convert the resulting string into an int and add 1 to it.
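For example, here is a small C# sketch of the idea (the same Regex classes are available from VB.Net; the ids are read from a plain array here, so wiring it to the grid is left out):

using System;
using System.Text.RegularExpressions;

class NextIdDemo
{
    static void Main()
    {
        // Sample values as they might appear in the grid's id column
        string[] ids = { "1", "2", "3A", "4", "5", "6A", "10V1" };

        int max = 0;
        foreach (string id in ids)
        {
            // "^\d+" matches only the leading digits, so "10V1" yields "10"
            Match m = Regex.Match(id, @"^\d+");
            if (m.Success)
                max = Math.Max(max, int.Parse(m.Value));
        }

        Console.WriteLine(max + 1);   // prints 11
    }
}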
Hope this helps!
Edit: Here's more info on regex expressions.
I'm trying to emulate a function in SQL that a client has produced in Excel. In effect, they have a unique, 10-digit numeric value (VARCHAR) as the primary key in one of their enterprise database systems. Within another database, they require a unique, 5-digit alphanumeric identifier. They want that 5-digit alphanumeric value to be a representation of the 10-digit number. So what they did in excel was to split the 10-digit number into pairs, then convert each of those pairs into a hexadecimal value, then stitch them back together.
The EXCEL equation is:
=IF(VALUE(MID(A2,1,4))>0,DEC2HEX(VALUE(MID(A2,3,2)))&DEC2HEX(VALUE(MID(A2,5,2)))&DEC2HEX(VALUE(MID(A2,7,2)))&DEC2HEX(VALUE(MID(A2,9,2))),DEC2HEX(VALUE(MID(A2,5,2)))&DEC2HEX(VALUE(MID(A2,7,2)))&DEC2HEX((VALUE(MID(A2,9,2)))))
I need the SQL equivalent of this. Of course, should someone out there know a better way to accomplish their goal of "a 5-digit alphanumeric identifier" based off the 10-digit number, I'm all ears.
ADDED 8/2/2011
First of all, thank you to everyone for the replies. Nice to see folks willing to help and even enjoying it! Based on all the responses, I'm apt to tell my client their intent is sound, only their method is off-kilter. I'd also like to recommend a solution. So the challenge remains, just modified slightly:
CHALLENGE: Within SQL, take a 10 digit, unique NUMERIC string and represent it ALPHANUMERICALLY in as few characters as possible. The resulting string must also be unique.
Note that the first 3-4 characters in the 10-digit string are likely to be zeros, and that they could be stripped to shorten the resulting alphanumeric string. Not required, but perhaps helpful.
This problem is inherently impossible. You have a 10 digit numeric value that you want to convert to a 5 digit alphanumeric value. Since there are 10 numeric characters, this means that there are 10^10 = 10 000 000 000 unique values for your 10 digit number. Since there are 36 alphanumeric characters (26 letters + 10 numbers), there are 36^5 = 60 466 176 unique values for your 5 digit number. You cannot map a set of 10 billion elements into a set with around 60 million.
Now, let's take a closer look at what your client's code is doing:
So what they did in excel was to split the 10-digit number into pairs, then convert each of those pairs into a hexadecimal value, then stitch them back together.
This isn't 100% accurate. The Excel code never uses the first 2 digits, but performs this operation on the remaining 8. There are two main problems with this algorithm which may not be intuitively obvious:
Two 10-digit numbers can map to the same 5-digit number. Consider the numbers 1000000117 and 1000001701. The last four digits of 1000000117 get mapped to 1 11, whereas the last four digits of 1000001701 get mapped to 11 1. This causes both to map to 00111.
The 5 digit number may not even end up being 5 digits! For example, 1000001616 gets mapped to 001010.
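To see both problems concretely, here is a small C# transcription of the Excel formula's main branch (my own rewrite, so treat it as a sketch):

using System;

class ExcelHexDemo
{
    // Mirrors the main IF branch of the Excel formula: hex-encode the digit
    // pairs at positions 3-4, 5-6, 7-8 and 9-10 and concatenate the results.
    // (The other branch simply drops the first of these pairs.)
    static string PairsToHex(string tenDigits)
    {
        return Convert.ToString(int.Parse(tenDigits.Substring(2, 2)), 16).ToUpper()
             + Convert.ToString(int.Parse(tenDigits.Substring(4, 2)), 16).ToUpper()
             + Convert.ToString(int.Parse(tenDigits.Substring(6, 2)), 16).ToUpper()
             + Convert.ToString(int.Parse(tenDigits.Substring(8, 2)), 16).ToUpper();
    }

    static void Main()
    {
        Console.WriteLine(PairsToHex("1000000117"));  // 00111
        Console.WriteLine(PairsToHex("1000001701"));  // 00111  -- collision
        Console.WriteLine(PairsToHex("1000001616"));  // 001010 -- six characters, not five
    }
}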
So, what is a possible solution? Well, if you don't care if that 5 digit number is unique or not, in MySQL you can use something like:
hex(<NUMERIC VALUE> % 0xFFFFF)
The log of 10^10 base 2 is 33.219280948874
> return math.log(10 ^ 10) / math.log(2)
33.219280948874
> = 2 ^ 33.21928
9999993422.9114
So, it takes 34 bits to represent this number. In hex this will take 34/4 = 8.5 characters, much more than 5.
> return math.log(10 ^ 10) / math.log(16)
8.3048202372184
The Excel macro is ignoring the first 2 (or 4) characters of the 10-character string.
You could try encoding in base 36 instead of 16. This will get you down to 7 characters or fewer.
> return math.log(10 ^ 10) / math.log(36)
6.4254860446923
The popular base 64 encoding will get you to 6 characters
> return math.log(10 ^ 10) / math.log(64)
5.5365468248123
Even Ascii85 encoding won't get you down to 5.
> return math.log(10 ^ 10) / math.log(85)
5.1829075929158
You need base 100 to get to 5 characters
> return math.log(10 ^ 10) / math.log(100)
5
There aren't 100 printable ASCII characters (there are only 95), so this is not going to work, as zkhr explained as well, unless you're willing to go beyond ASCII.
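If base 36 is acceptable (digits plus letters, which matches the "alphanumeric" requirement), here is a minimal C# sketch of the encoding:

using System;
using System.Text;

class Base36Demo
{
    // Encodes a non-negative number in base 36 using 0-9 and A-Z.
    // 36^7 > 10^10, so any 10-digit number fits in at most 7 characters.
    static string ToBase36(long value)
    {
        const string alphabet = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
        if (value == 0) return "0";
        var sb = new StringBuilder();
        while (value > 0)
        {
            sb.Insert(0, alphabet[(int)(value % 36)]);
            value /= 36;
        }
        return sb.ToString();
    }

    static void Main()
    {
        Console.WriteLine(ToBase36(9999999999));   // 4LDQPDR (7 characters)
        Console.WriteLine(ToBase36(123456789));    // 21I3V9
    }
}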
I found your question interesting (although I don't claim to know the answer). I googled a bit for you out of interest and found this, which may help you: http://dpatrickcaldwell.blogspot.com/2009/05/converting-decimal-to-hexadecimal-with.html