Here is another question.
My requirement for Track 2 (bit 35) is:
Z (Numeric) + X 'D' (BCD; half-byte representation of D [1101] as a field separator between the Primary Account Number and the Expiration Date), BCD. When there is an odd number of digits, pack the right-most half byte to '0'. Size: variable to 37, preceded by a one-byte BCD length indicator.
The default template definition has bit 35 as a two-byte-length ASCII field, so I am sure that is not it. I changed it to BcdVar(1, 37, Formatters.Bcd).
Taking a dummy track2 example of:
12345678901234567=9999999999999999999
I replaced the '=' separator with the integer value 0x0D, which translates to "13" (1101). We now have:
12345678901234567139999999999999999999
Does this make sense? I don't think this is the correct way of doing it.
You've run into a "feature" of OpenIso8583.Net. When you work with the field values using msg[3] = "123456", you must always work with the unpacked values.
For this track2 data, you need to build up the track 2 as 12345678901234567D9999999999999999999. Note the 'D' in the middle of the data as a separator.
Now, in your template, set field 35 to use a BCD formatter:
template[Bit._035_TRACK_2_DATA] = FieldDescriptor.BcdVar(2, 37, FieldValidators.Track2);
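To see what that produces on the wire, here is a minimal sketch of the packing step - a hypothetical helper for illustration, not the library's internal code:

static byte[] PackTrack2(string track2)
{
    // Odd number of digits: pack the right-most half byte to '0', per the spec.
    if (track2.Length % 2 != 0)
        track2 += "0";
    var packed = new byte[track2.Length / 2];
    for (int i = 0; i < packed.Length; i++)
    {
        int hi = Convert.ToInt32(track2[2 * i].ToString(), 16);     // 'D' packs as 1101
        int lo = Convert.ToInt32(track2[2 * i + 1].ToString(), 16);
        packed[i] = (byte)((hi << 4) | lo);
    }
    return packed;
}

Calling PackTrack2("12345678901234567D9999999999999999999") yields 19 bytes, with 0x7D in the middle where the separator sits.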
I sort of duplicated the question (Track2 in BCD - 'D' character).
Treating the field as Binary (with a BCD length indicator!!!) is a cute trick which might work. But still - there is no method:
public static FieldDescriptor.BinaryVar(..., ILengthFormatter lengthFormatter)
so instead of adding it (which should be done anyway, for cases of BinaryVar fields), one can add a:
public static FieldDescriptor.BcdVar(..., IFieldValidator validator)
and call:
msg[Bit._035_TRACK_2_DATA] = FieldDescriptor.BcdVar(2, 37, FieldValidators.Track2);
the 'D' will be treated as BCD - what do banks know...
Regarding the right-padding - I guess that's where the Adjuster comes in handy. Again, we need to add a static method with an Adjuster parameter, like this:
var setAdjuster = new LambdaAdjuster(setLambda: value => value.Length % 2 == 0 ? value : value.PadRight(value.Length + 1, '0'));
It's true - you can pad the value prior to setting the field, but that's not fun (we're geeks, aren't we?).
Regarding adding static methods to FieldDescriptor - I guess it's possible to use the generic
public static IFieldDescriptor Create(ILengthFormatter lengthFormatter, IFieldValidator fieldValidator, IFormatter formatter, Adjuster adjuster)
but I'm new to C# and would be glad to get confirmation regarding my theories.
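For what it's worth, here is a minimal sketch of how that generic call might be wired up, assuming the signature quoted above; the BcdVariableLengthFormatter name is hypothetical, standing in for whatever ILengthFormatter implementation fits:

template[Bit._035_TRACK_2_DATA] = FieldDescriptor.Create(
    new BcdVariableLengthFormatter(1, 37), // hypothetical: one-byte BCD length indicator, max 37
    FieldValidators.Track2,
    Formatters.Bcd,
    setAdjuster);                          // the LambdaAdjuster from above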
Thanks.
The following is the information which the TrueType font format documentation provides regarding the fields of the "Format 4: Segment mapping to delta values" subtable format, which may be used in the cmap font table (the one used for mapping character codes to glyph indices):
    Type    Name                     Description
1.  uint16  format                   Format number is set to 4.
2.  uint16  length                   This is the length in bytes of the subtable.
3.  uint16  language                 For requirements on use of the language field, see "Use of the language field in 'cmap' subtables" in this document.
4.  uint16  segCountX2               2 × segCount.
5.  uint16  searchRange              2 × (2**floor(log2(segCount)))
6.  uint16  entrySelector            log2(searchRange/2)
7.  uint16  rangeShift               2 × segCount - searchRange
8.  uint16  endCode[segCount]        End character code for each segment, last = 0xFFFF.
9.  uint16  reservedPad              Set to 0.
10. uint16  startCode[segCount]      Start character code for each segment.
11. int16   idDelta[segCount]        Delta for all character codes in segment.
12. uint16  idRangeOffset[segCount]  Offsets into glyphIdArray, or 0.
13. uint16  glyphIdArray[]           Glyph index array (arbitrary length).
(Note: I numbered the fields so as to allow referencing them.)
Most fields, such as 1. format, 2. length, 3. language and 9. reservedPad, are trivial basic info and understood.
The fields 4. segCountX2, 5. searchRange, 6. entrySelector and 7. rangeShift I see as a somewhat odd way to store precomputed values; basically they are only a redundant (implicit) way to store the number of segments, segCount. Those fields give me no major headache either.
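As a quick worked example of those precomputed values (segment count assumed for illustration): with segCount = 39, segCountX2 = 78; the largest power of two not exceeding 39 is 32, so searchRange = 2 × 32 = 64, entrySelector = log2(64/2) = 5, and rangeShift = 2 × 39 - 64 = 14.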
Lastly, there remain the fields that represent arrays. For each segment there is an entry in 8. endCode, 10. startCode, 11. idDelta and 12. idRangeOffset, and there may or may not be entries in 13. glyphIdArray. Those are the fields I still struggle to interpret correctly, and they are what this question is about.
To allow for a most helpful answer, let me quickly sketch my take on those fields:
Working segment by segment, each segment maps character codes from startCode to endCode to the indices of the font's glyphs (reflecting the order in which they appear in the glyf table).
having the character code as input
having the glyph index as output
the segment is determined by iterating through the segments, checking that the input value lies inside the range startCode to endCode.
with the segment thus found, the respective fields idRangeOffset and idDelta are determined as well.
idRangeOffset conveys a special meaning
case A) idRangeOffset being set to the special value 0 means that the output can be calculated from the input value (character code) and idDelta. (I think it is either glyphId = inputCharCode + idDelta or glyphId = inputCharCode - idDelta.)
case B) idRangeOffset being non-0, in which case something different happens; this is part of what I seek an answer about here.
With respect to case B) the documentation states:
If the idRangeOffset value for the segment is not 0, the mapping of
character codes relies on glyphIdArray. The character code offset from
startCode is added to the idRangeOffset value. This sum is used as an
offset from the current location within idRangeOffset itself to index
out the correct glyphIdArray value. This obscure indexing trick works
because glyphIdArray immediately follows idRangeOffset in the font
file. The C expression that yields the glyph index is:
glyphId = *(idRangeOffset[i]/2
+ (c - startCode[i])
+ &idRangeOffset[i])
which I think provides a way to map a continuous input range (hence "segment") to a list of values stored in the field glyphIdArray, possibly as a way to provide output values that cannot be computed via idDelta because they are unordered/non-consecutive. That, at least, is my reading of what the documentation itself describes as "obscure".
Because glyphIdArray[] follows idRangeOffset[] in the TrueType file, the code segment in question
glyphId = *(&idRangeOffset[i]
+ idRangeOffset[i]/2
+ c - startCode[i])
points to the memory address of the desired position in glyphIdArray[]. To elaborate on why:
&idRangeOffset[i] points to the memory address of idRangeOffset[i]
moving forward idRangeOffset[i] bytes (or idRangeOffset[i]/2 uint16's) brings you to the relevant section of glyphIdArray[]
c - startCode[i] is the position in glyphIdArray[] that contains the desired ID value
From here, in the event that this ID is not zero, you will add idDelta[i] to obtain the glyph number corresponding to c.
It is important to point out that *(&idRangeOffset[i] + idRangeOffset[i]/2 + (c - startCode[i])) is really pseudocode: it describes a position within the font file, not a value stored at some address in your program's memory.
In a more modern language without pointers, the above code segment translates to:
glyphIndexArray[i - segCount + idRangeOffset[i]/2 + (c - startCode[i])]
The &idRangeOffset[i] in the original code segment has been replaced by i - segCount (where segCount = segCountX2/2). This is because the range offset (idRangeOffset[i]/2) is relative to the memory address &idRangeOffset[i].
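Putting the two cases together, here is a hedged sketch in C# of the complete lookup, assuming the subtable has already been parsed into arrays named as in the table above:

static ushort GlyphIdForChar(ushort c, ushort[] endCode, ushort[] startCode,
                             short[] idDelta, ushort[] idRangeOffset, ushort[] glyphIdArray)
{
    int segCount = endCode.Length;
    for (int i = 0; i < segCount; i++)
    {
        if (c > endCode[i]) continue;       // segments are sorted by endCode
        if (c < startCode[i]) return 0;     // no segment covers c: missing glyph

        if (idRangeOffset[i] == 0)          // case A: arithmetic mapping
            return (ushort)((c + idDelta[i]) & 0xFFFF);

        // case B: &idRangeOffset[i] turns into (i - segCount) relative to glyphIdArray[0]
        int index = i - segCount + idRangeOffset[i] / 2 + (c - startCode[i]);
        ushort glyph = glyphIdArray[index];
        return glyph == 0 ? (ushort)0 : (ushort)((glyph + idDelta[i]) & 0xFFFF);
    }
    return 0;                               // glyph index 0 means "missing glyph"
}

Note that the idDelta arithmetic is modulo 65536 (hence the & 0xFFFF), and that in case B a glyphIdArray value of 0 is returned as-is, marking the character as unmapped.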
I have a function returning a SETOF record. The output can be seen in this picture:
[picture: the function's result set]
I have a range of boards of length 2.8 m through 4.9 m (ln28 through ln49 respectively). They have characteristics that set bits, as seen in the bincodes (9, 2049, 4097, etc.). For each given board length, I need to sum the number of boards for each bincode; e.g. in this case ln28 (bincode 4097) would be 3 + 17 + 14 = 34. The brdsource = 128 series is where I intend to store these values, so for the row with brdsource 128 and bincode 4097, I want to store 34 in ln28.
You will see that I have 0s in the ln28 values for all brdsource = 128. I have generated extra records as part of my SETOF records, and am trying to use a multidimensional array to add up the values and keep track of them, as seen above with the array Summary[boardlength 0-8][bincode 0-4].
Question 1 - I see that if I add 1 (for argument's sake; it could be any number) to an array location, it returns a null value (no error, just nothing in the table cell). However, if I first set the array location to 0 and then add 1, it works perfectly. How can an array defined as type integer hold a null value?
Question 2 - How do I add my respective record's (call it rc) board length count to the array? I.e. I want to do something like this:
if (rc.bincode = 4097) then Summary[0][2] := Summary[0][2] + rc.ln28;
and then, later on, when injecting this into my table (during the brdsource = 128 phase):
if (rc.bincode = 4097) then rc.ln28 := Summary[0][2];
Of course, I may be going about this in a completely unorthodox way (though to me SQL is just plain unorthodox, sigh). I have made attempts to sum all previous records based on the required conditions (e.g. using a CASE (WHEN ... END) statement), but I only proved what I already suspected: each returned record is simply a single row of data. There is just no means of accessing data in the previous record lines as returned by the function's FOR LOOP ... END LOOP.
A final note is that everything discussed here is occurring inside the function. I am not attempting to add records etc. to data returned by the function.
I am using PostgreSQL 9.2.9, compiled by Visual C++ build 1600, 64-bit. And yes I am aware this is an older version.
I'm doing an exercise from a textbook and the book is outdated, so I'm sort of figuring out how it fits into the new system as I go along. I've got the exact text, and it's returning
'Implicit conversion loses integer precision: 'time_t' (aka 'long') to 'unsigned int''.
The book is "Cocoa Programming for Mac OS X" by Aaron Hillegass, third edition and the code is:
#import "Foo.h"
#implementation Foo
-(IBAction)generate:(id)sender
{
// Generate a number between 1 and 100 inclusive
int generated;
generated = (random() % 100) + 1;
NSLog(#"generated = %d", generated);
// Ask the text field to change what it is displaying
[textField setIntValue:generated];
}
- (IBAction)seed:(id)sender
{
// Seed the randm number generator with time
srandom(time(NULL));
[textField setStringValue:#"Generator Seeded"];
}
#end
It's on the srandom(time(NULL)); line.
If I replace time with time_t, it comes up with another error message:
Unexpected type name 'time_t': unexpected expression.
I don't have a clue what either of them mean. A question I read with the same error was apparently something to do with 64- and 32- bit integers but, heh, I don't know what that means either. Or how to fix it.
Well, you really need to do some more reading so you understand what these things mean, but here are a few pointers.
When you (as in a human) count you normally use decimal numbers. In decimal you have 10 digits, 0 through 9. If you think of a counter, like on an electric meter or a car odometer, it has a fixed number of digits. So you might have a counter which can read from 000000 to 999999, this is a six-digit counter.
A computer represents numbers in binary, which has two digits 0 and 1. A Binary digIT is called a BIT. So thinking about the counter example above, a 32-bit number has 32 binary digits, a 64-bit one 64 binary digits.
Now if you have a 64-bit number and chop off the top 32 bits, you may change its value - if the value was just 1 then it will still be 1, but if it needs more than 32 bits to represent then the result will be a different number - just as truncating the decimal 9001 to 01 changes the value.
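As a quick illustration of that truncation (sketched here in C# for brevity; the arithmetic is identical in C):

long big = 4294967297L;       // 2^32 + 1: needs 33 binary digits
uint chopped = (uint)big;     // keep only the low 32 bits
Console.WriteLine(big);       // prints 4294967297
Console.WriteLine(chopped);   // prints 1 - the top bits are gone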
Your error:
Implicit conversion loses integer precision: 'time_t' (aka 'long') to 'unsigned int'
is saying you are doing just this - truncating a large number (long is a 64-bit signed integer type on your computer, though not on every computer) to a smaller one (unsigned int is a 32-bit unsigned - no negative values - integer type on your computer).
In your case the loss of precision doesn't really matter as you are using the number in the statement:
srandom(time(NULL));
This line is setting the "seed" - a value used to make sure each run of your program gets different random numbers. It is using the time as the seed; truncating it won't make any difference here, as the truncated value still changes from run to run. You can silence the warning by making the conversion explicit with a cast:
srandom((unsigned int)time(NULL));
But remember: if the value of an expression is important, such casts can produce mathematically incorrect results unless the value is known to be in range of the target type.
Now go read some more!
HTH
It's just a warning: you are assigning a 'long' to an 'unsigned int'.
The solution is simple. Just click the yellow warning icon in the gutter of the particular line where you are assigning that value; it will show a fix-it. Double-click the fix-it and it will apply the change automatically.
It will insert a cast to make the types in the expression match. But next time, try to keep in mind that the types you are assigning should be the same. Hope this helps.
I have a question about Marc Gravell's BookSleeve library.
I tried to understand how BookSleeve deals with Int64 values (I actually have billions of long values in Redis).
I used reflection to understand the overloads that set a long value.
// BookSleeve.RedisMessage
protected static void WriteUnified(Stream stream, long value)
{
    if (value >= 0L && value <= 99L)
    {
        int i = (int)value;
        if (i <= 9)
        {
            stream.Write(RedisMessage.oneByteIntegerPrefix, 0, RedisMessage.oneByteIntegerPrefix.Length);
            stream.WriteByte((byte)(48 + i));
        }
        else
        {
            stream.Write(RedisMessage.twoByteIntegerPrefix, 0, RedisMessage.twoByteIntegerPrefix.Length);
            stream.WriteByte((byte)(48 + i / 10));
            stream.WriteByte((byte)(48 + i % 10));
        }
    }
    else
    {
        byte[] bytes = Encoding.ASCII.GetBytes(value.ToString());
        stream.WriteByte(36);
        RedisMessage.WriteRaw(stream, (long)bytes.Length);
        stream.Write(bytes, 0, bytes.Length);
    }
    stream.Write(RedisMessage.Crlf, 0, 2);
}
I don't understand why, for an Int64 with more than two digits, the long is encoded in ASCII.
Why not use byte[]? I know that I can use the byte[] overloads to do this, but I just want to understand this implementation in order to optimize mine. There may be a relationship with the Redis storage.
Thank you in advance, Marc :)
P.S.: I'm still very enthusiastic about your next major version, so that I can use long keys instead of string keys.
It writes it in ASCII because that is what the redis protocol demands.
If you look carefully, it is always encoded as ASCII - but for the most common cases (0-9, 10-99) I've special-cased it, as these are very simple results:
x => $1\r\nX\r\n
xy => $2\r\nXY\r\n
where x and y are the first two digits of a number in the range 0-99, and X and Y are those digits (as numbers) offset by 48 ('0') - so decimal 17 becomes the byte sequence (in hex):
24-32-0D-0A-31-37-0D-0A
Of course, the same result could also be achieved by writing each digit sequentially, offsetting each digit value by 48 ('0') and handling the negative sign - I guess the answer there is simply "because I coded it the simple but obviously correct way". Consider the value -123, which is encoded as $4\r\n-123\r\n (hey, don't look at me - I didn't design the protocol). It is slightly awkward because it needs to calculate the buffer length first, then write that buffer length, then write the value - remembering to write in the order 100s, 10s, 1s (which is much harder than writing the other way around).
Perfectly willing to revisit it - simply: it works.
Of course, it becomes trivial if you have a scratch buffer available - you just write it in the simple order, then reverse the portion of the scratch buffer. I'll check to see if one is available (and if not, it wouldn't be unreasonable to add one).
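For the curious, a rough sketch of that scratch-buffer technique (not BookSleeve's actual code, just the approach described above):

// Writes the ASCII digits of value into scratch and returns the byte count.
// Digits come out lowest-first, so the used portion is reversed at the end.
static int FormatInt64(byte[] scratch, long value)
{
    int count = 0;
    bool negative = value < 0;
    ulong v = negative ? (ulong)(-value) : (ulong)value; // unchecked negation also covers long.MinValue
    do
    {
        scratch[count++] = (byte)('0' + (int)(v % 10));  // 1s first, then 10s, 100s...
        v /= 10;
    } while (v != 0);
    if (negative) scratch[count++] = (byte)'-';
    Array.Reverse(scratch, 0, count);                    // now reads "-123" forwards
    return count;
}

With the length known up front, the $-prefixed header can be written first and the buffer contents copied straight to the stream.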
I should also clarify: there is also the integer type, which would encode -123 as :-123\r\n - however, from memory there are a lot of places this simply does not work.
So, this should be an easy question for anyone who has used FORTH before, but I am a newbie trying to learn how to code in this language (and it is a lot different from C++).
Anyway, I'm just trying to create a variable in FORTH called "Height", and I want a user to be able to input a value for "Height" whenever a certain word "setHeight" is called. However, everything I try seems to fail, because I know neither how to set up the variable nor how to grab user input and put it into the variable.
VARIABLE Height 5 ALLOT
: setHeight 5 ACCEPT ATOI CR ;
I hope this is an easy problem to fix, and any help would be greatly appreciated.
Thank you in advance.
Take a look at the Rosetta Code input/output examples for string and number input in FORTH:
String Input
: INPUT$ ( n -- addr n )
PAD SWAP ACCEPT
PAD SWAP ;
Number Input
: INPUT# ( -- u true | false )
0. 16 INPUT$ DUP >R
>NUMBER NIP NIP
R> <> DUP 0= IF NIP THEN ;
A big point to remember for your self-edification: C++ is heavily typed, Forth is the complete opposite. Do you want Height to be a string, an integer, or a float, and is it signed or unsigned? Each has its own use cases. Whatever you choose, you must keep that chosen type in mind whenever you interact with the Height variable. Think about what your bits mean every time.
Judging by your ATOI call, I assume you want the value of Height as an integer. A 5-byte integer is unusual, though, so I'm still not certain. But here goes with that assumption:
VARIABLE Height 1 CELLS ALLOT
VARIABLE StrBuffer 7 ALLOT
: setHeight ( -- )
StrBuffer 8 ACCEPT
DECIMAL ATOI Height ! ;
The CELLS call makes sure you're creating a variable with the number of bits your CPU prefers. The DECIMAL call makes sure you didn't change to HEX somewhere along the way before your ATOI.
Creating the StrBuffer variable is one of numerous ways to get a scratch space for strings. Assuming your CELL is 16-bit, you will need a maximum of 7 characters for a zero-terminated 16-bit signed integer -- for example, "-32767\0". Some implementations have PAD, which could be used instead of creating your own buffer. Another common word is SCRATCH, but I don't think it works the way we want.
If you stick with creating your own string buffer space, which I personally like because you know exactly how much space you got, then consider creating one large buffer for all your words' string handling needs. For example:
VARIABLE StrBuffer 201 ALLOT
This also keeps you from having to make the 16-bit CELL assumption, as 200 characters easily accommodates a 64-bit signed integer, in case that's your implementation's CELL size now or some day down the road.