ISO 8583 AMEX Card Issue

We have implemented ISO 8583. Every transaction works except keyed transactions for AMEX cards.
The document states:
The Track 2 Data and Primary Account Number (PAN) fields are instances of numeric data elements that follow a different format: In the case where the variable length data has an odd number of digits, set the right-most half byte to X '0'.
But when we pad the AMEX card number with a 0, we get an INVALID CARD NUMBER response.
If we send the 15-digit card number, no response at all is received.
Elsewhere in the document it is mentioned:
Bitmap 2 — Primary Account Number Field Name Description
Variable up to 19 digits (if needed, last ½ byte padded-binary zero), preceded by 1-byte Length Indicator.
Comments
This field identifies the card member's account number. Unlike most numeric fields, the Primary Account Number is left-justified. In this case, the rightmost byte is padded with a ½ byte binary zero (e.g., a three-position field, X '03 12 30').
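For concreteness, here is a sketch of how we read that packing rule. pack_pan is a hypothetical helper, it assumes a binary (not BCD) length indicator, and the card number below is made up:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Pack an ASCII digit string into left-justified BCD, preceded by a
   1-byte binary length indicator, padding the last half byte with 0
   when the digit count is odd (e.g. "123" -> 03 12 30). */
static size_t pack_pan(const char *digits, uint8_t *out)
{
    size_t n = strlen(digits);
    size_t o = 0;
    out[o++] = (uint8_t)n;                 /* 1-byte length indicator */
    for (size_t i = 0; i < n; i += 2) {
        uint8_t hi = (uint8_t)(digits[i] - '0');
        uint8_t lo = (i + 1 < n) ? (uint8_t)(digits[i + 1] - '0') : 0; /* pad nibble */
        out[o++] = (uint8_t)(hi << 4 | lo);
    }
    return o;
}

int main(void)
{
    uint8_t buf[16];
    /* made-up 15-digit AMEX-style number, for illustration only */
    size_t len = pack_pan("373953192351004", buf);
    for (size_t i = 0; i < len; i++)
        printf("%02X ", buf[i]);           /* 0F 37 39 53 19 23 51 00 40 */
    printf("\n");
    return 0;
}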
Is there anything special that we need to do for odd-digit card numbers?

Related

What is Boundary Value Analysis?

In an eCommerce website, the first name text field accepts a maximum of 30 characters. What are the BVA values?
Boundary values are the thresholds where a program behaves differently.
For instance, suppose I am sending messages. Each message must be split into packets of between 1 and 100 bytes. If the message is 100 bytes then one packet should be sent. If the message is 101 bytes then two packets should be sent. There is a boundary between 100 and 101 bytes. Both lengths should be tested.
Bugs typically happen at boundary values because that is where off-by-one errors and similar problems exist. In the message example above, you might find that a 100 byte message gets sent as two packets, or a 101 byte message drops the last byte, because of some subtle bug in the packet logic. This might happen even though shorter or longer messages get sent correctly, which is why it is important to concentrate on boundary values when testing.
Boundary Value Analysis is the process of identifying boundary values, both by inspecting the specification and inspecting the code. Look for conditions on things like length and range. For instance, if a field must be all letters then check values like 'a' (0x61), '`' (backtick, 0x60), 'z' (0x7a) and '{' (0x7b). If a field must not be more than 30 characters, check 30 and 31. And so on.
(Also, don't forget that Unicode is a thing: check non-English and non-Latin characters too).
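To make this concrete for the 30-character first-name field, here is a minimal sketch in C. first_name_valid is an invented stand-in for whatever validation the site actually performs:

#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Invented stand-in for the real validator: accepts 1..30 characters. */
static bool first_name_valid(const char *s)
{
    size_t n = strlen(s);
    return n >= 1 && n <= 30;
}

int main(void)
{
    char s29[30], s30[31], s31[32];
    memset(s29, 'a', 29); s29[29] = '\0';
    memset(s30, 'a', 30); s30[30] = '\0';
    memset(s31, 'a', 31); s31[31] = '\0';

    assert(!first_name_valid(""));   /* 0: just below the lower boundary */
    assert(first_name_valid("a"));   /* 1: the lower boundary itself */
    assert(first_name_valid(s29));   /* 29: just inside the upper boundary */
    assert(first_name_valid(s30));   /* 30: the upper boundary itself */
    assert(!first_name_valid(s31));  /* 31: just past the upper boundary */
    return 0;
}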

What does this notation mean in TLS documentation?

For example, looking at RFC 7301, which defines ALPN:
enum {
    application_layer_protocol_negotiation(16), (65535)
} ExtensionType;
The (16) is the enum value to be used, but how should I read the (65535) part?
From the same document:
opaque ProtocolName<1..2^8-1>;

struct {
    ProtocolName protocol_name_list<2..2^16-1>
} ProtocolNameList;
...how should I read the <1..2^8-1> and <2..2^16-1> parts?
The notation is described in https://www.rfc-editor.org/rfc/rfc8446.
For "enumerateds" (enums), see https://www.rfc-editor.org/rfc/rfc8446#section-3.5, which says that the value in brackets is the value of that enum member, and that the enum occupies as many octets as required by the highest documented value.
Thus, if you want to leave some room, you need an un-named enum member with a sufficiently high value.
One may optionally specify a value without its associated tag to force the width definition without defining a superfluous element.
In the following example, Taste will consume two bytes in the data stream but can only assume the values 1, 2, or 4.
enum { sweet(1), sour(2), bitter(4), (32000) } Taste;
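In other words, ExtensionType above occupies exactly two octets on the wire, because the padding value 65535 needs two. A minimal sketch of that serialization (big-endian, as TLS uses):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* The padding value (65535) forces a two-octet width, so every
       member of ExtensionType is serialized as exactly two bytes. */
    uint16_t ext = 16;  /* application_layer_protocol_negotiation(16) */
    uint8_t wire[2] = { (uint8_t)(ext >> 8), (uint8_t)(ext & 0xff) };
    printf("%02x %02x\n", wire[0], wire[1]);  /* prints: 00 10 */
    return 0;
}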
For vectors, see https://www.rfc-editor.org/rfc/rfc8446#section-3.4. This says:
Variable-length vectors are defined by specifying a subrange of legal lengths, inclusively, using the notation <floor..ceiling>. When these are encoded, the actual length precedes the vector's contents in the byte stream. The length will be in the form of a number consuming as many bytes as required to hold the vector's specified maximum (ceiling) length.
So the notation <1..2^8-1> means that ProtocolName must be at least one octet, and up to 255 octets in length.
Similarly <2..2^16-1> means that protocol_name_list must have at least 2 octets (not entries), and can have up to 65535 octets (not entries).
In this particular case, the minimum of 2 octets is because it must contain at least one entry, which is itself at least 2 octets long (u8 length prefix, at least one octet in the value).
To make the octets/entries distinction clear, later in that section, it says:
uint16 longer<0..800>;
/* zero to 400 16-bit unsigned integers */
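Putting both rules together, here is a sketch of hand-encoding a ProtocolNameList containing "h2" and "http/1.1" (illustrative only; buffer sizing and error checks omitted):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Each ProtocolName<1..2^8-1> carries a 1-byte length; the outer
   protocol_name_list<2..2^16-1> carries a 2-byte length counting
   octets, not entries. */
int main(void)
{
    const char *names[] = { "h2", "http/1.1" };
    uint8_t buf[64];
    size_t o = 2;                          /* skip 2 bytes for the list length */

    for (size_t i = 0; i < 2; i++) {
        size_t n = strlen(names[i]);
        buf[o++] = (uint8_t)n;             /* 1-byte length (ceiling 2^8-1) */
        memcpy(buf + o, names[i], n);
        o += n;
    }
    uint16_t list_len = (uint16_t)(o - 2); /* 12 octets here, not 2 entries */
    buf[0] = (uint8_t)(list_len >> 8);     /* 2-byte length (ceiling 2^16-1) */
    buf[1] = (uint8_t)(list_len & 0xff);

    for (size_t i = 0; i < o; i++)
        printf("%02x ", buf[i]);
    printf("\n"); /* 00 0c 02 68 32 08 68 74 74 70 2f 31 2e 31 */
    return 0;
}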

How to correctly understand TrueType cmap's subtable Format 4?

The following is the information which the TrueType font format documentation provides regarding the fields of the "Format 4: Segment mapping to delta values" subtable format, which may be used in the cmap font table (the one used for mapping character codes to glyph indices):
Type Name Description
1. uint16 format Format number is set to 4.
2. uint16 length This is the length in bytes of the subtable.
3. uint16 language For requirements on use of the language field, see “Use of the language field in 'cmap' subtables” in this document.
4. uint16 segCountX2 2 × segCount.
5. uint16 searchRange 2 × (2**floor(log2(segCount)))
6. uint16 entrySelector log2(searchRange/2)
7. uint16 rangeShift 2 × segCount - searchRange
8. uint16 endCode[segCount] End characterCode for each segment, last=0xFFFF.
9. uint16 reservedPad Set to 0.
10. uint16 startCode[segCount] Start character code for each segment.
11. int16 idDelta[segCount] Delta for all character codes in segment.
12. uint16 idRangeOffset[segCount] Offsets into glyphIdArray or 0
13. uint16 glyphIdArray[ ] Glyph index array (arbitrary length)
(Note: I numbered the fields as to allow referencing them)
Most fields, such as 1. format, 2. length, 3. language, and 9. reservedPad, are trivial basic info and understood.
The fields 4. segCountX2, 5. searchRange, 6. entrySelector, and 7. rangeShift I see as a somewhat odd way to store precomputed values, basically just a redundant (implicit) way to store the number of segments, segCount. Those fields give me no major headache either.
Lastly, there remain the fields that represent arrays. For each segment there is a field 8. endCode, 10. startCode, 11. idDelta, and 12. idRangeOffset, and there may or may not be a field 13. glyphIdArray. Those are the fields I still struggle to interpret correctly, and they are what this question is about.
To allow for the most helpful answer, let me quickly sketch my take on those fields:
Working segment by segment, each segment maps character codes from startCode to endCode to the indices of the font's glyphs (reflecting the order in which they appear in the glyf table).
having the character code as input
having the glyph index as output
the segment is determined by iterating through the segments, checking that the input value lies inside the range startCode to endCode
with the segment thus found, its respective idRangeOffset and idDelta fields are determined as well
idRangeOffset conveys a special meaning
case A) idRangeOffset being set to the special value 0 means that the output can be calculated from the input value (character code) and idDelta. (I think it is either glyphId = inputCharCode + idDelta or glyphId = inputCharCode - idDelta.)
case B) idRangeOffset not being 0 means something different happens, which is part of what I seek an answer about here.
With respect to case B) the documentation states:
If the idRangeOffset value for the segment is not 0, the mapping of
character codes relies on glyphIdArray. The character code offset from
startCode is added to the idRangeOffset value. This sum is used as an
offset from the current location within idRangeOffset itself to index
out the correct glyphIdArray value. This obscure indexing trick works
because glyphIdArray immediately follows idRangeOffset in the font
file. The C expression that yields the glyph index is:
glyphId = *(idRangeOffset[i]/2
            + (c - startCode[i])
            + &idRangeOffset[i])
which I think provides a way to map a continuous input range (hence "segment") to a list of values stored in the field glyphIdArray, possibly as a way to provide output values that cannot be computed via idDelta because they are unordered/non-consecutive. This, at least, is my reading of what the documentation describes as "obscure".
Because glyphIdArray[] follows idRangeOffset[] in the TrueType file, the code segment in question
glyphId = *(&idRangeOffset[i]
+ idRangeOffset[i]/2
+ c - startCode[i])
points to the memory address of the desired position in glyphIdArray[]. To elaborate on why:
&idRangeOffset[i] points to the memory address of idRangeOffset[i]
moving forward idRangeOffset[i] bytes (or idRangeOffset[i]/2 uint16's) brings you to the relevant section of glyphIdArray[]
c - startCode[i] is the position in glyphIdArray[] that contains the desired ID value
From here, in the event that this ID is not zero, you will add idDelta[i] to obtain the glyph number corresponding to c.
It is important to point out that *(&idRangeOffset[i] + idRangeOffset[i]/2 + (c - startCode[i])) is really pseudocode: the pointer arithmetic describes an address within the font file, not a value stored in your program's memory.
In a more modern language without pointers, the above code segment translates to:
glyphIdArray[i - segCount + idRangeOffset[i]/2 + (c - startCode[i])]
The &idRangeOffset[i] in the original code segment has been replaced by i - segCount (where segCount = segCountX2/2): the range offset (idRangeOffset[i]/2) is relative to the memory address &idRangeOffset[i], and since glyphIdArray[] begins immediately after the segCount entries of idRangeOffset[], that address corresponds to index i - segCount from the start of glyphIdArray[].

EMV tag 0x9F37 unpredictable number length

I have noticed that in some EMV transactions the length of tag 9F37 (TAG_UNPREDICTABLE_NUMBER) is not 4 bytes. It is a read-only tag, so I cannot set it. Can someone explain whether it must be 4 bytes, or whether it can be any length up to 4 bytes? Please also explain how this number is generated and what determines its length.
As the name denotes, it should not be predictable by any means, and you can use any random number generation algorithm to create a value, whether you are developing a card application or a terminal app, as explained below.
One unpredictable number is used during Offline Enciphered PIN verification to ensure that the PIN block generated is different every time. It is generated by the chip, and its length is 8 bytes. You will not see this unpredictable number at the host; you will need a tool like FIME Smartspy or Keolab Nomadlab to get the value.
The other is the unpredictable number generated by the terminal, which is used in cryptogram generation, ensuring that a different cryptogram is generated every time even when all other CDOL elements are the same. Its length is 4 bytes.
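For the terminal side, a minimal sketch of producing and TLV-encoding a 4-byte 9F37 value. rand() is only a placeholder; a real terminal would draw from a hardware RNG or an approved DRBG:

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

int main(void)
{
    uint8_t un[4];
    for (int i = 0; i < 4; i++)
        un[i] = (uint8_t)(rand() & 0xFF);  /* placeholder entropy source */

    /* TLV encoding: tag 9F37, length 04, then the 4 value bytes */
    printf("9F 37 04 %02X %02X %02X %02X\n", un[0], un[1], un[2], un[3]);
    return 0;
}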

Value of NSUInteger and NaN?

Why is the maximum value of NSUInteger 2^32 - 1 instead of 2^32? Is there a relationship between this fact and the need for a NaN value? This is so confusing.
Count to 10 on your fingers. Really :)
The standard way to count to 10 is 1,2,3,..10 (the ordinality of each finger is counted). However, what about "0 fingers"?
Normally you might represent that by putting your hands behind your back, but that adds another piece of information to the system: are your hands in front (present) or behind (missing)?
In this case, putting your hands behind your back would be equivalent to assigning nil to an NSNumber variable. However, NSUInteger represents a native integer type, which does not have this extra state and must still encode 0 to be useful.
The key to encoding the value 0 on your fingers is simply to count 0,1,2..9 instead. The same number of fingers (or bits of information) is available, but now the useful 0 can be accounted for .. at the expense of not having a 10 value (there are still 10 fingers, but the 10th finger only represents the value 9). This is the same reason why unsigned integers have a maximum value of 2^n - 1 and not 2^n: it allows 0 to be encoded with maximum efficiency.
Now, NaN is not a typical integer value, but rather comes from floating point encodings - think of float or CGFloat. One such common encoding is IEEE 754:
In computing, NaN, standing for not a number, is a numeric data type value representing an undefined or unrepresentable value, especially in floating-point calculations ..
2^32 - 1 because counting starts from 0 for bits. If it's easier, think of it as 2^32 - 2^0.
It is the largest value a 32-bit unsigned integer variable can hold. Add one to that, and it will wrap around to zero.
The reason for that is that the smallest unsigned number is zero, not one. Think of it: the largest number you can fit into four decimal places is 9999, not 10000. That's 10^4-1.
You cannot store 2^32 in 4 bytes, but if you subtract one then it fits (the result is 0xffffffff).
It is exactly the same reason why the odometer in your car shows a maximum of 999999 mi/km (assuming 6 digits): there are 10^6 possible values, 0 through 10^6 - 1, but it cannot show 10^6 itself.
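A quick sketch of that wraparound, with uint32_t standing in for a 32-bit NSUInteger:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    uint32_t max = UINT32_MAX;         /* 0xFFFFFFFF == 2^32 - 1 */
    printf("%" PRIu32 "\n", max);      /* 4294967295 */
    printf("%" PRIu32 "\n", max + 1);  /* wraps around to 0 */
    return 0;
}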