Character level exchange in the USB smart card class (CCID) specification

In the USB smart card class standard there are 3 levels of exchange: 1) APDU level, 2) TPDU level, 3) character level. What is the difference between these levels? Also, what exactly does "character level" mean?
Sorry for my bad English writing.

There is progressively less formal control defined in the specification as one moves down the levels, from APDU through TPDU to character level exchanges: at character level the CCID does the least processing, and the host drives the protocol itself.
From the spec (CCID Rev 1.1), § 3.2.3:
Character level of exchanges is selected when none of the TPDU, Short APDU or Short and extended APDU is selected.
The CCID sends the characters in the command (maybe none) then waits for the number of characters (if not null) indicated in the command.
For character level exchange between the host and the CCID, the CCID supports asynchronous character communication with the ICC as per ISO/IEC 7816-3 § 6.3, including
timings defined in ISO/IEC 7816-3 § 8.2 for T = 0 and in ISO 7816-3 § 9.3 for T = 1. To respect timing the CCID shall use the defined parameters.
The CCID implements the character frame and character repetition procedure when T = 0 is selected.
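To make "the CCID sends the characters in the command then waits for the indicated number of characters" concrete, here is a minimal Python sketch of building the PC_to_RDR_XfrBlock bulk-out message a host would use at character level. The header layout follows CCID spec § 6.1.4; treating wLevelParameter as the expected character count at character level is this sketch's reading of that section.

```python
import struct

def xfr_block_char_level(slot, seq, chars_out, chars_expected):
    """Build a PC_to_RDR_XfrBlock for a character-level CCID.

    Header layout per CCID spec 6.1.4: bMessageType (0x6F),
    dwLength (little-endian), bSlot, bSeq, bBWI, wLevelParameter.
    At character level, wLevelParameter carries the number of
    characters the CCID should wait for from the ICC.
    """
    header = struct.pack("<BIBBBH",
                         0x6F,            # PC_to_RDR_XfrBlock
                         len(chars_out),  # dwLength: size of abData
                         slot, seq,
                         0x00,            # bBWI
                         chars_expected)  # expected response characters
    return header + bytes(chars_out)

# Example: send a T=0 command header (5 characters) and wait for
# one procedure byte from the card.
msg = xfr_block_char_level(0, 1, b"\x00\xA4\x04\x00\x00", 1)
```

The host, not the reader, then has to interpret the procedure byte and issue the next character-level transfer, which is exactly the "least automation" property of this level.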
This is all part of defining the nature of exchange:
3.2 Protocol and parameters selection
A CCID announces in dwFeatures Table 5.1-1 one level of exchanges with the host, TPDU, APDU (Short and Extended), or Character.
TPDU is the first of the exchanges specified, and APDU is the second.
3.2.1 TPDU level of exchange
For TPDU level exchanges, the CCID provides the transportation of host’s TPDU to the ICC’s TPDU. The TPDU format changes according to the protocol or for PPS exchange.
TPDU for PPS exchange has the following format:
Command TPDU:
FF PPS0 PPS1 PPS2 PPS3 PCK, with PPS1, PPS2, PPS3 optional [ISO/IEC 7816-3 § 7].
Response TPDU:
FF PPS0_R PPS1_R PPS2_R PPS3_R PCK_R, with PPS1_R, PPS2_R, PPS3_R optional [ISO/IEC 7816-3 § 7.4].
The CCID implements and verifies timings and protocol according to its parameters settings to assume ISO/IEC 7816-3 §7.1, §7.2. No check on frame format is mandatory on request, and on response the only recommended analysis is the most significant nibble of PPS0_R to compute the number of bytes left to receive.
A CCID that implements automatic PPS should not accept TPDU for PPS exchange and must check for PPS response validity.
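The PPS frame format quoted above can be sketched in code. Per ISO/IEC 7816-3 § 7.1, PCK is chosen so that the exclusive-OR of all PPS frame bytes, PPSS through PCK, is zero. A minimal Python sketch (the PPS0/PPS1 values are illustrative):

```python
def build_pps_request(pps0, *optional):
    """Build a PPS request frame per ISO/IEC 7816-3 § 7.
    PCK is the XOR of every byte from PPSS (FFh) through the last
    optional PPSx byte, so the XOR over the whole frame is zero."""
    frame = [0xFF, pps0, *optional]
    pck = 0
    for b in frame:
        pck ^= b
    return bytes(frame + [pck])

# PPS0 = 0x11: bit 5 announces that PPS1 is present, the low nibble
# selects T=1; PPS1 = 0x18 proposes Fi/Di values (example only).
req = build_pps_request(0x11, 0x18)
```

This is also the check a CCID with automatic PPS performs on the response: XOR all received bytes and verify the result is zero.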
T = 0 TPDU can have three formats [ISO/IEC 7816-3, § 8.3.2]:
- Form 1, no data to exchange with ICC, only header:
Command TPDU = CLA INS P1 P2, the CCID is responsible for adding P3 = 00h. Response TPDU = SW1 SW2
- Form 2, data expected from ICC:
Command TPDU = CLA INS P1 P2 Le, Le=P3 from 00h to FFh (00h means 100h)
Response TPDU = Data(Le) SW1 SW2, Data(Le) is for the Le data received from the ICC or empty if ICC rejects the command.
- Form 3, data are to be sent to the ICC:
Command TPDU = CLA INS P1 P2 Lc Data(Lc), Lc=P3 from 01h to FFh and Data(Lc) for the Lc data to send to the ICC.
Response TPDU = SW1 SW2
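The three forms above can be summarised in a short sketch. This is a hedged Python illustration of the mapping, not CCID code (note that in Form 1 it is normally the CCID, not the host, that supplies P3 = 00h):

```python
def t0_command_tpdu(cla, ins, p1, p2, data=b"", le=None):
    """Sketch of the three T=0 command TPDU forms.
    Form 1: header only, P3 forced to 00h.
    Form 2: header + Le (00h means 100h = 256 bytes expected).
    Form 3: header + Lc + data.
    A single T=0 TPDU cannot carry both Lc data and Le."""
    header = bytes([cla, ins, p1, p2])
    if data and le is not None:
        raise ValueError("T=0 TPDU cannot carry both Lc data and Le")
    if data:                      # Form 3
        return header + bytes([len(data)]) + data
    if le is not None:            # Form 2 (le == 256 encoded as 00h)
        return header + bytes([le & 0xFF])
    return header + b"\x00"       # Form 1, P3 = 00h

# Form 2: GET RESPONSE expecting up to 256 bytes from the card
tpdu = t0_command_tpdu(0x00, 0xC0, 0x00, 0x00, le=256)
```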
The CCID, for T=0 TPDU, is in charge of managing procedure bytes (ISO/IEC 7816-3 § 8.3.3) and the character level [ISO/IEC 7816-3 § 8.2].
The procedure bytes are not mapped into the response TPDU except for the SW1 SW2 bytes. The CCID implements and verifies timings according to its
parameters settings to assume ISO/IEC 7816-3 § 8.2 (work waiting time, extra guard time, ...). If ICC uses NULL procedure byte (60h) the CCID informs the host of this request for time extension.
T = 1 TPDU command and response use the frame format [ISO/IEC 7816-3 § 9.4]. The CCID expects the character frame [ISO/IEC 7816-3 § 9.4.1] to be respected, but no check on frame format is mandatory on sending or receiving. The only recommended checks are:
- Expecting LEN byte as third byte
- Wait for LEN bytes as INF field.
- Wait for an EDC field whose length complies with parameter bmTCCKST1 (see § 6.1.7).
The CCID implements and verifies timing according to its parameters settings to assume ISO/IEC 7816-3 § 9.5.3 (CWT, BWT, BGT, ...).
The detection of parity error on character received is optional. The interpretation of first bytes received as NAD and PCB to manage VPP is optional and depends on CCID capabilities.
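A small sketch of the T=1 block layout may help here, assuming LRC (a one-byte XOR checksum) as the EDC, which is the default error-detection code for T=1:

```python
def t1_block(nad, pcb, inf=b""):
    """Build a T=1 block (prologue + INF + EDC), assuming LRC as the
    EDC as selected by bmTCCKST1. The LRC is the XOR of all bytes
    from NAD through the end of INF (ISO/IEC 7816-3 § 9.4)."""
    prologue = bytes([nad, pcb, len(inf)])
    lrc = 0
    for b in prologue + inf:
        lrc ^= b
    return prologue + bytes(inf) + bytes([lrc])

# An I(0,0) block carrying two INF bytes
blk = t1_block(0x00, 0x00, b"\xA0\xA4")
```

The "expect LEN as third byte, then LEN INF bytes, then the EDC" checks recommended above fall straight out of this layout.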
3.2.2 APDU level of exchange
For APDU level exchanges, the CCID provides the transportation of host’s APDU to ICC’s TPDU.
APDU commands and responses are defined in ISO 7816-4.
Two APDU levels are defined, short APDU and extended APDU (ISO/IEC 7816-4 § 5.3.2).
A CCID that indicates a short APDU exchange only accepts short APDU. A CCID that indicates an extended APDU exchange accepts both short APDU and extended APDU.
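As an illustration of the short/extended distinction, here is a hedged Python sketch of a case-2 command APDU (response data expected, no command data) in both encodings, following ISO/IEC 7816-4:

```python
def encode_case2_apdu(cla, ins, p1, p2, ne, extended=False):
    """Encode a case-2 command APDU in short or extended form.
    Short: Le is one byte, 00h meaning 256.
    Extended: Le is 00h followed by two big-endian bytes,
    with 0000h meaning 65536."""
    header = bytes([cla, ins, p1, p2])
    if extended:
        return header + b"\x00" + (ne & 0xFFFF).to_bytes(2, "big")
    if not 1 <= ne <= 256:
        raise ValueError("short Le encodes at most 256 bytes")
    return header + bytes([ne & 0xFF])

short = encode_case2_apdu(0x00, 0xB0, 0x00, 0x00, 256)         # 5 bytes
ext = encode_case2_apdu(0x00, 0xB0, 0x00, 0x00, 1000, True)    # 7 bytes
```

A short-APDU-only CCID would accept the first form but not the second, which is why an extended-APDU CCID must accept both.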
If the ICC requests time extension, by using a NULL procedure byte (60h) in T=0 protocol or S(WTX) in T=1 protocol, the CCID informs the host of this request.
A CCID supporting the APDU level of exchanges implements a high level of automatism in ICC communications. It shall also provide a high level of automatism in ATR treatment and implement one of the following automatisms: automatic parameters negotiation (proprietary algorithm), or automatic PPS according to the current parameters. At least two standards of transportation for APDU are defined, ISO/IEC 7816-4 and EMV 3.1.1; which standard to implement is out of the scope of this specification.

Related

MPEG-TS pointer_field max value

What is the max value for the pointer_field (ISO/IEC 13818-1 § 2.4.4.1) in the MPEG-2 standard?
I am writing my own C# library for parsing TS files and found this:
As we can see here, the pointer_field for this table is 0xB5 bytes. The EIT table header begins with 0x4E 0xF2 but ends in another table, and I can't get the EIT section length for this table.
PS: I get this EIT stream from the Eutelsat 36B satellite.
It is an 8-bit field, so the max value would be 255.
Reading ISO/IEC 13818-1 2.4.4.1:
pointer_field – This is an 8-bit field whose value shall be the number
of bytes, immediately following the pointer_field until the first byte
of the first section that is present in the payload of the Transport
Stream packet (so a value of 0x00 in the pointer_field indicates that
the section starts immediately after the pointer_field). When at least
one section begins in a given Transport Stream packet, then the
payload_unit_start_indicator (refer to 2.4.3.2) shall be set to 1 and
the first byte of the payload of that Transport Stream packet shall
contain the pointer. When no section begins in a given Transport
Stream packet, then the payload_unit_start_indicator shall be set to 0
and no pointer shall be sent in the payload of that packet.
The rest of your EIT table is contained in the following packet.
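The rule quoted above can be sketched in a few lines of Python. This illustration ignores the adaptation field for brevity; a real parser must skip it before reading the pointer_field:

```python
def first_section_offset(ts_packet):
    """Locate the start of the first PSI/SI section in a 188-byte
    TS packet, per ISO/IEC 13818-1 § 2.4.4.1. Assumes no adaptation
    field. Returns None when payload_unit_start_indicator is 0,
    since no pointer_field is present in that case."""
    assert len(ts_packet) == 188 and ts_packet[0] == 0x47
    pusi = (ts_packet[1] >> 6) & 0x01
    if not pusi:
        return None
    payload = 4                    # first byte after the 4-byte TS header
    pointer_field = ts_packet[payload]
    # bytes between the pointer_field and the section belong to the
    # previous section, continued from earlier packets
    return payload + 1 + pointer_field

pkt = bytes([0x47, 0x40, 0x12, 0x10, 0x00]) + bytes(183)
off = first_section_offset(pkt)   # pointer_field 0 -> section at byte 5
```

In your capture, the 0xB5 bytes skipped by the pointer_field are the tail of a previous section, and a section that starts near the end of a packet simply continues in the next packet with the same PID (with payload_unit_start_indicator set to 0).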

How to send/receive variable length protocol messages independently of the transmission layer

I'm writing a very specific application protocol to enable communication between 2 nodes. Node 1 is an embedded platform (a microcontroller), while node 2 is a common computer.
Such protocol defines messages of variable length. This means that sometimes node 1 sends a message of 100 bytes to node 2, while another time it sends a message of 452 bytes.
Such a protocol shall be independent of how the messages are transmitted. For instance, the same message can be sent over USB, Bluetooth, etc.
Let's assume that a protocol message is defined as:
| Length (4 bytes) | ...Payload (variable length)... |
I'm struggling with how the receiver can recognise how long the incoming message is. So far, I have thought about 2 approaches.
1st approach
The sender sends the length first (4 bytes, always fixed size), and the message afterwards.
For instance, the sender does something like this:
// assuming that the parameters of send() are: data, length of data
send(msg_length, 4)
send(msg, msg_length - 4)
While the receiver side does:
msg_length = receive(4)
msg = receive(msg_length - 4)  // the length field counts the whole message, header included
This may be ok with some "physical protocols" (e.g. UART), but with more complex ones (e.g. USB) transmitting the length with a separate packet may introduce some overhead. The reason being that an additional USB packet (with control data, ACK packets as well) is required to be transmitted for only 4 bytes.
However, with this approach the receiver side is pretty simple.
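One caveat with receive(n) as written: on a stream transport a single call may return fewer than n bytes, so a real receiver has to loop. A Python sketch of the first approach, assuming a hypothetical 4-byte big-endian length prefix that counts the whole message including itself:

```python
import socket
import struct

def recv_exact(sock, n):
    """Read exactly n bytes, looping because a stream transport may
    deliver fewer bytes per call than requested."""
    buf = bytearray()
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return bytes(buf)

def recv_message(sock):
    # 4-byte big-endian length, counting the length field itself
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length - 4)
```

Note that the sender can also concatenate length and payload into a single send() call; the two logical fields do not have to become two transport packets, which removes most of the USB overhead concern.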
2nd approach
The alternative would be that the receiver keeps receiving data into a buffer, and at some point tries to find a valid message. Valid means: finding the length of the message first, and then its payload.
Most likely this approach requires adding some "start message" byte(s) at the beginning of the message, such that the receiver can use them to identify where a message is starting.
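A Python sketch of this second approach, using a hypothetical two-byte start marker (0xAA 0x55) and a 4-byte big-endian length: the parser drops garbage until it finds the marker and keeps any incomplete tail for the next read.

```python
import struct

MAGIC = b"\xAA\x55"   # hypothetical start-of-message marker

def extract_messages(buf):
    """Scan a bytearray for complete frames; return the payloads
    found and leave any incomplete tail in buf for the next call."""
    msgs = []
    while True:
        start = buf.find(MAGIC)
        if start < 0:
            del buf[:max(0, len(buf) - 1)]   # keep 1 byte: magic may be split
            return msgs
        del buf[:start]                      # drop garbage before the magic
        if len(buf) < 6:
            return msgs                      # header not complete yet
        (length,) = struct.unpack(">I", buf[2:6])
        if len(buf) < 6 + length:
            return msgs                      # payload not complete yet
        msgs.append(bytes(buf[6:6 + length]))
        del buf[:6 + length]

buf = bytearray(b"\x00\xAA\x55\x00\x00\x00\x02hi\xAA")
msgs = extract_messages(buf)   # -> [b"hi"], buf keeps the trailing \xAA
```

The marker also gives the receiver a way to resynchronise after corruption, which the pure length-prefix scheme cannot do; in practice a checksum per frame is usually added as well.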

WEP: response computation for shared key authentication

After a very long search on the web, I'm still not able to find any code/algorithm that shows how shared-key authentication works in WEP, and in particular how the response is computed.
The general concept is clear:
- The mobile station (MS) sends a connect request to the access point (AP).
- The AP replies with a challenge.
- The MS encrypts this challenge (to prove it holds the shared key) and sends it back to the AP.
- The AP verifies the ciphertext and allows access.
Now:
The challenge is 128 bytes.
How is the response computed? When opening the traffic in Wireshark, the response is usually 136 bytes, meaning that the encryption covers something more than the challenge alone.
This should be something like:
RC4 ( IV + challenge + CRC32(challenge))
Where can I verify if this expression is the correct one?
Furthermore:
the IV is 6 hex digits (so 3 bytes), meaning that maybe there is an extension of one byte. How is this extra byte computed?
the challenge is 128 bytes
is the CRC-32 computed on the challenge text only? Does it include also the IV?
Could you please refer to any official document where I can find the complete specification of the fields involved in the computation?
Thanks
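For what it's worth, the numbers in the question line up with the standard WEP construction (IEEE 802.11-1997, clause 8.2): RC4 is keyed with IV || shared key, the ICV is the CRC-32 of the challenge only (not the IV), and the transmitted body is IV (3 bytes) + key index (1 byte, the "extra" byte) + ciphertext (128 + 4 bytes), giving 136 bytes in total. A self-contained Python sketch; the key and IV values below are made up:

```python
import struct
import zlib

def rc4(key, data):
    """Plain RC4 (KSA + PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) & 0xFF
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for b in data:
        i = (i + 1) & 0xFF
        j = (j + S[i]) & 0xFF
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) & 0xFF])
    return bytes(out)

def wep_auth_response(iv, wep_key, challenge):
    """WEP shared-key challenge response: RC4 keyed with IV || key,
    plaintext is challenge || ICV, where the ICV is the CRC-32 of
    the challenge only. The body carries IV (3 bytes) + key index
    (1 byte) + ciphertext, transmitted with the ICV little-endian."""
    icv = struct.pack("<I", zlib.crc32(challenge) & 0xFFFFFFFF)
    body = rc4(iv + wep_key, challenge + icv)
    return iv + b"\x00" + body        # key index 0

resp = wep_auth_response(b"\x01\x02\x03", b"secret40key64", b"\xAA" * 128)
# len(resp) == 3 + 1 + 128 + 4 == 136, matching the Wireshark observation
```

So the extra byte is the key ID field, not part of the encryption; it is sent in the clear between the IV and the ciphertext.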

What is the difference between the syntax I(0,1) and I(1,0) in the USB standard?

In part 7 of the USB smart card CCID specification there are examples for the TPDU level. For the T=1 protocol the syntax I(i,j) is used. I can't understand the difference between I(0,1) and I(1,0). What exactly does this syntax mean?
In the T=1 protocol, information blocks are denoted I(i,j), where i is the send-sequence number of the block, encoded in one bit, and j is the M bit of the PCB field, which indicates the chaining state (whether more blocks follow). So I(0,1) is a chained block with sequence number 0, while I(1,0) is the last (or only) block with sequence number 1.
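That bit layout can be shown in two lines of Python. The PCB coding below follows ISO/IEC 7816-3 (bit 8 = 0 marks an I-block, bit 7 carries N(S), bit 6 is the more-data bit M):

```python
def i_block_pcb(ns, m):
    """PCB byte of a T=1 I-block: bit 8 = 0 identifies an I-block,
    bit 7 (value 0x40) is the send-sequence number N(S), and
    bit 6 (value 0x20) is the more-data (chaining) bit M."""
    return (ns & 1) << 6 | (m & 1) << 5

pcb_01 = i_block_pcb(0, 1)   # I(0,1): 0x20, chained block, sequence 0
pcb_10 = i_block_pcb(1, 0)   # I(1,0): 0x40, last block, sequence 1
```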

Telnet reader will split input after 1448 characters

I am writing a Java applet that will print what a telnet client sends to the connection. Unfortunately, the input is split after 1448 characters.
The code that is proving to be a problem:
char[] l = new char[5000];
Reader r = new BufferedReader(new InputStreamReader(s.getInputStream(), "US-ASCII"));
int i = r.read(l); // returns after whatever bytes have arrived so far, not after a full "message"
I cannot change the source of what the telnet client reads from, so I am hoping it is an issue with the above three lines.
You're expecting to get telnet protocol data units from the TCP layer. It just doesn't work that way. You can only extract telnet protocol data units from the code that implements the telnet protocol. The segmentation of bytes of data at the TCP layer is arbitrary and it's the responsibility of higher layers to reconstruct the protocol data units.
The behavior you are seeing is normal, and unless you're diagnosing a performance issue, you should completely ignore the way the data is split at the TCP level.
The reason you're only getting 1448 bytes at a time is that the underlying protocols divide the transmission into packets. Frequently, this size is around 1500, and there are some bytes used for bookkeeping, so you're left with a chunk of 1448 bytes. The protocols don't guarantee that if you send X bytes in a 'single shot', that the client will receive X bytes in a single shot (e.g. a single call to the receive method).
As has been noted already in the comments above, it's up to the receiving program to re-assemble these packets in a way that is meaningful to the client. In general, you perform receives and append the data you receive to some buffer until you find an agreed-upon "end of the block of data" marker (such as an end-of-line, newline, carriage return, some symbol that won't appear in the data, etc.).
If the sender is genuinely a telnet client, its output is likely line-based (a single block of data terminated with an "end of line": carriage return and linefeed characters). RFC 854 may be helpful: it details the Telnet protocol as originally specified.
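The buffering idea from the answers can be sketched as follows (in Python rather than Java, for brevity): a generator that yields complete CRLF-terminated lines regardless of how TCP splits the byte stream. Real telnet traffic may also contain IAC command sequences (RFC 854) that must be stripped before treating the bytes as text.

```python
def read_lines(sock):
    """Accumulate TCP data in a buffer and yield complete
    CRLF-terminated lines, however the transport chunks the stream."""
    buf = bytearray()
    while True:
        chunk = sock.recv(4096)
        if not chunk:            # peer closed the connection
            break
        buf += chunk
        while True:
            nl = buf.find(b"\r\n")
            if nl < 0:
                break            # line not complete yet, keep buffering
            yield bytes(buf[:nl])
            del buf[:nl + 2]
```

The 1448-byte chunks disappear from the application's point of view: a line split across two TCP segments is simply completed on the next recv().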