I have to implement the TFTP protocol in C for a school project according to RFC 1782.
When a client sends an RRQ packet with option(s), the server replies with an OACK packet to confirm the recognized option(s). I'm fine with that.
But if the client sends an RRQ packet without options, does the server have to send an OACK packet, or does it begin sending the DATA packet(s) directly?
Thank you.
As specified in [rfc 1782], you can send the DATA packets directly:
"the server may respond with an Options Acknowledgment"
[rfc 2119]:
5. MAY This word, or the adjective "OPTIONAL", mean that an item is
truly optional. One vendor may choose to include the item because a
particular marketplace requires it or because the vendor feels that
it enhances the product while another vendor may omit the same item.
An implementation which does not include a particular option MUST be
prepared to interoperate with another implementation which does
include the option, though perhaps with reduced functionality. In the
same vein an implementation which does include a particular option
MUST be prepared to interoperate with another implementation which
does not include the option (except, of course, for the feature the
option provides.)
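For a concrete picture, here is a minimal sketch in C of that decision on the server side. It assumes the RRQ packet is well formed (every field NUL-terminated inside the buffer), and `send_oack()` / `send_data_block()` are hypothetical names for helpers your own project would provide; a real server must bounds-check before each `strlen`.

```c
#include <string.h>

/* Hypothetical helpers assumed to exist elsewhere in your project: */
extern void send_oack(int sock, const char *rrq, size_t len);
extern void send_data_block(int sock, unsigned block);

/*
 * An RRQ is: opcode(2) | filename | 0 | mode | 0 | [opt | 0 | val | 0]...
 * If anything follows the mode's terminating NUL, the client sent options.
 * Assumes a well-formed packet; bounds-check each strlen in real code.
 */
static int rrq_has_options(const char *pkt, size_t len)
{
    size_t pos = 2;                   /* skip the 2-byte opcode */
    pos += strlen(pkt + pos) + 1;     /* skip filename and its NUL */
    pos += strlen(pkt + pos) + 1;     /* skip mode and its NUL */
    return pos < len;                 /* leftover bytes are options */
}

void handle_rrq(int sock, const char *pkt, size_t len)
{
    if (rrq_has_options(pkt, len))
        send_oack(sock, pkt, len);    /* confirm options, await ACK 0 */
    else
        send_data_block(sock, 1);     /* no options: start with DATA #1 */
}
```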
[rfc 1782]: https://www.rfc-editor.org/rfc/rfc1782
[rfc 2119]: https://www.ietf.org/rfc/rfc2119.txt
I understand that, if you send duplicate header values in subsequent requests, the dynamic table means the value is not sent again; instead, a reference to its entry in the table is sent.
My question is whether this applies to the URL as well?
Say you have repeated requests to the same URL (possibly containing long IDs and/or tokens), would bandwidth be saved in this instance?
There are various options that a client can use to send headers under HTTP/2, as defined in the HPACK specification. These basically say whether to use a previously referenced header, whether to store a header for later reference, whether to never store a header for reuse, etc. The client decides which of these to use for the headers it sends.
In HTTP/2 the URL is sent in the :path pseudo-header, so unlike in HTTP/1.1 it is just like any other HTTP header and could be compressed. Typically a URL is not repeated often, however, so it would be sent as a Literal Header Field without Indexing, which means this is a one-off header, so don't store it for reuse. Of course, as it's an HTTP header much like any other, there's nothing to stop an HTTP/2 client sending this as an indexed type, but web browsers are unlikely to do this, so this is probably only really an option for custom clients.
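To make that concrete, here is a minimal sketch in C of what a Literal Header Field without Indexing for :path looks like on the wire (per RFC 7541, section 6.2.2), with the name given as a reference to static table entry 4 (":path"). It assumes the path is shorter than 127 bytes, skips Huffman coding for clarity, and assumes `out` is large enough.

```c
#include <stdint.h>
#include <string.h>

/*
 * Emit ":path" as a "Literal Header Field without Indexing"
 * (4-bit pattern 0000, name = static table entry 4).
 * Assumes strlen(path) < 127 and no Huffman coding (H bit = 0).
 */
size_t encode_path_no_index(uint8_t *out, const char *path)
{
    size_t len = strlen(path);
    out[0] = 0x04;                 /* 0000 | indexed name: entry 4 */
    out[1] = (uint8_t)len;         /* H=0, value length, 7-bit prefix */
    memcpy(out + 2, path, len);    /* literal value octets */
    return 2 + len;
}
```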
Incidentally, if you wish to know more about this and find the spec a little difficult to follow, my book HTTP/2 in Action goes into this in a lot more detail in Chapter 8.
At 'Project Area' level in RTC, under 'Team Configuration' -> 'Operation Behaviour', there are two deliver options (one client-side, one server-side).
What is the difference between the two? Are they not both delivering to the server?
Those are for hooks:
• executed on the client, that is, before the deliver;
• executed on the server, that is, on reception of the deliver.
It is on the client side, for instance, that I set the hook requiring that a Work Item is associated to a change set before said change set can be delivered (as illustrated in your previous question "Can I associate a change set with a work item after it has been delivered?").
I could check it on the server, but why generate network traffic if the deliver is going to be rejected anyway?
More precisely, as mentioned in this thread:
In general, you want all preconditions to run on the server, so the server (including the web server) can ensure those preconditions have been executed.
But there are some preconditions that must be run on the client, namely those that need to look at the local state of the client.
This is illustrated by the list of predefined preconditions.
In particular, most of these preconditions refer to the build/compile state of the workspace (information not available on the server), such as: "prohibit unused imports" and "prohibit workspace errors".
Note that there are three client-side preconditions that do not require client-side information ("require work item approval", "require work item and comments", "descriptive change sets").
These are included for backward compatibility, since they were made available in the first release of RTC, but they have since been made available as server-side preconditions as well, so you should always use the server-side form of them.
I've submitted work item 209427 to get these client-side preconditions marked as "deprecated" with a pointer to the server-side preconditions that replace them.
Okay, so my goal is to build an easy-to-use protocol for sending data over TCP. Basically, it would send a message and an object (of unknown type) over TCP. Sending would only require one method call, and receiving would only require one as well.
So this is how I was thinking of formatting the "message":
length_of_message - "A string that is a message" - length_of_Object - object
length_of_message would be a fixed number of bytes, as would length_of_Object.
The actual message string and the actual object would be of variable length.
If the actual class of the object isn't known, could I just declare it as a "generic object" somehow, then get its class name from the "generic object", with the message telling the receiver what to do with the object?
It would be simple if it were a constant object type, but I want to be able to use one send function and one receive function for every object that needs to be sent/received.
Any suggestions?
Thanks,
Andrew
Make sure you aren't reinventing the wheel (unless doing so is your primary goal).
With that in mind, consider:
• Implement and use the NSCoding protocol. It allows for the efficient archival of complexly connected object graphs, including cycles.
• Instead of raw TCP, use HTTP. While it adds a bit of overhead in the headers, the body can be straight encoded data. More importantly, HTTP is ubiquitous. It routes through just about anything, whereas other protocols might be blocked (think proxy servers).
• Via HTTP, you can leverage compression. If one side of your communication pipe is an existing web server of some kind, it probably already supports gzip'd communication. Compressing an NSData (that would be the result of NSCoding) is trivial.
• Alternatively, stick with straight plists.
Unless you truly have some requirement that makes the above nonviable, you are likely better off leveraging the above technologies instead of rolling a new one.
With that said, what you propose is fine. I would add, possibly, a structure like:
[HEADER][MSGID][LEN][TYPE][DATA of len][POST]
Where the POST is a known sequence of bytes that the receiver can verify to make sure that, maybe, all the data was received correctly. Or you could go whole hog and integrate a checksum. Or sub-pieces could be repeated as needed (i.e., [LEN][TYPE][DATA] over and over).
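If you do stay with a raw frame like that, a minimal sketch in C of the packing side might look like the following. FRAME_HEADER and FRAME_POST are hypothetical marker values (any agreed-upon constants will do), `buf` is assumed large enough, and all integers go out big-endian so both ends agree on byte order.

```c
#include <arpa/inet.h>   /* htonl */
#include <stdint.h>
#include <string.h>

/* Hypothetical marker values; any fixed, agreed-upon bytes will do. */
#define FRAME_HEADER 0xCAFEBABEu
#define FRAME_POST   0xDEADBEEFu

/* Pack one [HEADER][MSGID][LEN][TYPE][DATA][POST] frame into buf. */
size_t pack_frame(uint8_t *buf, uint32_t msgid, uint32_t type,
                  const void *data, uint32_t len)
{
    size_t off = 0;
    uint32_t v;
    v = htonl(FRAME_HEADER); memcpy(buf + off, &v, 4); off += 4;
    v = htonl(msgid);        memcpy(buf + off, &v, 4); off += 4;
    v = htonl(len);          memcpy(buf + off, &v, 4); off += 4;
    v = htonl(type);         memcpy(buf + off, &v, 4); off += 4;
    memcpy(buf + off, data, len);                      off += len;
    v = htonl(FRAME_POST);   memcpy(buf + off, &v, 4); off += 4;
    return off;              /* total bytes to write to the socket */
}
```

The receiver reads the fixed-size prefix first, then reads exactly LEN bytes of DATA, then checks that POST matches before trusting the frame.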
This is related to:
How should I implement a COUNT verb in my RESTful web service?, Paging in a Rest Collection,
and Using the HTTP Range Header with a range specifier other than bytes?
Actually, I think the -1 rated answer here is correct: https://stackoverflow.com/a/1434701/1237617
Generally, answers say that you can use custom units, citing sec 3.12:
range-unit = bytes-unit | other-range-unit
bytes-unit = "bytes"
other-range-unit = token
However, when you read the HTTP spec, please notice that the production rules are these:
Content-Range = "Content-Range" ":" content-range-spec
content-range-spec      = byte-content-range-spec
byte-content-range-spec = bytes-unit SP
                          byte-range-resp-spec "/"
                          ( instance-length | "*" )
The header spec only references bytes-unit from sec 3.12, not range-unit, so I think that it's actually against the spec to use custom units here.
Am I missing something, or is the popular answer wrong?
EDIT: Since this probably isn't clear, the gist of my question is:
RFC 2616 sec 14.16 only references bytes-unit. It never mentions range-unit, so the range-unit production is not relevant for Content-Range, and thus only byte units can be used.
I think this addresses my concerns best, although I needed some time to understand it (plus I wanted to make sure that there is something wrong with the wording).
This reflects the fact that, apparently, the first set of grammar rules has been specifically made for parsing and the second one for producing HTTP requests
Thanks to elgaton.
The spec, as being revised, allows custom range units. See HTTPbis Part 5, Section 2.
If you read the HTTP/1.1 RFC, section 3.12, you will see that:
The only range unit defined by HTTP/1.1 is "bytes". HTTP/1.1 implementations MAY ignore ranges specified using other units.
So, the other-range-unit token has been introduced only to make servers more "liberal" when accepting requests. This reflects the fact that, apparently, the first set of grammar rules has been specifically made for parsing and the second one for producing HTTP requests, so that servers could accept even invalid requests (they will simply be ignored) and clients would use only the universally accepted bytes unit.
Therefore, I personally recommend to:
• use only the bytes unit when acting as a client, and
• accept other units (discarding the Content-Range header if they are invalid) when acting as a server, as in the sketch below.
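As a minimal sketch of that server-side rule in C: accept the range unit only if it is "bytes", and otherwise ignore the header, as sec 3.12 permits. The argument is assumed to be everything after the "Content-Range:" name, e.g. "bytes 0-499/1234".

```c
#include <string.h>

/*
 * Return nonzero only if the Content-Range value uses the "bytes"
 * unit; a lenient server simply ignores the header otherwise.
 */
int content_range_is_bytes(const char *value)
{
    while (*value == ' ')              /* skip optional leading space */
        value++;
    return strncmp(value, "bytes ", 6) == 0;
}
```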
This is a purely personal opinion, but I think it is fairly consistent with how other HTTP extensions (custom methods or headers) are used. Here is how I read it: yes, I can use custom range units, and no, I shouldn't submit a bug report when they get ignored when passing through firewalls, web proxies, and other intermediaries. I conform to the HTTP spec when I'm sending them, and they conform to HTTP when they ignore them. WebDAV uses HTTP extensions correctly, IMO, but rarely works over the Internet for exactly this reason. As I said, a personal opinion only.
Apparently it's OK to use custom units, because:
This reflects the fact that, apparently, the first set of grammar rules has been specifically made for parsing and the second one for producing HTTP requests
I get confused about when and why you should use specific verbs in REST.
I know basic things like:
GET -> for retrieval
POST -> for adding a new entity
PUT -> for updating
DELETE -> for deleting
These verbs are to be used for the operations I wrote above, but I don't understand why.
What will happen if, inside a GET method in REST, I add a new entity, or inside POST I update an entity? Or maybe inside DELETE I add an entity? I know this may be a noob question, but I need to understand it. It sounds very confusing to me.
@archil has an excellent explanation of the pitfalls of misusing the verbs, but I would point out that the rules are not quite as rigid as what you've described (at least as far as the protocol is concerned).
GET MUST be safe. That means that a GET request must not change the server state in any substantial way. (The server could do some extra work like logging the request, but will not update any data.)
PUT and DELETE MUST be idempotent. That means that multiple calls to the same URI will have the same effect as one call. So for example, if you want to change a person's name from "Jon" to "Jack" and you do it with a PUT request, that's OK because you could do it one time or 100 times and the person's name would still have been updated to "Jack".
POST makes no guarantees about safety or idempotency. That means you can technically do whatever you want with a POST request. However, you will lose any advantage that clients can take from those assumptions. For example, you could use POST to do a search, which is semantically more of a GET request. There won't be any problems, but browsers (or proxies or other agents) will never cache the results of that search, because they can't assume that nothing changed as a result of the request. Further, web crawlers will never perform a POST request, because they cannot assume the operation is safe.
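A toy sketch in C may help make the idempotency distinction concrete. The "resource" here is just an in-memory string and counter, so this models only the semantics, not HTTP itself.

```c
#include <stdio.h>
#include <string.h>

static char name[32] = "Jon";   /* toy resource */
static int  log_count = 0;      /* toy collection size */

/* PUT semantics: repeating the call leaves the same final state. */
void put_name(const char *new_name)
{
    strncpy(name, new_name, sizeof(name) - 1);
}

/* POST semantics: each call changes state again (not idempotent). */
void post_log_entry(void)
{
    log_count++;
}

int main(void)
{
    put_name("Jack");
    put_name("Jack");           /* second PUT: state is still "Jack" */
    post_log_entry();
    post_log_entry();           /* second POST: state changed again  */
    printf("name=%s log_count=%d\n", name, log_count);  /* Jack, 2  */
    return 0;
}
```

Retrying a lost PUT is therefore harmless, while retrying a lost POST may duplicate its effect; that is exactly the assumption agents are allowed to make.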
The entire HTML version of the world wide web gets along pretty well without PUT or DELETE, and it's perfectly fine to do deletes or updates with POST. But if you can support PUT and DELETE for updates and deletes (and other idempotent operations), it's just a little better, because agents can assume that the operation is idempotent.
See the official W3C documentation for the real nitty gritty on safety and idempotency.
A protocol is a protocol. It is meant to define every rule related to it. HTTP is a protocol too. All of the above rules (including the HTTP verb rules) are defined by the HTTP protocol, and their usage is defined by the HTTP protocol. If you do not follow these rules, only you will understand what happens inside your service. It will not follow the rules of the protocol and will be confusing for other users.
There was an example, one time, about a famous photo site (it does not matter which) that deleted pictures with GET requests. At some point a user of that site installed the Google Desktop Search program, which archives pages locally. As that program knew that GET operations are only used to get data and should not affect anything, it made GET requests to every available URL (including those GET-delete URLs). As the user was logged in and the cookie was in the browser, there were no authorization problems. And the result: all of the user's photos were deleted on the server, because of incorrect usage of the HTTP protocol and the GET verb. That's why you should always follow the rules of the protocol you are using. Although technically possible, it is not right to override defined rules.
Using GET to delete a resource would be like having a function that is named and documented to add something to an array but deletes something from the array under the hood. REST has only a few well-defined methods (the HTTP verbs). Users of your service will expect your service to stick to these definitions; otherwise, it's not a RESTful web service.
If you do so, you cannot claim that your interface is RESTful. The REST principle mandates that the specified verbs perform the actions that you have mentioned. If they don't, then it can't be called a RESTful interface.