How can I use vendor-specific MIME types for a "private-labeled" REST API?

I'm developing a RESTful API. Currently I'm considering the use of resource-specific vendor MIME types to convey semantics and meaning, as well as serve as the "contract" between client and server.
So, for example, application/vnd.mycompany.person+xml would mean that the data in question is XML that represents a person.
I have a requirement to make this API "private-labeled", meaning a reseller could in turn provide the API to his customer without the customer knowing that it is my company's service. The way this would work is that my company would host the main API at a generic URL, e.g. www.example.com/api; we would then use a CNAME to point our own domain name at that URL, and our resellers could do the same.
Internally all resource links would be relative to the API root, and so would respect the actual URL that is being used.
However, I don't want to have to understand/support arbitrary vendor-specific MIME types, so what should the "mycompany" part of the example MIME type above be?

The HTTP spec says:
Use of non-registered media types is discouraged.
I used to use “custom” media types in my platform, but it caused issues with user agents (browsers, cURL, wget, etc.) not recognizing the content.
You could try to get your custom media type registered, but (A) that takes a while; (B) it’d take a real long while before user-agents would recognize the type, if ever; (C) you’ve indicated that you don’t want the company name always present anyway.
As an alternative to “custom” media types, I recommend utilizing media type parameters instead; they’re a blessed way to add supplementary information about content to media types.
Using parameters, your media type could be application/xml; mycompany-schema=person or maybe just application/xml; schema=person.
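For illustration, here's a minimal sketch of what sending such a request could look like, assuming a hypothetical /people endpoint under the API root and Python's requests library (both the URL and payload are invented):
import requests

# Hypothetical endpoint; the schema parameter conveys "this XML is a person"
# without registering a new vendor media type.
headers = {"Content-Type": "application/xml; schema=person"}
body = "<person><firstName>Ada</firstName><lastName>Lovelace</lastName></person>"

resp = requests.post("https://www.example.com/api/people",
                     data=body, headers=headers)
print(resp.status_code, resp.headers.get("Content-Type"))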

I have seen a couple of frameworks and tutorials that recommend vendor-specific MIME types to "solve" issues with making your REST interface "truly RESTful", simply because it can be done, and somehow that makes it kosher for a REST service.
One issue with this approach is that by its nature it is a hack or cheat to "make it work" the way you want, when the whole point of shifting to a hypermedia-driven REST service is to change the model of your API and service and change how you approach the problem. Sneaking in a "valid", or allowed-but-not-recommended, HTTP value for the Content-Type is like telling starving Venezuelans that rats are fish so they could eat them without sin during Lent. Is there anything wrong with eating a rat if that's all you have? Probably not. But does pretending it's a fish make it a fish? Of course not. If you need an interface that's contract-driven, use RPC or SOAP or even a custom vendor MIME type. But don't point to the spec and say it's REST, because in the end you're eating a rat, everyone knows it, and you're only lying to yourself.
The second issue is that you are losing the actual rewards of the hypermedia-driven interface when you cut corners. Right away you have run into issues with user agents, and with your own server having to jump through hoops or simply give up because the MIME type was unfamiliar. All because you thought you could have it both ways: the point was never to impress your clients with claims of a true REST service, or to lighten the load a bit by shedding the (obviously valuable in some contexts) extra weight of a contract-driven interaction; it was to change how your service actually interacts with external clients.
Finally, I'm really unclear on how a vendor-specific MIME type actually enforces a contract any better than a defined endpoint does. All of the sites that mention this technique seem to be glowing with relief that this option exists and, quite frankly, a bit smug and pleased that they are using it, as if they know it's technically "naughty" but it's just so easy and fixes everything. What does it fix? In your case, why wouldn't you simply have your inbound person request/content go to:
POST /myRestService/people
and if they have some other kind of request, have that go to a different endpoint intended for that other data type? If you need a method like does_something, wouldn't you either go with:
GET /myRestService/people/personID_123/does_something
or
GET /myRestService/people/does_something/personID_123
depending on the exact context?
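To make that concrete, here's a rough sketch (Flask, with invented handler names and a placeholder persistence step) of the path itself carrying the "what kind of data is this" information that the vendor MIME type was supposed to convey:
from flask import Flask, request, jsonify

app = Flask(__name__)

# The path, not a custom MIME type, says what the payload represents.
@app.route("/myRestService/people", methods=["POST"])
def create_person():
    person = request.get_json()  # plain application/json is enough
    # ... persist the person here ...
    return jsonify(person), 201

# A separate endpoint for the does_something operation on a person.
@app.route("/myRestService/people/<person_id>/does_something", methods=["GET"])
def does_something(person_id):
    # ... whatever does_something means for this person ...
    return jsonify({"personId": person_id, "result": "done"})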
And just so I don't sound mean on top of loony: any frustration or anger is not at all directed at you or your question, but at the "solution" of the vendor MIME type and the obsession everyone has developed with the "Roy Fielding officially-approved and stamped-as-valid REST service" that apparently no one is even able to provide a working public example of, leaving only a sense of urgency to adopt it right away, taking whatever shortcuts are needed, and dealing with the shame and finger-pointing later when we actually fix the problems the shortcuts made.

Related

What's the correct way to create an endpoint in a REST API?

I'm drawing my API routes.
A user has projects, projects have actors, actors have addresses.
There is no address without an actor, there is no actor without a project, and there is no project without a user.
Would this be the correct way to build the endpoint?
GET /users/{user_id}/projects/{project_id}/actors/{actor_id}/addresses
There is no such thing as a REST endpoint. There are resources. -- Fielding, 2018
What you seem to be asking about here is how to design the identifier for your resource.
REST doesn't care what spelling conventions you use for your resource identifiers, so long as they satisfy the production rules described by RFC 3986.
Identifiers that load data into the query part are convenient if you are expecting to leverage HTML forms:
GET /addresses?user={user_id}&project={project_id}&actor={actor_id}
But that design is not particularly convenient if you are expecting to use dot segments to reference other resources.
Choosing some alternative that is described by a URI Template will make some things easier down the road.
/users/{user_id}/projects/{project_id}/actors/{actor_id}/addresses
That's fine. Other spellings would also be fine (hint: URL shorteners work).
Broadly, you choose identifier spellings by thinking about the different contexts in which a human being has to look at the URI (documentation for your API, browser histories, access logs, etc.) and choose a spelling that works well in at least one of those settings.
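As a throwaway illustration (Flask, with a placeholder lookup), both spellings can be routed to the same underlying handler; the choice between them is about readability and tooling, not about REST:
from flask import Flask, request, jsonify

app = Flask(__name__)

def find_addresses(user_id, project_id, actor_id):
    # Placeholder lookup; a real service would hit a data store here.
    return [{"user": user_id, "project": project_id, "actor": actor_id}]

# Hierarchical spelling, matching the URI template in the question.
@app.route("/users/<user_id>/projects/<project_id>/actors/<actor_id>/addresses")
def addresses_by_path(user_id, project_id, actor_id):
    return jsonify(find_addresses(user_id, project_id, actor_id))

# Query-parameter spelling, convenient for HTML form submissions.
@app.route("/addresses")
def addresses_by_query():
    return jsonify(find_addresses(request.args.get("user"),
                                  request.args.get("project"),
                                  request.args.get("actor")))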

REST API responses based on authentication, best practices?

I have an API with endpoint GET /users/{id} which returns a User object. The User object can contain sensitive fields such as cardLast4, cardBrand, etc.
{
    firstName: ...,
    lastName: ...,
    cardLast4: ...,
    cardBrand: ...
}
If the user calls that endpoint with their own ID, all fields should be visible. However, if it is someone else's ID, then cardLast4 and cardBrand should be hidden.
I want to know what the best practices are here for designing my response. I see three options:
Option 1. Two DTOs, one with all fields and one without the hidden fields:
// OtherUserDTO
{
    firstName: ...,
    lastName: ..., // cardLast4 and cardBrand hidden
}
I can see this getting out of hand with role-based DTOs: what if I now need UserDTOForAdminRole, UserDTOForAccountingRole, etc.? The number of potential DTOs quickly becomes unmanageable.
Option 2. One response object being the User, but null out the values that the user should not be able to see.
{
    firstName: ...,
    lastName: ...,
    cardLast4: null, // hidden
    cardBrand: null  // hidden
}
Option 3. Create another endpoint such as /payment-methods?userId={userId}, even though PaymentMethod is not an entity in my database. This will now require two API calls to get all the data. If the userId is not their own, it will return 403 Forbidden.
{
    cardLast4: ...,
    cardBrand: ...
}
What are the best practices here?
You're gonna get different opinions about this, but I feel that doing a GET request on some endpoint and getting a different shape of data back depending on the authorization status can be confusing.
So I would be tempted, if it's reasonable to do this, to expose the privileged data via a secondary endpoint: either by exposing just the private properties there, or by having two distinct endpoints, one with the unprivileged data and a second that repeats that data plus the new private properties.
Of those two, I tend to go for the first, because an API endpoint is not just a means to get data. The URI is an identity, so I would want /users/123 to mean the same thing everywhere, and have a second /users/123/secret-properties for just the privileged fields.
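A rough sketch of that split, assuming Flask and a stand-in auth check (the secret-properties name comes from the paragraph above; the data and the X-User-Id header are invented):
from flask import Flask, jsonify, abort, request

app = Flask(__name__)

# Toy in-memory data; a real service would use a database.
USERS = {"123": {"firstName": "Jane", "lastName": "Doe",
                 "cardLast4": "4242", "cardBrand": "Visa"}}

@app.route("/users/<user_id>")
def public_user(user_id):
    user = USERS.get(user_id)
    if user is None:
        abort(404)
    # Same shape for every caller: only the public fields.
    return jsonify({"firstName": user["firstName"],
                    "lastName": user["lastName"]})

@app.route("/users/<user_id>/secret-properties")
def secret_properties(user_id):
    # Stand-in check: assume some auth layer tells us who the caller is.
    if request.headers.get("X-User-Id") != user_id:
        abort(403)
    user = USERS.get(user_id)
    if user is None:
        abort(404)
    return jsonify({"cardLast4": user["cardLast4"],
                    "cardBrand": user["cardBrand"]})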
I have an API with endpoint GET /users/{id} which returns a User object.
In general, it may help to reframe your thinking -- resources in REST are generalizations of documents (think "web pages"), not generalizations of objects. "HTTP is an application protocol whose application domain is the transfer of documents over a network" -- Jim Webber, 2011
If the user calls that endpoint with their own ID, all fields should be visible. However, if it is someone else's ID, then cardLast4 and cardBrand should be hidden.
Big picture view: in HTTP, you've got a bit of tension between privacy (only show documents with sensitive information to people allowed access) and caching (save bandwidth and server pressure by using copies of documents to satisfy more than one request).
Cache is an important architectural constraint in the REST architectural style; that's the bit that puts the "web scale" in the world wide web.
OK, good news first -- HTTP has special rules for caching web requests with Authorization headers. Unless you deliberately opt in to allowing the responses to be re-used, you don't have to worry about the caching.
Treating the two different views as two different documents, with different identifiers, makes almost everything easier -- the public documents are available to the public, the sensitive documents are locked down, operators looking at traffic in the log can distinguish the two different views because the logged identifier is different, and so on.
The thing that isn't easier: the case where someone is editing (POST/PUT/PATCH) one document and expecting to see the changes appear in the other. Cache-invalidation is one of the two hard problems in computer science. HTTP doesn't have a general purpose mechanism that allows the origin server to mark arbitrary documents for invalidation - successful unsafe requests will invalidate the effective-target-uri, the Location, the Content-Location, and that's it... and all three of those values have other important uses, making them more challenging to game.
Documents with different absolute-uri are different documents, and those documents, once copied from the origin server, can get out of sync.
This is the option I would normally choose; the trade-off is that a client looking at cached copies of a document isn't seeing changes made by the server.
OK, suppose you decide that you don't like those trade-offs. Can we do it with just one resource identifier? You immediately lose some clarity in your general-purpose logs, but perhaps a bespoke logging system will get you past that.
You probably also have to dump public caching at this point. The only general purpose header that changes between the user allowed to look at the sensitive information and the user who isn't? That's the authorization header, and there's no "Vary" mechanism on authorization.
You've also got something of a challenge for the user who is making changes to the sensitive copy, but wants to now review the public copy (to make sure nothing leaked? or to make sure that the publicly visible changes took hold?)
There's no general purpose header for "show me the public version", so either you need to use a non standard header (which general purpose components will ignore), or you need to try standardizing something and then driving adoption by the implementors of general purpose components. It's doable (PATCH happened, after all) but it's a lot of work.
The other trick you can try is to play games with Content-Type and the Accept header -- perhaps clients use something normal for the public version (e.g. application/json), and a specialized type for the sensitive version (application/prs.example-sensitive+json).
That would allow the origin server to use the Vary header to indicate that the response is only suitable if the same accept headers are used.
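If you did go down that road, the moving parts might look roughly like this sketch (Flask; the sensitive media type is the example from above, and the X-User-Id check is a stand-in for real authorization):
from flask import Flask, jsonify, request, abort

app = Flask(__name__)

SENSITIVE_TYPE = "application/prs.example-sensitive+json"

@app.route("/users/<user_id>")
def get_user(user_id):
    wants_sensitive = SENSITIVE_TYPE in request.headers.get("Accept", "")
    if wants_sensitive and request.headers.get("X-User-Id") != user_id:
        abort(403)  # stand-in for a real authorization check

    body = {"firstName": "Jane", "lastName": "Doe"}
    if wants_sensitive:
        body.update({"cardLast4": "4242", "cardBrand": "Visa"})

    resp = jsonify(body)
    resp.headers["Content-Type"] = (SENSITIVE_TYPE if wants_sensitive
                                    else "application/json")
    # Tell caches that the representation depends on the Accept header.
    resp.headers["Vary"] = "Accept"
    return resp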
Once again, general purpose components aren't going to know about your bespoke content-type, and are never going to ask for it.
The standardization route really isn't going to help you here, because the thing you really need is that clients discriminate between the two modes, where general purpose components today are trying to use that channel to advertise all of the standardized representations that they can handle.
I don't think this actually gets you anywhere that you can't fake more easily with a bespoke header.
REST leans heavily into the idea of using readily standardizable forms; if you think this is a general problem that could potentially apply to all resources in the world, then a header is the right way to go. So a reasonable approach would be to try a custom header, and get a bunch of experience with it, then try writing something up and getting everybody to buy in.
If you want something that just works with the out of the box web that we have today, use two different URI and move on to solving important problems.

Could REST API OPTIONS be used as the HATEOAS-only request?

As I've understood it, REST MUST use the HATEOAS constraint to be implemented properly. My understanding of HATEOAS is that basically every resource should share information about what communication options it has and how the consumer can use those options to achieve their end goal.
My question is if the HTTP OPTIONS method could be used as a way to navigate a REST API. Basically the response from an OPTIONS request would include the possible actions to take on a resource which would make it possible to consume the API without knowing the endpoints.
e.g.
An initial request to the API
OPTIONS /api HTTP/1.1
could return all resources available for consumption and their relations -- like a massive tree for consuming the API and all it has to offer. This idea doesn't preclude implementing HATEOAS on other responses as well, but the OPTIONS request would allow navigation without returning data that the consumer might not actually want to consume.
Is this a really bad idea? Or is it something that is commonly implemented? I'm currently attempting to implement a REST API, but I'm having a hard time understanding the benefit of HATEOAS if there is no way to navigate the API without actually requesting data that you might not necessarily need when consuming certain endpoints. And I assume HATEOAS aims to make clients consume resources by their relation and not by hard-coding the endpoint?
TL;DR
Could HTTP OPTIONS request act as a way to navigate a REST API by returning what communication options are available for the requested resource without actually returning the resource?
According to RFC 7231
The OPTIONS HTTP method requests information about the communication options available for the target resource, at either the origin server or an intervening intermediary. This method allows a client to determine the options and/or requirements associated with a resource, or the capabilities of a server, without implying a resource action.
...
A server generating a successful response to OPTIONS SHOULD send any header fields that might indicate optional features implemented by the server and applicable to the target resource (e.g., Allow), including potential extensions not defined by this specification. The response payload, if any, might also describe the communication options in a machine or human-readable representation. A standard format for such a representation is not defined by this specification, but might be defined by future extensions to HTTP. A server MUST generate a Content-Length field with a value of "0" if no payload body is to be sent in the response.
So, basically, a response to an OPTIONS request will tell your client which HTTP operations may be performed on a certain resource. It is furthermore admissible to target the whole server by using * instead of a specific resource URI.
A response to an OPTIONS request may look like this:
HTTP/1.1 204 No Content
Allow: OPTIONS, GET, HEAD, POST
Cache-Control: max-age=604800
Date: Thu, 13 Oct 2016 11:45:00 GMT
Expires: Thu, 20 Oct 2016 11:45:00 GMT
Server: EOS (lax004/2813)
x-ec-custom-error: 1
which states that the resource supports the operations listed in the Allow header of the response. Via the Cache-Control header a client knows that by default it can cache responses to safe requests (GET and HEAD) for up to 7 days (the value is given in seconds). The x-ec-custom-error header is a non-standard header that is specific to a particular piece of software, in this case an ECS server. According to this Q&A its meaning isn't publicly documented and is therefore application-specific.
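For illustration, a bare-bones handler that answers OPTIONS itself (rather than relying on a framework default) could look something like this sketch in Flask; the /orders resource and its allowed methods are invented:
from flask import Flask, Response, request, jsonify

app = Flask(__name__)

@app.route("/orders", methods=["GET", "POST", "OPTIONS"])
def orders():
    if request.method == "OPTIONS":
        # Advertise the communication options for this resource.
        resp = Response(status=204)
        resp.headers["Allow"] = "OPTIONS, GET, HEAD, POST"
        resp.headers["Cache-Control"] = "max-age=604800"
        return resp
    if request.method == "POST":
        return jsonify({"created": True}), 201
    return jsonify([])  # GET: placeholder list of orders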
In regards to returning a tree of traversable resources from the resource the OPTIONS request was issued against: technically this could be possible; however, certain systems might produce an almost never-ending list of URIs. Such a design is therefore questionable for larger systems.
My understanding of HATEOAS is that basically every resource should share information about what communication options it has and how the consumer can use those options to achieve their end goal.
Hypertext as the engine of application state (HATEOAS) is basically just a requirement to use the interaction model that has been used on the Web for decades quite successfully, and to offer the same functionality to applications. This enables applications to surf the Web in much the same way we humans do.
Great, but how does it work?
On the Web we use links and Web forms all the time. Through a Web form a server is able to teach a client what properties a certain resource supports or expects. But that's not all! The same form also tells your client where to send the request (the target URI), the HTTP method to use and, usually implicitly, the media type the payload needs to be serialized to when sending the request to the server. This, in essence, makes out-of-band API documentation unnecessary, as all the information a client needs to make a valid request is already given by the server.
On a typical Web site you might have a table of entries which offers the option to add new entries, or to update or delete existing ones. Usually such links are hidden behind fancy images, e.g. a dustbin for deleting an entry and a pencil for editing an existing entry, where the image represents an affordance. The affordance of certain elements makes it clear what you should do with them or what their purpose is: a button on a page wants to be pushed, a slider widget wants to be dragged, a text field waits for user input. As applications aren't that eager to work with images, a further concept is used instead: link relation names serve exactly this purpose. E.g. if you have a pageable collection consisting of multiple pages of, say, 25 entries each, you might be familiar with a widget containing arrows to page through that collection. A link here should usually be annotated with link relation names such as self, next, prev, first or last. The purpose of such links is quite clear; some others, like prefetch, which indicates that a resource can be loaded in the background early because it is very likely that the next action will request it, might be less intuitive at first. Such link relation names should be standardized or at least follow the Web Linking extension mechanism.
With the help of link relation names, a client that knows to look for the URI annotated with next will still work if the server decides to change its URI scheme, as it treats the URI as opaque.
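A tiny client-side sketch of that idea, assuming a HAL-style response where links are keyed by relation name (the entry URL and field names are invented):
import requests

def fetch_all_pages(entry_url):
    """Walk a paged collection by following the 'next' link relation,
    never constructing URIs by hand."""
    url = entry_url
    items = []
    while url:
        doc = requests.get(url,
                           headers={"Accept": "application/hal+json"}).json()
        items.extend(doc.get("_embedded", {}).get("items", []))
        # The server is free to change its URI scheme; we only rely on the
        # 'next' relation name being present (or absent on the last page).
        next_link = doc.get("_links", {}).get("next")
        url = next_link["href"] if next_link else None
    return items

# Hypothetical entry point:
# fetch_all_pages("https://api.example.com/orders")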
Of course, both client and server need to support the same media type, and that media type must be able to represent such capabilities. Plain application/json, for instance, is not able to provide such support. HAL JSON or JSON Hyper-Schema at least add support for links and link relation names to JSON-based documents, while hal-forms, halo+json (halform) and ion might be used to teach a client how a request needs to be created. The question here shouldn't be which media type to support but how many different ones you want to support, as the more media types your API is able to handle, the more likely it will be able to interact with arbitrary clients not under your control.
These concepts allow you to use the controls given in the server response to "drive your workflow" forward. In essence, what you, as an API designer, should do is design the interactions of a client with your API so that they follow a certain domain application protocol (as Jim Webber termed it) or state machine (as Asbjørn Ulsberg put it) that guides a client through its task, e.g. ordering from your shop API.
So, in short, HATEOAS is comparable to Web surfing for applications: by making use of named link relations and form-like media type representations, a client can take actions based solely on the response retrieved from the server, instead of having to bake external knowledge stemming from some reference documentation page (Swagger, OpenAPI or the like) into the application.
But how does HATEOAS benefit the consumer in practice then?
First, it does not have to consult any external documentation, other perhaps than the current media type specification, though support for well-known media types is usually already baked into popular frameworks or can at least be added through plugins or further libraries. Once the media type is understood and supported, interactions with all services that also support that media type are possible, regardless of their domain. This allows the same client implementation to be reused to interact with service A and service B out of the box. In RPC-like systems you'd need to integrate the API of service A first, and if you also want to interact with service B you need to integrate that API separately. It's most likely that these APIs are incompatible and thus don't allow reuse of the same classes.
Without knowing the URL for a resource, is the idea that the consumer can discover it by browsing the API, but will still have a hard dependency on the actual URL? Or is HATEOAS's purpose to leverage actions on a certain resource, i.e. the consumer knows the users endpoint but does not need to know the endpoints for the actions to take on the users resource, because those are provided by the API?
A client usually does not care about the URI itself; it cares about the content a URI may provide. Compare this to your typical browsing behavior: do you prefer a meaningful text that summarizes a link's content so you can decide whether to request that resource, or do you prefer parsing and dissecting a URI to learn what it might do? Minifying or obfuscating URIs will do you no favors in the latter case, though.
A further danger arises from URIs and resources that a client puts meaning into. A sloppy developer will interpret such URIs/resources and implement a tiny hack to interact with the service, assuming the URI/resource will remain static. E.g. it is not unreasonable to expect a URI like /api/users/1 to return some user-related data, and based on the response format a tiny Java class is written that expects to receive a field for username and one for email. If the server now decides to add additional data or rename its fields, the client suddenly will no longer be able to interact with that service. And rest assured that in practice, especially in the EDI domain, you will have to interact with clients that were never meant to interact with the Web, or where programmers implemented their own JSON framework that can't cope with changing orders of elements or can't handle additional optional fields, even though the spec contains notes on those issues. Fielding claimed that:
A REST API should never have “typed” resources that are significant to the client. Specification authors may use resource types for describing server implementation behind the interface, but those types must be irrelevant and invisible to the client. The only types that are significant to a client are the current representation’s media type and standardized relation names. (Source)
Instead of typed resources, content-type negotiation should be used to support interoperability between the different stakeholders in the network.
As such, the URI itself is just the identifier of a resource, mainly used to learn where to send a request. With the help of meaningful link relation names, a client that wants to send an order to the service knows to look for, say, http://www.acme.com/rel/orders and simply looks up the URI that is either annotated with that Web Linking extension relation name or that has the URI attached to it. Whether the link relation name is just an annotation (i.e. a further attribute on the URI element) or the URI is attached to the link relation name (i.e. as an embedded object of the link relation name) depends on the actual media type. This way, if a server ever decides to change its URI scheme or move resources around, for whatever reason, the client will still be able to find the URI, and it couldn't care less which characters are present in the URI; it just treats the URI as an opaque thing. The nice thing here is that a URI can be annotated with multiple link relation names simultaneously, which allows a server to "offer" that URI to clients that support different link relation names. In the case of forms, the URI to send the request to is typically contained in the action attribute of the form element or the like.
As you can hopefully see, with HATEOAS there is no need for a hard dependency on URIs, though there may be a dependency on the link relation name. A client still needs URIs to learn where to send requests, but by looking up the URI via its accompanying link relation name you make the handling of URIs much more dynamic, as it allows the server to change a URI whenever it wants or has to.

How RESTful is using subdomains as resource identifiers?

We have a single-page app (AngularJs) which interacts with the backend using REST API. The app allows each user to see information about the company the user works at, but not any other company's data. Our current REST API looks like this:
domain.com/companies/123
domain.com/companies/123/employees
domain.com/employees/987
NOTE: All ids are GUIDs, hence the last end-point doesn't have company id, just the employee id.
We recently started working on enforcing the requirement that each user has access only to information related to the company where the user works. This means that on the backend we need to track who the logged-in user is (a simple auth problem) as well as determine which company's information is being accessed. The latter is not easy to determine from our REST API calls, because some of them do not include a company id, such as the last one shown above.
We decided that instead of tracking company ID in the UI and sending it with each request, we would put it in the subdomain. So, assuming that ACME company has id=123 our API would change as follows:
acme.domain.com
acme.domain.com/employees
acme.domain.com/employees/987
This makes identifying the company very easy on the backend and requires only minor changes to the REST calls from our single-page app. However, my concern is that it breaks the RESTfulness of our API. It may also introduce some CORS problems, but I don't have a use case for that now.
I would like to hear your thoughts on this and how you dealt with this problem in the past.
Thanks!
In a similar application, we did put the 'company id' into the path (every company-specific path), not as a subdomain.
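A minimal sketch of that path-based scoping, assuming Flask and a placeholder for however you already resolve the caller's company (the X-Company-Id header is invented; a real app would use a session or token):
from flask import Flask, jsonify, abort, g, request

app = Flask(__name__)

@app.before_request
def load_current_company():
    # Placeholder: a real app would resolve this from the auth layer.
    g.company_id = request.headers.get("X-Company-Id")

@app.route("/companies/<company_id>/employees/<employee_id>")
def get_employee(company_id, employee_id):
    # Every company-specific path carries the company id, so the tenant
    # check is a single comparison -- no subdomain parsing needed.
    if company_id != g.company_id:
        abort(403)
    return jsonify({"id": employee_id, "companyId": company_id})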
I wouldn't care a jot about whether some terminology enthusiast thought my design was "RESTful" or not, but I can see several disadvantages to using domains, mostly stemming from the fact that the world tends to assume that the domain identifies "the server" and the path is how you find an item on that server. There is a certain amount of extra stuff you'll have to deal with when using multiple domains which you wouldn't with paths:
HTTPS - you'd need a wildcard certificate instead of a simple one
DNS - you're either going to have wildcard DNS entries, or your application management is now going to involve DNS management
All the CORS stuff which you mention - may or may not be a headache in your specific application - anything which is making 'same domain' assumptions about security policy is going to be affected.
Of course, if you want lots of isolation between companies, and effectively you would be as happy running a separate server for each company, then it's not a bad design. I can't see it's more or less RESTful, as that's just a matter of viewpoint.
There is nothing "unrestful" in using subdomains. URIs in REST are opaque, meaning that you don't really care about what the URI is, but only about the fact that every single resource in the system can be identified and referenced independently.
Also, in a RESTful application you never compose URLs manually; you traverse the hypermedia links you find at the API entry point and in all the returned responses. Since you don't need to manually compose URIs, from the REST point of view it makes no difference how they look. Having a URI such as
//domain.com/ABGHTYT12345H
would be as RESTful as
//domain.com/companies/acme/employees/123
or
//domain.com/acme/employees/smith-charles
or
//acme.domain.com/employees/123
All of those are equally RESTful.
But... I like to think of usable APIs, and when it comes to usability, having readable, meaningful URLs is a must for me. Following conventions is also a good idea. In your particular case there is nothing unRESTful about the route, but it is unusual to find that kind of behaviour in an API, so it might not be the best practice. Also, as someone pointed out, it might complicate your development (not specifically the CORS part, though; that one is easily solved by sending a few HTTP headers).
So, even if I can't see anything non-REST about your proposal, the conventions elsewhere would argue against subdomains in an API.

Suggestions on addressing REST resources for API

I'm a new REST convert and I'm trying to design my first (hopefully) RESTful API, and here is my question about addressing resources.
Some notes first:
The data described here are 3D render jobs.
A user (graphics company) has multiple projects.
A project has multiple render jobs.
A render job has multiple frames.
There is a hierarchy enforced in the data (one render job belongs to one project, which belongs to one user).
How's this for naming my resources...?
https://api.myrenderjobsite.com/
/users/graphicscompany/projects
/users/graphicscompany/projects/112233
/users/graphicscompany/projects/112233/renders/
/users/graphicscompany/projects/112233/renders/889900
/users/graphicscompany/projects/112233/renders/889900/frames/0004
OR a shortened address for renders?
/users/graphicscompany/renders/889900
/users/graphicscompany/renders/889900/frames/0004
OR should I shorten (even more) the address if possible, omitting the user when not needed...?
/projects/112233/
/renders/889900/
/renders/889900/frames/0004
THANK YOU!
Instead of thinking about your API in terms of URLs, try thinking of it more like pages and links between those pages.
Consider the following:
Will it be reasonable to create a resource for users? Do you have 10, 20 or 50 users? Or do you have 10,000 users? If it is the latter, then obviously creating a single resource that represents all users is probably not going to work too well when you do a GET on it.
Is the list of users a reasonable root URL, i.e. the entry point into your service? Should the list of projects that belong to a GraphicsCompany be a separate resource, or should it just be embedded into the GraphicsCompany resource? You can ask the same question of each of the one-to-many relationships that exist. Even if you do decide to merge the list of projects into the GraphicsCompany resource, you may still want a distinct resource to exist simply for the purpose of being able to POST to it in order to create a new project for that company.
Using this approach you should be able to get a good idea of most of the resources in your API and how they are connected, without having to worry about what your URLs look like. In fact, if you do the design right, then any client application you write will not need to know anything about the URLs that you create. The only part of the system that cares what the URL looks like is your server, so that it can dispatch the request to the right controller.
The other significant question you need to ask yourself is what media type you are going to use for these resources. How many different clients will need to access these resources? Are you writing the clients, or is someone else? Should you attempt to reuse an existing standard like XHTML and classes/microformats? Could you squeeze most of the information into Atom? Maybe Atom with some extra namespaces, like GData does? Or is this only going to be used internally, so you can just create your own media types, like application/vnd.YourCompany.Project+xml, application/vnd.YourCompany.Render+xml, etc.?
There are many things to think about when designing a REST api, don't get hung up on what your URLs look like and you should really try to avoid doing "design by URL".
Presuming that you authenticate to the service, I would use the 1st option, but remove the user, particularly if the user is the currently logged in user.
If user actually represents something else (like a client), I would include it, but not if it simply designates the currently logged-in user. I agree with StaxMan, though: don't worry too much about squeezing the paths, as readability is key in RESTful APIs.
Personally I would not try to squeeze the paths too much; some amount of redundant information is helpful both for quickly seeing what a resource is, and for future expansion.
Generally users won't be typing paths anyway, so verbosity is not all that bad.