Let's say I have an API to get a product given a product ID, e.g. api/products/<productid>.
What would be a REST way to get relevant products given a product ID? I think I could use a query on the same endpoint, e.g. api/products?id=<productid>, but I'm not sure whether this is ideal or whether it might be confusing.
The standard practice for building the API URL of a single resource is api/products/<productid>.
For parameters other than the ID (generally for filtering or searching purposes), query parameters are used, e.g. api/products?name=somename.
The detailed resource naming guide can be found here
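To make those two conventions concrete, here is a minimal sketch (Flask is used purely for illustration; the sample data and the related_to parameter name are my own assumptions, not anything REST prescribes):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical in-memory data, just for the sketch.
PRODUCTS = {
    "1": {"id": "1", "name": "Widget"},
    "2": {"id": "2", "name": "Widget Pro"},
}
RELATED = {"1": ["2"], "2": ["1"]}  # which products are considered "relevant"

# Path segment identifies a single resource: api/products/<productid>
@app.route("/api/products/<product_id>")
def get_product(product_id):
    return jsonify(PRODUCTS[product_id])

# Query parameters filter or search the collection:
#   /api/products?name=somename     (filtering, as described above)
#   /api/products?related_to=1      (one possible spelling for "relevant products")
@app.route("/api/products")
def list_products():
    related_to = request.args.get("related_to")
    if related_to is not None:
        return jsonify([PRODUCTS[pid] for pid in RELATED.get(related_to, [])])
    name = request.args.get("name")
    return jsonify([p for p in PRODUCTS.values() if name is None or p["name"] == name])
```

A dedicated sub-resource such as api/products/<productid>/related would be just as valid a spelling; as the next answer points out, REST itself doesn't prescribe either.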
What would be the REST way to get relevant resources?
Answer this question: how would you do it on the web?
You might have a web page for the product, and then a link from that page to a new page describing the related resources, where each entry in that page would include a link to the product page of that specific resource.
What we need to define for the client (or make it easy to discover) is how to find the links.
In the case of the web, we are typically using HTML representation of our pages. HTML is special in that it has a standardized understanding of links. For human consumers, we surround the link with context (typically, the contents of the A element); for machines, we should be a bit more precise about defining the rule.
For representations that don't have a standardized understanding of links, we need to do something else. The most common answer is to use Web Linking, which is to say we put a description of the relationship between two URIs into the headers of the response.
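For example, a client could discover a related-products link from the Link header instead of assembling the URI itself. A minimal sketch (the host, path, and "related" relation value are hypothetical; requests exposes parsed Link headers via response.links):

```python
import requests

# Fetch a product; suppose the (hypothetical) server answered with a header like
#   Link: </api/products?related_to=42>; rel="related"
response = requests.get("https://api.example.com/api/products/42")

links = response.links  # requests parses the Link header into a dict keyed by rel
if "related" in links:
    related = requests.get("https://api.example.com" + links["related"]["url"])
    print(related.json())
```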
REST doesn't care what spelling conventions you use for your resource identifiers, so long as they are consistent with the production rules defined in RFC 3986. So you can choose any spelling convention you like.
For instance, it is common to use spelling conventions that include key value pairs in the query part of the URI, because HTML GET forms can be used to describe links with that shape, which simplifies certain use cases.
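For instance, a GET form whose fields are name and category produces a link whose query part is just the url-encoded key-value pairs; a quick sketch of that serialization (the field names are made up):

```python
from urllib.parse import urlencode

form_fields = {"name": "somename", "category": "widgets"}  # hypothetical form fields
uri = "/api/products?" + urlencode(form_fields)
print(uri)  # -> /api/products?name=somename&category=widgets
```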
Since REST doesn't care about your spelling conventions, you can use the extra freedom to address other problems (what spellings are easy for people reading the logs? what spellings are easy for people documenting the API? and so on....)
Related
How architecturally sound and in line with industry standards is nesting resource representations in REST APIs, especially when it comes to nested lists of resources (like the books of an author)?
I'm interested in finding links to authoritative sources that answer this question.
The authoritative source for REST is the dissertation of Roy Fielding, based on work he did during the standardization of HTTP/1.1 (RFC 2068, RFC 2616, etc) in the 1990s.
REST defines resource ("Any information that can be named can be a resource..."), and requires that all resources understand messages the same way (uniform interface) but does not actually constrain your resource model.
"RESTful", historically, is context sensitive; in practice it means something like "more like REST than our current designs". In the web services community, it meant "more like REST than WS-* and SOAP". In Rails, it meant more like REST than the resource models that were recommended prior to Rails 1.2. And so on.
If what you are interested in is describing the relationship between a resource that is a collection and a resource that is an item in that collection, then the standard you want is RFC 6573.
But again, it doesn't tell you how to design the resources, or how to design the identifiers for those resources -- it just tells you how to indicate a relationship between them.
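As a hedged illustration of what that buys you (the URIs are made up; only the "item" and "collection" relation names come from RFC 6573), the headers might look like this:

```python
# Only the "item" and "collection" relation names come from RFC 6573;
# the URIs are hypothetical.
collection_headers = {
    "Link": '</api/products/1>; rel="item", </api/products/2>; rel="item"'
}
item_headers = {
    "Link": '</api/products>; rel="collection"'
}

print(collection_headers["Link"])
print(item_headers["Link"])
```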
As far as I understand, a web resource is something abstract, identified by an IRI and accessible through the web. What dereferencing the IRI gives back is a representation of the current state of the identified resource; this is why it is called representational state transfer. I don't remember any standard that discusses nested resources. Maybe RDF is the closest to what you are looking for. In practice, if we follow RDF concepts, then to answer a GET request the REST API responds with a representation of an RDF subgraph starting at the resource identified by the given IRI, and it can be any level deep. Nestedness is not something I would consider here, because it is a graph, not a hierarchy; it is sort of expanding relationships between resources, or returning hyperlinks the API consumers can follow to do the exact same thing.
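To make the "subgraph of any depth" idea concrete, here is a hedged JSON-LD-flavoured sketch; the property names and URIs are invented for illustration:

```python
import json

# A GET on /products/42 could return a subgraph rooted at that resource,
# with related resources expanded inline or left as hyperlinks to follow.
# All property names and URIs are hypothetical.
representation = {
    "@id": "/products/42",
    "name": "Widget",
    "relatedTo": [
        {"@id": "/products/7", "name": "Widget Pro"},  # expanded one level deep
        {"@id": "/products/13"},                        # just a link to follow
    ],
}
print(json.dumps(representation, indent=2))
```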
Not sure if this helps. I did not find any RFC beyond what VoiceOfUnreason's answer contains. I remember reading explicitly about web resources and identifying real things with hashtags or non-dereferenceable IRIs in an RFC 5+ years ago, but I have no idea which one it was. Maybe it was the Lanthaler dissertation or the SemWeb document VoiceOfUnreason suggested. What is certain is that it was somehow connected to the semantic web and RDF.
REST's identification of resources constraint requires that resources are identifiable so that they can be accessed and manipulated via generic interfaces. On the Web, resources are identified by IRIs [44]. Since a resource may represent concepts which cannot be serialized into a byte stream (e.g., persons or a feeling), resources are not manipulated directly. Instead, REST is built on the concept of manipulation of resources through representations; i.e., an additional layer of indirection in the form of resource representations is introduced.
https://www.markus-lanthaler.com/research/third-generation-web-apis-bridging-the-gap-between-rest-and-linked-data.pdf
On the Semantic Web, all information has to be expressed as statements about resources, like the members of the company Example.com are Alice and Bob or Bob's telephone number is "+1 555 262" or this Web page was created by Alice. Resources are identified by Uniform Resource Identifiers (URIs) [RFC3986]. This modelling approach is at the heart of Resource Description Framework (RDF) [RDFPrimer]. A nice introduction is given in the N3 primer [N3Primer].
Using RDF, the statements can be published on the Web site of the company. Others can read the data and publish their own information, linking to existing resources. This forms a distributed model of the world. It allows the user to pick any application to view and work with the same data, for example to see Alice's published address in your address book.
https://www.w3.org/TR/cooluris/#semweb
So what I want to say is that what you see in the HTTP response is not the resource itself, just a representation of it and of its relationship to other resources.
REST does not have a constraint which tells you how verbose that response must be. It just tells you that you must use hyperlinks to connect resources and that you must use standard MIME types and document your API. At least this is how I interpret the uniform interface constraint.
I think the question is very good, because this part of the architecture is open, and there have been many questions in the past years asking how to use URIs for querying nested resources. The answer is always that REST does not cover it, and the URI and URI template standards don't cover it either. There are standards like OData and Hydra which have suggestions, but it is just up to you. Your problem is connected to this, because it asks how verbose a response to such a query can be. That is not covered either, as far as I can tell, but what is certain is that it can and must contain at least hyperlinks to other resources. RDF allows describing several resources in a single document, so if we extend the RDF approach to REST, which does not say this is forbidden, then I guess we can do it.
From a practical perspective, a collection, for example, is a sort of nested resource too, and if the API consumer had to send a dedicated request for every collection item just to know basic things like product names, then it would be wasting resources. Normally we answer this kind of request with a single HTTP response, or with multiple ones of 25-50-100 items per page. It does not make much sense from a usability and scalability perspective to give the consumer hyperlinks for each item and force them to follow those links one by one. In fact, we like to respond with the exact view model the consumer needs and design APIs this way. I think the same is true for nested properties as well. From an RDF perspective these responses represent a subgraph of a massive resource graph, which is managed by the REST service and, for example, by RDF vocabulary maintainers like OWL, Schema.org, etc.
So to give a one-sentence answer: the representation of "nested resources" is not covered by REST, and as far as I know not covered by standards like HTTP and URI either, but it is currently best practice to use them, and the MIME types we frequently use for REST, e.g. HAL+JSON or RDF/JSON-LD, support nested representations too, so I would say yes.
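For example, a hedged sketch of a HAL+JSON-style collection page with embedded items (the URIs, page size and field names are made up):

```python
import json

# Hyperlinks live in "_links", nested item representations in "_embedded".
# URIs, page size and field names are hypothetical.
page = {
    "_links": {
        "self": {"href": "/api/products?page=1"},
        "next": {"href": "/api/products?page=2"},
    },
    "_embedded": {
        "products": [
            {"name": "Widget",     "_links": {"self": {"href": "/api/products/1"}}},
            {"name": "Widget Pro", "_links": {"self": {"href": "/api/products/2"}}},
        ],
    },
}
print(json.dumps(page, indent=2))
```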
A question for this already exists, but is more tech-focused and doesn't have answers: Representing a request body on HATEOAS link
I like HATEOAS. I love using it in my frontend to check if I can perform some actions by checking if a link exists instead of having business logic.
But what I do not understand is how HATEOAS can truly be useful in other scenarios. What if you have an "AddItemToBasket" link which needs a request body with some properties in it? The frontend would still need to know what this request body looks like, but HATEOAS doesn't tell you this.
This means you still have a dependency on API knowledge. I think lots of applications solve this problem with generated API clients/GraphQL, but that makes HATEOAS a hard sell.
Why use HATEOAS if we can't rely on just the URL and HTTP method, because it doesn't offer the full picture?
REST builds on standards (the uniform interface constraint) and currently there is no standard way to do this. There is a Hydra W3C Community Group writing a standard about how to describe hypermedia APIs. They use RDF, standard vocabs like schema.org, and you can write your API-specific vocab, which they call the documentation. As far as I understand their model, you can give parameters in the documentation for operations represented by hyperlinks. You can use, for example, XSD to add constraints like numbers, etc. to the parameters. It takes a lot more effort than usual to write this kind of formal documentation, and as far as I understand there are currently no general REST clients which could profit from it, so it does not make much sense at the moment to write such an API, but it is possible if you want to.
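To give a feel for it, here is a rough, hedged sketch of the shape such a Hydra-style description might take for the AddItemToBasket case; the class and property names are illustrative, so check the Hydra spec for the exact vocabulary terms:

```python
import json

# The class/property names are illustrative; consult the Hydra spec for
# the exact vocabulary terms.
documentation_fragment = {
    "@context": {
        "hydra": "http://www.w3.org/ns/hydra/core#",
        "vocab": "https://api.example.com/vocab#",   # hypothetical API vocab
    },
    "@id": "vocab:BasketItem",
    "@type": "hydra:Class",
    "hydra:supportedOperation": [{
        "@type": "hydra:Operation",
        "hydra:method": "POST",
        "hydra:expects": "vocab:BasketItem",
    }],
    "hydra:supportedProperty": [
        {"hydra:property": "vocab:productId", "hydra:required": True},
        {"hydra:property": "vocab:quantity",  "hydra:required": True},
    ],
}
print(json.dumps(documentation_fragment, indent=2))
```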
As for why to use HATEOAS: it makes your API flexible and backward compatible. For example, if somebody does not have permission for an operation, you simply don't send a hyperlink for it in the response. You can always add new operations, and existing clients don't have to support them; they can just focus on what they already know, and they won't break because something extra is added. They don't have to know about the URI structures and the methods, which can freely change if the only thing they depend on is the operation type and the parameters.
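A small sketch of that idea (the representation shape and the permission flag are hypothetical): the link for an operation is only present when the caller is actually allowed to perform it, so the client tests for the link instead of re-implementing the permission rule.

```python
# The representation shape and the permission flag are hypothetical.
def basket_representation(basket_id: str, can_add_items: bool) -> dict:
    links = {"self": {"href": f"/api/baskets/{basket_id}"}}
    if can_add_items:
        # The client checks for this link instead of re-implementing the rule.
        links["addItem"] = {"href": f"/api/baskets/{basket_id}/items", "method": "POST"}
    return {"id": basket_id, "_links": links}

print(basket_representation("7", can_add_items=True))
print(basket_representation("7", can_add_items=False))
```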
I wonder if it is possible to configure MediaWiki (or other wiki tools) as a modular predefined wiki. For instance, on a regular wiki page one can freely edit sections, text, everything.
I am looking for a solution that predefines a number of sections (or modules) that can be added to each wiki page. Then users are free to edit inside those sections within their predefined formats.
Hope someone can help, thanks.
As for MediaWiki, there is at least one extension that can work that way: Semantic Forms, usually used together with Semantic MediaWiki (though that is not necessary). With SF, you define one or more templates that receive the data entered in the form, and the form can be divided into sections.
A more lightweight solution might be using one of the many boilerplate extensions available.
Either way, with a wiki you can never force your users to follow a certain scheme. The whole philosophy, which makes wikis unique among collaborative tools, is that the users, not you, create not only the content but also the structure for the content!
The former Semantic Forms is now called Page Forms (https://www.mediawiki.org/wiki/Extension:Page_Forms); it is not dependent on SMW and can also make use of the Cargo extension (https://www.mediawiki.org/wiki/Extension:Cargo).
I would disagree that wiki users cannot or should not be forced to follow a scheme for some types of information, though the default is that they do control the categories and namespaces and can create those at will as the data evolves. All this means though is that you manage such issues socially rather than with complex permissions structures, i.e. someone undoes your change and says "do it this way instead". So it's a different kind of forcing, but, still, someone has to make sure categories don't proliferate with bad names, capitalization, etc.
The typical use of forms data is when it must be used to satisfy some legal or professional requirement (say logging for what reason a change was made for Sarbanes-Oxley, or logging what precedents were consulted for logging legal time), or will be providing input strictly to some application (like maps). It would not be a good idea to rigorize literally every page of a wiki this way.
My question is the following: when marking up an organization, business or brand with microdata and schema.org, should I use its official webpage URL as a global identifier? Is there any kind of better reference that I could use (like IMDB for movies or actors)?
I'd like to know if there's any standard, convention or common practice recommended.
It would be better to use some kind of controlled vocabulary (e.g. VIAF) that uniquely identifies the organization in question.
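As a hedged illustration (the organization, URLs and VIAF number are made up), the official homepage can stay as the url property while itemid/sameAs carry the controlled-vocabulary identifier:

```python
# The organization, URLs and VIAF number are made up; itemid/sameAs carry
# the controlled-vocabulary identifier, while "url" stays the official site.
markup = """
<div itemscope itemtype="https://schema.org/Organization"
     itemid="https://viaf.org/viaf/000000000/">
  <span itemprop="name">Example Corp</span>
  <link itemprop="url" href="https://www.example.com/">
  <link itemprop="sameAs" href="https://viaf.org/viaf/000000000/">
</div>
"""
print(markup)
```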
The choice of identifiers is part of the explanation of REST. http://www.infoq.com/articles/rest-introduction
Inspect closely the first principle (for convention), though it is framed in the broader terms of resources rather than being specific to organizations/businesses/brands. REST is the thesis that started this trend. Microformats accordingly makes use of rel="profile" link tags. The concept is further expanded at http://purl.org/ so, if IMDB, for example, switches to W3 like W3C did, then in the future this will minimize the impact on the application you're making right now. The RDFa Dublin Core vocab's use of this can be seen in the profile at http://www.w3.org/2011/rdfa-context/rdfa-1.1.html.
(For references) Applications serving the general public, or open initiatives such as academic support, might be better served by these profiles; however, when operating a site for commercial purposes, building application-specific "custom" profiles (considering the various legal matters identified) that perform reliably with PURLs might be advantageous for building a credible reputation.
Finally, WHATWG considers prefixes too advanced and HTML5 for newbies only, so support for W3's XHTML xmlns/RDFa prefixes is dropped in microdata. This compels us to reuse long-form URLs for schema.org business/org/brand resources with microdata syntax. The "custom" profile then serves as mere good will when picking up from where tasks were wrapped up; otherwise a wider variety of items might appear in the content than actually intended, owing to mix-ups.
The good news is that Google supports schema.org usage as a vocab in RDFa syntax. So, considering RDFa an already "living" standard that originated in a W3 spec, the way to go is: depending on the (non-)commercial nature of the application, define a PURL for scope namespaces, profiles exhibiting prefixes, and syntax (official web-page or substitute IRIs) as appropriate for the target processors. Currently no vocab besides schema.org is processed as microdata, and schema.org in RDFa isn't supported by anybody but Google!
If you're selling widgets, we all know that having "Bob's Widgets" in the title and the H1 gives you a better ranking in Google when people search for "widgets".
But what if, as someone explained to me the other day, their product is known by different names in different parts of the world?
In the US, it's called a Widget. In Canada, it's called a Flidget. In Australia, it's called a Zidget. There's really no official name for it, just informal names.
Meta-tags are no problem, but apart from that, what's the best way to cope with that situation? Just make separate pages? You can't have 3 H1s on the page. One H1 which says "Widgets (aka Flidgets, Zidgets)"?
Or do I just trust that Google is smart enough and some magical taxonomy database groups those three words together as the same thing?
EDIT: This question got downvoted simply because it's about SEO? How bizarre. If you even bother to read the question, you can see I'm not trying to game the system or get away with anything. I have a genuinely interesting question and a valid client need.
Please note also, that I always use semantic HTML, I am well aware of how search engine rankings work, and I'm not trying to get away with anything shady.
If my client was selling beer, I would simply use semantic HTML to put the word "beer" first and foremost. If I was selling beer to French people, I would make another page in French and do the same with "biere". But imagine for a second that beer isn't called "beer" in other English-speaking nations. Imagine it's called "reeb". How do I correctly, semantically code an English-language page when different English-language users will be searching using a different string, but searching for the same thing.
HTML meta-tags were originally created for the purpose of embedding exactly such metadata into a webpage. But because of the SEO industry and the commercialization of the web, meta-tags like 'keywords' are no longer used by major search engines.
With all of the advances in page ranking algorithms and intelligent search robots over the years, there's really not much to do in terms of active 'search engine optimization' for legitimate websites. In today's search environment, all you have to do is optimize your site for your visitors, and it will automatically be optimized for searching.
So you can passively optimize your site's ranking by doing any (or all) of the following:
Use good spelling and writing etiquette (like not writing your entire site in caps or text-message-speak)
Format your pages using proper markup. (Title your document, mark your headings with H1/H2/etc., delimit your paragraphs, and so on and so forth.)
Abide by established web standards and write well-formed code.
Weed out broken links and make sure your site works properly.
Don't use pop-ups, don't cover your site with banner ads, and don't otherwise bombard visitors with advertising
Don't link to disreputable websites
Simply put, make your site as user-friendly and as accessible as possible. If your site is useful to visitors and provides valuable content, most major search engines like Google or Yahoo! are smart enough to rank it fairly. Your ranking may be modest at first. But if you're genuinely supplying quality content then, as your site becomes better established on the web, other sites will start linking to you, increasing your search ranking.
And if other webpages linking to your site use the various names & nicknames your product is referred to by, then your site will also be associated with those names/keywords (that's how Google Bombing works). Google also tracks synonymous search terms and is even smart enough to recommend related/alternative search terms in some cases.
On the other hand, if you're creating a spam site or the 10 millionth affiliate marketing website with the same exact products and content as the other 9,999,999 sites of the same exact nature, then expect your search engine ranking to be reasonably poor.
It's generally only websites with no original content and that provide no legitimate value to visitors that require active (black hat) SEO techniques to gain a decent ranking--polluting search results in the process. Otherwise, if you're actually building a useful website, then just optimize it for your visitors and let Google/Yahoo! do their job.
The anchor text of your inbound links is a lot more important than the tags you use. So try getting links to your page with both "beer" and "reeb". As long as you'll get enough links with both terms, you'll do well in SERPs, no matter the keywords you use in it.
One option is to localize pages for the different target regions you are interested in.
If you use a local domain, Google will give it priority in default searches from that country. When I hit www.google.com, it redirects me to www.google.com.mx, and any search I do tends to rank results from Mexican domains highly. I actually have to change a couple of options when I don't want that behavior.
I also think Google has an option to map parts of the site to a region, so you can keep the single domain.
Update: Regarding the beer example, you can localize per country (which is what I mention above). Actually it's not that special a need, since British English and US English have their differences.
The talk has been language-agnostic, but consider how .NET handles resources. Let's say the current request is being processed for en-GB and you look for a resource (e.g. a text, an image, etc.). It will first try to find the resource for the specific culture, en-GB; if it isn't found, it will look under the more general en (and then in the default resource file).
This allows you to selectively localize only what you really need in the more specific resource files. If you only need to localize the resource with the key beerName, you can just configure that for the specific languages and leave the rest.
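A hedged sketch of that fallback order (the resource tables and keys are made up), analogous to how .NET walks from the specific culture to the neutral culture to the default:

```python
# The resource tables and keys are made up; only the fallback order matters.
RESOURCES = {
    "default": {"beerName": "beer", "greeting": "hello"},
    "en":      {},                      # nothing overridden at the neutral level
    "en-GB":   {},                      # nothing overridden for en-GB either
    "en-AU":   {"beerName": "reeb"},    # only the key that really differs
}

def lookup(key: str, culture: str) -> str:
    # e.g. "en-AU" -> try "en-AU", then "en", then the default resources
    for table_name in (culture, culture.split("-")[0], "default"):
        table = RESOURCES.get(table_name, {})
        if key in table:
            return table[key]
    raise KeyError(key)

print(lookup("beerName", "en-AU"))  # reeb
print(lookup("beerName", "en-GB"))  # beer (falls through to the default)
```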