What does "v1/se" stands for in WEB APIs? - api

Many web URLs contain the string "v1/se" in them. OK for "v1", that is the version... but what about "se"?

RESTful API HATEOAS

I've come to the conclusion that building a truly RESTful API, one that uses HATEOAS, is next to impossible.
Every piece of content I've come across either fails to illustrate the true power of HATEOAS
or simply does not explicitly mention the inherent pain points that come with the dynamic nature of HATEOAS.
What I believe HATEOAS is all about:
From my understanding, a truly HATEOAS API should have ALL the information needed to interact with the API, and while that is possible, it is a nightmare to use, especially across different stacks.
For example, consider a collection of resources located at "/books":
{
  "items": [
    {
      "self": "/book/sdgr345",
      "id": "sdgr345",
      "name": "Building a RESTful API - The unspoken truth",
      "author": "Elad Chen ;)",
      "published": 1607606637049000
    }
  ],
  // This describes every field needed to create a new book,
  // just like HyperText Markup Language (i.e. HTML) rendered on the server does with forms
  "create-form": {
    "href": "/books",
    "method": "POST",
    "rel": ["create-form"],
    "accept": ["application/x-www-form-urlencoded"],
    "fields": [
      { "name": "name", "label": "Name", "type": "text", "max-length": "255", "required": true },
      { "name": "author", "label": "Author", "type": "text", "max-length": "255", "required": true },
      { "name": "published", "label": "Publish Date", "type": "date", "format": "dd/mm/YY", "required": true }
    ]
  }
}
Given the above response, a client (such as a web app) can use the "create-form" property to render an actual HTML form.
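To make that concrete, here is a rough TypeScript sketch of what such a client might do with the "create-form" property. The interfaces mirror the ad-hoc shape of the example response above; this is not a standard media type, just an illustration.

interface FormField {
  name: string;
  label: string;
  type: string;
  required: boolean;
}

interface CreateForm {
  href: string;
  method: string;
  fields: FormField[];
}

// Turn the hypermedia form description into an actual HTML form,
// the same way a browser renders server-sent HTML forms.
function renderCreateForm(form: CreateForm): string {
  const inputs = form.fields
    .map(f => `<label>${f.label} <input name="${f.name}" type="${f.type}"${f.required ? " required" : ""}></label>`)
    .join("\n  ");
  return `<form action="${form.href}" method="${form.method}">\n  ${inputs}\n  <button>Create</button>\n</form>`;
}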
What value do we get from all this work?
The same value we've been getting from HTML for years.
Think about it: this is exactly what hypertext is all about, and what HTML was designed for.
When a browser hits "www.pizza.com", it has no knowledge of the other paths a user
can visit. It does not concatenate strings to produce a link to the order page ("www.pizza.com/order"); it simply renders anchors
and navigates when a user clicks them. This is what allows web developers to change the path from "/order" to "/shut-up-and-take-my-money" without changing any client (browsers).
The same idea holds for forms: browsers do not guess the parameters needed to order a pizza, they simply render a form and its inputs and handle its submission.
I have seen too many lines of code, in front-ends and back-ends alike, that build strings
like "https://api.com" + "/order". You don't see browsers do that, right?
The problems with HATEOAS
Given the above example (the "/books" response), in order to create a new book, clients are expected to parse the response to leverage the true power of this RESTful API; otherwise, they risk guessing what the names of the fields are, which of them are required, what their expected types are, and so on.
Now consider having two clients within your company that use this API,
one for the web (browsers) written in JS, and another for mobile (say an Android app) written in Java. These can be published as SDKs, hopefully making integration easier for 3rd-party consumers.
But as soon as the API is used by clients outside your control, say a 3rd-party developer with an affinity for Python who wants to create a new book,
that developer is REQUIRED to parse such a response to figure out what the parameters are, their names, the URL to send inputs to, and so on.
In all my years of development I have yet to come across an API such as the one I have in mind.
I have a feeling this type of API is nothing more than a pipe dream, and I was hoping to understand whether my assumptions are correct, and what downsides it brings, before starting the implementation phase.
P.S.
In case it's not clear, this is exactly what a HATEOAS-compliant API is all about: when the fields needed to create a book change, clients adapt without breaking.
On the Hypermedia Maturity Model (HMM), the example you give is at Level 0. At this level, you are absolutely correct about the problems with this approach. It's a lot of work, and developers are probably going to ignore it and hard-code things anyway. However, with a generic hypermedia-enabled media type, not only does all that extra work go away, it actually reduces the work for developers.
Let's take a step back for a moment and consider how the web works. There are three main components: the web server, the web browser, and the driver (usually a human user). The web server provides HTML, which the web browser executes to present a graphical user interface, which the driver can use to follow links and fill out forms. The browser uses the HTML to completely abstract from the driver all the details about how to present the form and how to send it over HTTP.
In the API world, this concept of the generic browser that abstracts away the media type and HTTP details hasn't taken hold yet. The only one I know about that is both active and high quality is Ketting. Using a browser like Ketting removes all of the extra work the developer would have to put into making use of all that hypermedia. A hypermedia browser is like the SDKs that API vendors often provide, except that it works for any API. In theory, you can link from one API to another completely unrelated API. APIs would no longer be islands; they would become a web.
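For illustration, here is a minimal sketch of what using Ketting might look like. The API root URL and the "books" link relation are made up, and exact method names vary a bit between Ketting versions, so treat this as a rough outline rather than a reference.

import { Ketting } from 'ketting';

const client = new Ketting('https://api.example.org/');   // hypothetical API root

// Start at the home document and follow a link relation instead of
// concatenating URL strings.
const books = await client.go().follow('books');
const state = await books.get();   // recent Ketting versions return a State object here
console.log(state.data);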
The thing that makes hypermedia browsers possible is general-purpose, hypermedia-enabled media types. HTML is of course the most successful and famous example, but there are JSON-based media types as well. Some of the more widely used examples are HAL and Siren.
The higher level a media type is on the Hypermedia Maturity Model, the more a generic browser can do to abstract away the media type, URIs, and HTTP details. Here's a brief explanation; check out the blog post linked above for more details and examples.
Level 0: At this level, hypermedia is encoded in an ad-hoc way. A browser can't do much with this because every API might encode things a little differently. At best a browser can use heuristics or AI to guess that something is a link or a form and treat it as such, but generally HMM Level 0 media types are intended for developers to read and interpret. This leads to many of the challenges you identified in your question. Examples: JSON and XML.
Level 1: At this level, links are a first class feature. The media type has a well defined way to represent a link. A browser knows unambiguously what is to be interpreted as a link and can provide an interface to follow that link without the user needing to be concerned about URI or HTTP. This is sufficient for read-only APIs, but if we need the user to provide input, we don't have a way to represent a form-like hypermedia control. It's up to a human to read the documentation or an ad-hoc form representation to know how to submit data. Examples: HAL, RESTful JSON.
Level 2: At this level, forms (or form-like controls) are a first-class feature. The media type has a well-defined way of representing a form-like hypermedia control. A browser can use this media type to do things like build an HTML form, validate user input, encode user input into an acceptable media type, and make an HTTP request using the appropriate HTTP method. Let's say you want to change your API to support PATCH and you prefer that applications start using it over PUT. If you are using an HMM Level 2 media type, you can change the method in the representation, and any application that uses a hypermedia browser that knows how to construct a PATCH request will start sending PATCH requests instead of PUT requests without any developer intervention. Without an HMM Level 2 media type, you're stuck with those PUTs until you can get all the applications that use your API to update their code. Examples: HTML, HAL-Forms, Siren, JSON Hyper-Schema, Uber, Mason, Collection+JSON.
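As a sketch of why that works: a Level 2-aware client takes the target URI, HTTP method, and request media type from the form control in the representation, so swapping PUT for PATCH on the server requires no client change. The form shape below is loosely modeled on the earlier /books example, not on HAL-Forms or Siren specifically.

// Assumed, ad-hoc shape of a form-like control as the server might advertise it.
interface HypermediaForm {
  href: string;
  method: string;     // e.g. "PUT" today, "PATCH" tomorrow; the client never hard-codes it
  accept: string[];   // request media types the server will accept
}

// Submit user input using whatever target, method, and encoding the server advertised.
async function submit(form: HypermediaForm, payload: Record<string, string>): Promise<Response> {
  const contentType = form.accept[0] ?? 'application/json';
  const body = contentType === 'application/x-www-form-urlencoded'
    ? new URLSearchParams(payload).toString()
    : JSON.stringify(payload);
  return fetch(form.href, { method: form.method, headers: { 'Content-Type': contentType }, body });
}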
Level 3: At this level, in addition to hypermedia controls, data are also self-describing. Remember those three main components I mentioned? The last one, "driver", is the major difference between using hypermedia on the web and using hypermedia in an API. On the web, the driver is a human (excluding crawlers for simplicity), but with an API, the driver is an application. Humans can interpret the meaning of what they are presented with and deal with changes. Applications may act on heuristics or even AI, but usually they are following a fixed routine. If something changes about the data that the application didn't expect, the application breaks. At this level, we apply semantics to the data using something like JSON-LD. This allows us to construct drivers that are better at dealing with change and can even make decisions without human intervention. Examples: Hydra.
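As a rough illustration of Level 3 data (not Hydra specifically), a JSON-LD @context can map plain property names onto shared vocabulary terms such as schema.org, so a driver can recognise what a field means even if the API renames or rearranges it:

// Illustrative only: the earlier book resource annotated with JSON-LD using schema.org terms.
const book = {
  '@context': 'https://schema.org',
  '@type': 'Book',
  name: 'Building a RESTful API - The unspoken truth',
  author: 'Elad Chen',
  datePublished: '2020-12-10',
};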
I think the only downside to choosing hypermedia for your API is that there aren't production-ready HMM Level 2 hypermedia browsers available in most languages. But the good news is that one implementation will cover any API that uses a media type it supports. Ketting will work for any API, for any application written in JavaScript. It would only take a few more similar implementations to cover all the major languages, and then using hypermedia would be an easy choice.
The other reason to choose hypermedia is that it helps with the API design process. I personally use JSON Hyper-Schema to rapid-prototype APIs and use a generic web client to click through links and forms to get a feel for API workflows. Even if no one else uses the schemas, for me, it's worth having even if just for the design phase.
Implementing a HATEOAS API needs to be done both on the server and on the clients, so this point you make in the comments is very valid indeed:
changing a resource URI is risky given I don't believe clients actually "navigate" the API
Besides the World Wide Web, which is the best implementation of HATEOAS, I only know of the SunCloud API on the now-decommissioned Project Kenai. Most APIs out there don't really make use of HATEOAS; they are just a bunch of documented URLs where you can get or submit specific resources (basically, instead of being "hypermedia driven", they are "documentation driven"). With this type of API, clients don't actually navigate the API, they concatenate strings to reach specific endpoints where they know they can find specific resources.
If you expose a HATEOAS API, developers of client applications can still look at the links you return and may decide to build them on their own because they figure out what the API is doing. They then think they can bypass any navigation that might be needed and go straight for the URL, because it is always /products/categories/123, until, of course, it isn't anymore.
A HATEOAS API is more difficult to build and adds complexity to both the server and the clients, so when deciding to build one, the questions are:
do you need the flexibility HATEOAS is giving you to justify the extra complexity of the implementation?
do you want to make it easier or harder for clients to consume your API?
does the knowledge and discipline to make it all work exist on both sides (server developers and clients developers)?
Most of the time, the answer is no. In addition, many times the questions don't even get asked; instead, people end up with the approach that is more familiar, because they've seen the sort of APIs people build, have used some, or have built some before. Also, many times a REST API is just stuck in front of a database and doesn't do much but expose data from the database as JSON or XML, so there isn't much need for navigation there.
There is no one forcing you to implement HATEOAS in your API, and no one preventing you from doing so either. At the end of the day, it's a matter of deciding whether you want to expose your API this way or not (another example would be: do you expose your resources as JSON, as XML, or let the client choose the content type?).
In the end, there is always the chance of breaking your clients when you make changes to your API (HATEOAS or no HATEOAS), because you don't control your clients and you can't control how knowledgeable the client developers are, how disciplined they are, or how good a job they do of implementing someone else's API.

How to get sites identical in content but different in language and TLD indexed by major search engines?

Is it possible to get two "editions" of a website both indexed by the major search engines (Google/Yahoo/Bing/Teoma) which differ in content language only and are hosted under different TLDs?
Say English content is available at "http://domain.com/" and German content at "http://domain.de/". Now, if e.g. Google.com is used, I want it to list the "domain.com" entry, and vice versa. Is "Duplicate Content" an issue here?
Depending on the website software you use (WordPress, Joomla, custom, etc.), there might be a plugin or add-on that supports multiple domains and search-engine pinging/SEO. If that's the case, it should be possible.
I'm assuming your website layout is the same, but you have a ".com" and a ".de" TLD pointing to the same directory/software installation and an (auto?) language selector to choose between English and German.
Edit: (for quick readers)
You shouldn't need separate webspace for each site. What I do for my sites to get them submitted is use sitemaps. I've never generated one by hand, so I can't help in that aspect. However, you could generate a sitemap for each language (e.g. sitemap.en.xml.gz | sitemap.de.xml.gz) and have your application ping the search engines with these sitemaps. Essentially, you'll have the same content but in different languages, and it'll be in a sitemap which can be submitted to Google/Bing/Yahoo/etc.
I used this method on a wordpress blog I had and every time I submitted/changed content, it would re-generate sitemaps (updating links/etc) and ping the search engines again.
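For what it's worth, here is a rough TypeScript sketch (not tied to any CMS or plugin) of generating one sitemap per language edition; the domains match the question's example and the page paths are made up:

import { writeFileSync } from 'node:fs';

// One sitemap per language/TLD, e.g. sitemap.en.xml for domain.com and sitemap.de.xml for domain.de.
const editions = [
  { domain: 'http://domain.com', file: 'sitemap.en.xml' },
  { domain: 'http://domain.de', file: 'sitemap.de.xml' },
];

const paths = ['/', '/products', '/contact'];  // hypothetical pages, same structure in both languages

for (const { domain, file } of editions) {
  const urls = paths.map(p => `  <url><loc>${domain}${p}</loc></url>`).join('\n');
  const xml = '<?xml version="1.0" encoding="UTF-8"?>\n'
    + '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    + urls + '\n</urlset>\n';
  writeFileSync(file, xml);
}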

What's the difference between HTML and WML/WAP?

I checked the source of a few WAP sites,
but didn't find anything different from a normal HTML page.
Can you name a few specific differences?
Well, WAP/WML is very strict when it comes to markup because the page needs to be compiled before delivery to the client device.
As for specifics,
WAP "pages" can have more than one "card". (Confusing? I know...)
Although not markup related, accepted image formats are more limited
Do not forget a DOCTYPE!
Content must be served with the text/vnd.wap.wml MIME type
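A minimal sketch pulling several of those points together, using Node's built-in http module; the WML markup is written from memory of WML 1.1 and is meant as an illustration, not a reference:

import { createServer } from 'node:http';

// A single WML "deck" containing two cards, with the required DOCTYPE.
const deck = `<?xml version="1.0"?>
<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN" "http://www.wapforum.org/DTD/wml_1.1.xml">
<wml>
  <card id="home" title="Home">
    <p>Hello WAP. <a href="#more">More</a></p>
  </card>
  <card id="more" title="More">
    <p>A second card in the same deck.</p>
  </card>
</wml>`;

createServer((_req, res) => {
  // Served with the WML MIME type rather than text/html.
  res.writeHead(200, { 'Content-Type': 'text/vnd.wap.wml' });
  res.end(deck);
}).listen(8080);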
WAP 1 has almost nothing in common with the HTML/CSS/JS/server-side-scripting stack. The only connection it has with the larger web is that telco gateways use HTTP to request WML content from a normal web server. WML is an old-fashioned and ugly ‘card’-based hypertext system which everyone hated, largely failed in the market and is long gone (thank goodness).
The misleadingly-named “WAP 2”, on the other hand, is just XHTML Mobile Profile (a somewhat limited subset of HTML); everything else about it is the same as the normal web stack. This makes it much easier to work with: it's possible to generate content for desktop and phones from the same templates. You may also see ‘i-XHTML’, which is a similar HTML subset used by Docomo phones.
Either way, modern smartphones are happy rendering normal desktop-style [X]HTML, so you're not going to have to worry about any of this in the future. (Sure, there are compatibility issues, but that's nothing new, right?)
Here is a table with some information about the differences: http://csc.colstate.edu/summers/Research/Wireless/WAPvsWeb.html
Another difference is that WAP is almost, if not totally, dead, and HTML is kicking ##S :-)

Will rel=canonical break site: queries?

Our company publishes our software product's documentation through a custom-built content management system, using a dynamic URL namespace like this:
http://ourproduct.com/documentation/version/pageid
Where "version" is the version number to which the documentation applies, and "pageid" is a unique string which identifies that page in our back-end content management system. For example, if content (e.g. a page about configuration best practices) is unchanged from version 3.0 and 4.0 of our product, it'd be reachable by two different URLs:
http://ourproduct.com/documentation/3.0/configuration-best-practices
http://ourproduct.com/documentation/4.0/configuration-best-practices
This URL scheme allows us to scope Google search results to see only documentation for a particular product version, like this:
configuration site:ourproduct.com/documentation/4.0
But when the user is searching across all versions, we don't want Google to arbitrarily choose one of the URLs to show in results. Instead, we always want the latest version to show up. Hence our planned use of rel=canonical, so we can prescriptively tell Google which URL we want to show up if multiple versions are being searched. (Users who do oddball things like searching 2 versions but not all of them are a corner case, so we don't care which version(s) show up in that case; the primary use-cases we care about are searching one version or searching all versions.)
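Concretely, the idea is that every versioned copy of a page carries a canonical link pointing at the copy under the latest version, something like this (a sketch using the example URLs above; the helper function is hypothetical, not part of our actual CMS):

// Hypothetical CMS helper: emit the canonical link tag for any versioned page,
// always pointing at the latest version of that page.
const LATEST_VERSION = '4.0';

function canonicalLinkTag(pageId: string): string {
  return `<link rel="canonical" href="http://ourproduct.com/documentation/${LATEST_VERSION}/${pageId}">`;
}

// e.g. on /documentation/3.0/configuration-best-practices this renders:
// <link rel="canonical" href="http://ourproduct.com/documentation/4.0/configuration-best-practices">
console.log(canonicalLinkTag('configuration-best-practices'));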
But what will happen to scoped searches if we do this? If my rel=canonical URL points to version 4.0, but my search is scoped to 3.0, will Google return a result?
Even if you don't know the answer offhand, do you know a site that uses rel=canonical to point across folders in a URL namespace? If so, I could run a few Google searches and figure out the answer.
The rel=canonical link element helps search engines to determine the URL that they should index, so ultimately, by specifying it for a URL, you're telling them to drop the old version and only to index the new version. In practice, it might be that both versions are indexed for a while (depending on how they're discovered and crawled), but in the long run only the canonical will generally remain indexed. In other words, if you do this for your site, over time the site:-query results for the old versions will drop (which probably makes sense).
If you need to have both versions indexed, then I wouldn't use the rel=canonical link element; I'd just link from the old versions to the new versions (e.g. "The current version of this document can be found at X").
Wikia uses rel=canonical link elements fairly extensively, though I don't think they use them across folders, but you can still see the results for individual URLs.

Adding a photo collection using the Flickr API

Is it possible to create a photo collection using the Flickr API?
I haven't found any example code to achieve this; however, you CAN do it on the Flickr website, and I suppose Flickr uses the API internally for its site?
Dennis
This may be too late to help, but I thought I'd post the solution here for posterity (as this post came up while I was googling for the answer before I discovered it).
The method is undocumented in the Flickr API, but it does seem to work (tested via the REST interface); a rough call sketch follows the parameter list below.
The method is: flickr.collections.create
Required parameters:
auth_token
api_sig
title (Example value: "My Awesome Title Here.")
Optional parameters:
api_key
auth_hash
cb
description (Example value: "My Awesome Description Here")
parent_id (Example Value: 0)
src (Example: "js")
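Here's a rough TypeScript sketch of what a call to that undocumented method might look like over the REST endpoint. The parameter values are placeholders taken from the list above, and the api_sig signing step (the old Flickr auth scheme) is not shown, so treat this as an outline only:

// Placeholder credentials and values; the method itself is undocumented, so it
// may change or break without notice.
const params = new URLSearchParams({
  method: 'flickr.collections.create',
  api_key: 'YOUR_API_KEY',
  auth_token: 'YOUR_AUTH_TOKEN',
  api_sig: 'COMPUTED_SIGNATURE',   // signature over the sorted parameters plus your secret
  title: 'My Awesome Title Here.',
  description: 'My Awesome Description Here',
  parent_id: '0',
});

const res = await fetch(`https://api.flickr.com/services/rest/?${params.toString()}`, { method: 'POST' });
console.log(await res.text());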
I found the method by enabling Firebug's console while creating a collection in the Flickr web interface and examining the POST. I have no idea what auth_hash or cb refer to, but I would assume they are required when using the JavaScript interface as opposed to REST.
Anything that uses the JavaScript interface on Flickr can be reverse engineered by examining the POST that occurs immediately after you take the action in the web interface.
Official support for the "flickr.collections.*" portion of the Flickr API has indeed been delayed for some reason (since at least 4/2007). There is a discussion thread over on Flickr, with a bit more information (reverse engineering) on the undocumented API.
Yes, you can. But first you need to specify a so-called 'primary photo'.
This means that Flickr doesn't allow you to create empty albums (collections). I don't know why they decided to restrict the creation of empty collections, but it's a fact.