Alfresco REST API to get the current node version

I'm making a call to GET service/api/version?nodeRef={nodeRef}, but I want to limit the query to just the most recent version. I found this StackOverflow post which suggests the use of &filter={filterQuery?} to filter results, but that looks like it only applies to the people query method. Is there something like this for the version method?

Before answering your specific query, a general point: Alfresco Community is open source, and almost all of Alfresco Enterprise is too, so for many questions like this your best bet is simply to go and check the source code yourself! The REST APIs live in projects/remote-api.
Looking in there, you can see that the versions webscript returns all versions for a given node. It's fairly fast to do that, so calling it and picking out the most recent entry isn't the end of the world.
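As a rough sketch of that client-side approach (assuming Python with the requests library, and that the webscript returns a JSON array of version objects; field names, ordering and the placeholder credentials should be checked against your own instance):

    import requests

    ALFRESCO = "http://localhost:8080/alfresco"
    node_ref = "workspace://SpacesStore/your-node-id"  # placeholder nodeRef

    resp = requests.get(
        ALFRESCO + "/service/api/version",
        params={"nodeRef": node_ref},
        auth=("admin", "admin"),  # replace with real credentials
    )
    versions = resp.json()
    latest = versions[0]  # assumed newest-first; sort on "createdDate" if not
    print(latest.get("label"))  # version label, e.g. "1.7"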
Otherwise, the slingshot node details API at http://localhost:8080/alfresco/service/slingshot/doclib2/node/{store_type}/{store_id}/{id} (add the parts of the nodeRef into the URL) will return lots of information on a node quickly, including the latest version. In the JSON returned by that, inside the item key is something like:
"version": "1.7",
which gives you the latest version of the node.
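A small sketch of that call (again assuming Python with requests; the credentials and node identifiers are placeholders):

    import requests

    ALFRESCO = "http://localhost:8080/alfresco"
    store_type, store_id, node_id = "workspace", "SpacesStore", "your-node-id"

    url = "{}/service/slingshot/doclib2/node/{}/{}/{}".format(
        ALFRESCO, store_type, store_id, node_id)
    data = requests.get(url, auth=("admin", "admin")).json()
    print(data["item"]["version"])  # e.g. "1.7"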
Some of the listing and search APIs will also include the version, but those are likely to be quite a bit more heavyweight than just getting the version history of a node.

Related

How to implement api versioning?

I have a web API application that will serve many clients on different release schedules, and now I need to implement versioning, because the API code will be updated constantly and API users will not be able to change their integrations instantly. This is the standard situation in which you need to introduce versioning. I'm looking for a way to organize it inside my API. It's clear that it should not be separate copies of the application on the server, conditionally called app_v1, app_v2, app_v2.1, etc., because that is duplication, redundancy and bad practice.
It looks like it will be one application, with the logic split at the code level in the controllers, like if (client_version == 1) do function1(), else if (client_version == 2) do function2(), etc. Git supports tags, which is something similar to versioning, but since all supported versions of the application need to be on the server at the same time, that's not what I'm after. How can I realize an architecture for this case?
There are many well-known ways to use API versioning to keep code working for older clients (backward compatibility). The general purpose of API versioning is to make sure that different clients can use different versions of an API at the same time. I've seen several ways to do API versioning, such as:
URL Path Versioning: In this method, the number of the version is part of the API endpoint's URL path. For instance:
https://api.example.com/v1/assets
https://api.example.com/v2/assets
URL Query String Parameter: In this method, the version number is added to the API endpoint's URL as a query string parameter. For instance:
https://api.example.com/assets?version=1
https://api.example.com/assets?version=2
HTTP Header: In this method, the version number is put in an HTTP header, like the Accept-Version header. For instance:
Accept-Version: 1
Accept-Version: 2
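To keep everything in one application rather than one copy per version, you register one handler per versioned route and share the unchanged logic. A minimal sketch of the URL path approach, assuming a Python/Flask service (the route names and payloads are made up for illustration):

    from flask import Flask, jsonify

    app = Flask(__name__)

    def list_assets():
        # shared logic used by every version
        return [{"id": 1, "name": "pump"}]

    @app.route("/v1/assets")
    def assets_v1():
        # v1 response kept exactly as older clients expect it
        return jsonify(list_assets())

    @app.route("/v2/assets")
    def assets_v2():
        # v2 changes only the response shape; the shared logic is not duplicated
        return jsonify({"items": list_assets(), "count": len(list_assets())})

    if __name__ == "__main__":
        app.run()

The query-string and header variants work the same way: one dispatch point reads the requested version and routes to the right handler, so only code that actually changed between versions is duplicated.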
If you are using .NET for your project, I would recommend the standard API versioning library for that; check this out. You can also find solid material on Web API versioning at the following link by Steve Smith.

Duplicate sites when using getItemSummaries

When using the getItemSummaries call (through both the SOAP and REST APIs), I recently started getting a duplicate entry for the exact same site (or institution). This is with real-world data (not DAG bank).
Of the duplicate entries, one version is more recent than the other (verified by looking inside itemData).
Is there any documentation as to how (and why) this can happen? According to the docs, getItemSummaries is meant to display the latest data for sites.
Could you please use getItemSummariesForSite (REST/SOAP) and check whether you are still getting duplicate accounts? Ideally, having duplicate sites (i.e. with the same credentials) should not be a use case.

Getting the most recent version of a file kept by archive.org

I have a set of harvested Atom feeds. Some of them are a few years old, and some of the posts link to images that are no longer there.
Is there any way to get the most recent version kept by the Wayback Machine?
I know I can do it manually, but I'd like to automate the process. archive.org provides a RESTful API, but as far as I could find out, it doesn't seem to provide the specific calls that I need. I suppose I could always fall back to web scraping, but I'd prefer a more elegant solution, if there is one.
Figured it out. To get the latest version of a file you just have to GET the URL (don't forget to check that the HTTP status code is 200):
http://web.archive.org/web/form-submit.jsp?type=replay&url=<file_url>
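A quick sketch of that approach, assuming Python with the requests library (the image URL is a placeholder): GET the replay URL, follow redirects, and keep the result only if the final status is 200.

    import requests

    def wayback_latest(file_url):
        # replay endpoint taken from the answer above
        replay = ("http://web.archive.org/web/form-submit.jsp"
                  "?type=replay&url=" + file_url)
        resp = requests.get(replay, allow_redirects=True)
        if resp.status_code == 200:
            return resp.url  # final URL of the most recent archived copy
        return None

    print(wayback_latest("http://example.com/images/header.png"))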

Query Wikipedia Software Infobox to Retrieve Current Version?

I would like to create a script to give me the current version of each piece of software in a list. Wikipedia seems to do a good job of maintaining this information, so I am trying to figure out how to query its API. For instance, it seems I can get the current info with a call to the API like this:
http://en.wikipedia.org/w/api.php?action=query&titles=QuickTime&prop=revisions&rvprop=content
This gives me the info for the page on QuickTime, but in the Infobox section, I am unable to find the "Stable Release" field.
Am I missing something? Is there a better way to pull this off?
If you look at the Wikipedia article you're referring to, you'll notice that to change the version, you should click the small “Edit” link near the version number.
That will take you to the page Template:Latest stable software release/QuickTime, which actually contains information about the current version.
This is also indicated in the infobox as | frequently_updated = Yes.
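As a sketch of how a script could use that (assuming Python with requests; the "latest release version" field name in the template wikitext is an assumption, so check the actual template page if it differs):

    import re
    import requests

    API = "https://en.wikipedia.org/w/api.php"

    def latest_stable_release(article):
        # Fetch the wikitext of the per-product release template
        params = {
            "action": "query",
            "titles": "Template:Latest stable software release/" + article,
            "prop": "revisions",
            "rvprop": "content",
            "rvslots": "main",
            "format": "json",
        }
        data = requests.get(API, params=params).json()
        page = next(iter(data["query"]["pages"].values()))
        wikitext = page["revisions"][0]["slots"]["main"]["*"]
        # Field name below is an assumption about the template's wikitext
        match = re.search(r"latest[_ ]release[_ ]version\s*=\s*([^\n|]+)",
                          wikitext, re.I)
        return match.group(1).strip() if match else None

    print(latest_stable_release("QuickTime"))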

Will rel=canonical break site: queries?

Our company publishes our software product's documentation with a custom-built content management system, using a dynamic URL namespace like this:
http://ourproduct.com/documentation/version/pageid
Where "version" is the version number to which the documentation applies, and "pageid" is a unique string which identifies that page in our back-end content management system. For example, if content (e.g. a page about configuration best practices) is unchanged from version 3.0 and 4.0 of our product, it'd be reachable by two different URLs:
http://ourproduct.com/documentation/3.0/configuration-best-practices
http://ourproduct.com/documentation/4.0/configuration-best-practices
This URL scheme allows us to scope Google search results to see only documentation for a particular product version, like this:
configuration site:ourproduct.com/documentation/4.0
But when the user is searching across all versions, we don't want Google to arbitrarily choose one of the URLs to show in results. Instead, we always want the latest version to show up. Hence our planned use of rel=canonical, so we can prescriptively tell Google which URL we want to show up if multiple versions are being searched. (Users who do oddball things like searching two versions but not all of them are a corner case, so we don't care which version(s) show up in that case; the primary use cases we care about are searching one version or searching all versions.)
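For concreteness, the element we'd put in the head of each older page would look something like this (URL taken from the example above):

    <link rel="canonical"
          href="http://ourproduct.com/documentation/4.0/configuration-best-practices" />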
But what will happen to scoped searches if we do this? If my rel=canonical URL points to version 4.0, but my search is scoped to 3.0, will Google return a result?
Even if you don't know the answer offhand, do you know of a site which uses rel=canonical to redirect across folders in a URL namespace? If so, I could run a few Google searches and figure out the answer.
The rel=canonical link element helps search engines to determine the URL that they should index, so ultimately, by specifying it for a URL, you're telling them to drop the old version and only to index the new version. In practice, it might be that both versions are indexed for a while (depending on how they're discovered and crawled), but in the long run only the canonical will generally remain indexed. In other words, if you do this for your site, over time the site:-query results for the old versions will drop (which probably makes sense).
If you need to have both versions indexed, then I wouldn't use the rel=canonical link element; I'd just link from the old versions to the new versions (e.g. "The current version of this document can be found at X").
Wikia uses rel=canonical link elements fairly extensively, though I don't think they use it in folders, but you can still see the results for individual URLs.