Is it possible to be notified when W3C specs get updated?

WHATWG recently announced that it's now possible to get notified about changes to some sections of its specifications (more on the way). Is there a similar mechanism for W3C specs? Is it possible to get notified of updates by email or through a feed?

If the specs are posted online (and I have to assume that W3C specs are), then why not use a service like http://www.changedetection.com/ ?
There are several similar services which will notify you whenever a web page changes, and you can even limit the checking to certain parts of the page (to avoid rotating banners triggering it, for instance).
Hope this helps.

Many of the W3C specs are managed in version control systems like Git or Mercurial. Apart from actually cloning a repository and regularly pulling in the changes, one can simply subscribe to the updates in RSS or Atom format. Every spec has its own feed, and here's the page that seemingly lists them all, along with the corresponding links - to repositories, zip/tarballs, and RSS/Atom feeds.
https://dvcs.w3.org/hg/
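If you'd rather script it than use a feed reader, here is a minimal sketch using the rss-parser npm package; the atom-log path is the changelog feed Mercurial's hgweb exposes, but the repository name below is a placeholder, so take the exact feed link from the listing page:

const Parser = require('rss-parser');
const parser = new Parser();

async function checkSpecFeed() {
  // "some-spec" is a placeholder; pick the real repo from the listing page.
  const feed = await parser.parseURL('https://dvcs.w3.org/hg/some-spec/atom-log');
  for (const item of feed.items) {
    console.log(item.pubDate, item.title, item.link);
  }
}

checkSpecFeed().catch(console.error);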
While this is definitely great, some specifications, such as the File API, are still missing from the list.

Is there an API for purging a project in OpenStack?

I need to purge a project's resources in OpenStack easily, through an API call.
Just like this CLI command:
neutron purge PROJECT_ID
which is available in the Neutron project docs, but as an API call.
I couldn't find such an API, so my questions are:
1. Isn't there such an API?
2. If there is not, why? Is there a specific reason for that?
I checked the source code of the clients and of neutron-server, but unfortunately there is no dedicated endpoint in the REST API for this functionality.
This feature is only supported by the neutron client, not by the openstack client. When you run neutron purge PROJECT_ID, all the neutron client does inside its Python code is list every resource related to the given project, iterate over that list, and send a DELETE to the neutron REST API for each single resource. So it is only a simple automatism in the Python code of the client, not a dedicated endpoint on the server side.
See the specific function inside the code here: https://github.com/openstack/python-neutronclient/blob/master/neutronclient/neutron/v2_0/purge.py#L63
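For illustration, here is a rough sketch of that same list-and-delete pattern driven against the Neutron REST API directly (Node 18+ global fetch; the endpoint, token, and project ID are placeholders you'd obtain from Keystone, and only ports are handled here):

const NEUTRON = 'https://neutron.example.com:9696/v2.0'; // placeholder endpoint
const HEADERS = { 'X-Auth-Token': 'gAAAA...' };           // placeholder Keystone token

async function purgePorts(projectId) {
  const res = await fetch(`${NEUTRON}/ports?project_id=${projectId}`, { headers: HEADERS });
  const { ports } = await res.json();
  for (const port of ports) {
    // One DELETE per resource -- exactly what `neutron purge` automates.
    await fetch(`${NEUTRON}/ports/${port.id}`, { method: 'DELETE', headers: HEADERS });
  }
}

purgePorts('PROJECT_ID').catch(console.error);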
Based on my experience with OpenStack and its community, I think it was done like this because it was easier to add the new code only to the neutron client. Had this become a new endpoint, the feature would have had to be implemented in neutron, the openstack client, and the openstacksdk as well, and each of those repositories has its own team. This purge feature is so small that it was not worth persuading all four teams. The more components you have to touch for one simple feature, the harder it gets: whoever wants to bring the feature upstream is responsible for bringing the teams of all the required components together, and if even one member of the core teams has a problem with your implementation, you have to start nearly from the beginning. So it can easily take a year or two to land a cross-component feature like a new endpoint upstream when you are not part of a core team yourself. Bringing the feature only into the neutron client is quite easy compared to a cross-project contribution.
This is, at least, why I too would implement this feature only in the neutron client (or only in the openstack client, if possible) instead of adding a new endpoint, if I were bringing this feature upstream.

Google Home reading from a website

I'm currently working on a project where my main focus is to create an Action for Google Home which can be invoked and asked to read out some articles (chosen beforehand from a list, also by voice) from a particular website.
I was wondering if this is possible, or if there are already similar projects.
What I'd like to do is something like the feature in Pocket or Instapaper, where you can make the device read an article for you.
I also thought of building something like a database of all the articles I'm interested in, which auto-updates itself whenever a new article is posted, but my main concern for now is being able to separate the articles into various lists, parse each article, and in the end implement text-to-speech in the Action.
Pointers to implementations with third-party services and apps would also be useful.
Please ask me if anything isn't clear; English is not my first language.
Yes, this is possible. Not necessarily easy, but possible.
First - there is nothing in the Actions on Google library or in Google Home that will automatically scrape a website. That will be up to you.
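For example, a minimal scraping sketch with the cheerio npm package (Node 18+ fetch; the CSS selector is site-specific and therefore an assumption):

const cheerio = require('cheerio');

async function getArticleText(url) {
  const html = await (await fetch(url)).text();
  const $ = cheerio.load(html);
  // 'article p' is a guess -- adjust to the target site's markup.
  return $('article p').map((i, el) => $(el).text()).get().join(' ');
}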
Second - Responses from your Action are limited in how much they can send at a time.
If you're having it do text-to-speech, you're limited to two "text bubbles" of 640 characters each before the user has to reply. You should keep well below that and should probably stick to just one "text bubble".
If you're playing an audio cut, then you're limited to two minutes.
You can work around both of these limitations by using the Media Response. With TTS, you would play a portion of the text followed by a brief Media response; when it finishes, your server is triggered to send the next chunk of text. If the material is all recorded, you can just send the longer audio as the Media.
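As a rough sketch with the actions-on-google Node library (the intent name, silence clip URL, and chunk storage are all assumptions; you'd also register an intent for the actions_intent_MEDIA_STATUS event to catch the media finishing and serve the next chunk):

const { dialogflow, MediaObject, Suggestions } = require('actions-on-google');

const app = dialogflow();
const chunks = ['First part of the article...', 'Second part...']; // hypothetical

app.intent('read.article', (conv) => {
  const i = conv.data.chunk || 0;
  conv.ask(chunks[i]);                  // one "text bubble" of TTS
  conv.data.chunk = i + 1;
  conv.ask(new MediaObject({
    name: 'continue reading',
    url: 'https://example.com/short-silence.mp3', // placeholder clip
  }));
  conv.ask(new Suggestions('Stop'));    // media responses require suggestions
});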
Be advised, however, that if you're using the inline editor or Firebase Cloud Functions (which the inline editor uses), then by default you're not able to access most sites outside Google's network. You need to upgrade to a paid plan to do so. I suggest the Blaze plan, which is pay-as-you-go but includes a free tier that is typically good enough for development work and light production usage.

How to get third-party API up-to-date?

So, I ran into this problem once. I had delivered a website that used the SoundCloud API. Everything worked properly: content was extracted from the JSON and placed into the layout of the website. However, one day I received an email from the owner of the website indicating that it no longer worked properly. I investigated and concluded that the "problem" was not on my side, but on SoundCloud's. I studied the SoundCloud API page and found that the API had received a major update, which broke the link between SoundCloud and the site.
Lately I've been trying many new APIs too, including those from Instagram and Dribbble. I was therefore wondering whether it is at all possible to reduce such problems in the future, or whether it would be appropriate to monitor the API pages of these third-party services.
There's no "right" answer. After many years of using and maintaining many APIs, here are some of the conclusions I've come to:
The best providers let you work with a specific version of their API whose interface and expected behavior never changes. They might release bug fixes and new endpoints, but you can be confident that as long as the API is supported it will not break your system.
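In practice that means pinning the version explicitly rather than relying on a "latest" default. A generic sketch, since path and header conventions vary by provider (nothing below is any specific vendor's API):

async function listTracks() {
  const res = await fetch('https://api.example.com/v2/tracks', { // version in the path
    headers: { Accept: 'application/json', 'Api-Version': '2' }, // or in a header
  });
  if (!res.ok) throw new Error(`API changed or went away: ${res.status}`);
  return res.json();
}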
A good provider will provide an end-of-life date for each version of their API. It's up to you to keep track of when you need to update.
Paid services will often be supported longer than free services. Plus the contract / SLA will guarantee it remains available for a specific amount of time.
The most popular APIs often have mailing lists and/or blogs. For those that offer them, sign up to be notified of updates. For those that don't, you'll have to monitor their blogs or news posts. And I suggest not using any service that would drop support for an API version without warning.

How to store postman collections in source control

I am using Postman collections to test my API before opening it up. I work with a team of developers and we would like to share/add/edit our collections amongst each other.
Doing this in source control is proving slightly tricky, as can be seen in this comment on the GitHub page:
This issue still persists in Version 2.1.1 (packaged)
The order of requests might be deterministic now, but the diff of an exported collection from two different machines and users includes data that are not related to the collections exported. The diff is full of owner and other id conflicts if there are several people working on the tests at the same time.
What is the best way we have of putting this data into some sort of version control system? Any other suggestions?
Putting it in a VCS undoubtedly will give you some headaches, as you mentioned. Your best bet is to use Postman's functionality to share collections. Here is what the documentation at https://www.getpostman.com/docs/sharing says:
Starting with Postman v0.9.3 you have the ability to share and manage your collections more effectively. The first thing you will have to do is create a Postman account. You can create one using your email ID or a Google account. Once you are signed in after creating an account, the collections you upload on Postman are linked to your account. You can delete them later through the "Shared collections" item in the navigation bar dropdown.
Collection v2 format removes most, if not all, problems with portability.
http://blog.getpostman.com/2015/06/05/travelogue-of-postman-collection-format-v2/
The format must be highly portable so that it can be easily transported between various systems without losing functionality.
Source Control in Postman
The question about sharing collections so that you can collaborate with your teammates has been answered a few different ways, as described in other answers to this question, such as by sharing the collection or by syncing to a team account.
Version Control in Postman
The other part of the question was about putting the Postman data into a version control system. Postman introduced some version control features for the paid team accounts, like being able to restore collections to a certain point in the activity feed.
The paid team accounts also get integrations to sync their collections to their own version control systems like GitHub for example. If you're on a free account, you can use the Postman API to build your own similar integration to update the collections.
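For instance, a minimal DIY export via the Postman API might look like this (the collection UID and API key are placeholders); committing the pretty-printed JSON keeps the git diffs readable:

const fs = require('fs');

async function exportCollection() {
  const res = await fetch('https://api.getpostman.com/collections/1234-abcd', { // UID placeholder
    headers: { 'X-Api-Key': 'PMAK-...' }, // your Postman API key (placeholder)
  });
  const { collection } = await res.json();
  fs.writeFileSync('collection.json', JSON.stringify(collection, null, 2));
}

exportCollection().catch(console.error);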
This blog post talks about some of the version control features in Postman.
UPDATE: Postman released forking and merging in Postman app v6.7.1 so you can manage version control in the app.
To automatically share your existing Postman collections, you can use Postman Pro.
It is a paid service; a team lead can purchase the Pro plan for the whole team and act as its admin.
Postman Pro enables the following, among many other things:
Any changes in the API are automatically reflected in Postman for all members.
Members subscribe to the collections from the Team Library and get notifications of any changes.
For more information you can refer to:
https://app.getpostman.com/dashboard/team-upgrades
This is what I use with my team of automation testers.

Reliably detecting PhantomJS-based spam bots

Is there any way to consistently detect PhantomJS/CasperJS? I've been dealing with a spate of malicious spam bots built with it and have been able to mostly block them based on certain behaviours, but I'm curious if there's a rock-solid way to know if CasperJS is in use, as dealing with constant adaptations gets slightly annoying.
I don't believe in using Captchas. They are a negative user experience and ReCaptcha has never worked to block spam on my MediaWiki installations. As our site has no user registrations (anonymous discussion board), we'd need to have a Captcha entry for every post. We get several thousand legitimate posts a day and a Captcha would see that number divebomb.
I very much share your take on CAPTCHAs. I'll list what I have been able to detect so far for my own detection script, which has similar goals. It's only a partial list, as there are many more headless browsers.
It is fairly safe to use exposed window properties to detect (or at least assume) these particular headless browsers:
window._phantom (or window.callPhantom) //phantomjs
window.__phantomas //PhantomJS-based web perf metrics + monitoring tool
window.Buffer //nodejs
window.emit //couchjs
window.spawn //rhino
The above was gathered from the JSLint docs and from testing with PhantomJS.
Browser automation drivers (used by BrowserStack and other web capture services for snapshots):
window.webdriver //selenium
window.domAutomation (or window.domAutomationController) //chromium based automation driver
The properties are not always exposed and I am looking into other more robust ways to detect such bots, which I'll probably release as full blown script when done. But that mainly answers your question.
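Put together, a sketch of the property checks listed above might look like this (nothing here beyond those properties; what you do on a hit is up to you):

function looksHeadless(w) {
  return !!(w._phantom || w.callPhantom ||  // phantomjs
            w.__phantomas ||                // phantomas
            w.Buffer ||                     // nodejs
            w.emit ||                       // couchjs
            w.spawn ||                      // rhino
            w.webdriver ||                  // selenium
            w.domAutomation || w.domAutomationController); // chromium automation
}

if (looksHeadless(window)) {
  // e.g. flag the session for server-side handling
}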
Here is another fairly sound method to detect JS-capable headless browsers more broadly:
if (window.outerWidth === 0 && window.outerHeight === 0) { /* headless browser */ }
This should work well because these properties are 0 by default even if a virtual viewport size is set by headless browsers: by default they can't report the size of a browser window that doesn't exist. In particular, PhantomJS doesn't support outerWidth or outerHeight.
ADDENDUM: There is, however, a Chrome/Blink bug with the outer/inner dimensions: Chromium does not report them when a page loads in a hidden tab, such as when restored from a previous session. Safari doesn't seem to have that issue.
Update: It turns out iOS Safari 8+ has a bug that leaves outerWidth & outerHeight at 0, and a Sailfish webview can too. So while it's a signal, it can't be used alone without being mindful of these bugs. Hence, a warning: please don't use this raw snippet unless you really know what you are doing.
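If you do use it, a guarded sketch reflecting those caveats might look like this (the user-agent patterns are rough assumptions, not a complete exclusion list):

function zeroOuterDimensions() {
  // Exclude the known false positives mentioned above (iOS Safari 8+,
  // Sailfish webviews).
  var buggyUA = /iP(hone|ad|od)|Sailfish/.test(navigator.userAgent);
  return !buggyUA && window.outerWidth === 0 && window.outerHeight === 0;
}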
PS: If you know of other headless browser properties not listed here, please share in comments.
There is no rock-solid way: PhantomJS and Selenium are just software used to control browser software, instead of a user controlling it.
With PhantomJS 1.x in particular, I believe there is some JavaScript you can use to crash the browser by exploiting a bug in the version of WebKit being used (it is equivalent to Chrome 13, so very few genuine users should be affected). (I remember this being mentioned on the PhantomJS mailing list a few months back, but I don't know if the exact JS to use was described.) More generally, you could combine user-agent matching with feature detection. E.g. if a browser claims to be "Chrome 23" but does not have a feature that Chrome 23 has (and that Chrome 13 did not have), then get suspicious.
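As a concrete sketch of that idea: PhantomJS 1.x's WebKit notoriously lacks Function.prototype.bind, which every remotely recent Chrome ships, so a claimed-Chrome visitor without it is suspect (bind stands in here for the unnamed feature in the paragraph above):

var claimsChrome = /Chrome\/\d+/.test(navigator.userAgent);
var hasBind = typeof Function.prototype.bind === 'function';
if (claimsChrome && !hasBind) {
  // almost certainly not the browser it claims to be
}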
As a user, I hate CAPTCHAs too. But they are quite effective in that they increase the cost for the spammer: he has to write more software or hire humans to read them. (That is why I think easy CAPTCHAs are good enough: the ones that annoy users are those where you have no idea what it says and have to keep pressing reload to get something you recognize.)
One approach (which I believe Google uses) is to show the CAPTCHA conditionally. E.g. users who are logged-in never get shown it. Users who have already done one post this session are not shown it again. Users from IP addresses in a whitelist (which could be built from previous legitimate posts) are not shown them. Or conversely just show them to users from a blacklist of IP ranges.
I know none of those approaches are perfect, sorry.
You could detect Phantom on the client side by checking the window.callPhantom property. The minimal client-side script is:
var isPhantom = !!window.callPhantom;
Here is a gist with proof of concept that this works.
A spammer could try to delete this property with page.evaluate, and then it depends on who is faster. After you have tried the detection, you reload the page with the post form and a CAPTCHA (or not), depending on your detection result.
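A sketch of that detect-then-reload flow (the cookie name and overall flow are assumptions):

var isPhantom = !!window.callPhantom;
// Hand the verdict to the server, then reload so the server can decide
// whether to render a CAPTCHA with the post form.
document.cookie = 'suspect=' + (isPhantom ? '1' : '0') + '; path=/';
window.location.reload();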
The problem is that you incur a redirect that might annoy your users. This will be necessary with every client-side detection technique, and it can be subverted and changed with onResourceRequested.
Generally, I don't think this is possible, because you can only detect on the client and send the result to the server. Adding the CAPTCHA combined with the detection step in only one page load does not really add anything, as it could be removed just as easily with PhantomJS/CasperJS. Defense based on the user agent also doesn't make sense, since it can easily be changed in PhantomJS/CasperJS.