Getting a table from a Wikipedia article via its API

I need to use the table from this wiki page https://en.wikipedia.org/wiki/List_of_most_visited_museums to build a database in Python (though that part is irrelevant at the moment). I have to use the API (I can't scrape) to access it. I'm working from the API documentation at https://www.mediawiki.org/wiki/API:Parsing_wikitext#Example_1:_Parse_content_of_a_page. Example #2 on that page is exactly what I want to do, but it returns an error, and even running the original code unchanged in my notebook returns an error. Can anyone tell me how to alter that code so it runs, or point me to another way to do the same thing? Thanks.

This fetches the content you're after: https://en.wikipedia.org/w/api.php?action=parse&page=List%20of%20most%20visited%20museums&prop=wikitext&section=2&format=json
If the example isn't working in your notebook, the problem probably lies with the rest of your code and not with the API call.
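For reference, here is the same call as a minimal Python sketch, assuming the requests library; the page title and section number match the URL above.

    import requests

    # Fetch the wikitext of section 2 of the article via the parse API.
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "parse",
            "page": "List of most visited museums",
            "prop": "wikitext",
            "section": 2,
            "format": "json",
        },
    )
    resp.raise_for_status()
    wikitext = resp.json()["parse"]["wikitext"]["*"]
    print(wikitext[:500])  # first part of the raw table markup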

Related

Is it possible to use the Canonical Landscape API to get script output?

The documentation I can find for the Canonical Landscape API lets you do lots of things with scripts, but I can't find anything suggesting that you can get output. However, if you use the Canonical web interface, script output is available, so it's presumably exposed somehow...?
I just hit this issue as well, and since this question is currently the first hit on Google, I wanted to share the answer. If you run ExecuteScript on a Landscape client and get back an ID of 123 (and assuming the job has already finished), use that ID to call the GetActivities API with an input argument of "query" set to "parent-id:123". If there is a result, you will find the script output you are looking for in the result_text field of the response. Good luck! It worked well over here.
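For anyone automating this, here is a rough Python sketch assuming the landscape-api client package; the method names mirror the API actions (ExecuteScript, GetActivities), but the endpoint, credentials, and script_id below are placeholders, and the exact signatures should be checked against your client version.

    from landscape_api.base import API  # assumption: the landscape-api package

    # Placeholder endpoint and credentials.
    api = API(uri="https://landscape.example.com/api/",
              access_key="MY_KEY", secret_key="MY_SECRET")

    # Run a script; the returned activity ID (e.g. 123) is the parent ID.
    activity = api.execute_script(query="hostname:client-1", script_id=42)
    parent_id = activity["id"]

    # Once the job has finished, the script output is in result_text
    # on the child activities.
    for child in api.get_activities(query="parent-id:%d" % parent_id):
        print(child.get("result_text"))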

Endpoint with Google Flex env

There is a bit of confusion here; I'm wondering if somebody can help me with this.
Here is an example that is a year old and uses goapp with Polymer and Cloud Endpoints:
https://github.com/googlesamples/cloud-polymer-go
Here is a recent example using gcloud
https://github.com/GoogleCloudPlatform/golang-samples/tree/master/appengine_flexible/endpoints
The two are different because Google changed its approach.
Following the second example, I am able to create endpoint functions that use JSON for input and output. However, there are two problems.
First, it throws an error if I put the functions in a different file under the same package. It works when I run go run *.go, but then I don't understand how app.yaml comes into the picture. I think the URL pattern /_ah/spi/.* should work.
Second, I am using the Postman app to check the request and response from the endpoint. Is there a better way? I thought Google provided a platform for testing endpoints.
Is there any example that implements something similar to the first one with the new libraries?
Looking forward to your help. Thanks.

Get members' avatarHash when getting cards from Trello

According to the Trello API documentation, it is possible to return a member's avatarHash as part of the data for the cards on a list. I should be able to use the feed from either of the following:
https://trello.com/1/lists/[LIST_ID]/cards?member_fields=all
https://trello.com/1/lists/[LIST_ID]/cards?member_fields=avatarHash
However, for me anyway, the data is exactly the same with or without the query parameters. I have also tried adding my application key and a token to the URL, but still no success.
What I actually want to do is get the URI for a member's avatar, and I believe I can build the correct one with the hash. Any help to do this or any pointers as to what I am doing wrong will be greatly appreciated.
Trello's documentation for their API shows that there are optional fields, but it isn't clear or even stated (although fairly obvious after reading) that for the member_fields parameter to be valid, there should also be members=true specified as part of the URI.
I came across this while inspecting the API calls Trello makes itself: having removed everything but member_fields, things went missing even for them, but adding members back in made it work as expected.
Right now, my API call is finally working and looks like this:
https://trello.com/1/lists/[LIST_ID]/cards?members=true&member_fields=avatarHash
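Putting it together, here is a minimal Python sketch assuming the requests library; LIST_ID, KEY, and TOKEN are placeholders, and the avatar URL pattern is the one Trello has historically used, so verify it still applies.

    import requests

    # LIST_ID, KEY, and TOKEN are placeholders for your own values.
    cards = requests.get(
        "https://trello.com/1/lists/LIST_ID/cards",
        params={
            "members": "true",             # required for member_fields to apply
            "member_fields": "avatarHash",
            "key": "KEY",
            "token": "TOKEN",
        },
    ).json()

    for card in cards:
        for member in card.get("members", []):
            h = member.get("avatarHash")
            if h:
                # Avatar URL pattern Trello has historically used;
                # 30 and 170 are the common pixel sizes.
                print("https://trello-avatars.s3.amazonaws.com/%s/170.png" % h)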

Spotify API get Related Artists

Is there a way to get a list of related artists through the Spotify API, like the small list that displays in the actual program?
It would be very useful if so, but I am not sure it exists.
Cheers
Yes, it's available through libspotify. There's an SPArtistBrowse class that contains a lot of metadata, including the related artists. CocoaLibSpotify comes with a documentation package where you can find more details on what's included: https://github.com/spotify/cocoalibspotify.
Do note that it's currently extremely slow to load an entire SPArtistBrowse object. I'm assuming it has something to do with libspotify loading it all in one chunk and on the main thread(?). From what I know, Spotify is supposed to remedy that in an upcoming version of libspotify, though.
Getting an artist's related artists is now available through the Spotify Web API.
GET https://api.spotify.com/v1/artists/{id}/related-artists is the format.
https://api.spotify.com/v1/artists/43ZHCT0cAZBISjO8DG9PnE/related-artists is an example request.
For more information, see the API documentation.
There is also a JSFiddle demo app.
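A minimal Python sketch of that request, assuming the requests library; newer versions of the Web API require an OAuth access token, and YOUR_ACCESS_TOKEN below is a placeholder.

    import requests

    artist_id = "43ZHCT0cAZBISjO8DG9PnE"  # Beyonce, from the example above
    resp = requests.get(
        "https://api.spotify.com/v1/artists/%s/related-artists" % artist_id,
        headers={"Authorization": "Bearer YOUR_ACCESS_TOKEN"},  # placeholder
    )
    for artist in resp.json().get("artists", []):
        print(artist["name"])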
Definitely! All you need is the "artist_id" and the SpotifyPublicAPI can return a list of the related artists. You don't need an access_token at all.
You can easily test the API call here on RapidAPI. I've specifically linked you to the getArtistsRelatedArtists endpoint. Rapid will provide you with a code snippet you can copy and paste directly into your code to make the call.
RapidAPI also shows an example of testing the call with Beyonce's artist_id, and generates a sample code snippet with the artist_id passed as a parameter.

How can I get the full change history for an article on Wikipedia?

I'd like a way to download the content of every page in the history of a popular article on Wikipedia. In other words I want to get the full contents of every edit for a single article. How would I go about doing this?
Is there a simple way to do this using the Wikipedia API? I looked and didn't find anything that popped out as a simple solution. I've also looked into the scripts on the PyWikipedia Bot page (http://botwiki.sno.cc/w/index.php?title=Template:Script&oldid=3813) and didn't find anything useful. A simple way to do it in Python or Java would be best, but I'm open to any simple solution that will get me the data.
There are multiple options for this. You can use the Special:Export special page to fetch an XML stream of the page history. Or you can use the API, found under /w/api.php. Use action=query&titles=$TITLE&prop=revisions&rvprop=timestamp|user|content etc. to fetch the history.
Pywikipedia provides an interface to this, but I do not know by heart how to call it. An alternative library for Python, mwclient, also provides this via site.pages[page_title].revisions().
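For the API route, here is a minimal Python sketch assuming the requests library; it follows the API's continuation tokens to walk the full history, and the rvslots parameter is needed on current MediaWiki versions.

    import requests

    API_URL = "https://en.wikipedia.org/w/api.php"

    def fetch_revisions(title):
        """Yield every revision of a page (newest first by default)."""
        params = {
            "action": "query",
            "titles": title,
            "prop": "revisions",
            "rvprop": "timestamp|user|content",
            "rvslots": "main",
            "rvlimit": 50,      # revisions per request; the API caps this
            "format": "json",
        }
        while True:
            data = requests.get(API_URL, params=params).json()
            for page in data["query"]["pages"].values():
                for rev in page.get("revisions", []):
                    yield rev
            if "continue" not in data:
                break
            params.update(data["continue"])  # follow the continuation token

    for rev in fetch_revisions("Python (programming language)"):
        print(rev["timestamp"], rev["user"])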
Well, one solution is to parse the Wikipedia XML dump.
Just thought I'd put that out there.
If you're only getting one page, that's overkill. But if you don't need the very latest information, using the XML dump has the advantage of being a one-time download instead of repeated network hits.