Is there any way to get all the blog posts to use in another component? - docusaurus

I've been looking for a hook or an API from plugin-content-blog that returns all the blog posts under the blog folder as JSON, because I want to use that data in another component, but there doesn't seem to be one. How can I achieve that?

Related

Is there any way (such as an API) to count the likes and reactions on a post on my Facebook page, and to store that count in my database for later use?

I want to create a post on my website and post/share it on my Facebook page. It can be a live stream, video, image, etc. The number of likes and reactions the post gets on Facebook needs to be fetched and stored in the post's record in my website's database.
Is this possible?
If it is, what tools or guidelines do I need to follow?
The easiest solution would be to save the post ID when you create a post (which you will get back if the post was successfully created via the API): docs
When you want the likes later, use that stored post ID and query the API for the information: docs
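A rough sketch of that flow in Python with the requests library. The /{page-id}/feed and /{post-id} endpoints and the reactions.summary(true) field come from the Graph API; the Graph API version, page ID, token, and the "save to database" step are placeholders you'd replace with your own.

import requests

GRAPH = "https://graph.facebook.com/v19.0"     # pick the Graph API version you target
PAGE_ID = "YOUR_PAGE_ID"                       # placeholder
PAGE_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"          # placeholder

# 1. Create the post and keep the ID the API returns.
resp = requests.post(
    f"{GRAPH}/{PAGE_ID}/feed",
    data={"message": "Hello from my website", "access_token": PAGE_TOKEN},
)
post_id = resp.json()["id"]  # store this in your database next to your website's post

# 2. Later, fetch the reaction count for that stored post ID.
resp = requests.get(
    f"{GRAPH}/{post_id}",
    params={"fields": "reactions.summary(true)", "access_token": PAGE_TOKEN},
)
reaction_count = resp.json()["reactions"]["summary"]["total_count"]
# ...save reaction_count in your own database here.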

Remove collection through API in Fauna

The header says it all. Is there a way to delete/remove/drop a collection in FaunaDB through the API?
I've tried looking through all of the listed functions, but I hope I've simply missed it.
Sure. Delete($ref), where $ref is something like Collection("foo"), will delete the named collection along with the documents in it.
Full docs for Delete are here.
There is also an example provided, as eskwayrd points out.
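For instance, with the Python driver (the faunadb package) the call would look roughly like this; the secret and collection name are placeholders:

from faunadb import query as q
from faunadb.client import FaunaClient

client = FaunaClient(secret="YOUR_FAUNA_SECRET")  # placeholder secret

# Delete(Collection("foo")) removes the collection itself, along with its documents.
client.query(q.delete(q.collection("foo")))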

Get wikitext from wikipedia API?

I'm looking at the API documentation here,
https://www.mediawiki.org/wiki/API:Query
Getting the wikitext for a page is mentioned in the beginning of the documentation,
The action=query module allows you to get information about a wiki and the data stored in it, such as the wikitext of a particular page, the links and categories of a set of pages, or the token you need to change wiki content.
but I can't seem to figure out what parameters to pass in the API request to return the wikitext for a given page. Does anyone know how to do this?
I've tried parameters like,
{'action':'query', 'titles':'Anarchism', 'prop':'wikitext', 'format':'json'}
You need to use a query like this:
https://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&format=json&titles=Anarchism&rvslots=main
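In Python with requests, the same query and the path to the wikitext in the JSON response look roughly like this (the slot layout below matches the default format=json response with rvslots=main; adjust if your response differs):

import requests

params = {
    "action": "query",
    "prop": "revisions",
    "rvprop": "content",
    "rvslots": "main",
    "titles": "Anarchism",
    "format": "json",
}
resp = requests.get("https://en.wikipedia.org/w/api.php", params=params)
pages = resp.json()["query"]["pages"]

# "pages" is keyed by page ID, so take the first (and only) entry.
page = next(iter(pages.values()))
wikitext = page["revisions"][0]["slots"]["main"]["*"]
print(wikitext[:200])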

How to get wiki template's content?

Does anybody know how to get access to the template's body inside the page?
I'm familiar with the API that returns the list of ALL templates that exist on a page, but how can I get access to a template's body? Is there any API for this? For now the only way I see is to parse it manually. Am I wrong?
You can use the expandtemplates API call, or the rvexpandtemplates parameter for the revisions API call.
This is an old question, but it helped me figure out how to fetch a mediawiki page with template macros expanded. Very useful if you are doing a conversion.
<MW_BASEURL>/api.php?action=query&prop=revisions
&titles=<url_encoded_page_title>&format=xml&rvprop=content&rvexpandtemplates
I am parsing the XML returned from this query to get the expanded page.
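A minimal sketch of the expandtemplates route in Python (requests assumed); prop=wikitext asks for the expanded wikitext back, and the sample template text is just an illustration:

import requests

params = {
    "action": "expandtemplates",
    "text": "{{Infobox person|name=Example}}",  # any wikitext containing templates
    "prop": "wikitext",
    "format": "json",
}
resp = requests.get("https://en.wikipedia.org/w/api.php", params=params)
expanded = resp.json()["expandtemplates"]["wikitext"]
print(expanded)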

How can I get the full change history for an article on Wikipedia?

I'd like a way to download the content of every revision in the history of a popular article on Wikipedia. In other words, I want the full contents of every edit for a single article. How would I go about doing this?
Is there a simple way to do this using the Wikipedia API? I looked and didn't find anything that popped out as a simple solution. I've also looked at the scripts on the PyWikipedia Bot page (http://botwiki.sno.cc/w/index.php?title=Template:Script&oldid=3813) and didn't find anything useful. A simple way to do it in Python or Java would be best, but I'm open to any simple solution that gets me the data.
There are multiple options for this. You can use the Special:Export special page to fetch an XML stream of the page history. Or you can use the API, found under /w/api.php. Use action=query&titles=$TITLE&prop=revisions&rvprop=timestamp|user|content etc. to fetch the history.
Pywikipedia provides an interface to this, but I do not know by heart how to call it. An alternative library for Python, mwclient, also provides this, via site.pages[page_title].revisions()
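With mwclient that might look like the sketch below; the article title is a placeholder, and the prop string is my assumption about which fields you want:

import mwclient

site = mwclient.Site("en.wikipedia.org")
page = site.pages["Anarchism"]  # placeholder article

# Iterate over the full revision history, newest first by default,
# pulling the wikitext of each revision along with its metadata.
for rev in page.revisions(prop="ids|timestamp|user|content"):
    print(rev["revid"], rev["timestamp"], rev.get("user"))
    # Depending on the MediaWiki/mwclient versions, the content may be under
    # rev["*"] or under rev["slots"]["main"]["*"].
    wikitext = rev.get("*")
    # ...do something with wikitext here.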
Well, one solution is to parse the Wikipedia XML dump.
Just thought I'd put that out there.
If you're only getting one page, that's overkill. But if you don't need the very latest information, using the XML dump has the advantage of being a one-time download instead of repeated network hits.
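For the dump route, here is a bare-bones sketch that streams a MediaWiki export XML file (from Special:Export or a dump) with the standard library; the filename is a placeholder, and the helper just strips whichever export namespace the file uses:

import xml.etree.ElementTree as ET

def local(tag):
    """Strip the {namespace} prefix ElementTree puts on tags."""
    return tag.rsplit("}", 1)[-1]

# Stream the file so a large history doesn't have to fit in memory at once.
for event, elem in ET.iterparse("article_history.xml"):  # placeholder filename
    if local(elem.tag) == "revision":
        timestamp = text = None
        for child in elem:
            if local(child.tag) == "timestamp":
                timestamp = child.text
            elif local(child.tag) == "text":
                text = child.text
        print(timestamp, len(text or ""))
        elem.clear()  # free the memory used by this revision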