In my current project I need to bulk-post to WordPress (from text files), and I need to set publication dates of my own choosing.
What is the best way to do this?
Use XML-RPC. You can build a script that reads the text file, then makes an XML-RPC request to the WordPress server to create the post. You'll need to have a valid username and password to make this work (and will need to enable XML-RPC on the WordPress site as well).
The API is fairly well-defined. You make an XML-RPC call to metaWeblog.newPost and pass in the blog information, the post content, and your username and password. WordPress does the rest. You can also specify the date the post was/will be published as an optional field.
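For illustration, here's a minimal sketch in Node.js using the third-party xmlrpc package (npm install xmlrpc). The host, credentials, blog ID, and file name below are placeholders you would substitute:

const fs = require('fs');
const xmlrpc = require('xmlrpc');

// WordPress's XML-RPC endpoint lives at /xmlrpc.php
const client = xmlrpc.createSecureClient({
  host: 'example.com',
  port: 443,
  path: '/xmlrpc.php',
});

// Read the post body from a text file
const body = fs.readFileSync('post1.txt', 'utf8');

// metaWeblog.newPost(blogid, username, password, content, publish)
client.methodCall(
  'metaWeblog.newPost',
  [1, 'user', 'secret', {
    title: 'Imported post',
    description: body,                             // the post content
    dateCreated: new Date('2015-06-01T09:00:00Z'), // your own publish date
  }, true],
  (err, postId) => {
    if (err) return console.error(err);
    console.log('Created post', postId);
  }
);

Loop that over your text files, and each post gets whatever dateCreated you supply.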
Further reading
XML-RPC Support « WordPress Codex
WordPress XML-RPC MetaWeblog API « My own documentation of the API
RFC: MetaWeblog API « Further documentation of the API
I am learning Express and the HTTP methods, but I cannot find any documentation on this. Is /api/value just for the JSON data, like an address just for that data? Any extra info on it would be appreciated: what exactly does it do, and is there any documentation from Express about it? Or is this a general term used in URLs throughout frameworks and the internet?
For example:
app.get('/api/jackets', (req, res) => { res.send('logic'); });
Why do we need to add the /api before jackets, and what does it do?
It's not necessary; it's there only to make the routes easier to understand.
The /api request is not required, but putting a prefix in front of the API requests such as:
/api
or, in some cases including a version number:
/api/v1
allows you to use the same web server for more than one type of request, because the /api prefix uniquely identifies each API request as an API request, so it can easily be routed to where you handle API requests. You could have plain web page requests from a browser served by the same web server. While you don't have to use such a prefix, using it gives you the most flexibility in how you deploy and use your server.
Similarly, putting the version in the prefix such as /api/v1 allows you to evolve your API in the future in a non-backward-compatible way by adding a new version designation without breaking prior API clients (you support both versions at the same time - at least for a transition period).
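As a concrete sketch of this layout in Express (the route names and data are made up):

const express = require('express');
const app = express();

// API routes grouped on their own router...
const apiV1 = express.Router();
apiV1.get('/jackets', (req, res) => {
  res.json([{ id: 1, name: 'Rain jacket' }]);
});

// ...mounted under a versioned prefix,
app.use('/api/v1', apiV1);

// while plain web pages are served by the same server.
app.use(express.static('public'));

app.listen(3000);

A later /api/v2 router can then be mounted alongside /api/v1 without touching existing clients.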
The company I work for has a web application built with Angular, which has user authentication.
We also have a blog built with Webflow for simplicity.
The thing is, we want to create special pages on our blog only for premium users. For that, a user would need to sign in on the blog (webflow) using the same account they use on the main web application. After that, the blog would also need to know if they should have access to said pages (is a premium user), and then allow them to access such areas.
I've been looking for information about this, but I've been unable to find a clear answer. I tried following this, but the GET request for https://webflow.com/oauth/authorize (using my own client ID) returns their home page.
The request has the following format: https://webflow.com/oauth/authorize/?client_id=<CLIENT_ID>&response_type=code. It redirects twice (HTTP 301 and 302), then just returns the homepage.
In fact, I'm not even sure this OAuth integration would solve my problem. Is this even achievable using Webflow?
I have my API documented with Swagger. For developer convenience I would like to provide the Swagger GUI on my website as well. However, my hosting provider has not installed the PHP YAML extension, which means I can't run the GUI on my own website.
So, I would like to use a third-party GUI. I know that I can use https://petstore.swagger.io/ and enter the link to my YAML file in the text box, but that is not really user-friendly. I would prefer to specify the YAML file when calling the URL, so that the GUI opens directly with my API definition.
Any thoughts?
If for some reason you cannot host Swagger UI yourself, here are some alternatives you can try:
Use SwaggerHub to host your API definition and docs.
Disclosure: I work for the company that makes SwaggerHub.
Use GitLab to host your OpenAPI YAML/JSON file. GitLab uses Swagger UI to render OpenAPI files. Example:
https://gitlab.com/gofus/gofus-api/blob/dev/swagger.yaml
Use https://petstore.swagger.io with the url query parameter to automatically load your API definition:
https://petstore.swagger.io?url=https://yoursite.com/api.yaml
For this to work, the server where your OpenAPI file is hosted must use HTTPS and support CORS.
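If your PHP host lets you run plain scripts, one way to satisfy that requirement is a tiny pass-through script; this is just a sketch with placeholder file names, and no YAML extension is needed because the file is served as-is:

<?php
// serve-api.php -- hand the OpenAPI file to the hosted Swagger UI
// with the CORS header it needs.
header('Access-Control-Allow-Origin: *');
header('Content-Type: application/yaml');
readfile(__DIR__ . '/api.yaml');

Then point the hosted UI at it: https://petstore.swagger.io?url=https://yoursite.com/serve-api.php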
I've been looking at backends and APIs for a while now. It seems that sometimes devs build a regular backend (in, say, a language like PHP) that handles all the backend matters, and sometimes devs instead build their backend as an API, then have their own (and possibly other) sites pull data from that API.
I was wondering this:
Say I want to build a regular backend using a server-side scripting language like PHP, which I will use not only to render my main website but also to do other server-side scripting, etc. Then say I want to take data from this site and make it accessible to another site of mine through API calls. Is it possible to build an API on top of a regular backend?
If the answer is yes, how complex can it get to achieve something like this?
What tools or design strategies (if any) would you have or have used for achieving this?
This is an old question, but since I'm here, I may as well provide an answer for anyone wondering. Joe is asking about server-side web APIs versus regular server-side code.
Yes, a "regular" backend and an API backend can exist at the same time. If your backend is in PHP, you can refactor and extend your code to handle API requests.
Like Patrick Evans said, an API is the backend. If your backend PHP code communicates with a database to manipulate or retrieve data, you can consider that an API transaction. Whenever your backend receives a request, acts on that request, and returns a response, it is essentially behaving like an API.
Let's say you own example.com, with an index.php file in the root directory, so when a user requests example.com in their browser, this index.php file is processed and served to them. Now, you can set up this index.php file to handle both regular page requests (i.e. the PHP script returns an HTML template that is rendered by the browser) and API calls. This can be as complex or as simple as you want it to be.
The best way to handle this would be to assign different routes for rendering your main webpages and API calls. You can set up routes in the following way...
example.com/index.php?route=api&data=users can be handled by your 'API code' in index.php to return a JSON response containing a list of all the users in your database, while example.com/index.php?route=home will just return your website's home page.
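A minimal sketch of that dispatch in index.php might look like this (the route names, data, and JSON shape are made up for illustration):

<?php
// index.php -- dispatch on the 'route' query parameter
$route = $_GET['route'] ?? 'home';

if ($route === 'api' && ($_GET['data'] ?? '') === 'users') {
    // 'API code': return a JSON response (a stand-in for a real DB query)
    header('Content-Type: application/json');
    echo json_encode([['id' => 1, 'name' => 'Alice']]);
} elseif ($route === 'home') {
    // Regular page request: return an HTML template
    echo '<html><body><h1>Home</h1></body></html>';
} else {
    http_response_code(404);
    echo 'Not found';
}

In a real application you would likely hide the query strings behind rewrite rules or a router library, but the dispatch idea is the same.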
I was googling for tools to check for broken links on a remote web page. The W3C validator seemed a good one, but I am still unsure how to check pages which are restricted, i.e. pages which I can only access by logging in to the site. Can we do that using the W3C validator? If not, is there any other tool for this?
For basic authentication, the online validator will proxy it and prompt you to log on; alternatively, see this post.
Sometimes you can specify the login details in the URL: username:password@url.to.the.site. I believe this will only work if the site uses HTTP basic authentication (e.g. configured via a .htaccess file).