How to prevent rapid, automated HTTP requests from inserting content on a web page? (example: massive blog posts) - httprequest

How do I prevent rapid, automated HTTP requests from inserting content on a web page? For example, I have a blog. How do I prevent a program from automatically inserting massive numbers of blog posts? I want only humans, or an API function provided by my web application, to be able to insert blog posts.

Try using reCAPTCHA to prevent bots from adding content to your blog:
http://www.google.com/recaptcha
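For this to actually stop bots, you need to verify the reCAPTCHA token on the server before accepting the post, not just render the widget. A minimal sketch of that server-side check, assuming an Express endpoint, a client that submits the widget's token as JSON, and a hypothetical RECAPTCHA_SECRET environment variable (all illustrative, not from the original answer):

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Reject any post submission whose reCAPTCHA token fails Google's siteverify check.
app.post("/posts", async (req, res) => {
  const token = req.body["g-recaptcha-response"]; // token produced by the reCAPTCHA widget
  if (!token) {
    return res.status(400).send("Missing captcha token");
  }

  // Server-to-server verification call; the secret key never leaves the server.
  const verification = await fetch("https://www.google.com/recaptcha/api/siteverify", {
    method: "POST",
    body: new URLSearchParams({
      secret: process.env.RECAPTCHA_SECRET ?? "",
      response: token,
    }),
  });
  const result = (await verification.json()) as { success: boolean };

  if (!result.success) {
    return res.status(403).send("Captcha verification failed");
  }

  // Captcha passed: it is (probably) a human, so create the blog post here.
  res.status(201).send("Post created");
});

app.listen(3000);
```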

Related

Cloudflare fails to purge dynamic content for a blog post

I run my blog through Cloudflare. When I create a new post, the live site doesn't show it. I've tried several methods I learned from other sites, such as using a page rule to bypass the Cloudflare cache, but it didn't work. I also turned off auto-minify for JS, CSS, and HTML, and it still doesn't work. My blog still shows the most recent post from five days ago. If you log in to the WordPress dashboard you see the current posts, but normal visitors see the cached posts, which remain static the whole time.
Here are my Cloudflare settings:
Page Rules Settings
Page Speed Settings
Page Caching Settings
I'd appreciate help from anyone who knows about this problem and how to solve it.
Thanks!
Looking at the site now, it looks like you may have disabled Cloudflare? I'm seeing the latest posts, and there are no Cloudflare response headers coming back.
For troubleshooting caching issues, one of the most useful things you can do is inspect the response headers (using the Network tab in Chrome DevTools or similar). The first step is to identify which request is responsible for the cached content (a document, an AJAX call, etc.).
From there, you can look at the response headers to see why it is behaving this way; specifically, you'll want to check the Cache-Control and CF-Cache-Status headers (a quick way to script that check is sketched below). More info here - https://developers.cloudflare.com/cache/about/default-cache-behavior
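If you prefer the command line to DevTools, here is a minimal sketch of that header check (the URL is a placeholder; Node 18+ is assumed for the global fetch):

```typescript
// Print the caching-related response headers for a given URL.
async function inspectCacheHeaders(url: string): Promise<void> {
  const response = await fetch(url);
  console.log("status:          ", response.status);
  console.log("cache-control:   ", response.headers.get("cache-control"));
  // CF-Cache-Status only appears when Cloudflare is actually in front of the site.
  console.log("cf-cache-status: ", response.headers.get("cf-cache-status"));
}

inspectCacheHeaders("https://example.com/").catch(console.error);
```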
I fixed this problem: all of the site's performance was being managed by Ezoic. I purged everything on Ezoic and now everything works perfectly.

How to add dynamic meta tags to a website with no middleware or SSR

I have a relatively large app with a lot of user profile pages. I want to make it so that if you share one of the user profile pages, it previews the user's name and picture on social media like Facebook and Twitter (think sharing a Twitch streamer's page on Twitter). I used create-react-app to start the project, so I don't have server-side rendering or any middleware or pre-rendering tools. Is there another way I can accomplish this?
There are two ways you can get this to work:
First, serve your files via an Express server and check who made the request by inspecting the user-agent header. If it's a bot, instead of sending the usual response, fetch the required user profile data, use that data to populate the Open Graph meta tags, and return the HTML with those meta tags.
The second way is to use a network interceptor on the CDN you're using to identify who is requesting the page (a bot or a person); if it's a bot, make a request to your backend to fetch the related data and send back HTML with the populated meta tags.
Explained approach
Every time a request comes into our server, it carries a user-agent header that tells the server who is requesting the resource (a human, or a bot from Facebook trying to build a link preview). You can simply compare it against a list of known bot user-agents (so it won't catch everything, but it covers all the well-known platforms and roughly 90% of the rest).
Let's say we have something.com, where we want link previews, and a request comes in for something.com/john. What we do is inspect the incoming request's user-agent property: if it's a human, it gets redirected to our normal site; but if it's a bot (which only wants HTML for the link preview), then, since it's our server, we can grab John's data, set the proper meta tags inside our HTML, and send it back as the response.
So whenever a human goes to something.com/john, they are redirected to our landing page, since what they care about is what they see in their browser; but when a bot comes in, we send it an HTML response with the proper meta tags, since the link preview is all the bot cares about.
This can be done on our Express server with something like the sketch below, but it can also be done at the infrastructure level.
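A minimal sketch of that user-agent check in front of a create-react-app build; the bot list, the lookupUser helper, and the profile fields are illustrative assumptions rather than part of the original answer:

```typescript
import express from "express";
import * as path from "path";

const app = express();

// Partial list of link-preview crawler user-agents (illustrative, not exhaustive).
const BOT_AGENTS = ["facebookexternalhit", "twitterbot", "linkedinbot", "slackbot"];

function isPreviewBot(userAgent: string | undefined): boolean {
  if (!userAgent) return false;
  const ua = userAgent.toLowerCase();
  return BOT_AGENTS.some((bot) => ua.includes(bot));
}

// Hypothetical lookup; a real app would query its database or API here.
async function lookupUser(handle: string): Promise<{ name: string; picture: string }> {
  return { name: handle, picture: `https://example.com/avatars/${handle}.png` };
}

app.get("/:handle", async (req, res) => {
  if (isPreviewBot(req.headers["user-agent"])) {
    // Bots get a bare HTML shell whose only job is to carry the Open Graph tags
    // (HTML escaping of user data is omitted for brevity).
    const user = await lookupUser(req.params.handle);
    res.send(`<!DOCTYPE html>
<html><head>
  <meta property="og:title" content="${user.name}" />
  <meta property="og:image" content="${user.picture}" />
</head><body></body></html>`);
  } else {
    // Humans get the normal single-page app bundle.
    res.sendFile("index.html", { root: path.join(process.cwd(), "build") });
  }
});

app.listen(3000);
```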

Webflow authentication integration with private API

The company I work for has a web application built with Angular that has user authentication.
We also have a blog built with Webflow for simplicity.
The thing is, we want to create special pages on our blog only for premium users. For that, a user would need to sign in on the blog (webflow) using the same account they use on the main web application. After that, the blog would also need to know if they should have access to said pages (is a premium user), and then allow them to access such areas.
I've been looking for information about this, but I've been unable to find a clear answer. I tried following this, but the GET request to https://webflow.com/oauth/authorize (using my own client ID) returns their home page. This can be seen in the screenshot below:
The request has the following format: https://webflow.com/oauth/authorize/?client_id=<CLIENT_ID>&response_type=code. It redirects twice (codes 301 and 302), then just returns their homepage.
In fact, I'm not even sure this OAuth integration would solve my problem. Is this even achievable using Webflow?

Who knows which files should be included in a website?

When the browser requests a website (any website) from an HTTP server, which of the two parses the site's content in order to know which other files need to be included in the webpage?
What I mean is this:
the browser asks for the HTML file, then observes that it needs to import some external CSS files, and it is the browser itself that requests them.
OR
the HTTP server, when faced with a request for a website, parses (or already knows) which files need to be linked to a certain webpage and sends them along with the HTML page?
I'm guessing the first case is the correct one, but if someone can confirm and maybe clarify it, I'd appreciate it.
It's all done by the client (which is usually a browser). When it sees <script>, <iframe>, <img>, <link>, etc. tags that reference other documents, it downloads them if necessary.
According to Wikipedia -
The primary function of a web server is to cater web page to the request of clients using the Hypertext Transfer Protocol (HTTP). This means delivery of HTML documents and any additional content that may be included by a document, such as images, style sheets and scripts.
and
The primary purpose of a web browser is to bring information resources to the user ("retrieval" or "fetching"), allowing them to view the information ("display", "rendering"), and then access other information ("navigation", "following links").
It is the browser that parses the HTML and requests the associated content.
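To illustrate what the browser is doing, here is a rough sketch of that discovery step: fetch the HTML, collect the URLs referenced by src and href attributes, and fetch each one. A real browser builds a full DOM and applies many more rules; the regex here is a deliberate simplification, and the URL is a placeholder.

```typescript
// Naive illustration of subresource discovery: the client, not the server,
// decides which additional files to request after reading the HTML.
async function fetchPageAndSubresources(pageUrl: string): Promise<void> {
  const html = await (await fetch(pageUrl)).text();

  // Collect everything referenced via src="..." or href="..." in the markup.
  const refs = [...html.matchAll(/(?:src|href)="([^"]+)"/g)].map((m) => m[1]);

  for (const ref of refs) {
    const resourceUrl = new URL(ref, pageUrl); // resolve relative URLs against the page
    const response = await fetch(resourceUrl);
    console.log(response.status, resourceUrl.href);
  }
}

fetchPageAndSubresources("https://example.com/").catch(console.error);
```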

Bulk posting to WordPress via scripts

In my current project I need to make bulk posts to WordPress (from text files), and I need to set publication dates of my own choosing.
What is the best way to do it?
Use XML-RPC. You can build a script that reads the text file, then makes an XML-RPC request to the WordPress server to create the post. You'll need to have a valid username and password to make this work (and will need to enable XML-RPC on the WordPress site as well).
The API is fairly well defined. You make an XML-RPC call to metaWeblog.newPost and pass in the blog information, the post content, and your username and password. WordPress does the rest. You can also specify the date the post was/will be published as an optional field; a rough sketch follows.
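A sketch of such a script, assuming the npm xmlrpc package and the standard WordPress endpoint at /xmlrpc.php; the file layout (one post per .txt file, first line used as the title) and the hard-coded credentials and date are illustrative assumptions:

```typescript
import * as fs from "fs";
import * as path from "path";
import * as xmlrpc from "xmlrpc"; // assumes the npm "xmlrpc" package is installed

const client = xmlrpc.createSecureClient({
  host: "myblog.example.com",
  port: 443,
  path: "/xmlrpc.php",
});

// Create one post with an explicit publication date (dateCreated backdates or schedules it).
function newPost(title: string, body: string, date: Date): Promise<string> {
  const post = { title, description: body, dateCreated: date };
  return new Promise((resolve, reject) => {
    client.methodCall(
      "metaWeblog.newPost",
      [1, "username", "password", post, true], // blog id, credentials, content, publish flag
      (error, postId) => (error ? reject(error) : resolve(postId as string))
    );
  });
}

// Walk a directory of .txt files: first line is the title, the rest is the body.
async function bulkPost(dir: string): Promise<void> {
  for (const file of fs.readdirSync(dir).filter((f) => f.endsWith(".txt"))) {
    const [title, ...rest] = fs.readFileSync(path.join(dir, file), "utf8").split("\n");
    const postId = await newPost(title, rest.join("\n"), new Date("2012-06-01T10:00:00Z"));
    console.log(`Created post ${postId} from ${file}`);
  }
}

bulkPost("./posts").catch(console.error);
```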
Further reading
XML-RPC Support « WordPress Codex
WordPress XML-RPC MetaWeblog API « My own documentation of the API
RFC: MetaWeblog API « Further documentation of the API