301 versus 410 error when a page is deleted - SEO

Before, when I deleted a page, I would redirect it (301) to the category page, but Google Webmaster Tools said that I had too many "soft 404" errors.
So recently I changed this to send a 410 error and display some links to similar pages, but now Webmaster Tools reports an increase in "404 not found" errors.
What should I do?

Of course they see an increase in 404 errors, because that is what is happening! They warn you so that you notice unintended problems. But right now the increase is intended, so just ignore it. The warning will disappear.

The only things you can do to stop people from hitting deleted pages are to:
Stop deleting pages.
Clear out any links that may be leading to a deleted page.
If pages get deleted and the links to them are not removed, you'll keep getting Not Found errors. Of course, you can't avoid this if people have bookmarked your pages.

If you don't want to see 404 errors, you can redirect the deleted pages to related pages, but keep in mind that excessive use of redirects is also harmful. Try not to delete more pages, remove (or at least reduce) the links pointing to the deleted page, and otherwise just wait and ignore the 404s for that page. As time passes it will drop out of Google's search results and you will no longer get 404 errors.
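
For reference, here is a minimal sketch of serving a 410 for a removed URL, assuming an Apache server (the question doesn't say which server is in use); /deleted-page is a placeholder path:

# .htaccess: answer requests for a removed URL with 410 Gone
# instead of 301-redirecting them to the category page.
Redirect gone /deleted-page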

Related

HTMX sometimes redirects to a page fragment instead of placing it in the page

I'm using HTMX to replace a fragment of page every 30 seconds using the following code:
<ul hx-get="/refresh" hx-trigger="every 30s" id="today" hx-select="#today" hx-target="#today" hx-swap="outerHTML">
This works perfectly most of the time. However, when I leave the page open, I sometimes come back and find it has redirected to the hx-get URL ("/refresh" in the example). So it's showing just the unstyled fragment, and the URL bar shows the fragment URL.
I think this might be something to do with the behaviour that is intended to show errors when they occur, but the page shown isn't an error. I think what happened is that HTMX got a transient error (maybe a drop in my internet connection when my laptop went to sleep) and redirected to the page to show it, but by the time it loaded, the error was gone.
Is there any way to fix this? Assuming my guess above is correct, I'd like to silently drop errors for this particular element rather than redirecting to them.
Thanks in advance!
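
One thing worth trying, assuming the stray navigation really is triggered by htmx's error handling: htmx fires an htmx:beforeSwap event whose detail lets you drop a response and suppress error processing, so failed polls for this one element can be silently ignored. A sketch (the #today id comes from the snippet above):

<script>
  document.body.addEventListener('htmx:beforeSwap', function (evt) {
    // Only intervene for the polling element from the question.
    if (evt.detail.target && evt.detail.target.id === 'today'
        && evt.detail.xhr.status >= 400) {
      evt.detail.shouldSwap = false; // keep the current content in place
      evt.detail.isError = false;    // don't run htmx's error handling
    }
  });
</script>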

Instagram Login Failure Issue

When I try to log in to my Instagram through the web, I get a blank screen that doesn't load. I tried clearing the browser's cache and cookies, but the issue persists. When I inspect the page in the console, this error is displayed:
https://i.instagram.com/api/v1/business/account/get_web_pro_onboarding_eligibility/
The resource https://www.instagram.com/static/bundles/es6/FeedSidebarContainer.css/f627ebef4169.css was preloaded using link preload but not used within a few seconds from the window's load event. Please make sure it has an appropriate `as` value and it is preloaded intentionally.
Subsequent non-fatal errors won't be logged; see https://fburl.com/debugjs.
Same errors here. The feed page is empty. I can open my own profile page or other users' profiles; that works.
See the screenshot. On the feed page there is a GET request whose status is "429 Too Many Requests". I have no clue what is going wrong here. On my two other Instagram profiles the feed is working.

Google plus one counts lost after adding 'www' to website URL

After I changed my website URL on my Google+ page (from http://unimojo.ir to http://www.unimojo.ir, for better SEO results), these things are happening:
My home page's +1 count got reset.
Before the change, clicking the +1 button on any post on the page added the click to my home page's +1 badge. After the change it doesn't any more.
Does anyone know what I can do about it?

How do I set up a robots.txt which allows all pages EXCEPT the main page?

If I have a site called http://example.com, and under it I have articles, such as:
http://example.com/articles/norwegian-statoil-ceo-resigns
Basically, I don't want the text from the front page to show up in Google results, so that when you search for "statoil ceo", you ONLY get the article itself, not the front page, which contains this text but is not the article itself.
If you did that, then Google could still display your home page with a note under the link saying it couldn't crawl the page. This is because robots.txt doesn't stop a page from being indexed. You could noindex the home page, though personally I wouldn't recommend it.
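
For reference, a minimal sketch of both options. Note that the $ end-of-URL anchor in robots.txt is a Google extension, and not every crawler supports it:

User-agent: *
Disallow: /$

This blocks only the exact root URL, because $ anchors the match at the end of the path, so /articles/... stays crawlable. The noindex alternative mentioned above would go in the home page's <head>:

<meta name="robots" content="noindex">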

Web part lost when page post back

Here is a brief description of the issue.
I have Page1, on which I have put a LinkButton. The LinkButton has its PostBackUrl property pointing to Page2.
When the user is redirected to Page2, I use the Page_Load method to access controls from the previous page and get the values I need. To be clear, I am using this approach because I can't use the query string.
Page2 has two web parts on it. The web parts use the data received from Page1 in the Page_Load event and render it.
This works perfectly on the first page load. But when the user clicks a URL on the page that causes a postback, the web parts get lost.
Note that if I go directly to Page2 without going through Page1, the web parts are retained on the page and are not lost.
Can anyone give me a clue about the cause of this issue?
Thanks in advance.
Do you have any debugging enabled? You are most likely looking for values on page load that don't exist, and might be getting exceptions that aren't handled properly.
I am not sure why, but somehow the code was throwing an exception when I tried to access the Page.PreviousPage property, even though I had made sure to check for null at each step. The code was never even hit when the web parts were lost, so it is still a mystery to me.
Just in case someone comes across this issue, my workaround may help: I posted back to Page2 using the POST method and accessed the variables through the Page.Form[] collection. This resolved my issue of the web parts getting lost.
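
A minimal sketch of that workaround, reading the posted value via the standard form collection. The field name "TitleTextBox" is a hypothetical placeholder; the actual key is the posting control's UniqueID, which may carry naming-container prefixes (e.g. "ctl00$Content$TitleTextBox"):

protected void Page_Load(object sender, EventArgs e)
{
    // On the cross-page POST from Page1, the posted fields are available
    // in the form collection without touching Page.PreviousPage.
    string title = Request.Form["TitleTextBox"];
    if (title != null)
    {
        // Persist the value so it survives Page2's own postbacks,
        // which is where the web parts were getting lost.
        ViewState["Title"] = title;
    }
}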