Why does my website name change to Japanese text when indexed by Google?

Is this malware or something else? When I search for my site name as a keyword, the result appears in Japanese characters. Can you tell me why this happens?

First make sure that nobody has injected malicious code into one of your website's files; what you describe matches the so-called "Japanese keyword hack", where a compromised site shows autogenerated Japanese text in Google's results. There may be injected code that rewrites your pages or redirects visitors to another site.
You can try:
rebuilding your index page from a fresh, clean copy
deleting any strange files, or better, reinstalling your site's templates
checking the site in Google Search Console

Related

Google Search Console fails to fetch sitemaps | "Sitemap could not be read"

I generated a sitemap with online generators; it seems to be working, and I even tested it with the old Google Search Console sitemap tester, where it passes. But when I submit it in either version, it just displays an error message.
This is a known bug. See this Google support answer.
In my case, it was the sitemap itself that had a syntax error.
Open the sitemap in Firefox; it will tell you if there is a syntax error.
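For reference, a minimal syntactically valid sitemap looks like this (example.com and the date are placeholders); if yours deviates from this structure, the XML parser will point at the offending line:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2019-06-01</lastmod>
  </url>
</urlset>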
Your sitemap's domain address might have changed. If the site runs WordPress, use the Yoast plugin; Search Console will then automatically pick up sitemap.xml.
I had the same problem and the solution was very simple: just use the full path to your sitemap.
Where the console asks 'add new sitemap', instead of writing /sitemap.xml, write the full path, such as https://example.com/sitemap.xml.
That should fix the problem.
Using the Yoast SEO plugin, which generated ten sitemaps, the sitemap index was read the first time but only one of the sub-sitemaps was. I manually visited the other sitemaps (I suspected they had taken too long to respond), then deleted the sitemap in Google Search Console and re-submitted it. All of them were read that time.
I had this issue, and it was because I hadn't set the Content-Type header to application/xml.
This sitemap validator notified me of the issue: https://www.xml-sitemaps.com/validate-xml-sitemap.html
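If the server happens to be Apache with .htaccess overrides enabled (an assumption about the setup), a one-line directive in the site root maps the .xml extension to the correct MIME type:
AddType application/xml .xml
After adding it, re-fetch the sitemap URL and confirm the Content-Type response header before resubmitting.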
Enter the full URL of your sitemap, e.g., https://example.com/sitemap.xml. Also, ensure your sitemap filename does not include numbers or symbols.

Remove Google indexing from our image server

We do a lot of email marketing, and sometimes developers put the HTML files on the image server (I know the easy answer is not to do this), but those HTML files end up getting indexed by Google and eventually rank high in search results, which in turn makes the SEO companies want us to remove these pages. Is it possible to have Google not index anything from our subdomain? We have image.{ourUrl}.com where we put all these files.
Would putting a robots.txt file in the main directory do it? Or would we need to add that robots.txt file to every directory?
Is there an easy way to blanket this?
A robots.txt file would only stop crawling; the files might still be indexed if they are linked from elsewhere. A noindex directive would work, and for plain files you can send it with an X-Robots-Tag HTTP header. See https://developers.google.com/webmasters/control-crawl-index/docs/robots_meta_tag
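For example, on an Apache server with mod_headers enabled (an assumption about the image server's setup), a .htaccess file at the root of image.{ourUrl}.com could send the header for every HTML file:
<FilesMatch "\.html?$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>
Unlike a robots.txt block, this still lets Googlebot fetch the files, which it must do to see the noindex and drop them from its index.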

Rename wp-login.php with default permalink

For better security, I would like to rename the login URL of my blog to something other than /wp-login.php. I found a plugin that would do the job:
http://wordpress.org/plugins/rename-wp-login/
But the problem is that it only works with non-default permalinks, which is a problem for me because I use Unicode names for my posts, which would make the links very long and messy with percent-encoding. I wouldn't want to translate every link name to English; that's tedious!
Is there a way to hide wp-login.php and wp-admin from hackers without having to change the permalink structure?
Thank you.
You can now use the Rename wp-login.php plugin with any kind of permalink structure! ;)
I can suggest one great plugin that has plenty of useful features, including what you want. It uses a different technique that does not depend on permalinks (in two words: it uses .htaccess for all the magic).
It's called Better WP Security.
Here is the link
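To illustrate the .htaccess technique such plugins rely on (a sketch only, not the plugin's actual rules; 192.0.2.10 is a placeholder for your own IP), you can restrict wp-login.php to known addresses without renaming anything:
<Files wp-login.php>
  Order Deny,Allow
  Deny from all
  Allow from 192.0.2.10
</Files>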
Why don't you use a permalink structure like this?
/%post_id%/
For a long time I was tackling one issue:
someone was trying to access my website using random passwords.
I got a report of the IP addresses that were hitting the wp-login.php file.
Besides that, I found a .sd0 file in my root folder,
filled with some encrypted code.
I removed it and renamed my wp-login.php to wp-login-xx.php.
After renaming that file, you also need to change the files below for everything to work properly:
search for wp-login.php and replace it with the name you assigned (wp-login-xx.php) in these files (see the search command below):
wp-login.php
wp-includes/general-template.php
wp-includes/pluggable.php
For better security, also update WordPress to the latest version.
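To locate every reference that needs updating, a quick search from the WordPress root will list them (a sketch, assuming a Unix-like shell):
grep -rn "wp-login.php" wp-includes/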

Google, do not index YET

While building a live site on its actual live hosting platform, is there a way to tell Google not to index the website yet? I found the following:
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=93710
But would that tell Google never to come back? Or would it simply see the noindex tag and not list the results, then, when it crawls again later and my site is good to go, I would have removed the noindex and the site would start getting indexed?
Sounds like you want to use a robots.txt file instead:
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=156449&topic=2370588&ctx=topic
Update your robots.txt file when you want your content to be indexed.
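For example, this robots.txt at the site root asks all compliant crawlers to stay away until you change it:
User-agent: *
Disallow: /
When the site is ready, change "Disallow: /" to an empty "Disallow:" (or delete the file) and the crawlers will come back in.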
You can use the robots.txt method.
You can specify which subpages may be crawled, and Google checks the file again before indexing, so you can delete the file later in order to get fully indexed.
More Information
About /robots.txt
Robots.txt File Generator
You can always change it. The way Google and other robots find your page is through links on other pages; as long as it isn't linked anywhere, it won't be found. Also, once your site is up, chances are it will start far back in the list of results.

Is there a way to prevent Googlebot from indexing certain parts of a page?

Is it possible to fine-tune directives to Google to such an extent that it will ignore part of a page, yet still index the rest?
There are a couple of different issues we've come across which would be helped by this, such as:
RSS feed/news ticker-type text on a page displaying content from an external source
users entering contact details (phone numbers, etc.) that they want visible on the site but would rather not be Google-able
I'm aware that both of the above can be addressed by other techniques (such as rendering the content with JavaScript), but I'm wondering whether anyone knows of a cleaner option already available from Google.
I've been doing some digging on this and came across mentions of googleon and googleoff tags, but these seem to be exclusive to Google Search Appliances.
Does anyone know if there's a similar set of tags to which Googlebot will adhere?
Edit: Just to clarify, I don't want to go down the dangerous route of cloaking/serving up different content to Google, which is why I'm looking to see if there's a "legit" way of achieving what I'd like to do here.
What you're asking for can't really be done: Google either takes the entire page or none of it.
You could use a sneaky trick, though: insert the part of the page you don't want indexed in an iframe and use robots.txt to ask Google not to crawl the iframe's source.
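A sketch of that trick, with placeholder paths: serve the fragment you want excluded from its own URL, embed it, and block that directory in robots.txt.
<!-- main page: the ticker loads from a blocked path -->
<iframe src="/noindex/ticker.html" title="News ticker"></iframe>
# robots.txt at the site root
User-agent: *
Disallow: /noindex/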
In short, no - unless you use cloaking, which is discouraged by Google.
Please check the official documentation here:
http://code.google.com/apis/searchappliance/documentation/46/admin_crawl/Preparing.html
Go to section "Excluding Unwanted Text from the Index"
<!--googleoff: index-->
here will be skipped
<!--googleon: index-->
I found a useful resource for marking certain duplicate content so that search engines will not index it:
<p>This is normal (X)HTML content that will be indexed by Google.</p>
<!--googleoff: index-->
<p>This (X)HTML content will NOT be indexed by Google.</p>
<!--googleon: index-->
On your server, detect the search bot by IP using PHP or ASP, then serve the IP addresses on that list a version of the page you wish to be indexed. In that search-engine-friendly version of your page, use the canonical link tag to point the search engine at the page version that you do not want indexed.
This way the page will be indexed under its address, but only the content you wish to be indexed will actually be indexed. This method will not get you blocked by the search engines and is completely safe.
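A minimal PHP sketch of the detection step described above; it checks the user agent rather than the IP for brevity, and page-for-bots.php / page-for-visitors.php are hypothetical includes, so treat this as an outline rather than a vetted implementation:
<?php
// Crude bot check via user agent; the answer above suggests matching
// known crawler IPs instead, which is harder to spoof.
$ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
if (stripos($ua, 'Googlebot') !== false) {
    // Search-engine-friendly version; its <head> carries the
    // canonical link tag mentioned above.
    include 'page-for-bots.php';
} else {
    include 'page-for-visitors.php';
}
?>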
Yes, you can definitely stop Google from indexing some parts of your website by creating a custom robots.txt file and listing the portions you don't want indexed, such as /wp-admin/ or a particular post or page. Before creating it, check your site's existing robots.txt, for example at www.yoursite.com/robots.txt.
All search engines either index or ignore the entire page. The only possible way to implement what you want is to:
(a) have two different versions of the same page
(b) detect the browser used
(c) If it's a search engine, serve the second version of your page.
This link might prove helpful.
There are meta tags for bots, and there's also robots.txt, with which you can restrict access to certain directories.
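For reference, the page-level version is a single tag in the page's <head>; like everything else here, it applies to the whole page, not to a section:
<meta name="robots" content="noindex">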