Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 9 years ago.
https://skydrive.live.com/robots.txt shows:
User-agent: *
Disallow:
Since an empty "Disallow:" rule allows a web spider to crawl the whole site, doesn't this create a privacy/security concern?
In comparison, http://drive.google.com/robots.txt has "Disallow: /"
The privacy/security concern does not come from the robots.txt file but from how well you secure the files on SkyDrive. A robots.txt file is only a suggestion to robots about what they should and should not access; they are not obliged to follow its rules. Since the documents are protected by the requirement that a user log in with a username and password, a robot cannot see or index any user's files (unless it hacks into the system or logs in with valid credentials).
Closed. This question is not about programming or software development. It is not currently accepting answers.
Closed 5 months ago.
I have built custom segments/blocks around my website which I use for advertising/marketing. Google's search bots treat them as part of my site and confuse what the site really is with the advertisements.
This negatively impacts my SEO. Is there a way to register or use certain directives or elements to tell Google and other bots not to crawl that portion, or, if it is crawled, not to consider it part of the page?
I am aware of the robots.txt file, but that works at the page level. I would like to block certain blocks within each page, such as a sidebar or a floating bar.
There's no way to ensure that all bots skip parts of a page; indexing is mostly an all-or-nothing affair.
You could use a robots.txt file with
Disallow: /iframes/
and then load the content you don't want indexed into iframes served from that path.
There's also the `data-nosnippet` HTML attribute.
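The iframe approach can be sketched as follows; the `/iframes/` path and file name are illustrative, and `data-nosnippet` is a Google-specific hint that excludes the block from search snippets:

```html
<!-- Ad block loaded from a path that robots.txt disallows (Disallow: /iframes/),
     so well-behaved crawlers never fetch its contents. data-nosnippet
     additionally keeps the wrapper out of Google's result snippets. -->
<aside data-nosnippet>
  <iframe src="/iframes/sidebar-ads.html" title="Advertising"></iframe>
</aside>
```

Note that neither mechanism is enforced: a crawler that ignores robots.txt can still fetch the iframe content directly.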
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 2 years ago.
I have a website : https://linuxquizapp.com.uy.
I recently used Google Search Console to get it indexed, but when I search for it on Google, I get this instead:
That's the right IP, but how did the IP end up there instead of the domain name, and is there anything I can do to correct this?
The app is written in Go, and there is no Apache or Nginx (or similar) configuration involved that I could change.
Note: I am including an image in the question instead of plain text or a link so it does not become outdated as the Google indexer updates things.
You need to redirect requests that arrive with the IP address as the Host header to the domain name.
Once you do that, Google's index will update within a few days to show the domain name instead of the IP address.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 6 years ago.
So I was sharing a folder on Dropbox and decided to look at its Folder Settings. I noticed that some label text was missing from the form. It might just be me (Linux Mint), but can anyone tell me what these fields do?
The settings are similar to Dropbox for Business, but I wonder if it's just your browser not rendering the page properly; you might try another browser. To answer your question, the options are just share settings: who can be added, who can manage the folder, and how links can be shared.
This is the only thing I see when I view a shared folder's settings. It seems that you have a Dropbox Team, which I don't have.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 9 years ago.
Today I found a strange checkbox in the content settings of Chromium Version 29.0.1543.0 (207211) with the following text:
Some content services use machine identifiers to uniquely identify you
for the purposes of authorizing access to protected content.
Allow identifiers for protected content (computer restart may be
required)
What exactly does "uniquely identify" mean?
What API would be used to retrieve such an identifier?
Screenshot: http://i.stack.imgur.com/7ztd8.png
Available on Windows and ChromeOS only: If enabled, Chrome provides a unique ID to plugins in order to run protected content. The value of this preference is of type boolean, and the default value is true.
http://developer.chrome.com/extensions/privacy.html#property-websites-protectedContentEnabled
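The linked page documents `chrome.privacy.websites.protectedContentEnabled`, the preference behind this checkbox. A minimal sketch of reading it from an extension that declares the "privacy" permission in its manifest; the function name and callback shape are illustrative:

```javascript
// Reads whether Chrome may hand a unique machine identifier to plugins
// for playing protected content. chromeApi is injected so the helper
// can be exercised outside a real extension.
function readProtectedContentSetting(chromeApi, done) {
  chromeApi.privacy.websites.protectedContentEnabled.get({}, (details) => {
    // details.value is a boolean; true (the default on Windows/Chrome OS)
    // means the identifier may be provided.
    done(details.value);
  });
}
```

In a real extension you would pass the global `chrome` object: `readProtectedContentSetting(chrome, v => console.log(v));`.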
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
For our company I want to set up a file-sharing service such as Dropbox, but hosted on our own servers for our corporate information.
It must be available only to employees of our company.
Please suggest a software package.
I suggest you try http://owncloud.org/.
That's what we use in my company, and it is quite convenient for syncing our working files (similar to what Dropbox does) and for sharing files as well.
Have a look at arXshare (http://www.arxshare.com). You can install it on any server with PHP; it is easy to set up, requires no database, and is very lightweight. Furthermore, it does end-to-end encryption, so the shared files on the server are useless without your password.