I am looking for options to set up cross browser testing at our organization. Our clients use only IE 9 and up so my test environment will only require IE 9/10/11/Edge and should allow the tester to ma
Due to the nature of our work, security is of high concern, so we cannot invest in any tool unless we are absolutely sure that the probability of a compromise is minimal. I am currently looking into BrowserStack and Sauce Labs. I had also looked into modern.ie, but its licensing does not allow it to be used for 'commercial' purposes.
Would love to hear what has worked for others in similar situations.
If it's just IE that you're testing on, why not use the built-in emulation in IE/Edge (Developer Tools, F12)? You can emulate browsers back to IE7.
Of course that's never going to be 100% as good as the actual browser, but it could be enough depending on your needs.
Disclosure: I work on the Product team at BrowserStack. I know you've asked what has worked for others, but I thought I'd post a reply nonetheless.
We work with a number of large enterprises (including many banks, which probably have the most stringent security requirements). Our security is pretty robust, and I'll be happy to share documentation around this if you'd like. You can reach out to me at arpitrai at browserstack. More information is here, in case you haven't seen it already: https://www.browserstack.com/security.
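For anyone weighing these services, here is a rough sketch of what one cross-browser check against a remote Selenium grid looks like from the test side. It assumes a Selenium 3-style Python client; the hub URL, capability names, and credentials are placeholders to verify against the vendor's current documentation.

```python
# Minimal sketch: run one smoke test against a remote Selenium grid such as
# BrowserStack. Hub URL, capability names, and credentials are placeholders;
# check the vendor's current docs before relying on them.
from selenium import webdriver

USERNAME = "your_username"      # hypothetical credentials
ACCESS_KEY = "your_access_key"

capabilities = {
    "browserName": "IE",        # assumed capability names; vendors differ
    "browser_version": "11.0",
    "os": "Windows",
    "os_version": "7",
}

driver = webdriver.Remote(
    command_executor=f"https://{USERNAME}:{ACCESS_KEY}@hub.browserstack.com/wd/hub",
    desired_capabilities=capabilities,
)
try:
    driver.get("https://example.com/login")   # placeholder URL
    assert "Login" in driver.title             # basic smoke assertion
finally:
    driver.quit()
```

The same script can be looped over a list of capability dictionaries to cover IE 9/10/11 and Edge in one run.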
I've searched for a while, but I can't find anything related on Google or here.
Some friends and I are debating starting a company, so I figure it might be good to do a quick pilot project to see how well we can work together. We have a designer who can do HTML, CSS, and Flash, and enjoys doing art but doesn't like doing HTML and CSS... and two programmers who are willing to do anything.
My question is, from an experienced site builder's perspective, what steps do we do - in chronological order - to properly handle a website? Does the designer design the look and feel of the site, then the programmers fill in the gaps with functionality? Or do the programmers create a "mock-up" of the site with most of the functionality, then the designer spices it up? Or is it more of a back-and-forth process?
I just want to know how a professional normally handles it.
Update:
A recap taking some of the notes from each post.
Step 1: Define requirements. What will your site/application do?
Step 2: Use cases. Who will use the application, and what will they do with it? This doesn't have to be done with a bunch of crazy UML diagrams; just use whatever visual aids work best for you. Find a CMS vendor, or a search vendor, or both. While planning, maybe do some competitor analysis and see how those in similar fields have done theirs.
Step 3: Visual proof-of-concept. This is done by your designer, NOT your programmers... Programmers are notoriously bad at UI. Use an image program like Photoshop, not an HTML editor. Leave it fluid and simple at first. Select the three-color theme for the site (two primaries and an accent). Get a sense of how you want to lay things out, keeping in mind the chosen CMS and/or search functionality. Focus hard on usability; add pizzazz later. Turn the created concept into JPEG mock-ups, or create a staging site to allow the client to view the work. A staging site will allow future releases to be tested before moving them to production.
Step 4: Once the site is conceptualized by your designers, have your HTML/CSS developer turn it into markup. They should shoot for XHTML compliance and test on as many major browsers as they can. This is also a good time to set up versioning, bug tracking, and management systems to keep track of changes, bugs, and feedback.
Step 5: Have your programmers start turning your requirements into software. This can and should be done in parallel with Step 4- there's no reason they can't be coding up the major pieces and writing tests while the UI is designed and developed.
Step 6: Marry up the final UI design with the code. Test, Test, Test!!
Step 7: Display end result to client, and get client sign-off.
Step 8: Deploy the site to production.
Rinse, Repeat...
Step 1: Define requirements. What will your site/application do?
Step 2: Use cases. Who will use the application, and what will they do with it? This doesn't have to be done with a bunch of crazy UML diagrams; just use whatever visual aids work best for you.
Step 3: Visual proof-of-concept. This is done by your designer, NOT your programmers. Use an image program like Photoshop, not an HTML editor. Leave it fluid and simple at first. Select the three-color theme for the site (two primaries and an accent). Get a sense of how you want to lay things out. Focus hard on usability; add pizzazz later.
Step 4: Once the site is conceptualized by your designers, have your HTML/CSS developer turn it into markup. They should shoot for XHTML compliance and test on as many major browsers as they can.
Step 5: Have your programmers start turning your requirements into software. This can and should be done in parallel with Step 4- there's no reason they can't be coding up the major pieces and writing tests while the UI is designed and developed.
Step 6: Marry up the final UI design with the code. Test, Test, Test!!
Rinse, Repeat...
There is no one universal way. Every shop does it differently. Hence, a warning: gross generalizations follow.
Web development typically consists of much shorter release cycles, because it's so simple to push out a release, compared to client-side software. Thus the more "agile" methods are more frequently used than the "waterfall" models encountered in developing client software.
Figure out what, exactly, you're building.
Take care of all the legal stuff (e.g., what business entity you'll form, how each team member will be compensated for their work, whether there will be health benefits, etc.).
Mockups. I suggest having the designers do the mockups since programmers are notoriously bad at UI design.
Set up some sort of bug tracking / case management system so that you have a centralized place for all your feature requests and bug reports.
Start coding.
Once you have a simple version of your app, get some people to test it out to make sure you're on the right path.
???
Profit!
As a first step, I'd recommend doing a bit of up-front design using an approach such as paper prototyping, to lock down what it is you want your website to do, and roughly how you want it to look.
Next up, read up on the Agile approach to software development and see if you like the sound of what it suggests. It tends to work best with smaller, well-motivated teams.
Figure out the minimum amount of functionality you can create and deliver as a product, so that you can get user feedback as soon as possible. Then expect to add functionality to the product iteratively over time.
The Web Style Guide provides a pretty detailed overview of the process.
You should mix and match the lists provided here for your needs.
I just want to make sure you know one thing...
Customers are "stoopid" when it comes to web design.
You will have to claw, scrape, drag, gnash, rip, and extricate every requirement from their naive little souls. If you fail to do so? Guess who gets the blame?
The road you now look down is a hard one filled with competition, stress, and risk. It requires endurance, faith, patience, and the ability to eat ramen 5 of 7 days a week.
To add to (or repeat) Dave Swersky's list:
1. Gather requirements from clients.
2. Do some competitor analysis. Gather screenshots of competitor sites.
3. Build a sitemap/wireframe - what is the structure/content of the site?
4. Get designers to create JPEG mockups. They may use the screenshots for "inspiration".
5. Get feedback from clients based on the JPEGs.
6. Create HTML mockups from the JPEGs.
7. Get feedback from clients. Go back to step 4 if necessary.
8. Implement the HTML using the technology of choice.
9. Unit test the site.
10. UAT and obtain sign-off.
11. Deploy to live.
Client feedback is critical; clients should be involved at every step to ensure a successful implementation.
Hope this helps
In addition to the steps outlined in other answers, I'd add this (to be added somewhere near the end of the "cycle"):
x. Once you have a more or less end to end solution, set up a staging site.
y. Get client sign off on staging site.
z. Deploy to production site.
Celebrate! But not too hard; there are almost always going to be a few iterations of changes, because users rarely know exactly what they really want the first time around.
So, when (not if) the client asks for changes, you can work on the changes and promote them to the staging site first! This is important because a) it gives clients a chance to preview changes before the whole world sees them, and b) if the integrity of the data on the production site is important, you can hopefully weed out any issues on the staging site before they impact production data.
Just to give something on the other side of the coin. Where I work, we have for the past couple of years, worked on a redesign of the company's website. Here are some highlights of the process:
Identify vendors for various functions that will be needed. In this case that meant finding a Content Management System vendor as well as a Search vendor.
Get a new design for the site that can be applied to what was selected in the first step.
Using system integrators and in-house developers, start to build some of the functionality for the site, taking the flexible, customizable software selected in step 1 and making it useful for the organization. Note that this is where a couple of years have been spent getting this working and some business decisions ironed out.
Release a preview site to verify functionality and fix bugs, add enhancements as needed.
Note that in your case you may not have the same budget, but there are various CMS frameworks out there to choose from. Also consider how much integration you want the site to have: does it have to talk to a half-dozen different systems? In the case I mentioned above there are CRM integrations, ESB integrations, search integrations, and translation integrations, to give a few examples of where things had to be wired up correctly.
In response to the comment: be sure you and the client agree on what "simple" means. If there is any e-commerce functionality, forums, or personalization, it's important to know what is needed now and to have some idea of what will be needed down the road, because there will likely be a ton of things customers want where you have to figure out the nitty-gritty details at some point in the future. For example, some people may think Google is simple, and from an end-user perspective it is, but how many computers does Google have running how many different applications, doing how much processing 24/7? Quite a bit, I'd imagine. Simple is good, but sometimes making something look simple can be incredibly hard to do.
Should developers be limited to certain applications for development use?
For most, the answer would be that as long as the development team agrees, it shouldn't matter.
For a company that is audited for security certifications, is there an approach that balances the company's risk against the developers' flexibility and productivity?
Scope
coding/development software
build system software
3rd party software included with distribution (libraries, utilities)
(Additional) Remaining software on workstation
Possible solutions
Create a white-list of approved software, where a developer must ask for approval before using new software. Approval would be based on business purpose and security risk.
Create a black-list for software. Developers list all software they use, and a review board periodically goes over the list.
Has anyone had to work at a company that restricted developer tools beyond the team setting? How did they handle the situation?
Edit
Cleaned up the question. Attempted to make it less argumentative.
Limiting the software that developers can use on their work machines is a fantastic idea. This way, all the developers will quit, and then the company won't have to spend as much money on salaries and equipment, resulting in higher profits.
Real answer: NO!!!
No, developers should not be limited in the software they use, because it prevents them from successfully doing their jobs. Think about how much you are paying your team of developers: do you really want all that money to go spiraling down the drain because you've artificially prevented them from solving problems?
1) The company locks down the PC and treats the developer as no more technically competent than a secretary
What happens when the developer needs to do something with administrative permissions? E.g., register a COM object, restart IIS, or install the product they're building? You've just shut them down.
2) Create a white-list of approved software...
This is also impractical due to the sheer amount of software. As a .NET developer I regularly (at least once per week) use upwards of 50 distinct applications, and I am constantly evaluating newer upgrades or alternatives for many of them. If everything must go through a whitelist, your "approval" staff are going to be utterly swamped by just one or two developers, let alone a team of them.
If you take either of these actions, you'll achieve the following:
You'll burn giant piles of time and money as the developers sit on their thumbs waiting for your approval team, or doing things the long slow tedious way because they weren't allowed to install a helpful tool
You'll make yourself the enemy of the development department (not good if you want your devs to actually do what you ask them to do)
You'll depress team morale substantially. Nobody enjoys feeling like they're locked in a cage, and every time they think "This would be finished 5 hours ago if only I could install grep", they'll be unhappy.
A more acceptable answer is to create a blacklist for "problem" software (and websites) such as Pidgin, MSN messenger, etc if you have problems with developers slacking off. Some developers will also rail against this, but many will be OK with it, provided you are sensible in what you blacklist and don't go overboard.
I think developers should have total control over the applications they use, as long as they can do their job with them. Developer productivity is directly related to the working environment; no one likes being restricted, and everyone prefers working with the software they like.
Of course there should be some standards in terms of version control, document formats, etc., but generally developers should have the right to use any programs they want.
And security shouldn't rest on the developer alone; company admins should take care of setting up proper firewalls to protect against all kinds of attacks.
A better solution would be to create a secure, independent environment for the developers: an environment that, if compromised, won't put the rest of the company at risk.
The very nature of development is to create crafty, ingenious, pithy solutions. To achieve this, failures must happen.
Whatever they do, don't take away the Internet in general. Google = Coding Help 101 :)
Or maybe just leave www.stackoverflow.com allowed haha.
I'd say this depends on quite a list of factors.
One is team size. If you have a team of half a dozen developers, this can be negotiated whenever a need for some application pops up. If you have a team of 100 developers, some policy is probably in order.
Another factor is what those developers do. If they compile C code using a proprietary compiler for an embedded platform, things are very different from a team producing distributed web or PC software in a constantly shifting environment.
The software you produce and the target customers are important, too. If you're porting the Linux kernel to some new platform, it probably doesn't matter much whether code leaks. OTOH, there are a lot of cases where this is very different.
There are more factors, but in the end it all boils down to two conflicting goals:
You want to give your developers as much freedom as possible, because that stimulates their creativity.
You want to restrict them as much as possible, as this reduces risks. (I'm talking of security risks as well as the risk to ship non-functioning software etc.)
You'll have to find a middle ground that doesn't hurt creativity while still giving enough guarantees not to hurt the company.
Of course! If you want a repeatable build process, you don't want it contaminated by whatever random bit of junk a programmer happens to use as a tool to generate part of the code. Since whatever application you are building lasts much longer than anyone expects, you also want to ensure that the tools you use to build it are available for roughly the same duration; random tools from the internet don't provide any such guarantee.
Your team should say "The following tools are allowed for build steps and nothing else" and attempt to make that list short.
Obviously, it shouldn't matter what a programmer looks at to decide what to do, so the entire Internet is just fine as long as it's look-only. Nor does it matter if they produce code by magic (or a random tool), as long as your team doesn't mind accepting that tool's output as though it were written by hand.
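If a team does go the short-approved-list route for build tooling, the check itself can be automated rather than policed by hand. A rough sketch, assuming the approved names live in a plain approved_tools.txt; the file name and layout are made up for illustration.

```python
# Rough sketch: flag executables on the build machine's PATH that are not on
# an approved-tools list. The approved_tools.txt file (one tool name per line)
# is illustrative only.
import os
from pathlib import Path

APPROVED = {line.strip() for line in open("approved_tools.txt") if line.strip()}

found = set()
for directory in os.environ.get("PATH", "").split(os.pathsep):
    p = Path(directory)
    if p.is_dir():
        for entry in p.iterdir():
            if entry.is_file() and os.access(entry, os.X_OK):
                found.add(entry.name)

for name in sorted(found - APPROVED):
    print(f"not on the approved list: {name}")
```

Running something like this as a CI step keeps the list honest without a review board having to inspect every workstation.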
We have hundreds of websites that were developed in ASP, .NET, and Java, and we are paying a lot of money to an external agency to do penetration testing on our sites to check for security loopholes.
Is there any good software (paid or free) to do this?
Or are there any technical articles that could help me develop such a tool?
There are a couple different directions you can go with automated testing tools for web applications.
First, there are the commercial web scanners, of which HP WebInspect and Rational AppScan are the two most popular. These are "all-in-one", "fire-and-forget" tools that you download and install on an internal Windows desktop and then give a URL to spider your site, scan for well-known vulnerabilities (ie, the things that have hit Bugtraq), and probe for cross-site scripting and SQL injection vulnerabilities.
Second, there are the source-code scanning tools, of which Coverity and Fortify are probably the two best known. These are tools you install on a developer's desktop to process your Java or C# source code and look for well-known patterns of insecure code, like poor input validation.
Finally, there are the penetration test tools. By far the most popular web app penetration testing tool among security professionals is Burp Suite, which you can find at http://www.portswigger.net/proxy. Others include Spike Proxy and OWASP WebScarab. Again, you'll install this on an internal Windows desktop. It will run as an HTTP proxy, and you'll point your browser at it. You'll use your applications as a normal user would, while it records your actions. You can then go back to each individual page or HTTP action and probe it for security problems.
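To make the "point your browser at it" step concrete, here is a minimal sketch of pushing scripted traffic through a local intercepting proxy so the proxy can record it. It assumes the proxy is listening on Burp's usual default of 127.0.0.1:8080, and the URL is a placeholder for a site you're authorized to test.

```python
# Minimal sketch: route scripted traffic through a local intercepting proxy
# (e.g. Burp Suite's default listener on 127.0.0.1:8080) so it gets recorded.
import requests

proxies = {
    "http": "http://127.0.0.1:8080",
    "https": "http://127.0.0.1:8080",
}

# verify=False because the proxy re-signs TLS with its own certificate;
# only do this against sites you own and are authorized to test.
response = requests.get("https://staging.example.com/login",
                        proxies=proxies, verify=False)
print(response.status_code, len(response.text))
```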
In a complex environment, and especially if you're considering anything DIY, I strongly recommend the penetration testing tools. Here's why:
Commercial web scanners provide a lot of "breadth", along with excellent reporting. However:
They tend to miss things, because every application is different.
They're expensive (WebInspect starts in the 10's of thousands).
You're paying for stuff you don't need (like databases of known bad CGIs from the '90s).
They're hard to customize.
They can produce noisy results.
Source code scanners are more thorough than web scanners. However:
They're even more expensive than the web scanners.
They require source code to operate.
To be effective, they often require you to annotate your source code (for instance, to pick out input pathways).
They have a tendency to produce false positives.
Both commercial scanners and source code scanners have a bad habit of becoming shelfware. Worse, even if they work, their cost is comparable to getting 1 or 2 entire applications audited by a consultancy; if you trust your consultants, you're guaranteed to get better results from them than from the tools.
Penetration testing tools have downsides too:
They're much harder to use than fire-and-forget commercial scanners.
They assume some expertise in web application vulnerabilities --- you have to know what you're looking for.
They produce little or no formal reporting.
On the other hand:
They're much, much cheaper --- the best of the lot, Burp Suite, costs only 99EU, and has a free version.
They're easy to customize and add to a testing workflow.
They're much better at helping you "get to know" your applications from the inside.
Here's something you'd do with a pen-test tool for a basic web application:
Log into the application through the proxy
Create a "hit list" of the major functional areas of the application, and exercise each once.
Use the "spider" tool in your pen-test application to find all the pages and actions and handlers in the application.
For each dynamic page and each HTML form the spider uncovers, use the "fuzzer" tool (Burp calls it an "intruder") to exercise every parameter with invalid inputs. Most fuzzers come with basic test strings that include:
SQL metacharacters
HTML/Javascript escapes and metacharacters
Internationalized variants of these to evade input filters
Well-known default form field names and values
Well-known directory names, file names, and handler verbs
Spend several hours filtering the resulting errors (a typical fuzz run for one form might generate 1000 of them) looking for suspicious responses.
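As a rough illustration of that fuzzing step (not a substitute for Burp Intruder or SPIKE, whose payload lists are far larger), the bare-bones idea looks like this; the URL and field name are placeholders.

```python
# Rough sketch of the fuzzing step: POST a handful of classic bad inputs at
# one form parameter and flag suspicious responses for manual review.
import requests

TARGET = "https://staging.example.com/search"   # hypothetical form handler
PAYLOADS = [
    "' OR '1'='1",                 # SQL metacharacters
    "<script>alert(1)</script>",   # HTML/JavaScript injection
    "%27%20OR%201=1--",            # encoded variant to evade naive filters
    "../../../../etc/passwd",      # path traversal
]

for payload in PAYLOADS:
    r = requests.post(TARGET, data={"q": payload}, timeout=10)
    suspicious = r.status_code >= 500 or "SQL" in r.text or payload in r.text
    if suspicious:
        print(f"review manually: payload={payload!r} status={r.status_code}")
```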
This is a labor-intensive, "bare-metal" approach. But when your company owns the actual applications, the bare-metal approach pays off, because you can use it to build regression test suites that will run like clockwork at each dev cycle for each app. This is a win for a bunch of reasons:
Your security testing will take a predictable amount of time and resources per application, which allows you to budget and triage.
Your team will get maximally accurate and thorough results, since your testing is going to be tuned to your applications.
It's going to cost less than commercial scanners and less than consultants.
Of course, if you go this route, you're basically turning yourself into a security consultant for your company. I don't think that's a bad thing; if you don't want that expertise, WebInspect or Fortify isn't going to help you much anyways.
I know you asked specifically about pentesting tools, but since those have been amply answered (I usually go with a mix of AppScan and trained pentester), I think it's important to point out that pentesting is not the only way to "check for security loopholes", and is often not the most effective.
Source code review tools can provide you with much better visibility into your codebase, and find many flaws that pentesting won't.
These include Fortify and OunceLabs (expensive and for many languages), VisualStudio.NET CodeAnalysis (for .NET and C++, free with VSTS, decent but not great), OWASP's LAPSE for Java (free, decent not great), CheckMarx (not cheap, fanTASTic tool for .NET and Java, but high overhead), and many more.
An important point you must note - (most of) the automated tools do not find all the vulnerabilities, not even close. You can expect the automated tools to find approximately 35-40% of the secbugs that would be found by a professional pentester; the same goes for automated vs. manual source code review.
And of course a proper SDLC (Security Development Lifecycle), including Threat Modeling, Design Review, etc, will help even more...
McAfee Secure is not a solution. The service they provide is a joke.
See below:
http://blogs.zdnet.com/security/?p=1092&tag=rbxccnbzd1
http://blogs.zdnet.com/security/?p=1068&tag=rbxccnbzd1
http://blogs.zdnet.com/security/?p=1114&tag=rbxccnbzd1
I've heard good things about SPI Dynamics' WebInspect as far as paid solutions go, as well as Nikto (for a free solution) and other open source tools. Nessus is an excellent tool for infrastructure in case you need to check that layer as well. You can pick up a live CD with several tools on it, called nUbuntu (Auditor, Helix, or any other security-based distribution works too), and then Google up some tutorials for the specific tools. Always, always make sure to scan from the local network, though. You run the risk of having yourself blocked by the data center if you scan a box from the WAN without authorization. Lesson learned the hard way. ;)
Skipfish, w3af, arachni, ratproxy, ZAP, WebScarab : all free and very good IMO
http://www.nessus.org/nessus/ -- Nessus will suggest ways to make your servers better. It can't really test custom apps by itself, though I think the plugins are relatively easy to create on your own.
Take a look at Rational AppScan (it used to be called Watchfire). It's not free, but it has a nice UI, is dead powerful, generates reports (bespoke and against standard compliance frameworks such as Basel II), and I believe you can script it into your CI build.
How about Nikto?
For this type of testing you really want to be looking at some type of fuzz tester. SPIKE Proxy is one of a couple of fuzz testers for web apps. It is open source and written in Python. I believe there are a couple of videos from BlackHat or DefCON on using SPIKE out there somewhere, but I'm having difficulty locating them.
There are a couple of high-end professional software packages that will do web app testing and much more. One of the more popular tools is Core Impact.
If you do plan on going through with the Pen Testing on your own I highly recommend you read through much of the OWASP Project's documentation. Specifically the OWASP Application Security Verification and Testing/Development guides. The mindset you need to thoroughly test your application is a little different than your normal development mindset (not that it SHOULD be different, but it usually is).
What about ratproxy?
A semi-automated, largely passive web application security audit tool, optimized for an accurate and sensitive detection, and automatic annotation, of potential problems and security-relevant design patterns based on the observation of existing, user-initiated traffic in complex web 2.0 environments.
Detects and prioritizes broad classes of security problems, such as dynamic cross-site trust model considerations, script inclusion issues, content serving problems, insufficient XSRF and XSS defenses, and much more.
Ratproxy is currently believed to support Linux, FreeBSD, MacOS X, and Windows (Cygwin) environments.
McAfee Secure (formerly HackerSafe).
What are the most common things to test in a new site?
For instance to prevent exploits by bots, malicious users, massive load, etc.?
And just as importantly, what tools and approaches should you use?
(Some stress-test tools are really expensive or hard to use; do you write your own? etc.)
Common exploits that should be checked for.
Edit: the reason for this question partly comes from being in the SO beta, but please refrain from SO beta discussion; the beta simply got me thinking about my own site, and a good thing too. This is meant to be a checklist of things that I, you, or someone else hasn't thought of before.
Try and break your own site before someone else does. Your web site is basically a publicly accessible API that allows access to a database and other backend systems. Test the URLs as if they were any other API. I like to start by cataloging all URLs that have some sort of permanent effect on the state of the system; this is easy if you are doing Ruby on Rails development or trying to follow a RESTful design pattern. For each of those URLs, try running the GET, POST, PUT, and DELETE HTTP methods with different parameters, so that you can ensure you're only giving access to what you want to give access to.
This of course is in addition to the obvious: functional testing, load testing, SQL injection, XSS, etc.
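A minimal sketch of that "treat the URLs as an API" probing, assuming a handful of placeholder URLs on a staging copy of the site:

```python
# Minimal sketch: hit each state-changing URL with every HTTP method and
# confirm that only the methods you intend to allow actually succeed.
# The URLs are placeholders.
import requests

STATE_CHANGING_URLS = [
    "https://staging.example.com/articles/42",
    "https://staging.example.com/users/7/roles",
]

for url in STATE_CHANGING_URLS:
    for method in ("GET", "POST", "PUT", "DELETE"):
        r = requests.request(method, url, timeout=10)
        # Anything other than 401/403/405 on a method you didn't intend to
        # expose deserves a closer look.
        print(f"{method:6} {url} -> {r.status_code}")
```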
Turn off javascript and make sure your site can still be navigated.
Even if you want to ignore the small but significant number of people who have it disabled, this will impact search engines as well.
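A quick way to approximate this check in an automated run is to fetch pages with a plain HTTP client (so no scripts execute at all) and confirm the core navigation is present in the raw HTML, which is roughly what a search-engine crawler sees. The URL and link paths below are placeholders.

```python
# Quick sketch of the "no JavaScript" check: fetch the page without executing
# any scripts and verify the main navigation links exist in the raw HTML.
import requests

html = requests.get("https://staging.example.com/", timeout=10).text
for path in ("/products", "/about", "/contact"):
    status = "ok" if f'href="{path}"' in html else "MISSING without JS"
    print(f"{path}: {status}")
```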
YSlow can give you a quick analysis of different metrics.
Check what friendly bots (e.g., Google) see, using Google Webmaster Tools.
Regarding tools for running functional tests of web pages, I've found Selenium IDE to be useful.
The Firefox plug-in (only compatible with Firefox 2 at the moment) lets you capture almost all web events, save them, and replay them in the same browser.
In conjunction with another Firefox add-on, Firebug (https://addons.mozilla.org/en-US/firefox/addon/1843), you can create some very powerful tests.
If you set up Selenium Remote Control, you can then convert the Selenium IDE tests into NUnit tests, which you can run automatically.
I use CruiseControl and run these web tests as part of a daily build.
The nice thing about using Selenium remote control is that it can run the same functional tests on multiple browsers and operating systems, something that you can't do with the IDE.
Although the web tests will take ages to run, there is a version of Selenium called Selenium Grid that lets you use any old hardware you have spare to run the tests in parallel as part of a computing grid. I haven't tried this myself, but it sounds interesting.
All of the above is open source and free, which helped me convince management to use it :-)
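For comparison, the same kind of recorded IDE case can also be written directly against the WebDriver API. A minimal sketch in Python, with the URL, field names, and expected text all made up for illustration:

```python
# Minimal sketch of a scripted functional test equivalent to a recorded
# Selenium IDE case, using the WebDriver API. URL, field names, and the
# expected text are illustrative placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("https://staging.example.com/login")
    driver.find_element(By.NAME, "username").send_keys("testuser")
    driver.find_element(By.NAME, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "submit").click()
    assert "Welcome" in driver.page_source, "login flow did not reach the welcome page"
finally:
    driver.quit()
```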
For checking the cross-browser and cross-platform look of your site, browsershots.org is maybe the best free tool; it can save a lot of time and cost.
There are separate stages to this one.
Firstly there's the technical testing, where you check all technical functionality:
SQL injections
Cross-site Scripting (XSS)
load times
stress levels
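For the load and stress items, you don't need an expensive tool to get a first rough read. A crude sketch like the following will do (URL and numbers are placeholders, and a real tool such as JMeter or ab is still the right answer for serious stress testing):

```python
# Crude load check: fire N concurrent GETs at one URL and report the slowest
# and mean response times. Numbers and URL are placeholders.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://staging.example.com/"
CONCURRENCY = 20
REQUESTS = 200

def timed_get(_):
    start = time.perf_counter()
    requests.get(URL, timeout=30)
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    durations = list(pool.map(timed_get, range(REQUESTS)))

print(f"requests: {len(durations)}  slowest: {max(durations):.2f}s  "
      f"mean: {sum(durations)/len(durations):.2f}s")
```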
Then there's the phase where you have someone completely computer-illiterate sit down and ask them to find something. Not only does it show you where there are flaws in your navigational logic (I find that developers look at things very differently from 'other people'), but they're also guaranteed to find some way to break your site.