As background, I currently develop for a university, and we have trouble with departments demanding "Web 2.0 content" while we also have to meet accessibility requirements.
How do really big JavaScript-based sites deal with Section 508 compliance? Some sites degrade gracefully, while others require JavaScript to be enabled. How much impact does one decision have versus the other?
Also, realistically, how much development time should be devoted to the accessible versions of sites versus the "main" versions?
I'm a blind developer and I use many Web 2.0 sites - making them accessible is most certainly possible.
Firstly, I strongly advise against making a separate accessible site, regardless of how many people advise you to do this. It is bad practice and will end up being more work, even if it initially seems simpler.
Next, try to use progressive enhancement (particularly if this is a new site). Code the site without any JavaScript; accessibility is not the only thing that benefits. Then, in your onload handler, go through and attach click events to the anchor tags; that way, users with JavaScript get the Ajax version, while everyone else gets a full page refresh and another HTML page.
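As a rough illustration of the server side of that pattern, here is a minimal sketch (Flask and the route name are just placeholders, and it assumes the Ajax layer sends the conventional X-Requested-With header, as jQuery does by default): the same URL returns a bare fragment to Ajax callers and a complete page to everyone else.

    from flask import Flask, request

    app = Flask(__name__)

    FRAGMENT = "<section id='news'>...latest items...</section>"
    FULL_PAGE = "<html><body><nav>...</nav>" + FRAGMENT + "</body></html>"

    @app.route("/news")
    def news():
        # Ajax-enhanced clients swap the fragment into the current page;
        # everyone else gets a normal full-page load of the same content.
        if request.headers.get("X-Requested-With") == "XMLHttpRequest":
            return FRAGMENT
        return FULL_PAGE

    if __name__ == "__main__":
        app.run()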
Luckily, there is a newer standard, WAI-ARIA (www.w3.org/WAI/intro/aria.php), which makes this much simpler: you attach attributes to HTML tags to identify the semantics of an Ajax control, for example. The only problem with ARIA is that it only works with newer screen readers and web browsers, and the university may well require the site to be accessible to people running older software.
I'm a screen reader user and I often use JavaScript-enabled sites. JavaScript itself is not an accessibility issue; the way it is used can be. For example, if a site uses JavaScript that requires a mouse and doesn't offer keyboard alternatives, it will not be 508 compliant. An example of a site that uses JavaScript and is accessible is stackoverflow.com; the only feature that isn't accessible is the ability to determine whether you have accepted an answer to a question. I would take a look at the links in Annie's answer. All the blind college students I know use a fairly modern browser with JavaScript enabled; Lynx is no longer popular in the blind community. If you want to try using a screen reader, a good open source one for Windows can be found at
http://www.nvda-project.org/
and it works well with Firefox. If you want to try using the web without JavaScript, install the NoScript add-on.
Sites don't have to disable JavaScript to be accessible. Many sites use ARIA roles to work better with screen readers. There's a giant list of articles on accessible AJAX applications here. You could try something like AxsJAX.
I am interested in automating repetitive data entry in some forms for a website I frequent. So far, the tools I've found that could support this in a headless fashion are Selenium WebDriver and Mechanize.
My question is: is there a fundamental technical difference in using one versus the other? Selenium is mostly used for testing, but I've also noticed some folks use it for exactly what I'm looking for, automating data entry; testing becomes a secondary benefit in that case.
Are there reasons not to use Selenium over Mechanize for what I want to do? Does it not matter, and will both of these tools work?
I'm not asking which is better; I'm asking which is the right tool for the job. Perhaps I'm not understanding the premise behind the purpose of each tool.
These are completely different tools whose scopes overlap somewhat in web scraping, web automation, and automated data extraction.
mechanize is a mature and widely used tool for programmatic web browsing, with a lot of built-in features like cookie handling, browser history, and form submission. The key thing to understand here is that mechanize.Browser is not a real browser: it cannot execute or understand JavaScript, and it cannot make the asynchronous requests that are often needed to build a web page.
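For a sense of what the mechanize side looks like, here is a minimal sketch in Python (the URL, form name, and field names are hypothetical):

    import mechanize

    br = mechanize.Browser()
    br.set_handle_robots(False)        # be deliberate about this; see the etiquette links below
    br.open("https://example.com/entry")

    br.select_form(name="data_entry")  # or nr=0 for the first form on the page
    br["first_name"] = "Ada"
    br["notes"] = "filled in automatically"
    response = br.submit()

    print(response.code, response.geturl())

Everything here happens over plain HTTP requests; no JavaScript ever runs, which is exactly the limitation described above.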
This is where selenium comes into play: it is a browser automation tool that is also widely used in web scraping. selenium usually becomes the "fall-back" tool - when someone cannot scrape a site with mechanize or RoboBrowser or MechanicalSoup (two other alternatives worth noting) because of, for instance, its JavaScript "heaviness", the choice is usually selenium. With selenium you can also go headless, by automating the PhantomJS browser or by using a virtual display. The commonly mentioned drawback is performance: with selenium you work with the target site as a real user in a web browser, which loads the additional files needed to build a page, makes XHR requests, renders the page, and so on.
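The same hypothetical form driven through selenium looks roughly like this (Python bindings, using headless Firefox as one way to avoid a visible window; the URL and element names are again made up):

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.firefox.options import Options

    options = Options()
    options.add_argument("-headless")      # run Firefox without a visible window

    driver = webdriver.Firefox(options=options)
    try:
        driver.get("https://example.com/entry")
        driver.find_element(By.NAME, "first_name").send_keys("Ada")
        driver.find_element(By.NAME, "notes").send_keys("filled in automatically")
        driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
        print(driver.current_url)
    finally:
        driver.quit()

Here a real browser loads the page, runs its JavaScript and makes its XHR requests, which is why this works on "heavy" sites and also why it is slower.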
That said, this does not mean you should use selenium everywhere: choose the tool wisely, and choose it because it fits the problem better, not because you are more familiar with it.
Also note that you should first consider using an API (if the target website provides one) instead of resorting to web scraping. And if it does come to scraping, be a good web-scraping citizen (see the links below and the short sketch after them):
How to be a good citizen when crawling web sites?
Web scraping etiquette
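In practice, being a good citizen boils down to a few habits: honour robots.txt, identify yourself, and throttle your requests. A rough sketch (the site, paths, and contact address are placeholders):

    import time
    import urllib.robotparser

    import requests

    BASE = "https://example.com"
    USER_AGENT = "my-data-entry-bot/0.1 (contact: me@example.com)"  # identify yourself

    rp = urllib.robotparser.RobotFileParser(BASE + "/robots.txt")
    rp.read()

    session = requests.Session()
    session.headers["User-Agent"] = USER_AGENT

    for path in ["/entry", "/entry?page=2"]:
        if not rp.can_fetch(USER_AGENT, BASE + path):
            continue                       # robots.txt says no, so skip it
        response = session.get(BASE + path)
        # ... fill in and submit forms here ...
        time.sleep(2)                      # throttle so you don't hammer the site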
What are the differences between the three Smalltalk web application frameworks?
Some starting points:
What is the sweet spot for each framework? In which cases would you use one or the other?
What are their weaknesses?
Which one has the cleanest URLs?
How do they handle Ajax?
Do they have some preference in their use of persistence?
I'm just trying to decide which framework is appropriate for each kind of application.
I can only answer for Seaside:
Target: Seaside targets complex web applications with a focus on reusability and development productivity. There is automatic session state management and back-button support. The two free online books, Dynamic Web Development with Seaside and the Seaside Tutorial, provide documentation.
Weakness: For RESTful URLs you have to do some extra work.
Clean URLs: As noted above, RESTful URLs take some extra work, but it can be worth it (e.g. Pier).
AJAX: There are plenty of AJAX libraries integrated in Seaside (jQuery, jQueryUI, Prototype, script.aculo.us, ...). The integrations give you full access to these libraries from within Smalltalk. New libraries can be easily integrated, e.g. JQueryWidgetBox.
Persistence: Seaside is a web application framework, not a persistence framework. You can use whatever persistence solution fits you best, e.g. GemStone, GOODS, GLORP, ...
Also see these other questions/discussions on StackOverflow:
What is the difference between Seaside programming and other web programming
Is Seaside still a valid option?
I can say something on the Iliad side:
Sweet spot(s): It handles AJAX painlessly. For me, that was the turning point that made me switch to Iliad. Also, it's so small and non-bloated that you can read the whole code in a day and have a grasp on how it works.
Weaknesses: The community is also very small. This results in a lack of documentation, additional modules, and pre-made widgets. On the other hand, small communities tend to be more eager to help each other, so pretty much any doubt can be resolved by asking on the mailing list.
URLs: Well, since all calls in Iliad are AJAX by default, the URL stays clean the whole time.
Ajax: Yep, for free and by default. You just #markDirty a widget and it will update automatically. Dependencies are as easy to define as sending #addDependantWidget: to a widget, so that when the first is marked dirty, both will be updated. Also, if the client doesn't have a JavaScript-capable browser, all calls automatically fall back to regular HTTP requests.
Persistence: No preference. Since the model is separated from the framework (I think this applies to all three frameworks), you can follow the same guidelines you would for Aida or Seaside.
And for Aida/Web:
Sweet spots: Real-time web support out of the box, for both content websites and complex web apps; HTML5 and mobile support; a web server is included, so it works immediately after installation; and you can serve many virtual websites from the same image.
Weaknesses: lack of documentation, small community
URLs: Clean, REST-like URLs all the time, because Aida has followed from the start the motto that every domain object can have its own URL (an idea also attributed to Alan Kay), and a domain object can even choose its URL by itself.
Ajax: Seamlessly integrated; you don't see it anymore, it is all just there. To refresh some element on a web page you simply call e update. There is no need to know jQuery or any other JavaScript. The same goes for real-time web apps: on supported browsers, the WebSocket protocol is the default communication channel for exchanging JSON messages between the browser and an Aida-based server.
Persistence: Image-based persistence with an automatic snapshot every hour is turned on by default. GemStone/GLASS support is provided as the next step. Relational or other databases are the responsibility of the domain level, if needed.
For more:
Comparison of Smalltalk web frameworks from an Aida-centric perspective
ToDo example in Aida/Web, which shows the newest realtime web/HTML5 features, as part of the Comparison by example initiative
There is a page listing some persistence solutions for Seaside; most of the solutions there are independent of Seaside.
A friend of mine and I are looking to start a project on accessible user interface design (for blind users). There are a number of projects that make existing GUIs accessible by tagging them with audio information, but we're looking to work from the ground up and actually take input from a markup language (ML) and create an accessible application.
I'm trying to figure out which ML to use and am torn between three at the moment: XAML, MXML, and XUL. Currently I'm leaning towards XUL because it's open, but I was wondering whether anyone could think of any pros/cons that I might be missing. I know that XAML is the most popular, but does it do things that XUL can't? How similar are they?
I should add that whatever ML we end up using, we will be extending its syntax so that we can provide additional information to the audio system.
I have already addressed this question to some extent here.
The pros/cons of XUL are:
it's open
it's cross platform
it's well established with a large community
it still basically has to be run in a browser that supports XUL (Firefox)
one of the comments on my question stated that XUL is a bad choice because Firefox is buggy
The pros/cons of XAML are:
it'll work on Windows/Mac
it has a well established drag-drop IDE (VS 2010) to create GUIs
it has a massive support community
it's closed source
it's a closed platform, i.e. not an open standard (it isn't covered under ECMA the way .NET and C# are)
there are legal issues regarding its use on non-Microsoft/Mac platforms (see my post)
it requires either a browser with the Silverlight plug-in or the .NET Framework to use it on the desktop
it's developed/controlled by MS. This isn't an attempt to troll; seriously, look it up on Google. A lot of people are suspicious of MS's intent behind creating XAML, and this has generated a lot of negativity around the platform. It might be worth taking into consideration.
The pros/cons of MXML:
it's cross platform
it's closed source
it runs on a closed platform
it requires Adobe Flash (which a lot of people claim is a dying platform now that Apple refuses to support or allow it)
it requires a browser with a plug-in
Note: I can't really say much about MXML because this is the first time I've heard of it. I just pointed out the obvious pros/cons for completeness. I'll have to research it and add an entry in the question I linked.
A XUL application can be run under XULRunner; since Firefox 4, remote XUL application execution within the Firefox browser is prohibited.
In the bad old days of interactive console applications, Don Libes created a tool called Expect, which enabled you to write Tcl scripts that interacted with these applications, much as a user would. Expect had two tremendous benefits:
It was possible to script interactions that otherwise would have had to be repeated by hand, tediously. A classic example was dialup Internet access hell (from the days before PPP).
It was possible to write scripts to test one's own interactive applications, programmatically, as part of a regression suite.
Today most interactive applications are on the web, not on the console. Hence my question: is there any tool that provides the ability to interact with web pages and web forms programmatically, much as Expect provides the ability to interact with console applications programmatically?
(The closest thing I am aware of is Chickenfoot.)
You might be looking for Selenium
I've used Selenium RC in conjunction with Python to drive web page interactions programmatically. This has allowed me to write pretty extensive user tests in which forms and inputs are driven and their results are measured.
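Selenium RC has since been folded into WebDriver, but the idea is the same. A rough sketch of such a test with today's Python bindings (the URL and element names are invented):

    import unittest

    from selenium import webdriver
    from selenium.webdriver.common.by import By


    class SearchFormTest(unittest.TestCase):
        def setUp(self):
            self.driver = webdriver.Firefox()

        def tearDown(self):
            self.driver.quit()

        def test_search_returns_results(self):
            d = self.driver
            d.get("https://example.com/search")
            d.find_element(By.NAME, "q").send_keys("accessibility")
            d.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
            results = d.find_elements(By.CSS_SELECTOR, "ul#results li")
            self.assertGreater(len(results), 0)   # measure the result of the interaction


    if __name__ == "__main__":
        unittest.main()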
Check out the Selenium IDE on Firefox (as mentioned above). It allows you to record tests in the browser and play them back, either using the IDE itself, or the Remote Control app.
Perl Mechanize works pretty well for this exact issue.
HTTPS and some authentication issues are tricky at times. I will be posting a couple of questions about those in the future.
I did a ton of Expect work in a former life and always thought Don Libes' Expect book was one of the best-written and most enlightening technical books I'd ever seen.
Hands down I would say that Perl's WWW::Mechanize library is what you want. I note above that you were having trouble finding documentation. There is good documentation for it! Look up the module's distribution on search.cpan.org and see what all is packaged with it. There's a FAQ, Cookbook with examples, etc. Plus I've always been able to get help on the web. If you can't get it here, try at use.perl.org or perlmonks.org. WWW::Mechanize's author, Andy Lester, is present on Stack Overflow. (He's also an all around friendly and helpful guy.)
I believe WWW::Mechanize also has a program that is analogous to Expect's autoexpect program: you set up a proxy process running this program as a server, point your browser to it as a proxy, perform the actions you want to automate, and then the proxy program gives you a WWW::Mechanize program for you to use as a base for your project. (If it works like autoexpect, you will certainly want to make modifications from there.)
As mentioned above, WWW::Mechanize is a browser (to be more exact, a web client or HTTP client) that happens to be programmable. The last time I looked, there was even work in progress to make it support JavaScript.
In addition to Selenium, if you're doing the Ruby/Rails thing, there's Webrat.
I am planning to design an application with XUL & XPCOM for a proprietary system, and I have decided to use C/C++. How can I start development as a beginner in this field?
I cannot find a good guide to get started with, so it would be great if you could give some links and books.
I would also like to know how to prevent the user from modifying the code, especially in the view part, since the logic can be done in XPCOM.
XUL explorer is a tool that lets you drag and drop XUL. It's good for mocking up an interface or starting to learn about the various elements you can use.
xulrunner is Mozilla's binary that allows you to run XUL/XPCOM/JavaScript applications.
The Mozilla Developer Center is your friend.
If you use IRC, check out #xulrunner on irc.mozilla.org. They are fairly tolerant of questions from beginners.
I don't think there's going to be a way around allowing the user to see (or potentially modify) the actual XUL interface. There are some paths for trying to secure the JavaScript in some way (some surface-level, like obfuscation and minification, plus some possible secure loading methods). XPCOM components can be written in C++ or JavaScript, among other languages; if you put more of your code in XPCOM, it should be more secure, I think.
A fun start for seeing what you can do in XUL is to check out the XUL Periodic Table.
Preventing the user from modifying your code is futile, as they will always be able to do this.
You could of course ship a modified build of xulrunner (containing some required XPCOM as well) which only loads jars signed by some key, but they could trivially hack around that by modifying the binary or the image in memory.
So don't bother trying to stop people from modifying your code - you can't, unless you're on a trusted platform such as a games console, and even then it's not guaranteed.
This helped me to create my first XPCOM.