I've spent a fair amount of time on website optimization (YSlow, Google's Page Speed, etc), and now I'm thinking more and more about improving the accessibility of my websites.
However, it seems like they are competing interests at times. If I include separate style sheets for screen readers, mobile devices, etc, those are additional files to download. Similarly, there are many files that would be unnecessary for visitors using screen readers, mobile devices, etc.
So where does that leave us? Server-side browser sniffing? I imagine that would only help in a limited set of cases. Are there teams out there (e.g. at Google or Yahoo) that are actively working on these issues or have come out with some recommended practices?
One interesting approach I read about for optimizing the request size while maintaining accessibility is to store the accessibility class (screen reader, mobile device, etc.) in the session. If the accessibility class is not yet stored in the session (for instance, on the first page load when the session starts), send all style sheets (etc.) and detect the accessibility class using JavaScript. Send this back to the server and store it in the session for future requests. When the session does store the accessibility class, transmit only the appropriate style sheet (etc.).
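For illustration, here is a minimal server-side sketch of that idea using Flask sessions (the endpoint name, the `accessibility_class` key, and the stylesheet names are all made up, and the client-side JavaScript that actually performs the detection and posts the result back is assumed to exist):

```python
# Hypothetical Flask sketch: store the detected accessibility class in the
# session and only emit the matching stylesheet on subsequent requests.
from flask import Flask, render_template_string, request, session

app = Flask(__name__)
app.secret_key = "change-me"  # needed for sessions

ALL_SHEETS = ["base.css", "screen.css", "mobile.css", "screenreader.css"]
SHEET_FOR_CLASS = {"desktop": "screen.css", "mobile": "mobile.css",
                   "screenreader": "screenreader.css"}

@app.route("/")
def index():
    klass = session.get("accessibility_class")
    if klass is None:
        # First request of the session: send everything and let client-side
        # JavaScript detect the class and POST it back to /accessibility-class.
        sheets = ALL_SHEETS
    else:
        sheets = ["base.css", SHEET_FOR_CLASS.get(klass, "screen.css")]
    links = "\n".join(f'<link rel="stylesheet" href="/static/{s}">' for s in sheets)
    return render_template_string("<html><head>{{ links|safe }}</head><body>...</body></html>",
                                  links=links)

@app.route("/accessibility-class", methods=["POST"])
def set_accessibility_class():
    # Called by the detection script; remember the class for the rest of the session.
    session["accessibility_class"] = request.form.get("value", "desktop")
    return "", 204
```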
Sniffing the user agent (browser or otherwise) is hardly a strange technique these days, and frameworks such as jQuery or Dojo do it on your behalf anyway, so why not take advantage of it? Just make sure you let your users override things by some explicit but simple action (for those times when the sniffing heuristics get it wrong).
Well, I wouldn't worry too much about the stylesheets for the different platforms, since they're pretty much downloaded once and then cached. I would strongly recommend designing for accessibility first and optimizing the downloads second.
What devices/browsers to target with a responsive layout
I am writing a detailed estimate for a client (project has been accepted but it is now a matter of explaining the different functionalities) to develop a responsive layout website.
It is not my first development of that kind, but this one is a key account and path must be paved.
The layout will adapt from 300px width to 1200+px, and thus virtually to "any" device and browser, but I am a little lost about my commitment to that. With desktop websites it is easy to write in your contract that the supported browsers will be "IE7+, up-to-date versions of FF, Safari, Chrome, Opera", but what do you write about a responsive website?
I have a bunch of devices that I know I'll test with (let's say a PC, a Mac, an iPad, an iPhone, and 2 or 3 Android devices), but what do I say to my client?
I can't write that the "website will work on any device", nor can I give an exhaustive list of the combinations of devices/browsers it will work on. And I don't want to be stuck with "my uncle has seen the website on his old Android 2.2 phone and it doesn't work".
There are a lot of desktop tools around to simulate various viewports and perform tests on, but they hardly behave like the real thing. Or is there one standard we developers can refer to "contractually"? How do you manage that, and what are your commitments towards your clients?
Take on a Progressive Enhancement approach for development. It isn't possible to get a website to be pixel perfect and work the same on every single browser.
Take a tiered approach (Gold/Silver/Bronze). Old and untested browsers get the content only (IE7, older BlackBerry, older anything). Newish browsers get content plus a nice layout (IE8/9, Firefox < 4). And modern browsers get a typical, nice, modern website.
It's possible to set this up with appropriate thinking. Build it from the bottom up. Bronze to Silver to Gold. Start with the very minimum setup (only colour, font, text. No divs, no layout, nada). This is your bronze. Next get the silver level setup. Include layout. This layout would be for smaller screens. And finally we would have gold. This would include media queries for larger screens and JS for increased usability and niceties.
It is possible to split between Bronze and Silver for the layout by wrapping your layout within an @media only screen {} query. Older browsers do not understand it, but the content still appears in those browsers. To split between Silver and Gold, simply put in a min-width media query and you're set.
Also, ensure that the client understands the definition of "website will work on any device". Just because Opera Mini doesn't support line-height doesn't mean the site doesn't work on it. Here is an article that Brad Frost wrote on that subject: Support vs Optimization: http://bradfrostweb.com/blog/mobile/support-vs-optimization/
I hope this helps a bit.
You should target a minimum resolution, not a device or browser. Resize your design, watch for the point at which the design no longer works comfortably, and then use media queries to respond and adapt the design.
Clients:
I think ideally you're looking for a way to explain the concept to customers. What you need to do is communicate the goal, which is to provide the best experience possible. I have found that you should be honest with your customer: let them know that you are following industry best practices and that the design will be "functional" across the majority of devices and browsers on the market.
Here is a blurb that I like to use:
Compatibility across platforms: Due to the vast number of web browsers, platforms and devices, the user experience of the website may change to best fit the viewing platform.
If you would like to explain, or are asked to explain what this means, I would say:
We (I) always use best practices to meet industry standards for web design. We (I) do our (my) best to ensure that the design will be "functional" on all mainstream platforms. Because of the increasingly vast number of web-accessible devices available, I cannot guarantee that the design will look identical from one device to another.
You must also consider that new devices are created all the time, so you don't want to make your statement too concrete or you will be retrofitting your designs to accommodate the next iWatch or iFridge that hits the market.
Remember to communicate what is really important, that the content gets displayed. For the most part text and images should work almost anywhere. It's the fancy stuff like shadows, rounded corners, video (IE7), and media queries that don't always work but shouldn't obscure content.
Also, web applications can be a bit trickier, since some form elements don't work across devices and browsers (e.g. file uploads).
Hopefully this helps.
I'm not a contract lawyer; you may want to run this by a paralegal or law-savvy friend for further advice.
Having gone through this experience many times, these are the key points I tend to show clients to make them understand the concept.
1. Explain what cross-browser, platform (OS), screen resolution and device screen size mean. I tend to show the latest screenshots of, or links to, W3C stats to illustrate the trends of the past 2 years and where this market is going. It usually convinces them that they need a responsive layout.
2. If they want further evidence, I show them the Twitter Bootstrap, jQuery UI and jQuery Mobile sites on at least 2-3 devices using Adobe Edge Inspect. A laptop, tablet and phone can all be on the same network; just browse through the site to show them how it responds to different sizes.
3. I have a list of pros and cons collected over time for IE7, IE6, Chrome Frame, the Android native browser vs Chrome on Android, and the iPhone 4 vs iPhone 5 browser. Usually I see which way they are more inclined, and if it's phones I make them aware of the pros and cons of that market. Most of the time they come to understand that it's not an app but a responsive site.
4. When you are writing a proposal, do not put in "We will cover all devices". You can never do that. Be realistic. Create virtual machines and simulators to test at least the framework you are going to use before making these claims in writing. Setting up a VM in VirtualBox is free, and you can get Linux with Eclipse or NetBeans to run emulated phones and tablets and browse your framework on them.
5. For screen resolution and screen size, again, W3C stats are very realistic and convincing. Use them, along with resolution-testing plugins in Firefox or Chrome, and take some screenshots of your past work or just of the framework, so you can at least show the benefits and the limitations of what can and cannot be done.
Most of the answers here are quite right; I just wanted to add how you can use W3C and other statistics to make the case for a responsive layout. There are more than a million sample sites online to convince people now, and over the past 6 months I have found that people get convinced by step 1 alone.
It's quite clear: your client wants a modern website. However, not all browsers support all the modern features. Your client wants to spend money wisely and have a site that improves over time, not one that grows old quickly and sprouts new bugs as new browsers are released. This is what Graded Browser Support is for: http://jquerymobile.com/gbs/
The site will be usable in all browsers. Mainstream browsers will get an enhanced user experience. The most modern browsers will get the super cool newest goodies.
Let the client know that the more you "hack" the site to get some feature working in a defective old browser, the more likely it is to break in new ones. It's not worth the time and money.
Here is what I do when designing responsive websites.
When an old or unsupported browser is detected, the website will simply exclude the jQuery elements that make it responsive, so the result is a fixed width site.
Now, what do you tell your client?
Just be frank. Tell them you made their website responsive for the modern devices out there. On older devices, their website won't be as good-looking. Show them a couple of examples too. Some big companies simply display an alert telling the user their browser is outdated and the website won't work properly. One example is Google.
So, essentially, your website works with all devices, but looks better and is responsive on the modern devices and browsers out there.
If a website is to provide non-free content in the form of videos and documents, how should one serve this kind of content? I don't want to write an occasionally connected desktop application (similar to iTunes) that prevents bought content from being easily shared, so I have a few questions about this:
Documents: What's the best document format for this kind of scenario, one that would prevent a buyer from sharing it freely after it was bought?
If I think of PDF, it's a great format in that people can't tamper with its content and they will actually see what you created, but the problem is that after someone obtains it, it can easily be shared with others, whether it has a password or not.
Videos: If supporting video, it's probably wise to use some public (possibly paid) service that can handle video streaming, like YouTube (which, AFAIK, is not able to host non-public videos).
I'm well aware that there is no 100% perfect solution that prevents it all, but having content that is 90% successfully locked down is still better than hoping people won't share something that can easily be shared.
Can you list a few websites that do a similar thing? I may learn a lot from them. And please provide some guidelines I should follow in this regard.
Is there maybe a PDF-like document format that has built-in security capabilities? Even a commercial one... It should support some kind of built-in authorization functionality that would work similarly to software activation.
Note: I wasn't sure whether this should be posted on webapps.stackexchange.com or here but I decided it should be posted here because it's related to development. I'm generally interested in programmable approaches I could use.
Our testing system is pretty rudimentary; fire up a browser, see if it works. Recently we ran into problems, found by our client, with our application where the number of users created a slow-down in the application. The application is basically a huge Word document with people editing their own versions all at the same time. Part of the problem came from not knowing how to test multiple instances at the same time. My partner and I thought about how to test this; one idea was to hire out an internet cafe and hire students for an hour to bang on the app.
What are other ways that people have tried to emulate concurrency in testing their web-based application? Most of the advice here is for specific methodology; I'm asking, how do you test it to make sure that it works?
If you have never checked out Selenium, then you need to. It will allow you to do automated web testing through the browser. Ok, so first problem solved.
Now, ideally, you could take that same script, load it up on a bunch of boxes, and run them all at once to get some sort of load testing, right? Luckily for you, someone has already figured this out, although it is a paid service: BrowserMob. But it looks like you were willing to spend a little money on this anyway, and it would probably net you better, more repeatable results.
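For reference, a minimal Selenium sketch in Python of the kind of script you could then scale out (the URL and element IDs are placeholders, and it assumes the `selenium` package plus a local ChromeDriver):

```python
# Minimal Selenium sketch: open the app, perform an edit, and check the result.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/app")           # placeholder URL
    editor = driver.find_element(By.ID, "editor")   # placeholder element id
    editor.send_keys("concurrent edit from client A")
    driver.find_element(By.ID, "save").click()
    assert "Saved" in driver.page_source            # placeholder success check
finally:
    driver.quit()
```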
We usually answer the question "can the web application do more than one thing at a time" by using JMeter to produce a simulated HTTP load on the web server.
I find that it helps to distinguish several different types of testing: concurrency (what happens when two events in the system collide), capacity (what happens when there are many overlapping requests), volume (what happens as data accumulates in the system)...
A huge general slowdown, evidenced by response times that fall outside the SLA, is usually related to capacity problems (with contention as a common cause) or volume (many users, much data, and the system gets slower over time). The former usually requires some sort of multi-threaded request stream; the latter you can usually handle by preloading the volume and then measuring the response times experienced by a single user.
I generally find that separating the load generator from the actual measurement/instrumentation is a good idea. That can be as simple as having a black box over there generating a typical load, while you sit here with a stopwatch measuring the responsiveness of a typical use case.
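A rough sketch of that separation in Python, assuming the third-party `requests` library: background threads play the "black box" generating load, while the main thread plays the stopwatch (the URL and thread count are placeholders):

```python
# Sketch: generate background load with worker threads, then measure the
# response time seen by a single "real" user while the load is running.
import threading, time
import requests

URL = "https://example.com/app"   # placeholder
STOP = threading.Event()

def background_load():
    while not STOP.is_set():
        try:
            requests.get(URL, timeout=10)
        except requests.RequestException:
            pass  # the load generator only needs to keep the server busy

workers = [threading.Thread(target=background_load, daemon=True) for _ in range(20)]
for w in workers:
    w.start()

# The "stopwatch": time a typical use case while the load is applied.
samples = []
for _ in range(10):
    start = time.perf_counter()
    requests.get(URL, timeout=30)
    samples.append(time.perf_counter() - start)

STOP.set()
print(f"median response under load: {sorted(samples)[len(samples) // 2]:.2f}s")
```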
JMeter http://jmeter.apache.org/
I have a huge web application and I want to use a system which will help the developers keep track of areas (links) that have been visited and tested. I am not looking for a bug tracking system, etc. Just a system that will help me track pages that I have visited, etc.
Let me know if there is a tool to keep track of pages physically visited and signed off.
I don't know of any out-of-the-box solution for this.
If nothing comes up, you might be able to put something together using frames. You could have your site in the top frame, and the tracking application in a tiny bottom frame. The bottom frame would constantly update itself and tell you whether the URL in the big frame is in the list of visited pages. You would need some server side programming for that (e.g. PHP) and it would be some work, but nothing impossible.
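The same idea sketched in Python/Flask instead of PHP (the endpoint name and the way the bottom frame reports the current URL are assumptions):

```python
# Sketch: the bottom frame polls this endpoint with the URL currently shown
# in the top frame; the server answers whether that page was already visited.
from flask import Flask, jsonify, request

app = Flask(__name__)
visited = set()  # in a real setup this would be persisted per tester

@app.route("/tracker/check")
def check():
    url = request.args.get("url", "")
    already_seen = url in visited
    visited.add(url)
    return jsonify({"url": url, "already_visited": already_seen})
```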
I am also not aware of any tools available for this. However, without any tools, you can still have developers give you a snapshot of their browser history, which will give you this info. Alternatively, depending on the web server you are using, you can write simple scripts to scan through the access logs and generate a report of all the pages visited (uniquely and otherwise) and compare, if needed.
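A small sketch of the log-scanning approach, assuming combined-format access logs (the file name and the parsing are assumptions; adjust to your server's log format):

```python
# Sketch: count unique page paths seen in a combined-format access log.
from collections import Counter

visited = Counter()
with open("access.log") as log:            # placeholder file name
    for line in log:
        parts = line.split('"')
        if len(parts) < 2:
            continue
        request_line = parts[1]            # e.g. 'GET /orders/list HTTP/1.1'
        fields = request_line.split()
        if len(fields) >= 2:
            path = fields[1].split("?")[0] # drop the query string
            visited[path] += 1

for path, hits in sorted(visited.items()):
    print(f"{hits:6d}  {path}")
```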
If you just want to track where you've visited, you might consider pulling the data from your browser history or, if you have access to them, the server logs.
But I would consider writing, recording, or somehow creating actual automated tests for your web app, especially given its size.
SimpleTest provides a now quite mature web tester. I've heard about it integrating with Selenium. You can also use Watir, which I know runs with Ruby but possibly more than that. Watir has the advantage that you can test multiple browsers. I'm sure there are plenty more tools if you do a few searches.
One of our next projects is supposed to be a MS Windows based game (written in C#, with a winform GUI and an integrated DirectX display-control) for a customer who wants to give away prizes to the best players. This project is meant to run for a couple of years, with championships, ladders, tournaments, player vs. player-action and so on.
One of the main concerns here is cheating, as a player would benefit dramatically if he was able to - for instance - let a custom made bot play the game for him (more in terms of strategy-decisions than in terms of playing many hours).
So my question is: what technical possibilities do we have to detect bot activity? We can of course track the number of hours played, analyze strategies to detect anomalies and so on, but as far as this question is concerned, I would be more interested in knowing details like
how to detect if another application makes periodic screenshots?
how to detect if another application scans our process memory?
what are good ways to determine whether user input (mouse movement, keyboard input) is human-generated and not automated?
is it possible to detect if another application requests information about controls in our application (position of controls etc)?
what other ways exist in which a cheater could gather information about the current game state, feed it to a bot, and send the determined actions back to the client?
Your feedback is highly appreciated!
I wrote d2botnet, a .NET Diablo 2 automation engine, a while back, and something you can add to your list of things to watch out for is malformed/invalid/forged packets. I assume this game will communicate over TCP. Packet sniffing and forging are usually the first way games (online ones, anyway) are automated. I know Blizzard would detect malformed packets, something I tried to stay away from doing in d2botnet.
So make sure you detect invalid packets. Encrypt them. Hash them. Do something to make sure they are valid. If you think about it, if someone knows exactly what every packet sent back and forth means, they don't even need to run the client software, which makes any process-based detection a moot point. So you can also add some sort of packet-based challenge-response that your client must know how to answer.
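As one possible illustration of the hash/challenge idea (not the scheme d2botnet targeted), here is a sketch using an HMAC over each packet plus a nonce challenge; the shared secret and packet layout are made up:

```python
# Sketch: sign every packet with an HMAC and answer server challenges,
# so forged or replayed packets can be rejected on the server side.
import hashlib, hmac, struct

SHARED_SECRET = b"per-session key negotiated at login"  # assumption

def sign_packet(payload: bytes, sequence: int) -> bytes:
    # The sequence number prevents straight replay; the MAC covers it and the body.
    header = struct.pack("!I", sequence)
    mac = hmac.new(SHARED_SECRET, header + payload, hashlib.sha256).digest()
    return header + mac + payload

def verify_packet(packet: bytes, expected_sequence: int) -> bytes:
    header, mac, payload = packet[:4], packet[4:36], packet[36:]
    if struct.unpack("!I", header)[0] != expected_sequence:
        raise ValueError("out-of-order or replayed packet")
    expected = hmac.new(SHARED_SECRET, header + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("forged or corrupted packet")
    return payload

def answer_challenge(nonce: bytes) -> bytes:
    # Server sends a random nonce; only a genuine client knows the secret.
    return hmac.new(SHARED_SECRET, nonce, hashlib.sha256).digest()

# Example: round-trip a packet.
pkt = sign_packet(b"MOVE north", sequence=1)
assert verify_packet(pkt, expected_sequence=1) == b"MOVE north"
```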
Just an idea: what if the "cheater" runs your software in a virtual machine (like VMware) and takes screenshots of that window? I doubt you can defend against that.
You obviously can't defend against the "analog gap" either, e.g. the cheater's system taking external screenshots with a high-quality camera; I guess it's only a theoretical issue.
Maybe you should investigate chess sites. There is a lot of money in chess, they don't like bots either - maybe they have come up with a solution already.
The best protection against automation is to not have tasks that require grinding.
That being said, the best way to detect automation is to actively engage the user and require periodic CAPTCHA-like tests (except without the image and so forth). I'd recommend utilizing a database of several thousand simple one-off questions that get posed to the user every so often.
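A tiny sketch of how that challenge flow could look on the server (the question set and time limit are placeholders):

```python
# Sketch: every so often, pose a random one-off question and require a
# reasonably quick, correct answer before the session keeps its "human" flag.
import random, time

QUESTIONS = [  # in practice: several thousand, pulled from a database
    ("What colour is the sky on a clear day?", "blue"),
    ("Type the word 'seven' as a digit.", "7"),
]
TIME_LIMIT_SECONDS = 30  # assumption: generous enough for a human

def issue_challenge():
    question, answer = random.choice(QUESTIONS)
    return {"question": question, "answer": answer, "issued_at": time.time()}

def check_challenge(challenge, response: str) -> bool:
    in_time = time.time() - challenge["issued_at"] <= TIME_LIMIT_SECONDS
    correct = response.strip().lower() == challenge["answer"]
    return in_time and correct
```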
However, based on your question, I'd say your best bet is to not implement the anti-automation features in C#. You stand very little chance of detecting well-written hacks/bots from within managed code, especially when all the hacker has to do is go into ring 0 to avoid detection via any standard method. I'd recommend a Warden-like approach (a downloadable module that you can update whenever you feel like it) combined with a kernel-mode driver that hooks the Windows API functions and watches them for "inappropriate" calls. Note, however, that you're going to run into a lot of false positives, so you should not base your banning system on automated data alone. Always have a human look it over before banning.
A common method of listening to keyboard and mouse input in an application is setting a Windows hook using SetWindowsHookEx.
Vendors usually try to protect their software during installation so that hackers can't automate it or crack/find a serial for their application.
Google the term: "Key Loggers"...
Here's an article that describes the problem and methods to prevent it.
I have no deep understanding of how PunkBuster and similar software work, but this is the way I'd go:
Intercept calls to the API functions that handle memory access, like ReadProcessMemory, WriteProcessMemory and so on.
You'd detect if your process is involved in the call, log it, and trampoline the call back to the original function.
This should work for the screenshot taking too, but you might want to intercept the BitBlt function.
Here's a basic tutorial concerning the function interception:
Intercepting System API Calls
You should look into what goes into Punkbuster, Valve Anti-Cheat, and some other anti-cheat stuff for some pointers.
Edit: What I mean is, look into how they do it; how they detect that stuff.
I don't know the technical details, but Internet Chess Club's BlitzIn program seems to have integrated program-switching detection. That's of course for detecting people running a chess engine on the side and not directly applicable to your case, but you may be able to extrapolate the approach to something like: if process X takes more than Z% CPU time over the next Y cycles, it's probably a bot running.
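A rough sketch of that heuristic using the third-party `psutil` package; the thresholds are placeholders, this is easy to evade, and it should be treated as one weak signal rather than proof:

```python
# Sketch: flag other processes that consistently consume a lot of CPU while
# the game is running, as a weak signal that an engine/bot may be active.
import time
import psutil

CPU_THRESHOLD = 25.0   # percent of one core; assumption
CYCLES = 10            # how many consecutive samples must exceed the threshold
SAMPLE_SECONDS = 2.0

# Prime psutil's per-process CPU counters (the first call always returns 0.0).
for proc in psutil.process_iter():
    try:
        proc.cpu_percent(interval=None)
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass
time.sleep(SAMPLE_SECONDS)

suspicious_counts = {}
for _ in range(CYCLES):
    for proc in psutil.process_iter(["pid", "name"]):
        try:
            usage = proc.cpu_percent(interval=None)
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
        if usage > CPU_THRESHOLD:
            key = (proc.info["pid"], proc.info["name"])
            suspicious_counts[key] = suspicious_counts.get(key, 0) + 1
    time.sleep(SAMPLE_SECONDS)

for (pid, name), count in suspicious_counts.items():
    if count == CYCLES:
        print(f"possible bot/engine: pid={pid} name={name}")
```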
That, in addition to a "you must not run anything else while playing the game to be eligible for prizes" clause in the contest rules, might work.
Also, a draconian "we may decide at any time, for any reason, that you have been using a bot and disqualify you" rule helps with the heuristic approach above (it is used in prized ICC chess tournaments).
All these questions are essentially addressed by the server-authoritative approach in the next paragraph:
* how to detect if another application makes periodic screenshots?
* how to detect if another application scans our process memory?
* what are good ways to determine whether user input (mouse movement, keyboard input) is human-generated and not automated?
* is it possible to detect if another application requests information about controls in our application (position of controls etc)?
I think a good way to make the problem harder for crackers is to keep the only authoritative copy of the game state on your servers, only sending updates to clients and receiving updates from them. That way you can embed client validation in the communication protocol itself (confirming the client hasn't been cracked and thus the detection rules are still in place). That, plus actively monitoring for new weird behavior, might get you close to where you want to be.
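A bare-bones sketch of the server-authoritative idea (the action format and the legality rule are placeholders; the point is that the server owns the state and rejects impossible updates instead of trusting the client):

```python
# Sketch: the server keeps the only authoritative game state, applies client
# actions only after validating them, and never trusts client-reported state.
class GameServer:
    def __init__(self):
        self.state = {"players": {}}  # authoritative copy lives here

    def join(self, player_id: str):
        self.state["players"][player_id] = {"x": 0, "y": 0}

    def handle_action(self, player_id: str, action: dict):
        player = self.state["players"].get(player_id)
        if player is None:
            raise ValueError("unknown player")
        if action.get("type") == "move":
            dx, dy = action.get("dx", 0), action.get("dy", 0)
            # Placeholder legality rule: one step per tick, no teleporting.
            if abs(dx) > 1 or abs(dy) > 1:
                raise ValueError("illegal move rejected")
            player["x"] += dx
            player["y"] += dy
        # The reply contains only what this client is allowed to know.
        return {"x": player["x"], "y": player["y"]}

server = GameServer()
server.join("alice")
print(server.handle_action("alice", {"type": "move", "dx": 1, "dy": 0}))
```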