Supporting multi-monitors - requirements

I want to provide multi-monitor support in my application.
In the past I have had the simplistic view that multi-monitor support is simply the lack of open multi-monitor related bugs. If it seems to work on a multi-monitor setup, then it supports multi-monitors, right?
But I would like to create some clear requirements about this.
What are the basic requirements I need to adhere to, in order to satisfy most users' expectations so that they might say "yes this application supports multi-monitors"?
For example, an obvious requirement is that all windows/message boxes/tooltips etc. must open on the same monitor the application is on, and any children of those windows must open on the same monitor as their parent.
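In WinForms terms, for instance, that mostly comes down to giving dialogs an owner instead of letting them pick a default position. A minimal sketch (the child form here is just a stand-in):

```csharp
using System.Windows.Forms;

public class MainForm : Form
{
    // Showing a dialog with an owner and CenterParent keeps it on the
    // same monitor as its parent, wherever that parent happens to be.
    private void ShowChildDialog()
    {
        using (var dialog = new Form())              // stand-in for a real child dialog
        {
            dialog.StartPosition = FormStartPosition.CenterParent;
            dialog.ShowDialog(this);                 // pass the parent form as the owner
        }
    }
}
```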
Can you think of any more? Are there any guidelines about this anywhere?

If the application was last used in a multi-monitor setup and is started again with none of the additional monitors plugged in, make your application sense this and move all dialogs and working areas back onto the main screen.
(Eclipse doesn't do this and it annoys the hell out of me)

What makes me very happy is an application that remembers where all of its windows are (be they on the main monitor or second monitor) and re-displays everything in the same layout when I start up the application.
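A rough WinForms sketch of the two points above, assuming the window bounds were persisted (e.g. in user settings) when the application last closed: restore the saved layout, but fall back to the primary screen if the saved position is no longer on any attached monitor.

```csharp
using System.Drawing;
using System.Windows.Forms;

public class MainForm : Form
{
    // Call this on startup with the bounds saved when the app last closed.
    private void RestoreWindowPlacement(Rectangle savedBounds)
    {
        // If the saved position no longer intersects any attached monitor
        // (e.g. the second monitor was unplugged), fall back to the primary screen.
        bool visibleSomewhere = false;
        foreach (Screen screen in Screen.AllScreens)
        {
            if (screen.WorkingArea.IntersectsWith(savedBounds))
            {
                visibleSomewhere = true;
                break;
            }
        }

        if (!visibleSomewhere)
        {
            Rectangle primary = Screen.PrimaryScreen.WorkingArea;
            savedBounds = new Rectangle(primary.Location, savedBounds.Size);
        }

        StartPosition = FormStartPosition.Manual;
        Bounds = savedBounds;
    }
}
```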

Couple of nasties to watch out for from personal experience (with GDI and Direct3D9 anyway):
When the two monitors have different bit depths and the user expands the window to cover both monitors, or drags it from one to the other... well, it can get ugly if your app wasn't expecting it. This used to be much more of a headache back when systems might mix 8-bit palettized displays with higher bit-depth RGB ones (although be aware that such configurations are still out there for certain specialized applications).
Direct3D windows created on one "D3D device" may need attention if you drag them to another display (wholly or partially). Depending on how you set up the device, the window either displays nothing on the other display, or does display content but with a massive amount of CPU overhead (enough to kill high frame rates). (This was true on XP, anyway.)
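One way to at least notice the second situation is to watch which monitor the window lands on after a move and flag the device for re-creation. A rough WinForms-side sketch, with RecreateDeviceForScreen as a hypothetical helper (the actual device handling depends on the D3D wrapper in use):

```csharp
using System;
using System.Windows.Forms;

public class GameWindow : Form
{
    // The screen (adapter) the Direct3D device was created against.
    private Screen _deviceScreen = Screen.PrimaryScreen;

    protected override void OnResizeEnd(EventArgs e)
    {
        base.OnResizeEnd(e);

        // If the window has been dragged (mostly) onto another monitor, a device
        // created for the original adapter may render nothing or render slowly,
        // so rebuild it against the monitor the window now occupies.
        Screen current = Screen.FromControl(this);
        if (current.DeviceName != _deviceScreen.DeviceName)
        {
            _deviceScreen = current;
            RecreateDeviceForScreen(current);   // hypothetical: rebuild/reset the D3D device here
        }
    }

    private void RecreateDeviceForScreen(Screen screen)
    {
        // Device re-creation with whatever D3D wrapper is in use would go here.
    }
}
```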

I always thought this was an OS responsibility. It seems that way on the Mac. Alas, I've run into lots of apps in Windows that don't behave very well on dual monitors, though I've tended to blame that on the video card.
My biggest complaint is apps that insist on an MDI-only interface. MDI is great if you have limited screen space, but it gets quite annoying when you actually have lots of screen space and want floating palettes/documents separate from each other.


What devices/browsers to target with a responsive layout

I am writing a detailed estimate for a client (project has been accepted but it is now a matter of explaining the different functionalities) to develop a responsive layout website.
It is not my first development of that kind, but this one is a key account and path must be paved.
The layout will adapt from 300px width to 1200+px, and thus virtually to "any" device and browser, but I am a little lost on my commitment to that. With desktop websites it is easy to write in your contract that the supported browsers will be "IE7+, up-to-date versions of FF, Safari, Chrome, Opera", but what do you write about a responsive website?
I have a bunch of devices that I know I'll perform tests with (let's say a PC, a Mac, iPad, iPhone, 2 or 3 Android devices), but what do I say to my client?
I can't write that the "website will work on any device", nor can I give an exhaustive list of the combinations of devices/browsers it will work on. And I don't want to be stuck with "my uncle has seen the website on his old Android 2.2 phone and it doesn't work".
There are a lot of desktop tools around to simulate various viewports and perform tests on, but they hardly work like the "real thing"; or is there one standard we developers can refer to "contractually"? How do you manage that, and what are your commitments towards your clients?
Take a Progressive Enhancement approach to development. It isn't possible to get a website to be pixel perfect and work the same on every single browser.
Take a tiered approach (Gold/Silver/Bronze). Old and untested browsers get the content (IE7, older BlackBerry, older anything). Newish browsers get the content plus a nice layout (IE8/9, Firefox < 4). And modern browsers get a typical nice modern website.
It's possible to set this up with appropriate thinking. Build it from the bottom up, Bronze to Silver to Gold. Start with the very minimum setup (only colour, font, text; no divs, no layout, nada). This is your Bronze. Next, set up the Silver level: include layout. This layout would be for smaller screens. And finally you have Gold, which adds media queries for larger screens and JS for increased usability and niceties.
You can split between Bronze and Silver by wrapping your layout within an @media only screen {} query. Older browsers do not understand it, but the content still appears on those browsers. To split between Silver and Gold, simply put in a min-width media query and you're set.
Also, ensure that the client understands the definition of "website will work on any device". Just because Opera Mini doesn't support line-height doesn't mean the site doesn't work on it. Here is an article that Brad Frost wrote on that subject: Support vs Optimization: http://bradfrostweb.com/blog/mobile/support-vs-optimization/
I hope this helps a bit
You should be targeting a minimum resolution, not a device or browser. Resize your design, watch for the point where the design no longer works comfortably, and then use media queries to respond and adapt the design.
Clients:
I think ideally you're looking for a way to explain the concept to customers. What you need to do is communicate the goal, which is to provide the best experience possible. I have found that you should be honest with your customer, let them know that you are following industry best practices and the design will be "functional" across the majority of devices and browsers on the market.
Here is a blurb that I like to use:
Compatibility across platforms: Due to the vast number of web browsers, platforms and devices, the user experience of the website may change to best fit the viewing platform.
If you would like to explain, or are asked to explain what this means, I would say:
We(I) always use best practices to meet industry standards for web design. We(I) do our(my) best to ensure that the design will be "functional" on all mainstream platforms. Because of the increasingly vast number of web-accessible devices available, I cannot guarantee that the design will look identical from one device to another.
You must also consider that new devices are created all the time, so you don't want to make your statement too concrete or you will be retrofitting your designs to accommodate the next iWatch or iFridge that hits the market.
Remember to communicate what is really important, that the content gets displayed. For the most part text and images should work almost anywhere. It's the fancy stuff like shadows, rounded corners, video (IE7), and media queries that don't always work but shouldn't obscure content.
Also, web applications can be a bit trickier, since some form elements don't work across devices and browsers (e.g. file uploads).
Hopefully this helps.
I'm not a contract lawyer; you may want to run this by a paralegal or law-savvy friend for further advice.
Having gone through this experience many times, these are the key points I tend to walk clients through to make them understand the concept.
Explain what cross-browser, platform (OS), screen resolution and device screen size mean. I tend to show the latest screenshots of, or links to, W3C stats to show them the trends over the past 2 years and where this market is going. It clearly convinces them that they need a responsive layout.
If they want further evidence, I show them the Twitter Bootstrap, jQuery UI and jQuery Mobile sites on at least 2-3 devices using Adobe Edge Inspect. A laptop, tablet and phone can all be on the same network, and you just browse through the site to show them how it responds to different sizes.
I have a list of pros and cons collected over time for IE7, IE6, Chrome Frame, the Android native browser vs Chrome on Android, and the iPhone 4 vs iPhone 5 browser. Usually I see which way they are more inclined, and if it's phone I will make them aware of the pros and cons of that market. Most of the time they should understand that it's not an app but a responsive site.
When you are writing a proposal, do not put in "We will cover all devices". You can never do that. Be realistic. Create virtual machines and simulators to test at least the framework you are going to use before making these claims in writing. Setting up a VM or VirtualBox is free, and you can get Linux and Eclipse or NetBeans to run a dummy phone and tablet and browse your framework on it.
For screen resolution and screen size, again the W3C stats are very realistic and convincing. Use them, use Resolution Test-style plugins in Firefox or Chrome, and take some screenshots of your past work or just the framework, so you can at least show the goodness involved, or the limitations of what can and cannot be done.
Most of the answers here are quite right; I just wanted to add how you can use both W3C data and statistics to convince clients about the route to a responsive layout. There are more than a million sample sites online to convince people now, and for the past 6 months I have found that people do get convinced by step 1 alone.
It's quite clear. Your client wants a modern website. However, not all browsers support all the modern features. Your client wants to spend money wisely and have a site that improves over time, not one that grows old quickly and sprouts new bugs as new browsers are released. This is what Graded Browser Support is for: http://jquerymobile.com/gbs/
The site will be usable in all browsers. Mainstream browsers will get an enhanced user experience. The most modern browsers will get the super cool newest goodies.
Let the client know that the more you "hack" the site to get some feature to work in some defective old browser, the more likely it is to break on the new ones. It's not worth the time and money.
Here is what I do when designing responsive websites.
When an old or unsupported browser is detected, the website will simply exclude the jQuery elements that make it responsive, so the result is a fixed width site.
Now, what do you tell your client?
Just be frank. Tell them you made the website responsive for the modern devices out there. On older devices, their website won't be as good-looking. Show them a couple of examples too. Some big companies simply display an alert telling the user their browser is outdated and the website won't work properly; Google is one example.
So, essentially, your website works with all devices, but looks better and is responsive on the modern devices and browsers out there.

Webkit vs Processing for Interactive Applications

I know this sounds a little bizarre, but there is a very simple application I want to write, a sort of unique image viewer, which requires some interactivity with the host system at the user level. Simplicity when developing is a must as this is a very small side project. The project does require some amount of graphical work and quite a bit of mouse based interactivity (as well as some keyboard shortcuts), but quite frankly, I don't want to dig my hands into OGL for something this small. I looked at the available options, and I think I've narrowed it down to two main choices: Webkit (through either QtWebkit or WebkitGtk), and the language Processing.
Since I haven't actually used Processing but do have some amount of HTML5 canvas and Javascript experience, I am somewhat tempted to use a Webkit-based solution. There are, however, several concerns I have.
How is Webkit's support for canvas, specifically for more graphically intensive processes?
I've heard that bridging is handled better in QtWebkit than WebkitGtk. Is this still true?
How much can bridging actually do? Can a Webkit-based application do everything that an application which interacts with files on the system needs?
Looking at Processing, there are similarly, a couple things I'm wondering.
Processing is known for its graphical capabilities, but how capable is it for writing a general everyday desktop application?
There are many sources that link Processing to Java, both in lineage as well as in distributing applications over the web (i.e. JApplets). Is the "Application Export" similarly closely integrated with Java?
As for directly comparing the two, the main concern I do have is the overhead of each. I want the application to start up as snappy as possible, and I know that Java has a bit of an overhead regarding start up because it first has to start up the interpreter. How do Processing and QtWebkit/WebkitGtk compare for start up?
Note that I am targeting the Linux platform only.
Thanks!
It's difficult to give a specific answer, because you're actually asking a few different kinds of questions, and some of them could be more precise.
Processing is a subset or child of Java: it's really "just" a Java framework with a free IDE that hides the messy setup work of building an applet, so that a user can dive in and write something quickly without getting bogged down in widgets, UI, etc. So Processing can exist by itself, and the end user needs to know nothing of Java (except syntax: Processing is Java, so the user must learn Java syntax).
But a programmer who already knows Java can exploit the fun, quick nature of Processing and then leverage their normal Java experience for whatever else is needed. Everything of Java is in Processing, just maybe slightly hidden (but only at first). It's also possible to import the Processing .jars into an existing Java program and use them there. See http://processing.org/learning/eclipse/ for more information.
"How capable is it for writing a general everyday desktop application?" - Not particularly on its own (it's not made to be), but some things are possible and easy (e.g. file saving and loading, non-standard GUIs, etc.), and in some ways it's similar to old-school ActionScript or Lingo. There is a library called controlP5 that makes GUI stuff a bit easier.
Webkit is another kettle of fish, especially if you aren't making a web-based thing (it sounds like you're thinking of using the Webkit libraries as part of a larger program). I'll admit I don't have the dev expertise with those specific libraries to give you the answer you really want, but I'm pretty certain that unless you have programming experience beyond HTML5/Javascript, you'll probably get going much faster with Processing.
Good luck with whichever path you choose!

Is IDS overkill for a single-user app?

I have the following dilemma: my clients (mom-and-pop pawnshops) have been using my management system, developed with ISQL, for over 20 years. Throughout these two decades, I have customized the app to each client's wishes, or when changes in laws/regulations have required it. Most clients are single-user sites. Some have multiple stores, but they have never wanted a distributed DB and don't trust the reliability or security of the internet or any other type of networking. So, they all use Standard Engines. I've been able to work around some SE limitations and done some clever tricks with ISQL and SE, but sooner or later new laws may require images of pawnshop customers, merchandise, electronic transmission, etc., and then it will be time to upgrade to IDS, re-write the app in 4GL, or change to another RDBMS. The logical and easiest route would be IDS/4GL; however, when I mentioned Linux or Unix-like platforms to my clients, they reacted negatively and demanded a Windows platform, so the easiest solution could be 4Js, Querix, etc... or Access, Visual FoxPro or something else. Anyone have suggestions?
This whole issue probably comes down to a couple of issues that you'll have to deal with.
The first thing is what application programming and development language are you willing to learn and work with?
The other thing is what kind of Internet capabilities do you want?
So for example while looking at a report do you want to be able to click on a button and have the report converted to a PDF document, and then launch the e-mail client with that PDF attached?
What about after they enter all the information into the system: perhaps each store would like its own miniature web site, where people in town could go to check what the store has, instead of having to phone up the store and ask if they have a $3 used lighter (the labor of phoning and checking for these cheap items is MORE than the cost of selling the item, so the web is really great for this type of scenario).
The other issue is what kind of interface do you want? I assume you currently have some type of green screen or text based interface? Or perhaps over the years you did convert over to a GUI (graphical user interface).
If it is still green screen (text based), you now have to put a considerable amount of effort and time into the layout and how your screens will work with a graphical system. I can remember when going from green screens to color: all of a sudden the choices and effort of having to choose correct colors and layouts for each screen actually increased the workload by quite a bit. And when I went from color text screens to a graphical interface, again all of a sudden we were presented with a large number of new controls and colors, plus a large choice of different fonts and sizes.
And now with the web, not only do you deal with different kinds of button styles (round, oval, shading, shadows, glow effects), but in addition to all those hover and shading effects you have to get down to some pretty serious issues in terms of what kind of colors (theme) your software will adopt for the whole web site.
This really comes down to how much learning and time you are willing to invest into new tools and how much software you can and will produce for given amount of time and effort.
I am quite partial to RAD tools when you get down into the smaller business marketplace. Most smaller businesses cannot afford the rates for a .net developer (it's not so much the rate as the time it takes to build an application). So using MS Access is a good choice in the smaller business marketplace. Access is still a good 3 to 5 times more productive than many of the other tools in the marketplace. So a quote from a .net developer to develop something might be $12,000, and the same thing in Access might be $3,000. I mean that a small business cannot afford to pay you to write unit testing code; this type of extra cost is just not going to happen on smaller-scale projects.
The other big issue you have to deal with is what kind of report-writing system you are going to build into the system. This is another reason why I like Access for smaller business applications: the report writer is really fantastic. Access reports have a whole bunch of abilities to bake in connections from forms and queries and pass filters and parameters into those reports. And often the forms and queries that you have already spent time building can talk to reports with parameters and pass values in a way that again really reduces the workload (development costs).
I think the number one issue that you'll have to address here, however, is what you're going to do for your web-based strategy. You absolutely have to have one. Even if you build the front-end part in Access, you might still want to use a free edition of SQL Server for the back-end part. There are several reasons for this, but one is that it then makes it easy to connect multiple stores up over the Internet.
Another advantage of putting your data in some type of server-based system is that you can then set up some type of web server for all the stores to use, and build a tiny little customized system that allows each store to have its products and listings online (but they use YOUR web server, or one that you are paying $15 per month for, to host all of those customers). This web part could be an optional component that perhaps not all customers want. It would work off the data they have to enter into the system anyway.
One great advantage of adopting these web based systems is not only does it allow these stores to serve their customers far better, but it also opens up the doors for you to convert your software into a monthly fee based system, or at least some part of it such as the optional web hosting part you offer.
When I converted some of my longer-lived applications from green-screen, mainframe-type software into Windows desktop applications, it opened up large markets for me. With remote desktop, downloadable software and updates issued from a web site, these new systems make all of the nuts-and-bolts parts of delivering software very easy now, especially for supporting customers in different cities whom you've never met face to face.
So, if you are still talking primarily single user and one location, Access will reduce your development costs by a lot. It really depends on how complex and rich an application you are talking about. If the size and scope of the project are beyond one developer, then you are talking more about developer scaling (source code control, object development methodology, unit testing, the cost and time of setting up a server-based database system like SQL Server, etc.). There is certainly a tipping point here; when you go beyond that tipping point of cost, time and complexity, I actually don't recommend Access. So this all comes down to the right horse for the right course.
Perhaps at the end of the day, it really comes down to what application development system you are willing to invest the time to learn.
Look at Aubit4GL - that is, I believe, available on (or can be compiled for) Windows.
Yes, IDS is verging on overkill for a single-user system, but if SE doesn't provide all the features you need, or anticipate needing in the near future, it is a perfectly sensible choice. However, with a modicum of care, it can be set up to be (essentially) completely invisible to the user. And for a non-stressful application like this, the configuration is not complicated. You, as the supplier, would need to be fairly savvy about it. But there are features like silent install such that you could have your own installer run the IDS installer to get the software onto the customer's machine without extra ado. The total size of the system would go up - IDS is a lot bigger on disk than SE is (but you get a lot more functionality). There are also mechanisms to strip out the bigger chunks of code that you won't be using - in all probability. For example, you'd probably use ON-Tape for the backups; you would therefore omit ON-Bar and ISM from what you ship to customers.
IDS is used in embedded systems where there are no users and no managers working with the system. The hardware sits in the cupboard (closet) and works, communicating over the network.
It's good to see folks still getting value out of "old school" Informix tools. I was never adept at Perform, but the ACE report writer always suited me. We skipped Perform and went straight for FourGen, and I lament that I've never been as productive as I was with FourGen. It had its own kind of elegance, from its code generators to its funky, but actually quite powerful, stand-alone menu system.
I appreciate the modern UI dynamics, but, damn, is it hard to write applications today. Not just tools, but simply industry requirements et al (such as you may be experiencing in your domain). And the Web is just flat out murder.
I guess part of it is that since most "green screen" apps look the same, it's hard to make one that looks bad! With GUIs and the Web etc., you can't simply get away with a good field order and the labels lining up.
But, alas, such as it is, that is what we have.
I have not used it in, what now, 15 years, but you may also want to look at Alpha 5. It was a pretty powerful, but not overly complicated, database development package, and (apparently) still going strong.
I wouldn't be too afraid of IDS. It runs pretty simply. Out of the box with zero or little tweaking, the DB works and is efficient, and it used to be pretty trivial to install. It was no SE, in that SE's access was tied to the application (using a library) vs an independent server that is IDS. But, operationally, it's really straightforward -- especially for an app like what you're talking about. I appreciate that it might be overkill, but even today, the resource requirements won't necessarily be insane. There's a lot of functionality, of course, and flexibility that you won't use. But frankly, beyond "flat file" DBase style databases, pretty much ALL of the server based SQL databases are very powerful and capable and potentially complicated. But they don't have to be. They can still be used "simply" and easily (well, save for Oracle -- Oracle can't do anything "simply").
As far as exploring other solutions, don't be too afraid of the "OOP" stuff; most applications, while they leverage OOP libraries, aren't really OOP themselves (they can be, they just typically aren't; they simply don't need to be). The biggest issue with many of the OOP systems is that they're simply too finely structured, dealing with events at far too low a level. While many programs need access to that fine level of control, most applications, particularly ones much like yours, do not. So the extra flexibility simply gets in the way or creates more boilerplate.
That said, you shouldn't be frightened away from them per se, citing lack of expertise. They can be picked up reasonably quickly. But I would certainly exhaust the more specialized tools (like Alpha 5, or Access, etc.) first to see if they don't offer what you want.
As for Visual FoxPro: it was and remains a peerless tool (despite flak from people who know little about it). It has a fast, native database engine, built-in SQL, a powerful report designer and so on. But you also have to consider that Microsoft support for it will be dropped in 2014, there will never be a 64-bit version, and so on. And the file-locking method it uses will be increasingly flaky on future versions of Windows, IMO.

Why are HTML frames bad?

I know they are, but my co-worker doesn't believe me. He keeps telling me that Google crawls the inside content and caches it just fine. According to Google, it does crawl them, but doesn't guarantee doing it properly.
Any thoughts why frames are bad for public web sites?
There are various usability and accessibility issues with frames:
* a link can open in the frame it is enclosed in (e.g. a side pane);
* they can break the forward/backward navigation;
* they are difficult to bookmark;
* they are not easily searchable (you are likely to see only the bare framed content in Google, etc., without the surrounding frameset);
* they break on browsers like Lynx that are console/terminal based;
* they are difficult to size properly (e.g. banner frames consuming height on widescreen monitors);
* they can break with screen readers and magnifiers (for blind and visually impaired users).
See http://www.angelfire.com/super/badwebs/ for an example of what not to do.
Frames are more difficult to bookmark and, therefore, more difficult to share with others.
http://www.yourhtmlsource.com/frames/goodorbad.html
IFrames (like HTML tables) are not bad. However, people were abusing them quite a lot, thus giving them a bad name.
IFrames do represent a good concept - single visual representation of documents coming from different sources, while keeping the DOM trees properly separated and isolated.
The problem arises when a script in one of the DOM trees needs to access elements in another tree, or when people want to reference the document location, which happens to be the URL of the root document, and fail to realize they need the location of the secondary document.
But the biggest problem with frames is that there are sites that want to encapsulate other sites in a frame and trick the user into thinking they are interacting with the framed site, while in fact they are interacting with the outer one. This is the primary reason why most websites employ some form of frame-busting script for their login pages.
Update: It's Friday and we need some fun, so here's the (obligatory) link to Jeff's post on frames-busting-busters-busting... :-)
At the beginning ...
The idea behind framesets is great. It's alive and kicking today; check StackOverflow's left side panel, or the header. They are fixed divs, which is basically the same thing as having frames, although a lot more flexible.
The very concept of keeping some part while changing another is simply necessary by the logic of webpages. We need something to stay where it was (typically navigation) while we go through a lot of details in the main area. Framesets served this purpose very well, they were easy to use and fully supported by all browsers, meaning 3 at that time (Netscape, IE, Opera).
Then we scorched the sky
The real, practical problems with frames had nothing to do with their basic concept. Instead, it was us being only human. I followed this whole debate very closely so believe me when I say these were the real charges against frame technology:
Designers hated them. Yes, that was the deadliest punch. Everything looked square and straight. They hated it. They wanted arcs and image backgrounds and rounded borders. Now they have it in CSS3 - guess what, they're drawing squares. #whatever
Programmers had trouble with them. It was inconvenient to follow the logic of frames, and you had to do some extra work. I mean, some. Today it's a lot harder to create AJAX solutions for the same problem, but no one complains. #whatever
Websites could include one another. This was painful for some site owners because they worked hard on something and another fella used it as their own content. Later, the same-origin policy was invented, but that came long after people had started to hate frames. Content stealing is still an issue today, absolutely unrelated to whether we have frames or not. #whatever
Back button worked differently. Yes, it was a bit annoying. But it was not the frame concept's fault, again: it was browsers who did this to us. Could have been solved easily, but nah, browsers kept going back one by one, not providing the site a way to implement its own "step back" method, and alas, this is still happening today. #whatever
So instead of coming up with a solution, the world's web developers decided to hate frames. They ditched them, and now we live in a world where there are lots of better solutions - but with a lot more effort. This was not the only feature going through the hate-ditch-reinvent-love cycle; see vertical centering and flexbox, aka the table-tag debate - and it will happen many more times, because it's always easier to point fingers at something than to learn why it's great.
I don't hate frames; I don't miss them either; they belong to a somewhat outdated world of the web. But they were a good solution for something, and there's a chance we'll see something similar in the future, just as CSS grids came back to implement what table layouts did before. The same community that hates the old solution will happily embrace the new one and tell you why it's not the same at all.
I think this story has a single takeaway:
Implementations come and go. Concepts stay and evolve.
Depending on what you want to do, most things done with frames can be done with CSS. CSS stylesheets are compatible with all MODERN browsers, meaning your website will look the same whether you're using Firefox, Chrome, or IE7 (with some tweaks). Backward compatibility is also less of a concern, as users can view the content even with CSS off (whereas a website using frames, without a frameless version of the site, will be useless to a user with an old browser); it just won't be as stylized. It's also quite easy to learn, and once you get the hang of it you'll wonder why you didn't learn it in the first place.
I know this is an old thread but..
I've been using frames almost all my life and I think they are great. I still have a few websites using frames and I cannot understand why they are being dropped. I've read all of the comments above and disagree with most of them. The problem is that most people never bothered to overcome the issues.
Link can open in the frame it is enclosed in (e.g. a side pane);
Yes it can, but if you do it properly it does not matter. Frames can in fact be very useful for this precise reason, as clicking on a link will only refresh the frame the link is pointing to, not the entire webpage. In the days of dial-up modems at very slow speeds this used to be extremely useful for saving bandwidth and making webpages appear super fast. Don't forget, there are still people around the world today (albeit not many) who have very limited internet connectivity at very slow speeds: people on sailboats in the middle of oceans, those die-hards that dial into the internet using HF radio, and those who live in war zones and fall back to poor mobile phone signals, or possibly even need to dial into the internet in another country using the infrared connections on their mobile phones via a modem.
can break the forward/backward navigation.
Yes it can, but if you do it properly it won't.
difficult to bookmark
Again, very easy to overcome; it requires very little additional work.
are not easily searchable (likely to see the content in Google, etc.);
break on browsers like Lynx, that are console/terminal based;
Already covered by somebody else earlier. Personally I have never even heard of Lynx (apart from the deodorant). In fact it used to be quite useful that pages weren't searchable, when you did not want to get spammed by bots harvesting email addresses... Unfortunately Google or somebody figured out how to do it.
Difficult to size properly (e.g. consuming height on widescreen monitors for banner frames);
Clearly whoever wrote this has hardly any experience of using frames. This was exactly why I used frames: I could make the site work at any screen size, in what some would now refer to as fluid views in modern web design.
can break with screen readers and magnifiers (for blind users and visual impaired users);
I suppose it can, if the screen-reading software and magnifiers are cheap and rubbish and don't know what they are doing (probably the ones whose users complained about it), but there are others that manage this easily.
The only argument that I think makes sense is that people were abusing them. I would not know how that was done, as I am not in that game, but I suppose it would be easy to use frames to show a copy of, let's say, a financial payment page inside another frame which is completely hidden, to make it look like the user is on the correct page, thereby conning users out of their beer tokens. But I believe more modern web browsers have been updated to overcome these issues and not allow redirection where encryption certificates are used.
I can therefore understand why they would want to restrict the use of frames, but I don't understand why they need to completely remove what is a pretty good bit of tech. (It's a bit like saying we are going to stop people from using 0 when they do math because it can sometimes cause problems when you add many 0's together.)
I still have some websites that use frames and wonder when I am going to have to re-code them one day.
P.S. Also note that Google Calendar and YouTube allow one to embed pages into websites, and both of these use iframes.

Protection against automation

One of our next projects is supposed to be an MS Windows based game (written in C#, with a WinForms GUI and an integrated DirectX display control) for a customer who wants to give away prizes to the best players. This project is meant to run for a couple of years, with championships, ladders, tournaments, player-vs-player action and so on.
One of the main concerns here is cheating, as a player would benefit dramatically if he were able to - for instance - let a custom-made bot play the game for him (more in terms of strategy decisions than in terms of playing many hours).
So my question is: what technical possibilites do we have to detect bot activity? We can of course track the number of hours played, analyze strategies to detect anomalies and so on, but as far as this question is concerned, I would be more interested in knowing details like
how to detect if another application takes periodic screenshots?
how to detect if another application scans our process memory?
what are good ways to determine whether user input (mouse movement, keyboard input) is human-generated and not automated?
is it possible to detect if another application requests information about controls in our application (positions of controls, etc.)?
what other ways exist in which a cheater could gather information about the current game state, feed it to a bot and send the determined actions back to the client?
Your feedback is highly appreciated!
I wrote d2botnet, a .net Diablo 2 automation engine, a while back, and something you can add to your list of things to watch out for is malformed/invalid/forged packets. I assume this game will communicate over TCP. Packet sniffing and forging are usually the first way games (online ones, anyway) are automated. I know Blizzard would detect malformed packets, something I tried to stay away from doing in d2botnet.
So make sure you detect invalid packets. Encrypt them. Hash them. Do something to make sure they are valid. If you think about it, if someone knows exactly what every packet sent back and forth means, they don't even need to run the client software, which then makes any process-based detection a moot point. So you can also add in some sort of packet-based challenge/response that your client must know how to respond to.
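As an illustration of the "hash them" part only, here is a minimal C# sketch that appends an HMAC to each outgoing payload so the server can reject forged or tampered packets. The hard-coded key and class name are assumptions for the example; in practice the key would be negotiated per session, and you'd still want encryption and replay protection on top.

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

public static class PacketSigner
{
    // Assumed shared secret; in a real protocol this would be negotiated per session.
    private static readonly byte[] Key = Encoding.UTF8.GetBytes("per-session-secret");

    // Append an HMAC-SHA256 tag so the receiver can verify integrity/authenticity.
    public static byte[] Sign(byte[] payload)
    {
        using (var hmac = new HMACSHA256(Key))
            return payload.Concat(hmac.ComputeHash(payload)).ToArray();
    }

    public static bool Verify(byte[] packet)
    {
        const int MacLength = 32;                    // HMAC-SHA256 output size in bytes
        if (packet.Length < MacLength) return false;

        byte[] payload = packet.Take(packet.Length - MacLength).ToArray();
        byte[] mac = packet.Skip(packet.Length - MacLength).ToArray();

        using (var hmac = new HMACSHA256(Key))
        {
            byte[] expected = hmac.ComputeHash(payload);
            // A constant-time comparison is preferable in production code.
            return expected.SequenceEqual(mac);
        }
    }
}
```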
Just an idea: what if the 'cheater' runs your software in a virtual machine (like VMware) and takes screenshots of that window? I doubt you can defend against that.
You obviously can't defend against the 'analog gap', e.g. the cheater's system makes external screenshots with a high quality camera - I guess it's only a theoretical issue.
Maybe you should investigate chess sites. There is a lot of money in chess, they don't like bots either - maybe they have come up with a solution already.
The best protection against automation is to not have tasks that require grinding.
That being said, the best way to detect automation is to actively engage the user and require periodic CAPTCHA-like tests (except without the image and so forth). I'd recommend utilizing a database of several thousand simple one-off questions that get posed to the user every so often.
However, based on your question, I'd say your best bet is to not implement the anti-automation features in C#. You stand very little chance of detecting well-written hacks/bots from within managed code, especially when all the hacker has to do is simply go into ring0 to avoid detection via any standard method. I'd recommend a Warden-like approach (download-able module that you can update whenever you feel like) combined with a Kernel-Mode Driver that hooks all of the windows API functions and watches them for "inappropriate" calls. Note, however, that you're going to run into a lot of false positives, so you need to not base your banning system on your automated data. Always have a human look over it before banning.
A common method of listening to keyboard and mouse input in an application is setting a windows hook using SetWindowsHookEx.
Vendors usually try to protect their software during installation so that hackers won't automate it and crack/find a serial for their application.
Google the term: "Key Loggers"...
Here's an article that describes the problem and methods to prevent it.
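For illustration, this is roughly what installing such a hook looks like from C# via P/Invoke - the same mechanism a bot or key logger would use, so it is also the kind of call an anti-cheat module might watch for. A minimal sketch, assuming a WinForms-style message loop is running:

```csharp
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Windows.Forms;

public static class LowLevelKeyboardHook
{
    private const int WH_KEYBOARD_LL = 13;
    private const int WM_KEYDOWN = 0x0100;

    private delegate IntPtr HookProc(int nCode, IntPtr wParam, IntPtr lParam);

    [DllImport("user32.dll", SetLastError = true)]
    private static extern IntPtr SetWindowsHookEx(int idHook, HookProc lpfn, IntPtr hMod, uint dwThreadId);

    [DllImport("user32.dll", SetLastError = true)]
    private static extern bool UnhookWindowsHookEx(IntPtr hhk);

    [DllImport("user32.dll")]
    private static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode, IntPtr wParam, IntPtr lParam);

    [DllImport("kernel32.dll")]
    private static extern IntPtr GetModuleHandle(string lpModuleName);

    // Keep a reference to the delegate so it is not garbage collected while the hook is installed.
    private static readonly HookProc Callback = HookCallback;
    private static IntPtr _hookId = IntPtr.Zero;

    public static void Install()
    {
        using (Process process = Process.GetCurrentProcess())
        using (ProcessModule module = process.MainModule)
            _hookId = SetWindowsHookEx(WH_KEYBOARD_LL, Callback, GetModuleHandle(module.ModuleName), 0);
    }

    public static void Uninstall() => UnhookWindowsHookEx(_hookId);

    private static IntPtr HookCallback(int nCode, IntPtr wParam, IntPtr lParam)
    {
        if (nCode >= 0 && wParam == (IntPtr)WM_KEYDOWN)
        {
            int vkCode = Marshal.ReadInt32(lParam);          // first field of KBDLLHOOKSTRUCT
            Debug.WriteLine("Key down: " + (Keys)vkCode);
        }
        return CallNextHookEx(_hookId, nCode, wParam, lParam);
    }
}
```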
I have no deeper understanding of how PunkBuster and similar software works, but this is the way I'd go:
Intercept calls to the API functions that handle the memory stuff, like ReadProcessMemory, WriteProcessMemory and so on.
You'd detect if your process is involved in the call, log it, and trampoline the call back to the original function.
This should work for the screenshot taking too, but you might want to intercept the BitBlt function.
Here's a basic tutorial concerning the function interception:
Intercepting System API Calls
You should look into what goes into Punkbuster, Valve Anti-Cheat, and some other anti-cheat stuff for some pointers.
Edit: What I mean is, look into how they do it; how they detect that stuff.
I don't know the technical details, but Internet Chess Club's BlitzIn program seems to have integrated program-switching detection. That's of course for detecting people running a chess engine on the side, and not directly applicable to your case, but you may be able to extrapolate the approach to something like: if process X takes more than Z% CPU time over the next Y cycles, it's probably a bot running.
That, in addition to a "you must not run anything else while playing the game to be eligible for prizes" clause in the contest rules, might work.
A draconian "we may decide at any time, for any reason, that you have been using a bot and disqualify you" rule also helps with the heuristic approach above (it is used in ICC chess tournaments with prizes).
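A rough sketch of that CPU-time heuristic in C#, using System.Diagnostics to compare per-process CPU time across a sample window. The threshold and window length are made-up numbers, and legitimate heavy processes (encoders, compilers, antivirus scans) will trip it, so treat a hit as a signal for human review, not proof.

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;

public static class CpuWatcher
{
    // Returns the names of processes whose CPU time grew by more than `threshold`
    // (as a fraction of one core) over the sample window.
    public static List<string> FindBusyProcesses(double threshold = 0.5, int sampleMs = 5000)
    {
        Dictionary<int, TimeSpan> before = SnapshotCpuTimes();
        Thread.Sleep(sampleMs);

        var suspicious = new List<string>();
        foreach (Process process in Process.GetProcesses())
        {
            try
            {
                if (before.TryGetValue(process.Id, out TimeSpan earlier))
                {
                    double share = (process.TotalProcessorTime - earlier).TotalMilliseconds / sampleMs;
                    if (share > threshold)
                        suspicious.Add(process.ProcessName);
                }
            }
            catch (Exception)
            {
                // Access denied, or the process exited between snapshots; ignore it.
            }
        }
        return suspicious;
    }

    private static Dictionary<int, TimeSpan> SnapshotCpuTimes()
    {
        var snapshot = new Dictionary<int, TimeSpan>();
        foreach (Process process in Process.GetProcesses())
        {
            try { snapshot[process.Id] = process.TotalProcessorTime; }
            catch (Exception) { /* some system processes refuse access */ }
        }
        return snapshot;
    }
}
```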
All these questions are essentially addressed by the same rule, keeping the authoritative game state on the server (described below):
* how to detect if another application takes periodic screenshots?
* how to detect if another application scans our process memory?
* what are good ways to determine whether user input (mouse movement, keyboard input) is human-generated and not automated?
* is it possible to detect if another application requests information about controls in our application (positions of controls, etc.)?
I think a good way to make the problem harder for crackers is to keep the only authoritative copies of the game state on your servers, only sending updates to and receiving updates from the clients. That way you can embed client validation in the communication protocol itself (that the client hasn't been cracked and thus the detection rules are still in place). That, plus actively monitoring for new weird behavior, might get you close to where you want to be.
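A minimal server-side sketch of that idea: the server owns the state, clients only send intents, and anything that violates the rules is rejected (and can be logged as suspicious). The grid-movement rule and type names here are invented purely for illustration.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical client intent: "move my piece to (ToX, ToY)".
public class MoveAction
{
    public Guid PlayerId { get; set; }
    public int ToX { get; set; }
    public int ToY { get; set; }
}

public class AuthoritativeGameServer
{
    // The only trusted copy of player positions lives here, on the server.
    private readonly Dictionary<Guid, (int X, int Y)> _positions = new Dictionary<Guid, (int X, int Y)>();

    public bool TryApply(MoveAction action)
    {
        if (!_positions.TryGetValue(action.PlayerId, out var current))
            return false;                                // unknown player: reject

        // Invented rule: one tile per turn. A client claiming a longer move is either
        // buggy or tampered with, so reject it (and log it for the cheat-review queue).
        int distance = Math.Abs(action.ToX - current.X) + Math.Abs(action.ToY - current.Y);
        if (distance > 1)
            return false;

        _positions[action.PlayerId] = (action.ToX, action.ToY);
        return true;                                     // broadcast the accepted update to clients here
    }
}
```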