Why does Squeak use Colors to identify Mouse Buttons? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
This is really annoying when you try to follow the Squeak by Example documentation.
Instead of calling the mouse buttons left, right, and middle, as in any other documentation, they give them colors. It's even suggested to label the mouse to help you learn.
It's 2009 and there are three dominant systems left: Windows, Mac OS X, and Linux.
Why do they still stick to this naming scheme? And how am I supposed to sell this to co-workers, or even customers?
From Squeak by Example:
Squeak avoids terms like “left mouse click” because different computers, mice, keyboards and personal configurations mean that different users will need to press different physical buttons to achieve the same effect. Instead, the mouse buttons are labeled with colors. The mouse button that you pressed to get the “World” menu is called the red button; it is most often used for selecting items in lists, selecting text, and selecting menu items. When you start using Squeak, it can be surprisingly helpful to actually label your mouse, as shown in Figure 1.4.

The button colors probably date back to the experiments at Xerox (where the mouse was invented). So maybe the question should be “why do current computers have colorless mouse buttons?” :D
As for sticking with the colors in the book, I think the reason is that the colors are still mentioned in the code, and the colors don't always map to the same fingers on every platform. But I agree, the color system is not very practical; the best option would probably be primary/secondary/tertiary buttons?

That's one of those things you take with a grain of salt. :)
I read that the other day, and I will certainly not go out of my way to add some colourful buttons to my mouse.
Just mentally substitute "left-click" for red, etc.

It's ridiculous. Left and right are already abstract concepts. Naming the buttons with colours is an abstraction of an abstraction.

The labels left and right are avoided because left-handed people will have the buttons reversed. What does it mean when a lefty mouse has its right button clicked? Should the program perform its right-click action or its left-click action? If we simply swap the mappings, then right and left become rather meaningless to the programmer.
I assume the designers of Squeak wanted to avoid this thorny issue, so actions are labeled with colors, which are agnostic to left/right.
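In code terms, the colours are just one level of indirection between physical buttons and logical actions. A minimal sketch of what that indirection buys (Swift for brevity, since Squeak itself is Smalltalk; all names are hypothetical and the actions only indicative):

enum PhysicalButton { case left, middle, right }
enum LogicalButton { case red, yellow, blue }   // the Smalltalk colour names

// A right-handed default mapping; a left-handed user swaps .left and .right
// here, and no other code has to care.
let mapping: [PhysicalButton: LogicalButton] = [
    .left: .red, .right: .yellow, .middle: .blue
]

func handleClick(_ button: PhysicalButton) {
    guard let role = mapping[button] else { return }
    switch role {
    case .red:    print("select / World menu")
    case .yellow: print("context menu")
    case .blue:   print("halo / window controls")
    }
}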

Squeak is a Smalltalk tool. Obviously they feel compelled to abstract the buttons into something less specific.
It appears they've blurred the line between reality and code constructs.

This legacy is so 1970s; I hope that Pharo will fix this.

Contrary to what Damien said, the mouse was not invented at Xerox; rather, it was invented by a team at Stanford Research Institute led by Douglas Engelbart as part of their revolutionary oN-Line System (NLS).
The coloring of the buttons is an old, old convention, one that I personally tend not to pay much attention to. The odd thing about that image you posted, though, is that the right button ("yellow" in Smalltalk parlance) appears more green than yellow, at least to me. Does it appear that way to anyone else? Perhaps this is in part why the coloring convention was dropped elsewhere (and ought similarly to be abandoned in Squeak).

The paragraph that was quoted has been echoed among the answers as well: the "left-click" might or might not come from the button on the left - and the "right-click" might or might not come from the button on the right.
A pet peeve of mine is the talk of a "third button", which is almost always in the middle. The sequence is not 1-3-2 but 1-2-3. Perhaps that third button should be a color too....

Related

Where in Blender source code is code that draws transform rotate/resize black arrow visual indicators?

I've been trying to find the piece of code that draws, or initiates drawing of, the double black arrow visual indicators that show up when transform rotate is executed by pressing the R key (or resize with the S key).
I've been stepping through the code of the Rotate operator, various drawing functions, etc., with no success. I suppose I do not have a good enough picture of the code structure.
I would appreciate it very much if someone could point me in the right direction.
Does anyone at least know the right terminology to search for?
I'm using Blender 2.76 but I suppose insight into any version would be helpful.
(What I'm trying to do is locate the point in the code where the decision is made whether to draw the indicator or not. I explained the "problem" in this question. The goal is to make it show always.)
I have finally found the place, not by stepping through the code but by browsing it, lol!
The function that draws the indicators is drawHelpline(), and the check for the region being 'WINDOW' is done in helpline_poll(), both in transform.c.
The actual decision is made in wm_paintcursor_draw() in wm_draw.c, which calls helpline_poll() indirectly via pc->poll(C).
wm_paintcursor_draw() is called by wm_method_draw_triple(), which in turn is called from wm_draw_update(), which is called from WM_main().
That answers my question.
However, that does not solve my actual problem, because the active subwindow in these functions is the region from which the operator was executed: in my case, the ToolShelf! That is because cursor_warp(), which I use to move the mouse in my operator, changes only the mouse pointer position and does not update anything else (i.e. it does not update the active subwindow).
So, if I force helpline_poll() to return 1, it will draw the indicator only over the ToolShelf.
One workaround is to hack WM_cursor_warp() in wm_window.c to set win->screen->subwinactive to the correct window ID, but that is really an ugly hack and not directly related to the question I asked here.
The proper solution is to use a modal timer operator to let Blender update the active subwindow, as explained here.

Rhombus-framed buttons in Objective-C?

Basically, I am trying to edit not only the appearance of the button (that's the easy part) but also the frame that detects the touch, so that it is a rhombus rather than a square.
HTML example: http://irwinproject.com
I've tried CGAffineTransform; however, it doesn't allow me to make non-rectangular objects. Is there a way to skew it?
I'm just wondering if this is possible; if not, could someone point me in a direction? The only viable answer I've found resorts to something along the lines of SpriteKit.
I found this, however this implementation leaves dead spots on buttons where they overlap:
custom UIButton with skewed area in iPhone
Is there a way to message people on here? There was a gentleman who said he figured out how to do the transform but never posted his solution.
In order to modify the touch area of an oddly shaped button you could use a solution similar to OBShapedButton. I have used this particular project in the past myself for adjacent hexagon buttons and it worked perfectly. That said, you may have to modify it a bit to work with drawn shapes instead of images.
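If you'd rather not pull in a library, the core of that trick is simply overriding hit-testing. A minimal sketch in Swift (the class and the diamond path are my own illustration, not OBShapedButton's actual code):

import UIKit

// A hypothetical rhombus-shaped button: only touches inside the diamond
// register, so adjacent buttons can tile without rectangular dead spots.
class RhombusButton: UIButton {

    // A diamond whose corners sit at the midpoints of the bounds' edges.
    private var rhombusPath: UIBezierPath {
        let path = UIBezierPath()
        path.move(to: CGPoint(x: bounds.midX, y: bounds.minY))
        path.addLine(to: CGPoint(x: bounds.maxX, y: bounds.midY))
        path.addLine(to: CGPoint(x: bounds.midX, y: bounds.maxY))
        path.addLine(to: CGPoint(x: bounds.minX, y: bounds.midY))
        path.close()
        return path
    }

    // UIKit calls this during hit-testing; rejecting points outside the
    // diamond is what reshapes the touch area.
    override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        return rhombusPath.contains(point)
    }
}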

suggested algorithms to prevent "image persistence" on an LCD screen [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 9 years ago.
I have a Windows application which is displayed on kiosk machines and often runs continuously for weeks. The application is full screen. For reference, imagine a screen divided into two panels: the left one takes about 30% and the right fills the rest. The left panel is completely static and informational; the right panel has rotating video, image and text slides, animations, etc.
No surprise, the left panel can cause some "image persistence" (screen burn) issues. I am looking for suggested remedies on how to prevent the image persistence issue. I'm only concerned about LCD, not CRT.
Check out this "wiper" style solution; give it a few seconds and you'll see the line wipe across.
http://tinyurl.com/lprt6tr
I like this idea: it's simple, you just overlay it on top, and it will work anywhere.
But my question is: how much pixel color change is actually required to avoid image persistence? Do you need to make sure the pixel changes color at least once every minute, every 10 minutes, every hour? Does it need to rotate through a range of colors? Does it need to hold a state for a period of time?
Any insight about how often and what kind of color change is needed to actually prevent the problem is what I'm looking for.
Thanks.
Though I agree with Ken White, I guess a double wiper, with a white trail on the left and a black one on the right, would be sufficient, since it would do a hard set on the pixels.
As for the update frequency, you don't need to set it as frequent as you have in your fiddle. See this fiddle: http://jsfiddle.net/4w2K3/83/
Notice I added a border to the sweeper element and changed the way your code works a bit.
// Create the 1px-wide sweeper bar: black with a white left border,
// full document height, hidden until each sweep starts.
var $burnGuard = $('<div>').attr('id', 'burnGuard').css({
    'background-color': '#000',
    'width': '1px',
    'height': $(document).height() + 'px',
    'position': 'absolute',
    'top': '0px',
    'left': '0px',
    'border-left': 'solid 1px',
    'border-color': '#FFF',
    'display': 'none'
}).appendTo('body');

var delay = 10000,      // pause between sweeps (ms)
    scrollDelay = 1000; // duration of one sweep across the screen (ms)

function burnGuardAnimate() {
    // Reset to the left edge, sweep to the right edge, then hide again.
    $burnGuard.css({
        'left': '0px'
    }).show().animate({
        'left': $(window).width() + 'px'
    }, scrollDelay, function () {
        $(this).hide();
    });
    setTimeout(burnGuardAnimate, delay); // schedule the next sweep
}
setTimeout(burnGuardAnimate, delay);
Since screens are basically a giant array of coloured lights (simplified) organised in sets of three (red, green and blue), white activates all three lights at once, while black is no light at all. So sweeping this strong-contrast line across the screen helps. The timing, though, depends much more on the hardware you're using and on how often your screen changes naturally (ads, user interaction, etc.). I don't know much about the hardware side, but from personal experience with CRT monitors I would suggest running this every 15 minutes or so. It also depends on how long you plan to keep the same screen on the kiosk.

General considerations for NUI/touch interface

For the past few months I've been looking into developing a Kinect based multitouch interface for a variety of software music synthesizers.
The overall strategy I've come up with is to create objects, either programmatically or (if possible) algorithmically, to represent the various controls of the soft synth. These should have:
X position
Y position
Height
Width
MIDI output channel
MIDI data scaler (convert x/y coords to MIDI values)
Two strategies I've considered for algorithmic creation are XML description and somehow pulling stuff right off the screen (i.e. given a running program, find the x/y coords of all controls). I have no idea how to go about that second one, which is why I express it in such specific technical language ;). I could do some intermediate solution, like using mouse clicks on the corners of controls to generate an XML file. Another thing I could do, which I've seen frequently in Flash apps, is to put the screen size into a variable and use math to build all interface objects in terms of screen size. Note that it isn't strictly necessary to make the objects the same size as the onscreen controls, or to represent all onscreen objects (some are just indicators, not interactive controls).
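Whichever creation strategy you pick, the property list above maps naturally onto a small value type plus a scaling function. A rough sketch (Swift purely for illustration; the names, the normalized coordinates, and the 0-127 CC range are my assumptions about your design):

struct SynthControl {
    // Position and size in normalized 0...1 screen coordinates, so the
    // layout scales with any screen size (the trick you saw in Flash apps).
    let x, y, width, height: Double
    let midiChannel: UInt8
    let midiCC: UInt8          // which continuous controller this drives

    func contains(_ px: Double, _ py: Double) -> Bool {
        px >= x && px <= x + width && py >= y && py <= y + height
    }

    // Scale a vertical position inside the control to a 0-127 MIDI value,
    // with the top of the control mapping to 127.
    func midiValue(forY py: Double) -> UInt8 {
        let t = min(max((py - y) / height, 0), 1)
        return UInt8(((1 - t) * 127).rounded())
    }
}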
Other considerations:
Given (for now) two sets of x/y coords as input (left and right hands), what is my best option for using them? My first instinct is/was to create some kind of focus test, where if the x/y coords fall within an interface object's bounds, that object becomes active, and then becomes inactive only once they have fallen outside some other, slightly larger bounds for some period of time. The cheap solution I found was to use the left hand as the pointer/selector and the right as a controller, but it seems like I can do more. I have a few gesture solutions (hidden Markov models) I could screw around with. Not that they'd be easy to get working, exactly, but it's something I could see myself doing given sufficient incentive.
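That focus test is essentially hysteresis: acquire focus inside a tight boundary, release it only after the hand has sat outside a looser boundary for a while. A sketch of that logic, reusing the hypothetical SynthControl above:

import Foundation

// Focus with hysteresis: a control activates when the hand enters its
// bounds, and deactivates only after the hand has stayed outside an
// enlarged boundary for releaseDelay seconds.
final class FocusTracker {
    private(set) var active: SynthControl?
    private var outsideSince: Date?
    let slack = 0.05                      // extra margin around an active control
    let releaseDelay: TimeInterval = 0.5  // time outside before focus drops

    func update(x: Double, y: Double, controls: [SynthControl], now: Date = Date()) {
        if let c = active {
            let inside = x >= c.x - slack && x <= c.x + c.width + slack
                      && y >= c.y - slack && y <= c.y + c.height + slack
            if inside {
                outsideSince = nil        // still focused; reset the timer
                return
            }
            if outsideSince == nil { outsideSince = now }
            guard now.timeIntervalSince(outsideSince!) >= releaseDelay else { return }
            active = nil                  // lingered outside long enough
        }
        active = controls.first { $0.contains(x, y) }
    }
}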
So, to summarize, the problem is
represent the interface (necessary because the default interface always expects mouse input)
select a control
manipulate it using two sets of x/y coords (rotary/continuous controller) or, in the case of switches, preferably use a gesture to switch it without giving/taking focus.
Any comments, especially from people who have worked/are working in multitouch io/NUI, are greatly appreciated. Links to existing projects and/or some good reading material (books, sites, etc) would be a big help.
Woah, lots of stuff here. I worked on lots of NUI stuff during my time at Microsoft, so let's see what we can do...
But first, I need to get this pet peeve out of the way: you say "Kinect based multitouch". That's just wrong. Kinect inherently has nothing to do with touch (which is why you have the "select a control" challenge). The types of UI consideration needed for touch, body tracking, and mouse are totally different. For example, in touch UI you have to be very careful about resizing things based on screen size/resolution/DPI... regardless of the screen, fingers are always the same physical size and people have the same degree of physical accuracy, so you want your buttons and similar controls to always be roughly the same physical size. Research has found 3/4 of an inch to be the sweet spot for touchscreen buttons. This isn't so much of a concern with Kinect, though, since you aren't directly touching anything: accuracy is dictated not by finger size but by sensor accuracy and users' ability to precisely control finicky and lagging virtual cursors.
If you spend time playing with Kinect games, it quickly becomes clear that there are 4 interaction paradigms.
1) Pose-based commands. User strikes and holds a pose to invoke some application-wide command (usually bringing up a menu)
2) Hover buttons. User moves a virtual cursor over a button and holds still for a certain period of time to select the button
3) Swipe-based navigation and selection. User waves their hand in one direction to scroll a list and in another direction to select from the list
4) Voice commands. User just speaks a command.
There are other mouse-like ideas that have been tried by hobbyists (I haven't seen these in an actual game) but frankly they suck: 1) using one hand for the cursor and another hand to "click" where the cursor is, or 2) using the z-coordinate of the hand to determine whether to "click"
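Of the four, #2 is the easiest to prototype, since it's just a dwell timer. A sketch of the idea (hypothetical names, Swift for illustration):

import Foundation

// Dwell-to-select: a target fires once the cursor has hovered over it
// continuously for dwellTime seconds, then re-arms when the cursor leaves.
final class DwellSelector {
    let dwellTime: TimeInterval = 1.2
    private var hovered: Int?       // index of the target under the cursor
    private var hoverStart: Date?

    // Call once per tracking frame with the hit-tested target (nil = none);
    // returns a target's index the instant it is selected.
    func update(hitTarget: Int?, now: Date = Date()) -> Int? {
        if hitTarget != hovered {   // moved onto a new target, or off
            hovered = hitTarget
            hoverStart = hitTarget == nil ? nil : now
            return nil
        }
        guard let start = hoverStart, let target = hovered,
              now.timeIntervalSince(start) >= dwellTime else { return nil }
        hoverStart = nil            // fire once; re-arm on re-entry
        return target
    }
}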
It's not clear to me whether you are asking about how to make some existing mouse widgets work with Kinect. If so, there are some projects on the web that will show you how to control the mouse with Kinect input but that's lame. It may sound super cool but you're really not at all taking advantage of what the device does best.
If I were building a music synthesizer, I would focus on approach #3 - swiping. Something like Dance Central. On the left side of the screen, show a list of your MIDI controllers with some small visual indication of their status. Let the user swipe their left hand to scroll through and select a controller from this list. On the right side of the screen, show how you are tracking the user's right hand within some plane in front of their body. Now you're letting them use both hands at the same time, giving immediate visual feedback of how each hand is being interpreted, and not requiring them to be super precise.
P.S. I'd also like to give a shout out to Josh Blake's upcoming NUI book. It's good stuff. If you really want to master this area, go order a copy :) http://www.manning.com/blake/

Keyboard shortcuts in iOS?

Is it possible to capture command-key sequences in 3rd party iPad/iPhone apps?
Long version:
On my excellent journey of discovery vis-à-vis my new iPad with its gleaming keyboard dock, I discovered, much to my joy, that when editing text in standard-issue text views, commands ranging from ⌘C/⌘V for copy and paste to ^A, ^B, ^E and friends for line and character jumping work.
So far so good, yeah? The problem is, this enthralling behaviour seems limited to text fields, and more specifically, standard-issue text fields. What I would really like is to capture events like these for my own use.
An issue I often find with a lot of apps is that they tend to either be close to useless, or at least cumbersome, without the keyboard dock (e.g. the iWork Suite), or close to useless, or at least cumbersome, with the keyboard dock (most other applications that don't rely heavily on text input, but rather touch gestures [that is to say, most other applications period]).
Many games, for instance Civilization Revolution and similar, would benefit massively from the simple addition of the ability to use the arrow keys to move units and the enter key to end the turn.
So the question, then, as stated above: Is there a way to capture and respond to these events in order to offer an alternative to touch commands for those that desire this and have the hardware?
Disclaimer: I have no intention of developing applications that rely exclusively on keyboard input, of course, and nor should anyone else. The touch interface is paramount. It's just not always completely practical.
The only way (that I know of) to get input from the keyboard in iOS is using the UITextInput protocol. Unfortunately, the protocol doesn't give you the raw keys that were pressed, and instead sends you messages like "insert this string" and "move the caret to this position." So in order to know that an arrow key was pressed would require you to do some digging.
As for shortcuts with modifier keys, like copy/paste or undo/redo, Apple only seems to support the basics and doesn't allow you to create custom ones. They use UIResponder's -canPerformAction:withSender: method and undoManager property.
So if I were writing a game and wanted to take advantage of the keyboard, I would subclass UIResponder and have it conform to the UITextInput protocol, and then make it the first responder. This, however, will probably bring up the software keyboard if a physical one is not present.
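A sketch of that shape, using the smaller UIKeyInput protocol (which UITextInput extends), since plain keys arrive as inserted text anyway; the class is hypothetical:

import UIKit

// Hypothetical hidden responder that turns hardware-keyboard input into
// game commands. Note: arrow keys never reach insertText(_:); catching them
// would need the full UITextInput "move the caret" digging described above.
class KeyCatcherView: UIView, UIKeyInput {
    var hasText: Bool { return false }

    // Must be first responder to receive key input; this is also what can
    // summon the software keyboard when no hardware keyboard is attached.
    override var canBecomeFirstResponder: Bool { return true }

    func insertText(_ text: String) {
        switch text {
        case "\n": print("return: end turn")
        case " ":  print("space")
        default:   print("typed: \(text)")
        }
    }

    func deleteBackward() { print("backspace") }
}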
My own disclaimer: I haven't done all the hard work to use UITextInput in a way it wasn't meant to be used, so I don't know how feasible it would be to actually get it working. And I don't really want to. Rather, let's all file bug reports to get Apple to create an API that allows us to get more precise input from the keyboard.
In iOS 7, the UIResponder property keyCommands and the class UIKeyCommand were added to support shortcuts. Simply override keyCommands to return an array of UIKeyCommand objects and you should be good to go.
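For instance, in a game's view controller it might look like this (the selectors are my own invention; the inputUpArrow constant is part of UIKeyCommand):

import UIKit

class GameViewController: UIViewController {
    // Key commands are delivered along the responder chain, so the
    // controller must be able to become first responder.
    override var canBecomeFirstResponder: Bool { return true }

    override var keyCommands: [UIKeyCommand]? {
        return [
            UIKeyCommand(input: UIKeyCommand.inputUpArrow, modifierFlags: [],
                         action: #selector(moveUnitUp)),
            UIKeyCommand(input: "\r", modifierFlags: [],
                         action: #selector(endTurn))
        ]
    }

    @objc func moveUnitUp() { print("move selected unit north") }
    @objc func endTurn()    { print("end turn") }
}

You still have to actually make the controller first responder (e.g. call becomeFirstResponder() in viewDidAppear(_:)) before the commands arrive.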
Worth mentioning: Though the details are currently under NDA, Apple is adding support for keyboard shortcuts/events in iOS 7.
I suspect it will work similarly to how it works on Mac OS X, which is briefly described in this answer to a similar question.