What are the ASCII values of up, down, left, right? - hardware

What are the ASCII values of the arrow keys? (up/down/left/right)

In short:
left arrow: 37
up arrow: 38
right arrow: 39
down arrow: 40

There are no real ASCII codes for these keys as such; you will need to check out the scan codes for these keys, known as Make and Break key codes, as per HelpPC's information. The reason the codes sound 'ASCII' is that the key codes are handled by the old BIOS interrupt 0x16 and keyboard interrupt 0x9.
              Normal mode           Num Lock on
              Make     Break        Make           Break
Down arrow    E0 50    E0 D0        E0 2A E0 50    E0 D0 E0 AA
Left arrow    E0 4B    E0 CB        E0 2A E0 4B    E0 CB E0 AA
Right arrow   E0 4D    E0 CD        E0 2A E0 4D    E0 CD E0 AA
Up arrow      E0 48    E0 C8        E0 2A E0 48    E0 C8 E0 AA
Hence, looking at the codes that follow E0 in the Make codes (0x50, 0x4B, 0x4D and 0x48 respectively) is where the confusion arises: key codes get treated as if they were 'ASCII'. The answer is: don't, because the platform and the OS vary. Under Windows those keys have virtual key codes, not necessarily the same as the BIOS codes: VK_UP, VK_DOWN, VK_LEFT, VK_RIGHT. These are defined in windows.h which, as I recall, lives in the SDK's include folder.
Do not rely on the key codes having the same 'ASCII' values shown here, because the operating system reprograms the keyboard handling however it sees fit. That is to be expected: the BIOS code is 16-bit, and modern operating systems run in 32-bit protected mode, so the codes from the BIOS are no longer valid once the OS takes over.
Hence the original keyboard interrupt 0x9 and BIOS interrupt 0x16 handlers are effectively discarded: once the protected-mode OS starts loading, it overwrites that area of memory and replaces the handlers with its own protected-mode handlers for those keyboard scan codes.
Here is a code sample from the old days of DOS programming, using Borland C v3:
#include <bios.h>

int getKey(void)
{
    int key, lo, hi;
    key = bioskey(0);            /* wait for a keypress                     */
    lo  = key & 0x00FF;          /* low byte: ASCII code (0 for special keys) */
    hi  = (key & 0xFF00) >> 8;   /* high byte: scan code                    */
    return (lo == 0) ? hi + 256 : lo;
}
With this routine, up and down return 328 and 336 respectively (I do not have the codes for left and right to hand; this is in my old cookbook!). The actual scan code ends up in the hi variable; keys other than A-Z and 0-9 report an ASCII code of 0 in lo via the bioskey routine. The reason 256 is added is that when lo is 0, the scan code is in hi, and adding 256 to it keeps it from being confused with the 'ASCII' codes.
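For completeness, a hedged usage sketch built on the getKey() routine above (DOS/Borland only; the values 328 and 336 for up and down come from the text, while 331 and 333 for left and right are assumed from the same 256 + scan code scheme, 0x4B and 0x4D):

#include <stdio.h>

int getKey(void);  /* the routine shown above */

int main(void)
{
    int k = getKey();
    if (k == 328)      printf("up\n");
    else if (k == 336) printf("down\n");
    else if (k == 331) printf("left\n");   /* assumed: 256 + 0x4B */
    else if (k == 333) printf("right\n");  /* assumed: 256 + 0x4D */
    return 0;
}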

Really, the answer to this question depends on what operating system and programming language you are using. There is no "ASCII code" per se. The operating system detects that you hit an arrow key and triggers an event that programs can capture. For example, on modern Windows machines you would get a WM_KEYUP or WM_KEYDOWN event, which usually carries a 16-bit value to determine which key was pushed.
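For instance, a minimal Win32 sketch (assuming you already have a standard window procedure; the VK_* constants are the real virtual-key codes from windows.h, everything else is illustrative):

#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_KEYDOWN)
    {
        switch (wParam)
        {
        case VK_UP:    /* handle up    */ break;
        case VK_DOWN:  /* handle down  */ break;
        case VK_LEFT:  /* handle left  */ break;
        case VK_RIGHT: /* handle right */ break;
        }
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}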

The ASCII values of the arrow keys are:
Up key    - 224, then 72
Down key  - 224, then 80
Left key  - 224, then 75
Right key - 224, then 77
Each of these has two integer values, because they are special keys, as opposed to the code for $, which is simply 36. These two-byte special keys usually have either 224 or 0 as the first value; the same applies to the function keys and the Delete key on Windows.
EDIT: Looking back, these may actually be Unicode values rather than plain ASCII, but they do work.

If you're programming in OpenGL, use GLUT. The following page should help: http://www.lighthouse3d.com/opengl/glut/index.php?5
GLUT_KEY_LEFT Left function key
GLUT_KEY_RIGHT Right function key
GLUT_KEY_UP Up function key
GLUT_KEY_DOWN Down function key
void processSpecialKeys(int key, int x, int y) {
    switch (key) {
        case GLUT_KEY_F1:
            red = 1.0; green = 0.0; blue = 0.0;
            break;
        case GLUT_KEY_F2:
            red = 0.0; green = 1.0; blue = 0.0;
            break;
        case GLUT_KEY_F3:
            red = 0.0; green = 0.0; blue = 1.0;
            break;
    }
}

You can check the values by compiling and running this small C++ program.
#include <iostream>
#include <conio.h>

int main()
{
    while (true)
    {
        int show = getch();   // arrow keys produce two reads: 224 (or 0), then the key code
        std::cout << show;
    }
}

If you're working with terminals, as I was when I found this in a search, then you'll find that the arrow keys send the corresponding cursor movement escape sequences.
So in this context,
UP = ^[[A
DOWN = ^[[B
RIGHT = ^[[C
LEFT = ^[[D
Here ^[ is the symbol for escape; in bytes you use the ASCII value of escape, which is 27, followed by the values for the bracket and the letter.
In my case, using a serial connection to communicate these directions, for Up arrow, I sent the byte sequence 27,91,65 for ^[, [, A
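If you want to detect these sequences in code, here is a small sketch for a POSIX terminal (assumptions: <termios.h> is available and the terminal emits ESC [ A/B/C/D; some terminals send ESC O A and friends instead):

#include <cstdio>
#include <termios.h>
#include <unistd.h>

int main()
{
    termios oldt, newt;
    tcgetattr(STDIN_FILENO, &oldt);
    newt = oldt;
    newt.c_lflag &= ~(ICANON | ECHO);           // read byte-by-byte, no echo
    tcsetattr(STDIN_FILENO, TCSANOW, &newt);

    unsigned char c;
    while (read(STDIN_FILENO, &c, 1) == 1) {
        if (c == 27) {                          // ESC (27)
            unsigned char b1, b2;
            if (read(STDIN_FILENO, &b1, 1) == 1 && b1 == '[' &&   // '[' (91)
                read(STDIN_FILENO, &b2, 1) == 1) {
                if (b2 == 'A') printf("up\n");
                else if (b2 == 'B') printf("down\n");
                else if (b2 == 'C') printf("right\n");
                else if (b2 == 'D') printf("left\n");
            }
        } else if (c == 'q') {
            break;                              // 'q' quits
        }
    }
    tcsetattr(STDIN_FILENO, TCSANOW, &oldt);    // restore the terminal
    return 0;
}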

You can use GLUT's special-key callback to handle the navigation keys in your program. Below is sample code for it.
void Specialkey(int key, int x, int y)
{
    switch (key)
    {
        case GLUT_KEY_UP:
            /* Do whatever you want */
            break;
        case GLUT_KEY_DOWN:
            /* Do whatever you want */
            break;
        case GLUT_KEY_LEFT:
            /* Do whatever you want */
            break;
        case GLUT_KEY_RIGHT:
            /* Do whatever you want */
            break;
    }
    glutPostRedisplay();
}
Add this to your main function
glutSpecialFunc(Specialkey);
Hope this helps solve the problem!

The ASCII codes for the arrow characters (the glyphs the old IBM OEM font displays for control codes 24-27, not what the arrow keys actually send) are the following:
↑ 24
↓ 25
→ 26
← 27

I got stuck on this question and could not find a good solution, so I decided to tinker with the MinGW compiler I have. I used C++ and the getch() function from the <conio.h> header and pressed the arrow keys to see what values they produce. It turns out they come through as 224 followed by 72, 224 followed by 77, 224 followed by 80, and 224 followed by 75 for the up, right, down and left keys respectively. But it does not work quite as you would expect: you have to ignore the 224 part and test only the second value for the software to recognize the correct key. And, as you might guess, 72, 77, 80 and 75 are also used by other characters, but this works for me and I hope it works for you as well. If you want to run the C++ code and find the values for yourself, run this and press Enter to get out of the loop:
#include <iostream>
#include <conio.h>
using namespace std;

int main()
{
    int x;
    while (1) {
        x = (int)getch();
        if (x == 13) {          // Enter ends the loop
            break;
        }
        cout << endl << endl << x;
    }
    return getch();             // wait for one more key before exiting
}
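Building on that, here is a hedged sketch of the usual two-read pattern on Windows with <conio.h>: treat 0 or 224 as an "extended key" prefix and map the second value to the arrow keys.

#include <iostream>
#include <conio.h>

int main()
{
    while (true) {
        int c = getch();
        if (c == 13) break;                     // Enter exits
        if (c == 0 || c == 224) {               // extended-key prefix
            int c2 = getch();                   // the actual key code
            switch (c2) {
                case 72: std::cout << "up\n";    break;
                case 80: std::cout << "down\n";  break;
                case 75: std::cout << "left\n";  break;
                case 77: std::cout << "right\n"; break;
            }
        }
    }
    return 0;
}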

If you came here for JavaScript, to find out which key was pressed:
JavaScript has an addEventListener method with the keydown event (the onkeydown handler is much the same), which gives you the pressed key. The event object has some useful properties:
.code - returns a string naming the key that was pressed, such as ArrowUp, ArrowDown, KeyW, and so on.
.keyCode - this property is deprecated, but you can still use it. It returns an integer; for both lowercase a and uppercase A it is 65, essentially an ASCII-like, case-insensitive code.
ArrowLeft: 37, ArrowUp: 38, ArrowRight: 39 and ArrowDown: 40
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  </head>
  <body>
    <h1 id="show">Press Any Button</h1>
    <!-- JavaScript code starts here: see the magic by pressing any button -->
    <script>
      document.addEventListener('keydown', (key) => {
        let keycode = key.keyCode;
        document.getElementById('show').innerText = keycode;
        /*
        let keyString = key.code;
        switch (keyString) {
          case "ArrowLeft":
            console.log("Left Key is Pressed");
            break;
          case "ArrowUp":
            console.log("Up Key is Pressed");
            break;
          case "ArrowRight":
            console.log("Right Key is Pressed");
            break;
          case "ArrowDown":
            console.log("Down Key is Pressed");
            break;
          default:
            console.log("Any Other Key is Pressed");
            break;
        }
        */
      });
    </script>
  </body>
</html>

Can't address every operating system/situation, but for AppleScript on a Mac, it is as follows:
LEFT: 28
RIGHT: 29
UP: 30
DOWN: 31
tell application "System Events" to keystroke (ASCII character 31) --down arrow

Gaa! Go to asciitable.com. The arrow keys are the control equivalents of the H, J, K and L keys. That is, in vi, create a big block of text; note that you can move around in that text using the H, J, K and L keys. The arrow keys are going to be ^H, ^J, ^K, ^L.
At asciitable.com, find "K" in the third column, then look at the same row in the first column to find the matching control code ("VT" in this case).

This is what I get:
Left - 19
Up - 5
Right - 4
Down - 24
Works in Visual FoxPro.


Ambiguous process calcChecksum

CONTEXT
I'm using code written to work with a GPS module that connects to the Arduino through serial communication. The module starts each packet with a header (0xB5, 0x62), continues with the information you requested, and ends with two checksum bytes, CK_A and CK_B. I don't understand the code that calculates that checksum. There is more information about the checksum algorithm (the 8-bit Fletcher algorithm) in the module protocol (https://www.u-blox.com/sites/default/files/products/documents/u-blox7-V14_ReceiverDescriptionProtocolSpec_%28GPS.G7-SW-12001%29_Public.pdf), page 74 (87 with index).
MORE INFO
I just wanted to understand the code; it works fine. In the UBX protocol document I mentioned, there is also a piece of code that explains how it works (it isn't written in C++).
struct NAV_POSLLH {
    // Here goes the struct
};

NAV_POSLLH posllh;

void calcChecksum(unsigned char* CK) {
    memset(CK, 0, 2);
    for (int i = 0; i < (int)sizeof(NAV_POSLLH); i++) {
        CK[0] += ((unsigned char*)(&posllh))[i];
        CK[1] += CK[0];
    }
}
In the link you provided, you can find a reference to RFC 1145, which contains that 8-bit Fletcher algorithm as well and explains:
It can be shown that at the end of the loop A will contain the 8-bit
1's complement sum of all octets in the datagram, and that B will
contain (n)*D[0] + (n-1)*D[1] + ... + D[n-1].
n = sizeof byte D[];
Quote adjusted to C syntax
Try it with a couple of bytes, pen and paper, and you'll see :)
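If you prefer to let the machine do the pen-and-paper work, here is a tiny stand-alone sketch of the same two-accumulator loop run over a couple of made-up example bytes:

#include <cstdio>

int main()
{
    unsigned char data[] = { 0x01, 0x02 };     // example payload bytes
    unsigned char ckA = 0, ckB = 0;
    for (unsigned char byte : data) {
        ckA += byte;                           // CK_A: running sum of the bytes
        ckB += ckA;                            // CK_B: running sum of CK_A values
    }
    // After 0x01: ckA = 0x01, ckB = 0x01
    // After 0x02: ckA = 0x03, ckB = 0x04
    printf("CK_A = %02X, CK_B = %02X\n", ckA, ckB);
    return 0;
}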

What numbering system is used by the terminal when outputting to a script

I'm currently learning C programming, and wrote a script similar to one I've written in Python before. My goal is to learn how to pass input to an application and have it process the data I pass it.
The problem I'm having now is the feedback my application is giving me. I wrote a simple application to read keyboard input and give 1 of 3 responses based on what input I give it. The code is as follows:
/* Input test. */
#include <stdio.h>
#include <stdlib.h>

char input;
const int option_a = 1;
const int option_b = 2;

int main()
{
    printf("Lets get started! a for on or b for off?\n");
    while (1)
    {
        input = getchar();
        if (input == option_a)
        {
            printf("We're on.!\n");
        }
        else if (input == option_b)
        {
            printf("Off we go.\n");
        }
        else
        {
            printf("Excuse me, but I didn't get that.\n");
        }
    }
    return 0;
}
Simply put, option_a is me pressing the 1 key on the keyboard, and option_b is the 2 key. When I press these keys, or any key for that matter, the application always goes to the "else" portion of the decision tree. Given that, it's clear to me that, for lack of a better term, my application isn't seeing my input as the decimal number 1 or 2.
From the terminal, what is the structure of the data I'm sending to my application? Or, simply put, what does my 1 or 2 "look" like to my application?
When you take input with getchar() you get a char value, but you are comparing it with an integer. Instead of comparing with the integer, compare the input with the corresponding characters. For example, use
const char option_a = '1';
const char option_b = '2';
I believe you want the American Standard Code for Information Interchange (ASCII) table: '0' = 48, '1' = 49, '2' = 50, and so on. I had the same problem while working with the Arduino's serial monitor, and it is all covered by the same standard.
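A minimal sketch of the corrected program, assuming the only change is comparing against character literals ('1' arrives as byte 49, '2' as byte 50):

#include <cstdio>

int main()
{
    const char option_a = '1';   // ASCII 49
    const char option_b = '2';   // ASCII 50
    int input = std::getchar();  // one byte from the terminal
    if (input == option_a)
        std::printf("We're on!\n");
    else if (input == option_b)
        std::printf("Off we go.\n");
    else
        std::printf("Excuse me, but I didn't get that.\n");
    return 0;
}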

Arduino - Reading Serial Data

I am trying to send information to an Arduino Mega 2560 using serial data in order to control both LED Pixel Strips and conventional christmas light strings. I am also using VIXEN lighting software.
I can control one strip of LED pixels from Vixen using this code in the Arduino loop() function;
Serial.readBytes((char*)leds, NUM_LEDS * 3);//buffer to store things in, length (number of bytes to read)
FastLED.show();//refresh the pixel LED's
I can also control a relay (or multiple relays) for the conventional lights using this code;
#define CHANNEL_01 7     // Pin #7 on the Arduino Mega board
#define CHANNEL_COUNT 1
// Define baud rate. This figure must match that of your profile configuration in Vixen!
#define BAUD_RATE 9600

int channels[] = {CHANNEL_01};
int incomingByte[16];

void setup()
{
    // Begin serial communication
    Serial.begin(BAUD_RATE);
    // Set up each channel as an output
    for (int i = 0; i < CHANNEL_COUNT; i++)
    {
        pinMode(channels[i], OUTPUT);
    }
}
void loop()
{
    if (Serial.available() >= CHANNEL_COUNT)
    {
        // Read data from Vixen, store in array
        for (int i = 0; i < CHANNEL_COUNT; i++)
        {
            incomingByte[i] = Serial.read();
        }
        // Write data from array to a pin on the Arduino
        for (int i = 0; i < CHANNEL_COUNT; i++)
        {
            digitalWrite(channels[i], incomingByte[i]);
        }
    }
}
The problem is that I cannot do both of these things. I can either assign the 150 bytes of LED data to the LED strip and it works fine, OR, I can run the relays and they work fine. I have not been able to figure out how to chop up the bytes from the serial data and send it to the appropriate pin. For example, maybe I want to control a relay using pin 7 and a strip of LED pixels using pin 6.
The strip of pixel LEDs consumes the first 150 bytes of the serial data. But how can I get the next byte, the one that controls the relay that turns the conventional Christmas light string on and off? That byte would be the 151st in the serial data. Is there a way to specify the 151st byte? Serial.read() does nothing more than read the first byte (I think). How can a user iterate through the bytes of serial data and select only the ones they want?
When you do Serial.readBytes((char*)leds, NUM_LEDS * 3); you read the first 150 bytes, assuming you have 50 LEDs. So the next byte pending in the serial buffer would be the 151st byte; therefore, if you call Serial.read() after Serial.readBytes((char*)leds, NUM_LEDS * 3); you will get that byte.
Note that you can use one byte to control 8 relays if you want, one bit per relay, by using bitRead().
An example.
bool relayState[8];

Serial.readBytes((char*)leds, NUM_LEDS * 3);   // first 150 bytes: LED data
byte relays = Serial.read();                   // 151st byte: relay bits

for (byte i = 0; i < 8; i++) {
    relayState[i] = bitRead(relays, i);        // one bit per relay
}
for (byte i = 0; i < 8; i++) {
    digitalWrite(relay[i], relayState[i]);     // relay[] holds the relay pin numbers
}
Then a value of 1 would turn on relay 0, a value of 2 would turn on relay 1, a value of 3 would turn on relay 0 and relay 1, etc.
To solve this problem I bought an Arduino Uno to run the standard (non-LED) lights separately from the LED lights, which run off an Arduino MEGA 2560. The non-LED lights are run by one controller in the Vixen Lights software. The controller has 4 outputs (channels), one for each of the non-LED light sets; each channel controls one line on a solid state relay. The Arduino Uno runs the relays using this code:
#define PIN1 7               // Pin number seven
#define PIN2 6               // Pin number six
#define PIN3 5               // Pin number five
#define PIN4 4               // Pin number four
#define BAUD_RATE 9600       // just running 4 relay switches so we don't need much speed
#define CHANNEL_COUNT 4      // there are 4 channels coming from the Vixen controller

int bt[4];                   // a variable we will use in the loop section of code
int x;                       // another variable we will use in the loop section of code

void setup() {
    delay(1000);             // a little delay to give the Uno some time on startup
    Serial.begin(BAUD_RATE); // set the baud rate of the serial stream
    pinMode(PIN1, OUTPUT);   // set the four pins on the Arduino Uno to output
    pinMode(PIN2, OUTPUT);
    pinMode(PIN3, OUTPUT);
    pinMode(PIN4, OUTPUT);
}

void loop() {
    if (Serial.available() >= CHANNEL_COUNT) {
        for (x = 0; x < CHANNEL_COUNT; x++) { // for every channel...
            bt[x] = Serial.read();            // read a byte from the serial buffer and store it for later use
        }
        digitalWrite(PIN1, bt[0]);  // tell the pins on the Arduino what to do...
        digitalWrite(PIN2, bt[1]);  // using the array of integers that holds the byte values from the serial stream
        digitalWrite(PIN3, bt[2]);
        digitalWrite(PIN4, bt[3]);
    }
}
The LED's run off a second controller in the Vixen Lights software. I have two 12 volt, 50 pixel LED strips of type WS2811. The Arduino uses the FastLED library that can be downloaded for free from FastLED.io. What I found was that there is one byte of garbage data that comes in the serial stream for the LED strips and I had to move past that byte of data in order for the LED's to receive the correct bytes of data to control their color, position etc. I use this code to run my LED's off the Arduino MEGA 2560;
#include <FastLED.h>     // include the FastLED library in the Arduino project

#define LED_PIN1 7       // run one strip of 50 LEDs off of pin 7 on the MEGA
#define LED_PIN2 6       // run a second strip of 50 LEDs off of pin 6 on the MEGA
#define BAUD_RATE 115200
#define NUM_LEDS 50

// It took me some time to figure out that my two pixel strips are set
// to different color codes. Why? I don't know, but they are.
#define RGB_ORDER RGB    // one of my pixel strips uses RGB color codes
#define BRG_ORDER BRG    // the second strip came from the factory with BRG color codes
#define LED_TYPE WS2811  // what type of LEDs are they? Mine are WS2811, yours may be different.

// An array to hold the FastLED CRGB code for each of the 50 pixels in the first strip.
CRGB leds1[NUM_LEDS];
// Another array to hold the FastLED CRGB codes for each of the 50 pixels in the second strip.
CRGB leds2[NUM_LEDS];

int g;                   // a variable we will use in the loop section
int bufferGarbage[1];    // THIS IS THE KEY TO MAKING THIS WHOLE THING WORK. We need to get past
                         // the first mystery byte that is sent to the Arduino MEGA from the
                         // Vixen Lights software, so we create a variable for it here.

void setup() {
    delay(1000);
    Serial.begin(BAUD_RATE);
    pinMode(LED_PIN1, OUTPUT);  // set our pins to output. LED_PIN1 is pin 7 on the Arduino board.
    pinMode(LED_PIN2, OUTPUT);  // set our pins to output. LED_PIN2 is pin 6 on the Arduino board.
    // This line sets up the first pixel strip to run using FastLED
    FastLED.addLeds<LED_TYPE, LED_PIN1, RGB_ORDER>(leds1, NUM_LEDS).setCorrection(TypicalLEDStrip);
    // This line sets up the second pixel strip to run using FastLED
    FastLED.addLeds<LED_TYPE, LED_PIN2, BRG_ORDER>(leds2, NUM_LEDS).setCorrection(TypicalLEDStrip);
}

void loop() {
    if (Serial.available() >= 0) { // if there is data in the serial stream
        // bufferGarbage captures the first byte of garbage that comes across.
        // Without this the LEDs are out of sync. In my case, if I didn't capture this
        // byte, the first pixel on my second LED strip would show the color code that
        // should be on the last pixel of the first strip. We don't do anything with
        // this byte, but we need to read it from the serial stream so we can move to
        // the next byte in the stream.
        bufferGarbage[0] = Serial.read();
        // Then we populate the leds1 array so FastLED can tell the pixels what to do.
        // We have 50 pixels in the strip and each pixel has a CRGB property that uses
        // a red, a green, and a blue attribute. So for each LED we need to capture 3
        // bytes from the serial stream. 50 LEDs * 3 bytes each means we need to read
        // 150 bytes of data from the serial stream.
        for (g = 0; g < NUM_LEDS; g++) {
            Serial.readBytes((char*)&leds1[g], 3);
        }
        for (g = 0; g < NUM_LEDS; g++) { // then read the next 150 bytes for the second strip of LEDs
            Serial.readBytes((char*)&leds2[g], 3);
        }
        FastLED.show();  // then we tell FastLED to show the pixels!
    }
}

TLS 1.0 - calculating the Finished message MAC

I'm having trouble calculating the MAC of the Finished message. The RFC gives the formula
HMAC_hash(MAC_write_secret, seq_num + TLSCompressed.type +
          TLSCompressed.version + TLSCompressed.length +
          TLSCompressed.fragment);
But the TLSCompressed (TLSPlaintext in this case, because no compression is used) does not contain version information (hex dump):
14 00 00 0c 2c 93 e6 c5 d1 cb 44 12 bd a0 f9 2d
the first byte is the tlsplaintext.type, followed by uint24 length.
The full message, with the MAC and padding appended and before encryption is
1400000c2c93e6c5d1cb4412bda0f92dbc175a02daab04c6096da8d4736e7c3d251381b10b
I have tried to calculate the HMAC with the following parameters (complying with the RFC), but it does not work:
uint64 seq_num
uint8 tlsplaintext.type
uint8 tlsplaintext.version_major
uint8 tlsplaintext.version_minor
uint16 tlsplaintext.length
opaque tlsplaintext.fragment
I have also tried omitting the version and using a uint24 length instead. No luck.
My hmac_hash() function cannot be the problem because it has worked thus far. I am also able to compute the verify_data and verify it.
Because this is the first message sent under the new connection state, the sequence number is 0.
So, what exactly are the parameters for the calculation of the MAC for the finished message?
Here's the relevant source from Forge (JS implementation of TLS 1.0):
The HMAC function:
var hmac_sha1 = function(key, seqNum, record) {
  /* MAC is computed like so:
  HMAC_hash(
    key, seqNum +
    TLSCompressed.type +
    TLSCompressed.version +
    TLSCompressed.length +
    TLSCompressed.fragment)
  */
  var hmac = forge.hmac.create();
  hmac.start('SHA1', key);
  var b = forge.util.createBuffer();
  b.putInt32(seqNum[0]);
  b.putInt32(seqNum[1]);
  b.putByte(record.type);
  b.putByte(record.version.major);
  b.putByte(record.version.minor);
  b.putInt16(record.length);
  b.putBytes(record.fragment.bytes());
  hmac.update(b.getBytes());
  return hmac.digest().getBytes();
};
The function that creates the Finished record:
tls.createFinished = function(c) {
  // generate verify_data
  var b = forge.util.createBuffer();
  b.putBuffer(c.session.md5.digest());
  b.putBuffer(c.session.sha1.digest());

  // TODO: determine prf function and verify length for TLS 1.2
  var client = (c.entity === tls.ConnectionEnd.client);
  var sp = c.session.sp;
  var vdl = 12;
  var prf = prf_TLS1;
  var label = client ? 'client finished' : 'server finished';
  b = prf(sp.master_secret, label, b.getBytes(), vdl);

  // build record fragment
  var rval = forge.util.createBuffer();
  rval.putByte(tls.HandshakeType.finished);
  rval.putInt24(b.length());
  rval.putBuffer(b);
  return rval;
};
The code to handle a Finished message is a bit lengthier and can be found here. I see that I have a comment in that code that sounds like it might be relevant to your problem:
// rewind to get full bytes for message so it can be manually
// digested below (special case for Finished messages because they
// must be digested *after* handling as opposed to all others)
Does this help you spot anything in your implementation?
Update 1
Per your comments, I wanted to clarify how TLSPlainText works. TLSPlainText is the main "record" for the TLS protocol. It is the "wrapper" or "envelope" for content-specific types of messages. It always looks like this:
struct {
  ContentType type;
  ProtocolVersion version;
  uint16 length;
  opaque fragment[TLSPlaintext.length];
} TLSPlaintext;
So it always has a version. A Finished message is a type of handshake message. All handshake messages have a content type of 22. A handshake message looks like this:
struct {
  HandshakeType msg_type;
  uint24 length;
  body
} Handshake;
A Handshake message is yet another envelope/wrapper for other messages, like the Finished message. In this case, the body will be a Finished message (HandshakeType 20), which looks like this:
struct {
  opaque verify_data[12];
} Finished;
To actually send a Finished message, you have to wrap it up in a Handshake message envelope, and then like any other message, you have to wrap it up in a TLS record (TLSPlainText). The ultimate result looks/represents something like this:
struct {
  ContentType type = 22;
  ProtocolVersion version = <major, minor>;
  uint16 length = <length of fragment>;
  opaque fragment = <struct {
    HandshakeType msg_type = 20;
    uint24 length = <length of finished message>;
    body = <struct {
      opaque verify_data[12];
    } Finished>
  } Handshake>
} TLSPlainText;
Then, before transport, the record may be altered. You can think of these alterations as operations that take a record and transform its fragment (and fragment length). The first operation compresses the fragment. After compression you compute the MAC, as described above and then append that to the fragment. Then you encrypt the fragment (adding the appropriate padding if using a block cipher) and replace it with the ciphered result. So, when you're finished, you've still got a record with a type, version, length, and fragment, but the fragment is encrypted.
So, just so we're clear, when you're computing the MAC for the Finished message, imagine passing in the above TLSPlainText (assuming there's no compression as you indicated) to a function. This function takes this TLSPlainText record, which has properties for type, version, length, and fragment. The HMAC function above is run on the record. The HMAC key and sequence number (which is 0 here) are provided via the session state. Therefore, you can see that everything the HMAC function needs is available.
In any case, hopefully this better explains how the protocol works and that will maybe reveal what's going wrong with your implementation.
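To make the MAC input concrete, here is a hedged C++ sketch of assembling the exact byte sequence the HMAC is computed over in TLS 1.0 (hmac_sha1 is a placeholder for whatever HMAC implementation you already have; for the Finished record the fragment is the full 16-byte Handshake message, i.e. msg_type 20, the uint24 length, then verify_data):

#include <cstdint>
#include <vector>

// Placeholder: use your existing HMAC-SHA1 implementation with MAC_write_secret as the key.
std::vector<uint8_t> hmac_sha1(const std::vector<uint8_t>& key,
                               const std::vector<uint8_t>& data);

std::vector<uint8_t> mac_input(uint64_t seq_num,
                               uint8_t type,         // 22 for handshake records
                               uint8_t ver_major,    // 3 for TLS 1.0
                               uint8_t ver_minor,    // 1 for TLS 1.0
                               const std::vector<uint8_t>& fragment)
{
    std::vector<uint8_t> b;
    for (int i = 7; i >= 0; --i)                  // uint64 sequence number, big-endian
        b.push_back((seq_num >> (8 * i)) & 0xFF);
    b.push_back(type);                            // TLSCompressed.type
    b.push_back(ver_major);                       // TLSCompressed.version
    b.push_back(ver_minor);
    b.push_back((fragment.size() >> 8) & 0xFF);   // uint16 TLSCompressed.length
    b.push_back(fragment.size() & 0xFF);
    b.insert(b.end(), fragment.begin(), fragment.end());
    return b;                                     // feed this to hmac_sha1(...)
}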

Detect a double key press in AutoHotkey

I'd like to trigger an event in AutoHotkey when the user double "presses" the esc key. But let the escape keystroke go through to the app in focus if it's not a double press (say within the space of a second).
How would I go about doing this?
I've come up with this so far, but I can't work out how to check for the second escape key press:
~Esc::
Input, TextEntry1, L1 T1
endKey=%ErrorLevel%
if( endKey != "Timeout" )
{
; perform my double press operation
WinMinimize, A
}
return
Found the answer in the AutoHotkey documentation!
; Example #4: Detects when a key has been double-pressed (similar to double-click).
; KeyWait is used to stop the keyboard's auto-repeat feature from creating an unwanted
; double-press when you hold down the RControl key to modify another key. It does this by
; keeping the hotkey's thread running, which blocks the auto-repeats by relying upon
; #MaxThreadsPerHotkey being at its default setting of 1.
; Note: There is a more elaborate script to distinguish between single, double, and
; triple-presses at the bottom of the SetTimer page.
~RControl::
if (A_PriorHotkey <> "~RControl" or A_TimeSincePriorHotkey > 400)
{
; Too much time between presses, so this isn't a double-press.
KeyWait, RControl
return
}
MsgBox You double-pressed the right control key.
return
So for my case:
~Esc::
if (A_PriorHotkey <> "~Esc" or A_TimeSincePriorHotkey > 400)
{
; Too much time between presses, so this isn't a double-press.
KeyWait, Esc
return
}
WinMinimize, A
return
With the script above, I found that the button I wanted to detect was being forwarded to the program (i.e. the "~" prefix).
This seems to do the trick for me (I wanted to detect a double "d" press):
d::
    KeyWait, d
    KeyWait, d, D T0.5  ; Increase the "T" value for a longer timeout.
    if ErrorLevel
    {
        ; pretend that nothing happened and forward the single "d"
        Send d
        return
    }
    ; A double "d" has been detected, act accordingly.
    Send {Del}
    return