I'm using a PPC platform that has an older version of zlib ported to it. Is it possible to use zlib 1.1.3 to inflate an archive made with gzip 1.5?
$ gzip --list --verbose vmlinux.z
method  crc       date   time   compressed  uncompressed  ratio  uncompressed_name
defla   12169518  Apr 29 13:00     4261643       9199404  53.7%  vmlinux
The first 32 bytes of the archive are
00000000 1f 8b 08 08 29 f4 8a 60 00 03 76 6d 6c 69 6e 75 |....)..`..vmlinu|
00000010 78 00 ec 9a 7f 54 1c 55 96 c7 6f 75 37 d0 fc 70 |x....T.U..ou7..p|
I've tried using this code (where source points to the first byte, 1f 8b) with the three options A, B, and C for the windowBits initialization.
int ZEXPORT gunzip (dest, destLen, source, sourceLen)
Bytef *dest;
uLongf *destLen;
const Bytef *source;
uLong sourceLen;
{
    z_stream stream;
    int err;

    stream.next_in = (Bytef*)source;
    stream.avail_in = (uInt)sourceLen;
    /* Check for source > 64K on 16-bit machine: */
    if ((uLong)stream.avail_in != sourceLen) return Z_BUF_ERROR;

    stream.next_out = dest;
    stream.avail_out = (uInt)*destLen;
    if ((uLong)stream.avail_out != *destLen) return Z_BUF_ERROR;

    stream.zalloc = (alloc_func)my_alloc;
    stream.zfree = (free_func)my_free;

    /* option A */
    err = inflateInit(&stream);
    /* option B */
    err = inflateInit2(&stream, 15 + 16);
    /* option C */
    err = inflateInit2(&stream, -MAX_WBITS);

    if (err != Z_OK) return err;

    err = inflate(&stream, Z_FINISH);
    if (err != Z_STREAM_END) {
        inflateEnd(&stream);
        return err == Z_OK ? Z_BUF_ERROR : err;
    }
    *destLen = stream.total_out;

    err = inflateEnd(&stream);
    return err;
}
Option A:
zlib inflate() fails with error Z_DATA_ERROR. "unknown compression method"
z_stream.avail_in = 4261640
z_stream.total_in = 1
z_stream.avail_out = 134152192
z_stream.total_out = 0
Option B:
zlib inflateInit2_() fails at line 118 with a Z_STREAM_ERROR.
/* set window size */
if (w < 8 || w > 15)
{
inflateEnd(z);
return Z_STREAM_ERROR;
}
Option C:
zlib inflate() fails with error Z_DATA_ERROR. "invalid block type"
z_stream.avail_in = 4261640
z_stream.total_in = 1
z_stream.avail_out = 134152192
z_stream.total_out = 0
Your option B would work for zlib 1.2.1 or later.
With zlib 1.1.3, there are two ways.
Use gzopen(), gzread(), and gzclose() to read the gzip stream from a file and decompress it into memory; a minimal sketch follows.
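A rough sketch of that approach, assuming the compressed data is available as a file on the target (gunzip_file is a made-up name, and error handling is trimmed):

#include "zlib.h"

/* Decompress a gzip file into buf; returns the number of bytes decompressed, or -1. */
int gunzip_file(const char *path, Bytef *buf, unsigned buflen)
{
    gzFile f = gzopen(path, "rb");
    int n;

    if (f == NULL) return -1;
    n = gzread(f, buf, buflen);   /* gzread parses the gzip header and inflates */
    gzclose(f);
    return n;
}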
To decompress from the gzip stream in memory, use your option C, raw inflate, after manually decoding the gzip header. Use crc32() to calculate the CRC-32 of the decompressed data as you inflate it. When the inflation completes, manually decode the gzip trailer, checking the CRC-32 and size of the decompressed data.
Manual decoding of the gzip header and trailer is simple to implement. See RFC 1952 for the description of the header and trailer.
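For the in-memory approach, a rough sketch against the zlib 1.1.3 API might look like the following. The name gunzip_mem is made up, bounds checks and most error handling are trimmed, and the CRC-32 is computed in one shot at the end because everything is in memory (in a streaming setup you would update it incrementally with crc32() as each chunk is inflated):

#include "zlib.h"

/* gzip header flag bits, per RFC 1952 */
#define GZ_FTEXT    1
#define GZ_FHCRC    2
#define GZ_FEXTRA   4
#define GZ_FNAME    8
#define GZ_FCOMMENT 16

/* Sketch: inflate an in-memory gzip stream with zlib 1.1.3 using raw inflate. */
int gunzip_mem(Bytef *dest, uLongf *destLen, const Bytef *source, uLong sourceLen)
{
    z_stream strm;
    const Bytef *p = source;
    uLong left = sourceLen;
    uLong crc, isize;
    int flg, err;

    /* Decode the gzip header: magic, method, flags, then optional fields. */
    if (left < 18 || p[0] != 0x1f || p[1] != 0x8b || p[2] != 8)
        return Z_DATA_ERROR;
    flg = p[3];
    p += 10; left -= 10;                              /* fixed 10-byte header */
    if (flg & GZ_FEXTRA) {
        uLong xlen = p[0] + ((uLong)p[1] << 8);
        p += 2 + xlen; left -= 2 + xlen;
    }
    if (flg & GZ_FNAME)    { while (*p++) left--; left--; }   /* skip file name */
    if (flg & GZ_FCOMMENT) { while (*p++) left--; left--; }   /* skip comment   */
    if (flg & GZ_FHCRC)    { p += 2; left -= 2; }             /* skip header CRC */

    /* Raw inflate of the deflate data (everything up to the 8-byte trailer). */
    strm.zalloc = (alloc_func)0;
    strm.zfree = (free_func)0;
    strm.opaque = (voidpf)0;
    strm.next_in = (Bytef *)p;
    strm.avail_in = (uInt)(left - 8);
    strm.next_out = dest;
    strm.avail_out = (uInt)*destLen;

    err = inflateInit2(&strm, -MAX_WBITS);
    if (err != Z_OK) return err;
    err = inflate(&strm, Z_FINISH);
    if (err != Z_STREAM_END) { inflateEnd(&strm); return err; }
    *destLen = strm.total_out;
    inflateEnd(&strm);

    /* Decode the trailer: CRC-32 and ISIZE, both stored little-endian. */
    p = source + sourceLen - 8;
    crc   = p[0] | ((uLong)p[1] << 8) | ((uLong)p[2] << 16) | ((uLong)p[3] << 24);
    isize = p[4] | ((uLong)p[5] << 8) | ((uLong)p[6] << 16) | ((uLong)p[7] << 24);
    if (crc != crc32(0L, dest, (uInt)*destLen) || isize != (*destLen & 0xffffffffUL))
        return Z_DATA_ERROR;
    return Z_OK;
}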
I am new to Arduino and working on a project that passes cryptographic keys between an Arduino (ESP8266) and a React Native app. On the ESP8266 side I am using the Arduino Crypto library, and in React Native I am using react-native-crypto-js. The encryption doesn't seem to do the right thing and returns garbage, so as a first step I tried simply passing the key from the Arduino to the mobile app over an HC-05 Bluetooth module. The following code is used on the Arduino side:
void getConnected() {
// wait until someone tries to connect
if (btSerial.available() > 0) {
String message = btSerial.readString();
if(message == "Hello") {
Serial.println("user trying to connect");
byte key[32];
device.getKey(key);
Serial.println((char*)key);
btSerial.write((char*)key);
}
}
}
The key is generated using RNG.rand(key, sizeof(key)), and I also printed the generated bytes separately; they look something like this:
137 224 186 115 0 0 0 0 172 228 254 63 131 53 32 64 208 218 255 63 13 0 0 0 172 228 254 63 48 70 32 64
As you can see above, the bytes contain 0s, so the code in the React Native app only receives the first 4 bytes; the rest is dropped because the first 0 byte (the 5th byte) is treated as the end of the string.
The code used in the App is as below:
async setup(deviceId) {
console.log('connecting');
await BluetoothSerial.connect(deviceId);
console.log('connected');
await BluetoothSerial.write('Hello');
setTimeout(async () => {
let key = await BluetoothSerial.readFromDevice();
console.log(key);
}, 3000);
// const input = await BluetoothSerial.readFromDevice();
return true;
}
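One workaround I am considering (untested) is to hex-encode the key on the Arduino side so no raw 0x00 bytes ever cross the serial link; sendKeyAsHex is just a placeholder name, and it assumes the same btSerial object as above:

// Untested idea: send the 32-byte key as 64 hex characters plus a newline,
// so embedded zeros never terminate the transfer early.
void sendKeyAsHex(byte key[32]) {
  const char hex[] = "0123456789ABCDEF";
  for (int i = 0; i < 32; i++) {
    btSerial.write(hex[key[i] >> 4]);     // high nibble
    btSerial.write(hex[key[i] & 0x0F]);   // low nibble
  }
  btSerial.write('\n');                   // simple end-of-message marker
}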
I would really appreciate some pointers. Please help.
Thanks
I built a simple program that executes a test in CUnit.
The main function is:
int main()
{
    CU_pSuite pSuite = NULL;

    /* initialize the CUnit test registry */
    if (CUE_SUCCESS != CU_initialize_registry())
        return CU_get_error();

    /* add a suite to the registry */
    pSuite = CU_add_suite("Suite_1", init_suite1, clean_suite1);
    if (NULL == pSuite) {
        CU_cleanup_registry();
        return CU_get_error();
    }

    if ((NULL == CU_add_test(pSuite, "test of fprintf()", test_parse))) {
        CU_cleanup_registry();
        return CU_get_error();
    }

    /* Run all tests using the CUnit Basic interface */
    CU_basic_set_mode(CU_BRM_VERBOSE);
    CU_basic_run_tests();
    CU_cleanup_registry();
    printf("ERROR CODE: %d", CU_get_error());
    return CU_get_error();
}
The test_parse function uses CU_ASSERT_FATAL. The test fails, but the output of main is the following:
CUnit - A unit testing framework for C - Version 2.1-3
http://cunit.sourceforge.net/
Suite: Suite_1
Test: test of fprintf() ...FAILED
1. /home/fedetask/Desktop/curl/tests/main.c:42 - parsed == 3
Run Summary: Type Total Ran Passed Failed Inactive
suites 1 1 n/a 0 0
tests 1 1 0 1 0
asserts 5 5 4 1 n/a
Elapsed time = 0.000 seconds
ERROR CODE: 0
main() returns 0. It also returns 0 if the test passes. What am I doing wrong?
My error: CU_get_error() returns an error code only if a framework function had an error, not if a test failed. To get test results, follow http://cunit.sourceforge.net/doc/running_tests.html
Ran into the same issue. Indeed, CU_get_error() will be 0 even if a test case fails. The following functions return the results, as shown in the docs:
unsigned int CU_get_number_of_suites_run(void)
unsigned int CU_get_number_of_suites_failed(void)
unsigned int CU_get_number_of_tests_run(void)
unsigned int CU_get_number_of_tests_failed(void)
unsigned int CU_get_number_of_asserts(void)
unsigned int CU_get_number_of_successes(void)
unsigned int CU_get_number_of_failures(void)
So a simple approach to checking whether any test failed would be something like:
if (CU_get_number_of_tests_failed() != 0){
// Do Something
}
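For example, the tail end of the original main() could be reworked so the process exit status reflects failed tests rather than CU_get_error() (a sketch; adapt to your setup):

/* Sketch: make the exit status reflect failed tests, not framework errors. */
CU_basic_set_mode(CU_BRM_VERBOSE);
CU_basic_run_tests();
unsigned int failed = CU_get_number_of_tests_failed();  /* read before cleanup */
CU_cleanup_registry();
return failed != 0 ? 1 : 0;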
I have an application that accepts hex values from a C++/CLI richtextbox.
The string comes from a user input.
Sample inputs and how each should be treated:
01 02 03 04 05 06 07 08 09 0A //good input
0102030405060708090A //bad input but can automatically be converted to good by adding spaces.
XX ZZ DD AA OO PP II UU HH SS //bad input this is not hex
01 000 00 00 00 00 00 01 001 0 //bad input hex is only 2 chars
How do I write a function that will:
1. Detect whether the input is good or bad.
2. If it is bad, determine what kind of bad input it is: missing spaces, not hex, or not split into 2-character groups.
3. If the only problem is missing spaces, add the spaces automatically.
So far I made a space checker by searching for a space like:
for ( int i = 2; i < input.size(); i++ )
{
if(input[i] == ' ')
{
cout << "good input" << endl;
i = i+2;
}
else
{
cout << "bad input. I will format for you" << endl;
}
}
But it doesn't really work as expected because it returns this:
01 000 //bad input
01 000 00 00 00 00 00 01 001 00 //good input
Update
1. Check whether the input is actually hex:
bool ishex(std::string const& s)
{
return s.find_first_not_of("0123456789abcdefABCDEF ", 0) == std::string::npos;
}
Are you operating in C++/CLI, or in plain C++? You've got it tagged C++/CLI, but you're using std::string, not .Net System::String.
I suggest this as a general plan: First, split your large string into smaller ones based on any whitespace. For each individual string, make sure it only contains [0-9a-fA-F], and is a multiple of two characters long.
The implementation could go something like this:
array<Byte>^ ConvertString(String^ input)
{
    List<System::Byte>^ output = gcnew List<System::Byte>();

    // Splitting on a null string array makes it split on all whitespace.
    array<String^>^ words = input->Split(
        (array<String^>^)nullptr,
        StringSplitOptions::RemoveEmptyEntries);

    for each(String^ word in words)
    {
        if(word->Length % 2 == 1) throw gcnew Exception("Invalid input string");

        for(int i = 0; i < word->Length; i += 2)
        {
            output->Add((Byte)((GetHexValue(word[i]) << 4) + GetHexValue(word[i+1])));
        }
    }
    return output->ToArray();
}
int GetHexValue(Char c) // Note: Upper case 'C' makes it System::Char
{
    // Return 0-15 for [0-9a-fA-F]; anything else is invalid input.
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    throw gcnew Exception("Invalid input string");
}
What are the ASCII values of the arrow keys? (up/down/left/right)
In short:
left arrow: 37
up arrow: 38
right arrow: 39
down arrow: 40
There are no real ASCII codes for these keys as such; you will need to check out the scan codes for them, known as the Make and Break key codes, as per helppc's information. The reason the codes sound "ASCII" is that the key codes are handled by the old BIOS interrupt 0x16 and keyboard interrupt 0x9.
             Normal mode            Num lock on
             Make     Break         Make             Break
Down arrow   E0 50    E0 D0         E0 2A E0 50      E0 D0 E0 AA
Left arrow   E0 4B    E0 CB         E0 2A E0 4B      E0 CB E0 AA
Right arrow  E0 4D    E0 CD         E0 2A E0 4D      E0 CD E0 AA
Up arrow     E0 48    E0 C8         E0 2A E0 48      E0 C8 E0 AA
Looking at the codes that follow E0 in the Make key codes (0x50, 0x4B, 0x4D, 0x48 respectively) is where the confusion arises: people see key codes and treat them as "ASCII". The answer is don't, because the platform and the OS vary. Under Windows there are virtual key codes corresponding to those keys, not necessarily the same as the BIOS codes: VK_UP, VK_DOWN, VK_LEFT, VK_RIGHT. You will find them in the windows.h header, in the SDK's include folder as I recall.
Do not rely on the key codes having the same "identical ASCII" values shown here, as the operating system will reprogram the whole keyboard handling however it sees fit. That is to be expected: the BIOS code is 16-bit, and today's operating systems run in 32-bit protected mode, so those BIOS codes are no longer valid.
Hence the original keyboard interrupt 0x9 and BIOS interrupt 0x16 are effectively discarded; once the protected-mode OS starts loading, it overwrites that area of memory and replaces it with its own 32-bit protected-mode handlers for the keyboard scan codes.
Here is a code sample from the old days of DOS programming, using Borland C v3:
#include <bios.h>
int getKey(void){
int key, lo, hi;
key = bioskey(0);
lo = key & 0x00FF;
hi = (key & 0xFF00) >> 8;
return (lo == 0) ? hi + 256 : lo;
}
This routine returned 328 for up and 336 for down (I do not have the codes for left and right to hand; they are in my old cookbook!). The actual scan code ends up in the lo variable. Keys other than A-Z and 0-9 had a lo value of 0 from the bioskey routine, and that is why 256 is added: when lo is 0, hi holds the scan code, and adding 256 keeps the result from colliding with the "ASCII" codes.
Really, the answer to this question depends on what operating system and programming language you are using. There is no "ASCII code" per se. The operating system detects that you hit an arrow key and triggers an event that programs can capture. For example, on modern Windows machines you would get a WM_KEYUP or WM_KEYDOWN message, which usually passes a 16-bit value indicating which key was pushed.
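For instance, a plain Win32 program would typically pick the arrow keys out of WM_KEYDOWN by the virtual-key code in wParam. A minimal window-procedure sketch, illustrative only (the surrounding window setup is omitted):

#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_KEYDOWN) {
        switch (wParam) {              /* wParam carries the virtual-key code */
        case VK_UP:    /* handle up    */ return 0;
        case VK_DOWN:  /* handle down  */ return 0;
        case VK_LEFT:  /* handle left  */ return 0;
        case VK_RIGHT: /* handle right */ return 0;
        }
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}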
The ASCII values of the arrow keys:
Up key    - 224, 72
Down key  - 224, 80
Left key  - 224, 75
Right key - 224, 77
Each of these keys has two integer values, because they are special keys, as opposed to the code for $, which is simply 36. These two-byte special keys usually have 224 or 0 as the first value; the same pattern shows up for the function keys (F1-F12) on Windows and the Delete key.
EDIT: Looking back, these may actually be Unicode values rather than ASCII, but they do work.
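For illustration, a small sketch of reading those two values with getch() from <conio.h> on a Windows console (depending on the compiler the function may be spelled _getch() instead):

#include <conio.h>
#include <stdio.h>

int main(void)
{
    int c = getch();
    if (c == 0 || c == 224) {      /* extended key: a second value follows */
        int ext = getch();         /* 72 = up, 80 = down, 75 = left, 77 = right */
        printf("extended key code: %d\n", ext);
    } else {
        printf("key code: %d\n", c);
    }
    return 0;
}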
If you're programming in OpenGL, use GLUT. The following page should help: http://www.lighthouse3d.com/opengl/glut/index.php?5
GLUT_KEY_LEFT Left function key
GLUT_KEY_RIGHT Right function key
GLUT_KEY_UP Up function key
GLUT_KEY_DOWN Down function key
void processSpecialKeys(int key, int x, int y) {
switch(key) {
case GLUT_KEY_F1 :
red = 1.0;
green = 0.0;
blue = 0.0; break;
case GLUT_KEY_F2 :
red = 0.0;
green = 1.0;
blue = 0.0; break;
case GLUT_KEY_F3 :
red = 0.0;
green = 0.0;
blue = 1.0; break;
}
}
You can check it by compiling and running this small C++ program.
#include <iostream>
#include <conio.h>
#include <cstdlib>

int main()
{
    while (true)
    {
        int show = getch();
        std::cout << show;
    }
    getch(); // Just to keep the console open after program execution
}
If you're working with terminals, as I was when I found this in a search, then you'll find that the arrow keys send the corresponding cursor movement escape sequences.
So in this context,
UP = ^[[A
DOWN = ^[[B
RIGHT = ^[[C
LEFT = ^[[D
where ^[ is the symbol for escape; in actual bytes you use the ASCII value for escape, which is 27, followed by the values for the bracket and the letter.
In my case, using a serial connection to communicate these directions, for Up arrow, I sent the byte sequence 27,91,65 for ^[, [, A
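A small sketch of decoding those three-byte sequences on the receiving end (the function name decode_arrow is made up for illustration):

#include <stdio.h>

/* Map a three-byte sequence ESC, '[', letter to an arrow-key name. */
static const char *decode_arrow(int b1, int b2, int b3)
{
    if (b1 != 27 || b2 != '[') return "not an arrow sequence";
    switch (b3) {
    case 'A': return "up";
    case 'B': return "down";
    case 'C': return "right";
    case 'D': return "left";
    default:  return "unknown";
    }
}

int main(void)
{
    printf("%s\n", decode_arrow(27, 91, 65));   /* the bytes sent for Up; prints "up" */
    return 0;
}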
You can use GLUT's special-keys callback to handle the navigation keys for your purposes. Below is sample code for it.
void Specialkey(int key, int x, int y)
{
switch(key)
{
case GLUT_KEY_UP:
/*Do whatever you want*/
break;
case GLUT_KEY_DOWN:
/*Do whatever you want*/
break;
case GLUT_KEY_LEFT:
/*Do whatever you want*/
break;
case GLUT_KEY_RIGHT:
/*Do whatever you want*/
break;
}
glutPostRedisplay();
}
Add this to your main function
glutSpecialFunc(Specialkey);
Hope this helps solve the problem!
The ASCII codes for the arrow characters are the following:
↑ 24
↓ 25
→ 26
← 27
I got stuck on this question and could not find a good solution, so I decided to do some tinkering with the MinGW compiler I have. I used C++ and the getch() function in the <conio.h> header and pressed the arrow keys to see what values they produce. It turns out each key produces two values: a 224 prefix followed by 72, 77, 80 or 75 for the up, right, down and left keys respectively (printed together they look like 22472, 22477, 22480 and 22475). You have to ditch the 224 part and compare only the second value for the software to recognize the correct key; and as you might guess, 72, 77, 80 and 75 are also used by other characters, but this works for me and I hope it works for you as well. If you want to run the C++ code and find the values yourself, run this program and press Enter to get out of the loop:
#include<iostream>
#include<conio.h>
using namespace std;

int main()
{
    int x;
    while(1) {
        x = (int)getch();
        if(x == 13) {
            break;
        }
        else
            cout << endl << endl << x;
    }
    return getch();
}
If you came here for JavaScript and want to know which key was pressed:
Add an event listener for the keydown event. It tells you which key was pressed, and the event object you get from keydown (or onkeydown, which behaves much the same) has some useful properties:
.code - returns a string describing the pressed key, like ArrowUp, ArrowDown, KeyW, KeyR, and so on.
.keyCode - deprecated, but you can still use it. It returns an integer, e.g. 65 for both lowercase a and uppercase A (essentially a case-insensitive ASCII code).
For the arrows: ArrowLeft = 37, ArrowUp = 38, ArrowRight = 39, ArrowDown = 40.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
</head>
<body>
<h1 id="show">Press Any Button</h1>
<!-- JavaScript code starts here; see the magic by pressing any button -->
<script>
document.addEventListener('keydown', (key)=> {
let keycode = key.keyCode;
document.getElementById('show').innerText = keycode;
/*
let keyString = key.code;
switch(keyString){
case "ArrowLeft":
console.log("Left Key is Pressed");
break;
case "ArrowUp":
console.log("Up Key is Pressed");
break;
case "ArrowRight":
console.log("Right Key is Pressed");
break;
case "ArrowDown":
console.log("Down Key is Pressed");
break;
default:
console.log("Any Other Key is Pressed");
break;
}
*/
});
</script>
</body>
</html>
Can't address every operating system/situation, but for AppleScript on a Mac, it is as follows:
LEFT: 28
RIGHT: 29
UP: 30
DOWN: 31
tell application "System Events" to keystroke (ASCII character 31) --down arrow
Gaa! Go to asciitable.com. The arrow keys are the control equivalents of the HJKL keys. I.e., in vi, create a big block of text; note that you can move around in that text using the HJKL keys. The arrow keys are going to be ^H, ^J, ^K, ^L.
At asciitable.com, find "K" in the third column. Now look at the same row in the first column to find the matching control code ("VT" in this case).
This is what I get:
Left - 19
Up - 5
Right - 4
Down - 24
These work in Visual FoxPro.