TLS v1.2 handshake fails after client's ChangeCipherSpec and EncryptedHandshakeMessage

I have a PSK server and client example using OpenSSL that work very well with one another. However, what I need to do is make my client, which uses PolarSSL/mbedTLS, talk to that server. I am seeing a handshake failure right after the client sends ChangeCipherSpec and EncryptedHandshakeMessage. Any ideas what could be wrong?
I have used https://bitbucket.org/tiebingzhang/tls-psk-server-client-example/overview as reference.
Sample mbedTLS/PolarSSL code is below:
static const unsigned char *psk_identity = "Client_identity";
static const unsigned char *psk_key = "1A1A1A1A1A1A1A1A";
ssl_set_endpoint(&context,SSL_IS_CLIENT);
ssl_set_authmode(&context, SSL_VERIFY_NONE );
ssl_set_rng(&context, random_vector_generate, NULL);
ssl_set_ciphersuites(&context, default_ciphers);
ssl_set_bio(&context, transport_read, NULL, transport_write, NULL);
ssl_set_psk(&context, psk_key, strlen((char *)psk_key), psk_identity, strlen((char *)psk_identity));
ssl_handshake(&context);
Note: the only change to the server code is that I changed the pre-shared key size from 32 to 16.
Also the configuration used for PolarSSL is below:
#define POLARSSL_AES_C
#define POLARSSL_CIPHER_C
#define POLARSSL_CTR_DRBG_C
#define POLARSSL_MD_C
#define POLARSSL_MD5_C
#define POLARSSL_SHA1_C
#define POLARSSL_SSL_CLI_C
#define POLARSSL_SSL_TLS_C
#define POLARSSL_PLATFORM_C
#define POLARSSL_PLATFORM_MEMORY
#define POLARSSL_CIPHER_MODE_CBC
#define POLARSSL_DEBUG_C
#define POLARSSL_BIGNUM_C
#define POLARSSL_AES_ROM_TABLES
#define POLARSSL_PSK_MAX_LEN 32
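One way to narrow this down: since POLARSSL_DEBUG_C is already enabled in the config above, a debug callback can be registered and the return code of ssl_handshake() checked. This is only a sketch, assuming the PolarSSL 1.x ssl_set_dbg() API; my_debug is a hypothetical helper:

#include <stdio.h>

/* Hypothetical debug hook: forward the library's handshake trace to stderr */
static void my_debug(void *ctx, int level, const char *str)
{
    ((void) level);
    fprintf((FILE *) ctx, "%s", str);
}

/* ... after ssl_set_bio(...): */
ssl_set_dbg(&context, my_debug, stderr);

int ret = ssl_handshake(&context);
if (ret != 0)
    printf("ssl_handshake returned -0x%04x\n", -ret);

If the server answers the client's Finished message with a bad_record_mac or decrypt_error alert, a likely cause is that the two sides do not hold identical PSK bytes (for example, one side treating "1A1A1A1A1A1A1A1A" as 16 ASCII bytes while the other hex-decodes it into 8 raw bytes).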

Related

Verify ECDSA signature with MbedTLS 3.X

A client sends me a message signed with a private key of type ECDSA secp256r1. I'm in possession of a leaf certificate, in DER format, provided by the client. In addition, I also have the raw message and a SHA-256 digest of the message.
I have created a struct where to store all the required info for the verification, with the idea of providing a public API in my application:
struct SignatureVerifyData {
    unsigned char *msg;
    unsigned char *hash;       // digest sha256 of msg
    unsigned char *cert;       // leaf cert in DER
    unsigned char *signature;
    size_t msg_len;
    size_t hash_len;
    size_t cert_len;
    size_t signature_len;
};
I'm reading the ecdsa.c example from MbedTLS, but in that example the cert is generated in the example itself. I can use mbedtls_x509_crt_parse_der() to load my leaf cert, but then, should I move it into an mbedtls_ecdsa_context object to use with mbedtls_ecdsa_read_signature()?
Or should I load the leaf cert some other way?
I'm also confused about how to use the group and point objects, or whether I need to use them at all.
#define MBEDTLS_HAVE_ASM
#define MBEDTLS_HAVE_TIME
#define MBEDTLS_ALLOW_PRIVATE_ACCESS
#define MBEDTLS_PLATFORM_C
#define MBEDTLS_ECP_DP_SECP256R1_ENABLED
#define MBEDTLS_KEY_EXCHANGE_ECDHE_ECDSA_ENABLED
#define MBEDTLS_SSL_PROTO_TLS1_2
#define MBEDTLS_AES_C
#define MBEDTLS_ASN1_PARSE_C
#define MBEDTLS_ASN1_WRITE_C
#define MBEDTLS_BIGNUM_C
#define MBEDTLS_CIPHER_C
#define MBEDTLS_CTR_DRBG_C
#define MBEDTLS_ECDH_C
#define MBEDTLS_ECDSA_C
#define MBEDTLS_ECP_C
#define MBEDTLS_ENTROPY_C
#define MBEDTLS_GCM_C
#define MBEDTLS_MD_C
#define MBEDTLS_NET_C
#define MBEDTLS_OID_C
#define MBEDTLS_PK_C
#define MBEDTLS_PK_PARSE_C
#define MBEDTLS_SHA224_C
#define MBEDTLS_SHA256_C
#define MBEDTLS_SHA384_C
#define MBEDTLS_SHA512_C
#define MBEDTLS_SSL_CLI_C
#define MBEDTLS_SSL_SRV_C
#define MBEDTLS_SSL_TLS_C
#define MBEDTLS_X509_CRT_PARSE_C
#define MBEDTLS_X509_USE_C
#define MBEDTLS_BASE64_C
#define MBEDTLS_PEM_PARSE_C
#define MBEDTLS_AES_ROM_TABLES
#define MBEDTLS_MPI_MAX_SIZE 48 // 384-bit EC curve = 48 bytes
#define MBEDTLS_ECP_WINDOW_SIZE 2
#define MBEDTLS_ECP_FIXED_POINT_OPTIM 0
#define MBEDTLS_ECP_NIST_OPTIM
#define MBEDTLS_ENTROPY_MAX_SOURCES 2
#define MBEDTLS_SSL_CIPHERSUITES \
MBEDTLS_TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, \
MBEDTLS_TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
#define MBEDTLS_SSL_IN_CONTENT_LEN 1024
#define MBEDTLS_SSL_OUT_CONTENT_LEN 1024
#include "mbedtls/check_config.h"
mbedtls_x509_crt_parse_der constructs an object of type mbedtls_x509_crt. This structure has a field called pk which contains the public key. Call mbedtls_pk_verify to verify the signature.
Here's the general idea of the code to parse the certificate, calculate the hash and verify the signature. Untested code, typed directly into my browser; error checking omitted, so be sure to check that all the function calls succeed.
#include <stdlib.h>
#include <mbedtls/md.h>
#include <mbedtls/pk.h>
#include <mbedtls/x509_crt.h>

mbedtls_x509_crt crt;
mbedtls_x509_crt_init(&crt);
mbedtls_x509_crt_parse_der(&crt, cert, cert_len);

/* Hash the raw message with SHA-256 */
const mbedtls_md_info_t *md_info = mbedtls_md_info_from_type(MBEDTLS_MD_SHA256);
hash_len = mbedtls_md_get_size(md_info);
hash = malloc(hash_len);
mbedtls_md(md_info, msg, msg_len, hash);

/* Verify the signature with the public key from the certificate */
mbedtls_pk_verify(&crt.pk, MBEDTLS_MD_SHA256, hash, hash_len, signature, signature_len);

mbedtls_x509_crt_free(&crt);
free(hash);
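Since all of these calls return status codes, one way to check the verification result is shown below. This is only a sketch: mbedtls_strerror() comes from mbedtls/error.h and needs MBEDTLS_ERROR_C, which is not in the config listed above.

#include <stdio.h>
#include <mbedtls/error.h>

int ret = mbedtls_pk_verify(&crt.pk, MBEDTLS_MD_SHA256,
                            hash, hash_len, signature, signature_len);
if (ret == 0) {
    printf("signature is valid\n");
} else {
    char err[128];
    mbedtls_strerror(ret, err, sizeof(err));  /* needs MBEDTLS_ERROR_C */
    printf("verification failed: %s\n", err);
}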

How to make downward connection in Contiki-NG with UDP

I'm trying to make a simple mesh connection using 6LoWPAN with Contiki.
For simplicity I'm doing this in Cooja, so hardware is not a constraint in this problem, I think.
My objective is to have one root (UDP server) and many motes (UDP clients). With the examples provided by Contiki, I'm able to start communication from the motes and talk to the server, but is it possible to do it the other way around?
I want the root to initiate sending a message to any client and, if necessary, have the message hop via other clients in the network.
Do you have any idea whether this is possible, or any pointers for achieving it?
Update: what I've tried so far.
On the server device, I create two processes: one for initiating the root, and the other for sending a packet periodically:
#include "contiki.h"
#include <stdlib.h>
#include "net/routing/routing.h"
#include "random.h"
#include "net/netstack.h"
#include "net/ipv6/simple-udp.h"
#include "sys/log.h"
#define LOG_MODULE "App"
#define LOG_LEVEL LOG_LEVEL_DBG
#define UDP_CLIENT_PORT 8765
#define UDP_SERVER_PORT 5678
#define SEND_INTERVAL (5 * CLOCK_SECOND)
static struct simple_udp_connection udp_conn;
static struct etimer periodic_timer;
PROCESS(udp_server_process, "UDP server");
PROCESS(send_msg_process, "UDP server");
AUTOSTART_PROCESSES(&udp_server_process, &send_msg_process);
static void
udp_rx_callback(struct simple_udp_connection *c,
                const uip_ipaddr_t *sender_addr,
                uint16_t sender_port,
                const uip_ipaddr_t *receiver_addr,
                uint16_t receiver_port,
                const uint8_t *data,
                uint16_t datalen)
{
  LOG_INFO("Received response '%.*s' from ", datalen, (char *) data);
  LOG_INFO_6ADDR(sender_addr);
  LOG_INFO_("\n");
}
PROCESS_THREAD(udp_server_process, ev, data)
{
  PROCESS_BEGIN();

  /* Initialize DAG root */
  NETSTACK_ROUTING.root_start();

  /* Initialize UDP connection */
  simple_udp_register(&udp_conn, UDP_SERVER_PORT, NULL,
                      UDP_CLIENT_PORT, udp_rx_callback);

  PROCESS_END();
}
PROCESS_THREAD(send_msg_process, ev, data)
{
  static unsigned count;
  static char str[32];
  uip_ipaddr_t dest_ipaddr;

  LOG_INFO("%u", count);

  PROCESS_BEGIN();

  while(1) {
    etimer_set(&periodic_timer, CLOCK_SECOND);
    PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&periodic_timer));

    uip_ip6addr(&dest_ipaddr, 0xfe80, 0, 0, 0, 0x207, 0x7, 0x7, 0x7);
    LOG_INFO("Sending request %u to ", count);
    LOG_INFO_6ADDR(&dest_ipaddr);
    LOG_INFO_("\n");
    snprintf(str, sizeof(str), "hello %d", count);
    simple_udp_sendto(&udp_conn, str, strlen(str), &dest_ipaddr);
    count++;
  }

  PROCESS_END();
}
On the client side, the code simply listens on the UDP socket and sends a response if it receives a packet.
#include "contiki.h"
#include "net/routing/routing.h"
#include "random.h"
#include "net/netstack.h"
#include "net/ipv6/simple-udp.h"
#include "sys/log.h"
#define LOG_MODULE "App"
#define LOG_LEVEL LOG_LEVEL_DBG
#define WITH_SERVER_REPLY 1
#define UDP_CLIENT_PORT 8765
#define UDP_SERVER_PORT 5678
#define SEND_INTERVAL (5 * CLOCK_SECOND)
static struct simple_udp_connection udp_conn;
/*---------------------------------------------------------------------------*/
PROCESS(udp_client_process, "UDP client");
AUTOSTART_PROCESSES(&udp_client_process);
/*---------------------------------------------------------------------------*/
static void
udp_rx_callback(struct simple_udp_connection *c,
                const uip_ipaddr_t *sender_addr,
                uint16_t sender_port,
                const uip_ipaddr_t *receiver_addr,
                uint16_t receiver_port,
                const uint8_t *data,
                uint16_t datalen)
{
  LOG_INFO("Received request '%.*s' from ", datalen, (char *) data);
  LOG_INFO_6ADDR(sender_addr);
  LOG_INFO("Sending response.\n");
  simple_udp_sendto(&udp_conn, data, datalen, sender_addr);
  LOG_INFO_("\n");
}
PROCESS_THREAD(udp_client_process, ev, data)
{
  PROCESS_BEGIN();

  simple_udp_register(&udp_conn, UDP_CLIENT_PORT, NULL,
                      UDP_SERVER_PORT, udp_rx_callback);

  PROCESS_END();
}
As you can see, the server code periodically sends a packet to the IPv6 address fe80::207:7:7:7, which is the address that will be assigned to mote number 7 in the Cooja simulation.
The result I've obtained is that when the root (A) and the client (B) are within direct range, they talk to each other perfectly, but when I separate them and try to reach the client (B) from the root (A) via another client (C), the message doesn't get from A to B.
Yes, it is possible. The RPL routing protocol allows sending packets in both directions, from and to the root. Simply use the node's IP address as the destination.
One issue is that a node typically has two IPv6 addresses:
Addresses starting with fe80 are link-local.
Addresses starting with the network prefix, which is defined in the OS config as UIP_DS6_DEFAULT_PREFIX and equal to 0xfd00 by default. This address is only present after the node has joined the RPL network.
Packets to link-local addresses must be single-hop; they are not forwarded by nodes. To use the multi-hop mesh forwarding properly, use the prefix-based (global) address as the destination, as in the sketch below.
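For the server code above, that means replacing the hard-coded fe80:: destination with the node's global address. A minimal sketch, assuming the default fd00::/64 prefix and that Cooja's mote 7 keeps the same interface identifier (207:7:7:7):

/* Sketch only: target the routable, prefix-based address instead of the
   link-local one (UIP_DS6_DEFAULT_PREFIX is 0xfd00 unless overridden). */
uip_ip6addr(&dest_ipaddr, UIP_DS6_DEFAULT_PREFIX, 0, 0, 0, 0x207, 0x7, 0x7, 0x7);

Alternatively, have each client send one packet upward first and record sender_addr in the root's udp_rx_callback; once the node has joined the DAG, that recorded address should already be the routable one.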

CocoaLumberjack's Log Level switches to verbose

I'm using the CocoaLumberjack logging framework 2.0.0 for logging with different levels. In my Prefix.pch (I know that this file is deprecated, but it should work nevertheless) I include CocoaLumberjack and set the global log level as suggested here:
#ifdef DEBUG
static const DDLogLevel ddLogLevel = DDLogLevelDebug;
#else
static const DDLogLevel ddLogLevel = DDLogLevelWarn;
#endif
I have DDLogVerbose statements in a few methods that should not be logged by default. The problem: they are getting logged anyway.
Inspecting ddLogLevel in an init function shows 00001111, which equals DDLogLevelDebug. Nevertheless, a verbose logging statement directly after this is executed. (1)
Preprocessing the line DDLogVerbose(@"I AM VERBOSE") shows this code:
do {
    if(DDLogLevelVerbose & DDLogFlagVerbose)
        [DDLog log : __objc_yes
             level : DDLogLevelVerbose
              flag : DDLogFlagVerbose
           context : 0
              file : "....m"
          function : __PRETTY_FUNCTION__
              line : 59
               tag : ((void *)0)
            format : (@"I AM VERBOSE")];
} while(0);
which means that the log level after preprocessing is Verbose. (2) I found out that this level is the default in CocoaLumberjack in case no log level is defined:
#ifndef LOG_LEVEL_DEF
#ifdef ddLogLevel
#define LOG_LEVEL_DEF ddLogLevel
#else
#define LOG_LEVEL_DEF DDLogLevelVerbose
#endif
#endif
But: Debugging this shows that the first path is executed, i.e. LOG_LEVEL_DEF (which is checked against the level of the statement to determine if it should be logged or not) is assigned the correct level (Debug).
Question: I couldn't find out why (1) shows the log level Debug while, after preprocessing, it has switched to Verbose (2). Could this be a matter of the order in which headers are included? Or am I missing some important point?
I didn't solve this issue, so I wrote my own header file for logging:
// Create Logging Messages by calling the functions:
// * DDLogFatal(...)
// * DDLogError(...)
// * DDLogWarn(...)
// * DDLogInfo(...)
// * DDLogDebug(...)
// * DDLogTrace(...)
// * DDLogEntry()
// Only calls whose level is at or above the log level defined below will produce output.
//
// NOTE: For this file to work, the option "Treat warnings as errors" must be turned off!
/*********************************
 ***     CURRENT LOG LEVEL     ***
 *********************************/
#define LOG_LEVEL LOG_LEVEL_DEBUG
/* Default Log Level */
#ifndef LOG_LEVEL
#ifdef DEBUG
#define LOG_LEVEL LOG_LEVEL_DEBUG
#else
#define LOG_LEVEL LOG_LEVEL_WARN
#endif
#endif
/* List of Log Levels */
#define LOG_LEVEL_OFF 0 // 0000 0000
#define LOG_LEVEL_FATAL 1 // 0000 0001
#define LOG_LEVEL_ERROR 3 // 0000 0011
#define LOG_LEVEL_WARN 7 // 0000 0111
#define LOG_LEVEL_INFO 15 // 0000 1111
#define LOG_LEVEL_DEBUG 31 // 0001 1111
#define LOG_LEVEL_TRACE 63 // 0011 1111
#define LOG_FLAG_FATAL 1 // 0000 0001
#define LOG_FLAG_ERROR 2 // 0000 0010
#define LOG_FLAG_WARN 4 // 0000 0100
#define LOG_FLAG_INFO 8 // 0000 1000
#define LOG_FLAG_DEBUG 16 // 0001 0000
#define LOG_FLAG_TRACE 32 // 0010 0000
#if (LOG_LEVEL & LOG_FLAG_FATAL) > 0
#define DDLogFatal(...) ALog(@"FATAL", __VA_ARGS__)
#else
#define DDLogFatal(...)
#endif
#if (LOG_LEVEL & LOG_FLAG_ERROR) > 0
#define DDLogError(...) ALog(@"ERROR", __VA_ARGS__)
#else
#define DDLogError(...)
#endif
#if (LOG_LEVEL & LOG_FLAG_WARN) > 0
#define DDLogWarn(...) ALog(@"WARNING", __VA_ARGS__)
#else
#define DDLogWarn(...)
#endif
#if (LOG_LEVEL & LOG_FLAG_INFO) > 0
#define DDLogInfo(...) ALog(@"INFO", __VA_ARGS__)
#else
#define DDLogInfo(...)
#endif
#if (LOG_LEVEL & LOG_FLAG_DEBUG) > 0
#define DDLogDebug(...) ALog(@"DEBUG", __VA_ARGS__)
#else
#define DDLogDebug(...)
#endif
#if (LOG_LEVEL & LOG_FLAG_TRACE) > 0
#define DDLogTrace(...) ALog(@"TRACE", __VA_ARGS__)
#define DDLogEntry() ALog(@"TRACE", @"->")
#else
#define DDLogTrace(...)
#define DDLogEntry()
#endif
#define ALog(logLevel, fmt, ...) NSLog((@"%s [Line %d] %@: " fmt), __PRETTY_FUNCTION__, __LINE__, logLevel, ##__VA_ARGS__)
Include this file wherever Logging is needed. Hope this helps someone!
So I'm not sure if this is the same issue you were running into, but I had a similar symptom, i.e. my log levels being ignored. What was happening for me is that the CocoaLumberjack folks made it easier in v2 for new users to get started by not having to specify a log level at all to get the framework to work.
As per the lumberjack docs, to actually use ddLogLevel I needed to #define it before importing the CocoaLumberjack.h file:
Using ddLogLevel to start using the library is now optional. If you define it, add #define LOG_LEVEL_DEF ddLogLevel before #import <CocoaLumberjack/CocoaLumberjack.h> and make sure to change its type to DDLogLevel.
In my case, I'm doing that in the .pch file, so it looks like:
// ProjectX.pch
#define LOG_LEVEL_DEF ddLogLevel // this is the crucial bit!
#import "CocoaLumberjack/CocoaLumberjack.h"
// Then the normal definitions...
#ifdef DEBUG
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wunused-variable"
static DDLogLevel ddLogLevel = DDLogLevelWarning;
#pragma clang diagnostic pop
#else
static const DDLogLevel ddLogLevel = DDLogLevelWarning;
#endif
#define LOG_LEVEL_DEF ddLogLevel
CocoaLumberjack has 4 log levels
Error
Warning
Info
Verbose
The "ddLogLevel" determines which logs are to be executed and which to be ignored.
If you do not want DDLogVerbose to be executed, change to a lower level such as Info.
Change your DEBUG macro as follows
#ifdef DEBUG
static const int ddLogLevel = LOG_LEVEL_INFO;
#else
static const int ddLogLevel = LOG_LEVEL_ERROR;
#endif
Hope this solves your issue.

Fast SHA-2 Authentication with Apache, is it even possible?

Okay, I spent the last couple of days researching this, and I can't believe Apache's natively supported hashing functions are that outdated.
I discovered a couple of ways to do this, mod_perl and mod_authnz_external, both of which are too slow, because Apache runs the authentication check whenever any object inside a protected directory is requested. That means a user may have to be authenticated hundreds of times in a single session.
Has anyone ever managed to get Apache to use something that's more secure than MD5 and SHA-1 without moving authentication away from Apache? Salted SHA-2 would be a real bonus.
Thanks!
If you're on a GNU/Linux system with a version of glibc2 released in the last 5 or so years, you can modify htpasswd's crypt() implementation to prepend "$6$" to the salt, and then it'd be as simple as:
# htpasswd -d -c .htpasswd someusername
When the salt starts with "$6$", glibc2 will use salted SHA-512, with the up to 16 characters after that being the salt, in the range [a-zA-Z0-9./].
See man 3 crypt.
I'm not aware of any patch to support this, but it should be a simple one.
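To illustrate the mechanism (this is not the htpasswd patch itself), here is a minimal sketch of glibc's crypt() selecting salted SHA-512 via the "$6$" salt prefix; compile with -lcrypt:

#include <stdio.h>
#include <crypt.h>   /* glibc: declares crypt(); link with -lcrypt */

int main(void)
{
    /* "$6$" selects SHA-512; the next up to 16 characters ([a-zA-Z0-9./]) are the salt */
    const char *hash = crypt("password", "$6$0123456789abcdef");
    printf("%s\n", hash ? hash : "(crypt failed)");
    return 0;
}

The output has the form $6$<salt>$<hash>, which is what a patched htpasswd would write into the .htpasswd file.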
EDIT: I'd also like to mention that even one round of salted SHA-512 is breakable if your attacker is determined enough. I'd recommend, and am using in most things I've been able to edit, 128000 rounds of PBKDF2 with HMAC-SHA512, but that would be a very extensive edit, unless you want to link htpasswd against OpenSSL, which has a PKCS5_PBKDF2_HMAC() function.
EDIT 2: Also, using OpenSSL to do strong hashing isn't hard, if you're interested:
abraxas ~ # cat pbkdf2.c
#include <string.h>
#include <stdio.h>
#include <openssl/evp.h>
#include <openssl/sha.h>
#define PBKDF2_SALT_PREFIX "$pbkdf2sha512$"
#define PBKDF2_SALT_PREFIX_LENGTH strlen(PBKDF2_SALT_PREFIX)
#define PBKDF2_PRF_ALGORITHM EVP_sha512()
#define PBKDF2_DIGEST_LENGTH SHA512_DIGEST_LENGTH
#define PBKDF2_SALT_LENGTH 32
#define PBKDF2_RESULT_LENGTH PBKDF2_SALT_PREFIX_LENGTH + (2 * PBKDF2_DIGEST_LENGTH) + PBKDF2_SALT_LENGTH + 2
#define PBKDF2_ROUNDS 128000
void hash_password(const char* pass, const unsigned char* salt, char* result)
{
    unsigned int i;
    static unsigned char digest[PBKDF2_DIGEST_LENGTH];

    memcpy(result, PBKDF2_SALT_PREFIX, PBKDF2_SALT_PREFIX_LENGTH);
    memcpy(result + PBKDF2_SALT_PREFIX_LENGTH, salt, PBKDF2_SALT_LENGTH);
    result[PBKDF2_SALT_PREFIX_LENGTH + PBKDF2_SALT_LENGTH] = '$';

    PKCS5_PBKDF2_HMAC(pass, strlen(pass), salt, PBKDF2_SALT_LENGTH, PBKDF2_ROUNDS, PBKDF2_PRF_ALGORITHM, PBKDF2_DIGEST_LENGTH, digest);

    for (i = 0; i < sizeof(digest); i++)
        sprintf(result + PBKDF2_SALT_PREFIX_LENGTH + PBKDF2_SALT_LENGTH + 1 + (i * 2), "%02x", 255 & digest[i]);
}

int main(void)
{
    char result[PBKDF2_RESULT_LENGTH];
    char pass[] = "password";
    unsigned char salt[] = "178556d2988b6f833f239cd69bc07ed3";

    printf("Computing PBKDF2(HMAC-SHA512, '%s', '%s', %d, %d) ...\n", pass, salt, PBKDF2_ROUNDS, PBKDF2_DIGEST_LENGTH);
    memset(result, 0, PBKDF2_RESULT_LENGTH);
    hash_password(pass, salt, result);
    printf("Result: %s\n", result);

    return 0;
}
abraxas ~ # gcc -Wall -Wextra -O3 -lssl pbkdf2.c -o pbkdf2
abraxas ~ # time ./pbkdf2
Computing PBKDF2(HMAC-SHA512, 'password', '178556d2988b6f833f239cd69bc07ed3', 128000, 64) ...
Result: $pbkdf2sha512$178556d2988b6f833f239cd69bc07ed3$3acb79896ce3e623c3fac32f91d4421fe360fcdacfb96ee3460902beac26807d28aca4ed01394de2ea37b363ab86ba448286eaf21e1d5b316149c0b9886741a7
real 0m0.320s
user 0m0.319s
sys 0m0.001s
abraxas ~ #

How to receive packets on the MCU's serial port?

Consider this code running on my microcontroller unit (MCU):
while(1){
    do_stuff;
    if(packet_from_PC)
        send_data_via_gpio(new_packet); // send via general purpose I/O pins
    else
        send_data_via_gpio(default_packet);
    do_other_stuff;
}
The MCU is also interfaced to a PC via a UART. Whenever the PC sends data to the MCU, the new_packet is sent; otherwise the default_packet is sent. Each packet can be 5 or more bytes with a predefined packet structure.
My question is:
1. Should I receive the entire packet from the PC inside the UART interrupt service routine (ISR)? In this case, I have to implement a state machine inside the ISR to assemble the packet (which can be lengthy with if-else or switch-case blocks).
OR
2. Have the PC send some sort of REQUEST command (one byte), detect it in my ISR, set a flag, disable only the UART interrupt, and form the packet in my while(1) loop by checking for the flag and polling the UART? In this case the UART interrupt would be re-enabled in the while(1) loop after the entire packet is formed.
Those are not the only two choices, and the second one seems suboptimal.
My first approach would be to use a simple circular queue: push bytes into it from the ISR and read bytes from it in your main loop. That way you have a small and simple ISR and you can do the processing in your main loop without disabling interrupts.
The first choice is possible, assuming you can code the ISR sensibly. You probably want timeouts when constructing packets, and you need to be able to handle those correctly in your ISR. It depends on the line speed, the speed of your MCU, and what else you need to do.
Update:
Doing it in the ISR is certainly reasonable. However, using a circular queue is pretty straightforward with a standard implementation in your bag of tricks. Here is a circular queue implementation; readers and writers can operate independently.
#ifndef ARRAY_ELEMENTS
#define ARRAY_ELEMENTS(x) (sizeof(x) / sizeof(x[0]))
#endif
#define QUEUE_DEFINE(name, queue_depth, type) \
    struct queue_type__##name { \
        volatile size_t m_in; \
        volatile size_t m_out; \
        type m_queue[queue_depth]; \
    }
#define QUEUE_DECLARE(name) struct queue_type__##name name
#define QUEUE_SIZE(name) ARRAY_ELEMENTS((name).m_queue)
#define QUEUE_CALC_NEXT(name, i) \
    (((name).i == (QUEUE_SIZE(name) - 1)) ? 0 : ((name).i + 1))
#define QUEUE_INIT(name) (name).m_in = (name).m_out = 0
#define QUEUE_EMPTY(name) ((name).m_in == (name).m_out)
#define QUEUE_FULL(name) (QUEUE_CALC_NEXT(name, m_in) == (name).m_out)
#define QUEUE_NEXT_OUT(name) ((name).m_queue + (name).m_out)
#define QUEUE_NEXT_IN(name) ((name).m_queue + (name).m_in)
#define QUEUE_PUSH(name) ((name).m_in = QUEUE_CALC_NEXT((name), m_in))
#define QUEUE_POP(name) ((name).m_out = QUEUE_CALC_NEXT((name), m_out))
Use it like this:
QUEUE_DEFINE(bytes_received, 64, unsigned char);
QUEUE_DECLARE(bytes_received);
void isr(void)
{
    /* Move the received byte into 'c' */
    /* This code enqueues the byte, or drops it if the queue is full */
    if (!QUEUE_FULL(bytes_received)) {
        *QUEUE_NEXT_IN(bytes_received) = c;
        QUEUE_PUSH(bytes_received);
    }
}

void main(void)
{
    QUEUE_INIT(bytes_received);
    for (;;) {
        other_processing();
        if (!QUEUE_EMPTY(bytes_received)) {
            unsigned char c = *QUEUE_NEXT_OUT(bytes_received);
            QUEUE_POP(bytes_received);
            /* Use c as you see fit ... */
        }
    }
}
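If you go the circular-queue route, the packet assembly can also live in the main loop, fed one byte at a time from the queue. The following is only a sketch under assumed framing: a hypothetical 5-byte packet that starts with a 0x55 header byte, with handle_packet() standing in for whatever your application does with a complete packet.

/* Hypothetical framing: 0x55 header byte followed by 4 payload bytes.
   Adjust PKT_LEN and the header check to your real, predefined structure. */
#define PKT_LEN 5
static unsigned char pkt[PKT_LEN];
static size_t pkt_pos;

/* Call from the main loop with each byte popped from the queue */
static void feed_byte(unsigned char c)
{
    if (pkt_pos == 0 && c != 0x55)
        return;                  /* discard bytes until a header shows up */
    pkt[pkt_pos++] = c;
    if (pkt_pos == PKT_LEN) {
        handle_packet(pkt);      /* hypothetical: validate and act on the packet */
        pkt_pos = 0;
    }
}

A timeout (reset pkt_pos if no byte arrives for a while) guards against losing sync after a dropped byte, which matches the earlier note about needing timeouts when constructing packets.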