I am attempting to implement HMAC verification using SHA256 to interact with an API. I've found the hmac and sha2 crates, and according to their examples they will work perfectly for my purposes.
I have this code:
extern crate hmac;
extern crate sha2;

use hmac::{Hmac, Mac};
use sha2::{Digest, Sha256};

pub fn verify(message: &[u8], code: &[u8], key: &[u8]) -> bool {
    type HmacSha256 = Hmac<Sha256>;
    let mut mac = HmacSha256::new_varkey(key).unwrap();
    mac.input(message);
    let result = mac.result().code();
    return result == code;
}
#[cfg(test)]
mod tests {
    use verify;

    #[test]
    fn should_work() {
        assert!(
            verify(
                b"code=0907a61c0c8d55e99db179b68161bc00&shop=some-shop.myshopify.com&timestamp=1337178173",
                b"4712bf92ffc2917d15a2f5a273e39f0116667419aa4b6ac0b3baaf26fa3c4d20",
                b"hush"
            ),
            "Returned false with correct parameters!"
        );
    }

    #[test]
    fn shouldnt_work() {
        assert!(
            !verify(
                b"things=things&stuff=this_is_pod_racing",
                b"3b3f62798a09c78hjbjsakbycut^%9n29ddeb8f6862b42c7eb6fa65cf2a8cade",
                b"mysecu)reAn111eecretB"
            ),
            "Returned true with incorrect parameters!"
        );
    }
}
cargo test should show a valid HMAC verification, and an invalid one.
Unfortunately, the results given by the verify function disagree with the results from online HMAC generators. As an example, with the message code=0907a61c0c8d55e99db179b68161bc00&shop=some-shop.myshopify.com&timestamp=1337178173 and key hush, this online HMAC generator indicates the hash should be 4712bf92ffc2917d15a2f5a273e39f0116667419aa4b6ac0b3baaf26fa3c4d20, but this causes my tests to fail, and printing out the result confirms that the hash is not correct.
I've confirmed that the results of my byte-string literals are indeed their ASCII equivalents, and otherwise I'm performing this process almost exactly like the examples demonstrate.
I will not be using result == code in the final version because of side-channel attacks; that is just to make my debugging life a little easier.
Cargo.toml
[package]
name = "crypto"
version = "0.1.0"
[dependencies]
hmac = "0.6.2"
sha2 = "0.7.1"
4712bf92ffc2917d15a2f5a273e39f0116667419aa4b6ac0b3baaf26fa3c4d20
This isn't supposed to be treated as an ASCII bytestring. This is a hex-encoding of the raw bytes to an easily human-readable format. You need to properly match encodings:
extern crate hmac;
extern crate sha2;
extern crate hex;

use hmac::{Hmac, Mac};
use sha2::Sha256;

pub fn verify(message: &[u8], code: &str, key: &[u8]) -> bool {
    type HmacSha256 = Hmac<Sha256>;
    let mut mac = HmacSha256::new_varkey(key).unwrap();
    mac.input(message);
    let result = mac.result().code();
    let r2 = hex::encode(&result);
    r2 == code
}
#[test]
fn should_work() {
    assert!(
        verify(
            b"code=0907a61c0c8d55e99db179b68161bc00&shop=some-shop.myshopify.com&timestamp=1337178173",
            "4712bf92ffc2917d15a2f5a273e39f0116667419aa4b6ac0b3baaf26fa3c4d20",
            b"hush"
        ),
        "Returned false with correct parameters!"
    );
}

#[test]
fn shouldnt_work() {
    assert!(
        !verify(
            b"things=things&stuff=this_is_pod_racing",
            "3b3f62798a09c78hjbjsakbycut^%9n29ddeb8f6862b42c7eb6fa65cf2a8cade",
            b"mysecu)reAn111eecretB"
        ),
        "Returned true with incorrect parameters!"
    );
}
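Since you mention that result == code won't survive to the final version because of timing side channels: you can also go the other direction, decoding the expected hex into raw bytes and letting the Mac compare in constant time. A minimal sketch, assuming hex::decode and the Mac::verify method re-exported by the hmac crate (verify_ct is just an illustrative name):

extern crate hex;
extern crate hmac;
extern crate sha2;

use hmac::{Hmac, Mac};
use sha2::Sha256;

type HmacSha256 = Hmac<Sha256>;

pub fn verify_ct(message: &[u8], code_hex: &str, key: &[u8]) -> bool {
    // A code that is not even valid hex can never match.
    let code = match hex::decode(code_hex) {
        Ok(bytes) => bytes,
        Err(_) => return false,
    };
    let mut mac = HmacSha256::new_varkey(key).unwrap();
    mac.input(message);
    // `verify` compares in constant time, unlike `==` on byte slices.
    mac.verify(&code).is_ok()
}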
See also:
How do I convert a string to hex in Rust?
Why do I get incorrect values when implementing HMAC-SHA256?
Using PBKDF2 key derivation to properly create user-readable salt with rust-crypto
When I'm testing functions that have an obvious, slower, brute-force alternative, I've often found it helpful to write both functions, and verify that the outputs match when debugging flags are on. In C, it might look something like this:
#include <inttypes.h>
#include <stdio.h>

#ifdef NDEBUG
#define _rsqrt rsqrt
#else
#include <assert.h>
#include <math.h>
#endif

// https://en.wikipedia.org/wiki/Fast_inverse_square_root
float _rsqrt(float number) {
    const float x2 = number * 0.5F;
    const float threehalfs = 1.5F;

    union {
        float f;
        uint32_t i;
    } conv = {number}; // member 'f' set to value of 'number'.

    // approximation via Newton's method
    conv.i = 0x5f3759df - (conv.i >> 1);
    conv.f *= (threehalfs - (x2 * conv.f * conv.f));
    return conv.f;
}

#ifndef NDEBUG
float rsqrt(float number) {
    float res = _rsqrt(number);
    // brute force solution to verify
    float correct = 1 / sqrt(number);
    // make sure the approximation is within 1% of correct
    float err = fabs(res - correct) / correct;
    assert(err < 0.01);
    // for exposition sake: large scale systems would verify quietly
    printf("DEBUG: rsqrt(%f) -> %f error\n", number, err);
    return res;
}
#endif

float graphics_code() {
    // graphics code that invokes rsqrt a bunch of different ways
    float total = 0;
    for (float i = 1; i < 10; i++)
        total += rsqrt(i);
    return total;
}

int main(int argc, char *argv[]) {
    printf("%f\n", graphics_code());
    return 0;
}
and execution might look like this (if the above code is in tmp.c):
$ clang tmp.c -o tmp -lm && ./tmp # debug mode
DEBUG: rsqrt(1.000000) -> 0.001693 error
DEBUG: rsqrt(2.000000) -> 0.000250 error
DEBUG: rsqrt(3.000000) -> 0.000872 error
DEBUG: rsqrt(4.000000) -> 0.001693 error
DEBUG: rsqrt(5.000000) -> 0.000162 error
DEBUG: rsqrt(6.000000) -> 0.001389 error
DEBUG: rsqrt(7.000000) -> 0.001377 error
DEBUG: rsqrt(8.000000) -> 0.000250 error
DEBUG: rsqrt(9.000000) -> 0.001140 error
4.699923
$ clang tmp.c -o tmp -lm -O3 -DNDEBUG && ./tmp # production mode
4.699923
I like to do this in addition to unit and integration tests because it makes the source of a lot of errors more obvious. It will catch boundary cases that I may have forgotten to unit test, and it will naturally expand in scope to whatever more complex cases I may need in the future (e.g. if the light settings change and I need accuracy for much higher values).
I'm learning Rust, and I really like the natively established separation of concerns between testing and production code. I'm trying to do something similar to the above, but can't figure out the best way to do it. From what I gather in this thread, I could probably do it with some combination of macro_rules! and #[cfg(...)] in the source code, but it feels like I would be breaking the test/production barrier. Ideally I would like to be able to just drop a verification wrapper around the already-defined function, but only for testing. Are macros and cfg my best option here? Can I redefine the default namespace for the imported package just when testing, or do something more clever with macros? I understand that normally files shouldn't be able to modify how imports are linked, but is there an exception for testing? What if I also want it to be wrapped when the module importing it is being tested?
I'm also open to the response that this is a bad way to do testing/verification, but please address the advantages I mentioned above. (Or as a bonus, is there a way the C code can be improved?)
If this isn't currently possible, is it a reasonable thing to go into a feature request?
it feels like I would be breaking the test/production barrier.
Yes, but I don't get why you are concerned about this; your existing code already breaks that boundary. You can use debug_assert and friends to ensure that the function is only called and verified when debug assertions are enabled. If you want to be doubly-sure, you can use cfg(debug_assertions) to only define your slow function then as well:
pub fn add(a: i32, b: i32) -> i32 {
    let fast = fast_but_tricky(a, b);
    debug_assert_eq!(fast, slow_but_right(a, b));
    fast
}

fn fast_but_tricky(a: i32, b: i32) -> i32 {
    a + a + b - a
}

#[cfg(debug_assertions)]
fn slow_but_right(a: i32, b: i32) -> i32 {
    a + b
}
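One caveat worth knowing: debug_assert_eq! expands to if cfg!(debug_assertions) { assert_eq!(...) }, so its arguments are still type-checked in release builds. With slow_but_right gated behind #[cfg(debug_assertions)], a plain cargo build --release would fail to resolve the name, so either leave the slow function unconditionally defined or put the whole checking wrapper behind the same cfg.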
I don't like this solution. I prefer to keep the testing code more distinct from the production code. What I do instead is use property-based testing to help ensure that my tests cover what is important. I've used proptest to...
Compare a Rust implementation against C
Compare a SIMD implementation against standard
I usually take any cases that are found and create dedicated unit tests for them.
In this case, the proptest might look like:
pub fn add(a: i32, b: i32) -> i32 {
// fast but tricky
a + a + b - a
}
#[cfg(test)]
mod test {
use super::*;
use proptest::{proptest, prop_assert_eq};
fn slow_but_right(a: i32, b: i32) -> i32 {
a + b
}
proptest! {
#[test]
fn same_as_slow_version(a: i32, b: i32) {
prop_assert_eq!(add(a, b), slow_but_right(a, b));
}
}
}
Which finds an error with my "clever" implementation in less than a tenth of a second:
thread 'test::same_as_slow_version' panicked at 'Test failed: attempt to add with overflow; minimal failing
input: a = 375403587, b = 1396676474
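Following the earlier point about promoting found cases into dedicated unit tests, the minimal failing input above can be pinned down as a regression test (a sketch; it keeps failing until add is actually fixed, e.g. by dropping the gratuitous + a - a):

#[cfg(test)]
mod regressions {
    use super::*;

    // Pinned from the minimal failing input that proptest reported:
    // 375403587 + 375403587 + 1396676474 overflows i32 before the
    // subtraction can bring the value back into range.
    #[test]
    fn overflow_case_from_proptest() {
        assert_eq!(add(375403587, 1396676474), 1772080061);
    }
}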
My initial intent was to convert a signed primitive number to its hexadecimal representation in a way that preserves the number's sign. It turns out that the current implementations of LowerHex, UpperHex and relatives for signed primitive integers will simply treat them as unsigned. Regardless of what extra formatting flags that I add, these implementations appear to simply reinterpret the number as its unsigned counterpart for formatting purposes. (Playground)
println!("{:X}", 15i32); // F
println!("{:X}", -15i32); // FFFFFFF1 (expected "-F")
println!("{:X}", -0x80000000i32); // 80000000 (expected "-80000000")
println!("{:+X}", -0x80000000i32); // +80000000
println!("{:+o}", -0x8000i16); // +100000
println!("{:+b}", -0x8000i16); // +1000000000000000
The documentation in std::fmt is not clear on whether this is supposed to happen, or is even valid, and UpperHex (or any other formatting trait) does not mention that the implementations for signed integers interpret the numbers as unsigned. There seem to be no related issues on Rust's GitHub repository either. (Post-addendum notice: Starting from 1.24.0, the documentation has been improved to properly address these concerns, see issue #42860)
Ultimately, one could implement specific functions for the task (as below), with the unfortunate downside of not being very compatible with the formatter API.
fn to_signed_hex(n: i32) -> String {
    if n < 0 {
        // `wrapping_abs` plus `as u32` also handles i32::MIN,
        // where plain `-n` would overflow
        format!("-{:X}", n.wrapping_abs() as u32)
    } else {
        format!("{:X}", n)
    }
}

assert_eq!(to_signed_hex(-15i32), "-F".to_string());
Is this behaviour for signed integer types intentional? Is there a way to do this formatting procedure while still adhering to a standard Formatter?
Is there a way to do this formatting procedure while still adhering to a standard Formatter?
Yes, but you need to make a newtype in order to provide a distinct implementation of UpperHex. Here's an implementation that respects the +, # and 0 flags (and possibly more, I haven't tested):
use std::fmt::{self, Formatter, UpperHex};

struct ReallySigned(i32);

impl UpperHex for ReallySigned {
    fn fmt(&self, f: &mut Formatter) -> fmt::Result {
        let prefix = if f.alternate() { "0x" } else { "" };
        // `wrapping_abs` plus `as u32` avoids the overflow of `abs` on i32::MIN
        let bare_hex = format!("{:X}", self.0.wrapping_abs() as u32);
        f.pad_integral(self.0 >= 0, prefix, &bare_hex)
    }
}

fn main() {
    for &v in &[15, -15] {
        for &v in &[&v as &UpperHex, &ReallySigned(v) as &UpperHex] {
            println!("Value: {:X}", v);
            println!("Value: {:08X}", v);
            println!("Value: {:+08X}", v);
            println!("Value: {:#08X}", v);
            println!("Value: {:+#08X}", v);
            println!();
        }
    }
}
This is like Francis Gagné's answer, but made generic to handle i8 through i128.
use std::fmt::{self, Formatter, UpperHex};
use num_traits::Signed;

struct ReallySigned<T: PartialOrd + Signed + UpperHex>(T);

impl<T: PartialOrd + Signed + UpperHex> UpperHex for ReallySigned<T> {
    fn fmt(&self, f: &mut Formatter) -> fmt::Result {
        let prefix = if f.alternate() { "0x" } else { "" };
        // note: the generic `abs` still overflows on the minimum value of each width
        let bare_hex = format!("{:X}", self.0.abs());
        f.pad_integral(self.0 >= T::zero(), prefix, &bare_hex)
    }
}

fn main() {
    println!("{:#X}", -0x12345678);
    println!("{:#X}", ReallySigned(-0x12345678));
}
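The wrapper can now be dropped in at any signed width; a couple of illustrative calls (values chosen arbitrarily):

println!("{:+#X}", ReallySigned(-0x12i8)); // -0x12
println!("{:+#X}", ReallySigned(0x7Fi8));  // +0x7F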
I am trying to make an iterator that maps a string to an integer:
fn main() {
    use std::collections::HashMap;

    let mut word_map = HashMap::new();
    word_map.insert("world!", 0u32);

    let sentence: Vec<&str> = vec!["Hello", "world!"];
    let int_sentence: Vec<u32> = sentence
        .into_iter()
        .map(|x| word_map.entry(x).or_insert(word_map.len() as u32))
        .collect();
}
(Rust playground)
This fails with
the trait core::iter::FromIterator<&mut u32> is not implemented for the type collections::vec::Vec<u32>
Adding a dereference operator around the word_map.entry().or_insert() expression does not work, as it complains about borrowing, which is surprising to me since I'm just trying to copy the value.
The borrow checker uses lexical lifetime rules, so you can't have conflicting borrows in a single expression. The solution is to extract getting the length into a separate let statement:
let int_sentence: Vec<u32> = sentence
    .into_iter()
    .map(|x| *({
        let len = word_map.len() as u32;
        word_map.entry(x).or_insert(len)
    }))
    .collect();
Such issues will hopefully go away when Rust supports non-lexical lifetimes.
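In the meantime, hoisting the length lookup to the top of the closure body expresses the same workaround a little more cleanly (identical behavior, just restructured):

let int_sentence: Vec<u32> = sentence
    .into_iter()
    .map(|x| {
        let len = word_map.len() as u32;
        *word_map.entry(x).or_insert(len)
    })
    .collect();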
For learning purposes I am currently trying to write a small program implementing a UDP echo server that works on a set of ports (say 10000-60000). Since it would not be good to spawn 50k threads for this, I need to use asynchronous IO, and mio is an excellent match for this task. But I've hit a problem right from the start with this code:
extern crate mio;
extern crate bytes;

use mio::udp::*;
use bytes::MutSliceBuf;

fn main() {
    let addr = "127.0.0.1:10000".parse().unwrap();
    let socket = UdpSocket::bound(&addr).unwrap();

    let mut buf = [0; 128];
    socket.recv_from(&mut MutSliceBuf::wrap(&mut buf));
}
It is almost a full copy-paste from mio's test_udp_socket.rs. But while mio's tests pass successfully, when I try to compile this code I get the following error:
src/main.rs:12:12: 12:55 error: the trait `bytes::buf::MutBuf` is not implemented for the type `bytes::buf::slice::MutSliceBuf<'_>` [E0277]
src/main.rs:12 socket.recv_from(&mut MutSliceBuf::wrap(&mut buf));
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
src/main.rs:12:12: 12:55 help: run `rustc --explain E0277` to see a detailed explanation
But examining the code of src/buf/slice.rs from the bytes crate (and my local copy of it), we can clearly see that this trait is implemented:
impl<'a> MutBuf for MutSliceBuf<'a> {
    fn remaining(&self) -> usize {
        self.bytes.len() - self.pos
    }

    fn advance(&mut self, mut cnt: usize) {
        cnt = cmp::min(cnt, self.remaining());
        self.pos += cnt;
    }

    unsafe fn mut_bytes<'b>(&'b mut self) -> &'b mut [u8] {
        &mut self.bytes[self.pos..]
    }
}
It's probably something trivial, but I can't find it... What could be the problem causing this error?
I am using rustc 1.3.0 (9a92aaf19 2015-09-15), with the mio and bytes crates taken straight from GitHub.
Using Cargo with
[dependencies]
mio = "*"
bytes = "*"
this runs for me. Using the GitHub dependency,
[dependencies.mio]
git = "https://github.com/carllerche/mio.git"
gives the error you mention.
Strangely, version 0.4 depends on
bytes = "0.2.11"
whereas master depends on
git = "https://github.com/carllerche/bytes"
rev = "7edb577d0a"
which is only version 0.2.10. Strange.
The problem is that you end up getting two bytes dependencies compiled, so the error is more like
the trait `mio::bytes::buf::MutBuf` is not implemented for the type `self::bytes::buf::slice::MutSliceBuf<'_>`
The simplest way I see to fix this is to just use both packages from crates.io.
[dependencies]
mio = "*"
bytes = "*"
Another way is to use
[dependencies.bytes]
git = "https://github.com/carllerche/bytes"
rev = "7edb577d0a"
in your own Cargo.toml, such that you share versions.
I've been trying to find an easy way to read variables in Rust, but haven't had any luck so far. All the examples in the Rust Book deal with strings, AFAIK; I couldn't find anything concerning integers or floats that would work.
I don't have a Rust compiler on this machine, but based in part on this answer that comes close, you want something like...
let user_val = match input_string.parse::<i32>() {
    Ok(x) => x,
    Err(_) => -1,
};
Or, as pointed out in the comments,
let user_val = input_string.parse::<i32>().unwrap_or(-1);
...though your choice of integer size and default value might obviously be different, and you don't always need the type qualifier (::<i32>) on parse() when the type can be inferred from the assignment.
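Since the question also asks about floats: parse() works for any type implementing FromStr, so the same pattern covers f64 (a one-line sketch, still assuming input_string holds the line that was read):

let user_val = input_string.trim().parse::<f64>().unwrap_or(-1.0);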
To read user input, you always read a set of bytes. Sometimes, you can interpret those bytes as a UTF-8 string. You can then further interpret the string as an integral or floating point number (or lots of other things, like an IP address).
Here's a complete example of reading a single line of input and parsing it as a 32-bit signed integer:
use std::io;

fn main() {
    let mut input = String::new();
    io::stdin().read_line(&mut input).expect("Not a valid string");

    let input_num: i32 = input.trim().parse().expect("Not a valid number");

    println!("Your number plus one is {}", input_num + 1);
}
Note that no user-friendly error handling is taking place. The program simply panics if reading input or parsing fails. Running the program produces:
$ ./input
41
Your number plus one is 42
A set of bytes comprises the input. In Rust, you accept the input as a UTF-8 String, then parse the string into an integer or floating-point number. In the simple approach you accept the string and parse it, writing an `expect` statement for both steps to display a message about what went wrong when the program panics at runtime.
fn main() {
    let mut x = String::new();
    std::io::stdin()
        .read_line(&mut x)
        .expect("Failed to read input.");
    let x: u32 = x.trim().parse()
        .expect("Enter a number not a string.");
    println!("{:?}", x);
}
If the program fails to parse the input string, it panics and displays the error message. Notice that the program still panics; we are not handling the error gracefully. One more thing to notice: we can keep the same variable name x, rather than something like x_int, thanks to the variable shadowing feature. To handle the error better, we can use the match construct.
fn main() {
    let mut x = String::new();

    match std::io::stdin().read_line(&mut x) {
        Ok(_) => println!("String has been taken in."),
        Err(_) => {
            println!("Failed to read input.");
            return;
        }
    };

    let x: u32 = match x.trim().parse() {
        Ok(n) => {
            println!("Converted string to int.");
            n
        }
        Err(_) => {
            println!("Failed to parse.");
            return;
        }
    };

    println!("{:?}", x);
}
This is the longer way, but a nicer one, to handle errors while taking input and parsing a number.
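As a final variation (a sketch, not part of the answers above): instead of returning on a parse failure, you can loop until the user enters something valid.

use std::io;

// Keep prompting until a line parses as the requested number type.
fn read_u32() -> u32 {
    loop {
        let mut line = String::new();
        io::stdin()
            .read_line(&mut line)
            .expect("Failed to read input.");
        match line.trim().parse() {
            Ok(n) => return n,
            Err(_) => println!("Please enter a valid number."),
        }
    }
}

fn main() {
    let x = read_u32();
    println!("{:?}", x);
}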