Golang crypto multiple calls have different responses - cryptography

I'm having a problem with some Go code I've written for a password authentication library. The general idea is to provide two functions, Check() and New(), which are both given a password and a 256-bit HMAC key. The Check() function also takes a 256-bit salt and a 256-bit hash, and returns a boolean. The New() function returns a new, random salt and its corresponding hash. Both functions rely on a helper, hash(), that uses scrypt for key stretching and does the real work of generating an output hash.
This was working when I originally wrote it (as evidenced by the fact that I have working test data generated by an earlier, lost revision of the code).
The problem I'm now having is that Check() works perfectly when given data generated by the old version of the code, but fails with any data generated by the code's own New() function (both use the underlying hash() function).
I know, I should have had the code under Git version control from the start! I've learnt my lesson now.
I've grouped the functions, and a quick demo of the problem into one .go file, as below, and added some output for debugging:
package main
import (
"code.google.com/p/go.crypto/scrypt"
"crypto/hmac"
"crypto/rand"
"crypto/sha256"
"crypto/subtle"
"errors"
"fmt"
"io"
)
// Constants for scrypt. See code.google.com/p/go.crypto/scrypt
const (
    KEYLENGTH = 32
    N         = 16384
    R         = 8
    P         = 1
)
// hash takes an HMAC key, a password and a salt (as byte slices)
// scrypt transforms the password and salt, and then HMAC transforms the result.
// Returns the resulting 256 bit hash.
func hash(hmk, pw, s []byte) (h []byte, err error) {
    sch, err := scrypt.Key(pw, s, N, R, P, KEYLENGTH)
    if err != nil {
        return nil, err
    }
    hmh := hmac.New(sha256.New, hmk)
    hmh.Write(sch)
    h = hmh.Sum(nil)
    hmh.Reset() // Probably not necessary
    return h, nil
}
// Check takes an HMAC key, a hash to check, a password and a salt (as byte slices)
// Calls hash().
// Compares the resulting 256 bit hash against the check hash and returns a boolean.
func Check(hmk, h, pw, s []byte) (chk bool, err error) {
    // Print the input hash
    fmt.Printf("Hash: %x\nHMAC: %x\nSalt: %x\nPass: %x\n", h, hmk, s, []byte(pw))
    hchk, err := hash(hmk, pw, s)
    if err != nil {
        return false, err
    }
    // Print the hash to compare against
    fmt.Printf("Hchk: %x\n", hchk)
    if subtle.ConstantTimeCompare(h, hchk) != 1 {
        return false, errors.New("Error: Hash verification failed")
    }
    return true, nil
}
// New takes an HMAC key and a password (as byte slices)
// Generates a new salt using "crypto/rand"
// Calls hash().
// Returns the resulting 256 bit hash and salt.
func New(hmk, pw []byte) (h, s []byte, err error) {
    s = make([]byte, KEYLENGTH)
    _, err = io.ReadFull(rand.Reader, s)
    if err != nil {
        return nil, nil, err
    }
    h, err = hash(pw, hmk, s)
    if err != nil {
        return nil, nil, err
    }
    fmt.Printf("Hash: %x\nSalt: %x\nPass: %x\n", h, s, []byte(pw))
    return h, s, nil
}
func main() {
    // Known values that work
    pass := "pleaseletmein"
    hash := []byte{
        0x6f, 0x38, 0x7b, 0x9c, 0xe3, 0x9d, 0x9, 0xff,
        0x6b, 0x1c, 0xc, 0xb5, 0x1, 0x67, 0x1d, 0x11,
        0x8f, 0x72, 0x78, 0x85, 0xca, 0x6, 0x50, 0xd0,
        0xe6, 0x8b, 0x12, 0x9c, 0x9d, 0xf4, 0xcb, 0x29,
    }
    salt := []byte{
        0x77, 0xd6, 0x57, 0x62, 0x38, 0x65, 0x7b, 0x20,
        0x3b, 0x19, 0xca, 0x42, 0xc1, 0x8a, 0x4, 0x97,
        0x48, 0x44, 0xe3, 0x7, 0x4a, 0xe8, 0xdf, 0xdf,
        0xfa, 0x3f, 0xed, 0xe2, 0x14, 0x42, 0xfc, 0xd0,
    }
    hmac := []byte{
        0x70, 0x23, 0xbd, 0xcb, 0x3a, 0xfd, 0x73, 0x48,
        0x46, 0x1c, 0x6, 0xcd, 0x81, 0xfd, 0x38, 0xeb,
        0xfd, 0xa8, 0xfb, 0xba, 0x90, 0x4f, 0x8e, 0x3e,
        0xa9, 0xb5, 0x43, 0xf6, 0x54, 0x5d, 0xa1, 0xf2,
    }
    // Check the known values. This Works.
    fmt.Println("Checking known values...")
    chk, err := Check(hmac, hash, []byte(pass), salt)
    if err != nil {
        fmt.Printf("%s\n", err)
    }
    fmt.Printf("%t\n", chk)
    fmt.Println()
    // Create new hash and salt from the known HMAC and Salt
    fmt.Println("Creating new hash and salt values...")
    h, s, err := New(hmac, []byte(pass))
    if err != nil {
        fmt.Printf("%s\n", err)
    }
    // Check the new values. This Fails!
    fmt.Println("Checking new hash and salt values...")
    chk, err = Check(hmac, h, []byte(pass), s)
    if err != nil {
        fmt.Printf("%s\n", err)
    }
    fmt.Printf("%t\n", chk)
}
I've tried this on both Linux 64-bit and Windows 8 64-bit, and it fails on both.
Any help would be much appreciated! As I said, I did have this working at some point, but I seem to have broken it somewhere along the way. I only discovered it wasn't working when writing unit tests... I suppose that's what they're for!
Thanks,
Mike.

You seem to have reversed the arguments to hash() in one of your functions. In Check(), you have:
hchk, err := hash(hmk, pw, s)
While in New() you have:
h, err = hash(pw, hmk, s)
These obviously won't produce the same result, hence the verification failure.
With three arguments of the same type like this, such mistakes aren't too surprising. Perhaps it would be worth restructuring things to let the type system catch this class of error?
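One way to do that (a minimal sketch of the idea, not your actual API) is to give each argument its own named type, so that swapping a password and a key at a call site becomes a compile-time error:
package main

import "fmt"

// Distinct named types for each argument. Note that a plain []byte still
// converts implicitly to any of these, so the protection only applies at
// call sites that keep values in the named types (wrapping each in a
// struct would make the checking stricter).
type HMACKey []byte
type Password []byte
type Salt []byte

// hash stands in for the real scrypt+HMAC computation.
func hash(hmk HMACKey, pw Password, s Salt) []byte {
    out := append([]byte{}, hmk...)
    out = append(out, pw...)
    return append(out, s...)
}

func main() {
    k, p, s := HMACKey{1}, Password{2}, Salt{3}
    fmt.Printf("%x\n", hash(k, p, s))
    // hash(p, k, s) // compile error: cannot use p (Password) as HMACKey
}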

Related

How to use the ECDSA.sol module correctly?

I have this contract using the ECDSA library.
import "./ECDSA.sol";
struct Signature {
    uint8 v;
    bytes32 r;
    bytes32 s;
}
function make(Signature memory sign) public returns(bool)
I'm trying to understand the parameters I have to use in this case. From what I can see it's a tuple-type value, but I can't figure out what the v, r, and s values look like. Where can I get these values for my address?
The v, r, and s parameters are the result of signing a message with a private key. The signature is 65 bytes long, split into 3 parts:
65 byte array (of type bytes in Solidity) arranged the following way: [[v (1)], [r (32)], [s (32)]].
Source: OpenZeppelin
Sign off-chain (because you're using a private key). Note the address in the comment; we'll verify it on-chain later.
const signature = await web3.eth.accounts.sign(
    'Hello world',
    // below is the private key to the address `0x0647EcF0D64F65AdA7991A44cF5E7361fd131643`
    '02ed07b6d5f2e29907962d2bfde8f46f03c46e79d5f2ded0b1e0c27fa82f1384'
);
console.log(signature);
Output
{
    message: 'Hello world',
    messageHash: '0x8144a6fa26be252b86456491fbcd43c1de7e022241845ffea1c3df066f7cfede',
    v: '0x1c',
    r: '0x285e6fbb504b57dca3ceacc851a7bfa37743c79b5c53fb184f4cc0b10ebff6ad',
    s: '0x245f558fa13540029f0ee2dc0bd73264cf04f28ba9c2520ad63ddb1f2e7e9b24',
    signature: '0x285e6fbb504b57dca3ceacc851a7bfa37743c79b5c53fb184f4cc0b10ebff6ad245f558fa13540029f0ee2dc0bd73264cf04f28ba9c2520ad63ddb1f2e7e9b241c'
}
Note that v is the last byte of signature, r is the first half, and s is the second half (excluding the last byte).
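If you process the signature off-chain, the split is a simple byte-slice operation. Here is a minimal Go sketch of the same [r (32) || s (32) || v (1)] split, using the example signature from the output above:
package main

import (
    "encoding/hex"
    "fmt"
    "strings"
)

func main() {
    // example signature from above: r (32 bytes), then s (32 bytes), then v (1 byte)
    sigHex := "0x285e6fbb504b57dca3ceacc851a7bfa37743c79b5c53fb184f4cc0b10ebff6ad245f558fa13540029f0ee2dc0bd73264cf04f28ba9c2520ad63ddb1f2e7e9b241c"
    sig, err := hex.DecodeString(strings.TrimPrefix(sigHex, "0x"))
    if err != nil || len(sig) != 65 {
        panic("unexpected signature")
    }
    r, s, v := sig[:32], sig[32:64], sig[64]
    fmt.Printf("r: 0x%x\ns: 0x%x\nv: 0x%x\n", r, s, v)
}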
Verify on-chain
pragma solidity ^0.8;
import "https://github.com/OpenZeppelin/openzeppelin-contracts/blob/master/contracts/utils/cryptography/ECDSA.sol";
contract MyContract {
    function foo() external pure returns (bool) {
        address recovered = ECDSA.recover(
            0x8144a6fa26be252b86456491fbcd43c1de7e022241845ffea1c3df066f7cfede, // messageHash
            0x1c, // v
            0x285e6fbb504b57dca3ceacc851a7bfa37743c79b5c53fb184f4cc0b10ebff6ad, // r
            0x245f558fa13540029f0ee2dc0bd73264cf04f28ba9c2520ad63ddb1f2e7e9b24 // s
        );
        return recovered == address(0x0647EcF0D64F65AdA7991A44cF5E7361fd131643);
    }
}

Using ecrecover function - Solidity

I'm trying to verify a message. I searched on Stack Overflow and found the ecrecover function, but when I use it, it returns a different address from the one I expect.
function verify(bytes32 hash, uint8 v, bytes32 r, bytes32 s) constant returns (address) {
    bytes memory prefix = "\x19Ethereum Signed Message:\n32";
    bytes32 prefixedHash = keccak256(prefix, hash);
    return ecrecover(prefixedHash, v, r, s);
}
signature object:
{
    message: 'a',
    messageHash: '0x34f291c0b5f0c13c8f43e9d37c04094c22234da43f4040adb36654c98235b4b3',
    v: '0x1b',
    r: '0x944f8187c19a711259e32dd9ab0f005c97c9e2013c735f823d3ad34c7cd5030f',
    s: '0x254607e8d32e8a0436c8d678fe7d3478c8858fd903e164c51f8a8595e723b7a7',
    signature: '0x944f8187c19a711259e32dd9ab0f005c97c9e2013c735f823d3ad34c7cd5030f254607e8d32e8a0436c8d678fe7d3478c8858fd903e164c51f8a8595e723b7a71b'
}
Input (I pass it to the Remix IDE):
"0x34f291c0b5f0c13c8f43e9d37c04094c22234da43f4040adb36654c98235b4b3", 0x1b, "0x944f8187c19a711259e32dd9ab0f005c97c9e2013c735f823d3ad34c7cd5030f", "0x254607e8d32e8a0436c8d678fe7d3478c8858fd903e164c51f8a8595e723b7a7"
Output (wrong):
0x5dd277a46b3ab8ce30735d82df5e6e8312bce7ef
Please help me figure out the problem. Many thanks.

Use Gob to write logs to a file in an append style

Would it be possible to use Gob encoding for appending structs in series to the same file using append? It works for writing, but when reading with the decoder more than once I run into:
extra data in buffer
So I wonder whether that's possible in the first place, or whether I should use something like JSON and append documents on a per-line basis instead. The alternative would be to serialize a slice, but then reading it back as a whole would defeat the purpose of appending.
The gob package wasn't designed to be used this way. A gob stream has to be written by a single gob.Encoder, and it also has to be read by a single gob.Decoder.
The reason is that the gob package not only serializes the values you pass to it, it also transmits data describing their types:
A stream of gobs is self-describing. Each data item in the stream is preceded by a specification of its type, expressed in terms of a small set of predefined types.
This type information is part of the encoder's / decoder's internal state (which types have been transmitted, and how); a subsequent new encoder / decoder will not (cannot) analyze the preceding stream to reconstruct the same state and continue where a previous encoder / decoder left off.
Of course if you create a single gob.Encoder, you may use it to serialize as many values as you'd like to.
You can also create a gob.Encoder and write to a file, then later create a new gob.Encoder and append to the same file, but then you must use two gob.Decoders to read those values, exactly matching the encoding process.
As a demonstration, let's follow an example that writes to an in-memory buffer (bytes.Buffer). Two successive encoders will write to it, then two successive decoders will read the values back. We'll write values of this struct:
type Point struct {
    X, Y int
}
For short, compact code, I use this "error handler" function:
func he(err error) {
    if err != nil {
        panic(err)
    }
}
And now the code:
const n, m = 3, 2
buf := &bytes.Buffer{}
e := gob.NewEncoder(buf)
for i := 0; i < n; i++ {
    he(e.Encode(&Point{X: i, Y: i * 2}))
}
e = gob.NewEncoder(buf)
for i := 0; i < m; i++ {
    he(e.Encode(&Point{X: i, Y: 10 + i}))
}
d := gob.NewDecoder(buf)
for i := 0; i < n; i++ {
    var p *Point
    he(d.Decode(&p))
    fmt.Println(p)
}
d = gob.NewDecoder(buf)
for i := 0; i < m; i++ {
    var p *Point
    he(d.Decode(&p))
    fmt.Println(p)
}
Output (try it on the Go Playground):
&{0 0}
&{1 2}
&{2 4}
&{0 10}
&{1 11}
Note that if we used only one decoder to read all the values (looping until i < n + m), we'd get the same error message you posted in your question when the iteration reaches n + 1, because the subsequent data is not a serialized Point but the start of a new gob stream.
So if you want to stick with the gob package for this, you have to slightly enhance your encoding / decoding process: you have to somehow mark the boundaries where a new encoder takes over (so that when decoding, you'll know to create a new decoder to read subsequent values).
You may use different techniques to achieve this:
You may write out a count before you proceed to write values, telling how many values will be written with the current encoder (a sketch of this approach follows the list below).
If you don't want to, or can't, tell in advance how many values will be written with the current encoder, you may instead write out a special end-of-encoder marker once you're done with it. When decoding, encountering this marker tells you to create a new decoder to read further values.
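Here is a minimal sketch of the count-prefix idea (the helper names and the use of encoding/binary for the count are my own choices, not part of the gob API; it relies on the reader being an io.ByteReader such as bytes.Buffer, so the gob decoder doesn't read ahead past its own messages):
package main

import (
    "bytes"
    "encoding/binary"
    "encoding/gob"
    "fmt"
)

type Point struct {
    X, Y int
}

// appendBatch writes a count, then that many values with a fresh encoder.
func appendBatch(buf *bytes.Buffer, pts []Point) error {
    if err := binary.Write(buf, binary.LittleEndian, uint32(len(pts))); err != nil {
        return err
    }
    enc := gob.NewEncoder(buf)
    for i := range pts {
        if err := enc.Encode(&pts[i]); err != nil {
            return err
        }
    }
    return nil
}

// readAll reads count-prefixed batches, one decoder per batch, until empty.
func readAll(buf *bytes.Buffer) ([]Point, error) {
    var all []Point
    for buf.Len() > 0 {
        var n uint32
        if err := binary.Read(buf, binary.LittleEndian, &n); err != nil {
            return nil, err
        }
        dec := gob.NewDecoder(buf)
        for i := uint32(0); i < n; i++ {
            var p Point
            if err := dec.Decode(&p); err != nil {
                return nil, err
            }
            all = append(all, p)
        }
    }
    return all, nil
}

func main() {
    buf := &bytes.Buffer{}
    if err := appendBatch(buf, []Point{{1, 2}, {3, 4}}); err != nil {
        panic(err)
    }
    if err := appendBatch(buf, []Point{{5, 6}}); err != nil {
        panic(err)
    }
    fmt.Println(readAll(buf)) // [{1 2} {3 4} {5 6}] <nil>
}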
Some things to note here:
The gob package is most efficient and most compact when only a single encoder is used, because each new encoder has to re-transmit the type specifications, adding overhead and slowing the encoding / decoding process.
You can't seek in the data stream; you can only decode a value by reading the whole file from the beginning up until the value you want. Note that this applies to some extent even if you use other formats (such as JSON or XML).
If you want seeking functionality, you'd need to manage a separate index file recording the positions at which new encoders / decoders start, so you could seek to such a position, create a new decoder, and start reading values from there (a small sketch of this follows).
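For illustration, the index can be as simple as a list of byte offsets recorded each time a new encoder is about to be created (a sketch under that assumption; the names are hypothetical, not from any library):
package main

import (
    "encoding/gob"
    "io"
    "io/ioutil"
    "os"
)

// startSegment records the offset at which a new gob stream begins and
// returns a fresh encoder writing from there. The caller persists index
// (e.g. in a side file) to enable seeking later.
func startSegment(f *os.File, index *[]int64) (*gob.Encoder, error) {
    off, err := f.Seek(0, io.SeekEnd)
    if err != nil {
        return nil, err
    }
    *index = append(*index, off)
    return gob.NewEncoder(f), nil
}

func main() {
    f, err := ioutil.TempFile("", "segments-*.gob")
    if err != nil {
        panic(err)
    }
    defer os.Remove(f.Name())
    var index []int64
    enc, err := startSegment(f, &index)
    if err != nil {
        panic(err)
    }
    if err := enc.Encode(struct{ X int }{1}); err != nil {
        panic(err)
    }
    // later: f.Seek(index[i], io.SeekStart); dec := gob.NewDecoder(f); ...
}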
Check a related question: Efficient Go serialization of struct to disk
In addition to the above, I suggest using an intermediate structure to exclude the gob header:
package main

import (
    "bytes"
    "encoding/gob"
    "fmt"
    "io"
    "log"
)

type Point struct {
    X, Y int
}

func main() {
    buf := new(bytes.Buffer)
    enc, _, err := NewEncoderWithoutHeader(buf, new(Point))
    if err != nil {
        log.Fatal(err)
    }
    enc.Encode(&Point{10, 10})
    fmt.Println(buf.Bytes())
}

// HeaderSkiper forwards reads and writes to swappable src/dst streams,
// which lets us redirect the gob header into a separate buffer.
type HeaderSkiper struct {
    src io.Reader
    dst io.Writer
}

func (hs *HeaderSkiper) Read(p []byte) (int, error) {
    return hs.src.Read(p)
}

func (hs *HeaderSkiper) Write(p []byte) (int, error) {
    return hs.dst.Write(p)
}

// NewEncoderWithoutHeader captures the gob type header (plus one sample
// value) in a side buffer, then redirects all further output to w.
func NewEncoderWithoutHeader(w io.Writer, sample interface{}) (*gob.Encoder, *bytes.Buffer, error) {
    hs := new(HeaderSkiper)
    hdr := new(bytes.Buffer)
    hs.dst = hdr
    enc := gob.NewEncoder(hs)
    // Write sample with header info
    if err := enc.Encode(sample); err != nil {
        return nil, nil, err
    }
    // Change writer
    hs.dst = w
    return enc, hdr, nil
}

// NewDecoderWithoutHeader primes a decoder by replaying the captured header
// (decoding the dummy sample), then redirects input to r.
func NewDecoderWithoutHeader(r io.Reader, hdr *bytes.Buffer, dummy interface{}) (*gob.Decoder, error) {
    hs := new(HeaderSkiper)
    hs.src = hdr
    dec := gob.NewDecoder(hs)
    if err := dec.Decode(dummy); err != nil {
        return nil, err
    }
    hs.src = r
    return dec, nil
}
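For completeness, here is a usage sketch pairing the two helpers (my own addition: the dummy Point decoded from hdr primes the decoder with the type information, after which values are read from buf):
// demo assumes the Point type and the helpers above are in the same package.
func demo() {
    buf := new(bytes.Buffer)
    enc, hdr, err := NewEncoderWithoutHeader(buf, new(Point))
    if err != nil {
        log.Fatal(err)
    }
    enc.Encode(&Point{10, 10})
    enc.Encode(&Point{20, 20})

    dec, err := NewDecoderWithoutHeader(buf, hdr, new(Point))
    if err != nil {
        log.Fatal(err)
    }
    var p Point
    for dec.Decode(&p) == nil {
        fmt.Println(p) // {10 10}, then {20 20}
    }
}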
In addition to icza's great answer, you can use the following trick to append to a gob file that already has data in it: when appending, first encode and discard a dummy value so the headers are written and thrown away:
Create the file and encode gobs as usual (the first encode writes the headers)
Close the file
Open the file for appending
Using an intermediate writer, encode a dummy struct (which writes the headers)
Reset the writer
Encode gobs as usual (no headers are written now)
Example:
package main

import (
    "bytes"
    "encoding/gob"
    "fmt"
    "io"
    "io/ioutil"
    "log"
    "os"
)

type Record struct {
    ID   int
    Body string
}

func main() {
    r1 := Record{ID: 1, Body: "abc"}
    r2 := Record{ID: 2, Body: "def"}
    // encode r1
    var buf1 bytes.Buffer
    enc := gob.NewEncoder(&buf1)
    err := enc.Encode(r1)
    if err != nil {
        log.Fatal(err)
    }
    // write to file
    err = ioutil.WriteFile("/tmp/log.gob", buf1.Bytes(), 0600)
    if err != nil {
        log.Fatal(err)
    }
    // encode dummy (which writes the headers)
    var buf2 bytes.Buffer
    enc = gob.NewEncoder(&buf2)
    err = enc.Encode(Record{})
    if err != nil {
        log.Fatal(err)
    }
    // remove dummy
    buf2.Reset()
    // encode r2
    err = enc.Encode(r2)
    if err != nil {
        log.Fatal(err)
    }
    // open file
    f, err := os.OpenFile("/tmp/log.gob", os.O_WRONLY|os.O_APPEND, 0600)
    if err != nil {
        log.Fatal(err)
    }
    // write r2
    _, err = f.Write(buf2.Bytes())
    if err != nil {
        log.Fatal(err)
    }
    // decode file
    data, err := ioutil.ReadFile("/tmp/log.gob")
    if err != nil {
        log.Fatal(err)
    }
    var r Record
    dec := gob.NewDecoder(bytes.NewReader(data))
    for {
        err = dec.Decode(&r)
        if err == io.EOF {
            break
        }
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(r)
    }
}

Allocate uninitialized slice

Is there some way to allocate an uninitialized slice in Go? A frequent pattern is to create a slice of a given size as a buffer, and then only use part of it to receive data. For example:
b := make([]byte, 0x20000) // b is zero initialized
n, err := conn.Read(b)
// do stuff with b[:n]. all of b is zeroed for no reason
This initialization can add up when lots of buffers are being allocated, as the spec states the array is default-initialized (zeroed) on allocation.
You can get non zeroed byte buffers from bufs.Cache.Get (or see CCache for the concurrent safe version). From the docs:
NOTE: The buffer returned by Get is not guaranteed to be zeroed. That's okay for e.g. passing a buffer to io.Reader. If you need a zeroed buffer use Cget.
Technically you could, by allocating the memory outside the Go runtime and using unsafe.Pointer, but this is definitely the wrong thing to do.
A better solution is to reduce the number of allocations. Move buffers outside loops, or, if you need per-goroutine buffers, allocate several of them in a pool and only allocate more when they're needed:
type BufferPool struct {
    Capacity   int
    buffersize int
    buffers    [][]byte
    lock       sync.Mutex
}

func NewBufferPool(buffersize int, capacity int) *BufferPool {
    ret := new(BufferPool)
    ret.Capacity = capacity
    ret.buffersize = buffersize
    return ret
}

func (b *BufferPool) Alloc() []byte {
    b.lock.Lock()
    defer b.lock.Unlock()
    if len(b.buffers) == 0 {
        return make([]byte, b.buffersize)
    }
    ret := b.buffers[len(b.buffers)-1]
    b.buffers = b.buffers[:len(b.buffers)-1]
    return ret
}

func (b *BufferPool) Free(buf []byte) {
    if len(buf) != b.buffersize {
        panic("illegal free")
    }
    b.lock.Lock()
    defer b.lock.Unlock()
    if len(b.buffers) < b.Capacity {
        b.buffers = append(b.buffers, buf)
    }
}
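A usage sketch (assuming the BufferPool above is in the same package, with "fmt" and "strings" imported; strings.NewReader stands in for a real data source such as a net.Conn):
func main() {
    pool := NewBufferPool(0x20000, 32) // 128 KiB buffers, keep at most 32 for reuse
    r := strings.NewReader("hello")    // stand-in for a real data source
    b := pool.Alloc()
    n, err := r.Read(b)
    if err == nil {
        fmt.Printf("read %d bytes: %q\n", n, b[:n])
    }
    pool.Free(b) // return the buffer so later Allocs can reuse it
}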

iOS: send customized uint8 array

I'm trying to make an app that lets an iPhone communicate with other hardware using a dock-to-RS232 cable (which I bought from Redpark). I'm also using the library provided by Redpark. I made a simple bit of code at the beginning, and it worked fine.
UInt8 infoCmd[5] = {0x3E,0x3E,0x05,0x80,0xff};
[rscMgr write:infoCmd Length:5];
Then I wanted to add more commands, so I created a method that returns the different command combinations I need.
- (UInt8 *)requestCommand:(int)commandName {
    UInt8 *command = NULL; // initialized so an unknown command returns NULL
    if (commandName == DATADUMP) {
        command = [Communication buildDataDump];
    }
    if (commandName == GETSERIALINFO) {
        command = [Communication buildGetSerailInfo];
    }
    return command;
}

+ (UInt8 *)buildGetSerailInfo {
    UInt8 *command = malloc(sizeof(UInt8) * 5);
    command[0] = SYN;
    command[1] = SYN;
    command[2] = ENQ;
    command[3] = GETSERIALINFO;
    //command[4] = {SYN, SYN, ENQ, GETSERIALINFO};
    return command;
}
The thing is, some of my commands include data that can be 200 bytes long. How can I create a UInt8 array that makes it easier for me to add bytes?
I'm new to programming, so please explain in detail. Thank you a lot in advance.
Actually, you will just send raw bytes over the wire. I do something similar in one project (not a serial wire, but RS232 commands over TCP/IP), and it becomes quite simple if you use an NSMutableData instance.
A snippet from my code:
static u_int8_t codeTable[] = { 0x1b, 0x74, 0x10 };
static u_int8_t charSet[] = { 0x1b, 0x52, 0x10 };
static u_int8_t formatOff[] = { 0x1b, 0x21, 0x00 };
static u_int8_t reverseOn[] = { 0x1d, 0x42, 0x01 };
static u_int8_t reverseOff[]= { 0x1d, 0x42, 0x00 };
static u_int8_t paperCut[] = { 0x1d, 0x56, 0x0 };
NSMutableData *mdata = [NSMutableData data];
[mdata appendBytes:&formatOff length:sizeof(formatOff)];
[mdata appendBytes:&reverseOff length:sizeof(reverseOff)];
[mdata appendData:[NSData dataWithBytes:&codeTable length:sizeof(codeTable)]];
[mdata appendData:[NSData dataWithBytes:&charSet length:sizeof(charSet)]];
As you can see, I am just appending the command sequences one after another.