"Hash of previous block" from stratum protocol - bitcoin

When I receive mining.notify from a pool (Stratum mining protocol), the previous block hash does not exist in the blockchain.
Can someone explain this?
For example, I received this data from the pool:
{
ExtraNonce1: '849409a3',
ExtraNonce2_size: 4,
previousblockhash: '852ab3acf6baeb51e883cc88f49ef03ae17ed8110009a5fb0000000000000000',
coinbase: [
'01000000010000000000000000000000000000000000000000000000000000000000000000ffffffff5c03e7420a192f5669614254432f4d696e6564206279206c6576616d75732f2cfabe6d6ddfad6fb1792a227710fa21eb53e9664813efb4dd0928111be83e78973f35ad32100000000000000010b40fd717cd7b9ec1',
'ffffffff0479ea912a000000001976a914536ffa992491508dca0354e52f32a3a7a679a53a88ac00000000000000002b6a2952534b424c4f434b3a00b8dedb0ea9e1f81651377346c755d4e05d8ca0b9560adf528f9d28002ff5f70000000000000000266a24b9e11b6d88d3d1c922f8fcdaad85a93a12f093732b1d5108107af224e6cd16ba7772af950000000000000000266a24aa21a9eda1cf7b0087a17891942bc1375cde699540d0ce75e17c4f40a8224eb77bf8103100000000'
],
merkleTree: [
'f2e751916a979c8f5c6e6e455ed6a7486806bc0f4461dd69b3ca7ff997e1b996',
'4a58c7d29b63e0392959529669167a022dcb7842d3ca887dd5f7bc016f8bf61f',
'60bc408ce79b45feb3bd766c33ccdc58b226f61633622fb2f26149a8c1f8de6f',
'daeba6aa2194259fc8ff18f5fc0ed1fcaac756077ad3415c425f9da06fe3bd05',
'dbe7942d0a8cb8daa4a07b06ce890b2d8b9c217bee2aac1352baa9cfdc6ed9a3',
'c74d2e1b1860cadcb03b53e82c39f6273e70c194d5ac52f09e77b745e88254db',
'004fa6b6f3efc01579fd34a6b1481a8580b233e88f4a73ccaebf8e832cd2a9e6',
'ca729fb1c4b7d03753c877a0c81529b7240b05f7b0a1d07b35c4e89829eb0c30',
'2a93783af9811663c532ccb34534eb218951cafc854411ce81a17a9753b3c248',
'b39333007fa1179b8a116dd790e2cc4cba151b8ff10e3ccccf19816962cc803f',
'e77c8fdc22907f11c74dd545e84a1094ef34342825a37bdcbc91d0b85a1bd7f4',
'f7995ac6c21bf3f3b2990152f748db87e93e4f6e9f38ac785eb5f3d84450ba3b'
],
blockVersion: '20000000',
nBits: '170cf4e3'
}
Swapping the endianness of the whole previousblockhash gives 0000000000000000fba5090011d87ee13af09ef488cc83e851ebbaf6acb32a85,
and no such block exists.

I found the answer.
The previous block hash inside mining.notify is encoded as eight 4-byte words, and it is the order of those words, not the bytes inside each word, that has to be reversed.
In my case the field splits into the byte array: 852ab3ac_f6baeb51_e883cc88_f49ef03a_e17ed811_0009a5fb_00000000_00000000
Reversing the order of these eight 4-byte words produces "00000000_00000000_0009a5fb_e17ed811_f49ef03a_e883cc88_f6baeb51_852ab3ac", which is Block 672486.
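For anyone who wants to script this, here is a minimal Python sketch of that word reordering (my own illustration, using the prevhash from the example above):

# Convert the Stratum prevhash into the familiar display form by
# reversing the order of its eight 4-byte (8 hex character) words.
prevhash = "852ab3acf6baeb51e883cc88f49ef03ae17ed8110009a5fb0000000000000000"

words = [prevhash[i:i + 8] for i in range(0, 64, 8)]  # eight 32-bit words as hex
block_hash = "".join(reversed(words))                 # reverse the word order only

print(block_hash)  # matches the hash of Block 672486 shown above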

V8 elements kinds optimization

After reading this article: https://v8.dev/blog/elements-kinds, I am wondering whether null and object are considered the same type by V8 in terms of internal optimizations,
e.g.
[{}, null, {}] vs [{}, {}, {}]
Yes. The only types considered for elements kinds are "small integer", "double", and "anything". null is not an integer or a double, so it's "anything".
Note that elements kinds are tracked per array, not per element. An array's elements kind is the most generic elements kind required for any of its elements:
[1, 2, 3] // "integer" elements (stored as integers internally)
[1, 2, 3.5] // "double" elements (stored as doubles: [1.0, 2.0, 3.5])
[1, 2, {}] // "anything" elements
[1, 2, null] // "anything" elements
[1, 2, "3"] // "anything" elements
The reason is that the benefit of tracking elements kinds in the first place is that some checks can be avoided. That has significant impact (in relative terms) for operations that are otherwise cheap. For example, if you wanted to sum up an array's elements, which are all integers:
for (let i = 0; i < array.length; i++) result += array[i];
Adding integers is really fast (one instruction plus an overflow check), so checking for every element "is this element an integer (so I can do an integer addition)?" (another instruction plus a conditional jump) adds a relatively large overhead. Knowing up front that every element in this array is an integer lets the engine skip those checks inside the loop. By contrast, if the array contained strings and you wanted to concatenate them all, string concatenation is a much slower operation (you have to allocate a new string object for the result, and then decide whether you want to copy the characters or just refer to the input strings), so the overhead added by an additional "is this element a string (so I can do a string concatenation)?" check is probably barely measurable. So tracking "strings" as an elements kind wouldn't provide much benefit, but would add complexity to the implementation and probably a small performance cost in some situations, so V8 doesn't do it. Similarly, if you knew up front "this array contains only null", there isn't anything obvious that you could speed up with that knowledge.
Also: as a JavaScript developer, don't worry about elements kinds. See that blog post as a (hopefully interesting) story about the lengths to which V8 goes to squeeze every last bit of performance out of your code; don't specifically contort your code to make better use of it (or spend time worrying about it). The difference is usually small, and in the cases where it does matter, it'll probably happen without you having to think about it.

Why isn't Rust able to optimise a match on a specific error as well as it does for is_err()? [duplicate]

Consider this silly enum:
enum Number {
    Rational {
        numerator: i32,
        denominator: std::num::NonZeroU32,
    },
    FixedPoint {
        whole: i16,
        fractional: u16,
    },
}
The data in the Rational variant takes up 8 bytes, and the data in the FixedPoint variant takes up 4 bytes. The Rational variant has a field which must be nonzero, so I would hope that the enum layout rules would use that as a discriminator, with zero indicating the presence of the FixedPoint variant.
However, this:
fn main() {
    println!("Number = {}", std::mem::size_of::<Number>());
}
Prints:
Number = 12
So, the enum gets space for an explicit discriminator, rather than exploiting the presence of the nonzero field.
Why isn't the compiler able to make this enum smaller?
Although simple cases like Option<&T> can be handled without reserving space for the tag, the layout calculator in rustc is still not clever enough to optimize the size of enums with multiple non-empty variants.
This is issue #46213 on GitHub.
The case you ask about is pretty clear-cut, but there are similar cases where an enum looks like it should be optimized, but in fact can't be because the optimization would preclude taking internal references; for example, see Why does Rust use two bytes to represent this enum when only one is necessary?

Does the "C" code algorithm in RFC1071 work well on big-endian machine?

As described in RFC1071, when the number of bytes is odd, the last byte should be padded with an extra zero byte to build the final 16-bit word of the checksum. But in the RFC's example "C" code, the leftover byte is simply added to the sum on its own.
That code does work on a little-endian machine, where the padded word [Z, 0] equals Z, but I think there's a problem on a big-endian one, where [Z, 0] equals Z*256.
So I wonder: does the example "C" code in RFC1071 only work on a little-endian machine?
-------------Added later---------------
There is one more example in RFC1071 of "breaking the sum into two groups". Take the data here (addr[] = {0x00, 0x01, 0xf2}) as an example, where "standard" means the situation described by formula [2] (the odd byte padded with a trailing zero), and "C-code" means the situation in the RFC's example C code.
In the "standard" case, the final sum is f201 regardless of the endian difference, since there is no endian issue with the abstract form of [Z, 0] after the "swap". But it does matter in the "C-code" case, because f2 is always the low byte, whether on a big-endian or a little-endian machine.
Thus the checksum computed from the same data (addr and count) varies with the byte order.
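To make the discrepancy concrete, here is a minimal Python sketch (my own illustration, not from the RFC) that simulates the RFC's loop under both byte orders for addr[] = {0x00, 0x01, 0xf2}:

def rfc1071_sum(data: bytes, byteorder: str) -> bytes:
    """Simulate the RFC1071 example code on a machine with the given byte
    order; return the folded 16-bit sum as it would sit in memory, so that
    .hex() shows it in network (big-endian) order."""
    total = 0
    i = 0
    while len(data) - i > 1:                               # full 16-bit words are
        total += int.from_bytes(data[i:i + 2], byteorder)  # fetched in host order
        i += 2
    if i < len(data):                                      # the RFC's odd-byte handling:
        total += data[i]                                   # always added as the low byte
    while total >> 16:                                     # fold carries into 16 bits
        total = (total & 0xffff) + (total >> 16)
    return total.to_bytes(2, byteorder)                    # stored back in host order

data = bytes([0x00, 0x01, 0xf2])
print(rfc1071_sum(data, "little").hex())  # f201 - the expected "standard" sum
print(rfc1071_sum(data, "big").hex())     # 00f3 - not the same sum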
I think you're right. The code in the RFC adds the last byte in as the low-order byte, regardless of whether it is on a little-endian or big-endian machine.
In these examples of code on the web we see they have taken special care with the last byte:
https://github.com/sjaeckel/wireshark/blob/master/epan/in_cksum.c
and in
http://www.opensource.apple.com/source/tcpdump/tcpdump-23/tcpdump/print-ip.c
it does this:
if (nleft == 1)
    sum += htons(*(u_char *)w<<8);
Which means that this text in the RFC is incorrect:
Therefore, the sum may be calculated in exactly the same way
regardless of the byte order ("big-endian" or "little-endian")
of the underlaying hardware. For example, assume a "little-
endian" machine summing data that is stored in memory in network
("big-endian") order. Fetching each 16-bit word will swap
bytes, resulting in the sum; however, storing the result
back into memory will swap the sum back into network byte order.
The following code in place of the original odd byte handling is portable (i.e. will work on both big- and little-endian machines), and doesn't depend on an external function:
if (count > 0)
{
    char buf2[2] = {*addr, 0};
    sum += *(unsigned short *)buf2;
}
(Assumes addr is char * or const char *).
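As a cross-check (again my own sketch, not part of the original answer), the same kind of simulation with the padded odd-byte handling produces the same network-order sum on both byte orders:

def rfc1071_sum_fixed(data: bytes, byteorder: str) -> bytes:
    """Like the simulation above, but with portable odd-byte handling:
    the leftover byte is padded with a zero and fetched like any other word."""
    if len(data) % 2:
        data += b"\x00"                            # trailing pad byte
    total = 0
    for i in range(0, len(data), 2):
        total += int.from_bytes(data[i:i + 2], byteorder)
    while total >> 16:
        total = (total & 0xffff) + (total >> 16)
    return total.to_bytes(2, byteorder)

data = bytes([0x00, 0x01, 0xf2])
print(rfc1071_sum_fixed(data, "little").hex())  # f201
print(rfc1071_sum_fixed(data, "big").hex())     # f201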

Memcpy and Memset on structures of Short Type in C

I have a query about using memset and memcpy on structures and their reliability. For example, I have code that looks like this:
typedef struct
{
    short a[10];
    short b[10];
} tDataStruct;

tDataStruct m, n;
memset(&m, 2, sizeof(m));
memcpy(&n, &m, sizeof(m));
My questions are:
1) With memset, if I set the value 0 it is fine, but when setting 2 I get the elements of m.a and m.b as 514 instead of 2. When I make them char instead of short it is fine. Does that mean we cannot use memset for any initialization other than 0? Is it a limitation of short, for example?
2) Is it reliable to do memcpy between two structures of shorts like the one above? I have huge arrays a, b, c, d, e, ... and I need to make sure the copy is a perfect one-to-one.
3) Am I better off using memset and memcpy on the individual arrays rather than collecting them in a structure as above?
One more query: in the structure above I have arrays of values, but what if I am passed pointers to these arrays and want to collect those pointers in a structure?
typedef struct
{
    short *pa[10];
    short *pb[10];
} tDataStruct;

tDataStruct m, n;
memset(&m, 2, sizeof(m));
memcpy(&n, &m, sizeof(m));
In this case, if I memset or memcpy, it only changes the addresses rather than the values. How do I change the values instead? Is the prototype wrong?
Please suggest; your inputs are very important.
Thanks,
dsp guy
1. memset sets bytes, not shorts, always. 514 = (256*2) + (1*2): the 2s appear on byte boundaries (illustrated in the short sketch below).
1.a. This does, admittedly, lessen its usefulness for purposes such as what you're trying to do (an array fill).
2. It is reliable as long as both structs are of the same type. Just to be clear, these structures are NOT of "type short" as you suggest.
3. If I understand your question, I don't believe it matters as long as they are of the same type.
Just remember, these are byte-level operations, nothing more, nothing less. See also this.
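A quick illustration of that byte arithmetic (a Python sketch of my own, just to show what the memory looks like; the question itself is about C):

import struct

filled = bytes([2]) * 4                # what memset(&m, 2, ...) leaves in memory: 02 02 02 02
print(struct.unpack("<2h", filled))    # (514, 514) -- each short reads back as 0x0202
print(struct.unpack("<2h", bytes(4)))  # (0, 0)     -- all-zero bytes are 0 at any width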
For the second part of your question, try
memset(m.pa, 0, sizeof(*(m.pa)));
memset(m.pb, 0, sizeof(*(m.pb)));
Note the two operations to copy from two different addresses (m.pa and m.pb are effectively addresses, as you recognized). Note also the sizeof: not the sizeof of the references, but the sizeof of what is being referenced. Similarly for memcpy.

(bitcoin) Calculate hash from getwork function - how to do it?

When I call getwork on my bitcoind server, I get the following:
./bitcoind getwork
{
"midstate" : "695d56ae173bbd0fd5f51d8f7753438b940b7cdd61eb62039036acd1af5e51e3",
"data" : "000000013d9dcbbc2d120137c5b1cb1da96bd45b249fd1014ae2c2b400001511000000009726fba001940ebb5c04adc4450bdc0c20b50db44951d9ca22fc5e75d51d501f4deec2711a1d932f00000000000000800000000000000000000000000000000000000000000000000000000000000000000000000000000080020000",
"hash1" : "00000000000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000010000",
"target" : "00000000000000000000000000000000000000000000002f931d000000000000"
}
This protocol does not seem to be documented. How do I compute the hash from this data? I think that this data is in little endian. So the first step is to convert everything to big endian? Once that is done, I calculate the sha256 of the data. The data can be divided into two chunks of 64 bytes each. The hash of the first chunk is given by midstate and therefore does not have to be computed.
I must therefore hash chunk #2 with sha256, using the midstate as the initial hash values. Once that is done, I end up with a hash of chunk 2, which is 32 bytes. I calculate the hash of this chunk one more time to get a final hash.
Then, do I convert everything to little endian and submit the work?
What is hash1 used for?
The hash calculation is documented at Block hashing algorithm.
Start there for the relatively simple basics. The basic data structures are documented in Protocol specification - Bitcoin Wiki. Note that the protocol definition (and the definition of work) more or less assumes that SHA-256 hashes are 256-bit little-endian values, rather than big-endian as the standard implies.
Getwork is more complicated and runs into more serious endian/byte ordering confusion.
First note that the getwork API is optimized to speed up the initial steps of mining.
The midstate and hash1 values are for these performance optimizations and can be ignored. Just look at the "data".
And when a standard sha256 implementation is used, only the first 80 bytes (160 hex characters) of the "data" are hashed.
Unfortunately, the JSON data presented in the getwork data structure has different endian characteristics than what is needed for hashing in the block example above.
They all say to go to the source for the answer, but the C++ source can be big and confusing. A simple alternative is the poold.py code. There is discussion of it here: New mining pool for testing. You only need to look at the first few lines of the "checkwork" routine, and the "bufreverse" and "bytereverse" functions, to get the byte ordering right. In the end it is just a matter of doing a reversal of the bytes in each 32-bit segment of the data. Yes - very odd. But endian issues are tricky and can end up that way....
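To make that byte reversal and the hashing concrete, here is a minimal Python sketch of my own (standard hashlib only, not the poold.py code) applied to the "data" value from the question:

import hashlib

# First 160 hex characters of the getwork "data" field: the 80-byte block
# header. The remainder of "data" is just SHA-256 padding and is not hashed.
data_hex = (
    "000000013d9dcbbc2d120137c5b1cb1da96bd45b249fd1014ae2c2b400001511"
    "000000009726fba001940ebb5c04adc4450bdc0c20b50db44951d9ca22fc5e75"
    "d51d501f4deec2711a1d932f00000000"
)
raw = bytes.fromhex(data_hex)

# Reverse the bytes within each 32-bit segment to recover the real header.
header = b"".join(raw[i:i + 4][::-1] for i in range(0, len(raw), 4))

# The block hash is double SHA-256 of the 80-byte header.
digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()

# The hash is conventionally displayed byte-reversed (as a big-endian number).
print(digest[::-1].hex())

# A miner varies the nonce (the last 4 header bytes) until this hash, read as
# a 256-bit number, falls below the target; with the unsolved nonce above,
# the printed hash will not normally meet it.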
Some other helpful information on the way "getwork" works can be found in discussions at:
Do I understand header hashing?
Stupid newbie question about the nonce
Note that finding the signal amid the noise in the original Bitcoin forum is getting very hard, and there is currently an Area51 proposal for a StackExchange site for Bitcoin and Crypto Currency in general. Come join us!
That sounds right. There is a JavaScript script that calculates the hash, but I do not fully understand it, so I can't say for sure; maybe you will understand it better if you take a look:
this.tryHash = function(midstate, half, data, hash1, target, nonce){
    data[3] = nonce;
    this.sha.reset();
    var h0 = this.sha.update(midstate, data).state;  // compute first hash
    for (var i = 0; i < 8; i++) hash1[i] = h0[i];    // place it in the h1 holder
    this.sha.reset();                                // reset to initial state
    var h = this.sha.update(hash1).state;            // compute final hash
    if (h[7] == 0) {
        var ret = [];
        for (var i = 0; i < half.length; i++)
            ret.push(half[i]);
        for (var i = 0; i < data.length; i++)
            ret.push(data[i]);
        return ret;
    } else return null;
};
SOURCE: https://github.com/jwhitehorn/jsMiner/blob/4fcdd9042a69b309035dfe9c9ddf716119831a16/engine.js#L149-165
Frankly speaking, the Bitcoin block hashing algorithm is not officially described by any source, so
"The hash calculation is documented at Block hashing algorithm."
should read
"The hash calculation is described at Block hashing algorithm."
(en.bitcoin.it/wiki/Block_hashing_algorithm)
By the way, the example code in PHP there comes with a bug (a typo), and the example code in Python generates errors when run under Python 3.3 on Windows XP 32-bit (missing support for string.decode).