I'm trying to slice bytes like this:
bytes memory bytesData = result[32:64];
and it's throwing:
TypeError: Index range access is only supported for dynamic calldata arrays.
It works fine with calldata...
What about memory?
According to the Solidity docs, slicing memory arrays is not currently supported. As you've said, it does work on calldata bytes. This answer on EthereumSE seems to agree.
According to this question on EthSE, you can work around this by slicing the memory array manually with a small assembly helper, for example the slice function from the BytesLib library:
pragma solidity >=0.8.0 <0.9.0;
library BytesLib {
function slice(
bytes memory _bytes,
uint256 _start,
uint256 _length
)
internal
pure
returns (bytes memory)
{
require(_length + 31 >= _length, "slice_overflow");
require(_bytes.length >= _start + _length, "slice_outOfBounds");
bytes memory tempBytes;
// Check whether the length is 0. `iszero` returns 1 for `true` and 0 for `false`.
assembly {
switch iszero(_length)
case 0 {
// Get a location of some free memory and store it in tempBytes as
// Solidity does for memory variables.
tempBytes := mload(0x40)
// Calculate length mod 32 to handle slices that are not a multiple of 32 in size.
let lengthmod := and(_length, 31)
// tempBytes will have the following format in memory: <length><data>
// When copying data we will offset the start forward to avoid allocating additional memory
// Therefore part of the length area will be written, but this will be overwritten later anyways.
// In case no offset is required, the start is set to the data region (0x20 from the tempBytes)
// mc will be used to keep track where to copy the data to.
let mc := add(add(tempBytes, lengthmod), mul(0x20, iszero(lengthmod)))
let end := add(mc, _length)
for {
// Same logic as for mc is applied and additionally the start offset specified for the method is added
let cc := add(add(add(_bytes, lengthmod), mul(0x20, iszero(lengthmod))), _start)
} lt(mc, end) {
// increase `mc` and `cc` to read the next word from memory
mc := add(mc, 0x20)
cc := add(cc, 0x20)
} {
// Copy the data from source (cc location) to the slice data (mc location)
mstore(mc, mload(cc))
}
// Store the length of the slice. This will overwrite any partial data that
// was copied when having slices that are not a multiple of 32.
mstore(tempBytes, _length)
// update free-memory pointer
// allocating the array padded to 32 bytes like the compiler does now
// To set the used memory as a multiple of 32, add 31 to the actual memory usage (mc)
// and remove the modulo 32 (the `and` with `not(31)`)
mstore(0x40, and(add(mc, 31), not(31)))
}
// if we want a zero-length slice let's just return a zero-length array
default {
tempBytes := mload(0x40)
// zero out the 32 bytes slice we are about to return
// we need to do it because Solidity does not garbage collect
mstore(tempBytes, 0)
// update free-memory pointer
// tempBytes uses 32 bytes in memory (even when empty) for the length.
mstore(0x40, add(tempBytes, 0x20))
}
}
return tempBytes;
}
}
https://ethereum.stackexchange.com/questions/122029/how-does-bytes-utils-slice-function-work
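For illustration, a contract wired up to the library above could reproduce the original result[32:64] slice on a memory value roughly like this (a sketch only; the contract and function names are placeholders, and it assumes the library is in the same file or imported):

pragma solidity >=0.8.0 <0.9.0;

contract SliceExample {
    using BytesLib for bytes;

    // Equivalent of `result[32:64]`, but for a bytes value in memory:
    // start at offset 32 and take 32 bytes.
    function middleWord(bytes memory result) external pure returns (bytes memory) {
        return result.slice(32, 32);
    }
}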
Related
If we have the following code:
pragma solidity >= 0.5;
contract stringsContract {
function takesTwo(string memory str, uint idx) public pure returns (bytes memory) {
bytes memory bytesStr = bytes(str);
return bytesStr[idx];
}
}
Why do we get the error "TypeError: return argument bytes1 is not explicitly convertible to expected type (type of first return variable bytes memory)"?
The fix was to change the return type from bytes memory to byte:
contract stringsContract {
function takesTwo(string memory str, uint idx) public pure returns (byte) {
bytes memory bytesStr = bytes(str);
return bytesStr[idx];
}
}
Nevertheless, I'm still curious about the reason of the compilation error. Any thoughts?
Variables of type bytes and string are special arrays. In the first code snippet you are not returning the array itself but a single element of it: bytesStr[idx] is a bytes1, and a single bytes1 is not implicitly convertible to the declared return type bytes memory.
You can find docs: https://solidity.readthedocs.io/en/develop/types.html#bytes-and-strings-as-arrays
When you declare a return value in Solidity the syntax is (value_type value_name), with the name optional; memory is a data-location keyword, not a name. You declared the return type as bytes, a dynamic array, but the expression you return, bytesStr[idx], is a single bytes1.
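If you do want to keep the bytes memory return type, one possible workaround (just a sketch, not from the answers above) is to wrap the single byte back into a bytes value with abi.encodePacked:

pragma solidity >=0.5.0;

contract stringsContract {
    // abi.encodePacked(bytes1) yields a one-byte bytes memory value,
    // so it matches the declared return type.
    function takesTwo(string memory str, uint idx) public pure returns (bytes memory) {
        bytes memory bytesStr = bytes(str);
        return abi.encodePacked(bytesStr[idx]);
    }
}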
I have a huge bitset stored in a database. I want to upload it into a Redis bitset so I can perform bit operations on it. Is there a way to upload this data from either redis-cli or JavaScript code? I am using the bitset.js npm module to load the bitset into my program from the database.
One obvious way is to iterate over my bitset array within my JavaScript code and keep calling redis.setbit(...) multiple times. Is there a way to upload all of it at once? If so, how?
A bitset in Redis is actually just a string, so you can assign to it directly all at once. The bits in the string are the bits of the bitfield, set in left-to-right order. I.e. setting bit number 0 to 1 yields the binary number 10000000, or a single byte with the value 128. This looks like "\x80" when Redis prints it, which you can see for yourself by running setbit foo 0 1 and then get foo in Redis.
So to construct the right string to send to Redis, we just need to read the bits out of your BitSet and construct a buffer, one byte at a time, with the appropriate bits set.
Below is code that uses bitset.js and the redis npm module to transfer a BitSet in JavaScript into a Redis key. Note that this code assumes that the bitfield fits comfortably in memory.
let redis = require('redis'),
BitSet = require('./bitset');
let client = redis.createClient();
// create some data
let bs = new BitSet;
bs.set(0, 1);
bs.set(31, 1);
// calculate how many bytes we'll need (msb() is the index of the highest set bit)
let numBytes = Math.ceil((bs.msb() + 1) / 8);
// construct a zero-filled buffer with that much space
let buffer = Buffer.alloc(numBytes);
// for each byte
for (var i = 0; i < numBytes; i++) {
var byte = 0;
// iterate over each bit
for (var j = 0; j < 8; j++) {
// slide previous bits to the left
byte <<= 1;
// and set the rightmost bit
byte |= bs.get(i*8+j);
}
// put this byte in the buffer
buffer[i] = byte;
}
// now we have a complete buffer to use as our value in Redis
client.set('bitset', buffer, function (err, result) {
client.getbit('bitset', 31, function (err, result) {
console.log('Bit 31 = ' + result);
client.del('bitset', function () {
client.quit();
});
});
});
Is there some way to allocate an uninitialized slice in Go? A frequent pattern is to create a slice of a given size as a buffer, and then only use part of it to receive data. For example:
b := make([]byte, 0x20000) // b is zero initialized
n, err := conn.Read(b)
// do stuff with b[:n]. all of b is zeroed for no reason
This initialization can add up when lots of buffers are being allocated, as the spec states it will default initialize the array on allocation.
You can get non zeroed byte buffers from bufs.Cache.Get (or see CCache for the concurrent safe version). From the docs:
NOTE: The buffer returned by Get is not guaranteed to be zeroed. That's okay for e.g. passing a buffer to io.Reader. If you need a zeroed buffer use Cget.
Technically you could, by allocating the memory outside the Go runtime and using unsafe.Pointer, but this is definitely the wrong thing to do.
A better solution is to reduce the number of allocations. Move buffers outside loops, or, if you need per goroutine buffers, allocate several of them in a pool and only allocate more when they're needed.
type BufferPool struct {
Capacity int
buffersize int
buffers [][]byte
lock sync.Mutex
}
func NewBufferPool(buffersize int, cap int) *BufferPool {
ret := new(BufferPool)
ret.Capacity = cap
ret.buffersize = buffersize
return ret
}
func (b *BufferPool) Alloc() []byte {
b.lock.Lock()
defer b.lock.Unlock()
if len(b.buffers) == 0 {
return make([]byte, b.buffersize)
} else {
ret := b.buffers[len(b.buffers) - 1]
b.buffers = b.buffers[0:len(b.buffers) - 1]
return ret
}
}
func (b *BufferPool) Free(buf []byte) {
if len(buf) != b.buffersize {
panic("illegal free")
}
b.lock.Lock()
defer b.lock.Unlock()
if len(b.buffers) < b.Capacity {
b.buffers = append(b.buffers, buf)
}
}
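For illustration, here is a rough sketch of how the pool above might be used instead of a fresh make per read; readWithPool and the sizes are made up for this example, and it assumes the BufferPool type lives in the same package:

package main

import (
    "fmt"
    "io"
    "strings"
)

// readWithPool borrows a buffer, reads into it, and hands it back to the pool,
// so repeated reads reuse memory instead of allocating (and zeroing) a new slice each time.
func readWithPool(pool *BufferPool, r io.Reader) (int, error) {
    buf := pool.Alloc()
    defer pool.Free(buf)
    n, err := r.Read(buf)
    // work with buf[:n] here, before the deferred Free returns it to the pool
    return n, err
}

func main() {
    pool := NewBufferPool(0x20000, 16) // buffer size, pool capacity
    n, err := readWithPool(pool, strings.NewReader("some data"))
    fmt.Println(n, err)
}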
So I am trying to read a filesystem disk image which has been provided.
What I want to do is read the value at byte 1044 of the filesystem. What I am currently doing is the following:
if (fp = fopen("filesysFile-full", "r")) {
fseek(fp, 1044, SEEK_SET); //Goes to byte 1044
int check[sizeof(char)*4]; //creates a buffer array 4 bytes long
fread(check, 1, 4, fp); //reads 4 bytes from the file
printf("%d",check); //prints
int close = fclose(fp);
if (close == 0) {
printf("Closed");
}
}
The value that check should be printing is 1. However, I am getting negative values which keep changing every time I run the program. I don't understand what I am doing wrong. Am I taking the right approach to reading bytes of the disk and printing them?
What I basically want to do is read bytes of the disk and look at the values at certain offsets. Those bytes are fields which will help me understand the structure/format of the disk.
Any help would be appreciated.
Thank you.
This line:
int check[sizeof(char)*4];
allocates an array of 4 ints.
The type of check is therefore int*, so this line:
printf("%d",check);
prints the address of the array.
What you should do is declare it as a single int:
int check;
and then fread into it:
fread(&check, 1, sizeof(int), fp);
(This code, incidentally, assumes that int is 4 bytes.)
int check[sizeof(char)*4]; //creates a buffer array 4 bytes long
This is incorrect. You are creating an array of four integers, which are typically 32 bits each, and then when you printf("%d",check) you are printing the address of that array, which will probably change every time you run the program. I think what you want is this:
if (fp = fopen("filesysFile-full", "r")) {
fseek(fp, 1044, SEEK_SET); //seeks to byte 1044
int check; //creates a buffer the size of one integer
fread(&check, 1, sizeof(int), fp); //reads an integer (presumably 1) from the file
printf("%d",check); //prints
int close = fclose(fp);
if (close == 0) {
printf("Closed");
}
}
Note that instead of declaring an array of integers, you are declaring just one. Also note the change from fread(check, ...) to fread(&check, ...). The first parameter to fread is the address of the buffer (in this case, a single integer) into which you want to read the data.
Keep in mind that while integers are probably 32 bits long, this isn't guaranteed. Also, on most common machines (little-endian architectures such as x86), integers are stored with the least significant byte first, so you will only read 1 if the data on the disk looks like this at byte 1044:
0x01 0x00 0x00 0x00
If it is the other way around, 0x00 00 00 01, that will be read as 16777216 (0x01000000).
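If you want to avoid depending on the host's int size and byte order entirely, one option (a sketch with made-up function names, not part of the original code) is to read the four bytes individually and assemble the little-endian value yourself:

#include <stdio.h>
#include <stdint.h>

/* Reads a 32-bit little-endian value at the given file offset, independent of
   the host's int size and byte order. Returns 1 on success, 0 on failure. */
static int read_u32_le(FILE *fp, long offset, uint32_t *out)
{
    unsigned char b[4];
    if (fseek(fp, offset, SEEK_SET) != 0)
        return 0;
    if (fread(b, 1, 4, fp) != 4)
        return 0;
    *out = (uint32_t)b[0]
         | ((uint32_t)b[1] << 8)
         | ((uint32_t)b[2] << 16)
         | ((uint32_t)b[3] << 24);
    return 1;
}

int main(void)
{
    FILE *fp = fopen("filesysFile-full", "rb");
    if (fp) {
        uint32_t value;
        if (read_u32_le(fp, 1044, &value))
            printf("value at offset 1044 = %u\n", (unsigned)value);
        fclose(fp);
    }
    return 0;
}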
If you want to read more than one integer, you can use an array as follows:
if (fp = fopen("filesysFile-full", "r")) {
fseek(fp, 1044, SEEK_SET); //seeks to byte 1044
int check[10]; //creates a buffer of ten integers
fread(check, sizeof(int), 10, fp); //reads 10 integers into the array
for (int i = 0; i < 10; i++)
printf("%d ", check[i]); //prints
int close = fclose(fp);
if (close == 0) {
printf("Closed");
}
}
In this case, check (without brackets) decays to a pointer to the first element of the array, which is why I've changed the fread back to fread(check, ...).
Hope this helps!
I was recently asked to complete a task for a C++ role; however, since my application was not progressed any further, I thought I would post it here for some feedback / advice / improvements / a reminder of concepts I've forgotten.
The task was:
The following data is a time series of integer values
int timeseries[32] = {67497, 67376, 67173, 67235, 67057, 67031, 66951,
66974, 67042, 67025, 66897, 67077, 67082, 67033, 67019, 67149, 67044,
67012, 67220, 67239, 66893, 66984, 66866, 66693, 66770, 66722, 66620,
66579, 66596, 66713, 66852, 66715};
The series might be, for example, the closing price of a stock each day
over a 32 day period.
As stored above, the data will occupy 32 x sizeof(int) bytes = 128 bytes
assuming 4 byte ints.
Using delta encoding, write a function to compress, and a function to
uncompress data like the above.
OK, so before this point I had never looked at compression, so my solution is far from perfect. I approached the problem by compressing the array of integers into an array of byte arrays. For each integer I work out its most significant byte (MSB) and keep everything up to that point, throwing the rest away. This is then added to the byte array. For negative values I increment the MSB count by one so that we can differentiate between positive and negative values when decoding, by keeping the leading 1 bits.
When decoding I parse this jagged byte array and simply reverse the actions performed when compressing. As mentioned, I had never looked at compression prior to this task, so I came up with my own method to compress the data. I had been looking at C++/CLI recently and had not really used it before, so I just decided to write it in that language, for no particular reason. Below is the class, with a unit test at the very bottom. Any advice / improvements / enhancements will be much appreciated.
Thanks.
array<array<Byte>^>^ CDeltaEncoding::CompressArray(array<int>^ data)
{
int temp = 0;
int original;
int size = 0;
array<int>^ tempData = gcnew array<int>(data->Length);
data->CopyTo(tempData, 0);
array<array<Byte>^>^ byteArray = gcnew array<array<Byte>^>(tempData->Length);
for (int i = 0; i < tempData->Length; ++i)
{
original = tempData[i];
tempData[i] -= temp;
temp = original;
int msb = GetMostSignificantByte(tempData[i]);
byteArray[i] = gcnew array<Byte>(msb);
System::Buffer::BlockCopy(BitConverter::GetBytes(tempData[i]), 0, byteArray[i], 0, msb );
size += byteArray[i]->Length;
}
return byteArray;
}
array<int>^ CDeltaEncoding::DecompressArray(array<array<Byte>^>^ buffer)
{
System::Collections::Generic::List<int>^ decodedArray = gcnew System::Collections::Generic::List<int>();
int temp = 0;
for (int i = 0; i < buffer->Length; ++i)
{
int retrievedVal = GetValueAsInteger(buffer[i]);
decodedArray->Add(retrievedVal);
decodedArray[i] += temp;
temp = decodedArray[i];
}
return decodedArray->ToArray();
}
int CDeltaEncoding::GetMostSignificantByte(int value)
{
array<Byte>^ tempBuf = BitConverter::GetBytes(Math::Abs(value));
int msb = tempBuf->Length;
for (int i = tempBuf->Length -1; i >= 0; --i)
{
if (tempBuf[i] != 0)
{
msb = i + 1;
break;
}
}
if (!IsPositiveInteger(value))
{
//We need an extra byte to differentiate the negative integers
msb++;
}
return msb;
}
bool CDeltaEncoding::IsPositiveInteger(int value)
{
return value / Math::Abs(value) == 1;
}
int CDeltaEncoding::GetValueAsInteger(array<Byte>^ buffer)
{
array<Byte>^ tempBuf;
if(buffer->Length % 2 == 0)
{
//With even integers there is no need to allocate a new byte array
tempBuf = buffer;
}
else
{
tempBuf = gcnew array<Byte>(4);
System::Buffer::BlockCopy(buffer, 0, tempBuf, 0, buffer->Length );
unsigned int val = buffer[buffer->Length-1] &= 0xFF;
if ( val == 0xFF )
{
//We have negative integer compressed into 3 bytes
//Copy over the this last byte as well so we keep the negative pattern
System::Buffer::BlockCopy(buffer, buffer->Length-1, tempBuf, buffer->Length, 1 );
}
}
switch(tempBuf->Length)
{
case sizeof(short):
return BitConverter::ToInt16(tempBuf,0);
case sizeof(int):
default:
return BitConverter::ToInt32(tempBuf,0);
}
}
And then in a test class I had:
void CTestDeltaEncoding::TestCompression()
{
array<array<Byte>^>^ byteArray = CDeltaEncoding::CompressArray(m_testdata);
array<int>^ decompressedArray = CDeltaEncoding::DecompressArray(byteArray);
int totalBytes = 0;
for (int i = 0; i<byteArray->Length; i++)
{
totalBytes += byteArray[i]->Length;
}
Assert::IsTrue(m_testdata->Length * sizeof(m_testdata) > totalBytes, "Expected the total bytes to be less than the original array!!");
//Expected totalBytes = 53
}
This smells a lot like homework to me. The crucial phrase is: "Using delta encoding."
Delta encoding means you encode the delta (difference) between each number and the next:
67497, 67376, 67173, 67235, 67057, 67031, 66951, 66974, 67042, 67025, 66897, 67077, 67082, 67033, 67019, 67149, 67044, 67012, 67220, 67239, 66893, 66984, 66866, 66693, 66770, 66722, 66620, 66579, 66596, 66713, 66852, 66715
would turn into:
[Base: 67497]: -121, -203, +62
and so on. Assuming 8-bit bytes, the original numbers require 3 bytes apiece (and given the number of compilers with 3-byte integer types, you're normally going to end up with 4 bytes apiece). From the looks of things, the differences will fit quite easily in 2 bytes apiece, and if you can ignore one (or possibly two) of the least significant bits, you can fit them in one byte apiece.
Delta encoding is most often used for things like sound encoding where you can "fudge" the accuracy at times without major problems. For example, if you have a change from one sample to the next that's larger than you've left space to encode, you can encode a maximum change in the current difference, and add the difference to the next delta (and if you don't mind some back-tracking, you can distribute some to the previous delta as well). This will act as a low-pass filter, limiting the gradient between samples.
For example, in the series you gave, a simple delta encoding requires ten bits to represent all the differences. By dropping the LSB, however, nearly all the samples (all but one, in fact) can be encoded in 8 bits. That one has a difference (right shifted one bit) of -173, so if we represent it as -128, we have 45 left. We can distribute that error evenly between the preceding and following sample. In that case, the output won't be an exact match for the input, but if we're talking about something like sound, the difference probably won't be particularly obvious.
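To make the idea concrete, here is a minimal sketch of lossless delta encoding in standard C++ (not the reviewed C++/CLI code): keep the first value in full and store one 16-bit signed delta per subsequent sample, which is enough here since no difference in the series exceeds the int16_t range.

#include <cstddef>
#include <cstdint>
#include <vector>

// Store the first value, then the difference from the previous value for each later sample.
std::vector<int16_t> compress(const std::vector<int32_t>& data, int32_t& base)
{
    std::vector<int16_t> deltas;
    if (data.empty()) return deltas;
    base = data[0];
    for (std::size_t i = 1; i < data.size(); ++i)
        deltas.push_back(static_cast<int16_t>(data[i] - data[i - 1]));
    return deltas;
}

// Rebuild the series by adding each delta back onto the running value.
std::vector<int32_t> decompress(int32_t base, const std::vector<int16_t>& deltas)
{
    std::vector<int32_t> data{base};
    for (int16_t d : deltas)
        data.push_back(data.back() + d);
    return data;
}

For the 32 samples above this costs 4 + 31 x 2 = 66 bytes instead of 128, without any loss.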
I did mention that it was an exercise I had to complete, and the solution I submitted was deemed not good enough, so I wanted some constructive feedback, seeing as companies rarely tell you what you did wrong.
When the array is compressed I store the differences, not the original values (except the first), as that was my understanding of delta encoding. If you look at my code, I have provided a full solution; my question was how bad it was.