The code below attempts to save a data stream to a file using fwrite. The first example using malloc works, but with the second example the data stream is 70% corrupted. Can someone explain to me why the second example is corrupted and how I can remedy it?
short int fwBuffer[1000000];
// short int *fwBuffer[1000000];
unsigned long fwSize[1000000];
// Not Working *********
if (dataFlow) {
size = sizeof(short int)*length*inchannels;
short int tmpbuffer[length*inchannels];
int count = 0;
for (count = 0; count < length*inchannels; count++)
{
tmpbuffer[count] = (short int) (inbuffer[count]);
}
memcpy(&fwBuffer[saveBufferCount], tmpbuffer, sizeof(tmpbuffer));
fwSize[saveBufferCount] = size;
saveBufferCount++;
totalSize += size;
}
// Working ***********
if (dataFlow) {
size = sizeof(short int)*length*inchannels;
short int *tmpbuffer = (short int*)malloc(size);
int count = 0;
for (count = 0; count < length*inchannels; count++)
{
tmpbuffer[count] = (short int) (inbuffer[count]);
}
fwBuffer[saveBufferCount] = tmpbuffer;
fwSize[saveBufferCount] = size;
saveBufferCount++;
totalSize += size;
}
// Write to file ***********
for (int i = 0; i < saveBufferCount; i++) {
if (isRecording && outFile != NULL) {
// fwrite(fwBuffer[i], 1, fwSize[i],outFile);
fwrite(&fwBuffer[i], 1, fwSize[i],outFile);
if (fwBuffer[i] != NULL) {
// free(fwBuffer[i]);
}
fwBuffer[i] = NULL;
}
}
You initialize your size as
size = sizeof(short int) * length * inchannels;
then you declare an array of size
short int tmpbuffer[size];
This is already highly suspect. Why did you factor sizeof(short int) into the size and then declare an array of short int elements with that size? The byte size of your array in this case is
sizeof(short int) * sizeof(short int) * length * inchannels
i.e. the sizeof(short int) is factored in twice.
Later you initialize only length * inchannels elements of the array, which is not the entire array, for the reasons described above. But the memcpy that follows still copies the entire array
memcpy(&fwBuffer[saveBufferCount], &tmpbuffer, sizeof (tmpbuffer));
(The tail portion of the copied data is garbage.) I suspect that you are copying sizeof(short int) times more data than was intended. The destination memory overflows and gets corrupted.
The version based on malloc does not suffer from this problem since malloc-ed memory size is specified in bytes, not in short int-s.
If you want to simulate the malloc behavior in the upper version of the code, you need to declare your tmpbuffer as an array of char elements, not of short int elements.
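To make that concrete, here is a minimal sketch of the char-buffer idea, reusing the names from the question (size, length, inchannels, inbuffer, fwBuffer, saveBufferCount). It only illustrates the sizing; the rest of the bookkeeping is unchanged:
/* size is already a byte count, so a char buffer makes sizeof(tmpbuffer)
   equal to size, just like the malloc(size) version.  With, say,
   length = 1000 and inchannels = 2, size is 4000 bytes, whereas the
   short int declaration would occupy 8000 bytes. */
size = sizeof(short int) * length * inchannels;
char tmpbuffer[size];                         /* C99 VLA sized in bytes    */
short int *samples = (short int *)tmpbuffer;  /* view the bytes as samples */
for (int count = 0; count < length * inchannels; count++)
    samples[count] = (short int)inbuffer[count];
memcpy(&fwBuffer[saveBufferCount], tmpbuffer, sizeof(tmpbuffer)); /* == size */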
This has a very good chance of crashing:
short int tmpbuffer[(short int)(size)];
First, size could be too big; but truncating it and using whatever size results from that is probably not what you want.
Edit: Try to write the whole code without a single cast. Only then does the compiler have a chance to tell you if something is wrong.
Related
I have this piece of C/C++ code:
void * myThreadFun(void *vargp)
{
int start = atoi((char*)vargp) % nFracK;
printf("Thread start = %d, dQ = %d\n", start, dQ);
pthread_mutex_lock(&nItermutex);
nIter++;
pthread_mutex_unlock(&nItermutex);
}
void Opt() {
pthread_t thread[200];
char start[100];
for(int i = 0; i < 10; i++) {
sprintf(start, "%d", i);
int ret = pthread_create (&thread[i], NULL, myThreadFun, (void*) start);
printf("ret = %d on thread %d\n", ret, i);
}
for(int i = 0; i < 10; i++)
pthread_join(thread[i], NULL);
}
It should create 10 threads, but I don't understand why it instead creates n < 10 threads.
The ret value is always 0 (all 10 times).
Your program contains at least one data race, therefore its behavior is undefined.
The provided source is also incomplete, so it's impossible to be sure that I can test the same thing you are testing. Nevertheless, I performed the minimum augmentation needed for g++ to compile it without warnings, and tested that:
#include <cstdlib>
#include <cstdio>
#include <pthread.h>
pthread_mutex_t nItermutex = PTHREAD_MUTEX_INITIALIZER;
const int nFracK = 100;
const int dQ = 4;
int nIter = 0;
void * myThreadFun(void *vargp)
{
int start = atoi((char*)vargp) % nFracK;
printf("Thread start = %d, dQ = %d\n", start, dQ);
pthread_mutex_lock(&nItermutex);
nIter++;
pthread_mutex_unlock(&nItermutex);
return NULL;
}
void Opt() {
pthread_t thread[200];
char start[100];
for(int i = 0; i < 10; i++) {
sprintf(start, "%d", i);
int ret = pthread_create (&thread[i], NULL, myThreadFun, (void*) start);
printf("ret = %d on thread %d\n", ret, i);
}
for(int i = 0; i < 10; i++)
pthread_join(thread[i], NULL);
}
int main(void) {
Opt();
return 0;
}
The fact that its behavior is undefined notwithstanding, when I run this program on my Linux machine, it invariably prints exactly ten "Thread start" lines, albeit not all with distinct numbers. The most plausible conclusion is that the program indeed does start ten (additional) threads, which is consistent with the fact that the output also seems to indicate that each call to pthread_create() indicates success by returning 0. I therefore reject your assertion that fewer than ten threads are actually started.
Presumably, the follow-up question would be why the program does not print the expected output, and here we return to the data race and accompanying undefined behavior. The main thread writes a text representation of the iteration variable i into the local array start of function Opt, and passes a pointer to that same array to each call to pthread_create(). When it then cycles back to do it again, there is a race between the newly created thread trying to read the data back and the main thread overwriting the array's contents with new data. I suppose that your idea was to avoid passing &i, but this is neither better nor fundamentally different.
You have several options for avoiding a data race in such a situation, prominent among them being:
initialize each thread indirectly from a different object, for example:
int start[10];
for(int i = 0; i < 10; i++) {
start[i] = i;
int ret = pthread_create(&thread[i], NULL, myThreadFun, &start[i]);
}
Note there that each thread is passed a pointer to a different array element, which the main thread does not subsequently modify.
initialize each thread directly from the value passed to it. This is not always a viable alternative, but it is possible in this case:
for(int i = 0; i < 10; i++) {
int ret = pthread_create(&thread[i], NULL, myThreadFun,
reinterpret_cast<void *>(static_cast<std::intptr_t>(i)));
}
accompanied by corresponding code in the thread function:
int start = reinterpret_cast<std::intptr_t>(vargp) % nFracK;
This is a fairly common idiom, though more often used when writing in pthreads's native language, C, where it's less verbose.
Use a mutex, semaphore, or other synchronization object to prevent the main thread from modifying the array before the child has read it. (Left as an exercise.)
Any of those options can be used to write a program that produces the expected output, with each thread responsible for printing one line. That supposes, of course, that the expected output does not require the threads' lines to appear in the same relative order in which the threads were started; if you want that, then only the option of synchronizing the parent and child threads will achieve it.
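For concreteness, here is a minimal, self-contained sketch of the first option. It assumes the same globals as the augmented program above, and it has the thread function read its start value through the int pointer it is given (rather than parsing a string), which is what passing &start[i] implies:
#include <stdio.h>
#include <pthread.h>

pthread_mutex_t nItermutex = PTHREAD_MUTEX_INITIALIZER;
const int nFracK = 100;
const int dQ = 4;
int nIter = 0;

void *myThreadFun(void *vargp)
{
    int start = *(int *)vargp % nFracK;   /* each thread reads its own element */
    printf("Thread start = %d, dQ = %d\n", start, dQ);
    pthread_mutex_lock(&nItermutex);
    nIter++;
    pthread_mutex_unlock(&nItermutex);
    return NULL;
}

int main(void)
{
    pthread_t thread[10];
    int start[10];
    for (int i = 0; i < 10; i++) {
        start[i] = i;                     /* a distinct object per thread */
        pthread_create(&thread[i], NULL, myThreadFun, &start[i]);
    }
    for (int i = 0; i < 10; i++)
        pthread_join(thread[i], NULL);
    printf("nIter = %d\n", nIter);
    return 0;
}
Compiled as C with -pthread, this should print ten distinct "Thread start" lines (in some order) and nIter = 10.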
here is a part of my program code:
int test;
for(uint i = 0; i < 1700; i++) {
test++;
}
the whole program takes 0.5 seconds to finish, but when I change it to:
int test[1];
for(uint i = 0; i < 1700; i++) {
test[0]++;
}
it takes 3.5 seconds! And when I change the int to double, it gets much worse:
double test;
for(uint i = 0; i < 1700; i++) {
test++;
}
it takes about 18 seconds to finish!!!
I have to increment an int array element and a double variable in my real for loop, and it takes about 30 seconds!
What's happening here?! Why should it take that much time for just an increment?!
I know a floating point data type like double has a different structure from a fixed point data type like int, but is that the only cause for such a big time difference? And what about the second example, which is also just an int array element?!
Thanks
You have answered your question yourself.
Floating point (double) operations are different from integer ones, even if you just add 1.0f.
Your second example takes longer than the first one just because you added a pointer reference. An array in C is, at bottom, not much different from a pointer to its first element. Accessing any element, even the first one, causes the machine code to load the starting address of the array, multiply the index (0 in this case) by the size of each member (4 bytes, or whatever an int has), and add that (0) to the pointer. Then it has to dereference the pointer, meaning to actually load the value at that very address, add one, and write back the result.
A smart modern compiler should optimize this a bit. If you want to avoid this optimization, modify the code a bit and don't use a constant for the index.
I have never tried that with a modern Objective-C compiler, but I guess this code would take much longer than 3.5s to run:
int test[2];
int index = 0;
for(uint i = 0; i < 1700; i++) {
test[index]++;
}
If that does not make much of a change then try this:
-(void)foo:(int)index {
int test[2];
for(uint i = 0; i < 1700; i++) {
test[index]++;
}
}
and then call foo:0;
Give it a try and let us know :)
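As an aside, here is one sketch of how such loops could be timed while keeping the compiler from discarding them entirely: volatile forces every increment to actually happen. The iteration count below is arbitrary (just large enough to measure); the absolute times will differ per machine and are not meant to reproduce the numbers quoted in the question.
#include <stdio.h>
#include <time.h>

int main(void)
{
    volatile int test_i = 0;      /* volatile: the increments cannot be optimized away */
    volatile double test_d = 0.0;
    const unsigned n = 100000000u;

    clock_t t0 = clock();
    for (unsigned i = 0; i < n; i++)
        test_i++;
    clock_t t1 = clock();
    for (unsigned i = 0; i < n; i++)
        test_d++;
    clock_t t2 = clock();

    printf("int increments:    %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("double increments: %.3f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}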
My computer crashes (I have to manually reset it) when I run my kernel function in a loop 600+ times (it does not crash if it is only 50 times or so), and I'm not sure what's causing the crash.
My main is as follows:
int main()
{
int *seam = new int [image->height];
int width = image->width;
int height = image->height;
int *fMC = (int*)malloc(width*height*sizeof(int*));
int *fNew = (int*)malloc(width*height*sizeof(int*));
for(int i=0;i<numOfSeams;i++)
{
seam = cpufindSeamV2(fMC,width,height,1);
fMC = kernel_shiftSeam(fMC,fNew,seam,width,height,nWidth,1);
for(int k=0;k<height;k++)
{
fMC[(nWidth-1)+width*k] = INT_MAX;
}
}
and my kernel is :
int* kernel_shiftSeam(int *MCEnergyMat, int *newE, int *seam, int width, int height, int x, int direction)
{
//time measurement
float elapsed_time_ms = 0;
cudaEvent_t start, stop; //threads per block
dim3 threads(16,16);
//blocks
dim3 blocks((width+threads.x-1)/threads.x, (height+threads.y-1)/threads.y);
//MCEnergy and Seam arrays on device
int *device_MC, *device_new, *device_Seam;
//MCEnergy and Seam arrays on host
int *host_MC, *host_new, *host_Seam;
//total number of bytes in array
int size = width*height*sizeof(int);
int seamSize;
if(direction == 1)
{
seamSize = height*sizeof(int);
host_Seam = (int*)malloc(seamSize);
for(int i=0;i<height;i++)
host_Seam[i] = seam[i];
}
else
{
seamSize = width*sizeof(int);
host_Seam = (int*)malloc(seamSize);
for(int i=0;i<width;i++)
host_Seam[i] = seam[i];
}
cudaMallocHost((void**)&host_MC, size );
cudaMallocHost((void**)&host_new, size );
host_MC = MCEnergyMat;
host_new = newE;
//allocate 1D flat array on device
cudaMalloc((void**)&device_MC, size);
cudaMalloc((void**)&device_new, size);
cudaMalloc((void**)&device_Seam, seamSize);
//copy host array to device
cudaMemcpy(device_MC, host_MC, size, cudaMemcpyHostToDevice);
cudaMemcpy(device_new, host_new, size, cudaMemcpyHostToDevice);
cudaMemcpy(device_Seam, host_Seam, seamSize, cudaMemcpyHostToDevice);
//measure start time for cpu calculations
cudaEventCreate(&start);
cudaEventCreate(&stop);
cudaEventRecord(start, 0);
//perform gpu calculations
if(direction == 1)
{
gpu_shiftSeam<<< blocks,threads >>>(device_MC, device_new, device_Seam, width, height, x);
}
//measure end time for cpu calcuations
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&elapsed_time_ms, start, stop );
execTime += elapsed_time_ms;
//copy out the results back to host
cudaMemcpy(newE, device_new, size, cudaMemcpyDeviceToHost);
//free memory
free(host_Seam);
cudaFree(host_MC); cudaFree(host_new);
cudaFree(device_MC); cudaFree(device_new); cudaFree(device_Seam);
//destroy event objects
cudaEventDestroy(start); cudaEventDestroy(stop);
return newE;
}
So, my program crashes when I call "kernel_shiftSeam" many times. I also freed the memory using cudaFree, so I don't know whether or not it's a memory leak problem. It would be great if someone could point me in the right direction.
It could be a heap problem. Try reordering the cudaFree calls in kernel_shiftSeam to be LIFO. Check the release notes of any newer CUDA drivers for heap/leak fixes. On Windows, try installing Process Explorer 15.12 or newer, as it shows GPU memory usage, and a leaky heap is easy to spot.
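If you want to watch for a device-memory leak from inside the program itself, a minimal sketch along these lines (assuming the CUDA runtime API, compiled with the rest of your .cu code) logs the free device memory once per iteration; a steadily shrinking number points at allocations that are never released:
#include <stdio.h>
#include <cuda_runtime.h>

/* Call once per loop iteration, e.g. right after kernel_shiftSeam returns. */
void logDeviceMem(int iteration)
{
    size_t freeBytes = 0, totalBytes = 0;
    cudaError_t err = cudaMemGetInfo(&freeBytes, &totalBytes);
    if (err != cudaSuccess) {
        fprintf(stderr, "iter %d: cudaMemGetInfo failed: %s\n",
                iteration, cudaGetErrorString(err));
        return;
    }
    printf("iter %d: %zu MB free of %zu MB\n",
           iteration, freeBytes / (1024 * 1024), totalBytes / (1024 * 1024));
}
Checking the return codes of every cudaMalloc, cudaMallocHost and cudaMemcpy in the same way would also show whether an allocation starts failing after a few hundred iterations.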
I'm using
for (int i = 1; i < 100; i++) {
    int chosen = arc4random() % [array count];
}
but I'm getting repeats every time. How can I filter out the chosen int value from the range, so that when the program loops I won't get any duplicates?
It sounds like you want shuffling of a set rather than "true" randomness. Simply create an array where all the positions match the numbers and initialize a counter:
num[ 0] = 0
num[ 1] = 1
: :
num[99] = 99
numNums = 100
Then, whenever you want a random number, use the following method:
idx = rnd (numNums); // return value 0 through numNums-1
val = num[idx];            // get the number at that position.
num[idx] = num[numNums-1]; // remove it from pool by overwriting with highest
numNums--; // and removing the highest position from pool.
return val; // give it back to caller.
This will return a random value from an ever-decreasing pool, guaranteeing no repeats. You will have to beware of the pool running down to zero size of course, and intelligently re-initialize the pool.
This is a more deterministic solution than keeping a list of used numbers and continuing to loop until you find one not in that list. The performance of that sort of algorithm will degrade as the pool gets smaller.
A C function using static values something like this should do the trick. Call it with
int i = myRandom (200);
to set the pool up (with any number zero or greater specifying the size) or
int i = myRandom (-1);
to get the next number from the pool (any negative number will suffice). If the function can't allocate enough memory, it will return -2. If there are no numbers left in the pool, it will return -1 (at which point you could re-initialize the pool if you wish). Here's the function with a unit-testing main for you to try out:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#define ERR_NO_NUM -1
#define ERR_NO_MEM -2
int myRandom (int size) {
int i, n;
static int numNums = 0;
static int *numArr = NULL;
// Initialize with a specific size.
if (size >= 0) {
if (numArr != NULL)
free (numArr);
if ((numArr = malloc (sizeof(int) * size)) == NULL)
return ERR_NO_MEM;
for (i = 0; i < size; i++)
numArr[i] = i;
numNums = size;
}
// Error if no numbers left in pool.
if (numNums == 0)
return ERR_NO_NUM;
// Get random number from pool and remove it (rnd in this
// case returns a number between 0 and numNums-1 inclusive).
n = rand() % numNums;
i = numArr[n];
numArr[n] = numArr[numNums-1];
numNums--;
if (numNums == 0) {
free (numArr);
numArr = NULL;
}
return i;
}
int main (void) {
int i;
srand (time (NULL));
i = myRandom (20);
while (i >= 0) {
printf ("Number = %3d\n", i);
i = myRandom (-1);
}
printf ("Final = %3d\n", i);
return 0;
}
And here's the output from one run:
Number = 19
Number = 10
Number = 2
Number = 15
Number = 0
Number = 6
Number = 1
Number = 3
Number = 17
Number = 14
Number = 12
Number = 18
Number = 4
Number = 9
Number = 7
Number = 8
Number = 16
Number = 5
Number = 11
Number = 13
Final = -1
Keep in mind that, because it uses statics, it's not safe for calling from two different places if they want to maintain their own separate pools. If that were the case, the statics would be replaced with a buffer (holding count and pool) that would "belong" to the caller (a double-pointer could be passed in for this purpose).
And, if you're looking for the "multiple pool" version, I include it here for completeness.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#define ERR_NO_NUM -1
#define ERR_NO_MEM -2
int myRandom (int size, int *ppPool[]) {
int i, n;
// Initialize with a specific size.
if (size >= 0) {
if (*ppPool != NULL)
free (*ppPool);
if ((*ppPool = malloc (sizeof(int) * (size + 1))) == NULL)
return ERR_NO_MEM;
(*ppPool)[0] = size;
for (i = 0; i < size; i++) {
(*ppPool)[i+1] = i;
}
}
// Error if no numbers left in pool.
if (*ppPool == NULL)
return ERR_NO_NUM;
// Get random number from pool and remove it (rnd in this
// case returns a number between 0 and numNums-1 inclusive).
n = rand() % (*ppPool)[0];
i = (*ppPool)[n+1];
(*ppPool)[n+1] = (*ppPool)[(*ppPool)[0]];
(*ppPool)[0]--;
if ((*ppPool)[0] == 0) {
free (*ppPool);
*ppPool = NULL;
}
return i;
}
int main (void) {
int i;
int *pPool;
srand (time (NULL));
pPool = NULL;
i = myRandom (20, &pPool);
while (i >= 0) {
printf ("Number = %3d\n", i);
i = myRandom (-1, &pPool);
}
printf ("Final = %3d\n", i);
return 0;
}
As you can see from the modified main(), you need to first initialise an int pointer to NULL, then pass its address to the myRandom() function. This allows each client (location in the code) to have its own pool, which is automatically allocated and freed, although you could still share pools if you wish.
You could use Format-Preserving Encryption to encrypt a counter. Your counter just goes from 0 upwards, and the encryption uses a key of your choice to turn it into a seemingly random value of whatever radix and width you want.
Block ciphers normally have a fixed block size of e.g. 64 or 128 bits. But Format-Preserving Encryption allows you to take a standard cipher like AES and make a smaller-width cipher, of whatever radix and width you want (e.g. radix 2, width 16), with an algorithm which is still cryptographically robust.
It is guaranteed to never have collisions (because cryptographic algorithms create a 1:1 mapping). It is also reversible (a 2-way mapping), so you can take the resulting number and get back to the counter value you started with.
AES-FFX is one proposed standard method to achieve this. I've experimented with some basic Python code which is based on the AES-FFX idea, although not fully conformant--see Python code here. It can e.g. encrypt a counter to a random-looking 7-digit decimal number, or a 16-bit number.
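To make the idea concrete without pulling in a real AES-FFX implementation, here is a toy, non-cryptographic C sketch of the same principle: a small balanced Feistel network is a permutation of its domain, and cycle-walking keeps the result inside [0, n). The round function, key handling, and all names are invented for illustration only; do not treat it as secure.
#include <stdint.h>
#include <stdio.h>

/* Weak, invented round function; a real design would use AES. */
static uint32_t roundFn(uint32_t half, uint32_t key)
{
    uint32_t x = (half * 0x9E3779B1u) ^ key;
    x ^= x >> 15;
    return x;
}

/* Smallest even bit width whose range covers n. */
static unsigned evenBitsFor(uint32_t n)
{
    unsigned bits = 2;
    while (((uint64_t)1 << bits) < n)
        bits += 2;
    return bits;
}

/* A balanced Feistel network is a permutation of [0, 2^(2*halfBits))
   no matter what the round function is. */
static uint32_t feistel(uint32_t v, unsigned halfBits, uint32_t key)
{
    uint32_t halfMask = (1u << halfBits) - 1;
    uint32_t left = v >> halfBits, right = v & halfMask;
    for (int round = 0; round < 4; round++) {
        uint32_t tmp = left ^ (roundFn(right, key + round) & halfMask);
        left = right;
        right = tmp;
    }
    return (left << halfBits) | right;
}

/* Encrypt counter i (0 <= i < n) into a unique value in [0, n):
   apply the permutation repeatedly until it lands in range
   ("cycle walking"), which preserves the 1:1 mapping. */
static uint32_t permuteInRange(uint32_t i, uint32_t n, uint32_t key)
{
    unsigned halfBits = evenBitsFor(n) / 2;
    uint32_t v = i;
    do {
        v = feistel(v, halfBits, key);
    } while (v >= n);
    return v;
}

int main(void)
{
    for (uint32_t counter = 0; counter < 10; counter++)
        printf("%u -> %u\n", counter, permuteInRange(counter, 100, 0xC0FFEEu));
    return 0;
}
Because the mapping is a permutation, every counter value in [0, 100) produces a distinct output, with no list of used numbers to maintain.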
You need to keep track of the numbers you have already used (for instance, in an array). Get a random number, and discard it if it has already been used.
Without relying on external stochastic processes, like radioactive decay or user input, computers will always generate pseudorandom numbers - that is numbers which have many of the statistical properties of random numbers, but repeat in sequences.
This explains the suggestions to randomise the computer's output by shuffling.
Discarding previously used numbers may lengthen the sequence artificially, but at a cost to the statistics which give the impression of randomness.
The best way to do this is to create an array for the numbers already used. After a random number has been generated, add it to the array. Then, when you go to generate another random number, ensure that it is not in the array of used numbers.
In addition to using a secondary array to store the already generated random numbers, invoking the random-number seeding function before every call of the random-number generation function might help to generate a different sequence of random numbers in every run.
I was recently asked to complete a task for a C++ role; however, as it was decided that my application would not be progressed any further, I thought I would post here for some feedback / advice / improvements / reminders of concepts I've forgotten.
The task was:
The following data is a time series of integer values
int timeseries[32] = {67497, 67376, 67173, 67235, 67057, 67031, 66951,
66974, 67042, 67025, 66897, 67077, 67082, 67033, 67019, 67149, 67044,
67012, 67220, 67239, 66893, 66984, 66866, 66693, 66770, 66722, 66620,
66579, 66596, 66713, 66852, 66715};
The series might be, for example, the closing price of a stock each day
over a 32 day period.
As stored above, the data will occupy 32 x sizeof(int) bytes = 128 bytes
assuming 4 byte ints.
Using delta encoding, write a function to compress, and a function to
uncompress data like the above.
Ok, so before this point I had never looked at compression, so my solution is far from perfect. The manner in which I approached the problem is by compressing the array of integers into an array of bytes. When representing an integer as bytes, I calculate its most significant byte (msb) and keep everything up to that point, throwing the rest away. This is then added to the byte array. For negative values I increment the msb by 1 so that we can differentiate between positive and negative values when decoding, by keeping the leading 1 bits.
When decoding I parse this jagged byte array and simply reverse the actions I performed when compressing. As mentioned, I had never looked at compression prior to this task, so I came up with my own method to compress the data. I had been looking at C++/CLI recently and had not really used it before, so I just decided to write it in that language, for no particular reason. Below is the class, and a unit test at the very bottom. Any advice / improvements / enhancements will be much appreciated.
Thanks.
array<array<Byte>^>^ CDeltaEncoding::CompressArray(array<int>^ data)
{
int temp = 0;
int original;
int size = 0;
array<int>^ tempData = gcnew array<int>(data->Length);
data->CopyTo(tempData, 0);
array<array<Byte>^>^ byteArray = gcnew array<array<Byte>^>(tempData->Length);
for (int i = 0; i < tempData->Length; ++i)
{
original = tempData[i];
tempData[i] -= temp;
temp = original;
int msb = GetMostSignificantByte(tempData[i]);
byteArray[i] = gcnew array<Byte>(msb);
System::Buffer::BlockCopy(BitConverter::GetBytes(tempData[i]), 0, byteArray[i], 0, msb );
size += byteArray[i]->Length;
}
return byteArray;
}
array<int>^ CDeltaEncoding::DecompressArray(array<array<Byte>^>^ buffer)
{
System::Collections::Generic::List<int>^ decodedArray = gcnew System::Collections::Generic::List<int>();
int temp = 0;
for (int i = 0; i < buffer->Length; ++i)
{
int retrievedVal = GetValueAsInteger(buffer[i]);
decodedArray->Add(retrievedVal);
decodedArray[i] += temp;
temp = decodedArray[i];
}
return decodedArray->ToArray();
}
int CDeltaEncoding::GetMostSignificantByte(int value)
{
array<Byte>^ tempBuf = BitConverter::GetBytes(Math::Abs(value));
int msb = tempBuf->Length;
for (int i = tempBuf->Length -1; i >= 0; --i)
{
if (tempBuf[i] != 0)
{
msb = i + 1;
break;
}
}
if (!IsPositiveInteger(value))
{
//We need an extra byte to differentiate the negative integers
msb++;
}
return msb;
}
bool CDeltaEncoding::IsPositiveInteger(int value)
{
return value / Math::Abs(value) == 1;
}
int CDeltaEncoding::GetValueAsInteger(array<Byte>^ buffer)
{
array<Byte>^ tempBuf;
if(buffer->Length % 2 == 0)
{
//With even integers there is no need to allocate a new byte array
tempBuf = buffer;
}
else
{
tempBuf = gcnew array<Byte>(4);
System::Buffer::BlockCopy(buffer, 0, tempBuf, 0, buffer->Length );
unsigned int val = buffer[buffer->Length-1] &= 0xFF;
if ( val == 0xFF )
{
//We have negative integer compressed into 3 bytes
//Copy over the this last byte as well so we keep the negative pattern
System::Buffer::BlockCopy(buffer, buffer->Length-1, tempBuf, buffer->Length, 1 );
}
}
switch(tempBuf->Length)
{
case sizeof(short):
return BitConverter::ToInt16(tempBuf,0);
case sizeof(int):
default:
return BitConverter::ToInt32(tempBuf,0);
}
}
And then in a test class I had:
void CTestDeltaEncoding::TestCompression()
{
array<array<Byte>^>^ byteArray = CDeltaEncoding::CompressArray(m_testdata);
array<int>^ decompressedArray = CDeltaEncoding::DecompressArray(byteArray);
int totalBytes = 0;
for (int i = 0; i<byteArray->Length; i++)
{
totalBytes += byteArray[i]->Length;
}
Assert::IsTrue(m_testdata->Length * sizeof(m_testdata) > totalBytes, "Expected the total bytes to be less than the original array!!");
//Expected totalBytes = 53
}
This smells a lot like homework to me. The crucial phrase is: "Using delta encoding."
Delta encoding means you encode the delta (difference) between each number and the next:
67497, 67376, 67173, 67235, 67057, 67031, 66951, 66974, 67042, 67025, 66897, 67077, 67082, 67033, 67019, 67149, 67044, 67012, 67220, 67239, 66893, 66984, 66866, 66693, 66770, 66722, 66620, 66579, 66596, 66713, 66852, 66715
would turn into:
[Base: 67497]: -121, -203, +62
and so on. Assuming 8-bit bytes, the original numbers require 3 bytes apiece (and given the number of compilers with 3-byte integer types, you're normally going to end up with 4 bytes apiece). From the looks of things, the differences will fit quite easily in 2 bytes apiece, and if you can ignore one (or possibly two) of the least significant bits, you can fit them in one byte apiece.
Delta encoding is most often used for things like sound encoding where you can "fudge" the accuracy at times without major problems. For example, if you have a change from one sample to the next that's larger than you've left space to encode, you can encode a maximum change in the current difference, and add the difference to the next delta (and if you don't mind some back-tracking, you can distribute some to the previous delta as well). This will act as a low-pass filter, limiting the gradient between samples.
For example, in the series you gave, a simple delta encoding requires ten bits to represent all the differences. By dropping the LSB, however, nearly all the samples (all but one, in fact) can be encoded in 8 bits. That one has a difference (right shifted one bit) of -173, so if we represent it as -128, we have 45 left. We can distribute that error evenly between the preceding and following sample. In that case, the output won't be an exact match for the input, but if we're talking about something like sound, the difference probably won't be particularly obvious.
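For what it's worth, here is a minimal C sketch of plain delta encoding and decoding with 16-bit deltas, with no LSB dropping. It assumes every difference fits in a signed 16-bit value, which holds for the series in the question (the largest difference there is -346); the names and the 8-element test prefix are just for illustration:
#include <stdint.h>
#include <stdio.h>

/* Store the first value in full, then each difference as a signed
   16-bit delta.  Assumes every difference fits in int16_t. */
void delta_encode(const int *in, int n, int32_t *base, int16_t *deltas)
{
    *base = in[0];
    for (int i = 1; i < n; i++)
        deltas[i - 1] = (int16_t)(in[i] - in[i - 1]);
}

void delta_decode(int32_t base, const int16_t *deltas, int n, int *out)
{
    out[0] = base;
    for (int i = 1; i < n; i++)
        out[i] = out[i - 1] + deltas[i - 1];
}

int main(void)
{
    int timeseries[8] = {67497, 67376, 67173, 67235, 67057, 67031, 66951, 66974};
    int32_t base;
    int16_t deltas[7];
    int roundtrip[8];

    delta_encode(timeseries, 8, &base, deltas);
    delta_decode(base, deltas, 8, roundtrip);

    for (int i = 0; i < 8; i++)
        printf("%d %s\n", roundtrip[i], roundtrip[i] == timeseries[i] ? "ok" : "MISMATCH");

    /* 4 + 7*2 = 18 bytes for this prefix instead of 8*4 = 32. */
    return 0;
}
Dropping the least significant bit of each delta, as described above, would roughly halve the storage again, at the cost of exact reconstruction.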
I did mention that it was an exercise that I had to complete, and the solution I submitted was deemed not good enough, so I wanted some constructive feedback, seeing as companies never actually tell you what you did wrong.
When the array is compressed I store the differences, not the original values (except the first), as this was my understanding of delta encoding. If you look at my code you will see I have provided a full solution; my question was how bad is it?