I have this:
const char changedValue[] = {0xCA, 0x06, 0x03, 0x80, 0x01, 0x00};
and I need to calculate the total of the six bytes and append that checksum to the end of the array.
The size of a byte array with six bytes is... six.
If you need to append a (byte-sized?) checksum, the array must be declared one byte larger, i.e. seven bytes.
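In Python terms, a minimal sketch of the idea (assuming the checksum is a plain 8-bit sum of the bytes, which the question does not actually specify):

```python
# Sketch: append an 8-bit additive checksum to the 6-byte message.
# The checksum algorithm is an assumption (sum modulo 256); the
# question does not say which algorithm the receiver expects.
message = [0xCA, 0x06, 0x03, 0x80, 0x01, 0x00]

checksum = sum(message) & 0xFF      # keep only the low byte: 0x54
framed = message + [checksum]       # seven bytes total

print([hex(b) for b in framed])     # last byte is the checksum 0x54
```

In the original C code, the array would simply be declared with seven elements so the extra byte fits.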
I have numpy array:
A = np.array(['abcd','bcde','cdef'])
I need hash array of A: with function
B[i] = ord(A[i][1]) * 256 + ord(A[i][2])
B = np.array([ord('b') * 256 + ord('c'), ord('c') * 256 + ord('d'), ord('d') * 256 + ord('e')])
How I can do it?
Based on the question, I assume the strings are ASCII and that all of them are at least 3 characters long.
You can start by converting the strings to ASCII for the sake of performance and simplicity (this creates a new temporary array). Then you can merge all the strings into one big array without any copy thanks to views (since NumPy strings are stored contiguously in memory), converting characters to integers at the same time (still without any copy). Finally, you can use the stride to compute all the hashes in a vectorized way. Here is how:
ascii = A.astype('S')
buff = ascii.view(np.uint8)
# cast to a wider integer so the multiply by 256 cannot overflow uint8
result = buff[1::ascii.itemsize].astype(np.uint16) * 256 + buff[2::ascii.itemsize]
Congratulations! A four-fold speed increase. Here is the benchmark:
import time
import numpy as np
Iter = 1000000
A = np.array(['abcd','bcde','cdef','defg'] * Iter)
Ti = time.time()
B = np.zeros(A.size)
for i in range(A.size):
    B[i] = ord(A[i][1]) * 256 + ord(A[i][2])
DT1 = time.time() - Ti
Ti = time.time()
ascii = A.astype('S')
buff = ascii.view(np.uint8)
result = buff[1::ascii.itemsize].astype(np.uint16) * 256 + buff[2::ascii.itemsize]
DT2 = time.time() - Ti
print("Equal = %s" % np.array_equal(B, result))
print("DT1=%7.2f Sec, DT2=%7.2f Sec, DT1/DT2=%6.2f" % (DT1, DT2, DT1/DT2))
Output:
Equal = True
DT1= 3.37 Sec, DT2= 0.82 Sec, DT1/DT2= 4.11
I'm implementing this tone-generator program and it works great:
https://social.msdn.microsoft.com/Forums/vstudio/en-US/c2b953b6-3c85-4eda-a478-080bae781319/beep-beep?forum=vbgeneral
What I can't figure out, is why the following two lines of code:
BW.Write(Sample)
BW.Write(Sample)
One "write" makes sense, but why the second "write"?
The example is a bit cryptic, but the wave file is configured for 2 channels, so the two writes simply send the same sample to both channels (left, then right).
The wave header is this hardcoded bit:
Dim Hdr() As Integer = {&H46464952, 36 + Bytes, &H45564157, _
&H20746D66, 16, &H20001, 44100, _
176400, &H100004, &H61746164, Bytes}
Which decoded means:
&H46464952 = 'RIFF' (little endian)
36 + Bytes = length of the rest of the header plus the data
&H45564157 = 'WAVE' (little endian)
&H20746D66 = 'fmt ' (little endian)
16 = length of the fmt chunk (always 16 for PCM)
&H20001 = two 16-bit fields: 0x0001 = PCM format, 0x0002 = 2 channels
44100 = sample rate
176400 = byte rate = sampleRate * numChannels * bytesPerSample = 44100 * 2 * 2
&H100004 = two 16-bit fields: 0x0004 = block align (numChannels * bytesPerSample), 0x0010 = bits per sample (16)
&H61746164 = 'data' (little endian)
Bytes = size of the data chunk
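For reference, the same 44-byte header can be built explicitly with Python's struct module (a sketch; num_samples is a placeholder for the actual data length):

```python
import struct

# Sketch: build the same 44-byte WAV header the VB code hardcodes,
# for 16-bit stereo PCM at 44100 Hz. num_samples is a placeholder.
num_samples = 44100                 # one second per channel (assumption)
bytes_per_sample = 2
num_channels = 2
sample_rate = 44100
data_bytes = num_samples * num_channels * bytes_per_sample

header = struct.pack(
    '<4sI4s4sIHHIIHH4sI',
    b'RIFF', 36 + data_bytes, b'WAVE',
    b'fmt ', 16,                                     # fmt chunk, 16 bytes
    1, num_channels,                                 # PCM, 2 channels
    sample_rate,
    sample_rate * num_channels * bytes_per_sample,   # byte rate: 176400
    num_channels * bytes_per_sample, 16,             # block align, bits/sample
    b'data', data_bytes)

assert len(header) == 44
```

After the header, each sample frame is written once per channel, which is exactly why the example calls BW.Write(Sample) twice: left sample, then right sample.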
Using OpenSCAD, I have a module that, like cube(), has a size parameter that can be a single value or a vector of three values. Ultimately, I want a vector of three values.
If the caller passes a single value, I'd like all three values of the vector to be the same. I don't see anything in the language documentation about detecting the type of an argument. So I came up with this hack:
module my_cubelike_thing(size=1) {
dimensions = concat(size, size, size);
width = dimensions[0];
length = dimensions[1];
height = dimensions[2];
// ... use width, length, and height ...
}
When size is a single value, the result of the concat is exactly what I want: three copies of the value.
When size is a three-value vector, the result of the concat is a nine-value vector, and my code just ignores the last six values.
It works, but only because what I want in the single-value case is to replicate the value. Is there a general way to switch on the argument type and do different things depending on that type?
If size can only be a single value or a vector with 3 values, the type can be detected with the special value undef (indexing a scalar, e.g. a[0], yields undef):
a = [3,5,8];
// a = 5;
if (a[0] == undef) {
    dimensions = concat(a, a, a);
    // do something
    cube(size=dimensions, center=false);
}
else {
    dimensions = a;
    // do something
    cube(size=dimensions, center=false);
}
But assignments are only valid in the scope in which they are defined; see the OpenSCAD documentation.
So much code would be duplicated in each subtree. I would prefer to validate the type of size in an external script (e.g. Python 3) and write the OpenSCAD code with the variable assignments to a file, which can then be included in the OpenSCAD file. Here is my short test code:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os
# size = 20
size = [20,15,10]
if type(size) == int:
    dimensions = [size, size, size]
elif type(size) == list:
    dimensions = size
else:
    # if other types possible
    pass
with open('variablen.scad', 'w') as wObj:
    for i, v in enumerate(['l', 'w', 'h']):
        wObj.write('{} = {};\n'.format(v, dimensions[i]))
os.system('openscad ./typeDef.scad')
content of variablen.scad:
l = 20;
w = 15;
h = 10;
and typeDef.scad can look like this:
include <./variablen.scad>;
module my_cubelike_thing() {
linear_extrude(height=h, center=false) square([l, w]);
}
my_cubelike_thing();
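The type dispatch in the generator script can also be factored into a small reusable helper (a sketch; the name to_dimensions is made up here, not part of OpenSCAD or the script above):

```python
# Sketch of the scalar-vs-vector dispatch as a reusable helper.
# The name to_dimensions is hypothetical.
def to_dimensions(size):
    """Expand a scalar to [size, size, size]; pass a 3-vector through."""
    if isinstance(size, (int, float)):
        return [size, size, size]
    if isinstance(size, (list, tuple)) and len(size) == 3:
        return list(size)
    raise ValueError('size must be a number or a 3-element vector')

print(to_dimensions(20))            # → [20, 20, 20]
print(to_dimensions([20, 15, 10]))  # → [20, 15, 10]
```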
I'm trying to calculate the average Luminance of an RGB image. To do this, I find the luminance of each pixel i.e.
L(r,g,b) = X*r + Y*g + Z*b (some linear combination).
And then find the average by summing up luminance of all pixels and dividing by width*height.
To speed this up, I'm using pyopencl.reduction.ReductionKernel
The array I pass to it is a single-dimension NumPy array, so it works just like the example given.
from PIL import Image
import numpy as np
im = Image.open('image_00000001.bmp')
data = np.asarray(im).reshape(-1) # flatten to a 1-D array
# data.dtype is uint8, data.shape is (w*h*3, )
I want to incorporate the following code from the example into it, i.e. I would change the datatype and the type of arrays I'm passing. This is the example:
a = pyopencl.array.arange(queue, 400, dtype=numpy.float32)
b = pyopencl.array.arange(queue, 400, dtype=numpy.float32)
krnl = ReductionKernel(ctx, numpy.float32, neutral="0",
reduce_expr="a+b", map_expr="x[i]*y[i]",
arguments="__global float *x, __global float *y")
my_dot_prod = krnl(a, b).get()
Except, my map_expr will work on each pixel and convert each pixel to its luminance value.
And reduce expr remains the same.
The problem is, it works on each element in the array, and I need it to work on each pixel, which is 3 consecutive elements at a time (RGB).
One solution is to have three different arrays, one for R, one for G, and one for B, which would work, but is there another way?
Edit: I changed the program to illustrate the char4 usage instead of float4:
import numpy as np
import pyopencl as cl
import pyopencl.array as cl_array
deviceID = 0
platformID = 0
workGroup=(1,1)
N = 10
testData = np.zeros(N, dtype=cl_array.vec.char4)
dev = cl.get_platforms()[platformID].get_devices()[deviceID]
ctx = cl.Context([dev])
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
Data_In = cl.Buffer(ctx, mf.READ_WRITE, testData.nbytes)
prg = cl.Program(ctx, """
__kernel void Pack_Cmplx( __global char4* Data_In, int N)
{
    int gid = get_global_id(0);

    //Data_In[gid] = 1; // This would change all components to one
    Data_In[gid].x = 1; // changing single component
    Data_In[gid].y = 2;
    Data_In[gid].z = 3;
    Data_In[gid].w = 4;
}
""").build()
prg.Pack_Cmplx(queue, (N,1), workGroup, Data_In, np.int32(N))
cl.enqueue_copy(queue, testData, Data_In)
print(testData)
I hope it helps.
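As a plain-NumPy cross-check (no OpenCL), the flat uint8 buffer can be viewed as an N x 3 array and reduced in one shot. The Rec. 601 luma weights below are just one common choice for the X, Y, Z coefficients the question leaves unspecified:

```python
import numpy as np

# Sketch: average luminance of a flat RGB uint8 buffer without OpenCL.
# The weights are Rec. 601 luma coefficients; the question leaves
# X, Y, Z unspecified, so treat these as example values.
data = np.array([255, 0, 0,   0, 255, 0,   0, 0, 255], dtype=np.uint8)

pixels = data.reshape(-1, 3).astype(np.float64)   # one row per pixel
weights = np.array([0.299, 0.587, 0.114])
luminance = pixels @ weights                      # per-pixel luminance
avg = luminance.mean()

print(avg)
```

This mirrors the char4 idea in spirit: instead of padding each pixel to 4 bytes, the stride of 3 is expressed by the reshape.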
For the past 4 to 5 hours I've been wrestling with this very bizarre issue. I have an array of bytes containing pixel values from which I'd like to make an image. The array represents 32-bit-per-component values. There is no alpha channel, so the image is 96 bits/pixel.
I have specified all of this to the CGImageCreate function as follows:
CGImageRef img = CGImageCreate(width, height, 32, 96, bytesPerRow, space, kCGImageAlphaNone , provider, NULL, NO, kCGRenderingIntentDefault);
bytesPerRow is 3*width*4. This is so because there are 3 components per pixel, and each component takes 4 bytes (32 bits). So, total bytes per row is 3*4*width. The data provider is defined as follows:
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,bitmapData,3*4*width*height,NULL);
This is where things get bizarre. In my array, I am explicitly setting the values to be 0x000000FF (for all 3 channels) and yet the image comes out completely white. If I set the value to 0xFFFFFF00, the image comes out black. This tells me that the program is, for some reason, not reading all 4 bytes of each component and is instead reading only the least significant byte. I have tried all sorts of combinations, even including an alpha channel, but it has made no difference.
The program is blind to this: 0xAAAAAA00. It simply reads this as 0. When I explicitly specify that the bits per component are 32, shouldn't the function take this into account and actually read 4 bytes from the array?
The bytes array is defined as bitmapData = (char*)malloc(bytesPerRow*height); and I am assigning values to the array as follows:
for(i = 0; i < width*height; i++)
{
    *((unsigned int *)(bitmapData + 12*i + 0)) = 0xFFFFFF00;
    *((unsigned int *)(bitmapData + 12*i + 4)) = 0xFFFFFF00;
    *((unsigned int *)(bitmapData + 12*i + 8)) = 0xFFFFFF00;
}
Note that I address the array as an int so that 4 bytes are written at a time. i is multiplied by 12 because there are 12 bytes per pixel; the offsets of 4 and 8 let the loop address the green and blue channels. I have inspected the memory of the array in the debugger and it seems perfectly OK: the loop is writing all 4 bytes. Any sort of pointers to this would be MOST helpful. My ultimate goal is to be able to read 32-bit FITS files, for which I already have the program written; I am only testing the above code with the above array.
Here the code in its entirety if it matters. This is in drawRect:(NSRect)dirtyRect method of my custom view:
int width, height, bytesPerRow;
int i;
width = 256;
height = 256;
bytesPerRow = 3*width*4;
char *bitmapData;
bitmapData = (char*)malloc(bytesPerRow*height);
for(i = 0; i < width*height; i++)
{
    *((unsigned int *)(bitmapData + 12*i + 0)) = 0xFFFFFF00;
    *((unsigned int *)(bitmapData + 12*i + 4)) = 0xFFFFFF00;
    *((unsigned int *)(bitmapData + 12*i + 8)) = 0xFFFFFF00;
}
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,bitmapData,3*4*width*height,NULL);
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGImageRef img = CGImageCreate(width, height, 32, 96, bytesPerRow, space, kCGImageAlphaNone, provider, NULL, NO, kCGRenderingIntentDefault);
CGColorSpaceRelease(space);
CGDataProviderRelease(provider);
CGContextRef theContext = [[NSGraphicsContext currentContext] graphicsPort];
CGContextDrawImage(theContext, CGRectMake(0,0,width,height), img);
I see a few things worth pointing out:
First, the Quartz 2D Programming Guide doesn't list 96-bpp RGB as a supported format. You might try 128-bpp RGB.
Second, you're working on a little-endian system*, which means the LSB comes first. Set each component to 0x330000EE instead and you will see a light grey (EE), not a dark grey (33).
Most importantly, bbum is absolutely right when he points out that your display can't render that range of color**. It's getting squashed down to 8-bpc just for display. If it's correct in memory, then it's correct in memory.
*: More's the pity. R.I.P PPC.
**: Maybe NASA has one that can?
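The little-endian point is easy to verify from Python (a sketch; the hex value is the one used in the question's fill loop):

```python
import struct

# Sketch: show how a 32-bit value is laid out in little-endian memory.
# 0xFFFFFF00 is the component value from the question's loop.
value = 0xFFFFFF00

little = struct.pack('<I', value)   # memory order on little-endian CPUs
big = struct.pack('>I', value)      # the order the question assumes

print(little.hex())  # → '00ffffff' (first byte in memory is 0x00)
print(big.hex())     # → 'ffffff00'
```

A reader that walks memory byte by byte therefore sees the 0x00 byte first, which matches the all-black result the question describes.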