Is it possible to control an actuator using DroneKit-Python? - drone.io

I would like to know if it's possible to control an actuator using DroneKit-Python. In my case I'm using an IRIS+ with a Pixhawk, and I would like to control a robot gripper (servo) and a GoPro camera. I have a Raspberry Pi 2 with a Wi-Fi dongle.
Thanks in advance.

You cannot control a GoPro.
To control a servo, plug it into an empty output channel on the Pixhawk and use Mission Planner to configure that channel as a servo output. From DroneKit, control it like this:
msg = vehicle.message_factory.command_long_encode(
    0, 0,                                  # target_system, target_component
    mavutil.mavlink.MAV_CMD_DO_SET_SERVO,  # command
    0,                                     # confirmation
    1,                                     # param 1: servo number
    1500,                                  # param 2: servo position, PWM between 1000 and 2000
    0, 0, 0, 0, 0)                         # params 3-7: not used
# send command to vehicle
vehicle.send_mavlink(msg)
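For context, here is a slightly fuller sketch. The connection string (udp:127.0.0.1:14550) and the servo output number 9 are hypothetical; adjust both to your setup:

from dronekit import connect
from pymavlink import mavutil

# Hypothetical connection string and servo output number -- adjust to your setup.
vehicle = connect('udp:127.0.0.1:14550', wait_ready=True)

def set_servo(servo_number, pwm):
    # Build and send a MAV_CMD_DO_SET_SERVO command to move a servo output to a PWM value.
    msg = vehicle.message_factory.command_long_encode(
        0, 0,                                  # target_system, target_component
        mavutil.mavlink.MAV_CMD_DO_SET_SERVO,  # command
        0,                                     # confirmation
        servo_number,                          # param 1: servo output number
        pwm,                                   # param 2: PWM value, roughly 1000 to 2000
        0, 0, 0, 0, 0)                         # params 3-7: not used
    vehicle.send_mavlink(msg)

set_servo(9, 1000)  # e.g. open the gripper
set_servo(9, 2000)  # e.g. close the gripper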

Related

How do I sum a repeating sequence of numbers?

I'm collecting sensor data using two Raspberry Pis for redundancy and sending the data to a server. The sensor data reports distance travelled (mile 0.1, 0.0, 0.4 ...). Eventually the count gets high and it starts back at zero (99.8, 99.9, 0, 0.1 ...).
a) How do I run a SQL command to get the total distance travelled? (Where the count resets is variable; it could be at mile 10.)
b) How do I get a more accurate sum using both sets of data? (One set might have 0, 10, 78 and the other 1, 12, 81, the total distance being 81 - 0, or 81.)
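To make the rollover arithmetic concrete, here is a minimal Python sketch (illustrative only, not the requested SQL; the 100-mile counter range is an assumption) that accumulates the deltas and adds the counter range whenever the reading wraps back toward zero:

# Illustrative only: the 100-mile rollover point is an assumed counter range.
ROLLOVER = 100.0

def total_distance(readings):
    total = 0.0
    for prev, cur in zip(readings, readings[1:]):
        delta = cur - prev
        if delta < 0:            # the counter wrapped around, e.g. 99.9 -> 0.0
            delta += ROLLOVER
        total += delta
    return total

print(total_distance([99.8, 99.9, 0.0, 0.1]))  # 0.3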

Export to Exodus file programmatically, similar to the can.ex2 example in ParaView

I have a similar question to "Writing an Exodus II file programmatically". The linked question still has no accepted answer.
I am performing numerical simulations in MATLAB. My output consists of polyhedra in 3D space, and I know the displacement of every vertex, in terms of a displacement vector, for a finite number of time steps. I would like to visualize the displacement as an animation in ParaView. On YouTube I found a tutorial on animations for the can.ex2 example in ParaView.
Therefore, I would like to export my data (i.e. initial vertex positions + displacement for each time step) to the Exodus II format or something similar. If possible, I would like to avoid any existing library and write the file myself in MATLAB. I have already been successful with .vtk/.vti/.obj/... writers for other parts of my project.
Can someone recommend a comprehensive description of how .ex2 files should be written, and/or code that I can orient myself on? Unfortunately, my own research has not been very successful. I am also open to suggestions of similar file formats that would be sufficient for my plans.
Edit
Because I was asked for a code example:
% Vertices of unit triangle contained in the (x,y)-plane
initialVertexPos = [0, 1, 0;
                    0, 0, 1;
                    0, 0, 0];
nVertices = size(initialVertexPos, 2);

% Linear displacement of all vertices along the z-axis
nTimeSteps = 10;
disp_x = zeros(nTimeSteps, nVertices);
disp_y = zeros(nTimeSteps, nVertices);
disp_z = repmat(linspace(0, 1, nTimeSteps)', 1, nVertices);

% Position of vertex kVertex at time step kTime is given by
% initialVertexPos(:,kVertex) + [disp_x(kTime,kVertex); disp_y(kTime,kVertex); disp_z(kTime,kVertex)]

Passing array of images to compute shader

I am currently working on a project using the draft compute shader support for WebGL 2.0.
Nevertheless, I do not think my question is WebGL-specific; it is more of a general OpenGL problem. The goal is to build a pyramid of images to be used in a compute shader. Each level should be a square of 2^(n-level) values (down to 1x1) and contain simple integer/float values.
I already have the values needed for this pyramid, stored in different OpenGL images, each with its corresponding size. My question is: how would you pass such an image array to a compute shader? I have no problem with restricting the number of images passed to, say, 14 or 16, but it needs to be at least 12. If there is a mechanism to store the images in a texture and then use textureLod, that would solve the problem, but I could not get my head around how to do that with images.
Thank you for your help. I am pretty sure there is an obvious way to do this, but I am relatively new to OpenGL in general.
Passing in a "pyramid of images" is a normal thing to do in WebGL; it is called a mipmap.
gl.texImage2D(gl.TEXTURE_2D, 0, gl.R32F, 128, 128, 0, gl.RED, gl.FLOAT, some128x128FloatData);
gl.texImage2D(gl.TEXTURE_2D, 1, gl.R32F,  64,  64, 0, gl.RED, gl.FLOAT, some64x64FloatData);
gl.texImage2D(gl.TEXTURE_2D, 2, gl.R32F,  32,  32, 0, gl.RED, gl.FLOAT, some32x32FloatData);
gl.texImage2D(gl.TEXTURE_2D, 3, gl.R32F,  16,  16, 0, gl.RED, gl.FLOAT, some16x16FloatData);
gl.texImage2D(gl.TEXTURE_2D, 4, gl.R32F,   8,   8, 0, gl.RED, gl.FLOAT, some8x8FloatData);
gl.texImage2D(gl.TEXTURE_2D, 5, gl.R32F,   4,   4, 0, gl.RED, gl.FLOAT, some4x4FloatData);
gl.texImage2D(gl.TEXTURE_2D, 6, gl.R32F,   2,   2, 0, gl.RED, gl.FLOAT, some2x2FloatData);
gl.texImage2D(gl.TEXTURE_2D, 7, gl.R32F,   1,   1, 0, gl.RED, gl.FLOAT, some1x1FloatData);
The second argument is the mip level, followed by the internal format and then the width and height of that mip level.
You can access any individual texel from a shader with texelFetch(someSampler, integerTexelCoord, mipLevel) as in
uniform sampler2D someSampler;
...
ivec2 texelCoord = ivec2(17, 12);
int mipLevel = 2;
float r = texelFetch(someSampler, texelCoord, mipLevel).r;
But you said you need 12, 14, or 16 levels. With 16 levels the base level is 2^(16-1) = 32768, i.e. a 32768x32768 texture. At 4 bytes per R32F texel that is 4 GiB for the base level alone (plus roughly a third more for the smaller mip levels), so you'll need a GPU with enough memory and you'll have to pray the browser lets you allocate that much.
You can save some memory by allocating your texture with texStorage2D and then uploading data with texSubImage2D.
You mentioned using images. If by that you mean <img> tags or the Image class, you can't get float or integer data from those; they are generally 8 bits per channel.
Of course, rather than using mip levels, you could also arrange your data into a texture atlas.

Emboss an Image in HALCON

Original Image:
OpenCV Processed Image:
The first image is the original; the second is OpenCV's processed image.
I want to achieve the same effect in HALCON as well.
Can someone advise me on which method or HALCON operator to use?
According to Wikipedia (image embossing), this can be achieved with a convolution filter. In the example you provided, the embossing direction appears to be south-west.
In HALCON you can use the operator convol_image to calculate the correlation between an image and an arbitrary filter mask. The filter would be similar to this:
Embossing filter matrix:
 1  0  0
 0  0  0
 0  0 -1
To apply such a filter matrix in HDevelop you can use the following line of code:
convol_image (OriginalImage, EmbossedImage, [3, 3, 1, 1, 0, 0, 0, 0, 0, 0, 0, -1], 0)
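For comparison, here is a minimal Python/OpenCV sketch (file names are hypothetical, and it assumes the convol_image mask tuple above lists the 3x3 coefficients row by row after the size and scale entries). The delta offset of 128 centres the result around mid-grey, as in a typical emboss:

import cv2
import numpy as np

# 3x3 emboss kernel corresponding to the HALCON filter mask above.
kernel = np.array([[1, 0,  0],
                   [0, 0,  0],
                   [0, 0, -1]], dtype=np.float32)

img = cv2.imread('original.png', cv2.IMREAD_GRAYSCALE)  # hypothetical input file
embossed = cv2.filter2D(img, -1, kernel, delta=128)      # offset result to mid-grey
cv2.imwrite('embossed.png', embossed)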

clFFT performance evaluation

I have been working on a performance evaluation of the clFFT library on an AMD Radeon R7 260X. The CPU is an Intel Xeon and the OS is CentOS.
I have been studying the performance of a 2D 16x16 clFFT with different batch sizes (parallel FFTs). I was surprised by how different the results obtained from event profiling and gettimeofday are.
The results of the 2D 16x16 clFFT with different batch sizes are as follows.
Using event profiling:
batch    kernel exec time (us)
1         320.7
16        461.1
256       458.3
512       537.7
1024     1016.8
Here, batch is the number of parallel FFTs and the kernel execution time is given in microseconds.
Using gettimeofday:
batch    HtoD (us)    kernel exec time (us)    DtoH (us)
1        29653        10850                    39227
16       28313        10786                    32474
256      26995        11167                    39672
512      26145        10773                    32273
1024     26856        11948                    31060
Here, batch is the number of parallel FFTs, HtoD is the data transfer time from host to device, kernel exec time is the kernel execution time, and DtoH is the data transfer time from device to host; all times are in microseconds.
(Sorry that I cannot present the results in a nicer table format; I hope you can still read them.)
Here are my questions:
1a) Why are the kernel times obtained from event profiling so different from those obtained with gettimeofday?
1b) Which of the results are correct?
2) The amount of data transferred grows with the batch size, but according to the gettimeofday results the transfer times (both HtoD and DtoH) stay almost constant instead of growing as the batch size increases from 1 to 1024. Why is that?
clFinish(cl_queue);

// Copy data from host to device (timed with gettimeofday around the blocking write)
gettimeofday(&t_s_gpu1, NULL);
clEnqueueWriteBuffer(cl_queue, d_data, CL_TRUE, 0,
                     width * height * batchSize * sizeof(cl_compl_flt),
                     h_src, 0, NULL, &event1);
clFinish(cl_queue);
clWaitForEvents(1, &event1);
gettimeofday(&t_e_gpu1, NULL);

checkCL(clAmdFftBakePlan(fftPlan, 1, &cl_queue, NULL, NULL));
clAmdFftSetPlanBatchSize(fftPlan, batchSize);
clFinish(cl_queue);

// Run the FFT: timed with gettimeofday around enqueue/finish,
// and with event profiling on the transform event
gettimeofday(&t_s_gpu, NULL);
checkCL(clAmdFftEnqueueTransform(fftPlan, CLFFT_FORWARD, 1, &cl_queue,
                                 0, NULL, &event, &d_data, NULL, NULL));
clFinish(cl_queue);
clWaitForEvents(1, &event);
gettimeofday(&t_e_gpu, NULL);

// CL_PROFILING_* timestamps are reported in nanoseconds
clGetEventProfilingInfo(event, CL_PROFILING_COMMAND_START, sizeof(time_start), &time_start, NULL);
clGetEventProfilingInfo(event, CL_PROFILING_COMMAND_END, sizeof(time_end), &time_end, NULL);
totaltime = totaltime + (time_end - time_start);
clFinish(cl_queue);

// Copy result from device to host (timed with gettimeofday around the blocking read)
gettimeofday(&t_s_gpu2, NULL);
checkCL(clEnqueueReadBuffer(cl_queue, d_data, CL_TRUE, 0,
                            width * height * batchSize * sizeof(cl_compl_flt),
                            h_res, 0, NULL, &event2));
clFinish(cl_queue);
clWaitForEvents(1, &event2);
gettimeofday(&t_e_gpu2, NULL);
I look forward to your comments and answers. Loads of thanks in advance.