How to set yaml-cpp node style?

I have a vector3 class:
class vector3
{
public:
float x, y, z;
};
and I write an instance v of it into a node:
node["x"] = v.x;
node["y"] = v.y;
node["z"] = v.z;
The result is:
x: 0
y: 0
z: 0
I want the result to be:
{x: 0, y: 0, z: 0}
If I use the old API, I can use YAML::Flow to set the style:
YAML::Emitter emitter;
emitter << YAML::Flow << YAML::BeginMap << YAML::Key << "x" << YAML::Value << x << YAML::EndMap;
Using the new API, how can I set the style?
I asked this question on the yaml-cpp project's issue page:
https://code.google.com/p/yaml-cpp/issues/detail?id=186
I got the answer:
You can still use the emitter and set the flow style:
YAML::Emitter emitter;
emitter << YAML::Flow << node;
but the vector3 is part of a larger object, so I specialize the YAML::convert<> template class:
template<>
struct convert<vector3>
{
static Node encode(const vector3 & rhs)
{
Node node = YAML::Load("{}");
node["x"] = rhs.x;
node["y"] = rhs.y;
node["z"] = rhs.z;
return node;
}
};
so I need to return a Node, but the emitter can't be converted to a Node.
I need the output to look like this:
GameObject:
  m_Layer: 0
  m_Pos: {x: 0.500000, y: 0.500000, z: 0.500000}
How can I do this?

A while ago, the node interface in yaml-cpp was extended to include a SetStyle() function. Adding the following line anywhere in encode should have the desired result:
node.SetStyle(YAML::EmitterStyle::Flow);
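Putting it together, the specialization from the question could look like this (a sketch; yaml-cpp's tutorial places such specializations inside namespace YAML so that convert<vector3> is found):
namespace YAML {
template<>
struct convert<vector3>
{
static Node encode(const vector3& rhs)
{
Node node;
node["x"] = rhs.x;
node["y"] = rhs.y;
node["z"] = rhs.z;
node.SetStyle(EmitterStyle::Flow); // emit this map inline: {x: ..., y: ..., z: ...}
return node;
}
};
}
A parent node containing a vector3 then keeps block style itself while the vector3 member is emitted inline, matching the desired GameObject output above.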

Related

What is the problem with my generated SPIR-V code, and how can I verify it?

I have some generated SPIR-V code which I want to use with the Vulkan API, but I get
Exception thrown at 0x00007FFB68D933CB (nvoglv64.dll) in vulkanCompute.exe: 0xC0000005: Access violation reading location 0x0000000000000008.
when trying to create the pipeline with vkCreateComputePipelines.
The API calls should be fine, because the same code works with a shader compiled with glslangValidator. Therefore I assume that the generated SPIR-V code must be ill-formed somehow.
I've checked the SPIR-V code with the Khronos validator, using spirv-val --target-env vulkan1.1 mainV.spv, which exited without error. However, it is also known that this tool is still incomplete.
I've also tried to compile my SPIR-V code with the Radeon GPU Analyzer, which is also available online at the Shader Playground, and this tool throws the error Error: Error: internal error: Bil::BilInstructionConvert::Create(60) Code Not Tested!, which is not really helpful, but it does support the assumption that the code is malformed.
The SPIR-V code is unfortunately too long to post here, but it is available via the Shader Playground link.
Does anyone know what the problem is with my setup, or have any idea how I can verify my SPIR-V code in a better way, short of checking all 700 lines manually?
I don't think the problem lies there, but here is the C++ host code anyway:
#include "vulkan/vulkan.hpp"
#include <iostream>
#include <fstream>
#include <vector>
#define BAIL_ON_BAD_RESULT(result) \
if (VK_SUCCESS != (result)) \
{ \
fprintf(stderr, "Failure at %u %s\n", __LINE__, __FILE__); \
exit(-1); \
}
VkResult vkGetBestComputeQueueNPH(vk::PhysicalDevice &physicalDevice, uint32_t &queueFamilyIndex)
{
auto properties = physicalDevice.getQueueFamilyProperties();
int i = 0;
for (auto prop : properties)
{
vk::QueueFlags maskedFlags = (~(vk::QueueFlagBits::eTransfer | vk::QueueFlagBits::eSparseBinding) & prop.queueFlags);
if (!(vk::QueueFlagBits::eGraphics & maskedFlags) && (vk::QueueFlagBits::eCompute & maskedFlags))
{
queueFamilyIndex = i;
return VK_SUCCESS;
}
i++;
}
i = 0;
for (auto prop : properties)
{
vk::QueueFlags maskedFlags = (~(vk::QueueFlagBits::eTransfer | vk::QueueFlagBits::eSparseBinding) & prop.queueFlags);
if (vk::QueueFlagBits::eCompute & maskedFlags)
{
queueFamilyIndex = i;
return VK_SUCCESS;
}
i++;
}
return VK_ERROR_INITIALIZATION_FAILED;
}
int main(int argc, const char *const argv[])
{
(void)argc;
(void)argv;
try
{
// initialize the vk::ApplicationInfo structure
vk::ApplicationInfo applicationInfo("VecAdd", 1, "Vulkan.hpp", 1, VK_API_VERSION_1_1);
// initialize the vk::InstanceCreateInfo
std::vector<const char *> layers = {
"VK_LAYER_LUNARG_api_dump",
"VK_LAYER_KHRONOS_validation"
};
vk::InstanceCreateInfo instanceCreateInfo({}, &applicationInfo, static_cast<uint32_t>(layers.size()), layers.data());
// create a UniqueInstance
vk::UniqueInstance instance = vk::createInstanceUnique(instanceCreateInfo);
auto physicalDevices = instance->enumeratePhysicalDevices();
for (auto &physicalDevice : physicalDevices)
{
auto props = physicalDevice.getProperties();
// get the QueueFamilyProperties of the first PhysicalDevice
std::vector<vk::QueueFamilyProperties> queueFamilyProperties = physicalDevice.getQueueFamilyProperties();
uint32_t computeQueueFamilyIndex = 0;
// get the best index into queueFamilyProperties which supports compute
BAIL_ON_BAD_RESULT(vkGetBestComputeQueueNPH(physicalDevice, computeQueueFamilyIndex));
std::vector<const char *> extensions = {"VK_EXT_external_memory_host", "VK_KHR_shader_float16_int8"};
// create a UniqueDevice
float queuePriority = 0.0f;
vk::DeviceQueueCreateInfo deviceQueueCreateInfo(vk::DeviceQueueCreateFlags(), static_cast<uint32_t>(computeQueueFamilyIndex), 1, &queuePriority);
vk::StructureChain<vk::DeviceCreateInfo, vk::PhysicalDeviceFeatures2, vk::PhysicalDeviceShaderFloat16Int8Features> createDeviceInfo = {
vk::DeviceCreateInfo(vk::DeviceCreateFlags(), 1, &deviceQueueCreateInfo, 0, nullptr, static_cast<uint32_t>(extensions.size()), extensions.data()),
vk::PhysicalDeviceFeatures2(),
vk::PhysicalDeviceShaderFloat16Int8Features()
};
createDeviceInfo.get<vk::PhysicalDeviceFeatures2>().features.setShaderInt64(true);
createDeviceInfo.get<vk::PhysicalDeviceShaderFloat16Int8Features>().setShaderInt8(true);
vk::UniqueDevice device = physicalDevice.createDeviceUnique(createDeviceInfo.get<vk::DeviceCreateInfo>());
auto memoryProperties2 = physicalDevice.getMemoryProperties2();
vk::PhysicalDeviceMemoryProperties const &memoryProperties = memoryProperties2.memoryProperties;
const int32_t bufferLength = 16384;
const uint32_t bufferSize = sizeof(int32_t) * bufferLength;
// we are going to need two buffers from this one memory
const vk::DeviceSize memorySize = bufferSize * 3;
// set memoryTypeIndex to an invalid entry in the properties.memoryTypes array
uint32_t memoryTypeIndex = VK_MAX_MEMORY_TYPES;
for (uint32_t k = 0; k < memoryProperties.memoryTypeCount; k++)
{
if ((vk::MemoryPropertyFlagBits::eHostVisible | vk::MemoryPropertyFlagBits::eHostCoherent) & memoryProperties.memoryTypes[k].propertyFlags &&
(memorySize < memoryProperties.memoryHeaps[memoryProperties.memoryTypes[k].heapIndex].size))
{
memoryTypeIndex = k;
std::cout << "found memory " << memoryTypeIndex + 1 << " out of " << memoryProperties.memoryTypeCount << std::endl;
break;
}
}
BAIL_ON_BAD_RESULT(memoryTypeIndex == VK_MAX_MEMORY_TYPES ? VK_ERROR_OUT_OF_HOST_MEMORY : VK_SUCCESS);
auto memory = device->allocateMemoryUnique(vk::MemoryAllocateInfo(memorySize, memoryTypeIndex));
auto in_buffer = device->createBufferUnique(vk::BufferCreateInfo(vk::BufferCreateFlags(), bufferSize, vk::BufferUsageFlagBits::eStorageBuffer, vk::SharingMode::eExclusive));
device->bindBufferMemory(in_buffer.get(), memory.get(), 0);
// create a DescriptorSetLayout
std::vector<vk::DescriptorSetLayoutBinding> descriptorSetLayoutBinding{
{0, vk::DescriptorType::eStorageBuffer, 1, vk::ShaderStageFlagBits::eCompute}};
vk::UniqueDescriptorSetLayout descriptorSetLayout = device->createDescriptorSetLayoutUnique(vk::DescriptorSetLayoutCreateInfo(vk::DescriptorSetLayoutCreateFlags(), static_cast<uint32_t>(descriptorSetLayoutBinding.size()), descriptorSetLayoutBinding.data()));
std::cout << "Memory bound" << std::endl;
std::ifstream myfile;
myfile.open("shaders/MainV.spv", std::ios::ate | std::ios::binary);
if (!myfile.is_open())
{
std::cout << "File not found" << std::endl;
return EXIT_FAILURE;
}
auto size = myfile.tellg();
std::vector<unsigned int> shader_spv(size / sizeof(unsigned int));
myfile.seekg(0);
myfile.read(reinterpret_cast<char *>(shader_spv.data()), size);
myfile.close();
std::cout << "Shader size: " << shader_spv.size() << std::endl;
auto shaderModule = device->createShaderModuleUnique(vk::ShaderModuleCreateInfo(vk::ShaderModuleCreateFlags(), shader_spv.size() * sizeof(unsigned int), shader_spv.data()));
// create a PipelineLayout using that DescriptorSetLayout
vk::UniquePipelineLayout pipelineLayout = device->createPipelineLayoutUnique(vk::PipelineLayoutCreateInfo(vk::PipelineLayoutCreateFlags(), 1, &descriptorSetLayout.get()));
vk::ComputePipelineCreateInfo computePipelineInfo(
vk::PipelineCreateFlags(),
vk::PipelineShaderStageCreateInfo(
vk::PipelineShaderStageCreateFlags(),
vk::ShaderStageFlagBits::eCompute,
shaderModule.get(),
"_ZTSZZ4mainENK3$_0clERN2cl4sycl7handlerEE6VecAdd"),
pipelineLayout.get());
auto pipeline = device->createComputePipelineUnique(nullptr, computePipelineInfo);
auto descriptorPoolSize = vk::DescriptorPoolSize(vk::DescriptorType::eStorageBuffer, 2);
auto descriptorPool = device->createDescriptorPool(vk::DescriptorPoolCreateInfo(vk::DescriptorPoolCreateFlags(), 1, 1, &descriptorPoolSize));
auto commandPool = device->createCommandPoolUnique(vk::CommandPoolCreateInfo(vk::CommandPoolCreateFlags(), computeQueueFamilyIndex));
auto commandBuffer = std::move(device->allocateCommandBuffersUnique(vk::CommandBufferAllocateInfo(commandPool.get(), vk::CommandBufferLevel::ePrimary, 1)).front());
commandBuffer->begin(vk::CommandBufferBeginInfo(vk::CommandBufferUsageFlags(vk::CommandBufferUsageFlagBits::eOneTimeSubmit)));
commandBuffer->bindPipeline(vk::PipelineBindPoint::eCompute, pipeline.get());
commandBuffer->dispatch(bufferSize / sizeof(int32_t), 1, 1);
commandBuffer->end();
auto queue = device->getQueue(computeQueueFamilyIndex, 0);
vk::SubmitInfo submitInfo(0, nullptr, nullptr, 1, &commandBuffer.get(), 0, nullptr);
queue.submit(1, &submitInfo, vk::Fence());
queue.waitIdle();
printf("all done\nWoohooo!!!\n\n");
}
}
catch (vk::SystemError &err)
{
std::cout << "vk::SystemError: " << err.what() << std::endl;
exit(-1);
}
catch (std::runtime_error &err)
{
std::cout << "std::runtime_error: " << err.what() << std::endl;
exit(-1);
}
catch (...)
{
std::cout << "unknown error\n";
exit(-1);
}
return EXIT_SUCCESS;
}
Well, after checking line by line, it turned out that the problem occurs when working with pointers to pointers. For me it is still not clear from the specification that this is disallowed, but it is understandable that it does not work with logical pointers.
Still, it is strange that the validator is not able to detect this, and that compiling the SPIR-V code crashes instead of producing a clear error message.
So in the end, it was the shader code that was wrong.
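For reference, the SPIRV-Tools package that provides spirv-val also ships spirv-dis, which disassembles a binary module into human-readable SPIR-V assembly; inspecting the OpTypePointer declarations in that output is one way to spot pointer-to-pointer types without stepping through the raw binary. The invocation below is a sketch and its options may vary between versions:
spirv-dis mainV.spv -o mainV.spvasm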

In Vulkan: I want to save a depth image to a file, but I always get a corrupted depth image

I want to save the depth image from the framebuffer render result.
1. Create a staging buffer to hold the image data.
2. Use vkCmdCopyImageToBuffer to copy the depth image into the staging buffer.
3. Use vkMapMemory to map the staging buffer memory into host memory.
4. Read the host memory and write the depth data to a file.
But I always get a corrupted depth image, and I don't know where I went wrong.
(Screenshots in the original post: the application window output and the resulting broken depth image file; the source file is linked there as well.)
The function that saves the depth image:
VkDeviceSize size = WIDTH * HEIGHT * 4;
VkBuffer dstBuffer;
VkDeviceMemory dstMemory;
createBuffer(
size,
VK_BUFFER_USAGE_TRANSFER_DST_BIT,
VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT,
dstBuffer,
dstMemory);
VkCommandBuffer copyCmd = beginSingleTimeCommands();
// depth format -> VK_FORMAT_D32_SFLOAT_S8_UINT
VkBufferImageCopy region = {};
region.bufferOffset = 0;
region.bufferImageHeight = 0;
region.bufferRowLength = 0;
region.imageSubresource.aspectMask = VK_IMAGE_ASPECT_DEPTH_BIT;
region.imageSubresource.mipLevel = 0;
region.imageSubresource.baseArrayLayer = 0;
region.imageSubresource.layerCount = 1;
region.imageOffset = VkOffset3D{ 0, 0, 0 };
region.imageExtent = VkExtent3D{ swapChainExtent.width, swapChainExtent.height, 1};
vkCmdCopyImageToBuffer(
copyCmd,
depthImage, VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
dstBuffer,
1,
&region
);
endSingleTimeCommands(copyCmd);
// Map image memory so we can start copying from it
void *data;
vkMapMemory(device, dstMemory, 0, size, 0, &data);
std::ofstream file(path, std::ios::out | std::ios::binary);
// ppm header
file << "P6\n" << WIDTH << "\n" << HEIGHT << "\n" << 255 << "\n";
float *row = (float*)data;
auto size_v = WIDTH * HEIGHT;
for (uint32_t y = 0; y < size_v; y++) {
file.write((char*)row + 1, 1);
file.write((char*)row + 1, 1);
file.write((char*)row + 1, 1);
row++;
}
file.close();
// Clean up resources
vkUnmapMemory(device, dstMemory);
vkFreeMemory(device, dstMemory, nullptr);
vkDestroyBuffer(device, dstBuffer, nullptr);
I hope someone can drag me out of this. Thanks!
Assuming you've done all the transfer work correctly, your mapped data is basically an array of floats. This is reflected in your code by this line:
float *row = (float*)data;
However, when you actually write out the file you're treating the data like bytes...
file.write((char*)row + 1, 1);
So you're writing out a single raw byte of each 32-bit float (the same byte three times), which is not a color value. What you need is some function to convert from the float to a color value.
Assuming the depth value is normalized (I can't remember off the top of my head whether this is the case, or if it's dependent on the pipeline or framebuffer setup) and if you just want greyscale, you could use
uint8_t map(float f) {
return (uint8_t)(f * 255.0f);
}
and inside your file writing loop you'd do something like
uint8_t grey = map(*row);
file.write(reinterpret_cast<char*>(&grey), 1); // std::ofstream::write takes a char*, hence the cast
file.write(reinterpret_cast<char*>(&grey), 1);
file.write(reinterpret_cast<char*>(&grey), 1);
++row;
Alternatively, if you want some sort of color gradient for easier visualization, you'd want a more complex mapping function...
vec3 colorWheel(float normalizedHue) {
float v = normalizedHue * 6.f;
if (v < 0.f) {
return vec3(1.f, 0.f, 0.f);
} else if (v < 1.f) {
return vec3(1.f, v, 0.f);
} else if (v < 2.f) {
return vec3(1.f - (v-1.f), 1.f, 0.f);
} else if (v < 3.f) {
return vec3(0.f, 1.f, (v-2.f));
} else if (v < 4.f) {
return vec3(0.f, 1.f - (v-3.f), 1.f );
} else if (v < 5.f) {
return vec3((v-4.f), 0.f, 1.f );
} else if (v < 6.f) {
return vec3(1.f, 0.f, 1.f - (v-5.f));
} else {
return vec3(1.f, 0.f, 0.f);
}
}
and in your file output loop...
vec3 color = colorWheel(*row);
uint8_t r = map(color.r);
uint8_t g = map(color.g);
uint8_t b = map(color.b);
file.write(reinterpret_cast<char*>(&r), 1);
file.write(reinterpret_cast<char*>(&g), 1);
file.write(reinterpret_cast<char*>(&b), 1);
++row;
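Since the answer above is unsure whether the depth values are guaranteed to be normalized, a defensive variant of map() that clamps before scaling may be worth using; this is only a suggestion, not part of the original answer:
uint8_t mapClamped(float f) {
// clamp to [0, 1] before scaling, in case the depth value isn't normalized
if (f < 0.f) f = 0.f;
if (f > 1.f) f = 1.f;
return (uint8_t)(f * 255.0f);
}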

Write out a triangulation result to an OBJ file in CGAL

I'm trying to make use of a 2D triangulation using CGAL and create an OBJ file from it. I'm able to create the 2D triangulation. I now want to set the third coordinate to 0, i.e. z = 0, and write the result of the triangulation to an OBJ file. The CGAL samples seem quite confusing, and I'm not sure how to go about this.
Here is how I did it. Hope it helps someone.
// A modifier creating a triangle with the incremental builder.
template<class HDS>
class polyhedron_builder : public CGAL::Modifier_base<HDS> {
public:
std::vector<Triangulation>& t_;
polyhedron_builder(std::vector<Triangulation>& t) : t_(t) {}
void operator()(HDS& hds) {
typedef typename HDS::Vertex Vertex;
typedef typename Vertex::Point Point3;
// create a cgal incremental builder
CGAL::Polyhedron_incremental_builder_3<HDS> B(hds, true);
// count total faces and vertices
int face_num = 0;
int vertice_num = 0;
for (auto& tri : t_) {
face_num += tri.number_of_faces();
vertice_num += tri.number_of_vertices();
}
std::cout << face_num << ", " << vertice_num << ", " << t_.size() << "\n";
B.begin_surface(vertice_num, face_num); // begin_surface expects (vertices, facets)
// add the polyhedron vertices
for (auto& tri : t_) {
for (auto itr = tri.finite_vertices_begin(); itr != tri.finite_vertices_end(); ++itr) {
B.add_vertex(Point3(itr->point().x(), itr->point().y(), 0));
}
}
// add the polyhedron triangles
for (auto& tri : t_) {
for (auto itr = tri.finite_faces_begin(); itr != tri.finite_faces_end(); ++itr) {
B.begin_facet();
B.add_vertex_to_facet(itr->vertex(0)->info());
B.add_vertex_to_facet(itr->vertex(1)->info());
B.add_vertex_to_facet(itr->vertex(2)->info());
B.end_facet();
}
}
// finish up the surface
B.end_surface();
}
};
void OBJfile::write_obj_file(const std::string& filename) {
CGAL::Polyhedron_3<CGAL::Simple_cartesian<double>> polyhedron;
unsigned index = 0;
std::vector<Triangulation> t_vector;
// here, contours is an internal object that tracks the polygon outlines
for (auto& contour : contours_) {
Triangulation t;
std::vector < std::pair<Point, unsigned> > polygon;
for (auto& pt : contour) {
Point point(pt.x(), pt.y());
polygon.push_back(std::make_pair(point, index++));
}
triangulate(polygon, t);
t_vector.push_back(t);
}
polyhedron_builder<HalfedgeDS> builder(t_vector);
polyhedron.delegate(builder);
// write the polyhedron out as a .obj (Wavefront) file
std::ofstream os("test.obj");
CGAL::File_writer_wavefront writer;
CGAL::generic_print_polyhedron(os, polyhedron, writer);
os.close();
}
void OBJfile::triangulate(const std::vector<std::pair<Point, unsigned>>& polygon_points, Triangulation& t) {
auto begin = polygon_points.begin();
auto end = polygon_points.end();
//std::istream_iterator<Point> end;
t.insert(begin, end);
}
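For reference, the code above assumes typedefs along these lines; this is only a sketch, since Triangulation, Point and HalfedgeDS are not defined in the snippet and the actual kernel or triangulation choice may differ. The vertex base with info is what makes itr->vertex(i)->info() available:
#include <CGAL/Simple_cartesian.h>
#include <CGAL/Delaunay_triangulation_2.h>
#include <CGAL/Triangulation_vertex_base_with_info_2.h>
#include <CGAL/Triangulation_data_structure_2.h>
#include <CGAL/Polyhedron_3.h>
typedef CGAL::Simple_cartesian<double> Kernel;
typedef Kernel::Point_2 Point;
typedef CGAL::Triangulation_vertex_base_with_info_2<unsigned, Kernel> Vb;
typedef CGAL::Triangulation_data_structure_2<Vb> Tds;
typedef CGAL::Delaunay_triangulation_2<Kernel, Tds> Triangulation;
typedef CGAL::Polyhedron_3<Kernel>::HalfedgeDS HalfedgeDS;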

What is the most reliable way to record a Kinect stream for later playback?

I have been working with Processing and Cinder to modify Kinect input on the fly. However, I would also like to record the full stream (depth+color+accelerometer values, and whatever else is in there). I'm recording so I can try out different effects/treatments on the same material.
Because I am still just learning Cinder and Processing is quite slow/laggy, I have had trouble finding advice on a strategy for capturing the stream - anything (preferably in Cinder, oF, or Processing) would be really helpful.
I've tried both Processing and OpenFrameworks. Processing is slower when displaying both images (depth and colour). OpenFrameworks slows a bit while writing the data to disk, but here's the basic approach:
Set up openFrameworks (open and compile any sample to make sure you're up and running).
Download the ofxKinect addon and copy the example project as described on GitHub.
Once you've got OF and the ofxKinect example running, it's just a matter of adding a few variables to save your data:
In this basic setup, I've created a couple of ofImage instances and a boolean to toggle saving. In the example the depth and RGB buffers are saved into ofxCvGrayscaleImage instances, but I haven't used OF and OpenCV enough to know how to do something as simple as saving one of those to disk, which is why I've used two ofImage instances.
I don't know how comfortable you are with Processing, OF, or Cinder, so, for argument's sake, I'll assume you know your way around Processing but are still tackling C++.
OF is pretty similar to Processing, but there are a few differences:
In Processing you have variable declarations and their usage in the same file. In OF you've got a .h file where you declare your variables and a .cpp file where you initialize and use them.
In Processing you have the setup() (initialize variables) and draw() (update variables and draw to screen) methods, while in OF you have setup() (same as in Processing), update() (update variables only, nothing visual) and draw() (draw to screen using the updated values).
When working with images, since you're coding in C++, you need to allocate memory first, as opposed to Processing/Java where memory is managed for you.
There are more differences that I won't detail here; do check out OF for Processing Users on the wiki.
Back to the ofxKinect example, here's my basic setup:
.h file:
#pragma once
#include "ofMain.h"
#include "ofxOpenCv.h"
#include "ofxKinect.h"
class testApp : public ofBaseApp {
public:
void setup();
void update();
void draw();
void exit();
void drawPointCloud();
void keyPressed (int key);
void mouseMoved(int x, int y );
void mouseDragged(int x, int y, int button);
void mousePressed(int x, int y, int button);
void mouseReleased(int x, int y, int button);
void windowResized(int w, int h);
ofxKinect kinect;
ofxCvColorImage colorImg;
ofxCvGrayscaleImage grayImage;
ofxCvGrayscaleImage grayThresh;
ofxCvGrayscaleImage grayThreshFar;
ofxCvContourFinder contourFinder;
ofImage colorData;//to save RGB data to disk
ofImage grayData;//to save depth data to disk
bool bThreshWithOpenCV;
bool drawPC;
bool saveData;//save to disk toggle
int nearThreshold;
int farThreshold;
int angle;
int pointCloudRotationY;
int saveCount;//counter used for naming 'frames'
};
and the .cpp file:
#include "testApp.h"
//--------------------------------------------------------------
void testApp::setup() {
//kinect.init(true); //shows infrared image
kinect.init();
kinect.setVerbose(true);
kinect.open();
colorImg.allocate(kinect.width, kinect.height);
grayImage.allocate(kinect.width, kinect.height);
grayThresh.allocate(kinect.width, kinect.height);
grayThreshFar.allocate(kinect.width, kinect.height);
//allocate memory for these ofImages which will be saved to disk
colorData.allocate(kinect.width, kinect.height, OF_IMAGE_COLOR);
grayData.allocate(kinect.width, kinect.height, OF_IMAGE_GRAYSCALE);
nearThreshold = 230;
farThreshold = 70;
bThreshWithOpenCV = true;
ofSetFrameRate(60);
// zero the tilt on startup
angle = 0;
kinect.setCameraTiltAngle(angle);
// start from the front
pointCloudRotationY = 180;
drawPC = false;
saveCount = 0;//init frame counter
}
//--------------------------------------------------------------
void testApp::update() {
ofBackground(100, 100, 100);
kinect.update();
if(kinect.isFrameNew()) // there is a new frame and we are connected
{
grayImage.setFromPixels(kinect.getDepthPixels(), kinect.width, kinect.height);
if(saveData){
//if toggled, set depth and rgb pixels to respective ofImage, save to disk and update the 'frame' counter
grayData.setFromPixels(kinect.getDepthPixels(), kinect.width, kinect.height,true);
colorData.setFromPixels(kinect.getCalibratedRGBPixels(), kinect.width, kinect.height,true);
grayData.saveImage("depth"+ofToString(saveCount)+".png");
colorData.saveImage("color"+ofToString(saveCount)+".png");
saveCount++;
}
//we do two thresholds - one for the far plane and one for the near plane
//we then do a cvAnd to get the pixels which are a union of the two thresholds.
if( bThreshWithOpenCV ){
grayThreshFar = grayImage;
grayThresh = grayImage;
grayThresh.threshold(nearThreshold, true);
grayThreshFar.threshold(farThreshold);
cvAnd(grayThresh.getCvImage(), grayThreshFar.getCvImage(), grayImage.getCvImage(), NULL);
}else{
//or we do it ourselves - show people how they can work with the pixels
unsigned char * pix = grayImage.getPixels();
int numPixels = grayImage.getWidth() * grayImage.getHeight();
for(int i = 0; i < numPixels; i++){
if( pix[i] < nearThreshold && pix[i] > farThreshold ){
pix[i] = 255;
}else{
pix[i] = 0;
}
}
}
//update the cv image
grayImage.flagImageChanged();
// find contours with an area between 10 px and half of w*h px, returning up to 20 blobs.
// find holes is set to false here, so interior contours are not returned
contourFinder.findContours(grayImage, 10, (kinect.width*kinect.height)/2, 20, false);
}
}
//--------------------------------------------------------------
void testApp::draw() {
ofSetColor(255, 255, 255);
if(drawPC){
ofPushMatrix();
ofTranslate(420, 320);
// we need a proper camera class
drawPointCloud();
ofPopMatrix();
}else{
kinect.drawDepth(10, 10, 400, 300);
kinect.draw(420, 10, 400, 300);
grayImage.draw(10, 320, 400, 300);
contourFinder.draw(10, 320, 400, 300);
}
ofSetColor(255, 255, 255);
stringstream reportStream;
reportStream << "accel is: " << ofToString(kinect.getMksAccel().x, 2) << " / "
<< ofToString(kinect.getMksAccel().y, 2) << " / "
<< ofToString(kinect.getMksAccel().z, 2) << endl
<< "press p to switch between images and point cloud, rotate the point cloud with the mouse" << endl
<< "using opencv threshold = " << bThreshWithOpenCV <<" (press spacebar)" << endl
<< "set near threshold " << nearThreshold << " (press: + -)" << endl
<< "set far threshold " << farThreshold << " (press: < >) num blobs found " << contourFinder.nBlobs
<< ", fps: " << ofGetFrameRate() << endl
<< "press c to close the connection and o to open it again, connection is: " << kinect.isConnected() << endl
<< "press s to toggle saving depth and color data. currently saving: " << saveData << endl
<< "press UP and DOWN to change the tilt angle: " << angle << " degrees";
ofDrawBitmapString(reportStream.str(),20,656);
}
void testApp::drawPointCloud() {
ofScale(400, 400, 400);
int w = 640;
int h = 480;
ofRotateY(pointCloudRotationY);
float* distancePixels = kinect.getDistancePixels();
glBegin(GL_POINTS);
int step = 2;
for(int y = 0; y < h; y += step) {
for(int x = 0; x < w; x += step) {
ofPoint cur = kinect.getWorldCoordinateFor(x, y);
ofColor color = kinect.getCalibratedColorAt(x,y);
glColor3ub((unsigned char)color.r,(unsigned char)color.g,(unsigned char)color.b);
glVertex3f(cur.x, cur.y, cur.z);
}
}
glEnd();
}
//--------------------------------------------------------------
void testApp::exit() {
kinect.setCameraTiltAngle(0); // zero the tilt on exit
kinect.close();
}
//--------------------------------------------------------------
void testApp::keyPressed (int key) {
switch (key) {
case ' ':
bThreshWithOpenCV = !bThreshWithOpenCV;
break;
case 'p':
drawPC = !drawPC;
break;
case '>':
case '.':
farThreshold ++;
if (farThreshold > 255) farThreshold = 255;
break;
case '<':
case ',':
farThreshold --;
if (farThreshold < 0) farThreshold = 0;
break;
case '+':
case '=':
nearThreshold ++;
if (nearThreshold > 255) nearThreshold = 255;
break;
case '-':
nearThreshold --;
if (nearThreshold < 0) nearThreshold = 0;
break;
case 'w':
kinect.enableDepthNearValueWhite(!kinect.isDepthNearValueWhite());
break;
case 'o':
kinect.setCameraTiltAngle(angle); // go back to prev tilt
kinect.open();
break;
case 'c':
kinect.setCameraTiltAngle(0); // zero the tilt
kinect.close();
break;
case 's'://s to toggle saving data
saveData = !saveData;
break;
case OF_KEY_UP:
angle++;
if(angle>30) angle=30;
kinect.setCameraTiltAngle(angle);
break;
case OF_KEY_DOWN:
angle--;
if(angle<-30) angle=-30;
kinect.setCameraTiltAngle(angle);
break;
}
}
//--------------------------------------------------------------
void testApp::mouseMoved(int x, int y) {
pointCloudRotationY = x;
}
//--------------------------------------------------------------
void testApp::mouseDragged(int x, int y, int button)
{}
//--------------------------------------------------------------
void testApp::mousePressed(int x, int y, int button)
{}
//--------------------------------------------------------------
void testApp::mouseReleased(int x, int y, int button)
{}
//--------------------------------------------------------------
void testApp::windowResized(int w, int h)
{}
This is a very basic setup. Feel free to modify it (add the tilt angle to the saved data, etc.).
I'm pretty sure there are ways to improve this speed-wise (e.g. don't update the ofxCvGrayscaleImage instances and don't draw images to the screen while saving, or stack a few frames and write them to disk at an interval instead of on every frame; see the sketch below).
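For instance, the frame-stacking idea could look roughly like this. This is an untested sketch: pendingDepth, pendingColor and kFlushEvery are hypothetical additions to testApp, and the exact setFromPixels signature varies between OF versions:
// in testApp.h (hypothetical members):
std::vector<ofImage> pendingDepth; // frames waiting to be written to disk
std::vector<ofImage> pendingColor;
static const size_t kFlushEvery = 30; // flush to disk every 30 frames
// in testApp::update(), replacing the immediate saveImage() calls:
if(saveData){
ofImage d, c;
d.setFromPixels(kinect.getDepthPixels(), kinect.width, kinect.height, OF_IMAGE_GRAYSCALE);
c.setFromPixels(kinect.getCalibratedRGBPixels(), kinect.width, kinect.height, OF_IMAGE_COLOR);
pendingDepth.push_back(d);
pendingColor.push_back(c);
if(pendingDepth.size() >= kFlushEvery){
for(size_t i = 0; i < pendingDepth.size(); i++){
pendingDepth[i].saveImage("depth"+ofToString(saveCount)+".png");
pendingColor[i].saveImage("color"+ofToString(saveCount)+".png");
saveCount++;
}
pendingDepth.clear();
pendingColor.clear();
}
}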
Good luck!

Problem with gdk.Pixbuf in gtk# Mono

I'm creating a small drawing program in Mono gtk# using the Cairo graphics library. I'm coding and compiling on a Mac OS X system. I have a drawable object whose contents I copy into a Pixbuf at a certain time and later draw back into the drawable. The idea is to take a "snapshot" of the image in the drawable and then draw on top of it.
The problem is that when I put the Pixbuf back into the drawable it looks corrupted: all yellow with stripes, and it looks like a portion of the image is missing.
UPDATE: I ran the program on my Linux and Windows machines, and there it works flawlessly! So this error only occurs on Mac OS X.
Here's the code:
// use: gmcs -pkg:gtk-sharp-2.0 -pkg:mono-cairo ttv1.cs
using Gtk;
using Cairo;
using System;
public class Draw : Window
{
DrawingArea canvas;
public Gdk.Pixbuf pixbuf;
public Draw() : base("teikniteink")
{
canvas = new DrawingArea();
canvas.ExposeEvent += canvasExposed;
DeleteEvent += delegate { Application.Quit();};
KeyPressEvent += onKey;
SetDefaultSize(400,400);
SetPosition(WindowPosition.Center);
Add(canvas);
ShowAll();
}
private void onKey(object o, KeyPressEventArgs args)
{
switch (args.Event.Key)
{
case Gdk.Key.w:
Console.WriteLine("Key Pressed {0}", args.Event.Key);
// Send to Pixbuf
pixbuf = Gdk.Pixbuf.FromDrawable(canvas.GdkWindow, Gdk.Colormap.System,0,0,0,0,400,400);
// Save to output.png
pixbuf.Save ("output.png", "png");
break;
case Gdk.Key.e:
Console.WriteLine("Key Pressed {0}", args.Event.Key);
Gdk.GC g = new Gdk.GC(canvas.GdkWindow);
// Retrieve from pixbuf
canvas.GdkWindow.DrawPixbuf (g,pixbuf,0,0,0,0,-1,-1,Gdk.RgbDither.Normal,0,0);
break;
}
}
private void canvasExposed(object o, ExposeEventArgs args)
{
using (Cairo.Context ctx = Gdk.CairoHelper.Create(canvas.GdkWindow))
{
PointD start = new PointD(100,100);
PointD end = new PointD(300,300);
double width = Math.Abs(start.X - end.X);
double height = Math.Abs(start.Y - end.Y);
double xcenter = start.X + (end.X - start.X) / 2.0;
double ycenter = start.Y + (end.Y - start.Y) / 2.0;
ctx.Save();
ctx.Translate(xcenter, ycenter);
ctx.Scale(width/2.0, height/2.0);
ctx.Arc(0.0, 0.0, 1.0, 0.0, 2*Math.PI);
ctx.Restore();
ctx.Stroke();
}
}
public static void Main()
{
Application.Init();
new Draw();
Application.Run();
}
}
It would be very much appreciated if someone who knows what's going on here could point me in the right direction to fix it.
I triggered the same problem in this manner:
gw = gtk_widget_get_window(GTK_WIDGET(GLOBALS->mainwindow));
if(gw)
{
gdk_drawable_get_size(gw, &w, &h);
cm = gdk_drawable_get_colormap(gw);
if(cm)
{
dest = gdk_pixbuf_new(GDK_COLORSPACE_RGB, FALSE, 8, w, h);
if(dest)
{
dest2 = gdk_pixbuf_get_from_drawable(dest, gw, cm, 0, 0, 0, 0, w, h);
if(dest2)
{
succ = gdk_pixbuf_save (dest2, *GLOBALS->fileselbox_text, "png", &err, NULL);
}
}
}
}
The gdk_pixbuf_get_from_drawable() function has issues when the source drawable is a Quartz window, specifically in how _gdk_quartz_image_copy_to_image() services it. In short, 256-bit vertical strips are converted, but the conversion routine assumes the pixels are 24-bit RGB rather than 32-bit RGBA. The following patch fixed the problem for me:
--- gtk+/gdk/quartz/gdkimage-quartz.c 2011-12-03 14:24:03.000000000 -0600
+++ gtk+664894/gdk/quartz/gdkimage-quartz.c 2013-10-15 18:52:24.000000000 -0500
@@ -150,6 +150,10 @@ _gdk_quartz_image_copy_to_image (GdkDraw
data = [rep bitmapData];
size = [rep size];
+ int bpr = [rep bytesPerRow];
+ int wid = size.width;
+ int bpx = bpr/wid;
+
for (y = 0; y < size.height; y++)
{
guchar *src = data + y * [rep bytesPerRow];
@@ -158,12 +162,15 @@ _gdk_quartz_image_copy_to_image (GdkDraw
{
gint32 pixel;
+ if (bpx == 4) // fix gdk_pixbuf_get_from_drawable "yellow stripes"
+ pixel = src[0] << 16 | src[1] << 8 | src[2];
+ else
if (image->byte_order == GDK_LSB_FIRST)
pixel = src[0] << 8 | src[1] << 16 |src[2] << 24;
else
pixel = src[0] << 16 | src[1] << 8 |src[2];
- src += 3;
+ src += bpx;
gdk_image_put_pixel (image, dest_x + x, dest_y + y, pixel);
}
I don't know if this was fixed in later versions of the GTK OS X source. I use my own build for producing binaries of gtkwave, as I have some necessary patches that were seemingly never integrated into the jhbuild source tree long ago.
-Tony