I have two monocular USB cameras and I want to use ROS to make them into a stereo camera. I am having a hard time finding a ROS package that publishes their two images in this format:
/my_stereo/left/camera_info
/my_stereo/left/image_raw
/my_stereo/right/camera_info
/my_stereo/right/image_raw
/my_stereo_both/parameter_descriptions
/my_stereo_both/parameter_updates
/my_stereo_l/parameter_descriptions
/my_stereo_l/parameter_updates
/my_stereo_r/parameter_descriptions
How can I do this? Any help is truly appreciated!
You can use:
http://wiki.ros.org/stereo_image_proc
You can change the published topic names according to the documentation.
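If your cameras already have a driver publishing images, one way to get the topic layout above is a small relay node. Here is a minimal sketch, assuming the drivers publish under /cam_left and /cam_right (placeholder names, adjust to your setup):

```python
#!/usr/bin/env python
# Minimal relay sketch: republish two monocular camera streams under a
# common stereo namespace so tools like stereo_image_proc can consume them.
# /cam_left and /cam_right are assumed source namespaces -- adjust them to
# whatever your USB camera driver actually publishes.
import rospy
from sensor_msgs.msg import CameraInfo, Image

def relay(src, dst, msg_type):
    """Subscribe to src and republish every message on dst."""
    pub = rospy.Publisher(dst, msg_type, queue_size=1)
    return rospy.Subscriber(src, msg_type, pub.publish, queue_size=1)

if __name__ == "__main__":
    rospy.init_node("stereo_relay")
    relays = [
        relay("/cam_left/image_raw",    "/my_stereo/left/image_raw",    Image),
        relay("/cam_left/camera_info",  "/my_stereo/left/camera_info",  CameraInfo),
        relay("/cam_right/image_raw",   "/my_stereo/right/image_raw",   Image),
        relay("/cam_right/camera_info", "/my_stereo/right/camera_info", CameraInfo),
    ]
    rospy.spin()
```

In practice you would usually achieve the same thing with remappings in your launch file; the relay just makes the topic layout explicit.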
This tutorial provides an example of how exactly to publish the images.
I'm struggling with the disparity topic and posted a question here.
According to the already-mentioned tutorial from 2016, the stereo_image_proc node was supposed to do most of the work, but it looks like it doesn't exist in the ROS 2 version. Instead, there are two nodes: disparity_node and point_cloud_node.
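Based only on those node names, a ROS 2 launch sketch could look something like this (the my_stereo namespace is taken from the question above; the exact input topics and remappings depend on your camera driver):

```python
# Hedged ROS 2 launch sketch using the two node names mentioned above
# (disparity_node and point_cloud_node from stereo_image_proc).
# The 'my_stereo' namespace is borrowed from the question; the exact
# input topic names/remappings depend on your camera driver.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(package='stereo_image_proc', executable='disparity_node',
             namespace='my_stereo'),
        Node(package='stereo_image_proc', executable='point_cloud_node',
             namespace='my_stereo'),
    ])
```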
Required Info
Camera Model: D435
Firmware Version: 05.12.13.50
Operating System & Version: Linux (Ubuntu 18.04.5)
Kernel Version (Linux Only): 4.9.201
Platform: NVIDIA Jetson Nano B01
SDK Version: 2.41.0
Language: ROS packages
Segment: Robot
Hello, I need to use obstacle avoidance while working with the D435. These are the approaches I have looked into so far:
1. Use depthimage_to_laserscan to convert the depth information into a lidar-like scan. The current problem is that there is also a real lidar on my robot, and both of them now publish on the scan topic, so there is a conflict. I don't know how to solve it.
2. I want to know whether the two lidar signals can be fused, and what configuration is needed to fuse them. Is there any relevant information or code?
3. Use PointCloud2 point cloud information. I don't understand how to do this at present. Although the point cloud can now be seen on the map, it does not have the effect of avoiding obstacles. Also, does this point cloud information need to be passed to AMCL? If so, how does it need to be delivered? I hope someone can help me.
I am doing a project using deep learning, and for this I need to take pictures from the Kinect and evaluate them. My problem is that the resolution of the pictures is 640x860. Because of this, I wanted to know whether ros freekinect or some other library can increase the resolution given a yaml file or something like that? Thank you guys, and sorry for the English.
I'm currently working on a project with the Kinect One sensor, and its camera resolution is 1920x1080.
If I am not wrong, you are currently using the old Xbox 360 Kinect, from what I see here.
I have not heard of libraries that can increase resolution yet (this does not mean they do not exist).
But my suggestion is to use the latest hardware, found in the Microsoft Store here. It costs about $150.
Cheers!
In my project I want to receive/send images to a USB gadget device. For this, a host-side USB driver needs to be written. According to my understanding, an image file cannot be directly transferred by reading and storing the bytes one by one until we encounter an EOF (as is done with a normal text file). So how do we do it?
I found a relevant topic on this at the following link:
What is most appropriate USB class to handle images and video transfer and streaming?
but things were still not clear. Should I use libptp with libusb to transfer the image files? I could not find any sample/example code that explains whether it's possible or how it's done. Thanks for the help in advance!
Regards,
Shweta
Also, from some more investigation, I think LibMTP can be used for image transfer. But for that to work, I guess we also need libusb installed. Is my understanding correct?
Have a look at this link if you are familiar with Python and pygame. You can also convert the image to bytes and transfer it to the other device with the pyusb package in Python.
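To illustrate the pyusb idea: an image file is just binary data, so you can read it into bytes and push it to an OUT endpoint in chunks. A minimal sketch, where the vendor/product IDs and endpoint address are placeholders for your actual gadget:

```python
# Minimal pyusb sketch: send an image file as raw bytes to a USB device.
# idVendor/idProduct and the 0x01 OUT endpoint are hypothetical values --
# replace them with the descriptors of your actual gadget. Whatever firmware
# runs on the device side must know how to reassemble/interpret the bytes.
import usb.core
import usb.util

VENDOR_ID, PRODUCT_ID = 0x1234, 0x5678   # placeholder IDs
OUT_ENDPOINT = 0x01                      # placeholder OUT endpoint address
CHUNK_SIZE = 4096

dev = usb.core.find(idVendor=VENDOR_ID, idProduct=PRODUCT_ID)
if dev is None:
    raise RuntimeError("device not found")
dev.set_configuration()

with open("photo.jpg", "rb") as f:       # any binary file is read the same way
    data = f.read()

# Send the file in fixed-size chunks; dev.write performs a bulk transfer.
for offset in range(0, len(data), CHUNK_SIZE):
    dev.write(OUT_ENDPOINT, data[offset:offset + CHUNK_SIZE], timeout=5000)
```

This is only the raw bulk-transfer route; protocols like MTP/PTP (LibMTP/libptp) layer a file-oriented exchange on top of the same USB transfers.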
I am building a network utility for OS X. I've gone through Apple's documentation, but I cannot find the framework that allows my app to monitor incoming bytes. Can anybody point me in the right direction? Thank you for your time!
To get statistics on a network, you can use the sysctl system call. This is fairly thinly documented; there's another answer on StackOverflow that gives a brief example, and for more detail, I'd recommend looking at the netstat source code.
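If you only want a quick look at the per-interface counters before committing to the sysctl route, here is a small Python sketch using the psutil package (which wraps the same system statistics); it is not the native API described above, just a way to sanity-check the numbers:

```python
# Quick look at per-interface byte counters via psutil (pip install psutil).
# This is not the raw sysctl route described above -- psutil wraps the same
# system statistics -- but it is handy for checking the numbers you expect.
import time
import psutil

before = psutil.net_io_counters(pernic=True)
time.sleep(1.0)
after = psutil.net_io_counters(pernic=True)

for nic, stats in after.items():
    prev = before.get(nic, stats)
    delta_in = stats.bytes_recv - prev.bytes_recv
    delta_out = stats.bytes_sent - prev.bytes_sent
    print("%-8s in: %6d B/s  out: %6d B/s" % (nic, delta_in, delta_out))
```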
I think something like this could be done with
http://www.wireshark.org/ or http://www.tastycocoabytes.com/cpa/
On Linux you could simply listen to the file that is associated with your network card.
But I don't think this can be done in such an easy way on OS X. There must be some way, though, thinking of Little Snitch.
You can use libpcap, which is a portable library for doing packet captures used by tcpdump, Wireshark, and more. It's not an official Apple library, but it's BSD-licensed so you shouldn't have any problem using it.
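If you want to prototype the capture side before linking libpcap into your app, here is a small Python sketch using scapy (which drives libpcap underneath); it needs root privileges, and en0 is an assumed interface name:

```python
# Tiny packet-capture sketch with scapy, which sits on top of libpcap.
# Run with root privileges; 'en0' is an assumed interface name.
from scapy.all import sniff

total = {"bytes": 0}

def on_packet(pkt):
    # len(pkt) is the packet length in bytes.
    total["bytes"] += len(pkt)
    print("captured %d bytes so far" % total["bytes"])

# Capture 20 packets on en0, calling on_packet for each one.
sniff(iface="en0", prn=on_packet, count=20, store=False)
```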
This question relates to the Kaggle/CHALEARN Gesture Recognition challenge.
You are given a large training set of matching RGB and Depth videos that were recorded from a Kinect. I would like to use the Kinect SDK's skeletal tracking on these videos, but after a bunch of searching, I haven't found a conclusive answer to whether or not this can be done.
Is it possible to use the Kinect SDK with previously recorded Kinect video, and if so, how? Thanks for the help.
It is not a feature within the SDK itself; however, you can use something like the Kinect Toolbox OSS project (http://kinecttoolbox.codeplex.com/), which provides skeleton record and replay functionality (so you don't need to stand in front of your Kinect each time). You do, however, still need a Kinect plugged into your machine to use the runtime.