In my project I want to receive/send images to a USB gadget device. For this, a host-side USB driver needs to be written. My understanding is that an image file cannot be transferred simply by reading and storing the bytes one by one until we hit EOF (as is done with a normal text file). So how do we do it?
I found a relevant topic on this at the following link:
What is most appropriate USB class to handle images and video transfer and streaming?
but things were still not clear. Should I use libptp together with libusb to transfer the image files? I could not find any sample/example code that explains whether it's possible or how it's done. Thanks for the help in advance!
Regards,
Shweta
Also, from some more investigation, I think libmtp can be used for image transfer. But for that to work, I guess we need libusb installed as well. Is my understanding correct?
Have a look at this link if you are familiar with Python and pygame. You can also read the image into a string of raw bytes and transfer it to the other device with the pyusb package in Python.
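If you go the pyusb route, the transfer is really just raw bytes over a bulk endpoint; there is no notion of EOF at the USB level, so you send the length (or whatever header your gadget understands) yourself. A minimal sketch, assuming a hypothetical vendor/product ID and bulk OUT endpoint 0x01 (replace these with your gadget's actual descriptors):

```python
import usb.core

# Hypothetical IDs -- replace with the values from `lsusb` for your gadget.
VENDOR_ID, PRODUCT_ID = 0x1234, 0x5678
BULK_OUT_EP = 0x01

dev = usb.core.find(idVendor=VENDOR_ID, idProduct=PRODUCT_ID)
if dev is None:
    raise RuntimeError("gadget not found")
dev.set_configuration()

with open("picture.jpg", "rb") as f:
    data = f.read()          # the image is just bytes, like any other file

# Tell the other side how much to expect, then send the payload in chunks.
dev.write(BULK_OUT_EP, len(data).to_bytes(4, "little"))
for i in range(0, len(data), 16384):
    dev.write(BULK_OUT_EP, data[i:i + 16384])
```

The gadget side has to implement the matching read for this ad-hoc framing; if you use an established class such as PTP/MTP instead, libptp or libmtp take care of that framing for you.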
I'm trying to get the video feed from a USB camera attached to my Raspberry Pi. Since it's not the dedicated camera module, I can't just use raspivid, or the raspicam driver that comes with uv4l, to make config changes that actually have an effect, unlike with v4l2-ctl.
When I connect to the WebRTC server through the browser client, it actually works at a decent framerate. I don't yet understand how that technology works, but before jumping into it I was wondering if someone could tell me whether it's possible to somehow (with a client made in Python or some other OpenCV magic) get that video feed.
Thanks in advance
I'm still interested in whether what I've talked about is possible, so if anyone with knowledge stumbles upon this thread, please let me know.
I've kind of solved my issue by using the experimental mjpg-streamer fork instead; it can be found here:
https://github.com/jacksonliam/mjpg-streamer
Now I'm getting over 8 fps, and it seems much more consistent; it really seems like I don't need more, compared to uv4l, which gave me 3.5 fps with stutters.
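For the Python/OpenCV client part: mjpg-streamer exposes the feed as plain MJPEG over HTTP, which OpenCV can open directly. A minimal sketch, assuming the default port 8080 and the stock ?action=stream endpoint (adjust the host/port to your setup):

```python
import cv2

# Default mjpg-streamer URL; replace the host with your Pi's address.
url = "http://raspberrypi.local:8080/?action=stream"

cap = cv2.VideoCapture(url)
if not cap.isOpened():
    raise RuntimeError("could not open MJPEG stream")

while True:
    ok, frame = cap.read()       # frame is a regular BGR numpy array
    if not ok:
        break
    cv2.imshow("usb cam", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```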
The short question: I am wondering whether the Kinect SDK / NiTE can be used as depth-image-in, skeleton-out software.
The long question: I am trying to dump the depth, RGB, and skeleton data streams captured from a Kinect v2 into rosbags. However, to the best of my knowledge, capturing the skeleton stream on Linux with ROS and a Kinect v2 isn't possible yet. Therefore, I was wondering if I could dump rosbags containing the RGB and depth streams, and then post-process these to get the skeleton stream.
I can capture all three streams on Windows using the Microsoft Kinect v2 SDK, but dumping them to rosbags with all the metadata (camera_info, sync info, etc.) would be painful (correct me if I am wrong).
It's been quite some time since I worked with NiTE (and I only used the Kinect v1), so maybe someone else can give a more up-to-date answer, but from what I remember this should easily be possible.
As long as all relevant data is published via ROS topics, it is quite easy to record them with rosbag and play them back afterwards. Every node that can handle live data from the sensor will also be able to do the same work on recorded data coming from a bag file.
One issue you may encounter is that if you record Kinect data, the bag files quickly become very large (several gigabytes). This can be problematic if you want to edit the file afterwards on a machine with very little RAM. If you only want to play the file back, or if you have enough RAM, this should not really be a problem, though.
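For completeness, recording is usually just a matter of running rosbag record on the relevant topics, and the bags can then be post-processed offline with the rosbag Python API. A minimal sketch of reading recorded depth/RGB messages back out of a bag (the topic names here are only examples and depend on your Kinect driver):

```python
import rosbag

# Example topic names -- use whatever your Kinect driver actually publishes.
topics = ["/kinect2/sd/image_depth", "/kinect2/hd/image_color"]

with rosbag.Bag("kinect_session.bag", "r") as bag:
    for topic, msg, t in bag.read_messages(topics=topics):
        # msg is the deserialized sensor_msgs/Image; this is where you would
        # hand the depth frames to your skeleton-tracking post-processing step.
        print(t.to_sec(), topic, msg.width, msg.height)
```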
Indeed, it is possible to perform NiTE2 skeleton tracking on any depth image stream.
Refer to:
https://github.com/VIML/VirtualDeviceForOpenNI2/wiki/How-to-use
and
https://github.com/VIML/VirtualDeviceForOpenNI2/wiki/About-PrimeSense-NiTE
With this extension you can add a virtual device that lets you manipulate each pixel of the depth stream. This device can then be used to create a UserTracker object. As long as the right device name is provided, skeleton tracking can be done (see the sketch below):
\OpenNI2\VirtualDevice\Kinect
But consider the usage limits: the NiTE license only allows it to be used with "Authorized Hardware".
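If you end up scripting part of this from Python, the primesense bindings for OpenNI2 can open a device by URI, which is how you would point them at the virtual device above; the NiTE2 UserTracker part itself is usually done through the C++ API. Treat this as a rough sketch of just the device/depth side under those assumptions, not a tested recipe:

```python
from primesense import openni2

openni2.initialize()                      # loads the OpenNI2 runtime

# URI of the virtual device described above (assumed to be registered).
dev = openni2.Device("\\OpenNI2\\VirtualDevice\\Kinect")

depth = dev.create_depth_stream()
depth.start()
frame = depth.read_frame()                # grab one depth frame to verify the feed
print(frame.width, frame.height)
depth.stop()
openni2.unload()
```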
I am trying to capture an image in video mode from a Canon Digital IXUS 75 camera using WIA, but I don't get anything. In photo mode, I can see the digital camera's storage data, i.e., videos, photos, etc. So is any .dll file or other component required for capturing an image in video mode? I have also tried different approaches:
Using JavaCV: it detects webcams and the laptop's internal camera, but it doesn't detect the digital camera.
JTwain is not supported on 64-bit Windows, so I didn't try it.
Please help me with either the Canon digital camera software or the relevant Java approach.
Assuming you still plan to use Java, and if the problem is still an issue: I used Asprise's JTwain to get images from scanners, cameras (not only laptop webcams), etc. The only things you need are the drivers for the devices. Grab an eval version from Asprise and check the step-by-step dev guide here. As far as I understood, this should be good enough for your requirements.
By the way, seeing the file structure of a camera does not always mean you have the device fully working.
I am building a network utility for OS X. I've gone through Apple's documentation, but I cannot find the framework that allows my app to monitor incoming bytes. Can anybody point me in the right direction? Thank you for your time!
To get statistics on a network, you can use the sysctl system call. This is fairly thinly documented; there's another answer on StackOverflow that gives a brief example, and for more detail, I'd recommend looking at the netstat source code.
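In the app itself you would call sysctl from your Objective-C/Swift code, but if you just want to sanity-check the numbers while prototyping, psutil exposes essentially the same per-interface byte counters from Python. A small sketch, assuming en0 is the interface you care about:

```python
import time
import psutil

# Per-interface totals since boot; "en0" is typically the primary interface
# on a Mac -- adjust as needed.
before = psutil.net_io_counters(pernic=True)["en0"]
time.sleep(1)
after = psutil.net_io_counters(pernic=True)["en0"]

print("bytes in/s :", after.bytes_recv - before.bytes_recv)
print("bytes out/s:", after.bytes_sent - before.bytes_sent)
```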
I think something like this could be done with
http://www.wireshark.org/ or http://www.tastycocoabytes.com/cpa/
On Linux you could simply listen to the file that is associated with your network card.
But I don't think this can be done in such an easy way on OS X. There must indeed be some way, though, thinking of Little Snitch.
You can use libpcap, which is a portable library for doing packet captures used by tcpdump, Wireshark, and more. It's not an official Apple library, but it's BSD-licensed so you shouldn't have any problem using it.
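If you want to prototype the capture side before wiring libpcap into your app, scapy (which sits on top of libpcap on macOS) gives you the same packet stream from Python. A small sketch, assuming the interface is en0 and that you run it with sufficient privileges:

```python
from scapy.all import sniff

total = {"bytes": 0}

def count(pkt):
    # len(pkt) is the captured packet length in bytes.
    total["bytes"] += len(pkt)

# Capture 100 packets on en0 (requires root / access to /dev/bpf*).
sniff(iface="en0", prn=count, count=100, store=False)
print("saw", total["bytes"], "bytes in 100 packets")
```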
Is it possible to capture all/any audio played by a PC into a System.IO.Stream, so that it can then be run through speech recognition (System.Speech.Recognition.SpeechRecognitionEngine)?
Essentially I'm looking to perform speech recognition on any audio on the client PC. Google seems to suggest that capturing a stream like this can be done using Microsoft.DirectX.DirectSound; however, I honestly cannot determine how. Any suggestions would be greatly appreciated.
Take a look at this question for a solution on Vista/Win7, and take a look at this one for WinXP.
Summary:
You can use loopback recording with WASAPI on Vista/Win7, but there is no equivalent API on WinXP; however, a partial solution can be achieved with a virtual soundcard driver.
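If you want to prototype the WASAPI loopback capture outside of .NET first, the Python SoundCard package wraps it on Windows. A rough sketch, assuming that package and its include_loopback option; treat the exact calls as an assumption rather than a verified recipe:

```python
import soundcard as sc   # assumed: the "SoundCard" package from PyPI

# With include_loopback=True, each output device also shows up as a recordable
# "microphone" that yields whatever is currently being played on it.
mics = sc.all_microphones(include_loopback=True)
mic = mics[0]            # pick the loopback entry matching your speakers

with mic.recorder(samplerate=16000) as rec:
    audio = rec.record(numframes=16000 * 5)   # ~5 s as a float32 numpy array

# `audio` can then be written out as WAV and fed to a speech recognizer.
print(audio.shape)
```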