Soon I will start working on a project that requires me to classify different objects using a CNN (Convolutional Neural Network) and track them with a drone. The camera I need should live-stream video at Full HD (1080p) and 60 fps. I searched a lot, especially through GoPro's cameras, but I couldn't find anything about how many frames per second they deliver during a live stream. I hope you can suggest some cameras.
You could use a GoPro camera with an HDMI-to-USB capture card; this will give you 1080p at 30 or 60 fps, depending on the resolution and frame rate you choose. I'd also look at the OpenMV Cam H7 if possible, since that camera is designed specifically for computer vision.
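For what it's worth, an HDMI-to-USB capture card usually shows up on the PC as an ordinary UVC webcam, so you can read it straight from OpenCV. Here is a minimal sketch of that setup; the device index 1 and the requested 1080p60 mode are assumptions, and the card may silently fall back to 30 fps:

import cv2

# The capture card is assumed to enumerate as a standard UVC device at index 1
cap = cv2.VideoCapture(1)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
cap.set(cv2.CAP_PROP_FPS, 60)
print("Reported FPS:", cap.get(cv2.CAP_PROP_FPS))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # frame is a 1080p BGR image; this is where the CNN would run
    cv2.imshow("capture", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()

The reported FPS is only what the driver claims, so it is worth timing cap.read() yourself to confirm you actually get 60 fps.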
For the project, I need to transfer video from an OV2640 camera, using an STM32F429IGT6 microcontroller, to the 7-inch screen on the Open429I-C board. So I have a few questions.
Which would be better in terms of FPS, quality, and additional features: sending the video to this screen, or writing a utility on a PC and streaming the video there (especially considering that the functionality may be expanded in the future)?
Can someone share tips/materials/examples on this topic?
So I have done transfer learning on a TensorFlow model to detect Rubik's cubes. Since I don't have a webcam, I am using an app called IP Webcam to turn my phone's camera into one and grab the live feed with cv2, like this:
import cv2 as cv

address = "http://{My IP}/video"   # {My IP} is the phone's address shown by the IP Webcam app
cap = cv.VideoCapture(address)     # open the network stream directly (no local webcam needed)
When I run the object detection in real time (on a GTX 1060), the model understandably can't keep up with the camera's 30 fps. But instead of just displaying the live detections at, say, 10 fps, it seems to want to display all 30 frames even though processing them takes longer, so the feed falls further and further behind real time: if I move, it takes around 5-10 seconds for the movement to show up in the video.
I don't know whether this is an issue with TensorFlow or with cv2. Is the problem that I'm not using a directly connected webcam?
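For context, this kind of lag usually comes from frames piling up in a buffer while the model is busy, and a common workaround is to read the stream in a background thread and only ever process the newest frame. Below is a minimal sketch of that idea, not the original code; the stream address placeholder is reused from above and detect() stands in for whatever inference call the model uses:

import threading
import cv2 as cv

class LatestFrameReader:
    # Continuously reads the stream and keeps only the most recent frame
    def __init__(self, source):
        self.cap = cv.VideoCapture(source)
        self.frame = None
        self.lock = threading.Lock()
        threading.Thread(target=self._reader, daemon=True).start()

    def _reader(self):
        while True:
            ok, frame = self.cap.read()
            if ok:
                with self.lock:
                    self.frame = frame

    def read(self):
        with self.lock:
            return None if self.frame is None else self.frame.copy()

reader = LatestFrameReader("http://{My IP}/video")
while True:
    frame = reader.read()
    if frame is None:
        continue
    # detections = detect(frame)   # hypothetical inference call on the newest frame only
    cv.imshow("detections", frame)
    if cv.waitKey(1) & 0xFF == ord("q"):
        break

With this, slow inference just means fewer frames are processed per second; the ones that are processed stay close to real time.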
If I scatter more infrared points around the room from separate infrared laser speckle projectors, and in doing so increase the density of the point cloud spread over objects in the room, will this result in a higher-resolution 3D scan captured by the infrared camera on the Kinect 360? Or are there limits on how much point cloud data the IR camera and/or the Kinect software/hardware can process?
Found this paper which answered my question:
http://www.cs.cornell.edu/~hema/rgbd-workshop-2014/papers/Alhwarin.pdf
I have a question about image capture with a PC camera (an integrated notebook camera or a webcam). I am developing a computer vision system in which high-quality image capture is the key issue, and most current methods use VFW or DirectShow to capture a video stream and snap one frame as an image.
However, this approach does not give a high-quality image (it doesn't use the camera's full capability). For example, I have a 5-megapixel webcam, but its video stream tops out at 720p (a USB bandwidth problem?). Streaming video therefore wastes part of the sensor's resolution.
Could I stream video and take still pictures independently? For example, capture and render a 640x480 video stream, and then take a 1280x720 still picture from the same camera? I guess this might be a hardware issue? (Something like the new HTC One X camera?)
In short, is there a way for a PC to take a picture that uses the sensor's full capacity, rather than streaming video and grabbing one frame? Is this a hardware limitation? Do common webcams support it? Or is it a software problem, meaning I should learn DirectShow?
Thanks a lot.
I vaguely remember that (some) video sources offer both a capture pin and a still pin; the latter, I assume, would give you higher quality. You can easily test this in GraphEdit. If it works, then yes, you'll have to learn DirectShow. Or pay someone to code it for you.
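Not the still-pin route, but as a quick sanity check you can also just ask the driver for the sensor's full resolution through OpenCV's DirectShow backend and see what it actually delivers; the 2592x1944 numbers below are only the nominal size of a 5 MP sensor, an assumption:

import cv2

# Request the full 5 MP frame size via the DirectShow backend; whether the
# driver honours it depends entirely on the camera and its driver.
cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 2592)    # assumed 5 MP sensor width
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1944)   # assumed 5 MP sensor height

ok, frame = cap.read()
if ok:
    print("Delivered frame size:", frame.shape[1], "x", frame.shape[0])
    cv2.imwrite("still.png", frame)
cap.release()

If the driver only exposes the full resolution on a dedicated still pin, this will still come back at the streaming resolution, which is exactly what testing the pins in GraphEdit would tell you.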
Let's imagine we have a popular consumer camera (a Canon, for example) mounted on a mechanical platform. The platform lets us accurately point the camera's lens at any object of interest, and it is controlled from a PC via a microcontroller board. But we need feedback from the camera: the image that currently appears on the camera's display. Obviously, this feedback is required to be sure that the camera is looking in the right direction. At the moment I don't know how to get a single still image from the camera with a microcontroller.
Could you please recommend any directions to dig into? Any advice on how to choose the camera (webcams are not allowed)? Any tips?
Thank you in advance =)
Dwelch is right: you need to pick a "friendly" camera and work from there; google CHDK for a starting point.
You could use the SPI interface of a micro to spoof being an SD card, and accept image data from the camera straight into the micro, but you would probably need quite a fast micro with a fair amount of RAM, especially if you want to do any processing on it.
Other than that, you could sample the camera's AV-output (if it has one), either into the micro or straight into the PC via a USB capture stick (or USB capture stick into micro if you're being a show-off), or maybe interrogate the camera over its USB or (insert name of proprietary port here) IO port.
Getting more hacky (yes, even more!) you could sniff the LCD data bus of the camera and steal the image from that, but that brings all sorts of pain, and tiny, tiny screws.
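If you go the route of interrogating the camera over USB from the PC side, one hedged sketch of what that can look like is pulling the live-view image with libgphoto2's Python bindings (python-gphoto2). This assumes a camera that libgphoto2 supports, and it runs on the PC rather than on the microcontroller:

import io
import gphoto2 as gp        # python-gphoto2 bindings for libgphoto2
from PIL import Image

camera = gp.Camera()
camera.init()               # connect to the first supported camera found on USB

# Grab the live-view ("preview") frame, roughly what the camera's display shows
camera_file = camera.capture_preview()
data = camera_file.get_data_and_size()

Image.open(io.BytesIO(data)).save("preview.jpg")   # pointing feedback for the PC
camera.exit()

On a Canon, CHDK's PTP extensions offer similar remote-preview and remote-shoot control, which fits the "friendly camera" advice above.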