I would like to use my IP camera as a real scanner.
The question is: does a real driver (TWAIN, WIA, ...) for a generic IP camera (MJPEG or H.264 stream) already exist, or is it possible to write one,
which, instead of scanning and previewing, takes one snapshot from the camera and offers some classic tools (like resizing, cropping, contrast, and light-level adjustment)?
Basically, I'm looking for a Windows / macOS solution.
Thanks in advance.
If your IP camera supports TWAIN, then it should be easy to develop your own application (I'm guessing that's what you're after, since you're asking on Stack Overflow) that takes an image from the camera using TWAIN and then performs the image operations you wanted (resize, crop, etc.).
I recommend using Python with the pytwain / twain module for this. You can code a very quick prototype to see if it works with your IP camera. Then you can use OpenCV's Python bindings or Pillow (PIL) to perform the image processing and image operations. This solution is also cross-platform (it should work on Windows / macOS / Linux), which is what you also wanted.
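If the camera doesn't speak TWAIN, you can often skip the driver entirely and pull a single JPEG frame straight out of its MJPEG stream. Here is a minimal pure-stdlib sketch; the camera URL and the helper names are my own placeholders, not a real API:

```python
# Sketch: grab one snapshot frame from a generic MJPEG stream.
import urllib.request

def extract_jpeg(buf: bytes):
    """Return the first complete JPEG (SOI..EOI markers) found in buf, or None."""
    start = buf.find(b"\xff\xd8")           # JPEG start-of-image marker
    if start == -1:
        return None
    end = buf.find(b"\xff\xd9", start + 2)  # JPEG end-of-image marker
    if end == -1:
        return None
    return buf[start:end + 2]

def snapshot(stream_url: str, chunk_size: int = 65536) -> bytes:
    """Read the MJPEG stream until one full JPEG frame has been seen."""
    buf = b""
    with urllib.request.urlopen(stream_url) as stream:
        while True:
            buf += stream.read(chunk_size)
            frame = extract_jpeg(buf)
            if frame is not None:
                return frame

# Usage against a real camera (hypothetical URL):
#   frame = snapshot("http://192.168.1.10/video.mjpg")
#   open("snapshot.jpg", "wb").write(frame)
# Cropping, resizing and contrast can then be done on the saved
# file with Pillow or OpenCV, as suggested above.
```

This doesn't give you a scanner driver, but it is enough to prototype the "snapshot plus classic tools" workflow before deciding whether a real TWAIN/WIA data source is worth writing.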
Related
I'm looking to capture an image from my USB webcam in LabVIEW 2018. I've looked at older posts (the one from LAVA working with the 'free' portions of the V&M Toolkit, another webcam test that hangs my computer when I try to run it, and a few others). What is the best way to do this in the newer versions of LabVIEW? All the examples I've seen (none of which run correctly or well) are from the 2011-ish timeframe.
It depends on the task (i.e., what you are going to use the camera for), but you could use the NI Vision Acquisition Software, which provides a set of functions to access the camera, acquire images and video, and process them (basically, the IMAQ drivers are what you need). Or, if you are going to use your camera for some kind of test application (vision inspection), then you'd be better off checking out Vision Builder for Automated Inspection.
Those are the easiest (but not the cheapest) ways to acquire images from various cameras using LabVIEW.
UPDATE:
The licensing scheme for the software can be found here - Licensing National Instruments Vision Software. A description of each software component is also here - Does My Camera Use NI-IMAQ, NI-IMAQdx or NI-IMAQ I/O?. So in order to use a 3rd-party USB camera, one needs NI-IMAQdx, which requires a license.
Is there a way in Linux (Raspbian) to capture only the depth data stream from a Kinect? I'm trying to reduce the amount of processing needed to capture Kinect information, so I want to ship the data stream to another computer to assemble the data.
Note:
I have freenect installed, but anything that requires OpenGL will not run on Raspbian.
I have installed this example, which captures the data stream with a black-and-white visual depth display.
librekinect is a Linux kernel module that lets you use the depth image like a standard webcam. It's known to work with the Raspberry Pi.
But if you want to use libfreenect for full video/depth/motor support, you'll need a more powerful board like the ODROID XU-3 Lite. By the way, libfreenect only requires OpenGL for some of its examples; the rest of the project compiles and runs fine without it.
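Since the goal is to ship the depth stream to another computer for assembly, one workable approach is to read raw frames from the librekinect video device on the Pi and send them over a plain TCP socket with a small fixed header. A minimal framing sketch (the header layout, magic value, and function names are my own assumptions, not part of librekinect):

```python
# Sketch: frame raw Kinect depth images for shipping over the network.
# Header: magic, width, height, payload length (network byte order).
import struct

MAGIC = 0x4B494E44                # arbitrary marker of my own choosing
HEADER = struct.Struct("!IHHI")   # magic, width, height, payload bytes

def pack_frame(width: int, height: int, payload: bytes) -> bytes:
    """Prefix one raw depth frame with a fixed-size header."""
    return HEADER.pack(MAGIC, width, height, len(payload)) + payload

def unpack_frame(data: bytes):
    """Split a packed frame back into (width, height, payload)."""
    magic, width, height, length = HEADER.unpack_from(data)
    if magic != MAGIC:
        raise ValueError("not a depth frame")
    payload = data[HEADER.size:HEADER.size + length]
    return width, height, payload

# On the Pi you would read frames from the librekinect device
# (e.g. /dev/video0) and send pack_frame(...) over a socket; the
# receiving machine reads the fixed-size header first, then exactly
# `length` payload bytes, and reassembles the image there.
```

Keeping the Pi side down to "read frame, prepend header, write to socket" is about as little processing as the capture end can do; all decoding and visualization then happens on the other computer.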
I'm trying to capture an image in video mode from a Canon Digital IXUS 75 camera using WIA, but I don't get anything. In photo mode, I can see the digital camera's storage data, i.e., videos, photos, etc. So, is a .dll file or some other component required for capturing an image in video mode? I have also tried different ways:
Using JavaCV: it detects a webcam and a laptop's internal camera, but it doesn't detect the digital camera.
JTWAIN is not supported on 64-bit Windows, so I didn't try it.
Please help me, with either Canon digital camera software or anything Java-relevant.
Assuming you still plan to use Java, and if the problem is still an issue: I used Asprise's JTwain to get images from scanners, cameras (not only laptop webcams), etc. The only things you need are the drivers for the devices. Grab an eval version from Asprise and check this step-by-step dev guide here. As far as I understood, this should be good enough for your requirements.
By the way, seeing the file structure of a camera does not always mean you have the device fully working.
I tried using "Kinect for Windows" on my Mac. The environment set-up seems to have gone well, but something seems to be wrong. When I start some samples such as
OpenNI-Bin-Dev-MacOSX-v1.5.4.0/Samples/Bin/x64-Release/Sample-NiSimpleViewer
or others, the sample application starts and seems to work quite well at the beginning, but after a few seconds (10 to 20), the movement shown on the application's screen halts and never resumes. It seems that after a certain point the application becomes unable to fetch data from the Kinect.
I don't know whether the libraries, their dependencies, or the Kinect hardware itself is at fault (as for the hardware, it could be invisibly broken), and I really want to know how to determine which it is.
Could anybody tell me how I can fix this issue, please?
My environment is shown below:
Mac OS X v10.7.4 (MacBook Air, core i5 1.6Ghz, 4GB of memory)
Xcode 4.4.1
Kinect for Windows
OpenNI-Bin-Dev-MacOSX-v1.5.4.0
Sensor-Bin-MacOSX-v5.1.2.1
I followed the instructions here about libusb: http://openkinect.org/wiki/Getting_Started#Homebrew
Also, when I try using libfreenect (I know it's separate from OpenNI+SensorKinect), its sample applications say "Number of devices found: 0", which makes no sense to me since my Kinect is certainly connected to the MBA...
Unless you're booting into Windows, forget about Kinect for Windows.
Regarding libfreenect and OpenNI: in most cases you'll use one or the other, so think about which functionality you need.
If it's basic RGB + depth image access (and possibly motor and accelerometer), libfreenect is your choice.
If you need RGB + depth images plus skeleton tracking and (hand) gestures (but no motor or accelerometer access), use OpenNI. Note that if you use the unstable (dev) versions, you should use Avin's SensorKinect driver.
The easiest thing to do is a nice clean install of OpenNI.
Also, if it helps, you can use a creative coding framework like Processing or OpenFrameworks.
For Processing I recommend SimpleOpenNI.
For OpenFrameworks you can use ofxKinect (which ties into libfreenect) or ofxOpenNI. Download the OpenFrameworks package on the FutureTheatre Kinect Workshop wiki, as it includes both addons and some really nice examples.
When you connect the Kinect device to the machine, have you provided external power to it? The device will appear connected to a computer on USB power alone, but it will not be able to transfer data, as it needs the external power supply.
Also, which Kinect sensor are you using? If it is a new Kinect device (designed for Windows), it may have a different device signature, which may cause the OpenNI drivers to play up. I'm not 100% sure on this one, but I've only ever tried OpenNI with an Xbox 360 sensor.
I'm looking for a way to determine, in Objective-C, whether an application is using the GPU. I want to be able to determine if any applications currently running on the system have work going on on the GPU (i.e., a reason why the latest MacBook Pros would switch to the discrete graphics over the Intel HD graphics).
I've tried crossing the list of active windows with the list of windows whose backing store is in video memory, using Quartz Window Services, but all that returns is the Dock application, while I have other applications open that I know are using the GPU (Photoshop CS5, Interface Builder); besides, the Dock doesn't require the 330M.
The source code of the utility gfxCardStatus might help.