How to capture an image from a Canon Digital Camera IXUS 75 in video mode

I am trying to capture an image in video mode from a Canon Digital Camera IXUS 75 using WIA, but I don't get anything. In photo mode I can see the camera's storage contents (videos, photos, etc.). So, is any .dll file or other component required to capture an image in video mode? I have also tried other approaches:
Using JavaCV: it detects webcams and the laptop's internal camera, but it does not detect the digital camera.
JTWAIN is not supported on 64-bit Windows, so I have not tried it.
Please help me, with either Canon camera software or anything Java-related.

Assuming you still plan to use Java and the problem is still an issue: I used Asprise's JTwain to get images from scanners, cameras (not only laptop webcams), and so on. The only things you need are the drivers for the devices. Grab an eval version from Asprise and check their step-by-step dev guide here. As far as I understood, this should be good enough for your requirements.
By the way, seeing the file structure of a camera does not always mean you have the device fully working.
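As a quick sanity check: since JavaCV wraps OpenCV, you can probe whether the OS exposes the camera as a live capture device at all with a few lines of OpenCV (sketched here in Python for brevity; the index range is arbitrary, and JavaCV's OpenCVFrameGrabber does the equivalent). If nothing opens, the camera is most likely only exposing its storage, not a video stream:

    import cv2  # pip install opencv-python

    # Probe the first few capture indices; a point-and-shoot that only
    # mounts as mass storage will not show up as a capture device here.
    for index in range(4):
        cap = cv2.VideoCapture(index)
        if cap.isOpened():
            ok, _ = cap.read()
            print("index %d: opened, frame read: %s" % (index, ok))
            cap.release()
        else:
            print("index %d: no capture device" % index)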

Related

Scanner driver for a Camera Stream (or IP Camera)

I would like to use my IP camera as a real scanner.
The question is: is it possible (or does this type of driver already exist) to have a real driver (TWAIN, WIA, ...) for a generic IP camera (MJPEG or H.264 stream) which, instead of scanning and previewing, takes one snapshot from the camera and offers some classic tools (resizing, cropping, contrast, light level)?
Basically, I am looking for a Windows / macOS solution.
Thanks in advance.
If your IP camera supports TWAIN, then it should be easy to develop your own application (I'm guessing that's what you're asking, since you're asking this on Stack Overflow) that takes an image from the camera using TWAIN and then performs the image operations you want (resize, crop, etc.).
I recommend using Python with the pytwain / twain module for this. You can code a very quick prototype to see if it works with your IP camera. Then you can use OpenCV's Python bindings or Pillow (PIL) for the image processing and image operations. This solution is also cross-platform (it should work on Windows / macOS / Linux), which is what you also wanted.
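For instance, a rough sketch of that prototype with the twain package and Pillow (filenames and the crop box are placeholders; note that the twain package itself is Windows-only, so on macOS you would swap in a different capture backend):

    import twain                # pip install pytwain
    from PIL import Image       # pip install Pillow

    # Acquire one frame from a TWAIN source without the scanner UI.
    sm = twain.SourceManager(0)
    src = sm.OpenSource()       # shows the source-selection dialog
    src.RequestAcquire(0, 0)
    handle, _ = src.XferImageNatively()
    twain.DIBToBMFile(handle, "snapshot.bmp")
    twain.GlobalHandleFree(handle)

    # Classic "scanner" post-processing with Pillow.
    img = Image.open("snapshot.bmp")
    img = img.crop((100, 100, 900, 700)).resize((800, 600))
    img.save("scan.png")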

LabVIEW 2018 USB Webcam Image Grab

I'm looking to capture an image from my USB webcam in LabVIEW 2018. I've looked at older posts (the one from Lava working with the 'free' portions of the V&M Toolkit, another webcam test that hangs my computer when I try to run it, and a few others). What is the best way to do this in the newer LabVIEW versions? All the examples I've seen (none of which run correctly or well) are from the 2011-ish timeframe.
It depends on the task (what you are going to use the camera for), but you could use NI Vision Acquisition Software, which provides a set of functions to access the camera, acquire images and videos, and process them (basically, the IMAQ drivers are what you need). Or, if you are going to use your camera for some kind of test application (vision inspection), then check Vision Builder for Automated Inspection.
Those are the easiest (but not the cheapest) ways to acquire images from the various cameras using LabVIEW.
UPDATE:
The licensing scheme for the software can be found here: Licensing National Instruments Vision Software. A description of each software component is also here: Does My Camera Use NI-IMAQ, NI-IMAQdx or NI-IMAQ I/O?. So, in order to use a third-party USB camera, one needs NI-IMAQdx, which requires a license.

Relocalize a smartphone on a preloaded point cloud

Being a novice, I need advice on how to solve the following problem.
Say, with photogrammetry I have obtained a point cloud of part of my room. Then I upload this point cloud to an Android phone, and I want it to track its camera pose relative to this point cloud in real time.
As far as I know, there can be problems with different cameras' intrinsics (a simple camera or another phone's camera vs. my phone's camera) that can affect the precision of localization, right?
Actually, it's supposed to be an AR app, so I've tried existing SDKs: Vuforia, Wikitude, Placenote (I haven't tried ARCore yet because my device most likely won't support it). The problem is that they all use their own clouds for their services, and I don't want to depend on them. Ideally, it's my own PC where I perform the 3D reconstruction and from which my phone downloads the point cloud.
Do I need SLAM (with IMU fusion) or VIO on my phone? Are there any ready-to-go implementations in libraries like ARToolKit or, maybe, PCL? Will an existing SLAM system pick up a map reconstructed with other algorithms, or should I use one and the same SLAM system for both mapping and localization?
So, the main question is how to do everything ARCore and Vuforia do without using third-party servers. (I suspect the answer is to devise the same kind of underlay that Vuforia and the other SDKs use to employ all available hardware...)
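To make the intrinsics concern above concrete: relocalization against a prebuilt map is usually done on images normalized by each camera's own calibration, so calibrating and undistorting per device is the standard mitigation. A hedged sketch with OpenCV (the checkerboard pattern size and the filenames are placeholders):

    import cv2
    import numpy as np

    # Calibrate one specific phone camera from checkerboard shots.
    pattern = (9, 6)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points = [], []
    for name in ["shot1.jpg", "shot2.jpg", "shot3.jpg"]:
        gray = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    ret, K, dist, _, _ = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)

    # Undistort before matching features against the point cloud, so the
    # map and the query images share the same (ideal) camera model.
    frame = cv2.imread("query.jpg")
    undistorted = cv2.undistort(frame, K, dist)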

Inputting analogue data via USB

I am trying to build a device which takes analogue input from the earth and converts it into electrical impulses, which I wish to feed into an Android smartphone for data analysis. I initially thought about using the 3.5mm jack of the Android device, but apparently Android does not support input through the 3.5mm jack, so I decided to use the USB cord as the input.
Now my question is: will my Android phone or tablet be able to read the USB data directly, or does it have to be fed through some microcontroller?
I'm not sure I'm understanding your question correctly: are you trying to measure soil conductivity and find out if your plants need water (which is easy), or are you trying to build a heart monitor (which is a bit more complex)?
Anyway, if you are interested in conductivity measurement with Android, you may want to have a look at this device; it is driver-free and works on Android.
http://www.yoctopuce.com/EN/products/usb-sensors/yocto-knob
I believe V-Alarm is using them as well:
http://www.valarm.net/blog/use-valarm-sensor-for-flood-warning-and-water-detection
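For what it's worth, a device like the Yocto-Knob above is read through Yoctopuce's own language libraries rather than raw USB transfers. A minimal sketch with their Python library (on Android you would use their Java library instead; "usb" is the default local hub):

    from yoctopuce.yocto_api import YAPI, YRefParam   # pip install yoctopuce
    from yoctopuce.yocto_anbutton import YAnButton

    errmsg = YRefParam()
    if YAPI.RegisterHub("usb", errmsg) != YAPI.SUCCESS:
        raise RuntimeError("init error: " + errmsg.value)

    # Take the first analog input channel found on any connected module.
    channel = YAnButton.FirstAnButton()
    if channel is None:
        raise RuntimeError("no Yocto-Knob connected")

    while channel.isOnline():
        # get_calibratedValue() maps the raw reading onto 0..1000.
        print("reading:", channel.get_calibratedValue())
        YAPI.Sleep(500, errmsg)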

Kinect hangs suddenly after working well for a few seconds. How can I fix it?

I tried using "Kinect for Windows" on my Mac. The environment set-up seems to have gone well, but something seems to be wrong. When I start samples such as
OpenNI-Bin-Dev-MacOSX-v1.5.4.0/Samples/Bin/x64-Release/Sample-NiSimpleViewer
or others, the sample application starts and seems to work quite well at the beginning, but after a few seconds (10 to 20), the motion shown on the application's screen halts and never resumes. It seems that the application becomes unable to fetch data from the Kinect after a certain point.
I don't know whether the libraries, their dependencies, or the Kinect hardware itself is at fault (for the hardware, invisibly broken or something), and I really want to know how to determine which it is.
Could anybody tell me how I can fix this issue?
My environment is shown below:
Mac OS X v10.7.4 (MacBook Air, Core i5 1.6 GHz, 4 GB of memory)
Xcode 4.4.1
Kinect for Windows
OpenNI-Bin-Dev-MacOSX-v1.5.4.0
Sensor-Bin-MacOSX-v5.1.2.1
I followed the instructions here about libusb: http://openkinect.org/wiki/Getting_Started#Homebrew
Also, when I try using libfreenect (I know it's separate from OpenNI+SensorKinect), its sample applications say "Number of devices found: 0", which makes no sense to me since my Kinect is certainly connected to the MBA...
Unless you're booting into Windows, forget about Kinect for Windows.
Regarding libfreenect and OpenNI: in most cases you'll use one or the other, so think about what functionality you need.
If it's basic RGB + depth image access (and possibly motor and accelerometer), libfreenect is your choice.
If you need RGB + depth images plus skeleton tracking and (hand) gestures (but no motor or accelerometer access), use OpenNI. Note that if you use the unstable (dev) versions, you should use Avin's SensorKinect driver.
The easiest thing to do is a nice clean install of OpenNI.
Also, if it helps, you can use a creative coding framework like Processing or openFrameworks.
For Processing I recommend SimpleOpenNI.
For openFrameworks you can use ofxKinect, which ties into libfreenect, or ofxOpenNI. Download the openFrameworks package on the FutureTheatre Kinect Workshop wiki, as it includes both addons and some really nice examples.
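To narrow down whether it's the hardware or the OpenNI stack, it can also help to poll frames outside OpenNI entirely. A small sketch using libfreenect's Python bindings (assuming they are installed; if frames keep arriving for minutes, the hardware and USB side are probably fine and the problem sits in OpenNI/SensorKinect):

    import time
    import freenect   # libfreenect's Python wrapper

    start = time.time()
    frames = 0
    while time.time() - start < 60:
        result = freenect.sync_get_depth()   # (array, timestamp), or None on failure
        if result is None:
            print("no frame after %.1fs - device stopped responding" % (time.time() - start))
            break
        frames += 1
        print("frames: %d, elapsed: %.1fs" % (frames, time.time() - start))

    freenect.sync_stop()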
When you connect the Kinect to the machine, have you provided external power to it? The device will appear connected to a computer on USB power alone, but it will not be able to transfer data, as it needs the external power supply.
Also, what Kinect sensor are you using? If it is a new Kinect device (designed for Windows), it may have a different device signature, which may cause the OpenNI drivers to play up. I'm not 100% sure on this one, but I've only ever tried OpenNI with an Xbox 360 sensor.