I am trying to build a device that takes analogue input from the earth and converts it into electrical signals, which I want to feed into an Android smartphone for data analysis. I initially thought about using the 3.5mm jack of the Android device, but apparently Android does not support input through the 3.5mm jack, so I decided to use the USB cord as the input instead.
Now my question is: will my Android phone or tablet be able to read the USB data directly, or does it have to be fed through some microcontroller?
I'm not sure I'm understanding your question correctly. Are you trying to measure soil conductivity and find out if your plants need water? That is easy. Or are you trying to build a heart monitor? That is a bit more complex.
Anyway, if you are interested in conductivity measurement with Android, you may want to have a look at this device; it is driver-free and works on Android.
http://www.yoctopuce.com/EN/products/usb-sensors/yocto-knob
I believe Valarm is using them as well:
http://www.valarm.net/blog/use-valarm-sensor-for-flood-warning-and-water-detection
Being a novice, I need advice on how to solve the following problem.
Say, with photogrammetry, I have obtained a point cloud of part of my room. Then I upload this point cloud to an Android phone and I want it to track its camera pose relative to this point cloud in real time.
As far as I know, differences in camera intrinsics (a simple camera or another phone's camera vs. my phone's camera) can affect the precision of localization, right?
Actually, it's supposed to be an AR app, so I've tried existing SDKs: Vuforia, Wikitude, Placenote (I haven't tried ARCore yet because my device most likely won't support it). The problem is that they all use their own clouds for their services, and I don't want to depend on them. Ideally, it would be my own PC where I perform the 3D reconstruction and from which my phone downloads the point cloud.
I need SLAM (with IMU fusion) or VIO on my phone, don't I? Are there any ready-to-go implementations in libraries like ARToolKit or, maybe, PCL? Will any existing SLAM system pick up a map reconstructed with other algorithms, or should I use one and the same SLAM for both mapping and localization?
So, the main question is how to do everything ARCore and Vuforia do without using third-party servers. (I suspect the answer is to build the same underlying layer that Vuforia and the other SDKs use to employ all the available hardware.)
Is there a way in Linux (Raspbian) to capture only the depth data stream from a Kinect? I'm trying to reduce the amount of processing needed to capture Kinect information, so I want to ship the data stream to another computer to assemble the data.
Note:
I have freenect installed, but anything that requires OpenGL will not run on Raspbian.
I have installed this example, which captures the data stream with a black-and-white visual depth display.
librekinect is a Linux kernel module that lets you use the depth image like a standard webcam. It's known to work with the Raspberry Pi.
But if you want to use libfreenect for full video/depth/motor support, you'll need a more powerful board like the ODROID XU-3 Lite. By the way, libfreenect only requires OpenGL for some of the examples; the rest of the project compiles and runs fine without it.
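If the module exposes the depth stream as an ordinary video device (I'm assuming it shows up as /dev/video0 here, which you should verify on your Pi), grabbing frames to ship to another machine only needs a plain capture loop. A minimal C++/OpenCV sketch of just the capture side:

```cpp
// Minimal sketch: grab depth frames from the librekinect video device
// (assumed to register as video device 0) and report basic frame info.
// Build with: g++ depth_grab.cpp -o depth_grab `pkg-config --cflags --libs opencv4`
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture cap(0);   // assumption: the kernel module appears as device 0
    if (!cap.isOpened()) {
        std::cerr << "Could not open the depth device\n";
        return 1;
    }
    cv::Mat frame;
    while (cap.read(frame)) {
        // At this point the raw frame could be pushed over a TCP socket
        // to the machine that does the heavy processing, instead of
        // being handled locally on the Pi.
        std::cout << "frame " << frame.cols << "x" << frame.rows
                  << " type=" << frame.type() << std::endl;
    }
    return 0;
}
```

Capturing this way avoids any OpenGL dependency; only the networking/forwarding code would need to be added on top.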
I need to read GPS coordinates using a VB.NET program directly from a GPS device connected to the computer via USB (Bluetooth is also OK, but I prefer USB). My constraints are:
The computer running the software is NOT connected to the internet. It is a stand-alone machine in a moving vehicle.
I need to be able to read GPS coordinates from the device while the vehicle moves and use them to perform location-aware queries on a local database.
The GPS device can be anything (e.g. a Garmin GPS or a GPS card without a display), as long as it can be purchased off the shelf or over the internet.
The user group for this solution is quite small (about 40 users).
I have already checked out GPSGate (http://gpsgate.com/) and emailed my requirements to them. They replied, and I quote: "I am sorry but we have no product for you." (end of reply).
I also checked out Eye4Software and tried their demo product, but it does not pick up my Garmin Nuvi via USB. They responded to my questions, but unfortunately their OEM product is an ActiveX DLL and I am looking for a .NET-based solution.
So if anyone has a "home-grown" solution based on the .NET framework, that can be easily duplicated, I would really appreciate it. Many thanks!
Most of the USB GPS pucks speak a standardized protocol called NMEA 0183. There are several .NET libraries out there that decode this protocol; see here for some pointers to get started.
So, if when shopping around you just check that the device is able to generate NMEA, you should be up and running in a minimum of time, and at a reasonable cost.
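To give a feel for what "decoding NMEA" amounts to: each sentence is a single comma-separated line of ASCII. Below is a minimal sketch (in C++ purely for illustration; the .NET libraries mentioned above do this splitting for you) that parses the standard textbook $GPGGA example sentence into latitude and longitude. A real application would read such lines from the GPS device's serial port.

```cpp
// Minimal sketch of NMEA 0183 parsing: split a $GPGGA sentence on commas
// and convert the ddmm.mmmm latitude/longitude fields to decimal degrees.
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Convert NMEA "ddmm.mmmm" (or "dddmm.mmmm") plus hemisphere to decimal degrees.
static double toDegrees(const std::string& value, const std::string& hemi) {
    if (value.empty()) return 0.0;
    double raw = std::stod(value);
    double degrees = static_cast<int>(raw / 100);
    double minutes = raw - degrees * 100.0;
    double result = degrees + minutes / 60.0;
    if (hemi == "S" || hemi == "W") result = -result;
    return result;
}

int main() {
    // Widely used GGA example sentence; in practice this comes from the receiver.
    std::string sentence =
        "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47";

    std::vector<std::string> fields;
    std::stringstream ss(sentence);
    std::string field;
    while (std::getline(ss, field, ',')) fields.push_back(field);

    // GGA field layout: [2]=latitude, [3]=N/S, [4]=longitude, [5]=E/W
    double lat = toDegrees(fields[2], fields[3]);
    double lon = toDegrees(fields[4], fields[5]);
    std::cout << "lat=" << lat << " lon=" << lon << std::endl;
    return 0;
}
```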
EDIT: a "GPS puck" is a GPS receiver shaped more or less like a hockey puck, like this one.
For in-car use there are specific versions that can be fixed onto the vehicle's roof.
They are pretty common (many online shops carry them), but select them based on the chip that's inside: the popular SiRFstar III is still a solid performer, stable and accurate. I haven't had the chance to play with its successor, the SiRFstar IV, yet, and I'm not implying these are the only good chips around, only that this is the chip I have the most experience with.
This question relates to the Kaggle/CHALEARN Gesture Recognition challenge.
You are given a large training set of matching RGB and depth videos that were recorded from a Kinect. I would like to run the Kinect SDK's skeletal tracking on these videos, but after a lot of searching, I haven't found a conclusive answer as to whether or not this can be done.
Is it possible to use the Kinect SDK with previously recorded Kinect video, and if so, how? Thanks for the help.
It is not a feature within the SDK itself; however, you can use something like the Kinect Toolbox OSS project (http://kinecttoolbox.codeplex.com/), which provides skeleton record and replay functionality (so you don't need to stand in front of your Kinect each time). You do, however, still need a Kinect plugged into your machine to use the runtime.
I am planning a small Arduino project and would like to know whether what I have in mind would work with a regular Arduino board. I'm thinking of buying an Arduino Uno for my project, along with an IR LED and an IR sensor. Here's what I want to do:
I want to point the LED towards the sensor, so that the sensor is always detecting light. Then I'll start "cutting" that light (say, with my hand) several times. I want the Arduino program to time the intervals between the times the light is "cut" and send these times to my computer via USB, so I can process the data.
I've seen many people talk about serial communication between an Arduino board and a computer, but I'm not sure how that works. Will it use the same USB connector I use to upload programs to the board, or do I have to buy anything else?
EDIT: tl;dr: I guess my question, in the end, is twofold:
1) Am I able to "talk" to my computer using the built-in USB connector on the board, or is that used solely for uploading programs, so that I'd need to buy something else? and
2) Is this project feasible with an Arduino Uno board?
Thanks for the help!
Yes, your project is very feasible.
You use the built-in USB connector both to program the device and to communicate with it. Check out some examples on the Serial reference page.
For reading the sensor, you'll want to use either a digital or an analog input. For a digital input, you'll likely need external components to set the light threshold, but it will provide a simple yes or no as to whether something is in front of it. With an analog input, you can use a threshold in code to determine when your hand passes.
Timing can be done either on the device with the millis() function or on the connected computer.
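To tie those pieces together, here is a minimal sketch of the idea, assuming the IR sensor's analog output is wired to pin A0 and that a reading below some threshold means the beam is blocked (both the pin and the threshold value are assumptions you would adjust for your actual parts):

```cpp
// Sketch of the timing idea: watch an IR sensor on A0, detect when the
// beam is cut, and print the interval between cuts over USB serial.
const int sensorPin = A0;
const int threshold = 300;        // tune this by printing raw readings first

bool beamWasBlocked = false;
unsigned long lastCutTime = 0;

void setup() {
  Serial.begin(9600);             // same USB connector used for uploading sketches
}

void loop() {
  int reading = analogRead(sensorPin);
  bool beamBlocked = (reading < threshold);

  // Rising edge: the beam has just been cut.
  if (beamBlocked && !beamWasBlocked) {
    unsigned long now = millis();
    if (lastCutTime != 0) {
      Serial.println(now - lastCutTime);   // interval in milliseconds
    }
    lastCutTime = now;
  }
  beamWasBlocked = beamBlocked;
}
```

On the computer side, anything that can open the board's serial port at 9600 baud (the Arduino IDE's Serial Monitor, or a small script) will then see one interval per line.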