Brickschema Information

I am working on an IoT gateway (a Raspberry Pi), and I want to use Brick for data modelling and normalisation of sensor data. I have gone through a paper (link) and found some theory about Brick, but it didn't cover implementation. Could you tell me how I can start working with Brick using Python libraries, and what I need to integrate with Brick to model the sensor data? Please share an example of implementing Brick on my IoT gateway. Thanks in advance.
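There is a `brickschema` Python package (pip install brickschema) that wraps rdflib and ships with the Brick ontology, and it is light enough to run on a Raspberry Pi. Below is a minimal sketch, assuming a made-up building namespace and made-up room/sensor names, of how you might declare a sensor in a Brick model and query it back:

```python
# Minimal sketch using the brickschema package; the "mybuilding" namespace
# and the room/sensor names are hypothetical examples.
from brickschema import Graph
from rdflib import Namespace, RDF

BRICK = Namespace("https://brickschema.org/schema/Brick#")
BLDG = Namespace("http://example.com/mybuilding#")  # hypothetical namespace

g = Graph(load_brick=True)  # loads the Brick ontology into an rdflib graph
g.bind("bldg", BLDG)

# Model a room and a temperature sensor attached to the gateway.
g.add((BLDG.room1, RDF.type, BRICK.Room))
g.add((BLDG.temp1, RDF.type, BRICK.Temperature_Sensor))
g.add((BLDG.temp1, BRICK.isPointOf, BLDG.room1))

# Ask the model for every temperature sensor and its location.
rows = g.query("""
    PREFIX brick: <https://brickschema.org/schema/Brick#>
    SELECT ?sensor ?room WHERE {
        ?sensor a brick:Temperature_Sensor ;
                brick:isPointOf ?room .
    }""")
for sensor, room in rows:
    print(sensor, room)
```

Note that a Brick model holds only metadata (what the sensors are and how they relate); the actual readings usually live in a separate time-series store, keyed by the sensor's URI, so your gateway code would write values there and use the graph to discover which streams mean what.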

Related

Relocalize a smartphone on a preloaded point cloud

Being a novice, I need advice on how to solve the following problem.
Say, with photogrammetry I have obtained a point cloud of part of my room. Then I upload this point cloud to an Android phone and I want it to track its camera pose relative to this point cloud in real time.
As far as I know, there can be problems with differing camera intrinsics (a simple camera or another phone's camera vs. my phone's camera) that can affect the precision of localisation, right?
Actually, it's supposed to be an AR app, so I've tried existing SDKs - Vuforia, Wikitude, Placenote (I haven't tried ARCore yet because my device most likely won't support it). The problem is they all use their own clouds for their services, and I don't want to depend on them. Ideally, it's my own PC where I perform 3D reconstruction and from where my phone downloads the point cloud.
I need SLAM (with IMU fusion) or VIO on my phone, don't I? Are there any ready-to-go implementations within libraries like ARToolKit or, maybe, PCL? Will an existing SLAM system pick up a map reconstructed with other algorithms, or should I use one and the same SLAM system for both mapping and localization?
So, the main question is how to do everything ARCore and Vuforia do without using third-party servers. (I suspect the answer is to devise the same underlying layer which Vuforia and the other SDKs use to employ all the available hardware.)

LabVIEW 3D sensor mapping

I'm currently working with LabVIEW 2012 and I'm about to begin a project with 3D sensor mapping (the Sensor Mapping Express VI) in LabVIEW.
I have read about it, and most of the time they're talking about NI-DAQmx tasks, but for my project I'd like to use data from shared variables that I wrote.
Does anyone know whether it is possible and/or very difficult to do that? I also see that we can put "free sensors to represent data you wire to the Express VI." So is that the answer to my question?
Yes, you can connect your data via the free-sensor input; this should not give you a problem. You can even connect simulated channels if need be.

How to Constantly Update Microcontrollers via USB

I have a computer that needs to constantly scan online information, and then, in accordance with what it finds, the microcontroller (assume an Arduino) will act in a particular way.
However, it seems that most microcontrollers cannot be dynamically updated via USB cable. Is there a way to constantly give new instructions or commands to a program previously uploaded to the processor, to make it perform the corresponding actions?
Thank you. (I'm sorry if this isn't the right forum to post this question, but I couldn't find one for microcontrollers.)
However, it seems that most microcontrollers cannot be dynamically updated via USB cable
If you mean programming the microcontroller via USB, then it is possible but not at all necessary. You could just send predefined instructions via USB (using LUFA, for example), via UART (supported on most microcontrollers), or via other data-transfer protocols in order to change the state of your program on the hardware side.
If you're new to microcontrollers, you should read one of the many online tutorials on the subject. Arduinos are specifically designed for beginners, and they have their own forums where you can ask questions. If you choose to go with AVR, I would recommend AVR Freaks.
Use serial communications. Install the special driver found on mbed, and then use PySerial to constantly update the mbed (any other microcontroller would work); see the sketch below.
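To illustrate the serial approach, here is a minimal PySerial sketch, assuming the board enumerates as /dev/ttyACM0 and the previously uploaded firmware is written to parse newline-terminated command strings (the LED_ON command is a made-up example):

```python
# Minimal sketch: send a predefined command over the serial port; the port
# name, baud rate, and command string are assumptions for illustration.
import time
import serial  # pip install pyserial

with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as port:
    time.sleep(2)            # many Arduinos reset when the port opens; wait it out
    port.write(b"LED_ON\n")  # predefined command the running program understands
    print(port.readline())   # optional acknowledgement sent back by the board
```

On the firmware side, the program only has to read the incoming bytes (e.g. via the Arduino Serial API) and switch behaviour accordingly; nothing is reflashed.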

Opening Kinect datasets and/or SDK Samples

I am very new to Kinect programming and have been tasked with understanding several methods for 3D point-cloud stitching using the Kinect and OpenCV. While waiting for the Kinect sensor to be shipped over, I am trying to run the SDK samples on some datasets.
I am really clueless as to where to start, so I downloaded some datasets here, but I do not understand how I am supposed to view/parse them. I tried running the Kinect SDK samples (DepthBasics-D2D) in Visual Studio, but the only thing that appears is a white screen with a screenshot button.
There seems to be very little documentation on how all these things work, so I would appreciate it if anyone could point me to the right resources on how to obtain and parse depth maps, or how to get the SDK samples working.
The Point Cloud Library (PCL) is a good starting point for handling point-cloud data obtained using the Kinect and the OpenNI driver.
OpenNI is, among other things, open-source software that provides an API to communicate with vision and audio sensor devices (such as the Kinect). Using OpenNI you can access the raw data acquired with your Kinect and use it as input for PCL software that processes the data. In other words, OpenNI is an alternative to the official Kinect SDK, compatible with many more devices, and with great support and tutorials!
There are plenty of tutorials out there, like this, this and these.
Also, this question is highly related.
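As for parsing downloaded depth maps before the sensor arrives: many public Kinect datasets store each depth frame as a 16-bit PNG with values in millimetres. Here is a minimal sketch under that assumption (the filename is a placeholder):

```python
# Minimal sketch: load a 16-bit depth frame with OpenCV and convert to metres.
import cv2
import numpy as np

depth = cv2.imread("depth_0001.png", cv2.IMREAD_UNCHANGED)  # keep raw 16-bit values
print(depth.dtype, depth.shape)  # typically uint16, (480, 640) for a Kinect v1

depth_m = depth.astype(np.float32) / 1000.0  # millimetres -> metres
depth_m[depth == 0] = np.nan                 # zero means "no reading" on the Kinect
print("median depth (m):", np.nanmedian(depth_m))
```

Check your particular dataset's README, though; some store raw 11-bit disparity values instead of millimetres, which need a different conversion.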

Streaming IP Camera solutions that do not require a computer?

I want to embed a video stream into my web page, which is part of our own cloud based software. The video should be low-latency (like video conferencing), and it would be preferable, but not required, for it to include audio. I am comfortable serving streaming binary data from the server-side, and embedding it into the page using HTML5 video.
What I am not comfortable with is capturing the video data to begin with. The client does not already have a solution in place and is looking to us for assistance. The video would be routed through our server equipment, rather than being an embedded piece that connects directly to the video source.
Using a USB or built-in camera on a computer is a known quantity for us. What I would like more information about is stand-alone cameras.
Some camera models have their own API documentation (example). From what I am reading, a manufacturer typically has its own API, repeated across many or all of its models, and each manufacturer's API is different. However, I have only done surface reading and hope to gain more knowledge from someone who has already researched this, or perhaps even has first-hand experience.
Do stand-alone cameras generally include an API? (Wouldn't this be a common requirement, so that security software can use multiple lines of cameras?) If not an API, how is the data retrieved from the on-board web server? Is it usually Flash-based? Perhaps there is a reusable video stream I could capture from there? Or is the stream formatting usually diverse?
What would I run into when trying to get the server-side to capture that data?
How does latency on a stand-alone device compare with a USB camera solution?
Do you have tips on picking out a stand-alone camera that would be a good fit for streaming through a server?
I am experienced with JavaScript (both HTML5 and Node.js), Perl and Java.
Each camera manufacturer has its own take on this in terms of access points; generally you should be able to request a snapshot or an MJPEG stream, but it can vary. Take a look at this entry on CodeProject; it tackles two common methodologies. Here's another one targeted specifically at Foscam.
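To make that concrete, here is a minimal server-side sketch in Python; the camera address, paths and credentials are hypothetical, since the actual endpoints depend on the manufacturer's API:

```python
# Minimal sketch: grab frames from an IP camera server-side. The camera
# address, paths, and credentials below are made-up examples.
import cv2        # needs an OpenCV build with FFmpeg for network streams
import requests

# One-off snapshot over plain HTTP.
resp = requests.get("http://192.168.1.64/snapshot.jpg",
                    auth=("admin", "password"), timeout=5)
resp.raise_for_status()
with open("frame.jpg", "wb") as f:
    f.write(resp.content)

# Continuous MJPEG stream; OpenCV handles the multipart parsing for you.
cap = cv2.VideoCapture("http://192.168.1.64/video.mjpg")
ok, frame = cap.read()
if ok:
    cv2.imwrite("stream_frame.jpg", frame)
cap.release()
```

From there you can transcode or re-serve the frames however your HTML5 front end expects them.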
Get a good NAS. I suggest Synology; check out their long list of supported IP web cams. You can connect the cameras with a hub or a router or whatever you wish. It's not a "computer" as in a "tower", but it does many computer jobs, and it can stay on while your computer is off or away and do things like video feeds, torrents, backups, etc.
I'm not an expert on all the features, so I don't know how to get it to broadcast without recording, but even if it does record, at least it's separate. Synology is a popular brand and there are a lot of authorized and unauthorized plugins for it. Check them out and see if one suits you.