Raspberry Pi 3. Correct for perspective - camera

I'm new here, and I have googled my problem, but I find it hard to state it in simple words.
I have a GoPiGo3 kit with a Raspberry Pi Camera Module. The camera is mounted at a 45-degree angle. I would like to transform the image so that it looks as if it were taken from directly above (a bird's-eye view). I can't find any simple way to do this. Any suggestions would be greatly appreciated. :)
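One common way to do this, in case it helps, is a perspective (homography) warp with OpenCV: pick four points on the ground whose true layout you know and map them to a rectangle. The sketch below is a minimal illustration; the file names, corner coordinates, and output size are placeholders you would replace with points measured from your own camera setup.

```python
# Minimal sketch of a bird's-eye (top-down) warp with OpenCV.
# The four source points are hypothetical; in practice, measure where the
# corners of a known rectangle on the floor appear in the camera image.
import cv2
import numpy as np

img = cv2.imread("frame.jpg")  # placeholder: a frame captured by the Pi camera

# Pixel coordinates of a ground rectangle as seen by the tilted camera (placeholders)
src = np.float32([[220, 180], [420, 180], [500, 460], [140, 460]])
# Where those corners should land in the top-down view (here a 300x400 output)
dst = np.float32([[0, 0], [300, 0], [300, 400], [0, 400]])

M = cv2.getPerspectiveTransform(src, dst)          # 3x3 homography
top_down = cv2.warpPerspective(img, M, (300, 400))
cv2.imwrite("top_down.jpg", top_down)
```

Placing a sheet of paper or a checkerboard of known size on the floor and clicking its corners in one captured frame is usually the easiest way to get the four source points.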

Related

Wiiuse library - how to calculate the quaternion from the wm->exp.mp.angle_rate_gyro like the DSU Controller test

I currently have the wiiuse library up and running with the Motion Plus, outputting the angle rate from the gyro. Now I want this to give me the output in angles, either in Euler representation or, ideally, as quaternions, and I am a little stuck here. Are there any solutions or code examples that could point me toward how to calculate these?
I have an example of wiimoteHook running with a DSU controller test that outputs the quaternion, which is exactly what I want to pass on to my program.
In the program I am working on, a person holds the Wii remote, and a positioning system using ultrasound gives me a coordinate (x, y, z) in a world frame. I then want the Wiimote to give me the rotation at that point, so that I can teach a 6-axis robot a tool center point that, in the end, would imitate the movement of the remote.
I hope that somebody can guide me toward getting the rotations from the Wiimote.
Thanks in advance.
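For reference, the standard way to go from body angular rates to an orientation is to integrate the quaternion kinematics q_dot = 0.5 * q ⊗ [0, ω]. The sketch below is a generic illustration of that step, not something wiiuse-specific: it assumes the rates have already been converted from the wm->exp.mp.angle_rate_gyro fields into rad/s, and the sample period dt is a placeholder.

```python
# Generic sketch: integrate gyro angular rates into a quaternion [w, x, y, z].
# Rates must be in rad/s in the sensor frame; dt is the time between samples.
import numpy as np

def integrate_gyro(q, omega, dt):
    """Advance quaternion q = [w, x, y, z] by body rates omega = (wx, wy, wz)."""
    wx, wy, wz = omega
    # Quaternion derivative q_dot = 0.5 * q (x) [0, wx, wy, wz], written as a matrix
    omega_mat = np.array([
        [0.0, -wx, -wy, -wz],
        [wx,  0.0,  wz, -wy],
        [wy, -wz,  0.0,  wx],
        [wz,  wy, -wx,  0.0],
    ])
    q = q + 0.5 * dt * omega_mat @ q
    return q / np.linalg.norm(q)  # re-normalise so the quaternion stays unit length

# Usage: start from the identity orientation and feed each gyro sample
q = np.array([1.0, 0.0, 0.0, 0.0])
q = integrate_gyro(q, omega=(0.01, 0.02, -0.005), dt=0.01)  # placeholder sample
```

Gyro-only integration drifts over time, so in practice the result is usually fused with the accelerometer (for example with a complementary or Madgwick/Mahony filter) to keep the orientation stable.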

ReactNative - Listen to specific sound input - Vroom of Car

What I am trying to do is count the revving ("vroom" sound) of a physical car through my app. I am coding in React Native, and I don't plan to create something complex, like communicating with the car's built-in computer, to do this.
Instead, I was planning to have the app listen to nearby sounds. If the nearby sound is that of revving, then the app will simply count it.
I have done the other features in my app, but listening to the sound and detecting whether it is a "vroom" sound is what I am stuck with.
Based on my research, I can see that I have to make use of the Fast Fourier Transform algorithm, but I am confused about how I can implement it in my React Native app. I am still searching for a package that has an implementation.
I have seen apps that can be used to tune the sounds of a violin, guitar, etc. What I am trying to do is similar to this, but pretty simple. Once I get a basic idea, I will be able to get going. In my case, my app will be listening for the high-decibel sound.
Any inputs would be highly appreciated.
This is known as Acoustic Event Detection. You could possibly use an audio classification approach; the best way to solve it is with supervised machine learning, for example a CNN on mel-spectrograms. Here is an introduction. You can do the same in JavaScript using TensorFlow.js; the official documentation contains a tutorial, and a rough sketch of the pipeline is shown below.
One of the first steps is to collect a small dataset of examples of "vroom" sounds versus other loud non-vroom sounds.
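As a rough illustration of the mel-spectrogram-plus-CNN idea (sketched here in Python for brevity rather than the TensorFlow.js route above, though the structure translates), the clip length, spectrogram size, and layer sizes below are arbitrary assumptions you would tune to your own data.

```python
# Sketch: turn short audio clips into mel-spectrograms and train a tiny
# binary classifier (vroom vs. not-vroom). All sizes here are assumptions.
import numpy as np
import librosa
import tensorflow as tf

def to_mel(path, sr=16000, n_mels=64):
    """Load a 1-second clip and return a mel-spectrogram 'image' in dB."""
    y, _ = librosa.load(path, sr=sr, duration=1.0)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel)[..., np.newaxis]   # shape ~(64, 32, 1) with default hop

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 32, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),     # probability of "vroom"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, ...) once you have a folder of labelled clips
```

At inference time the app would grab short microphone windows, convert each to the same spectrogram format, and count a rev whenever the predicted probability crosses a threshold.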

Tensorflow: how to detect audio direction

I have a task: to determine the location of a sound source.
I have some experience working with TensorFlow, creating predictions on some simple features and datasets. I assume that for this task it would be necessary to analyze the sound frequencies, and probably other related data, in the training and prediction steps. The sound comes through a headset, so a human ear is able to detect the direction.
1) Has somebody already done this? (Unfortunately, I couldn't find any similar project.)
2) What kind of caveats could I run into while trying to achieve this?
3) Can I do this with this technology/approach? Are there any other sound-processing frameworks / technologies / open-source projects that could help me?
I am asking here since my research on Google, GitHub, and Stack Overflow didn't show me any relevant results on this specific topic, so any help is highly appreciated!
This is typically done with more traditional DSP using multiple sensors. You might want to look into time difference of arrival (TDOA) and direction of arrival (DOA). Algorithms such as GCC-PHAT and MUSIC will be helpful (a minimal GCC-PHAT sketch follows below).
Issues that you might encounter: DOA accuracy is a function of the direct-to-reverberant ratio of the source, i.e. the more reverberant the environment, the harder it is to determine the source location.
You might also want to consider the number of location dimensions you want to resolve. A point in 3D space is much more difficult than a direction relative to the sensors.
Using ML as an approach to this is not entirely without merit, but you will have to consider what it is you would be learning, i.e. you probably don't want to learn the test room's reverberant properties but rather the sensors' spatial properties.
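For illustration, a minimal GCC-PHAT estimate of the delay between two microphone signals might look like the following; the sample rate and test signals are placeholders, and turning the delay into an angle assumes a known microphone spacing.

```python
# Sketch: GCC-PHAT time difference of arrival (TDOA) between two mic signals.
import numpy as np

def gcc_phat(sig, ref, fs=16000, max_tau=None):
    """Return the estimated delay (in seconds) of sig relative to ref."""
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-15                     # PHAT weighting: keep only the phase
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift  # lag of the cross-correlation peak
    return shift / fs

# Usage with placeholder signals; with mic spacing d and speed of sound c (~343 m/s),
# the direction follows from theta = arcsin(c * tau / d).
tau = gcc_phat(np.random.randn(16000), np.random.randn(16000))
```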

Advice for camera buying

I am intending to buy a camera for my research on robotics. I am relatively new to CV; in fact, I am ashamed to say I am absolutely new to it.
My job is to detect objects and return their [x, y, z] coordinates. My platform is Ubuntu 12.04, and I intend to program in Python.
I have read about some devices on the internet, like the Kinect for Xbox 360, but I have no idea how to choose the best one in terms of price and suitability for my job (return x, y, z with precision < 5 mm after calibrating; no entertainment features needed).
Please advise on the right one at a suitable price, or the best price.
Thanks so much.
With so little information about the problem, I can only give you general advice.
The Kinect camera you mentioned not only captures images but can also give a rough estimate of the distance of every pixel from the camera. This can be very useful for determining the position of an object, but I don't think you can obtain 5 mm precision with it.
I think your best bet is to go with two cameras and use stereo vision to obtain the distance (a rough sketch follows below). You can use normal cameras for that, but you'll need a high resolution to obtain the precision you want.
I hope it helps.
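As a rough sketch of the stereo route with OpenCV (assuming the pair is already calibrated and rectified; the file names, focal length, and baseline below are placeholders), block matching gives a disparity map that converts to depth via Z = f * B / d.

```python
# Sketch: depth from a rectified stereo pair using OpenCV block matching.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder rectified left image
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # placeholder rectified right image

# OpenCV returns fixed-point disparities scaled by 16, hence the division
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

focal_px = 700.0    # focal length in pixels, from calibration (placeholder)
baseline_m = 0.06   # distance between the two cameras in metres (placeholder)

# Depth Z = f * B / d; non-positive disparities are invalid and masked out
with np.errstate(divide="ignore"):
    depth_m = focal_px * baseline_m / disparity
depth_m[disparity <= 0] = np.nan
```

Depth precision degrades roughly quadratically with distance for a fixed baseline, so whether 5 mm is achievable depends heavily on the working distance, resolution, and baseline you choose.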

Kinect joint detection from top

I'm wondering, does the Kinect detect joints correctly when it's mounted up high (on the ceiling)?
I don't have the necessary equipment to attach it to the ceiling and test, but I was wondering whether it reliably detects a human. I'm OK even if it confuses the joints, actually.
Has anybody tested this?
From what I've seen while using it, the skeleton detection is iffy from any angle other than pointing directly at a person's front or back. A Kinect pointed straight down with people walking under it would almost certainly not detect anyone, because the human form from above does not look anything like it does from the front. I have had the Kinect pick up random people around me in odd positions (sitting, viewed from the side, etc.), but the joints were largely erratic. If you have it mounted on the ceiling and pointed downwards at a sufficient angle to still see people from the front instead of from above, it could do a fairly good job of picking them up.
So when you say on the ceiling, do you mean pointing straight down or still looking at a fairly horizontal angle?
I did a little bit of testing with the Kinect mounted in a very high position (2.5 m, 70° to the ground). As answered by Coeffect, it just doesn't work, with neither the Microsoft SDK nor OpenNI. What I can add is that the skeleton recognition only works if the user is facing the camera with his or her whole body front. Even worse, both frameworks seem to expect the head at the top of the depth frame.