How to track a fast-moving object using ARKit? - object-detection

I am trying to build an iOS app in Swift and SwiftUI, using ARKit and RealityKit. I want the app to:
detect a soccer ball
detect a soccer goal
track the trajectory of the ball when shot at goal
detect when/if the ball hits/passes the goal
Detecting the ball and the goal using ARKit works fine. However, tracking the ball while it is moving is very unreliable, and when it moves fast, ARKit fails to detect the ball at all.
I have tried to implement Apple Vision's VNDetectTrajectoriesRequest following this tutorial, https://developer.apple.com/documentation/vision/identifying_trajectories_in_video, and it works as long as I am NOT using ARKit.
VNDetectTrajectoriesRequest needs a CMSampleBuffer, while ARKit only returns a CVPixelBuffer, and I wasn't able to bridge the two and get it to work smoothly (I tried converting the pixel buffer to a CMSampleBuffer, but that made the app so slow it was useless in real time). And even if I got that working, it wouldn't solve my problem of detecting when the ball hits or passes the goal. For that I believe depth/ARKit is necessary.
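For what it's worth, here is roughly the kind of per-frame bridging I have in mind: wrap ARKit's capturedImage in a CMSampleBuffer without copying pixel data, reuse one format description, and feed a single stateful trajectory request for the whole session. This is only a sketch I haven't verified to run smoothly in real time; the trajectory parameters and the .right orientation are guesses:

```swift
import ARKit
import Vision
import CoreMedia

final class BallTrajectoryTracker: NSObject, ARSessionDelegate {

    // One stateful trajectory request, reused across frames.
    // trajectoryLength and the confidence threshold below are guesses.
    private lazy var trajectoryRequest = VNDetectTrajectoriesRequest(
        frameAnalysisSpacing: .zero,
        trajectoryLength: 10
    ) { request, _ in
        guard let observations = request.results as? [VNTrajectoryObservation] else { return }
        for trajectory in observations where trajectory.confidence > 0.9 {
            // Normalized image coordinates of the fitted parabola.
            print("Trajectory:", trajectory.projectedPoints)
        }
    }

    // Created once and reused; ARKit's captured image format doesn't change mid-session.
    private var formatDescription: CMVideoFormatDescription?

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let pixelBuffer = frame.capturedImage

        if formatDescription == nil {
            var description: CMVideoFormatDescription?
            CMVideoFormatDescriptionCreateForImageBuffer(
                allocator: kCFAllocatorDefault,
                imageBuffer: pixelBuffer,
                formatDescriptionOut: &description
            )
            formatDescription = description
        }
        guard let format = formatDescription else { return }

        // Wrap the pixel buffer; this does not copy the pixel data.
        var timing = CMSampleTimingInfo(
            duration: .invalid,
            presentationTimeStamp: CMTime(seconds: frame.timestamp, preferredTimescale: 1_000_000_000),
            decodeTimeStamp: .invalid
        )
        var sampleBuffer: CMSampleBuffer?
        CMSampleBufferCreateReadyWithImageBuffer(
            allocator: kCFAllocatorDefault,
            imageBuffer: pixelBuffer,
            formatDescription: format,
            sampleTiming: &timing,
            sampleBufferOut: &sampleBuffer
        )
        guard let buffer = sampleBuffer else { return }

        // .right assumes portrait device orientation; adjust for your setup.
        let handler = VNImageRequestHandler(cmSampleBuffer: buffer, orientation: .right, options: [:])
        try? handler.perform([trajectoryRequest])
    }
}
```

This object would be set as the ARSession's delegate; the expensive part should then be Vision itself rather than the buffer wrapping.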
Does anyone have a solution for tracking a fast-moving object using ARKit?
I am close to giving up, but I have found a guy on YouTube who is detecting a ball's trajectory using ARKit, https://www.youtube.com/watch?v=B4yfp1UmM5s. I have written to him but received no reply so far. I have been trying to solve this for over a week but I can't find a solution. Please, internet, help me!

I think that such a mission is impossible for an iOS device in 2022.
Firstly, let's assume the average speed of a kicked soccer ball is 12 m/s and that ARKit and Vision track it at 60 fps. At that frame rate the ball travels about 20 cm between consecutive frames, which makes reliable tracking of such an object very difficult. Even MoCap systems use at least 120 fps to track much slower movements.
Secondly, in 5 seconds the ball covers a distance of roughly 60 meters. At that range an object as small as a soccer ball becomes too small in the image to detect or recognize reliably, especially since it also spins during flight.

Related

Defining a tracking area for the Kinect

Is it possible to specify a (rectangular) area for skeleton tracking with the Kinect (using any of the available SDKs)? I want to make sure that only users inside that designated area are tracked and that the sensor is not distracted by people outside it. Think of a game zone, in which a player interacts with the Kinect and where bystanders outside of the zone should be ignored lest they confuse the sensor.
The reason I want this is that many times the Kinect "locks" onto someone or even something, whether it should or not, and then it's difficult for the sensor to track other individuals who come into tracking range. I want to avoid that by defining this zone.
It's not possible to specify a target area for the skeleton tracking with Microsoft's official SDKs, but there are some potential workarounds.
(Note that I'm not familiar with other SDKs for the Kinect, and note that I'm not sure if you are using the Kinect v1 or v2.)
If you are using the Kinect v1, note that it can track 6 players simultaneously (with a skeleton body position), but it can only provide full-body skeletal tracking for 2 players at a time. It's possible to specify which 2 players you want full skeletal tracking for in the official SDK, and you can do this based on which skeletons are in your target game zone.
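To illustrate just the selection logic, here is a hypothetical sketch (the Skeleton and GameZone types are stand-ins, and the actual calls are C# ones, if I recall correctly SkeletonStream.AppChoosesSkeletons together with SkeletonStream.ChooseSkeletons(id1, id2)):

```swift
import Foundation

// Hypothetical stand-ins for the Kinect SDK's skeleton data.
struct Skeleton {
    let trackingID: Int
    let position: (x: Double, z: Double)   // meters in sensor space
}

// A rectangular game zone on the floor plane.
struct GameZone {
    let xRange: ClosedRange<Double>
    let zRange: ClosedRange<Double>

    func contains(_ skeleton: Skeleton) -> Bool {
        xRange.contains(skeleton.position.x) && zRange.contains(skeleton.position.z)
    }
}

/// Pick up to two skeletons inside the zone for full skeletal tracking,
/// preferring the ones closest to the sensor.
func skeletonsToTrack(_ candidates: [Skeleton], in zone: GameZone) -> [Int] {
    candidates
        .filter { zone.contains($0) }
        .sorted { $0.position.z < $1.position.z }
        .prefix(2)
        .map { $0.trackingID }
}
```

You would then pass the returned tracking IDs to the SDK's choose-skeletons call whenever the selection changes.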
If this isn't the problem, and the problem is that the Kinect (v1 or v2) has already detected 6 players and it can't detect a 7th individual that's in your game zone, then that is a more difficult problem. With the official SDK, you have no control over which 6 players are selected to be tracked. The sensor will lock onto the first 6 players it finds, so if a 7th player walks in, there is no simple way to lock onto that player.
However, there are some possible workarounds that involve resetting the sensor so that it clears all skeletons and re-selects the 6 tracked ones (see the thread Skeleton tracking in crowds - Kinect v2):
Kinect body tracking is always scanning and finding candidate bodies to track. The body tracking only locks on when it detects head and shoulders of the person facing the camera. You could do something like look for stable blob points in the target area and if there isn't a tracked body, reset the Kinect Monitor service.
The SDK is resilient to this type of failure of the runtime, but it is a hard approach. Additionally, you could employ a way to cover the depth camera (your hand) to reset the tracking since this will make all depth/ir invalid and will need to rebuild.
-- Carmine Sirignano - MSFT
In the same thread, RobAcheson points out that restarting the sensor is another workaround:
I've been using the by-hand method successfully for a while and that definitely works - when I'm in the crowd :)
I have started calling KinectSensor.Close() and KinectSensor.Open() when there are >6 skeletons if none are in the target area. That seems to be working well too. Now I just need a crowd to test with.
-- RobAcheson
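Putting the two quotes together, the reset heuristic could look roughly like the hypothetical sketch below, reusing the Skeleton and GameZone types from the earlier sketch (the real calls are KinectSensor.Close()/Open() in the C# SDK):

```swift
// Hypothetical wrapper mirroring KinectSensor.Close()/Open() from the C# SDK.
protocol BodySensor {
    var trackedBodies: [Skeleton] { get }
    func close()
    func open()
}

/// Reset the sensor when every tracking slot is taken but none of the
/// tracked bodies is inside the game zone, so it can re-acquire players.
func resetIfStuck(_ sensor: BodySensor, zone: GameZone, maxBodies: Int = 6) {
    let bodies = sensor.trackedBodies
    let anyoneInZone = bodies.contains { zone.contains($0) }
    if bodies.count >= maxBodies && !anyoneInZone {
        sensor.close()
        sensor.open()
    }
}
```

Resetting is disruptive (tracking drops for everyone for a moment), so you would only want to trigger it after the zone has been empty of tracked bodies for a sustained period.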

Local outdoor positioning system

I am trying to create a local positioning system with an accuracy of better than half a meter. I live in an area with poor cellular coverage (in fact, none), and I need a system that can track an object moving around near my house and the surrounding area; the object could be an animal such as a cat or a dog. I have looked at RTK, but it is too expensive, and an INS has too much drift over long periods. The garden around my house has trees, bushes, and an old barn, which rules out line-of-sight solutions.
I want to be able to use an Arduino or Raspberry Pi, and I want to keep the project under £100. If there are any systems that might work, please respond.
Many thanks

Why does Kinect 2 Fusion produce worse results than Kinect 1?

At my university we have several Kinect 1s and Kinect 2s. I am testing the quality of the Kinect Fusion results on both devices, and unexpectedly the Kinect 2 produces worse results.
My testing environment:
Static camera scanning a static scene.
In this case, if I check the results from both Kinect 1 and Kinect 2, the Kinect 2 seems to produce a much smoother and nicer point cloud. But if I check the scans from a different angle, you can see that the Kinect 2 result is far worse, even though the point cloud is smoother. As you can see in the pictures, if I view the resulting point cloud from the same viewpoint as the camera, it looks fine, but as soon as I view it from a different angle, the Kinect 2 result is terrible; you can't even tell that there is a mug in the red circle.
Moving camera scanning a static scene
In this case the Kinect 2 gives even worse results, compared to Kinect 1, than in the case above. In fact, I can't reconstruct anything at all with the Kinect 2 if I am moving it. On the other hand, the Kinect 1 does a pretty good job with a moving camera.
Does anybody have any idea why the Kinect 2 fails these tests against the Kinect 1? As I mentioned above, we have several Kinect cameras at my university and I tested more than one of each, so this should not be a hardware problem.
I experienced similar results when I was using the Kinect for 3D reconstruction: Kinect 2 produced worse results than Kinect 1. In fact, I tried the InfiniTAM framework for 3D reconstruction, and it yielded similar results. What was different in my case was that I was moving the camera around, and the camera tracking was awful.
When I asked the authors of InfiniTAM about this, they provided the following likely explanation:
... the Kinect v2 has a time of flight camera rather than a structured light sensor. Due to imperfections in the modulation of the active illumination, it is known that most of these time of flight sensors tend to have biased depth values, e.g. at a distance of 2m, everything is about 5cm closer than measured, at a distance of 3m everything is about 5cm further away than measured...
Apparently, this is not an issue with structured light cameras (Kinect v1 and the like). You can follow the original discussion here.
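To illustrate what such a bias would mean in practice, here is a toy correction built only from the two figures quoted above (a sketch; a real fix would need a proper per-sensor depth calibration over many distances):

```swift
import Foundation

// Bias = measured minus true distance, taken from the quote above:
// at 2 m the true distance is ~5 cm closer than measured (+0.05 m bias),
// at 3 m it is ~5 cm further than measured (-0.05 m bias).
let calibration: [(measured: Double, bias: Double)] = [
    (2.0, +0.05),
    (3.0, -0.05),
]

/// Linearly interpolate the bias at a measured depth and subtract it.
func correctedDepth(_ measured: Double) -> Double {
    guard let first = calibration.first, let last = calibration.last else { return measured }
    if measured <= first.measured { return measured - first.bias }
    if measured >= last.measured { return measured - last.bias }
    for (a, b) in zip(calibration, calibration.dropFirst()) where measured <= b.measured {
        let t = (measured - a.measured) / (b.measured - a.measured)
        let bias = a.bias + t * (b.bias - a.bias)
        return measured - bias
    }
    return measured
}

print(correctedDepth(2.0))   // 1.95 - the sensor overestimated here
print(correctedDepth(2.5))   // 2.5  - the bias crosses zero between the two points
print(correctedDepth(3.0))   // 3.05 - the sensor underestimated here
```

A distance-dependent error like this is exactly the kind of thing a fusion algorithm assuming an unbiased depth sensor will turn into the warped-from-another-angle geometry described in the question.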

Any way to get a stable/consistent FPS from the Kinect?

I am trying to record Kinect files in .oni format, which I will later try to synchronize with other sensors. As such, it is very important that I get a consistent fps, even if some frames are repeats.
From what I can see now, WaitAndUpdateAll does not guarantee that the frame rate is consistent. I will be recording for several minutes (20+), so I need to make sure there is no drift!
Does anyone know if it's possible to lock down the fps of the recording, and if not, how stable the recording fps of the Kinect is? Thanks!
After some investigation of this issue, I put together the following write-up on the topic:
http://denislantsman.com/?p=50
Putting it here so interested people can find it and not have to wrestle with this issue.
My guess would be to go with the PCL library, since its developers also work with the ROS team, where they have to sync sensors a lot. But be warned: I wasn't able to capture XYZRGB clouds at 30 FPS on Windows 7. If you only need XYZ captured, you should be fine. Worst case, you have to time-stamp and sync all your data yourself.
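If it comes to syncing by hand, the core of it is just nearest-timestamp matching between the streams. A generic sketch, not tied to any Kinect API:

```swift
import Foundation

// Generic nearest-timestamp matching between two recorded streams,
// as a sketch of the "time-stamp and sync it yourself" fallback.
struct SensorFrame {
    let timestamp: TimeInterval   // seconds on a common clock
    let index: Int                // frame index within its own stream
}

/// For each reference frame, pick the other stream's frame with the closest
/// timestamp, rejecting pairs that are further apart than `tolerance` seconds.
func matchFrames(reference: [SensorFrame], other: [SensorFrame],
                 tolerance: TimeInterval = 1.0 / 60.0) -> [(SensorFrame, SensorFrame)] {
    var matches: [(SensorFrame, SensorFrame)] = []
    for ref in reference {
        guard let best = other.min(by: {
            abs($0.timestamp - ref.timestamp) < abs($1.timestamp - ref.timestamp)
        }) else { continue }
        if abs(best.timestamp - ref.timestamp) <= tolerance {
            matches.append((ref, best))
        }
    }
    return matches
}
```

The important part is that both streams are stamped against the same clock at capture time; matching afterwards is the easy half of the problem.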

mach_msg_trap, - (void) mouseDragged and timer performance

Leopard 10.5.8, Xcode 3.1.1; using runModalForWindow to implement (what is intended to be) a high-performance mouse-tracking mechanism in which I have to do complex real-time bitmap modifications.
The modal loop runs, the timer fires, the mouse tracks... but the performance is abysmal, and it gets worse and worse the longer the runloop goes on. Instead of catching mouse messages every pixel or so, I get them every 5... 10.... 20 seconds.
Instruments shows that the majority of the time during this growing response bottleneck is being spent in mach_msg_trap (and yes, I have the perspective set to the running app), so the impression I am under is that it "thinks" it doesn't have any work to do, despite the fact that I'm dragging the mouse around with the button held down like a crazy person. There are no memory leaks showing up, and on my 8-core 2.8 GHz machine there's almost no CPU activity going on.
Again, the app is not spending much time in my code... so it's not a performance problem of mine. I've probably configured something wrong, or failed to configure it at all, or am simply approaching the whole idea wrong -- but I sure would appreciate some insight here. As it stands now, the dispatch of mouse messages and timer messages is absolutely unacceptable. You couldn't implement a crayon drawing program for someone immersed in cold molasses with the response times I'm getting.
EDIT: Some additional info: this doesn't happen on my 10.5.8 MacBook Pro, just on the 8-core, 6-display Mac Pro. I tried taking the display code for the crop rect out of drawRect and replacing it with an NSLog()... it still drags on issuing mouse updates. I also tried rebooting and running without the usual complement of apps, and with mirrored displays. No difference.
Imagine dragging a brush across the screen: at first it paints smoothly, then gaps appear between brush placements, then the gaps get larger, and this goes on until you're only getting one brush placement every 10 seconds. That's how this acts. Using NSLog() and various other tracking methods, I've determined that, at the highest level at least, it happens because the mouseDragged events slow down to a trickle. The question in a nutshell is: why would that happen?
Anyone?
OK, I've isolated it -- the problem comes from my Wacom tablet mouse. Plug in a regular optical mouse and everything runs great. Same thing on my MacBook Pro using the trackpad: works fine.
The tablet is a Wacom Intuos 4 with the stock drivers as of January 2011. I'm going over to the Wacom site and reporting this next.
What a nightmare that was. I have spent over 100 hours on this, thinking I'd hosed some subtlety in the app handling, drawing, etc. Sheesh.