Is it possible to animate markers in ArcMap? - arcgis

I'm completely new to ArcGIS and ArcMap, but someone suggested this program to me for a project I'm working on.
I would like to animate individual entities on a map, and I was wondering whether it is possible to do so in ArcMap. I asked about this earlier here and a member directed me to a tutorial on animating in ArcGIS. The animation in that guide was spread over the whole map (i.e., each pixel on the map displays, say, a different color to indicate population data in the area). However, I realized that if I zoom in far enough, the image eventually degenerates into pixels, which is why I need an actual object to mark a specific point. I checked some online tutorials and it seems we can place markers on the map. Can someone tell me whether it is possible to animate these markers (for example, via a for-loop)? And if so, could you point me in the right direction to get started?
Thanks in advance!

The short answer is that you can animate layers in ArcMap. It's not as simple as using the timeline feature in Google Earth, for example, but then ArcMap is much more than just a visualization tool.
This help page in the ESRI web help looks like a good place to start.
I'm not 100% sure what you mean by the image degenerating into pixels. Are you saying that the markers were single points in the layer? Unlike Google Earth, you are not confined to simply plotting points on the map. You can draw completely arbitrary shapes in ArcMap, which can be defined to cover actual areas of the map, so when you zoom in the shape gets larger.
The way you need to load data into ArcMap to produce an animation isn't entirely straightforward. There may be other ways to do this, but the way I know of is to generate a NetCDF file. This file contains a 3D matrix of layer data, where the layers are separated through time. Because you generate a matrix, you are effectively placing a raster image over the map. Thus if you want to cover a large area, each matrix becomes large, and that is multiplied by the number of time slices you wish to animate over.
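For illustration, here is a minimal sketch of building such a file with Python's netCDF4 library (all names, grid sizes, and the random data are placeholders; the CF-style units metadata is what time-aware tools generally expect):

```python
import numpy as np
from netCDF4 import Dataset  # pip install netCDF4

# Build a small time-aware NetCDF file: a 3D matrix of raster slices,
# one 100x100 grid per time step. Everything here is illustrative.
with Dataset("animation.nc", "w", format="NETCDF4") as ds:
    ds.createDimension("time", None)  # unlimited: one slice per time step
    ds.createDimension("lat", 100)
    ds.createDimension("lon", 100)

    times = ds.createVariable("time", "f8", ("time",))
    lats = ds.createVariable("lat", "f4", ("lat",))
    lons = ds.createVariable("lon", "f4", ("lon",))
    value = ds.createVariable("value", "f4", ("time", "lat", "lon"))

    # CF-style metadata so the time dimension is recognizable as time.
    times.units = "days since 2010-01-01 00:00:00"
    lats.units, lons.units = "degrees_north", "degrees_east"

    lats[:] = np.linspace(40.0, 41.0, 100)
    lons[:] = np.linspace(-3.0, -2.0, 100)
    for t in range(10):  # ten time slices to animate over
        times[t] = t
        value[t, :, :] = np.random.rand(100, 100)  # placeholder data
```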
Once you have a NetCDF file containing your data, however, getting ArcMap to animate it and produce, say, an .avi file is pretty simple.
To get you started, you could try loading some of the example NetCDF datasets into ArcMap to see how/if they work.
Hope that helps.

The upcoming v10 will have better time-aware capabilities, which will allow for animation.

Related

GODOT: What is an efficient calculation for the AABB of a simple 3D model from a camera's view

I am attempting to come up with a quick and efficient means of translating a 3D mesh into a projected AABB. In the end, I would like to accomplish something similar to Figure 1, wherein only the area of the screen covered by the cube is located inside the bounding box highlighted in red. (If it is at all possible, getting the area as small as possible, highlighted in blue, would increase efficiency down the road.)
Figure 1. https://i.imgur.com/pd0E20C.png
Currently, I have tried:
Calculating the point position on the screen using camera.unproject_position(). This failed largely due to my inability to wrap my head around the pixel positions trending towards infinity. I understand it has something to do with the tangent function, but frankly, it is too late for my brain to function anymore.
Getting the area of collision between the view frustum and the AABB of the mesh instance. This method seems convoluted, and to get the result in a usable format I would need to project it back into 2D coordinates anyway.
Using the MeshInstance VisualInstance to create a texture wherein a pixel is white if it contains the mesh instance, and black otherwise. Visual instances in general just baffle me, and I did not think it would be efficient to have another viewport just to output this texture.
What I am looking for:
An output that can be passed to a shader, telling it where to perform certain calculations. Right now this is set up to use a bounding box, but it could easily be rewritten to use a texture instead. It could also be rewritten to use polygons, but I am trying to keep the calculations in the shader to a minimum.
Certain solutions I have tried before have worked, slightly, but this must be robust. The camera interfacing with the 3D object will be able to move completely around and through it, meaning at times the view will be completely surrounded by the 3D model, with points both in front of and behind the camera.
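For reference, the kind of robust computation I am after is something like this corner-projection sketch (Python/numpy for clarity; the matrix convention and names are my assumptions — in Godot the equivalent would combine camera.unproject_position() with a behind-camera test such as is_position_behind()). The key point is that box edges crossing the near plane get clipped before the perspective divide, which is exactly what stops pixel positions from trending towards infinity:

```python
import numpy as np

def screen_aabb(corners, view_proj, width, height, eps=1e-4):
    """Screen-space bounding rect of a world-space box.

    corners: (8, 3) array, ordered so the index bits encode the
    (x, y, z) min/max choice. view_proj: assumed 4x4 projection @ view
    matrix, column-vector convention. Edges crossing the near plane are
    clipped before the perspective divide -- projecting a point behind
    the camera directly is what blows up towards infinity.
    """
    h = np.hstack([corners, np.ones((8, 1))]) @ view_proj.T  # clip space
    # The 12 box edges: corner indices that differ in exactly one bit.
    edges = [(a, b) for a in range(8) for b in range(a + 1, 8)
             if bin(a ^ b).count("1") == 1]
    pts = []
    for a, b in edges:
        pa, pb = h[a], h[b]
        ina, inb = pa[3] > eps, pb[3] > eps
        if not ina and not inb:
            continue                     # edge fully behind the camera
        if ina != inb:                   # clip against the near plane
            t = (eps - pa[3]) / (pb[3] - pa[3])
            pts.append(pa + t * (pb - pa))
        pts.extend(p for p, ok in ((pa, ina), (pb, inb)) if ok)
    if not pts:
        return None                      # box is entirely behind the camera
    ndc = np.array([(p[0] / p[3], p[1] / p[3]) for p in pts])
    x = (ndc[:, 0] * 0.5 + 0.5) * width
    y = (1.0 - (ndc[:, 1] * 0.5 + 0.5)) * height  # y flips in screen space
    return (x.min(), y.min()), (x.max(), y.max())
```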
Thank you for any help you can provide.
I will try my best to update this post with information if needed.

How to generate wireframe from point clouds on top of observed objects

I am trying to generate a wireframe on top of objects after generating the point clouds. How can I get wireframes similar to the ones generated in the image?
I am able to run ORB-SLAM2, generate point clouds, and save them. I am even able to generate a wireframe from the .pcd files using the Point Cloud Library.
However, I am looking for results such as the ones shown in this picture.
How should I approach this?
The target wireframe image
ORB-SLAM2 is at its heart just a sparse feature-based SLAM. What you want can't be achieved with that library alone; furthermore, the image you give as an example is reprojecting a CAD mesh into the image. The only way you can get results like this is by having a 3D mesh of the object before you run your SLAM, and localising said mesh in the scene (there is a vast literature on model-based SLAM, which I think is the best place for you to look). The main idea in that case would be to match elements of the 3D mesh to elements in the image (whether those are keypoints or some other form of features), and use them either in your cost function or in some PnP-like scheme.
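To give a flavour of the PnP route: once you have 3D points on your mesh matched to 2D points in the image, recovering the pose and reprojecting the wireframe is only a few OpenCV calls. A self-contained Python sketch (the cube model, intrinsics, and ground-truth pose are made up so the example runs; in a real pipeline the 2D points come from your mesh-to-image matching):

```python
import numpy as np
import cv2

# Known 3D model: the 8 corners of a unit cube in the object frame.
model = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                 dtype=np.float32)

# Pinhole intrinsics (fx, fy, cx, cy) -- assumed calibrated beforehand.
K = np.array([[800, 0, 320],
              [0, 800, 240],
              [0, 0, 1]], dtype=np.float32)

# Synthesize 2D detections from a ground-truth pose, purely so the
# sketch is runnable; normally these come from feature matching.
rvec_gt = np.array([[0.2], [0.4], [0.1]], dtype=np.float32)
tvec_gt = np.array([[0.0], [0.0], [5.0]], dtype=np.float32)
image_pts, _ = cv2.projectPoints(model, rvec_gt, tvec_gt, K, None)

# PnP: recover the object pose in the camera frame from 3D-2D matches.
ok, rvec, tvec = cv2.solvePnP(model, image_pts, K, None)

# Reproject the mesh edges with the estimated pose to draw the wireframe.
proj, _ = cv2.projectPoints(model, rvec, tvec, K, None)
proj = proj.reshape(-1, 2).astype(int)
canvas = np.zeros((480, 640, 3), np.uint8)
edges = [(a, b) for a in range(8) for b in range(a + 1, 8)
         if bin(a ^ b).count("1") == 1]  # cube edges differ in one index bit
for a, b in edges:
    cv2.line(canvas, tuple(proj[a]), tuple(proj[b]), (0, 255, 0), 1)
cv2.imwrite("wireframe_overlay.png", canvas)
```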

Google Maps lags with a large number of markers

I apologize in advance for my English.
My application contains more than 5000 coordinates.
These coordinates point to objects all over the world.
The problem is that I add all the coordinates (markers) to the map at once, and the map lags badly because of this!
I get 5-6 FPS when scrolling the map.
How can I add only the markers near the current location (the camera), and, when I scroll the map, remove those markers and add new ones for the new camera location?
I know there is the idleAtCameraPosition callback, but how do I get the list of coordinates from my array (a MutableArray) that fall within the camera's view? How do I keep track of it?
I cannot figure this out. Has one of you already done something like this in your project?
I hope you understand what I am trying to convey.
You might want to check Marker Clustering:
By clustering your markers, you can put a large number of markers on a map without making the map hard to read. The marker clustering utility helps you manage multiple markers at different zoom levels.
When a user views the map at a high zoom level, the individual markers show on the map. When the user zooms out, the markers gather together into clusters, to make viewing the map easier.
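The utility library implements this for you, but the underlying idea is compact. A rough, platform-agnostic sketch of grid-based clustering in Python (illustrative only; none of these names come from the Maps SDK):

```python
import math
from collections import defaultdict

def cluster_markers(coords, zoom, tile_size=256):
    """Group (lat, lng) pairs into grid cells sized to the current zoom.

    At low zoom, many markers share a cell and collapse into one
    cluster; at high zoom, most markers get their own cell. This is the
    gist of what the marker-clustering utility does internally.
    """
    world = tile_size * (2 ** zoom)   # world size in pixels at this zoom
    cells = defaultdict(list)
    for lat, lng in coords:
        # Web-Mercator projection from lat/lng to world pixel coordinates.
        x = (lng + 180.0) / 360.0 * world
        s = math.sin(math.radians(lat))
        y = (0.5 - math.log((1 + s) / (1 - s)) / (4 * math.pi)) * world
        cells[(int(x // tile_size), int(y // tile_size))].append((lat, lng))
    # One cluster per occupied cell: centroid plus member count.
    return [(sum(p[0] for p in v) / len(v),
             sum(p[1] for p in v) / len(v),
             len(v)) for v in cells.values()]
```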
For more insight into loading too many markers, try reading the article Too Many Markers!
Some applications are required to display a large number of locations or markers. Naively plotting thousands of markers on a map can quickly lead to a degraded user experience. Too many markers on the map cause both visual overload and sluggish interaction with the map. To overcome this poor performance, the information displayed on the map needs to be simplified.
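And if you still want the load/unload-by-viewport behaviour described in the question, the heart of it is just a bounds test run whenever the camera goes idle. A hypothetical helper (naive version that ignores viewports crossing the antimeridian):

```python
def markers_in_view(coords, bounds):
    """Keep only coordinates inside the visible region.

    bounds = (min_lat, min_lng, max_lat, max_lng), which you would read
    from the map's visible region inside the camera-idle callback.
    Hypothetical helper, not an SDK function.
    """
    min_lat, min_lng, max_lat, max_lng = bounds
    return [(lat, lng) for lat, lng in coords
            if min_lat <= lat <= max_lat and min_lng <= lng <= max_lng]
```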
Hope this helps!

How to detect an image between shapes from camera

I've been searching around the web for how to do this, and I know that it needs to be done with OpenCV. The problem is that all the tutorials and examples I find are about detecting separate shapes or template matching.
What I need is a way to detect the contents between 3 circles (which can be a photo or something else). From what I've read, it's not too difficult to find the circles with the camera using contours, but how do I extract what is between them? The circles work like a pattern on the image to grab what is "inside the pattern".
Do I need to use the contours of each circle and measure the distance between them to grab my contents? If so, what if the image is a bit rotated/distorted on the camera?
I'm using Xamarin.iOS for this but from what I already saw, I believe I need to go native for this and any Objective C example is welcome too.
EDIT
Imagining that the image captured by the camera is this:
What I want is to match the 3 circles and get the following part of the image as result:
Since the images come from the camera, they can be rotated or scaled up/down.
The warpAffine function will let you map the desired area of the source image to a destination image, performing cropping, rotation and scaling in a single go.
Talking about rotation and scaling seems to indicate that you want to extract a rectangle of a given aspect ratio, hence perform a similarity transform. To define such a transform, three points are too many; two suffice. The construction of the affine matrix is a little tricky.
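Here is one way to do that construction, sketched in Python with OpenCV (the point coordinates and file names are placeholders; two source/destination pairs fully determine the similarity, computed here via complex-number division):

```python
import numpy as np
import cv2

def similarity_from_two_points(src0, src1, dst0, dst1):
    """Build the 2x3 similarity (rotation + uniform scale + translation)
    mapping src0 -> dst0 and src1 -> dst1. The complex ratio
    (dst1 - dst0) / (src1 - src0) encodes rotation and scale as a + bi."""
    s0, s1 = complex(*src0), complex(*src1)
    d0, d1 = complex(*dst0), complex(*dst1)
    r = (d1 - d0) / (s1 - s0)             # a + bi: scale and angle
    a, b = r.real, r.imag
    # Translation chosen so that src0 lands exactly on dst0.
    tx = d0.real - (a * s0.real - b * s0.imag)
    ty = d0.imag - (b * s0.real + a * s0.imag)
    return np.float32([[a, -b, tx],
                       [b,  a, ty]])

# Illustrative use: two detected circle centres (assumed found via
# contours or HoughCircles) mapped to two corners of a 300x200 patch.
img = cv2.imread("captured.jpg")          # hypothetical input image
M = similarity_from_two_points((120, 340), (420, 355), (0, 0), (300, 0))
patch = cv2.warpAffine(img, M, (300, 200))
cv2.imwrite("extracted.png", patch)
```

With three or more point pairs, recent OpenCV versions can instead fit the same similarity in a least-squares sense with cv2.estimateAffinePartial2D.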

3D Transformations on a Quartz2D path — Drawing Application

I'm in the planning stage of writing a Cocoa drawing application (for Mac, not iOS), and I'm trying to discern whether one of my features is technically possible via any of the drawing frameworks. Any help or relevant information would be greatly appreciated.
The idea is to apply a 3D transformation to an object drawn with Quartz2D. I've considered capturing the relevant portion of the canvas View (where objects are drawn) as an image and sending it to Core Animation, but that doesn't seem like the best option. Since this is a drawing application, it's less about 3D animation than it is about the transformed shape. This solution is also less than ideal because I assume that if the 2D object were a vector path rather than a bitmap image, I would have to rasterize it to apply such a transformation. The ideal implementation would enable the user to dynamically rotate a flat object in 3 dimensions until she found a suitable orientation, lock in this transformation, and still be able to manually adjust the path's vector points.
Is this feasible? Would it require working directly with OpenGL? Help of any kind is most welcome.
Thank you!
Seems to me that anything you'd do with a 3D transform, you should be able to do with multiple affine transforms. See UIBezierPath's -applyTransform method (for a Mac app, NSBezierPath's -transformUsingAffineTransform: is the equivalent).
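To illustrate why that works for this use case: under an orthographic projection, a 3D rotation applied to a planar path collapses to a single 2D affine map, so the path can stay a vector path and remain editable afterwards. A small numpy sketch of flattening the rotation (a true perspective projection would not reduce to an affine transform like this):

```python
import numpy as np

def flatten_rotation(theta_x, theta_y):
    """Compose 3D rotations about the x and y axes, apply them to the
    z = 0 plane, and drop z: the result is a plain 2D linear map that
    can be handed to any affine-transform API."""
    cx, sx = np.cos(theta_x), np.sin(theta_x)
    cy, sy = np.cos(theta_y), np.sin(theta_y)
    rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    r = ry @ rx
    # Path points live in the z = 0 plane, so only the first two columns
    # matter; orthographic projection keeps only the first two rows.
    return r[:2, :2]

# A unit-square path rotated 30 degrees about the x axis: y shrinks by
# cos(30 deg) while x is untouched -- exactly what an affine map encodes.
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
print(square @ flatten_rotation(np.radians(30), 0.0).T)
```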