Problem with display when modifying edges of adjacent features - ArcGIS

I am using ArcGIS Pro 3 to modify an existing layer to represent community boundaries as they existed in 1935. Everything displays just fine when I select the polygon I want to edit, but when I select Edges in the Modify Features pane, the display changes to show wide magenta lines over every shape in the layer, so I can't see anything. Hopefully a couple of pictures will clearly illustrate my dilemma.
I have talked to other GIS users at work and searched both ESRI's site and Google, but I don't seem to be able to articulate the problem well enough to find any answers, or anyone else having the same problem.

Related

Make 3D figure of 2D images "projecting information" onto each other

Is there a way to make a z-stack of 2-D images, viewed isometrically in 3-D, with points in each 2-D image projecting downwards to the next slice? I am certain there is a technical term for this, but I just don't have the vocabulary to find the most pertinent answer. Would someone be able to point me in the right direction?
Below, I've drawn an "idea" of what this looks like. I'd love to know if this is possible without re-inventing wheels for matplotlib or other Python plotting libraries.
The original question asked for a way to do this in Python. After many months of searching, I found a way to do it in TikZ. I cannot consider this my original work; it is largely based on Pascal Seppecher's interaction diagram found here.
To reconstitute my question above, one can use the above template to define:

- Agents of different shapes, with specified fills
- The frame (plane) in which they reside
- Flows of directed edges that communicate how agents interact with each other in each plane
- Inter-plane interaction flows
https://texample.net/tikz/examples/interaction-diagram/
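
For anyone who still wants the pure-Python route the question originally asked about, below is a minimal matplotlib sketch of the same idea; all of the data in it (grid size, z-levels, points) is synthetic placeholder data, not part of the original answer:

```python
# Minimal sketch: a z-stack of 2-D slices in an isometric-ish 3-D view,
# with points in each slice "projected" down to the next slice.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (needed on older matplotlib)

rng = np.random.default_rng(0)
fig = plt.figure()
ax = fig.add_subplot(projection="3d")

n = 8                                    # grid resolution of each slice
X, Y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
z_levels = [0.0, 1.0, 2.0]               # one z height per slice
points = rng.random((4, 2))              # shared (x, y) points to project

for z in z_levels:
    # Each slice is a semi-transparent plane at height z
    ax.plot_surface(X, Y, np.full_like(X, z), alpha=0.2, color="tab:blue")
    ax.scatter(points[:, 0], points[:, 1], z, color="tab:red")

# Dashed verticals carry each point down to the next slice
for x, y in points:
    for z0, z1 in zip(z_levels[:-1], z_levels[1:]):
        ax.plot([x, x], [y, y], [z0, z1], "k--", linewidth=0.8)

ax.view_init(elev=20, azim=-60)          # roughly isometric viewing angle
ax.set_zlabel("slice")
plt.show()
```

The planes play the role of the TikZ frames and the dashed verticals the inter-plane flows, so this covers the same four ingredients listed above.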

How to avoid odd linear artefacts when sculpting

I am currently learning how to sculpt in Blender, working on my own projects after completing BlenderGuru's Beginner & Intermediate classes and some of Grant Abbitt's videos, with pleasing results. I am trying to sculpt a plasma pistol with a skull on it, which can be seen in the reference photo that I have provided.
However, when I sculpt, I get these really odd linear artefacts (see picture below, circled in black). I added a Subsurf Modifier to the primitive UV Sphere, with the Viewport and Render values set to 4, so it is a fairly fine mesh. However, these artefacts still occur.
I assume it is due to the stretching of the polygons when I grab the sphere with the Snake Hook tool and deform it to encompass the frontal part of the skull.
EDIT: Whilst writing this I went back and switched on Dynamic Topology with Relative Detail selected.
It appears that I am no longer getting the issues that I was getting last night with the linear artefacts.
Can I confirm that these problems are a result of having incorrect Dynamic Topology settings for the Snake Hook tool (I was using Constant Detail instead of Relative Detail), or is this being caused by another issue?
Also, any advice on avoiding common pitfalls when choosing the settings in sculpting would be most appreciated.
I will leave this question up in case anyone has a similar problem and it can be resolved by reading this.
Sculpt, showing lineations
Experimenting with Dynamic Topology
In Object Mode, does the object have uniform X, Y, & Z scaling? If not, you can apply the scale from the object menu.
Object ‣ Apply ‣ Scale / Rotation & Scale
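
If you prefer to do the same check from Blender's Python console, here is a minimal sketch (it assumes the sculpt object is the active object):

```python
# Inspect the object's scale and apply it; this is the scripted
# equivalent of Object ‣ Apply ‣ Scale in the 3D Viewport.
import bpy

obj = bpy.context.active_object
print(obj.scale)  # anything other than (1, 1, 1) means brushes act unevenly

bpy.ops.object.transform_apply(location=False, rotation=False, scale=True)
```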

Is There a Quick, Efficient Way to Add Large Numbers of Labels in Either ArcGIS or QGIS?

In 2007, when I was young and foolish and before I knew about OpenStreetMap, I started an urban historical map project. I was working in Illustrator, it was going to be an interactive Flash piece, and my process was to draw the maps first, with the thought that I'd label some, but not all, of the streets later on.
As we know, Flash began to die around 2010, and I put the project away for a number of years. I picked it up again a couple of years ago and continued my earlier practice of just drawing streets and water features, this time with the intention of making it a conventional web map. Now I'm pretty close to finishing the drawing of a five-layer (1871, 1903, 1932, 1952, and 2016) historical map of a medium-sized city, though it still lacks labels.
My problem now is how to add large numbers of labels, many of them duplicates. There could be as many as 10,000 for all five layers, though as a practical matter I may have to settle for a smallish fraction of that number. Based on web searches I gather my workflow is unusual and that mine is therefore an unusual problem.
I've exported my maps and brought them into QGIS and played with the software a little. The process of adding labels to objects doesn't seem terribly efficient or user-friendly, but that's probably due to my unfamiliarity with the program.
So my question is this: Are there any tricks to speed up the painful process of adding large numbers of duplicate labels in either QGIS or ArcGIS? Since so many of the streets exist in all five layers, functionality like the ability to select multiple objects in different layers and edit their attributes simultaneously in the Attribute Table would be a godsend. (Doesn't seem possible.) So would the ability to copy the attributes from one object and paste them onto other objects. Or the ability to do either of these things in Illustrator via a plugin and then export the data along with the shapes to a GIS program.
Thanks for your help!
If I understand the issue correctly, I think there are several different solutions.
Typically, for a spatial layer in ArcGIS or QGIS, you define how to label all features in a layer once, by setting up a label scheme that applies across all features, whether there is one or one million. This assumes that each feature in the layer has one or more attributes in the layer's associated table.
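
If it helps, that one-scheme-per-layer idea can also be scripted from the QGIS Python console. A minimal sketch, assuming each street feature carries a "name" attribute to label by (the field name here is hypothetical):

```python
# Minimal PyQGIS sketch: label every feature in the active layer from a
# single attribute column. Run in the QGIS Python console, where `iface`
# is predefined.
from qgis.core import QgsPalLayerSettings, QgsVectorLayerSimpleLabeling

layer = iface.activeLayer()       # e.g. the 1871 streets layer
settings = QgsPalLayerSettings()
settings.fieldName = "name"       # hypothetical column holding street names

layer.setLabeling(QgsVectorLayerSimpleLabeling(settings))
layer.setLabelsEnabled(True)
layer.triggerRepaint()
```

Because the labels come from attributes, the "10,000 duplicate labels" problem reduces to filling in one name column per layer; a street shared across all five years gets its name typed once per layer rather than once per label.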
How are you converting the Illustrator vectors to a spatial layer? DXF?
You will likely have better/faster responses to this question by posting it to the GIS Stack exchange. https://gis.stackexchange.com/

Full Page Text Recognition Dataset Creation

I have been reading OCR papers such as this one (https://arxiv.org/pdf/1704.08628.pdf), and I am having trouble finding out how these datasets are actually generated.
In the linked paper, they use a regressor to predict the start location (a point) and the height of a line of text. Then, based on that starting point and height, a second network performs OCR and end-of-line detection. I realize this is a very simplified explanation, but it follows that their dataset consists (at least in part) of full-page text images annotated with where each line begins, plus a transcription of the text on each line. Alternatively, they could have just used the lower-left point of each bounding box as the start point and the height of the box as the line height (avoiding the need to re-annotate data that was previously prepared using bounding boxes).
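
To make that concrete, here is a minimal sketch of what one per-line ground-truth record could look like; the JSON layout, file names, and text are invented for illustration and are not the actual Maurdor or PAGE schema:

```python
# Hypothetical per-line ground truth: a start point, a line height, and a
# transcription for each line on the page, plus a helper that derives the
# same record from an existing bounding box (x, y, w, h) as described above.
import json

annotation = {
    "image": "page_0001.png",
    "lines": [
        {"start": [112, 380],   # (x, y) of the line's start point
         "height": 42,          # line height in pixels
         "text": "A first line of transcribed text."},
        {"start": [110, 438],
         "height": 40,
         "text": "A second line of transcribed text."},
    ],
}

with open("page_0001.json", "w") as f:
    json.dump(annotation, f, indent=2)

def line_from_bbox(x, y, w, h, text):
    """Reuse a top-left-anchored bounding box: the lower-left corner becomes
    the start point and the box height becomes the line height."""
    return {"start": [x, y + h], "height": h, "text": text}
```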
So how is a dataset like this actually created? Looking at other datasets, it seems like there is some software that can create XML files containing the ground truths relevant to each image; can someone point me in the right direction? I've been googling around and finding lots of tools for annotating text with sentiment etc., and other tools for annotating images for segmentation (for something like a YOLO network), but I'm coming up empty for the creation of something like the Maurdor dataset used in the linked paper.
Thank you
So after submitting this, the related-threads window showed me many threads that my googling did not turn up. This software, http://www.prima.cse.salford.ac.uk/tools, seems to be what I was looking for, but I would still love to hear other ideas.

3D objects are not shown in their regular shape at a distance

I am working on a game that was originally developed by someone else. The problem I am facing is that when the player (with the camera) starts running down the road, the buildings are not shown in their regular shape, and as we move forward (closer to the buildings) they regain their original shapes. Sometimes the buildings on either side of the road are not visible to the camera at all (empty space), and when we move closer a building suddenly pops into view. I think it may be a Unity3D settings problem (rendering, camera, or quality), perhaps done to increase performance on mobile devices.
Does anybody know what the issue may be or how to resolve it?
Any help will be appreciated. Thanks in advance.
This sounds like it's a problem with the available LODs for each building's 3D model.
Basically, 3D games work by having two or three different versions of each 3D model, with varying Levels Of Detail (LOD). For example, if you have a house model that uses 500 polygons, you'll probably have another two versions (e.g. 250 polys and 100 polys), which are used depending on the distance between the player and the object. The farther away the player is, the simpler the version used will be.
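
To illustrate the selection logic (an engine-agnostic sketch in Python, not Unity's actual API; the thresholds and model names are made up):

```python
# Pick a model version based on camera-to-object distance: the farther
# away, the lower-poly the version used.
LODS = [
    (0.0,  "house_500poly"),   # LOD0: full detail, used up close
    (30.0, "house_250poly"),   # LOD1: used beyond 30 units
    (80.0, "house_100poly"),   # LOD2: used beyond 80 units
]

def pick_lod(distance):
    """Return the model for the largest threshold the distance exceeds."""
    chosen = LODS[0][1]
    for threshold, model in LODS:
        if distance >= threshold:
            chosen = model
    return chosen

assert pick_lod(10.0) == "house_500poly"
assert pick_lod(45.0) == "house_250poly"
assert pick_lod(120.0) == "house_100poly"
```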
The issue occurs when developers use automatically generated LOD models, which can look distorted or not appear at all. Unity probably auto-generates them, but I'm unsure where you'll find the settings for this in Unity. However, I've seen 3D models on the Unity Asset Store offering different LODs, so Unity probably gives you the option to set your own. The simplest solution would be to increase the distance at which the LODs change, while the more involved solution would be to build custom lower-poly versions of the 3D models for larger distances.
I have resolved the problem. It was due to the LOD (Level of Detail) setup used for the objects (buildings) in Unity3D to improve performance on slower devices. LOD provides multiple levels of detail for an object, which you can adjust according to your needs. In my specific case, the buildings appeared suddenly because LOD1 had a different (wrong) position: for LOD1 the building was in the wrong place, while for LOD0 it was in the right place. So when the camera viewed from a distance it saw LOD1, which was misplaced, and hence it saw empty space at the expected position. When the camera came closer it saw LOD0, in which the building is in the right position, which made the building seem to appear suddenly.