I have a raster image with multiple polyline feature classes over it. The lines do not overlap, but they run in multiple different orientations. For every pixel in the raster, I want to calculate the length of the line passing through that pixel, so the result would be a float raster with values from zero to √2 times the cell size. What's the best way to do this? I'm using ArcGIS Pro with an Advanced license.
You can have a look at the answers to a similar question here (they use R, but you have access to that too):
https://gis.stackexchange.com/questions/119993/convert-line-shapefile-to-raster-value-total-length-of-lines-within-cell/120175
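If you want to prototype the computation outside ArcGIS Pro first, here is a minimal sketch of the idea in Python with geopandas/shapely (the file name, cell size, and grid origin are assumptions): intersect each cell rectangle with the merged lines and take the length of the intersection.

import numpy as np
import geopandas as gpd
from shapely.geometry import box

lines = gpd.read_file("lines.shp")        # hypothetical input file
cell = 10.0                               # raster cell size (assumption)
xmin, ymin, xmax, ymax = lines.total_bounds
ncols = int(np.ceil((xmax - xmin) / cell))
nrows = int(np.ceil((ymax - ymin) / cell))
merged = lines.geometry.unary_union       # one geometry for faster clipping
length = np.zeros((nrows, ncols), dtype=float)
for r in range(nrows):
    for c in range(ncols):
        cell_poly = box(xmin + c * cell, ymax - (r + 1) * cell,
                        xmin + (c + 1) * cell, ymax - r * cell)
        # total length of all line pieces falling inside this cell
        length[r, c] = merged.intersection(cell_poly).length

One caveat: a single straight line through a cell is at most √2 times the cell size, but if two different lines cross the same cell, the summed length can exceed that bound.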
I've successfully calibrated my camera, and I can get the dimensions of an XLD contour in world coordinates with ContourToWorldPlaneXld followed by HeightWidthRatioXld. This gives me the measurements of a contour extracted from a shape.
Now I need to convert a value entered by the user in mm (for example 0.1 mm) into the corresponding number of pixels, for example to draw a line.
I need the value in pixels as per the request. I tried looking around in the HALCON documentation, but I didn't find what I was looking for.
I also read this answer, but it's not exactly what I'm looking for.
I'm using Halcon Progress 21.11.
Edit: a possible solution could be to obtain the dimensions before converting them to the world plane and then compute something like pixels/world units, but I would prefer a better method if one exists.
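A rough sketch of that idea, assuming MVTec's Python interface (in HDevelop the operator is image_points_to_world_plane; the exact binding signature and the reference pixel are assumptions, and cam_param/world_pose would come from the existing calibration):

import halcon as ha  # assumption: MVTec's Python interface is available in 21.11

# Project two image points one pixel apart onto the calibrated world plane
# and use their world distance as the local mm-per-pixel scale.
row, col = 240.0, 320.0   # reference pixel near the region of interest (assumption)
x, y = ha.image_points_to_world_plane(
    cam_param, world_pose, [row, row], [col, col + 1.0], 'mm')
mm_per_pixel = ((x[1] - x[0]) ** 2 + (y[1] - y[0]) ** 2) ** 0.5

user_mm = 0.1                          # value entered by the user, in mm
length_px = user_mm / mm_per_pixel     # the same length expressed in pixels

Since lens distortion makes the scale vary across the image, the two sample points should be close to where the line will be drawn.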
I want to create a mathematical model for a 2D bin packing optimization problem. I am not quite sure whether it is bin packing; it may be called strip packing. Anyway, let me introduce the problem.
1- There are several groups of boxes to be placed on strips (see item 3).
2- Each group contains a number of boxes that all have the same width and the same height. For example:
group A
100 boxes with width = 80cm and height = 120cm
group B
250 boxes with width = 150cm and height = 200cm
etc.
3- There is an unlimited number of equal-sized strips with fixed width and height, for example:
an infinite number of strips with width = 800cm and height = 1400cm
4- The main goal is to pack these boxes into the minimum number of strips. However, there are some restrictions on how this is done.
5- If we think of a strip as a 2D plane of rows and columns, each column must contain boxes of a single fixed width. For example, if (column 0, row 0) holds a box with w=100, h=80, then (column 0, row 1) must also hold a box with w=100, h=80. Differently sized boxes are not allowed in the same column. This rule does not apply to rows: each row can contain differently sized boxes without restriction.
6- It is not important to fill the whole strip. We want to fill strips with minimum empty space between boxes. The highest column defines a stop line across the other columns, and we calculate the loss value as the ratio of empty space over the whole strip area (see the sketch after this list).
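To make items 5 and 6 concrete, here is a small illustration in Python (all numbers hypothetical; I read the loss as the empty area below the stop line divided by the whole strip area, which is one possible reading of item 6):

strip_w, strip_h = 800, 1400            # strip size from item 3

# Item 5: a column holds boxes of one group only, stacked vertically,
# so a column of group A (w=80, h=120) holds floor(1400 / 120) = 11 boxes.
groups = {"A": (80, 120), "B": (150, 200)}
for name, (w, h) in groups.items():
    print(name, "boxes per full column:", strip_h // h)

# Item 6: suppose one strip received one column of A and two columns of B.
columns = [(80, 11 * 120), (150, 7 * 200), (150, 7 * 200)]  # (width, filled height)
stop_line = max(h for _, h in columns)       # height of the highest column
box_area = sum(w * h for w, h in columns)
loss = (strip_w * stop_line - box_area) / (strip_w * strip_h)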
I tried to implement this optimization problem with the GLPK linear programming tool, using the mathematical model from the paper by C. Blum and V. Schmid, "Solving the 2D bin packing problem by means of a hybrid evolutionary algorithm".
This mathematical model works great in GLPK. However, it is designed for packing boxes at free x,y coordinates, whereas item 5 requires a fixed-width column arrangement.
Can you please help me modify the mathematical model so that item 5 can be implemented?
Thank you all,
I have the task of simulating a camera with a full well capacity of 10,000 photons per sensor element in NumPy. My first idea was to do it like this:
import numpy as np

camera = np.random.normal(0.0, 1 / 10000, np.shape(img))
img_with_noise = img + camera
but it hardly shows an effect.
Does anyone have an idea how to do it?
As I interpret your question: if each physical pixel of the sensor has a 10,000-photon limit, that corresponds to the brightest a digital pixel can be in your image. Similarly, 0 incident photons produce the darkest pixels of the image.
You have to create a map from the physical sensor to the digital image. For the sake of simplicity, let's say we work with a grayscale image.
Your first task is to fix the colour bit depth of the image. That is to say, is your image an 8-bit colour image? (This is usually the case.) If so, the brightest pixel has a brightness value of 255 (= 2^8 - 1 for 8 bits). The darkest pixel is always chosen to have the value 0.
So you'd have to map from the range 0 --> 10,000 (sensor) to 0 --> 255 (image). The most natural idea would be a linear map (i.e. every pixel of the image is obtained by the same multiplicative factor from every pixel of the sensor), but to render the brightness produced by n incident photons in a way that matches human perception, different transfer functions are often used.
A transfer function, in simplified terms, is just the mathematical function performing this map; logarithmic TFs are quite common.
Also, since it seems like you're generating noise, it is unwise and conceptually wrong to add camera itself to the image img. What you should do is fix a noise threshold first: the maximum number of photons by which noise can perturb a pixel reading. Then generate random numbers (according to some distribution, if required) in the range 0 --> noise_threshold. Finally, use the map created earlier to add this noise to the image array.
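A minimal sketch of that procedure in NumPy (the 8-bit depth, the uniform noise distribution, and the threshold value are assumptions):

import numpy as np

FULL_WELL = 10_000        # photons at sensor saturation (from the question)
MAX_VAL = 255             # 8-bit grayscale image (assumption)
NOISE_THRESHOLD = 200     # max photons of noise per pixel (illustrative assumption)

rng = np.random.default_rng()

def add_sensor_noise(img_8bit):
    # image values -> photon counts via the linear map described above
    photons = img_8bit.astype(float) / MAX_VAL * FULL_WELL
    # noise in photon units, here uniform in [0, NOISE_THRESHOLD)
    photons += rng.uniform(0.0, NOISE_THRESHOLD, size=img_8bit.shape)
    # clip at the full-well capacity and map back to image values
    photons = np.clip(photons, 0.0, FULL_WELL)
    return np.round(photons / FULL_WELL * MAX_VAL).astype(np.uint8)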
Hope this helps and is in tune with what you wish to do. Cheers!
I have a large DICOM MRI dataset for several patients. For each patient there is a folder containing many 2D slices as .dcm files, and each patient's data has different sizes. For example:
patient1: PixelSpacing=0.8mm,0.8mm, SliceThickness=2mm, SpacingBetweenSlices=1mm, 400x400 pixels
patient2: PixelSpacing=0.625mm,0.625mm, SliceThickness=2.4mm, SpacingBetweenSlices=1mm, 512x512 pixels
So my question is: how can I resample all of them to PixelSpacing = 1mm, 1mm and SliceThickness = 1mm?
Thanks.
These are two different questions:
About harmonizing positions and pixel spacing, these links will be helpful:
Finding the coordinates (mm) of identical slice locations for two MR datasets acquired in the same scanning session
Interpolation between two images with different pixelsize
http://nipy.org/nibabel/dicom/dicom_orientation.html
Basically, you want to build your target volume and interpolate each of its pixels from the nearest neighbors in the source volumes.
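For example, with SimpleITK you can read each patient's series and resample it onto a 1mm grid; a minimal sketch (the folder path is hypothetical, and linear interpolation is one possible choice):

import SimpleITK as sitk

reader = sitk.ImageSeriesReader()
reader.SetFileNames(
    sitk.ImageSeriesReader.GetGDCMSeriesFileNames("patient1/"))  # hypothetical folder
vol = reader.Execute()

new_spacing = (1.0, 1.0, 1.0)                   # target: 1mm isotropic
old_size, old_spacing = vol.GetSize(), vol.GetSpacing()
new_size = [int(round(sz * sp / ns))
            for sz, sp, ns in zip(old_size, old_spacing, new_spacing)]
resampled = sitk.Resample(vol, new_size, sitk.Transform(),   # identity transform
                          sitk.sitkLinear, vol.GetOrigin(),
                          new_spacing, vol.GetDirection(),
                          0.0, vol.GetPixelID())

Note that this only resamples the sampled grid; as explained below, it cannot undo the 2-2.4mm acquisition thickness.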
About modifying the slice thickness: if you really want to modify the slice thickness rather than the slice distance, I see no way to do this correctly with the source data you have. The thickness specifies how much of the raw data width was used to calculate the values for a slice in your stack (e.g. by averaging or integrating). With a slice thickness of 2 or 2.4mm in the source volumes, you will not be able to reconstruct the grey values at a thickness of 1mm. If your question was referring to slice distance rather than slice thickness, answer 1 applies.
I'm stuck with the following problem, and I hope I can explain it coherently.
So, I have a number (about 10) of discrete positions in a coordinate system.
Now, I want to analyse data from a program where users could label each point as somethingA or somethingB.
I extracted the data points for each class, so I have about 60 points for the somethingA class and slightly fewer for the other class. One class stands for good points and one for bad points. I want to find the positions that have the most good/bad labels. I do that with machine learning algorithms; here I just want to visualize the result with plots.
I now want to plot those points, so I make one plot per class. But since every point occurs at least once in each class, the two plots would look exactly the same.
However, the number of occurrences is distributed differently across the positions. Point A might have 20 occurrences in class A and only 1 in class B, yet both plots would look the same.
So, my question is: how can I take the number of occurrences per point into account when plotting scatter plots in Matplotlib?
Either with different colors (like a heatmap?), maybe with a cool legend.
Or with different sizes (e.g. a higher count = a bigger circle).
Any help would be appreciated!
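For concreteness, something like the sketch below is what I have in mind (hypothetical data; I assume scatter accepts per-point arrays for s and c):

import numpy as np
import matplotlib.pyplot as plt
from collections import Counter

# hypothetical labelled points: (x, y) repeated once per occurrence in a class
points_class_a = [(0, 0), (0, 0), (1, 2), (1, 2), (1, 2), (3, 1)]
counts = Counter(points_class_a)
xy = np.array(list(counts.keys()), dtype=float)
n = np.array(list(counts.values()), dtype=float)

fig, ax = plt.subplots()
sc = ax.scatter(xy[:, 0], xy[:, 1], s=40 * n, c=n, cmap="viridis")  # size and colour ~ count
fig.colorbar(sc, label="occurrences")
plt.show()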
I don't know if this helps you, but I once had a problem where I wanted a scatter plot to reflect both the positions and two variables attributed to the data points.
Passing my variables directly to the size and colour arguments of the scatter function, i.e. something like
ax.scatter(..., c=whatEverFunction, s=numberOfOccurences, ...)
did not work for me.
What I did instead was bin the values of the two variables I wanted to visualize, in my case the variable nodeMass and another variable:
# first bin: select points whose variableOne falls in (lowerBound1, upperBound1)
for i in range(Number):
    mask[i] = False
    if lowerBound1 < variableOne[i] < upperBound1:
        mask[i] = True & pmask[i]
if len(positionX[mask]) > 0:
    ax.scatter(positionX[mask], positionY[mask], positionZ[mask],
               c='#424242', s=10, edgecolors='none')

# second bin: same idea with its own bounds, colour and marker size
for i in range(Number):
    mask[i] = False
    if lowerBound2 < variableOne[i] < upperBound2:
        mask[i] = True & pmask[i]
if len(positionX[mask]) > 0:
    ax.scatter(positionX[mask], positionY[mask], positionZ[mask],
               c='#9E0050', s=25, edgecolors='none')
I know it is not very elegant, but it worked for me. I had to write as many for loops as I had bins in my variables. With the if checks and the masks I could at least avoid redundant or unreadable plots.
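If you want to avoid writing one loop per bin, a more compact variant of the same idea might look like this (a sketch; the bin bounds, colours and sizes mirror the snippet above):

import numpy as np

bins = [(lowerBound1, upperBound1, '#424242', 10),
        (lowerBound2, upperBound2, '#9E0050', 25)]
var = np.asarray(variableOne)
for lo, hi, colour, size in bins:
    # vectorised mask: points of this bin that also pass the pre-mask
    mask = (lo < var) & (var < hi) & np.asarray(pmask, dtype=bool)
    if mask.any():
        ax.scatter(positionX[mask], positionY[mask], positionZ[mask],
                   c=colour, s=size, edgecolors='none')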