Related

How to display 2D bar graph with number of counts on Y-axis and month/day on x-axis in LabVIEW?

I want to plot a 2D bar graph in LabVIEW showing the total number of counts on the Y-axis and the month/day (abbreviated) on the X-axis. How can I do it?
Drop an XY graph.
Go to the plot properties and set Bar Plots to have the style you want and Interpolation to be just points.
If you open the context help window and hover over the graph's terminal, you can see the data types it supports. You want the cluster which has a 1D array for the X values (a timestamp for a time within each specific day) and a 1D array for the Y values. Generate your data in that format and wire it into the graph.
Right click the X scale and select Formatting.... In the properties dialog, set the format to be absolute time and to only show the day and the month.
Run the VI and you should have your graph.
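LabVIEW itself is graphical, so there is no text code to show, but as a rough, language-neutral illustration of the data shape the graph expects (a cluster of one 1D array of timestamps and one 1D array of counts), here is a hypothetical Python sketch of the equivalent arrays; the dates and counts are made up:

# Hypothetical sketch (not LabVIEW) of the XY graph's input shape:
# one 1D array of per-day timestamps for X, one 1D array of counts for Y.
from datetime import datetime

x = [datetime(2024, 1, day).timestamp() for day in (1, 2, 3)]  # one timestamp per day
y = [12, 7, 19]                                                # total counts per day
# In LabVIEW, bundle these two 1D arrays into a cluster and wire it to the graph.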

I want to plot this table using stacked probability area plot

I want to make a plot for this table using a stacked probability area plot.
(images of the table omitted)
The resulting plot should look similar to this:
The value of i changes from 0 to 1 in steps of 0.1.
(example plot image omitted)
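For reference, a minimal matplotlib sketch of a stacked probability area plot; the series names and values below are placeholders, not the data from the omitted table, and i runs from 0 to 1 in steps of 0.1 as stated above:

import numpy as np
import matplotlib.pyplot as plt

i = np.arange(0, 1.01, 0.1)
a = 0.5 * (1 - i)           # placeholder probability series
b = np.full_like(i, 0.3)    # placeholder probability series
c = 1.0 - a - b             # rows sum to 1, so the areas fill to the top

plt.stackplot(i, a, b, c, labels=["A", "B", "C"])
plt.xlabel("i")
plt.ylabel("probability")
plt.legend(loc="upper left")
plt.show()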

How to adjust Pixel Spacing and Slice Thickness in DICOM data?

I have a large DICOM MRI dataset for several patients. For each patient there is a folder containing many 2D slices as .dcm files, and each patient's data has different dimensions. For example:
patient1: PixelSpacing=0.8mm,0.8mm, SliceThickness=2mm, SpacingBetweenSlices=1mm, 400x400 pixels
patient2: PixelSpacing=0.625mm,0.625mm, SliceThickness=2.4mm, SpacingBetweenSlices=1mm, 512x512 pixels
So my question is: how can I convert all of them to PixelSpacing = 1 mm, 1 mm and SliceThickness = 1 mm?
Thanks.
These are two different questions:
About harmonizing positions and pixel spacing, these links will be helpful:
Finding the coordinates (mm) of identical slice locations for two MR datasets acquired in the same scanning session
Interpolation between two images with different pixelsize
http://nipy.org/nibabel/dicom/dicom_orientation.html
Basically, you want to build your target volume and interpolate each of its pixels from the nearest neighbors in the source volumes.
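For example, a minimal sketch of that resampling using scipy.ndimage.zoom with linear interpolation (order=0 would give true nearest-neighbour). It assumes the slices are already stacked into a NumPy array; resample_to_isotropic is a hypothetical helper name:

from scipy.ndimage import zoom

def resample_to_isotropic(volume, spacing, target=(1.0, 1.0, 1.0)):
    # volume: NumPy array of shape (slices, rows, cols), stacked from the .dcm files
    # spacing: (slice_distance, row_spacing, col_spacing) in mm
    factors = [s / t for s, t in zip(spacing, target)]
    return zoom(volume, factors, order=1)  # order=1 -> trilinear interpolation

For patient1, spacing would be (1.0, 0.8, 0.8), since SpacingBetweenSlices is already 1 mm and only the in-plane spacing needs resampling.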
About modifying the slice thickness: If you really want to modify the slice thickness rather than the slice distance, I do not see any chance to do this correctly with the source data you have. This is because the thickness says what width of the raw data was used to calculate the values for a slice in your stack (e.g. by averaging or by calculating an integral). With a slice thickness of 2 mm or 2.4 mm in the source volumes, you will not be able to reconstruct the gray values with a thickness of 1 mm. If your question was referring to slice distance rather than slice thickness, answer 1 applies.

How to visualize (make plot) of regression output against categorical input variable? [closed]

I am doing linear regression with multiple variables. In my data I have n = 143 features and m = 13000 training examples. Some of my features are continuous (ordinal) variables (area, year, number of rooms), but I also have categorical variables (district, color, type). So far I have visualized some of my features against the predicted price. For example, here is the plot of area against predicted price:
Since area is a continuous (ordinal) variable, I had no trouble visualizing the data. But now I want to somehow visualize the dependence of the predicted price on my categorical variables (such as district).
For categorical variables I used one-hot (dummy) encoding.
For example, data of this kind:
turned into this format:
If I were using ordinal encoding for districts this way:
DistrictA - 1
DistrictB - 2
DistrictC - 3
DistrictD - 4
DistrictE - 5
I would be able to plot these values against the predicted price quite easily by putting 1-5 on the X axis and the price on the Y axis.
But I used dummy coding, and now I do not know how to show (visualize) the dependency between price and the categorical variable 'District', which is represented as a series of zeros and ones.
How can I make a plot showing a regression line of districts against predicted price in case of using dummy coding?
If you just want to know how much the different districts influence your prediction, you can look at the trained coefficients directly. A high theta for a district's dummy variable indicates that that district increases the price.
If you want to plot this, one possible way is to make a scatter plot with the x coordinate depending on which district is set.
Something like this (untested, assuming a pandas DataFrame data and matplotlib.pyplot imported as plt):
plt.scatter(0, predict(data[data["DistrictA"] == 1]))
plt.scatter(1, predict(data[data["DistrictB"] == 1]))
And so on.
(Possibly you need to provide an x vector of the same size as the filtered data vector.)
It looks even better if you can add a slight random perturbation to the x coordinate.
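Putting those pieces together, here is a hypothetical, self-contained sketch; data, the district dummy columns, and predict are assumptions taken from the question, not tested code. It also provides an x vector of the right size and adds the random jitter mentioned above:

import numpy as np
import matplotlib.pyplot as plt

districts = ["DistrictA", "DistrictB", "DistrictC", "DistrictD", "DistrictE"]
for x, name in enumerate(districts):
    rows = data[data[name] == 1]                   # training examples in this district
    y = predict(rows)                              # predicted prices for those rows
    jitter = np.random.uniform(-0.1, 0.1, len(y))  # spread out overlapping points
    plt.scatter(np.full(len(y), x, dtype=float) + jitter, y, alpha=0.5)

plt.xticks(range(len(districts)), districts)
plt.ylabel("predicted price")
plt.show()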

Core Plot Graph Label steps

I'm using Core Plot to draw graphs in my app.
I just encountered a problem:
I have dates on the X-Axis and I use a custom labeling policy.
If I only have a few records, everything works fine.
If I have many records, all the labels are crowded together and not useful :-(
So the question is: how can I decide which values to display and which to skip, so that I always have about 10 labels, separated from one another?
Divide the number of points by the number of labels you want and round up. For example, if you have 25 data points and want roughly 10 labels, label every third data point. You'll end up with 9 evenly spaced labels.
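In plain Python (not Core Plot), that stride calculation looks like this; label_indices is a hypothetical helper name:

import math

def label_indices(num_points, wanted=10):
    # Label every stride-th point so roughly `wanted` labels remain.
    stride = math.ceil(num_points / wanted)
    return list(range(0, num_points, stride))

print(label_indices(25))  # [0, 3, 6, 9, 12, 15, 18, 21, 24] -> 9 labels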