I was looking at GraphWalker, which is a model-based testing tool. It builds a model as a directed graph and uses a generator and a stop condition to walk that graph, something like:
random(edge_coverage(100)) // walks the graph randomly until all edges have been visited (100%)
random(vertex_coverage(100)) // walks the graph randomly until all vertices have been visited (100%)
There is another stop condition called requirement_coverage, used as random(requirement_coverage(100)).
The description on the website says:
requirement_coverage( an integer representing percentage of desired requirement coverage )
The stop criteria is a percentage number. When, during execution, the percentage of traversed requirements is reached, the test is stopped. If requirement is traversed more than one time, it still counts as 1, when calculating the percentage coverage.
What exactly are those traversed requirements?
This might be a somewhat belated answer, but here is what I have found:
https://github.com/GraphWalker/graphwalker-project/wiki/Requirements
Basically, you can use the REQTAG keyword on your vertices, mapping them to some external requirement document reference (e.g. REQTAG: requirement1). GraphWalker collects these requirements and applies the stop condition given by random(requirement_coverage(x)).
So in the example below, vertices are marked with requirement tags, and using random(requirement_coverage(50)) would result in stopping after half of the requirements have been traversed, e.g. after visiting two of four tagged vertices.
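For illustration (the vertex names and requirement ids below are made up, following the REQTAG syntax described above), a model's vertex labels might look like this:

v_Start
  REQTAG: REQ-1

v_LoggedIn
  REQTAG: REQ-2

v_Browse
  REQTAG: REQ-3

v_Checkout
  REQTAG: REQ-4

With four distinct requirements, random(requirement_coverage(50)) stops once two of them have been traversed, no matter how often each tagged vertex is revisited.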
I need to monitor an AC voltage waveform and record the RMS value when breakdown happens. I roughly know how to acquire data from videos I have watched; however, it is difficult for me to produce a solution that reads the breakdown voltage value. Ideally, I would also take a screenshot along with the breakdown voltage value.
In case you are not familiar with this topic: when a breakdown happens, the voltage drops immediately to zero. So what I need is to measure the voltage just before it falls to zero and, if possible, take a screenshot. This is an image of a normal waveform (black) with a breakdown one (red).
Naive solution*:
Take the data and get the Y values (this would depend on the data type you have, which in turn depends on how you acquire the data).
Find the breakdown point by iterating over the values and maintaining a couple of flags (I would probably track "got higher than X" and, once that's true, track "got lower than Y").
From that, I would just take the last N points (Get Array Subset) and get the array max. Or just track the maximum value as you run. A rough sketch of this logic is shown after these steps.
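Since the detection logic itself is language-independent, here is a minimal sketch of those steps in Python (the thresholds and window size are placeholders you would tune for your signal); in LabVIEW the same structure maps onto a loop with shift registers plus Get Array Subset and Array Max & Min:

def find_breakdown_voltage(samples, high_threshold, low_threshold, window=50):
    """Return (index_of_drop, voltage_just_before_drop), or None if no breakdown is seen."""
    armed = False  # set once the signal has "got higher than X"
    for i, v in enumerate(samples):
        if not armed:
            if v > high_threshold:
                armed = True
        elif v < low_threshold:
            # Voltage collapsed: take the max of the last `window` samples before the drop.
            start = max(0, i - window)
            return i, max(samples[start:i])
    return None

# Quick check with fake data: a ramp followed by a sudden collapse to ~0.
readings = [0.1 * n for n in range(100)] + [0.05, 0.02, 0.0]
print(find_breakdown_voltage(readings, high_threshold=2.0, low_threshold=0.5))  # (100, ~9.9)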
Assuming you have the graph in a control, you can just right click and select Create>>Invoke Node>>Export Image.
I would suggest playing with this in a VI with static data that you can run repeatedly to check how your code behaves.
*I don't know the problem domain and am not overly familiar with the various analysis VIs that ship with LV, so there are quite possibly more efficient ways of doing this.
Nowadays I have switched to Sonar reports for static code review and performance improvement. Under the Rules section I found that the cognitive complexity of my methods is high.
You can find the Cognitive Complexity issues in Sonar as follows:
Go to Project->Issues Tab->Rules Drop-down->Cognitive Complexity
The screenshot below shows this in a Sonar project:
I could not find any way to calculate and reduce the cognitive complexity of my methods. I finally found an accurate way to calculate it, which I give in my answer post below. Please check it out.
Cognitive Complexity
After searching some blogs and chatting with the Sonar team, I found an easy definition and calculation of cognitive complexity, which is as follows:
Definition:
Cognitive Complexity, Because Testability != Understandability
Your code should be as simple to understand as the definition above, simple as that.
Less Cognitive Complexity, more readability.
Let's take a method as an example to calculate CC; here I am referring to the Kotlin language (see the image below):
In the image above there is a method getAppConfigData() whose cognitive complexity is being measured. Right now the CC of this method is 18. As you can see in the screenshot, there is a warning telling us that the maximum allowed complexity is 15, which is lower than the current CC of this method.
Now the actual question is: how can I calculate the CC of my method during development?
Follow the rules below to get the CC of any method or class:
Increment when there is a break in the linear (top-to-bottom, left-to-right) flow of the code
Increment when structures that break the flow are nested
Ignore "shorthand" structures that readably condense multiple lines of code into one
So whenever one of the above rules matches, add the corresponding count to your CC, and remember that the count increases with the nesting level of the code break. For example, an "if" condition gets +1 if it is the first code break, but if you nest another "if" inside it, that inner "if" gets +2, as shown in the example below.
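To make the counting concrete, here is a small made-up snippet (the original answer shows a Kotlin method; the counting rules are the same in any language SonarQube analyses, so the idea is sketched here in Python):

def shipping_label(total, is_vip, country):
    if country is None:       # +1 (first code break)
        return "unknown"
    if total > 100:           # +1
        if is_vip:            # +2 (+1 for the if, +1 because it is nested)
            return "priority"
        else:                 # +1
            return "review"
    return "standard"
# Cognitive Complexity = 1 + 1 + 2 + 1 = 5

print(shipping_label(150, True, "DE"))  # priority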
That's all I got in terms of Cognitive Complexity.
You can find everything related to CC on the Sonar blog.
Thank You
A more detailed explanation is given in the Sonar Cognitive Complexity paper:
Basic criteria and methodology
A Cognitive Complexity score is assessed according to three basic rules:
Ignore structures that allow multiple statements to be readably shorthanded into one
Increment (add one) for each break in the linear flow of the code
Increment when flow-breaking structures are nested
Additionally, a complexity score is made up of four different types of increments:
Nesting - assessed for nesting control flow structures inside each other
Structural - assessed on control flow structures that are subject to a nesting increment, and that increase the nesting count
Fundamental - assessed on statements not subject to a nesting increment
Hybrid - assessed on control flow structures that are not subject to a nesting increment, but which do increase the nesting count
While the type of an increment makes no difference in the math - each increment adds one to the final score - making a distinction among the categories of features being counted makes it easier to understand where nesting increments do and do not apply. These rules and the principles behind them are further detailed in the following sections.
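As a rough illustration of where the four increment types land (a hedged sketch of my own, not taken from the paper), consider:

def positive_sum(values, verbose=False):
    total = 0
    for v in values:              # +1 structural (for)
        if v > 0 and verbose:     # +1 structural (if) +1 nesting, +1 fundamental (the "and")
            print("positive:", v)
        elif v > 0:               # +1 hybrid (elif breaks flow but gets no nesting increment)
            total += v
    return total
# Cognitive Complexity = 1 + 2 + 1 + 1 = 5

print(positive_sum([1, -2, 3]))   # 4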
In my case the Cognitive Complexity was due to a large number of if conditions.
My SonarQube allowed only 15 if and else if conditions:
if() =>1
else if() => 2
.
.
.
else => 15
When the chain exceeded 15 conditions, it flagged the Cognitive Complexity issue. One way to flatten such a chain is sketched below.
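A common way to collapse a chain like that (a sketch with made-up names, not tied to my actual code) is to replace it with a lookup table, which removes the flow breaks entirely:

# Before: a long if / else-if chain, +1 per branch, quickly adding up.
def discount_if_chain(tier):
    if tier == "bronze":      # +1
        return 0.05
    elif tier == "silver":    # +1
        return 0.10
    elif tier == "gold":      # +1
        return 0.15
    else:                     # +1
        return 0.0

# After: a dictionary lookup with no flow breaks, Cognitive Complexity 0.
DISCOUNTS = {"bronze": 0.05, "silver": 0.10, "gold": 0.15}

def discount_lookup(tier):
    return DISCOUNTS.get(tier, 0.0)

print(discount_if_chain("gold"), discount_lookup("gold"))  # 0.15 0.15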
The lack of high schools in remote areas is a problem for students in developing countries. Students in some locations are better than those in others, so I have to find those locations. Now, the main problem is defining "BETTER". I have made some rules that define the profile of a location.
Right now, I am concerned with the good students.
So, what I have done is:
1. Used some inferential statistics and made some rules to conclude that locations A, B, C, etc. are the most promising locations for high schools, because according to my rules these locations contain quality students.
I did all of the above to label the data: I needed to define "BETTER" and label the data so that I can now use a machine learning algorithm to learn the factors that make a location a potential one, so that if I give the model a data point from the test data, it will instantly tell whether the location is better or not.
Overview of the method:
For each location, I have these 4 pieces of information:
total_students_staying_for_high_school_education(A),
total_students_leaving_for_high_school_education_in_another_place(B),
mean_grade_point_of_students_of_type_B,
ratio (calculated as B/A),
For the locations whose ratio > 1:
I applied the chi-squared significance test to obtain a statistic that tells me whether students are leaving that place in significantly greater numbers than staying. I used ANOVA and then a Tukey test to compare mean grade points and find the pairs of locations whose means differ, and which of the two is greater.
I then wrote a Python program with a custom comparator that first checks whether the mean grades of two locations differ and ranks the one with the greater mean higher. If the means don't differ, the comparator ranks higher the location whose chi-squared statistic is greater (a sketch of such a comparator is shown after this overview).
This is how the whole process comes up with a few suggested locations, and I call those locations "BETTER".
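For reference, a minimal sketch of what such a comparator could look like (the field names, the fixed significance threshold, and the sample data are stand-ins for the real Tukey/chi-squared results, not my actual code):

from functools import cmp_to_key
from dataclasses import dataclass

@dataclass
class Location:
    name: str
    mean_grade: float   # mean grade of students leaving for high school (type B)
    chi2: float         # chi-squared statistic for leaving vs. staying

def means_differ(a, b):
    # Placeholder: the real pipeline would look up the Tukey HSD result
    # for the pair (a, b) instead of using a fixed threshold.
    return abs(a.mean_grade - b.mean_grade) > 0.5

def compare(a, b):
    # Rank by mean grade when the difference is significant,
    # otherwise fall back to the chi-squared statistic.
    if means_differ(a, b):
        return -1 if a.mean_grade > b.mean_grade else 1
    if a.chi2 != b.chi2:
        return -1 if a.chi2 > b.chi2 else 1
    return 0

locations = [
    Location("A", mean_grade=3.2, chi2=12.4),
    Location("B", mean_grade=3.9, chi2=8.1),
    Location("C", mean_grade=3.3, chi2=15.0),
]
ranked = sorted(locations, key=cmp_to_key(compare))
print([loc.name for loc in ranked])   # strongest "BETTER" candidates first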
What I am concerned about is:
1. How do I verify that my rules are valid? Or do I even need to verify them?
2. Most importantly, is mingling statistics with machine learning as described above an appropriate approach? Is there any major leakage in the method? Can anyone suggest a more general method?
I'm trying to run the hLDA algorithm to produce a descriptive hierarchy of the input documents. The problem is that I'm running diverse parameter configurations and trying to understand how it works empirically, because I cannot match them to the ones used in the original papers (I understand it's a different team). E.g. alpha in Mallet seems to be eta in the paper, but I'm not very sure. Besides, I cannot work out the boundaries for each of them, I mean the range of possible values for each parameter.
In the source code, there is some help:
double alpha; // smoothing on topic distributions
double gamma; // "imaginary" customers at the next
double eta; // smoothing on word distributions.
First, I used the default values: alpha=10.0; gamma=1.0; eta = 0.1;
Then I tried running the algorithm with different values and interpreting the results, but I can't understand their meaning. E.g. I think changing gamma (in Mallet) affects the customers' decision to start a new node in the tree or to be placed in an existing one. So, if I set gamma = 0.5, fewer nodes should be produced, because 0.5 is half the probability of the default one, right? But the results with gamma=1 give me 87 nodes, and with gamma=0.5 it returns 98! And then I'm asking myself something new: is that even a probability? I was trying to find the range of possible values in these two papers, but I didn't find it:
Hierarchical Topic Models and the Nested Chinese Restaurant Process
The Nested Chinese Restaurant Process and Bayesian Nonparametric Inference of Topic Hierarchies
I know I could be missing something, because I don't have a good background in this, but that's why I'm asking here; maybe someone has already had this problem and can help me understand those limits.
Thanks in advance!
It may be helpful to run multiple times with each hyperparameter setting. I suspect that gamma does not have a big influence on the final number of topics, and that what you are seeing could just be typical variability in the sampling process.
In my experience the parameter that has by far the strongest influence on the number of topics is actually eta, the topic-word smoothing.
I have an Oracle database (11g Spatial) that includes a series of area polygons and water mains. I'm trying to attribute each of these mains to the area in which it is contained, and for the most part this is straightforward enough (using the SDO_CONTAINS function), but I'm not sure how to deal with mains that straddle multiple polygons due to errors in digitisation.
In cases like this, what I'd ideally like to do is attribute a main to an area polygon if the majority of its length (>50%) is contained within it. I know that I can use the SDO_RELATE function to determine every polygon that any given main interacts with, but I don't know how to then determine how much of its length is contained within each area.
The principle is like this:
Correlate mains and areas. Assuming you have many mains and many areas, the most efficient approach is to use SDO_JOIN.
For each (main, area) pair returned, compute their intersection (SDO_GEOM.SDO_INTERSECTION) and measure the length of that intersection (SDO_GEOM.SDO_LENGTH).
From those results, retain, for each main, the area where that length is the maximum.
If you want a full SQL example, allow me a bit of time to write that using sample data.
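In the meantime, here is a rough sketch of the idea driven from Python (the table and column names MAINS(ID, GEOM) / AREAS(ID, GEOM), the 0.05 tolerance, and the connection details are all assumptions, not your actual schema):

import oracledb  # python-oracledb; cx_Oracle works similarly on older setups

# Join every main to every area it interacts with, measure the length of the
# intersection, and keep the area with the largest share of each main.
SQL = """
SELECT main_id, area_id
FROM (
  SELECT m.id AS main_id,
         a.id AS area_id,
         ROW_NUMBER() OVER (
           PARTITION BY m.id
           ORDER BY SDO_GEOM.SDO_LENGTH(
                      SDO_GEOM.SDO_INTERSECTION(m.geom, a.geom, 0.05), 0.05)
                    DESC NULLS LAST
         ) AS rn
  FROM TABLE(SDO_JOIN('MAINS', 'GEOM', 'AREAS', 'GEOM', 'mask=ANYINTERACT')) j,
       mains m,
       areas a
  WHERE j.rowid1 = m.rowid
    AND j.rowid2 = a.rowid
)
WHERE rn = 1
"""

with oracledb.connect(user="scott", password="tiger", dsn="localhost/orclpdb") as conn:
    with conn.cursor() as cur:
        for main_id, area_id in cur.execute(SQL):
            print(main_id, area_id)  # the area holding the largest share of this main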