I want to find out whether there is a relationship between how well students did on a particular test and dropout from education. I have a 2×2 contingency table with the variable Level in test, which takes the values level 1 and level 2, and the variable dropout, which has the values not active and active (you can say that level 1 = passed the test and level 2 = did not pass).
I can see that I have a problem with the so-called "Simpson's paradox": every single educational programme in the faculty has a high p-value, indicating that there is no relationship between level in test and dropout, BUT when I pool the data and perform the analysis for the whole faculty, I get a low p-value indicating a significant relationship between the variables.
I have tried to read about Simpson's paradox, but I cannot find information on how to deal with this problem.
I have read in one place that one should not perform the test on aggregated data, but can that be true?
I really hope that someone can help me!
Kind Regards Maria
For the cross-tabs labeled education 2 and education 5 you have expected cell counts below 5, which violates the usual assumptions for running a chi-square test. There are arguments to be made that chi-square is robust enough to withstand these limitations, but I would still reconsider your grouping methodology.
Since the total number of cases in 'Faculty' is higher, there is enough data to refute the independence hypothesis, hence the low p-value. When the number of cases is small (your education 1 to education 5 tables), there is not enough data to show significance. A higher p-value there just says that the differences could be due to chance.
This is not an example of Simpson's paradox.
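To make the sample-size point concrete, here is a minimal Python sketch (with hypothetical counts, not the data from the question): sub-tables with identical pass/dropout proportions each give a high p-value, while pooling them drives the p-value down purely through sample size.

# Hypothetical 2x2 counts -- rows: passed / not passed, columns: active / not active.
import numpy as np
from scipy.stats import chi2_contingency

per_education = np.array([[18, 12],
                          [12, 18]])            # one made-up education programme

for k in range(1, 6):
    pooled = per_education * k                  # k programmes with identical proportions
    chi2, p, dof, expected = chi2_contingency(pooled)
    print(f"{k} programme(s) pooled: n={int(pooled.sum()):3d}, p={p:.4f}")
# The p-value shrinks as n grows even though the underlying association is unchanged.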
I have 9 bins named A through I containing the following number of objects:
A(8), B(7), C(6), D(7), E(5), F(6), G(6), H(6), I(6)
Objects from each bin fulfill a specific role and cannot be interchanged. I am selecting one object from each bin at random, forming a "team" of 9 "players":
T_ijklmnopq = {a_i, b_j, c_k, d_l, e_m, f_n, g_o, h_p, i_q}
There are 15,240,960 such combinations - a huge number. I have means of evaluating performance of each "team" via a costly objective function, F(T_ijklmnopq). Thus, I can feasibly sample a limited number of random combinations, say no more than 500 samples.
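For concreteness, here is a small Python sketch that uses only the bin sizes listed above to reproduce the combination count and to draw random teams of one member per bin:

# Reproduce the combination count from the bin sizes above and draw random "teams".
import math
import random

bin_sizes = {"A": 8, "B": 7, "C": 6, "D": 7, "E": 5,
             "F": 6, "G": 6, "H": 6, "I": 6}

print(math.prod(bin_sizes.values()))            # 15240960, matching the figure above

def random_team(rng=random):
    """Pick one member index per bin, e.g. {'A': 3, 'B': 0, ...}."""
    return {role: rng.randrange(size) for role, size in bin_sizes.items()}

samples = [random_team() for _ in range(500)]   # the ~500 affordable evaluations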
Having results of such sampling, I want to predict the most likely best combination of "players". How to do it?
Keep in mind this is different from classical team selection because there is no meaningful evaluation of F() based on individual performance. For example, "player" a_6 may be good individually, but he may not "like" e_2, and therefore the performance of a "team" containing the two suffers. Conversely, three mediocre players b_1, f_5, i_2 may be part of an awesome "team". What's known is the whole "team" performance, that's all.
One more detail: contributions of the individual roles A through I are not weighted equally. Position of, say, E may be more important than, say, H. Unfortunately, these weights are not known upfront.
The described problem must be known to combinatorial analysts, but I haven't found anything exactly like it. Linear programming solutions with known individual "player" scores do not apply here. I will be most grateful for the specific name under which this problem is known to experts.
So far I have collected 400 samples. Here is a graph of the sorted F(T) values vs. an (arbitrary) sample number to illustrate that F(T) is "reasonable".
[Figure: sorted F(T) values by sample number]
Good afternoon and happy Friday, folks
I'm trying to automate a placement simulation of youth into residential treatment where they will have the highest likelihood of success. Success is operationalized as "not recidivating" within 3 years of entering treatment. Equations predicting recidivism have been generated for each location, and the equations have been applied to each individual in the scenario (based on youth characteristics like risk, age, length of stay [LOS], etc.). Each youth has a predicted success rate for every location, which throws in a wrench: youth are not qualified for all of the treatment facilities for which they have predicted success rates. Indeed, treatment locations have differing, yet overlapping, qualifications.
Let's take a made-up example. Johnny (ID #5, below) is a 15-year-old boy with drug charges. He could have "predicted success rates" of 91% for location A, 88% for location B, 50% for location C, and 75% for location D. Johnny is most likely to be successful (i.e., not recidivate within three years of entering treatment) if he is treated at location A; unfortunately, location A only accepts youth who are 17 years old or older, so Johnny would not qualify for treatment there. For Johnny, location B is the next best location. Let us assume that Johnny is qualified for location B, but that all of location B's beds are filled; so we must now look to location D, as it is now Johnny's "best available" option at 75%.
The score so far: we are matching youth to available beds in locations for which they qualify and where they might enjoy the greatest likelihood of success. Unfortunately, each location only has a certain number of available beds, and the number of available beds differs across locations. The qualifications for entry into the treatment facilities differ, yet overlap (e.g., 12-17 year-olds vs. 14-20 year-olds).
In order to simulate what placement decisions might look like based on success rates, I went through the scenario described above for over 400 youth, by hand, in Excel. It took me about a week. I'd like to use PROC SQL embedded in a SAS macro to automate these placement scenarios, with the ultimate goals of a) obtaining the ability to bootstrap iterations in order to examine effect sizes across distributions, b) saving time, and c) preventing further brain damage from banging my head against desk and wall in frustration whilst doing this by hand. Whilst never having had the necessity, nay, the privilege of using SQL in my typical role as a researcher, I believe that the time has now come and I'm excited about it! Honestly. I believe it has the capacity I'm looking for. Unfortunately, it is beating the devil out of me!
Here’s what I’ve got cookin’ so far: I want to create and automate the placement simulation with the clever use of merging/joining/switching/or something like that.
I have two datasets (tables). The first dataset contains all of the youth information (one row per youth; several columns with demographics and location ranks, which correspond to the predicted success rates). The order of rows in the youth dataset was/will be randomly generated, to simulate the randomness with which youth enter the system and are subsequently placed into treatment. Note that I will be "cleaning" the youth dataset prior to merging, such that rank-column cells will only be populated for programs for which the respective youth qualifies. This should take the "does the youth even qualify for the program" problem out of the equation.
However, it still leaves the issue of availability to be contended with in the scenario.
The second dataset contains the treatment facility beds, with each row corresponding to an available bed in one of the treatment locations; two columns contain bed numbers and location names. Each bed (row) has only one location cell populated, but each location populates several rows.
Thus, in descending order of preference, I want to merge each youth row with the available bed that represents his/her best chance of success, and so the merge/join/switch/thing should take place
on youth.Rank1 = distinct TF.Location,
and if youth.Rank1 ≠ TF.Location then
merge on youth.Rank2 = TF.Location,
and if youth.Rank2 ≠ TF.Location then merge on
youth.Rank3 = TF.Location, etc.
Put plainly: "Merge on rank1 unless the rank1 location is no longer available, then merge on rank2, unless the rank2 location is no longer available, and on down the line, etc., until all options are exhausted and foster care (i.e., alternative services) is the only option."
I've had no success getting this to work. I haven't even been successful getting the UNION operator to work. About the only successful thing I've done in SQL so far is create a view of a single dataset. It's pretty sad. I've been following this guidance, but I get hung up around the WHERE clause:
proc sql;             /* Calls the SQL procedure */
create table x as     /* Tells SAS to create a table called x */
select                /* Specifies the column(s) to be selected */
from                  /* Specifies the table(s) (data sets) to be queried */
where                 /* Subsets the data based on a condition */
group by              /* Classifies the data into groups based on the specified column(s) */
order by              /* Sorts the resulting rows (observations) by the specified column(s) */
; quit;               /* Ends the PROC SQL procedure */
Frankly, I'm stuck and I could use some advice. The greenhorn in me is in way over his head.
I appreciate any help or guidance anyone might lend.
Cheers!
P
The process you describe (and to be honest I skipped to the end, so I might have missed something) does not lend itself to SQL, because each step could affect the results of the next one. However, you want to get the best results for the most kids. (I think a lot of that text was to convince us how important it is to help out.) You don't actually give us anything we can really use to help, since you don't give any details of your data model, your data, or expected results. There really is no way to answer this question. But I don't care -- I'm going to go forward with some suggestions, because it is a Friday and I've never done a stream-of-consciousness answer to a stream-of-consciousness question before. I suggest you don't formulate your solution just in SQL, but instead use a higher-level program and engage in a process like the one described below -- and because this is a DB question, I've noted the places where the DB might be involved.
1. Generate a list of kids (this can be in a table -- call it NEEDY-KID).
2. Have a list of locations to assign (this can also be a table, LOCATION).
3. Run your matching for best fit from kid to location -- at this point don't worry about assigning more than one kid to a location; there can be duplicates (put this in a table called KID2LOC using a query).
4. Check KID2LOC for locations assigned more than once -- use some method to remove the duplicates so each location is only assigned once (remove them from KID2LOC using a query).
5. Prune the LOCATION list to remove the assigned locations (once again, a query).
6. If kids exist without a location, go back to step 3 with the newly pruned location list.
7. Done. (A rough sketch of this loop in code follows below.)
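To make the loop concrete, here is a rough Python sketch of the steps above, using hypothetical in-memory dictionaries in place of the NEEDY-KID, LOCATION and KID2LOC tables (in practice each step could be one of the queries noted above). The duplicate-resolution rule here is plain first-come-first-served, just one of the possible methods step 4 allows.

# ranks[kid]: that kid's qualifying locations, ordered best-first (the question's
# "cleaned" rank columns). beds[loc]: open bed count (the bed/LOCATION table, step 2).
def assign(ranks, beds):
    placement = {}                                     # the final KID2LOC
    unplaced = list(ranks)                             # step 1: the list of kids
    while unplaced:
        tentative = {}                                 # step 3: best fit, duplicates allowed
        for kid in unplaced:
            for loc in ranks[kid]:
                if beds.get(loc, 0) > 0:
                    tentative.setdefault(loc, []).append(kid)
                    break
        if not tentative:
            break                                      # nobody left can be placed
        for loc, kids in tentative.items():
            for kid in kids[:beds[loc]]:               # step 4: keep only as many as fit
                placement[kid] = loc
                beds[loc] -= 1                         # step 5: prune the assigned bed
        unplaced = [k for k in ranks if k not in placement]   # step 6: repeat
    return placement                                   # step 7: done

# Example: Johnny (#5) ranks B then D because A's age rule excludes him.
print(assign({5: ["B", "D"], 6: ["B", "A"]}, {"A": 1, "B": 1, "D": 2}))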
Assume you are a test analyst working on a banking project to upgrade an existing automated teller machine system to allow customers to obtain cash advances from supported credit cards. The system should allow cash advances from 20 dollars to 500 dollars, inclusively, for all supported credit cards. The correct list of supported credit cards is American Express, Visa, Japan Credit Bank, Eurocard, and MasterCard. The user interface starts with a default amount of 100 dollars for advances, and the ATM keypad is used to increase or decrease that amount in 20-dollar increments.
Consider the decision table shown in table 1.0 that describes the handling of these transactions.
Table 1.0. Cash advance decision table
Check the table in the attached image.
Assume that you want to design a set of test cases where the following coverage is achieved:
Decision table coverage
Boundary values for allowed and disallowed advance amounts
Successful advance for each supported card
Design a set of test cases that achieves this level of coverage with the minimum possible number of test cases. Assume each test case consists of a single combination of conditions to create and a single combination of actions to check. How many test cases do you need?
Can someone help me understand this problem and its solution?
Thanks in Advance :-)
Decision table coverage / boundary values for allowed and disallowed advance amounts ->
Boundary values for your example will be: below 20 (disallowed), 20 and 500 (the allowed limits), and above 500 (disallowed); with the 20-dollar keypad steps from the 100-dollar default, the nearest reachable disallowed amounts are 0 and 520.
Equivalence partitioning, boundary value analysis and decision tables are described here: http://www.maniuk.net/search/label/test%20design%20technique
Successful advance for each supported card ->
Instruction set number 5 (in the decision table) should be applied for all types of supported cards. Depending on risk, #4 should be tested too.
Design a set of test cases that achieves this level of coverage with the minimum possible number of test cases. -->
a. If we can assume that all cards work exactly the same, with the same limits and processing procedures, then 9 test cases are needed; during boundary testing you can use a different card each time, so every card gets used.
b. If we assume that some card-specific processing still exists, then 13 test cases are needed (the 9 from the previous point + 4 more cards to cover instruction #5).
c. If the cards have different limits of their own, additional verification will be needed.
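For what it's worth, here is a tiny Python sketch of the boundary reasoning, based only on the rules stated in the question (20 to 500 dollars inclusive, 20-dollar keypad steps from a default of 100, five supported cards); is_valid_advance is a hypothetical stand-in, not the ATM's actual interface.

SUPPORTED = {"American Express", "Visa", "Japan Credit Bank", "Eurocard", "MasterCard"}

def is_valid_advance(amount, card):
    # Hypothetical check mirroring the stated rules: supported card, 20-500 inclusive.
    return card in SUPPORTED and 20 <= amount <= 500

# Amounts are reachable from the 100-dollar default in 20-dollar steps, so the
# interesting boundaries are 0 and 520 (nearest disallowed) and 20 and 500 (limits).
for amount in (0, 20, 500, 520):
    print(amount, is_valid_advance(amount, "Visa"))
# As noted above, a different card can be used for each of these tests so that
# every supported card is exercised without extra test cases.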
Hello, and thanks for reading my thread.
I have read some of the previous posts on formatting/normalising input data for a Neural Network, but cannot find something that addresses my queries specifically. I apologise for the long post.
I am attempting to build a radial basis function network for analysing horse racing data. I realise that this has been done before, but the data that I have is "special" and I have a keen interest in racing/sportsbetting/programming so would like to give it a shot!
Whilst I think I understand the principles for the RBFN itself, I am having some trouble understanding the normalisation/formatting/scaling of the input data so that it is presented in a "sensible manner" for the network, and I am not sure how I should formulate the output target values.
For example, in my data I look at the "Class change", which compares the class of the race the horse is running in now with the class of its previous race, and can take a value between -5 and +5. I expect that I need to rescale these to between -1 and +1 (right?!), but I have noticed that many more runners have a class change of 1, 0 or -1 than any other value, so I am worried about "over-representation". It is not possible to gather more data for the higher/lower class changes because that's just 'the way the data comes'. Would it be best to use the data as-is after scaling, or should I trim extreme values, or something else?
Similarly, there are "continuous" inputs - like the "Days Since Last Run". It can have a value between 1 and about 1000, but values in the range of 10-40 vastly dominate. I was going to scale these values to be between 0 and 1, but even if I trim the most extreme values before scaling, I am still going to have a huge representation of a certain range - is this going to cause me an issue? How are problems like this usually dealt with?
Finally, I am having trouble understanding how to present the "target" values for training to the network. My existing results data has the "win/lose" (0 or 1?) and the odds at which the runner won or lost. If I just use the "win/lose", it treats all wins and losses the same when really they're not - I would be quite happy with a network that ignored all the small winners but was highly profitable from picking 10-1 shots. Similarly, a network could be forgiven for "losing" on a 20-1 shot, but losing a bet at 2/5 would be a bad loss. I considered making the results (+1 * odds) for a winner and (-1 / odds) for a loser to capture the issue above, but this would mean that my results are not a continuous function, as there will be a "discontinuity" between short-priced winners and short-priced losers.
Should I have two outputs to cover this - one for bet/no bet, and another for "stake"?
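For reference, a tiny sketch of the target encoding proposed above (+odds for a winner, -1/odds for a loser, fractional odds assumed), which makes the jump between short-priced winners and short-priced losers easy to see:

def target(won, odds):
    # The encoding described above: reward long-priced winners, penalise short-priced losers.
    return odds if won else -1.0 / odds

for odds in (0.4, 2.0, 10.0, 20.0):             # 2/5, 2-1, 10-1, 20-1
    print(f"odds {odds:>4}: win -> {target(True, odds):+.2f}, "
          f"lose -> {target(False, odds):+.2f}")
# A 2/5 winner maps to +0.40 while a 2/5 loser maps to -2.50: nearby inputs but
# very different targets -- the discontinuity mentioned above.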
I am sorry for the flood of questions and the long post, but this would really help me set off on the right track.
Thank you for any help anyone can offer me!
Kind regards,
Paul
The documentation that came with your RBFN is a good starting point to answer some of these questions.
Trimming data, aka "clamping" or "winsorizing", is something I use for similar data. For example, "days since last run" for a horse could be anything from just one day to several years, but tends to centre in the region of 20 to 30 days. Some experts use a figure of, say, 63 days to indicate a "spell", so you could have an indicator variable like "> 63 = 1, else 0", for example. One approach is to look at the outliers, say the upper or lower 5% of any variable, and clamp these.
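As a rough illustration of the clamping and indicator ideas (made-up data, not real racing figures):

import numpy as np

rng = np.random.default_rng(0)
days_since_run = rng.gamma(shape=2.0, scale=15.0, size=1000)  # skewed, mostly 10-40

lo, hi = np.percentile(days_since_run, [5, 95])
clamped = np.clip(days_since_run, lo, hi)                     # clamp/winsorize the tails
spell = (days_since_run > 63).astype(float)                   # indicator: > 63 = 1, else 0
scaled = (clamped - lo) / (hi - lo)                           # rescale to [0, 1] for the network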
If you use odds/dividends anywhere, make sure you use the implied probabilities, i.e. 1/(odds+1), and a useful idea is to normalise these to 100%.
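In code, with a hypothetical five-runner race:

import numpy as np

odds = np.array([1.5, 3.0, 4.0, 6.0, 12.0])      # fractional odds for one race
implied = 1.0 / (odds + 1.0)                     # 1/(odds+1), as suggested above
normalised = implied / implied.sum()             # probabilities summing to 1 (100%)
print(round(implied.sum(), 3), normalised.sum()) # ~1.07 over-round vs ~1.0 after normalising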
The odds or parimutuel prices tend to swamp other predictors, so one technique is to develop separate models: one for the market variables (the market model) and another for the non-market variables (often called the "fundamental" model).
I am optimizing algorithmic strategies. In the process of choosing from a pool of many optimized strategies, I am now in the phase of evaluating each strategy's robustness.
Following the guidelines of Dr. Pardo's book "The Evaluation of Trading Strategies", on page 231 Dr. Pardo recommends, in item 3, applying the following ratio to the optimized data:
" 3. The ratio of the total profit of all profitable simulations divided by the
total profit of all simulationsis significantly positive"
The question: from the optimization results, I am not able to properly understand what Mr. Pardo means by stating "... all simulations is significantly positive". What does Mr. Pardo mean by 'significantly positive'?
a.) with a 95% confidence level?
b.) with a certain p-value?
c.) the average net profit of each simulation minus its standard deviation?
Even though the sentence might seem 'simple' I would REALLY like to understand what Mr. Pardo means by the statement and HOW to calculate it, in order to filter the most robust algorithmic strategies.
The aim of analyzing the optimization profile of an algorithmic simulation is to be able to filter robust strategies.
Therefore the ratio should help us to uncover if the simulation results are on the right track or not.
So, we would like to impose some 'penalties' on our results, so we can separate the robust cases from the doubtful (not robust) ones.
I came up with the following penalizing measures (found in Mr. Pardo's book and other sources):
a.) we can use a market return (yearly value) as a benchmark, so all the simulations whose results are below that level can be excluded from our analysis;
b.) some other benchmark to separate the 'robust' results from the more 'doubtful' ones (for example, deducting one standard deviation from each result).
From (a) and (b), we can create the ratio:
the total sum of all profitable simulations divided by the sum of the profitable results considered robust.
The ratio should be greater than or equal to 1.
If the ratio is equal to 1, it means that our simulation has given interesting results (we are analyzing the positive values in this ratio, but profitable results should always be compared to the negative results as well).
If the ratio is greater than 1, then we have not reached the best possible scenario, and the result should be compared with the other optimization tests.
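As a rough illustration (with made-up per-simulation net profits, purely for the arithmetic), the quoted Pardo ratio and the penalised variant from (a) and (b) could be computed like this:

import numpy as np

profits = np.array([120.0, 80.0, 45.0, 10.0, -15.0, -60.0, 30.0, 95.0])  # hypothetical runs

pardo_ratio = profits[profits > 0].sum() / profits.sum()     # profitable total / overall total

benchmark = 25.0                                 # (a) e.g. a yearly market return
robust = profits[profits > benchmark]            # runs clearing the benchmark
penalised_ratio = profits[profits > 0].sum() / robust.sum()  # the ratio described above

print(f"Pardo-style ratio: {pardo_ratio:.2f}")       # > 1 here because some runs lose money
print(f"Penalised ratio:   {penalised_ratio:.2f}")   # >= 1; equals 1 only if every profitable run clears the benchmark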
While simulating trading algorithms, no result is absolute but partial, and its value is taken in relation to what we expect from the algorithm.
If someone has a better explanation or idea or concept you might find interesting please share, I would gladly read it.
Best regards to all.
Remark on the subject
With all due respect to the subject (published in 2008), the term robustness has its own meaning if and only if the statement also clarifies in which particular respect the robustness is measured and against what phenomena the Model-under-review's response is to be exposed and tested (against what perturbations -- of what type and scale -- the Model-under-test shall hold its robust behaviour, the measures of which were both defined and quantified a priori, before the test).
In any case, where such a context for the robustness is not defined, the material, be it printed under any bold name, sounds -- and forgive me for speaking in plain English -- just like a PR story, an over-hyped e-zine headline or a paid advertorial.
Serious quantitative model evaluation, all the more so if one strives to perform an optimisation (with respect to some defined quantitative goal), requires a more thorough insight into the subject than to axiomatically post a trivial "must-have" imperative of
large-average && small-HiLo-range && small-StDev.
Any serious Quant-Modelling effort, if it is not to simply spoil the hundreds of thousands of CPU core-hours consumed by deep parametric-space scans, shall incorporate a serious parametrisation decision in each dimension of the main TruTrading Strategy sub-spaces --
{ aSelectPOLICY, aDetectPOLICY, anActPOLICY, anAllocatePOLICY, aTerminatePOLICY }
A failure to do so either cripples the model or leads to a blind belief, and it is hard to guess which of the two is the greater Quant-sin.
Remark on the cited hypothesis
The book states, without any effort to prove the claim, that:
The more robust trading strategy will have an optimization profile with:
1. A large average profit
2. A small maximum-minimum range
3. A small standard deviation
Is it correct?
Now kindly spend a few moments and review this 4D-animated view of a Model-under-test (the visualisation of which is reduced to just four dimensions for easier visual perception), where none of the above holds true.
<aMouseRightCLICK>.openPictureOnAnotherTab to see full HiRes picture details
Based on contemporary state-of-the-art adaptive money-management practice, that claim fails to be correct, be it due to poor parametrisation (thus artificially leading the model into a rather "flat-profits" sub-space of aParamSetVectorSPACE), or due to a principal misconception, or poor practice (including the lack thereof), in implementing the most powerful profit-booster ever -- the very money-management model sub-space.
Item 1 becomes insignificant altogether.
Item 2 works right on the contrary to the stated postulate.
Item 3 cannot yield anything but the opposite, due to items 1 & 2 above.