Another 'R could not find function "ggline"' - ggplot2

Before you give me the boilerplate advice about looking into loading packages, let me just say that I have loaded all of my packages. I have read previous posts about this and nothing has worked so far: I removed 'ggplot2','ggpubr','plyr','tidyverse', basically all of my packages and reinstalled them, with dependencies. I have tried require(ggplot2) and require(ggpubr). And I'm still getting this issue.
My code was working a couple of days ago, but I closed R and now I'm getting an error.
tiff(file="P:/School/Dissertation/Analysis/Results/Figures/means_GNG_PACC_slope.tiff")
ggline(plotcogs, x = "TIME", y = "GNG_PACC_slope",
       add = c("mean_sd", "jitter"),
       color = "STIM", palette = c("#00AFBB", "#FC4E07"),
       size = 0.75, point.size = 1.25,
       xlab = "Time from start of stim (min)", ylab = "Mean",
       legend = "right")
dev.off()
Error:
Error in ggline(plotcogs, x = "TIME", y = "GNG_PACC_slope", add = c("mean_sd", :
could not find function "ggline"
Any ideas why this would work recently but now isn't, in spite of me removing/reinstalling/loading the packages?
Example data block:
PARTIDA ORDER VISIT STIM TIME GNG_PACC_slope
1 1 5 1 1 -0.149925037
1 1 5 1 2 0.239808153
1 1 5 1 3 -0.299401198
1 1 4 2 1 -0.3003003
1 1 4 2 2 0.4
1 1 4 2 3 -0.4
4 2 4 1 1 -0.133333333
4 2 4 1 2 0.239808153
4 2 4 1 3 -0.085689803

Related

why all testcases not getting reported to xray

i am triggering my test from jenkins
Total Tests: 393 (±0)
Failed Configurations: 0 (±0)
Failed Tests: 18 (±0)
but in Xray cloud - TOTAL TESTS: 60 PASSED- 46 , FAILED-14 for entire regression...
My TestNG Report.xml-https://pastebin.com/xV671g4F
The issue you're facing is that you are doing data-driven testing, so several test methods are executed many times.
Try searching for the Test issue that corresponds to GETSubjectForAuthTeacher and look at its execution details to see the Test Run information; inside it you will probably see 88 entries/inner results.
$ fgrep '<test-method' ~/Downloads/xV671g4F.txt | egrep -o 'name="\w+"' | uniq -c
1 name="GETCountryForAuthTeacher"
1 name="GETCountryForGuestTeacher"
1 name="GETCountryForAuthStudent"
1 name="GETCountryForGuestStudent"
8 name="GETGradeForAuthTeacher"
8 name="GETGradeForAuthStudent"
8 name="GETGradeForGuestStudent"
8 name="GETDegreeForAuthTeacher"
1 name="GETDegreeforGuestTeacher"
1 name="GETDegreeforTeacherNegative"
8 name="GETDegreeForAuthStudent"
8 name="GETDegreeforGuestStudent"
1 name="GETDegreeforStudentNegative"
3 name="GETMajorForAuthTeacher"
1 name="GETMajorforGuestTeacher"
1 name="GETMajorforTeacherNegative"
3 name="GETMajorForAuthStudent"
3 name="GETMajorforGuestStudent"
1 name="GETMajorforStudentNegative"
88 name="GETSubjectForAuthTeacher"
8 name="GETCurriculumForAuthTeacher"
8 name="GETK12ChaptersForAuthTeacher"
5 name="GETSkillChaptersForAuthTeacher"
2 name="GETTestPrepChaptersForAuthTeacher"
3 name="GETUniversityChaptersForAuthTeacher"
2 name="GETK12TestsForAuthTeacher"
2 name="GETUniversityTestsForAuthTeacher"
2 name="GETK12SectionsForAuthTeacher"
2 name="GETUniversitySectionsForAuthTeacher"
6 name="GETK12SkillForAuthTeacher"
1 name="GETK12SkillNegativeForTeacher"
1 name="GETSkillForGuestTeacher"
4 name="GETK12SkillTopicsForAuthTeacher"
2 name="GETK12TestPrepTopicsForAuthTeacher"
8 name="GETK12TopicsForAuthTeacher"
4 name="GETCoursesForAuthTeacher"
1 name="GETCountryForAuthAdmin"
1 name="GETCountryForGuestAdmin"
8 name="GETGradeForAuthAdmin"
8 name="GETDegreeForAuthAdmin"
1 name="GETDegreeforAdminNegative"
1 name="GETDegreeforGuestAdmin"
3 name="GETMajorForAuthAdmin"
1 name="GETMajorforAdminNegative"
1 name="GETMajorforGuestAdmin"
88 name="GETSubjectForAuthAdmin"
8 name="GETCurriculumForAuthAdmin"
8 name="GETK12ChaptersForAuthAdmin"
5 name="GETSkillChaptersForAuthAdmin"
2 name="GETTestPrepChaptersForAuthAdmin"
3 name="GETUniversityChaptersForAuthAdmin"
2 name="GETK12TestsForAuthAdmin"
1 name="GETUniversityTestsForAuthAdmin"
2 name="GETK12SectionForAuthAdmin"
2 name="GETUniversitySectionForAuthAdmin"
6 name="GETK12SkillForAuthAdmin"
1 name="GETK12SkillNegativeForAdmin"
1 name="GETSkillForGuestAdmin"
4 name="GETK12SkillTopicsForAuthAdmin"
2 name="GETK12TestPrepTopicsForAuthAdmin"
8 name="GETK12TopicsForAuthAdmin"
4 name="GETCoursesForAuthAdmin"
1 name="SearchCurriculumTagForAuthAdmin"
1 name="SearchGradeTagForAuthAdmin"
1 name="SearchSubjectTagForAuthAdmin"
1 name="GETCurriculumTagForAuthAdmin"
1 name="GETGradeForAuthAdmin"
1 name="GETSubjectTagForAuthAdmin"
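The same invocation-vs-distinct counting that the fgrep pipeline above does can be sketched in Python; this shows why Jenkins (total invocations) and Xray (distinct Tests) disagree. The report fragment below is synthetic, for illustration only:

```python
import re
from collections import Counter

# Synthetic TestNG-report fragment (assumed structure, for illustration only)
report = '''
<test-method name="GETCountryForAuthTeacher" status="PASS"/>
<test-method name="GETSubjectForAuthTeacher" status="PASS"/>
<test-method name="GETSubjectForAuthTeacher" status="FAIL"/>
<test-method name="GETSubjectForAuthTeacher" status="PASS"/>
'''

names = re.findall(r'<test-method name="(\w+)"', report)

print(len(names))       # total invocations -> what Jenkins reports (4 here)
print(len(set(names)))  # distinct methods -> what Xray counts as Tests (2 here)
print(Counter(names))   # per-method invocation counts, like `uniq -c` above
```

Data-driven methods such as GETSubjectForAuthTeacher appear once in the distinct count but many times in the invocation count, which accounts for the gap between the two totals.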

pandas: idxmax for k-th largest

Given a df holding a probability distribution per row, I get the max-probability column for each row with df.idxmax(axis=1), like this:
df['1k-th'] = df.idxmax(axis=1)
and get the following result:
(scroll the tables to the right if you cannot see all the columns)
0 1 2 3 4 5 6 1k-th
0 0.114869 0.020708 0.025587 0.028741 0.031257 0.031619 0.747219 6
1 0.020206 0.012710 0.010341 0.012196 0.812495 0.113863 0.018190 4
2 0.023585 0.735475 0.091795 0.021683 0.027581 0.054217 0.045664 1
3 0.009834 0.009175 0.013165 0.016014 0.015507 0.899115 0.037190 5
4 0.023357 0.736059 0.088721 0.021626 0.027341 0.056289 0.046607 1
The question is: how do I get the 2nd-, 3rd-, etc. largest probabilities, so that I get the following result?
0 1 2 3 4 5 6 1k-th 2-th
0 0.114869 0.020708 0.025587 0.028741 0.031257 0.031619 0.747219 6 0
1 0.020206 0.012710 0.010341 0.012196 0.812495 0.113863 0.018190 4 3
2 0.023585 0.735475 0.091795 0.021683 0.027581 0.054217 0.045664 1 4
3 0.009834 0.009175 0.013165 0.016014 0.015507 0.899115 0.037190 5 4
4 0.023357 0.736059 0.088721 0.021626 0.027341 0.056289 0.046607 1 2
Thank you!
My own solution is not the prettiest, but it does its job and works fast:
for i in range(7):
    # record the current largest value's column label and the value itself
    p[f'{i}k'] = p[[0, 1, 2, 3, 4, 5, 6]].idxmax(axis=1)
    p[f'{i}k_v'] = p[[0, 1, 2, 3, 4, 5, 6]].max(axis=1)
    # blank out the found maximum so the next pass finds the runner-up
    for x in range(7):
        p[x] = np.where(p[x] == p[f'{i}k_v'], np.nan, p[x])
Each pass of the loop:
finds the largest value and its column index
drops the found value (sets it to NaN)
then the next pass:
finds the 2nd largest value
drops the found value
etc.
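A vectorized alternative to the loop above: one argsort per row yields the column labels in descending-probability order, from which any k-th largest can be read off directly, with no NaN-mutation of the original data. A minimal sketch, rebuilding df from the first two rows of the example (column labels assumed to be the integers 0-6):

```python
import numpy as np
import pandas as pd

# First two rows of the example probability table
df = pd.DataFrame([
    [0.114869, 0.020708, 0.025587, 0.028741, 0.031257, 0.031619, 0.747219],
    [0.020206, 0.012710, 0.010341, 0.012196, 0.812495, 0.113863, 0.018190],
])

cols = df.columns.to_numpy()
# Column positions sorted by descending probability, one row per input row
order = np.argsort(-df.to_numpy(), axis=1)

df['1k-th'] = cols[order[:, 0]]  # largest (same as idxmax)
df['2-th'] = cols[order[:, 1]]   # second largest
```

For these two rows this gives 1k-th = [6, 4] and 2-th = [0, 5]; note that the hand-written expected table in the question disagrees with the actual second-largest column for some rows.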

how to calculate the specific accumulated amount in t-sql

For each row, I need the integer part of val divided by 4. The remainders of that division carry over: for each subsequent row, we add the current row's remainder to the accumulated remainder from the previous rows, and again take the whole part and the remainder modulo 4. Consider the example below:
id val
1 22
2 1
3 1
4 2
5 1
6 6
7 1
After dividing by 4, we track the whole part and the remainders. For each id, remainders accumulate until they add up to another multiple of 4:
id val wh1 rem1 wh2 rem2 RESULT(wh1+wh2)
1 22 5 2 0 2 5
2 1 0 1 (3/4=0) 3%4=3 0
3 1 0 1 (4/4=1) 4%4=0 1
4 2 0 2 (2/4=0) 2%4=2 0
5 1 0 1 (3/4=0) 3%4=3 0
6 6 1 2 (5/4=1) 5%4=1 2
7 1 0 1 (2/4=0) 2%4=2 0
How can I get the next RESULT column with sql?
Data of project:
http://sqlfiddle.com/#!18/9e18f/2
The whole part from dividing by 4 is easy; the problem is calculating the accumulated remainders for each id and working out when they, in turn, add up to another multiple of 4.
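To pin down the rule before writing SQL, here is a small Python sketch of the per-row arithmetic (the function name is mine, not from the question): each row contributes val/4 whole parts directly (wh1), while the remainders val%4 feed a running carry that releases an extra whole part (wh2) each time it reaches 4.

```python
def split_and_carry(vals, base=4):
    """Per row: wh1 = val // base; remainders accumulate in a carry
    that releases an extra whole part (wh2) whenever it reaches base.
    Returns (wh1, wh2, wh1 + wh2) per row."""
    carry = 0
    rows = []
    for v in vals:
        wh1, rem1 = divmod(v, base)
        wh2, carry = divmod(carry + rem1, base)
        rows.append((wh1, wh2, wh1 + wh2))
    return rows

print([r[2] for r in split_and_carry([22, 1, 1, 2, 1, 6, 1])])
# RESULT column of the example: [5, 0, 1, 0, 0, 2, 0]
```

In T-SQL the carry does not need a loop: the carry is just the running total of remainders modulo 4, so wh2 for a row is `SUM(val % 4) OVER (ORDER BY id) / 4` minus that same windowed sum taken up to the previous row, divided by 4. A windowed SUM plus integer division therefore reproduces the RESULT column in a single set-based query.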

MDX: iif condition on the value of dimension

I have 1 Virtual cube consists of 2 cubes.
Example of fact table of 1st cube.
id object_id time_id date_id state
1 10 2 1 0
2 11 5 1 0
3 10 7 1 1
4 10 3 1 0
5 11 4 1 0
6 11 7 1 1
7 10 8 1 0
8 11 5 1 0
9 10 7 1 1
10 10 9 1 2
Where State: 0 - Ok, 1 - Down, 2 - Unknown
For this cube I have one measure, StateCount, which should count the states for each object_id.
For example, here we get this result:
for 10 : 3 times Ok , 2 times Down, 1 time Unknown
for 11 : 3 times Ok , 1 time Down
Second cube looks like this:
id object_id time_id date_id status
1 10 2 1 0
2 11 5 1 0
3 10 7 1 1
4 10 3 1 1
5 11 4 1 1
Where Status: 0 - out, 1 - in. I keep this in StatusDim.
In this table I keep records that should not be counted: if an object has status 1, I have to exclude it from the count.
If we intersect these tables and use StateCount we will receive this result:
for 10 : 2 times Ok , 1 times Down, 1 time Unknown
for 11 : 2 times Ok , 1 time Down
As far as I know, I must use a calculated member with an IIF condition. Currently I'm trying something like this:
WITH MEMBER [Measures].[StateTimeCountDown] AS (
    IIF(
        [StatusDimDown.DowntimeHierarchy].[DowntimeStatus].CurrentMember.MemberValue <> "in",
        [Measures].[StateTimeCount],
        NULL
    )
)
The multidimensional way to do this would be to make attributes from your state and status columns (ideally with user-understandable members, i.e. "Ok" rather than "0"). Then you can just use a normal count measure on the fact tables and slice by these attributes; there is no need for complex calculation definitions.

Issue with new data in Project Job Scheduling example

Thanks to your previous answer, I have been able to create a new small example (two simple projects, A and B, with two jobs each: A1, A2, B1, B2) in "Project Job Scheduling".
The files load correctly but the result is not the expected one: it seems to be influenced by the order of the project files in the main txt file.
If I load project A's data before project B's and run the example, I get one result; if I invert the sequence and load B as the "first" project, I get a completely different result.
Since this does not make sense to me, I am sure I am doing something wrong. Could you help me find out what?
To be precise: if I load AA_j1011_7.mm first, I get a total time of 19,
while if I load AA_j1011_8.mm as the first file, I get a total time of 15 (which is the expected result, btw).
Below I attach the main txt file and the two related project files (.mm)
Thanks in advance
Alessandro
Main File
2
0
50
j10.mm/AA_j1011_7.mm
0
50
j10.mm/AA_j1011_8.mm
2
1 1
AA_j1011_7.mm
************************************************************************
file with basedata : mm11_.bas
initial value random generator: 1182272221
************************************************************************
projects : 1
jobs (incl. supersource/sink ): 4
horizon : 50
RESOURCES
- renewable : 2 R
- nonrenewable : 0 N
- doubly constrained : 0 D
************************************************************************
PROJECT INFORMATION:
pronr. #jobs rel.date duedate tardcost MPM-Time
1 2 0 50 0 0
************************************************************************
PRECEDENCE RELATIONS:
jobnr. #modes #successors successors
1 1 1 2
2 1 1 3
3 1 1 4
4 1 0
************************************************************************
REQUESTS/DURATIONS:
jobnr. mode duration R 1 R 2
------------------------------------------------------------------------
1 1 0 0 0
2 1 6 1 0
3 1 6 0 1
4 1 0 0 0
************************************************************************
RESOURCEAVAILABILITIES:
R 1 R 2
1 1
************************************************************************
AA_j1011_8.mm
************************************************************************
file with basedata : mm11_.bas
initial value random generator: 1182272221
************************************************************************
projects : 1
jobs (incl. supersource/sink ): 4
horizon : 50
RESOURCES
- renewable : 2 R
- nonrenewable : 0 N
- doubly constrained : 0 D
************************************************************************
PROJECT INFORMATION:
pronr. #jobs rel.date duedate tardcost MPM-Time
1 2 0 50 0 0
************************************************************************
PRECEDENCE RELATIONS:
jobnr. #modes #successors successors
1 1 1 2
2 1 1 3
3 1 1 4
4 1 0
************************************************************************
REQUESTS/DURATIONS:
jobnr. mode duration R 1 R 2
------------------------------------------------------------------------
1 1 0 0 0
2 1 2 1 0
3 1 7 0 1
4 1 0 0 0
************************************************************************
RESOURCEAVAILABILITIES:
R 1 R 2
1 1
************************************************************************
Actually, it does make sense, due to the NP-complete/NP-hard nature of planning problems. The search space is far too large to explore exhaustively, so the solver relies on construction heuristics and local search; the order of the input can change which part of the search space is explored first, and therefore which solution is found within the time limit.
Read this blog about the search space of planning problems to build intuition. It's also partially covered in this chapter of the manual.
If there is one optimal solution (sometimes there are multiple) and the algorithms have enough time to reach it (sometimes that takes billions of years, even though they get near-optimal in seconds), then indeed the result should be the same.