Splitting the Field Name (Table Header) into Two Separate Lines

I have a dataset of the following structure:
Company.ID DDR (25632) PTL (89567)
2512 89 74
9875 78 96
7892 14 73
I would like to split the header into two different lines. In other words, the second part of each header should become the first data row. How is it possible to transform the dataset into the desired form (see below)?
Company.ID DDR PTL
- (25632) (89567)
2512 89 74
9875 78 96
7892 14 73
To replicate the above example in Qlik, run the code below:
LOAD * Inline [
[Company.ID], [DDR (25632)], [PTL (89567)]
2512,89,74
9875,78,96
7892,14,73
];
Any help or tip would be highly appreciated!

You need to loop over the columns, rename them, and concatenate the table with the new values. Here is an example I've written:
table:
LOAD * Inline [
Company.ID, DDR (25632), PTL (89567)
2512,89,74
9875,78,96
7892,14,73
];

// Split each field name on the space: the first token becomes the new
// field name, the second token is stored to be loaded as a data row below.
For i = 1 to NoOfFields('table')
    LET vField = FieldName($(i), 'table');
    LET vFieldName_$(i) = SubField('$(vField)', ' ', 1);
    LET vFieldValue_$(i) = SubField('$(vField)', ' ', 2);
    If '$(vField)' <> '$(vFieldName_$(i))' THEN
        Rename Field '$(vField)' TO '$(vFieldName_$(i))';
    EndIf
Next

// Append the split-off header parts as the first data row.
Concatenate(table)
LOAD * Inline [
'$(vFieldName_1)', '$(vFieldName_2)', '$(vFieldName_3)'
'$(vFieldValue_1)', '$(vFieldValue_2)', '$(vFieldValue_3)'
];
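For reference, run against the inline data above the resulting table should look like this (the Company.ID cell of the appended row comes out blank rather than "-", because SubField() finds no second token in that field name):
Company.ID DDR PTL
           (25632) (89567)
2512 89 74
9875 78 96
7892 14 73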

Related

Match and add exactly line

Folks! I am new to programming and stuck at some point. So, I have two files:
file1 contains:
s145 12 32 56
s430 48 56 20
s76 45 arg in
file2 contains only the names of the s values:
s145 protos
s430 retus
s76 cosess
I want to append the name of each s value to the matching line, after the last column (see the sketch below). Could someone show me how I can solve this issue? Thanks.
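Since the question names no language, here is a minimal Python sketch of one way to do it, assuming both files are whitespace-separated; the file names and the "NA" placeholder are assumptions:
# Build a lookup from s-value to name out of file2, then append the
# matching name to the end of each line of file1.
names = {}
with open("file2") as f2:
    for line in f2:
        key, name = line.split()
        names[key] = name

with open("file1") as f1, open("file1_with_names.txt", "w") as out:
    for line in f1:
        fields = line.split()
        fields.append(names.get(fields[0], "NA"))  # "NA" if no match found
        out.write(" ".join(fields) + "\n")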

Translate DataFrame using crosswalk in Julia

I have a very large dataframe (original_df) with columns of codes
14 15
21 22
18 16
And a second dataframe (crosswalk) which maps 'old_codes' to 'new_codes'
14 104
15 105
16 106
18 108
21 201
22 202
Of course, the resultant df (resultant_df) that I would like would have values:
104 105
201 202
108 106
I am aware of two ways to accomplish this. First, I could iterate through each code in original_df, find the code in crosswalk, then rewrite the corresponding cell in original_df with the translated code from crosswalk. The faster and more natural option would be to leftjoin() each column of original_df on 'old_codes'. Unfortunately, it seems I have to do this separately for each column, and then delete each column after its conversion column has been created -- this feels unnecessarily complicated. Is there a simpler way to convert all of original_df at once using the crosswalk?
You can do the following (I am using column numbers as you have not provided column names):
d = Dict(crosswalk[!, 1] .=> crosswalk[!, 2])
resultant_df = select(original_df,
                      [i => ByRow(x -> d[x]) for i in 1:ncol(original_df)],
                      renamecols=false)
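A self-contained version of the same idea, with hypothetical column names standing in for the unnamed ones in the question:
using DataFrames

# Hypothetical reconstruction of the question's two frames:
original_df = DataFrame(code1 = [14, 21, 18], code2 = [15, 22, 16])
crosswalk   = DataFrame(old_codes = [14, 15, 16, 18, 21, 22],
                        new_codes = [104, 105, 106, 108, 201, 202])

# Map every cell of every column through the crosswalk dictionary.
d = Dict(crosswalk[!, 1] .=> crosswalk[!, 2])
resultant_df = select(original_df,
                      [i => ByRow(x -> d[x]) for i in 1:ncol(original_df)],
                      renamecols=false)
Note that d[x] throws a KeyError for any code missing from the crosswalk; x -> get(d, x, missing) is a safer lookup if that can happen.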

U-SQL: filtering out empty/null strings (Microsoft Academic Graph)

I am new to U-SQL on Azure Data Lake Analytics.
I want to do what I think is a very simple operation but ran into trouble.
Basically, I want to create a query which ignores empty strings.
Using it in SELECT works, but not in the WHERE statement.
Below are the statement I am running and the cryptic error I get.
JOB
@xsel_res_1 =
    EXTRACT x_paper_id long,
            x_Rank uint,
            x_doi string,
            x_doc_type string,
            x_paper_title string,
            x_original_title string,
            x_book_title string,
            x_paper_year int,
            x_paper_date DateTime?,
            x_publisher string,
            x_journal_id long?,
            x_conference_series_id long?,
            x_conference_instance_id long?,
            x_volume string,
            x_issue string,
            x_first_page string,
            x_last_page string,
            x_reference_count long,
            x_citation_count long?,
            x_estimated_citation int?
    FROM @"adl://xmag.azuredatalakestore.net/graph/2018-02-02/Papers.txt"
    USING Extractors.Tsv();
@xsel_res_2 =
    SELECT x_paper_id AS x_paper_id,
           x_doi.ToLower() AS x_doi,
           x_doi.Length AS x_doi_length
    FROM @xsel_res_1
    WHERE NOT string.IsNullOrEmpty(x_doi);

@xsel_res_3 =
    SELECT *
    FROM @xsel_res_2
    SAMPLE ANY (5);

OUTPUT @xsel_res_3
TO @"/graph/2018-02-02/x_output/x_papers_x6.tsv"
USING Outputters.Tsv();
THE ERROR
Vertex failed
Vertex failure triggered quick job abort. Vertex failed: SV1_Extract[0][1] with error: Vertex user code error.
VertexFailedFast: Vertex failed with a fail-fast error
E_RUNTIME_USER_EXTRACT_ROW_ERROR: Error occurred while extracting row after processing 10 record(s) in the vertex' input split. Column index: 5, column name: 'x_original_title'.
E_RUNTIME_USER_EXTRACT_EXTRACT_INVALID_CHARACTER_AFTER_QUOTED_FIELD: Invalid character following the ending quote character in a quoted field.
Row selected
Component: RUNTIME
Message: Invalid character following the ending quote character in a quoted field.
Resolution: Column should be fully surrounded with double-quotes and double-quotes within the field escaped as two double-quotes.
Description: Invalid character is detected following the ending quote character in a quoted field. A column delimiter, row delimiter or EOF is expected. This error can occur if double-quotes within the field are not correctly escaped as two double-quotes.
Details:
Row Delimiter: 0x0
Column Delimiter: 0x9
HEX: 61 76 6E 69 20 74 65 72 6D 69 6E 20 75 20 70 6F 76 61 6C 6A 73 6B 6F 6A 20 6C 69 73 74 69 6E 69 20 69 20 6E 61 74 70 69 73 75 20 67 20 31 31 38 35 09 22 50 6F 20 6B 6F 6E 63 75 22 ### 20 28 73 74 61 72 69 20 68 72
UPDATE
By the way, the operation works on other datasets, so the problem is not the syntax as far as I can tell:
// Define schema of file, must map all columns
@searchlog =
    EXTRACT UserId int,
            Start DateTime,
            Region string,
            Query string,
            Duration int,
            Urls string,
            ClickedUrls string
    FROM @"/Samples/Data/SearchLog.tsv"
    USING Extractors.Tsv();

@searchlog_1 =
    SELECT * FROM @searchlog
    WHERE NOT string.IsNullOrEmpty(ClickedUrls);

OUTPUT @searchlog_1
TO @"/Samples/Output/SearchLog_output_x1.tsv"
USING Outputters.Tsv();
This is an unfortunate error display for this case.
Assuming text is utf-8, you can use a site like www.hexutf8.com to convert the hex to:
avni termin u povaljskoj listini natpisu g 1185 "Po koncu" (Stari hr
It looks like the input row contains at least one " character that is not properly escaped. It should look like this:
avni termin u povaljskoj listini natpisu g 1185 ""Po koncu"" (Stari hr
@saveenr's answer assumes that the values in your file are all quoted. Alternatively, if they are not quoted (and none of the values contain your column separator), then setting Extractors.Tsv(quoting:false) could help as well; see the sketch below.
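A minimal sketch of that alternative against the same file (quoting is a documented parameter of the built-in U-SQL extractors; everything else is copied from the job above):
// With quoting:false the extractor treats double-quotes as ordinary
// characters, so an unescaped " inside a field no longer fails the vertex.
@xsel_res_1 =
    EXTRACT x_paper_id long,
            x_Rank uint,
            x_doi string,
            x_doc_type string,
            x_paper_title string,
            x_original_title string,
            x_book_title string,
            x_paper_year int,
            x_paper_date DateTime?,
            x_publisher string,
            x_journal_id long?,
            x_conference_series_id long?,
            x_conference_instance_id long?,
            x_volume string,
            x_issue string,
            x_first_page string,
            x_last_page string,
            x_reference_count long,
            x_citation_count long?,
            x_estimated_citation int?
    FROM @"adl://xmag.azuredatalakestore.net/graph/2018-02-02/Papers.txt"
    USING Extractors.Tsv(quoting : false);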

How to get fitted values from clogit model

I am interested in getting the fitted values at set locations from a clogit model, including the population-level response and the confidence intervals around it. For example, I have data that looks approximately like this:
library(survival)  # provides clogit() and strata()

set.seed(1)
data <- data.frame(Used = rep(c(1, 0, 0, 0), 1250),
                   Open = round(runif(5000, 0, 50), 0),
                   Activity = rep(sample(runif(24, .5, 1.75), 1250, replace = T), each = 4),
                   Strata = rep(1:1250, each = 4))
Within the clogit model, Activity does not vary within a stratum, thus there is no Activity main effect.
mod <- clogit(Used ~ Open + I(Open*Activity) + strata(Strata), data = data)
What I want to do is build a newdata frame from which I can eventually plot marginal fitted values at specified values of Open, similar to a newdata design for a traditional glm model, e.g.:
newdata <- data.frame(Open = seq(0, 50, 1),
                      Activity = rep(max(data$Activity), 51))
However, when I try to run a predict function on the clogit, I get the following error:
fit <- predict(mod, newdata = newdata, type = "expected")
Error in Surv(rep(1, 5000L), Used) : object 'Used' not found
I realize this is because clogit in R is run through coxph, and thus the predict function is trying to predict relative risks between pairs of subjects within the same strata (in this case, Used).
My question, however, is whether there is a way around this. This is easily done in Stata (using the margins command), and manually in Excel; however, I would like to automate it in R since everything else is programmed there. I have also built this manually in R (example code below), but I keep ending up with what appear to be incorrect CIs in my real data, so I would like to rely on the predict function if possible. My code for manual prediction is:
coef <- data.frame(coef = summary(mod)$coefficients[, 1],
                   se = summary(mod)$coefficients[, 3])
coef$se <- summary(mod)$coefficients[, 4]
coef$UpCI <- coef[, 1] + (coef[, 2] * 2)  ### this could be *1.96 but using 2 for simplicity
coef$LowCI <- coef[, 1] - (coef[, 2] * 2) ### this could be *1.96 but using 2 for simplicity

fitted <- data.frame(Open = seq(0, 50, 2),
                     Activity = rep(max(data$Activity), 26))
fitted$Marginal <- exp(coef[1, 1] * fitted$Open +
                       coef[2, 1] * fitted$Open * fitted$Activity) /
                   (1 + exp(coef[1, 1] * fitted$Open +
                            coef[2, 1] * fitted$Open * fitted$Activity))
fitted$UpCI <- exp(coef[1, 3] * fitted$Open +
                   coef[2, 3] * fitted$Open * fitted$Activity) /
               (1 + exp(coef[1, 3] * fitted$Open +
                        coef[2, 3] * fitted$Open * fitted$Activity))
fitted$LowCI <- exp(coef[1, 4] * fitted$Open +
                    coef[2, 4] * fitted$Open * fitted$Activity) /
                (1 + exp(coef[1, 4] * fitted$Open +
                         coef[2, 4] * fitted$Open * fitted$Activity))
My end product would ideally look something like this, but as a product of the predict function:
[Example output of fitted values.]
Evidently Terry Therneau is less of a purist on the matter of predictions from clogit models: http://markmail.org/search/?q=list%3Aorg.r-project.r-help+predict+clogit#query:list%3Aorg.r-project.r-help%20predict%20clogit%20from%3A%22Therneau%2C%20Terry%20M.%2C%20Ph.D.%22+page:1+mid:tsbl3cbnxywkafv6+state:results
Here's a modification to your code that does generate the 51 predictions. I did need to put in a dummy Strata column.
newdata <- data.frame(Open = seq(0, 50, 1),
                      Activity = rep(max(data$Activity), 51),
                      Strata = 1)
risk <- predict(mod, newdata = newdata, type = "risk")
> risk/(risk+1)
1 2 3 4 5 6 7
0.5194350 0.5190029 0.5185707 0.5181385 0.5177063 0.5172741 0.5168418
8 9 10 11 12 13 14
0.5164096 0.5159773 0.5155449 0.5151126 0.5146802 0.5142478 0.5138154
15 16 17 18 19 20 21
0.5133829 0.5129505 0.5125180 0.5120855 0.5116530 0.5112205 0.5107879
22 23 24 25 26 27 28
0.5103553 0.5099228 0.5094902 0.5090575 0.5086249 0.5081923 0.5077596
29 30 31 32 33 34 35
0.5073270 0.5068943 0.5064616 0.5060289 0.5055962 0.5051635 0.5047308
36 37 38 39 40 41 42
0.5042981 0.5038653 0.5034326 0.5029999 0.5025671 0.5021344 0.5017016
43 44 45 46 47 48 49
0.5012689 0.5008361 0.5004033 0.4999706 0.4995378 0.4991051 0.4986723
50 51
0.4982396 0.4978068
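If you also want interval estimates from predict() rather than from the hand-rolled coefficients, one possibility (a sketch, not from the original answer) is to ask predict.coxph for the linear predictor with its standard errors and invert the logit afterwards, mirroring the risk/(risk+1) step above:
# Sketch: approximate CIs via the linear predictor and its standard error.
lp <- predict(mod, newdata = newdata, type = "lp", se.fit = TRUE)
fit   <- plogis(lp$fit)                  # identical to risk/(risk + 1)
upper <- plogis(lp$fit + 2 * lp$se.fit)  # ~95% bounds, using 2 as in the question
lower <- plogis(lp$fit - 2 * lp$se.fit)
Whether such intervals are statistically defensible for a conditional logistic model is exactly the kind of question the warning below alludes to.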
{Warning}: It's actually rather difficult for mere mortals to determine which of the R gods to believe on this one. I've learned so much R and statistics from each of those experts. I suspect there are matters of statistical concern or interpretation that I don't really understand.

How to prevent duplication in view when item is child of an element and also parent of itself

I have a mapping table consisting of this data:
PARENT_SYS_OBJECT_ID SYS_OBJECT_ID
18 38
38 38
and inside the view I have two conditions in the CASE statement:
WHEN SE.SYS_OBJECT_ID = 38 AND PSOM.PARENT_SYS_OBJECT_ID = 18 THEN
WHEN SE.SYS_OBJECT_ID = 38 AND PSOM.PARENT_SYS_OBJECT_ID = 38 THEN
However, only one of the cases ever fires, not both:
one case will have data, but the other case will be null.
How should I fix this?
Thanks a bunch!
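For reference on the behaviour described: a CASE expression is evaluated per row and returns the first branch whose condition is true, so the two labels can never both come from the same row. A minimal sketch (the mapping-table name and the labels are assumptions; only the two column names come from the question):
-- Each mapping row satisfies exactly one branch, so 'child of 18' and
-- 'parent of itself' appear on different rows, never together.
SELECT PSOM.PARENT_SYS_OBJECT_ID,
       PSOM.SYS_OBJECT_ID,
       CASE
           WHEN PSOM.SYS_OBJECT_ID = 38 AND PSOM.PARENT_SYS_OBJECT_ID = 18
               THEN 'child of 18'
           WHEN PSOM.SYS_OBJECT_ID = 38 AND PSOM.PARENT_SYS_OBJECT_ID = 38
               THEN 'parent of itself'
       END AS relation
FROM SYS_OBJECT_MAPPING AS PSOM;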