OpenRefine: Exporting nested XML with templating

I have a question regarding the templating option for XML in OpenRefine. Is it possible to export data from two columns into a nested XML structure if both columns contain multiple values that need to be split first?
Here's an example to better illustrate what I mean. My columns look like this:

Column1: https://d-nb.info/gnd/119119110;https://d-nb.info/gnd/118529889
Column2: Grützner, Eduard von;Elisabeth II., Großbritannien, Königin

Column1: https://d-nb.info/gnd/1037554086;https://d-nb.info/gnd/1245873660
Column2: Müller, Jakob;Meier, Anina
Each semicolon-separated value in Column1 has a corresponding value in Column2 (in the same order), and my desired output would look like this:
<rootElement>
<recordRootElement>
...
<edm:Agent rdf:about="https://d-nb.info/gnd/119119110">
<skos:prefLabel xml:lang="zxx">Grützner, Eduard von</skos:prefLabel>
</edm:Agent>
<edm:Agent rdf:about="https://d-nb.info/gnd/118529889">
<skos:prefLabel xml:lang="zxx">Elisabeth II., Großbritannien, Königin</skos:prefLabel>
</edm:Agent>
...
</recordRootElement>
<recordRootElement>
...
<edm:Agent rdf:about="https://d-nb.info/gnd/1037554086">
<skos:prefLabel xml:lang="zxx">Müller, Jakob</skos:prefLabel>
</edm:Agent>
<edm:Agent rdf:about="https://d-nb.info/gnd/1245873660">
<skos:prefLabel xml:lang="zxx">Meier, Anina</skos:prefLabel>
</edm:Agent>
...
</recordRootElement>
</rootElement>
(note: in my initial posting, the position of the root element was not indicated and it looked like this:
<edm:Agent rdf:about="https://d-nb.info/gnd/119119110">
<skos:prefLabel xml:lang="zxx">Grützner, Eduard von</skos:prefLabel>
</edm:Agent>
<edm:Agent rdf:about="https://d-nb.info/gnd/118529889">
<skos:prefLabel xml:lang="zxx">Elisabeth II., Großbritannien, Königin</skos:prefLabel>
</edm:Agent>
)
I managed to split the values separated by ";" for both columns like this:
{{forEach(cells["Column1"].value.split(";"),v,"<edm:Agent rdf:about=\""+v+"\">"+"\n"+"</edm:Agent>")}}
{{forEach(cells["Column2"].value.split(";"),v,"<skos:prefLabel xml:lang=\"zxx\">"+v+"</skos:prefLabel>")}}
but I can't figure out how to nest the split skos:prefLabel into the edm:Agent element. Is that even possible? If not, I would work with separate columns or another workaround, but I wanted to make sure there isn't a more direct way first.
Thank you!
Kristina

I am going to expand on RolfBly's answer, using the Templating Exporter from OpenRefine.
I make the following assumptions:
There is some other column to the left of Column1 acting as the record-identifying column (see first screenshot).
The columns actually have proper names (URI and Name below).
The columns URI and Name are the only columns with multiple values; otherwise we might produce empty XML elements with the following recipe.
We will use the information about records available via GREL to determine whether to write a <recordRootElement> or not.
Recipe:
Split first Name and then URI on the separator ";" via "Edit cells" => "Split multi-valued cells".
Go to "Export" => "Templating..."
In the prefix field, use the value:
<?xml version="1.0" encoding="utf-8"?>
<rootElement>
Please note that I skipped the namespace declarations for edm, skos, rdf and xml.
In the row template field use the value:
{{if(row.index - row.record.fromRowIndex == 0, '<recordRootElement>', '')}}
<edm:Agent rdf:about="{{escape(cells['URI'].value, 'xml')}}">
<skos:prefLabel xml:lang="zxx">{{escape(cells['Name'].value, 'xml')}}</skos:prefLabel>
</edm:Agent>
{{if(row.index - row.record.fromRowIndex == row.record.rowCount - 1, '</recordRootElement>', '')}}
The row separator field should just contain a linebreak.
In the suffix field use the value:
</rootElement>
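To sanity-check the template against the sample data, here is a small Python sketch. It is not OpenRefine itself, just an illustration of the same pairing and record-boundary logic; the values are taken from the question and the escaping stands in for GREL's escape(..., 'xml').

from xml.sax.saxutils import escape

# The two multi-valued cells from the sample data above, one tuple per record.
records = [
    ("https://d-nb.info/gnd/119119110;https://d-nb.info/gnd/118529889",
     "Grützner, Eduard von;Elisabeth II., Großbritannien, Königin"),
    ("https://d-nb.info/gnd/1037554086;https://d-nb.info/gnd/1245873660",
     "Müller, Jakob;Meier, Anina"),
]

print('<?xml version="1.0" encoding="utf-8"?>')
print("<rootElement>")
for uris, names in records:
    # "Split multi-valued cells" puts each URI/Name pair on its own row; the first
    # and last row of a record decide where the record element opens and closes,
    # which is what the row.index/row.record.fromRowIndex checks do in the template.
    print("<recordRootElement>")
    for uri, name in zip(uris.split(";"), names.split(";")):
        uri_esc = escape(uri, {'"': "&quot;"})   # also escape quotes for the attribute
        name_esc = escape(name)
        print(f'<edm:Agent rdf:about="{uri_esc}">')
        print(f'<skos:prefLabel xml:lang="zxx">{name_esc}</skos:prefLabel>')
        print("</edm:Agent>")
    print("</recordRootElement>")
print("</rootElement>")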

Disclaimer: If you're keen on using only OpenRefine, this won't be the answer you were hoping for. There may be ways in OR that I don't know of. That said, here's how I would do it.
Edit: The trick is to keep URL and literal side by side on one line. b2m's answer does just that: split the columns from right to left (Name first, then URI), not from left to right. You can then skip steps 2 and 3 to get the result in the image.
Split each column into 2 columns by the separator ";". You'll get 4 columns; 1 and 3 belong together, and 2 and 4 belong together. I'm assuming this will be the case consistently in your data.
Export 1 and 3 to one file, and 2 and 4 to another file, of any convenient format, using the custom tabular exporter.
Concatenate those two files into a single file using an editor (I use Notepad++) or any other method you prefer; there are several roads to Rome here. The result in OR would be something like this.
You then have all sorts of options to put text strings in front, between and after your two columns.
In OR, you could use Transform on the URL column to build your XML using the code below
(note the \n for newline; that's just a line feed, so you may want \r\n for carriage return + line feed if you're using Windows).
'<edm:Agent rdf:about="' + value + '">\n<skos:prefLabel xml:lang="zxx">' + cells.Name.value + '</skos:prefLabel>\n</edm:Agent>'
to get your XML in one column, like so
which you can then export using the custom tabular exporter again. Or instead you could use Add column based on this column in a similar manner, if you want to retain your URL column.
You could even do this in the editor without re-importing the file back into OR, but that's beyond the scope of this answer.
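If you end up doing the pairing outside OR, a minimal Python sketch of that "side by side" idea (hypothetical cell values from the question, not an OR feature) would be:

# Split both cell values on ";" and keep each URL next to its label.
urls  = "https://d-nb.info/gnd/1037554086;https://d-nb.info/gnd/1245873660".split(";")
names = "Müller, Jakob;Meier, Anina".split(";")

for url, name in zip(urls, names):
    # Same string as the GREL transform above, one <edm:Agent> block per pair.
    print(f'<edm:Agent rdf:about="{url}">\n'
          f'<skos:prefLabel xml:lang="zxx">{name}</skos:prefLabel>\n'
          '</edm:Agent>')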

Related

Querying full and sub-strings via multi-valued parameter using SQL

I am building a report with Microsoft SSRS (2012) that has a multi-value parameter @parCode for the user to filter for certain codes. This works perfectly fine. Generally, my query looks like this:
SELECT ...
FROM ...
WHERE
TblCode.Code IN (@parCode)
ORDER BY...
The codes are of following type (just an excerpt):
C73.0
C73.1
...
C79.0
C79.1
C79.2
Now, in addition to filtering for multiple of these codes, I would also like to be able to filter for sub-strings of the codes. Meaning, when the user enters (Example 1)
C79
for @parCode, the output should be
C79.0
C79.1
C79.2
So eventually the user should be able to enter (Example 2)
C73.0
C79
for @parCode, and the output would be
C73.0
C79.0
C79.1
C79.2
I managed to implement both functionalities separately, so either filtering for multiple "complete" codes or filtering for sub-strings of codes, but not both simultaneously.
I tried to do something like
...
WHERE
TblCode.Code IN (@parCode + '%')
ORDER BY...
but this breaks Example 2. On the other hand, if I try to work with LIKE or = instead of the IN statement, I can't make the parameter multi-valued.
Does anyone have an idea how to realize such functionality, or whether the IN statement paired with multi-valued parameters simply doesn't allow for it?
Thank you very much!
Assuming you are using SQL Server:
WHERE (
    TblCode.Code IN (@parCode)
    OR
    CASE
        WHEN CHARINDEX('.', TblCode.Code) > 0 THEN LEFT(TblCode.Code, CHARINDEX('.', TblCode.Code) - 1)
        ELSE TblCode.Code
    END IN (@parCode)
)
The first clause makes an exact match, so for your example it matches C73.0.
The second clause matches the characters before the dot, so it picks up the values C79.0, C79.1, C79.2, etc.
Warning: filtering on expressions like this prevents the use of an index on TblCode.Code.
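To make the matching rule concrete, here is a small Python sketch of the same logic (purely illustrative, not SSRS or T-SQL), using the example values from the question:

# Exact match, or match on the part before the dot ("C79" should match "C79.1").
def matches(code, selected):
    prefix = code.split(".", 1)[0]
    return code in selected or prefix in selected

codes = ["C73.0", "C73.1", "C79.0", "C79.1", "C79.2"]
selected = ["C73.0", "C79"]          # what the user picks for @parCode (Example 2)
print([c for c in codes if matches(c, selected)])
# -> ['C73.0', 'C79.0', 'C79.1', 'C79.2']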

Is there a more efficient way to parse a fixed-width txt file in Access than using queries?

I have a few large fixed-width text files that contain multiple specification formats. I need to parse the txt files based on a record identifier at a set location in the file; that identifier can be at a different position depending on the specification.
I have written queries for each of the different specifications (95 of them) with the start position and length hard-coded into the query, using the Mid() function and a WHERE clause to filter on the [Record Identifier] for the specification. As you can see below, the two specifications in the WHERE clause have different placements in the txt file.
SELECT Mid([AllData],1,5) AS PlanNumber, Mid([AllData],6,4) AS Spaces1, Mid([AllData],10,3) AS Filler1, Mid([AllData],13,11) AS SSN, Mid([AllData],24,1) AS AccountIdentifier, Mid([AllData],25,5) AS Filler2, Mid([AllData],30,2) AS RecordIdentifier, Mid([AllData],32,1) AS FieldType, Mid([AllData],33,4) AS Filler3, Mid([AllData],37,8) AS HireDate, Mid([AllData],45,8) AS ParticipationDate, Mid([AllData],53,8) AS VestinDate, Mid([AllData],61,8) AS DateOfBirth, Mid([AllData],77,1) AS Spaces2, Mid([AllData],78,1) AS Reserved1, Mid([AllData],79,1) AS Reserved2, Mid([AllData],80,1) AS Spaces3
FROM TBL_Company1
WHERE (((Mid([AllData],30,2))="02") AND ((Mid([AllData],32,1))="D"));
Or:
SELECT Mid([AllData],1,5) AS PlanNumber, Mid([AllData],6,4) AS Spaces1, Mid([AllData],10,3) AS Filler1, Mid([AllData],13,11) AS SSN, Mid([AllData],24,1) AS AccountIdentifier, Mid([AllData],25,7) AS RecordIdentifier, Mid([AllData],32,22) AS StreetAddressForBank, Mid([AllData],54,20) AS CityForBank, Mid([AllData],74,2) AS StateForBank, Mid([AllData],76,5) AS ZipCodeForBank
FROM TBL_Company1
WHERE (((Mid([AllData],25,7))="49EFTAD"));
Is there a way to parse this out without having to hard-code every position and length into the code?
I was thinking of having a table with all of the specifications in it, and an import function that looks at the specification table and parses the data accordingly into a new table, or maybe something else.
What I have done is not very scalable, and if the format changes a little I would have to go back and change each query.
Any help is greatly appreciated.
I think in your situation, I'd want to be able to generate the SQL statement dynamically, as you suggest.
I'd have a table something like:
Format#,Position,OutColName,FromPos,Length,WhereValue
1,1,"PlanNumber",1,5,
1,2,"Spaces1",6,4,
...
1,n,,30,2,"02"
1,n+1,,32,1"D"
and then some VBA to process it and build and execute the SQL string(s). The SELECT clause entries would be recognized by having a value in the OutColName field, and WHERE clause entries by values in the WhereValue column.
Of course this is only more "efficient" in the sense that it's a bit easier to code up new formats or fix/modify existing ones.
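As a rough illustration of that idea (the answer suggests VBA; this is only a Python sketch of the same string building, with made-up spec rows matching the example table above):

# Hypothetical spec rows: SELECT entries carry an output column name,
# WHERE entries carry a value to match at a given position/length.
select_rows = [("PlanNumber", 1, 5), ("Spaces1", 6, 4), ("RecordIdentifier", 30, 2)]
where_rows = [(30, 2, "02"), (32, 1, "D")]

select_list = ", ".join(
    f"Mid([AllData],{start},{length}) AS {name}"
    for name, start, length in select_rows
)
where_list = " AND ".join(
    f'Mid([AllData],{start},{length})="{value}"'
    for start, length, value in where_rows
)
sql = f"SELECT {select_list} FROM TBL_Company1 WHERE {where_list};"
print(sql)   # the generated statement has the same shape as the hand-written queries above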

Splunk: formatting a CSV file during indexing, values are being treated as new columns?

I am trying to create a new field during indexing; however, the fields become columns instead of values when I try to concatenate them. What am I doing wrong? I have looked in the docs but can't see what I'm missing.
Would appreciate some help on this.
e.g.
.csv file
**Header1**, **Header2**
Value1 ,121244
transforms.conf
[test_transformstanza]
SOURCE_KEY = fields:Header1,Header2
REGEX = ^(\w+\s+)(\d+)
FORMAT = testresult::$1.$2
WRITE_META = true
fields.conf
[testresult]
INDEXED = True
The regex is good and creates two groups from the data, but why is it creating a new field instead of assigning the value to testresult? If I use testresult::$1 or testresult::$2 on its own it works fine, but when concatenating it creates multiple headers with the value as the header name. Is there an easier way to concatenate fields? E.g. if you have a CSV file with header names, can you not just refer to the header names? (I know how to do this using calculated fields, but I want to do it during indexing.)
Thanks

Populate the content of two files into a final file using Pentaho (different fields)

I have two files.
File A has 5 columns:
Sno,name,age,key,checkvalue
File B has 3 columns:
Sno,title,age
I want to merge these two into a final file C, which has:
Sno,name,age,key,checkvalue
I tried renaming "title" to "name" and then used "Add constants" to add the other two fields,
but when I try to merge them, I get the error below:
"
The name of field number 3 is not the same as in the first row received: you're mixing rows with different layout. Field [age String] does not have the same name as field [age String].
"
How do I solve this issue?
After getting the input from file B, use a Select values step to remove the title column. Then use an Add constants step to add the new columns name, key and checkvalue, and set "Set empty string?" to Y. Finally, do the join accordingly. It won't fail, since both files then have the same number of columns. Hope this helps.
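Outside of Spoon, the column-alignment idea looks roughly like this in Python (just a sketch; the file names are hypothetical, the column names come from the question, title is kept in the name slot as the rename does, and empty strings stand in for the added constants):

import csv

# Give both inputs the same columns, in the same order, before appending.
target_header = ["Sno", "name", "age", "key", "checkvalue"]

with open("file_c.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(target_header)
    with open("file_a.csv", newline="") as fa:          # Sno,name,age,key,checkvalue
        rows = csv.reader(fa)
        next(rows)                                      # skip header
        writer.writerows(rows)
    with open("file_b.csv", newline="") as fb:          # Sno,title,age
        rows = csv.reader(fb)
        next(rows)
        # title takes the place of name; key/checkvalue stay empty
        writer.writerows([sno, title, age, "", ""] for sno, title, age in rows)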
Actually, there was an issue with the fields ... a field mismatch. I used the "Rename" option and it got fixed.

How to make LIKE in SQL look for a specific string instead of just a wildcard

My SQL Query:
SELECT
[content_id] AS [LinkID]
, dbo.usp_ClearHTMLTags(CONVERT(nvarchar(600), CAST([content_html] AS XML).query('root/Physicians/name'))) AS [Physician Name]
FROM
[DB].[dbo].[table1]
WHERE
[id] = '188'
AND
(content LIKE '%Urology%')
AND
(contentS = 'A')
ORDER BY
--[content_title]
dbo.usp_ClearHTMLTags(CONVERT(nvarchar(600), CAST([content_html] AS XML).query('root/Physicians/name')))
The issue I am having is that if the content is Neurology or Urology, it appears in the result.
Is there any way to make it so that if I search for Urology it will only give Urology results, and if I search for Neurology it will only give Neurology results?
It can be Urology, Neurology, Internal Medicine, etc. The two terms above are what is causing the issue.
The content is an ntext column with XML tags inside, for example:
<root><Location><location>Office</location>
<office>Office</office>
<Address><image><img src="Rd.jpg?n=7513" /></image>
<Address1>1 Road</Address1>
<Address2></Address2>
<City>Qns</City>
<State>NY</State>
<zip>14404</zip>
<phone>324-324-2342</phone>
<fax></fax>
<general></general>
<from_north></from_north>
<from_south></from_south>
<from_west></from_west>
<from_east></from_east>
<from_connecticut></from_connecticut>
<public_trans></public_trans>
</Address>
</Location>
</root>
Update: the content column also has the following XML:
<?xml version="1.0" encoding="UTF-8"?>
<root>
<Physicians>
<name>Doctor #1</name>
<picture>
<img src="phys_lab coat_gradation2.jpg?n=7529" />
</picture>
<gender>M</gender>
<langF1>
English
</langF1>
<specialty>
<a title="Neurology" href="neu.aspx">Neurology</a>
</specialty>
</Physicians>
</root>
If I search for Lab, the result appears because the text "lab" is in the column.
This is what I would do if you're not into making a CLR proc to use regexes (SQL Server doesn't have regex capabilities natively):
SELECT
[...]
WHERE
(content LIKE @strService OR
content LIKE '%[^a-z]' + @strService + '[^a-z]%' OR
content LIKE @strService + '[^a-z]%' OR
content LIKE '%[^a-z]' + @strService)
This way you check whether content is equal to @strService, OR the word exists somewhere within content with non-letters around it, OR it's at the very beginning or very end of content with a non-letter following or preceding it, respectively.
[^...] means "a character that is none of these". If there are other characters you don't want to accept before or after the search term, put them in all four of the square-bracket groups (after the ^!). For instance, [^a-zA-Z_].
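For comparison, the regex equivalent of these word-boundary patterns (sketched in Python here, since SQL Server has no native regex and the exact CLR code depends on your setup) would be:

import re

content = '<a title="Neurology" href="neu.aspx">Neurology</a>'
term = "Urology"

# Same word-boundary idea as the LIKE patterns above: no letter directly
# before or after the term. IGNORECASE mirrors a case-insensitive collation.
pattern = rf"(?<![a-zA-Z]){re.escape(term)}(?![a-zA-Z])"
print(bool(re.search(pattern, content, re.IGNORECASE)))                # False
print(bool(re.search(pattern, "Dept. of Urology, NY", re.IGNORECASE))) # True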
As I see it, your options are to either:
Create a function that processes a string and finds a whole match inside it
Create a CLR extension that allows you to call .NET code and leverage the REGEX capabilities of .NET
Aaron's suggestion is a good one IF you can know up front all the terms that could be used for searching. The problem I could see is if someone searches for a specific word combination.
Databases are notoriously bad at semantics (i.e. they don't understand the concept of neurology or urology - everything is just a string of characters).
The best solution would be to create a table which defines the terms (two columns, PK and the name of the term).
The query is then a join:
join table1.term_id = terms.term_id and terms.term = 'Urology'
That way, you can avoid the LIKE and search for specific results.
If you can't do this, then SQL is probably the wrong tool. Use LIKE to get a set of results which match and then, in an imperative programming language, clean those results from unwanted ones.
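As a tiny illustration of why the terms table helps (made-up data, Python purely for demonstration): matching becomes exact equality instead of substring search, so "Urology" no longer pulls in "Neurology".

# Hypothetical normalized data: content rows reference a term by id.
terms = {1: "Urology", 2: "Neurology", 3: "Internal Medicine"}
table1 = [("link-1", 2), ("link-2", 1), ("link-3", 3)]   # (content_id, term_id)

wanted = "Urology"
hits = [cid for cid, tid in table1 if terms[tid] == wanted]   # exact equality, no LIKE
print(hits)   # -> ['link-2'] only; "Neurology" can no longer match by substring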
Judging from your content, can you not leverage the fact that there are quotes in the string you're searching for?
SELECT
[...]
WHERE
(content LIKE '%"Urology"%')