Adding nested triple to blank nodes in OntoRefine not possible? - graphdb

I'm using the latest version of OntoRefine and want to add a nested triple to a blank node. I'm following the exact instructions on the Ontotext website. In the interface, an arrow is supposed to appear next to the object (in the picture, on the right side, just under the waste bucket icon), but it does not appear when the object is a blank node.
It does appear when I create an IRI, but in my case a blank node would be the way to go. When I import the files from the tutorial on the Ontotext website, I still do not see the nested triples that are in the example. However, when I export as RDF, the correct triples - including the ones with blank nodes as subjects - are created.
So it seems that the functionality works, but the interface does not expose it. I was also able to manually change the JSON file for my own data and got the correct RDF triples.
The question is: am I the only one who has a problem with the interface? Am I doing something wrong, or is this a bug? I'm curious to know what your experiences are.

In what version of GraphDB are you facing this issue? Problems with blank nodes in OntoRefine's RDF Mapper were addressed in GraphDB version 9.5.

This bug has reappeared, but will be fixed in OntoRefine 1.1.1 to be released next week.

Related

SPARQL Query tab in Protege doesn't show anything

I open the SPARQL Query tab in Protege, but it doesn't show anything.
How can I write my query?
It happens in version 2.0.1 of the Protégé SPARQL Plugin, bundled with Protege 5.2, after any ontology is loaded.
 
Update the "OWLAPI RDF Library" and "Protege SPARQL Plugin" plugins to the latest versions using the auto-update mechanism (which appears when starting Protege, or can be activated through the File > Check for plugins… menu).
 
Source: https://github.com/protegeproject/sparql-query-plugin/issues/10.
For Protege 5.5.0, you may also need to add the query view from the menu:
Window > View > Query View > SPARQL Query
and drop it into the SPARQL Query tab.
Try enlarging the heap size for Protégé.
I have had the same problem with the SPARQL Query tab and others. Updating plugins never worked for me; each time I just delete the new plugins and Protégé works as if nothing happened.
So the solution I have found so far is not to update the plugins but to enlarge the heap size.
How to do it?
There is a wiki for Protégé 3 and 4 here: https://protegewiki.stanford.edu/wiki/Setting_Heap_Size, but not for Protégé 5, so I had to figure it out myself.
Well, you should just open the Protege.l4j.ini file and change the heap size limits from {200,500} to {2000,5000} (I am not sure whether your computer can handle that; read the wiki above to learn more).
It should be like this:
-Xms2000M
-Xmx5000M
-Xss16M
Hope this can help someone there.
Best regards,
AmelieSpark.
Another workaround is to download the "Snap SPARQL Query" plugin and add its view to the SPARQL tab.

Realm database performance

I'm trying to use this database with react-native. First of all, I've found out that it can't retrieve plain objects - I have to retrieve all of the properties in the desired object tree recursively. And it takes about a second per object (~50 numeric props). Slow-ish!
Now, I've somehow imported ~9000 objects into it (each up to 1000 chars including titles). It looks like there is no easy way to import data, at least it is not described in the docs. Anyway, that's acceptable. But now I've found out that my database file (default.realm) is 3.49 GB (!). The JSON file I was importing is only 6.5 MB. I've opened default.realm with Realm Browser and it shows only those ~9000 objects, nothing else. Why so heavy?
Either I don't understand something very fundamental about this database, or it is complete garbage. I really hope I'm wrong. What am I doing wrong?
Please make sure you are not running in Chrome debug mode; that is probably why things seem so slow. As far as the file size issue goes, it would be helpful if you posted your code so we can figure out why that is happening.
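If it helps, here is a rough sketch of how the import could be done in a single write transaction rather than one write per object (the 'Item' schema, its properties, and the ./data.json path are just placeholders, since you haven't posted your code):

var Realm = require('realm');

// Placeholder schema - the real property names/types come from your own data.
var ItemSchema = {
  name: 'Item',
  properties: {
    title: 'string',
    value: 'double'
  }
};

var realm = new Realm({ schema: [ItemSchema] });
var data = require('./data.json'); // placeholder path to the imported JSON

// One write transaction for all ~9000 objects instead of one per object.
realm.write(function () {
  data.forEach(function (obj) {
    realm.create('Item', obj);
  });
});
realm.close();

Whether this also explains the file size is only a guess without seeing your import code, which is why posting it would help.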

Unmarshaling SOS DescribeSensor response via JSONIX yields incomplete object

I am attempting to use Jsonix to unmarshal the XML response from an SOS DescribeSensor request. In the broader scope I am going to be using Jsonix to unmarshal all responses from SOS, particularly 2.0. I noticed that the response uses the SML (SensorML) namespace, so I added the extra module dependencies and sub-dependencies (namely GML_3_1_1, SWE_1_0_1, IC_2_0, SMIL_2_0, SMIL_2_0_Language, and of course SensorML_1_0_1). Before I added these I noticed the result was generic JSON (see the first screenshot, particularly near sml:physicalsystem). After I added the dependencies I got an error in my console during part of the unmarshaling process which I do not understand (see the second screenshot). Here is a link to the XML response from the server for reference: https://drive.google.com/file/d/0B8LdnPVJpHz7M3VGb0FZc2lQcjQ/view?usp=sharing. I would really like to understand whether this has anything to do with the ordering of the modules when I create the context, though I believe it is fine. Once this is solved, I have two follow-up questions.
Is it reasonable to expect (in general) that using the modules built from ogc-schemas on the highsource GitHub page will allow me to handle all responses via Jsonix, i.e. that every element will always be mapped to a defined type? I know these schemas/mappings are very complicated.
Are there any other tools I can use to verify the modules or validate them against schemas to make life easier rather than tracking down elements on an individual basis or tracing through various module files when jsonix seems to parse incorrectly?
Thanks in advance - Richard3d
var context = new Jsonix.Context([
    XLink_1_0, GML_3_2_1, IC_2_0, SMIL_2_0, SMIL_2_0_Language,
    GML_3_1_1, SWE_1_0_1, SensorML_1_0_1, OWS_1_1_0, SWE_2_0,
    SWES_2_0, WSN_T_1, WS_Addr_1_0_Core, OM_2_0,
    ISO19139_GMD_20070417, ISO19139_GCO_20070417, ISO19139_GSS_20070417,
    ISO19139_GTS_20070417, ISO19139_GSR_20070417, Filter_2_0, SOS_2_0
]);
Disclaimer: I am the author of jsonix and main dev of ogc-schemas.
First of all, you're on the right track, stay on it.
Yes, if you have all the required mappings then you should get "nice" JSON with all the properties having specific types, cardinalities etc.
The goal of Jsonix is to provide bi-directional XML<->JSON conversion with deterministic structure, types and cardinalities.
The goal of OGC Schemas is to provide JAXB and Jsonix mappings for all of the OGC Schemas.
So together these two should allow you to transform any OGC XML from/to JSON.
The "generic JSON" you saw was actually just DOM. If a property allows DOM and Jsonix does not have a mapping for a certain element, it is simply kept as DOM. You were just missing the SensorML mappings.
You're right, the structure of the schema dependencies is very complex. But this is something we should take to the OGC. :) It's a bit crazy that you need something like a dozen schemas to read sensor data. I was actually intending to build automatic loading of dependencies but have not yet implemented this feature.
The next problem, with GML_3_1_1.AbstractFeatureType, is probably this issue. Try changing the order of the mappings (move GML_3_1_1 to an earlier position). The order of the mappings should not actually be significant but, well, there's a bug.
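If reordering is what it takes, the change would only be in the array you pass to the context - for instance, moving GML_3_1_1 (after XLink_1_0, which it depends on) to the front. Which exact order works, if any, you may have to find by trial and error:

var context = new Jsonix.Context([
    XLink_1_0, GML_3_1_1, SWE_1_0_1, SensorML_1_0_1,
    GML_3_2_1, IC_2_0, SMIL_2_0, SMIL_2_0_Language,
    OWS_1_1_0, SWE_2_0, SWES_2_0, WSN_T_1, WS_Addr_1_0_Core,
    OM_2_0, ISO19139_GMD_20070417, ISO19139_GCO_20070417,
    ISO19139_GSS_20070417, ISO19139_GTS_20070417, ISO19139_GSR_20070417,
    Filter_2_0, SOS_2_0
]);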
Tools to cross-check - no, probably not. My approach is to do roundtrip tests (unmarshal-marshal-unmarshal, then check equality). From experience, there are normally a couple of caveats at the start, but then it works by design. Of course there are bugs in Jsonix and there may be problems with mappings, but these get sorted out.
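A roundtrip check along those lines could look roughly like this (xmlText and the deep-equality assertion are placeholders; createUnmarshaller/createMarshaller and unmarshalString/marshalString are the Jsonix calls involved):

var assert = require('assert'); // or any deep-equality check you prefer

var unmarshaller = context.createUnmarshaller();
var marshaller = context.createMarshaller();

// XML -> object -> XML -> object, then compare the two objects.
var first = unmarshaller.unmarshalString(xmlText); // xmlText: the DescribeSensor response
var roundtrippedXml = marshaller.marshalString(first);
var second = unmarshaller.unmarshalString(roundtrippedXml);

assert.deepEqual(second, first);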
Also feel free to create a support project here:
https://github.com/highsource/jsonix-support
For instance https://github.com/highsource/jsonix-support/s/sos.
Here's an example of such a support project:
https://github.com/highsource/jsonix-support/tree/master/l/lightstalker89
I need this because just downloading XML from Google Drive (a) takes effort on my side to set up the support project and (b) is legally risky, as I have no idea where this XML comes from and whether I have the rights/license to add these files to my test suites.

Access closure property names in the content block at runtime

I want to evaluate my content blocks before running my test suite, but the closures' property names are in bytecode already. I'm looking for the cleanest solution (compared with parsing the source manually).
I already tried the solution outlined in this post (and I'd still wind up doing some regex/parsing) but could only get it to work via the script execution engine; it failed in the IDE and in GroovyConsole. Rather than embedding a Groovy script in the project's code, I thought I'd try using Geb's native classes.
Is building on the suggestion here about extending Geb Navigators viable for Geb's PageContentSupport class, whose contentTemplates contain a LinkedHashMap of exactly what I need? If yes, could someone provide guidance? If not, any suggestions?
It is currently not possible to get hold of all content elements for a given page/module. Feel free to create an issue for this in Geb's bug tracker, but remember that all that Geb can provide is either a list of content element names or a map from these names to closures that create these elements.
Having that information isn't a generic solution to your problem because content elements can take parameters, and there are situations where your content elements will be available on the page only after some other actions are performed (for example, you have to click a button to reveal a section of the page that uses AJAX to retrieve its content). So I'm afraid that simply going over all elements and checking that they don't throw any errors will not cut it.
I'm still struggling to see what "evaluating" all content elements prior to running the suite would buy you. Are you after verifying that your content elements still work, to get faster feedback than running the whole suite? I'm pretty sure that you won't be able to fully automate detection of content definitions that no longer work. In my view it will be more effort than it's worth.

Dynamic Magento Grid built with database query

This is my first time asking a question here so be nice ;p ... I'm working with Magento (and Zend Framework) for the first time and I'm trying to build a custom grid that will be populated based on a manually written query. I'm trying to extend Mage_Core_Model_Mysql4_Collection_Abstract to allow a query to be loaded into it, and then parse the select fields in the extended Grid class to make it dynamic... Is this even possible or am I beating a dead horse? I've been at it for a week now and I'm not getting anywhere. The problem seems to be that the __Model_mysql4_Collection class has to be initialized with a resource model using _init() in the constructor.
As a learning exercise, use the module creator to make an admin grid page and see how that is done. Or even modify its output to get what you need.
There will be a grid container block, a grid block (with _prepareCollection and _prepareColumns methods), a model, a resource model (representing a single record) and a collection resource model (representing several records).
Providing your own _init() methods shouldn't be any sort of problem. Perhaps you'd like to post your config.xml and your code so far in a pastebin or something.