Buffer allocation exception in Protege - SPARQL

I am working on a large ontology in Protege (5.5.0). Whenever
I try to add the SPARQL Query tab for my ontology, I get only a blank interface under the SPARQL Query tab, showing nothing but the following lines:
"An error occurred whilst creating the view
BufferAllocationException:
Not enough memory to allocate buffers to grow from 0 -> 32 element."
I tried to increase the heap size in the Java control panel, but the problem is the same as before.
Somewhere I found a suggested solution to this problem: updating the OWL API RDF library. But I could not find such a plugin in the list of plugins.
Can you please point me to a solution?
It would be a great favour.
I expected a proper query entry field in the SPARQL Query tab, but the whole SPARQL Query window is blank, as in the screenshot below.
[Screenshot: blank SPARQL Query tab]

The heap size should be changed in the Protege startup script: a file named run.sh or run.bat, depending on your OS. The change you show may not take effect if Protege is running on a different JVM than the one you set the parameter for.
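For example (a sketch only: the exact command line inside run.sh/run.bat differs between Protege versions, so keep whatever options and classpath your script already has and only raise -Xmx):

    # In run.sh (or run.bat on Windows), find the line that launches Java
    # and raise the maximum heap; 4G here is just an illustration.
    java -Xms500M -Xmx4G <existing options and classpath unchanged>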
OWLAPI is not a plugin for Protege; it's a library Protege relies on, shipped in the /bundles folder. To update to a newer build, you can download the Maven artifact for owlapi-osgidistribution and replace the file in the bundles folder with the new one. However, I don't believe it's related to the problem you describe.
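If you do want to try it, the swap is just a file replacement. A hedged sketch (the version number is only an example; pick the build you need from Maven Central, and back up the old jar first):

    # Fetch the OSGi distribution of OWLAPI from Maven Central
    # (groupId net.sourceforge.owlapi, artifactId owlapi-osgidistribution).
    wget https://repo1.maven.org/maven2/net/sourceforge/owlapi/owlapi-osgidistribution/4.5.9/owlapi-osgidistribution-4.5.9.jar
    # Replace the jar shipped in Protege's bundles folder.
    cp owlapi-osgidistribution-4.5.9.jar /path/to/Protege-5.5.0/bundles/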

Related

Setting default config for Visual Graph in GraphDB

Is it possible to set a custom advanced graph configuration as a default config to use upon clicking 'Visual Graph' button (e.g. from the 'Graph overview' sidebar) in GraphDB Free 8.4.1?
I have declared advanced configurations that are suitable for exploring my data, but they are only available through 'Visual Graph' menu. I would like to use them (at least one) also when switching to graph visualisation from the view of triples.
I haven't found such an option so far. A desperate move would be to rewrite the URL and manually add a fixed '?config=my_config' parameter, but I hope there is a better way to solve it.
There is no option to configure the default queries through the GraphDB Workbench, but you can change the queries directly in the files of your GraphDB distribution. They are located in graphdb/graphdb-<version>/lib/workbench/WEB-INF/lib/graphdb-framework-graph-explore-<version>.jar, under graph-explore-queries/. resourceLinks.sparql is the query used for link expansion.
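Since the queries live inside a jar, one way to change them is to update the archive in place with the jar tool. A sketch (stop GraphDB and back up the jar first; <version> stands for your distribution's version):

    cd graphdb-<version>/lib/workbench/WEB-INF/lib
    # Extract the query, edit it, then write it back into the archive.
    jar xf graphdb-framework-graph-explore-<version>.jar graph-explore-queries/resourceLinks.sparql
    # ... edit graph-explore-queries/resourceLinks.sparql ...
    jar uf graphdb-framework-graph-explore-<version>.jar graph-explore-queries/resourceLinks.sparql
    # Restart GraphDB so the Workbench picks up the change.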

Emacs equivalent of Xcode's "Open Quickly"

I'm trying to get a Cocoa development environment working in Emacs, and I'm 80% of the way there. The one feature I miss is Xcode's "Open Quickly", which basically performs a fuzzy match of the string you type against the filenames referenced in the Xcode workspace and the symbols defined in those files.
My problem is that our project is huge: if I generate a TAGS file using etags for the .h and .m files in our project's sub-directories, the result is over a gig in size and Emacs complains "TAGS file is large. Really open?", and if I say yes, then Emacs hangs and becomes essentially unusable. Of course, this is before I've even considered indexing tags for system libraries. I've also tried projectile, but unfortunately it's similarly unusable on a project of my size (on the order of a full minute to find a match).
It occurs to me that all the indexing information I really want is in the Xcode projects themselves, so if I had an Emacs package that could parse them and traverse their dependencies, that might be a start, but I'm not aware of any such package.
Any suggestions/solutions in this respect?
I've never found a single function quite as convenient as Xcode's "Open Quickly", but these days I use
helm-projectile-git-grep when I want to match on strings I know to be in the filenames, and
helm-git-grep for quick searches through the contents of the files themselves.
I've found that this gets me really close to what I wanted in my original question.
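For reference, a minimal init.el sketch wiring those two commands to keys (the key bindings are my own choice; it assumes helm, helm-projectile, and helm-git-grep are installed, e.g. from MELPA):

    ;; Assumes helm-projectile and helm-git-grep are already installed.
    (require 'helm-projectile)
    (require 'helm-git-grep)
    ;; Git grep scoped to the current projectile project.
    (global-set-key (kbd "C-c p g") #'helm-projectile-git-grep)
    ;; Quick git grep through file contents of the current repository.
    (global-set-key (kbd "C-c g") #'helm-git-grep)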

Pharo 3.0 - Is persistence automatic?

I noticed that after running into an issue last night, relaunching Pharo 3.0 didn't "undo" my working set - everything appeared to be as it was when I closed it. I saw where Fuel is included with Pharo now - does it automatically persist your session? I was under the impression that you had to do some tricks to make it actually work with your application.
Am I wrong?
Pharo uses an image. The image is basically a snapshot of your memory contents when you use Pharo.
Upon startup this image is loaded from the image file into memory and Pharo starts to run. The inverse happens when you save (snapshot) your session: the current state of memory is saved to the .image file. That includes all tools opened in the current session, all running processes, and all live objects.
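For instance, the same snapshot can be triggered programmatically (standard Pharo API):

    "Save the current object memory to the .image file and keep running.
     Passing true for the second argument would quit Pharo after saving."
    Smalltalk snapshot: true andQuit: false.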
This has nothing to do with Fuel, which is a separate object serialization library.
There are two mechanisms in Pharo:
The image. The image is a memory snapshot containing all the objects (in particular, the compiled methods and classes as objects). When you save the image, you are saving the complete state of the system to disk. When you open an image, the memory is loaded back and execution continues where it stopped. In fact there is also another file, called the changes file. This file contains the textual representation of the classes and methods you edited; the tools use this file to show you method code, for example.
In addition to the image (memory snapshot), the system permanently records your code edits: after each compilation, the change is committed to the changes file. You can see what you did using the change sorter or the version browser (note that if you do not save your image, your changes will not be browsable in the change sorter, because it is a simple tool). Even if you did not save your image, your changes are still logged in the changes file. There is a way to recover them using the "Recover lost changes..." menu item under the Tools menu.
With these tools you can browse all the changes that have been recorded automatically and replay them. We are working on new tools for the future.
In general, though, you should not rely on such tools. Use the Pharo distributed version management system (Monticello) to create packages and publish them on forges such as SmalltalkHub.
Finally, Fuel is an object serializer that is not used for saving Pharo snapshots. Fuel is a fast serializer that people use when they want to select what they serialize, usually graphs of objects.
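To contrast with image saving, a minimal Fuel sketch (the file name and the serialized object are arbitrary):

    "Serialize an object graph to a file, then materialize a copy of it."
    | copy |
    FLSerializer serialize: OrderedCollection new toFileNamed: 'graph.fuel'.
    copy := FLMaterializer materializeFromFileNamed: 'graph.fuel'.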
All this information is also available in the free Pharo books: http://pharobyexample.org
and http://rmod.lille.inria.fr/pbe2/

MarkLogic facets on binary content

I ingested large binaries into MarkLogic using the content ingestion framework, leaving the binary files on the file system, and I used a transformation to extract metadata from the images into properties. When I search this content using the search API, it does not return facets. I believe this happens because the fragment returned contains the pointer to the image on the file system, not the properties document. Is there any way around this? I'd like to create faceted navigation based upon the properties.
If you take a look at the Search Developer's Guide for 5.0, section 2.2.6 describes the fragment scope option that is new in 5.0; I think that will handle your case. There's an example in there showing how to create a facet on the last-modified property using a local fragment scope, and it sounds like that pattern might be what you're looking for.
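The shape of it would be roughly the following (a sketch based on that guide; the constraint name is arbitrary, and the last-modified property lives in MarkLogic's property namespace):

    <options xmlns="http://marklogic.com/appservices/search">
      <constraint name="modified">
        <range type="xs:dateTime" facet="true">
          <element ns="http://marklogic.com/xdmp/property" name="last-modified"/>
          <!-- the 5.0 addition: resolve this constraint against the
               properties fragment instead of the document fragment -->
          <fragment-scope>properties</fragment-scope>
        </range>
      </constraint>
    </options>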
If the search API doesn't handle this use-case, you could always call cts:element-values and cts:frequency yourself. You can still use search:parse and search:resolve to provide query parsing and basic search results.
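A rough sketch of that fallback ("my-prop" is a hypothetical metadata element written into the properties fragment; the lexicon call needs a range index defined on it):

    (: Compute facet values and counts yourself from the lexicon. :)
    for $value in cts:element-values(xs:QName("my-prop"))
    return concat($value, " (", cts:frequency($value), ")")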
http://docs.marklogic.com/5.0doc/docapp.xqy#search.xqy?start=1&cat=all&query=cts%3Aelement-values&button=search

CDash Custom Dynamic Analysis

I'm trying to integrate custom dynamic analysis tools, such as KWStyle, CppCheck, and Visual Leak Detector, into CDash.
I've figured out that I need to generate a DynamicAnalysis.xml file and submit it to CDash from CTest scripts.
I think I know how to run the external tool as a part of the ctest script.
Either by using these variables to change how ctest_memcheck() works
CTEST_MEMORYCHECK_COMMAND
CTEST_MEMORYCHECK_SUPPRESSIONS_FILE
CTEST_MEMORYCHECK_COMMAND_OPTIONS
or by running the tool from the execute_process() command.
But I'm a bit uncertain which one to use.
The main problem I think I have is: how can I extract errors from the output of the custom tool and include that information in the DynamicAnalysis.xml I submit?
The extreme solution I see is that I'd need to write a program that generates a valid DynamicAnalysis.xml file.
But the problem is that I don't know the syntax of the DefectList element in the XML file. I have found no answer on Google, and even the XML schema for that file is unhelpful.
EDIT:
Looking at this:
http://www.cdash.org/CDash/viewDynamicAnalysis.php?buildid=987149
What draws my attention are the labels, especially the empty ones. I don't see how these would come from the DynamicAnalysis.xml file. Maybe it tracks any labels that have ever appeared? Can I create my own custom labels somehow?
Does CDash create the labels automatically, depending on the tool type? Does this block custom defect types?
I'm just guessing here, so the question is: can I create custom labels for my custom tool just by generating a DynamicAnalysis.xml file?
It occurred to me that the number of different errors from CppCheck (static code analysis) is huge compared to valgrind, for instance. I'm not so certain that I should use dynamic analysis. Maybe a custom build type (Continuous / Experimental / Nightly) would work better. Like this:
http://www.cdash.org/CDash/buildSummary.php?buildid=930174
I have no idea how to do this; I guess it requires meddling with the CDash code?
Which one would work better?
If you are using valgrind, you can simply set CTEST_MEMORYCHECK_COMMAND to the full path to valgrind, and ctest will generate the DynamicAnalysis.xml file for you from the valgrind output when you call ctest_memcheck.
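A minimal CTest dashboard script sketch for that valgrind route (paths, site, and build name are hypothetical; submission settings such as CTEST_DROP_SITE are assumed to come from your project's CTestConfig.cmake):

    # Hypothetical paths; adjust to your checkout and build tree.
    set(CTEST_SOURCE_DIRECTORY "/path/to/src")
    set(CTEST_BINARY_DIRECTORY "/path/to/build")
    set(CTEST_SITE "mymachine")
    set(CTEST_BUILD_NAME "valgrind-run")
    set(CTEST_CMAKE_GENERATOR "Unix Makefiles")
    set(CTEST_MEMORYCHECK_COMMAND "/usr/bin/valgrind")
    set(CTEST_MEMORYCHECK_COMMAND_OPTIONS "--leak-check=full")

    ctest_start(Experimental)
    ctest_configure()
    ctest_build()
    # Runs the tests under valgrind and generates DynamicAnalysis.xml.
    ctest_memcheck()
    ctest_submit()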
The best way to understand the possible values that can appear in the DynamicAnalysis.xml file is to analyze the source code of CTest.
The file CMake/Source/CTest/cmCTestMemCheckHandler.cxx has the list of defect types in a variable named "cmCTestMemCheckResultLongStrings". Search through that file for references to that variable to see what the possible values are and how they are used to generate "<Defect/>" xml elements.
EDIT (for additional information):
You can also easily see what XML elements CDash is expecting by inspecting its source code. Specifically, the file "CDash/xml_handlers/dynamic_analysis_handler.php".
From what I've learned so far, for a tool that runs on the tests defined in the CMake script, dynamic analysis is the thing.
For tools that run on the entire program, a custom Build.xml is what you need.
I found out that I can submit those files from the ctest_submit command by using the FILES parameter.
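For example (the path is illustrative; point it at wherever your generator writes the file):

    # Submit a hand-generated DynamicAnalysis.xml alongside the usual results.
    ctest_submit(FILES "${CTEST_BINARY_DIRECTORY}/DynamicAnalysis.xml")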
I also found out that you can add custom "build names" alongside Continuous, Nightly, and the others,
and that you can set builds from certain machines to be automatically grouped under these.
The custom labels under DynamicAnalysis did come from somewhere in CDash; I can't remember where anymore.