Cannot retrieve inferences for named individual added via OWL API without reloading ontology

In my application I need to add named individuals to an ontology. At a later point I need to be able to retrieve these named individuals and determine their inferred types, but for some reason I am not able to retrieve their types. I get either an exception or an empty set depending on the OWL reasoner I am using.
Here is a self contained example illustrating the problem:
package owl.api.test.StandaloneOWLNamedIndividualRetrievalv5;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Set;
import java.util.stream.Collectors;
import org.semanticweb.HermiT.ReasonerFactory;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.IRI;
import org.semanticweb.owlapi.model.OWLClass;
import org.semanticweb.owlapi.model.OWLClassAssertionAxiom;
import org.semanticweb.owlapi.model.OWLDataFactory;
import org.semanticweb.owlapi.model.OWLIndividual;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyManager;
import org.semanticweb.owlapi.model.OWLOntologyStorageException;
import org.semanticweb.owlapi.model.PrefixManager;
import org.semanticweb.owlapi.model.parameters.ChangeApplied;
import org.semanticweb.owlapi.reasoner.NodeSet;
import org.semanticweb.owlapi.reasoner.OWLReasoner;
import org.semanticweb.owlapi.reasoner.OWLReasonerFactory;
import org.semanticweb.owlapi.search.EntitySearcher;
import org.semanticweb.owlapi.util.DefaultPrefixManager;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.Marker;
import org.slf4j.MarkerFactory;
import openllet.owlapi.OpenlletReasonerFactory;
import uk.ac.manchester.cs.jfact.JFactFactory;
public class App {
private static Logger logger = LoggerFactory
.getLogger(owl.api.test.StandaloneOWLNamedIndividualRetrievalv5.App.class);
// Why This Failure marker
private static final Marker WTF_MARKER = MarkerFactory.getMarker("WTF");
public static void main(String[] args) {
try {
// Setup physical IRI for storing ontology
Path path = Paths.get(".").toAbsolutePath().normalize();
IRI loadDocumentIRI = IRI.create("file:" + path.toFile().getAbsolutePath() + "/SimpleOntology.owl");
logger.trace("documentIRI=" + loadDocumentIRI);
IRI saveDocumentIRI = IRI.create("file:" + path.toFile().getAbsolutePath() + "/SimpleOntologyUpdated.owl");
logger.trace("documentIRI=" + saveDocumentIRI);
// Initialize
OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
OWLDataFactory dataFactory = manager.getOWLDataFactory();
OWLOntology ontology = manager.loadOntologyFromOntologyDocument(loadDocumentIRI);
// OWLReasonerFactory reasonerFactory = new JFactFactory();
OWLReasonerFactory reasonerFactory = new ReasonerFactory();
// OWLReasonerFactory reasonerFactory = OpenlletReasonerFactory.getInstance();
OWLReasoner reasoner = reasonerFactory.createReasoner(ontology);
PrefixManager pm = new DefaultPrefixManager(ontology.getOntologyID().getOntologyIRI().get().getIRIString());
// Get references to a new named individual and an existing class
OWLIndividual individual = dataFactory.getOWLNamedIndividual("#ind1", pm);
OWLClass owlClass = dataFactory.getOWLClass("#ClassB", pm);
// Create class assertion axiom
OWLClassAssertionAxiom classAssertionAxiom = dataFactory.getOWLClassAssertionAxiom(owlClass, individual);
// Add class assertion axiom to ontology
ChangeApplied changeApplied = manager.addAxiom(ontology, classAssertionAxiom);
logger.trace("ChangeApplied = " + changeApplied);
if (changeApplied.equals(ChangeApplied.SUCCESSFULLY)) {
try {
manager.saveOntology(ontology, saveDocumentIRI);
} catch (OWLOntologyStorageException e) {
logger.error(e.getMessage());
}
}
// Now try to retrieve the individual
logger.trace(
"Trying to retrieve individual = " + classAssertionAxiom.getIndividual().asOWLNamedIndividual());
Set<Object> classExpressionTypes = EntitySearcher.getTypes(classAssertionAxiom.getIndividual(), ontology)
.collect(Collectors.toSet());
logger.trace("Individual = " + classAssertionAxiom.getIndividual() + " has types based on EntitySearcher "
+ classExpressionTypes);
NodeSet<OWLClass> types = reasoner.getTypes(classAssertionAxiom.getIndividual().asOWLNamedIndividual(),
false);
logger.trace("Individual = " + classAssertionAxiom.getIndividual()
+ " has types based on reasoner.getTypes " + types);
} catch (Throwable t) {
logger.error(WTF_MARKER, t.getMessage(), t);
}
}
}
Here is the Simple ontology I use to test against:
<?xml version="1.0"?>
<rdf:RDF xmlns="http://www.semanticweb.org/2017/simple#"
xml:base="http://www.semanticweb.org/2017/simple"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:owl="http://www.w3.org/2002/07/owl#"
xmlns:xml="http://www.w3.org/XML/1998/namespace"
xmlns:xsd="http://www.w3.org/2001/XMLSchema#"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">
<owl:Ontology rdf:about="http://www.semanticweb.org/2017/simple"/>
<owl:Class rdf:about="http://www.semanticweb.org/2017/simple#ClassA"/>
<owl:Class rdf:about="http://www.semanticweb.org/2017/simple#ClassB">
<rdfs:subClassOf rdf:resource="http://www.semanticweb.org/2017/simple#ClassA"/>
</owl:Class>
</rdf:RDF>
After running this code I expect that it will determine that the types of individual ind1 are Thing, ClassA and ClassB.
Thinking that this problem is perhaps related to a specific OWL reasoner, I have tried using JFact, HermiT and Openllet. JFact throws a NullPointerException, HermiT returns only owl:Thing, and Openllet returns nothing. However, when I save the changes to the ontology to file and reload it, I can find the inferred types of the individual I have added using any of these reasoners.
I have tested this with versions 5.1.2 and 4.5.0 of the OWL API. I have also tried calling reasoner.precomputeInferences() even though the documentation states that this is not necessary, but it made no difference.

reasonerFactory.createReasoner(ontology) creates a buffering reasoner, i.e. it has to be synchronized manually after you change the ontology.
More details from the Javadoc:
Ontology Change Management (Buffering and Non-Buffering Modes)
At creation time, an OWLReasoner will load the axioms in the root
ontology imports closure. It will attach itself as a listener to the
OWLOntologyManager that manages the root ontology. The reasoner will
listen to any OWLOntologyChanges and respond appropriately to them
before answering any queries. If the BufferingMode of the reasoner
(the answer to getBufferingMode()) is BufferingMode.NON_BUFFERING, the
ontology changes are processed by the reasoner immediately so that any
queries asked after the changes are answered with respect to the
changed ontologies. If the BufferingMode of the reasoner is
BufferingMode.BUFFERING then ontology changes are stored in a buffer
and are only taken into consideration when the buffer is flushed with
the flush() method. When reasoning, axioms in the root ontology
imports closure, minus the axioms returned by the
getPendingAxiomAdditions() method, plus the axioms returned by the
getPendingAxiomRemovals() are taken into consideration. Note that
there is no guarantee that the reasoner implementation will respond to
changes in an incremental (and efficient) manner.
Two options:
Call reasoner.flush() before asking the reasoner for inferences.
Create a non-buffering reasoner, i.e. use reasonerFactory.createNonBufferingReasoner(ontology)
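With the first option, the relevant part of the example would look roughly like this (a minimal sketch, reusing the variable names from the question):
// Add the class assertion axiom as before
ChangeApplied changeApplied = manager.addAxiom(ontology, classAssertionAxiom);
// Make the buffering reasoner process the pending change
reasoner.flush();
// The reasoner now sees the added individual
NodeSet<OWLClass> types = reasoner.getTypes(classAssertionAxiom.getIndividual().asOWLNamedIndividual(), false);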

The problem is that the reasoner uses the ontology as it is when the reasoner is created. I don't know if you use Protege (Desktop), which uses the OWL API under the hood. If you do and also use a reasoner in Protege, you should have noticed that you have to refresh the reasoner after you make changes to the ontology. In Protege this is also indicated in the status line at the bottom of the window.
You have to recreate the reasoner every time you make changes to the ontology. To solve the problem in your example, add the following line before the block where you retrieve the individuals:
reasoner = reasonerFactory.createReasoner(ontology);
Best regards
Jens

Related

Opentelemetry 1.4.0 context propagation

I was running Opentelemetry 0.18rc1 and my application was working perfectly.
I'm using the W3C Trace Context specification for context propagation. For injection and extraction I used TraceContextTextMapPropagator:
from opentelemetry.trace.propagation.tracecontext import TraceContextTextMapPropagator
from opentelemetry.context import get_current
prop = TraceContextTextMapPropagator()
carrier = {}
prop.inject(set_in_carrier=dict.__setitem__, carrier=carrier, context=get_current())
and for extraction in the next micro-service, I used:
from opentelemetry.trace.propagation.tracecontext import TraceContextTextMapPropagator
from opentelemetry.propagators import textmap
prop = TraceContextTextMapPropagator()
carrier_getter = textmap.DictGetter()
context = prop.extract(carrier_getter, request.headers)
When I tried to upgrade to the latest OpenTelemetry 1.4.0, my injection and extraction methods stopped working. It seems the DictGetter() class was removed in the new version, so I don't know how to set the getter parameter in the extract method. Also, set_in_carrier was replaced with a setter parameter in the inject method and I'm not sure how to set this one either.
How do I implement the inject and extract methods in OpenTelemetry 1.4.0 for the W3C Trace Context specification?
set_in_carrier was replaced with something called Setter, and DefaultSetter implements it to set values in dict-like carriers (e.g. HTTP headers). You don't need to explicitly pass the current context, because if you don't pass any context it takes the current one. Injecting into a carrier is simplified to:
from opentelemetry.trace.propagation.tracecontext import TraceContextTextMapPropagator
prop = TraceContextTextMapPropagator()
carrier = {}
prop.inject(carrier=carrier)
And when you extract, it would be:
from opentelemetry.trace.propagation.tracecontext import TraceContextTextMapPropagator
prop = TraceContextTextMapPropagator()
context = prop.extract(carrier=request.headers)
Here is a working example where injecting and extracting happen in the same file, but in the real world they usually happen in different services.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (BatchSpanProcessor,
ConsoleSpanExporter)
from opentelemetry.trace.propagation.tracecontext import \
TraceContextTextMapPropagator
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
tracer = trace.get_tracer(__name__)
prop = TraceContextTextMapPropagator()
carrier = {}
# Injecting
with tracer.start_as_current_span("first-span") as span:
    prop.inject(carrier=carrier)
    print("Carrier after injecting span context", carrier)
# Extracting
ctx = prop.extract(carrier=carrier)
with tracer.start_as_current_span("next-span", context=ctx):
    pass

Create data property XSD:string with Jena

The problem sounds so simple: I would like to create a data property for an individual as XSD:string in my ontology.
I can create properties of XSD:DateTime, XSD:Float or XSD:int, but if I use XSD:string, I get an untyped property!
I created a minimal example, which creates an ontology with one class, one individual and two data properties: a DateTime, which works as expected, and a string, which has no type in the ontology.
I tried with Jena versions 3.4 and 3.0.1 and have no idea how to fix it.
package dataproperty;
import java.io.FileOutputStream;
import org.apache.jena.datatypes.xsd.XSDDatatype;
import org.apache.jena.ontology.OntModel;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.rdf.model.ResourceFactory;
public class DataProperty {
public static void main(String[] args) throws Exception {
OntModel model = ModelFactory.createOntologyModel();
String OWLPath = "DataProp.owl";
try{
String NS = "http://www.example.org/ontology.owl#";
//Create Ontology
model.createClass(NS+"Test");
Resource r = model.createResource(NS+"Test");
model.createIndividual(NS+"Indi1", r);
r = model.createResource(NS+"Indi1");
model.createDatatypeProperty(NS+"Name");
model.createDatatypeProperty(NS+"Date");
//Add Data Properties
Property p = model.getProperty(NS+"Name");
model.add(r, p, ResourceFactory.createTypedLiteral("MyName", XSDDatatype.XSDstring));
p = model.getProperty(NS+"Date");
model.add(r, p, ResourceFactory.createTypedLiteral("2017-08-12T09:03:40", XSDDatatype.XSDdateTime));
//Store the ontology
FileOutputStream output = null;
output = new FileOutputStream(OWLPath);
model.write(output);
}catch (Exception e) {
System.out.println("Error occured: " + e);
throw new Exception(e.getMessage());
}
}
}
It is not untyped in RDF 1.1 - it's written in short form (better compatibility).
e.g.
https://www.w3.org/TR/turtle/
Section 2.5.1
"If there is no datatype IRI and no language tag, the datatype is xsd:string."

Netflix Ribbon and Polling for Server List

I'm currently trying out the Netflix Ribbon library and I'm trying to dynamically update a list of available endpoints to load balance.
I've successfully created a httpResourceGroup that uses a configuration based server list, e.g.:
httpResourceGroup = Ribbon.createHttpResourceGroup("searchServiceClient",
ClientOptions.create()
.withMaxAutoRetriesNextServer(3)
.withLoadBalancerEnabled(true)
.withConfigurationBasedServerList(serverList))
However, I'd like to be able to use a DynamicServerList in the httpResourceGroup. I've managed to build a load balancer as follows:
LoadBalancerBuilder.<Server>newBuilder()
.withDynamicServerList(servicesList)
.buildDynamicServerListLoadBalancer();
but I can't find a way to swap out the load balancer configured by the httpResourceGroup ClientOptions.
Anyone know how I can do this?
The solution is to not specify withConfigurationBasedServerList() when constructing an HttpResourceGroup, since (as far as I can tell) that is meant for a fixed list. There are many ways to initialize a dynamic load balancer (typically you would never swap it out, but reuse the same load balancer and swap out new Servers as they become available or go away). The most straightforward way to do this might be via Archaius-based configuration.
Option 1
Create a config.properties file on the classpath containing the following
ribbon.NIWSServerListClassName=com.example.MyServerList
ribbon.NFLoadBalancerRuleClassName=com.netflix.loadbalancer.RoundRobinRule
Option 2
System.setProperty("ribbon.NIWSServerListClassName", "com.example.MyServerList");
System.setProperty("ribbon.NFLoadBalancerRuleClassName", "com.netflix.loadbalancer.RoundRobinRule");
Create a ServerList implementation
import java.util.Arrays;
import java.util.List;
import com.netflix.loadbalancer.Server;
import com.netflix.loadbalancer.ServerList;
public class MyServerList implements ServerList<Server> {
    @Override
    public final List<Server> getUpdatedListOfServers() {
        // TODO do some fancy stuff here
        return Arrays.asList(new Server("1.2.3.4", 8888), new Server("5.6.7.8", 9999));
    }
    @Override
    public final List<Server> getInitialListOfServers() {
        return Arrays.asList(new Server("1.2.3.4", 8888), new Server("5.6.7.8", 9999));
    }
}
Run the code
HttpResourceGroup httpResourceGroup = Ribbon.createHttpResourceGroup("searchServiceClient",
        ClientOptions.create()
                .withMaxAutoRetriesNextServer(3));
HttpRequestTemplate<ByteBuf> recommendationsByUserIdTemplate = httpResourceGroup.newTemplateBuilder("recommendationsByUserId", ByteBuf.class)
        .withMethod("GET")
        .withUriTemplate("/users/{userId}/recommendations")
        .withFallbackProvider(new RecommendationServiceFallbackHandler())
        .withResponseValidator(new RecommendationServiceResponseValidator())
        .build();
Observable<ByteBuf> result = recommendationsByUserIdTemplate.requestBuilder()
        .withRequestProperty("userId", "user1")
        .build()
        .observe();
It sounds like you already have a ServerList implementation, which is where you would do any event-driven updates to your server list while keeping the load balancer the same (a rough sketch follows below).
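For illustration, here is a rough sketch of what an event-driven ServerList might look like (the class name EventDrivenServerList and the onServersChanged hook are made up, not Ribbon API; only ServerList and Server are real types):
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import com.netflix.loadbalancer.Server;
import com.netflix.loadbalancer.ServerList;
public class EventDrivenServerList implements ServerList<Server> {
    // Holds whatever your discovery mechanism last reported
    private final List<Server> current = new CopyOnWriteArrayList<Server>();
    // Hypothetical hook: call this from your own discovery/event listener
    public void onServersChanged(List<Server> newServers) {
        current.clear();
        current.addAll(newServers);
    }
    @Override
    public List<Server> getInitialListOfServers() {
        return new ArrayList<Server>(current);
    }
    @Override
    public List<Server> getUpdatedListOfServers() {
        return new ArrayList<Server>(current);
    }
}
The dynamic load balancer keeps polling getUpdatedListOfServers(), so it will pick up whatever your listener last pushed in.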

Lucene 4.1: How to split words that contain "dots" when indexing?

I'm trying to figure out what I should do to index my keywords that contain "." .
ex: this.name
I want to index the terms this and name in my index.
I use the StandardAnalyzer. I tried to extend WhitespaceTokenizer or TokenFilter, but I'm not sure if I'm going in the right direction.
If I use the StandardAnalyzer, I'll obtain "this.name" as a keyword, and that's not what I want, but the analyzer does the rest correctly for me.
You can put a CharFilter in front of StandardTokenizer that converts periods and underscores to spaces. MappingCharFilter will work.
Here's MappingCharFilter added to a stripped-down StandardAnalyzer (see the original 4.1 version here):
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.charfilter.MappingCharFilter;
import org.apache.lucene.analysis.charfilter.NormalizeCharMap;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.core.StopAnalyzer;
import org.apache.lucene.analysis.core.StopFilter;
import org.apache.lucene.analysis.standard.StandardFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.util.StopwordAnalyzerBase;
import org.apache.lucene.util.Version;
import java.io.IOException;
import java.io.Reader;
public final class MyAnalyzer extends StopwordAnalyzerBase {
private int maxTokenLength = 255;
public MyAnalyzer() {
super(Version.LUCENE_41, StopAnalyzer.ENGLISH_STOP_WORDS_SET);
}
@Override
protected TokenStreamComponents createComponents(final String fieldName, final Reader reader) {
final StandardTokenizer src = new StandardTokenizer(matchVersion, reader);
src.setMaxTokenLength(maxTokenLength);
TokenStream tok = new StandardFilter(matchVersion, src);
tok = new LowerCaseFilter(matchVersion, tok);
tok = new StopFilter(matchVersion, tok, stopwords);
return new TokenStreamComponents(src, tok) {
@Override
protected void setReader(final Reader reader) throws IOException {
src.setMaxTokenLength(MyAnalyzer.this.maxTokenLength);
super.setReader(reader);
}
};
}
@Override
protected Reader initReader(String fieldName, Reader reader) {
NormalizeCharMap.Builder builder = new NormalizeCharMap.Builder();
builder.add(".", " ");
builder.add("_", " ");
NormalizeCharMap normMap = builder.build();
return new MappingCharFilter(normMap, reader);
}
}
Here's a quick test to demonstrate it works:
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.BaseTokenStreamTestCase;
public class TestMyAnalyzer extends BaseTokenStreamTestCase {
private Analyzer analyzer = new MyAnalyzer();
public void testPeriods() throws Exception {
BaseTokenStreamTestCase.assertAnalyzesTo
(analyzer,
"this.name; here.i.am; sentences ... end with periods.",
new String[] { "name", "here", "i", "am", "sentences", "end", "periods" } );
}
public void testUnderscores() throws Exception {
BaseTokenStreamTestCase.assertAnalyzesTo
(analyzer,
"some_underscore_term _and____ stuff that is_not in it",
new String[] { "some", "underscore", "term", "stuff" } );
}
}
If I understand you correctly, you need to use a tokenizer that removes dots -- that is, any name that contains a dot should be split at that point ("here.i.am" becomes "here" + "i" + "am").
You are getting caught by behavior documented here:
However, a dot that's not followed by whitespace is considered part of a token.
StandardTokenizer introduces some more complex parsing rules that you may not be looking for. This one, in particular, is intended to prevent URLs, IPs, identifiers, etc. from being split apart. A simpler implementation might suit your needs, like LetterTokenizer (a sketch follows below).
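For example, a minimal sketch of an analyzer built around LetterTokenizer (Lucene 4.1 API; the class name MyLetterAnalyzer is made up, and note that LetterTokenizer also splits on digits and underscores, which may or may not be what you want):
import java.io.Reader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.LetterTokenizer;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.util.Version;
public final class MyLetterAnalyzer extends Analyzer {
    @Override
    protected TokenStreamComponents createComponents(final String fieldName, final Reader reader) {
        // LetterTokenizer splits on anything that is not a letter,
        // so "this.name" becomes the tokens "this" and "name"
        final LetterTokenizer src = new LetterTokenizer(Version.LUCENE_41, reader);
        TokenStream tok = new LowerCaseFilter(Version.LUCENE_41, src);
        return new TokenStreamComponents(src, tok);
    }
}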
If that doesn't really suit your needs (and it might well turn out to be throwing the baby out with the bathwater), then you may need to modify StandardTokenizer yourself, which is explicitly encouraged by the Lucene docs:
Many applications have specific tokenizer needs. If this tokenizer does not suit your application, please consider copying this source code directory to your project and maintaining your own grammar-based tokenizer.
Sebastien Dionne: I didn't understand how to split a word, do I have to parse the document char by char ?
Sebastien Dionne: I still want to know how to split a token into multiple part, and index them all
You may have to write a custom analyzer.
An Analyzer is a combination of a Tokenizer and possibly a chain of TokenFilter instances.
Tokenizer: takes in the input text passed by you, probably as a java.io.Reader. It JUST breaks down the text; it doesn't alter it.
TokenFilter: takes in the tokens emitted by the Tokenizer, adds / removes / alters tokens, and emits them one by one until all are finished. If it replaces a token with multiple tokens based on requirements, it buffers them all and emits them one by one to the indexer (a rough sketch of that buffering follows below).
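For instance, here is a rough sketch of a TokenFilter that splits each incoming token on '.' and emits the pieces one by one (the class name DotSplitFilter is made up; this is just one way to do the buffering described above, written against the Lucene 4.x attribute API, and all pieces keep the original token's offsets):
import java.io.IOException;
import java.util.ArrayDeque;
import java.util.Deque;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
public final class DotSplitFilter extends TokenFilter {
    private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
    private final Deque<String> pending = new ArrayDeque<String>();
    private State savedState;
    public DotSplitFilter(TokenStream input) {
        super(input);
    }
    @Override
    public boolean incrementToken() throws IOException {
        if (!pending.isEmpty()) {
            // Emit a buffered piece of a previously split token
            restoreState(savedState);
            termAtt.setEmpty().append(pending.poll());
            return true;
        }
        if (!input.incrementToken()) {
            return false; // upstream is exhausted
        }
        String term = termAtt.toString();
        if (term.indexOf('.') < 0) {
            return true; // nothing to split, pass the token through unchanged
        }
        // Split on dots, buffer all non-empty pieces, then emit the first one now
        savedState = captureState();
        for (String piece : term.split("\\.")) {
            if (!piece.isEmpty()) {
                pending.add(piece);
            }
        }
        if (pending.isEmpty()) {
            return incrementToken(); // the token was only dots, skip it
        }
        termAtt.setEmpty().append(pending.poll());
        return true;
    }
    @Override
    public void reset() throws IOException {
        super.reset();
        pending.clear();
        savedState = null;
    }
}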
You may check the following resource; unfortunately, you may have to sign up for a trial membership.
By writing a custom analyzer, you can break down the text the way you want to. You may even use some existing components like LowercaseFilter. Fortunately, with Lucene it is achievable to come up with an Analyzer that serves your purpose if you couldn't find one built in or on the web.
" Writing Custom Filters: Lucene in Action 2"

ejb3.1 #Startup.. #Singleton .. #PostConstruct read from XML the Objects

I need to initialize a set of static String values stored in an XML file [I know this is against the EJB spec] as shown below, since the overall idea is to not hardcode the JNDI info within EJBs.
Utils.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<entry key="jndidb">java:jdbc/MYSQLDB10</entry>
<entry key="jndimdbque">java:jms/QueueName/remote</entry>
<entry key="jndi1">DBConnections/remote</entry>
<entry key="jndi2">AddressBean/remote</entry>
</properties>
The on-load EJB server startup code is as follows:
inpstrem = clds.getClassLoaders(flename) reads the Utils.xml and stores its contents as Hashtable key/value pairs.
package com.ejb.utils;
import java.io.InputStream;
import java.util.Enumeration;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;
import java.util.Properties;
import java.util.TreeMap;
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.ejb.ConcurrencyManagement;
import javax.ejb.Singleton;
import javax.ejb.Startup;
@Singleton
@Startup
public class StartupUtils {
private final String INITFILENAME = "/System/Config/Utils.xml";
private static Hashtable HTINITFLENME=null,HTERRINITFLENME=null,HTCMMNFLENME=null;
public StartupUtils() {
HTINITFLENME = new Hashtable();
HTERRINITFLENME = new Hashtable();
}
public void printAll(Hashtable htcmmnflenme){
Enumeration ENUMK = null, VALS = null;
String KEY = "", VALUE = "";
ENUMK = htcmmnflenme.keys();
while (ENUMK.hasMoreElements()) {
KEY = null;VALUE = null;
KEY = (ENUMK.nextElement().toString().trim());
VALUE = htcmmnflenme.get(KEY).toString().trim();
InitLogDisplay(KEY + " :::: " + VALUE);
}
}
public static void InitLogDisplay(String Datadisplay){
System.out.println(Datadisplay);
}
public Hashtable getDataProp(String flename){
Map htData = null;
InputStream inpstrem = null;
ClassLoaders clds = null;
Enumeration enumk = null, vals = null;
String key = "", value = "";
Properties props = null;
Hashtable htx = null;
try {
clds = new ClassLoaders();
inpstrem = clds.getClassLoaders(flename);
props = new Properties();
props.loadFromXML(inpstrem);
enumk = props.keys();
vals = props.elements();
htData = new HashMap();
htData = new TreeMap();
while (enumk.hasMoreElements()) {
key = (enumk.nextElement().toString().trim());
value = (vals.nextElement().toString().trim());
htData.put(key,value);
}
clds = null;
props = null;
inpstrem.close();
} catch (Exception e) {
e.printStackTrace();
}finally{
key = ""; value = "";
enumk = null;vals = null;
clds=null;
props=null;
}
htx = new Hashtable();
htx.putAll(htData);
return htx;
}
public void setUtilsPropDetails(){
HTINITFLENME = getDataProp(INITFILENAME);
this.printAll(HTINITFLENME);
}
public static Hashtable getUtilsPropDetails(){
return HTINITFLENME;
}
@PostConstruct
public void startOnstartup(){
this.setUtilsPropDetails();
this.printAll(HTINITFLENME);
}
@PreDestroy
public void startOnshutdown(){
try {
this.finalize();
} catch (Throwable e) {
e.printStackTrace();
}
}
}
On startup of the EJB server, "this.printAll(HTINITFLENME);" prints the key values of the XML file. However, if an external call is made from any other EJB, the method "getUtilsPropDetails()" does not return the key values.
Am I doing something wrong?
Have you considered using the deployment descriptor and having the container do this work for you?
There are of course <resource-ref>, <resource-env-ref>, <ejb-ref> and <env-entry> elements to cover externally configuring which things should be made available to the bean for lookup. For example:
<resource-ref>
<res-ref-name>db</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
<mapped-name>java:jdbc/MYSQLDB10</mapped-name>
</resource-ref>
I'm not sure how your vendor handles mapped-name (that particular element is vendor specific), but there will be an equivalent syntax to specify the datasource you want.
The singleton can then lookup java:comp/env/db and return the datasource to other EJBs.
If you are in a compliant Java EE 6 server, then you can change the name to <res-ref-name>java:app/db</res-ref-name> and then anyone in the app can lookup the datasource without the need to get it from the singleton. Global JNDI is a standard feature of Java EE 6 and designed for exactly this.
You can put those elements in the ejb-jar.xml, web.xml or application.xml. Putting them in the application.xml will make the one entry available to the entire application and give you one place to maintain everything.
Global resources can also be injected via:
#Resource(name="java:app/db")
DataSource dataSource;
If for some reason you didn't want to use those, at the very least you could use the <env-entry> element to externalize the strings.
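For instance, a minimal sketch of what that could look like for one of the values from Utils.xml (the name and value are reused from the question; where the descriptor lives depends on your module):
<env-entry>
    <env-entry-name>jndidb</env-entry-name>
    <env-entry-type>java.lang.String</env-entry-type>
    <env-entry-value>java:jdbc/MYSQLDB10</env-entry-value>
</env-entry>
The value can then be injected into the singleton (or looked up at java:comp/env/jndidb):
@Resource(name = "jndidb")
private String jndidb;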
EDIT
See this other answer for a much more complete description of JNDI as it pertains to simple types. This can of course also be done where the name/value pairs are not simple types but more complex types like DataSource, Topic or Queue.
For example:
<resource-ref>
<res-ref-name>myDataSource</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
<resource-ref>
<res-ref-name>myJmsConnectionFactory</res-ref-name>
<res-type>javax.jms.ConnectionFactory</res-type>
</resource-ref>
<resource-ref>
<res-ref-name>myQueueCF</res-ref-name>
<res-type>javax.jms.QueueConnectionFactory</res-type>
</resource-ref>
<resource-ref>
<res-ref-name>myTopicCF</res-ref-name>
<res-type>javax.jms.TopicConnectionFactory</res-type>
</resource-ref>
<resource-env-ref>
<resource-env-ref-name>myQueue</resource-env-ref-name>
<resource-env-ref-type>javax.jms.Queue</resource-env-ref-type>
</resource-env-ref>
<resource-env-ref>
<resource-env-ref-name>myTopic</resource-env-ref-name>
<resource-env-ref-type>javax.jms.Topic</resource-env-ref-type>
</resource-env-ref>
<persistence-context-ref>
<persistence-context-ref-name>myEntityManager</persistence-context-ref-name>
<persistence-unit-name>test-unit</persistence-unit-name>
</persistence-context-ref>
<persistence-unit-ref>
<persistence-unit-ref-name>myEntityManagerFactory</persistence-unit-ref-name>
<persistence-unit-name>test-unit</persistence-unit-name>
</persistence-unit-ref>
See the JNDI and simple types answer for lookup and injection syntax.
I see the name and type, but where's the value?
Configuring what actual things these names refer to has historically been done in a separate vendor specific deployment descriptor, such as sun-ejb-jar.xml or openejb-jar.xml or whatever that vendor requires. The vendor-specific descriptor and the standard ejb-jar.xml descriptor combined provide the guaranteed portability apps require.
The ejb-jar.xml file offers only standard things, like being able to say what types of resources the application requires and what names the application has chosen to use to refer to those resources. The vendor-specific descriptor fills the gap of mapping those names to actual resources in the system.
As of EJB 3.0/Java EE 5, we on the spec groups departed from that slightly and added the <mapped-name> element, which can be used in the ejb-jar.xml with any of the references shown above, such as <resource-ref>, to give the vendor-specific name. Mapped name will never be portable and its value will always be vendor-specific -- if it is supported at all.
That said, <mapped-name> can be convenient in avoiding the need for a separate vendor-specific file and achieves the goal of getting vendor-specific names out of code. After all, the ejb-jar.xml can be edited when moving from one vendor to another, and for many people that's good enough.