Jena model.setNsPrefixes throws java.lang.NullPointerException

Here is the code:
Location location = Location.create("target/DBLP");
Dataset dataset = TDBFactory.createDataset(location);
dataset.begin(ReadWrite.READ);
Model model = dataset.getDefaultModel();
HashMap<String, String> prifixMap = new HashMap<>();
prifixMap.put("rdf", "<http://www.w3.org/1999/02/22-rdf-syntax-ns#>");
try {
    model.setNsPrefixes(prifixMap);
} catch (Exception e) {
    e.printStackTrace();
}
It always throws a java.lang.NullPointerException when it executes the line model.setNsPrefixes(), and the stack trace is as follows:
java.lang.NullPointerException
at org.apache.jena.tdb.store.DatasetPrefixesTDB.readPrefix(DatasetPrefixesTDB.java:86)
at org.apache.jena.sparql.graph.GraphPrefixesProjection.get(GraphPrefixesProjection.java:101)
at org.apache.jena.sparql.graph.GraphPrefixesProjection.set(GraphPrefixesProjection.java:79)
at org.apache.jena.shared.impl.PrefixMappingImpl.setNsPrefix(PrefixMappingImpl.java:75)
at org.apache.jena.shared.impl.PrefixMappingImpl.setNsPrefixes(PrefixMappingImpl.java:163)
at org.apache.jena.rdf.model.impl.ModelCom.setNsPrefixes(ModelCom.java:1043)
at ReadTransaction.<init>(ReadTransaction.java:32)
at ReadTransaction.main(ReadTransaction.java:133)
I have checked the Jena API and could not find a solution.
Thanks for any answer!

The following code worked for me:
Location location = Location.create("target/DBLP");
Dataset dataset = TDBFactory.createDataset(location);
dataset.begin(ReadWrite.WRITE); // changed from READ to WRITE
Model model = dataset.getDefaultModel();
HashMap<String, String> prefixMap = new HashMap<>();
prefixMap.put("rdf", "http://www.w3.org/1999/02/22-rdf-syntax-ns#"); // removed '<' and '>'
try {
    model.setNsPrefixes(prefixMap);
} catch (Exception e) {
    e.printStackTrace();
}
The key is to change the transaction type from READ to WRITE. You are trying to write data so you must be in a write transaction!
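For completeness, a write transaction should also be committed and then ended, otherwise the prefix change is not made durable. A minimal sketch of the full pattern, reusing the same location and prefix as above:

Location location = Location.create("target/DBLP");
Dataset dataset = TDBFactory.createDataset(location);
dataset.begin(ReadWrite.WRITE);
try {
    Model model = dataset.getDefaultModel();
    // setNsPrefix registers a single prefix; setNsPrefixes(Map) does the same in bulk
    model.setNsPrefix("rdf", "http://www.w3.org/1999/02/22-rdf-syntax-ns#");
    dataset.commit(); // persist the change
} finally {
    dataset.end();    // always release the transaction
}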

Related

I keep getting the error java.io.WriteAbortedException: writing aborted; java.io.NotSerializableException: java.io.FileOutputStream

private void jButton1ActionPerformed(java.awt.event.ActionEvent evt) {
    DefaultTableModel model = (DefaultTableModel) jTable1.getModel();
    Vector<Vector> tableData = model.getDataVector();
    // Saving of object in a file
    try {
        FileOutputStream file = new FileOutputStream("StudentFile.bin");
        ObjectOutputStream output = new ObjectOutputStream(file);
        // Method for serialization of object
        output.writeObject(file);
        output.close();
        file.close();
    } catch (Exception ex) {
        ex.printStackTrace();
    }
    new ComputerScience_IA_UI().setVisible(true);
    this.dispose(); // to close the current jframe
}
This is my code. I wanted it to save the table data to a file, but every time it gave me the same error: java.io.WriteAbortedException: writing aborted; java.io.NotSerializableException: java.io.FileOutputStream
This is the code that reads the data back correctly:
try {
    FileInputStream file = new FileInputStream("StudentFile.bin");
    ObjectInputStream input = new ObjectInputStream(file);
    Vector<Vector> tableData = (Vector<Vector>) input.readObject();
    input.close();
    file.close();
    DefaultTableModel model = (DefaultTableModel) jTable1.getModel();
    for (int i = 0; i < tableData.size(); i++) {
        Vector row = tableData.get(i);
        model.addRow(new Object[] {row.get(0), row.get(1), row.get(2), row.get(3), row.get(4), row.get(5)});
    }
} catch (Exception ex) {
    ex.printStackTrace();
}
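Note that the read code alone does not remove the original error: the exception comes from the write side, where output.writeObject(file) serializes the FileOutputStream itself rather than the table data. The write block should pass tableData instead (which is what the cast on the read side expects); a corrected sketch, reusing tableData from the question:

// Serialize the table data, not the output stream itself
try (FileOutputStream file = new FileOutputStream("StudentFile.bin");
     ObjectOutputStream output = new ObjectOutputStream(file)) {
    output.writeObject(tableData); // was: output.writeObject(file)
} catch (Exception ex) {
    ex.printStackTrace();
}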

Jena TDB error "org.apache.jena.tdb.base.file.FileException: In the middle of an alloc-write"

I am trying to run a query on a Jena triplestore database (TDB) that is being written to concurrently. However, I am running into an error that seems to indicate an attempt to read from the database before the in-progress write is committed. I think I am using TDB's locking functionality properly, but it is possible I am not; I'm new to this framework.
The error is "org.apache.jena.tdb.base.file.FileException: In the middle of an alloc-write".
The read code is:
String query;
Dataset dataset = TDBFactory.createDataset("./tdb/");
List<String> outputMessageTexts = new ArrayList<>();
Model datasetModel = dataset.getDefaultModel();
dataset.begin(ReadWrite.READ);
datasetModel.enterCriticalSection(Lock.READ);
try {
    QueryExecution queryExecution = QueryExecutionFactory.create(query, dataset);
    Model resultModel = queryExecution.execConstruct();
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    RDFDataMgr.write(byteArrayOutputStream, resultModel, RDFFormat.TURTLE);
    outputMessageTexts.add(new String(byteArrayOutputStream.toByteArray()));
} catch (Exception e) {
    e.printStackTrace();
} finally {
    datasetModel.leaveCriticalSection();
    dataset.end();
}
The write code:
Model model = dataset.getDefaultModel();
dataset.begin(ReadWrite.WRITE);
model.enterCriticalSection(Lock.WRITE);
try {
    Reader messageReader = new StringReader(inputMessageText);
    model.read(messageReader, null, "TURTLE");
    dataset.commit();
} catch (Exception e) {
    e.printStackTrace();
} finally {
    model.leaveCriticalSection();
    dataset.end();
}
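A common cause of "In the middle of an alloc-write" is mixing non-transactional and transactional access to the same TDB dataset. In the code above, getDefaultModel() is called before begin(), and the old enterCriticalSection()/leaveCriticalSection() locking is combined with transactions, even though transactions already provide the isolation. A purely transactional sketch, reusing dataset, query, inputMessageText and outputMessageTexts from the question:

// Read: begin the transaction before touching the dataset; no enterCriticalSection needed
dataset.begin(ReadWrite.READ);
try {
    try (QueryExecution qexec = QueryExecutionFactory.create(query, dataset)) {
        Model resultModel = qexec.execConstruct();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        RDFDataMgr.write(out, resultModel, RDFFormat.TURTLE);
        outputMessageTexts.add(out.toString());
    }
} finally {
    dataset.end();
}

// Write: same pattern, obtaining the model inside the transaction
dataset.begin(ReadWrite.WRITE);
try {
    Model model = dataset.getDefaultModel();
    model.read(new StringReader(inputMessageText), null, "TURTLE");
    dataset.commit();
} finally {
    dataset.end();
}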

Throwing an exception inside a Java 8 stream forEach

I am using a Java 8 stream and I cannot throw a checked exception inside the forEach of the stream.
stream.forEach(m -> {
    try {
        if (isInitial) {
            isInitial = false;
            String outputName = new SimpleDateFormat(Constants.HMDBConstants.HMDB_SDF_FILE_NAME).format(new Date());
            if (location.endsWith(Constants.LOCATION_SEPARATOR)) {
                savedPath = location + outputName;
            } else {
                savedPath = location + Constants.LOCATION_SEPARATOR + outputName;
            }
            File output = new File(savedPath);
            FileWriter fileWriter = null;
            fileWriter = new FileWriter(output);
            writer = new SDFWriter(fileWriter);
        }
        writer.write(m);
    } catch (IOException e) {
        throw new ChemIDException(e.getMessage(), e);
    }
});
And this is my exception class:
public class ChemIDException extends Exception {
    public ChemIDException(String message, Exception e) {
        super(message, e);
    }
}
I am using loggers to log errors at an upper level, so I want to propagate the exception to the top. Thanks.
Try extending RuntimeException instead. The functional interface passed to forEach does not declare any checked exceptions, so you need an exception that can be thrown at runtime.
WARNING: THIS IS PROBABLY NOT A VERY GOOD IDEA
But it will probably work.
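Concretely, that means making the exception class from the question unchecked; everything else stays the same:

// Unchecked variant: can escape the Consumer passed to forEach without being declared
public class ChemIDException extends RuntimeException {
    public ChemIDException(String message, Exception e) {
        super(message, e);
    }
}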
Why are you using forEach, a method designed to process every element, when all you want to do is process the first element? Instead of realizing that forEach is the wrong method for the job (or that there are more methods in the Stream API than forEach), you are kludging this with an isInitial flag.
Just consider:
Optional<String> o = stream.findFirst();
if (o.isPresent()) try {
    String outputName = new SimpleDateFormat(Constants.HMDBConstants.HMDB_SDF_FILE_NAME)
        .format(new Date());
    if (location.endsWith(Constants.LOCATION_SEPARATOR)) {
        savedPath = location + outputName;
    } else {
        savedPath = location + Constants.LOCATION_SEPARATOR + outputName;
    }
    File output = new File(savedPath);
    FileWriter fileWriter = new FileWriter(output);
    writer = new SDFWriter(fileWriter);
    writer.write(o.get());
} catch (IOException e) {
    throw new ChemIDException(e.getMessage(), e);
}
which has no issues with exception handling. This example assumes that the Stream’s element type is String. Otherwise, you have to adapt the Optional<String> type.
If, however, your isInitial flag is supposed to change more than once during the stream processing, you are definitely using the wrong tool for your job. You should have read and understood the “Stateless behaviors” and “Side-effects” sections of the Stream API documentation, as well as the “Non-interference” section, before using Streams. Just converting loops to forEach invocations on a Stream doesn’t improve the code.

ANTLR test class not compiling?

I've cobbled together some code to test a lexer/parser grammar, but I'm stuck on how to create the appropriate file input/stream objects to parse a file. My code is as follows, and I'm getting an error about passing the BasicLexer constructor an ANTLRInputStream instead of a CharStream, and a similar message about passing the BasicParser a CommonTokenStream (it expects a TokenStream). Any ideas on where I've gone wrong?
public static void main(String[] args) throws Exception {
    String filename = args[0];
    InputStream is;
    try {
        is = new FileInputStream(filename);
        //is.close();
    } catch (FileNotFoundException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    ANTLRInputStream in = new ANTLRInputStream(is);
    BasicLexer lexer = new BasicLexer(in);
    CommonTokenStream tokens = new CommonTokenStream(lexer);
    BasicParser parser = new BasicParser(tokens);
    parser.eval();
}
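A likely cause of those constructor errors is a version mismatch between the ANTLR tool that generated BasicLexer/BasicParser and the runtime JAR on the classpath (for example, ANTLR 3 generated classes with the ANTLR 4 runtime, or vice versa): in ANTLR 4, ANTLRInputStream implements CharStream and CommonTokenStream implements TokenStream, so the code above would type-check. Assuming ANTLR 4 (4.7 or later) is used for both generation and runtime, a minimal driver would be:

import org.antlr.v4.runtime.CharStream;
import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;

public class Main {
    public static void main(String[] args) throws Exception {
        // CharStreams replaces the deprecated ANTLRInputStream constructors in ANTLR 4.7+
        CharStream input = CharStreams.fromFileName(args[0]);
        BasicLexer lexer = new BasicLexer(input);
        CommonTokenStream tokens = new CommonTokenStream(lexer);
        BasicParser parser = new BasicParser(tokens);
        parser.eval(); // 'eval' is the start rule used in the question
    }
}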

How to use Lucene library to extract n-grams?

I am having a rough time trying to wrap my head around the Lucene library. This is what I have so far:
public void shingleMe()
{
    try
    {
        StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_35);
        FileReader reader = new FileReader("test.txt");
        ShingleAnalyzerWrapper shingleAnalyzer = new ShingleAnalyzerWrapper(analyzer, 2);
        shingleAnalyzer.setOutputUnigrams(false);
        TokenStream stream = shingleAnalyzer.tokenStream("contents", reader);
        CharTermAttribute charTermAttribute = stream.getAttribute(CharTermAttribute.class);
        while (stream.incrementToken())
        {
            System.out.println(charTermAttribute.toString());
        }
    }
    catch (FileNotFoundException e)
    {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    catch (IOException e)
    {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
It fails at stream.incrementToken(). It's my understanding that the ShingleAnalyzerWrapper wraps another Analyzer to produce shingles (word n-grams); from there, I get a token stream and read each token through the CharTermAttribute. However, it always results in this exception:
Exception in thread "main" java.lang.AbstractMethodError: org.apache.lucene.analysis.TokenStream.incrementToken()Z
Thoughts? Thanks in advance!
AbstractMethodError cannot occur as a result of wrong API usage; it must be the result of compiling against one JAR and then running against a different one. Since you are using both the Lucene Core and Lucene Analyzers JARs here, double-check your compile-time and runtime classpaths.
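As a side note, once the core and analyzers JARs are aligned on the same version (3.5.0 for Version.LUCENE_35), the loop above is close to correct. A sketch of the usual TokenStream workflow, using addAttribute rather than getAttribute and following the reset()/end()/close() contract (mandatory from Lucene 4 onward, good practice in 3.x):

StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_35);
ShingleAnalyzerWrapper shingleAnalyzer = new ShingleAnalyzerWrapper(analyzer, 2);
shingleAnalyzer.setOutputUnigrams(false);

TokenStream stream = shingleAnalyzer.tokenStream("contents", new FileReader("test.txt"));
CharTermAttribute termAtt = stream.addAttribute(CharTermAttribute.class); // adds the attribute if absent
stream.reset();
while (stream.incrementToken()) {
    System.out.println(termAtt.toString()); // one bigram shingle per line, e.g. "quick brown"
}
stream.end();
stream.close();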