I am trying to edit some PDFBox classes that have private data members, so I copied the org folder and pasted it into my src folder. Now, when I create an object of the PDFTextStripper class, I get a "java.lang.ExceptionInInitializerError".
This is the part of the code inside the PDFTextStripper class where the exception happens:
static
{
    String path = "org/apache/pdfbox/resources/text/BidiMirroring.txt";
    InputStream input = PDFTextStripper.class.getClassLoader().getResourceAsStream(path);
    try
    {
        parseBidiFile(input);
    }
    catch (IOException e)
    {
        LOG.warn("Could not parse BidiMirroring.txt, mirroring char map will be empty: "
                + e.getMessage());
    }
    finally
    {
        try
        {
            input.close(); // error is in this line
        }
        catch (IOException e)
        {
            LOG.error("Could not close BidiMirroring.txt ", e);
        }
    }
};
So the error points to this line.
Why is this exception happening? When I use the jar file I don't get any exception, so why am I getting one now? How do I solve this?
If anyone faces a similar problem: just copy the resources folder too. It worked for me.
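For reference, here is a minimal sketch (the ResourceCheck class name is just illustrative) that reproduces the lookup the static initializer performs, so you can check whether the resources folder is actually visible on your classpath:
import java.io.InputStream;

public class ResourceCheck {
    public static void main(String[] args) {
        // Same lookup the PDFBox static initializer performs. If this prints null,
        // the copied sources cannot see org/apache/pdfbox/resources on the classpath,
        // input.close() throws a NullPointerException, and the static initializer
        // failure surfaces as java.lang.ExceptionInInitializerError.
        String path = "org/apache/pdfbox/resources/text/BidiMirroring.txt";
        InputStream input = ResourceCheck.class.getClassLoader().getResourceAsStream(path);
        System.out.println(path + " -> " + input);
    }
}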
I'm using this library:
"io.github.microutils:kotlin-logging:2.0.4"
with this logging implementation:
"ch.qos.logback:logback-classic:1.2.3"
In my code I call:
private val logger = KotlinLogging.logger{}
and then use this logger as follows:
logger.debug("message")
This runs fine until I try to debug my code, at which point the following two NoSuchMethodErrors pop up in the library:
private static IMarkerFactory bwCompatibleGetMarkerFactoryFromBinder() throws NoClassDefFoundError {
    try {
        return StaticMarkerBinder.getSingleton().getMarkerFactory();
    } catch (NoSuchMethodError var1) {
        return StaticMarkerBinder.SINGLETON.getMarkerFactory();
    }
}
And:
private static MDCAdapter bwCompatibleGetMDCAdapterFromBinder() throws NoClassDefFoundError {
    try {
        return StaticMDCBinder.getSingleton().getMDCA();
    } catch (NoSuchMethodError var1) {
        return StaticMDCBinder.SINGLETON.getMDCA();
    }
}
(the first time I try to log something)
Others on my team do not experience this issue. They are on Macs, in case that matters.
If I just continue running the code, everything is fine since the exception is caught, but I don't want to hit continue twice every time I want to debug. I'm willing to ignore the exceptions if that is possible, or better yet, fix the underlying issue.
Recently I have been working on an Eclipse plugin project with Eclipse RCP, but I ran into an issue with the Eclipse UI when printing a large number of messages to the plugin's console.
The messages come from a complex process that can be thought of as a factory producing messages continuously, never stopping (until the client stops the process, of course).
Previously, when the output was short, I just called org.eclipse.ui.console.MessageConsoleStream.println().
But this time, when I tried the same approach, the runtime Eclipse application (launched in debug mode) stopped responding and then reported an out-of-memory error.
It seems as if Eclipse reads all the messages into memory and only then prints them to the console in one go, so when the number of messages is large it runs out of memory.
My question is: what can I do to print the messages line by line in the console?
My description may not be accurate. Below is the Java code:
public void print(Process p) {
    BufferedReader in = new BufferedReader(
            new InputStreamReader(p.getInputStream()), 1024);
    String line = "";
    try {
        while ((line = in.readLine()) != null) {
            // this works fine when printing to the main console
            System.out.println(line);
            // when printing to the plugin console it runs out of memory;
            // this is org.eclipse.ui.console.MessageConsoleStream.println()
            println(line);
        }
        in.close();
        this.flush();
        this.close();
        p.destroy();
    }
    catch (IOException e) {
        e.printStackTrace();
    }
}
Then I tried writing to a file first and letting the MessageConsoleStream read from the file every 1000 messages, but the result looks the same:
public void print(Process p) {
    BufferedReader in = new BufferedReader(
            new InputStreamReader(p.getInputStream()), 1024);
    String line = "";
    char[] tem = new char[1024];
    int i = 0;
    try {
        File temp = File.createTempFile("temp", ".tep", new File("E:/"));
        FileWriter out = new FileWriter(temp);
        MessageConsoleStream mcs = null;
        while ((line = in.readLine()) != null) {
            if (i <= 1000) {
                System.out.println(line);
                out.write(line + "\n", 0, line.length() + 1);
                i++;
            }
            else {
                i = 0;
                out.flush();
                out.close();
                FileReader fr = new FileReader(temp);
                mcs = CConsole.getMessageStream("consoleName", "file name");
                while (fr.read(tem, 0, 1024) != -1) {
                    mcs.print(String.valueOf(tem));
                }
                mcs.flush();
                mcs.close();
                fr.close();
                out = new FileWriter(temp, false);
            }
        }
        if (i != 0) {
            mcs = CConsole.getMessageStream("consoleName", "file name");
            out.flush();
            out.close();
            FileReader fr = new FileReader(temp);
            while (fr.read(tem, 0, 1024) != -1) {
                mcs.print(String.valueOf(tem));
            }
            mcs.flush();
            mcs.close();
        }
        in.close();
        p.destroy();
    }
    catch (IOException e) {
        e.printStackTrace();
    }
}
All of the approaches above make Eclipse run out of memory once the number of messages exceeds about 600,000 (at that point I stop the process, otherwise it runs out of memory).
It looks as if Eclipse wants to print all of the messages at once rather than line by line, so it keeps reading and reading until it runs out of memory.
By the way, I found a note in org.eclipse.ui.console.MessageConsoleStream:
Clients should avoid writing large amounts of output to this stream
in the UI thread. The console needs to process the output in the UI
thread and if the client hogs the UI thread writing output to the
console, the console will not be able to process the output.
That is not the real reason, is it?
I also notice that both CDT and JDT are fine when printing a large number of messages. How do they do it?
Thanks!
You have to call the flush() method every so often to write the MessageConsoleStream's buffered output out to the console.
The flush() method comes from the IOConsoleOutputStream class in the org.eclipse.ui.console package (MessageConsoleStream extends it). It is not well documented, so I can see how you might have missed it.
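For illustration, here is a minimal sketch of the read loop with a periodic flush. It assumes the CConsole.getMessageStream(...) helper from the question returns the plugin's MessageConsoleStream; the flush interval of 1000 lines is arbitrary:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

import org.eclipse.ui.console.MessageConsoleStream;

public class ConsolePrinter {
    public void print(Process p) {
        // CConsole is the helper class from the question; the arguments are placeholders.
        MessageConsoleStream mcs = CConsole.getMessageStream("consoleName", "file name");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(p.getInputStream()), 1024)) {
            String line;
            int count = 0;
            while ((line = in.readLine()) != null) {
                mcs.println(line);
                // Flush periodically so the buffered output is handed to the console
                // instead of accumulating indefinitely.
                if (++count % 1000 == 0) {
                    mcs.flush();
                }
            }
            mcs.flush();
            mcs.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}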
I'm writing a service that watches for the existence of different files in different folders.
I'm using FileSystemWatchers to get the events.
As part of the deployment, one of the watched folders is deleted and recreated from time to time.
As a result, the service throws an error and stops.
Is it possible to catch that kind of error and have the service recreate the FileSystemWatcher on the new folder?
Catch the Deleted event, and then reschedule with a timed poll to watch for the new folder.
I don't have a compiler to hand right now, but I knocked up this pseudocode:
using System;
using System.IO;
using System.Threading;

public class Watcher : IDisposable {
    private readonly FileSystemEventHandler onDelete;
    private readonly FileSystemWatcher watcher;

    public Watcher(string path, FileSystemEventHandler onDelete) {
        this.onDelete = onDelete;
        watcher = new FileSystemWatcher { Path = path };
        watcher.NotifyFilter = NotifyFilters.FileName | NotifyFilters.DirectoryName; // looking for the delete event
        watcher.Deleted += onDelete;
        // Begin watching.
        watcher.EnableRaisingEvents = true;
    }

    public void Dispose() {
        watcher.Deleted -= onDelete;
        watcher.Dispose();
    }
}

public static class Program {
    private const string WatchedDir = "somedir";
    private static Watcher watcher;

    public static void Main() {
        watcher = new Watcher(WatchedDir, OnDeleted);
        while (true) {
            // stuff!
        }
    }

    private static void OnDeleted(object source, FileSystemEventArgs e) {
        // The watched folder is gone: tear down the old watcher, poll until the
        // folder reappears, then recreate the watcher (and any other watchers you have).
        watcher.Dispose();
        while (!Directory.Exists(WatchedDir)) {
            Thread.Sleep(1000); // ...some delay...
        }
        watcher = new Watcher(WatchedDir, OnDeleted);
    }
}
You can handle this with the Deleted event. However, if you delete the directory assigned to FileSystemWatcher.Path itself, it may cause an error. One way around this is to assign the parent of the watched directory to FileSystemWatcher.Path; then the deletion of the watched directory is caught by the Deleted event.
It is also possible to get an error inside the handler if you try to access the directory that was just deleted. When this happens you may not hit the normal breakpoint, and it can look as if the error were caused by the deletion itself.
I made a test class against the repository methods shown below:
public void AddFile<TFileType>(TFileType FileToAdd) where TFileType : File
{
    try
    {
        _session.Save(FileToAdd);
        _session.Flush();
    }
    catch (Exception e)
    {
        if (e.InnerException.Message.Contains("Violation of UNIQUE KEY"))
            throw new ArgumentException("Unique Name must be unique");
        else
            throw e;
    }
}

public void RemoveFile(File FileToRemove)
{
    _session.Delete(FileToRemove);
    _session.Flush();
}
And the test class:
try
{
    Data.File crashFile = new Data.File();
    crashFile.UniqueName = "NonUniqueFileNameTest";
    crashFile.Extension = ".abc";
    repo.AddFile(crashFile);
    Assert.Fail();
}
catch (Exception e)
{
    Assert.IsInstanceOfType(e, typeof(ArgumentException));
}

// Clean up the file
Data.File removeFile = repo.GetFiles().Where(f => f.UniqueName == "NonUniqueFileNameTest").FirstOrDefault();
repo.RemoveFile(removeFile);
The test fails. When I stepped in to trace the problem, I found that when I call _session.Flush() right after _session.Delete(), it throws the exception, and the SQL it submits is actually an "INSERT INTO" statement, which is exactly the SQL that causes the UNIQUE CONSTRAINT error. I tried wrapping both calls in a transaction, but the same problem happens. Does anyone know the reason?
Edit
Everything else stays the same; I only added Evict as suggested:
public void AddFile<TFileType>(TFileType FileToAdd) where TFileType : File
{
    try
    {
        _session.Save(FileToAdd);
        _session.Flush();
    }
    catch (Exception e)
    {
        _session.Evict(FileToAdd);
        if (e.InnerException.Message.Contains("Violation of UNIQUE KEY"))
            throw new ArgumentException("Unique Name must be unique");
        else
            throw e;
    }
}
No difference to the result.
Call _session.Evict(FileToAdd) in the catch block. Although the save fails, FileToAdd is still a transient object in the session and NH will attempt to persist (insert) it the next time the session is flushed.
NHibernate Manual "Best practices" Chapter 22:
This is more of a necessary practice than a "best" practice. When
an exception occurs, roll back the ITransaction and close the ISession.
If you don't, NHibernate can't guarantee that in-memory state
accurately represents persistent state. As a special case of this,
do not use ISession.Load() to determine if an instance with the given
identifier exists on the database; use Get() or a query instead.
I've got a rule like this:
declaration returns [RuntimeObject obj]:
    DECLARE label value { $obj = new RuntimeObject($label.text, $value.text); };
Unfortunately, it throws an exception in the RuntimeObject constructor because $label.text is null. Examining the debug output and some other things reveals that the match against "label" actually failed, but the ANTLR runtime "helpfully" continues with the match for the purpose of giving a more helpful error message (http://www.antlr.org/blog/antlr3/error.handling.tml).
Okay, I can see how this would be useful for some situations, but how can I tell ANTLR to stop doing that? The defaultErrorHandler=false option from v2 seems to be gone.
I don't know much about ANTLR, so this may be way off base, but the section entitled "Error Handling" on this migration page looks helpful.
It suggests you can either use @rulecatch { } to disable error handling entirely, or override the mismatch() method of BaseRecognizer with your own implementation that doesn't attempt to recover. From your problem description, the example on that page seems to do exactly what you want.
You could also override the reportError(RecognitionException) method to make it rethrow the exception instead of printing it, like so:
@parser::members {
    @Override
    public void reportError(RecognitionException e) {
        throw new RuntimeException(e);
    }
}
However, I'm not sure you want this (or the solution by ire_and_curses), because you will only get one error per parse attempt, which you can then fix, just to find the next error. If you let ANTLR recover (it does this reasonably well), you can get multiple errors in one run and fix all of them at once.
You need to override the mismatch and recoverFromMismatchedSet methods to ensure an exception is thrown immediately (the examples are for Java):
@members {
    protected void mismatch(IntStream input, int ttype, BitSet follow) throws RecognitionException {
        throw new MismatchedTokenException(ttype, input);
    }

    public Object recoverFromMismatchedSet(IntStream input, RecognitionException e, BitSet follow) throws RecognitionException {
        throw e;
    }
}
Then you need to change how the parser deals with those exceptions so they're not swallowed:
@rulecatch {
    catch (RecognitionException e) {
        throw e;
    }
}
(The bodies of all the rule-matching methods in your parser will be enclosed in try blocks, with this as the catch block.)
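For illustration, each generated rule method then ends up shaped roughly like this (a sketch only, not exact ANTLR output; someRule is a hypothetical rule):
public final void someRule() throws RecognitionException {
    try {
        // ...token-matching code generated from the rule...
    }
    catch (RecognitionException e) {
        // body of the custom @rulecatch above: rethrow instead of reportError()/recover()
        throw e;
    }
}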
For comparison, the default implementation of recoverFromMismatchedSet inherited from BaseRecognizer:
public Object recoverFromMismatchedSet(IntStream input, RecognitionException e, BitSet follow) throws RecognitionException {
    if (mismatchIsMissingToken(input, follow)) {
        reportError(e);
        return getMissingSymbol(input, e, Token.INVALID_TOKEN_TYPE, follow);
    }
    throw e;
}
and the default rulecatch:
catch (RecognitionException re) {
    reportError(re);
    recover(input, re);
}