Reload Lucene Suggester Index

How do I store and reload a Lucene suggester index?
This is how I build a suggester index:
def buildAutoCompleteIndex(path: Path, data: List[Map[String, Any]]): BlendedInfixSuggester = {
  val directory = FSDirectory.open(path)
  val autoComplete = new BlendedInfixSuggester(directory, new StandardAnalyzer())
  autoComplete.build(new EntityIteratorStub())
  data.foreach { d =>
    autoComplete.add(d("text").asInstanceOf[BytesRef],
      d("contexts").asInstanceOf[Set[BytesRef]],
      d("weight").asInstanceOf[Long],
      d("payload").asInstanceOf[BytesRef])
  }
  autoComplete.refresh()
  autoComplete
}
However, if I try to check if the index exists on, say, server restart, I get a "suggester was not built" exception.
def checkIfIndexExists(path: Path): BlendedInfixSuggester = {
  val directory = FSDirectory.open(path)
  val autoComplete = new BlendedInfixSuggester(directory, new StandardAnalyzer())
  try {
    // exception occurs here ->
    if (autoComplete.lookup("a", 1, true, false).length > 0) autoComplete
    else null
  } catch {
    case NonFatal(e) =>
      println("Index does not exist, recreating at " + path)
      null
  }
}
Edit ==========================
Found this in Lucene AnalyzingInfixSuggester:
@Override
public boolean store(DataOutput in) throws IOException {
    return false;
}

@Override
public boolean load(DataInput out) throws IOException {
    return false;
}
Does this mean that storing-reloading of Suggester indexes cannot be done?

Using commit solved the problem: the suggester keeps its data in an ordinary Lucene index in the directory you pass in, so committing that index persists it across restarts (the store/load overrides above just aren't the mechanism it uses).
def checkIfIndexExists(path: Path): BlendedInfixSuggester = {
  val directory = FSDirectory.open(path)
  val autoComplete = new BlendedInfixSuggester(directory, new StandardAnalyzer())
  try {
    if (autoComplete.getCount > 0) autoComplete
    else null
  } catch {
    case NonFatal(e) => null
  }
}
def buildAutoCompleteIndex(path: Path, data: List[Map[String, Any]]): BlendedInfixSuggester = {
  val directory = FSDirectory.open(path)
  val autoComplete = new BlendedInfixSuggester(directory, new StandardAnalyzer())
  // Just build a stub iterator to get started with
  autoComplete.build(new EntityIteratorStub())
  data.foreach { d =>
    autoComplete.add(d("text").asInstanceOf[BytesRef],
      d("contexts").asInstanceOf[Set[BytesRef]],
      d("weight").asInstanceOf[Long],
      d("payload").asInstanceOf[BytesRef])
  }
  autoComplete.refresh()
  autoComplete.commit()
  autoComplete
}

Related

Why are my swagger docs showing 'additionalProperties = false' for my custom schema filter?

I have this SchemaFilter in my Swagger config:
public class SmartEnumSchemaFilter : ISchemaFilter
{
    public void Apply(OpenApiSchema schema, SchemaFilterContext context)
    {
        if (!TryGetSmartEnumValues(context.Type, out var values))
        {
            return;
        }
        var openApiInts = new OpenApiArray();
        openApiInts.AddRange(values.Select(x => new OpenApiInteger(x.Value)));
        schema.Type = "integer";
        schema.Enum = openApiInts;
        schema.Properties = null;
        schema.Description = string.Join(", ", values.Select(v => $"{v.Value}: {v.Name}"));
    }
}
It is working well, but for some reason an "additionalProperties": false entry always appears in the docs for this particular schema - all the other schemas don't have it. Is there some way of removing it?
I tried setting both
schema.AdditionalProperties = null;
schema.AdditionalPropertiesAllowed = false;
but neither made any difference.

How to validate the error body in WebFlux/WebClient

I have a handler method for an endpoint:
public Mono<ServerResponse> create(ServerRequest serverRequest) {
    Validator validator = new CreateUserValidator();
    Mono<UserDto> userDtoMono = serverRequest.bodyToMono(UserDto.class);
    return userDtoMono.flatMap(user -> {
        Errors errors = new BeanPropertyBindingResult(user, UserDto.class.getName());
        validator.validate(user, errors);
        if (errors == null || errors.getAllErrors().isEmpty()) {
            return userService.create(user)
                    .flatMap(created -> ServerResponse.status(HttpStatus.CREATED)
                            .contentType(MediaType.APPLICATION_JSON)
                            .body(fromValue(created)))
                    .onErrorResume(this::handleException);
        } else {
            Set<String> messages = new HashSet<>();
            errors.getAllErrors().forEach(error -> messages.add(error.getDefaultMessage()));
            return handleException(new InvalidAttributesException(messages));
        }
    });
}
private Mono<ServerResponse> handleException(Throwable exception) {
    ErrorResponse errorResponse = new ErrorResponse();
    if (exception instanceof InvalidAttributesException) {
        InvalidAttributesException iae = (InvalidAttributesException) exception;
        iae.getErrors().forEach(error ->
                errorResponse.addMessage(messagesService.getMessage(error)));
    } else {
        errorResponse.addMessage(messagesService.getMessage(exception.getMessage()));
    }
    logger.info("Error: " + errorResponse);
    return ServerResponse.status(HttpStatus.BAD_REQUEST).body(fromValue(errorResponse));
}
As you can see, if the validator fails, the method returns a bad request error with an ErrorResponse as the body.
I use a WebClient to test it. The WebClient has a filter that extracts the ErrorResponse in case of an error status:
WebClient client = WebClient.builder()
        .clientConnector(new ReactorClientHttpConnector(
                HttpClient.create(ConnectionProvider.newConnection())))
        .filter(ExchangeFilterFunction.ofResponseProcessor(clientResponse -> {
            if (clientResponse.statusCode().isError()) {
                return clientResponse.bodyToMono(ErrorResponse.class).flatMap(errorResponse ->
                        Mono.error(new InvalidAttributesException(
                                new HashSet<>(errorResponse.getMessages()))));
            }
            return Mono.just(clientResponse);
        }))
        .baseUrl("http://localhost:8080")
        .build();
Mono<ErrorResponse> response = (Mono<ErrorResponse>) client.post()
        .uri(thingsEndpoint(url))
        .accept(MediaType.APPLICATION_JSON)
        .body(Mono.just(userDto), UserDto.class)
        .exchange();
response.subscribe(errorResponse -> {
    List<String> expectedMessages = new ArrayList<>();
    expectedMessages.add("name is mandatory");
    expectedMessages.add("email is mandatory");
    assertTrue(errorResponse.getMessages().containsAll(expectedMessages));
});
But it doesn't work. When I debug the test, it seems that the exchange() call returns an exception before the endpoint is even called. What am I doing wrong?
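Note that the filter maps any error status to Mono.error, so the client emits an error signal rather than an ErrorResponse value - the subscribe callback above will never receive one. One way to assert on that error signal is reactor-test's StepVerifier; a minimal sketch, assuming reactor-test is on the test classpath and reusing client and userDto from above (the endpoint path is a placeholder, and assertTrue is JUnit's, as in the original):
import reactor.test.StepVerifier;

StepVerifier.create(
        client.post()
                .uri("/users") // placeholder path
                .accept(MediaType.APPLICATION_JSON)
                .body(Mono.just(userDto), UserDto.class)
                .exchange())
        .expectErrorSatisfies(ex -> {
            // The filter wraps the ErrorResponse messages in this exception.
            InvalidAttributesException iae = (InvalidAttributesException) ex;
            assertTrue(iae.getErrors().contains("name is mandatory"));
            assertTrue(iae.getErrors().contains("email is mandatory"));
        })
        .verify();
Unlike subscribe() on the test thread, verify() blocks until the sequence terminates, so the assertions are guaranteed to run before the test finishes.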

org.apache.fop.fo.flow.ExternalGraphic catches and logs ImageException I want to handle myself

I am transforming an image into PDF for test purposes.
To ensure that the image is compatible with the printing process later on, I'm running a quick test print during the upload.
I'm creating a simple test PDF with a transformer. When I try to print an image with an incompatible format, the ImageManager of the transformer throws an ImageException, starting in the preloadImage() function:
public ImageInfo preloadImage(String uri, Source src)
        throws ImageException, IOException {
    Iterator iter = registry.getPreloaderIterator();
    while (iter.hasNext()) {
        ImagePreloader preloader = (ImagePreloader) iter.next();
        ImageInfo info = preloader.preloadImage(uri, src, imageContext);
        if (info != null) {
            return info;
        }
    }
    throw new ImageException("The file format is not supported. No ImagePreloader found for "
            + uri);
}
throwing it to:
public ImageInfo needImageInfo(String uri, ImageSessionContext session, ImageManager manager)
        throws ImageException, IOException {
    //Fetch unique version of the URI and use it for synchronization so we have some sort of
    //"row-level" locking instead of "table-level" locking (to use a database analogy).
    //The fine locking strategy is necessary since preloading an image is a potentially long
    //operation.
    if (isInvalidURI(uri)) {
        throw new FileNotFoundException("Image not found: " + uri);
    }
    String lockURI = uri.intern();
    synchronized (lockURI) {
        ImageInfo info = getImageInfo(uri);
        if (info == null) {
            try {
                Source src = session.needSource(uri);
                if (src == null) {
                    registerInvalidURI(uri);
                    throw new FileNotFoundException("Image not found: " + uri);
                }
                info = manager.preloadImage(uri, src);
                session.returnSource(uri, src);
            } catch (IOException ioe) {
                registerInvalidURI(uri);
                throw ioe;
            } catch (ImageException e) {
                registerInvalidURI(uri);
                throw e;
            }
            putImageInfo(info);
        }
        return info;
    }
}
throwing it to:
public ImageInfo getImageInfo(String uri, ImageSessionContext session)
        throws ImageException, IOException {
    if (getCache() != null) {
        return getCache().needImageInfo(uri, session, this);
    } else {
        return preloadImage(uri, session);
    }
}
Finally it gets caught and logged in the ExternalGraphic.class:
/** {@inheritDoc} */
public void bind(PropertyList pList) throws FOPException {
    super.bind(pList);
    src = pList.get(PR_SRC).getString();
    //Additional processing: obtain the image's intrinsic size and baseline information
    url = URISpecification.getURL(src);
    FOUserAgent userAgent = getUserAgent();
    ImageManager manager = userAgent.getFactory().getImageManager();
    ImageInfo info = null;
    try {
        info = manager.getImageInfo(url, userAgent.getImageSessionContext());
    } catch (ImageException e) {
        ResourceEventProducer eventProducer = ResourceEventProducer.Provider.get(
                getUserAgent().getEventBroadcaster());
        eventProducer.imageError(this, url, e, getLocator());
    } catch (FileNotFoundException fnfe) {
        ResourceEventProducer eventProducer = ResourceEventProducer.Provider.get(
                getUserAgent().getEventBroadcaster());
        eventProducer.imageNotFound(this, url, fnfe, getLocator());
    } catch (IOException ioe) {
        ResourceEventProducer eventProducer = ResourceEventProducer.Provider.get(
                getUserAgent().getEventBroadcaster());
        eventProducer.imageIOError(this, url, ioe, getLocator());
    }
    if (info != null) {
        this.intrinsicWidth = info.getSize().getWidthMpt();
        this.intrinsicHeight = info.getSize().getHeightMpt();
        int baseline = info.getSize().getBaselinePositionFromBottom();
        if (baseline != 0) {
            this.intrinsicAlignmentAdjust = FixedLength.getInstance(-baseline);
        }
    }
}
That way it isn't accessible to me in my code that uses the transformer.
I tried to use a custom ErrorListener, but the transformer only reports fatal errors to the ErrorListener.
Is there any way to access the exception and handle it myself without changing the code of the library?
It was easier than I thought. Before I call the transformation, I register a custom EventListener on the user agent of the Fop I'm using. This listener just stores the information about what kind of event was triggered, so I can throw an exception if it's an image error.
My Listener:
import org.apache.fop.events.Event;
import org.apache.fop.events.EventListener;

public class ImageErrorListener implements EventListener
{
    private String eventKey = "";
    private boolean imageError = false;

    @Override
    public void processEvent(Event event)
    {
        eventKey = event.getEventKey();
        if (eventKey.equals("imageError")) {
            imageError = true;
        }
    }

    public String getEventKey()
    {
        return eventKey;
    }

    public void setEventKey(String eventKey)
    {
        this.eventKey = eventKey;
    }

    public boolean isImageError()
    {
        return imageError;
    }

    public void setImageError(boolean imageError)
    {
        this.imageError = imageError;
    }
}
Use of the Listener:
// Start XSLT transformation and FOP processing
ImageErrorListener imageListener = new ImageErrorListener();
fop.getUserAgent().getEventBroadcaster().addEventListener(imageListener);
if (res != null)
{
    transformer.transform(xmlDomStreamSource, res);
}
if (imageListener.isImageError()) {
    throw new ImageException("");
}
fop is of type Fop, xmlDomStreamSource is the XML source I want to transform, and res is my SAXResult.
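If this register-transform-check sequence is needed in more than one place, it can be folded into a small helper. A sketch reusing the ImageErrorListener above (the method name and exception message are made up for illustration):
// Runs a transformation and turns a swallowed image event back into an
// exception the caller can handle.
static void transformWithImageCheck(Fop fop, Transformer transformer,
                                    Source xml, Result res)
        throws TransformerException, ImageException {
    ImageErrorListener listener = new ImageErrorListener();
    fop.getUserAgent().getEventBroadcaster().addEventListener(listener);
    transformer.transform(xml, res);
    if (listener.isImageError()) {
        throw new ImageException("FOP reported an imageError event during transformation");
    }
}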

Having Trouble with ObjectInputStream/OutputStream

I am having trouble with my program's ability to save my maps to a file. Here are my two methods for writing and reading my maps and ArrayList.
Here is my read method:
private void getData() throws IOException, ClassNotFoundException {
    File f_Instructors = new File(PSLTrackerInfo.file + "instructors.brent");
    File f_Students = new File(PSLTrackerInfo.file + "students.brent");
    File f_Times = new File(PSLTrackerInfo.file + "times.brent");
    if (f_Instructors.exists()) {
        try (ObjectInputStream in = new ObjectInputStream(
                new BufferedInputStream(new FileInputStream(f_Instructors)))) {
            //Add theList back in
            if (in.readObject() != null) {
                TreeMap<Instructor, Set<Student>> read = null;
                while (in.available() > 0) {
                    read = (TreeMap<Instructor, Set<Student>>) in.readObject();
                }
                if (read != null) {
                    for (Instructor key : read.keySet()) {
                        System.out.println(key);
                        Set<Student> values = read.get(key);
                        PSLTrackerInfo.addInstructor(key, values);
                    }
                    System.out.println("Instructors Found! Reading...");
                } else {
                    System.out.println("No instructor data saved.1");
                }
            } else {
                System.out.println("No instructor data saved.2");
            }
            in.close();
        }
    }
    //Add times back in
    if (f_Times.exists()) {
        try (ObjectInputStream in = new ObjectInputStream(
                new BufferedInputStream(new FileInputStream(f_Times)))) {
            if (in.readObject() != null) {
                TreeMap<Student, ArrayList<Date>> readTimes = null;
                while (in.available() > 0) {
                    readTimes = (TreeMap<Student, ArrayList<Date>>) in.readObject();
                }
                if (readTimes != null) {
                    for (Student key : readTimes.keySet()) {
                        System.out.println(key);
                        ArrayList<Date> values = readTimes.get(key);
                        PSLTrackerInfo.addTimes(key, values);
                    }
                    System.out.println("Dates Found! Reading...");
                } else {
                    System.out.println("No dates saved.");
                }
            } else {
                System.out.println("No dates saved.");
            }
            in.close();
        }
    }
    //Add newStudents back in
    if (f_Students.exists()) {
        try (ObjectInputStream in = new ObjectInputStream(
                new BufferedInputStream(new FileInputStream(f_Students)))) {
            if (in.readObject() != null) {
                ArrayList<Student> readStudents = null;
                while (in.available() > 0) {
                    readStudents = (ArrayList<Student>) in.readObject();
                }
                if (readStudents != null) {
                    PSLTrackerInfo.setTheList(readStudents);
                }
                System.out.println("New students found! Reading...");
            } else {
                System.out.println("No new students data saved.");
            }
            in.close();
        }
    }
}
And here is my writing method:
private void saveData() {
    System.out.println("Saving Data...");
    File f_Instructors = new File(PSLTrackerInfo.file + "instructors.brent");
    File f_Students = new File(PSLTrackerInfo.file + "students.brent");
    File f_Times = new File(PSLTrackerInfo.file + "times.brent");
    ObjectOutputStream out_Instructors = null;
    ObjectOutputStream out_Students = null;
    ObjectOutputStream out_Times = null;
    try {
        out_Instructors = new ObjectOutputStream(
                new BufferedOutputStream(new FileOutputStream(f_Instructors)));
        out_Students = new ObjectOutputStream(
                new BufferedOutputStream(new FileOutputStream(f_Students)));
        out_Times = new ObjectOutputStream(
                new BufferedOutputStream(new FileOutputStream(f_Times)));
        out_Instructors.writeObject(PSLTrackerInfo.getMap());
        out_Times.writeObject(PSLTrackerInfo.getTimes());
        out_Students.writeObject(PSLTrackerInfo.getList());
        out_Instructors.flush();
        out_Students.flush();
        out_Times.flush();
        out_Instructors.close();
        out_Students.close();
        out_Times.close();
    } catch (IOException ex) {
        Logger.getLogger(PrivateLessonsTrackerGUI.class.getName())
                .log(Level.SEVERE, null, ex);
    }
    System.exit(0);
}
Sorry if it is a little confusing. I have three files to save three different objects; if there is a way to save them into one file, let me know, but I was getting a lot of errors that I couldn't figure out how to solve, so this is what I ended up doing. Thanks for any help given.
To EJP: I tried this
TreeMap<Instructor, Set<Student>> read = null;
try {
    read = (TreeMap<Instructor, Set<Student>>) in.readObject();
} catch (EOFException e) {
    System.out.println("Caught EOFException!");
}
And even when there was data in it when it was written to the file, I got an EOFException every time.
readObject() doesn't return null unless you wrote a null. If you're using that as a test for end of stream, it is invalid. The correct technique is to catch EOFException.
You are calling it and throwing away the result if it isn't null, and then calling it again. The second call will throw EOFException if there isn't another object in the file. It won't give you the same result as the first call. It's a stream.
available() is also not a valid test for end of stream. That's not what it's for. See the Javadoc. Again, the correct technique with readObject() is to catch EOFException.
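Putting those points together - and touching on the side question about using a single file - the usual pattern is to write the objects to one stream in a fixed order and read them back in the same order, with EOFException reserved for streams whose object count is genuinely unknown. A sketch, assuming the enclosing methods declare IOException and ClassNotFoundException as getData() does (the combined file name "tracker.brent" is made up):
// Writing: one file, three objects, fixed order.
try (ObjectOutputStream out = new ObjectOutputStream(new BufferedOutputStream(
        new FileOutputStream(PSLTrackerInfo.file + "tracker.brent")))) {
    out.writeObject(PSLTrackerInfo.getMap());
    out.writeObject(PSLTrackerInfo.getTimes());
    out.writeObject(PSLTrackerInfo.getList());
}

// Reading: same order, no readObject()-returns-null or available() tests.
try (ObjectInputStream in = new ObjectInputStream(new BufferedInputStream(
        new FileInputStream(PSLTrackerInfo.file + "tracker.brent")))) {
    TreeMap<Instructor, Set<Student>> map = (TreeMap<Instructor, Set<Student>>) in.readObject();
    TreeMap<Student, ArrayList<Date>> times = (TreeMap<Student, ArrayList<Date>>) in.readObject();
    ArrayList<Student> students = (ArrayList<Student>) in.readObject();
} catch (EOFException e) {
    // Only reached if the file holds fewer objects than expected.
}
Because the writer and reader agree on the count and order of objects, a normal run never hits EOFException at all.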

Apache Lucene 4.3.1 - index reader does not reach the last indexed document

In my app I have documents representing my data for each category, and my application automatically indexes new and modified documents.
If I index all documents in one category, it works fine and retrieves correct results. The problem is that if I modify a document or create a new one, the search will not retrieve it, even when it matches my query.
It usually keeps returning all docs except the last modified one.
Any help, please?
I have this IndexWriter config:
private IndexWriter getIndexWriter() throws IOException {
    Directory directory = FSDirectory.open(new File(filepath));
    IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_43, IndexFactory.ANALYZER);
    config.setRAMBufferSizeMB(350);
    TieredMergePolicy tmp = new TieredMergePolicy();
    tmp.setUseCompoundFile(false);
    config.setMergePolicy(tmp);
    ConcurrentMergeScheduler scheduler = (ConcurrentMergeScheduler) config.getMergeScheduler();
    scheduler.setMaxThreadCount(2);
    scheduler.setMaxMergeCount(20);
    IndexWriter writer = new IndexWriter(directory, config);
    writer.forceMerge(1);
    return writer;
}
My Collector:
public void collect(int docNum) throws IOException {
    try {
        if ((getCount() == getMaxSearchLimit() + 1) && getMaxSearchResults() != null) {
            setCounterExceededLimit(true);
            return;
        }
        addDocKey(); // method to add and render the matching docs in a customized way
    } catch (IOException exp) {
        if (!getErrors().toArrayList(getApplication().getLocale()).contains(exp.getMessage())) {
            getErrors().addError(exp.getMessage());
        }
    } catch (BusinessException bEx) {
        if (!getErrors().containsError(bEx.getErrorNumber())) {
            getErrors().addError(bEx);
        }
    } catch (CounterExceededLimitException counterEx) {
        return;
    }
}

@Override
public boolean acceptsDocsOutOfOrder() {
    return true;
}

@Override
public void setNextReader(AtomicReaderContext context) throws IOException {
}

@Override
public void setScorer(Scorer scorer) throws IOException {
}
Actually, I have this business logic to save my doc; then, if the doc saved successfully, I add it to the index process.
public boolean saveDocument(CategoryDocument doc) {
    boolean saved = false;
    // code to save my doc
    if (saved) {
        //add this document to the index process
        IndexManager.getInstance().addToIndex(this);
    }
    return saved;
}
Then my index manager creates a new thread to handle indexing this doc.
Here is my process to index my data document:
private void processDocument(IndexDocument indexDoc, DocKey docKey, boolean addToIndex)
        throws SearchException, BusinessException {
    CategorySetting catSetting = docKey.getCategorySetting();
    Integer catID = catSetting.getID();
    IndexManager manager = IndexManager.getInstance();
    IndexWriter writer = null;
    try {
        //Delete the lock file in case a previous index operation failed to delete it
        File lockFile = new File(filepath, IndexWriter.WRITE_LOCK_NAME);
        if (lockFile != null && lockFile.exists()) {
            lockFile.delete();
        }
        if (!manager.isGlobalIndexingProcess(catID)) {
            writer = getIndexWriter();
        } else {
            writer = manager.getGlobalIndexWriter(catID);
        }
        writer.forceMerge(1);
        removeDocument(docKey, writer);
        if (addToIndex) {
            writer.addDocument(indexDoc.getLuceneIndexDoc());
        }
    } catch (IOException exp) {
        throw new SearchException(exp.getMessage(), true);
    } finally {
        if (!manager.isGlobalIndexingProcess(catID)) {
            if (writer != null) {
                try {
                    writer.close(true);
                } catch (IOException ex) {
                    throw new SearchException(ex);
                }
            }
        }
    }
}
Use Lucene search to look for the word or phrase that you edited in the document, and let us know whether you get the correct hits. If you don't get any hits, then you are probably not indexing edited or newly added documents.
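A common cause of this symptom is searching with an IndexReader that was opened before the last document was written: an existing reader never sees later changes, so the index must be committed and the reader reopened (or a near-real-time reader obtained from the writer). A hedged sketch of the reopen step for the Lucene 4.x API - the long-lived reader variable is an assumption, since the question doesn't show the search side:
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.IndexSearcher;

// After indexing a document: make the change durable.
writer.commit();

// Before the next search: pick up index changes, if any.
DirectoryReader newReader = DirectoryReader.openIfChanged(reader);
if (newReader != null) {
    reader.close();
    reader = newReader;
}
IndexSearcher searcher = new IndexSearcher(reader);
openIfChanged returns null when nothing changed, so the old reader can be kept in that case; closing the writer (as processDocument does) also commits, but any reader opened before that close still needs to be reopened to see the new document.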