ArrayIndexOutOfBoundsException for an empty string on @RestQuery - JAX-RS

I have a handler in my RESTEasy Lambda application
with the URL: http://localhost:8080/store/search/v1/suggest
public List<Map<String, Object>> storeSearch(
        @RestHeader("id") String id,
        @RestQuery final String q,
        @RestQuery final String columns) {
    if (StringUtils.isBlank(q)) {
        logger.error("Search query is empty");
        return Collections.emptyList();
    }
The app works for the cases below but fails when q is empty.
Working:
http://localhost:8080/store/search/v1/suggest?q=ab
http://localhost:8080/store/search/v1/suggest
Failing:
http://localhost:8080/store/search/v1/suggest?q=
Can you please suggest what I am missing here?
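Note that the failing URL sends q as an empty string rather than omitting it, so the isBlank guard should catch it; a minimal test can confirm where the exception is actually raised. Below is a sketch of such a reproduction, assuming a Quarkus test setup with REST Assured (the test scaffold, header value, and assertions are my assumptions, not from the question):
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.is;

import io.quarkus.test.junit.QuarkusTest;
import org.junit.jupiter.api.Test;

@QuarkusTest
class StoreSearchTest {

    @Test
    void emptyQueryParamShouldReturnEmptyList() {
        given()
            .header("id", "test-id")                // hypothetical header value
        .when()
            .get("/store/search/v1/suggest?q=")     // the failing case from the question
        .then()
            .statusCode(200)
            .body("size()", is(0));                 // expect the empty-list branch
    }
}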

Related

Is it possible to get the executed SQL with its parameters when debugging the MyBatis source?

I am setting a breakpoint in the MyBatis source, in BaseExecutor's queryFromDatabase method, in IntelliJ IDEA. The code block looks like this:
private <E> List<E> queryFromDatabase(MappedStatement ms, Object parameter, RowBounds rowBounds,
        ResultHandler resultHandler, CacheKey key, BoundSql boundSql) throws SQLException {
    List<E> list;
    localCache.putObject(key, EXECUTION_PLACEHOLDER);
    try {
        list = doQuery(ms, parameter, rowBounds, resultHandler, boundSql);
    } finally {
        localCache.removeObject(key);
    }
    localCache.putObject(key, list);
    if (ms.getStatementType() == StatementType.CALLABLE) {
        localOutputParameterCache.putObject(key, parameter);
    }
    return list;
}
but the boundSql content shows SQL like this:
select * from article where channel_id in (?)
Is it possible to get the executed SQL in the trace? The channel_id in-clause has more than 100 values, and the SQL also contains other filter conditions.
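One option, while paused at that breakpoint, is to evaluate a small helper that pairs the placeholder SQL with the bound parameter values. This is only a sketch of that idea, not a MyBatis API for printing the final SQL; it assumes the standard BoundSql/ParameterMapping accessors, a bean-style parameter object, and resolves foreach-generated items through the additional parameters:
import org.apache.ibatis.mapping.BoundSql;
import org.apache.ibatis.mapping.MappedStatement;
import org.apache.ibatis.mapping.ParameterMapping;

// Hypothetical debug helper: returns the placeholder SQL plus each bound value.
static String describeBoundSql(MappedStatement ms, BoundSql boundSql) {
    StringBuilder sb = new StringBuilder(boundSql.getSql()).append("\n-- params: ");
    Object parameterObject = boundSql.getParameterObject();
    for (ParameterMapping pm : boundSql.getParameterMappings()) {
        String property = pm.getProperty();
        Object value;
        if (boundSql.hasAdditionalParameter(property)) {
            // values generated by <foreach>, e.g. the expanded channel_id items
            value = boundSql.getAdditionalParameter(property);
        } else if (parameterObject == null) {
            value = null;
        } else {
            value = ms.getConfiguration().newMetaObject(parameterObject).getValue(property);
        }
        sb.append(property).append('=').append(value).append("; ");
    }
    return sb.toString();
}
Evaluating describeBoundSql(ms, boundSql) in the debugger's Evaluate Expression dialog should then show which values will fill each ? placeholder.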

Trying to do a structural replace in IntelliJ

I want to replace my @RequestMapping annotations with the @GetMapping, @PutMapping, ... annotations. When I looked at Structural Find/Replace in IntelliJ, it looked like it could do the job.
I tried adding the following in the search template:
@RequestMapping( $key$ = $value$)
and added a filter on the key: text=method.
Now I want to extract, from the value (RequestMethod.GET), the word after the period, and then use this in the replacement:
@[Word(TitleCase)]Mapping( [everything except the key/value pair that was extracted in the search])
I haven't been able to figure out how to go about this. It would be nice to know if this can't be done, or to get any suggestions on how to do it. I looked at some of the other questions here on SO but didn't find anything that could help; most of the answers suggest using regex in those cases.
Before:
@RequestMapping(
    value = "/channels/{channel_name}",
    method = RequestMethod.POST,
    produces = MediaType.APPLICATION_JSON_VALUE,
    consumes = MediaType.APPLICATION_JSON_VALUE)
public Channel updateChannel(
        @PathVariable("channel_name") String channelName,
        @Valid @RequestBody Channel channel) {
    return channelService.updateChannel(channelName, channel);
}

@RequestMapping(
    value = "/channels/{channel_name}",
    method = RequestMethod.DELETE,
    produces = MediaType.APPLICATION_JSON_VALUE)
public Channel deleteChannel(
        @PathVariable("channel_name") String channelName) {
    return channelService.deleteChannel(channelName);
}
After:
@PostMapping(value = "/channels/{channel_name}",
    produces = MediaType.APPLICATION_JSON_VALUE,
    consumes = MediaType.APPLICATION_JSON_VALUE)
public Channel updateChannel(
        @PathVariable("channel_name") String channelName,
        @Valid @RequestBody Channel channel) {
    return channelService.updateChannel(channelName, channel);
}

@DeleteMapping(
    value = "/channels/{channel_name}",
    produces = MediaType.APPLICATION_JSON_VALUE)
public Channel deleteChannel(
        @PathVariable("channel_name") String channelName) {
    return channelService.deleteChannel(channelName);
}
I would do this dirty, with regex:
Replace RequestMethod.(.)(.+)(?=,) with RequestMethod.\U$1\L$2 (\U uppercases and \L lowercases the following captured text.)
Replace @RequestMapping\((\s+)(.+)(\s+?)(.+)RequestMethod.(.+?), with @$5Mapping\($1$2$3.
Then simplify this replacement chain into one step:
Replace @RequestMapping\((\s+)(.+)(\s+?)(.+)RequestMethod.(\S)(.+?), with @\U$5\L$6\EMapping\($1$2
Update: I noticed it is not specified whether the first parameter value should stay on the same line as the @...Mapping annotation or on a standalone line.
If you need it on the line of the @...Mapping, replace @RequestMapping\((\s+)(.+)(\s+?)(.+)RequestMethod.(\S)(.+?),\s with @\U$5\L$6\EMapping\($2$3.
If you need it on a standalone line, replace @RequestMapping\((\s+)(.+)(\s+?)(.+)RequestMethod.(\S)(.+?), with @\U$5\L$6\EMapping\($1$2.
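As a worked example of the case conversion (assuming IntelliJ's \U/\L/\E replacement syntax and a pattern that matches across lines): applied to the DELETE mapping, RequestMethod.(\S)(.+?) captures D in group 5 and ELETE in group 6, so \U$5\L$6\E yields Delete and the method line is consumed by the match:
// Input:
@RequestMapping(
    value = "/channels/{channel_name}",
    method = RequestMethod.DELETE,
    produces = MediaType.APPLICATION_JSON_VALUE)
// Output:
@DeleteMapping(
    value = "/channels/{channel_name}",
    produces = MediaType.APPLICATION_JSON_VALUE)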

Can't get search for modification time to work

The example in the package org.apache.lucene.demo works for text search.
But I can't get it to work for searching on, and displaying, the modification time.
It seems that the field modified is indexed, but I have had no success querying it.
Running SearchFiles prints hits for
Enter query:
+kompl*
but nothing for
+kompl* +modified:[0 TO 9999999999999]
Can someone provide an example for this?
I had wrongly assumed that file attributes are somehow implicitly available to me.
But OK, I had to do it myself.
For indexing I added a simple integer field:
// provide a stored date integer to query for [yyyymmdd]
Date dt = new Date(lastModified);
int myDays = (dt.getYear() + 1900) * 100 * 100 + (dt.getMonth() + 1) * 100 + dt.getDate();
doc.add(new IntPoint("moddate", myDays));
doc.add(new StoredField("moddateVal", myDays));
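The java.util.Date getters used above are deprecated; for reference, the same yyyymmdd encoding with java.time would be (my addition, not part of the original answer):
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;

// same yyyymmdd integer, computed without the deprecated Date getters
LocalDate d = Instant.ofEpochMilli(lastModified).atZone(ZoneId.systemDefault()).toLocalDate();
int myDays = d.getYear() * 10000 + d.getMonthValue() * 100 + d.getDayOfMonth();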
For searching I handle this field with an extended parser (the second try/catch mirrors the first one, which was elided in my original snippet):
public static class QueryParserModdate extends QueryParser {
    public QueryParserModdate(String f, Analyzer a) {
        super(f, a);
    }

    @Override
    protected Query getRangeQuery(String field, String part1, String part2,
            boolean startInclusive, boolean endInclusive) throws ParseException {
        if (field.equalsIgnoreCase("moddate")) {
            int part1Int = Integer.MIN_VALUE;
            int part2Int = Integer.MAX_VALUE;
            try {
                part1Int = Integer.parseInt(part1);
            } catch (Exception e) {
                // keep the open lower bound if part1 is not a number
            }
            try {
                part2Int = Integer.parseInt(part2);
            } catch (Exception e) {
                // keep the open upper bound if part2 is not a number
            }
            return IntPoint.newRangeQuery("moddate", part1Int, part2Int);
        }
        return super.getRangeQuery(field, part1, part2, startInclusive, endInclusive);
    }
}
For sure not beautiful, but it works for me.
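For completeness, a usage sketch of the parser above; the index path, default field, and analyzer are assumptions for illustration. (The custom getRangeQuery is needed because the classic QueryParser builds a term range query, which does not match point fields such as the demo's modified field.)
import java.nio.file.Paths;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.store.FSDirectory;

public static void main(String[] args) throws Exception {
    Analyzer analyzer = new StandardAnalyzer();
    QueryParser parser = new QueryParserModdate("contents", analyzer);
    Query q = parser.parse("+kompl* +moddate:[20180101 TO 20181231]");

    try (IndexReader reader = DirectoryReader.open(FSDirectory.open(Paths.get("index")))) {
        IndexSearcher searcher = new IndexSearcher(reader);
        for (ScoreDoc sd : searcher.search(q, 10).scoreDocs) {
            Document doc = searcher.doc(sd.doc);
            // the stored companion field makes the date value printable
            System.out.println(doc.get("path") + "  " + doc.get("moddateVal"));
        }
    }
}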

Data is written to BigQuery but not in proper format

I'm writing data to BigQuery, and it gets written there successfully. But I'm concerned about the format in which it is written.
Below is the format in which the data is shown when I execute a query in BigQuery:
Check the first row: the value of SalesComponent should be CPS_H, but it shows 'BeamRecord [dataValues=[CPS_H', and in ModelIteration the value ends with a square bracket.
Below is the code that is used to push data to BigQuery from BeamSql:
TableSchema tableSchema = new TableSchema().setFields(ImmutableList.of(
    new TableFieldSchema().setName("SalesComponent").setType("STRING").setMode("REQUIRED"),
    new TableFieldSchema().setName("DuetoValue").setType("STRING").setMode("REQUIRED"),
    new TableFieldSchema().setName("ModelIteration").setType("STRING").setMode("REQUIRED")));

TableReference tableSpec = BigQueryHelpers.parseTableSpec("beta-194409:data_id1.tables_test");
System.out.println("Start Bigquery");

final_out.apply(MapElements.into(TypeDescriptor.of(TableRow.class)).via(
        (MyOutputClass elem) -> new TableRow()
            .set("SalesComponent", elem.SalesComponent)
            .set("DuetoValue", elem.DuetoValue)
            .set("ModelIteration", elem.ModelIteration)))
    .apply(BigQueryIO.writeTableRows()
        .to(tableSpec)
        .withSchema(tableSchema)
        .withCreateDisposition(CreateDisposition.CREATE_IF_NEEDED)
        .withWriteDisposition(WriteDisposition.WRITE_TRUNCATE));

p.run().waitUntilFinish();
EDIT
I have transformed the BeamRecord into the MyOutputClass type using the code below, and this also doesn't work:
PCollection<MyOutputClass> final_out = join_query.apply(ParDo.of(new DoFn<BeamRecord, MyOutputClass>() {
    private static final long serialVersionUID = 1L;

    @ProcessElement
    public void processElement(ProcessContext c) {
        BeamRecord record = c.element();
        String[] strArr = record.toString().split(",");
        MyOutputClass moc = new MyOutputClass();
        moc.setSalesComponent(strArr[0]);
        moc.setDuetoValue(strArr[1]);
        moc.setModelIteration(strArr[2]);
        c.output(moc);
    }
}));
It looks like your MyOutputClass is constructed incorrectly (with incorrect values). If you look at it, BigQueryIO is able to create rows with the correct fields just fine, but those fields have wrong values. This means that when you call .set("SalesComponent", elem.SalesComponent) you already have incorrect data in elem.
My guess is that the problem is in some previous step, when you convert from BeamRecord to MyOutputClass. You would get a result similar to what you're seeing if you did something like this (or some other conversion logic did this for you behind the scenes):
convert the BeamRecord to a string by calling beamRecord.toString(); if you look at the BeamRecord.toString() implementation, you can see that you're getting exactly that string format;
split this string by ',' to get an array of strings;
construct MyOutputClass from that array.
Pseudocode for this is something like:
PCollection<MyOutputClass> final_out =
    beamRecords
        .apply(
            ParDo.of(new DoFn() {
                @ProcessElement
                void processElement(Context c) {
                    BeamRecord record = c.elem();
                    String[] fields = record.toString().split(",");
                    MyOutputClass elem = new MyOutputClass();
                    elem.SalesComponent = fields[0];
                    elem.DuetoValue = fields[1];
                    ...
                    c.output(elem);
                }
            })
        );
The correct way to do something like this is to call getters on the record instead of splitting its string representation, along these lines (pseudocode):
PCollection<MyOutputClass> final_out =
    beamRecords
        .apply(
            ParDo.of(new DoFn() {
                @ProcessElement
                void processElement(Context c) {
                    BeamRecord record = c.elem();
                    MyOutputClass elem = new MyOutputClass();
                    // get field value by name
                    elem.SalesComponent = record.getString("CPS_H...");
                    // get another field value by name
                    elem.DuetoValue = record.getInteger("...");
                    ...
                    c.output(elem);
                }
            })
        );
You can verify something like this by adding a simple ParDo where you either put a breakpoint and look at the elements in the debugger, or output the elements somewhere else (e.g. console).
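A minimal sketch of such a debugging ParDo, inserted just before the BigQuery write (the step name and printing to stdout are my assumptions; any logger works just as well):
PCollection<MyOutputClass> debugged = final_out.apply("DebugPrint",
    ParDo.of(new DoFn<MyOutputClass, MyOutputClass>() {
        @ProcessElement
        public void processElement(ProcessContext c) {
            MyOutputClass elem = c.element();
            // inspect the actual field values before they reach BigQueryIO
            System.out.println(elem.SalesComponent + " | " + elem.DuetoValue + " | " + elem.ModelIteration);
            c.output(elem);
        }
    }));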
I was able to resolve this issue using the method below:
PCollection<MyOutputClass> final_out = record40.apply(ParDo.of(new DoFn<BeamRecord, MyOutputClass>() {
    private static final long serialVersionUID = 1L;

    @ProcessElement
    public void processElement(ProcessContext c) throws ParseException {
        BeamRecord record = c.element();
        String strArr = record.toString();
        String strArr1 = strArr.substring(24);   // strip the "BeamRecord [dataValues=[" prefix
        String xyz = strArr1.replace("]", "");   // drop the closing brackets
        String[] strArr2 = xyz.split(",");
        // the snippet then sets the fields from strArr2 and outputs, as in the EDIT above
        MyOutputClass moc = new MyOutputClass();
        moc.setSalesComponent(strArr2[0]);
        moc.setDuetoValue(strArr2[1]);
        moc.setModelIteration(strArr2[2]);
        c.output(moc);
    }
}));

How to handle null pointer exceptions in Elasticsearch

I'm using Elasticsearch and I was trying to handle the case when the database is empty.
@SuppressWarnings("unchecked")
public <M extends Model> SearchResults<M> findPage(int page, String search, String searchFields,
        String orderBy, String order, String where) {
    BoolQueryBuilder qb = buildQueryBuilder(search, searchFields, where);
    Query<M> query = (Query<M>) ElasticSearch.query(qb, entityClass);
    // FIXME Currently we ignore the orderBy and order fields
    query.from((page - 1) * getPageSize()).size(getPageSize());
    query.hydrate(true);
    return query.fetch();
}
The error occurs at return query.fetch();.
I'm trying to implement a try/catch statement, but it's not working. Can anyone help with this, please?
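Without the stack trace it is hard to be precise, but a guarded version might look like the sketch below. It assumes that query.fetch() throws (or returns null) when the index is empty, and that an empty result can be represented somehow; emptySearchResults() is a hypothetical helper, since SearchResults is the application's own type:
try {
    SearchResults<M> results = query.fetch();
    return results != null ? results : emptySearchResults();   // hypothetical empty-result helper
} catch (NullPointerException e) {
    // likely an empty or missing index; treat as "no hits" rather than failing
    return emptySearchResults();
}
Catching the NPE only papers over the problem, though; the better fix is to find which reference inside fetch() is null (e.g. a missing index or mapping) and guard that condition explicitly.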