Sample MongoDB collection:
[
{"sno":4,"data":"data-4"},
{"sno":3,"data":"data-3"},
{"sno":2,"data":"data-2"},
{"sno":1,"data":"data-1"},
]
Spring Data Code:
PageRequest pageable = new PageRequest(page--, size);
return dao.findAll(pageable);
If I pass page as 1 and size as 1, I'm getting the result below, which is correct:
{"sno":4,"data":"data-4"}
If I pass page as 1 and size as 2, see below:
Expected:
{"sno":4,"data":"data-4"}
{"sno":3,"data":"data-3"}
Actual:
{"sno":3,"data":"data-3"}
{"sno":2,"data":"data-2"}
It skips the first record; it looks like an issue with the Spring Data for MongoDB implementation. I have tried an explicit Sort(DESC, "sno") in the pageable, with the same result.
Did anyone experience this issue?
Actually, the page index starts from 0, so passing 1 requests the second page (and note that page-- passes the value of page before it is decremented).
I've tried the same data as you, and if I call:
PageRequest pageable = new PageRequest(0, 1, new Sort(Sort.Direction.DESC, "sno"));
dao.findAll(pageable);
I will get:
{"sno":4,"data":"data-4"}
Similarly, if you need the first two records in order, just call:
new PageRequest(0, 2, new Sort(Sort.Direction.DESC, "sno"));
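To make the zero-based indexing concrete, here is a minimal sketch that pages through the sample collection (the Data entity and dao repository names are assumptions for illustration):
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Sort;

// sort descending by sno, two records per page
Sort sort = new Sort(Sort.Direction.DESC, "sno");

// page indexes are zero-based: page 0 holds sno 4 and 3
Page<Data> firstPage = dao.findAll(new PageRequest(0, 2, sort));

// page 1 holds sno 2 and 1
Page<Data> secondPage = dao.findAll(new PageRequest(1, 2, sort));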
I'm looking to return the id or better yet, all information that was inserted, using a raw query with TypeORM and NestJS. Example as follows:
await connection.manager.query(`INSERT INTO...`)
When I assign the query to a constant and console.log it, it does not yield any helpful information:
OkPacket {
fieldCount: 0,
affectedRows: 1,
insertId: 0,
serverStatus: 2,
warningCount: 1,
message: '',
protocol41: true,
changedRows: 0
}
As you can see, it returns no pertinent information; the insertId above is obviously incorrect, and it returns this every time, regardless of the actual parameters of the query.
I know with more typical TypeORM queries you can use .returning(['name_of_column_you_want_returned']).execute()
and it will return the relevant information just fine. Is there any way to do this with a raw query? Thank you!
tl;dr: You're getting the raw MariaDB driver response (OkPacket) from the INSERT command, and you'd need a new SELECT query to see the data.
You're using the TypeORM EntityManager, and the docs don't mention a return value. Looking at the source code for query, the return type is any. Since it's a raw query, it probably returns an object based on the type of database you're using rather than having a standard format.
In this case, you're using MariaDB, which returned an OkPacket. Here's the documentation:
https://mariadb.com/kb/en/ok_packet/
I'm trying to use PDI to read data from an API (JSON). For now I'm simply trying to use the JSON Input step to get a few specific fields, but the Get Fields button on the input step gives me:
ERROR (version 8.3.0.0-371, build 8.3.0.0-371 from 2019-06-11 11.09.08 by buildguy) : Index 1 out of bounds for length 1
All the steps execute fine and produce data; it's just that the JSON Input step won't give me the fields option! I've tried the Text File Output and JSON Output steps and both write valid JSON, so I don't know what's going on.
P.S. This is my first time using PDI.
ISSUE 2:
It looks like PDI uses Jayway for its JSONPath parsing, so I've been using the Jayway option on https://jsonpath.herokuapp.com/, which gives me my expected path. When I put that path into the 'Fields' table of the JSON Input dialog, I only get the FIRST instance of the path's value rather than every instance that matches. I can't figure out why, though I assume it has something to do with PDI's row-based view of things; I also don't know how to make the step understand that the data is JSON and that it should give back all values matching the path.
UPDATE 1:
I've been looking at https://forums.pentaho.com/threads/135882-Parsing-JSON-data-without-knowing-field-names/ and it seems like the Modified Java Script Value step might be the way to go. Will continue testing.
UPDATE 2:
OK: using the MJSV step as posted above, along with a select fields step, I was finally able to get the keys:
var obj = JSON.parse(mydata);
var keys = Object.keys(obj);
for (var i = 0; i < keys.length; i++) {
    // create a new output row sized to the output row metadata
    var row = createRowCopy(getOutputRowMeta().size());
    // write the key into the first field after the existing input fields
    var idx = getInputRowMeta().size();
    row[idx++] = keys[i];
    putRow(row);
}
// drop the original input row so only the generated key rows are output
trans_Status = SKIP_TRANSFORMATION;
I'm using @Query from the Spring Data package, and I want to query on the last element of an array in a document.
For example the data structure could be like this:
{
name : 'John',
scores: [10, 12, 14, 16]
},
{
name : 'Mary',
scores: [78, 20, 14]
}
So I've built a query; however, it is complaining with "unknown operator: $slice" on the server.
The $slice part of the query, when run separately, is fine:
db.getCollection('users').find({}, {scores: { $slice: -1 }})
However as soon as I combine it with a more complex check, it gives the error as mentioned.
db.getCollection('users').find({"$and":[{ } , {"scores" : { "$slice" : -1}} ,{"scores": "16"}]})
This query would return the list of users who had a last score of 16; in my example, John would be returned but not Mary.
I've put it into a standard Mongo query (to debug things); however, ideally I need it to go into a Spring Data @Query construct, and they should be fairly similar.
Is there any way of doing this without resorting to hand-cranked Java calls? I don't see much documentation for @Query, other than that it takes standard queries.
As commented with the linked post, that refers to aggregate; how does that work with @Query? Plus, one of the main answers uses $where, which is inefficient.
The general way forward with the problem is, unfortunately, the data itself: although @Veeram's response is correct, it means that you do not hit indexes. This is an issue where you've got very large data sets, of course, and you will see steadily worsening response times. It's something $where and $arrayElemAt cannot help you with: they have to process the data per document, which means a full collection scan. We analysed several queries with these constructs and they all involved a COLLSCAN.
The solution is ideally to create a field that contains the last item, for instance:
{
name : 'John',
scores: [10, 12, 14, 16],
lastScore: 16
},
{
name : 'Mary',
scores: [78, 20, 14],
lastScore: 14
}
You could create a listener to maintain this as follows:
@Component
public class ScoreListener extends AbstractMongoEventListener<Scores>
You then get the ability to sniff the data and make any updates:
@Override
public void onBeforeConvert(BeforeConvertEvent<Scores> event) {
// process any score and set lastScore
}
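Putting the pieces together, a fuller sketch of the listener might look like this (the Scores class and its getScores()/setLastScore() accessors are assumed for illustration):
import java.util.List;
import org.springframework.data.mongodb.core.mapping.event.AbstractMongoEventListener;
import org.springframework.data.mongodb.core.mapping.event.BeforeConvertEvent;
import org.springframework.stereotype.Component;

@Component
public class ScoreListener extends AbstractMongoEventListener<Scores> {

    @Override
    public void onBeforeConvert(BeforeConvertEvent<Scores> event) {
        Scores scores = event.getSource();
        List<Integer> list = scores.getScores();
        if (list != null && !list.isEmpty()) {
            // keep the denormalised field in sync with the array before the document is written
            scores.setLastScore(list.get(list.size() - 1));
        }
    }
}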
Don't forget to update your indexes (!):
@CompoundIndex(name = "lastScore", def = "{"
+ "'lastScore': 1"
+ " }")
Although this does have the disadvantage of slightly duplicating data, in current Mongo (3.4) this really is the only way of doing this AND having indexes included in the search. The speed difference was dramatic: from nearly a minute of response time down to milliseconds.
In Mongo 3.6 there may be better ways of doing this; however, we are fixed on this version, so this has to be our solution.
I am trying to execute a query on my database using the Mongo Java driver but am facing a problem.
I have executed this query from the mongo shell and it seems to work fine.
db.userData.aggregate([ { $group: {"_id": "$username"}},{$group:{"_id":"userCount","counter":{$sum:1}}}])
This returns the total number of unique users in my database.
The result is:
{ "_id" : "userCount", "counter" : 5 }
I want to try the same in Java. I can do aggregate queries with one $group, like the following (it returns the unique users, with the number of documents in the database for each user):
AggregateIterable<Document> iterable = db.getCollection("userData").aggregate(asList(
new Document("$group", new Document("_id", "$username").append("count", new Document("$sum", 1)))));
But I do not know what to do if my query uses $group twice.
AggregateIterable<Document> iterable = db.getCollection("userData").aggregate(asList(
new Document("$group", new Document("_id", "$username").append("count", new Document("$sum", 1))),
new Document("$group", new Document("_id", "$username").append("count", new Document("$sum", 1)))));
This somehow returns the correct answer, but it is confusing to me how it is working. Can somebody explain a bit?
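For what it's worth: in the second $group, "$username" no longer exists on the documents produced by the first stage, so it resolves to null for every document and they all collapse into a single group; that is why the duplicated stage "somehow" gives the right count. A direct translation of the shell pipeline would be the following (a minimal sketch against the same collection):
import static java.util.Arrays.asList;

import com.mongodb.client.AggregateIterable;
import org.bson.Document;

// stage 1: one document per distinct username
// stage 2: collapse those documents into a single counter
AggregateIterable<Document> iterable = db.getCollection("userData").aggregate(asList(
    new Document("$group", new Document("_id", "$username")),
    new Document("$group", new Document("_id", "userCount")
        .append("counter", new Document("$sum", 1)))));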
Hello, I have a problem with the Elasticsearch PHP API, Elastica.
if I run this:
$elasticaQueryMatch = new Elastica\Query\Match();
$elasticaQueryMatch->setField('fax', "16147591649");
$elasticaResultSet = $elasticaIndex->search($elasticaQueryMatch);
var_dump($elasticaResultSet);
I get 7 results, and the telephone number for all of the results is "16147591649".
Then if I run this:
$elasticaQueryMatch = new Elastica\Query\Match();
$elasticaQueryMatch->setField('telephone', "16147591649");
$elasticaResultSet = $elasticaIndex->search($elasticaQueryMatch);
var_dump($elasticaResultSet);
I get 0 results
Fixed it by creating a new index, changing my mapping, and then rebuilding the index. It was the mapping and the analyzers for certain fields that were causing issues.