Aerospike Select Limit Java Client - aerospike

How can I limit the number of records returned by the Aerospike Java client? It simply scans all records. I want to limit the query to one record and then fetch the next one.

Scans can only limit records by percent:
ScanPolicy policy = new ScanPolicy();
policy.scanPercent = 1;  // scan roughly 1% of the records in the set
client.scanAll(policy, ns, set, myCallback);

Related

SQL query, offset and limit

I'm using Splunk with PostgreSQL.
I have 2.1M rows to pull up in ASC order.
This is my standard query, which works:
WHERE we_student_id = 5678
ORDER BY audit.b_created_date DESC NULLS LAST
When the data is more than 1-2M rows, I usually split it into batches:
FETCH FIRST 500000 ROWS ONLY
OFFSET 500000 ROWS FETCH NEXT 500000 ROWS ONLY
This time, my client requested the extract in ASC order based on audited_id rather than the created date.
I used:
WHERE student_id = 5678
ORDER BY audit.audited_id ASC NULLS LAST
==========
I tried to pull up the first 500k using:
ORDER BY audit.audited_id ASC NULLS LAST
LIMIT 500000 OFFSET 0
The result is only 100k rows.
I tried putting maxrows=0 before my SELECT statement with the same query:
ORDER BY audit.audited_id ASC NULLS LAST
LIMIT 500000 OFFSET 0
but I'm getting an error: canceling statement due to user request.
I then tried fetching the first 400k instead of 500k, removed the OFFSET 0, and kept maxrows=0 before my SELECT statement:
ORDER BY audit.audited_id ASC NULLS LAST
LIMIT 400000
That returns 400k rows.
When I tried to extract the next 400k with
LIMIT 400000 OFFSET 400000
I hit the error again: canceling statement due to user request.
Usually, I can pull 2M rows from the database. I typically use FETCH FIRST 1000000 ROWS ONLY and then OFFSET for the next batch. My usual query on the DB is
ORDER BY audit.b_created_date DESC NULLS LAST with FETCH FIRST and OFFSET.
But this time, my client wants the data in ASC order.
I tried a FETCH FIRST 400000 ROWS ONLY query and got 400k results, but whenever I increase the number to 500000, I get the error: canceling statement due to user request.
I usually set maxrows=0 because Splunk only shows the first 100k rows; most of my extracts are 1-2 million rows.
This error only happens when the client requests the reports in ASC order.
I just want to pull up all 2.1M rows from the database in ASC order, and I don't know whether I'm using OFFSET and LIMIT correctly.
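One common way around deep-OFFSET timeouts like this (not mentioned in the question; a sketch assuming audited_id is unique and indexed) is keyset pagination: instead of OFFSET, each batch resumes after the last ordering key seen, so the database never has to walk past rows it already returned. A minimal illustration using Python's built-in sqlite3 in place of PostgreSQL, with a made-up miniature audit table:

```python
import sqlite3

# In-memory stand-in for the real audit table (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE audit (audited_id INTEGER PRIMARY KEY, student_id INTEGER)")
conn.executemany(
    "INSERT INTO audit (audited_id, student_id) VALUES (?, ?)",
    [(i, 5678) for i in range(1, 2001)],
)

def fetch_batches(batch_size=500):
    """Keyset pagination: each batch resumes after the last audited_id seen,
    so the engine never has to skip over rows the way OFFSET does."""
    last_id = 0
    while True:
        rows = conn.execute(
            "SELECT audited_id FROM audit "
            "WHERE student_id = ? AND audited_id > ? "
            "ORDER BY audited_id ASC LIMIT ?",
            (5678, last_id, batch_size),
        ).fetchall()
        if not rows:
            break
        yield rows
        last_id = rows[-1][0]  # resume point for the next batch

batches = list(fetch_batches())
total = sum(len(b) for b in batches)
print(len(batches), total)  # 4 2000
```

The same WHERE audited_id > last_id pattern works unchanged in PostgreSQL, and each batch query stays cheap no matter how deep into the table you are.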

SQL COUNT items that meet and do not meet WHERE condition when applying a LIMIT (on AWS SELECT)

I have a SQL question.
I have a table whose rows have the format [user:String, score:Double].
I would like to COUNT the number of items (users) in my table whose score > xx (an input I specify). I need to use LIMIT because I run the query through AWS S3 Select from a boto3 Lambda function (there is a memory cap), and I would like to know how many items were scanned to reach that limit.
For example, if I LIMIT to 1000, maybe 3000 items need to be scanned: 2000 items will be < xx and 1000 items (the limit) will be > xx, so I get a feel that my user is in the top 33% (arguable, I know, since it depends on whether the subset is representative, etc.).
How do I do this, especially on S3 Select, where some functions such as ORDER BY are not available?
EDIT: To add some details: I can run
select count(*) FROM s3object[*][*] s where s.score > 14 limit 5
and I get 1 row, fine.
Now, if I have 1 million users and have to limit the results to 1000 (because of memory), how do I know how many items were scanned to get those 1000 rows?
I would like to COUNT the number of items (number of users) in my table where the score > xx (input that I specify).
Isn't the query you want a simple aggregation query with a filter?
select count(*)
from t
where score > ?;
? is a parameter for the score threshold that you specify. This query always returns exactly one row, so there is no need for LIMIT.
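If the goal is really both counts in one pass (items above and below the threshold), conditional aggregation does that without any LIMIT. A sketch using Python's built-in sqlite3 rather than S3 Select (whose SQL dialect is more restricted); the table and column names are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (user TEXT, score REAL)")
conn.executemany(
    "INSERT INTO scores VALUES (?, ?)",
    [(f"user{i}", float(i)) for i in range(30)],  # scores 0.0 .. 29.0
)

threshold = 14
# Conditional aggregation: a single scan yields both counts at once.
above, below = conn.execute(
    "SELECT SUM(CASE WHEN score > ? THEN 1 ELSE 0 END), "
    "       SUM(CASE WHEN score <= ? THEN 1 ELSE 0 END) "
    "FROM scores",
    (threshold, threshold),
).fetchone()
print(above, below)  # 15 15
```

With both counts in hand, the "top N%" estimate from the question is just above / (above + below).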

Elasticsearch SQL retrieve more than 1000

When running SQL queries in Elasticsearch, I am not able to extract more than 1000 records, in spite of a LIMIT > 1000. I tried "index.mapping.total_fields.limit": 5000, but it did not help.
POST _sql?format=txt
{
  "query": "SELECT column1 FROM table LIMIT 5000"
}
TL;DR: not possible at the moment.
It is possible to run the same queries without a LIMIT; however, in that case, if the maximum size (10000) is exceeded, an exception will be returned, as Elasticsearch SQL is unable to track (and sort) all the results returned.
https://www.elastic.co/guide/en/elasticsearch/reference/current/sql-limitations.html

Limit 1 should be used with queryRow in Yii?

I usually use queryRow to get a single record, e.g.:
$lastReport = Yii::app()->db->createCommand(
'SELECT * FROM report ORDER BY created DESC'
)->queryRow();
I looked at the MySQL log to see which query is used for it:
SELECT tbl_report.* FROM report ORDER BY created DESC
It seems that Yii retrieves all the records from the table and returns the first one.
So I think we should use LIMIT 1 whenever we use queryRow, e.g.:
$lastReport = Yii::app()->db->createCommand(
'SELECT * FROM report ORDER BY created DESC LIMIT 1'
)->queryRow();
Since queryRow returns "the first row (in terms of an array) of the query result", Yii should add the limit automatically; otherwise, a user running this query to get a single record will cause a performance degradation.
Is my understanding correct, or have I missed something?
Yii should not add LIMIT 1 to the query, because queryRow is designed to read results row by row, for example in a while loop. Yii's raw SQL support is limited, but the Query Builder is available:
$user = Yii::app()->db->createCommand()
->select('id, username, profile')
->from('tbl_user u')
->join('tbl_profile p', 'u.id=p.user_id')
->where('id=:id', array(':id'=>$id))
->queryRow();
More information available here: http://www.yiiframework.com/doc/guide/1.1/en/database.query-builder
You should rather use ActiveRecordClassName::model()->findByPk($id); or ActiveRecordClassName::model()->find($criteria), because they use defaultScope() and other improvements.
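The performance point generalizes beyond Yii: asking the driver for one row does not by itself tell the database to produce only one, so an explicit LIMIT 1 is a safe habit. A small sketch with Python's built-in sqlite3 standing in for Yii's DAO, using a made-up report table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE report (id INTEGER PRIMARY KEY, created TEXT)")
conn.executemany(
    "INSERT INTO report (created) VALUES (?)",
    [(f"2024-01-{day:02d}",) for day in range(1, 11)],
)

# Equivalent of queryRow(): fetch only the first row of the result...
row_no_limit = conn.execute(
    "SELECT * FROM report ORDER BY created DESC"
).fetchone()

# ...but with LIMIT 1 the statement itself promises at most one row,
# so the engine can stop as soon as it has found it.
row_limit = conn.execute(
    "SELECT * FROM report ORDER BY created DESC LIMIT 1"
).fetchone()

print(row_no_limit == row_limit)  # True: same row either way
```

Both calls return the same row; the difference is only in how much work the server may do behind the scenes, which is exactly the question's concern.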

How to select a specific number of bytes from SQLite table?

I have a simple SQLite table with one column, from which I select a random number of records:
SELECT * FROM vocabulary ORDER BY RANDOM() LIMIT 100;
Is there a way to select a specific number of bytes, instead of rows? Something along the lines of:
SELECT * FROM vocabulary ORDER BY RANDOM() LIMIT BYTES 1024;
The SQLite engine can't limit a SELECT to a specific number of bytes across rows. Note, though, that LIMIT simply stops reading once the limit is reached. You can get the same effect by keeping a count in your calling code and stopping the read once you've consumed the number of bytes you want.
Precisely how will depend on what environment you're programming in.
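A sketch of that calling-code approach using Python's built-in sqlite3, assuming the single column is named word (hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vocabulary (word TEXT)")
conn.executemany(
    "INSERT INTO vocabulary VALUES (?)",
    [("antidisestablishmentarianism"[: i % 20 + 5],) for i in range(200)],
)

def select_up_to_bytes(budget):
    """Stream random rows and stop once the byte budget is exhausted."""
    rows, used = [], 0
    # The cursor is an iterator, so rows past the stopping point
    # are simply never fetched -- the same effect LIMIT has.
    for (word,) in conn.execute("SELECT word FROM vocabulary ORDER BY RANDOM()"):
        size = len(word.encode("utf-8"))
        if used + size > budget:
            break
        rows.append(word)
        used += size
    return rows, used

rows, used = select_up_to_bytes(1024)
print(used <= 1024)  # True: the budget is never exceeded
```

This stops *before* a row would overflow the budget; if you'd rather include the row that crosses the line, move the accounting after the append.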