I build a query via createBuilder(), and when I execute it (getQuery()->execute()->toArray()) I get 10946 elements. I want to paginate them, so I pass the builder to:
$paginator = new \Phalcon\Paginator\Adapter\QueryBuilder(array(
"builder" => $builder,
"limit" => $limit,
"page" => $current_page
));
$limit is 25 and $current_page is 1, but when doing:
$page = $paginator->getPaginate();
$page->total_items;
returns 1.
Is that a bug or am I missing something?
UPD: it seems that when counting items it reuses the generated SQL, LIMIT included; no matter what the limit is, limit divided by items per page always equals 1. I might be mistaken.
UPD2: A colleague helped me figure this out; the bug is in the query Phalcon produces: COUNT() on a GROUP BY query counts the grouped rows, not the total. So a workaround looks like:
$page = $paginator->getPaginate(); // paginated slice, but with broken totals
$dataCount = $builder->getQuery()->execute()->count(); // real total, counted without the generated COUNT query
$page->next = $page->current + 1;
$page->before = $page->current - 1 > 0 ? $page->current - 1 : 1;
$page->total_items = $dataCount;
$page->total_pages = ceil($dataCount / $limit); // items per page
$page->last = $page->total_pages;
I know this isn't much of an answer, but this is most likely a bug. The great guys at Phalcon took on a massive job that is too big to do properly in their little free time, and things like PHQL, Volt and other big but non-core components do not receive as much attention as we'd like. Also, given that most time in the past 6 months was spent on v2, there are nearly 500 open bugs about stuff like that, and counting. I came across considerable issues in the ORM, Volt, Validation and Session, which in the end made me stick to other, not as cool but more proven, solutions. When v2 comes out I'm sure all attention will be on the bug list and testing; until then we are mostly on our own. Given that it's all C right now, only a few enthusiasts get involved; with v2 this will also change.
If this is the only problem you are hitting, the best approach is to update your query to get the information you need yourself without getPaginate().
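For example, a minimal sketch (assuming the $builder, $limit and $current_page from the question; the count comes from the unpaginated result set, and the page itself is fetched with the builder's own limit()/offset support):
// Real row count, taken from the full result set instead of the broken COUNT query.
$totalItems = $builder->getQuery()->execute()->count();
// Fetch just the requested page via LIMIT/OFFSET on the builder itself.
$items = $builder
    ->limit($limit, ($current_page - 1) * $limit)
    ->getQuery()
    ->execute()
    ->toArray();
$totalPages = (int) ceil($totalItems / $limit);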
How can I get a list of all topics that I created?
I think it should be something like
%SEARCH{ "versions[-1].info.author = '%USERNAME%" type="query" web="Sandbox" }%
but that returns 0 results.
With "versions[-1]" I get all topics, and with "info.author = '%USERNAME%'" a list of the topics where the last edit was made by me. Having a list of all topics where any edit was made by me would be fine, too, but "versions.info.author = '%USERNAME%'" again gives 0 results.
I’m using Foswiki-1.0.9. (I know that’s quite old.)
The right syntax would be
%SEARCH{ "versions[-1,info.author='%USERNAME%']" type="query" web="Sandbox"}%
But that's not performing well, especially on an old Foswiki install like yours.
A better option is to install DBCacheContrib and DBCachePlugin and use
%DBQUERY{"createauthor='%WIKINAME%'"}%
This plugin caches the initial author in a way that it does not have to retrieve the information from the revision system for every topic under consideration at query time.
How do you do something like gun.get({startkey, endkey}) ?
Previously: https://github.com/amark/gun/issues/479
@qwe123wsx @sebastianmacias apologies for the delay! Originally posted at: https://github.com/amark/gun/issues/479
The wire spec has a protocol for this but it isn't implemented yet. It looks something like this:
gun.on('out', {get: {'#': {'>': 'a', '<': 'b'}}});
However this doesn't work yet. I would recommend instead:
(1) Pagination behavior is very different from one app to another and would be hard for us to cover with a "one-size-fits-all" solution, so it would be highly helpful if you could implement your own* pagination and make it available as a user-module; then we can learn from your experience (what worked, what didn't) and make the best solution part of core.
(2) Your app will probably work fine without pagination in the meanwhile, while it is being built (it is targeted for after 1.0), and then as your app becomes more popular it should be fairly easy to add in without much refactoring, once you need it and it is available.
... * How to build your own?
Lots of good articles on this; the best one I've seen yet is from Neo4j on how to do it in a graph database (which applies to gun as well): https://graphaware.com/neo4j/2014/08/20/graphaware-neo4j-timetree.html .
Another rough idea is to model your data based on pagination or time. So rather than having ALL tweets go into the user's tweet table, the user's tweet table becomes a table of DAYS (or weeks), and you put each tweet inside its week table. Now when you load the data, you can scan/skip based on the week very easily while staying super bandwidth-efficient.
Rough PSEUDO code:
function onTweetSend(tweet){
  // Bucket each tweet by year+week instead of one giant tweets table.
  // Date.uniqueYear() and Date.uniqueWeek() are hypothetical helpers.
  gun.get('user').get('alice').get('tweets').get(Date.uniqueYear() + Date.uniqueWeek()).set(tweet);
}
function paginateUserTweet(howMany, cb){
  // convertToArrayOfUniqueWeekNamesFromToday() and flattenArray() are hypothetical helpers.
  var range = convertToArrayOfUniqueWeekNamesFromToday(howMany);
  var all = [];
  range.forEach(function(week){
    // .load() comes from the gun/lib/load extension.
    gun.get('user').get('alice').get('tweets').get(week).load(function(tweets){
      all.push(tweets);
      if(all.length < range.length){ return } // wait until every week has answered
      cb(flattenArray(all));
    });
  });
}
Now we can use https://gun.eco/docs/RAD#lex
gun.get(...).get({'.': {'>': startkey, '<': endkey}, '%': 50000}).map().once(...)
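For instance, a sketch assuming the weekly bucket keys from the pseudo code above (the '.' constraint filters keys lexically and '%' caps the reply size in bytes):
// Stream all tweet buckets whose key falls between two week labels.
gun.get('user').get('alice').get('tweets')
   .get({'.': {'>': '2014-01', '<': '2014-26'}, '%': 50000})
   .map()
   .once(function(weekBucket, key){
     console.log('bucket', key, weekBucket); // one callback per matching week
   });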
So I'm working on a rails app for a building that keeps track of water usage/collection and electricity use/solar generation, etc. These are stored as measurement rows, attached to sensors, which are attached to programs (location in the building, essentially) and subtypes (attached to types - water, electricity).
I'm doing some graphing with chartkick, and the database calls related to this are way too slow. They'll be much faster on the production servers, but there will also be far more data.
Here's the helper method that has the chart generation and database call in it:
def stackedSubtypeChart(grouping)
  rsubs = @resource.subtypes
    .order(:usage?) # add usage types after gen types
    .map{|stype| [
      stype.name, stype.measurements # this takes too long! (one query per subtype, 4 calls!!)
        .where("date >= ?", params[:start])
        .where("date <= ?", params[:stop])
        .group_by_period(grouping, :date).maximum(:amount)]}
  rsubs = rsubs.map {|stype|
    {name: stype[0],
     data: stype[1]}}
  column_chart rsubs,
    stacked: true,
    library: { :series => {0 => { type: "line"}}}
end
@resource is defined in the controller as:
@resource = Type.includes(:subtypes => :sensors).find_by_resource('electricity')
I've commented the line that's responsible for there being multiple calls, which is definitely part of the problem. This takes two seconds to load on my (admittedly very very old) computer with a month of data.
I could really use help with both changing the map so that this is one call instead of however-many-subtypes calls, and with reducing what I'm pulling in so each call isn't taking half a second. I don't have a ton of experience optimizing this sort of thing and I'm not really sure how to start doing more than I have here already.
It might be helpful to look into ActiveRecord Explain to dig into the SQL. There's a good screencast that explains it (pun totally intended) pretty well.
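For example, a minimal sketch (explain is a standard method on ActiveRecord relations; the relation here mirrors the slow per-subtype query from the question):
# Print the database's query plan for the slow measurement lookup.
puts stype.measurements
  .where("date >= ? AND date <= ?", params[:start], params[:stop])
  .explain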
After a lot of bashing my head against a wall, I stumbled across this, which is a much faster single query that grabs all the data + data connections I need. It's a little hard to format but it works.
rsubs = Measurement
  .where("measurements.date >= ? AND measurements.date <= ?",
         offset(params[:start], -1, grouping),
         offset(params[:stop], 1, grouping))
  .joins(sensor: {subtype: :type})
  .where("types.resource = ?", @rname)
  .order('subtypes."usage?"')
  .group_by_period(grouping, :date).group("subtypes.id, subtypes.name").maximum(:amount)
I have a list of Products with a field called 'Title' and I have been trying to get a list of initial letters without much luck. The closest I have is the following, which doesn't work because 'distinct' fails here:
atoz = Product.objects.all().only('title').extra(select={'letter': "UPPER(SUBSTR(title,1,1))"}).distinct('letter')
I must be going wrong somewhere; I hope someone can help.
You can get it in Python after the queryset comes back, which is trivial:
products = Product.objects.values_list('title', flat=True).distinct()
atoz = set(title[0] for title in products)  # first character of each distinct title
If you are using MySQL, I found another answer useful, albeit using raw SQL (Django can execute SQL directly):
SELECT DISTINCT LEFT(title, 1) FROM product;
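A sketch of running that through Django's raw cursor API (connection.cursor() is standard Django; the context-manager form needs a reasonably recent version, and the table name product is assumed from the model):
from django.db import connection

# Fetch the distinct initial letters straight from MySQL.
with connection.cursor() as cursor:
    cursor.execute("SELECT DISTINCT LEFT(title, 1) FROM product")
    letters = sorted(row[0] for row in cursor.fetchall())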
The best answer I could come up with, which isn't 100% ideal as it requires post-processing, is this:
atoz = sorted(set(Product.objects.all().extra(select={'letter': "UPPER(SUBSTR(title,1,1))"}).values_list('letter', flat=True)))
I want to retrieve all the files from a cabinet (called 'Wombat Insurance Co'). Currently I am using this DQL query:
select r_object_id, object_name from dm_document(all)
where folder('/Wombat Insurance Co', descend);
This is ok except it only returns a maximum of 100 results. If there are 5000 files in the cabinet I want to get all 5000 results. Is there a way to use pagination to get all the results?
I have tried this query:
select r_object_id, object_name from dm_document(all)
where folder('/Wombat Insurance Co', descend)
ENABLE (RETURN_RANGE 0 100 'r_object_id DESC');
with the intention of getting results in 100 file increments, but this query gives me an error when I try to execute it. The error says this:
com.emc.documentum.fs.services.core.CoreServiceException: "QUERY" action failed.
java.lang.Exception: [DM_QUERY2_E_UNRECOGNIZED_HINT]error:
"RETURN_RANGE is an unknown hint or is being used incorrectly."
I think I am using the RETURN_RANGE hint correctly, but maybe I'm not. Any help would be appreciated!
I have also tried using the hint ENABLE(FETCH_ALL_RESULTS 0) but this still only returns a maximum of 100 results.
To clarify, my question is: how can I get all the files from a cabinet?
You have already accepted an answer which is using DFS.
Since you are playing with DFC, this information might help you.
DFS:
If you are using DFS, you have to be aware of the number of concurrent sessions you can consume with DFS.
I think it is 100 or 150.
DFC:
Actually there is a limit on how many results you can fetch via DFC (I'm not sure about DFS).
Go to your DFC application (Webtop, DA or anything) and check the dfc.properties file.
# Maximum number of results to retrieve by a query search.
# min value: 1, max value: 10000000
#
dfc.search.max_results = 100
# Maximum number of results to retrieve per source by a query search.
# min value: 1, max value: 10000000
#
dfc.search.max_results_per_source = 400
A dfc.properties.full or similar file is there too, so you can verify these values against your system.
And I'm talking about the Content Server side, not the client-side dfc.properties file.
If you use the ENABLE (RETURN_TOP) hint with DFC, there are 2 ways the Content Server can fetch the results:
Object based
Row based
You have to configure this with the parameter return_top_results_row_based in the server.ini file, as sketched below.
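For example, a sketch of the relevant server.ini entry (the exact flag values are an assumption; check the Content Server admin guide for your version):
[SERVER_STARTUP]
# T = row based, F = object based (assumed flag values)
return_top_results_row_based = T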
All of these changes are on the Documentum server side, not in your DFC/DQL client.
Aha, I've figured it out. Using DFS with Java (an abstraction layer on top of DFC) you can set the starting index for query results:
String queryStr = "select r_object_id, object_name from dm_document(all)
where folder('/Wombat Insurance Co', descend);"
PassthroughQuery query = new PassthroughQuery();
query.setQueryString(queryStr);
query.addRepository(repositoryStr);
QueryExecution queryEx = new QueryExecution();
queryEx.setCacheStrategyType(CacheStrategyType.DEFAULT_CACHE_STRATEGY);
queryEx.setStartingIndex(currentIndex); // set start index here
OperationOptions operationOptions = null;
// will return 100 results starting from currentIndex
QueryResult queryResult = queryService.execute(query, queryEx, operationOptions);
You can just increment the currentIndex variable to get all results.
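A sketch of that loop (assuming a 100-row page size; getDataPackage().getDataObjects() as the result accessor is an assumption, so check it against your DFS version):
int currentIndex = 0;
while (true) {
    queryEx.setStartingIndex(currentIndex);
    QueryResult queryResult = queryService.execute(query, queryEx, operationOptions);
    // Number of rows in this page (accessor assumed, see above).
    int returned = queryResult.getDataPackage().getDataObjects().size();
    if (returned == 0) break;   // no more results
    // ... process this page of results here ...
    currentIndex += returned;   // advance to the next page
}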
Well, the hint is being used incorrectly. Start with 1, not 0.
There is no built-in limit in DQL itself. All results are returned by default. The reason you get only 100 results must have something to do with the way you're using DFC (or whichever other client you are using). Using IDfCollection in the following way will surely return everything:
IDfQuery query = new DfQuery("SELECT r_object_id, object_name "
    + "FROM dm_document(all) WHERE FOLDER('/System', DESCEND)");
IDfCollection coll = query.execute(session, IDfQuery.DF_READ_QUERY);
int i = 0;
while (coll.next()) i++;
coll.close(); // always close collections to free the underlying resources
System.out.println("Number of results: " + i);
In a test environment (CS 6.7 SP1 x64, MS SQL), this outputs:
Number of results: 37162
Now, there's proof. Using paging is however a good idea if you want to improve the overall performance in your application. As mentioned, start counting with the number 1:
ENABLE(RETURN_RANGE 1 100 'r_object_id DESC')
This way of paging requires that sorting be specified in the hint rather than as a DQL statement. If all you want is the first 100 records, try this hint instead:
ENABLE(RETURN_TOP 100)
In this case sorting with ORDER BY will work as you'd expect.
Lastly, note that adding (all) will not only find all documents matching the specified qualification, but all versions of every document. If this was your intention, that's fine.
I've worked with the DFC API (with Java) for a while, but I don't remember any default limit on queries; IIRC we always got all of the documents, with no limit. Actually (according to my notes) we had to set the limit explicitly with, for example, enable (return_top 2000). (As far as I know the syntax might depend on the DBMS behind EMC Documentum.)
Just a guess: check your dfc.properties file.