Count showing (visible) rows of a QTreeView - pyqt5

I found this question and this question.
I've also searched elsewhere.
The situation is that (starting with all root item children collapsed) my code iterates through the tree expanding the parents of those items whose data matches a certain criterion.
I just want to find the total number of showing rows displayed in the QTreeView at the end of that process. NB I use the word "showing" rather than "visible" as this is not a question about viewports: I want the total number showing assuming a viewport large enough not to have to create a vertical scrollbar.
Is there really no simple way to achieve this? Keeping a running total as the tree is expanded (for example, by counting the children of each parent that gets expanded in this manner) would be quite complex: sometimes, for example, two siblings meet the criterion, so the first expands its parent, but the second obviously doesn't. Not only that, but a node located deep in the tree expands not only its own parent, but (if necessary) its grandparent, great-grandparent, etc.
In view of the complexity of the foregoing, another possibility would be to iterate through the tree again, after expanding, in order to count the rows displayed. This seems a ludicrous effort just to get such a simple piece of information.
Please note that I'm talking about QTreeViews, not QTableViews. With the latter it appears that it is possible to use table_view.verticalHeader().count(). But a QTreeView doesn't have a verticalHeader() method.

QTreeView provides the indexAbove() and indexBelow() functions, and the latter:
Returns the model index of the item below index.
def count_showing_rows(self):
    # indexBelow() walks the rows the view actually displays, skipping the
    # children of collapsed items, so this counts exactly the showing rows.
    count = 0
    index = self.model().index(0, 0)
    while index.isValid():
        count += 1
        index = self.indexBelow(index)
    return count
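As a quick sanity check, here is a minimal, self-contained sketch that drives the method above (the model contents are made up purely for illustration):
import sys

from PyQt5.QtGui import QStandardItem, QStandardItemModel
from PyQt5.QtWidgets import QApplication, QTreeView

class TreeView(QTreeView):
    def count_showing_rows(self):
        count = 0
        index = self.model().index(0, 0)
        while index.isValid():
            count += 1
            index = self.indexBelow(index)
        return count

app = QApplication(sys.argv)

# Three root items, each with four children; everything starts collapsed.
model = QStandardItemModel()
for i in range(3):
    parent = QStandardItem("parent %d" % i)
    for j in range(4):
        parent.appendRow(QStandardItem("child %d.%d" % (i, j)))
    model.appendRow(parent)

view = TreeView()
view.setModel(model)
view.expand(model.index(1, 0))  # expand only the second root item

# 3 root rows + 4 children of the expanded root = 7 showing rows
print(view.count_showing_rows())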

Related

TABLEAU Totals not matching what's in view

I've been dealing with this issue in various ways throughout my time with this dataset in Tableau.
The Total count of properties for each city includes properties that have been successfully filtered out of view. Why? The dyn.RANKED Profitable Investments (grouped) variable on the Filter shelf is an attempt to double down on the same thing as the first line of the Calculated Field - to ignore the unwanted properties in each city. The view ignores them, but the totals do not.
If the Watershed Property pill is removed from the Rows shelf, then the dyn.NumProps_in_City results shown on the table each match the Totals described above (i.e., despite the first line of the calculated field, properties that do not meet that opening condition are being counted), even though the view with the Watershed pill knows not to show them.
Also if the Watershed Property pill is removed from the Rows shelf, then the dyn.RANKED Profitable Investments (grouped) variable on the Filter shelf suddenly only has one category to choose from (i.e., 'INVEST') if you go to edit the filter. Which would be great since that's the category I care about, but not if the counts are including things that are not in that category despite the filter.
Messing around with Include, Exclude, and Fixed in the calculated field doesn't seem to work here since I can't figure out how to get around various aggregate/non-aggregate and/or ATTR errors no matter where I place them. Plus, my incorrect counts are not suffering from an LOD issue - the LOD is correct - it's an issue of not consistently filtering out the unwanted rows at the desired LOD.
Please advise!
Thanks,
Christian
It seems that the dyn.RANKED calculated field calculates its value prior to filtering. This can happen if you have used any LOD calculations in the syntax.
Simply right-click such fields on the Filters shelf and click Add to Context. This will cause the LOD calculations to be computed after the filtering.
See this link: context filters sit above LOD calculations in the order of precedence, but measure filters sit below the LOD calcs. Therefore, if measures are used as filters, they have to be added to context so that their order of precedence is above such calculations.

How to set the explicit order for child table rows for one-to-many SQL relation?

Imagine a database with two tables, lists (with id and name) and items (with id, list_id, which is a foreign key linking to lists.id, and name) and the application with ORM and the corresponding models.
The task: have a way in the application to create/edit/view a list and the items inside it (that should be pretty easy), but also to save the order of the items within a list and to allow reordering or deleting items within a list (so, a user creates a list of items, then swaps two items, and when the list is displayed that order should be preserved).
What is the best way to implement it, database-wise? Which db structure should I use for it?
I see these ways of solving it:
not using a separate table for items, but storing everything in a list document (as a Postgres jsonb column, for example) - this can work, but I suppose that's not the RDBMS way to do it, and if the user wanted to update a single item, the whole list object would need to be updated
having a position field in the items table and adding a way to manage the position in the API - this can work, but it's quite complicated (handling cases where the position is the same for some items, handling swapping items, handling deletions and having to decrease the position of all the items that come after the deleted one, etc.)
Is there a simple way of implementing it? Like the one used in production by some big companies? I'm really curious about how such cases are handled in real life.
This is more theoretical question, so no code samples here (except for the db structure).
This is a good question, which as far as I know doesn't have any simple answers. I once came up with a solution for a high volume photo sharing site using an item table with columns list_id and position as you describe. The key to performance was to minimize renumbering as this database had millions of photos (and more than 2^32 likes).
The only operation was to move a single item to another point in the list (before or after another item in the list). This would work by first assigning positions with large steps, e.g. 1000, 2000, 3000. Whenever an item is moved between two others the average is used, e.g. move from pos=3000 to 1500. Eventually you can try to move an item between two items that have consecutive position numbers. Then you choose to renumber items either above or below, depending on which way requires fewer updates (e.g. if there were a run of consecutive positions). This was done using RANK and @vars, as I recall, on MySQL 5.7.
This worked well, resolving a problem where there was intermittent unavailability in production due to the massive renumberings that had been occurring when consecutive positions were used.
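Before the actual queries below, here is a minimal sketch of that midpoint strategy in Python (an in-memory stand-in for the table; the step size and helper names are made up for illustration):
STEP = 1000  # initial gap between consecutive positions

def initial_positions(n):
    # Assign positions with large steps: 1000, 2000, 3000, ...
    return [(i + 1) * STEP for i in range(n)]

def position_between(before, after):
    # Place a moved item halfway between its new neighbours; None means the
    # neighbours have consecutive positions and a renumbering pass is needed.
    if after - before > 1:
        return (before + after) // 2
    return None

print(initial_positions(3))          # [1000, 2000, 3000]
print(position_between(1000, 2000))  # 1500, e.g. moving an item from pos=3000 to 1500
print(position_between(1500, 1501))  # None -> renumber whichever side needs fewer updates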
I was able to dig up a couple of the queries (they were meant to go into a blog post ages ago). It turns out this was MySQL before RANK() was a thing, which is why the @shuffle_rank user variable was used. The + 0 (and the + 1) appear because this is the actual SQL sent to the server, but it was generated in code. This query finds the first gap below (greater than) position 120533287:
SELECT shuffle_rank, position
FROM (SELECT @shuffle_rank := @shuffle_rank + 1 AS shuffle_rank, position
      FROM `gallery_items`
      JOIN (SELECT @shuffle_rank := 0) initialize_rank_var
      WHERE `gallery_items`.`gallery_id` = 14103882 AND (position >= 120533287)
      ORDER BY position ASC) positionable_items
WHERE ABS(120533287 - position) >= shuffle_rank + 0 LIMIT 1
Here's the update query that runs after the above query, once the supporting code has decided that 3 rows need to be updated to make a gap. The + 1 here may be a larger value if there is room to renumber with some gap left over.
UPDATE `gallery_items`
SET position = -222 + (@shuffle_rank := @shuffle_rank + 1)
WHERE `gallery_items`.`gallery_id` = 24669422
  AND (position >= -222)
  AND ((SELECT @shuffle_rank := 0) = 0)
ORDER BY position ASC
LIMIT 3
Note that this pair of actual queries aren't for the same operation seeing as they have different gallery_id values (aka list_id).

Lucene.Net: How to grab the next 10 results after a document that has a certain field value?

Suppose I have a query in Lucene.Net 3.0.3 that matches 1,000,000 documents, and each document has a field named ProductID with a unique value. How can I grab the next 10 items immediately after a specific ProductID?
For instance, grab me the next 10 items after ProductID 4264423.
The ProductID could be anywhere within the 1,000,000 matches, and sorted however I wish.
One brute force solution is to loop through all the ScoreDocs, and use the FieldCache to find the correct ProductID, then grab the next 10. However, that seems inefficient, since we'd need to populate a huge ScoreDocs array.
Another idea is to use a custom Collector, along with the FieldCache to look for the correct ProductID, but as far as I know, Collectors aren't sorted.
Perhaps a solution is to use a combination of a custom Collector with a PriorityQueue, use the FieldCache to find the correct ProductID, note the Score of that document, then grab the next 10 items based on Score. (Although, if there are similar Score values, how is that handled?)
Please provide code samples, as I'm a Lucene.Net newbie. (Sample code preferably in C#.)
If a custom Collector + PriorityQueue is a viable option, here is some sample code to assist: https://stackoverflow.com/a/7938433/1145177

MongoDB infinite scroll sorted results

I am having a problem trying to achieve the following:
I'd like to have a page with 'infinite' scrolling functionality, with all the fetched results sorted by certain attributes. The way the code currently works is: it places the query, sorts the results, and displays them. The problem is that once the user reaches the bottom of the page and a new query is placed, the results from this query are sorted, but only in their own context. That is, if you have a total of 100 results and the first query displays only 50, then those 50 are sorted. But the next query (for the next 50) sorts its results based only on those 50, not on the 100 total results.
So, do I have to fetch all the results at once, sort them, and then apply some pagination logic to them, or is there a way for MongoDB to actually support infinite scrolling (AJAX requests) with the sorting applied across the whole result set?
There are a few ways to do this with MongoDB. You can use the .skip() and .limit() commands (documented here: http://www.mongodb.org/display/DOCS/Advanced+Queries#AdvancedQueries-CursorMethods) to apply pagination to the query.
Alternatively, you could add a clause to your query like: {sorted_field : {$gt : <value from last record>}}. In other words, filter out matches of the query whose sorted value is less than that of the last item on the current page of results. For example, if page 1 returns documents A through D, then to retrieve page 2 you repeat the same query with the additional filter x > D.
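To make both suggestions concrete, here is a rough pymongo sketch (the collection, the "score" field, and the page size are made up; it only shows the shape of the queries):
from pymongo import ASCENDING, MongoClient

coll = MongoClient()["demo"]["items"]  # hypothetical collection
PAGE = 50

# Approach 1: skip/limit pagination; the skip grows with each page fetched.
def page_by_skip(page_number):
    return list(coll.find().sort("score", ASCENDING)
                    .skip(page_number * PAGE).limit(PAGE))

# Approach 2: range ("seek") pagination using the last value already shown.
def page_after(last_score=None):
    query = {} if last_score is None else {"score": {"$gt": last_score}}
    return list(coll.find(query).sort("score", ASCENDING).limit(PAGE))

first = page_after()
if first:
    second = page_after(first[-1]["score"])
Note that the range approach assumes the sorted field is unique (or that ties are broken by a second field); otherwise documents sharing the boundary value can be skipped.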
Let me preface this by saying that I have no experience with MongoDB (though I am aware that it is a NoSQL database).
This question, however, is somewhat of a general database one (you'd probably get more responses tagging it as such). I've implemented such a feature using Cassandra (another, albeit quite different, NoSQL database); however, the same principles apply.
Use the sorted-by attribute of the last retrieved record, and conduct a range search based on it in the database. So, assuming your database consists of the following set of letters:
A
B
C
D
E
F
G
...and you were retrieving 2 letters at a time, you'd retrieve A and B first. When more records are needed, you'd use B to conduct a range search on the set of letters in the database. In plain English this would be something like:
Get the letters that appear after B, limit the results to 2
From a brief look at the MongoDB tutorial, it looks like you have conditional operators to help you implement this.

Want an efficient approach to retrieving records from a database when the retrieval is weighted and balanced

I'm working on something incredibly unique... a property listings website. ;)
It displays a list of properties. For each property, a teaser image and some caption data are displayed. If the teaser image and caption take a site visitor's interest, they can click on it and get a full property profile. All very standard.
The customer wants to be able to allow property owners to add multiple teaser images and to track which teaser images got the most click-throughs. No worries there.
But they also want to allow the property owner to weight each teaser image to control when it is shown. So for 3 images with weightings of 2, 6, 2, the 2nd image would be shown 6/10 times. This needs to be balanced: if the 2nd image is shown the first 6 times, it can't be shown again until the 1st and 3rd images have been shown twice each.
So I need to both increment how often an image has been retrieved and also retrieve images in a balanced way. Forget about actual image handling; I'm actually just talking about URLs.
Note that incrementing how often an image has been retrieved is a different animal from incrementing how often it has captured a click-through.
So I can think of a few different ways to approach the problem using database triggers or maybe some LINQ2SQL, etc., but it strikes me that someone out there will know of a solution that could be orders of magnitude faster than what I might come up with.
My first rough idea is to have a schema like so:
TeaseImage(PropId, ImageId, ImageUrl, Weighting, RetrievedCount, PropTotalRetrievedCount)
and then
select ImageRanks.*
from (select t.ImageID,
             t.ImageUrl,
             rank() over (partition by t.RetrievedCount order by sum(t.RetrievedCount) desc) as IMG_Rank
      from TeaseImage t
      where t.RetrievedCount < t.Weighting
      group by t.PropID) ImageRanks
where ImageRanks.IMG_Rank <= 1
And then
1. for each ImageId in the result set increment RetrievedCount by 1 and then
2. for each PropId in ResultSet increment PropTotalRetrievedCount by 1 and then
3. for each PropId in ResultSet check if PropTotalRetrievedCount == 10 and, if so, reset PropTotalRetrievedCount = 0 and RetrievedCount = 0 for each associated ImageId
Which frankly sounds awful :(
So any ideas?
Note: if I have to step out of the datalayer I'd be using C# / .Net. Thanks.
If you want to do this entirely in your database, you could split your table in two:
Image(ImageId, ImageUrl)
TeaseImage(TeaseImageId, PropId, ImageId, DateLastAccessed)
The TeaseImage table manages weightings by storing additional (redundant) copies of each property-image pair. So an image with a weight of six would get six records.
Then the following query gives you the least-recently used record.
select top 1 ti.TeaseImageId, i.ImageUrl
from TeaseImage ti
join Image i
  on i.ImageId = ti.ImageId
where ti.PropId = @PropId
order by ti.DateLastAccessed
Following the select, just update the record's DateLastAccessed. (Or even update it as part of the select procedure, depending on how fault-tolerant you need to be.)
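For illustration, here is a minimal Python sketch of that select-then-touch step (the connection object, the ? parameter style, and the UTC timestamp are assumptions; the table and columns follow the schema above):
from datetime import datetime

def next_tease_image(conn, prop_id):
    # Fetch the least-recently-used teaser image for the property...
    cur = conn.cursor()
    cur.execute(
        "SELECT TOP 1 ti.TeaseImageId, i.ImageUrl "
        "FROM TeaseImage ti JOIN Image i ON i.ImageId = ti.ImageId "
        "WHERE ti.PropId = ? "
        "ORDER BY ti.DateLastAccessed",
        (prop_id,),
    )
    row = cur.fetchone()
    if row is None:
        return None
    tease_image_id, image_url = row
    # ...then touch its timestamp so the next call rotates to another record.
    cur.execute(
        "UPDATE TeaseImage SET DateLastAccessed = ? WHERE TeaseImageId = ?",
        (datetime.utcnow(), tease_image_id),
    )
    conn.commit()
    return image_url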
Using this technique would give you fine-grained control over the order of image delivery (by seeding their DateLastAccessed values appropriately), and you could easily modify the ratios if need be.
Of course, as the table grows, the additional records would degrade query performance earlier than other approaches, but depending on the cost of the query relative to everything else that's going on that may not be significant.