How do I count occurrences of a property value in a collection? - vba

I have some data that I arrange into a collection of custom class objects.
Each object has a couple of properties aside from its unique name, which I will refer to as batch and exists.
There are many objects in my collection, but only a few possible values of batch (although the number of possibilities is not pre-defined).
What is the easiest way to count occurrences of each possible value of batch?
Ultimately I want to create a userform something like this (values are arbitrary, for illustration):
Batch A 25 parts (2 missing)
Batch B 17 parts
Batch C 16 parts (1 missing)
One of my ideas was to make a custom "batch" class, which would have properties .count and .existcount, and to create a collection of those objects.
I want to know if there is a simpler, more straightforward way to count these values. Should I scrap the idea of a secondary collection and just create some loops and counter variables when I generate my userform?

You have described the two possibilities well:
Loop over your collection every time you need the count
Precompute the statistics, and access it when needed
This is a common choice one has to make, and here it comes down to performance versus complexity.
Option 1, with a naive loop implementation, takes O(n) time, where n is the size of your collection, and unless your collection is static, you will have to recompute it every time you need your statistics. On the bright side, the naive loop is trivial to write. Performance could suffer on frequent queries and/or large collections.
Option 2 is fast for retrieval, basically O(1). But every time your collection changes, you need to update your statistics. The update is incremental, i.e. you do not have to go through the whole collection, only over the changed items, but that means you need to handle every kind of update (new item, deleted item, updated item). So it is a bit more complex than the naive loop. If your collections are entirely new all the time and you query them only once, you have little to gain here.
So it is up to you to decide where to make the tradeoff, according to the parameters of your problem.
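For illustration, here is a minimal sketch of option 1 (the naive counting pass). It is written in Python with made-up Part and BatchStats classes standing in for your objects; in VBA the same pattern would typically use a Scripting.Dictionary keyed by batch.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the asker's custom class and "batch" statistics class.
@dataclass
class Part:
    name: str
    batch: str
    exists: bool

@dataclass
class BatchStats:
    count: int = 0        # total parts in this batch (.count in the question)
    exist_count: int = 0  # parts whose exists flag is True (.existcount)

def tally(parts):
    """Option 1: a single O(n) pass over the collection, recomputed on demand."""
    stats = {}
    for p in parts:
        s = stats.setdefault(p.batch, BatchStats())
        s.count += 1
        if p.exists:
            s.exist_count += 1
    return stats

# Arbitrary sample data, mirroring the userform mock-up.
parts = [Part("P1", "A", True), Part("P2", "A", False), Part("P3", "B", True)]
for batch, s in sorted(tally(parts).items()):
    missing = s.count - s.exist_count
    suffix = f" ({missing} missing)" if missing else ""
    print(f"Batch {batch} {s.count} parts{suffix}")
```

Option 2 would keep a structure like stats alive and adjust the affected BatchStats entry whenever an item is added, removed, or has its batch/exists values changed.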

Related

Redis bitmap split key division strategy

I'm grabbing and archiving A LOT of data from the Federal Elections Commission public data source API, which has a unique record identifier called "sub_id" that is a 19-digit integer.
I'd like to think of a memory-efficient way to catalog which line items I've already archived, and Redis bitmaps immediately come to mind.
Reading the documentation on Redis bitmaps indicates a maximum storage length of 2^32 bits (4294967296).
A 19-digit integer could theoretically range anywhere from 0000000000000000001 - 9999999999999999999. Now I know that the data source in question does not actually have ten quintillion records, so the IDs are clearly sparsely populated and not sequential. Of the data I currently have on file, the maximum ID is 4123120171499720404 and the minimum is 1010320180036112531. (I can tell the IDs are date-based, because the 2017 and 2018 in the keys correspond to the dates of the records they refer to, but I can't suss out the rest of the pattern.)
If I wanted to store which line items I've already downloaded, would I need 2328306436 different Redis bitmaps? (9999999999999999999 / 4294967296 = 2328306436.54.) I could probably work up a tiny algorithm that, given a 19-digit ID, divides by some constant to determine which split bitmap to check.
There is no way this strategy seems tenable so I'm thinking I must be fundamentally misunderstanding some aspect of this. Am I?
A Bloom filter such as RedisBloom would be an optimal solution (RedisBloom can even grow if you miscalculated your desired capacity).
After you create your filter with BF.RESERVE, you pass BF.ADD an 'item' to be inserted. The item can be as long as you want; the filter uses hash functions and a modulus to fit it to the filter size. When you want to check whether an item was already added, call BF.EXISTS with that item.
In short, what you describe here is a classic example for when a Bloom Filter is a great fit.
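A minimal redis-py sketch of that flow, assuming a Redis server with the RedisBloom module loaded (the key name, capacity, and error rate below are illustrative guesses, not recommendations):

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Reserve a filter sized for the expected number of ids (errors if the key already exists).
# error_rate=0.001 means roughly a 0.1% chance of a false "already seen" answer.
r.execute_command("BF.RESERVE", "fec:sub_ids", 0.001, 100_000_000)

sub_id = "4123120171499720404"

# BF.ADD returns 1 if the item was newly added, 0 if it was (probably) seen before.
newly_added = r.execute_command("BF.ADD", "fec:sub_ids", sub_id)

# BF.EXISTS returns 1 if the item was (probably) added before, 0 if definitely not.
seen = r.execute_command("BF.EXISTS", "fec:sub_ids", sub_id)
print(newly_added, seen)
```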
How many "items" are there? What is "A LOT"?
Anyway. A linear approach that uses a single bit to track each of the 10^19 potential items requires 1250 petabytes at least. This makes it impractical (atm) to store it in memory.
I would recommend that you teach yourself about probabilistic data structures in general, and after having grokked the tradeoffs look into using something from the RedisBloom toolbox.
If the IDs are not sequential and are very spread out, keeping track of which ones you have processed using a bitmap is not the best option, since it would waste a lot of memory.
However, it is hard to point to the best solution without knowing how many distinct sub_ids your data set has. If you are talking about a few tens of millions, a simple Redis set may be enough.
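For scale: a plain Redis set gives exact membership at the cost of storing every id. A tiny redis-py sketch of that approach (the key name is an assumption):

```python
import redis

r = redis.Redis()
sub_id = "1010320180036112531"

# SADD returns 1 if the id was not tracked yet, 0 if it was already in the set.
is_new = r.sadd("fec:archived_sub_ids", sub_id)

# SISMEMBER checks membership without modifying the set.
already_archived = r.sismember("fec:archived_sub_ids", sub_id)
print(is_new, already_archived)
```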

How to implement a scalable, unordered collection in DynamoDB?

I am looking into implementing a scalable unordered collection of objects on top of Amazon DynamoDB. So far the following options have been considered:
Use DynamoDB document data types (map, list) and use a document path to access stand-alone items. The obvious drawback is that the collection is limited to 400KB of data, meaning perhaps 1..10K objects depending on their size. A less obvious drawback is that the cost of inserting a new object into such a collection is going to be huge: Amazon specifies that write capacity is deducted based on the total item size, not just the newly added object -- therefore ~400 capacity units for inserting a 1KB object when approaching the size limit. So I consider this ruled out.
Use a composite primary hash + range key, where the primary hash remains the same for all objects in the collection, and the range key is just something random or an atomic counter. The obvious drawback is that an identical hash key results in bad key distribution -- cardinality is low when collections have a large number of objects. This means bad partitioning and a scaling issue: all reads/writes on the same collection are stuck on one shard, subject to the 3000 reads / 1000 writes per second limitation of a DynamoDB partition.
Use a global secondary index with a secondary hash + range key, where the hash key remains the same for all objects belonging to the same collection, and the range key is just something random or an atomic counter. Similar to the above, partitioning becomes poor for the GSI, and it will become a bottleneck, with too many identical hashes rapidly draining all the capacity provisioned for the index. I didn't find how the GSI is implemented exactly, so I'm not sure how badly it suffers from low cardinality.
The question is whether I could live with (2) or (3) and suffer from non-ideal key distribution, whether there is another way of implementing the collection that I have overlooked, or whether I should consider another NoSQL database engine altogether.
This is a "shooting from the hip" answer, what you end up doing may depend on how much and what type of reading and writing you do.
Two things the dynamo docs encourage you to avoid are hot keys and, in general, scans. You noted that in cases (2) and (3), you end up with a hot key. If you expect this to scale (large collections), the hot key will probably hurt more and more, especially if this is a write-intensive application.
The docs on Query and Scan operations (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/QueryAndScan.html) say that, for a query, "you must specify the hash key attribute name and value as an equality condition." So if you want to avoid scans, this might still force your hand and put you back into that hot key situation.
Maybe one route would be to embrace doing a scan operation, but just have one table devoted to your collection. Then you could just have a fully random (well distributed) hash key and do a scan every time. This assumes you always want everything from the collection (you didn't say). This will still hurt if you scale up to a large collection, but if you always want the full set back, you'll have to deal with that pain regardless. If you just want a subset, you can add a limit parameter. This would help performance, but you will always get back the same subset (or you can use the last evaluated key and keep going). The docs also mention parallel scans.
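A rough boto3 sketch of that scan-and-paginate pattern (the table name, region, and page size are placeholders, not recommendations):

```python
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("my-collection")  # one table devoted to the collection

def scan_all(page_size=100):
    """Yield every item, following LastEvaluatedKey until the scan is exhausted."""
    kwargs = {"Limit": page_size}
    while True:
        response = table.scan(**kwargs)
        yield from response.get("Items", [])
        last_key = response.get("LastEvaluatedKey")
        if last_key is None:
            break
        kwargs["ExclusiveStartKey"] = last_key

for obj in scan_all():
    print(obj)
```

Parallel scans follow the same pattern, with Segment/TotalSegments added to the scan arguments.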
If you are using AWS, elasticache/redis might be another route to try? The first pass might code up a lot faster/cleaner than situation (1) that you mentioned.

LIST alternative in redis

Redis.io
The main features of Redis Lists from the point of view of time complexity is the support for constant time insertion and deletion of elements near the head and tail, even with many millions of inserted items. Accessing elements is very fast near the extremes of the list but is slow if you try accessing the middle of a very big list, as it is an O(N) operation.
What is the alternative to LIST when the amount of data is very large and writes are far fewer than reads?
This is something I'd definitely benchmark before doing, but if you're really hitting a performance issue accessing items in the middle of the list, there are a couple of alternatives that really depend on your use case.
Don't let the list get so big; age out/trim pieces that don't matter any more.
Memoize hot sections of the list. If a particular paginated range is being requested much more often than others, make that its own list. Check whether it exists already, and if it doesn't, create a subset of your list covering that paginated range.
Bucket your list from the beginning into "manageable sizes" (for whatever your definition of manageable is). If a list is purely additive (no removal from the list), you could use the modulus of an item's index as part of the key so that your list is stored in smaller buckets, as in the sketch below. Ex: key = "your_key_name_" + index % 100000
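A short redis-py sketch of that bucketing idea (the key prefix and divisor come straight from the example above; everything else is made up for illustration):

```python
import redis

r = redis.Redis()

BUCKET_COUNT = 100_000        # the divisor from the example; tune to your data
BASE_KEY = "your_key_name_"   # placeholder prefix, as in the example

def bucket_key(index):
    # Same construction as above: the modulus of the item's index picks the bucket.
    return f"{BASE_KEY}{index % BUCKET_COUNT}"

def add_item(index, value):
    # Each Redis list stays small because items are spread over BUCKET_COUNT keys.
    r.rpush(bucket_key(index), value)

def read_bucket(index):
    # Reading one small bucket avoids deep (O(N)) access into a single huge list.
    return r.lrange(bucket_key(index), 0, -1)
```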

SQL Read/Write efficiency

Is there any difference in the performance of read and write operations in SQL? Using Linq to SQL in an ASP.NET MVC application, I often update many values in one of my tables in a single post (during this process, many posts of this type will come in rapidly from the user, although the user is unable to submit new data until the previous update is complete). My current implementation is to loop through the input (a list of the current values for each row) and write them to the field (a nullable int). I wonder if there would be any performance difference if I instead read the current db value and only wrote it if it had changed. Most of these operations change the values for roughly 1/4 to 2/3 of the rows, some change fewer, and few change more than 2/3 of the rows.
I don't know much about the comparative speeds of these operations (or if there is even any difference). Is there any benefit to be gained from doing this? If so, what table sizes would benefit the most/not benefit at all, and would there be any percentage of the rows changing that would be a threshold for this improvement?
It's always faster to read.
A write is actually always a read followed by a write.
SQL needs to know which row to write to, which involves reading either an index or the table itself in a seek or scan operation, then writing to the appropriate row.
Writing also needs to update any applicable indexes. Depending on the circumstance, the index may get "updated" even when the data doesn't change.
As a very general rule, it's a good idea only to modify the data that needs to be changed.
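To illustrate "only modify what changed" in plain SQL: the sketch below uses Python's sqlite3 with a hypothetical parts table just to keep it self-contained; the question uses Linq to SQL, but the same WHERE-guard idea applies to the generated UPDATE statements.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parts (id INTEGER PRIMARY KEY, value INTEGER)")  # hypothetical table
conn.execute("INSERT INTO parts (id, value) VALUES (1, 10), (2, NULL)")

def update_if_changed(part_id, new_value):
    # SQLite's "IS NOT" compares like "<>" but also treats NULL as a comparable value,
    # so the row (and its indexes) is only written when the value really differs.
    cur = conn.execute(
        "UPDATE parts SET value = ? WHERE id = ? AND value IS NOT ?",
        (new_value, part_id, new_value),
    )
    return cur.rowcount  # 1 if a write happened, 0 if the value was already current

print(update_if_changed(1, 10))  # 0: unchanged, no write
print(update_if_changed(2, 7))   # 1: value changed, row written
```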

Way to create a frozen table-view in SQLite?

I've got an SQLite table with potentially hundreds of thousands of entries, which is being added to (and occasionally removed from) in the background at irregular intervals. The UI needs to display this table in an arbitrary user-selected sorted order, within a wxWidgets wxListCtrl item.
I'm planning to use a wxLC_VIRTUAL list control, and query the table for small groups of items as needed using LIMIT and OFFSET, but I foresee trouble. When the background process makes changes to items that are "above" the currently-viewed ones, I can't see any way to know how the offsets of the currently-viewed items will change.
Is there some SQLite trick to handle this? Maybe a way to identify what offset a particular record is at in a specific sorted order, without iterating through all of the records returned by a SELECT statement?
Alternatively, is there some way to create an unchanging view of the database at a particular time, without a time-consuming duplication of it?
If all else fails, I can store the changed items and add them later, but I'm hoping I won't have to.
Solved it by creating a query to find the index of an item, by counting the number of items that are "less than" (in the user-defined order) the one I'm looking for. A little complex to write, because of the user-defined ordering, but it works, and runs surprisingly fast even on a huge table.
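A small sqlite3 sketch of that counting query; the table, columns, and sort order here are hypothetical, standing in for the user-selected ordering:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entries (id INTEGER PRIMARY KEY, name TEXT, size INTEGER)")
conn.executemany(
    "INSERT INTO entries (name, size) VALUES (?, ?)",
    [("alpha", 30), ("bravo", 10), ("charlie", 20)],
)

def row_offset(entry_id):
    """Offset of a row when the view is sorted by size, with id as a tie-breaker."""
    (offset,) = conn.execute(
        """
        SELECT COUNT(*)
        FROM entries AS other, entries AS target
        WHERE target.id = ?
          AND (other.size < target.size
               OR (other.size = target.size AND other.id < target.id))
        """,
        (entry_id,),
    ).fetchone()
    return offset

print(row_offset(3))  # "charlie" (size 20) sorts after "bravo" (size 10) -> offset 1
```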