ListIterator kind of iterator for Set

ListIterator can be used to traverse a List in both directions.
Why don't we have something similar to ListIterator for Set? Is it because it is not ordered? Please advise.

Short answer:
Yes, because it's not ordered.
Long answer:
LISTS
In a singly linked list every element has a reference to its successor.
In a doubly linked list every element has a reference to both its successor and its predecessor.
So it is easy to implement the next method of an Iterator: to iterate a list we just follow the next references between the elements. A backward iteration in a doubly linked list follows the predecessor references; in a singly linked list, the list order has to be inverted first and then iterated.
Either way, an order is defined.
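A minimal Java sketch of that bidirectional walk (my own example, not from the original question):

import java.util.LinkedList;
import java.util.List;
import java.util.ListIterator;

public class BothWays {
    public static void main(String[] args) {
        List<String> list = new LinkedList<>(List.of("a", "b", "c"));
        ListIterator<String> it = list.listIterator();
        while (it.hasNext()) System.out.println(it.next());         // prints a b c
        while (it.hasPrevious()) System.out.println(it.previous()); // prints c b a
    }
}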
SETS
A (hash-based) Set is managed by a hash function.
The advantage is that the lookup function of a set improves to O(1). But we lose the references between the set elements, so there is no cheap way to walk the Set in both directions. There are ways to iterate over such a Set, but a bidirectional traversal needs a defined order, and a HashSet simply has none.
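That said, if you choose a Set implementation that does define an order, you at least get separate forward and backward iterators, just not a single two-way cursor like ListIterator. A sketch using TreeSet (my example):

import java.util.NavigableSet;
import java.util.TreeSet;

public class OrderedSetWalk {
    public static void main(String[] args) {
        NavigableSet<Integer> set = new TreeSet<>(java.util.List.of(3, 1, 2));
        for (int i : set) System.out.print(i + " ");                     // 1 2 3 (sorted order)
        System.out.println();
        set.descendingIterator().forEachRemaining(System.out::println); // 3, 2, 1
    }
}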


Why is indices a property of collection rather than list?

Kotlin lists have the useful property indices, which provides the range of valid indices.
But according to https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.collections/indices.html it is actually a property not just of lists but of collections. I tried an experiment and, sure enough, a set also has this property.
But sets cannot be indexed by integer the way lists can, so it's not meaningful to talk about the indices of a set.
Given that, why is it a property of collections in general rather than just of lists (and arrays)?
Having the ability to randomly access an element by index is one use of indices, but you could also be using them simply as integer keys. Basically, they are defined as an integer range mapping to elements, and that only requires a size, which every collection has. In code, it is simply implemented as [0..size)
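A quick sketch of that (my example):

fun main() {
    val set = setOf("a", "b", "c")

    // indices only needs size, which every Collection has; in the stdlib it is
    // defined (essentially) as: val Collection<*>.indices: IntRange get() = 0..size - 1
    println(set.indices)                // prints 0..2

    // Usable even on a non-indexable collection, e.g. as a range of integer keys.
    for (i in set.indices) print("$i ") // prints 0 1 2
}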

IndexedDB using an index versus a key range?

In IndexedDB, suppose the keys are arrays of integers such as [n,0] through [n,m]. For operations that involve getting all the records in which the first element of the array key is n, or opening a cursor on that same set of records, is there any advantage to using an index on an additional property that stores n over using a key range?
Reasons to think an index may not be better include: the browser has to maintain the index on every change to the object store; an additional property has to be added to each record just to duplicate data (n) that is already stored in the key; and little may be gained, since the keys in the index will always point to consecutive records in the object store rather than records dispersed throughout it.
If the number of different values of n is likely no more than 1,000, and m no more than 50, is using an index superior to a key range?
Thank you.
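For concreteness, the key-range approach described above is a one-liner with IDBKeyRange; a sketch assuming an object store named "records" keyed by [n, m] (the store name and promise wrapper are mine):

// Get every record whose array key starts with n.
function getGroup(db, n) {
  return new Promise((resolve, reject) => {
    const store = db.transaction("records").objectStore("records");
    // In IndexedDB key ordering, arrays sort after numbers, so [n, []] is an
    // upper bound above every [n, m] in which m is a number.
    const range = IDBKeyRange.bound([n], [n, []]);
    const request = store.getAll(range);
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}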
I guess the purpose of IndexedDB is to have an object store locally. It is not SQL, where you need to update columns in every object. Since you change the object structure (say, by adding a property), it is true that all the objects in the store must be rewritten, as you said... Well, another option for you is to update the db with another store which contains something similar to a foreign key in SQL, or a unique key, which stores the extensions of the other stored objects... and in it, every object item is also supposed to have the same structure. I think this is the point where you start to use onupgradeneeded intensively.

Filtering out DataRows that don't have Parents

I have two tables in a DataSet that have a Parent-Child relation. The Parent is built from external data that I don't have control over, while the Child is sourced from the database that I am using for my internal data. This means that I can't always be certain that there will be a Parent for my Children. If this happens, I would prefer to filter the Children out of the DataGridView that is consuming the data via a BindingSource; I don't want to remove the rows from the database, because settings changes (or other events I have no control over) might reintroduce rows that were previously present and removed.
I had previously had to figure out how to go the opposite way, to create Children for previously unencountered Parents via:
' Find Parents that have no Child rows yet, and create a Child for each.
Dim newRows() As ParentRow = ParentTable.Select("Count(Child(MyRelation).ForeignKeyColumn) < 1")
For Each row As ParentRow In newRows
    ChildTable.AddChildRow(row, otherData)
Next
I thought I could use a similar approach:
BindingSource.Filter = "PARENT(MyRelation).KeyColumn IS NOT NULL"
But this filters out everything. Investigating, I discovered that running
ChildTable.Select("PARENT(MyRelation).KeyColumn IS NULL")(0).GetParentRow("MyRelation").Item("KeyColumn")
on a table where the given result has a parent succeeds and returns a value. That seems to contradict the Select statement, so clearly something unexpected is happening there.
I am at a loss. Is there any way to filter out (but retain in the backend) rows that don't have a parent via the BindingSource? What am I not seeing here?
Answer
It turns out that adding a Boolean column whose expression calculates Parent.Key IS NOT NULL works and is filterable by the BindingSource, without flagging the row as changed.
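A minimal sketch of that workaround (the column name HasParent is mine; adjust MyRelation/KeyColumn to the actual DataSet):

' Computed column: true whenever the parent row exists.
Dim hasParent As New DataColumn("HasParent", GetType(Boolean), "Parent(MyRelation).KeyColumn IS NOT NULL")
ChildTable.Columns.Add(hasParent)

' The BindingSource can now filter on the computed column.
BindingSource.Filter = "HasParent = True"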
Possible Explanation
Combined with the observation that the original method only fails when constraints are turned off, this makes me think it might be a design decision by Microsoft: when constraints are off, there is no guarantee that a Child has only one Parent. So filtering on a Parent column fails when constraints are off, but filtering on a column calculated from the Parent column doesn't care about the constraints, and so is okay. I just have to do my own work to make sure that there is only one Parent; fortunately this is guaranteed by the data source I am generating the Parent from.

NHibernate bag collection - when is it re-created?

I made a lot of examples to check when a bag collection is re-created while adding or removing items. I read the following at http://knol.google.com/k/nhibernate-chapter-16-improving-performance, section 16.5.1. Taxonomy:
Bags are the worst case. Since a bag permits duplicate element values and has no index column, no primary key may be defined. NHibernate has no way of distinguishing between duplicate rows. NHibernate resolves this problem by completely removing (in a single DELETE) and recreating the collection whenever it changes. This might be very inefficient.
I made a bidirectional one-to-many mapping (Person -> Addresses) and ran the following tests:
Test 1: Inverse = false; actions = insert, update, remove, count; collection types: Set, Bag
Result: The collections behave exactly the same!
Test 2: Inverse = true; actions = insert, update, remove, count; collection types: Set, Bag
Result: The collections behave almost the same! The only difference I see is when adding a new item to the bag collection - when I do that, the collection is not filled with data from the db.
I was using NHibernate Profiler / session statistics to analyze changes in the session object and in the database, but I did not see the collection being re-created. When does it happen? In memory?
Re-creating collections applies only to entities loaded from the database. When running tests in the same session in which the entities were created, NHibernate knows that the collections are empty, manipulates them in memory, and saves only the final state to the database on transaction commit/session flush.
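A sketch of a test that does show the re-create (the Person/Address classes, the non-inverse bag mapping, and factory - an ISessionFactory - are all assumed here, not taken from the question):

// Session 1: create the entities. The bag only exists in memory,
// so NHibernate just INSERTs the final state on commit.
using (var session = factory.OpenSession())
using (var tx = session.BeginTransaction())
{
    var person = new Person { Name = "Ann" };
    person.Addresses.Add(new Address { City = "Oslo" });
    session.Save(person);
    tx.Commit();
}

// Session 2: the bag is now loaded from the database. With a non-inverse
// bag there is no key to address individual rows, so changing it makes
// NHibernate DELETE all the address rows and re-INSERT them.
using (var session = factory.OpenSession())
using (var tx = session.BeginTransaction())
{
    var person = session.Get<Person>(1);  // id 1 assumed for the sketch
    person.Addresses.Add(new Address { City = "Bergen" });
    tx.Commit();
}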
I've done similar tests - see this blog entry for an example of re-creating a bag collection.

How to find all nodes in a subtree in a recursive SQL query?

I have a table which defines a child-parent relationship between nodes:
CREATE TABLE node (            -- pseudo code alert
    id       INTEGER PRIMARY KEY,
    parentID INTEGER           -- should be a valid id, or NULL for a root
);
If parentID always points to a valid existing node, then this will naturally define a tree structure.
If the parentID is NULL then we may assume that the node is a root node.
How would I:
Find all the nodes which are descendants of a given node?
Find all the nodes under a given node, down to a specific depth?
I would like to do each of these as a single SQL query (I expect it would necessarily be recursive), or as two mutually recursive queries.
I'm doing this in an ODBC context, so I can't rely on any vendor specific features.
Edit
No tables are written yet, so adding extra columns/tables is perfectly acceptable.
The tree will potentially be updated and added to quite often; auxiliary data structures/tables/columns would be possible, though they need to be kept up to date.
If you have any magic books you reach for when writing this kind of query, I'd like to know.
Many thanks.
This link provides a tutorial on both the Adjacency List Model (as described in the question) and the Nested Set Model. It is written as part of the documentation for MySQL.
What is not discussed in that article is insertion/deletion time and the maintenance cost of the two approaches. For example:
a dynamically grown tree using the Nested Set Model would seem to need some maintenance to keep the nesting consistent (e.g. renumbering all left and right set numbers);
removal of a node in the adjacency list model would require updates to at least one other row.
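For what it's worth, with the adjacency list model, engines that support the standard SQL:1999 recursive common table expression can answer both of the asked-for queries in one statement (a sketch; syntax varies slightly by vendor, e.g. SQL Server omits the RECURSIVE keyword):

WITH RECURSIVE subtree (id, parentID, depth) AS (
    SELECT id, parentID, 0 FROM node WHERE id = 5      -- the starting node
    UNION ALL
    SELECT n.id, n.parentID, s.depth + 1
    FROM   node AS n
    JOIN   subtree AS s ON n.parentID = s.id
)
SELECT * FROM subtree WHERE depth > 0;   -- add AND depth <= 2 to limit the depth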
If you have any magic books you reach for when writing this kind of query, I'd like to know.
Celko's Trees and Hierarchies in SQL For Smarties
Store the entire "path" of IDs from the root node in a separate column, being sure to use a separator at the beginning and end as well. E.g. say 1 is the parent of 5, which is the parent of 17, and your separator character is a dash: you would store the value -1-5-17- in the path column of node 17.
Now, to find all children of 5, you can simply select records where the path includes -5-.
The separators at the ends are necessary so you don't need to worry about IDs that are at the leftmost or rightmost end of the field when you use LIKE.
As for your depth issue, if you add a depth column to your table indicating the current nesting depth, this becomes easy as well. You look up your starting node's depth, add x to it (where x is the number of levels deep you want to search), and filter out records with a greater depth than that.
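Sketches of both queries under that scheme (the path and depth column names are illustrative):

-- All descendants of node 5 (matches node 5 itself too; add AND id <> 5 to exclude it).
SELECT * FROM node WHERE path LIKE '%-5-%';

-- Descendants of node 5 down to at most 2 levels below it.
SELECT c.*
FROM   node AS p, node AS c
WHERE  p.id = 5
  AND  c.path LIKE '%-5-%'
  AND  c.depth <= p.depth + 2;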