A-Frame: How do I link/unlink entities so that they animate and interact together?

How would I link and unlink multiple entities together so that they can be animated together?
An example is that there is a small pile of entities. When I click on this pile it spreads apart and floats upwards towards the user, so it's not a pile any more but a series of discrete entities, each separated by a small distance.
The pile consists of 3 entities: A, B, and C.
If I click on the entity with id A then they all scale/position/rotate back into a pile.
If I click on entity id B then all entities move to the left. If I click on entity C then C leaves the pile and its movements are no longer associated with the pile.
There is another pile with entities X, Y and Z
If entity X, Y, or Z is near entity C, then entity C joins the X, Y, Z pile. If the user clicks on entity Z and drags it over to be near entity A or B, then entity Z joins the A & B pile.
So then if entity A is clicked then A, B and Z will scale, rotate, and position together.

I believe the core question is how to re-parent entities to and from entity containers, assuming it is understood that animating/moving an entity container moves all its children, and how to listen to click events. Here's a container:
<a-entity id="groupContainer" animation__position="..." animation__scale="..." animation__rotation="...">
  <a-entity class="child"></a-entity>
  <a-entity class="child"></a-entity>
  <a-entity class="child"></a-entity>
</a-entity>
There isn't a clean way to re-parent A-Frame entities at the DOM level yet, since detaching and re-attaching will remove and reinitialize all components. You can, however, move the entity out with three.js:
var someOtherContainer = document.getElementById('someOtherContainer').object3D;
var childToReparent = document.querySelector('#someChildToRemoveFromContainer');
// Re-parent the underlying three.js object; the A-Frame entity stays where
// it is in the DOM, but it now moves with the new container.
someOtherContainer.add(childToReparent.object3D);
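The join/leave-by-proximity logic described in the question is independent of A-Frame and can be prototyped with plain functions. A minimal sketch, where the helper names and the 0.5 distance threshold are made up for illustration:

```javascript
// Positions are plain {x, y, z} objects, like those returned by
// getAttribute('position') on an A-Frame entity.
function distance(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

// Distance from a free entity to the closest member of a pile.
function nearestMemberDistance(entityPos, memberPositions) {
  return Math.min(...memberPositions.map(p => distance(entityPos, p)));
}

// A free entity joins a pile when it gets within `threshold` of any member.
function shouldJoinPile(entityPos, memberPositions, threshold = 0.5) {
  return nearestMemberDistance(entityPos, memberPositions) <= threshold;
}
```

When `shouldJoinPile` returns true you would re-parent the dragged entity's `object3D` into the pile's container, as in the three.js snippet above.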

Related

Best way to represent a stockroom

I'm currently working on a project where I try to optimize the way objects are assigned a spot in a stockroom: we want the most frequently bought products as close to the front as possible, in decreasing order of frequency towards the back. I was thinking of representing this with an array, where array[x] = 0 represents a wall, array[x] = 1 represents a path where one could walk, and array[x] = Object represents an item on a shelf, with properties such as distance from the drop-off point, etc.
Does anyone have any tips or things I should think about?
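The array idea above can be sketched quickly. A minimal sketch, assuming a 2D grid where 0 = wall and anything else is traversable, with a BFS computing walking distance from the drop-off point (the "closeness" you would rank slots by):

```javascript
// Breadth-first search over a grid. grid[r][c] === 0 means wall; any other
// value (path or item slot) is treated as reachable. Returns a matrix of
// step distances from `start`, with Infinity for unreachable cells.
function distancesFrom(grid, start) {
  const rows = grid.length, cols = grid[0].length;
  const dist = grid.map(row => row.map(() => Infinity));
  dist[start[0]][start[1]] = 0;
  const queue = [start];
  while (queue.length > 0) {
    const [r, c] = queue.shift();
    for (const [dr, dc] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const nr = r + dr, nc = c + dc;
      if (nr < 0 || nr >= rows || nc < 0 || nc >= cols) continue;
      if (grid[nr][nc] === 0 || dist[nr][nc] !== Infinity) continue;
      dist[nr][nc] = dist[r][c] + 1;
      queue.push([nr, nc]);
    }
  }
  return dist;
}
```

You could then sort item slots by their distance value and assign the most frequently bought products to the closest slots.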

single person detection in latest bodypix

When I try the bokeh segmentation effect using body-pix#1.0.0, it detects/segments the person (A) in front of the camera. If another person (B) is standing behind, away from A, B is blurred out. If person B comes very close to the contour of A, then person B is also detected. This is the preferred behaviour.
Now when I try body-pix#2.0.0, both person A and person B are detected even though I am using the segmentPerson API. Please note, person B is standing well away from person A, yet both are detected. The advantage I see with 2.0 is that the contour of the detected person is much more accurate and smoother than in 1.0, which had a gap in the contour with the bokeh effect missing around that gap; in 2.0 the contour is more accurate, but multiple people are detected. Is there any parameter I could tweak to restrict this to single-person detection while keeping the smoother contour?
Thanks
For those who want to know the answer. Source: https://github.com/tensorflow/tfjs/issues/2547
If you want to use BodyPix 2.0 to only blur just a subset of people (e.g. the large people), a quick way would be to use BodyPix 2.0's Multi-Person Segmentation API: https://github.com/tensorflow/tfjs-models/tree/master/body-pix#multi-person-segmentation.
This method returns an array of PersonSegmentation objects. In your case it will be an array of two PersonSegmentation objects: one for Person A and one for Person B.
You could then remove certain people (in your case Person B) from that array and pass the resulting array (with only 1 element: Person A) to the drawBokehEffect https://github.com/tensorflow/tfjs-models/tree/master/body-pix#bodypixdrawbokeheffect.
To automate this process for other cases (3 or more people):
Each PersonSegmentation object has a .pose field that contains the 2D coordinates (in image pixel space) of the person's 17 keypoints. These can be used to compute the smallest bounding-box area for each person, and that area can then be used as a criterion to remove small (distant) people from the image.
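A sketch of that filtering step. The helper names are made up; it assumes the PersonSegmentation shape documented for BodyPix 2.0, where each pose keypoint has a position {x, y} in pixels:

```javascript
// Area of the axis-aligned bounding box around a pose's keypoints.
function boundingBoxArea(keypoints) {
  const xs = keypoints.map(k => k.position.x);
  const ys = keypoints.map(k => k.position.y);
  return (Math.max(...xs) - Math.min(...xs)) *
         (Math.max(...ys) - Math.min(...ys));
}

// Keep only the segmentation whose pose covers the largest area,
// i.e. (usually) the person closest to the camera.
function largestPerson(segmentations) {
  return segmentations.reduce((best, s) =>
    boundingBoxArea(s.pose.keypoints) > boundingBoxArea(best.pose.keypoints)
      ? s : best);
}
```

You would then pass `[largestPerson(segmentations)]` to drawBokehEffect instead of the full array.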

Cannot read property 'kids' of undefined - or how to break a circular dependency of signals in Elm?

While elm-make succeeds, I get the following error in the browser:
Cannot read property 'kids' of undefined
I assume it's because I have a circular dependency of signals:
model -> clicks -> model
Here is the relevant code:
model : Signal Model
model =
  Signal.foldp update initialModel clicks

clicks : Signal Action
clicks =
  let
    clickPosition =
      Mouse.position
        |> Signal.sampleOn Mouse.clicks

    tuplesSignal =
      Signal.map3 (,,) clickPosition Window.dimensions model
  in
    ...
It feels like model is implemented the way it commonly is in Elm, so I should challenge the clicks -> model dependency.
Here is some context:
I'm building a sliding puzzle game using canvas:
When the user clicks a tile that can move, it should move. Otherwise, the click should be ignored.
clicks will produce the following actions: Left, Right, Up, Down
For example, if the user clicks on tiles 12, 11, 8, 15 (in this order), clicks should be: Down -> Right -> Up
The problem is that, in order to calculate which tile was clicked, I need to know the board dimensions (e.g. 4 rows and 4 columns in the picture above). But the board dimensions are stored in the model (imagine a user interface that allows users to change the board dimensions).
How do I get out of this circular dependency?
In this case I think you should go more low-level in what you call an input, and accept click positions as inputs. Then you can compose the update function in your Signal.foldp out of two others:
The first turns a click and the model into a Maybe Direction (assuming Left, Right, Up, Down are constructors of a type Direction).
The second takes a Direction value and the model and calculates the new model. You can use Maybe.map and Maybe.withDefault to handle the Maybe part of the result of the first function.
Separating these two parts, even though you produce and consume Direction immediately, makes your system more self-documenting and shows the conceptual split between the raw input and the restricted "actual" inputs.
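The two-stage update described above can be sketched language-agnostically (here in JavaScript for brevity). All names are hypothetical stand-ins, the board is a flat grid of `cols` columns with a blank slot, and `null` plays the role of Elm's Maybe:

```javascript
// Stage 1: raw click position -> Direction or null (ignored click).
// A tile can move only if it is orthogonally adjacent to the blank.
function clickToDirection(clickPos, model) {
  const { cols, tileSize, blank } = model;
  const tile = Math.floor(clickPos.y / tileSize) * cols +
               Math.floor(clickPos.x / tileSize);
  if (tile === blank - cols) return 'Down';                      // tile above blank
  if (tile === blank + cols) return 'Up';                        // tile below blank
  if (tile === blank - 1 && blank % cols !== 0) return 'Right';  // tile left of blank
  if (tile === blank + 1 && tile % cols !== 0) return 'Left';    // tile right of blank
  return null;
}

// Stage 2: Direction + model -> new model (the blank swaps with the tile).
function applyDirection(dir, model) {
  const delta = { Down: -model.cols, Up: model.cols, Right: -1, Left: 1 }[dir];
  return { ...model, blank: model.blank + delta };
}

// The composed update function used in the fold.
function update(clickPos, model) {
  const dir = clickToDirection(clickPos, model);
  return dir === null ? model : applyDirection(dir, model);
}
```

The same shape works in Elm: the first function returns `Maybe Direction`, and `Maybe.map`/`Maybe.withDefault` replace the `null` check.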
P.S. There are conceivable extensions to the Elm language that would allow you to more easily write this input preprocessing in signal-land. But such extensions would make the signal part of the language so much more powerful that it's unclear if it would make "the right way" to structure programs harder to discover.

Finding cycles in a graph (not necessarily Hamiltonian or visiting all the nodes)

I have a graph like the one in Figure 1 (the first image) and want to connect the red nodes to form a cycle, but the cycle does not have to be Hamiltonian, as in Figure 2 and Figure 3 (the last two images). The problem has a much bigger search space than TSP since we can visit a node twice. As with TSP, it is impossible to evaluate all the combinations in a large graph, so I should try a heuristic; but the problem is that, unlike TSP, the length of a cycle or tour is not fixed here, because visiting all the blue nodes is not mandatory, which leads to variable-length cycles that include some of the blue nodes. How can I generate a possible "valid" combination each time for evaluation? I mean, a cycle can be {A, e, B, l, k, j, D, j, k, C, g, f, e} or {A, e, B, l, k, j, D, j, i, h, g, C, g, f, e}, but not {A, e, B, l, k, C, g, f, e} or {A, B, k, C, i, D}.
Update:
The final goal is to evaluate which cycle is optimal or near-optimal considering both length and risk (see below). So I am not only minimizing the length but minimizing the risk as well. This means the risk of a cycle cannot be evaluated unless you know its full node sequence. I hope this clarifies why I cannot evaluate a new cycle in the middle of generating it.
We can:
generate and evaluate possible cycles one by one;
or generate all possible cycles and then do their evaluation.
Definition of the risk:
Assume the cycle is a ring which connects the primary node (one of the red nodes) to all other red nodes. In case of failure in any part (edge) of the ring, no red node should be disconnected from the primary node (this is what we want). However, there are some edges we have to traverse twice (since we do not have a Hamiltonian cycle connecting all the red nodes), and in case of failure in one of those edges, some red nodes may be totally disconnected. So the risk of a cycle is the sum, over the risky edges (those appearing twice in our ring/tour), of the edge's length multiplied by the number of red nodes we lose if that edge is cut.
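That definition can be written down directly. In this sketch, `lengths` (edge length) and `lostRedNodes` (red nodes disconnected if a risky edge is cut) are assumed to be precomputed inputs:

```javascript
// Normalise an undirected edge into a canonical string key.
function edgeKey(u, v) {
  return u < v ? u + '-' + v : v + '-' + u;
}

// Risk = sum over edges traversed twice of (edge length) * (red nodes lost
// if that edge fails). `tour` is the node sequence of the cycle.
function riskOfCycle(tour, lengths, lostRedNodes) {
  const counts = {};
  for (let i = 0; i + 1 < tour.length; i++) {
    const e = edgeKey(tour[i], tour[i + 1]);
    counts[e] = (counts[e] || 0) + 1;
  }
  let risk = 0;
  for (const e of Object.keys(counts)) {
    if (counts[e] === 2) risk += lengths[e] * (lostRedNodes[e] || 0);
  }
  return risk;
}
```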
A real example of the 3D graph I am working on, with 5 red nodes and 95 blue nodes, is below:
And here is a link to an Excel sheet containing the adjacency matrix of the above graph (the first five nodes are red and the rest are blue).
Upon a bit more reflection, I decided it's probably better to just rewrite my solution, as the fact that you can use red nodes twice makes my original idea of mapping out the paths between red nodes inefficient. However, it isn't completely wasted, as the blue nodes between red nodes are important.
You can actually solve this using a modified version of BFS, as more or less a backtracking algorithm. For each unique branch the following information is stored; most of it simply allows for faster rejection at the cost of more space, and only the first two items are actually required:
The full current path. (initially a list with just the starting red node)
The remaining red nodes. (initially all red nodes)
The last red node. (initially the start red node)
The set of blue nodes since last red node. (initially empty)
The set of nodes with a count of 1. (initially empty)
The set of nodes with a count of 2. (initially empty)
The algorithm starts with a single node, then expands adjacent nodes using BFS or DFS; this repeats until the result is a valid tour or the node to be expanded is rejected. So the basic pseudo-ish code (tracking the current path and remaining red nodes) looks something like below, where rn is the set of red nodes, t is the list of valid tours, p/p2 is a path of nodes, r/r2 is a set of red nodes, v is the node to be expanded, and a is a possible node to expand to.
function PATHS2HOME(G, rn)
    create a queue Q
    create a list t
    p ← empty list
    v ← rn.pop()
    r ← rn
    add v to p
    Q.enqueue((p, r))
    while Q is not empty
        p, r ← Q.dequeue()
        if r is empty and the first and last elements of p are the same
            add p to t
        else
            v ← last element of p
            for all vertices a in G.adjacentVertices(v) do
                if canExpand(p, a)
                    p2 ← copy(p)
                    r2 ← copy(r)
                    add a to the end of p2
                    if isRedNode(a) and a in r2
                        remove a from r2
                    Q.enqueue((p2, r2))
    return t
The following conditions prevent expansion of a node. May not be a complete list.
Red nodes:
If it is in the set of nodes that have a count of 2. This is because the red node would have been used more than twice.
If it is equal to the last red node. This prevents "odd" tours when a red node is adjacent to three or more blue nodes. Say the red node A is adjacent to blue nodes b, c and d; you would otherwise get a tour where part of it looks like b-A-c-A-d.
Blue nodes:
If it is in the set of nodes that have a count of 2. This is because the blue node would have been used more than twice.
If it is in the set of blue nodes since last red node. This is because it would cause a cycle of blue nodes between red nodes.
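The pseudocode and rejection rules above can be turned into a small runnable sketch (JavaScript here; the names are my own). Only the two required pieces of state (current path, remaining red nodes) are carried in the queue; the count and since-last-red checks are recomputed from the path:

```javascript
// adj maps each node to its neighbours; redNodes[0] is the starting node.
// Returns every tour that starts and ends at the start node, visits all
// red nodes, and respects the rejection rules described above.
function findTours(adj, redNodes) {
  const reds = new Set(redNodes);
  const tours = [];
  // Each queue entry: [path so far, set of red nodes still to visit].
  const queue = [[[redNodes[0]], new Set(redNodes.slice(1))]];
  while (queue.length > 0) {
    const [path, remaining] = queue.shift();
    const last = path[path.length - 1];
    if (remaining.size === 0 && path.length > 1 && path[0] === last) {
      tours.push(path); // back at the start with every red node visited
      continue;
    }
    for (const a of adj[last]) {
      // Reject any node already used twice.
      if (path.filter(n => n === a).length >= 2) continue;
      if (reds.has(a)) {
        // Reject a red node equal to the last red node (avoids b-A-c-A-d).
        const lastRed = [...path].reverse().find(n => reds.has(n));
        if (a === lastRed) continue;
      } else {
        // Reject a blue node already seen since the last red node.
        let seen = false;
        for (let i = path.length - 1; i >= 0 && !reds.has(path[i]); i--) {
          if (path[i] === a) seen = true;
        }
        if (seen) continue;
      }
      const remaining2 = new Set(remaining);
      remaining2.delete(a);
      queue.push([path.concat(a), remaining2]);
    }
  }
  return tours;
}
```

Swapping `queue.shift()` for `queue.pop()` gives the depth-first variant mentioned in the optimizations below.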
Possible optimizations:
You could map out the paths between red nodes and use that to build something like a suffix tree showing which red nodes can be reached from a given path prefix. The benefit here is that you avoid expanding a node if that expansion only leads to red nodes that have already been visited twice; thus this check is only useful once at least 1 red node has been visited twice.
Use a parallel version of the algorithm. A single thread could be accessing the queue, and there is no interaction between elements in the queue, though I suspect there are probably better ways. It may be possible to cut the runtime down to seconds instead of hundreds of seconds, although that depends on the level of parallelization and efficiency. You could also apply this to the previous algorithm. Actually the reasons for which I switched to using this algorithm are pretty much negated by
You could use a stack instead of a queue. The main benefit here is that by using a depth-first approach, the size of the queue should remain fairly small.

Autolayout constraints between two dynamically resizing views

Premise
I have a superview C that simply contains two subviews A and B. ASCII art:
+-----------+
| view A |
| view B |
+-----------+
Here's what I want:
A's top must be pinned to C's top. A's height is not pinned to anything; it actually changes depending on its contents: it's a scrollview-less NSTextView.
B's top must always be 10 pixels from A's bottom. B's bottom must always be pinned to C's bottom.
The entire view C should be split between A and B. The division between A and B must be determined by A's current height (which is decided by the NSTextView), and B should fill any remaining space not taken by A.
In other words: A stays at the top. B fills out the rest of the superview. As A grows, B is pushed downwards.
The problem
Interface Builder always creates an undeletable constraint that pins B's top to C's top. This means that B will always be positioned at a specific Y position. If I give B a height constraint this doesn't happen, but that is not what I want.
I have tried implementing the superview's updateConstraints to delete this IB-generated constraint. That sort of works, but when I do this, B's top is never adjusted and seems to be set arbitrarily. It doesn't matter what I set the constraint priority to; B ends up positioned either at the bottom of C, or at the top, or somewhere far off screen. Also, A seems to grow to fill the entirety of C.
Here is the auto-created constraint I can't get rid of:
Additional details
I should add that C is a cell view in a view-based NSTableView. I calculate the required height to fit A and B in tableView:heightOfRow, and expect the constraints to lay everything out.
Answer for posterity: having a configuration like I described is apparently not possible with constraints. My solution so far, which works:
Create a constraint on A that sets a specific height. (In IB, I set a dummy height.)
Create a constraint on B that sets a specific top.
Don't specify a vertical spacing between A and B. (At least in my case this triggered weird behaviour in NSTableView.)
In your controller or view code, compute A's height and set it using the constraint's constant property.
Also in your controller or view code, compute B's top (using A's height) and set it using the constraint's constant property.