This is an example from elm-mdl:
Button.render Mdl [0] model.mdl
[ Button.raised
, Button.ripple
, Button.onClick PollMsg
]
[ text "Fetch new" ]
The index here is [0]. I assume that if I am building a single-page application I can use indexes like ["page0", 0]. Is that right?
But does that mean that all the mdl data for all the buttons in the application is in memory? How does that work?
elm-mdl uses these identifiers for its internal handling. In particular, these ids are the reason you can get away with such simple calls to update and view.
As #pierrebeitz already explained, these integers reside in the application memory, but that's not a problem (even if you had hundreds of buttons).
As for working with elm-mdl in larger contexts, you have at least these two options available:
Use number combinations like [0, 0], [0, 1], and so on. This is particularly useful if you generate multiple similar elm-mdl components (e.g. radio buttons) in a loop: the first index can be hard-coded, while the second stems from the loop variable.
For the overall picture, keep in mind that these integers are local to your actual elm-mdl Model instance. In other words, creating a separate Model allows you to reuse the integers. This means you can use TEA components, where each component has its own model, including its own elm-mdl Model. Since each component has free rein over the identifiers it uses, you have an easier time not mixing them up.
All that being said, I consider these indexes the worst part of elm-mdl. I would really appreciate it if they could be removed (without overly complicating the resulting code, of course), but for the time being I consider them the price tag for an otherwise awesome material design library.
I have been trying to wrap my head around how ECS works when there are components which are shared or dependent. I've read numerous articles on ECS and can't seem to find a definitive answer to this.
Assume the following scenario:
I have an entity which has a ModelComponent (or MeshComponent), a PositionComponent and a ParticlesComponent (or EmitterComponent).
The ModelRenderSystem needs both the ModelComponent and the PositionComponent.
The ParticleRenderSystem needs ParticlesComponent and the PositionComponent.
In the ModelRenderSystem, for cache efficiency/locality, I would like to run through all the ModelComponents, which sit in a compact array, and render them; however, for each model I need to pull in the corresponding PositionComponent. I haven't even started thinking about how to deal with the textures, shaders, etc. for each model (which will definitely blow the cache).
There is a similar issue with the ParticleRenderSystem: I need both the ParticlesComponent and the PositionComponent, and I want to be able to run through all ParticlesComponents in a cache-friendly manner.
I considered having the ModelComponent and ParticlesComponent each hold their own position, but then they would need to be synced every time the model's position changes (imagine a particle effect on a character). This adds another entity or component that needs to track and sync components or values (and potentially negates any cache efficiency).
How does everyone else handle these kinds of dependency issues?
One way to reduce the complexity could be to invert the flow of data.
Consider giving your ModelRenderSystem a listener callback that allows the entity framework to inform it when an entity containing both a position and a model component is added to the simulation. During this callback, the system could register a callback on the position component (or on the system that owns that component), so that the ModelRenderSystem is informed whenever that position object changes.
As the position-change events come in, the ModelRenderSystem can queue up a list of modifications it must replicate during its update phase. During update, it is then simply a matter of looking up each modification's model and setting the position to the value in the event.
The benefit is that, per frame, you only ever replicate position changes that actually occurred during that frame, and you minimize the lookups needed to replicate the data. While propagating position updates to the various systems of interest may not be as cache friendly, the gains you observe elsewhere outweigh that.
Lastly, don't forget that systems do not necessarily need to iterate over the components themselves. The components in your entity system exist to let you toggle pluggable behavior easily. A system can always manage a more cache-friendly data structure of its own, and the callback approach above lets you do that and handle data replication easily, with minimal coupling.
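To make that concrete, here is a minimal TypeScript sketch of the callback-plus-queue pattern; all names and interfaces are hypothetical, since the answer doesn't prescribe an API:

// Hypothetical sketch: a system subscribes to position changes and drains
// a queue of deltas once per frame instead of looking everything up.
type EntityId = number;

interface PositionComponent {
  x: number; y: number; z: number;
  // Whoever mutates the position is expected to fire these callbacks.
  onChanged: ((id: EntityId, p: PositionComponent) => void)[];
}

interface ModelComponent { meshId: number; }

class ModelRenderSystem {
  // System-private, compact copies of the data it renders from.
  private positions = new Map<EntityId, { x: number; y: number; z: number }>();
  private models = new Map<EntityId, ModelComponent>();
  // Changes queued between updates; drained once per frame.
  private pending: { id: EntityId; x: number; y: number; z: number }[] = [];

  // Called by the entity framework when an entity with both components appears.
  onEntityAdded(id: EntityId, model: ModelComponent, pos: PositionComponent): void {
    this.models.set(id, model);
    this.positions.set(id, { x: pos.x, y: pos.y, z: pos.z });
    pos.onChanged.push((entity, p) =>
      this.pending.push({ id: entity, x: p.x, y: p.y, z: p.z }));
  }

  update(): void {
    // Replicate only what actually changed this frame.
    for (const change of this.pending) {
      const local = this.positions.get(change.id);
      if (local) { local.x = change.x; local.y = change.y; local.z = change.z; }
    }
    this.pending.length = 0;
    // ...then iterate this.models / this.positions contiguously to render.
  }
}

The system keeps its own cache-friendly copies of the data it draws from, so the per-frame render loop stays tight while the queue carries only the deltas.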
I have a project in which I am organizing my variables/tags using categories like "PartA", "PartB", "Data", "HMI", and of course the requisite "Debug".
So a few examples of random tags would be:
Debug.ReadWriteTimer
HMI.ReportViewerMode
Data.IndexResult
Data.ActiveDirectory
PartA.InspectionResult
But I have several variables that I am using across the program as logistical devices, such as counters, indices, and (non-debug) timers, that don't really fit into the few categories I listed above.
I've considered the following, but none of them seems to fit either:
Global.tagname
Program.tagname
Devices.tagname
What is a clear and logical naming convention for program-level "tools" like these that would be instantly recognized by someone looking over the tag database for the first time?
(Context for the curious: this particular project is created using a machine vision software called Cognex Designer, which utilizes the C# language in an interface that is the illegitimate child of RSLogix and LabVIEW.)
misc, short for miscellaneous, is (or was) often used to categorize items that couldn't be put into other categories.
That is, if you must use a category at all; otherwise, the very lack of a category perfectly describes the miscellaneous nature of a variable.
I've decided to use "App", short for Application, as the category for these items. I believe it's clearer than "Program", it isn't as easily confused with a scope (like "Global"), and the abbreviation helps avoid confusion with .NET's Application object.
There is a note on the cytoscape.js website that says:
"Note that a collection is immutible by default, meaning that the set of elements within a collection can not be changed. The API returns a new collection with different elements when necessary, instead of mutating the existing collection. This allows the developer to safely use set theory operations on collections, use collections functionally, and so on."
Does this mean it is not really suitable for building an online 'network editor', i.e. one where the user can interactively add and delete nodes and edges in the existing graph?
If I understand the note above correctly, adding a new node would mean reconstructing the whole graph from scratch (but with the new node) and then presumably performing a complete redraw. Is this correct?
A collection is a set of elements; the set merely points to the individual elements. You can think of it like an array: the array just holds references to the elements, and different arrays/sets can hold different, similar, or overlapping elements.
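For example, here is a small TypeScript sketch against the standard cytoscape.js API (the element ids are made up) showing that collection operations return new collections instead of mutating:

import cytoscape from 'cytoscape';

const cy = cytoscape({
  elements: [
    { data: { id: 'a' } },
    { data: { id: 'b' } },
    { data: { id: 'ab', source: 'a', target: 'b' } },
  ],
});

const a = cy.$('#a');               // collection holding node a
const both = a.union(cy.$('#b'));   // a NEW collection; `a` itself is unchanged
console.log(a.length, both.length); // 1 2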
Cytoscape.js is very suitable for the purpose you mention. There are already projects that have live, collaborative editors (similar to Google Docs or online office suites, but for graphs). For example, a simple one that I created is codenamed "Factoid", for biological processes. Though I really think it ought to have a better, more accurate name, you can still look through its code for a live-collaboration example with Cytoscape.js. Because you can listen to events easily, it's relatively straightforward to send diffs (or even just events) back and forth between the server and the client.
Adding an element is inexpensive: it just adds the single element and redraws if opportune. It's even cheaper with cy.batch() when modifying lots of elements in a row.
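Continuing the sketch above (again using the standard cytoscape.js API; ids are illustrative), an editor can mutate the graph incrementally and listen for changes:

cy.add({ data: { id: 'c' } });   // adds one node; no rebuild of the graph
cy.$('#c').remove();             // removes it again

// Batch many modifications so the graph is redrawn only once at the end.
cy.batch(() => {
  for (let i = 0; i < 100; i++) {
    cy.add({ data: { id: `n${i}` } });
  }
});

// Listen for changes, e.g. to ship diffs to a server in a collaborative editor.
cy.on('add remove', (evt) => {
  console.log(evt.target.id(), 'was added or removed');
});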
I am writing 3D geometry visualization software for schools. I am designing my engine as an Entity-Component system, because it has served me well in games. In this case I have some specific requirements:
There is a limited number of different geometries I need to render, and I would like to render them in batches: all lines as one batch, all triangles as one batch, all planes as one batch, and so on. This works well even with transparent objects, since I am using depth peeling and don't need to sort them by distance.
One logical object will typically have more than one mesh associated with it: e.g. a plane entity has a border "child" entity whose body consists of four lines, and these lines all share the same material.
I would like to have a clean design, so I am trying to stay true to the no-code-in-components principle and same-structure for one type of components.
What I have now is a different component type for each type of geometry (point, line, plane, ...). The corresponding system stores a batch with a mesh plus instance data and renders it in one draw call. The instance data differs between geometry types, hence I decided to go with one component type per geometry type. (A bad design?)
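Roughly, that current design could be sketched like this in TypeScript (all names are illustrative, not from the actual engine):

// One component type per geometry type, each with its own instance data.
type EntityId = number;
type Vec3 = [number, number, number];

interface LineComponent { start: Vec3; end: Vec3; }

class LineRenderSystem {
  private readonly lines = new Map<EntityId, LineComponent>();

  add(id: EntityId, line: LineComponent): void { this.lines.set(id, line); }

  render(): void {
    // Pack all instance data contiguously, then issue a single draw call.
    const instanceData = new Float32Array(this.lines.size * 6);
    let offset = 0;
    for (const line of this.lines.values()) {
      instanceData.set(line.start, offset);
      instanceData.set(line.end, offset + 3);
      offset += 6;
    }
    // drawInstanced(lineMesh, instanceData); // hypothetical one-draw-call batch
  }
}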
Question:
Now I'm wondering how to handle entities that seem to need multiple components of the same type, like the plane border, whose body consists of four lines.
I can think of several solutions, which all have drawbacks:
1. Make each line of the border entity an entity itself. Each would have a "line" component and a "child" component. That would model the border and the lines as five entities, with the four lines attached to the border entity via the "child" component. This seems like quite a waste of entities; some special entities would then have several dozen children.
2. Allow the border entity to have multiple components of the "line" type. This seems like a hack, since all the ECS articles I've seen discourage using multiple components of the same type on one entity.
3. Make a unified "geometry" component that may contain an arbitrary number of elementary geometries. That would introduce some indirection, but it seems like the best solution to me at the moment (see the sketch after this list).
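A hypothetical TypeScript sketch of what option 3 might look like (the type names are made up for illustration):

// One "geometry" component holding a list of elementary geometries,
// so an entity needs only a single component instance.
type Vec3 = [number, number, number];

type ElementaryGeometry =
  | { kind: 'line'; start: Vec3; end: Vec3 }
  | { kind: 'plane'; center: Vec3; normal: Vec3 };

interface GeometryComponent {
  parts: ElementaryGeometry[]; // e.g. the four border lines of a plane entity
}

// A render system can still batch per primitive kind by flattening the parts
// of all geometry components into per-kind arrays before drawing.
function collectLines(components: Iterable<GeometryComponent>): ElementaryGeometry[] {
  const lines: ElementaryGeometry[] = [];
  for (const c of components) {
    for (const part of c.parts) {
      if (part.kind === 'line') lines.push(part);
    }
  }
  return lines;
}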
Could someone help me sort these chaotic thoughts into a good solution? I'm sure I'm missing a straightforward approach, but I just haven't found one yet.
I have a lot of experience in programming (10+ years), but I only recently started with Entity-Component systems, so it seems I'm still struggling with the concept.
Thank you very much.
I have an application that allows the user to drill down through data from a single large table with many columns. It works like this:
There is a list of distinct top-level table values on the screen.
The user clicks on one, and the list changes to the distinct next-level values for whatever was clicked.
The user clicks on one of those values and is taken to the third-level values, and so on.
There are about 50 attributes they could go through, but it usually ends up being only 3 or 4. Since those 3 or 4 vary among the 50 possible attributes, I have to persist the selections in the browser. Right now I do it with a hideous and bulky hidden form. It works, but it is delicate and suboptimal: for it to work, the value of whichever level's attribute is on the screen is written into the appropriate place in the hidden form on the click event, and then a jQuery Ajax POST submits the form. Ugly.
I have also looked at Backbone.js, but I don't want to roll another toolkit into this project while there may be some simple convention I'm missing. Is there a standard Rails way of doing something like this, or just some better way, period?
Possible Approaches to Single-Table Drill-Down
If you want to perform column selections from a single table with a large set of columns, there are a few basic approaches you might consider.
1. Use a client-side JavaScript library to show or hide columns on demand. For example, you might use DataTables to dynamically adjust which columns are displayed based on the last value (or set of values) selected (see the sketch after this list).
2. Use a form in your views to pass the relevant column names into the session or the params hash, and inspect those values to decide which columns to render in the view when drilling down to the next level.
3. Include a list of the columns of interest in your next server-side request, and have your controller use those column names to build a custom query using SELECT or #pluck. Such queries often involve tainted input, so sanitize it thoroughly and handle with care!
4. If your database supports views, users could select pre-defined or dynamic views from the next controller action, which may or may not be more performant. It's at least an idea worth pursuing, but you'd have to benchmark it carefully and make sure you don't end up with SQL injection or an unmanageable number of pre-defined views to maintain.
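As a minimal sketch of option 1 (browser-side TypeScript, assuming jQuery and DataTables are already loaded on the page; the table id and column indexes are invented):

declare const $: any; // jQuery with the DataTables plugin applied

const table = $('#drilldown-table').DataTable();

// Show only the columns relevant to the current drill-down level,
// instead of round-tripping a hidden form to the server.
function showLevel(visibleColumns: number[]): void {
  table.columns().every(function (this: any, index: number) {
    this.visible(visibleColumns.includes(index));
  });
}

showLevel([0, 3, 7]); // e.g. after the user's first selection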
Some Caveats
There are generally trade-offs between memory and latency when deciding whether to handle this sort of feature client-side or server-side. It's also worth revisiting the business logic behind having a huge denormalized table, and investigating whether the problem domain can be broken down into a more manageable set of RESTful resources.
Another thing to consider is that Rails won't stop you from doing things that violate the basic resource-oriented MVC pattern. Your question implies that you don't have a canonical representation for each data resource, and approaching Rails this way often increases complexity. If that complexity is truly necessary to meet your application's requirements, that's fine, but I'd certainly recommend carefully assessing your fundamental design goals to see whether the functional trade-offs and long-term maintenance burdens are worth it.
I've found questions similar to yours on Stack Overflow; there doesn't appear to be an API or style that anyone mentions for persisting across requests. The best you can do seems to be storage in classes or some iteration on what you're already doing:
1) Persistence in memory between sessions/requests
2) Coping with request persistence design-wise
3) Using class caching