What is a proper way to separate data structure logic from its graphical representation?

It's more of a software design question than strictly a programming one, so I'll paste UML diagrams instead of code for everyone's convenience.
Language is Java, so variables are implied references.
I'm writing an app that helps edit a very simple data structure, which looks like this:
On my first attempt I included all the drawing-related and optimization code in the data structure; that is, every Node knew how to draw itself and kept a reference to one of the shared cached bitmaps. UML:
It was easy to add (fetch the corresponding bitmap and you're done) and remove (paint the background color over the previously mentioned bitmap). Performance-wise it was nice, but code-wise it was messy.
So on the next iteration I decided to split things, but I may have gone too far and things got messy yet again:
Here the data structure and its logic are completely separated, which is nice. I can easily load it from a file or manipulate it in some way before it needs to be drawn, but when it comes to drawing, things get uncomfortable.
The classic way would be to change the data and then call invalidate() on the drawing wrapper, but that's inefficient for many small changes. So to, say, delete one Tile, I'd have to either make the drawn representation independent of the data and call deleteTile() on both separately, or funnel all commands to the data through the drawing class. Things get even messier when I try to add different drawing methods via the Strategy pattern or some other means. The horror:
What is a clean, efficient way to organize interactions between Model and View?

First, definitely decouple the app logic from the UI. Make a model for your schematic; that will solve your trouble with unit testing the app model, as you already said. Then I would try the Observer pattern. But given that a schematic can have lots and lots of graphical components (your Tiles), I would change the usual setup of notifying every observer when something changes in the model to notifying only the corresponding GraphicalComponent (Tile) when a component changes in the Model. Your UI asks the Model to do things and gets called back in the relevant parts to update. This is automatic: no duplicated calls, just the initial observer registration when a GraphicalComponent is created.
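A minimal Java sketch of that setup might look like this (Tile, TileListener, and TileView are illustrative names, not taken from the original design):

interface TileListener {
    void tileChanged(Tile tile);
}

class Tile {
    private int value;
    private TileListener listener; // at most one graphical observer per Tile

    void setListener(TileListener listener) {
        this.listener = listener; // registered once, when the view is created
    }

    void setValue(int value) {
        this.value = value;             // mutate model state
        if (listener != null) {
            listener.tileChanged(this); // notify only this tile's own view
        }
    }
}

class TileView implements TileListener {
    TileView(Tile tile) {
        tile.setListener(this); // the initial observer registration
    }

    @Override
    public void tileChanged(Tile tile) {
        // Repaint just this tile's region/cached bitmap,
        // instead of invalidating the whole drawing.
    }
}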

ECS / CES shared and dependent components and cache locality

I have been trying to wrap my head around how ECS works when there are components which are shared or dependent. I've read numerous articles on ECS and can't seem to find a definitive answer to this.
Assume the following scenario:
I have an entity which has a ModelComponent (or MeshComponent), a PositionComponent and a ParticlesComponent (or EmitterComponent).
The ModelRenderSystem needs both the ModelComponent and the PositionComponent.
The ParticleRenderSystem needs ParticlesComponent and the PositionComponent.
In the ModelRenderSystem, for cache efficiency/locality, I would like to run through all the ModelComponents, which are in a compact array, and render them; however, for each model I need to pull in its PositionComponent. I haven't even started thinking about how to deal with the textures, shaders, etc. for each model (which will definitely blow the cache).
A similar issue arises with the ParticleRenderSystem: I need both the ParticlesComponent and the PositionComponent, and I want to be able to run through all ParticlesComponents in a cache-efficient/friendly manner.
I considered having ModelComponent and ParticlesComponent each hold their own position, but then they need to be synced every time the model's position changes (imagine a particle effect on a character). This adds another entity or component which needs to track and sync components or values (and potentially negates any cache efficiency).
How does everyone else handle these kinds of dependency issues?
One way to reduce the complexity could be to invert the flow of data.
Consider that your ModelRenderSystem has a listener callback that allows the entity framework to inform it that an entity containing both a position and a model component has been added to the simulation. During this callback, the system could register a callback on the position component (or on the system that owns that component), allowing the ModelRenderSystem to be informed when that position object changes.
As the change events from the position come in, the ModelRenderSystem can queue up a list of modifications it must replicate during its update phase; then, during update, it's really just a matter of looking up each modification's model and setting the position to the value in the event.
The benefit is that per frame, you're only ever replicating position changes that actually happened during the frame, and you minimize the lookups needed to replicate the data. While propagating the position update to the various systems of interest may not be as cache friendly, the gains you observe elsewhere outweigh that.
Lastly, don't forget that systems do not necessarily need to iterate over the components proper. The components in your entity system exist to let you toggle pluggable behavior easily. The systems can always manage a more cache-friendly data structure of their own, and the callback approach above allows you to do that and manage data replication easily with minimal coupling.
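To make that concrete, here is a minimal Java sketch of the callback-plus-queue idea (all names are illustrative, not from any particular ECS framework):

import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;
import java.util.function.Consumer;

class PositionComponent {
    float x, y;
    private final java.util.List<Consumer<PositionComponent>> listeners = new java.util.ArrayList<>();

    void onChange(Consumer<PositionComponent> listener) { listeners.add(listener); }

    void set(float x, float y) {
        this.x = x;
        this.y = y;
        for (Consumer<PositionComponent> l : listeners) l.accept(this); // push the change out
    }
}

class ModelRenderSystem {
    private static final class PendingMove {
        final int entityId; final float x, y;
        PendingMove(int entityId, float x, float y) { this.entityId = entityId; this.x = x; this.y = y; }
    }

    // Cache-friendly copy of the data this system actually iterates over.
    private final Map<Integer, float[]> renderPositions = new HashMap<>();
    private final Queue<PendingMove> pending = new ArrayDeque<>();

    // Called by the entity framework when an entity with both a model and a
    // position component enters the simulation.
    void entityAdded(int entityId, PositionComponent position) {
        renderPositions.put(entityId, new float[] { position.x, position.y });
        position.onChange(p -> pending.add(new PendingMove(entityId, p.x, p.y)));
    }

    void update() {
        // Only positions that actually changed this frame are replicated.
        for (PendingMove move; (move = pending.poll()) != null; ) {
            float[] dst = renderPositions.get(move.entityId);
            dst[0] = move.x;
            dst[1] = move.y;
        }
        // ...iterate renderPositions (or a tighter array) to render...
    }
}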

How to quickly analyse the impact of a program change?

Lately I've needed to do an impact analysis for changing a DB column definition on a widely used table (like PRODUCT, USER, etc.). I find it a very time-consuming, boring, and difficult task. Is there any known methodology for doing this?
The question also applies to changes to the application, file system, search engine, etc. At first I thought this kind of functional relationship should be documented up front or somehow kept track of, but then I realized that since everything can change, it would be impossible to do so.
I don't even know what tags this question should have; please help.
Sorry for my poor English.
Sure. One can at least know, technically, what code touches the DB column (reads or writes it) by determining program slices.
Methodology:
1. Find all SQL code elements in your sources.
2. Determine which ones touch the column in question. (Careful: SELECT * may touch your column, so you need to know the schema.)
3. Determine which variables read or write that column.
4. Follow those variables wherever they go, and determine the code and variables they affect; follow all those variables too. (This amounts to computing a forward slice.)
5. Likewise, find the sources of the variables used to fill the column; follow them back to their code and sources, and follow those variables too. (This amounts to computing a backward slice.)
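As a toy illustration of what ends up in a forward slice (hypothetical code; PRODUCT.PRICE is a made-up column):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

class PriceReport {
    // Forward slice of PRODUCT.PRICE: every statement the value read from the
    // column can influence belongs to the slice.
    double totalPrice(Connection conn) throws SQLException {
        double total = 0;
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT PRICE FROM PRODUCT")) {
            while (rs.next()) {
                double price = rs.getDouble("PRICE"); // column value enters 'price'
                total += price;                       // 'price' flows into 'total'
            }
        }
        return total; // callers of totalPrice() join the forward slice too
    }
}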
All the elements of the slice are potentially affecting, or affected by, the change. There may be conditions in the slice-selected code that are clearly outside the conditions expected by your new use case, and you can eliminate that code from consideration. Everything else in the slices you may have to inspect/modify to make your change.
Now, your change may affect some other code (e.g., a new place that uses the DB column, or code that combines the value from the DB column with some other value). You'll want to inspect the up- and downstream slices of the code you change, too.
You can apply this process to any change you might make to the code base, not just DB columns.
Manually this is not easy to do in a big code base, and it certainly isn't quick. There is some automation that can do this for C and C++ code, but not much for other languages.
You can get a rough approximation by running test cases that involve your desired variable or action and inspecting the test coverage. (Your approximation gets better if you also run test cases you are sure do NOT involve your desired variable or action, and eliminate all the code they cover.)
Ultimately this task cannot be fully automated or reduced to an algorithm; otherwise there would be a tool to preview refactored changes. The better the code was written in the first place, the easier the task.
Let me explain how to reach the answer: isolation is the key. Mapping everything to object properties can help you automate your review.
I can give you an example. If you can manage to map your specific case to the below, it will save your life.
The OR/M change pattern
Like Hibernate or Entity Framework...
A change to a database column may be simply previewed by analysing what code uses a certain object's property. Since all DB columns are mapped to object properties, and assuming no code uses pure SQL, you are good to go for your estimate.
This is a very simple pattern for change management.
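For example, with JPA-style annotations the mapping might look like this (a sketch; the entity and column names are hypothetical):

import java.math.BigDecimal;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Product {
    @Id
    private Long id;

    // The PRICE column is reachable only through this property, so the IDE's
    // "find usages" (or a trial rename) on unitPrice previews the full impact
    // of changing the column.
    @Column(name = "PRICE")
    private BigDecimal unitPrice;

    public BigDecimal getUnitPrice() { return unitPrice; }
    public void setUnitPrice(BigDecimal unitPrice) { this.unitPrice = unitPrice; }
}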
In order to reduce a file system/network or data file issue to the above pattern, you need other software patterns implemented. I mean, if you can reduce a complex scenario to a change in your objects' properties, you can leverage your IDE to detect the changes for you, including code that needs a slight modification to compile or needs to be rewritten entirely.
If you want to manage a change in a remote service when you initially write your software, wrap that service in an interface so you will only have to modify its implementation (see the sketch after this list)
If you want to manage a possible change in a data file format (e.g. a length-of-field change in a positional format, or column reordering), write a service that maps that file to an object (like using the BeanIO parser)
If you want to manage a possible change in file system paths, design your application to use more runtime variables
If you want to manage a possible change in cryptography algorithms, wrap them in services (e.g. HashService, CryptoService, SignService)
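A sketch of the interface-wrapping item above (the service name and method are invented):

// The rest of the code depends only on this interface.
interface ExchangeRateService {
    double rateFor(String currencyCode);
}

// If the remote API changes, only this implementation is affected;
// the compiler/IDE confirms nothing else needs to change.
class RemoteExchangeRateService implements ExchangeRateService {
    @Override
    public double rateFor(String currencyCode) {
        // ...call the remote endpoint and parse the response...
        return 1.0; // placeholder value
    }
}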
If you do the above, your manual requirements review will be easier, because while the overall task is manual, it can be aided by automated tools. You can try changing the name of a class's property and see its side effects in the compiler.
Worst case
Obviously, if you need to change the name, type, and length of a specific column in a database, in software with plain SQL hardcoded and scattered in multiple places around the code, where (worse) many tables have similar column names, and with no project documentation (I did say worst case, right?) across a total of 10,000+ classes, you have no other way than manually exploring your project, using find tools but not relying on them.
And if you don't have a test plan, which is the document from which you can hope to derive a software test suite, it will be time to make one.
Just adding my 2 cents. I'm assuming you're working in a production environment, so there must be some form of unit tests, integration tests, and system tests already written.
If yes, then a good way to validate your changes is to run all these tests again and create any new tests which might be necessary.
And to state the obvious, do not integrate your code changes into the main production code base without running these tests.
Then again, changes which worked fine in a test environment may not work in a production environment.
Have some form of source code configuration management system like Subversion, GitHub, CVS, etc.
This enables you to roll back your changes.

Does it make sense to allow retrieval of data from OpenGL's context

I am trying to abstract some of OpenGL's concepts into an object-oriented style, wrapping elements like buffers, arrays, and vertices in objects that save their access ids, data types, buffer sizes, used indices, etc., and provide further simplifications of their usage.
Though right now I wondered: does anyone actually want to re-access data that was once pushed to the GPU? Are functions like glGetBufferSubData ever actually used for anything other than debugging? The documentation of these functions on the official wiki isn't very elaborate, and I have never seen them in any tutorial.
The general concept of GL is that everything can be queried. Reading back data that you yourself put there should be avoided and is usually more expensive than keeping a local copy. However, there is also data generated by the GPU which you might read back. Examples of this are, of course, framebuffer contents, textures you rendered into, or vertex data which you stored via transform feedback into a buffer. So yes, there are real use cases for things like glGetBufferSubData() (although I prefer buffer mappings in most situations).
Whether you need support for such operations is another matter entirely, and one which I think is off-topic here and primarily opinion-based. The problem with abstractions built without the intended use case in mind is that one tends to over-abstract things. YMMV.
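For illustration, such a readback might look like this in Java with LWJGL (a sketch under assumptions: a current GL context, and a buffer id and float count supplied by the caller; LWJGL is my choice of binding, not the question's):

import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.GL15;

public final class BufferReadback {
    // Copies floatCount floats out of a buffer object, e.g. one filled
    // via transform feedback.
    public static float[] read(int bufferId, int floatCount) {
        FloatBuffer dst = BufferUtils.createFloatBuffer(floatCount);
        GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, bufferId);
        GL15.glGetBufferSubData(GL15.GL_ARRAY_BUFFER, 0, dst);
        float[] out = new float[floatCount];
        dst.get(out);
        return out;
    }
}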
I wrote a program to generate meshes using transform feedback, and needed to read the data in buffers to save the resulting mesh.
The transform feedback generated the data. It wasn't data that I originally pushed there.
So, yes.

Rails 3 - Process text input to create multiple models

In the app I'm working on, Courses have many Problems, which in turn have many Steps. Right now there is a form for adding Problems to Courses (and then Steps can be added to those problems). What we want is to have a form that just has a field for LaTeX input, and then process the TeX to create multiple problems with their steps.
At the moment, we're doing this all in the Problems controller. We have two methods: texnew, which is identical to new except that it has a different view, which submits to the other method, texcreate; that method uses helper methods to extract the problems and steps (using a series of regexes), tries to create them, and flashes somewhat informative error messages if something goes wrong.
The thing is, I keep on reading that we're really not supposed to be doing a bunch of stuff in the controller, and we should favor doing things in the model instead. Virtual attributes might be the right idea for taking in a text field and processing it to create a single problem, but I can't figure out how to make it work for multiple problems, or how to generate any sort of error messages if something goes wrong somewhere along the way.
Is there some better/more idiomatic way to do this?
You don't really need virtual attributes for this if all your relationships are set up properly. You can use the new Rails 3 nested attributes. There is a good article on them here. This will allow you to rely more on model validation logic and keep the lean-controller, fat-model idiom that Rails encourages.

How do I serialise Lambdas and Event delegates when Tombstoning on the Windows Phone 7?

I've been using the Game State Management sample which has worked so far. I've hit a snag though: when Tombstoning, the screens are serialised; the only trouble is, the MessageBoxScreen has event handlers for Accepted and Cancelled.
What's the best way to serialise these? I did a bit of research on using Expression Trees but this seemed overly complex for what I wanted to do.
How do you serialise these? Or... What alternative approach do you use to save the state of a screen that contains delegates?
I'd definitely steer clear of attempting to serialize anything remotely resembling a lambda, or for that matter, named methods. Remember: you're storing state, and nothing else.
Depending on how far and wide your various assignments to these delegates are, you might be able to get away with maintaining a Dictionary<String, WhateverDelegateType>, serializing the keys, and looking up the callbacks after deserialization.
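A language-neutral sketch of that idea, shown here in Java (the registry and key names are invented):

import java.util.HashMap;
import java.util.Map;

public final class CallbackRegistry {
    private static final Map<String, Runnable> CALLBACKS = new HashMap<>();

    // Re-register every known callback at startup, before any state is restored.
    public static void register(String key, Runnable callback) {
        CALLBACKS.put(key, callback);
    }

    // A screen serializes only the string key; after rehydration it looks the
    // callback back up instead of trying to deserialize a delegate.
    public static Runnable lookup(String key) {
        return CALLBACKS.get(key);
    }
}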
Another thing to consider: I'm no expert, but reading between the lines it sounds as if you're working towards tombstoning a very temporary modal dialog. Do you really want that? You might be better off bringing your user right to the high-scores table, or whatever follows your dialog, on his/her return.
I decided against this. Instead, I persist the game flow as a kind of 'flow chart'.
The flow chart is declared in code and has properties 'LastShape' and 'LastResultFromShape'.
In my code, I rebuild the flow chart definitions each time, something like this:
flowChart.AddShape( "ShowSplash" );
flowChart.AddLine( "MainMenu", () => lastResult == "Clicked" || lastResult == "TimedOut" );
flowChart.AddShape( "MainMenu" );
flowChart.AddLine( "ShowOptions", () => lastResult == "OptionsClicked" );
flowChart.AddLine( "ShowSplash", () => lastResult == "TimedOut" );
etc. etc.
The flow goes from the top down, so 'AddLine' relates to the last shape added.
After tombstoning, I just read the last shape and the last result and decide where to go in the flowchart based on that.