Does it make sense to allow retrieval of data from OpenGL's context?

I am trying to abstract some of OpenGL's concepts into an object-oriented style, wrapping elements like Buffers, Arrays, Vertices etc. into objects that save their access IDs, data types, buffer sizes, used indices etc. and provide further simplifications for their usage.
But just now I wondered: does anyone actually want to re-access this data once it was pushed to the GPU? Are functions like glGetBufferSubData ever actually used for anything other than debugging? The documentation of these functions on the official wiki isn't very elaborate, and I have never seen them in any tutorial.

GL follows the general concept that everything can be queried. Reading back data that you yourself put there should be avoided and is usually more expensive than keeping a local copy. However, there is also data generated by the GPU which you might want to read back. Examples of this are of course framebuffer contents, textures you rendered into, or vertex data which you stored via transform feedback into a buffer. So yes, there are real use cases for things like glGetBufferSubData() (although I prefer buffer mappings in most situations).
Whether you need support for such operations is another matter entirely, and one which I think is off-topic here and primarily opinion-based. The problem with abstractions built without the intended use case in mind is that one tends to over-abstract things. YMMV.
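For concreteness, a minimal sketch of such a readback, written against the go-gl bindings (github.com/go-gl/gl); it assumes a current GL context, a buffer already filled by a transform-feedback pass, and an illustrative function name:

package gpu

import "github.com/go-gl/gl/v3.3-core/gl"

// ReadBackFloats copies count float32 values out of a buffer that a
// transform-feedback pass has filled, into client memory.
func ReadBackFloats(buf uint32, count int) []float32 {
    out := make([]float32, count)
    gl.BindBuffer(gl.TRANSFORM_FEEDBACK_BUFFER, buf)
    // Explicit copy into our slice; gl.MapBufferRange would avoid the
    // extra copy and, as noted above, is often preferable.
    gl.GetBufferSubData(gl.TRANSFORM_FEEDBACK_BUFFER, 0, count*4, gl.Ptr(out))
    return out
}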

I wrote a program to generate meshes using transform feedback, and needed to read back the data in the buffers to save the resulting mesh.
The transform feedback generated the data. It wasn't data that I originally pushed there.
So, yes.

What is a proper way to separate data structure logic from its graphical representation?

It's more of a software design question than strictly a programming one, so I'll paste UML diagrams instead of code for everyone's convenience.
The language is Java, so variables are implicitly references.
I'm writing an app that helps edit a very simple data structure, which looks like this:
On my first attempt I included all the drawing-related and optimization code in the data structure; that is, every Node knew how to draw itself and kept a reference to one of several shared cached bitmaps. UML:
It was easy to add (fetch the corresponding bitmap and you're done) and to remove (paint the background color over the aforementioned bitmap). Performance-wise it was nice, but code-wise it was messy.
So on the next iteration I decided to split things up, but I may have gone too far, and things got messy yet again:
Here the data structure and its logic are completely separated, which is nice. I can easily load it from a file or manipulate it in some way before it needs to be drawn, but when it comes to drawing, things get uncomfortable.
The classic way would be to change the data and then call invalidate() on the drawing wrapper, but that's inefficient for many small changes. So to, say, delete one Tile, I'd have to either make the drawn representation independent of the data and call deleteTile() on both separately, or funnel all commands to the data through the drawing class. Things get even messier when I try to add different drawing methods via the Strategy pattern or otherwise. The horror:
What is a clean, efficient way to organize the interactions between Model and View?
First, definitely decouple the app logic from the UI. Make a model for your schematic; that will also let you unit-test the app model, as you already said. Then I would try the Observer pattern. But given that a schematic can have lots and lots of graphical components (your Tiles), I would change the usual setup of notifying every observer when something changes in the model to notifying only the corresponding GraphicalComponent (Tile) when a Component changes in the Model. Your UI asks the Model to do things and gets called back where needed to update. This is automatic, with no duplicated calls, just the initial observer registration when the GraphicalComponent is created.
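A minimal sketch of that per-component registration, with illustrative names (shown in Go, but the shape is the same in Java):

package schematic

// TileListener is what a GraphicalComponent implements to hear about
// changes to exactly one Tile in the model.
type TileListener interface {
    TileChanged(t *Tile)
}

type Tile struct {
    ID       int
    Kind     string
    listener TileListener // registered once, when its view is created
}

func (t *Tile) Register(l TileListener) { t.listener = l }

// SetKind mutates the model and notifies only this tile's view, so one
// small change never forces a full invalidate() of the drawing wrapper.
func (t *Tile) SetKind(kind string) {
    t.Kind = kind
    if t.listener != nil {
        t.listener.TileChanged(t)
    }
}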

Reassign an interface or allow GC to do its work on temporary variables

I'm very new to Go and am currently porting a PHP program.
I understand that Go is not a dynamically-typed language and I like that about it. It seems very structured and easy to keep track of everything.
But I've been coming across situations that seem to be a little ... ugly. Is there a better way of performing this sort of process:
plyr := builder.matchDetails.plyr[i]
plyrDetails := strings.Split(plyr, ",")
details := map[string]interface{}{
    "position": plyrDetails[0],
    "id":       plyrDetails[1],
    "xStart":   plyrDetails[2],
    "zStart":   plyrDetails[3],
}
EDIT:
Is there a better way to build a map containing the strings from plyr than creating two additional variables that are destroyed straight afterwards? Or is this the correct way?
tl;dr:
If possible, choose a different format and let a library do the string parsing/generation for you
Use structs rather than maps for anything you use a few times, for more compiler checks
The common way of using encoding/json accomplishes both of those.
Meanwhile, don't sweat performance too much: you'll probably vastly improve the old app's speed regardless; there's no indication yet that parsing speed or GC is a problem; and the syntactical differences you mentioned in the first revision of the post don't necessarily relate to GC at all.
So, I understand you may be porting piece-for-piece, and that may limit what you can change now.
But if/when you can change things, a really clean solution would be to use the encoding/json package and a struct: the json package will parse input and generate output from structs without any manual string manipulation on your part, and using a struct gives you compile-time checking, rather than only the runtime checking you get with a map. Lots of Go apps (and others) use JSON to expose their services.
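A hedged sketch of that approach; the struct fields mirror the question's map keys, and the JSON input is an assumed equivalent of the comma-separated player string:

package main

import (
    "encoding/json"
    "fmt"
)

// PlayerDetails replaces map[string]interface{}; every field access is
// now checked by the compiler.
type PlayerDetails struct {
    Position string `json:"position"`
    ID       string `json:"id"`
    XStart   string `json:"xStart"`
    ZStart   string `json:"zStart"`
}

func main() {
    in := []byte(`{"position":"GK","id":"7","xStart":"2.5","zStart":"3.0"}`)
    var p PlayerDetails
    if err := json.Unmarshal(in, &p); err != nil {
        panic(err)
    }
    fmt.Printf("%+v\n", p)
}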
An intermediate step could be to introduce struct types for any internal structure you use at least a few times, rather than maps, so even without updating the parsing, at least the internals of the app get the benefit of compile-time checking; structs are also what things like the gorm object/relational mapper expect to work with. They happen to use less memory than maps and are quicker (and syntactically more concise) to access, but those aren't necessarily the most important considerations here.
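That intermediate step can be as small as this, reusing the PlayerDetails struct from the sketch above and the existing strings.Split parsing:

import "strings"

// parsePlayer keeps the current comma-separated format but returns a
// compile-time-checked struct instead of a map[string]interface{}.
func parsePlayer(raw string) PlayerDetails {
    f := strings.Split(raw, ",")
    return PlayerDetails{Position: f[0], ID: f[1], XStart: f[2], ZStart: f[3]}
}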
On the performance of what you have now, and particularly whether different syntax would make it faster: don't sweat it, for a bunch of reasons: the port is likely to be faster than the PHP was whatever you do; we don't yet have any indication that parsing or GC is actually slow or your bottleneck; and the syntactical differences you talked about in the first revision of your question may not relate to GC much or at all. More or fewer variable names in your code don't necessarily correspond to more or fewer heap allocations, because Go can often allocate on the stack; this is briefly discussed under 'escape analysis' in Dave Cheney's Gocon Tokyo slides. And as peterSO said, we seem to be looking at allocations of smallish references, not, say, copying all of the string bytes from the request each time.
Go is NOT PHP. Write Go programs in Go. Write PHP programs in PHP.
Interface values are represented as a two-word pair giving a pointer to information about the type stored in the interface and a pointer to the associated data. (Go Data Structures: Interfaces)
Reusing Go interface variables to "increase performance" makes no sense.

What kind of objects get serialized and why? When would I use this?

I understand what serialization is; I simply do not know when I would use it. I have seen the discouraged practice of storing session data in a database, and things like that, but other than that I do not know.
What kind of object state would I save in a database, a file system, or anything else that needs persistence? And why would I use it for a non-"permanent" reason?
I do not have a context per se. All I really do is client-server web apps. I may get to use a Java stack for it, but I'd really like to understand this part of things in case I need it.
I have asked similar questions. I'm just not understanding.
In a sentence: using a generic serialiser is a reasonable way to save stuff to disk or move it over a network without having to design a data format, write code that emits data in that format, and write a parser for that format by hand (all of which is error-prone).
Any time you want to persist an object (or object hierarchy) beyond its existence inside a single execution on a single machine, you are going to want to serialise and deserialise.
Some scenarios that come to mind:
Caching: when you want to offload in-memory objects to disk (the caching framework can serialise the object to disk)
For thick clients (either a desktop application or an app using RMI) you'll need to transfer objects from one JVM to another, and this is done by serialising them
I can't think of any other scenarios off the top of my head.
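To make the "no hand-written format" point concrete, here is a hedged sketch using Go's encoding/gob as the generic serialiser (Java's ObjectOutputStream plays the same role); the Session type and file name are illustrative:

package main

import (
    "encoding/gob"
    "fmt"
    "os"
)

type Session struct {
    User  string
    Items []string
}

func main() {
    s := Session{User: "alice", Items: []string{"cart", "wishlist"}}

    // Persist: no hand-designed format, no emitter, no parser.
    f, err := os.Create("session.gob")
    if err != nil {
        panic(err)
    }
    if err := gob.NewEncoder(f).Encode(&s); err != nil {
        panic(err)
    }
    f.Close()

    // Restore in a later run, or after sending the bytes over a network.
    f, err = os.Open("session.gob")
    if err != nil {
        panic(err)
    }
    var restored Session
    if err := gob.NewDecoder(f).Decode(&restored); err != nil {
        panic(err)
    }
    f.Close()

    fmt.Printf("%+v\n", restored)
}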

In OOP, how does persistence cooperate with object relations?

So in OOP, objects send messages to other objects. This is a pretty simple concept, and as long as all of the objects live in memory, it's easy to implement, e.g. by calling methods.
But in real life, we persist objects to a database or elsewhere, because there isn't enough RAM to hold all of them. How do you call a method on an object that is currently persisted?
OK, so maybe unpersisting one object can be encapsulated in its Factory. But what if I want to send messages to a lot of objects, e.g. in a loop? Unpersisting them one by one is a classic N+1 issue.
OK, I can have a Repository that unpersists all the necessary objects in one shot. But doesn't it break encapsulation to ask a Repository to get my objects?
What about patterns like Observer? Is it possible to have an object subscribe to anything, knowing that it's going to be persisted?
Are there transparent implementations of this in any language?
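For what it's worth, the "one shot" Repository idea from the question might look like this sketch (illustrative types and SQL, using Go's database/sql for concreteness):

package domain

import "database/sql"

type Account struct {
    ID      int64
    Balance int64
}

type AccountRepo struct{ DB *sql.DB }

// ActiveAccounts materialises every object the caller's loop will touch
// in a single query, avoiding the N+1 pattern of unpersisting one by one.
func (r *AccountRepo) ActiveAccounts() ([]*Account, error) {
    rows, err := r.DB.Query(`SELECT id, balance FROM accounts WHERE active = 1`)
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    var out []*Account
    for rows.Next() {
        a := &Account{}
        if err := rows.Scan(&a.ID, &a.Balance); err != nil {
            return nil, err
        }
        out = append(out, a)
    }
    return out, rows.Err()
}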
I don't think it does. RAM is critical to OOP. Once you persist something outside of RAM, it's not OOP anymore.
An object is effectively the total of all the objects it's in contact with, plus all the objects they're in contact with, and so on. To work, an object needs almost instant communication with many other objects. Without that, performance will lurch to a halt. (Think virtual memory: if you've ever used 1.5GB on a machine with 0.5GB, you'll know the problem.)
Storage based on slow memory tends to use either relational storage or big blocks of sequential data, each block accessed randomly by key. I have written large-data, highly interactive (and so heavily OO) systems that persisted their data in a SQL database. I had to organize the DB so it could be divided into manageable blocks, each able to stand on its own, several of which would be loaded into a session. Then I had to write vast amounts of code to turn the tables, rows, and columns into useful objects. To write to the DB, I had to reverse the whole process.
So to start the program, the DB data went into a blender and came out something completely different. When the user was done and saved to the database, the data got unblended and put back. The in-memory objects had no obvious relationship with the DB tables (though obviously one could be created from the other with enough work). And that's as close as I've been able to get to usable persistence. The atomic units were much bigger than objects or rows, and each transformation took a while (whole seconds).
I've worked a (very) little bit with ORM. The trick there is that OO is used to mimic relational storage. You end up not with the objects you'd get if you started with a good OO design, but with something that looks just like a relational database. In a lot of cases this is just what you want, and it works very well. But it's not real OOP and is probably not what you're looking for.
So I've found no general answer. Fast memory and OO--and slow memory and relational/non-relational storage--are two different things, and bridging that gap takes careful study of specific cases and a bit of genius.

Organizing Data in EEPROM

I have a 64KB EEPROM, organized as 128-byte pages, on my board, which talks to an ATmega1281. The board also has an SD card slot and is capable of copying some configuration files onto the EEPROM (which acts as the internal memory). Due to the nature of the board, only two types of files are needed: one is known as the Circuit Data and the other is the Location Data; both are binary files.
Up until now, I have just split the EEPROM into two 32K halves and written the Circuit Data in the top half and the Location Data in the bottom half. Both files also have a 25-byte header, which I copy into the last page of the file's respective half, i.e. the page starting at address 0x7F80 holds the Circuit Data file's header and the page starting at 0xFF80 holds the other one. The data is always going to be of fixed width, which makes random access quite easy.
My question is: is there a better, simpler way to organize data in an EEPROM? At the moment I don't even store the length of the data, as it's not really needed, but I'm thinking it might add another layer of safety if I included it in the header.
Better? It depends. Simpler? Really not. It depends on how strong your "always" is. How much do you trust that the files will always be of fixed length? The fact that you are asking this question probably means there is some doubt. Keep the KISS principle in mind: microcontroller development is still an area where unnecessary features are a direct threat to the solution's stability. Having the data length in the header would be useful if you wanted to make your EEPROM access more generic, but then again, generalization for two files is overkill.
A second thought: rather than introducing file lengths, which you actually don't need, I would like to know why you store the file headers at the opposite end of the respective memory chunk. To me, a "header" is something that needs to be read before the file itself; you could save one transmission of the read address to the EEPROM.
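If the length were added, a fixed-width header padded to the existing 25 bytes might look like this sketch (illustrative field names, encoded with Go's encoding/binary for concreteness):

package eeprom

import (
    "bytes"
    "encoding/binary"
)

const (
    PageSize           = 128
    CircuitHeaderAddr  = 0x7F80
    LocationHeaderAddr = 0xFF80
)

// Header stays exactly 25 bytes wide, so the existing layout is unchanged.
type Header struct {
    Magic  [4]byte  // file type marker
    Length uint32   // payload length in bytes: the proposed safety check
    CRC16  uint16   // optional integrity check over the payload
    _      [15]byte // padding up to the current 25-byte header size
}

// Encode serialises the header into its fixed 25-byte on-EEPROM form.
func (h *Header) Encode() ([]byte, error) {
    var buf bytes.Buffer
    if err := binary.Write(&buf, binary.BigEndian, h); err != nil {
        return nil, err
    }
    return buf.Bytes(), nil
}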
I believe that in any embedded project the simplest solution is the best. Your way of organizing storage is simple and looks like it meets all your requirements.
Any attempt to "improve" or "optimize" this solution will lead to more complicated code and will increase the probability of introducing a bug. So keep all your engineering solutions as simple as possible. If new requirements pop up, you can always find a new simple solution for them. Don't do premature optimization.