I currently have two programs: one program for solid lines and fills, with a vertex-shader-for-solids and a fragment-shader-for-solids, and a second program for textures, with a vertex-shader-for-textures and a fragment-shader-for-textures. I swap the two programs in and out using glUseProgram depending on what I am drawing. Is this a good solution? Or should I glAttachShader/glDetachShader from a single program?
Definitely, you're using the right solution. Binding a different program should be low overhead. As with any state change, you obviously don't want to do it more often than needed. For example, if you can, render everything that uses one program first, then bind the other program and render all the primitives that use it; that is preferable to switching programs more frequently.
Attaching a different shader to a program means that you'll have to relink it, which is much more expensive than just binding a different program.
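For example, a frame might look like this (solidProgram and textureProgram are your two already-linked program objects; the two draw helpers are made up for illustration):

    // Render everything that uses the solid-color program first...
    glUseProgram(solidProgram);     // first program switch
    drawSolidBatches();             // hypothetical helper issuing all solid/fill draw calls

    // ...then switch once and render everything that uses the texture program.
    glUseProgram(textureProgram);   // second (and last) program switch this frame
    drawTexturedBatches();          // hypothetical helper issuing all textured draw calls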
In smalltalk, you're able to save the state of the world into an image file. I assume this has to do with Smalltalk's ability to "serialize" itself -- that is, objects can produce their own source code.
1) Is this an accurate understanding?
2) What is the challenge in adding this ability to modern languages (non-lisp, obviously)?
3) Is "serialization" the right word? What's the correct jargon?
It's much simpler than "serializing". A Smalltalk image is simply a snapshot of the object memory. It takes the whole RAM contents (after garbage collecting) and dumps it into a file. On startup, it loads that snapshot from disk into RAM and continues processing where it left off. There are some hooks to perform special actions on snapshot and when resuming, but basically this is how it works.
(added: see Lukas Renggli's comment below for a crucial design choice that makes it so simple compared to other environments)
Extending upon Bert Freudenberg’s excellent answer.
1) Is this (i.e. objects' ability to serialize their own source code) an accurate understanding?
Nope. As Bert pointed out, a Smalltalk image is simply a memory snapshot. The single source of truth for both Smalltalk objects and Smalltalk programs is their memory representation. This is a huge difference from other languages, where programs are represented as text files.
2) What is the challenge in adding this ability to modern languages (non-lisp, obviously)?
Technically, bootstrapping an application from a memory snapshot should be possible for most languages. If I am not mistaken, there are solutions that use this approach to speed up startup times for Java applications. You'd have to agree on a canonical memory representation, though, and you'd need to take care to reinitialize native resources upon restarting the program. For example, in Smalltalk, open files and network connections are reopened. There's also a startup hook to fix the endianness of numbers.
3) Is "serialization" the right word? What's the correct jargon?
Hibernation is the term.
The crucial difference is that Smalltalk treats a program as just a bunch of objects. And the IDE is just a bunch of editors editing those objects, so when you save and load an image, all your code is there, just as you left it.
For other languages it could be possible to do so, but I guess there would be more fiddling, depending on how much reflection there is. In most other languages, reflection comes as an add-on, or even an afterthought, but in Smalltalk it is at the heart of the system.
This happens already, in a way, when you put your computer to sleep, right? The kernel writes running programs to disk and loads them up again later. Presumably the kernel could move a running program to a new machine over a network, assuming the same architecture on the other end? Java can serialize all objects too, because of the JVM, right? Maybe the hurdle is just that different architectures imply varied memory layouts?
Edit: But I guess you're interested in using this functionality from the program itself. So I think it's just a matter of implementing the feature in the Python/Ruby interpreter and stdlib, and having some kind of virtual machine if you want to be able to move to a different hardware architecture.
I am reading the NeHe OpenGL tutorial and there is a tutorial about display lists. Are there any reasons to use them instead of the "classes and objects" way? For example, creating a cube class and then simply writing cube.render(x, y, z)?
The idea was to take advantage of hardware with on-board display-list processing. Basically, you'd build the display list in the memory of the graphics processor, then you could display the whole list by sending it a single command instead of re-sending all the coordinates and such every time you want to display the object. This can reduce the bandwidth requirement between the CPU and GPU substantially.
In reality, though, it depends heavily on the hardware and OpenGL implementation you're using. With some implementations, a display list gets you a major performance improvement -- but with others, it does essentially no good at all.
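For reference, a minimal sketch of the legacy display-list API: the commands are recorded once and then replayed with a single call (the actual cube-drawing calls are elided):

    GLuint cubeList = glGenLists(1);
    glNewList(cubeList, GL_COMPILE);   // record the following calls instead of executing them
        glBegin(GL_QUADS);
        // ... glVertex3f()/glNormal3f() calls for the cube faces ...
        glEnd();
    glEndList();

    // Each frame: replay the whole recorded sequence with one call.
    glCallList(cubeList);

    // When the object is no longer needed:
    glDeleteLists(cubeList, 1);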
I am reading the NeHe OpenGL tutorial and there is a tutorial about display lists.
I'd advise you to stop reading NeHe. I've never seen anything decent related to NeHe mentioned on Stack Overflow, and the tutorials I saw use too much platform-specific API.
Instead of NeHe, go to OpenGL.org and check the "books" section. Alternatively, the first version of the "OpenGL Red Book" is available at glprogramming.com. While it doesn't cover the most recent API present in OpenGL 4, the methods it describes are still available even in the most recent version of OpenGL via the "compatibility profile".
Are there any reasons to use them instead of the "classes and objects" way?
You're confusing two different concepts. A display list is OpenGL's way of saving a sequence of OpenGL calls so you can "replay" them quickly later. Depending on the OpenGL implementation, it might improve application performance. Using display lists has nothing to do with the internal organization of your program (OOP). You can use OOP and still use display lists within cube.render(). Or you could use vertex buffer objects, or whatever you want. Or you could work in a "non-OOP" style and still use display lists.
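For example, a cube.render() could perfectly well use a display list internally. A minimal sketch (the class and member names are invented, not from the tutorial):

    class Cube {
    public:
        void compile() {                        // record the drawing commands once
            listId = glGenLists(1);
            glNewList(listId, GL_COMPILE);
            // ... immediate-mode calls that draw a unit cube ...
            glEndList();
        }

        void render(float x, float y, float z) const {   // replay them at a position
            glPushMatrix();
            glTranslatef(x, y, z);
            glCallList(listId);
            glPopMatrix();
        }

    private:
        GLuint listId = 0;
    };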
Display lists are compiled by the driver and can be stored in GPU memory.
Your own rendering functions are compiled on the CPU side, and their individual commands still need to be sent to the GPU at run time.
This is like comparing "stored procedures" to custom functions calling inline SQL.
Stored procedures are compiled and execution plans are generated on the server side, while custom functions are only compiled into client-side assemblies.
For background, I have been working on an RPG based on Ray Wenderlich's tutorials (example: http://www.raywenderlich.com/1163/how-to-make-a-tile-based-game-with-cocos2d).
Now I am trying to build a scripted event/cut-scene system so that, for instance, when a player enters a building, the different characters can have a discussion of the current events before continuing the adventure. My only problem is I can't really visualize how one would implement this.
I would guess some sort of one-time-use trigger, maybe kept in a large switch statement on a singleton somewhere, which draws all the temporary characters? Then the event deactivates itself.
I am just looking for a blueprint of how one would do this. Although programming examples are welcome as well.
It depends a lot on how much time you want to commit to the system and how versatile you want the final system to be. A powerful cut-scene system can be flexible enough to be used in almost every interaction in a typical 2d RPG.
If you want to go all out, I would suggest a heavily data-driven approach. Keep as much data in files as possible and use the filesystem to your advantage. If you say 'all the dialog scenes are in this folder', then adding a new scene just means dropping it into the folder, rather than creating the scene and then touching some master switch statement somewhere. Just keep in mind that with a large system you want to make adding a new cutscene as simple as possible, not 400 different places to touch.
I would also stay away from switch statements for tracking progress in a cutscene. It adds a lot of code overhead per scene. Ideally a cutscene would be as simple as an array of data and a position. Your cutscene manager, the singleton, can parse through the array, decode the data into commands and fire them off.
Sorry if that's a bit vague, but a lot of these decisions depend on how your engine is structured and what you want out of the system. Keep in mind that the more general the system is, the more uses you may find for it going forward, but it will take longer to get up and running to begin with.
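To make the "array of data and a position" idea concrete, here is a minimal C++ sketch (the command set, struct and class names are invented; a real engine would decode these commands from the files in your scenes folder):

    #include <cstddef>
    #include <string>
    #include <utility>
    #include <vector>

    // One step of a cutscene, decoded from data.
    struct CutsceneCommand {
        enum Type { ShowDialog, MoveActor, Wait } type;
        std::string actor;   // who the command applies to
        std::string text;    // dialog text, if any
        float x = 0, y = 0;  // target position for MoveActor
        float seconds = 0;   // duration for Wait
    };

    // The singleton manager just walks the array and fires commands off one at a time.
    class CutsceneManager {
    public:
        void play(std::vector<CutsceneCommand> commands) {
            script = std::move(commands);
            position = 0;
        }

        // Called once per game cycle; advances when the current command has finished.
        void update(float dt) {
            if (position >= script.size()) return;   // nothing playing
            if (execute(script[position], dt))       // returns true when the command is done
                ++position;
        }

    private:
        bool execute(const CutsceneCommand& cmd, float dt) {
            switch (cmd.type) {
                case CutsceneCommand::Wait:
                    elapsed += dt;
                    if (elapsed < cmd.seconds) return false;
                    elapsed = 0;
                    return true;
                default:
                    // Hook the real engine here: show the dialog box, move the sprite, ...
                    return true;
            }
        }

        std::vector<CutsceneCommand> script;
        std::size_t position = 0;
        float elapsed = 0;
    };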
You can just check which tile you are on while you are moving, and when you are on a specific tile you can start a cutscene. You can also add a tag via the Tiled editor (this is the recommended editor to use with CCTMXTiledMap) to your map to specify where the cutscene should begin, just like the tutorial marks the character's start point. Then you check for the specified trigger (either a specific tile or the point tagged in the map) every game cycle. After that it's almost very easy: you just freeze the controls and play a pre-recorded camera and object movement until the cutscene finishes, then restore the game to normal mode and turn off checking for the trigger.
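A minimal sketch of that per-game-cycle trigger check (the trigger coordinates and startCutscene() are placeholders for whatever your map tag and cutscene code actually provide):

    // These would come from the object/tag you place on the map in Tiled
    // (the values here are just placeholders for illustration).
    int triggerX = 5, triggerY = 7;

    void startCutscene() { /* freeze controls, play the recorded scene */ }

    // Called every game cycle, after the player has moved.
    void checkCutsceneTrigger(int tileX, int tileY) {
        static bool triggered = false;            // one-time-use trigger
        if (!triggered && tileX == triggerX && tileY == triggerY) {
            triggered = true;                     // stop checking once it has fired
            startCutscene();
        }
    }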
I notice a pattern: when I did C++ and backend programming (in C# or any language), all my classes were neat and tidy. Recently I noticed all my code is in one class and I literally have more than 50 functions in it. I now realize it's because I am doing UI. If I were to separate them by pages or forms/dialogs I would have a lot more files, more lines of code and longer lines of code. If I separate them I get the same problem (more files, lines, longer lines). Obviously fewer lines is better (less code = less to debug, change or break during maintenance).
This specific project is 5k lines, with 2k being from the web or libraries. All my .cs files are under 1k lines. Is this acceptable even though I have 50+ functions in a single class?
Bonus: I notice most of these functions are called only once, and putting certain code blocks (such as one function making two calls to the db) into their own functions makes it harder for me to edit, since they get divided between files, and this balloons the function count. So I kind of don't know what to do. Do I create more classes to reduce the function count per class (it will increase the function count overall, and most are already called only once)? How do I design classes when doing frontend/UI?
I often find that my UI stuff grows considerably more complex than my pure classes. Think about it: your "pure" classes are (for the most part) essentially machine instructions, and can (or should be able to) assume pure, pre-validated inputs and outputs, and do not have to accommodate the vagaries of human behavior.
A UI, on the other hand, is subject to all of human fallibility, and needs to respond in human-predictable ways, in a manner which humans can understand. THIS is where the complexity comes in.
Consider: in your nice, crisply defined classes, in which each function performs a fixed action against a known type of input, there is not a lot of random BS to anticipate or handle.
A UI must be receptive to all manner of improper, inconsistent, or unanticipated input and actions by the user. While we, as designers, can pre-think some of this (and even minimize it with things like combo boxes and command buttons), a) all of that requires additional back-side code, and b) all of these things can then interact in different ways as well.
In our classes, WE decide how certain methods/functions and the like interact and affect one another. In the UI, we can do our best to point the user in the right direction, but there is still the random element. What if the user pushes the button before selecting an item from the list? There are several different ways to handle that scenario, all of which require another line (or ten, or 100) of code to handle gracefully.
Lastly, the more complex the project, the more complex the UI is likely to be, and the more of this must go on.
Managing the actions of the machine, given inputs and outputs we as programmers explicitly define, is EASY compared to predicting, managing, and handling the random quirks imposed on our stuff by a user. If only they would pay attention, right?
Anyway, I believe all of THAT is why code for a UI becomes larger and more complex. As for how to break it out in a maintainable manner, the guys above covered it. Abstract the two. I design a form. I define the means for the user to input data and/or indicate what they want to happen next. Then I define the manner in which that form can communicate those things to my crisp, clean back-end classes. Then I provide validation mechanisms, and the means to help guide the user in navigating it all (e.g. the button is not enabled until the user selects an item from the list...).
Complex.
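One way to sketch that separation (the class and method names are invented, and although the project here is C#, the shape is the same in any language): the form only gathers input and enables/disables widgets, while validation and the actual work live in plain back-end code that can be tested on its own.

    #include <optional>
    #include <string>

    // Back-end class: assumes clean, pre-validated input and does the real work.
    class OrderService {
    public:
        bool placeOrder(int itemId, int quantity);   // e.g. the two db calls live here
    };

    // Validation sits between the form and the back end, so the form stays thin:
    // it returns an error message to show the user, or nothing if the input is fine.
    std::optional<std::string> validateOrder(std::optional<int> selectedItem, int quantity) {
        if (!selectedItem) return std::string("Select an item from the list first.");
        if (quantity <= 0)  return std::string("Quantity must be at least 1.");
        return std::nullopt;   // safe to call OrderService::placeOrder now
    }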
Highly embedded (limited code and RAM size) projects pose unique challenges for code organization.
I have seen quite a few projects with no organization at all. (Mostly by hardware engineers who, in my experience, are not typically concerned with non-functional aspects of code.)
However, I have been trying to organize my code accordingly:
hardware specific (drivers, initialization)
application specific (not likely to be reused)
reusable, hardware independent
For each module I try to keep the purpose to one of these three types.
Due to the limited size of embedded projects and the emphasis on performance, it is often hard to keep this organization.
For some context, my current project is a limited DSP application on an MSP430 with 8k flash and 256 bytes of RAM.
I've written and maintained multiple embedded products (30+ and counting) on a variety of target micros, including MSP430's. The "rules of thumb" I have been most successful with are:
Try to modularize generic concepts as much as possible (e.g. separate driver code from application code). -- It makes for easier maintenance and reuse/porting of a project to another target micro in the future.
DO NOT start by worrying about optimized code at the very beginning. Try to solve the domain's problem first and optimize second. -- Your target micro can handle a lot more "stuff" than you might expect.
Work to ensure readability. Although most embedded projects seem to have short development-cycles, the projects often live longer than you might expect and another developer will undoubtedly have to work with your code.
I've worked on 8-bit PIC processors with similar limitations.
One restriction you don't have is on how many comments you make or what you choose to name your methods, variables, etc. Take advantage of that. Speed and size constraints do sometimes trump organization, but you can always explain.
Another tip is to break up a logical source file into even more pieces than you need, then bind them together by #include-ing them in a compilation unit. This lets you have lots of reusable code (even one routine per file) but combine it in whatever order you need. This is useful, e.g., when trying to meet compilation unit size restrictions, or to pick and choose which common subroutines you need on the next project.
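A small sketch of the idea, with invented file names: each routine lives in its own small file, and the project's single compilation unit #includes just the ones it needs.

    // crc8.inc -- one reusable routine per file (body elided)
    unsigned char crc8(const unsigned char* data, unsigned len) { /* ... */ return 0; }

    // project_utils.cpp -- the single compilation unit actually handed to the compiler
    #include "crc8.inc"          // pulled in because this project needs it
    #include "median3.inc"
    // #include "fir_filter.inc"  // not needed here, so it never takes up code space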
I try to organize it as if I had unlimited RAM and ROM, and it usually works out fine. As mentioned elsewhere, do not try to optimize it until you absolutely need to.
If you can get a pin-compatible processor that has more resources, it's better to get it working on that, concentrating on good structure and layout, then optimize for size later when you understand the code better.
Except under exceptional circumstances (see note below), the organisation of your code will have no impact on the final product. (The contents of the code are obviously a different matter.)
So with that in mind you should organise your code as you would any other project.
With that said, the following are fairly typical:
If this is a processor that you've worked on before, or will be working on in the future, you will usually want to keep a dedicated hardware abstraction layer that can be shared between projects in the future. Typically this module would contain items like routines for managing any UARTs, timers, etc.
Usually it's reasonable to maintain a set of platform-specific code for initialisation and setup that performs all of the configuration and initialisation up to the point where your executive takes over and runs your application. It will also include platform-specific HAL routines.
The executive/application is probably maintained as a separate module. All of the hardware-specific code should be hidden in the HAL (as mentioned above).
By splitting your code up like this you also have the option of compiling and running your application as a simulation, on a completely different platform, just by replacing the hardware specific code with routines that mimic the hardware.
This can be good for unit testing and for debugging algorithmic problems you might have.
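A minimal sketch of that split (file and function names are invented): the application only sees the HAL interface, so a simulation build simply links a different implementation file.

    // hal_uart.h -- the interface the application sees
    void uart_init(unsigned long baud);
    void uart_send(unsigned char byte);

    // hal_uart_msp430.cpp would write the device registers (compiled for the target build).
    // hal_uart_sim.cpp could log bytes to stdout instead (compiled for the host/unit-test build).

    // app.cpp -- the executive/application never touches registers directly.
    #include "hal_uart.h"
    void app_run() {
        uart_init(9600);
        uart_send(0x55);   // identical application code runs on the target and in simulation
    }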
Note: exceptional circumstances might be imposed by unusual compiler restrictions. E.g. I've come across some compilers that expect all interrupt service routines to be compiled within a single object file.
I've worked with some sensor platforms like the Tmote Sky. I too have seen poor organization, and I have to admit I have contributed to it. Anyway, I'd say that some compromise is inevitable, because loading too many modules or splitting the program into too many parts will (IMHO) also kill resources, so try to be aware of the threshold between organization and usability when resources are low.
Obviously this doesn't mean letting chaos reign, but, for example, take a look at the organization of the TinyOS source code and applications; it gives an idea of what I'm trying to say.
Although it is a bit painful, one organization technique that is somewhat common with embedded C libraries is to split every single function and variable into a separate C source file, and then aggregate the resulting collection of object files into a library file.
The motivation for doing this is that for most normal linkers the unit of linkage is an object file: for every object you either get the whole object or none of it. Since there is a 1-1 relationship between C files and object files, putting each symbol in its own C file gives each one its own object. This in turn lets the linker pull in only the subset of functions and variables that are actually used.
This sort of game doesn't help at all for headers; they can happily be left as single files.
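A small sketch with invented names of how that plays out: one function per source file, each compiled to its own object, so the linker only drags in what the application actually references.

    // crc8.cpp  -- contains only crc8() (body elided)
    unsigned char crc8(const unsigned char* data, unsigned len) { /* ... */ return 0; }

    // crc16.cpp -- contains only crc16() (body elided)
    unsigned short crc16(const unsigned char* data, unsigned len) { /* ... */ return 0; }

    // Build each into its own object, then archive them:
    //   g++ -c crc8.cpp crc16.cpp
    //   ar rcs libutil.a crc8.o crc16.o
    // An application that calls only crc8() pulls just crc8.o out of libutil.a;
    // crc16.o is never linked in, so it costs no code space.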