VCL - drawing a room

I am still trying to draw a floorplan (in BCB 6).
I already asked a few questions. As a result of Seeking floorplan design VCL toolbar I bought the TMC components.
Looking for non-rectangular panel VCL component got me close, but not quite there.
So, let's try again...
Some sort of panel, I guess, with nice thick border lines (walls) around the edges, maybe 5 or even 10 pixels, so default panels won't do it.
I can't just draw lines separately, as they need to resize with the form. So, either I tie lines to panels (owner property) and redraw them ... (when? Form resize? Panel resize?)
Or I can make my own panel component.
In either case, I need to be able to interrupt the lines with openings for doors - or do I add a door component? But then I need to tie that to the panel, in case of form resize.
How best to implement? It doesn't have to be too fancy, but something like this...
=============================================
|            ||            ||               |
|            ||            ||               |
|            ||            ||               |
|            ||            ||               |
===    =============   =============   ======
|                    ||                     |
|                                           |
|                    ||                     |
=============================================
See? Multiple doors too; preferably non-rectangular rooms (at least L-shaped) and resizable with the form.
Any ideas?

I don't know if this is a solution that will fit your scenario, but if I were to design a similar application, I think I would take advantage of how easy it is to extend the VCL framework with new components. I'd build components for the various graphical elements (door, wall, etc.) and make a common object that they could all inherit from.
For instance, I'd make a TFloorplanElement component that all my graphical components could inherit from, and have TFloorplanElement inherit from TGraphicControl to take advantage of the Anchors property given by TControl and the Canvas provided by TGraphicControl. I wouldn't use a custom TPanel for this; I don't think the overhead of the window handle provided by TWinControl is needed here.
For walls, I'd make a component inheriting from my TFloorplanElement that is given two endpoints to connect the wall to; these could be a door on one side and another wall on the other, or any other combination of TFloorplanElement descendants. You need some sort of event handling for when either of these corner elements is moved; in that case you simply readjust the coordinates of your wall to the coordinates of the corner element.
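A minimal sketch of what that base class and a wall descendant might look like in C++Builder; the class names, pen width, and drawing logic here are only illustrative, not part of any existing library:

    #include <Classes.hpp>
    #include <Controls.hpp>
    #include <Graphics.hpp>

    // Common ancestor: a windowless control with a Canvas and Anchors support.
    class TFloorplanElement : public TGraphicControl
    {
    public:
        __fastcall TFloorplanElement(TComponent* AOwner)
            : TGraphicControl(AOwner) {}
    };

    // A wall is just a thick line drawn across the control's own bounds.
    class TWall : public TFloorplanElement
    {
    protected:
        virtual void __fastcall Paint()
        {
            Canvas->Pen->Width = 8;             // "thick" wall; tweak to taste
            Canvas->Pen->Color = clBlack;
            Canvas->MoveTo(0, Height / 2);
            Canvas->LineTo(Width, Height / 2);  // horizontal wall segment
        }
    public:
        __fastcall TWall(TComponent* AOwner)
            : TFloorplanElement(AOwner) {}
    };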
One way you could solve the form-resizing problem is by using a container control for your TFloorplanElement components; I guess that is what the Diagramming Studio is doing. If you create a container component (TFloorplanContainer, for instance), you could specify the positions of the elements within it as percentages, or keep a scaling factor that is adjusted when the container is resized. The container, of course, would use its Anchors property to bind itself to the borders of its own container (i.e. the form).
And whenever the container was resized, you would redraw the contained elements. As I said to begin with, I'm not sure whether this is a solution that will work for you or with the diagramming studio you use, but it is one approach I would consider if I were in your place.
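As a rough sketch of the rescale-on-resize idea (TFloorplanContainer, FDesignWidth/FDesignHeight, and FDesignBounds are assumptions here, not existing VCL members):

    // Hypothetical TCustomControl descendant that rescales its children on resize.
    void __fastcall TFloorplanContainer::Resize()
    {
        TCustomControl::Resize();
        double sx = (double)Width  / FDesignWidth;   // FDesignWidth/FDesignHeight:
        double sy = (double)Height / FDesignHeight;  // the size the plan was laid out at
        for (int i = 0; i < ControlCount; ++i)
        {
            TRect r = FDesignBounds[i];              // each element's design-time bounds
            Controls[i]->SetBounds((int)(r.Left * sx), (int)(r.Top * sy),
                                   (int)((r.Right - r.Left) * sx),
                                   (int)((r.Bottom - r.Top) * sy));
        }
        Invalidate();                                // repaint at the new scale
    }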

Take a look at TSimpleGraph, which might get you a huge head start on this. It's at:
http://www.delphiarea.com/products/delphi-components/simplegraph/
It's a FREE component that provides a panel with a huge assortment of methods, properties and places for event handlers, and the effect is pretty gorgeous. They provide a nice exe demo that shows off some of the possibilities. They have defined objects for various shapes and lines, but with some work I think you could add your own stock things like walls, etc.
If it works for you, TSimpleGraph would provide a nice housing, while letting you concentrate on the meat of your app.

Related

How would I create a visual simulation in C++/CLI within a panel? (What libraries?)

Using C++/CLI in Visual Studio, I want to create a 2D simulation, with the user's inputs on one side of the screen and the simulation on the other side. (The user's inputs would be used to calculate what to draw for the simulation.)
I would like to be able to do this within a panel/fixed region, keeping the drawing separate from UI elements (buttons etc.). Essentially I would like to draw multiple dots on the screen, with the positions of those dots changing every second. Trouble is, all the examples of drawing I've seen take up the whole form.
What libraries can I use, and how, to create multiple 2D drawings (either by controlling the colours of individual pixels or any other way) inside a fixed region like a panel?
This really depends on what GUI toolkit you're using.
If you're using WinForms, create a control and override the OnPaint method.
If you're using WPF, I'd use WriteableBitmap.
There are other methods for both toolkits, of course, but those are the ones I would use. There are also things like DirectX and OpenGL, but it sounds like you want something simple, so those would probably be overkill.
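For the WinForms route, a minimal C++/CLI sketch might look like the following; the SimulationPanel name and the single hard-coded dot are placeholders, not from your project:

    // A panel that owns its own drawing; call Invalidate() from a timer tick
    // (e.g. once a second) to repaint with the new dot positions.
    public ref class SimulationPanel : public System::Windows::Forms::Panel
    {
    public:
        SimulationPanel()
        {
            DoubleBuffered = true;   // reduce flicker on frequent redraws
        }
    protected:
        virtual void OnPaint(System::Windows::Forms::PaintEventArgs^ e) override
        {
            Panel::OnPaint(e);
            // Draw one dot as an example; in practice, loop over your simulation state.
            e->Graphics->FillEllipse(System::Drawing::Brushes::Red,
                                     System::Drawing::Rectangle(10, 10, 6, 6));
        }
    };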

Design pattern for child calling method in parent

I am currently working on my biggest project and I am having trouble figuring out how to structure my code. I'm looking for some guidance.
I have 2 objects, a Tile and a Container. Each Tile has a 2D coordinate, and all Tiles are children of the Container. The Container has methods to return the tile at a location, swap tiles, add tiles, and remove tiles.
Now, when you click on a tile it disappears; that was easy because it was self-contained. The problem comes when I created different types of tiles that inherit from the base Tile. Each type of tile does a different action when you click on it. Some destroy surrounding tiles, some swap with other tiles, and others add new tiles. For simplicity, we will call these 3 subclasses Tile-destroy, Tile-swap, and Tile-add.
My problem is: when I click on these tiles, how can they act on other tiles in the Container? Should I just call functions in the parent class, or is there a better way to do this? I am also having trouble #including the Tile in the Container as well as the other way around; I feel like it's not a proper pattern.
I have it set up so that when a click takes place, the Container handles it, checks the type of tile that was clicked, and acts from there with a large else-if statement. However, this makes it very difficult to add new tile types. Ideally, all the information for what happens when you click on a tile would be contained within each tile subclass.
Any ideas?
I can suggest the simplest design:
Your Container will be a game controller.
Each tile has a Parent property which refers to the Container.
When you click on a tile, it sends a Command to the Container (for example, DestroyTile(x, y) or AddTile(x, y)).
The Container handles these commands and destroys, adds, or swaps tiles.
If you want a really good and more decoupled design, you can also create handlers for all operation types (DestroyTileHandler, AddTileHandler). In the Container, you just pass the different commands to the appropriate handler. You also need to pass a context object (like a Field with tiles) to the handler. This allows you to add and modify operations without even changing the Container code.
See related patterns: Command, Observer
Feel free to ask questions and good luck!
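A small C++ sketch of that idea (the names are illustrative): the Container implements a narrow interface, each Tile subclass knows only that interface, and the big else-if chain disappears.

    // tile_actions.h - the only header a Tile needs; this breaks the circular include.
    struct ITileActions {
        virtual ~ITileActions() {}
        virtual void destroyTile(int x, int y) = 0;
        virtual void addTile(int x, int y) = 0;
        virtual void swapTiles(int x1, int y1, int x2, int y2) = 0;
    };

    // tile.h
    class Tile {
    public:
        Tile(int x, int y) : x_(x), y_(y) {}
        virtual ~Tile() {}
        // Each subclass decides what a click means; the Container passes itself in.
        virtual void onClick(ITileActions& actions) = 0;
    protected:
        int x_, y_;
    };

    class DestroyTile : public Tile {
    public:
        using Tile::Tile;
        void onClick(ITileActions& actions) override {
            actions.destroyTile(x_ - 1, y_);   // e.g. destroy the neighbouring tiles
            actions.destroyTile(x_ + 1, y_);
        }
    };

    // The Container derives from ITileActions; its click handler is simply:
    //     clickedTile->onClick(*this);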

Kivy: Depth Order not so in Depth

Now I could be wrong about this but after testing it all day, I have discovered...
When adding a widget and setting the z-index, the value "0" seems to be the magic depth.
If a widget's Z is at 0, it will be drawn on top of everything that's not at 0, Z wise.
It doesn't matter if a widget has a z-index of 99, -999, 10, -2 or whatever... It will not appear on top of a widget whose z-index is set to 0.
It gets more strange though...
Any index less than -2 or greater than 2 seems to create an "index out of range" error. Funny thing is...when I was working with a background and sprite widget, the background's Z was set to 999 and no errors. When I added another sprite widget, that's when the -2 to 2 z-index limitation appeared.
Yeah I know...sounds whacked!
My question is, am I right about "0" being the magic Z value?
If so, creating a simple 2.5D effect, like making a sprite move behind a big rock, will take some unwanted code.
Since you can only set Z when adding a widget, one must remove a widget and immediately add it back with the new Z value.
You'll have to do this with the moving sprite and the overlapping object in question. Hell, I already have that code practically written, but I want to find out from Kivy pros: is there a way to set the z-index without removing and re-adding a widget?
If not, I'll have to settle for the painful way.
My version of Kivy is 1.9.0
What do you mean by z-order? Drawing order is determined entirely by the order in which widgets are added to the parent, and the index argument to add_widget is just the list index at which the widget will be inserted. The correct way to change drawing order among widgets is to remove and re-add them (actually, you can mess with the canvases manually, but that is the same thing at a lower level, and not a better idea).
I found a working solution using basic logic based on the fact widgets have to be removed and added again in order to control depth/draw order.
I knew the Main Character widget had to be removed along with the object in question... so I created a Main Character Parent widget, which defines and controls the Main Character, apart from its Graphic widget.
My test involves the Main Character walking in front of a large rock, then behind it... creating a 2.5D effect.
I simply used the "y-" theory along with widget attach and detach code to create the desired effect.
The only thing that caught me off guard was the fact my Graphic widget for my Actor was loading textures. That was a big no no because the fps died.
Simple fix, moved the texture loading to the Main Character Parent widget and the loading is done once for all-time.
PS, if anyone knows how to hide the scrollbars and wishes to share that knowledge, it'll be much appreciated. I haven't looked for an API solution for it yet, but I will soon.
Right now I'm just trying to make sure I can do the basic operations necessary for creating a commercial 2.5D game (handhelds).
I'm a graphic artist and web developer so coming up with lovely visuals won't be an issue. I'm more concerned with what'll be "under the hood" so to say. Hopefully enough, lol.

General considerations for NUI/touch interface

For the past few months I've been looking into developing a Kinect-based multitouch interface for a variety of software music synthesizers.
The overall strategy I've come up with is to create objects, either programmatically or (if possible) algorithmically, to represent the various controls of the soft synth (a small sketch of such an object follows the list below). These should have:
X position
Y position
Height
Width
MIDI output channel
MIDI data scaler (convert x-y coords to midi values)
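For concreteness, such a control descriptor might be nothing more than a small struct like this (just a sketch; the field names and the scaling rule are mine):

    // Hypothetical descriptor for one on-screen control of the soft synth.
    struct ControlRegion {
        float x, y;           // position (pixels, or fractions of screen size)
        float width, height;
        int   midiChannel;    // MIDI output channel
        int   midiCC;         // controller number to send on that channel
        // Map a hand's vertical position inside the region to a 0..127 MIDI value.
        int toMidiValue(float handY) const {
            float t = (handY - y) / height;             // 0 at top, 1 at bottom
            if (t < 0.0f) t = 0.0f;
            if (t > 1.0f) t = 1.0f;
            return static_cast<int>((1.0f - t) * 127);  // top of region = max value
        }
    };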
Two strategies I've considered for algorithmic creation are an XML description and somehow pulling stuff right off the screen (i.e. given a running program, find the x/y coords of all controls). I have no idea how to go about that second one, which is why I express it in such specific technical language ;). I could do some intermediate solution, like using mouse clicks on the corners of controls to generate an XML file. Another thing I could do, which I've seen frequently in Flash apps, is to put the screen size into a variable and use math to build all interface objects in terms of screen size. Note that it isn't strictly necessary to make the objects the same size as the onscreen controls, or to represent all onscreen objects (some are just indicators, not interactive controls).
Other considerations;
Given (for now) two sets of x/y coords as input (left and right hands), what is my best option for using them? My first instinct is/was to create some kind of focus test, where if the x/y coords fall within an interface object's bounds, that object becomes active, and then becomes inactive if they fall outside some other, smaller bounds for some period of time. The cheap solution I found was to use the left hand as the pointer/selector and the right as a controller, but it seems like I can do more. I have a few gesture solutions (hidden Markov models) I could screw around with. Not that they'd be easy to get to work, exactly, but it's something I could see myself doing given sufficient incentive.
So, to summarize, the problem is
represent the interface (necessary because the default interface always expects mouse input)
select a control
manipulate it using two sets of x/y coords (rotary/continuous controller) or, in the case of switches, preferably use a gesture to switch it without giving/taking focus.
Any comments, especially from people who have worked/are working in multitouch io/NUI, are greatly appreciated. Links to existing projects and/or some good reading material (books, sites, etc) would be a big help.
Whoa, lots of stuff here. I worked on lots of NUI stuff during my time at Microsoft, so let's see what we can do...
But first, I need to get this pet peeve out of the way: you say "Kinect based multitouch". That's just wrong. Kinect inherently has nothing to do with touch (which is why you have the "select a control" challenge). The types of UI consideration needed for touch, body tracking, and mouse are totally different. For example, in touch UI you have to be very careful about resizing things based on screen size/resolution/DPI... regardless of the screen, fingers are always the same physical size and people have the same degree of physical accuracy, so you want your buttons and similar controls to always be roughly the same physical size. Research has found 3/4 of an inch to be the sweet spot for touchscreen buttons. This isn't so much of a concern with Kinect, though, since you aren't directly touching anything - accuracy is dictated not by finger size but by sensor accuracy and the user's ability to precisely control finicky and laggy virtual cursors.
If you spend time playing with Kinect games, it quickly becomes clear that there are 4 interaction paradigms.
1) Pose-based commands. User strikes and holds a pose to invoke some application-wide action or command (usually bringing up a menu)
2) Hover buttons. User moves a virtual cursor over a button and holds still for a certain period of time to select the button
3) Swipe-based navigation and selection. User waves their hands in one direction to scroll a list and in another direction to select from the list
4) Voice commands. User just speaks a command.
There are other mouse-like ideas that have been tried by hobbyists (I haven't seen these in an actual game) but frankly they suck: 1) using one hand for the cursor and the other hand to "click" where the cursor is, or 2) using the z-coordinate of the hand to determine whether to "click".
It's not clear to me whether you are asking about how to make some existing mouse widgets work with Kinect. If so, there are some projects on the web that will show you how to control the mouse with Kinect input but that's lame. It may sound super cool but you're really not at all taking advantage of what the device does best.
If I were building a music synthesizer, I would focus on approach #3 - swiping. Something like Dance Central. On the left side of the screen, show a list of your MIDI controllers with some small visual indication of their status. Let the user swipe their left hand to scroll through and select a controller from this list. On the right side of the screen, show how you are tracking the user's right hand within some plane in front of their body. Now you're letting them use both hands at the same time, giving immediate visual feedback of how each hand is being interpreted, and not requiring them to be super precise.
PS... I'd also like to give a shout out to Josh Blake's upcoming NUI book. It's good stuff. If you really want to master this area, go order a copy :) http://www.manning.com/blake/

How to create a swanky SurfaceSlider

I am new to Surface programming and stumbled upon this image, which I understand is a slider control on a tag visualization (in this case a card). This slider:
is curved, as opposed to the conventional straight track
has a bigger thumb which displays the current position (thus eliminating the need for a separate label)
has a glowing feel (I understand this is due to overlapping controls with different blur radii)
Can anyone help with how to make such a control?
-V
This is a custom-built control rather than a standard SurfaceSlider. It's not built using TagVisualizer either, but that's only because the app this picture shows was built ~2 years prior to TagVisualizer existing.
Now, you should certainly use TagVisualizer to streamline an implementation of this, but you'll still have to create a custom slider control - SurfaceSlider will not be a good fit because it assumes that the user is moving their finger linearly.
Within your custom arching slider control, you can use SurfaceThumb (which SurfaceSlider itself uses) to get the big glowing thumb... then you just need to listen to the Delta events on the thumb and move it along the constrained path as appropriate.