I am just starting my dive into Blender, coming mainly from Quake's Radiant. I am trying to research whether it will fit my need for a level editor replacement. So with that in mind, here is my question:
What is the best method for creating and storing a set of prefab "entity" objects such as health packs, ammo pickups, and "moveable" objects such that they have a set of "properties" that can be changed within Blender?
I have found this page, but I am still getting lost as to how to integrate these properties on a per-object basis and achieve the desired result:
https://docs.blender.org/manual/en/dev/editors/properties_editor.html
Note: It is not my goal to use the Blender game engine - just attach values to things for me to export to my own engine.
Edit 1: Found an article discussing the topic, although it seems very outdated:
https://www.gamasutra.com/blogs/IwanGabovitch/20120524/171032/Using_Blender_3D_as_a_3d_map_editor_rather_than_programming_your_own_from_scratch.php
Example:
// entity 105
{
"inv_item" "2"
"inv_name" "#str_02917"
"classname" "item_medkit"
"name" "item_medkit_11"
"origin" "-150 2322 72"
"triggerFirst" "1"
"triggersize" "40"
"rotation" "0.224951 0.97437 0 -0.97437 0.224951 0 0 0 1"
}
While we can manually add custom properties to any object, these are added to that specific object, so they can be unique to each object.
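For illustration, here is a minimal sketch of assigning such per-object (ID) properties from Blender's Python console; the property names are just examples borrowed from the entity block above:

import bpy

# Assign free-form custom (ID) properties to the active object.
# These live only on this particular object.
obj = bpy.context.active_object
obj["classname"] = "item_medkit"
obj["inv_item"] = 2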
A better way of integrating new properties is to use bpy.props. These can be added to an object's class in Blender, which means every object will have the same properties available.
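A minimal sketch of what registering such class-level properties could look like (the property names, labels and defaults are assumptions for illustration):

import bpy

# Registering properties on bpy.types.Object makes them available on every object.
bpy.types.Object.classname = bpy.props.StringProperty(
    name="Classname",
    description="Entity class used by the game engine",
    default="worldspawn",
)
bpy.types.Object.inv_item = bpy.props.IntProperty(
    name="Inventory Item",
    default=0,
)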
You can set up a custom panel to edit your properties, like the simple example sketched below. Both the properties and the panel, as well as your object exporter, can be defined in an addon, which you can have enabled at startup so that it is available every time you run Blender. An addon also makes it easier for you to share your game editor with others.
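To make that concrete, here is a rough sketch of such an addon; the class name, property names and labels are illustrative, not part of the original answer:

import bpy

bl_info = {"name": "Game Entity Properties", "category": "Object"}

class GameEntityPanel(bpy.types.Panel):
    """Shows the custom entity properties in the Object properties tab."""
    bl_label = "Game Entity"
    bl_space_type = 'PROPERTIES'
    bl_region_type = 'WINDOW'
    bl_context = "object"

    def draw(self, context):
        obj = context.object
        self.layout.prop(obj, "classname")
        self.layout.prop(obj, "inv_item")

def register():
    # Define the shared properties and register the panel when the addon is enabled.
    bpy.types.Object.classname = bpy.props.StringProperty(name="Classname")
    bpy.types.Object.inv_item = bpy.props.IntProperty(name="Inventory Item")
    bpy.utils.register_class(GameEntityPanel)

def unregister():
    bpy.utils.unregister_class(GameEntityPanel)
    del bpy.types.Object.classname
    del bpy.types.Object.inv_item

if __name__ == "__main__":
    register()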
The link I found seems to be the best option and allows for the most flexibility. The source code for the Super Tux Kart plugins serves as a great reference implementation:
https://sourceforge.net/p/supertuxkart/code/HEAD/tree/media/trunk/blender_26/
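As a rough illustration of the export side, here is a sketch that walks the scene and writes each object's custom properties as a block similar to the entity example above; the property handling and output format details are assumptions, not taken from the SuperTuxKart plugins:

import bpy

def export_entities(filepath):
    """Write every object's custom properties as a simple entity block."""
    with open(filepath, "w") as f:
        for index, obj in enumerate(bpy.context.scene.objects):
            f.write("// entity %d\n{\n" % index)
            f.write('"name" "%s"\n' % obj.name)
            x, y, z = obj.location
            f.write('"origin" "%g %g %g"\n' % (x, y, z))
            # Custom (ID) properties added in Blender become key/value pairs.
            for key in obj.keys():
                if key.startswith("_"):  # skip internal keys such as _RNA_UI
                    continue
                f.write('"%s" "%s"\n' % (key, obj[key]))
            f.write("}\n")

# Example usage from Blender's text editor or Python console:
# export_entities("/tmp/level.map")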
How can I add a new item - one that runs Workspace open, with the label 'Workspace' - to the World menu of Pharo 4.0? (What can I say... I prefer Workspace over the new what's-it-called. :-)
I've looked at several menu-related items in the Browser, but couldn't really make heads or tails of them. I also tried to find where the menu is stored (it must be somewhere, right?), but couldn't find it.
Also, how would I go about adding it to one of the existing sub-menus of the World menu, and how could I create a new sub-menu (in the World menu) and add it there?
Add the following class method to any class you like. It's best to make one especially for this purpose and load it into your new images:
WorkspaceWorldMenuItem class>>menuCommandOn: aBuilder
menuCommandOn: aBuilder
<worldMenu>
(aBuilder item: #'Workspace')
order: 0.1;
label: 'Workspace';
action: [ Workspace open ]
The interesting part is the <worldMenu> pragma. You usually put it directly after the selector (and comment) and before any other element in the method.
To have a look at example usage, open the Finder, choose the Pragmas mode and search for worldMenu (without the angle brackets).
I am learning Objective-C Cocoa programming for OS X, and object-based programming in general, so I am a big novice here. My question is therefore a bit general, and my guess is the answer is simply "experience"; however, I am curious whether there is some route of knowledge to understanding which methods in various classes are best, or perhaps required, for getting tasks done.
For example, in a programming guide I am instructed to create a document-based program, and the document class contains an array to store data, with the following method bound to a button to create a new entry in the array:
- (IBAction)insertItem:(id)sender {
    // Lazily create the backing array the first time an item is inserted.
    if (!theItems) {
        theItems = [NSMutableArray array];
    }
    // Add a placeholder string, refresh the table view, and mark the document as edited.
    [theItems addObject:@"Double-click to edit."];
    [theTableView reloadData];
    [self updateChangeCount:NSChangeDone];
}
The array is "theItems" and its data is presented in a TableView object. I understand that the steps here add a new string to the array and then refresh the table to display it, followed by marking the document as being in an unsaved state.
What I am not getting is how one would know that these specific steps and methods are required. Intuitively it seems one would just add items to the array, and that would be all that's required for the new values to simply show up in the table view for which the array is the data source. So how would one know that the table view needs to be refreshed with the "reloadData" call? I can see someone (myself) figuring it out by trial and error, but is there some quick resource or guide (i.e., some quick flow chart), either in Xcode or elsewhere, that indicates that for a table view this is a required action to display the new entry?
If I look at Apple's NSTableView class reference, it claims in the overview that you "modify the values in the data source and allow the changes to be reflected in the table view", which suggests the view is updated automatically, so the requirement to call "reloadData" on the view seems a little obscure.
Look for the guides. In the online class reference for NSTableView, there's a section at the top called "Companion Guides". For NSTableView, it lists the Table View Programming Guide for Mac. (In the prerelease 10.10 docs, the guides are listed under Related Documentation in the left-hand sidebar.)
I could have sworn this same information was available in Xcode's Documentation window, albeit somewhat hidden behind a "More related items" pseudo-link, but when I check right now there's no link to the guide anywhere in the NSTableView class reference. Which is a terrible oversight.
You can also browse or search the Guides section of the developer library.
Familiarity, studying the documentation, and possibly reading some good books are the answer. For example, in the docs you quoted (emphasis mine):
you should modify the values in the data source and allow the changes to be reflected in the table view
You should do both these things. If you want it to happen "automatically", look into bindings, which use several other Cocoa features (that you won't understand at this point either) to do the table data source work for you. I'd recommend understanding what is happening manually before handing over control to bindings, so you have some chance of understanding when things go wrong.
As well as looking at the table view documentation, you also need to study the cell, delegate and datasource references. All of those objects work together to give you the functioning table view.
I have to test a frame window with RFT; the application is written with the .NET Framework. My problem is that after adding the frame as a TestObject to the script via drag and drop, it works fine. But after restarting RFT, it isn't able to recognize that frame any more, neither with the find method nor with the highlight function for objects.
I read that there is a way to add objects to the proxies. But this frame is declared with the proxy .Net.FormProxy, and this proxy already exists in the file rational_ft.rftcust as
<Obj L=".Proxy">
    <ClassName>Rational.Test.Ft.Domain.Net.FormProxy</ClassName>
    <Replaces/>
    <UsedBy>[System.Windows.Forms]System.Windows.Forms.Form</UsedBy>
</Obj>
I don't get what the problem is, and especially why it works sometimes but not always.
Thanks for any help.
The problem you have mentioned may happen for the following reasons.
The object recognition is indeed changing. It is not necessarily the object you are having a problem with that has the recognition issue; it may be an object in the parent hierarchy of this object (unless this object is the top-level object).
The second reason could be that the application is not getting enabled during playback; you can try the getRootTestObject().enableForTesting(...) API to force-enable the application.
On a recorded Form object in the Object Map, try using "update recognition properties" to see if there is a change in recognition properties between the actual and the recorded object.
You can also test a different, simple Form application to see whether the issue is related to your application or is a general issue in your environment (I suspect it is related to the application).
So I have a Titanium app, and I just read about single contexts. (Incidentally, somebody here should write a book about programming in Titanium... the only one out there doesn't really mention single contexts or any of that new-fangled stuff. Heck, make it an eBook. I'd buy it.)
The Titanium documentation stresses their use (http://docs.appcelerator.com/titanium/latest/#!/guide/Coding_Strategies-section-29004891_CodingStrategies-Executioncontexts) and then politely forgets to explain how to implement a single context!
So, question:
Let's say I have the awesomeWidget page - this just shows a button, and when you click the button a new screen appears.
The awesomeWidget page is accessed through another page - it is not reached from the root of the Titanium app.
Keeping to a single context, how do I add the view that the button creates to the current window?
Do I:
keep a global pointer to the current (and only) window?
pass the variable holding the current window down to all the following pages that use it
something else?
First off, Titanium keeps a reference to your current window anyway for you, so this use case is easy. For example:
awesomeWidgetButton.addEventListener('click', function(e) {
    var yourView = Ti.UI.createView({...});
    Titanium.UI.currentWindow.add(yourView);
});
If you want to dig further, the concept of a single context is closely tied to the use of CommonJS modules and the require keyword. It is very simple to keep a single context: just never open a window with the url property filled out, and use require() liberally. Other than that, it's up to your imagination to keep track of who points to what and vice versa; the standard patterns and best practices (MVC, singletons, just keep it simple) apply here just as they do when coding in any other language.
I am implementing a view model that is shared by applications on multiple platforms. I am using MvvmCross v3, which has its own MvxEventToCommand class, but I believe the challenge is the same for other frameworks like MVVM Light. As long as the event is used without parameters, the implementation is straightforward, and this is the case for simple interactions like tapping the control.
But when the command needs to handle event arguments things become more complicated. For example, the view model needs to act on certain scroll bar changes (and load more items in the associated list view). Here is the example of XAML:
<cmd:EventToCommand
Command="{Binding ScrollChanged}"
CommandParameter="{Binding EventArgs}" />
(MvvmCross uses MvxEventToCommand, but the principle is the same).
Then in my model I can have the following command handler:
public ICommand ScrollChanged
{
get
{
return new RelayCommand<ScrollChangedEventArgs>(e =>
{
MessageBox.Show("Change!");
});
}
}
(MvxCommand in MvvmCross).
The problem is that ScrollChangedEventArgs is platform-specific, and this code simply will not compile in a portable class library. This is a general problem with any command that needs not just a push when an event is fired but also more specific event details. Moving this code into the platform-specific part is silly, because it more or less kills the concept of portable view models and code-behind-free views. I tried to search for projects that share view models between different platforms, but they all use simple events like "Tap" with no attached event details.
UPDATE 1 I agree with Stuart's remark that view models should only deal with higher-level abstractions, so I will rephrase the original concern: how do I map the results of low-level interactions to a platform-neutral event that triggers a business logic command? Consider the example above: the business logic command is "load more items into a list", i.e. we are dealing with list virtualization, where a limited number of items from a large collection is loaded initially, and scrolling down to the bottom of the list should cause additional items to be loaded.
WinRT can take care of list virtualization by using observable collections that support the ISupportIncrementalLoading interface. The runtime detects this capability and automatically requests extra items from the respective service when the user scrolls down the list. On other platforms this feature has to be implemented manually, and I can't find any other way than reacting to the ScrollViewer's ScrollChanged event. I can then see two further options:
Place an OnScrollChanged handler in a code-behind file and call a higher-level event on the portable view model (such as "OnItemsRequested");
Avoid code-behind and wire the ScrollChanged event directly to the view model, in which case the platform-specific event will need to be remapped first.
As long as there is no support for the second option, putting an event handler in the code-behind file is OK, provided it is done for the sole purpose of event mapping. But I would like to investigate what can be done with the second option. MvvmCross has a MapCommandParameter class which seems able to help, so I wonder if I should exploit that one.
UPDATE 2 I tried the MapCommandParameter approach, and it worked, allowing me to insert a platform-specific adapter that maps low-level events to view-model-specific commands. So the second option worked without any struggle. Stuart also suggested subclassing the list view so there is no need to care about scrolling events at all. I plan to play with that later.
I agree that viewmodel commands should normally be expressed in terms of viewmodel concepts - so it would be "strange" to send the viewmodel a command about the scrollbar value changing, but it might be OK to send the viewmodel a command about the user selecting certain list elements to be visible (which she does via scrolling).
One example where I've done this type of thing previously is in list selection.
I originally did this across multiple platforms using a cross-platform eventargs object -
https://github.com/slodge/MvvmCross/blob/vnext/Cirrious/Cirrious.MvvmCross/Commands/MvxSimpleSelectionChangedEventArgs.cs
this was then used on WindowsPhone (for example) via an EventToCommand class like https://github.com/slodge/MvvmCross/blob/vnext/Cirrious/Cirrious.MvvmCross.WindowsPhone/Commands/MvxSelectionChangedEventToCommand.cs
However... I have to admit that this code hasn't been used much... For list selection we have instead mainly used SelectedItem binding, and there simply haven't been any apps that have needed more complex parameterized commands (so far) - you might even need to go back to very old v1 MvvmCross code to find any samples that use it.