I'm in the process of developing an RPG using Apple's SpriteKit framework. All is well: I have NPCs and game objects that the player can interact with, plus a text box to display text. Now I want to implement quests and the like, but I'm stumped on a good way to create that kind of game content. I did come across Ray Wenderlich's tutorial (https://www.raywenderlich.com/30561/how-to-make-a-rpg) on making an RPG with Lua as the scripting language of choice, but after some trial and error I realized the Lua/Objective-C bridge it uses is far too old and deprecated (I'm not sure how to fix the myriad of defects and errors), and there don't seem to be any viable alternatives. I tried looking on GitHub but couldn't find anything useful to get me started. Thus, I realize I must code my own implementation from scratch.
Any suggestions or tips on how to approach this? Should I have some sort of JSON file that stores the dialogue text, which gets queried so the appropriate content can be retrieved and displayed inside the text box?
I'm hoping to make this a reasonably flexible solution that I can reuse in future projects.
If you are just starting your game now, go ahead and use the iOS 10 tile support (SKTileMapNode) for world construction; see the raywenderlich.com tutorial on it for more. The tile interface handles constructing the world and the performance of tile-based rendering for you; you still need to implement the game logic, items, and so on yourself. JSON is just as good as any other storage format, and you are going to need to implement the dialogue logic yourself. I would also suggest that you have a peek at my texture memory reduction framework, which is very useful for loading complex images that would otherwise consume a ton of texture memory.
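For the quest and dialogue data itself, one JSON file per quest (or one file holding all of them) works fine. As a minimal sketch, with the keys and structure invented purely for illustration rather than any standard:

    {
      "quests": [
        {
          "id": "lost_sword",
          "giver": "blacksmith",
          "dialogue": {
            "offer":    ["My sword was stolen!", "Will you get it back for me?"],
            "progress": ["Any luck finding my sword?"],
            "complete": ["You found it! Take this gold as thanks."]
          },
          "objective": { "type": "fetch", "item": "iron_sword", "count": 1 },
          "reward":    { "gold": 100 }
        }
      ]
    }

Your NPC interaction code then looks up the quest's current state, pulls the matching string array, and feeds it to your existing text box (NSJSONSerialization will parse the file for you). Keeping the state machine in code and only the content in JSON is what makes the approach reusable across projects.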
This is a follow-up to How do you draw text in DirectX 11?
In Direct3D 12 things are much more complex, and since the API is new I couldn't find any suitable libraries online.
I'm building a basic Direct3D 12 FPS test application, and I'd like to display the FPS data on screen with my rendered image.
The general answer to questions like this is "if you have to ask, then you probably should be using DirectX 11." DirectX 12 is an expert-level graphics API that provides immense control and is not particularly concerned with ease of use for novices. See this thread for more thoughts in this vein.
With that out of the way, one option is to use device interop and Direct2D/DirectWrite. See Working with Direct3D 11, Direct3D 10 and Direct2D.
UPDATE: DirectX Tool Kit for DirectX 12 is now available. It includes a SpriteFont / SpriteBatch implementation that will draw text on Direct3D 12 render targets. See this tutorial.
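To give a feel for it, here is a rough sketch of the SpriteFont/SpriteBatch flow from the DirectX Tool Kit for DX12 tutorials. It assumes you already have a working D3D12 app; the .spritefont filename, the heap slot, and the two helper function names are placeholders of mine:

    #include "SpriteBatch.h"
    #include "SpriteFont.h"
    #include "ResourceUploadBatch.h"
    #include "DescriptorHeap.h"
    #include "RenderTargetState.h"
    #include <memory>
    #include <cstdio>

    using namespace DirectX;

    std::unique_ptr<SpriteBatch>    g_spriteBatch;
    std::unique_ptr<SpriteFont>     g_font;
    std::unique_ptr<DescriptorHeap> g_descHeap;

    void CreateTextObjects(ID3D12Device* device, ID3D12CommandQueue* queue,
                           DXGI_FORMAT backBufferFormat)
    {
        g_descHeap = std::make_unique<DescriptorHeap>(device, 1);

        ResourceUploadBatch upload(device);
        upload.Begin();

        RenderTargetState rtState(backBufferFormat, DXGI_FORMAT_D32_FLOAT);
        SpriteBatchPipelineStateDescription pd(rtState);
        g_spriteBatch = std::make_unique<SpriteBatch>(device, upload, pd);

        // .spritefont files are produced by the toolkit's MakeSpriteFont tool.
        g_font = std::make_unique<SpriteFont>(device, upload,
            L"myfont.spritefont",
            g_descHeap->GetCpuHandle(0), g_descHeap->GetGpuHandle(0));

        upload.End(queue).wait();   // kick off the GPU upload and wait for it
    }

    void DrawFps(ID3D12GraphicsCommandList* cmdList, float fps, D3D12_VIEWPORT vp)
    {
        ID3D12DescriptorHeap* heaps[] = { g_descHeap->Heap() };
        cmdList->SetDescriptorHeaps(1, heaps);

        g_spriteBatch->SetViewport(vp);
        g_spriteBatch->Begin(cmdList);
        wchar_t buf[32];
        swprintf_s(buf, L"FPS: %.1f", fps);
        g_font->DrawString(g_spriteBatch.get(), buf, XMFLOAT2(10.f, 10.f));
        g_spriteBatch->End();
    }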
For pure DirectX 12, you need to load the font glyph data into a vertex buffer and render it with a vertex shader and pixel shader. You mentioned libraries online; well, this is expert-level stuff, but fortunately James Stanard at Microsoft released a how-to with their open-source MiniEngine project. He handles multiple fonts, antialiasing, and drop shadows in DirectX 12.
Find the project files on GitHub at https://github.com/Microsoft/DirectX-Graphics-Samples/tree/master/MiniEngine and check out TextRenderer.h and TextRenderer.cpp.
If you want maximum feature set with minimum work you probably should go with DirectWrite on top of a D3D11 interop device, like Chuck said in his answer.
If you want to roll your own high-performance text rendering, you may want to take a look at the text renderer in the MiniEngine example repository on GitHub; it has some interesting ideas.
Unfortunately, the only ways have already been described: interface with DirectWrite, or build your own glyph system.
With the glyph approach, you import a texture file with glyphs on it, cut a small rectangle out of the glyph texture around each character, and then glue those pieces together to form a string. In some cases this results in faster drawing.
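A concrete sketch of that idea, with the atlas layout and vertex format invented purely for illustration (a real renderer would use per-glyph metrics instead of a fixed grid):

    #include <vector>
    #include <string>

    struct Vertex { float x, y, u, v; };

    // Build one textured quad (two triangles) per character, assuming an
    // atlas holding the printable ASCII range in a 16x8 grid starting at space.
    std::vector<Vertex> BuildTextQuads(const std::string& text,
                                       float penX, float penY,
                                       float glyphW, float glyphH)
    {
        const int   cols = 16;                 // glyphs per row in the atlas
        const float du = 1.0f / cols, dv = 1.0f / 8.0f;

        std::vector<Vertex> verts;
        for (char c : text) {
            int   idx = c - 32;                // atlas begins at space (ASCII 32)
            float u0 = (idx % cols) * du, v0 = (idx / cols) * dv;
            verts.insert(verts.end(), {
                {penX,          penY,          u0,      v0     },
                {penX + glyphW, penY,          u0 + du, v0     },
                {penX,          penY + glyphH, u0,      v0 + dv},
                {penX + glyphW, penY,          u0 + du, v0     },
                {penX + glyphW, penY + glyphH, u0 + du, v0 + dv},
                {penX,          penY + glyphH, u0,      v0 + dv},
            });
            penX += glyphW;                    // fixed-width advance for simplicity
        }
        return verts;
    }

You then upload the result to a vertex buffer and draw it with a trivial textured-quad shader pair.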
I think the approach referenced by the others is slightly outdated and destined to fail. Direct3D 11 had the same lack of text-drawing support as Direct3D 12 (there is perhaps some misinformation on that point). It was Direct3D 9 that had built-in text drawing, which worked fine and later supported sprite-batch drawing, so you could render all text in one batch.
It seems backwards to tell people they simply "need to know" the API, or "are not expert enough," to implement such a basic yet tedious system. Such a system is destined to fail in the same way that nobody wants to write in assembly what they can write in C and onward.
The D3D11 and D3D12 math library suffers from the same failure. To define and convert vectors, you are better off including the D3DX9 math headers or writing custom math structures, because the newer methods are so backwards. Someone made it and must like it, but I remember writing a complaint showing how easy it was to do vector operations before versus after: it nearly doubles or triples the number of lines needed to perform basic vector operations and conversions, not even counting the references and learning time you need to understand someone else's library. It seems to be a big failure presented by mathematicians who were never good at programming.
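For a small, concrete illustration of that verbosity complaint, here is a vector normalize in both styles; the DirectXMath calls are the real API, and the D3DX9 lines are shown only as comments for comparison:

    #include <DirectXMath.h>
    using namespace DirectX;

    int main()
    {
        // Old D3DX9 helpers: one call, in place.
        //   D3DXVECTOR3 v(1, 2, 3);
        //   D3DXVec3Normalize(&v, &v);

        // DirectXMath (the D3D11/12-era library): explicit load/operate/store,
        // because XMVECTOR is a SIMD register type and XMFLOAT3 is storage.
        XMFLOAT3 f(1.0f, 2.0f, 3.0f);
        XMVECTOR v = XMLoadFloat3(&f);  // load into SIMD form
        v = XMVector3Normalize(v);      // operate
        XMStoreFloat3(&f, v);           // store back to memory layout
        return 0;
    }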
I want to develop an application for Mac OS X to record audio from one application.
I played around with Soundflower, but it only grabs the full system audio.
I know that I have to use a HAL plug-in. This plug-in is loaded from an application that uses Core Audio and then I can communicate with the plug-in to grab the audio.
My question is: what does such a plug-in look like? Are there examples on the internet? I have not found anything about this topic.
Now that you've decided that using Cocoa injection is a feasible solution to your problem, let's start there.
What you need to do is find out how the ObjC classes in the app are setting up to play audio, and hook in to set a different AU in place of the default system out.
There are two options (besides writing your own custom AU from scratch, which you don't need to do). You can use AUHAL as the AU and capture the data from AUHAL; this is a bit easier from the point of view of hooking things up, but it means you have to write the code that renders and saves the audio. Or you can hook in a save-to-file AU, which is a bit harder to hook up, but once you do, it takes care of rendering automatically.
So, how do you hook things in? Well, most of the higher-level CA calls are written to just write to the current output. If the app is doing things that way, you just need to hook in at startup to find your replacement AU and set it as the current output, in place of the default. On the other hand, if the app is writing directly to an AU that it stores in a variable, you have to hook it so that your AU is stored in that variable instead. And if it's building a graph of AUs, you either replace the default output in the graph, or stick yours in front of it.
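To make the "replacement output unit" option more concrete, here is a minimal sketch in plain C of tapping the default output unit with a render-notify callback. The names CaptureProc and MakeTappedOutputUnit are mine, and all error handling is omitted:

    #include <AudioUnit/AudioUnit.h>

    // Called before and after every render pass of the output unit; the
    // post-render pass sees the audio that is about to hit the hardware.
    static OSStatus CaptureProc(void *inRefCon,
                                AudioUnitRenderActionFlags *ioActionFlags,
                                const AudioTimeStamp *inTimeStamp,
                                UInt32 inBusNumber,
                                UInt32 inNumberFrames,
                                AudioBufferList *ioData)
    {
        if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
            // ioData holds the rendered audio; copy it to a ring buffer
            // or write it to a file here.
        }
        return noErr;
    }

    AudioUnit MakeTappedOutputUnit(void)
    {
        AudioComponentDescription desc = {};
        desc.componentType         = kAudioUnitType_Output;
        desc.componentSubType      = kAudioUnitSubType_DefaultOutput;
        desc.componentManufacturer = kAudioUnitManufacturer_Apple;

        AudioComponent comp = AudioComponentFindNext(NULL, &desc);
        AudioUnit unit = NULL;
        AudioComponentInstanceNew(comp, &unit);
        AudioUnitAddRenderNotify(unit, CaptureProc, NULL); // tap every render
        AudioUnitInitialize(unit);
        return unit;
    }

The injection part, getting the app to use this unit instead of the one it would create itself, is the harder half, and that's where the hooking strategies above come in.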
See TN2091 for some sample code fragments for most of the hard parts for most of the possibilities. It doesn't show you how to put them together, and it's got a lot more about setting inputs than outputs (because that's harder), and the terminology can get confusing, but if you read it carefully, you should be able to find the parts you need.
If you haven't built a simple AU host and AU plugin before, you really should take the time to work through the whole Audio Unit Development Fundamentals guide. (And if you don't think you need to know all that to do something simple, you're wrong. Why CoreAudio is Hard explains half of the reason; the changes between OS X versions are the other half.)
You probably also want to look at CocoaDev's CoreAudioAndAudioUnitsTutorial page; it's a placeholder for a complete tutorial that nobody has ever written, but it links to a lot of useful material.
Meanwhile, if injecting the whole MTCoreAudio framework into the app is feasible, it comes with a ton of nice, complete samples. In fact, even if you aren't going to use the framework, it's worth reading the Overview documentation, and possibly the source code.
I'm an Objective-C newbie. Most of my experience is in Java. Also, I've never really used Xcode before and so I'm pretty new at that as well.
I'm trying to create a simple, single-view Quartz OS X app (not iOS) to display agent-modeling simulations. The graphics are pretty simple; just colored squares and grids. I have been looking at Quartz tutorials and I can see how I could accomplish this (as far as drawing things are concerned). What I can't find is an example that tells me how to tie it all together. What do I put in AppDelegate? Do I need a WindowController? How do I link that up with AppDelegate? I got as far as creating a Quartz Composer View in Interface Builder for my app, but I have no idea where to go from there.
As I mentioned, I've looked at numerous tutorials, but I can't find anything that explains how to link everything together.
You should visit this web page before you do anything else. It will show you how a Cocoa application is structured and where the appropriate entry points are to place your code.
While the entire article merits reading, visit the section "Entry and Exit Points," which best addresses your particular questions.
I am writing code for RRT (rapidly-exploring random trees), a sampling-based motion-planning algorithm. I wrote the code in MATLAB, but now I am rewriting it in C++.
I want to know how we can plot the robot's path in real time, together with all the obstacles.
What I want is this: I want to see my robot traversing the space, so basically it's about the graphics. I tried SFML but I'm having problems with it. Some people suggested OpenCV or OpenGL, but I don't think they are easy to use; I'm looking for a simple-to-use library.
If OpenCV or OpenGL is the answer, then please tell me what specifically I need to use from these libraries. I am working on Linux (Ubuntu 11.10).
You might want to look into using MATLAB's own compiler to generate a standalone application directly from your M-code. That way you don't have to rewrite everything from scratch.
I have used the following link a couple of times just to refresh my memory
http://technologyinterface.nmsu.edu/5_1/5_1f/5_1f.html
E.g., if you have made an M-function with the following content (example adapted from the link):
    function y = PolyValue(poly, x)
    % Evaluate the polynomial with coefficients 'poly' at the points 'x',
    % e.g. poly = [1 2 -1 4 -5], x = [5, 6].
    y = polyval(poly, x);
you could use the command
    mcc -m PolyValue
to compile the program.
This command then gives you the files necessary to embed it in a larger C++ program.
It should even support GUI elements and graphs.
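If instead of a standalone executable you build a C++ shared library (something like mcc -W cpplib:libPolyValue -T link:lib PolyValue.m; the exact flags vary by MATLAB version), calling it from C++ looks roughly like the sketch below. The libPolyValue* names are generated by mcc from the library name:

    #include "libPolyValue.h"   // header generated by mcc

    int main()
    {
        // Start the MATLAB runtime, then the generated library.
        if (!mclInitializeApplication(NULL, 0)) return -1;
        if (!libPolyValueInitialize()) return -1;

        // Build the inputs as 1x5 and 1x2 row vectors.
        double polyData[] = {1, 2, -1, 4, -5};
        double xData[]    = {5, 6};
        mwArray poly(1, 5, mxDOUBLE_CLASS);
        poly.SetData(polyData, 5);
        mwArray x(1, 2, mxDOUBLE_CLASS);
        x.SetData(xData, 2);

        mwArray y;
        PolyValue(1, y, poly, x);   // 1 = number of outputs requested

        libPolyValueTerminate();
        mclTerminateApplication();
        return 0;
    }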
Something like http://www.ros.org/news/2011/01/open-motion-planning-library-ompl-released.html
may be what you are looking for.
I've worked in both OpenCV for some image recognition projects and OpenGL for rendering displays and whether you go with a library like above or render it yourself is really up to how complex the display needs to be. Ask yourself some questions about how many different obstacle scenarios you are looking at. Are there a large multitude of possible shapes for the obstacles and the robot? Is the problem deterministic (in terms of both the robot's movement and the environment)?
In terms of OpenGL and OpenCV being hard for newcomers, this is very much the case, and choosing to work in C++ makes the problem harder for beginners. As another user mentioned, wrapping your MATLAB code instead of throwing it away may be a viable option. Even running the MATLAB engine in the background to execute your scripts from C++ can work, if speed is not a critical factor. See http://au.mathworks.com/help/matlab/matlab_external/introducing-matlab-engine.html for more information.
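If you do go with OpenCV, the drawing primitives plus highgui are all you specifically need: cv::line for tree edges, cv::rectangle and cv::circle for obstacles and the robot, and cv::imshow/cv::waitKey to display each frame. A minimal sketch, where the Edge struct and the hard-coded coordinates are purely illustrative:

    #include <opencv2/opencv.hpp>
    #include <vector>

    struct Edge { cv::Point from, to; };

    int main()
    {
        // White 600x600 canvas representing the configuration space.
        cv::Mat canvas(600, 600, CV_8UC3, cv::Scalar(255, 255, 255));

        // An obstacle as a filled rectangle (thickness -1 = filled).
        cv::rectangle(canvas, cv::Rect(200, 150, 80, 200), cv::Scalar(0, 0, 0), -1);

        // Tree edges; in your code these come from the RRT as it grows.
        std::vector<Edge> edges = {{{50, 50}, {120, 90}}, {{120, 90}, {180, 60}}};
        for (const Edge& e : edges)
            cv::line(canvas, e.from, e.to, cv::Scalar(255, 0, 0), 1);

        // The robot's current position.
        cv::circle(canvas, cv::Point(50, 50), 5, cv::Scalar(0, 0, 255), -1);

        cv::imshow("RRT", canvas);
        cv::waitKey(30);   // call once per iteration for a real-time animation
        return 0;
    }

Redrawing the canvas and calling imshow/waitKey inside your planning loop gives you the real-time view of the robot traversing the space.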
I know this will be a difficult question, so I am not necessarily looking for a direct answer but maybe a tutorial or a point in the right direction.
What I am doing is programming a robot that will be controlled by a remote operator. We have a 3D rendering of the robot in SolidWorks. What I am looking to do is get the 3D file into VB (probably using DX9) and be able to manipulate it from code, so that the remote operator has a better idea of what the robot is doing. The operator will also have live video to look at, but that doesn't really matter for this question.
Any help would be greatly appreciated. Thanks!
Sounds like a tough idea to implement. For VB you are stuck with MDX 1.1 (which comes with the DirectX SDK) or SlimDX (or another third-party managed DirectX wrapper). The latest XNA (the replacement for MDX 1.1/2.0b) is only available to C# coders. You can try some workarounds, but they're not recommended and you won't get much community support. These are the minimum you need to get VB displaying 3D content.
If you want to save some trouble, you could use a ready-made game engine to simplify the job. Try Ogre and its managed wrapper, MOgre. It was one of the candidates for my project; I ended up with SlimDX because Ogre didn't support video very well, but since video is not one of your requirements, you can seriously consider it. Most samples are in C#, so you'll need to convert them to VB.NET, but that isn't hard.
Here comes the harder part: you need to export your SolidWorks model to the DirectX format (*.x). A quick Google search only turned up a few paid tools for that, so you might need to spend a bit, or spend more time looking for free converters.
That's about it. If you have more questions, post again. Good luck!
I'm not sure what the real question is, but what I suspect you are trying to do is manipulate a SolidWorks model of a robot with some sort of manual input. Assuming that's the correct question, there are two aspects that need to be dealt with:
1) The SolidWorks module: Once the model of the robot is working properly in SW, a program can be written in VB.NET that manipulates the positional mates for each of the joints. Also using VB, you can build a window with slide bars etc. that allows the operator to "remotely" control the robot. Once this is done, there is a great opportunity to set up a table that stores the sequential steps; the VB program could then be extended to let the robot "cycle" through a sequence of moves. If any obstacles are also added to the model, this becomes a great tool for collision detection and offline training.
2) If the question also includes incorporating a physical operator pendant, there are a number of potential solutions. Ideally, the robot software would provide a VB library for communicating with and commanding the robot programmatically. If so, the VB code could be given a "run" mode where the SW robot is controlled by the operator pendant instead of by the controls in the VB window (as mentioned above). This would allow the operator to work "offline" with a virtual robot.
Hope this helps.