RealityKit: How to create custom meshes at runtime?

RealityKit has a bunch of useful functionality, like built-in multiuser synchronization over a network to support shared worlds, but I can't seem to find much documentation regarding mesh/object creation at runtime. RealityKit has some basic mesh generation functions (box, sphere, etc.), but I'd like to create my own procedural meshes at runtime (vertices and indices), and likely regenerate them every frame, immediate-mode rendering style.
Firstly, is there a way to do this, or is RealityKit too closed off to allow much custom rendering?
Secondly, would there be an alternative solution that might let me use some of RealityKit's synchronization? For example, is that part really just another library I can use with ARKit 3? What is it called? I'd also like to be able to synchronize arbitrary data between users' devices, so the built-in system would be helpful.
I can’t really test this because I don’t have any devices that can support the beta software at the moment. I am trying to learn whether I’ll be able to do what I want for my program(s) if I do get the necessary hardware, but the documentation is sparse.

Feb 2022
As of macOS 12 / iOS 15, RealityKit includes API to allow you to provide your own procedurally generated meshes, primarily through the following methods:
generate(from:) - two overloads: one that takes a list of MeshDescriptor values you build yourself, and one that takes the lower-level contents (models and instances) you assemble yourself
generateAsync(from:) - the asynchronous counterparts of the same two overloads
These provide the means to create MeshResource instances, synchronously or asynchronously.
The Apple documentation (as I'm writing this) is non-existent, but the APIs themselves are reasonably well documented if you look into the generated Swift interfaces. Max Cobb has an article (on Medium), Getting Started with RealityKit: Procedural Geometries, that describes how to use a MeshDescriptor to build a surface mesh, and he also has a Swift package with some additional geometries that use this technique, RealityGeometries, whose source isn't hard to read through to see it in action.
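For a sense of what the API looks like, here's a minimal sketch (assuming RealityKit 2 on iOS 15 / macOS 12) that builds a single triangle from raw positions and indices; everything beyond the RealityKit types themselves is illustrative:

    import RealityKit

    // Minimal sketch: build a one-triangle mesh from raw vertices and indices.
    var descriptor = MeshDescriptor(name: "triangle")
    descriptor.positions = MeshBuffers.Positions([
        SIMD3<Float>(0, 0, 0),
        SIMD3<Float>(1, 0, 0),
        SIMD3<Float>(0, 1, 0)
    ])
    descriptor.primitives = .triangles([0, 1, 2])

    do {
        // Synchronous generation; generateAsync(from:) is the asynchronous variant.
        let mesh = try MeshResource.generate(from: [descriptor])
        let entity = ModelEntity(mesh: mesh,
                                 materials: [SimpleMaterial(color: .white, isMetallic: false)])
        // Add `entity` to an anchor in your scene as usual.
    } catch {
        print("Mesh generation failed: \(error)")
    }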

As far as I know, RealityKit can only use primitives or USDZ files as models. You can generate USDZ files on-device using ModelIO, but that isn't feasible for your use case.
The synchronization, however, is built into ARKit, although you have to do a bit more work when you are not using RealityKit.
1. Create a MultipeerConnectivity session between the devices (that's something you need to do for RealityKit as well).
2. Configure your ARSession and set isCollaborationEnabled, which makes your session output collaboration data via the session(_:didOutputCollaborationData:) delegate callback.
3. Send this data using your MultipeerConnectivity session.
4. When receiving data from other users, integrate it into your session using update(with:). (See the sketch after this list.)
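A rough sketch of steps 2-4 in Swift (the MultipeerConnectivity setup and its delegate plumbing are assumed to exist elsewhere; the class and method names here are illustrative):

    import ARKit
    import MultipeerConnectivity

    // Rough sketch: wire ARKit collaboration data to a MultipeerConnectivity session.
    final class CollaborationHandler: NSObject, ARSessionDelegate {
        let arSession: ARSession
        let mcSession: MCSession   // assumed to be created and connected elsewhere

        init(arSession: ARSession, mcSession: MCSession) {
            self.arSession = arSession
            self.mcSession = mcSession
            super.init()
            arSession.delegate = self
        }

        func run() {
            let configuration = ARWorldTrackingConfiguration()
            configuration.isCollaborationEnabled = true
            arSession.run(configuration)
        }

        // Steps 2/3: ARKit emits collaboration data; forward it to the connected peers.
        func session(_ session: ARSession, didOutputCollaborationData data: ARSession.CollaborationData) {
            guard !mcSession.connectedPeers.isEmpty,
                  let encoded = try? NSKeyedArchiver.archivedData(withRootObject: data,
                                                                  requiringSecureCoding: true)
            else { return }
            try? mcSession.send(encoded, toPeers: mcSession.connectedPeers, with: .reliable)
        }

        // Step 4: call this from your MCSessionDelegate when data arrives from a peer.
        func receive(_ data: Data) {
            if let collaborationData = try? NSKeyedUnarchiver.unarchivedObject(
                ofClass: ARSession.CollaborationData.self, from: data) {
                arSession.update(with: collaborationData)
            }
        }
    }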
To send arbitrary information between users, you can either send it via MultipeerConnectivity independently of ARKit or use custom ARAnchors, which are the preferred option when you're dealing with positional data, e.g. when a user has placed an object at a specific location.
Instead of adding objects directly (using something like scene.rootNode.addChildNode() in SceneKit), you create a special ARAnchor subclass with all the information needed to add your model, and add that anchor to your session.
Then you add the object in the renderer(_:didAdd:for:) callback. This has the benefits of better tracking around your object (because you added an anchor at that position, indicating to ARKit that it should remember it) and of not needing anything special for multiuser experiences, because ARKit calls renderer(_:didAdd:for:) both for manually added anchors and for automatically added ones, for example when it receives collaboration data.
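A minimal sketch of such an anchor subclass (the modelName payload is just an example; the NSSecureCoding support is what lets the anchor travel inside collaboration data):

    import ARKit

    // Example anchor subclass carrying the information needed to recreate a placed model.
    final class ModelAnchor: ARAnchor {
        let modelName: String

        init(modelName: String, transform: simd_float4x4) {
            self.modelName = modelName
            super.init(name: "model", transform: transform)
        }

        // ARKit copies anchors internally, so this initializer is required.
        required init(anchor: ARAnchor) {
            self.modelName = (anchor as? ModelAnchor)?.modelName ?? ""
            super.init(anchor: anchor)
        }

        // Secure coding support allows the anchor to be shared with other participants.
        override class var supportsSecureCoding: Bool { true }

        required init?(coder: NSCoder) {
            guard let name = coder.decodeObject(of: NSString.self, forKey: "modelName") as String?
            else { return nil }
            self.modelName = name
            super.init(coder: coder)
        }

        override func encode(with coder: NSCoder) {
            super.encode(with: coder)
            coder.encode(modelName as NSString, forKey: "modelName")
        }
    }

    // Placing: session.add(anchor: ModelAnchor(modelName: "rocket", transform: worldTransform))
    // Rendering: create the model in renderer(_:didAdd:for:) whenever a ModelAnchor shows up.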

Related

Record sound of one application

I want to develop an application for Mac OS X to record audio from one application.
I played around with Soundflower, but it only grabs the full system audio.
I know that I have to use a HAL plug-in. This plug-in is loaded from an application that uses Core Audio and then I can communicate with the plug-in to grab the audio.
My question is: What does such a plug-in look like? Are there examples on the internet? I have not found anything about this topic.
Now that you've decided that using Cocoa injection is a feasible solution to your problem, let's start there.
What you need to do is find out how the ObjC classes in the app are setting up to play audio, and hook in to set a different AU in place of the default system out.
There are two options (besides writing your own custom AU from scratch, which you don't need to do). You can use AUHAL as the AU and capture the data from AUHAL. This is a bit easier from the point of view of hooking things up, but it means you have to write the code that renders and saves the audio. Or you can hook in a save-to-file AU, which is a bit harder to hook up, but once you do, it takes care of rendering automatically.
So, how do you hook things in? Well, most of the higher-level CA calls are written to just write to the current output. If the app is doing things that way, you just need to hook in at startup to find your replacement AU and set it as the current output, in place of the default. On the other hand, if the app is writing directly to an AU that it stores in a variable, you have to hook it to store your AU in that variable instead. And if it's building a graph of AUs, you either replace the default output in the graph, or stick yours in front of it.
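As a rough illustration of the AUHAL route, here's a minimal sketch (written in Swift for brevity, though the original context would be C/Objective-C) of instantiating the AUHAL output unit you would swap in; the hooking/injection itself is app-specific and not shown:

    import AudioToolbox

    // Minimal sketch: instantiate AUHAL as the output unit to swap in for the default output.
    var description = AudioComponentDescription(
        componentType: kAudioUnitType_Output,
        componentSubType: kAudioUnitSubType_HALOutput,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0)

    var outputUnit: AudioUnit?
    if let component = AudioComponentFindNext(nil, &description) {
        AudioComponentInstanceNew(component, &outputUnit)
        if let unit = outputUnit {
            // Attach a render-notify callback here to capture the audio buffers,
            // then initialize and start the unit as usual.
            AudioUnitInitialize(unit)
            AudioOutputUnitStart(unit)
        }
    }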
See TN2091 for some sample code fragments for most of the hard parts for most of the possibilities. It doesn't show you how to put them together, and it's got a lot more about setting inputs than outputs (because that's harder), and the terminology can get confusing, but if you read it carefully, you should be able to find the parts you need.
If you haven't yet built a simple AU host and AU plugin before, you really should take the time to work through the whole Audio Unit Development Fundamentals guide. (And if you don't think you really need to know all that to do something simple, you're wrong. Why CoreAudio is Hard explains half of the reason; the changes between OS X versions are the other half.)
You probably also want to look at CocoaDev's CoreAudioAndAudioUnitsTutorial page, which is a placeholder for a complete tutorial that nobody's ever written, but it has links to a lot of useful stuff.
Meanwhile, if injecting the whole MTCoreAudio framework into the app is feasible, it comes with a ton of nice, complete samples. In fact, even if you aren't going to use the framework, it's worth reading the Overview documentation, and possibly the source code.

Is there still a difference between a library and an API?

Whenever I ask people about the difference between an API and a library, I get different opinions. Some give this kind of definition, saying that an API is a spec and a library is an implementation...
Some will tell you this type of definition, that an API is a bunch of mapped out functions, and a Library is just the distribution in compiled form.
All this makes me wonder, in a world of web code, frameworks and open-source, is there really a practical difference anymore? Could a library like jQuery or cURL crossover into the definition of an API?
Also, do frameworks cross over into this category at all? Is there part of Rails or Zend that could be more "API-like," or "libraryesque"?
Really looking forward to some enlightening thoughts :)
My view is that when I speak of an API, it means only the parts that are exposed to the programmer. If I speak of a 'library' then I also mean everything that is working "under the hood", though part of the library nevertheless.
A library contains re-usable chunks of code (a software program).
These re-usable pieces of code are linked to your program through APIs (Application Programming Interfaces). That is, the API is the interface to the library through which the re-usable code is linked to your application program.
In simple terms, an API is an interface between two software programs which facilitates the interaction between them.
For example, in procedural languages like C, the library math.c contains the implementations of mathematical functions such as sqrt, exp, log, etc. It contains the definitions of all these functions.
These functions can be referenced through the API math.h, which describes and prescribes the expected behavior.
In that sense, an API is a specification (math.h explains all the functions it provides, their arguments, the data they return, etc.) and a library is an implementation (math.c contains all the definitions of these functions).
An API is the part of a library that defines how it will interact with external code. Every library has an API; the API is the sum of all its public/exported parts. Nowadays the meaning of API has widened: we might also call the way a web site/service interacts with code an API. You can also say that a device has an API, meaning the set of commands you can call.
Sometimes these terms get mixed together. For example, you might have some server app (like TFS). It comes with an API, and this API is implemented as a library. But that library is just a middle layer between you and the thing that actually executes your calls. If, however, the library itself contains all the action code, then we can't say that the library is an API.
I think a library is the set of all classes and functions that can be used from our code to do our task easily. But the library can also contain some private functions, for its own use, which it does not want to expose.
An API is the part of the library which is exposed to the user. So whatever documentation we have regarding a library, we call it API documentation, because it contains only those classes and functions to which we have access.
We first have to define an interface.
Interface: the means by which two "things" talk to each other and exchange information. A "thing" could be (1) a human or (2) running code of any sort (e.g. a library, a desktop application, an OS, a web service, etc.).
If a human wants to talk to a program, he needs a graphical user interface (GUI) or a command-line interface (CLI). Both are types of interfaces that humans (but not programs) like to use.
If, however, running code (of any sort) wants to talk to other running code (of any sort), it doesn't need or want a GUI or CLI; rather, it needs an Application Programming Interface (API).
So, to answer the original poster's question: a library is a kind of running code, and the API is the means by which that running code talks to other running code.
In clear and concise language:
Library: a collection of classes and methods stored for re-use.
API: the part of the library's classes and methods which can be used by a user in his/her own code.
From my perspective, whatever functions are accessible to the caller can be called the API of a library file; the library file also has some functions which are private, and we cannot access them.
There are two cases in which we speak or think of an API:
Computer program using library
Everything else (wider meaning)
I think that in the first case, thinking in terms of an API is confusing, because we always use a library. There are only libraries. An API without a library doesn't exist, though there's a tendency to think in such terms.
How about The Standard Template Library (STL) in C++? It's a software library.
You can have different libraries with the same API, meaning the same set of available classes, objects, methods, functions, procedures, or whatever terms you like in some programming language. Put differently, we have different implementations of some "standard" library.
Some analogy may be that: SQL is a standard but can have different implementations. What you use is always some SQL engine which implements SQL. You may follow only standard set of features or use some extended, specific to that implementation.
And what "under the hood" in library is not your concern, except in terms of differences in efficiency by different implementations of such library.
Of course I'm aware, that this way of thinking is not what is a "generally binding standard". Just a lot of new terms have been created, that are not always clear, precise, intuitive, that brings some confusion. When Oracle speaks about Collections. It's not library, it's not API, it's a "Collections Framework".
Without using technical terms, I would like to share my understanding of API vs. library.
The way I distinguish 'library' and 'API' is by imagining a situation where I go to a book library. When I go there, I request a book I need from a 'librarian', without knowing how the entire library is managed.
I make a simple relation between them like this:
Library = a book library, which has a whole system and staff to manage the books.
API = a librarian who gives me simple access to the book I need.

sample mac Firefox Plugins?

I'm trying to re-write an old image-viewing plugin for the Mac. The old version uses QuickDraw (I said it was old) and resources (really really old), so it doesn't work in Firefox 3.6 (which is why I'm re-writing it).
I know some Objective-C, so I figure I'm going to re-write this in that, using new-fangled Mac routines and nibs, etc. However, I don't know how to start. I've got the BasicPlugin example that comes with the Mozilla source, so I know how to create a plugin with entry points, etc. However, I don't know how to create the nib, how to interface Obj-C with the entry points, etc.
Does anyone know of a more advanced sample for mac than BasicPlugin.bundle? (Preferably simple enough that I can just look at it and understand it...)
thanks.
Sadly, I don't really know of any good "intermediate" example. However, integrating Obj-C isn't that difficult. Thus, the following is a short overview of what needs to be done.
You can use Obj-C and C/C++ sources in the same project; it's just advisable to keep them separated to some extent. This can, for example, be done by letting the source files with the entry points and other NPAPI interfacing stay plain C or C++ files, and forwarding calls into the plugin from there.
Opaque pointers help to keep a clean separation; see e.g. here.
The main changes to your plugin include switching to different drawing and event models. These have to be negotiated in NPP_New(); here is an example for the drawing model. When using Cocoa, and to support 64-bit environments, you need to use the Cocoa event model.
To draw UI elements you should be able to use an NSGraphicsContext created from the CGContextRef and then draw an NSView into that context. See also the details provided in this post and its follow-ups.
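A small sketch of that bridging (Swift used here for brevity; the function name is illustrative):

    import Cocoa

    // Illustrative sketch: wrap the CGContext the browser hands the plugin in an
    // NSGraphicsContext so normal Cocoa drawing (or an NSView) can be used.
    func drawPluginContent(_ view: NSView, into cgContext: CGContext) {
        let nsContext = NSGraphicsContext(cgContext: cgContext, flipped: true)
        NSGraphicsContext.saveGraphicsState()
        NSGraphicsContext.current = nsContext
        // Render the view's content into the supplied context.
        view.displayIgnoringOpacity(view.bounds, in: nsContext)
        NSGraphicsContext.restoreGraphicsState()
    }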

Applescript Inside of a Cocoa Application

For the application I am writing, I need to access some other applications' items, for which AppleScript seems the best way to go. I have been using the Appscript framework, which worked well because it allowed me to run scripts on a separate thread so my app wouldn't lock up when an AppleScript was taking a while. However, now I am attempting to make my application 64-bit compatible, and it seems like the Appscript framework does not support 64 bit. Is there a "good" way to use AppleScript in Cocoa that will not lock up my application, but still give me the full control I need?
--firen
It seems like SBApplication should work, but I haven't used it before.
According to #cocoadevcentral:
SBApplication: use to make cross-application scripting calls with Objective-C instead of AppleScript. Ex: get current iTunes track.
Here is the excerpt from the documentation:
The SBApplication class provides a mechanism enabling an Objective-C program to send Apple events to a scriptable application and receive Apple events in response. It thereby makes it possible for that program to control the application and exchange data with it. Scripting Bridge works by bridging data types between Apple event descriptors and Cocoa objects.
Although SBApplication includes methods that manually send and process Apple events, you should never have to call these methods directly. Instead, subclasses of SBApplication implement application-specific methods that handle the sending of Apple events automatically.
For example, if you wanted to get the current iTunes track, you can simply use the currentTrack method of the dynamically defined subclass for the iTunes application—which handles the details of sending the Apple event for you—rather than figuring out the more complicated, low-level alternative:
[iTunes propertyWithCode:'pTrk'];
If you do need to send Apple events manually, consider using the NSAppleEventDescriptor class.
Hope that helps!
As Blaenk mentioned, Scripting Bridge may well be the way to go, although it can prove somewhat inefficient if you have to iterate through large arrays, etc.
The simplest way to run an AppleScript in Cocoa is using NSAppleScript.
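For example, a minimal sketch (in Swift; the script string is just an illustration, and note that executeAndReturnError(_:) runs synchronously on the calling thread):

    import Foundation

    // Minimal sketch: run an AppleScript snippet and read back its result.
    let source = "tell application \"iTunes\" to get name of current track"
    if let script = NSAppleScript(source: source) {
        var errorInfo: NSDictionary?
        let result = script.executeAndReturnError(&errorInfo)
        if let errorInfo = errorInfo {
            print("AppleScript error: \(errorInfo)")
        } else {
            print(result.stringValue ?? "(no string result)")
        }
    }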
Apple has some pretty good examples, which I found useful when I needed to do something similar. There are three articles you might want to take a look at. They all contain some sample code, which I always find very useful.
A Few Examples of using Scripting Bridge
Performance & Optimisation with Scripting Bridge
NSAppleScript Technote/Example
I created a gist with the full URLs as I can't post more than one link, what with being a newbie and all.
http://gist.github.com/130146
it seems like the Appscript framework does not support 64 bit.
Should work. Make sure you set the correct architectures and SDK (64-bit requires 10.5) in the Xcode project. File a bug report if you have a specific problem.

How do I create Cocoa interfaces without Interface Builder?

I would prefer to create my interfaces programmatically. It seems as if all the docs on Apple Developer assume you're using Interface Builder. Is it possible to create these interfaces programmatically, and if so, where do I start learning how to do this?
I thought the relevant document for this, if possible would be in this section: http://developer.apple.com/referencelibrary/Cocoa/idxUserExperience-date.html
I like the question, and I'd also like to know of resources for going IB-less. Usefulness (the "why") is limited only by imagination. Off the top of my head, here are some possible reasons to program UIs explicitly:
Implementing a better Interface Builder.
Programming dynamic UIs, i.e., ones whose structure is not knowable statically (at compile/xcode time).
Implementing the Cocoa back-end of a cross-platform library or language for UIs.
There is a series of blog posts on working without a nib and a recent description by Michael Mucha on cocoa-dev.
I would prefer to create my interfaces programmatically.
Why? Interface Builder is easier and faster. You can't write a typo by drag and drop, and you don't get those oh-so-handy Aqua guides when you're typing rectangles by hand.
Don't fight it. Interface Builder is your friend. Let it help you.
If you insist on wasting your own time and energy by writing your UI in code:
Not document-based (generally library-based, like Mail, iTunes, iPhoto): Create a subclass of NSObject, instantiate it, and make it the application's delegate; in the delegate's applicationDidFinishLaunching: method, create a window, populate it with views, and order it front. (A minimal sketch of this case follows below.)
Document-based (like TextEdit, Preview, QuickTime Player): In the makeWindowControllers method in your subclass of NSDocument, create your windows (and populate them with views) and create window controllers for them, making sure to send yourself addWindowController: for each window controller.
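To make the non-document-based case concrete, here's a minimal sketch of an IB-less app (written as a main.swift in Swift, though the structure is the same in Objective-C; with no nib, the application object is set up by hand too):

    import Cocoa

    // Minimal sketch of an IB-less, non-document-based app.
    final class AppDelegate: NSObject, NSApplicationDelegate {
        private var window: NSWindow?

        func applicationDidFinishLaunching(_ notification: Notification) {
            let window = NSWindow(
                contentRect: NSRect(x: 0, y: 0, width: 480, height: 320),
                styleMask: [.titled, .closable, .resizable],
                backing: .buffered,
                defer: false)
            window.isReleasedWhenClosed = false   // we keep a strong reference ourselves
            window.title = "Built in code"

            // Populate the window with views programmatically.
            let label = NSTextField(labelWithString: "No nib involved")
            label.setFrameOrigin(NSPoint(x: 20, y: 280))
            window.contentView?.addSubview(label)

            window.center()
            window.makeKeyAndOrderFront(nil)
            self.window = window
        }
    }

    // main.swift: no nib means no automatic NSApplication setup either.
    let app = NSApplication.shared
    let delegate = AppDelegate()
    app.delegate = delegate
    app.setActivationPolicy(.regular)
    app.run()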
As a completely blind developer I can say that IB is not compatible with VoiceOver (the built-in screen-reader on OS X).
This means that without access to robust documentation on using Cocoa without IB I cannot develop apps for OS X / iPhone in Cocoa, which means I (ironically) cannot easily develop apps that are accessible to the blind (and all others) on OS X / iOS.
My current solution, which I would prefer not to use, is Java + SWT, of course this works for OS X, not so much for iOS.
In fact, IB becomes totally useless when you start to write your own UI classes. Let's say you create your own button that uses a skin system based on a plist, or you create a dynamic toolbar that loads and unloads items based on the user's selection.
IB doesn't accept custom UI elements, so more complex UIs can't use it. And YES, you will want to do more complex things than what UIKit gives you.
Though this is quite a bit old...
I have tried many times to do everything programmatically. It is hard, but possible.
Update:
I posted another question for this specific issue: View-based NSOutlineView without NIB?, and now
I believe everything can be done programmatically, but it's incredibly hard without consulting Apple engineers, due to the lack of information and examples.
The argument below might be off-topic, but I'd like to note why I strongly prefer the programmatic way.
I also prefer the programmatic way, because:
A static layout tool cannot handle anything dynamic.
Reproducing the same UI state across multiple NIBs is hard. Everything is implicit or hidden; you need to visit all the panels to find parameters. This kind of work is very mistake-prone.
Managing consistent state is hard, because reproducing the same look is hard.
Automation is impossible. You cannot make an auto-generated input form.
Parameter indirection, such as a variable element size chosen by the user, is not possible.
Aiming at a small point is a lot harder than hitting finger-sized keys at fixed locations; funny that this is a serious usability issue for developers!
IB sometimes screws up. That means the file still compiles and works, but when I open the source it looks broken and further editing becomes impossible. (You may not have experienced this yet, but if a XIB file gets complex enough, it will happen.)
It's image-based serialization. The concept is good, but the problem is that it's image-based only. IB doesn't keep source code that can rebuild the UI from a clean boot by replaying it, and a clean boot is very important to guarantee a specific running state. Also, we cannot fix bugs in the source code; the bugs just stack up infinitely. This is the core reason why we cannot reproduce an equal (not just similar-looking) UI state in IB.
Of course this stuff can be worked around by post-processing the NIB UI, but if we have to configure everything again anyway, there's no reason to use IB in the first place.
With text code, it's easy to reproduce the same state: just copy the code. It's also easy to inspect and fix the wrong part, because we have full control. But in IB, we have no control over the hard-core details. IB can't be the ultimate solution. It's like Photoshop, but even Photoshop offers a text-based scripting facility. A GUI is a moving program, not a static image or graphic; the IB approach is wrong even for visual editing of a GUI. If you're one of the Apple folks reading this, I beg you to remove the whole dependency on IB completely, ASAP.
IB can't be ultimate solution. It's like a Photoshop, but even Photoshop offers text-based scripting facility. GUI is a moving program, and not a static image or graphic. An IB approach is completely wrong even for visual editing of GUI. If you're one of the Apple folks reading this, I beg you to remove whole dependency to IB completely ASAP.