CoreMIDI Manufacturer Presets - objective-c

I built a virtual MIDI controller with CoreMIDI and would like to import manufacturer presets for Control Change (e.g. CC value, the effect name associated with each CC number, preset name, etc.). Is there a simple way to do this, or do I need to hard-code this information? I have found the MIDI manufacturer IDs on the MMA website; can these be used to obtain specific data in virtual instruments? Thanks.

MIDI has never provided a means for devices to describe themselves.
There have been efforts to standardise MIDI parameter sets, such as General MIDI, and vendor standards such as Yamaha XG and Roland GS, but even amongst instruments from the same vendor, the control sets were not consistent. Perhaps this is not surprising: for this to work, the sample data used for the voices would need to be standardised as well, and of course this is the differentiator between instruments.
What has tended to happen is that manufacturers have made heavy use of SYSEX for control functions in ways that are entirely non-standardised, even amongst their own products.
Building any kind of generalised MIDI editor requires you to create a mapping table for each device you intend to control, describing the controls and their MIDI mapping. You'll usually find a substantial MIDI implementation chart containing this data in the user manual of each instrument.
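In practice that means hard-coding the table (or loading it from a file you curate yourself). A minimal sketch of what such a table might look like, written in C++ for brevity; the struct names are mine, and the CC assignments below follow common General MIDI conventions rather than data from any real device:

    #include <cstdint>
    #include <map>
    #include <string>

    // One entry per controllable parameter, transcribed from the
    // device's MIDI implementation chart.
    struct CCMapping {
        std::string effectName;   // e.g. "Filter Cutoff"
        uint8_t     minValue;     // usually 0
        uint8_t     maxValue;     // usually 127
    };

    struct DevicePreset {
        std::string presetName;
        std::map<uint8_t, CCMapping> ccMap;   // keyed by CC number
    };

    // Hypothetical table for one device; CC 71/74/91 are common General
    // MIDI assignments, but a real device may differ.
    static const DevicePreset kExamplePreset = {
        "Init Patch",
        {
            { 74, { "Filter Cutoff",    0, 127 } },
            { 71, { "Filter Resonance", 0, 127 } },
            { 91, { "Reverb Send",      0, 127 } },
        }
    };

An Objective-C dictionary or a per-device plist would serve the same purpose.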

As of 2020, this is probably best accomplished with MIDI Capability Inquiry (MIDI-CI). Parameter recall can be done with MIDI-CI Property Exchange.

Related

CAN-bus bootloader standards

I'm developing an open source OTA update system for a few MCUs of a certain project. I wonder if there is some "standard" protocol for CAN-bus based bootloaders. Everything I have seen online and in application notes from the chip manufacturers seems to use its own brand of communication, and thus its own specialized upload software too (mainly for demonstration purposes in the ANs).
My question is, am I missing something? Is there some standard way of doing this I'd rather adhere to, or should I just roll my own like they do and call it a day?
Features I'm interested in for the protocol side besides the obvious ones: checksumming, digital signatures, authenticated encryption.
Based on your tag, although I do not see this in your question itself, I assume for now that you want to develop a boot-loader for automotive ECUs that have a CAN connection.
The relevant protocols, which provide the services, are ISO 14229-3 and SAE J1939/73, with the first one being much more common in my experience.
For development purposes, ASAM MCD-1 XCP also has support for this.
However, these only define the communication services and do not cover the usual usage patterns, which differ a lot across OEMs.
For security, the German OEMs put together a document called "HIS Security Module Specification", which unfortunately I can no longer find on the web.
They also have a blueprint for the design of a boot-loader.
However, this is somewhat outdated anyway, as boot-loaders today are often at least partially based on AUTOSAR, like the applications themselves.
Lastly, they also offer a document partially specifying how the services above are used for flashing an ECU.
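To give a flavour of what those services look like on the wire, here is a rough sketch of a typical UDS (ISO 14229) flashing sequence, carried over CAN via ISO-TP (ISO 15765-2). The service IDs come from the standard, but the exact sequence, addressing and security steps are OEM-specific, so treat this purely as illustration:

    #include <stdint.h>

    // DiagnosticSessionControl (0x10): enter the programming session (0x02).
    static const uint8_t enterProgramming[] = { 0x10, 0x02 };

    // SecurityAccess (0x27): request a seed (0x01); the key computed from
    // the seed is sent back with sub-function 0x02.
    static const uint8_t requestSeed[] = { 0x27, 0x01 };

    // RequestDownload (0x34): no compression/encryption (0x00), 4-byte
    // address and 4-byte size (0x44); the address and size values below
    // are placeholders.
    static const uint8_t requestDownload[] = {
        0x34, 0x00, 0x44,
        0x00, 0x08, 0x00, 0x00,   // target address
        0x00, 0x01, 0x00, 0x00    // uncompressed size
    };

    // The image is then streamed in TransferData (0x36) blocks and the
    // transfer is closed with RequestTransferExit (0x37). Erasing flash
    // beforehand is usually done via RoutineControl (0x31) with an
    // OEM-specific routine identifier.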
If you need further input, feel free to ask.
However, you will need to obtain access to the non-free industry standards and recommendations yourself.

Using FastRTPS for a Command and Control Application

I'm trying to understand how to use the FAST-RTPS libraries to implement a Command and Control application. The requirement is to allow multiple writers to direct command messages to a single reader that is tasked with controlling a piece of equipment. In this application there can be one or more identical pieces of equipment being controlled, each using a unique instance of the same reader code. I already understand that I should set the reader's RELIABILITY_QOS to RELIABLE and the OWNERSHIP_QOS to EXCLUSIVE_OWNERSHIP.

The part that I am still thinking about is how to configure my application so that when a writer sends a command to the reader controlling the piece of equipment, other readers that might also receive the message will not act on it. I would like to do this at the FAST-RTPS level; that is, configure the application so that only the reader controlling the equipment receives the command message versus allowing multiple readers to receive the control message while programming these readers so that only the controlling reader will act on it.

My approach so far involves assigning all controlling writers and only the controlling reader to a partition (See Advanced Functionalities in the Fast-RTPS Users Manual). There will be one of these partitions for each piece of equipment. Is this the proper way to implement my requirements or are there other, better ways?
Thank you.
Since this question was asked under the data-distribution-service tag, this answer references the OMG DDS specification, currently at version 1.4.
Although you could use Partitions to achieve the selective delivery that you are looking for, this would probably not be the recommended approach for your use case. The main disadvantage that comes to mind is a situation where a single writer has to send control messages to multiple pieces of equipment. With your current approach, you need a single Partition for each piece of equipment, and you additionally need each message to be written into the right Partition. This can only be achieved by attaching a single Partition to each DataWriter, which would consequently require a single DataWriter per piece of equipment. Depending on your set-up, you may end up with many DataWriters where you would prefer to have a few, from the perspective of resource usage as well as code complexity.
The proper mechanism that is intended for this kind of use-case is the so-called ContentFilteredTopic, as found in section 2.2.2.3.3 ContentFilteredTopic Class in the specification. For your convenience, I quoted some of it:
ContentFilteredTopic describes a more sophisticated subscription that indicates the subscriber does not want to necessarily see all values of each instance published under the Topic. Rather, it wants to see only the values whose contents satisfy certain criteria. This class therefore can be used to request content-based subscriptions. The selection of the content is done using the filter_expression with parameters expression_parameters.
Using ContentFilteredTopics, each DataReader would use a filter_expression that aligns with an identifier of the device that it is associated with. At the application level on the sender side, DataWriters would not be aware of that; they would just be writing their control messages. The middleware would take care of the delivery to those (and only those) DataReaders for which the filter expression matches the data.
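As a minimal sketch, using the classic DDS C++ API from the specification (the helper name, the filtered-topic name and the device_id field are made up for illustration, and the participant, subscriber and command Topic are assumed to have been created in the usual way; how much of this Fast-RTPS exposes needs checking against its documentation):

    // Hypothetical helper: create a DataReader that only sees commands
    // addressed to one particular device.
    DDS::DataReader* make_device_reader(DDS::DomainParticipant* participant,
                                        DDS::Subscriber* subscriber,
                                        DDS::Topic* commandTopic,
                                        const char* deviceId)
    {
        DDS::StringSeq params;
        params.length(1);
        params[0] = deviceId;   // e.g. "42"

        // Filtered view over the command Topic; the expression refers to
        // a device_id field assumed to exist in the command data type.
        DDS::ContentFilteredTopic* filtered =
            participant->create_contentfilteredtopic(
                "CommandsForThisDevice",   // name of the filtered topic
                commandTopic,              // the related (unfiltered) Topic
                "device_id = %0",          // filter_expression
                params);                   // expression_parameters

        // The reader is created from the filtered topic instead of the
        // plain Topic; the middleware delivers only matching samples.
        return subscriber->create_datareader(
            filtered, DDS::DATAREADER_QOS_DEFAULT,
            NULL, DDS::STATUS_MASK_NONE);
    }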
This is a core feature of many DDS-based systems. Although the DDS specification does not require it, in many cases the implementation is smart enough to do filtering on the DataWriter side, before the message goes onto the wire, in cases where that makes sense.
I do not know how much of this is actually implemented by Fast-RTPS.

PLC Programmable Logic Controller Protocols

I'd like to integrate a PLC with a computer: set outputs and read inputs. I've looked at Modbus and it's simple, although if I want to act on a change in an input, I would need to poll the input to detect the change. Are there any open and common protocols used by PLCs that would push/update on sensor/input change rather than requiring polling?
OPC UA (Unified Architecture) is an open protocol standard implemented on many PLCs with many PC client implementations available. It supports both "subscription" and "event" mechanisms, in addition to polling and other communication services.
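For instance, with the open-source open62541 client library (API as in its 1.x releases; verify against the current documentation), a data-change subscription looks roughly like this. The endpoint URL and node id are placeholders for whatever your PLC actually exposes:

    #include <open62541/client.h>
    #include <open62541/client_config_default.h>
    #include <open62541/client_subscriptions.h>
    #include <stdio.h>

    // Called by the library whenever the monitored value changes on the
    // server side; no polling of the value itself takes place.
    static void onInputChanged(UA_Client *client, UA_UInt32 subId, void *subCtx,
                               UA_UInt32 monId, void *monCtx, UA_DataValue *value) {
        if (UA_Variant_hasScalarType(&value->value, &UA_TYPES[UA_TYPES_BOOLEAN]))
            printf("input is now %d\n", *(UA_Boolean *)value->value.data);
    }

    int main() {
        UA_Client *client = UA_Client_new();
        UA_ClientConfig_setDefault(UA_Client_getConfig(client));
        UA_Client_connect(client, "opc.tcp://192.168.0.10:4840");  // placeholder address

        UA_CreateSubscriptionRequest req = UA_CreateSubscriptionRequest_default();
        UA_CreateSubscriptionResponse resp =
            UA_Client_Subscriptions_create(client, req, NULL, NULL, NULL);

        // Node id of the input; depends entirely on the PLC's address space.
        UA_MonitoredItemCreateRequest mon =
            UA_MonitoredItemCreateRequest_default(
                UA_NODEID_STRING(2, (char *)"Inputs.I0"));
        UA_Client_MonitoredItems_createDataChange(client, resp.subscriptionId,
            UA_TIMESTAMPSTORETURN_BOTH, mon, NULL, onInputChanged, NULL);

        for (;;)   // service the connection; notifications arrive via the callback
            UA_Client_run_iterate(client, 100);
    }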
Open and common, and also simple to implement: I don't think there are any.
You should look for terms like "report by exception" and "unsolicited reporting". DNP3, for example, has this feature and is widely used in electrical applications, but it is neither simple to implement nor open.
Depending on your controller, maybe you can look at MQTT; there is support for Arduinos and RPis, and also industrial controllers like the WISE-5231.
The two previous answers are decent. As Nelson mentioned, you haven't specified which controller you are using. You also haven't mentioned what on the computer you'd like to integrate with the PLC. Beckhoff's TwinCAT PLCs can use MQTT and OPC-UA, as well as a host of other protocols. They also offer libraries to use their ADS protocol.
With ADS, one option is to set up an ADS server on your machine (it's very easy) and have your PLCs write to the server. The more typical way is to subscribe to variables/structures in the PLC using the ADS mechanism from within your program's runtime. An event will be fired when the variable or struct changes (you can specify how much it should have changed by, if it is an analog value).
The method you pick is probably dictated by your architecture. If you have many PLCs, I would set up an ADS server on your computer; if you have a handful, subscribe from your program. Of course, you can mix and match these approaches too.
Here is a page of examples: https://infosys.beckhoff.com/english.php?content=../content/1033/tc3_adssamples_net/html/tcsample_net_intro.htm&id=8269274592628480035

Is Cocotron the result of reverse engineering?

Does it replicate the behaviour of the various libraries (so the calls are exactly the same), or does it just code them from scratch using unique optimizations and new ways of doing things?
There are different kinds of reverse engineering, grouped roughly into Dirty-Room and Clean-Room. Dirty-Room basically involves disassembling machine code in some way to figure out what it does and using the disassembled code to create new code. Dirty-Room creates the problem of copyright infringement: you're basically plagiarizing the old system to create the new system, either directly or indirectly through direct knowledge of the old system's implementation. Clean-Room involves implementing the same APIs using documentation and testing against the system to be re-implemented. These two techniques can be used on their own or in various combinations.

For example, the PC BIOS was reverse engineered using two teams: a Dirty-Room team which disassembled the original BIOS and created a specification, and a Clean-Room team which implemented the new BIOS using the specification. High-stakes business situations involving reverse engineering usually bring in lawyers specializing in the field to ensure the new implementation does not infringe on the old one.
Cocotron is a Clean-Room implementation. I/We use the documentation and test programs to create a new implementation (Cocotron) which matches the behavior of the old implementation (Cocoa). The Apple documentation is very good, the APIs are well organized, and it is easy to create test programs when needed. Cocotron is pretty good if I do say so myself, but it is definitely not Cocoa, and I would imagine the sources vary wildly between the two.
Cocotron's internal implementation is fairly different from Cocoa's. I wouldn't say there's any "reverse engineering" involved.
You should know there's a history of having separate implementations of Cocoa's API (sort of). Cocoa grew out of OpenStep, which was originally designed as a specification with many different implementations on different platforms.

How do you organize code in embedded projects?

Highly embedded projects (limited code and RAM size) pose unique challenges for code organization.
I have seen quite a few projects with no organization at all. (Mostly by hardware engineers who, in my experience, are not typically concerned with non-functional aspects of code.)
However, I have been trying to organize my code as follows:
hardware specific (drivers, initialization)
application specific (not likely to be reused)
reusable, hardware independent
For each module I try to keep the purpose to one of these three types.
Due to the limited size of embedded projects and the emphasis on performance, it is often difficult to keep to this organization.
For some context, my current project is a limited DSP application on an MSP430 with 8k of flash and 256 bytes of RAM.
I've written and maintained multiple embedded products (30+ and counting) on a variety of target micros, including MSP430s. The "rules of thumb" I have been most successful with are:
Try to modularize generic concepts as much as possible (e.g. separate driver code from application code). -- It makes for easier maintenance and reuse/porting of a project to another target micro in the future. (See the sketch after this list.)
DO NOT worry about writing optimized code at the very beginning. Try to solve the domain's problem first and optimize second. -- Your target micro can handle a lot more "stuff" than you might expect.
Work to ensure readability. Although most embedded projects seem to have short development cycles, the projects often live longer than you might expect, and another developer will undoubtedly have to work with your code.
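As a tiny illustration of the first rule, the application can code against a hardware-independent header while a single file knows about the target's registers (all names below are made up):

    // uart.h -- the interface the application codes against; nothing
    // target-specific appears here.
    #ifndef UART_H
    #define UART_H
    #include <stdint.h>

    void    uart_init(uint32_t baud);
    void    uart_put(uint8_t byte);
    uint8_t uart_get(void);

    #endif

    // uart_msp430.c -- the only file that touches device registers, so a
    // port to another micro replaces just this file.
    #include "uart.h"

    void uart_init(uint32_t baud)
    {
        // ... configure the serial module for the requested baud rate ...
    }

    void uart_put(uint8_t byte)
    {
        // ... wait until the TX buffer is free, then write the byte ...
    }

    uint8_t uart_get(void)
    {
        // ... block until a byte arrives, then return it ...
        return 0; // placeholder
    }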
I've worked on 8-bit PIC processors with similar limitations.
One restriction you don't have is how many comments you make or what you choose to name your methods, variables, etc. Take advantage of that. Speed and size constraints do sometimes trump organization, but you can always explain.
Another tip is to break up a logical source file into even more pieces than you need, then bind them by #include-ing them in a compilation unit. This allows you to have lots of reusable code (even one routine per file) but combine it in whatever order you need. This is useful e.g. when trying to meet compilation unit size restrictions, or to pick and choose which common subroutines you need on the next project.
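A sketch of the idea, with hypothetical file names: each .inc file holds exactly one routine, and the project's single compilation unit picks the ones it needs:

    /* crc8.inc -- exactly one reusable routine per file. */
    uint8_t crc8(const uint8_t *p, uint8_t n)
    {
        uint8_t crc = 0;
        while (n--) {
            crc ^= *p++;
            for (uint8_t i = 0; i < 8; i++)
                crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                                   : (uint8_t)(crc << 1);
        }
        return crc;
    }

    /* project_lib.c -- one compilation unit binding only what this
       project needs, in whatever order it needs it. */
    #include <stdint.h>
    #include "crc8.inc"
    #include "ringbuf.inc"   /* another single-routine file, not shown */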
I try to organize it as if I had unlimited RAM and ROM, and it usually works out fine. As mentioned elsewhere, do not try to optimize it until you absolutely need to.
If you can get a pin-compatible processor that has more resources, it's better to get it working on that, concentrating on good structure and layout, then optimize for size later when you understand the code better.
Except under exceptional circumstances (see note), the organisation of your code will have no impact on the final product. (The contents of the code are obviously a different matter.)
So with that in mind you should organise your code as you would any other project.
With that said, the following are fairly typical:
If this is a processor that you've worked on before, or will be working on in the future, you will usually want to keep a dedicated hardware abstraction layer (HAL) that can be shared between projects. Typically this module would contain items like routines for managing any UARTs, timers, etc.
Usually it's reasonable to maintain a set of platform-specific code for initialisation and setup that performs all of the configuration and initialisation up to the point where your executive takes over and runs your application. It will also include the platform-specific HAL routines.
The executive/application is probably maintained as a separate module. All of the hardware-specific code should be hidden in the HAL (as mentioned above).
By splitting your code up like this, you also have the option of compiling and running your application as a simulation on a completely different platform, just by replacing the hardware-specific code with routines that mimic the hardware.
This can be good for unit testing and for debugging algorithmic problems you might have.
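For example, a host-side stand-in for a hypothetical UART driver interface is all it takes to let the rest of the application build as an ordinary desktop program (the function names are illustrative, matching whatever driver interface your HAL defines):

    // uart_sim.cpp -- host-side stand-in for the real driver. The
    // application links against this file instead of the target driver,
    // so it runs as a normal desktop program for testing.
    #include <cstdio>
    #include <cstdint>

    void uart_init(uint32_t /*baud*/) { /* nothing to configure on the host */ }
    void uart_put(uint8_t b)          { std::putchar(b); }
    uint8_t uart_get()                { return (uint8_t)std::getchar(); }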
Note: exceptional circumstances might be imposed by unusual compiler restrictions. E.g. I've come across some compilers that expect all interrupt service routines to be compiled within a single object file.
I've worked with some sensors like the Tmote Sky; I too have seen poor organization, and I have to admit I have contributed to it. Anyway, I'd say that some compromise is unavoidable, because splitting the program into too many modules will (imho) also eat into your resources, so try to be aware of the threshold between organization and usability when resources are low.
Obviously this doesn't mean letting chaos take over; but, for example, take a look at the organization of the TinyOS source code and applications to get an idea of what I'm trying to say.
Although it is a bit painful, one organization technique that is somewhat common with embedded C libraries is to split every single function and variable into a separate C source file, and then aggregate the resulting collection of object files into a library file.
The motivation for doing this is that for most normal linkers the unit of linkage is an object file; for every object file, you either get the whole thing or none of it. Since there is a 1:1 relationship between C files and object files, putting each symbol in its own C file gives each one its own object file. This in turn lets the linker pull in only the subset of functions and variables that are actually used.
This sort of game doesn't help at all for headers; they can happily be left as single files.
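A toy example of the technique, with illustrative file names and build commands:

    /* util_min.c -- contains only this one symbol. */
    #include <stdint.h>
    uint8_t min_u8(uint8_t a, uint8_t b) { return a < b ? a : b; }

    /* util_max.c -- likewise, one symbol per file. */
    #include <stdint.h>
    uint8_t max_u8(uint8_t a, uint8_t b) { return a > b ? a : b; }

    /*
     * Build and archive:
     *   cc -c util_min.c util_max.c
     *   ar rcs libutil.a util_min.o util_max.o
     *
     * An application that calls only min_u8() pulls in util_min.o alone;
     * util_max.o never enters the final image.
     */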