Segger J-Link flash download mechanism - embedded

I'm using Rohitab's excellent API Monitor tool to monitor the DLL calls that Keil uVision makes to Segger's JLinkARM.dll, so that I can replicate them within an automated test environment.
As part of this I'm trying to understand the mechanism through which uVision communicates with the flash loader program to download the image being debugged.
I understand that uVision downloads a flash loader program to the target device's RAM, and that loader interacts with the onboard flash to erase it and program the new image. However, I'm struggling to see the DLL calls which uVision makes to actually stream the image down to the flash loader.
I would have expected to see a whole bunch of JLINKARM_WriteMem calls streaming the data down, but I don't. I can see a bunch of JLINK_WriteReg and JLINK_ReadReg calls, but not enough to comprise the image; my guess is they are for monitoring the flashing process. I know the J-Link DLL supports a number of flash-download-related APIs, but I don't see them used here, and I don't see any file paths being passed either. J-Link's own log file is similarly unhelpful. Is there some out-of-band mechanism I'm missing?

Sorry, I shouldn't be posting questions like this when I'm tired. JLINKARM_WriteMem is exactly the mechanism used; I don't know why I didn't see the calls the first time I tried it.
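For anyone retracing this, here is roughly what the sequence looks like when driving the DLL directly. This is a hedged sketch, not production code: the function signatures follow SEGGER's SDK documentation and should be verified against your DLL version, and target-interface selection and JLINKARM_Connect are omitted for brevity. The addresses and buffer layout are assumptions for illustration.

    /* Sketch: stream a flash loader into target RAM, then stream the image
       down in chunks with JLINKARM_WriteMem, as uVision does. */
    #include <windows.h>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    typedef const char* (__cdecl *JLINKARM_Open_t)(void);
    typedef int         (__cdecl *JLINKARM_WriteMem_t)(uint32_t addr, uint32_t count, const void* data);

    int main()
    {
        HMODULE dll = LoadLibraryA("JLinkARM.dll");
        if (!dll) { std::fprintf(stderr, "JLinkARM.dll not found\n"); return 1; }

        JLINKARM_Open_t     jlinkOpen     = (JLINKARM_Open_t)GetProcAddress(dll, "JLINKARM_Open");
        JLINKARM_WriteMem_t jlinkWriteMem = (JLINKARM_WriteMem_t)GetProcAddress(dll, "JLINKARM_WriteMem");
        if (!jlinkOpen || !jlinkWriteMem) return 1;

        if (const char* err = jlinkOpen()) { std::fprintf(stderr, "Open failed: %s\n", err); return 1; }

        /* Stage 1: place the flash loader in target RAM (addresses are
           device-specific; 0x20000000 is just a typical Cortex-M RAM base). */
        std::vector<uint8_t> loader;   /* ...fill with the flash loader binary... */
        jlinkWriteMem(0x20000000, (uint32_t)loader.size(), loader.data());

        /* Stage 2: stream the image down in RAM-buffer-sized chunks. Between
           chunks the host sets up registers and runs the loader to program
           each buffer into flash, which is the JLINK_WriteReg/JLINK_ReadReg
           traffic seen in the API trace. */
        std::vector<uint8_t> image;    /* ...fill with the application image... */
        const uint32_t bufAddr = 0x20001000, bufSize = 0x400;  /* assumed layout */
        for (uint32_t off = 0; off < image.size(); off += bufSize) {
            uint32_t n = (uint32_t)(image.size() - off < bufSize ? image.size() - off : bufSize);
            jlinkWriteMem(bufAddr, n, &image[off]);
            /* ...run the loader's ProgramPage via register writes + Go... */
        }
        return 0;
    }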

Related

Media Foundation - Custom Media Source & Sensor Profile

I am writing an application for previewing, capturing and snapshotting camera input. To this end I am using Media Foundation for the input. One of the requirements is that this works with a Blackmagic Intensity Pro 4K capture card, which behaves similarly to a normal camera.
Media Foundation is unfortunately unable to create an IMFMediaSource object from this device. Some research led me to believe that I could implement my own media source.
Then I started looking at samples, and tried to unravel the documentation.
At that point I encountered some questions:
Does anyone know if what I am trying to do is possible?
A Windows example shows a basic implementation of a source, but uses IMFSensorProfile. What is a Sensor Profile, and what should I use it for? There is almost no documentation about this.
Can somebody explain how implementing a custom media source works in terms of what actually happens on the inside? Am I simply creating my own format, or does it allow me to pull my own frames from the camera and process them myself? I tried following the MSDN guide, but no luck so far.
Specifics:
Using WPF with C#, but I can write C++ and use it from C#.
Rendering to screen uses Direct3D9.
The capture card specs can be found on their site (Blackmagic Intensity Pro 4K).
The specific problem that occurs is that I can acquire the IMFActivator for the device, but I am not able to activate it. On activation, an MF_E_INVALIDMEDIATYPE error occurs.
The IMFActivator can tell me that the device should output a UYVY format.
My last resort is using the DeckLink API, but since I am working with several different types of cameras, I do not want to be stuck with another dependency.
Any pointers or help would be appreciated. Let me know if anything is unclear or needs more detail.
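edit: For completeness, this is a minimal sketch of the enumeration/activation path where the failure surfaces. The structure follows the standard Media Foundation capture-enumeration calls; the commented error value assumes the documented value of MF_E_INVALIDMEDIATYPE.

    #include <windows.h>
    #include <mfapi.h>
    #include <mfidl.h>
    #include <cstdio>
    #pragma comment(lib, "mfplat.lib")
    #pragma comment(lib, "mf.lib")
    #pragma comment(lib, "mfuuid.lib")
    #pragma comment(lib, "ole32.lib")

    int main()
    {
        CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);
        if (FAILED(MFStartup(MF_VERSION))) return 1;

        IMFAttributes* attrs = nullptr;
        MFCreateAttributes(&attrs, 1);
        attrs->SetGUID(MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE,
                       MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_GUID);

        IMFActivate** devices = nullptr;
        UINT32 count = 0;
        MFEnumDeviceSources(attrs, &devices, &count);

        for (UINT32 i = 0; i < count; ++i)
        {
            IMFMediaSource* source = nullptr;
            HRESULT hr = devices[i]->ActivateObject(IID_PPV_ARGS(&source));
            /* On the Intensity Pro 4K this is where MF_E_INVALIDMEDIATYPE
               (0xC00D36B4) comes back instead of a usable source. */
            std::printf("device %u: ActivateObject -> 0x%08X\n", i, (unsigned)hr);
            if (source) source->Release();
            devices[i]->Release();
        }
        CoTaskMemFree(devices);
        if (attrs) attrs->Release();
        MFShutdown();
        CoUninitialize();
        return 0;
    }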

USB MSC with STM32Cube

I'm using STM32Cube to generate a simple USB MSC project. I'm using an STM32F417VG.
So I'm selecting USB_OTG_FS - Device_only, and USB_DEVICE - Class For FS IP - Mass Storage Class.
Then I generate the source code, compile it, download it to the board, connect its USB cable to the PC, and nothing happens.
What am I doing wrong?
The STM32Cube application helps you get started on developing an application, but does not do the work for you. The generated code will include all the libraries necessary and initialize the hardware so that all the functions you selected are available and ready to go, and then begin an empty infinite loop. It will not show any outward behavior or respond to any external stimulus.
You will need to add some of your own code for the microcontroller to actually do anything.
If you are unsure what you need to do to make the USB functions work, take a look at the example projects that come with STM32Cube and the documentation comments in the library files it included in your project.
However, even a "simple" USB project can be relatively complex, and an unresponsive microcontroller can be mystifying. You may want to get your bearings with very simple GPIO-type projects. Making an LED blink is a microcontroller's "Hello World".
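If you do pursue the MSC project, the piece the generator typically leaves empty is the storage interface. As a rough illustration (not the only way to do it), here is a tiny RAM disk dropped into the storage callbacks. The function names follow the CubeMX-generated usbd_storage_if.c template and may differ between library versions; check the file generated in your own project.

    #include <stdint.h>
    #include <string.h>

    #define BLOCK_SIZE  512u
    #define BLOCK_COUNT 128u                    /* 64 KiB RAM disk */
    static uint8_t ram_disk[BLOCK_SIZE * BLOCK_COUNT];

    /* Report the medium's geometry to the host. */
    int8_t STORAGE_GetCapacity_FS(uint8_t lun, uint32_t *block_num, uint16_t *block_size)
    {
        *block_num  = BLOCK_COUNT;
        *block_size = BLOCK_SIZE;
        return 0;  /* USBD_OK */
    }

    /* Host reads blk_len blocks starting at blk_addr. */
    int8_t STORAGE_Read_FS(uint8_t lun, uint8_t *buf, uint32_t blk_addr, uint16_t blk_len)
    {
        memcpy(buf, &ram_disk[blk_addr * BLOCK_SIZE], (size_t)blk_len * BLOCK_SIZE);
        return 0;
    }

    /* Host writes blk_len blocks starting at blk_addr. */
    int8_t STORAGE_Write_FS(uint8_t lun, uint8_t *buf, uint32_t blk_addr, uint16_t blk_len)
    {
        memcpy(&ram_disk[blk_addr * BLOCK_SIZE], buf, (size_t)blk_len * BLOCK_SIZE);
        return 0;
    }

With something like this in place the PC at least has a medium to enumerate and format, which makes it much easier to tell USB problems apart from storage problems.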

Kinect motor control via Processing

I'm hacking the Kinect using some SimpleOpenNI-based Processing apps for a talk I plan to give soon, and I found an API that appears to control the motor. There is a moveKinect method that appears to have been added to the main ContextWrapper interface, but I can't seem to get it to work. Looking through the SVN history and release notes, it appears to have been added last year with a note explaining that it doesn't work with the newest drivers (5.1.02, Linux64). I've tried calling the method with values in degrees and radians, but nothing happens: I get no error and no movement. Has anyone else played with this? I'm running the second-to-latest Processing 2.0 build (the link to Processing 2.0.1 doesn't work) and the latest SimpleOpenNI package I could download.
SimpleOpenNI is the wrapper for OpenNI, which allows access to the RGB/IR/depth streams and the middleware for body/hand detection, but it does not allow access to hardware like the LED, accelerometer or motor.
You should try Kinect P5, which uses libfreenect behind the scenes and supports motor control. Bear in mind you won't have support for the middleware.
If you need both middleware and hardware access, you can try OpenFrameworks with the ofxOpenNI addon. It has a hardware class that works on OS X and Linux (as sudo), allowing use of both the middleware and the motor.
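For reference, the motor control that Kinect P5 wraps boils down to a few libfreenect calls. A minimal sketch, assuming a stock libfreenect install (the header path varies between installs):

    #include <libfreenect/libfreenect.h>
    #include <stdio.h>

    int main(void)
    {
        freenect_context *ctx;
        freenect_device  *dev;
        if (freenect_init(&ctx, NULL) < 0) return 1;

        /* Claim only the motor subdevice, leaving the camera streams
           free for OpenNI or other software. */
        freenect_select_subdevices(ctx, FREENECT_DEVICE_MOTOR);
        if (freenect_open_device(ctx, &dev, 0) < 0) {
            fprintf(stderr, "no Kinect found\n");
            freenect_shutdown(ctx);
            return 1;
        }

        freenect_set_tilt_degs(dev, 15.0);  /* tilt range is roughly -30..+30 degrees */
        freenect_set_led(dev, LED_GREEN);

        freenect_close_device(dev);
        freenect_shutdown(ctx);
        return 0;
    }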

Making my own application for my USB MIDI device

I want to try and make my own application for my Novation Nocturn, which is a USB DJ controller surface. The application software interacts with it to send out MIDI messages to software like Traktor, Ableton and Cubase.
I'm aware of libusb, but that's as far as I've got. I've successfully installed it to interact with my device but stopped there.
I'm after some suitable reading material basically. USB specs, MIDI specs and such. If I'm honest the full USB 2.0 spec looks like it holds loads of stuff I don't need.
Just looking for something interesting to do now that I've finished my degree (Computer Science). My current programming knowledge is C++ and mainly C#.
Could do with some direction on how to get stuck into this task.
edit:
Update to include some info from the Device Manager on the Nocturn.
Hardware IDs:
USB\VID_1235&PID_000A&REV_0009
USB\VID_1235&PID_000A
Compatible IDs:
USB\Class_FF&SubClass_00&Prot_00
USB\Class_FF&SubClass_00
USB\Class_FF
Device Class:
MEDIA
USB MIDI is probably one abstraction layer lower than you want to deal with. I'd suggest finding a good MIDI framework and interacting with the device via MIDI instead.
For C++, Juce is probably the way to go, as you didn't mention a target platform or any other specific requirements.
If you want to go the .NET route, the easiest way to get started is with the C# MIDI Toolkit code:
http://www.codeproject.com/KB/audio-video/MIDIToolkit.aspx
In there, you'll find all the basics for opening a device, reading input, and writing output. Alternatively, NAudio has some MIDI classes, but they are somewhat incomplete.
As you develop, you'll want a reference for the MIDI spec handy.
A tool that you will find invaluable is MIDI-OX. In fact, I suggest that before you start coding, you fire up MIDI-OX and use it to sniff the messages coming from the Novation; it will give you a good idea of what the Novation sends. You can use it in conjunction with MIDI Yoke (a configurable virtual MIDI port) to insert MIDI-OX between the Novation and Ableton Live (or whatever software you normally use with your Novation), so you can see all of the messages in normal use.
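To give a flavour of what those frameworks (and MIDI-OX) are wrapping, here is a bare-bones sketch using the raw Win32 multimedia (winmm) MIDI API to dump incoming messages from the first MIDI input device:

    #include <windows.h>
    #include <mmsystem.h>
    #include <cstdio>
    #pragma comment(lib, "winmm.lib")

    /* Called by the system for each incoming MIDI event. */
    static void CALLBACK onMidiIn(HMIDIIN, UINT msg, DWORD_PTR, DWORD_PTR p1, DWORD_PTR)
    {
        if (msg == MIM_DATA) {
            BYTE status = (BYTE)(p1 & 0xFF);          /* e.g. 0xB0 = control change */
            BYTE data1  = (BYTE)((p1 >> 8)  & 0x7F);  /* controller number */
            BYTE data2  = (BYTE)((p1 >> 16) & 0x7F);  /* controller value */
            std::printf("status %02X data %02X %02X\n", status, data1, data2);
        }
    }

    int main()
    {
        if (midiInGetNumDevs() == 0) { std::puts("no MIDI inputs"); return 1; }
        HMIDIIN in;
        if (midiInOpen(&in, 0, (DWORD_PTR)onMidiIn, 0, CALLBACK_FUNCTION) != MMSYSERR_NOERROR)
            return 1;
        midiInStart(in);
        std::puts("listening; move a control...");
        Sleep(30000);                                  /* listen for 30 seconds */
        midiInStop(in);
        midiInClose(in);
        return 0;
    }

The frameworks mentioned above add device management, buffering and SysEx handling on top of this, which is why they are usually the better starting point.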
Done... Kidding, but I've started on this in Python - I personally want Linux support. I am teaching myself Python, but I only dabble in programming.
You can see basic functionality at https://github.com/dewert/nocturn-linux-midi. The guy who reverse engineered it (i.e. the leap I wouldn't have been able to make myself) doesn't seem to be doing any more with it. His code is at https://github.com/timoahummel/nocturn-game
I am using PyPortMIDI and PyUSB, both of which I believe are wrappers for the C equivalents. I think this is all ok on Windows, but haven't tried.
What is currently on my GitHub is crap, but it is a proof of concept. I'm working on doing it properly now, with threading and proper configuration options.
The driver for the Nocturn makes it appear to the system as a MIDI device, even though it isn't a USB MIDI device at the hardware level. The Automap software works entirely at the MIDI level, receiving MIDI instructions and sending different instructions in response - it is separate from the driver and not necessary.
Alternatively, look at https://github.com/timoahummel/nocturn-game for an example of talking to it directly over USB from Python. You can probably port this to another language with libusb bindings.
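As a rough illustration of that direct-USB route, here is a hedged libusb-1.0 sketch using the VID/PID from the Device Manager dump above (1235:000A). The endpoint address (0x81) and interface number are assumptions; check a descriptor dump (e.g. lsusb -v) for the real values, and note that on Windows the device must be bound to a libusb-compatible driver first.

    #include <libusb-1.0/libusb.h>
    #include <cstdio>

    int main()
    {
        libusb_context *ctx = nullptr;
        if (libusb_init(&ctx) != 0) return 1;

        libusb_device_handle *h = libusb_open_device_with_vid_pid(ctx, 0x1235, 0x000A);
        if (!h) { std::fprintf(stderr, "device not found\n"); libusb_exit(ctx); return 1; }

        libusb_set_auto_detach_kernel_driver(h, 1);  /* Linux: unbind any kernel driver */
        libusb_claim_interface(h, 0);

        unsigned char buf[64];
        int got = 0;
        /* Blocking interrupt read; the raw bytes are the CC-style messages
           discussed below. */
        if (libusb_interrupt_transfer(h, 0x81, buf, sizeof buf, &got, 1000) == 0)
            for (int i = 0; i < got; ++i) std::printf("%02X ", buf[i]);

        libusb_release_interface(h, 0);
        libusb_close(h);
        libusb_exit(ctx);
        return 0;
    }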
Old thread, but I've just recently started looking into this.
I had a look at the Python application that dewert has written. Interestingly, it turns out that the data that the Nocturn emits is in fact MIDI, although it doesn't register itself as a USB MIDI device.
But looking at the actual data coming from the device, it actually emits control change messages (0xB0 controller value) for everything. The control commands that are sent to it are also control change messages, albeit only the data bytes, as the Nocturn seems to support MIDI running status (i.e. when sending multiple control change messages, it is not necessary to repeat the status byte).
Indeed, looking at the magic initialization data, it is actually just a bunch of control changes: it starts with 0xB0 and from there on the data comes in twos. For instance, the last two bytes in the init string are 0x7F 0x00, which simply turn off the LED for the rightmost forward button. (There is something subtle happening as a result of the initialization being sent, though: the Nocturn sometimes emits messages which appear to be some form of timeout events, and that behavior changes depending on whether the initialization string has been sent or not.)
Using MIDI-like messages makes sense, as Novation would be well aware of the MIDI protocol, so it would be easiest for them to use it for the communication even if the device is not strictly a MIDI device.
Note though that the incrementors just send the values 1 or 127, i.e. a +1 or -1 step, so even with some trivial mapping software it's not really useful as it is. (Actually, if turned quickly, one can get 3 or 125 for instance, with 125 corresponding to -3.) The only controller which sends a continuous value is the slider, which emits an 8-bit value when moved.
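Putting the two observations together, decoding the stream is straightforward. A small self-contained sketch of the scheme described above: running status (one status byte, then controller/value pairs) plus relative encoder values where 1..63 mean +n and 127 downwards mean -n. The example byte stream is made up for illustration.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    /* Map a relative encoder value: 1 -> +1, 127 -> -1, 125 -> -3. */
    static int relativeStep(uint8_t value)
    {
        return value < 64 ? (int)value : (int)value - 128;
    }

    int main()
    {
        /* One 0xB0 status byte, then controller/value pairs (running status). */
        std::vector<uint8_t> stream = { 0xB0, 0x40, 0x01, 0x41, 0x7F, 0x42, 0x7D };
        uint8_t status = 0;
        for (size_t i = 0; i + 1 < stream.size() || (stream[i] & 0x80); )
        {
            if (stream[i] & 0x80) { status = stream[i++]; continue; }  /* new status byte */
            uint8_t cc = stream[i++], val = stream[i++];
            if ((status & 0xF0) == 0xB0)
                std::printf("CC %u value %u (step %+d)\n", cc, val, relativeStep(val));
        }
        return 0;
    }

Run on the example stream, this prints steps of +1, -1 and -3 for the three encoder events, matching the behavior described above.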
I suppose you'll want to know about USB classes in general and the USB MIDI class in particular. The latter is the best you can hope for, in case you don't possess documentation for whatever proprietary protocol might be used instead.

Access the stage in the Flash CS4 IDE

The stage in the Flash CS4 authoring environment is a running SWF. That's what makes things like the 3D and Bone tools work in the IDE.
Is it possible to access that SWF? I suspect the immediate answer would be no, because that might raise security issues and cause lots of developers to crash the IDE every 5 minutes :).
That said, I don't expect this to be a straightforward process, but I guess there should be a way to access it.
Any thoughts?
I can only tell you how components work on the stage, where we've attempted the type of access you talk about.
I suspect that at their core, the 3D and Bone tools are implemented using component-like tech to display the "live" stage instance. In general this involves a compiled live-preview SWF that is placed on the stage. It is misleading to think of the stage as a single player: each component preview runs in its own sandbox that, as far as I can tell, has no means of communication with other component previews on the IDE stage. There is no common storage location.
Of course, if you were in charge of the preview SWF (as in the case of a component), you could try LocalConnection to chat, but the previews you want to penetrate are closed. I suspect if you dig hard enough, you'd find the bone/3D preview hidden in the installation folders (perhaps in a swc.. ik.swc looks interesting) and might be able to hack about at it with a decompiler, but straight out of the box, I'm not sure there's a solution to what you ask.