Assistance with TI CC2541: replacing existing code with custom functions - bluetooth-gatt

I have a small Bluetooth device that uses the CC2541 to control a toy. I am trying to build a proof-of-concept artefact and have found the exact hardware I need, but it has functions already installed that don't suit me. Is there a way to replace the existing functions with new features and write them back to the device?
The current device has 10 different features, which are written into the controller.
Any advice will be appreciated.

Related

Media Foundation - Custom Media Source & Sensor Profile

I am writing an application for previewing, capturing and snapshotting camera input. To this end I am using Media Foundation for the input. One of the requirements is that this works with a Blackmagic Intensity Pro 4K capture card, which behaves similarly to a normal camera.
Media Foundation is unfortunately unable to create an IMFMediaSource object from this device. Some research led me to believe that I could implement my own MediaSource.
Then I started looking at samples, and tried to unravel the documentation.
At that point I encountered some questions:
Does anyone know if what I am trying to do is possible?
A Windows example shows a basic implementation of a source, but uses IMFSensorProfile. What is a Sensor Profile, and what should I use it for? There is almost no documentation about this.
Can somebody explain how implementing a custom media source works, i.e. what actually happens on the inside? Am I simply creating my own format, or does it allow me to pull my own frames from the camera and process them myself? I tried following the MSDN guide, but no luck so far.
Specifics:
Using WPF with C#, but I can write C++ and use it from C#.
Rendering to screen uses Direct3D9.
The capture card specs can be found on their site (Blackmagic Intensity Pro 4K).
The specific problem that occurs is that I can acquire the IMFActivate for the device, but I am not able to activate it. On activation, an MF_E_INVALIDMEDIATYPE error occurs.
The IMFActivate can tell me that the device should output a UYVY format.
My last resort is using the DeckLinkAPI, but since I am working with several different types of cameras, I do not want to be stuck with another dependency.
Any pointers or help would be appreciated. Let me know if anything is unclear or needs more detail.

Applying Non-Standard Power Assertions & Creating Virtual HIDs

I've got a big ask here, but I am hoping someone might be able to help me. If there's another site you think this should be posted on, please let me know.
I'm the developer of the free app Amphetamine for macOS and I'm hoping to add a new feature to the app - keeping a Mac awake in closed-display (clamshell) mode without having a keyboard/mouse/power adapter/display connected to the Mac. I get requests to add this feature on an almost daily basis.
I've been working on a solution (and it's mostly ready) which uses a non-App Store helper app that must be downloaded and installed separately. I could still go with that solution, but I want to explore one more option before pushing the separate-app solution out to the world.
An Amphetamine user tipped me off that another app, AntiSleep, can keep a Mac awake in closed-display mode without meeting Apple's requirements. I've tested this claim, and it's true. After doing a bit of digging into how AntiSleep might be accomplishing this, I've come up with 2 possible theories so far (though there may be more to it):
In addition to the standard power assertion types, it looks like AntiSleep is using (a) private framework(s) to apply non-standard power assertions. The following non-standard power assertion types are active when AntiSleep is keeping a Mac awake: DenySystemSleep, UserIsActive, RequiresDisplayAudio, & InternalPreventDisplaySleep. I haven't been able to find much information on these power assertion types beyond what appears in IOPMLibPrivate.h. I'm not familiar at all with using private frameworks, but I assume I could theoretically add the IOPMLibPrivate header file to a project and then create these power assertion types. I understand that would likely result in an App Store review rejection for Amphetamine, of course. What about non-App Store apps? Would Apple notarize an app using this? Beyond that, could someone help me confirm that the only way to apply these non-standard power assertions is to use a private framework?
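To make that concrete, here is a minimal sketch of what I have in mind. It assumes, and this is only an assumption, that these non-standard assertion types can be created simply by passing their string names to the public IOPMAssertionCreateWithName() call, without linking the private framework at all:

```c
// Sketch only: assumes the non-standard assertion types can be created via the
// public IOPMAssertionCreateWithName() call just by passing their string names
// (the names below are the ones I saw active while AntiSleep was running).
// Build with: -framework CoreFoundation -framework IOKit
#include <CoreFoundation/CoreFoundation.h>
#include <IOKit/pwr_mgt/IOPMLib.h>

static IOPMAssertionID CreateAssertion(CFStringRef type) {
    IOPMAssertionID assertionID = kIOPMNullAssertionID;
    IOReturn ret = IOPMAssertionCreateWithName(type,
                                               kIOPMAssertionLevelOn,
                                               CFSTR("Amphetamine closed-display test"),
                                               &assertionID);
    return (ret == kIOReturnSuccess) ? assertionID : kIOPMNullAssertionID;
}

int main(void) {
    // Non-standard assertion types observed while AntiSleep was active:
    IOPMAssertionID a = CreateAssertion(CFSTR("DenySystemSleep"));
    IOPMAssertionID b = CreateAssertion(CFSTR("UserIsActive"));
    IOPMAssertionID c = CreateAssertion(CFSTR("InternalPreventDisplaySleep"));

    CFRunLoopRun();  // keep the process alive while the assertions are held

    if (a) IOPMAssertionRelease(a);
    if (b) IOPMAssertionRelease(b);
    if (c) IOPMAssertionRelease(c);
    return 0;
}
```

Whether passing those strings through the public call is actually sufficient (or whether the private framework does more behind the scenes) is exactly what I'm hoping someone can confirm.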
I suspect that AntiSleep may also be creating a virtual keyboard and mouse. Certainly, the idea of creating a virtual keyboard and mouse to get around Apple's requirement of having a keyboard and mouse connected to the Mac when using closed-display mode is an intriguing idea. After doing some searching, I found foohid. However, I ran into all kinds of errors trying to add and use the foohid files in a test project. Would someone be willing to take a look at the foohid project and help me understand whether it is theoretically possible to include this functionality in an App Store compatible app? I'm not asking for code help with that (yet). I'd just like some help determining whether it might be possible to do.
Thank you in advance for taking a look.
Would Apple notarize an app using this?
I haven't seen any issues with notarising code that uses private APIs. Currently, Apple only seems to use notarisation for scanning for inclusion of known malware.
Would someone be willing to take a look at the foohid project and help me understand whether it is theoretically possible to include this functionality in an App Store compatible app?
Taking a quick glance at the code of that project, it's clear it implements a kernel extension (kext). Those are not allowed on the App Store.
However, since macOS 10.15 Catalina, there's a new way to write HID drivers, using DriverKit. The idea is that the APIs are very similar to the kernel APIs, although I suspect it'll be a rewrite of the kext as a DriverKit driver, rather than a simple port.
DriverKit drivers are permitted to be included in App Store apps.
I don't know if a DriverKit based HID driver will solve your specific power management issue.
If you go with a DriverKit solution, this will only work on 10.15+.
I suspect that AntiSleep may also be creating a virtual keyboard and mouse.
I haven't looked at AntiSleep, but I do know that in addition to writing an outright HID driver, it's possible to generate HID events using user space APIs such as IOHIDPostEvent(). I don't know if those are allowed on the App Store, but as far as I'm aware, IOKitLib is generally fine.
It's possible you might be able to implement your virtual input device using those.
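To illustrate the general shape of user-space event injection, here is a rough sketch using CGEventPost() from the public Quartz Event Services API rather than IOHIDPostEvent(), since that's the variant I can sketch with confidence. Whether synthetic events generated this way satisfy the closed-display-mode keyboard/mouse requirement is something you'd have to test:

```c
// Rough sketch: posts a synthetic mouse-move event from user space using the
// public Quartz Event Services API (CGEventPost), as a stand-in for the
// IOHIDPostEvent() approach mentioned above. Whether events generated this way
// count toward Apple's closed-display-mode requirements is untested here.
// Build with: -framework ApplicationServices
#include <ApplicationServices/ApplicationServices.h>

int main(void) {
    CGPoint where = CGPointMake(100.0, 100.0);

    // Create a mouse-moved event at the given screen coordinates.
    CGEventRef move = CGEventCreateMouseEvent(NULL,
                                              kCGEventMouseMoved,
                                              where,
                                              kCGMouseButtonLeft /* ignored for moves */);
    if (move != NULL) {
        CGEventPost(kCGHIDEventTap, move);  // inject at the HID event tap level
        CFRelease(move);
    }
    return 0;
}
```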

C two-platform compiling

I want to run some C code targeted at a unique PIC-micro-based hardware setup in a PC Windows environment as well. The objective is to emulate multiple instances of the hardware without the actual hardware. I expect to have to write the code upfront to account for this and to create low-level functions in C or C# that emulate each of the PIC functions.
Does anyone know an environment that can support this?
What you're looking for is a microcontroller simulator. The simulator will run on the PC and execute the PIC code. In order to e.g. generate interrupts / simulate serial data / whatever, you'll need to either configure the simulator settings or write a few custom functions.
Your target code (for the PIC itself) shouldn't need to be aware of what environment it's running on.
Here are a few links, but you're best off searching for some variant of "PIC microcontroller simulator" or "Microchip PIC simulator".
http://sourceforge.net/projects/gpsim/
http://sourceforge.net/projects/picsim/
I'm not sure that this question is about an instruction simulator. It seems to me that it is simply about the technique called "dual targeting". I've asked a similar question on Stack Overflow before; please see: Prototyping and simulating embedded software on Windows. Recently, I've also made a blog post about dual targeting and rapid prototyping on Windows here: http://embeddedgurus.com/state-space/2013/04/dual-targeting-and-agile-prototyping-of-embedded-software-on-windows/
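The essence of dual targeting is keeping the application logic free of direct register access and hiding the hardware behind a thin board support layer that is implemented once for the PIC and once for the PC. A minimal sketch, with made-up names purely for illustration:

```c
/* bsp.h -- hypothetical board support interface shared by both targets */
#ifndef BSP_H
#define BSP_H
void bsp_led_on(void);
void bsp_led_off(void);
void bsp_delay_ms(unsigned ms);
#endif

/* blinky.c -- application code, identical on the PIC and on Windows */
#include "bsp.h"
void blinky_run_once(void) {
    bsp_led_on();
    bsp_delay_ms(500);
    bsp_led_off();
    bsp_delay_ms(500);
}

/* bsp_win32.c -- Windows implementation used for simulation on the PC */
#include <stdio.h>
#include <windows.h>
#include "bsp.h"
void bsp_led_on(void)  { printf("LED on\n");  }
void bsp_led_off(void) { printf("LED off\n"); }
void bsp_delay_ms(unsigned ms) { Sleep(ms); }

/* bsp_pic.c (not shown) would poke the real port registers instead. */
```

The application module never knows which implementation it was linked against, which is what lets you run many simulated instances on the PC.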
So, while the links should provide the answer to the original question, I'd like to add that the 8-bit PIC is probably not programmable in C++. In fact, it is so baroque that even C compilers for this "architecture" must cut many corners.

3D files in VB.NET

I know this will be a difficult question, so I am not necessarily looking for a direct answer but maybe a tutorial or a point in the right direction.
What I am doing is programming a robot that will be controlled by a remote operator. We have a 3D rendering of the robot in SolidWorks. What I am looking to do is get the 3D file into VB (probably using DX9) and be able to manipulate it in code so that the remote operator will have a better idea of what the robot is doing. The operator will also have live video to look at, but that doesn't really matter for this question.
Any help would be greatly appreciated. Thanks!
Sounds like a tough idea to implement. For VB you are stuck with MDX 1.1 (comes with the DirectX SDK) or SlimDX (or another third-party managed DirectX wrapper). The latest XNA (the replacement for MDX 1.1/2.0b) is only available to C# coders. You can try some workarounds, but it's not recommended and you won't get much community support. That is the least you need to get your VB app to display some 3D content.
If you want to save some trouble, you could use a ready-made game engine to simplify the job. Try Ogre and its managed wrapper MOgre. It was one of the candidates for my project, but I ended up with SlimDX because Ogre doesn't support video very well. Since video is not one of your requirements, you can seriously consider it. Most samples are in C# as well, so you'd need to convert them to VB.NET, but that won't be hard.
Here comes the harder part: you need to export your model from SolidWorks to DirectX format (*.x). A quick search on Google only turned up a few paid tools for that, so you might need to spend a bit on one or spend more time looking for free converter tools.
That's about it. If you have more questions, post again. Good luck.
I'm not sure what the real question is, but what I suspect you are trying to do is manipulate a SolidWorks model of a robot with some sort of manual input. Assuming that this is the correct question, there are two aspects that need to be dealt with:
1) The SolidWorks module: Once the model of the robot is working properly in SW, a program can be written in VB.NET that manipulates the positional mates for each of the joints. Also using VB, a window can be programmed with slider bars etc. that will allow the operator to "remotely" control the robot. Once this is done, there is a great opportunity to set up a table that stores the sequential steps. When completed, the VB program could be further developed to allow the robot to "cycle" through a sequence of moves. If any obstacles are also added to the model, this would be a great tool for collision detection and offline training.
2) If the question also includes the incorporation of a physical operator pendant, there are a number of potential solutions for this. It would be hoped that the robot software provides a VB library for communicating with and commanding the robot programmatically. If this is the case, then the VB code could be developed with a "run" mode where the SW robot is controlled by the operator pendant instead of the controls in the VB window (as mentioned above). This would then allow the operator to work "offline" with a virtual robot.
Hope this helps.

How do I get input from an XBox 360 controller?

I'm writing a program that needs to take input from an XBox 360 controller. The input will then be sent wirelessly to an RC Helicopter that I am building.
So far, I've learned that this can be done using either the XInput library from DirectX, or the Input framework in XNA.
I'm wondering if there are any other options available. The scope of my program is rather small, and having to install a large gaming library like DirectX or XNA seems excessive. Further, I'd like the program to be cross-platform and not Microsoft-specific.
Is there a simple lightweight way I can grab the controller input with something like Python?
Edit to answer some comments:
The copter will have 6 total propellers, arranged in 3 co-axial pairs. Basically, it will be very similar to this, only it will cost about $1,000 rather than $15,000. It will use an Arduino for onboard processing, and Zigbee for wireless control.
The 360 controller was selected because it is well designed. It is very ergonomic and has all of the control inputs needed. For those familiar with helicopter controls, the left joystick will control the collective, the right joystick will control the pitch and roll, and the analog triggers will control the yaw. The analog triggers are a big feature of the 360 controller; the PlayStation controller and most others do not have them.
I have a webpage for the project, but it is still pretty sparse. I do plan on documenting the whole design though, so eventually it will be interesting.
http://tricopter.googlecode.com
On a side note, would it kill Google to have a blog feature for googlecode projects?
I would like the 360 controller input program to run in both Linux and Windows if possible. Eventually though, I'd like to hook the controller directly to an embedded microcontroller board (such as Arduino) so that I don't have to go through a computer, but it's not a high priority at the moment.
It is not all that difficult. As mentioned earlier, you can use the SDL libraries to read the status of the Xbox controller and then do whatever you'd like with it; there's a minimal polling sketch at the end of this answer.
There is an SDL tutorial, http://sdl.beuc.net/sdl.wiki/Handling_Joysticks , which is fairly useful.
Note that an Xbox controller has the following:
- two joysticks:
  - left joystick is axes 0 & 1
  - left trigger is axis 2
  - right joystick is axes 3 & 4
  - right trigger is axis 5
- one hat (the D-pad)
- 11 SDL buttons
  - two of them are joystick center presses
- two triggers (act as axes, see above)
The upcoming SDL v1.3 will also support force feedback (aka haptics).
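Here is a minimal SDL 1.2 polling sketch using the mapping above; the exact axis indices are an assumption and can differ between drivers and platforms:

```c
// Minimal SDL 1.2 sketch: open the first joystick (the 360 pad) and poll its
// axes and buttons. Axis indices follow the mapping listed above, but they can
// differ between drivers/platforms, so treat them as an assumption.
#include <stdio.h>
#include <SDL/SDL.h>

int main(void) {
    if (SDL_Init(SDL_INIT_JOYSTICK) != 0) {
        fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }
    if (SDL_NumJoysticks() < 1) {
        fprintf(stderr, "No joystick found\n");
        SDL_Quit();
        return 1;
    }

    SDL_Joystick *pad = SDL_JoystickOpen(0);
    if (pad == NULL) {
        fprintf(stderr, "Could not open joystick 0\n");
        SDL_Quit();
        return 1;
    }

    for (int i = 0; i < 100; ++i) {                // poll for a little while
        SDL_JoystickUpdate();                      // refresh axis/button state
        Sint16 lx = SDL_JoystickGetAxis(pad, 0);   // left stick X
        Sint16 ly = SDL_JoystickGetAxis(pad, 1);   // left stick Y
        Sint16 lt = SDL_JoystickGetAxis(pad, 2);   // left trigger
        Uint8  b0 = SDL_JoystickGetButton(pad, 0); // first button
        printf("LX=%6d LY=%6d LT=%6d B0=%d\n", lx, ly, lt, b0);
        SDL_Delay(100);
    }

    SDL_JoystickClose(pad);
    SDL_Quit();
    return 0;
}
```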
I assume, since this thread is several years old, you have already done something, so this post is primarily to inform future visitors.
PyGame can read joysticks, which is what the X360 controller shows up as on a PC.
Well, if you really don't want to add a dependency on DirectX, you can use the old Windows Joystick API -- Windows Multimedia -> Joystick Reference in the platform SDK.
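For completeness, polling a pad through that API only takes a few lines (link against winmm.lib). A rough sketch; note that, as far as I know, this legacy interface folds the 360 pad's two analog triggers onto a single axis, unlike XInput:

```c
// Sketch of the legacy Windows Multimedia joystick API (link with winmm.lib).
// The 360 pad shows up as an ordinary joystick here; the two analog triggers
// are typically combined onto one axis rather than reported separately.
#include <stdio.h>
#include <windows.h>
#include <mmsystem.h>

int main(void) {
    JOYINFOEX ji;
    ji.dwSize  = sizeof(JOYINFOEX);
    ji.dwFlags = JOY_RETURNALL;          // ask for every axis, button and POV

    if (joyGetPosEx(JOYSTICKID1, &ji) != JOYERR_NOERROR) {
        fprintf(stderr, "No joystick found on the first joystick ID\n");
        return 1;
    }
    printf("X=%lu Y=%lu Z=%lu buttons=0x%08lx POV=%lu\n",
           ji.dwXpos, ji.dwYpos, ji.dwZpos, ji.dwButtons, ji.dwPOV);
    return 0;
}
```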
The standard free cross-platform game library is Simple DirectMedia Layer, originally written to port Windows games to Unix (Linux) systems. It's a very basic, lightweight API that tends to support the minimal subset of features on each system, and it has bindings for most major languages. It has very basic joystick and gamepad support (no force feedback, for example), but it might be sufficient for your needs.
Perhaps the Mono.Xna library has added GamePad support, which would provide the cross-platform functionality you were looking for:
http://code.google.com/p/monoxna/
As for the concern about the library being too heavyweight: sure, for this option that may be true; however, it could open up opportunities to do some nice visualization in the future.
Disclaimer: I'm not familiar with the status of the Mono.Xna project, so it may not have added this feature yet. But still, 'tis an option :-)