I’m trying to create a rotary encoder device that will output a set of keypresses depending on whether I turn it forward or backward, but I’m currently confused as to whether it is just a matter of using a scroll-wheel descriptor, interfacing the encoder as you normally would, and creating an input report based on the keypresses, or whether there is more to it. In my approach, I’m interfacing the encoder with my STM32 in timer encoder mode, where the hardware counts for me, and depending on whether the count is increasing or decreasing, I output a set of keypresses, but I’m not sure if this is the correct way.
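Concretely, the polling side of what I have in mind looks something like this rough sketch (HAL-based; send_key_report() and the key codes are placeholders for my HID report code, and TIM3 is assumed to already be running in encoder mode):

```c
/* Rough sketch: TIM3 is assumed to be configured in encoder mode and
 * started with HAL_TIM_Encoder_Start() elsewhere. send_key_report()
 * is a placeholder for whatever builds and queues the HID report. */
#include "stm32f4xx_hal.h"

#define KEY_FORWARD   0x4F   /* placeholder usage codes */
#define KEY_BACKWARD  0x50

extern TIM_HandleTypeDef htim3;
void send_key_report(uint8_t keycode);   /* placeholder */

void encoder_task(void)                  /* called periodically */
{
    static int16_t last = 0;
    int16_t now   = (int16_t)__HAL_TIM_GET_COUNTER(&htim3);
    int16_t delta = (int16_t)(now - last);  /* 16-bit wrap-around safe */
    last = now;

    while (delta > 0) { send_key_report(KEY_FORWARD);  delta--; }
    while (delta < 0) { send_key_report(KEY_BACKWARD); delta++; }
}
```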
I'm using an STM32F4 and I want to generate a pulse. The question is: how do I know whether the pulse is generated when I set a certain bit of SWIER in EXTI? Is there any way to detect the generated pulse, or any alternative way to indicate it? How should I achieve that with the std library?
Is there any code to configure EXTI for software event mode, and how can I detect or indicate the generated pulse?
The "pulse generator" in the diagram is merely a description of how the event generation hardware works. It is not a user accessible function.
The difference between an interrupt and an event is not clear in ST's manuals, but an interrupt signals the NVIC, and will result in the associated handler code being executed, while an event is used to directly signal a peripheral device.
So here if the configured EXTI edge occurs, and the corresponding event mask bit is set, a pulse is generated signalling some other internal on-chip peripheral.
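For reference, if "std library" here means the old Standard Peripheral Library, putting a line into event mode and firing it from software looks roughly like this (a sketch; line 0 is an arbitrary choice, and the resulting pulse still isn't visible to software):

```c
#include "stm32f4xx_exti.h"

/* Rough SPL sketch: put EXTI line 0 in event (not interrupt) mode and
 * trigger it from software via the SWIER bit. The resulting "pulse"
 * only goes to other on-chip hardware - there is nothing to read back. */
void exti0_soft_event_init(void)
{
    EXTI_InitTypeDef cfg;

    EXTI_StructInit(&cfg);
    cfg.EXTI_Line    = EXTI_Line0;
    cfg.EXTI_Mode    = EXTI_Mode_Event;        /* event, not interrupt */
    cfg.EXTI_Trigger = EXTI_Trigger_Rising;
    cfg.EXTI_LineCmd = ENABLE;
    EXTI_Init(&cfg);
}

void exti0_soft_trigger(void)
{
    EXTI_GenerateSWInterrupt(EXTI_Line0);      /* sets SWIER bit 0 */
}
```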
Is there any way to detect the generated pulse?
Not in the context of that diagram. It is probably irrelevant to whatever it is you are trying to do.
How should I achieve that with the std library?
Classic X-Y problem: you have fixated on a solution and are asking questions about the solution. You need to ask about the problem. Unfortunately, it is entirely unclear what that problem is.
Moreover, what "std library"? Are you using the older "standard peripheral library" or the abysmal CubeMX library?
If you want to simply generate an output pulse in response to an edge on an input, then most of the timer peripherals support that with zero software overhead. Search your part's reference manual for "One-pulse mode" in relation to any of the available timer peripherals.
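For example, with the F4 Standard Peripheral Library a one-pulse-mode setup looks roughly like this (untested sketch; the timer, channels and timings are arbitrary example choices, and GPIO/RCC setup is omitted):

```c
/* Rough SPL sketch of "One-pulse mode": a rising edge on TIM4 CH2 (the
 * trigger input) makes TIM4 CH1 output a single pulse entirely in
 * hardware. Assumes an 84 MHz timer clock. */
#include "stm32f4xx_tim.h"

void tim4_one_pulse_init(void)
{
    TIM_TimeBaseInitTypeDef tb;
    TIM_OCInitTypeDef       oc;
    TIM_ICInitTypeDef       ic;

    TIM_TimeBaseStructInit(&tb);
    tb.TIM_Prescaler = 83;                       /* 84 MHz / 84 = 1 MHz tick   */
    tb.TIM_Period    = 999;                      /* counter stops at 1000 us   */
    TIM_TimeBaseInit(TIM4, &tb);

    TIM_OCStructInit(&oc);
    oc.TIM_OCMode      = TIM_OCMode_PWM2;        /* low until CCR1, then high  */
    oc.TIM_OutputState = TIM_OutputState_Enable;
    oc.TIM_Pulse       = 500;                    /* 500 us delay, ~500 us pulse */
    TIM_OC1Init(TIM4, &oc);

    TIM_ICStructInit(&ic);
    ic.TIM_Channel    = TIM_Channel_2;           /* trigger input on CH2       */
    ic.TIM_ICPolarity = TIM_ICPolarity_Rising;
    TIM_ICInit(TIM4, &ic);

    TIM_SelectInputTrigger(TIM4, TIM_TS_TI2FP2); /* CH2 edge is the trigger    */
    TIM_SelectSlaveMode(TIM4, TIM_SlaveMode_Trigger);
    TIM_SelectOnePulseMode(TIM4, TIM_OPMode_Single);
    /* no TIM_Cmd() needed: the trigger edge starts the counter */
}
```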
I'm completely new to the space and I have a project that I would like to complete. The project is basically: extracting sensor data in the form of Modbus, then sending the data out through CAN bus. I already have a Modbus input and a CAN bus output.
Researching Modbus has been a bit confusing, so sorry if these questions seem a bit stupid.
Is it possible to write code which can convert Modbus to CAN bus, or will I need external hardware?
Additionally, I'm looking to add a microprocessor to my dev board. Is there anything specific I should look for that would help with my Modbus and CAN bus operations, or will any microprocessor work?
Thank you
The basis of all data communication is the OSI model. Start there. Then you'll eventually find out that Modbus is an application-layer protocol standard and CAN is a physical/data-link layer standard.
Therefore your question doesn't make any sense. You can't convert an application layer into a physical/data-link layer. You can, however, convert Modbus to some specific application-layer protocol for CAN, such as CANopen, DeviceNet or J1939. Or a custom one.
The lowest layers underneath Modbus are UART and most likely RS-485, possibly RS-232.
You will need an RS-485 or RS-232 transceiver between the Modbus sensor and your MCU, and you will need a CAN transceiver between your MCU and the CAN bus. Additionally, it is very strongly advised to pick an MCU with built-in CAN controller hardware.
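To make the layering point concrete: a "gateway" on your MCU ends up being application code that polls the sensor over Modbus and re-packs the values into frames defined by whatever CAN application protocol you pick. A minimal conceptual sketch, where every function name, register address and CAN ID is a made-up placeholder rather than a real stack:

```c
/* Conceptual gateway loop only - modbus_read_holding() and can_send()
 * stand in for whatever Modbus RTU master and CAN driver you use. */
#include <stdint.h>
#include <stdbool.h>

bool modbus_read_holding(uint8_t slave, uint16_t reg, uint16_t *value);
bool can_send(uint32_t can_id, const uint8_t *data, uint8_t len);

void gateway_poll(void)
{
    uint16_t sensor;

    if (modbus_read_holding(0x01, 0x0000, &sensor)) {     /* Modbus side (UART/RS-485) */
        uint8_t frame[2] = { (uint8_t)(sensor >> 8), (uint8_t)(sensor & 0xFF) };
        can_send(0x181, frame, sizeof frame);              /* CAN side: your chosen
                                                              higher-layer protocol
                                                              defines this ID/payload */
    }
}
```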
Pick a target hardware that suits your project, don't pick some random dev board and then try to duct tape it with misc hardware to suit the project requirements.
However, the hardware is the easy part. Buying and configuring protocol stacks for whatever application protocols you are using is the hard and expensive part.
Also, there are lots of companies making gateways, so why re-invent the wheel? If you need to convert between, for example, Modbus and CANopen, these are both well-known industry standards. Consider just buying a gateway.
I'm trying to tap the currently selected output audio device on macOS, so I basically have a pass through listener that can monitor the audio stream currently being output without affecting it.
I want to copy this data to a ring buffer in real time so I can operate on it separately.
The combination of Apple docs and (outdated?) SO answers is confusing as to whether I need to write a hacky kernel extension, can utilise CoreAudio for this, or need to interface with the HAL?
I would like to work in Swift if possible.
Many thanks
(ps. I had been looking at this and this)
I don't know about kernel extensions - their use of special "call us" signing certificates or the necessity of turning off SIP discourages casual exploration.
However, you can use a combination of CoreAudio and HAL AudioServer plugins to do what you want, and you don't even need to write the plugin yourself; there are several open-source versions to choose from.
CoreAudio doesn't give you a way to record from (or "tap") output devices - you can only record from input devices. The way around this is to create a virtual "pass-through" device (an AudioServerPlugin), not associated with any hardware, that copies output through to input, then set this pass-through device as the default output and record from its input. I've done this using open-source AudioServer plugins like BackgroundMusic and BlackHole.
To tap/record from the resulting device you can simply add an AudioDeviceIOProc callback to it, or set the device as the kAudioOutputUnitProperty_CurrentDevice of a kAudioUnitSubType_HALOutput AudioUnit.
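For the AudioDeviceIOProc route, a rough C sketch (untested; it assumes you have already looked up the pass-through device's AudioObjectID, for example from its UID):

```c
#include <CoreAudio/CoreAudio.h>

/* IOProc attached to the pass-through device: inData holds whatever the
 * apps are playing through it; copy it into your ring buffer here
 * (no locks, no allocation - see the realtime rules below). */
static OSStatus tapProc(AudioObjectID device,
                        const AudioTimeStamp *now,
                        const AudioBufferList *inData,
                        const AudioTimeStamp *inTime,
                        AudioBufferList *outData,
                        const AudioTimeStamp *outTime,
                        void *client)
{
    return noErr;
}

AudioDeviceIOProcID startTap(AudioObjectID device)
{
    AudioDeviceIOProcID procID = NULL;
    AudioDeviceCreateIOProcID(device, tapProc, NULL, &procID);
    AudioDeviceStart(device, procID);
    return procID;
}
```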
There are two problems with the above virtual pass-through device approach:
1. you can't hear your output anymore, because it's being consumed by the pass-through device
2. changing the default output device will switch away from your device and the tap will fall silent.
If 1. is a problem, then a simple workaround is to create a Multi-Output device containing the pass-through device and a real output device, and set this as the default output device. Volume controls stop working, but you can still change the real output device's volume in Audio MIDI Setup.app.
For 2. you can add a listener to the default output device and update the multi-output device above when it changes.
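In C that listener might look roughly like this sketch (error handling omitted; on older SDKs the element constant is kAudioObjectPropertyElementMaster):

```c
#include <CoreAudio/CoreAudio.h>

/* Called whenever the user changes the default output device, so the
 * multi-output device can be rebuilt to follow it. */
static OSStatus defaultOutputChanged(AudioObjectID obj,
                                     UInt32 numAddresses,
                                     const AudioObjectPropertyAddress addresses[],
                                     void *client)
{
    /* re-query kAudioHardwarePropertyDefaultOutputDevice here and
       update the multi-output device accordingly */
    return noErr;
}

void watchDefaultOutput(void)
{
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDefaultOutputDevice,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    AudioObjectAddPropertyListener(kAudioObjectSystemObject, &addr,
                                   defaultOutputChanged, NULL);
}
```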
You can do most of the above in Swift, although for ring-buffer-stowing from the buffer delivery callbacks you'll have to use C or some other language that can respect the realtime audio rules (no locks, no memory allocation, etc.). You could maybe try AVAudioEngine to do the tap, but IIRC changing the input device is a vale of tears.
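For the ring-buffer part, a minimal single-producer/single-consumer sketch in C that follows those rules (capacity and float samples are arbitrary assumptions):

```c
/* SPSC ring buffer: the audio callback only writes, your processing
 * thread only reads, and neither side takes a lock or allocates. */
#include <stdatomic.h>
#include <stddef.h>

#define RB_CAPACITY (1 << 16)              /* power of two, in floats */

typedef struct {
    float          data[RB_CAPACITY];
    _Atomic size_t head;                   /* written by producer only */
    _Atomic size_t tail;                   /* written by consumer only */
} RingBuffer;

size_t rb_write(RingBuffer *rb, const float *src, size_t n)
{
    size_t head = atomic_load_explicit(&rb->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&rb->tail, memory_order_acquire);
    size_t free = RB_CAPACITY - (head - tail);
    if (n > free) n = free;                /* drop what doesn't fit */

    for (size_t i = 0; i < n; i++)
        rb->data[(head + i) & (RB_CAPACITY - 1)] = src[i];

    atomic_store_explicit(&rb->head, head + n, memory_order_release);
    return n;
}

size_t rb_read(RingBuffer *rb, float *dst, size_t n)
{
    size_t tail = atomic_load_explicit(&rb->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&rb->head, memory_order_acquire);
    size_t avail = head - tail;
    if (n > avail) n = avail;

    for (size_t i = 0; i < n; i++)
        dst[i] = rb->data[(tail + i) & (RB_CAPACITY - 1)];

    atomic_store_explicit(&rb->tail, tail + n, memory_order_release);
    return n;
}
```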
I am wondering if there are projects/examples with any machine learning library (TensorFlow, etc.) which can do continuous training to, in a way, simulate an animal or pet.
What do I mean by animal/pet?
Let's assume I have this hardware robot.
Inputs:
Touch sensor, which returns a number from 0 to 255 depending on the touch force.
Microphone.
Webcam.
Outputs:
Moving module, which can move forward/backward and left/right. Let's say just a simple wheel system with 4 input pins: if I send +5 V (binary 1) to pin 1 it goes forward, to pin 2 backward, to pin 3 left, and to pin 4 right.
Speakers.
Everything is connected to a central computer (a Raspberry Pi, or if that doesn't have enough CPU/memory, a Microsoft Surface Pro with a 4-core i7 3+ GHz CPU and 32 GB RAM).
The idea is to connect the hardware inputs mentioned above to the inputs of a neural network, and the outputs to the outputs of the neural network, with these conditions:
Minimize bad feelings and maximize good ones.
If the touch sensor returns a number above 128 it is bad feelings (pain); if it returns less than 127 it is good feelings (petting). If the battery is below 20% it is bad feelings. Loud noise from the microphone is bad feelings. In programming terms: three variables to minimize and one to maximize.
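In code, I imagine the "feelings" signal being something like this toy sketch (all thresholds and weights are placeholders):

```c
/* Toy sketch of the scalar "feelings" reward I have in mind. */
float reward(unsigned touch,      /* 0..255 from the touch sensor */
             float    battery,    /* 0..100 percent               */
             float    loudness)   /* 0..1 from the microphone     */
{
    float r = 0.0f;

    if (touch > 128)     r -= (touch - 128) / 127.0f;   /* pain       */
    else if (touch > 0)  r += (128 - touch) / 128.0f;   /* petting    */
    if (battery < 20.0f) r -= 1.0f;                     /* low battery */
    r -= loudness;                                      /* loud noise  */

    return r;   /* the learner tries to maximize this over time */
}
```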
When I connect it all together and switch it on, I will train it like a baby: show some pictures, say something, pet it for good work, etc. Show it where the battery is charged (maybe I will put in a wireless charger so that it can do it by itself). I understand that it will take a long time, maybe years.
My problem now is that most of the examples I have found so far work like: train first, then use the already-trained neural network, or use a neural network pre-trained by others. I could not find an example with continuous training and usage of a neural network.
Questions:
Is it possible to implement this with current machine learning technologies/libraries (TensorFlow, etc.)? Let's consider only the software part first, assuming I have unlimited hardware.
If it is not possible, then why?
If it is possible, then links to examples or general approach description will be very helpful.
If it is possible, then what hardware will be needed?
P.S. Of course I do not expect it to be as smart as a human, or even as a dog/cat. Maybe like a fly or a mosquito :)
Also, I would like a high-level answer without going very deep into details like how you would implement the moving module, etc. And everything as simple as possible.
The wife asked for a device to make the Xmas lights 'rock' along with the music. I am going to use an Arduino microcontroller to control relays hooked up to the lights, sending down 6 signals from a C# WinForms app to turn them off and on. I want to use NAudio to separate the amplitude and rhythm and send the six signals: a specific range of hertz for each signal, like an equalizer with six bars, with the timing taken from the rhythm. I have seen the WPF demo, and the waveform seems like the answer. I want to know how to get those values in real time while the song is playing.
I'm thinking ...
1. Create a simple mp3 player and load all my songs.
2. Start the songs playing.
3. Sample the current dynamics of the song and put that into an integer that I can send to the appropriate channel on the Arduino microcontroller via USB.
I'm not sure how to capture the current sound information in real time and produce integer values for that moment. I can read the e.MaxSampleValues[0] values in real time while the song is playing, but I want to be able to distinguish which frequency range is active at that moment.
Any help or direction would be appreciated for this interesting project.
Thank you
Sounds like a fun signal processing project.
Using the NAudio.Wave.WasapiLoopbackCapture object you can get the audio data being produced from the sound card on the local computer. This lets you skip the 'create an MP3 player' step, although at the cost of a slight delay between sound and lights. To get better synchronization you can do the MP3 decoding and pre-calculate the beat patterns and output states during playback. This will let you adjust the delay between sending the outputs and playing the audio block those outputs were generated from, getting near perfect synchronization between lights and music.
Once you have the samples, the next step is to use an FFT to find the frequency components. Fortunately NAudio includes a class to help with this: NAudio.Dsp.FastFourierTransform. (Thank you Mark!) Take the output of the FFT() function and sum out the frequency ranges you want for each controlled light.
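The band-summing step is just arithmetic over the FFT bins. A language-neutral sketch in C (FFT size and band edges are example assumptions; the same loop applies to the real/imaginary values the NAudio FFT hands back):

```c
/* Sum FFT bins into 6 bands, one per controlled light. */
#include <math.h>

#define FFT_SIZE   1024
#define NUM_BANDS  6

/* example band edges in Hz, bass to treble */
static const float bandEdges[NUM_BANDS + 1] = { 20, 60, 250, 500, 2000, 4000, 16000 };

void sumBands(const float re[], const float im[], float sampleRate, float bands[NUM_BANDS])
{
    float binHz = sampleRate / FFT_SIZE;            /* width of one FFT bin */

    for (int b = 0; b < NUM_BANDS; b++) bands[b] = 0.0f;

    for (int i = 1; i < FFT_SIZE / 2; i++) {        /* positive-frequency half only */
        float freq = i * binHz;
        float mag  = sqrtf(re[i] * re[i] + im[i] * im[i]);
        for (int b = 0; b < NUM_BANDS; b++) {
            if (freq >= bandEdges[b] && freq < bandEdges[b + 1]) {
                bands[b] += mag;
                break;
            }
        }
    }
}
```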
The next step is Beat Detection. There's an interesting article on this here. The main difference is that instead of doing energy detection on a stream of sample blocks you'll be using the data from your spectral analysis stage to feed the beat detection algorithm. Those ranges you summed become inputs into individual beat detection processors, giving you one output for each frequency range you defined. You might want to add individual scaling/threshold factors for each frequency group, with some sort of on-screen controls to adjust these for best effect.
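Per frequency range, a simple version of that energy comparison could look like the sketch below (the history length and threshold factor are assumptions you would expose as the on-screen tuning controls):

```c
/* Per-band beat detection: compare the band's instantaneous energy
 * against its recent average. Zero-initialise the struct and set
 * threshold (e.g. 1.3 .. 2.0) before use. */
#define HISTORY 43            /* roughly 1 s of history at ~43 FFT blocks/s */

typedef struct {
    float history[HISTORY];
    int   index;
    float threshold;
} BeatDetector;

int detectBeat(BeatDetector *d, float bandEnergy)
{
    float avg = 0.0f;
    for (int i = 0; i < HISTORY; i++) avg += d->history[i];
    avg /= HISTORY;

    d->history[d->index] = bandEnergy;              /* update rolling history */
    d->index = (d->index + 1) % HISTORY;

    return avg > 0.0f && bandEnergy > d->threshold * avg;   /* 1 = light on */
}
```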
At the end of the process you will have a stream of sample blocks, each with a set of output flags. Push the flags out to your Arduino and queue the samples to play, with a delay on either of those operations to achieve your synchronization.