While browsing through 'prtconf' output, I found the following properties for the PCIe device I'm implementing a driver for:
% prtconf -v | less
...
name='pci-msix-capid-pointer' type=int items=1
value=000000b0
name='pci-msi-capid-pointer' type=int items=1
value=00000050
...
I guess these indicate MSI/MSI-X-specific capabilities supported by the PCIe device, am I right? But what does the value mean here?
You'd do well to look at pci_common_intr_ops() in the file that @andrew-henle pointed you to. The values refer to the offset into the PCI or PCIe configuration space. If your device doesn't support MSI-X interrupts, then you won't see that property.
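To make that concrete: the property values are byte offsets of the MSI and MSI-X capability structures in the device's config space, so you can read the capability headers directly at those offsets. Here is a minimal sketch using the Solaris/illumos DDI config-space access routines, with the 0x50 offset taken from the prtconf output above; the dump_msi_cap() helper is hypothetical and assumes dip is your driver's dev_info_t from attach(9E):
#include <sys/ddi.h>
#include <sys/sunddi.h>
#include <sys/cmn_err.h>

static void
dump_msi_cap(dev_info_t *dip)
{
    ddi_acc_handle_t cfg;

    if (pci_config_setup(dip, &cfg) != DDI_SUCCESS)
        return;

    /* 0x50 was the value of pci-msi-capid-pointer above */
    uint8_t  cap_id  = pci_config_get8(cfg, 0x50);      /* should read 0x05 (MSI); MSI-X is 0x11 */
    uint8_t  next    = pci_config_get8(cfg, 0x50 + 1);  /* pointer to the next capability */
    uint16_t msi_ctl = pci_config_get16(cfg, 0x50 + 2); /* MSI message control register */

    cmn_err(CE_NOTE, "MSI cap id 0x%x next 0x%x ctl 0x%x", cap_id, next, msi_ctl);

    pci_config_teardown(&cfg);
}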
Context: I am following an embedded systems course https://www.edx.org/course/embedded-systems-shape-the-world-microcontroller-i
In the lecture on bit specific addressing they present the following example on a "peanut butter and jelly port".
Given a port PB with a base address of 0x40005000, suppose you wanted to access both pin 4 and pin 6 of PB, i.e. PB4 and PB6. One could add the offset for pin 4 (0x40) and pin 6 (0x100) to the base address (0x40005000) and define that as their new address, 0x40005140.
Here is where I am confused. If I wanted to define the address for PB6, it would be base (0x40005000) + offset (0x100) = 0x40005100, and the address for PB4 would be base (0x40005000) + offset (0x40) = 0x40005040. So how is it that to access both of them I could use base (0x40005000) + offset (0x40) + offset (0x100) = 0x40005140? Is this not an entirely different location in memory from either of them individually?
Also, why is bit 0 represented as 0x004? In binary that would be 0000 0100. I suppose it would represent bit 0 if you disregard the lowest two binary bits, but why are we disregarding them?
Lecture notes on bit specific addressing:
Your interpretation of how memory-mapped registers are addressed is quite reasonable for any normal peripheral on an ARM based microcontroller.
However, if you read the GPIODATA register definition on page 662 of the TM4C123GH6PM datasheet then you will see that this "register" behaves very differently.
They map a huge block of the address space (1024 bytes) to a single 32-bit register. This means that bits [9:2] of the address bus are not needed, and are in fact overloaded with data: they contain the mask of the bits to be updated. This is what the "offset" calculation you have copied is trying to describe.
Personally I think this hardware interface could be a very clever way to let you set only some of the outputs within a bank using a single atomic write, but it makes this a very bad choice of device to use for teaching, because this isn't the way things normally work.
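To illustrate, here is a minimal sketch of the lecture's example, assuming the TM4C123 APB Port B base address 0x40005000 it uses. Bit n of the pin mask contributes (1 << (n + 2)) to the address offset, which is where the 0x40 (PB4) and 0x100 (PB6) come from, and a write through the combined alias touches only those two pins:
#include <stdint.h>

#define GPIO_PORTB_BASE  0x40005000u

/* PB4 -> (1 << 4) << 2 = 0x040, PB6 -> (1 << 6) << 2 = 0x100 */
#define PB6_PB4  (*((volatile uint32_t *)(GPIO_PORTB_BASE + 0x100u + 0x040u)))

void example(void)
{
    PB6_PB4 = 0x50;   /* drives PB6 and PB4 high; other Port B pins are unchanged */
    PB6_PB4 = 0x00;   /* clears PB6 and PB4 only */
}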
I'm developing an I2C driver on the STM32F74 family processors. I'm using the STM32CubeMX Low Level drivers and I can't make sense of the generated defines for I2C start and stop register values (CR2).
The code is generated in stm32f7xx_ll_i2c.h and is as follows.
/** @defgroup I2C_LL_EC_GENERATE Start And Stop Generation
  * @{
  */
#define LL_I2C_GENERATE_NOSTARTSTOP 0x00000000U
/*!< Don't Generate Stop and Start condition. */
#define LL_I2C_GENERATE_STOP (uint32_t)(0x80000000U | I2C_CR2_STOP)
/*!< Generate Stop condition (Size should be set to 0). */
#define LL_I2C_GENERATE_START_READ (uint32_t)(0x80000000U | I2C_CR2_START | I2C_CR2_RD_WRN)
/*!< Generate Start for read request. */
My question is: why is bit 31 (0x80000000U) included in these defines? The reference manual (RM0385) states "Bits 31:27 Reserved, must be kept at reset value." I can't decide between modifying the generated code and keeping bit 31. I'll happily take recommendations simply on whether it's more likely that this bit is needed or that I'm going to break things by writing to a reserved bit.
Thanks in advance!
I am guessing here, because who knows what was on the minds of the library authors? (Not a lot, if you look at the source code!) But I would guess that it is a "dirty trick" to check that, when calling LL functions, you are using the specified macros.
However it is severely flawed because the "trick" is only valid for Cortex-M3/4 STM32 variants (e.g. F1xx, F2xx, F4xx) where the I2C peripheral is very different and registers such as I2C_CR2 are only 15 bits wide.
The trick is that the library functions have parameter checking asserts such as:
assert_param(IS_TRANSFER_REQUEST(Request));
Where the IS_TRANSFER_REQUEST is defined thus:
#define IS_TRANSFER_REQUEST(REQUEST) (((REQUEST) == I2C_GENERATE_STOP) || \
((REQUEST) == I2C_GENERATE_START_READ) || \
((REQUEST) == I2C_GENERATE_START_WRITE) || \
((REQUEST) == I2C_NO_STARTSTOP))
This forces you to use the LL defined macros as parameters and not some self-defined or calculated mask because they all have that "unused" check bit in them.
If that truly is the reason, it is an ill-advised practice that did not envisage the newer I2C peripheral. You might think that the bit is stripped from the parameter before it is written to the register. I have checked: it is not. And if it were, you would be paying for that overhead on every call, which is also undesirable.
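For what it's worth, stripping the marker would only take something like the hypothetical helper below. This is a sketch of what such stripping could look like, not what the LL driver actually does (it writes the value as-is):
#include "stm32f7xx.h"   /* CMSIS device header: I2C_TypeDef, the I2C_CR2_* masks and MODIFY_REG() */

/* Hypothetical: mask off anything that is not a documented CR2 request bit
   (including the 0x80000000U marker) before touching the register. */
static void i2c_write_transfer_request(I2C_TypeDef *i2c, uint32_t request)
{
    const uint32_t cr2_mask = I2C_CR2_START | I2C_CR2_STOP | I2C_CR2_RD_WRN;

    MODIFY_REG(i2c->CR2, cr2_mask, request & cr2_mask);
}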
As an error-detection technique, if that is what it is, it is not even applied consistently. For example, all the GPIO_PIN_xx macros are 16 bits wide, and since they are masks rather than pin numbers, using bit 31 there could guard against passing a literal pin number 10 where the mask 1<<10 is in fact required (passing 10 would refer to pins 3 and 1, not pin 10). And to be honest, that mistake is far more likely than passing an incorrect I2C transfer-request type.
In the end, however, "Reserved" generally means "unused, but may be used in future implementations", and requiring you to keep the "reset value" is a way of ensuring forward binary compatibility. If you had such a device, no doubt there would be a corresponding library update to support it, but it would require re-compilation of the code. The risk is low and probably only a problem if you attempt to run this binary on a newer, incompatible part that actually used these bits.
I agree with Clifford; the ST CubeMX / HAL / LL library code is, in places, some of the worst-written code imaginable. I have a particular issue with lines such as "TIMx->CCER &= ~TIM_CCER_CC1E", where TIM_CCER_CC1E is defined as 0x0001 and the CCER register contains reserved bits that should remain at 0. There are hundreds of such examples throughout the library code. ST remains silent to my requests for advice.
The STM32F3DISCOVERY board's data brief indicates it features:
STM32F303VCT6 microcontroller featuring 256‑Kbyte Flash memory and 48‑Kbyte RAM in an LQFP100 package
However, the reference manual (RM0316) for STM32F303x6 et al. indicates only 16 Kbytes of SRAM (section 3.3) and 64 Kbytes of Flash memory (section 4.1) for STM32F303x6. The 256 and 48 Kbyte values match up with the STM32F303xB/C, which is also what is linked to on the board's data brief under Table 1's "Target STM32", even though it says "STM32F303VCT6".
I don't understand why there appears to be a discrepancy. Am I missing or misunderstanding something?
The latest version on the ST website seems right: Reference Manual.
For the STM32F303VC:
STM32F303xB/C and STM32F358xC devices feature up to 48 Kbytes of static SRAM.
For the STM32F303x6:
STM32F303x6/8 and STM32F328x8 devices feature the same memory but only up to 16Kbytes of static SRAM
Maybe you have an old copy with a typo?
My problem was that I did not realize that x was a single-letter substitution. I thought that in STM32F303x6, the x replaced VCT. It does not. STM32F303VCT6 corresponds to STM32F303xC (the x replaces the V), and the T6 identifies a specific product in this line of chips.
I'm searching for GPU information through the I/O Registry by matching "IOPCIDevice", and it would be nice to also have information about Metal, i.e. whether it is supported or not (I still support some years-old Mac Pros). I see that Metal 2 has a new property called registryID, and I've tried to match it against the entries returned by IOIteratorNext, but it didn't match. The code I use is essentially the same as described here by @rsharma (credit goes to @trojanfoe), with small modifications.
So my question is: how can I use registryID to make sure it is the same graphics card?
P.S. I already have an array of i/GPU that support Metal using MTLCopyAllDevices.
Given a registry entry ID, you can use IORegistryEntryIDMatching() to create a matching dictionary. You would then pass that to IOServiceGetMatchingService() (on the assumption that there's only one) or IOServiceGetMatchingServices() to retrieve the object.
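A rough sketch in plain C, assuming you already have the 64-bit value from Metal (e.g. the registryID property of an MTLDevice, which you would read from Objective-C or Swift); the helper name is made up:
#include <IOKit/IOKitLib.h>
#include <stdint.h>

/* Returns the I/O Registry entry whose ID matches the value reported by
   Metal.  IOServiceGetMatchingService() consumes the matching dictionary;
   the caller releases the returned service with IOObjectRelease(). */
static io_service_t CopyServiceForRegistryID(uint64_t registryID)
{
    CFMutableDictionaryRef matching = IORegistryEntryIDMatching(registryID);

    if (matching == NULL)
        return IO_OBJECT_NULL;

    return IOServiceGetMatchingService(kIOMasterPortDefault, matching);
}
Note that the entry you get back may be an accelerator entry rather than the IOPCIDevice itself; if so, you can walk up to the PCI device with IORegistryEntryGetParentEntry() in the kIOServicePlane.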
I used AECM (WebRTC) on my ARM-based embedded device for voice communication. Now I'm trying to switch from AECM to AEC for double-talk echo cancellation.
It's simple with AECM:
WebRtcAecm_Create()->WebRtcAecm_Init()->WebRtcAecm_BufferFarend()->WebRtcAecm_Process().
And all the data (near, far, out) is in 16-bit signed short format. However, just changing the functions from WebRtcAecm_* to WebRtcAec_* and converting the signed-short data to float (divided by 32768) didn't work.
I tried to find some examples in audio_processing unittest, but couldn't find any. What am I missing?
WebRTC AEC simply compares the far-end buffer against the near-end buffer captured from the mic and removes echo from the near-end signal based on the far-end reference. So the echo you want to cancel must still be present in the far-end buffer for AEC to be able to remove it. Please check your device latency; the far-end buffer only holds about 128 ms of audio.
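For reference, a per-frame call sequence for the full-band AEC looks roughly like the sketch below. Treat it as a sketch only: the exact signatures and the expected float scaling have changed between WebRTC revisions, so check them against the echo_cancellation.h in your checkout. The ms_delay parameter is your measured playout-plus-capture latency in milliseconds, which is exactly the latency discussed above:
#include <stdint.h>
#include "webrtc/modules/audio_processing/aec/include/echo_cancellation.h"

enum { kFrame = 160 };   /* 10 ms at 16 kHz */

/* Setup, done once:
 *   void* aec = WebRtcAec_Create();
 *   WebRtcAec_Init(aec, 16000, 16000);
 */

/* One 10 ms iteration: buffer the far-end (render) signal first, then
 * process the near-end capture.  The float buffers are converted from
 * your 16-bit samples. */
static void process_frame(void* aec, const float* far, const float* near,
                          float* out, int16_t ms_delay)
{
    const float* near_bands[1] = { near };
    float* out_bands[1] = { out };

    WebRtcAec_BufferFarend(aec, far, kFrame);
    WebRtcAec_Process(aec, near_bands, 1, out_bands, kFrame, ms_delay, 0);
}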