I am using an STM32Lxx controller. While calculating the checksum over ROM (flash) memory, interrupts are being missed (the LIN bus stops responding properly).
How can I avoid this?
My idea was to calculate a checksum over a small portion of ROM at a time and then combine all of these partial checksum values at the end (see the sketch below).
Is there any other method to overcome this issue?
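For illustration, a chunked approach might look like the sketch below (a simple byte-wise CRC-32 is assumed; addresses, sizes, and names are hypothetical). The main loop calls checksum_step() repeatedly, so interrupts such as the LIN handler are never blocked for more than one small block:

    #include <stdint.h>
    #include <stddef.h>

    #define ROM_START   ((const uint8_t *)0x08000000u)  /* hypothetical flash base */
    #define ROM_SIZE    (128u * 1024u)                   /* hypothetical flash size */
    #define BLOCK_SIZE  256u                             /* small enough to keep latency low */

    static uint32_t crc    = 0xFFFFFFFFu;  /* running CRC-32 state */
    static size_t   offset = 0u;

    /* One byte-wise CRC-32 step (reflected form, polynomial 0xEDB88320). */
    static uint32_t crc32_update(uint32_t c, uint8_t byte)
    {
        c ^= byte;
        for (int i = 0; i < 8; i++)
            c = (c >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(c & 1u));
        return c;
    }

    /* Called repeatedly from the main loop; processes one small block per call. */
    int checksum_step(void)
    {
        size_t end = offset + BLOCK_SIZE;
        if (end > ROM_SIZE)
            end = ROM_SIZE;

        while (offset < end)
            crc = crc32_update(crc, ROM_START[offset++]);

        return offset == ROM_SIZE;  /* non-zero once the whole ROM has been scanned */
    }

Because the CRC state is kept between calls, the partial results combine automatically; you only compare the final value against the stored reference once checksum_step() reports completion.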
In the application I'm working on, there are chunks of pre-allocated memory that are filled with image data at one point. I need to wrap this data in an MPSImage to use it with Metal's MPS CNN filters.
From looking at the Docs it seems like there's no easy way to do this without copying the data into either the MPSImage or an MTLTexture.
Do you know of a way to achieve that with no-copy from pre-allocated pointers?
Thanks!
You can allocate an MTLTexture backed by an MTLBuffer created with the bytesNoCopy constructor, and then allocate an MPSImage from that MTLTexture with initWithTexture:featureChannels:.
Keep in mind, though, that in this case the texture won't be in an optimal layout for GPU access, so this is a memory vs. performance trade-off.
Also keep in mind that the bytesNoCopy constructor accepts only addresses aligned to a virtual-memory page boundary, and the driver needs to make sure that memory is resident when you submit a command buffer that uses it.
I am trying to learn how to debug an MCU non-intrusively using SWD and OpenOCD.
while (1)
{
    my_count++;
    HAL_GPIO_TogglePin(LD2_GPIO_Port, LD2_Pin);
    HAL_Delay(750);
}
The code running on my MCU has a free-running counter "my_count". I want to sample/trace the data stored at the address holding "my_count" in real time.
I was doing it this way:
while (1) {                       // generic algorithm, no specific language
    mdw 0x00000000200000ac        // OpenOCD command to read from an address
}
0x200000ac is the address of the variable my_count from the .map file.
But this method is very slow and drops data at high sampling rates.
Is there any other way to trace the data at high frequencies without experiencing data drops?
I did some napkin math, and I have an idea that may work.
As per the reference manual (page 948), the maximum UART baud rate of the STM32F334 is 9 Mbit/s.
If we want to send the memory at a specific address, that is 32 bits. One bit takes 1/9 Mbit/s, or about 1.111*10^(-7) s; multiply that by 32 bits and it comes to roughly 3.56 microseconds. Obviously, as I said, this is purely napkin math - there are start and stop bits involved - but we have a lot of wiggle room. You could easily fit 64 bits into the transmission too.
Now, I've checked online, and it seems the ST-Link based on the STM32F103 can manage a maximum baud rate of 4.5 Mbit/s. A bummer, but we simply need to double our timings: 3.56 us * 2 ≈ 7.1 us for a 32-bit and about 14.2 us for a 64-bit transmission. Even allowing for some start and stop bit overhead, we still seem to fit into our 25 us time budget.
So the suggestion is the following:
You set up a timer with a 25 us period that fires an interrupt, and that interrupt starts a DMA UART transmission. This way the MCU has very little overhead, because the DMA controller handles the transmission autonomously while the CPU does whatever it wants in the meantime. Entering and exiting the timer ISR is in fact the largest part of the overhead this causes, since inside the ISR you literally just flip a couple of bits to tell the DMA to send the data over UART at 4.5 Mbit/s.
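A minimal sketch of that idea using the STM32 HAL (the timer instance, UART handle, and header name are assumptions, and error handling is omitted):

    #include "stm32f3xx_hal.h"          /* assuming an STM32F3 HAL project */

    extern UART_HandleTypeDef huart1;   /* UART set up for 4.5 Mbit/s with DMA on TX */
    extern volatile uint32_t  my_count; /* the variable being traced */

    /* Period-elapsed callback of a timer assumed to be configured for 25 us. */
    void HAL_TIM_PeriodElapsedCallback(TIM_HandleTypeDef *htim)
    {
        if (htim->Instance == TIM2)     /* TIM2 is just an example choice */
        {
            static uint32_t sample;
            sample = my_count;          /* snapshot so the DMA sends a consistent word */
            HAL_UART_Transmit_DMA(&huart1, (uint8_t *)&sample, sizeof(sample));
        }
    }

The CPU only pays for the ISR entry/exit and one function call; the DMA controller streams the four bytes out on its own.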
If I've allocated an image in memory that IS NOT host-visible then I get a staging buffer that IS host-visible so I can write to it. I memcpy into that buffer, and then I do a vkCmdCopyBufferToImage.
However, suppose we're running on hardware where the device-local image memory is also host-visible. Is it more efficient and better to just memcpy into that image memory directly? What image layout does the image need to be in when memcpying straight into it - transfer destination? And how would this work with mip levels? In the buffer-to-image copy you specify each mip level, but if you memcpy in, how do you do that? Or do you just not, and instead do the extra copy to the host-visible staging buffer followed by the buffer-to-image copy?
VK_IMAGE_TILING_OPTIMAL arrangement is implementation-dependent and unexposed.
VK_IMAGE_TILING_LINEAR arrangement is whatever vkGetImageSubresourceLayout says it is.
The device-local host-visible memory on current desktop GPUs has a specific purpose. But anyway, you wouldn't have access to most of the GPU's memory capacity this way.
If you do it the right way™, then there is already only one transfer and the extra memcpy is unnecessary. Either you build your linear image directly in Vulkan's memory, or you import your arbitrary host memory into Vulkan with VK_EXT_external_memory_host.
What image layout does the image need to be in when memcpying straight into it?
For host writes, the image must be in either VK_IMAGE_LAYOUT_PREINITIALIZED or VK_IMAGE_LAYOUT_GENERAL.
How would this work with mip levels?
vkGetImageSubresourceLayout() gives you pLayout->offset and pLayout->size, which tell you where each subresource starts and ends (plus pLayout->rowPitch for addressing within it).
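For concreteness, here is a hedged sketch of writing one mip level of a VK_IMAGE_TILING_LINEAR image bound to host-visible memory (the device, image, memory, and pixel buffer are assumed to exist; the image is assumed to be in PREINITIALIZED or GENERAL layout; flushing for non-coherent memory and error handling are omitted):

    #include <string.h>
    #include <vulkan/vulkan.h>

    void upload_mip_level(VkDevice device, VkImage image, VkDeviceMemory memory,
                          uint32_t mipLevel, const uint8_t *pixels,
                          uint32_t width, uint32_t height, uint32_t bytesPerPixel)
    {
        VkImageSubresource sub = {
            .aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
            .mipLevel   = mipLevel,
            .arrayLayer = 0,
        };

        /* Ask the implementation where this subresource lives and how rows are laid out. */
        VkSubresourceLayout layout;
        vkGetImageSubresourceLayout(device, image, &sub, &layout);

        void *mapped;
        vkMapMemory(device, memory, 0, VK_WHOLE_SIZE, 0, &mapped);

        /* Copy row by row, honouring the implementation's row pitch. */
        uint8_t *dst = (uint8_t *)mapped + layout.offset;
        for (uint32_t y = 0; y < height; y++)
            memcpy(dst + y * layout.rowPitch,
                   pixels + (size_t)y * width * bytesPerPixel,
                   (size_t)width * bytesPerPixel);

        vkUnmapMemory(device, memory);
    }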
I'm required to do code verification using a CRC. In this case, all I do is pass every byte found in flash memory through an algorithm to calculate the CRC and compare the result to a predefined CRC value.
However, I'm hung up on the idea that the flash memory might change at some point, causing the CRC verification to fail.
Assuming that the code isn't touched again whatsoever, is it possible that flash memory will change during execution? If so, what can cause it to change? And how do I avoid said change?
"Flash" only means the memory retains its content in the absence of power; flash memory is definitely erasable/reprogrammable. The separate term read-only memory (ROM) means it cannot be altered after the initial write.
But memory doesn't change unless a CPU instruction touches it, it degrades, or it is affected by external factors. Flash memory contents might last ten years unperturbed. Usually it is the number of erase/write cycles that degrades flash memory before age does. High static electrical charges could corrupt flash, but magnetic fields should have little effect.
If you have any influence over the hardware specs, ROM should be considered if that is the primary intent; it has several advantages over flash for this purpose.
Lastly, you note that you will pass "every byte" of memory through your CRC algorithm. If the correct checksum is to be stored within the same memory medium, you have a recursive problem: you would be trying to precompute a checksum over data that includes its own checksum. In most cases, the valid checksum should be located in a segment of the memory that is not itself included in the scan.
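A minimal sketch of that layout, assuming the reference CRC is stored in the last four bytes of the application area and that word is excluded from the scan (addresses and sizes are hypothetical; crc32_update() is a byte-wise CRC-32 helper like the one sketched earlier):

    #include <stdint.h>
    #include <stddef.h>

    #define APP_START   ((const uint8_t *)0x08000000u)   /* hypothetical application base */
    #define APP_SIZE    (126u * 1024u)                    /* hypothetical application size */
    #define STORED_CRC  (*(const uint32_t *)(APP_START + APP_SIZE - 4u))

    uint32_t crc32_update(uint32_t crc, uint8_t byte);    /* byte-wise CRC-32, as before */

    int verify_application_image(void)
    {
        uint32_t crc = 0xFFFFFFFFu;

        /* Scan everything except the final word, which holds the expected CRC. */
        for (size_t i = 0; i < APP_SIZE - 4u; i++)
            crc = crc32_update(crc, APP_START[i]);

        return (crc ^ 0xFFFFFFFFu) == STORED_CRC;         /* non-zero if the image is intact */
    }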
In any event, if the flash were to change spontaneously, that is exactly what your code verification is intended to catch - you would want and expect the CRC check to fail. So there is no problem; it is doing its job.
It is mostly not a matter of spontaneously changing, but rather one of not being written correctly in the first instance, or possibly protecting against malicious or accidental tampering. If part of the flash is used for variable non-volatile storage, you would obviously not include that area in your CRC. If you are partitioning the same flash into code space and NV storage, potentially errors in the NV storage code could inadvertently modify the code space, so your CRC protects against that and also external tampering (via JTAG for example).
Flash memory is subject to erase/write cycle endurance, and after a number of cycles some bits may stick "high". Endurance varies between parts from around 10,000 to 100,000 cycles and is seldom a problem for code storage. Flash memory also has a nominal data retention time; this is normally quoted at 10 years, but that is a worst-case, extreme-condition rating. Again, your CRC guards against these effects.
I have a very low speed data connection over serial (RS485):
9600 baud
actual data transmission rate is about 25% of that.
The serial line is going through an area of extremely high EMR. Peak fluctuations can reach 3000 kV.
I am not in the position (yet) to force a change in the physical medium, but could easily offer to put in a simple robust forward error correction scheme. The scheme needs to be easy to implement on a PIC18 series micro.
Ideas?
This site claims to implement Reed-Solomon on the PIC18. I've never used it myself, but perhaps it could be a helpful reference?
Search for the CRC-16 algorithm used in the MODBUS RTU protocol.
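For reference, a typical implementation of that CRC-16 (polynomial 0xA001 reflected, initial value 0xFFFF) looks roughly like this; note that a CRC only detects errors, so you would still pair it with retransmission or a separate FEC scheme to actually correct them:

    #include <stdint.h>
    #include <stddef.h>

    /* CRC-16 as used by MODBUS RTU; small and fast enough for a PIC18. */
    uint16_t modbus_crc16(const uint8_t *data, size_t len)
    {
        uint16_t crc = 0xFFFFu;

        for (size_t i = 0; i < len; i++)
        {
            crc ^= data[i];
            for (uint8_t bit = 0; bit < 8; bit++)
            {
                if (crc & 1u)
                    crc = (crc >> 1) ^ 0xA001u;
                else
                    crc >>= 1;
            }
        }
        return crc;
    }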
I develop with PIC18 devices and currently use the MCC18 and PICC18 compilers. I noticed a few weeks ago that the peripheral headers for PICC18 incorrectly map the Busy2USART() macro to the TRMT bit instead of the TRMT2 bit. This caused me major headaches for a short time before I discovered the problem. For example, a simple transmission:
putc2USART(*p_value++);   /* send first byte */
while (Busy2USART());     /* wait for the byte to leave the shift register */
putc2USART(*p_value);     /* send next byte */
When the Busy2USART() macro was incorrectly mapped to the TRMT bit, I was never actually waiting for bytes to leave the shift register because I was monitoring the wrong bit. Before I realized the header file was inaccurate, the only way I was able to successfully transmit a byte over 485 was to wait 1 ms between bytes. My baud rate was 91912, and the delays between bytes killed my throughput.
I also suggest implementing a means of collision detection along with checksums. Checksums are cheap, even on a PIC18. If you are able to listen to your own transmissions, do so; it will let you detect collisions that may result from duplicate addresses on the same loop or from incorrect timing.