I am porting the USB driver from an STM32F4 device to an STM32L4 device. It almost works: during enumeration it sends and receives the information, but the data is not exactly the same as from the "plain" STM Cube generated project. I have the same settings in both projects but get strange results.
I have lost a week trying to find a solution; maybe someone here has had a similar problem and can help me out. Sorry for the images, but there is no other way of posting this information on SO.
As you can see, the packets are almost the same, but not identical. After the 25th transmission the board stalls and accepts only a very limited number of requests.
Both files from Wireshark (in Wireshark and text formats) are here:
https://gitlab.com/diymat/usb-problem/tree/master
The ep* files are from my port, the stmcdc* ones from the STM Cube generated project. Both were captured running on the same hardware.
Is it the same clock config?
Is the 48 MHz USB clock OK?
Using a non-crystal or internal oscillator to get the 48 MHz USB clock commonly leads to issues on large transfers, even on the F4; I can't say for the L4.
So it may appear to work with HID and rather short packets, but start misbehaving for larger transfers.
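If the L4 board has no crystal-derived 48 MHz source, one common approach is to clock the USB peripheral from the internal HSI48 and let the CRS trim it against the USB SOF packets. A minimal sketch using the STM32L4 Cube HAL (the macro names are taken from the L4 HAL headers; verify them against your HAL version):

#include "stm32l4xx_hal.h"

/* Sketch only: route HSI48 to USB and enable CRS trimming from USB SOF. */
static void usb_clock_config(void)
{
    RCC_OscInitTypeDef osc = {0};
    RCC_PeriphCLKInitTypeDef periph = {0};
    RCC_CRSInitTypeDef crs = {0};

    /* Turn on HSI48. */
    osc.OscillatorType = RCC_OSCILLATORTYPE_HSI48;
    osc.HSI48State = RCC_HSI48_ON;
    osc.PLL.PLLState = RCC_PLL_NONE;
    HAL_RCC_OscConfig(&osc);

    /* Select HSI48 as the 48 MHz USB clock source. */
    periph.PeriphClockSelection = RCC_PERIPHCLK_USB;
    periph.UsbClockSelection = RCC_USBCLKSOURCE_HSI48;
    HAL_RCCEx_PeriphCLKConfig(&periph);

    /* CRS: lock HSI48 to the 1 kHz USB SOF so long transfers stay in spec. */
    __HAL_RCC_CRS_CLK_ENABLE();
    crs.Prescaler = RCC_CRS_SYNC_DIV1;
    crs.Source = RCC_CRS_SYNC_SOURCE_USB;
    crs.Polarity = RCC_CRS_SYNC_POLARITY_RISING;
    crs.ReloadValue = __HAL_RCC_CRS_RELOADVALUE_CALCULATE(48000000, 1000);
    crs.ErrorLimitValue = RCC_CRS_ERRORLIMIT_DEFAULT;
    crs.HSI48CalibrationValue = RCC_CRS_HSI48CALIBRATION_DEFAULT;
    HAL_RCCEx_CRSConfig(&crs);
}

Without the CRS step the free-running HSI48 can drift out of the USB tolerance, which would match the "works for short packets, stalls on longer transfers" symptom.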
I would like to be of more help, but I would need more information to point you in a good direction. That being said, there could be a number of things wrong, so I will try to cover what I can think of. Let's start with some of the most obvious.
A faulty clock configuration or clock hardware (a bad component, or the wrong source selected in software) can cause a number of issues, and this could be a symptom of that.
If you are using the generated FATFS middleware from ST, too small a stack size in the L4 configuration can cause this exact problem. It may work until the stack overflows and corrupts a return address, which can lead to some sort of fault or, in some cases, just a bad result returned inside the FATFS or USB peripheral code from ST; a USB window read from FATFS then returns a dirty sector, which results in terminated operations on the peripheral.
I see you're using STM32Cube, so you can edit the stack and heap sizes by opening the file startup_stm32l475xx.s and editing the two values as you see fit for your application:
; Amount of memory (in bytes) allocated for Stack
; Tailor this value to your application needs
; <h> Stack Configuration
;   <o> Stack Size (in Bytes) <0x0-0xFFFFFFFF:8>
; </h>

Stack_Size      EQU     0x400

                AREA    STACK, NOINIT, READWRITE, ALIGN=3
Stack_Mem       SPACE   Stack_Size
__initial_sp

; <h> Heap Configuration
;   <o> Heap Size (in Bytes) <0x0-0xFFFFFFFF:8>
; </h>

Heap_Size       EQU     0x200

                AREA    HEAP, NOINIT, READWRITE, ALIGN=3
__heap_base
Heap_Mem        SPACE   Heap_Size
__heap_limit
Try increasing the stack size and see what happens. Good luck finding your solution!
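If you want to confirm the stack really is the culprit before (or after) resizing it, a rough high-water-mark check helps. A minimal sketch, assuming you add an EXPORT Stack_Mem directive to the startup file so the symbol is visible from C (the size follows the listing above; the Keil stack grows downward, so Stack_Mem[0] is the last word to be used):

#include <stdint.h>

/* Provided by startup_stm32l475xx.s; needs an added EXPORT Stack_Mem there. */
extern uint32_t Stack_Mem[];
#define STACK_WORDS (0x400u / 4u)   /* keep in sync with Stack_Size */

/* Paint the lower (deepest) half of the stack early in main(),
   while execution is still near the top of the stack. */
void stack_paint(void)
{
    for (uint32_t i = 0; i < STACK_WORDS / 2u; i++)
        Stack_Mem[i] = 0xDEADBEEFu;
}

/* Later, count how many painted words survived; 0 means the stack
   overflowed (or came very close), so Stack_Size must grow. */
uint32_t stack_unused_bytes(void)
{
    uint32_t i = 0;
    while (i < STACK_WORDS / 2u && Stack_Mem[i] == 0xDEADBEEFu)
        i++;
    return i * 4u;
}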
I'm learning how to use Intel Pin, and I have a couple of questions regarding the instrumentation process for a particular use case. I would like to create a memory reference trace of a simple packet processing application. I have developed the required pintool for that purpose, and my questions are the following.
Assuming I use the same network packet trace at all times as input to my packet processing application, let's say I instrument that application on two different machines. How will the memory reference traces differ? Apparently Pin instruments user space and is architecture independent, so I wouldn't expect to see big qualitative differences between the two output memory reference traces. Is that assumption correct?
How will the memory trace change if I experiment with the rate at which I inject network packets into my packet processing application? Or will it change at all, and if so, how can I detect how the output traces differ?
Thank you
I assume you are doing something related to following the data flow / code flow of the network packet, probably closely related to data tainting?
Assuming I use the same network packet trace at all times as input to my packet processing application, let's say I instrument that application on two different machines. How will the memory reference traces differ?
There are multiple factors that can make the memory traces quite different, the crucial point being "two different machines":
Exact copy of the same OS: traces nearly the same (as the stack, heap and virtual memory manager will behave the same), except that addresses will change (ASLR).
Same OS (but not necessarily the same version of the system shared libraries): probably the same as above if the target application is not recompiled; maybe minor differences due to the heap manager behaving differently.
Different OS (where a recompilation of the traced application is needed): completely different traces.
Apparently Pin instruments user space and is architecture independent, so I wouldn't expect to see big qualitative differences between the two output memory reference traces. Is that assumption correct?
Pintools need to be recompiled for different architectures, but the pintool itself should not change the way the target application is traced (same pintool + same OS + same application = nearly the same trace).
How will the memory trace change if I experiment with the rate at which I inject network packets into my packet processing application?
This is system dependent and also depends on your insertion point(s). If you start tracing at recv() or recvfrom(), there might be congestion or dropped packets (UDP) if, for example, the rate is too high. It depends on the protocol, your receive window, etc. There are really multiple factors here.
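Pin's own API is C++, but to make "insertion point at recv()" concrete in a language-neutral way, here is a small hypothetical LD_PRELOAD shim in C (not Pin) that logs every datagram the application actually sees. Comparing its output across runs at different rates tells you immediately whether packets were dropped before your pintool's insertion point was ever reached:

#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Interpose recv(): log the size of every datagram delivered to the app.
   Build: gcc -shared -fPIC shim.c -o shim.so -ldl
   Run:   LD_PRELOAD=./shim.so ./packet_app */
ssize_t recv(int fd, void *buf, size_t len, int flags)
{
    static ssize_t (*real_recv)(int, void *, size_t, int);
    if (!real_recv)
        real_recv = (ssize_t (*)(int, void *, size_t, int))dlsym(RTLD_NEXT, "recv");
    ssize_t n = real_recv(fd, buf, len, flags);
    if (n > 0)
        fprintf(stderr, "recv: %zd bytes on fd %d\n", n, fd);
    return n;
}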
Or will it change at all, and if so, how can I detect how the output traces differ?
I'd probably check the code flow rather than the data flow in this case (it seems easier to me). Given exactly the same packet but different rates, if the code branches are not the same (maybe at the basic block (BBL) level), this immediately tells you that the same packet is handled differently.
I've written a small, single-line-oriented, UDP-based display service to support the Raspberry Pi projects I frequently work on, where it is nice to see the results of sensor data being captured. This is a rewrite of that program using GTK3 v3.18.9 and GLib2 v2.46.2. I'm developing on OS X El Capitan.
It seems to double in real memory size every 30 minutes or so, based on traffic, so I'm presuming I have a memory leak somewhere. But for the life of me I can't see where in the code it could possibly be. Valgrind did not initially work for me, so I've got some studying to do to resolve whatever issue that is.
Meanwhile, I was hoping that different eyes might be able to suggest a cause in the code for a traffic-based (at least I think so) memory leak.
It starts up using about 10 MB of real memory, then jumps to 14 MB once it has received 64 messages in total, and slowly grows from there. At 64 messages, I start deleting the 65th message off the end of the list, presuming I should be saving memory, as this program might run for weeks.
Here is the code for the test client and the display service:
https://gist.github.com/skoona/c1218919cf393c98af474a4bf868009f
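Without singling out which part of the gist does the trimming, one classic cause fits these symptoms exactly: popping the oldest element off the list without freeing what it points to. A hypothetical sketch of a leak-free trim, assuming the messages are g_strdup()'d strings kept in a GList (the names here are illustrative, not from the gist):

#include <glib.h>

/* Keep at most max_len messages; free both the payload and the link of
   everything trimmed off the tail. Forgetting the g_free() here would
   leak one payload per received packet once the list is full. */
static GList *trim_messages(GList *list, guint max_len)
{
    while (g_list_length(list) > max_len) {
        GList *last = g_list_last(list);
        g_free(last->data);                    /* the g_strdup()'d message */
        list = g_list_delete_link(list, last); /* the link itself          */
    }
    return list;
}

If the messages are instead rows in a GTK container (e.g. a GtkListBox), the analogous step is calling gtk_widget_destroy() on the removed row so its resources are actually released.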
I need some explanation of the BIOS load/execution procedure. I need to authenticate the BIOS executed by the CPU. My idea is to compute the HMAC-SHA1 of the MISO data stream (the data going from the SPI BIOS flash to the CPU).
The problem is that I'm not sure the MISO data stream is always the same. I did some tests and I always got a data stream different from the previous one. The first part of the stream is always the same; after a while (I don't have the equipment to dump the whole communication and catch the moment when it happens) the stream differs. I'm not sure, but I suspect it is different because I sniff a few bytes of the stream when a counter reaches a specified value, and I get different sniffed values each time. I think the sniffing procedure is correct, but I can't be sure (the sniffing is performed by an FPGA between the CPU and the SPI BIOS flash, and I wrote the VHDL).
I've also noted that the CPU reads the reset vector (0xFFFFF0) at least twice during execution of the BIOS.
Is it possible that the CPU performs different steps at every power-on? In your opinion, is it possible to authenticate the data stream? What I need is to be sure that the executed BIOS is a valid BIOS (my BIOS).
I apologize if the question is a mess, but my knowledge of the BIOS and boot procedure is poor.
Thanks for the help.
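For reference, the MAC computation itself is the easy part: an HMAC-SHA1 can be fed incrementally as the FPGA hands over sniffed MISO chunks. A sketch using OpenSSL's 1.1.x HMAC API (the function and its arguments here are illustrative; the hard problem, as the answer below explains, is getting a repeatable input stream at all):

#include <openssl/evp.h>
#include <openssl/hmac.h>

/* Feed captured MISO chunks into one running HMAC-SHA1 (20-byte digest). */
int hmac_miso(const unsigned char *key, int keylen,
              const unsigned char *chunks[], const size_t lens[], int nchunks,
              unsigned char digest[20])
{
    unsigned int dlen = 0;
    HMAC_CTX *ctx = HMAC_CTX_new();
    if (!ctx)
        return -1;
    HMAC_Init_ex(ctx, key, keylen, EVP_sha1(), NULL);
    for (int i = 0; i < nchunks; i++)
        HMAC_Update(ctx, chunks[i], lens[i]);
    HMAC_Final(ctx, digest, &dlen);
    HMAC_CTX_free(ctx);
    return (dlen == 20) ? 0 : -1;
}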
Yes, the system usually resets several times after power-on, and the BIOS takes different execution paths. Also, the SPI controller may read the flash part in chunks and cache those, so what you see being read from flash is not necessarily what's executed by the CPU. Unfortunately, your method is not going to be reliable. There is an industry-standard method for doing this, called Measured Boot, and it involves a TPM. Please Google it to get an idea and see if it works for what you need.
I'm porting u-boot-2013.10 to an MPC8306-based board. Previously, I could erase the first several sectors of the NOR flash using a BDI2000. But after some time, when the porting task was nearly done (I can use GDB to trace the code execution and see that the u-boot code reaches the command-line main loop, though there is no serial output yet, due to a misconfigured serial port), the first 256 KB of the NOR flash could no longer be erased, even after a power-off reset. Other sectors can be erased normally.
The NOR flash is a Micron M29W256GL, with a block size of 128 KB. I'm sure the WP# pin is pulled high, so there is no hardware protection on the first block.
When I configure the jumpers on the board to change the PowerPC reset configuration word so that the MPC8306 does not fetch boot code at power-up, the problem remains.
I used to run u-boot-1.1.6 on this board and erased that version of u-boot many times without seeing the problem described above. I guess u-boot-2013.10 takes some new approach to flash manipulation, for example enabling non-volatile protection on the first 256 KB of flash.
Can someone help me solve this problem? I would very much appreciate your help.
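For what it's worth, newer u-boot versions with CONFIG_SYS_FLASH_PROTECTION enabled do protect the sectors holding the u-boot image, and the CFI driver can set real lock bits in the chip when clearing protection is requested. The state can be inspected and cleared from the u-boot console; the flash base address below is only a placeholder for wherever the M29W256GL is mapped on this board:

=> flinfo                  # list banks/sectors; protected sectors show (RO)
=> protect off all         # clear protection on every sector
=> erase fe000000 +40000   # retry the first 256 KB (replace fe000000 with
                           # your board's flash base address)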
I saw a lot of information about MMC/SD cards, and I tried to make a library to read them (modifying the Procyon AVRlib).
But I have some problems here. I didn't change the original code and tried it as-is. My problem is the initialization of an SD card. I have two cards here: a 256 MB one and a 1 GB one.
I send the init commands in this order: CMD0, CMD55, ACMD41, and CMD1.
But the 256 MB SD card only returns a 0x01 response for each command. I send CMD1 many times, and the 256 MB SD card always returns 0x01, never 0x00.
The 1 GB SD card is crazier... CMD0 returns 0x01. Nice, but CMD55 responds with 0x05. At other times it responds with 0xC1, and sometimes it responds with 0xF0 followed by 0x5F in the next iteration...
Around the Internet there is information and there are examples, but they are a bit confusing. In my project I must use a 1 GB card, and I'm trying with a microSD card in an SD adapter (I don't think this is the problem).
How do I fix this problem?
PS: My problem is like the one in the Stack Overflow question Initializing SD card in SPI issues, but the solution there didn't solve my problem. The 1 GB SD card only ever returns 0x01... :cry:
Why do you need CMD1? And did you read the note below it, which says "CMD1 is a valid command for the thin (1.4 mm) standard size SD memory card only if used after re-initializing a card (not after power on reset)"?
About the 1 GB card, ideas that come to mind:
After every command (send command, get reply), do you send 8 dummy bytes before making CS high?
The values returned seem weird (0x05 doesn't have the busy bit set, so WTF?); maybe there's a hardware issue?
Does the card work otherwise?
Maybe this helps a bit:
SD Specifications Part 1: Physical Layer Simplified Specification
A simple explanation of MMC/SD usage over SPI is provided here. I have used the associated FAT file-system library too and it works well.
However, the solution may not work for some makes of card. For such cards, you may have to adapt the procedure/library. That may be why your 1 GB card acts differently: it may be a different make of card. The SPI mode of certain cards may not be that popular in commercial equipment, and so some card manufacturers may deviate more from the specification.
If you bit-bang the commands and clocks, you may have more control and more confidence that those procedures are correct. That is useful because you need some solid ground to build on in order to progress bit by bit. I found that the <400 kHz, 80-clock initialization was critical on one card, while another could run at more than 2 MHz.
Try to progress one command at a time, making each one reliable for both cards.
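To make that concrete, here is a rough sketch of an SPI-mode init sequence that handles both older (SDv1) and newer (SDv2) cards. spi_xfer(), cs_low() and cs_high() are placeholders for your AVR routines, and the SPI clock must stay at or below 400 kHz until init finishes:

#include <stdint.h>

extern uint8_t spi_xfer(uint8_t b);  /* exchange one byte on the SPI bus */
extern void cs_low(void);
extern void cs_high(void);

/* Send a command frame and poll for the R1 reply (MSB clear). */
static uint8_t sd_cmd(uint8_t cmd, uint32_t arg, uint8_t crc)
{
    uint8_t r = 0xFF;
    spi_xfer(0xFF);                        /* one dummy byte before the frame */
    spi_xfer(0x40 | cmd);
    spi_xfer((uint8_t)(arg >> 24));
    spi_xfer((uint8_t)(arg >> 16));
    spi_xfer((uint8_t)(arg >> 8));
    spi_xfer((uint8_t)arg);
    spi_xfer(crc);                         /* real CRC needed only for CMD0/CMD8 */
    for (uint8_t i = 0; i < 10; i++) {     /* R1 arrives within 8 bytes */
        r = spi_xfer(0xFF);
        if (!(r & 0x80))
            break;
    }
    return r;
}

int sd_init(void)
{
    uint16_t i;
    uint8_t v2 = 0;

    cs_high();
    for (i = 0; i < 10; i++)               /* >= 74 clocks with CS high */
        spi_xfer(0xFF);

    cs_low();
    if (sd_cmd(0, 0, 0x95) != 0x01)        /* CMD0: go idle */
        return -1;

    if (sd_cmd(8, 0x1AA, 0x87) == 0x01) {  /* CMD8: only v2 cards accept it */
        v2 = 1;
        for (i = 0; i < 4; i++)            /* consume the 4 extra R7 bytes */
            spi_xfer(0xFF);
    }

    for (i = 0; i < 1000; i++) {           /* ACMD41 until the card leaves idle */
        sd_cmd(55, 0, 0xFF);               /* CMD55: next command is app-specific */
        if (sd_cmd(41, v2 ? 0x40000000UL : 0, 0xFF) == 0x00)
            break;
    }

    cs_high();
    spi_xfer(0xFF);                        /* 8 extra clocks so the card releases DO */
    return (i < 1000) ? 0 : -1;
}

Old MMC cards that never answer ACMD41 would still need CMD1 as a fallback, but get the two SD cards working with the sequence above first.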