Build up multiple VxWorks instances in VMware

When I built up one VxWorks in VMware, it worked. But when I create more VxWorks instances separately with different IPs, the second VxWorks fails with the following (log is from vmware.log):
2015-09-02T09:10:45.057+08:00| vcpu-0| W110: VLANCE: RDP OUT to unknown Register 100
2015-09-02T09:10:45.057+08:00| vcpu-0| I120: VNET: MACVNetPort_SetPADR: Ethernet0: can't set PADR (0)
2015-09-02T09:10:45.057+08:00| vcpu-0| I120: Msg_Post: Warning
2015-09-02T09:10:45.057+08:00| vcpu-0| I120: [msg.vnet.padrConflict] MAC address 00:0C:29:5A:23:AF of adapter Ethernet0 is within the reserved address range or is in use by another virtual adapter on your system. Adapter Ethernet0 may not have network connectivity.
I am sure each VxWorks OS has its own MAC address. Another point: I created the second VxWorks by copying the files from the first one.

Remove the macro VXWORKS_RUN_ON_VMWARE and any related code in sysLn97xEnd.c. Everything works perfectly under VMware Workstation 11, and the MAC can be set on the VM's configuration page. That macro is probably meant for a much older version of VMware Workstation.
Setting the MAC address in VMware does not work; you need a function that assigns a different MAC address while the system boots.
One option: build a different bootrom and vxworks image for each copy of the VM (simply use -D MACRO in the (.wpj) MAKEFILE to switch MACs between projects with a single header, as sketched below).
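A minimal sketch of that per-project switch (the header and macro names here are hypothetical):

/* sysMacSelect.h - hypothetical shared header; each project's makefile
 * passes e.g. -DVX_NODE_ID=1 so every image bakes in its own MAC */
#ifndef VX_NODE_ID
#define VX_NODE_ID 0
#endif

static const unsigned char sysDefinedMac[6] =
    {0x00, 0x0c, 0x29, 0x5a, 0x23, 0xa0 + VX_NODE_ID};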
Alternatively, here is a dirty solution for handling multiple MACs with a single image:
0.
Define the MAC addresses and a lookup function in ln97xEnd.c:
#define LN97_MAX_IP (4)

int ln97EndLoaded = 0;

char ln97DefineAddr[LN97_MAX_IP][6] = {
    {0x00, 0x0c, 0x29, 0x5a, 0x23, 0xa0},
    {0x00, 0x0c, 0x29, 0x5a, 0x23, 0xa1},
    {0x00, 0x0c, 0x29, 0x5a, 0x23, 0xa2},
    {0x00, 0x0c, 0x29, 0x5a, 0x23, 0xa3}
};

END_OBJ * ln97xEndList[LN97_MAX_IP] = {NULL, NULL, NULL, NULL};

char * ln97xFindDefinedAddr(LN_97X_DRV_CTRL * pDrvCtrl)
{
    int i;

    /* reuse the slot of an already-registered driver instance */
    for (i = 0; i < ln97EndLoaded; i++)
    {
        if (ln97xEndList[i] == &pDrvCtrl->endObj)
        {
            return ln97DefineAddr[i];
        }
    }

    /* not registered yet: hand out the next unused address, if any */
    if (i >= LN97_MAX_IP)
    {
        return NULL;
    }
    return ln97DefineAddr[i];
}
1.
Modify ln97xEndLoad() in ln97xEnd.c to assign a different MAC per instance (and store the END_OBJ* as needed):
END_OBJ * ln97xEndLoad
...
    DRV_LOG (DRV_DEBUG_LOAD, "Done loading ln97x...\n", 1, 2, 3, 4, 5, 6);

    /** add to save END_OBJ* */
    if (ln97EndLoaded < LN97_MAX_IP)
    {
        ln97xEndList[ln97EndLoaded] = &pDrvCtrl->endObj;
        ln97EndLoaded++;
    }
    /** end add */

    return (&pDrvCtrl->endObj);
...
2.
Change sysLan97xEnetAddrGet() in sysLn97xEnd.c: aprom should now be filled in by ln97xFindDefinedAddr() instead of the hard-coded "00:0C:29:5A:23:AF".
char * ln97xFindDefinedAddr(LN_97X_DRV_CTRL * pDrvCtrl);
...
STATUS sysLan97xEnetAddrGet
...
{
char * addrDef = NULL;
...
/* modify by frankzhou to support in VMware */
#define VXWORKS_RUN_ON_VMWARE
#ifndef VXWORKS_RUN_ON_VMWARE
    /* check for ASCII 'W's at APROM bytes 14 and 15 */
    if ((aprom [0xe] != 'W') || (aprom [0xf] != 'W'))
    {
        logMsg ("sysLn97xEnetAddrGet: W's not stored in aprom\n",
                0, 1, 2, 3, 4, 5);
        return ERROR;
    }
#endif
#ifdef VXWORKS_RUN_ON_VMWARE
    /** add by bonex for multi mac addr */
    addrDef = ln97xFindDefinedAddr(pDrvCtrl);
    if (addrDef == NULL)
    {
        aprom[0] = 0x00;
        aprom[1] = 0x0c;
        aprom[2] = 0x29;
        aprom[3] = 0x5a;
        aprom[4] = 0x23;
        aprom[5] = 0xaf;
    }
    else
    {
        bcopy (addrDef, aprom, 6);
    }
    /** end by bonex */
#endif
/* end by frankzhou */
...
3.
Rebuild the bootrom, and rebuild the vxworks image too.
Result:
[telnet to vmware and check arpShow][1]
[1]: https://i.stack.imgur.com/kR9Uy.jpg

This is due to the MAC address setting in sysLn97xEnd.c. It must be modified, and the bootrom and vxworks image rebuilt, for each additional VxWorks node; otherwise the addresses conflict.

STM32 USB Tx Busy

I have an application running on an STM32F429ZIT6 using the USB stack to communicate with a PC client.
The MCU receives one type of message of 686 bytes every second, and another type of message of 14 bytes afterwards, with 0.5 seconds of delay between messages. The 14-byte message is a heartbeat, so it needs to be answered by the MCU.
It happens that after 5 to 10 minutes of continuous operation, the MCU is not able to send data because hcdc->TxState is always busy. Reception works fine.
During the Rx interrupt, the application only adds data to a ring buffer, so that this buffer is later serialized and processed by the main function.
static int8_t CDC_Receive_HS(uint8_t* Buf, uint32_t *Len)
{
    /* USER CODE BEGIN 11 */
    /* Message RX completed; push it to the ring buffer to be processed at FMC_Run() */
    for (uint16_t i = 0; i < *Len; i++)
    {
        ring_push(RMP_RXRingBuffer, (uint8_t *) &Buf[i]);
    }
    USBD_CDC_SetRxBuffer(&hUsbDeviceHS, &Buf[0]);
    USBD_CDC_ReceivePacket(&hUsbDeviceHS);
    return (USBD_OK);
    /* USER CODE END 11 */
}
USB TX is also kept as simple as possible:
uint8_t CDC_Transmit_HS(uint8_t* Buf, uint16_t Len)
{
    uint8_t result = USBD_OK;
    /* USER CODE BEGIN 12 */
    USBD_CDC_HandleTypeDef *hcdc = (USBD_CDC_HandleTypeDef*)hUsbDeviceHS.pClassData;
    if (hcdc->TxState != 0)
    {
        ZF_LOGE("Tx failed, resource busy\n\r");
        return USBD_BUSY;
    }
    USBD_CDC_SetTxBuffer(&hUsbDeviceHS, Buf, Len);
    result = USBD_CDC_TransmitPacket(&hUsbDeviceHS);
    ZF_LOGD("TX Message Result:%d\n\r", result);
    /* USER CODE END 12 */
    return result;
}
I'm using the latest HAL drivers and software from CubeIDE (1.27.1).
I have tried expanding the minimum heap size from 0x200 to larger values, but the result is the same.
Also, the line coding is set according to the recommended values:
case CDC_SET_LINE_CODING:
LineCoding.bitrate = (uint32_t) (pbuf[0] | (pbuf[1] << 8) | (pbuf[2] << 16) | (pbuf[3] << 24));
LineCoding.format = pbuf[4];
LineCoding.paritytype = pbuf[5];
LineCoding.datatype = pbuf[6];
ZF_LOGD("Line Coding Set\n\r");
break;
case CDC_GET_LINE_CODING:
pbuf[0] = (uint8_t) (LineCoding.bitrate);
pbuf[1] = (uint8_t) (LineCoding.bitrate >> 8);
pbuf[2] = (uint8_t) (LineCoding.bitrate >> 16);
pbuf[3] = (uint8_t) (LineCoding.bitrate >> 24);
pbuf[4] = LineCoding.format;
pbuf[5] = LineCoding.paritytype;
pbuf[6] = LineCoding.datatype;
ZF_LOGD("Line Coding Get\n\r");
break;
Thanks in advance, any support is appreciated.
I don't know enough about the STM32 libraries to really check your code, but I suspect you are forgetting to read the bytes transmitted by the STM32 on PC side. Try opening a terminal program like PuTTY and connecting to the STM32's virtual serial port. Otherwise, the Windows USB-to-serial driver (usbser.sys) will eventually have its buffers filled with data from your device and it will stop requesting more, at which point the buffers on your device will fill up as well.
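If the PC client is your own program, a minimal Win32 read loop like the sketch below keeps those buffers drained (the COM port name is an assumption; substitute your device's port):

/* pc-side sketch: keep reading so the usbser.sys buffers never fill up */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileA("\\\\.\\COM3", GENERIC_READ, 0, NULL,
                           OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE)
    {
        printf("cannot open port\n");
        return 1;
    }
    for (;;)
    {
        char buf[1024];
        DWORD n = 0;
        if (!ReadFile(h, buf, sizeof buf, &n, NULL))
            break;
        if (n > 0)
            fwrite(buf, 1, n, stdout);   /* consume the data */
    }
    CloseHandle(h);
    return 0;
}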

USB CDC: STM32F103RBT6 Can't get USB device to show up

I got an Olimexino-STM32 with an STM32F103RBT6. I used STM32 Workbench 2.6 on Windows 10 x64 and STM32CubeMX 4.27.0.
In CubeMX I chose:
STM32F103RBT6
RCC HSE
USB
USB Device Communication Device Class
Debug Line: JTAG 4 pin
Fix clock speeds
Add LED1 on PA5 as GPIO Out
Add LED2 on PA1 as GPIO Out
Toolchain SW4STM
Stack size
In main.c in the while loop
/* Infinite loop */
/* USER CODE BEGIN WHILE */
while (1)
{
HAL_GPIO_WritePin(LED2_GPIO_Port, LED2_Pin, GPIO_PIN_SET);
HAL_Delay(500);
/* USER CODE END WHILE */
/* USER CODE BEGIN 3 */
HAL_GPIO_WritePin(LED2_GPIO_Port, LED2_Pin, GPIO_PIN_RESET);
HAL_Delay(500);
uint8_t HiMsg[] = "hello\r\n";
CDC_Transmit_FS(HiMsg, strlen(HiMsg));
}
/* USER CODE END 3 */
in usb_cdc_if.c
static int8_t CDC_Receive_FS(uint8_t* Buf, uint32_t *Len)
{
/* USER CODE BEGIN 6 */
if (Buf[0]=='1')
{
HAL_GPIO_WritePin(LED1_GPIO_Port, LED1_Pin, GPIO_PIN_SET);
}
else
{
HAL_GPIO_WritePin(LED1_GPIO_Port, LED1_Pin, GPIO_PIN_RESET);
}
USBD_CDC_SetRxBuffer(&hUsbDeviceFS, &Buf[0]);
USBD_CDC_ReceivePacket(&hUsbDeviceFS);
return (USBD_OK);
/* USER CODE END 6 */
}
Generate, compile and flash to the board. I can see the LED blinking in the main loop, and the speed also seems correct, but I just can't get a device registered on Windows. I can flash the board with an example hex file from the distributor with the bootloader active; there I can get the COM port registered. So I can see the USB port is OK, but somehow the generated code doesn't register the COM port.
Any ideas?
https://www.olimex.com/Products/Duino/STM32/OLIMEXINO-STM32/
Here is the test1.ioc file
#MicroXplorer Configuration settings - do not modify
File.Version=6
KeepUserPlacement=false
Mcu.Family=STM32F1
Mcu.IP0=NVIC
Mcu.IP1=RCC
Mcu.IP2=SYS
Mcu.IP3=USB
Mcu.IP4=USB_DEVICE
Mcu.IPNb=5
Mcu.Name=STM32F103R(8-B)Tx
Mcu.Package=LQFP64
Mcu.Pin0=PD0-OSC_IN
Mcu.Pin1=PD1-OSC_OUT
Mcu.Pin2=PA5
Mcu.Pin3=PA11
Mcu.Pin4=PA12
Mcu.Pin5=VP_SYS_VS_ND
Mcu.Pin6=VP_SYS_VS_Systick
Mcu.Pin7=VP_USB_DEVICE_VS_USB_DEVICE_CDC_FS
Mcu.PinsNb=8
Mcu.ThirdPartyNb=0
Mcu.UserConstants=
Mcu.UserName=STM32F103RBTx
MxCube.Version=4.27.0
MxDb.Version=DB.4.0.270
NVIC.BusFault_IRQn=true\:0\:0\:false\:false\:true\:false
NVIC.DebugMonitor_IRQn=true\:0\:0\:false\:false\:true\:false
NVIC.HardFault_IRQn=true\:0\:0\:false\:false\:true\:false
NVIC.MemoryManagement_IRQn=true\:0\:0\:false\:false\:true\:false
NVIC.NonMaskableInt_IRQn=true\:0\:0\:false\:false\:true\:false
NVIC.PendSV_IRQn=true\:0\:0\:false\:false\:true\:false
NVIC.PriorityGroup=NVIC_PRIORITYGROUP_4
NVIC.SVCall_IRQn=true\:0\:0\:false\:false\:true\:false
NVIC.SysTick_IRQn=true\:0\:0\:false\:false\:true\:false
NVIC.USB_LP_CAN1_RX0_IRQn=true\:0\:0\:false\:false\:true\:false
NVIC.UsageFault_IRQn=true\:0\:0\:false\:false\:true\:false
PA11.Mode=Device
PA11.Signal=USB_DM
PA12.Mode=Device
PA12.Signal=USB_DP
PA5.GPIOParameters=GPIO_Label
PA5.GPIO_Label=LED1
PA5.Locked=true
PA5.Signal=GPIO_Output
PCC.Checker=false
PCC.Line=STM32F103
PCC.MCU=STM32F103R(8-B)Tx
PCC.PartNumber=STM32F103RBTx
PCC.Seq0=0
PCC.Series=STM32F1
PCC.Temperature=25
PCC.Vdd=3.3
PD0-OSC_IN.Mode=HSE-External-Oscillator
PD0-OSC_IN.Signal=RCC_OSC_IN
PD1-OSC_OUT.Mode=HSE-External-Oscillator
PD1-OSC_OUT.Signal=RCC_OSC_OUT
PinOutPanel.RotationAngle=0
RCC.ADCFreqValue=36000000
RCC.AHBFreq_Value=72000000
RCC.APB1CLKDivider=RCC_HCLK_DIV2
RCC.APB1Freq_Value=36000000
RCC.APB1TimFreq_Value=72000000
RCC.APB2Freq_Value=72000000
RCC.APB2TimFreq_Value=72000000
RCC.FCLKCortexFreq_Value=72000000
RCC.FamilyName=M
RCC.HCLKFreq_Value=72000000
RCC.IPParameters=ADCFreqValue,AHBFreq_Value,APB1CLKDivider,APB1Freq_Value,APB1TimFreq_Value,APB2Freq_Value,APB2TimFreq_Value,FCLKCortexFreq_Value,FamilyName,HCLKFreq_Value,MCOFreq_Value,PLLCLKFreq_Value,PLLMCOFreq_Value,PLLMUL,SYSCLKFreq_VALUE,SYSCLKSource,TimSysFreq_Value,USBFreq_Value,USBPrescaler,VCOOutput2Freq_Value
RCC.MCOFreq_Value=72000000
RCC.PLLCLKFreq_Value=72000000
RCC.PLLMCOFreq_Value=36000000
RCC.PLLMUL=RCC_PLL_MUL9
RCC.SYSCLKFreq_VALUE=72000000
RCC.SYSCLKSource=RCC_SYSCLKSOURCE_PLLCLK
RCC.TimSysFreq_Value=72000000
RCC.USBFreq_Value=48000000
RCC.USBPrescaler=RCC_USBCLKSOURCE_PLL_DIV1_5
RCC.VCOOutput2Freq_Value=8000000
USB_DEVICE.CLASS_NAME_FS=CDC
USB_DEVICE.IPParameters=VirtualMode,VirtualModeFS,CLASS_NAME_FS
USB_DEVICE.VirtualMode=Cdc
USB_DEVICE.VirtualModeFS=Cdc_FS
VP_SYS_VS_ND.Mode=No_Debug
VP_SYS_VS_ND.Signal=SYS_VS_ND
VP_SYS_VS_Systick.Mode=SysTick
VP_SYS_VS_Systick.Signal=SYS_VS_Systick
VP_USB_DEVICE_VS_USB_DEVICE_CDC_FS.Mode=CDC_FS
VP_USB_DEVICE_VS_USB_DEVICE_CDC_FS.Signal=USB_DEVICE_VS_USB_DEVICE_CDC_FS
board=custom
Problem solved: the Olimexino-STM32 has an output that connects and disconnects USB. I had not configured that output pin.
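For reference, a minimal HAL sketch of driving such a disconnect line before USB initialization; the pin (PC12) and the polarity are assumptions, so verify them against the Olimex schematic:

/* USB disconnect line - pin and polarity are assumptions, check the schematic */
#define USB_DISC_PIN   GPIO_PIN_12
#define USB_DISC_PORT  GPIOC

void USB_Connect(void)
{
    GPIO_InitTypeDef GPIO_InitStruct = {0};

    __HAL_RCC_GPIOC_CLK_ENABLE();
    GPIO_InitStruct.Pin   = USB_DISC_PIN;
    GPIO_InitStruct.Mode  = GPIO_MODE_OUTPUT_PP;
    GPIO_InitStruct.Pull  = GPIO_NOPULL;
    GPIO_InitStruct.Speed = GPIO_SPEED_FREQ_LOW;
    HAL_GPIO_Init(USB_DISC_PORT, &GPIO_InitStruct);

    /* on many boards a low level enables the 1.5k pull-up on D+ so the
       host enumerates the device; invert if your board differs */
    HAL_GPIO_WritePin(USB_DISC_PORT, USB_DISC_PIN, GPIO_PIN_RESET);
}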

How do I configure the u-boot video driver for a 320x240 LVDS display on an iMX6 board?

I have a custom hardware device that uses a Variscite i.MX6Q (quad-core) to drive a 320x240 display. Once the linux kernel starts booting, the LCD display works great - no issues at all. However, prior to that the boot loader (u-boot) shows a white screen (sometimes with faint vertical lines) for about 0.25s, then goes black for about 8s until the kernel takes over (reinitializing the display and correctly showing the kernel's own splash screen).
Since the linux kernel can drive the display just fine, I'm sure I've just mis-configured something in my u-boot setup...but I'm tearing my hair out trying to figure out what and where! Resources / things I've tried include:
Porting LVDS LCD With Low Resolution to i.MX6 - This seems highly relevant, but refers to tweaking linux kernel drivers instead of uboot and I'm not experienced enough to port the knowledge to uboot.
U-Boot splash screen - LVDS - This seems soooo close to the problem I'm having, but doesn't list a clear solution. One response in the forum linked to a suggestion to invert the polarity of one of the clocks, which I tried but did not notice any difference.
How to display splash screen on parallel LCD in u-boot - In the same theme as the prior posts, this again hints at an issue with specifying clocks for low-res displays.
i.mx6 33.26MHz LVDS panel cannot display in u-boot - Following these instructions, I modified ...../uboot/drivers/video/ipu_common.c and set the g_ldb_clk struct .rate members to 6400000 (see the sketch after this list), but that seemed to have no effect.
Adding Displays to iMX Developer's Kits [Warning - PDF!] - Instructions on how to add support for new displays to iMX boards; section 6.1.4 talks about iMX6Q. However, I've added the proper display timings to the displays[] var (see code below) and I'm still having problems.
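For concreteness, the ipu_common.c edit mentioned in the fourth bullet looks roughly like this (the struct layout varies between u-boot trees, so treat it as a sketch):

/* drivers/video/ipu_common.c - sketch of the g_ldb_clk change */
struct clk g_ldb_clk = {
	.name = "ldb_clk",
	.rate = 6400000,	/* stock tree uses 65000000; 6.4 MHz matches the panel */
	.usecount = 0,
};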
From my custom board schematics, I know that I need to configure the backlight PWM on PWM2, the backlight enable/disable on GPIO 5-13, and custom display timings. So, the relevant sections in ..../uboot/board/variscite/mx6var_som.c:
struct display_info_t const displays[] = {{
.bus = -1,
.addr = 0,
.pixfmt = IPU_PIX_FMT_RGB24,
.detect = detect_MyCustomBoard,
.enable = lvds_enable_disable,
.mode = {
.name = "VAR-QVGA MX6CB-R",
.refresh = 60, /* optional */
.xres = 320,
.yres = 240,
.pixclock = MHZ2PS(6.4),
.left_margin = 64,
.right_margin = 20,
.upper_margin = 8,
.lower_margin = 4,
.hsync_len = 4,
.vsync_len = 10,
.sync = FB_SYNC_EXT,
.vmode = FB_VMODE_NONINTERLACED
} },
...
};
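/* note (assumption): .pixclock is a period in picoseconds, so MHZ2PS
 * presumably converts MHz to ps, i.e. MHZ2PS(6.4) == 1000000/6.4 == 156250;
 * a plausible definition would be
 *     #define MHZ2PS(f)  ((u32)(1000000 / (f)))
 */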
static void setup_display(void)
{
...
/* Turn off backlight until display is ready */
SETUP_IOMUX_PAD(PAD_DISP0_DAT19__GPIO5_IO13 | MUX_PAD_CTRL(NO_PAD_CTRL));
gpio_direction_output(IMX_GPIO_NR(5, 13), 0);
/* Setup the backlight dimmer (via PWM) */
SETUP_IOMUX_PAD(PAD_DISP0_DAT9__PWM2_OUT | MUX_PAD_CTRL(BACKLIGHT_PWM_CTRL));
pwm_init(VAR_SOM_BACKLIGHT_PWM_ID, VAR_SOM_BACKLIGHT_PERIOD, 0);
pwm_config(VAR_SOM_BACKLIGHT_PWM_ID, 0, VAR_SOM_BACKLIGHT_PERIOD);
...
/* Turn on LDB0, LDB1, IPU,IPU DI0 clocks */
reg = readl(&mxc_ccm->CCGR3);
reg |= MXC_CCM_CCGR3_LDB_DI0_MASK | MXC_CCM_CCGR3_LDB_DI1_MASK;
writel(reg, &mxc_ccm->CCGR3);
/* set LDB0, LDB1 clk select to 011/011 */
reg = readl(&mxc_ccm->cs2cdr);
reg &= ~(MXC_CCM_CS2CDR_LDB_DI0_CLK_SEL_MASK
| MXC_CCM_CS2CDR_LDB_DI1_CLK_SEL_MASK);
reg |= (1 << MXC_CCM_CS2CDR_LDB_DI0_CLK_SEL_OFFSET)
| (1 << MXC_CCM_CS2CDR_LDB_DI1_CLK_SEL_OFFSET);
writel(reg, &mxc_ccm->cs2cdr);
...
}
int splash_screen_prepare(void)
{
...
/* Turn on backlight */
gpio_set_value(IMX_GPIO_NR(5, 13), 1);
pwm_config(VAR_SOM_BACKLIGHT_PWM_ID, VAR_SOM_BACKLIGHT_PERIOD*127/256, VAR_SOM_BACKLIGHT_PERIOD);
...
}
For comparison, here are the relevant sections of my linux device tree:
&pwm2 {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_pwm2_1>;
status = "okay";
};
backlight {
compatible = "pwm-backlight";
pwms = <&pwm2 0 50000>;
brightness-levels = <0 4 8 16 32 64 128 248>;
default-brightness-level = <7>;
status = "okay";
};
&ldb {
status = "okay";
lvds-channel#0 {
fsl,data-mapping = "spwg";
fsl,data-width = <24>;
status = "okay";
primary;
display-timings {
native-mode = <&timing0r>;
timing0r: hsd100pxn1 {
clock-frequency = <6400000>;
hactive = <320>;
vactive = <240>;
hback-porch = <64>;
hfront-porch = <20>;
vback-porch = <8>;
vfront-porch = <4>;
hsync-len = <4>;
vsync-len = <10>;
};
};
};
...
};
&iomuxc {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_hog>;
imx6qdl-var-som-mx6 {
pinctrl_hog: hoggrp {
fsl,pins = <
...
/* LCD Enable on GPIO 5-13 */
MX6QDL_PAD_DISP0_DAT19__GPIO5_IO13 0xc0000000
...
>;
};
In terms of hardware, the LVDS signal from the iMX6 is converted to parallel RGB by a TI SN65LVDS822 FlatlinkTM LVDS receiver, which drives a 320x240 QVGA Okaya RH320240T-3x5AP-A display.
The framework I'm using is Yocto (Krogoth release), which includes:
U-Boot 2015.04-mx6+g535519b: git://github.com/varigit/uboot-imx.git, branch imx_v2015.04_4.1.15_1.1.0_ga_var03, commit 535519
Linux kernel 4.1.15: git://github.com/varigit/linux-2.6-imx.git, branch imx-rel_imx_4.1.15_2.0.0_ga-var01, commit 5a4b34
I do have a Variscite DevKit, and when I boot the SOM in the DevKit (with an appropriate device tree and associated drivers) everything works great and I see both the uboot splash image as well as the linux kernel splash image. This implies that the image I'm using for the uboot splash is valid, can be read by uboot, etc.
There is one other kicker: I do not have serial console access on my production board set :(.
So, the big question here is what am I doing wrong in my uboot display driver initialization? At this point, I'd even welcome strategies on how to go about debugging this (although I don't have access to an oscilloscope).

STM32F4: SD-Card using FatFs and USB fails

(also asked on SE: Electrical Engineering)
In my application, I've set up an STM32F4, SD-card and USB-CDC (all with CubeMX).
Using a PC, I send commands to the STM32, which then does things on the SD-Card.
The commands are handled using a "communicationBuffer" (implemented by me) which waits for commands over USB, UART, ... and sets a flag when a \n character is received. The main loop polls this flag and, if it is set, a parser handles the command. So far, so good.
When I send commands via UART, it works fine, and I can get a list of the files on the SD-Card or perform other access via FatFs without a problem.
The problem occurs, when I receive a command via USB-CDC. The parser works as expected, but FatFs claims FR_NO_FILESYSTEM (13) in f_opendir.
Also other FatFs commands fail with this error-code.
After one failed USB command, commands via UART will also fail. It seems as if the USB somehow crashes the initialized SD-card driver.
Any idea how I can resolve this behaviour? Or a starting point for debugging?
My USB-Implementation:
I'm using CubeMX, and therefore use the prescribed way to initialize the USB-CDC interface:
main() calls MX_USB_DEVICE_Init(void).
In usbd_conf.c I've got:
void HAL_PCD_MspInit(PCD_HandleTypeDef* pcdHandle)
{
GPIO_InitTypeDef GPIO_InitStruct;
if(pcdHandle->Instance==USB_OTG_FS)
{
/* USER CODE BEGIN USB_OTG_FS_MspInit 0 */
/* USER CODE END USB_OTG_FS_MspInit 0 */
/**USB_OTG_FS GPIO Configuration
PA11 ------> USB_OTG_FS_DM
PA12 ------> USB_OTG_FS_DP
*/
GPIO_InitStruct.Pin = OTG_FS_DM_Pin|OTG_FS_DP_Pin;
GPIO_InitStruct.Mode = GPIO_MODE_AF_PP;
GPIO_InitStruct.Pull = GPIO_NOPULL;
GPIO_InitStruct.Speed = GPIO_SPEED_FREQ_LOW;
GPIO_InitStruct.Alternate = GPIO_AF10_OTG_FS;
HAL_GPIO_Init(GPIOA, &GPIO_InitStruct);
/* Peripheral clock enable */
__HAL_RCC_USB_OTG_FS_CLK_ENABLE();
/* Peripheral interrupt init */
HAL_NVIC_SetPriority(OTG_FS_IRQn, 7, 1);
HAL_NVIC_EnableIRQ(OTG_FS_IRQn);
/* USER CODE BEGIN USB_OTG_FS_MspInit 1 */
/* USER CODE END USB_OTG_FS_MspInit 1 */
}
}
and the receive-process is implemented in usbd_cdc_if.c as follows:
static int8_t CDC_Receive_FS (uint8_t* Buf, uint32_t *Len)
{
/* USER CODE BEGIN 6 */
mRootObject->mUsbBuffer->fillBuffer(Buf, *Len);
USBD_CDC_ReceivePacket(&hUsbDeviceFS);
return (USBD_OK);
/* USER CODE END 6 */
}
fillBuffer is implemented as follows (I use the same implementation for UART and USB transfer, with separate instances for the respective interfaces; mBuf is an instance variable of type std::vector<char>):
void commBuf::fillBuffer(uint8_t *buf, size_t len)
{
// Check if last fill has timed out
if(SystemTime::getMS() - lastActionTime > timeout) {
mBuf.clear();
}
lastActionTime = SystemTime::getMS();
// Fill new content
mBuf.insert(mBuf.end(), buf, buf + len);
uint32_t done = 0;
while(!done) {
for(auto i = mBuf.end() - len, ee = mBuf.end(); i != ee; ++i) {
if(*i == '\n') {
newCommand = true;
myCommand = std::string((char*) &mBuf[0],i - mBuf.begin() + 1);
mBuf.erase(mBuf.begin(), mBuf.begin() + (i - mBuf.begin() + 1));
break;
}
}
done = 1;
}
}
I resolved the problem:
In usbd_cdc_if.c the #define APP_RX_DATA_SIZE was set to 4 (for some unknown reason). As this is lower than the USB packet size, incoming packets larger than 4 bytes were overwriting my memory.
It so happened that the adjacent portion of memory was the FATFS* FatFs[] pointer list to the initialized FATFS file system structs.
So the address of this struct was overwritten whenever a command of 5 or more bytes arrived.
Phew, that was a tough one.
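For reference, the fix is just enlarging the generated buffers in usbd_cdc_if.c; any size of at least one full endpoint packet (64 bytes FS, 512 bytes HS) avoids the overrun, and the values below are only reasonable choices:

/* usbd_cdc_if.c - the CDC receive/transmit buffers must hold at least
   one full USB packet */
#define APP_RX_DATA_SIZE  2048
#define APP_TX_DATA_SIZE  2048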

FreeRTOS + LWIP -UDP Data Transfer

I am kind of new to the lwIP stack. I'm trying to send some data over UDP from my development board to my PC, using an Ethernet cable between the two.
I gave an IP address to my server (the source board): 192.168.1.75:88. The IP address of my computer is 192.168.1.2:90. When I use this configuration and run the program, I can sniff nothing with Wireshark; there is no UDP packet exchange at all. But when I change all destination addresses to 255.255.255.255 or 0.0.0.0, I can sniff some packets.
Why can't I send UDP packets to the IP address that I want?
Main.c
int main(void)
{
#define dst_port 88
#define src_port 90
#ifdef SERIAL_DEBUG
DebugComPort_Init();
#endif
LCD_LED_Init();
ETH_BSP_Config();
LwIP_Init();
IP4_ADDR(&dstaddr, 0, 0, 0, 0);
IP4_ADDR(&srcaddr, 192, 168, 1, 75);
pcb = udp_new();
udp_bind(pcb, &dstaddr, src_port);
udp_recv(pcb, RecvUTPCallBack, NULL);
udp_connect(pcb, &dstaddr, dst_port);
#ifdef USE_DHCP
/* Start DHCPClient */
xTaskCreate(LwIP_DHCP_task, "DHCPClient", configMINIMAL_STACK_SIZE * 2, NULL,DHCP_TASK_PRIO, NULL);
#endif
/* Start toogleLed4 task : Toggle LED4 every 250ms */
xTaskCreate(ToggleLed4, "LED4", configMINIMAL_STACK_SIZE, NULL, LED_TASK_PRIO, NULL);
xTaskCreate(SendUDP, "UDP", configMINIMAL_STACK_SIZE, NULL, LED_TASK_PRIO, NULL);
/* Start scheduler */
vTaskStartScheduler();
for( ;; );
}
SendUDP Task
void SendUDP(void * pvParameters)
{
while(1)
{
pcb = udp_new();
udp_bind(pcb, &dstaddr, src_port);
udp_recv(pcb, RecvUTPCallBack, NULL);
udp_connect(pcb, &dstaddr, dst_port);
pb = pbuf_alloc(PBUF_TRANSPORT, sizeof((str)), PBUF_REF);
pb->payload = str;
pb->len = pb->tot_len = sizeof((str));
udp_sendto(pcb, pb, &dstaddr, dst_port);
udp_disconnect(pcb);
udp_remove(pcb);
pbuf_free(pb);
vTaskDelay(1000);
}
}
I figured this out about a week ago, but I couldn't post the answer here until now.
First of all, there is an IP address defined in main.h, like
/*Static IP ADDRESS*/
#define IP_ADDR0 192
#define IP_ADDR1 168
#define IP_ADDR2 1
#define IP_ADDR3 15
and this configuration is used in netconf.h:
IP4_ADDR(&ipaddr, IP_ADDR0, IP_ADDR1, IP_ADDR2, IP_ADDR3);
That's why the server's IP address was always 192.168.1.15.
Second, I switched from the raw API to the netconn API, which is much easier to use. This is my new SendwithUDP function, which works perfectly:
void SendwithUDP(uint16_t *veri, uint8_t length)
{
    while (1)
    {
        if ((EventFlags.udp & (1 << 0)) == (1 << 0))   /* bit 0 set? */
        {
            STM_EVAL_LEDToggle(LED3);
            sendconn = netconn_new(NETCONN_UDP);
            netconn_bind(sendconn, IP_ADDR_ANY, src_port);
            netconn_connect(sendconn, &clientAddr, 150);
            sendbuf = netbuf_new();
            data = netbuf_alloc(sendbuf, 2 * length);
            memcpy(data, veri, 2 * length);
            netconn_send(sendconn, sendbuf);
            netbuf_free(sendbuf);
            netbuf_delete(sendbuf);
            netconn_disconnect(sendconn);
            netconn_delete(sendconn);
            vTaskDelay(10);
        }
    }
}
}
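For completeness, the original raw-API version would most likely also have worked once it stopped binding and sending to 0.0.0.0; a minimal sketch using the addresses from the question (board port 88, PC at 192.168.1.2:90):

/* raw-API sketch: bind locally, send one datagram to the PC's real address */
static void SendUDP_fixed(void)
{
    struct udp_pcb *pcb = udp_new();
    struct pbuf *pb;
    ip_addr_t pcAddr;

    IP4_ADDR(&pcAddr, 192, 168, 1, 2);   /* the PC, not 0.0.0.0 */
    udp_bind(pcb, IP_ADDR_ANY, 88);      /* local port on the board */

    pb = pbuf_alloc(PBUF_TRANSPORT, sizeof(str), PBUF_REF);
    pb->payload = str;
    pb->len = pb->tot_len = sizeof(str);

    udp_sendto(pcb, pb, &pcAddr, 90);    /* PC's port */

    pbuf_free(pb);
    udp_remove(pcb);
}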