How to limit FPS for UVC gadget?

I'm developing an application based on the g_webcam-style template code available at git://git.ideasonboard.org/uvc-gadget.git. I've noticed that the FPS setting supplied in the USB device config structures is not respected; the gadget attempts the fastest possible frame rate instead. Moreover, the host tends to lose the pipe to the UVC device, probably because the opportunistic FPS selection floods the low-level USB interface.
So, how can I set a hard limit on FPS for a UVC gadget?
Thanks!
Kernel module source:
/* Uncompressed Payload - 3.1.2. Uncompressed Video Frame Descriptor */
static const struct UVC_FRAME_UNCOMPRESSED(1)
uvc_frame_uncompressed_360p = {
    .bLength = UVC_DT_FRAME_UNCOMPRESSED_SIZE(1),
    .bDescriptorType = USB_DT_CS_INTERFACE,
    .bDescriptorSubType = UVC_VS_FRAME_UNCOMPRESSED,
    .bFrameIndex = 1,
    .bmCapabilities = 0,
    .wWidth = cpu_to_le16(FRAME_WIDTH),
    .wHeight = cpu_to_le16(FRAME_HEIGHT),
    .dwMinBitRate = cpu_to_le32(FRAME_WIDTH * FRAME_HEIGHT * 8 * FRAME_RATE),
    .dwMaxBitRate = cpu_to_le32(FRAME_WIDTH * FRAME_HEIGHT * 8 * FRAME_RATE),
    .dwMaxVideoFrameBufferSize = cpu_to_le32(FRAME_WIDTH * FRAME_HEIGHT),
    .dwDefaultFrameInterval = cpu_to_le32(FRAME_RATE_USEC),
    .bFrameIntervalType = 1,
    .dwFrameInterval[0] = cpu_to_le32(FRAME_RATE_USEC),
};
UVC gadget source:
static const struct uvc_frame_info uvc_frames_grey[] = {
    { FRAME_WIDTH, FRAME_HEIGHT, FRAME_RATE_USEC, },
    { 0, 0, 0, },
};
Common header:
#define STREAMING_MAXPACKET 1024
#define FRAME_WIDTH 160
#define FRAME_HEIGHT 90
#define FRAME_RATE 330 /* 330 FPS */
#define FRAME_RATE_USEC 30303 /* 1/330 s, in UVC's 100 ns frame-interval units (the _USEC name is misleading) */
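As an aside, UVC descriptors carry frame intervals in 100 ns units, so the conversion is interval = 10,000,000 / fps. An illustrative helper (not part of the original code) makes that explicit:

/* 30 fps -> 333333, 60 fps -> 166666, 330 fps -> 30303 (the value above) */
#define FPS_TO_UVC_INTERVAL(fps) (10000000 / (fps))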

I believe Paul added support for setting a limit on the FPS in usb-gadget, which got upstreamed recently.
Please consider looking at the latest version of the repository.
Let us know if you hit any further issues on this.
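If you can move from the legacy compiled-in g_webcam descriptors to the configfs-based UVC function, the advertised frame intervals become writable attributes, so the host can only negotiate the rates you list. A rough sketch; the gadget/function layout and paths below are illustrative, not taken from the original answer:

/* Sketch only: write the single allowed interval (100 ns units) into the
 * frame descriptor's configfs attribute. */
#include <stdio.h>

static int limit_uvc_fps(void)
{
    FILE *f = fopen("/sys/kernel/config/usb_gadget/g1/functions/uvc.0/"
                    "streaming/uncompressed/u/360p/dwFrameInterval", "w");
    if (!f)
        return -1;
    fprintf(f, "30303\n"); /* 330 fps; list only the intervals you allow */
    return fclose(f);
}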

Related

How to pick target hardware for OpenCL

How do you tell OpenCL to build for a GPU instead of a CPU? Will it automatically pick one over the other?
OpenCL will not automatically pick a device for you. You have to explicitly choose a platform (Intel/AMD/Nvidia) and a device (CPU/GPU) on that platform. Platform #0 and device #0 by default will not always give you the GPU. This is quite cumbersome when running code on different computers, as on each you have to manually select the device.
However, there is a smart solution: a lightweight OpenCL wrapper that automatically picks the fastest available GPU (or the CPU if no GPU is available). It works by reading out the number of compute units and the clock frequency, and filling in the missing information (number of cores per CU) from the vendor and device name with a small database.
Find the source code with an example here.
Here is just the code for automatically selecting the fastest device:
vector<cl::Device> cl_devices; // get all devices of all platforms
{
    vector<cl::Platform> cl_platforms; // get all platforms (drivers)
    cl::Platform::get(&cl_platforms);
    for(uint i=0u; i<(uint)cl_platforms.size(); i++) {
        vector<cl::Device> cl_devices_available;
        cl_platforms[i].getDevices(CL_DEVICE_TYPE_ALL, &cl_devices_available); // to query only GPUs, use CL_DEVICE_TYPE_GPU here
        for(uint j=0u; j<(uint)cl_devices_available.size(); j++) {
            cl_devices.push_back(cl_devices_available[j]);
        }
    }
}
cl::Device cl_device; // select fastest available device
{
    float best_value = 0.0f;
    uint best_i = 0u; // index of fastest device
    for(uint i=0u; i<(uint)cl_devices.size(); i++) { // find device with highest (estimated) floating point performance
        const string name = trim(cl_devices[i].getInfo<CL_DEVICE_NAME>()); // device name
        const string vendor = trim(cl_devices[i].getInfo<CL_DEVICE_VENDOR>()); // device vendor
        const uint compute_units = (uint)cl_devices[i].getInfo<CL_DEVICE_MAX_COMPUTE_UNITS>(); // compute units (CUs) can contain multiple cores depending on the microarchitecture
        const uint clock_frequency = (uint)cl_devices[i].getInfo<CL_DEVICE_MAX_CLOCK_FREQUENCY>(); // in MHz
        const bool is_gpu = cl_devices[i].getInfo<CL_DEVICE_TYPE>()==CL_DEVICE_TYPE_GPU;
        const uint ipc = is_gpu?2u:32u; // IPC (instructions per cycle) is 2 for GPUs and 32 for most modern CPUs
        const bool nvidia_192_cores_per_cu = contains_any(to_lower(name), {" 6", " 7", "ro k", "la k"}) || (clock_frequency<1000u&&contains(to_lower(name), "titan")); // identify Kepler GPUs
        const bool nvidia_64_cores_per_cu = contains_any(to_lower(name), {"p100", "v100", "a100", "a30", " 16", " 20", "titan v", "titan rtx", "ro t", "la t", "ro rtx"}) && !contains(to_lower(name), "rtx a"); // identify P100, Volta, Turing, A100, A30
        const bool amd_128_cores_per_dualcu = contains(to_lower(name), "gfx10"); // identify RDNA/RDNA2 GPUs where dual CUs are reported
        const float nvidia = (float)(contains(to_lower(vendor), "nvidia"))*(nvidia_192_cores_per_cu?192.0f:(nvidia_64_cores_per_cu?64.0f:128.0f)); // Nvidia GPUs have 192 cores/CU (Kepler), 128 cores/CU (Maxwell, Pascal, Ampere) or 64 cores/CU (P100, Volta, Turing, A100)
        const float amd = (float)(contains_any(to_lower(vendor), {"amd", "advanced"}))*(is_gpu?(amd_128_cores_per_dualcu?128.0f:64.0f):0.5f); // AMD GPUs have 64 cores/CU (GCN, CDNA) or 128 cores/dualCU (RDNA, RDNA2), AMD CPUs (with SMT) have 1/2 core/CU
        const float intel = (float)(contains(to_lower(vendor), "intel"))*(is_gpu?8.0f:0.5f); // Intel integrated GPUs usually have 8 cores/CU, Intel CPUs (with HT) have 1/2 core/CU
        const float arm = (float)(contains(to_lower(vendor), "arm"))*(is_gpu?8.0f:1.0f); // ARM GPUs usually have 8 cores/CU, ARM CPUs have 1 core/CU
        const uint cores = to_uint((float)compute_units*(nvidia+amd+intel+arm)); // for CPUs, compute_units is the number of threads (twice the number of cores with hyperthreading)
        const float tflops = 1E-6f*(float)cores*(float)ipc*(float)clock_frequency; // estimated device FP32 floating point performance in TeraFLOPs/s
        if(tflops>best_value) {
            best_value = tflops;
            best_i = i;
        }
    }
    const string name = trim(cl_devices[best_i].getInfo<CL_DEVICE_NAME>()); // device name
    cl_device = cl_devices[best_i];
    print_info(name); // print device name
}
Alternatively, you can make it automatically choose the device with the most memory rather than the most FLOPs, or a device with a specified ID from the list of all devices across all platforms. There are many more benefits to using this wrapper, for example significantly simpler code for handling arrays and automatic tracking of total device memory allocation, all without impacting performance in any way.
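For illustration, a minimal sketch of the most-memory variant, reusing the cl_devices vector gathered above (my code, not taken from the wrapper itself):

cl::Device cl_device_most_memory; // select device with the most global memory
{
    cl_ulong best_mem = 0ull;
    for(uint i=0u; i<(uint)cl_devices.size(); i++) {
        const cl_ulong mem = cl_devices[i].getInfo<CL_DEVICE_GLOBAL_MEM_SIZE>(); // in bytes
        if(mem>best_mem) {
            best_mem = mem;
            cl_device_most_memory = cl_devices[i];
        }
    }
}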

QSPI chip select pin control and configuration in Zephyr RTOS

I am trying to use the QSPI bus to access a serial NOR flash in Zephyr. The CS (chip select) pin stays high all the time; it should be active low when the flash chip is selected. I'm wondering whether the QSPI CS works in Zephyr, or whether I need to configure the CS pin as a GPIO and control it from my software.
Has anyone had the CS pin of QSPI working in Zephyr? I am using the nRF52840 from Nordic Semiconductor.
Thanks,
JC
I think you need to configure it yourself:
...
#include <drivers/gpio.h>
#include <drivers/spi.h>

static const struct device *spi; /* SPI bus device, bound in spi_init() */

struct spi_cs_control spi_cs = {
    /* PA4 as CS pin */
    .gpio_dev = DEVICE_DT_GET(DT_NODELABEL(gpioa)),
    .gpio_pin = 4,
    .gpio_dt_flags = GPIO_ACTIVE_LOW,
    /* delay in microseconds to wait before starting the transmission and before releasing the CS line */
    .delay = 10,
};
#define SPI_CS (&spi_cs)

struct spi_config spi_cfg = {
    .frequency = 350000,
    .operation = SPI_OP_MODE_MASTER | SPI_TRANSFER_MSB | SPI_WORD_SET(8) | SPI_LINES_QUAD | SPI_LOCK_ON,
    .cs = SPI_CS,
};

void spi_init()
{
    spi = device_get_binding("SPI_1");
    ....
}
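For completeness, a minimal usage sketch (my names, not from the original answer) showing that the driver asserts CS around the whole transaction once spi_cfg.cs is set:

/* Send a one-byte command (e.g. JEDEC ID, 0x9F) over the configured bus */
static void spi_test_write(void)
{
    uint8_t tx_data[] = { 0x9F };
    struct spi_buf tx_buf = { .buf = tx_data, .len = sizeof(tx_data) };
    struct spi_buf_set tx = { .buffers = &tx_buf, .count = 1 };

    /* CS (active low) is driven automatically for the duration of the transfer */
    spi_write(spi, &spi_cfg, &tx);
}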

Distortion in ESP32 I2S audio playback with external DAC for sample frequency higher than 20kSps

Hardware: ESP32 DevKitV1, PCM5102 breakout board, SD-card adapter.
Software: Arduino framework.
For some time I have been struggling with audio playback using an I2S DAC external to the ESP32.
The problem is that I can only play without distortion at low sample frequencies, i.e. below 20 kSps.
I have been studying the documentation, https://docs.espressif.com/projects/esp-idf/en/latest/api-reference/peripherals/i2s.html, and numerous other sources, but still haven't managed to fix this.
I2S configuration function:
esp_err_t I2Smixer::i2sConfig(int bclkPin, int lrckPin, int dinPin, int sample_rate)
{
    // i2s configuration: Tx to ext DAC, 2's complement 16-bit PCM, mono
    const i2s_config_t i2s_config = {
        .mode = (i2s_mode_t)(I2S_MODE_MASTER | I2S_MODE_TX | I2S_CHANNEL_MONO), // only tx, external DAC
        .sample_rate = sample_rate,
        .bits_per_sample = I2S_BITS_PER_SAMPLE_16BIT,
        .channel_format = I2S_CHANNEL_FMT_ONLY_RIGHT, // single channel
        // .channel_format = I2S_CHANNEL_FMT_RIGHT_LEFT, // 2 channels
        .communication_format = (i2s_comm_format_t)(I2S_COMM_FORMAT_I2S | I2S_COMM_FORMAT_I2S_MSB),
        .intr_alloc_flags = ESP_INTR_FLAG_LEVEL3, // highest interrupt priority that can be handled in C
        .dma_buf_count = 128, // 16,
        .dma_buf_len = 128,   // 64
        .use_apll = false,
        .tx_desc_auto_clear = true};
    const i2s_pin_config_t pin_config = {
        .bck_io_num = bclkPin,           // this is the BCK pin
        .ws_io_num = lrckPin,            // this is the LRCK pin
        .data_out_num = dinPin,          // this is the DATA output pin
        .data_in_num = I2S_PIN_NO_CHANGE // not used
    };
    esp_err_t ret1 = i2s_driver_install((i2s_port_t)i2s_num, &i2s_config, 0, NULL);
    esp_err_t ret2 = i2s_set_pin((i2s_port_t)i2s_num, &pin_config);
    esp_err_t ret3 = i2s_set_sample_rates((i2s_port_t)i2s_num, sample_rate);
    // i2s_adc_disable((i2s_port_t)i2s_num);
    // esp_err_t ret3 = rtc_clk_apll_enable(1, 15, 8, 5, 6);
    return ret1 + ret2 + ret3;
}
A wave file, created in 16-bit mono PCM, 44.1 kHz format, is opened:
File sample_file = SD.open("/test.wav");
In the main loop, the samples are fed to the I2S driver.
esp_err_t I2Smixer::loop()
{
    esp_err_t ret1 = ESP_OK, ret2 = ESP_OK;
    int32_t output = 0;
    if (sample_file.available())
    {
        if (sample_file.size() - sample_file.position() > 2) // bytes left
        {
            int16_t tmp; // 16-bit signed PCM assumed
            sample_file.read((uint8_t *)&tmp, 2);
            output = (int32_t)tmp;
        }
        else
        {
            sample_file.close();
        }
    }
    size_t i2s_bytes_write;
    int16_t int16_t_output = (int16_t)output;
    ret1 = i2s_write((i2s_port_t)i2s_num, &int16_t_output, 2, &i2s_bytes_write, portMAX_DELAY);
    if (i2s_bytes_write != 2)
        ret2 = ESP_FAIL;
    return ret1 + ret2;
}
This works fine for sample rates up to 20 kSps.
For a sample rate of 32k or 44.1k, heavy distortion occurs. I suspect this is caused by the I2S DMA Tx buffer.
If the number of DMA buffers (dma_buf_count) and the buffer length (dma_buf_len) are increased, the sound plays fine at first, but after a short time the distortion kicks in again. I cannot measure this time span precisely, maybe around a second, but I did notice it depends on dma_buf_count and dma_buf_len.
I also tried increasing the CPU frequency to 240 MHz: no improvement.
Further, I tried playing a file from SPIFFS: no improvement.
I am out of ideas right now. Has anyone else encountered this issue?
Reading one sample at a time and pushing it to the I2S driver will not be the most efficient usage of the driver. You are using just 2 bytes in every 128 byte DMA buffer. That leaves just a single sample period to push the next sample before the DMA buffer is "starved".
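To put rough numbers on it (assuming dma_buf_len counts sample frames, as in the legacy ESP-IDF driver):

// total audio queued once the DMA buffers are filled:
//   dma_buf_count * dma_buf_len = 128 * 128 = 16384 samples
//   16384 / 44100 ≈ 0.37 s at 44.1 kSps
// playback is clean while these pre-filled buffers drain, then starves once
// single-sample writes can no longer keep up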
Read the file in 128 byte (64 sample) chunks and write the whole chunk to the I2S in order to use the DMA effectively.
Depending on the file-system implementation, it may be a little more efficient to use even larger chunks, sized sympathetically to the file system's media, sector size, and DMA buffering.
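A minimal sketch of the chunked version (same names as the question; error handling elided):

esp_err_t I2Smixer::loop()
{
    static uint8_t chunk[128]; // 128 bytes = 64 16-bit mono samples, as suggested above
    if (!sample_file.available())
        return ESP_OK;
    int n = sample_file.read(chunk, sizeof(chunk)); // read up to one chunk from SD
    if (n <= 0)
    {
        sample_file.close();
        return ESP_OK;
    }
    size_t written = 0;
    // block until the whole chunk has been queued into the DMA buffers
    esp_err_t ret = i2s_write((i2s_port_t)i2s_num, chunk, (size_t)n, &written, portMAX_DELAY);
    return (ret == ESP_OK && written == (size_t)n) ? ESP_OK : ESP_FAIL;
}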

STM32F4 Timing Problem with systick after switching IDE (CooCox to TrueStudio)

I am working on a weird problem: as part of my project, I migrated a firmware from CooCox to TrueStudio. Both CooCox and TrueStudio automatically create some standard files when creating a project for a specific microcontroller. The microcontroller used here is the STM32F407VGT6. I am using ms and s delays which are derived from the µs delay function shown below.
edit 2: I should mention that the original project is a pure C project. I am trying to make the project a C++/C project in TrueStudio.
What I will try now is to migrate the firmware into a TrueStudio pure C project and see if the problem still exists. I will inform you about the results.
Results: the problem is actually gone in the pure C project, but I would really like to implement classes etc. using C++. Any ideas how to solve this?
The systick initialization code is shown below (HCLK frequency = 168 MHz).
edit 1: the HCLK frequency equals the SYSCLK.
void systick_init(void)
{
    RCC_ClocksTypeDef RCC_Clocks;
    Systick_Delay = 0;
    RCC_GetClocksFreq(&RCC_Clocks);
    SysTick_Config((RCC_Clocks.HCLK_Frequency / 1000000) - 1);
}
The function for the 1 µs delay looks like this:
void delay_us(volatile uint32_t delay)
{
    Systick_Delay = delay;
    while(Systick_Delay != 0);
}
The SysTick handler contains the following code:
void SysTick_Handler(void)
{
    // tick for delay
    if(Systick_Delay != 0x00)
    {
        Systick_Delay--;
    }
}
When I flash the µC with a .hex file created by CooCox, the timing functions work (with some minor inaccuracies that don't bother me).
When I create the .hex file with TrueStudio, the delays have massive inaccuracies; for example, a delay of 500 ms becomes a delay of roughly 2 s.
Since the code is written in terms of the actual HCLK_Frequency, I can't see the mistake: in my understanding, even if the HCLK differed, the 1 µs delay should still take about 1 µs.
My next step will be comparing the automatically created system files, but maybe anyone has a different approach / another idea?
edit 3: I normally include my systick header wrapped in extern "C", so my systick source file is a .c file. When I rename the file to systick.cpp and include the header without extern "C", the delay function does not work at all. Maybe that helps with the solution?
You are either running off a different clock or have different PLL settings; it looks like your clock speed is 1/4 of what you had before.
The basic startup code provided does not always set the maximum speed for the board. Have a look at some of the examples in the stm32cube.zip code. You will find System Clock Configuration code for your board which selects the correct clock and PLL settings (this will be somewhere in your code as well).
Look in main.c under stm32cubef4/projects/STM32F4-Discovery/Demonstrations/src.
You will find the following code which sets up the clock:
/**
 * @brief System Clock Configuration
 *        The system Clock is configured as follows:
 *            System Clock source            = PLL (HSE)
 *            SYSCLK(Hz)                     = 168000000
 *            HCLK(Hz)                       = 168000000
 *            AHB Prescaler                  = 1
 *            APB1 Prescaler                 = 4
 *            APB2 Prescaler                 = 2
 *            HSE Frequency(Hz)              = 8000000
 *            PLL_M                          = 8
 *            PLL_N                          = 336
 *            PLL_P                          = 2
 *            PLL_Q                          = 7
 *            VDD(V)                         = 3.3
 *            Main regulator output voltage  = Scale1 mode
 *            Flash Latency(WS)              = 5
 * @param  None
 * @retval None
 */
static void SystemClock_Config(void)
{
    RCC_ClkInitTypeDef RCC_ClkInitStruct;
    RCC_OscInitTypeDef RCC_OscInitStruct;

    /* Enable Power Control clock */
    __HAL_RCC_PWR_CLK_ENABLE();

    /* The voltage scaling allows optimizing the power consumption when the device is
       clocked below the maximum system frequency; to update the voltage scaling value
       regarding system frequency, refer to the product datasheet. */
    __HAL_PWR_VOLTAGESCALING_CONFIG(PWR_REGULATOR_VOLTAGE_SCALE1);

    /* Enable HSE Oscillator and activate PLL with HSE as source */
    RCC_OscInitStruct.OscillatorType = RCC_OSCILLATORTYPE_HSE;
    RCC_OscInitStruct.HSEState = RCC_HSE_ON;
    RCC_OscInitStruct.PLL.PLLState = RCC_PLL_ON;
    RCC_OscInitStruct.PLL.PLLSource = RCC_PLLSOURCE_HSE;
    RCC_OscInitStruct.PLL.PLLM = 8;
    RCC_OscInitStruct.PLL.PLLN = 336;
    RCC_OscInitStruct.PLL.PLLP = RCC_PLLP_DIV2;
    RCC_OscInitStruct.PLL.PLLQ = 7;
    HAL_RCC_OscConfig(&RCC_OscInitStruct);

    /* Select PLL as system clock source and configure the HCLK, PCLK1 and PCLK2
       clocks dividers */
    RCC_ClkInitStruct.ClockType = (RCC_CLOCKTYPE_SYSCLK | RCC_CLOCKTYPE_HCLK | RCC_CLOCKTYPE_PCLK1 | RCC_CLOCKTYPE_PCLK2);
    RCC_ClkInitStruct.SYSCLKSource = RCC_SYSCLKSOURCE_PLLCLK;
    RCC_ClkInitStruct.AHBCLKDivider = RCC_SYSCLK_DIV1;
    RCC_ClkInitStruct.APB1CLKDivider = RCC_HCLK_DIV4;
    RCC_ClkInitStruct.APB2CLKDivider = RCC_HCLK_DIV2;
    HAL_RCC_ClockConfig(&RCC_ClkInitStruct, FLASH_LATENCY_5);

    /* STM32F405x/407x/415x/417x Revision Z devices: prefetch is supported */
    if (HAL_GetREVID() == 0x1001)
    {
        /* Enable the Flash prefetch */
        __HAL_FLASH_PREFETCH_BUFFER_ENABLE();
    }
}
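If you reconfigure the clocks like this, it is also worth calling the CMSIS function SystemCoreClockUpdate() afterwards so the SystemCoreClock variable matches the new HCLK. A rough call order (assumed, not from the original project):

SystemClock_Config();    /* set up the PLL for 168 MHz as above */
SystemCoreClockUpdate(); /* refresh SystemCoreClock from the RCC registers */
systick_init();          /* SysTick is now configured against the real HCLK */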
I found the solution now!
For anyone having similar problems: you need to declare the SysTick_Handler function as
extern "C" void SysTick_Handler(void)
{
    // tick for delay
    if(Systick_Delay != 0x00)
    {
        Systick_Delay--;
    }
}
Now it works as it is supposed to. Without extern "C", the C++ compiler mangles the handler's name, so it no longer overrides the weak SysTick_Handler symbol from the startup code, and the default handler runs instead of the delay tick.
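A related pattern keeps such headers usable from both C and C++ (a generic sketch, not the asker's actual header):

/* systick.h */
#include <stdint.h>

#ifdef __cplusplus
extern "C" {
#endif

void systick_init(void);
void delay_us(volatile uint32_t delay);
void SysTick_Handler(void); /* ISRs must keep C linkage to override the weak startup symbols */

#ifdef __cplusplus
}
#endif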

How do I configure the u-boot video driver for a 320x240 LVDS display on an iMX6 board?

I have a custom hardware device that uses a Variscite i.MX6Q (quad-core) to drive a 320x240 display. Once the Linux kernel starts booting, the LCD display works great: no issues at all. However, prior to that, the boot loader (u-boot) shows a white screen (sometimes with faint vertical lines) for about 0.25 s, then goes black for about 8 s until the kernel takes over (reinitializing the display and correctly showing the kernel's own splash screen).
Since the Linux kernel can drive the display just fine, I'm sure I've just misconfigured something in my u-boot setup... but I'm tearing my hair out trying to figure out what and where! Resources / things I've tried include:
Porting LVDS LCD With Low Resolution to i.MX6 - This seems highly relevant, but refers to tweaking Linux kernel drivers rather than u-boot, and I'm not experienced enough to port the knowledge over to u-boot.
U-Boot splash screen - LVDS - This seems soooo close to the problem I'm having, but doesn't list a clear solution. One response in the forum linked to a suggestion to invert the polarity of one of the clocks, which I tried but did not notice any difference.
How to display splash screen on parallel LCD in u-boot - In the same theme as the prior posts, this again hints at an issue with specifying clocks for low-res displays.
i.mx6 33.26MHz LVDS panel cannot display in u-boot - Following these instructions, I modified ...../uboot/drivers/video/ipu_common.c and set the g_ldb_clk struct .rate members to 6400000, but that seemed to have no effect.
Adding Displays to iMX Developer's Kits [Warning - PDF!] - Instructions on how to add support for new displays to iMX boards; section 6.1.4 talks about iMX6Q. However, I've added the proper display timings to the displays[] var (see code below) and I'm still having problems.
From my custom board schematics, I know that I need to configure the backlight PWM on PWM2 and backlight enable/disable on GPIO 5-13, and I need to provide custom display timings. So, the relevant sections in ..../uboot/board/variscite/mx6var_som.c:
struct display_info_t const displays[] = {{
    .bus = -1,
    .addr = 0,
    .pixfmt = IPU_PIX_FMT_RGB24,
    .detect = detect_MyCustomBoard,
    .enable = lvds_enable_disable,
    .mode = {
        .name = "VAR-QVGA MX6CB-R",
        .refresh = 60, /* optional */
        .xres = 320,
        .yres = 240,
        .pixclock = MHZ2PS(6.4),
        .left_margin = 64,
        .right_margin = 20,
        .upper_margin = 8,
        .lower_margin = 4,
        .hsync_len = 4,
        .vsync_len = 10,
        .sync = FB_SYNC_EXT,
        .vmode = FB_VMODE_NONINTERLACED
    } },
    ...
};
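As a sanity check on those numbers (my arithmetic, not from the board files):

/* fbdev .pixclock is the pixel period in picoseconds: 1e12 / 6.4e6 = 156250 ps.
 * Total frame = (320+64+20+4) x (240+8+4+10) = 408 x 262 = 106896 pixel clocks,
 * so 6,400,000 / 106,896 ≈ 59.9 Hz, matching .refresh = 60. */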
static void setup_display(void)
{
    ...
    /* Turn off backlight until display is ready */
    SETUP_IOMUX_PAD(PAD_DISP0_DAT19__GPIO5_IO13 | MUX_PAD_CTRL(NO_PAD_CTRL));
    gpio_direction_output(IMX_GPIO_NR(5, 13), 0);

    /* Setup the backlight dimmer (via PWM) */
    SETUP_IOMUX_PAD(PAD_DISP0_DAT9__PWM2_OUT | MUX_PAD_CTRL(BACKLIGHT_PWM_CTRL));
    pwm_init(VAR_SOM_BACKLIGHT_PWM_ID, VAR_SOM_BACKLIGHT_PERIOD, 0);
    pwm_config(VAR_SOM_BACKLIGHT_PWM_ID, 0, VAR_SOM_BACKLIGHT_PERIOD);
    ...
    /* Turn on LDB0, LDB1, IPU, IPU DI0 clocks */
    reg = readl(&mxc_ccm->CCGR3);
    reg |= MXC_CCM_CCGR3_LDB_DI0_MASK | MXC_CCM_CCGR3_LDB_DI1_MASK;
    writel(reg, &mxc_ccm->CCGR3);

    /* set LDB0, LDB1 clk select to 011/011 */
    reg = readl(&mxc_ccm->cs2cdr);
    reg &= ~(MXC_CCM_CS2CDR_LDB_DI0_CLK_SEL_MASK
           | MXC_CCM_CS2CDR_LDB_DI1_CLK_SEL_MASK);
    reg |= (1 << MXC_CCM_CS2CDR_LDB_DI0_CLK_SEL_OFFSET)
         | (1 << MXC_CCM_CS2CDR_LDB_DI1_CLK_SEL_OFFSET);
    writel(reg, &mxc_ccm->cs2cdr);
    ...
}
int splash_screen_prepare(void)
{
    ...
    /* Turn on backlight */
    gpio_set_value(IMX_GPIO_NR(5, 13), 1);
    pwm_config(VAR_SOM_BACKLIGHT_PWM_ID, VAR_SOM_BACKLIGHT_PERIOD*127/256, VAR_SOM_BACKLIGHT_PERIOD);
    ...
}
For comparison, here are the relevant sections of my Linux device tree:
&pwm2 {
    pinctrl-names = "default";
    pinctrl-0 = <&pinctrl_pwm2_1>;
    status = "okay";
};

backlight {
    compatible = "pwm-backlight";
    pwms = <&pwm2 0 50000>;
    brightness-levels = <0 4 8 16 32 64 128 248>;
    default-brightness-level = <7>;
    status = "okay";
};

&ldb {
    status = "okay";
    lvds-channel@0 {
        fsl,data-mapping = "spwg";
        fsl,data-width = <24>;
        status = "okay";
        primary;
        display-timings {
            native-mode = <&timing0r>;
            timing0r: hsd100pxn1 {
                clock-frequency = <6400000>;
                hactive = <320>;
                vactive = <240>;
                hback-porch = <64>;
                hfront-porch = <20>;
                vback-porch = <8>;
                vfront-porch = <4>;
                hsync-len = <4>;
                vsync-len = <10>;
            };
        };
    };
    ...
};
&iomuxc {
    pinctrl-names = "default";
    pinctrl-0 = <&pinctrl_hog>;
    imx6qdl-var-som-mx6 {
        pinctrl_hog: hoggrp {
            fsl,pins = <
                ...
                /* LCD Enable on GPIO 5-13 */
                MX6QDL_PAD_DISP0_DAT19__GPIO5_IO13 0xc0000000
                ...
            >;
        };
In terms of hardware, the LVDS signal from the iMX6 is converted to parallel RGB by a TI SN65LVDS822 FlatLink LVDS receiver, which drives a 320x240 QVGA Okaya RH320240T-3x5AP-A display.
The framework I'm using is Yocto (Krogoth release), which includes:
U-Boot 2015.04-mx6+g535519b: git://github.com/varigit/uboot-imx.git, branch imx_v2015.04_4.1.15_1.1.0_ga_var03, commit 535519
Linux kernel 4.1.15: git://github.com/varigit/linux-2.6-imx.git, branch imx-rel_imx_4.1.15_2.0.0_ga-var01, commit 5a4b34
I do have a Variscite DevKit, and when I boot the SOM in the DevKit (with an appropriate device tree and associated drivers) everything works great: I see both the u-boot splash image and the Linux kernel splash image. This implies that the image I'm using for the u-boot splash is valid, can be read by u-boot, etc.
There is one other kicker: I do not have serial console access on my production boards :(.
So, the big question is: what am I doing wrong in my u-boot display driver initialization? At this point, I'd even welcome strategies for debugging this (although I don't have access to an oscilloscope).