pViewports is always UNUSED - Vulkan

I have enabled all validation layers and am error and warning free, so I should be able to render a colored triangle. However, all I currently see is a cleared screen (black) or an empty screen (grey), depending on the machine I run on.
On closer inspection, it seems that in my call to vkCreateGraphicsPipelines the pViewports and pScissors members are always reported as UNUSED, even though I passed in values. I do not use dynamic state, and both counts are one.
Am I missing a flag, or is my binding flawed?
Code snippet:
Thread 0, Frame 0:
vkCreateGraphicsPipelines(device, pipelineCache, createInfoCount, pCreateInfos, pAllocator, pPipelines) returns VkResult VK_SUCCESS (0):
device: VkDevice = 00000000056FD350
pipelineCache: VkPipelineCache = 0000000000000000
createInfoCount: uint32_t = 1
pCreateInfos: const VkGraphicsPipelineCreateInfo* = 000000000566BB38
pCreateInfos[0]: const VkGraphicsPipelineCreateInfo = 000000000566BB38:
sType: VkStructureType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO (28)
pNext: const void* = NULL
...
pViewportState: const VkPipelineViewportStateCreateInfo* = 000000000725A920:
sType: VkStructureType = VK_STRUCTURE_TYPE_PIPELINE_VIEWPORT_STATE_CREATE_INFO (22)
pNext: const void* = NULL
flags: VkPipelineViewportStateCreateFlags = 0
viewportCount: uint32_t = 1
pViewports: const VkViewport* = UNUSED
scissorCount: uint32_t = 1
pScissors: const VkRect2D* = UNUSED
Debugging prints:
Pipeline_Info.pViewportState.pScissors before assign = 0
Pipeline_Info.pViewportState.pViewports before assign = 0
Pipeline_Info.pViewportState.pScissors after assign = 2350856
Pipeline_Info.pViewportState.pViewports after assign = 2348296
vkCreateGraphicsPipelines call result = 0
Full API Dump (see line 2107):
https://pastebin.com/MmXUBnk0
Full code (for those who are curious - see line 791):
https://github.com/AdaDoom3/AdaDoom3/blob/master/Engine/neo-engine-renderer.adb

Just wanted to confirm: the UNUSED is definitely a VK_LAYER_LUNARG_api_dump layer bug.
There is essentially an if(false) ... else print UNUSED branch in its code.
UPDATE: Issue fix PR.
So that is a dead end. But you found a bug in the Vulkan ecosystem, which is a good thing too...
As for your rendering problem: in several places you have a 1x1 render target (e.g. in vkCmdBeginRenderPass), so there would not be much to see. Windows usually does this (initially creating a 0x0 window), and then you have to react to the resize event.
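A minimal sketch of picking up the real surface size instead of the initial placeholder window (the physicalDevice and surface handles, and the swapchain-recreation step, are assumptions from a typical setup):
VkSurfaceCapabilitiesKHR caps;
vkGetPhysicalDeviceSurfaceCapabilitiesKHR(physicalDevice, surface, &caps);
VkExtent2D extent = caps.currentExtent; /* tracks the current window size on Windows */
if (extent.width <= 1 || extent.height <= 1) {
    /* Window has not been sized yet: wait for the resize event, then
       recreate the swapchain and framebuffers at the real extent. */
}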

Related

CAN RX FIFO1 is not generating an interrupt callback (basically it is not receiving the data)

There are two types of messages on the CAN bus: broadcast messages and default messages. Currently I'm using FIFO0 for both message types (which works perfectly fine), but I would like to use FIFO1 specifically for broadcast messages. Below is my initialization code:
uint8 BspCan_RxFilterConfig(uint32 filterId, uint32 filterMask, uint8 filterBankId, uint8 enableFlag, uint8 fifoAssignment)
{
///\todo Add method for calculating filter on the fly
CAN_FilterTypeDef canBusFilterConfig;
FunctionalState filterEnableFlag = ENABLE;
if(enableFlag == 0)
{
filterEnableFlag = DISABLE;
}
else
{
filterEnableFlag = ENABLE;
}
/*Define filter used to determine if application needs to handle message on the CAN bus or if it should
ignore it. If the selected rx FIFO is changed, the rx functions in this module must also be updated.
Using mask mode with all bits set to "don't care"*/
canBusFilterConfig.FilterBank = filterBankId; //Identification of which of the filter banks to define.
canBusFilterConfig.FilterMode = CAN_FILTERMODE_IDMASK; //Sets whether to filter out messages based on a specific id or a list
canBusFilterConfig.FilterScale = CAN_FILTERSCALE_32BIT; //Sets the width of the filter, 32-bit width means filter applies to full range of std id, extended id, IDE, and RTR bits
canBusFilterConfig.FilterIdHigh = (0xFFFF0000 & filterId)>>16; //For upper 16 bits, dominant bit is expected (logic 0)
canBusFilterConfig.FilterIdLow = 0x0000FFFF & filterId; //For Lower 16 bits, dominant bit is expected (logic 0)
canBusFilterConfig.FilterMaskIdHigh = (0xFFFF0000 & filterMask)>>16; //Upper 16 bits are don't care
canBusFilterConfig.FilterMaskIdLow = 0x0000FFFF & filterMask; //Lower 16 bits are don't care
//canBusFilterConfig.FilterFIFOAssignment = CAN_FILTER_FIFO0; //Sets which rx FIFO to which to apply the filter settings
canBusFilterConfig.FilterActivation = filterEnableFlag;
canBusFilterConfig.SlaveStartFilterBank = 1; //On dual-CAN parts, the first filter bank assigned to the slave CAN instance; banks below this value belong to CAN1
if (fifoAssignment == 0)
{
canBusFilterConfig.FilterFIFOAssignment = CAN_FILTER_FIFO0;
}
else
{
canBusFilterConfig.FilterFIFOAssignment = CAN_FILTER_FIFO1;
}
//Only fails if CAN peripheral is not in ready or listening state
if (HAL_CAN_ConfigFilter(&gCanBusH, &canBusFilterConfig) != HAL_OK)
{
return(ERR_CAN_INIT_FAILED);
}
else
{
return(SZW_NO_ERROR);
}
}//end BspCan_RxFilterConfig
When initializing, FIFO0 works perfectly but FIFO1 doesn't. If I initialize FIFO1 for both types of messages, it doesn't generate the interrupt. What am I doing wrong here? How do I initialize FIFO1 so that it works and generates the interrupt? I also tried without using digital filters, still no luck.
Thanks in advance.
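One thing worth checking: FIFO1 has its own interrupt line and its own HAL notification flag, separate from FIFO0, so activating only the FIFO0 notification leaves FIFO1 silent even though frames land in it. A minimal sketch of what the FIFO1 path needs, assuming the STM32 HAL CAN driver and the gCanBusH handle from the question:
/* After HAL_CAN_Start(): enable the FIFO1 pending-message notification
   (FIFO0 uses the separate CAN_IT_RX_FIFO0_MSG_PENDING flag) */
if (HAL_CAN_ActivateNotification(&gCanBusH, CAN_IT_RX_FIFO1_MSG_PENDING) != HAL_OK)
{
    /* handle the error */
}

/* FIFO1 messages arrive in their own callback, not the FIFO0 one */
void HAL_CAN_RxFifo1MsgPendingCallback(CAN_HandleTypeDef *hcan)
{
    CAN_RxHeaderTypeDef rxHeader;
    uint8_t rxData[8];
    if (HAL_CAN_GetRxMessage(hcan, CAN_RX_FIFO1, &rxHeader, rxData) == HAL_OK)
    {
        /* handle the broadcast message */
    }
}
The corresponding NVIC line (CAN1_RX1_IRQn on most STM32 families) must also be enabled, since FIFO1 uses a separate interrupt vector from FIFO0.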

How do I configure the u-boot video driver for a 320x240 LVDS display on an iMX6 board?

I have a custom hardware device that uses a Variscite i.MX6Q (quad-core) to drive a 320x240 display. Once the Linux kernel starts booting, the LCD display works great - no issues at all. However, prior to that the boot loader (u-boot) shows a white screen (sometimes with faint vertical lines) for about 0.25 s, then goes black for about 8 s until the kernel takes over (reinitializing the display and correctly showing the kernel's own splash screen).
Since the Linux kernel can drive the display just fine, I'm sure I've just misconfigured something in my u-boot setup... but I'm tearing my hair out trying to figure out what and where! Resources / things I've tried include:
Porting LVDS LCD With Low Resolution to i.MX6 - This seems highly relevant, but it refers to tweaking Linux kernel drivers rather than u-boot, and I'm not experienced enough to port the knowledge over to u-boot.
U-Boot splash screen - LVDS - This seems soooo close to the problem I'm having, but doesn't list a clear solution. One response in the forum linked to a suggestion to invert the polarity of one of the clocks, which I tried but did not notice any difference.
How to display splash screen on parallel LCD in u-boot - In the same theme as the prior posts, this again hints at an issue with specifying clocks for low-res displays.
i.mx6 33.26MHz LVDS panel cannot display in u-boot - Following these instructions, I modified ...../uboot/drivers/video/ipu_common.c and set the g_ldb_clk struct .rate members to 6400000, but that seemed to have no effect.
Adding Displays to iMX Developer's Kits [Warning - PDF!] - Instructions on how to add support for new displays to iMX boards; section 6.1.4 talks about iMX6Q. However, I've added the proper display timings to the displays[] var (see code below) and I'm still having problems.
From my custom board schematics, I know that I need to configure a PWM backlight display on PWM2 and backlight enable/disable on GPIO 5-13, and I need to provide custom display timings. So, the relevant sections in ..../uboot/board/variscite/mx6var_som.c:
struct display_info_t const displays[] = {{
.bus = -1,
.addr = 0,
.pixfmt = IPU_PIX_FMT_RGB24,
.detect = detect_MyCustomBoard,
.enable = lvds_enable_disable,
.mode = {
.name = "VAR-QVGA MX6CB-R",
.refresh = 60, /* optional */
.xres = 320,
.yres = 240,
.pixclock = MHZ2PS(6.4),
.left_margin = 64,
.right_margin = 20,
.upper_margin = 8,
.lower_margin = 4,
.hsync_len = 4,
.vsync_len = 10,
.sync = FB_SYNC_EXT,
.vmode = FB_VMODE_NONINTERLACED
} },
...
};
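(For reference, .pixclock is a period in picoseconds, so MHZ2PS(6.4) should work out to 10^12 / (6.4 x 10^6) = 156250 ps, matching the clock-frequency = <6400000> entry in the device tree below.)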
static void setup_display(void)
{
...
/* Turn off backlight until display is ready */
SETUP_IOMUX_PAD(PAD_DISP0_DAT19__GPIO5_IO13 | MUX_PAD_CTRL(NO_PAD_CTRL));
gpio_direction_output(IMX_GPIO_NR(5, 13), 0);
/* Setup the backlight dimmer (via PWM) */
SETUP_IOMUX_PAD(PAD_DISP0_DAT9__PWM2_OUT | MUX_PAD_CTRL(BACKLIGHT_PWM_CTRL));
pwm_init(VAR_SOM_BACKLIGHT_PWM_ID, VAR_SOM_BACKLIGHT_PERIOD, 0);
pwm_config(VAR_SOM_BACKLIGHT_PWM_ID, 0, VAR_SOM_BACKLIGHT_PERIOD);
...
/* Turn on LDB0, LDB1, IPU,IPU DI0 clocks */
reg = readl(&mxc_ccm->CCGR3);
reg |= MXC_CCM_CCGR3_LDB_DI0_MASK | MXC_CCM_CCGR3_LDB_DI1_MASK;
writel(reg, &mxc_ccm->CCGR3);
/* set LDB0, LDB1 clk select to 011/011 */
reg = readl(&mxc_ccm->cs2cdr);
reg &= ~(MXC_CCM_CS2CDR_LDB_DI0_CLK_SEL_MASK
| MXC_CCM_CS2CDR_LDB_DI1_CLK_SEL_MASK);
reg |= (1 << MXC_CCM_CS2CDR_LDB_DI0_CLK_SEL_OFFSET)
| (1 << MXC_CCM_CS2CDR_LDB_DI1_CLK_SEL_OFFSET);
writel(reg, &mxc_ccm->cs2cdr);
...
}
int splash_screen_prepare(void)
{
...
/* Turn on backlight */
gpio_set_value(IMX_GPIO_NR(5, 13), 1);
pwm_config(VAR_SOM_BACKLIGHT_PWM_ID, VAR_SOM_BACKLIGHT_PERIOD*127/256, VAR_SOM_BACKLIGHT_PERIOD);
...
}
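(The pwm_config call above sets the backlight duty cycle to 127/256 of VAR_SOM_BACKLIGHT_PERIOD, i.e. roughly 50% brightness.)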
For comparison, here are the relevant sections of my linux device tree:
&pwm2 {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_pwm2_1>;
status = "okay";
};
backlight {
compatible = "pwm-backlight";
pwms = <&pwm2 0 50000>;
brightness-levels = <0 4 8 16 32 64 128 248>;
default-brightness-level = <7>;
status = "okay";
};
&ldb {
status = "okay";
lvds-channel@0 {
fsl,data-mapping = "spwg";
fsl,data-width = <24>;
status = "okay";
primary;
display-timings {
native-mode = <&timing0r>;
timing0r: hsd100pxn1 {
clock-frequency = <6400000>;
hactive = <320>;
vactive = <240>;
hback-porch = <64>;
hfront-porch = <20>;
vback-porch = <8>;
vfront-porch = <4>;
hsync-len = <4>;
vsync-len = <10>;
};
};
};
...
};
&iomuxc {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_hog>;
imx6qdl-var-som-mx6 {
pinctrl_hog: hoggrp {
fsl,pins = <
...
/* LCD Enable on GPIO 5-13 */
MX6QDL_PAD_DISP0_DAT19__GPIO5_IO13 0xc0000000
...
>;
};
In terms of hardware, the LVDS signal from the iMX6 is converted to parallel RGB by a TI SN65LVDS822 FlatlinkTM LVDS receiver, which drives a 320x240 QVGA Okaya RH320240T-3x5AP-A display.
The framework I'm using is Yocto (Krogoth release), which includes:
U-Boot 2015.04-mx6+g535519b: git://github.com/varigit/uboot-imx.git, branch imx_v2015.04_4.1.15_1.1.0_ga_var03, commit 535519
Linux kernel 4.1.15: git://github.com/varigit/linux-2.6-imx.git, branch imx-rel_imx_4.1.15_2.0.0_ga-var01, commit 5a4b34
I do have a Variscite DevKit, and when I boot the SOM in the DevKit (with an appropriate device tree and associated drivers) everything works great and I see both the uboot splash image as well as the linux kernel splash image. This implies that the image I'm using for the uboot splash is valid, can be read by uboot, etc.
There is one other kicker: I do not have serial console access on my production board set :(.
So, the big question here is what am I doing wrong in my uboot display driver initialization? At this point, I'd even welcome strategies on how to go about debugging this (although I don't have access to an oscilloscope).

Raw Input mouse lastx, lasty with odd values while logged in through RDP

When I attempt to update my mouse position from the lLastX and lLastY members of the RAWMOUSE structure while I'm logged in via RDP, I get some really odd numbers (like > 30,000 for both). I've noticed this behavior on Windows 7, 8, 8.1, and 10.
The usFlags member returns a value of MOUSE_MOVE_ABSOLUTE | MOUSE_VIRTUAL_DESKTOP. Regarding the MOUSE_MOVE_ABSOLUTE, I am handling absolute positioning as well as relative in my code. However, the virtual desktop flag has me a bit confused as I assumed that flag was for a multi-monitor setup. I've got a feeling that there's a connection to that flag and the weird numbers I'm getting. Unfortunately, I really don't know how to adjust the values without a point of reference, nor do I even know how to get a point of reference.
When I run my code locally, everything works as it should.
So does anyone have any idea why RDP + Raw Input would give me such messed up mouse lastx/lasty values? And if so, is there a way I can convert them to more sensible values?
It appears that when using WM_INPUT through Remote Desktop, the MOUSE_MOVE_ABSOLUTE and MOUSE_VIRTUAL_DESKTOP bits are set, and the values seem to range from 0 to USHRT_MAX (65535).
I never really found clear documentation stating which coordinate system is used when the MOUSE_VIRTUAL_DESKTOP bit is set, but this has worked well thus far:
case WM_INPUT: {
    // A RAWINPUT packet for a mouse fits in a fixed-size byte buffer.
    UINT buffer_size = sizeof(RAWINPUT);
    BYTE buffer[sizeof(RAWINPUT)];
    if (GetRawInputData((HRAWINPUT)lparam, RID_INPUT, buffer, &buffer_size, sizeof(RAWINPUTHEADER)) == (UINT)-1) {
        break;
    }
    RAWINPUT* raw = (RAWINPUT*)buffer;
    if (raw->header.dwType != RIM_TYPEMOUSE) {
        break;
    }
    const RAWMOUSE& mouse = raw->data.mouse;
    if ((mouse.usFlags & MOUSE_MOVE_ABSOLUTE) == MOUSE_MOVE_ABSOLUTE) {
        static Vector3 last_pos = vector3(FLT_MAX, FLT_MAX, FLT_MAX);
        // Absolute coordinates are normalized to [0, 65535]; with
        // MOUSE_VIRTUAL_DESKTOP set they span the whole virtual desktop.
        const bool virtual_desktop = (mouse.usFlags & MOUSE_VIRTUAL_DESKTOP) == MOUSE_VIRTUAL_DESKTOP;
        const int width = GetSystemMetrics(virtual_desktop ? SM_CXVIRTUALSCREEN : SM_CXSCREEN);
        const int height = GetSystemMetrics(virtual_desktop ? SM_CYVIRTUALSCREEN : SM_CYSCREEN);
        const Vector3 absolute_pos = vector3((mouse.lLastX / float(USHRT_MAX)) * width, (mouse.lLastY / float(USHRT_MAX)) * height, 0);
        if (last_pos != vector3(FLT_MAX, FLT_MAX, FLT_MAX)) {
            MouseMoveEvent(absolute_pos - last_pos);
        }
        last_pos = absolute_pos;
    }
    else {
        // Relative motion: lLastX/lLastY are deltas in device units.
        MouseMoveEvent(vector3((float)mouse.lLastX, (float)mouse.lLastY, 0));
    }
}
break;
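For completeness, this assumes the window was registered for raw mouse input beforehand, something along these lines (hwnd being your window handle):
RAWINPUTDEVICE rid;
rid.usUsagePage = 0x01; // HID_USAGE_PAGE_GENERIC
rid.usUsage = 0x02;     // HID_USAGE_GENERIC_MOUSE
rid.dwFlags = 0;
rid.hwndTarget = hwnd;
RegisterRawInputDevices(&rid, 1, sizeof(rid));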

AudioUnitRender fails with GenericOutput (-10877 / Invalid Element)

I am basically trying to obtain the samples produced by an AUGraph using a GenericOutput Node and a call to AudioUnitRender. As a starting point for my program I used the MixerHost example by Apple and changed the outputNode as follows.
AudioComponentDescription iOUnitDescription;
iOUnitDescription.componentType = kAudioUnitType_Output;
iOUnitDescription.componentSubType = kAudioUnitSubType_GenericOutput;
iOUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
iOUnitDescription.componentFlags = 0;
iOUnitDescription.componentFlagsMask = 0;
Later when I want to obtain my samples, I call
AudioUnitRenderActionFlags ioActionFlags = kAudioOfflineUnitRenderAction_Render;
AudioTimeStamp inTimeStamp = {0};
inTimeStamp.mHostTime = mach_absolute_time();
inTimeStamp.mFlags = kAudioTimeStampSampleHostTimeValid;
result = AudioUnitRender (
ioUnit,
&ioActionFlags,
&inTimeStamp,
1,
1024,
ioData
);
which yields an
"-10877 / Invalid Element"
error. My assumption is that the error comes from not setting the inTimeStamp.mSampleTime field correctly. To be honest, I have not found a way to determine the sample time other than AudioQueueDeviceGetCurrentTime, which I cannot use since I do not use an AudioQueue. However, changing the ioActionFlags to kAudioTimeStampHostTimeValid does not change the error behaviour.
The error pertaining to the element (AKA 'bus') refers to the 4th argument (1) in your AudioUnitRender call. The Generic Output unit has only one element/bus: 0, which has input, output, and global scopes. If you pass 0 instead of 1 for the element number, that error should disappear.
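That is, the call from the question with only the bus number changed:
result = AudioUnitRender (
    ioUnit,
    &ioActionFlags,
    &inTimeStamp,
    0,    // element/bus 0: the Generic Output unit's only bus
    1024, // number of frames to render
    ioData
);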

How to define 'end' in Objective-C

OSStatus SetupBuffers(BG_FileInfo *inFileInfo)
{
int numBuffersToQueue = kNumberBuffers;
UInt32 maxPacketSize;
UInt32 size = sizeof(maxPacketSize);
// we need to calculate how many packets we read at a time, and how big a buffer we need
// we base this on the size of the packets in the file and an approximate duration for each buffer
// first check to see what the max size of a packet is - if it is bigger
// than our allocation default size, that needs to become larger
OSStatus result = AudioFileGetProperty(inFileInfo->mAFID, kAudioFilePropertyPacketSizeUpperBound, &size, &maxPacketSize);
AssertNoError("Error getting packet upper bound size", end);
bool isFormatVBR = (inFileInfo->mFileFormat.mBytesPerPacket == 0 || inFileInfo->mFileFormat.mFramesPerPacket == 0);
CalculateBytesForTime(inFileInfo->mFileFormat, maxPacketSize, 0.5/*seconds*/, &mBufferByteSize, &mNumPacketsToRead);
// if the file is smaller than the capacity of all the buffer queues, always load it at once
if ((mBufferByteSize * numBuffersToQueue) > inFileInfo->mFileDataSize)
inFileInfo->mLoadAtOnce = true;
if (inFileInfo->mLoadAtOnce)
{
UInt64 theFileNumPackets;
size = sizeof(UInt64);
result = AudioFileGetProperty(inFileInfo->mAFID, kAudioFilePropertyAudioDataPacketCount, &size, &theFileNumPackets);
AssertNoError("Error getting packet count for file", end);***>>>>this is where xcode says undefined<<<<***
mNumPacketsToRead = (UInt32)theFileNumPackets;
mBufferByteSize = inFileInfo->mFileDataSize;
numBuffersToQueue = 1;
}
The exact error is:
label 'end' used but not defined
I get that error twice.
If you look at the SoundEngine.cpp source that the snippet comes from, you'll see it's defined on the very next line:
end:
return result;
It's a label that execution jumps to when there's an error.
Uhm, the only place I can find AssertNoError is in Technical Note TN2113, and there it has a completely different signature: AssertNoError(theError, "couldn't unregister the ABL"). So where is AssertNoError defined?
User @Jeremy P mentions this document as well.
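It is not the TN2113 form; the snippet's own behaviour pins it down: AssertNoError is a goto-based error-check macro, presumably defined in SoundEngine.cpp itself or a header it includes. A sketch of the idiom (an assumption, not Apple's verbatim definition):
#define AssertNoError(inMessage, inHandler)              \
    if (result != noErr) {                               \
        printf("%s: %d\n", inMessage, (int)result);      \
        goto inHandler;                                  \
    }
With a definition like that, every AssertNoError("...", end) needs a matching end: label in the enclosing function, which is exactly what the compiler is complaining about.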