The question is regarding OpenGL ES 2.0 and EGL 1.4.
I'm trying to understand whether the spec defines the behavior of GL commands issued after eglTerminate has been called: specifically, whether a GL error is generated or whether an exception may occur.
Is there any definition of an expected behavior in this case, or should GL commands not be influenced by EGL commands at all?
Thanks
Calling eglTerminate flags for deletion all EGL resources associated with the EGLDisplay you are terminating. This includes any surfaces and contexts, which would certainly affect the behaviour of an OpenGL ES context in your case.
Regarding expected behaviour, the spec wording you're after is the following (from http://www.khronos.org/registry/egl/specs/eglspec.1.5.pdf - eglTerminate, page 17):
Use of bound contexts and surfaces (that is, continuing to issue commands to a bound client API context) will not result in interruption or termination of applications, but rendering results are undefined, and client APIs may generate errors.
i.e. if your context is still current when you terminate the display, the behaviour of any subsequent OpenGL ES calls made on that context is undefined: they may raise OpenGL ES errors or produce incorrect rendering, but they should not cause an exception.
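A minimal sketch of what that means in code (assuming 'display' was initialized and an OpenGL ES context was made current on it; which error, if any, comes back is implementation-dependent):

eglTerminate(display);             // the context and surfaces are now flagged for deletion

glClear(GL_COLOR_BUFFER_BIT);      // undefined results: may do nothing, may render garbage
GLenum err = glGetError();         // may be GL_NO_ERROR or an implementation-chosen error

// Guaranteed: the application is not interrupted or terminated.
// Not guaranteed: any particular error code, or correct rendering.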
I'm in the process of porting some rendering code to Vulkan. I've been using the SPIR-V cross-compiler to avoid the requirement of re-writing all my shaders, which has been working well in most cases, but now I've hit an issue I just can't get past.
I have a vertex shader being compiled to SPIR-V that uses SV_RenderTargetArrayIndex. This maps to ShaderViewportIndexLayerEXT in SPIR-V. The device I'm running on (an NVIDIA 3090 with the latest drivers) supports the VK_EXT_shader_viewport_index_layer extension (which is core in 1.2), and I'm explicitly enabling both shaderOutputViewportIndex and shaderOutputLayer in the VkPhysicalDeviceVulkan12Features struct (chained off the VkPhysicalDeviceFeatures2, which is in turn chained on pNext of the VkDeviceCreateInfo struct).
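For reference, the feature chain described above looks roughly like this (a sketch; variable names are mine, and queue/extension setup is omitted):

VkPhysicalDeviceVulkan12Features features12 = {};
features12.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VULKAN_1_2_FEATURES;
features12.shaderOutputViewportIndex = VK_TRUE;
features12.shaderOutputLayer = VK_TRUE;

VkPhysicalDeviceFeatures2 features2 = {};
features2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2;
features2.pNext = &features12;

VkDeviceCreateInfo deviceInfo = {};
deviceInfo.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
deviceInfo.pNext = &features2;    // queue create infos, extensions, etc. filled in elsewhere
// vkCreateDevice(physicalDevice, &deviceInfo, nullptr, &device);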
I've also added the line:
[[vk::ext_extension("SPV_EXT_shader_viewport_index_layer")]]
to the .hlsl source file, and verified that the SPIR-V being output contains the extension reference.
The validation layer is whining, with:
Validation Error: [ VUID-VkShaderModuleCreateInfo-pCode-01091 ] Object 0: handle = 0x1e66c477eb0, type = VK_OBJECT_TYPE_DEVICE; | MessageID = 0xa7bb8db6 | vkCreateShaderModule(): The SPIR-V Capability (ShaderViewportIndexLayerEXT) was declared, but none of the requirements were met to use it. The Vulkan spec states: If pCode declares any of the capabilities listed in the SPIR-V Environment appendix, one of the corresponding requirements must be satisfied
Is there something else I need to do to enable this feature on the device? Any help/insight would be greatly appreciated.
(I can always drop the requirement to use this feature - realistically in my use case on DX12 it's not gaining me much if anything, but I'd rather figure this out!)
I'm practicing with the Vulkan API. Yesterday I spent almost the entire day implementing secondary command buffers in order to use different fragment shaders on different objects.
The big issue was a segmentation fault in a call to vkCmdDrawIndexed(). For the moment this is a black box to me; I can't find a way to investigate the origin of the issue. Although the Vulkan API has validation layers for debugging, it is already complicated enough without them. I suspect that the error is in the code that creates the secondary command buffers.
Leaving aside these problems, which are due to my lack of knowledge, I accidentally found that the code works just as well with only the primary command buffer and multiple calls to vkCmdBindPipeline():
vkBeginCommandBuffer(primaryCommandBuffer...);
vkCmdBeginRenderPass(...);
vkCmdBindPipeline(...pipeline_things_a);
vkCmdBindDescriptorSets(...);
vkCmdBindVertexBuffers(...);
vkCmdBindIndexBuffer(...);
vkCmdSetViewport(...);
vkCmdSetScissor(...);
draw_things_a(...) {... vkCmdDrawIndexed(...) ...}
vkCmdBindPipeline(...pipeline_things_b);
vkCmdBindDescriptorSets(...);
vkCmdBindVertexBuffers(...);
vkCmdBindIndexBuffer(...);
vkCmdSetViewport(...);
vkCmdSetScissor(...);
draw_things_b(...) {... vkCmdDrawIndexed(...) ...}
vkCmdEndRenderPass(primaryCommandBuffer);
vkEndCommandBuffer(primaryCommandBuffer);
I'm outside the regular learning path, so my question may be an obvious error to most, but I'll ask:
Is it an error to call vkCmdBindPipeline() multiple times in a primary command buffer?
No, that's not an error. You can make arbitrarily many calls, e.g. to bind pipelines, within a single command buffer. In general, you can record any command beginning with vkCmd into a single command buffer as many times as you want.
VK_ERROR_UNKNOWN was part of Vulkan 1.0. However, it was only first defined in Vulkan-Header 1.2.13 (see history).
Is there a particular reason for this?
VK_ERROR_UNKNOWN was added so that there is a specific code to return if your driver (or perhaps a layer) encounters some inconsistency and panics. Previously, VK_ERROR_VALIDATION_FAILED_EXT was often used for that case.
Either way, returning VK_ERROR_UNKNOWN is in and of itself part of undefined behavior, and is not allowed as part of conformant behavior. So introducing the code is not a compatibility-breaking change.
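From the application's point of view, that means the code can only really be treated as "something is badly broken". A minimal sketch, assuming a typical submit call (the queue, submitInfo, and fence variables are placeholders; the recovery policy is up to the application):

VkResult result = vkQueueSubmit(queue, 1, &submitInfo, fence);
switch (result) {
case VK_SUCCESS:
    break;
case VK_ERROR_DEVICE_LOST:
    // recoverable in principle: tear down and recreate the device
    break;
case VK_ERROR_UNKNOWN:
default:
    // the implementation (or our own usage) hit an inconsistency;
    // as discussed above, this is already outside conformant behavior
    abort();
}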
While porting a regular C++ class to a Windows Runtime class, I hit a fairly significant roadblock. My C++ class reports certain error conditions by throwing custom error objects. This allows clients to conveniently filter on exceptions that are documented in the public interface.
I cannot seem to find a reliable way to pass enough information across the ABI to replicate the same fidelity[1] using the Windows Runtime. Under the assumption that an HRESULT is the only generalized error-reporting information, I have evaluated the following options:
The 'obvious' choice: map the exception condition to one of the predefined HRESULT values. While this technically works (presumably), there is no way at the call site to distinguish between errors originating from the implementation and errors originating from callees of the implementation.
Invent custom HRESULTs. If this layout still applies to the Windows Runtime, I could easily set the Customer bit and go crazy with my 27 bits' worth of error code representation (a sketch of this appears after these options). This works until someone else does the same. I'm not aware of any way to attribute an HRESULT to an interface, which would solve this ambiguity.
Even if either of the above could be made to work as intended, throwing hresult_errors as prescribed, the call site would still be at the mercy of the language projection. While C# seemingly allows any System.Exception(-derived) error object to be passed across the ABI and re-thrown at the call site, C++/WinRT only supports some 14 distinct exception types (see throw_hresult).
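For what it's worth, the Customer-bit idea would look something like the following sketch (the facility and code values are made up, and this does nothing to solve the collision problem):

// HRESULT layout: severity (bit 31), Customer bit (bit 29), facility (bits 26-16), code (bits 15-0).
constexpr HRESULT make_custom_error(unsigned short code) noexcept
{
    return static_cast<HRESULT>(0x80000000u    // SEVERITY_ERROR
                              | 0x20000000u    // Customer-defined bit
                              | code);         // component-specific error code
}

constexpr HRESULT E_MYCOMPONENT_WIDGET_MISSING = make_custom_error(0x0001);  // hypothetical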
With neither of these options allowing sufficiently complete error information to cross the ABI, it seems that an HRESULT simply may not be enough. Does the Windows Runtime have any provision for allowing additional (arbitrary) error information to cross the ABI?
[1] I'm not strictly interested in passing actual C++ exceptions across. Instead, I'm looking for a way to allow clients to uniquely identify documented error conditions in a natural way. Passing custom Windows Runtime error types would be fine.
There are a few options here. Our general API guidance for Windows Runtime APIs that have well-defined, expected failure modes is that failure information should be part of the normal parameters and return value. We would normally create a TryDoSomething API in this situation and provide extended error information via either a return or out parameter. This works best for us due to the fact that there's no consistent way to map exceptions across all languages. This is a topic we hope to revisit more in xlang in the future.
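As a rough illustration of that pattern, a call site might look like the sketch below; the WidgetLoader/WidgetErrorInfo types and the TryLoadWidget method are purely hypothetical, invented for this example:

// Hypothetical IDL shape (not an existing API):
//   Boolean TryLoadWidget(String name, out WidgetErrorInfo errorInfo);
//
// Consuming it from C++/WinRT would then look roughly like:
MyComponent::WidgetErrorInfo info{ nullptr };
if (!loader.TryLoadWidget(L"gizmo", info))
{
    // An expected, documented failure: inspect info.Code() / info.Message()
    // instead of catching a projected exception.
}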
HRESULTs are usable, with a caveat. HRESULT values can be a nuisance in anything but C++, because elsewhere you need to redefine them locally rather than just using the header. They will generate exceptions in most languages, so if these failures are common, you'll be creating debugger noise for your components' clients.
The last option allows you to transit a language-specific exception stored in a COM object across the ABI boundary (and up the COM logical stack, including across marshalled calls). In practice it will only be usable by C++ code compiled with the same compiler, settings, and type definitions as the component itself. E.g. passing it from a component compiled with VC to a component compiled with Clang could potentially lead to memory corruption.
Assuming I haven't scared you off, you'll want to look at RoOriginateLanguageException. It allows you to wrap the exception in a COM object and store it with other winrt error data in the TLS. We use this in projections to enable exceptions thrown within a callback to propagate to the outer code using the same projection in a controlled way that unwinds safely through other code potentially written using other languages or tools. This is how the support in C# and other languages is implemented.
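If I'm remembering the roerrorapi.h declaration correctly, the call itself is small; the real work is in wrapping your exception in a COM object. A sketch with error handling omitted (the wrapped-exception object is something you implement yourself):

#include <roerrorapi.h>
#include <winstring.h>
#include <cwchar>

// 'wrappedException' is an IUnknown implementation holding the language-specific
// exception; its implementation is up to you and omitted here.
void report_failure(HRESULT hr, wchar_t const* text, IUnknown* wrappedException)
{
    HSTRING message = nullptr;
    WindowsCreateString(text, static_cast<UINT32>(wcslen(text)), &message);

    // Associates hr, the message, and the language-specific exception object with
    // the calling thread's error info so it can travel back across the ABI.
    RoOriginateLanguageException(hr, message, wrappedException);

    WindowsDeleteString(message);
}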
Thanks,
Ben
As far as I know, there are separate vectors for handling an SError caused by EL0 and by EL1.
My question is the following:
Given that an SError is asynchronous, can I rely on the fact that if the CPU entered serror_el1_vector to handle the SError, then the SError was caused exactly in EL1 (not in EL0, EL2, or EL3), and if the CPU entered serror_el0_vector, then the SError was caused exactly in EL0? In other words, is the following case possible:
EL0:
1.1. An incorrect access to some device register (for example, a write to a read-only register) that causes an SError interrupt. Such an access does not generate an access error immediately; at some point later, when the AXI transaction actually happens, the memory system returns a fault, which is reported as an asynchronous abort.
1.2. The SError has still not been generated, and the user code has time to execute an svc to enter EL1.
EL1:
2.1. The CPU is now in EL1, entered via step 1.2.
2.2. The SError caused by step 1.1 is finally generated, but the CPU is now in EL1, not EL0. Which vector will the CPU take to handle the SError: serror_el1_vector or serror_el0_vector? The incorrect access was originally made in EL0, but the CPU is now in the EL1 state.
Thank you in advance!
Can I detect from which mode (EL1, EL0, …) the SError interrupt was caused?
No, unless you have stronger guarantees than those given in the ARM Architecture Reference Manual.
The problem is that nearly everything is implementation defined.
For a start, there seems to be no guarantee that an SError is even caused by the PE. Page D1-2198:
An External abort generated by the memory system might be taken asynchronously using the SError interrupt. These SError interrupts always behave as edge-triggered interrupts. An implementation might include other sources of SError interrupt.
So it's entirely possible for the source of an SError to be off-chip.
In addition, in a multi-core system nothing seems to prevent core 1 from issuing a write that leads to an SError which is subsequently delivered to core 2.
Next, let's look at what information an SError carries. Page D1-2170:
If the exception is a synchronous exception or an SError interrupt, information characterizing the reason for the exception is saved in the ESR_ELx at the target Exception level.
Looking at ESR_EL1 on page D12-2798:
IDS, bit [24]
IMPLEMENTATION DEFINED syndrome. Possible values of this bit are:
0b0
Bits[23:0] of the ISS field holds the fields described in this encoding.
Note: If the RAS Extension is not implemented, this means that bits[23:0] of the ISS field are RES0.
0b1
Bits[23:0] of the ISS field holds IMPLEMENTATION DEFINED syndrome information that can be used to provide additional information about the SError interrupt.
So it's possible for the PE to implement a custom register configuration that provides the information you're looking for, but again: that's implementation defined.
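For completeness, this is roughly how a handler would get at that syndrome, based on the fields quoted above (a bare-metal AArch64 sketch; whether bits[23:0] encode anything useful, such as the originating Exception level, is entirely implementation defined):

#include <stdint.h>

// Called from the handler entered via serror_el1_vector.
static void inspect_serror_syndrome(void)
{
    uint64_t esr;
    __asm__ volatile("mrs %0, esr_el1" : "=r"(esr));

    uint64_t ec  = (esr >> 26) & 0x3F;   // Exception Class; 0x2F = SError interrupt
    uint64_t ids = (esr >> 24) & 0x1;    // the IDS bit quoted above
    uint64_t iss = esr & 0xFFFFFF;       // bits[23:0]

    if (ec == 0x2F && ids == 1) {
        // iss holds IMPLEMENTATION DEFINED syndrome information; only your
        // core/SoC manual can say what, if anything, it tells you.
        (void)iss;
    }
}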
Also, while this is outside the scope of the PE specification, it's possible that the memory system provides a way to recover the source of an SError.
Bottom line: Everything's implementation defined, so refer to the manual of your specific hardware.