I am working with EGL on an ARM GPU, and I am using a pbuffer to do off-screen rendering. I follow the standard procedure described in the documentation to set everything up:
EGLDisplay display;
EGLConfig config;
EGLContext context;
EGLSurface surface;
EGLint num_config;
// assume I allocated both attrib lists somewhere
attribute_list[0] = EGL_SURFACE_TYPE;
attribute_list[1] = EGL_PBUFFER_BIT;
attribute_list[2] = EGL_RENDERABLE_TYPE;
attribute_list[3] = EGL_OPENGL_ES2_BIT;
attribute_list[4] = EGL_OPENGL_RED_SIZE;
attribute_list[5] = 8;
attribute_list[6] = EGL_OPENGL_GREEN_SIZE;
attribute_list[7] = 8;
attribute_list[8] = EGL_OPENGL_BLUE_SIZE;
attribute_list[9] = 8;
attribute_list[9] = EGL_OPENGL_ALPHA_SIZE;
attribute_list[10] = 8;
attribute_list[11] = EGL_OPENGL_DEPTH_SIZE;
attribute_list[12] = 8;
attribute_list[13] = EGL_NONE;
pbuffer_attribs[0] = EGL_WIDTH;
pbuffer_attribs[1] = 512;
pbuffer_attribs[2] = EGL_HEIGHT;
pbuffer_attribs[3] = 512;
pbuffer_attribs[4] = EGL_NONE;
/* get an EGL display connection */
display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
/* initialize the EGL display connection */
eglInitialize(display, NULL, NULL);
/* get an appropriate EGL frame buffer configuration */
eglChooseConfig(display, attribute_list, &config, 1, &num_config);
/* create an EGL rendering context */
context = eglCreateContext(display, config, EGL_NO_CONTEXT, NULL);
/* create the surface */
surface = eglCreatePbufferSurface(display, config, pbuffer_attribs);
/* connect the context to the surface */
eglMakeCurrent(display, surface, surface, context);
After this, my reads and writes should be associated with this offscreen pBuffer, correct? Does this pBuffer have an FBO associated with it that is distinct from the default framebuffer? The issue I am running into is that I get a GL_FRAMEBUFFER_UNDEFINED error when I try to glReadPixels. This error occurs when:
GL_FRAMEBUFFER_UNDEFINED is returned if target is the default framebuffer, but the default framebuffer does not exist.
My reading of this error is that I am rendering to the default FBO and not to the pBuffer's FBO. Is this interpretation correct? If so, what else do I need to do so I can read from and write to the pBuffer FBO?
If the above sequence completes successfully (without errors), then, yes, the offscreen pBuffer becomes the default framebuffer for the OpenGL ES context and all reads and writes will be associated with the pBuffer (unless a non-default FBO is bound).
It's worth checking that eglGetError() returns EGL_SUCCESS after each EGL call. The following part of your code listing looks suspicious:
attribute_list[9] = 8;
attribute_list[9] = EGL_OPENGL_ALPHA_SIZE;
I've been learning Vulkan and following vulkan-tutorial, and right now I'm at the Texture mapping part. I'm loading an image and uploading it to host memory, but I'm having trouble understanding the layout transitions and barriers.
Consider this (pseudo)code for loading and transitioning an image (inspired by this), which will be sampled in a fragment shader:
auto texture = loadTexture(filePath);
auto stagingBuffer = createStagingBuffer(texture.pixels, texture.size);
// Create image with:
// usage - VK_IMAGE_USAGE_TRANSFER_DST_BIT | VK_IMAGE_USAGE_SAMPLED_BIT
// properties - VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT
auto imageBuffer = createImage();
// -- begin single usage command buffer --
auto cb = beginCommandBuffer();
VkImageMemoryBarrier preCopyBarrier {
// ...
.image = imageBuffer.image,
.srcAccessMask = 0,
.dstAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
.oldLayout = VK_IMAGE_LAYOUT_UNDEFINED,
.newLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
// ...
};
// PipelineBarrier (preCopyBarrier):
// srcStageMask = VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT
// dstStageMask = VK_PIPELINE_STAGE_TRANSFER_BIT
// imageMemoryBarrier = &preCopyBarrier
vkCmdPipelineBarrier(cb, ...);
// Copy stagingBuffer.buffer to imageBuffer.image
// dstImageLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL
vkCmdCopyBufferToImage(cb, ...);
VkImageMemoryBarrier postCopyBarrier {
// ...
.image = imageBuffer.image,
.srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
.dstAccessMask = VK_ACCESS_SHADER_READ_BIT,
.oldLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
.newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
// ...
};
// PipelineBarrier (postCopyBarrier):
// srcStageMask = VK_PIPELINE_STAGE_TRANSFER_BIT
// dstStageMask = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT
// imageMemoryBarrier = &postCopyBarrier
vkCmdPipelineBarrier(cb, ...);
endAndSubmitCommandBuffer(cb);
The preCopyBarrier is there because of the vkCmdCopyBufferToImage(...) command and will be "used"/"activated" only once, during that command(?).
The postCopyBarrier is there because the image will be sampled in the fragment shader, so the layout transition
VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL -> VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL
has to happen every single time a frame is rendered (? Please correct me if I'm wrong).
But (assuming I'm correct, which I'm probably not) I'm having trouble wrapping my head around the fact that I'm creating a preCopyBarrier that will be used only once and a postCopyBarrier that will be used continuously. If I were to load, say, 200 textures, I'd have a bunch of their single-use preCopyBarriers lying around. Isn't this a... waste?
This might be a stupid question and I'm probably missing/misunderstanding something important, but I feel like I shouldn't move on without understanding this concept correctly.
I have an image of dimensions 4096*4096 (so 67108864 bytes, since there are 4 channels) that I want to copy from a staging buffer to a device-local image. The buffer already has the data stored, and I have set up the image barriers properly, so now I want to perform the copy operation... except it doesn't work. The validation layers give me this error message when I call vkCmdCopyBufferToImage():
IMAGE(ERROR): object: 0x0 type: 6 location: 3903 msgCode: 417333590: vkCmdCopyBufferToImage(): pRegion[0] exceeds buffer size of 67108864 bytes. The spec valid usage text states 'The buffer region specified by each element of pRegions mustbe a region that is contained within srcBuffer' (https://www.khronos.org/registry/vulkan/specs/1.0/html/vkspec.html#VUID-vkCmdCopyBufferToImage-pRegions-00171).
I can't find anything wrong with the values I gave it, though. The VkBufferImageCopy struct I passed to it looks like this:
VkBufferImageCopy bufImgCopy;
bufImgCopy.bufferOffset = 0;
bufImgCopy.bufferImageHeight = 0;
bufImgCopy.bufferRowLength = 0;
bufImgCopy.imageExtent = modelTexture.imgExtents; // 4096 * 4096 * 1
bufImgCopy.imageOffset = {0, 0, 0};
bufImgCopy.imageSubresource.aspectMask = modelTexture.subResource.aspectMask; // Colour attachment
bufImgCopy.imageSubresource.baseArrayLayer = modelTexture.subResource.baseArrayLayer; // 0
bufImgCopy.imageSubresource.layerCount = VK_REMAINING_ARRAY_LAYERS;
bufImgCopy.imageSubresource.mipLevel = 0;
I can't figure out why the API thinks the struct is specifying a size greater than the buffer size. The format of the image is VK_FORMAT_B8G8R8A8_UNORM.
EDIT
Here's the code that sets up the staging buffer:
stageBuf.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;
stageBuf.shareMode = VK_SHARING_MODE_EXCLUSIVE;
stageBuf.bufSize =
    static_cast<VkDeviceSize>(verts.size() * sizeof(vert) + indices.size() * sizeof(u32)) > modelImage.size
        ? static_cast<VkDeviceSize>(verts.size() * sizeof(vert) + indices.size() * sizeof(u32))
        : modelImage.size; // the larger of (vertex + index data) and the image data
// filled from the previous struct.
VkBufferCreateInfo info;
info.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
info.pNext = nullptr;
info.flags = 0;
info.queueFamilyIndexCount = bufInfo.qFCount;
info.pQueueFamilyIndices = bufInfo.qFIndices;
info.usage = bufInfo.usage;
info.sharingMode = bufInfo.shareMode;
info.size = bufInfo.bufSize;
if (vkCreateBuffer(device, &info, nullptr, &(bufInfo.buf)) != VK_SUCCESS)
{ //...
VkMemoryRequirements memReqs;
vkGetBufferMemoryRequirements(device, buf, &memReqs);
for (u32 type = 0; type < memProps.memoryTypeCount; ++type)
if ((memReqs.memoryTypeBits & (1 << type)) &&
((memProps.memoryTypes[type].propertyFlags & memFlags) == memFlags)) // The usual things to set buffers up.
{
VkMemoryAllocateInfo info;
info.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
info.pNext = nullptr;
info.allocationSize = memReqs.size;
info.memoryTypeIndex = type;
if (vkAllocateMemory(device, &info, nullptr, &mem.memory) == VK_SUCCESS)
{ //....
// All this works perfectly except for the texture copy.
if (vkBindBufferMemory(device, buf, mem.memory, mem.offset) != VK_SUCCESS)
{ //...
I'm using this staging buffer for both the vertex and index data (which I have combined into a single buffer with offsets) and for the image that I'm trying to copy. The memory allocated corresponds to the size of the largest of these.
As noted in the comments, using VK_REMAINING_ARRAY_LAYERS is invalid for the layerCount of the VkImageSubresourceLayers embedded in VkBufferImageCopy, so you have to set layerCount explicitly to the actual number of layers to copy (1 here). VK_REMAINING_ARRAY_LAYERS is the sentinel value ~0U, so the validation layer computes a required buffer region vastly larger than your 67108864-byte staging buffer, which is exactly the error it reports.
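For reference, a corrected VkBufferImageCopy for this case might look like the following (a sketch reusing the literal values from the question; the colour aspect and zero offsets are assumed):
VkBufferImageCopy bufImgCopy = {0};
bufImgCopy.bufferOffset = 0;
bufImgCopy.bufferRowLength = 0;       // 0 = rows are tightly packed
bufImgCopy.bufferImageHeight = 0;     // 0 = image data is tightly packed
bufImgCopy.imageSubresource.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
bufImgCopy.imageSubresource.mipLevel = 0;
bufImgCopy.imageSubresource.baseArrayLayer = 0;
bufImgCopy.imageSubresource.layerCount = 1;   // explicit count instead of VK_REMAINING_ARRAY_LAYERS
bufImgCopy.imageOffset.x = 0;
bufImgCopy.imageOffset.y = 0;
bufImgCopy.imageOffset.z = 0;
bufImgCopy.imageExtent.width  = 4096;
bufImgCopy.imageExtent.height = 4096;
bufImgCopy.imageExtent.depth  = 1;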
Hi, I am trying to bind a depth memory buffer, but I get the error below. I have no idea why this error is popping up.
The depth format is VK_FORMAT_D16_UNORM and the usage is VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT. I have read online that the tiling shouldn't be linear, but then I get a different error. Thanks!
The code for creating and binding the image is below.
VkImageCreateInfo imageInfo = {};
// If the depth format is undefined, fall back to the 16-bit VK_FORMAT_D16_UNORM
if (Depth.format == VK_FORMAT_UNDEFINED) {
Depth.format = VK_FORMAT_D16_UNORM;
}
const VkFormat depthFormat = Depth.format;
VkFormatProperties props;
vkGetPhysicalDeviceFormatProperties(*deviceObj->gpu, depthFormat, &props);
if (props.linearTilingFeatures & VK_FORMAT_FEATURE_DEPTH_STENCIL_ATTACHMENT_BIT) {
imageInfo.tiling = VK_IMAGE_TILING_LINEAR;
}
else if (props.optimalTilingFeatures & VK_FORMAT_FEATURE_DEPTH_STENCIL_ATTACHMENT_BIT) {
imageInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
}
else {
std::cout << "Unsupported Depth Format, try other Depth formats.\n";
exit(-1);
}
imageInfo.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
imageInfo.pNext = NULL;
imageInfo.imageType = VK_IMAGE_TYPE_2D;
imageInfo.format = depthFormat;
imageInfo.extent.width = width;
imageInfo.extent.height = height;
imageInfo.extent.depth = 1;
imageInfo.mipLevels = 1;
imageInfo.arrayLayers = 1;
imageInfo.samples = NUM_SAMPLES;
imageInfo.queueFamilyIndexCount = 0;
imageInfo.pQueueFamilyIndices = NULL;
imageInfo.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
imageInfo.usage = VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT;
imageInfo.flags = 0;
// User create image info and create the image objects
result = vkCreateImage(deviceObj->device, &imageInfo, NULL, &Depth.image);
assert(result == VK_SUCCESS);
// Get the image memory requirements
VkMemoryRequirements memRqrmnt;
vkGetImageMemoryRequirements(deviceObj->device, Depth.image, &memRqrmnt);
VkMemoryAllocateInfo memAlloc = {};
memAlloc.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
memAlloc.pNext = NULL;
memAlloc.allocationSize = 0;
memAlloc.memoryTypeIndex = 0;
memAlloc.allocationSize = memRqrmnt.size;
// Determine the type of memory required with the help of memory properties
pass = deviceObj->memoryTypeFromProperties(memRqrmnt.memoryTypeBits, 0, /* No requirements */ &memAlloc.memoryTypeIndex);
assert(pass);
// Allocate the memory for image objects
result = vkAllocateMemory(deviceObj->device, &memAlloc, NULL, &Depth.mem);
assert(result == VK_SUCCESS);
// Bind the allocated memory
result = vkBindImageMemory(deviceObj->device, Depth.image, Depth.mem, 0);
assert(result == VK_SUCCESS);
Yes, linear tiling may not be supported for depth-usage images.
Consult the specification and the Valid Usage section of VkImageCreateInfo. The capability is queried with the vkGetPhysicalDeviceFormatProperties and vkGetPhysicalDeviceImageFormatProperties commands. Depth formats are "opaque" anyway, so there is not much reason to use linear tiling.
You already seem to be doing this in your code.
But the error informs you that you are trying to use a memory type that is not allowed for the given image. Use the vkGetImageMemoryRequirements command to query which memory types are allowed.
Possibly you have an error there (you are using 0x1, which is obviously not part of 0x84 per the message). You may want to reuse the example code from the Device Memory chapter of the specification. Provide your memoryTypeFromProperties implementation for a more specific answer.
I accidentally set the typeIndex to 1 instead of i, and it works now. In my defense, I have been writing Vulkan code the whole day and my eyes are bleeding :). Thanks for the help.
bool VulkanDevice::memoryTypeFromProperties(uint32_t typeBits, VkFlags
requirementsMask, uint32_t *typeIndex)
{
// Search memtypes to find first index with those properties
for (uint32_t i = 0; i < 32; i++) {
if ((typeBits & 1) == 1) {
// Type is available, does it match user properties?
if ((memoryProperties.memoryTypes[i].propertyFlags & requirementsMask) == requirementsMask) {
*typeIndex = i;// was set to 1 :(
return true;
}
}
typeBits >>= 1;
}
// No memory types matched, return failure
return false;
}
I am following the netlink example in this question and answer.
But I don't see any sort of connection identifier in the source code. For example:
Kernel
my_nl_sock = netlink_kernel_create(&init_net, NETLINK_USERSOCK, 0,
my_nl_rcv_msg, NULL, THIS_MODULE);
User space
nls = nl_socket_alloc();
ret = nl_connect(nls, NETLINK_USERSOCK);
ret = nl_send_simple(nls, MY_MSG_TYPE, 0, msg, sizeof(msg));
where NETLINK_USERSOCK and MY_MSG_TYPE don't seem to be connection identifiers.
In such a case, how does netlink know which data comes from which user-space app or kernel module, and which user-space app or kernel module the data should go to?
My guess is that netlink receives data from a user-space app or kernel module and broadcasts it, and every netlink-connected app or module checks the message type to see whether the data is destined for it.
Is what I think right?
First, I recommend reading some documentation, for example http://www.linuxfoundation.org/collaborate/workgroups/networking/generic_netlink_howto
To communicate, you have to register a family together with its supported operations. This can be done with the following functions:
int genl_register_family(struct genl_family *family);
int genl_register_ops(struct genl_family *family, struct genl_ops *ops);
An example of a family definition:
/*
* Attributes (variables): the index in this enum is used as a reference for the type,
* userspace application has to indicate the corresponding type
*/
enum {
CTRL_ATT_R_UNSPEC = 0,
CTRL_ATT_CNT_SESSIONS,
__CTRL_ATT_R_MAX
};
#define CTRL_ATT_R_MAX ( __CTRL_ATT_R_MAX - 1 )
#define CTRL_FAMILY "your-family"
#define CTRL_PROTO_VERSION 1
/* Family definition */
static struct genl_family ctrl_bin_gnl_family = {
.id = GENL_ID_GENERATE, // genetlink should generate an id
.hdrsize = 0,
.name = CTRL_FAMILY, // the name of this family, used by userspace application
.version = CTRL_PROTO_VERSION, // version number
.maxattr = CTRL_ATT_R_MAX, // max number of attr
};
An example of an operation definition:
struct genl_ops ctrl_info = {
.cmd = CTRL_CMD_INFO,
.flags = 0,
.policy = 0, // you can supply an attribute policy if you need one
.doit = 0, // set this callback if this op handles a normal (non-dump) request
.dumpit = __info, // set this callback if this op dumps data back to user space
};
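On the kernel side, the family and its operation are then registered from the module init, for example (a sketch against the same older kernel API as the signatures above; newer kernels attach the ops array directly to struct genl_family instead):
/* needs <linux/module.h> and <net/genetlink.h> */
static int __init ctrl_bin_init(void)
{
    int rc;
    rc = genl_register_family(&ctrl_bin_gnl_family);
    if (rc)
        return rc;
    rc = genl_register_ops(&ctrl_bin_gnl_family, &ctrl_info);
    if (rc) {
        genl_unregister_family(&ctrl_bin_gnl_family);
        return rc;
    }
    return 0;
}
static void __exit ctrl_bin_exit(void)
{
    genl_unregister_family(&ctrl_bin_gnl_family);
}
module_init(ctrl_bin_init);
module_exit(ctrl_bin_exit);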
After that, you can use your family and operations from your user-space app to communicate. Make a connection:
struct nl_sock *nl = nl_socket_alloc();
int ret = genl_connect(nl);
// test if fail
int gid = genl_ctrl_resolve(nl, CTRL_FAMILY);
// test if fail
Send the info operation:
struct nl_msg *msg = msg_alloc(CTRL_CMD_INFO, NLM_F_DUMP);
int ret = nl_send_auto(nl, msg);
// test if fail
// wait for the ack
// read a reply
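Here msg_alloc() is presumably a small helper that wraps nlmsg_alloc() and genlmsg_put() for this family. Reading the reply could then look roughly like this (a sketch using the libnl-3 callback API; that CTRL_ATT_CNT_SESSIONS carries a u32 is an assumption):
/* link with libnl-3 and libnl-genl-3; headers: <netlink/netlink.h>, <netlink/genl/genl.h>, <netlink/attr.h> */
static int info_reply_cb(struct nl_msg *msg, void *arg)
{
    struct nlattr *attrs[CTRL_ATT_R_MAX + 1];
    /* parse the generic netlink attributes of one reply message (hdrsize is 0 for this family) */
    if (genlmsg_parse(nlmsg_hdr(msg), 0, attrs, CTRL_ATT_R_MAX, NULL) < 0)
        return NL_SKIP;
    if (attrs[CTRL_ATT_CNT_SESSIONS])  /* assuming a u32 attribute */
        printf("sessions: %u\n", nla_get_u32(attrs[CTRL_ATT_CNT_SESSIONS]));
    return NL_OK;
}
/* ... after nl_send_auto(nl, msg): */
nl_socket_modify_cb(nl, NL_CB_VALID, NL_CB_CUSTOM, info_reply_cb, NULL);
nl_recvmsgs_default(nl);  /* blocks until the dump reply has been processed */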
OSStatus SetupBuffers(BG_FileInfo *inFileInfo)
{
int numBuffersToQueue = kNumberBuffers;
UInt32 maxPacketSize;
UInt32 size = sizeof(maxPacketSize);
// we need to calculate how many packets we read at a time, and how big a buffer we need
// we base this on the size of the packets in the file and an approximate duration for each buffer
// first check to see what the max size of a packet is - if it is bigger
// than our allocation default size, that needs to become larger
OSStatus result = AudioFileGetProperty(inFileInfo->mAFID, kAudioFilePropertyPacketSizeUpperBound, &size, &maxPacketSize);
AssertNoError("Error getting packet upper bound size", end);
bool isFormatVBR = (inFileInfo->mFileFormat.mBytesPerPacket == 0 || inFileInfo->mFileFormat.mFramesPerPacket == 0);
CalculateBytesForTime(inFileInfo->mFileFormat, maxPacketSize, 0.5/*seconds*/, &mBufferByteSize, &mNumPacketsToRead);
// if the file is smaller than the capacity of all the buffer queues, always load it at once
if ((mBufferByteSize * numBuffersToQueue) > inFileInfo->mFileDataSize)
inFileInfo->mLoadAtOnce = true;
if (inFileInfo->mLoadAtOnce)
{
UInt64 theFileNumPackets;
size = sizeof(UInt64);
result = AudioFileGetProperty(inFileInfo->mAFID, kAudioFilePropertyAudioDataPacketCount, &size, &theFileNumPackets);
AssertNoError("Error getting packet count for file", end);***>>>>this is where xcode says undefined<<<<***
mNumPacketsToRead = (UInt32)theFileNumPackets;
mBufferByteSize = inFileInfo->mFileDataSize;
numBuffersToQueue = 1;
}
Here is the exact error:
label 'end' used but not defined
I get that error twice.
If you look at the SoundEngine.cpp source that the snippet comes from, you'll see it's defined on the very next line:
end:
return result;
It's a label that execution jumps to when there's an error.
Uhm, the only place I can find AssertNoError is here in Technical Note TN2113. And it has a completely different format. AssertNoError(theError, "couldn't unregister the ABL"); Where is AssertNoError defined?
User @Jeremy P mentions this document as well.
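For reference, error-handling macros of this kind usually look something like the following. This is only an illustration of the goto-to-a-cleanup-label pattern, not the exact definition from SoundEngine.cpp or TN2113, and DoSomething is a placeholder function:
#include <stdio.h>
/* OSStatus and noErr come from MacTypes.h / CoreServices on Apple platforms */
#define AssertNoError(inMessage, inHandler)               \
    if (result != noErr) {                                \
        printf("%s: error %d\n", inMessage, (int)result); \
        goto inHandler;                                   \
    }
OSStatus DoSomething(void)
{
    OSStatus result = noErr;
    /* ... calls that set result ... */
    AssertNoError("Something went wrong", end);  /* jumps to end: if result != noErr */
end:
    return result;
}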