How to save GTiff data in memory with C++ - GDAL

I want to create a dataset that stores image data in GTiff format, and then pass the in-memory address of the whole GTiff file to another function that publishes this TIFF as a WMS service.
Right now VSIGetMemFileBuffer returns binData as 0x0 and binDataLength as a very large number. Why? How can I achieve my goal? Thanks a lot!
The following is my code snippet:
GDALDriver *hMemDriver = GetGDALDriverManager()->GetDriverByName("MEM");
char **papszOptions = NULL;
GDALDataset *hMemDS = hMemDriver->Create("/vsimem/geotiffnameout", NCOLS, NROWS, 0, GDT_Float32, NULL);
char szTmp[64];
memset(szTmp, 0, sizeof(szTmp));
CPLPrintPointer(szTmp, dataBuff, sizeof(szTmp)); // dataBuff is my image data buffer
papszOptions = CSLSetNameValue(papszOptions, "DATAPOINTER", szTmp);
hMemDS->AddBand(GDT_Float32, papszOptions);
vsi_l_offset binDataLength;
int bUnlinkAndSeize = FALSE;
GByte *binData = VSIGetMemFileBuffer("/vsimem/geotiffnameout", &binDataLength, bUnlinkAndSeize); // Here binData is 0x0 and binDataLength is a very large number. Why?
I just want to store the processed image data in memory in GTiff format!
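The reason it fails: the MEM driver ignores the "/vsimem/geotiffnameout" filename entirely and never writes anything to the virtual file system, so VSIGetMemFileBuffer finds no such file and returns NULL, and binDataLength is just your uninitialized stack variable. One way to get real GTiff bytes in memory is to build the MEM dataset first and then CreateCopy it through the GTiff driver into /vsimem/. A minimal sketch of that approach, reusing dataBuff, NCOLS and NROWS from the snippet above:

GDALAllRegister();

// 1) Wrap the existing pixel buffer in an in-memory (MEM) dataset.
GDALDriver *hMemDriver = GetGDALDriverManager()->GetDriverByName("MEM");
GDALDataset *hMemDS = hMemDriver->Create("", NCOLS, NROWS, 0, GDT_Float32, NULL);
char szTmp[64];
memset(szTmp, 0, sizeof(szTmp));
CPLPrintPointer(szTmp, dataBuff, sizeof(szTmp));
char **papszOptions = CSLSetNameValue(NULL, "DATAPOINTER", szTmp);
hMemDS->AddBand(GDT_Float32, papszOptions);
CSLDestroy(papszOptions);

// 2) Serialize it as a real GeoTIFF into the /vsimem/ virtual file system.
GDALDriver *hTiffDriver = GetGDALDriverManager()->GetDriverByName("GTiff");
GDALDataset *hTiffDS = hTiffDriver->CreateCopy("/vsimem/geotiffnameout", hMemDS, FALSE, NULL, NULL, NULL);
GDALClose(hTiffDS); // closing flushes all bytes into the virtual file
GDALClose(hMemDS);

// 3) Now the virtual file exists and its buffer can be fetched.
vsi_l_offset binDataLength = 0;
GByte *binData = VSIGetMemFileBuffer("/vsimem/geotiffnameout", &binDataLength, TRUE /* take ownership */);
// ... pass binData/binDataLength to the WMS publisher, then CPLFree(binData)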

Related

How to write the avc1 atom with libavcodec for MP4 file using H264 codec

I am trying to create an MP4 file using libavcodec. I am using a Raspberry Pi, which has a built-in hardware H264 encoder. It outputs Annex B H264 frames, and I am trying to find the proper way to save these frames into an MP4 container.
My first attempt simply wrote the MP4 header without building the extradata. The Raspberry Pi transmits the SPS and PPS info as its first frame, followed by an IDR and then the remaining H264 frames. I started with avformat_write_header and then repackaged the succeeding frames in an AVPacket and used
av_write_frame(outputFormatCtx, &pkt);
This works, but mplayer tries to decode the first frame (the one containing the SPS and PPS info) and fails on it. Succeeding frames are decodable, however, and the video plays fine from that point on.
I wanted to construct a proper MP4 file, so I wanted the SPS and PPS information to go in the MP4 header. I read that it should be in the avc1 atom and that I needed to build the extradata and somehow link it to the outputFormatCtx.
This is my effort so far, after parsing the SPS and PPS from the returned encoder buffers. (I removed the leading 0x00000001 NAL delimiters before memcpying to sps and pps.)
if ((sps) && (pps)) {
    // length of extradata is 6 bytes + 2 bytes for spslen + sps + 1 byte number of pps + 2 bytes for ppslen + pps
    uint32_t extradata_len = 8 + spslen + 1 + 2 + ppslen;
    outputStream->codecpar->extradata = (uint8_t*)av_mallocz(extradata_len);
    outputStream->codecpar->extradata_size = extradata_len;
    // start writing avcC extradata
    outputStream->codecpar->extradata[0] = 0x01;     // version
    outputStream->codecpar->extradata[1] = sps[1];   // profile
    outputStream->codecpar->extradata[2] = sps[2];   // compatibility
    outputStream->codecpar->extradata[3] = sps[3];   // level
    outputStream->codecpar->extradata[4] = 0xFC | 3; // reserved (6 bits), NALU length size - 1 (2 bits), which is 3
    outputStream->codecpar->extradata[5] = 0xE0 | 1; // reserved (3 bits), number of SPS (5 bits), which is 1
    // write sps length
    memcpy(&outputStream->codecpar->extradata[6], &spslen, 2);
    // check to see if it was written correctly
    uint16_t *cspslen = (uint16_t *)&outputStream->codecpar->extradata[6];
    fprintf(stderr, "SPS length Wrote %d and read %d\n", spslen, *cspslen);
    // write the actual sps
    int i = 0;
    for (i = 0; i < spslen; i++) {
        outputStream->codecpar->extradata[8 + i] = sps[i];
    }
    for (size_t i = 0; i != outputStream->codecpar->extradata_size; ++i)
        fprintf(stderr, "\\%02x", (unsigned char)outputStream->codecpar->extradata[i]);
    fprintf(stderr, "\n");
    // number of pps
    outputStream->codecpar->extradata[8 + spslen] = 0x01;
    // size of pps
    memcpy(&outputStream->codecpar->extradata[8 + spslen + 1], &ppslen, 2);
    for (size_t i = 0; i != outputStream->codecpar->extradata_size; ++i)
        fprintf(stderr, "\\%02x", (unsigned char)outputStream->codecpar->extradata[i]);
    fprintf(stderr, "\n");
    // check to see if it was written correctly
    uint16_t *cppslen = (uint16_t *)&outputStream->codecpar->extradata[8 + spslen + 1];
    fprintf(stderr, "PPS length Wrote %d and read %d\n", ppslen, *cppslen);
    // write the actual PPS
    for (i = 0; i < ppslen; i++) {
        outputStream->codecpar->extradata[8 + spslen + 1 + 2 + i] = pps[i];
    }
    // output the extradata to check
    for (size_t i = 0; i != outputStream->codecpar->extradata_size; ++i)
        fprintf(stderr, "\\%02x", (unsigned char)outputStream->codecpar->extradata[i]);
    fprintf(stderr, "\n");
    // access the outputFormatCtx internal AVCodecContext and copy the codecpar to it
    AVCodecContext *avctx = outputFormatCtx->streams[0]->codec;
    fprintf(stderr, "Extradata size output stream sps pps %d\n", outputStream->codecpar->extradata_size);
    if (avcodec_parameters_to_context(avctx, outputStream->codecpar) < 0) {
        fprintf(stderr, "Error avcodec_parameters_to_context");
    }
    // check to see if extradata was actually transferred to the outputFormatCtx internal AVCodecContext
    fprintf(stderr, "Extradata size after sps pps %d\n", avctx->extradata_size);
    // write the MP4 header
    if (avformat_write_header(outputFormatCtx, NULL) < 0) {
        fprintf(stderr, "Error avformat_write_header");
        ret = 1;
    } else {
        extradata_written = true;
        fprintf(stderr, "EXTRADATA written\n");
    }
}
The resulting video file does not play. The extradata is actually stored in the tail section of the MP4 file instead of in the avc1 atom in the MP4 header, so it is being written, but likely by av_write_trailer.
I will post the PPS and SPS info here and the final extradata byte string just in case the error was in forming the extradata.
Here is the buffer from the hardware encoder with sps and pps preceded by the nal delimiter
\00\00\00\01\27\64\00\28\ac\2b\40\a0\cd\00\f1\22\6a\00\00\00\01\28\ee\04\f2\c0
Here is the 13 byte sps:
27640028ac2b40a0cd00f1226a
Here is the 5 byte pps:
28ee04f2c0
Here is the final extradata byte string which is 29 bytes long. I hope I wrote the PPS and SPS size correctly.
\01\64\00\28\ff\e1\0d\00\27\64\00\28\ac\2b\40\a0\cd\00\f1\22\6a\01\05\00\28\ee\04\f2\c0
I did the same conversion from the 0x00000001 NAL delimiter to a 4-byte NAL size for the succeeding frames from the encoder, saved them to the file sequentially, and then wrote the trailer.
Any idea where the mistake is? How can I write the extradata to its proper location in the MP4 header?
Thanks,
Chris
Well, I found the problem. The Raspberry Pi is little-endian, so I assumed I should write the SPS length, the PPS length, and each NALU size in little-endian. They need to be written in big-endian. After I made the change, the avcC atom showed up in mp4info and mplayer can now play back the video.
It's not necessary to access the outputFormatCtx's internal AVCodecContext and modify it.
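For reference, a sketch of the big-endian fix against the snippet above (nalsize here is a stand-in for the length of whatever NALU you are prefixing; it is not in the original code):

// Write the 2-byte SPS length in big-endian order instead of memcpying
// the CPU's little-endian in-memory representation.
outputStream->codecpar->extradata[6] = (spslen >> 8) & 0xFF;
outputStream->codecpar->extradata[7] = spslen & 0xFF;
// Same for the 2-byte PPS length.
outputStream->codecpar->extradata[8 + spslen + 1] = (ppslen >> 8) & 0xFF;
outputStream->codecpar->extradata[8 + spslen + 2] = ppslen & 0xFF;
// Each NALU written into a packet likewise gets a 4-byte big-endian size prefix.
uint8_t *p = pkt.data; // start of the 4-byte size field for one NALU
p[0] = (nalsize >> 24) & 0xFF;
p[1] = (nalsize >> 16) & 0xFF;
p[2] = (nalsize >>  8) & 0xFF;
p[3] =  nalsize        & 0xFF;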
This post was very helpful:
Possible Locations for Sequence/Picture Parameter Set(s) for H.264 Stream
Thanks,
Chris

Unable to copy from buffer to image

I have an image of dimensions 4096*4096 (so 67108864 bytes, since there are 4 channels) that I want to copy from a staging buffer to a device local image. The buffer already has the data stored and I have set up the image barriers properly, so now I want to perform the copy operation... Except it doesn't work. The validation layers give me this error message when I call vkCmdCopyBufferToImage() -
IMAGE(ERROR): object: 0x0 type: 6 location: 3903 msgCode: 417333590: vkCmdCopyBufferToImage(): pRegion[0] exceeds buffer size of 67108864 bytes. The spec valid usage text states 'The buffer region specified by each element of pRegions mustbe a region that is contained within srcBuffer' (https://www.khronos.org/registry/vulkan/specs/1.0/html/vkspec.html#VUID-vkCmdCopyBufferToImage-pRegions-00171).
I can't find anything wrong with the values that I gave it though. The VkBufferImageCopy struct I passed to it looks like this-
VkBufferImageCopy bufImgCopy;
bufImgCopy.bufferOffset = 0;
bufImgCopy.bufferImageHeight = 0;
bufImgCopy.bufferRowLength = 0;
bufImgCopy.imageExtent = modelTexture.imgExtents; // 4096 * 4096 * 1
bufImgCopy.imageOffset = {0, 0, 0};
bufImgCopy.imageSubresource.aspectMask = modelTexture.subResource.aspectMask; // Colour attachment
bufImgCopy.imageSubresource.baseArrayLayer = modelTexture.subResource.baseArrayLayer; // 0
bufImgCopy.imageSubresource.layerCount = VK_REMAINING_ARRAY_LAYERS;
bufImgCopy.imageSubresource.mipLevel = 0;
I can't figure out why the api thinks the struct is specifying a size greater than the buffer size. The format of the image is VK_FORMAT_B8G8R8A8_UNORM.
EDIT
Here's the code that sets up the staging buffer-
stageBuf.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;
stageBuf.shareMode = VK_SHARING_MODE_EXCLUSIVE;
stageBuf.bufSize = static_cast<VkDeviceSize>(verts.size() * sizeof(vert) + indices.size() * sizeof(u32)) > modelImage.size
    ? static_cast<VkDeviceSize>(verts.size() * sizeof(vert) + indices.size() * sizeof(u32))
    : modelImage.size;

// filled from the previous struct.
VkBufferCreateInfo info;
info.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
info.pNext = nullptr;
info.flags = 0;
info.queueFamilyIndexCount = bufInfo.qFCount;
info.pQueueFamilyIndices = bufInfo.qFIndices;
info.usage = bufInfo.usage;
info.sharingMode = bufInfo.shareMode;
info.size = bufInfo.bufSize;
if (vkCreateBuffer(device, &info, nullptr, &(bufInfo.buf)) != VK_SUCCESS)
{ //...

VkMemoryRequirements memReqs;
vkGetBufferMemoryRequirements(device, buf, &memReqs);
for (u32 type = 0; type < memProps.memoryTypeCount; ++type)
    if ((memReqs.memoryTypeBits & (1 << type)) &&
        ((memProps.memoryTypes[type].propertyFlags & memFlags) == memFlags)) // The usual things to set buffers up.
    {
        VkMemoryAllocateInfo info;
        info.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
        info.pNext = nullptr;
        info.allocationSize = memReqs.size;
        info.memoryTypeIndex = type;
        if (vkAllocateMemory(device, &info, nullptr, &mem.memory) == VK_SUCCESS)
        { //....
            // All this works perfectly except for the texture copy.
            if (vkBindBufferMemory(device, buf, mem.memory, mem.offset) != VK_SUCCESS)
            { //...
I'm using this staging buffer for both the vertex and index buffers (which I have taken as a single buffer with offsets) as well as the image which I'm trying to copy to. The memory allocated is according to the size of the largest data structure.
As noted in the comments, using VK_REMAINING_ARRAY_LAYERS is invalid for the layerCount of VkImageSubresourceLayers, so you have to explicitly set layerCount to the actual number of layers to copy.
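VK_REMAINING_ARRAY_LAYERS is the sentinel value ~0U, so the computed copy region covers roughly four billion layers, which is why the validation layer reports a region larger than the 67108864-byte buffer. Assuming the image above is a plain single-layer 2D image, the fix is one line:

bufImgCopy.imageSubresource.layerCount = 1;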

In Lua, how should I read a file into an array of bytes?

To read a file into an array of bytes a, I have been using the following code:
file = io.open(fileName, "rb")
str = file:read("*a")
a = {str:byte(1, #str)}
Although this works for smaller files, str:byte fails for a 1MB file, giving stack overflow (string slice too long).
Is there an alternative method which will successfully read these larger files?
local fileName = 'C:\\Program Files\\Microsoft Office\\Office12\\excel.exe'
local file = assert(io.open(fileName, 'rb'))
local t = {}
repeat
    local str = file:read(4*1024)
    for c in (str or ''):gmatch'.' do
        t[#t+1] = c:byte()
    end
until not str
file:close()
print(#t) --> 18330984
When using LuaJIT, another approach is to read a chunk of bytes and convert it to a C array. If you read the whole file in one shot, the buffer should be allocated large enough to store it (filesize bytes). Alternatively, it is possible to read the file in chunks and reuse the buffer for each chunk.
The advantage of using a C buffer is that it is more memory-efficient than converting the bytes to a Lua string or a Lua table. The disadvantage is that the FFI is only supported in LuaJIT.
local ffi = require("ffi")

-- Helper function to calculate file size.
local function filesize (fd)
    local current = fd:seek()
    local size = fd:seek("end")
    fd:seek("set", current)
    return size
end

local filename = "example.bin"

-- Open file in binary mode.
local fd, err = io.open(filename, "rb")
if not fd then error(err) end

-- Get size of file and allocate a buffer for the whole file.
local size = filesize(fd)
local buffer = ffi.new("uint8_t[?]", size)

-- Read whole file and store it as a C buffer.
ffi.copy(buffer, fd:read(size), size)
fd:close()

-- Iterate through buffer to print out contents.
for i = 0, size - 1 do
    io.write(buffer[i], " ")
end
This reads the file file.txt in blocks of one byte (block = 1) and stores each byte in the table bytes:
local bytes = {}
file = assert(io.open("file.txt", "rb"))
block = 1 -- blocks of 1 byte
while true do
    local byte = file:read(block)
    if byte == nil then
        break
    else
        bytes[#bytes+1] = string.byte(byte)
    end
end
file:close()

Getting raw sample data of m4a file to draw waveform

I'm using AudioToolbox to access m4a audio files with the following code:
UInt32 packetsToRead = 1; // does reading more packets at a time make a difference?
UInt32 ioNumberOfBytes, ioNumberOfPackets;
void *buffer = malloc(maxPacketSize * packetsToRead);
for (UInt64 packetIndex = 0; packetIndex < packetCount; packetIndex += ioNumberOfPackets)
{
    ioNumberOfPackets = packetsToRead;
    ioNumberOfBytes = maxPacketSize * ioNumberOfPackets;
    AudioFileReadPacketData(audioFile, NO, &ioNumberOfBytes, NULL, packetIndex, &ioNumberOfPackets, buffer);
    for (UInt32 batchPacketIndex = 0; batchPacketIndex < ioNumberOfPackets; batchPacketIndex++)
    {
        // What to do here to get the amplitude value? How to get the sample value?
    }
}
My audio format is:
AppleM4A, 8000 Hz, 16 Bit, 4096 frames per packet
The solution was to use Extended Audio File Services. You just have to set up a linear PCM client format, and the conversion from the compressed file format happens for you. I found the right way to do it over at Audio Processing: Playing with volume level.
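A minimal sketch of that approach (error handling and the CFURLRef fileURL setup are omitted; the 8000 Hz mono 16-bit client format mirrors the file format quoted above):

#include <AudioToolbox/ExtendedAudioFile.h>

ExtAudioFileRef extFile;
ExtAudioFileOpenURL(fileURL, &extFile);

// Ask for decoded, packed, signed 16-bit PCM regardless of the file's codec.
AudioStreamBasicDescription clientFormat = {0};
clientFormat.mSampleRate       = 8000;
clientFormat.mFormatID         = kAudioFormatLinearPCM;
clientFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
clientFormat.mChannelsPerFrame = 1;
clientFormat.mBitsPerChannel   = 16;
clientFormat.mBytesPerFrame    = 2;
clientFormat.mBytesPerPacket   = 2;
clientFormat.mFramesPerPacket  = 1;
ExtAudioFileSetProperty(extFile, kExtAudioFileProperty_ClientDataFormat,
                        sizeof(clientFormat), &clientFormat);

// Read decoded samples; each SInt16 is one amplitude value for the waveform.
// Call ExtAudioFileRead in a loop until frameCount comes back 0.
SInt16 samples[4096];
AudioBufferList bufList;
bufList.mNumberBuffers = 1;
bufList.mBuffers[0].mNumberChannels = 1;
bufList.mBuffers[0].mDataByteSize   = sizeof(samples);
bufList.mBuffers[0].mData           = samples;
UInt32 frameCount = sizeof(samples) / sizeof(SInt16);
ExtAudioFileRead(extFile, &frameCount, &bufList); // frameCount now holds frames actually read
ExtAudioFileDispose(extFile);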
To get waveform data, you may first need to convert your compressed audio file into raw PCM samples, such as those found inside a WAV file or another non-compressed audio format. Try AVAssetReader, et al.

How to define 'end' in Objective-C

OSStatus SetupBuffers(BG_FileInfo *inFileInfo)
{
    int numBuffersToQueue = kNumberBuffers;
    UInt32 maxPacketSize;
    UInt32 size = sizeof(maxPacketSize);
    // we need to calculate how many packets we read at a time, and how big a buffer we need
    // we base this on the size of the packets in the file and an approximate duration for each buffer
    // first check to see what the max size of a packet is - if it is bigger
    // than our allocation default size, that needs to become larger
    OSStatus result = AudioFileGetProperty(inFileInfo->mAFID, kAudioFilePropertyPacketSizeUpperBound, &size, &maxPacketSize);
    AssertNoError("Error getting packet upper bound size", end);
    bool isFormatVBR = (inFileInfo->mFileFormat.mBytesPerPacket == 0 || inFileInfo->mFileFormat.mFramesPerPacket == 0);
    CalculateBytesForTime(inFileInfo->mFileFormat, maxPacketSize, 0.5/*seconds*/, &mBufferByteSize, &mNumPacketsToRead);
    // if the file is smaller than the capacity of all the buffer queues, always load it at once
    if ((mBufferByteSize * numBuffersToQueue) > inFileInfo->mFileDataSize)
        inFileInfo->mLoadAtOnce = true;
    if (inFileInfo->mLoadAtOnce)
    {
        UInt64 theFileNumPackets;
        size = sizeof(UInt64);
        result = AudioFileGetProperty(inFileInfo->mAFID, kAudioFilePropertyAudioDataPacketCount, &size, &theFileNumPackets);
        AssertNoError("Error getting packet count for file", end); // <<< this is where Xcode says 'end' is undefined
        mNumPacketsToRead = (UInt32)theFileNumPackets;
        mBufferByteSize = inFileInfo->mFileDataSize;
        numBuffersToQueue = 1;
    }
Here is the exact error:
label 'end' used but not defined
I get that error twice.
If you look at the SoundEngine.cpp source that the snippet comes from, you'll see it's defined on the very next line:
end:
return result;
It's a label that execution jumps to when there's an error.
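In SoundEngine.cpp, AssertNoError is a macro that checks result and jumps to the named label on failure. This is a sketch of the pattern rather than the verbatim definition:

#define AssertNoError(inMessage, inHandler)          \
    if (result != noErr) {                           \
        printf("%s: %d\n", inMessage, (int)result);  \
        goto inHandler;                              \
    }

So AssertNoError("Error getting packet count for file", end); expands to a goto end; on error, which is why the compiler complains when no end: label exists in the function.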
Hmm, the only place I can find AssertNoError is in Technical Note TN2113, and there it has a completely different signature: AssertNoError(theError, "couldn't unregister the ABL"); Where is AssertNoError defined?
User @Jeremy P mentions this document as well.