Turning a C++ Application into VR [duplicate] - game-engine

I have a working C++ application using OpenGL and CUDA, and I want to convert it to a VR application. From searching the internet it seems that the standard way to develop VR apps is to use a game engine (Unity or Unreal). Is there any way to avoid using one of these and just convert my existing application to VR?

The usual approach is that at the start of each frame you poll the VR API for each eye's projection and "camera" translation matrices, use those to render into textures via an FBO, and at the end pass the IDs of those textures back to the API. With OpenVR:
Initialize the HMD, the compositor, and the per-eye render targets:
vr::EVRInitError err_hmd = vr::VRInitError_None;
m_hmd = vr::VR_Init(&err_hmd, vr::VRApplication_Scene);
if( err_hmd != vr::VRInitError_None ){
    m_hmd = nullptr;
}
if( m_hmd ){
    // Ask the runtime for the per-eye render target size and the
    // per-eye projection and eye-to-head matrices.
    m_hmd->GetRecommendedRenderTargetSize(&m_hmd_width, &m_hmd_height);
    mat44_from_hmd44(m_prj_l, m_hmd->GetProjectionMatrix(vr::Eye_Left, 0.01f, 10.f) );
    mat44_from_hmd44(m_prj_r, m_hmd->GetProjectionMatrix(vr::Eye_Right, 0.01f, 10.f) );
    mat44_from_hmd34(m_eye_l, m_hmd->GetEyeToHeadTransform(vr::Eye_Left) );
    mat44_from_hmd34(m_eye_r, m_hmd->GetEyeToHeadTransform(vr::Eye_Right) );
    // Without a compositor there is nothing to submit frames to.
    if( !vr::VRCompositor() ){
        vr::VR_Shutdown();
        m_hmd = nullptr;
    }
    m_timer_vr = startTimer(1000/50);
}
if( m_hmd_width && m_hmd_height ){
    qDebug() << "resize to" << m_hmd_width << m_hmd_height;
    eye_target_textures.create(m_hmd_width, m_hmd_height);
}
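For reference, here is a hedged sketch of what a helper like eye_target_textures.create(w, h) might build per eye in plain OpenGL, a color texture plus a depth renderbuffer attached to an FBO. All names are illustrative, not from the code above:
// Hedged sketch only: one render target per eye, at the recommended size.
GLuint fbo = 0, color_tex = 0, depth_rb = 0;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color_tex, 0);
glGenRenderbuffers(1, &depth_rb);
glBindRenderbuffer(GL_RENDERBUFFER, depth_rb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, w, h);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth_rb);
assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);
glBindFramebuffer(GL_FRAMEBUFFER, 0);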
Update the HMD pose:
vr::VREvent_t vrev;
while( m_hmd->PollNextEvent(&vrev, sizeof(vrev)) );
// Process SteamVR action state.
// UpdateActionState is called each frame to update the state of the actions
// themselves. The application controls which action sets are active with the
// provided array of VRActiveActionSet_t structs.
vr::VRActiveActionSet_t actionSet = { 0 };
vr::VRInput()->UpdateActionState( &actionSet, sizeof(actionSet), 1 );
vr::TrackedDevicePose_t tdp[ vr::k_unMaxTrackedDeviceCount ];
vr::VRCompositor()->WaitGetPoses(tdp, vr::k_unMaxTrackedDeviceCount, NULL, 0);
mat4x4_translate(m_pose, 0, 0, -1); // default pose, overwritten below when the HMD pose is valid
for( uint32_t i = 0; i < vr::k_unMaxTrackedDeviceCount; ++i ){
    if( !tdp[i].bPoseIsValid ){ continue; }
    switch( m_hmd->GetTrackedDeviceClass(i) ){
    case vr::TrackedDeviceClass_HMD:
        mat44_from_hmd34(m_pose, tdp[i].mDeviceToAbsoluteTracking );
        break;
    }
}
Then, when rendering, you use m_pose and the m_eye… matrices.
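To close the loop, after drawing each eye into its texture you hand the texture IDs to the compositor. A minimal sketch (render_eye(), tex_l and tex_r are hypothetical names, not from the code above):
// Render each eye with its projection and eye-to-head offset, then submit
// the resulting OpenGL textures to the OpenVR compositor.
render_eye(m_prj_l, m_eye_l, m_pose, tex_l); // hypothetical: draws into tex_l via the FBO
render_eye(m_prj_r, m_eye_r, m_pose, tex_r);
vr::Texture_t left  = { (void*)(uintptr_t)tex_l, vr::TextureType_OpenGL, vr::ColorSpace_Gamma };
vr::Texture_t right = { (void*)(uintptr_t)tex_r, vr::TextureType_OpenGL, vr::ColorSpace_Gamma };
vr::VRCompositor()->Submit(vr::Eye_Left, &left);
vr::VRCompositor()->Submit(vr::Eye_Right, &right);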

Related

Problems connecting to the input pins of GMFBridge Sink Filter

I am experiencing a strange problem when trying to use the GMFBridge filter with the output of an Euresys UxH264 card.
I am trying to integrate this card into our solution, which relies on GMFBridge for continuous capture to multiple files, performing muxing and file splitting without having to stop the capture graph.
This card captures video and audio from analog inputs. It provides a DirectShow filter exposing both a raw stream of the video input and a hardware-encoded H.264 stream. The audio stream is provided as an uncompressed stream only.
When I attempt to directly connect any of the output pins of the Euresys source filters to the input pins of the GMFBridge Sink, they get rejected, with the code VFW_E_NO_ALLOCATOR. (In the past I have successfully connected both H.264 and raw audio streams to the bridge).
Grasping at straws, I plugged in a pair of SampleGrabber filters between the Euresys card filters and the bridge sink filter, and -just like that- the connections between sample grabbers and sink were accepted.
However, I am not getting any packets on the other side of the bridge (the muxing graph). I inspected the running capture graph with GraphStudioNext, and somehow the sample grabbers appear detached from my graph, even though I got successful confirmations when I connected them to the source filter!
Here's the source code creating the graph.
void EuresysSourceBox::BuildGraph() {
    HRESULT hRes;
    CComPtr<IGraphBuilder> pGraph;
    COM_CALL(pGraph.CoCreateInstance(CLSID_FilterGraph));
#ifdef REGISTER_IN_ROT
    _rotEntry1 = FilterTools::RegisterGraphInROT(IntPtr(pGraph), "euresys graph");
#endif
    // [*Video Source*]
    String^ filterName = "Ux H.264 Visual Source";
    Guid category = _GUIDToGuid((GUID)AM_KSCATEGORY_CAPTURE);
    FilterMonikerList^ videoSourceList = FilterTools::GetFilterMonikersByName(category, filterName);
    CComPtr<IBaseFilter> pVideoSource;
    int monikerIndex = config->BoardIndex; // a filter instance will be retrieved for every installed board
    clr_scoped_ptr<CComPtr<IMoniker>>^ ppVideoSourceMoniker = videoSourceList[monikerIndex];
    COM_CALL((*ppVideoSourceMoniker->get())->BindToObject(NULL, NULL, IID_IBaseFilter, (void**)&pVideoSource));
    COM_CALL(pGraph->AddFilter(pVideoSource, L"VideoSource"));
    // [Video Source]
    //
    // [*Audio Source*]
    filterName = "Ux H.264 Audio Encoder";
    FilterMonikerList^ audioSourceList = FilterTools::GetFilterMonikersByName(category, filterName);
    CComPtr<IBaseFilter> pAudioSource;
    clr_scoped_ptr<CComPtr<IMoniker>>^ ppAudioSourceMoniker = audioSourceList[monikerIndex];
    COM_CALL((*ppAudioSourceMoniker->get())->BindToObject(NULL, NULL, IID_IBaseFilter, (void**)&pAudioSource));
    COM_CALL(pGraph->AddFilter(pAudioSource, L"AudioSource"));
    CComPtr<IPin> pVideoCompressedOutPin(FilterTools::GetPin(pVideoSource, "Encoded"));
    CComPtr<IPin> pAudioOutPin(FilterTools::GetPin(pAudioSource, "Audio"));
    CComPtr<IBaseFilter> pSampleGrabber;
    COM_CALL(pSampleGrabber.CoCreateInstance(CLSID_SampleGrabber));
    COM_CALL(pGraph->AddFilter(pSampleGrabber, L"SampleGrabber"));
    CComPtr<IPin> pSampleGrabberInPin(FilterTools::GetPin(pSampleGrabber, "Input"));
    COM_CALL(pGraph->ConnectDirect(pVideoCompressedOutPin, pSampleGrabberInPin, NULL)); // DOES NOT FAIL!!
    CComPtr<IBaseFilter> pSampleGrabber2;
    COM_CALL(pSampleGrabber2.CoCreateInstance(CLSID_SampleGrabber));
    COM_CALL(pGraph->AddFilter(pSampleGrabber2, L"SampleGrabber2"));
    CComPtr<IPin> pSampleGrabber2InPin(FilterTools::GetPin(pSampleGrabber2, "Input"));
    COM_CALL(pGraph->ConnectDirect(pAudioOutPin, pSampleGrabber2InPin, NULL)); // DOES NOT FAIL!!
    // [Video Source]---
    //                  |-->[*Bridge Sink*]
    // [Audio Source]---
    CComPtr<IPin> pSampleGrabberOutPin(FilterTools::GetPin(pSampleGrabber, "Output"));
    CComPtr<IPin> pSampleGrabber2OutPin(FilterTools::GetPin(pSampleGrabber2, "Output"));
    CreateGraphBridge(
        IntPtr(pGraph),
        IntPtr(pSampleGrabberOutPin),
        IntPtr(pSampleGrabber2OutPin)
    );
    // Root graph to parent object
    _ppCaptureGraph.reset(new CComPtr<IGraphBuilder>(pGraph));
}
COM_CALL is my HRESULT-checking macro; it raises a managed exception if the result is anything other than S_OK. So the connections between the pins succeeded, yet the graph appears disjointed while it is running.
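A minimal sketch of one possible C++/CLI implementation of such a macro (illustrative only, not the actual macro):
// Sketch: evaluate the COM call into the local hRes (declared in each
// function above) and throw a managed exception on anything but S_OK.
#define COM_CALL(expr) \
    do { \
        hRes = (expr); \
        if (hRes != S_OK) \
            throw gcnew System::Runtime::InteropServices::COMException(#expr, hRes); \
    } while (0)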
So, I have three questions:
1) What could VFW_E_NO_ALLOCATOR mean in this instance? (the source filter can be successfully connected to other components such as LAV Video decoder or ffdshow video decoder).
2) Is there a known workaround to circumvent the VFW_E_NO_ALLOCATOR problem?
3) Is it possible that a filter gets disconnected at runtime as it seems to be happening in my case?
I found a reference by Geraint Davies giving a reason as to why the GMFBridge sink filter may be rejecting the connection.
It looks as though the parser is insisting on using its own allocator -- this is common with parsers where the output samples are merely pointers into the input samples. However, the bridge cannot implement suspend mode without using its own allocator, so a copy is required.
With this information, I decided to create an ultra-simple CTransformFilter that simply accepts the allocator offered by the bridge and copies whatever comes in from the input sample to the output sample. I am not 100% sure that what I did was right, but it is working now. I could successfully plug the Euresys card into my capture infrastructure.
For reference, if anyone experiences something similar, here is the code of the filter I created:
class SampleCopyGeneratorFilter : public CTransformFilter {
protected:
    HRESULT CheckInputType(const CMediaType* mtIn) override { return S_OK; }
    HRESULT GetMediaType(int iPosition, CMediaType* pMediaType) override;
    HRESULT CheckTransform(const CMediaType *mtIn, const CMediaType *mtOut) override { return S_OK; }
    HRESULT DecideBufferSize(IMemAllocator *pAlloc, ALLOCATOR_PROPERTIES *pProp) override;
    HRESULT Transform(IMediaSample *pSource, IMediaSample *pDest) override;
public:
    SampleCopyGeneratorFilter();
};
//--------------------------------------------------------------------------------------------------------------------
SampleCopyGeneratorFilter::SampleCopyGeneratorFilter()
    : CTransformFilter(NAME("SampleCopyGeneratorFilter"), NULL, GUID_NULL)
{
}
HRESULT SampleCopyGeneratorFilter::GetMediaType(int iPosition, CMediaType* pMediaType) {
    HRESULT hRes;
    ASSERT(m_pInput->IsConnected());
    if( iPosition < 0 )
        return E_INVALIDARG;
    CComPtr<IPin> connectedTo;
    COM_CALL(m_pInput->ConnectedTo(&connectedTo));
    CComPtr<IEnumMediaTypes> pMTEnumerator;
    COM_CALL(connectedTo->EnumMediaTypes(&pMTEnumerator));
    AM_MEDIA_TYPE* pIteratedMediaType;
    for( int i = 0; i <= iPosition; i++ ) {
        if( pMTEnumerator->Next(1, &pIteratedMediaType, NULL) != S_OK )
            return VFW_S_NO_MORE_ITEMS;
        if( i == iPosition )
            *pMediaType = *pIteratedMediaType;
        DeleteMediaType(pIteratedMediaType);
    }
    return S_OK;
}
HRESULT SampleCopyGeneratorFilter::DecideBufferSize(IMemAllocator *pAlloc, ALLOCATOR_PROPERTIES *pProp) {
    HRESULT hRes;
    AM_MEDIA_TYPE mt;
    COM_CALL(m_pInput->ConnectionMediaType(&mt));
    try {
        BITMAPINFOHEADER* pBMI = HEADER(mt.pbFormat);
        pProp->cbBuffer = DIBSIZE(*pBMI); // format is compressed, uncompressed size should be enough
        if( !pProp->cbAlign )
            pProp->cbAlign = 1;
        pProp->cbPrefix = 0;
        pProp->cBuffers = 4;
        ALLOCATOR_PROPERTIES actualProperties;
        COM_CALL(pAlloc->SetProperties(pProp, &actualProperties));
        if( pProp->cbBuffer > actualProperties.cbBuffer )
            return E_FAIL;
        return S_OK;
    } finally {
        FreeMediaType(mt);
    }
}
HRESULT SampleCopyGeneratorFilter::Transform(IMediaSample *pSource, IMediaSample *pDest) {
    HRESULT hRes;
    BYTE* pBufferIn;
    BYTE* pBufferOut;
    long destSize = pDest->GetSize();
    long dataLen = pSource->GetActualDataLength();
    if( dataLen > destSize )
        return VFW_E_BUFFER_OVERFLOW;
    COM_CALL(pSource->GetPointer(&pBufferIn));
    COM_CALL(pDest->GetPointer(&pBufferOut));
    memcpy(pBufferOut, pBufferIn, dataLen);
    pDest->SetActualDataLength(dataLen);
    pDest->SetSyncPoint(pSource->IsSyncPoint() == S_OK);
    return S_OK;
}
Here is how I inserted the filter in the capture graph:
CComPtr<IPin> pAACEncoderOutPin(FilterTools::GetPin(pAACEncoder, "XForm Out"));
CComPtr<IPin> pVideoSourceCompressedOutPin(FilterTools::GetPin(pVideoSource, "Encoded"));
CComPtr<IBaseFilter> pSampleCopier;
pSampleCopier = new SampleCopyGeneratorFilter();
COM_CALL(pGraph->AddFilter(pSampleCopier, L"SampleCopier"));
CComPtr<IPin> pSampleCopierInPin(FilterTools::GetPin(pSampleCopier, "XForm In"));
COM_CALL(pGraph->ConnectDirect(pVideoSourceCompressedOutPin, pSampleCopierInPin, NULL));
CComPtr<IPin> pSampleCopierOutPin(FilterTools::GetPin(pSampleCopier, "XForm Out"));
CreateGraphBridge(
    IntPtr(pGraph),
    IntPtr(pSampleCopierOutPin),
    IntPtr(pAACEncoderOutPin)
);
Now, I still have no idea why inserting the sample grabbers instead did not work and resulted in detached graphs. I corroborated this quirk by examining the graphs with GraphEditPlus too. If anyone can offer an explanation, I would be very grateful indeed.

Saving Files from MFC Dialog

I'm trying to save a "block layout". This consists of saving the data of each block, each being a pointer to the CBlock class. I need to store their properties in a file so the player can load a layout and play with it. The problem is that I don't know the best way to save that data into a file. I need at least the BLOCKTYPE (an enum I can use to reconstruct the objects) and the X and Y positions of each block.
I tried saving my block vector with a for loop, but that didn't work at all. So far I have no idea how to save or load it.
This is how I'm trying to save my files currently, but it doesn't work.
void CCreateWindow::OnBnClickedButtonSave()
{
    // TODO: Add your control notification handler code here
    if (m_blockLayout.size() > 0)
    {
        CString m_filter = TEXT("Super Breakout Maker Files (*.sbm)|*.sbm|All Files (*.*)|*.*||");
        CFile m_saveFile;
        CFileDialog m_fileDlg(FALSE, TEXT(".sbm"), TEXT("mylayout"), OFN_OVERWRITEPROMPT, m_filter, NULL, 0, TRUE);
        //CFileDialog m_fileDlg(FALSE, TEXT(".sbm"), TEXT("mylayout"), 0, m_filter);
        if (m_fileDlg.DoModal() == IDOK)
        {
            // Note: GetFileName() returns only the file name without the path;
            // GetPathName() returns the full path the user selected.
            m_saveFile.Open(m_fileDlg.GetFileName(), CFile::modeCreate | CFile::modeWrite);
            CArchive m_saveArchive(&m_saveFile, CArchive::store);
            // Bug: this condition is never true, so the loop body never runs;
            // it should read i < m_blockLayout.size().
            for (int i = 0; i > m_blockLayout.size(); i++)
            {
                m_saveArchive << m_blockLayout[i]->GetBlockType() << m_blockLayout[i]->GetXPosition() << m_blockLayout[i]->GetYPosition();
            }
            m_saveArchive.Close();
            MessageBox(TEXT("Your layout was successfully saved!"), TEXT("Notification"), MB_ICONINFORMATION);
        }
        else
        {
            return;
        }
        m_saveFile.Close();
    }
    else
    {
        MessageBox(TEXT("You can't save an empty layout."), TEXT("Warning"), MB_ICONWARNING);
    }
}

Vulkan depth image binding error

Hi, I am trying to bind the depth image's memory, but I get the error below. I have no idea why this error is popping up.
The depth format is VK_FORMAT_D16_UNORM and the usage is VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT. I have read online that the tiling shouldn't be linear, but then I get a different error. Thanks!
The code for creating and binding the image is as below.
VkImageCreateInfo imageInfo = {};
// If the depth format is undefined, fall back to a 16-bit depth format
if (Depth.format == VK_FORMAT_UNDEFINED) {
    Depth.format = VK_FORMAT_D16_UNORM;
}
const VkFormat depthFormat = Depth.format;
VkFormatProperties props;
vkGetPhysicalDeviceFormatProperties(*deviceObj->gpu, depthFormat, &props);
if (props.linearTilingFeatures & VK_FORMAT_FEATURE_DEPTH_STENCIL_ATTACHMENT_BIT) {
    imageInfo.tiling = VK_IMAGE_TILING_LINEAR;
}
else if (props.optimalTilingFeatures & VK_FORMAT_FEATURE_DEPTH_STENCIL_ATTACHMENT_BIT) {
    imageInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
}
else {
    std::cout << "Unsupported Depth Format, try other Depth formats.\n";
    exit(-1);
}
imageInfo.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
imageInfo.pNext = NULL;
imageInfo.imageType = VK_IMAGE_TYPE_2D;
imageInfo.format = depthFormat;
imageInfo.extent.width = width;
imageInfo.extent.height = height;
imageInfo.extent.depth = 1;
imageInfo.mipLevels = 1;
imageInfo.arrayLayers = 1;
imageInfo.samples = NUM_SAMPLES;
imageInfo.queueFamilyIndexCount = 0;
imageInfo.pQueueFamilyIndices = NULL;
imageInfo.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
imageInfo.usage = VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT;
imageInfo.flags = 0;
// Use the create info to create the image object
result = vkCreateImage(deviceObj->device, &imageInfo, NULL, &Depth.image);
assert(result == VK_SUCCESS);
// Get the image memory requirements
VkMemoryRequirements memRqrmnt;
vkGetImageMemoryRequirements(deviceObj->device, Depth.image, &memRqrmnt);
VkMemoryAllocateInfo memAlloc = {};
memAlloc.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
memAlloc.pNext = NULL;
memAlloc.memoryTypeIndex = 0;
memAlloc.allocationSize = memRqrmnt.size;
// Determine the required memory type with the help of the memory properties
pass = deviceObj->memoryTypeFromProperties(memRqrmnt.memoryTypeBits, 0 /* no requirements */, &memAlloc.memoryTypeIndex);
assert(pass);
// Allocate the memory for the image object
result = vkAllocateMemory(deviceObj->device, &memAlloc, NULL, &Depth.mem);
assert(result == VK_SUCCESS);
// Bind the allocated memory
result = vkBindImageMemory(deviceObj->device, Depth.image, Depth.mem, 0);
assert(result == VK_SUCCESS);
Yes, linear tiling is often not supported for depth-usage images.
Consult the specification and the Valid Usage section of VkImageCreateInfo. The capability is queried with the vkGetPhysicalDeviceFormatProperties and vkGetPhysicalDeviceImageFormatProperties commands. Depth formats are "opaque" anyway, so there is not much reason to use linear tiling.
You already seem to be doing this in your code.
But the error informs you that you are trying to use a memory type that is not allowed for the given image. Use the vkGetImageMemoryRequirements command to query which memory types are allowed.
Possibly you have some error there (you are using 0x1, which is obviously not part of 0x84 per the message). You may want to reuse the example code from the Device Memory chapter of the specification. Provide your memoryTypeFromProperties implementation for a more specific answer.
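That example boils down to picking an index that is set in memoryTypeBits and whose propertyFlags contain what you need. A sketch using the names from the question's code (requesting device-local memory is an assumption):
// Pick a memory type that the image allows and that is device-local.
VkMemoryRequirements memRqrmnt;
vkGetImageMemoryRequirements(deviceObj->device, Depth.image, &memRqrmnt);
uint32_t typeIndex = UINT32_MAX;
const VkMemoryPropertyFlags wanted = VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
for (uint32_t i = 0; i < memoryProperties.memoryTypeCount; ++i) {
    const bool allowed = (memRqrmnt.memoryTypeBits & (1u << i)) != 0;
    const bool matches = (memoryProperties.memoryTypes[i].propertyFlags & wanted) == wanted;
    if (allowed && matches) { typeIndex = i; break; }
}
assert(typeIndex != UINT32_MAX); // no suitable memory type found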
I accidentally set the typeIndex to 1 instead of i, and it works now. In my defense, I had been coding Vulkan the whole day and my eyes were bleeding :). Thanks for the help.
bool VulkanDevice::memoryTypeFromProperties(uint32_t typeBits, VkFlags requirementsMask, uint32_t *typeIndex)
{
    // Search memtypes to find the first index with those properties
    for (uint32_t i = 0; i < 32; i++) {
        if ((typeBits & 1) == 1) {
            // Type is available; does it match the user properties?
            if ((memoryProperties.memoryTypes[i].propertyFlags & requirementsMask) == requirementsMask) {
                *typeIndex = i; // was mistakenly set to 1 :(
                return true;
            }
        }
        typeBits >>= 1;
    }
    // No memory types matched; return failure
    return false;
}

Detecting the image is very slow on the device

I am using this SURF code to detect a logo in my image. It works fine, but it is very slow. Any ideas on how I can optimize it?
- (void)findObject
{
    //NSLog(@"%@ %@", self, NSStringFromSelector(_cmd));
    width = 0;
    CvMemStorage* storage = cvCreateMemStorage(0);
    static CvScalar colors[] =
    {
        {{0,0,255}},
        {{0,128,255}},
        {{0,255,255}},
        {{0,255,0}},
        {{255,128,0}},
        {{255,255,0}},
        {{255,0,0}},
        {{255,0,255}},
        {{255,255,255}}
    };
    if( !objectToFind || !image )
    {
        NSLog(@"Missing object or image");
        return;
    }
    CvSize objSize = cvGetSize(objectToFind);
    IplImage* object_color = cvCreateImage(objSize, 8, 3);
    cvCvtColor( objectToFind, object_color, CV_GRAY2BGR );
    CvSeq *objectKeypoints = 0, *objectDescriptors = 0;
    CvSeq *imageKeypoints = 0, *imageDescriptors = 0;
    int i;
    CvSURFParams params = cvSURFParams(500, 1);
    double tt = (double)cvGetTickCount();
    NSLog(@"Finding object descriptors");
    cvExtractSURF( objectToFind, 0, &objectKeypoints, &objectDescriptors, storage, params );
    NSLog(@"Object Descriptors: %d", objectDescriptors->total);
    cvExtractSURF( image, 0, &imageKeypoints, &imageDescriptors, storage, params );
    NSLog(@"Image Descriptors: %d", imageDescriptors->total);
    tt = (double)cvGetTickCount() - tt;
    NSLog(@"Extraction time = %gms", tt/(cvGetTickFrequency()*1000.));
    CvPoint src_corners[4] = {{0,0}, {objectToFind->width,0}, {objectToFind->width, objectToFind->height}, {0, objectToFind->height}};
    CvPoint dst_corners[4];
    CvSize size = cvSize(image->width > objectToFind->width ? image->width : objectToFind->width,
                         objectToFind->height+image->height);
    output = cvCreateImage(size, 8, 1 );
    cvSetImageROI( output, cvRect( 0, 0, objectToFind->width, objectToFind->height ) );
    //cvCopy( objectToFind, output );
    cvResetImageROI( output );
    cvSetImageROI( output, cvRect( 0, objectToFind->height, output->width, output->height ) );
    cvCopy( image, output );
    cvResetImageROI( output );
    NSLog(@"Locating Planar Object");
#ifdef USE_FLANN
    NSLog(@"Using approximate nearest neighbor search");
#endif
    if( locatePlanarObject( objectKeypoints, objectDescriptors, imageKeypoints,
                            imageDescriptors, src_corners, dst_corners ))
    {
        for( i = 0; i < 4; i++ )
        {
            CvPoint r1 = dst_corners[i%4];
            CvPoint r2 = dst_corners[(i+1)%4];
            //cvLine( output, cvPoint(r1.x, r1.y+objectToFind->height ),
            //        cvPoint(r2.x, r2.y+objectToFind->height ), colors[6] );
            cvLine( output, cvPoint(r1.x, r1.y+objectToFind->height ),
                    cvPoint(r2.x, r2.y+objectToFind->height ), colors[6], 4 );
            //if(i==0)
            width = sqrt(((r1.x-r2.x)*(r1.x-r2.x))+((r1.y-r2.y)*(r1.y-r2.y)));
        }
    }
    vector<int> ptpairs;
    NSLog(@"finding Pairs");
#ifdef USE_FLANN
    flannFindPairs( objectKeypoints, objectDescriptors, imageKeypoints, imageDescriptors, ptpairs );
#else
    findPairs( objectKeypoints, objectDescriptors, imageKeypoints, imageDescriptors, ptpairs );
#endif
    /* for( i = 0; i < (int)ptpairs.size(); i += 2 )
    {
        CvSURFPoint* r1 = (CvSURFPoint*)cvGetSeqElem( objectKeypoints, ptpairs[i] );
        CvSURFPoint* r2 = (CvSURFPoint*)cvGetSeqElem( imageKeypoints, ptpairs[i+1] );
        cvLine( output, cvPointFrom32f(r1->pt),
                cvPoint(cvRound(r2->pt.x), cvRound(r2->pt.y+objectToFind->height)), colors[8] );
    } */
    float dist = 629.0/width;
    [distanceLabel setText:[NSString stringWithFormat:@"%.2f",dist]];
    NSLog(@"Converting Output");
    UIImage *convertedOutput = [OpenCVUtilities UIImageFromGRAYIplImage:output];
    NSLog(@"Opening Stuff");
    [imageView setImage:convertedOutput];
    cvReleaseImage(&object_color);
    [activityView stopAnimating];
}
In the above code image is my original image and objectToFind is the logo which I want to detect.
Please let me know if my question is not clear.
You need to use profiling to decide which part of your code is the slowest.
Since you are using Xcode, you have a built-in profiler within reach:
in the top-left corner, press and hold the "Run" button and choose "Profile".
click Profile and select the Time Profiler instrument.
after a while, press "stop" in the profiler, then select "Hide Missing Symbols", "Hide System Libraries" and "Top Functions", and deselect "Separate by Thread".
Now look up the function main; there is a hidden right arrow after it. Click on that arrow and you can see the time in percent and the call statistics in the call tree.
This is how you start.
In general, I have the following suggestions without profiling (see the sketch after this list):
Avoid creating new images and memory storages as much as you can. (You can pass images for temporary use into your function and keep them alive outside of it, so that you can reuse them later.)
scale down your image (and your logo) to rule out major parts of the image
use fewer descriptors
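For the scaling and descriptor suggestions, a sketch with the same OpenCV C API (the scale factor and hessian threshold are assumptions to tune):
// Shrink the search image before extraction, and raise the SURF hessian
// threshold so fewer (but stronger) keypoints are produced.
CvSize halfSize = cvSize(image->width / 2, image->height / 2);
IplImage* smallImg = cvCreateImage(halfSize, image->depth, image->nChannels);
cvResize(image, smallImg, CV_INTER_AREA);
CvSURFParams fastParams = cvSURFParams(1500, 0); // basic 64-dim descriptors
cvExtractSURF(smallImg, 0, &imageKeypoints, &imageDescriptors, storage, fastParams);
// Remember to scale dst_corners back up by 2 when drawing on the original.
cvReleaseImage(&smallImg);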
The two rules of thumb:
decide what to improve only after profiling, as profiling often yields surprising results.
the quicker a part already is, the less potential gain there is in improving it.

DirectShow's PushSource filters cause IMediaControl::Run to return S_FALSE

I'm messing around with the PushSource sample filter shipped with the DirectShow SDK and I'm having the following problem:
When I call IMediaControl::Run(), it returns S_FALSE which means "the graph is preparing to run, but some filters have not completed the transition to a running state". MSDN suggests to then call IMediaControl::GetState() and wait for the transition to finish.
And so, I call IMediaControl::GetState(INFINITE, ...) which is supposed to solve the problem.
However, to the contrary, it returns VFW_S_STATE_INTERMEDIATE even though I've specified an infinite waiting time.
I've tried all three variations (Bitmap, Bitmap Set and Desktop) and they all behave the same way, which initially led me to believe there is a bug in there somewhere.
However, I then tried using IFilterGraph::AddSourceFilter to do the same thing and it behaved identically, which must mean it's my rendering code that is the problem:
CoInitialize(0);
IGraphBuilder *graph = 0;
assert(S_OK == CoCreateInstance(CLSID_FilterGraph, 0, CLSCTX_INPROC_SERVER, IID_IGraphBuilder, (void**)&graph));
IBaseFilter *pushSource = 0;
graph->AddSourceFilter(L"sample.bmp", L"Source", &pushSource);
IPin *srcOut = 0;
assert(S_OK == GetPin(pushSource, PINDIR_OUTPUT, &srcOut));
graph->Render(srcOut);
IMediaControl *c = 0;
IMediaEvent *pEvent;
assert(S_OK == graph->QueryInterface(IID_IMediaControl, (void**)&c));
assert(S_OK == graph->QueryInterface(IID_IMediaEvent, (void**)&pEvent));
HRESULT hr = c->Run();
if(hr != S_OK)
{
    if(hr == S_FALSE)
    {
        OAFilterState state;
        hr = c->GetState(INFINITE, &state);
        assert(hr == S_OK);
    }
}
long code;
assert(S_OK == pEvent->WaitForCompletion(INFINITE, &code));
Does anyone know how to fix this?
IBaseFilter *pushSource = 0;
graph->AddSourceFilter(L"sample.bmp", L"Source", &pushSource);
AddSourceFilter adds a default source filter for the file; I don't think it will add your PushSource sample filter.
I would recommend adding the graph to the ROT so you can inspect it with GraphEdit.
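A sketch of the usual helper for that, modeled on the pattern from the DirectShow documentation (error handling trimmed):
// Registers the graph in the Running Object Table so GraphEdit can attach to it.
HRESULT AddToRot(IUnknown *pUnkGraph, DWORD *pdwRegister)
{
    IMoniker *pMoniker = NULL;
    IRunningObjectTable *pROT = NULL;
    HRESULT hr = GetRunningObjectTable(0, &pROT);
    if (FAILED(hr)) return hr;
    WCHAR wsz[256];
    swprintf_s(wsz, L"FilterGraph %08x pid %08x",
               (DWORD)(DWORD_PTR)pUnkGraph, GetCurrentProcessId());
    hr = CreateItemMoniker(L"!", wsz, &pMoniker);
    if (SUCCEEDED(hr))
    {
        hr = pROT->Register(ROTFLAGS_REGISTRATIONKEEPSALIVE, pUnkGraph, pMoniker, pdwRegister);
        pMoniker->Release();
    }
    pROT->Release();
    return hr;
}
Pass the graph's IGraphBuilder as pUnkGraph, and revoke the returned cookie with IRunningObjectTable::Revoke when you tear the graph down.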
And what happens if you don't call GetState()?
hr = pMediaControl->Run();
if(FAILED(hr)) {
    // handle error
}
long evCode = 0;
while (evCode == 0)
{
    pEvent->WaitForCompletion(1000, &evCode);
    // other code
}
Open GraphEditPlus, add your filter, render its pin and press Run. Then you'll see the states of each filter separately, so you'll see which filter didn't run and why.