I've been using std::unique_ptr to store some COM resources, and provided a custom deleter function. However, many of the COM functions want pointer-to-pointer. Right now, I'm using the implementation detail of _Myptr, in my compiler. Is it going to break unique_ptr to be accessing this data member directly, or should I store a gajillion temporary pointers to construct unique_ptr rvalues from?
COM objects are reference-counted by nature, so you shouldn't use anything except reference-counting smart pointers like ATL::CComPtr or _com_ptr_t, even if it seems inappropriate for your use case (I fully understand your concerns, I just think you assign too much weight to them). Both classes are designed to be used in all valid scenarios that arise when COM objects are used, including obtaining the pointer-to-pointer. Yes, that's a bit more functionality than you need, but if you don't expect any specific negative consequences you can't tolerate, you should just use those classes - they are designed exactly for this purpose.
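For illustration, here is a minimal sketch of how CComPtr covers the pointer-to-pointer case; the FileOpenDialog/IFileDialog usage is just an assumed example, and it presumes COM is already initialized:

#include <atlcomcli.h>
#include <shobjidl.h>

CComPtr<IFileDialog> dialog;
// CComPtr::operator& returns IFileDialog** (asserting the pointer is currently empty),
// so it can be handed straight to a COM out-parameter:
HRESULT hr = ::CoCreateInstance(__uuidof(FileOpenDialog), nullptr, CLSCTX_INPROC_SERVER,
                                IID_PPV_ARGS(&dialog));
if (SUCCEEDED(hr) && SUCCEEDED(dialog->Show(nullptr)))
{
    CComPtr<IShellItem> item;
    hr = dialog->GetResult(&item);   // same pattern works for any T** out-parameter
}
// Release() is called automatically when the CComPtrs go out of scope.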
I've had to tackle the same problem not too long ago, and I came up with two different solutions:
The first was a simple wrapper that encapsulated a 'writeable' pointer and could be std::moved into my smart pointer. This is just a little more convenient than using the temp pointers you are mentioning, since you cannot define the type directly at the call-site.
Therefore, I didn't stick with that. What I did instead was a retrieve helper function that takes the COM function and fills in my smart pointer (doing all the temporary-pointer stuff internally). This trivially works with free functions that have only a single T** parameter. If you want to use it on something more complex, you can just wrap the call via std::bind and leave only the pointer-to-be-returned free.
I know that this is not directly what you're asking, but I think it's a neat solution to the problem you're having.
As a side note, I'd prefer boost's intrusive_ptr instead of std::unique_ptr, but that's a matter of taste, as always.
Edit: Here's some sample code that's transferred from my version using boost::intrusive_ptr (so it might not work out-of-the box with unique_ptr)
template <class T, class PtrType, class PtrDel>
HRESULT retrieve(T func, std::unique_ptr<PtrType, PtrDel>& ptr)
{
    PtrType* raw_ptr = nullptr;
    HRESULT result = func(&raw_ptr);
    ptr.reset(raw_ptr);
    return result;
}
For example, it can be used like this:
std::unique_ptr<IFileDialog, ComDeleter> FileDialog;
/*...*/
using std::bind;
using namespace std::placeholders;
std::unique_ptr<IShellItem, ComDeleter> ShellItem;
HRESULT status = retrieve(bind(&IFileDialog::GetResult, FileDialog.get(), _1), ShellItem);
For bonus points, you can even let retrieve return the unique_ptr instead of taking it by reference. The functor that bind generates should have signature typedefs to derive the pointer type. You can then throw an exception if you get a bad HRESULT.
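For instance, a returning variant might look roughly like this; the pointer and deleter types are passed explicitly here rather than derived from the bind functor, and retrieve_or_throw/ComError are hypothetical names (ComDeleter is assumed to be default-constructible):

template <class PtrType, class PtrDel, class Func>
std::unique_ptr<PtrType, PtrDel> retrieve_or_throw(Func func)
{
    PtrType* raw_ptr = nullptr;
    HRESULT result = func(&raw_ptr);
    // Take ownership first, so the object is released even on the failure path.
    std::unique_ptr<PtrType, PtrDel> ptr(raw_ptr);
    if (FAILED(result))
        throw ComError(result);   // ComError: hypothetical exception type
    return ptr;
}

// Usage:
auto ShellItem = retrieve_or_throw<IShellItem, ComDeleter>(
    bind(&IFileDialog::GetResult, FileDialog.get(), _1));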
C++0x smart pointers have a portable way to get at the raw pointer they contain, .get(), or to release it entirely with .release(). You could also always use &(*ptr), but that is less idiomatic.
If you want to use smart pointers to manage the lifetime of an object but still need raw pointers to use a library which doesn't support smart pointers (including the standard C library), you can use those functions to most conveniently get at the raw pointers.
Remember, you still need to keep the smart pointer around for the duration you want the object to live (so be aware of its lifetime).
Something like:
call_com_function( my_uniq_ptr.get() );  // will work fine
return my_localscope_uniq_ptr.get();     // will not - the object is destroyed at end of scope
return my_member_uniq_ptr.get();         // might, if *this will be around for the duration, etc.
Note: this is just a general answer to your question. How to best use COM is a separate issue and sharptooth may very well be correct.
Use a helper function like this.
template< class T >
T*& getPointerRef ( std::unique_ptr<T> & ptr )
{
    // Relies on the MSVC-specific _Mybase/_Myptr implementation details,
    // so this is not portable and may break with another standard library version.
    struct Twin : public std::unique_ptr<T>::_Mybase {};
    Twin * twin = (Twin*)( &ptr );
    return twin->_Myptr;
}
Check the implementation with a quick test:
int wmain ( int argc, wchar_t* argv[] )
{
    std::unique_ptr<char> charPtr ( new char );
    delete getPointerRef(charPtr);
    getPointerRef(charPtr) = nullptr;
    return charPtr.get() != nullptr;
}
winrt::hstring is convertible to std::basic_string_view which comes in handy quite often. However, I am unable to do the same for IVectorView.
Looking at the interface of IVector, I imagine you would have to convert it back to the underlying implementation type, so I tried:
using impl_type = winrt::impl::vector_impl<float, std::vector<float>, winrt::impl::single_threaded_collection_base>;
winrt::Windows::Foundation::Collections::IVectorView<float> vector_view = GetIVectorView();
auto& impl = *winrt::get_self<impl_type>(vector_view);
auto& container = impl.get_container();
which compiles, but container.size() is 0, which is incorrect.
Edit:
vector_view was the result of the TensorFloat.GetAsVectorView method, so I can solve my problem by using the TensorFloat.CreateReference method to get an IMemoryBufferReference instead of an IVectorView.
However, I'd still like to know whether an IVectorView can be converted to a std::span, and if not, why this is not allowed.
The IVector and IVectorView interfaces are specifically designed not to expose the underlying contiguous memory, probably to support cases where there is no underlying contiguous memory, or where the implementation language doesn't expose it as such (JavaScript?).
You probably could get back the implementation type when you know C++/WinRT provides the implementation; however, in my case there is no possible way of knowing the implementation type. In any case, it's inadvisable to do this.
In my case it would have been better if TensorFloat.GetAsVectorView did not exist, so that I would have found TensorFloat.CreateReference instead.
Also, it would be nice if C++/WinRT made its collections range-v3 compatible. But until then, the most advisable thing to do is just copy to a std::vector.
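A minimal sketch of that copy, assuming the view came from something like TensorFloat.GetAsVectorView:

#include <winrt/Windows.Foundation.Collections.h>
#include <vector>

std::vector<float> to_vector(winrt::Windows::Foundation::Collections::IVectorView<float> const& view)
{
    std::vector<float> result(view.Size());
    // GetMany copies up to result.size() elements, starting at index 0,
    // into the buffer that winrt::array_view wraps around the vector.
    view.GetMany(0, result);
    return result;
}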
I grew up in the days when passing around structures was bad mojo because they are often large, so pointers were always the way to go. Now that C++11 has quite good RVO (right value optimization), I'm wondering if code like the following will be efficient.
As you can see, my class has a bunch of vector structures (not pointers to them). The constructor accepts value structures and stores them away.
My -hope- is that the compiler will use move semantics so that there really is no copying of data going on; the constructor will (when possible) just assume ownership of the values passed in.
Does anyone know if this is true, and happens automagically, or do I need a move constructor with the && syntax and so on?
// ParticleVertex
//
// Class that represents the particle vertices
class ParticleVertex : public Vertex
{
public:
    D3DXVECTOR4 _vertexPosition;
    D3DXVECTOR2 _vertexTextureCoordinate;
    D3DXVECTOR3 _vertexDirection;
    D3DXVECTOR3 _vertexColorMultipler;

    ParticleVertex(D3DXVECTOR4 vertexPosition,
                   D3DXVECTOR2 vertexTextureCoordinate,
                   D3DXVECTOR3 vertexDirection,
                   D3DXVECTOR3 vertexColorMultipler)
    {
        _vertexPosition = vertexPosition;
        _vertexTextureCoordinate = vertexTextureCoordinate;
        _vertexDirection = vertexDirection;
        _vertexColorMultipler = vertexColorMultipler;
    }

    virtual const D3DVERTEXELEMENT9 * GetVertexDeclaration() const
    {
        return particleVertexDeclarations;
    }
};
Yes, indeed you should trust the compiler to optimally "move" the structures:
Want Speed? Pass By Value
Guideline: Don’t copy your function arguments. Instead, pass them by value and let the compiler do the copying
In this case, you'd move the arguments into the constructor call:
ParticleVertex myPV(std::move(pos),
std::move(textureCoordinate),
std::move(direction),
std::move(colorMultipler));
In many contexts, the std::move will be implicit, e.g.
D3DXVECTOR4 getFooPosition() {
    D3DXVECTOR4 result;
    // bla
    return result; // NRVO, or an implicit move if NRVO doesn't apply
}

ParticleVertex myPV(getFooPosition(),   // the returned temporary is an rvalue, no explicit std::move needed
                    /* ...the remaining arguments... */);
RVO means Return Value Optimization, not "right value optimization".
RVO is an optimization performed by the compiler when a function returns by value and it's clear that the code returns a temporary object created in the body, so the copy can be avoided: the function constructs the returned object directly in the caller's storage.
What C++11 introduces is Move Semantics. Move semantics allows us to "move" the resource from a certain temporary to a target object.
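For example, a quick sketch with a std::vector (unlike the D3DX vector types, it actually owns a heap buffer that can be stolen):

std::vector<float> source(1000, 1.0f);
std::vector<float> target = std::move(source);   // target steals source's buffer, no element copies
// source is now in a valid but unspecified (typically empty) state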
But move implies that the object which the resource comes from is in an unusable state after the move. This is not (I think) the case you want in your class, because the vertex data is used by the class whether or not the user calls this function.
So, use the common return by const reference to avoid copies.
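A minimal sketch of such a getter on the ParticleVertex class from the question (the function name is just an assumption):

const D3DXVECTOR4& GetPosition() const
{
    // Hands out a read-only reference to the member: no copy is made,
    // and the caller cannot modify the internal data through it.
    return _vertexPosition;
}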
On the other hand, DirectX provides handles to the resources (pointers), not the real resources. Pointers are basic types and copying them is cheap, so don't worry about performance. In your case, you are using 2D/3D vectors; copying them is cheap too.
Personally, I think that returning a pointer to an internal resource is always a very bad idea. I think that in this case the best approach is to return by const reference.
For example:
... some code
int sizeOfSomeObject = someObject.length();
... some code, sizeOfSomeObject is not needed anymore
Now I need another int variable for another action (for example, for a position in some object), and I have a dilemma: create a new variable, or reuse sizeOfSomeObject? In the first case I keep readability but lose performance; in the second case, the opposite. What do programmers usually do in this situation?
In the first case I keep readability but lose performance; in the second case, the opposite.
So did you benchmark it? I suspect not. Most modern compilers do a lot of aggressive analysis during register allocation, so if the optimizer sees that a variable is not used anymore but there's a new variable of the same type, it will just merge the two variables into the same memory region or processor register. There is no need to worry about performance penalties.
And anyway, don't do premature optimization (which this is). In 90% of the cases, readability is more important than "performance".
All in all, go ahead and create a new variable with an appropriate, different, descriptive name. And just for fun, compile this version and the version in which you reused the same variable, and look at the generated assembly (or bytecode, or...) - you'll find that they're identical.
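As a rough sketch of that comparison (someObject and computePosition are made up for illustration):

#include <string>

int computePosition();   // hypothetical helper, just for illustration

int useSeparateVariables(const std::string& someObject)
{
    int sizeOfSomeObject = static_cast<int>(someObject.length());
    // ... use sizeOfSomeObject ...
    int positionInObject = computePosition();   // fresh, descriptive name
    return positionInObject;
}

int reuseVariable(const std::string& someObject)
{
    int sizeOfSomeObject = static_cast<int>(someObject.length());
    // ... use sizeOfSomeObject ...
    sizeOfSomeObject = computePosition();       // reused; the name no longer matches its meaning
    return sizeOfSomeObject;
}

// With optimizations on, both functions typically compile to identical machine code:
// the optimizer keeps both values in the same register in the first version anyway.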
I would use different named variables for different things.
In terms of something like this, I don't think just one extra variable would cause a massive performance hit. In most languages you have the option to clear variables from memory in some way once they are no longer in use, so I would recommend doing that, and keeping separate names so that the code still means something to you or others when read at a later date.
In C++, you can use blocks for objects to be destroyed as soon as they are not needed anymore:
void some_function () {
    {
        MyClass c;
        // ... here we use c ...
    }
    // now c has been destroyed

    {
        MyClass d;
        // ... here we use d ...
    }
    // now d has been destroyed
}
In your example (with int variables), there is no reason to worry about performance. The worst thing that could probably happen is memory for two variables being used instead of one, but (i) that's negligible and (ii) the ints will probably live in a CPU register anyway. If you really worry, use the block approach for your int example.
It depends how often such an int would be initialized. If it's not in some hugely nested for loop, most (all) programmers will go for the first option. Besides, most modern programming languages have a garbage collector, which cleans up leftover objects.
A decent compiler will optimize out your second variable, so that shouldn't be an issue.
That said, there are situations where variable reuse makes sense. E.g., you might have some variable that holds a generic output populated from a call to some external API. Depending on the context and the parameters passed to the API you'll process the data differently, but it's probably better (more readable, etc.) to reuse the same data variable.
For example, something like this:
void* data = getSomeData(params);
//process data
//change params
data = getSomeData(params);
//process data
//change params
data = getSomeData(params);
I have a question regarding operator overloading in a C++/CLI environment.
static Length^ operator++(Length^ len)
{
    Length^ temp = gcnew Length(len->feet, len->inches);
    ++temp->inches;
    temp->feet += temp->inches / temp->inchesPerFoot;
    temp->inches %= temp->inchesPerFoot;
    return temp;
}
(The code is from Ivor Horton's book.)
Why do we need to declare a new class object (temp) on the heap just to return it?
I've googled for information on overloading, but there's really not much out there and I feel kind of lost.
This is the way operator overloading is implemented in .NET. An overloaded operator is a static function which returns a new instance instead of changing the current instance. Therefore, the postfix and prefix ++ operators are the same. Most information about operator overloading talks about native C++. You can find .NET-specific information by looking for C# samples, for example this one: http://msdn.microsoft.com/en-us/library/aa288467(v=vs.71).aspx
The .NET GC allows you to create a lot of lightweight new instances, which are collected automatically. This is why .NET overloaded operators are simpler than in native C++.
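Roughly, this is what the compiler does with that static operator (a sketch; the Length values are just assumed for illustration):

Length^ len = gcnew Length(5, 11);

Length^ a = ++len;   // effectively: len = Length::operator++(len); a = len;
Length^ b = len++;   // effectively: Length^ old = len; len = Length::operator++(len); b = old;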
Yes, because you're overloading the POST-increment operator here. Hence, the original value may be used a lot in the code, copied and stored somewhere else, despite the existence of the new value. Example:
store_length_somewhere( len++ );
While len will be increased, the original value might be stored by the function somewhere else. That means you might need two different values at the same time - hence the creation and return of a new value.
I would like to know the difference between these two (sorry I do not know the name of this subject).
I come from C#, where I was used to writing System.Data as well as classA.MethodA. I have already found out that in C++ I need to use :: with namespaces and -> with class members. But what about a simple "."?
I have created a System::Data::Odbc::OdbcConnection^ connection. Later I was able to use connection.Open. Why not connection->Open?
I'm sorry, I am sure it's something easily findable on the net, but I don't know the English term for these.
Thank you guys
If you have a pointer to an object, you use:
MyClass *a = new MyClass();
a->MethodName();
On the other hand, if you have an actual object, you use dotted notation:
MyClass a;
a.MethodName();
To clarify the previous answers slightly, the caret character ^ in C++/CLI can be thought of as a * for most intents and purposes. It is a 'handle' to a managed class, and means something slightly different, but similar. See this short explanation:
http://blogs.msdn.com/branbray/archive/2003/11/17/51016.aspx
So, in your example there, if you initialize your connection like:
System::Data::Odbc::OdbcConnection connect;
//You should be able to do this:
connect.Open();
Conversely, if you do this:
System::Data::Odbc::OdbcConnection^ connect1 = gcnew System::Data::Odbc::OdbcConnection();
connect1.Open(); // should be an error
connect1->Open(); //correct
The short answer: C++ allows you to manage your own memory. As such, you can create and manipulate memory through the use of pointers (essentially variables containing memory addresses rather than values).
a.Method() means a is an instance of a class, from which you call Method.
a->Method() means a is a pointer to an instance of a class, from which you call Method.
When you use syntax like a->member, you are using a pointer to a structure or object.
When you use syntax like a.member, you are using the structure or object and not a pointer to the structure or object.
I did a quick Google search for you, and THIS looks like a fairly quick and decent explanation.