I'm struggling with porting an MFC application from 32-bit to 64-bit.
I have some classes with CStringArray members; they use CArchive serialization and everything works fine in the 32-bit app.
Now I have split the application into two parts, one 32-bit and the other 64-bit, and they need to share some serialized data. When I serialize a CStringArray member and try to deserialize it in the 64-bit app, I get a CArchiveException with cause = 3, which should be "endOfFile".
It's not clear what's going on. I suspect that data serialized by the 32-bit app can't be read by the 64-bit app because of a size difference. If I follow the GetSize() function of CStringArray, I see that it returns INT_PTR, which is defined as follows:
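// simplified from <basetsd.h>
#if defined(_WIN64)
    typedef __int64 INT_PTR;
#else
    typedef int INT_PTR;
#endif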
Does this mean there is no way to serialize a CStringArray on 32-bit and deserialize it on 64-bit? Is there a workaround or something similar? Are there other MFC types I should check when sharing data between the 32-bit and 64-bit apps?
EDIT: The problem is not strictly related to the CStringArray, which (see the comments) works fine between 32-bit and 64-bit.
I'm answering my own question for future readers. The problem was not the CStringArray itself but other code near it. Right before the CStringArray member there is another member, a CArray of a custom struct. That struct has a couple of members that rely on an override of the CArchive SerializeElements function to serialize the data correctly in the 32-bit build.
The SerializeElements override was declared like this:
template<class TYPE> void AFXAPI SerializeElements (CArchive& ar, CRect* pElements, int nCount);
The 64-bit build, however, does not use this override, because the last parameter is int instead of INT_PTR. Without SerializeElements being called, the data is serialized incorrectly (fewer bytes than needed, which is why reading hits endOfFile too soon).
To fix it, just use the correct declaration for SerializeElements, which is:
template<class TYPE> void AFXAPI SerializeElements (CArchive& ar, CRect* pElements, INT_PTR nCount);
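As an illustration only (the struct and field names here are made up, not my actual code), a SerializeElements override for a custom struct might look like this; the INT_PTR count is what lets both the 32-bit and 64-bit builds pick it up:

struct MyGroupData
{
    long  m_lImageID;
    DWORD m_dwFlags;
};

template<> void AFXAPI SerializeElements<MyGroupData>(CArchive& ar, MyGroupData* pElements, INT_PTR nCount)
{
    for (INT_PTR i = 0; i < nCount; ++i, ++pElements)
    {
        if (ar.IsStoring())
            ar << pElements->m_lImageID << pElements->m_dwFlags;
        else
            ar >> pElements->m_lImageID >> pElements->m_dwFlags;
    }
}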
Hope this helps.
We've got a 32-bit C++/CLI assembly and one of the classes wraps a native object. It overrides GetHashCode to return the address of the native object it wraps (m_native is a pointer):
int NativeWrapper::GetHashCode()
{
return (int)m_native;
}
I'm now converting this assembly to support 64-bit, so the current GetHashCode implementation is no longer suitable. Are there any suitable algorithms to generate a hash code from a 64-bit address? Or am I missing an easier alternative?
I would use the same algorithm as the .Net framework does for generating the hash code of an Int64: the lower 32 bits XORed with the upper 32 bits.
int NativeWrapper::GetHashCode()
{
    // widen the pointer to a 64-bit integer first; a pointer itself can't be shifted
    __int64 p = reinterpret_cast<__int64>(m_native);
    return (int)p ^ (int)(p >> 32);
}
Although you could make a case for simple truncation: that's what IntPtr.GetHashCode does.
If you want to support a dual 32/64 compile, perhaps casting m_native to IntPtr and using its GetHashCode would be a good implementation that works in both 32 and 64 bit modes.
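If you go that route, a minimal C++/CLI sketch (assuming m_native is a plain native pointer member) could look like this:

int NativeWrapper::GetHashCode()
{
    // IntPtr has a constructor taking void*, so this compiles in both 32-bit and 64-bit builds
    return System::IntPtr(m_native).GetHashCode();
}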
I am working on refactoring a large amount of code from an unmanaged C++ assembly into a C# assembly. There is currently a mixed-mode assembly going between the two with, of course, a mix of managed and unmanaged code. There is a function I am trying to call in the unmanaged C++ which relies on FILE*s (as defined in stdio.h). This function ties into a much larger process which cannot be refactored into the C# code yet, but which now needs to be called from the managed code.
I have searched but cannot find a definitive answer to what kind of underlying system pointer the System::IO::FileStream class uses. Is this just applied on top of a FILE*? Or is there some other way to convert a FileStream^ to a FILE*? I found FileStream::SafeFileHandle, on which I can call DangerousGetHandle().ToPointer() to get a native void*, but I'm just trying to be certain that if I cast this to FILE* that I'm doing the right thing...?
void Write(FILE *out)
{
Data->Write(out); // huge bulk of code, writing the data
}
virtual void __clrcall Write(System::IO::FileStream ^out)
{
// is this right??
FILE *pout = (FILE*)out->SafeFileHandle->DangerousGetHandle().ToPointer();
Write(pout);
}
You'll need _open_osfhandle followed by _fdopen.
Casting is not magic. Just because the input and output types are right for your situation doesn't mean the values are.
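A rough C++/CLI sketch of that route (error handling kept minimal; the FileStream must stay alive and unused while the FILE* is in use, and fclose() on the result would also close the FileStream's handle):

// requires <io.h>, <fcntl.h> and <cstdio>
virtual void __clrcall Write(System::IO::FileStream ^out)
{
    // wrap the OS handle in a CRT file descriptor, then in a FILE*
    intptr_t osHandle = (intptr_t)out->SafeFileHandle->DangerousGetHandle().ToPointer();
    int fd = _open_osfhandle(osHandle, 0);   // 0 = default (binary) translation mode
    if (fd == -1)
        throw gcnew System::IO::IOException("_open_osfhandle failed");

    FILE *pout = _fdopen(fd, "wb");
    if (pout == nullptr)
        throw gcnew System::IO::IOException("_fdopen failed");

    Write(pout);   // the existing native overload
    fflush(pout);  // flush, but don't fclose(): that would close the handle out from under the FileStream
}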
I've been using std::unique_ptr to store some COM resources, and provided a custom deleter function. However, many of the COM functions want pointer-to-pointer. Right now, I'm using the implementation detail of _Myptr, in my compiler. Is it going to break unique_ptr to be accessing this data member directly, or should I store a gajillion temporary pointers to construct unique_ptr rvalues from?
COM objects are reference-counted by nature, so you shouldn't use anything except reference-counting smart pointers like ATL::CComPtr or _com_ptr_t, even if it seems inappropriate for your use case (I fully understand your concerns, I just think you assign too much weight to them). Both classes are designed to be used in all the valid scenarios that arise when COM objects are used, including obtaining the pointer-to-pointer. Yes, that's a bit more functionality than you need, but if you don't expect any specific negative consequences you can't tolerate, you should just use those classes - they are designed exactly for this purpose.
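For the pointer-to-pointer case specifically, a short sketch of what the CComPtr route looks like (the CLSID and interfaces here are just an example):

#include <atlcomcli.h>
#include <shobjidl.h>

CComPtr<IFileDialog> dialog;
HRESULT hr = dialog.CoCreateInstance(CLSID_FileOpenDialog);
// ... configure and Show() the dialog ...
CComPtr<IShellItem> item;
if (SUCCEEDED(hr))
    hr = dialog->GetResult(&item);   // CComPtr::operator& hands out an IShellItem**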
I've had to tackle the same problem not too long ago, and I came up with two different solutions:
The first was a simple wrapper that encapsulated a 'writeable' pointer and could be std::moved into my smart pointer. This is just a little more convenient than using the temp pointers you mention, where you cannot define the type directly at the call site.
I didn't stick with that, though. What I did instead was a retrieve helper function that takes the COM function and returns my smart pointer (doing all the temporary-pointer work internally). This trivially works with free functions that have a single T** parameter. If you want to use it with something more complex, you can pass the call in via std::bind and leave only the pointer-to-be-returned free.
I know that this is not directly what you're asking, but I think it's a neat solution to the problem you're having.
As a side note, I'd prefer boost's intrusive_ptr instead of std::unique_ptr, but that's a matter of taste, as always.
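For reference, wiring boost::intrusive_ptr up to COM only takes two hooks (a sketch, assuming the objects derive from IUnknown as usual):

#include <boost/intrusive_ptr.hpp>
#include <unknwn.h>

inline void intrusive_ptr_add_ref(IUnknown* p) { p->AddRef(); }
inline void intrusive_ptr_release(IUnknown* p) { p->Release(); }

// adopt a pointer that already carries a reference (e.g. from a COM creation function):
// boost::intrusive_ptr<IShellItem> item(raw, /*add_ref=*/false);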
Edit: Here's some sample code that's transferred from my version using boost::intrusive_ptr (so it might not work out of the box with unique_ptr):
template <class T, class PtrType, class PtrDel>
HRESULT retrieve(T func, std::unique_ptr<PtrType, PtrDel>& ptr)
{
    PtrType* raw_ptr = nullptr;   // temporary the COM call can write into
    HRESULT result = func(&raw_ptr);
    ptr.reset(raw_ptr);           // hand ownership to the smart pointer
    return result;
}
For example, it can be used like this:
std::unique_ptr<IFileDialog, ComDeleter> FileDialog;
/*...*/
using std::bind;
using namespace std::placeholders;
std::unique_ptr<IShellItem, ComDeleter> ShellItem;
HRESULT status = retrieve(bind(&IFileDialog::GetResult, FileDialog.get(), _1), ShellItem);
For bonus points, you can even let retrieve return the unique_ptr instead of taking it by reference. The functor that bind generates should have signature typedefs to derive the pointer type. You can then throw an exception if you get a bad HRESULT.
C++0x smart pointers have a portable way to get at the raw pointer, .get(), or to release it entirely with .release(). You could also always use &(*ptr), but that is less idiomatic.
If you want to use smart pointers to manage the lifetime of an object, but still need raw pointers to use a library which doesn't support smart pointers (including the standard C library), you can use those functions to most conveniently get at the raw pointers.
Remember, you still need to keep the smart pointer around for the duration you want the object to live (so be aware of its lifetime).
Something like:
call_com_function( my_uniq_ptr.get() );   // will work fine: the unique_ptr outlives the call
return my_localscope_uniq_ptr.get();      // will not: the object is destroyed when the function returns
return my_member_uniq_ptr.get();          // might, if *this will be around for the duration, etc.
Note: this is just a general answer to your question. How to best use COM is a separate issue and sharptooth may very well be correct.
Use a helper function like this.
template< class T >
T*& getPointerRef ( std::unique_ptr<T> & ptr )
{
    // relies on MSVC internals: _Mybase and _Myptr are implementation details
    // of Microsoft's std::unique_ptr and may change between compiler versions
    struct Twin : public std::unique_ptr<T>::_Mybase {};
    Twin * twin = (Twin*)( &ptr );
    return twin->_Myptr;
}
A quick check against the implementation:
int wmain ( int argc, wchar_t* argv[] )
{
    std::unique_ptr<char> charPtr ( new char[25] );
    delete[] getPointerRef(charPtr);   // free the array behind the unique_ptr's back
    getPointerRef(charPtr) = 0;        // null the stored pointer so the destructor does nothing
    return charPtr.get() != 0;         // now returns 0
}
Unfortunately, I cannot resort to C# in my current project, so I'll have to solve this without the unsafe keyword.
I've got a bitmap, and I need to access the pixels and channel values directly. I'd like to go beyond Marshal.ReadByte() and Marshal.WriteByte() (and definitely beyond GetPixel and SetPixel).
Is there a way to put all the pixel data of the bitmap into a Byte array that works on both 32 and 64 bit systems? I want the exact same layout as the original bitmap, so the padding for each row (if it exists) also needs to be included.
Marshal doesn't seem to have something akin to:
byte[] ReadBytes(IntPtr start, int offset, int count)
Unless I totally missed it...
Any help greatly appreciated,
David
ps. So far all my images are in 32BppPArgb pixelformat.
Marshal does have a method that does exactly what you are asking. See Marshal.Copy():
public static void Copy(
IntPtr source,
byte[] destination,
int startIndex,
int length
)
Copies data from an unmanaged memory pointer to a managed 8-bit unsigned integer array.
And there are overloads to go the other direction as well
Would something like this do? (untested):
Public Shared Function BytesFromBitmap(ByVal Image As Drawing.Bitmap) As Byte()
    Using buffer As New IO.MemoryStream()
        Image.Save(buffer, Drawing.Imaging.ImageFormat.Bmp)
        buffer.Position = 0 ' rewind before reading the stream back
        Using rdr As New IO.BinaryReader(buffer)
            Return rdr.ReadBytes(CInt(buffer.Length))
        End Using
    End Using
End Function
It won't let you manipulate the pixels in a Drawing.Bitmap object directly, but it will let you copy that bitmap to a byte array, as per the question title.
Another option is serialization via the BinaryFormatter, but I think that will still require you to pass it through a MemoryStream.
VB does not offer methods for direct memory access. You have two choices:
Use the Marshal class
Write a small unsafe C# (or C++/CLI) library that handles only these operations and reference it from your VB code.
Alright, there is a third option. VB.Net does not inherently support direct memory access, but it can be accomplished. It's just ugly and prone to errors. Nonetheless, if you're willing to put in the effort, you can try building a bitmap access library using these techniques combined with the approach referenced previously.
shf301 is right on the money, but I'd like to add a link to a comprehensive explanation/tutorial on fast pixel data access. Rather than saving the image to a stream and accessing a file-in-memory, it would be better to lock the bitmap, copy pixel data out, access it, and copy it back in. The performance of this technique is pretty good.
Code is in C#, but the approach is language-neutral and easy to read.
http://ilab.ahemm.org/tutBitmap.html
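For completeness, here is a rough C++/CLI sketch of that lock-and-copy approach (it assumes Format32bppPArgb as in the question and a top-down bitmap; Stride already includes any row padding, so Stride * Height bytes preserve the original layout):

array<System::Byte>^ CopyPixels(System::Drawing::Bitmap^ bmp)
{
    using namespace System::Drawing;
    using namespace System::Drawing::Imaging;
    using namespace System::Runtime::InteropServices;

    Rectangle rect(0, 0, bmp->Width, bmp->Height);
    BitmapData^ data = bmp->LockBits(rect, ImageLockMode::ReadWrite, bmp->PixelFormat);
    try
    {
        int byteCount = data->Stride * data->Height;
        array<System::Byte>^ pixels = gcnew array<System::Byte>(byteCount);
        Marshal::Copy(data->Scan0, pixels, 0, byteCount);   // unmanaged -> managed
        // ... read or modify the channel bytes here ...
        Marshal::Copy(pixels, 0, data->Scan0, byteCount);   // managed -> unmanaged
        return pixels;
    }
    finally
    {
        bmp->UnlockBits(data);
    }
}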
I'm evaluating Server 2008. My C++ executable is getting this error. I've seen this error on MSDN that seems to have required a hot-fix for several previous OSes. Anyone else seen this? I get the same results for the 32 & 64 bit OS.
Code snippet:
HRESULT GroupStart([in] short iClientId, [in] VARIANT GroupDataArray,
[out] short* pGroupInstance, [out] long* pCommandId);
Where the GroupDataArray VARIANT argument wraps a single-dimension SAFEARRAY of VARIANTs wrapping a DCAPICOM_GroupData struct entries:
// DCAPICOM_GroupData
[
uuid(F1FE2605-2744-4A2A-AB85-1E1845C280EB),
helpstring("removed")
]
typedef struct DCAPICOM_GroupData {
[helpstring("removed")]
long m_lImageID;
[helpstring("removed")]
unsigned char m_ucHeadID;
[helpstring("removed")]
unsigned char m_ucPlateID;
} DCAPICOM_GroupData;
After opening a support case with Microsoft, I can now answer my own question. This is (now) a recognized bug. The issue has to do with marshalling on the server side, but before the server code is called. Our structure is 6 bytes long, but this COM implementation interprets it as 8, so the marshalling fails, and this is the error you get back. The workaround, until a Service Pack is released to deal with this, is to add two extra bytes to the structure to pad it up to 8 bytes. We haven't run across any more instances that fail yet, but we still have a lot of testing to do.
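For illustration, the padded struct would look something like this (the filler field names are made up; only the two trailing bytes are new):

typedef struct DCAPICOM_GroupData {
    long          m_lImageID;
    unsigned char m_ucHeadID;
    unsigned char m_ucPlateID;
    unsigned char m_ucReserved1;   // padding to bring the marshalled size to 8 bytes
    unsigned char m_ucReserved2;   // padding to bring the marshalled size to 8 bytes
} DCAPICOM_GroupData;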
We ran into the same error recently with a client/server app communicating via DCOM. It turned out that the size of a marshalled interface pointer going across the wire (i.e., not local) had changed (gotten bigger). You might like to check whether your code is doing any special marshalling via CoMarshalInterface or the like.