We've got a 32-bit C++/CLI assembly and one of the classes wraps a native object. It overrides GetHashCode to return the address of the native object it wraps (m_native is a pointer):
int NativeWrapper::GetHashCode()
{
    return (int)m_native;
}
I'm now converting this assembly to support 64-bit, so the current GetHashCode implementation is no longer suitable. Are there any suitable algorithms to generate a hash code from a 64-bit address? Or am I missing an easier alternative?
I would use the same algorithm that the .NET Framework uses to generate the hash code of an Int64: the lower 32 bits XORed with the upper 32 bits.
int NativeWrapper::GetHashCode()
{
    __int64 value = (__int64)m_native;
    return (int)value ^ (int)(value >> 32);
}
You could also make a case for simple truncation: that's what IntPtr.GetHashCode does.
If you want to support a dual 32/64 compile, perhaps casting m_native to IntPtr and using its GetHashCode would be a good implementation that works in both 32 and 64 bit modes.
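For example, a minimal sketch of that approach (assuming m_native is the raw native pointer, as in the question):
int NativeWrapper::GetHashCode()
{
    // IntPtr is 32 bits in a 32-bit build and 64 bits in a 64-bit build,
    // and its GetHashCode already handles both cases.
    return System::IntPtr(m_native).GetHashCode();
}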
I'm struggling a lot with porting an MFC application from 32-bit to 64-bit.
I have some classes with CStringArray members; they use CArchive serialization and everything works fine in the 32-bit app.
Now I have split the application into two parts, one 32-bit and the other 64-bit, and they need to share some serialized data. When I serialize a CStringArray member and try to deserialize it in the 64-bit app, I get a CArchiveException with cause=3, which should be "endOfFile".
It's not clear what's going on; I suspect that data serialized by the 32-bit app can't be read by the 64-bit app because of a size difference. If I follow the GetSize() function of CStringArray, I see that the return type is INT_PTR, which is defined as follows:
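// simplified from basetsd.h
#ifdef _WIN64
typedef __int64 INT_PTR;
#else
typedef int INT_PTR;
#endif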
Does this mean there is no way to serialize a CStringArray in the 32-bit app and deserialize it in the 64-bit app? Is there a workaround or something similar? Are there other MFC data types I should check between the 32-bit and 64-bit apps?
EDIT: The problem is not strictly related to the CStringArray, which (see the comments) works fine between 32/64.
I'm answering my own question for future readers. The problem was not related to the CStringArray but to other code near it. Before the CStringArray member there is another member, a CArray of a custom struct. This struct has a couple of members that rely on an override of the CArchive SerializeElements function to serialize the data correctly in 32-bit.
The SerializeElements override was declared like this:
template<class TYPE> void AFXAPI SerializeElements (CArchive& ar, CRect* pElements, int nCount);
but the 64-bit build does not use this override, because the last parameter is int instead of INT_PTR. Without SerializeElements being used, the serialization was wrong (fewer bytes than needed, which is why I hit endOfFile too soon while reading).
To fix it, just use the correct declaration for SerializeElements, which is:
template<class TYPE> void AFXAPI SerializeElements (CArchive& ar, CRect* pElements, INT_PTR nCount);
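For completeness, a rough sketch of what a matching definition could look like for a CRect element type (the field-by-field serialization below is illustrative, not the original code):
template <> void AFXAPI SerializeElements<CRect>(CArchive& ar, CRect* pElements, INT_PTR nCount)
{
    for (INT_PTR i = 0; i < nCount; i++, pElements++)
    {
        if (ar.IsStoring())
            ar << pElements->left << pElements->top
               << pElements->right << pElements->bottom;
        else
            ar >> pElements->left >> pElements->top
               >> pElements->right >> pElements->bottom;
    }
}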
Hope this helps.
We have a plugin system that calls functions in dlls (user-generated plugins) by dlopening/LoadLibrarying the dll/so/dylib and then dlsyming/GetProcAddressing the function, and then storing that result in a function pointer.
Unfortunately, due to some bad example code being copy-pasted, some of these dlls in the wild do not have the correct function signature, and do not contain a return statement.
A dll might contain this:
extern "C" void Foo() { stuffWithNoReturn(); } // copy-paste from bad code
or it might contain this:
extern "C" int Foo() { doStuff(); return 1; } // good code
The application that loads the dll relies on the return value, but there are a nontrivial number of dlls out there that don't have the return statement. I am trying to detect this situation, and warn the user about the problem with his plugin.
This naive code should explain what I'm trying to do:
typedef int (*Foo_f)(void);
Foo_f func = (Foo_f)getFromDll(); // does dlsym or GetProcAddress depending on platform
int canary = 0x42424242;
canary = (*func)();
if (canary == 0x42424242)
    printf("You idiot, this is the wrong signature!!!\n");
else
    real_return_value = canary;
This unfortunately does not work: canary contains a random value after calling a dll that has the known defect. I naively assumed that calling a function with no return statement would leave the canary intact, but it doesn't.
My next idea was to write a little bit of inline assembler to call the function and check the eax register upon return, but Visual Studio 2015 doesn't support __asm in x64 code.
I know there is no standards-conformant solution to this, since casting the function pointer to the wrong type is of course undefined behavior. But if someone has a solution that works at least on 64-bit Windows with Visual C++, or one that works with clang on macOS, I would be most delighted.
@Lorinczy Zsigmond is right in that the contents of the register are undefined if the function does something but returns nothing.
We found, however, that in practice the plugins that return nothing almost always have empty functions that compile to a retn 0x0 and leave the return register untouched. We can detect this case by spraying the rax register with a known value (0xdeadbeef) and checking for it after the call.
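A minimal sketch of that idea for 64-bit Windows, assuming a small hypothetical MASM helper (CallSprayed, built from a separate .asm file since x64 MSVC has no inline assembler) that loads the sentinel into rax and tail-jumps to the plugin function:
// Hypothetical x64 MASM helper (separate .asm file):
//
//   ; extern "C" unsigned long long CallSprayed(void* fn);
//   CallSprayed PROC
//       mov eax, 0DEADBEEFh   ; spray the return register (writing EAX zero-extends into RAX)
//       jmp rcx               ; tail-call the plugin; whatever it leaves in RAX is "returned"
//   CallSprayed ENDP
//
extern "C" unsigned long long CallSprayed(void* fn);

bool LooksLikeMissingReturn(void* pluginFn)
{
    // If the sentinel survives the call, the plugin very likely never set a return value.
    return CallSprayed(pluginFn) == 0xDEADBEEF;
}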
The Rust documentation is vague on bool's size.
Is it guaranteed to be 1 byte, or is it unspecified like in C++?
fn main() {
    use std::mem;
    println!("{}", mem::size_of::<bool>()); // always 1?
}
Rust emits i1 to LLVM for bool and relies on whatever representation LLVM produces. LLVM uses i8 (one byte) to represent i1 in memory on all the platforms Rust currently supports. On the other hand, there's no certainty about the future, since the Rust developers have so far refused to commit to a particular bool representation.
So, it's guaranteed by the current implementation but not guaranteed by any specifications.
You can find more details in this RFC discussion and the linked PR and issue.
Please see E_net4's answer for more information about changes introduced in Rust since this answer was published.
While historically there was a wish to avoid committing to a more specific representation, it was eventually decided in January 2018 that bool should provide the following guarantees:
The definition of bool is equivalent to the C99 definition of _Bool
In turn, for all currently supported platforms, the size of bool is exactly 1.
The documentation has been updated accordingly. In the Rust reference, bool is defined as thus:
The bool type is a datatype which can be either true or false. The boolean type uses one byte of memory. [...]
It has also been documented since 1.25.0 that the output of std::mem::size_of::<bool>() is 1.
As such, one can indeed rely on bool being 1 byte (and if this is ever to change, it will be a pretty loud change).
See also:
In C how much space does a bool (boolean) take up? Is it 1 bit, 1 byte or something else?
Why is a boolean 1 byte and not 1 bit of size? (C++)
I am working on refactoring a large amount of code from an unmanaged C++ assembly into a C# assembly. There is currently a mixed-mode assembly going between the two with, of course, a mix of managed and unmanaged code. There is a function I am trying to call in the unmanaged C++ which relies on FILE*s (as defined in stdio.h). This function ties into a much larger process which cannot be refactored into the C# code yet, but which now needs to be called from the managed code.
I have searched but cannot find a definitive answer to what kind of underlying system pointer the System::IO::FileStream class uses. Is this just applied on top of a FILE*? Or is there some other way to convert a FileStream^ to a FILE*? I found FileStream::SafeFileHandle, on which I can call DangerousGetHandle().ToPointer() to get a native void*, but I'm just trying to be certain that if I cast this to FILE* that I'm doing the right thing...?
void Write(FILE *out)
{
    Data->Write(out); // huge bulk of code, writing the data
}

virtual void __clrcall Write(System::IO::FileStream ^out)
{
    // is this right??
    FILE *pout = (FILE*)out->SafeFileHandle->DangerousGetHandle().ToPointer();
    Write(pout);
}
You'll need _open_osfhandle followed by _fdopen.
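A rough sketch of how that could look in the managed wrapper (the flags, mode string, and cleanup are assumptions; they need to match how the FileStream was opened and who owns the handle):
#include <io.h>
#include <fcntl.h>
#include <stdio.h>

virtual void __clrcall Write(System::IO::FileStream ^out)
{
    // Grab the raw Win32 HANDLE from the FileStream.
    intptr_t osHandle = (intptr_t)out->SafeFileHandle->DangerousGetHandle().ToPointer();

    // Wrap the HANDLE in a CRT file descriptor, then in a FILE*.
    int fd = _open_osfhandle(osHandle, 0 /* e.g. _O_TEXT for text mode */);
    FILE *pout = _fdopen(fd, "w");

    Write(pout);   // the existing native code does the actual writing

    fflush(pout);
    // Note: fclose(pout) would also close the underlying OS handle, which the
    // FileStream still thinks it owns, so decide ownership carefully.
}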
Casting is not magic. Just because the input and output types are right for your situation doesn't mean the values are.
I'm evaluating Server 2008. My C++ executable is getting this error. I've seen on MSDN that this error seems to have required a hot-fix for several previous OSes. Has anyone else seen this? I get the same results on both the 32-bit and 64-bit OS.
Code snippet:
HRESULT GroupStart([in] short iClientId, [in] VARIANT GroupDataArray,
                   [out] short* pGroupInstance, [out] long* pCommandId);
Where the GroupDataArray VARIANT argument wraps a single-dimension SAFEARRAY of VARIANTs, each wrapping a DCAPICOM_GroupData struct entry:
// DCAPICOM_GroupData
[
    uuid(F1FE2605-2744-4A2A-AB85-1E1845C280EB),
    helpstring("removed")
]
typedef struct DCAPICOM_GroupData {
    [helpstring("removed")]
    long m_lImageID;
    [helpstring("removed")]
    unsigned char m_ucHeadID;
    [helpstring("removed")]
    unsigned char m_ucPlateID;
} DCAPICOM_GroupData;
After opening a support case with Microsoft, I can now answer my own question. This is (now) a recognized bug. The issue has to do with marshalling on the server side, before the server code is even called. Our structure is 6 bytes long, but this COM implementation interprets it as 8, so the marshalling fails, and this is the error you get back. The workaround, until a Service Pack is released to deal with this, is to add two extra bytes to the structure to pad it up to 8 bytes. We haven't run across any more instances that fail yet, but we still have a lot of testing to do.
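A sketch of the padded struct (the padding field name is made up here; helpstring attributes omitted):
typedef struct DCAPICOM_GroupData {
    long m_lImageID;
    unsigned char m_ucHeadID;
    unsigned char m_ucPlateID;
    unsigned char m_ucPadding[2];   // two extra bytes to pad the struct up to 8 bytes
} DCAPICOM_GroupData;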
We ran into the same error recently with a client/server app communicating via DCOM. It turned out that the size of a marshalled interface pointer going across the wire (i.e., not local) had changed (gotten bigger). You might like to check whether your code is doing any special marshalling via CoMarshalInterface or the like.