How to call CPUID leaf 7 and subleaf 0? - cpuid

I have installed the cpuid package on Fedora (yum install cpuid).
Could you please let me know how I can call CPUID leaf 7 with subleaf 0? I want to check which features are available on that platform.
I really appreciate your help. Thanks in advance.

__cpuid_count can be used to query leaves such as 7 and 0xD, which take a subleaf.
The definition is in cpuid.h:
#define __cpuid_count(level, count, a, b, c, d) \
  __asm__ ("cpuid\n\t" \
           : "=a" (a), "=b" (b), "=c" (c), "=d" (d) \
           : "0" (level), "2" (count))
Note that __cpuid_count does not check whether the requested leaf is supported.
You can add a check similar to __get_cpuid in cpuid.h; something like this should probably be added to the standard headers:
static __inline int
__get_cpuid_count (unsigned int __level, unsigned int __count,
                   unsigned int *__eax, unsigned int *__ebx,
                   unsigned int *__ecx, unsigned int *__edx)
{
  unsigned int __ext = __level & 0x80000000;

  if (__get_cpuid_max (__ext, 0) < __level)
    return 0;

  __cpuid_count (__level, __count, *__eax, *__ebx, *__ecx, *__edx);
  return 1;
}
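As an illustration, here is a minimal sketch (assuming GCC or Clang on x86; newer compilers already ship __get_cpuid_count in cpuid.h, otherwise paste in the helper above) that queries leaf 7, subleaf 0 and tests a few well-known EBX feature bits:
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* leaf 7, subleaf 0: structured extended feature flags */
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 7 is not supported on this CPU");
        return 1;
    }

    /* a few example feature bits reported in EBX */
    printf("BMI1: %s\n", (ebx & (1u << 3)) ? "yes" : "no");
    printf("AVX2: %s\n", (ebx & (1u << 5)) ? "yes" : "no");
    printf("BMI2: %s\n", (ebx & (1u << 8)) ? "yes" : "no");
    return 0;
}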


SMHasher setup?

The SMHasher test suite for hash functions is touted as the best of the lot. But the latest version I've got (from rurban) gives absolutely no clue on how to check your proposed hash function (it does include an impressive battery of hash functions, though some of interest, if only for historic value, are missing). On top of that, I'm a complete CMake newbie.
It's actually quite simple. You just need to install CMake.
Building SMHasher
To build SMHasher on a Linux/Unix machine:
git clone https://github.com/rurban/smhasher
cd smhasher/
git submodule init
git submodule update
cmake .
make
Adding a new hash function
To add a new function, you can edit just three files: Hashes.cpp, Hashes.h and main.cpp.
For example, I will add the ElfHash:
unsigned long ElfHash(const unsigned char *s)
{
    unsigned long h = 0, high;

    while (*s)
    {
        h = (h << 4) + *s++;
        if ((high = h & 0xF0000000))
            h ^= high >> 24;
        h &= ~high;
    }
    return h;
}
First, we need to modify it slightly so that it takes a seed and a length:
uint32_t ElfHash(const void *key, int len, uint32_t seed)
{
    unsigned long h = seed, high;
    const uint8_t *data = (const uint8_t *)key;

    for (int i = 0; i < len; i++)
    {
        h = (h << 4) + *data++;
        if ((high = h & 0xF0000000))
            h ^= high >> 24;
        h &= ~high;
    }
    return h;
}
Add this function definition to Hashes.cpp. Also add the following to Hashes.h:
uint32_t ElfHash(const void *key, int len, uint32_t seed);
inline void ElfHash_test(const void *key, int len, uint32_t seed, void *out) {
    *(uint32_t *) out = ElfHash(key, len, seed);
}
In file main.cpp add the following line into array g_hashes:
{ ElfHash_test, 32, 0x0, "ElfHash", "ElfHash 32-bit", POOR, {0x0} },
(The third value is the self-verification checksum; you only learn the correct value after running the test once.)
Finally, rebuild and run the test:
make
./SMHasher ElfHash
It will show you all the tests that this hash function fails. (It is very bad.)

Parallel Brute Force Algorithm GPU

I have implemented a parallel brute-force generator in Python, as in this post:
Parallelize brute force generation.
I want to implement the same parallel technique on a GPU, i.e. a brute-force generator that runs in parallel on the GPU.
Can someone help me out with some code examples for a parallel brute-force generator on a GPU?
I couldn't find any examples online, which made me suspicious.
Take a look at this implementation. I did the distribution of the work across threads on the GPU with this code:
void IncBruteGPU(unsigned char *theBrute, unsigned int charSetLen,
                 unsigned int bruteLength, unsigned int incNr)
{
    unsigned int i = 0;
    // treat theBrute as a base-charSetLen number (least significant digit
    // first) and add incNr to it, carrying into the next position
    while (incNr > 0 && i < bruteLength) {
        unsigned int add = incNr + theBrute[i];
        theBrute[i] = add % charSetLen;   // new digit at this position
        incNr = add / charSetLen;         // carry
        i++;
    }
}
call it like this:
// the thread (work-item) index number
int idx = get_global_id(0);
// the length of your charset "abcdefghi..."
unsigned int charSetLen = 26;
// the length of the word you want to brute-force
unsigned int bruteLength = 6;
// theBrute holds one index into the alphabet per character position
unsigned char theBrute[MAX_BRUTE_LENGTH] = {0};   // start at the first word
IncBruteGPU(theBrute, charSetLen, bruteLength, idx);
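For a fuller picture, here is a rough sketch of how the helper might sit inside an OpenCL kernel, with each work-item enumerating its own slice of the keyspace. The kernel signature, MAX_BRUTE_LENGTH, the charset buffer and wordsPerThread are my own assumptions, not the original poster's code:
#define MAX_BRUTE_LENGTH 16

__kernel void brute_kernel(__global const uchar *charset,
                           const unsigned int charSetLen,
                           const unsigned int bruteLength,
                           const unsigned int wordsPerThread)
{
    unsigned int idx = get_global_id(0);
    unsigned char theBrute[MAX_BRUTE_LENGTH] = {0};

    // position this work-item at candidate number idx * wordsPerThread
    IncBruteGPU(theBrute, charSetLen, bruteLength, idx * wordsPerThread);

    for (unsigned int w = 0; w < wordsPerThread; w++) {
        uchar candidate[MAX_BRUTE_LENGTH];
        for (unsigned int j = 0; j < bruteLength; j++)
            candidate[j] = charset[theBrute[j]];   // map digit -> character

        // ... hash/test candidate here and write any hit to an output buffer ...

        // step to the next candidate in this work-item's range
        IncBruteGPU(theBrute, charSetLen, bruteLength, 1);
    }
}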
Good Luck!

pthread_mutex_t struct: What does lock stand for?

I am looking at the pthread_mutex_t structure in the pthreadtypes.h file. What does the __lock member stand for? Is it like a lock number assigned to the mutex?
typedef union
{
  struct __pthread_mutex_s
  {
    int __lock;
    unsigned int __count;
    int __owner;
#if __WORDSIZE == 64
    unsigned int __nusers;
#endif
    /* KIND must stay at this position in the structure to maintain
       binary compatibility.  */
    int __kind;
#if __WORDSIZE == 64
    int __spins;
    __pthread_list_t __list;
# define __PTHREAD_MUTEX_HAVE_PREV 1
#else
    unsigned int __nusers;
    __extension__ union
    {
      int __spins;
      __pthread_slist_t __list;
    };
#endif
  } __data;
  char __size[__SIZEOF_PTHREAD_MUTEX_T];
  long int __align;
} pthread_mutex_t;
The __lock member of struct __pthread_mutex_s __data is used as a futex object on Linux. Many of the following details may differ depending on the architecture you're looking at:
See pthread_mutex_lock.c for the high-level locking function for pthread mutexes, __pthread_mutex_lock(), which generally ends up calling LLL_MUTEX_LOCK(); the definitions of LLL_MUTEX_LOCK() and friends in turn call lll_lock() and related macros in lowlevellock.h.
The lll_lock() macro in turn calls __lll_lock_wait_private(), which calls lll_futex_wait(), which makes the sys_futex system call.
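To make the role of __lock more concrete, here is a minimal, simplified sketch of a futex-based lock in the spirit of what lll_lock() does, loosely following Ulrich Drepper's "Futexes Are Tricky". This is not glibc's actual code; the function names and comments are illustrative:
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdatomic.h>

/* __lock-style state: 0 = unlocked, 1 = locked, 2 = locked with possible waiters */

static void futex_wait(atomic_int *addr, int expected_val) {
    syscall(SYS_futex, addr, FUTEX_WAIT, expected_val, NULL, NULL, 0);
}

static void futex_wake(atomic_int *addr, int nwaiters) {
    syscall(SYS_futex, addr, FUTEX_WAKE, nwaiters, NULL, NULL, 0);
}

static void my_lock(atomic_int *lock) {
    int expected = 0;
    /* fast path: uncontended 0 -> 1, no system call at all */
    if (atomic_compare_exchange_strong(lock, &expected, 1))
        return;
    /* slow path: mark the lock contended (2), then sleep in the kernel
       until the exchange finds it unlocked (returns 0) */
    while (atomic_exchange(lock, 2) != 0)
        futex_wait(lock, 2);
}

static void my_unlock(atomic_int *lock) {
    /* if the old value was 2, someone may be sleeping: wake one waiter */
    if (atomic_exchange(lock, 0) == 2)
        futex_wake(lock, 1);
}
glibc's real code adds much more on top (owner tracking, mutex kinds, robust/PI handling, per-architecture fast paths), but this kind of futex state machine on __lock is the core mechanism.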

What does PKCS5_PBKDF2_HMAC_SHA1 return value mean?

I'm attempting to use OpenSSL's PKCS5_PBKDF2_HMAC_SHA1 method. I gather that it returns 0 if it succeeds, and some other value otherwise. My question is, what does a non-zero return value mean? Memory error? Usage error? How should my program handle it (retry, quit?)?
Edit: A corollary question is, is there any way to figure this out besides reverse-engineering the method itself?
is there any way to figure this out besides reverse-engineering the method itself?
PKCS5_PBKDF2_HMAC_SHA1 looks like one of those undocumented functions; I can't find it in the OpenSSL docs. OpenSSL has a lot of them, so you should be prepared to study the sources if you are going to use the library.
I gather that it returns 0 if it succeeds, and some other value otherwise.
Actually, it's the reverse. Here's how I know...
$ grep -R PKCS5_PBKDF2_HMAC_SHA1 *
crypto/evp/evp.h:int PKCS5_PBKDF2_HMAC_SHA1(const char *pass, int passlen,
crypto/evp/p5_crpt2.c:int PKCS5_PBKDF2_HMAC_SHA1(const char *pass, int passlen,
...
So, you find the function's implementation in crypto/evp/p5_crpt2.c:
int PKCS5_PBKDF2_HMAC_SHA1(const char *pass, int passlen,
                           const unsigned char *salt, int saltlen, int iter,
                           int keylen, unsigned char *out)
{
    return PKCS5_PBKDF2_HMAC(pass, passlen, salt, saltlen, iter,
                             EVP_sha1(), keylen, out);
}
Following PKCS5_PBKDF2_HMAC:
$ grep -R PKCS5_PBKDF2_HMAC *
...
crypto/evp/evp.h:int PKCS5_PBKDF2_HMAC(const char *pass, int passlen,
crypto/evp/p5_crpt2.c:int PKCS5_PBKDF2_HMAC(const char *pass, int passlen,
...
And again, from crypto/evp/p5_crpt2.c:
int PKCS5_PBKDF2_HMAC(const char *pass, int passlen,
                      const unsigned char *salt, int saltlen, int iter,
                      const EVP_MD *digest,
                      int keylen, unsigned char *out)
{
    unsigned char digtmp[EVP_MAX_MD_SIZE], *p, itmp[4];
    int cplen, j, k, tkeylen, mdlen;
    unsigned long i = 1;
    HMAC_CTX hctx_tpl, hctx;

    mdlen = EVP_MD_size(digest);
    if (mdlen < 0)
        return 0;

    HMAC_CTX_init(&hctx_tpl);
    p = out;
    tkeylen = keylen;
    if (!pass)
        passlen = 0;
    else if (passlen == -1)
        passlen = strlen(pass);
    if (!HMAC_Init_ex(&hctx_tpl, pass, passlen, digest, NULL))
    {
        HMAC_CTX_cleanup(&hctx_tpl);
        return 0;
    }
    while (tkeylen)
    {
        if (tkeylen > mdlen)
            cplen = mdlen;
        else
            cplen = tkeylen;
        /* We are unlikely to ever use more than 256 blocks (5120 bits!)
         * but just in case...
         */
        itmp[0] = (unsigned char)((i >> 24) & 0xff);
        itmp[1] = (unsigned char)((i >> 16) & 0xff);
        itmp[2] = (unsigned char)((i >> 8) & 0xff);
        itmp[3] = (unsigned char)(i & 0xff);
        if (!HMAC_CTX_copy(&hctx, &hctx_tpl))
        {
            HMAC_CTX_cleanup(&hctx_tpl);
            return 0;
        }
        if (!HMAC_Update(&hctx, salt, saltlen)
            || !HMAC_Update(&hctx, itmp, 4)
            || !HMAC_Final(&hctx, digtmp, NULL))
        {
            HMAC_CTX_cleanup(&hctx_tpl);
            HMAC_CTX_cleanup(&hctx);
            return 0;
        }
        HMAC_CTX_cleanup(&hctx);
        memcpy(p, digtmp, cplen);
        for (j = 1; j < iter; j++)
        {
            if (!HMAC_CTX_copy(&hctx, &hctx_tpl))
            {
                HMAC_CTX_cleanup(&hctx_tpl);
                return 0;
            }
            if (!HMAC_Update(&hctx, digtmp, mdlen)
                || !HMAC_Final(&hctx, digtmp, NULL))
            {
                HMAC_CTX_cleanup(&hctx_tpl);
                HMAC_CTX_cleanup(&hctx);
                return 0;
            }
            HMAC_CTX_cleanup(&hctx);
            for (k = 0; k < cplen; k++)
                p[k] ^= digtmp[k];
        }
        tkeylen -= cplen;
        i++;
        p += cplen;
    }
    HMAC_CTX_cleanup(&hctx_tpl);
    return 1;
}
So it looks like 0 on failure, and 1 on success. You should not see other values. And if you get a 0, then all the OUT parameters are junk.
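As an illustration (this is my own sketch, not from the OpenSSL sources; the derive_key wrapper and its parameters are made up), a caller would treat the return value like this:
#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/err.h>

/* returns 1 on success, 0 on failure, mirroring the OpenSSL convention */
static int derive_key(const char *pass,
                      const unsigned char *salt, int saltlen,
                      unsigned char *key, int keylen)
{
    if (!PKCS5_PBKDF2_HMAC_SHA1(pass, (int)strlen(pass),
                                salt, saltlen, 10000 /* iterations */,
                                keylen, key)) {
        /* on failure, key holds junk; the error queue may or may not help */
        unsigned long err = ERR_get_error();
        if (err)
            fprintf(stderr, "PBKDF2 failed: %s\n", ERR_error_string(err, NULL));
        else
            fprintf(stderr, "PBKDF2 failed (no error code queued)\n");
        return 0;
    }
    return 1;
}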
Memory error? Usage error?
Well, sometimes you can call ERR_get_error. If you call it and the error code makes sense, then it's good. If the error code makes no sense, then it's probably not meaningful.
Sadly, that's the way I handle it, because the library is not consistent about setting error codes. For example, here's the library code to load the RDRAND engine.
Notice the code clears the error code on failure if it's a 3rd-generation Ivy Bridge (that's the capability being tested), and does not clear or set an error otherwise!
void ENGINE_load_rdrand (void)
{
    extern unsigned int OPENSSL_ia32cap_P[];

    if (OPENSSL_ia32cap_P[1] & (1<<(62-32)))
    {
        ENGINE *toadd = ENGINE_rdrand();
        if (!toadd)
            return;
        ENGINE_add(toadd);
        ENGINE_free(toadd);
        ERR_clear_error();
    }
}
How should my program handle it (retry, quit?)?
It looks like a hard failure.
Finally, that's exactly how I navigate the sources in this situation. If you don't like grep, you can try ctags or another source code browser.

enum acting like an unsigned int in Xcode 4.6 even when enum is defined as a signed int

I have only been able to recreate this bug using Xcode 4.6. Everything works as expected using Xcode 4.5.
The issue is that myVal has the correct bit pattern to represent an int value of -1. However, it shows a value of 4294967295, which is what that same bit pattern means when read as an unsigned int. You'll notice that if I cast myVal to an int it shows the correct value. This is strange, because the enum should be an int to begin with.
Here is a screenshot showing the value of all of my variables in the debugger at the end of main: http://cl.ly/image/190s0a1P1b1t
typedef enum : int {
    BTEnumValueNegOne = -1,
    BTEnumValueZero = 0,
    BTEnumValueOne = 1,
} BTEnumValue;

int main(int argc, const char *argv[])
{
    @autoreleasepool {
        // on this line of code myVal is 0
        BTEnumValue myVal = BTEnumValueZero;
        // we are adding -1 to the value of zero
        myVal += BTEnumValueNegOne;
        // at this moment myVal has the exact bit structure of a signed
        // int at -1, but it is displayed as an unsigned int, so its
        // value shows as 4294967295
        // however, if I cast the enum (which should already be a signed
        // int to begin with) to an int, it displays the correct value of -1
        int myIntVal = (int)myVal;
    }
    return 0;
}
The new, preferred way to declare enum types is with the NS_ENUM macro as explained in this post: http://nshipster.com/ns_enum-ns_options/
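As a sketch, the same enum declared with NS_ENUM looks like this; the explicit NSInteger backing type keeps the values signed:
#import <Foundation/Foundation.h>

// NS_ENUM makes the underlying type explicit (NSInteger here),
// so the values stay signed.
typedef NS_ENUM(NSInteger, BTEnumValue) {
    BTEnumValueNegOne = -1,
    BTEnumValueZero = 0,
    BTEnumValueOne = 1,
};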