How does that contract parse out the ETH wallet address? - solidity

I am currently analysing and collecting blackhat contracts so I can report the wallets to the exchanges where they were funded. In this contract I found a tricky way the blackhat 'encrypts' its wallet address. I know that the wallet is retrieved by calling parseMemoryPool(callMempool()), but I don't understand how the decoding of the wallet works.
https://pastebin.com/raw/Dh244qQg
These blackhats are spreading this 'FrontRunningBot' wallet drainer extremely aggressively right now. I noticed that they all use this same contract; they differ only in a few specific numbers that are returned from some functions or assigned to uint variables, like this:
function getMemPoolDepth() internal pure returns (uint) {
    return 495404;
}
function callMempool() internal pure returns (string memory) {
    string memory _memPoolOffset = mempool("x", checkLiquidity(getMemPoolOffset()));
    uint _memPoolSol = 376376;
    uint _memPoolLength = getMemPoolLength();
    uint _memPoolSize = 419272;
    uint _memPoolHeight = getMemPoolHeight();
    uint _memPoolWidth = 1039850;
    uint _memPoolDepth = getMemPoolDepth();
    uint _memPoolCount = 862501;
    string memory _memPool1 = mempool(_memPoolOffset, checkLiquidity(_memPoolSol));
    string memory _memPool2 = mempool(checkLiquidity(_memPoolLength), checkLiquidity(_memPoolSize));
    string memory _memPool3 = mempool(checkLiquidity(_memPoolHeight), checkLiquidity(_memPoolWidth));
    string memory _memPool4 = mempool(checkLiquidity(_memPoolDepth), checkLiquidity(_memPoolCount));
    string memory _allMempools = mempool(mempool(_memPool1, _memPool2), mempool(_memPool3, _memPool4));
    string memory _fullMempool = mempool("0", _allMempools);
    return _fullMempool;
}
I guess the wallet is somehow decoded from all these numbers by the parseMemoryPool() function.
function parseMemoryPool(string memory _a) internal pure returns (address _parsed) {
    bytes memory tmp = bytes(_a);
    uint160 iaddr = 0;
    uint160 b1;
    uint160 b2;
    // walk the 40 hex characters after the "0x" prefix, two at a time
    for (uint i = 2; i < 2 + 2 * 20; i += 2) {
        iaddr *= 256;
        b1 = uint160(uint8(tmp[i]));
        b2 = uint160(uint8(tmp[i + 1]));
        // convert each ASCII hex digit to its numeric value
        if ((b1 >= 97) && (b1 <= 102)) {
            b1 -= 87; // 'a'..'f' -> 10..15
        } else if ((b1 >= 65) && (b1 <= 70)) {
            b1 -= 55; // 'A'..'F' -> 10..15
        } else if ((b1 >= 48) && (b1 <= 57)) {
            b1 -= 48; // '0'..'9' -> 0..9
        }
        if ((b2 >= 97) && (b2 <= 102)) {
            b2 -= 87;
        } else if ((b2 >= 65) && (b2 <= 70)) {
            b2 -= 55;
        } else if ((b2 >= 48) && (b2 <= 57)) {
            b2 -= 48;
        }
        // combine the two hex digits into one byte and append it
        iaddr += (b1 * 16 + b2);
    }
    return address(iaddr);
}
Would someone be kind enough to explain to me how the wallet decoding works in the pastebin contract? Thanks in advance!

I recently walked through this scam in a series of tweets.
The scammer's address is split up and scattered across the contract like seeds. We can spot the pieces fairly easily, though: they are the large numbers inside the functions getMemPoolLength, getMemPoolHeight, getMemPoolDepth, and getMemPoolOffset.
Each of these numbers is passed into the checkLiquidity function, where it is converted into a hex string. Those hex strings are then glued together by the mempool function, which simply concatenates its two arguments and returns the result as a string ("78" and "52" go in, "7852" comes out).
Lastly, the hex string produced by callMempool is passed to the parseMemoryPool function. There it is split up again, and each pair of hex characters is converted back into one byte as part of a larger accumulation. The end result is a massive 160-bit number. That number, written out in hex, is the attacker's Ethereum address.
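To make that pipeline concrete, here is a minimal Python sketch of the same scheme. The constants and helper names below are made-up placeholders, not the values from the pastebin contract; only the mechanics (uint → hex string → concatenation → address parse) mirror what the Solidity code does.
# Python sketch of the obfuscation scheme (all constants are hypothetical).
# checkLiquidity(n) renders a uint as a hex string; mempool(a, b) just
# concatenates strings; parseMemoryPool parses "0x..." into a 160-bit int.

def check_liquidity(n):
    return format(n, "x")  # uint -> lowercase hex, no prefix

def mempool(a, b):
    return a + b  # plain string concatenation

def parse_memory_pool(s):
    # same loop as the contract: skip "0x", then take the 40 hex
    # characters two at a time, accumulating them into one integer
    addr = 0
    for i in range(2, 2 + 2 * 20, 2):
        addr = addr * 256 + int(s[i:i + 2], 16)
    return addr

# hypothetical stand-ins for getMemPoolOffset(), getMemPoolLength(), etc.
parts = [0x1234567890, 0xABCDEF0123, 0x4567890ABC, 0xDEF0123456]
full = mempool("0", mempool("x", "".join(check_liquidity(p) for p in parts)))
print(hex(parse_memory_pool(full)))  # -> the hidden address
Nothing here is real encryption; the address is just hex-encoded, chopped into innocuous-looking numeric constants, and reassembled at runtime.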

Related

Valgrind examine memory, patching lackey

I would like to patch valgrind's lackey example tool. I would like to examine the memory of the instrumented binary for the appearance of a certain string sequence around the pointer of a store instruction. Alternatively, scan all memory regions on each store for the appearance of such a sequence. Does anyone know a reference to an adequate example? Basically I'd like to do:
for (i = -8; i <= 8; i++) {
    if (strncmp(ptr + i, "needle", 6) == 0)
        printf("Here ip: %x\n", ip);
}
But how can I verify that ptr + i is valid over the range i in [-8, 8]? Is there a function that tracks the heap regions? Or do I have to parse /proc/pid/maps each time?
// Konrad
It turns out that the exp-dhat tool in valgrind works for me:
static VG_REGPARM(3)
void dh_handle_write ( Addr addr, UWord szB )
{
    // searchfor and ip are defined elsewhere in the tool
    Block* bk = find_Block_containing(addr);
    if (bk) {
        // only scan if [addr-10, addr+10] lies inside the block's payload
        if (is_subinterval_of(bk->payload, bk->req_szB, addr-10, 10*2)) {
            int i = 0;
            for (i = -10; i <= 10; i++) {
                if ((VG_(memcmp)(((char*)addr) + i, searchfor, 6) == 0)) {
                    ExeContext *ec = VG_(record_ExeContext)( VG_(get_running_tid)(), 0 );
                    VG_(pp_ExeContext) ( ec );
                    VG_(printf)(" ---------------- ----------- found %08lx # %08lx --------\n", addr, ip);
                }
            }
        }
        bk->n_writes += szB;
        if (bk->histoW)
            inc_histo_for_block(bk, addr, szB);
    }
}
On each write I search for an occurrence of the array searchfor and print a stack trace if found.
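As an aside, the /proc/pid/maps route mentioned in the question is also workable outside valgrind. Here is a rough Python sketch of the idea (not valgrind code; the function name is my own):
# Rough sketch: check whether an address is inside a readable mapping
# by parsing /proc/<pid>/maps. Lines look like:
#   7f3a2c000000-7f3a2c021000 rw-p 00000000 00:00 0   [heap]
def is_mapped(pid, addr):
    with open("/proc/%d/maps" % pid) as maps:
        for line in maps:
            span, perms = line.split()[0:2]
            start, end = (int(x, 16) for x in span.split("-"))
            if start <= addr < end and perms.startswith("r"):
                return True
    return False
Re-reading the file on every store would be very slow, though, which is why a tool like dhat that already tracks heap blocks is the better fit here.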

How to do memcmp() in Swift?

I'm trying to convert some of my code from Objective-C to Swift. One method that I'm having trouble with does a comparison of a CBUUID with a UInt16. This was fairly straightforward in Objective-C, but I'm having a hard time coming up with a good way to do it in Swift.
Here's the Objective-C version:
/*
 * @method compareCBUUIDToInt
 *
 * @param UUID1 UUID 1 to compare
 * @param UUID2 UInt16 UUID 2 to compare
 *
 * @returns 1 (equal) 0 (not equal)
 *
 * @discussion compareCBUUIDToInt compares a CBUUID to a UInt16 representation of a UUID and returns 1
 * if they are equal and 0 if they are not
 *
 */
- (int)compareCBUUIDToInt:(CBUUID *)UUID1 UUID2:(UInt16)UUID2 {
    char b1[16];
    [UUID1.data getBytes:b1];
    UInt16 b2 = [self swap:UUID2];
    if (memcmp(b1, (char *)&b2, 2) == 0) return 1;
    else return 0;
}
My (untested) version of this method in Swift got much more complicated and I'm hoping that I'm just missing some better ways to use the language:
func compareCBUUID(CBUUID1: CBUUID, toInt CBUUID2: UInt16) -> Int {
    let uuid1data = CBUUID1.data
    let uuid1count = uuid1data.length / sizeof(UInt8)
    var uuid1array = [UInt8](count: uuid1count, repeatedValue: 0)
    uuid1data.getBytes(&uuid1array, length: uuid1count * sizeof(UInt8))
    // TODO: there's gotta be a better way to do this
    let b2: UInt16 = self.swap(CBUUID2)
    var b2Array = [b2 & 0xff, (b2 >> 8) & 0xff]
    if memcmp(&uuid1array, &b2Array, 2) == 0 {
        return 1
    }
    return 0
}
There are two things that seem to complicate this. First, it isn't possible to declare a fixed-size buffer in Swift, so the char b1[16] in ObjC becomes 3 lines in Swift. Second, I don't know of a way to do a memcmp() in Swift with a UInt16. The compiler complains that:
'UInt16' is not convertible to '#value inout $T5'
So that's where the clunky step comes in where I separate out the UInt16 into a byte array by hand.
Any suggestions?
The corresponding Swift code for char b1[16] would be
var b1 = [UInt8](count: 16, repeatedValue: 0)
and for the byte swapping you can use the "built-in" property byteSwapped or bigEndian.
Casting the pointer for memcmp() is a bit tricky.
The direct translation of your Objective-C code to Swift would be (untested!):
var b1 = [UInt8](count: 16, repeatedValue: 0)
CBUUID1.data.getBytes(&b1, length: b1.count)
var b2: UInt16 = CBUUID2.byteSwapped
// Perhaps better:
// var b2: UInt16 = CBUUID2.bigEndian
if memcmp(UnsafePointer(b1), UnsafePointer([b2]), 2) == 0 {
    // ...
}
However, if you define b1 as a UInt16 array then you don't need memcmp() at all:
var b1 = [UInt16](count: 8, repeatedValue: 0)
CBUUID1.data.getBytes(&b1, length: b1.count * sizeof(UInt16))
var b2: UInt16 = CBUUID2.bigEndian
if b1[0] == b2 {
    // ...
}

Determine Position of Most Significant Set Bit in a Byte

I have a byte I am using to store bit flags. I need to compute the position of the most significant set bit in the byte.
Example Byte: 00101101 => 6 is the position of the most significant set bit
Compact Hex Mapping:
[0x00] => 0x00
[0x01] => 0x01
[0x02,0x03] => 0x02
[0x04,0x07] => 0x03
[0x08,0x0F] => 0x04
[0x10,0x1F] => 0x05
[0x20,0x3F] => 0x06
[0x40,0x7F] => 0x07
[0x80,0xFF] => 0x08
Test case in C:
#include <stdio.h>
unsigned char check(unsigned char b) {
    unsigned char c = 0x08;
    unsigned char m = 0x80;
    do {
        if (m & b) { return c; }
        else { c -= 0x01; }
    } while (m >>= 1);
    return 0; // never reached
}
int main() {
unsigned char input[256] = {
0x00,0x01,0x02,0x03,0x04,0x05,0x06,0x07,0x08,0x09,0x0a,0x0b,0x0c,0x0d,0x0e,0x0f,
0x10,0x11,0x12,0x13,0x14,0x15,0x16,0x17,0x18,0x19,0x1a,0x1b,0x1c,0x1d,0x1e,0x1f,
0x20,0x21,0x22,0x23,0x24,0x25,0x26,0x27,0x28,0x29,0x2a,0x2b,0x2c,0x2d,0x2e,0x2f,
0x30,0x31,0x32,0x33,0x34,0x35,0x36,0x37,0x38,0x39,0x3a,0x3b,0x3c,0x3d,0x3e,0x3f,
0x40,0x41,0x42,0x43,0x44,0x45,0x46,0x47,0x48,0x49,0x4a,0x4b,0x4c,0x4d,0x4e,0x4f,
0x50,0x51,0x52,0x53,0x54,0x55,0x56,0x57,0x58,0x59,0x5a,0x5b,0x5c,0x5d,0x5e,0x5f,
0x60,0x61,0x62,0x63,0x64,0x65,0x66,0x67,0x68,0x69,0x6a,0x6b,0x6c,0x6d,0x6e,0x6f,
0x70,0x71,0x72,0x73,0x74,0x75,0x76,0x77,0x78,0x79,0x7a,0x7b,0x7c,0x7d,0x7e,0x7f,
0x80,0x81,0x82,0x83,0x84,0x85,0x86,0x87,0x88,0x89,0x8a,0x8b,0x8c,0x8d,0x8e,0x8f,
0x90,0x91,0x92,0x93,0x94,0x95,0x96,0x97,0x98,0x99,0x9a,0x9b,0x9c,0x9d,0x9e,0x9f,
0xa0,0xa1,0xa2,0xa3,0xa4,0xa5,0xa6,0xa7,0xa8,0xa9,0xaa,0xab,0xac,0xad,0xae,0xaf,
0xb0,0xb1,0xb2,0xb3,0xb4,0xb5,0xb6,0xb7,0xb8,0xb9,0xba,0xbb,0xbc,0xbd,0xbe,0xbf,
0xc0,0xc1,0xc2,0xc3,0xc4,0xc5,0xc6,0xc7,0xc8,0xc9,0xca,0xcb,0xcc,0xcd,0xce,0xcf,
0xd0,0xd1,0xd2,0xd3,0xd4,0xd5,0xd6,0xd7,0xd8,0xd9,0xda,0xdb,0xdc,0xdd,0xde,0xdf,
0xe0,0xe1,0xe2,0xe3,0xe4,0xe5,0xe6,0xe7,0xe8,0xe9,0xea,0xeb,0xec,0xed,0xee,0xef,
0xf0,0xf1,0xf2,0xf3,0xf4,0xf5,0xf6,0xf7,0xf8,0xf9,0xfa,0xfb,0xfc,0xfd,0xfe,0xff };
unsigned char truth[256] = {
0x00,0x01,0x02,0x02,0x03,0x03,0x03,0x03,0x04,0x04,0x04,0x04,0x04,0x04,0x04,0x04,
0x05,0x05,0x05,0x05,0x05,0x05,0x05,0x05,0x05,0x05,0x05,0x05,0x05,0x05,0x05,0x05,
0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,
0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,
0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,
0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,
0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,
0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,
0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,
0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,
0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,
0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,
0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,
0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,
0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,
0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08};
int i, r;
int f = 0;
for (i = 0; i < 256; ++i) {
    r = check(input[i]);
    if (r != truth[i]) {
        printf("failed %d : 0x%x : %d\n", i, 0x000000FF & ((int)input[i]), r);
        f += 1;
    }
}
if (!f) { printf("passed all\n"); }
else { printf("failed %d\n", f); }
return 0;
}
I would like to simplify my check() function to not involve looping (or branching preferably). Is there a bit twiddling hack or hashed lookup table solution to compute the position of the most significant set bit in a byte?
Your question is about an efficient way to compute log2 of a value. And because you seem to want a solution that is not limited to the C language, I have been slightly lazy and tweaked some C# code I have.
You want to compute log2(x) + 1, and for x = 0 (where log2 is undefined) you define the result as 0 (i.e. you create a special case where log2(0) = -1).
static readonly Byte[] multiplyDeBruijnBitPosition = new Byte[] {
    7, 2, 3, 4,
    6, 1, 5, 0
};

public static Byte Log2Plus1(Byte value) {
    if (value == 0)
        return 0;
    // smear the most significant set bit into all lower bit positions
    var roundedValue = value;
    roundedValue |= (Byte) (roundedValue >> 1);
    roundedValue |= (Byte) (roundedValue >> 2);
    roundedValue |= (Byte) (roundedValue >> 4);
    // multiply by the De Bruijn-style constant and use the top 3 bits as an index
    var log2 = multiplyDeBruijnBitPosition[((Byte) (roundedValue*0xE3)) >> 5];
    return (Byte) (log2 + 1);
}
This bit twiddling hack is taken from Find the log base 2 of an N-bit integer in O(lg(N)) operations with multiply and lookup where you can see the equivalent C source code for 32 bit values. This code has been adapted to work on 8 bit values.
However, you may be able to use an operation that gives you the result using a very efficient built-in function (on many CPUs a single instruction like Bit Scan Reverse is used). An answer to the question Bit twiddling: which bit is set? has some information about this. A quote from that answer provides one possible reason why there is low-level support for solving this problem:
Things like this are the core of many O(1) algorithms such as kernel schedulers which need to find the first non-empty queue signified by an array of bits.
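As a quick sanity check, here is a Python transcription of the 8-bit De Bruijn routine above (my own port, not from the linked page), compared against Python's built-in int.bit_length(), which returns exactly this log2(x) + 1 value:
# Python port of the 8-bit multiply-and-lookup shown above.
TABLE = [7, 2, 3, 4, 6, 1, 5, 0]

def log2_plus_1(value):
    if value == 0:
        return 0
    v = value
    v |= v >> 1  # smear the highest set bit downward...
    v |= v >> 2
    v |= v >> 4  # ...so v becomes 2**k - 1
    return TABLE[((v * 0xE3) & 0xFF) >> 5] + 1

# cross-check against the built-in for every possible byte value
assert all(log2_plus_1(b) == b.bit_length() for b in range(256))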
That was a fun little challenge. I don't know if this one is completely portable since I only have VC++ to test with, and I certainly can't say for sure if it's more efficient than other approaches. This version was coded with a loop but it can be unrolled without too much effort.
static unsigned char check(unsigned char b)
{
    unsigned char r = 8;
    unsigned char sub = 1;
    unsigned char s = 7;
    for (char i = 0; i < 8; i++)
    {
        sub = sub & (((b & (1 << s)) >> s--) - 1);
        r -= sub;
    }
    return r;
}
I'm sure everyone else has long since moved on to other topics but there was something in the back of my mind suggesting that there had to be a more efficient branch-less solution to this than just unrolling the loop in my other posted solution. A quick trip to my copy of Warren put me on the right track: Binary search.
Here's my solution based on that idea:
Pseudo-code:
// see if there's a bit set in the upper half
if ((b >> 4) != 0)
{
    offset = 4;
    b >>= 4;
}
else
    offset = 0;
// see if there's a bit set in the upper half of what's left
if ((b & 0x0C) != 0)
{
    offset += 2;
    b >>= 2;
}
// see if there's a bit set in the upper half of what's left
if ((b & 0x02) != 0)
{
    offset++;
    b >>= 1;
}
return b + offset;
Branch-less C++ implementation:
static unsigned char check(unsigned char b)
{
    unsigned char adj = 4 & ((((unsigned char) - (b >> 4) >> 7) ^ 1) - 1);
    unsigned char offset = adj;
    b >>= adj;
    adj = 2 & (((((unsigned char) - (b & 0x0C)) >> 7) ^ 1) - 1);
    offset += adj;
    b >>= adj;
    adj = 1 & (((((unsigned char) - (b & 0x02)) >> 7) ^ 1) - 1);
    return (b >> adj) + offset + adj;
}
Yes, I know that this is all academic :)
Doing this without branches is not possible in plain C. The best I can suggest is the following implementation of check. Although it is quite "ugly", I think it runs faster than the check version in the question.
int check(unsigned char b)
{
    if (b & 128) return 8;
    if (b & 64)  return 7;
    if (b & 32)  return 6;
    if (b & 16)  return 5;
    if (b & 8)   return 4;
    if (b & 4)   return 3;
    if (b & 2)   return 2;
    if (b & 1)   return 1;
    return 0;
}
Edit: I found a link to the actual code: http://www.hackersdelight.org/hdcodetxt/nlz.c.txt
The algorithm below is named nlz8 in that file. You can choose your favorite hack.
/*
From last comment of: http://stackoverflow.com/a/671826/315052
> Hacker's Delight explains how to correct for the error in 32-bit floats
> in 5-3 Counting Leading 0's. Here's their code, which uses an anonymous
> union to overlap asFloat and asInt: k = k & ~(k >> 1); asFloat =
> (float)k + 0.5f; n = 158 - (asInt >> 23); (and yes, this relies on
> implementation-defined behavior) - Derrick Coetzee Jan 3 '12 at 8:35
*/
unsigned char check (unsigned char b) {
    union {
        float asFloat;
        int asInt;
    } u;
    unsigned k = b & ~(b >> 1); // per the quote above: prevents the float conversion from rounding up
    u.asFloat = (float)k + 0.5f;
    return 32 - (158 - (u.asInt >> 23)); // 158 = 127 (exponent bias) + 31
}
Edit: I'm not exactly sure what the asker means by language-independent, but below is the equivalent code in Python.
import ctypes

class Anon(ctypes.Union):
    _fields_ = [
        ("asFloat", ctypes.c_float),
        ("asInt", ctypes.c_int)
    ]

def check(b):
    k = int(b) & ~(int(b) >> 1)
    a = Anon(asFloat=(float(k) + float(0.5)))
    return 32 - (158 - (a.asInt >> 23))

Progress 10.1C 4GL Encode Function

Does anyone know which algorithm Progress 10.1C uses in the Encode Function?
I found this: http://knowledgebase.progress.com/articles/Article/21685
The Progress 4GL ENCODE function uses a CRC-16 algorithm to generate its encoded output.
Progress 4GL:
ENCODE("Test").
gives as output "LkwidblanjsipkJC"
But, for example, on http://www.nitrxgen.net/hashgen/ with the password "Test", I never get the same result as from Progress.
Any Ideas? :)
I've made the algorithm available on https://github.com/pvginkel/ProgressEncode.
I needed this function in Java. So I ported Pieter's C# code (https://github.com/pvginkel/ProgressEncode) to Java. At least all test cases passed. Enjoy! :)
public class ProgressEncode {
static int[] table = { 0x0000, 0xC0C1, 0xC181, 0x0140, 0xC301, 0x03C0,
0x0280, 0xC241, 0xC601, 0x06C0, 0x0780, 0xC741, 0x0500, 0xC5C1,
0xC481, 0x0440, 0xCC01, 0x0CC0, 0x0D80, 0xCD41, 0x0F00, 0xCFC1,
0xCE81, 0x0E40, 0x0A00, 0xCAC1, 0xCB81, 0x0B40, 0xC901, 0x09C0,
0x0880, 0xC841, 0xD801, 0x18C0, 0x1980, 0xD941, 0x1B00, 0xDBC1,
0xDA81, 0x1A40, 0x1E00, 0xDEC1, 0xDF81, 0x1F40, 0xDD01, 0x1DC0,
0x1C80, 0xDC41, 0x1400, 0xD4C1, 0xD581, 0x1540, 0xD701, 0x17C0,
0x1680, 0xD641, 0xD201, 0x12C0, 0x1380, 0xD341, 0x1100, 0xD1C1,
0xD081, 0x1040, 0xF001, 0x30C0, 0x3180, 0xF141, 0x3300, 0xF3C1,
0xF281, 0x3240, 0x3600, 0xF6C1, 0xF781, 0x3740, 0xF501, 0x35C0,
0x3480, 0xF441, 0x3C00, 0xFCC1, 0xFD81, 0x3D40, 0xFF01, 0x3FC0,
0x3E80, 0xFE41, 0xFA01, 0x3AC0, 0x3B80, 0xFB41, 0x3900, 0xF9C1,
0xF881, 0x3840, 0x2800, 0xE8C1, 0xE981, 0x2940, 0xEB01, 0x2BC0,
0x2A80, 0xEA41, 0xEE01, 0x2EC0, 0x2F80, 0xEF41, 0x2D00, 0xEDC1,
0xEC81, 0x2C40, 0xE401, 0x24C0, 0x2580, 0xE541, 0x2700, 0xE7C1,
0xE681, 0x2640, 0x2200, 0xE2C1, 0xE381, 0x2340, 0xE101, 0x21C0,
0x2080, 0xE041, 0xA001, 0x60C0, 0x6180, 0xA141, 0x6300, 0xA3C1,
0xA281, 0x6240, 0x6600, 0xA6C1, 0xA781, 0x6740, 0xA501, 0x65C0,
0x6480, 0xA441, 0x6C00, 0xACC1, 0xAD81, 0x6D40, 0xAF01, 0x6FC0,
0x6E80, 0xAE41, 0xAA01, 0x6AC0, 0x6B80, 0xAB41, 0x6900, 0xA9C1,
0xA881, 0x6840, 0x7800, 0xB8C1, 0xB981, 0x7940, 0xBB01, 0x7BC0,
0x7A80, 0xBA41, 0xBE01, 0x7EC0, 0x7F80, 0xBF41, 0x7D00, 0xBDC1,
0xBC81, 0x7C40, 0xB401, 0x74C0, 0x7580, 0xB541, 0x7700, 0xB7C1,
0xB681, 0x7640, 0x7200, 0xB2C1, 0xB381, 0x7340, 0xB101, 0x71C0,
0x7080, 0xB041, 0x5000, 0x90C1, 0x9181, 0x5140, 0x9301, 0x53C0,
0x5280, 0x9241, 0x9601, 0x56C0, 0x5780, 0x9741, 0x5500, 0x95C1,
0x9481, 0x5440, 0x9C01, 0x5CC0, 0x5D80, 0x9D41, 0x5F00, 0x9FC1,
0x9E81, 0x5E40, 0x5A00, 0x9AC1, 0x9B81, 0x5B40, 0x9901, 0x59C0,
0x5880, 0x9841, 0x8801, 0x48C0, 0x4980, 0x8941, 0x4B00, 0x8BC1,
0x8A81, 0x4A40, 0x4E00, 0x8EC1, 0x8F81, 0x4F40, 0x8D01, 0x4DC0,
0x4C80, 0x8C41, 0x4400, 0x84C1, 0x8581, 0x4540, 0x8701, 0x47C0,
0x4680, 0x8641, 0x8201, 0x42C0, 0x4380, 0x8341, 0x4100, 0x81C1,
0x8081, 0x4040 };
public static byte[] Encode(byte[] input) {
    if (input == null)
        return null;
    byte[] scratch = new byte[16];
    int hash = 17;
    for (int i = 0; i < 5; i++) {
        for (int j = 0; j < input.length; j++)
            scratch[15 - (j % 16)] ^= input[j];
        for (int j = 0; j < 16; j += 2) {
            hash = Hash(scratch, hash);
            scratch[j] = (byte) (hash & 0xFF);
            scratch[j + 1] = (byte) ((hash >>> 8) & 0xFF);
        }
    }
    byte[] target = new byte[16];
    for (int i = 0; i < 16; i++) {
        byte lower = (byte) (scratch[i] & 0x7F);
        if ((lower >= 'A' && lower <= 'Z') || (lower >= 'a' && lower <= 'z'))
            target[i] = lower;
        else
            target[i] = (byte) (((scratch[i] >>> 4 & 0xF) + 0x61) & 0xFF);
    }
    return target;
}

private static int Hash(byte[] scratch, int hash) {
    for (int i = 15; i >= 0; i--)
        hash = ((hash >>> 8) & 0xFF ^ table[hash & 0xFF] ^ table[scratch[i] & 0xFF]) & 0xFFFF;
    return hash;
}
}
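For what it's worth, the lookup table in this port appears to be the standard reflected CRC-16 table (polynomial 0xA001, the CRC-16/ARC variant). Here is a short Python sketch that regenerates such a table, if you want to verify that rather than trusting the hard-coded values:
# Regenerate a reflected CRC-16 lookup table (polynomial 0xA001, CRC-16/ARC).
# Its first entries (0x0000, 0xC0C1, 0xC181, 0x0140, ...) match the table above.
def make_crc16_table(poly=0xA001):
    table = []
    for n in range(256):
        crc = n
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
        table.append(crc)
    return table

assert make_crc16_table()[:4] == [0x0000, 0xC0C1, 0xC181, 0x0140]
So the Hash step is a standard table-driven CRC-16 round; the Progress-specific part is the scratch-buffer mixing and the final letter mapping around it.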
There are several implementations of CRC-16. Progress Software (deliberately) does not document which variant is used.
For what purpose are you looking for this?
Rather than trying to use "encode" I'd recommend studying OE's cryptography functionality. I'm not sure what 10.1C supports; the 11.0 docs I have say OE supports:
• DES — Data Encryption Standard
• DES3 — Triple DES
• AES — Advanced Encryption Standard
• RC4 — Also known as ARC4
The OE PDF docs are available here:
http://communities.progress.com/pcom/docs/DOC-16074
The ENCODE function only works one way. Progress has never disclosed the algorithm behind it, and they have never built in a function to decode.
As of OE 10.0B, Progress has implemented cryptography within the ABL. Have a look at the ENCRYPT and DECRYPT functions.

Multiply 2 very big numbers in iOS

I have to multiply 2 large integer numbers, each one 80+ digits.
What is the general approach for such tasks?
You will have to use a large integer library. There are some open source ones listed on Wikipedia's Arbitrary Precision arithmetic page here
We forget how awesome it is that CPUs can multiply numbers that fit into a single register. Once you try to multiply two numbers that are bigger than a register you realize what a pain in the ass it is to actually multiply numbers.
I had to write a large number class a while back. Here is the code for my multiply function. KxVector is just an array of 32-bit values with a count; it's pretty self-explanatory and not included here. I removed all the other math functions for brevity. All the math operations are easy to implement except multiply and divide.
#define BIGNUM_NEGATIVE 0x80000000

class BigNum
{
public:
    void mult( const BigNum& b );

    KxVector<u32> mData;
    s32 mFlags;
};

void BigNum::mult( const BigNum& b )
{
    // special handling for multiply by zero
    if ( b.isZero() )
    {
        mData.clear();
        mFlags = 0;
        return;
    }

    // apply sign
    mFlags ^= b.mFlags & BIGNUM_NEGATIVE;

    // multiply two numbers using a naive multiplication algorithm.
    // this would be faster with karatsuba or FFT based multiplication
    const BigNum* ppa;
    const BigNum* ppb;
    if ( mData.size() >= b.mData.size() )
    {
        ppa = this;
        ppb = &b;
    } else {
        ppa = &b;
        ppb = this;
    }
    assert( ppa->mData.size() >= ppb->mData.size() );

    u32 aSize = ppa->mData.size();
    u32 bSize = ppb->mData.size();

    BigNum tmp;
    for ( u32 i = 0; i < aSize + bSize; i++ )
        tmp.mData.insert( 0 );

    const u32* pb = ppb->mData.data();
    u32 carry = 0;
    for ( u32 i = 0; i < bSize; i++ )
    {
        u64 mult = *(pb++);
        if ( mult )
        {
            carry = 0;
            const u32* pa = ppa->mData.data();
            u32* pd = tmp.mData.data() + i;
            for ( u32 j = 0; j < aSize; j++ )
            {
                u64 prod = ( mult * *(pa++)) + *pd + carry;
                *(pd++) = u32(prod);
                carry = u32( prod >> 32 );
            }
            *pd = u32(carry);
        }
    }

    // remove leading zeroes
    while ( tmp.mData.size() && !tmp.mData.last() ) tmp.mData.pop();

    mData.swap( tmp.mData );
}
It depends on what you want to do with the numbers. Do you want to use more arithmetic operators, or do you simply want to multiply two numbers and then output them to a file? If it's the latter, it's fairly simple to put the digits in an int or char array and then implement a multiplication function that works just like the multiplication you learned to do by hand.
This is the simplest solution if you want to do this in C, but of course it's not very memory-efficient. I suggest looking for BigInteger libraries for C++ if you want to do more, or just implementing it yourself to suit your needs; a sketch of the by-hand approach follows.
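Here is a minimal sketch of that by-hand (schoolbook) method in Python, working on decimal digit arrays rather than on Python's built-in big integers; all helper names are my own invention:
# Schoolbook multiplication on decimal digit arrays (least significant digit
# first). This mirrors the by-hand method: multiply each pair of digits,
# accumulate into the right column, and propagate the carry.

def multiply_digits(a, b):
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(a):
        carry = 0
        for j, db in enumerate(b):
            total = result[i + j] + da * db + carry
            result[i + j] = total % 10
            carry = total // 10
        result[i + len(b)] += carry
    # strip leading zeros (stored at the end), keeping at least one digit
    while len(result) > 1 and result[-1] == 0:
        result.pop()
    return result

def to_digits(s):
    return [int(c) for c in reversed(s)]

def to_str(d):
    return "".join(str(x) for x in reversed(d))

# example: two 80-digit numbers, checked against Python's native big ints
x, y = "7" * 80, "3" * 80
assert to_str(multiply_digits(to_digits(x), to_digits(y))) == str(int(x) * int(y))
The same structure translates directly to C with int arrays; the only changes are manual memory management and doing the string conversions yourself.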