Progress 10.1C 4GL Encode Function - cryptography

Does anyone know which algorithm Progress 10.1C uses in the Encode Function?
I found this: http://knowledgebase.progress.com/articles/Article/21685
The Progress 4GL ENCODE function uses a CRC-16 algorithm to generate its encoded output.
Progress 4GL:
ENCODE("Test").
gives "LkwidblanjsipkJC" as output.
But, for example, on http://www.nitrxgen.net/hashgen/ with the password "Test", I never get the same result as Progress produces.
Any Ideas? :)

I've made the algorithm available on https://github.com/pvginkel/ProgressEncode.

I needed this function in Java. So I ported Pieter's C# code (https://github.com/pvginkel/ProgressEncode) to Java. At least all test cases passed. Enjoy! :)
public class ProgressEncode {
static int[] table = { 0x0000, 0xC0C1, 0xC181, 0x0140, 0xC301, 0x03C0,
0x0280, 0xC241, 0xC601, 0x06C0, 0x0780, 0xC741, 0x0500, 0xC5C1,
0xC481, 0x0440, 0xCC01, 0x0CC0, 0x0D80, 0xCD41, 0x0F00, 0xCFC1,
0xCE81, 0x0E40, 0x0A00, 0xCAC1, 0xCB81, 0x0B40, 0xC901, 0x09C0,
0x0880, 0xC841, 0xD801, 0x18C0, 0x1980, 0xD941, 0x1B00, 0xDBC1,
0xDA81, 0x1A40, 0x1E00, 0xDEC1, 0xDF81, 0x1F40, 0xDD01, 0x1DC0,
0x1C80, 0xDC41, 0x1400, 0xD4C1, 0xD581, 0x1540, 0xD701, 0x17C0,
0x1680, 0xD641, 0xD201, 0x12C0, 0x1380, 0xD341, 0x1100, 0xD1C1,
0xD081, 0x1040, 0xF001, 0x30C0, 0x3180, 0xF141, 0x3300, 0xF3C1,
0xF281, 0x3240, 0x3600, 0xF6C1, 0xF781, 0x3740, 0xF501, 0x35C0,
0x3480, 0xF441, 0x3C00, 0xFCC1, 0xFD81, 0x3D40, 0xFF01, 0x3FC0,
0x3E80, 0xFE41, 0xFA01, 0x3AC0, 0x3B80, 0xFB41, 0x3900, 0xF9C1,
0xF881, 0x3840, 0x2800, 0xE8C1, 0xE981, 0x2940, 0xEB01, 0x2BC0,
0x2A80, 0xEA41, 0xEE01, 0x2EC0, 0x2F80, 0xEF41, 0x2D00, 0xEDC1,
0xEC81, 0x2C40, 0xE401, 0x24C0, 0x2580, 0xE541, 0x2700, 0xE7C1,
0xE681, 0x2640, 0x2200, 0xE2C1, 0xE381, 0x2340, 0xE101, 0x21C0,
0x2080, 0xE041, 0xA001, 0x60C0, 0x6180, 0xA141, 0x6300, 0xA3C1,
0xA281, 0x6240, 0x6600, 0xA6C1, 0xA781, 0x6740, 0xA501, 0x65C0,
0x6480, 0xA441, 0x6C00, 0xACC1, 0xAD81, 0x6D40, 0xAF01, 0x6FC0,
0x6E80, 0xAE41, 0xAA01, 0x6AC0, 0x6B80, 0xAB41, 0x6900, 0xA9C1,
0xA881, 0x6840, 0x7800, 0xB8C1, 0xB981, 0x7940, 0xBB01, 0x7BC0,
0x7A80, 0xBA41, 0xBE01, 0x7EC0, 0x7F80, 0xBF41, 0x7D00, 0xBDC1,
0xBC81, 0x7C40, 0xB401, 0x74C0, 0x7580, 0xB541, 0x7700, 0xB7C1,
0xB681, 0x7640, 0x7200, 0xB2C1, 0xB381, 0x7340, 0xB101, 0x71C0,
0x7080, 0xB041, 0x5000, 0x90C1, 0x9181, 0x5140, 0x9301, 0x53C0,
0x5280, 0x9241, 0x9601, 0x56C0, 0x5780, 0x9741, 0x5500, 0x95C1,
0x9481, 0x5440, 0x9C01, 0x5CC0, 0x5D80, 0x9D41, 0x5F00, 0x9FC1,
0x9E81, 0x5E40, 0x5A00, 0x9AC1, 0x9B81, 0x5B40, 0x9901, 0x59C0,
0x5880, 0x9841, 0x8801, 0x48C0, 0x4980, 0x8941, 0x4B00, 0x8BC1,
0x8A81, 0x4A40, 0x4E00, 0x8EC1, 0x8F81, 0x4F40, 0x8D01, 0x4DC0,
0x4C80, 0x8C41, 0x4400, 0x84C1, 0x8581, 0x4540, 0x8701, 0x47C0,
0x4680, 0x8641, 0x8201, 0x42C0, 0x4380, 0x8341, 0x4100, 0x81C1,
0x8081, 0x4040 };
public static byte[] Encode(byte[] input) {
if (input == null)
return null;
byte[] scratch = new byte[16];
int hash = 17;
for (int i = 0; i < 5; i++) {
for (int j = 0; j < input.length; j++)
scratch[15 - (j % 16)] ^= input[j];
for (int j = 0; j < 16; j += 2) {
hash = Hash(scratch, hash);
scratch[j] = (byte) (hash & 0xFF);
scratch[j + 1] = (byte) ((hash >>> 8) & 0xFF);
}
}
byte[] target = new byte[16];
for (int i = 0; i < 16; i++) {
byte lower = (byte) (scratch[i] & 0x7F);
if ((lower >= 'A' && lower <= 'Z') || (lower >= 'a' && lower <= 'z'))
target[i] = lower;
else
target[i] = (byte) (((scratch[i] >>> 4 & 0xF) + 0x61) & 0xFF);
}
return target;
}
private static int Hash(byte[] scratch, int hash) {
for (int i = 15; i >= 0; i--)
hash = ((hash >>> 8) & 0xFF ^ table[hash & 0xFF] ^ table[scratch[i] & 0xFF]) & 0xFFFF;
return hash;
}
}
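A minimal usage sketch of the class above (assuming ASCII input, as in the ABL example; call it from any main method):
byte[] encoded = ProgressEncode.Encode("Test".getBytes(java.nio.charset.StandardCharsets.US_ASCII));
System.out.println(new String(encoded, java.nio.charset.StandardCharsets.US_ASCII));
// given that the port's test cases pass, this should print LkwidblanjsipkJC,
// matching the ABL output shown in the question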

There are several implementations of CRC-16. Progress Software (deliberately) does not document which variant is used.
For what purpose are you looking for this?
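For what it's worth, the 256-entry table in the Java port above appears to be the standard reflected CRC-16 table for polynomial 0x8005 (0xA001 bit-reversed), i.e. the CRC-16/ARC variant. A quick Java sketch to check that assumption:
int[] generated = new int[256];
for (int n = 0; n < 256; n++) {
    int crc = n;
    for (int bit = 0; bit < 8; bit++)
        crc = ((crc & 1) != 0) ? (crc >>> 1) ^ 0xA001 : crc >>> 1;
    generated[n] = crc;   // should match table[n] in the Java port above
}
If that holds, the Progress-specific behaviour comes from how the Hash loop applies the table, not from an exotic CRC variant.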

Rather than trying to use "encode", I'd recommend studying OE's cryptography functionality. I'm not sure what 10.1C supports; the 11.0 docs I have say OE supports:
• DES — Data Encryption Standard
• DES3 — Triple DES
• AES — Advanced Encryption Standard
• RC4 — Also known as ARC4
The OE PDF docs are available here:
http://communities.progress.com/pcom/docs/DOC-16074

The ENCODE function only works one way. Progress has never disclosed the algorithm behind it, and they have never built in a function to decode it.
As of OE 10.0B, Progress has implemented cryptography within the ABL. Have a look at the ENCRYPT and DECRYPT functions.
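Unlike ENCODE, ENCRYPT/DECRYPT (like symmetric ciphers in general) are reversible when you hold the key. Purely as an illustration of that round trip (a Java JCE sketch, not ABL):
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
public class RoundTrip {
    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] iv = cipher.getIV();                                  // provider-chosen random IV
        byte[] cipherText = cipher.doFinal("Test".getBytes());
        cipher.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
        System.out.println(new String(cipher.doFinal(cipherText))); // prints Test
    }
}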

Valgrind examine memory, patching lackey

I would like to patch valgrind's lackey example tool. I would like to
examine the memory of the instrumented binary for the appearance
of a certain string sequence around the pointer of a store instruction.
Alternatively, scan all memory regions on each store for the appearance
of such a sequence. Does anyone know a reference to an adequate
example? Basically I'd like to
for (i = -8; i <= 8; i++) {
    if (strncmp(ptr + i, "needle", 6) == 0)
        printf("Here ip: %x\n", ip);
}
But how can I verify that ptr over the range [-8, 8] is valid? Is there
a function that tracks the heap regions? Or do I have to parse /proc/pid/maps each time?
// Konrad
Turns out that the exp-dhat tool in valgrind works for me:
static VG_REGPARM(3)
void dh_handle_write ( Addr addr, UWord szB )
{
    Block* bk = find_Block_containing(addr);
    if (bk) {
        if (is_subinterval_of(bk->payload, bk->req_szB, addr-10, 10*2)) {
            int i = 0;
            for (i = -10; i <= 10; i++) {
                if ((VG_(memcmp)(((char*)addr) + i, searchfor, 6) == 0)) {
                    ExeContext *ec = VG_(record_ExeContext)( VG_(get_running_tid)(), 0 );
                    VG_(pp_ExeContext) ( ec );
                    VG_(printf)(" ---------------- ----------- found %08lx # %08lx --------\n", addr, ip);
                }
            }
        }
        bk->n_writes += szB;
        if (bk->histoW)
            inc_histo_for_block(bk, addr, szB);
    }
}
Each time there is a write I search for an occurrence of the searchfor array and print a stack trace if it is found...

iTextSharp can't read numbers in this PDF

I'm reading PDFs with iTextSharp 5.5.7.0. PdfTextExtractor.GetTextFromPage() works well on most files, until this one: sample PDF
I can't read any digits from it; for example, it returns only 'ANEU' from 'A0NE8U'. The characters copy out fine in Adobe Reader. Code is here:
public static string ExtractTextFromPdf(string path)
{
    using (PdfReader reader = new PdfReader(path))
    {
        StringBuilder text = new StringBuilder();
        for (int i = 1; i <= reader.NumberOfPages; i++)
        {
            text.Append(PdfTextExtractor.GetTextFromPage(reader, i));
        }
        return text.ToString();
    }
}
The font in question has a ToUnicode map which is used for text extraction. Unfortunately, though, iText(Sharp) reads it only partially, and the digits are located after the mappings that do get read.
In detail:
The cause of the issue is the implementation of AbstractCMap.addRange (I'm showing the iText Java code, as iText has the same issue and I'm more into the Java version):
void addRange(PdfString from, PdfString to, PdfObject code) {
    byte[] a1 = decodeStringToByte(from);
    byte[] a2 = decodeStringToByte(to);
    if (a1.length != a2.length || a1.length == 0)
        throw new IllegalArgumentException("Invalid map.");
    byte[] sout = null;
    if (code instanceof PdfString)
        sout = decodeStringToByte((PdfString)code);
    int start = a1[a1.length - 1] & 0xff;
    int end = a2[a2.length - 1] & 0xff;
    for (int k = start; k <= end; ++k) {
        a1[a1.length - 1] = (byte)k;
        PdfString s = new PdfString(a1);
        s.setHexWriting(true);
        if (code instanceof PdfArray) {
            addChar(s, ((PdfArray)code).getPdfObject(k - start));
        }
        else if (code instanceof PdfNumber) {
            int nn = ((PdfNumber)code).intValue() + k - start;
            addChar(s, new PdfNumber(nn));
        }
        else if (code instanceof PdfString) {
            PdfString s1 = new PdfString(sout);
            s1.setHexWriting(true);
            ++sout[sout.length - 1];
            addChar(s, s1);
        }
    }
}
The loop only considers the range in the least significant byte of from and to. Thus, for the range in question:
1 beginbfrange
<0000><01E1>[
<FFFD><FFFD><FFFD><0020><0041><0042><0043><0044>
<0045><0046><0047><0048><0049><004A><004B><004C>
...
<2248><003C><003E><2264><2265><00AC><0394><03A9>
<00B5><03C0><00B0><221E><2202><222B><221A><2211>
<220F><25CA>]
endbfrange
it only iterates from 0x00 to 0xE1, i.e. only the first 226 entries of the 482 mappings.
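Purely as an illustration (this is not the code iText later shipped), the loop could walk the complete range by treating from and to as multi-byte integers, along these lines:
// hypothetical sketch only: interpret the full byte strings as big-endian integers
int start = 0, end = 0;
for (int i = 0; i < a1.length; i++) {
    start = (start << 8) | (a1[i] & 0xff);
    end = (end << 8) | (a2[i] & 0xff);
}
for (int k = start; k <= end; ++k) {
    // write k back into a byte array of the original width
    byte[] codeBytes = new byte[a1.length];
    for (int i = a1.length - 1, v = k; i >= 0; i--, v >>= 8)
        codeBytes[i] = (byte) v;
    PdfString s = new PdfString(codeBytes);
    s.setHexWriting(true);
    // ... handle the PdfArray / PdfNumber / PdfString cases exactly as above ...
}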
There actually are some peculiar restrictions in CMaps, e.g. there may only be up to 100 separate bfrange entries in the same section, and in the alternative bfrange entry syntax
n beginbfrange
srcCode1 srcCode2 dstString
endbfrange
which is handled by the same method addRange, there is the restriction
When defining ranges of this type, the value of the last byte in the string shall be less than or equal to 255 − (srcCode2 − srcCode1).
Probably a misunderstanding of this restriction made the developer believe that srcCode1 and srcCode2 would also differ only in the least significant byte.
But maybe there are even more restrictions which I merely did not find...
Meanwhile (as of iText 5.5.9, tested against a development SNAPSHOT) this issue seems to have been fixed.

Sample implementation of Java CBCBlockCipherMac in Objective-C

Can anyone share sample code showing how to implement CBCBlockCipherMac in Objective-C? Here is how far I got, and it's giving a different result from the Java implementation.
const unsigned char key[16] = "\x1\x2\x3\x4\x5\x6\x7\x8\x9\x0\x1\x2\x3\x4\x5\x6";
const unsigned char data[14] = "\x54\x68\x69\x73\x69\x73\x6d\x79\x73\x74\x72\x69\x6e\x67";
CMAC_CTX *ctx = CMAC_CTX_new();
ret = CMAC_Init(ctx, key, sizeof(key), EVP_des_ede3(), 0);
printf("CMAC_Init = %d\n", ret);
ret = CMAC_Update(ctx, data, sizeof(data));
printf("CMAC_Update = %d\n", ret);
size_t size;
//unsigned int size;
unsigned char tag[4];
ret = CMAC_Final(ctx, tag, &size);
printf("CMAC_Final = %d, size = %u\n", ret, size);
CMAC_CTX_free(ctx);
printf("expected: 391d1520\n"
"got: ");
size_t index;
for (index = 0; index < sizeof(tag) - 1; ++index) {
printf("%02x", tag[index]);
if ((index + 1) % 4 == 0) {
printf(" ");
}
}
printf("%02x\n", tag[sizeof(tag) - 1]);
And my Java code looks like this:
String data = "Thisismystring";
String keyString = "1234567890123456";
byte[] mac = new byte[4];
CBCBlockCipherMac macCipher = new CBCBlockCipherMac(new DESedeEngine());
DESedeParameters keyParameter = new DESedeParameters(keyString.getBytes());
macCipher.init(keyParameter);
byte[] dataBytes = data.getBytes();
macCipher.update(dataBytes, 0, dataBytes.length);
macCipher.doFinal(mac, 0);
byte[] macBytesEncoded = Hex.encode(mac);
String macString = new String(macBytesEncoded);
This gives me "391d1520", but the Objective-C code gives me "01000000".
CMAC is not the same as CBC-MAC. CMAC has an additional step at the beginning and the end of the calculation. If possible I would suggest you upgrade your Java code to use CMAC, as plain CBC-MAC is not as secure, e.g. using org.bouncycastle.crypto.macs.CMac.
OpenSSL does not seem to implement CBC MAC directly (at least, I cannot find any reference to it). So if you need it, you need to implement it yourself.
You can use CBC mode encryption with a zero IV and take the last cipher block (8 bytes for 3DES) as the MAC. Of course, this means you need to store the rest of the ciphertext in a buffer somewhere, or you need to use the update functions smartly (reusing the same buffer over and over again for the ciphertext).
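If you do move the Java side to CMAC as suggested, a minimal Bouncy Castle sketch (reusing the key and data from the question) could look like the following; note that the CMAC value will differ from the CBC-MAC result 391d1520:
import org.bouncycastle.crypto.engines.DESedeEngine;
import org.bouncycastle.crypto.macs.CMac;
import org.bouncycastle.crypto.params.KeyParameter;
import org.bouncycastle.util.encoders.Hex;
public class CMacExample {
    public static void main(String[] args) {
        byte[] key = "1234567890123456".getBytes();   // 16-byte (2-key) 3DES key, as in the question
        byte[] data = "Thisismystring".getBytes();
        CMac cmac = new CMac(new DESedeEngine(), 32); // truncate to a 32-bit (4-byte) MAC
        cmac.init(new KeyParameter(key));
        cmac.update(data, 0, data.length);
        byte[] mac = new byte[cmac.getMacSize()];
        cmac.doFinal(mac, 0);
        System.out.println(new String(Hex.encode(mac)));
    }
}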

Determine Position of Most Significant Set Bit in a Byte

I have a byte I am using to store bit flags. I need to compute the position of the most significant set bit in the byte.
Example Byte: 00101101 => 6 is the position of the most significant set bit
Compact Hex Mapping:
[0x00] => 0x00
[0x01] => 0x01
[0x02,0x03] => 0x02
[0x04,0x07] => 0x03
[0x08,0x0F] => 0x04
[0x10,0x1F] => 0x05
[0x20,0x3F] => 0x06
[0x40,0x7F] => 0x07
[0x80,0xFF] => 0x08
Test case in C:
#include <stdio.h>
unsigned char check(unsigned char b) {
unsigned char c = 0x08;
unsigned char m = 0x80;
do {
if(m&b) { return c; }
else { c -= 0x01; }
} while(m>>=1);
return 0; //never reached
}
int main() {
unsigned char input[256] = {
0x00,0x01,0x02,0x03,0x04,0x05,0x06,0x07,0x08,0x09,0x0a,0x0b,0x0c,0x0d,0x0e,0x0f,
0x10,0x11,0x12,0x13,0x14,0x15,0x16,0x17,0x18,0x19,0x1a,0x1b,0x1c,0x1d,0x1e,0x1f,
0x20,0x21,0x22,0x23,0x24,0x25,0x26,0x27,0x28,0x29,0x2a,0x2b,0x2c,0x2d,0x2e,0x2f,
0x30,0x31,0x32,0x33,0x34,0x35,0x36,0x37,0x38,0x39,0x3a,0x3b,0x3c,0x3d,0x3e,0x3f,
0x40,0x41,0x42,0x43,0x44,0x45,0x46,0x47,0x48,0x49,0x4a,0x4b,0x4c,0x4d,0x4e,0x4f,
0x50,0x51,0x52,0x53,0x54,0x55,0x56,0x57,0x58,0x59,0x5a,0x5b,0x5c,0x5d,0x5e,0x5f,
0x60,0x61,0x62,0x63,0x64,0x65,0x66,0x67,0x68,0x69,0x6a,0x6b,0x6c,0x6d,0x6e,0x6f,
0x70,0x71,0x72,0x73,0x74,0x75,0x76,0x77,0x78,0x79,0x7a,0x7b,0x7c,0x7d,0x7e,0x7f,
0x80,0x81,0x82,0x83,0x84,0x85,0x86,0x87,0x88,0x89,0x8a,0x8b,0x8c,0x8d,0x8e,0x8f,
0x90,0x91,0x92,0x93,0x94,0x95,0x96,0x97,0x98,0x99,0x9a,0x9b,0x9c,0x9d,0x9e,0x9f,
0xa0,0xa1,0xa2,0xa3,0xa4,0xa5,0xa6,0xa7,0xa8,0xa9,0xaa,0xab,0xac,0xad,0xae,0xaf,
0xb0,0xb1,0xb2,0xb3,0xb4,0xb5,0xb6,0xb7,0xb8,0xb9,0xba,0xbb,0xbc,0xbd,0xbe,0xbf,
0xc0,0xc1,0xc2,0xc3,0xc4,0xc5,0xc6,0xc7,0xc8,0xc9,0xca,0xcb,0xcc,0xcd,0xce,0xcf,
0xd0,0xd1,0xd2,0xd3,0xd4,0xd5,0xd6,0xd7,0xd8,0xd9,0xda,0xdb,0xdc,0xdd,0xde,0xdf,
0xe0,0xe1,0xe2,0xe3,0xe4,0xe5,0xe6,0xe7,0xe8,0xe9,0xea,0xeb,0xec,0xed,0xee,0xef,
0xf0,0xf1,0xf2,0xf3,0xf4,0xf5,0xf6,0xf7,0xf8,0xf9,0xfa,0xfb,0xfc,0xfd,0xfe,0xff };
unsigned char truth[256] = {
0x00,0x01,0x02,0x02,0x03,0x03,0x03,0x03,0x04,0x04,0x04,0x04,0x04,0x04,0x04,0x04,
0x05,0x05,0x05,0x05,0x05,0x05,0x05,0x05,0x05,0x05,0x05,0x05,0x05,0x05,0x05,0x05,
0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,
0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,
0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,
0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,
0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,
0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,
0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,
0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,
0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,
0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,
0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,
0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,
0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,
0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08};
int i,r;
int f = 0;
for(i=0; i<256; ++i) {
r=check(input[i]);
if(r !=(truth[i])) {
printf("failed %d : 0x%x : %d\n",i,0x000000FF & ((int)input[i]),r);
f += 1;
}
}
if(!f) { printf("passed all\n"); }
else { printf("failed %d\n",f); }
return 0;
}
I would like to simplify my check() function to not involve looping (or branching preferably). Is there a bit twiddling hack or hashed lookup table solution to compute the position of the most significant set bit in a byte?
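For reference, the lookup-table route mentioned above is straightforward; here is a sketch in Java (it translates directly to C): the 256-entry table is filled once up front, and the lookup itself is branch-free.
// position of the highest set bit, 1-based; index 0 stays 0
static final int[] MSB_POS = new int[256];
static {
    for (int b = 1; b < 256; b++)
        MSB_POS[b] = MSB_POS[b >> 1] + 1;   // doubling a value raises the position by one
}
static int check(int b) {
    return MSB_POS[b & 0xFF];
}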
Your question is about an efficient way to compute log2 of a value. And because you seem to want a solution that is not limited to the C language, I have been slightly lazy and tweaked some C# code I have.
You want to compute log2(x) + 1, and for x = 0 (where log2 is undefined) you define the result as 0 (i.e. you create a special case where log2(0) = -1).
static readonly Byte[] multiplyDeBruijnBitPosition = new Byte[] {
7, 2, 3, 4,
6, 1, 5, 0
};
public static Byte Log2Plus1(Byte value) {
    if (value == 0)
        return 0;
    var roundedValue = value;
    roundedValue |= (Byte) (roundedValue >> 1);
    roundedValue |= (Byte) (roundedValue >> 2);
    roundedValue |= (Byte) (roundedValue >> 4);
    var log2 = multiplyDeBruijnBitPosition[((Byte) (roundedValue*0xE3)) >> 5];
    return (Byte) (log2 + 1);
}
This bit twiddling hack is taken from Find the log base 2 of an N-bit integer in O(lg(N)) operations with multiply and lookup where you can see the equivalent C source code for 32 bit values. This code has been adapted to work on 8 bit values.
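To see it in action on the question's example byte 0x2D (00101101): the three OR-shifts round it up to 0x3F; 0x3F * 0xE3 = 0x37DD, whose low byte 0xDD shifted right by 5 gives index 6; multiplyDeBruijnBitPosition[6] is 5, so the method returns 5 + 1 = 6, the expected position.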
However, you may be able to use an operation that gives you the result via a very efficient built-in function (on many CPUs a single instruction like Bit Scan Reverse is used). An answer to the question Bit twiddling: which bit is set? has some information about this. A quote from the answer provides one possible reason why there is low-level support for solving this problem:
Things like this are the core of many O(1) algorithms such as kernel schedulers which need to find the first non-empty queue signified by an array of bits.
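For instance, in Java the same 1-based position can be obtained from a built-in that typically JIT-compiles to such a bit-scan instruction (a small sketch, using the question's convention that 0 maps to 0):
static int check(int b) {
    // numberOfLeadingZeros(0) is 32, so this returns 0 for b == 0 and 1..8 otherwise
    return 32 - Integer.numberOfLeadingZeros(b & 0xFF);
}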
That was a fun little challenge. I don't know if this one is completely portable since I only have VC++ to test with, and I certainly can't say for sure if it's more efficient than other approaches. This version was coded with a loop but it can be unrolled without too much effort.
static unsigned char check(unsigned char b)
{
    unsigned char r = 8;
    unsigned char sub = 1;
    unsigned char s = 7;
    for (char i = 0; i < 8; i++)
    {
        sub = sub & ((( b & (1 << s)) >> s--) - 1);
        r -= sub;
    }
    return r;
}
I'm sure everyone else has long since moved on to other topics, but there was something in the back of my mind suggesting that there had to be a more efficient branch-less solution to this than just unrolling the loop in my other posted solution. A quick trip to my copy of Warren (Hacker's Delight) put me on the right track: binary search.
Here's my solution based on that idea:
Pseudo-code:
// see if there's a bit set in the upper half
if ((b >> 4) != 0)
{
    offset = 4;
    b >>= 4;
}
else
    offset = 0;
// see if there's a bit set in the upper half of what's left
if ((b & 0x0C) != 0)
{
    offset += 2;
    b >>= 2;
}
// see if there's a bit set in the upper half of what's left
if ((b & 0x02) != 0)
{
    offset++;
    b >>= 1;
}
return b + offset;
Branch-less C++ implementation:
static unsigned char check(unsigned char b)
{
    unsigned char adj = 4 & ((((unsigned char) - (b >> 4) >> 7) ^ 1) - 1);
    unsigned char offset = adj;
    b >>= adj;
    adj = 2 & (((((unsigned char) - (b & 0x0C)) >> 7) ^ 1) - 1);
    offset += adj;
    b >>= adj;
    adj = 1 & (((((unsigned char) - (b & 0x02)) >> 7) ^ 1) - 1);
    return (b >> adj) + offset + adj;
}
Yes, I know that this is all academic :)
It is not possible in plain C. The best I can suggest is the following implementation of check. Though quite "ugly", I think it runs faster than the check version in the question.
int check(unsigned char b)
{
if(b&128) return 8;
if(b&64) return 7;
if(b&32) return 6;
if(b&16) return 5;
if(b&8) return 4;
if(b&4) return 3;
if(b&2) return 2;
if(b&1) return 1;
return 0;
}
Edit: I found a link to the actual code: http://www.hackersdelight.org/hdcodetxt/nlz.c.txt
The algorithm below is named nlz8 in that file. You can choose your favorite hack.
/*
From last comment of: http://stackoverflow.com/a/671826/315052
> Hacker's Delight explains how to correct for the error in 32-bit floats
> in 5-3 Counting Leading 0's. Here's their code, which uses an anonymous
> union to overlap asFloat and asInt: k = k & ~(k >> 1); asFloat =
> (float)k + 0.5f; n = 158 - (asInt >> 23); (and yes, this relies on
> implementation-defined behavior) - Derrick Coetzee Jan 3 '12 at 8:35
*/
unsigned char check (unsigned char b) {
    union {
        float asFloat;
        int asInt;
    } u;
    unsigned k = b & ~(b >> 1);
    u.asFloat = (float)k + 0.5f;
    return 32 - (158 - (u.asInt >> 23));
}
Edit: not exactly sure what the asker means by language independent, but below is the equivalent code in Python.
import ctypes
class Anon(ctypes.Union):
    _fields_ = [
        ("asFloat", ctypes.c_float),
        ("asInt", ctypes.c_int)
    ]
def check(b):
    k = int(b) & ~(int(b) >> 1)
    a = Anon(asFloat=(float(k) + float(0.5)))
    return 32 - (158 - (a.asInt >> 23))

What's the most efficient way to access 2D seismic data

Can anyone tell me the most efficient/performant method to access 2D seismic data using Ocean?
For example, if I need to perform a calculation using data from three 2D seismic lines (all with the same geometry), is this the most efficient way?
for (int j = 0; j < seismicLine1.NumSamplesIJK.I; j++)
{
    ITrace trace1 = seismicLine1.GetTrace(j);
    ITrace trace2 = seismicLine2.GetTrace(j);
    ITrace trace3 = seismicLine3.GetTrace(j);
    for (int k = 0; k < seismicLine1.NumSamplesIJK.J; k++)
    {
        double sum = trace1[k] + trace2[k] + trace3[k];
    }
}
Thanks
A follow-up to Keith's suggestion: with .NET 4 his code could be refactored into a generic helper:
public static IEnumerable<Tuple<T1, T2, T3>> TuplesFrom<T1, T2, T3>(IEnumerable<T1> s1, IEnumerable<T2> s2, IEnumerable<T3> s3)
{
    bool m1, m2, m3; // "more" flags
    using (var e1 = s1.GetEnumerator())
    using (var e2 = s2.GetEnumerator())
    using (var e3 = s3.GetEnumerator())
    {
        // non-short-circuiting & so all three enumerators advance together
        while ((m1 = e1.MoveNext()) & (m2 = e2.MoveNext()) & (m3 = e3.MoveNext()))
            yield return Tuple.Create(e1.Current, e2.Current, e3.Current);
        if (m1 || m2 || m3)
            throw new ArgumentException("sequences of unequal lengths");
    }
}
Which gives:
foreach (var traceTuple in TuplesFrom(seismicLine1.Traces, seismicLine2.Traces, seismicLine3.Traces))
    for (int k = 0; k < maxK; ++k)
        sum = traceTuple.Item1[k] + traceTuple.Item2[k] + traceTuple.Item3[k];
What you have will work except for the two bugs I see, but it can also be made slightly faster. First the bugs. Your loops should be testing NumSamplesIJK.J not .I for the outer loop and .K, not .J for the inner loop. The .I is always 0 for 2D lines.
You can get a slight performance lift by minimizing dereferences of the NumSamplesIJK properties. Since the geometries are the same, you can hoist the J and K values into a pair of local variables and use those.
int maxJ = seismicLine1.NumSamplesIJK.J;
int maxK = seismicLine1.NumSamplesIJK.K;
for (int j = 0; j < maxJ; j++)
    ...
    for (int k = 0; k < maxK; k++)
        ...
You might also consider using the Traces enumerator instead of calling GetTrace. It will process the data in trace-ascending order. Unfortunately, with three lines the code is a bit harder to read.
int maxK = seismicLine1.NumSamplesIJK.K;
IEnumerator line2Traces = seismicLine2.Traces.GetEnumerator();
IEnumerator line3Traces = seismicLine3.Traces.GetEnumerator();
foreach (ITrace line1Trace in seismicLine1.Traces)
{
    line2Traces.MoveNext();
    line3Traces.MoveNext();
    ITrace line2Trace = (ITrace)line2Traces.Current;
    ITrace line3Trace = (ITrace)line3Traces.Current;
    for (int k = 0; k < maxK; k++)
    {
        double sum = line1Trace[k] + line2Trace[k] + line3Trace[k];
    }
}
I don't know what, if any, performance lift this might provide. You'll have to profile it to find out.
Good luck.