Calculate CRC32 in Kotlin (convert C# code to Kotlin)

Could someone help me out with converting this C# code to Kotlin?
public static UInt32 CalcCrc32(byte[] bytes, int length)
{
    // GLB_u32CalcCrc32OverBytes
    UInt32 Checksum = 0xFFFFFFFF;
    for (int i = 0; i < length; i++)
    {
        byte top = (byte)(Checksum >> 24);
        top ^= bytes[i];
        Checksum = (Checksum << 8) ^ crc_table[top];
    }
    return Checksum;
}
It computes the CRC32 of the first length bytes of bytes.
I have tried different approaches to deal with the unsigned datatypes, but I cannot get it to return the correct CRC.
This was the closest I got
Generating the crc table (taken from this repo)
private val crcTable = (0 until 256).map {
    crc32(it.toUByte(), 0x04C11DB7.toUInt())
}

private fun crc32(input: UByte, polynomial: UInt): UInt {
    val bigEndianInput = input.toBigEndianUInt()
    return (0 until 8).fold(bigEndianInput) { result, _ ->
        val isMostSignificantBitOne = result and 0x80000000.toUInt() != 0.toUInt()
        val shiftedResult = result shl 1
        when (isMostSignificantBitOne) {
            true -> shiftedResult xor polynomial
            false -> shiftedResult
        }
    }
}

private fun UByte.toBigEndianUInt(): UInt = this.toUInt() shl 24
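A quick sanity check on the generated table, assuming the polynomial-first (non-reflected) construction above: entry 1 must equal the raw polynomial, because 0x01 placed in the top byte only reaches the most significant bit on the eighth shift, xoring the polynomial in exactly once.

// e.g. in an init block or a unit test
check(crcTable[1] == 0x04C11DB7u)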
Converting the C# method to Kotlin
private fun calcCrc32(bytes: ByteArray, length: Int): UInt {
    var checksum: UInt = 0xFFFFFFFFu
    for (i in 0 until length) {
        var top = (checksum shr 24).toByte()
        top = top xor bytes[i]
        checksum = checksum shl 8 xor crcTable[top.toInt()]
    }
    return checksum
}
But this code throws an IndexOutOfBoundsException, because top is a signed Byte, so top.toInt() ends up being -1 for bytes with the high bit set.
Unit Test
import com.google.common.truth.Truth.assertThat
import com.proregia.pump.common.CrcUtil
import org.junit.Test

class CrcUtilTest {
    @Test
    fun crc16_correctByteArray_returnsCorrectCrc16() {
        val data = byteArrayOf(
            0xe8.toByte(),
            0x03.toByte(),
            0x00.toByte(),
            0x00.toByte(),
            0x3c.toByte(),
            0x00.toByte(),
            0x00.toByte(),
            0x00.toByte(),
            0x90.toByte(),
            0x01.toByte(),
            0x00.toByte(),
            0x00.toByte(),
            0x02.toByte(),
            0x00.toByte(),
            0x00.toByte()
        )
        CrcUtil.updateCrc16(data)
        assertThat(data[13]).isEqualTo(0xAD)
        assertThat(data[14]).isEqualTo(0xC1)
    }
}

Try toUByte() instead of toByte() in calcCrc32(), also applying it to the result of bytes[i].
private fun calcCrc32(bytes: ByteArray, length: Int): UInt {
    var checksum: UInt = 0xFFFFFFFFu
    val uBytes = bytes.toUByteArray()
    for (i in 0 until length) {
        var top = (checksum shr 24).toUByte()
        top = top xor uBytes[i]
        checksum = checksum shl 8 xor crcTable[top.toInt()]
    }
    return checksum
}
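With the table generated from polynomial 0x04C11DB7 as above, this routine computes the CRC-32/MPEG-2 variant, whose published check value for the ASCII bytes of "123456789" is 0x0376E6E7. A minimal sanity check (note that toUByteArray() is experimental and requires @OptIn(ExperimentalUnsignedTypes::class)):

fun main() {
    val data = "123456789".toByteArray(Charsets.US_ASCII)
    println(calcCrc32(data, data.size).toString(16)) // expect 376e6e7
}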

The JDK has a built-in class that computes the CRC32 of a given value: https://docs.oracle.com/javase/8/docs/api/java/util/zip/CRC32.html
You can take advantage of Kotlin's interoperability with Java to compute the CRC32 checksum of a given value. Note, though, that java.util.zip.CRC32 implements the standard reflected CRC-32 used by zlib and PNG, not the MPEG-2-style variant computed by the C# code in the question, so the two produce different values for the same input:
import java.util.zip.CRC32

fun calculateCRC32(value: ByteArray): Long {
    val crc32Calculator = CRC32()
    crc32Calculator.update(value)
    return crc32Calculator.value
}

println(calculateCRC32("Some text".toByteArray())) // prints 3444260633
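CRC32 can also be fed incrementally, and it can checksum a sub-range directly via update(byte[], int, int), which mirrors the length parameter of the C# method in the question. A small sketch:

import java.util.zip.CRC32

fun crc32OfRange(bytes: ByteArray, offset: Int, length: Int): Long {
    val crc = CRC32()
    crc.update(bytes, offset, length) // checksum only the given slice
    return crc.value
}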

Related

What is the most efficient way to create a kotlin.String from a range of a ByteArray?

In Java you have:
byte[] bytes = ...
int pos = ...
int length = ...
new String(bytes, pos, length)
This will create one additional byte[] internally in String.
What is the most efficient way to do that in Kotlin (i.e. with the least amount of additional objects)?
val bytes : ByteArray = ...
val pos : Int = ...
val length : Int = ...
???
val bytes: ByteArray = ByteArray(10) { ('a'..'z').toList()[it].code.toByte() }
val pos: Int = 3
val length: Int = 4
val result = String(bytes.sliceArray(pos until pos + length))
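Note that sliceArray still copies the range into a temporary ByteArray. On the JVM, Kotlin's stdlib also exposes an offset/length overload matching Java's constructor, and a ranged decodeToString (since Kotlin 1.4), both of which skip that extra copy. Reusing bytes, pos and length from above:

val direct = String(bytes, pos, length, Charsets.UTF_8) // mirrors new String(bytes, pos, length)
val decoded = bytes.decodeToString(pos, pos + length)   // start/end indices, decodes as UTF-8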

Getting extra empty user input while scanning

I'm kind of new to Kotlin programming.
I have been trying to read user input and calculate the Fibonacci series for that input.
The problem I'm facing: I have to press Enter or type some extra text before the real value is read and the result is calculated.
Here's the code snippet:
import java.util.Scanner

fun main() {
    val scanner = Scanner(System.`in`)
    print("Please enter a number: ")
    calculateFibonacciSequence(scanner.nextLong()).forEach { element -> print("$element\n") }
}

fun calculateFibonacciSequence(n: Long): Array<Long> {
    val tempArr = mutableListOf<Long>()
    var term1 = 0
    var term2 = 1
    var count = 1
    while (count <= n) {
        tempArr.add(term1.toLong())
        val s = term1 + term2
        term1 = term2
        term2 = s
        count++
    }
    val list: Array<Long> = tempArr.toTypedArray()
    return list
}
You can use readLine(). It worked for me:
val n = readLine()?.toLongOrNull() ?: error("Please enter a long number")
calculateFibonacciSequence(n)
toLongOrNull() will try to parse the String as a Long and returns null if the String does not represent a Long number.
?. is a safe call.
And ?: is the Elvis operator.
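Putting it together with the question's calculateFibonacciSequence, main might look like this (a sketch, with the prompt in English):

fun main() {
    print("Please enter a number: ")
    val n = readLine()?.trim()?.toLongOrNull() ?: error("Please enter a long number")
    calculateFibonacciSequence(n).forEach { println(it) }
}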

Kotlin - The character literal does not conform to the expected type Int

I'm struggling with types in my program. I was asked to do it in JS first, and it worked fine, but now I can't achieve the same result.
Do you think I should use a different algorithm? Thank you in advance for your time.
fun main() {
    // the idea is to put numbers in a box
    // that can't be larger than 10
    val data = "12493419133"
    var result = data[0]
    var currentBox = Character.getNumericValue(data[0])
    var i = 1
    while (i < data.length) {
        val currentArticle = Character.getNumericValue(data[i])
        currentBox += currentArticle
        println(currentBox)
        if (currentBox <= 10) {
            result += Character.getNumericValue(currentArticle)
        } else {
            result += '/'
            //var resultChar = result.toChar()
            // result += '/'
            currentBox = Character.getNumericValue(currentArticle)
            result += currentArticle
        }
        i++
    }
    print(result) // should print 124/9/341/91/33
}
result is actually of type Char, and Char's overloaded plus operator only accepts an Int, which is added to the character's code to produce a new Char:
public operator fun plus(other: Int): Char
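For example:

val next = 'a' + 1 // Char + Int: the character code is incremented, giving 'b'
// 'a' + 'b'       // does not compile: there is no Char.plus(Char) overload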
In idiomatic Kotlin, you can solve your problem like this:
fun main() {
    val data = "12493419133"
    var counter = 0
    val result = data.asSequence()
        .map(Character::getNumericValue)
        .map { c ->
            counter += c
            if (counter <= 10) c.toString() else "/$c".also { counter = c }
        }
        .joinToString("") // terminal operation, will trigger the map functions
    println(result)
}
Edit: If the data is large, you may want to use a StringBuilder, because it doesn't create a new String for every character that is iterated; and instead of maintaining the counter yourself, you can use fold():
fun main() {
    val data = "12493419133"
    val sb = StringBuilder()
    data.fold(0) { acc, c ->
        val num = Character.getNumericValue(c)
        val count = num + acc
        val ret = if (count > 10) num.also { sb.append('/') } else count
        ret.also { sb.append(c) } // ret is returned to fold and passed as acc on the next iteration
    }
    println(sb.toString())
}
If you want a result in List<Char> type:
val data = "12493419133"
val result = mutableListOf<Char>()
var sum = 0
data.asSequence().forEach {
    val v = Character.getNumericValue(it)
    sum += v
    if (sum > 10) {
        result.add('/')
        sum = v
    }
    result.add(it)
}
println(result.joinToString(""))

Swift equivalent to Objective-C FourCharCode single quote literals (e.g. 'TEXT')

I am trying to replicate some Objective-C Cocoa code in Swift. All is good until I come across the following:
// Set a new type and creator:
unsigned long type = 'TEXT';
unsigned long creator = 'pdos';
How can I create Int64s (or the correct Swift equivalent) from single quote character literals like this?
Types:
public typealias AEKeyword = FourCharCode
public typealias OSType = FourCharCode
public typealias FourCharCode = UInt32
I'm using this in my Cocoa scripting apps; it handles characters > 0x80 correctly:
func OSTypeFrom(string : String) -> UInt {
    var result : UInt = 0
    if let data = string.dataUsingEncoding(NSMacOSRomanStringEncoding) {
        let bytes = UnsafePointer<UInt8>(data.bytes)
        for i in 0..<data.length {
            result = result << 8 + UInt(bytes[i])
        }
    }
    return result
}
Edit:
Alternatively
func fourCharCodeFrom(string : String) -> FourCharCode
{
    assert(string.count == 4, "String length must be 4")
    var result : FourCharCode = 0
    for char in string.utf16 {
        result = (result << 8) + FourCharCode(char)
    }
    return result
}
or, swiftier still:
func fourCharCode(from string : String) -> FourCharCode
{
    return string.utf16.reduce(0, { $0 << 8 + FourCharCode($1) })
}
I found the following typealiases from the Swift API:
typealias FourCharCode = UInt32
typealias OSType = FourCharCode
And the following functions:
func NSFileTypeForHFSTypeCode(hfsFileTypeCode: OSType) -> String!
func NSHFSTypeCodeFromFileType(fileTypeString: String!) -> OSType
This should allow me to create the equivalent code:
let type : UInt32 = UInt32(NSHFSTypeCodeFromFileType("TEXT"))
let creator : UInt32 = UInt32(NSHFSTypeCodeFromFileType("pdos"))
But those 4-character strings don't work; the calls return 0.
If you wrap each string in ' single quotes ' and call the same functions, you will get the correct return values:
let type : UInt32 = UInt32(NSHFSTypeCodeFromFileType("'TEXT'"))
let creator : UInt32 = UInt32(NSHFSTypeCodeFromFileType("'pdos'"))
Adopt the ExpressibleByStringLiteral protocol to use four-character string literals directly:
extension FourCharCode: ExpressibleByStringLiteral {
    public init(stringLiteral value: StringLiteralType) {
        if let data = value.data(using: .macOSRoman), data.count == 4 {
            self = data.reduce(0, { $0 << 8 + Self($1) })
        } else {
            self = 0
        }
    }
}
Now you can just pass a string literal as the FourCharCode / OSType / UInt32 parameter:
let record = NSAppleEventDescriptor.record()
record.setDescriptor(NSAppleEventDescriptor(boolean: true), forKeyword: "test")
In Swift 4 or later, I use this code - if the string is not 4 characters in size, it will return an OSType(0):
extension String {
    public func osType() -> OSType {
        var result : UInt = 0
        if let data = self.data(using: .macOSRoman), data.count == 4
        {
            data.withUnsafeBytes { (ptr : UnsafePointer<UInt8>) in
                for i in 0..<data.count {
                    result = result << 8 + UInt(ptr[i])
                }
            }
        }
        return OSType(result)
    }
}
let type = "APPL".osType() // 1095782476
// check if this is OK in a playground
let hexStr = String(format: "0x%lx", type) // 0x4150504c -> "APPL" in ASCII
Swift 5 Update:
extension String {
    func osType() -> OSType {
        return OSType(
            data(using: .macOSRoman)?
                .withUnsafeBytes {
                    $0.reduce(into: UInt(0)) { $0 = $0 << 8 + UInt($1) }
                } ?? 0
        )
    }
}
Here's a simple function
func mbcc(foo: String) -> Int
{
    let chars = foo.utf8
    var result: Int = 0
    for aChar in chars
    {
        result = result << 8 + Int(aChar)
    }
    return result
}

let a = mbcc("TEXT")
print(String(format: "0x%lx", a)) // Prints 0x54455854
It will work for strings that will fit in an Int. Once they get longer it starts losing digits from the top.
If you use
result = result * 256 + Int(aChar)
you should get a crash when the string gets too big instead.
Using NSHFSTypeCodeFromFileType does work, but only for 4-character strings wrapped with single quotes, aka 6-character strings. It returns 0 for unquoted 4-character strings.
So wrap your 4-character string in ' ' before passing it to the function:
extension FourCharCode: ExpressibleByStringLiteral {
    public init(stringLiteral value: StringLiteralType) {
        switch (value.count, value.first, value.last) {
        case (6, "'", "'"):
            self = NSHFSTypeCodeFromFileType(value)
        case (4, _, _):
            self = NSHFSTypeCodeFromFileType("'\(value)'")
        default:
            self = 0
        }
    }
}
Using the above extension, you can use 4-character or single-quoted 6-character string literals:
let record = NSAppleEventDescriptor.record()
record.setDescriptor(NSAppleEventDescriptor(boolean: true), forKeyword: "4444")
record.setDescriptor(NSAppleEventDescriptor(boolean: true), forKeyword: "'6666'")
It would be even better to limit the string literal to 4-character strings at compile time. That does not seem to currently be possible, but is being discussed for Swift here:
Allow for Compile-Time Checked Intervals for Parameters Expecting Literal Values

Java - PBKDF2 with HMACSHA256 as the PRF

I've been given the task of creating a Login API for our project, and I'm supposed to use PBKDF2 with HMAC-SHA256 as the PRF. The plaintext password is hashed with MD5 and then fed into PBKDF2 to derive a key. The problem is that I can't reproduce the output the project documentation specifies.
Here's the PBKDF2 implementation in Java:
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class PBKDF2
{
    public static byte[] deriveKey( byte[] password, byte[] salt, int iterationCount, int dkLen )
        throws java.security.NoSuchAlgorithmException, java.security.InvalidKeyException
    {
        SecretKeySpec keyspec = new SecretKeySpec( password, "HmacSHA256" );
        Mac prf = Mac.getInstance( "HmacSHA256" );
        prf.init( keyspec );

        // Note: hLen, dkLen, l, r, T, F, etc. are horrible names for
        // variables and functions in this day and age, but they
        // reflect the terse symbols used in RFC 2898 to describe
        // the PBKDF2 algorithm, which improves validation of the
        // code vs. the RFC.
        //
        // dklen is expressed in bytes. (16 for a 128-bit key)

        int hLen = prf.getMacLength();   // 20 for SHA1
        int l = Math.max( dkLen, hLen);  // 1 for 128bit (16-byte) keys
        int r = dkLen - (l-1)*hLen;      // 16 for 128bit (16-byte) keys
        byte T[] = new byte[l * hLen];
        int ti_offset = 0;
        for (int i = 1; i <= l; i++) {
            F( T, ti_offset, prf, salt, iterationCount, i );
            ti_offset += hLen;
        }
        if (r < hLen) {
            // Incomplete last block
            byte DK[] = new byte[dkLen];
            System.arraycopy(T, 0, DK, 0, dkLen);
            return DK;
        }
        return T;
    }

    private static void F( byte[] dest, int offset, Mac prf, byte[] S, int c, int blockIndex ) {
        final int hLen = prf.getMacLength();
        byte U_r[] = new byte[ hLen ];
        // U0 = S || INT (i);
        byte U_i[] = new byte[S.length + 4];
        System.arraycopy( S, 0, U_i, 0, S.length );
        INT( U_i, S.length, blockIndex );
        for( int i = 0; i < c; i++ ) {
            U_i = prf.doFinal( U_i );
            xor( U_r, U_i );
        }
        System.arraycopy( U_r, 0, dest, offset, hLen );
    }

    private static void xor( byte[] dest, byte[] src ) {
        for( int i = 0; i < dest.length; i++ ) {
            dest[i] ^= src[i];
        }
    }

    private static void INT( byte[] dest, int offset, int i ) {
        dest[offset + 0] = (byte) (i / (256 * 256 * 256));
        dest[offset + 1] = (byte) (i / (256 * 256));
        dest[offset + 2] = (byte) (i / (256));
        dest[offset + 3] = (byte) (i);
    }

    // ctor
    private PBKDF2 () {}
}
I used the test vectors found here (PBKDF2-HMAC-SHA2 test vectors) to verify the correctness of the implementation, and it all checked out. I'm not sure why I can't get the same results with an MD5-hashed password.
Parameters:
Salt: 000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F
Iterations Count: 1000
DKLen: 16 (128-bit derived key)
Using "foobar" as the plaintext password, the expected results are:
PWHash = MD5(PlaintextPassword) = 3858f62230ac3c915f300c664312c63f
PWKey = PBKDF2(PWHash, Salt, IterationsCount, DKLen) = 33C37758EFA6780C5E52FAB3B50F329C
What I get:
PWHash = 3858f62230ac3c915f300c664312c63f
PWKey = 0bd0c7d8339df2c66ce4b6e1e91ed3f1
The iteration count was supposed to be 4096, not 1000.
The computation of int l seems wrong. You have taken the maximum of dkLen and hLen, but the spec says l = CEIL(dkLen / hLen), where
CEIL (x) is the "ceiling" function, i.e. the smallest integer greater than, or equal to, x.
I think l would be more accurately defined as l = (int)Math.ceil( (double)dkLen / (double)hLen )
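For cross-checking a hand-rolled implementation on the JVM, the built-in PBKDF2WithHmacSHA256 SecretKeyFactory (available since Java 8) is convenient. A minimal Kotlin sketch; note that PBEKeySpec takes the password as a char array and the key length in bits, so whether the MD5 digest is passed as raw bytes or as hex characters must match what the project documentation assumes:

import javax.crypto.SecretKeyFactory
import javax.crypto.spec.PBEKeySpec

fun pbkdf2HmacSha256(password: CharArray, salt: ByteArray, iterations: Int, dkLenBytes: Int): ByteArray {
    // keyLength is specified in bits: 16 bytes -> 128
    val spec = PBEKeySpec(password, salt, iterations, dkLenBytes * 8)
    return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256").generateSecret(spec).encoded
}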