Raw beginner with scodec here. Does scodec provide a nice way to convert an unsigned decimal integer value to a binary string of a specified length, left-padding with zeroes as needed to reach that length? If so, what would that be? Many thanks...
Sample pseudocode:
{convert(unsignedDecimalIntValue = 5, bitCount = 6) => "000101"}
Colt Frederickson kindly answered this on Gitter for the particular case in my example above:
BitVector.fromInt(5).takeRight(6).toBin
Generalizing the Int arguments to the terms of my pseudocode,
BitVector.fromInt(unsignedDecimalIntValue).takeRight(bitCount).toBin
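Putting that together as a runnable sketch (the helper name toBinaryString is mine, not part of scodec; note that BitVector.fromInt builds a 32-bit vector by default and also accepts an explicit size parameter that would do the truncation in one step):
import scodec.bits.BitVector

// Keep only the low bitCount bits of the 32-bit representation;
// for non-negative values this is exactly left-padding with zeroes.
def toBinaryString(unsignedDecimalIntValue: Int, bitCount: Int): String =
  BitVector.fromInt(unsignedDecimalIntValue).takeRight(bitCount).toBin

toBinaryString(5, 6) // "000101"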
Related
I'm trying to get a hash of a decimal value and convert it to an integer. But the query results in the following error:
Numeric value 'b902cc4550838229a710bfec4c38cbc7eb11082367a409df9135e7f007a96bda' is not recognized
SELECT (CAST(sha2(TO_VARCHAR(ABS(12.5)), 256) AS INTEGER) % 100) AS temp_value
What is the correct way to convert a hash string to an integer in Snowflake?
I cannot use any user-defined functions and have to go with Snowflake native functions.
The hash value contains alphabetic characters, so the CAST will throw an error. Commenting the cast out shows that the rest of the query runs:
SELECT --(CAST(
    sha2(
        TO_VARCHAR(
            ABS(12.5)), 256)-- AS INTEGER) % 100)
    AS temp_value;
You need to convert the hex value produced by the hash to an int.
I've not been able to find a function built into Snowflake that does this, but the following link explains how to create a JavaScript function to do the conversion for you:
https://snowflakecommunity.force.com/s/article/faq-does-snowflake-have-a-hex-to-int-type-function
If you use the function in the link, then your code becomes something like this:
SELECT (CAST(js_hextoint(sha2(TO_VARCHAR(ABS(12.5)), 256)) AS INTEGER) % 100) AS temp_value
I've not been able to test the above code I'm afraid, so there may be a bracket in the wrong place...
You have a 64-digit hexadecimal number (a SHA-256 digest is 256 bits, i.e. 64 hex digits). It's not going to fit into the maximum numeric precision of 38. You could use a floating point number, but that will lose precision.
create or replace function CONV(VALUE_IN string, OLD_BASE float, NEW_BASE float)
returns string
language javascript
as
$$
// Usage note: Loses precision for very large inputs
return parseInt(VALUE_IN, Math.floor(OLD_BASE)).toString(Math.floor(NEW_BASE));
$$;
select conv('b902cc4550838229a710bfec4c38cbc7eb11082367a409df9135e7f007a96bda', 16, 10);
--Returns 8.368282050700398e+76
For hex to integer, check Does snowflake have a ‘hex’ to ‘int’ type native function? My guess is that most people checking this question (1k views) are looking for that.
But this specific question wants to convert a sha2 digest to integer for comparison purposes. My advice for that specific question is "don't".
That's because the hex string in the question represents the integer 83682820507003986697271120393377917644380327831689630185856829040117055843290, which is far too large for Snowflake's 38-digit NUMBER type or any 64-bit integer.
Instead, just compare strings/binary to check if the values match or not.
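For example, a direct string comparison of two digests needs no numeric conversion at all (a minimal sketch reusing the literal from the question):
-- TRUE when the hashed inputs match; no integer conversion required
SELECT sha2(TO_VARCHAR(ABS(12.5)), 256) = sha2('12.5', 256) AS values_match;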
I am a developer using high level languages. I usually take the lower level details for granted.
I read that standards such as ASCII and Unicode are for character encodings. A character has to be stored as a number. Is this the same for numbers? For example, if I declare a variable in .NET like this:
Dim test As Integer = 5
In this case the value of test (5) will be represented as decimal 53 (the code for the character '5') according to this table. Is that correct?
If you code Dim test As String = "5", the value will be stored using the Unicode encoding for the character "5". However, Integers (and other numeric types) are not strings and are not encoded that way; they are represented internally by their numeric value. An Integer is stored as a 32-bit value.
What you are asking about is data representation in memory. The way integers are represented depends on whether they are signed or unsigned. If they are signed (usually the case, unless you declare the type as unsigned, e.g. UInteger), they are represented in binary in two's complement form: http://en.wikipedia.org/wiki/Two%27s_complement
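To make the distinction concrete, here is a small VB.NET sketch (my own illustration) that dumps the raw bytes of the integer 5 next to the UTF-16 bytes of the string "5":
Dim test As Integer = 5
' Raw bytes of the 32-bit value on a little-endian machine: 05-00-00-00
Debug.Print(BitConverter.ToString(BitConverter.GetBytes(test)))
' UTF-16 bytes of the string "5" (code point U+0035): 35-00
Debug.Print(BitConverter.ToString(System.Text.Encoding.Unicode.GetBytes("5")))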
I am trying to return the 2-byte WORD hex value of a string character which is not typically English, basically the Unicode representation, using VB.NET.
Ex:
FF5F = ((
FF06 = &
These are defined in the Unicode Standard 6.2. I do not have the ability to display some of the foreign-language characters in this set.
So I would like my string character to be converted to this 2-byte value. I haven't been able to find a function in .NET to do this.
The code is currently nothing more than a for loop cycling through the string characters, so no sample progress.
I have tried the AscW and ChrW functions but they do not return the 2-byte value. ASCII does not seem to be reliable above 255.
If necessary I could isolate the possible languages being tested so that only one language is considered through the comparisons, although an English character is always possible.
Any guidance would be appreciated.
I think you could convert your string to a byte array, which would look something like this in C#:
static byte[] GetBytes(string str)
{
    // .NET strings are UTF-16, so allow two bytes per character
    byte[] bytes = new byte[str.Length * sizeof(char)];
    // Copy the raw UTF-16 code units (little-endian) into the byte array
    System.Buffer.BlockCopy(str.ToCharArray(), 0, bytes, 0, bytes.Length);
    return bytes;
}
From that you can just grab the first two bytes from the array, and there you go, you have them.
If you want to show them on screen, you should probably convert them to hex or some similarly displayable format.
I've stolen this from the question here.
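A quick usage sketch (mine, equally untested): since the bytes come out little-endian, printing them in reverse order gives the WORD value the question asked for:
// U+FF06 (fullwidth ampersand) comes back as the two bytes 06 FF
byte[] b = GetBytes("\uFF06");
string hex = b[1].ToString("X2") + b[0].ToString("X2");   // "FF06"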
A colleague assisted in developing a solution. The string is converted to a character array; each character is then converted to an unsigned integer, which is converted to hex.
Dim lt As String = myString
Dim sChars() As Char = lt.ToCharArray()
For Each c As Char In sChars
    ' AscW returns the UTF-16 code point (0-65535) of the character
    Dim intVal As UInteger = CUInt(AscW(c))
    Debug.Print(c & "=" & Hex(intVal))
Next
Note the AscW function... AscW returns the Unicode code point for the input character. This can be 0 through 65535. The returned value is independent of the culture and code page settings for the current thread. http://msdn.microsoft.com/en-us/library/zew1e4wc(v=vs.90).aspx
I then compare the resulting Hex to the spec for reporting.
Converting a floating-point number to an integer using either CInt or CType will cause the value of that number to be rounded. The Int function and Math.Floor may be used to convert a floating-point number to a whole number, rounding toward negative infinity, but both functions return floating-point values which cannot be implicitly used as Integer values without a cast.
Is there a concise and idiomatic alternative to IntVar = CInt(Int(FloatingPointVar))? Pascal included Round and Trunc functions which returned Integer; is there some equivalent in either the VB.NET language or in the .NET Framework?
A similar question, CInt does not round Double value consistently - how can I remove the fractional part? was asked in 2011, but it simply asked if there was a way to convert a floating-point number to an integer; the answers suggested a two-step process, but they didn't go into any depth about what does or does not exist in the framework. I would find it hard to believe that the Framework wouldn't have something analogous to the Pascal Trunc function, given that such a thing will frequently be needed when performing graphical operations using floating-point operands [such operations need to be rendered as discrete pixels, and should be rounded in such a way that round(x)-1 = round(x-1) for all x that fit within the range of +/- (2^31-1); even if such operations are rounded, they should use Floor(x+0.5), rather than round-to-nearest-even, so as to ensure the above property]
Incidentally, in C# a typecast from Double to Int using (type)expr notation uses round-to-zero semantics; the fact that this differs from the VB.NET behavior suggests that one or both languages is using its own conversion routines rather than an explicit conversion operator included in the Framework. It would seem likely that the Framework defines such a conversion operator. Does one exist? What does it do? Is there a way to invoke it from C# and/or VB.NET?
After some searching, it seems that VB has no clean way of accomplishing that, short of writing an extension method.
The C# (int) cast translates directly into conv.i4 in IL. VB has no such operators, and no framework function seems to provide an alternative.
Usenet had an interesting discussion about this back in 2005 – of course a lot has changed since then but I think this still holds.
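For completeness, a minimal sketch of such an extension method (the name ToIntTrunc is mine):
Imports System.Runtime.CompilerServices

Module DoubleExtensions
    ' Truncate toward zero and cast, mirroring C#'s (int) conversion
    <Extension()>
    Public Function ToIntTrunc(value As Double) As Integer
        Return CInt(Math.Truncate(value))
    End Function
End Module

' Usage: Dim i As Integer = (3.7).ToIntTrunc() ' i = 3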
You can use the Math.Truncate method.
Calculates the integral part of a specified double-precision floating-point number.
For example:
Dim a As Double = 1.6666666
Dim b As Integer = CInt(Math.Truncate(a)) ' b = 1 (CInt needed under Option Strict On, since Truncate returns Double)
I know this is an old case but I saw no one suggest the Math.Round() function.
Yes, Math.Round takes a Double and returns a Double. However, it returns a number that has been rounded to a whole number, which converts easily and concisely to an Integer using CInt. Would that suffice?
CInt(Math.Round(10000.54564)) ' = 10001
CInt(Math.Round(10000.49564)) ' = 10000
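One caveat: Math.Round defaults to banker's rounding (midpoints go to the nearest even number), which the question specifically wanted to avoid; an overload takes a MidpointRounding mode:
CInt(Math.Round(0.5))                                 ' = 0 (half rounds to even)
CInt(Math.Round(1.5))                                 ' = 2
CInt(Math.Round(0.5, MidpointRounding.AwayFromZero))  ' = 1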
You may need to extract the integer part of a float number:
float num = 12.234f;
string toint = "" + num;            // e.g. "12.234" (note: the decimal separator is culture-dependent)
string[] auxil = toint.Split('.');  // split off the fractional part
int newnum = int.Parse(auxil[0]);   // "12" -> 12
At first I was trying to use characterAtIndex:, then convert it to an NSNumber, and then get the int value, but for 9 I got a value of 57. So I knew what was going wrong: I was getting the int value of the character itself.
So I read a little and found atoi, but I get this error, which doesn't crash my app, just pauses it.
My code is:
int current = atoi([startSquares characterAtIndex: i]);
Now startSquares is a big string full of numbers, and the line above is in a for loop where i goes from 0 to 99.
57 is ASCII for the digit '9'. Assuming that by the "value of the char" you mean "the numeric value of the digit the char represents", you can use the simple trick available in ASCII:
int digit = c - '0';   // c is the char holding the digit
This trick works, because all digits are encoded in order starting with the digit zero (ASCII code 48). So when you subtract '0' (which is another way to write 48) from 57, you get 9, the value of the digit '9'.
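Applied to the loop in the question, a minimal sketch (assuming startSquares is an NSString of digit characters) would be:
for (NSUInteger i = 0; i < [startSquares length]; i++) {
    unichar c = [startSquares characterAtIndex:i];
    int current = c - '0';   // '9' (57) - '0' (48) == 9
    // ... use current ...
}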
This is a bad design; you should use an int array to hold your squares.
But if you absolutely insist on sticking with your approach, dasblinkenlight's way is the way to go. Just subtract the int value of char '0' from the char that you read.
If you have a character c '9' and you want the numeric value 9, you can use c - '0'. It isn't clear that this is what you want, though.
If you have an array of char that contains a series of numbers (of more than one digit), you need to advance a pointer through that array, and then you could call atoi with that pointer, when it points at a digit (see isdigit), or you could use sscanf, or, you could put it in an NSString and get the next number using intValue. But that would give you an NSInteger, not an NSNumber. I don't think you really want an NSNumber, since you can't directly take the square of one.
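A minimal sketch of that pointer-advancing approach (the sample string is mine):
#include <ctype.h>
#include <stdlib.h>

const char *p = "12 7 103";
while (*p) {
    if (isdigit((unsigned char)*p)) {
        int value = atoi(p);                      // parses the whole run of digits
        // ... use value (e.g. square it) ...
        while (isdigit((unsigned char)*p)) p++;   // skip past the digits just read
    } else {
        p++;
    }
}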