String Hashed in both Kotlin and Golang

In service A I have a string that gets hashed like this:
fun String.toHash(): Long {
    var hashCode = this.hashCode().toLong()
    if (hashCode < 0L) {
        hashCode *= -1
    }
    return hashCode
}
I want to replicate this code in service B, which is written in Go, so that I get the exact same hash for the same word. From what I understand from Kotlin's documentation, the applied hash returns a 64-bit integer. So in Go I am doing this:
func hash(s string) int64 {
    h := fnv.New64()
    h.Write([]byte(s))
    v := h.Sum64()
    return int64(v)
}
But while unit testing this I do not get the same value. The test:
func Test_hash(t *testing.T) {
    tests := []struct {
        input  string
        output int64
    }{
        {input: "papafritas", output: 1079370635},
    }
    for _, test := range tests {
        got := hash(test.input)
        assert.Equal(t, test.output, got)
    }
}
Result:
7841672725449611742
Am I doing something wrong?

Java, and therefore Kotlin, uses a different hash function than Go does.
Possible options are:
Use a standard hash function.
Reimplement Java's String.hashCode for strings in Go (a sketch follows below).
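A minimal sketch of the second option, reproducing the Kotlin toHash() above, under the assumption that matching Java's String.hashCode() is the goal. Java hashes the string's UTF-16 code units as h = 31*h + c with 32-bit wrap-around; the helper names below (javaStringHash, toHash) are illustrative, not from the original post:

package main

import (
    "fmt"
    "unicode/utf16"
)

// javaStringHash mimics Java's String.hashCode(): h = 31*h + c over the
// string's UTF-16 code units, with 32-bit wrap-around on overflow.
func javaStringHash(s string) int32 {
    var h int32
    for _, u := range utf16.Encode([]rune(s)) {
        h = 31*h + int32(u)
    }
    return h
}

// toHash mirrors the Kotlin extension: widen to 64 bits and drop the sign.
func toHash(s string) int64 {
    h := int64(javaStringHash(s))
    if h < 0 {
        h = -h
    }
    return h
}

func main() {
    fmt.Println(toHash("papafritas"))
}

For the test input "papafritas" this prints 1079370635, the value expected in the Go test above.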


Get pointers to all fields of a struct dynamically using reflection

I'm trying to build a simple ORM layer for Go. It would take a struct and generate the cols slice, which can then be passed to the SQL function rows.Scan(cols...), which takes pointers to the fields in the struct corresponding to each of the columns it has found in the result set.
Here is my example struct
type ExampleStruct struct {
    ID     int64  `sql:"id"`
    aID    string `sql:"a_id"`
    UserID int64  `sql:"user_id"`
}
And this is my generic ORM function
func GetSqlColumnToFieldMap(model *ExampleStruct) map[string]interface{} {
    typeOfModel := reflect.TypeOf(*model)
    ValueOfModel := reflect.ValueOf(*model)
    columnToDataPointerMap := make(map[string]interface{})
    for i := 0; i < ValueOfModel.NumField(); i++ {
        sql_column := typeOfModel.Field(i).Tag.Get("sql")
        structValue := ValueOfModel.Field(i)
        columnToDataPointerMap[sql_column] = structValue.Addr()
    }
    return columnToDataPointerMap
}
Once this method works, I can use the map it generates to create an ordered list of SQL pointers according to the column names I get from the rows object.
However, I get the error below on the .Addr() method call:
panic: reflect.Value.Addr of unaddressable value [recovered]
panic: reflect.Value.Addr of unaddressable value
Is it not possible to do this?
Also, in an ideal scenario I would want the method to take an interface{} instead of *ExampleStruct so that it can be reused across different DB models.
The error says the value whose address you want to get is unaddressable. This is because even though you pass a pointer to GetSqlColumnToFieldMap(), you immediately dereference it and work with a non-pointer value from then on.
This value is wrapped in an interface{} when passed to reflect.ValueOf(), and values wrapped in interfaces are not addressable.
You must not dereference the pointer; instead, use Type.Elem() and Value.Elem() to get the element type and the pointed value.
Something like this:
func GetSqlColumnToFieldMap(model *ExampleStruct) map[string]interface{} {
    t := reflect.TypeOf(model).Elem()
    v := reflect.ValueOf(model).Elem()
    columnToDataPointerMap := make(map[string]interface{})
    for i := 0; i < v.NumField(); i++ {
        sql_column := t.Field(i).Tag.Get("sql")
        structValue := v.Field(i)
        columnToDataPointerMap[sql_column] = structValue.Addr()
    }
    return columnToDataPointerMap
}
With this simple change it works! And it doesn't depend on the parameter type: you can change it to interface{} and pass any struct pointer.
func GetSqlColumnToFieldMap(model interface{}) map[string]interface{} {
    // ...
}
Testing it:
type ExampleStruct struct {
    ID     int64  `sql:"id"`
    AID    string `sql:"a_id"`
    UserID int64  `sql:"user_id"`
}

type Point struct {
    X int `sql:"x"`
    Y int `sql:"y"`
}

func main() {
    fmt.Println(GetSqlColumnToFieldMap(&ExampleStruct{}))
    fmt.Println(GetSqlColumnToFieldMap(&Point{}))
}
Output (try it on the Go Playground):
map[a_id:<*string Value> id:<*int64 Value> user_id:<*int64 Value>]
map[x:<*int Value> y:<*int Value>]
Note that Value.Addr() returns the address wrapped in a reflect.Value. To "unwrap" the pointer, use Value.Interface():
func GetSqlColumnToFieldMap(model interface{}) map[string]interface{} {
    t := reflect.TypeOf(model).Elem()
    v := reflect.ValueOf(model).Elem()
    m := make(map[string]interface{})
    for i := 0; i < v.NumField(); i++ {
        colName := t.Field(i).Tag.Get("sql")
        field := v.Field(i)
        m[colName] = field.Addr().Interface()
    }
    return m
}
This will output (try it on the Go Playground):
map[a_id:0xc00007e008 id:0xc00007e000 user_id:0xc00007e018]
map[x:0xc000018060 y:0xc000018068]
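As a follow-up to the question's goal of feeding rows.Scan(), here is a hedged sketch, not part of the original answer, of how the returned map could be ordered by the columns a query actually returned. It assumes database/sql is imported; the scanRow helper name is illustrative:

// scanRow orders the field pointers by the result set's columns and scans
// into them; columns with no matching struct tag are discarded.
func scanRow(rows *sql.Rows, model interface{}) error {
    colToPtr := GetSqlColumnToFieldMap(model)
    cols, err := rows.Columns()
    if err != nil {
        return err
    }
    dest := make([]interface{}, len(cols))
    for i, c := range cols {
        p, ok := colToPtr[c]
        if !ok {
            p = new(interface{}) // throwaway destination for unmapped columns
        }
        dest[i] = p
    }
    return rows.Scan(dest...)
}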
For an in-depth introduction to reflection, please read the blog post The Laws of Reflection.

Go validator with sql null types?

I am having problems getting the golang validator to work with SQL null types. Here's an example of what I tried:
package main

import (
    "database/sql"
    "database/sql/driver"
    "log"

    "gopkg.in/go-playground/validator.v9"
)

// NullInt64
type NullInt64 struct {
    sql.NullInt64
    Set bool
}

func MakeNullInt64(valid bool, val int64) NullInt64 {
    n := NullInt64{}
    n.Set = true
    n.Valid = valid
    if valid {
        n.Int64 = val
    }
    return n
}

func (n *NullInt64) Value() (driver.Value, error) {
    if !n.NullInt64.Valid {
        return nil, nil
    }
    return n.NullInt64.Int64, nil
}

type Thing struct {
    N2 NullInt64 `validate:"min=10"`
    N3 int64     `validate:"min=10"`
    N4 *int64    `validate:"min=10"`
}

func main() {
    validate := validator.New()
    n := int64(6)
    number := MakeNullInt64(true, n)
    thing := Thing{number, n, &n}
    e := validate.Struct(thing)
    log.Println(e)
}
When I run this code, I only get this output:
Key: 'Thing.N3' Error:Field validation for 'N3' failed on the 'min' tag
Key: 'Thing.N4' Error:Field validation for 'N4' failed on the 'min' tag
The problem is that I want it to also show that Thing.N2 failed for the same reasons as Thing.N3 and Thing.N4.
I tried introducing the func (n *NullInt64) Value() method because it was mentioned in the documentation. But I think I misunderstood something. Can anyone tell me what I did wrong?
UPDATE
There is an Example specifically for that. You may check it out. My other proposed solution should still work though.
Since the value you are trying to validate is Int64 inside sql.NullInt64, the easiest way would be to remove the validate tag and just register a Struct Level validation using:
validate.RegisterStructValidation(NullInt64StructLevelValidation, NullInt64{})
where NullInt64StructLevelValidation is a StructLevelFunc that looks like this:
func NullInt64StructLevelValidation(sl validator.StructLevel) {
    ni := sl.Current().Interface().(NullInt64)
    if ni.NullInt64.Int64 < 10 {
        sl.ReportError(ni.NullInt64.Int64, "Int64", "", "min", "")
    }
}
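A hedged sketch of wiring this into the question's main(), assuming the types and imports from the question's program and that the validate:"min=10" tag has been removed from N2 as suggested:

func main() {
    validate := validator.New()
    validate.RegisterStructValidation(NullInt64StructLevelValidation, NullInt64{})

    n := int64(6)
    thing := Thing{MakeNullInt64(true, n), n, &n}
    // The returned error list now also contains an entry for N2's Int64,
    // alongside the N3 and N4 failures shown in the question.
    log.Println(validate.Struct(thing))
}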
Note #1: this line thing := Thing{number,&number,n,&n} has one argument too many. I assume you meant thing := Thing{number, n, &n}
Note #2: Go tooling including gofmt is considered to be one of the most powerful features of the language. Please consider using it/them.
EDIT #1:
I don't think implementing the Valuer interface is of any value in this context.

How to implement BDD practices with standard Go testing package?

I want to write tests first, then write code that makes the tests pass.
I can write tests functions like this:
func TestCheckPassword(t *testing.T) {
    isCorrect := CheckPasswordHash("test", "$2a$14$rz.gZgh9CHhXQEfLfuSeRuRrR5uraTqLChRW7/Il62KNOQI9vjO2S")
    if isCorrect != true {
        t.Errorf("Password is wrong")
    }
}
But I'd like to have more descriptive information for each test function.
For example, I am thinking about creating auth module for my app.
Now, in plain English, I can easily describe my requirements for this module:
It should accept a non-empty string as input.
The string must be from 6 to 48 characters long.
The function should return true if the password string matches the provided hash string, and false if not.
What's the way to put this information, in a form understandable by a non-technical business person, into the tests rather than just into comments?
In Go, a common way of writing tests to perform related checks is to create a slice of test cases (which is referred to as the "table" and the method as "table-driven tests"), which we simply loop over and execute one-by-one.
A test case may have arbitrary properties, which is usually modeled by an anonymous struct.
If you want to provide a description for test cases, you can add an additional field to the struct describing a test case. This will serve both as documentation of the test case and as (part of the) output in case the test case would fail.
For simplicity, let's test the following simple Abs() function:
func Abs(x int32) int32 {
    if x < 0 {
        return -x
    }
    return x
}
The implementation seems to be right and complete. If we want to write tests for this, normally we would add 2 test cases to cover the 2 possible branches: one where x is negative (x < 0), and one where x is non-negative. In reality, it's often handy and recommended to also test the special 0 input and the corner cases: the min and max values of the input.
If we think about it, this Abs() function won't even give a correct result when called with the minimum value of int32, because that is -2147483648 and its absolute value is 2147483648, which doesn't fit into int32 (the max value of int32 is 2147483647). So the above implementation will overflow and incorrectly return the negative min value as the absolute value of the negative min.
The test function below lists cases for each possible branch, plus 0 and the corner cases, with descriptions:
func TestAbs(t *testing.T) {
    cases := []struct {
        desc string // Description of the test case
        x    int32  // Input value
        exp  int32  // Expected output value
    }{
        {
            desc: "Abs of positive numbers is the same",
            x:    1,
            exp:  1,
        },
        {
            desc: "Abs of 0 is 0",
            x:    0,
            exp:  0,
        },
        {
            desc: "Abs of negative numbers is -x",
            x:    -1,
            exp:  1,
        },
        {
            desc: "Corner case testing MaxInt32",
            x:    math.MaxInt32,
            exp:  math.MaxInt32,
        },
        {
            desc: "Corner case testing MinInt32, which overflows",
            x:    math.MinInt32,
            exp:  math.MinInt32,
        },
    }
    for _, c := range cases {
        got := Abs(c.x)
        if got != c.exp {
            t.Errorf("Expected: %d, got: %d, test case: %s", c.exp, got, c.desc)
        }
    }
}
In Go, the idiomatic way to write these kinds of tests is:
func TestCheckPassword(t *testing.T) {
    tcs := []struct {
        pw   string
        hash string
        want bool
    }{
        {"test", "$2a$14$rz.gZgh9CHhXQEfLfuSeRuRrR5uraTqLChRW7/Il62KNOQI9vjO2S", true},
        {"foo", "$2a$14$rz.gZgh9CHhXQEfLfuSeRuRrR5uraTqLChRW7/Il62KNOQI9vjO2S", false},
        {"", "$2a$14$rz.gZgh9CHhXQEfLfuSeRuRrR5uraTqLChRW7/Il62KNOQI9vjO2S", false},
    }
    for _, tc := range tcs {
        got := CheckPasswordHash(tc.pw, tc.hash)
        if got != tc.want {
            t.Errorf("CheckPasswordHash(%q, %q) = %v, want %v", tc.pw, tc.hash, got, tc.want)
        }
    }
}
This is called "table-driven testing". You create a table of inputs and expected outputs, you iterate over that table and call your function and if the expected output does not match what you want, you write an error message describing the failure.
If what you want isn't as simple as comparing a return value against a golden value - for example, you want to check that either an error or a value is returned, or that a well-formed hash+salt is returned but you don't care which salt is used (as that's not part of the API) - you write additional code for that. In the end, you simply write down what properties the result should have, add some ifs to check them, and provide a descriptive error message if the result is not as expected. So, say:
func Hash(pw string) (hash string, err error) {
    // Validate input, create salt, hash thing…
}

func TestHash(t *testing.T) {
    tcs := []struct {
        pw        string
        wantError bool
    }{
        {"", true},
        {"foo", true},
        {"foobar", false},
        {"foobarbaz", true},
    }
    for _, tc := range tcs {
        got, err := Hash(tc.pw)
        if err != nil {
            if !tc.wantError {
                t.Errorf("Hash(%q) = %q, %v, want _, nil", tc.pw, got, err)
            }
            continue
        }
        if tc.wantError {
            // An error was expected but none was returned.
            t.Errorf("Hash(%q) = %q, nil, want error", tc.pw, got)
            continue
        }
        if len(got) != 52 {
            t.Errorf("Hash(%q) = %q, want 52 character string", tc.pw, got)
        }
        if !CheckPasswordHash(tc.pw, got) {
            t.Errorf("CheckPasswordHash(Hash(%q)) = false, want true", tc.pw)
        }
    }
}
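Another option, not covered above but available in the standard testing package, is to run each table entry as a named subtest with t.Run, so the plain-English description shows up directly in the go test output. A minimal sketch reusing the Abs() example from the first answer:

func TestAbsSubtests(t *testing.T) {
    cases := []struct {
        desc string
        x    int32
        exp  int32
    }{
        {"Abs of positive numbers is the same", 1, 1},
        {"Abs of negative numbers is -x", -1, 1},
    }
    for _, c := range cases {
        c := c // capture the loop variable for the closure
        t.Run(c.desc, func(t *testing.T) {
            if got := Abs(c.x); got != c.exp {
                t.Errorf("Abs(%d) = %d, want %d", c.x, got, c.exp)
            }
        })
    }
}

A failure is then reported under a readable name such as TestAbsSubtests/Abs_of_negative_numbers_is_-x, which is about as close to a BDD-style spec as the standard library gets.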
If you want a test suite with descriptive texts and contexts (like rspec for Ruby), you should check out Ginkgo: https://onsi.github.io/ginkgo/

How to test a function's output (stdout/stderr) in unit tests

I have a simple function I want to test:
func (t *Thing) print(minVerbosity int, message string) {
    if t.verbosity >= minVerbosity {
        fmt.Print(message)
    }
}
But how can I test what the function actually sends to standard output? Test::Output does what I want in Perl. I know I could write all my own boilerplate to do the same in Go (as described here):
orig := os.Stdout
r, w, _ := os.Pipe()
os.Stdout = w

thing.print(0, "Some message")

w.Close()
os.Stdout = orig

var buf bytes.Buffer
io.Copy(&buf, r)
if buf.String() != "Some message" {
    t.Error("Failure!")
}
But that's a lot of extra work for every single test. I'm hoping there's a more standard way, or perhaps an abstraction library to handle this.
One thing to also remember: there's nothing stopping you from writing functions to avoid the boilerplate.
For example I have a command line app that uses log and I wrote this function:
func captureOutput(f func()) string {
    var buf bytes.Buffer
    log.SetOutput(&buf)
    f()
    log.SetOutput(os.Stderr)
    return buf.String()
}
Then used it like this:
output := captureOutput(func() {
    client.RemoveCertificate("www.example.com")
})
assert.Equal(t, "removed certificate www.example.com\n", output)
Using this assert library: http://godoc.org/github.com/stretchr/testify/assert.
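The same helper idea can be applied to standard output itself. A hedged sketch (the captureStdout name is illustrative) that wraps the os.Pipe boilerplate from the question into a reusable function, assuming bytes, io and os are imported:

// captureStdout redirects os.Stdout to a pipe while f runs and returns
// whatever was written. Not safe for concurrent use (it swaps a package-level
// variable), and it assumes the output fits in the pipe buffer.
func captureStdout(f func()) string {
    orig := os.Stdout
    r, w, _ := os.Pipe()
    os.Stdout = w

    f()

    w.Close()
    os.Stdout = orig

    var buf bytes.Buffer
    io.Copy(&buf, r)
    return buf.String()
}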
You can do one of three things. The first is to use Examples.
The package also runs and verifies example code. Example functions may include a concluding line comment that begins with "Output:" and is compared with the standard output of the function when the tests are run. (The comparison ignores leading and trailing space.) Here is an example of an example:
func ExampleHello() {
    fmt.Println("hello")
    // Output: hello
}
The second (and more appropriate, IMO) is to use fake functions for your IO. In your code you do:
var myPrint = fmt.Print

func (t *Thing) print(minVerbosity int, message string) {
    if t.verbosity >= minVerbosity {
        myPrint(message) // N.B.
    }
}
And in your tests:
func init() {
    myPrint = fakePrint // fakePrint records everything it's supposed to print.
}

func Test...
The third is to use fmt.Fprintf with an io.Writer that is os.Stdout in production code but a bytes.Buffer in tests, as sketched below.
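A hedged sketch of that third option, assuming a Thing with a verbosity field as in the question; the out field and TestPrint are illustrative additions, and bytes, fmt, io and testing are assumed to be imported:

type Thing struct {
    verbosity int
    out       io.Writer // os.Stdout in production, a *bytes.Buffer in tests
}

func (t *Thing) print(minVerbosity int, message string) {
    if t.verbosity >= minVerbosity {
        fmt.Fprint(t.out, message)
    }
}

func TestPrint(t *testing.T) {
    var buf bytes.Buffer
    thing := &Thing{verbosity: 2, out: &buf}
    thing.print(1, "Some message")
    if got := buf.String(); got != "Some message" {
        t.Errorf("print wrote %q, want %q", got, "Some message")
    }
}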
You could consider adding a return statement to your function to return the string that is actually printed out.
func (t *Thing) print(minVerbosity int, message string) string {
    if t.verbosity >= minVerbosity {
        fmt.Print(message)
        return message
    }
    return ""
}
Now, your test could just check the returned string against an expected string (rather than the print out). Maybe a bit more in-line with Test Driven Development (TDD).
And, in your production code, nothing would need to change, since you don't have to assign the return value of a function if you don't need it.

Are dynamic variables supported?

I was wondering if it is possible to dynamically create variables in Go?
I have provided pseudo-code below to illustrate what I mean. I am storing the newly created variables in a slice:
func method() {
    slice := make([]type)
    for(i=0;i<10;i++)
    {
        var variable+i=i;
        slice := append(slice, variablei)
    }
}
At the end of the loop, the slice should contain the variables: variable1, variable2...variable9
Go has no dynamic variables.
Dynamic variables in most languages are implemented as a map (hash table).
So you can have one of the following maps in your code, which will do what you want:
var m1 map[string]int
var m2 map[string]string
var m3 map[string]interface{}
Here is Go code that does what you want:
http://play.golang.org/p/d4aKTi1OB0
package main

import "fmt"

func method() []int {
    var slice []int
    for i := 0; i < 10; i++ {
        m1 := map[string]int{}
        key := fmt.Sprintf("variable%d", i)
        m1[key] = i
        slice = append(slice, m1[key])
    }
    return slice
}

func main() {
    fmt.Println(method())
}
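If the point is to look the values up by their constructed names afterwards, a hedged variant of the program above keeps the map itself instead of flattening it into a slice:

package main

import "fmt"

func main() {
    vars := make(map[string]int)
    for i := 0; i < 10; i++ {
        vars[fmt.Sprintf("variable%d", i)] = i
    }
    fmt.Println(vars["variable3"]) // look up by a name built at run time: prints 3
}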
No; you cannot refer to local variables if you don’t know their names at compile-time.
If you need the extra indirection you can do it using pointers instead.
func function() {
    slice := []*int{}
    for i := 0; i < 10; i++ {
        variable := i
        slice = append(slice, &variable)
    }
    // slice now contains ten pointers to integers
}
Also note that the parentheses in the for loop ought to be omitted, and that putting the opening brace on a new line is a syntax error, because a semicolon is automatically inserted at the end of the for line. Creating a slice with make requires you to pass a length, hence I don't use it, since append is used anyway.