perl6 precision base4 conversion - raku

Perl 6 loses precision when converting to/from base 4. How can I retain precision?
'0.2322130120323232322110'.parse-base(4)
--> perl6 output : 0.728295262649453
--> high-precision value: 0.728295262649453434278257191181182861328125
The problem is that when converting 0.728295262649453 back to base 4, the output is not the original number.
0.72829526264945.base(4)
--> output: 0.232213012032323232210333
--> original: 0.2322130120323232322110
How can I get the same value back after a to/from conversion?

The problem is probably in the way you created your "perl6 output":
say "0.2322130120323232322110".parse-base(4) # 0.72829526264945
This is because say calls the .gist method on whatever it is given. Or you tried to stringify it (which calls .Str, which gives the same result as .gist). If you call the .perl method on the result:
say "0.2322130120323232322110".parse-base(4).perl
you do get the expected 0.728295262649453434278257191181182861328125. The .perl method returns a string that you could EVAL to get the originally given value.
In any case, if you do:
say "0.2322130120323232322110".parse-base(4).base(4)
you will see that you do get back the original value 0.2322130120323232322110. I guess this is just a case of doing it rather than saying it. :-)
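Putting the round trip together (a quick sketch; the variable name is mine, the literals are the ones from the question):
my $v = "0.2322130120323232322110".parse-base(4);
say $v.perl;       # 0.728295262649453434278257191181182861328125
say $v.base(4);    # 0.2322130120323232322110
say $v.base(4) eq "0.2322130120323232322110";   # True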
One could argue that .Str on a Rat should use .perl instead of .gist. Perhaps that should be a point of attention: it would probably have prevented you from needing to ask this question.

Related

Checking to see if an image format supports a usage in Vulkan?

If I want to see what an image format can be used for, I can call vkGetPhysicalDeviceImageFormatProperties2() and set the usage flags for the image format. I've noticed that if the format isn't supported for those usages and settings, the structure I pass in is set to all zeros, so I can tell whether the format supports those uses. So if I want to know whether VK_FORMAT_R8G8B8_UINT supports sampling from a shader, I set VK_IMAGE_USAGE_SAMPLED_BIT in the usage flags and call that function.
What I wanted to know is whether that's equivalent to calling another function, vkGetPhysicalDeviceFormatProperties2() (exactly the same name but without 'Image' in it), giving that function the format, and checking whether the VK_IMAGE_USAGE_SAMPLED_BIT is set.
So with the first method I give the format and the usages I want from it, then check whether the returned values (max width, max height, etc.) are zero, meaning those usages aren't supported; versus the second method of passing the format, getting back the flags, and then checking the flags.
Are these two methods equivalent?
TL;DR: Do your image format checking properly: ask how you can use the format, then ask what functionality is available from usable format&usage combinations.
If you call vkGetPhysicalDeviceImageFormatProperties2 with usage flags and the like that don't correspond to a supported image type, you get an error: VK_ERROR_FORMAT_NOT_SUPPORTED. It inherits this behavior because it is specified to "behave similarly to vkGetPhysicalDeviceImageFormatProperties", which has an explicit statement about this error:
If format is not a supported image format, or if the combination of format, type, tiling, usage, and flags is not supported for images, then vkGetPhysicalDeviceImageFormatProperties returns VK_ERROR_FORMAT_NOT_SUPPORTED.
Now normally, a function which gives rise to an error will yield undefined values in its return values. But there is a weird exception:
If the combination of parameters to vkGetPhysicalDeviceImageFormatProperties2 is not supported by the implementation for use in vkCreateImage, then all members of imageFormatProperties will be filled with zero.
However, there's an explicit note saying that this was old, bad behavior and is only preserved for compatibility's sake. Being a compatibility feature means that you can rely on it, but you shouldn't. Also, it only applies to the imageFormatProperties data and not any of the extension structures you can pass.
So it's best to just ignore this and ask your questions in the right order.
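Asking in that order looks roughly like this (a sketch in C; physicalDevice is assumed to be a valid VkPhysicalDevice, and the format/usage are just the ones from the question):
VkPhysicalDeviceImageFormatInfo2 info = {
    .sType  = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_IMAGE_FORMAT_INFO_2,
    .format = VK_FORMAT_R8G8B8_UINT,
    .type   = VK_IMAGE_TYPE_2D,
    .tiling = VK_IMAGE_TILING_OPTIMAL,
    .usage  = VK_IMAGE_USAGE_SAMPLED_BIT,
};
VkImageFormatProperties2 props = {
    .sType = VK_STRUCTURE_TYPE_IMAGE_FORMAT_PROPERTIES_2,
};
VkResult res = vkGetPhysicalDeviceImageFormatProperties2(physicalDevice, &info, &props);
if (res == VK_ERROR_FORMAT_NOT_SUPPORTED) {
    /* this format+usage combination cannot be used at all; don't inspect props */
} else if (res == VK_SUCCESS) {
    /* props.imageFormatProperties now describes the limits (maxExtent, maxMipLevels, ...) */
}
First ask whether the combination is usable at all (the VkResult), and only then look at what limits the usable combination gives you.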

What is SBLineEntry.GetColumn()?

SBLineEntry is a proxy object in the LLDB Python interface. SBLineEntry.GetColumn() returns a position within a line, but I am not sure what it actually means.
On the C++ side it resolves to the LineEntry.column value, but that doesn't say what unit it is measured in either.
At first I thought it was a UTF-8 code unit offset, but it seems it isn't: when I measure it, it looks like a UTF-16 code unit offset. Still, I couldn't find any definition for this value.
What is this value?
Raw byte offset in source code file?
UTF-8 code unit offset?
UTF-16 code unit offset?
Something else?
That's a good question! If the debug information is DWARF (which it is, except on Windows systems), lldb is providing the DW_LNS_set_column data from the DWARF line table as the number returned by SBLineEntry::GetColumn(). The DWARF5 specification doesn't say what this integer is counting -- it says only,
The DW_LNS_set_column opcode takes a single unsigned LEB128 operand and stores it in the column register of the state machine.
You're probably seeing that clang puts the UTF-16 code unit offset in the DWARF, but the standard doesn't require that. This would be a reasonable clarification request to file with the DWARF standards committee, http://dwarfstd.org
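For reference, the number in question surfaces in the Python API like this (a sketch to run in lldb's script interpreter; it assumes a target that is currently stopped in a frame):
frame = lldb.debugger.GetSelectedTarget().GetProcess().GetSelectedThread().GetSelectedFrame()
entry = frame.GetLineEntry()   # SBLineEntry
print(entry.GetFileSpec(), entry.GetLine(), entry.GetColumn())
Whatever unit the producer used for DW_LNS_set_column is what GetColumn() hands back.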
For Rust programs, I think it's a Unicode scalar value offset.
Here's an open issue about the column number. It says the span_start function produces the column number.
span_start calls lookup_char_pos.
lookup_char_pos calls bytepos_to_file_charpos.
These names keep repeating the word "char", and in Rust, "char" means Unicode scalar value.

Go application making SQL Query using GROUP_CONCAT on FLOATS returns []uint8 instead of actual []float64

I have a problem using GROUP_CONCAT in a query made by my Go application.
Any idea why a group_concat of FLOATS would look like a []uint8 on the Go side?
Can't seem to properly convert the suckers either.
It's definitely floats, I can see them in the raw query results, but when I do the same query in Go and try to .Scan the result, Go complains that it's a []uint8, not a []float64 (which it actually is). Attempts to convert to floats give me the wrong values (and way too many of them).
For example, at the database, I query and get 2 floats for the column in question, looks like this:
"5650.50, 5455.00"
On the Go side, however, Go sees a []uint8 instead of a []float64. Why does this happen? How does one work around this to get the actual results?
My problem is that I have to use this SQL with the GROUP_CONCAT; due to the nature of the database I am working with, this is the best way to get the information. More importantly, the query itself works great and returns the data the function needs, but now I can't read it out because of type issues. No stranger to those, but Go isn't cooperating with me today.
I'd be more than pleased to learn WHY go is doing it this way, and delighted to learn of a way to deal with it.
Example:
SELECT ID, getDistance(33.1543,-110.4353, Loc.Lat, Loc.Lng) as distance,
GROUP_CONCAT(values) FROM stuff INNER JOIN device on device.ID = stuff.ID WHERE (someConditionsETC) GROUP BY ID ORDER BY ID
The actual result, when interfacing with the actual database (not within my application), is
"5650.00, 5850.50"
It's clearly 2 floats.
The same query produces a slice of uint8 on the Go side when I try to .Scan the result in. If I range through and print those values, I get way more than 2, and they are uint8 (bytes) that look like this:
53,55,56,48,46,48,48
Not sure how Go expects me to handle this.
Solution... stupid simple and not terribly obvious:
crazyBytes := []uint8("5760.00,5750.50")
aString := string(crazyBytes)
strSlice := strings.Split(aString, ",") // string representation of our array (of floats)
var floatz []float64
for _, x := range strSlice {
    fmt.Printf("At last, Float: %s \r\n", x)
    f, err := strconv.ParseFloat(x, 64)
    if err != nil {
        fmt.Printf("Error: %s", err)
    }
    floatz = append(floatz, f)
    fmt.Printf("as float: %s \r\n", strconv.FormatFloat(f, 'f', -1, 64))
}
Yea sure, it's obvious NOW.
GROUP_CONCAT returns a string. So in Go you get a byte array of characters, not a float. The result you posted, 53,55,56,48,46,48,48, translates into the string "5780.00", which does look like one of your values. So you need to either fix your SQL to return floats or use the strings and strconv packages in Go to parse and convert your string into floats. I think the former approach is better, but it is up to you.
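If you keep the GROUP_CONCAT, the Go side boils down to scanning into a string and parsing it, roughly like this (a sketch; the column and variable names are illustrative, rows is the *sql.Rows from the query above, and the usual log/strings/strconv imports are assumed):
var id int
var distance float64
var concat string // GROUP_CONCAT(...) arrives as text
if err := rows.Scan(&id, &distance, &concat); err != nil {
    log.Fatal(err)
}
var floats []float64
for _, s := range strings.Split(concat, ",") {
    f, err := strconv.ParseFloat(strings.TrimSpace(s), 64)
    if err != nil {
        log.Fatal(err)
    }
    floats = append(floats, f)
}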

@NLconstraint with vectorized constraint JuMP/Julia

I am trying to solve a problem involving the equating of sums of exponentials.
This is how I would do it hardcoded:
@NLconstraint(m, exp(x[25]) == exp(x[14]) + exp(x[18]))
This works fine with the rest of the code. However, when I try to do it for an arbitrary set of equations like the above I get an error. Here's my code:
@NLconstraint(m, [k=1:length(LHSSum)], sum(exp.(LHSSum[k][i]) for i=1:length(LHSSum[k])) == sum(exp.(RHSSum[k][i]) for i=1:length(RHSSum[k])))
where LHSSum and RHSSum are arrays containing arrays of the elements that need to be exponentiated and then summed over. That is, LHSSum[1] = [x[1], x[2], x[3], ..., x[n]], where the x[i] are variables of type JuMP.Variable. Note that length(LHSSum) == length(RHSSum).
The error returned is:
LoadError: exp is not defined for type Variable. Are you trying to build a nonlinear problem? Make sure you use @NLconstraint/@NLobjective.
So a simple solution would be to do all the exponentiating and summing outside of the @NLconstraint macro, so that the input would be a scalar. However, this too presents a problem, since exp(x) is not defined when x is of type JuMP.Variable, whereas exp expects something of a Real type. This is strange, since I am able to calculate exponentials just fine when the function is called within an @NLconstraint(). That is, when I write @NLconstraint(m, exp(x) == exp(z) + exp(y)) instead of the earlier line, no errors are thrown.
Another thing I thought to do would be a Taylor series expansion, but this too presents a problem, since it goes into @NLconstraint land for powers greater than 2, and then I get stuck with the same vectorization problem.
So I feel stuck. I feel like if JuMP allowed for the vectorized evaluation of @NLconstraint like it does for @constraint, this would not even be an issue. Another fix would be if JuMP implemented its own exp function to allow for the exponentiation of the JuMP.Variable type. However, as it is, I don't see a way to solve this problem in general using the JuMP framework. Do any of you have any solutions to this problem? Any clever workarounds that I am missing?
I'm confused why i isn't used in the expressions you wrote. Do you mean:
@NLconstraint(m, [k = 1:length(LHSSum)],
    sum(exp(LHSSum[k][i]) for i in 1:length(LHSSum[k]))
    ==
    sum(exp(RHSSum[k][i]) for i in 1:length(RHSSum[k])))

wxWidgets - wxGrid - reading/writing non string cell values

I have a wxGrid to edit an array of numerical data.
I was wondering what the best way is to get non-string data in and out of the cells without going through string-to-numeric conversion all the time.
I've used SetCellEditor() to control the data entry.
Currently I use this:
// numeric value into cell
str.clear();
str << val1;
m_grid4->SetCellValue(row, col, str);
..
// read value from cell back into variable
val = atoi(m_grid4->GetCellValue(row, col));
Apart from the fact that atoi() is a bit ugly and a template function with a stringstream would be better, is there a way to get non-string values in and out of cells more cleanly?
I was looking at the editors and renderers but can't figure it out.
If you worry about efficiency, you almost certainly should use a custom table class deriving from wxGridTableBase instead of the default trivial wxGridStringTable implementation, which stores everything as strings. Then, and much less importantly, if it makes sense in your case, you can use wxGridCellNumberRenderer, which will call your table's GetValueAsLong() method instead of GetValue() (which returns a string).
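A sketch of what such a table might look like (class and member names here are illustrative; the overridden virtuals are the wxGridTableBase ones):
#include <wx/grid.h>
#include <vector>

class NumericTable : public wxGridTableBase
{
public:
    NumericTable(int rows, int cols) : m_rows(rows), m_cols(cols), m_data(rows * cols, 0) {}

    int GetNumberRows() override { return m_rows; }
    int GetNumberCols() override { return m_cols; }
    bool IsEmptyCell(int, int) override { return false; }

    // String access, used by the default renderer/editor:
    wxString GetValue(int row, int col) override
        { return wxString::Format("%ld", m_data[row * m_cols + col]); }
    void SetValue(int row, int col, const wxString& value) override
        { value.ToLong(&m_data[row * m_cols + col]); }

    // Typed access, used by wxGridCellNumberRenderer/wxGridCellNumberEditor:
    bool CanGetValueAs(int, int, const wxString& typeName) override
        { return typeName == wxGRID_VALUE_NUMBER; }
    bool CanSetValueAs(int, int, const wxString& typeName) override
        { return typeName == wxGRID_VALUE_NUMBER; }
    long GetValueAsLong(int row, int col) override { return m_data[row * m_cols + col]; }
    void SetValueAsLong(int row, int col, long value) override { m_data[row * m_cols + col] = value; }

private:
    int m_rows, m_cols;
    std::vector<long> m_data;
};
You would then attach it with something like m_grid4->SetTable(new NumericTable(rows, cols), true) and read/write longs directly through the table instead of converting strings per cell.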
Both of those are demonstrated in the wxGrid sample; notably, look at BugsGridTable there.
Good luck!