Is there a Perl 6 counterpart of PowerShell's get-member to "analyze" a variable (object)? - raku

Question:
Is there a Perl 6 counterpart of PowerShell's get-member to "analyse" the attributes of a variable, and if so, what is it?
Explanation:
In Perl 6 you can get properties/attributes of a variable, e.g.:
my $num=16.03;
say $num.numerator; # output: 1603
say $num.denominator; # output: 100
say $num.nude; # output: (1603 100)
say $num.WHAT; # output: (Rat)
How can I find out which attributes/properties (numerator etc.) and methods/functions (WHAT) a variable has?
In PowerShell I would pipe the variable to get-member, like:
$num | get-member
and all properties and functions would be displayed.

The best way would be to consult the docs for whatever type .WHAT told you, e.g. https://docs.perl6.org/type/Rat for Rat.
If you must have it programmatically, you can ask the object for its methods with .^methods.
> my $num = 16.03
16.03
> $num.^methods
(Rat FatRat Range atanh Bridge sign sqrt asech sin tan atan2 acosech truncate
asinh narrow base floor abs conj acosh pred new asec cosec acotan cosh ceiling
nude acos acosec sech unpolar log exp roots cotan norm sinh tanh acotanh Int
Num Real sec asin rand polymod log10 cos round REDUCE-ME succ base-repeating
cis cosech isNaN Complex cotanh atan perl WHICH Str ACCEPTS gist Bool Numeric
DUMP numerator denominator)
You can similarly see the attributes ('properties') with .^attributes, but any that you should access will have accessor methods anyway, so you shouldn't really need to do that.
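For example, a rough sketch (the exact output and attribute names can differ between Rakudo versions):
my $num = 16.03;
say $num.^attributes;               # e.g. (Int $!numerator Int $!denominator)
say $num.^attributes.map(*.name);   # e.g. ($!numerator $!denominator)
say $num.^methods(:all);            # :all also lists methods inherited from Any and Mu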


gnuplot 'set title' with sprintf : representing angle in terms of fractions of pi

I'd like to run a gnuplot .inp file so that all the angles in the script show up automatically in the title as fractions based on the Greek letter pi, instead of in decimal form. I already know how to use {/Symbol p}, but that is a manual intervention which is impractical in this case.
I have an example sprintf line in a gnuplot input file which can produce nice title information:
angle=( (3*pi) /4 )
set title sprintf ("the angle is %g radians", angle)
plot sin(x)
... the output file (e.g. svg) or terminal (e.g. wxt) shows "2.35619", which is correct; however, it would be nice to see the Greek letter for pi and the fraction itself, as is typically read off a polar plot, e.g. "3/4 pi". Likewise for more complex or interesting representations of pi, such as "square root of two over two".
I already know I can go into the file and type in "3{/Symbol p}/4" by hand, but this needs to be done automatically, because the actual title I am working with has numerous instances of pi showing up as a result of setting an angle.
I tried searching for examples of gnuplot being used with sprintf to produce the format of the angle I am interested in, and could not find anything. I am not aware of sprintf being capable of this. So if this is in fact impossible with gnuplot and sprintf, it would be helpful to know. Any tips on what to try next are appreciated.
UPDATE: not a solution, but very interesting and it might help:
use sprintf after the 'plot' command to set the title that appears in the key (but not the overall title):
gnuplot setting line titles by variables
so for example here, the idea would be:
foo=20
plot sin(x)+foo t sprintf ("The angle is set to %g", foo)
Here is an attempt to define a function to find fractions of Pi.
Basically, sum (check help sum) is used to find suitable multiples/fractions of Pi within a certain tolerance (here: 0.0001). Denominators up to 32 are tested. If no integer multiple is found, the number itself is returned.
In principle, the function could be extended to find multiples or fractions of roots, sqrt(2) or sqrt(3), etc.
This approach can certainly be improved; maybe there are smarter solutions.
Script:
### format number as multiple of pi
reset session
$Data <<EOD
1.5707963267949
-1.5707963267949
6.28318530717959
2.35619449019234
2.0943951023932
-0.98174770424681
2.24399475256414
1.0
1.04
1.047
1.0471
1.04719
EOD
set xrange[-10:10]
set yrange[:] reverse
set offset 0.25,0.25,0.25,0.25
set key noautotitle
dx = 0.0001
fPi(x) = (_x=x/pi, _p=sprintf("%g",x), _d=NaN, sum [_i=1:32] \
(_d!=_d && (abs(_x*_i - floor(_x*_i+dx)) < dx) ? \
(_n=floor(_x*_i+dx),_d=_i, \
_p=sprintf("%sπ%s",abs(_n)==1?_n<0?'-':'':sprintf("%d",_n),\
abs(_d)==1 ? '' : sprintf("/%d",_d)),0) : 0 ), _p)
plot $Data u (0):0:(fPi($1)) w labels font "Times New Roman, 16"
### end of script
Result: (the resulting plot, with each number rendered as a multiple or fraction of π, is omitted here)
I have [1] a workaround below that might be feasible, and [2] apparently what I was looking for below that (I am writing this in haste). I will mark the question "answered" anyway. To avoid reproducing theozh's script, I offer:
[1]:
add three lines to theozh's script - ideally, immediately before the 'plot' command:
set title sprintf ("Test: %g $\\sqrt{\\pi \\pi \\pi \\pi}$", pi)
set terminal tikz standalone
set output 'gnuplot_test.tex'
One can observe a little testing going on with nonsensical expressions of pi (it is just to see the vinculum extend; this is a hasty thing), and the double escapes appear to have made it to Stack Overflow correctly.
change the 'plot' line to remove the Times Roman part, but this might not be necessary:
plot $Data u (0):0:(fPi($1)) w labels
importantly, edit gnuplot_test.tex so an \end{document} is on the last line.
run 'pdflatex gnuplot_test.tex'.
This should help move things along; it appears the best approach is to go into the LaTeX world for this - thanks. I tried cairolatex pdf and eps but I was very confused by the LaTeX output. The tikz terminal works almost perfectly.
[2]: What I was looking for: put this below the fPi(x) expression in gnuplot:
set title sprintf ("Testing : \n wxt terminal : \
%g %s %s %s \n tikz output : $\\sqrt{\\pi \\pi \\pi \\pi}$", \
pi, fPi(myAngle01), fPi(myAngle02), fPi(myAngle03) )
# set terminal tikz standalone
# set output 'gnuplot_test.tex'
plot $Data u (0):0:(fPi($1)) w labels t sprintf ("{/Symbol p}= %g, %s, %s, %s, %s", \
pi, fPi(pi), fPi(myAngle01), fPi(myAngle02), fPi(myAngle03) )
... the wxt terminal displays the angles as fractions of pi. I didn't test the output in the LaTeX pipeline; remove it if undesired. I think the gnuplot script has to be written for the desired terminal or output, but at least the values can be computed instead of writing them in "manually".

Use of selected_real_kind in a module [duplicate]

I have two modules which are used by the main program. The first one has all the variables defined in it and the second one contains the functions.
Module 1:
module zmienne
implicit none
integer, parameter :: ngauss = 8
integer, parameter :: out_unit=1000
integer, parameter :: out_unit1=1001
integer, parameter :: out_unit2=1002, out_unit3=1003
real(10), parameter :: error=0.000001
real(10):: total_calka, division,tot_old,blad
real(10),parameter:: intrange=7.0
real(10),dimension(ngauss),parameter::xx=(/-0.9602898565d0,&
-0.7966664774d0,-0.5255324099d0,-0.1834346425d0,&
0.1834346425d0,0.5255324099d0,0.7966664774d0,0.9602898565d0/)
real(10),Dimension(ngauss),parameter::ww=(/0.1012285363d0,&
0.2223810345d0,0.3137066459d0,0.3626837834d0,&
0.3626837834d0,0.3137066459d0,0.2223810345d0,0.1012285363d0/)
real(10) :: r, u, r6, tempred, f, r2, r1, calka,beta
real(10) :: inte
real :: start, finish
integer:: i,j,irange
real(10),dimension(ngauss)::x,w,integrand
end module zmienne
Module 2:
module in
implicit none
contains
real(10) function inte(y,beta,r2,r1)
real(kind=10)::r,beta,r6,r2,r1,u,y
r=(r2-r1)*y+r1
r6=(1.0/r)**6
u=beta*r6*(r6-1.0d0)
if (u>100.d0) then
inte=-1.0d0
else
inte=exp(-u)-1.d0
endif
inte=r*r*inte
end function
end module in
And when I use them like this:
use zmienne; use in
I am getting the following error:
Name 'inte' at (1) is an ambiguous reference to 'inte' from module 'zmienne'
I've deleted "inte" in the module1 but now I am getting following error:
irange=inte(intrange/division)
1
Error: Missing actual argument for argument 'beta' at (1)
The main program code is:
program wykres
use zmienne; use in
implicit none
open(unit=out_unit, file='wykresik.dat', action='write', status='replace')
open(unit=out_unit1, file='wykresik1.dat', action='write')
open(unit=out_unit2, file='wykresik2.dat', action='write')
open(out_unit3, file='wykresik3.dat', action='write')
! the gaussian points (xx) and weights (ww) are for the [-1,1] interval
! for [0,1] interval we have (vector instr.)
x=0.5d0*(xx+1.0d0)
w=0.5d0*ww
! plots
tempred = 1.0
call cpu_time(start)
do i=1,1000
r=float(i)*0.01
r6=(1.0/r)**6
u=beta*r6*(r6-1.0)
f=exp(-u/tempred)-1.0
write(out_unit,*) r, u
write(out_unit1,*)r, f
write(out_unit2,*)r, r*r*f
end do
call cpu_time(finish)
print '("Time = ",f6.3," seconds.")',finish-start
! end of plots
! integration 1
calka=0.0
r1=0.0
r2=0.5
do i=1,ngauss
r=(r2-r1)*x(i)+r1
r6=(1.0/r)**6
u=beta*r6*(r6-1.0d0)
! check for underflows
if (u>100.d0) then
f=-1.0d0
else
f=exp(-u)-1.d0
endif
! the array integrand is introduced in order to perform vector calculations below
integrand(i)=r*r*f
calka=calka+integrand(i)*w(i)
enddo
calka=calka*(r2-r1)
write(*,*)calka
! end of integration
! integration 2
calka=0.0
do i=1,ngauss
integrand(i)=inte(x(i),beta,r2,r1)
calka=calka+integrand(i)*w(i)
enddo
calka=calka*(r2-r1)
! end of integration 2
write(*,*)calka
! vector integration and analytical result
write(*,*)sum(integrand*w*(r2-r1)),-(0.5**3)/3.0
!**************************************************************
! tot_calka - the sum of integrals over all integration ranges
! division - the initial length of the integration intervals
! tot_old - we will compare the results from two consecutive divisions.
! at the beginning we assume any big number
! blad - the difference between two consecutive integrations,
! at the beginning we assume any big number
! error - assumed precision, parameter, it is necessary for
! performing the do-while loop
total_calka=0.0
division=0.5
tot_old=10000.0
blad=10000.0
do while (blad>error)
! intrange - the upper integration limit, it should be estimated
! analysing the plot of the Mayer function. Here - 7.
! irange = the number of subintegrals we have to calculate
irange=inte(intrange/division)
total_calka=-(0.5**3)/3.0
! the analytical result for the integration range [0,0.5]
! the loop over all the intervals, for each of them we calculate
! lower and upper limits, r1 and r2
do j=1,irange
r1=0.5+(j-1)*division
r2=r1+division
calka=0.0
! the integral for a given interval
do i=1,ngauss
integrand(i)=inte(x(i),beta,r2,r1)
calka=calka+integrand(i)*w(i)
enddo
total_calka=total_calka+calka*(r2-r1)
enddo
! aux. output: number of subintervals, old and new integrals
write(*,*) irange,division,tot_old,total_calka
division=division/2.0
blad=abs(tot_old-total_calka)
tot_old=total_calka
! and the final error
write(*,*) blad
enddo
open(1,file='calka.dat', access='append')
! the second virial coefficient=CONSTANT*total_calka,
! CONSTANT is omitted here
write(1,*)tempred,total_calka
close(1)
end program wykres
The name inte is declared in both modules.
Update: The inte(y,beta,r2,r1) function is defined in module in and is used in the main program. This function requires four arguments, but the call
irange=inte(intrange/division)
provides only one argument. I'm not sure this function should be used in this place at all. Try to use long, meaningful names for variables and functions to avoid similar issues.
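For illustration, here is a minimal, hypothetical sketch of the first error (the names vars, funcs and demo are made up, not taken from the code above): a module variable and a module function sharing one name become ambiguous as soon as both modules are used, and removing or renaming one of them resolves it.
module vars
  implicit none
  real :: inte            ! module variable named "inte"
end module vars
module funcs
  implicit none
contains
  real function inte(y, beta, r2, r1)   ! module function with the same name
    real, intent(in) :: y, beta, r2, r1
    inte = y*beta + r2 - r1
  end function inte
end module funcs
program demo
  use vars
  use funcs
  implicit none
  ! print *, inte(1.0, 2.0, 3.0, 4.0)   ! ambiguous reference while both names are visible;
  !                                       remove the variable or rename one of the two
end program demo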

Convert a float in Pharo Smalltalk to a bytearray?

I am using Pharo Smalltalk 2.0. I need to convert a float into a ByteArray. There seems to be no method to do this; is there a roundabout way of doing it?
For instance, 1 asFloat asByteArray would be perfect.
Context: I'm trying to send binary data through websocket using the Zinc Websocket package.
Float already is a variable class, i.e. its instances are a little bit similar to arrays:
3.14. "=> 3.14"
3.14 size. "=> 2"
3.14 at: 1. "=> 1074339512"
3.14 at: 2. "=> 1374389535"
You can also modify it:
| f |
f := 3.14.
f at: 1 put: 10000.
f. "=> 2.1220636948306e-310"
With that in mind, you can now handle those two integers.
However, Pharo 2.0 typically comes with Fuel pre-installed, and it already contains the means to serialize a float:
ByteArray streamContents: [ :s |
FLEncoder on: s globalEnvironment: Dictionary new do: [ :e |
3.14 serializeOn: e ]] "=> #[64 9 30 184 81 235 133 31]"
Probably you want to use Fuel serialization altogether, if you have Pharo or Squeak on both ends.
If you are on an i386 CPU, you can do it with NativeBoost:
(ByteArray new: 8) nbFloat64AtOffset: 0 put: Float pi; yourself
Note that byte order is littleEndian in this case.
Otherwise, you have this platform-independent access:
(ByteArray new: 8) doubleAt: 1 put: Float pi bigEndian: true ; yourself
Note the difference: a 0-based offset in the first case, and a 1-based index in the second.
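For a quick round trip (a sketch, assuming the matching reader doubleAt:bigEndian: from the same 'platform independent access' protocol is present in your image):
| bytes |
bytes := (ByteArray new: 8) doubleAt: 1 put: Float pi bigEndian: true; yourself.
bytes. "=> #[64 9 33 251 84 68 45 24]"
(bytes doubleAt: 1 bigEndian: true) = Float pi. "=> true"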

Forth as an interactive C program tester

I would like to use an interactive language to test some C code from a legacy project. I know a little Forth, but I have never used it in a real-world project. I'm looking at pForth right now.
Is it reasonable to use an interactive Forth interpreter to test the behavior of some function in a C program? This C code has lots of structs, pointers to structs, handles and other common structures found in C.
I suppose I'll have to write some glue code to handle the parameter passing and maybe some struct allocation in the Forth side. I want an estimate from someone with experience in this field. Is it worth it?
If you want interactive testing and are targeting embedded platforms, then Forth is definitely a good candidate. You'll always find a Forth implementation that runs on your target platform, and writing one is not even hard if need be.
Instead of writing glue code specific to your immediate needs, go for a general-purpose Forth-to-C interface. I use gforth's generic C interface, which is very easy to use. For structure handling in Forth, I use an MPE-style implementation which is very flexible when it comes to interfacing with C (watch out for proper alignment though; see gforth's %align / %allot / nalign).
The definition of general-purpose structure handling words takes about 20 lines of Forth code, and the same goes for singly linked list handling or hash tables.
Since you cannot use gforth (it is POSIX only), write an extension module for your Forth of choice that implements a similar C interface. Just make sure that your Forth and your C interface module use the same malloc() and free() as the C code you want to test.
With such an interface, you can do everything in Forth by just defining stub words (i.e. map Forth words to C functions and structures).
Here's a sample test session where I call libc's gettimeofday using gforth's C interface.
s" structs.fs" included also structs \ load structure handling code
clear-libs
s" libc" add-lib \ load libc.so. Not really needed for this particular library
c-library libc \ stubs for C functions
\c #include <sys/time.h>
c-function gettimeofday gettimeofday a a -- n ( struct timeval *, struct timezone * -- int )
end-c-library
struct timeval \ stub for struct timeval
8 field: ->tv_sec \ sizeof(time_t) == 8 bytes on my 64bits system
8 field: ->tv_usec
end-struct
timeval buffer: tv
\ now call it (the 0 is for passing NULL for struct timezone *)
tv 0 gettimeofday . \ Return value on the stack. output : 0
tv ->tv_sec @ . \ output : 1369841953
Note that tv ->tv_sec is in fact the equivalent of (void *)&tv + offsetof(struct timeval, tv_sec) in C, i.e. it gives you the address of the structure member, so you have to fetch the value with @. Another issue here: since I use a 64-bit Forth where the cell size is 8 bytes, storing/fetching an 8-byte long is straightforward, but fetching/storing a 4-byte int will require some special handling. Anyhow, Forth makes this easy: just define special-purpose int@ and int! words for that.
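If your Forth lacks dedicated 32-bit access words, here is a rough, hypothetical sketch of such helpers for a little-endian system, built only on standard c@ / c! (a real implementation would rather use the system's native 32-bit words and handle sign extension):
\ fetch a 32-bit little-endian int (not sign-extended)
: int@ ( addr -- n ) 0 4 0 do over i + c@ i 8 * lshift or loop nip ;
\ store the low 32 bits of n, little-endian
: int! ( n addr -- ) 4 0 do 2dup swap i 8 * rshift 255 and swap i + c! loop 2drop ;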
As you can see, with a good general-purpose C interface you do not need to write any glue code in C; only the Forth stubs for your C functions and structures are needed, and those are really straightforward (most of them could even be generated automatically from your C headers).
Once you're happy with your interactive tests, you can move on to automated tests:
Copy/paste the whole input/output from your interactive test session to a file named testXYZ.log
strip the output (keeping only the input) from your session log and write this to a file named testXYZ.fs
To run the test, pipe testXYZ.fs to your forth interpreter, capture the output and diff it with testXYZ.log.
Since removing the output from an interactive session log can be somewhat tedious, you could also start by writing the test script testXYZ.fs, then run it and capture the output into testXYZ.log, but I prefer starting from an interactive session log.
Et voilà !
For reference, here's the structure handling code that I used in the above example:
\ *****************************************************************************
\ structures handling
\ *****************************************************************************
\ Simple structure definition words. Structure instances are zero initialized.
\
\ usage :
\ struct foo
\ int: ->refCount
\ int: ->value
\ end-struct
\ struct bar
\ int: ->id
\ foo struct: ->foo
\ 16 chars: ->name
\ end-struct
\
\ bar buffer: myBar
\ foo buffer: myFoo
\ 42 myBar ->id !
\ myFoo myBar ->foo !
\ myBar ->name count type
\ 1 myBar ->foo @ ->refCount +! \ accessing members of members could use a helper word
: struct ( "name" -- addr 0 ; named structure header )
create here 0 , 0
does>
@ ;
\ <field-size> field: <field-name>
\ Given a field size on the stack, adds it to the running structure size and
\ compiles a word <field-name> that adds the field's offset to the address on
\ the stack.
: field: ( u1 u2 "name" -- u1+u2 ; addr -- addr+u1 )
over >r \ save current struct size
: r> ?dup if
postpone literal postpone +
then
postpone ;
+ \ add field size to struct size
; immediate
: end-struct ( addr u -- ; end of structure definition )
swap ! ;
: naligned ( addr1 u -- addr2 ; aligns addr1 to alignment u )
1- tuck + swap invert and ;
\ Typed field helpers
: int: cell naligned cell postpone field: ; immediate
: struct: >r cell naligned r> postpone field: ; immediate
: chars: >r cell naligned r> postpone field: ; immediate
\ with C style alignment
4 constant C_INT_ALIGN
8 constant C_PTR_ALIGN
4 constant C_INT_SIZE
: cint: C_INT_ALIGN naligned C_INT_SIZE postpone field: ; immediate
: cstruct: >r C_PTR_ALIGN naligned r> postpone field: ; immediate
: cchars: >r C_INT_ALIGN naligned r> postpone field: ; immediate
: buffer: ( u -- ; creates a zero-ed buffer of size u )
create here over erase allot ;

How to prevent common sub-expression elimination (CSE) with GHC

Given the program:
import Debug.Trace
main = print $ trace "hit" 1 + trace "hit" 1
If I compile with ghc -O (7.0.1 or higher) I get the output:
hit
2
i.e. GHC has used common sub-expression elimination (CSE) to rewrite my program as:
main = print $ let x = trace "hit" 1 in x + x
If I compile with -fno-cse then I see hit appearing twice.
Is it possible to avoid CSE by modifying the program? Is there any sub-expression e for which I can guarantee e + e will not be CSE'd? I know about lazy, but can't find anything designed to inhibit CSE.
The background of this question is the cmdargs library, where CSE breaks the library (due to impurity in the library). One solution is to ask users of the library to specify -fno-cse, but I'd prefer to modify the library.
How about removing the source of the trouble -- the implicit effect -- by using a sequencing monad that introduces that effect? E.g. the strict identity monad with tracing:
data Eval a = Done a
            | Trace String a
instance Monad Eval where
  return x = Done x
  Done x >>= k = k x
  Trace s a >>= k = trace s (k a)
runEval :: Eval a -> a
runEval (Done x) = x
track = Trace
now we can write stuff with a guaranteed ordering of the trace calls:
main = print $ runEval $ do
t1 <- track "hit" 1
t2 <- track "hit" 1
return (t1 + t2)
while still being pure code, and GHC won't try to get too clever, even with -O2:
$ ./A
hit
hit
2
So we introduce just the computation effect (tracing) sufficient to teach GHC the semantics we want.
This is extremely robust to compile optimizations. So much so that GHC optimizes the math to 2 at compile time, yet still retains the ordering of the trace statements.
As evidence of how robust this approach is, here's the core with -O2 and aggressive inlining:
main2 =
  case Debug.Trace.trace string trace2 of
    Done x -> case x of
                I# i# -> $wshowSignedInt 0 i# []
    Trace _ _ -> err
trace2 = Debug.Trace.trace string d
d :: Eval Int
d = Done n
n :: Int
n = I# 2
string :: [Char]
string = unpackCString# "hit"
So GHC has done everything it could to optimize the code -- including computing the math statically -- while still retaining the correct tracing.
References: the useful Eval monad for sequencing was introduced by Simon Marlow.
Reading the source code to GHC, the only expressions that aren't eligible for CSE are those which fail the exprIsBig test. Currently that means the Expr values Note, Let and Case, and expressions which contain those.
Therefore, an answer to the above question would be:
unit = reverse "" `seq` ()
main = print $ trace "hit" (case unit of () -> 1) +
trace "hit" (case unit of () -> 1)
Here we create a value unit which resolves to (), but whose value GHC can't determine (by using a recursive function GHC can't optimise away; reverse is just a simple one to hand). This means GHC can't CSE the trace function and its 2 arguments, and we get hit printed twice. This works with both GHC 6.12.4 and 7.0.3 at -O2.
I think you can specify the -fno-cse option in the source file, i.e. by putting a pragma
{-# OPTIONS_GHC -fno-cse #-}
on top.
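A minimal sketch of what that looks like; note that an OPTIONS_GHC pragma only affects the module it appears in, so for the cmdargs case it would have to go into the user's module rather than into the library:
{-# OPTIONS_GHC -fno-cse #-}
module Main where
import Debug.Trace
main :: IO ()
main = print $ trace "hit" 1 + trace "hit" 1  -- "hit" is now printed twice, even with -O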
Another method to avoid common subexpression elimination or let floating in general is to introduce dummy arguments. For example, you can try
let x () = trace "hi" 1 in x () + x ()
This particular example won't necessarily work; ideally, you should specify a data dependency via dummy arguments. For instance, the following is likely to work:
let
x dummy = trace "hi" $ dummy `seq` 1
x1 = x ()
x2 = x x1
in x1 + x2
The result of x now "depends" on the argument dummy and there is no longer a common subexpression.
I'm a bit unsure about Don's sequencing monad (posting this as answer because the site doesn't let me add comments). Modifying the example a bit:
main :: IO ()
main = print $ runEval $ do
t1 <- track "hit 1" (trace "really hit 1" 1)
t2 <- track "hit 2" 2
return (t1 + t2)
This gives us the following output:
hit 1
hit 2
really hit 1
That is, the first trace fires when the t1 <- ... statement is executed, not when t1 is actually evaluated in return (t1 + t2). If we define the monadic bind operator as
Done x >>= k = k x
Trace s a >>= k = k (trace s a)
instead, the output will reflect the actual evaluation order:
hit 1
really hit 1
hit 2
That is, the traces will fire when the (t1 + t2) statement is executed, which is (IMO) what we really want. For example, if we change (t1 + t2) to (t2 + t1), this solution produces the following output:
hit 2
really hit 2
hit 1
The output of the original version remains unchanged, and we don't see when our terms are really evaluated:
hit 1
hit 2
really hit 2
Like the original solution, this also works with -O3 (tested on GHC 7.0.3).