Issue with declaring multiline functions in APL - scripting

#!/usr/bin/dyalog -script
⍝ /usr/bin/dyalog is a symlink to /opt/mdyalog/18.0/64/unicode/mapl
factors←{⎕ML ⎕IO←1 ⋄ ⍵{ ⍵,(⍺÷×/⍵)~1}∊⍵{(0=(⍵*⍳⌊⍵⍟⍺)|⍺)/⍵}¨⍬{nxt←⊃⍵ ⋄ msk←0≠nxt|⍵ ⋄ ∧/1↓msk:⍺,⍵ ⋄ (⍺,nxt)∇ msk/⍵}⍵{ (0=⍵|⍺)/⍵}2,(1+2×⍳⌊0.5×⍵*÷2),⍵}
factors 20
Copied from https://dfns.dyalog.com/c_factors.htm
It works exactly like the example, except that I am not able to type it across separate lines and have to resort to ⋄'s.
Using the example as written (on separate lines) instead results in
./.local/src/sandbox/apl/Main.apl
SYNTAX ERROR
factors←{⎕ML ⎕IO←1 ⍝ Prime factors of ⍵.
Another issue is using ] commands like ]display or ]box on
Using them always results in
./.local/src/sandbox/apl/Main.apl
VALUE ERROR: Undefined name: ⎕SE.UCMD

Try* setting DYALOG_LINEEDITOR_MODE to 1:
#!/usr/bin/dyalog -script DYALOG_LINEEDITOR_MODE=1
When running in script mode, SALT, and therefore user commands, are not initialised automatically. As per APLcart, you can enable both with:
(⎕NS⍬).(_←enableSALT⊣⎕CY'salt')
However, under program control, you're usually better off using proper functions than user commands. You can copy in the display and disp functions (which take an array and produce character matrices equivalent to what you'd see from ]display and ]box on) with:
'display' 'disp'⎕CY'dfns'
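For example, once copied in, something along these lines should print the boxed rendering of the result (a minimal sketch; ⎕← is used to write the character matrix explicitly, since display just returns it):
⎕←display factors 20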
* Both -script and DYALOG_LINEEDITOR_MODE are experimental in version 18.0, while 18.2 (scheduled for release in March 2022) has dedicated #! script support.

Related

How to interact with a subprocess through its stdin, stdout, stderr in Smalltalk?

This Python code shows how to start a process on Windows 10, send it string commands, and read its string responses through the process's stdin and stdout pipes:
Python 3.8.0 (tags/v3.8.0:fa919fd, Oct 14 2019, 19:37:50) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from subprocess import *
>>> p = Popen("c:/python38/python.exe", stdin=PIPE, stdout=PIPE)
>>> p.stdin.write(b"print(1+9)\n")
11
>>> p.communicate()
(b'10\r\n', None)
>>>
As you can see, the python.exe process returned 10 as the answer to print(1+9). Now I want to do the same in Pharo (or Squeak) on Windows 10 - something similar, i.e. short, simple, understandable, and actually working.
I installed OSProcess and ProcessWrapper (they were missing in Pharo; it's also strange that I got a warning that they are not marked for Pharo 8.0 and were not checked to work in Pharo 8.0, but OK), and I tried ProcessWrapper, PipeableOSProcess (copy-pasting different snippets from the Web), etc. - with zero success! The results were:
nothing happens, python.exe was not started
VM errors console was opened (white console in the bottom of the Pharo, which is controlled with F2 menu)
different exceptions
etc
Would somebody show me a simple working example of how to start a process and then send it commands, read answers, send again, and so on in a loop - I plan to have such communication in a detached thread and use it as a kind of service, because Pharo (Smalltalk in general) is missing most bindings, so I will use subprocess communication like in the "good" old days...
I know how to call a command and to get its output:
out := LibC resultOfCommand: 'dir ', aDir.
but I am talking about another scenario: communicating with a running process interactively (for example, with SSH or something similar, like python.exe in the example above).
PS. Maybe it's possible to do it with LibC #pipe:mode even?
Let me start by saying that PipeableOSProcess is probably broken on Windows. I have tried it and it just opened a command line and nothing else (it does not freeze my Pharo 8). The whole of OSProcess does not work correctly in my eyes.
So I took a shot at LibC, which is supposedly not meant to work on Windows at all - its class comment reads:
I’m a module defining access to standard LibC. I’m available under Linux and OSX, but not under Windows for obvious reasons :)
Next is to say that Python's Windows support is probably much better than Pharo's.
The solution, which is more like a workaround using files, is to use LibC and #runCommand: (I tried to come up with an example similar to the one you showed above):
| count command result outputFile errorFile |
count := 9+1. "The counting"
command := 'echo ', count asString. "command run at the command line"
outputFile := 'output'. "a file into which the output is redirected"
errorFile := 'error'. "a file where the error output is redirected "
result := LibC runCommand: command, "run the command "
' >', outputFile, "redirect the output to output file"
' 2>', errorFile.
"reading back the value from output file"
outputFile asFileReference contents lines.
"reading back the value from the error file - which is empty in this case"
errorFile asFileReference contents lines.
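One small addition, not from the original answer and worth verifying in your image: #runCommand: should return the command's exit status (it goes through LibC's system() call), so you can branch on it before deciding which file to read back:
(result = 0)
    ifTrue: [ outputFile asFileReference contents lines ]    "command succeeded - read its output"
    ifFalse: [ errorFile asFileReference contents lines ].   "command failed - read the error output"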

How do I get Source Extractor to Analyze an Image?

I'm relatively inexperienced in coding, so right now I'm just familiarizing myself with the basics of how to use SE, which I'll need to use in the near future.
At the moment I'm trying to get it to analyze a FITS file on my computer (which is a Mac). I'm sure this is something obvious, but I haven't been able to get it to do that. Following the instructions in Chapters 6 and 7 of Source Extractor for Dummies (linked below), I input the following:
sex MedSpiral_20deg_Serl2_.45_.fits.fits -c configuration_file.txt
And got the following error message:
WARNING: configuration_file.txt not found, using internal defaults
----- SExtractor 2.19.5 started on 2020-02-05 at 17:10:59 with 1 thread
Setting catalog parameters
ERROR: can't read default.param
I then tried entering parameters manually:
sex MedSpiral_20deg_Ser12_.45_.fits.fits -c configuration_file.txt -DETECT_TYPE CCD -MAG_ZEROPOINT 2.5 -PIXEL_SCALE 0 -SATUR_LEVEL 1 -SEEING_FWHM 1
And got the same error message. I tried referencing default.sex directly:
sex MedSpiral_20deg_Ser12_.45_.fits.fits -c default.sex
And got the same error message again, substituting "configuration_file.txt not found" with "default.sex not found" (I checked that default.sex was on my computer, it is). The same thing happened when I tried to use default.param.
Here's the link to SE for Dummies (Chapter 6 begins on page 19):
http://astroa.physics.metu.edu.tr/MANUALS/sextractor/Guide2source_extractor.pdf
If you run the command "sex MedSpiral_20deg_Ser12_.45_fits.fits -c default.sex" from inside the config folder (within the sextractor folder), it will work.
However, I wonder how I can run the sextractor command from any folder on my computer?
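For what it's worth (not part of the original answer, and worth checking against your SExtractor version): SExtractor looks for default.sex, default.param and so on relative to the directory you run it from unless you point it at them explicitly. You can dump fresh copies of the defaults into any working directory, or reference the config files by absolute path, roughly like this:
sex -d > default.sex      # write the built-in configuration defaults to a local file
sex -dp > default.param   # write the list of catalogue output parameters
sex MedSpiral_20deg_Ser12_.45_fits.fits -c /full/path/to/default.sex -PARAMETERS_NAME /full/path/to/default.param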

When trying to get the source of a function in Postgres using psql, what does the error "column p.proisagg does not exist" mean?

Background:
Using Postgres 11 on RDS; the interface is psql on a CentOS 7 box. The objective is to show the source of certain stored procs / functions so that I can work with them.
Problem description: When I attempt to list / show the source of a given stored function using the \df+ command, which I understand to be correct for this use based on the official docs (https://www.postgresql.org/docs/current/app-psql.html), an error is given as shown:
psql=> \df+ schema_foo.proc_bar;
ERROR: column p.proisagg does not exist
LINE 6: WHEN p.proisagg THEN 'agg'
I have no clue how to interpret this; the function in question does not contain the snippet of logic shown in the error, nor the column referenced there, p.proisagg. I have verified this by opening the function in vim with \ef.
My guess, based on several unrelated GitHub issues that mention this same error, is that it refers to some schema code internal to Postgres.
Summary: in short, I can view the source of the functions using \ef, so my work is not impaired from a practical standpoint, however I wish to understand this error and why I'm encountering it with \df+.
I had the same issue and ran these 2 commands to fix it (in my case the outdated catalog queries belonged to phpPgAdmin):
sed -i "s/NOT pp.proisagg/pp.prokind='f'/g" /usr/share/phpPgAdmin/classes/database/Postgres.php
sed -i "s/NOT p.proisagg/p.prokind='f'/g" /usr/share/phpPgAdmin/classes/database/Postgres.php
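Some context the answer above leaves implicit (the catalog change is documented in the PostgreSQL 11 release notes; the query below is only an illustrative sketch): PostgreSQL 11 removed the pg_proc.proisagg column and replaced it with prokind, so any client whose catalog queries still reference proisagg - an older psql binary, phpPgAdmin, various admin tools - fails with exactly this error against an 11+ server. Upgrading the client, or patching its query as above, fixes it. Against PostgreSQL 11+ the equivalent lookup looks like this:
-- pre-11 clients test the old boolean column:  WHEN p.proisagg THEN 'agg'
-- on PostgreSQL 11+ the same information lives in pg_proc.prokind
SELECT p.proname,
       CASE p.prokind
            WHEN 'a' THEN 'agg'     -- aggregate
            WHEN 'w' THEN 'window'  -- window function
            WHEN 'p' THEN 'proc'    -- procedure (new in 11)
            ELSE 'func'             -- plain function
       END AS kind
FROM pg_proc p
WHERE p.proname = 'proc_bar';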

In Lettuce(4.x) for Redis how to reduce round trips and use output of one command as input for another command, especially for Georadius

I have seen "pass results to another command in redis",
and via the command line this command works well:
src/redis-cli keys '*' | xargs src/redis-cli mget
However, how can we achieve the same effect via Lettuce (I started trying out 4.0.2.Final)?
Also, a solution to this is particularly important in the following scenario:
Say we are using geolocation capabilities, and we add a set of locations of "my-location-category"
using GEOADD
GEOADD "category-1" 8.6638775 49.5282537 "location-id:1" 8.3796281 48.9978127 "location-id:2" 8.665351 49.553302 "location-id:3"
Next, say we do a GeoRadius to get locations within 10 km radius of 8.6582361 49.5285495 for "category-1"
Now we get back "location-id:1" & "location-id:3".
Given that I have already set values for the keys "location-id:1" & "location-id:3",
I want to pipe commands to do the GEORADIUS as well as an MGET on all the matching results.
Does Redis provide a feature to do that?
And/or how can we achieve this via the Lettuce client library without first manually iterating through the results of GEORADIUS and then doing a manual MGET?
That would mean better performance for the program that uses it.
Does anyone know how we can do this?
Update
This is the piped command for the scenario I discussed above:
src/redis-cli GEORADIUS "category-1" 8.6582361 49.5285495 10 km | xargs src/redis-cli mget
Now we need to know how to do this via Lettuce
IMPORTANT: never use KEYS, always use SCAN instead if you must.
This isn't really a question about Lettuce nor Java so I can actually answer it :)
What you're trying to do is use the results from a read operation (GEORADIUS) as input (key names) for another read operation (MGET). This type of flow can't be pipelined, well, just because of that - pipelining means that you don't need the answers to operations right away, but in your case you do.
However.
Since you're reading String keys with MGET, you might as well just denormalize everything (remember, we're NoSQL) and store the contents of these keys in the Sorted Set's members, e.g.:
GEOADD "category-1" 8.6638775 49.5282537 "location-id:1:moredata:evenmoredata:{maybe a JSON document here}:orperhapsmsgpack"
This will allow you to get the locations and their "data" with one GEORADIUS call. Of course, any updates to location:1's data will need to be done across all categories.
A note about Lua scripts: while a Lua script could definitely save on the back and forth in this case, any such script will be against best practices/not cluster safe.
After digging around and studying Lua scripting, my conclusion is that removing round trips in this way can only be done via Lua scripts, as suggested by Itamar Haber.
I ended up creating a Lua script file (myscript.lua) as below:
local locationKeys = redis.call('GEORADIUS', 'category-1', '8.6582361', '49.5285495', '10', 'km')
-- no locations within the radius: nothing to MGET
if unpack(locationKeys) == nil then
    return nil
else
    -- use the member names returned by GEORADIUS as the keys for MGET
    return redis.call('MGET', unpack(locationKeys))
end
** Of course we should be passing parameters into this... it is just a PoC :)
Now you can execute it via the command:
src/redis-cli EVAL "$(cat myscript.lua)" 0
Then, to reduce the network overhead of sending the entire script to Redis for every execution, we have the option of registering the script with Redis.
Redis will give us back a SHA1 digest for that script, which can be used for subsequent calls to it.
This can be done as below :
src/redis-cli SCRIPT LOAD "$(cat myscript.lua)"
this should give back a SHA1 code, something like: 49730aa2ed3034ee48f818e486tpbdf1b500b19e
Subsequent calls can then be made using this code, e.g.:
src/redis-cli evalsha 49730aa2ed3034ee48f818e486b2bdf1b500b19e 0
The sad part here, however, is that the SHA1 digest is remembered only as long as the Redis instance is running; if it is restarted, the digest is lost and you have to do the SCRIPT LOAD once again. If nothing changes in the script, the SHA1 digest will be the same.
Ideally, when going through a client API, we should first attempt EVALSHA; if that returns a "No matching script" error, then as a fallback do a SCRIPT LOAD to procure the SHA1 code once again, keep an internal map of it, and use that SHA1 code for further calls.
This can be done via Lettuce as well - I could find the methods for all of those. Hope this gives a good insight into a solution for the problem.
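To make that last point concrete, here is a rough sketch of the EVALSHA-with-fallback flow using Lettuce's synchronous API. Package names are Lettuce 5.x (io.lettuce.core); in 4.x the same classes live under com.lambdaworks.redis. The script is the one above, parametrised with KEYS/ARGV, and the exception handling is an assumption to adapt to your version:
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisCommandExecutionException;
import io.lettuce.core.ScriptOutputType;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;
import java.util.List;

public class GeoRadiusMget {
    // myscript.lua, parametrised: KEYS[1] = geo set, ARGV = lon, lat, radius (km)
    private static final String SCRIPT =
        "local locationKeys = redis.call('GEORADIUS', KEYS[1], ARGV[1], ARGV[2], ARGV[3], 'km') " +
        "if unpack(locationKeys) == nil then return nil " +
        "else return redis.call('MGET', unpack(locationKeys)) end";

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisConnection<String, String> conn = client.connect();
        RedisCommands<String, String> redis = conn.sync();

        String sha = redis.scriptLoad(SCRIPT);   // register once, keep the SHA1 digest

        List<String> values;
        try {
            // one round trip: GEORADIUS + MGET executed server-side
            values = redis.evalsha(sha, ScriptOutputType.MULTI,
                    new String[]{"category-1"}, "8.6582361", "49.5285495", "10");
        } catch (RedisCommandExecutionException e) {
            // e.g. NOSCRIPT after a server restart: fall back to EVAL, which also re-caches the script
            values = redis.eval(SCRIPT, ScriptOutputType.MULTI,
                    new String[]{"category-1"}, "8.6582361", "49.5285495", "10");
        }
        System.out.println(values);

        conn.close();
        client.shutdown();
    }
}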

tcl tcltest unknown option -run

When I run ANY test I get the same message. Here is an example test:
package require tcltest
namespace import -force ::tcltest::*
test foo-1.1 {save 1 in variable name foo} {} {
    set foo 1
} {1}
I get the following output:
WARNING: unknown option -run: should be one of -asidefromdir, -constraints, -debug, -errfile, -file, -limitconstraints, -load, -loadfile, -match, -notfile, -outfile, -preservecore, -relateddir, -singleproc, -skip, -testdir, -tmpdir, or -verbose
I've tried multiple tests and nothing seems to work. Does anyone know how to get this working?
Update #1:
The above error was my fault; it was due to how it was being run in my script. However, if I run the following at a command line I get no output:
[root@server1 ~]$ tcl
tcl>package require tcltest
2.3.3
tcl>namespace import -force ::tcltest::*
tcl>test foo-1.1 {save 1 in variable name foo} {expr 1+1} {2}
tcl>echo [test foo-1.1 {save 1 in variable name foo} {expr 1+1} {2}]
tcl>
How do I get it to output pass or fail?
You don't get any output from the test command itself (as long as the test passes, as in the example: if it fails, the command prints a "contents of test case" / "actual result" / "expected result" summary; see also the remark on configuration below). The test statistics are saved internally: you can use the cleanupTests command to print the Total/Passed/Skipped/Failed numbers (that command also resets the counters and does some cleanup).
(When you run runAllTests, it runs test files in child processes, intercepting the output from each file's cleanupTests and adding them up to a grand total.)
The internal statistics collected during testing are available in (as far as I can tell undocumented) namespace variables like ::tcltest::numTests. If you want to work with the statistics yourself, you can access them before calling cleanupTests, e.g.
parray ::tcltest::numTests
array set myTestData [array get ::tcltest::numTests]
set passed $::tcltest::numTests(Passed)
Look at the source for tcltest in your library to see what variables are available.
The amount of output from the test command is configurable, and you can get output even when the test passes if you add p / pass to the -verbose option. This option can also let you have less output on failure, etc.
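For example, a minimal sketch that also reports passing tests and then prints the summary (the -verbose level names are the ones documented for tcltest 2.x):
package require tcltest
namespace import -force ::tcltest::*
::tcltest::configure -verbose {body error pass}   ;# also report passing tests, with their bodies
test foo-1.1 {save 1 in variable name foo} {set foo 1} {1}
cleanupTests                                      ;# prints the Total/Passed/Skipped/Failed summary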
You can also create a command called ::tcltest::ReportToMaster which, if it exists, will be called by cleanupTests with the pertinent data as arguments. Doing so seems to suppress both output of statistics and at least most resetting and cleanup. (I didn't go very far in investigating that method.) Be aware that messing about with this is more likely to create trouble than solve problems, but if you are writing your own testing software based on tcltest you might still want to look at it.
Oh, and please use the newer syntax for the test command. It's more verbose, but you'll thank yourself later on if you get started with it.
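For instance, the question's test rewritten in the newer attribute-style form looks like this (same test, just spelled with explicit options):
test foo-1.1 {save 1 in variable name foo} -body {
    set foo 1
} -cleanup {
    unset foo
} -result 1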
Obligatory-but-fairly-useless (in this case) documentation link: tcltest