Change REPL module/namespace in Julia

I'm looking for a way to "enter" a module in the REPL, so that I can access all symbols without qualification (not just the exported ones), and any function (re)defined at the REPL gets in the specified module. (Basically this is the functionality of Common Lisp's in-package macro.)
This would be useful in a REPL-oriented workflow, as I would be able to write the same code in the REPL as in the module I am developing.
The manual recommends a workflow where I qualify everything, but that seems annoying.
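For concreteness, here's a minimal sketch of what I can do today without "entering" the module (the module Foo and its functions are just placeholders); what I'd like is to avoid the qualification and the explicit @eval:
module Foo
    export f
    f(x) = x + 1
    secret() = "not exported"
end

using .Foo

Foo.secret()              # unexported names still need qualification
@eval Foo f(x) = x + 2    # (re)defining f inside Foo requires an explicit @eval
Foo.f(1)                  # == 3
names(Foo; all = true)    # lists all symbols, not just the exported ones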

I started a package called REPLMods.jl for this a while back. It should probably be polished up, but I haven't had the time.
I spoke to core Julia members and there was interest in getting it merged into base once things were clean, but again, no time!

I know this isn't quite what you're asking, but just in case the 'obvious' had not occurred to you (or future visitors to the question): assuming you loaded a module with an annoyingly cumbersome name, e.g.
import LaTeXStrings
and you don't want to have to type LaTeXStrings all the time just to explore its accessible names, i.e.
LaTeXStrings.[TAB]
you can just assign the imported module as a whole to another variable, i.e.
const l = LaTeXStrings
I'm sure that, in the absence of a more appropriate built-in solution, at least typing l.[TAB] as opposed to LaTeXStrings.[TAB] is a lot more tolerable :)
(I find it odd, in fact, that Julia doesn't seem to support the import LaTeXStrings as l syntax ...)

It's 2020, I'm using Julia 1.4, and I was unable to get REPLMods.jl to work. I think the following are good enough for the time being:
ExportAll.jl - see Exporting all symbols in Julia for a discussion (with the caveat that ExportAll shouldn't be used as a replacement for a normal export)
and Revise.jl
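For context, a minimal sketch of the Revise.jl part of that workflow (the package name MyModule is hypothetical; Revise must be loaded before the package you want to track):
using Revise                  # load first, so it can track subsequently loaded packages
using MyModule                # hypothetical package under development

MyModule.myfunction(7, 3)
# ...edit and save src/MyModule.jl...
MyModule.myfunction(7, 3)     # Revise re-evaluates the changed definitions, no restart needed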

Related

How to list the exposed members of a package/dir-like method in Elm?

I have been searching the official docs and existing questions and could not find any information on this - in Elm, how would it be possible to see the members/methods/variables that belong to or are exposed by a package (such as the dir function in Python), without having to dive into the source code each time?
What I want to do is get a simple list of what methods are exposed by an imported package. (So for a package like List, it should output reverse, all, any, map, etc.) I have attempted tab completion in elm repl and the elm extension available in the VS Code editor, and elm repl does not offer any methods such as help, doc, ?, dir, man, etc., so I have no idea where to even start. I'm wondering how everyone else does this other than pulling up the source code for each and every package they use.
I apologize for the newbie question and if I misread or have been missing anything, but I couldn't even find anything in the https://elmprogramming.com tutorial. Thanks in advance.
Nothing like this exists in Elm to do reflection over modules, unfortunately (as of 0.19.1, at least).
However, if you aren't looking to actually do this kind of thing at runtime, but rather want a convenient way of finding things out during development, the Elm packaging system enforces the requirement that all public functions are documented. So if you visit the package page, every public function and type will be documented there (obviously it can't enforce the content of the documentation, but at the very least everything will be listed).

How do I determine what to import using gnulib-tool

autoscan produced a series of function checks (AC_FUNC_* and AC_CHECK_FUNCS), and AC_CHECK_HEADERS. Now I need to pull in gnulib code. Do I just try to import every function and header that autoscan identified? Half of them don't show up in the gnulib-tool --list output. Or do I wait until ./configure fails on some platform?
My suggestion would be to ignore autoscan entirely and instead try to figure out which platform you're targeting.
In particular, autoscan will try to add tests for functions that have been available on every single platform since 1990 but might, for instance, be missing on a non-SysV OS.
Start with a base platform target (such as C99 and the latest POSIX), and then figure out which other OSes you want to target and what they are missing.
Don't try to cover bases for OSes you have no access to; it's likely you won't actually know what is needed for them to build, let alone run.
Full disclosure: I have changed my opinion on gnulib over the years and no longer suggest using it, if at all possible.
There are two ways to determine which list of gnulib modules you might need:
You can scan your source code or your object files (with nm) by hand.
You can import all the POSIX header file modules via gnulib-tool, then add -DGNULIB_POSIXCHECK to the CFLAGS, compile your package, and analyze the resulting warnings.
Also, some gnulib overrides exist for the sake of old platforms only. You can read the list of what each override fixes, and apply your own judgement.

Problems with Code in the Frege REPL

While trying to learn Frege I copied some code from Dierk's Real World Frege to the online REPL and tried to execute it (see also How to execute a compiled code snippet in Frege online repl). The scripts I've tried don't compile :-(
What am I doing wrong?
Here are examples of what does not compile:
println ( 2 *-3 ) -- unlike haskell, this will work!
and the whole ValuesAndVariables.fr code
It is unavoidable that, over the course of more than a year, an evolving language (and its libraries) changes so that older code will not compile anymore.
It would be nice if we could see an example, instead of a generalization like "most".
The next best thing would be to have an issue in Dierk's project that points to the error(s).
But the very best would be an effort to find out what is wrong. This would also intensify your learning process.
Here are two resources that could help:
https://github.com/Frege/frege/wiki/New-or-Changed-Features -- the release notes for every release, contains a summary of things that have changed between releases, and especially the reasons why code would not compile anymore and how to correct it.
http://www.frege-lang.org/doc/fregedoc.html -- the library docs. May explain possible errors like import not found, or missing identifiers.
Go, give it a try. And I'm convinced Dierk will be happy to accept pull requests.
Edit: Fixes for announced errors.
The error in:
println ( 2 *-3 )
stems indeed from a syntactical change.
It is, as of recently, required that adjacent operators be separated by at least one space.
Hence
println (2 * -3)
However, the error message you got here was:
can't resolve `*-`, did you mean `-` perhaps?
which could have triggered the idea that it tries to interpret *- as a single operator.
The other error in ValuesAndVariables1.fr is indeed a show stopper for a beginner. The background is that we have one pi that has type Double and one that has type Float and potentially many more through type class Floating, so one needs to tell which one to print.
The following will work:
import Prelude.Math -- unless already imported
println Float.pi
println (pi :: Double)
The online REPL at http://try.frege-lang.org is currently based on Frege V3.23.370-g898bc8c. Dierk's code examples are based on V3.21.500-g88270a0 (which can be seen in the gradle build file).
It seems that the Frege developers decided to change the Frege syntax slightly between those versions. The result is that you will not be able to run these code snippets in the online REPL anymore.

What is the difference between "package" and "module" in Frege?

Hi, I've been playing a little bit with Frege and I just noticed in some examples that package and module are used interchangeably:
package MyModuleOne where
and sometimes:
module MyModuleTwo where
When importing from one or the other I don't see any difference in the behavior of my program. Is there something I should keep in mind when using package or module keywords ?
Yes. The language started out with package, but later I realized this was an obstacle when porting Haskell code, which uses module. Hence I added module, and thus currently module and package are the same keyword, just spelled differently.
But the intention is, of course, to retire package sooner or later. So my advice would be to use module only.
(This reminds me that I probably have to update the lang spec with regard to this. Never mind.)

What is a good workflow for developing Julia modules with IPython/Jupyter?

I find myself frequently developing new Julia modules while at the same time using those modules for my work. So I'll have an IPython (Jupyter) notebook, with something like:
using DataFrames
using MyModule
Then I'll do something like:
x = myfunction(7, 3)
But I'll have to modify that function, and unfortunately by that point I can't simply do
using MyModule
again. I'm not really sure why; I thought that calling using simply declares available modules in order to make the global scope aware of them, and then when a name is actually needed, the runtime searches for the definition among the currently loaded modules (starting with Main).
So shouldn't using MyModule simply just refresh the definitions of the items in the already declared module? Why do I have to completely stop and restart the kernel in order to use my updated functions? (Is it because names are bound only once to functions that are declared using the function keyword?)
I've looked at Julia Workflow Tips, but I don't find the whole Tmp, tst.jl system very simple or elegant... at least for a notebook.
Any suggestions?
I think there's a lot of truth in this statement attributed to one of the Juno developers: the Jupyter notebook is for working with data; the Juno IDE is for working with code.
Jupyter is great for using modules in a notebook style, so that the output you're getting is reproducible. Juno and the REPL have less overhead, which lets you keep starting new sessions (faster testing, and it fixes the problem you noted), have multiple tabs open to follow code around a complex module, and use the debugger (in v0.5). They address different development issues for different stages of use. I think you're pushing against the tide if you're using the wrong tool for the job.
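That said, if you do want to keep iterating on a module from a notebook without restarting the kernel, a minimal sketch based on Revise.jl could look like this (the file MyModule.jl and its function are hypothetical):
using Revise                 # load before the code you want to track

includet("MyModule.jl")      # hypothetical file containing `module MyModule ... end`; Revise tracks it
using .MyModule

myfunction(7, 3)
# ...edit MyModule.jl and save; re-running this cell picks up the new definition
# without restarting the Jupyter kernel
myfunction(7, 3)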