Is there a Perl equivalent to the python setup.py develop convention for installing a module that can also be actively developed?
If not, what is the best practice for active development on a Perl module that is also installed into a local library path (for example a path setup using local::lib)?
I am just starting to make a module, so I will be developing the installation package (Makefile.PL etc.) alongside the meat of the module, and I am wondering about the best way to set up the development environment. There are many good tutorials about making a module using h2xs or other tools, but I have not seen this question addressed.
The blib core module sets up the include paths to use the blib directory structure that's created when you do make in a standard module directory. That way you can easily use your new code in scripts by running them with perl -Mblib foo.pl.
Another, arguably better, way is to write your test code as standard test scripts while developing, and run them via the prove script. See the documentation for Test::Simple for how to get started on that.
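As a sketch, a first test file such as t/basic.t might look like this (My::Module and its answer() function are placeholders for whatever your distribution actually provides):

use strict;
use warnings;
use Test::Simple tests => 1;

use My::Module;    # placeholder for the module you keep under lib/
ok( My::Module::answer() == 42, 'answer() returns 42' );

Running prove -l t/ from the distribution directory adds lib/ to @INC, so the tests run against your in-development code without installing anything.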
If you are not doing XS stuff where you have to actually rebuild things and produce build artifacts, you can use perl -Ilib .... This assumes that your lib directory is structured as it will be installed (but then, if it's not, you're rebuilding to produce build artifacts :).
If you are playing with this module in development from outside its directory structure:
$ perl -I/path/to/module/repo/lib ...
Or many of the other ways to set @INC:
$ export PERL5LIB=/path/to/module/repo/lib
$ perl some_program.pl
I typically don't use prove or make while developing because I'm concentrating on one test file at a time:
$ perl -Ilib t/some_test.t
When I think I've fixed the issue, I then try it against all the tests:
$ make test
Related
I'm playing around with writing modules in Raku, and it made sense for me to break a piece of functionality off into another .rakumod file. How can I link these files together when I compile?
I tried to pull the other module to my main module via:
use MyProject::OtherModule;
But I get an error saying it can't find this module, even though the files are side by side in the directory. I tried looking at some OSS projects in the Raku world; most of them are one file. The Rakudo compiler seems to use multiple module files, but I can't figure out how they're linked.
Do I have to publish this module every time I want to run my project? How do I structure this if my project gets huge? Surely the best solution isn't to have it all in one file?
Edit: I should also note that I used this at the top of my new module too:
unit module MyProject::OtherModule;
When running locally, if you have your META6.json declared, you can use
raku -I. script.raku
and it will use the uninstalled versions, and you don't need to add any use lib in the script.
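For reference, a minimal META6.json for that layout might look something like the sketch below (the name, version, and auth values are just illustrative). The important part is that provides maps each module name to its file, which is what lets raku -I. resolve use MyProject::OtherModule without installing anything:

{
    "perl"        : "6.d",
    "name"        : "MyProject",
    "version"     : "0.0.1",
    "description" : "Example project",
    "auth"        : "github:example",
    "provides"    : {
        "MyProject"              : "lib/MyProject.rakumod",
        "MyProject::OtherModule" : "lib/MyProject/OtherModule.rakumod"
    }
}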
I'm looking for a free, open-source toolset that will compile various "classic" scripting languages, e.g. Korn shell (ksh), csh, bash, etc., into an executable -- and, if the script calls other programs or executables, for them to be included in the single executable.
Reasons:
To obfuscate the code delivered to a customer so as not to reveal our intellectual property. The delivery is onto the customer's own machines/systems, where I have no control over what permissions I can set regarding access, so the program file has to be binary, with workings that cannot easily be seen by viewing it in a text editor or hex-dump viewer.
To make a single, simply deployed program for the customer, with no (or minimal) external dependencies.
I would prefer something simple without the need for a package manager, since:
I can't rely on the customer's knowledge to carry out (un)packaging instructions, and
I can't rely on the policies governing their machines regarding installing packages (and indeed from third parties).
The simplest and preferred approach is to compile to proper machine code, producing a single executable that will run out of the box without any dependencies.
The solution that fully meets my needs would be SHC (a free tool) or CCsh (a commercial tool). Both compile shell scripts to C, which can then be compiled using a C compiler.
Links about SHC:
https://github.com/neurobin/shc
http://www.datsi.fi.upm.es/~frosal/
http://www.downloadplex.com/Linux/System-Utilities/Shell-Tools/Download-shc_70414.html
Links about CCsh:
http://www.comeaucomputing.com/faqs/ccshlit.html
You could use this: http://megastep.org/makeself/
This generates a shell script that self-extracts a bundled tar.gz archive into a temporary directory and can then run an arbitrary command upon extraction.
Using this tool, you can provide only one shell script to the client.
This script will then extract your ofbsh obfuscated scripts and binaries into /tmp, and run them transparently.
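As a rough sketch (the directory and file names here are assumptions), you would collect the obfuscated scripts and helper binaries into one directory and bundle it like so:

$ makeself.sh ./payload myapp.run "My application" ./start.sh

Here ./payload is the directory to archive, myapp.run is the self-extracting script that gets produced, "My application" is the label printed during extraction, and ./start.sh is the command run inside the extracted directory. The customer then only has to run ./myapp.run.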
You can obfuscate shell scripts with something like ofbsh. You won't easily bundle other programs into a single executable for Unix, though. Normally the approach for installation would be to build a package for your platform's package manager (e.g. rpm, deb, pkg) or to provide a tarball to unravel in the appropriate directory.
If you need an executable file that unpacks the contents, you might be able to use a shell archive. Take a look at the docs for shar(1) and see if that will get you what you want.
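For instance (file names assumed), something like

$ shar install.sh helper.sh data.tar > bundle.shar

produces a single shell archive that the customer unpacks by running sh bundle.shar. Bear in mind that shar output is plain shell text, so by itself it does nothing to hide your code.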
If you really need a scripting capability to glue multiple C programs together, take a look at the Tcl language. It has an API that is designed to trivially wrap C programs that expect to see argv[] style parameters. You can even embed the chunks of C code into a custom Tcl interpreter and glue it together with various Tcl scripts.
If you really need to make it opaque, you could encrypt the tcl scripts and wrap the whole thing in something that unencrypts the tcl scripts to a buffer and then runs the Tcl interpreter on them. Tcl can accept scripts from a file or a char* buffer, so the unencrypted scripts never have to hit the file system.
shc
I have modified the original source and upgraded it to a new version with some feature additions and bug fixes.
It's here.
Example Usage:
shc -f script.sh -o binary_name
script.sh will be compiled to a binary named binary_name
Note that you still need the required shell installed on your system to run this executable.
arx is a great bundler, and you may be able to integrate an obfuscator into its workflow.
Options that are available to you:
Write logic in your code so that, when it is run for the first time on a box, it checks whether all the required packages exist. If they do not, the code automatically goes and gets the packages itself and installs them, without asking the user to do anything. The only question the user needs to be asked is "Is it OK to proceed with the install of the aforementioned packages? (Y/N)". Anything beyond that is too much.
Once the above code is complete (yes, I'm aware it may not be all that simple for you to code this, or maybe it is; I don't know your coding capabilities), copy and paste your completed code into a site like kinglazy.com and an actual executable file will be generated for you.
There are quite a few benefits of this particular option:
Yes, you will be able to run the encrypted version of your script without exposing any proprietary information.
No one can try to "view" your script, because if they do, they'll see nothing but indecipherable, encrypted jargon which won't make sense to them.
No one can attempt to modify your script because if they do, the script will immediately become inoperable.
No one can run a debugger on your script to see how it works. If they do, the script will abort.
Also, no one can create copies of your script on the same server. If they do, the copy will abort and won't work. It will only allow users to create symlinks to the original location where you want the script to be.
I may be missing some things in what you asked for, but I believe the above satisfies a good portion of what you wanted.
Not sure if this works on other scripts but it certainly does for shell scripts.
You can also use the free online version of CCsh to compile a shell script into a binary:
http://www.comeaucomputing.com/tryccsh/
I've been trying to install Clojure on my computer to learn and use. I'm running Ubuntu 10.04, and have installed the latest Sun Java SDK and environment from Synaptic.
Searching with Google, I found multiple guides that give pretty clear instructions on how to go about installing all the dependencies and useful tools and builders like ant, maven, leiningen, and emacs with SLIME.
Some of the guides are a bit dated, especially considering how fast Clojure development moves, so I searched for the most up-to-date one I could find. I've been following this guide from December of 2010, and it's very similar to most others.
The one big problem I come up against is at the step where I have to fire up the REPL with
java -cp clojure.jar clojure.main
I see that neither the Clojure source I've cloned from github.com/clojure/clojure.git nor github.com/clojure/clojure-contrib.git actually has a clojure.jar to point the JVM to...
I think that maybe there's something I'm doing wrong, since, judging from my searches on Google, apparently no one has had this problem before. I double-checked the repos on GitHub via a browser, and there is no .jar file there either.
So...where do I get this .jar file or is there another way I'm supposed to go about this?
FWIW, if you don't have an intense desire to compile stuff, your life will be easier if you just download leiningen or cake, and get one of them to manage all the jars and classpaths and stuff. For example, here's all it takes to get lein running on a vanilla unix system. (I've omitted the screensful of output some of these commands generate to emphasize that you only have to type a few things).
akm#li231-96: ~
$ curl https://raw.github.com/technomancy/leiningen/stable/bin/lein > lein
akm#li231-96: ~
$ chmod +x lein
akm#li231-96: ~
$ ./lein self-install
akm#li231-96: ~
$ ./lein repl
Using JLine for console I/O; install rlwrap for optimum experience.
REPL started; server listening on localhost:60099.
user=> (inc 1)
2
Your experience will be better if you put lein on your PATH somewhere (eg ~/bin) rather than calling it by full path, but it's not at all necessary.
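Once lein is on your PATH, starting a fresh project is just as short (the project name here is only an example):

$ lein new myapp
$ cd myapp
$ lein repl

lein new generates a skeleton project.clj and source tree, and lein repl run inside the project picks up its classpath and dependencies automatically.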
So I'm trying to follow the suggested structure of a Haskell project, and I'm having a couple problems organizing my tests.
For simplicity, let's start with:
src/Clue/Cards.hs # defines Clue.Cards module
testsuite/tests/Clue/Cards.hs # tests Clue.Cards module
For one, I'm not sure what to name the module in testsuite/tests/Clue/Cards.hs that contains the test code, and for another, I'm not sure how to compile my test code so that I can link it to my source:
% ghc -c testsuite/tests/Clue/Cards.hs -L src
testsuite/tests/Clue/Cards.hs:5:0:
Failed to load interface for `Clue.Cards':
Use -v to see a list of the files searched for.
I myself use the approach taken by the Snap Framework for its test suites, which basically boils down to:
Use a test-framework such as haskell-test-framework or HTF
Name the modules containing tests by appending .Tests to the name of the module containing the IUT, e.g.:
module Clue.Cards where ... -- module containing IUT
module Clue.Cards.Tests where ... -- module containing tests for IUT
By using separate namespaces, you can put your tests in a separate source folder tests/; you can then use a separate Cabal build target (see also the cabal test build-target support in recent Cabal versions) for the test suite, which includes the additional source folder in its hs-source-dirs setting, e.g.:
Executable clue
  hs-source-dirs: src
  ...

Executable clue-testsuite
  hs-source-dirs: src tests
  ...
This works, since there's no namespace collision between the modules in your IUT and the test-suite anymore.
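With the test build-target support mentioned above, you could also declare the second target as a real test suite instead of a plain Executable; a minimal sketch (the target name, main module, and dependencies are assumptions) would be:

Test-Suite clue-testsuite
  type:           exitcode-stdio-1.0
  main-is:        TestSuite.hs
  hs-source-dirs: src tests
  build-depends:  base, HUnit

After cabal configure --enable-tests, cabal build and cabal test will build and run it.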
Here's another way:
Each module's unit tests are defined as a hunit TestList at the end of the module, with some consistent naming scheme, such as "tests_Path_To_Module". I think this helps me write tests, since I don't have to search for another module far away in the source tree, nor keep two parallel file hierarchies in sync.
A module's test list also includes the tests of any sub-modules. HUnit's runTestTT runner is built into the app and accessible via a test command. This means a user can run the tests at any time without special setup. Or, if you don't like shipping tests in the production app, use CPP and cabal flags to include them only in dev builds, or in a separate test runner executable.
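A tiny sketch of that convention (module, value, and test names are invented for illustration):

-- Clue/Cards.hs
module Clue.Cards where

import Test.HUnit

cardCount :: Int
cardCount = 21

tests_Clue_Cards :: Test
tests_Clue_Cards = TestList
  [ "there are 21 cards" ~: cardCount ~?= 21 ]

-- Clue.hs: the parent module rolls up its sub-modules' tests
module Clue where

import Test.HUnit
import Clue.Cards (tests_Clue_Cards)

tests_Clue :: Test
tests_Clue = TestList [ tests_Clue_Cards ]

The app's test command then just calls runTestTT tests_Clue.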
There are also functional tests, one or more per file in the tests/ directory, run with shelltestrunner, and some dev-process-related tests based in the Makefile.
Personally, I feel that an extra ./src/ directory doesn't make much sense for small Haskell projects. Of course there's source; I downloaded the source code.
Either way (with or without src), I'd suggest you refactor and have a Clue directory and a Test directory:
./Clue/Cards.hs -- module Clue.Cards where ...
./Test/Cards.hs -- module Test.Cards where ...
This allows GHCi + Test.Cards to see Clue.Cards without any extra args or using cabal. On that note, if you don't use cabal + flags for optionally building your test modules then you should look into it.
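If you haven't used them, a sketch of the flag approach (the flag and target names are made up) looks like:

Flag tests
  description: Build the test modules
  default:     False

Executable test-cards
  if !flag(tests)
    buildable: False
  main-is:        RunTests.hs
  build-depends:  base

where RunTests.hs is a small driver that imports Test.Cards and runs its tests; the executable is only built when you configure with cabal configure -ftests (or cabal install -ftests).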
Another option, which I use in many of my projects, is to have:
./Some/Module/Hierarchy/File.hs
./tests/someTests.hs
And I cabal install the package, then run the tests/someTests.hs stuff. I guess this would be annoying if my packages were particularly large and took a long time to install.
For completeness' sake, it's worth mentioning a very easy approach for small projects using ghci -i. For example, in your case:
>ghci -isrc:testsuite
ghci>:l Clue.Cards
ghci>:l tests.Clue.Cards
I just built Rakudo and Parrot so that I could play with it and get started on learning Perl 6. I downloaded the Perl 6 book and happily typed in the first demo program (the tennis tournament example).
When I try to run the program, I get an error:
Divide by zero
current instr.: '' pc -1 ((unknown file):-1)
I have my perl6 binary in the build directory. I added a scripts directory under the rakudo build directory:
rakudo
|- perl6
\- scripts
|- perlbook_02.01
\- scores
If I try to run even a simple hello world script from my scripts directory I get the same error:
#!/home/daotoad/rakudo/perl6
use v6;
say "Hello nurse!";
However if I run it from the rakudo directory it works.
It sounds like there are some environment variables I need to set, but I am at a loss as to what they are and what values to give them.
Any thoughts?
Update:
I'd rather not install Rakudo at this point; I'd rather just run things from the build directory. This will allow me to keep my changes to my system minimal as I try out different Perl 6 builds (Rakudo * is out very soon).
The README file encouraged me to think that this was possible:
$ cd rakudo
$ perl Configure.pl --gen-parrot
$ make
This will create a "perl6" or "perl6.exe" executable in the
current (rakudo) directory. Programs can then be run from
the build directory using a command like:
$ ./perl6 hello.pl
Upon rereading, I found a reference to the fact that it is necessary to install rakudo before running scripts outside the build directory:
Once built, Rakudo's make install target will install Rakudo
and its libraries into the Parrot installation that was used to
create it. Until this step is performed, the "perl6" executable
created by make above can only be reliably run from the root of
Rakudo's build directory. After make install is performed,
the installed executable can be run from any directory (as long as
the Parrot installation that was used to create it remains intact).
So it looks like I need to install rakudo to play with Perl 6.
The next question is, where will Rakudo be installed? The README says into the Parrot installation that was used to build it.
I used the --gen-parrot option in my build, which looks like it installs Parrot into rakudo/parrot_install. So Rakudo will be installed into my rakudo/parrot_install?
Reading the Makefile supports this conclusion. I ran make install, and it did install into parrot_install.
This part of the build/install process is unclear for a newbie to Perl 6. I'll see if I can come up with a documentation patch to clarify things.
Off the top of my head:
Emphasize running make install before running scripts outside of the build directory. This requirement is currently buried in the middle of a paragraph and can easily be missed by someone skimming the docs (me).
Explicitly state that building with --gen-parrot will install perl6 into the parrot_install directory.
Did you run make install in Rakudo?
It's necessary to do it to be able to use Rakudo outside its build directory (and that's why both the README and http://rakudo.org/how-to-get-rakudo tell you to do it).
Don't worry, the default install location is local (parrot_install/bin/perl6 inside your rakudo directory).
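For example (assuming the layout from the question, with rakudo checked out under your home directory), after installing you should be able to run scripts from the scripts directory:

$ cd ~/rakudo
$ make install
$ cd scripts
$ ../parrot_install/bin/perl6 perlbook_02.01

or add ~/rakudo/parrot_install/bin to your PATH so perl6 can be invoked from anywhere.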
In response to your update I've now updated the README:
http://github.com/rakudo/rakudo/commit/261eb2ae08fee75a0a0e3935ef64c516e8bc2b98
I hope you find that clearer than before. If you still see room for improvement, please consider submitting a patch to rakudobug@perl.org.