Elm 0.19 --optimize and ports

According to https://elm-lang.org/0.19.0/optimize:
Step two is to call uglifyjs with a bunch of special flags. The flags unlock optimizations that are unreliable in normal JS code, but because Elm does not have side-effects, they work fine for us!
However, what about ports? Ports might have side effects. Won't this advice be problematic if one uses ports in Elm?
If so, how would one go about splitting the ports out of the elm.js file?
PS: I am using https://github.com/elm-community/elm-webpack-loader and it gets bundled into one big JS file.

The instructions at https://elm-lang.org/0.19.0/optimize are for optimizing (size-wise, not performance-wise) code generated by the Elm compiler only. Do not use those flags to optimize JavaScript written by hand or taken from other libraries. If you want to optimize and combine your Elm with external JavaScript into one file, you need to optimize the Elm-compiler-generated JavaScript and all other JavaScript separately, and then combine them into one file.
Which means:
The modules you write in Elm for ports can be optimized like any other Elm code. The JavaScript you write to interact with those ports / subscriptions must not be optimized like Elm-generated JavaScript.
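Concretely, the split looks something like this (a sketch; the file names are made up, and the aggressive flags on the first command are the ones from the linked guide, safe only on the Elm compiler's output):

# Elm compiler output: aggressive flags are safe here
uglifyjs elm.js --compress 'pure_funcs="F2,F3,F4,F5,F6,F7,F8,F9,A2,A3,A4,A5,A6,A7,A8,A9",pure_getters,keep_fargs=false,unsafe_comps,unsafe' | uglifyjs --mangle --output=elm.min.js
# Hand-written port/interop code: conservative minification only
uglifyjs ports.js --compress --mangle --output=ports.min.js
# Combine afterwards
cat elm.min.js ports.min.js > app.min.js

With elm-webpack-loader the same principle applies: the special flags must only ever see the Elm compiler's output, never the whole bundle.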

Related

GraalVM: How do I import libraries from different languages in a single project? I am using IntelliJ

I have to make some functions that will use different languages (Python, R, JS).
I got stuck at the part of generating random numbers in Python to initialize a list with random elements. I looked up ways of initializing random lists, and then I decided to use result = polyglot.eval("python", "[random.randint(0,10) for i in range(20)];").
The problem I face now is that I need to import the "random" library from Python, or whatever libraries I will need from different languages. I heard that it might be a problem with the dependencies, but I am not sure...
What am I supposed to do? Is it even possible to import libraries from multiple languages in a single project? What other alternatives do I have?
Note that the solution may differ between dynamic languages.
Also, the JS component is stable, while Python (as of 2021) is still experimental.
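For the immediate problem in the question (random not being defined), the import just has to happen in the same context before the expression is evaluated. A minimal Java sketch, assuming a GraalVM JDK with the Python language installed (gu install python); the class and variable names are made up:

import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;

public class RandomListExample {
    public static void main(String[] args) {
        // One Context holds the guest language's state, so the import
        // below stays visible to later eval calls on the same context.
        try (Context context = Context.newBuilder("python").allowAllAccess(true).build()) {
            context.eval("python", "import random");
            Value result = context.eval("python",
                "[random.randint(0, 10) for i in range(20)]");
            System.out.println(result);
        }
    }
}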
For a fuller example for Python with modules, see
https://github.com/paulvi/java-python-graalvm-template
And if you really go polyglot (using Python objects in Java code),
see https://github.com/hpi-swa-lab/graalpython-java-example
There is still an open issue about how to actually deploy this in production:
https://github.com/hpi-swa-lab/graalpython-java-example/issues/6
Just bundling the venv subfolder into the jar will not simply work.
One solution is in https://github.com/paulvi/java-python-graalvm-template.
Also, random (i.e., any library) is still a big issue with GraalVM, as different packages have different problems; see https://github.com/oracle/graalpython/issues/228
I suggest that before really mixing a lot of languages, you try just one, e.g. JS, which is more stable; make it work, and then try the next.
BTW, PyCharm does not yet support GraalPython.
If you do any open source work, or later find something new, please let me know via a GitHub issue.

OCaml - testing functions not included in the signature

I'm writing tests for an OCaml module. Some of the functions in the module are not meant to be publicly visible, and so they're not included in the signature (.mli file).
I can't call these functions from my tests, because they're not visible outside of the module. So I'm having a hard time testing them. Is there a good way to get around this? For example, a way to tell ocamlc not to read the signature from the .mli file when it's compiling tests?
Some ideas:
Actually export the test functions, but use ocamldoc's stop comment (**/**) feature to avoid displaying the exports in the documentation (see the sketch after this list).
Put all of your tests entirely in another module. However, this is difficult if you have abstract types because your tests may very well need access to the internal implementation.
Create a submodule Test, where all your tests go. That way it is clear what functions are just for testing. Possibly combine this with the (**/**) feature to also hide the sub-module from documentation.
I've heard that people sometimes keep their .mli files separate from their .ml files (in a different directory) so that they can compile with or without them (by telling ocamlc to look in the separate directory or not). I just tried a few experiments with this; I think it can be made to work, but it seems a little error-prone to me. Maybe you could put the tests of the internal functions into the module itself. Exporting the test functions might not violate modularity too badly (though of course it clutters up the module).
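For the stop-comment idea above, a minimal sketch (the module and function names are made up):

(* counter.mli *)
val create : unit -> int ref
val increment : int ref -> unit

(**/**)
(* Below the stop comment: exported only so the test suite can call
   it; ocamldoc will not display it in the generated documentation. *)
val test_internal_state : int ref -> int

(* counter.ml implements all three as usual; the tests link against
   the module and can call test_internal_state directly. *)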

How few files does it take to load a program on Linux?

The (hypothetical for now) situation is that the user of my system is going to be given a chunk of C code and needs my system to compile and run it in a chroot sandbox that is generated on the fly, and I want to require as few files in the box as possible. I'm only willing to play with compiler and linker settings (e.g. statically link everything I can expect to be able to find) and make some moderate restrictions on what the code can expect to use (e.g. they can't use arbitrary libs).
The question is how simple can I get the sandbox. Clearly I need the executable, but what about an ELF loader and a .so for the system calls? Can I dump either of them and is there something else I'll need?
You don't need anything except the executable to run a statically-linked hello world. You will, of course, need a lot more to compile it.
You can test this fairly easily; I did so with the following trivial C code:
#include <stdio.h>

int main(void) {
    puts("Hello, world");  /* puts appends the newline itself */
    return 0;
}
Compile it with gcc -static. Then make a new directory (I called it "chroot-dir") and move the output ("hello") into it. The only file in the chroot is now the executable. Then run chroot chroot-dir ./hello, and you'll get Hello, world.
Note that there are some things that can not be compiled statically. For example, if your program does authentication (through PAM), PAM modules are always loaded dynamically. Also note that various files in /etc are needed for certain calls; any of the getpw* and getgr* functions, the domain name resolution functions, etc. will require nsswitch.conf (and some shared objects, and maybe more config files, and sometimes even more executables, depending on the lookup methods configured.) /etc/hosts, /etc/services, and /etc/protocols will probably be quite useful for any networking.
One easy way to figure out what files a program uses is to run it under strace. You must trust the program first, of course.
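For example (a sketch; the binary name is made up, and trace=file restricts the output to file-related system calls):

strace -f -e trace=file ./hello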
No need for any ELF loader. To check what dynamic libraries you need, run ldd <executable>. If you manage to compile everything statically, it won't need any .so. Beyond that, it's only about the data and directory structure your program might need.
But all this is only if you use the /usr/bin/chroot command; if you make your program call int chroot(const char *path); itself after making sure all dynamic libraries are loaded, then you won't need anything in the directory sandbox; not even the executable itself.
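A minimal sketch of that self-chroot approach (the jail path is made up, and the process needs root privileges, or CAP_SYS_CHROOT, for chroot(2) to succeed):

#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Everything the program needs (including any shared libraries)
       is already mapped into memory, so the jail can be empty. */
    if (chroot("/var/empty-jail") != 0) {
        perror("chroot");
        return 1;
    }
    if (chdir("/") != 0) {  /* don't keep a working directory outside the jail */
        perror("chdir");
        return 1;
    }
    puts("running with an empty filesystem view");
    return 0;
}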
Edit: A different idea: use TCC (or rather, libtcc) to compile, link, load and run the given C chunk. Run the whole process inside an 'outer' chroot jail, dropping to an 'inner' (empty) one just before execution. (Of course, execute in a fork(), or you won't be able to break out of the 'inner' jail to the 'outer' one.) You might also take advantage of libtcc's bounds-checked execution.

Does Ada have a preprocessor?

To support multiple platforms in C/C++, one would use the preprocessor to enable conditional compiles. E.g.,
#ifdef _WIN32
#include <windows.h>
#endif
How can you do this in Ada? Does Ada have a preprocessor?
The answer to your question is no, Ada does not have a preprocessor built into the language. That means each compiler may or may not have one, and there is no "uniform" syntax for preprocessing and things like conditional compilation. This was intentional: it's considered "harmful" to the Ada ethos.
There are almost always ways around the lack of a preprocessor, but oftentimes the solution can be a little cumbersome. For example, you can declare the platform-specific functions as 'separate' and then use build tools to compile the correct one (either a project system, using pragma body replacement, or a very simple directory system: put all the Windows files in /windows/ and all the Linux files in /linux/, and include the appropriate directory for the platform).
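A minimal sketch of that 'separate' approach (the package and file names are made up; each platform directory provides its own body for the subunit, and the build selects the right directory):

-- platform.ads
package Platform is
   procedure Initialize;
end Platform;

-- platform.adb
package body Platform is
   procedure Initialize is separate;
end Platform;

-- windows/platform-initialize.adb (linux/ holds its own variant)
separate (Platform)
procedure Initialize is
begin
   null;  -- Windows-specific setup would go here
end Initialize;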
All that being said, the GNAT developers realized that sometimes you need a preprocessor and created gnatprep. It should work regardless of the compiler (but you will need to insert it into your build process). Similarly, for simple things (like conditional compilation) you can probably just use the C preprocessor, or even roll your own very simple one.
AdaCore provides the gnatprep preprocessor, which is specialized for Ada. They state that gnatprep "does not depend on any special GNAT features", so it sounds as though it should work with non-GNAT Ada compilers. Their User Guide also provides some conditional compilation advice.
I have been on a project where m4 was used as well, with the Ada spec and body files suffixed as ".m4s" and ".m4b", respectively.
My preference is really to avoid preprocessing altogether, and just use specialized bodies, setting up CM and the build process to manage them.
No, but the C preprocessor (cpp) or m4 can be called on any file on the command line, or by using a build tool like make or ant. I suggest calling your .ada file something else. I have done this for some time with Java files: I call the Java file .m4 and use a make rule to create the .java, and then build it in the normal way.
I hope that helps.
Yes, it does.
If you are using the GNAT compiler, you can use gnatprep to do the preprocessing, or if you use GNAT Programming Studio you can configure your project file to define conditional compilation switches, like
#if SOMESWITCH then
-- Your code here is executed only if the switch SOMESWITCH is active in your build configuration
#end if;
In this case you can use gnatmake or gprbuild so you don't have to run gnatprep by hand.
That's very useful, for example, when you need to compile the same code for several different OS's using even different cross-compilers.
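Run by hand, that preprocessing step would look something like this (a sketch; the file names are made up):

gnatprep -DSOMESWITCH=True main.adb.prep main.adb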
Some old Ada-83-era compilers had a package called a.app that utilized a #-prefixed subset of Ada (interpreted at build time) as a preprocessing language for generating Ada (to be then translated to machine code at compile time). Rational's Verdix Ada Development System (VADS) appears to be the progenitor of a.app among several Ada compilers. Sun Microsystems, for example, derived the Ada SPARCompiler from VADS and thus also had a.app. This is not unlike IBM's use of PL/I as the preprocessor of PL/I.
Chapter 2 is some documentation of what a.app looks like: http://dlc.sun.com/pdf/802-3641/802-3641.pdf
No, it does not.
If you really want one, there are ways to get one (Use C's, use a stand-alone one, etc.) However I'd argue against it. It was a purposeful design decision to not have one. The whole idea of a preprocessor is very un-Ada.
Most of what C's preprocessor is used for can be accomplished in Ada in other more reliable ways. The only major exception is in making minor changes to a source file for cross-platform support. Given how much this gets abused in a typical cross-platform C program, I'm still happy there's no support for it in Ada. Very few C/C++ developers can control themselves enough to keep the changes "minor". The result may work, but is often nearly impossible for a human to read.
The typical Ada way to accomplish this would be to put the different code in different files and use your build system to somehow choose between them at compile time. Make is plenty powerful enough to help you do this.

Process for reducing the size of an executable

I'm producing a hex file to run on an ARM processor which I want to keep below 32K. It's currently a lot larger than that, and I wondered if someone might have advice on the best approach to slimming it down.
Here's what I've done so far
So I've run 'size' on it to determine how big the hex file is.
Then I ran 'size' again to see how big each of the object files that are linked to create the hex file is. It seems the majority of the size comes from external libraries.
Then I used 'readelf' to see which functions take up the most memory.
I searched through the code to see if I could eliminate calls to those functions.
Here's where I get stuck: there are some functions which I don't call directly (e.g. _vfprintf), and I can't find what calls them so that I can remove the calls (as I think I don't need them).
So what are the next steps?
Response to answers:
As far as I can see, there are functions being called which take up a lot of memory. However, I cannot find what is calling them.
I want to omit those functions (if possible) but I can't find what's calling them! Could be called from any number of library functions I guess.
The linker is working as desired, I think; it only includes the relevant library files. But how do you know whether only the relevant functions are being included? Can you set a flag or something for that?
I'm using GCC
General list:
Make sure that you have the compiler and linker debug options disabled
Compile and link with all size options turned on (-Os in gcc)
Run strip on the executable
Generate a map file and check your function sizes. You can either get your linker to generate your map file (-M when using ld), or you can use objdump on the final executable (note that this will only work on an unstripped executable!) This won't actually fix the problem, but it will let you know of the worst offenders.
Use nm to investigate the symbols that are called from each of your object files. This should help in finding who's calling functions that you don't want called.
In the original question there was a sub-question about including only relevant functions. gcc will include all functions within every object file that is used. To put that another way, if you have an object file that contains 10 functions, all 10 functions are included in your executable even if only 1 is actually called.
The standard libraries (eg. libc) will split functions into many separate object files, which are then archived. The executable is then linked against the archive.
By splitting into many object files the linker is able to include only the functions that are actually called. (this assumes that you're statically linking)
There is no reason why you can't do the same trick. Of course, you could argue that if the functions aren't called then you can probably remove them yourself.
If you're statically linking against other libraries you can run the tools listed above over them too to make sure that they're following similar rules.
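Done by hand, the trick looks something like this (a sketch; the file names are made up):

# one function per source file, archived so the linker pulls in
# only the members that are actually referenced
gcc -c f1.c f2.c f3.c
ar rcs libmine.a f1.o f2.o f3.o
gcc -static main.o -L. -lmine -o program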
Another optimization that might save you work is -ffunction-sections together with -Wl,--gc-sections, assuming you're using GCC. A good toolchain will not need to be told that, though.
Explanation: GNU ld links sections, and GCC emits one section per translation unit unless you tell it otherwise. But in C++, the nodes in the dependency graph are objects and functions.
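Putting the options from this thread together, a size-focused build might look like this (a sketch; the toolchain prefix and file names are assumed):

arm-none-eabi-gcc -Os -ffunction-sections -fdata-sections -c main.c -o main.o
arm-none-eabi-gcc -Wl,--gc-sections -Wl,-Map=program.map main.o -o program.elf
arm-none-eabi-strip program.elf
arm-none-eabi-size program.elf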
On deeply embedded projects I always try to avoid using any standard library functions. Even simple functions like "strtol()" blow up the binary size. If possible just simply avoid those calls.
In most deeply embedded projects you don't need a versatile "printf()" or dynamic memory allocation (many controllers have 32kb or less RAM).
Instead of just using "printf()" I use a very simple custom "printf()"; this function can only print numbers in hexadecimal or decimal format, nothing more. Most data structures are preallocated at compile time.
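A sketch of what such a minimal printer can look like (put_char is an assumed board-specific output routine, e.g. writing one byte to a UART):

#include <stdint.h>

extern void put_char(char c);  /* assumed: emits one character */

/* Print an unsigned value in decimal; no buffers beyond 10 digits,
   no formatting state, no heap. */
static void print_dec(uint32_t value) {
    char digits[10];
    int n = 0;
    do {
        digits[n++] = (char)('0' + value % 10);
        value /= 10;
    } while (value != 0);
    while (n > 0)
        put_char(digits[--n]);
}

/* Print a value as exactly 8 hex nibbles, most significant first. */
static void print_hex(uint32_t value) {
    static const char hex[] = "0123456789ABCDEF";
    for (int shift = 28; shift >= 0; shift -= 4)
        put_char(hex[(value >> shift) & 0xF]);
}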
Andrew EdgeCombe has a great list, but if you really want to scrape away every last byte, sstrip is a good tool that is missing from the list and can shave off a few more kB.
For example, when run on strip itself, it can shave off ~2kB.
From an old README (see the comments at the top of this indirect source file):
sstrip is a small utility that removes the contents at the end of an
ELF file that are not part of the program's memory image.
Most ELF executables are built with both a program header table and a
section header table. However, only the former is required in order
for the OS to load, link and execute a program. sstrip attempts to
extract the ELF header, the program header table, and its contents,
leaving everything else in the bit bucket. It can only remove parts of
the file that occur at the end, after the parts to be saved. However,
this almost always includes the section header table, and occasionally
a few random sections that are not used when running a program.
Note that due to some of the information it removes, an sstrip'd executable is rumoured to have issues with some tools. This is discussed more in the comments of the source.
Also... for an entertaining/crazy read on how to make the smallest possible executable, this article is worth a read.
Just to double-check, and to document for future reference: do you use Thumb instructions? They're 16-bit versions of the normal instructions. Sometimes you might need two 16-bit instructions where one 32-bit instruction would do, so it won't save 50% in code space.
A decent linker should take just the functions needed. However, you might need compiler & linker settings to package functions for individual linking.
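With GCC, enabling the Thumb instruction set mentioned above is a single compiler flag (a sketch; the toolchain prefix is assumed):

arm-none-eabi-gcc -mthumb -Os -c main.c -o main.o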
OK, so in the end I just reduced the project to its simplest form, then slowly added files one by one until the function that I wanted to remove appeared in the 'readelf' output. Then, when I had the file, I commented everything out and slowly added things back in until the function popped up again. So in the end I found out what called it and removed all those calls... Now it works as desired... sweet!
Must be a better way to do it though.
To answer this specific need:
"I want to omit those functions (if possible) but I can't find what's calling them! Could be called from any number of library functions, I guess."
If you want to analyze your code base to see who calls what, by whom a given function is being called and things like that, there is a great tool out there called "Understand C" provided by SciTools.
https://scitools.com/
I have used it very often in the past to perform static code analysis. It can really help to determine the library dependency tree. It allows you to easily browse up and down the call tree, among other things.
They provide a limited time evaluation, then you must purchase a license.
You could look at something like executable compression.