What level do LLVM optimization passes need to work on?

I have been exploring LLVM optimizations recently, but have a small question:
How do we know whether a built-in pass (as opposed to a pass we write ourselves) can be applied at the function level (using a FunctionPassManager), at the module level, etc.?
Example - as seen in chapter 4 of the Kaleidoscope tutorial:
TheFPM->add(createCFGSimplificationPass());
It is fairly obvious that this one should run at the function level, but what about other passes? Are they all meant to work correctly at any level (BasicBlock, Function, Module, etc.)?

I think you can find that out by looking at the source code.
For example, here is the code for SimplifyCFGPass, which inherits from FunctionPass, and here is the source code for MemoryDependencyAnalysisPass; as you can see, it is a function-level pass.
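As a concrete illustration, here is roughly how chapter 4 of the Kaleidoscope tutorial builds its function-level pipeline with the legacy pass manager; each factory function below is declared in LLVM's headers as returning a FunctionPass*, which is exactly why it may be added to a FunctionPassManager. A minimal sketch, assuming TheModule is an existing llvm::Module* (exact header locations vary between LLVM versions):

#include <memory>
#include "llvm/IR/LegacyPassManager.h"
#include "llvm/Transforms/InstCombine/InstCombine.h"
#include "llvm/Transforms/Scalar.h"
#include "llvm/Transforms/Scalar/GVN.h"

// Create a per-function pass manager attached to the module.
auto TheFPM = std::make_unique<llvm::legacy::FunctionPassManager>(TheModule);
// Each of these factories returns a FunctionPass*, so scheduling them here
// is legal; a ModulePass cannot be run one function at a time.
TheFPM->add(llvm::createInstructionCombiningPass()); // peephole combining
TheFPM->add(llvm::createReassociatePass());          // reassociate expressions
TheFPM->add(llvm::createGVNPass());                  // common subexpression elimination
TheFPM->add(llvm::createCFGSimplificationPass());    // simplify the control flow graph
TheFPM->doInitialization();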

Related

Compiling Pascal code for embedded system (AT89C51RC2)

I am working on making a pretty trivial change to an old existing Pascal source file. I have the source code, but need to generate a new hex file with my changes.
First, I tried compiling with "Embedded Pascal", which is the program used by my predecessor. Unfortunately, it is an unregistered copy and gives the message that the file is too large for the unregistered version. Support for the project, and even its homepage, has disappeared, so I have no idea how I would register.
I tried a couple other compilers, "Free Pascal" and "Turbo51", and they are both giving similar errors:
Filename.pas (79): Error 36: BEGIN expected.
Linkcode $2E
^
The source code begins with
Linkcode $2E
LinkData $0A // normally 8 - make room for capacitance data
Program Main; Vector LongJmp Startup_Vector; //This inserts the start to the main routine.
uses IntLib;
I'm not well-versed in Pascal or embedded programming, but as I understand it, the Linkcode and LinkData lines are required to set up the RAM as needed. Following the "Const" and "var" declarations are subroutines that indeed start with procedure... begin... end.
I realize that Pascal is a bit out of date, but we are stuck with it and our old micro. Any ideas why previously working source code with trivial changes cannot be compiled? I am willing to consider other compilers, including paid options, if any are available with decent support. I am compiling on Windows 10 x64 and flashing to an Atmel AT89C51RC2.
If more source code is needed for diagnosis, please let me know what in particular, as I'll need to change some proprietary information before posting. Thanks!
Statements like Linkcode and LinkData are not general Pascal; they are target- and compiler-specific. Unless you have the know-how to re-engineer the code for a different compiler, getting the original one is best.
Thanks to all for the information. While I didn't find an exact solution here, your comments were helpful for me to understand just how compiler-specific the Pascal code was.
In the end, I was able to get into my predecessor's files and transfer the registration, solving the issue for now. As suggested, I think I will port to C in the future to avoid fighting all the unsupported-compiler nonsense.

How to build a call graph for a function module?

A while ago, while documenting legacy code, I found out there is a tool for displaying the call graph (call stack) of any standard program. Absurdly, I wasn't aware of this tool for years :D
It gives a fancy list/hierarchy of the calls in the program; though it is not a call graph in the full sense, it is very helpful in some cases.
The problem is that this tool is linked only to SE93, so it can be used only for transactions.
I tried to search but didn't find any similar tool for reports or function modules. Yes, I can create a tcode for a report, but for a function module this approach doesn't work.
If I put the FM call inside a report and build a graph using this tool, it wraps the call as a single unit and does not analyze any deeper. And that's it.
Does anybody know a workaround to build a graph for something besides a transaction?
The cynic in me thinks RS_CALL_HIERARCHY was left to rot. Sandra is right, it definitely used to work. Once OO came to ABAP, interfaces and dynamic/generic code became possible, so a call hierarchy based on static code analysis was pushing the proverbial uphill.
IMO the best way to solve this is a FULL trace, and then to extract the data from the trace.
There are even external tools that do that.
This is, of course, still limited, as running a trace on every execution path can be very time-consuming. Did I hear someone say small classes, please?
Transaction SAT.
Make sure the profile you use isn't aggregating, and measure the blocks you are interested in.
Now wade your way through the trace.
https://help.sap.com/doc/saphelp_ewm93/9.3/en-US/4e/c3e66b6e391014adc9fffe4e204223/content.htm?no_cache=true
Have fun :)
The call hierarchy display also works for programs and function modules.
In my S/4HANA system, for VA01, it displays the call hierarchy; clicking the entry for the function module CJWI_INIT then displays that function module's own hierarchy.
I get exactly the same result by calling the function module RS_CALL_HIERARCHY this way:
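A minimal sketch of such a call (CJWI_INIT is the function module from the example above; OBJECT_TYPE is documented below, while the OBJECT_NAME parameter name is an assumption on my part):

CALL FUNCTION 'RS_CALL_HIERARCHY'
  EXPORTING
    object_name = 'CJWI_INIT'  " object to analyze (assumed parameter name)
    object_type = 'FF'.        " FF = function module, P = program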
The parameter OBJECT_TYPE may have these values:
P : program
FF : function module
The "call graph" is not maintained anymore since at least Basis 4.6, and it doesn't work for classes and methods.
But the tool is buggy: in some cases, a function module containing a PERFORM at the first line, it may not be displayed, whatever the call graph is launched from SE93 or directly from RS_CALL_HIERARCHY.

How to list the exposed members of a package/dir-like method in Elm?

I have been searching the official docs and existing questions and could not find any information on this: in Elm, how is it possible to see the members/methods/variables that belong to or are exposed by a package (like the dir function in Python), without having to dive into the source code each time?
What I want to do is get a simple list of the functions exposed by an imported package. (So for a package like List, it should output reverse, all, any, map, etc.) I have attempted tab completion in elm repl and the Elm extension available in the VS Code editor; elm repl does not offer anything like help, doc, ?, dir, or man, so I have no idea where to even start. I'm wondering how everyone else does this other than pulling up the source code for each and every package they use.
I apologize for the newbie question if I've misread or missed anything, but I couldn't even find anything in the https://elmprogramming.com tutorial. Thanks in advance.
Nothing like this exists in Elm to do reflection over modules, unfortunately (as of 0.19.1, at least).
However, if you aren't looking to do this kind of thing at runtime, but rather want a convenient way of finding out during development: the Elm packaging system enforces the requirement that all public functions are documented. If you visit the package's page, every public function and type will be documented there (obviously it can't enforce the content of the documentation, but at the very least everything will be listed).
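For example, everything exposed by elm/core's List module is listed at https://package.elm-lang.org/packages/elm/core/latest/List. The same list also sits in the first line of any module's source: the exposing clause of the module header is the definitive list of public members. A minimal sketch of a hypothetical module:

module StringExtras exposing (capitalize)

-- Only capitalize is public; anything else defined here would stay private.
capitalize : String -> String
capitalize word =
    String.toUpper (String.left 1 word) ++ String.dropLeft 1 word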

What is the difference between "package" and "module" in Frege?

Hi, I've been playing a little bit with Frege, and I just noticed that some examples use package and module interchangeably:
package MyModuleOne where
and sometimes:
module MyModuleTwo where
When importing from one or the other I don't see any difference in the behavior of my program. Is there something I should keep in mind when using the package or module keywords?
Yes. The language started out with package, but later I realized this was an obstacle when porting Haskell code, which uses module. Hence I added module; currently, module and package are the same keyword, just spelled differently.
But the intention is, of course, to retire package sooner or later. So my advice would be to use module only.
(This reminds me that I probably have to update the lang spec with regard to this. Never mind.)

Is it feasible to use Antlr for source code completion?

I don't know if this question is valid, since I'm not very familiar with source code parsing. My goal is to write a source code completion function for an existing programming language (language "X") for learning purposes.
Is ANTLR (v4) suitable for such a task, or should the necessary AST/parse tree creation and parsing be done by hand, assuming no existing solution exists?
I haven't found much information about this specific topic, apart from lists of compiler books, but a compiler is not what I'm after.
The code completion in GoWorks is completely implemented using ANTLR 4. The following video shows the level of completion of this code completion engine. The code completion example runs from 5 minutes through the end of the video.
Intro to Tunnel Vision Labs' GoWorks IDE (Preview Release)
I have been working on code completion algorithms for many years, and strongly believe that there is no better solution (automated or manual) than this approach for producing a code completion engine for a new language that meets the requirements for what I would call highly responsive code completion. If you are not interested in that level of performance or accuracy, other solutions may be easier for you to get involved with (I don't work with those personally, because I am too easily disappointed in the results).
Xtext uses ANTLR3 and has good autocomplete facilities. The problem is, it generates a separate parser (again using ANTLR3) for autocomplete processing, which is derived from AbstractInternalContentAssistParser. This multi-thousand-line piece of code shows that the error recovery of ANTLR3 alone was found to be insufficient by the Xtext team.
Meanwhile, ANTLR4 has a function parser.getExpectedTokensWithinCurrentRule() which lists the possible token types for a given position. It works when used in a ParseTreeListener. What remains is semantics, scoping, etc., which is out of ANTLR's scope.
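A minimal sketch of that API, assuming a combined ANTLR4 grammar named X (so the generated classes XLexer, XParser, and XBaseListener exist) with a hypothetical start rule file_:

import org.antlr.v4.runtime.*;
import org.antlr.v4.runtime.misc.IntervalSet;

public class CompletionProbe {
    public static void main(String[] args) {
        CharStream input = CharStreams.fromString("some partial source text");
        XParser parser = new XParser(new CommonTokenStream(new XLexer(input)));
        // A parse listener fires while the parse is in progress, so the
        // "current rule" is live when we ask for the expected tokens.
        parser.addParseListener(new XBaseListener() {
            @Override
            public void enterEveryRule(ParserRuleContext ctx) {
                // Token types the parser would accept next inside this rule.
                IntervalSet expected = parser.getExpectedTokensWithinCurrentRule();
                System.out.println(expected.toString(parser.getVocabulary()));
            }
        });
        parser.file_(); // hypothetical start rule of language "X"
    }
}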