Why do I need the shared sources file in Smalltalk dialects like Pharo?

The usual installation instructions tell me that to run Pharo I need at least three files:
image file
changes file
source file (e.g. PharoV10.sources)
I've run Pharo 2 without the sources file and I didn't see any problem. All sources seemed to be available.
So, why do I need the sources file (e.g. PharoV10.sources)?

The image file contains only the compiled code, not the original source code. The changes file contains source code for the stuff that you added to the system yourself but not source code for existing system classes. To get the source code for existing system classes you need the sources file.
Having said that, Smalltalk can decompile the code and produce what looks like source code if the sources file isn't available. This code will be missing proper variable names, comments and spacing. You really don't want to use decompiled source code so you need access to the sources file.

Three possible explanations; try to identify the joke:
the Browser is smart enough to show you source reconstructed (decompiled) from the CompiledMethod bytecodes. Hint: in this case, you lose all comments
there is a search path for the sources files, and one is found somewhere on your disk
Pharo is changing so fast that every source is now found in the .changes file
To verify 1., you could browse references to Decompiler (there are a few too many uses for my taste).
To verify 2., you could browse implementors of #openSourceFiles.
To verify 3., you could evaluate this snippet:
| nSources nChanges |
nSources := nChanges := 0.
SystemNavigation default allBehaviorsDo: [:b |
    b selectorsDo: [:s |
        "fileIndex 1 means the method's source pointer refers to the .sources file"
        (b compiledMethodAt: s) fileIndex = 1
            ifTrue: [nSources := nSources + 1]
            ifFalse: [nChanges := nChanges + 1]]].
^{nSources. nChanges}

It is also possible that Pharo downloads PharoV10.sources automatically.

How to do an incremental read of binary files

TL;DR: can I do an incremental read of binary files with Red or Rebol?
I would like to use Red to process some large (13MB to 2GB) structured binary files (Kurzweil synthesizer files). I've used other languages (C, Go, Tcl, Ruby, Dart) to walk through these files, and now I'd like to do the same with Red or Rebol.
Is there a way to incrementally read binary files, byte by byte? All I see is read/binary which seems to slurp the entire file at once (or a part of a file).
I'll need to jump around a little bit, too (either peek at the next byte, or skip to the end of a section, or skip past variable length strings to the start of data).
(Yes, I could make some helpers that tracked the position and used read/part/seek.)
I would like to make a call to the low level OS read/seek if that is possible - something new to learn.
This is on macos, but a portable solution would be great.
Thanks!
PS: "open/read %abc" gives an error "*** Script Error: open does not allow file! for its port argument", even though the help message say the port argument is "port [port! file! url! block!]"
Rebol has ports for that, which are planned for the 0.7.0 release of Red. So current I/O is very basic and buffer-only, and open is a preliminary stub.
I would like to make a call to the low level OS read/seek if that is possible - something new to learn.
You can leverage Rebol or the Red/System FFI as a learning exercise.
Here is how you would do it in Rebol:
>> file: open/direct/binary %file.dat
>> until [none? probe copy/part file 20]
>> close file
#{732F7072696E74657253657474696E6773312E62}
#{696E504B01022D00140006000800000021006149}
#{0910890100001103000010000000000000000000}
...
#{000000006A290000646F6350726F70732F617070}
#{2E786D6C504B0506000000000D000D0068030000}
#{292C00000000}
none
first file or pick file 1 will return the next byte value (integer!)
This even works with text files: open/lines/direct. In that case copy/part file 20 will return 20 lines, or you can use pick file 1 or first file to get the next line.
Soon this will be available on Red too.

In CMake, how do I deal with generated source files whose number and names are not known in advance?

Imagine a code generator which reads an input file (say a UML class diagram) and produces an arbitrary number of source files which I want to be handled in my project. (To draw a simple picture, let's assume the code generator just produces .cpp files.)
The problem is that the number of files generated depends on the input file and thus is not known when writing the CMakeLists.txt file, or even at CMake's configure step. E.g.:
>>> code-gen uml.xml
generate class1.cpp..
generate class2.cpp..
generate class3.cpp..
What's the recommended way to handle generated files in such a case? You could use FILE(GLOB ...) to collect the file names after running code-gen the first time, but this is discouraged because CMake would not know about any files on the first run, and later it would not notice when the number of files changes.
I could think of some approaches but I don't know if CMake covers them, e.g.:
(somehow) define a dependency from an input file (uml.xml in my example) to a variable (list with generated file names)
if the code generator can be convinced to report which files it generates, the output of code-gen could be used to build the list of file names (this would lead to similar problems, but at least I would not have to use GLOB, which might collect old files)
just define a custom target which runs the code generator and handles the output files without CMake (I don't like this option)
Update: This question targets a similar problem, but it only asks how to glob generated files, which does not address how to re-configure when the input file changes.
Together with Tsyvarev's answer and some more googling, I came up with the following CMakeLists.txt, which does what I want:
project(generated)
cmake_minimum_required(VERSION 3.6)
set(IN_FILE "${CMAKE_SOURCE_DIR}/input.txt")
set_property(DIRECTORY APPEND PROPERTY CMAKE_CONFIGURE_DEPENDS "${IN_FILE}")
execute_process(
    COMMAND python3 "${CMAKE_SOURCE_DIR}/code-gen" "${IN_FILE}"
    WORKING_DIRECTORY ${PROJECT_BINARY_DIR}
    INPUT_FILE "${IN_FILE}"
    OUTPUT_VARIABLE GENERATED_FILES
    OUTPUT_STRIP_TRAILING_WHITESPACE
)
add_executable(generated main.cpp ${GENERATED_FILES})
It turns an input file (input.txt) into output files using code-gen and compiles them.
execute_process runs at the configure step, and the set_property() call makes sure CMake re-runs when the input file changes.
Note: in this example the code generator must print a CMake-friendly (i.e. semicolon-separated) list on stdout, which is nice if you can modify the code generator. FILE(GLOB ...) would do the trick too, but that would almost certainly lead to problems (e.g. old generated files being compiled too, colleagues complaining about your code, etc.).
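For illustration only, a stand-in generator along those lines might look like the following Python sketch. The file layout and names are made up (one .cpp per non-empty input line instead of a real UML model); the only part that matters for CMake is the semicolon-separated list of absolute paths printed on stdout.

#!/usr/bin/env python3
# hypothetical stand-in for code-gen: writes class<N>.cpp files into the
# current directory and prints the generated paths as a CMake list on stdout
import sys
from pathlib import Path

def main():
    in_file = Path(sys.argv[1])
    out_dir = Path.cwd()  # execute_process runs this in PROJECT_BINARY_DIR
    generated = []
    for i, line in enumerate(in_file.read_text().splitlines(), start=1):
        if not line.strip():
            continue
        out = out_dir / f"class{i}.cpp"
        out.write_text(f"// generated from: {line.strip()}\nvoid class{i}_stub() {{}}\n")
        generated.append(str(out))
    # semicolon-separated so CMake can use the output directly as a list
    sys.stdout.write(";".join(generated))

if __name__ == "__main__":
    main()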
PS: I don't like answering my own questions. If you come up with a nicer or cleaner solution in the next couple of days, I'll accept yours!

make/0 functionality for SICStus

How can I ensure that all modules (and ideally also all other files that have been loaded or included) are up to date? When issuing use_module(mymodule), SICStus compares the modification date of the file mymodule.pl and reloads it if it is newer. Files pulled in with include also trigger a recompilation. But it does not recheck all modules used by mymodule.
In brief, how can I get functionality similar to what SWI offers with make/0?
There is nothing in SICStus Prolog that provides that kind of functionality.
A big problem is that current Prologs are too dynamic for something like make/0 to work reliably except for very simple cases. With features like term expansion, goals executed during load (including file loading goals, which is common), etc., it is not possible to know how to reliably re-load files. I have not looked closely at it, but presumably make/0 in SWI Prolog has the same problem.
I usually just re-start the Prolog process and load the "main" file again, i.e. a file that loads everything I need.
PS. I was not able to get code formatting in the comments, so I put it here instead: an example of why make/0 needs to guard against 'user' as the File from current_module/2:
| ?- [user].
% compiling user...
| :- module(m,[p/0]). p. end_of_file.
% module m imported into user
% compiled user in module m, 0 msec 752 bytes
yes
| ?- current_module(M, F), F==user.
F = user,
M = m ? ;
no
| ?-
So far, I have lived with several hacks:
Up to 0.7 – pre-module times
SICStus always had ensure_loaded/1 of Quintus origin, which was not only a directive (like in ISO), but was also a command. So I wrote my own make-predicate simply enumerating all files:
l :-
    ensure_loaded([f1,f2,f3]).
Upon issuing l., only those files that were modified in the meantime were reloaded.
Probably I could also have written it like this, had I read the meanual (sic):
l :-
    \+ ( source_file(F), \+ ensure_loaded(F) ).
3.0 – modules
With modules, things changed a bit. On the one hand there were files loaded manually into a module, like ensure_loaded(module:[f1,f2,f3]), and on the other hand clean modules. It turned out that there is a way to globally ensure that a module is loaded without interfering with the actual import lists: simply state use_module(m1, []), which is again both a directive and a command. The point is the empty import list, which causes the module to be rechecked and reloaded; and thanks to that empty list, the statement can be made anywhere.
In the meantime, I use the following module:
:- module(make,[make/0]).
make :-
    \+ ( current_module(_, F), \+ use_module(F, []) ).
This works for all "legal" modules, and as long as the interfaces do not change. What I still dislike is its verbosity: for each checked and unmodified module there is one message line, so I get a page full of such messages when I just want to check that everything is up to date. Ideally such messages would only show up if something new happens.
| ?- make.
% module m2 imported into make
% module m1 imported into make
% module SU_messages imported into make
yes
| ?- make.
% module m2 imported into make
% module m1 imported into make
% module SU_messages imported into make
yes
An improved version takes @PerMildner's remark into account.
Further files can be reloaded if they are related to exactly one module. In particular, files loading into module user, such as the .sicstusrc, are included. See the link above for the full code.
% reload files that are implicitly modules, but that are still simple to reload
\+ (
    source_file(F),
    F \== user,
    \+ current_module(_, F),    % not officially declared as a module
    setof(M,
          P^ExF^ExM^(
              source_file(M:P, F),
              \+ current_module(M, ExF),    % not part of an official module
              \+ predicate_property(M:P, multifile),
              \+ predicate_property(M:P, imported_from(ExM))
          ),
          [M]),    % only one module per file, others are too complex
    \+ ensure_loaded(M:F)
).
Note that in SWI neither ensure_loaded/1 nor use_module/2 compares file modification dates, so neither can be used to ensure that the most recent version of a file is loaded.

Finding a corrupted part among the parts of a split archive

I have 7 files with extensions like xyz.rar.001 to xyz.rar.007; clearly they are parts of a single file, and I have all 7 parts. I join them with a file joiner into a single file xyz.rar and try to unrar it with WinRAR, but it says the archive is corrupted, so one or two parts are apparently corrupted. Is there any way to find which ones? I don't want to re-download all of them. Note: WinRAR can detect a corrupt part if the parts were split using WinRAR itself (with extensions like part1.rar, part2.rar, etc.), but not if they are named .rar.001.
Parts .001 - .006 should have the same size. Check if there is a file with a different byte size.
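A quick way to do that size check, sketched here in Python (purely illustrative; the xyz.rar.* names are just the example from the question):

import glob
import os

parts = sorted(glob.glob("xyz.rar.0*"))
for part in parts:
    print(part, os.path.getsize(part), "bytes")
# every part except the last one should normally have the same size;
# an odd one out is a likely candidate for the damaged or incomplete part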
Are there multiple files in the RAR or just the one? With multiple you could run a Test and see which is the first file to fail.
I think it's strange that a second tool (e.g. HJSplit) was used to split the RAR archive. That makes me think .002 could be a RAR archive too. Try opening xyz.rar.001 with WinRAR and test/extract it. It happens quite often that RAR archives have the extension .001 instead of .rar. An example.
Naming your archives in WinRAR like this can be accomplished by putting "xyz.rar.001" as Archive name on the General tab and checking "Old style volume names" on the Advanced tab.
If I then join the files with HJSplit, I get one .rar file (that is corrupt). When I Test it, it says "Next volume is required". In the diagnostic messages I can see "The required volume is absent" and "CRC failed in X. The file is corrupt"
If there is just one file stored inside the RAR and the RAR is indeed just chopped up into 7 pieces, there is no way of telling which part is bad without additional files such as .sfv or .par2 (unless the RAR does not use compression: then you can parse the underlying file for errors and calculate the part where it goes wrong).

Ignore includes with pycparser and define multiple subgraphs in pydot

I am new to Stack Overflow, but I have already gotten a lot of help here; thanks to the community for that.
I'm trying to create a tool that shows me caller dependencies for legacy code.
I'm parsing a directory of C code with pycparser, and for each file I want to create a subgraph with pydot.
Two questions:
When parsing a C file, the parser follows the #includes, and I also get functions from the included files in my AST. How can I tell whether a function comes from an include or from the file itself, or how can I ignore the #includes?
For each file I want to create a subgraph and then add all functions in that file to this subgraph. I don't know in advance how many subgraphs I have to create...
I have a set of files, where each file is a frozenset containing the functions of that file.
Is something like this possible?
for files in SetOfFiles:
    # how to create a subgraph named after `files`?
    for function in files:
        self.graph.add_node(pydot.Node(function))  # --> add node to subgraph "files"
I hope you get my challenge... any ideas?
Thanks!
EDIT:
I solved the pydot question; it was quite easy... So I'm still stuck with my pycparser problem :(
for files in ListOfFuncs:
    cluster_x = pydot.Cluster(files, label=files)
    for functions in files:
        cluster_x.add_node(pydot.Node(functions))
    graph.add_subgraph(cluster_x)
I can address the pycparser part. The preprocessor leaves #line directives that specify which file & line the code came from, and pycparser consumes those. You can get that information from the AST it creates (see the pycparser tests for an example).
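A minimal sketch of that idea in Python (the paths, include directory, and cpp arguments here are assumptions and will need adjusting for your setup): parse with the preprocessor enabled and keep only the functions whose coord.file matches the file you actually parsed.

from pycparser import parse_file, c_ast

class LocalFuncDefs(c_ast.NodeVisitor):
    """Collect names of functions defined in one specific file."""
    def __init__(self, wanted_file):
        self.wanted_file = wanted_file
        self.names = []

    def visit_FuncDef(self, node):
        # node.coord.file is taken from the preprocessor's #line directives,
        # so definitions pulled in via #include carry the header's file name
        if node.coord is not None and node.coord.file == self.wanted_file:
            self.names.append(node.decl.name)

filename = 'legacy/foo.c'  # hypothetical path
ast = parse_file(filename, use_cpp=True, cpp_args=['-Ilegacy/include'])  # assumed include dir
visitor = LocalFuncDefs(filename)
visitor.visit(ast)
print(visitor.names)  # functions defined in foo.c itself, not in its #includes

The same coord-based filter works for building the per-file pydot clusters: group functions by node.coord.file and add one Cluster per file.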