How to fix the auto code formatting in Pharo?

When I save a method and get back to it later, all of my variable names become temp, all of my parameters become arg, and the code indentation gets changed.
Any thoughts on how I can fix this?

The behaviour that you are experiencing is not code formatting at all. Your image is experiencing an issue where it can't access the original source code. It therefore falls back to decompiling the method bytecode. Variable names are erased during compilation, so they can't be re-created during decompilation, and generic substitutes are used instead.
Now, why you are missing sources is another question. First of all, it's important to check whether you get any exceptions. These often happen when you open or save your image, but they may also occur when you save methods.
Depending on the Pharo version, you may be missing the .changes or .sources files. This often happens when you move an image without moving its supporting files.
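If you want to rule out the missing-files case quickly, check that the .changes file and a .sources file sit next to the .image you are launching. A minimal sketch of that check, written in Python purely for illustration (the image path is a placeholder):

    # Check that the companion files Pharo needs are next to the image.
    # The image path is a placeholder -- point it at your own .image file.
    from pathlib import Path

    image = Path("Pharo.image")                      # hypothetical path
    changes = image.with_suffix(".changes")          # must sit next to the image
    sources = list(image.parent.glob("*.sources"))   # name varies by Pharo version

    print("changes file present:", changes.exists())
    print("sources file present:", bool(sources))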

Related

IDE generated USEFORM macro calls changing their order

We have a C++Builder XE project (VCL Forms Application) that has a few dozen forms and units in it. Whenever a file belonging to the project is added, deleted, or renamed, the IDE should do two things:
A call to USEFORM macro is added to or altered in the Project Source file (ProjectName.cpp) if the affected unit is a form or frame
A CppCompile element in the project file (ProjectName.cbproj) is added or altered
However, instead of making just the necessary changes, the IDE shuffles some of the existing USEFORMs and CppCompile records, even if they aren't affected by the changes. If I add a unit (cpp and header file), the USEFORMs are shuffled even when that wouldn't require any changes to the Project Source, only to the .cbproj file.
I don't see a specific pattern in how the new order is formed. If I edit or rename a single unit, about half of the USEFORMs seem to change position, and just a couple of the CppCompile records, or none at all. If a change is made to a copy of the project on two different machines, most of the changes seem to be similar, but not all. This indicates that the reordering is not random.
The behaviour causes problems when using Subversion to merge changes, because it forces us to manually resolve conflicts caused by the changing order.
So the question is: what might be causing this behaviour, and how can we get rid of it?
I haven't been able to find a proper solution to the problem, but here's a simple method for making it slightly less annoying:
Adopt a policy of never committing the random IDE-generated changes to the version control repository. Whenever you make code changes that trigger the shuffling, revert all unnecessary changes in ProjectName.cpp and ProjectName.cbproj. At this point it is still fairly easy, as you know which parts of the files actually should have changed. That way, the manual labour is carried out when it still requires the minimum amount of work. Additionally, the work has to be carried out only once; in contrast, leaving the changes untouched means the work has to be repeated every time someone merges them.

Ipython QtConsole %edit

When using the magic function %edit from QtConsole with IPython, the call does not block, and does not execute the saved code. It does however save a temporary file...
I think this is intended behaviour, due to GUI editors and whatever the reason is for not being able to communicate with the subprocess (pyZMQ?).
What do you suggest as the best way to mix %edit/%run magics?
I would not mind calling two different commands (one to edit, and one after I have saved and execution is safe). But those commands need a way to synchronise the target file location, or somewhere to persist it, and probably need some crude form of predictably generating filenames so that you can edit more than one file at a time and execute them in an arbitrary order. Session persistence is not a must.
Would writing my own magic do any good? I hope we can %edit macros soon; that would do well enough to make it work.
You should be able to do %edit filename.py and %run filename.py. The non-blocking behaviour is expected and, IIRC, due to technical reasons. Not insurmountable, but difficult.
You could define your own magic if you wish; improvements are welcome.
Hope we can %edit macros soon, that would do well enough to make it work.
For that too, PRs are welcome. I guess as a workaround/option you can %load the macro, which would put the macro on input n+1, edit it and redefine it. That might be a good extension for a cell magic %%macro macroname.
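If you do go the custom-magic route, a minimal sketch of what it could look like is below. The magic names (%editrun, %runlast) and the module layout are my own invention, not an existing IPython feature; only the Magics base class, register_magics and run_line_magic come from IPython itself.

    # Sketch of a pair of magics that keep %edit and %run pointed at the
    # same file. Register it from a startup file or as an extension.
    from IPython.core.magic import Magics, magics_class, line_magic

    @magics_class
    class EditRunMagics(Magics):
        def __init__(self, shell):
            super().__init__(shell)
            self.target = None  # last file handed to the editor

        @line_magic
        def editrun(self, line):
            """%editrun path.py -- open the file in the editor and remember it."""
            self.target = line.strip() or self.target
            if self.target:
                self.shell.run_line_magic('edit', self.target)

        @line_magic
        def runlast(self, line):
            """%runlast -- run whatever %editrun last opened."""
            if self.target:
                self.shell.run_line_magic('run', self.target)

    def load_ipython_extension(ip):
        ip.register_magics(EditRunMagics)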
If you have some executable code on your input (from QtConsole), you can type
%edit 1-5
This fires the editor, creates a temporary file (automatically managed), and loads your input lines. This is nearly enough; now how do I retrieve the name of that temp file programmatically?
I see the print statement on stdout, but it's not visible to QtConsole AFAIK. I could maybe redirect stdout to catch that line, but that may not be an option anyway if you're doing something else with stdout.
If I could retrieve the full pathname that was just created, this would be cake. Store it where some magics will know how to find it. Then issue a follow-up command when ready, which pops the name off the stack, loads it into a macro, and runs it. All this with two input commands and no names to remember (unless you want to find and use that macro again, but for one-shot stuff...).
How do I catch or retrieve the path of that temporary file?
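I don't know of a supported way to ask %edit for the path it generated, so one hedged workaround is to create the temp file yourself, so its name is known up front, and keep it on a small stack that a follow-up magic can pop and run. Everything below (%grabedit, %runedited, the paths attribute) is hypothetical, not an existing IPython API; extract_input_lines is, as far as I can tell, the shell helper the history-based magics use for ranges like 1-5.

    # Hypothetical sketch: make the temp file ourselves so the path is known,
    # hand it to %edit, and let a second magic pop the path and %run it.
    import tempfile
    from IPython.core.magic import Magics, magics_class, line_magic

    @magics_class
    class GrabEditMagics(Magics):
        def __init__(self, shell):
            super().__init__(shell)
            self.paths = []  # stack of files handed to the editor

        @line_magic
        def grabedit(self, line):
            """%grabedit 1-5 -- dump an input range into a temp file and edit it."""
            code = self.shell.extract_input_lines(line.strip() or '1')
            fd, path = tempfile.mkstemp(suffix='.py')
            with open(fd, 'w') as f:
                f.write(code)
            self.paths.append(path)
            print('editing', path)
            self.shell.run_line_magic('edit', path)

        @line_magic
        def runedited(self, line):
            """%runedited -- run the most recently edited temp file."""
            if self.paths:
                self.shell.run_line_magic('run', self.paths.pop())

    def load_ipython_extension(ip):
        ip.register_magics(GrabEditMagics)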

INotifyPropertyChanged.PropertyChanged implemented and not implemented; Visual Studio build error

I'm seeing a strange build bug a lot. Sometimes, after typing some code, we receive the following build errors:
Class 'clsX' must implement 'Event PropertyChanged(sender As Object, e As PropertyChangedEventArgs)' for interface 'System.ComponentModel.INotifyPropertyChanged'.
And
'PropertyChanged' cannot implement 'PropertyChanged' because there is no matching event on interface 'System.ComponentModel.INotifyPropertyChanged'.
Those errors should never appear together! Usually we can just ignore the error and build the solution, but often enough this bug stops our build. (This happens a lot when using Edit and Continue, which is annoying.)
We're using VB.NET and C# mixed in one big solution.
Removing the PropertyChanged event and retyping the same code (!) sometimes fixes this.
Question:
Has anyone else seen this problem, and does anyone have suggestions on how to prevent it?
We're using a code generator that causes this error to surface, but just editing some files manually triggers it too. This error occurs on multiple machines with various setups.
Someone had the same exact issue discussed here. It sounds like there is an issue with this build picking up an old version of a binary. I would try the following in order:
Verify all assembly references use project references where possible within the Visual Studio solution.
Disable build parallelization in case there is some weird file-locking issue with concurrent project builds. Go to Tools -> Options, Projects and Solutions -> Build and Run, then set "maximum number of parallel project builds" to 1. Not the best solution, but it may help narrow down the problem.
Disable the hosting process in case it's locking some file and causing an assembly to not get rebuilt correctly. For C# projects, go to Project Properties, Debug tab, and uncheck "Enable the Visual Studio hosting process". For VB.NET projects you'll need to Unload Project, edit the project file, and add <UseVSHostingProcess>false</UseVSHostingProcess> to the PropertyGroup of each configuration. Again, not the best solution, but you probably won't notice a difference.
Lastly, try doing a Clean + Build to resolve the issue when it occurs (I know this is not a fix, but it's easy enough to do). Also, Rebuild may be slightly different from Clean + Build, so try the latter if the former doesn't work.
As I cannot comment due to a lack of reputation points, I would like to share one of my experiences:
An aspx.cs page I was working on used to compile fine, but sometimes gave mysterious errors about a variable or function not being defined, or a variable or function being defined twice. I changed practically every variable and function name, but there seemed to be no effect; yet entering a simple space or a new line anywhere in the file used to resolve the compile error. At one point I tried to save the file in a different encoding (as I like to experiment) and found that the file was not saving in the correct encoding (i.e. the ANSI encoding, because the file contained a Unicode character). I removed the Unicode character and that compile error didn't bother me again.
This Unicode character problem may not be what you're hitting (it's not a hard and fast rule), but it's worth checking.
Nuke & restore using source control (TFS instructions here):
Make sure you have everything checked in
Exit Visual Studio
Rename the project directory to .Bak (effectively deleting it)
Reopen Visual Studio and in source control:
Get Specific Version
check 'Overwrite... not checked out' and 'Overwrite ... even if local version matches'
Re-open project
Another problem: make sure no source files are newer than the current date (or that your date hasn't been set back). This often happens in apps where logic requires certain things to happen differently on certain dates: you change your clock to test it, make a revision to the source with the date advanced, set the date back, and voila, rebuild does not rebuild that file.
You say 'typing it in again' - can you try just saving? Forty years after Multics, the .NET build still decides what has changed by checking the file timestamp.
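If you suspect that is what's happening, a quick scan for source files whose timestamps lie in the future will confirm it. A minimal sketch in Python (the directory and extensions are placeholders for your solution):

    # List source files whose modification time is in the future; such files
    # can be silently skipped by timestamp-based incremental builds.
    import time
    from pathlib import Path

    now = time.time()
    for path in Path(".").rglob("*"):                      # run from the solution root
        if path.is_file() and path.suffix.lower() in {".cs", ".vb", ".resx"}:
            mtime = path.stat().st_mtime
            if mtime > now:
                print("future timestamp:", path, time.ctime(mtime))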
good luck!
When you get the error, is it always on the VB calling C# side, or vice-versa, or does it work both ways?
If the answer is either of the first two situations, try building the "callee" project within the solution before building the "caller" project to see if it stops the situation.
Also, just in case it may jog something for you to think about, does this error crop up when you change a VB file or a C# file, or is there no correlation?
Oh, and sorry this looks like an answer instead of a comment, I cannot post comments yet (need 50 rep).

Error checking overkill?

What error checking do you do? What error checking is actually necessary? Do we really need to check if a file has saved successfully? Shouldn't it always work if it's tested and works ok from day one?
I find myself error checking for every little thing, and most of the time it feels like overkill. Things like checking to see if a file has been written to the file system successfully, or checking to see if a database statement failed... shouldn't these be things that either work or don't?
How much error checking do you do? Are there elements of error checking that you leave out because you trust that it'll just work?
I'm sure I remember reading somewhere something along the lines of "don't test for things that'll never really happen", but I can't remember the source.
So should everything that could possibly fail be checked for failure? Or should we just trust those simpler operations? For example, if we can open a file, should we check to see if reading each line failed or not? Perhaps it depends on the context within the application or the application itself.
It'd be interesting to hear what others do.
UPDATE: As a quick example: I save an object that represents an image in a gallery. I then save the image to disk. If the saving of the file fails, I'll have no image to display even though the object thinks there is an image. I could check for failure of the image saving to disk and then delete the object, or alternatively wrap the image save in a transaction (unit of work) - but that can get expensive when using a db engine that uses table locking.
Thanks,
James.
If you run out of free space, try to write a file, and don't check errors, your application will fail silently or with unhelpful messages. I hate it when I see this in other apps.
I'm not addressing the entire question, just this part:
So should everything that could possibly fail be checked for failure? Or should we just trust those simpler operations?
It seems to me that error checking is most important when the NEXT step matters. If failure to open a file will allow error messages to get permanently lost, then that is a problem. If the application will simply die and give the user an error, then I would consider that a different kind of problem. But silently dying, or silently hanging, is a problem that you should really do your best to code against. So whether something is a "simple operation" or not is irrelevant to me; it depends on what happens next, or what would be the result if it failed.
I generally follow these rules.
Excessively validate user input.
Validate public APIs.
Use Asserts that get compiled out of production code for everything else.
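As a rough illustration of those three rules (the function names are invented for the example; in Python, running with -O strips assert statements, which stands in for asserts being compiled out of production code):

    # Rough illustration of the three rules; all names are made up.

    def handle_form_submission(raw_quantity):
        # 1. Excessively validate user input: never trust it, fail with a clear error.
        try:
            quantity = int(raw_quantity)
        except (TypeError, ValueError):
            raise ValueError("quantity must be a whole number")
        if not 1 <= quantity <= 1000:
            raise ValueError("quantity must be between 1 and 1000")
        return reserve_stock("product-42", quantity)

    def reserve_stock(product_id, quantity):
        # 2. Validate public APIs: callers outside your control may misuse them.
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        return _allocate(product_id, quantity)

    def _allocate(product_id, quantity):
        # 3. Everything internal relies on asserts, which vanish under `python -O`.
        assert quantity > 0, "internal callers must pre-validate quantity"
        return (product_id, quantity)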
Regarding your example...
I save an object that represents an image in a gallery. I then save the image to disk. If the saving of the file fails, I'll have no image to display even though the object thinks there is an image. I could check for failure of the image saving to disk and then delete the object, or alternatively wrap the image save in a transaction (unit of work) - but that can get expensive when using a db engine that uses table locking.
In this case, I would recommend saving the image to disk first before saving the object. That way, if the image can't be saved, you don't have to try to roll back the gallery. In general, dependencies should get written to disk (or put in a database) first.
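A minimal sketch of that ordering (the gallery_db object and its insert method are placeholders for whatever your storage layer provides):

    # Write the dependency first: persist the image file before the record
    # that points at it, so a failed image save leaves nothing to roll back.
    def add_image_to_gallery(image_bytes, target_path, gallery_db):
        try:
            with open(target_path, "wb") as f:
                f.write(image_bytes)              # save the image to disk first
        except OSError as exc:
            # No gallery object references the image yet, so just report it.
            raise RuntimeError(f"could not store image: {exc}") from exc

        # Only now create the gallery object that refers to the stored file.
        gallery_db.insert(path=target_path)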
As for error checking... check for errors that make sense. If fopen() gives you a file ID and you don't get an error, then you don't generally need to check for fclose() on that file ID returning "invalid file ID". If, however, file opening and closing are disjoint tasks, it might be a good idea to check for that error.
This may not be the answer you are looking for, but there is only ever a 'right' answer when looked at in the full context of what you're trying to do.
If you're writing a prototype for internal use, and it doesn't matter if you get the odd error, then you're wasting time and company money by adding in the extra checking.
On the other hand, if you're writing production software for air traffic control, then the extra time to handle every conceivable error may be well spent.
I see it as a trade-off: extra time spent writing the error-handling code versus the benefit of having handled that error if and when it occurs. Religiously handling every error is not necessarily optimal, IMO.

How to get rid of unmanaged code in VW 3.1d and ENVY

I have an old VW3/ENVY image with a parcel loaded as unmanaged code (exactly the situation Mastering ENVY/DEVELOPER warns against). Unfortunately, this problem happened a long time ago and it's too late to just "go back" to an image without the parcel loaded.
Apparently, there is a way to solve this problem (we have one development image where this has been solved, and there are normal configuration maps that contain the exact same code as the unmanaged parcel, but they can't be loaded), but the exact way has long since been forgotten (and there are some problems with taking that particular dev image as the base for a new runtime image, so I need to find out how to do it again).
In theory, it should be possible to remove the parcel and reload the code from a configuration map. In practice, all normal ways (using the ParcelBrowser or directly calling UnmanagedCode>>remove) fail. I even tried manually removing the offending selectors from the method dictionary, but past a certain point (involving a call to #primBecome:) the whole image hangs completely (I can't even drop into the debugger). I started hacking the instances of the classes and methods, hoping I'd trick ENVY into thinking that these particular methods are normal versioned code, but without any success yet.
Are there any smalltalk/envy gurus around that still remember enough of VW 3 to provide me with any pointers?
Status update
After a week of trying to solve the problem I finally made it, at least partially, so in case anyone's interested...
First, I had to fix the file pointers for the unmanaged code (otherwise, everything that tried to touch the methods would throw an exception). It looks like ENVY extends Parcel so that, in theory, all integer file pointers are changed to ENVY's void file pointer when loaded, but in my case I had to do it manually (a Parcel provides enumeration for all selectors it defines). Another way would be to tweak the filePointer code, but that can't easily be done automatically on every image where it's needed.
Then, the parcel can be discarded, which drops the parcel information but keeps the code. The official "Discard" mechanism needs a valid changes file (which ENVY doesn't use, so it has to be set manually and reset afterwards) and the parcel source (which we fortunately had).
To be able to make any changes to the methods (either manually, or by loading an application or class from ENVY), they need to get rid of their unmanaged status. This can be done by manually tweaking TheClass>>applicationAssocs (I also got rid of all references to the classes in UnmanagedCode, such as timestamps, and removed the reference to the discarded parcel). I actually had some info from my boss on how to get to this point, but I wasn't able to understand the instructions until I had almost figured it out by myself.
This finally allowed me to load and reload all the Applications that contained the classes. In theory. In practice, the image still hung completely whenever I tried to load a newer version of the Application (that contained the code formerly in the parcel).
It turned out that the crashes had absolutely nothing to do with the code being unmanaged, but with the fact that the parcel in question modified InputState>>process:, where it caused an exception due to a missing and/or uninitialized class variable (the InputState>>initialize method wasn't called until after the new process: method was in place). I had to modify the Notifier class to dump all exceptions to a file to find out what was going on. Adding the class variable to the source of the class (instead of adding it via reflection), suspending the input processing thread via toBeLoadedCode and starting it again in the loaded method and creating a new version of the application solved even this problem.
Now everything works, in theory. In practice it's still unusable, because reloading the WindowSystem or VisualworksBase applications causes their initialization blocks to run, and a whole lot of settings are reset to their defaults - fonts and font sizes, window colors, UI settings... And there doesn't seem to be any way to just save the settings to a file and load them later on, or just to see what all the settings are (either the official Settings menu doesn't show everything, or we have a heavily tweaked image... so much for reconstructing it from scratch). But that's a completely different question.
Well, normally the recommendation would be that you should be able to rebuild your development image from scratch by loading your code from the repository. But if you had that, then the answer would be simple, just discard that image and reload. I think it's been long enough that I've lost whatever knowledge I've had about how to mess with the internal structures to get it back, and it sounds like you've tried a lot of things. So, although it might be painful, figuring out the recipe to rebuild your development image by loading stuff from the repository sounds like it may be your best bet. It probably isn't all that horrible, there just might be a few dependencies on the image state, or special doits that need to be executed.
You also probably need to validate what's in the repository against what's in the image you're working from. If there was unmanaged code loaded and then someone modified it and saved it, it's not clear to me that it would have been saved to ENVY. So you probably want to audit everything that was unmanaged code and if it's been changed, save that to a repository edition.
Sorry I don't have any better answers.