Pharo 3.0 - Is persistence automatic? - smalltalk

I noticed that after running into an issue last night, relaunching Pharo 3.0 didn't "undo" my working set - everything appeared to be as it was when I closed it. I saw that Fuel is now included with Pharo - does it automatically persist your session? I was under the impression that you had to do some tricks to make it actually work with your application.
Am I wrong?

Pharo uses an image. The image is basically a snapshot of your memory contents while you use Pharo.
Upon startup this image is loaded from the image file into memory and Pharo starts to run. The inverse happens when you save (snapshot) your session: the current state/memory is saved to the .image file. That includes all tools open in the current session, all running processes and all live objects.
This has nothing to do with Fuel, which is a separate object serialization library.
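For reference, a snapshot can also be triggered from code; `snapshot:andQuit:` is the standard Pharo message for this (a one-liner you can evaluate in a workspace):

```smalltalk
"Save the current image to disk and keep running;
 passing true to andQuit: would quit right after saving."
Smalltalk snapshot: true andQuit: false.
```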

There are two mechanisms in Pharo:
The image. The image is a memory snapshot containing all the objects (and in particular the compiled methods and classes, which are objects too). When you save the image, you are saving the complete state of the system to disk. When you open an image, the memory is loaded back and execution continues where it stopped. There is also another file, called the changes file. This file contains the textual representation of the classes and methods you edited. The tools use this file, for example to show you method source code.
The changes file. In addition to the image (the memory snapshot), the system permanently records your code edits: after each compilation, the change is committed to the changes file. You can see what you did using the change sorter or version browser (note that if you do not save your image, your changes will not be browsable with the change sorter, because it is a simple tool). But even if you did not save your image, your changes are still logged in the changes file, and there is a way to recover them using the "Recover lost changes..." menu item under the Tools menu.
With this tool you can browse all the changes that have been recorded automatically and replay them. We are working on new tools for the future.
In general, though, you should not rely on such recovery tools. Instead, use Pharo's distributed version management system (Monticello) to create packages and publish them on forges such as SmalltalkHub.
Finally, Fuel is an object serializer that is not used for saving Pharo snapshots. Fuel is a fast serializer that people use when they want to select what they serialize - usually graphs of objects.
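To sketch what Fuel is for (the file name here is just an example), a typical round-trip looks like:

```smalltalk
"Serialize an object graph to a file..."
FLSerializer serialize: #(1 2 3) toFileNamed: 'demo.fuel'.

"...and materialize it back later, possibly in a different image."
FLMaterializer materializeFromFileNamed: 'demo.fuel'.
```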
All this information is also available in the free Pharo books: http://pharobyexample.org
and http://rmod.lille.inria.fr/pbe2/

Related

Objective-C - Finding directory size without iterating contents

I need to find the size of a directory (and its sub-directories). I can do this by iterating through the directory tree and summing up the file sizes etc. There are many examples on the internet but it's a somewhat tedious and slow process, particularly when looking at exceptionally large directory structures.
I notice that Apple's Finder application can instantly display a directory size for any given directory. This implies that the operating system is maintaining this information in real time. However, I've been unable to determine how to access this information. Does anyone know where this information is stored and if it can be retrieved by an Objective-C application?
IIRC Finder iterates too. In the old days, it used FSGetCatalogInfo (an old File Manager call) to do this quickly. I think there's a newer POSIX call these days that's the fastest, lowest-level API for this, especially if you're not interested in any of the other info besides the size and really need blazing speed over easily maintainable code.
That said, if it is cached somewhere in a publicly accessible place, it is probably Spotlight. Have you checked whether the spotlight info for a folder includes its size?
PS - One important thing to remember when determining the size of a file: Mac files can have two "forks": the data fork and the resource fork (where, e.g., Finder keeps the info if you override a particular file to open with another application than the default for its file type, as well as custom icons assigned to files). So make sure you add up both forks' sizes, or your measurements will be off.
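For reference, the iterate-and-sum approach the question describes is straightforward to write. Here is a sketch in Python for brevity; the same walk-and-sum logic ports directly to an NSFileManager directory enumerator in Objective-C, and note that it only counts data forks:

```python
import os

def directory_size(root):
    """Sum the sizes of all regular files under root (data forks only)."""
    total = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if not os.path.islink(path):  # skip symlinks to avoid double counting
                total += os.path.getsize(path)
    return total
```

This is the slow path the question wants to avoid, but it is the reliable baseline to compare any cached (e.g. Spotlight-derived) number against.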

How do I resolve LabVIEW load conflicts?

I am developing a data acquisition program in LabVIEW that uses multiple translation stages, cameras, a high-speed digitizer, and other instrumentation. I'm developing the application on one computer and will be deploying it to another. The development computer has LabVIEW 2013; the deployment computer currently has LabVIEW 2012, but we will upgrade it to LabVIEW 2013 when we move the application over. Some of the instruments need different driver versions to function under LabVIEW 2012 than under LabVIEW 2013.
I'm trying to keep all of the VIs, subVIs, and drivers for the instrumentation in one directory tree so that I can move the whole tree over to the deployment computer.
When I load the project in LabVIEW I get a lot of "Resolve Load Conflict" dialog boxes popping up. When I go to investigate, LabVIEW says it can't find one of the files that is causing the conflict, yet it still pops up the dialog. An example is below:
This happens every time I load this project - saving all doesn't enter the new paths into LabVIEW. I also tried creating a new project and pulling these VIs in, but the new project has the same load conflicts.
Evidently LabVIEW, or these VIs, thinks that VIs which no longer exist are still there.
How do I fix my project, my VIs, or LabVIEW so that it only uses the VIs it should, and I don't get all of these conflicts, many of which are with nonexistent files?
I just had this same problem, but solved it like so:
In your project window, expand the Dependencies group. You should see each of the undesired subVIs listed there.
Right-click on each one and select 'Replace with item found by project...'. This will bring up the familiar conflict resolution dialog box; go ahead and select the proper path and click OK.
Now, because the dependency has changed, LabVIEW is going to change the dependency path saved in the calling VI. You'll see a save dialog asking if you want to save the changes to the VI(s) that call the dependency whose path you just changed. You want to save those changes.
Do this for all the dependencies and you should be good to go.
I've found that when it is necessary to move driver files and libraries from the NI default locations, renaming the files prevents further confusion.
For instance, if you have an "instr.vi" that you need to move to a custom directory location, renaming the file "my_instr.vi" and linking to the renamed file prevents future conflicts.
Of course, this may initially involve some amount of work in renaming all the files and then pointing your calling VIs at these newly renamed driver files, but after that initial time investment you shouldn't have any more problems.

What is the difference between image generation and image stripping in Smalltalk?

I often read of an "image generation" process in Smalltalk. The process seems to refer to creating an image from scratch, from inside a Smalltalk.
But there is also a "stripping" process, which seems to involve removing objects to deploy a runtime.
What is the difference between the two? Is there any Smalltalk that supports image generation?
The term image generation often refers to a process that starts from the default vanilla image shipped with the installation and loads into it all the code necessary for a project. This is done periodically during development to ensure that all the code actually loads and works in the default image without problems.
Stripping is a process that is (sometimes) done before deployment: starting from an image that contains all the code the project needs, unused classes and methods are "stripped out". This is done to make the deployed image smaller, less dependent on external shared libraries, or for security or licensing reasons. For instance, stripping might remove many UI-related classes for a headless server, or remove the compiler to prevent users from changing the code. In any case, stripping is not an exact science, since it is difficult to determine what can be removed and what cannot.
So with image generation you end up with an image that is larger than the one you started with, and with stripping you end up with a smaller one.

Creating a Custom Media Library - Loading Images for Rendering (VB.net)

OK, I'm working on a project right now and I need to create a graphic library.
The game I'm experimenting with is an RPG; this project is expected to contain many big graphic files to use and I would prefer not to load everything into memory at once, like I've done before with other smaller projects.
So, does anyone have experience with libraries such as this one? Here's what I've come up with:
Have graphic library files and paths in an XML file
Each entry in the XML file would be designated "PERMANENT" or "TEMPORARY", where PERMANENT means that once loaded it stays in memory and won't be cleared (like menu graphics)
The library that the XML file loads into would have a CLEAR command, that clears out all non-PERMANENT graphics
I have experience throwing everything into memory at startup and running the program under the assumption that all necessary graphics are currently in memory. Are there any other considerations I might need to think about?
Ideally everything would be temporary and you would have a sensible evict function that chooses the right objects to victimize (based on access patterns) when your program decides it needs more memory.
There'll be some minimum amount of RAM your game needs to run, otherwise stuff will constantly swap, but this approach does mean you're not dumping objects marked TEMPORARY that you will just need to reload next frame because they happen to be in use right now.
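Both ideas - the asker's PERMANENT/TEMPORARY split with a CLEAR command, and the access-pattern-based eviction suggested above - can be sketched in a few lines. Here is a minimal, language-neutral illustration in Python (class and method names are made up for the sketch; a VB.net version would follow the same shape):

```python
from collections import OrderedDict

class MediaCache:
    """Sketch: PERMANENT entries never evict; TEMPORARY entries evict LRU-style."""

    def __init__(self, max_temporary):
        self.max_temporary = max_temporary
        self.permanent = {}              # e.g. menu graphics, loaded once
        self.temporary = OrderedDict()   # ordered oldest-access-first

    def load(self, key, loader, permanent=False):
        if key in self.permanent:
            return self.permanent[key]
        if key in self.temporary:
            self.temporary.move_to_end(key)   # mark as recently used
            return self.temporary[key]
        asset = loader(key)                   # actually read the file here
        if permanent:
            self.permanent[key] = asset
        else:
            self.temporary[key] = asset
            while len(self.temporary) > self.max_temporary:
                self.temporary.popitem(last=False)  # evict least recently used
        return asset

    def clear(self):
        """The CLEAR command: drop everything that isn't PERMANENT."""
        self.temporary.clear()
```

The size cap here is a count of assets for simplicity; in a real game you would cap total bytes instead, using each asset's decoded size.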

What is your review process for Rhapsody development?

My team is using IBM's Rhapsody tool to do real-time embedded development. Unfortunately, we are unhappy with our current review process.
More specifically, we've had difficulty because:
there is a lack of a good diff tool for diagram changes
the Rhapsody diff tool doesn't generate reports that you can use in a review
source file history is spotty because source files are generated products in MDD and thus are not version-controlled at a fine granularity
running diffs on source code sometimes pulls in unrelated changes made by other devs
sometimes changing a property of a model element changes dozens of source files
it's easy to change a source file through a property change and not know it
Does anyone have any tips for making peer reviews on Rhapsody development robust but low-hassle? Any best practices and lessons learned you would like to share? I'm not looking for a mature process write-up; tidbits I didn't know about would be great.
We use Rhapsody for the same purpose at my workplace. Reviews of model changes are done with a script that opens diffmerge on two copies of our repository (one at the start of the changes, one at the latest). That shows all of the pertinent changes, without any of the internal cruft Rhapsody adds.
Our repo doesn't track the generated sources, but we frequently see irrelevant changes in Rhapsody's .sbs files. We've started setting .sbs files as read-only on the filesystem, and then changing them to read/write from the properties panel in Rhapsody. That doesn't stop the files you mark as read/write from having cruft inserted, but it prevents unrelated files from being modified.
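The read-only trick is easy to automate. A small sketch (Python; the directory layout and `.sbs` extension filter are just this example's assumptions) that write-protects every model file under a given root:

```python
import os
import stat

def write_protect(root, extension=".sbs"):
    """Clear the write permission bits on every matching file under root."""
    protected = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(extension):
                path = os.path.join(dirpath, name)
                mode = os.stat(path).st_mode
                # Remove owner/group/other write bits, leave everything else.
                os.chmod(path, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))
                protected.append(path)
    return protected
```

Running this after each save (or from a pre-commit hook) keeps unrelated files from being silently rewritten; you flip individual files back to read/write from Rhapsody when you actually intend to edit them.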
I still haven't found a way to make Rhapsody stop inserting irrelevant changes (for example: it sometimes adds and removes filename fields between saves, despite minimal changes to the model). It creates a lot of merge conflicts, and I've personally started taking 5 or so minutes per commit to only add the changes that matter.
We have been using Rhapsody for development for the past 5 years. Our current process involves using the Rhapsody COM interface and the Microsoft Word COM interface to dump review packages to Word for design reviews. We also do this to generate the reference manual portion of our SUM.
For code we review the generated source.
We put the model into our version control system, and lock down model elements after they have been reviewed. If your version control tool makes things read only when they are checked in, it prevents you from accidentally changing a model element.
The COM interface is also good for dumping the model to make PowerPoint slides of diagrams if you want to present your design to a customer. You will have to tweak the slides after they are generated, as the pictures usually end up looking a little funny, but it gives a quick starting point.
It is also possible to prevent Rhapsody from writing timestamps to the sbs files by setting the property CG::General::IncrementalCodeGenAcrossSession to false. This can help reduce the amount of unnecessary data.