Does Matlab have something akin to a main method? - oop

If I have a single Matlab source file (m-file) containing a class definition (classdef), is there any way to specify a particular set of code to be executed when I run the m-file? I mean running the entire file, for example via the Run button in the IDE, from a shell, or from the Matlab command line; I don't mean manually selecting the code to be executed.
Similar behaviour exists in Java with the static main method and in Python by putting code outside the class definition (possibly inside an if __name__ == "__main__" block).

The short answer is "no"; MATLAB classdef M-files are just intended to define objects, not form complete programs.
The long answer is that you might be able to get this behavior out of your classdef file if, for example, you set up the constructor to take a flag specifying whether to "act like a variable" or "act like a program".
e.g.
classdef myClass
    % ...
    methods
        function self = myClass(varargin)
            if nargin == 1 && strcmpi(varargin{1}, 'run')
                % ... run the program
            else
                % ... make the variable
            end
        end
    end
end
Or you can make a static method called main:
methods (Static = true)
    function main()
        % enables: myClass.main()
        % ...
    end
end
The IDE still won't know what to do with your M-file to "run it", but you can run it properly from the command line, or another M-file.
That last sentence is not 100% correct - as Egon pointed out below, you CAN make MATLAB's IDE run that code - use a "run configuration": http://www.mathworks.com/help/matlab/matlab_prog/run-functions-in-the-editor.html

There are a few ways you can do this:
You can create a "run configuration" (either as a script or a specific line of code). This will run whenever you click the run button (or press the run shortcut) from within your classdef file. The big drawback is that those run configurations are stored locally, so this is a nightmare when it comes to collaboration or working at multiple places. So personally, I'd recommend writing a script if you have a complicated run configuration. Mine are mostly called testMyClass where MyClass of course is the class you want to run.
If you don't require complicated code, you can also put everything in the Constructor of your object. If you check whether there are no arguments passed with if nargin == 0 ... end, that piece of code should be called whenever you 'run' the class file. However, you are somewhat limited in what you can do, as you could create infinite loops or an infinite chain of those objects being created if you are not careful. In the end, you will have just the object in your base workspace.
If you do require more complicated code, or code that creates variables in the base workspace, it can be accomplished, but it comes at a great cost: your code might end up a complete mess, so I advise against this unless you have an extremely good reason. You can use the previous method together with the evil functions evalin and assignin to evaluate and assign to variables in the base workspace; a sketch follows below.
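As a minimal sketch of options 2 and 3 combined (the result property, the lastResult variable, and the placeholder computation are purely illustrative):
classdef myClass
    properties
        result
    end
    methods
        function self = myClass(varargin)
            if nargin == 0
                % 'Run' behaviour: triggered when the file is run with no arguments
                self.result = 42;                             % placeholder computation
                assignin('base', 'lastResult', self.result);  % push a variable into the base workspace
            else
                % Normal construction path
                self.result = varargin{1};
            end
        end
    end
end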

Related

Can't reference sub-routines in other modules of VB.net program

I have a VB.net program that I received from someone else. I am trying to make modifications to it. The program consists of one main form and 6 classes (all .vb files).
In the main form, I want to call a sub-routine in one of the other modules. What is strange is that if I enter the name of the module followed by a ".", i.e.
QuoteMgr.
I don't see the names of the sub-routines in the module. I only see the Public Const's that are defined.
The sub-routine that I want to call is in a section labelled:
#Region "Methods"
What do I need to do to be able to call one of these methods?
The confusion was because of the wording you used in your original question before you edited it to say "class" instead of "module".
The two terms mean entirely different things in VB.net. A class typically must be instantiated as an object before you can invoke its methods.
So what you need to do is:
Dim qt As New QuoteMgr
qt.Method("foo")
In this case you're creating an instance of QuoteMgr called qt and then invoking its methods. Alternatively you could modify the QuoteMgr class and set the method you're trying to call to "Shared" and then call it by simply going "QuoteMgr.Method" as you were trying before.
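If you go the Shared route instead, a minimal sketch would be (the method name Method is only a placeholder taken from the snippet above):
Public Class QuoteMgr
    ' Shared members belong to the class itself, so no instance is needed
    Public Shared Sub Method(ByVal text As String)
        ' ... do the work here
    End Sub
End Class

' Caller (no Dim/New required):
QuoteMgr.Method("foo")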
A module is more like a free-standing library of methods that can be called by anything in the same project (by default).

VB.net : How to protect a global variable from being modified inside a sub (ByVal, ByRef : not applicable)

I have a global variable X in a WinForms application.
The variable X is used in different forms inside the application and I don't want it to be modified. It's not used as a parameter in the functions, so ByRef or ByVal are not applicable.
It's used like this:
Declaration
Dim X As Whatever
Dim Y As Whatever
Private Sub SubExample(A As Object)
    'Do some stuff
    'Locally modify X
    X = somethingElse
End Sub
Main program
Call SubExample(Y)
'After this, X should still have its original value
Any ideas, please?
You can't protect a global variable (unless it only needs to be assigned once, in which case it can be a Const). By definition it's global, so it's visible to all classes.
I would avoid globals whenever possible for exactly that reason: you can't restrict access to the code that really has to use them (as you found out yourself), and they couple all the classes that use them. The main problems I see with them are:
Testing: because they couple many (all?) classes, they make code testing pretty hard. You can't really isolate a class or sub-system for testing.
Concurrency: they're freely accessed by everything on any thread, so you'll have concurrency issues and you'll need to make them thread-safe. A variable in VB.NET can be thread-safe (at least with atomic reads/writes) only for primitive types.
Access: as you saw, you can't restrict access to them. Even if you make them global properties, you can only make them read-only, and somewhere a write function/setter must still exist (unless you're using them for the singleton pattern or a few other corner cases).
Maintainability: your code will be harder to understand because the implications won't be obvious and local.
What can you do to replace them with something safer?
If you put them in a global class as Shared members, just remove Shared and make them instance members. If they're in a Module, move them to a Class.
Make your class a singleton (I would use a method instead of a simple property to make this more obvious). This may or may not fit your case; you may simply create your object in your startup method. (A sketch follows after this list.)
Add a public property to each form that will need them, and when you create the form, set this property to the class you previously created. Depending on the actual implementation, this may or may not be the Context Object pattern.
If you have multiple sets of global variables (and each set has different users), you may need to create multiple classes (one class for each set of variables).
This is a pretty general way to quickly replace global variables; a better way involves deeper refactoring to make your code more OOP-ish, but I can't say more without a more complete view of your code.
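A minimal sketch of points 2 and 3, assuming the globals end up in a class I'll call GlobalContext and that X is one of them (all names are illustrative, and the singleton below is deliberately simple and not thread-safe):
Public Class GlobalContext
    Private Shared _instance As GlobalContext

    ' Singleton accessor: a method rather than a property, to make the intent explicit
    Public Shared Function Current() As GlobalContext
        If _instance Is Nothing Then
            _instance = New GlobalContext()
        End If
        Return _instance
    End Function

    ' X can be read everywhere but only assigned inside this class
    Private _x As Integer = 42
    Public ReadOnly Property X() As Integer
        Get
            Return _x
        End Get
    End Property
End Class

' In a form that needs it:
' Dim value As Integer = GlobalContext.Current().X   ' reading is fine
' GlobalContext.Current().X = 0                      ' compile error: the property is read-only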
As a low-tech solution, I would recommend using an unambiguous name like
Dim READONLY_X
as the name of your global variable. Then you are less likely to forget that you should not be writing a new value to it. When you feel the temptation to write the line:
READONLY_X = 2
it should ring an alarm bell. Wrapping inside getter functions etc (without the formalism of a class) seems like a kluge. But that's just an opinion.
As was said before, global variables are a pain; think carefully about the scope you want them to have, and whether there isn't another solution...

How are words bound within a Rebol module?

I understand that the module! type provides a better structure for protected namespaces than object! or the 'use function. How are words bound within the module? I notice some errors related to unbound words:
REBOL [Type: 'module] set 'foo "Bar"
Also, how does Rebol distinguish between a word local to the module ('foo) and that of a system function ('set)?
Minor update, shortly after:
I see there's a switch that changes the method of binding:
REBOL [Type: 'module Options: [isolate]] set 'foo "Bar"
What does this do differently? What gotchas are there in using this method by default?
OK, this is going to be a little tricky.
In Rebol 3 there are no such things as system words, there are just words. Some words have been added to the runtime library lib, and set is one of those words, which happens to have a function assigned to it. Modules import words from lib, though what "import" means depends on the module options. That might be more tricky than you were expecting, so let me explain.
Regular Modules
For starters, I'll go over what importing means for "regular" modules, ones that don't have any options specified. Let's start with your first module:
REBOL [Type: 'module] set 'foo "Bar"
First of all, you have a wrong assumption here: The word foo is not local to the module, it's just the same as set. If you want to define foo as a local word you have to use the same method as you do with objects, use the word as a set-word at the top level, like this:
REBOL [Type: 'module] foo: "Bar"
The only difference between foo and set is that you hadn't exported or added the word foo to lib yet. When you reference words in a module that you haven't declared as local words, it has to get their values and/or bindings from somewhere. For regular modules, it binds the code to lib first, then overrides that by binding the code again to the module's local context. Any words defined in the local context will be bound to it. Any words not defined in the local context will retain their old bindings, in this case to lib. That is what "importing" means for regular modules.
In your first example, assuming that you haven't done so yourself, the word foo was not added to the runtime library ahead of time. That means that foo wasn't bound to lib, and since it wasn't declared as a local word it wasn't bound to the local context either. So as a result, foo wasn't bound to anything at all. In your code that was an error, but in other code it might not be.
Isolated Modules
There is an "isolate" option that changes the way that modules import stuff, making it an "isolated" module. Let's use your second example here:
REBOL [Type: 'module Options: [isolate]] set 'foo "Bar"
When an isolated module is made, every word in the module, even in nested code, is collected into the module's local context. In this case, it means that set and foo are local words. The initial values of those words are set to whatever values they have in lib at the time the module is created. That is, if the words are defined in lib at all. If the words don't have values in lib, they won't initially have values in the module either.
It is important to note that this import of values is a one-time thing. After that initial import, any changes to these words made outside the module don't affect the words in the module. That is why we say the module is "isolated". In the case of your code example, it means that someone could change lib/set and it wouldn't affect your code.
But there's another important module type you missed...
Scripts
In Rebol 3, scripts are another kind of module. Here's your code as a script:
REBOL [] set 'foo "Bar"
Or if you like, since script headers are optional in Rebol 3:
set 'foo "Bar"
Scripts also import their words from lib, and they import them into an isolated context, but with a twist: All scripts share the same isolated context, known as the "user" context. This means that when you change the value of a word in a script, the next script to use that word will see the change when it starts. So if after running the above script, you try to run this one:
print foo
Then it will print "Bar", rather than have foo be undefined, even though foo is still not defined in lib. You might find it interesting to know that if you are using Rebol 3 interactively, entering commands into the console and getting results, every command line you enter is a separate script. So if your session looks like this:
>> x: 1
== 1
>> print x
1
The x: 1 and print x lines are separate scripts, the second taking advantage of the changes made to the user context by the first.
The user context is actually supposed to be task-local, but for the moment let's ignore that.
Why the difference?
Here is where we get back to the "system function" thing, and that Rebol doesn't have them. The set function is just like any other function. It might be implemented differently, but it's still a normal value assigned to a normal word. An application will have to manage a lot of these words, so that's why we have modules and the runtime library.
In an application there will be stuff that needs to change, and other stuff that needs to not change, and which stuff is which depends on the application. You will want to group your stuff, to keep things organized or for access control. There will be globally defined stuff, and locally defined stuff, and you will want to have an organized way to get the global stuff to the local places, and vice-versa, and resolve any conflicts when more than one thing wants to define stuff with the same name.
In Rebol 3, we use modules to group stuff, for convenience and access control. We use the runtime library lib as a place to collect the exports of the modules, and resolve conflicts, in order to control what gets imported to the local places like other modules and the user context(s). If you need to override some stuff, you do this by changing the runtime library, and if necessary propagating your changes out to the user context(s). You can even upgrade modules at runtime, and have the new version of the module override the words exported by the old version.
For regular modules, when things are overridden or upgraded, your module will benefit from such changes. Assuming those changes are a benefit, this can be a good thing. A regular module cooperates with other regular modules and scripts to make a shared environment to work in.
However, sometimes you need to stay separate from these kinds of changes. Perhaps you need a particular version of some function and don't want to be upgraded. Perhaps your module will be loaded in a less trustworthy environment and you don't want your code hacked. Perhaps you just need things to be more predictable. In cases like this, you may want to isolate your module from these kinds of external changes.
The downside to being isolated is that, if there are changes to the runtime library that you might want, you're not going to get them. If your module is somehow accessible (such as by having been imported with a name), someone might be able to propagate those changes to you, but if you're not accessible then you're out of luck. Hopefully you've thought to monitor lib for changes you want, or reference the stuff through lib directly.
Still, we've missed another important issue...
Exporting
The other part of managing the runtime library and all of these local contexts is exporting. You have to get your stuff out there somehow. And the most important factor is something that you wouldn't suspect: whether or not your module has a name.
Names are optional for Rebol 3's modules. At first this might seem like just a way to make it simpler to write modules (and in Carl's original proposal, that is exactly why). However, it turns out that there is a lot of stuff that you can do when you have a name that you can't when you don't, simply because of what a name is: a way to refer to something. If you don't have a name, you don't have a way to refer to something.
It might seem like a trivial thing, but here are some things that a name lets you do:
You can tell whether a module is loaded.
You can make sure a module is only loaded once.
You can tell whether an older version of a module was there earlier, and maybe upgrade it.
You can get access to a module that was loaded earlier.
When Carl decided to make names optional, he gave us a situation where it would be possible to make modules for which you couldn't do any of those things. Given that module exports were intended to be collected and organized in the runtime library, we had a situation where you could have effects on the library that you couldn't easily detect, and modules that got reloaded every time they were imported.
So for safety we decided to cut out the runtime library completely and just export words from these unnamed modules directly to the local (module or user) contexts that were importing them. This makes these modules effectively private, as if they are owned by the target contexts. We took a potentially awkward situation and made it a feature.
It was such a feature that we decided to support it explicitly with a private option. Making this an explicit option helps us deal with the last problem not having a name caused us: making private modules not have to reload over and over again. If you give a module a name, its exports can still be private, but it only needs one copy of what it's exporting.
However, named or not, private or not, that makes three export types.
Regular Named Modules
Let's take this module:
REBOL [type: module name: foo] export bar: 1
Importing this adds a module to the loaded modules list, with the default version of 0.0.0, and exports one word bar to the runtime library. "Exporting" in this case means adding a word bar to the runtime library if it isn't there, and setting that word lib/bar to the value that the word foo/bar has after foo has finished executing (if it isn't set already).
It is worth noting that this automatic exporting happens only once, when the body of foo is finished executing. If you make a change to foo/bar after that, that doesn't affect lib/bar. If you want to change lib/bar too, you have to do it manually.
It is also worth noting that if lib/bar already exists before foo is imported, you won't have another word added. And if lib/bar is already set to a value (not unset), importing foo won't overwrite the existing value. First come, first served. If you want to override an existing value of lib/bar, you'll have to do so manually. This is how we use lib to manage overrides.
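Sticking with the lib/bar notation above, such a manual override is just an ordinary set-path assignment, something like:
lib/bar: 2  ; deliberately overwrite the value that foo exported earlier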
The main advantages that the runtime library gives us is that we can manage all of our exported words in one place, resolving conflicts and overrides. However, another advantage is that most modules and scripts don't actually have to say what they are importing. As long as the runtime library is filled in properly ahead of time with all the words you need, your script or module that you load later will be fine. This makes it easy to put a bunch of import statements and any overrides in your startup code which sets up everything the rest of your code will need. This is intended to make it easier to organize and write your application code.
Named Private Modules
In some cases, you don't want to export your stuff to the main runtime library. Stuff in lib gets imported into everything, so you should only export stuff to lib that you want to make generally available. Sometimes you want to make modules that only export stuff for the contexts that want it. Sometimes you have some related modules, a general facility and a utility module or so. If this is the case, you might want to make a private module.
Let's take this module:
REBOL [type: module name: foo options: [private]] export bar: 1
Importing this module doesn't affect lib. Instead, its exports are collected into a private runtime library that is local to the module or user context that is importing this module, along with those of any other private modules that the target is importing, then imported to the target from there. The private runtime library is used for the same conflict resolution that lib is used for. The main runtime library lib takes precedence over the private lib, so don't count on the private lib overriding global things.
This kind of thing is useful for making utility modules, advanced APIs, or other such tricks. It is also useful for making strong-modular code which requires explicit imports, if that is what you're into.
It's worth noting that if your module doesn't actually export anything, there is no difference between a named private module or a named public module, so it's basically treated as public. All that matters is that it has a name. Which brings us to...
Unnamed Modules
As explained above, if your module doesn't have a name then it pretty much has to be treated as private. More than private though, since you can't tell if it's loaded, you can't upgrade it or even keep from reloading it. But what if that's what you want?
In some cases, you really want your code run for effect. In these cases having your code rerun every time is what you want to do. Maybe it's a script that you're running with do but structuring as a module to avoid leaking words. Maybe you're making a mixin, some utility functions that have some local state that needs initializing. It could be just about anything.
I frequently make my %rebol.r file an unnamed module because I want to have more control over what it exports and how. Plus, since it's done for effect and doesn't need to be reloaded or upgraded there's no point in giving it a name.
No need for a code example, your earlier ones will act this way.
I hope this gives you enough of an overview of the design of R3's module system.

In Groovy, if class is not called why instantiation exception?

When I execute the following experimental code subset at http://groovyconsole.appspot.com/
class FileHandler {
def rootDir
FileHandler(String batchName) {
rootDir = '.\\Results\\'+batchName+'\\'
}
}
//def fileHandler = new FileHandler('Result-2012-12-15-10-48-55')
An exception results:
java.lang.NoSuchMethodException: FileHandler.<init>()
When I uncomment the last line that instantiates the class, the error goes away.
Can someone explain why this is? I'm basically attempting to segregate the definition and instantiation of the class into 2 files to be evaluated separately. Thanks
I'm not sure of the exact implementation details behind http://groovyconsole.appspot.com/ (source linked to from there points to Gaelyk, which I've not looked over). I'd bet it's looking for a no-arg constructor for the class you've presented, in an effort to find something runnable. (note that if you provide that, it still won't work, as it wants a main() :/)
Running locally in groovyConsole will die a bit sooner, with the following error message:
groovy.lang.GroovyRuntimeException: This script or class could not be run. It should either:
- have a main method,
- be a JUnit test or extend GroovyTestCase,
- implement the Runnable interface,
- or be compatible with a registered script runner.
This is perhaps more descriptive and to the point. If you want to run some Groovy as a simple script, you'll need to supply a jumping-in point. The easiest way is an executable statement in your Groovy file, outside of any class definition (e.g., uncommenting your instantiation statement). Alternatively, a class with a main method should do it (see here); a sketch follows below.
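For example, a minimal version with a main method (the println is only there to show it ran):
class FileHandler {
    def rootDir
    FileHandler(String batchName) {
        rootDir = '.\\Results\\' + batchName + '\\'
    }
    // Supplying main() gives the runner its jumping-in point
    static void main(String[] args) {
        def fileHandler = new FileHandler('Result-2012-12-15-10-48-55')
        println fileHandler.rootDir
    }
}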
If 2 files is how you want to break it up, you can save the class definition in one Groovy file (e.g., First.groovy) and create a second (e.g., Second.groovy) with just your executable statements. (I believe the first one will be on the classpath automatically when you run groovy Second, if both are in the same directory.) A sketch follows.
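One sketch of the two-file split (if the automatic classpath lookup doesn't pick up First.groovy, naming the class file after the class, i.e. FileHandler.groovy, should make the lookup work):
// FileHandler.groovy
class FileHandler {
    def rootDir
    FileHandler(String batchName) {
        rootDir = '.\\Results\\' + batchName + '\\'
    }
}

// Second.groovy
def fileHandler = new FileHandler('Result-2012-12-15-10-48-55')
println fileHandler.rootDir

// Run from the directory holding both files:
//   groovy Second.groovy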

Custom performance profiler for Objective C

I want to create a simple to use and lightweight performance profile framework for Objective C. My goal is to measure the bottlenecks of my application.
Just to mention that I am not a beginner and I am aware of Instruments/Time Profiler. This is not what I am looking for. Time Profiler is a great tool but is too developer oriented. I want a framework that can collect performance data from a QA or pre production users and even incorporate in a real production environment to gather the real data.
The main part of this framework is the ability to measure how much time was spent in Objective C message (I am going to profile only Objective C messages).
The easiest way is to start a timer at the beginning of a message and stop it at the end. It is the simplest approach, but it is tedious and error-prone: if a message has more than one return path, the "stop timer" code has to be added before each return.
I am thinking of using method swizzling (just to note that I am aware Apple is not happy with method swizzling, but these profiled builds will be used internally only and will not be uploaded to the App Store).
My idea is to mark each message I want to profile and automatically generate the code for the swizzled method (maybe using macros). When started, the application will swizzle the original selector with the generated one. The generated one will just start a timer, call the original method, and then stop the timer. So in general the swizzled method will be just a wrapper around the original one.
One of the problems with the above idea is that I cannot think of an easy way to automatically generate the methods used for swizzling.
So I would greatly appreciate any ideas on how to automate the whole process. The perfect scenario is to write a single line of code anywhere, mentioning the class and the selector I want to profile, and have the rest generated automatically.
I would also be very thankful for any other idea (besides method swizzling) of how to measure the performance.
I came up with a solution that works pretty well for me. First, just to clarify: I was unable to find an easy (and fast) way to automatically generate the appropriate swizzled methods for arbitrary selectors (i.e. with arbitrary arguments and return values) using only the selector name. So I had to specify the argument types and the return value for each selector, not just the selector name. In reality it should be relatively easy to create a small tool that parses all source files and automatically detects the argument types and return value of each selector we want to profile (and prepares the swizzled methods), but right now I don't need such an automated solution.
So right now my solution combines the above ideas for method swizzling, some C++ code, and macros to automate and minimize some of the coding.
First, here is the simple C++ class that measures time:
class PerfTimer
{
public:
PerfTimer(PerfProfiledDataCounter* perfProfiledDataCounter);
~PerfTimer();
private:
uint64_t _startTime;
PerfProfiledDataCounter* _perfProfiledDataCounter;
};
I am using C++ so that the destructor is executed when the object goes out of scope (RAII). The idea is to create a PerfTimer at the beginning of each swizzled method; it will take care of measuring the elapsed time for that method.
PerfProfiledDataCounter is a simple struct that counts the number of executions and the total elapsed time (so the average time spent can be computed).
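Roughly, it looks something like this (the field names here are just indicative):
struct PerfProfiledDataCounter
{
    uint64_t executionCount;    // how many times the profiled selector ran
    uint64_t totalElapsedTime;  // accumulated elapsed time across all runs
};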
Also, for each class I'd like to profile, I create a category named "__Performance_Profiler_Category" that conforms to a "__Performance_Profiler_Marker" protocol. To make this easier I use some macros that automatically create such categories. I also have a set of macros that take a selector name, return type, and argument types and create the profiled selectors for each selector name.
For all of the above tasks, I've created a set of macros to help me. I also have a single file with the .mm extension that registers all classes and all selectors I'd like to profile. On app start, I use the runtime to retrieve all classes that conform to the "__Performance_Profiler_Marker" protocol (i.e. the registered ones) and search for selectors that are marked for profiling (these selectors start with a predefined prefix). Note that this .mm file is the only file that needs the .mm extension; there is no need to change the file extension of each class I want to profile.
Afterwards the code swizzles the original selectors with the profiled ones. In each profiled one, I just create a PerfTimer and call through to the original implementation, as sketched below.
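As a rough sketch, a generated wrapper for a hypothetical -doWork selector looks something like this (MyClass, the __pp_profiled_ prefix, and s_doWorkCounter are only illustrative names):
@implementation MyClass (__Performance_Profiler_Category)

static PerfProfiledDataCounter s_doWorkCounter;

- (void)__pp_profiled_doWork
{
    PerfTimer timer(&s_doWorkCounter);  // starts timing; the destructor stops it when the scope exits
    [self __pp_profiled_doWork];        // after the selectors are exchanged, this call runs the original -doWork
}

@end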
In brief, that is my idea, and it turned out to work pretty smoothly.