Is there a Sage command that tracks the execution of code in real time? - dynamic

I'm on a Mac with OS 10.11.6 and I'm using Sage 7.2's notebook interface. I did things in Mathematica that I want to check in Sage, but I'm a beginner at Sage. In Mathematica it's possible to keep track of the execution of my code, especially to detect when it's hanging, by using the Mathematica Dynamic[] command. Is there anything like this command in Sage, or is there perhaps another way to track the progress of a lengthy computation that will let me know in real time if and when it hangs? At the moment I insert print commands in my code, which results in thousands of values of a tuple of variables streaming vertically down my screen. Messy. I'd rather see the values of such a tuple simply update in place, as they do, say, on a digital clock. This is what Dynamic[] achieves in Mathematica.

Well, Mathematica doesn't have a debugger. So they tried to overcome this obstacle by introducing the Dynamic[] command. But when a programming language does have a debugger, why bother implementing something similar to Dynamic?
There are several ways to debug Sage code; you can refer to this other post, for example. There is also a built-in command, trace, that does a job pretty similar to Dynamic in Mathematica.
You can also refer to this (seemingly old) post for further information.
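Not a Sage-specific feature, but since a Sage worksheet cell runs Python code, one workaround for the streaming-print problem is to overwrite a single status line with a carriage return instead of printing a new line each time. This works best in a terminal session, and the computation and names below are purely illustrative:

import sys

def long_computation(n):
    total = 0
    for i in range(n):
        total += i * i                      # stand-in for the real work
        if i % 1000 == 0:                   # throttle the updates
            sys.stdout.write("\r(i, total) = (%d, %d)   " % (i, total))
            sys.stdout.flush()
    sys.stdout.write("\n")
    return total

long_computation(100000)

In the legacy Sage notebook the output may still accumulate rather than update in place, so treat this as a rough substitute for Dynamic[], not an equivalent.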


Does a language describe things beyond itself?

I now have sufficient exposure to Objective-C that if I'm stuck with anything, I know how to think of the problem in terms of a likely tool I need and go look for it. Simple really. There's A Method For That. So nothing's a real problem anymore.
Now I'm looking deeper at the language in broader terms. We write stuff. The compiler hews out all the code to execute it. From a simple flashlight app that's an if/then decision to turn on, to a highly complex accelerometer-driven 3D shoot 'em up with blood 'n guts and body parts following all sorts of physics, the compiler prepares the code ready to be executed like a giant railway layout. No matter how random it appears on the screen, everything possible can be generically described and prepared for.
So here's the question:
Are there cases where something completely unexpected to the software designer can still be handled without an execution halt? Maybe I'd better re-frame the question a few different ways: Can an (Objective-C) program meta-compile within itself in response to an unplanned-for user request? Or, to restate my opening remark, are there tools or methods for unlikely descriptions of unlikely problems?
I think #kfb has the right comment about metaprogramming. Check out the Runtime docs in conjunction with metaprogramming tutorials.
Parts of your last question might be in the realm of this doc.
If you're looking for ways to reduce the size of your code base for the lesser-used features, one idea might be to make those features internet-based (assuming connectivity is not a problem).

Difference between Hand code and recorded scripts in Automation testing

Please let me know the difference between hand-written code and recorded scripts in automation testing tools like Coded UI or any other tools.
Regards,
Raj
By 'hand-written' I'll assume you mean manually coded...
I can see a few reasons. Coding experience is brilliant. It will be a worthwhile investment if you code your own tests, because you learn a lot not only about the testing framework you are using (Coded UI, Selenium, etc.) but also about the language you are using (Java, C#). Manually coding these tests, using built-in framework methods, will serve you well and give you much more knowledge than an automatic playback tool would.
Automatic playback tools can produce horrific code: ugly, badly named, following no best practices, and relying on unreliable location methods.
Playback tools will simply use the simplest way to find an element, which is not always the best. A classic example is XPath.
XPath is a powerful tool; it can get you any element you need (or at least, I've never found a situation where XPath cannot be used), but playback tools will produce horrific XPath queries based purely on position. Let's take an example.
You've got a page that has 100 feed items. You want to verify that after a particular action a feed item is shown on this page, and not only is it shown but it is the first one. You cannot use IDs etc., because the markup is badly made, and so you must use XPath.
A playback tool might produce a very odd XPath like: //div[1]/span[2]/table[1]/tbody[1]/tr[10]/td[2]/a[text()='Test'].
Looks weird, right?
This will work a few times, but what happens if the app gets another tr element shoved at the top of the table? Now tr[10] won't be the element you want; it'll be tr[11].
Through manual coding you can account for this; you can put in logic to work around it, as in the sketch below. Playback tools won't.
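As a hedged illustration using Selenium's Python bindings (the 'Test' link text and the recorded locator come from the example above; the URL and page structure are invented):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/feed")      # hypothetical page of feed items

# Recorded, position-based locator: breaks as soon as a new row appears.
brittle = "//div[1]/span[2]/table[1]/tbody[1]/tr[10]/td[2]/a[text()='Test']"

# Hand-written locator: anchor on the link text, then check the row really
# is the first feed row instead of hard-coding tr[10].
rows = driver.find_elements(By.XPATH, "//table//tr")
target = driver.find_element(By.XPATH, "//table//tr[td/a[text()='Test']]")
assert rows and rows[0] == target, "'Test' item is not the first feed row"

driver.quit()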
I highly recommend coding these tests yourself. You do not need a few years experience to do this, you do not need any prior programming degrees. You need time.
Playback tools will also be limited in what they can do. You want to take a screenshot when a test fails? I highly doubt a playback tool will do this; you'll need to put in the logic yourself. However, this isn't hard to do.
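A minimal sketch of that screenshot-on-failure logic, again with Selenium's Python bindings (the wrapper function, test body, and file name are illustrative):

from selenium import webdriver

def run_with_screenshot(driver, test):
    try:
        test(driver)
    except Exception:
        driver.save_screenshot("failure.png")   # capture the page before re-raising
        raise

driver = webdriver.Chrome()
run_with_screenshot(driver, lambda d: d.get("https://example.com"))
driver.quit()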
There might be a business reason too - playback tools can convert manual tests into automated tests faster, but they won't be reliable - you'll need to have time to dedicate to making them reliable and fast. Time that would better be spent coding them yourself in the first place.

Rakudo test suite progression?

There used to be a graph that tracked the implementation of Perl 6 against the Perl 6 test suite. I was interested in watching it progress (and regress). What happened to that graph? It used to be hosted on the site www.rakudo.de.
Is there any other easy way an outsider can get an idea of where Rakudo stands in relation to the perfected spec? What features is it still missing?
As for the second question, the way to find out is the feature comparison matrix.
There is no official place for spectest graphs, but there are some good unofficial ones too.
I used to run a cron job that generated the graph, and eventually stopped it. The reason was that lots of people put way too much weight on the numbers in that graph, and generally assumed that the test suite was perfect, covering all features of the spec homogeneously, etc.
In addition there's no easy way to count the number of total tests, making the numbers not very reliable.
In the end I had the feeling that the graph was more misleading than informative.
The best way to get a feeling for the progress is to become an insider, i.e. start playing with Rakudo. It's still a lot slower than Perl 5 (though not as much as it used to be), but it's quite usable, and fun to use.

What is a good, simple scripting language to embed into a macro-processor

I want to write a macroprocessor. So far I've done a very simple sketch of how it should look, and I came to the conclusion that inventing a completely new language would not be a good idea; I should reuse existing concepts. My sketch so far is a kind of irb with some TeX-like syntax and features, but I'm not sure what I should use as the ruby-substitute.
The language should be simple, yet powerful. I don't want to write an OS in it, but it should be less "raw" than e.g. bc or forth. I don't care about execution time at all. Embedding should not be too hard, and it would be nice if the language itself were stable.
So far I've considered these:
Lua - It should process text easily, but Lua does not even have a while(c=getchar()){}. I'm skeptical.
awk - Simple, text processing is easy, but never intended for embedding
perl - Way too complex; stable, but it seems almost dead.
python - Significant whitespace; won't it get in the way of inlined function definitions?
groovy/nice/java - Hard/impossible to embed? Also way too heavy.
javascript - I really like it (besides the DOM), but is there a stable, embeddable implementation? I don't want to mess around with the API every two weeks when there's a new V8 version. As I said, I don't care about execution time.
I have not really found any pros/cons for
io
guile/scheme
TCL
Update: The language should have features such as function definition, library loading, or regexps (loops would also be very nice). I don't want to use a traditional macro language such as M4 because I want to be able to write in a more procedural (or maybe functional) style. Macro languages have their pros, but they require a completely new way of thinking about a problem, which is hard, especially for beginners. My aim is to use the best of both worlds.
Given that TCL is about string and array processing, and is intended for embedding, it would seem an obvious choice.
LuaTeX has a certain following. Presumably they have found a way to make it work for text processing, so you might like to look at that.
Scheme (including guile) is also very nice for scripting; alternatively you might look at whether there is a way you could embed an elisp processor (embed xemacs?), which after all is all about text processing.
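Since Python is on the candidate list, here is a toy sketch, purely for illustration, of the procedural style the update asks for: ${...} expressions in a template are expanded by evaluating them against a namespace of helper functions (the delimiter and helper names are invented):

import re

# Helper "macros" exposed to templates; the names are made up for the example.
helpers = {
    "upper": str.upper,
    "repeat": lambda s, n: s * n,
}

def expand(template, env):
    # Replace every ${expression} with the result of evaluating it against
    # the helper namespace plus any user-supplied variables.
    return re.sub(r"\$\{(.+?)\}",
                  lambda m: str(eval(m.group(1), {"__builtins__": {}}, env)),
                  template)

print(expand("Dear ${upper(name)}, ${repeat('ho', 3)}!",
             dict(helpers, name="world")))
# -> Dear WORLD, hohoho!

A real macroprocessor would of course need proper parsing, error handling, and sandboxing, which is where an embedded Tcl, Lua, or Scheme interpreter earns its keep.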

Real time scripting language + MS DLR?

For starters I should let you guys know what I'm trying to do. The project I'm working on has a requirement for a custom scripting system. It will be used by non-programmers who are using the application and should be as close to natural language as possible. An example would be if the user needs to run a custom simulation and plot the output; the code they would write would need to look like:
variable input1 is 10;
variable input2 is 20;
variable value1 is AVERAGE(input1, input2);
variable condition1 is true;
if condition1 then PLOT(value1);
Might not make a lot of sense, but it's just an example. AVERAGE and PLOT are functions we'd like to define; users shouldn't be allowed to change them or really even see how they work. Is something like this possible with the DLR? If not, what other options would we have (start with ANTLR to define the grammar and then move on?)? In the future this may need to run using XBAP and WPF too, so this is also something we need to consider, but I haven't seen much, if anything, on the DLR & XBAP. Thanks, and hopefully this all makes sense.
Lua is not an option, as it is too different from what they are already accustomed to.
Ralf, it's going to be reactive, and to be honest the timeframe for when the results should get back to the user may be anywhere from 1/100 of a second up to two weeks or a month (very complex mathematical functions).
Basically they already have a system they purchased that does some of what they need and included a custom scripting language that does what I mentioned above; they don't want to have to learn a new one, they basically just want us to copy it and add functionality. I think I'll just start with ANTLR and go from there.
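For what it's worth, the example script above is small enough to prototype with a hand-rolled line-by-line interpreter before committing to ANTLR. A rough Python sketch that handles only the statements shown (AVERAGE and PLOT are the example's functions, with placeholder bodies):

import re

# Placeholder implementations of the example's built-in functions.
FUNCTIONS = {
    "AVERAGE": lambda *args: sum(args) / len(args),
    "PLOT": lambda value: print("plotting", value),
}

def evaluate(expr, env):
    expr = expr.strip()
    m = re.match(r"(\w+)\((.*)\)$", expr)
    if m and m.group(1) in FUNCTIONS:              # built-in function call
        args = [evaluate(a, env) for a in m.group(2).split(",")]
        return FUNCTIONS[m.group(1)](*args)
    if expr in ("true", "false"):                  # boolean literal
        return expr == "true"
    if expr in env:                                # variable reference
        return env[expr]
    return float(expr)                             # numeric literal

def run(script):
    env = {}
    for line in script.strip().splitlines():
        line = line.strip().rstrip(";")
        m = re.match(r"variable (\w+) is (.+)$", line)
        if m:
            env[m.group(1)] = evaluate(m.group(2), env)
            continue
        m = re.match(r"if (\w+) then (.+)$", line)
        if m and env.get(m.group(1)):
            evaluate(m.group(2), env)

run("""
variable input1 is 10;
variable input2 is 20;
variable value1 is AVERAGE(input1, input2);
variable condition1 is true;
if condition1 then PLOT(value1);
""")                                               # prints: plotting 15.0

A grammar-based approach (ANTLR, or hosting IronPython on the DLR as suggested below) scales much better once the language grows, but a sketch like this helps pin down the semantics first.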
Lua
it's small, fast, easy to embed, portable, extensible, and fun!
Lua is definitely the best choice for a soft real-time system (like computer games).
See http://shootout.alioth.debian.org/ for detailed benchmarks.
However, last time I checked, Lua used a mark-and-sweep garbage collector which can lead to deadline-violation and non-deterministic jitter in real-time systems.
I believe that you could theoretically use the DLR, but I'm unsure about support in an XBAP (partially trusted?) scenario.
If you host the DLR you would quickly be able to take advantage of IronRuby or IronPython scripting. You would want to look at these implementations when creating your own language implementation. If you post your question to the IronPython mailing list I'm sure you would get a better reply around the XBAP scenario, and some of the developers there created ToyScript.
What kind of real-time requirement are you trying to fulfill? Is the simulation a hard real-time simulation (some kind of hardware-in-the-loop simulation ==> deadline is less than 1/1000 second)?
Or do you want the scripting system to be "reactive" to user input ==> 1/10 of a second should be sufficient?
I am no expert regarding MS DLR, but as far as I know, it does not support hard real-time systems. You may want to take a look at the real-time specification for Java (RTSJ)
Firstly I think that defining your own language is not the way to go.
Primarily because the biggest productivity gains you can get, for programmers or non-programmers, come from the development tools. You (and 99.9% of the rest of us) are not going to write tools as good as what is out there.
Language design is hard.
Language support and documentation are also hard.
I would recommend looking for a pre-built solution. If you could find a language that can lock down some functionality, that would be a good starting point. MATLAB would be the first that comes to my mind.
Lastly, ditch the natural language part, BASIC, COBOL and YA-TDWTF-Lang all tried and failed at it.
Full disclosure: I work for a company that is developing a generalized domain-specific language "system". It's targeted at data-in/text-out applications, so it's not apropos, and it's not yet in beta. The result is I'm somewhat knowledgeable and biased.