How and when to use Chez Scheme's timer interrupt? - chez-scheme

The Chez Scheme documentation mentions set-timer and timer-interrupt-handler, but how do you "kick off" or combine them? Are they meant for implementing things like periodic events, or just one-off delayed operations?
I searched around but only found them embedded in larger code chunks that cannot be trivially understood.
Please provide some concrete code examples of how to use them.
https://cisco.github.io/ChezScheme/csug9.5/system.html#./system:h2

(set-timer n) starts an internal timer that calls the timer interrupt handler after n ticks. A tick is a rough measure of the amount of work being done, not a measure of clock time. If Scheme is just sitting at the REPL prompt, no work is being done and the timer handler will not be called.
This is probably not the functionality you're looking for. If you want a clock-based timer, you will need to build it yourself, possibly in terms of set-timer (see the sketch after the transcript below).
The following code demonstrates the use of set-timer. But again, it's not clock time.
$ scheme
Chez Scheme Version 9.5.8
Copyright 1984-2022 Cisco Systems, Inc.
> (define (start-periodic-timer n f)
    (timer-interrupt-handler
     (lambda ()
       (f)
       (set-timer n)))
    (set-timer n))
> (start-periodic-timer 3000 (lambda () (printf "timer!")))
0
> (do ([i 0 (+ i 1)]) ((= i 2000) (newline)) (printf "."))
timer!timer!timer!timer!.....................................................
.............................................................................
[... many identical lines of dots elided ...]
....................................timer!...................................
[... many identical lines of dots elided ...]
............................timer!...........................................
.............................................................................
..................................
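
If you do want something closer to wall-clock behavior, you can build it on top of set-timer by re-arming a small tick budget and checking the real-time clock inside the handler. A minimal sketch (my code, not from the Chez documentation; it assumes Chez's current-time, time-second, and time-nanosecond, and it still only fires while the program is actually doing work):

(define (start-clock-timer seconds f)
  ;; Current wall-clock time as a flonum number of seconds.
  (define (now)
    (let ([t (current-time)])
      (+ (time-second t) (/ (time-nanosecond t) 1e9))))
  (let ([next (+ (now) seconds)])
    (timer-interrupt-handler
     (lambda ()
       (when (>= (now) next)
         (f)
         (set! next (+ (now) seconds)))
       ;; Re-arm a small tick budget so the clock is checked again soon.
       (set-timer 1000)))
    (set-timer 1000)))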

Related

Why is the time complexity of n² log n + n (log n)² = O(n² (log n)²)?

I saw this as a solution to Cornell's final exam paper (found online), but I'm not too sure if it is correct.
The statement n² log n + n (log n)² ∈ O(n² (log n)²) is valid; it's just not the tightest upper bound one could give for this function. Note that f(n) ∈ O(n² log n) ⇒ f(n) ∈ O(n² (log n)²), since O(n² log n) ⊆ O(n² (log n)²). This is also why one should write f(n) ∈ O(g(n)) rather than f(n) = O(g(n)): O(g(n)) is a set of functions, not a single function.
On the other hand, the given bound is somewhat loose. It could very well be because n² log n is smaller than n (log n)² for small values of n (for n < 1, log n is negative, so n² log n < 0 < n (log n)²). Or, since it's part of a test, it could be a way to check whether students fully understood what f(n) ∈ O(g(n)) means (a trick question, but an interesting one). Though I would have added a question right before, asking whether the following is also a valid statement: n² log n + n (log n)² ∈ O(n² log n).
The looser bound could hypothetically make sense depending on the context (if n is very small and 1 could already be considered a "large value"), but usually if that's the case it is specified explicitly.
If nothing specifies otherwise, people will assume that the best upper bound for this function is O(n² log n).
Let us consider the theory as explained on Wikipedia.
There it is written that if f1 is O(g1) and f2 is O(g2), then f1 + f2 is O(max(g1, g2)).
Let's consider f1 = n² log n and f2 = n (log n)².
Then f1 is O(n² log n) and f2 is O(n (log n)²).
Now, for n > 1 we have n > log n, and multiplying both sides of the inequality by n log n (positive for n > 1) gives n² log n > n (log n)². So the max of the two is n² log n, and by the rule above the sum is O(n² log n). Since every function in O(n² log n) is also in O(n² (log n)²), the bound in the question is valid, just not tight.
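
Spelled out with explicit constants (my derivation, not from either answer; take log to base 2 so that log n ≥ 1 for n ≥ 2, since the base only changes the constant):

\[
  n^2 \log n + n (\log n)^2
  \;\le\; n^2 (\log n)^2 + n^2 (\log n)^2
  \;=\; 2\, n^2 (\log n)^2
  \qquad (n \ge 2),
\]

so $n^2 \log n + n (\log n)^2 \in O\!\left(n^2 (\log n)^2\right)$ with witnesses $c = 2$ and $n_0 = 2$.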

Optimization in Typed Racket... Is this going too far?

A question about typed/racket. I'm currently working my way through the Project Euler problems to learn Racket better. Some of my solutions are really slow, especially when dealing with primes and factors. So for some problems I've tried making a typed/racket version, and I find no improvement in speed, quite the opposite. (I try to minimize the impact of overhead by using really big numbers; calculations take around 10 seconds.)
I know from the Racket docs that the best optimizations happen when using Floats/Flonums. So... yeah, I've tried to make float versions of problems dealing with integers, as in this problem, with a Racket version using integers and a typed/racket one artificially turning the integers into floats. I have to use tricks: checking equality between two numbers actually means checking that they are "close enough", like in this function which checks whether x is divisible by y:
(: divide? (-> Flonum Flonum Boolean))
(define (divide? x y)
  (let ([r (/ x y)])
    (< (- r (floor r)) 1e-6)))
It works (well... the solution is correct) and I have a 30%-40% speed improvement.
How acceptable is this? Do people actually do that in real life? If not, what is the best way to optimize typed/racket solutions when using integers? Or should typed/racket be abandoned altogether when dealing with integers and reserved for problems with float calculations?
In most cases the solution is to use better algorithms rather than converting to Typed Racket.
Since most problems at Project Euler concern integers, here are a few tips and tricks:
The division operator / needs to compute the greatest common divisor of the numerator and the denominator in order to cancel out common factors. This makes / a bad choice if you only want to know whether one number divides another. Use (= (remainder n m) 0) to check whether m divides n (see the sketch after these tips). Also: use quotient rather than / when you know the division has a zero remainder.
Use memoization to avoid recomputation. I.e. use a vector to store already computed results. Example: https://github.com/racket/math/blob/master/math-lib/math/private/number-theory/eulerian-number.rkt
First implement a naive algorithm. Then consider how to reduce the number of cases. A rule of thumb: brute force works best if you can reduce the number of cases to 1-10 million.
To reduce the number of cases, look for parametrizations of the search space. Example: if you need to find a Pythagorean triple, loop over numbers m and n and compute a = m^2 - n^2, b = 2mn, and c = m^2 + n^2. This will be faster than looping over a, b, and c and skipping those triples where a^2 + b^2 = c^2 does not hold.
Look for tips and tricks in the source of math/number-theory.
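
To make the first tip concrete, here is a minimal sketch (the names divides? and exact-div are mine, not from the answer):

; Divisibility via remainder avoids the GCD work that / performs
; when it constructs an exact rational.
(define (divides? m n)        ; does m divide n evenly?
  (= (remainder n m) 0))

; Once you know the remainder is 0, quotient does the integer division
; without building a rational first.
(define (exact-div n m)
  (quotient n m))

; Examples: (divides? 7 42) => #t, (exact-div 42 7) => 6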
Not aspiring to be a real answer, since I can't provide any general tips soegaard hasn't already posted, but since I recently did "Amicable numbers, Problem 21", I thought I might as well leave you my solution here (sadly, not many Lisp solutions get posted on Euler...).
(define (divSum n)
  (define (aux i sum)
    (if (> (sqr i) n)
        (if (= (sqr (sub1 i)) n) ; final check if n is a perfect square
            (- sum (sub1 i))
            sum)
        (aux (add1 i) (if (= (modulo n i) 0)
                          (+ sum i (/ n i))
                          sum))))
  (aux 2 1))

(define (amicableSum n)
  (define (aux a sum)
    (if (>= a n)
        sum
        (let ([b (divSum a)])
          (aux (add1 a)
               (if (and (> b a) (= (divSum b) a))
                   (+ sum a b)
                   sum)))))
  (aux 2 0))
> (time (amicableSum 10000))
cpu time: 47 real time: 46 gc time: 0
When dealing with divisors one can often stop at the square root of n, as divSum does here. And when you find an amicable pair you may as well add both members to the sum at once, which saves an unnecessary recomputation of (divSum b) in my code.

Clojure - optimize a threaded map reduce

I have the following code:
(defn series-sum
  "Compute a series: (+ 1 1/4 1/7 1/10 1/13 1/16 ...)"
  [n]
  (->> (iterate (partial + 3) 1)
       (map #(/ 1 %))
       (take n)
       (reduce +)
       float
       (format "%.2f")
       (str)))
It works just fine, except that its running time explodes when numbers get big. On my computer (series-sum 2500) takes maybe a second or two, but with (series-sum 25000) I have to kill my REPL.
I tried moving (take n) as early in the pipeline as possible, but that is not enough. I feel that I don't understand something about Clojure, since I don't see why it gets so much slower (I would expect (series-sum 25000) to take roughly 10 times as long as (series-sum 2500)).
There is an obvious loop/recur solution to optimize it, but I like the idea of being able to see the steps, with the pipeline (the one with the (take n)) reading like the docstring.
How can I improve the performance of this code while keeping it debuggable?
Better yet, can I measure the time of each step to see which one is taking the time?
Yes, it is relevant to @zerkms's link: you map to rationals, when you should map to floats instead:
(defn series-sum
  "Compute a series: (+ 1 1/4 1/7 1/10 1/13 1/16 ...)"
  [n]
  (->> (iterate (partial + 3) 1)
       (take n)
       (map #(/ 1.0 %))
       (reduce +)
       (format "%.2f")))
now it works much faster:
user> (time (series-sum 2500000))
"Elapsed time: 686.233199 msecs"
"5,95"
For this type of mathematical operation, computing in a loop is faster than using lazy sequences. This is an order of magnitude faster than the other answer for me:
(defn series-sum
  [n]
  (loop [i 0
         acc 0.0]
    (if (< i n)
      (recur (inc i)
             (+ acc (/ (float 1) (inc (* 3 i)))))
      (format "%.2f" acc))))
Note: you don't need the str because format returns a string.
Edit: of course this is not the main issue with the code in the original question. The bulk of the improvement comes from eliminating rationals as shown by the other answer. This is just a further optimization.

Speed up string matching in emacs

I want to implement the vim commandT plugin in Emacs. This code is mostly a translation from the matcher.
I've got some elisp here that's still too slow to use on my netbook. How can I speed it up?
(eval-when-compile (require 'cl))

(defun commandT-fuzzy-match (choices search-string)
  (sort (loop for choice in choices
              for score = (commandT-fuzzy-score
                           choice search-string
                           (commandT-max-score-per-char choice search-string))
              if (> score 0.0) collect (list score choice))
        #'(lambda (a b) (> (first a) (first b)))))

(defun* commandT-fuzzy-score (choice search-string
                              &optional
                              (score-per-char (commandT-max-score-per-char choice search-string))
                              (choice-pointer 0)
                              (last-found nil))
  (condition-case error
      (loop for search-char across search-string
            sum (loop until (char-equal search-char (elt choice choice-pointer))
                      do (incf choice-pointer)
                      finally return
                      (let ((factor (cond (last-found (* 0.75 (/ 1.0 (- choice-pointer last-found))))
                                          (t 1.0))))
                        (setq last-found choice-pointer)
                        (max (commandT-fuzzy-score choice search-string score-per-char
                                                   (1+ choice-pointer) last-found)
                             (* factor score-per-char)))))
    (args-out-of-range 0.0))) ; end of string hit without match found

(defun commandT-max-score-per-char (choice search-string)
  (/ (+ (/ 1.0 (length choice)) (/ 1.0 (length search-string))) 2))
Be sure to compile that part, as that already helps a lot.
And a benchmark:
(let ((choices (split-string (shell-command-to-string "curl http://sprunge.us/FcEL") "\n")))
  (benchmark-run-compiled 10
    (commandT-fuzzy-match choices "az")))
Here are some micro optimizations you can try:
Use car-less-than-car instead of your lambda expression. This has no visible effect since the time is not spent in sort but in commandT-fuzzy-score.
Use defun instead of defun*: those optional arguments with a non-nil default have a non-negligible hidden cost. This reduces the GC cost by almost half (and you started with more than 10% of the time spent in the GC).
(* 0.75 (/ 1.0 XXX)) is equal to (/ 0.75 XXX).
Use eq instead of char-equal (though this changes the behavior to always be case-sensitive). This makes a fairly large difference.
Use aref instead of elt.
I don't understand why you pass last-found in your recursive call, so I obviously don't fully understand what your algorithm is doing. But assuming that was an error, you can turn it into a local variable instead of passing it as an argument. This saves you time.
I don't understand why you make a recursive call for every search-char that you find, instead of only for the first one. Another way to look at this is that your max compares a "single-char score" with a "whole search-string score" which seems rather odd. If you change your code to do the max outside of the two loops with the recursive call on (1+ first-found), that speeds it up by a factor of 4 in my test case.
The multiplication by score-per-char can be moved outside of the loop (this doesn't seem to be true for your original algorithm).
Also, Elisp as implemented in Emacs is pretty slow, so you're often better off using "big primitives" so as to spend less time interpreting Elisp (byte-)code and more time running C code. Here is, for example, an alternative implementation (not of your original algorithm but of the one I got after moving the max outside of the loops), using regexp pattern matching to do the inner loop:
(defun commandT-fuzzy-match-re (choices search-string)
  (let ((search-re (regexp-quote (substring search-string 0 1)))
        (i 1))
    (while (< i (length search-string))
      (setq search-re (concat search-re
                              (let ((c (aref search-string i)))
                                (format "[^%c]*\\(%s\\)"
                                        c (regexp-quote (string c))))))
      (setq i (1+ i)))
    (sort
     (delq nil
           (mapcar (lambda (choice)
                     (let ((start 0)
                           (best 0.0))
                       (while (string-match search-re choice start)
                         (let ((last-found (match-beginning 0)))
                           (setq start (1+ last-found))
                           (let ((score 1.0)
                                 (i 1)
                                 (choice-pointer nil))
                             (while (setq choice-pointer (match-beginning i))
                               (setq i (1+ i))
                               (setq score (+ score (/ 0.75 (- choice-pointer last-found))))
                               (setq last-found choice-pointer))
                             (setq best (max best score)))))
                       (when (> best 0.0)
                         (list (* (commandT-max-score-per-char
                                   choice search-string)
                                  best)
                               choice))))
                   choices))
     #'car-less-than-car)))
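
To illustrate the pattern being built (my example, not from the answer): for search-string "az", search-re ends up as a[^z]*\(z\), i.e. the literal first character, then, for each subsequent character, a run of non-matching characters followed by a capture group for the character itself. string-match then reports where each search character matched via match-beginning, which is exactly what the scoring loop reads.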

Alternative for SWI-Prolog's clpq library for solving simplex

Excuse me if this is the wrong place to ask.
I have been using SWI-Prolog's clpq library to solve simplex problems. I find the syntax pretty simple and expressive. It looks like this:
:- use_module(library(clpq)).

main(U, V, W) :-
    { 0 =< U, U =< 1,
      0 =< V, V =< 1,
      0 =< W, W =< 1 },
    maximize(U + V - W).
No need to convert into any special format; you just type your constraints and the objective function. Great, but it has come to my attention that clpq has bugs and is unmaintained, so I lack confidence in it.
So I was wondering if someone knows something open source and equally simple, without the bugs? The best I have found so far is the GNU Linear Programming Kit. What are other people using for experimenting with simplex?
For the record, the simplex implementation in Maxima (http://maxima.sourceforge.net/) is very good.
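
For example, the toy problem above could be written as follows using the simplex package's maximize_lp (a sketch from memory; check the Maxima manual for the exact details):

load("simplex")$
maximize_lp(u + v - w,
            [u >= 0, u <= 1,
             v >= 0, v <= 1,
             w >= 0, w <= 1]);
/* expected: [2, [u = 1, v = 1, w = 0]] or an equivalent optimum */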