Bootstrap-table fails with more than 1,000 rows - html-table

I have two draft pages using bootstrap-table that differ only in the number of rows to be shown.
More than 1,000 rows causes the pagination and search fields to disappear:
http://www.jeffmaynard.com/testAir/testSmall.html works fine
http://www.jeffmaynard.com/testAir/testLarge.html loses search field and pagination
The latter also produces the following warning in the console:
jQuery.Deferred exception: undefined is not an object (evaluating 'columns[x].field') (2)
(anonymous function) — bootstrap-table.js:4024
each — jquery.js:381
(anonymous function) — bootstrap-table.js:4003
each — jquery.js:381
trToData — bootstrap-table.js:3995
initTable — bootstrap-table.js:4468
init — bootstrap-table.js:4310
(anonymous function) — bootstrap-table.js:7787
each — jquery.js:381
(anonymous function) — bootstrap-table.js:7762
(anonymous function) — bootstrap-table.js:7805
mightThrow — jquery.js:3762
(anonymous function) — jquery.js:3830
I'm sure the issue is something simple, but days of trial and error (and much Googling) have not helped. Suggestions for fixing this would be greatly appreciated...

The problem is that in http://www.jeffmaynard.com/testAir/testLarge.html you are missing a double quote (") in this line:
<tr><td class="q">CXH</td><td class="a">CYHC</td><td class="f">Vancouver Harbour Water Aerodrome (Coal Harbour Seaplane Base)</td><td class="l>Vancouver, British Columbia, Canada</td></tr>
You wrote td class="l instead of td class="l", so all of the HTML after that line is corrupted.
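With the missing quote restored, the row reads:
<tr><td class="q">CXH</td><td class="a">CYHC</td><td class="f">Vancouver Harbour Water Aerodrome (Coal Harbour Seaplane Base)</td><td class="l">Vancouver, British Columbia, Canada</td></tr>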
Just fix that and it will work normally.


How to generate valgrind suppressions without manual cut and paste?

I want to generate a suppressions file with --gen-suppressions in valgrind.
However, I do not want to have to go through thousands of lines of output to cut and paste the suppressions out, removing the valgrind stack traces and other valgrind output by hand.
Is there a way to do this easily? This seems like a very basic use case...
// I want this part vvvvv
{
<insert_a_suppression_name_here>
Memcheck:Leak
match-leak-kinds: reachable
fun:malloc
fun:strdup
fun:_XlcCreateLC
fun:_XlcDefaultLoader
fun:_XOpenLC
fun:_XrmInitParseInfo
obj:/usr/lib/x86_64-linux-gnu/libX11.so.6.3.0
fun:XrmGetStringDatabase
obj:/usr/lib/x86_64-linux-gnu/libX11.so.6.3.0
fun:XGetDefault
fun:GetXftDPI
fun:X11_InitModes_XRandR
fun:X11_InitModes
fun:X11_VideoInit
}
// I do not want this part vvvv
==187526== 2 bytes in 1 blocks are still reachable in loss record 2 of 137
==187526== at 0x483B7F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==187526== by 0x4B7C50E: strdup (strdup.c:42)
==187526== by 0x5922D81: _XlcResolveLocaleName (in /usr/lib/x86_64-linux-gnu/libX11.so.6.3.0)
==187526== by 0x5926387: ??? (in /usr/lib/x86_64-linux-gnu/libX11.so.6.3.0)
==187526== by 0x5925956: ??? (in /usr/lib/x86_64-linux-gnu/libX11.so.6.3.0)
==187526== by 0x592615C: _XlcCreateLC (in /usr/lib/x86_64-linux-gnu/libX11.so.6.3.0)
==187526== by 0x5943664: _XlcDefaultLoader (in /usr/lib/x86_64-linux-gnu/libX11.so.6.3.0)
==187526== by 0x592D995: _XOpenLC (in /usr/lib/x86_64-linux-gnu/libX11.so.6.3.0)
It is quite unlikely that all of the suppressions are different.
If you create a suppression like
{
XINIT-1
Memcheck:Leak
match-leak-kinds: reachable
fun:malloc
fun:strdup
fun:_XlcCreateLC
fun:_XlcDefaultLoader
fun:_XOpenLC
fun:_XrmInitParseInfo
obj:/usr/lib/x86_64-linux-gnu/libX11.so.6.3.0
}
Then re-run. Typically the error count will go down very quickly and you will only need to add a fairly small number of suppressions (single or low double digits).
(You need to apply your knowledge of the code and libraries to choose a sensible stack depth for the suppressions: with too many stack entries the suppression becomes too specific and you will need more suppressions; with too few you risk suppressing real problems.)
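As a sketch of that iterate-and-re-run workflow (the file and program names here are placeholders, not from the question): save the suppression above to a file such as x11.supp, then re-run with it active and with suppression generation still enabled, so valgrind only prints suppression blocks for the errors that remain:
valgrind --leak-check=full --show-leak-kinds=all \
         --suppressions=x11.supp \
         --gen-suppressions=all \
         --log-file=valgrind.log \
         ./myprog
Each pass should print fewer new blocks; append the ones you want to keep to x11.supp and repeat until the log is quiet.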

"Can't declare constant variable locally" error when installing the Artificial Intelligence: A Modern Approach code on Ubuntu Trusty Tahr - Common Lisp

I'm installing the code for the book Artificial Intelligence: A Modern Approach, which I got here: http://aima.cs.berkeley.edu/lisp/doc/install.html (it's the Lisp version I'm installing, by the way).
I'm on Ubuntu Trusty using Emacs, SBCL, and SLIME, and I placed the code in ~/.emacs.d,
so per the instructions at the above link I run (load "/home/w/.emacs.d/aima/code/aima.lisp"),
which loads fine; I get T as output.
I run (aima-load 'all); that works and I get T as output.
But when I run (aima-compile) I get the error:
Can't declare constant variable locally special: +NO-BINDINGS+
[Condition of type SIMPLE-ERROR]
I'm not sure I understand the error. I read the HyperSpec on declare and special, but that didn't help. I'm not opposed to a hack to make this work, so if someone can help me reword the code, or figure out whether there is an Emacs or SBCL setting I could change to get the aforementioned variable declared, that would be great. The error message refers to the file /home/w/.emacs.d/aima/code/logic/algorithms/tell-ask.lisp,
specifically this section:
(defmethod ask-each ((kb literal-kb) query fn)
  "For each proof of query, call fn on the substitution that
the proof ends up with."
  (declare (special +no-bindings+))
  (for each s in (literal-kb-sentences kb) do
       (when (equal s query) (funcall fn +no-bindings+))))
As a step toward correcting this, I verified that the permissions on all the files in my aima folder are set to read/write for my username, with my username as owner. But as far as understanding the error goes, I could use help figuring out the next step one would take to debug this. The code is downloadable here: http://aima.cs.berkeley.edu/lisp/code.tar.gz and it's an easy install for a veteran Emacs/Lisp user. Any help is appreciated.
Using SBCL:
I tried loading the AIMA code as well and hit the same problem in SBCL. Just comment out this line in ./logic/algorithms/tell-ask.lisp:
(declare (special +no-bindings+))
like so:
;;(declare (special +no-bindings+))
or delete the whole line altogether. In any case, this is what I did:
(defmethod ask-each ((kb literal-kb) query fn)
  "For each proof of query, call fn on the substitution that
the proof ends up with."
  ;;(declare (special +no-bindings+))
  (for each s in (literal-kb-sentences kb) do
       (when (equal s query) (funcall fn +no-bindings+))))
You will still have an issue running (aima-compile) with SBCL: it will complain about some constants being redefined. Just look at the possible restarts and select, every time:
0: [CONTINUE] Go ahead and change the value.
Do this as many times as it asks (about 6 times), and it will load eventually. This is probably happening because of the AIMA code's non-standard build/compile system. It can be annoying, but an alternative is to trace the code and see why/where some files are being reloaded.
Using Clozure (CCL):
CCL has a different problem, with ./utilities/utilities.lisp. CCL has both true and false functions predefined, so you have to make sure that both lines:
#-(or MCL Lispworks)
that directly precede both (defun true ...) and (defun false ...) are changed to:
#-(or MCL Lispworks CCL)
Also, in the same source file, modify the error call inside the for-each macro to look like this:
(error "~a is an illegal variable in (for each ~a in ~a ...)"
       var var list)
With these modifications, CCL seems to load the AIMA code just fine.
In general:
It's a bad idea to redefine constants or to somehow bypass debugger restarts. The best solution is to have the (defconstant ...)s evaluated once only, perhaps by placing them in a separate source file and making sure that the build system picks that file up only once.
Another solution is to wrap the calls to defconstant in a macro like the following:
(defmacro define-constant (name value &optional doc)
  (if (boundp name)
      (format t
              "~&already defined ~A~%old value ~s~%attempted value ~s~%"
              name (symbol-value name) value))
  `(defconstant ,name (if (boundp ',name) (symbol-value ',name) ,value)
     ,@(when doc (list doc))))
Then replace all occurrences of defconstant, so that, for example:
(defconstant +no-bindings+ '((nil))
  "Indicates unification success, with no variables.")
becomes:
(define-constant +no-bindings+ '((nil))
  "Indicates unification success, with no variables.")
If you opt for the define-constant "solution", make sure that the define-constant macro itself is evaluated before any of the code that uses it.

Undefined reference on libdc1394

I'm using libdc1394-2.2 with a Bumblebee2 camera.
However, when I try to release bandwidth with the code below:
if (dc1394_iso_release_bandwidth(camera, val) == DC1394_SUCCESS)
    printf("Successfully released %d bytes of bandwidth\n", val);
it throws the following error:
undefined reference to `dc1394_iso_release_bandwidth'
However, the function dc1394_iso_release_bandwidth is declared in iso.h, and this header is included in the main program.
Does anyone know how to solve the problem?
You're correct: that function is indeed declared in the dc1394-2 iso.h header file, with no complex conditional compilation that might cause it not to appear in your translation unit.
One thing that may be an issue is the rather common name iso.h - I'd modify your g++ compilation statement to include a -H flag, which should list the headers being loaded up. It's possible that the iso.h header file you're loading is not actually the dc1394 one.
A long shot, I know, but worth checking if only to discount the possibility.
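As a rough sketch (main.cpp stands in for your actual source file), adding -H makes g++ print every header it opens, so you can confirm which iso.h is really being picked up:
g++ -H -c main.cpp 2>&1 | grep iso.h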

What is the 'accumulator' in HQ9+?

I was just reading a bit about the HQ9+ programming language:
https://esolangs.org/wiki/HQ9+,
https://en.wikipedia.org/wiki/HQ9+, and
https://cliffle.com/esoterica/hq9plus,
and it tells me something about a so-called “accumulator” which can be incremented, but not accessed. Also, using + doesn't manipulate the result, so that the input
H+H
gives the result:
Hello World
Hello World
Can anyone explain to me how this works, what it does, and whether it makes any sense? Thanks.
Having recently completed an implementation in Clojure (which follows) I can safely say that the accumulator is absolutely central to a successful implementation of HQ9+. Without it one would be left with an implementation of HQ9 which, while doubtless worthy in and of itself, is clearly different, and thus HQ9+ without an accumulator, and the instruction to increment it, would thus NOT be an implementation of HQ9+.
(Editor's note: Bob has taken his meds today but they haven't quite kicked in yet; thus, further explanation is perhaps needed. What I believe Bob is trying to say is that HQ9+ is useless as a programming language, per se; however, implementing it can actually be useful in the context of learning how to implement something successfully in a new language. OK, I'll just go and curl up quietly in the back of Bob's brain now and let him get back to doing...whatever it is he does when I'm not minding the store...).
Anyways...implementation in Clojure follows:
(defn hq9+
  "HQ9+ interpreter"                                    ; docstring goes before the argument vector
  [& args]
  (loop [program (apply concat args)
         accumulator 0]
    (if (not (empty? program))
      (case (first program)
        \H (println "Hello, World!")
        \Q (println (first (concat args)))              ; print the (first) program argument
        \9 (apply println (map #(str % " bottles of beer on the wall, "
                                     % " bottles of beer, if one of those bottles should happen to fall, "
                                     (if (> % 0) (- % 1) 99) " bottles of beer on the wall")
                               (reverse (range 100))))
        \+ nil                                          ; the increment happens in the recur below
        (println "invalid instruction: " (first program)))) ; default case
    (if (> (count program) 1)
      (recur (rest program)
             (if (= \+ (first program))                 ; actually carry the incremented accumulator forward
               (inc accumulator)
               accumulator)))))
Note that this implementation only accepts commands passed into the function as parameters; it doesn't read a file for its program. This may be remedied in future releases. Also note that this is a "strict" implementation of the language - the original page (at the Wayback Machine) clearly shows that only UPPER CASE 'H's and 'Q's should be accepted, although it implies that lower-case letters may also be accepted. Since part of the point of implementing any programming language is to strictly adhere to the specification as written this version of HQ9+ is written to only accept upper-case letters. Should the need arise I am fully prepared to found a religion, tentatively named the CONVOCATION OF THE HOLY CAPS LOCK, which will declare the use of upper-case to be COMMANDED BY FRED (our god - Fred - it seems like such a friendly name for a god, doesn't it?), and will deem the use of lower-case letters to be anathema...I MEAN, TO BE ANATHEMA!
Share and enjoy.
Having written an implementation, I think I can say without a doubt that it makes no sense at all. I advise you to not worry about it; it's a very silly language after all.
It's a joke.
There's also an object-oriented extension of HQ9+, called HQ9++. It has a new command ++ which instantiates an object, and, for reasons of backwards-compatibility, also increments the accumulator register twice. And again, since there is no way to store, retrieve, access, manipulate, print or otherwise affect an object, it's completely useless.
It increments something not accessible, not spec-defined, and apparently not really even used. I'd say you can implement it however you want or possibly not at all.
The right answer is one that has been hinted at by the other answers but not quite stated explicitly: the effect of incrementing the accumulator is undefined by the language specification and left as a choice of the implementation.
Actually, I am mistaken.
The accumulator is the register in which the result of the last calculation is stored. On an Intel x86, any register may be specified as the accumulator, except in the case of MUL.
Source:
http://en.wikipedia.org/wiki/Accumulator_(computing)
I was quite surprised, the first time I visited the third site in your question, to find out that a schoolmate of mine had written the OCaml implementation at the bottom of the page.
(updated site link)
I think there is, or there must be, a reason for this accumulator and for the most important operation on it (increment): future compatibility.
Very often we see a language being invented, often inspired by some other language, with of course a grain of salt (new concepts, or at least some improvement). Later, when the language spreads, problems arise and modifications, additions or whatever are introduced. That amounts to saying "we were wrong, this thing was necessary but we didn't think of it at the time".
Well, this accumulator idea in HQ9+ is exactly the opposite. In the future, once the language has spread, nobody will be able to say "we need an accumulator, but HQ9+ lacks it", because the standard of the language, even in its first draft, states that an accumulator is present and that it is even modifiable (otherwise, it would be nonsense).

Performance overhead of perform: in Smalltalk (specifically Squeak)

How much slower can I reasonably expect perform: to be than a literal message send, on average? Should I avoid sending perform: in a loop, similar to the admonishment given to Perl/Python programmers to avoid calling eval("...") (Compiler evaluate: in Smalltalk) in a loop?
I'm concerned mainly with Squeak, but interested in other Smalltalks as well. Also, is the overhead greater with the perform:with: variants? Thank you
#perform: is not like eval(). The problem with eval() (performance-wise, anyway) is that it has to compile the code you're sending it at runtime, which is a very slow operation. Smalltalk's #perform:, on the other hand, is equivalent to Ruby's send() or Objective-C's performSelector: (in fact, both of these languages were strongly inspired by Smalltalk). Languages like these already look up methods based on their name — #perform: just lets you specify the name at runtime rather than write-time. It doesn't have to parse any syntax or compile anything like eval().
It will be a little slower (the cost of one extra method call at least), but it isn't like eval(). Also, the variants with more arguments shouldn't show any difference in speed vs. plain perform:. I can't speak from much experience about Squeak specifically, but this is how it generally works.
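As a small sketch (the receiver and selector here are arbitrary, not from the question), #perform: just takes the selector as a runtime value, so the two sends below produce the same result:
| selector |
selector := #printString.
Transcript showCR: (42 perform: selector).   "same as sending the message literally"
Transcript showCR: 42 printString.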
Here are some numbers from my machine (it is Smalltalk/X, but I guess the numbers are comparable - at least the ratios should be):
The called methods "foo" and "foo:" are no-ops (i.e. they consist of just ^self):
self foo ... 3.2 ns
self perform:#foo ... 3.3 ns
[self foo] value ... 12.5 ns (2 sends and 2 contexts)
[ ] value ... 3.1 ns (empty block)
Compiler evaluate:('TestClass foo') ... 1.15 ms
self foo:123 ... 3.3 ns
self perform:#foo: with:123 ... 3.6 ns
[self foo:123] value ... 15 ns (2 sends and 2 contexts)
[self foo:arg] value:123 ... 23 ns (2 sends and 2 contexts)
Compiler evaluate:('TestClass foo:123') ... 1.16 ms
Notice the big difference between "perform:" and "evaluate:"; evaluate: calls the compiler to parse the string, generate a throw-away method (bytecode), and execute it (it is jitted on the first call), and the method is finally discarded. The compiler is actually written to be used mainly by the IDE and to fileIn code from external streams; it has code for error reporting, warning messages, etc.
In general, eval is not what you want when performance is critical.
Timings are from a Dell Vostro; your mileage may vary, but the ratios should not.
I tried to get the net execution times by measuring the empty-loop time and subtracting it;
also, I ran the tests 10 times and took the best times, to eliminate OS/network/disk/email or whatever other disturbances. However, I did not take special care to use a load-free machine.
The measurement code was as follows (replace the second timesRepeat: argument with the expression being measured from the list above):
callFoo2
    |t1 t2|
    t1 := TimeDuration toRun:[
        100000000 timesRepeat:[]
    ].
    t2 := TimeDuration toRun:[
        100000000 timesRepeat:[self foo:123]
    ].
    Transcript showCR:t2-t1
EDIT:
PS: I forgot to mention: these are the times from within the IDE (i.e. bytecode-jitted execution). Statically compiled code (using the stc-compiler) will generally be a bit faster (20-30%) on these low-level micro benchmarks, due to a better register allocation algorithm.
EDIT: I tried to reproduce these numbers the other day, but got completely different results (8ns for the simple call, but 9ns for the perform). So be very careful with these micro-timings, as they run completely out of the first-level cache (and empty messages even omit the context setup, or get inlined) - they are usually not very representative of the overall performance.