Is there any good tutorial or reference for writing code with Magma? - gpu

Currently I am trying to use Magma to do matrix operations on the GPU; however, I have found few documents about it. The only things I can refer to are its testing programs and the online generated documentation (here), which is not convenient to use. The user guide also seems outdated.

If you look here, getri and potri are supported.

Related

TensorFlow Documentation

I am increasingly irritated and frustrated by the TensorFlow documentation. When I search on Google for documentation regarding
tf.reshape
I get directed to a generic page like here. I want to see the details of tf.reshape, not the entirety of the documentation.
Am I doing something wrong here?
Don't Google for TensorFlow documentation; use the TensorFlow Python reference documentation and Ctrl+F.
Probably the fastest way to use the TF documentation is:
http://devdocs.io/tensorflow~python/
Just type tf.reshape and you are done. It can also be used offline and keeps the docs automatically updated.
Edit: even typing just res already shows you the documentation.
Update for posterity:
With the new TensorFlow, the website is now indexed by Google, and it should soon be indexed by other search engines as well.
I would suggest you use the GitHub repo as your documentation instead. https://github.com/tensorflow/tensorflow/tree/master/tensorflow/g3doc/api_docs/python/functions_and_classes
For example tf.reshape is in a single Markdown file https://github.com/tensorflow/tensorflow/blob/master/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reshape.md
To search for the document you want, you could use the GitHub search under that functions_and_classes folder.
An example is
tf.reshape() path:tensorflow/g3doc/api_docs/python/functions_and_classes language:Markdown
https://github.com/tensorflow/tensorflow/search?utf8=✓&q=tf.reshape%28%29+path%3Atensorflow%2Fg3doc%2Fapi_docs%2Fpython%2Ffunctions_and_classes+language%3AMarkdown&type=Code
which searches for tf.reshape() under the documentation folder.
I use the non-official Dash/Zeal docset for TensorFlow:
https://github.com/ppwwyyxx/dash-docset-tensorflow
It is a very convenient way of browsing the TensorFlow documentation offline and it solves the problem you are describing.
Is this what you are looking for? Using the search functionality of the browser helped me find it.
I suppose you have installed TensorFlow on your computer and know the name of the function you want to use.
If you use a Python IDE, you can jump directly to the declaration or definition of the function and see its usage and explanation. That is the same documentation as online (although for some functions it is not very clear).
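For example (just an illustration, assuming TensorFlow is importable in your environment), the docstrings ship with the package, so you can read the same text as the online page without leaving Python:

    import inspect
    import tensorflow as tf

    help(tf.reshape)                          # same text as the online API page for tf.reshape
    print(inspect.getsourcefile(tf.reshape))  # where the definition lives on disk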
You can use the URL of the TensorFlow documentation and append what you want to look up.
The base URL is:
https://www.tensorflow.org/api_docs/python/tf/
Add whatever you want to look up after the trailing slash; for example, https://www.tensorflow.org/api_docs/python/tf/reshape takes you straight to tf.reshape.
Since TensorFlow r1.1, a Google search for items like 'tf.shape' lists the appropriate page at the top of the results.
This didn't work back in r0.10 and r0.11, maybe because there were many Markdown formatting issues in the TensorFlow docs themselves.
Since TF is still developing, the best way is to go through the TF API. It also helps to follow the slides at http://web.stanford.edu/class/cs20si/

User-specified function in mpfit

I have been an IDL programmer for some time now and am looking to transition to Python. I see that a Python port of MPFIT exists. However, I am looking for a Python version of MPFITFUN (http://www.physics.wisc.edu/~craigm/idl/down/mpfitfun.pro) or something similar.
Basically, I am looking for a Python function that takes a user-defined model function and fits it with a Levenberg-Marquardt least-squares fit (like MPFIT).
Thanks,
There are fitting functions built into SciPy, but I do not know of any that account for uncertainties in the data the way MPFITFUN does.
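(A common workaround is to weight the residuals by the uncertainties yourself. Here is a rough sketch with scipy.optimize.leastsq, SciPy's Levenberg-Marquardt driver; the exponential model and the synthetic x, y, yerr arrays are just placeholders:)

    import numpy as np
    from scipy.optimize import leastsq

    # placeholder data with per-point uncertainties
    x = np.linspace(0, 10, 50)
    yerr = 0.05 * np.ones_like(x)
    y = 2.0 * np.exp(-0.4 * x) + np.random.normal(scale=yerr)

    def model(p, x):
        return p[0] * np.exp(-p[1] * x)

    def residuals(p, x, y, yerr):
        # dividing by the uncertainty gives each point its chi-square weight
        return (y - model(p, x)) / yerr

    p0 = [1.0, 1.0]
    pbest, cov, info, msg, ier = leastsq(residuals, p0, args=(x, y, yerr), full_output=True)
    print(pbest)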
I have found Sherpa to be an excellent modeling and fitting package for Python which accounts for uncertainties and replaces MPFITFUN: http://cxc.harvard.edu/contrib/sherpa/
Since Sherpa is produced by astronomers it has a lot of built in astrophysical models, but you can build your own function to fit with Sherpa's Levenberg-Marquardt, Nelder-Mead or Monte Carlo algorithms. I used the template from the pysherpa blog:
http://pysherpa.blogspot.com/2010/06/user-defined-sherpa-model-types-using.html
mpfit.py is available from https://code.google.com/p/astrolibpy/ and an older version hosted at http://cars.uchicago.edu/software/python/mpfit.html.
A good alternative is lmfit: https://pypi.python.org/pypi/lmfit/, https://github.com/lmfit/lmfit-py, http://lmfit.github.io//lmfit-py/
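For illustration, a minimal lmfit sketch (assuming a reasonably recent lmfit; the exponential model and synthetic data are placeholders): you write a residual function that folds in the uncertainties, and minimize() uses Levenberg-Marquardt by default, much as MPFIT does.

    import numpy as np
    from lmfit import Parameters, minimize, report_fit

    # placeholder data with per-point uncertainties
    x = np.linspace(0, 10, 50)
    eps = 0.05 * np.ones_like(x)
    data = 2.0 * np.exp(-0.4 * x) + np.random.normal(scale=eps)

    def residual(params, x, data, eps):
        # weighted residuals for the model a * exp(-b * x)
        a = params['a'].value
        b = params['b'].value
        return (data - a * np.exp(-b * x)) / eps

    params = Parameters()
    params.add('a', value=1.0)
    params.add('b', value=1.0)

    result = minimize(residual, params, args=(x, data, eps))  # default method: leastsq (Levenberg-Marquardt)
    report_fit(result)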
I found by chance that MPFITEXPR also exists in Python. Here's the link to the code; you can also download it via the Astrolibpy project.
Link:
https://code.google.com/p/astrolibpy/source/browse/mpfit/mpfitexpr.py?r=3545675a0662392e3e09c88beaf275c9e7881cf6

Porting newlib to a custom ARM setup

This is my first post, and it covers something I've been trying to get working, on and off, for about a year now.
Essentially it boils down to the following: I have a copy of newlib which I'm trying to get working on an LPC2388 (an ARM7TDMI from NXP), on a Linux box using arm-elf-gcc.
My question is this: I've been looking at a lot of tutorials about porting newlib, and they all talk about the stubs (like exit, open, read/write, sbrk), and I have a pretty good idea of how to implement all of these functions. But where should I put them?
I have the newlib distribution from sources.redhat.com/pub/newlib/newlib-1.18.0.tar.gz, and after poking around I found "syscalls.c" (in newlib-1.18.0/newlib/libc/sys/arm), which contains all of the stubs I have to update, but they're all filled in with rather finished-looking code (which does NOT seem to work without the crt0.S, which itself does not work with my chip).
Should I just wipe out those functions myself and rewrite them? Or should I write them somewhere else? Should I make a whole new folder in newlib/libc/sys with the name of my "architecture" and change the target to match?
I'm also curious whether there's proper etiquette on distributing something like this after releasing it as an open-source project. I currently have a script which downloads binutils, arm-elf-gcc, newlib, and gdb, and compiles them. If I am modifying files in the newlib directory, should I ship a patch which my script auto-applies? Or should I add the modified newlib to the repository?
Thanks for bothering to read! Following this is a more detailed breakdown of what I'm doing.
For those who want/need more info about my setup:
I'm building an ARM video game console based loosely on the Uzebox project ( http://belogic.com/uzebox/ ).
I've been doing all sorts of things, pulling from a lot of different resources as I try to figure it out. You can read about the start of my adventures here (SparkFun forums; no one responds as I figure it out on my own): forum.sparkfun.com/viewtopic.php?f=11&t=22072
I followed all of this by reading through the Stack Overflow questions about porting newlib and looked at a few of the different tutorials (like wiki.osdev.org/Porting_Newlib ), but they also suffer from telling me to implement stubs without mentioning where, who, what, when, or how!
But where should I put them?
You can put them where you like, so long as they exist in the final link. You might incorporate them in the libc library itself, or you might keep that generic, and have the syscalls as a separate target specific object file or library.
You may need to create your own target specific crt0.s and assemble and link it for your target.
A good tutorial by Miro Samek of Quantum Leaps on getting GNU/ARM development up and running is available here. The examples are based on an Atmel AT91 part so you will need to know a little about your NXP device to adapt the start-up code.
A ready-made Newlib porting layer for LPC2xxx was available here, but the links to the files appear to be broken. The same porting layer is used in Martin Thomas' WinARM project. This is a Windows port of GNU ARM GCC, but the examples included in it are target specific rather than host specific.
You should only need to modify the porting layer on Newlib, and since it is target and application specific, you need not (in fact probably should not) submit your code to the project.
When I was using newlib that is exactly what I did: blew away crt0.s, syscalls.c, and libcfunc.c. My personal preference was to link in the replacements for crt0.s and syscalls.c (I rolled the few functions from libcfunc into the syscalls.c replacement), tailored to the embedded application.
I never had an interest in pushing any of that work back into the distro, so cannot help you there.
You are on the right path though: crt0.S and syscalls.c are where you want to work to customize for your target. Personally, I was interested in a C library (and printf) and would primarily neuter all of the functions to return 0 or 1 or whatever it took to get each function to just work and not get in the way of linking, periodically making the file I/O functions operate on data linked into ROM/RAM. Basically, without replacing or modifying any other files in newlib, I had a fair amount of success, so you are on the right path.

Where can I browse the source code for libc online (like Doxygen)?

Sometimes I want to look up the implementations of functions in the standard library. I've downloaded the source code, but it's quite messy.
Just grepping is not really suitable because of the many hits.
Does anyone know of a Doxygen-style web page that has the documentation?
The same goes for the Linux kernel.
Thanks
You should check whether your distribution is using the vanilla GLIBC or the EGLIBC fork (Debian and Ubuntu had switched to EGLIBC; EDIT: they switched back around 2014).
Anyway, the repository browser for GLIBC is at http://sourceware.org/git/?p=glibc.git
http://code.woboq.org/userspace/glibc/, posted by guruz below, is a good alternative.
The source is a bit complicated by the presence of multiple versions of the same files.
How about this for libc documentation? And perhaps this for the kernel? There is also Google Code Search; here is an example search.
More on Google Code Search: you can enter queries like package:linux-2.6 malloc to find any references to malloc in the linux-2.6 kernel.
Edit: Google Code Search has now been shut down, but you can access the git repo at http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git, and it has search as well.
You can try http://code.woboq.org/userspace/glibc/
It has nice navigation and highlighting similar to an IDE.
To help navigate the source to glibc, perhaps try something like ctags or cscope?
Note: I get dumber every time I look at the glibc source, so please be careful! :)
If you are using GNU C (glibc), the functions (beyond the GNU extensions) follow the POSIX standard as far as their arguments, implementation, failure modes, and return values. If you want to peek under the hood at the static internals, you'll have to look at the code.
Every push (that I can remember) to adopt something like Doxygen for glibc was rejected for the following reasons:
Redundant: POSIX already documents almost everything that's exposed, as do the man and info pages.
Too much work initially.
More work for maintainers.
As far as the kernel goes, Linux does use a system very similar to Doxygen called Kerneldoc.
You can also get actual Doxygen-generated docs from http://fossies.org/dox/glibc.

Print complete control flow through gdb including values of variables

The idea is that, given a specific input to the program, I want to automatically step through the complete program and dump its control flow along with all the data being used, like classes and their variables. Is there a straightforward way to do this? Can it be done by scripting GDB, or does it require modifying GDB?
OK, the reason for this question is an idea for a debugging tool. Given two different inputs to a program, one causing an incorrect output and the other a correct one, it would tell you where the control flow differs between them.
So what I think is needed is a complete dump of these two control flows, fed into a diff engine. If the two inputs follow similar control flows, their diff would (in many cases) give a good idea of why the bug exists.
This could be made into a very engaging tool with many features built on top of it.
Tell us a little more about the environment. dtrace, for example, will do a marvelous job of this on Solaris or Leopard. gprof is another possibility.
A quick-and-dirty version of this could be done with yes(1) or expect(1).
If you want to get fancy, GDB can be scripted with Python in some versions.
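For instance, here is a rough, untested sketch of a GDB Python script that single-steps from main and dumps each source location plus the visible locals to a log file; two such logs from two runs could then be fed to a diff tool. The break location, file names, and overall approach are assumptions, not a polished tool.

    # trace.py -- run as: gdb -batch -x trace.py ./a.out
    # assumes the program was built with -g and has a main()
    import gdb

    gdb.execute("set pagination off")
    gdb.execute("break main")
    gdb.execute("run")

    with open("trace.log", "w") as log:
        while True:
            try:
                frame = gdb.selected_frame()
                sal = frame.find_sal()
                log.write("%s:%s in %s\n" % (sal.symtab, sal.line, frame.name()))
                log.write(gdb.execute("info locals", to_string=True))  # dump visible locals
                gdb.execute("step", to_string=True)
            except gdb.error:
                break  # no frame left -- the program has exited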
What you are describing sounds a bit like gdb's "tracepoint debugging". See gdb's internal help ("help tracepoint"). You can also see a whitepaper here: http://sourceware.org/gdb/talks/esc-west-1999/
Unfortunately, this functionality is not currently implemented for native debugging, but I believe that CodeSourcery is doing some work on it.
Check this out: unlike Coverity, Fenris is free and widely used.
See also: How to print the next N executed lines automatically in GDB?