How can I retrieve the host triplet of the system I'm compiling on?
The question is very clear and doesn't need to be any wordier, but SO insists on a minimum question length. Sorry for the noise.
As Brett suggested (I don't know why he didn't post it as an answer), autotools' config.guess script does exactly that.
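If you want to pick the triplet up programmatically, here is a minimal sketch (assuming a copy of config.guess is available in the current directory; it ships with autotools and can also be fetched from GNU's config.git repository):

    # Run autotools' config.guess and capture the host triplet.
    # Assumes ./config.guess exists and is runnable via sh.
    import subprocess

    triplet = subprocess.run(
        ["sh", "./config.guess"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(triplet)  # e.g. "x86_64-pc-linux-gnu"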
Is tensorflow_transform a going concern for tf 2.0?
For example: will it eventually work? Does it work? What are the goals and plans? Where can we read about it?
Absolutely! Development is ongoing. Issues are being actively discussed, PRs are being worked on and there have been several changes to the master branch this week.
will it eventually work? Does it work?
Yes, it works now (in general, at least). If you are encountering some specific issue, you could ask a new question describing what, specifically, isn't working for you.
What are the goals and plans? Where can we read about it?
The TensorFlow team is really good at communicating plans via RFCs and doing development in the open. I am less familiar with the work on tf-transform, but all the signs are that it is developed with the same culture. Check out:
the github repo
the official site
I want to be sure that I can use Bonmin and Couenne for solving just an NLP problem (so far I have no integer variables), and I want to obtain a global optimum, not a local one. I also read that Ipopt first searches for the global answer and, if it does not find one, provides a local answer. How can I tell whether my answer is a global optimum when I am using Ipopt? Also, I would like to know: what are the best open-source NLP and MINLP solvers that can be used from Python with Pyomo?
The main reason for my question is the following output using Bonmin:
NOTE: You are using Ipopt by default with the MUMPS linear solver.
Other linear solvers might be more efficient (see Ipopt documentation).
Some notes:
(1) "Ipopt first search for the global answer and if it does not find that it will provide a local answer" This is probably not how I would phrase it. IPOPT finds local solutions. For some problems these will be the global solution. For convex problems, this is always the case (except for numerical issues).
(2) Bonmin is a local MINLP solver, Couenne is a global NLP/MINLP solver. Typically Bonmin can solve larger problems than Couenne, but you get local solutions.
(3) "NOTE: You are using Ipopt by default with the MUMPS linear solver. Other linear solvers might be more efficient (see Ipopt documentation)." This is just a notification that you are using IPOPT with linear algebra routines from MUMPS. There are other linear sub-solvers that IPOPT can use and that may perform better on large problems. Often the HARWELL routines (typically called MAnn) give better performance. MUMPS is free while the Harwell routines require a license.
In a follow-up answer (well, it is not an answer at all) it is stated:
Regarding Ipopt, how can I tell whether it has found the global solution or a local optimum? Will the code notify me? Regarding Bonmin, the AMPL page says it provides the global solution for convex problems: "Finds globally optimal solutions to convex nonlinear problems in continuous and discrete variables, and may be applied heuristically to nonconvex problems." You were saying that it obtains a local solution, so I am a bit confused about this part. But my general question about all these codes is: how can I find out whether the answer is a global optimum?
(a) Ipopt does not know whether a solution is a local or a global optimum. For convex problems, a local optimum is also a global optimum. You will need to convince yourself that the problem you pass to Ipopt is convex (Ipopt will not do this for you).
(b) Bonmin: the same. If the problem is convex, it will find global solutions; otherwise you will get a local solution. You will get no notification about whether a solution is global: Bonmin does not know either.
(c) When looking for guaranteed global solutions, you can use a local solver only when the problem is convex; for other problems you need a global solver. Another approach is to use a multi-start algorithm with a local solver. That gives you some confidence that you are not ending up with a bad local optimum.
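A minimal sketch of such a multi-start loop, reusing the Pyomo model m and the imports from the sketch above (the restart count and sampling range are arbitrary choices):

    # Re-solve from several random starting points and keep the best
    # local optimum found. More confidence, still no guarantee.
    import random
    from pyomo.environ import value

    best = None
    for _ in range(20):
        m.x.set_value(random.uniform(-5, 5))
        m.y.set_value(random.uniform(-5, 5))
        SolverFactory('ipopt').solve(m)
        candidate = (value(m.obj), m.x.value, m.y.value)
        if best is None or candidate[0] < best[0]:
            best = candidate
    print(best)  # best objective and point seen across all restarts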
If possible, I suggest discussing this with your teacher. These concepts are important to understand (and most solver manuals assume you know about them).
I'm trying to understand someone else's simple TensorFlow model, and they make use of contrib.layers.linear.
However, I cannot find any information on this anywhere, and it's not mentioned in the TensorFlow documentation.
The tf.contrib.layers module has API documentation here. As you observed in your answer, the contrib APIs in TensorFlow are (especially) subject to change. The tf.contrib.layers.linear() function appears to have been removed, but you can use tf.contrib.layers.fully_connected(…, activation_fn=None) to achieve the same effect.
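For instance (a sketch assuming a TF 1.x installation where tf.contrib is still available; the shapes are made up):

    # A linear layer via fully_connected with no activation, which is
    # what contrib.layers.linear amounted to.
    import tensorflow as tf

    inputs = tf.placeholder(tf.float32, shape=[None, 10])
    outputs = tf.contrib.layers.fully_connected(
        inputs, num_outputs=3, activation_fn=None)  # y = Wx + b, no nonlinearity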
I managed to find the answer and felt it was still worth posting it here to save others from wasting their time.
"In general, tf.contrib contains contributed code. It is meant to contain features and contributions that eventually should get merged into core TensorFlow, but whose interfaces may still change, or which require some testing to see whether they can find broader acceptance.
Code in tf.contrib isn't supported by the Tensorflow team. It is included in the hope that it is helpful, but it might change or be removed at any time; there are no guarantees." source
According to what I can see in the master branch, the function linear still exists in contrib.layers. It actually is a "simple alias which removes the activation_fn parameter":
linear = functools.partial(fully_connected, activation_fn=None)
Here is a link from the 1.0 branch (to increase link persistence).
That said, while the docs still list it, the link to contrib.layers.linear does indeed seem to be broken.
This is more of a curiosity, I suppose, but I was wondering whether it is possible to apply compiler optimizations post-compilation. Are most optimization techniques highly dependent on the IR, or can assembly be translated back and forth fairly easily?
This has been done, though I don't know of many standard tools that do it.
This paper describes an optimizer for Compaq Alpha processors that works after linking has already been done and some of the challenges they faced in writing it.
If you strain the definition a bit, you can use profile-guided optimization to instrument a binary and then rewrite it based on its observed behavior with regard to cache misses, page faults, etc.
There's also been some work in dynamic translation, in which you run an existing binary in an interpreter and use standard dynamic compilation techniques to try to speed this up. Here's one paper that details this.
There's been some recent research interest in this space. Alex Aiken's STOKE project is doing exactly this with some pretty impressive results. In one example, their optimizer found a function that is twice as fast as gcc -O3 for the Montgomery Multiplication step in OpenSSL's RSA library. It applies these optimizations to already-compiled ELF binaries.
Here is a link to the paper.
Some compiler backends have a peephole optimizer, which basically does just that: before committing to the assembly that represents the IR, it takes one last opportunity to optimize over a small window of instructions.
Basically you would want to do the same thing from the binary, machine code to machine code: not the same tool, but the same kind of process of examining some fixed-size block of code and optimizing it.
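As a toy illustration only (made-up instruction tuples, not a real binary rewriter; real tools must first disassemble and reconstruct control flow, which is the hard part):

    # A peephole pass over a made-up instruction list: remove push/pop
    # pairs that cancel out and drop self-moves.
    def peephole(instructions):
        out = []
        for ins in instructions:
            # "push X" immediately followed by "pop X" is a no-op pair.
            if out and ins[0] == "pop" and out[-1] == ("push", ins[1]):
                out.pop()
                continue
            # "mov X, X" does nothing.
            if ins[0] == "mov" and ins[1] == ins[2]:
                continue
            out.append(ins)
        return out

    code = [("push", "rax"), ("pop", "rax"), ("mov", "rbx", "rbx"), ("add", "rax", "1")]
    print(peephole(code))  # [('add', 'rax', '1')]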
The problem you will run into, though, is that, for example, some variables may have been marked volatile in C, so they are deliberately used in a seemingly inefficient way in the binary; the optimizer won't know the programmer's intent there and could end up optimizing that away.
You could certainly translate this back to IR and forward again; nothing stops you from doing that.
I recently implemented a Mersenne Twister for 64-bit integers (long). Is there a guide, or are there examples, of how to test a PRNG so that I can know whether my implementation is a good-enough solution? I'm especially interested in how to verify that my implementation has a sufficiently uniform distribution.
The more specifically this is tied to the Mersenne Twister, the better.
You do not need to test the Mersenne Twister algorithm -- that's been done over and over by people who really know what they're doing -- you only have to test whether you've correctly implemented the algorithm.
You can go to the Mersenne Twister web site and grab their test output. If you produce the same sequence of outputs that they do, you've probably implemented the algorithm correctly.
Note that the MT site has a link specifically for 64 bit machines and different test outputs for 32 and 64 bit versions.
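A minimal sketch of such a known-answer test (the reference file's format and the next_uint64() method are assumptions for illustration; check the MT site and the reference C code for the exact seeding procedure used to generate the test output):

    # Compare your generator's first n outputs against the published
    # reference output, after seeding exactly as the reference code does.
    def check_against_reference(my_mt, reference_file, n=1000):
        with open(reference_file) as f:
            # Keep only numeric tokens, in case the file has header text.
            expected = [int(tok) for tok in f.read().split() if tok.isdigit()][:n]
        for i, want in enumerate(expected):
            got = my_mt.next_uint64()  # hypothetical method on your generator
            assert got == want, f"mismatch at output {i}: {got} != {want}"
        print(f"first {len(expected)} outputs match the reference")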
The standard battery of tests for a PRNG is the Diehard Tests.
The easiest approach (if it's truly a generic MT) would be to compare it with a known-good MT library using the same seed.
As someone else said: use the known-answer test vectors for the algorithm. If you match the test vectors, you can be reasonably sure that your generator works.
If you really want to test the generator, use the DIEHARD tests (and more) as implemented by the Dieharder tool:
http://www.phy.duke.edu/~rgb/General/dieharder.php
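And since the question specifically asked about uniformity: as a quick sanity check (a rough first filter, not a substitute for Dieharder), you can bucket the outputs and compute a chi-squared statistic. A sketch, using Python's random.getrandbits as a stand-in for your generator:

    # Bucket the top 8 bits of each 64-bit output into 256 buckets and
    # compute the chi-squared statistic against a uniform expectation.
    import random

    def chi_squared_uniformity(next_uint64, samples=1_000_000):
        counts = [0] * 256
        for _ in range(samples):
            counts[next_uint64() >> 56] += 1  # top 8 bits select the bucket
        expected = samples / 256
        return sum((c - expected) ** 2 / expected for c in counts)

    stat = chi_squared_uniformity(lambda: random.getrandbits(64))
    # With 255 degrees of freedom the statistic should be around 255;
    # values wildly larger suggest a non-uniform generator.
    print(stat)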