I'm working with MIPS.
I am confused about the difference between machine code and MIPS assembly.
In MIPS assembly I can see that a branch's address is given as the number of words to jump, counted from the instruction after the branch.
What I don't understand is how it works "behind the scenes", and how the "shift left" by 2 is involved in this case.
Words need to be aligned on 4-byte boundaries, so the address I see in MIPS assembly is the number of words, and that times 4 is the number of bytes we need to skip?
Another question:
What if the shift left were by 3? What would happen? Would it give me the wrong address?
In the MIPS architecture, branching is done by comparing a value as soon as the instruction is given, relying on no previous operation or flag. This takes up space in the instruction format, leaving only 16 bits to be used as the branch address. This is much too small an address to be particularly useful, so instead of branching to that address, the processor branches relative to its own address. The calculation of this branch offset is handled entirely by the assembler, so it looks as though a branch operation branches directly to a label/address.
Source: http://www.mrc.uidaho.edu/mrc/people/jff/digital/MIPSir.html
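To make the "behind the scenes" part concrete: the 16-bit immediate is a signed count of words, and the hardware shifts it left by 2 (multiplies by 4) to turn it into a byte offset, because every instruction sits on a 4-byte boundary. The result is added to PC + 4, the address of the instruction after the branch. A shift by 3 would multiply the word count by 8, so every branch would land twice as far away as intended. Here is a small Python sketch of the calculation (the addresses are made up for illustration):

    def branch_target(pc, imm16):
        # Sign-extend the 16-bit immediate (it counts words, not bytes).
        offset_words = imm16 - 0x10000 if imm16 & 0x8000 else imm16
        # Shift left by 2 = multiply by 4: words -> bytes (4-byte alignment).
        return (pc + 4) + (offset_words << 2)

    # A beq at 0x00400000 with an encoded offset of 3 words branches to
    # 0x00400010: the instruction after the branch, plus 3 * 4 bytes.
    print(hex(branch_target(0x00400000, 3)))   # 0x400010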
This is a question that popped into my mind while reading about the halting problem, the Collatz conjecture, and Kolmogorov complexity. I have tried to search for something similar, but I was unable to find this particular topic, maybe because it is not of great value or it could just be a trivial question.
For the sake of simplicity I will give three examples of programs/functions.
function one(s):
    return s

function two(s):
    while (True):
        print s

function three(s):
    for i from 0 to 10^10:
        print(s)
So my question is whether there is a way to formalize the length of a program (like the bits used to describe it) and also the internal memory used by the program, in order to determine the minimum/maximum number of time steps needed to decide whether the program will terminate or run forever.
For example, in the first function the program doesn't alter its internal memory and halts after some time steps.
In the second example, the program runs forever, but it also doesn't alter its internal memory. For example, if we considered all programs of the same length as program two that do not alter their state, couldn't we determine an upper bound on steps which, if surpassed, would let us conclude that the program will never terminate? (If not, why not?)
In the last example, the program alters its state (variable i). So, at each step the upper bound may change.
[In short]
Kolmogorov complexity suggests a way of finding the (descriptive) complexity of an object such as a piece of text. I would like to know, given a formal way of describing the memory space used by a program (computed at runtime), whether we could compute a maximum number of steps which, if surpassed, would allow us to know whether this program will terminate or run forever.
Finally, I would appreciate any suggestions for sources that might help me figure out what exactly I am looking for.
Thank you. (Sorry for my English, it's not my native language. I hope I was clear.)
If a deterministic Turing machine enters precisely the same configuration twice (which we can detect by keeping a record of configurations seen so far), then we immediately know the TM will loop forever.
If it is known in advance that a deterministic Turing machine cannot possibly use more than some fixed constant amount of its input tape, then the TM must explicitly halt or eventually enter some configuration it has already visited. Suppose the TM can use at most k tape cells, the tape alphabet is T and the set of states is Q. Then there are at most k * |Q| * (|T|+1)^k unique configurations (the number of strings over T union blank of length k, times the number of states, times the possible head positions), and by the pigeonhole principle we know that a TM that takes more than that many steps must enter some configuration it has already been to before.
one: because we are given that this function does not alter its internal memory, the bound above applies with a tiny configuration count, and we can tell within that many steps that it halts.
two: likewise, this function does not alter its internal memory, so within the same small bound it must revisit a configuration, which proves it loops forever.
three: because we are given that this function only uses a fixed amount of internal memory (34 bits is enough, since 10^10 < 2^34), we can tell in fewer than 2^34 iterations of the loop whether the TM will halt or not for any given input s, guaranteed.
Now, knowing how much tape a TM is going to use, or how much memory a program is going to use, is not a problem a TM can solve. But if you have an oracle (like a person who was able to do a proof) that tells you a correct fixed upper bound on memory, then the halting problem is solvable.
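To make the pigeonhole bound concrete, here is a sketch in Python; the step function, halting test, and configuration encoding are placeholders for whatever machine model you pick:

    def decide_halting(config, step, halted, num_configs):
        # num_configs is the bound from above: k * |Q| * (|T|+1)**k.
        # A halting run never repeats a configuration (deterministic),
        # so it must halt within num_configs steps; a machine that runs
        # longer has repeated a configuration and therefore loops forever.
        for _ in range(num_configs):
            if halted(config):
                return "halts"
            config = step(config)          # deterministic transition
        return "loops forever"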
I am currently studying Bitcoin and Litecoin to try to get a better understanding of cryptocurrencies, and blockchains in general - and I have spotted something in the code that I have a question about.
In src/amount.h I see the following code...
/** No amount larger than this (in satoshi) is valid.
*
* Note that this constant is *not* the total money supply, which in Bitcoin
* currently happens to be less than 21,000,000 BTC for various reasons, but
* rather a sanity check. As this sanity check is used by consensus-critical
* validation code, the exact value of the MAX_MONEY constant is consensus
* critical; in unusual circumstances like a(nother) overflow bug that allowed
* for the creation of coins out of thin air modification could lead to a fork.
* */
static const CAmount MAX_MONEY = 84000000 * COIN;
Now, the comment here seems to suggest that this code does not actually define what the total supply of the currency will be, even though the amount of Litecoin available is in fact 84,000,000...
So, my real question:
Is the real total supply held in another piece of code? If so, what am I missing, where can I find this code, and if I were to be trying to edit this (I'm not - but I want to understand what is going on here) - would I need to edit code in multiple places?
NOTE: Tagged bitcoin even though this is Litecoin source in the question, because litecoin doesn't appear to have a Stack Overflow tag, and the two codebases are similar anyway.
EDIT : I also wanted to add, that I performed a grep for "84000000" - and only really found that one line of code to be relevant... So I must be missing something...
EDIT 2 : According to literally every coin out there on git that I have looked at - this is the number that they change when adjusting the total supply - so is the comment just wrong - or did I misunderstand it?
I realise this is an old question, but since it hasn't been updated I'll provide an answer.
As the source suggests, MAX_MONEY is simply a sanity check. If someone tries to create a transaction spending 500 million Bitcoin, and it somehow manages to bypass all other sanity checks, the network will still reject it because the amount exceeds MAX_MONEY. So MAX_MONEY is not directly related to total supply, but as you have observed, many alts will set MAX_MONEY to the expected total supply over the lifetime of the coin.
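In other words, the check is a simple range test applied to every amount the validation code handles (in Bitcoin Core it lives next to MAX_MONEY in amount.h as the MoneyRange function). A minimal Python sketch of the idea, just for illustration:

    COIN = 100_000_000                  # base units per coin
    MAX_MONEY = 84_000_000 * COIN       # the Litecoin constant from the question

    def money_range(amount):
        # Reject negative amounts and anything above the sanity cap,
        # regardless of what other checks a transaction slipped past.
        return 0 <= amount <= MAX_MONEY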
For a pure proof-of-work coin with a consistent reward scheme (e.g. halving every X blocks) the total supply can be pre-calculated, but a future fork could change that.
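For example, with Litecoin-like parameters (an initial reward of 50 coins, halving every 840,000 blocks), the geometric series 840,000 * (50 + 25 + 12.5 + ...) converges to 84,000,000, which is presumably why MAX_MONEY was set to that number. A quick sketch, using integer base units to mirror how subsidy code truncates on each halving:

    reward = 50 * 100_000_000       # initial block subsidy in base units
    halving_interval = 840_000      # blocks between halvings
    total = 0
    while reward > 0:
        total += reward * halving_interval
        reward //= 2                # integer halving, so the sum lands just under
    print(total / 100_000_000)      # ~84,000,000 coins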
For a typical proof-of-stake or hybrid proof-of-work and proof-of-stake coin, the maximum supply can be estimated by simulation, but the exact amount will vary depending on network activity.
(This assumes there is not another part of the code that cuts off all rewards after a limit is reached.)
In the second line of the program's output, notice that the value of 331.79, which is assigned to floatingVar, is actually displayed as 331.790009. The reason for this inaccuracy is the particular way in which numbers are internally represented inside the computer. You have probably come across the same type of inaccuracy when dealing with numbers on your calculator. If you divide 1 by 3 on your calculator, you get the result .33333333, with perhaps some additional 3s tacked on at the end. The string of 3s is the calculator's approximation to one third. Theoretically, there should be an infinite number of 3s. But the calculator can hold only so many digits, thus the inherent inaccuracy of the machine. The same type of inaccuracy applies here: certain floating-point values cannot be exactly represented inside the computer's memory.
The above quote comes from Programming in Objective-C, 4th edition.
And this post answered a small part of it, but not the kind of answer I'm looking for.
I will try to find another book about this later in the day.
Anyway, if anyone would like to answer this question, thanks!
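For what it's worth, the behavior the book describes is easy to reproduce. 331.79 has no exact binary representation, so the nearest single-precision (32-bit) value is slightly off; a round trip through a 4-byte IEEE 754 float shows what the computer actually stores (a quick Python illustration):

    import struct

    # pack rounds 331.79 to the nearest representable 32-bit float;
    # unpack widens that stored value back so we can print it exactly.
    stored = struct.unpack('f', struct.pack('f', 331.79))[0]
    print(stored)   # 331.7900085449219, displayed as 331.790009 with %f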
I've been searching the web and I'm finding somewhat contradictory answers. Some sources assert that a language/machine/what-have-you is Turing complete if and only if it has both conditional and unconditional branching (which I guess is kind of redundant), some say that only unconditional is required, others that only conditional is required.
Reading about the German Z3 and ENIAC, Wikipedia says:
The German Z3 (shown working in May 1941) was designed by Konrad Zuse. It was the first general-purpose digital computer, but it was electromechanical, rather than electronic, as it used relays for all functions. It computed logically using binary math. It was programmable by punched tape, but lacked the conditional branch. While not designed for Turing-completeness, it accidentally was, as it was found out in 1998 (but to exploit this Turing-completeness, complex, clever hacks were necessary).
What complex, clever hacks, exactly?
The abstract of a 1998 paper by R. Rojas also states (note that I haven't read this paper, it's just a snippet from IEEE):
The computing machine Z3, built by Konrad Zuse between 1938 and 1941, could execute only fixed sequences of floating point arithmetical operations (addition, subtraction, multiplication, division, and square root) coded in a punched tape. An interesting question to ask, from the viewpoint of the history of computing, is whether or not these operations are sufficient for universal computation. The paper shows that, in fact, a single program loop containing these arithmetical instructions can simulate any Turing machine whose tape is of a given finite size. This is done by simulating conditional branching and indirect addressing by purely arithmetical means. Zuse's Z3 is therefore, at least in principle, as universal as today's computers that have a bounded addressing space.
In short, SOers, what type of branching is exactly required for Turing-completeness? Assuming infinite memory, can a language with only a goto or jmp branching construct (no if or jnz constructs) be considered Turing-complete?
The original Rojas paper can be found here. The basic idea is that the Z3 only supports a single unconditional loop (by gluing the ends of the instruction tape together). You build conditional execution on top of it by putting all code sections one after another in the loop, and having a variable z that determines which section to execute. At the beginning of section j, you set
if (z==j) then t=0 else t=1
and then make each assignment a = b op c in this section read
a = a*t + (b op c)*(1-t)
(i.e. each assignment is a no-op, except in the active section). Now, this still includes a conditional assignment: how to compare z==j? He proposes to use the binary representation of z (z1..zm) along with the negated binary representation of j (c1..cm), and then compute
t = 1 - sqr((c1-z1)(c2-z2)...(cm-zm))
The squared product will be 1 only if c and z differ in all bits, which happens exactly when z==j, making t=0 in (and only in) the active section. An assignment to z (which essentially is an indirect jump) must also assign to z1..zm.
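Spelled out as a Python sketch (M, the width of a section index, is fixed at 3 bits here just for illustration):

    M = 3                                    # bits in a section index (illustrative)

    def t_for(z, j):
        # Rojas's arithmetic comparison: returns 0 when z == j, else 1.
        zb = [(z >> i) & 1 for i in range(M)]
        cb = [1 - ((j >> i) & 1) for i in range(M)]   # negated bits of j
        prod = 1
        for ci, zi in zip(cb, zb):
            prod *= (ci - zi)                # +-1 iff the bits differ, 0 iff equal
        return 1 - prod * prod               # squared product is 1 only if z == j

    def guarded_assign(a, b_op_c, t):
        # a = a*t + (b op c)*(1-t): a no-op everywhere except the active section.
        return a * t + b_op_c * (1 - t)

    assert t_for(5, 5) == 0 and t_for(5, 6) == 1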
Rojas has also written Conditional Branching is not Necessary for Universal Computation in von Neumann Computers. There he proposes a machine with self-modifying code and relative addressing, so that you can read the Turing instructions from memory, and modify the program to jump accordingly. As an alternative, he proposes the above approach (for Z3), in a version that only uses LOAD(A), STORE(A), INC and DEC.
If you have only arithmetical expressions you can use some properties of arithmetical operations. E.g., if A is either 0 or 1 depending on some condition (which is previously computed), then A*B+(1-A)*C computes the expression if A then B else C.
If you can compute the address for your goto or jmp, you can simulate arbitrary conditionals. I occasionally used this to simulate "ON x GOTO a,b,c" in ZX Basic.
If "true" has the numerical value 1 and "false" 0, then a construction like:
if A then goto B else goto C
is identical to:
goto C+(B-C)*A
So, yes, with a "computed goto" or the ability to self-modify, a goto or jmp can act as a conditional.
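A one-line illustration of that identity in Python (B and C are just stand-in target addresses):

    B, C = 10, 20                 # hypothetical branch targets
    for A in (1, 0):              # 1 = "true", 0 = "false"
        print(C + (B - C) * A)    # prints 10 (goto B), then 20 (goto C)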
You need something that can branch based on (results from) input.
One way to simulate conditional branches is with self-modifying code -- you do a computation that deposits its result into the stream of instructions being executed. You could put the op-code for an unconditional jump into the instruction stream, and do math on an input to create the correct target for that jump, depending on some set of conditions for the input. For example, subtract x from y, shift right to 0-fill if it was positive, or 1-fill if it was negative, then add a base address, and store that result immediately following the jmp op-code. When you get to that jmp, you'll go to one address if x==y, and another if x!=y.
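Here is a toy version of that idea in Python. The opcodes and memory layout are invented for illustration; the point is that the program itself contains no conditional branch, only arithmetic on the sign bit and a store into its own JMP operand (the sign-bit trick assumes the differences fit in 32 bits):

    def run(x, y):
        EQ, NE = 4, 6                       # addresses of the two PRINTs
        # Toy program memory: (opcode, operand) pairs. Slot 3 is the JMP
        # operand, which the program itself overwrites before jumping.
        prog = ["CALC", None,               # 0: compute and store jump target
                "JMP",  None,               # 2: target patched at runtime
                "PRINT", "x == y",          # 4
                "PRINT", "x != y"]          # 6
        pc = 0
        while True:
            op, arg = prog[pc], prog[pc + 1]
            if op == "CALC":
                # 1 iff x != y: either x-y or y-x is negative, so the OR of
                # the two has its sign bit set -- no conditional branch used.
                ne = (((x - y) | (y - x)) >> 31) & 1
                prog[3] = EQ + (NE - EQ) * ne       # self-modification
                pc += 2
            elif op == "JMP":
                pc = arg
            elif op == "PRINT":
                print(arg)
                return

    run(3, 3)   # prints "x == y"
    run(3, 5)   # prints "x != y"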
You don't need conditional branching to build a Turing-complete machine, but of course any Turing-complete machine will provide conditional branching as a core feature.
It was proved that systems as simple as the Rule 110 Cellular Automaton can be used to implement a Turing machine. You sure don't need conditional branching to pull such a system from the bit bucket. Actually one could just use a bunch of rocks.
The point is that a Turing machine will provide the conditional branching, so whatever you do to prove Turing completeness ends up implementing conditional branching somehow. You have to do it without conditional branching at some point, be it with rocks or PN junctions in semiconductors.
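For reference, the Rule 110 update is nothing but a fixed eight-entry lookup: the new value of each cell is one bit of the number 110, indexed by the cell's three-neighbor pattern. A compact sketch with wrap-around boundaries:

    def rule110_step(cells):
        # New cell = bit (left*4 + center*2 + right) of 110 = 0b01101110.
        n = len(cells)
        return [(110 >> (cells[i - 1] * 4 + cells[i] * 2 +
                         cells[(i + 1) % n])) & 1
                for i in range(n)]

    row = [0] * 31 + [1]                    # start from a single live cell
    for _ in range(8):
        print("".join(".#"[c] for c in row))
        row = rule110_step(row)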
The Z3 was only Turing complete from an abstract point of view. You can have an arbitrarily long program tape and just have it compute both sides of every conditional branch. In other words, for each branch, it would compute both answers and tell you which one to ignore. Obviously this creates exponentially larger programs for every conditional branch you would have, so you could never use this machine in a Turing-complete manner.
If a machine can branch on data, then yes, it is generally considered Turing complete (given enough memory).
The reason is that conditional branching, together with the ability to read and write storage, gives a computer everything a Turing machine needs. However, there are also machines that cannot jump, branch, or even IF, and are still considered Turing complete.
Processing is just the process of identifying inputs in order to select outputs.
Branching is one way to conceptualize this process: the condition of the jump is what classifies inputs, and the place you branch to stores the correct output for that input.
So finally, to clarify things:
If you have conditional branching, your computer is computationally equivalent to a Turing machine (memory permitting). However, there are plenty of other ways for a computer to achieve Turing completeness (lambda, IFs, CL).
I found this on an "interview questions" site and have been pondering it for a couple of days. I will keep churning, but am interested in what you guys think.
"10 Gbytes of 32-bit numbers on a magnetic tape, all there from 0 to 10G in random order. You have 64 32 bit words of memory available: design an algorithm to check that each number from 0 to 10G occurs once and only once on the tape, with minimum passes of the tape by a read head connected to your algorithm."
32-bit numbers can take on 4G = 2^32 different values. There are 10G = 2.5 * 2^32 numbers on the tape in total, so after reading 2^32 + 1 of them, some number must repeat, by the pigeonhole principle. If there were <= 2^32 numbers on the tape, then both cases would be possible: all numbers distinct, or at least one repeated.
It's a trick question, as Michael Anderson and I have figured out. You can't store 10G 32-bit numbers on a 10-Gbyte tape: 10G numbers at 4 bytes each is 40 Gbytes. The interviewer (a) is messing with you and (b) is trying to find out how much you think about a problem before you start solving it.
The utterly naive algorithm, which takes as many passes as there are numbers to check, would be to walk through and verify that the lowest number is there. Then do it again checking that the next lowest is there. And so on.
This requires one word of storage to keep track of where you are - you could cut down the number of passes by a factor of 64 by using all 64 words to keep track of where you're up to in several different locations in the search space - checking all of your current ones on each pass. Still O(n) passes, of course.
You could probably cut it down even more by using portions of the words - given that your search space for each segment is smaller, you won't need to keep track of the full 32-bit range.
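Here is a sketch of that batched scheme in Python. The tape is modeled as a re-readable sequence; a real implementation would keep the 64 counters in the 64 machine words, the dict is just for readability:

    def batch_ok(tape, targets):
        # One sequential pass: check each target value occurs exactly once.
        counts = {v: 0 for v in targets}       # stands in for the 64 words
        for number in tape:
            if number in counts:
                counts[number] += 1
                if counts[number] > 1:
                    return False               # duplicate found, fail early
        return all(c == 1 for c in counts.values())

    def check_all(tape, n, batch=64):
        # ceil(n / batch) passes instead of n passes
        return all(batch_ok(tape, range(s, min(s + batch, n)))
                   for s in range(0, n, batch))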
Perform an in-place mergesort or quicksort, using tape for storage? Then iterate through the numbers in sequence, tracking to see that each number = previous+1.
Requires a cleverly implemented sort, and is fairly slow, but achieves the goal, I believe.
Edit: oh bugger, it's never specified you can write.
Here's a second approach: scan through, trying to build up 30-ish ranges of contiguous numbers. I.e. 1,2,3,4,5 would be one range, 8,9,10,11,12 would be another, etc. If ranges overlap with existing ones, then they are merged. I think you only need to make a limited number of passes to either get the complete range or prove there are gaps... much less than just scanning through in blocks of a couple thousand to see if all numbers are present.
It'll take me a bit to prove or disprove the limits for this though.
Do 2 reduces on the numbers, a sum and a bitwise XOR.
The sum should be (10G + 1) * 10G / 2
The XOR should be the XOR of 0 through 10G, which has a standard closed form depending on n mod 4 (see the sketch below).
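Concretely, the XOR of 0..n is n, 1, n+1, or 0 according to n mod 4, so both expected values can be precomputed and the whole check done in a single pass with two accumulators. Note that matching sum and XOR values are strong evidence, but not by themselves proof, that every value occurs exactly once. A sketch:

    def xor_0_to_n(n):
        # Standard identity: XOR of 0..n cycles with period 4.
        return (n, 1, n + 1, 0)[n % 4]

    def sum_xor_check(tape, n):
        # Single sequential pass, two accumulators (two words of memory).
        total, acc = 0, 0
        for number in tape:
            total += number
            acc ^= number
        return total == n * (n + 1) // 2 and acc == xor_0_to_n(n)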
It looks like there is a catch in the question that no one has talked about so far; the interviewer has only asked the interviewee to write a program that CHECKS
(i) if each number that makes up the 10G is present once and only once --- what should the interviewee do if a number in the given list is present multiple times? Should he assume that he should stop executing the program and throw an exception, or should he correct the mistake by removing the repeated number and replacing it with another (this may actually be a costly exercise, as it involves a complete reshuffle of the number set)? Correcting this is required to perform the second step in the question, i.e. to verify that the data is stored in the best possible way, requiring the fewest possible passes.
(ii) When the interviewee is asked only to check whether the 10G-sized data set of numbers is stored in such a way that it requires the fewest passes to access any of those numbers,
what should the interviewee do? Should he stop and throw an exception the moment he finds an issue in the way they were stored, or correct the mistake and continue until all the elements are sorted in the order allowing the fewest possible passes?
If the intention of the interviewer is to ask the interviewee to write an algorithm that finds the best combination of numbers that can be stored in 10GB, given 64 32-bit registers, and also an algorithm to save this chosen set of numbers in the best possible way, requiring the fewest passes to access each, he should have asked this directly, wouldn't he?
I suppose the intention of the interviewer may be only to see how the interviewee approaches the problem rather than to actually extract a working solution; would anyone buy this notion?
Regards,
Samba