4-bit binary multiplication in Excel [closed] - vba

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 9 years ago.
Using Excel to demonstrate 4-bit x 4-bit binary multiplication as follows:
Convert the multiplier to binary and process one bit at a time
All other operations can be done in decimal.
I have the above question for homework, but it has been given to me by a poor teacher with poor notes. Can anyone give me an idea of where I can read up on the topic? Are there any books (or preferably a link to a webpage) that I could read up on to help me with it?
This is what I have so far. Obviously it's wrong. The thing I'm having trouble with is the addition of the partial products. I can't simply use =SUM() because 1+1 should equal 0 with a carry. How do I go about achieving this?
Any advice welcome. Thanking you in advance.
Joe.

The problem here is the addition, as you correctly point out - but it's wrong in all the calculated cells, not just the cell you highlighted; it's just luck that that's the only place you add 1+1.
So, let's work that through with an example that adds a pair of 4-bit binary numbers together in rows 1 & 2. There's an interim calculation to put in row 3 and we'll put the result in row 4.
The least significant bit is simplest and we can restrict this to base 2 (binary) using the MOD function like this =MOD(D1+D2,2) which adds the bits from D1 and D2 and returns 0 where the binary result is 0 or 10 and 1 where it is 1 or 11.
Next we can consider the overflow (or carry) from the less significant operation into the next one...
We can calculate whether a bit has overflowed by taking the integer result of a division by 2. The carry out of column D goes in D3 with =INT((D1+D2)/2). For every other column the incoming carry has to be counted as well, so C3 is =INT((C1+C2+D3)/2), and we can fill that back across.
Finally we integrate the carry into the result row, so in C4 we can use =MOD(C1+C2+D3,2) and again fill that back.
Using this you should be able to see how the binary addition works as Excel formulas and work out why your sheet isn't behaving as you expected. Here's the whole calculation in one...
   A                    B                    C                    D
1  1                    1                    1                    1
2  1                    1                    1                    1
3  =INT((A1+A2+B3)/2)   =INT((B1+B2+C3)/2)   =INT((C1+C2+D3)/2)   =INT((D1+D2)/2)
4  =MOD(A1+A2+B3,2)     =MOD(B1+B2+C3,2)     =MOD(C1+C2+D3,2)     =MOD(D1+D2,2)
(A3 ends up holding the final carry out of the addition, i.e. the fifth bit of the result.)
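The same ripple-carry scheme can be sketched outside Excel for comparison (a minimal Python illustration of the MOD/INT pattern, not part of the original spreadsheet):

```python
def add_binary(a, b):
    """Ripple-carry addition of two equal-length bit lists, most significant bit first."""
    result = []
    carry = 0
    for x, y in zip(reversed(a), reversed(b)):  # work from the least significant bit
        s = x + y + carry
        result.append(s % 2)   # the MOD(...,2) step: the result bit
        carry = s // 2         # the INT(.../2) step: the carry out
    result.append(carry)       # final overflow bit
    return list(reversed(result))

print(add_binary([1, 1, 1, 1], [1, 1, 1, 1]))  # 15 + 15 = 30 -> [1, 1, 1, 1, 0]
```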

Related

how to find factors of very big number [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I need to find factors of a very big number, say 10^1000. I.e. if the input is 100 then the output should be 10 10 (because 10*10 = 100). This is very simple if N fits in a long, but I want to know how it is possible to find factors of a very big number like 10^1000. Also, I can't use BigInteger.
1) As has been pointed out, factoring large numbers is hard. It is in fact sufficiently hard that it's the basis for RSA public-key cryptography; in other words, every time you buy something online you are counting on the fact that it's hard to factor numbers on the order of 2^2048 (given that 2^10 = 1024 is about 10^3, 2^2048 is about 10^600). While RSA specifically uses two large prime numbers, and your arbitrary N may have lots of small factors which will help somewhat, I wouldn't count on being able to factor 10^1000 +/- some random value anytime soon.
2) You can definitely reimplement a big-number library using strings [source: a classmate of mine did exactly that before we learned how to do big-number math], but it's going to be painfully slow because you basically have to convert the strings back to ints each time. A slightly less painful approach, if you want to reimplement big numbers, is arrays of integers. You still need some extra steps, but for at least basic math it's not super difficult. (It still won't be as efficient as specialized big-number libraries, which can use clever algorithms. For example, multiplying two big numbers the straightforward way: let A = P * 2^32 + Q (i.e. A is a 64-bit number represented as an array of two 32-bit numbers) and B = R * 2^32 + S; the straightforward product takes 4 multiplications plus some additions plus some carry handling. As the size of the big numbers increases, there are ways (see e.g. http://en.wikipedia.org/wiki/Karatsuba_algorithm) to reduce the number of multiplications required.)
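The array-of-integers idea can be sketched like this (schoolbook multiplication over base-10 digit arrays, least significant digit first; a real library would use 32- or 64-bit limbs and faster algorithms, so treat this purely as an illustration):

```python
def multiply(a, b):
    """Schoolbook multiplication of two numbers stored as base-10 digit lists (LSB first)."""
    result = [0] * (len(a) + len(b))
    # Accumulate all the digit-by-digit partial products first...
    for i, da in enumerate(a):
        for j, db in enumerate(b):
            result[i + j] += da * db
    # ...then make a single pass to propagate the carries.
    carry = 0
    for k in range(len(result)):
        s = result[k] + carry
        result[k] = s % 10
        carry = s // 10
    while len(result) > 1 and result[-1] == 0:  # strip leading zeros
        result.pop()
    return result

# 123 * 45 = 5535; digits are stored least significant first
print(multiply([3, 2, 1], [5, 4]))  # [5, 3, 5, 5]
```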
3) (There are algorithms to more efficiently factor numbers compared to trial factorization, but the current ones are still not going to help compute the numbers you're asking about before the heat death of the universe)
10^1000 has exactly 1,002,001 integer divisors, and they should be very easy to find with a bit of thinking. The prime factorisation is
2 * 2 * 2 * ... * 5 * 5 * 5
with exactly 1,000 twos and exactly 1,000 fives.
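That divisor count is easy to confirm (a quick Python check; the language's exact integer arithmetic handles these sizes without any special library):

```python
# 10^1000 = 2^1000 * 5^1000, so every divisor has the form 2^a * 5^b
# with 0 <= a <= 1000 and 0 <= b <= 1000.
print((1000 + 1) * (1000 + 1))  # 1002001

# Sanity check on a small case: 100 = 2^2 * 5^2 should have (2+1)*(2+1) = 9 divisors.
print(len([d for d in range(1, 101) if 100 % d == 0]))  # 9
```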

Finding VERY precise percentiles based on Z-Scores in C/Objective-C? [closed]

Closed 10 years ago.
I am creating a program that needs to find thousands of individual percentiles, most of which are less than .00005. Currently, to do this I use
0.5 * erfc(-zScore * M_SQRT1_2)
However, this seems to be rounding slightly (throughout the rest of my program I am using doubles and long doubles, so it must be this call). I believe this because at the end, when I add up all the percentiles, I get 1.835468, which tells me it is rounding, since they should add up to 1, or at least a number very close to 1. In addition, when I log each individual percentile, I get the same number (say 0.000036) for a few percentiles in a row, and then it goes down to 0.000035. It should go down each time, as each value is further from the mean than the last.
I need a way to find very precise percentiles based on Z-Scores, which this is not giving me as it is rounding too early, at the 6th decimal place.
When you see it jump from 0.000036 to 0.000035, that is a formatting artifact: log with more digits, e.g. NSLog(@"Value is: %0.36f", yourPercentile);. You should find that it is not actually rounding at 6 digits; that was just how it was logged.
Now, your error is coming from the fact that you are using doubles, which have limited precision (roughly 15-16 significant decimal digits). First, you need to know how much precision you need, and then use a type which can handle that level of precision.
Let's say that you decide that you require 12 digits of precision.
long long is then a good type to use because it can store 19 digits. When you calculate your original value, multiply it by 100,000,000,000 and store it as a long long. Then do the math you need using the scaled values. Eventually, when you get your result, just divide by the same factor (or by 2 digits less if you want to see it as a whole-number percentage) to get your answer.
I believe it's a matter of formatting the number: if two values (long double) are very close to each other but still different, they may display as equal; specify the format with something like "%.12Lf".
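The same erfc identity can be tried quickly outside Objective-C to separate the formatting issue from genuine double precision (a small Python sketch; the function name upper_tail is illustrative, not from the question's code):

```python
import math

def upper_tail(z):
    """P(Z > z) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

a = upper_tail(3.970)
b = upper_tail(3.971)
print(f"{a:.6f} vs {b:.6f}")    # at 6 decimals the two may print identically
print(f"{a:.17g} vs {b:.17g}")  # at full double precision they clearly differ
```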

How to generate all of the numbers of pi in Objective-C [closed]

Closed 10 years ago.
I want to create an iPad application named "Am I in Pi?" that checks for birthday numbers within the digits of pi and shows them. My question is: how can I generate the first million digits of pi (3.1415...)? Is there any Objective-C library, XML file, or function that I can use for my implementation?
Rather than generating pi and searching for a certain sequence of digits, you're best off simply saying yes all the time: no finite digit sequence is known to be absent from pi (though whether every sequence occurs is an open question).
Grab the 1 megabyte of text for pi.
A quick script shows that all 1-, 2-, 3-, and 4-digit sequences exist within this file. Only the following 5-digit sequences don't occur within the first 1M digits of pi:
!!! 14523 not found
!!! 17125 not found
!!! 22801 not found
!!! 33394 not found
!!! 36173 not found
!!! 39648 not found
!!! 40527 not found
!!! 96710 not found
Rather than scanning the text file on each query, index the locations of every 1-, 2-, 3-, and 4-digit sequence up front.
If you want all sequences of 5 or more digits to be findable as well, include a larger file of the digits of pi.
Calculating the first N million digits of pi on an iPad is a waste of CPU and battery when the data file isn't that large.
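The lookup itself is then just a substring search (a minimal Python sketch; PI_DIGITS here is a short hard-coded prefix standing in for the 1 MB digit file):

```python
# First 30 fractional digits of pi, standing in for the full 1 MB file.
PI_DIGITS = "141592653589793238462643383279"

def find_in_pi(needle, digits=PI_DIGITS):
    """1-based position of needle within pi's fractional digits, or -1 if absent."""
    pos = digits.find(needle)
    return pos + 1 if pos >= 0 else -1

print(find_in_pi("26535"))  # 6: starts at the 6th digit after the decimal point
print(find_in_pi("0000"))   # -1: not present in this short prefix
```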

SQL anagram efficiency and logic? [closed]

Closed 11 years ago.
I have an SQL db with about 200,000 words. I need a query that can solve a kind of anagram problem. The difference is that I need all the possible words that could be made with the input characters. For example, if you input ofdg, it should output the words do, go, and dog. Can you estimate the amount of time a query like this would take? How can I make it faster and more efficient? Also, in general, how long does it take SQL to scan a 200,000-row table?
To solve this problem, the first thing you need to do is reduce every word to what Scrabble players call an alphagram. That is, all the letters in the word but in alphabetical order. So do, go and dog make do, go and dgo. Of course, any given alphagram may correspond to more than one word, so, for example, alphagram dgo corresponds to both the words dog and god.
The next thing you need to do is construct a table keyed on (alphagram, sequence number) with a single attribute field, word.
Word lists tend to be static. For example, the two Scrabble word lists in the English-speaking world change about every 5 years or so. So you construct this lookup table beforehand. Performance is O(n) and it is a sunk cost: you do it once and store it, so it is not counted against the cost of the query. It makes absolutely no sense to build such an index on the fly every time a query comes in.
You may be wondering "What is all this about Scrabble?" The answer is that your figure of 200,000 words falls neatly between the two approved tournament word lists in the English-speaking world. The US National Scrabble Association's Official Tournament and Club Word List (2006) contains 178,691 words, and the international list, maintained by the World English Scrabble Players' Association, contains 246,691.
When you get a query, you reduce the supplied word to a bunch of alphagrams. Input odfg makes alphagrams do df dg fg fo go dfo dgo fgo dfg dfgo (which is a pretty programming problem in pure SQL, so I have to assume there is a PHP or Python or JavaScript front-end that will do that for you). Then you do the lookup in the database. The cost of each query should be approximately O(log2 n), in other words pretty damn immediate. That sort of query is what relational databases are good at.
BTW, your example output is poor. Alphagram dfgo with what Scrabble players call 'build' (all possible subsets) makes do od of go dog god fog.
(I hate to have to do this rigmarole, but Hasbro's lawyers are touchy, so: Scrabble is a registered trademark owned in the USA by Hasbro, Inc.; in Canada by Hasbro Canada Corporation; and throughout the rest of the world by J. W. Spear & Sons, a Mattel Company.)
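The whole pipeline (alphagram table built once, subset lookup per query) can be sketched in a few lines (a Python illustration of the approach; in production the table lives in the database with the alphagram column indexed):

```python
from itertools import combinations

def alphagram(word):
    """All the letters of the word, in alphabetical order."""
    return "".join(sorted(word))

def build_index(words):
    """One-off, offline: map each alphagram to every word that spells it."""
    index = {}
    for w in words:
        index.setdefault(alphagram(w), []).append(w)
    return index

def lookup(index, rack):
    """Every word formable from any subset of the rack's letters (the 'build')."""
    letters = sorted(rack)
    found = set()
    for n in range(2, len(letters) + 1):
        for combo in set(combinations(letters, n)):
            found.update(index.get("".join(combo), []))
    return sorted(found)

words = ["do", "go", "dog", "god", "fog", "of", "cat"]
print(lookup(build_index(words), "ofdg"))  # ['do', 'dog', 'fog', 'go', 'god', 'of']
```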
Well, the number of possible orderings of the letters in a word of length n is n! (its permutations). You have a few more candidates still, since you want the shorter words as well, but that does not change the general O(n!) relationship much. So a simple algorithm trying all combinations and looking them up in the database will have that as its complexity.
Making the algorithm more efficient is apparently to reduce the search space - for which there are a few options.
How long it takes to look up a 200,000-row table depends on what kind of data is stored in there, in what format, and what indexes you have on that table.

What other numeric systems are there? [closed]

Closed 11 years ago.
There's binary, decimal, hexadecimal, anything else?
Octal (base-8) is another popular one, but there are infinitely many numeric systems.
For example, in Excel the columns are labeled in hexavigesimal (base-26; strictly, a bijective base-26 with digits A-Z and no zero).
Here is a list of popular positional numeric systems.
Then you also have other numeric systems that aren't positional, such as Chinese and Roman numeral systems, but I'm guessing by your examples that you meant strictly positional numeric systems.
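Excel's column labels make a nice concrete example of an unusual base; here is a small conversion sketch (Python; the function name is illustrative):

```python
def excel_column(n):
    """Column number to Excel label: 1 -> A, 26 -> Z, 27 -> AA, 703 -> AAA.

    Bijective base-26: there is no zero digit, hence the n - 1 adjustment."""
    label = ""
    while n > 0:
        n, r = divmod(n - 1, 26)
        label = chr(ord("A") + r) + label
    return label

print(excel_column(1), excel_column(26), excel_column(27), excel_column(703))
# A Z AA AAA
```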
You can use any base you like to represent numbers, though it becomes difficult once you move beyond the alphanumeric characters. On the other hand, if you consider a single byte as a "digit", then most (unsigned) integral numbers are stored in base-256 within a computer.
That being said, the only widely-used number system aside from those listed (that I'm aware of) is Octal, which is base-8.
The Babylonians liked base 60 (sexagesimal)...
http://en.wikipedia.org/wiki/Babylonian_numerals
There are infinitely many...check out the Wikipedia page for Arity (specifically n-ary):
Arity - Wikipedia
There are tons of bases for integers (with pretty much everyone knowing base 10). There are also other kinds of numbers entirely: complex, real, and rational. See Wikipedia's Number article.
A positional numbering system can be formed for any number n, where n need only be an element of the quaternions (the "Hamiltonians") or any subset thereof. The digit positions then correspond to the weights
{n^k0, n^(k0-1), n^(k0-2), ..., n^0} (radix point) {n^-1, n^-2, ..., n^-k1}
where k0 is the highest order of magnitude and k1 is the precision.