Count all numbers from L to R such that there is exactly one zero in the binary representation

I'm very new to the digit DP (dynamic programming on digits) topic. I need to count all numbers from L to R such that there is exactly one zero in the binary representation.
I know that the answer would be F(R) - F(L-1), where F(X) counts such numbers up to X, but I am not able to take it further and code it.
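For what it's worth, here is a minimal digit-DP sketch in Python of that F(R) - F(L-1) idea, assuming F(X) counts the numbers in [1, X] whose binary representation (written without leading zeros) contains exactly one '0'. The names count_upto and count_range are just illustrative; the state is the usual digit-DP shape (bit position, zeros placed so far, still tight against X, started).

from functools import lru_cache

def count_upto(x):
    """Count the numbers in [1, x] whose binary representation has exactly one '0'."""
    if x <= 0:
        return 0
    bits = bin(x)[2:]              # most significant bit first, no leading zeros
    n = len(bits)

    @lru_cache(maxsize=None)
    def dp(pos, zeros, tight, started):
        # pos: index of the bit being placed; zeros: '0' bits placed since the leading 1;
        # tight: prefix still equals x's prefix; started: the leading 1 has been placed.
        if zeros > 1:
            return 0
        if pos == n:
            return 1 if started and zeros == 1 else 0
        limit = int(bits[pos]) if tight else 1
        total = 0
        for b in range(limit + 1):
            if not started and b == 0:
                total += dp(pos + 1, 0, tight and b == limit, False)   # still skipping leading zeros
            else:
                total += dp(pos + 1, zeros + (b == 0), tight and b == limit, True)
        return total

    return dp(0, 0, True, False)

def count_range(L, R):
    return count_upto(R) - count_upto(L - 1)   # the F(R) - F(L-1) idea from the question

print(count_range(1, 10))   # 3: the numbers 2 (10), 5 (101), 6 (110)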


Unnormalizing in Knuth's Algorithm D

I'm trying to implement Algorithm D from Knuth's "The Art of Computer Programming, Vol. 2" in Rust, although I'm having trouble understanding how to implement the very last step of unnormalizing. My natural numbers are a class where each number is a vector of u64 limbs, i.e. base 2^64. Addition, subtraction, and multiplication have been implemented.
Knuth's Algorithm D is a Euclidean division algorithm which takes two natural numbers x and y and returns (q, r), where q = x / y (integer division) and r = x % y, the remainder. The algorithm depends on an approximation method which only works if the first digit of y is greater than b/2, where b is the base you're representing the numbers in. Since not all numbers are of this form, it uses a "normalizing trick": for example (if we were in base 10), instead of doing 200 / 23, we calculate a normalizer d and do (200 * d) / (23 * d), so that 23 * d has a first digit greater than b/2.
So when we use the approximation method, we end up with the desired q, but the remainder is multiplied by a factor of d. So the last step is to divide r by d so that we can get the q and r we want. My problem is that I'm a bit confused about how we're supposed to do this last step, as it requires division and the method it's part of is the one trying to implement division.
(Maybe helpful?):
The way that d is calculated is just by taking the integer floor of b-1 divided by the first digit of y. However, Knuth suggests that it's possible to make d a power of 2, as long as d times the first digit of y is greater than b/2. I think he makes this suggestion so that, instead of dividing, we can just do a binary shift for this last step, although I don't think I can do that given that my numbers are represented as vectors of u64 values rather than bits.
Any suggestions?
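Not an authoritative answer, but a sketch of Knuth's power-of-two suggestion: if d = 2^s, then "dividing r by d" is just a right shift by s bits, and that works on a vector of u64 limbs as well, because for s < 64 you only ever move bits between adjacent limbs. Illustrative Python (limbs stored least-significant first; the helper names normalizer_shift and shift_right_small are made up):

MASK = (1 << 64) - 1          # limbs are 64-bit words, so the base b is 2**64

def normalizer_shift(y_top):
    """Smallest s such that d = 2**s makes the leading limb of y have its top bit set,
    i.e. d * y_top >= b / 2 (assumes y_top != 0)."""
    s = 0
    while (y_top << s) & (1 << 63) == 0:
        s += 1
    return s

def shift_right_small(limbs, s):
    """Divide a little-endian vector of u64 limbs by 2**s for 0 <= s < 64:
    each limb is shifted right and receives the low s bits of the next-higher limb."""
    if s == 0:
        return list(limbs)
    out = []
    for i, limb in enumerate(limbs):
        higher = limbs[i + 1] if i + 1 < len(limbs) else 0
        out.append(((limb >> s) | (higher << (64 - s))) & MASK)
    while len(out) > 1 and out[-1] == 0:     # trim a now-empty top limb
        out.pop()
    return out

# After the normalized division, the remainder you hold is d * r. With d = 2**3 and the
# true remainder 5 * 2**64 + 7, the normalized remainder is [56, 40]; shifting unnormalizes it.
print(shift_right_small([56, 40], 3))        # [7, 5]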

SAS TO COBOL conversion variable declaration

Friends,
I am doing a SAS to COBOL conversion. I am stuck with the declaration and conversion below, and I am getting an S0C7 abend in the COBOL run. Please provide some solution.
IP in SAS - PD3.5
OP in SAS - z6.5
My COBOL declaration below.
IP s9.9(5);
OP .9(5);
Please suggest a solution.
Thanks a lot!!
Packed Decimal is stored one digit per nibble, which is two digits per byte, with the last nibble storing the sign. The sign nibbles C, A, F, and E are treated as positive; the sign nibbles B and D are treated as negative. Sign nibbles C and D are referred to as "preferred sign". A sign nibble of F is considered "unsigned," meaning it is neither positive nor negative, though pragmatically you can think of it as positive for arithmetic purposes. +123 is stored in two bytes as x'123C', -456 is stored as x'456D'.
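To make the nibble layout concrete, here is a small illustrative Python helper (the name to_packed_decimal is made up) that packs a signed integer the way described above, using C for positive and D for negative:

def to_packed_decimal(value, nbytes):
    """Pack a signed integer into nbytes bytes: one digit per nibble, sign in the last nibble
    (C for positive, D for negative, the 'preferred' signs)."""
    sign = 0xC if value >= 0 else 0xD
    digits = str(abs(value)).zfill(nbytes * 2 - 1)        # a w-byte field holds 2w - 1 digits
    nibbles = [int(ch) for ch in digits] + [sign]
    return bytes((nibbles[i] << 4) | nibbles[i + 1] for i in range(0, len(nibbles), 2))

print(to_packed_decimal(123, 2).hex())    # '123c'
print(to_packed_decimal(-456, 2).hex())   # '456d'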
The SAS PD informat specifies PDw.d where w is the width of the field in bytes and d is the number of decimal places to the right within the field. So PD3.5 is a 3 byte field (which would store 5 digits and a sign) with all 5 digits to the right of the decimal point.
To obtain the COBOL declaration for a SAS PDw.d declaration:
a = (w * 2) - 1
b = a - d
if b = 0
PIC SV9(d) PACKED-DECIMAL
else
PIC S9(b)V9(d) PACKED-DECIMAL
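For PD3.5 that gives a = 5 and b = 0, so every digit sits to the right of the implied decimal point. A quick sketch of the same rule in Python (pd_to_pic is just an illustrative name):

def pd_to_pic(w, d):
    """Sketch of the rule above: translate a SAS PDw.d informat into a COBOL picture string."""
    a = w * 2 - 1          # digits a w-byte packed field can hold
    b = a - d              # digits to the left of the implied decimal point
    if b == 0:
        return f"PIC SV9({d}) PACKED-DECIMAL"
    return f"PIC S9({b})V9({d}) PACKED-DECIMAL"

print(pd_to_pic(3, 5))     # PD3.5 -> PIC SV9(5) PACKED-DECIMAL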
The SAS Z format specifies Zw.d where w is the total width of the field in characters and d is the number of decimal places to the right within the field. The field will be padded with zeroes on the left to make it w characters wide. So Z6.5 specifies a 6-character output field with 5 digits to the right of the decimal point. One character is taken by the decimal point itself, and unfortunately there is no room for the sign, which may be a bug or may be intentional (perhaps all the data is known to be positive).
IP PIC SV9(5) PACKED-DECIMAL.
OP PIC .9(5).
When you MOVE IP TO OP, the conversion from Packed Decimal to Zoned Decimal will be done for you by COBOL.

Is this O(N) algorithm actually O(logN)?

I have an integer, N.
I denote f[i] = number of appearances of the digit i in N.
Now, I have the following algorithm.
FOR i = 0 TO 9
FOR j = 1 TO f[i]
k = k*10 + i;
My teacher said this is O(N). It seems to me more like an O(log N) algorithm.
Am I missing something?
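Purely illustrative, here is the loop written out in Python with k initialised to 0, which makes it easy to see that the inner body runs once per counted digit (the name rebuild and the example N are made up):

def rebuild(f):
    """The nested loops from the question, with k initialised to 0."""
    k = 0
    inner_steps = 0
    for i in range(10):            # FOR i = 0 TO 9
        for _ in range(f[i]):      # FOR j = 1 TO f[i]
            k = k * 10 + i
            inner_steps += 1
    return k, inner_steps          # inner_steps == sum(f): one step per digit of N

N = 12321
f = [str(N).count(str(i)) for i in range(10)]
print(rebuild(f))                  # (11223, 5): five inner iterations for a five-digit N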
I think that you and your teacher are saying the same thing, but it gets confusing because the integer you are using is named N, and it is also common to refer to an algorithm that is linear in the size of its input as O(N). N is getting overloaded as both the specific name and the generic figure of speech.
Suppose we say instead that your number is Z and its digits are counted in the array d and then their frequencies are in f. For example, we could have:
Z = 12321
d = [1,2,3,2,1]
f = [0,2,2,1,0,0,0,0,0,0]
Then the cost of going through all the digits in d and computing the count for each will be O(size(d)) = O(log Z). This is basically what your second loop is doing in reverse: it executes one time for each occurrence of each digit. So you are right that there is something logarithmic going on here -- the number of digits of Z is logarithmic in the size of Z. But your teacher is also right that there is something linear going on here -- counting those digits is linear in the number of digits.
The time complexity of an algorithm is generally measured as a function of the input size. Your algorithm doesn't take N as an input; the input seems to be the array f. There is another variable named k which your code doesn't declare, but I assume that's an oversight and you meant to initialise e.g. k = 0 before the first loop, so that k is not an input to the algorithm.
The outer loop runs 10 times, and the inner loop runs f[i] times for each i. Therefore the total number of iterations of the inner loop equals the sum of the numbers in the array f. So the complexity could be written as O(sum(f)) or O(Σf) where Σ is the mathematical symbol for summation.
Since you defined that N is an integer which f counts the digits of, it is in fact possible to prove that O(Σf) is the same thing as O(log N), so long as N must be a positive integer. This is because Σf equals how many digits the number N has, which is approximately (log N) / (log 10). So by your definition of N, you are correct.
My guess is that your teacher disagrees with you because they think N means something else. If your teacher defines N = Σf then the complexity would be O(N). Or perhaps your teacher made a genuine mistake; that is not impossible. But the first thing to do is make sure you agree on the meaning of N.
I find your explanation a bit confusing, but let's assume N = 9075936782959 is an integer. Then O(N) doesn't really make sense; O(length of N) makes more sense. I'll use n for the length of N.
Then f(i) = "iterate over each digit in N and count how many times i appears", which makes O(f(i)) = n (it's linear). I'm assuming f(i) is a function, not an array.
Your algorithm loops at most:
10 times (first loop)
0 to n times per digit, but n times in total (the sum of f(i) over all digits must be n)
It's tempting to say the algorithm is then O(algo) = 10 + n*f(i) = n^2 (dropping the constant), but f(i) is only calculated 10 times, once each time the second loop is entered, so O(algo) = 10 + n + 10*f(i) = 10 + 11n = n. If f(i) is an array, each lookup is constant time instead.
I'm sure I didn't see the problem the same way as you did. I'm still a little confused about the definition in your question. How did you come up with log(N)?

Number of possible binary strings of length k

One of my friends was asked this question recently:
You have to count how many binary strings of length "K" are possible.
Constraint: Every 0 has a 1 immediately to its left.
This question can be reworded:
How many binary sequences of length K are possible if there are no two consecutive 0s and the first element is 1 (otherwise the constraint fails)? Let us forget about the first element (we can do that because it is always fixed).
Then we get a very famous problem that sounds like this: "What is the number of binary sequences of length K-1 that have no consecutive 0s?" An explanation can be found, for example, here.
Then the answer will be F(K+1), where F(K) is the K-th Fibonacci number starting from (1, 1, 2, ...).
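A small sketch of that formula, with a brute-force check over all length-K strings to confirm F(K+1) for small K (count_fib and count_brute are made-up names):

from itertools import product

def count_fib(K):
    """F(K+1) with F(1) = F(2) = 1, as derived above."""
    a, b = 1, 1
    for _ in range(K - 1):
        a, b = b, a + b
    return b

def count_brute(K):
    """Directly count length-K strings in which every 0 has a 1 immediately to its left."""
    ok = lambda s: all(i > 0 and s[i - 1] == '1' for i in range(K) if s[i] == '0')
    return sum(ok(s) for s in map(''.join, product('01', repeat=K)))

for K in range(1, 9):
    assert count_fib(K) == count_brute(K)
print([count_fib(K) for K in range(1, 9)])   # [1, 2, 3, 5, 8, 13, 21, 34]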
∑ from n=0 to ⌊K/2⌋ of C(K-n, n), where n is the number of zeros in the string
The idea is to group every 0 with the 1 to its immediate left and count the arrangements of the resulting string: for n zeros there are n ones glued to them, so the string effectively has K-n blocks, and we choose which n of those blocks are the "10" pairs. There can be no more than K/2 zeros, as otherwise there would not be enough ones to sit immediately to the left of each zero.
E.g. 111111[10][10]1[10] for K = 13, n = 3
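The same numbers come out of this grouping argument as from the Fibonacci answer above; a quick sketch (math.comb requires Python 3.8+):

from math import comb

def count_groups(K):
    """Sum over n = number of zeros: each 0 is glued to the 1 on its left,
    leaving K - n blocks from which the n '10' blocks are chosen."""
    return sum(comb(K - n, n) for n in range(K // 2 + 1))

print([count_groups(K) for K in range(1, 9)])   # [1, 2, 3, 5, 8, 13, 21, 34] -- matches F(K+1)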

Number of BST's given a linked list of numbers

Suppose I have a linked list of positive numbers; how many BSTs can be generated from them, provided all nodes are required to form the tree?
Alternatively, how many BSTs can be generated if any number of the linked list's nodes can be used in the tree?
Bonus: how many balanced BSTs can be formed? Any help or guidance is greatly appreciated.
You can use dynamic programming to compute that.
Just note that it doesn't matter what the numbers are, only how many there are. In other words, for any n distinct integers there is the same number of different BSTs. Let's call this number f(n).
Then if you know f(k) for all k < n, you can get f(n) (with the base case f(0) = 1 for the empty tree):
f(n) = Sum ( f(i) * f(n-1-i), i = 0,1,2,...,n-1 )
Each summand represents the number of trees in which the (i+1)-th smallest number is at the root (so the left subtree contains i numbers and the right subtree contains n-1-i).
So DP solves this.
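A short sketch of that DP in Python, with the base case f(0) = 1 for the empty subtree (these values are the Catalan numbers):

def num_bsts(n):
    """f[m] = number of distinct BSTs on m distinct keys (the Catalan numbers)."""
    f = [0] * (n + 1)
    f[0] = 1                                              # the empty tree
    for m in range(1, n + 1):
        # the (i+1)-th smallest key at the root: i keys go left, m-1-i go right
        f[m] = sum(f[i] * f[m - 1 - i] for i in range(m))
    return f

print(num_bsts(5))   # [1, 1, 2, 5, 14, 42]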
Now the total number of BSTs (with any nodes from the list) is just a sum:
Sum ( Binomial(n,k) * f(k), k=1,2,3,...,n )
This is because you can pick k of them in Binomial(n,k) ways and then you know that there are f(k) BSTs for them.
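And a sketch of that sum, reusing the same recurrence for f (the sum starts at k = 1 as written above, so the empty selection is not counted):

from math import comb

def total_bsts_any_subset(n):
    """Choose which k of the n keys appear (Binomial(n, k) ways), times f(k) BSTs on them."""
    f = [1] + [0] * n                                     # f[0] = 1, the empty tree
    for m in range(1, n + 1):
        f[m] = sum(f[i] * f[m - 1 - i] for i in range(m))
    return sum(comb(n, k) * f[k] for k in range(1, n + 1))

print(total_bsts_any_subset(3))   # C(3,1)*1 + C(3,2)*2 + C(3,3)*5 = 3 + 6 + 5 = 14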