How to make a bit select chip in HDL?

BitSelect chip has a 3-bit input and an 8-bit output:
CHIP BitSelect {
    IN bit[3];
    OUT out[8];

    PARTS:
    // what parts to use?
}
How can I achieve the behaviour described by the truth table below?
in    out
000   00000001
001   00000010
010   00000100
011   00001000
100   00010000
101   00100000
110   01000000
111   10000000

The chip that matches your description is called a demultiplexor.
DMux8Way built in project 01 of the course is almost the same:
/**
* 8-way demultiplexor:
* {a, b, c, d, e, f, g, h} = {in, 0, 0, 0, 0, 0, 0, 0} if sel == 000
* {0, in, 0, 0, 0, 0, 0, 0} if sel == 001
* etc.
* {0, 0, 0, 0, 0, 0, 0, in} if sel == 111
*/
It has 2 inputs:
IN in, sel[3];
and 8 outputs:
OUT a, b, c, d, e, f, g, h;
You now have to adjust it a little so that it is the only part you need:
1) in to DMux8Way must always be 1 (true in HDL), since you need to switch where the 1 goes.
2) instead of 8 single outputs a..h, you have to wire them up into a single 8-bit output out[8] that matches your chip's interface.
If you need to build your chip from more basic parts, that's a whole different story, but this should still give you a pretty good idea of what to do and what to look for in your design.
As I treat this question as homework help (whether it's actual homework or not), I won't give you a copy-paste solution, but this should help you keep moving in the right direction.
If you need further help, you can always edit your question to be more specific, or ask a new one if the topic differs much from the original.

You can also do this with plain logic gates. For example, the 111 row means out[7] should be bit[0] AND bit[1], ANDed together with bit[2].
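For reference, the mapping in the truth table is just a one-hot decode; in Python (illustrative only, not HDL) it amounts to a single shift:

```python
def bit_select(sel):
    # one-hot decode: place a single 1 at the position given by the 3-bit selector
    assert 0 <= sel < 8
    return 1 << sel

print(format(bit_select(0b011), "08b"))  # 00001000, matching the 011 row
```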
Please don’t plagiarize any work or I’ll set Muller on you.


How to add sequential (time series) constraint to optimization problem using python PuLP?

A simple optimization problem: find the optimal control sequence for a refrigerator based on the cost of energy. The only constraint is to stay below a temperature threshold, and the objective function tries to minimize the cost of the energy used. The problem is simplified so that the control is simply a binary array, i.e. [0, 1, 0, 1, 0], where 1 means using electricity to cool the fridge, and 0 means turning off the cooling mechanism (which means there is no cost for that period, but the temperature will increase). We can assume each period is a fixed length of time and has a constant temperature change based on its on/off status.
Here are the example values:
Cost of energy (for our example 5 periods): [466, 426, 423, 442, 494]
Minimum cooling periods (just as a test): 3
Starting temperature: 0
Temperature threshold(must be less than or equal): 1
Temperature change per period of cooling: -1
Temperature change per period of warming (when control input is 0): 2
And here is the code in PuLP
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpStatus, value
from itertools import accumulate
l = list(range(5))
costy = [466, 426, 423, 442, 494]
cost = dict(zip(l, costy))
min_cooling_periods = 3
prob = LpProblem("Fridge", LpMinimize)
si = LpVariable.dicts("time_step", l, lowBound=0, upBound=1, cat='Integer')
prob += lpSum([cost[i]*si[i] for i in l]) # cost function to minimize
prob += lpSum([si[i] for i in l]) >= min_cooling_periods # how many values must be positive
prob.solve()
The optimization seems to work before I try to account for the temperature threshold. With just the cost function, it returns an array of 0s, which does indeed minimize the cost (duh). With the first constraint (how many values must be positive) it picks the cheapest 3 cooling periods, and calculates the total cost correctly.
obj = value(prob.objective)
print(f'Solution is {LpStatus[prob.status]}\nThe total cost of this regime is: {obj}\n')
for v in prob.variables():
    print(f'{v.name} = {v.varValue}')
output:
Solution is Optimal
The total cost of this regime is: 1291.0
time_step_0 = 0.0
time_step_1 = 1.0
time_step_2 = 1.0
time_step_3 = 1.0
time_step_4 = 0.0
So, if our control sequence is [0, 1, 1, 1, 0], the temperature at the end of each cooling/warming period will be [2, 1, 0, -1, 1]. The temperature goes up 2 whenever the control input is 0, and down 1 whenever the control input is 1. This example sequence is a valid answer, but will have to change if we add a max temperature threshold of 1, which would mean the first value must be a 1, or else the fridge will warm to a temperature of 2.
However I get incorrect results when trying to specify the sequential constraint of staying within the temperature thresholds with the condition:
up_temp_thresh = 1
down = -1
up = 2
# here is where I try to ensure that the control sequence would never cause the temperature to
# surpass the threshold. In practice I would like a lower and upper threshold but for now
# let us focus only on the upper threshold.
prob += lpSum([e <= up_temp_thresh for e in accumulate([down if si[i] == 1. else up for i in l])]) >= len(l)
In this case the answer comes out the same as before, I am clearly not formulating it correctly as the sequence [0, 1, 1, 1, 0] would surpass the threshold.
I am trying to encode "the temperature at the end of each control sequence must be less than the threshold". I do this by turning the control sequence into an array of the temperature changes, so control sequence [0, 1, 1, 1, 0] gives us temperature changes [2, -1, -1, -1, 2]. Then using the accumulate function, it computes a cumulative sum, equal to the fridge temp after each step, which is [2, 1, 0, -1, 1]. I would like to just check if the max of this array is less than the threshold, but using lpSum I check that the sum of values in the array less than the threshold is equal to the length of the array, which should be the same thing.
However I'm clearly formulating this step incorrectly. As written this last constraint has no effect on the output, and small changes give other wrong answers. It seems the answer should be [1, 1, 1, 0, 0], which gives an acceptable temperature series of [-1, -2, -3, -1, 1]. How can I specify the sequential nature of the control input using PuLP, or another free python optimization library?
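As a sanity check on the intended semantics (plain Python, separate from the PuLP model; function and parameter names are mine), the control-to-temperature mapping described above can be simulated directly:

```python
from itertools import accumulate

def temps(control, start=0, cool=-1, warm=2):
    # temperature at the end of each period for a binary on/off control sequence
    changes = [cool if on else warm for on in control]
    return list(accumulate(changes, initial=start))[1:]

print(temps([0, 1, 1, 1, 0]))  # [2, 1, 0, -1, 1] - exceeds the threshold of 1
print(temps([1, 1, 1, 0, 0]))  # [-1, -2, -3, -1, 1] - stays at or below 1
```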
The easiest and least error-prone approach would be to create a new set of auxiliary variables in your problem which track the temperature of the fridge in each interval. These are not 'primary decision variables', because you cannot choose them directly - rather, their values are constrained by the on/off decision variables for the fridge.
You would then add constraints on these temperature state variables to represent the dynamics. So in untested code:
l_plus_1 = list(range(6))
fridge_temp = LpVariable.dicts("fridge_temp", l_plus_1, cat='Continuous')
fridge_temp[0] = init_temp  # initial temperature of fridge - a known value
for i in l:
    prob += fridge_temp[i+1] == fridge_temp[i] + 2 - 3*si[i]
You can then set the min/max temperature constraints on these new fridge_temp variables.
Note that in the above I've assumed the fridge temperature variables are defined for one more interval than the on/off decisions. Each fridge temperature variable represents the temperature at the start of an interval, and having one extra one means we can ensure the final temperature of the fridge is acceptable.
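The expression 2 - 3*si[i] folds both cases of the dynamics into a single linear term, which is what makes it usable inside an LP; a quick plain-Python check (helper name is mine):

```python
def next_temp(temp, on):
    # fridge dynamics: +2 when off (on == 0), -1 when on (on == 1)
    return temp + 2 - 3 * on

print(next_temp(0, 0))  # 2  (warming step)
print(next_temp(0, 1))  # -1 (cooling step)
```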

Create a new generation using replication and crossover in a genetic algorithm

Hi all, I am studying genetic algorithms and how to create a new generation. I am stuck on the following problem:
This question refers to Genetic Algorithms. Assume you have a population made of 10 individuals. Each individual is made of 5 bits. Here is the initial population.
x1 = (1, 0, 0, 1, 1)
x2 = (1, 1, 0, 0, 1)
x3 = (1, 1, 0, 1, 1)
x4 = (1, 1, 1, 1, 1)
x5 = (0, 0, 0, 1, 1)
x6 = (0, 0, 1, 1, 1)
x7 = (0, 0, 0, 0, 1)
x8 = (0, 0, 0, 0, 0)
x9 = (1, 0, 1, 1, 1)
x10 = (1, 0, 0, 1, 0)
Individuals are ranked according to fitness value (x1 has the greatest fitness value, x2 the second best, etc.). Assume that when sampling, you get individuals in the same order as they are ranked. Create a new generation of solutions assuming the following:
Replication is 20%. Cross over is 80% (assume a crossover mask as follows: 11100; pair examples in the same order as ranked). No mutation is done.
My solution: replication is 20%, which means the first two individuals are unchanged. Next, given the crossover mask 11100, I start from x3 and x4: I keep the first 3 bits of both x3 and x4 the same, and swap the last two remaining bits between x3 and x4 to generate the new individuals. I follow the same rule for x5 and x6, x7 and x8, and x9 and x10. I am not sure whether this answer is correct or wrong. Can anybody help me please?
I don't know the background of the implementation you are using so I may not be correct, but from a genetic algorithm point of view most of your answer seems correct.
As far as I can see, the only issue in your reasoning is with the crossover. After replication has taken place you use the remaining chromosomes for crossover. This seems inherently flawed from a genetic algorithms point of view. Genetic algorithms generally use the best chromosome in crossover. You've already saved the best and seem to then exclude them from any recombination. This idea goes against the idea of genetic algorithms, which is to evolve the population by means of recombination of the fittest individuals. At the very least, the fittest chromosomes should be included.
Generally, most implementations involve an element of randomness in selection with the fittest chromosomes given more weighting. Since your question explicitly states that pairs are selected in order of ranking, and therefore no randomness, I'd assume crossover is to be performed on chromosomes 1 to 8.
Your understanding of the crossover mask seems correct from the question.
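For concreteness, a mask-based crossover of the kind the question describes can be sketched as follows (function name is mine, for illustration):

```python
def mask_crossover(p1, p2, mask):
    # where the mask bit is 1, each child keeps its own parent's gene;
    # where it is 0, the genes are swapped between the two children
    c1 = tuple(a if m else b for a, b, m in zip(p1, p2, mask))
    c2 = tuple(b if m else a for a, b, m in zip(p1, p2, mask))
    return c1, c2

x1, x2 = (1, 0, 0, 1, 1), (1, 1, 0, 0, 1)
print(mask_crossover(x1, x2, (1, 1, 1, 0, 0)))
# ((1, 0, 0, 0, 1), (1, 1, 0, 1, 1))
```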
Again, I know nothing of the implementation in question so I'm not sure how good my understanding is. I'd be interested to know the source since the genetic algorithm seems highly unusual.

Distribute number over a bell curve

I am looking for a mathematical function that produces something similar to a bell curve (I think). I am very much out of my depth here. I think the Gaussian function might be what I need, but I don't know how to apply it correctly for my purposes.
I will be using the function to animate over a series of objects:
I want to simulate the appearance of acceleration and deceleration of this animation by offsetting each object closer to the previous one, until the midway point, after which the offset increases back to the original value:
Once implemented, I want the function to accept my start and end points on the x-axis, and the number of objects that need to be accommodated. It should then return a series of values that will be the x origin for each object.
For example:
Start: 0
End: 100
Objects: 20
Flat distribution: 0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95
Required results: 0, 10, 19, 27, 34, 40, 44, 45, 46, 47, 48, 49, 50, 51, 55, 60, 66, 73, 81, 90
Some control over the profile of the curve would also be nice - for example, my estimated values above are quite a 'flat' bell (items 7-14 have the same offset).
Consider the following cubic polynomial:
f(x,a) = 4ax^3 - 6ax^2 + 2ax + x
evaluated over the domain x in [0, 1], with a held constant and chosen from the interval [0, 1].
This will generate a plot that starts at zero and ends at one. If a == 0, you get a straight line. If a == 1, you get a deep curve. For a somewhere in between, you get something in between.
Once you've picked a good value for a, you simply evaluate at however many points you want between 0 and 1. Then you scale the values to fit the range that you want.
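In Python, that recipe might look like this (function and parameter names are mine, for illustration):

```python
def distribute(start, end, n, a):
    # ease positions with f(x) = 4a*x^3 - 6a*x^2 + 2a*x + x on [0, 1],
    # then scale the result into [start, end]
    def f(x):
        return 4*a*x**3 - 6*a*x**2 + 2*a*x + x
    return [start + (end - start) * f(i / (n - 1)) for i in range(n)]

print(distribute(0, 100, 5, 0))  # a == 0 gives the flat (linear) spacing
print(distribute(0, 100, 5, 1))  # a == 1 bunches points toward the middle
```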
This screenshot of a spreadsheet gives two examples. Columns A and F hold the value a, columns B and G hold values of x (if you wanted to use the exact values from your flat distribution, you could change every usage of x to be x/100). Columns C and H hold the outcome of f(x,a). Columns D and I hold f(x,a)*100.
Here is a Java implementation for generating a normal deviate (it assumes a shared java.util.Random instance named random):
/** Generation of Gaussian deviates (variates) by the ratio-of-uniforms method. */
final public static double generateNormalDeviate( final double mean, final double std_deviation ){
    double u, v, x, y, q;
    do {
        u = random.nextDouble();
        v = 1.7156 * ( random.nextDouble() - 0.5 );
        x = u - 0.449871;
        y = Math.abs( v ) + 0.386595;
        q = x*x + y * ( 0.19600 * y - 0.25472 * x );
    } while( q > 0.27597 &&
             ( q > 0.27846 || v*v > -4d * Math.log( u ) * u*u ) );
    return mean + std_deviation * v / u;
}
See Numerical Recipes by Press et al. for more information and a C version.
IIRC, a sum of N independent uniform randoms from [-1, 1] gives a good approximation to a Gaussian curve centered at 0; I don't remember the dispersion.
Edit: I didn't understand the question at first. You seem to need the inverse error function. Since the error function is an integral that cannot be expressed in elementary functions, you will want an approximation of its inverse implemented in your code. Once you have a function that approximates it well, take a number B in [0, 1] as a base (this defines the flatness of the bell curve), distribute your N inputs evenly between (-1+B) and (1-B), take the outputs of the inverse error function as the curve positions, then normalize them so the leftmost lands at the start and the rightmost at the end.

<< in UITableView Enum

A << operator is used in UITableViewCell, as listed below:
enum {
    UITableViewCellStateDefaultMask                   = 0,
    UITableViewCellStateShowingEditControlMask        = 1 << 0,
    UITableViewCellStateShowingDeleteConfirmationMask = 1 << 1
};
I have read this post (<< operator in objective c enum?) but I am still not clear about the use of the << operator.
The same enum could be written as shown below, so why have they used the << operator?
enum {
    UITableViewCellStateDefaultMask                   = 0,
    UITableViewCellStateShowingEditControlMask        = 1,
    UITableViewCellStateShowingDeleteConfirmationMask = 2
};
The post you have linked explains why quite clearly. The << operator in C shifts numbers left by the specified number of bits. By shifting a 1 into each column, it is easy to see that the enum options can be bitwise ORed together. This allows the enum options to be combined together using the | operator and held in a single integer. This would not work if the enum declaration was as follows:
enum {
    UITableViewCellStateDefaultMask = 0,                    // 00 in binary
    UITableViewCellStateShowingEditControlMask = 1,         // 01 in binary
    UITableViewCellStateShowingDeleteConfirmationMask = 2,  // 10 in binary
    UITableViewCellStateThatIJustMadeUpForThisExample = 3   // 11 in binary
};
As 3 = 11 in binary, it is not possible to know from a single integer if you have the state UITableViewCellStateThatIJustMadeUpForThisExample or UITableViewCellStateShowingEditControlMask ORed with UITableViewCellStateShowingDeleteConfirmationMask.
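The difference is easy to demonstrate (Python here, but the bit arithmetic is identical in C; the short flag names are mine):

```python
EDIT_MASK   = 1 << 0  # 1, binary 01
DELETE_MASK = 1 << 1  # 2, binary 10

combined = EDIT_MASK | DELETE_MASK  # 3, binary 11
assert combined & EDIT_MASK         # both flags are individually recoverable
assert combined & DELETE_MASK

# with the plain 0, 1, 2, 3 scheme, the value 3 would be indistinguishable
# from the made-up fourth state
```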
The enum values give names to bits that are to be used in a bitmask. The bit values in a bitmask are 1, 2, 4, 8, 16, ... (the powers of two). These values can be shown more clearly using the expressions 1<<0, 1<<1, 1<<2, 1<<3 -- i.e., 1 shifted to the left by 0, 1, 2, 3 places. It's clearer and less error-prone than listing the powers of 2 as decimal constants.
When you use the values, they are normally combined using a bitwise-OR operation ('|'). The goal is to specify zero or more bits, each of which has a specific meaning. Using a bitmask allows you to specify them independently but compactly. You may wish to read more on bitmasks for more details and examples.

How to solve Project Euler Problem 303 faster?

The problem is:
For a positive integer n, define f(n) as the least positive multiple of n that, written in base 10, uses only digits ≤ 2.
Thus f(2)=2, f(3)=12, f(7)=21, f(42)=210, f(89)=1121222.
To solve it in Mathematica, I wrote a function f which calculates f(n)/n :
f[n_] := Module[{i},
    i = 1;
    While[Mod[FromDigits[IntegerDigits[i, 3]], n] != 0, i = i + 1];
    Return[FromDigits[IntegerDigits[i, 3]]/n]]
The principle is simple: enumerate all numbers whose digits are 0, 1, and 2 by using the ternary numeral system, until one of those numbers is divisible by n.
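The same idea translates directly to Python (a slow reference implementation for illustration, not a speedup):

```python
def f_over_n(n):
    # read i's base-3 digits as a base-10 number: this enumerates, in
    # increasing order, exactly the numbers whose digits are all <= 2
    i = 1
    while True:
        digits, j = [], i
        while j:
            digits.append(str(j % 3))
            j //= 3
        candidate = int(''.join(reversed(digits)))
        if candidate % n == 0:
            return candidate // n
        i += 1

print([f_over_n(n) for n in (2, 3, 7, 42)])  # f(n)/n for the worked examples
```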
It correctly gives 11363107 for 1~100, and I tested for 1~1000 (calculation took roughly a minute, and gives 111427232491), so I started to calculate the answer of the problem.
However, this method is too slow. The computer has been calculating the answer for two hours and hasn't finished computing.
How can I improve my code to calculate faster?
hammar's comment makes it clear that the calculation time is disproportionately spent on values of n that are a multiple of 99. I would suggest finding an algorithm that targets those cases (I have left this as an exercise for the reader) and use Mathematica's pattern matching to direct the calculation to the appropriate one.
f[n_Integer?Positive]/; Mod[n,99]==0 := (* magic here *)
f[n_] := (* case for all other numbers *) Module[{i}, i = 1;
While[Mod[FromDigits[IntegerDigits[i, 3]], n] != 0, i = i + 1];
Return[FromDigits[IntegerDigits[i, 3]]/n]]
Incidentally, you can speed up the fast, easy cases by doing it a slightly different way, though that is of course a second-order improvement. You could set the code up to use ff initially, breaking its While loop if i reaches a certain point, and then switching to the f function you already have. (Notice that ff returns n i, not i - that is just for illustrative purposes.)
ff[n_] := Module[{i},
    i = 1;
    While[Max[IntegerDigits[n i]] > 2, i++];
    Return[n i]]
Table[Timing[ff[n]], {n, 80, 90}]
{{0.000125, 1120}, {0.001151, 21222}, {0.001172, 22222}, {0.00059,
11122}, {0.000124, 2100}, {0.00007, 1020}, {0.000655,
12212}, {0.000125, 2001}, {0.000119, 2112}, {0.04202,
1121222}, {0.004291, 122220}}
This is at least a little faster than your version (reproduced below) for the short cases, but it's much slower for the long cases.
Table[Timing[f[n]], {n, 80, 90}]
{{0.000318, 14}, {0.001225, 262}, {0.001363, 271}, {0.000706,
134}, {0.000358, 25}, {0.000185, 12}, {0.000934, 142}, {0.000316,
23}, {0.000447, 24}, {0.006628, 12598}, {0.002633, 1358}}
A simple thing that you can do to is compile your function to C and make it parallelizable.
Clear[f, fCC]
f[n_Integer] := f[n] = fCC[n]
fCC = Compile[{{n, _Integer}}, Module[{i = 1},
While[Mod[FromDigits[IntegerDigits[i, 3]], n] != 0, i++];
Return[FromDigits[IntegerDigits[i, 3]]]],
Parallelization -> True, CompilationTarget -> "C"];
Total[ParallelTable[f[i]/i, {i, 1, 100}]]
(* Returns 11363107 *)
The problem is that eventually your integers will be larger than a long integer, and Mathematica will revert to the non-compiled arbitrary-precision arithmetic. (I don't know why the Mathematica compiler does not include an arbitrary-precision C library...)
As ShreevatsaR commented, the project Euler problems are often designed to run quickly if you write smart code (and think about the math), but take forever if you want to brute force it. See the about page. Also, spoilers posted on their message boards are removed and it's considered bad form to post spoilers on other sites.
Aside:
You can test that the compiled code is using 32bit longs by running
In[1]:= test = Compile[{{n, _Integer}}, {n + 1, n - 1}];
In[2]:= test[2147483646]
Out[2]= {2147483647, 2147483645}
In[3]:= test[2147483647]
During evaluation of In[53]:= CompiledFunction::cfn: Numerical error encountered at instruction 1; proceeding with uncompiled evaluation. >>
Out[3]= {2147483648, 2147483646}
In[4]:= test[2147483648]
During evaluation of In[52]:= CompiledFunction::cfsa: Argument 2147483648 at position 1 should be a machine-size integer. >>
Out[4]= {2147483649, 2147483647}
and similar for the negative numbers.
I am sure there must be better ways to do this, but this is as far as my inspiration got me.
The following code finds all values of f[n] for n from 1 to 10,000 except the most difficult one, which happens to be n = 9999. I stop the loop when we get there.
ClearAll[f];
i3 = 1;
divNotFound = Range[10000];
While[Length[divNotFound] > 1,
i10 = FromDigits[IntegerDigits[i3++, 3]];
divFound = Pick[divNotFound, Divisible[i10, divNotFound]];
divNotFound = Complement[divNotFound, divFound];
Scan[(f[#] = i10) &, divFound]
] // Timing
Divisible can take lists for both of its arguments, and we make good use of that here. The whole routine takes about 8 minutes.
For 9999 a bit of thinking is necessary. It is not brute-forceable in a reasonable time.
Let P be the factor we are looking for and T (consisting only of 0's, 1's, and 2's) the result of multiplying P by 9999, that is,
9999 P = T
then
P(10,000 - 1) = 10,000 P - P = T
==> 10,000 P = P + T
Let P1, ..., PL be the digits of P and Ti the digits of T. Writing the addition P + T = 10,000 P out digit by digit, the last four zeros in the sum originate of course from the multiplication by 10,000. Hence TL+1, ..., TL+4 and PL-3, ..., PL are each other's complement. Where the former consists only of 0, 1, 2, the latter allows:
last4 = IntegerDigits[#][[-4 ;; -1]] & /@ (10000 - FromDigits /@ Tuples[{0, 1, 2}, 4])
==> {{0, 0, 0, 0}, {9, 9, 9, 9}, {9, 9, 9, 8}, {9, 9, 9, 0}, {9, 9, 8, 9},
     {9, 9, 8, 8}, {9, 9, 8, 0}, {9, 9, 7, 9}, ..., {7, 7, 7, 9}, {7, 7, 7, 8}}
There are only 81 allowable sets, with 7's, 8's, 9's and 0's (not all possible combinations of them) instead of 10,000 numbers, a speed gain of a factor of 120.
One can see that P1-P4 can only have ternary digits, being the sum of a ternary digit and naught. You can also see there can be no carry-over from the addition of T5 and P1. A further reduction can be gained by realizing that P1 cannot be 0 (the first digit must be something), and if it were a 2, multiplication by 9999 would cause an 8 or 9 (if a carry occurs) in the result for T, which is not allowed either. It must be a 1 then. Twos may also be excluded for P2-P4.
Since P5 = P1 + T5 it follows that P5 < 4 as T5 < 3, same for P6-P8.
Since P9 = P5 + T9 it follows that P9 < 6, same for P10-P11
In all these cases the additions don't need to include a carry-over, as none can occur (Pi + Ti is always < 8). This may not be true for P12 if L = 16: in that case we can have a carry-over from the addition of the last 4 digits, so P12 < 7. This also excludes P12 from being in the last block of 4 digits. The solution must therefore be at least 16 digits long.
Combining all this we are going to try to find a solution for L=16:
Do[
If[Max[IntegerDigits[
9999 FromDigits[{1, 1, 1, 1, i5, i6, i7, i8, i9, i10, i11, i12}~
Join~l4]]
] < 3,
Return[FromDigits[{1, 1, 1, 1, i5, i6, i7, i8, i9, i10, i11, i12}~Join~l4]]
],
{i5, 0, 3}, {i6, 0, 3}, {i7, 0, 3}, {i8, 0, 3}, {i9, 0, 5},
{i10, 0, 5}, {i11, 0, 5}, {i12, 0, 6}, {l4,last4}
] // Timing
==> {295.372, 1111333355557778}
and indeed 1,111,333,355,557,778 x 9,999 = 11,112,222,222,222,222,222
We could have guessed this as
f[9] = 12,222
f[99] = 1,122,222,222
f[999] = 111,222,222,222,222
The pattern apparently being that the number of 1's increases by 1 each step and the number of consecutive 2's by 4.
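The claimed values can be spot-checked in Python for divisibility and the digit constraint (this does not verify minimality):

```python
def is_valid(n, value):
    # a multiple of n whose base-10 digits are all <= 2
    return value % n == 0 and set(str(value)) <= set("012")

assert is_valid(9, 12222)
assert is_valid(99, 1122222222)
assert is_valid(999, 111222222222222)
assert is_valid(9999, 9999 * 1111333355557778)
print(9999 * 1111333355557778)  # 11112222222222222222
```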
With 13 min, this is over the 1 min limit for project Euler. Perhaps I'll look into it some time soon.
Try something smarter.
Build a function F(N) which finds the smallest number using only the digits {0, 1, 2} that is divisible by N.
So for a given N, the number we are looking for can be written as SUM = 10^n * d_n + 10^(n-1) * d_(n-1) + ... + 10^1 * d_1 + 10^0 * d_0 (where the d_i are the digits of the number).
So you have to find digits such that SUM % N == 0.
Basically, each digit d_i contributes (10^i * d_i) % N to SUM % N.
I am not giving any more hints, but the next hint would be to use DP. Try to figure out how to use DP to find out the digits.
For all numbers between 1 and 10,000 it took under 1 second in C++ (in total).
Good luck.