How to find the multiplicative orders of all elements in F_13?
I am working with finite fields and was referring to some online class material. Is there a way to find these orders?
The non-zero elements of F_13 form a multiplicative group of order 12. You can represent them by the numbers 1, 2, 3, ..., 12. Algebra tells you that this group is cyclic, and it turns out that 2 is a generator. Knowing the order of an element g in a group G, it is straightforward to determine the order of any element of the form g^i: it is ord(g)/gcd(i, ord(g)). You can use this to determine the orders of all the elements.
A different method is to use the definition of the order of an element directly: for each element g you calculate g, g^2, g^3, g^4, ... The smallest number d for which g^d = 1 is the order of that element. Given the small size of the group F_13^*, this is quite doable.
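For example, here is a short Python sketch (my own illustration, not from your class material) that computes every order directly from the definition, and then reproduces the same orders via the generator 2 and the 12/gcd(i, 12) formula:

from math import gcd

p = 13

def order(g, p):
    # multiply by g until we return to 1; the number of steps is the order
    x, d = g, 1
    while x != 1:
        x = (x * g) % p
        d += 1
    return d

for g in range(1, p):
    print(g, order(g, p))

# the same orders via the generator 2: the order of 2^i mod 13 is 12 // gcd(i, 12)
for i in range(12):
    print(pow(2, i, p), 12 // gcd(i, p - 1))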
I was curious to learn if there is a simple method/algorithm through which I can obtain a generator g for a 20-digit prime integer to implement in Elgamal cryptosystem.
The best way to do this is to pick your prime such that finding a generator is easy. Often the best way is to find a prime p such that q = 2p + 1 is also prime. The multiplicative group modulo q then has order q - 1 = 2p, so its elements have order 2p, p, 2 or 1. Most have order 2p, so just pick a number g and check that g^2 and g^p are not 1 (mod q); then g has order 2p and is thus a generator of the group.
If the prime q is given, then the order of the group is q - 1 and you will need to factorise q - 1 into prime factors (which is not always easy). Then, for your candidate g, check that g^((q-1)/r) is not 1 for each prime factor r of q - 1; if none of these powers is 1, then g has order q - 1 and is a generator. This is why, if you can pick your prime q, it is easier to make sure that q - 1 factorises nicely into just two primes.
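As a sketch of the safe-prime approach (my own illustration; it assumes sympy's isprime for primality testing, and the bit size is picked to give a roughly 20-digit q):

from random import randrange
from sympy import isprime

def safe_prime_and_generator(bits=64):
    # find p such that q = 2p + 1 is also prime (q is then a "safe prime")
    while True:
        p = randrange(2**(bits - 1), 2**bits)
        q = 2 * p + 1
        if isprime(p) and isprime(q):
            break
    # the group mod q has order 2p; g generates it iff g^2 != 1 and g^p != 1
    while True:
        g = randrange(2, q - 1)
        if pow(g, 2, q) != 1 and pow(g, p, q) != 1:
            return q, g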
Let's say I have a list of 5 words:
[this, is, a, short, list]
Furthermore, I can classify some text by counting the occurrences of the words from the list above and representing these counts as a vector:
N = [1,0,2,5,10] # 1x this, 0x is, 2x a, 5x short, 10x list found in the given text
In the same way, I classify many other texts (count the 5 words per text, and represent them as counts - each row represents a different text which we will be comparing to N):
M = [[1,0,2,0,5],
     [0,0,0,0,0],
     [2,0,0,0,20],
     [4,0,8,20,40],
     ...]
Now, I want to find the top 1 (2, 3, etc.) rows from M that are most similar to N. Or, in simple words, the texts most similar to my initial text.
The challenge is, just checking the distances between N and each row from M is not enough, since for example row M4 [4,0,8,20,40] is very different by distance from N, but still proportional (by a factor of 4) and therefore very similar. For example, the text in row M4 can be just 4x as long as the text represented by N, so naturally all counts will be 4x as high.
What is the best approach to solve this problem (of finding the most 1,2,3 etc similar texts from M to the text in N)?
Generally speaking, the most widely used technique for bag-of-words similarity (i.e. your arrays) is the cosine similarity measure. This maps your bag of n (here 5) words to an n-dimensional space, and each array is a point (which is essentially also a vector) in that space. The most similar vectors (or points) are the ones with the smallest angle to your text N in that space (this automatically takes care of the proportional ones, since they point in the same direction). Here is code for it (assuming M and N are numpy arrays of the shapes introduced in the question):
import numpy as np
# cosine similarity of N to every row of M; an all-zero row yields NaN, treated here as 0
cos = np.nan_to_num(np.dot(M, N) / (np.linalg.norm(M, axis=1) * np.linalg.norm(N)))
cos_sim = M[np.argmax(cos)]
which gives output [ 4 0 8 20 40] for your inputs.
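If you want the top n matches instead of just the single best one, you can sort by similarity rather than taking the argmax (reusing cos from the snippet above):

top_n = M[np.argsort(cos)[::-1][:n]]  # the n most similar rows, for your choice of n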
You can normalise your row counts to remove the length effect, as you discussed. Row normalisation of M can be done as M / M.sum(axis=1)[:, np.newaxis]. The residual values can then be calculated as the sum of the squared differences between N and each row of M. The minimum difference (ignoring the NaN or inf values obtained when a row sum is 0) then gives the most similar row.
Here is an example:
import numpy as np
N = np.array([1,0,2,5,10])
M = np.array([[1,0,2,0,5],
              [0,0,0,0,0],
              [2,0,0,0,20],
              [4,0,8,20,40]])
# sqrt of sum of normalised square differences
similarity = np.sqrt(np.sum((M / M.sum(axis=1)[:, np.newaxis] - N / np.sum(N))**2, axis=1))
# replace the NaN values (from rows summing to 0) with something larger than every real distance
similarity[np.isnan(similarity)] = np.nanmax(similarity) + 1
result = M[similarity.argmin()]
result
>>> array([ 4, 0, 8, 20, 40])
You could then use np.argsort(similarity)[:n] to get the n most similar rows.
I've been using the PuLP library for a side project (daily fantasy sports) where I optimize the projected value of a lineup based on a series of constraints.
I've implemented most of them, but one constraint is that players must come from at least three separate teams.
This paper has an implementation (page 18, 4.2), which I've attached as an image.
It seems that they somehow derive an indicator variable for each team that is 1 if a given team has at least one player in the lineup, and then constrain the sum of those indicators to be greater than or equal to 3.
Does anybody know how this would be implemented in PuLP?
Similar examples would also be helpful.
Any assistance would be super appreciated!
In this case you would define a binary variable t[i, l] that can only be 1 when lineup i uses at least one player from team l (the sum of the corresponding x variables is its upper limit). In Python I don't like to name variables with a single letter, but as I have nothing else to go on, here is how I would do it in PuLP.
from pulp import LpProblem, LpVariable, LpBinary, LpMaximize, lpSum

# assume that lineups, players, players_by_team and teams are defined elsewhere
prob = LpProblem("fantasy_lineup", LpMaximize)

x_index = [(i, p) for i in lineups for p in players]
t_index = [(i, l) for i in lineups for l in teams]
x = LpVariable.dicts("x", x_index, cat=LpBinary)  # player p is picked for lineup i
t = LpVariable.dicts("t", t_index, cat=LpBinary)  # lineup i uses team l

for i in lineups:
    for l in teams:
        # t[i,l] can only be 1 if lineup i picks at least one player from team l
        prob += t[i, l] <= lpSum([x[i, k] for k in players_by_team[l]])
    # require players from at least three separate teams in every lineup
    prob += lpSum([t[i, l] for l in teams]) >= 3
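One subtlety worth noting (my addition): the first constraint only forces t[i, l] down to 0 when lineup i has no player from team l; when it does have one, t[i, l] may still be 0 or 1. The >= 3 constraint then pushes three of the t variables up to 1, which is only feasible when at least three teams actually contribute a player, so together the two constraints implement the indicator described in the paper.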
Suppose I have a linked list of positive numbers. How many BSTs can be generated from them, provided all nodes are required to form the tree?
Additionally, how many BSTs can be generated if any number of the linked list's nodes may be used in the trees?
Bonus: how many balanced BSTs can be formed? Any help or guidance is greatly appreciated.
You can use dynamic programming to compute that.
Just note that it doesn't matter what the numbers are, only how many there are: for any n distinct integers there is the same number of different BSTs. Let's call this number f(n).
Then if you know f(k) for all k < n (with the base case f(0) = 1 for the empty tree), you can get f(n):
f(n) = Sum ( f(i) * f(n-1-i), i = 0,1,2,...,n-1 )
Each summand counts the trees in which the (1+i)-th smallest number is at the root (so the left subtree contains the i smaller numbers and the right subtree contains the n-1-i larger ones).
So DP solves this.
Now the total number of BSTs (with any nodes from the list) is just a sum:
Sum ( Binomial(n,k) * f(k), k=1,2,3,...,n )
This is because you can pick k of them in Binomial(n,k) ways and then you know that there are f(k) BSTs for them.
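Here is a short Python sketch of that DP (the function names are my own):

from math import comb

def count_bsts(n):
    # f[m] = number of BSTs on m distinct keys (these are the Catalan numbers)
    f = [1] * (n + 1)  # base case f[0] = 1, the empty tree
    for m in range(1, n + 1):
        f[m] = sum(f[i] * f[m - 1 - i] for i in range(m))
    return f

def count_bsts_any_subset(n):
    # total over all non-empty subsets: Sum ( Binomial(n,k) * f(k), k=1..n )
    f = count_bsts(n)
    return sum(comb(n, k) * f[k] for k in range(1, n + 1))

print(count_bsts(3)[3])          # 5 BSTs using all 3 nodes
print(count_bsts_any_subset(3))  # 3*1 + 3*2 + 1*5 = 14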
I have read on this site that an inversion means a pair i < j with A[i] > A[j], and there are some exercises about this. I have a lot of questions, but I want to ask just one of them first, and then I will do the other exercises by myself if I can!
Exercise: Which permutation of the array (1, 2, ..., n) has the highest number of inversions? What are these inversions?
Thanks
Clearly N, ..., 2, 1 has the highest number of inversions. Every pair is an inversion. For example for N = 6, we have 6 5 4 3 2 1. The inversions are 6-5, 6-4, 6-3, 6-2, 6-1, 5-4, 5-3 and so on. Their number is N * (N - 1) / 2.
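A quick brute-force check (my own sketch) confirms the count for the reversed permutation:

def inversions(A):
    # count the pairs i < j with A[i] > A[j]
    n = len(A)
    return sum(1 for i in range(n) for j in range(i + 1, n) if A[i] > A[j])

N = 6
print(inversions(list(range(N, 0, -1))))  # 15, which is N * (N - 1) / 2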
Well, the identity permutation (1,2,...,n) has no inversions. Since an inversion is a pair of elements that are in reverse order relative to their indices, the answer probably involves some reversal of that permutation.
I have never heard the term inversion used in this way.
A decreasing array of length N, for N > 0, has N*(N-1)/2 pairs i < j with A[i] > A[j]. This is the maximum possible.