Just for fun coding - complex-numbers

For Fun!
From standard input, read two pairs of doubles and create two complex numbers from them (the first two doubles form the first complex number, the second two form the second, and the first number in each pair is the real part). Then compute and print the angle, the difference (first - second), the conjugate (of the first number), the division (first / second), power2 (the first number to the power 2), and power3 (the first number to the power 3).
double theta(): computes the angle of this complex number.
Complex minus(Complex b): returns a complex number equal to the difference of this complex number and the argument (b).
Complex conjugate(): returns the conjugate of this complex number as a new complex number.
Complex divides(Complex b): returns the result of dividing this complex number by the argument (b).
Complex power(int b): returns the result of raising this complex number to the bth power.
Complex squareRoot(): returns the result of the square root operation as a complex number.
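The method signatures above look Java-flavoured, but the arithmetic is the same in any language. Below is a minimal Python sketch of one possible implementation, not a reference solution; squareRoot is omitted, and the input format of four whitespace-separated doubles is an assumption.

import math

class Complex:
    def __init__(self, re, im):
        self.re, self.im = re, im

    def theta(self):                      # angle (argument) of this complex number
        return math.atan2(self.im, self.re)

    def minus(self, b):                   # this - b
        return Complex(self.re - b.re, self.im - b.im)

    def conjugate(self):
        return Complex(self.re, -self.im)

    def divides(self, b):                 # this / b, via multiplication by the conjugate of b
        d = b.re ** 2 + b.im ** 2
        return Complex((self.re * b.re + self.im * b.im) / d,
                       (self.im * b.re - self.re * b.im) / d)

    def power(self, n):                   # this ** n for a non-negative integer n
        result = Complex(1.0, 0.0)
        for _ in range(n):
            result = Complex(result.re * self.re - result.im * self.im,
                             result.re * self.im + result.im * self.re)
        return result

    def __repr__(self):
        return f"{self.re} + {self.im}i"

# example: read the four doubles, e.g. "1 2 3 4", from standard input
a_re, a_im, b_re, b_im = map(float, input().split())
a, b = Complex(a_re, a_im), Complex(b_re, b_im)
print(a.theta(), a.minus(b), a.conjugate(), a.divides(b), a.power(2), a.power(3))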

Related

How do I determine "Big-oh" time complexity given an Excel sheet showing various input size values vs. their corresponding run time values

I have been given a list of input sizes and their corresponding runtime values for a given algorithm A. How should I go about computing the "Big-oh" time complexity of algorithm A given these values?
Try playing around with the numbers and see if they approximately fit one of the "standard" complexity functions, e.g. n, n^2, n^3, 2^n, log(n).
For example, if the ratio between runtime and input size is nearly constant, it's likely O(n). If that ratio grows linearly with the input (i.e. doubling the input quadruples the runtime), it's O(n^2). If it grows quadratically, it's O(n^3). If adding a constant to the input size multiplies the runtime by a constant factor, it's exponential. And if the relationship is reversed (multiplying the input size only adds a constant to the runtime), it's O(log n).
If it's just slightly but consistently growing more quickly than a line, it's probably O(n log(n)).
You can also plot the graph of your values (input numbers vs runtime values) in Excel and overlay it with the graph of the function you guessed may fit, and then try to tweak the parameters (e.g. for O(n^2), plot a graph of a*x^2 + b, and tweak a and b).
To make it more precise (e.g. to calculate the uncertainty), you could apply regression analysis (search for non-linear regression analysis in Excel).
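For instance, here is a rough Python sketch of the "play around with the numbers" idea: it takes (input size, runtime) pairs (the numbers below are made up) and checks which standard function keeps the ratio runtime / f(n) roughly constant:

import math

# made-up measurements: (input size n, runtime in seconds)
data = [(1000, 0.012), (2000, 0.049), (4000, 0.198), (8000, 0.801)]

candidates = {
    "n":        lambda n: n,
    "n log n":  lambda n: n * math.log(n),
    "n^2":      lambda n: n ** 2,
    "n^3":      lambda n: n ** 3,
}

for name, f in candidates.items():
    ratios = [t / f(n) for n, t in data]
    spread = max(ratios) / min(ratios)     # close to 1.0 means a good fit
    print(f"{name:8s} ratio spread = {spread:.2f}")

For the sample data above, the n^2 ratios barely move while the others drift, which is the kind of signal to look for before doing a proper regression.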

How is the Gini-Index minimized in CART Algorithm for Decision Trees?

For neural networks for example I minimize the cost function by using the backpropagation algorithm. Is there something equivalent for the Gini Index in decision trees?
The CART algorithm always states "choose the partition of set A that minimizes the Gini index", but how do I actually get that partition mathematically?
Any input on this would be helpful :)
For a decision tree, there are different methods for splitting continuous variables like age, weight, income, etc.
A) Discretize the continuous variable to use it as a categorical variable in all aspects of the DT algorithm. This can be done:
- only once at the start, keeping this discretization static, or
- at every stage where a split is required, using percentiles, interval ranges, or clustering to bucketize the variable.
B) Split at all possible distinct values of the variable and see where there is the highest decrease in the Gini Index. This can be computationally expensive. So, there are optimized variants where you sort the variables and instead of choosing all distinct values, choose the midpoints between two consecutive values as the splits. For example, if the variable 'weight' has 70, 80, 90 and 100 kgs in the data points, try 75, 85, 95 as splits and pick the best one (highest decrease in Gini or other impurities)
That said, exactly which split algorithm is implemented in scikit-learn in Python, rpart in R, and MLlib in PySpark, and how they differ in the splitting of a continuous variable, is something I am not sure about either and am still researching.
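As an illustration of option B above, here is a minimal Python sketch (the weight values and class labels are made up) that tries the midpoints between consecutive distinct sorted values and keeps the split with the lowest weighted Gini impurity:

def gini(labels):
    """Gini impurity of a set of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(values, labels):
    """Try midpoints between consecutive distinct sorted values; return (threshold, impurity)."""
    pairs = list(zip(values, labels))
    xs = sorted(set(values))
    candidates = [(a + b) / 2 for a, b in zip(xs, xs[1:])]   # e.g. 75, 85, 95 for 70..100
    best = None
    for t in candidates:
        left = [l for v, l in pairs if v <= t]
        right = [l for v, l in pairs if v > t]
        n = len(pairs)
        score = len(left) / n * gini(left) + len(right) / n * gini(right)
        if best is None or score < best[1]:
            best = (t, score)
    return best

# hypothetical data: weight -> class
weights = [70, 80, 90, 100]
labels  = ['no', 'no', 'yes', 'yes']
print(best_split(weights, labels))   # (85.0, 0.0): a perfect split at 85 for this toy data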
Here is a good example of the CART algorithm. Basically, we get the Gini index like this:
For each attribute we have different values, each of which will have a Gini index according to the classes its records belong to. For example, if we had two classes (positive and negative), each value of an attribute will have some records that belong to the positive class and some that belong to the negative class, so we can calculate the probabilities. Say the attribute was called weather and it had two values (e.g. rainy and sunny), and we had this information:
rainy: 2 positive, 3 negative
sunny: 1 positive, 2 negative
we could say: Gini(rainy) = 1 - (2/5)^2 - (3/5)^2 = 0.48 and Gini(sunny) = 1 - (1/3)^2 - (2/3)^2 ≈ 0.444.
Then we can take the weighted sum of the Gini indexes for weather (assuming we had a total of 8 records): Gini(weather) = (5/8) * 0.48 + (3/8) * 0.444 ≈ 0.467.
We do this for all the other attributes (like we did for weather) and at the end we choose the attribute with the lowest Gini index as the one to split the tree on. We have to do all this at each split (unless we can already classify the sub-tree without further splitting).
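To make the arithmetic above concrete, here is a tiny Python sketch that reproduces the rainy/sunny numbers:

def gini(counts):
    """Gini impurity of a node given class counts, e.g. [positive, negative]."""
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

rainy = [2, 3]                 # 2 positive, 3 negative
sunny = [1, 2]                 # 1 positive, 2 negative
n = sum(rainy) + sum(sunny)    # 8 records in total

weighted = (sum(rainy) / n) * gini(rainy) + (sum(sunny) / n) * gini(sunny)
print(gini(rainy), gini(sunny), weighted)   # 0.48, 0.444..., 0.466...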

Get minimum Euclidean distance between a given vector and vectors in the database

I store 128-dimensional vectors in a PostgreSQL table as double precision[]:
create table tab (
   id integer,
   name character varying(200),
   vector double precision[]
 )
For a given vector, I need to return one record from the database with the minimum Euclidean distance between this vector and the vector in the table entry.
I have a function that computes the Euclidean distance of two vectors according to the known formula sqrt((v1[1]-v2[1])^2 + (v1[2]-v2[2])^2 + ... + (v1[128]-v2[128])^2):
CREATE OR REPLACE FUNCTION public.euclidian(
  arr1 double precision[],
  arr2 double precision[])
  RETURNS double precision AS
$BODY$
  select sqrt(SUM(tab.v)) as euclidian from (SELECT
     UNNEST(vec_sub(arr1, arr2)) as v) as tab;
$BODY$
LANGUAGE sql IMMUTABLE STRICT
Ancillary function:
CREATE OR REPLACE FUNCTION public.vec_sub(
  arr1 double precision[],
  arr2 double precision[])
RETURNS double precision[] AS
$BODY$
  SELECT array_agg(result)
    FROM (SELECT (tuple.val1 - tuple.val2) * (tuple.val1 - tuple.val2)
        AS result
        FROM (SELECT UNNEST($1) AS val1
               , UNNEST($2) AS val2
               , generate_subscripts($1, 1) AS ix) tuple
    ORDER BY ix) inn;
$BODY$
LANGUAGE sql IMMUTABLE STRICT
Query:
select tab.id as tabid, tab.name as tabname,
        euclidian('{0.1,0.2,...,0.128}', tab.vector) as eucl from tab
order by eucl ASC
limit 1
Everything works fine while I have only a few thousand records in tab. But the DB is going to grow, and I need to avoid a full scan of tab when running the query and add some kind of index search. It would be great to filter out at least 80% of the records by index; the remaining 20% can be handled by a full scan.
One direction I am currently exploring: the PostGIS extension allows searching and sorting by distance (ST_3DDistance), filtering by distance (ST_3DWithin), etc. This works great and fast using indexes. Is it possible to generalize this to N-dimensional space?
Some observations:
all coordinate values are in [-0.5, 0.5] (I do not know exactly; I think [-1.0, 1.0] are the theoretical limits)
the vectors are not normalized; the distance from (0,0,...,0) is in the range [1.2, 1.6].
This is a translated post from the Russian StackExchange.
Like @SaiBot hints at with locality-sensitive hashing (LSH), there are plenty of researched techniques that allow you to run approximate nearest neighbors (ANN) searches. You have to accept a speed/accuracy tradeoff, but this is reasonable for most production scenarios, since a brute-force approach to finding the exact neighbors tends to be computationally prohibitive.
This article is an excellent overview of current state-of-the-art algorithms along with their pros and cons. Below, I've linked several popular open-source implementations; all three have Python bindings:
Facebook FAISS
Spotify Annoy
Google ScaNN
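As a small illustration, here is a minimal sketch using Annoy (the dimension, number of trees, and random placeholder vectors are just assumptions for the example; check the library's documentation for the current API):

# pip install annoy -- a rough sketch, not production code
import random
from annoy import AnnoyIndex

dim = 128
index = AnnoyIndex(dim, 'euclidean')   # metric matches the Euclidean distance used above

# add the database vectors (random placeholders standing in for tab.vector)
vectors = [[random.uniform(-0.5, 0.5) for _ in range(dim)] for _ in range(10000)]
for i, v in enumerate(vectors):
    index.add_item(i, v)

index.build(10)                        # 10 trees: more trees = better accuracy, larger index

query = [random.uniform(-0.5, 0.5) for _ in range(dim)]
ids, dists = index.get_nns_by_vector(query, 1, include_distances=True)
print(ids[0], dists[0])                # id and distance of the (approximate) nearest vector

You would map the returned item id back to the row id in tab; the index itself lives outside PostgreSQL.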
With 128-dimensional data and the constraint of staying inside PostgreSQL, you will have no choice but to apply a full scan for each query.
Even highly optimized index structures for indexing high-dimensional data like the X-Tree or the IQ-Tree will have problems with that many dimensions and usually offer no big benefit over the pure scan.
The main issue here is the curse of dimensionality, which makes index structures degenerate above roughly 20 dimensions.
Newer work thus considers the problem of approximate nearest neighbor search, since in a lot of applications with this many dimensions it is sufficient to find a good answer, rather than the best one. Locality Sensitive Hashing is among these approaches.
Note: Even if an index structure is able to filter out 80% of the records, you will have to access the remaining 20% of the records by random access operations from disk (making the application I/O bound), which will be even slower than reading all the data in one scan and computing the distances.
You could look at computational geometry, which is a field dedicated to efficient algorithms. There is generally a tradeoff between the amount of data stored and the efficiency of an algorithm, so by storing extra data we can reduce query time. In particular, we are looking at a nearest neighbour search; the algorithm below uses a form of space partitioning.
Let's consider the 3D case. Since the distances from the origin lie in a narrow range, the vectors look like they are clustered around a fuzzy sphere. Divide space into 8 sub-cubes (octants) depending on the sign of each coordinate, and label these +++, ++-, etc. We can work out the minimum distance from the test vector to a vector in each cube.
Say our test vector is (0.4, 0.5, 0.6). The minimum distance from that to the +++ cube is zero. The minimum distance to the -++ cube is 0.4, as the closest vector in the -++ cube would be something like (-0.0001, 0.5, 0.6). Likewise, the minimum distance to +-+ is 0.5, to ++- is 0.6, to --+ is sqrt(0.4^2 + 0.5^2), etc.
The algorithm then becomes: first search the cube the test vector is in and find the minimum distance to all the vectors in that cube. If that distance is smaller than the minimum distance to the other cubes, then we are done. If not, search the next closest cube, until no vector in any other cube could be closer.
If we were to implement this in a database, we would compute a key for each vector. In 3D this is a 3-bit integer with a 0 or 1 in each bit depending on the sign of the corresponding coordinate (+ gives 0, - gives 1). So first select WHERE key = 000, then WHERE key = 100, etc.
You can think of this as a type of hash function which has been specifically designed to make finding close points easy. I think this is called locality-sensitive hashing.
The high dimensionality of your data makes things much trickier. With 128 dimensions, just using the signs of the coordinates gives 2^128 = 3.4E+38 possibilities. That is far too many hash buckets, and some form of dimension reduction is needed.
You might be able to choose k points and partition space according to which of those each vector is closest to.
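Here is a minimal Python sketch of the sign-based bucketing described above for the 3D case (the in-memory list of vectors is a stand-in for the database table; in practice the key would be stored as an extra indexed column):

import math
from itertools import product

def sign_key(vec):
    """3-bit key: 0 for a non-negative coordinate, 1 for a negative one."""
    return tuple(0 if x >= 0 else 1 for x in vec)

def min_dist_to_octant(vec, key):
    """Lower bound on the distance from vec to any point in the octant 'key'."""
    # only coordinates whose sign disagrees with the octant contribute
    return math.sqrt(sum(x * x for x, bit in zip(vec, key)
                         if (x >= 0) == (bit == 1)))

def nearest(query, vectors):
    """Search octants in order of their lower bound, stopping early when possible."""
    best, best_d = None, float("inf")
    octants = sorted(product((0, 1), repeat=len(query)),
                     key=lambda k: min_dist_to_octant(query, k))
    for key in octants:
        if min_dist_to_octant(query, key) >= best_d:
            break  # no remaining octant can contain a closer vector
        for v in vectors:
            if sign_key(v) == key:
                d = math.dist(query, v)
                if d < best_d:
                    best, best_d = v, d
    return best, best_d

# toy usage with the example vector from the text
vectors = [(-0.0001, 0.5, 0.6), (0.3, 0.4, 0.5), (-0.2, -0.3, 0.1)]
print(nearest((0.4, 0.5, 0.6), vectors))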

How to make a start on the "crackless wall" problem

Here's the problem statement:
Consider the problem of building a wall out of 2x1 and 3x1 bricks (horizontal×vertical dimensions) such that, for extra strength, the gaps between horizontally-adjacent bricks never line up in consecutive layers, i.e. never form a "running crack".
There are eight ways of forming a crack-free 9x3 wall, written W(9,3) = 8.
Calculate W(32,10). < Generalize it to W(x,y) >
http://www.careercup.com/question?id=67814&form=comments
The above link gives a few solutions, but I'm unable to understand the logic behind them. I'm trying to code this in Perl, and this is what I have so far:
input : W(x,y)
find all possible i's and j's such that x == 3(i) + 2(j);
for each pair (i,j) ,
find n = (i+j)C(j) # C:combinations
Adding all these n's should give the count of all possible rows of width x. But I have no idea how to generate the actual row arrangements, or how to proceed from single rows to whole walls.
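For reference, here is how that counting step could look in Python rather than Perl, just to make the approach concrete; it counts the tilings of a single row only and ignores the crack constraint:

from math import comb

def count_rows(x):
    """Number of ways to tile a single 1-high row of width x with 2x1 and 3x1 bricks."""
    total = 0
    for i in range(x // 3 + 1):          # i = number of 3-bricks
        rest = x - 3 * i
        if rest % 2 == 0:                # j = number of 2-bricks
            j = rest // 2
            total += comb(i + j, j)      # arrangements of i threes and j twos
    return total

print(count_rows(9))   # 5 distinct rows of width 9: 3+3+3 plus the 4 orderings of 3+2+2+2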
Based on the claim that W(9,3)=8, I'm inferring that a "running crack" means any continuous vertical crack of height two or more. Before addressing the two-dimensional problem as posed, I want to discuss an analogous one-dimensional problem and its solution. I hope this will make it more clear how the two-dimensional problem is thought of as one-dimensional and eventually solved.
Suppose you want to count the number of lists of length, say, 40, whose symbols come from a reasonably small set of, say, the five symbols {a,b,c,d,e}. Certainly there are 5^40 such lists. If we add an additional constraint that no letter can appear twice in a row, the mathematical solution is still easy: There are 5*4^39 lists without repeated characters. If, however, we instead wish to outlaw consonant combinations such as bc, cb, bd, etc., then things are more difficult. Of course we would like to count the number of ways to choose the first character, the second, etc., and multiply, but the number of ways to choose the second character depends on the choice of the first, and so on. This new problem is difficult enough to illustrate the right technique. (though not difficult enough to make it completely resistant to mathematical methods!)
To solve the problem of lists of length 40 without consonant combinations (let's call this f(40)), we might imagine using recursion. Can you calculate f(40) in terms of f(39)? No, because some of the lists of length 39 end with consonants and some end with vowels, and we don't know how many of each type we have. So instead of computing f(n) for each length n<=40, we compute, for each n and for each character k, f(n,k), the number of lists of length n ending with k. Although f(40) cannot be calculated from f(39) alone, f(40,a) can be calculated in terms of f(39,a), f(39,b), etc.
The strategy described above can be used to solve your two-dimensional problem. Instead of characters, you have entire horizontal brick-rows of length 32 (or x). Instead of 40, you have 10 (or y). Instead of a no-consonant-combinations constraint, you have the no-adjacent-cracks constraint.
You specifically ask how to enumerate all the brick-rows of a given length, and you're right that this is necessary, at least for this approach. First, decide how a row will be represented. Clearly it suffices to specify the locations of the 3-bricks, and since each has a well-defined center, it seems natural to give a list of locations of the centers of the 3-bricks. For example, with a wall length of 15, the sequence (1,8,11) would describe a row like this: (ooo|oo|oo|ooo|ooo|oo). This list must satisfy some natural constraints:
The initial and final positions cannot be the centers of a 3-brick. Above, 0 and 14 are invalid entries.
Consecutive differences between numbers in the sequence must be odd, and at least three.
The position of the first entry must be odd.
The difference between the last entry and the final position of the wall (the wall length minus one; 14 above) must also be odd.
There are various ways to compute and store all such lists, but the conceptually easiest is a recursion on the length of the wall, ignoring condition 4 until you're done. Generate a table of all lists for walls of length 2, 3, and 4 manually, then for each n, deduce a table of all lists describing walls of length n from the previous values. Impose condition 4 when you're finished, because it doesn't play nice with recursion.
You'll also need a way, given any brick-row S, to quickly describe all brick-rows S' which can legally lie beneath it. For simplicity, let's assume the length of the wall is 32. A little thought should convince you that
S' must satisfy the same constraints as S, above.
1 is in S' if and only if 1 is not in S.
30 is in S' if and only if 30 is not in S.
For each entry q in S, S' must have a corresponding entry q+1 or q-1, and conversely every element of S' must be q-1 or q+1 for some element q in S.
For example, the list (1,8,11) can legally be placed on top of (7,10,30), (7,12,30), or (9,12,30), but not (9,10,30) since this doesn't satisfy the "at least three" condition. Based on this description, it's not hard to write a loop which calculates the possible successors of a given row.
Now we put everything together:
First, for fixed x, make a table of all legal rows of length x. Next, write a function W(y,S), which is to calculate (recursively) the number of walls of width x, height y, and top row S. For y=1, W(y,S)=1. Otherwise, W(y,S) is the sum over all S' which can be related to S as above, of the values W(y-1,S').
This solution is efficient enough to solve the problem W(32,10), but would fail for large x. For example, W(100,10) would almost certainly be infeasible to calculate as I've described. If x were large but y were small, we would break all sensible brick-laying conventions and consider the wall as being built up from left-to-right instead of bottom-to-top. This would require a description of a valid column of the wall. For example, a column description could be a list whose length is the height of the wall and whose entries are among five symbols, representing "first square of a 2x1 brick", "second square of a 2x1 brick", "first square of a 3x1 brick", etc. Of course there would be constraints on each column description and constraints describing the relationship between consecutive columns, but the same approach as above would work this way as well, and would be more appropriate for long, short walls.
I found this Python code online here, and it works fast and correctly. I do not fully understand how it works, though. I got my C++ version to the last step (counting the total number of solutions) but could not get it to work correctly.
def brickwall(w,h):
    # generate single brick layer of width w (by recursion)
    def gen_layers(w):
        if w in (0,1,2,3):
            return {0:[], 1:[], 2:[[2]], 3:[[3]]}[w]
        return [(layer + [2]) for layer in gen_layers(w-2)] + \
               [(layer + [3]) for layer in gen_layers(w-3)]

    # precompute info about whether pairs of layers are compatible
    def gen_conflict_mat(layers, nlayers, w):
        # precompute internal brick positions for easy comparison
        def get_internal_positions(layer, w):
            acc = 0; intpos = set()
            for brick in layer:
                acc += brick; intpos.add(acc)
            intpos.remove(w)
            return intpos
        intpos = [get_internal_positions(layer, w) for layer in layers]
        mat = []
        for i in range(nlayers):
            mat.append([j for j in range(nlayers)
                        if intpos[i].isdisjoint(intpos[j])])
        return mat

    layers = gen_layers(w)
    nlayers = len(layers)
    mat = gen_conflict_mat(layers, nlayers, w)

    # dynamic programming to recursively compute wall counts
    nwalls = nlayers*[1]
    for i in range(1,h):
        nwalls = [sum(nwalls[k] for k in mat[j]) for j in range(nlayers)]
    return sum(nwalls)

print(brickwall(9,3))    # 8
print(brickwall(9,4))    # 10
print(brickwall(18,5))   # 7958
print(brickwall(32,10))  # 806844323190414

Difference between Logarithmic and Uniform cost criteria

I have some trouble understanding the difference between the logarithmic (LCC) and uniform (UCC) cost criteria, and also how to use them in calculations.
Could someone please explain the difference between the two and perhaps show how to calculate the complexity for a problem like A+B*C?
(Yes this is part of an assignment =) )
Thx for any help!
/Marthin
Uniform cost criteria assign a constant cost to every machine operation regardless of the number of bits involved, while logarithmic cost criteria assign every machine operation a cost proportional to the number of bits involved.
Problem size influences complexity. Since complexity depends on the size of the problem, we define complexity to be a function of problem size.
Definition: Let T(n) denote the complexity for an algorithm that is applied to a problem of size n.
Under logarithmic cost criteria, the size n of a problem instance I is the number of (binary) bits used to represent the instance, so problem size is the length of the binary description of the instance.
Under unit (uniform) cost criteria, if you assume that:
- every computer instruction takes one time unit,
- every register is one storage unit,
- and a number always fits in a register,
then you can use the number of inputs as the problem size, since the length of the input (in bits) will be a constant times the number of inputs.
Uniform cost criteria assume that every instruction takes a single unit of time and that every register requires a single unit of space.
Logarithmic cost criteria assume that every instruction takes a logarithmic number of time units (with respect to the length of the operands) and that every register requires a logarithmic number of units of space.
In simpler terms, what this means is that uniform cost criteria count the number of operations, and logarithmic cost criteria count the number of bit operations.
For example, suppose we have an 8-bit adder.
If we're using uniform cost criteria to analyze the run-time of the adder, we would say that addition takes a single time unit; i.e., T(N)=1.
If we're using logarithmic cost criteria to analyze the run-time of the adder, we would say that addition takes lg n time units, i.e. T(n) = lg n, where n is the largest number we might have to add (in this example, n would be 256). Thus T(n) = lg 256 = 8.
More specifically, say we're adding 255 and 32. To perform the addition, we have to add the binary bits together in the 1s column, the 2s column, the 4s column, etc. (the columns being the bit positions). The number 255 requires 8 bits; this is where the logarithm comes into our analysis: lg 256 = 8. So to add the two numbers, we have to perform addition on 8 columns. Logarithmic cost criteria say that each of these 8 single-bit additions takes a single unit of time. Uniform cost criteria say that the entire set of 8 additions takes a single unit of time.
A similar analysis can be made in terms of space as well. Registers either take up a constant amount of space (under uniform cost criteria) or a logarithmic amount of space (under logarithmic cost criteria).
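To tie this back to the A+B*C expression from the question: under uniform cost it is simply two operations (one multiplication, one addition), while under logarithmic cost each operation is charged roughly the bit lengths of its operands. A small Python sketch of that accounting follows; the exact charging convention varies between textbooks, so treat this as an illustration rather than the definitive definition:

def bits(n):
    """l(n): number of bits needed to represent n (with l(0) taken as 1)."""
    return max(n.bit_length(), 1)

def cost_of_a_plus_b_times_c(a, b, c):
    uniform = 2                               # one multiplication + one addition, 1 unit each
    log_mul = bits(b) + bits(c)               # charge the operand lengths for b*c
    log_add = bits(a) + bits(b * c)           # then charge the operand lengths for a + (b*c)
    return uniform, log_mul + log_add

print(cost_of_a_plus_b_times_c(5, 1000, 1000))   # uniform stays 2; logarithmic grows with operand size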
I think you should do some research on Big O notation... http://en.wikipedia.org/wiki/Big_O_notation#Orders_of_common_functions
If there is a part of the description you find difficult, edit your question.