I am currently developing a hierarchical Bayesian model in OpenBUGS that involves many binomial processes (about 6000 sites). It describes successive-removal electric-fishing events/passes, and the general structure is as follows:
for (i in 1:n_sites){
  d[i] ~ dgamma(0.01, 0.01)
  N_tot[i] <- d[i] * S[i]
  N[i,1] <- N_tot[i]
  for (j in 2:n_pass[i]){
    N[i,j] <- N_tot[i] - sum(C[i,1:(j-1)])
  }
  for (j in 1:n_pass[i]){
    lp[i,j] ~ dnorm(0, 0.001)
    logit(p[i,j]) <- lp[i,j]
    C[i,j] ~ dbin(p[i,j], N[i,j])
  }
}
where n_sites is the total number of sites I'm looking at, n_pass[i] is the number of fishing passes carried out in site i, N[i,j] is the number of fish in site i when doing pass j, N_tot[i] is the total number of fish in site i before any pass (the product of the density at the site, d[i], and the surface of the site, S[i]; the surface is known), C[i,j] is the number of fish caught in site i during pass j, and p[i,j] is the probability of capture in site i for pass j.
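To make the data-generating process concrete, here is a minimal Python sketch (separate from the BUGS model itself) of the successive-removal process described above; the fish count, capture probability, and number of passes are made-up illustrative values:

```python
import random

def simulate_removal(n_total, p_capture, n_pass, rng=None):
    """Simulate successive-removal sampling: each pass catches a
    binomial draw from the fish still remaining in the site."""
    rng = rng or random.Random(42)
    remaining = n_total
    catches = []
    for _ in range(n_pass):
        caught = sum(rng.random() < p_capture for _ in range(remaining))
        catches.append(caught)
        remaining -= caught
    return catches

# Example: a site with 50 fish, capture probability 0.4, 3 passes.
catches = simulate_removal(50, 0.4, 3)
```

Each pass catches roughly p_capture of whatever remains, which is exactly the C[i,j] ~ dbin(p[i,j], N[i,j]) structure in the model.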
Each site has on average 3 fishing passes, which means many successive binomial processes, and that typically takes a long time to compute and converge.
I can't approximate the binomial process because the catches are typically small.
So I'm a bit stuck, and I'm looking for suggestions/alternatives to deal with this issue.
Thanks in advance
edit history:
15-11-2016: added prior definitions for d and p following @M_Fidino's request for clarification
The "Traveling Salesman Problem" is a problem where a person has to travel between "n" cities - but choose the itinerary such that:
Each city is visited only once
The total distance traveled is minimized
I have heard that if a modern computer were to solve this problem using "brute force" (i.e. an exact solution) and there are more than 15 cities, the time taken by the computer will exceed a hundred years!
I am interested in understanding how we estimate the amount of time it will take a computer to solve the Traveling Salesman Problem (using "brute force") as the number of cities increases. For instance, see the following reference: https://www.sciencedirect.com/topics/earth-and-planetary-sciences/traveling-salesman-problem
My Question: Is there some formula we can use to estimate the amount of time it will take a computer to solve the Traveling Salesman Problem using "brute force"? For example:
N cities = N! paths
Each of these N! paths will require "N" calculations
Thus, N * N! calculations would be required for the computer to check all paths and be certain that the shortest path has been found. If we know the time each calculation takes, perhaps we could estimate the total run time as "time per calculation * N * N!".
But I am not sure if this factors in the time to "store and compare" calculations.
Can someone please explain this?
I have heard that if a modern computer were to solve this problem using "brute force" (i.e. an exact solution) and there are more than 15 cities, the time taken by the computer will exceed a hundred years!
This is not completely true. While the naive brute-force algorithm runs in O(n!) time, a much better algorithm using dynamic programming runs in O(n^2 2^n). Just to give you an idea, with n=20, n! ≃ 2.4e18 while n^2 2^n ≃ 4.2e8. The former is too huge to be practicable, while the second could be OK, although it could still take a pretty long time on a PC (one should keep in mind that both complexities hide a constant factor that plays an important role in computing a realistic execution time). I used an optimized dynamic-programming solution based on the Held–Karp algorithm to compute the TSP of 20 cities on my machine in a relatively reasonable time (i.e. no more than a few minutes of computation).
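For reference, the Held–Karp recursion can be sketched compactly in Python; this is an illustrative O(n^2 2^n) implementation using a bitmask over visited cities, not the optimized version mentioned above:

```python
from itertools import combinations

def held_karp(dist):
    """Exact TSP via Held-Karp dynamic programming: O(n^2 * 2^n)
    instead of the O(n!) of naive brute force."""
    n = len(dist)
    # best[(mask, j)] = shortest path from city 0 through the set of
    # cities encoded in `mask`, ending at city j.
    best = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            mask = 0
            for j in subset:
                mask |= 1 << j
            for j in subset:
                prev = mask & ~(1 << j)
                best[(mask, j)] = min(
                    best[(prev, k)] + dist[k][j]
                    for k in subset if k != j)
    full = (1 << n) - 2  # every city except the start, city 0
    return min(best[(full, j)] + dist[j][0] for j in range(1, n))
```

The memory is the real limit here (O(n 2^n) table entries), which is why around n = 20-25 is the practical ceiling for this exact approach.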
Note that in practice, heuristics are used to speed up the computation drastically, often at the expense of a sub-optimal solution. Some algorithms can provide a good result in a very short time compared to the exact algorithms above (polynomial algorithms with a relatively small exponent), with a fixed bound on the quality of the result (for example, the distance found cannot be more than 2 times the optimal solution). In the end, heuristics can often find very good results in a reasonable time. One simple heuristic is to avoid crossing segments, assuming a Euclidean distance is used (AFAIK a solution with crossing segments is always sub-optimal).
My Question: Is there some formula we can use to estimate the amount of time it will take a computer to solve the Travelling Salesman Problem using "brute force"?
Since the naive algorithm is compute-bound and quite simple, you can make such an approximation based on the running-time complexity. But to get a relatively precise approximation of the execution time, you need a calibration, since not all processors and implementations behave the same way. You can assume the running time is C n! and find the value of C experimentally by measuring the computation time of a practical brute-force implementation. Another approach is to derive C theoretically from low-level architectural properties (e.g. frequency, number of cores used, etc.) of the target processor. The former is much more precise, assuming the benchmark is done properly and the number of measured points is big enough; moreover, the second method requires a pretty good understanding of how modern processors work.
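A sketch of the experimental calibration: time a naive brute force at a small n, estimate C = t / n!, and extrapolate. The instance below (cities on a line) and the choice n = 9 are illustrative, and the resulting C is of course machine-dependent:

```python
import time
from itertools import permutations
from math import factorial

def brute_force_tsp(dist):
    """Naive O(n!) brute force: try every tour that starts at city 0."""
    n = len(dist)
    best = float("inf")
    for perm in permutations(range(1, n)):
        length = dist[0][perm[0]] + dist[perm[-1]][0]
        for a, b in zip(perm, perm[1:]):
            length += dist[a][b]
        best = min(best, length)
    return best

# Calibrate the hidden constant in t ≈ C * n! by timing a small
# instance (cities on a line, so the optimum is easy to verify).
n = 9
dist = [[abs(i - j) for j in range(n)] for i in range(n)]
start = time.perf_counter()
best_len = brute_force_tsp(dist)
elapsed = time.perf_counter() - start
C = elapsed / factorial(n)
predicted_12 = C * factorial(12)  # rough runtime estimate for n = 12
```

Timing several values of n and fitting C over all of them would give a more robust estimate than a single measurement.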
Numerically, assuming a running time t ≃ C n!, we can say that ln t ≃ ln(C n!) ≃ ln C + ln(n!). By Stirling's approximation, ln(n!) ≃ n ln n - n + O(ln n), so ln C ≃ ln t - n ln n + n, up to an O(ln n) term, and finally C ≃ exp(ln t - n ln n + n). That being said, Stirling's approximation may not be precise enough; computing ln(n!) exactly via the log-gamma function (a generalization of the factorial) gives a much better value for C, and a binary search on the inverse of the gamma function answers the reverse question of which n fits a given time budget.
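A sketch of this log-space calculation, using the log-gamma function for an exact ln(n!) instead of Stirling's approximation (the sample C and n below are made up):

```python
from math import exp, factorial, lgamma, log, pi

def ln_factorial(n):
    # ln(n!) via the log-gamma function: exact and overflow-free
    return lgamma(n + 1)

def ln_factorial_stirling(n):
    # Leading terms of Stirling's approximation to ln(n!)
    return n * log(n) - n + 0.5 * log(2 * pi * n)

def calibration_constant(n, t):
    # Recover C from a measured runtime t ≈ C * n!, working in log
    # space so that huge values of n! never overflow a float.
    return exp(log(t) - ln_factorial(n))

# Sanity check: a synthetic runtime built from a known C gives C back.
C = calibration_constant(20, 2e-9 * factorial(20))
```

Working with lgamma rather than factorial also lets you plug in n in the hundreds, where n! itself is far beyond float range.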
Each of these N! paths will require "N" calculations
Well, a slightly optimized brute-force algorithm does not need to perform N calculations per path, as partial path lengths can be precomputed. The innermost loops just need to read the precomputed sums from a small array that should fit in the L1 cache (so a read/store takes no more than a few cycles of latency).
I'm working on implementing a model in Python. As part of this model, I have a set of agents (e.g. humans) that need to visit a set of targets (e.g. places). Each agent has its own initial location (i.e. starting point) and I can calculate the distance from each agent to each target.
What I need at this point is to allocate a first job to each agent in a way that the sum of all travel distances for agents from their starting location to their first job is minimum.
I considered a greedy algorithm, but I found examples proving that the order of allocation can lead to non-optimal solutions. I also looked into the nearest-neighbour algorithm for the TSP, but everything I could find was for one agent (salesman), not multiple.
Could someone point me to any (non-exhaustive-search) algorithm/approach that could be used for this purpose, please? Thanks
If the number of agents = number of targets, we end up with a standard assignment problem. This can be solved in different ways:
as an LP (linear programming) problem. Technically it is a MIP, but the assignment constraints make the LP relaxation integral (the constraint matrix is totally unimodular), so the variables come out integer-valued automatically and an LP solver suffices.
as a network problem
or using specialized algorithms.
If, say, the number of locations > number of agents, we still can use an LP/MIP:
min sum((i,j), d(i,j)*x(i,j))
sum(j, x(i,j)) = 1 for all agents i (each agent should be assigned to exactly one location)
sum(i, x(i,j)) <= 1 for all locations j (each location should be assigned to at most one agent)
x(i,j) in {0,1}
For the network approach, we would need to add some dummy nodes.
All these methods are quite fast (this is an easy model). To give you an indication: I solved a random example with 500 agents and 1000 locations as an LP and it took 0.3 seconds.
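As a concrete illustration of the rectangular case (more locations than agents), SciPy's linear_sum_assignment solves this directly; the distance matrix below is made up:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Distances from 3 agents (rows) to 5 candidate first jobs (columns);
# the numbers are invented for illustration.
d = np.array([
    [4.0, 1.0, 3.0, 2.0, 9.0],
    [2.0, 0.5, 5.0, 8.0, 1.0],
    [3.0, 6.0, 2.0, 7.0, 4.0],
])

# Rectangular matrices are fine: each agent gets a distinct location,
# and the surplus locations are simply left unassigned.
agents, locations = linear_sum_assignment(d)
total = d[agents, locations].sum()
```

This specialized algorithm is typically even faster than the LP route for pure assignment structures of this size.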
This is my first data science project and I need to select some data. Of course, I know that I cannot just select all the data available, because this will result in overfitting. I am currently investigating house prices in the capital of Denmark for the past 10 years, and I wanted to know which type of houses I should select in my data:
Owner-occupied flats and houses (This gives a dataset of 50000 elements)
Or just owner-occupied flats (this gives a dataset of 43000 elements)
So as you can see there are a lot more owner-occupied flats sold in the capital of Denmark. My opinion is that I should select just the owner-occupied flats because then I have the "same" kind of elements in my data and still have 43000 elements.
Also, there are a lot higher taxes involved if you own a house rather than owning an owner-occupied flat. This might affect the price of the house and skew the data a little bit.
I have seen a few projects where both owner-occupied flats and houses are selected for the data and the conclusion was overfitting, so that is what I am looking to avoid.
This is a classic example of over-fitting due to a lack of data, or more precisely an imbalanced dataset.
Let me explain the selection process used to resolve this kind of problem. I will use the example of credit card fraud and then relate it to your question (or any future prediction problem with classified data).
In the real world, credit card fraud is not that common, so if you look at real data you may find that only 2% of the transactions resulted in fraud. If you train a model with such a dataset it will be biased, as you don't have a balanced distribution of the classes (i.e. fraud and non-fraud transactions; in your case, owner-occupied flats and houses). There are 4 ways to tackle this issue.
Let's suppose the dataset has 90 non-fraud data points and 10 fraud data points.
1. Under sampling majority class
Here we select just 10 of the 90 non-fraud points and train the model on a 10:10 split, so the distribution is balanced (in your case, using only 7,000 of the 43,000 flats). This is not ideal, as we would be throwing out a huge amount of data.
2. Over sampling minority class by duplication
Here we duplicate the 10 fraud points until we have 90, so the distribution is balanced (in your case, duplicating the 7,000 house records until they match the 43,000 flats). While this works, there is a better way.
3. Over sampling minority class by SMOTE (recommended)
Synthetic Minority Over-sampling Technique uses the k-nearest-neighbours algorithm to generate synthetic samples of the minority class (in your case, the housing data). The imbalanced-learn module can be used to implement this.
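For illustration, here is a stripped-down SMOTE sketch in plain NumPy; the real imbalanced-learn implementation handles many more details, so treat this only as a picture of the idea:

```python
import numpy as np

def smote(X_minority, n_new, k=5, rng=None):
    """Minimal SMOTE sketch: pick a random minority point, find its k
    nearest minority neighbours, and interpolate a random fraction of
    the way towards one of them to create a synthetic sample."""
    rng = rng or np.random.default_rng(0)
    X = np.asarray(X_minority, dtype=float)
    new = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        # distances from point i to every other minority point
        dists = np.linalg.norm(X - X[i], axis=1)
        neighbours = np.argsort(dists)[1:k + 1]  # skip the point itself
        j = rng.choice(neighbours)
        lam = rng.random()
        new.append(X[i] + lam * (X[j] - X[i]))
    return np.vstack(new)
```

In practice you would simply call imbalanced-learn's SMOTE via fit_resample rather than rolling your own.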
4. Ensemble Method
In this method you divide your data into multiple balanced datasets: for example, split the 90 non-fraud points into 9 sets of 10, so that each set pairs 10 non-fraud points with the 10 fraud points (in your case, dividing the 43,000 flats into batches of about 7,000, each combined with the 7,000 houses). You then train a model on each set separately and use a majority-vote mechanism to predict.
So now I have created the following diagram. The green line shows the price per square metre of owner-occupied flats and the red line shows the price per square metre of houses (all prices in DKK). I was wondering whether there is imbalanced classification here? The maximum deviation of the prices is at most 10% (see for example 2018). Is 10% enough to say that the data is biased and therefore imbalanced?
I have to optimize my parameters in order to get the highest energy consumption. I think there is no need to explain the physical phenomenon I'm studying, but the important information is: I have two variables, the frequency F and the magnitude A. The energy consumption Y is not calculated through an equation but with a complex simulation in Ansys. From Ansys I can get the energy Y for every frequency and magnitude combination I choose. Is there a deep learning technique that allows me to use some variable combinations and their output energy as training data to create a network that would predict the output energy for every other parameter combination?
Ideas are welcome …
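One common approach here is a surrogate model: train a small feed-forward network on (F, A) → Y pairs from the simulator. A minimal sketch with scikit-learn's MLPRegressor follows; fake_simulation and all the numbers are invented stand-ins for actual Ansys runs:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Made-up stand-in for the Ansys simulation: in reality, each (F, A)
# pair below would be one actual simulation run.
def fake_simulation(F, A):
    return A**2 * np.sin(F) + 0.5 * F

rng = np.random.default_rng(0)
F = rng.uniform(0.0, 3.0, 200)   # sampled frequencies (arbitrary units)
A = rng.uniform(0.0, 2.0, 200)   # sampled magnitudes
X = np.column_stack([F, A])
y = fake_simulation(F, A)        # "simulated" energies for training

# Small feed-forward network as a surrogate for Y = f(F, A);
# lbfgs converges well on small datasets like this one.
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), solver="lbfgs",
                         max_iter=5000, random_state=0).fit(X, y)
y_pred = surrogate.predict(X)
```

Once trained, a grid search or optimizer over (F, A) can look for the maximum predicted Y far more cheaply than running Ansys per candidate; a Gaussian process (Bayesian optimisation) is a common alternative when each simulation run is very expensive.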
Consider a problem whose solution maximizes an objective function.
Problem: from 500 elements, 15 need to be selected (a candidate solution). The value of the objective function depends on the pairwise relationships between the elements in a candidate solution, and on some other factors.
The steps for solving such a problem are described here:
1. Generate a set of candidate solutions (the population) in a guided random manner // not purely random: a direction is given to generate the population
2. Evaluate the objective function for the current population
3. If the current_best_solution exceeds the global_best_solution, then replace the global_best with current_best
4. Repeat steps 1,2,3 for N (arbitrary number) times
where the size of the population and N are both small (approx. 50)
After N iterations, the algorithm returns the candidate solution stored in global_best_solution.
Is this the description of a well-known algorithm?
If it is, what is the name of that algorithm? If not, under which category do these types of algorithms fall?
What you have sounds like you are just fishing. Note that you might as well get rid of steps 3 and 4 since running the loop 100 times would be the same as doing it once with an initial population 100 times as large.
If you think of the objective function as a random variable which is a function of random decision variables then what you are doing would e.g. give you something in the 99.9th percentile with very high probability -- but there is no limit to how far the optimum might be from the 99.9th percentile.
To illustrate the difficulty, consider the following sort of Travelling Salesman Problem. Imagine two clusters of points A and B, each of which has 100 points. Within the clusters, each point is arbitrarily close to every other point (e.g. 0.0000001). But between the clusters the distance is, say, 1,000,000. The optimal tour would clearly have length 2,000,000 (plus a negligible amount). A random tour is just a random permutation of those 200 decision points. Getting an optimal or near-optimal tour would be akin to shuffling a deck of 200 cards with 100 red and 100 black and having all of the red cards end up in a block (counting blocks that "wrap around") -- vanishingly unlikely (it can be calculated as 99 * 100! * 100! / 200! = 1.09 x 10^-57). Even if you generate quadrillions of tours, it is overwhelmingly likely that each of those tours would be off by millions. This is a min problem, but it is also easy to come up with max problems where it is vanishingly unlikely that you will get a near-optimal solution by purely random settings of the decision variables.
This is an extreme example, but it is enough to show that purely random fishing for a solution isn't very reliable. It would make more sense to use evolutionary algorithms or other heuristics such as simulated annealing or tabu search.
Why do you work with a population if the members of that population do not interact?
What you have there is random search.
If you add mutation, it looks like an Evolution Strategy: https://en.wikipedia.org/wiki/Evolution_strategy
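A minimal (1+1) Evolution Strategy sketch along those lines (mutation only, no recombination), minimising a toy function; the step size, iteration count, and starting point are all illustrative:

```python
import random

def one_plus_one_es(f, x0, sigma=0.5, iters=2000, rng=None):
    """(1+1) Evolution Strategy sketch: mutate the current best with
    Gaussian noise and keep the mutant only if it improves f."""
    rng = rng or random.Random(0)
    best = list(x0)
    best_val = f(best)
    for _ in range(iters):
        cand = [x + rng.gauss(0, sigma) for x in best]
        val = f(cand)
        if val < best_val:  # minimisation; flip the comparison to maximise
            best, best_val = cand, val
    return best, best_val

# Minimise the sphere function from a poor starting point.
sol, val = one_plus_one_es(lambda x: sum(v * v for v in x), [5.0, -5.0])
```

Unlike pure random search, each sample is centred on the best point found so far, so the search concentrates effort near promising regions; practical ES variants also adapt sigma as the run progresses.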