I have a process built in Excel and am trying to transfer it over to SQL for efficiency purposes. I have figured much of it out, including random number generation and pieces involving exponential distributions.
The one piece I am unable to figure out in SQL is how to get the inverse of the lognormal distribution.
This is how the formula looks in Excel:
LOGNORM.INV(p, mu, sigma)
I have the equivalent to p, mu and sigma set up in my data and am trying to find the SQL equivalent (or workaround) to the lognorm.inv() Excel function.
Has anyone done something like this before?
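For what it's worth, LOGNORM.INV(p, mu, sigma) is mathematically just EXP(mu + sigma * NORM.S.INV(p)), so the problem reduces to finding an inverse standard normal CDF. Most SQL dialects don't ship one, so this sketch assumes a user-defined helper (dbo.NormSInv is a hypothetical name, e.g. implementing Acklam's approximation):

-- LOGNORM.INV(p, mu, sigma) = EXP(mu + sigma * NORM.S.INV(p))
-- dbo.NormSInv is a hypothetical user-defined inverse standard normal CDF
SELECT EXP(mu + sigma * dbo.NormSInv(p)) AS lognorm_inv
FROM dbo.MyParams;  -- hypothetical table holding the p, mu, sigma columns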
This requires some more advanced MDX knowledge than I have.
I need to get the RepoRate_MAX for repo products, at book and instrument level; looking at the Java code I'm replacing, that code always uses the max MurexId.
How can I perform the below? (I've placed MAX here on the dimension, but this is wrong.) I need the combination of the dimensions and also the MAX MurexId:
[Measures].[RepoRate_VAL] = (([Deal].[ProductType].&[REPO],[Deal].[Book],[Deal].[Instrument],MAX([Deal].[MurexId])),[Measures].[RepoRate_MAX])
I'm sure it's a simple one but my mind is part way between the Java OO and MDX worlds currently haha :D
Thanks
Leigh
So after some experimenting I found out about the TAIL and Item MDX functions.
I think at one point I did get it working, but didn't make a note of what worked. I was playing around with this and variants of it, but most versions ended up with unusable query times:
[Measures].[RepoRate_VAL] = (([Deal].[ProductType].&[REPO],[Deal].[Book],[Deal].[Instrument],TAIL(EXISTING([Deal].[MurexId].[MurexId])).Item(0)),[Measures].[RepoRate_MAX])
So I then decided to push the RepoRate calculation back to the SQL data preparation script. Cleaner/smoother data is always better, combined with simple calculated members.
I used SQL to determine the RepoRate from trade level, taking the row with MAX(MurexId) per GROUP BY on Book and Instrument, and then updated my main fact table to ensure the correct RepoRate was set at Book/Instrument level.
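Roughly, the data-prep step looks like this (table and column names are illustrative, not the actual schema):

-- TradeLevel holds trade-level rows; FactDeal is the main fact table (illustrative names)
UPDATE f
SET    f.RepoRate = t.RepoRate
FROM   FactDeal AS f
JOIN   TradeLevel AS t
       ON  t.Book = f.Book
       AND t.Instrument = f.Instrument
JOIN  (SELECT Book, Instrument, MAX(MurexId) AS MaxMurexId
       FROM   TradeLevel
       GROUP  BY Book, Instrument) AS m
       ON  m.Book = t.Book
       AND m.Instrument = t.Instrument
       AND m.MaxMurexId = t.MurexId;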
Thus the calculated member is then:
[Measures].[RepoRate_VAL] = (([Deal].[Book],[Deal].[Instrument]),[Measures].[RepoRate_MAX])
Fast data prep and a fast calculated member on the Excel/Pivot/UI layer.
I have 10 nested IF functions. I'm trying to transform a nonsmooth, nonlinear function into a linear one. To do that, I need to transform the nested IF functions into a linear format by adding binary variables. It is easy if there is only one IF statement, but what about more than one? Thanks in advance for your responses.
I suspect this may no longer be an issue for you, but I just saw this post today. Manually linearizing nested IF statements can be quite a challenge. LINDO Systems has an Excel add-in solver named What'sBest that can internally linearize nested IF statements, which allows What'sBest to solve the resulting model as a mixed-integer linear program.
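For reference, here is the standard big-M construction for the single-IF case, which then chains for nested IFs (M is a sufficiently large constant, eps a small tolerance, z a binary variable; this is a generic sketch, not tied to any particular model):

To linearize  y = IF(x >= a, c1, c2), introduce binary z in {0,1}:
    y  = c1*z + c2*(1 - z)
    x >= a - M*(1 - z)       (forces z = 1 when x >= a)
    x <= a - eps + M*z       (forces z = 0 when x < a)
For nested IFs, add one binary per condition and combine the branch values the same way.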
I need to do a VLOOKUP from another workbook on about 400,000 cells with VBA. These cells are all in one column and shall be written into one column. I already know how VLOOKUP works, but my runtime is much too high because I am using AutoFill. Do you have a suggestion how I can improve it?
Don't use VLOOKUP; use INDEX/MATCH: http://www.randomwok.com/excel/how-to-use-index-match/
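For example (workbook, sheet, and ranges here are illustrative):

=INDEX('[Other.xlsx]Sheet1'!$B:$B, MATCH(A2, '[Other.xlsx]Sheet1'!$A:$A, 0))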
If you are able to adjust what the data looks like a slight amount, you may be interested in using a binary search. It's been a while since I last used one (writing code for a group exercise check-in program). https://www.khanacademy.org/computing/computer-science/algorithms/binary-search/a/implementing-binary-search-of-an-array was helpful in setting up the idea behind it.
If you are able to sort the data in some order, say by last name (I'm not sure what data you are working with), then add a numeric index to use for the binary search.
Edit:
The reasoning for a binary search is the computational time it takes: the number of iterations is log2(400000) rather than 400000. So instead of 400,000 possible iterations, it would take at most 19 with a binary search; the more data you have, the bigger the speed-up a binary search yields.
This would only be beneficial if you are able to arrange the data in a way that allows a binary search.
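A minimal VBA sketch of the idea, assuming the keys are already sorted ascending (names are illustrative):

' Binary search over a sorted 1-based array of keys.
' Returns the index of target, or 0 if not found.
Function BinarySearch(keys As Variant, target As Variant) As Long
    Dim lo As Long, hi As Long, m As Long
    lo = LBound(keys): hi = UBound(keys)
    Do While lo <= hi
        m = (lo + hi) \ 2
        If keys(m) = target Then
            BinarySearch = m          ' found: return position
            Exit Function
        ElseIf keys(m) < target Then
            lo = m + 1                ' search upper half
        Else
            hi = m - 1                ' search lower half
        End If
    Loop
    BinarySearch = 0                  ' not found
End Function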
So, if you can give us a bit more background on the data you are using and any restrictions you have on it, we can give more constructive feedback.
I'm trying to calculate the 99.5th percentile for a data set of 100,000 values in an array (arr1) within VBA, using the Percentile function as follows:
Pctile = Application.WorksheetFunction.Percentile(arr1, 0.995)
Pctile = Application.WorksheetFunction.Percentile_Inc(arr1, 0.995)
Neither works and I keep getting a type mismatch (13).
The code runs fine if I limit the array size to a maximum of 65,536 elements. As far as I was aware, calculation has been limited only by available memory since Excel 2007, and array sizes when passing to a macro have been limited only by available memory since Excel 2000.
I'm using Excel 2010 on a high-performance server. Can anyone confirm this problem exists? Assuming so, I figure my options are to build a VBA function to calculate the percentile 'manually', or to output to a worksheet, calculate it there, and read it back. Are there any alternatives, and which would be quickest?
The error occurs if arr1 is one-dimensional and has more than 65,536 elements (see Charles' answer in Array size limits passing array arguments in VBA). Dimension arr1 as a two-dimensional array with a single column:
Dim arr1(1 To 100000, 1 To 1)
This works in Excel 2013. Based on Charles' answer, it appears that it will work with Excel 2010.
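A minimal sketch of the workaround (the data here is placeholder):

Dim arr1(1 To 100000, 1 To 1) As Double
Dim i As Long
For i = 1 To 100000
    arr1(i, 1) = Rnd                 ' placeholder data
Next i
Pctile = Application.WorksheetFunction.Percentile_Inc(arr1, 0.995)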
Here is a Classic VBA example that mimics the Excel Percentile function.
Percentile and Confidence Level (Excel-VBA)
In light of Jean's demonstration that the Straight Insertion method is inefficient, I've edited this answer with the following:
I read that QuickSelect performs well on large record sets and is quite efficient at computing percentiles.
References:
Wikipedia.org: Quickselect
A C# implementation can be found at Fast Algorithm for computing percentiles to remove outliers, which should be easily converted to VB.
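For reference, a minimal VBA sketch of QuickSelect (it returns the k-th smallest element of a 1-based array and partially reorders the array in place; note that Excel's PERCENTILE interpolates between order statistics, so an exact match may need the two neighbouring order statistics):

' QuickSelect: k-th smallest element of a 1-based numeric array a.
' Average O(n); partially reorders a in place.
Function QuickSelect(a As Variant, ByVal lo As Long, ByVal hi As Long, ByVal k As Long) As Double
    Dim pivot As Double, i As Long, j As Long, tmp As Double
    Do While lo < hi
        pivot = a((lo + hi) \ 2)
        i = lo: j = hi
        Do While i <= j                       ' Hoare-style partition
            Do While a(i) < pivot: i = i + 1: Loop
            Do While a(j) > pivot: j = j - 1: Loop
            If i <= j Then
                tmp = a(i): a(i) = a(j): a(j) = tmp
                i = i + 1: j = j - 1
            End If
        Loop
        If k <= j Then
            hi = j                            ' k-th element is in the left part
        ElseIf k >= i Then
            lo = i                            ' k-th element is in the right part
        Else
            Exit Do                           ' a(k) is already in place
        End If
    Loop
    QuickSelect = a(k)
End Function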
I've been doing a lot of research on this topic and I'm finally getting somewhere. Below are two complex numbers from the Java code I'm using:
-9771.0 - j2125.0
-16184.09634718744 - j53968.71008512241
I know the amplitude/magnitude can be computed as sqrt(a^2 + b^2), and that is as far as I've gotten. I've read about sample rate, but I'll need a better explanation of it on its own and would like to be pointed in the right direction to obtain the knowledge. I've done the power spectrum graph, but I need to do this on paper so I'll know how to obtain the frequency.
Applying a Fourier transform to two values is pretty meaningless. You apply it to a series of values (a signal); then frequency starts to make sense. You can't speak about frequency with a series of only two values.
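To connect the pieces: for a single FFT output bin, the magnitude follows from the question's own formula, and the frequency a bin represents depends on the sample rate and FFT length. In this Java sketch, fftSize, sampleRate, and k are hypothetical values:

// Magnitude of the first complex value from the question
double re = -9771.0, im = -2125.0;
double magnitude = Math.sqrt(re * re + im * im);   // about 9999.4

// Frequency represented by FFT bin k (for k < N/2): k * sampleRate / N
int fftSize = 1024;           // hypothetical N
double sampleRate = 44100.0;  // hypothetical sampling rate in Hz
int k = 5;                    // hypothetical bin index
double binFrequency = k * sampleRate / fftSize;    // in Hz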