In Wolfram Alpha I can use this formula:
((7days/week) *(1000km-691.3km) / (today - end of year))
and it will display the result using various units.
But I want it in the units kilometers per week. I tried various variations, like this one:
((7days/week) *(1000km-691.3km) / (today - end of year)) in (km/week)
but it can't parse it.
How do I specify the desired units for the result?
I found a partial solution.
This one actually makes the result a unitless number:
((7days/week) *(1000km-691.3km) / (today - end of year)) / (km/wk)
which is good enough for my current purpose, but I would still like to get the result with the actual units.
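As an aside, the same calculation can be sanity-checked outside Wolfram Alpha. Here is a rough sketch with Python's pint library; the 30-day figure is only a stand-in for (today - end of year):

import pint

ureg = pint.UnitRegistry()
remaining = (1000 - 691.3) * ureg.km       # distance still to cover
days_left = 30 * ureg.day                  # placeholder for (today - end of year)
rate = remaining / days_left

print(rate.to('km / week'))                                 # result expressed in km per week
print((rate / (ureg.km / ureg.week)).to('dimensionless'))   # the unitless workaround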
I am looking for a way to model a minimum required duty cycle in an optimization model.
After several attempts, however, I have now reached the end of my knowledge and hope for some inspiration here.
The idea is that a binary variable mdl.ontime is set so that the sum of successive ontime values is greater than or equal to the minimum duty cycle:
def ontime(mdl, t):
    min_on_time = 3  # minimum on time in h
    if t < min_on_time:
        return mdl.ontime[t] == 0
    return sum(mdl.ontime[t - i] for i in range(min_on_time)) >= min_on_time
That works so far, except that the variable mdl.ontime is not taken into account anywhere else in the model at all.
Then I tried three different constraints; unfortunately they all gave the same result: CPLEX only finds infeasible results.
The first variant was:
def flag(mdl, t):
    return mdl.ontime[t] + (mdl.production[t] >= 0.1) >= 2
So if mdl.ontime is 1 and mdl.production is greater than or equal to 0.1 (the assumption is just exact enough), the sum should be greater than or equal to 2: a logical AND expressed as an addition term.
The second attempt was quite similar to the first:
def flag(mdl, t):
    return mdl.ontime[t] >= (mdl.production[t] >= 0.1)
If mdl.ontime is 1, it should be greater than or equal to the result of comparing mdl.production with 0.1.
And the third with a big M variable:
def flag(mdl, t):
    bigM = 10**6
    return mdl.ontime[t] * bigM >= mdl.production[t]
bigM should be large enough in my case...
None of them work at all, and I have no idea why CPLEX reports that only an infeasible solution exists.
Basically, the model runs fine if I leave out the ontime integration.
Do you guys have any more ideas how I could implement this?
Best regards,
Mathias
It isn't really clear what the desired relationship is between your variables/constraints. That said, I don't think this is legal. I'm surprised that it isn't popping an error... and if it isn't popping an error, I'm pretty sure it isn't doing what you think:
def flag(mdl, t):
    return mdl.ontime[t] + (mdl.production[t] >= 0.1) >= 2
You are essentially burying an inferred binary variable in there with the test on mdl.production, which isn't going to work, I believe. You probably need to introduce another variable or such.
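For what it's worth, here is a rough Pyomo sketch of that idea (the horizon, bounds and names beyond ontime/production are made up): a separate big-M constraint links the binary ontime flag to production, and the minimum up-time constraint is only enforced in periods where the unit actually switches on.

import pyomo.environ as pyo

mdl = pyo.ConcreteModel()
mdl.T = pyo.RangeSet(0, 23)                       # hypothetical hourly horizon
mdl.production = pyo.Var(mdl.T, bounds=(0, 100))  # continuous output
mdl.ontime = pyo.Var(mdl.T, domain=pyo.Binary)    # on/off flag

bigM = 100  # any valid upper bound on production

# production can only be nonzero while the unit is on
def link_rule(mdl, t):
    return mdl.production[t] <= bigM * mdl.ontime[t]
mdl.link = pyo.Constraint(mdl.T, rule=link_rule)

# minimum up-time of 3 h: if the unit switches on at t, it must stay on
# for t, t+1 and t+2 (boundary periods are skipped)
min_on_time = 3
def min_up_rule(mdl, t):
    if t == mdl.T.first() or t > mdl.T.last() - (min_on_time - 1):
        return pyo.Constraint.Skip
    switch_on = mdl.ontime[t] - mdl.ontime[t - 1]
    return sum(mdl.ontime[t + i] for i in range(min_on_time)) >= min_on_time * switch_on
mdl.min_up = pyo.Constraint(mdl.T, rule=min_up_rule)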
I am trying to subtract the values of two textboxes in Visual Studio 2012.
Example input and results:
textbox1 - textbox2 = label1
25.9 - 25.4 = 0.50 (it's ok)
173.07 - 173 = 0.06 (should be 0.07)
144.98 - 142.12 = 2.85 (should be 2.86)
My code (I tried all three lines separately):
label1.text = (Convert.ToDouble(textbox1.text) - Convert.ToDouble(textbox2.text)).ToString
label1.text = (CDbl(textbox1.text) - CDbl(textbox2.text)).ToString
label1.text = (Val(textbox1.text) - Val(textbox2.text)).ToString
This error (maybe it's not an error) occurs sometimes, not every time.
What am I missing here? And what should I use instead of "CDbl"?
what should I use instead of "CDbl" ?
When you start with a string, the best option is Double.Parse() or Double.TryParse(), depending on the possibility of bad data.
But even that's not enough in this case. Computers use something called IEEE754 for floating point arithmetic. This scheme for encoding floating point numbers is designed as an efficient way to represent numbers in binary, and further has direct support in CPUs for arithmetic operations, meaning it is much faster than any available alternative (it's not even close). Pretty much every programming platform uses it.
The downside is there is some loss of precision. When treated as IEEE 754 doubles, 173.07 - 173 produces 0.069999…, not 0.07.
You can solve this in two ways:
Round the results. This isn't an option when using division, but with just addition and subtraction you can track significant digits and round to get exact results. This is a pain, though.
Use the Decimal type. Decimal isn't perfect, but it does have a much greater degree of precision (at the cost of some performance), and for your sample data it produces exact results.
In short, try this code:
label1.text = (Decimal.Parse(textbox1.text) - Decimal.Parse(textbox2.text)).ToString()
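For illustration (Python here, but the same IEEE 754 behaviour applies to VB.NET's Double), this is what happens with one of the sample values:

from decimal import Decimal

print(173.07 - 173)                        # 0.069999999999... -- binary doubles can't hold 173.07 exactly
print(Decimal("173.07") - Decimal("173"))  # 0.07 -- exact with a decimal type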
I have a report that presents information, and I'm getting inconsistent results from what appears to be an issue with a SQL view, or possibly a SQL function nested within the view. I've tried to find a way to debug the SQL view; however, it looks like SSMS will only debug stored procedures, so I'm not sure how to step through and see what is happening. It really has me stumped, and I can't help but wonder if it isn't a rounding issue.
GetItemAverageCost RETURNS DECIMAL(12,2), and the data type of sitli.QuantityIssuedAtStockUOM is System.Int64 / bigint. (Side note: I'm confused about why LINQPad shows two data types for that column. In the tree on the left, after expanding the sitli table and hovering over QuantityIssuedAtStockUOM, the balloon BigInt NOT NULL pops up, but when I Take(100) and hover over the column in the result set it says System.Int64.) Anyway, here is the COALESCE expression.
COALESCE((dbo.GetItemAverageCost(ItemModel.IDItemModel)*sitli.QuantityIssuedAtStockUOM) / ISNULL(NULLIF(ItemModel.UOMFactor, 0),1),0) -- 259.73
--ROUND(COALESCE((dbo.GetItemAverageCost(ItemModel.IDItemModel)*sitli.QuantityIssuedAtStockUOM) / ISNULL(NULLIF(ItemModel.UOMFactor, 0),1),0),2) -- 259.73
--COALESCE(ROUND((dbo.GetItemAverageCost(ItemModel.IDItemModel)*sitli.QuantityIssuedAtStockUOM) / ISNULL(NULLIF(ItemModel.UOMFactor,2), 0),1),0) -- 259.70
--COALESCE((ROUND(dbo.GetItemAverageCost(ItemModel.IDItemModel),2)*sitli.QuantityIssuedAtStockUOM) / ISNULL(NULLIF(ItemModel.UOMFactor, 0),1),0) -- 259.73
original / wrong coalesce:
COALESCE(dbo.GetItemAverageCost(ItemModel.IDItemModel)*sitli.QuantityIssuedAtStockUOM,0)
I'm not sure what else to include, but I haven't found many resources online that offer insight into this kind of situation. Many thanks in advance for your time.
EDIT: GetItemAverageCost:
ALTER FUNCTION GetItemAverageCost
(
    @IDItemModel varchar(8000)
)
RETURNS DECIMAL(16,4)
--RETURNS DECIMAL(12,2)
AS
BEGIN
    RETURN
    (
        SELECT
            COALESCE(AVG(poli.UnitPrice), 0) AS AvgCost
            -- COALESCE(ROUND(AVG(poli.UnitPrice),0),2) as AvgCost  -- 260.00
        FROM ItemModel im
        LEFT JOIN VendorItem vi
            ON im.IDItemModel = vi.IDItemModel
        JOIN POLineItem poli
            ON vi.IDVendorItem = poli.IDVendorItem
        WHERE
            im.IDItemModel = @IDItemModel
        GROUP BY
            im.IDItemModel,
            im.ItemNumber
    )
END
To fix: have your function return DECIMAL(16,4) instead of DECIMAL(12,2), and then ROUND to two decimals after multiplying by the quantity.
"When a given report is run, there are no errors thrown. But the calculations are off. For example a part number 12 shows a quantity of 24 were issued at a cost of $259.73. However, each part costs $10.82 so the calculation should be $259.68. I'm not sure where the difference of 5 cents is coming from. The $259.73 is the result of the COALESCE function above. Hopefully this makes sense"
Run the SQL only for part 12, independent of the function, and you'll see the average is 10.8220833333… (10.82 plus 5/24 of a cent).
24 * unit price = $259.73
unit price = 259.73 / 24 = $10.82 plus 5/24 of a cent

10.8220833… * 24 = $259.73
10.82 * 24 = $259.68

You'll see the variance is $0.05. That difference of 5 cents doesn't spread evenly across the 24 units, hence the rounding error when using your function.
When you go to the store and buy something, it's always in whole-penny amounts. When you go to the gas station, they charge to fractions of a cent (or, in your case, 4 decimal places).
When you're working in fractions of a penny, the rounding isn't done until you multiply by the quantity, or until actual cash needs to change hands. If it's done too early, you get the rounding errors you are seeing.
That way you eliminate over/under-charging rounding errors, and at most you'll charge a fraction of a penny more or less than you should.
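To see the effect concretely, here is a small check of that arithmetic (plain Python with Decimal, just to keep binary floating point noise out of it):

from decimal import Decimal, ROUND_HALF_UP

avg_cost = Decimal("259.73") / Decimal("24")   # 10.8220833... per unit
qty = Decimal("24")
cents = Decimal("0.01")

rounded_first = avg_cost.quantize(cents, ROUND_HALF_UP) * qty   # round the unit cost, then multiply
rounded_last = (avg_cost * qty).quantize(cents, ROUND_HALF_UP)  # multiply, then round

print(rounded_first)  # 259.68
print(rounded_last)   # 259.73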
Okay, so many thanks to all who helped along the way. There were a couple of issues preventing me from getting the correct answer. For one thing, I was working with the incorrect expression for much of the time. Secondly, after I figured out which expression to use, it was a matter of placing the ROUND function in the correct place.
So, the expression I should have been using to get my average cost is:
COALESCE(dbo.GetItemAverageCost(Item.IDItemModel) / ISNULL(NULLIF(UOMFactor, 0),1),0)
When I moved this into the WorkOrderItemInstructionPartCosts View, my report was then producing $10.82. Then I added *sitli.QuantityIssuedAtStockUOM to the line and was getting $259.73. Then I applied the ROUND function to the COALESCE function and voila! the correct value ($259.68) is being produced.
The final line looks like this:
ROUND(COALESCE(dbo.GetItemAverageCost(ItemModel.IDItemModel) / ISNULL(NULLIF(UOMFactor, 0),1),0),2)*sitli.QuantityIssuedAtStockUOM
Once again, thank you to all who helped me in the effort to resolve this and sorry for not having accurate information to begin with.
Best,
Jonathan
I am doing some calculations in vb.net, this is my equation:
rms = (20 * (Math.Log(rms / 0.7746))) 'also tried (Math.Log10(rms / 0.7746))
I have tried various different methods of writing this, including separating out the calculations into various steps. However the final result is quite far out.
I have tried declaring my variable 'rms' as a decimal and a double. It does contain decimal places.
In Excel, I have tried the same calculation using this formula:
=20*(LOG(C2/0.7746)) ' where C2 is the RMS value
And the results are consistent with a website I used to check, as well as my pocket calculator.
I have also tried rounding the number to 3 decimal places:
rms = Math.Round(rms, 3)
This too has a minimal effect on the final result.
I can only assume it's the 'operator precedence' in VB but I'm struggling to work this one out.
Any help greatly appreciated as always, thanks.
After a marathon debugging session I found an error in my code.
I have a routine that uses the voltage at a given frequency to "normalise" all the plots I do to 0dBu.
My normalisation routine was broken. Badly.
And finally, to get the correct output from the log maths, I had to change how the calculation was written.
It was originally rms = (Math.Log10(rms / 0.7746) * 20)
In trying to find the issue, I had changed it to rms = (20 * (Math.Log(rms / 0.7746))), which yields a different (and incorrect) result: Math.Log is the natural logarithm, whereas Math.Log10 is the base-10 logarithm used by the dBu formula (and by Excel's LOG).
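For reference, a quick check (Python here, but Math.Log and Math.Log10 behave the same way in .NET): the dBu conversion needs a base-10 logarithm, which is also what Excel's LOG() uses by default.

import math

rms = 1.0                              # hypothetical measured RMS voltage
print(20 * math.log10(rms / 0.7746))   # ~2.22 dBu, matches Excel =20*LOG(1/0.7746)
print(20 * math.log(rms / 0.7746))     # ~5.11, natural log, not the dBu value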
In any case - it's fixed now.
Thanks to all who responded.
I have an RGB value, and if it doesn't exist in the color table in my database I need to find the closest color. I was thinking of comparing all values, finding the difference (in red, green, and blue), and then taking the average. The lowest average deviation should be the closest color. It seems to me like there should be a better way. Any ideas?
Consider a color as a vector in 3-dimensional space; you can then easily compute the difference using 3D Pythagoras:
d = sqrt((r2-r1)^2 + (g2-g1)^2 + (b2-b1)^2)
However, note that because colors are interpreted by not-so-perfect eyes, you might want to weight the channels so that they don't all carry the same importance.
For instance, using a typical weighted approach:
d = sqrt(((r2-r1)*0.3)^2 + ((g2-g1)*0.59)^2 + ((b2-b1)*0.11)^2)
Since eyes are most sensitive to green and least sensitive to blue, two colors that differ only in the blue component must have a larger numeric difference before they are considered as different as two colors separated by the same numeric difference in the green component.
There are also various ways to optimize this calculation. For instance, since you're not really interested in the actual d value, you can dispense with the square root:
d = ((r2-r1)*0.30)^2
+ ((g2-g1)*0.59)^2
+ ((b2-b1)*0.11)^2
Note here that in many C-syntax-based programming languages (like C#), ^ does not mean "raise to the power of", but rather "binary exclusive or".
So if this was C#, you would use Math.Pow to calculate that part, or just expand and do the multiplication.
Added: Judging by the Wikipedia page on color difference, there are various standards that try to handle perceptual differences. For instance, the one called CIE94 uses a different formula in the L*C*h color model; that looks worth looking into, but it depends on how accurate you need it to be.
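A minimal sketch of that weighted comparison in Python (the 0.30/0.59/0.11 weights are the ones from above; the palette is just a made-up example):

def color_distance_sq(c1, c2):
    # weighted squared distance; no square root needed for ranking
    (r1, g1, b1), (r2, g2, b2) = c1, c2
    return (((r2 - r1) * 0.30) ** 2
            + ((g2 - g1) * 0.59) ** 2
            + ((b2 - b1) * 0.11) ** 2)

def closest_color(target, palette):
    return min(palette, key=lambda c: color_distance_sq(target, c))

palette = [(0, 0, 0), (255, 255, 255), (200, 30, 30), (128, 128, 128)]
print(closest_color((180, 40, 40), palette))   # picks (200, 30, 30)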
The Euclidean distance difference = sqrt(sqr(red1 - red2) + sqr(green1 - green2) + sqr(blue1 - blue2)) is the standard way to determine the similarity of two colours.
However, if you have your colours in a simple list, then finding the nearest colour requires computing the distance from the new colour to every colour in the list. This is an O(n) operation.
The sqrt() is an expensive operation, and if you're just comparing two distances then you can simply omit the sqrt().
If you have a very large palette of colours, it is potentially quicker to organise the colours into a k-d tree (or one of the alternatives) so as to reduce the number of differences that need computing.
The following does exactly what you describe:
select (abs(my_R - t.r) + abs(my_G - t.g) + abs(my_B - t.b)) / 3 as difference, t.*
from RGBtable t
order by difference asc;
However, you might get better results with something non-linear. In the "take the averages" approach, if your goal color is (25, 25, 25), the color (45, 25, 25) would be closer than (35, 35, 35). However, I bet the second would actually look closer, since it would also be gray.
A few ideas come to mind: you could try squaring the differences before you average them. Or you could do something complicated with finding the color with the closest ratio between the different values. Finding the closest ratios would get you closest to the right hue, but won't account for saturation (if I'm remembering the terms right...)
Let the database do it for you:
select top 1
    c.r,
    c.b,
    c.g
from
    color c
order by
    (square(c.r - @r) + square(c.g - @g) + square(c.b - @b))
where @r, @g, and @b are the r, g, b values of the color you're searching for (SQL Server parameter syntax, since you didn't specify a database). Note that this is still going to do a table scan, since the ORDER BY has a function call in it.
Note that the extra square root call isn't actually required since it's a monotonic function. Not that it would probably matter very much, but still.
From looking at the Wikipedia page on Color difference, the idea is to treat RGB colours as points in three dimensions. The difference between two colours is the same as the distance between two points:
difference = sqrt((red1 - red2)^2 + (green1 - green2)^2 + (blue1 - blue2)^2)
One step better than the plain average is the Euclidean distance (the square root of the sum of squared differences):
((delta red)^2 + (delta green)^2 + (delta blue)^2)^0.5
This minimizes the distance in 3D color space.
Since the square root is strictly increasing, you can search for the minimum of the squared distance instead. How you express this in SQL would depend on which RDBMS you're using.
Comparing a color sample to the whole color list every time is probably not optimal. You can optimize by putting the colors from the color list into a search tree. If you are comparing the color sample by its red, green and blue (RGB) values, you would put the colors into a three-dimensional search tree. The search tree can be created once and saved to a file (JSON, XML) or to a database. This may be worth it if speed is important, e.g. when there are many points to compare.
Use a k-d tree with the R, G and B values (0-255) as X, Y, and Z coordinates.
Or another type of nearest neighbour search.
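A rough sketch of that idea using SciPy's k-d tree (it assumes the palette fits in memory and uses the plain, unweighted Euclidean distance):

from scipy.spatial import KDTree

palette = [(0, 0, 0), (255, 255, 255), (255, 0, 0), (0, 128, 0)]  # example color list
tree = KDTree(palette)                 # built once, reused for every lookup

dist, idx = tree.query((250, 10, 5))   # nearest palette entry to this sample
print(palette[idx])                    # (255, 0, 0)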
Calculate both the average and the distance like this:
(r + g + b) / 3 = average
(r - average) + (g - average) + (b - average) = distance
This should give you a good idea of the closest value.