100 is showing as 10,000% in NumberTextBox - Dojo

I set the constraints type to 'percent', but a value of 100 is showing up as 10,000% once the NumberTextBox renders:
<div id="Percentage" data-dojo-type="ourcompay.NumberTextBox" data-dojo-props="constraints:{type: 'percent'}" title="Percentage" required="true"></div>
I'm not sure how to fix this. Is this a bug in Dojo, or am I doing something wrong?

I would assume that, given a percentage is a representation of an amount in hundredths of a unit, it's doing exactly what it intends - i.e. if you are supplying a value of 100 and expecting that to mean 100%, you're misunderstanding what percentages are.
1 would be 100%.
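Most locale-aware formatters follow the same convention: the raw value is treated as a fraction of 1 and multiplied by 100 for display. A quick Python illustration of that convention (this is not Dojo itself, just the same rule):

# Python's '%' format type, like a percent constraint, multiplies the value by 100
# before appending the '%' sign.
print(format(1, '.0%'))      # 100%    -> store 1 to display 100%
print(format(0.25, '.0%'))   # 25%
print(format(100, ',.0%'))   # 10,000% -> which is why a value of 100 renders as 10,000%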

X and Y inputs in LabVIEW

I am new to LabVIEW and I am trying to read code written in LabVIEW. The block diagram is this:
This is the program to feed x and y functions into the voltage input. It is meant to give an input voltage in different forms (sine, heart shape, etc.) to the x and y axes of a fast-steering mirror or galvano mirror.
The x and y function controls are for entering a formula for a function, and then we use the "evaluation single value" function to feed the result into a DAQ Assistant.
I understand that { 2*(|-Mpi|)/N }*i + -Mpi*pi goes into the x value. However, I don't understand why we use this kind of formula. Why do we need to assign a negative value and then take the absolute value of -M*pi? Also, I don't understand why we need to divide by N and then multiply by i. And finally, why do we need to add -Mpi again? If you can provide any hints about this I would really appreciate it.
This is just a complicated way to write the code/formula. Given what the code looks like (unnecessary wire bends, duplicate loop-input tunnels, hidden wires, unnecessary coercion dots, failure to use the appropriate built-in 'Negate' function), not much care has been given to writing it. So while it probably yields the correct results, you should not expect it to do so in the most readable way.
To answer your specific questions:
Why we need to assign a negative value and then do the absolute value
We don't. We can just move the negation immediately before the last addition or change that to a subtraction:
{ 2*(|Mpi|)/N }*i - Mpi*pi
And as @yair pointed out: We are not assigning a value here, we are basically flipping the sign of whatever value the user entered.
Why we need to divide by N and then multiply by i
This gives you a fraction between 0 and 1, no matter how many steps you do in your for-loop. Think of N as a sampling rate. I.e. your mirrors will always do the same movement, but a larger N just produces more steps in between.
Why need to add -Mpi again
I would strongly assume this is some kind of quick-and-dirty workaround for a bug that has not been fixed properly. Looking at the code, it seems this +Mpi*pi term was added later on in the development process. And while I don't know what the expected values are, I would believe that multiplying only one of the summands by pi is probably wrong.
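For illustration, here is a small Python sketch of the formula as transcribed above, assuming Mpi is the value the user enters on the front panel and N is the for-loop count (the names and sample values are just for this sketch):

import math

def x_original(Mpi, N, i):
    # literal transcription: { 2*(|-Mpi|)/N }*i + (-Mpi)*pi
    return (2 * abs(-Mpi) / N) * i + (-Mpi) * math.pi

def x_simplified(Mpi, N, i):
    # the equivalent rewrite from above: { 2*(|Mpi|)/N }*i - Mpi*pi
    return (2 * abs(Mpi) / N) * i - Mpi * math.pi

Mpi, N = 2.0, 8
for i in range(N + 1):
    assert math.isclose(x_original(Mpi, N, i), x_simplified(Mpi, N, i))
    print(i, round(x_original(Mpi, N, i), 3))

# i/N sweeps from 0 to 1, so the first term always covers 0 .. 2*Mpi no matter how
# large N is; a bigger N only adds more intermediate steps. The constant -Mpi*pi
# term just shifts the whole ramp, which is why multiplying only one summand by pi
# looks suspicious.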

Coalesce returning wrong value after a function call followed by multiplication

I have a report that is presenting inconsistent information based on what appears to be some issue with a SQL view, or possibly a SQL function nested within the view. I've tried finding a way to debug the SQL view; however, it looks like SSMS will only debug stored procedures, so I'm not really sure how to step through and see what is happening. It has me stumped, and I can't help but wonder if it isn't a rounding issue.
GetItemAverageCost RETURNS DECIMAL(12,2) and the data type of sitli.QuantityIssuedAtStockUOM is System.Int64 / bigint (side note: I'm confused about why LINQPad shows two data types for that column - in the tree on the left, after expanding the sitli table and hovering over QuantityIssuedAtStockUOM, the balloon says BigInt NOT NULL, but when I Take(100) and hover over the column in the result set it says System.Int64). Anyroad, here is the COALESCE expression.
COALESCE((dbo.GetItemAverageCost(ItemModel.IDItemModel)*sitli.QuantityIssuedAtStockUOM) / ISNULL(NULLIF(ItemModel.UOMFactor, 0),1),0) -- 259.73
--ROUND(COALESCE((dbo.GetItemAverageCost(ItemModel.IDItemModel)*sitli.QuantityIssuedAtStockUOM) / ISNULL(NULLIF(ItemModel.UOMFactor, 0),1),0),2) -- 259.73
--COALESCE(ROUND((dbo.GetItemAverageCost(ItemModel.IDItemModel)*sitli.QuantityIssuedAtStockUOM) / ISNULL(NULLIF(ItemModel.UOMFactor,2), 0),1),0) -- 259.70
--COALESCE((ROUND(dbo.GetItemAverageCost(ItemModel.IDItemModel),2)*sitli.QuantityIssuedAtStockUOM) / ISNULL(NULLIF(ItemModel.UOMFactor, 0),1),0) -- 259.73
original / wrong coalesce:
COALESCE(dbo.GetItemAverageCost(ItemModel.IDItemModel)*sitli.QuantityIssuedAtStockUOM,0)
I'm not sure what else to include, but I haven't found many resources online that offer insight into this kind of situation. Many thanks in advance for your time.
EDIT: GetItemAverageCost:
ALTER FUNCTION GetItemAverageCost
(
    @IDItemModel varchar(8000)
)
RETURNS DECIMAL(16,4)
--RETURNS DECIMAL(12,2)
AS
BEGIN
    RETURN
    (
        SELECT
            COALESCE(AVG(poli.UnitPrice), 0) as AvgCost
            -- COALESCE(ROUND(AVG(poli.UnitPrice),0),2) as AvgCost 260.00
        FROM ItemModel im
        LEFT JOIN VendorItem vi
            ON im.IDItemModel = vi.IDItemModel
        JOIN POLineItem poli
            ON vi.IDVendorItem = poli.IDVendorItem
        WHERE
            im.IDItemModel = @IDItemModel
        GROUP BY
            im.IDItemModel,
            im.ItemNumber
    )
END
To fix: have your function return DECIMAL(16,4) instead of DECIMAL(12,2), and then ROUND to two decimals after multiplying by the quantity.
"When a given report is run, there are no errors thrown. But the calculations are off. For example a part number 12 shows a quantity of 24 were issued at a cost of $259.73. However, each part costs $10.82 so the calculation should be $259.68. I'm not sure where the difference of 5 cents is coming from. The $259.73 is the result of the COALESCE function above. Hopefully this makes sense"
Run the SQL only for part 12, independent of the function, and you'll see the average is 10.822083333333333333333333333333 (10.82 plus 5/24 of a cent).
24 * unit price = $259.73
unit price = 259.73 / 24
unit price = $10.82 plus 5/24 of a cent.
You'll see the variance is $0.05:
(10.82 + 5/24 of a cent) * 24 = 259.73
10.82 * 24 = 259.68
That 5/24 of a cent per unit, spread across the 24 units, is the 5-cent difference - the rounding error you see when using your function.
When you go to the store and buy something, it's always at amounts rounded to the whole penny. When you go to the gas station, they charge to fractions of a cent (or, in your case, 4 decimal places).
The rounding of fractions of a penny isn't done until you multiply by the quantity, or until actual cash needs to change hands. If it's done too early, you get the rounding errors you are seeing.
That way you eliminate over/under-charging rounding errors, and at most you'll charge a fraction of a penny more or less than you should.
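A quick Python sketch of the same arithmetic (numbers taken from the example above), showing how the point at which you round produces the five-cent difference:

avg_cost = 259.73 / 24   # 10.8220833... - the unrounded average unit cost
qty = 24

round_early = round(round(avg_cost, 2) * qty, 2)   # round the unit cost first: 10.82 * 24
round_late  = round(avg_cost * qty, 2)             # multiply first, round once at the end

print(round_early)   # 259.68
print(round_late)    # 259.73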
Okay, so many thanks to all who helped along the way. There were a couple of issues preventing me from getting the correct answer. For one thing, I was working with the incorrect expression for much of the time. Secondly, after I figured out which expression to use, it was a matter of placing the ROUND function in the correct place.
So, the expression I should have been using to get my average cost is:
COALESCE(dbo.GetItemAverageCost(Item.IDItemModel) / ISNULL(NULLIF(UOMFactor, 0),1),0)
When I moved this into the WorkOrderItemInstructionPartCosts View, my report was then producing $10.82. Then I added *sitli.QuantityIssuedAtStockUOM to the line and was getting $259.73. Then I applied the ROUND function to the COALESCE function and voila! the correct value ($259.68) is being produced.
The final line looks like this:
ROUND(COALESCE(dbo.GetItemAverageCost(ItemModel.IDItemModel) / ISNULL(NULLIF(UOMFactor, 0),1),0),2)*sitli.QuantityIssuedAtStockUOM
Once again, thank you to all who helped me in the effort to resolve this and sorry for not having accurate information to begin with.
Best,
Jonathan

1 billionth ugly or hamming number?

Is this the 1 billionth ugly/hamming number?
62565096724471903888424537973014890491686968126921250076541212862080934425144389
76692222667734743108165348546009548371249535465997230641841310549077830079108427
08520497989078343041081429889246063472775181069303596625038985214292236784430583
66046734494015674435358781857279355148950650629382822451696203426871312216858487
7816068576714140173718
Does anyone have code to share that can verify this? Thanks!
This SO answer shows code capable of calculating it.
The test entry on ideone.com takes 0.05 sec (down from 1.1 sec) for 10^9 (2016-08-18: the main speedup is due to using Int instead of the default Integer where possible, even on 32-bit; an additional 20% comes from the tweak suggested by @GordonBGood, bringing the band size complexity down to O(n^(1/3))).
It gives the answer as ((1334,335,404),"6.21607575556559E+843"), i.e.
2^1334 * 3^335 * 5^404 ≈ 6.21607575556559 * 10^843.
(coincidentally, only the last two digits in the fractional number above are incorrect).
This also means, of course, that there are 404 zeroes at the end of this number, and that it has 844 digits in total. So no, the number you show isn't it.
Exact answer:
6216075755565244861630816332872072003947056519089652706591632409642337022002753141824417540777256732780370172616615291935540418620025524916729500086831454711313694078635504004160312872951788703647948382456091072701600790562071797590306654765882256990391763887850141154482249915927439184562828227449023750262318234797192076792208033475638322151983772515798004125909334741121595323950448656375104457026997424772966917441779406172736975588556800000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
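If you want to cross-check this yourself, here is a small Python sketch (Python integers are arbitrary precision, so the comparison is exact; paste the candidate digits from the question into it to test them):

# Recompute 2^1334 * 3^335 * 5^404 exactly and inspect it.
n = 2**1334 * 3**335 * 5**404

s = str(n)
print(len(s))                        # 844 digits in total
print(len(s) - len(s.rstrip('0')))   # 404 trailing zeros

# A quick test of whether any candidate is a Hamming number at all:
def is_hamming(m):
    for p in (2, 3, 5):
        while m % p == 0:
            m //= p
    return m == 1

print(is_hamming(n))   # True
# To check the number from the question, wrap its digits in int(...) and
# compare it to n (and/or run is_hamming on it).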

Threshold in SharePoint Dashboard Designer for Percentage KPI

I have an SSAS cube that has a KPI whose Value is a percentage. I also have a Goal, which is the target percentage to keep the Value below.
I create the KPI fine, but when I import it into Dashboard Designer and set the scoring pattern and indicator (I used tick, exclamation mark, cross, which gives two thresholds), it always shows the tick even though the value is way over the goal.
I have set it so that decreasing is better and the banding method is "Band by stated score", but it always shows on the scorecard as being On Target.
This is the threshold I currently have.
Is it something to do with having the goal as a percentage? Can anyone explain how Dashboard Designer thresholds work with percentages, please?
Update: I seemed to get it to work by setting the thresholds against the actual values instead.
The percentage works in terms of decimals:
0 = 0%
0.5 = 50%
1 = 100%
In your scenario, you can use -0.01, 0.8, 1, 1.01
Thanks,
Merin
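As an illustration of that decimal convention, here is a minimal Python sketch of a threshold check using the numbers suggested above (-0.01 as the worst value, band boundaries at 0.8 and 1.0, 1.01 as the best value). The labels and cut-offs are assumptions for the sketch, not PerformancePoint's actual banding algorithm:

def indicator(value):
    # value is the KPI value as a decimal fraction, e.g. 0.5 for 50%
    # "decreasing is better", so low values get the tick
    if value <= 0.8:
        return "tick"          # on target
    elif value <= 1.0:
        return "exclamation"   # warning
    else:
        return "cross"         # over the goal

print(indicator(0.50))   # 50%  -> tick
print(indicator(0.95))   # 95%  -> exclamation
print(indicator(1.20))   # 120% -> cross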
I eventually found the answer to this, for anyone who still wants to know.
I did indeed need the "Band by stated score" banding method; the thing I was missing was where the score comes from. This is not immediately obvious, but it is in the "Specify the worst value" section when editing the banding settings.
This needs to be set correctly against the threshold values.

How to analyze the min/max loc returned by OpenCV's cvMatchTemplate?

I am trying to detect objects in an image in an iPhone app.
I am using the cvMatchTemplate function, and I manage to see some patterns returned by it (I chose CV_TM_CCOEFF_NORMED).
Positive Results (result image is 163x371):
http://encryptedpixel.files.wordpress.com/2011/07/photo-13-7-11-11-52-19-am.jpeg
cvMinMaxLoc returns: min (102,244) max(11,210)
The min point makes some sense here: the position of the dark spot really is (102,244) in the 163x371 result image.
Negative Results:
cvMinMaxLoc returns: min (114,370) max(0,0)
This is not making sense; there are no matches at all, so why is there still a min point at (114,370)?
I need to know how to analyze these results programmatically so that I can say "Hey, I found the object!" in Objective-C for an iPhone app.
Thanks!
cvMinMaxLoc will always return the positions of the minimum and maximum values of its input. It only "doesn't make sense" in your particular application. You should check the value at the returned position for the minimum (or maximum, depending on the method) and do something like threshold it to see whether it's a probable match for your template. A template match will yield a very low or a very high value, depending on the method you chose.
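For example, in Python's OpenCV bindings (the modern cv2 equivalent of the C API you're calling; the file names and the 0.8 threshold are assumptions you would tune for your own templates):

import cv2

image = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)        # hypothetical file names
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)

# With TM_CCOEFF_NORMED a good match gives a value close to 1.0 at max_loc;
# minMaxLoc always returns *some* min/max, so the decision comes from the value.
MATCH_THRESHOLD = 0.8   # assumed value - tune for your templates
if max_val >= MATCH_THRESHOLD:
    print("Hey, I found the object at", max_loc, "score:", max_val)
else:
    print("No confident match (best score was", max_val, ")")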