Is there an Objective-C or C math function to constrain a variable between a Min and a Max? [duplicate] - objective-c

This question already has answers here:
Fastest way to clamp a real (fixed/floating point) value?
(14 answers)
Closed 6 years ago.
What I am looking for is a math function in the standard math libraries for Objective-C that constrains a primitive variable between a minimum and a maximum value in a single function call, if such a function exists.
I currently use:
float constrainedValue = fminf( maxValue, fmaxf( minValue, inValue ) );
Since both fminf and fmaxf can involve instruction jumps or branches, it seems likely that there is a single optimized routine that combines these two operations into one call.

This topic is thoroughly discussed here: Fastest way to clamp a real (fixed/floating point) value?
'clamp' is the keyword I was looking for.
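There is no clamp function in C's math.h (C++17 later added std::clamp), so a one-call version has to be written as a small wrapper around the same fminf/fmaxf pair. A minimal sketch, with the name clampf being illustrative rather than a standard-library function:

#include <math.h>

// Clamps value to the closed range [minValue, maxValue] using the same
// fminf/fmaxf combination as above, but behind a single call site.
static inline float clampf(float value, float minValue, float maxValue) {
    return fminf(maxValue, fmaxf(minValue, value));
}

// Usage: float constrainedValue = clampf(inValue, minValue, maxValue);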

Related

Big O notation of a constant larger than 1 [duplicate]

This question already has an answer here:
Why do we prefer not to specify the constant factor in Big-O notation?
(1 answer)
Closed 3 years ago.
Does it mean anything at all to have a function with time complexity O(2)?
For example, how would one describe a function that must check two lookup tables rather than one? Is that not strictly describable in big-O notation, or is O(2) a real way to describe it? Or something else?
Thanks.
O(something) is a set of functions.
O(1) and O(2) are the same set.
A constant time function is a member of O(1). It's also a member of O(2) because O(1) and O(2) are exactly the same thing. Use whichever one you prefer. Normally you'd use O(1), but you be you.
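As an illustration (a hypothetical C sketch, not from the original post): a function that consults two lookup tables instead of one does twice a fixed amount of work, which is still a fixed amount of work, so it sits in O(1) - the same set as O(2).

#include <stdbool.h>
#include <stddef.h>

// Checks two lookup tables instead of one. The amount of work does not
// depend on any input size n, so the running time is constant: O(1).
bool found_in_either(const int *table_a, size_t len_a,
                     const int *table_b, size_t len_b,
                     size_t index, int key) {
    bool in_a = (index < len_a) && (table_a[index] == key);
    bool in_b = (index < len_b) && (table_b[index] == key);
    return in_a || in_b;
}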

Limiting chosen variables solved for in opensolver [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
I've got a linear system of 17 equations and 506 variables that I solve for the minimum sum of all the variables. This works fine so far, but the solution turns out to use a combination of 19 variables.
In the end, though, I want to limit the number of chosen variables to 10, without knowing in advance which ones are optimal (the solver figures that out for me, as well as their ratio).
I figured I can set a boolean to 1 if the value becomes larger than 0 (meaning the variable is picked), and to 0 if the variable is not picked for an optimal solution,
and then constrain the sum of the booleans to be at most 10.
However, this seems a bit elaborate, and I was wondering whether there is a built-in option in OpenSolver for this, since I think choosing a subset from a large set is quite a common problem.
So does anyone have a suggestion on:
Whether my elaborate approach drastically decreases performance? (I have no intrinsic comprehension of the OpenSolver algorithms yet.)
A way, more easily or within the OpenSolver options, to account for my requirement of at most 10 solution variables?
Based on the information provided below, I first scaled down the size of the problem:
I have three lists of data with 18 entries in columns:
W7:W23,AC7:AD23
which manually (with W28 = 6000, AC28 = 600, W29 = 1, AC29 = 1), in a linear combination, equal or exceed the target list:
EGM34:EGM50
So what I did was put the decision variables in W28:W29, AC28:AD29,
where I added the constraint W28,AC28:AD28 = integer in the solver (both in the original Excel solver and in OpenSolver),
and I added the constraint W29,AC29:AD29 = boolean in the solver (both in the original Excel solver and in OpenSolver).
Then I have a multiplication integer * boolean = the actual multiplication factor for the above lists in W7:W23 etc.
In order to limit the number of chosen variables I have also tried, in addition to the described constraints, to limit the cell =SUM(W29,AC29:AD29) to <= 10 (effectively reducing the number of booleans set to true to below 11, or so I thought, but the booleans aren't evaluated as booleans by the solver).
These new multiplied lists are placed in W34:W50,AC34:AD50, and the summation is situated in EGY34:EGY50. Hence the final check is added as a constraint:
EGY34:EGY50 >= EGM34:EGM50
I also have a question about how the linear solver evaluates these constraints. Does it:
a. Think the sum of EGY34:EGY50 must be larger than or equal to the sum of EGM34:EGM50,
or
b. Think: "for every row x, EGYx must be larger than or equal to EGMx"?
So far I've observed behaviour b., but I would like to make sure.
But my main question concerns:
After using the Evolutionary algorithm, as was kindly suggested in the comments below, how/why does it try values such as 0.99994 for the decision variables designated as booleans?
The introduction of binary variables is indeed the standard way to implement such constraints. Unfortunately, it transforms the problem from a linear programming problem into an integer programming problem (specifically a mixed integer linear programming problem). A standard approach to such problems is the branch and bound algorithm. This is what Excel's built-in solver seems to use; I'm not sure about the OpenSolver add-in that you are using. In the best case (where there is a lot of bounding) it will run fairly rapidly, even with problems of your size. In the worst case, for your problem it could be little better than what you would get by running the simplex algorithm C(506,10) = 2.8 x 10^20 times (once for each possible set of 10 decision variables). In other words, it might be infeasible. Integer programming is known to be NP-hard.
If an exact solution is infeasible, you could always use a heuristic algorithm such as an evolutionary approach.
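To spell out the standard linking trick the answer refers to (a sketch, assuming the decision variables satisfy $x_i \ge 0$ and with $M$ an assumed upper bound on any single $x_i$, not a value taken from the question):

$$x_i \le M\, y_i, \qquad y_i \in \{0, 1\}, \qquad \sum_i y_i \le 10$$

The first constraint forces $y_i = 1$ whenever $x_i > 0$, and the last constraint allows at most 10 variables to be non-zero. This is exactly the boolean construction described in the question, written as linear constraints over binary variables.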

Why arc4random_uniform instead of random or rand? [duplicate]

This question already has answers here:
Arc4random modulo biased
(1 answer)
What's the difference between arc4random and random?
(2 answers)
Closed 8 years ago.
I've been trying to create a program in Objective-C that returns a random value between a specific minimum and maximum value.
I've searched online and found out that people are using arc4random_uniform instead of random() or rand().
My question is: how is arc4random different from random()?
random() is definitely easier to remember, and I get the same result from both arc4random and random().
Thank You.
Regards.
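Not taken from the linked duplicates, just a sketch of the usage pattern the question is about: arc4random_uniform(n) returns a uniformly distributed value in [0, n) without the modulo bias of the rand() % n idiom, and it needs no seeding. The helper name below is illustrative.

#include <stdint.h>
#include <stdlib.h>   // arc4random_uniform (macOS / iOS / BSD)

// Returns a uniformly distributed integer in [minValue, maxValue], inclusive.
// Assumes maxValue >= minValue.
static uint32_t random_in_range(uint32_t minValue, uint32_t maxValue) {
    return minValue + arc4random_uniform(maxValue - minValue + 1);
}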

Storing and computing with real numbers up to an arbitrary precision in vb.net [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
.NET Framework Library for arbitrary digit precision
How can I store a real number, e.g. root 2 or one third, up to an arbitrary precision (the precision I need is infinite precision) in vb.net?
I would like to be able to store real numbers and perform operations on them (e.g. root 2 times root 2) without losing any accuracy - i.e. storing 1/3 would return the value 1/3 if I needed to retrieve it.
I was thinking of using a fraction-based encoding but I am unsure as to the best way to do this.
Storage capacity is not an issue, I just need the real numbers to be 100% accurate.
Will that be a single real number there, or does it need to be an arbitrary number of (almost) arbitrary figures? (Sorry for the "answer" - for some reason I can't add comments now...)
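The question is about VB.NET, but as a language-neutral sketch in C of the exact-rational part of the requirement (storing 1/3 with no loss of accuracy): keep a numerator and a denominator and reduce with a gcd. This does not cover irrational values such as root 2, which need a symbolic or arbitrary-precision library like the one in the linked duplicate, and the long long fields would eventually overflow, which is why real libraries use arbitrary-precision integers. The Fraction type and function names are illustrative.

#include <stdio.h>

// Exact rational number: numerator / denominator (illustrative sketch).
typedef struct { long long num; long long den; } Fraction;

static long long gcd_ll(long long a, long long b) {
    while (b != 0) { long long t = a % b; a = b; b = t; }
    return a < 0 ? -a : a;
}

static Fraction reduce(Fraction f) {
    long long g = gcd_ll(f.num, f.den);
    if (g != 0) { f.num /= g; f.den /= g; }
    if (f.den < 0) { f.num = -f.num; f.den = -f.den; }
    return f;
}

static Fraction multiply(Fraction a, Fraction b) {
    Fraction r = { a.num * b.num, a.den * b.den };
    return reduce(r);
}

int main(void) {
    Fraction third = { 1, 3 };                 // stored exactly as 1/3
    Fraction ninth = multiply(third, third);   // exactly 1/9, no rounding
    printf("%lld/%lld\n", ninth.num, ninth.den);
    return 0;
}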

Data Types Obj-C [duplicate]

This question already has an answer here:
Closed 11 years ago.
Exact Duplicate:
Issue with float and double data types in objective C
[Ironically, to find the duplicate questions you need to know the answer.]
What Every Computer Scientist Should Know About Floating-Point Arithmetic
If it cannot be expressed exactly in base 2, it will not be stored precisely. See also floating point inaccuracy.
0.1 is a 'repeating decimal' in binary (0.0001100110011...), so the stored representation of 0.1 is inexact. NSLog is likely rounding or truncating the output.
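A small standalone C sketch (not from the original answer) that makes the inexact representation visible by printing more digits than the default formatting shows:

#include <stdio.h>

int main(void) {
    float  f = 0.1f;
    double d = 0.1;
    // Printing with many digits exposes the nearest representable values,
    // which is why 0.1 only looks exact when the output is rounded for display.
    printf("float : %.20f\n", f);   // e.g. 0.10000000149011611938
    printf("double: %.20f\n", d);   // e.g. 0.10000000000000000555
    return 0;
}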