I want to make a scientific calculator in which the user enters something like 3+4*(3-5)/23 and then the calculator can return the value.
Now I'm trying to find a way to parse a string containing a mathematical expression. I know that there are built-in parsers and algorithms, but I want to know whether it's possible using the #define directive.
Basically, I want to use #define to literally remove the @ and the quotation marks from a string and make it look like an expression that can be evaluated. At this stage, I won't use unknown variables like x or 3*k or a*b/c. Everything will be numbers and operators, like 3+4 and 32, that can be directly evaluated by the compiler. Here is what I want to write in #define:
#define eval@"(x)" x
In the above code, eval is just a marker for parsing, @"x" is the actual string that needs to be parsed, and x is a mathematical expression. After the translation, only x will remain. For example, if I write
double result = eval@"(3+4)";
the compiler will read
double result = 3+4;
(according to my understanding of #define). However, the code does not work. I suspect that the quotation marks confuse the compiler and cause the code to break. So my question is: can anyone come up with a solution using #define?
This is not possible with the preprocessor; it supports no string manipulation besides stringizing (#) and token concatenation (##).
Why would you need the @"x" syntax anyway? You can just put the expression right there in the code.
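To be precise, the preprocessor's only string-like operations are stringizing (#) and token pasting (##); neither can strip the quotes off an existing literal. A minimal C illustration:

#include <stdio.h>

#define STRINGIZE(x) #x       /* turns the tokens x into a string literal */
#define PASTE(a, b)  a##b     /* glues two tokens into one identifier */

int main(void) {
    printf("%s\n", STRINGIZE(3+4));  /* prints "3+4" (the text, not 7) */
    int PASTE(my, Var) = 3 + 4;      /* declares a variable named myVar */
    printf("%d\n", myVar);           /* prints 7 */
    return 0;
}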
People are right: you cannot do it directly at compile time. However, if you really want a macro, you can have NSExpression parse and evaluate the string at run time:
#define eval(x) [[[NSExpression expressionWithFormat:x] expressionValueWithObject:nil context:nil] doubleValue]
double result = eval(@"3+4");
#define is an invocation of the C preprocessor, which is not capable of this kind of manipulation. It almost sounds like you're trying to define an Objective-C macro that would do the same kind of thing as a LISP macro, but that's not possible. Why don't you tell us what the original problem is that you're trying to solve... I think we can probably come up with an easier way to do what you're trying to do.
Related
The app I'm working on has a credit response object with a Boolean "approved" field. I'm trying to log this value in Objective-C, but since there is no format specifier for Booleans, I have to resort to the following:
NSLog(@"%@", [response approved] ? @"TRUE" : @"FALSE");
While it's apparently not possible, I would prefer to do something like the following:
NSLog(@"%b", [response approved]);
...where "%b" is the format specifier for a boolean value.
After doing some research, it seems the unanimous consensus is that neither C nor Objective-C has the equivalent of a "%b" specifier, and most devs end up rolling their own (something like the ternary approach above).
Obviously Dennis Ritchie & Co. knew what they were doing when they wrote C, and I doubt this missing format specifier was an accident. I'm curious to know the rationale behind this decision, so I can explain it to my team (who are also curious).
EDIT:
Some answers below have suggested it might be a localization issue, i.e. "TRUE" and "FALSE" are too English-specific. But wouldn't this be a dilemma that all languages face, not just C and Objective-C? Java and Ruby, among others, are able to implement "true" and "false" Boolean values. I'm not sure why the authors of those languages didn't similarly punt on this choice.
In addition, if localization were the problem, I would expect it to affect other facets of the language as well. Take reserved keywords, for instance. C uses English keywords like "include", "define", "return", "void", etc., and these keywords are arguably more difficult for non-English speakers to parse than keywords like "true" or "false".
Pure C (back to the K&R variety) doesn't actually have a Boolean type, which is the fundamental reason why native printf and cousins don't have a native Boolean format specifier. Expressions can evaluate to zero or nonzero integral values, which is what if statements interpret as false or true, respectively, in C. (Understanding this is the key to understanding the semantics of the delightful !! "bang bang" operator.)
(C99 did add a _Bool type, though unless you're using purest C you're unlikely to need it; derived languages and common platforms already have common boolean types or typedefs.)
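For illustration, here is a minimal C example of the truthiness rule and the !! idiom described above:

#include <stdio.h>

int main(void) {
    int flags = 0x40;        /* any nonzero value counts as "true" */
    printf("%d\n", flags);   /* prints 64 */
    printf("%d\n", !!flags); /* !! collapses any nonzero value to exactly 1 */
    printf("%d\n", !!0);     /* and zero stays 0 */
    return 0;
}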
The BOOL type is an ObjC construct, and -[NSString stringWithFormat:] (and NSLog) simply doesn't have an additional format specifier that does anything special with it. It certainly could (in addition to %@), and choose some reasonable string to drop in there; I don't know whether such a thing was ever considered, but it strikes me anyway as different in kind from all other format specifiers. How would you know to appropriately localize or capitalize the string representations of "yes" or "no" (or "true" or "false"?) etc? No other format specifier results in the library making decisions like that; all others are precisely numeric or insert the string result of another method call. It seems onerous, but making you choose what text you actually want in there is probably the most elegant solution.
What should the formatter display? 0 & 1? TRUE & FALSE? YES & NO? -1 and 1? What about other languages?
There's no consistently right answer, so they punted it to the app developer, for whom it's a clearer (and still simple) choice.
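In plain C, the usual workaround is the same ternary trick with whatever wording you prefer; a minimal sketch:

#include <stdio.h>
#include <stdbool.h>

int main(void) {
    bool approved = true;
    /* The library doesn't pick the text; you do. */
    printf("approved = %s\n", approved ? "YES" : "NO");
    return 0;
}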
In C's early days, there was no numeric printf() specifier for char or short either, as there was little need for one: any type narrower than int/unsigned was promoted to int anyway. Now there are "%hhd" and "%hd".
Today, in C, a _Bool may be printed with "%d":
#include <stdio.h>

int main(void) {
    _Bool some_bool = 2;
    printf("%d\n", some_bool); // prints 1 (or 0 when false)
    return 0;
}
The missing link in C is its lack of a format specifier to scan into a _Bool. This leads to workarounds like the following, which are not satisfactory with input like "T" or "false".
_Bool some_bool;
int temp;

// Read an int, then rely on the conversion to _Bool to collapse it to 0 or 1.
scanf("%d", &temp);
some_bool = temp;
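One possible workaround is to read a word and map the accepted spellings yourself. A sketch, where read_bool and the accepted spellings are my own choices, not any standard API:

#include <stdio.h>
#include <string.h>

/* Returns 1 on success and stores the result in *out; returns 0 on bad input. */
static int read_bool(_Bool *out) {
    char word[16];
    if (scanf("%15s", word) != 1) return 0;
    if (strcmp(word, "true") == 0 || strcmp(word, "T") == 0) { *out = 1; return 1; }
    if (strcmp(word, "false") == 0 || strcmp(word, "F") == 0) { *out = 0; return 1; }
    return 0;
}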
Apologies if there's an answer out there already, but all I seem to be getting from my Google searches is a bunch of "I want to turn my 1 into a 1.0" chaff.
First things first: no, I'm not talking about a simple Convert::ToSingle() call. Rather, I need to reinterpret the bit representation of the data as a System::Single.
So in other words, I'd like to take int myInt = 1065353216; and the result should be something like 1.000. I know the pure C++ method would be something like float myFloat = *(float *)&myInt;, but I need the managed version.
Thanks in advance for your help.
If you're in C++/CLI, you can do it the same way as you do in C++: float myFloat = *(float*)&myInt;
In pure managed-land, there are built-in methods to do this for double & Int64 (BitConverter.DoubleToInt64Bits and BitConverter.Int64BitsToDouble), but not for Single & Int32. However, if you look at the implementation of those methods (in the MS Reference Source), you'll see that they're doing the exact same thing as you have listed, so that's also the managed way to do it. The only difference is that if you do it in C#, you have to mark the method as unsafe.
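For comparison, here is the same bit reinterpretation in plain C, using memcpy to sidestep the strict-aliasing concerns of the pointer cast (a sketch, not the managed version the question asks for):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    int32_t myInt = 1065353216;  /* 0x3F800000, the bit pattern of 1.0f */
    float myFloat;
    memcpy(&myFloat, &myInt, sizeof myFloat);  /* copy the raw bits across */
    printf("%.3f\n", myFloat);   /* prints 1.000 */
    return 0;
}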
I am trying to format some output in Standard ML. I need to display some real values as rounded to a certain decimal place, and I also need to be able to display some real values using scientific notation.
The signature for the print function is
val it = fn : string -> unit
which doesn't seem to allow for the use of formatting codes or any other parameters. I also haven't had any luck finding documentation online. Ideally I was hoping the print function in SML would have similar functionality to printf in C...
Standard ML is a statically-typed language. It's hard to make something like printf in a type-safe way.
The SML Basis Library contains some formatting operations for numbers, but using them is relatively verbose and somewhat difficult to figure out. For example, to format a real number into a string in scientific notation with 3 places after the decimal point, you can do something like this:
Real.fmt (StringCvt.SCI (SOME 3)) 4324423423.5; (* evaluates to string "4.324E9" *)
Ugly, right?
Some implementations offer other formatting methods. For example, SML/NJ has a Format structure that allows you to use a printf-style formatting string. However, the arguments must be wrapped according to their type:
Format.format "%.3e" [Format.REAL 4324423423.5]; (* evaluates to string "4.324e09" *)
Other SML implementations might have their own custom formatting functions.
I'm trying to simplify my code by using #define statements, because it contains a lot of repetitive "chunks" of code that cannot be factored out using the obvious alternative, functions: inside these chunks, variables need to be declared with caller-supplied names, as you'd do in a #define statement, e.g. #define dostuff(name) int name##Variable;.
Code
#define createBody(name,type,xpos,ypos,userData,width,height) b2BodyDef name##BodyDef;\
name##BodyDef.type = type==@"dynamic"?b2_dynamicBody:b2_staticBody;\
name##BodyDef.position.Set(xpos,ypos);\
name##BodyDef.userData = userData;\
name=world->CreateBody(&name##BodyDef);\
b2PolygonShape name##shape;\
name##shape.SetAsBox(width/ptm_ratio/2,height/ptm_ratio/2);
... and applying that in the following:
createBody(block, @"dynamic", winSize.width*5/6/ptm_ratio, winSize.height*1/6/ptm_ratio, ((__bridge void*)blockspr), blockspr.contentSize.width, blockspr.contentSize.height)
// error appears there: ^
Now my point is that everything's working great, no errors, except a single one that's freaking me out:
Expected unqualified-id
which points at the first bracket in ((__bridge ..., as indicated. (That argument gets passed via the userData argument to createBody.)
I know this code is nowhere near simple, but since everything else is working, I believe that an answer must exist.
This is my first question on SO, so if there's anything unclear or insufficient, please let me know!
I'm trying to simplify my code by using #define statements.
This sounds an alarm in my mind.
Break this down into functions. You said you can't. I say you can.
Notice that your macro here:
createBody(name,type,xpos,ypos,userData,width,height);
It has exactly the same syntax as a C function call. So you've already created a function; you just declared it as a macro. There's no reason why you couldn't rewrite it as an actual function (C or Objective-C doesn't matter). You do not need to give each body its own variable name; instead, you could store the bodies in a dictionary (careful, though, because Box2D takes ownership of the bodies).
In algebra if I make the statement x + y = 3, the variables I used will hold the values either 2 and 1 or 1 and 2. I know that assignment in programming is not the same thing, but I got to wondering. If I wanted to represent the value of, say, a quantumly weird particle, I would want my variable to have two values at the same time and to have it resolve into one or the other later. Or maybe I'm just dreaming?
Is it possible to say something like i = 3 or 2;?
This is one of the features planned for Perl 6 (junctions), with syntax that should look like my $a = 1|2|3;
If ever implemented, it would work intuitively, like $a==1 being true at the same time as $a==2. Also, for example, $a+1 would give you a value of 2|3|4.
This feature is actually available in Perl 5 as well through the Perl6::Junction and Quantum::Superpositions modules, but without the syntactic sugar (through the 'functions' all and any).
At least for comparison (b < any(1,2,3)), it was also available in Microsoft's experimental Cω language; however, it was not documented anywhere (I just tried it when I was looking at Cω, and it just worked).
You can't do this with native types, but there's nothing stopping you from creating a variable object (presuming you are using an OO language) which has a range of values or even a probability density function rather than an actual value.
You will also need to define all the mathematical operators between your variables, and between your variables and native scalars. The same goes for the equality and assignment operators.
numpy arrays do something similar for vectors and matrices.
That's also the kind of thing you can do in Prolog. You define rules that constrain your variables and then let Prolog resolve them...
It takes some time to get used to, but it is wonderful for certain problems once you know how to use it...
Damian Conway's Quantum::Superpositions might do what you want:
https://metacpan.org/pod/Quantum::Superpositions
You might need your crack pipe, however.
What you're asking seems to be how to implement a Fuzzy Logic system. These have been around for some time and you can undoubtedly pick up a library for the common programming languages quite easily.
You could use a struct and handle the operations manually. Otherwise, no: a variable only has one value at a time.
A variable is nothing more than an address into memory. That means a variable describes exactly one place in memory (with a length depending on the type). So as long as we have no "quantum memory" (and we don't have it, and it doesn't look like we will have it in the near future), the answer is no.
If you want to program and model this behaviour, one way would be to use an array (with length equal to the maximum number of simultaneous values). With this comes increased runtime cost, since the computations must be done on each of the values (e.g. for x+y with two possible values each, you must compute x1+y1, x2+y2, x1+y2 and x2+y1).
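As a rough C sketch of that array-based idea (all names here are invented for illustration):

#include <stdio.h>

#define MAX_VALUES 4

/* A "superposed" value: simply an array of candidate values. */
typedef struct {
    int count;
    double values[MAX_VALUES * MAX_VALUES];
} Super;

/* x + y must combine every candidate of x with every candidate of y. */
static Super super_add(Super x, Super y) {
    Super r = { 0 };
    for (int i = 0; i < x.count; i++)
        for (int j = 0; j < y.count; j++)
            r.values[r.count++] = x.values[i] + y.values[j];
    return r;
}

int main(void) {
    Super a = { 2, { 3.0, 2.0 } };  /* i = 3 or 2 */
    Super b = { 1, { 1.0 } };
    Super c = super_add(a, b);
    for (int i = 0; i < c.count; i++)
        printf("%g ", c.values[i]); /* prints 4 3 */
    printf("\n");
    return 0;
}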
In Perl, you can.
If you use Scalar::Util (specifically its dualvar function), you can have a variable take two values: one when it's used in string context, and another when it's used in numeric context.