I've been toying around with some Project Euler problems and naturally I'm running into a lot that require handling numbers bigger than a long long. I am committed to using Cocoa and Objective-C (I need to stay sharp for work) but can't find an elegant way (read: library) to handle these really big numbers.
I'd love to use GMP, but it sounds like using it with Xcode is a complete world of hurt.
Does anyone know of any other options?
If I were you, I would compile GMP outside Xcode and just use gmp.h and libgmp.a (or libgmp.dylib) in my Xcode project.
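For what it's worth, once the library is built the calling code is plain C and drops straight into an Objective-C file. A minimal sketch, assuming libgmp was built and installed somewhere your compiler and linker can find it (the header path and -lgmp flag are up to your project settings):

    /* bigfact.c - compute 100! with GMP; build with something like: cc bigfact.c -lgmp */
    #include <stdio.h>
    #include <gmp.h>

    int main(void)
    {
        mpz_t f;
        mpz_init_set_ui(f, 1);              /* f = 1 */
        for (unsigned long i = 2; i <= 100; ++i)
            mpz_mul_ui(f, f, i);            /* f *= i */
        gmp_printf("100! = %Zd\n", f);      /* %Zd prints an mpz_t */
        mpz_clear(f);
        return 0;
    }

In Xcode you would point the header search path at gmp.h and add libgmp.a (or the dylib) to the link phase; the code itself does not change.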
Try storing the digits in arrays.
You will have to write your own functions for all the arithmetic, but that's how we were told to do it in college.
Plus the calculations were pretty fast, because the "big numbers" aren't really big after all -- they are just arrays of small digits rather than single machine numbers.
See if it helps.
Regards
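A minimal sketch of the digit-array idea in plain C (the fixed capacity and base 10 are arbitrary choices here; a real implementation would wrap this up and probably use a larger base):

    #include <stddef.h>

    #define NDIGITS 256   /* fixed capacity, least significant digit first */

    /* out = a + b, where each array holds NDIGITS base-10 digits. */
    static void big_add(const int a[NDIGITS], const int b[NDIGITS], int out[NDIGITS])
    {
        int carry = 0;
        for (size_t i = 0; i < NDIGITS; ++i) {
            int s = a[i] + b[i] + carry;
            out[i] = s % 10;      /* keep one digit */
            carry  = s / 10;      /* propagate the rest */
        }
        /* a nonzero carry here would mean the fixed-size array overflowed */
    }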
vBigNum in vecLib implements 1024-bit integers (signed or unsigned). Is that big enough?
If you wanted to use MATLAB (or anything close) you could look at my big integer implementation (vpi) on the File Exchange.
It is rather simple. Store each digit separately. Adds and subtracts are simple, just implement a carry operation. Multiplies are best done using convolution, then a carry. Implement divide and mod operators, then a powermod operation, useful for many of the PE problems. Powers are easy - just repeated squaring and multiplication, based on the binary representation of the exponent.
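As a concrete illustration of the repeated-squaring part (just a sketch in C, not taken from vpi; it assumes the modulus fits in 32 bits so the intermediate products fit in 64):

    #include <stdint.h>

    /* (base^exp) mod m by binary exponentiation; assumes m < 2^32. */
    static uint64_t powmod(uint64_t base, uint64_t exp, uint64_t m)
    {
        uint64_t result = 1 % m;
        base %= m;
        while (exp > 0) {
            if (exp & 1)
                result = (result * base) % m;  /* this bit of the exponent is set */
            base = (base * base) % m;          /* square for the next bit */
            exp >>= 1;
        }
        return result;
    }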
This will let you solve many PE problems.
I too got the bright idea to attempt some Project Euler problems with Cocoa/Objective-C and have found it frustrating. I previously used Java and perhaps some PHP. I posted my exact problem in this thread.
I always considered using a library cheating for this project. Just write a class with the things you need. And don't be afraid to use malloc and uint64_t and so on. NSNumber is not a good idea in many cases.
On the other hand, there are many problems where the obvious solution would require huge to enormously huge numbers, and the trick is to find a way to solve the problem without using these huge numbers. (For example, what is the sum of the last thousand digits of 1,000,000 factorial?)
Related
I am attempting a proof of concept under very constrained technological conditions. My question is: how do I efficiently subtract big integers (represented as byte arrays) on a Java Card?
Now, the details are what make the task tricky. I have access to one smart card. The model is Feitian JavaCOS A22 and it runs Java Card 2.2. For full detail, Java Card enables the usage of a very restricted subset of the Java API (namely, no int, no char, and naturally, no BigInteger), but it does support a series of cryptographic primitives, which are detailed on this list.
In particular, my task is to implement classic ElGamal on card. I have found two relevant replies so far. In the first one, Maarten points out that ElGamal is not in the standard, and therefore the functionality would need to be implemented. In this answer, thotheolh shares a link to an implementation of Diffie-Hellman in Java Card 2.2 based on the same principle: since it is not natively supported, it leverages the RSA functionality.
The logic is seamless: RSA, ElGamal and Diffie-Hellman rely on the same basic operation $a^b \bmod c$. Based on thotheolh's code, I have managed to achieve key generation. Encryption occurs off the card, so it is not my concern. But decryption requires a particular variant. For decryption $b = p - 1 - x$, where both $p$ and $x$ are BigIntegers. This is the point where I get stuck: how do I efficiently calculate $p - 1 - x$?
Well, in fact there is no such thing as native BigInteger support for Java Card. There is BigNumber, but I don't think it will fit your requirements.
However, there is a way to work around this limitation.
There is a Java Card library that should allow you to deal with arbitrarily long big integers -- the problem is that your applet could run out of memory.
The sources of the library are here, and here is the prebuilt .jar.
This approach might work, but it is also likely to be drastically slow on a real card. That isn't an issue, however, if you run such code in a simulator just for a PoC.
I have no idea what your IDE is, but this is how you can add the library in IntelliJ.
However, as Maarten Bodewes pointed out, you might be better off focusing on byte subtraction, simply because any BigInteger Java Card library is likely to be inefficient.
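For reference, the borrow propagation itself is short. A sketch of the idea in plain C (big-endian byte order assumed; on the card you would write the same loop in Java Card's byte/short subset):

    #include <stdint.h>
    #include <stddef.h>

    /* a = a - b for big-endian byte arrays of equal length len.
       Returns the final borrow: 0 if a >= b, 1 on underflow. */
    static int sub_bytes(uint8_t *a, const uint8_t *b, size_t len)
    {
        int borrow = 0;
        for (size_t i = len; i-- > 0; ) {              /* least significant byte first */
            int d = (int)a[i] - (int)b[i] - borrow;
            borrow = d < 0;                            /* 1 if we had to borrow */
            a[i] = (uint8_t)(d & 0xFF);
        }
        return borrow;
    }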
Hope this helps.
Update:
BigNumber is guaranteed to be at least 8 bytes, but as far as I have tried it, it allows exactly 8 bytes, which is far too small to hold security-grade parameters. For instance, it cannot hold the safe prime p = 57896044618658097711785492504343953926634992332820282019728792003956564821041.
You can check this yourself with the getMaxBytesSupported() method.
So, as you can see, BigNumber is relatively big for Java Card, but still smaller than most crypto protocols need.
As others said, you won't find native Integers or BigInts in most JavaCards, even today.
However, for anyone still wondering 4 years later, JCMathLib actually implements this functionality.
It is not as fast as a native implementation would be but it uses the crypto coprocessor (where possible) and achieves decent performance.
I'm starting to build a real-time raytracer for iOS. I'm new to this raytracing thing; all I've done so far is write a rudimentary one in ObjC. It seems to me that a C-based raytracer is going to be faster than one written in ObjC, but the ObjC one will be far simpler, as object hierarchies come in very handy. Speed is very important, though, as I want it to be real-time, say 30 fps.
What's your opinion on whether the speedup from C would be worth the extra complexity? I can foresee the C code taking much longer to write and causing me headaches with lots of bugs (although I'm not new to C), but going for more speed is seductive initially.
Are there any examples out there of raytracers written in C? My google search for such things is contaminated with lots of results for C++ and C#.
If you want fast ray tracing, you can pretty much forget about using either C or Objective C. You almost certainly want to use OpenCL. It's still not going to be enough to get you (even very close to) 30 fps, but it'll probably be at least twice as fast as anything running on the CPU (and 5-10 times faster wouldn't be any real surprise).
As zneak stated, C++ is the best combination of speed and polymorphism.
However, you can get something close by reducing the Objective-C calls (read: cut the polymorphic interface down to the minimum set required, then put the parts that need to be fast in plain C or C++).
Objective-C message dispatch is quite fast, and you can typically remove much of the virtual/dynamic behaviour from your interfaces (assume every Objective-C instance method is virtual). C code in Objective-C methods is still C code. From there, determine where your bottlenecks are -- it doesn't hurt to profile before changing working code, either ;)
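As an illustration of the "plain C inside an Objective-C project" point, the hot inner routine of a raytracer might look like the sketch below (hypothetical, untuned code; an ObjC sphere object could hold these structs and call the function from its intersect method, so the per-ray work never goes through a message send):

    #include <math.h>

    typedef struct { float x, y, z; } Vec3;

    static inline float vdot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static inline Vec3  vsub(Vec3 a, Vec3 b) { return (Vec3){ a.x - b.x, a.y - b.y, a.z - b.z }; }

    /* Ray-sphere intersection: returns the nearest t >= 0 along the ray,
       or -1.0f on a miss. Assumes dir is normalized. */
    static float ray_sphere(Vec3 origin, Vec3 dir, Vec3 center, float radius)
    {
        Vec3  oc   = vsub(origin, center);
        float b    = vdot(oc, dir);
        float c    = vdot(oc, oc) - radius * radius;
        float disc = b * b - c;
        if (disc < 0.0f) return -1.0f;                 /* ray misses the sphere */
        float s  = sqrtf(disc);
        float t0 = -b - s, t1 = -b + s;
        if (t0 >= 0.0f) return t0;                     /* nearest hit in front of the origin */
        return (t1 >= 0.0f) ? t1 : -1.0f;              /* origin may be inside the sphere */
    }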
Writing a "realtime Raytracer" is without the use of Hand-Optimized Assembly (or the use of the "cheap" Intel compiler ;) , but this is not possible for this platform), impossible because you need the speed.
Further more you need a lot of Processing power but i guess even the OpenCL path is not powerful enought (this is in my opinion the case even for real Desktop machines, the reason for this is the lack of an real big cache on the Graphics Processor).
Have a look through http://ofps.oreilly.com/titles/9780596804824/ -- that is as close as you'll get.
It isn't ray tracing; I have written a ray tracer and it is a huge amount of work. GL uses a different technique for graphics, hence it will be unable, for example, to render the way a diamond captures light. That link contains sample code you can download and run. You will realise that even some of the moderately complex examples really chug on an actual device... we are talking < 1 fps.
I have a game I wrote in Actionscript 3 I'm looking to port to iOS. The game has about 9k LOC spread across 150 classes, most of the classes are for data models, state handling and level generation all of which should be easy to port.
However, the thought of rejiggering the syntax by hand across all these files is none too appealing. Are there tools that can help me speed up this process?
I'm not looking for a magical tool here, nor am I looking for a cross compiler; I just want some help converting my source files.
I don't know of a tool, but this is the way I'd try and attack your problem if there really is a lot of (simple) code to convert. I'm sure my suggestion is not that useful on parts of the code that are very flash-specific (all the DisplayObject stuff?) and also not that useful on lots of your logic. But it would be fun to build! :-)
Partial automatic conversion should be possible, especially if the objects are just 'data containers'; watch out for bringing too much AS3 idiom over to Objective-C though, it might not always be a good fit.
Unless you want to create your own (semi-)parser for AS3 you'll need some sort of existing parser; apparently FlexPMD has one (I've never used it), and there are probably others.
After getting your hands on a parser you have to find some way of suggesting to the system what parts could be converted automatically. You could try and add rules to the parser/generator script for the general case. For more specific cases I'd use custom metadata on the actual class/property/method, assuming a real as3 parser would correctly parse those.
Now part of your work will shift from hand-converting files to hand-annotating files, but that might be ok for you.
Have the parser parse your classes and define actions based on your metadata that will determine what kind of objective-c class to generate. If you get this working it could at least get you all your classes, their simple properties and method signatures (getting the body of the methods converted might be a bit too much to ask but you could include it as a comment so you'd have a nice reference while hand-translating).
PS: if you make this a one-way process, be very sure you won't need to re-generate it later -- it would be bad to find out that you have been modifying the generated code and somehow need to re-generate all those classes, because that would mean redoing all your hard work!
I've started putting a tool together to take the edge off the menial aspects of this process.
I'm trying to figure out if there's enough interest to make it clean and stable enough to release for others to use. I may just do it anyway.
http://meanwhileatthelab.blogspot.com.au/2012/08/automating-process-of-converting-as3-to.html
It's so far saving me a lot of time while porting one of my fairly large games from AS3 to objc.
Check out the Sparrow Framework. It's purported to be designed with ActionScript developers in mind, recreating classes that sort of emulate the display list and things like that. You'll have to dive into some "rejiggering" for sure no matter what you do if you don't want to use the CS5 packager.
http://www.sparrow-framework.org/
Even if some solution exists, note that the architectural logic is DIFFERENT, along with many other details.
Anyway, even if it is possible, you will end up with a strange hybrid.
I am coming back from WWDC 2012, and the message is (as always..) performance and a great user experience.
So you should rewrite using a different programming model.
Hey guys,
I was wondering if you know of any well-working math or calculation engines written in Objective-C? I already found a graphing one using CorePlot....
Thanks for your help! :)
You might get some use out of David Stes' CAKit (a computer algebra package), but you'll have a ton of hacking to do, since Stes is ravingly anti-FoundationKit and wrote the whole thing based on the old, pre-NeXT ICPak API. (Don't go looking to him for help -- you'll get a world of hurt.)
The key issue to keep in mind is that ICPak was based on Smalltalk and describes more or less the intended function of the class, while FoundationKit class names tend to describe the raw functionality itself; the most useful correlation will probably be OrdCltn -> NSMutableArray; you'll also have to tweak the memory management to use autorelease. It's not impossible, but you do have to understand the philosophy that FoundationKit follows.
I am working on a major project implementing compiler optimization techniques. I already know about the existing techniques, but I am confused about which technique to choose and how to implement it.
G'day,
What area of optimization are you talking about?
Compiler optimizations such as:
loop optimizations
dataflow optimizations
static single assignment based optimizations
code generator optimizations
etc.
etc.
Or optimization in the performance of the compiler itself, i.e. the speed with which it works?
Assuming that you have a compiler to optimize: if it wasn't written by you, look up the documentation to see what is missing. Otherwise, if it was written by you, you can start off with the simplest optimization. What counts as simplest will depend on the language your compiler consumes. Or am I missing something?
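For instance, constant folding is usually about as simple as it gets; a toy sketch over a hypothetical expression-tree representation in C:

    #include <stdlib.h>

    /* Toy expression node: op == 0 marks a constant leaf, otherwise '+' or '*'. */
    typedef struct Node {
        char op;
        long value;               /* valid when op == 0 */
        struct Node *lhs, *rhs;
    } Node;

    /* Constant folding: if both children of a binary node are constants,
       replace the node in place with the computed constant. */
    static Node *fold(Node *n)
    {
        if (n == NULL || n->op == 0)
            return n;
        n->lhs = fold(n->lhs);
        n->rhs = fold(n->rhs);
        if (n->lhs->op == 0 && n->rhs->op == 0) {
            n->value = (n->op == '+') ? n->lhs->value + n->rhs->value
                                      : n->lhs->value * n->rhs->value;
            n->op = 0;
            free(n->lhs);
            free(n->rhs);
            n->lhs = n->rhs = NULL;
        }
        return n;
    }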
I think you may have over-optimized your question. Are you trying to decide where to start, or trying to decide whether some optimizations are worth implementing and others are not? I would assume all of the existing techniques have a place and are useful depending on the code they come across. If you are deciding which one to do first, pick the one you can do and do it. Pick the low-hanging fruit. Get a few wins in your back pocket before you tackle a tough one and stumble and get frustrated. I would assume the real trick is having all the optimizations there and working, but coming up with a way to decide which ones produce something better for a particular program and which ones get in the way and make things worse.
IMHO, the thing to do is implement the simple, obvious optimizations and then let it rest. Certainly it is very interesting to try to do weird and wonderful optimizations to rectify things that the user could simply have coded a little better, but if you really want to try to clean up after poor coding or poor design, the user can always outrun you. This is my favorite example.
My favorite example of compiler-optimizations-gone-nuts is Fortran compilers, where they go to such lengths to scramble code to shave a few hypothetical cycles that the code is almost impossible to debug, and typically the program counter is in there less than 1% of the time, so the effort is wasted.