Does a language describe things beyond itself? - objective-c

I now have sufficient exposure to Objective-C that if I'm stuck with anything, I know how to think of the problem in terms of the likely tool I need and go look for it. Simple really. There's A Method For That. So nothing's a real problem anymore.
Now I'm looking deeper at the language in broader terms. We write stuff. The compiler hews out all the code to execute it. From a simple flashlight app that's an if/then decision to turn on, to a highly complex, accelerometer-driven 3D shoot-'em-up with blood 'n' guts and body parts obeying all sorts of physics, the compiler prepares the code to be executed like a giant railway layout. No matter how random it appears on screen, everything possible can be generically described and prepared for.
So here's the question:
Are there cases where something completely unexpected to the software designer can still be handled without an execution halt? Maybe I'd better re-frame the question a few different ways: Can an (Objective-C) program meta-compile within itself in response to an unplanned-for user request? Or, to restate my opening remark, are there tools or methods for unlikely descriptions of unlikely problems?

I think @kfb has the right comment about metaprogramming. Check out the Runtime docs in conjunction with metaprogramming tutorials.
Parts of your last question might be in the realm of this doc.
If you're looking for ways to reduce the size of your code base for the lesser-used features, one idea might be to make those features internet-based (assuming connectivity is not a problem).

Related

Code cleanup and optimization, where to start?

I want to optimize my angularjs frontend application and clean up the code to provide better code quality.
I thought about bringing in more abstraction, since I implemented a lot of similar-looking, but slightly different, controllers.
My question(s) are the following:
Are there common techniques to recognize bad code and optimize it?
How can one determine if code is either good, bad or redundant?
Where should one start, when trying to provide better code quality in an existing software project?
To answer your questions:
Yes, there are: by looking at the code itself, experienced programmers can tell whether it has certain characteristics or not. Some metrics exist that can act as alarm signals for quality, like "many dots" in object-oriented languages (the same applies in JavaScript), which indicates close coupling; a short sketch follows these three answers. Here is a comprehensive list.
By looking at it or, as written above, with static code analysis.
As others stated, don't optimize or refactor just for the sake of having a good-looking code base. When you need to touch existing code again, e.g. to add a feature or fix a bug, start looking for code redundancies and the many other signals that might indicate a need to refactor. Martin Fowler wrote an excellent book about it, with step-by-step examples, which IMO is a must-read for every developer. Also a good starting point is Misko's site. He talks about testability, but "good" code is testable code.
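To make the "many dots" signal from point 1 concrete, here's a hypothetical C# sketch (the same shape applies in JavaScript); every name in it is made up:

using System;

class Address { public string City = "Springfield"; }
class Customer { public Address Address = new Address(); }

class Order
{
    public Customer Customer = new Customer();
    // Delegation hides the object graph from callers (fewer dots).
    public string ShippingCity() { return Customer.Address.City; }
}

class Program
{
    static void Main()
    {
        var order = new Order();
        // "Train wreck": many dots couple this code to three classes' internals.
        Console.WriteLine(order.Customer.Address.City);
        // Asking the nearest object instead couples it to Order only.
        Console.WriteLine(order.ShippingCity());
    }
}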
What's really important before refactoring is to have a strong automated test base to rely on. If you don't, add tests and go really slowly to make sure you don't break existing functionality.
The topic is really huge and impossible to work through in a post here, but I think it's one of the most important ones in becoming an experienced programmer.
Whether code is good or bad is largely a matter of opinion.
To make the code look better and more efficient I would do something like this:
Don't make the lines too long.
Use variable names that make sense.
Use tabs and line breaks when it gets too messy.
There are a whole lot more things to clean up your code, but these are just some examples.
If the code works - don't touch it :)
Then when you work on bug fixes or new features/changes, see if you can also gradually improve the pieces of code you are working with. The more you work with the code, the better your understanding of the overall picture should get, and opportunities to improve and optimize should become more obvious. (You should also continue learning from other sources - books, the internet, other codebases.)
There is no magic "one size fits all" solution :) but yes, you can start with simple style changes as suggested in the other answer.
The process you refer to is commonly known as Refactoring. There are a number of standard techniques for improving code; Martin Fowler's book "Refactoring" has a list, with examples.
Many popular IDEs have refactoring tools built-in.
One of the processes in agile development is known as "red/green/refactor". Red means your code doesn't pass its unit tests; green means it passes (i.e. it does what it's supposed to do), and "refactor" means you make it elegant, maintainable and clean. Because you have a unit test, you know the refactoring doesn't break the code.
Where to start is a tough question - I typically recommend refactoring when you're fixing bugs. You may as well write a unit test to expose the bug, and tidy up the code while fixing the bug. Because that module has a bug, it's likely to be high-risk, so you should improve the code quality.
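As a minimal sketch of that red/green/refactor loop (hypothetical class and NUnit-style test; your names will differ):

using NUnit.Framework;

public class PriceCalculator
{
    // Green: the simplest code that passes the test below.
    // Refactor: with the test in place, you can rename, extract
    // constants, and simplify without fear of breaking behavior.
    public double FinalPrice(double price)
    {
        const double Threshold = 100.0;
        const double DiscountRate = 0.10;
        return price >= Threshold ? price * (1 - DiscountRate) : price;
    }
}

[TestFixture]
public class PriceCalculatorTests
{
    // Red: write this first and watch it fail before the code exists.
    [Test]
    public void AppliesTenPercentDiscountFromOneHundredUp()
    {
        var calc = new PriceCalculator();
        Assert.AreEqual(90.0, calc.FinalPrice(100.0), 0.001);
    }
}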

Converting Actionscript syntax to Objective C

I have a game I wrote in ActionScript 3 that I'm looking to port to iOS. The game has about 9k LOC spread across 150 classes; most of the classes are for data models, state handling and level generation, all of which should be easy to port.
However, the thought of rejiggering the syntax by hand across all these files is none too appealing. Are there tools that can help me speed up this process?
I'm not looking for a magical tool here, nor am I looking for a cross compiler, I just want some help converting my source files.
I don't know of a tool, but this is the way I'd try to attack your problem if there really is a lot of (simple) code to convert. I'm sure my suggestion is not that useful on parts of the code that are very Flash-specific (all the DisplayObject stuff?) and also not that useful on lots of your logic. But it would be fun to build! :-)
Partial automatic conversion should be possible, especially if the objects are just 'data containers'; watch out for bringing too much AS3 idiom over to Objective-C though, it might not always be a good fit.
Unless you want to create your own (semi-)parser for AS3, you'll need an existing one; apparently FlexPMD has one (never used it), and there are probably others.
After getting your hands on a parser, you have to find some way of suggesting to the system which parts can be converted automatically. You could try to add rules to the parser/generator script for the general case. For more specific cases I'd use custom metadata on the actual class/property/method, assuming a real AS3 parser would correctly parse those.
Now part of your work will shift from hand-converting files to hand-annotating files, but that might be ok for you.
Have the parser parse your classes and define actions, based on your metadata, that determine what kind of Objective-C class to generate. If you get this working it could at least get you all your classes, their simple properties, and their method signatures (getting the bodies of the methods converted might be a bit too much to ask, but you could include them as comments so you'd have a nice reference while hand-translating).
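As a very rough illustration of the generation step - hypothetical, regex-based, and only handling a trivial AS3 property declaration (a real version would sit on top of the parser):

using System;
using System.Text.RegularExpressions;

class As3PropertySketch
{
    // Hypothetical mapping of a few AS3 types to Objective-C equivalents.
    static string MapType(string as3Type)
    {
        switch (as3Type)
        {
            case "int":    return "NSInteger";
            case "Number": return "double";
            case "String": return "NSString *";
            default:       return as3Type + " *"; // assume a custom class
        }
    }

    static void Main()
    {
        // Matches declarations like "public var score:int;".
        var decl = new Regex(@"public var (\w+):(\w+);");
        Match m = decl.Match("public var score:int;");
        if (m.Success)
        {
            // Ownership qualifiers (strong/copy) omitted in this sketch.
            Console.WriteLine("@property (nonatomic) "
                + MapType(m.Groups[2].Value) + " " + m.Groups[1].Value + ";");
        }
    }
}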
PS: if you make this a one-way process, be very sure you won't need to re-generate later - it would be bad to find out that you have been modifying the generated code and somehow need to re-generate all those classes; that would mean you'd have to redo all your hard work!
I've started putting a tool together to take the edge off the menial aspects of this process.
I'm trying to figure out if there's enough interest to make it clean and stable enough to release for others to use. I may just do it anyway.
http://meanwhileatthelab.blogspot.com.au/2012/08/automating-process-of-converting-as3-to.html
It's so far saving me a lot of time while porting one of my fairly large games from AS3 to objc.
Check out the Sparrow Framework. It's purported to be designed with ActionScript developers in mind, recreating classes that sort of emulate the display list and things like that. You'll have to dive into some "rejiggering" for sure no matter what you do, if you don't want to use the CS5 packager.
http://www.sparrow-framework.org/
Even if some solution exists, note that the architectural logic is different, along with many other details.
Even if it's possible, you will end up with a strange hybrid.
I am coming back from WWDC 2012, and the message is (as always) performance and a great user experience.
So you should rewrite using a different programming model.

When you write your code, do you deal with errors proactively or reactively?

In other words, do you spend time anticipating errors and writing code to get around these potential issues, or do you write the code as you see fit and then work through any errors on an issue by issue basis?
I've been thinking a lot about this lately and I'm very much a reactive person. I write my code, give it a whirl, go back and correct errors, and repeat until the application works as expected. However, a friend of mine offered that he spends time thinking about how each line will be interpreted and fixes errors before they occur.
I must point out that my reactive approach is purely pre-live. I definitely make sure my application is working before it goes live.
There should always be a balance.
Too much error checking is slow and leads to cluttered code. Not enough error checking makes your program crash on edge cases, which is not a good thing to discover after it has shipped.
So you decide how reliable a piece of code should be and implement error checking accordingly. A test utility may not need to be very reliable - less error checking. A COM server meant to be used by a third-party search service running in deep background should be super reliable - much more error checking.
I think asking this in isolation is kinda weird, and very subjective; however, there are obviously a bunch of techniques that permit you to do each. I tend to use these two:
Test-driven development (this would seem to be proactive)
Strong, static typing (reactive, but part of a tight iterative development cycle, as in, it's enforced by my ML compiler, and I compile a lot)
Very occasionally I swerve into the world of formal verification of programs. That's definitely "reactive", but if you think a little more up-front, it tends to make the verification easier.
I must also say that I value a lot of up-front thought in programming. The easiest way to avoid bugs is to not write them in the first place. Sometimes it's inevitable, but often a little more time spent thinking about the problem can lead to better-quality solutions, and then the rest can be taken care of using the kinds of automated methods I talked about above.
I usually ask myself a bunch of what-ifs when coding, like
The user clicks the button, what if they didn't select a date?
The user is typing in the search box, what if they try to type html in there?
My label text depends on a value from a shared drive, what if it's not mapped?
and so on. By doing this I've found that when the application does go live, there are a ton fewer errors and I can focus on fixing more obscure bugs instead of correcting conditions that should have been in place to begin with.
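In code, those what-ifs usually end up as guard clauses at the top of the handler - a minimal hypothetical C# sketch:

using System;

class SearchHandler
{
    // Each guard clause answers one "what if" before the real work starts.
    public void OnSearchClicked(DateTime? selectedDate, string query)
    {
        if (selectedDate == null)             // what if they didn't select a date?
        {
            Console.WriteLine("Please select a date first.");
            return;
        }
        if (string.IsNullOrWhiteSpace(query)) // what if the search box is empty?
        {
            Console.WriteLine("Please enter a search term.");
            return;
        }
        // What if they typed HTML in there? Encode it instead of trusting it.
        string safeQuery = System.Net.WebUtility.HtmlEncode(query);
        Console.WriteLine("Searching on " + selectedDate.Value.ToShortDateString()
            + " for: " + safeQuery);
    }
}

class Program
{
    static void Main() { new SearchHandler().OnSearchClicked(DateTime.Now, "<b>cats</b>"); }
}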
I live by a simple principle when considering error-handling: garbage in, garbage out. If you don't want any garbage (e.g. invalid input) messing up your software, you have to find all the points in your software where it can get in and handle it. Of course, the more complicated your software is, the harder it is to find every point of entry, but I feel that the more you do up front the less reactive you will need to be later on.
I advocate the proactive approach.
I try to write code in a style that results in maintainable and reliable code
I use defensive programming techniques to prevent stupid errors due to lapses of attention and the like
I design the database model according to the fortress principle, with SQL code checking for results after each individual operation
I think of potential problems that could happen with that part of the code, and I account for them. Not for every possibility, but for the major ones I can think of at the time.
This usually results in software that operates rather smoothly. At times it even surprises me, but that was the intended goal, so here we are.
IMHO, the word "error" (or its loose synonym "bug") itself means program behavior that was not foreseen.
I usually try to design with all possible scenarios in mind. Of course, it is usually not possible to think of all possible cases. But thinking through and allowing for as many scenarios as possible is usually better than just getting something working as soon as possible. This saves a lot of time and effort debugging and redesigning the code. I often sit down with pen and paper for even the smallest of programming tasks before actually typing any code into my editor.
As I said, this will not eliminate all errors. For me it pays off many times over in terms of time spent debugging. Another benefit is that it results in a more solid and maintainable design with fewer bugfixing hacks and special cases added on later. But in any case, you will have to do a lot of debugging after the code is done.
This does not apply when all you want is a mockup or rapid prototype. Also, practical constraints such as deadlines often make a thorough evaluation difficult or impossible.
What kind of programming? It's impossible to answer this in any general way. (It's like asking "do you wear a helmet when playing?" -- well, playing what?)
At work, I'm working on a database-backed website. The requirements are strict, and if I don't anticipate how users will screw it up, I'm going to get a call at some odd hour of the day to fix it.
At home, I'm working on a program ... I don't even know what it'll do yet. I can't deal with 'errors' because I don't know what 'an error' is in this context, because I don't know what correct behavior is going to be. The entire purpose of the program can and frequently does change on a timescale of minutes to hours, so even a couple minutes spent thinking about errors this early is a complete waste of time. (It's even worse than browsing SO, since error-handling adds lines of code.)
I guess the only general answer is "I do what makes sense in terms of saving time in the long term", which is, after all, the whole reason to use machines to do work for us.

Compromising design & code quality to integrate with existing modules

Greetings!
I inherited a C#.NET application that I have been extending and improving for a while now. Overall it was obviously a rush job (or whoever wrote it was seemingly less competent than me). The app pulls some data from an embedded device and displays and manipulates it. At the core is a communications thread in the main application form which executes a 600+ line method that calls functions all over the place, implementing a state machine - lots of if-state-then-do type code. Interaction with the device is done by setting the state/mode globally and letting the thread do its thing. (This is just one example of the badness of the code - overall it is not very OO-like; it reminds me of the style of embedded C code the device firmware is written in.)
My problem is that this piece of code is central to the application, and none of the software, communications protocol or device firmware is documented at all. Obviously, to carry on with my work I have to interact with this code.
What I would like some guidance on is whether it is worth scrapping this code and trying to piece together something more reasonable from the information I can reverse-engineer. I can't decide! The reason I don't want to refactor is that the code already works, and changing it will surely be a long, laborious and unpleasant task. On the flip side, not refactoring means I sometimes have to compromise the design of other modules so that I can call my code from this state machine!
I've heard of "If it ain't broke don't fix it!", so I am wondering if it should apply when "it" is influencing the design of future code! Any advice would be appreciated!
Thanks!
Also, the longer you wait, the worse the codebase will smell. My suggestion would be to first create a test suite that you can evaluate your refactoring against. This makes it a lot easier to see whether you are refactoring or just plain breaking things :).
I would definitely recommend you refactor the code if you feel it's junky. Yes, during the process of refactoring you may have some inconsistencies/problems at the start. But that is why we have iterations and testing. Since you are going to build on this core engine in the future, why not make the foundation as stable as possible.
However, be very sure about what you are going to do, because long stretches of code are not necessarily evil; they may even be very efficient at run time. If/else blocks are not bad if you ask me, as they branch cheaply from a microprocessor's perspective. So you will have to be judicious and very clear before you touch this.
But once you refactor the code, you will definitely have finer control over it. And don't forget to document it!! Tomorrow, someone might very well say about you what you're now saying about the person who wrote that core code.
This depends on the constraints you are facing; it's a decision to be made on practical grounds, not theoretical ones. You need to consider three things:
Time: you need enough time to learn the code, reimplement it, and test it, without too many other tasks interrupting you
Boss #1: if you are working for someone, they need to know about and approve, up front, the time and effort required to rebuild the solution
Boss #2: your boss also needs to know that the advantage of having new and clean software will come at the price of possible regressions, so at the beginning of the deployment there may be unexpected bugs
If you have those three, then go ahead and refactor it. It will surely be worth it!
First and foremost, get all the business logic out of the Form. Second, locate all the places where the code interacts with global state (e.g. accessing the embedded system) and delegate all that access to methods. Then move those methods into a new class and create an instance of it in the consuming class's constructor. Finally, replace that with injecting an instance for the class to use.
Following these steps, you can move your embedded-system logic (the "existing module") into a wrapper class you write, so the interface can be nice and clean and more manageable. Then you can better tackle refactoring the monster method, because there is less global state to worry about (only local state).
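A minimal sketch of those steps (all names hypothetical; your device API will differ):

using System;

// Steps 2-3: every touch of global/device state funneled through one class.
public interface IDeviceLink
{
    string ReadStatus();
    void SetMode(int mode);
}

public class DeviceLink : IDeviceLink
{
    // Wraps whatever globals / port access the old code used.
    public string ReadStatus() { return "OK"; /* real I/O goes here */ }
    public void SetMode(int mode) { /* real I/O goes here */ }
}

// Step 4: the logic receives the dependency instead of reaching into
// globals, so it can be unit-tested against a fake IDeviceLink.
public class CommsController
{
    private readonly IDeviceLink _device;
    public CommsController(IDeviceLink device) { _device = device; }

    public bool IsHealthy() { return _device.ReadStatus() == "OK"; }
}

class Program
{
    static void Main()
    {
        var controller = new CommsController(new DeviceLink());
        Console.WriteLine(controller.IsHealthy());
    }
}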
If the code works and you can integrate your part with minimal changes to it, then leave the code as it is and do your integration.
If the code is simply a big barrier in your way to add new functionality then it is best for you to refactor it.
Talk with the other people responsible for the project, explain the situation, and give an estimate explaining the benefits gained from refactoring the code; I'm sure (I hope) that the best choice will be made. It is best to speak up about what you think - don't keep anything inside, especially if it affects your productivity, motivation, etc.
NOTE: Usually rewriting code is out of the question, but depending on the situation and the amount of code that needs to be rewritten, the decision may vary.
You say that this is having an impact on the future design of the system. In this case I would say it is broken and does need fixing.
But you do have to take into account the business requirements. Often reality gets in the way!
Would it be possible to wrap this code up in another class whose interface better suits how you want to take the system forward? (See adapter pattern)
This would allow you to move forward with your requirements without the poor design having an impact.
It gives you an interface that you understand, which you could write some unit tests against. These tests can be based on what your design requires from this code, and they ensure that your assumptions about what it is doing are correct. If you say that this code works, then any failing tests may mean your assumptions are incorrect.
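For instance (everything here hypothetical), each test pins down one assumption about what the wrapped code does:

using NUnit.Framework;

public enum CommsState { Idle, Connected }

// Stand-in for the adapter that would wrap the legacy state machine.
public class LegacyCommsAdapter
{
    public CommsState State { get; private set; }
    public LegacyCommsAdapter() { State = CommsState.Idle; }
    // In reality this delegates to the inherited 600-line method.
    public void Connect() { State = CommsState.Connected; }
}

[TestFixture]
public class LegacyCommsAdapterTests
{
    // If this fails, the assumption is wrong, not necessarily the code.
    [Test]
    public void ConnectingFromIdleEntersConnectedState()
    {
        var adapter = new LegacyCommsAdapter();
        adapter.Connect();
        Assert.AreEqual(CommsState.Connected, adapter.State);
    }
}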
Once you have these tests you can safely refactor - one step at a time, and when you have some spare time or when it is needed - as per business requirements.
Quite often I find the best way to truly understand a piece of code is to refactor it.
EDIT
On reflection, as this is one big method with multiple calls to the outside world, you are going to need some kind of inverse Adapter class to wrap this method. If you can inject dependencies into the method (see Dependency Inversion) such that the method calls methods on your classes, then you can route these to the original calls.

Real time scripting language + MS DLR?

For starters I should let you guys know what I'm trying to do. The project I'm working on requires a custom scripting system to be built. It will be used by non-programmer users of the application and should be as close to natural language as possible. An example: if the user needs to run a custom simulation and plot the output, the code they would write would need to look like
variable input1 is 10;
variable input2 is 20;
variable value1 is AVERAGE(input1, input2);
variable condition1 is true;
if condition1 then PLOT(value1);
Might not make a lot of sense, but it's just an example. AVERAGE and PLOT are functions we'd like to define; users shouldn't be allowed to change them or really even see how they work. Is something like this possible with the DLR? If not, what other options would we have (start with ANTLR to define the grammar and then move on)? In the future this may need to run using XBAP and WPF too, so that's also something we need to consider, but I haven't seen much, if anything, on the DLR and XBAP. Thanks, and hopefully this all makes sense.
Lua is not an option, as it is too different from what they are already accustomed to.
Ralf, it's going to be reactive, and to be honest the timeframe for getting results back to the user may be anywhere from 1/100 of a second up to two weeks or a month (very complex mathematical functions).
Basically, they already purchased a system that does some of what they need, and it includes a custom scripting language that does what I mentioned above; they don't want to have to learn a new one, so they basically just want us to copy it and add functionality. I think I'll just start with ANTLR and go from there.
Lua
it's small, fast, easy to embed, portable, extensible, and fun!
Lua is definitely the best choice for soft real-time systems (like computer games).
See http://shootout.alioth.debian.org/ for detailed benchmarks.
However, last time I checked, Lua used a mark-and-sweep garbage collector, which can lead to deadline violations and non-deterministic jitter in real-time systems.
I believe that you could theoretically use the DLR, but I'm unsure about support in an XBAP (partially trusted?) scenario.
If you host the DLR you would quickly be able to take advantage of IronRuby or IronPython scripting. You would want to look at these implementations when creating your own language implementation. If you post your question to the IronPython mailing list I'm sure you would get a better reply around the XBAP scenario, and some of the developers there created ToyScript.
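For instance, hosting IronPython from C# takes only a few lines - a sketch, assuming the IronPython/DLR assemblies are referenced, exposing the AVERAGE and PLOT functions from the question:

using System;
using IronPython.Hosting;
using Microsoft.Scripting.Hosting;

class Program
{
    static void Main()
    {
        ScriptEngine engine = Python.CreateEngine();
        ScriptScope scope = engine.CreateScope();

        // Expose locked-down functions: scripts can call them but never
        // see or modify the C# implementations behind them.
        scope.SetVariable("AVERAGE",
            new Func<double, double, double>((a, b) => (a + b) / 2));
        scope.SetVariable("PLOT",
            new Action<double>(v => Console.WriteLine("plot: " + v)));

        engine.Execute("PLOT(AVERAGE(10, 20))", scope); // prints "plot: 15"
    }
}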
What kind of real-time requirement are you trying to fulfill? Is the simulation a hard real-time simulation (some kind of hardware-in-the-loop simulation, i.e. the deadline is less than 1/1000 of a second)?
Or do you want the scripting system to be "reactive" to user input, in which case 1/10 of a second should be sufficient?
I am no expert regarding the MS DLR, but as far as I know, it does not support hard real-time systems. You may want to take a look at the Real-Time Specification for Java (RTSJ).
Firstly, I think that defining your own language is not the way to go.
Primarily because the biggest productivity gains you can get, for programmers or non-programmers alike, come from the development tools, and you (and 99.9% of the rest of us) are not going to write tools as good as what is out there.
Language design is hard.
Language support and documentation: also hard.
I would recommend looking for a pre-built solution. If you can find a language that can lock down some functionality, that would be a good starting point. MATLAB would be the first that comes to my mind.
Lastly, ditch the natural-language part; BASIC, COBOL and YA-TDWTF-Lang all tried and failed at it.
Full disclosure: I work for a company that is developing a generalized domain-specific-language "system". It's targeted at data-in/text-out applications, so it's not apropos, and it's not yet in beta. The result is I'm somewhat knowledgeable, and biased.