Closed 11 years ago.
I have been working on an application for the past 4 months, and it depends heavily on Google Maps on the iOS platform. Recently one of my friends raised a concern: what if Apple Inc. decides to use a different map provider?
After some searching on the internet, it turns out that Apple is going to replace Google Maps with new advanced 3D maps built by a company called C3 (one of the resources I found). Now I am worried about the code I have already written.
Should I delay my development work until this new technology arrives, or just wait until Apple announces it officially?
Thanks
This is a common dilemma in programming, and there's a common solution too. Develop your own primitives: whether you need to display overlays, show landmarks, or draw polygons and lines, do everything through stubs in your own code. If the underlying platform has to change, you then have a few well-known places to update to the new API.
Be very strict about not accessing the underlying API anywhere outside your wrapper layer, and it should be straightforward to change to a different provider later. Not free, of course, but as long as it's possible to implement the primitives you need on top of the new provider, you just need to change those, and you can leave the rest of your project untouched.
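To make the idea concrete, here is a minimal sketch of such a wrapper layer, written in modern Swift purely for illustration (the original question predates Swift, and the protocol, class, and method names here are hypothetical). The rest of the app only ever talks to the protocol; the map provider is confined to one adapter class, so swapping providers means writing one new adapter rather than touching every screen.

```swift
import MapKit

// Hypothetical wrapper: the app calls these primitives and never
// touches the underlying map SDK directly.
protocol MapCanvas {
    func showLandmark(named name: String, at coordinate: CLLocationCoordinate2D)
    func drawPath(through coordinates: [CLLocationCoordinate2D])
}

// One concrete adapter backed by MapKit. If the platform's map provider
// changes, this class is the only place that needs rewriting.
final class MapKitCanvas: MapCanvas {
    private let mapView: MKMapView

    init(mapView: MKMapView) {
        self.mapView = mapView
    }

    func showLandmark(named name: String, at coordinate: CLLocationCoordinate2D) {
        let annotation = MKPointAnnotation()
        annotation.title = name
        annotation.coordinate = coordinate
        mapView.addAnnotation(annotation)
    }

    func drawPath(through coordinates: [CLLocationCoordinate2D]) {
        let polyline = MKPolyline(coordinates: coordinates, count: coordinates.count)
        mapView.addOverlay(polyline)
    }
}
```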
It's not worth losing months of having a finished project just to avoid this situation.
Edit: This approach has another benefit - if you end up writing multiple primitive layers for different APIs, you may be able to let the user pick between them: you might have a (more expensive) higher-quality map layer which you charge for, and a cheap/free one which you don't - allowing people free access to a lower-quality version and letting them buy an upgrade to the better maps. Or... there are lots of possibilities. It's the same pattern some applications take with data-persistence layers, letting people run the same application on top of differing data platforms. There are lots of examples of this pattern.
Closed 10 years ago.
I have a bunch of C code accessing databases (Oracle, DB2, and Sybase) through embedded SQL: the base code is the same, but with three different precompilers, three sorts of executables are built, one for each database/platform.
It works perfectly fine, but we now need to migrate to a solution using ODBC access.
The problem is: what tools or APIs can be used? A direct way seems to be writing a custom precompiler (or modifying an existing one) to turn all the SQL statements and host-variable references into calls on an ODBC connection.
Can somebody recommend tools or APIs for that task, to keep it simple?
Or is there a simpler way, another approach?
Thank you
As is usual for such situations, there are likely no off-the-shelf answers; people's codebases always have a number of surprises in them, and that combination prevents a COTS tool from ever being economical for individual situations.
What you want is a program transformation system (PTS), with a C front end, that can be customized to parse embedded SQL. Such tools can apply source-to-source rewrite rules ("if you see this pattern, then replace it by that pattern") to solve the problem.
These tools require some pretty technical effort to configure. In your case, you'd have to adjust a C front end to handle embedded SQL; that's typically not in C parsers. (How is it that you can process this stuff in its current form?) You'll have trouble with the C preprocessor, because people do abusive things with it that really violate a parser's nested-structures view of the universe. Then you'll have to write and test the rules.
This effort is a sunk cost to be traded against the effort of doing the work by hand, or of some more ad hoc scripting (e.g., Perl) that partially does the job and leaves you to clean it up. Our experience is that it is not worth the trouble below 100K SLOC, that you have no chance of manual/ad hoc remediation above 1M SLOC, and that in between your mileage will vary.
At these intermediate sizes, you can agonize over the tradeoffs; that costs energy and time, too. Sometimes it's just better to bite the bullet, do it any way you can, and clean it up afterwards.
Our DMS Software Reengineering Toolkit is one of these PTSs. It has a customizable C parser and preprocessor, precisely to help deal with these configuration troubles. The other PTSs mentioned in the Wikipedia article do not, I believe, have any serious C parser associated with them. (I'm the guy behind DMS.)
Closed 10 years ago.
I have been working on my own SSL-based, multi-process, multi-file-descriptor, threaded server for a few weeks now; needless to say, it can handle a good amount of punishment. I am writing it in C++ in an object-oriented manner, and it is nearly finished, with signal handling (atomic access included) and exception/errno.h handling taken care of.
The goal is to use the server to make multi-player applications and games for Android/iOS. I am actually very close to completion, but it recently occurred to me that I could just use Apache to accomplish that.
I tried doing some research but couldn't find anything, so perhaps someone can help me decide whether I should finish my server and use that, or use Apache, or whatever. What are the advantages and disadvantages of Apache vs. your own server?
Thank you to those who are willing to participate in this discussion!
We would need more details about what you intend to accomplish, but I would go with Apache in any case if it matches your needs:
it is battle-tested for all kinds of cases and loads
you can benefit from all the available modules (see http://httpd.apache.org/docs/2.0/mod/)
you can benefit from regular security patches
you don't have to maintain it yourself!
Hope this helps!
You can always write your own software even when perfectly well-proven alternatives exist, but you should be conscious of your reasons for doing so, and of the costs.
For instance, your reasons could be:
Existing software too slow/high latency/difficult to synchronize
Existing software not extensible for my purpose
Your needs don't overlap with the architecture imposed by the software - for instance, if you need a P2P network, then a client/server-based HTTP protocol is not your best fit
You just want to have fun exploring low-level protocols
I believe none of these, except possibly the last, applies to your case, but you have not provided much detail, so my apologies if I am wrong.
The costs could be:
Your architecture might get muddled - for instance, you can fall into the trap of having your server too busy calculating whether a gunshot hits the enemy while 10 clients are trying to initiate a TCP connection, or a buffer overflow in your persistent-storage routine takes down the whole server
You spend time on lower-level stuff when you should be dealing with your game engine
Security is hard to get right; it takes many man-years of intrusion testing and formal proofs (even if you are using OpenSSL)
Making your own protocol means making your own bugs
Your own protocol means you have to make your own debuggers (for instance you can't test using curl or trace using HTTP proxies)
You have to solve many of the issues that have already been solved for the existing solution. For instance caching, authentication, redirection, logging, multi-node scaling, resource allocation, proxies
For your own stuff you can only ask yourself for help
Closed 12 years ago.
I have seen some people who refuse to use Interface Builder and prefer to make everything using code. Isn't this a bit tedious and doesn't it take longer? Why would people do that?
This is usually a holdover from working in other environments with other UI builders. A lot of UI builder programs are viewed as newbie hand-holding at best and outright harmful at worst. Interface Builder is unusual in that it's actually the preferred way to create interfaces for the platform.
Some people don't like mixing code functionality into interface designs. Another example is when Flash devs would include lots of code snippets directly in the stage (.fla files) rather than in separate .as files. With xibs it's not as big of a problem, since they are XML and can be merged quite easily when using source control. I personally like using xibs because we have a team of devs and designers -- splitting up the workload is nice. The designers can easily port their Photoshop/Fireworks designs into xibs and we can focus on the functionality.
Sometimes you want to do something that the UI builder can't quite handle (these situations aren't common, but they do come up now and then). Sometimes you may feel you have better control over what's happening when you write the code yourself. Me, I prefer to let the UI builders do it as much as possible, but sometimes it doesn't always work that nicely, and I sometimes have had to write the code myself.
Possibly because the Interface Builder is another tool to understand. Also, it's useful to know how to do things programmatically in case nibs don't give you enough functionality.
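For anyone wondering what "doing it programmatically" looks like, here is a minimal sketch. It uses modern Swift and Auto Layout purely for illustration (the answers above come from the Objective-C/nib era), and the class and control are made up for the example; the point is simply that the entire view hierarchy and its layout can live in code rather than in a xib.

```swift
import UIKit

// A small screen built entirely in code, with no nib/xib involved.
final class GreetingViewController: UIViewController {
    private let button = UIButton(type: .system)

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .white

        button.setTitle("Say hello", for: .normal)
        button.translatesAutoresizingMaskIntoConstraints = false
        button.addTarget(self, action: #selector(sayHello), for: .touchUpInside)
        view.addSubview(button)

        // Layout is expressed in code as well: center the button in its superview.
        NSLayoutConstraint.activate([
            button.centerXAnchor.constraint(equalTo: view.centerXAnchor),
            button.centerYAnchor.constraint(equalTo: view.centerYAnchor)
        ])
    }

    @objc private func sayHello() {
        print("Hello from a view built without Interface Builder")
    }
}
```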
Closed 10 years ago.
Can anyone point me to a Cognos API document and some example code? The best for me would be an API that can be accessed through Python, but examples in other languages are good also.
The Cognos SDK for .NET is horrible; I know because I just spent 3+ days trying to get even basic functionality working. It's clear that the person who developed the sample applications has no idea how to work with web services or .NET.
I managed to find a Cognos.WSDL file that you can try to use to generate your own proxy classes, but it's not WS-I compatible and thus won't work with wsdl.exe.
The cognosdotnet.dll and cognosdotnetassembly.dll assemblies are bloated. There are nearly 1000 classes defined in there. They basically wrapped their entire API set into a single assembly.
Cognosdotnet.dll defines all the types; many of them are confusing to work with, but all the raw materials you need are there.
Cognosdotnetassembly.dll defines the serializers. Why they even include them is beyond me. This file is huge (46 MB) and provides zero value. The problem is that the assembly with the type definitions (cognosdotnet.dll) has a dependency on it.
What I ended up doing was taking Reflector and code-generating cognosdotnet.dll, then removing the dependency on the serializers. I then created my own wrappers around it to make the API friendlier.
I would recommend using the reportrunner example as a starting point, to at least try to get your connectivity working, etc.
You haven't indicated which version of Cognos you're seeking assistance for, but if it's for Cognos 8, you should have the full API docs and sample code if you have the Cognos 8 SDK.
The SDK samples are provided mostly in Java, though some are .NET.
The SDK Developer Guide (again, Cognos 8) should contain enough information to help you get started on putting your own library together.
Closed 10 years ago.
I want to be able to choose the right branching strategy for most thinkable situations and organizations. So I'm looking for an extensive list of positive and negative effects of extending the use of code-repository branches in a development organization.
Please only post one pro or one con in each post, so that the voting system can help rank the feedback somewhat.
Pro: By keeping latest deployed version in trunk, small fixes can be rolled out quickly without extensive testing of the latest development version.
Pro: Developers can work more freely in tighter iterations without stepping on each other's feet.
Pro: if you have many branches you'll be pushed to adopt a modern DVCS (my experience is with Mercurial but I hear git or Bazaar are also good) rather than stay with a traditional centralized system (like, say, svn).
Pro: Branches can be used to facilitate 'what-if' scenarios when trying out new code. At the end, a decision can be made to merge the new feature or to abandon it.
Con: With too many branches in the air at the same time, you start forgetting where things were committed, where changes have been made, etc.
Con (and it can be a big one): Merging back at a point in the future. The longer the duration and the greater the deviation of code base, the harder your life will be. My advice: think very carefully about branching and ensure you only do it when necessary and consider the effort involved in merging at a later date should it be required.
Con: Merge nightmare.
Con: Greater learning threshold for junior developers.
Pro: Each update is independent of the others, so work can be parallelized.
Con: Someone has to manage the branch(es) and keep on top of things. In most teams this falls by the wayside.
Pro: Greater flexibility in diverging code for the purpose of simultaneously developing on or supporting multiple streams of work.