Is it better in some sense to vectorize code by hand, using explicit pragmas or intrinsics, or to rely on the compiler's auto-vectorization? To get optimum performance from auto-vectorization, one has to monitor the compiler output to ensure that loops are being vectorized, or modify them until they are vectorizable.
With hand coding, one is certain that the desired instructions are being emitted, but now the code is likely not portable (either to other architectures or other compilers).
Auto-vectorization has never worked out well for me; it seems to handle only very trivial loops at the moment.
I use the pragma/intrinsics approach and take a look at the assembly. If the compiler generates bad code (like spilling SSE registers onto the stack or adding redundant moves), I use inline assembler for the whole loop body.
Portability, by the way, is not a problem. Often you start with a C/C++ loop and optimize it using intrinsics. Just keep the old loop and use it as a unit test and fallback for your SIMD implementation. It's also always wise to be able to remove all SIMD code from a project via a compile-time define; debugging an application is much easier that way, and the same define can be used for cross-compilation.
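To illustrate that pattern, here is a minimal sketch (the function names and the USE_SIMD define are placeholders of mine, not from the original post): a scalar reference loop kept as the unit-test oracle and fallback, with an SSE intrinsics version selected by a compile-time define.

    #include <xmmintrin.h>  // SSE intrinsics
    #include <cstddef>

    // Scalar reference implementation: kept as the fallback and as a
    // unit-test oracle for the SIMD version.
    void add_arrays_scalar(float* dst, const float* a, const float* b, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            dst[i] = a[i] + b[i];
    }

    #ifdef USE_SIMD
    // SSE version: processes four floats per iteration, then falls back to
    // the scalar loop for the remaining elements.
    void add_arrays(float* dst, const float* a, const float* b, std::size_t n) {
        std::size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(dst + i, _mm_add_ps(va, vb));
        }
        add_arrays_scalar(dst + i, a + i, b + i, n - i);  // scalar tail
    }
    #else
    // Building without USE_SIMD leaves only the plain C++ loop, which makes
    // debugging and cross-compilation straightforward.
    void add_arrays(float* dst, const float* a, const float* b, std::size_t n) {
        add_arrays_scalar(dst, a, b, n);
    }
    #endif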
I would never rely on automatic vectorization from any compiler. With gcc I would be doubly wary because the effects of gcc's optimizations always vary from version to version. Almost everyone I know who relies on special optimizations or gcc extensions has to deal with breakage when a new gcc version is released.
You can usually trust pragmas and intrinsics, but you should keep a sharp eye on release notes for new gcc versions, and you should tell your own users what gcc version is needed to compile your code.
Once or twice when vectorization really mattered, we've added something to the test suite to call objdump and verify that vector instructions are actually being used. It would be nice to be able to detect 'bad vector code' (as Nils describes) automatically as well, but we've never gotten that far.
I have two years of experience in IT, and I haven't seen any recursive code. I would like to know if there is any company or organization that uses recursive code in their production environment. It would be great if someone could also explain the use cases.
All code which uses variadic templates necessarily uses recursion; see e.g. http://kevinushey.github.io/blog/2016/01/27/introduction-to-c++-variadic-templates/.
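For instance, here is a minimal sketch of the classic pattern (this particular sum function is illustrative, not taken from the linked post): the variadic overload peels off one argument and recurses until the single-argument base case is reached.

    #include <iostream>

    // Base case: a single argument ends the recursion.
    template <typename T>
    T sum(T value) {
        return value;
    }

    // Recursive case: peel off the first argument and recurse on the rest.
    template <typename T, typename... Rest>
    T sum(T first, Rest... rest) {
        return first + sum(rest...);
    }

    int main() {
        std::cout << sum(1, 2, 3, 4) << '\n';  // prints 10
    }

(Since C++17, fold expressions can replace the recursion, but the recursive formulation is what pre-C++17 code in the wild uses.)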
The answers to this question give a few recursion examples. The most convincing one is a hand-coded compiler (or rather parser) implementation for a recursively defined language (like C and most others, where blocks can contain blocks, expressions can contain expressions, etc.). Perhaps it's only most convincing to me because I did that in a CS course, but still. Even here it is admittedly possible that production compilers are created with tools and are not recursive. If anybody can shed light on the inner workings of gcc or one of the other open-source compilers, I'd appreciate it.
I would generally assume that some programs handling recursive data structures with a limited recursion depth (like balanced trees, as opposed to arbitrarily deep unbalanced trees or lists) use recursion, just because it is simple and elegant, and the depth limit removes the greatest obstacle to recursion.
Come to think of it, I have actually used recursion to parse a simple "option language" for an internal custom-made program which has an option -eval <file>, where the referenced file contains more options, possibly including more -evals. The referenced files are indeed simply recursively evaluated.
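A sketch of that idea (the function and option names here are hypothetical, since the original program isn't shown): the evaluator simply calls itself whenever it encounters another -eval.

    #include <fstream>
    #include <iostream>
    #include <string>

    // Hypothetical recursive option evaluator: "-eval <file>" pulls in
    // another option file, which may itself contain more "-eval" directives.
    void eval_options(const std::string& path) {
        std::ifstream in(path);
        std::string token;
        while (in >> token) {
            if (token == "-eval") {
                std::string nested;
                if (in >> nested)
                    eval_options(nested);  // recursion mirrors the file structure
            } else {
                std::cout << "option: " << token << '\n';  // handle normally
            }
        }
    }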
If the programs are basically CRUD (create, retrieve, update, delete) screens interfacing to a database of some kind, you won't see much call for recursion. And that's a lot of serious, real-world programming.
But large numbers of programs have trees. E.g. an artwork tree, or a 3D animated object tree. Once you work with trees, recursion is by far the easiest way to solve problems.
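For example, computing the depth of a tree is a one-liner with recursion, whereas an iterative version would need an explicit stack. A minimal sketch (the node type is hypothetical):

    #include <algorithm>
    #include <memory>

    // A hypothetical binary tree node, e.g. one level of an artwork or
    // scene hierarchy.
    struct Node {
        std::unique_ptr<Node> left, right;
    };

    // The recursion follows the recursive shape of the data.
    int depth(const Node* n) {
        if (!n) return 0;
        return 1 + std::max(depth(n->left.get()), depth(n->right.get()));
    }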
There's also the "functional programming" paradigm which replaces iteration with recursion. It has some theoretical benefits and is used in some environments, though it's still a bit academic and experimental.
For your information, I worked at an IT company in my earlier days where I used to write loads of recursive code for the production environment. Moreover, it depends upon the coder; if you wish to see some, I can send a recursive code example.
I am about to start working on my master's project, which involves processing images captured by an iOS application. My supervisor gave me the option to develop the application with either Swift or Objective-C. I have searched online for which of them is better in terms of image processing, and I still could not determine which one it is. Therefore, what would you suggest?
You can do everything with Swift that you can do with Objective-C, but Swift offers additional advantages.
Swift has a simpler but more powerful syntax. You only need to maintain single source files, instead of interface/implementation pairs. Swift has better ways of dealing with errors and optional properties. Most importantly for your image-processing app, the compiler can often optimize Swift code to run faster than equivalent Objective-C.
As with Cocoa objects, any pre-existing Objective-C image-processing frameworks you might wish to use can be called from Swift with no problem, so you can "mix and match" as desired.
The only reason I can think of for choosing Objective-C over Swift would be if time were of the essence, you were already totally up to speed with Objective-C, and you had no time to learn Swift.
Any answer to your question will contain a fair amount of opinion; you need to weigh up the pros and cons yourself and make a decision. There is no right or wrong answer here.
which of them is better in terms of image processing
Neither. You'll also probably be using existing frameworks a fair amount and writing C-level (as in the language) code, and that will work out much the same in either language.
I am about to start working on my master project
Go with whichever you are most comfortable with; you want to be able to concentrate on the topic of your master's, not spend time learning a language/paradigm you are unfamiliar with.
Consider that Swift is still in a state of flux: Swift 3 is expected sometime in 2016, and it will change things. Apple have made it clear that (at least the first versions of) Swift is not yet stable and code may be broken by updates. This doesn't mean you shouldn't use Swift, but if you do, try to stick with the same version during your master's and resist any temptation to upgrade.
HTH, and may your master's go well.
Lua beginner here; I am looking into Lua.
My question is: since an object in Lua is just a table, new fields can be added at runtime. If I have a typo in the code and, instead of changing a field, I create a new field, won't that bring mayhem? ;)
I would only be able to discover the bug at runtime, if I even get to that point in the program.
(Of course the table concept has other benefits, like metaprogramming without reflection, but my question is about "safety" or predictability.)
Is that the right conclusion?
Yes, that is correct.
When working with a dynamically typed language, you'll need an extensive suite of unit tests to make sure you cover all possible scenarios and prevent the kind of mayhem you described.
If you want to protect yourself from this, I'd recommend looking at a statically typed language, such as Java, C#, or Scala, and letting the compiler do the type-checking for you.
This is why Twitter moved from Ruby to Scala: as the project grows, it gets progressively harder to keep track of bugs that can only be caught at runtime in a dynamically typed language but could be caught at compile time by a static language's compiler.
Dynamically typed languages are based on duck typing:
If it walks like a duck, and quacks like a duck, it is a duck
I prefer this version:
If it walks like a duck, and quacks like a duck, it’s probably gonna throw exceptions at runtime.
Lua gives you the mechanisms to have at least as much safety as other dynamic programming languages with baked-in object models do. See here for instance.
Errors will still happen at runtime only though, so you need a test suite with decent coverage.
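For instance, here is a minimal sketch of one such mechanism (the strict_object helper is illustrative, not from the linked material): a metatable that turns reads and writes of undeclared fields, e.g. from a typo, into immediate errors.

    -- Wrap a table so that only fields present at construction time may be
    -- read or assigned later; anything else (e.g. a typo) raises an error.
    local function strict_object(fields)
      local obj = {}
      for k, v in pairs(fields) do obj[k] = v end
      return setmetatable(obj, {
        -- __newindex only fires for keys absent from obj itself, so
        -- assignments to declared fields still work normally.
        __newindex = function(_, k)
          error("attempt to create undeclared field '" .. tostring(k) .. "'", 2)
        end,
        __index = function(_, k)
          error("attempt to read undeclared field '" .. tostring(k) .. "'", 2)
        end,
      })
    end

    local point = strict_object{ x = 1, y = 2 }
    point.x = 10     -- fine: x was declared
    --point.xx = 10  -- would raise an error: 'xx' is a typo, caught immediately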
There are projects to add static typing to Lua. Fabien Fleutot, who created metalua, presented his at the latest Lua Workshop. See:
his slides
a high-level overview of his work
a more formal paper about it
I would like to know if there is any language that compiles to VBA, like we have CoffeeScript for JS, Less for CSS...
If there is not, is there something that prevents us from achieving that? Would it be a bad idea?
I guess that would help people who are used to working with more modern languages be a lot more productive.
What would it take to do that? Could we reuse the CoffeeScript grammar and parser, but modify the steps that generate JavaScript to generate VBA instead? A subset of VBA would be just fine.
In general, it's always possible to compile from one Turing-complete language to any other. The result might not be fast, but it's generally fairly straightforward.
So why was CoffeeScript created ex nihilo instead of compiling an existing language? Integration.
Suppose, for example, that we wanted to write JS in Haskell. You could easily implement a Haskell-to-JavaScript compiler. Now suppose, writing in Haskell, you wanted to pop up a dialog box on a Web page. In JS, you'd write alert("hello"), but if your H2JS compiler is correct, there won't be any alert function, because Haskell functions don't have side effects (perhaps the whole reason you wanted to write in Haskell was to have exactly that kind of guarantee: that calling a function won't pop up a dialog box).
There are many ways that your H2JS compiler could provide this functionality, but it's not necessarily obvious which one was chosen. You can't just read JavaScript documentation to learn how to do browser-y things; you also need to read the documentation for your H2JS compiler!
On the other hand, CoffeeScript is similar enough to JS that it's pretty obvious how to pop up alerts, edit the DOM, etc., just from knowing how it's done in JS.
So it's not hard to do in a slapdash way, but if the source language is very different from VBA, it'll likely be tricky to do the VBA-specific things that make the project useful in the first place.
I'm hoping to produce a visualization of the code base that highlights areas which are overly complex and intertwined.
I know what clang is, but I'm not sure it gives me what I want in this case.
AnalysisTool: I know it's a Clang wrapper, but it also provides dependency diagrams.
AnalysisTool was originally created to serve two main purposes: to provide an easy-to-use executable binary of Clang static analyzer and to customize Clang by providing some additional checks. When Clang static analyzer was in its early stages, the only option for developers to try it out was to check out the latest source code of LLVM and Clang, compile it, and use the analyzer from the command line. AnalysisTool provided an easy-to-use GUI interface and removed the need to touch Clang source code. It also provided automatic updates, so that users of AT could always use the latest Clang static analyzer.
lizard:
This tool will calculate the cyclomatic complexity of C/C++/Objective-C code without caring about header files and preprocessors. So the tool is actually calculating how complex the code 'looks' rather than how complex the code 'is'.
People will need this tool because it's often very hard to get all the include folders and files right with similar tools, but we don't really need that kind of accuracy when it comes to cyclomatic complexity.
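To make the metric concrete (this example is mine, not from the lizard documentation): cyclomatic complexity is roughly one plus the number of decision points, so the function below scores 3 regardless of any preprocessor or header trickery around it.

    // Two decision points (the for and the if) plus the baseline path:
    // cyclomatic complexity = 3.
    int count_positive(const int *values, int n) {
        int count = 0;
        for (int i = 0; i < n; i++) {   // decision point 1
            if (values[i] > 0) {        // decision point 2
                count++;
            }
        }
        return count;
    }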
These are the only two tools I know, hope this helps.
Our Source Code Search Engine provides the ability to search across large sets of source code in multiple languages, using the code structure of each language to guide the search and minimize false positive matches.
As a side effect of its indexing process, it computes various complexity metrics (Halstead, McCabe) for files and writes that to an XML file you can process/display any way you like.
It has language front ends for C and C++; either of them ought to be able to process Objective-C well enough for the SCSE to operate and, for the OP's purposes, to compute such complexity metrics.
The downloadable version has the C front end included.
Edit June 2019: It has an Objective-C front end now.