What is a native build environment? - cmake

I am simply reading information off the interwebs, currently the CMake about page, and I need this information to fill in the gaps; it helps to see the big picture.
Surely the answer is straightforward, I hope. What is a native build environment?
Context: I need to know how to build software on my machine (Code::Blocks, etc.), why I need to do this, the advantages of doing so, and so on. But first, I need to understand every piece of jargon I come across, and I could not find any explanation of exactly what a "native build environment" is, although I can speculate to some degree.

"Native" as in "runs directly in the host operating system" and not "runs in a virtual machine or emulator."
The particular point that CMake's about page is trying to convey is the manner in which CMake achieves cross-platform functionality: specifically not by virtualizing, but by cooperating/collaborating directly with the host system, and the "normal" ways the host system is used to doing things.
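To make that concrete: given a trivial source file (and assuming an accompanying CMakeLists.txt), CMake never compiles anything itself; it generates input for whatever build system is native to the host. The generator names below are standard CMake generators, though which ones are available depends on your installation:

    /* hello.c -- a trivial program to illustrate a "native" build.
     *
     *   cmake -G "Unix Makefiles" .          Linux/macOS: emits a Makefile
     *                                        that drives gcc or clang
     *   cmake -G "Visual Studio 17 2022" .   Windows: emits a .sln that
     *                                        drives cl.exe
     *
     * Either way, the host's own toolchain does the building;
     * nothing is virtualized or emulated.
     */
    #include <stdio.h>

    int main(void) {
        printf("built by my machine's own toolchain\n");
        return 0;
    }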
Is the build environment then just the directory holding all the garbage needed for a compiler to build the software?
That's an oversimplification – there's nothing to say that it's a single directory – but more or less, yes. The term is not jargon, it literally means "the state of the world" (aka, environment) needed for the build.
What would you call the other thing then, Non-native?
Sure, or virtualized, or emulated, or whatever other intermediate layer has been added.
Why do we need the distinction as well?
Why not? It's useful to have a concise, clear, simple term so we can communicate precisely and with minimal confusion and ambiguity.

Why 'non-native'? If you haven't already figured this out - there is something called cross-compilation.
Simply put, if I don't have access to target hardware (or an equivalent virtual machine) on which the software needs to run, how do I develop this software on my host and package it to run on that target?
Cross-compilation addresses this by providing the necessary tools that perform the translation (and other important low-level work) to give you the final software. Such an environment for developing software is called non-native.
Well, I believe we need the term to state the technique.
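As a concrete sketch of the difference (the ARM toolchain is just an example; on Debian/Ubuntu it comes from the gcc-arm-linux-gnueabihf package):

    /* target.c -- reports which architecture it was compiled for.
     *
     *   Native build:      gcc target.c -o target
     *   Non-native build:  arm-linux-gnueabihf-gcc target.c -o target
     *
     * The first binary runs on the host; the second runs only on the
     * ARM target, even though both came from the same source file.
     */
    #include <stdio.h>

    int main(void) {
    #if defined(__x86_64__)
        printf("built for x86-64\n");
    #elif defined(__arm__) || defined(__aarch64__)
        printf("built for ARM\n");
    #else
        printf("built for something else\n");
    #endif
        return 0;
    }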

Related

Needed: Program for easily switching developer environments

This might be the wrong forum, but it's filled with so many smart people that someone might know a solution.
One of my customers has given me several assignments that require quite similar yet very different development setups (different versions of class libraries and such), and the problem is that each time I need to switch between projects I have to do a lot of configuring to get the compilation to work and not pull in the wrong versions of the tools involved.
If I don't get it right, there can be a lot of cleanup afterwards.
It takes me at least an hour to switch projects, often several.
Now, I realize that the customer has an issue with a lot of branching in their setup, and they are working on that, but that's a long process.
So, my question is: is there some tool that allows me to take "snapshots" of working project environments and switch between them?
I'm working on Windows.
There are many options. One of them is virtualization: running an entire operating system on top of your operating system. It will emulate a fake hard drive, fake network card, etc. This will run an entire desktop 'in a box'.
In recent years containerization has become very popular: running a separate environment (file system, OS configuration) but leaving the heavy stuff, such as networking and hardware, to the host (in your case, Windows). This is geared towards running a single application (though this is flexible) 'in a box'.
In your case, since you need to encapsulate an entire development environment, it would seem to be best to go with a virtual machine.
A no-cost solution is VirtualBox. Paid alternatives such as VMware may offer better usability, performance, or features.
If your class libraries are in a specific location, you could containerize the compiler: you work on your PC, then build inside of a 'box' with its own environment. Docker is often used.
Depending on what language you are programming in, there may be a specific solution. Python has VirtualEnv, for example.

objective-c frameworks - Dynamic Library Install Name

I'm new to Objective-C & OS X architecture. I started playing with building a framework and then using it. I followed this great tutorial.
During the tutorial, I had to set the framework target's Dynamic Library Install Name to @rpath/MyFramework.framework/Versions/A/MyFramework. My understanding is that @rpath will expand to the loader's (consumer's) run-path search paths.
It seems as if the responsibility of loading the framework is split between the framework author and the consumer author. Could someone please explain why the author of the framework needs to be concerned with the consumer's run-path search path? For example, if the framework author set the Dynamic Library Install Name to point to some random directory (instead of @rpath), how would the client be able to consume the framework?
Thanks in advance.
It depends a lot on how the framework is being used. And it's important to remember that the framework construct has existed for a long time on the platform.
For a system framework, such as the ones that Apple creates, you're going to be quite happy that they keep the frameworks in a known location. In those cases, the paths that they use are fixed for the OS, and it guarantees that you don't accidentally load the wrong one. Further, as indicated in the framework documentation, these frameworks are loaded only once on the machine, regardless of how many times they are used (see Apple: What Are Frameworks?). The benefit here is performance, and in many cases it applies to both the code and the resources.
Given the recent move to randomize framework locations, and Apple's comment in the release notes that "Mountain Lion randomly relocates the kernel, kexts, and system frameworks at system boot," it certainly appears they're still sharing these resources, and thus still gaining this benefit.
For embedded frameworks, the situation is a lot more tedious, and Apple has moved through a variety of methods over the years to make it easier to find frameworks wherever they may be. Due, again, to the shared nature, it would make sense for Applications which share common library requirements to share them on the machine, both for purposes of efficiency, and to make sure they're at the same version if they're sharing data. So, for example, if you have two separate apps that use the same framework to work with shared data, you might put the shared framework in /Library/Frameworks and have both apps explicitly look for that, making sure that some other (possibly older) version of the framework, that has been loaded by another App, is not used instead.
In the end, there's a lot of flexibility for the Framework producer and consumer the way that it currently works. It means that the developer can decide to share a framework, include a private copy of the framework, or even do both, depending upon whether the framework exists on the machine or not. However, the price for that flexibility is the complexity that we have today.
Another example of a reason you might not want to use @rpath specifically is for tightly-linked embedded frameworks (yes, people embed frameworks within other frameworks). In these cases, you don't know where the outer framework will be loaded from, but you want to put the embedded framework inside of it, so that they stay together. Here @loader_path is relative to the code that is doing the loading, so that your plug-in's framework can find its resources correctly.
In answer to your specific example about somebody setting the Dynamic Library Install Name to a "random" location: in that case, you'd have to know that location. There might be many reasons for somebody doing this, such as wanting to discourage reuse by other programs, or because there are large resources within the framework that should only be installed in a known shared location.

Why use an IDE? [closed]

This may be too opinionated, but what I'm trying to understand is why some companies mandate the use of an IDE. In college all I used was vim, although on occasion I used NetBeans for Java. NetBeans was nice because it did code completion and had some nice templates for configuring some of the stranger services I tried.
Now that my friends are working at big companies, they are telling me that they are required to use Eclipse or Visual Studio, but no one can seem to give a good reason why.
Can someone explain to me why companies force their developers into restricted development environments?
IDE vs Notepad
I've written code in lots of different IDEs and occasionally in Notepad. You may totally love Notepad, but at some point using Notepad is industrial sabotage, kind of like hiring a gardener who shows up with a spoon instead of a shovel and a thimble instead of a bucket. (Who knows, maybe the most beautiful garden can be made with a spoon and a thimble, but it sure isn't going to be made quickly.)
IDE A vs IDE B
Some IDEs have team and management features. For example, Visual Studio has a screen that finds all the TODO: lines in source code. This allows for a workflow that may or may not exist in other IDEs. Ditto for source control integration, static code analysis, etc.
IDE old vs IDE new
Big organizations are slow to change. Not really a programming related problem.
Because companies standardize on tools, as well as platforms--if your choice of tools is in conflict with their standards then you can either object, silently use your tool, or use the required tool.
All three are valid; provided your alternative doesn't cause other team-members issues, and provided that you have a valid argument to make (not just whining).
For example: I develop in Visual Studio 2008 as required by work, but use VS2010 whenever possible. Solutions/Projects saved in 2010 can't be opened in 2008 without some manual finagling--so I can't use the tool of my choice because it would cause friction for other developers. We also are required to produce code according to documented standards which are enforced by Resharper and StyleCop--if I switched to a different IDE I would have more difficulty in ensuring the code I produced was up to our standards.
If you're good at using vim and know everything there is to know about it, then there is no reason to switch to an IDE. That said, many IDEs will have lots of useful features that come standard. Maintaining an install of Eclipse is a lot easier than maintaining an install of Vim with plugins X, Y, and Z in order to simulate the same capabilities.
IntelliSense is incredibly useful. I realize that vim has all sorts of auto-completion, but it doesn't give me a list of overloaded methods and argument hints.
Multiple panes to provide class hierarchies/outlines, API reference, console output, etc.. can provide you more information than is available in just multiple text buffers. Yes, I know that you have the quickfix window, but sometimes it's just not enough.
Compile as you type. This doesn't quite work for C++, but is really nice in Java and C#. As soon as I type a line, I'll get feedback on correctness. I'm not arrogant enough as a programmer to assume that I never make syntax errors, or type errors, or forget to have a try/catch, or... (the list goes on)
And the most important of all...
Integrated Debuggers. Double click to set a break point, right click on a variable to set a watch, have a separate pane for changing values on the fly, detailed exception handling all within the same program.
I love vim, and will use it for simple things, or when I want to run a macro, or am stuck with C code. But for more complicated tasks, I'll fire up Eclipse/Visual Studio/Wing.
Sufficiently bad developers are greatly assisted by the adoption of an appropriately-configured IDE. It takes a lot of extra time to help each snowflake through his own custom development environment; if somebody doesn't have the chops to maintain their own dev environment independently, it gets very expensive to support them.
Corporate IT shops are very bad at telling the difference between "sufficiently bad" and "sufficiently good" developers. So they just make everybody do the same thing.
Disclaimer: I use Eclipse and love it.
Theoretically, if the whole team uses the same tool, it decreases the amount of training needed to get an inexperienced developer up to speed with the quirks of that particular IDE.
Anyway, most top companies don't force developers to use a specific IDE for now...
I agree with this last way of thinking: you don't need your team to master one particular tool; having team knowledge of many improves your likelihood of knowing better ways to solve particular problems.
For me, I use Visual Studio with ReSharper. I cannot be nearly as productive (in .NET) without it. At least, nobody has ever shown me a way to be more productive... Vim is great, and you can run Vim inside Visual Studio + R# and get all the niceties that the IDE provides, like code navigation, code completion, and refactoring.
Same reason we use a hammer to nail things instead of rocks. It's a better tool.
Now if you are asking why you are forced to use a specific IDE over another, well that's a different topic.
A place that uses .NET will use Visual Studio 99% of the time, at least that's what I've seen. And I haven't found anything out there that is better than Visual Studio for writing .NET applications.
There is much more to an IDE than code completion:
debugging facilities
XML validation
management of servers
automatic imports
syntax checking
graphical modeling
support of popular technologies like Hibernate, TestNG or Spring
integration of source code management
indexing of file names for quick opening
follow "links" in code: implementation, declaration
searching for classes or methods
code formatting
process monitoring
one click/button debugging/building
method/variable/field/... renaming
etc
Nothing to do with incompetence from the programmers. Anybody would be A LOT less productive using vim for developing a big Java EE application.
How big were your projects at college? A couple of classes in a couple of files? Or a couple of hundred classes in a couple of hundred files?
Today I had the "honor" of looking at a file in a rather large project where the programmer opted to use vi (yes, vi, not vim) and a handcrafted command-line compiler call (no make). The file contained one function spanning about 900 lines, with a series of if-else-if-else constructs (because that way you have all your code in one place!!!). Macho programmer at his finest.
OK, there are very good reasons for enforcing a particular toolset within a production environment:
Companies want to standardize everything so that if an employee leaves they can replace that person with minimal effort.
Commercial IDEs provide a complex enough environment to support a single interface for a variety of development needs and supporting varying levels of code access. For instance the same file-set could be used by the developer, by non-programmers (graphics designers etc.) and document writers.
Combine this with integrated version control and code management without the need of someone learning a particular version control system, all of a sudden IDEs start to look nicer and nicer.
It also streamlines maintenance of build systems in a multi-homed environment.
It is easier to walk someone through an IDE over the phone or via video, and IDEs probably come with tutorials of their own.
etc. etc. and so forth.
The business decision making behind enforcing a standardized environment goes beyond the preference of a single programmer or for that matter perhaps the understanding of the programming team.
Using an IDE helps an employee to work with huge projects with minimal training. Learn a few key combos - and you will comfortably work with multi-thousand-file project in Eclipse, IDE handles most of the work for you under the hood. Just imagine how many years of learning it takes to feel comfortable developing such projects in Vim.
Besides, with an IDE it is easy to support common coding standards across the entire team: just set a couple of options and an IDE will force you to write code in a standardized way.
Plus, an IDE gives a few added bonuses like refactoring tools (especially good in Eclipse), integrated debugging (especially good in Visual Studio), IntelliSense, integrated unit tests, integrated version control, etc.
The advantages and disadvantages of using an IDE also greatly depends on the development platform. Some platforms are geared towards the use of IDEs, others are not. As a rule of thumb, you should use IDE for Java and .Net development (unless you're extremely advanced); you should not use IDE for ruby, python, perl, LISP etc development (unless you're extremely new to these languages and associated frameworks).
Features like these aren't available in vim:
Refactoring
Integrated debuggers
Knowing your code base as an integrated whole (e.g., change a Java class name; have the change reflected in a Spring XML configuration)
Being able to run an app server right inside the IDE so you can deploy and debug your code.
Those are the reasons I choose IntelliJ. I could go back to sticks and bones, but I'd be a lot less productive.
As said before, the question of using an IDE is basically one of productivity. However, there are some questions a company should consider when choosing a specific IDE. These include:
Company culture
Standardized use of the tool, making it accessible to all developers. That eases training, reduces costs, and speeds up the learning curve.
Requirements from a specific contract. For example, some development packages are fully supported (i.e. with plugins) by one IDE and not by others, so if you are working under a support contract you will want to work with the supported IDE. A concrete example is when you are working with an uncommon OS like VxWorks, where you can work with Workbench (which really is Eclipse with a lot of VxWorks-specific plugins).
Company policy (and also I include the restriction on company budget)
Documentation relating to the IDE
Community (a strong one can contribute to and further develop the IDE, and help you with your questions)
Installed base (no one wants to be the only human in the world using that IDE)
Support from manufacturers (an IDE about to be discontinued probably will not be a good option)
Requirements from the IDE itself (e.g. cross-platform support, or hardware requirements that are incompatible with some of the company's machines)
Of course, there is a lot more. However, I think this short list helps you see that some of these decisions are not so easy to make when we are talking about money and larger companies.
And if everyone starts using their own IDE, think what a mess it will be when another developer starts doing maintenance on your code. How do you think the application will look in the version manager? Now think about a company with 30+ developers, each using their own IDE (each with its own configuration files, versions, and all that stuff)...
http://xkcd.com/378/
Real programmers use the best tools available to get the job done. Some companies have licenses for tools but there's nothing saying you can't license/use another IDE and then just have the other IDE open to copy/paste what you've done in your local IDE.
The question is a bit open-ended, perhaps you can make it community wiki...
As you point out, the IDE can be useful, or even a must-have, for some operations, like refactoring or even project exploration: I use Eclipse at work, on Java projects, and I find it very useful to get a list of all occurrences of the usage of a public method or a class in a project. Likewise, I appreciate being able to rename it from where it is defined and having all these occurrences automatically updated.
The fact I have the JavaDoc displayed when hovering over a name is very nice too. Like autocompletion, jump to a class name, etc.
And, of course, debugging facilities...
Now, usage of Eclipse isn't mandatory in our shop! Some years ago, some people used the Delphi IDE (I forget its name), I tried NetBeans, etc. But I think we have de facto standardized on Eclipse, though it was a natural evolution rather than a company policy. And we often just open files in a text editor when we need a quick update...

Most appropriate platform independent development language

A project is looming whereby some code that I will be writing may be deployed on any hardware that potential clients happen to have. It's a business application that will be running 24/7, so I envisage that most of the host machines will be server-type boxes, but smaller clients might, for example, just have a simple PC.
A few more details about the code I will be writing:
There will be no GUI.
It will need to communicate with another bespoke 'black box' device over an Ethernet network.
It will need to communicate with a MySQL database somewhere on the network.
I don't have any performance concerns as a) the number of communications with the black box will be small, around 1 per second, and the amount of data exchanged will be tiny (around 1K each time), b) the number of read/writes with the database will be small, around 5 per minute, and again the amount of data exchanged will be tiny and c) the processing that needs to be performed is fairly simplistic.
Nothing I'm doing is very 'close to the metal' so I don't want to use languages that are too low level. Ease of development and ease of deployment are my main priorities.
I'm not expecting there to be a perfect solution so I can live with things like, for example, having to have slightly different configuration files for Windows machines than for Linux boxes etc. I would like to avoid having to compile the software for each host machine if possible though.
I would value your thoughts as to which development language you think is most suitable.
Cheers,
Jim
I'd go with a decent scripting language such as Python, Perl or Ruby personally. All of those have decent library support, can communicate easily with both local and remote MySQL databases and are pretty platform independent.
The first thing we need to know is what language skills you already have. This is likely to be a fairly big determiner of which choice you would ideally make.
If I were doing this, I'd suggest Java for a few reasons:
It will run almost anywhere and meets the requirements you've outlined.
It's not an esoteric language, so there will be plenty of developers.
I already know how to program in it!
Probably the most extensive library ecosystem of any of the development platforms.
Also note that you could write it in another language on the JVM if you're more comfortable with Ruby or Python.
Sounds like Perl or Python would fit the bill perfectly. Which one you choose would depend on the expertise of the people building and supporting the system.
On the subject of scripting languages versus Java, I have been disappointed with developing command-line tools using Java. You can't directly execute them; you have to (1) compile them and (2) write a shell script to execute the jar file, and this script may differ between platforms. I recommend Python because it runs anywhere and it's got a great SQL library, mysql-python. The library is ready to use on Windows and Linux. Python also has a lot less boilerplate; you'll write fewer lines of code to do the same thing.
EDIT: when I talked about JARs being executable or not, I was talking about whether they are directly executable by the OS. You can, of course, double-click on them to run them if your file manager is set up to do so. But when you're in a terminal window and you want to run a Java program, you have to type "java -jar myapp.jar" instead of the usual "./myapp.jar". In Python one just runs "./myapp.py" and doesn't have to worry about compiling or class paths.
If all platforms are standard PCs (or at least run Linux), then Python should be considered. You can compile it yourself if no package exists for your version. Also, you can strip the standard library easily from things that aren't available and which you don't need (sound support, for example).
Python doesn't need lots of resources, it's easy to learn and read.
If you know Perl, you can try that. If you don't use Perl on a daily basis, then don't. The Perl syntax is hard to remember and after a week, you'll wonder what the code did, even if you wrote it yourself.
Perl may be of help to you as it is available for many platforms and you can get almost any functionality by simply installing modules from CPAN.
Python or Java. They both are easy to deploy on both the server environments and the desktop environments you mention - i.e., Linux/Solaris and Windows.
Perl is also a nice choice, but it depends on how well you know Perl, how well other people that will maintain your code know Perl, and number of desktop users that are savvy enough to handle an install of the Windows Perl version(s).
As Java supports Python via Jython, I'd go with a JVM requirement myself, but I'd personally go with a Java application all the way for such a system you describe.
I would say use C or C++. They are platform-independent, though you will have to compile for each platform.
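To make the "compile for each platform" trade-off concrete, here is a minimal sketch of the kind of polling client described in the question, written against POSIX sockets. The device address, port, and one-line protocol are invented for illustration, and on Windows the same logic needs the Winsock variants (WSAStartup, closesocket, and so on), which is exactly the per-platform cost being discussed:

    /* poll_device.c -- one polling exchange with the "black box".
     * POSIX sockets only; compile with e.g.
     *   gcc poll_device.c -o poll_device
     */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port = htons(9000);                        /* made-up port */
        inet_pton(AF_INET, "192.168.1.50", &addr.sin_addr); /* made-up address */

        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("connect");
            close(fd);
            return 1;
        }

        /* The real application would do this once per second. */
        const char request[] = "STATUS\n";
        write(fd, request, sizeof request - 1);

        char reply[1024];        /* ~1K per exchange, per the question */
        ssize_t n = read(fd, reply, sizeof reply - 1);
        if (n > 0) {
            reply[n] = '\0';
            printf("device says: %s", reply);
        }

        close(fd);
        return 0;
    }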
Or use Java. That runs in a virtual machine, so it is truly cross-platform, and it is not as low-level as C.

Will static linking on one unix distribution work but not another?

If I statically link an executable in Ubuntu, is there any chance that that executable won't work in another distribution such as Mint or Fedora? I know processor types are a factor, but other than that, is there anything else I have to be wary of? Sorry if this is a dumb question. Thanks for any help.
There are a few corner cases, but for the most part, you should be in good shape with static linking. The one that comes to mind is libnss. This particular library is essentially impossible to link statically, because of the way it does its job (permissions, authentication, security tasks). As long as the glibc versions are similar, you should be OK on this issue, though.
If your program needs to work with subtle features of the kernel, like volume managers, you've got a pretty slim chance of getting your program to work, statically linked, across distros, because the kernel interfaces may change slightly.
Most typical applications, the kind for which it even makes sense to discuss portability, like network services, GUI applications, and language tools (compilers/interpreters), won't have a problem with any of this.
If you statically link a program on one computer and then move it to another computer in which the system basically runs the same way, then it should work just fine. That's the point of static linking; that there are no other files the program depends on - it's entirely self-contained, so as long as it can run at all, it will run the same way it does on its "host" system.
This contrasts with dynamic linking, in which the program incorporates elements of other files (libraries) at runtime. If you move a dynamically linked program to another system where the libraries it depends on are different (or nonexistent), it won't work.
In most cases, your executable will work just fine. As long as your executable doesn't depend on anything unusual being present for it to function, there will be no problem. (And, if it does depend on something unusual being present, then you'll have the same issue even if you dynamically link.)
Statically linking is usually safer than dynamically linking for compatibility between different UNIX environments, as long as the same CPU is in use.
To have a statically linked binary fail, again assuming the same processor architecture, you would have to do something such as link on a system using the a.out binary format and try to execute it on a system running ELF, in which case the dynamically linked version would fail just as badly.
So why do people not routinely link statically? Two reasons:
It makes the executable larger, sometimes MUCH larger, and
If bugs in the libraries are fixed, you'll have to relink your program to get access to the bug fixes. If a critical security bug is fixed in the libraries, you have to relink and redistribute your exe.
On the contrary. Whatever your chances are of getting a binary to work across distributions or even OSes, those chances are maximized by static linking. Static linking makes an executable self-contained in terms of libraries. It can still go wrong if it tries to read a file that's not there on another system.
For even better chances of portability, try linking against dietlibc or some other libc. An article at Linux Journal mentions some candidates. A smaller, simpler libc is less likely to depend on things in the filesystem that differ from distro to distro.
I would, for the reasons noted above, avoid statically linking something unless you absolutely must.
That being said, it should work on any other similar kernel of the same architecture. But note that if you statically link on a machine running Linux 2.4.x, the loader VDSO is going to be different on Linux 2.6. (The VDSO, or virtual dynamic shared object, is a shared object that the kernel exposes to every process and that contains loader code.)
Other pitfalls include things in /etc not being where you'd think, logs being in different places, and system utilities being absent or different (Ubuntu uses update-rc.d, RHEL uses chkconfig), etc.
There are times when you just have no choice. I was writing a program that talked to LVM2's string-based cmdlib interface instead of using execv(), and lo and behold, 30% of the distros I needed to support did NOT include that library and offered no way of getting it. So I had to link against the static object when producing binary packages.
If you are using glibc, you can be confident that stuff like getpwnam() and friends will still work... just make sure to watch any hard-coded paths (better yet, make them configurable at run time).
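A minimal sketch of that getpwnam() case (assuming gcc and glibc): building it with -static produces a linker warning that the lookup will still need, at run time, the shared NSS libraries from the glibc version used at link time, which is why the glibc versions need to be similar, as noted in the other answers.

    /* nss_demo.c -- the getpwnam() caveat in miniature.
     * Build:  gcc -static nss_demo.c -o nss_demo
     * gcc warns that getpwnam in a statically linked program still
     * requires the shared NSS libraries of the link-time glibc.
     */
    #include <pwd.h>
    #include <stdio.h>

    int main(void) {
        struct passwd *pw = getpwnam("root");
        if (pw == NULL) {
            /* On a target whose glibc/NSS modules don't match the
             * link-time glibc, the lookup can fail and land here. */
            fprintf(stderr, "lookup failed\n");
            return 1;
        }
        printf("root's home directory: %s\n", pw->pw_dir);
        return 0;
    }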
As long as you can guarantee it'll only be executed on a similar version of the OS on similar hardware, your program will work fine if it is statically linked. So, if you build for a 2.6 Linux and statically link, you will be fine running on (almost) all 2.6 Linux distributions.
Be warned you can't statically link some parts of GLIBC so if you're using them you'll have to dynamically link anyway. From memory the name service stuff (nss) parts required dynamic linking when I was investigating it.
You can't statically link a program for (say) Linux and then expect it to run on BSD or Windows. BSD and Windows don't present or handle their system calls in the same way Linux does. I tell a slight lie because the BSDs have a Linux emulation layer that can be enabled, but out of the box it won't work.
No, it will not work. Static linking for distribution independence is a concept from the old Unix days and is not recommended. In fact, often you can't do it anyway, as many libraries are not available as static libraries.
Follow the Linux Standard Base way, this is your only chance to get as much cross distribution portability as possible.
The LSB also works fine if you program for FreeBSD and Solaris.
There are two compatibility questions at issue here: library versions and library inventory.
You don't say what libraries you are using.
If you have no '-l' options, then the only 'library' is glibc itself, which serves as the interface to the kernel. Glibc versions are upward compatible. If you link on a glibc 2.x system you can run on a glibc 2.y, for y > x. The developers make a firm commitment to this.
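If you want to verify that promise on a target box, glibc will tell you its own version at run time. A small sketch; gnu_get_libc_version() is a GNU extension, so this only builds against glibc:

    /* glibc_version.c -- prints the glibc version actually in use. */
    #include <stdio.h>
    #include <gnu/libc-version.h>

    int main(void) {
        /* If you linked on glibc 2.x, any 2.y reported here with
         * y >= x should be compatible, per the upward-compatibility
         * promise described above. */
        printf("glibc %s\n", gnu_get_libc_version());
        return 0;
    }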
If you have -l options, static linking is always safe. If you are dynamically linked, you have to ensure that (1) the library is present on the target system, and (2) has a compatible version. Your Mileage Might Vary as to whether the target distro has what you need.