As libgit2 is a library, is there any existing C/C++ project which depends on libgit2 and exposes the usual Git command line interfaces (like git clone, git commit, etc.)?
The closest you may find is in the examples folder of the libgit2 project.
As stated in the README:
These examples are a mixture of basic emulation of core Git command line functions and simple snippets demonstrating libgit2 API usage (for use with Docurium). As a whole, they are not vetted carefully for bugs, error handling, and cross-platform compatibility in the same manner as the rest of the code in libgit2, so copy with caution.
That being said, you are welcome to copy code from these examples as desired when using libgit2. They have been released to the public domain, so there are no restrictions on their use.
One of the long-term goals of the libgit2 project is to run the whole git.git test suite against those examples (to ensure compatibility with the core git implementation), so there's a reasonable chance they'll keep on evolving.
From time to time a project comes along that tries to reimplement the git tool on top of libgit2 or one of its bindings, but these don't tend to go very far.
The git interface is a collection of quirks, and it's not a very rewarding job to reimplement them in your own tool. On top of that, if you do go through and reimplement the interface, what you end up with is a version of git with mismatched features, which is effectively what you had before you even started.
There are some systems where it might be worth going through all the trouble in order to avoid needing a Unix-like environment with a shell or Perl, but there is also an effort to port those parts of git to C, which tackles the problem from the other side.
I'm using git submodules to build and share components between projects. The project is not in production yet, so at this point submodules are serving us well.
But I'm concerned about maintenance and deployment. Would it be a good idea to transform it into an npm package?
An npm package will allow fragmentation across different package versions. On the other hand, git submodules have a bit of a learning curve, and the tooling is really not that good. With git submodules, you have all the source in one folder.
If it's at all possible, I'd recommend using a plain monorepo for all projects. You may need to create build-time variables (via Babel plugins), and you may need some sort of "live config" served from the backend. I worked with git submodules for a year, and I've recently worked with a project that uses npm to share code.
I would recommend using only one git submodule, for all shared code, instead of several submodules. I would strongly consider using lerna, and use your one git submodule to track lerna's packages directory. And if the team decides they don't like git submodules, you can easily make this repo a sibling git repo, instead of a submodule. However, above all this, I'd recommend using a plain monorepo.
Here's a great talk on monorepos from Netflix: https://www.youtube.com/watch?v=VNqmHJtItCs (strong focus on discouraging npm-style packages)
Here's Google's infamous monorepo talk: https://www.youtube.com/watch?v=W71BTkUbdqE
This is a great site to read to help you think about good development flows: https://trunkbaseddevelopment.com/ (it primarily advocates for the monorepo approach)
If you are developing software for different clients (different people/companies paying you for similar projects), and have some agreement that the projects should be at least ~80% the same, you may really enjoy using build flags to help get started on splitting functionality. But you should very proactively keep the code around the build flags clean, and refactor it into reusable components/packages. Give each client some sort of build-flags.json. Build flags should be named for features only, each of which can in theory be toggled individually. Some code may be totally custom for each project; in that case you may want to consider dynamic imports, but generally this is a pain point I have yet to fully cross, although I have plenty of unrefined ideas around it.
If a monorepo is just not happening, I would actually recommend using npm packages+separate repos over git submodules, assuming you can do good semantic versioning of the package. (And, yalc seems to be a good tool for linking together packages, as opposed to the standard npm/yarn link)
My findings after trying lerna, npm workspaces, and git submodules: I find it is not a case of one versus the other.
The reason why I say this is because one can have submodules that are part of the monorepo. Doing exactly this made my development experience better as I could clone an existing project and actively develop it within the bigger project (monorepo). I could then contribute back to the cloned project once satisfied with the changes. This is something that you cannot do with npm workspaces alone. Hence my argument that it is not a case of one vs the other. They solve different problems and can therefore complement each other.
Before using npm workspaces I would use npm link all the time. npm workspaces makes this use-case of developing with multiple packages more convenient. Even when the team you work with does not use a monorepo; you could use one to develop multiple packages and test them in conjunction. Once satisfied, you can push the individual repos. This is something you cannot do with git alone.
Maybe you can think of more novel ways of combining the features of npm and git.
I wrote a piece of software which works well on my own box. It has been a headache to get it onto another box, though.
The main problem is that it uses a library which is not covered by apt-get; it's called pngwriter. pngwriter is also very finicky and not easily installed, and it has version-compatibility issues. To get around all of that, I thought it would be great to include the source for pngwriter with my project and have CMake build pngwriter along with the rest of the code.
So my main question is: Is this type of deployment canonical? Should CMake call the makefiles that the developers of the software already wrote, and then use FIND_PACKAGE locally, or will I need to rewrite all of their makefiles so that I can use ADD_LIBRARY?
I'd recommend using the ExternalProject_Add function.
The docs are OK, but there is a decent article which explains things in a bit more detail. From this article:
The ExternalProject_Add function makes it possible to say “download this project from the internet, run its configure step, build it and install it”
Bear in mind that you can skip the install step altogether, or you could choose to install to a location inside your own build directory.
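For the pngwriter case in the question, a minimal sketch of this approach might look like the following. The directory layout, the myapp target, and the choice to simply reuse the upstream makefile are assumptions for illustration, not something the docs prescribe:

    include(ExternalProject)

    # Build the bundled pngwriter sources as part of our own build.
    # Paths, commands, and options below are placeholders; adjust them
    # to whatever pngwriter's own build actually expects.
    ExternalProject_Add(pngwriter_external
        SOURCE_DIR        ${CMAKE_SOURCE_DIR}/third_party/pngwriter
        PREFIX            ${CMAKE_BINARY_DIR}/pngwriter
        CONFIGURE_COMMAND ""              # no separate configure step
        BUILD_IN_SOURCE   1
        BUILD_COMMAND     make            # reuse the upstream makefile
        INSTALL_COMMAND   ""              # or install into <INSTALL_DIR>
    )

    # Make your own (hypothetical) target wait for the external build,
    # then point it at the resulting headers and library.
    add_dependencies(myapp pngwriter_external)

If pngwriter gained a CMake-based build of its own, you could drop the custom commands and let ExternalProject_Add drive its configure, build, and install steps directly.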
We are looking for software to run our test cases automatically.
We want something that runs on our own server (or a commercial/hosted service) and automatically fetches the newest commit from GitHub. It should then compile that commit of the project with CMake and run CTest on our test cases. The results should then be visualized on a nice website.
I had a look at CDash, but the documentation is so poor that I could not even get it to fetch the latest commit from GitHub.
So my questions are:
Is there a good tutorial for CDash, other than the poor wiki page?
What software is available for running tests on new commits to GitHub, and what are the respective advantages and drawbacks?
In answer to your second question, Jenkins is a robust and extensible continuous integration tool that can be integrated tightly with GitHub using a plug-in (or loosely using standard Git support). It also supports CMake via a plug-in. Whether it has disadvantages that make it less useful for you depends on your organization and build process, but I've found it to be highly customizable to a wide variety of processes. I recommend taking a look at it.
There's also a third-party CTest plugin available for Jenkins.
CDash works in tandem with CTest. If you are already using CMake, it should be fairly easy to submit your testing results to CDash. I'd recommend reading the CTest documentation:
http://www.vtk.org/Wiki/CMake_Testing_With_CTest
You can either install your own CDash server or use Kitware's hosted server at my.cdash.org. You can test your server with a sample project available at:
http://www.cdash.org/cdash/resources/software.html
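To give an idea of how little is needed once CMake is already in use, here is a minimal sketch; the project name and test command are placeholders, and my.cdash.org is the hosted server mentioned above:

    # CMakeLists.txt -- enable CTest and declare a test
    include(CTest)                                     # also enables dashboard targets
    add_test(NAME unit_tests COMMAND my_test_binary)   # placeholder test executable

    # CTestConfig.cmake (next to the top-level CMakeLists.txt) --
    # tells CTest where to submit results; values are placeholders.
    set(CTEST_PROJECT_NAME "MyProject")
    set(CTEST_NIGHTLY_START_TIME "00:00:00 UTC")
    set(CTEST_DROP_METHOD "http")
    set(CTEST_DROP_SITE "my.cdash.org")
    set(CTEST_DROP_LOCATION "/submit.php?project=MyProject")
    set(CTEST_DROP_SITE_CDASH TRUE)

Running ctest -D Experimental in the build directory then configures, builds, runs the tests, and submits the results to the configured CDash server.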
I feel that using Git submodules is somehow troublesome for my development workflow. I've heard about Git subtree and Gitslave.
Are there more tools out there for multiple-repository projects, and how do they compare?
Can these tools run on Windows?
Which is best for you depends on your needs, desires, and workflow. They are in some senses semi-isomorphic, it is just some are a lot easier to use than others for specific tasks.
gitslave is useful when you control and develop the subprojects at more or less the same time as the superproject, and furthermore when you typically want to tag, branch, push, pull, etc. all repositories at the same time. As far as I know, gitslave has never been tested on Windows. It requires Perl.
git-submodule is better when you do not control the subprojects, or more specifically when you wish to pin a subproject at a specific revision even as the subproject changes. git-submodule is a standard part of git and thus works on Windows.
git-subtree provides a front-end to git's built-in subtree merge strategy. It is better when you prefer to have a single-repository "unified" git history. Unlike the subtree merge strategy, it is easier to export changes to the different (directory) trees back out to the original project, but it is not as automatic as it is with gitslave or even git-submodule.
repo is in theory similar to gitslave, but not as well documented for non-Android operations, as far as I have found. It is fairly dedicated to the Google Android development model and only natively supports a handful of git commands (though you can run arbitrary commands), and the limited native support doesn't cover, for example, a centralized repository to push to; checking out a branch also seems fairly difficult.
kitenet's mr is what you would want to use if you have multiple version control systems in use, but it is of limited use for git-only superprojects due to its lowest-common-denominator approach. There are ways to run arbitrary commands, but they are not as well integrated.
For some use cases, I have liked each of the following two simple approaches:
Nested repositories. If your software project has a plugin mechanism, with each plugin in its own sub-directory, it can make sense to git-ignore these plugin directories and, in your local filesystem, to make each of them into its own git repository. This way, all your files form a single directory tree, but are managed in different git repositories. It will not confuse git.
Per-package repositories. For software projects where you use some kind of source code package management system (gem/bundler, npm, pear or the like), it can make sense to put your re-used code into separate git repositories, then to make source packages from them, and then to install them with the package management tool into the parent project. Your parent project's git repository would only contain a reference to the required packages and their versions, while the actual code of these packages would be git-ignored, as is done with all other packages and external libraries as well. Compared to the nested repositories proposed above, this is a more elaborate approach, as it allows you to specify which package version is to be installed.
I currently use submodules for development, not just for tracking third-party libraries. There are some ways you can make life easier with submodules, especially when they are the source of merge or rebase conflicts. Look to git ls-tree to get the two commits involved in a conflict on the submodule. This is probably the most difficult part of submodules for people to deal with. For now, scripting will make this much easier to work with. Future versions of Git should have better native support for dealing with them.
Hope this helps.
We encountered a similar issue when using Git submodules in projects where we had dependencies in a variety of languages. To deal with them, we built and open-sourced a tool called MDLR ("Modular") that gives you declarative version-controlled Git dependencies with similar functionality to Git submodules, but without the annoying workflow. You can install it and manage your dependencies with the instructions/downloads on the GitHub repo
I have been looking at the pros and cons of Autotools and CMake, but I would like to hear opinions from people who have used one (or both) of these tools for their projects.
I used Autotools very basically a year ago, and I know that one of its good points is that it relies on shell scripting, so it does not need to be installed in order to run and uses portable shell scripts. But it looks like it is too Unix-oriented, and it would not be possible to run the configure script on Windows.
I have now to choose a build system tool for an open source project that will have to be compiled for at least Linux & Windows. It is written in C++, and uses a Qt GUI front-end, the rest of it is "generic".
Thanks for your help.
Updated 16th of January 2019: Refined advice as tools evolve.
I have used autotools before for a considerable amount of time.
Currently I make intensive use of Meson, and use CMake only when I need to.
Some personal advice:
For big teams, stick to CMake if you want to make use of the generators for Xcode. If you do not need that, I would use Meson directly. Meson, as of version 0.49, also supports finding CMake configuration files (though I have not yet tested how well this works). Also, Visual Studio seems to be sufficiently well supported at this point, though, again, I have not tried it myself. The advantage of CMake is that it has Visual Studio integration.
Drop Autotools. Meson already covers everything well, and its cross-compilation model is amazingly understandable. In CMake, last time I checked, everything was quite a bit more difficult.
I have also tried scons, waf, and tup.
The most full-featured, cross-platform system is CMake, but Meson's DSL will be easier to use for people used to Python and similar languages. Meson is also starting to support Visual Studio (there are VS2015 and VS2017 generators), and some projects, for example GStreamer, already have experimental support for it; GStreamer is compiled on Windows with Meson as well. As of Meson 0.37.1 the generators needed some work, but they are improving, and the current version is already 0.40; I have not tried the generators myself lately.
Meson
Pros:
The DSL does not get in the way at all. In fact, it is very nice and familiar, based on Python.
Well-thought-out cross-compilation support.
The objects are all strongly typed: you cannot make string-substitution mistakes easily, since objects are entities such as 'dependency', 'include directory', etc.
It is very obvious how to add a module for one of your tools.
Cross-compilation seems more straightforward to use.
Really well thought out. The designer and main author of Meson knows very well what he is talking about when it comes to designing a build system.
Very, very fast, especially in incremental builds.
The documentation is ten times better than what you can find for CMake. Visit http://mesonbuild.com and you will find a tutorial, how-tos, and a good reference. It is not perfect, but it is really discoverable.
Cons:
Not as mature as CMake, though I consider it already fully usable for C++.
Not as many modules available, though GNOME, Qt, and the other common ones are already there.
Project generators: it seems the VS generator is not working that well as of now; CMake's project generators are far more mature.
Has a Python 3 + Ninja dependency.
CMake
Pros:
Generates projects for many different IDEs. This is a very nice feature for teams.
Plays well with Windows tools, unlike Autotools.
Mature; almost the de facto standard.
Microsoft is working on CMake integration for Visual Studio.
Cons:
It does not follow any well known standard or guidelines.
No uninstall target (see the workaround sketched after this list).
The DSL is weird. When you start to do comparisons and such, with the strings-vs-lists issue and escape characters, you will make many mistakes, I am pretty sure.
Cross compilation sucks.
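For the missing uninstall target above, a commonly used workaround is a custom target that removes whatever a previous install recorded in install_manifest.txt. This sketch assumes a Unix-like shell with xargs; CMake itself provides nothing like it out of the box:

    # Minimal "uninstall" workaround: delete everything listed in the
    # install manifest written by a previous `make install`.
    # Assumes a Unix-like shell with xargs; not part of CMake itself.
    if(NOT TARGET uninstall)
        add_custom_target(uninstall
            COMMAND xargs rm -f < "${CMAKE_BINARY_DIR}/install_manifest.txt"
            COMMENT "Removing files listed in install_manifest.txt")
    endif()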
Autotools
Pros:
Most powerful system for cross-compilation, IMHO.
The generated scripts don't need anything other than make, a shell and, if you need to compile something, a compiler.
The command-line is really nice and consistent.
A standard in the Unix world, with lots of docs.
Really powerful command line: changing installation directories, uninstalling, renaming binaries...
If you target Unix, packaging sources with this tool is really convenient.
Cons:
It won't play well with Microsoft tools. A real showstopper.
The learning curve is... well... But actually I can say that CMake was not that easy either.
The use of recursive make is pervasive in legacy projects. Automake supports non-recursive builds, but it's not a very widely used approach.
About the learning curve, there are two very good sources to learn from:
The website here
The book here
The first source will get you up and running faster. The book is a more in-depth discussion.
Of SCons, Waf, and tup: SCons and tup are more like make, while Waf is more like CMake and the Autotools. I tried Waf instead of CMake at first. I think it is over-engineered in the sense that it has a full OOP API. The scripts didn't look short at all, and the working-directory handling and related things were really confusing for me. In the end, I found that Autotools and CMake are a better choice. My favourite of these three build systems is tup.
Tup
Pros
Really correct.
Insanely fast. You should try it to believe it.
The scripting language relies on a very easy idea that can be understood in 10 minutes.
Cons
It does not have a full-featured config framework.
I couldn't find a way to make targets such as doc, since they generate files I don't know about in advance, and outputs must be listed before they are generated; at least, that's my conclusion for now. This was a really annoying limitation, if it really is one, since I am not sure.
All in all, the only things I am considering right now for new projects are CMake and Meson. When I have a chance I will try tup as well, but it lacks the config framework, which makes things more complex when you need all of that. On the other hand, it is really fast.
I would not recommend autotools for Windows. Use CMake.
Why? Windows doesn't have a native sh.exe, and the emulation is slow. It's also very easy to get configury stuff wrong. I'm not saying it's impossible in CMake, but CMake surely abstracts more away, so you worry about less. CMake documentation can be a bit hard to read, but once it's set up, you should be fine for all toolchains ever supported by CMake. CMake also integrates testing, packaging etc...
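As a small illustration of that built-in testing and packaging integration (the target, file, and test names here are placeholders, not anything prescribed):

    # Minimal sketch of CMake's testing and packaging hooks.
    add_executable(myapp src/main.cpp)

    enable_testing()
    add_test(NAME smoke_test COMMAND myapp --version)   # run via `ctest`

    install(TARGETS myapp RUNTIME DESTINATION bin)
    include(CPack)   # `cpack` then packages whatever the install() rules describe

After configuring and building, ctest runs the tests and cpack produces archives or installers from the same build tree.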
Autotools is slow on Windows, does not work easily with MSVC, and has weird quirks with Windows (and other OSes) that are hard to debug and hard to fix. libtool also sucks on Windows, where it often refuses to build a shared library even if you think it should and could. Toolchain relocation issues are also prevalent with libtool, which may look at the wrong files in a user's toolchain. CMake is a lot easier in this regard. It assumes normal things about the target platform and creates generic, good build instructions.
Also, CMake has coloured output :) and nice progress percentages.
PS: I just have some experience with CMake and autotools on Windows as a user. CMake tends to work, autotools tends to bite your ear off when you're not looking, and smile at you when it fails due to some strange error...