First, I obviously tried to get a binary release of the Mono.Fuse project, but the only available downloads were the source files. (And it actually seems that the latest release has a syntax error in an override.)
So I tried to install it on a Linux box from git, successfully, but I'll soon need to bring it to a production server together with my Xcopy-deployable application.
I don't like compiling software on a production machine, especially because I would need to install lots of development tools from YaST. So now I have this git-cloned directory with all the source and compiled files.
How can I create an RPM package that I can install on multiple production machines with a single command, without resolving configure.sh's dependencies on lots of unneeded libraries like glib-2.0-devel?
Take a look at Open Build Service.
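If you want to roll the RPM by hand first (Open Build Service can then take the same recipe and build it for several distributions), here is a minimal sketch using a hypothetical mono-fuse.spec; the tarball name, version and the ~/rpmbuild layout (created by rpmdev-setuptree; openSUSE may default to /usr/src/packages instead) are assumptions, not the actual Mono.Fuse files:

# Set up the build tree and drop the sources and spec in place (hypothetical names)
rpmdev-setuptree
cp mono-fuse-0.4.2.tar.gz ~/rpmbuild/SOURCES/
cp mono-fuse.spec ~/rpmbuild/SPECS/

# -bb builds only the binary RPM; the -devel BuildRequires are needed on the
# build machine only, the resulting RPM carries just the runtime Requires
rpmbuild -bb ~/rpmbuild/SPECS/mono-fuse.spec

# Install the result on any production machine with a single command
rpm -ivh ~/rpmbuild/RPMS/x86_64/mono-fuse-0.4.2-1.x86_64.rpm

The point for your case is that all the development tools stay on the build box; the production machines only ever see the finished RPM.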
Recently I tried to install an Apache web server on an Ubuntu machine. I discovered two ways to do that.
The first one is to install it as described in the Apache docs, by running ./configure, make, and make install on the downloaded Apache sources.
And the second way is to install it via apt-get update && apt-get install apache2.
I also noticed that when I run the installation via apt-get, the resulting Apache configuration is different: for example, the directory /etc/apache2/* only exists with this way of installing. When I install it manually, the directories sites-available, sites-enabled, ... are simply missing.
Is there also a way to get these folders when running the manual installation?
Where do these differences come from?
This is nothing that can be summed up in a few sentences. You are basically asking: how does software management work on Linux systems? In short:
using the apt utility on an Ubuntu-based system, you are using the system's package management, which should nearly always be what you want. That way the system can keep your Apache installation up to date, you can remove the software again without leaving artefacts behind, and the software is guaranteed to work with the libraries already installed on your system. Potential conflicts are resolved, and you can be certain that the configuration matches your system.
using the build system is a more generic (archaic) way: you do not only install the software, you build it from scratch from the sources prior to installing it. That is only possible for open-source software, obviously. This certainly allows for more flexibility, but you are responsible for a whole lot of things yourself, starting with setting up a complete build environment, then configuring the package and selecting what you actually want to build, and last but not least you are responsible for updating software you installed that way. This is rarely a good idea, with two exceptions:
you absolutely cannot find a pre-built package for your Linux distribution, or
you want to make your own modifications to the software itself.
The difference in the folder layout of the apache configuration is a separate thing. There are two aspects that come into play here:
you can change that layout using the countless build options offered by the build system (namely the options you can hand to the configure utility on the command line); see the sketch after this list.
typically the configuration of complex software packages (like the Apache HTTP server with all its modules) is broken up into
various sub-folders so that you can keep an overview, and
this allows additional packages to drop their extra configuration in place (which is mostly useful for pre-built packages).
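To answer the follow-up question directly: the sites-available/sites-enabled layout is not baked into the binaries, it is just configuration that the Ubuntu package ships. A rough sketch of recreating it with a manual build, assuming Apache httpd 2.4 sources and hypothetical paths (only --prefix/--sysconfdir are standard configure options; the rest is plain shell):

# Point the source build at an /etc/apache2-style config directory
./configure --prefix=/usr/local/apache2 --sysconfdir=/etc/apache2
make
sudo make install

# Recreate the Debian/Ubuntu-style directory layout by hand
sudo mkdir -p /etc/apache2/sites-available /etc/apache2/sites-enabled

# Tell the main config to pull in whatever gets linked into sites-enabled
echo 'IncludeOptional /etc/apache2/sites-enabled/*.conf' | sudo tee -a /etc/apache2/httpd.conf

# "Enabling" a site is then just a symlink (which is all a2ensite does on Ubuntu)
sudo ln -s ../sites-available/mysite.conf /etc/apache2/sites-enabled/mysite.conf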
Long story short: in 99.8% of all cases you want to use pre-built packages prepared for your distribution. That is the power of the software management systems on Linux (which still have no comparable counterpart in other operating systems).
I am designing an environment for productive research, i.e. writing, data-analysis, publication, etc.
In order to share the final results with others, I need to find a way to package this and to set up the local installation.
The project depends on Anaconda, so conda as a package manager is available.
It also includes
Pandoc and some Pandoc packages; some will have to be fetched from GitHub directly because some versions are not available via conda-forge (doable in conda)
Atom and Atom packages; they should be installed and configured by my script (this works on the CLI via the apm package manager)
Node.js and Mermaid and a few other JS packages, which require npm calls
Some file-system-level operations, like deleting parts of packages of which I only need a portion, creating symlinks and aliases, etc.
Maybe some Python code for modifying or reading yaml/json/ini files.
The main project will reside in a GitHub repository. It will be fine for users to clone it from there and run a build script locally.
My idea is to write a Bash shell script (sketched below) that
creates a conda environment based on requirements.yaml for everything that can be done this way
installs other parts using CLI commands (wget/curl etc.)
does all necessary modifications using CLI commands, maybe using a few short Python scripts (e.g. for changing or reading JSON or yaml files).
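A minimal sketch of such a bootstrap script, assuming hypothetical names (requirements.yaml defining an environment called research-env, illustrative Atom and npm package names); all error handling and idempotency checks are left out:

#!/usr/bin/env bash
set -euo pipefail                       # stop at the first failing step

# 1. Conda side: create the environment from the file in the repo,
#    or update it if it already exists
conda env create -f requirements.yaml || conda env update -f requirements.yaml

# 2. Atom side: editor packages via apm (package names are illustrative)
apm install language-markdown minimap

# 3. Node side: Mermaid CLI via npm (package name may differ)
npm install -g @mermaid-js/mermaid-cli

# 4. File-system tweaks: symlinks/aliases, pruning parts of packages, etc.
ln -sfn "$HOME/shared-assets" ./assets
rm -rf ./vendor/some-package/docs       # example of dropping an unneeded portion

# 5. Small JSON/YAML edits via a short inline Python helper run inside the env
conda run -n research-env python - <<'EOF'
import json, pathlib
cfg = pathlib.Path("settings.json")
data = json.loads(cfg.read_text()) if cfg.exists() else {}
data["initialized"] = True
cfg.write_text(json.dumps(data, indent=2))
EOF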
My local usage will be on OSX Big Sur; Linux should be supported, and Windows compatibility would be nice to have.
Before I start:
Is this approach viable? I think it will be pretty transparent, but of course also a bit proprietary.
Docker is likely overkill for my purpose, and I also read that the execution will be slow on OSX.
The same environment will likely be installed multiple times on the same user's machine, so it is important that I can control, e.g., the usage of existing packages and files via aliases or symlinks. It is not important that the multiple installations are decoupled for the non-Python/non-conda parts (e.g. Atom, Node.js, Mermaid could be the same binaries for all installations; just the set of Python packages might vary by installation).
Thanks for your expertise!
I'm the maintainer of a program that I might like to propose for inclusion in the Cygwin distribution.
We use CMake so there is a packager available, and it's easy to create a .bz2 package.
Once I've created the package, how can I try it locally? In Linux this can easily be done, but is there a way to use the Cygwin package installer so that it picks up a local package?
I've read the package contribution documentation and related pages but can't find an answer.
The CMake Cygwin package generator seems extremely out of date; Cygwin hasn't used .bz2 for some years. The following is from a Cygwin mailing-list answer by Adam Dinwoodie:
Cygwin packages generally use Cygport to define the build process and so forth. It's more-or-less the equivalent of rpmbuild for RPM packages, and similar tools for other distribution systems. The documentation for Cygport is at http://cygwinports.github.io/cygport/; if you're using make in a reasonably standard way, most things should Just Work™.
In particular, if you're using Cygport, it'll automatically do things like creating setup.hint files for you.
For testing locally, I find it's simplest to just do tar -xaC/ -f <tarball> on the compiled tarballs that Cygport generates. That doesn't test the dependency management or anything that requires post-install scripts, but it's fine for checking the installation itself works.
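A rough sketch of what that Cygport-based workflow might look like, assuming a hypothetical myprog.cygport; the exact variables and the cmake cygclass should be double-checked against the Cygport documentation linked above:

# Contents of a hypothetical myprog.cygport (Cygport recipes use bash syntax):
#   NAME="myprog"
#   VERSION=1.2.3
#   RELEASE=1
#   CATEGORY="Utils"
#   SUMMARY="One-line description"
#   DESCRIPTION="Longer description."
#   inherit cmake          # let cygport drive the CMake configure/build

# Download, unpack, build and package in one pass
cygport myprog.cygport download prep compile install package

# Quick local test (from the answer above): unpack the generated tarball
# straight over /; this skips dependency handling and post-install scripts,
# but confirms the files land in the right places
tar -xaC/ -f myprog-1.2.3-1.tar.xz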
I'm building an application which is made of four .app packages: a launcher, a client, a server manager and a runtime.
The launcher's .app is distributed in a .pkg file generated by MonoMac's packaging option. The other .app files are downloaded/auto-updated as ZIP by the launcher.
Bundling a copy of Mono within every single one of those .app files would be a waste of bandwidth and disk space, but even more than that, I have a Mono server .exe file which is cross-platform and as such doesn't come in an .app bundle, nor should it pack any platform-specific DLLs. So bundling a private copy of Mono isn't an option.
Is there a way for me to create a .pkg file which has a dependency/requirement on a globally-installed Mono?
I see PackageMaker has a Requirements pane which can run scripts but I have no idea how to properly check whether Mono has been installed without relying on some hardcoded paths and stuff like that.
I'd like to have the installer check whether Mono is installed and if it isn't, install it (or, failing that, display a message with a link to the Mono website for instance).
I ended up switching to Packages, which is a pretty neat tool.
I added a pre-install requirement of type "Shell script returns true". My shell script just checks for the "mono" executable using the "which" command:
#!/bin/bash
# The script's exit status is that of "which":
# 0 if a "mono" executable is on the PATH, non-zero otherwise
which mono
When the script returns false, I configured Packages to spit out an informative error message with a link to download Mono (for lack of a way to trigger download / installation of Mono).
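If the installer environment turns out to have a restricted PATH, a slightly more defensive variant of the same check is possible; the framework path below is where the official Mono .pkg installs on my machines, but treat it as an assumption:

#!/bin/bash
# Succeed if a "mono" executable is on the PATH...
if command -v mono >/dev/null 2>&1; then
    exit 0
fi
# ...or if the Mono framework is present at its usual install location
if [ -d "/Library/Frameworks/Mono.framework/Versions/Current" ]; then
    exit 0
fi
exit 1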
We are big users of NuGet, we've got 25-30 packages which we make available on a network share.
We'd like to be able to test new packages in the consuming applications before they're built and released. Ideally, this could be done with something similar to Maven's snapshot functionality, i.e. having a specific development package.
Has anyone else come up with a way of doing it, ideally a reasonably non-hacky one?
Our favoured method is to generate the package assemblies and then manually overwrite the assemblies in the packages/ directory, i.e. to replace the actual project references, but that doesn't seem particularly clean.
Update:
We use a CI build server which creates builds on every commit and has a specific manually triggered NuGet build which works off specifically tagged versions of the codebase. We don't want to create a NuGet build off every commit, but we would like to be able to test a likely candidate in the wild before we trigger the manual NuGet package build.
I ended up writing a unit/integration testing framework to solve a similar problem. Basically, I needed to verify the content of the package, the versions and info, what would happen when I installed and uninstalled the package, what versions the assemblies in lib were, what bitness the assemblies were built for (x86 or x64) and so on, and I needed it all to run without Visual Studio installed and on my (headless) build machine as a quality gate.
Standing on the shoulders of giants like Pester, PETools, and SharpDevelop's package management module, I put together nuget-test.
Clone the project into your package directory (where your .nuspec file and package files are). If for whatever reason you want to keep the nuget-test project as a git repo, then simply remove "remove-item nuget-test/.git -Recurse -Force" from the command below.
git clone https://github.com/nickfloyd/nuget-test.git; remove-item nuget-test/.git -Recurse -Force
Run Setup.ps1 in the root of the nuget-test directory in an x86 instance of PowerShell.
PS> .\setup.ps1
Write tests and place them in the nuget-test/test directory using the Pester syntax.
Run the tests.
PS> Invoke-Pester
Project page: nuget-test
On github: https://github.com/nickfloyd/nuget-test
I hope this helps you get closer to what you're trying to get done.
If you're using NuGet packages to distribute your libraries, you should not limit yourself to testing only the libraries. You should test the packages themselves as well (if your binaries are OK but incorrectly installed, consumers will still have issues). The whole point is to improve this experience.
One way could be to have an additional CI or QA repository. The one you currently have is actually your "production" repository containing consumable releases, considered finished high-quality products.
Going further, you could have a logical package promotion flow (based on Continuous Integration or even using a Continuous Delivery approach), where:
- each check-in produces a package on your CI repository
- testers pick up a CI package for QA and, if it is found OK, promote it to either a QA feed or the production feed (whichever you prefer; it depends on the quality of your testing and how well it is automated)
There are various ways of implementing this scenario: using simple network shares, internal NuGet.Server or Gallery implementations, or simply using http://myget.org to give it a try with minimal cost and zero effort.
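As a concrete (and hedged) illustration of the promotion flow with plain nuget.exe, assuming placeholder feed URLs, API keys and package names: the CI build packs with a prerelease version, and "promotion" is simply pushing the identical .nupkg to the next feed:

# CI build: pack with a prerelease suffix so consumers must opt in explicitly
nuget pack MyLib.nuspec -Version 1.4.0-ci0042

# Publish to the CI/QA feed (URL and key are placeholders)
nuget push MyLib.1.4.0-ci0042.nupkg -Source https://ci.example.com/nuget -ApiKey $CI_FEED_KEY

# Promotion: once QA signs off, push the very same .nupkg to the production feed
nuget push MyLib.1.4.0-ci0042.nupkg -Source https://prod.example.com/nuget -ApiKey $PROD_FEED_KEY

Because the versions carry a prerelease suffix, consumers only see them when they explicitly include prerelease packages, so your stable feed stays clean.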
Hope that helps!
Cheers,
Xavier