Creating an rpm package from source tarball - but tarball contains no spec file, just an install script

During my time as a SysAdmin, I have encountered applications that provide no RPM packages to install on Red Hat-based distributions - only a source tarball. The source tarball does not provide a spec file, which would simplify the process of RPM package creation. Instead, it only provides a bash/ksh script which must be executed as root to install the application on the system.
I have tried to create an RPM package which essentially runs the install script to perform the installation. I have also tried to do the right thing by attempting to rpmbuild the package as a non-root user, and by modifying the install script as best I can so that its install dirs refer to RPM environment variables/macros. But with a complicated install script that is a few hundred lines long, and which also calls other scripts into play... well, I was bound to fail in this endeavour.
Is there a better way to package up such .spec-less source tarballs? Would a better solution be to:
somehow take a snapshot of the system before the application install
install the source tarball with the provided install script
take a snapshot of the system after the installation, and determine the changes/additions made by the installation
put the list of changes/additions in a spec file, and create an rpm package this way?
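For example, the "snapshot" part could be as crude as the sketch below (purely illustrative - I haven't actually tried this yet):
# Record which files exist before and after running the vendor installer,
# then diff the two lists to get candidates for the spec's %files section.
find / -xdev -type f 2>/dev/null | sort > /tmp/before.txt
sudo ./install.sh
find / -xdev -type f 2>/dev/null | sort > /tmp/after.txt
# Lines present only in after.txt are files created by the installer.
comm -13 /tmp/before.txt /tmp/after.txt > /tmp/new-files.txt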
Any useful/relevant/instructive/amusing/profound input and advice to this problem will be much appreciated.
Thank you in advance

I think the best solution is to ask upstream to provide a spec file for the application. Another way could be to ask an experienced package maintainer to package the application for you. Or you could explore checkinstall, which tracks the files that an install command (typically make install, but an install script works too) puts on the system. If you want to package it yourself, you should read this link, as it provides an explanation with many examples.
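For instance, a typical checkinstall invocation wrapping such a script might look roughly like this (package name, version and flags are illustrative - check checkinstall's man page on your distribution):
tar xzf someapp-1.2.3.tar.gz
cd someapp-1.2.3
# -R builds an RPM; --install=no leaves the package on disk without installing it.
checkinstall -R --install=no --pkgname=someapp --pkgversion=1.2.3 ./install.sh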

I've seen many scripts like you mention, and I suspect I could name with near certainty the company selling you these horrible installs.
Your best bet is to request a proper installable package. Read up on why a package is better than configure; make; make install, and better still than this install.sh nonsense. Armed with that, even if only as talking points, try to get them into the 3rd millennium. It will be hard, but ultimately most rewarding.
Barring that, you will need that package built (and by now you will have read why). Unfortunately, there are some apps and vendors whose payload can't be packaged, because they compile, query the destination host, look for a license, compile a bit more, etc. My bias here is well-honed.
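If you do end up wrapping the vendor script yourself, and the payload can at least be captured as plain files, a bare-bones spec along these lines is roughly what that looks like (names, version, license and paths are placeholders, not taken from any real product):
Name:           someapp
Version:        1.2.3
Release:        1%{?dist}
Summary:        Vendor application repackaged from its install.sh tarball
License:        Proprietary
Source0:        someapp-1.2.3.tar.gz

%description
Vendor application repackaged from its install.sh tarball.

%prep
%setup -q

%install
# Point the vendor script at the build root instead of /.
# This only works if the script honours a destination variable;
# otherwise the produced files have to be copied into %{buildroot} by hand.
DESTDIR=%{buildroot} ./install.sh

%files
/opt/someapp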

Related

How to tell CPack to use the FreeBSD generator?

I have found several interesting links talking about a CPack generator for FreeBSD.
I would like to generate FreeBSD packages; however, whenever I attempt to generate TXZ archives (as directed by the instructions), the generated packages aren't compatible with the pkg utility on FreeBSD: they are missing the manifest file.
Obviously, CPack is generating raw archives, not pkg-ready archives. I assume I must be missing a step.
However, none of the links above talk about any such step.
Therefore,
How can I tell CPack to generate a FreeBSD-ready package?
(Original author of that code here)
So, there are two things in play here:
you need to be on FreeBSD (so that you have libpkg, which is needed to do the building)
you need to build the devel/cmake package with OPTIONS CPACK (which is not the default)
So:
cd /usr/ports/devel/cmake
make configure and select CPACK
make && make install
Then @Tsyvarev's comment will be the right answer. For the record, the support was deemed experimental, the library API unstable, and the pkg authors have asked me to revamp the code to use the current libpkg API so they can drop the old one. Time, though, is the limiting factor.
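For reference, once a CPack-enabled cmake is in place, the project side is just the usual CPack setup with the FreeBSD generator selected - something like the fragment below (package metadata is made up; see the CPack FreeBSD documentation for the full variable list):
set(CPACK_GENERATOR "FREEBSD")
set(CPACK_PACKAGE_NAME "myapp")
set(CPACK_PACKAGE_VERSION "1.0.0")
set(CPACK_FREEBSD_PACKAGE_MAINTAINER "someone@example.org")
include(CPack)
Then run cpack (or cpack -G FREEBSD) from the build directory to get a pkg-installable package.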

Install jpeg 2000 on Windows 10

I want to investigate a new application for JPEG 2000 encoding and decoding. I downloaded openjpeg-master and managed to cobble together the ability to cmake the files. After a bunch of grinding, this resulted in the following output:
"Build files have been written to: C: openjpeg-master/build
\build> "
Any "normal" Unix installations have a multi-step installation like this:
"UNIX/LINUX - MacOS (terminal) - WINDOWS (cygwin, MinGW)
To build the library, type from source tree directory:
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make
Binaries are then located in the 'bin' directory.
To install the library, type with root privileges:
make install
make clean
To build the html documentation, you need doxygen to be installed on your system. It will create an "html" directory in TOP_LEVEL/build/doc.
make doc"
But the Windows 10 equivalent is unclear, to put the most charitable spin on it. You can find it here: "https://github.com/uclouvain/openjpeg/blob/master/INSTALL.md"
Some questions arise:
is there a better starting place for installing JPEG 2000 that actually shows me how to install it and run the tests?
if not, how do I get from the build files to installing the libraries and making the test programs?
Is there more information I can dig out that would help to answer these questions?
Since I'm allergic to Visual Studio, I had overlooked a nice tutorial specifying how to install something as complex as openjpeg by direct clone from GitHub. However, in desperation, I found it and it worked. It uses Visual Studio Community 2019, Version 16.8.3. I needed only to use -DTHIRDPARTY to get the third-party libraries installed. There is a drop-down menu to build and install OPENJPEG. All I need to do now is figure out how to compile and run the utilities that invoke the installed libraries ...
Actually, the complete line to add was -DBUILD_THIRDPARTY:bool=true.
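For anyone following along, an equivalent command-line route (rather than the IDE drop-down) looks roughly like this; the generator name matches VS 2019 and the rest is standard CMake, so adjust to your setup:
REM Run from the openjpeg source directory in a Developer Command Prompt.
mkdir build
cd build
cmake .. -G "Visual Studio 16 2019" -DBUILD_THIRDPARTY:bool=true
cmake --build . --config Release
REM The INSTALL target copies the headers, libraries and codec utilities.
cmake --build . --config Release --target INSTALL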
Somewhere in my frantic random search for a way forward, I remember seeing the suggestion that to make the tests work, I merely need to find files like *.vsproj and run them as separate VS solutions. Some random guesswork with .vdproj files in src/bin/... hasn't produced anything good. Is there not a document somewhere showing how to run the tests?

Is it possible to include a file outside Python package in wheel/bdist/sdist?

I am the author of KiKit. KiKit is a tool using KiCAD's Python API. KiKit also serves as a plugin for KiCAD. To register a plugin for KiCAD, you have to place a Python file in a specific location (e.g., /usr/share/kicad/scripting/plugins/ or ~/.kicad_plugins/ - see KiCAD doc for details).
I distribute KiKit as a Python package via pip, so my users can just type pip install kikit and not have to care about anything else. I would like to register the plugin as part of the installation step so the users don't have to perform it manually.
I know that wheel does not support arbitrary code execution, so no post-installation script is possible. But I wasn't able to find out whether it is possible to include a file that will be installed in a location outside the root of the package. If so, how can I specify it in setup.py?
PS: I am aware of the option to provide an extra script, e.g., register-kikit, which would register the plugin. But I would like to include it in the installation step so there is a lesser chance of the user forgetting to do so.
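For context, the closest mechanism I have found so far is data_files in setup.py, something like the sketch below (file name and target directory are illustrative, not KiKit's actual configuration). As far as I can tell, though, the relative path is resolved against the installation prefix (e.g., the virtualenv), not against an arbitrary system directory like /usr/share/kicad:
# Hypothetical setup.py fragment.
from setuptools import setup, find_packages

setup(
    name="kikit",
    packages=find_packages(),
    # The target path is relative to the install prefix, so from a wheel this
    # usually lands under <prefix>/share/..., not under KiCAD's real plugin dir.
    data_files=[
        ("share/kicad/scripting/plugins", ["kikit_plugin.py"]),
    ],
)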

When to use shrinkwrap, npm-lockdown, or npm-seal

I'm coming from a background much more familiar with composer. I'm getting gulp (etc) going for the build processes and learning node and how to use npm as I go.
It's very odd (again, coming from a composer background) that a composer.lock-like manifest is not included by default. Having said that, I've been reading documentation on shrinkwrap, npm-lockdown, and npm-seal... and the more documentation I read, the more confused I become as to which I should be choosing (everyone thinks their way is the best way). One of the issues I notice is that npm-seal hasn't changed in 4 years and npm-lockdown in 8 months - this all leads me to wonder whether that is because they're not needed with the newest version of npm...
What are the benefits / drawbacks of each?
In what cases would I use one over another in Project A, but use a different one in Project B?
How will each impact our development workflow?
PS: Brownie points if you include the most basic implementation example for each. ;)
npm shrinkwrap is the most standard way to lock your dependencies. And yes, npm install does not create it by default, which is a pity and something that the npm creators should definitely change.
npm-lockdown tries to do the same things as npm shrinkwrap; there are two minor points on which npm-lockdown is better: it handles optional dependencies better and it validates the checksums of the packages:
https://www.npmjs.com/package/lockdown#related-tools
Neither of these features seems very relevant to me; I'm quite happy with npm shrinkwrap. For example, npmjs guarantees that once you upload a certain package at a certain version, it stays immutable, so checking SHA checksums is not that compelling (I've never encountered an error caused by this).
seal is meant to be used together with npm shrinkwrap. It adds the 'checksum checking' aspect. It looks abandoned and quite raw.
Good question - I'm going to skip everything but shrinkwrap because it is the de-facto way to do this, per NPM's docs.
Long story short the npm-shrinkwrap.json file is akin to your lock files you are used to in every other package manager, though NPM allows different versions of the same package to play nice together by isolation - literally scoping and copying different entire versions to node_modules at different levels of the tree. If two projects that are parent-child to each other use the exact same version, NPM will copy the version to only the parent and the child will traverse up the tree to find the package.
Best practice is simply to update package.json for your direct dependencies, run npm install, verify that things are working while developing, then run npm shrinkwrap when you are just about to commit and push. NOTE: make sure to rm npm-shrinkwrap.json before running npm install during active development - if your direct dependencies have changed, you want package.json to be used, and not the lock! Also include node_modules in your .gitignore or equivalent in your source control system. Then, when you are deploying and getting ready to run the project, run npm install like normal. If npm finds an npm-shrinkwrap.json file, it will use that to recursively pull all locked modules, and it will ignore package.json in both your project and all dependent projects.
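As a most basic implementation sketch, the whole cycle is just a handful of commands (this assumes the pre-npm-5 behaviour described above, where npm install does not write a lock file by itself):
# During active development: let package.json drive installs.
rm -f npm-shrinkwrap.json
npm install
# ...develop and verify that things work...

# Just before commit/push: freeze the dependency tree.
npm shrinkwrap
git add package.json npm-shrinkwrap.json

# On deploy: npm install honours npm-shrinkwrap.json when it is present.
npm install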
You might find shrinkpack useful – it checks in the tarballs which npm install downloads and bundles them into your repository, before finally rewriting npm-shrinkwrap.json to point at that local bundle instead.
This way, your project is totally locked down, completely available offline, and much quicker to install.
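If I remember its README correctly, basic usage is just a couple of commands run after shrinkwrapping (check the shrinkpack docs for the current flags and the exact bundle directory name):
npm install -g shrinkpack
npm shrinkwrap   # write/refresh npm-shrinkwrap.json
shrinkpack       # copy the tarballs into a local folder and rewrite the lock to point at them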

How do you make it so that cpack doesn't add required libraries to an RPM?

I'm trying to convert our build system at work over to cmake and have run into an interesting problem with the RPMs that it generates (via cpack): It automatically adds all of the dependencies that it thinks your RPM has to its list of required libraries.
In general, that's great, but in my case, it's catastrophic. Unfortunately, the development packages that we build end up getting installed with one of our home-grown tools that uses rpm to install them into a separate RPM database from the system one. It's stupid, but I can't change it. What this means is that all of the system libraries that any normal library will rely on (like libc or libpthread) aren't in the RPM database that is being used with our development packages. So, if an RPM for one of our development packages lists system libraries as being required, then we can't install it, as rpm will think that they're not installed (since they're listed in the normal database rather than the one that it's being told to use when installing our packages). Our current build stuff handles this just fine, because it doesn't list any system libraries as dependencies in the RPMs, but cpack automatically populates the RPM's list of required libraries and puts the system libraries in there. I need a way to stop it from doing so.
I tried setting CPACK_RPM_PACKAGE_REQUIRES to "", but that has no effect. The RPM cpack generates still ends up with the system libraries listed as being required. All I can think of doing at this point is to copy the RPM cpack generator and hack it up to do what I want and use that instead of the standard one, but I'd prefer to avoid that. Does anyone have any idea how I could get cpack to stop populating the RPM with required libraries?
See bottom of
http://www.rpm.org/max-rpm/s1-rpm-depend-auto-depend.html
The autoreqprov Tag — Disable Automatic Dependency Processing
There may be times when RPM's automatic dependency processing is not desired. In these cases, the autoreqprov tag may be used to disable it. This tag takes a yes/no or 0/1 value. For example, to disable automatic dependency processing, the following line may be used:
AutoReqProv: no
EDIT:
In order to set this in cmake, you need to do set(CPACK_RPM_PACKAGE_AUTOREQPROV " no"). The extra space seems to be required in front of (or behind) the no in order for it to work. It seems that the RPM module for cpack has a bug which makes it so that it won't let you set some of its variables to anything shorter than 3 characters long.
To add to Mark Lakata's answer above, there's a snapshot of the "Maximum RPM" doc
http://www.rpm.org/max-rpm-snapshot/s1-rpm-depend-auto-depend.html
that also adds:
The autoreq and autoprov tags can be used to disable automatic processing of requirements or "provides" only, respectively.
And at least with my version of CPackRPM, there seem to be similar variables you can set, e.g.
set(CPACK_RPM_PACKAGE_AUTOREQ " no")
to only disable the automatic dependency processing of 'Requires'.
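Putting it together, a minimal CMakeLists.txt fragment using these options might look something like this (package metadata is made up; the leading space is the workaround for the short-value bug mentioned above):
set(CPACK_GENERATOR "RPM")
set(CPACK_PACKAGE_NAME "mydevpkg")
set(CPACK_PACKAGE_VERSION "1.0.0")
# Disable all automatic Requires/Provides processing...
set(CPACK_RPM_PACKAGE_AUTOREQPROV " no")
# ...or, alternatively, disable only the automatic Requires:
# set(CPACK_RPM_PACKAGE_AUTOREQ " no")
include(CPack)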