Is there a package manager for APL?

Is there any package manager for APL (something like cargo for Rust, npm for Node.js or pip for Python)?
I'd like to contribute a package or two, but not sure how to do it other than via a GitHub repository.

There is no general APL package manager.
There is also no Dyalog APL package manager that has been adopted by the community at large, by Dyalog itself, or even by a few people or organizations, for that matter. There are two attempts at a package manager for Dyalog APL. The first is Tatin, the successor to APM (linked in another answer below). It is new, and I'm not aware of anyone using it yet.
The second is Dado. It is used internally by the Carlisle Group to manage a stack of dependent packages, some of which are also open source. Dado takes a very simple approach to package management, leveraging services like GitHub and GitLab, and git repositories, instead of running its own server. Dado has been in use for a little while at the Carlisle Group, but there are no other users at this point.

An earlier attempt is APM - the APL Package Manager:
https://github.com/theaplroom/APM
https://www.youtube.com/watch?v=HXlgY47ZS_w

Related

Several people on same project using different package manager?

Let's say we have a Vue/React project that involves three people, but each of them has a different preference in package managers.
The first person already feels cozy using npm, the second uses yarn because they think it has better security, and the third loves pnpm because they think it saves storage when working on multiple projects.
Is it possible for one project worked on by those three people to run on each person's device using their chosen package manager?
Even if it is possible, is it something that is normal? Or is it something we should avoid?
It is something that you should avoid. Even if they used the same lockfile, there would be slight differences in how they work, so people would get "works on my machine" issues. You don't want to spend your time figuring out such issues.
Each project needs to pick one package manager and stick to a given major version of that package manager. You can go one step further and stick to an exact version of that package manager; that makes your setup most stable. You can use the packageManager field in package.json for that:
{
"packageManager": "<package manager name>@<version>"
}
But you need to enable Corepack, as it is an experimental feature of Node.js for now:
corepack enable
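For example, pinning pnpm would look like this (the version number is just illustrative):
{
"packageManager": "pnpm@8.6.12"
}
With Corepack enabled, running pnpm inside this project then transparently downloads and uses exactly that version, so everyone on the team runs the same tool.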

Straightforward way to use your own npm package without the npm registry

I want to split up the code base of several of my projects into isolated, package-like projects. Those should be easily usable via npm, but they do not seem significant enough to be published to the public npm registry.
So my question is whether there is a middle ground between handling them as locally provided packages, installed via their paths, and publishing them to the public registry.
Concerns:
cluttering the npm registry with packages which don't seem significant enough to take up the name
the need to document and write tests for each package seems like too much, and I would not sleep well publishing packages which are not well documented and tested
I would take up a name which might be better suited to a more sophisticated package and its maintainers
I still want others to be able to easily try / use these packages, to see if they fit their needs
Alternatives:
A) creating a private npm repository (with CouchDB?)
+ is pretty much identical to the npm repository and would be easy to use
+ the versioning is identical, just pure semver lookup
- every user needs to set up this repository if they want to use this package, or need it as a child dependency in their (public npm) package (even though this is unlikely)
- need to invest time into setting it up and maintaining it
B) Using my username npm namespace
+ would solve pretty much every problem
- namespaces seem to be meant for projects and their sub-packages, which wouldn't be the case for my packages, since their only connection is the creator
- it seems arrogant to prefix your packages with your name, like you are tagging them with a big sign saying THIS WAS DONE BY ME
C) Using GitHub with a special detached branch which contains the (tagged) releases
+ you could use it like the global npm registry, since npm's resolution strategy allows a repository URL with a semver range in place of the version
- special case which is bound to break
- GitHub is not meant to provide npm packages; almost no developer expects a git URL instead of a version range, and tools and firewalls might have problems with this
- the workflow is really not meant to be used this way, neither for git nor for npm
D) using a local package and installing it by its path (see the sketch after this list)
+ easy to set up and use
- no version management
- build steps must be done manually beforehand
- cannot publish packages that depend on those packages
- all dependencies have to be installed locally
E) making those packages more useful, implementing edge cases, writing documentation and testing the whole package
+ would resolve about all the problems
- A LOT of extra work, primarily thinking about edge cases and giving the developer a good API
- sometimes you can't really get the name you want for your package (it collides with others), which results in weird names
- it is your responsibility; you have to maintain it and be accountable for it (test it well, cover edge cases)
- cluttering of the npm registry
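To illustrate option D (the package name and path here are made up), installing a local package by its path is a one-liner, and npm records it as a file: dependency in package.json:
npm install ../my-local-lib
# package.json will then contain: "my-local-lib": "file:../my-local-lib"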
So those are all the alternatives that came to mind when I tried to find a solution. Please leave a comment / answer if you have another idea, or if you can remove / reduce the weight of those contra points.
Maybe you could include your own experience, so I get a better view of the whole problem.
Currently I would just try to make each package more helpful to the greater majority, but this does not work in all cases.
Thank you all for your time!
Installing from git is a pretty standard feature in package managers. npm doesn't have GitHub-specific support; it has generic support for any git repo. Unless you can find some discussion about deprecating it from npm, I wouldn't worry about it. It's used internally in many companies for private packages.
Of course, there are still some trade-offs: build artifacts, and maybe a slightly clumsier workflow. Things like npm outdated don't understand git semver. As for build artifacts, I have seen many projects commit them to the master branch to support direct git installs. If you look around older open source projects, for example, that's quite often the case.
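For illustration, a git dependency with a semver range looks like this in package.json (the user, repo, and range are placeholders):
{
"dependencies": {
"my-lib": "github:your-user/my-lib#semver:^1.2.0"
}
}
npm resolves the range against the repository's version tags, so tagging releases in git behaves much like publishing versions to a registry.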
We went for a private repository with Verdaccio running in a Docker container, which is very similar to option A. It took some setup, but for our developers all it took was a single npm command to put the private repo "in front of" npm for all packages of the namespace we created. Granted, our packages are project-specific, but in a private repository that does not really matter either way, does it?
We considered the local package option at first, but the drawbacks were just too big for us, even if it's very easy to set up.
I'm not sure this helps, but this is at least the setup we decided upon when we had the same issue a few months ago.
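For reference, a sketch of what that single command looks like (the scope name and registry URL are made up):
npm config set @myscope:registry https://npm.example.com/
# equivalent line in .npmrc: @myscope:registry=https://npm.example.com/
After that, installs of any @myscope/* package resolve against the private registry, while everything else still comes from the public npm registry.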

Where do brew, cask, pip, npm, composer, et cetera get their stuff from, and what prevents malware?

If I install stuff using brew or brew cask (on macOS) or npm or pip or composer or similar package/code/library managers, I have always wondered:
Where do they get their content from, and who or what entity manages, facilitates, or hosts all that software, those packages, modules, or libraries?
Are there any checks, safety filters, audits, validation, or other mechanisms that prevent malware in the packages or libraries being distributed through these package managers?
Composer
The list of packages is stored at https://packagist.org/, but it contains only metadata. Packages are downloaded directly from the related repositories (usually GitHub or GitLab); Packagist does not store or analyze their content. So while it may look a bit scary, the security model is based on trust in the vendor, or on direct code review. There is no magic solution that protects you from malware - you need to think about what you're doing and what dependencies you're including in your project (or at least run some malware scanner of your own).
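If you want an automated check on top of that trust model, newer Composer versions (2.4 and later) ship an audit command that compares your installed dependencies against known security advisories; note this is an advisory-database lookup, not a malware scan:
composer audit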

Universal Package Managers - is MyGet oddly cheap?

My organization needs a central place to keep our binaries across the different engineering teams. Currently we deal solely with npm and NuGet packages. We will want to host our own private npm and NuGet feeds, and we will want caching to be available (which comes with these package managers; universal package managers aren't a factor, correct?).
As I research the different commercial suppliers of such services, I have come across:
MyGet
ProGet
JFrog Artifactory
Package Drone
Nexus
This link has been very helpful:
https://binary-repositories-comparison.github.io/
What I have come down to is that MyGet seems to offer everything I need; however, so do ProGet and Artifactory, although the latter two are thousands of dollars more than the MyGet platform. I can't figure out why. Can someone help me understand what I am missing here... I am still learning about this world of repo managers.
Feel free to reach out to MyGet if you have any questions/use cases you are looking at. (http://myget.org/support)

Testing a NuGet package

We are big users of NuGet, we've got 25-30 packages which we make available on a network share.
We'd like to be able to test new packages in the consuming applications before they're built and released. Ideally, this could be done with something similar to Maven's snapshot functionality, i.e. a specific development package.
Has anyone else come up with a way of doing this that is, ideally, reasonably non-hacky?
Our favoured method is to generate the package assemblies and then manually overwrite the assemblies in the packages/ directory, i.e. to replace the actual project references, but that doesn't seem particularly clean.
Update:
We use a CI build server which creates builds on every commit and has a specific manually triggered NuGet build which works off specifically tagged versions of the codebase. We don't want to create a NuGet build off every commit, but we would like to be able to test a likely candidate in the wild before we trigger the manual NuGet package build.
I ended up writing a unit / integration testing framework to solve a similar problem. Basically, I needed to verify the content of the package, the versions and info, what would happen when I installed and uninstalled the package, what versions the assemblies in lib were, what bitness the assemblies were built for (x86 or x64), and so on - and I needed it all to run without Visual Studio installed, on my (headless) build machine, as a quality gate.
Standing on the shoulders of giants like Pester, PETools, and SharpDevelop's package management module, I put together nuget-test.
Clone the project into your package directory (where your .nuspec file and package files are). If for whatever reason you want to keep the nuget-test project as a git repo, then simply remove "remove-item nuget-test/.git -Recurse -Force" from the command below.
git clone https://github.com/nickfloyd/nuget-test.git; remove-item nuget-test/.git -Recurse -Force
Run Setup.ps1 in the root of the nuget-test directory in an x86 instance of PowerShell.
PS> .\setup.ps1
Write tests and place them in the nuget-test/test directory using the Pester syntax.
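For example, a minimal Pester test (classic Pester v3/v4 syntax; the package name and assembly path are hypothetical) might look like this:
# nuget-test/test/MyPackage.Tests.ps1
Describe "MyPackage" {
    It "ships the assembly in lib/net45" {
        Test-Path ".\lib\net45\MyPackage.dll" | Should Be $true
    }
}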
Run the tests.
PS> Invoke-Pester
Project page on GitHub: https://github.com/nickfloyd/nuget-test
I hope this helps you get closer to what you're trying to get done.
If you're using NuGet packages to distribute your libraries, you should not limit yourself to testing only the libraries. You should test the packages themselves as well (if your binaries are OK but incorrectly installed, consumers will still have issues). The whole point is to improve that experience.
One way could be to have an additional CI or QA repository. The one you currently have is actually your "production" repository containing consumable releases, considered finished high-quality products.
Going further, you could have a logical package promotion flow (based on Continuous Integration or even using a Continuous Delivery approach), where:
- each check-in produces a package on your CI repository
- testers pick up a CI package for QA and, if it is found OK, promote it to either a QA feed or the production feed (whatever you prefer; it depends on the quality of your testing and how well it is automated)
There are various ways of implementing this scenario: simple network shares, internal NuGet.Server or Gallery implementations, or simply using http://myget.org to give it a try with minimal cost and zero effort.
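As a rough sketch of that promotion flow (the package name, version, and feed URLs are placeholders), the CI build packs and pushes a prerelease package, and testers pull it explicitly:
nuget pack MyLib.nuspec -Version 1.2.0-ci0042
nuget push MyLib.1.2.0-ci0042.nupkg -Source https://ci.example.com/nuget -ApiKey <key>
nuget install MyLib -Prerelease -Source https://ci.example.com/nuget
Promoting a candidate then just means pushing the same .nupkg to the QA or production feed, so the exact binaries that were tested are the ones released.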
Hope that helps!
Cheers,
Xavier