serverless AWS deployment fails because of esbuild - serverless-framework

I'm using a MacBook M1 for development, with the aws-nodejs-typescript template. When I try to deploy my serverless application to AWS, I run into this error:
    You installed esbuild for another platform than the one you're currently using.
    This won't work because esbuild is written with native code and needs to
    install a platform-specific binary executable.
    Specifically the "esbuild-darwin-arm64" package is present but this platform
    needs the "esbuild-darwin-64" package instead. People often get into this
    situation by installing esbuild with npm running inside of Rosetta 2 and then
    trying to use it with node running outside of Rosetta 2, or vice versa (Rosetta
    2 is Apple's on-the-fly x86_64-to-arm64 translation service).
    If you are installing with npm, you can try ensuring that both npm and node are
    not running under Rosetta 2 and then reinstalling esbuild. This likely involves
    changing how you installed npm and/or node. For example, installing node with
    the universal installer here should work: https://nodejs.org/en/download/. Or
    you could consider using yarn instead of npm which has built-in support for
    installing a package on multiple platforms simultaneously.
    If you are installing with yarn, you can try listing both "arm64" and "x64"
    in your ".yarnrc.yml" file using the "supportedArchitectures" feature:
    https://yarnpkg.com/configuration/yarnrc/#supportedArchitectures
    Keep in mind that this means multiple copies of esbuild will be present.
    Another alternative is to use the "esbuild-wasm" package instead, which works
    the same way on all platforms. But it comes with a heavy performance cost and
    can sometimes be 10x slower than the "esbuild" package, so you may also not
    want to do that.
Why is this happening?

I needed to reinstall serverless. To do that, I deleted the .serverless folder in my home directory (~/.serverless) and reinstalled it using Homebrew:
https://formulae.brew.sh/formula/serverless#default
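
For anyone hitting the same mismatch, a quick way to diagnose it is to check which architecture Node reports, then do a clean reinstall with a matching Node/npm. A minimal sketch, assuming an npm-based project:

    # Print the architecture the Node process sees. On an M1 outside
    # Rosetta 2 this should be "arm64"; "x64" means Node is running
    # under Rosetta 2 and will fetch the wrong esbuild binary.
    node -p "process.arch"

    # Clean reinstall so the matching esbuild-darwin-* package is fetched.
    rm -rf node_modules package-lock.json
    npm install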

Related

Can I safely migrate to pnpm from yarn?

I am currently staying at a location where internet and disk space are at a premium, and yarn/npm having to install modules from scratch every single time isn't the most efficient use of either, which brings me to my question.
I recently came across pnpm and it solves my problem perfectly (it installs modules in a central location and symlinks them into your projects). My question is this: if I completely migrate to pnpm, will that affect the project setup if I'm working with someone using yarn/npm? And if I publish a project, will users be forced to use pnpm, or can they use any package manager?
It is recommended to pick one package manager for a project and enforce its usage by everyone; pnpm, yarn, and npm all have their own lockfiles.
If it is a small project and you don't commit the lockfile, then you may use different package managers, but I still don't recommend this.
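One common way to enforce a single package manager is a preinstall hook; a minimal sketch, assuming you pick pnpm and a reasonably recent Node/npm (the only-allow helper and the version below are examples, not requirements):

    # Fail any install started with npm or yarn:
    npm pkg set scripts.preinstall="npx only-allow pnpm"

    # Alternatively, pin the manager with Node's Corepack, which records
    # it in package.json's "packageManager" field:
    corepack enable
    corepack use pnpm@8

Either way, consumers of a published package are unaffected: anyone installing it from the registry can use whichever package manager they like.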

Best way to write a setup script for a multi-language project package that includes Anaconda, Atom, Node.js, etc.?

I am designing an environment for productive research: writing, data analysis, publication, etc.
In order to share the final results with others, I need to find a way to package this and to set up the local installation.
The project depends on Anaconda, so conda as a package manager is available.
It also includes
Pandoc and some Pandoc packages; some will have to be fetched from GitHub directly because certain versions are not available via conda-forge (doable in conda)
Atom and Atom packages; they should be installed and configured by my script (this works on the CLI via the apm package manager)
Node.js and Mermaid and a few other JS packages, which require npm calls
Some file-system-level operations, like deleting parts of packages where I only need a portion, creating symlinks and aliases, etc.
Maybe some Python code for modifying or reading YAML/JSON/INI files.
The main project will reside in a GitHub repository. It will be fine for users to clone it from there and start a build script locally.
My idea is to write a Bash shell script that
creates a conda environment based on requirements.yaml for everything that can be done this way
installs other parts using CLI commands (wget/curl etc.)
does all necessary modifications using CLI commands, maybe using a few short Python scripts (e.g. for changing or reading JSON or yaml files).
My local usage will be on macOS Big Sur; Linux should be supported, and Windows compatibility would be nice to have.
Before I start:
Is this approach viable? I think it will be pretty transparent, but of course also a bit bespoke.
Docker is likely overkill for my purpose, and I have also read that execution will be slow on macOS.
The same environment will likely be installed multiple times on the same user's machine, so it is important that I can control e.g. the reuse of existing packages and files via aliases or symlinks. It is not important that the multiple installations be decoupled for the non-Python/non-conda parts (e.g. Atom, Node.js, and Mermaid could be the same binaries for all installations; just the set of Python packages might vary by installation).
Thanks for your expertise!
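
A minimal sketch of the kind of build script described above; the file names, package names, and paths are illustrative assumptions:

    #!/usr/bin/env bash
    set -euo pipefail

    # 1. Conda layer: everything available via conda-forge
    #    (assumes a requirements.yaml at the repository root).
    conda env create -f requirements.yaml || conda env update -f requirements.yaml

    # 2. Atom layer: editor packages via apm (placeholder package names).
    apm install language-markdown pandoc-convert

    # 3. Node layer: JS tooling via npm, e.g. the Mermaid CLI.
    npm install -g @mermaid-js/mermaid-cli

    # 4. File-system tweaks: share one binary across installations
    #    via a symlink instead of duplicating it.
    mkdir -p "$HOME/bin"
    ln -sf "$(command -v mmdc)" "$HOME/bin/mmdc"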

Any way to check that package.json engines satisfies the version of globally installed packages/binaries?

As I understand it, the engines object only applies when someone installs my application. I would like something similar that prevents compile errors on employee systems (because of wrong Node versions, etc.) before they happen.
I found that there are two packages:
https://github.com/jgillich/npm-check-engines/blob/master/index.js
https://github.com/kruppel/check-engines
But they either don't work or don't do what I want.
It would be nice to have a script that runs before npm install, checks these engine versions, and verifies that the binaries are available in PATH.
I created a package that does this.
https://github.com/muuvmuuv/npm-supervisor
It can be run via npm before installation and will check whether the version in engines is satisfied by the globally or locally installed version.
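
For a dependency-light variant, a preinstall hook can compare the running Node version against the engines range directly. A sketch, assuming the semver package is already installed (e.g. as a devDependency) and wired up via a preinstall script:

    # e.g.: npm pkg set scripts.preinstall="node check-engines.js"
    # where check-engines.js (a hypothetical file) contains the logic below.
    node -e '
    const { engines } = require("./package.json");
    const semver = require("semver"); // assumed to be installed
    if (engines?.node && !semver.satisfies(process.version, engines.node)) {
      console.error(`Node ${process.version} does not satisfy ${engines.node}`);
      process.exit(1);
    }
    '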

Dependency resolution approach - comparing NPM to Homebrew?

I recently got confused and almost installed a tool via brew install when in fact it was an npm package and all I needed to do was npm install -g.
So these tools are strangely similar yet obviously different.
Can someone explain the difference with crystal clarity?
NPM exists to resolve dependencies for application code on a per-app basis, allowing an app to be self-contained and portable. This means that (in its default mode of operation) it will install the same stuff many times, uniquely, repeatedly, and separately, for every app on your system that needs the same package, inside that app's own directory and isolated from everything else.
Homebrew is not like this. The reason is that it serves the system itself, not individual apps, so it is more comparable to just the npm -g part of npm.
There is one extra bit to understand about Homebrew, though: some system packages have specific dependencies and could even have conflicts. This means that, for the global installs Homebrew provides, it also has to solve some nesting and conflict issues. It's a kind of magic.
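
The difference is easiest to see side by side; the package names here are arbitrary examples:

    # npm, default mode: a per-project copy, isolated in ./node_modules
    cd my-app
    npm install lodash            # only my-app sees this copy

    # npm -g: one shared, system-wide copy -- the part that resembles Homebrew
    npm install -g typescript     # puts the `tsc` binary on your PATH

    # Homebrew: system-level packages, resolved once against the
    # dependencies of every other installed formula
    brew install jq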

How can I simplify my stack of package managers?

I don't know how it got this bad. I'm a web developer, and I use Ubuntu, and here are just some of the package managers I'm using.
apt-get for system-wide packages
npm for node packages
pip for python packages
pip3 for python 3 packages
cabal for haskell packages
composer for php packages
bower for front-end packages
gem for ruby packages
git for other things
When I start a new project on a new VM, I have to install seemingly a dozen package managers from a dozen different places, and use them all to create a development environment. This is just getting out of control.
I've discovered that I can basically avoid installing and using pip/pip3 just by installing Python packages from apt, like sudo apt-get install python3-some-library. This saves me from having to use one more package manager. That's awesome. But then I'm stuck with the Ubuntu versions of those packages, which are often really old.
What I'm wondering is, is there a meta-package manager that can help me to replace a few of these parts, so my dev environment is not so tricky to replicate?
I had a thought to make a package manager to rule them all for that very reason. I never finished it, though; too much effort is required to stay compatible, since each package manager has a huge community supporting its upkeep.
The best advice I have is to try to reduce your toolchain for each type of project. Ideally you shouldn't need to work in every language you know on each project. How many of your projects use both Python 2 and Python 3 simultaneously?
Keep using apt for your system packages and install git with it. From there try to stick to one language per project. AFAIK all of the package managers you listed support installing packages from git. The languages you mentioned all have comparable sets of tooling, so use the toolchain available for the target language.
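
For example, the two layers of that advice look like this (the git URL is a hypothetical placeholder):

    # System layer: one manager (apt) for OS packages, including git itself
    sudo apt-get install -y git

    # Project layer: stay inside the project's own language tooling;
    # most language package managers can pull straight from git, e.g.:
    npm install git+https://github.com/user/repo.git
    pip install git+https://github.com/user/repo.git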
I worked with a team that was using composer, npm, bower, bundler, maven, and a tar.gz file for front-end SPAs, because those were the tools they knew. On top of all that, they were using Vagrant simply as a deployer. We looked at our toolchain, described our needs, and realized that everything could be expressed in a single language once we adopted the appropriate tooling for the task at hand.