Generate Sorbet RBI files with bundler and dependabot - sorbet

I'm trying to add Sorbet to a Rails codebase maintained by 20 engineers. We use dependabot to update gems pretty frequently and I'd rather not have to manually run srb rbi update on every version bump. Is there a way to automatically run srb rbi update every time bundle install is run, and have it only update gems that have been changed (so it doesn't take 5 minutes)?

Unfortunately, Sorbet needs to run every piece of code from every gem to create accurate type information, so updating only specific gems isn't really possible.
You could theoretically write a script that runs the srb rbi update command and then generates a git patch file and opens a PR/MR automatically, but I don't really know of any tool to do that.
I use Dependabot for my Rails app and so far what I've been doing is just running the update command every once in a while. Most of the time, having slightly outdated method definitions isn't going to cause type errors for new versions unless the gem's codebase changes completely.
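If you do want to automate it, a minimal sketch of such a script might look like the following. It assumes GitHub with the gh CLI for opening the PR (swap the last step for your GitLab/MR tooling) and that the generated RBIs live under sorbet/:

#!/usr/bin/env bash
# Sketch: regenerate RBIs after a dependency bump and open a PR if anything changed.
set -euo pipefail

bundle install
bundle exec srb rbi update

# Only open a PR when the regenerated RBIs actually differ (including new files).
if [ -n "$(git status --porcelain sorbet/)" ]; then
  branch="rbi-update-$(date +%Y%m%d%H%M%S)"
  git checkout -b "$branch"
  git add sorbet/
  git commit -m "Regenerate Sorbet RBIs after gem updates"
  git push -u origin "$branch"
  gh pr create --fill
fi

You could run it manually after merging a batch of Dependabot PRs, or wire it into a scheduled CI job.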

Related

Is there a way to deploy 2 versions of the same package for 2 different use cases at once?

The answer seems like 'no' but I wanted to check with colleagues here.
We provide an npm package for our own sites as well as some 3rd party sites.
Our package also bundles a fairly heavy, old homegrown npm package as a dependency.
We no longer need that dependency on our own sites, but the 3rd party sites do.
We also have no way of controlling the code on those 3rd party sites, so we need to keep the deployed bundle name and location the same for them.
Is there a way to publish, from the same repository, one version of our package without the extra dependency for our own use and another version with it for the third parties?
ourpackage-new.js (without the dependency)
ourpackage.js (with the dependency)
I had some success with a second package.json in a subdirectory. I would publish the original package and then the new one via a command in gitlab.yaml that cds into that directory and runs npm publish there after the first one. This requires copying some dependency files into that subdirectory as well, which means that if one copy was updated we'd have to remember to update the other. Not a situation we'd want.
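For reference, a rough sketch of what that publish job could look like in the GitLab CI config (the subdirectory name is a placeholder, and npm auth is assumed to already be configured, e.g. via CI variables):

publish:
  stage: deploy
  image: node:16
  script:
    # Publish the original bundle (with the legacy dependency) from the repo root.
    - npm publish
    # Then publish the slimmed-down variant from the subdirectory with its own package.json.
    - cd slim-package
    - npm publish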
Even if we created a 2nd repository for the change just for us, we'd still need to update 2 repositories every time we had a new change to deploy.
I checked into npm aliasing as well, but we wouldn't be importing a new version alongside an old version; these would be more like sister versions.
In any case, thanks for the input and thoughts. I realize npm was probably not made for this type of situation. If I remember right, I could do this with Gulp years ago, but I haven't even thought about Gulp in so long :) And then I'd have to deploy manually via an FTP program ... wow, those were the days.
Thanks again!

Testing in GitLab CI/CD with different dependency versions

I'm currently developing a (Laravel) package on GitLab, and I want to automate testing using its CI/CD pipeline.
The problem
I already know how to set up a pipeline in GitLab, but what I want to achieve is automated testing against different versions of the same dependency, in order to keep checking compatibility with old versions and add checks for upcoming new ones.
The case
My Laravel package is not very complex right now and doesn't use any particularly specific Laravel features, so I would like to keep it compatible with as many Laravel versions as possible: I would trigger different testing stages in my pipeline to run my tests against Laravel 5.6, 5.7, 5.8, 6, 7, and 8.
The question
How do I trigger different testing stages using different laravel/framework versions?
When downloading dependencies, Composer will go for the latest version available if I define it with '^', so which files do I have to edit?
OK, I've analyzed the problem a bit more and made some considerations about it.
I'm not writing this as a proper answer to my question, since I hope someone will eventually come up with a better solution/idea, but just to share some thoughts with everyone facing the same problem.
First: since I'm developing a package for Laravel, I cannot declare Laravel as a dependency of it (neither production nor dev); it is my Laravel project that needs to declare my package as a dependency.
Second: to test my package's compatibility with Laravel I'm using orchestra/testbench as a dev-dependency, and according to its documentation every release targets a single, precise Laravel version, so if I want to test my package against different Laravel versions I need to test it with different orchestra/testbench releases.
Third: the only dependency my package has is PHP 7.3, so I can easily test against this and subsequent versions with the GitLab pipeline by creating a job for each PHP version that uses a Docker image with the correct PHP version and the latest Composer.
Conclusion
It is neither trivial nor straightforward to test a Laravel package against different Laravel versions.
The only idea I came up with, but haven't tried since I gave it up and just test PHP versions (for now), is to make a branch for each Laravel version I want to test and update its composer.json dev-dependency with the correct orchestra/testbench release.
Then I can execute the PHP tests on my feature branch's merge request and, in case of success, merge the develop branch into each "laravel branch" and run the Laravel compatibility tests on those.
Finally, if every Laravel branch passes its tests, or at least the ones I decide to keep development/support active for, I can merge the develop branch into master.
I'm not going for it
I decided to avoid all of this since I'm not quite sure how to implement it all in the pipeline, and I strongly think it would just add maintenance burden to this project.
So I just keep the PHP jobs to check against different PHP versions; this way I just need to copy/paste a job definition in my gitlab-ci.yml file and change the Docker image version according to the new PHP version to test against.
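Just to illustrate, a sketch of what those copy/pasted jobs could look like (the image tags and test command here are only examples; adjust to your own setup):

# gitlab-ci.yml (sketch): one job per PHP version, differing only in the image tag
test:php7.3:
  image: php:7.3
  script:
    - curl -sS https://getcomposer.org/installer | php
    - php composer.phar install
    - vendor/bin/phpunit

test:php7.4:
  image: php:7.4
  script:
    - curl -sS https://getcomposer.org/installer | php
    - php composer.phar install
    - vendor/bin/phpunit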

PHPUnit: local VS global install

Installing PHPUnit with composer globally seems more convenient to me for those two reasons:
1. Using it everywhere without needing an extra install.
2. Just running phpunit instead of vendor/bin/phpunit (using an alias might solve this)
Are there any reasons why a local install might be the better choice? For example: using the exact same version every time. (I don't have a lot of experience with PHPUnit, so I'm not sure whether this really is an issue or not.)
The big disadvantage of installing packages globally is that you might end up with different versions of PHPUnit between developers in your team (unless you are the only developer). This might cause some side effects.
If you install it locally using composer.json, then every developer in your team will have exactly the same version as you do for that specific application. Also, everybody will see when you change the version in composer.json.
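For example, pulling it in as a dev dependency is a single command, after which the binary is available at vendor/bin/phpunit:

composer require --dev phpunit/phpunit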
If you don't like typing vendor/bin/phpunit, you can use a Makefile (which also lives in your project):
test:
	vendor/bin/phpunit --configuration=test/Unit/phpunit.xml
then run it ...
make test
I like to install it via Composer and the require-dev block, but another way that comes highly recommended is to download the phpunit.phar into the project and use that.
Either way, you control exactly which version is being used (and when it's updated) - which is the most important part, as you can't so easily control what people have installed globally.

How to organize development of Rails App and multiple Engines

It's hard to formulate the question actually so I just explain the situation.
I'm working on an application that consists of multiple sub-applications. The main app just provides a navigation bar and some basic functionality, like configuration of users and permissions, while the sub-applications provide the actual functionality.
Now, this is a Rails 2 application and the sub-applications are embedded in frames; it's not a really nice design and it's pretty complex to set up.
Fortunately we have Engines now, and that would be the saner solution for this application.
Until now everything has lived in Subversion and can be updated at once; shared code uses externals. We would like to move to git while we're restructuring and refactoring anyway.
I've been searching the web for the past few days about Bundler, git submodules and git subtrees, but I haven't found a good description of how to properly manage a large project which consists of multiple Engines/Gems when you are developing on all of them at the same time.
In particular I would like to be able to:
use Bundler to manage dependencies
do not install our own Gems and Engines into the global gem path but relative to the main app, as a git repository
have our own Gems and Engines setup as git repository (maybe with Bundler's local path override)
an easy way to fetch all dependencies (bundle install) which pulls the latest version of our own Gems and Engines; if that's not possible, then one command to git pull all of our own Gems and Engines (maybe a rake task?)
make it easy for new developers to set up the entire development environment quickly (git clone the app, bundle install the dependencies including all of our own Gems and Engines, locally)
deploy with Capistrano, easily
What I already thought about:
including everything in one repository: this seems to defeat the purpose of separate Gems/Engines for me; also, I think it wouldn't allow us to manage the main app's dependencies on our Engines via Bundler
using submodules: I've read too many posts about why it's bad, and with our number of developers it's only a matter of time until somebody commits a submodule pointer to a commit that only exists in his local repo
the git subtree utility: seems quite complex to me
So has anybody of you a similar setup and how do you manage it to make updating and committing changes as easy as possible? Where do you put your Engine/Gem code on which the application depends?
TL;DR: How do you manage a large Rails project which consists of multiple Engines and Gems?
We have a similar (but probably less complex) case at my company. Here is what we do (for now), which could work for you too:
Put your Rails app in its own git repository. The various gems each get their own repository as well (while it is possible to do otherwise, the "one gem = one git repository" approach will make your life easier).
Then in your Rails app Gemfile, you have several options:
The default should be to point each gem at its git repository (so that Bundler will load it from there).
When working locally on some of the gems and the Rails app, either change the Gemfile to use the local path (http://gembundler.com/v1.2/gemfile.html) or, better, override the path locally (see: http://gembundler.com/v1.2/git.html). Be careful: those two options are different. The first one uses the path directly, the second uses the local git repository (so a new, uncommitted change will be visible with the first, but not with the second).
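To make that concrete, a rough sketch (the gem name, git URL and local path are placeholders):

# Gemfile of the main Rails app
gem 'our_engine', :git => 'git@example.com:ourcompany/our_engine.git', :branch => 'master'

# Option 1: point straight at a local checkout (picks up uncommitted changes on disk)
# gem 'our_engine', :path => '../our_engine'

# Option 2: keep the git source above and override it locally instead
# (uses the local git repository, so only committed changes are picked up):
#   bundle config local.our_engine ../our_engine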
For updating all your gems easily, I would create a small .sh script (just to launch the various clone or update operations, plus bundle install so that everything comes out clean) and commit it with the main app. I would also agree on a "standard folder organization" within the team (i.e. everyone uses a base folder of their choice, with folders for the Rails app and each gem under it), to make the procedure easier.
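A sketch of such a script, assuming the folder layout just described (the gem names and URL are placeholders):

#!/usr/bin/env bash
# setup.sh - clone or update our engines/gems next to the main app, then bundle
set -e

GEMS="our_engine_one our_engine_two"            # placeholder engine/gem names
BASE_DIR="$(cd "$(dirname "$0")/.." && pwd)"    # the agreed base folder, one level up

for gem in $GEMS; do
  if [ -d "$BASE_DIR/$gem" ]; then
    (cd "$BASE_DIR/$gem" && git pull)
  else
    git clone "git@example.com:ourcompany/$gem.git" "$BASE_DIR/$gem"
  fi
done

bundle install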
I hope this can help or get you ideas (your question is quite complex and manifold, so I'm not 100 % sure this is what you are looking for).
How to manage your Gem dependencies?
Bundler via Gemfile.
How to manage your Engines?
Bundler via Gemfile.
Architect your Engines as Gems and provide their git repo location in your Gemfile. If you want examples, check out how to include the https://github.com/radar/forem gem in your Gemfile.
Also, this helped me learn Rails Engines: http://edgeguides.rubyonrails.org/engines.html
Are you coming from Java Land?
Rails does have a learning curve, but not like the Java mountain cliff drop off.

How to deal with different gem dependencies within Bundler for scripts within a single Rails project?

Our Rails application pulls feeds from multiple sources. The workers that pull these feeds need gem dependencies for RMagick, Oracle databases, and many other gems. In short, they have very different dependency needs from the main web application. Until Rails 3 and Bundler, life was good.
These worker gem dependencies are irrelevant to our actual production website. Under Rails 3, one Gemfile is expected to contain all these dependencies. This has the nasty side effect of requiring all gem dependencies to be loaded within the production app, which causes pointless bloat, possible security issues, memory leaks, complicated deployment, and other ills. Sadly, Bundler breaks the standard require mechanism, which would have provided a way out of the quagmire by allowing us to simply require the necessary gems only in the worker and have them somewhere on the system, not in the bundle. The workers use our Rails models to file their data.
Can anyone suggest solutions to make the system practical in Rails 3? I am tempted to make the Gemfile use conditional environment variables in places to drive the gem commands; however, it seems the Gemfile.lock could make this problematic when going from working on one worker script (for the feeds) to the next, which would have different dependencies. Help???
I've been contemplating a similar problem, and although I don't have a solution in use anywhere yet, your question did make me think it out some more. I think you should be able to use a group to accomplish this. You can add something like this to your Gemfile:
group :workers do
  gem "extra_gem_1"
  gem "extra_gem_2"
end
Then, you can call
Bundler.require(:default, :workers)
and that should load your gems. How this works will depend on your setup; you might be able to add logic to config/application.rb, or you might need to do this elsewhere. This might be hackish, but it works in the console anyway.
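As a rough sketch of the config/application.rb approach (the WORKER environment variable is just an assumed convention for how you launch your worker processes, not something Rails or Bundler defines):

# config/application.rb (sketch)
require File.expand_path('../boot', __FILE__)
require 'rails/all'

if defined?(Bundler)
  # Always load the default group and the group for the current Rails env.
  groups = [:default, Rails.env]
  # Pull in the :workers group only when this process is actually a worker.
  groups << :workers if ENV['WORKER']
  Bundler.require(*groups)
end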
When installing your gems, you can call:
bundle install --without workers
to exclude those gems from production.
Alternatively, you can use two Gemfiles, but that seems like a mess as well since presumably there's some crossover.