Automating a GitHub workflow that involves a third-party repo

I have a GitHub repo myRepo that scans the contents of another repo theirRepo and converts them to JSON files. The details aren't really important; what matters is that myRepo uses Node.js and holds theirRepo as a submodule. License-wise this is not a problem.
What I'd like to achieve is that, when theirRepo merges into main, myRepo magically updates and builds the new files. I'd like to use existing infrastructure such as GitHub Actions, Netlify build processes, etc.
How would you approach this?
I don't expect a detailed solution for the magical part, but am rather looking for a few pointers, something that gets me started.

As GitHub Actions (AFAIK) does not currently allow triggering workflows based on changes in other repositories (unless you control the other repository's workflows), one might have to hack a little bit.
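If you do control the other repository's workflows, a cleaner route is a repository_dispatch event: a workflow in theirRepo calls the GitHub API after a merge to main, and myRepo listens for that event. A rough sketch, where the secret name, the event type and the repo owner are placeholders:

name: Notify myRepo
on:
  push:
    branches: [main]
jobs:
  notify:
    runs-on: ubuntu-latest
    steps:
      - name: Send repository_dispatch to myRepo
        run: |
          curl -X POST \
            -H "Accept: application/vnd.github+json" \
            -H "Authorization: token ${{ secrets.CROSS_REPO_PAT }}" \
            https://api.github.com/repos/<your-user>/myRepo/dispatches \
            -d '{"event_type": "their-repo-updated"}'

In myRepo, the receiving workflow then starts with:

on:
  repository_dispatch:
    types: [their-repo-updated]

The token stored as CROSS_REPO_PAT needs permission to trigger workflows in myRepo. Everything below assumes you don't have that kind of control.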
File changes in the other repo
I'm not familiar with Node, but depending on the project's conventions, the following files might change when a new release lands or main is updated:
package-lock.json
CHANGELOG.md (for semantic versioning)
This is a rough approximation; you might also want to identify multiple files likely to change with each merged PR.
Cron-based jobs
Run your job every N hours/minutes or another time interval to check for changes.
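In GitHub Actions this is just a schedule trigger, for instance (the interval is arbitrary):

on:
  schedule:
    - cron: '0 */6 * * *'   # every six hours; pick whatever interval suits you
  workflow_dispatch: {}      # optional: also allow manual runs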
Use caching
Run your action only when files in the other repo change, something along these lines:
steps:
  - run: curl <path to file> -o output1
  - run: curl <path to file2> -o output2
  - name: Cache
    uses: actions/cache@v3
    id: cache
    with:
      path: |
        output1
        output2
      key: ${{ hashFiles('output1', 'output2') }}
  - name: Update repo
    if: steps.cache.outputs.cache-hit != 'true'
    run: <do your stuff>
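For the update step itself, since myRepo holds theirRepo as a submodule, <do your stuff> would roughly amount to bumping the submodule, rebuilding and committing the result. A sketch, assuming actions/checkout ran with submodules enabled and with a token that may push; the submodule path, build command and bot identity are assumptions:

- name: Update repo
  if: steps.cache.outputs.cache-hit != 'true'
  run: |
    git submodule update --init --remote theirRepo    # submodule path is an assumption
    npm ci && npm run build                           # build command is an assumption
    git config user.name "github-actions[bot]"
    git config user.email "github-actions[bot]@users.noreply.github.com"
    git add -A
    git commit -m "Rebuild JSON from theirRepo" || echo "Nothing to commit"
    git push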


Using your own fork of Next.js

How would you use your own fork of Next.js in a project?
I tried using patch-package but the next package gets overridden by Vercel
I also tried releasing to npm, but it's pretty difficult since it needs to release many @next/ packages first
How would you go about this?
Some people frown on this technique, but I've used it in many projects with great success.
tl;dr - fork the repo on GitHub and install directly from git. Works with monorepos by using GitPkg.
1. Fork the repository you want on GitHub.
2. Clone your new fork: git clone https://github.com/<your-user-name>/<package-name>
3. Make any changes and push them. Take note of the commit hash.
4. Install the dependency using the following format:
npm install <your-user-name>/<package-name>#<commit-ish>
Important: replace <commit-ish> with the hash from step 3 (a tag or branch name works too). This step matters because it locks you to a specific version. The resulting package.json entry is sketched after these steps.
5. If you are referencing a monorepo, you can specify the subdirectory of the particular package you want by using GitPkg.
6. Continue working like you normally would. As you make changes to the package, push them and repeat steps 3-4 to get the new changes into your project.
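For reference, after step 4 the entry npm records in package.json points at your fork rather than the registry; with the same placeholders it looks roughly like this:

{
  "dependencies": {
    "<package-name>": "github:<your-user-name>/<package-name>#<commit-ish>"
  }
}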
Keeping things up-to-date
To update the package, you will need to merge any new "upstream" changes made by the original author and push those to your forked repository. Then repeat Step #4 above.
Add the original "upstream" remote:
git remote add upstream https://github.com/vercel/next.js.git
Pull down any changes and merge them - make sure to reference the right branch name:
git fetch upstream
git checkout <main-branch>
git merge upstream/<main-branch>
Resolve any merge conflicts and then push to your fork: git push origin <main-branch>
Repeat step #4 from above for installing the latest changes - make sure to pick a new commit hash or tag in order to get the latest changes.

Create repository in non-empty remote folder

It's been 14 years since I last worked with svn and apparently I have forgotten everything...
I have an existing web project, consisting of a bunch of php, html, js and other files in a directory tree on a V-Server. Now I want to put these folders under version control and create a copy on my local machine using svn. So I installed Subversion according to these instructions: https://www.linuxcloudvps.com/blog/how-to-install-svn-server-on-debian-9/
Using the already-present apache2.
But now I kinda hit a roadblock. If I try svnadmin create on the existing folder, it tells me that it is not empty and does nothing, really. All the questions and answers I find here and elsewhere are either
a) focussing on an already existing folder on the local machine
b) assuming more prior knowledge than I have right now aka I don't understand them.
Is there a step-by-step guide for dummies anywhere on how to do this? Or can anyone tell me in layman's terms how to do this?
I can't believe this case never comes up or that it is really very complicated.
At the risk of failing to understand your exact needs, I think you can proceed as follows. I'll use these terms:
Code: it's the unversioned directory on the V-Server where you currently have the bunch of php, html, js and other files
Repository: it's the first "special" directory you need to create in order to store your Subversion history and potentially share it with others. There must be one and there can only be one.
Working copy: it's the second "special" directory you need to create in order to work with your php, html, js... files once they are versioned and it'll be linked to a given path and revision of your repository. At a given time there can be zero, one or many of them.
Your code can become a working copy or not, that's up to you, but it can never become a repository:
$ svnadmin create /path/to/code
svnadmin: E200011: Repository creation failed
svnadmin: E200011: Could not create top-level directory
svnadmin: E200011: '/path/to/code' exists and is non-empty
Your repository requires an empty folder, but it can be located anywhere you like, as long as you have access to it from the machine you're going to use in your daily work. Access means it's located on your PC (thus you use the file: protocol) or it's reachable through a server you've installed and configured (svn:, http: or https:).
$ svnadmin create /path/to/repo
$ 😎
Your working copies can be created wherever you need to work with your IDE. It can be an empty directory (the usual scenario) or a non-empty one. The checkout command retrieves your files from the repo and puts them in the working copy so, at a later stage, you're able to run a commit command to submit your new and changed files to the repository. As you can figure out it isn't a good idea to create a working copy in random directories because incoming files will mix with existing files. There's however a special situation when it can make sense: when the repository location is new and is still empty. In that case you can choose between two approaches:
If you want code to become a working copy, you can check out right into it and then make an initial commit to upload all files:
$ svn checkout file:///path/to/repo /path/to/code
Checked out revision 0.
$ svn add /path/to/code --force
A code/index.php
$ svn commit /path/to/code -m "Import existing codebase"
Adding /path/to/code/index.php
Transmitting file data .done
Committing transaction...
Committed revision 1.
If you don't care about code once it's stored in the repository or you want your working copy elsewhere, you can import your files from code and create a working copy in a fresh directory:
$ svn import /path/to/code file:///path/to/repo -m "Import existing codebase"
Adding code/index.php
Committing transaction...
Committed revision 1.
$ svn checkout file:///path/to/repo fresh
A fresh/index.php
Checked out revision 1.

How to list the modified files?

I am using gitlab-ci to run scripts defined in .gitlab-ci.yml whenever a PR is raised.
I want to get the list of modified files since the last commit.
The use case is to run file-specific integration tests in a large codebase.
If you do not need to know the paths, but simply need to run a specific job only when a specific file changes, then use the only:changes configuration in .gitlab-ci.yml, e.g.
docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  only:
    changes:
      - Dockerfile
      - docker/scripts/*
Alternatively, if you need the paths of the modified files, you can use the GitLab CI CI_COMMIT_BEFORE_SHA and CI_COMMIT_SHA environment variables, e.g.
> git diff --name-only $CI_COMMIT_BEFORE_SHA $CI_COMMIT_SHA
src/countries/gb/homemcr.js
src/countries/gb/kinodigital.js
A common use case that some people will find useful: run the job when a merge request is raised and, specifically, lint all the files that changed with respect to the target branch.
As one of the answers suggests, we can get the target branch name through the CI_MERGE_REQUEST_TARGET_BRANCH_NAME variable (list of predefined variables). We can use git diff --name-only origin/$CI_MERGE_REQUEST_TARGET_BRANCH_NAME to get the list of files that were changed, then pass them on to the linter via xargs. The example configuration may look like this:
code_quality:
  only:
    - merge_requests
  script:
    - git diff --name-only origin/$CI_MERGE_REQUEST_TARGET_BRANCH_NAME | xargs <LINT_COMMAND_FOR_FILE>
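One caveat, depending on the runner's fetch strategy: origin/$CI_MERGE_REQUEST_TARGET_BRANCH_NAME may not exist in a fresh CI clone, so it can be safer to fetch the target branch explicitly before diffing, roughly:

script:
  - git fetch origin "$CI_MERGE_REQUEST_TARGET_BRANCH_NAME"
  - git diff --name-only "origin/$CI_MERGE_REQUEST_TARGET_BRANCH_NAME" | xargs <LINT_COMMAND_FOR_FILE>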

Trigger gitlab pipeline based on file type or extension

I'm trying to build a conditional pipeline that is only triggered when a certain file type or extension is pushed.
For example, when pushing my Markdown files I would like to compile them and generate HTML, TXT, ...
Looking at the documentation, I see that there is support for only and except, but they rely only on tags, commit messages, and so on; it's not possible to specify the files modified in the commit. An extension of only/except might look like:
only:
  files:
    - "*.md"
That is not built-in. You may write your own implementation, but note that if you compare with the previous commit and push several commits at once, changes may be missed, since by default jobs only run for the latest commit. It would be a lot easier to just always build the static site.
The simplest way to implement it would be to grep the result of git diff and end the job early if no matching files changed.
before_script:
  - if ! grep "\.md$" <(git diff --name-only HEAD~1); then exit; fi;

Getting current Git commit version from within Rails app?

How can I retrieve the current Git commit version from within a Ruby on Rails app?
Want to display the Git version (or maybe the last 6 characters or so) to serve as an app version.
Like @meagar said, use backticks to execute the shell command from within your app, but you may find these two commands more useful:
Full hash:
git rev-parse HEAD
First 7 characters of hash:
git rev-parse --short HEAD
You can invoke the git command from within your script:
commit = `git show --pretty=%H`
puts commit
Depending on your environment you may want to use the full path to the git binary, and possibly specify the GIT_DIR via an environment variable or --git-dir.
A more robust solution is git show --pretty=%H -q; the -q flag suppresses the diff output.
To remove the trailing newline from the output, use chomp. For example: `git show --pretty=%H -q`.chomp
The selected answer has the potential to actually return the diff when the commit is not a merge commit. Verified on git version 2.16.2.windows.1.
I presume that you want to include the app version in your HTML somewhere? The prerequisite is that you are deploying your repo with Capistrano in the default manner (you are uploading the repo, not sending up an archive file).
You can add some code to the Rails initializer as outlined here. That approach will get the SHA1 from the last commit, and make it available as an environment variable.
The other way to do it is to have your Capistrano task generate a static file in the public directory with the commit SHA in it. You could include other info in this file that seems useful.
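For the initializer approach, a minimal sketch could look like this; GIT_REVISION is an assumed constant name, and it only works when the deployed directory is an actual git checkout:

# config/initializers/git_revision.rb
# Capture the current commit once at boot so views can render it cheaply.
GIT_REVISION =
  begin
    `git rev-parse --short HEAD`.chomp
  rescue StandardError
    "unknown"
  end

You can then drop GIT_REVISION into a footer or a health-check response.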