Handling several git wrap files in Meson

I'm working on a project that uses Meson as its build system and has several library dependencies in different repositories:
mainProject
---subProj1
---subProj2
---subProj3
...
In order to clone each of them and get the dependency, I can use a wrap file, e.g. subProj1.wrap:
[wrap-git]
directory = subProj1
url = https://...../bitbucket/..../subProj1
revision = 0.0.1
depth = 1
But this means that I need one wrap file for each of them. Is there a smart way to collect all the needed wrap files in a single file? Or is there some other way to handle all the repositories that I need to clone?
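For reference, each wrap file lives in the subprojects/ directory and is only fetched when meson.build actually references it. A minimal sketch of the consuming side (the variable name subProj1_dep is an assumption; it must match whatever the subproject's own meson.build declares):

project('mainProject', 'cpp')

# Clones subprojects/subProj1.wrap on first use, falling back to the
# subproject when the dependency is not found on the system.
subProj1_dep = dependency('subProj1', fallback: ['subProj1', 'subProj1_dep'])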

Related

terraform module from git repo - how to exclude directories from being cloned by terraform init?

We have a Terraform module developed and kept inside a repo, and people access it by putting the below in their main.tf:
module "standard_ingress" {
  source = "git::https://xxx.xx.xx/scm/xxxx/xxxx-terraform-eks-ingress-module.git?ref=master"
}
When they run terraform init, the whole repo is cloned to a folder (~/.terraform/modules/standard_ingress).
We have some non-module (non-Terraform) folders in the same repo and same branch as well.
Is there a way we can make terraform init exclude those folders from being cloned?
Thanks.
The Git transfer protocols all work by transferring batches of commits associated with a particular remote ref (branch or tag), so there is no way for a Git client to fetch only a subset of the directories or files in the selected commit.
Terraform can only use the Git protocol as it's already defined, and so it cannot provide any capabilities that the underlying protocol lacks.
If your concern is the amount of time taken to clone the entire repository, you may be able to optimize by excluding anything except the most recent commit rather than by ignoring files within that commit. You can do that by setting the depth argument to 1:
source = "git::https://xxx.xx.xx/scm/xxxx/xxxx-terraform-eks-ingress-module.git?ref=master&depth=1"
If even that isn't sufficient then I think your only further option would be to add a separate build and release step for your modules where you capture the subset of files that are relevant to the Terraform modules into a .zip or .tar.gz archive, publish that archive somewhere that Terraform can fetch it over HTTP, and then use fetching archives over HTTP as the source type. In this case Terraform will download only the contents of the archive, allowing you to curate exactly what's included. (It would also be equivalent to put the archive into one of the supported cloud storage services, such as Amazon S3.)
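A hedged sketch of that archive-based approach (the hosts, bucket, and paths here are hypothetical):

module "standard_ingress" {
  # Terraform downloads the archive over HTTP and extracts only its contents.
  source = "https://artifacts.example.com/terraform/eks-ingress-module.zip"
}

# Equivalent with the archive hosted on Amazon S3:
module "standard_ingress_s3" {
  source = "s3::https://s3-eu-west-1.amazonaws.com/example-modules/eks-ingress-module.zip"
}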

Use Conan package manager to copy files to project

I use Conan to manage the dependencies within my (C++) project.
Now I need some relatively large files in the project which should not be checked into Git. I have these files on an HTTP server and want to download them via a Conan recipe and make them available within my project (the files are needed by the finished binary and have nothing to do with the build process itself).
But I can't get Conan to copy the files to the right place; here is my attempt:
from conans import ConanFile, tools

class MyPackage(ConanFile):
    name = "package"
    version = "11.28"
    author = "Whatever"
    keep_imports = True
    exports = "*"

    def source(self):
        tools.get("http://just/a/file.zip")

    def imports(self):
        self.copy("*", dst="content")

    def package(self):
        self.copy("*")

    def package_id(self):
        self.info.header_only()
For example, if my project is located under C:\dev\project and the files A.dat and B\C.dat are located in file.zip, I would like to have them under C:\dev\project\...\A.dat and C:\dev\project\...\B\C.dat.
The problem is that when I run the recipe, the files end up under <CONAN_HOME>\package\11.28\(...)\package\A.dat and <CONAN_HOME>\package\11.28\(...)\package\B\C.dat (additionally also under <CONAN_HOME>\package\11.28\(...)\source, but that is not important), and not under C:\dev\project\...
I suppose you are calling
conan create .
when you run the recipe. Normally you can't modify your local folder by running conan create, but there is one exception to this rule: if you define a set_version() method inside your recipe:

def set_version(self):
    self.version = "11.28"
    # do whatever you want in your local folder,
    # e.g. tools.get("http://just/a/file.zip")
    # and unpack your files into A.dat and B/C.dat

Everything you do inside this function is executed within your current working directory, so you could download your zip file here and copy the contained files to their locations.
Additionally, you have to pick these files up with the exports_sources attribute if you want them to become part of your final package:
exports_sources = "A.dat", "B/C.dat"
This is a hack and should be avoided, however.
Try instead to package your files A.dat and C.dat in their own package, say MyDats/1.0, using the following recipe:
from conans import ConanFile, tools

class MyPackage(ConanFile):
    name = "MyDats"
    version = "1.0"

    def source(self):
        tools.get("http://just/a/file.zip")
        # unpack files into A.dat and C.dat herein..

    def package(self):
        self.copy("*", dst="include", keep_path=False)

    def package_id(self):
        self.info.header_only()
By the way: you don't normally need to specify exports = "*"; the only things that should be exported are files necessary to run the recipe itself (not source code or your files A.dat, C.dat).
When you call
conan create .
on this, it will package your files and install them locally in your cache.
Then place a conanfile.txt in the folder C:\dev\project\import\ of your local project, containing:

[requires]
MyDats/1.0

[imports]
include, A.dat -> ..\
include, C.dat -> ..\B
You can obviously put your conanfile.txt in a location other than project\import; the main point is not to have two recipes in the same location.
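A hedged note on usage (assuming Conan 1.x, which this recipe style implies): the [imports] section is executed when you run conan install from the folder holding the conanfile.txt, e.g.

cd C:\dev\project\import
conan install .

after which A.dat ends up in C:\dev\project\ and C.dat in C:\dev\project\B, per the destinations given above.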
If these files are needed by your finished binary, however, you should include them in your project's own package, which, as far as I understood, is what you already did inadvertently.

What is the difference between mongooseim.cfg at 2 different places

I am using MongooseIM 3.2.0 and after compiling it from the source code, I can see mongooseim.cfg at:
1. /MongooseIM/_build/prod/rel/mongooseim/etc/mongooseim.cfg
I can also see in the docs that there is another mongooseim.cfg at the root level:
2. /MongooseIM/rel/files/mongooseim.cfg
What is the difference between the two? My guess is that the file at path 1 is copied to path 2 after compiling the project.
Path 1 (/MongooseIM/_build/prod/rel/mongooseim/etc/mongooseim.cfg) is the actual config file of MongooseIM once it's built. You can tell that by MongooseIM/_build in the path - the _build directory doesn't exist in a fresh clone of the repository. To give you more context, /MongooseIM/_build/prod/rel/mongooseim is a self-contained Erlang release of MongooseIM. Change this file if you want to modify the config of this particular MongooseIM build - the changes will be lost after you rebuild.
Path 2 (/MongooseIM/rel/files/mongooseim.cfg) is cloned as part of the repository - it's a config template. The specific values are defined in rel/*vars.config files and are substituted for the variables in the template file at build time, depending on the Rebar3 profile in use (see rebar.config for profiles). Change this file if you want your changes to remain after consecutive rebuilds of the project.
To cut a long story short, when you run make rel, the files /MongooseIM/rel/files/mongooseim.cfg and /MongooseIM/rel/vars.config are used to create /MongooseIM/_build/prod/rel/mongooseim/etc/mongooseim.cfg.
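The substitution works like standard rebar3 overlay templating: the template contains {{variable}} placeholders and the vars files supply the values. An illustrative sketch (the variable name and value are made up for the example, not taken verbatim from the MongooseIM sources):

%% rel/files/mongooseim.cfg -- template with a placeholder
{hosts, [{{hosts}}]}.

%% rel/vars.config -- value substituted at build time
{hosts, "\"localhost\""}.

%% resulting _build/prod/rel/mongooseim/etc/mongooseim.cfg
{hosts, ["localhost"]}.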

Managing Dependencies in SailsJS

How can I manage dependencies in a Sails.js based project? There is a really neat feature which automatically links assets from the assets folder into the relevant templates. However, to get these files (e.g. Angular, Bootstrap, Material Design etc.) I like to run bower / npm install, but then the resources end up in the bower_components folder and can't be linked to. How can I work around this so that files installed by package managers are included in Sails.js's default mechanism?
You have the option to change the location of the bower components with a .bowerrc file.
Example .bowerrc:
{
  "directory": "assets/components/"
}
It is not a good idea to automatically link components; there are many .js/.css files that must not be included together (e.g. lib_name.js and lib_name.min.js).
You have to include them manually like this (tasks/pipeline.js):
var jsFilesToInject = [
  'components/lodash/lodash.js',
  'components/moment/moment.js',
  'components/moment/locale/el.js',
  'components/angular-moment/angular-moment.js',
  'components/re-tree/re-tree.min.js',
  'components/ng-device-detector/ng-device-detector.js',

  // Load sails.io before everything else
  'js/dependencies/extend.js',
  'js/dependencies/sails.io.js',

  // Dependencies
  //'js/dependencies/**/*.js',
  'js/my-app-bootstrap.js',

  // Load my controllers
  'js/angular/**/*.js',
  'js/my-app.js',
];

How can I separate generated artifacts from the main build with semantic UI?

I am trying to figure out how to integrate Semantic UI with my gulp-based frontend toolchain.
The npm artifact semantic-ui includes an interactive installer that will write a semantic.json file to the root of my project and install the less files, gulp tasks and some configuration into my project. All of these files will be put in subdirectories of a single base directory specified in semantic.json.
I do not want any dependency implementation files or any generated files in the git repository for my project, because this will pollute revision history and lead to unnecessary merge conflicts. I would very much prefer to provide semantic.json only and .gitignore the semantic base directory. On npm install, the Semantic installer should install everything to the base directory specified in semantic.json. When building, I want the artifacts generated into a separate dist directory that does not reside under the semantic base directory.
However, if I do this, the installer will fail with a message stating that it cannot find the directories to update and drop me into the interactive installer instead. This is not what I want, because it means my build is no longer non-interactive (which will cause the CI build to fail).
How can I integrate Semantic UI into my build without having to commit Semantic and its generated artifacts into my git repository?
This is what we did in our similar scenario. The following are true:
Everything Semantic UI generates is in .gitignore. Therefore, the only Semantic UI files we have in our repo are:
semantic.json
semantic/src folder (this is where our theme modifications actually live)
semantic/tasks folder (it probably doesn't need to be in git, but since it's needed for building, everything is simpler if we keep it in our repo)
We never need to (re)run the Semantic UI installer; everything is integrated in our own gulpfile.js.
Semantic UI outputs to our assets folder, which is not in the same folder as its sources.
Semantic UI is updated automatically using npm as per the rules in my package.json.
Here are the steps needed to achieve this:
Install Semantic UI. By this I assume that you either used npm or cloned it from git (using npm is highly recommended); either way, you have semantic.json in your project's main folder and a semantic folder containing gulpfile.js, src and tasks.
Make sure Semantic UI can be built. Navigate to semantic/ and run gulp build. This should create a dist folder in your semantic/ directory. Delete it and also delete Semantic UI's gulpfile.js since you'll no longer need it.
Edit semantic.json. You need to edit two lines (see the sketch after this list):
Change "packaged": "dist/" to the path where you'd like to output semantic.css and semantic.js, relative to Semantic UI's folder. In our case, it was "packaged": "../../assets/semantic/".
Change "themes": "dist/themes/" in the same way, since the themes/ folder contains fonts and images Semantic UI uses, so it needs to be in the same folder as semantic.css. In our case, it was "themes": "../../assets/semantic/dist/themes/".
Edit your gulpfile.js so that it uses Semantic UI's build task. Add var semanticBuild = require('./semantic/tasks/build'); (if semantic/ is in the same folder as your gulpfile.js) and then simply register a task that depends on it, for example gulp.task('semantic', semanticBuild);.
Optionally, create a clean task. We used del for that.
gulp.task('clean:semantic', function(cb) {
  del(['assets/semantic'], cb);
});
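Putting the pieces together, here is a minimal sketch of the two edited files. The exact key layout of semantic.json may differ between Semantic UI versions, and the paths are the ones from the example above, so treat this as an illustration rather than a verbatim config.

semantic.json (fragment):

{
  "paths": {
    "output": {
      "packaged": "../../assets/semantic/",
      "themes": "../../assets/semantic/dist/themes/"
    }
  }
}

gulpfile.js:

var gulp = require('gulp');
var del = require('del');
// Reuse Semantic UI's own build task from the semantic/tasks folder
var semanticBuild = require('./semantic/tasks/build');

// Builds Semantic UI into the assets folder configured in semantic.json
gulp.task('semantic', semanticBuild);

// Optional clean task removing everything Semantic UI generated
gulp.task('clean:semantic', function(cb) {
  del(['assets/semantic'], cb);
});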