I have several projects that use the same front-end libs. I want to know if there is a way to prevent the use of certain packages in the code. For instance, let's say I don't want developers to add underscore to their package.json. The idea behind this is to enforce consistency across different projects.
What I am looking for:
I am using webpack as a build tool. While building the files, is there a way to check whether a library named underscore is present and, if so, stop the build process (fail it)?
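One way to approach this is to fail fast from webpack.config.js itself, before the build even starts. A minimal sketch (the ban list is made up for illustration):

```js
// webpack.config.js -- abort the build if a banned package is installed.
const BANNED_PACKAGES = ['underscore']; // hypothetical ban list

for (const pkg of BANNED_PACKAGES) {
  let installed = false;
  try {
    require.resolve(pkg); // succeeds only if the package can be resolved
    installed = true;
  } catch (e) {
    // MODULE_NOT_FOUND means the package is absent, which is what we want
  }
  if (installed) {
    throw new Error(`Build aborted: banned package "${pkg}" is installed.`);
  }
}

module.exports = {
  // ...your usual webpack configuration...
};
```

Since webpack.config.js is plain Node, the thrown error makes webpack exit non-zero, which fails the CI build.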
Related
We're planning three similar vue projects. We already know that we will be able to reuse a lot of code (especially vue SFCs and simple js helper functions) in all of them and we're looking for a proper way to share the code between them.
Unfortunately the scope of the projects is rather different and a monorepo is not an option due to its limitations in read / write permission and visibility management. Therefore we're planning to handle the reusable parts as separate repos (and most likely private npm packages) which seems to be a straightforward approach. However, the question is: How can we create a convenient setup in which we are able to work on the shared components from within the scope of one of the parent projects?
Project A [project-repo-a]
    project-specific stuff for A
    private package A [package-repo-a] (conveniently editable from within project A)
    private package B [package-repo-b] (conveniently editable from within project A)
Project B [project-repo-b]
    project-specific stuff for B
    private package B [package-repo-b] (conveniently editable from within project B)
    private package C [package-repo-c] (conveniently editable from within project B)
In our PHP projects there is a simple solution: we just require the reusable parts via composer with the prefer-source option, which provides the full git repository that can be worked on right from within the parent application. However, as far as we understand, there is no prefer-source equivalent in npm or yarn. So how can we achieve the desired setup? (Or are we overlooking a major downside of this setup in general?)
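(For reference, npm and yarn can install a dependency straight from git, as in the sketch below with a made-up repo URL; but the result in node_modules is an extracted snapshot rather than a working clone you could commit from, so it is not equivalent to prefer-source:)

```json
{
  "dependencies": {
    "package-b": "git+ssh://git@example.com/acme/package-repo-b.git#main"
  }
}
```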
We already looked into / considered the following (without finding a suitable approach):
yarn / npm link: We understand that we could use linking in general, but this seems to be a very inconvenient approach while constantly developing the shared components (and always having to publish them to reflect the latest changes).
yarn workspaces / lerna: These seem to be closest to what we want; however, they seem to be (or explicitly are) designed for a monorepo approach. In the end they don't seem to provide a solution for actually getting the git source of a package (in a separate repo) into the parent project (since there is no --prefer-source equivalent) - do they?
using composer additionally: Just pulling the git sources down with composer and creating yarn workspaces from the composer vendor folder. However, this is obviously hacky and sounds quite error-prone for the whole dependency management.
using a yarn post-install script to pull down the git source of the required private packages; but like the composer approach, this seems rather unpredictable in terms of module resolution, dependency management and so on.
using git submodules and yarn workspaces: Could be a solution (sketched below). To be honest, we're completely inexperienced with git submodules, and at first glance it didn't look very intuitive. If there is no other way, we'll consider this approach anyway.
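For illustration, the submodule-plus-workspaces combination could look roughly like this (repo URL and paths are hypothetical):

```sh
# In project-repo-a: pull the shared package's git source into the tree.
git submodule add git@example.com:acme/package-repo-b.git packages/package-b
git submodule update --init --recursive

# In project-repo-a/package.json, declare it as a workspace:
#   { "private": true, "workspaces": ["packages/*"] }

# yarn then symlinks packages/package-b into node_modules,
# so edits in the submodule are picked up immediately.
yarn install
```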
To be clear about this: We're not asking the matter-of-taste question of whether one or another of those approaches would be "best". We feel that none of them is the right one. The question is: Are we overlooking a technically clean and proven approach to our scenario, using npm, yarn or another package manager / dependency management solution?
Git X-Modules is a tool designed to do exactly what you were asking about. Here's a video that explains it. However, it's very new and therefore can't really be considered "proven" :-)
Yet, if you consider trying it, we would love to hear your feedback!
(As you may guess from the previous sentence, I am a part of the development team.)
You probably have already figured this out, but have you looked into https://bit.dev/ ?
I'm currently considering it for a similar task to yours, and it looks like it could do the job. Here's an article explaining how to use it: https://blog.bitsrc.io/how-to-easily-share-vue-components-between-applications-1d30a1ad4e4d
We are struggling hard with how to use features the correct way.
Let’s say we have the plug-in org.acme.module which depends on org.thirdparty.specific and org.acme.core.
And we have the plug-in org.acme.other which depends on org.acme.core.
We want to create an application from these, which includes a target file and a product file. We have the following options:
One feature per module:
    org.acme.core.feature
        org.acme.core
    org.acme.module.feature
        org.acme.module
    org.acme.other.feature
        org.acme.other
    org.thirdparty.specific.feature
        org.thirdparty.specific
This makes the target and product files gigantic, and the dependencies are very hard to manage manually.
One feature per dependency group:
    org.acme.module.feature
        org.acme.core
        org.acme.module
        org.thirdparty.specific
    org.acme.other.feature
        org.acme.core
        org.acme.other
This approach makes the dependencies very easy to manage, and the target and product files are easy to read and maintain. However, it does not work at all. The moment org.acme.core changes, you need to change ALL the features. Furthermore, the application has no say in what to package, so it can't even decide to update org.acme.core (because of a bugfix or something).
Platform Feature:
    org.acme.platform.feature
        org.acme.core
        org.acme.other
        org.thirdparty.specific (but could be its own feature)
    org.acme.module.feature
        org.acme.module
This is the approach used for Hello World applications and Eclipse add-ons - and it only works for those. Since all modules' target platforms would point to org.acme.platform.feature, every time anything changes for any platform plug-in, you'd have to update org.acme.platform.feature accordingly.
We actually tried that approach with only about 50 platform plug-ins. It's not feasible to have a developer change the feature for every bugfix. (And while Tycho supports version "0.0.0", Eclipse does not, so it's another bag of problems to use that. Also, we need reproducibility, so having PDE choose versions willy-nilly is out of the question.)
Again it all comes down to "I can't use org.acme.platform.feature and override org.acme.core's version for two weeks until the new feature gets released."
The entire problem is made even more difficult because sometimes more than one configuration of plug-ins is possible (say, for different database providers), and there are high-level modules that use other child modules to work correctly, which has to be managed somehow.
Is there something we are missing? How do other companies manage these problems?
The Eclipse guys seem to use the “one feature per module” approach. Not surprisingly, since it’s the only one that works. But they don’t use target platforms nor product files.
The key to successful grouping is knowing when to use "includes" in features and when to just use dependencies. The difference is that "includes" are really included, i.e. p2 will install included bundles and/or included features all the time. That's the reason why you need to update a bundle in every feature that includes it. If you don't update it, you will end up with multiple versions in the install.
Also, in the old days one had to specify dependencies in features. These days, p2 will mostly figure out dependencies from the bundles. Thus, I would actually stop specifying dependencies in features and use just includes. Think of features as a way to specify what gets aggregated.
Another key point to grouping is: less is more. If you have as many features as bundles, chances are pretty high that you have a granularity issue. Instead, think about what a user would install separately. There is no need to have four features for things that a user would never install alone. Features should not be understood as a way of grouping development/project structures - that's where folders in SCM or different SCM repos are OK. Think of features as deployment structures.
With that approach, I would recommend a structure similar to the following example.
my.product.base
    base feature containing the bare minimum of the product
    could be org.acme.core plus a few essentials
my.product.base.dependencies
    features with 3rd party libraries for my.product.base
my.addon.xyz
    feature bundling an add-on
    separate features for things that can be installed separately
my.addon.xyz.dependencies
    3rd party libraries for add-on dependencies
Now in the product definition I would list just my.product.base. There is no need to also list the dependencies features. p2 will fetch and install the dependencies automatically. However, if you want to bind your product to specific versions of the dependencies and don't want p2 to select any matching one, then you must include the my.product.base.dependencies feature.
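As a sketch, the product definition then only needs something like this (IDs follow the example above; the attribute values are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?pde version="3.5"?>
<product uid="my.product" version="1.0.0.qualifier" useFeatures="true">
   <features>
      <!-- p2 pulls in the dependency features automatically -->
      <feature id="my.product.base"/>
   </features>
</product>
```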
In the target definition I would include a "my.product.sdk" feature. That feature is an aggregation feature of all other features. It makes target platform management easier. I typically create an sdk feature with everything.
Another feature that is also very often seen is a "master" feature. This is an "everything" feature that may be used for creating a p2 repository during the build. The resulting p2 repository is then used for assembling products.
For a more real world example see here:
http://git.eclipse.org/c/gyrex/gyrex-server.git/tree/releng/features
Features and Continuous Delivery
There was a comment regarding frequent updates to feature.xml. A feature.xml only needs to be modified when there is a change in structure. No updates need to happen when a bundle version is modified. You should reference bundles in features with version 0.0.0. That makes Tycho fill in the proper version at build time. Thus, all you need to do is commit a change to any bundle and then kick off a rebuild. Tycho also takes care of updating the feature qualifier based on the qualifiers of the contained bundles. Thus, the new feature qualifier will be different than in a previous build.
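A minimal feature.xml along these lines (IDs reuse the example structure; label and feature version are placeholders):

```xml
<feature id="my.product.base" label="Product Base" version="1.0.0.qualifier">
   <!-- version 0.0.0 lets Tycho substitute the actual bundle version at build time -->
   <plugin id="org.acme.core" version="0.0.0" download-size="0" install-size="0" unpack="false"/>
   <!-- "includes" means p2 will always install this feature as well -->
   <includes id="my.product.base.dependencies" version="0.0.0"/>
</feature>
```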
In a project where some targets are to be built and run on the build platform and other targets are to be cross-compiled for another platform, what options do we have when using cmake?
Currently I use CMAKE_BUILD_TYPE to define the toolchain, build type and platform (for example -D CMAKE_BUILD_TYPE=arm_debug). In one place in the build, I switch tools (compilers, linker etc.), command line flags, libraries etc. according to the value of CMAKE_BUILD_TYPE. For every build type, I create a build directory.
This approach has its drawbacks: multiple build directories, and no easy way to make a target from one build type depend on a target in another build type (for example, some kind of precompiler needed on the build platform by the build for the cross platform).
As every build target currently has a single toolchain to be used, I would love to associate a target with a target platform / tool set. This implies that some libraries have to be built for more than one target platform with different tool sets.
The 'one build type and platform per CMake run' limitation is fundamental and I would strongly advise against trying to work around it.
The proper solution here seems to me to split the build into several stages. In particular, for the scenario where a target from one build type depends on a target from another build type, you should not try to have those two targets in the same CMake project. Proper modularization is key here. Effective use of CMake's include command can help to avoid code duplication in the build scripts.
The big drawback of this approach is that the build process becomes more complex, as you now have several interdependent CMake projects that need to be built in a certain order with specific configurations. Although you already seem to be way beyond the point where you can build your whole system with a single command anyway. CMake can help manage this complexity with tools like ExternalProject, which allows you to build a CMake project from within another. Depending on your particular setup, a non-CMake layer written in your favorite scripting language might also be a viable alternative for ensuring that the different subprojects get built in the correct order.
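A rough superbuild sketch with ExternalProject (the directory layout and the toolchain file path are assumptions):

```cmake
cmake_minimum_required(VERSION 3.16)
project(superbuild NONE)

include(ExternalProject)

# Host tools (e.g. the precompiler), built with the native toolchain.
ExternalProject_Add(host_tools
  SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/host_tools
  INSTALL_COMMAND ""
)

# Cross-compiled part; built after host_tools, with its own toolchain file.
ExternalProject_Add(firmware
  SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/firmware
  CMAKE_ARGS -DCMAKE_TOOLCHAIN_FILE=${CMAKE_CURRENT_SOURCE_DIR}/cmake/arm-toolchain.cmake
             -DCMAKE_BUILD_TYPE=Debug
  DEPENDS host_tools
  INSTALL_COMMAND ""
)
```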
The sad truth is though that complex build setups are hard to manage. CMake does a great job at providing a number of tools for tackling this complexity but it cannot magically make the problem easier. Most of the limitations that CMake imposes on its user are there for a reason, namely that things would be even harder if you tried to work without them.
This question is about the project command and, by extension, what the concept of a project means in cmake. I genuinely don't understand what a project is, and how it differs from a target (which I do understand, I think).
I had a look at the cmake documentation for the project command, and it says that the project command does this:
Set a name, version, and enable languages for the entire project.
It should go without saying that using the word project to define project is less than helpful.
Nowhere on the page does it seem to explain what a project actually is (it goes through some of the things the command does, but doesn't say whether that list is exclusive or not). The cmake.org examples take us through a basic build setup, and while they use the project keyword they also don't explain what it does or means, at least not as far as I can tell.
What is a project? And what does the project command do?
A project logically groups a number of targets (that is, libraries, executables and custom build steps) into a self-contained collection that can be built on its own.
In practice that means, if you have a project command in a CMakeLists.txt, you should be able to run CMake from that file and the generator should produce something that is buildable. In most codebases, you will only have a single project per build.
Note however that you may nest multiple projects. A top-level project may include a subdirectory which is in turn another self-contained project. In this case, the project command introduces additional scoping for certain values. For example, the PROJECT_BINARY_DIR variable will always point to the root binary directory of the current project. Compare this with CMAKE_BINARY_DIR, which always points to the binary directory of the top-level project. Also note that certain generators may generate additional files for projects. For example, the Visual Studio generators will create a .sln solution file for each subproject.
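For instance (a contrived two-level layout):

```cmake
# Top-level CMakeLists.txt -- defines the top-level project.
cmake_minimum_required(VERSION 3.16)
project(App)

add_subdirectory(engine)          # engine/CMakeLists.txt calls project(Engine)

add_executable(app main.cpp)
target_link_libraries(app PRIVATE engine)

# engine/CMakeLists.txt -- a nested, self-contained project:
#   project(Engine)
#   add_library(engine engine.cpp)
# Inside engine/, PROJECT_BINARY_DIR points to <build>/engine,
# while CMAKE_BINARY_DIR still points to the top-level <build>.
```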
Use sub-projects if your codebase is very complex and you need users to be able to build certain components in isolation. This gives you a very powerful mechanism for structuring the build system. Due to the increased coding and maintenance overhead required to make the several sub-projects truly self-contained, I would advise to only go down that road if you have a real use case for it. Splitting the codebase into different targets should always be the preferred mechanism for structuring the build, while sub-projects should be reserved for those rare cases where you really need to make a subset of targets self-contained.
I have a product that I'm working on, Foo. It currently has roughly the filesystem structure shown below. It's composed of several logically-distinct modules. I want to package each of those modules so that I can make dependencies a bit more explicit.
I'd also like to continue being able to do a single checkout, though, and have my single solution, single build script, etc. available to me.
Something like how rspec does it; the rspec package depends on a set of sub-packages that can be individually maintained.
Edit: How best to:
    make the modules inter-dependent
    make the work-on-many-things-at-once-from-source-control-checkout experience work, in the sense of not duplicating things like build automation, etc. I want to keep having a single solution so that ReSharper can find unused code throughout (this is a big legacy codebase), for example.
        (So changes to a set of modules would require that I increment all of their versions at once, to correctly advance the dependencies.)
/Foo.git
    /module1
        /src
            /module1
            /module1.specs (tests)
        /module1.sln
        /module1.wrapdesc
        /version
    /module2
        /src
            /module2
            /module2.specs
        /module2.sln
        /module2.wrapdesc
        /version
    /Foo.sln
    /Rakefile.rb (I'm using ruby/rake to build)
    /Gemfile
    /Gemfile.lock