Replicating libman unpkg with a local filesystem - asp.net-core

I noticed I can use libman to download many libraries into my ASP.NET Core app in a nice way. E.g. I can easily get the latest @microsoft/signalr.
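For reference, a minimal libman.json entry doing that via the unpkg provider might look like this (the version and destination here are just examples):

{
  "version": "1.0",
  "defaultProvider": "unpkg",
  "libraries": [
    {
      "library": "@microsoft/signalr@5.0.2",
      "destination": "wwwroot/lib/signalr"
    }
  ]
}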
However, in my application I can't rely on external package sources and would like to store the packages I need within my network.
I noticed that libman supports "filesystem" mode, so I copied all the files downloaded from unpkg onto my local network drive, let's call it "L:"
L:
`-- local_unpkg
    `-- @microsoft
        `-- signalr
            `-- 5.0.2
                |-- package.json
                |-- README.md
                |-- src
                |   `-- ... // a lot of files
                `-- dist
                    |-- browser
                    |-- cjs
                    |-- esm
                    `-- ... // other subfolders
When I try using the "filesystem" provider, I get only the files directly in the folder I specify, without any nested folders.
Is there a way to import entire packages that way, without manually specifying all the subfolders in the libman.json file?
If not, what's the recommended approach for using the tool in an environment where I can't rely on external package sources?

The filesystem provider specifically does not support recursive directory contents. With the other providers, the contents of the package are available all at once via the catalog metadata. But with file paths, and especially network file paths, iterating the file system can lead to extremely poor performance in large (or deep) directory structures. In many cases, you'd be typing out the path and the wizard would try to evaluate the contents as you type (e.g. once you typed L:\ it would recognize that directory and enumerate all its contents recursively, over the network).
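If you do need files from nested folders, you can spell them out per library; a sketch of a filesystem-provider entry in libman.json (the file list and trailing-slash path are assumptions based on the layout above):

{
  "provider": "filesystem",
  "library": "L:\\local_unpkg\\@microsoft\\signalr\\5.0.2\\dist\\browser\\",
  "destination": "wwwroot/lib/signalr",
  "files": [
    "signalr.js",
    "signalr.min.js"
  ]
}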

Related

Issue while using Docker API for Go - cannot import "nat"

I am trying to use the Docker API for Go that is available from github.com/docker/docker/client. So far I am able to start containers on the port that is predefined during the image build. I am trying to map the port at runtime using the API; something equivalent to:
docker run -p 8083:8082 -d myImage:1.0.0
I tried to do something like the following for mapping the ports:
host_config := &container.HostConfig{
    PortBindings: nat.PortMap{
        "8082/tcp": []nat.PortBinding{
            {
                HostIP:   "0.0.0.0",
                HostPort: "8983",
            },
        },
    },
}
The problem here is that the package "nat" lives inside the vendor folder of the API, and I couldn't import anything directly from a Go vendor folder. Someone on Stack Overflow suggested copying the go-connections folder into the github.com folder and removing the nested vendor directory. I did as suggested, which created the import path:
"github.com/docker/go-connections/nat"
Now I get the following errors at compile time:
src\main\createcontainer1.go:53: cannot use "github.com/docker/go-connections/nat".PortSet literal (type "github.com/docker/go-connections/nat".PortSet) as type "github.com/docker/docker/vendor/github.com/docker/go-connections/nat".PortSet in field value
src\main\createcontainer1.go:65: cannot use "github.com/docker/go-connections/nat".PortMap literal (type "github.com/docker/go-connections/nat".PortMap) as type "github.com/docker/docker/vendor/github.com/docker/go-connections/nat".PortMap in field value
Has anyone faced this issue and overcome it? I am using Go 1.8.
So you need to do more than just copy it; you need to move it. The same package located in two different locations is two different packages to the go tool (because it can't guarantee they are identical, so it uses fully-qualified import paths).
If a package you're using has a vendor directory, and you need to use the packages in it, you have two options:
Move everything out of the vendor directory in that package into your $GOPATH/src
Vendor the package itself, and then move everything from the package's vendor directory into your project's vendor directory (<project root>/vendor). This is known as "flattening" your vendored dependencies, and most Go vendoring utilities (ex. Govendor or Godep) can do this, either automatically or with a flag. You can also do it manually, though.
The latter is generally the recommended strategy. The important key, though, is that the package itself cannot keep a copy of that library in its own vendor directory, as the go tool automatically uses the deepest vendored version of a package that it can access.
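To make that concrete, here is a minimal sketch of what the code looks like once the dependency is flattened, assuming docker's own vendored copy of go-connections has been removed so both sides resolve the same import path:

package main

import (
    "github.com/docker/docker/api/types/container"
    "github.com/docker/go-connections/nat"
)

// hostConfig builds the runtime port mapping, equivalent to
// `docker run -p 8083:8082`.
func hostConfig() *container.HostConfig {
    return &container.HostConfig{
        PortBindings: nat.PortMap{
            "8082/tcp": []nat.PortBinding{
                {HostIP: "0.0.0.0", HostPort: "8083"},
            },
        },
    }
}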

Manage assets in Laravel 5

Could someone explain the right approach to managing assets in Laravel 5?
For example, let's imagine I want to install some plugins using bower. The recommended way, as I understand it, is to keep all files in /vendor/bower_components. So within the plugins I get some CSS, some images, fonts and JavaScript files.
Also I have an "app.less", where I import everything I need, like @import '../../../vendor/bower_components/someplugin/somestyle.css'. The problem, though, is that I don't have the images/js/fonts in my public directory. Okay, I saw that you can use gulp's copy function. However, as the number of plugins grows, how am I supposed to keep track of where each plugin keeps its pictures or other files?
Actually I wanted to try Semantic UI. I've downloaded it with bower. I know nothing about Semantic UI, but there is a dist folder with semantic-ui.css. Also there are some font files within themes/basic/assets/fonts. If I just copy them to public, they'll be at public/themes/basic/assets/fonts. Then I import semantic-ui.css into my app.less and it'll find the necessary fonts. But once I have some other plugins, it'll become unbearable to manage it all.
What is the typical workflow for this problem? The simplest way is just to put everything into public and include it manually using <link> and <script> tags, but that'll mean a lot of HTTP requests.
And why is it bad to keep all of bower_components inside public? By analogy with Composer: we don't have an autoloader for bower, so there is a mess of assets.
You are correct about the recommended place to put bower_components. It's not recommended to put bower_components in the public directory because it contains ALL the files in that specific package, not just the files you need to include in your HTML.
Since you're talking about Laravel 5, it is recommended to utilize laravel-elixir to manage assets: http://laravel.com/docs/5.0/elixir. It utilizes gulp and can compile Less, Sass or various other files. I don't have any experience with Semantic UI, but it looks to be similar to Bootstrap. Without a Sass or Less version available on npmjs.org, you would need to copy the necessary files to your public directory. Elixir provides a simple way to copy files or whole directories from bower_components to your public directory.
The easiest way to include all the files needed without a ton of <link> or <script> tags is to use Sass or Less.
Personally, what I do is this, using node:
var elixir = require('laravel-elixir');

// This is the node directory (base directory) where all vendor files
// are downloaded; in your case it might be different.
var nodeDir = './node_modules/';

/*
 |--------------------------------------------------------------------------
 | Elixir Asset Management
 |--------------------------------------------------------------------------
 |
 | Elixir provides a clean, fluent API for defining some basic Gulp tasks
 | for your Laravel application. By default, we are compiling the Sass
 | file for our application, as well as publishing vendor resources.
 |
 */

elixir(function(mix) {
    // mix the styles and copy fonts to my public/css folder
    mix.styles([
        'bootstrap/dist/css/bootstrap.css',
        'font-awesome/css/font-awesome.css'
    ], './public/css/app.css', nodeDir)
    .copy(nodeDir + 'font-awesome/fonts', 'public/fonts')
    .copy(nodeDir + 'bootstrap/fonts', 'public/fonts');

    // mix javascript from node directory and output to public/js folder
    mix.scripts([
        'jquery/dist/jquery.js',
        'bootstrap/dist/js/bootstrap.js'
    ], './public/js/app.js', nodeDir);
});
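With that gulpfile in place, the stock Elixir commands apply (nothing here is specific to my setup):

gulp                # run the tasks once
gulp watch          # re-run the tasks whenever a source file changes
gulp --production   # additionally minify the combined CSS/JS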

Organization of files in Code::Blocks

I am currently working on a medium/large project in Code::Blocks and I am wondering how to organize my files.
First, it seems that creating "virtual folders" in Code::Blocks is quite natural, but then on disk all files are in the root folder of the project, which seems messy to me: if I want to do something outside of Code::Blocks, files are hard to find. Should I use this method anyway?
Then, if I create "real" folders every time I need them, I need to add them to the path in order for them to be built. Plus, Code::Blocks seems not to like that. Is there an easy way to tell Code::Blocks "build the project as if the files in the sub-folders of my project directory were directly inside the root project directory"?
I did not find anything on the Internet about how projects are usually organized with Code::Blocks; any links are welcome.
Large project organisation
If you are creating a new project, coding a new software application or want to refactor existing code, it's good to structure your project properly. While there are probably hundreds of ways to structure a project, and many things to consider, here I would like to give you one possible approach which has really worked for me over and over. This example/proposal is the summary of years of research I have done on this topic, so it's not just 'an idea'.
There are three 'main' issues you definitely need to address when organising a project:
Medium to large projects, not to say all projects, should be version controlled (Git, for example).
Medium to large projects, not to say all projects, should be maintained by a build-system generator (CMake, for example).
It would be impossible, for a medium to large project, to keep all files in the same physical directory. It is even strongly discouraged (by several guidelines, including the Linux kernel's). You should organize these files in a logical physical structure.
An example physical project file structure would be:
~/example/environment/project$ tree .
.
|-- code
|   |-- core
|   `-- extern
|-- docs
`-- tests
    |-- core_tests
    |-- extern-tests
    `-- ...
This, unfortunately, means that in Code::Blocks you will have to add all of your project's physical folders to the search paths.
You can organize your files inside Code::Blocks in any way you want, virtually too, but if your physical structure is logical, your project should be intuitive to browse!
Note that Code::Blocks does not allow you to include virtual paths in the search paths.
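Since CMake is recommended above, note that it can also generate the Code::Blocks project for you. A minimal top-level CMakeLists.txt for the layout above might look like this (the target layout is an assumption; each subdirectory carries its own CMakeLists.txt):

cmake_minimum_required(VERSION 3.10)
project(example CXX)

add_subdirectory(code/core)
add_subdirectory(code/extern)
add_subdirectory(tests)

# generate a Code::Blocks project that mirrors this structure:
#   cmake -G "CodeBlocks - Unix Makefiles" ..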
Hope this helps.
KR, Hewi
In one of my projects in Code::Blocks I use different folders in my source folder: client, common and server.
I then have different compile targets, so that the client compile target will use the source files found in client and common, and the server compile target will use the source files found in server and common.
Not sure if that's what you're after, but here is a picture of what my project looks like:

TeamCity: merge two configurations and deploy

I have two TeamCity configurations: one being my common helpers and reusable components, the other a website which uses the common project.
I use a third configuration to publish to a test environment.
When the third configuration is run I would like it to get the artifacts from the common project, merge them with the website output, and deploy. Am I asking for too much?
This ought to be pretty straightforward.
On ThirdConfig add two artifact dependencies: one whose source is CommonProject, and another whose source is WebProject. When configuring an artifact dependency it will allow you to specify which artifact files are actually pulled from CommonProject and WebProject into ThirdConfig via the 'Artifact paths'. The artifact files can then be placed into some new folder hierarchy specific to ThirdConfig by using the 'Destination path'. These two options ought to be enough to create the directory structure that is the merging of CommonProject and WebProject. That takes care of the merge part.
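For illustration, the 'Artifact paths' entries might look like this; the artifact file names are hypothetical, but the `source => destination` form and the `!` archive-extraction syntax are standard TeamCity:

In the CommonProject dependency:  CommonHelpers.zip!/** => common/
In the WebProject dependency:     WebSite.zip!/**       => site/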
The deploy is a bit more tricky. To my knowledge TeamCity does not support any sort of 'copy or upload to external location' function out of the box. For this bit you'll need to create an MSBuild script (or batch file, or anything that can be run from the command line). Said script can expect the file/directory structure you've created via artifact dependencies, where the root of the structure is the initial working directory of the script, and need only push these files out to your specific deploy location. That 'push' of course is going to be specific to your environment: FTP, UNC share, etc.
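As a sketch of that final step, a small batch script run by ThirdConfig could be as simple as this (the UNC path is a placeholder):

rem deploy.cmd -- mirror the merged artifact structure to the test server
robocopy "%CD%" "\\testserver\wwwroot\site" /MIR /NP
rem robocopy exit codes below 8 indicate success; remap for TeamCity,
rem which treats any nonzero exit code as a build failure
if %ERRORLEVEL% LSS 8 exit /b 0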

Project organization in Perforce

I created several web applications that use the same static files (css, js, images).
When I used svn for version control, I used an external repository (svn:externals) to add these files to the current project.
For example:
- Project_1
    - Webapp
        - Static (external to static's repo)
- Project_2
    - Webapp
        - Static (external to static's repo)
I could easily use these files in the web pages by adding a link like /static/...
But now our company has moved to Perforce.
How can I support the current structure?
We also use Maven. I thought of packaging these files as a JAR and using it as a dependency, but then my editor (IntelliJ IDEA) does not recognize that this dependency contains JS scripts and styles.
And I would need to repackage and redeploy the JAR whenever I make minor changes.
How do I use Maven correctly here?
Perforce has support for defining multiple mappings from the depot to your hard drive as part of the client spec. You could, for example, set the following:
Client Name: Sample_Maven
Client Root: c:\inetpub\wwwroot
View:
    //depot/Project_1/Webapp/...  //Sample_Maven/Project_1/...
    //depot/Project_2/Webapp/...  //Sample_Maven/Project_2/...
    //depot/Shared/static/...     //Sample_Maven/static/...
    ... any other folder mappings you need to bring in and sync ...
Perforce won't handle mapping the shared static folder into multiple places by itself; you will have to use junctions/symlinks in your file system to get the behavior you want. A word of caution though: make sure only one of the shared static folders is actually managed through Perforce. It can get slightly grumpy if resources get changed out from under it without it knowing about the changes.
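For example, on Windows a directory junction for one of the projects could be created like this (paths follow the client spec above; adjust for your layout):

mklink /J c:\inetpub\wwwroot\Project_1\static c:\inetpub\wwwroot\static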
Really though, you are probably better off (if you can) having a single workspace/client spec per project: one for Project_1 and one for Project_2, each with its own mapping to the shared static folder. If you can structure things appropriately and just use Maven to build each project, things will go more smoothly.
For a Maven-based solution, you could use WAR overlays; sharing common resources across multiple web applications is exactly what overlays are for.
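A sketch of what that looks like in the consuming webapp's pom.xml (the coordinates are hypothetical); declaring a dependency of type war is what triggers the overlay in the maven-war-plugin:

<dependency>
    <groupId>com.example</groupId>
    <artifactId>shared-static</artifactId>
    <version>1.0.0</version>
    <type>war</type>
</dependency>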
It seems you have a couple of choices, both called overlays:
a) Maven overlays, as @Pascal suggests. Then you use a structure like @Goyuix suggests to check out the static content from Perforce.
b) Perforce overlay mappings, which would allow you to have two different workspaces/client specs, one for each project, and in each one map the static content into the expected place in the filesystem. This is the closest match to the Subversion structure you were using before.
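A sketch of option (b) for one project's client view; the '+' prefix marks a Perforce overlay mapping (the client name is hypothetical, and the depot paths follow the earlier example):

//depot/Project_1/Webapp/...  //proj1_client/...
+//depot/Shared/static/...    //proj1_client/static/...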