I am trying to understand the folder structure of a corefx project, in this case System.IO. Here is how the System.IO folder appears on OS X:
System.IO BLACKSTAR$ pwd
/Users/BLACKSTAR/dotnet/corefx/src/System.IO
sameer:System.IO BLACKSTAR$ tree
.
├── System.IO.sln
├── ref
│   ├── System.IO.Manual.cs
│   ├── System.IO.cs
│   ├── System.IO.csproj
│   ├── bin
│   │   └── Debug
│   │       └── dotnet
│   │           ├── ref.dll
│   │           └── ref.xml
│   ├── project.json
│   └── project.lock.json
├── src
│   ├── Resources
│   │   └── Strings.resx
│   ├── System
│   │   └── IO
│   │       └── InvalidDataException.cs
│   ├── System.IO.csproj
│   ├── project.json
│   └── project.lock.json
Here is what I am trying to figure out:
What is in the ref folder?
What is in the src folder?
What is the connection between ref and src?
ref targets dotnet, but src targets the dnxcore50 framework. What does this imply?
I was able to build the project in the ref folder, but I couldn't build the project in src using dnu build, even though dnu restore ran successfully. What am I doing wrong?
sameer:System.IO BLACKSTAR$ dnvm list
Active Version      Runtime Architecture OperatingSystem Alias
------ -------      ------- ------------ --------------- -----
       1.0.0-beta7  coreclr x64          darwin
  *    1.0.0-beta7  mono                 linux/osx       default
sameer:System.IO BLACKSTAR$
What you see here is a NuGet package for a namespace which is in reality part of the CLR. Some types are needed very early, like file I/O and elementary data types, so they are part of the CLR distribution. You can find these in the CoreCLR GitHub project.
So ...
ref contains empty implementations for design time. They are there to define the types.
src is the dnxcore50-based implementation... essentially empty.
ref vs. src... ref is used for lookup of the types; binding to the implementation (either in CoreCLR or mscorlib) is done by some PCL type forwards.
src is the pseudo-implementation for CoreCLR, maybe just the missing types. ref targets dotnet since all modern SDKs have type forwards for System.IO.
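For illustration, a type forward in a reference/facade assembly looks roughly like this (a sketch; the forwarded type is just an example, not taken from this repo):
using System.Runtime.CompilerServices;

// The facade ships no implementation of the type; it just redirects callers
// to the assembly that actually implements it (mscorlib / System.Private.CoreLib).
[assembly: TypeForwardedTo(typeof(System.IO.Stream))]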
I have no idea how they are built.
Sorry for the missing details. It is not very well documented by MS.
Related
My folder structure is like this:
├── SubLibA
│   ├── CMakeLists.txt
│   ├── include
│   │   └── SubLibA.h
│   └── SubLibA.cpp
├── SubLibB
│   ├── CMakeLists.txt
│   ├── include
│   │   └── structs.h
│   └── SubLibB.cpp
└── SharedLib
    ├── CMakeLists.txt
    ├── include
    │   └── SharedLib.h
    ├── SharedLib.cpp
    └── SharedLib.h
My global CMakeLists.txt looks like this:
add_subdirectory(SubLibA)
add_subdirectory(SubLibB)
add_subdirectory(SharedLib)
They all compile as static by default.
SharedLib depends on SubLibB, which depends on SubLibA.
The dependent libraries SharedLib and SubLibB have:
#SubLibB
target_link_libraries(${PROJECT_NAME}
SubLibA::SubLibA
)
#SharedLib
target_link_libraries(${PROJECT_NAME}
SubLibB::SubLibB
)
Running cmake .. -DBUILD_SHARED_LIBS=ON compiles all three libs as shared libraries...
Since they are tightly interdependent, I'd like to keep them in the same repository with a single top-level CMakeLists.txt that compiles them all at once. I want to use the power of Modern CMake with as few hard-coded paths and custom files as possible, to keep maintenance straightforward.
Try setting the variable within cmake:
set(BUILD_SHARED_LIBS OFF)
add_subdirectory(SubLibA)
add_subdirectory(SubLibB)
set(BUILD_SHARED_LIBS ON)
add_subdirectory(SharedLib)
set(BUILD_SHARED_LIBS OFF)
If you want SubLibA and SubLibB to always be static libraries, you can use the STATIC keyword on the add_library command, e.g. add_library(SubLibA STATIC ${SOURCES}). By omitting the keyword for SharedLib you are still free to build it as a static or shared lib by setting -DBUILD_SHARED_LIBS=ON on the CMake command line.
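For illustration, a minimal sketch of what SubLibA/CMakeLists.txt could look like with that approach (the ALIAS line is an assumption, added so that the SubLibA::SubLibA name used above resolves):
# SubLibA/CMakeLists.txt -- sketch; file and target names taken from the question
add_library(SubLibA STATIC SubLibA.cpp)        # always built as a static library
add_library(SubLibA::SubLibA ALIAS SubLibA)    # namespaced name used by consumers
target_include_directories(SubLibA PUBLIC
    $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>)

# SharedLib/CMakeLists.txt omits the keyword, so -DBUILD_SHARED_LIBS still decides:
# add_library(SharedLib SharedLib.cpp SharedLib.h)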
Let's say we have a repository structure like this (note the .qmake.conf files):
repo/
├── libraries
│   └── libFoo
│       └── libFoo.pri
├── projects
│   ├── ProjectX
│   │   ├── apps
│   │   │   └── AppX
│   │   │       └── AppX.pro
│   │   ├── libs
│   │   │   └── libX
│   │   │       └── libX.pri
│   │   └── .qmake.conf
│   └── ProjectY
│       ├── apps
│       │   └── AppY
│       │       └── AppY.pro
│       └── .qmake.conf
├── qmake
│   └── common.pri
└── .qmake.conf
qmake supports .qmake.conf files, where you can declare useful variables; such a file is automatically included in your .pro file if it is found in a parent directory.
This is how it helps to avoid dealing with ../../.. relative paths, for example:
The root repo/.qmake.conf file has REPO_ROOT=$$PWD declared.
The project also has its own repo/projects/ProjectX/.qmake.conf, which includes include(../../.qmake.conf) and declares PROJECT_ROOT=$$PWD.
The project's application .pro file (repo/projects/ProjectX/apps/AppX/AppX.pro) can then avoid writing ../../ and include all dependencies from sibling and parent directories like this:
include($${REPO_ROOT}/qmake/common.pri)
include($${REPO_ROOT}/libraries/libFoo/libFoo.pri)
include($${PROJECT_ROOT}/libs/libX/libX.pri)
This is convenient and tidy. You DO have to write ../../ once (and update it if the repository tree changes), but only once per new .qmake.conf, and afterwards you can use the variables to refer to various useful relative paths in the repository from any number of .pro files.
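For concreteness, the per-project file described above would contain something like this (reconstructed from the description, so treat it as a sketch):
# repo/projects/ProjectX/.qmake.conf
include(../../.qmake.conf)   # pulls in REPO_ROOT from the repository root
PROJECT_ROOT = $$PWD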
Is there a similar technique in CMake? How could this kind of variable organization be achieved with CMake, in the most convenient way?
In CMake you can achieve a similar result somewhat differently:
(regarding "useful variables" management)
CMake knows about 3 "types of variables":
Variables with directory scope: if you define them in some folder, they will automatically be visible in all subfolders. In brief, if you define a variable in the root CMakeLists.txt, it will be visible in all project subfolders. Example of defining a directory-scope variable:
# outside any function
set(MY_USEFUL_VAR SOME_VALUE)
Variables with function scope: these are variables defined within a function. They are visible in the current function scope and all scopes initiated from it. Example of a function-scope variable:
function(my_function)
# note that the definition is within the function
set(MY_LOCAL_VAR SOME_VALUE)
# rest of the function body...
endfunction()
Cache variables may be considered "global variables"; they are also stored in the CMakeCache.txt file in the root build folder. Cache variables are defined as follows (adding a new string variable):
set (MY_CACHE_VAR "this is some string value" CACHE STRING "explanation of MY_VAR")
Also, as already suggested in the comments, you can place variable definitions into various "include files" and pull them in using the CMake include() statement.
Finally, see the CMake documentation for the set() and include() commands.
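To mirror the .qmake.conf setup from the question, here is a minimal sketch (directory layout and variable names are assumptions) that combines a directory-scope variable with an include file:
# repo/CMakeLists.txt -- sketch
cmake_minimum_required(VERSION 3.10)
project(repo)

# Directory-scope variable: visible in every add_subdirectory() below,
# similar in spirit to REPO_ROOT in the root .qmake.conf.
set(REPO_ROOT ${CMAKE_CURRENT_SOURCE_DIR})

# Shared definitions pulled in from an "include file".
include(${REPO_ROOT}/cmake/common.cmake)

add_subdirectory(projects/ProjectX/apps/AppX)

# AppX/CMakeLists.txt can then refer to ${REPO_ROOT} directly, with no ../../.. paths.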
I am testing the loading of modules in webpack. How would you indicate the path of the dependency in an AMD module?
My project has something like this:
├── modules
│   ├── mod1.js
│   ├── mod2.js
│   └── others
│       └── mod3.js
├── public
│   └── bundle.js
├── src
│   └── app
│       └── app.js
└── webpack.config.js
In app.js I import only mod3.js; webpack must therefore compile all three JS files (mod1, mod2, mod3), since mod3.js depends on them.
I have an "others" folder. Do I have to include the following line in webpack.config.js every time I create a folder?
path.resolve(__dirname, 'modules/others'),
Is it not possible to indicate the path of the dependency in the module itself, so that webpack does not have to rely on the paths hard-coded in the config?
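For reference, a minimal sketch of the kind of webpack.config.js this refers to (the entry/output values are assumed, as is the guess that the line above lives in resolve.modules):
// webpack.config.js -- sketch of the setup described above
const path = require('path');

module.exports = {
  entry: './src/app/app.js',
  output: {
    path: path.resolve(__dirname, 'public'),
    filename: 'bundle.js',
  },
  resolve: {
    modules: [
      'node_modules',
      path.resolve(__dirname, 'modules'),
      path.resolve(__dirname, 'modules/others'), // added for every new folder?
    ],
  },
};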
Thank you
Having a code structure like the one below, which contains documentation at the root level, how can I tell IDEA to import the Code/spark subfolder as a project?
.
├── Code
│   ├── foo
│   │   ├── bar
│   │   └── baz
│   └── spark
│       ├── build.sbt
│       ├── common
│       ├── job1
│       ├── project
│       ├── run_application
│       ├── sbt
│       ├── scalastyle-config.xml
│       └── version.sbt
├── Docs
You need to add a Content Root: go to the Project Structure settings (Ctrl + Alt + Shift + S), select your module, then on the right panel click Add Content Root and select the Docs folder. Then you can select it and mark it as part of the module; for documentation I believe it should be Resources.
Even better: use a proper build tool like Gradle and then apply composite builds: https://docs.gradle.org/current/userguide/composite_builds.html
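For illustration, a composite build declaration could look roughly like this (a sketch only; it assumes Code/spark were converted to a Gradle build, and the names are made up):
// settings.gradle.kts at the repository root -- sketch
rootProject.name = "repo"

// Pull Code/spark into the composite as an included build.
includeBuild("Code/spark")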
I'm updating an existing Rails 2 app to Rails 3, and having some trouble understanding the asset pipeline. I have read through the guide, and as I understand it, files in any of the following directories will resolve to /assets:
app/assets
lib/assets
vendor/assets
and you can access them using helpers, e.g.
image_tag('logo.png')
But what I don't understand is how collisions are handled. For example, what if there are the following files:
app/assets/images/logo.png
lib/assets/images/logo.png
If I go to myapp.com/assets/images/logo.png, which file will be returned? I could check for collisions manually within my app, but this becomes a pain point when using gems that rely on the asset pipeline.
Based on what I've found, you can't have duplicate files, as Rails will just return the first one found.
This seems like a bit of a design flaw, as a gem may not namespace its own assets.
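For what it's worth, you can inspect the order in which the asset paths are searched (the first path containing the file wins on a collision); a minimal sketch, run e.g. from rails console:
# Prints the asset load paths in search order; a duplicate logo.png
# is served from whichever path appears first.
Rails.application.config.assets.paths.each do |path|
  puts path
end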
Why not take advantage of the index manifest and organise your app/assets into decoupled modules? You can then reference a particular image, image_tag('admin/logo.png'), and get a more meaningfully organised UI codebase for free. You could even promote a complex component, such as a Single Page Application, into its own module and reuse it from different parts of the app.
Let's say your app is composed of three modules: the public side, an admin UI and, e.g., a CRM that lets your agents track the selling process at your company:
app/assets/
├── coffeescripts
│   ├── admin
│   │   ├── components
│   │   ├── index.coffee
│   │   └── initializers
│   ├── application
│   │   ├── components
│   │   ├── index.coffee
│   │   └── initializers
│   └── crm
│       ├── components
│       ├── index.coffee
│       └── initializers
├── images
│   ├── admin
│   ├── application
│   └── crm
└── stylesheets
    ├── admin
    │   ├── components
    │   └── index.sass
    ├── application
    │   ├── components
    │   └── index.sass
    └── crm
        ├── components
        └── index.sass
21 directories, 6 files
Don't forget to update your application.rb so they will be precompiled properly:
config.assets.precompile = %w(admin.js application.js crm.js
admin.css application.css crm.css)