Is it possible to structure a Rust project in this way?
Directory structure:
src
├── a
│   └── bin1.rs
├── b
│   └── bin2.rs
└── common
    └── mod.rs
from Cargo.toml:
[[bin]]
name = "bin1"
path = "src/a/bin1.rs"
[[bin]]
name = "bin2"
path = "src/b/bin2.rs"
I would like to be able to use the common module in bin1.rs and bin2.rs. It's possible by adding the path attribute before the module declaration:
#[path="../common/mod.rs"]
mod code;
Is there a way for bin1.rs and bin2.rs to use common without having to hardcode the path?
The recommended way to share code between binaries is to have a src/lib.rs file. Both binaries automatically have access to anything reachable through this lib.rs, which is built as a separate library crate.
Then you would simply declare pub mod common; in src/lib.rs. If your crate is called my_crate, your binaries would be able to use it with
use my_crate::common::Foo;
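As a minimal sketch (assuming the package is named my_crate and common exposes a type Foo; both names are placeholders):
// src/lib.rs
pub mod common;   // resolves to src/common/mod.rs, no #[path] needed

// src/common/mod.rs
pub struct Foo;

// src/a/bin1.rs (src/b/bin2.rs works the same way)
use my_crate::common::Foo;

fn main() {
    let _foo = Foo;
    println!("bin1 uses Foo from the shared library crate");
}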
I'm new to Kotlin, so forgive me if this is an easy question. I'm writing a Kotlin script that I hope will use a custom HashTable implementation to store data from a file. I'm having trouble getting the script to find the HashTable class.
Here is my structure:
.
├── scripts
│   ├── kotlin
│   │   ├── [other scripts]
│   │   └── wordcount.kts
│   └── tests
│       └── wc
│           └── smallfile.txt
└── src
    ├── main
    │   └── kotlin
    │       └── dataStructures
    │           └── HashTable.kt
    └── test
The script is wordcount.kts and the class I'm trying to import is in HashTable.kt. I tried import dataStructures.HashTable and import kotlin.dataStructures.HashTable to no avail. I also tried adjusting the PWD (in IntelliJ runtime configuration) to the project directory, also with no luck. How do I import HashTable correctly? Let me know if I can provide any further information!
import is used to refer to things that are already on your classpath, so before you can use it you need to let the compiler actually find that HashTable class.
You have a couple of options. I would, however, recommend renaming wordcount.kts to wordcount.main.kts (Kotlin scripting requires the executable script to be named x.main.kts for most features to work), renaming HashTable.kt to HashTable.kts, and linking it with @file:Import("<path-to-hashtable.kts>").
If you can't rename the hash table file, you will need to pull it in either by compiling it to a class file and adding it to the classpath with kotlinc -script -cp <dir-with-.class> wordcount.main.kts, or by compiling it to a jar and linking the jar with @file:DependsOn("<path-to-jar>") in the script.
For the reference to all this stuff, look here: https://github.com/Kotlin/KEEP/blob/master/proposals/scripting-support.md
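For illustration, here is a rough sketch of the @file:Import route; the relative path and the HashTable API below are assumptions for the example, not taken from the question, and the exact script file naming rules may vary with your Kotlin version:
// src/main/kotlin/dataStructures/HashTable.kts (renamed from HashTable.kt)
class HashTable<K, V> {
    private val entries = mutableMapOf<K, V>()
    fun put(key: K, value: V) { entries[key] = value }
    fun get(key: K): V? = entries[key]
}

// scripts/kotlin/wordcount.main.kts (renamed from wordcount.kts)
@file:Import("../../src/main/kotlin/dataStructures/HashTable.kts")

val counts = HashTable<String, Int>()
counts.put("hello", 1)
println(counts.get("hello"))

// run with: kotlinc -script wordcount.main.kts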
I started my project with a simple "blink" example and used it as a template to write my code.
This example used only one source file blink.c.
Eventually, I want to use a multi-source-file project and can't figure out how to configure CMakeLists.txt in order to compile the project.
My CMakeLists.txt is:
cmake_minimum_required(VERSION 3.5)
include($ENV{IDF_PATH}/tools/cmake/project.cmake)
project(blink)
I want to add for example init.c.
I tried different ways, but with no success.
Neither idf_component_register() nor register_component() worked for me.
Any idea how to correctly configure the project?
Right, the CMake project hierarchy in ESP-IDF is a bit tricky. You are looking at the wrong CMakeLists.txt file. Instead of the one in the root directory, open the one in blink/main/CMakeLists.txt. This file lists the source files for the "main" component, which is the one you want to use. It would look like this:
idf_component_register(SRCS "blink.c" "init.c"
INCLUDE_DIRS ".")
Make sure your init.c file is in the same directory as this CMakeLists.txt and blink.c.
I also recommend taking a look at the Espressif Build System documentation, it's quite useful.
You should edit the CMakeLists.txt located in the main folder inside your project folder. In addition, you need to put the directory that contains the header files into the INCLUDE_DIRS parameter.
For example, if you have the file structure below in your project (with init.h inside an include folder):
blink/
├── main/
│   ├── include/
│   │   └── init.h
│   ├── blink.c
│   ├── CMakeLists.txt
│   ├── init.c
│   └── ...
├── CMakeLists.txt
└── ...
The content in your main/CMakeLists.txt should be:
idf_component_register(SRCS "blink.c" "init.c"
INCLUDE_DIRS "." "include")
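With that in place, the sources can include the header in the usual way; app_init() here is just a hypothetical function name for illustration:
/* main/include/init.h */
#ifndef INIT_H
#define INIT_H
void app_init(void);
#endif

/* main/blink.c */
#include "init.h"

void app_main(void)
{
    app_init();      /* implemented in main/init.c */
    /* ... blink loop ... */
}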
Good day,
I am trying to unpack the files from a .tar.gz archive into my BitBake-generated image.
Basically, I just want to copy some files from the archive to usr/lib/fonts.
File structure is like so:
├── deploy-executable
│   └── usr
│       └── lib
│           └── fonts
│               ├── LiberationMono-BoldItalic.ttf
│               ├── LiberationMono-Bold.ttf
│               ├── LiberationMono-Italic.ttf
│               ├── LiberationMono-Regular.ttf
│               ├── LiberationSans-BoldItalic.ttf
....
This goes inside an archive called deploy-executable-0.1.tar.gz
Now my deploy-executable_0.1.bb file looks like this:
SUMMARY = "Recipe for populating with bin_package"
DESCRIPTION = "This recipe uses bin_package to add some demo files to an image"
LICENSE = "CLOSED"
SRC_URI = "file://${BP}.tar.gz"
inherit bin_package
(I have followed the instructions from this post: https://www.yoctoproject.org/pipermail/yocto/2015-December/027681.html)
The problem is that I keep getting the following error:
ERROR: deploy-executable-0.1-r0 do_install: bin_package has nothing to install. Be sure the SRC_URI unpacks into S.
Can anyone help me?
Let me know if you need more information. I will be happy to provide.
Solution:
Add a subdir parameter after the file path of your tarball (and leave ${S} alone) to get it to unpack to the right location.
E.g.:
SRC_URI = "file://${BP}.tar.gz;subdir=${BP}"
Explanation:
According to the BitBake docs:
subdir : Places the file (or extracts its contents) into the specified subdirectory. This option is useful for unusual tarballs or other archives that do not have their files already in a subdirectory within the archive.
So when your tarball gets extracted and unpacked, you can specify that it should go into ${BP} (relative to ${WORKDIR}) which is what do_package & co. expect.
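Putting it together, the recipe from the question would then look something like this (a sketch; only the SRC_URI line changes):
SUMMARY = "Recipe for populating with bin_package"
DESCRIPTION = "This recipe uses bin_package to add some demo files to an image"
LICENSE = "CLOSED"

# Unpack the tarball into ${WORKDIR}/${BP} instead of ${WORKDIR},
# so that bin_package's do_install finds the files in ${S}.
SRC_URI = "file://${BP}.tar.gz;subdir=${BP}"

inherit bin_package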
Note that this is also called out in the bin_package.bbclass recipe class file itself (though for a slightly different application):
# Note:
# The "subdir" parameter in the SRC_URI is useful when the input package
# is rpm, ipk, deb and so on, for example:
#
# SRC_URI = "http://example.com/foo-1.0-r1.i586.rpm;subdir=foo-1.0"
#
# Then the files would be unpacked to ${WORKDIR}/foo-1.0, otherwise
# they would be in ${WORKDIR}.
I ran into issues simply setting S = "${WORKDIR}" because I had some leftover artifacts in my working directory from before I made the recipe a bin_package. The leftover sysroot_* artifacts wreaked havoc on do_package_shlibs... I figured it was more robust to just unpack the archive where it was expected to go instead of mucking around with changing ${S}.
I am trying to port my old CMake to modern CMake (CMake 3.0.2 or above). In the old design I had multiple CMakeLists.txt files; each directory contained one.
My current project's directory structure looks like :
.
├── VizSim.cpp
├── algo
├── contacts
│   ├── BoundingVolumeHierarchies
│   │   └── AABBTree.h
│   └── SpatialPartitoning
├── geom
│   └── Geometry.h
├── math
│   ├── Tolerance.h
│   ├── Vector3.cpp
│   └── Vector3.h
├── mesh
│   ├── Edge.h
│   ├── Face.h
│   ├── Mesh.cpp
│   ├── Mesh.h
│   └── Node.h
├── util
│   ├── Defines.h
│   └── Math.h
└── viz
    └── Renderer.h
What I was planning to do was just use a single CMakeLists.txt, place all the .cpp files in SOURCE and all the headers in HEADER, and use add_executable.
set (SOURCE
${SOURCE}
${CMAKE_CURRENT_SOURCE_DIR}/src/mesh/Mesh.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/math/Vector3.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/VizSim.cpp
....
)
set (HEADER
${HEADER}
${CMAKE_CURRENT_SOURCE_DIR}/src/mesh/Mesh.h
${CMAKE_CURRENT_SOURCE_DIR}/src/math/Vector3.h
....
)
add_library(${PROJECT_NAME} SHARED ${SOURCE})
Doing this, I am worried whether using a single CMakeLists.txt is good practice. So does a single CMakeLists.txt suffice, or do I need one for each folder?
I can only think of one good reason to have multiple CMakeLists.txt files in my project, and that is modularity, considering my project will eventually grow.
This is a bit long for a comment – so I'll make it an answer:
In one of my projects (a library), I have so many sources that I started to move some of them into a sub-directory util.
For this, I made separate variables:
file(GLOB headers *.h)
file(GLOB sources *.cc)
file(GLOB utilHeaders
RELATIVE ${CMAKE_CURRENT_SOURCE_DIR}
${CMAKE_CURRENT_SOURCE_DIR}/util/*.h)
file(GLOB utilSources
RELATIVE ${CMAKE_CURRENT_SOURCE_DIR}
${CMAKE_CURRENT_SOURCE_DIR}/util/*.cc)
To make it nicer looking / more convenient in Visual Studio, I inserted source_group calls, which generate appropriate sub-folders in the VS project. I believe they are called "filters".
source_group("Header Files\\Utilities" FILES ${utilHeaders})
source_group("Source Files\\Utilities" FILES ${utilSources})
Of course, I have to add the variables utilHeaders and utilSources as well where the sources are provided:
add_library(libName
${sources} ${headers}
${utilSources} ${utilHeaders})
That's it.
Fred reminded me in his comment that I shouldn't forget to mention that file(GLOB) has a certain weakness (although I find it very valuable in our daily work). This is even mentioned in the CMake docs:
Note: We do not recommend using GLOB to collect a list of source files from your source tree. If no CMakeLists.txt file changes when a source is added or removed then the generated build system cannot know when to ask CMake to regenerate. The CONFIGURE_DEPENDS flag may not work reliably on all generators, or if a new generator is added in the future that cannot support it, projects using it will be stuck. Even if CONFIGURE_DEPENDS works reliably, there is still a cost to perform the check on every rebuild.
So, when using file(GLOB), you should never forget to re-run CMake once files have been added, moved, or removed. An alternative is to add, move, or remove the files directly in the generated build scripts (e.g. VS project files) and rely on the fact that the next re-run of CMake will cover those files as well. Last but not least, a git pull is another event after which it's worth re-running CMake.
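If you do stick with globbing and your CMake version (3.12 or newer) and generator support it, the CONFIGURE_DEPENDS flag mentioned in the quote asks the build system to re-check the glob on every build, for example:
file(GLOB sources CONFIGURE_DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/*.cc)
file(GLOB utilSources CONFIGURE_DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/util/*.cc)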
I would always recommend a CMakeLists.txt file per directory. My reasons:
locality: keep everything in the same folder that belongs together, including the relevant parts of the build system. I would hate having to navigate to the root folder to see how a library or target is defined.
separation of build artifacts and related build code: tests belong below test, libraries below lib, binaries below bin, documentation below doc, and utilities below utils. This may vary from project to project. When I have to make a change to the documentation, why should I wade through dozens of lines of unrelated CMake code? Just have a look into the right CMakeLists.txt.
avoid handling of paths: in most cases relative or absolute paths, including stuff like ${CMAKE_CURRENT_SOURCE_DIR}, can be avoided. That leads to maintainable build code and reduces errors from wrong paths, especially with out-of-source builds, which should be used anyway.
localization of errors: If a CMake error occurs it is easier to locate the problem. Often a sub-directory can be excluded as a first workaround.
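For the project in the question, a per-directory setup could look roughly like this; it is only a sketch, and the target names and PUBLIC/PRIVATE choices are assumptions:
# ./CMakeLists.txt (top level)
cmake_minimum_required(VERSION 3.0.2)
project(VizSim CXX)
add_subdirectory(math)
add_subdirectory(mesh)
add_executable(VizSim VizSim.cpp)
target_link_libraries(VizSim PRIVATE mesh math)

# math/CMakeLists.txt
add_library(math Vector3.cpp Vector3.h Tolerance.h)
target_include_directories(math PUBLIC ${CMAKE_CURRENT_SOURCE_DIR})

# mesh/CMakeLists.txt
add_library(mesh Mesh.cpp Mesh.h Edge.h Face.h Node.h)
target_include_directories(mesh PUBLIC ${CMAKE_CURRENT_SOURCE_DIR})
target_link_libraries(mesh PUBLIC math)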
Let me first present the directory structure of what I have:
/
├── dojo/
├── dojox/
├── dijit/
└── app/
    ├── a/
    │   └── moduleA.js
    ├── b/
    │   └── moduleA.js
    └── c/
I'm trying to configure Dojo to be able to load moduleA.js in the following way:
require(["app/moduleA"], function (moduleA) { /* ... */ });
and at the same time resolve it according to the pseudocode below:
if moduleA exists in c
load "app/c/moduleA.js"
else if moduleA exists in b
load "app/b/moduleA.js"
else if moduleA exists in a
load "app/a/moduleA.js"
else
standard fail, same as when no module is defined
It would be great if the order of a, b and c could be passed as an array, for instance ["app/c/moduleA", "app/b/moduleA", "app/a/moduleA"].
I was looking into the documentation hoping to find something, but had no luck. https://dojotoolkit.org/reference-guide/1.7/loader/amd.html#module-identifiers was the closest I got. There is a packages.location property, but it takes a string for the module location, so I guess it has no additional logic for handling arrays.
Any idea how to solve this problem?