How to add an existing recipe to a custom BSP layer in Yocto

I'm new to Yocto. I have a custom BSP layer. I need to add an existing recipe to it. My layer needs to have libevent and libsoc in it.
How do I add them to it?

You have to add IMAGE_INSTALL_append = " libevent libsoc" to your local.conf, as explained in the manual.
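As a concrete sketch, the relevant addition to conf/local.conf would look like this (note the leading space inside the quotes, which the _append mechanism requires):

```conf
# conf/local.conf -- install the packages built by the libevent and libsoc
# recipes into the generated image. The leading space inside the quotes is
# required because _append does not insert one for you.
IMAGE_INSTALL_append = " libevent libsoc"
```

Note that this installs the packages into the image; it does not move the recipes into your layer. To actually carry a recipe in your own layer you would copy it (or add a .bbappend) under your layer's recipes-* directory.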


Yocto: Properly adding support for Qt5 modules with meta-qt5

Context
I'd like to run a Qt application on an i.MX6 based system with a Yocto-built image. I already have the meta-qt5 layer in my project. I started with a simple Qt5 application that only needed the following modules:
QT += core gui widgets
All I have to do is make sure my bitbake recipe has DEPENDS += "qtbase" and is based on the qmake class with inherit qmake5. And it builds and runs on the target! No problem.
Problem
Now I'd like to add another Qt5 application, this time with the following modules and one plugin:
QT += core gui widgets quick qml svg xml network charts
QTPLUGIN += qsvg
Unfortunately, I'm not able to simply add these to my DEPENDS variable and get it to work, and googling around for how to add support reveals what seems to be a sprawling assortment of solutions. I'll enumerate what I've found here:
I need to add inherit populate_sdk_qt5 to instruct bitbake to build the recipe against the SDK that contains the libraries for the modules (see here)
I need to add IMAGE_FEATURES += dev-pkgs to the recipe (see here)
I need to modify local.conf for the system, and add lines like PACKAGECONFIG_append_pn-qttools = "..." and also PACKAGECONFIG_append_pn-qtbase = "..."
I need to modify layer.conf in my layer and add things like IMAGE_INSTALL_append = "qtbase qtquick ..." (slide 53 here)
I need to manually patch the Qt5 toolchain for charts? (see here)
I need to compile my image using bitbake <target> -c populate_sdk? (see here again)
At this point, I'm really unsure what exactly is going on. It seems we're modifying the recipe, the layer configuration file, the distribution configuration file, and even meta-Qt layer files. I know that I fundamentally need to do a few things:
Compile the application against the Qt5 SDK
Compile the needed plugins + modules for the target architecture
Make sure the appropriate binaries (images) are copied to the target.
But it has become a bit unclear what does what. I know that IMAGE_INSTALL_append adds packages to the target image, but I am lost as to the proper way to add the modules. I don't want to go about randomly adding lines, so I'm hoping someone can clear up what exactly I need to be looking at in order to add support for a Qt5 module for an application.
You've stated several different problems; your preferred approach seems to be building a recipe directly, not using the toolchain. So you need the image itself to contain the libraries you need.
First of all, Qt SVG is not part of qtbase; it is a separate module, so you need it installed.
Add Qt SVG support
You need Qt SVG on the target in order to run your app. Either in your image recipe or in local.conf you need:
IMAGE_INSTALL_append = " qtsvg"
In fact, your app's recipe needs Qt SVG to build, so you need to DEPEND on it in your app's recipe like this:
DEPENDS = "qtsvg"
Here qtsvg is the name of the other recipe (namely qtsvg_git.bb), not to be confused with the identically named qtsvg plugin. It will get pulled in automatically at build time on your development machine; otherwise your app won't even build.
Remember that Yocto creates a simulated image tree (a sysroot) in the TMP folder in order to build (yes, it does this for each recipe), so you must declare what your recipe needs, or the build won't find it and will fail.
You can also check the recipe for a Qt5 example, as it also has DEPENDS and RDEPENDS, and you can get more info on those here.
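Putting this together, here is a minimal sketch of what the app's recipe might look like. All names, URLs, and checksums below are illustrative placeholders, not taken from a real project:

```conf
# myqt5app_1.0.bb -- illustrative sketch only; adjust everything to your app.
SUMMARY = "Example Qt5 application using QML, SVG and Charts"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://LICENSE;md5=..."

SRC_URI = "git://example.com/myqt5app.git"
S = "${WORKDIR}/git"

# Build-time dependencies: one recipe per Qt module the app links against.
DEPENDS = "qtbase qtdeclarative qtsvg qtcharts"

inherit qmake5
```

The matching runtime side (getting the module libraries onto the target) is handled by IMAGE_INSTALL_append, as described above.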

Running load() within Skylark macro

If your project depends on TensorFlow it is recommended that you add...
load("//tensorflow:workspace.bzl", "tf_workspace")
tf_workspace()
...to your WORKSPACE file, which will load all of TF's dependencies.
However, if you look at TensorFlow's workspace.bzl file...
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/workspace.bzl
you can see that it depends on rules from @io_bazel_rules_closure. This means you also have to define this @io_bazel_rules_closure repository in your WORKSPACE file and keep it in sync with TensorFlow, even if you don't need it anywhere else in your project.
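Concretely, a WORKSPACE file following this advice ends up looking something like the sketch below. The http_archive attributes are deliberate placeholders you must keep in sync with the version TensorFlow expects (and in newer Bazel versions http_archive itself must first be loaded from @bazel_tools//tools/build_defs/repo:http.bzl):

```python
# WORKSPACE (sketch) -- urls and sha256 are placeholders, not real values.
http_archive(
    name = "io_bazel_rules_closure",
    urls = ["..."],
    sha256 = "...",
)

# Only once @io_bazel_rules_closure exists can tf_workspace() load from it.
load("//tensorflow:workspace.bzl", "tf_workspace")
tf_workspace()
```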
Is there a way to add the load() command somehow/somewhere into the tf_workspace() macro?
Thanks!
No, there is no way to add this rule in tf_workspace(), since the Skylark macro tf_workspace() defined in https://github.com/tensorflow/tensorflow/blob/master/tensorflow/workspace.bzl itself needs to load @io_bazel_rules_closure.
There are basically two ways to make this work:
either the TensorFlow project redefines its rules so that it only uses internal or native rules,
or Bazel becomes able to load the workspace of a dependency (and, I assume, all the transitive dependencies too). This is a hard problem and is tracked in #1943.

Why is add_custom_target always considered out of date?

According to the documentation, add_custom_target() creates a target that is "always considered out of date". However, the documentation also says that add_dependencies() can add dependencies between top-level targets, including one added by add_custom_target(). If add_custom_target() is always executed, is there any practical purpose to using it with add_dependencies()?
As far as CMake is concerned, add_custom_target does not produce anything that it could track to determine whether the target is out-of-date.
Contrast that with add_custom_command, where you have the ability to specify an OUTPUT produced by the command. As far as CMake knows, a custom target is just a black box, where anything could happen. That's what makes them so difficult to integrate correctly into a build.
Note that by default, custom targets are skipped altogether; you can only build them explicitly (e.g. by calling make <my_target_name>). You can make them part of the default build target either by specifying the ALL option when calling add_custom_target, or by making another target that is already part of the default build depend on your custom target.
You usually cannot add this dependency by depending on one of its output artifacts, since, as far as CMake is concerned, the custom target does not produce anything. That's why you have to use the more archaic add_dependencies instead.
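The difference can be sketched in a few lines (file and script names here are illustrative):

```cmake
# Tracked: CMake knows the command produces generated.c, so it reruns the
# command only when its DEPENDS (gen.py) are newer than the OUTPUT.
add_custom_command(
    OUTPUT  generated.c
    COMMAND python gen.py generated.c
    DEPENDS gen.py)
add_executable(app main.c generated.c)

# Black box: no OUTPUT declared, so whenever this target is built at all,
# its command is re-executed unconditionally.
add_custom_target(tags
    COMMAND ctags -R ${CMAKE_SOURCE_DIR})
```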
If add_custom_target() is always executed, is there any practical purpose to using it with add_dependencies()?
Without ALL option add_custom_target() isn't built automatically.
Calling add_dependencies(A B) makes sure that target B will be built before target A, so target A may safely use files created for target B.
The CMake documentation for the add_custom_target() command is not very clear, so a similar question arose for me as well. There are two aspects to adding a custom target to a buildsystem:
Should this target be used in our buildsystem? "Used" in the sense of "materialized": the custom target is not only added to the buildsystem definition, but its corresponding logic is added to the generated buildsystem, so that logic will be executed during the build.
When should this target be (re)built (assuming we have already decided that it must be used in our buildsystem)?
Considering the first question: if we want a custom target to be used in our buildsystem, we should use either the ALL option or the add_dependencies() command. I think that the phrase from the docs:
By default nothing depends on the custom target.
implies that if we do not use the ALL option or make any other target depend on the custom target, the custom target's logic will not appear in the generated buildsystem.
Regarding the second aspect, the phrase:
Add a target with no output so it will always be built.
says that a custom target is always built when we (re)build our project (provided we have already made use of it via ALL or add_dependencies()). Since custom targets produce no output that the build's dependency-resolution logic can track, it is impossible to determine when such a target should be rebuilt, so it is assumed that a custom target must always be built.
Summarizing the above: we should use a custom target with the add_dependencies() command when the target needs not only to be defined in the buildsystem but also used in the generated buildsystem, because a custom target is always executed only once it has been materialized in the generated buildsystem.
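Both aspects can be seen in a short sketch (target and file names are illustrative):

```cmake
# Materialized via ALL: part of the default build, re-run on every build.
add_custom_target(docs ALL
    COMMAND doxygen Doxyfile)

# Defined but not materialized by default: runs only when invoked
# explicitly (e.g. "make lint") or when another built target depends on it.
add_custom_target(lint
    COMMAND clang-tidy ${CMAKE_SOURCE_DIR}/main.c)

add_executable(app main.c)
add_dependencies(app lint)   # lint now always runs before app is built
```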

The destination does not support the architecture for which the selected software is built

Today I was creating a shared library in a project containing multiple targets, where I first had only one target (and no shared lib), when all of a sudden my project produced the following error when trying to run:
"The destination does not support the architecture for which the selected software is built. Switch to a destination that supports that architecture in order to run the selected software."
Do not change the bundle name and the Executable file in Info.plist. I changed them and got this error; after I changed them back to the defaults, the error was gone.
After going through all the suggested steps here on Stackoverflow to no avail I found the answer to be a very simple one ...
I forgot to include the main.m in the targets so an executable would not be built. Adding the appropriate main files to their targets solved my problem.
The question "The selected destination does not support the architecture" may help you. I have resolved the question, by the way.
Select Info.plist in your project navigator tree and make sure it is not assigned to a target. I have confirmed this is the correct solution.

Extending CMake with a custom generator?

How would one add support for a new IDE/build system to CMake? Does it provide a mechanism to do this without modifying its code directly?
You have to write additional C++ code and build CMake to add a new generator. There is no mechanism to add a new generator without writing new code.
What IDE/build system are you thinking of adding to CMake?
Ask on the CMake mailing list ( http://www.cmake.org/mailman/listinfo/cmake ) whether or not anybody else is already working on a generator for the system you're thinking of. I know some recent work has been done to add a Ninja generator... It is not yet in the official CMake release, though: still in progress as of today.