I have divided my code into an AppBundle that holds everything related to the framework (including Doctrine configuration), and another namespace for the business logic.
So basically I have this file structure:
└── src
    ├── AppBundle
    │   └── Resources
    │       └── config
    │           └── doctrine
    │               ├── Attributes.orm.yml
    │               └── User.orm.yml
    └── Logic
        └── User
            ├── Attributes.php
            └── User.php
And the mapping looks like this:
Logic\User\User:
    type: entity
    table: user
    id: (...)
    fields: (...)
    embedded:
        attributes:
            class: Logic\User\Attributes
And the Attributes mapping:
Logic\User\Attributes:
    type: embeddable
    fields: (...)
Now, when I try to update the schema, I get this error:
./sf doctrine:schema:update --dump-sql
[Doctrine\Common\Persistence\Mapping\MappingException]
Class 'AppBundle\Entity\Attributes' does not exist
So the mappings are found and processed, but Symfony ignores the namespaces of the classes I specified inside the mappings and instead tries to find them in the bundle's Entity directory. What do I have to do to fix this?
Doctrine mapping configurations are defined here: http://symfony.com/doc/current/reference/configuration/doctrine.html#mapping-configuration
In particular you will want to set dir to your mapping directory and prefix to the entity namespace.
orm:
    default_entity_manager: default
    auto_generate_proxy_classes: %kernel.debug%
    entity_managers:
        default:
            connection: default
            mappings:
                CeradOrgBundle: ~
                CeradUserBundle: ~
                CeradPersonBundle: ~
        games:
            connection: games
            mappings:
                CeradGameBundle:
                    dir: Resources/config/doctrine2
                    prefix: Cerad\Bundle\GameBundle\Doctrine\Entity
In the above example, I created two entity managers. The default entity manager processes various bundles with the standard directory layout.
The games manager shows how to customize the mapping information. In this case the orm files live in the doctrine2 directory instead of doctrine.
The prefix is what you are interested in. Notice that these entities live under Doctrine\Entity instead of just the normal Entity directory.
So set your prefix and you should be good to go.
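Applied to the structure in your question, a sketch could look something like this (untested; it assumes the .orm.yml files stay inside AppBundle while the classes live under src/Logic/User, exactly as shown above):

orm:
    entity_managers:
        default:
            connection: default
            mappings:
                AppBundle:
                    type: yml
                    # the mapping files are inside the bundle...
                    dir: Resources/config/doctrine
                    # ...but the classes are not, so point the prefix at your namespace
                    prefix: Logic\User

With that prefix, User.orm.yml resolves to Logic\User\User and Attributes.orm.yml to Logic\User\Attributes, instead of AppBundle\Entity\*.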
Background
I'm new to scientific workflows, and am building my first more complex Snakemake workflow.
Setup & what I've tried
I'm importing modules dynamically. The scenario is that I have a root config.yml file that includes:
subworkflows:
- subwf-1
- subwf-2
- other-subwf
In the root Snakefile I'm doing:
configfile: 'config.yml'
ALL_SUBWFS = config['subworkflows']
# <omitted> Check for min version 6.0
for MODULE in ALL_SUBWFS:
module MODULE:
snakefile:
f'subworkflows/{MODULE}/Snakefile'
use rule * from MODULE as f'{MODULE}_*'
This works fine so far. However, I'd like to be able to configure the different submodules each in their own config.yml.
Assume the following directory structure:
.
├── Snakefile
├── config.yml
└── subworkflows/
    ├── subwf-1/
    │   ├── Snakefile
    │   └── config.yml
    ├── subwf-2/
    │   ├── Snakefile
    │   └── config.yml
    └── other-subwf/
        ├── Snakefile
        └── config.yml
As far as I understand, this isn't supported, and neither of these options works:
Defining configfile: 'config.yml' in the main workflow and configfile: cf in the subworkflows, where I've tried three options for cf:
cf = str(workflow.source_path('config.yml')) # 1
cf = f'{workflow.basedir}/config.yml' # 2
cf = 'config.yml' # 3
# With each of these options
configfile: cf
All give me KeyError in <...>/Snakefile: 'config'.
Using config: config in the module import, with the subworkflows' Snakefiles including something like VALUES = config['values'], gives me KeyError in <...>/Snakefile: 'values' with each option.
Actual question
Am I right in assuming that it isn't possible to honour the configfiles for modules at all, and that instead, I need to use a global config file, e.g. with keys for each subworkflow imported as config: config['<key-for-subworkflow-config-YAML-map>']?
Further experimentation seems to show that my dynamic import of modules doesn't work: while print()ing from the subworkflow Snakefiles works fine, only one of the subworkflows gets executed when rules are defined.
This renders the question unanswerable at this point.
Regarding your Actual question:
Rather than trying to pass or overwrite configfile, you can try the intended approach for module: passing/overwriting the config dictionary used by the module.
The keyword here is config: <dict> in the module <name>: block, which you might be able to combine with snakemake's load_configfile function:
from snakemake.io import load_configfile

for MODULE in ALL_SUBWFS:
    module MODULE:
        snakefile: f'subworkflows/{MODULE}/Snakefile'
        config: load_configfile(f'subworkflows/{MODULE}/config.yml')
Note: Untested as you report other issues with your idea for loading external workflows dynamically.
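If the subworkflows also need keys from the root config.yml, one variation (equally untested, same caveat as above) is to merge the global dictionary with each subworkflow's own file, letting the subworkflow's values win:

from snakemake.io import load_configfile

for MODULE in ALL_SUBWFS:
    module MODULE:
        snakefile: f'subworkflows/{MODULE}/Snakefile'
        # global config first, the subworkflow's own config.yml overrides it
        config: {**config, **load_configfile(f'subworkflows/{MODULE}/config.yml')}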
Good day,
I am trying to unpack the files from a .tar.gz archive into my bitbake generated image.
Basically, I just want to copy some files from the archive to usr/lib/fonts.
File structure is like so:
├── deploy-executable
│   └── usr
│       └── lib
│           └── fonts
│               ├── LiberationMono-BoldItalic.ttf
│               ├── LiberationMono-Bold.ttf
│               ├── LiberationMono-Italic.ttf
│               ├── LiberationMono-Regular.ttf
│               ├── LiberationSans-BoldItalic.ttf
....
This goes inside an archive called deploy-executable-0.1.tar.gz
Now my deploy-executable_0.1.bb file looks like this:
SUMMARY = "Recipe for populating with bin_package"
DESCRIPTION = "This recipe uses bin_package to add some demo files to an image"
LICENSE = "CLOSED"
SRC_URI = "file://${BP}.tar.gz"
inherit bin_package
(I have followed the instructions from this post: https://www.yoctoproject.org/pipermail/yocto/2015-December/027681.html)
The problem is that I keep getting the following error:
ERROR: deploy-executable-0.1-r0 do_install: bin_package has nothing to install. Be sure the SRC_URI unpacks into S.
Can anyone help me?
Let me know if you need more information. I will be happy to provide.
Solution:
Add a subdir parameter after your tarball's file path in SRC_URI (and leave ${S} alone) to get it to unpack to the right location.
E.g.
SRC_URI = "file://${BP}.tar.gz;subdir=${BP}"
Explanation:
According to bitbake docs
subdir : Places the file (or extracts its contents) into the specified subdirectory. This option is useful for unusual tarballs or other archives that do not have their files already in a subdirectory within the archive.
So when your tarball gets extracted and unpacked, you can specify that it should go into ${BP} (relative to ${WORKDIR}) which is what do_package & co. expect.
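Putting it together, the recipe from the question needs only that one change (a sketch, otherwise unchanged):

SUMMARY = "Recipe for populating with bin_package"
DESCRIPTION = "This recipe uses bin_package to add some demo files to an image"
LICENSE = "CLOSED"

# unpack deploy-executable-0.1.tar.gz into ${WORKDIR}/${BP}, i.e. into ${S},
# so bin_package's do_install finds the usr/lib/fonts tree there
SRC_URI = "file://${BP}.tar.gz;subdir=${BP}"

inherit bin_package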
Note that this is also called out in the bin_package.bbclass recipe class file itself (though for a slightly different application):
# Note:
# The "subdir" parameter in the SRC_URI is useful when the input package
# is rpm, ipk, deb and so on, for example:
#
# SRC_URI = "http://example.com/foo-1.0-r1.i586.rpm;subdir=foo-1.0"
#
# Then the files would be unpacked to ${WORKDIR}/foo-1.0, otherwise
# they would be in ${WORKDIR}.
I ran into issues simply doing ${S} = ${WORKDIR} because I had some leftover artifacts in my working directory from a recipe from before I made it a bin_package. The leftover sysroot_* artifacts wreaked havoc on do_package_shlibs... Figured it was better to just unpack the archive where it was expected to go instead of mucking with changing ${S} for a bit of robustness.
Is it possible to structure a rust project in this way?
Directory structure:
src
├── a
│   └── bin1.rs
├── b
│   └── bin2.rs
└── common
    └── mod.rs
from Cargo.toml:
[[bin]]
name = "bin1"
path = "src/a/bin1.rs"
[[bin]]
name = "bin2"
path = "src/b/bin2.rs"
I would like to be able to use the common module in bin1.rs and bin2.rs. It's possible by adding the path attribute before the import:
#[path="../common/mod.rs"]
mod code;
Is there a way for bin1.rs and bin2.rs to use common without having to hardcode the path?
The recommended way to share code between binaries is to have a src/lib.rs file. Both binaries automatically have access to anything exposed through this lib.rs file, which is compiled as a separate library crate.
Then you would simply declare pub mod common; in the src/lib.rs file (it has to be pub so the binaries can reach it). If your crate is called my_crate, your binaries would be able to use it with
use my_crate::common::Foo;
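A minimal sketch of what that looks like, assuming the package is named my_crate in Cargo.toml and that common/mod.rs defines a type Foo (both names are just placeholders):

// src/lib.rs
pub mod common;

// src/common/mod.rs
pub struct Foo {
    pub value: u32,
}

// src/a/bin1.rs -- bin2.rs works the same way
use my_crate::common::Foo; // the library crate is referred to by the package name

fn main() {
    let foo = Foo { value: 1 };
    println!("bin1 sees {}", foo.value);
}

No path attribute is needed anymore, because the binaries reach common through the library crate rather than through each binary's own module tree.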
I have multiple environments (dev/qa/prod) for my application. I would therefore like to differentiate the log conversion pattern based on environment. I have an env variable set which stores which environment the application is running in. But how do I get log4j.properties to read this env variable?
This is my what my current properties file looks like:
log4j.rootLogger = INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern= [%d{yyyy-MM-dd HH:mm:ss}] my-api.%-5p: %m%n
I have tried following the log4j lookup docs, but this still does not include the environment in my log file.
log4j.appender.stdout.layout.ConversionPattern= [%d{yyyy-MM-dd HH:mm:ss}] ${env:ENVIRONMENT}-my-api.%-5p: %m%n
The output looks like this:
[2018-01-22 14:17:20] -my-api.INFO : some-message.
But I want it to look like this:
[2018-01-22 14:17:20] dev-my-api.INFO : some-message.
You may also try a pattern that has become something of a standard in Luminus and other frameworks. You create an env directory that holds prod/dev/test subfolders with some additional code and resources. In your lein project, you specify for each profile where to find those files in addition to the default path.
As a result, you've got three different log configurations, and which one is loaded depends on what you are doing: when developing the code, env/dev/resources/log4j.properties is picked up; when running tests, env/test/resources/log4j.properties.
Here is an example:
$ tree env
.
├── dev
│   └── resources
│       └── log4j.properties
├── prod
│   └── resources
│       └── log4j.properties
└── test
    └── resources
        └── log4j.properties
Some bits from the project.clj:
:profiles {:dev {:plugins [[autodoc/lein-autodoc "1.1.1"]]
                 :dependencies [[org.clojure/clojure "1.8.0"]
                                [log4j/log4j "1.2.17"]]
                 :resource-paths ["env/dev/resources"]}}
For the test profile, you may want to specify both the dev and test paths.
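With that layout, each environment's log4j.properties can simply hardcode its environment name in the conversion pattern; for example, env/dev/resources/log4j.properties could just be the file from the question with the prefix spelled out (sketch):

log4j.rootLogger = INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d{yyyy-MM-dd HH:mm:ss}] dev-my-api.%-5p: %m%n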
I have a Spring Boot 1.5.1 project that uses profile properties files. All my properties files are in /src/main/resources.
When using IntelliJ 2016.3.4 I set the
Run Configuration | Active Profile
to "local" and run it. I see this in the console:
The following profiles are active: local
But there is a value in the property file
data.count.users=2
and used as:
@Value("${data.count.users}")
private int userCount;
that is not being picked up and thus causing the error:
Caused by: java.lang.IllegalArgumentException: Could not resolve
placeholder 'data.count.users' in string value "${data.count.users}"
However, if I run this via gradle
bootRun {
    systemProperty 'spring.profiles.active', System.properties['spring.profiles.active']
}
as
gradle bootRun -Dspring.profiles.active=local
then everything starts up using the local profile as expected. Can anyone see why this is not being properly picked up? In IntelliJ Project Structure I have my /src/main/resources defined as my Resource Folders.
UPDATE:
Adding screenshot of Configuration:
I could be wrong here but it doesn't look like the spring.profiles.active environment variable is actually set in your configuration, regardless of what you've selected as your Active Profile. This may be a bug with IntelliJ.
However, setting the environment variable in Run -> Edit Configurations definitely works for me.
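If the Active Profiles field still has no effect, an equivalent workaround (mirroring what the gradle invocation above does) is to set the profile explicitly in the run configuration's VM options:

-Dspring.profiles.active=local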
Please add the Spring facet to your Spring Boot module to get full support.
Is the classpath of the heimdall module the correct one, i.e. does it contain the shown resources folder with your application.properties?
If this doesn't help, please file a minimal sample project reproducing the exact structure of your project in our bug tracker; there are too many variables to investigate otherwise: https://youtrack.jetbrains.com/issues/IDEA.
Using -Dspring.config.location in VM options in IntelliJ helped me.
-Dspring.config.location=file:/C:/Users/<project path>/src/main/resources/application-dev.properties
This could also be due to a non-standard configuration setup, for instance:
src/main/resources
├── application.properties
├── config1
│   ├── application-dev.properties
│   ├── application-prod.properties
│   ├── application.properties
│   └── logback-spring.xml
├── config2
│   ├── application-dev.properties
│   ├── application-prod.properties
│   ├── application.properties
│   └── logback-spring.xml
└── config3
    ├── application-dev.properties
    ├── application-prod.properties
    ├── application.properties
    └── logback-spring.xml
This can be solved by using the parameters logging.config and spring.config.name for logback and Spring respectively. For the above example:
java -jar \
-Dspring.profiles.active=dev \
-Dlogging.config=classpath:config1/logback-spring.xml \
-Dspring.config.name=application,config1/application \
target/my-application.0.0.1.jar
Here the root application.properties is used, overridden by config1/application.properties, which in turn is overridden by config1/application-dev.properties. The parameters (JVM system properties) can be specified in IDEA's run configuration under VM Options.
As far as advanced IDE support (highlighting, completion etc.) is concerned, there is an open issue for complex/custom configuration setups: IDEA-180498