src
|--Manager.cpp
|--Specializations
| |--Manager.cpp
When building this, Boost.Build tries to create
/bin/...
|--Manager.o
|--Manager.o
but fails. How can I resolve this automatically? I read the FAQ item, but I don't like that solution, as I have to fix things manually whenever I have the same class name in a different namespace. Would it be possible to make Boost.Build automatically prefix object file names with the directory?
/bin/...
|--Manager.o
|--Specializations.Manager.o
Or duplicate the source directory tree?
/bin/...
|--Manager.o
|--Specializations
| |--Manager.o
This behavior was changed a long time ago and should now just work: Boost.Build mirrors the source structure, i.e. you should get both bin/Manager.o and bin/Specializations/Manager.o.
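For reference, a minimal Jamfile sketch matching the tree above ('app' is just a placeholder target name):

exe app
    : Manager.cpp
      Specializations/Manager.cpp
    ;

With the current behavior, the two objects land at bin/.../Manager.o and bin/.../Specializations/Manager.o without clashing.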
I keep getting the warning below when I use dbt run, and I can't find anything in the dbt documentation on why it occurs or how to fix it.
[WARNING]: Did not find matching node for patch with name 'vGenericView' in the 'models' section of file 'models\generic_schema\schema.sql'
Did you by chance recently upgrade to dbt 1.0.0? If so, this means you have a model, vGenericView, defined in a schema.yml, but no corresponding vGenericView.sql model file.
If all views and tables defined in the schema are 1-to-1 with model files, then try running dbt clean, followed by dbt test or dbt run.
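For example (dbt clean removes the directories listed under clean-targets in dbt_project.yml, which by default include the compiled target directory):

dbt clean
dbt run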
Not sure what happened to my project, but I ran into frustration hunting for missing and/or misspelled files when the problem was just leftovers from compiled files that had not been cleaned out. I had previously moved views to different schemas and renamed others.
So the mistake is in the naming:
The model name in the models.yml file should, for example, be: employees
And the SQL file should be named: employees.sql
So your models.yml will look like:
version: 2
models:
- name: employees
description: "View of employees"
And there must be a model file named employees.sql.
One case where this will happen is if you have the same data source defined in two different schema.yml files (or whatever you call them).
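As a sketch of that case (hypothetical paths and names), the same source declared in two schema files:

models/staging/schema.yml:
version: 2
sources:
  - name: raw
    tables:
      - name: customers

models/marts/schema.yml:
version: 2
sources:
  - name: raw    # same source declared a second time
    tables:
      - name: customers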
I am looking for best-practice advice regarding the following context:
I am using pytest to run integration tests on my IaC deployment.
My IaC code base is structured as:
myapp
|
|_roles
| |_role1
| |_role2
|_resources
|_tomcat
|_java
I'd like to use the same kind of structure for my test files.
Tests are currently divided in file matching roles (role1, role2):
tests
|
|_roles
|_test_role1.py
|_test_role2.py
which leads to duplicated code, e.g.:
role1 is a Tomcat-based app,
role2 holds pure Java code,
so in both test files (test_role1.py and test_role2.py) there will be a Java test function.
If I could add a dir structure as:
tests
|
|_roles
| |_test_role1.py
| |_test_role2.py
|
|_resources
|_test_tomcat.py
|_test_java.py
Then I could just "include / import" the test_java.py functions to use them in test_role1.py and test_role2.py without duplicating code...
What's the best way to achieve this?
I am already using fixtures (defined in conftest.py), and I feel that the solution to my duplicated code is something along the lines of fixtures or shared test modules, but my limited Python/pytest knowledge is keeping me from the actual solution.
Thanks
If you don't mind running your tests as a module, you could turn your directories into packages by placing a file called __init__.py in the root of the project, in the directory with the code to be tested, and in the directory with the testing code.
You can then perform relative imports to access the functions you need:
e.g., to access test_java.py from test_role2.py:
from ..resources import test_java
A single dot represents the current package; two dots represent the parent package.
You will need to use the -m flag when calling your code so Python understands you are running a module with relative imports.
In your case, you might consider performing the messy relative imports in conftest.py.
This post explains the above in more detail:
http://blog.habnab.it/blog/2013/07/21/python-packages-and-you/
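Putting the answer together, a minimal sketch of the resulting layout and import (some_java_check is a hypothetical helper defined in test_java.py):

tests
|
|_ __init__.py
|_ conftest.py
|
|_roles
| |_ __init__.py
| |_test_role1.py
| |_test_role2.py
|
|_resources
  |_ __init__.py
  |_test_java.py

# in tests/roles/test_role1.py
from ..resources.test_java import some_java_check  # hypothetical shared helper

def test_role1_java():
    some_java_check()

Then run from the directory containing tests, e.g. python -m pytest tests, so the relative imports resolve.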
I've already asked a similar question, here:
Linking to modules in external directory Compaq Visual Fortran command prompt
And I thought that the first answer was correct (that is, in the manual they say you can simply specify the path name before the module), but after deleting the temporary files in my library folder, this approach seemed to stop working. Trying with the /include[:path] approach, here is my .bat file:
df /include:..\FORTRAN_LIB\ __constants
myIO griddata_mod myfdgen myDiff magneticField /exe:magneticField
And an error is returned saying:
__constants
myIO
griddata_mod
myfdgen
myDiff
magneticField
f90: Severe: No such file or directory
... file is '__constants'
Again, I apologize that this question is VERY specific, but it seems like it should be simple and yet it does not work at all.
p.s. Originally, I was using:
df ..\FORTRAN_LIB\__constants ..\FORTRAN_LIB\myIO
..\FORTRAN_LIB\griddata_mod ..\FORTRAN_LIB\myfdgen
..\FORTRAN_LIB\myDiff magneticField /exe:magneticField
But, as I've said, it stopped working after I deleted the temporary files in my FORTRAN_LIB folder. Also note that these .bat files used only one line; I've broken them into several lines just for readability. I would prefer using the /include[:path] option, since that seems like the better solution.
Okay, so I think I've figured out a workaround, at the very least. I understood that /include[:dir] tells the compiler to search "dir" for included files. The documentation seemed to suggest that it also searches there for USEd modules, but that doesn't seem to be the case.
My program now looks like this:
include '..\FORTRAN_LIB\__constants.f90'
include '..\FORTRAN_LIB\computeError.f90'
include '..\FORTRAN_LIB\griddata_mod.f90'
include '..\FORTRAN_LIB\myfdgen.f90'
include '..\FORTRAN_LIB\myDiff.f90'
include '..\FORTRAN_LIB\myIO.f90'
program magneticField
use constants
use computeError_mod
use griddata_mod
use myfdgen_mod
use myDiff_mod
use myIO_mod
implicit none
...
And my DF command like this:
df magneticField /exe:magneticField
And everything seems to work fine. It would be nicer to have the /include[:dir] option work, but as long as I'm able to reach into a separate directory, I'm satisfied. If anyone can find a better solution, I'll switch the checkmark. I hope this helps anyone else who was confused like me.
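One detail worth noting: use refers to the module name, not the file name, which is why use constants works with a file called __constants.f90. A minimal sketch of what that file would declare (the pi parameter is only an illustration):

module constants
    implicit none
    real(8), parameter :: pi = 3.141592653589793d0
end module constants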
For some tests I need to run a data-driven test with a configuration that is generated via reflection in the ClassInitialize method. I have tried everything, but I just cannot get the data source properly set up.
The test takes a list of classes from a CSV file (one line per class) and then checks that the mappings to the database work (i.e., it tries to get one item from the database for every entity, which throws an exception when the table structure does not match).
The test method is:
[DataSource(
"Microsoft.VisualStudio.TestTools.DataSource.CSV",
"|DataDirectory|\\EntityMappingsTests.Types.csv",
"EntityMappingsTests.Types#csv",
DataAccessMethod.Sequential)
]
[TestMethod()]
public void TestMappings () {
Obviously the file is EntityMappingsTests.Types.csv. It should be in the DataDirectory.
Now, in the Initialize method (marked with ClassInitialize) I put that together and then try to write it.
WHERE should I write it to? WHERE IS THE DataDirectory?
I tried:
File.WriteAllText(context.TestDeploymentDir + "\\EntityMappingsTests.Types.csv", types.ToString());
File.WriteAllText("EntityMappingsTests.Types.csv", types.ToString());
Both result in "the unit test adapter failed to connect to the data source or read the data". More precisely:
Error details: The Microsoft Jet database engine could not find the
object 'EntityMappingsTests.Types.csv'. Make sure the object exists
and that you spell its name and the path name correctly.
So where should I put that file?
I also tried just writing it to the current directory and taking out the DataDirectory part - same result. Sadly, there is limited debugging support here.
Please use the Process Monitor tool from technet.microsoft.com/en-us/sysinternals/bb896645. Put a filter on MSTest.exe or the associated qtagent32.exe and find out which locations it tries to load from, and at what point in the test loading process. Then please provide an update with those details here.
After you add the CSV file to your VS project, open its properties and set "Copy to Output Directory" to "Copy Always". DataDirectory defaults to the location of the compiled executable, which runs from the output directory, so the test will find the file there.
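If you still prefer to generate the file in ClassInitialize, here is a sketch that writes it next to the test assembly (EntityMappingsTests and BuildTypesCsv are hypothetical names; requires using System.IO;):

[ClassInitialize]
public static void ClassInit(TestContext context)
{
    // Per the answer above, DataDirectory defaults to the directory of the
    // compiled test assembly, so write the generated CSV there.
    // BuildTypesCsv() stands in for your reflection-based CSV generation.
    string dir = Path.GetDirectoryName(typeof(EntityMappingsTests).Assembly.Location);
    File.WriteAllText(Path.Combine(dir, "EntityMappingsTests.Types.csv"), BuildTypesCsv());
}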
Alright, I got myself into a deadlock with Mercurial and sub-repos... Here's what happened:
I had a large Mercurial repo that I served via Apache and hgweb.cgi.
Due to the size of the repo I decided to move to sub-repositories and share these with hgwebdir.cgi.
Using the convert tool with the filemap option I created several sub-repositories:
/main/foo
/main/bar
I created an entry for the sub-repositories in .hgsub:
foo = foo
bar = bar
And I set up hgwebdir.cgi to show $/** as the root folder.
Now when I went to my site (foo.com/hg), I saw my sub-repositories with one empty repository among them (no name, no content), but I could not download it (archive location unknown):
(screenshot: http://img707.imageshack.us/img707/8237/emptysubrepo.png)
That was all right until I added a new sub-repository.
I could not push the new .hgsub file to foo.com/hg, since that page is served by hgwebdir.
The only method that currently works for me is to switch from hgwebdir to hgweb, commit .hgsubstate, and switch back to hgwebdir.
Does someone have a good setup for such a mess?
On the web server, your main repo and its subrepos should appear as siblings, not with the subrepos inside main.
Main
ASCII
AlignDistribute
And the URLs in your .hgsub should look like:
ASCII = ../ASCII
AlignDistribute = ../AlignDistribute
Then you'll be able to push/pull to http://foo.com/hg/Main, and when you clone it, the clone/update will automatically attach and clone down the separate subrepos.
From what I've read on https://www.mercurial-scm.org/wiki/PublishingRepositories#multiple
The keys (on the left) and the values (on the right) are both filesystem paths
The keys should be prefixes of the values and are "subtracted" from the values in order to generate the URL paths to each repository
What I'm guessing happened is that in your hgweb(dir) configuration you're specifying the same value as both the key and the value for a collection, so during the subtraction the repository ends up with a blank name and no way to reach it.
When I use [collections] to point /a/full/path = /a/full/path directly at a repo, the name ends up blank too, because hgweb reads that folder as a single repo (because it is one) instead of treating each sub-directory as an individual repo. After I removed the .hg folder, .hgsub, and everything else from the root of my collection entry, all the subfolders started showing up properly.
In [paths], I originally used /path/to/my/project = /path/to/my/project, and since that is a single referenced repository, the value is subtracted from the key, leaving you once again with ''. Instead I used project = /path/to/my/project, and it came out as 'project'.
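In config terms (paths are hypothetical), the broken form

[paths]
/path/to/my/project = /path/to/my/project

publishes the repo under a blank name, while

[paths]
project = /path/to/my/project

publishes it as 'project'.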
Hopefully that URL or these descriptions will get you out of your pickle!