Narrow down MSBuild verbosity to limit FAKE build output

Any ideas how to limit the output generated by the MSBuild task in a FAKE build?
I am not so much interested in seeing all the compile details as I am in seeing warning messages.
I started using StyleCop.Analyzers, and if I get a single warning, it is hard to spot among all the messages the build task generates.
Here's how I have it now:
// Target Build Application
Target "BuildApp" (fun _ ->
    MSBuildRelease buildDir "Build" appReferences
    |> Log "AppBuild-Output: "
)

The most general helper function in the MSBuild helper is MSBuildHelper.build; all other functions are specializations of it.
This function takes a setParams function, which follows a general FAKE pattern: it takes the default parameter record and modifies it in some way. To set the log verbosity, use the Verbosity field of the MSBuildParams record:
Target "BuildApp" (fun _ ->
    "Project.sln"
    |> MSBuildHelper.build (fun p ->
        { p with
            Properties = [ "OutputPath", buildDir ]
            Verbosity = Some Minimal
            Targets = [ "Build" ] })
)
Alternatively, you can set the verbosity for the whole build by modifying the MSBuildDefaults structure:
MSBuildDefaults <- { MSBuildDefaults with Verbosity = Some Minimal }
This way, all MSBuild invocations will use minimal verbosity.

Related

Rust Workspace: Is it possible to use a binary crate for integration tests in a lib crate?

I have the following workspace structure:
[workspace]
members = [
"skserver", # binary
"skclient", # binary
"skcommon", # lib
"skintegrationtests" # lib
]
The intention was to have an extra lib crate for integration testing of client/server-functionality. The Cargo.toml of skintegrationtests is as follows:
# for integration tests of own programs etc.
[dependencies]
skcommon = { path = "../skcommon" }
skclient = { path = "../skclient" }
skserver = { path = "../skserver" }
skcommon can be referenced, but not skclient (I haven't tried skserver). Is that intentional on Rust's part? And if so, why?
I started doing integration tests with skcommon. I want to avoid circular dependencies with skclient and skserver, so I created skintegrationtests.
If you want to run the skclient binary from skintegrationtests, then you're looking for RFC 3028 binary dependencies, which are not yet implemented. There isn't a clean way to do this yet other than a build script separate from Cargo that makes sure the binary is built and then runs the test.
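A rough sketch of that workaround (everything here is an assumption about your layout, not a supported Cargo feature): an integration test that shells out to Cargo to build the binary and then runs it as a subprocess.

// skintegrationtests/tests/client_binary.rs (hypothetical)
use std::process::Command;

#[test]
fn skclient_binary_runs() {
    // Build the skclient binary first; assumes the test is run via
    // `cargo test` inside the workspace, so `cargo` is on the PATH.
    let build = Command::new("cargo")
        .args(["build", "-p", "skclient"])
        .status()
        .expect("failed to invoke cargo");
    assert!(build.success());

    // Locate the compiled binary; the workspace-level target/ directory
    // (and the missing .exe suffix on Windows) are assumptions.
    let bin = concat!(env!("CARGO_MANIFEST_DIR"), "/../target/debug/skclient");
    let run = Command::new(bin).output().expect("failed to run skclient");
    assert!(run.status.success());
}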
If you want to call functions defined in the skclient package's code, then you need to make skclient a library package (one that has a lib.rs) and define all of the wanted functions there rather than in main.rs. This does not prevent it from also having a binary target, which can refer to the library as use skclient::whatever;.
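A minimal sketch of that split (run_client is a hypothetical stand-in for your real code):

// skclient/src/lib.rs: the library target; move the testable code here.
pub fn run_client(addr: &str) -> String {
    format!("connected to {addr}")
}

// skclient/src/main.rs: the binary target just calls into the library.
fn main() {
    println!("{}", skclient::run_client("127.0.0.1:8080"));
}

With that in place, skintegrationtests can depend on skclient like any other library and call skclient::run_client from its tests.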

How can I dynamically generate test cases with Common Test?

With Common Test test suites, it looks like test cases must be 1:1 with atoms that correspond to top-level functions in the suite. Is this true?
In that case, how can I dynamically generate test cases?
In particular, I want to read a directory and then, in parallel, for each file in the directory, do stuff with the file and compare against a snapshot.
I got the parallelization I wanted with rpc:pmap, but what I don't like is that the entire test case fails on the first bad assert. I want to see what happens with all the files, every time. Is there a way to do this?
Short answer: No.
Long answer: No. I even tried using ghost functions:
-module(my_test_SUITE).
-export([all/0]).
-export([has_files/1]).
-export(['$handle_undefined_function'/2]).

all() -> [has_files | files()].

has_files(_) ->
    case files() of
        [] -> ct:fail("No files in ~s", [element(2, file:get_cwd())]);
        _ -> ok
    end.

files() ->
    [to_atom(AsString) || AsString <- filelib:wildcard("../../lib/exercism/test/*.test")].

to_atom(AsString) ->
    list_to_atom(filename:basename(filename:rootname(AsString))).

'$handle_undefined_function'(Func, [_]) ->
    Func = file:consult(Func).
And… as soon as I add the undefined function handler, rebar3 ct starts reporting…
All 0 tests passed.
Clearly, Common Test itself relies on some functions being undefined in order to work. 🤷‍♂️
Data Directory
Each Common Test suite can have a "data" directory. This directory can contain anything you want. For example, a test suite mytest_SUITE can have a mytest_SUITE_data/ data directory. The path to the data directory can be obtained from the Config parameter in test cases.
someTest(Config) ->
    DataDir = ?config(data_dir, Config),
    %% TODO: do something with DataDir
    ?assert(false). %% include the eunit header file for this to work
Running tests in parallel
To run tests in parallel you need to use groups. Add a groups/0 function to the test suite:
groups() -> ListOfGroups.
Each member of ListOfGroups is a tuple, {Name, Props, Members}. Name is an atom, Props is a list of group properties, and Members is a list of the test cases in the group. Setting Props to [parallel|OtherProps] makes the test cases in the group run in parallel.
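For example, a minimal sketch (the group and case names here are made up):

%% all/0 points at the group instead of listing cases directly.
all() -> [{group, file_tests}].

%% The parallel property lets the cases in file_tests run concurrently.
groups() -> [{file_tests, [parallel], [check_file_a, check_file_b]}].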
Dynamic Test Cases
Check out the cucumberl project.

How to disable default gradle buildType suffix (-release, -debug)

I migrated a 3rd-party tool's build.gradle configs, so it now uses Android Gradle Plugin 3.5.3 and Gradle 5.4.1.
The build goes smoothly, but when I try to make an .aab archive, things break because the toolchain expects the output .aab file to be named MyApplicationId.aab, while the new Gradle defaults to outputting MyApplicationId-release.aab, with a buildType suffix that wasn't there before.
I tried to search for a solution, but documentation about product flavors is mostly about adding suffixes. How do I prevent the default "-release" suffix from being added? There weren't any product flavor blocks in the toolchain's Gradle config files.
I realized that I have to create custom tasks after reading other questions and answers:
How to change the generated filename for App Bundles with Gradle?
Renaming applicationVariants.outputs' outputFileName does not work because those are for .apks.
I'm using Gradle 5.4.1, so I followed that version's Copy task syntax reference.
I don't quite understand where the "app.aab" name string came from, so I defined my own aabFile name string to match my toolchain's output.
I don't care about the source file, so I didn't add another delete task to remove it.
Also, my toolchain seems to strip unknown variables surrounded by "${}", so I had to work around ${buildDir} and ${flavor} by omitting the braces and using concatenation for proper delimiting.
tasks.whenTaskAdded { task ->
    if (task.name.startsWith("bundle")) { // e.g. bundleRelease
        def renameTaskName = "rename${task.name.capitalize()}Aab" // renameBundleReleaseAab
        def flavorSuffix = task.name.substring("bundle".length()).uncapitalize() // "release"
        tasks.create(renameTaskName, Copy) {
            def path = "$buildDir/outputs/bundle/" + "$flavorSuffix/"
            def aabFile = "${android.defaultConfig.applicationId}-" + "$flavorSuffix" + ".aab"
            from(path) {
                include aabFile
                rename aabFile, "${android.defaultConfig.applicationId}.aab"
            }
            into path
        }
        task.finalizedBy(renameTaskName)
    }
}
As the original answer said: This will add more tasks than necessary, but those tasks will be skipped since they don't match any folder.
e.g.
Task :app:renameBundleReleaseResourcesAab NO-SOURCE

MSBuild: Update a manually overridden project property from a property sheet setting

I am trying to update the Microsoft.Cpp.Win32.user property sheet, in which I plan to change the settings below:
1. C++ -> General -> Optimization -> Optimization = Disabled (/Od)
2. Linker -> General -> Output Directory = C:\BuildPath
To do this I updated the existing Microsoft.Cpp.Win32.user property sheet in VS 2013. The problem is that some projects have a manually set output directory in their project settings (e.g. D:\BuildProjectPath), and for those projects the OutDir is not picked up from Microsoft.Cpp.Win32.user; to update those settings I have to change them by hand.
Let me know how I can resolve this issue.
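For reference, a sketch of what those two settings can look like inside a property sheet (a hand-written fragment, not the exact file VS generates):

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="12.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- Output directory; note the trailing backslash. -->
    <OutDir>C:\BuildPath\</OutDir>
  </PropertyGroup>
  <ItemDefinitionGroup>
    <ClCompile>
      <!-- Optimization = Disabled (/Od) -->
      <Optimization>Disabled</Optimization>
    </ClCompile>
  </ItemDefinitionGroup>
</Project>

One likely explanation for the behavior: the .user property sheet is imported near the top of each .vcxproj, so an OutDir set directly in the project body is evaluated later and wins (MSBuild takes the last definition of a property). That would be why projects with a manually set output directory ignore the sheet.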

How to recursively parse xsd files to generate a list of included schemas for incremental build in Maven?

I have a Maven project that uses the jaxb2-maven-plugin to compile some xsd files. It uses the staleFile to determine whether or not any of the referenced schemaFiles have been changed. Unfortunately, the xsd files in question use <xs:include schemaLocation="../relative/path.xsd"/> tags to include other schema files that are not listed in the schemaFile argument so the staleFile calculation in the plugin doesn't accurately detect when things need to be actually recompiled. This winds up breaking incremental builds as the included schemas evolve.
Obviously, one solution would be to list all the recursively referenced files in the execution's schemaFile. However, there are going to be cases where developers don't do this and break the build. I'd like instead to automate the generation of this list in some way.
One approach that comes to mind would be to somehow parse the top-level XSD files and then either sets a property or outputs a file that I can then pass into the schemaFile parameter or schemaFiles parameter. The Groovy gmaven plugin seems like it might be a natural way to embed that functionality right into the POM. But I'm not familiar enough with Groovy to get started.
Can anyone provide some sample code? Or offer an alternative implementation/solution?
Thanks!
Not sure how you'd integrate it into your Maven build -- Maven isn't really my thing :-(
However, if you have the path to an xsd file, you should be able to get the files it references by doing something like:
def rootXsd = new File('path/to/xsd')
def refs = new XmlSlurper().parse(rootXsd).depthFirst().findAll { it.name() == 'include' }*.@schemaLocation*.text()
println "$rootXsd references $refs"
So refs is a list of Strings that should be the paths to the included xsds.
Based on tim_yates's answer, the following is a workable solution, which you may have to customize based on how you are configuring the jaxb2 plugin.
Configure a gmaven-plugin execution early in the lifecycle (e.g., in the initialize phase) that runs with the following configuration...
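Something along these lines in the POM (a sketch; the plugin version and the script location are assumptions, and the Groovy shown below is assumed to be collected into that one script file):

<plugin>
  <groupId>org.codehaus.gmaven</groupId>
  <artifactId>groovy-maven-plugin</artifactId>
  <version>2.1.1</version>
  <executions>
    <execution>
      <phase>initialize</phase>
      <goals>
        <goal>execute</goal>
      </goals>
      <configuration>
        <source>${project.basedir}/src/build/check-stale-schemas.groovy</source>
      </configuration>
    </execution>
  </executions>
</plugin>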
Start with a function to collect File objects of referenced schemas (this is a refinement of Tim's answer):
def findRefs = { f ->
    def relPaths = new XmlSlurper().parse(f).depthFirst().findAll {
        it.name() == 'include'
    }*.@schemaLocation*.text()
    relPaths.collect { new File(f.absoluteFile.parent + "/" + it).canonicalFile }
}
Wrap that in a function that iterates on the results until all children are found:
def recursiveFindRefs = { schemaFiles ->
    def outputs = [] as Set
    def inputs = schemaFiles as Queue
    def xsd
    // Breadth-first: examine all refs in all schema files
    while ((xsd = inputs.poll()) != null) {
        outputs << xsd
        findRefs(xsd).each {
            if (!outputs.contains(it)) inputs.add(it)
        }
    }
    outputs
}
The real magic then comes in when you parse the Maven project to determine what to do.
First, find the JAXB plugin:
jaxb = project.build.plugins.find { it.artifactId == 'jaxb2-maven-plugin' }
Then, parse each execution of that plugin (if you have multiple). The code assumes that each execution sets schemaDirectory, schemaFiles and staleFile (i.e., does not use the defaults!) and that you are not using schemaListFileName:
jaxb.executions.each { ex ->
    log.info("Processing jaxb execution $ex")
    // Extract the schema locations; the configuration is an Xpp3Dom
    ex.configuration.children.each { conf ->
        switch (conf.name) {
            case "schemaDirectory":
                schemaDirectory = conf.value
                break
            case "schemaFiles":
                schemaFiles = conf.value.split(/,\s*/)
                break
            case "staleFile":
                staleFile = conf.value
                break
        }
    }
Finally, we can resolve the schemaFiles and parse them using the functions we've defined earlier:
    def schemaHandles = schemaFiles.collect { new File("${project.basedir}/${schemaDirectory}", it) }
    def allSchemaHandles = recursiveFindRefs(schemaHandles)
...and compare their last modified times against the stale file's modification time,
unlinking the stale file if necessary.
    def maxLastModified = allSchemaHandles.collect {
        it.lastModified()
    }.max()
    def staleHandle = new File(staleFile)
    if (staleHandle.lastModified() < maxLastModified) {
        log.info("  New schemas detected; unlinking $staleFile.")
        staleHandle.delete()
    }
}