xunit from TFS2015 - where to put ParallelizeAssemblies - msbuild

I'm trying to speed up our unit tests. By setting xunit.parallelizeAssembly to true in the app.config files I get tests from different assemblies to run in parallel when run from Visual Studio. But when running on the build server it makes no difference in execution time, and I can see that only one core is used.
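For reference, the per-assembly setting mentioned above sits in each test project's app.config, along these lines:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <!-- xUnit.net 2.x: allow this assembly to run in parallel with others -->
    <add key="xunit.parallelizeAssembly" value="true" />
  </appSettings>
</configuration>
```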
In the paragraph on the MSBuild runner on this page it is suggested that the ParallelizeAssemblies setting would solve this problem. I'm currently running the tests with the "Visual Studio Test" build step (see image for configuration). Where do I put this setting?
I can't share all of the log but I believe the first and last part might contain good clues.
2017-04-20T16:51:10.5496891Z Executing the powershell script: C:\Tfs_Agent5\tasks\VSTest\1.0.32\VSTest.ps1
2017-04-20T16:51:12.9402898Z ##[debug]Calling Invoke-VSTest for all test assemblies
2017-04-20T16:51:12.9559206Z ##[warning]Install Visual Studio 2015 Update 1 or higher on your build agent machine to run the tests in parallel.
2017-04-20T16:51:13.0027923Z Working folder: C:\Tfs_Agent5\_work\1
2017-04-20T16:51:13.0027923Z Executing C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\CommonExtensions\Microsoft\TestWindow\vstest.console.exe "C:\Tfs_Agent5\_work\1\s\src\AwtSG.CompareToAros\bin\Release\AwtSG.Domain.MetOceanData.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Domain.Dynamics.UnitTests\bin\Release\AwtSG.Domain.Dynamics.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Domain.EcdisRouteFiles.UnitTests\bin\Release\AwtSG.Domain.EcdisRouteFiles.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Domain.EcdisRouteFiles.UnitTests\bin\Release\AwtSG.Domain.NavigationUtilities.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Domain.GeoSpatial.UnitTests\bin\Release\AwtSG.Domain.Geospatial.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Domain.GeoSpatial.UnitTests\bin\Release\AwtSG.Domain.NavigationUtilities.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Domain.MetOceanData.GribApi.UnitTests\bin\Release\AwtSG.Domain.MetOceanData.GribApi.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Domain.MetOceanData.UnitTests\bin\Release\AwtSG.Domain.MetOceanData.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Domain.NavigationUtilities.UnitTests\bin\Release\AwtSG.Domain.NavigationUtilities.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Domain.Optimization.Evolutionary.IntegrationTests\bin\Release\AwtSG.Domain.MetOceanData.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Domain.Optimization.Evolutionary.IntegrationTests\bin\Release\AwtSG.Domain.Optimization.Evolutionary.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Domain.Optimization.Evolutionary.IntegrationTests\bin\Release\AwtSG.Domain.TechnicalPerformance.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Domain.Optimization.Evolutionary.UnitTests\bin\Release\AwtSG.Domain.MetOceanData.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Domain.Optimization.Evolutionary.UnitTests\bin\Release\AwtSG.Domain.Optimization.Evolutionary.UnitTests.dll" 
"C:\Tfs_Agent5\_work\1\s\src\AwtSG.Domain.Optimization.Evolutionary.UnitTests\bin\Release\AwtSG.Domain.TechnicalPerformance.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Domain.Optimization.IntegrationTests\bin\Release\AwtSG.Domain.MetOceanData.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Domain.Optimization.Mesh\bin\Release\AwtSG.Domain.NavigationUtilities.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Domain.Simulation.IntegrationTests\bin\Release\AwtSG.Domain.MetOceanData.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Domain.Simulation.UnitTests\bin\Release\AwtSG.Domain.NavigationUtilities.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Domain.Simulation.UnitTests\bin\Release\AwtSG.Domain.Simulation.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Domain.TechnicalPerformance.UnitTests\bin\Release\AwtSG.Domain.TechnicalPerformance.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Domain.Units.UnitTests\bin\Release\AwtSG.Domain.Units.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Dto.Mapping.UnitTests\bin\Release\AwtSG.Domain.NavigationUtilities.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Dto.Mapping.UnitTests\bin\Release\AwtSG.Domain.Simulation.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Dto.Mapping.UnitTests\bin\Release\AwtSG.Dto.Mapping.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Dto.Simulation.Mapping.UnitTests\bin\Release\AwtSG.Domain.NavigationUtilities.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Dto.Simulation.Mapping.UnitTests\bin\Release\AwtSG.Domain.Simulation.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Dto.Simulation.Mapping.UnitTests\bin\Release\AwtSG.Dto.Simulation.Mapping.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Dto.Simulation.UnitTests\bin\Release\AwtSG.Dto.Simulation.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Dto.UnitTests\bin\Release\AwtSG.Dto.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.Infrastructure.UnitTests\bin\Release\AwtSG.Infrastructure.UnitTests.dll" 
"C:\Tfs_Agent5\_work\1\s\src\AwtSG.Numerics.UnitTests\bin\Release\AwtSG.Numerics.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.VoyageSimulationValidation\bin\Release\AwtSG.Numerics.UnitTests.dll" "C:\Tfs_Agent5\_work\1\s\src\AwtSG.WindowsService.VoyageSimulation.UnitTests\bin\Release\AwtSG.WindowsService.VoyageSimulation.UnitTests.dll" /Settings:"C:\Tfs_Agent5\_work\1\s\src\all.runsettings" /EnableCodeCoverage /InIsolation /logger:trx /TestAdapterPath:"C:\Tfs_Agent5\_work\1\s\src\packages"
2017-04-20T16:51:13.4090709Z Microsoft (R) Test Execution Command Line Tool Version 14.0.25420.1
2017-04-20T16:51:13.4090709Z Copyright (c) Microsoft Corporation. All rights reserved.
2017-04-20T16:51:15.7373080Z Starting test execution, please wait...
2017-04-20T16:51:19.4718867Z Warning: Diagnostic data adapter message: Could not find diagnostic data adapter 'Code Coverage'. Make sure diagnostic data adapter is installed and try again.
2017-04-20T16:51:33.2378718Z Information: [xUnit.net 00:00:01.2430136] Discovering:
2017-04-20T17:17:09.1501081Z Warning: System.AppDomainUnloadedException: Attempted to access an unloaded AppDomain. This can happen if the test(s) started a thread but did not stop it. Make sure that all the threads started by the test(s) are stopped before completion.
2017-04-20T17:17:10.3845539Z Total tests: 17704. Passed: 17679. Failed: 0. Skipped: 25.
2017-04-20T17:17:10.3845539Z Test Run Successful.
2017-04-20T17:17:10.3845539Z Test execution time: 25.8603 Minutes
2017-04-20T17:17:28.5726606Z Results File: C:\Tfs_Agent5\_work\1\TestResults\tfsservice_US-SUN-TFSBUILD 2017-04-20 09_57_25.trx
2017-04-20T17:17:29.3539333Z Publishing Test Results...
2017-04-20T17:17:44.9950924Z Test results remaining: 17704
2017-04-20T17:17:47.0264093Z Test results remaining: 16704
2017-04-20T17:17:49.0421061Z Test results remaining: 15704
2017-04-20T17:17:53.1985047Z Test results remaining: 14704
2017-04-20T17:17:54.9329389Z Test results remaining: 13704
2017-04-20T17:17:56.5579944Z Test results remaining: 12704
2017-04-20T17:17:58.2299179Z Test results remaining: 11704
2017-04-20T17:17:59.9331076Z Test results remaining: 10704
2017-04-20T17:18:01.5894343Z Test results remaining: 9704
2017-04-20T17:18:03.0113618Z Test results remaining: 8704
2017-04-20T17:18:04.3395079Z Test results remaining: 7704
2017-04-20T17:18:05.6052151Z Test results remaining: 6704
2017-04-20T17:18:06.8083476Z Test results remaining: 5704
2017-04-20T17:18:08.5896555Z Test results remaining: 4704
2017-04-20T17:18:09.9178475Z Test results remaining: 3704
2017-04-20T17:18:11.2304148Z Test results remaining: 2704
2017-04-20T17:18:12.5429604Z Test results remaining: 1704
2017-04-20T17:18:13.8867197Z Test results remaining: 704
2017-04-20T17:18:25.5277535Z Published Test Run :
Note the warning that VS2015 Update 1 must be installed. This is what the About dialog looks like on the build agent (Update 3 is installed):

Parallel runs are available in VS2015 Update 1 and later; make sure you are using the right VS version to run the tests in parallel.
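Once a suitable VS version is on the agent, assembly-level parallelism for vstest.console can also be requested through the .runsettings file passed in the build step (a minimal sketch; the existing all.runsettings from the log would be the place for this):

```xml
<RunSettings>
  <RunConfiguration>
    <!-- 0 = use all available cores; test assemblies run in parallel -->
    <MaxCpuCount>0</MaxCpuCount>
  </RunConfiguration>
</RunSettings>
```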

Related

Ask re-run test failed and merge (RobotFramework)

I have 3 test suites: test1.robot (10 TCs inside), test2.robot (3 TCs inside), test3.robot (2 TCs inside).
I run all test suites by shell script: robot --variable:ABC --name Testing --outputdir /perf-logs/Testing test1.robot test2.robot test3.robot
I found that there are 2 ways to rerun:
--rerunfailed (for tests) and --rerunfailedsuites (for testsuites)
I have some questions:
1/ What is the difference between them (--rerunfailed vs --rerunfailedsuites)?
2/ Assuming I have 2 TCs failed in test suite test1.robot and 1 TC failed in test suite test2.robot, which rerun option should I use?
3/ Assuming the first run of the 3 test suites gives me one output.xml, and after re-running the failed TCs (for the 2 test suites) I have another output2.xml. Can I merge them?
4/ In case I only re-run the 1 failed TC (in test1.robot) and get the result in output3.xml, can I merge output3.xml with the first output.xml?
Many thanks
Difference:
https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html
-R, --rerunfailed <file>
Selects failed tests from an earlier output file to be re-executed.
-S, --rerunfailedsuites <file>
Selects failed test suites from an earlier output file to be re-executed.
Which to use:
If you want to rerun entire failed suites, use --rerunfailedsuites. If you want to rerun only the failed test cases, not the passed tests in the suite, use --rerunfailed (if the tests are independent).
To combine the files:
rebot --merge --outputdir . --output final_output.xml output.xml output2.xml
4) Same as above: merge output.xml with output3.xml using the same rebot --merge command.
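Putting question 3 together as a command sketch (suite names are from the question; the output directories run1/run2 are assumptions for illustration):

```shell
# First run: produces run1/output.xml
robot --outputdir run1 test1.robot test2.robot test3.robot
# Rerun only the test cases that failed in the first run
robot --rerunfailed run1/output.xml --outputdir run2 test1.robot test2.robot test3.robot
# Merge, so rerun results replace the original failed results
rebot --merge --output final_output.xml run1/output.xml run2/output.xml
```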

Running unit test target on XCode9 returns "Early unexpected exit" error

I'm learning how to add unit tests to an Objective-C project using Xcode 9. I've created a command line project from scratch called Foo, and afterwards I added a new target to the project called FooTests. I then edited Foo's scheme to add FooTests. However, whenever I run the tests (i.e., menu "Product" -> "Test"), Xcode 9 throws the following error:
Showing All Messages
Test target FooTests encountered an error (Early unexpected exit, operation never finished bootstrapping - no restart will be attempted)
However, when I try to run the tests by calling xcodebuild from the command line, it seems that all unit tests are executed correctly. Here's the output:
a483e79a7057:foo ram$ xcodebuild test -project foo.xcodeproj -scheme foo
2020-05-15 17:39:30.496 xcodebuild[53179:948485] IDETestOperationsObserverDebug: Writing diagnostic log for test session to:
/var/folders/_z/q35r6n050jz5fw662ckc_kqxbywcq0/T/com.apple.dt.XCTest/IDETestRunSession-E7DD2270-C6C2-43ED-84A9-6EBFB9A4E853/FooTests-8FE46058-FC4A-47A2-8E97-8D229C5678E1/Session-FooTests-2020-05-15_173930-Mq0Z8N.log
2020-05-15 17:39:30.496 xcodebuild[53179:948484] [MT] IDETestOperationsObserverDebug: (324DB265-AD89-49B6-9216-22A6F75B2EDF) Beginning test session FooTests-324DB265-AD89-49B6-9216-22A6F75B2EDF at 2020-05-15 17:39:30.497 with Xcode 9F2000 on target <DVTLocalComputer: 0x7f90b2302ef0 (My Mac | x86_64h)> (10.14.6 (18G4032))
=== BUILD TARGET foo OF PROJECT foo WITH CONFIGURATION Debug ===
Check dependencies
=== BUILD TARGET FooTests OF PROJECT foo WITH CONFIGURATION Debug ===
Check dependencies
Test Suite 'All tests' started at 2020-05-15 17:39:30.845
Test Suite 'FooTests.xctest' started at 2020-05-15 17:39:30.846
Test Suite 'FooTests' started at 2020-05-15 17:39:30.846
Test Case '-[FooTests testExample]' started.
Test Case '-[FooTests testExample]' passed (0.082 seconds).
Test Case '-[FooTests testPerformanceExample]' started.
/Users/ram/development/objective-c/foo/FooTests/FooTests.m:36: Test Case '-[FooTests testPerformanceExample]' measured [Time, seconds] average: 0.000, relative standard deviation: 84.183%, values: [0.000006, 0.000002, 0.000001, 0.000002, 0.000001, 0.000001, 0.000001, 0.000001, 0.000001, 0.000001], performanceMetricID:com.apple.XCTPerformanceMetric_WallClockTime, baselineName: "", baselineAverage: , maxPercentRegression: 10.000%, maxPercentRelativeStandardDeviation: 10.000%, maxRegression: 0.100, maxStandardDeviation: 0.100
Test Case '-[FooTests testPerformanceExample]' passed (0.660 seconds).
Test Suite 'FooTests' passed at 2020-05-15 17:39:31.589.
Executed 2 tests, with 0 failures (0 unexpected) in 0.742 (0.743) seconds
Test Suite 'FooTests.xctest' passed at 2020-05-15 17:39:31.589.
Executed 2 tests, with 0 failures (0 unexpected) in 0.742 (0.744) seconds
Test Suite 'All tests' passed at 2020-05-15 17:39:31.590.
Executed 2 tests, with 0 failures (0 unexpected) in 0.742 (0.745) seconds
** TEST SUCCEEDED **
Does anyone know how to add unit tests to an Xcode 9 project for a command line application? If you happen to know, what's the right way of doing this, and what am I doing wrong?

Does the MSBuild Exec task search STDOUT for the string "error"?

I have a small NUnit test suite with a single ignored test. I'm just now writing a simple MSBuild script to restore and publish and the like. I tried to get a dotnet test task working:
<Target Name="test" DependsOnTargets="restore">
  <Exec Command="dotnet test"
        WorkingDirectory="test\Example.Tests" />
</Target>
But, it exits with code -1 every time!
PS> dotnet msbuild /t:test build.xml
Microsoft (R) Build Engine version 15.3.409.57025 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.
Restore completed in 50.88 ms for C:\...\src\Example\Example.csproj.
Restore completed in 57.18 ms for C:\...\test\Example.Tests\Example.Tests.csproj.
Build started, please wait...
Build completed.
Test run for C:\...\test\Example.Tests\bin\Debug\netcoreapp1.1\Example.Tests.dll(.NETCoreApp,Version=v1.1)
Microsoft (R) Test Execution Command Line Tool Version 15.0.0.0
Copyright (c) Microsoft Corporation. All rights reserved.
Starting test execution, please wait...
NUnit Adapter 3.8.0.0: Test execution started
Running all tests in C:\...\test\Example.Tests\bin\Debug\netcoreapp1.1\Example.Tests.dll
NUnit3TestExecutor converted 26 of 26 NUnit test cases
Skipped FilterEndpointWithCompletelyFilteredSystemsReturnsNotFound("foo")
EXEC : error Message: [C:\...\build.xml]
OneTimeSetUp: Having trouble constructing this scenario with the current command catalog
NUnit Adapter 3.8.0.0: Test execution complete
Total tests: 26. Passed: 25. Failed: 0. Skipped: 1.
Test Run Successful.
Test execution time: 2.8322 Seconds
C:\...\build.xml(16,9): error MSB3073: The command "dotnet test" exited with code -1.
If I run the command directly in my console, it's fine.
PS> dotnet test
Build started, please wait...
Build completed.
Test run for C:\...\test\Example.Tests\bin\Debug\netcoreapp1.1\Example.Tests.dll(.NETCoreApp,Version=v1.1)
Microsoft (R) Test Execution Command Line Tool Version 15.0.0.0
Copyright (c) Microsoft Corporation. All rights reserved.
Starting test execution, please wait...
NUnit Adapter 3.8.0.0: Test execution started
Running all tests in C:\...\test\Example.Tests\bin\Debug\netcoreapp1.1\Example.Tests.dll
NUnit3TestExecutor converted 26 of 26 NUnit test cases
Skipped FilterEndpointWithCompletelyFilteredSystemsReturnsNotFound("foo")
Error Message:
OneTimeSetUp: Having trouble constructing this scenario with the current command catalog
NUnit Adapter 3.8.0.0: Test execution complete
Total tests: 26. Passed: 25. Failed: 0. Skipped: 1.
Test Run Successful.
Test execution time: 3.6485 Seconds
I searched around for help but didn't find anything explicit. I got the impression that the Exec task searches STDOUT for some kind of error message, possibly just the word "error", in order to set the exit code/status.
This does appear on my console (for some reason NUnit prints "Error Message" for ignored/skipped tests).
Skipped FilterEndpointWithCompletelyFilteredSystemsReturnsNotFound("foo")
Error Message:
OneTimeSetUp: Having trouble constructing this scenario with the current command catalog
If I comment out this test, the run passes (via msbuild).
Am I correct about the Exec task? Would I "fix" the problem by overriding the CustomErrorRegularExpression parameter? I can't find any good info about this parameter... would I set it to an empty string?
If provided, Exec checks the output against CustomErrorRegularExpression and CustomWarningRegularExpression first. If those do not match or were not provided, Exec then uses the following RegEx:
new Regex
(
    // Beginning of line and any amount of whitespace.
    @"^\s*"
    // Match an [optional project number prefix 'ddd>'], single letter + colon + remaining filename, or
    // string with no colon followed by a colon.
    + @"(((?<ORIGIN>(((\d+>)?[a-zA-Z]?:[^:]*)|([^:]*))):)"
    // Origin may also be empty. In this case there's no trailing colon.
    + "|())"
    // Match the empty string or a string without a colon that ends with a space
    + "(?<SUBCATEGORY>(()|([^:]*? )))"
    // Match 'error' or 'warning'.
    + @"(?<CATEGORY>(error|warning))"
    // Match anything starting with a space that's not a colon/space, followed by a colon.
    // Error code is optional in which case "error"/"warning" can be followed immediately by a colon.
    + @"( \s*(?<CODE>[^: ]*))?\s*:"
    // Whatever's left on this line, including colons.
    + "(?<TEXT>.*)$",
    RegexOptions.IgnoreCase | RegexOptions.Compiled
)
That works out to lines in the format of the following example being interpreted as errors or warnings (depending on which of those two words appears in the "Cat." part).
Main.cs(17,20):Command line warning CS0168: The variable 'foo' is declared but never used
-------------- ------------ ------- ------ ----------------------------------------------
Origin SubCategory Cat. Code Text
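To see why the NUnit skipped-test banner trips this classifier, here is a rough Python port of the regex above (an approximation for illustration, not the exact MSBuild source; only the group syntax is translated):

```python
import re

# Approximate Python translation of MSBuild's canonical error/warning
# pattern shown above; edge-case behavior may differ from the .NET original.
MSBUILD_ERROR_WARNING = re.compile(
    r"^\s*"                                                # leading whitespace
    r"(((?P<ORIGIN>(((\d+>)?[a-zA-Z]?:[^:]*)|([^:]*))):)"  # optional origin + colon
    r"|())"                                                # ...or no origin at all
    r"(?P<SUBCATEGORY>(()|([^:]*? )))"                     # optional subcategory
    r"(?P<CATEGORY>(error|warning))"                       # the key word
    r"( \s*(?P<CODE>[^: ]*))?\s*:"                         # optional code, then colon
    r"(?P<TEXT>.*)$",                                      # rest of the line
    re.IGNORECASE,
)

# NUnit's skipped-test banner matches: "Error" is taken as the category and
# "Message" as the error code, so Exec logs the line as an error.
print(bool(MSBUILD_ERROR_WARNING.match("Error Message:")))  # True

# The classic compiler-style line matches too, as intended:
m = MSBUILD_ERROR_WARNING.match(
    "Main.cs(17,20):Command line warning CS0168: The variable 'foo' is declared but never used"
)
print(m.group("CATEGORY"), m.group("CODE"))  # warning CS0168
```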
Would I "fix" the problem by overriding the CustomErrorRegularExpression parameter?
EDIT
(Question for myself: Why on Earth did I say "Yup" initially when my second sentence directly contradicts this?)
Nope. :) Set IgnoreStandardErrorWarningFormat to true. From the documentation you linked in your question:
IgnoreStandardErrorWarningFormat: Optional Boolean parameter.
If false, selects lines in the output that match the standard error/warning format, and logs them as errors/warnings.
If true, disable this behavior.
The default value is false.
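Applied to the target from the question, that would look like this (a sketch):

```xml
<Target Name="test" DependsOnTargets="restore">
  <!-- Stop Exec from treating NUnit's "Error Message:" lines as build errors -->
  <Exec Command="dotnet test"
        WorkingDirectory="test\Example.Tests"
        IgnoreStandardErrorWarningFormat="true" />
</Target>
```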

Gitlab-CI runner hangs after makefile test fails

I am using Gitlab-CI for my build tests. I have a very simple test which compares the output of the test install/build with the known output. I put the test in a makefile.
The Makefile entry looks like this:
test: clean
	make install DESTDIR=$(TEST_DIR)
	$(TEST_DIR)/path/to/executable > $(TEST_DIR)/tmp.out
	diff test/test.result $(TEST_DIR)/tmp.out
When the diff passes, an exit code of 0 is returned; an exit code of 1 is returned if the diff shows a difference between the files.
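That exit-code behavior can be checked directly in a shell (a quick sketch using throwaway files):

```shell
# diff exits 0 when the files match and 1 when they differ;
# make turns that 1 into "make: *** [test] Error 1"
printf 'expected\n' > /tmp/test.result
printf 'actual\n'   > /tmp/tmp.out
diff /tmp/test.result /tmp/tmp.out
echo "diff exit code: $?"
```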
What I've tried:
Running make test from any shell runs the tests and exits, regardless of diff result
Running make test from the shell as gitlab_ci_runner runs the tests and exits regardless of diff result
When run from Gitlab-CI, and the diff exit status is 0, the build returns success
The problem:
When run in Gitlab-CI and the diff exit status is non-0, the build hangs.
The output on the build screen is the output of the diff, and the last line is the expected error: make: *** [test] Error 1
After that, the cycle symbol keeps on, the runner does not exit with a build fail.
Any ideas? I thought it might be something with Makefiles, but Gitlab-CI will exit with a fail status if make exits with Error 1 for any other test. I can only see this happening on the output of the diff.
Thanks!
Also posted this to the GitLab mailinglist https://groups.google.com/d/msgid/gitlabhq/77e82813-b98e-4abe-9755-f39e07043384%40googlegroups.com?utm_medium=email&utm_source=footer

MSBuild script fails but produces no errors

I have a MSBuild script that I am executing through TeamCity.
One of the tasks it runs is from Xheo DeploxLX CodeVeil, which obfuscates some DLLs. The task I am using is called VeilProject. I have run the CodeVeil project through the interface manually and it works correctly, so I think I can safely assume that the actual obfuscation process is OK.
This task used to take around 40 minutes and the rest of the MSBuild file executed perfectly and finished without errors.
For some reason this task is now taking 1hr 20 minutes or so to execute. Once the VeilProject task is finished, the output from the task says it completed successfully, however the MSBuild script fails at this point. I have a task directly after the VeilProject task and its output never appears. Using diagnostic output from MSBuild I can see the following:
My questions are:
Would it be possible that the MSBuild script has timed out? Once the task has completed, is it past a certain timeout period, so it just fails?
Why would the build fail with no errors and no warnings?
[05:39:06]: [Target "Obfuscate"] Finished.
[05:39:06]: [Target "Obfuscate"] Saving exception map
[05:49:21]: [Target "Obfuscate"] Ended at 11/05/2010 05:49:21, ~1 hour, 48 minutes, 6 seconds
[05:49:22]: [Target "Obfuscate"] Done.
[05:49:51]: MSBuild output:
Ended at 11/05/2010 05:49:21, ~1 hour, 48 minutes, 6 seconds (TaskId:8)
Done. (TaskId:8)
Done executing task "VeilProject" -- FAILED. (TaskId:8)
Done building target "Obfuscate" in project "AMK_Release.proj.teamcity.patch.tcprojx" -- FAILED.: (TargetId:12)
Done Building Project "C:\Builds\Scripts\AMK_Release.proj.teamcity.patch.tcprojx" (All target(s)) -- FAILED.
Project Performance Summary:
6535484 ms C:\Builds\Scripts\AMK_Release.proj.teamcity.patch.tcprojx 1 calls
6535484 ms All 1 calls
Target Performance Summary:
156 ms PreClean 1 calls
266 ms SetBuildVersionNumber 1 calls
2406 ms CopyFiles 1 calls
6532391 ms Obfuscate 1 calls
Task Performance Summary:
16 ms MakeDir 2 calls
31 ms TeamCitySetBuildNumber 1 calls
31 ms Message 1 calls
62 ms RemoveDir 2 calls
234 ms GetAssemblyIdentity 1 calls
2406 ms Copy 1 calls
6528047 ms VeilProject 1 calls
Build FAILED.
0 Warning(s)
0 Error(s)
Time Elapsed 01:48:57.46
[05:49:52]: Process exit code: 1
[05:49:55]: Build finished
If the .exe is not returning standard exit codes then you may want to ignore the exit code by using the Exec task with IgnoreExitCode="true". If that doesn't work, then try the additional parameter IgnoreStandardErrorWarningFormat="true".
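As a sketch, both parameters together would look like this (the command name here is a placeholder, since the actual VeilProject invocation isn't shown):

```xml
<!-- Hypothetical example: tolerate non-standard exit codes and
     don't scan output lines for the error/warning pattern -->
<Exec Command="your-obfuscation-step.exe"
      IgnoreExitCode="true"
      IgnoreStandardErrorWarningFormat="true" />
```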