I have a TypeScript project set up that is built with GruntJS using the typescript plugin. I also have a Visual Studio project from which I'd like to be able to invoke the build process.
My first attempt at doing this was adding an <Exec> task to the BeforeBuild target in Visual Studio, with the <Exec> task configured like this:
<Exec Command="grunt --no-color typescript" />
This runs the build fine; however, when Grunt outputs errors and they populate the Error List in VS, the filename is incorrectly listed as EXEC.
Looking at the Exec documentation I see that CustomErrorRegularExpression is a parameter of the task, but I can't quite grasp how to use it to solve my problem.
I messed around with it a bit and managed to change the reported filename to my .jsproj file, which is also incorrect. Looking at this post I tried forming my own regex:
<Exec CustomErrorRegularExpression="\.ts\([0-9]+,[0-9]+\):(.*)" Command="grunt --no-color typescript" IgnoreExitCode="true" />
Does anyone have any experience using this command with this parameter to achieve this sort of thing? I think maybe part of the problem is that grunt is printing out errors in two lines?
You are correct about the Exec task only processing single line messages. In addition it also uses Regex.IsMatch to evaluate the error/warning condition, without making use of the pattern's capture groups.
I was not able to find a way to work around this via MSBuild, but the changes to correct the problem were easy to make directly in the grunt task.
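For reference, MSBuild's built-in error recognition (the same format the Error List understands) expects the whole diagnostic on a single line in the canonical form file(line,col): error CODE: message, so the goal of the changes below is to make grunt emit lines like this (hypothetical file and message):
app/main.ts(12,5): error TS2095: Cannot find name 'foo'.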
I'm using the grunt-typescript task from: https://www.npmjs.org/package/grunt-typescript.
There were two trivial changes necessary to make this work.
1) Replace the output utility methods near the top of tasks/typescript.js:
/* Remove the >> markers and extra spacing from the output */
function writeError(str) {
    console.log(str.trim().red);
}
function writeInfo(str) {
    console.log(str.trim().cyan);
}
2) Replace Compiler.prototype.addDiagnostic to write the file and error data on the same line:
Compiler.prototype.addDiagnostic = function (diagnostic) {
    var diagnosticInfo = diagnostic.info();
    if (diagnosticInfo.category === 1)
        this.hasErrors = true;
    var message = " ";
    if (diagnostic.fileName()) {
        message = diagnostic.fileName() +
            "(" + (diagnostic.line() + 1) + "," + (diagnostic.character() + 1) + "): ";
    }
    this.ioHost.stderr.Write(message + diagnostic.message());
};
Once these changes are made, you no longer need CustomErrorRegularExpression set on your Exec task, and your build output should display the error text with the correct source file, line, and column information.
The root problem is that Nix builds libxml2-2.9.14 with autoconf instead of CMake, and a consequence of this is that the CMake configuration files are missing (details like the version number and platform-specific dependencies such as ws2_32, which my project's CMake scripts need). libxml2-2.9.14 already ships with a CMake configuration that works nicely, except that Nix does not use it (I guess they have their own reasons).
Therefore I would like to reuse the libxml2-2.9.14 Nix package and override the builder script with my own (which is a trivial CMake dance).
Here is my attempt:
defaultPackage = forAllSystems (system:
  let
    pkgs = nixpkgsFor.${system};
    cmakeLibxml = pkgs.libxml2.overrideAttrs (o: rec {
      PROJECT_ROOT = builtins.getEnv "PWD";
      builder = "${PROJECT_ROOT}/nix-libxml2-builder.sh";
    });
  in
Where nix-libxml2-builder.sh is my script calling cmake with all the options I need. It fails like this:
last 1 log lines:
> bash: /nix-libxml2-builder.sh: No such file or directory
For full logs, run 'nix log /nix/store/andvld0jy9zxrscxyk96psal631awp01-libxml2-2.9.14.drv'.
As you can see, the issue is that PROJECT_ROOT does not get set (it is ignored) and I do not know how else to feed my builder script to Nix.
What am I doing wrong?
Guessing from the use of defaultPackage in your snippet, you use flakes. Flakes are evaluated in pure evaluation mode, which means there is no way to influence the build from outside. Hence, getEnv always returns an empty string (unfortunately, this is not properly documented).
There is no need to refer to the builder script via $PWD. The whole flake is copied to the nix store so you can use your files directly. For example:
builder = ./nix-libxml2-builder.sh;
That said, the build will probably still fail, because cmake will not be available in the build environment. You would have to override the nativeBuildInputs attribute to add cmake there.
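A minimal sketch of what that could look like (untested, using only standard overrideAttrs mechanics):
cmakeLibxml = pkgs.libxml2.overrideAttrs (o: {
  # The whole flake is copied to the store, so a path literal is enough.
  builder = ./nix-libxml2-builder.sh;
  # Make cmake available at build time, keeping whatever tools were already there.
  nativeBuildInputs = (o.nativeBuildInputs or [ ]) ++ [ pkgs.cmake ];
});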
I use Yocto and I was wondering how variable scope works in a BitBake recipe.
My recipe looks like:
SRC_URI += "file://something"

python do_fetch_prepend() {
    d.appendVar("SRC_URI", "https://www.bla.com/resource.tar")
    bb.error("SRC_URI_1: %s " % d.getVar("SRC_URI"))
    d.setVar("TEST_VAR", "test")
}
python do_unpack_append() {
    bb.error("SRC_URI_2: %s " % d.getVar("SRC_URI"))
    bb.error("TEST_VAR: %s " % d.getVar("TEST_VAR"))
}
I run bitbake -v -c unpack myrecipe
SRC_URI_1 is printed as expected: "file://something https://www.bla.com/resource.tar"
SRC_URI_2 is printed as: "file://something"
TEST_VAR is printed as: None
It looks like setting or changing a variable in the datastore (d) only takes effect within the scope of do_fetch. Is this expected behaviour? I read in the documentation that 'd' is a global variable.
If this is expected behaviour, is there a way to change global variables in a task of a recipe?
The reason behind the question is that I need another native recipe before I can add the extra URI to SRC_URI. I first tried inline Python variable expansion, but the BitBake parser already expands the variable before the native recipe is put in the native directory. So I try to change SRC_URI during the fetch task, and I 'load' my native recipe as follows:
python () {
    d.appendVarFlag('do_parse', 'depends', 'my-recipe-native:do_populate_sysroot')
}
In do_fetch_prepend I use this native recipe, which gives me the correct URL that I want to append to SRC_URI. Fetching, unpacking, cleaning, etc. of the native recipe all work, and fetching in my own recipe seems to work too, but the unpacking does not, because SRC_URI is not updated.
Within a given task, variable changes are only local. This means do_unpack does not 'see' a change made by the do_fetch task.
This is necessary to allow some tasks to rerun when others are covered by sstate, to ensure things are deterministic.
If you really want to do what you describe, you'd need something like a prefunc for the tasks where you need to modify SRC_URI.
python myprefunc() {
    d.appendVar("SRC_URI", "https://www.bla.com/resource.tar")
}

do_fetch[prefuncs] += "myprefunc"
do_unpack[prefuncs] += "myprefunc"
However, note that whilst this will do some of what you want, source archives, license manifests, and sstate checksums may not work correctly, since you're "hiding" source data from bitbake: this data is only present at task execution time, not parse time.
I have set up a gradle project that uses the Liquibase Gradle Plugin.
I am trying to use the functionality described in liquibase output
When I run gradle updateSQL, the task outputs every change to the terminal. I tried redirecting the output, as in gradle updateSQL > changes.sql, but this also includes output that I cannot run later on, and it contains all the changes, not just the pending updates.
I am also trying the updateCountSql command (the description says "Writes SQL to apply the next change sets to STDOUT."). I have tried to pass parameters to this task but I can't make it work (I constantly get the error "The Liquibase updateCountSql command requires a value"). Does anyone know how it works?
I just need to keep track of the changes on the database, and be able to create a script with all the changes.
Thanks in advance.
You can specify a target other than stdout using outputFile, e.g.:
liquibase {
    activities {
        main {
            changeLogFile 'src/main/db/changelogs.groovy'
            url 'jdbc:mysql://localhost:3306/my_db'
            username 'myusername'
            password 'mypassword'
            outputFile 'path/to/script.sql'
        }
    }
}
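As for the "requires a value" error: if I recall correctly, the Liquibase Gradle Plugin passes the argument for value-taking commands such as updateCountSql via the liquibaseCommandValue project property, so something like this should write the SQL for just the next change set:
gradle updateCountSql -PliquibaseCommandValue=1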
I had a script that was running locally, but it failed on Rally.
It turns out the reason is that the script contains the following line:
var regex = new RegExp("/Metrics/" + this.type + "/(\\d+)-(\\d+)");
This is so I can look for a particular string based on this.type. Unfortunately something in the rake file changes the \\d expressions to \d, which breaks the script. This will probably break any script that relies on a double backslash to escape things.
I was able to get around this by using [0-9] instead of \d, but it would be nice to get a more robust workaround to this nasty little gotcha.
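In other words, the workaround was simply:
var regex = new RegExp("/Metrics/" + this.type + "/([0-9]+)-([0-9]+)");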
This issue is fixed as of this commit. You can download the new rake file and rebuild your app.
I am using a SQL 2008 database project (in Visual Studio) to manage the schema and initial test data for my project. The database project uses a post-deployment script which includes a number of other scripts using SQLCMD's ":r" syntax.
I would like to be able to conditionally include certain files based on a SQLCMD variable. This will allow me to run the project several times with our nightly build to set up various versions of the database with different configurations of the data (for a multi-tenant system).
I have tried the following:
IF ('$(ConfigSetting)' = 'Configuration1')
BEGIN
    print 'inserting specific configuration'
    :r .\Configuration1\Data.sql
END
ELSE
BEGIN
    print 'inserting generic data'
    :r .\GenericConfiguration\Data.sql
END
But I get a compilation error:
SQL01260: A fatal parser error occurred: Script.PostDeployment.sql
Has anyone seen this error or managed to configure their postdeployment script to be flexible in this way? Or am I going about this in the wrong way completely?
Thanks,
Rob
P.S. I've also tried changing this around so that the path to the file is a variable, similar to this post. But this gives me an error saying that the path is incorrect.
UPDATE
I've now discovered that the if/else syntax above doesn't work for me because some of my linked scripts require a GO statement. Essentially the :r just imports the scripts inline, so this becomes invalid syntax.
If you need a GO statement in the linked scripts (as I do) then there isn't any easy way around this. I ended up creating several post-deployment scripts and then changing my project to overwrite the main post-deployment script at build time depending on the build configuration. This is now doing what I need, but it seems like there should be an easier way!
For anyone needing the same thing - I found this post useful
So in my project I have the following post deployment files:
Script.PostDeployment.sql (empty file which will be replaced)
Default.Script.PostDeployment.sql (links to scripts needed for standard data config)
Configuration1.Script.PostDeployment.sql (links to scripts needed for a specific data config)
I then added the following to the end of the project file (right click to unload and then right click edit):
<Target Name="BeforeBuild">
  <Message Text="Copy files task running for configuration: $(Configuration)" Importance="high" />
  <Copy Condition=" '$(Configuration)' == 'Release' " SourceFiles="Scripts\Post-Deployment\Default.Script.PostDeployment.sql" DestinationFiles="Scripts\Post-Deployment\Script.PostDeployment.sql" OverwriteReadOnlyFiles="true" />
  <Copy Condition=" '$(Configuration)' == 'Debug' " SourceFiles="Scripts\Post-Deployment\Default.Script.PostDeployment.sql" DestinationFiles="Scripts\Post-Deployment\Script.PostDeployment.sql" OverwriteReadOnlyFiles="true" />
  <Copy Condition=" '$(Configuration)' == 'Configuration1' " SourceFiles="Scripts\Post-Deployment\Configuration1.Script.PostDeployment.sql" DestinationFiles="Scripts\Post-Deployment\Script.PostDeployment.sql" OverwriteReadOnlyFiles="true" />
</Target>
Finally, you will need to setup matching build configurations in the solution.
Also, for anyone trying other workarounds, I tried the following without any luck:
Creating a post-build event to copy the files instead of having to hack the project file XML. I couldn't get this to work because I couldn't form the correct path to the post-deployment script file. This Connect issue describes the problem.
Using variables for the script path to pass to the :r command. But I came across several errors with this approach.
I managed to work around the problem using the noexec method.
So, instead of this:
IF ('$(ConfigSetting)' = 'Configuration1')
BEGIN
print 'inserting specific configuration'
:r .\Configuration1\Data.sql
END
I reversed the conditional and set NOEXEC ON to skip over the imported statement(s) thusly:
IF ('$(ConfigSetting)' <> 'Configuration1')
    SET NOEXEC ON
:r .\Configuration1\Data.sql
SET NOEXEC OFF
Make sure you turn it back off if you want to execute any subsequent statements.
Here's how I am handling conditional deployment within the post deployment process to deploy test data for the Debug but not Release configuration.
First, in solution explorer, open the project properties folder, and right-click to add a new SqlCmd.variables file.
Name the file Debug.sqlcmdvars.
Within the file, add your custom variables, and then add a final variable called $(BuildConfiguration), and set the value to Debug.
Repeat the process to create a Release.sqlcmdvars, setting the $(BuildConfiguration) to Release.
Now, configure your configurations:
Open up the project properties page to the Deploy tab.
On the top dropdown, set the configuration to be Debug.
On the bottom dropdown, (Sql command variables), set the file to Properties\Debug.sqlcmdvars.
Repeat for Release as:
On the top dropdown, set the configuration to be Release.
On the bottom dropdown, (Sql command variables), set the file to Properties\Release.sqlcmdvars.
Now, within your Script.PostDeployment.sql file, you can specify conditional logic such as:
IF 'Debug' = '$(BuildConfiguration)'
BEGIN
    PRINT '***** Creating Test Data for Debug configuration *****';
    :r .\TestData\TestData.sql
END
In solution explorer, right click on the top level solution and open Configuration Manager. You can specify which configuration is active for your build.
You can also specify the configuration on the MSBUILD.EXE command line.
There you go- now your developer builds have test data, but not your release build!
As Rob worked out, GO statements aren't allowed in the linked SQL scripts, as they would end up nested within the BEGIN/END block.
However, I have a different solution to his - if possible, remove any GO statements from the referenced scripts, and put a single one after the END statement:
IF '$(DeployTestData)' = 'True'
BEGIN
    :r .\TestData\Data.sql
END
GO -- moved from Data.sql
Note that I've also created a new variable in my sqlcmdvars file called $(DeployTestData) which allows me to turn on/off test script deployment.
I found a hack from an MSDN blog which worked fairly well. The trick is to write the commands to a temp script file and then execute that script instead. Basically the equivalent of dynamic SQL for SQLCMD.
-- Helper newline variable
:setvar CRLF "CHAR(13) + CHAR(10)"
GO
-- Redirect output to the TempScript.sql file
:OUT $(TEMP)\TempScript.sql
IF ('$(ConfigSetting)' = 'Configuration1')
BEGIN
    PRINT 'print ''inserting specific configuration'';' + $(CRLF)
    PRINT ':r .\Configuration1\Data.sql' + $(CRLF)
END
ELSE
BEGIN
    PRINT 'print ''inserting generic data'';' + $(CRLF)
    PRINT ':r .\GenericConfiguration\Data.sql' + $(CRLF)
END
GO
-- Change output back to stdout
:OUT stdout
-- Now execute the generated script
:r $(TEMP)\TempScript.sql
GO
The TempScript.sql file will then contain either:
print 'inserting specific configuration';
:r .\Configuration1\Data.sql
or
print 'inserting generic data';
:r .\GenericConfiguration\Data.sql
depending on the value of $(ConfigSetting) and there will be no problems with GO statements etc. when it is executed.
I was inspired by Rob Bird's solution. However, I am simply using build events to replace the post-deployment scripts based on the selected build configuration.
I have one empty "dummy" post deployment script.
I set up a pre-build event to replace this "dummy" file based on the selected build configuration (see attached picture).
I set up a post-build event to put the "dummy" file back after the build has finished (see attached picture). The reason is that I do not want this swap to generate changes in source control after the build.
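For illustration, the pre-build and post-build event command lines might look something like this (paths and file names are hypothetical; $(ProjectDir) and $(ConfigurationName) are the standard Visual Studio build event macros):
rem Pre-build: swap in the post-deployment script for the selected configuration
copy /Y "$(ProjectDir)Scripts\$(ConfigurationName).Script.PostDeployment.sql" "$(ProjectDir)Scripts\Script.PostDeployment.sql"
rem Post-build: restore the empty dummy script so no change shows up in source control
copy /Y "$(ProjectDir)Scripts\Dummy.Script.PostDeployment.sql" "$(ProjectDir)Scripts\Script.PostDeployment.sql"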