Showing RavenDB request in test result output - ravendb

I would like all calls made to RavenDB to be shown in the ReSharper test runner when I run my tests. Is there some kind of logging or tracing that can be turned on in the client?

Pelle,
RavenDB uses NLog under the covers to log its actions. You can configure NLog to output everything from Raven.Client.* to the console / debug with the following configuration:
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.netfx35.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <target xsi:type="Console" name="Console" />
  </targets>
  <rules>
    <logger name="Raven.Client.*" writeTo="Console" />
  </rules>
</nlog>
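If the console target's output doesn't surface in the ReSharper runner, a variant that writes through the debugger may work better. This is only a sketch from memory (double-check the Debugger target type against the NLog docs); the runner generally captures System.Diagnostics debug output:
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.netfx35.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <!-- Writes log messages to the attached debugger / debug output -->
    <target xsi:type="Debugger" name="DebugOut" />
  </targets>
  <rules>
    <logger name="Raven.Client.*" writeTo="DebugOut" />
  </rules>
</nlog>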

This is just something off the top of my head. I use ReSharper with NUnit or xUnit. To get any info into the ReSharper unit test output window, I just use Debug.WriteLine("blah"); works great.
Now, RavenDB does have some places where you can add your own custom logic, such as a Debug.WriteLine(..), by creating your own IDocumentQueryListener implementations.
So maybe implement one of them?
public class DebugWriteLineDocumentListener : IDocumentQueryListener
{
    #region Implementation of IDocumentQueryListener

    public void BeforeQueryExecuted(IDocumentQueryCustomization queryCustomization)
    {
        // Do whatever you want, here.
        // UnTested .. but I *think* the ToString will list the Linq query.
        Debug.WriteLine(queryCustomization.ToString());
    }

    #endregion
}
Then just register this listener with your DocumentStore ...
documentStore.RegisterListener(new DebugWriteLineDocumentListener());
See if that helps ..
I've not tried it .. just the first thing that came into my head :)
GL! Keep us posted!

Related

Play 2.4 - Display Ebeans SQL statement in logs

How do I display SQL statements in the log? I'm using Ebean and it fails to insert for some reason, but I can't see what the problem is.
I tried to edit my config to:
db.default.logStatements=true
and add this to logback.xml
<logger name="com.jolbox" level="DEBUG" />
to follow some answers I found online, but it doesn't seem to work for 2.4…
Logging has changed with Play 2.4. To display the SQL statements in the console, simply add the following line to the conf/logback.xml file:
<logger name="org.avaje.ebean.SQL" level="TRACE" />
It should work just fine.
As @Flo354 pointed out in the comments, with Play 2.6 you should use:
<logger name="io.ebean" level="TRACE" />
Logging SQL statements is very easy with Play 2.5: it has a built-in way to log SQL statements, based on jdbcdslog, that works across all JDBC databases, connection pool implementations and persistence frameworks (Anorm, Ebean, JPA, Slick, etc.). When you enable logging you will see each SQL statement sent to your database as well as performance information about how long the statement takes to run.
The SQL log statement feature in Play 2.5 can be configured per database, using the logSql property:
db.default.logSql=true
After that, you can configure the jdbcdslog-exp log level by adding these lines to logback.xml:
<logger name="org.jdbcdslog.ConnectionLogger" level="OFF" /> <!-- Won't log connections -->
<logger name="org.jdbcdslog.StatementLogger" level="INFO" /> <!-- Will log all statements -->
<logger name="org.jdbcdslog.ResultSetLogger" level="OFF" /> <!-- Won't log result sets -->
FYI, there's a nice video tutorial on Ebean's new doc page showing how to capture SQL statements only for selected areas of the code.
Thanks to this you can log statements only in problematic places while developing and/or use the logged statements for performing tests, as shown in the video.
In short: add the latest avaje-ebeanorm-mocker dependency to your build.sbt as usual (see the sketch after the examples below), so later you can use it in your code like:
LoggedSql.start();
User user = User.find.byId(123);
// ... other queries
List<String> capturedLogs = LoggedSql.stop();
Note you don't even need to fetch the List of statements if you do not need to process them, as they are displayed in the console as usual. So you can use it like this as well:
if (Play.isDev()) LoggedSql.start();
User user = User.find.byId(345);
// ... other queries
if (Play.isDev()) LoggedSql.stop();
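For completeness, a rough sketch of the build.sbt line for the dependency mentioned above; the group id is from memory and the version is a placeholder, so check the latest coordinates on Maven Central:
libraryDependencies += "org.avaje.ebeanorm" % "avaje-ebeanorm-mocker" % "x.y.z" % "test"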
I had success using jdbcdslog. As @Saeed Zarinfam mentioned here, Play 2.5 includes this by default.
Unlike this answer, this solution shows the parameter values instead of question marks.
Here are the steps I followed to get it working for Play 2.4 and MySQL:
Add to build.sbt:
"com.googlecode.usc" % "jdbcdslog" % "1.0.6.2"
Add to logback.xml:
<logger name="org.jdbcdslog.StatementLogger" level="INFO" /> <!-- Will log all statements -->
Create a conf/jdbcdslog.properties file containing:
jdbcdslog.driverName=mysql
jdbcdslog.showTime=true
Change db.default.url (example):
jdbc:mysql://127.0.0.1:3306/mydb
changes to
jdbc:jdbcdslog:mysql://127.0.0.1:3306/mydb;targetDriver=com.mysql.jdbc.Driver
Change db.default.driver:
org.jdbcdslog.DriverLoggingProxy
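Putting the last two steps together, the relevant part of conf/application.conf ends up looking roughly like this (a sketch assuming a local MySQL database named mydb; adjust host, port and database name to your setup):
db.default.driver=org.jdbcdslog.DriverLoggingProxy
db.default.url="jdbc:jdbcdslog:mysql://127.0.0.1:3306/mydb;targetDriver=com.mysql.jdbc.Driver"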

How can I perform MSBuild batching on Items that don't represent files?

The MSBuild documentation hints in several places that Items aren't necessarily the same as files.
"MSBuild items are inputs into the build system, and they typically represent files."
"Items are objects that typically represent files."
However, I can't seem to find any examples where Items do not represent files. In particular, I would like to perform batching over a set of non-file items. But every item that I create, even from a custom build task, somehow acquires file-like metadata (FullPath, RootDir, Filename, Extension, etc.). Furthermore, I'm confused about the ramifications of setting the Inputs of a target to a set of items that aren't files, and what to use as that target's Outputs.
Does anybody have an example of using non-file Items to perform batching in MSBuild?
edit
Sorry for taking so long to come up with an example. I understand things a bit more, but I'm still uncertain (and there seems to be a complete lack of documentation about this). Everything here is going off my recollection; I'm not at my work computer right now, so I can't verify any of it.
In my experience, MSBuild doesn't like to build multiple configurations of a .sln file in one go. So, this:
msbuild.exe SampleMSBuild.sln /p:Configuration=Debug%3BRelease
(The encoded semicolon being necessary so that it doesn't try to define multiple properties.)
Produces this:
"D:\src\SampleMSBuild\SampleMSBuild.sln" (default target) (1) ->
(ValidateSolutionConfiguration target) ->
D:\src\SampleMSBuild\SampleMSBuild.sln.metaproj : error MSB4126: The
specified solution configuration "Debug;Release|Any CPU" is invalid.
Please specify a valid solution configuration using the Configuration
and Platform properties (e.g. MSBuild.exe Solution.sln
/p:Configuration=Debug /p:Platform="Any CPU") or leave those properties
blank to use the default solution configuration.
[D:\src\SampleMSBuild\SampleMSBuild.sln]
So, it seems like it should be possible to use batching and items to handle this.
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
         DefaultTargets="Rebuild"
         ToolsVersion="4.0">
  <ItemGroup>
    <Configurations Include="Debug" />
    <Configurations Include="Release" />
  </ItemGroup>
  <UsingTask TaskName="LogMetadata"
             TaskFactory="CodeTaskFactory"
             AssemblyFile="$(MSBuildToolsPath)\Microsoft.Build.Tasks.v4.0.dll">
    <ParameterGroup>
      <Items ParameterType="Microsoft.Build.Framework.ITaskItem[]"
             Required="true" />
    </ParameterGroup>
    <Task>
      <Code Type="Fragment" Language="cs">
        <![CDATA[
        foreach (var item in Items) {
            Console.Write(item.ItemSpec);
            Console.Write(" {");
            foreach (string metadataKey in item.MetadataNames) {
                Console.Write(metadataKey);
                Console.Write("=\"");
                Console.Write(item.GetMetadata(metadataKey).Replace("\"", "\\\""));
                Console.Write("\" ");
            }
            Console.WriteLine("}");
        }]]>
      </Code>
    </Task>
  </UsingTask>
  <Target Name="Rebuild">
    <LogMetadata Items="%(Configurations.Identity)" />
  </Target>
</Project>
Which produces this:
Debug {
FullPath="D:\src\SampleMSBuild\Debug"
RootDir="D:\"
Filename="Debug"
Extension=""
RelativeDir=""
Directory="src\SampleMSBuild\"
RecursiveDir=""
Identity="Debug"
ModifiedTime=""
CreatedTime=""
AccessedTime=""
}
Release {
FullPath="D:\src\SampleMSBuild\Release"
RootDir="D:\"
Filename="Release"
Extension=""
RelativeDir=""
Directory="src\SampleMSBuild\"
RecursiveDir=""
Identity="Release"
ModifiedTime=""
CreatedTime=""
AccessedTime=""
}
As you can see, the items have all kinds of file metadata attached to them. I can't drop the Include attribute, since it's required, but I could synthesize the items in a custom task. HOWEVER, when I do that, they still somehow magically gain all the same file metadata.
What are the ramifications of this? Since I haven't specified these as Inputs or Outputs to a target, will the file metadata cause any problems? Will the build system skip over targets, or build more than it needs to, because the files specified in the Items' FullPath metadata do not exist? What if those files did exist? Would it cause any problems?
I use items to build several solutions one after another and to do some "manual" tasks with them, therefore I define my own items:
<ItemGroup>
  <MergeConfigurations Include="project1\project1.SDK.sln">
    <MergeOutAssemblyName>product1.dll</MergeOutAssemblyName>
    <MergePrimaryAssemblyName>project1.Interfaces.dll</MergePrimaryAssemblyName>
    <SolutionBinaries>project1\bin\$(FlavorToBuild)</SolutionBinaries>
  </MergeConfigurations>
  <MergeConfigurations Include="project1\project1.Plugin.sln">
    <MergeOutAssemblyName>product1.dll</MergeOutAssemblyName>
    <MergePrimaryAssemblyName>project1.Interfaces.dll</MergePrimaryAssemblyName>
    <SolutionBinaries>project1\bin\plugin\$(FlavorToBuild)</SolutionBinaries>
  </MergeConfigurations>
</ItemGroup>
Then I use a target to take the information and do what is necessary:
<Target Name="MergeSolution"
Inputs="%(MergeConfigurations.Identity)"
Outputs="%(MergeConfigurations.Identity)\Ignore_this">
<PropertyGroup>
<MergeSolution>%(MergeConfigurations.Identity)</MergeSolution>
<MergeOutAssemblyName>%(MergeConfigurations.MergeOutAssemblyName)</MergeOutAssemblyName>
<MergePrimaryAssemblyName>%(MergeConfigurations.MergePrimaryAssemblyName)</MergePrimaryAssemblyName>
<SolutionBinaries>%(MergeConfigurations.SolutionBinaries)</SolutionBinaries>
</PropertyGroup>
[....]
</Target>
Hope this helps to point you in the direction you need.
Tricky question. I'm an MSBuild noob, and found myself needing to understand batching recently, so figured I'd share some of what I found.
In general, yes, it seems that most everywhere you see samples and discussions about ITaskItems they tend to be about files. But the underlying implementation is very flexible and can deal with many other things as well. In my case, I've been working with strings and XML data.
This MS article gives some great examples of non-file ItemGroups and metadata.
This article was the best summary I could find that talks about the mechanics of Items and how they are different from Properties. It also covers the whole bit about @ and % syntax, converting between strings and Items, and a hint as to where those file metadata properties are coming from - MSBuild is optimized for it.
Whenever you have an interface used as a parameter to a task or whatever, there is going to be a default implementation of that interface somewhere. My guess is that your code sample is newing up a number of these default objects under the hood and those define the metadata that are created by default. If you were to implement this interface yourself, I'll wager you could change this behavior. Probably beyond the scope of the question though =)
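To tie this back to the solution-configuration example from the question, here is a minimal, untested sketch of task batching over those non-file Configurations items; the synthetic file metadata is simply never consulted:
<Target Name="RebuildAll">
  <!-- %(Configurations.Identity) makes MSBuild run the task once per Configurations item -->
  <MSBuild Projects="SampleMSBuild.sln"
           Targets="Rebuild"
           Properties="Configuration=%(Configurations.Identity)" />
</Target>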

How to check if a MSBuild-Task fails if using ContinueOnError=true

I am running the MSBuild task with ContinueOnError=true:
<MSBuild Projects="#(ComponentToDeploy)"
Targets="$(DeploymentTargets)"
Properties="$(CommonProperties);%(AdditionalProperties)"
ContinueOnError="true"
Condition="%(Condition)"/>
So my build always succeeds.
Is there a way to find out if any error occurs?
I could not find any Output of the MSBuild task containing this information.
The only way I know is to parse the log file for errors but it looks like a workaround for me.
(I am using MSBuild 4.0)
This is an answer to the last feedback from @Ilya.
I'm posting it as an answer because of the length and formatting restrictions of comments.
Log is scoped to individual targets or to be more specific tasks...
This was indeed the first question that arose when I read your comment suggesting Log.HasLoggedErrors: "What is the scope of the Log?".
Unfortunately I was not able to find proper documentation. MSDN does not help much...
How did you know it is scoped to the task?
I don't doubt your statement at all! I'm just wondering if there is proper documentation somewhere...
(I haven't been using MSBuild for years ;-)
In any case, what are you building as project?
My test projects are very simple.
MyTest.proj:
<?xml version="1.0" encoding="utf-8" ?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="ElenasTarget" ToolsVersion="4.0">
  <UsingTask AssemblyFile="$(MSBuildProjectDirectory)\MyCompany.Tools.MSBuild.Tasks.dll" TaskName="MSBuildWithHasLoggedErrors" />
  <ItemGroup>
    <MyProjects Include="CopyNotExistingFile.proj" />
  </ItemGroup>
  <Target Name="ElenasTarget">
    <MSBuildWithHasLoggedErrors Projects="@(MyProjects)" ContinueOnError="true">
      <Output TaskParameter="HasLoggedErrors" PropertyName="BuildFailed" />
    </MSBuildWithHasLoggedErrors>
    <Message Text="BuildFailed=$(BuildFailed)" />
  </Target>
</Project>
The CopyNotExistingFile.proj just tries to copy a file that does not exist:
<?xml version="1.0" encoding="utf-8" ?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Target1" ToolsVersion="4.0">
  <Target Name="Target1">
    <Copy SourceFiles="C:\lalala.bum" DestinationFiles="C:\tralala.bam" />
  </Target>
</Project>
And this is my custom task MSBuildWithHasLoggedErrors
namespace MyCompany.Tools.MSBuild.Tasks
{
    public class MSBuildWithHasLoggedErrors : Microsoft.Build.Tasks.MSBuild
    {
        [Output]
        public bool HasLoggedErrors { get; private set; }

        public override bool Execute()
        {
            try
            {
                base.Execute();
                HasLoggedErrors = Log.HasLoggedErrors;
            }
            catch (Exception e)
            {
                Log.LogErrorFromException(e, true);
                return false;
            }
            return true;
        }
    }
}
If I build my MyTest.proj, HasLoggedErrors will be set to false although an error (MSB3021) was logged(?) to the console logger:
Project "C:\Users\elena\mytest.proj" on node 1 (default targets).
Project "C:\Users\elena\mytest.proj" (1) is building "C:\Users\elena\CopyNotExistingFile.proj" (2) on node 1 (default targets).
Target1:
Copying file from "C:\lalala.bum" to "C:\tralala.bam".
C:\Users\elena\CopyNotExistingFile.proj(5,4): error MSB3021: Unable to copy file "C:\lalala.bum" to "C:\tralala.bam". Could not find file 'C:\lalala.bum'.
Done Building Project "C:\Users\elena\CopyNotExistingFile.proj" (default targets) -- FAILED.
ElenasTarget:
BuildFailed=False
Done Building Project "C:\Users\elena\mytest.proj" (default targets).
Build succeeded.
My expectation was HasLoggedErrors would be set to true.
one way is to build self but with different target, for example your DefaultTargets one launches your custom MSBuildWrapper task pointing to itself (ie $(MSBuildProjectFile)) but with a different target that does other builds, copies
I've already tried it (that was the investigation I meant in my post). Unfortunately it doesn't work either :-(
(I am aware you said in theory).
My new single project looks like this:
<?xml version="1.0" encoding="utf-8" ?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="ElenasTarget" ToolsVersion="4.0">
  <UsingTask AssemblyFile="$(MSBuildProjectDirectory)\MyCompany.Tools.MSBuild.Tasks.dll" TaskName="MSBuildWithHasLoggedErrors" />
  <Target Name="ElenasTarget">
    <MSBuildWithHasLoggedErrors Projects="$(MSBuildProjectFile)" Targets="CopyNotExistingFile" ContinueOnError="true">
      <Output TaskParameter="HasLoggedErrors" PropertyName="BuildFailed" />
    </MSBuildWithHasLoggedErrors>
    <Message Text="BuildFailed=$(BuildFailed)" />
  </Target>
  <Target Name="CopyNotExistingFile">
    <Copy SourceFiles="C:\lalala.bum" DestinationFiles="C:\tralala.bam" />
  </Target>
</Project>
If I build this project, HasLoggedErrors will still be set to false.
(Furthermore, the "real" build I'm currently maintaining is much more complex, containing several project files with targets... so I can't pack them all into a single project file.)
or writing custom logger and passing it through command line
That was my last hope!
My "real" build has a custom logger passed through the command line (I didn't use it for my test project for the sake of simplicity). That is actually producing the log (a XML file) I'm going to parse to find out if any errors have been logged.
BTW, I thought the console logger is a kind of "global" logger. Am I wrong?
Anyway, the custom logger does not help either; Log.HasLoggedErrors is still set to false.
Is there some way I am not aware of to reference a particular logger (e.g. my custom logger) to ask if it has logged any errors?
It really looks like Log is scoped to individual targets.
Hmm... if the reflection on the buildengine instance is the last resort I would still prefer parsing the log.
(Don't blame me! :-) )
My decision
After some investigation I've decided to stick with my initial solution: parse the log to find out if the build failed.
Check my comments to see why I prefer that to the suggestions that have been provided so far.
If someone has some other ideas do not hesitate to share :-)
(Otherwise this question can be closed, I suppose...)
The MSBuildLastTaskResult reserved property will be set to True if the last task succeeded and False if the last task failed:
<MSBuild Projects="#(ComponentToDeploy)"
Targets="$(DeploymentTargets)"
Properties="$(CommonProperties);%(AdditionalProperties)"
ContinueOnError="true"
Condition="%(Condition)" />
<Message Text="MSBuild failed!" Condition="'$(MSBuildLastTaskResult)' == 'False'" />
I believe this was introduced with MSBuild v4.0.
I know this thread is a bit old, but another possible solution, as I presume you need to know that the build failed in order to execute some "final task", is to use:
<OnError ExecuteTargets="FinalReportTarget;CleanupTarget" />
That would fail the build in case of error, but execute the "FinalReportTarget" and "CleanupTarget".
ContinueOnError="true" is not needed in this case.
You could capture TargetOutputs and check them for error conditions afterwards, but that's still quite hackish.
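A rough sketch of capturing them (what you can usefully check for afterwards depends entirely on what the invoked targets return):
<MSBuild Projects="@(ComponentToDeploy)"
         Targets="$(DeploymentTargets)"
         ContinueOnError="true">
  <!-- Collects whatever the invoked targets return into an item list -->
  <Output TaskParameter="TargetOutputs" ItemName="DeployOutputs" />
</MSBuild>
<Message Text="Deployment outputs: @(DeployOutputs)" />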
If you only want to check whether the MSBuild task failed, use the Exec task instead. Set IgnoreExitCode to true and check the ExitCode output value. If it is not zero, something is wrong.
If you need the list of build errors, use /fileloggerparameters command line switch to log errors only to some specific file:
/flp1:logfile=errors.txt;errorsonly
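A sketch of that combination (the project file name and property name are placeholders):
<Exec Command="msbuild.exe MyComponent.proj /flp1:logfile=errors.txt;errorsonly"
      IgnoreExitCode="true">
  <Output TaskParameter="ExitCode" PropertyName="DeployExitCode" />
</Exec>
<Warning Text="The nested build failed, see errors.txt"
         Condition="'$(DeployExitCode)' != '0'" />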
But if another task inside some target (e.g. the Copy task) raised an error, Log.HasLoggedErrors returns false.
Didn't know comments have length limits...
Log is scoped to individual targets or to be more specific tasks, and (as far as I'm aware) there is no way to get a "global" one, maybe through reflection on the buildengine instance, or writing custom logger and passing it through command line. In any case, what are you building as project? HasLoggedErrors works as expected (and has been working unchanged for years), it shows if the project being built logged any errors. It doesn't, and shouldn't, have any control over logging of other tasks (that might use other types of loggers). If you want a global one, one way is to build self but with different target, for example your DefaultTargets one launches your custom MSBuildWrapper task pointing to itself (ie $(MSBuildProjectFile)) but with a different target that does other builds, copies, etc. In theory it should simulate a global HasLoggedErrors...

Problem with struts 2 and json plugin

I'm using the json plugin that comes with struts 2 (json-lib-2.1.jar) and trying to follow the website to set it up.
Here's my struts.xml
<struts>
  <package name="example" extends="json-default">
    <action name="AjaxRetrieveUser" class="actions.view.RetrieveUser">
      <result type="json"/>
    </action>
  </package>
</struts>
but I get this warning:
SEVERE: Unable to find parent packages json-default
Is there something else I'm supposed to do?
Edit:
I added this method to my RetrieveUser:
public Map<String,Object> getJsonModel()
{
    return jsonModel;
}
And my struts.xml looks like this:
<struts>
  <package name="example" extends="json-default">
    <action name="AjaxRetrieveUser" class="actions.view.RetrieveUser">
      <result type="json"/>
      <param name="root">jsonModel</param>
    </action>
  </package>
</struts>
However, I don't think the response is going from the RetrieveUser class to the javascript. I'm using firebug and no request gets sent.
I believe that net.sf.json-lib is just a toolset you can use in your Java to build up JSON-ready objects, suitable to be returned by actions such as you describe.
Probably, you need to include struts-json-plugin - make sure its version matches your struts version.
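If your project is Maven-based, the plugin dependency would look something like this (a sketch; keep the version in line with your Struts 2 version):
<dependency>
  <groupId>org.apache.struts</groupId>
  <artifactId>struts2-json-plugin</artifactId>
  <version>${struts2.version}</version>
</dependency>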
I notice also that, as written, your action will attempt to return RetrieveUser itself, serialized. Most implementations I've done/seen specify the root object to be returned, by adding
<param name="root">jsonUser</param>
under the <result> tag, and defining this method in RetrieveUser:
public Map<String, Object> getJsonUser()
[This is mentioned in the Struts 2 docs.] Hope that helps.
[edit] I use Map - you could also use the object structures provided by json-lib instead.
Re: your edit. We probably need to see your calling JavaScript. And I would suggest that you make sure you have both a success and an error handler. Can you debug/log to show that the method is being called in Java? Do your logs show anything? This is usually some sort of error...

Simulating the Maven2 filter mechanism using Ant

I have a properties file, let say my-file.properties.
In addition to that, I have several configuration files for my application where some information must be filled regarding the content of my-file.properties file.
my-file.properties:
application.version=1.0
application.build=42
user.name=foo
user.password=bar
Thus, in my configuration files, I will find some ${application.version}, ${user.name} that will be replaced by their values taken from the properties file...
When I build my application using Maven2, I only need to specify the properties file and say that my resource files are filtered (as in this answer to another problem). However, I need to achieve the same thing using only Ant.
I've seen that Ant offers a filter task. However, it forces me to use the pattern #property.key# (i.e. #user.name# instead of ${user.name}) in my configuration files, which is not acceptable in my case.
How can I solve my problem?
I think expandproperties is what you are looking for. This acts just like Maven2's resource filters.
INPUT
For instance, you have a src directory containing (among many files) src/test.txt:
<link href="${css.files.remote}/css1.css"/>
PROCESS
And in my Ant build file, build.xml, we have this:
<project default="default">
  <!-- The remote location of any CSS files -->
  <property name="css.files.remote" value="/css/theCSSFiles" />
  ...
  <target name="ExpandPropertiesTest">
    <mkdir dir="./filtered"/>
    <copy todir="./filtered">
      <filterchain>
        <expandproperties/>
      </filterchain>
      <fileset dir="./src" />
    </copy>
  </target>
</project>
OUTPUT
When you run the ExpandPropertiesTest target you will have the following in your filtered directory, in filtered/test.txt:
<link href="/css/theCSSFiles/css1.css"/>
You can define a custom FilterReader. So you have a couple of choices:
Extend/copy the org.apache.tools.ant.filters.ReplaceTokens class and define a Map property that references another properties file containing all the replacements. This is still a bit of a chore as you have to define all the replacements.
Extend/copy the org.apache.tools.ant.filters.ReplaceTokens class with additional processing that just substitutes the matched token with a version with the correct garnish. Of course you'd have to be really careful where you use this type as it will match anything with the begin and end token.
So in the read() method of ReplaceTokens, replace:
final String replaceWith = (String) hash.get(key.toString());
with a call to a getReplacement() method:
...
final String replaceWith = getReplacement(key.toString());
...
private String getReplacement(String key) {
    // first check if we have an explicit replacement defined
    if (hash.containsKey(key)) {
        return (String) hash.get(key);
    }
    // now use our built-in rule; use a StringBuilder if you want to be tidy
    return "$" + key + "}";
}
To use this, you'd ensure your class is packaged and on Ant's path and modify your filter:
<filterreader classname="my.custom.filters.ReplaceTokens">
  <!-- Define the begin and end tokens -->
  <param type="tokenchar" name="begintoken" value="$"/>
  <param type="tokenchar" name="endtoken" value="}"/>
  <!-- Can still define explicit tokens; any not
       defined explicitly will be replaced by the generic rule -->
</filterreader>
One hooooooorible way to make this work, inspired by the solution of Mnementh, is with the following code:
<!-- Read the property file -->
<property file="my-file.properties"/>
<copy todir="${dist-files}" overwrite="true">
  <fileset dir="${src-files}">
    <include name="*.properties"/>
  </fileset>
  <filterchain>
    <filterreader classname="org.apache.tools.ant.filters.ReplaceTokens">
      <!-- Define the begin and end tokens -->
      <param type="tokenchar" name="begintoken" value="$"/>
      <param type="tokenchar" name="endtoken" value="}"/>
      <!-- Define one token per entry in the my-file.properties. Arggh -->
      <param type="token" name="{application.version" value="${application.version}"/>
      <param type="token" name="{user.name" value="${user.name}"/>
      ...
    </filterreader>
  </filterchain>
</copy>
Explanations:
I am using the ReplaceTokens reader to look for all $...} patterns. I cannot search for ${...} patterns, as the begintoken is a char and not a String. Then I set the list of tokens starting with a { (i.e. I see {user.name instead of user.name). Fortunately, I have "only" about 20 lines in my-file.properties, so I need to define "only" 20 tokens in my Ant file...
Is there any simple and stupid solution to solve this simple and stupid problem??
Ant has a concept named filterchains, which is useful here. Use the ReplaceTokens filter and specify the begintoken and endtoken as empty (normally that's '#'). That should do the trick.