Xslt task not working as expected - msbuild

I have an XSLT transform that I developed in VS. It works great when I use VS to run it (via XML->Show Xslt Output). However, when I execute it via the MsBuildCommunityTasks Xslt task I get wildly different results.
Specifically, the output contains only the text of a handful of elements I don't even reference in my XSLT. I guess the built-in default template rules are picking them up.
My task declaration couldn't get any simpler:
<Xslt
    Inputs="BuildLogs\partcover-results.xml"
    Xsl="ExtTools\xslt\partcover.assembly.report.xsl"
    RootTag=""
    RootAttributes=""
    Output="partcover.assembly.report.html"
/>
Perhaps msbuildtasks is using a different XSLT engine than VS uses internally? Any guidance would be appreciated.

I had trouble getting <Xslt /> to work as well. As of .NET 4.0, there is a built-in XslTransformation task. Here is how it would look for your example:
<XslTransformation
    OutputPaths="partcover.assembly.report.html"
    XmlInputPaths="BuildLogs\partcover-results.xml"
    XslInputPath="ExtTools\xslt\partcover.assembly.report.xsl"
/>
Worked for me the first time! Credit to Bryan Cook at 'the urban canuk, eh' for providing a nice overview of the XSLT options in MSBuild.

I also spent some time trying to get this Xslt task working, fiddling with the RootTag and RootAttributes. After some 2 hours I gave up and instead wrote my own task to get this done, which worked on my first try.
public override bool Execute()
{
    bool result = true;
    Log.LogMessage("Transforming from {0} to {1} using {2}",
        XmlFile, OutputFile, XsltFile);
    XmlWriter xmlWriter = null;
    try
    {
        XslCompiledTransform xslTransform = GetXslTransform(XsltFile);
        XmlReader xmlReader = GetXmlReader(XmlFile);
        xmlWriter = GetXmlWriter(OutputFile);
        xslTransform.Transform(xmlReader, xmlWriter);
    }
    catch (Exception e)
    {
        Log.LogErrorFromException(e);
        result = false;
    }
    finally
    {
        if (xmlWriter != null)
        {
            xmlWriter.Flush();
            xmlWriter.Close();
        }
    }
    return result;
}
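The GetXslTransform, GetXmlReader and GetXmlWriter helpers aren't shown above; a minimal sketch of what they might look like, assuming plain file paths and default reader/writer settings:
// Hypothetical helpers (not part of the original post) - a minimal sketch using the standard System.Xml APIs.
private static XslCompiledTransform GetXslTransform(string xsltFile)
{
    var transform = new XslCompiledTransform();
    transform.Load(xsltFile); // compiles the stylesheet from disk
    return transform;
}

private static XmlReader GetXmlReader(string xmlFile)
{
    return XmlReader.Create(xmlFile);
}

private static XmlWriter GetXmlWriter(string outputFile)
{
    // Indented output; adjust XmlWriterSettings as needed for your report format.
    return XmlWriter.Create(outputFile, new XmlWriterSettings { Indent = true });
}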

The RootTag is applied before the transformation is run, not after. Take the RootTag into account when writing your XSLT and it will work.
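A rough illustration of what that means in practice (the "mergedroot" element name is hypothetical, not taken from the task's documentation): with RootTag set, the stylesheet effectively sees the input wrapped in that extra element, so your templates must match paths under the wrapper rather than the file's own document element.
// Conceptual sketch only - this mimics what the RootTag wrapping amounts to.
// Requires System.Xml.Linq; "mergedroot" is an illustrative name.
var wrapped = new XElement("mergedroot",
    XElement.Load(@"BuildLogs\partcover-results.xml"));
wrapped.Save(@"BuildLogs\wrapped-input.xml"); // templates would match /mergedroot/... in this shape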


Rss20FeedFormatter Ignores TextSyndicationContent type for SyndicationItem.Summary

While using the Rss20FeedFormatter class in a WCF project, I was trying to wrap the content of my description elements with a <![CDATA[ ]]> section. I found that no matter what I did, the HTML content of the description elements was always encoded and the CDATA section was never added. After peering into the source code of Rss20FeedFormatter, I found that when building the Summary node, it basically creates a new TextSyndicationContent instance which wipes out whatever settings were previously specified (I think).
My Code
public class CDataSyndicationContent : TextSyndicationContent
{
    public CDataSyndicationContent(TextSyndicationContent content)
        : base(content)
    {
    }

    protected override void WriteContentsTo(System.Xml.XmlWriter writer)
    {
        writer.WriteCData(Text);
    }
}
... (The following code should wrap the Summary with a CDATA section)
SyndicationItem item = new SyndicationItem();
item.Title = new TextSyndicationContent(name);
item.Summary = new CDataSyndicationContent(
    new TextSyndicationContent(
        "<div>This is a test</div>",
        TextSyndicationContentKind.Html));
Rss20FeedFormatter Code
(AFAIK, the above code does not work because of this logic)
...
else if (reader.IsStartElement("description", ""))
    result.Summary = new TextSyndicationContent(reader.ReadElementString());
...
As a workaround, I've resorted to using the RSS20FeedFormatter to build the RSS, and then patch the RSS manually. For example:
StringBuilder buffer = new StringBuilder();
XmlTextWriter writer = new XmlTextWriter(new StringWriter(buffer));
feedFormatter.WriteTo(writer); // feedFormatter = RSS20FeedFormatter
PostProcessOutputBuffer(buffer);
WebOperationContext.Current.OutgoingResponse.ContentType =
    "application/xml; charset=utf-8";
return new MemoryStream(Encoding.UTF8.GetBytes(buffer.ToString()));
...
public void PostProcessOutputBuffer(StringBuilder buffer)
{
    var xmlDoc = XDocument.Parse(buffer.ToString());
    foreach (var element in xmlDoc.Descendants("channel").First()
                                  .Descendants("item")
                                  .Descendants("description"))
    {
        VerifyCdataHtmlEncoding(buffer, element);
    }
    foreach (var element in xmlDoc.Descendants("channel").First()
                                  .Descendants("description"))
    {
        VerifyCdataHtmlEncoding(buffer, element);
    }
    buffer.Replace(" xmlns:a10=\"http://www.w3.org/2005/Atom\"",
                   " xmlns:atom=\"http://www.w3.org/2005/Atom\"");
    buffer.Replace("a10:", "atom:");
}

private static void VerifyCdataHtmlEncoding(StringBuilder buffer,
                                            XElement element)
{
    if (!element.Value.Contains("<") || !element.Value.Contains(">"))
    {
        return;
    }
    var cdataValue = string.Format("<{0}><![CDATA[{1}]]></{2}>",
                                   element.Name,
                                   element.Value,
                                   element.Name);
    buffer.Replace(element.ToString(), cdataValue);
}
The idea for this workaround came from the following location, I just adapted it to work with WCF instead of MVC. http://localhost:8732/Design_Time_Addresses/SyndicationServiceLibrary1/Feed1/
I'm just wondering if this is simply a bug in Rss20FeedFormatter or is it by design? Also, if anyone has a better solution, I'd love to hear it!
Well @Page Brooks, I see this more as a solution than as a question :). Thanks!!! And to answer your question ( ;) ), yes, I definitely think this is a bug in the Rss20FeedFormatter (though I did not chase it as far), because I had encountered precisely the same issue that you described.
You have a 'localhost:8732' reference in your post, but it wasn't available on my localhost ;). I think you meant to credit the 'PostProcessOutputBuffer' workaround to this post:
http://damieng.com/blog/2010/04/26/creating-rss-feeds-in-asp-net-mvc
Or actually it is not in the post itself, but in a comment on it by David Whitney, which he later put in a separate gist here:
https://gist.github.com/davidwhitney/1027181
Thank you for adapting this workaround closer to my needs; I had found the workaround too, but was still struggling with the adaptation from MVC. In the end I only needed to tweak your solution to write the RSS feed to the current HTTP request in the .ashx handler I was using.
Basically I'm guessing that the CDataSyndicationContent fix you mentioned is from Feb 2011, assuming you got it from this post (at least I did):
SyndicationFeed: Content as CDATA?
This fix stopped working in some newer ASP.NET version or other, due to the code of the Rss20FeedFormatter changing to what you quoted in your post. That code change may well have been an improvement for other things in the framework, but for those using the CDataSyndicationContent fix it definitely causes a bug!
string stylesheet = @"<xsl:stylesheet version=""1.0"" xmlns:xsl=""http://www.w3.org/1999/XSL/Transform""><xsl:output cdata-section-elements=""description"" method=""xml"" indent=""yes""/></xsl:stylesheet>";
XmlReader reader = XmlReader.Create(new StringReader(stylesheet));
XslCompiledTransform t = new XslCompiledTransform(true);
t.Load(reader);
using (MemoryStream ms = new MemoryStream())
{
    XmlWriter writer = XmlWriter.Create(ms, t.OutputSettings);
    rssFeed.WriteTo(writer); // rssFeed is Rss20FeedFormatter
    writer.Flush();
    ms.Position = 0;
    string niko = Encoding.UTF8.GetString(ms.ToArray());
}
I'm sure someone has pointed this out already, but this is a stupid workaround I used.
t.OutputSettings is of type XmlWriterSettings with cdataSections being populated with a single XmlQualifiedName "description".
Hope it helps someone else.
I found the code for the CDATA content elsewhere:
public class CDataSyndicationContent : TextSyndicationContent
{
    public CDataSyndicationContent(TextSyndicationContent content)
        : base(content)
    {
    }

    protected override void WriteContentsTo(System.Xml.XmlWriter writer)
    {
        writer.WriteCData(Text);
    }
}
Code to call it is something along these lines:
item.Content = new Helpers.CDataSyndicationContent(new TextSyndicationContent("<span>TEST2</span>", TextSyndicationContentKind.Html));
However the "WriteContentsTo" function wasn't being called.
Instead of Rss20FeedFormatter I tried Atom10FeedFormatter - and it worked!
Obviously this gives an Atom feed rather than traditional RSS - but it's worth mentioning.
Output code is:
//var formatter = new Rss20FeedFormatter(feed);
Atom10FeedFormatter formatter = new Atom10FeedFormatter(feed);
using (var writer = XmlWriter.Create(response.Output, new XmlWriterSettings { Indent = true }))
{
    formatter.WriteTo(writer);
}

Very slow performance deserializing using datacontractserializer in a Silverlight Application

Here is the situation:
A Silverlight 3 application hits an ASP.NET-hosted WCF service to get a list of items to display in a grid. Once the list is brought down to the client it is cached in IsolatedStorage. This is done by using the DataContractSerializer to serialize all of these objects to a stream, which is then zipped and then encrypted. When the application is relaunched, it first loads from the cache (reversing the process above) and then deserializes the objects using the DataContractSerializer.ReadObject() method. All of this was working wonderfully under all scenarios until recently, with the entire "load from cache" path (decrypt/unzip/deserialize) taking hundreds of milliseconds at most.
On some development machines, but not all (all machines are Windows 7), the deserialize step - that is, the call to ReadObject(stream) - takes several minutes and seems to lock up the entire machine, BUT ONLY WHEN RUNNING IN THE DEBUGGER in VS2008. Running the Debug configuration outside the debugger has no problem.
One thing that looks suspicious is that when you turn on stopping on exceptions, you can see that ReadObject() throws many, many System.FormatExceptions indicating that a number was not in the correct format. When I turn off "Just My Code", thousands of these get dumped to the screen. None go unhandled. These occur both on the read back from the cache AND on the deserialization at the conclusion of a web service call to get the data from the WCF service. HOWEVER, these same exceptions occur on my laptop development machine, which does not experience the slowness at all. And FWIW, my laptop is really old and my desktop is a 4-core, 6GB RAM beast.
Again, no problems unless running under the debugger in VS2008. Anyone else seen this? Any thoughts?
Here is the bug report link: https://connect.microsoft.com/VisualStudio/feedback/details/539609/very-slow-performance-deserializing-using-datacontractserializer-in-a-silverlight-application-only-in-debugger
EDIT: I now know where the FormatExceptions are coming from. It seems that they are "by design" - they occur when I have doubles being serialized that are double.NaN, so that the value in the xml is NaN... It seems that the DCS tries to parse the value as a number, that fails with an exception, and then it looks for "NaN" et al. and handles them. My problem is not that this does not work... it does... it is just that it completely cripples the debugger. Does anyone know how to configure the debugger/VS2008 SP1 to handle this more efficiently?
cartden,
You may want to consider switching over to XmlSerializer instead. Here is what I have determined over time:
The XmlSerializer and DataContractSerializer classes provide a simple means of serializing and deserializing object graphs to and from XML.
The key differences are:
1. XmlSerializer has a much smaller payload than DCS if you use [XmlAttribute] instead of [XmlElement]; DCS always stores values as elements.
2. DCS is "opt-in" rather than "opt-out":
- With DCS you explicitly mark what you want to serialize with [DataMember]
- With DCS you can serialize any field or property, even if they are marked protected or private
- With DCS you can use [IgnoreDataMember] to have the serializer ignore certain properties
- With XmlSerializer public properties are serialized, and they need setters to be deserialized
- With XmlSerializer you can use [XmlIgnore] to have the serializer ignore public properties
3. BE AWARE! DCS.ReadObject DOES NOT call constructors during deserialization. If you need to perform initialization, DCS supports the following callback hooks: [OnDeserializing], [OnDeserialized], [OnSerializing], [OnSerialized] (also useful for handling versioning issues). A minimal sketch of these callbacks follows this list.
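Here is that sketch (the type and members are hypothetical, purely for illustration), showing where constructor-style initialization can go since DCS skips constructors and field initializers:
using System.Collections.Generic;
using System.Runtime.Serialization;

[DataContract]
public class CachedItem
{
    [DataMember]
    public string Name { get; set; }

    // Not serialized; must be rebuilt after deserialization because no constructor runs.
    private Dictionary<string, string> m_lookup;

    [OnDeserializing]
    private void OnDeserializing(StreamingContext context)
    {
        // Runs before the members are populated - stand-in for constructor logic.
        m_lookup = new Dictionary<string, string>();
    }

    [OnDeserialized]
    private void OnDeserialized(StreamingContext context)
    {
        // Runs after the members are populated.
        m_lookup[Name ?? string.Empty] = Name;
    }
}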
If you want the ability to switch between the two serializers, you can use both sets of attributes simultaneously, as in:
[DataContract]
[XmlRoot]
public class ProfilePerson : NotifyPropertyChanges
{
    [XmlAttribute]
    [DataMember]
    public string FirstName { get { return m_FirstName; } set { SetProperty(ref m_FirstName, value); } }
    private string m_FirstName;

    [XmlElement]
    [DataMember]
    public PersonLocation Location { get { return m_Location; } set { SetProperty(ref m_Location, value); } }
    private PersonLocation m_Location = new PersonLocation(); // Should change over time

    [XmlIgnore]
    [IgnoreDataMember]
    public Profile ParentProfile { get { return m_ParentProfile; } set { SetProperty(ref m_ParentProfile, value); } }
    private Profile m_ParentProfile = null;

    public ProfilePerson()
    {
    }
}
Also, check out my Serializer class that can switch between the two:
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Text;
using System.Xml;
using System.Xml.Serialization;

namespace ClassLibrary
{
    // Instantiate this class to serialize objects using either XmlSerializer or DataContractSerializer
    internal class Serializer
    {
        private readonly bool m_bDCS;

        internal Serializer(bool bDCS)
        {
            m_bDCS = bDCS;
        }

        internal TT Deserialize<TT>(string input)
        {
            MemoryStream stream = new MemoryStream(input.ToByteArray());
            if (m_bDCS)
            {
                DataContractSerializer dc = new DataContractSerializer(typeof(TT));
                return (TT)dc.ReadObject(stream);
            }
            else
            {
                XmlSerializer xs = new XmlSerializer(typeof(TT));
                return (TT)xs.Deserialize(stream);
            }
        }

        internal string Serialize<TT>(object obj)
        {
            MemoryStream stream = new MemoryStream();
            if (m_bDCS)
            {
                DataContractSerializer dc = new DataContractSerializer(typeof(TT));
                dc.WriteObject(stream, obj);
            }
            else
            {
                XmlSerializer xs = new XmlSerializer(typeof(TT));
                xs.Serialize(stream, obj);
            }
            // be aware that the Unicode Byte-Order Mark will be at the front of the string
            return stream.ToArray().ToUtfString();
        }

        internal string SerializeToString<TT>(object obj)
        {
            StringBuilder builder = new StringBuilder();
            XmlWriter xmlWriter = XmlWriter.Create(builder);
            if (m_bDCS)
            {
                DataContractSerializer dc = new DataContractSerializer(typeof(TT));
                dc.WriteObject(xmlWriter, obj);
            }
            else
            {
                XmlSerializer xs = new XmlSerializer(typeof(TT));
                xs.Serialize(xmlWriter, obj);
            }
            string xml = builder.ToString();
            xml = RegexHelper.ReplacePattern(xml, RegexHelper.WildcardToPattern("<?xml*>", WildcardSearch.Anywhere), string.Empty);
            xml = RegexHelper.ReplacePattern(xml, RegexHelper.WildcardToPattern(" xmlns:*\"*\"", WildcardSearch.Anywhere), string.Empty);
            xml = xml.Replace(Environment.NewLine + " ", string.Empty);
            xml = xml.Replace(Environment.NewLine, string.Empty);
            return xml;
        }
    }
}
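For what it's worth, a hypothetical usage of that class (MyDto is a placeholder type; note that Serialize and Deserialize rely on the author's own ToByteArray/ToUtfString/RegexHelper extensions, which aren't shown above):
// Illustrative only - MyDto is a stand-in for your own data contract type.
var serializer = new Serializer(true); // true = DataContractSerializer, false = XmlSerializer
string xml = serializer.Serialize<MyDto>(new MyDto { Name = "test" });
MyDto copy = serializer.Deserialize<MyDto>(xml);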
This is a guess, but I think it runs slowly in debug mode because for every exception it performs extra work to show the exception in the debug window, etc. If you are running in release mode, these extra steps are not taken.
I've never done this, so I really don't know if it would work, but have you tried setting just that one assembly to build in release mode while all the others are set to debug? If I'm right, it may solve your problem. If I'm wrong, you've only wasted a minute or two.
About your debugging problem, have you tried disabling the exception assistant? (Tools > Options > Debugging > Enable the exception assistant.)
Another option is the exception handling in Debug > Exceptions: you can disable the user-unhandled setting for the CLR, or only uncheck the System.FormatException exception.
Ok - I figured out the root issue. It was what I alluded to in the EDIT to the main question. The problem was that the xml was correctly serializing doubles that had a value of double.NaN. I was using these values to indicate "n/a" for when the denominator was 0D. Example: ROE (Return on Equity = Net Income / Average Equity) when Average Equity is 0D would be serialized as:
<ROE>NaN</ROE>
When the DCS tries to deserialize it, evidently it first tries to read the number, catches the exception when that fails, and then handles the NaN. The problem is that this seems to generate a lot of overhead in DEBUG mode.
Solution: I changed the property to double? and set it to null instead of NaN. Everything now happens instantly in DEBUG mode. Thanks to all for your help.
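A minimal sketch of that change (the type and property names are illustrative): make the member a nullable double and write null when the denominator is zero, so the serialized XML never contains NaN for the deserializer to trip over.
[DataContract]
public class Ratios
{
    [DataMember]
    public double? ReturnOnEquity { get; set; } // null instead of double.NaN

    public void Compute(double netIncome, double averageEquity)
    {
        // Avoid producing NaN at all when the denominator is zero.
        ReturnOnEquity = averageEquity == 0d ? (double?)null : netIncome / averageEquity;
    }
}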
Try disabling some IE addons. In my case, the LastPass toolbar killed my Silverlight debugging. My computer would freeze for minutes each time I interacted with Visual Studio after a breakpoint.

Creating an SPListItem in a WCF service deployed to SharePoint

I have the following method in a WCF service that has been deployed to SharePoint using Shail Malik's guide:
[OperationContract]
public string AddItem(string itemTitle, Guid? idOfListToUse)
{
    using (var portal = new SPSite(SPContext.Current.Site.Url, SPContext.Current.Site.SystemAccount.UserToken))
    {
        using (var web = portal.OpenWeb())
        {
            Guid listId;
            web.AllowUnsafeUpdates = true;
            if (idOfListToUse != null && idOfListToUse.Value != new Guid())
            {
                listId = idOfListToUse.Value;
            }
            else
            {
                try
                {
                    listId = new Guid(web.Properties[PropertyBagKeys.TagsList]);
                }
                catch (Exception ex)
                {
                    throw new MyException("No List Id for the tag list (default list) has been found!", ex);
                }
            }
            var list = web.Lists[listId];
            string title = "";
            SPSecurity.RunWithElevatedPrivileges(delegate
            {
                var newItem = list.Items.Add();
                newItem["Title"] = itemTitle;
                newItem.Update();
                title = newItem.Title;
            });
            web.AllowUnsafeUpdates = false;
            return title;
        }
    }
}
When the method gets called from JavaScript (using Rick Strahl's excellent ServiceProxy.js) it fails, and it does so on newItem.Update() because of ValidateFormDigest().
Here's the kicker though: when I step through the code, it works! No exceptions at all!
Ok, found the answer (there are 2, even :-D).
First, the dirty one:
Set the FormDigestValidated property in the context:
HttpContext.Current.Items["FormDigestValidated"] = true;
Second, the slightly less dirty version (basically leaving the way open for XSS attacks, but this is an intranet anyway):
The answer
I don't think you can access 'list' as it was created outside the elevated code block.
http://blogs.pointbridge.com/Blogs/herzog_daniel/Pages/Post.aspx?_ID=8
I'm guessing that when you are stepping through, the whole process is running as admin, so everything is already elevated.
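A minimal sketch of the usual pattern (assuming siteUrl, listId and itemTitle are in scope): open a fresh SPSite/SPWeb inside the elevated delegate so the list and the new item are obtained with the elevated token, rather than reusing objects created outside it.
// Sketch only - not the original poster's code.
SPSecurity.RunWithElevatedPrivileges(delegate
{
    using (var elevatedSite = new SPSite(siteUrl))
    using (var elevatedWeb = elevatedSite.OpenWeb())
    {
        elevatedWeb.AllowUnsafeUpdates = true;
        var elevatedList = elevatedWeb.Lists[listId];
        var newItem = elevatedList.Items.Add();
        newItem["Title"] = itemTitle;
        newItem.Update();
        elevatedWeb.AllowUnsafeUpdates = false;
    }
});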
Colin, it's a really bad idea to try to access HttpContext (likewise SPContext) inside a WCF service. See here: MSDN: WCF Services and ASP.NET
From the article:
HttpContext: Current is always null when accessed from within a WCF service.
It's likely this is the cause of your problem.
EDIT: I notice that you're trying to use SPContext to get the URL of the site collection. I didn't find a good solution to this either, so I just send the URL of the target site collection as a parameter to the service call. Not the most optimal solution, but I couldn't think of a better way. Also, if you need to check authentication/identities, etc., use ServiceSecurityContext.Current.

How to programmatically get DLL dependencies

How can I get the list of all DLL dependencies of a given DLL or EXE file?
In other words, I'd like to do the same as the "Dependency walker" tool, but programmatically.
What is the Windows (ideally .NET) API for that?
You can use the EnumProcessModules function. A managed API like the one kaanbardak suggested won't give you a list of native modules.
For an example, see this page on MSDN.
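If you'd rather stay in managed code for a running process, System.Diagnostics.Process also exposes the loaded-module list, native DLLs included; a small sketch (for analyzing a file on disk, the import-table approach below still applies):
using System;
using System.Diagnostics;

class LoadedModuleLister
{
    static void Main()
    {
        // Lists every module currently loaded into this process, managed and native.
        foreach (ProcessModule module in Process.GetCurrentProcess().Modules)
        {
            Console.WriteLine("{0} -> {1}", module.ModuleName, module.FileName);
        }
    }
}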
If you need to statically analyze your DLL, you have to dig into the PE format and learn about import tables. See this excellent tutorial for details.
NOTE: Based on the comments from the post below, I suppose this might miss unmanaged dependencies as well because it relies on reflection.
Here is a small C# program, written by Jon Skeet on bytes.com, that acts as a .NET dependency walker:
using System;
using System.Reflection;
using System.Collections;

public class DependencyReporter
{
    static void Main(string[] args)
    {
        // change this line if you only need to run the code once:
        string dllToCheck = @"";
        try
        {
            if (args.Length == 0)
            {
                if (!String.IsNullOrEmpty(dllToCheck))
                {
                    args = new string[] { dllToCheck };
                }
                else
                {
                    Console.WriteLine("Usage: DependencyReporter <assembly1> [assembly2 ...]");
                }
            }
            Hashtable alreadyLoaded = new Hashtable();
            foreach (string name in args)
            {
                Assembly assm = Assembly.LoadFrom(name);
                DumpAssembly(assm, alreadyLoaded, 0);
            }
        }
        catch (Exception e)
        {
            DumpError(e);
        }
        Console.WriteLine("\nPress any key to continue...");
        Console.ReadKey();
    }

    static void DumpAssembly(Assembly assm, Hashtable alreadyLoaded, int indent)
    {
        Console.Write(new String(' ', indent));
        AssemblyName fqn = assm.GetName();
        if (alreadyLoaded.Contains(fqn.FullName))
        {
            Console.WriteLine("[{0}:{1}]", fqn.Name, fqn.Version);
            return;
        }
        alreadyLoaded[fqn.FullName] = fqn.FullName;
        Console.WriteLine(fqn.Name + ":" + fqn.Version);
        foreach (AssemblyName name in assm.GetReferencedAssemblies())
        {
            try
            {
                Assembly referenced = Assembly.Load(name);
                DumpAssembly(referenced, alreadyLoaded, indent + 2);
            }
            catch (Exception e)
            {
                DumpError(e);
            }
        }
    }

    static void DumpError(Exception e)
    {
        Console.ForegroundColor = ConsoleColor.Red;
        Console.WriteLine("Error: {0}", e.Message);
        Console.WriteLine();
        Console.ResetColor();
    }
}
To get native module dependencies, I believe it should be OK to read them from the PE file's import table; here are 2 links which explain that in depth:
http://msdn.microsoft.com/en-us/magazine/bb985992.aspx
http://msdn.microsoft.com/en-us/magazine/cc301808.aspx
To get .NET dependencies, we can use .NET's API, like Assembly.Load.
To get all of a .NET module's dependencies, how about combining the two approaches? .NET assemblies are just PE files with metadata.
While this question already has an accepted answer, the documentation referenced in the other answers, where not broken, is old. Rather than reading through all of it only to find it doesn't cover differences between Win32 and x64, or other differences, my approach was this:
C:\UnxUtils\usr\local\wbin>strings.exe E:\the-directory-I-wanted-the-info-from\*.dll > E:\TEMP\dll_strings.txt
This allowed me to use Notepad++ or gvim or whatever to search for DLLs that still depended on MS DLLs ending in 120.dll, so I could find the ones that needed updating.
This could easily be scripted in your favorite language.
Given that my search for this info was with VS 2015 in mind, and this question was the top result for a Google search, I supply this answer in case it is of use to someone else looking for the same thing.
To read the DLLs (modules) loaded by a running exe, use the ToolHelp32 functions:
Tool help Documentation on MSDN.
Not sure what it will show for a running .NET exe (I've never tried it). But it does show the full path from which each DLL was loaded. Often, this was the information I needed when trying to sort out DLL problems. .NET is supposed to have removed the need for these functions (look up "DLL Hell" for more information).
If you don't want to load the assembly into your program, you can use dnSpy's dnlib library (https://www.nuget.org/packages/dnSpyLibs):
var assemblyDef = dnlib.DotNet.AssemblyDef.Load("yourDllName.dll");
var dependencies = assemblyDef.ManifestModule.GetAssemblyRefs();
Note that all the info you could want is available through the "ManifestModule" property.

Does the "Cartridge" pattern exist?

AndroMDA uses the term "cartridge" (e.g. for out-of-the-box NHibernate support).
As I understood it, it takes an API/component, wraps it, never adds new features, and simplifies it, often taking away "the full power", but it works well for most cases.
My questions:
Is the term widely used?
Can one properly define it?
Should the suffix "Cartridge" be used in class/method names?
An example: is the following Base64 helper a cartridge for Base64 conversion?
You give away all the power for performance-tuning, but if you simply want to decode a simple (and small) string it works fine:
Usage:
Base64StringCartridge.Decode(input);
Implementation
public static string Decode(string data)
{
    try
    {
        System.Text.UTF8Encoding encoder = new System.Text.UTF8Encoding();
        System.Text.Decoder utf8Decode = encoder.GetDecoder();
        byte[] todecode_byte = System.Convert.FromBase64String(data);
        int charCount = utf8Decode.GetCharCount(todecode_byte, 0, todecode_byte.Length);
        char[] decoded_char = new char[charCount];
        utf8Decode.GetChars(todecode_byte, 0, todecode_byte.Length, decoded_char, 0);
        string result = new String(decoded_char);
        return result;
    }
    catch
    {
        return "";
    }
}
It's called the Facade Pattern. Presumably the AndroMDA folks are big fans of old video game machines...