I need to validate an XSD file in C# to check that its syntax complies with the XML Schema rules.
I suppose something like this:
XmlSchemaSet schemas = new XmlSchemaSet();
schemas.Add("", XmlReader.Create(new StringReader(xsdMarkup)));
XDocument doc1 = XDocument.Load("schema_to_validate.xsd");
doc1.Validate(schemas, null);
where xsdMarkup is the markup of the official "schema for schemas" that every XSD in the world must comply with, and "schema_to_validate.xsd" is my file. That master schema should be available as an online resource, from the W3C for example, or provided by Visual Studio.
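Perhaps something along these lines would work instead (a rough sketch, not tested; it relies on XmlSchemaSet reporting errors while it reads and compiles the schema, and the file name is mine):

using System;
using System.Xml;
using System.Xml.Schema;

class XsdSyntaxCheck
{
    static void Main()
    {
        XmlSchemaSet schemas = new XmlSchemaSet();

        // Errors and warnings found while reading/compiling land here.
        schemas.ValidationEventHandler += (o, e) =>
            Console.WriteLine("{0}: {1}", e.Severity, e.Message);

        schemas.Add(null, XmlReader.Create("schema_to_validate.xsd"));
        schemas.Compile(); // surfaces structural errors in the XSD itself
    }
}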
Can someone help me with this?
Thank you in advance.
Nicola
I'm currently working on an integration with Deltek Vision 7.6. I'm using the SOAP API, which exposes all actions, and I'm currently creating and updating records.
The problem is, after adding a new field in the database table and in Deltek Vision, executing the same call returns an error like this:
<?xml version="1.0" encoding="UTF-8"?>
<DLTKVisionMessage>
  <ReturnCode>ErrSave</ReturnCode>
  <ReturnDesc>An unexpected error has occured while saving</ReturnDesc>
  <ChangesNotProcessed>
    <InsertErrors>
      <Error rowNum="1">
        <ErrorCode>InsertError</ErrorCode>
        <Message>Column: does not exist.</Message>
        <Table>Projects_MilestoneCompletionLog</Table>
        <ROW new="1" mod="1" del="0">
          <WBS1>100434</WBS1>
          <WBS2>1014</WBS2>
          <WBS3>SD</WBS3>
          <Seq>a0D0m000000cf9NEAQ</Seq>
          <CustMilestoneNumber>MS01</CustMilestoneNumber>
          <CustMilestoneName>DM91 - Data Maintenance SAQ</CustMilestoneName>
          <CustAmount>1150.0</CustAmount>
          <CustSiteTrackerDate>2018-07-06T10:01:50</CustSiteTrackerDate>
        </ROW>
      </Error>
    </InsertErrors>
  </ChangesNotProcessed>
  <Detail>Column: does not exist.</Detail>
  <CallStack>UpdateProject.SendDataToDeltekVision</CallStack>
</DLTKVisionMessage>
The problematic field is CustSiteTrackerDate: if I remove it from Vision and the database, the update call completes correctly.
Does anyone know whether, after creating a new custom field in Deltek, there is anything special we need to do to allow update calls through the API?
Thanks
I have been working with the Deltek SOAP API as well and found this in some of the documentation:
XML Schema for Vision Web Services/APIs: The data that you are adding or updating in the Vision database must be sent in XML format. The format of the XML data must comply with the schema. The order of the fields in your XML file must match the order of the fields that is defined by the schema. If your XML file does not match the required schema and the order of the fields, you will receive an error when you use web services to update the Vision database. Each applicable Info Center in Vision has an XML schema defined. Examples of the schema for each Info Center are included in schema files that are located on the Vision Web/app server in the <install directory>\Vision\Web\Xsd directory (<install directory> is the directory where Deltek Vision is installed). The names of the schema files start with the generic Info-Center name followed by '_Schema.xsd'. For example, the name of the XML schema file used for the Employee Info Center would be 'Employee_Schema.Xsd'.
It may be that you need to add the new field to the Info Center schema: go to the server hosting your Vision web/app, find the Info Center schema file that the new field should exist in, and make sure it is there.
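If it helps, here is a rough sketch of how you could validate the outbound row XML against that Info Center schema locally before calling the API, so ordering or missing-field problems surface early. The install path and schema file name below are assumptions based on the documentation quoted above:

using System;
using System.Xml;
using System.Xml.Schema;

class VisionPayloadCheck
{
    static void Main()
    {
        XmlReaderSettings settings = new XmlReaderSettings { ValidationType = ValidationType.Schema };

        // Assumed location; adjust to where Vision is installed on your server.
        settings.Schemas.Add(null, @"C:\Deltek\Vision\Web\Xsd\Projects_Schema.Xsd");
        settings.ValidationEventHandler += (o, e) =>
            Console.WriteLine("{0}: {1}", e.Severity, e.Message);

        using (XmlReader reader = XmlReader.Create("payload.xml", settings))
        {
            while (reader.Read()) { } // reading the document drives validation
        }
    }
}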
Re: http://blog.bdoughan.com/2011/12/reusing-generated-jaxb-classes.html
I am trying to switch from using Castor to JAXB.
I am importing a commontypes.xsd schema into another schema and then using JAXB to generate the Java classes, but when I unmarshal a sample XML file the imported types are null unless I explicitly set all the namespaces in the sample XML.
This is a real pain, because I want calling apps to be able to send me plain XML, not XML littered with a ton of namespaces and prefixes.
Any suggestions as to how to avoid having to do this?
I generated .episodes files in Maven using the above article and XJC episode with Maven, but it doesn't help and I'm still getting nulls when I unmarshal.
Can anyone help?
thanks
I got it working!
The problem was that the package-info.java file generated by xjc from my .xsd file had elementFormDefault set to QUALIFIED:
@javax.xml.bind.annotation.XmlSchema(
namespace = "http://www.example.com/commontypes",
elementFormDefault = javax.xml.bind.annotation.XmlNsForm.QUALIFIED
)
package com.example.commontypes;
When I changed this to be unqualified and recompiled the Java code, the unmarshal then worked.
The root-cause fix was in my .xsd file, where I set elementFormDefault="unqualified":
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
targetNamespace="http://www.example.com/commontypes"
xmlns="http://www.example.com/commontypes"
elementFormDefault="unqualified"
attributeFormDefault="unqualified">
This resulted in the following generated package-info.java file:
@javax.xml.bind.annotation.XmlSchema(
namespace = "http://www.example.com/commontypes"
)
package com.example.commontypes;
and again, the unmarshal then worked!
Thanks to Blaise for all the work he puts in; it was a comment on one of his blog posts that let me figure it out!
This is a follow-on from this question really:
Moving From LINQpad to a Proper Visual Studio Project?
...but I'm not able to get it to work properly.
An answer to that question suggests dumping the context assembly out as a DLL, but although I have done that, when I import it as a reference it's not exactly clear to me how I would create an instance of that context, point it at a database, and actually run a query against it, something like the following:
var db = new ContextFromThatDLL(myconnectionstring);
var query = from a in db.MYTABLE where a.ID == 1 select a;
Extra information:
I am using the IQ driver in LinqPad to connect to Oracle.
I do have a license for DevArt already (which the IQ driver uses), but I am aware that the IQ driver generates its own SQL from LINQ, and I prefer it. Plus, I develop queries in LINQPad, which works great for my workflow, but I find that DevArt doesn't always generate SQL as good as IQ's.
First, extract the typed data context in LINQPad as follows:
string dcPath = GetType().BaseType.Assembly.Location;
string targetFolder = @"c:\temp";
File.Copy (dcPath, Path.Combine (targetFolder, Path.GetFileName (dcPath)));
Then in Visual Studio, reference the typed data context DLL, along with the following DLLs from the driver folder:
IQDriver.dll
IQToolkit.dll
IQToolkit.Data.dll
IQToolkit.Data.(provider).dll
plus the DevArt driver.
Then, you can instantiate the typed data context as follows (this illustrates how to do it for SQLite):
var dc = new LINQPad.User.TypedDataContext (IQToolkit.Data.DbEntityProvider.From
("IQToolkit.Data.Sqlite", #"Data Source=D:\SQLite.NET\nutshell.db",
"LINQPad.User.TypedDataContext"));
var customerCount = dc.Customers.Count();
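From there, queries should look just like they do in LINQPad, for example (table and column here are hypothetical):

var query =
    from c in dc.Customers
    where c.ID == 1   // hypothetical key column
    select c;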
This should get you started. Bear in mind the caveats, as stated in the answer to which you linked!
I am still quite confused about NHibernate schema export and creation. What I want to achieve is to export a drop-create SQL schema file and/or recreate the database schema, depending on the application configuration.
Obviously, I started with:
private void BuildSchema(NHConf.Configuration cfg)
{
    var schema = new SchemaExport(cfg);
    schema.SetOutputFile(filename);
    schema.Create(true, true);
    schema.Drop(true, true);
}
But recently I figured out that what actually causes my schema to be recreated is NHConf.Environment.Hbm2ddlAuto set to 'create'; SchemaExport has nothing to do with it.
Also, the files with the exported SQL schema exist, but they are all empty (0 KB). That is my main issue, as I manage schema recreation with the Hbm2ddlAuto property.
Any ideas?
EDIT:
The BuildSchema method is called just before cfg.BuildSessionFactory()
I use FluentNHibernate with NH 3.1 and Oracle 11g
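Roughly how it is wired up, in case it matters (a sketch; the entity and connection string are placeholders):

var factory = Fluently.Configure()
    .Database(OracleClientConfiguration.Oracle10.ConnectionString(connectionString))
    .Mappings(m => m.FluentMappings.AddFromAssemblyOf<MyEntity>())
    .ExposeConfiguration(cfg =>
    {
        cfg.SetProperty(NHConf.Environment.Hbm2ddlAuto, "create");
        BuildSchema(cfg); // the method above, called just before BuildSessionFactory
    })
    .BuildSessionFactory();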
In your method you execute drop-create and then drop again, and you also enabled writing to the database.
This is enough to create the file; make sure you set filename correctly:
new SchemaExport(config)
    .SetDelimiter(";")
    .SetOutputFile(filename)
    .Create(false, false);
To create it in the database, this works for me:
new SchemaExport(config).Create(false, true);
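And if you want both the file and the database update in one go, something like this should also work (untested sketch, assuming filename is set as above):

new SchemaExport(config)
    .SetDelimiter(";")
    .SetOutputFile(filename)
    .Create(false, true); // writes the script to the file and executes it against the database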
If you are using Fluent configuration, check your mapping file for:
SchemaAction.None();
In my case I commented out this line, and the schema export to file now works!
This post moved me in the right direction: http://lostechies.com/rodpaddock/2010/06/29/using-fluent-nhibernate-with-legacy-databases/
SchemaAction.None();
The next interesting feature is SchemaAction.None(). When developing our applications I have an integration test that is used to build all our default schema. I DON'T want these tables to be generated in our schema; they are external. SchemaAction.None() tells NHibernate not to create this entity in the database.
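For context, here is roughly where that call sits in a Fluent mapping (the entity and table names are made up):

// using FluentNHibernate.Mapping;
public class ExternalThing
{
    public virtual int Id { get; set; }
}

public class ExternalThingMap : ClassMap<ExternalThing>
{
    public ExternalThingMap()
    {
        Table("EXTERNAL_THING");  // external/legacy table
        SchemaAction.None();      // don't create or drop it during schema export
        Id(x => x.Id);
    }
}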
I'm working with a complicated xml schema, for which I have created a class structure using xsd.exe (with some effort). I can now reliably deserialize the xml into the generated class structure. For example, consider the following xml from the web service:
<ODM FileType="Snapshot" CreationDateTime="2009-10-09T19:58:46.5967434Z" ODMVersion="1.3.0" SourceSystem="XXX" SourceSystemVersion="999">
  <Study OID="2">
    <GlobalVariables>
      <StudyName>Test1</StudyName>
      <StudyDescription/>
      <ProtocolName>Test0001</ProtocolName>
    </GlobalVariables>
    <MetaDataVersion OID="1" Name="Base Version" Description=""/>
    <MetaDataVersion OID="2" Name="Test0001" Description=""/>
    <MetaDataVersion OID="3" Name="Test0002" Description=""/>
  </Study>
</ODM>
I can deserialize the xml as follows:
public ODMcomplexTypeDefinitionStudy GetStudy(string studyId)
{
    ODMcomplexTypeDefinitionStudy study = null;
    ODM odm = Deserialize<ODM>(Service.GetStudy(studyId));
    if (odm.Study.Length > 0)
        study = odm.Study[0];
    return study;
}
Service.GetStudy() returns an HTTP response stream from the web service, and Deserialize() is a helper method that deserializes the stream into an object of type T.
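For reference, the helper is just a thin wrapper over XmlSerializer, roughly like this (reconstructed here since the exact implementation isn't important; it uses System.IO and System.Xml.Serialization):

public static T Deserialize<T>(Stream stream)
{
    return (T)new XmlSerializer(typeof(T)).Deserialize(stream);
}

public static T Deserialize<T>(string xml)
{
    using (StringReader reader = new StringReader(xml))
    {
        return (T)new XmlSerializer(typeof(T)).Deserialize(reader);
    }
}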
My question is this: is it more efficient to let the deserialization process create the entire class structure and deserialize the xml, or is it more efficient to grab only the xml of interest and deserialize just that? For example, I could replace the above code with:
public ODMcomplexTypeDefinitionStudy GetStudy(string studyId)
{
    ODMcomplexTypeDefinitionStudy study = null;
    using (XmlReader reader = XmlReader.Create(Service.GetStudy(studyId)))
    {
        XDocument xdoc = XDocument.Load(reader);
        XNamespace odmns = xdoc.Root.Name.Namespace;
        XElement elStudy = xdoc.Root.Element(odmns + "Study");
        study = Deserialize<ODMcomplexTypeDefinitionStudy>(elStudy.ToString());
    }
    return study;
}
I suspect that the first approach is preferred -- there is a lot of DOM manipulation going on in the second example, and the deserialization process must have optimizations; however, what happens when the xml grows dramatically? Let's say the source returns 1 MB of xml and I'm really only interested in a very small component of it. Should I let the deserialization process fill up the containing ODM class with all its arrays and properties of child nodes, or just go get the child node as in the second example?
Not sure this helps, but here's a summary image of the dilemma:
Brett,
Later versions of .NET will build custom serializer assemblies. Click on project properties -> Build and look for "Generate serialization assemblies"; change it to On. The XML deserializer will use these assemblies, which are customized to the classes in your project. They are much faster and less resource-intensive, since reflection is not involved.
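If you prefer to set it in the project file directly, the equivalent MSBuild property is (it goes inside a PropertyGroup in the .csproj):

<PropertyGroup>
  <GenerateSerializationAssemblies>On</GenerateSerializationAssemblies>
</PropertyGroup>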
I would go this route so that if your classes change you will not have to worry about serialization issues. Performance should not be an issue.
I recommend that you not preoptimize. If you have your code working, then use it as it is. Go on to work on some code that is not finished, or which does not work.
Later, if you find you have a performance problem in that area, you can look into optimizing it.