Deltek Vision 7.6 - Column: does not exist when UpdateProject - sql

I'm currently working on an integration with Deltek Vision 7.6. I'm using the SOAP API, which exposes all actions, and I'm currently creating and updating records.
The problem is that after adding a new field to the database table and to Deltek Vision, executing the same call returns an error like this:
<?xml version="1.0" encoding="UTF-8"?>
<DLTKVisionMessage>
  <ReturnCode>ErrSave</ReturnCode>
  <ReturnDesc>An unexpected error has occured while saving</ReturnDesc>
  <ChangesNotProcessed>
    <InsertErrors>
      <Error rowNum="1">
        <ErrorCode>InsertError</ErrorCode>
        <Message>Column: does not exist.</Message>
        <Table>Projects_MilestoneCompletionLog</Table>
        <ROW new="1" mod="1" del="0">
          <WBS1>100434</WBS1>
          <WBS2>1014</WBS2>
          <WBS3>SD</WBS3>
          <Seq>a0D0m000000cf9NEAQ</Seq>
          <CustMilestoneNumber>MS01</CustMilestoneNumber>
          <CustMilestoneName>DM91 - Data Maintenance SAQ</CustMilestoneName>
          <CustAmount>1150.0</CustAmount>
          <CustSiteTrackerDate>2018-07-06T10:01:50</CustSiteTrackerDate>
        </ROW>
      </Error>
    </InsertErrors>
  </ChangesNotProcessed>
  <Detail>Column: does not exist.</Detail>
  <CallStack>UpdateProject.SendDataToDeltekVision</CallStack>
</DLTKVisionMessage>
The problematic field is CustSiteTrackerDate: if I remove it from Vision and the database, the update call completes correctly.
Does anyone know if, after creating a new custom field in Deltek, there is anything special we need to do to allow update calls through the API?
Thanks

I have been working with the Deltek SOAP API as well and found this in some of the documentation:
XML Schema for Vision Web Services/APIs: The data that you are adding or updating in the Vision database must be sent in XML format. The format of the XML data must comply with the schema. The order of the fields in your XML file must match the order of the fields that is defined by the schema. If your XML file does not match the required schema and the order of the fields, you will receive an error when you use web services to update the Vision database. Each applicable Info Center in Vision has an XML schema defined. Examples of the schema for each Info Center are included in schema files that are located on the Vision Web/app server in the \Vision\Web\Xsd directory (where \Vision is the directory in which Deltek Vision is installed). The names of the schema files start with the generic Info Center name followed by '_Schema.xsd'. For example, the name of the XML schema file used for the Employee Info Center would be 'Employee_Schema.Xsd'.
It may be that you need to add the new field to the Info Center schema: go to the server hosting your Vision Web/App, find the Info Center schema file that this new field should exist in, and make sure it is there.
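For what it's worth, a rough sketch of what the added declaration might look like in the Projects schema file is below; the surrounding structure, the types, and the exact file name are assumptions based on the documentation quoted above, not something copied from a real Vision install:

<!-- hypothetical fragment of \Vision\Web\Xsd\Projects_Schema.xsd -->
<xs:element name="Projects_MilestoneCompletionLog">
  <xs:complexType>
    <xs:sequence>
      <!-- existing columns (WBS1, WBS2, WBS3, Seq, CustMilestoneNumber, ...) keep their original order -->
      <xs:element name="CustAmount" type="xs:decimal" minOccurs="0"/>
      <!-- the new custom field has to be declared here, in the same position it is sent in the payload -->
      <xs:element name="CustSiteTrackerDate" type="xs:dateTime" minOccurs="0"/>
    </xs:sequence>
  </xs:complexType>
</xs:element>

If the schema on the server does not contain the new element, or contains it in a different position than the payload sends it, an error like the "Column: does not exist." above is consistent with what that documentation describes.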

Related

MarkLogic - Error with loading pipeline for content processing

MarkLogic version - 9.0-6.2 (on Windows)
I am following the guide (https://docs.marklogic.com/guide/cpf/quickStart) to perform the sample exercise provided. After installing CPF on data-hub-FINAL (with data-hub-TRIGGERS as the triggers db), I created a pipeline XML document (as given in the example) in my C drive at the directory C:\copyright. Then on the admin console, I navigated to Databases --> data-hub-FINAL --> Content Processing --> Pipelines --> Load, and provided the values below.
directory : C:\copyright
filter : *.xml
source : (file system)
However, when I click 'Ok', I get the error message 'Invalid input: No readable XML files found:'
I verified that the pipeline XML is present and valid in the directory C:\copyright.
Any inputs appreciated!
MarkLogic could not read the XML document because of non-UTF-8 content in it, as shown below.
<state-transition>
<annotation>
When a document containing ‘book' as a root element is created,
add a ‘copyright' statement.
</annotation>
For now, I removed the annotation from the XML document and successfully loaded the pipeline.
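An alternative I did not test would be to keep the annotation but replace the curly quotes with plain ASCII ones (or re-save the file as UTF-8), for example:

<annotation>
  When a document containing 'book' as a root element is created,
  add a 'copyright' statement.
</annotation>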

JSON Schema for FHIR false positives

I am new to JSON Schema, and am trying to validate JSON based on the HL7-FHIR schemas. Data that I think should be invalid (and that the official Java-based validator says is invalid) shows up as valid.
For example, {"dog": "food"} should be invalid, because when I run the validator, I get:
> java -jar org.hl7.fhir.validator.jar bad.json -defn definitions.json.zip
.. load FHIR from definitions.json.zip
.. connect to tx server # http://tx.fhir.org/r3
(vnull-null)
.. validate
*FAILURE* validating bad.json: error:1 warn:0 info:0
Fatal # $ (line 1, col2) : Unable to find resourceType property
But if I paste the fhir.schema.json file from here into a JSON Schema validator like the one here, and evaluate {"dog": "food"}, it's valid.
It's valid even if I supply a resourceType, which I thought might cause the restrictions to kick in. It's also valid if I copy an example I expect to be valid—say, this Practitioner example—and change some of the types (set name to be a string rather than an array, for example).
I'm not sure if I'm running into a problem with the HL7-FHIR JSON Schema in particular or with JSON Schemas in general. I believe my question is different from this one because it appears that we're up to release 3.0, and so the schema I'm using is up to date.
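As general background (a generic JSON Schema illustration, not the actual contents of fhir.schema.json): a schema that neither marks any property as required nor sets additionalProperties to false will accept any object, which is one way {"dog": "food"} can slip through:

{
  "$schema": "http://json-schema.org/draft-06/schema#",
  "type": "object",
  "properties": {
    "resourceType": { "type": "string" }
  }
}

Against a schema like this, {"dog": "food"} validates because resourceType is not required and unknown properties are allowed by default, whereas the Java validator refuses to proceed without a resourceType property.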

common xsd schema imported into another schema not being unmarshalled

re http://blog.bdoughan.com/2011/12/reusing-generated-jaxb-classes.html
I am trying to switch from using castor to jaxb.
I am importing a commontypes.xsd schema into another schema and then using JAXB to generate the Java classes, but when I unmarshal a sample XML file the imported types are null unless I explicitly set all the namespaces in the sample XML.
This is a real pain, because I want calling apps to be able to send me plain XML, not XML littered with a ton of namespaces, prefixes, etc.
Any suggestions as to how to avoid having to do this?
I generated .episode files in Maven using the above article and XJC episodes with Maven, but it doesn't help and I'm still getting nulls when I unmarshal.
Can anyone help?
thanks
I got it working!
The problem was that the package-info.java file generated by xjc from my .xsd file had elementFormDefault set to QUALIFIED:
@javax.xml.bind.annotation.XmlSchema(
    namespace = "http://www.example.com/commontypes",
    elementFormDefault = javax.xml.bind.annotation.XmlNsForm.QUALIFIED
)
package com.example.commontypes;
When I changed this to unqualified and recompiled the Java code, the unmarshal then worked.
The root cause fix was in my .xsd file, where I set elementFormDefault="unqualified"
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
targetNamespace="http://www.example.com/commontypes"
xmlns="http://www.example.com/commontypes"
elementFormDefault="unqualified"
attributeFormDefault="unqualified">
This resulted in the following generated package-info.java file:
@javax.xml.bind.annotation.XmlSchema(
    namespace = "http://www.example.com/commontypes"
)
package com.example.commontypes;
and again, the unmarshal then worked!
Thanks to Blaise for all the work he puts in; it was a comment on one of his blog posts that let me figure it out!
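For reference, a minimal unmarshalling sketch along the lines of what I was testing (the second package name and the file name are made up for illustration):

import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;
import java.io.File;

public class UnmarshalDemo {
    public static void main(String[] args) throws Exception {
        // Build the context from the xjc-generated packages:
        // the common types plus the types of the importing schema
        JAXBContext context = JAXBContext.newInstance("com.example.commontypes:com.example.maintypes");
        Unmarshaller unmarshaller = context.createUnmarshaller();
        // With elementFormDefault="unqualified", plain XML without namespace
        // prefixes on the imported common types now unmarshals with the nested
        // objects populated instead of null
        Object result = unmarshaller.unmarshal(new File("sample.xml"));
        System.out.println(result);
    }
}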

How to fix source is empty error in XML source while using Foreach loop container in SSIS 2012?

I have an issue with a very simple task in SSIS 2012.
I have a Foreach Loop Container that runs in Foreach File Enumerator mode. I want to read a target folder with XML files. The path to the folder is correctly configured, and the Files field is set to *.xml.
The variable mapping is defined with the following variable: User::FileVar, Index 0.
Now I add a simple Data Flow Task inside the container. The Data Flow Task only has an XML Source, that's it. For the XML Source, the XSD location is set. When I click Choose Columns, I can see the columns from the XSD schema.
BUT: When I save the XML task, I always get the error message: The property XMLDataVariable is empty. I tried both data access modes, XML file from variable and XML data from variable. The error message remains, and I cannot run the package.
I don't use any expressions, neither at the Foreach Loop Container nor at the Data Flow Task.
I don't know what's wrong here; I did the steps exactly as shown in some tutorials for older versions of SSIS.
Do you have any ideas?
The issue is that the XML Source is trying to validate the existence of the given file at design time. However, you will know the file name only at runtime, when the Foreach Loop Container executes and loops through every XML file available in the given folder.
I recreated an SSIS 2012 package using my answer to another SO question:
SSIS reading multiple xml files from folder
I was able to reproduce the error The property "XMLDataVariable" on the XML Source was empty
On the XML Source, I set the property ValidateExternalMetadata to False. Setting this to False forces the package not to verify the existence of the XML file path at design time.
I was successfully able to execute the package.
Hope that helps.

Birt data source parameters from a property file

I have multiple BIRT reports that obtain their data from the same JDBC data source.
Is it possible to obtain the connection parameters (driver URL, user name, and password) from an external property file or something similar?
Once you create a functional data source, you can add that data source to a report library that can be imported and used by all BIRT reports in your system. The source inside the library can have static connection attributes, or you can abstract them using externalized properties.
If you want to externalize the connection info, you will need to tweak the Data source itself. Inside the Data Source Editor, there is a "Property Binding" section that allows you to abstract all the values governing the data connection. From there you can bind the values (using the expression editor) to either report parameters or a properties file.
To bind to a report parameter, use this syntax: params[parametername].value as the expression.
To bind to a properties file, set the Resource file in the Report's top-level properties. From there you can just use the property key value to bind the entry to the Data Source.
Good Luck!
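To make the report-parameter option concrete, the binding expressions could look like this (the parameter names are hypothetical; each needs a matching report parameter, or a key in the resource file if you go the properties route):

// property binding for the JDBC driver URL
params["p_dbUrl"].value
// property binding for the user name
params["p_dbUser"].value
// property binding for the password
params["p_dbPassword"].value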
An alternative to @Mystik's good "Property binding" solution is externalizing to a connection profile.
Create a data source (say "DS"), setting up a correct configuration of the parameters to connect to a DB.
Right click on "DS" > Externalize to Connection Profile... > check both options, set a name for the Connection Profile, OK > set the path and filename where to save the Connection Profile Store (say "reportName.cps"), and uncheck Encrypt... (this way we can modify the information in the XML file by hand).
Now we have "reportName.cps", an XML file that we can modify according to the environment where we place our report (development, production, ...). The problem is that "DS" has loaded that information statically from "reportName.cps". It loads it dynamically only if it can find "reportName.cps" at the absolute path we specified. So when the environment changes, the file path will be different and the report won't find our file. To tell the report the correct location of the file and load it dynamically, let's write a script:
Set up a beforeOpen script to use the connection profile that is deployed in the resource folder, which can be different for every environment:
// Resolve the resource folder of the current environment at run time
var myresourcefolder = reportContext.getDesignHandle().getResourceFolder();
// Point the data source at the connection profile store inside that folder
this.setExtensionProperty("OdaConnProfileStorePath", myresourcefolder + "/reportName.cps");
For those struggling to configure a connection profile, the files must look as follows (using PostgreSQL as an example):
db-config-birt.xml (or whatever name)
<?xml version="1.0"?>
<DataTools.ServerProfiles version="1.0">
  <profile autoconnect="No" desc="" id="uuid" name="MyPostgreSQL"
           providerID="org.eclipse.birt.report.data.oda.jdbc">
    <baseproperties>
      <property name="odaDriverClass" value="org.postgresql.Driver"/>
      <property name="odaURL" value="jdbc:postgresql://XX:5432/YY"/>
      <property name="odaPassword" value="zzz"/>
      <property name="odaUser" value="abc"/>
    </baseproperties>
  </profile>
</DataTools.ServerProfiles>
The key points here are:
The XML MUST start with <?xml version="1.0"?> (or <?xml version="1.0" encoding="UTF-8" standalone="no"?>, but when I was using that I was getting a parsing exception while deploying on Tomcat)
The properties keys MUST be odaDriverClass, odaURL, odaPassword, odaUser (order doesn't matter)
This file must be readable by the server, e.g. chmod 664 this file
If any of the conditions above isn't met, BIRT will throw a laconic error:
org.eclipse.birt.report.engine.api.EngineException: An exception occurred during processing. Please see the following message for details:
Cannot open the connection for the driver: org.eclipse.birt.report.data.oda.jdbc.
org.eclipse.birt.report.data.oda.jdbc.JDBCException: Missing properties in Connection.open(Properties). ;
org.eclipse.datatools.connectivity.oda.OdaException: Unable to find or access the named profile (MyPostgreSQL) in profile store path (/opt/tomcat/mytomcat/conf/db-config-birt.xml). ;
org.eclipse.datatools.connectivity.oda.OdaException ;
Then in the report (myreport.rptdesign), the data source in its XML must look like this:
myreport.rptdesign (or whatever name)
<data-sources>
  <oda-data-source extensionID="org.eclipse.birt.report.data.oda.jdbc" name="MyPostgreSQL" id="320">
    <property name="OdaConnProfileName">MyPostgreSQL</property>
    <property name="OdaConnProfileStorePath">/opt/tomcat/mytomcat/conf/db-config-birt.xml</property>
  </oda-data-source>
</data-sources>
Obviously, you will adapt the OdaConnProfileStorePath to suit your needs.