Setting Breeze JS naming convention to "camelCase" results in an error when getting metadata - serialization

I have a web app built using Breeze JS to communicate with a Breeze API controller on top of Entity Framework.
I want the property names and navigation properties to be in camelCase; on the server they are PascalCase.
Following the instructions here I added this to my code:
breeze.NamingConvention.camelCase.setAsDefault();
As a result, I am now getting an error when Breeze tries to get the metadata:
Error: Metadata import failed for Breeze/ZenAPI/Metadata; Unable to process
returned metadata:NamingConvention for this server property name does not roundtrip properly:houseId-->HouseId
Things I know:
All the properties are PascalCased on the server.
There is no non-default formatter set on the server.
When I look at the server response, it is in PascalCase.
If I take out this setting, everything works fine and the naming is PascalCase.
The setting took effect, meaning that when I check
breeze.NamingConvention.defaultInstance.name;
I get "camelCase".
What might be the reason for the problem?

NamingConvention.camelCase is intended for converting server property names that are PascalCased into camelCased names on the client. According to the error message, you are trying to do the reverse, i.e. in your case 'houseId' is a server property name.
When metadata is being processed, Breeze attempts to verify that every property name can be roundtripped, by passing it through the NamingConvention.clientPropertyNameToServer method and then through the NamingConvention.serverPropertyNameToClient method, or the reverse, depending on whether a client or a server name is provided within the metadata. The message you got indicates that
ServerName ClientName ServerName
---------- ---------- ---------
'houseId' -> 'houseId' -> 'HouseId' ( 'houseId' != 'HouseId');
Note that if 'HouseId' was the server name then this works just fine.
ServerName ClientName ServerName
---------- ---------- ---------
'HouseId' -> 'houseId' -> 'HouseId' ( 'HouseId' == 'HouseId');
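You can reproduce this check yourself in the browser console; a small illustrative sketch using the documented conversion methods:
var conv = breeze.NamingConvention.camelCase;
var clientName = conv.serverPropertyNameToClient("houseId");  // "houseId"
var serverName = conv.clientPropertyNameToServer(clientName); // "HouseId"
// "houseId" !== "HouseId", so the metadata import fails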
If it turns out that you really do want 'houseId' as both the server name and the client name, then you will need to write your own NamingConvention (which is actually pretty easy). See http://www.breezejs.com/sites/all/apidocs/classes/NamingConvention.html
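A minimal sketch of such a convention (the name and the exceptions map are illustrative; the two hooks are the documented NamingConvention constructor options):
var exceptions = { houseId: "houseId" }; // client name -> server name
var convention = new breeze.NamingConvention({
    name: "camelCaseWithExceptions",
    serverPropertyNameToClient: function (serverName) {
        return serverName.charAt(0).toLowerCase() + serverName.substr(1);
    },
    clientPropertyNameToServer: function (clientName) {
        // names that are already camelCased on the server pass through unchanged;
        // everything else is PascalCased as usual
        return exceptions[clientName] ||
            (clientName.charAt(0).toUpperCase() + clientName.substr(1));
    }
});
convention.setAsDefault();
With this, 'houseId' roundtrips as-is, while every other property still converts between PascalCase and camelCase.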

I found that Code First model generation with Entity Framework Power Tools on EF6+ did not allow selecting database objects, so the table "sysdiagrams" came across all in lowercase instead of the PascalCase notation I normally use for db objects. Once I removed this table from the model and context classes, this error with Breeze went away. All good. I also tested with breeze.NamingConvention.none.setAsDefault() and used PascalCase in my JavaScript, and that worked OK too, but it is not preferred.

JSON Schema for FHIR false positives

I am new to JSON Schema and am trying to validate JSON against the HL7-FHIR schemas. Data I think should be invalid (and that the official Java-based validator says is invalid) shows up as valid.
For example, {"dog": "food"} should be invalid, because when I run the validator, I get:
> java -jar org.hl7.fhir.validator.jar bad.json -defn definitions.json.zip
.. load FHIR from definitions.json.zip
.. connect to tx server # http://tx.fhir.org/r3
(vnull-null)
.. validate
*FAILURE* validating bad.json: error:1 warn:0 info:0
Fatal # $ (line 1, col2) : Unable to find resourceType property
But if I paste the fhir.schema.json file from here into a JSON Schema validator like the one here, and evaluate {"dog": "food"}, it's valid.
It's valid even if I supply a resourceType, which I thought might cause the restrictions to kick in. It's also valid if I copy an example I expect to be valid—say, this Practitioner example—and change some of the types (set name to be a string rather than an array, for example).
I'm not sure if I'm running into a problem with the HL7-FHIR JSON Schema in particular or with JSON Schema in general. I believe my question is different from this one because it appears that we're up to release 3.0, and so the schema I'm using is updated.
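A local equivalent of what I'm doing in the online validator would be a sketch like this (it assumes fhir.schema.json has been downloaded next to the script and that the Python jsonschema package is installed; Draft6Validator matches the draft the R3 schema appears to declare):
import json
from jsonschema import Draft6Validator

with open("fhir.schema.json") as f:
    schema = json.load(f)

# data I expect to be rejected: no resourceType at all
instance = {"dog": "food"}

for error in Draft6Validator(schema).iter_errors(instance):
    print(error.message)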

Biztalk 2009: SQL to WCF-SQL adapter migration; orchestration not receiving message?

As the title says:
I have a receive location that currently uses the SQL adapter (receive port) to call (poll?) a stored procedure. The stored proc returns a FOR XML result.
The receiver then activates an orchestration which takes the message and populates data from the message into some variables (expression shape).
Orchestration looks like:
LongScope[ AtomicScope[ Receive location -> Expression ] ][Error handling]
I tried a direct migration to WCF-SQL with XmlPolling as the InboundOperationType, but it throws a null exception during the variable assignment (I assume).
Additional detail:
I caught the message from the receiver by filtering on pipelineName using a send port. There is a slight difference between the messages retrieved by the SQL and WCF-SQL adapters:
sql:
<rootNode xmlns="namespace"><row data1="data1" data2="data2" /></rootNode>
wcf-sql:
<rootNode xmlns="namespace"><row data1="data1" data2="data2" xmlns="" /></rootNode>
This should make no difference, if this MSDN post is correct.
I also went into the orchestration debugger. The weird thing is, when using the SQL adapter, the message still shows as null, but the variables are assigned without problem. I also tried adding a send port directly after the receive port to dump the message. Nothing came out.
I would appreciate any info/suggestion/solution.
Do tell me if I'm missing any info.
Irrelevant Info:
As of this post the receive port doesn't even trigger anymore. I don't know why. Rebooting PC.
Also, I suspect BizTalk gave me bruxism and led to me requiring 6 teeth fillings.
The difference between the XML from the SQL and the WCF-SQL adapter has nothing to do with the MSDN post you are linking to.
In the 2nd XML (WCF-SQL adapter), the row node does not have a namespace: xmlns="" resets it. In the 1st XML (SQL adapter), the row node inherits the default namespace "namespace" from its parent, 'rootNode'.
Regarding the Receive Port not triggering anymore:
Are you sure your Host Instance(s) are still running?
My solution:
I added "xmlns = 'namespace'" as a 'data' in the stored procedure.
The adapter recognized it and removed it(since it was the same as parent node), allowing me to use the old schema.
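For comparison, a similar effect can be had inside the stored procedure itself with WITH XMLNAMESPACES (a sketch; the table, columns and namespace are illustrative, and this declares the namespace rather than using the literal-column trick above):
-- declare the default namespace so every element, including <row>,
-- is emitted in "namespace" instead of getting xmlns=""
WITH XMLNAMESPACES (DEFAULT 'namespace')
SELECT data1, data2
FROM   SomeTable
FOR XML RAW ('row'), ROOT ('rootNode');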
Filler:
So I generated a schema using the output from the WCF-SQL adapter;
however, I couldn't replace my old one with it, since the expression shape would not recognize its child elements (var = messageObject.childElement).
I created a map to map the new one back to the old one.
But that didn't work, because they both shared the same namespace, and BizTalk complained during runtime that it couldn't decide which schema to use.

DBD::Oracle, Cursors and Environment under mod_perl

Need some help, because I can't find any solution for my problems with DBD::Oracle.
So at first, this is the current situation:
We are running Apache2 with mod_perl 2.0.4 at our company
Apache web server was set up with a startup script which is setting some environment variables (LD_LIBRARY_PATH, ORACLE_HOME, NLS_LANG)
In httpd.conf there are also environment variables for LD_LIBRARY_PATH and ORACLE_HOME (via SetEnv)
We are generally using the perl module DBI with driver DBD::Oracle to connect to our main database
Before we create a new instance of DBI, we set some Perl env variables, too (%ENV): ORACLE_HOME and NLS_LANG.
So far, this works fine. But now we are extending our system and need to connect to a remote database. Again, we are using DBI and DBD::Oracle. But for now there are some new conditions:
New connection must run in parallel to the existing one
TNSNAMES.ORA for the new connection is placed at a different location (not at $ORACLE_HOME.'/network/admin')
New database contents are provided by stored procedures, which we fetch with DBD::Oracle and cursors (as explained here: https://metacpan.org/pod/DBD::Oracle#Binding-Cursors)
The stored procedures are returning object types and collection types, containing attributes of oracle type DATE
To get these dates in a readable format, we set a new env variable $ENV{NLS_DATE_FORMAT}
To ensure the date format we additionally alter the session by alter session set nls_date_format ...
Okay, this works fine, too, but only if we make a new connection from the console. The new TNS location is found by the script, the connection can be established, and fetching data from the procedures by cursor also works. All DATE types are formatted as specified.
Now, if we try to make this connection in the Apache environment, it fails. At first the data source name cannot be resolved by DBI/DBD::Oracle. I think this is because our new TNSNAMES.ORA file, or rather its location (published via $ENV{TNS_ADMIN}), is not found by DBI/DBD::Oracle in the Apache context. But I don't know why.
The second problem is (if I create a dirty workaround for the first one) that the date format published by $ENV{NLS_DATE_FORMAT} only works on the first level of our cursor select.
BEGIN OPEN :cursor FOR SELECT * FROM TABLE(stored_procedure); END;
The example above returns collection types of objects which contain attributes of type DATE. In the Apache context the format published by NLS_DATE_FORMAT is not applied to them. If I use a simple form of the example like this
BEGIN OPEN :cursor FOR SELECT SYSDATE FROM TABLE(stored_procedure); END;
the result (a single date field) is formatted well. So I think the nested structures are not formatted because $ENV{NLS_DATE_FORMAT} works only in the console context and not in the Apache context.
So there must be a problem with the Perl environment variables (%ENV) under Apache and mod_perl. Maybe a problem with mod_perl?
I am at my wit's end. Maybe anyone in the whole wide world has a solution ... and excuse my English :-) If you need further explanation, I will try to be more precise.
If your problem is that changes to %ENV made while processing a request don't seem to be honoured, this is because mod_perl assumes you might be running multiple threads and doesn't actually change the process environment when you change %ENV, so external libraries (like the oracle client) or child processes don't see the change.
You can work around it by first using the prefork MPM, so there aren't any threading issues, and then making changes to the environment using Env::C instead of the %ENV hash.
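A sketch of that workaround (paths and values are illustrative; Env::C pushes the values into the real C-level environment that the Oracle client libraries read, instead of only changing Perl's %ENV):
use DBI;
use Env::C;

# make the settings visible to the Oracle client, not just to Perl
Env::C::setenv('TNS_ADMIN',       '/path/to/alternate/tns/admin', 1);
Env::C::setenv('NLS_DATE_FORMAT', 'YYYY-MM-DD HH24:MI:SS',        1);

my ($user, $pass) = ('scott', 'tiger');   # illustrative credentials
my $dbh = DBI->connect('dbi:Oracle:REMOTEDB', $user, $pass)
    or die "connect failed: $DBI::errstr";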

Rackspace Cloud Files PHP get_objects at the "Root level"

I have been trying to figure out how to get the files that are at the root level, meaning all files that don't have a path component in their name.
I have a container that looks like this:
image.png        image/png
ui               application/directory
ui/css           application/directory
ui/css/test.css  text/css
ui/image2.jpg    image/jpg
I'm using the call
Container->get_objects(0, null, null, 'ui/');
which returns 2 CF_Objects:
ui/css
ui/image2.jpg
This is the desired output.
But if I request the files at the "root level",
Container->get_objects(0, null, null, '/');
returns an empty array, and
Container->get_objects(0, null, null, '');
returns all the files in the container.
Ideally it would return two CF_Objects: image.png and ui.
Is there a way to do this?
Thank you!
The Cloud Files Developer Guide of Nov 15, 2011 (page 20) says:
You can also use a delimiter parameter to represent a nested directory
hierarchy without the need for the directory marker objects. You can
use any single character as a delimiter. The listings can return
virtual directories - they are virtual in that they don't actually
represent real objects. Like the directory markers, though, they will
have a content-type of application/directory and be in a subdir
section of json and xml results.
If you have the following objects—photos/photo1, photos/photo2,
movieobject, videos/movieobj4—in a container, your delimiter
parameter query using slash (/) would give you
photos,
movieobject,
videos.
The parameter "delimiter" is not supported by the get_objects in the PHP SDK, and using it seems to be the only way to get the base directory files.
There is currently a merge request in github [this request has since been approved] adding this particular parameter to the get_objects method.
Other users of the Rackspace Cloud Files API PHP SDK have also added support for this parameter.
See if the original php-cloudfiles repo gets updated or just create a fork of the original and add your own code, if you don't feel comfortable adding your own changes, clone a fork that has added the delimiter parameter like
https://github.com/michealmorgan/php-cloudfiles
or
https://github.com/onema/php-cloudfiles
The merge request referenced in the answer was approved on May 09, 2012.
An optional parameter for get_objects was added for $delimiter ...
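With that fork, listing the "root level" looks roughly like this (a sketch; it assumes the forked signature where $delimiter was added as a fifth argument after $path):
// no prefix/path, '/' as delimiter: expect image.png plus the virtual directory ui
$objects = $container->get_objects(0, null, null, null, '/');
foreach ($objects as $obj) {
    echo $obj->name, "\n";
}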
However, an error was introduced into the code at some other point that falsely reports that the container name is not set if one tries to use any of the optional parameters.
A request has been put in to correct this error.

Determine request Uri from WCF Data Services LINQ query for FirstOrDefault against Azure without executing it?

Problem
I would like to trace the Uri that will be generated by a LINQ query executed against a Microsoft.WindowsAzure.StorageClient.TableServiceContext object. TableServiceContext just extends System.Data.Services.Client.DataServiceContext with a couple of properties.
The issue I am having is that the query executes fine against our Azure Table Storage instance when we run the web role on a dev machine in debug mode (we are connecting to Azure storage in the cloud not using Dev Storage). I can get the resulting query Uri using Fiddler or just hovering over the statement in the debugger.
However, when we deploy the web role to Azure, the query fails against the exact same Azure Table Storage source with a ResourceNotFound DataServiceClientException. We have had ResourceNotFound errors before that dealt with the behavior of FirstOrDefault() on empty tables. This is not the problem here.
As one approach to the problem, I wanted to compare the query Uri that is being generated when the web role is deployed versus when it is running on a dev machine.
Question
Does anyone know a way to get the query Uri for the query that will be sent when the FirstOrDefault() method is called? I know that you can call ToString() on the IQueryable returned from the TableServiceContext, but my concern is that when FirstOrDefault() is called the Uri might be further optimized, and ToString() on the IQueryable might not be what is ultimately sent to the server.
If someone has another approach to the problem I am open to suggestions. It seems to be a general problem with LINQ when trying to determine what will happen when the expression tree is finally evaluated. I am open to suggestions here as well because my LINQ skills could use some improvement.
Sample Code
public void AddSomething(string ProjectID, string Username) {
    TableServiceContext context = new TableServiceContext();
    var qry = context.Somethings.Where(m => m.RowKey == Username
                                         && m.PartitionKey == ProjectID);
    System.Diagnostics.Trace.TraceInformation(qry.ToString());
    // ^ Here I would like to trace the Uri that will be generated
    //   and sent to the server when the qry.FirstOrDefault() call below is executed.
    if (qry.FirstOrDefault() == null) {
        // ^ This statement generates an error when the web role is running
        //   in the fabric
        ...
    }
}
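For reference, the underlying DataServiceQuery exposes the request address, so one way to trace it without executing the query is a sketch like this (it assumes the cast succeeds; Something is the entity type from the sample above):
var dsq = qry as System.Data.Services.Client.DataServiceQuery<Something>;
if (dsq != null) {
    // the address of the base query; later operators such as
    // FirstOrDefault() may still change what is actually sent
    System.Diagnostics.Trace.TraceInformation(dsq.RequestUri.ToString());
}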
Edit: Update and Answer
Steve provided the right answer. Our problem was exactly as described in this post, which describes an issue with PartitionKey/RowKey ordering in single-entity queries that was fixed with an update to the Azure OS. This explains the discrepancy between our dev machines and the web role deployed to Azure.
When I indicated we had dealt with the ResourceNotFound issue before in our existence checks, we had dealt with it in two ways in our code. One way was using exception handling to deal with the ResourceNotFound error; the other was to put the RowKey first in the LINQ query (as some MS people had indicated was appropriate).
It turns out we have several places where the RowKey was first instead of using the exception handling. We will address this by refactoring our code to target .NET 4 and using the .IgnoreResourceNotFoundException = true property of the TableServiceContext.
Lesson learned (more than once): Don't depend on quirky undocumented behavior.
Aside
We were able to get the query Uri's. They did turn out to be different (as indicated they would be in the blog post). Here are the results:
Query Uri from Dev Fabric
https://ourproject.table.core.windows.net/Somethings()?$filter=(RowKey eq 'test19#gmail.com') and (PartitionKey eq '41e0c1ae-e74d-458e-8a93-d2972d9ea53c')
Query Uri from Azure Fabric
https://ourproject.table.core.windows.net/Somethings(RowKey='test19#gmail.com',PartitionKey='41e0c1ae-e74d-458e-8a93-d2972d9ea53c')
I can do one better... I think I know what the problem is. :)
See http://blogs.msdn.com/b/windowsazurestorage/archive/2010/07/26/how-wcf-data-service-changes-in-os-1-4-affects-windows-azure-table-clients.aspx.
Specifically, it used to be the case (in previous Guest OS builds) that if you wrote the query as you did (with the RowKey predicate before the PartitionKey predicate), it resulted in a $filter query, while the reverse (PartitionKey preceding RowKey) resulted in the kind of single-entity query that raises an exception if the result set is empty.
I think the right fix for you (as indicated in the above blog post) is to set IgnoreResourceNotFoundException to true on your context.
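A sketch of that fix, reusing the TableServiceContext and Somethings from the question (IgnoreResourceNotFoundException requires targeting .NET 4):
TableServiceContext context = new TableServiceContext();
context.IgnoreResourceNotFoundException = true;
// a missing entity now yields null instead of throwing,
// regardless of the predicate order
var entity = context.Somethings
    .Where(m => m.PartitionKey == ProjectID && m.RowKey == Username)
    .FirstOrDefault();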