I have an appsettings.json file with the following content:
"Serilog": {
"WriteTo": [
{
"Name": "RollingFile",
"Args": {
"pathFormat": "/home/www-data/aissubject/storage/logs/log-{Date}.txt"
}
}
]
}
How can I read the value of the "pathFormat" key?
What you're referring to is a JSON array. How you access that varies depending on what you're doing, but I'm assuming that since you're asking this, you're trying to get it directly out of IConfiguration, rather than using the options pattern (as you likely should be).
IConfiguration is basically a dictionary. In order to create the keys of that dictionary from something like JSON, the JSON is "flattened" using certain conventions. Each level will be separated by a colon. Arrays will be flattened by adding a colon-delimited component containing the index. In other words, to get at pathFormat in this particular example, you'd need:
Configuration["Serilog:WriteTo:0:Args:pathFormat"]
Where the 0 portion denotes that you're getting the first item in the array. Again, it's much better and more appropriate to use the options pattern to map the configuration values onto an actual object, which would let you access this as an array rather than through a magic string like this.
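For illustration, here is a minimal sketch of that options-pattern approach; the SerilogOptions, SerilogSink and SerilogSinkArgs classes are invented here just to mirror the JSON shape (configuration binding is case-insensitive, so PathFormat binds to "pathFormat"):

using System.Collections.Generic;
using Microsoft.Extensions.Configuration;

public class SerilogSinkArgs
{
    public string PathFormat { get; set; }
}

public class SerilogSink
{
    public string Name { get; set; }
    public SerilogSinkArgs Args { get; set; }
}

public class SerilogOptions
{
    public List<SerilogSink> WriteTo { get; set; }
}

// e.g. wherever an IConfiguration instance named Configuration is available
// (requires the Microsoft.Extensions.Configuration.Binder package):
var serilog = Configuration.GetSection("Serilog").Get<SerilogOptions>();
string path = serilog.WriteTo[0].Args.PathFormat;

Alternatively, register it with services.Configure<SerilogOptions>(Configuration.GetSection("Serilog")) and inject IOptions<SerilogOptions> where you need the values.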
I have an application that requests JSON objects from various other applications via their REST APIs. The response from any application comes in the following format:
{
  data : {
    key1: { val: value, defBy: "ontology class" },
    key2: ...,
  }
}
The following code depicts an object from App1:
{
  data : {
    key1: { val: "98404506-385576361", defBy: "abc:SHA-224" }
  }
}
The following code depicts an object from App2:
{
  data : {
    key2: { val: "495967838-485694812", defBy: "xyz:SHA3-224" }
  }
}
Here, defBy refers to the algorithm used to encrypt the string in val. When my application receives such objects, it parses the JSON and converts each key-value pair in the object into RDF such that:
// For objects from App1:
key1 rdf:type osba:key
key1 osba:generatedBy abc:SHA-224
...
// For objects from App2
key2 rdf:type osba:key
key2 osba:generatedBy xyz:SHA3-224
I need to query the generated RDF data so that, if the osba:generatedBy value of any key belongs to the SHA family, the subject is returned as a valid query result, along the lines of: where { ?k osba:generatedBy ??? }
Please note the following points:
I also receive objects that use other encryption algorithms, such as MD5.
I don't know in advance what encryption algorithm will be used by a new application joining the network, nor what namespace it uses. For example, in the above objects, one uses abc: and the other uses xyz:.
I can't use SPARQL filtering because the value could be SecureHashAlgorithm instead of SHA
My problem is that I can't define an upper (referenced) ontology in advance and map the value stored in defBy: of the incoming objects, because I don't know in advance what ontology is used nor what encryption algorithm the value represents.
I have read about Automatic Ontology Integration, Alignment, Mapping, etc., but I can't see how these concepts apply to my problem.
Any solutions?
3) I can't use SPARQL filtering because the value could be SecureHashAlgorithm instead of SHA
SPARQL filtering supports matching against regular expressions as defined by XPath. Thus, something along the lines of
SELECT ?key
WHERE { ?key osba:generatedBy ?generator
FILTER regex(?generator, "^s(ecure)?h(ash)?a(lgorithm)?.*", "i") }
(note: untested) should do the job. To build a good regex I can recommend http://regexr.com/
In case it's necessary: You can convert an IRI to a string (for matching) with the str() function.
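For example (also untested), if ?generator is bound to an IRI rather than a plain literal, wrapping it in str() before matching would look like:

SELECT ?key
WHERE { ?key osba:generatedBy ?generator
        FILTER regex(str(?generator), "sha", "i") }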
For the following JSON string:
{
  "abc" : 123,
  "def" : 345
}
The following schema considers it valid:
{
  "$schema": "http://json-schema.org/draft-03/schema#",
  "title": "My Schema",
  "description": "Blah",
  "type": "object",
  "patternProperties": {
    ".+": {
      "type": "number"
    }
  }
}
However, changing patternProperties to properties still considers it valid. What, then, is the difference between these two keywords?
For the schema above, all property values must be numbers. This data is invalid:
{ a: 'a' }
If you replace patternProperties with properties, only the property named '.+' must be a number; all other properties can be anything. This would be invalid:
{ '.+': 'a' }
This would be valid:
{ a: 'a' }
The properties (key-value pairs) on an object are defined using the properties keyword. The value of properties is an object, where each key is the name of a property and each value is a JSON schema used to validate that property.
additionalProperties can restrict the object so that it either has no additional properties that weren’t explicitly listed, or it can specify a schema for any additional properties on the object. Sometimes that isn’t enough, and you may want to restrict the names of the extra properties, or you may want to say that, given a particular kind of name, the value should match a particular schema. That’s where patternProperties comes in: it is a new keyword that maps from regular expressions to schemas. If an additional property matches a given regular expression, it must also validate against the corresponding schema.
Note: When defining the regular expressions, it’s important to note that the expression may match anywhere within the property name. For example, the regular expression "p" will match any property name with a p in it, such as "apple", not just a property whose name is simply "p". It’s therefore usually less confusing to surround the regular expression in ^...$, for example, "^p$".
For further reference: http://spacetelescope.github.io/understanding-json-schema/reference/object.html
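As an illustrative sketch (not part of the original question), a schema combining the three keywords could look like this: the explicitly listed id must be an integer, any property whose name starts with S_ must be a string, and everything else is rejected:

{
  "type": "object",
  "properties": {
    "id": { "type": "integer" }
  },
  "patternProperties": {
    "^S_": { "type": "string" }
  },
  "additionalProperties": false
}

Against this schema, { "id": 1, "S_name": "x" } is valid, while { "other": 1 } is not, because "other" is matched by neither properties nor patternProperties and additionalProperties is false.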
Semantics of properties:
If you declare a property whose key is listed in properties, its value must satisfy the schema declared in properties.
Semantics of patternProperties:
If you declare a property whose key matches a regex defined in patternProperties, its value must satisfy the corresponding schema declared in patternProperties.
Note that the two keywords are not mutually exclusive: if a key is listed in properties and also matches a regex in patternProperties, its value must satisfy both schemas, and additionalProperties applies only to keys matched by neither.
A JSON object is composed of key: value pairs. In a schema, the key corresponds to a property, and for the value part we define its data type and some other constraints.
Therefore the following schema
{
  "type": "object",
  "properties": {
    "a": {
      "type": "number"
    }
  }
}
will check the value of the key "a": an object like {"a": 1} validates, while {"a": "foo"} does not. An object like {"b": 1} is simply not constrained by it (properties alone neither requires the listed keys nor restricts other keys).
Meanwhile, the patternProperties keyword allows you to define properties using a regex, so you don't have to list every property one by one. A typical use case is when you don't know the key names in advance but you know that they all match a certain pattern.
Hence your schema can validate {"a": 1} as well as {"b": 1}
The patternProperties keyword plays a role similar to additionalProperties, but in addition gives you finer control over the keys.
I'm building polymorphic serialized types in JavaScript and deserializing them in .Net. This works fine, unless my "$type" property is not the first property (Json.net seems to ignore it then).
So:
{
  "$type" : "...",
  "FirstName" : "Bob"
}
works (it deserializes to the type provided by $type), but:
{
  "FirstName" : "Bob",
  "$type" : "..."
}
doesn't.
Is there a way that I can make the order not matter, or an easy way to take my json string and modify it such that my "$type" properties are always at the top, in .Net? In other words, can I use json.net before I deserialize the string to re-order the properties so that "$type" is on top? I don't want to make it a requirement on the JavaScript/serialization side.
Update: Use MetadataPropertyHandling.ReadAhead
http://www.newtonsoft.com/json/help/html/T_Newtonsoft_Json_MetadataPropertyHandling.htm
It has to be first.
You could load the JSON into a JObject, rearrange the property order so $type is the first property and then deserialize that.
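A minimal sketch of the ReadAhead approach mentioned in the update (ContentBase here is just an assumed polymorphic base type for the payload):

using Newtonsoft.Json;

var settings = new JsonSerializerSettings
{
    TypeNameHandling = TypeNameHandling.Auto,
    // Buffers each object's properties so "$type" is honoured even when it is not the first property.
    MetadataPropertyHandling = MetadataPropertyHandling.ReadAhead
};

var value = JsonConvert.DeserializeObject<ContentBase>(json, settings);

ReadAhead has a performance cost, since the serializer has to buffer the object before it can pick the type, but it removes the ordering requirement.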
This question already has an answer here:
Newtonsoft JSON.net deserialization error where fields in JSON change order
I have the following method in my Web API:
public void Put(string id, [FromBody]IContent value) {
    //Do stuff
}
I'm using Backbone.js to send the following JSON to the server; inspecting the request with Fiddler shows it is sent, but the value parameter is null:
{
  "id": "articles/1",
  "heading": "Bar",
  "$type": "BrickPile.Samples.Models.Article, BrickPile.Samples"
}
but if I add the $type property first in the JSON object the deserialization works fine, see:
{
  "$type": "BrickPile.Samples.Models.Article, BrickPile.Samples",
  "id": "articles/1",
  "heading": "Bar"
}
Is it possible to configure Newtonsoft to check for the $type property anywhere in the object instead of only as the first property, or can I configure Backbone so that it always adds the $type property first in the JSON object?
I would very strongly recommend against configuring any serializer (including JSON.NET) to read the object type from the incoming payload. This has historically been the cause of a large number of vulnerabilities in web applications. Instead, change the public entry point to your action to take the actual type as a bound parameter, then delegate to an internal testable method if desired.
First, AFAIK, the code of Json.NET is optimized to avoid holding the whole object in memory just to read its type. So it's better to place $type as the first property.
Second, you can write your own JsonConverter which first reads the JSON into a JObject (using the Load method), manually reads the $type property, gets the type from the serializer's SerializationBinder, creates the value and populates it from the JObject.
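A rough sketch of that idea, reusing the IContent/Article types from the question; ResolveWhitelistedType is a hypothetical helper, and for brevity it resolves the type directly instead of going through the serializer's binder:

using System;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

public class TypeAnywhereConverter : JsonConverter
{
    // Apply the converter to the declared interface only, so concrete types deserialize normally.
    public override bool CanConvert(Type objectType) => objectType == typeof(IContent);

    public override bool CanWrite => false;

    public override object ReadJson(JsonReader reader, Type objectType,
        object existingValue, JsonSerializer serializer)
    {
        // Buffer the whole object, so the position of "$type" no longer matters.
        JObject jo = JObject.Load(reader);

        string typeName = (string)jo["$type"];
        if (typeName == null)
            throw new JsonSerializationException("Missing $type property.");

        Type resolvedType = ResolveWhitelistedType(typeName);

        // Requires a parameterless constructor on the target type.
        object value = Activator.CreateInstance(resolvedType);
        jo.Remove("$type");                        // don't treat the metadata property as data
        serializer.Populate(jo.CreateReader(), value);
        return value;
    }

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
        => throw new NotSupportedException();

    private static Type ResolveWhitelistedType(string typeName)
    {
        // Hypothetical whitelist: only names you explicitly allow are resolved.
        if (typeName.StartsWith("BrickPile.Samples.Models.Article,"))
            return typeof(Article);
        throw new JsonSerializationException($"Type '{typeName}' is not allowed.");
    }
}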
Third, regarding security. While Json.NET's $type may sound like a good idea, it's often not. It allows Json.NET to create any object type from any assembly just by naming that type in the JSON file. It's better to use a custom SerializationBinder with a dictionary that allows only the types you specify. You can find an example in my private framework (it also supports getting values for $type from JsonObjectAttribute):
https://github.com/Athari/Alba.Framework/blob/742ff1aeeb114179a16ca42667781944b26f3815/Alba.Framework/Serialization/DictionarySerializationBinder.cs
(This version uses some methods from other classes, but they're trivial. Later commits made the class more complex.)
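For illustration, a minimal whitelist binder along those lines might look like this (it uses the ISerializationBinder interface from newer Json.NET versions; the "article" alias and the Article type are assumptions):

using System;
using System.Collections.Generic;
using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;

public class WhitelistSerializationBinder : ISerializationBinder
{
    // Only names registered here can ever be turned into CLR types.
    private static readonly Dictionary<string, Type> Allowed = new Dictionary<string, Type>
    {
        ["article"] = typeof(Article)
    };

    public Type BindToType(string assemblyName, string typeName)
    {
        if (Allowed.TryGetValue(typeName, out var type))
            return type;
        throw new JsonSerializationException($"Type '{typeName}' is not allowed.");
    }

    public void BindToName(Type serializedType, out string assemblyName, out string typeName)
    {
        assemblyName = null;
        foreach (var pair in Allowed)
        {
            if (pair.Value == serializedType)
            {
                typeName = pair.Key;
                return;
            }
        }
        throw new JsonSerializationException($"Type '{serializedType}' is not allowed.");
    }
}

// Usage:
var settings = new JsonSerializerSettings
{
    TypeNameHandling = TypeNameHandling.Auto,
    SerializationBinder = new WhitelistSerializationBinder()
};

With this in place, the JSON carries a short alias such as "$type": "article" instead of a full assembly-qualified type name.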
I had kind of the same question apparently, and someone found an answer. Not sure what's the appropriate way to share an answer and give him ALL the credit, but this is the link:
Newtonsoft JSON.net deserialization error where fields in JSON change order
and this is the guy:
https://stackoverflow.com/users/3744182/dbc
This will work in Backbone, but I don't know if every browser will behave the same. There's no guarantee, basically, that every browser will keep the properties in the order in which they were added.
MyModel = Backbone.Model.extend({
  // ...

  toJSON: function(){
    // build the "$type" as the first parameter
    var json = {"$type": "BrickPile.Samples.Models.Article, BrickPile.Samples"};
    // get the rest of the data
    _.extend(json, Backbone.Model.prototype.toJSON.call(this));
    // send it back, and hope it's in the right order
    return json;
  }
});
You're better off getting Newtonsoft's JSON deserializer to work without needing it in a specific position, though. Hopefully that will be possible.
I have a slightly peculiar program which deals with cases very similar to this
(in C#-like pseudo code):
class CDataSet
{
    int m_nID;
    string m_sTag;
    float m_fValue;

    void PrintData()
    {
        //Blah Blah
    }
};

class CDataItem
{
    int m_nID;
    string m_sTag;
    CDataSet m_refData;
    CDataSet m_refParent;

    void Print()
    {
        if (null == m_refData)
        {
            m_refParent.PrintData();
        }
        else
        {
            m_refData.PrintData();
        }
    }
};
Members m_refData and m_refParent are initialized to null and used as follows:
m_refData -> Used when a new data set is added
m_refParent -> Used to point to an existing data set.
A new data set is added only if the field m_nID doesn't match an existing one.
Currently this code manages around 500 objects with around 21 fields per object, and the format of choice as of now is XML, which at 100k+ lines and 5 MB+ is very unwieldy.
I am planning to modify the whole shebang to use ProtoBuf, but I'm currently not sure how to handle the reference semantics. Any thoughts would be much appreciated.
Out of the box, protocol buffers does not have any reference semantics. You would need to cross-reference them manually, typically using an artificial key. Essentially, on the DTO layer you would add a key to CDataSet (that you simply invent, perhaps just an increasing integer), store the key instead of the item in m_refData/m_refParent, and run the fixup manually during serialization/deserialization. You can also just store the index into the set of CDataSet, but that may make insertion etc. more difficult. Up to you; since this is serialization, you could argue that you won't insert (etc.) outside of the initial population and hence the raw index is fine and reliable.
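As a sketch of that manual approach (the DTO names and the fixup shown in the trailing comments are invented for this example, not part of protobuf-net):

using ProtoBuf;

[ProtoContract]
class DataSetDto
{
    [ProtoMember(1)] public int Key { get; set; }      // artificial key, e.g. an increasing integer
    [ProtoMember(2)] public int Id { get; set; }
    [ProtoMember(3)] public string Tag { get; set; }
    [ProtoMember(4)] public float Value { get; set; }
}

[ProtoContract]
class DataItemDto
{
    [ProtoMember(1)] public int Id { get; set; }
    [ProtoMember(2)] public string Tag { get; set; }
    [ProtoMember(3)] public int? DataKey { get; set; }     // null when the item has no own data set
    [ProtoMember(4)] public int? ParentKey { get; set; }   // key of the referenced (parent) data set
}

// After deserialization you rebuild the object references, e.g.:
//   var byKey = dataSets.ToDictionary(d => d.Key);
//   item.m_refData = item.DataKey.HasValue ? byKey[item.DataKey.Value] : null;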
This is, however, a very common scenario - so as an implementation-specific feature I've added optional (opt-in) reference tracking to my implementation (protobuf-net), which essentially automates the above under the covers (so you don't need to change your objects or expose the key outside of the binary stream).
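With protobuf-net v2.x that opt-in looks roughly like this (CDataSet would carry its own [ProtoContract]/[ProtoMember] markers; the member names mirror the pseudo code above):

using ProtoBuf;

[ProtoContract]
class CDataItem
{
    [ProtoMember(1)] public int Id { get; set; }
    [ProtoMember(2)] public string Tag { get; set; }

    // AsReference = true makes protobuf-net write a reference token (backed by an internally
    // generated key) instead of embedding a copy, so shared CDataSet instances stay shared.
    [ProtoMember(3, AsReference = true)] public CDataSet Data { get; set; }
    [ProtoMember(4, AsReference = true)] public CDataSet Parent { get; set; }
}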