How to remove a value from a multi-valued SCIM 2.0 sub-attribute? - scim

I have a complex SCIM attribute that looks as follows:
"myattr1": {
  "subattr1": 5,
  "subattr2": [1, 2, 3]
}
I want to modify this to become
"myattr1": {
"subattr1": 5,
"subattr2": [1, 3]
}
How can I do this using PATCH? Should I replace the entire sub-attribute, or can I remove just the value 2 with a single PATCH operation?
I know how to do this with multi-valued attributes, but I don't know how to do it for sub-attributes.

[EDIT: This is wrong..]
I believe this will work:
PATCH /resource/id
{
  "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
  "Operations": [
    {
      "op": "remove",
      "path": "myattr1[subattr2 eq \"2\"]"
    }
  ]
}
The path example was taken from https://datatracker.ietf.org/doc/html/rfc7644#page-33, which shows "path":"members[value eq \"2819c223-7f76-453a-919d-413861904646\"].displayName" as a way to target the displayName sub-attribute of the complex multi-valued group attribute "members".
The escaped quotes around 2 are needed only if the value is a string; if it is in fact an integer, they are not necessary.

As per the PingIdentity documentation https://github.com/pingidentity/scim2/wiki/Working-with-SCIM-paths#the-value-sub-attribute, simple multi-valued attributes have a special implicit sub-attribute called "value".
If that applies here, your PATCH request payload should be as follows:
{
  "schemas": [
    "urn:ietf:params:scim:api:messages:2.0:PatchOp"
  ],
  "Operations": [
    {
      "op": "remove",
      "path": "myattr1.subattr2[value eq \"2\"]"
    }
  ]
}
However, this type of patch operation is not clearly defined in RFC 7644 (https://datatracker.ietf.org/doc/html/rfc7644#section-3.5.2).
You could confirm the expected behavior by raising the question on the SCIM mailing list: https://mailarchive.ietf.org/arch/browse/scim/
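In the meantime, here is a minimal Python sketch (using the requests library) of sending the second form of the PATCH. It assumes the server accepts the implicit "value" sub-attribute filter described above; the base URL, resource path, and bearer token are placeholders.
import requests

# Hypothetical endpoint and credentials -- replace with your SCIM service's values.
BASE_URL = "https://scim.example.com/v2"
RESOURCE_ID = "1234"
TOKEN = "..."

payload = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
    "Operations": [
        {
            "op": "remove",
            # Removes the entry whose implicit "value" equals 2 from the
            # simple multi-valued sub-attribute myattr1.subattr2.
            # Server support for this filter form varies, as noted above.
            "path": "myattr1.subattr2[value eq 2]",
        }
    ],
}

resp = requests.patch(
    f"{BASE_URL}/resource/{RESOURCE_ID}",
    json=payload,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/scim+json",
    },
)
print(resp.status_code, resp.text)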

Proper way to convert Data type of a field in MongoDB

This may be a replication of: How to change the type of a field?
I am new to MongoDB and I am facing a problem while converting the data type of a field's value to another data type.
Below is an example of my document
[
  {
    "Name of Restaurant": "Briyani Center",
    "Address": " 336 & 338, Main Road",
    "Location": "XYZQWE",
    "PriceFor2": "500.0",
    "Dining Rating": "4.3",
    "Dining Rating Count": "1500"
  },
  {
    "Name of Restaurant": "Veggie Conner",
    "Address": " New 14, Old 11/3Q, Railway Station Road",
    "Location": "ABCDEF",
    "PriceFor2": "1000.0",
    "Dining Rating": "4.4"
  }
]
I have about 12k documents like the above. Notice that the data type of PriceFor2 is a string; I would like to convert it to an integer type.
I have referred to many of the answers given in the linked question, but when I try to run the query I get a .save() is not a function error. Please advise what the problem is.
Below is the code I used
db.chennaiData.find().forEach(function(x) {
    x.priceFor2 = new NumberInt(x.priceFor2);
    db.chennaiData.save(x);
});
This is the error I am getting:
TypeError: db.chennaiData.save is not a function
From MongoDB's save documentation:
Starting in MongoDB 4.2, the db.collection.save() method is deprecated. Use db.collection.insertOne() or db.collection.replaceOne() instead.
Most likely you are running MongoDB 4.2+, so the save function is no longer available. Consider migrating to insertOne and replaceOne as suggested.
For your specific scenario, it is actually preferable to do this with a single update, as mentioned in another SO answer. That approach makes only one db call, whereas your approach fetches all documents in the collection to the application level and then performs n db calls to save them back.
db.collection.update({},
  [
    {
      $set: {
        PriceFor2: {
          $toDouble: "$PriceFor2"
        }
      }
    }
  ],
  { multi: true })
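If you are doing this from application code rather than the shell, the same single-update approach works through PyMongo (aggregation-pipeline updates require MongoDB 4.2+). A minimal sketch; the connection string and database/collection names here are assumptions:
from pymongo import MongoClient

# Hypothetical connection details -- adjust to your deployment.
client = MongoClient("mongodb://localhost:27017")
collection = client["restaurants"]["chennaiData"]

# One server-side update converts the string field for every document,
# without pulling the documents into the application first.
result = collection.update_many(
    {},
    [{"$set": {"PriceFor2": {"$toDouble": "$PriceFor2"}}}],
)
print(result.modified_count, "documents updated")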

Pydantic: how to make a model with some mandatory fields and an arbitrary number of other optional fields, whose names are unknown and can be anything?

I'd like to represent the following JSON with a Pydantic model:
{
  "sip": {
    "param1": 1
  },
  "param2": 2,
  ...
}
This means the JSON may contain a sip field plus some other fields, of any number and with any names. So I'd like a model with a sip: Optional[dict] field and some kind of "rest" that is correctly parsed from / serialized to JSON. Is that possible?
Maybe you are looking for the extra model config:
extra: whether to ignore, allow, or forbid extra attributes during model initialization. Accepts the string values of 'ignore', 'allow', or 'forbid', or values of the Extra enum (default: Extra.ignore). 'forbid' will cause validation to fail if extra attributes are included, 'ignore' will silently ignore any extra attributes, and 'allow' will assign the attributes to the model.
Example:
from typing import Any, Dict, Optional

import pydantic


class Foo(pydantic.BaseModel):
    sip: Optional[Dict[Any, Any]]

    class Config:
        extra = pydantic.Extra.allow


foo = Foo.parse_raw(
    """
    {
        "sip": {
            "param1": 1
        },
        "param2": 2
    }
    """
)

print(repr(foo))
print(foo.json())
Output:
Foo(sip={'param1': 1}, param2=2)
{"sip": {"param1": 1}, "param2": 2}

Karate json key list variable assignment

New to Karate, and JSON, for that matter, but I've got a variable like:
response {
  entries {
    products [
      {
        names [
          "Peter Parker",
          "Tony Stark",
          "Captain America"
        ]
      },
      {
        names [
          "Thomas Tinker",
          "Jimmy Johnson",
          "Mama Martha"
        ]
      }
    ]
  }
}
Using match each, response.entries.products[*].names returns a list like:
["Peter Parker","Tony Stark","Captain America","Thomas Tinker","Jimmy Johnson","Mama Martha"]
But I'd like to assign that output to a variable, such as:
* def variable = response.entries.products[*].names
that would hold a similar value. When I use the above line, I get the following error:
Expected an operand but found *
Is it possible to achieve that, or something similar? If so, how?
Thanks!
Yes, there is syntax for that:
* def variable = $response.entries.products[*].names
Read the docs: https://github.com/intuit/karate#get

Collapsing a group using Google Sheets API

As a workaround to difficulties creating a new sheet with groups, I am trying to create and collapse these groups in a separate call to batchUpdate. I can request an addDimensionGroup successfully, but when I request an updateDimensionGroup to collapse the group I just created, either in the same API call or in a separate one, I get this error:
{
  "error": {
    "code": 400,
    "message": "Invalid requests[1].updateDimensionGroup: dimensionGroup.depth must be \u003e 0",
    "status": "INVALID_ARGUMENT"
  }
}
But I'm passing depth as 0, as seen in the following JSON that I send in my request:
{
  "requests": [
    {
      "addDimensionGroup": {
        "range": {
          "dimension": "ROWS",
          "sheetId": 0,
          "startIndex": 2,
          "endIndex": 5
        }
      }
    },
    {
      "updateDimensionGroup": {
        "dimensionGroup": {
          "range": {
            "dimension": "ROWS",
            "sheetId": 0,
            "startIndex": 2,
            "endIndex": 5
          },
          "depth": 0,
          "collapsed": true
        },
        "fields": "*"
      }
    }
  ],
  "includeSpreadsheetInResponse": true
}
...
I'm not entirely sure what I am supposed to provide for "fields". The documentation for UpdateDimensionGroupRequest says it should be a string ("string (FieldMask format)"), but the FieldMask definition itself shows the possibility of multiple paths and doesn't say how they should be separated within a single string.
What am I doing wrong here?
The error message is actually instructing you that the dimensionGroup.depth value must be > 0:
If you call spreadsheets.get() on your sheet, and request only the DimensionGroup data, you'll note that your created group is actually at depth 1:
GET https://sheets.googleapis.com/v4/spreadsheets/{SSID}?fields=sheets(rowGroups)&key={API_KEY}
This makes sense, since the depth is (per API spec):
depth (number): The depth of the group, representing how many groups have a range that wholly contains the range of this group.
Note that any given DimensionGroup "wholly contains its own range" by definition.
If your goal is to change the status of the DimensionGroup, then you need to set its collapsed property:
{
  "requests": [
    {
      "updateDimensionGroup": {
        "dimensionGroup": {
          "range": {
            "sheetId": <your sheet id>,
            "dimension": "ROWS",
            "startIndex": 2,
            "endIndex": 5
          },
          "collapsed": true,
          "depth": 1
        },
        "fields": "collapsed"
      }
    }
  ]
}
For this particular request, the only attribute you can set is collapsed; the other properties are used to identify the DimensionGroup to manipulate. Thus, specifying fields: "*" is equivalent to fields: "collapsed". This is not true for the majority of requests, where specifying fields: "*" and then omitting a non-required request parameter is interpreted as "delete that missing parameter from the server's representation".
To change a DimensionGroup's depth, you must add or remove other DimensionGroups that encompass it.
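For completeness, a minimal Python sketch of sending the corrected request above with the requests library; the spreadsheet ID, sheet ID, and OAuth access token are placeholders:
import requests

# Hypothetical identifiers -- substitute your own spreadsheet, sheet, and token.
SPREADSHEET_ID = "your-spreadsheet-id"
ACCESS_TOKEN = "your-oauth-access-token"

body = {
    "requests": [
        {
            "updateDimensionGroup": {
                "dimensionGroup": {
                    "range": {
                        "sheetId": 0,
                        "dimension": "ROWS",
                        "startIndex": 2,
                        "endIndex": 5,
                    },
                    "collapsed": True,
                    "depth": 1,
                },
                # Only the collapsed state is being updated.
                "fields": "collapsed",
            }
        }
    ]
}

resp = requests.post(
    f"https://sheets.googleapis.com/v4/spreadsheets/{SPREADSHEET_ID}:batchUpdate",
    json=body,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
print(resp.status_code, resp.json())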

How to prevent Facet Terms from tokenizing

I am using a terms facet to get all the unique values and their counts for a field, and I am getting wrong results.
term: web
Count: 1191979
term: misc
Count: 1191979
term: passwd
Count: 1191979
term: etc
Count: 1191979
While the actual result should be:
term: WEB-MISC /etc/passwd
Count: 1191979
Here is my sample query:
{
  "facets": {
    "terms1": {
      "terms": {
        "field": "message"
      }
    }
  }
}
If reindexing is an option, the best approach would be to change the mapping and mark this field as not_analyzed:
"your_field" : { "type": "string", "index" : "not_analyzed" }
You can use multi field type if keeping an analyzed version of the field is desired:
"your_field" : {
"type" : "multi_field",
"fields" : {
"your_field" : {"type" : "string", "index" : "analyzed"},
"untouched" : {"type" : "string", "index" : "not_analyzed"}
}
}
This way, you can continue using your_field in the queries, while running facet searches using your_field.untouched.
Alternatively, if this field is stored, you can use a script field facet instead:
"facets" : {
"term" : {
"terms" : {
"script_field" : "_fields.your_field.value"
}
}
}
As the last resort, if this field is not stored, but record source is stored in the index, you can try this:
"facets" : {
"term" : {
"terms" : {
"script_field" : "_source.your_field"
}
}
}
The first solution is the most efficient. The last solution is the least efficient and may take a lot of time on a large index.
I also hit this same issue today while running a terms aggregation on a recent version of Elasticsearch. After some googling and partial understanding, I found out how this indexing works (which is actually quite simple).
Queries can find only terms that actually exist in the inverted index
When you index the following string
"WEB-MISC /etc/passwd"
it will be passed to an analyzer. The analyzer might tokenize it into
"WEB", "MISC", "etc" and "passwd"
along with their position details. These tokens might then be filtered to lowercase, such as
"web", "misc", "etc" and "passwd"
So, after indexing, the search query can only see those 4 tokens, not the complete string "WEB-MISC /etc/passwd". For your requirement, these are the options you can use:
1. Change the default analyzer used by Elasticsearch.
2. If analysis is not needed, just turn off the analyzer by setting 'not_analyzed' for the fields you need.
3. To make the already-indexed data searchable as whole terms, re-indexing is the only option.
I have briefly explained this problem and proposed two solutions here.
I have talked about multiple approaches here.
One is the use of not_analyzed to preserve the string as it is. But since that has the drawback of being case sensitive, a better approach is to use the keyword tokenizer plus a lowercase filter.
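As a rough sketch of that last suggestion, assuming an older Elasticsearch that still uses string mappings (to match the facet examples above), the index could be created with a custom keyword + lowercase analyzer like this; the host, index name, and type name are placeholders:
import requests

# Hypothetical host and index/type names.
ES_URL = "http://localhost:9200"
INDEX = "logs"

index_body = {
    "settings": {
        "analysis": {
            "analyzer": {
                # The keyword tokenizer keeps the whole string as one token;
                # the lowercase filter then makes matching case-insensitive.
                "lowercase_keyword": {
                    "type": "custom",
                    "tokenizer": "keyword",
                    "filter": ["lowercase"],
                }
            }
        }
    },
    "mappings": {
        "doc": {
            "properties": {
                "message": {"type": "string", "analyzer": "lowercase_keyword"}
            }
        }
    },
}

resp = requests.put(f"{ES_URL}/{INDEX}", json=index_body)
print(resp.status_code, resp.json())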