Gson ignores list of Strings - serialization

I have a model which contains a list of strings:
data class BSLQuery(var contentTypeKeys: List<String>? = null, ...)
When I try to convert the model to JSON with
gson.toJson(query)
Gson always ignores contentTypeKeys.
Typical data for the unmapped field:
BSLQuery(contentTypeKeys = listOf("com.assignment.attachment"))
How can I convert it correctly?
EDIT
Actual result JSON: {...}
Required result JSON: {"contentTypeKeys":["com.assignment.attachment"], ...}

Related

Pydantic: how to make model with some mandatory and arbitrary number of other optional fields, which names are unknown and can be any?

I'd like to represent the following JSON with a Pydantic model:
{
  "sip": {
    "param1": 1
  },
  "param2": 2,
  ...
}
That is, the JSON may contain a sip field plus some other fields, any number with any names. So I'd like a model which has a sip: Optional[dict] field and some kind of "rest" that is correctly parsed from / serialized to JSON. Is it possible?
Maybe you are looking for the extra model config:
extra
whether to ignore, allow, or forbid extra attributes during model initialization. Accepts the string values of 'ignore', 'allow', or 'forbid', or values of the Extra enum (default: Extra.ignore). 'forbid' will cause validation to fail if extra attributes are included, 'ignore' will silently ignore any extra attributes, and 'allow' will assign the attributes to the model.
Example:
from typing import Any, Dict, Optional

import pydantic


class Foo(pydantic.BaseModel):
    sip: Optional[Dict[Any, Any]]

    class Config:
        extra = pydantic.Extra.allow


foo = Foo.parse_raw(
    """
    {
        "sip": {
            "param1": 1
        },
        "param2": 2
    }
    """
)
print(repr(foo))
print(foo.json())
Output:
Foo(sip={'param1': 1}, param2=2)
{"sip": {"param1": 1}, "param2": 2}

DRF Serializers. Different fields on serialize and deserialize methods

What is the best approach to having the same field name in a Serializer but different behaviour when serializing and deserializing data? (I want to pass only group_id on input and get the full related info on output.)
So I want my schema to look like this when I input my data:
{
"group": 1,
"other_fields": []
...
}
But I got this (which is the way I want the data to look on output only):
{
"group": {
"name": "string",
"description": "string",
"image": "string",
"is_public": true
},
"other_fields": []
...
}
My serializer right now:
class TaskSerializer(serializers.ModelSerializer):
    group = GroupSerializer()

    class Meta:
        model = Task
        fields = "__all__"
Edit: Added the group serializer and my Group model. Nothing special.
class GroupSerializer(serializers.ModelSerializer):
    class Meta:
        model = Group
        fields = "id", "owner", "name", "description", "image", "is_public"
        read_only_fields = ("owner",)

class Group(models.Model):
    name = models.CharField(max_length=32)
    owner = models.ForeignKey("user.User", on_delete=models.CASCADE)
    description = models.CharField(max_length=32)
    image = models.ImageField(upload_to=func)
    is_public = models.BooleanField(default=True)
In your TaskSerializer class you are serializing group with GroupSerializer(), which serializes the relation as an object with the fields you specified in its implementation.
You instead want to serialize a single field; for that you could use SlugRelatedField(slug_field='id'), which serializes the relation as a single field from group.
Because your Group model doesn't define a primary key field, Django generates an automatic AutoField(primary_key=True) id field, so you can use PrimaryKeyRelatedField().
Try (note that PrimaryKeyRelatedField needs either a queryset or read_only=True):
class TaskSerializer(serializers.ModelSerializer):
    group = serializers.PrimaryKeyRelatedField(queryset=Group.objects.all())

    class Meta:
        model = Task
        fields = "__all__"
For more examples, I suggest reading this tip from testdriven.io.

I'm having trouble printing the values from multiple keys that have the same name

I've had trouble printing the values of the key "uuid". The "uuid" key shows up multiple times throughout the file, whilst other keys that are only present once print without trouble. So I'm wondering, is it possible to do what I want? The error I'm getting is a KeyError: 'uuid', for your information.
path = os.path.join(startn, listn + endn)
with open(path, encoding='utf-8') as json_file:
    auction = json.load(json_file)
    print("Type:", type(auction))
    print("\nAuction:", auction['uuid'])
Also the file data looks like this
{"uuid":"36ff18f6e56d49b18c55cd06df3dfce8","auctioneer":"7c1251d409524cfd96b68da183698676","profile_id":"b7c111408b7c4d57a0665edda28c3b77"}
{"uuid":"754c3f2a25d949d1907f9c29f761b636","auctioneer":"f281bf681baa4cfea8a798cbe76c15f3","profile_id":"f281bf681baa4cfea8a798cbe76c15f3"}
etc...
Given that you haven't given us the full JSON but rather a garbled one
{"uuid":"36ff18f6e56d49b18c55cd06df3dfce8","auctioneer":"7c1251d409524cfd96b68da183698676","profile_id":"b7c111408b7c4d57a0665edda28c3b77"} {"uuid":"754c3f2a25d949d1907f9c29f761b636","auctioneer":"f281bf681baa4cfea8a798cbe76c15f3","profile_id":"f281bf681baa4cfea8a798cbe76c15f3"}
I assume that the actual json is something like this
[
    {
        "uuid": "36ff18f6e56d49b18c55cd06df3dfce8",
        "auctioneer": "7c1251d409524cfd96b68da183698676",
        "profile_id": "b7c111408b7c4d57a0665edda28c3b77"
    },
    {
        "uuid": "754c3f2a25d949d1907f9c29f761b636",
        "auctioneer": "f281bf681baa4cfea8a798cbe76c15f3",
        "profile_id": "f281bf681baa4cfea8a798cbe76c15f3"
    }
]
This is a JSON array. What Python returns is a list, which is not accessed by key. You need to iterate over the list, like so:
with open(path, encoding='utf-8') as json_file:
    # Renamed to plural because this is an array
    auctions = json.load(json_file)
    # Iterate through the array
    for auction in auctions:
        print("Type:", type(auction))
        print("\nAuction:", auction['uuid'])

How to get FlatBufferToString to generate valid JSON for union types?

I have a union type in my flatbuffers schema:
union Quux { Foo, Bar, Baz }

table Root {
    quux: Quux;
}
If I convert to json using flatc, it looks like this:
{
  quux_type: "Bar",
  quux: {...}
}
But if I use FlatBufferToString from flatbuffers/minireflect.h, then I get this instead, which is not valid JSON.
{
  quux_type: Bar,
  quux: {...},
}
I'm calling flatc like this
flatc --reflect-names --cpp -o include src/quux.fbs
How can I get minireflect to produce valid json output for union types?
As you can see from the comment (https://github.com/google/flatbuffers/blob/4e45f7c9e8da64a9601eeba1231079c3ce0a6dc2/include/flatbuffers/minireflect.h#L282), the minireflect string conversion is very simple and only tries to be JSON-like.
That said, if you pass true to the ToStringVisitor in https://github.com/google/flatbuffers/blob/4e45f7c9e8da64a9601eeba1231079c3ce0a6dc2/include/flatbuffers/minireflect.h#L396-L404, it looks like you will get quotes around both the enum values and the field names.

Escape backslashes in DataWeave

I have a string array in a properties file, and I want to read its value in DataWeave in JSON format.
The array in the properties file is:
Countries = ["USA","England","Australia"]
In DataWeave, I am using this:
%output application/json
---
{
    countries: p('Countries')
}
The output I am getting is:
"countries": "[\"USA\",\"England\",\"Australia\"]",
The output I want is:
"countries": [
    "USA",
    "England",
    "Australia"
]
I have tried replace but no luck.
I also tried countries map $ as String after changing the array to Countries = ['USA','England','Australia'], but it says Invalid input 'S', expected :type or enclosedExpr.
How can I achieve this?
The problem is that properties-file values are strings, not arrays, so your expression is not interpreted as an array. But don't worry, you can use the read function:
read(p('Countries'), "application/json")