Struggling with type Document on seeding. There was an issue. Reason: Could not infer type of value - aqueduct

I have a column defined like this:
@Column(nullable: true)
Document openHours; // List "openHours": ["Tuesday - Sunday: 11.00 - 21.00"],
In my migration file I use seed():
@override
Future seed() async {
  const sClientSQL = ...
and the JSON part that throws the error is:
"openHours": [
"Monday - Friday: 17.00 - 22.00",
"Saturday: 14:00 - 23:00",
"Sunday & Holidays: 11:30 - 22:00"
],
The error in the terminal looks like this:
Seeding data from migration version 1...
*** There was an issue. Reason: Could not infer type of value '[Monday - Friday: 17.00 - 22.00,
Saturday: 14:00 - 23:00, Sunday & Holidays: 11:30 - 22:00]'.. Table: null Column: null
Documentation says:
Document map or list (Map<String, dynamic> or List<dynamic>)
The JSON looks OK to me, but obviously in this context it's wrong. So what am I doing wrong here? I couldn't find an example of how to write the JSON part for type Document in Aqueduct.
Thanks to all and regards,
Antonio
[Edit 4]
Here is a shortened example of the query that fails:
const sClientSQL = 'INSERT INTO _Clients (name, owner, address, contacts, openhours, dayOff, description) VALUES (@name, @owner, @address, @contacts, @openhours, @dayOff, @description)';
await database.store.execute(sClientSQL, substitutionValues: {
  'name': 'Dolce Vita',
  'owner': 'David Driss',
  'address': {
    'street': 'Johannisstr. 3',
    'zipCode': '54290',
    'city': 'Trier'
  },
  'contacts': {
    'phone': '0651 94 90 40',
    'fax': '0651 43 23 2',
    'mail': 'info@dolcevita.de'
  },
  'openhours': [
    "Monday - Friday: 17.00 - 22.00",
    "Saturday: 14:00 - 23:00",
    "Sunday & Holidays: 11:30 - 22:00"
  ],
  'dayOff': '',
  'description': 'All dishes and drinks include VAT. Extra ingredients are charged separately'
});
[Edit 1]
Further Information:
substitutionValues is passed as a Map.
I used double quotes and changed them to single quotes; that had no effect.
[Edit 2]
Tried out the complex map in DartPad and created a gist to show it:
Gist to Dart Maps sample; put the code into DartPad.
Result: the map as it is, is valid.
[Edit 3]
I removed all JSON columns to make sure the rest works. Success.
Added one JSON column, the first example I showed above. Same problem.
Tried a manual insert into the jsonb column. Success.
So only the await database.store.execute command doesn't accept my JSON literal.

With the help of a user on another channel I finally figured out how to get it to work.
The magic is all about using the right syntax, and it took a lot of trial and error; I wonder where to find the docs about it.
When using a multiline value, use three single quotes (''') at the beginning and the end.
Inside the multiline expression, use double quotes for the JSON.
Working Examples
all following "fields" in substitutionValues are jsonb columns in database
address and contacts are examples for json objects
openhours is an example for a json array of strings, multilined
fess is an example for a json array of objects, multilined
await database.store.execute(sClientSQL, substitutionValues: {
  'address': {
    'street': 'a street',
    'zipCode': '123456',
    'city': 'mycity'
  },
  'contacts': {
    'phone': '12345678',
    'fax': '12346578',
    'mail': 'info@mydomain.com'
  },
  'openhours': '''[
    "Monday - Friday: 17.00 - 22.00",
    "Saturday: 14:00 - 23:00",
    "Sunday: 11:30 - 22:00"
  ]''',
  'fees': '''[
    {"delivery": 2.00, "minOrder": 7.50, "location": "all"},
    {"delivery": 2.50, "minOrder": 9.50, "location": "here"},
    {"delivery": 2.50, "minOrder": 9.50, "location": "there"}
  ]''',
I think that with these examples all the complexity is covered.
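If hand-writing the triple-quoted JSON gets tedious, an alternative sketch is to build the value as a Dart list and encode it with jsonEncode from dart:convert; this should be accepted the same way, since it produces the same kind of JSON string:

import 'dart:convert';

final openHours = jsonEncode([
  'Monday - Friday: 17.00 - 22.00',
  'Saturday: 14:00 - 23:00',
  'Sunday: 11:30 - 22:00',
]); // yields the same JSON array string as the triple-quoted literal above

await database.store.execute(sClientSQL, substitutionValues: {
  'openhours': openHours,
  // ... other fields as above
});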


Proper way to convert Data type of a field in MongoDB

Possible replication of How to change the type of a field?
I am newly learning MongoDB and I am facing a problem while converting the data type of a field value to another data type.
Below is an example of my documents:
[
  {
    "Name of Restaurant": "Briyani Center",
    "Address": " 336 & 338, Main Road",
    "Location": "XYZQWE",
    "PriceFor2": "500.0",
    "Dining Rating": "4.3",
    "Dining Rating Count": "1500",
  },
  {
    "Name of Restaurant": "Veggie Conner",
    "Address": " New 14, Old 11/3Q, Railway Station Road",
    "Location": "ABCDEF",
    "PriceFor2": "1000.0",
    "Dining Rating": "4.4",
  }
]
Like the above, I have 12k documents. Notice that the data type of PriceFor2 is a string. I would like to convert it to an integer type.
I have referred to many amazing answers given in the above link, but when I try to run the query I get a .save() is not a function error. Please advise what the problem is.
Below is the code I used:
db.chennaiData.find().forEach( function(x){
    x.priceFor2 = new NumberInt(x.priceFor2);
    db.chennaiData.save(x);
    db.chennaiData.save(x);
});
This is the error I am getting:
TypeError: db.chennaiData.save is not a function
From MongoDB's save documentation:
Starting in MongoDB 4.2, the db.collection.save() method is deprecated. Use db.collection.insertOne() or db.collection.replaceOne() instead.
You likely have MongoDB 4.2+, so the save function is no longer available. Consider migrating to insertOne and replaceOne as suggested.
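For example, a minimal sketch of the same loop using replaceOne (keeping your conversion idea, and assuming the field is named PriceFor2 as in the sample documents):

db.chennaiData.find().forEach(function (x) {
    x.PriceFor2 = NumberInt(parseInt(x.PriceFor2, 10)); // "500.0" -> 500 as a 32-bit integer
    db.chennaiData.replaceOne({ _id: x._id }, x);       // replaceOne instead of the removed save()
});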
For your specific scenario, it is actually preferable to do it with a single update, as mentioned in another SO answer. It does only one DB call, while your approach fetches all documents in the collection to the application level and performs n DB calls to save them back.
db.collection.update({},
  [
    {
      $set: {
        PriceFor2: {
          $toDouble: "$PriceFor2"
        }
      }
    }
  ],
  {
    multi: true
  })
Mongo Playground
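Side note: $toDouble yields a double. Since the goal was an integer, the same pipeline style works with $toInt, converting via $toDouble first because $toInt does not accept strings with a decimal point. A sketch:

db.chennaiData.updateMany({},
  [
    { $set: { PriceFor2: { $toInt: { $toDouble: "$PriceFor2" } } } }
  ])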

Why does this AWS API Gateway Mapping Template not output the correct variable?

I am using AWS API Gateway. I have a response from my backend and I am trying to map the response into a different output using a Mapping Template.
I am trying to return each of the hits.hits._source.display elements. But the mapping template returns the entire object with the literal "_source.display" string added to the end.
The original unmodified server response is like -
{
  "took": 7,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": {
      "value": 10000,
      "relation": "gte"
    },
    "max_score": 1.0,
    "hits": [
      {
        "_index": "address",
        "_type": "_doc",
        "_id": "GAVIC411064535",
        "_score": 1.0,
        "_source": {
          "id": "GAVIC411064535",
          "display": "1/28 George Street, Casterton VIC 3311",
          "location": {
            "lat": -37.59205672,
            "lon": 141.38825665
          },
          "component": {
            "buildingName": "",
            "number": "1/28",
            "street": "George Street",
            "locality": "Casterton",
            "state": "VIC",
            "postcode": "3311"
          }
        }
      },
      {
        "_index": "address",
        "_type": "_doc",
        "_id": "GAVIC411066597",
        "_score": 1.0,
        "_source": {
          "id": "GAVIC411066597",
          "display": "52 St Georges Road, Corio VIC 3214",
          "location": {
            "lat": -37.59205672,
            "lon": 141.38825665
          },
          "component": {
            "buildingName": "",
            "number": "52",
            "street": "St Georges Road",
            "locality": "Corio ",
            "state": "VIC",
            "postcode": "3214"
          }
        }
      },
My Mapping Template so far looks like this -
#set($inputRoot = $input.path('$'))
{
#foreach($elem in $inputRoot.hits.hits)
{
"address": $elem._source.display,
}#if($foreach.hasNext),#end
#end
}
The resulting output looks like this. You can see that it's returning the entire $elem rather than the property I've asked for, that is ._source.display. It has also literally printed ._source.display at the end.
{
"address": {_index=address, _type=_doc, _id=GAVIC411064535, _score=1.0, _source={id=GAVIC411064535, display=1/28 George Street, Casterton VIC 3311, location={lat=-37.59205672, lon=141.38825665}, component={buildingName=, number=1/28, street=George Street, locality=Casterton, state=VIC, postcode=3311}}}._source.display,
},
{
"address": {_index=address, _type=_doc, _id=GAVIC411066597, _score=1.0, _source={id=GAVIC411066597, display=52 St Georges Road, Corio VIC 3214, location={lat=-37.59205672, lon=141.38825665}, component={buildingName=, number=52, street=Georges Road, locality=Corio , state=VIC, postcode=3214}}}._source.display,
},
I have truncated the server response and mapping template output for brevity.
The desired mapping template output is -
{
"address" : "28 George Street, Casterton VIC 3311",
"address" : "52 St Georges Road, Corio VIC 3214"
}
This is a bug in Velocity 1.7 which was fixed in the 2.x versions (in July 2016... AWS really ought to upgrade its libraries sometimes).
You can work around this bug like this:
#foreach($elem in $inputRoot.hits.hits)
{
"address": "$elem.get('_source').display"
}#if($foreach.hasNext),#end
#end
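For completeness, here is the question's template with that workaround applied (same structure, only the _source access and the quoting changed):

#set($inputRoot = $input.path('$'))
{
#foreach($elem in $inputRoot.hits.hits)
  {
    "address": "$elem.get('_source').display"
  }#if($foreach.hasNext),#end
#end
}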

Setting date format in Google Sheets using API and Python

I'm trying to set the date format on a column so that dates are displayed like this: 14-Aug-2017. This is the way I'm doing it:
requests = [
    {
        'repeatCell': {
            'range': {
                'startRowIndex': 1,
                'startColumnIndex': 4,
                'endColumnIndex': 4
            },
            'cell': {
                "userEnteredFormat": {
                    "numberFormat": {
                        "type": "DATE",
                        "pattern": "dd-mmm-yyyy"
                    }
                }
            },
            'fields': 'userEnteredFormat.numberFormat'
        }
    }
]
body = {"requests": requests}
response = service.spreadsheets().batchUpdate(spreadsheetId=SHEET, body=body).execute()
I want all the cells in column E except the header cell to be updated, hence the range definition. I used http://wescpy.blogspot.co.uk/2016/09/formatting-cells-in-google-sheets-with.html and https://developers.google.com/sheets/api/samples/formatting as the basis for this approach.
However, the cells don't show their contents using that format. They continue to be in "Automatic" format, either showing the numeric value that I'm storing (the number of days from 1st Jan 1900) or (sometimes) the date.
Adding sheetId to the range definition doesn't alter the outcome.
I'm not getting an error back from the service and the response only contains the spreadsheetId and an empty replies structure [{}].
What am I getting wrong?
I've found the error - the endColumnIndex needs to be 5, not 4.
I didn't read that first linked article carefully enough!
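For reference, the end indexes in a GridRange are exclusive, so covering column E (zero-based start index 4) requires an end index of 5. Only the range block changes:

'range': {
    'startRowIndex': 1,      # skip the header row
    'startColumnIndex': 4,   # column E (zero-based)
    'endColumnIndex': 5      # end index is exclusive, so this covers column E only
},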

Mule Dataweave Fixed Width File with header and footer

I am working on a project where we receive a flat file, but the first and last lines contain information that does not fit the fixed width pattern. Is there a way to DataWeave all of this information correctly and, if possible, put the header and footer into variables and have just the contents in the payload?
Example File
HDMTFSBEUP00000220170209130400 MT HD07
DT01870977 FSFSS F3749261 CR00469002017020820170225 0000
DT01870978 FSFSS F3749262 CR00062002017020820170125 0000
TRMTFSBEUP00000220170209130400 000000020000002000000000000043330000000000000 0000
I know that for CSV you can skip a line, but I don't see that option for fixed width. Also, the header and footer will both start with the same first 2 letters every time, so maybe they can be filtered by DataWeave?
Please refer to the DataWeave Flatfile Schemas documentation. There are several examples of processing different types of data.
In this case, I tried to simplify your example data and apply a custom schema as follows:
Example data:
HDMTFSBEUP00000220170209130400
DT01870977
DT01870978
TRMTFSBEUP00000220170209130400
Schema/Flat File Definition:
form: FLATFILE
structures:
- id: 'test'
  name: test
  tagStart: 0
  tagLength: 2
  data:
  - { idRef: 'header' }
  - { idRef: 'data', count: '>1' }
  - { idRef: 'footer' }
segments:
- id: 'header'
  name: header
  tag: 'HD'
  values:
  - { name: 'header', type: String, length: 39 }
- id: 'data'
  name: data
  tag: 'DT'
  values:
  - { name: 'code', type: String, length: 17 }
- id: 'footer'
  name: footer
  tag: 'TR'
  values:
  - { name: 'footer', type: String, length: 30 }
The schema will validate the example data and identify each line based on the tag, the first 2 letters. The output will be grouped accordingly:
{
"header": {},
"data": [{}, {}],
"footer": {}
}
Since the expected result is only the data, just select it: payload.data.
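For example, a minimal transform (a sketch assuming Mule 3 / DataWeave 1.0, matching the schema above) that returns only the data segments:

%dw 1.0
%output application/json
---
payload.data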
Use a range selector to skip the header and footer.
payload[1..-2] map {
field1: $[0..15],
field2: $[16..31]
...,
...
}
[1..-2] selects from the 2nd line to the second-to-last line in the payload.
$[0..15] selects from the 1st column index to the 16th; $[16..31] selects from the 17th column index to the 32nd.
I was facing the same issue, and the answer @sulthony h wrote needs a little tweak. I used these lines instead and it worked for me:
data:
- { idRef: 'header', count: 1 }
- { idRef: 'data', count: '>1' }
- { idRef: 'footer', count: 1 }
"count" was missing from header and footer, and that was throwing an exception. Hope this helps.

import.io json API: get the list of columns, with subfields

I'm using the import.io API and have noticed that some field types return several columns in the generated JSON. For instance a field foo of type Money will return three columns: foo, foo/_currency and foo/_source.
Is there a reference somewhere? I found some documentation here http://blog.import.io/post/11-columns-of-importio with an incomplete example:
{
"whole_number_field": 123,
"whole_number_field/_source": "123",
"language_field": "ben",
"language_field/_source": "bn",
"country_field": "CHN",
"country_field/_source": "China",
"boolean_field": false,
"boolean_field/_source": "false",
"currency_field/_currency": "GBP",
"currency_field/_source": "£123.45",
"link_field": "http://chris-alexander.co.uk",
"link_field/_text": "Blog",
"link_field/_title": "linktitle",
"datetime_field": 611368440000,
"datetime_field/_source": "17/05/89 12:34",
"datetime_field/_utc": "Wed May 17 00:34:00 GMT 1989",
"image_field": "http://io.chris-alexander.co.uk/gif2.gif",
"image_field/_alt": "imgalt",
"image_field/_title": "imgtitle",
"image_field/_source": "gif2.gif"
}
The columns are documented in the API docs:
http://api.docs.import.io/
For example, for currency, the columns are:
myvar <== Extracted value
myvar/_currency <== ISO currency code
myvar/_source <== Original value
The ISO currency code is returned as myvar/_currency and the numeric value in myvar.
I established this through several tests; I'd like to know if I'm missing something:
{
'DATE': ['_source', '_utc'],
# please tell me if you have an example of an import.io API with a date!
'BOOLEAN': ['_source'],
'LANG': ['_source'],
'COUNTRY': ['_source'],
'HTML':[],
'STRING':[],
'URL': ['_text', '_source', '_title'],
'IMAGE': ['_alt', '_title', '_source'],
'DOUBLE': ['_source'],
'CURRENCY': ['_currency', '_source'],
}