Karate match expression

I am using schema validation to validate a response. Each score value is either a number or 'NA'. Below are the response and the schema validation.
Response:
{
  "ID": "ES123D74590Z",
  "companyName": "ABC Corp",
  "hourMs": 67890000000,
  "date": "2020-06-09T00:00:00.000Z",
  "scores": {
    "AllScore": 61,
    "MaxScore": 59,
    "ScoreA": 75,
    "ScoreB": "NA",
    "ScoreC": 49,
    "ScoreD": "NA"
  },
  "movement": {},
  "amt": {}
}
Schema Assertion:
{
  "ID": '#string',
  "companyName": '#string',
  "hourMs": '#number',
  "date": '#regex[^\d\d\d\d-([0-9]{2})-([0-9]{2})(\T)([0-9]{3}):([0-9]{3}):([0-9]{3})\.[0-9]{3}\Z)$]',
  "scores": {
    "AllScore": '##number? _ >= 0 && _ <= 100 || _ == "NA"',
    "MaxScore": '##number? _ >= 0 && _ <= 100 || _ == "NA"',
    "ScoreA": '##number? _ >= 0 && _ <= 100 || _ == "NA"',
    "ScoreB": '##number? _ >= 0 && _ <= 100 || _ == "NA"',
    "ScoreC": '##number? _ >= 0 && _ <= 100 || _ == "NA"',
    "ScoreD": '##number? _ >= 0 && _ <= 100 || _ == "NA"'
  },
  "movement": {},
  "amt": {}
}
Error message received:
com.intuit.karate.exception.KarateException: score.feature:19 - path: $.scores.ScoreB, actual: 'NA', expected: '##number? _ >= 0 && _ <=100 || _ == "NA"', reason: not a number
How can I correct the match expression?

Here you go. The reason your expression fails is that the '#number' part of '##number? ...' is a type check that runs before the self-validation expression, so 'NA' is rejected as "not a number" before your '_ == "NA"' clause is ever evaluated. Move the whole check into a JavaScript function instead. I also recommend having someone review your code, reading the docs carefully, and simplifying your question like this next time:
* def response =
  """
  {
    "AllScore": 61,
    "MaxScore": 59,
    "ScoreA": 75,
    "ScoreB": "NA",
    "ScoreC": 49,
    "ScoreD": "NA"
  }
  """
# 'NA' passes, any number passes, everything else fails
* def isNum = function(x){ if (x === 'NA') return true; return typeof x === 'number' }
* def schema =
  """
  {
    "AllScore": '#? isNum(_)',
    "MaxScore": '#? isNum(_)',
    "ScoreA": '#? isNum(_)',
    "ScoreB": '#? isNum(_)',
    "ScoreC": '#? isNum(_)',
    "ScoreD": '#? isNum(_)'
  }
  """
* match response == schema
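If you also want to keep the 0-100 range check from your original expression, the function approach extends naturally. A minimal sketch (the bounds are taken from your original schema; isValidScore is just an illustrative name):
* def isValidScore =
  """
  function(x) {
    // 'NA' is allowed as-is; anything else must be a number between 0 and 100
    if (x === 'NA') return true;
    return typeof x === 'number' && x >= 0 && x <= 100;
  }
  """
* match response.ScoreB == '#? isValidScore(_)'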
Also I suggest you look at this date validation example for more ideas: https://stackoverflow.com/a/55938480/143475

Related

Constructing a pandas DataFrame with columns and sub-columns from dict

I have a dict of the following form
dict = {
    "Lightweight_model_20221103_downscale_1536px_RecOut": {
        "CRR": "75.379",
        "Sum Time": 33132,
        "Sum Detection Time": 18406,
        "images": {
            "uk_UA_02 (1).jpg": {
                "Time": "877",
                "Time_detection": "469"
            },
            "uk_UA_02 (10).jpg": {
                "Time": "914",
                "Time_detection": "323"
            },
            "uk_UA_02 (11).jpg": {
                "Time": "1169",
                "Time_detection": "428"
            },
            "uk_UA_02 (12).jpg": {
                "Time": "881",
                "Time_detection": "371"
            },
            "uk_UA_02 (13).jpg": {
                "Time": "892",
                "Time_detection": "335"
            }
        }
    },
    "Lightweight_model_20221208_RecOut": {
        "CRR": "71.628",
        "Sum Time": 41209,
        "Sum Detection Time": 25301,
        "images": {
            "uk_UA_02 (1).jpg": {
                "Time": "916",
                "Time_detection": "573"
            },
            "uk_UA_02 (10).jpg": {
                "Time": "927",
                "Time_detection": "442"
            },
            "uk_UA_02 (11).jpg": {
                "Time": "1150",
                "Time_detection": "513"
            },
            "uk_UA_02 (12).jpg": {
                "Time": "1126",
                "Time_detection": "531"
            },
            "uk_UA_02 (13).jpg": {
                "Time": "921",
                "Time_detection": "462"
            }
        }
    }
}
and I want to make a DataFrame with sub-columns in the output, like in this image:
[![enter image description here][1]][1]
but I don't understand how to open the sub-dicts in ['images'].
When I use this code:
df = pd.DataFrame.from_dict(dict, orient='index')
df_full = pd.concat([df.drop(['images'], axis=1), df['images'].apply(pd.Series)], axis=1)
I receive dictionaries in the columns with the filenames:
[![result][2]][2]
How can I open the nested dicts and convert them to sub-columns?
[1]: https://i.stack.imgur.com/hGrKo.png
[2]: https://i.stack.imgur.com/8LlUW.png
Here is one way to do it with the help of the Pandas json_normalize, MultiIndex.from_product, and concat methods:
import pandas as pd
# 'dict' is the variable from the question (note: it shadows the builtin)
df = pd.DataFrame.from_dict(dict, orient='index')
# Save the first columns and add an empty second-level header
tmp = df[["CRR", "Sum Time", "Sum Detection Time"]]
tmp.columns = [tmp.columns, ["", "", ""]]
dfs = [tmp]
# Process the "images" column
df = pd.DataFrame.from_dict(df["images"].to_dict(), orient='index')
# Create a new second-level column header for each column in df
for col in df.columns:
    tmp = pd.json_normalize(df[col])
    tmp.index = df.index
    tmp.columns = pd.MultiIndex.from_product([[col], tmp.columns])
    dfs.append(tmp)
# Concat everything in a new dataframe
new_df = pd.concat(dfs, axis=1)
Then:
print(new_df)
outputs a DataFrame with CRR, Sum Time and Sum Detection Time as top-level columns and, for each image filename, a column group with Time and Time_detection sub-columns.
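For reference, here is an alternative sketch that normalizes everything in one pass and then rebuilds the MultiIndex from the flattened column names. It assumes the input dict is bound to a name such as data rather than shadowing the builtin dict; the "|" separator is chosen to avoid clashing with the dots in the ".jpg" filenames:
import pandas as pd

# Flatten all nesting at once; columns become e.g. "images|uk_UA_02 (1).jpg|Time"
df = pd.json_normalize(list(data.values()), sep="|")
df.index = list(data.keys())

def to_tuple(col):
    parts = col.split("|")
    if len(parts) == 3:            # "images|<filename>|<metric>"
        return (parts[1], parts[2])
    return (col, "")               # top-level scalar columns keep an empty sub-header

df.columns = pd.MultiIndex.from_tuples([to_tuple(c) for c in df.columns])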

How to filter map markers by category with different colors in Leaflet and show popups from an API using Vue.js?

I have a case where I want to color each marker according to its cluster category, using data from an API, with the Leaflet map in the Vue.js framework. How can I do this?
First you need to install the dependencies:
leaflet
leaflet.markercluster
leaflet-panel-layers
Then import them like this:
import L from 'leaflet';
import axios from 'axios';
import 'leaflet.markercluster/dist/MarkerCluster.css';
import 'leaflet.markercluster/dist/MarkerCluster.Default.css';
import 'leaflet.markercluster/dist/leaflet.markercluster';
import 'leaflet/dist/leaflet.css';
import 'leaflet-search/dist/leaflet-search.min.css';
import LControlSearch from 'leaflet-search';
Initialize the marker cluster group in mounted():
this.markerClusterGroup = L.markerClusterGroup()
Then put this code in mounted() as well:
axios.get('https://localhost:3030/api/company/locations')
  .then(response => {
    // Add the markers to the map
    let locations = response.data.data.content;
    locations.forEach(marker => {
      let m = L.marker([marker.latitude, marker.longitude])
      if (marker.type_cluster === "Military") {
        m.setIcon(L.icon({
          iconUrl: 'https://cdn.rawgit.com/pointhi/leaflet-color-markers/master/img/marker-icon-2x-red.png',
          shadowUrl: 'https://cdnjs.cloudflare.com/ajax/libs/leaflet/0.7.7/images/marker-shadow.png',
          iconSize: [25, 41],
          iconAnchor: [12, 41],
          popupAnchor: [1, -34],
          shadowSize: [41, 41]
        }))
        m.bindPopup("Company Name : " + marker.name_company + "<br>Organization Type : " + marker.type_cluster + "<br>Phone Number : " + marker.contact_number)
        this.markerClusterGroup.addLayer(m)
      } else if (marker.type_cluster === "Campus") {
        m.setIcon(L.icon({
          iconUrl: 'https://cdn.rawgit.com/pointhi/leaflet-color-markers/master/img/marker-icon-2x-green.png',
          shadowUrl: 'https://cdnjs.cloudflare.com/ajax/libs/leaflet/0.7.7/images/marker-shadow.png',
          iconSize: [25, 41],
          iconAnchor: [12, 41],
          popupAnchor: [1, -34],
          shadowSize: [41, 41]
        }))
        m.bindPopup("Company Name : " + marker.name_company + "<br>Organization Type : " + marker.type_cluster + "<br>Phone Number : " + marker.contact_number)
        this.markerClusterGroup.addLayer(m)
      } else if (marker.type_cluster === "Others") {
        m.setIcon(L.icon({
          iconUrl: 'https://cdn.rawgit.com/pointhi/leaflet-color-markers/master/img/marker-icon-2x-blue.png',
          shadowUrl: 'https://cdnjs.cloudflare.com/ajax/libs/leaflet/0.7.7/images/marker-shadow.png',
          iconSize: [25, 41],
          iconAnchor: [12, 41],
          popupAnchor: [1, -34],
          shadowSize: [41, 41]
        }))
        m.bindPopup("Company Name : " + marker.name_company + "<br>Organization Type : " + marker.type_cluster + "<br>Phone Number : " + marker.contact_number)
        this.markerClusterGroup.addLayer(m)
      }
    });
  })
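Since the three branches differ only in the icon URL, you could also collapse them into a lookup table. A minimal sketch (it assumes the Leaflet map instance is stored on this.map, so the cluster group can be attached to it at the end):
// Map each category to its marker icon URL
const ICON_URLS = {
  Military: 'https://cdn.rawgit.com/pointhi/leaflet-color-markers/master/img/marker-icon-2x-red.png',
  Campus: 'https://cdn.rawgit.com/pointhi/leaflet-color-markers/master/img/marker-icon-2x-green.png',
  Others: 'https://cdn.rawgit.com/pointhi/leaflet-color-markers/master/img/marker-icon-2x-blue.png'
};

locations.forEach(marker => {
  const iconUrl = ICON_URLS[marker.type_cluster];
  if (!iconUrl) return; // skip unknown categories
  const m = L.marker([marker.latitude, marker.longitude], {
    icon: L.icon({
      iconUrl,
      shadowUrl: 'https://cdnjs.cloudflare.com/ajax/libs/leaflet/0.7.7/images/marker-shadow.png',
      iconSize: [25, 41],
      iconAnchor: [12, 41],
      popupAnchor: [1, -34],
      shadowSize: [41, 41]
    })
  });
  m.bindPopup("Company Name : " + marker.name_company +
              "<br>Organization Type : " + marker.type_cluster +
              "<br>Phone Number : " + marker.contact_number);
  this.markerClusterGroup.addLayer(m);
});
// Don't forget to add the cluster group to the map (assuming `this.map` holds it)
this.map.addLayer(this.markerClusterGroup);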
I hope this helps you.

Convert PyTorch AutoTokenizer to TensorFlow TextVectorization

I have a PyTorch encoder loaded on my PC with transformers.
I saved it in JSON with tokenizer.save_pretrained(...) and now I need to load it on another PC with TensorFlow TextVectorization as I don't have access to the transformers library.
How can I convert it? I read about tf.keras.preprocessing.text.tokenizer_from_json but it does not work.
In PyTorch JSON I have :
{
  "version": "1.0",
  "truncation": null,
  "padding": null,
  "added_tokens": [...],
  "normalizer": {...},
  "pre_tokenizer": {...},
  "post_processor": {...},
  "decoder": {...},
  "model": {...}
}
and TensorFlow is expecting, with TextVectorizer :
def __init__(
    self,
    max_tokens=None,
    standardize="lower_and_strip_punctuation",
    split="whitespace",
    ngrams=None,
    output_mode="int",
    output_sequence_length=None,
    pad_to_max_tokens=False,
    vocabulary=None,
    idf_weights=None,
    sparse=False,
    ragged=False,
    **kwargs,
):
or, with tokenizer_from_json, these kinds of fields:
config = tokenizer_config.get("config")
word_counts = json.loads(config.pop("word_counts"))
word_docs = json.loads(config.pop("word_docs"))
index_docs = json.loads(config.pop("index_docs"))
# Integer indexing gets converted to strings with json.dumps()
index_docs = {int(k): v for k, v in index_docs.items()}
index_word = json.loads(config.pop("index_word"))
index_word = {int(k): v for k, v in index_word.items()}
word_index = json.loads(config.pop("word_index"))
tokenizer = Tokenizer(**config)
Simply "tf.keras.preprocessing.text.tokenizer_from_json.()" but you may need to correct format in JSON.
Sample: The sample they using " I love cats " -> " Sticky "
import tensorflow as tf

text = "I love cats"
tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=10000, oov_token='<oov>')
tokenizer.fit_on_texts([text])

# input: a character-level vocabulary and the sentence spelled out character by character
vocab = [ "a", "b", "c", "d", "e", "f", "g", "h", "I", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", "_" ]
data = tf.constant([["_", "_", "_", "I"], ["l", "o", "v", "e"], ["c", "a", "t", "s"]])
layer = tf.keras.layers.StringLookup(vocabulary=vocab)
sequences_mapping_string = layer(data)
sequences_mapping_string = tf.reshape(sequences_mapping_string, (1, 12))
print( 'result: ' + str( sequences_mapping_string ) )

# round-trip the word-level tokenizer through JSON
print( 'tokenizer.to_json(): ' + str( tokenizer.to_json() ) )
new_tokenizer = tf.keras.preprocessing.text.tokenizer_from_json(tokenizer.to_json())
print( 'new_tokenizer.to_json(): ' + str( new_tokenizer.to_json() ) )
Output:
result: tf.Tensor([[27 27 27 9 12 15 22 5 3 1 20 19]], shape=(1, 12), dtype=int64)
tokenizer.to_json(): {"class_name": "Tokenizer", "config": {"num_words": 10000, "filters": "!\"#$%&()*+,-./:;<=>?#[\\]^_`{|}~\t\n", "lower": true, "split": " ", "char_level": false, "oov_token": "<oov>", "document_count": 1, "word_counts": "{\"i\": 1, \"love\": 1, \"cats\": 1}", "word_docs": "{\"cats\": 1, \"love\": 1, \"i\": 1}", "index_docs": "{\"4\": 1, \"3\": 1, \"2\": 1}", "index_word": "{\"1\": \"<oov>\", \"2\": \"i\", \"3\": \"love\", \"4\": \"cats\"}", "word_index": "{\"<oov>\": 1, \"i\": 2, \"love\": 3, \"cats\": 4}"}}
new_tokenizer.to_json(): {"class_name": "Tokenizer", "config": {"num_words": 10000, "filters": "!\"#$%&()*+,-./:;<=>?#[\\]^_`{|}~\t\n", "lower": true, "split": " ", "char_level": false, "oov_token": "<oov>", "document_count": 1, "word_counts": "{\"i\": 1, \"love\": 1, \"cats\": 1}", "word_docs": "{\"cats\": 1, \"love\": 1, \"i\": 1}", "index_docs": "{\"4\": 1, \"3\": 1, \"2\": 1}", "index_word": "{\"1\": \"<oov>\", \"2\": \"i\", \"3\": \"love\", \"4\": \"cats\"}", "word_index": "{\"<oov>\": 1, \"i\": 2, \"love\": 3, \"cats\": 4}"}}
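That said, tokenizer_from_json only understands Keras Tokenizer JSON, not the Hugging Face tokenizer.json format. If all you need on the other PC is the vocabulary, a rough sketch like the following can transfer it into a TextVectorization layer. It assumes a WordPiece/BPE-style "model.vocab" token-to-id map in the JSON, and note that TextVectorization splits on whitespace by default, so the resulting ids will not reproduce the original subword tokenization exactly:
import json
import tensorflow as tf

# Load the Hugging Face tokenizer.json saved by save_pretrained(...)
with open("tokenizer.json", encoding="utf-8") as f:
    tok = json.load(f)

vocab_map = tok["model"]["vocab"]  # token -> id
# Order tokens by their original ids; drop specials, since TextVectorization
# reserves its own slots for padding ('') and OOV ('[UNK]')
vocab = [t for t, _ in sorted(vocab_map.items(), key=lambda kv: kv[1])
         if t not in ("", "[PAD]", "[UNK]")]

layer = tf.keras.layers.TextVectorization(vocabulary=vocab)
print(layer(["I love cats"]))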

Showing only None in output instead of changing numeric values to capitalized alphabets

def convert_digits(input_string, start_position, end_position):
    # The ending index was required as it was not returning the whole sentence
    new_string = input_string[:end_position]
    newstring = " "
    # return new_string
    digit_mapping = {
        '0': 'ZERO',
        '1': 'ONE',
        '2': 'TWO',
        '3': 'THREE',
        '4': 'FOUR',
        '5': 'FIVE',
        '6': 'SIX',
        '7': 'SEVEN',
        '8': 'EIGHT',
        '9': 'NINE'
    }
    if start_position >= 1:
        if end_position <= len(new_string):
            if start_position < end_position:
                for index in range(start_position - 1, end_position):
                    if input_string[index].isdigit():
                        mapped = digit_mapping[input_string[index]]
                        newstring += " " + mapped + " "
                    else:
                        newstring += input_string[index]
            else:
                return "INVALID"
        else:
            return "INVALID"
    else:
        return "INVALID"
        return newstring

if __name__ == '__main__':
    print(convert_digits("you are a 4king 5shole", 1, 21))
Use this code.
Your problem was on line 39 of your file: you indented the final return newstring with 2 tabs in place of 1, which placed it inside the else block where it is never reached, so the function returned None.
def convert_digits(input_string, start_position, end_position):
    # The ending index was required as it was not returning the whole sentence
    new_string = input_string[:end_position]
    newstring = " "
    # return new_string
    digit_mapping = {
        '0': 'ZERO',
        '1': 'ONE',
        '2': 'TWO',
        '3': 'THREE',
        '4': 'FOUR',
        '5': 'FIVE',
        '6': 'SIX',
        '7': 'SEVEN',
        '8': 'EIGHT',
        '9': 'NINE'
    }
    if start_position >= 1:
        if end_position <= len(new_string):
            if start_position < end_position:
                for index in range(start_position - 1, end_position):
                    if input_string[index].isdigit():
                        mapped = digit_mapping[input_string[index]]
                        newstring += " " + mapped + " "
                    else:
                        newstring += input_string[index]
            else:
                return "INVALID"
        else:
            return "INVALID"
    else:
        return "INVALID"
    return newstring

if __name__ == '__main__':
    print(convert_digits("you are a 4king 5shole", 1, 21))
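For reference, here is a more compact sketch of the same function, with the three validity checks folded into a single condition (same 1-based positions as the original):
def convert_digits(input_string, start_position, end_position):
    # Reject out-of-range or reversed positions up front
    if not (1 <= start_position < end_position <= len(input_string)):
        return "INVALID"
    digit_names = ['ZERO', 'ONE', 'TWO', 'THREE', 'FOUR',
                   'FIVE', 'SIX', 'SEVEN', 'EIGHT', 'NINE']
    # Spell out digits, keep every other character unchanged
    return " " + "".join(
        " " + digit_names[int(ch)] + " " if ch.isdigit() else ch
        for ch in input_string[start_position - 1:end_position]
    )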

Filtering down a Karate test response object to get a sub-list?

Given this feature file:
Feature: test

  Scenario: filter response
    * def response =
      """
      [
        {
          "a": "a",
          "b": "a",
          "c": "a"
        },
        {
          "d": "ab",
          "e": "ab",
          "f": "ab"
        },
        {
          "g": "ac",
          "h": "ac",
          "i": "ac"
        }
      ]
      """
    * match response[1] contains { e: 'ab' }
How can I filter the response down so that it is equal to:
{
  "d": "ab",
  "e": "ab",
  "f": "ab"
}
Is there a built-in way to do this, in the same way you can filter a List using a Java Stream?
Sample code:
Feature: test

  Scenario: filter response
    * def response =
      """
      [
        {
          "a": "a",
          "b": "a",
          "c": "a"
        },
        {
          "d": "ab",
          "e": "ab",
          "f": "ab"
        },
        {
          "g": "ac",
          "h": "ac",
          "i": "ac"
        }
      ]
      """
    # predicate: keep only items where 'e' equals 'ab'
    * def filt = function(x){ return x.e == 'ab' }
    * def items = get response[*]
    * def res = karate.filter(items, filt)
    * print res
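Note that karate.filter() returns a list, so the object you want is res[0]. The same result can also be had with a JsonPath filter expression; a small sketch:
    * def res = karate.jsonPath(response, "$[?(@.e=='ab')]")
    * match res[0] == { d: 'ab', e: 'ab', f: 'ab' }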