COCOeval summarize() returns -1 for all mAP values - tensorflow2.0

I created a JSON file with annotations and ran the following code:
import matplotlib.pyplot as plt
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
import numpy as np
import skimage.io as io
import pylab
import json
pylab.rcParams['figure.figsize'] = (10.0, 8.0)
annType = ['segm','bbox','keypoints']
annType = annType[1] #specify type here
prefix = 'person_keypoints' if annType=='keypoints' else 'instances'
print('Running demo for *%s* results.' % annType)
# use the validation labelme file as the ground truth
annFile = '/content/json_moded.json'
cocoGt = COCO(annFile)
# initialize the COCO detections api
# use the generated results
resFile = '/content/test_data_normal.json'
cocoDt=cocoGt.loadRes(resFile)
'''
dts = json.load(open(resFile,'r'))
imgIds = [imid['image_id'] for imid in dts]
imgIds = sorted(list(set(imgIds)))
'''
imgIds=sorted(cocoGt.getImgIds())
'''
imgIds=sorted(cocoGt.getImgIds())
imgIds=imgIds[0:24]
imgId = imgIds[np.random.randint(24)]
'''
# running box evaluation
cocoEval = COCOeval(cocoGt,cocoDt,annType)
cocoEval.params.imgIds = imgIds
'''
cocoEval.params.catIds = [3] # restrict evaluation to specific category ids if needed
'''
cocoEval.evaluate()
cocoEval.accumulate()
cocoEval.summarize()
but the results are the following:
Running demo for *bbox* results.
loading annotations into memory...
Done (t=0.01s)
creating index...
index created!
Loading and preparing results...
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=0.11s).
Accumulating evaluation results...
DONE (t=0.04s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Why does every mAP return -1?
I don't know why it returns these results; I think both JSON files are correct.

You have to pay attention to a couple of things:
the mapping between the image ids in the ground-truth JSON and the image_id field in prediction.json
the mapping between the category ids in both files
A quick consistency check is sketched after the two examples below.
Ground-truth JSON example
{
  "info": {
    "year": "2021",
    "version": "2",
    "description": "",
    "contributor": "",
    "url": "",
    "date_created": ""
  },
  "licenses": [
    {
      "id": 1,
      "url": "https://creativecommons.org/licenses/by/4.0/",
      "name": "CC BY 4.0"
    }
  ],
  "categories": [
    {
      "id": 1,
      "name": "cat",
      "supercategory": "none"
    }
  ],
  "images": [
    {
      "id": 0,
      "license": 1,
      "file_name": "1.jpg",
      "height": 512,
      "width": 512,
      "date_captured": ""
    },
    {
      "id": 1,
      "license": 1,
      "file_name": "e.jpg",
      "height": 512,
      "width": 512,
      "date_captured": ""
    }
  ],
  "annotations": [
    {
      "id": 0,
      "image_id": 0,
      "category_id": 1,
      "bbox": [261, 358, 16, 26],
      "area": 416,
      "segmentation": [],
      "iscrowd": 0
    }
  ]
}
prediction.json
[{"image_id": 0, "category_id": 1, "bbox": [84.116, 442.13, 43.356, 25.849], "score": 0.87929}]

Related

MariaDB extract json nested data

I have the following SQL query and am trying to extract a nested JSON data field.
*************************** 2. row ***************************
created_at: 2023-01-05 14:25:52
updated_at: 2023-01-05 14:26:02
deleted_at: NULL
deleted: 0
id: 2
instance_uuid: ef6380b4-5455-48f8-9e4b-3d04199be3f5
numa_topology: NULL
pci_requests: []
flavor: {"cur": {"nova_object.name": "Flavor", "nova_object.namespace": "nova", "nova_object.version": "1.2", "nova_object.data": {"id": 2, "name": "tempest2", "memory_mb": 512, "vcpus": 1, "root_gb": 1, "ephemeral_gb": 0, "flavorid": "202", "swap": 0, "rxtx_factor": 1.0, "vcpu_weight": 0, "disabled": false, "is_public": true, "extra_specs": {}, "description": null, "created_at": "2023-01-05T05:30:36Z", "updated_at": null, "deleted_at": null, "deleted": false}}, "old": null, "new": null}
vcpu_model: {"nova_object.name": "VirtCPUModel", "nova_object.namespace": "nova", "nova_object.version": "1.0", "nova_object.data": {"arch": null, "vendor": null, "topology": {"nova_object.name": "VirtCPUTopology", "nova_object.namespace": "nova", "nova_object.version": "1.0", "nova_object.data": {"sockets": 1, "cores": 1, "threads": 1}, "nova_object.changes": ["cores", "threads", "sockets"]}, "features": [], "mode": "host-model", "model": null, "match": "exact"}, "nova_object.changes": ["mode", "model", "vendor", "features", "topology", "arch", "match"]}
migration_context: NULL
keypairs: {"nova_object.name": "KeyPairList", "nova_object.namespace": "nova", "nova_object.version": "1.3", "nova_object.data": {"objects": []}}
device_metadata: NULL
trusted_certs: NULL
vpmems: NULL
resources: NULL
In the flavor: section I have some JSON data and I am trying to extract the "name": "tempest2" value, but it's nested, so I haven't found a way to extract it.
My query is below, but how do I remove the [] square brackets from the value?
MariaDB [nova]> select uuid, instances.created_at, instances.deleted_at, json_extract(flavor, '$.cur.*.name') AS FLAVOR from instances join instance_extra on instances.uuid = instance_extra.instance_uuid;
+--------------------------------------+---------------------+---------------------+--------------+
| uuid | created_at | deleted_at | FLAVOR |
+--------------------------------------+---------------------+---------------------+--------------+
| edb0facb-3353-4848-82e2-f12701a0a3aa | 2023-01-05 05:37:13 | 2023-01-05 05:37:49 | ["tempest1"] |
| ef6380b4-5455-48f8-9e4b-3d04199be3f5 | 2023-01-05 14:25:51 | NULL | ["tempest2"] |
+--------------------------------------+---------------------+---------------------+--------------+
Update:
This is the MariaDB version I have:
MariaDB [nova]> SELECT VERSION();
+-------------------------------------------+
| VERSION() |
+-------------------------------------------+
| 10.5.12-MariaDB-1:10.5.12+maria~focal-log |
+-------------------------------------------+
1 row in set (0.000 sec)

Create tabular View by Spreading Data from JSON in Snowflake

I'm very new to Snowflake and I am working on creating a view from a table that holds JSON data as follows:
"data": {
"baseData": {
"dom_url": "https://www.soccertables.com/european_tables",
"event_id": "01b2722a-d8e6-4f67-95d0-8dd7ba088a4a",
"event_utc_time": "2020-05-11 09:01:14.821",
"ip_address": "125.238.134.96",
"table_1": [
{
"position": "1",
"team_name": "Liverpool",
"games_played": "29",
"games_won": "26",
"games_drawn": "2",
"games_lost": "1",
"goals_for": "75",
"goals_against": "35"
"points": "80"
},
{
"position": "2",
"team_name": "Man. City",
"games_played": "29",
"games_won": "20",
"games_drawn": "5",
"games_lost": "4",
"goals_for": "60",
"goals_against": "45"
"points": "65"
},
{
"position": "...",
"team_name": "...",
"games_played": "...",
"games_won": "...",
"games_drawn": "...",
"games_lost": "...",
"goals_for": "...",
"goals_against": "..."
"points": "..."
}
],
"unitID": "CN 8000",
"ver": "1.0.0"
},
"baseType": "MatchData"
},
"dataName": "CN8000.Prod.MatchData",
"id": "18a89f9e-9620-4453-a546-23412025e7c0",
"tags": {
"itrain.access.level1": "Private",
"itrain.access.level2": "Kumar",
"itrain.internal.deviceID": "",
"itrain.internal.deviceName": "",
"itrain.internal.encodeTime": "2022-03-23T07:41:19.000Z",
"itrain.internal.sender": "Harish",
"itrain.software.name": "",
"itrain.software.partNumber": 0,
"itrain.software.version": ""
},
"timestamp": "2021-02-25T07:32:31.000Z"
}
I want to extract the common values like dom_url, event_id, event_utc_time, and ip_address, along with each team_name in a separate column and the associated team details like position, games_played, etc., possibly as rows for each team name.
I've been trying the LATERAL FLATTEN function but couldn't succeed so far:
create or replace view AWSS3_PM.PUBLIC.PM_POWER_CN8000_V1(
DOM_URL,
EVENT_ID,
EVENT_UTC_TIME,
IP_ADDRESS,
TIMESTAMP,
POSITION,
GAMES_PLAYED,
GAMES_WON,
GAMES_LOST,
GAMES_DRAWN
) as
select c1:data:baseData:dom_url dom_url,
c1:data:baseData:event_id event_id,
c1:data:baseData:event_utc_time event_utc_time,
c1:data:baseData:ip_address ip_address,
c1:timestamp timestamp,
value:position TeamPosition,
value:games_played gamesPlayed,
value:games_won wins ,
value:games_lost defeats,
value:games_drawn draws
from pm_power, lateral flatten(input => c1:data:baseData:table_1);
Any help would be greatly appreciated.
Thanks,
Harish
For the table portion of the JSON you would need flattening and a transpose; example below.
Sample table -
select * from test_json;
+--------------------------------+
| TAB_VAL |
|--------------------------------|
| { |
| "table_1": [ |
| { |
| "games_drawn": "2", |
| "games_lost": "1", |
| "games_played": "29", |
| "games_won": "26", |
| "goals_against": "35", |
| "goals_for": "75", |
| "points": "80", |
| "position": "1", |
| "team_name": "Liverpool" |
| }, |
| { |
| "games_drawn": "5", |
| "games_lost": "4", |
| "games_played": "29", |
| "games_won": "20", |
| "goals_against": "45", |
| "goals_for": "60", |
| "points": "65", |
| "position": "2", |
| "team_name": "Man. City" |
| } |
| ] |
| } |
+--------------------------------+
1 Row(s) produced. Time Elapsed: 0.285s
Perform transpose after flattening JSON
select * from (
select figures,stats,team_name
from (
select
f.value:"games_drawn"::number as games_drawn,
f.value:"games_lost"::number as games_lost,
f.value:"games_played"::number as games_played,
f.value:"games_won"::number as games_won,
f.value:"goals_against"::number as goals_against,
f.value:"goals_for"::number as goals_for,
f.value:"points"::number as points,
f.value:"position"::number as position,
f.value:"team_name"::String as team_name
from
TEST_JSON, table(flatten(input=>tab_val:table_1, mode=>'ARRAY')) as f
) flt
unpivot (figures for stats in(games_drawn, games_lost, games_played, games_won, goals_against, goals_for, points,position))
) up
pivot (min(up.figures) for up.team_name in ('Liverpool','Man. City'));
+---------------+-------------+-------------+
| STATS | 'Liverpool' | 'Man. City' |
|---------------+-------------+-------------|
| GAMES_DRAWN | 2 | 5 |
| GAMES_LOST | 1 | 4 |
| GAMES_PLAYED | 29 | 29 |
| GAMES_WON | 26 | 20 |
| GOALS_AGAINST | 35 | 45 |
| GOALS_FOR | 75 | 60 |
| POINTS | 80 | 65 |
| POSITION | 1 | 2 |
+---------------+-------------+-------------+
8 Row(s) produced. Time Elapsed: 0.293s

Add Map within a Map in a column

In the Metadata column I have a Map-type value:
+-----------+--------+-----------+--------------------------------+
| Noun| Pronoun| Adjective|Metadata |
+-----------+--------+-----------+--------------------------------+
| Homer| Simpson|Engineer |["Age": "50", "Country": "USA"] |
| Elon | Musk |King |["Age": "45", "Country": "RSA"] |
| Bart | Lee |Cricketer |["Age": "35", "Country": "AUS"] |
| Lisa | Jobs |Daughter |["Age": "35", "Country": "IND"] |
| Joe | Root |Player |["Age": "31", "Country": "ENG"] |
+-----------+--------+-----------+--------------------------------+
I want to append another Map-type value to Metadata under a key called tags.
+-----------+--------+-----------+--------------------------------------------------------------------+
| Noun| Pronoun| Adjective|Metadata |
+-----------+--------+-----------+--------------------------------------------------------------------+
| Homer| Simpson|Engineer |["Age": "50", "Country": "USA", "tags": ["Gen": "M", "Fit": "Yes"]] |
| Elon | Musk |King |["Age": "45", "Country": "RSA", "tags": ["Gen": "M", "Fit": "Yes"]] |
| Bart | Lee |Cricketer |["Age": "35", "Country": "AUS", "tags": ["Gen": "M", "Fit": "No"]] |
| Lisa | Jobs |Daughter |["Age": "35", "Country": "IND", "tags": ["Gen": "F", "Fit": "Yes"]] |
| Joe | Root |Player |["Age": "31", "Country": "ENG", "tags": ["Gen": "M", "Fit": "Yes"]] |
+-----------+--------+-----------+--------------------------------------------------------------------+
In the Metadata column the outer Map is already a typedLit, and adding another Map inside it is not allowed.
I implemented it using a struct instead. This is how it looks:
df.withColumn("Metadata", struct(lit("Age").alias("Age"), lit("Country").alias("Country"), typedLit(tags).alias("tags")))
It won't be exactly a key-value pair, but it is still queryable through the alias.
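A minimal PySpark sketch of the same struct-based idea (the column and key names come from the example above, the tags values are only illustrative, and the original Age/Country values are carried over from the existing map):

from pyspark.sql import functions as F

tags = {"Gen": "M", "Fit": "Yes"}  # example inner map

df2 = df.withColumn(
    "Metadata",
    F.struct(
        F.col("Metadata")["Age"].alias("Age"),          # keep the original map values
        F.col("Metadata")["Country"].alias("Country"),
        F.create_map(*[F.lit(x) for kv in tags.items() for x in kv]).alias("tags"),
    ),
)
# df2.select("Noun", "Metadata.tags").show(truncate=False)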

Nested JSON aggregation in Postgres

I need to run a query over a Postgres database, aggregate the results, and export them as a JSON object using native Postgres tooling.
I can't quite get the aggregation working correctly and I'm a bit stumped.
Below is an example of some of the data
| msgserial | object_type | payload_key | payload | user_id |
+-----------+---------------+-------------+-----------------------------------------------------------+---------+
| 1696962 | CampaignEmail | a8901b2c | {"id": "ff7221da", "brand": "MAGIC", "eventType": "SENT"} | 001 |
| 1696963 | OtherType | b8901b2c | {"id": "ff7221db", "brand": "MAGIC", "eventType": "SENT"} | 001 |
| 1696964 | OtherType | c8901b2c | {"id": "ff7221dc", "brand": "MAGIC", "eventType": "SENT"} | 002 |
| 1696965 | OtherType | d8901b2c | {"id": "ff7221dd", "brand": "MAGIC", "eventType": "SENT"} | 001 |
| 1696966 | CampaignEmail | e8901b2c | {"id": "ff7221de", "brand": "MAGIC", "eventType": "SENT"} | 001 |
| 1696967 | CampaignEmail | f8901b2c | {"id": "ff7221df", "brand": "MAGIC", "eventType": "SENT"} | 002 |
| 1696968 | SomethingElse | g8901b2c | {"id": "ff7221dg", "brand": "MAGIC", "eventType": "SENT"} | 001 |
+-----------+---------------+-------------+-----------------------------------------------------------+---------+
I need to output a JSON object like this grouped by user_id
{
"user_id": 001,
"brand": "MAGIC",
"campaignEmails": [
{"id": "ff7221da", "brand": "MAGIC", "eventType": "SENT"},
{"id": "ff7221de", "brand": "MAGIC", "eventType": "SENT"},
{"id": "ff7221de", "brand": "MAGIC", "eventType": "SENT"}
],
"OtherTypes": [
{"id": "ff7221db", "brand": "MAGIC", "eventType": "SENT"},
{"id": "ff7221dd", "brand": "MAGIC", "eventType": "SENT"}
],
"Somethingelses": [
{"id": "ff7221dg", "brand": "MAGIC", "eventType": "SENT"}
]
},
{
"user_id": 002,
"campaignEmails": [
],
"OtherTypes": [
],
"Somethingelses": [
]
}
Essentially I need to group all the payloads into arrays by their type, grouped by user_id.
I started with JSONB_BUILD_OBJECT, getting one of the object_types grouped into an array, but then got stumped.
Am I trying to achieve the impossible in raw Postgres SQL? I'm really stumped and I keep hitting errors like "X needs to be included in the GROUP BY clause", etc.
I can group one of the object_types into an array grouped by user_id, but can't seem to do all three.
My other thought was to use three subqueries, but I'm not sure how to do that either.
You need two aggregations: the first groups by user_id and object_type, and the second by user_id only:
select
jsonb_build_object('user_id', user_id)
|| jsonb_object_agg(object_type, payload) as result
from (
select user_id, object_type, jsonb_agg(payload) as payload
from my_table
group by user_id, object_type
) s
group by user_id
Db<>Fiddle.

Why is the result very poor when using ssd_mobilenet_v1_ppn with my own dataset?

Tensorflow 1.12.0
I am currently trying to train the ssd_mobilenet_v1_ppn model (pre-trained on COCO) on my own dataset. My dataset has 490 images for training and 210 for evaluation, with 23 classes.
label_map.pbtxt:
item {
id: 1
name: 'a'
}
item {
id: 2
name: 'b'
}
...
pipeline.config:
model {
ssd {
num_classes: 24
image_resizer {
fixed_shape_resizer {
height: 300
width: 300
}
}
feature_extractor {
type: "ssd_mobilenet_v1_ppn"
conv_hyperparams {
regularizer {
l2_regularizer {
weight: 3.99999989895e-05
}
}
initializer {
random_normal_initializer {
mean: 0.0
stddev: 0.00999999977648
}
}
activation: RELU_6
batch_norm {
decay: 0.97000002861
center: true
scale: true
epsilon: 0.0010000000475
}
}
override_base_feature_extractor_hyperparams: true
}
box_coder {
faster_rcnn_box_coder {
y_scale: 10.0
x_scale: 10.0
height_scale: 5.0
width_scale: 5.0
}
}
matcher {
argmax_matcher {
matched_threshold: 0.5
unmatched_threshold: 0.5
ignore_thresholds: false
negatives_lower_than_unmatched: true
force_match_for_each_row: true
use_matmul_gather: true
}
}
similarity_calculator {
iou_similarity {
}
}
box_predictor {
weight_shared_convolutional_box_predictor {
conv_hyperparams {
regularizer {
l2_regularizer {
weight: 3.99999989895e-05
}
}
initializer {
random_normal_initializer {
mean: 0.0
stddev: 0.00999999977648
}
}
activation: RELU_6
batch_norm {
decay: 0.97000002861
center: true
scale: true
epsilon: 0.0010000000475
train: true
}
}
depth: 512
num_layers_before_predictor: 1
kernel_size: 1
class_prediction_bias_init: -4.59999990463
share_prediction_tower: true
}
}
anchor_generator {
ssd_anchor_generator {
num_layers: 6
min_scale: 0.15000000596
max_scale: 0.949999988079
aspect_ratios: 1.0
aspect_ratios: 2.0
aspect_ratios: 0.5
aspect_ratios: 3.0
aspect_ratios: 0.333299994469
reduce_boxes_in_lowest_layer: false
}
}
post_processing {
batch_non_max_suppression {
score_threshold: 0.300000011921
iou_threshold: 0.600000023842
max_detections_per_class: 100
max_total_detections: 100
}
score_converter: SIGMOID
}
normalize_loss_by_num_matches: true
loss {
localization_loss {
weighted_smooth_l1 {
}
}
classification_loss {
weighted_sigmoid_focal {
gamma: 2.0
alpha: 0.75
}
}
classification_weight: 1.0
localization_weight: 1.5
}
encode_background_as_zeros: true
normalize_loc_loss_by_codesize: true
inplace_batchnorm_update: true
freeze_batchnorm: false
}
}
train_config {
batch_size: 512
data_augmentation_options {
random_horizontal_flip {
}
}
data_augmentation_options {
ssd_random_crop {
}
}
sync_replicas: true
optimizer {
momentum_optimizer {
learning_rate {
cosine_decay_learning_rate {
learning_rate_base: 0.699999988079
total_steps: 50000
warmup_learning_rate: 0.13330000639
warmup_steps: 2000
}
}
momentum_optimizer_value: 0.899999976158
}
use_moving_average: false
}
fine_tune_checkpoint: "model.ckpt"
num_steps: 50000
startup_delay_steps: 0.0
replicas_to_aggregate: 8
max_number_of_boxes: 100
unpad_groundtruth_tensors: false
from_detection_checkpoint: true
}
train_input_reader {
label_map_path: "annotations\label_map.pbtxt"
tf_record_input_reader {
input_path: "train.record"
}
}
eval_config {
num_examples: 210
max_evals: 10
metrics_set: "coco_detection_metrics"
use_moving_averages: false
}
eval_input_reader {
label_map_path: "annotations\label_map.pbtxt"
shuffle: false
num_epochs: 1
num_readers: 1
tf_record_input_reader {
input_path: "val.record"
}
}
train:
python object_detection/model_main.py --logtostderr --pipeline_config_path=pipeline.config --model_dir=train
log:
Accumulating evaluation results...
DONE (t=0.05s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Is this normal?
How can I solve it?
A few things I notice that may help you out:
num_classes in your config file is set to 24, but you are training 23 classes (a quick way to double-check the label map is sketched after this list).
You are also using a fixed-shape image resizer; depending on the dimensions of your photos this may be an issue, since it does not maintain aspect ratio.
You might see a slight improvement by training for fewer steps (around 20k), since your dataset is fairly small.
If none of this helps, look into adding a hard example miner to the config file to introduce a minimum number of negative examples.
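A minimal sketch for that num_classes check, assuming the label map keeps the standard "id: N" entries (the path comes from the config above, written here with a forward slash):

import re

# count the class entries in the label map and compare against num_classes
with open("annotations/label_map.pbtxt") as f:
    ids = [int(i) for i in re.findall(r"^\s*id:\s*(\d+)", f.read(), flags=re.MULTILINE)]

print("items in label map:", len(ids))  # num_classes in pipeline.config should equal this (23 here)
print("highest class id:", max(ids))    # ids should run from 1 up to num_classes with no gaps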