NER spacy custom trained model not predicting the label properly - spacy

I trained a custom spaCy NER model following https://towardsdatascience.com/train-ner-with-custom-training-data-using-spacy-525ce748fab7 and https://spacy.io/usage/processing-pipelines, using a sample dataset meant to find currency codes in a given text.
Example dataset:
TRAIN_DATA = [('This is AFN currency', {'entities': [(8, 11, 'CUR')]}),
              ('I have EUR european currency', {'entities': [(7, 10, 'CUR')]}),
              ('let as have ALL money', {'entities': [(12, 15, 'CUR')]}),
              ('DZD is a dollar', {'entities': [(0, 3, 'CUR')]}),
              ('money USD united states', {'entities': [(6, 9, 'CUR')]})
              ]
I trained a model successfully, naming it 'currency'. It predicts the proper label well on the trained dataset, but on untrained text it mostly predicts the wrong label.
Input test line: 'I have AZWSQTS lot LOT of Indian MZW currency USD INR'
output:
AZWSQTS - CUR, LOT - CUR, MZW - CUR, USD - CUR, INR - CUR
Here, 'AZWSQTS' and 'LOT' are not currencies, but the model still labels them as CUR; this is the problem I am getting.
Complete code:
from __future__ import unicode_literals, print_function
import random
from pathlib import Path
import spacy
from tqdm import tqdm
from spacy.training import Example

def spacy_train_model():
    '''Sample training dataset format'''
    # list of currency codes
    currency_list = ['AFN', 'EUR', 'EUR', 'ALL', 'DZD', 'USD', 'EUR', 'AOA', 'XCD', 'XCD', 'ARS',
                     'AMD', 'AWG', 'SHP', 'AUD', 'EUR', 'AZN', '', 'BSD', 'BHD', 'BDT', 'BBD', 'BYN', 'EUR', 'BZD',
                     'XOF', 'BMD', 'BTN', 'BOB', 'USD', 'BAM', 'BWP', 'BRL', 'USD', 'USD', 'BND', 'BGN', 'XOF', 'BIF',
                     'CVE', 'KHR', 'XAF', 'CAD', 'USD', 'KYD', 'XAF', 'XAF', 'NZD', 'CLP', 'CNY', 'AUD', 'AUD', 'COP',
                     'KMF', 'CDF', 'XAF', 'none', 'CRC', 'XOF', 'HRK', 'CUP', 'ANG', 'EUR', 'CZK', '', 'DKK', 'DJF',
                     'XCD', 'DOP', '', 'USD', 'EGP', 'USD', 'XAF', 'ERN', 'EUR', 'SZL', 'ETB', '', 'FKP', 'FJD',
                     'EUR', 'EUR', 'EUR', 'XPF', '', 'XAF', 'GMD', 'GEL', 'EUR', 'GHS', 'GIP', 'EUR', 'DKK', 'XCD',
                     'EUR', 'USD', 'GTQ', 'GGP', 'GNF', 'XOF', 'GYD', '', 'HTG', 'HNL', 'HKD', 'HUF', 'ISK', 'INR',
                     'IDR', 'XDR', 'IRR', 'IQD', 'EUR', 'IMP', 'ILS', 'EUR', '', 'JMD', 'JPY', 'JEP', 'JOD',
                     'KZT', 'KES', 'AUD', 'EUR', 'KWD', 'KGS', '', 'LAK', 'EUR', 'LBP', 'LSL', 'LRD', 'LYD', 'CHF',
                     'EUR', 'EUR', '', 'MOP', 'MGA', 'MWK', 'MYR', 'MVR', 'XOF', 'EUR', 'USD', 'EUR', 'MRU', 'MUR',
                     'EUR', 'MXN', 'USD', 'MDL', 'EUR', 'MNT', 'EUR', 'XCD', 'MAD', 'MZN', 'MMK', '', 'NAD', 'AUD',
                     'NPR', 'EUR', 'XPF', 'NZD', 'NIO', 'XOF', 'NGN', 'NZD', 'AUD', 'USD', 'KPW', 'MKD', 'NOK',
                     'OMR', 'PKR', 'USD', 'ILS', 'USD', 'PGK', 'PYG', 'PEN', 'PHP', 'NZD', 'PLN', 'EUR', 'USD', 'QAR',
                     'EUR', 'RON', 'RUB', 'RWF', '', 'USD', 'EUR', 'SHP', 'XCD', 'XCD', 'EUR', 'EUR', 'XCD', 'WST',
                     'EUR', 'STN', 'SAR', 'XOF', 'RSD', 'SCR', 'SLL', 'SGD', 'USD', 'ANG', 'EUR', 'EUR', 'SBD', 'SOS',
                     'ZAR', 'GBP', 'KRW', 'SSP', 'EUR', 'LKR', 'SDG', 'SRD', 'NOK', 'SEK', 'CHF', 'SYP', '', 'TWD',
                     'TJS', 'TZS', 'THB', 'USD', 'XOF', 'NZD', 'TOP', 'TTD', 'GBP', 'TND', 'TRY', 'TMT', 'USD', 'AUD',
                     'UGX', 'UAH', 'AED', 'GBP', 'USD', 'UYU', 'USD', 'UZS', '', 'VUV', 'EUR', 'VES', 'VND', '',
                     'USD', 'XPF', 'YER', 'ZMW', 'USD']
    TRAIN_DATA = [('This is AFN currency', {'entities': [(8, 11, 'CUR')]}),
                  ('I have EUR europen currency', {'entities': [(7, 10, 'CUR')]}),
                  ('let as have ALL money', {'entities': [(12, 15, 'CUR')]}),
                  ('DZD is a dollar', {'entities': [(0, 3, 'CUR')]}),
                  ('money USD united states', {'entities': [(6, 9, 'CUR')]})
                  ]
    # model = "en_core_web_lg"
    model = None
    output_dir = Path(r"D:\currency")  # path to save the trained model - create a new empty directory
    n_iter = 100

    # load an existing model or create a blank one
    if model is not None:
        nlp = spacy.load(model)
        optimise = nlp.create_optimizer()  # reuse this optimizer when resuming training
        print("Loaded model '%s'" % model)
    else:
        nlp = spacy.blank('en')
        optimise = None  # a blank pipeline is initialised below instead
        print("Created blank 'en' model")

    # set up the pipeline; add_pipe returns the component, so the labels
    # added below go to the 'ner' that is actually in the pipeline
    if 'ner' not in nlp.pipe_names:
        ner = nlp.add_pipe('ner', last=True)
    else:
        ner = nlp.get_pipe('ner')

    for _, annotations in TRAIN_DATA:
        for ent in annotations.get('entities'):
            ner.add_label(ent[2])

    other_pipes = [pipe for pipe in nlp.pipe_names if pipe != 'ner']
    with nlp.disable_pipes(*other_pipes):  # only train NER
        optimizer = nlp.initialize()
        # optimizer = optimise  # use this instead when fine-tuning a loaded model
        for itn in range(n_iter):
            random.shuffle(TRAIN_DATA)
            losses = {}
            for text, annotations in tqdm(TRAIN_DATA):
                doc = nlp.make_doc(text)
                example = Example.from_dict(doc, annotations)
                nlp.update(
                    [example],
                    drop=0.5,
                    sgd=optimizer,
                    losses=losses)
            print(losses)

    for text, _ in TRAIN_DATA:
        doc = nlp(text)
        print('Entities', [(ent.text, ent.label_) for ent in doc.ents])

    if output_dir is not None:
        output_dir = Path(output_dir)
        if not output_dir.exists():
            output_dir.mkdir()
        nlp.to_disk(output_dir)
        print("Saved model to", output_dir)

def test_model(text):
    nlp = spacy.load(r'D:\currency')
    for tex in text.split('\n'):
        doc = nlp(tex)
        for token in doc.ents:
            print(token.text, token.label_)

spacy_train_model()  # train the model
test_model('text')   # test the model

Couple of thoughts here...
You can't train a model with only five examples. Maybe this is just example code and you have more, but you generally need hundreds of examples.
If you only need to recognize currency names like USD or GBP, use spaCy's rule-based matchers. You would only need an NER model if these are ambiguous somehow. Like if ALL is a currency, but you don't want to recognize it in "I ate ALL the donuts", an NER model can help, but that's a pretty hard distinction to learn, so you'll need hundreds of examples.
What is probably happening in your example is that the NER model has learned that any all-caps token is a currency. If you want to fix that with an NER model, you'll need to give it examples where an all-caps token isn't a currency to learn from.
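For the unambiguous cases, a minimal sketch of the rule-based approach with spaCy's EntityRuler (assuming spaCy v3; CURRENCY_CODES below stands in for a full ISO code list like the one in the question) could look like this:

import spacy

# match known currency codes exactly instead of training an NER model
CURRENCY_CODES = ['AFN', 'EUR', 'ALL', 'DZD', 'USD', 'INR', 'GBP']

nlp = spacy.blank('en')
ruler = nlp.add_pipe('entity_ruler')
ruler.add_patterns([{'label': 'CUR', 'pattern': code} for code in CURRENCY_CODES])

doc = nlp('I have AZWSQTS lot LOT of Indian MZW currency USD INR')
print([(ent.text, ent.label_) for ent in doc.ents])
# [('USD', 'CUR'), ('INR', 'CUR')] - unknown tokens like AZWSQTS are not matched

Note that this would still tag ALL in "I ate ALL the donuts", which is exactly the ambiguity described above.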

Related

Spacy NER output, How to save ent.label,ent.text?

I am using this code:
c = []
for i, j in article.iterrows():
    c.append(j)
d = []
for i in c:
    e = {}
    e['Urls'] = i[0]
    a = str(i[2])
    doc = ner(a)
    for ent in doc.ents:
        e[ent.label_] = ent.text
    d.append(e)
My output looks something like this:
[{'Urls': 'https://somewebsite.com',
  'Fruit': 'Apple',
  'Fruit_colour': 'Red'},
 {'Urls': 'https://some_other_website.com/',
  'Fruit': 'Papaya',
  'Fruit_Colour': 'Yellow'}]
I have multiple values for fruit. The desired output looks like:
{'Urls': 'https://somewebsite.com',
 'Fruit': 'Apple',
 'Fruit': 'orange',
 'Fruit': 'watermelon',
 'Fruit_colour': 'Red',
 'Fruit_colour': 'orange',
 'Fruit_colour': 'Green'}
{'Urls': 'https://some_other_website.com/',
 'Fruit': 'Papaya',
 'Fruit': 'Peach',
 'Fruit': 'Mango',
 'Fruit_Colour': 'Yellow',
 'Fruit_Colour': 'Yellow',
 'Fruit_Colour': 'Green'}
Your help and time are much appreciated, thank you.
It sounds like you want to save multiple values in a single key. You can use a defaultdict with lists for that.
from collections import defaultdict

out = defaultdict(list)
doc = ...  # get the doc from spaCy
for ent in doc.ents:
    out[ent.label_].append(ent.text)
print(out)
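Applied to the loop in the question, a sketch (assuming, as in the question, that article is a pandas DataFrame with the URL in column 0 and the text in column 2, and ner is a loaded spaCy pipeline) might look like:

from collections import defaultdict

# one defaultdict per row, so repeated labels collect into lists
d = []
for _, row in article.iterrows():
    e = defaultdict(list)
    e['Urls'].append(row[0])
    doc = ner(str(row[2]))
    for ent in doc.ents:
        e[ent.label_].append(ent.text)
    d.append(dict(e))

Note that a Python dict cannot hold the same key twice, so the repeated 'Fruit' keys in the desired output have to become lists, e.g. {'Fruit': ['Papaya', 'Peach', 'Mango']}.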

Error using H2O (from Python 3.9.10) and XGBoost backend on MacOS (Monterey, Apple M1)

I am currently trying to use H2O from Python, and I am running into problems with XGBoost on macOS.
It seems like H2O cannot find it anywhere.
More precisely, the following simple snippet
import pandas as pd
import h2o

data = [['2015-01-01', '2490.925806', '-0.41'],
        ['2015-01-02', '2412.623113', '-0.48'],
        ['2015-01-03', '2365.611276', '-0.55']]
df = pd.DataFrame(data, columns=["time", "base", "target"]).set_index("time", drop=True)
h2o.init(nthreads=-1)
estimator = h2o.estimators.H2OXGBoostEstimator()
training_frame = h2o.H2OFrame(df)
estimator.train(["base"], "target", training_frame)
gives me the error:
H2OResponseError: Server error water.exceptions.H2ONotFoundArgumentException:
Error: POST /3/ModelBuilders/xgboost not found
Request: POST /3/ModelBuilders/xgboost
data: {'training_frame': 'Key_Frame__upload_893634781f588299bbd20d51c98d43a9.hex', 'nfolds': '0', 'keep_cross_validation_models': 'True', 'keep_cross_validation_predictions': 'False', 'keep_cross_validation_fold_assignment': 'False', 'score_each_iteration': 'False', 'fold_assignment': 'auto', 'response_column': 'target', 'ignore_const_cols': 'True', 'stopping_rounds': '0', 'stopping_metric': 'auto', 'stopping_tolerance': '0.001', 'max_runtime_secs': '0.0', 'seed': '-1', 'distribution': 'auto', 'tweedie_power': '1.5', 'categorical_encoding': 'auto', 'quiet_mode': 'True', 'ntrees': '50', 'max_depth': '6', 'min_rows': '1.0', 'min_child_weight': '1.0', 'learn_rate': '0.3', 'eta': '0.3', 'sample_rate': '1.0', 'subsample': '1.0', 'col_sample_rate': '1.0', 'colsample_bylevel': '1.0', 'col_sample_rate_per_tree': '1.0', 'colsample_bytree': '1.0', 'colsample_bynode': '1.0', 'max_abs_leafnode_pred': '0.0', 'max_delta_step': '0.0', 'score_tree_interval': '0', 'min_split_improvement': '0.0', 'gamma': '0.0', 'nthread': '-1', 'build_tree_one_node': 'False', 'calibrate_model': 'False', 'max_bins': '256', 'max_leaves': '0', 'sample_type': 'uniform', 'normalize_type': 'tree', 'rate_drop': '0.0', 'one_drop': 'False', 'skip_drop': '0.0', 'tree_method': 'auto', 'grow_policy': 'depthwise', 'booster': 'gbtree', 'reg_lambda': '1.0', 'reg_alpha': '0.0', 'dmatrix_type': 'auto', 'backend': 'auto', 'gainslift_bins': '-1', 'auc_type': 'auto', 'scale_pos_weight': '1.0'}
More information about my setup:
OS: Monterey 12.3
Processor: Apple M1
Python: 3.9.10
H2O: 3.36.0.3
I suspect the Apple M1 to be the cause of the error, but is that really the case?
I am sorry, but XGBoost is not supported on the Apple M1 processor yet.
https://h2oai.atlassian.net/browse/PUBDEV-8482
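As a sanity check before training, h2o-py exposes an availability test for the XGBoost backend; a short sketch (assuming a running H2O cluster) might be:

import h2o
from h2o.estimators import H2OXGBoostEstimator

# reports whether the connected H2O cluster has an XGBoost backend,
# turning the opaque 404 above into an explicit yes/no
h2o.init(nthreads=-1)
print(H2OXGBoostEstimator.available())  # False on builds without XGBoost, e.g. Apple M1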

Huggingface's BERT tokenizer not adding pad token

It's not entirely clear from the documentation, but I can see that BertTokenizer is initialised with pad_token='[PAD]', so I assume that when you encode with add_special_tokens=True it will automatically pad. However, given that pad_token_id=0, I can't see any 0s in the token_ids:
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
tokens = tokenizer.tokenize(text)
token_ids = tokenizer.encode(text, add_special_tokens=True, max_length=2048)
# Print the original sentence.
print('Original: ', text)
# Print the sentence split into tokens.
print('\nTokenized: ', tokens)
# Print the sentence mapped to token ids.
print('\nToken IDs: ', token_ids)
Output:
Original: Toronto's key stock index ended higher in brisk trading on Thursday, extending Wednesday's rally despite being weighed down by losses on Wall Street.
The TSE 300 Composite Index rose 29.80 points to close at 5828.62, outperforming the Dow Jones Industrial Average which slumped 21.27 points to finish at 6658.60.
Toronto added to Wednesday's 55-point rally while investors took profits in New York after the Dow's 92-point gains, said MMS International analyst Katherine Beattie.
"That shows that the markets are very fragile," Beattie said. "They (investors) want to take advantage of any strength to sell," she said.
Toronto was also buoyed by its heavyweight gold group which jumped nearly 2.2 percent, aided by firmer COMEX gold prices. The key June contract rose $1.00 to $344.30.
Ten of Toronto's 14 sub-indices posted gains, led by golds, transportation, forestry products and consumer products.
The weak side included conglomerates, base metals and utilities.
Trading was heavy at 100 million shares worth C$1.54 billion ($1.1 billion).
Advancing stocks outnumbered declines 556 to 395, with 276 issues flat.
Among hot stocks, Bre-X Minerals Ltd. rose 0.13 to 2.30 on 5.0 million shares as investors continued to consider the viability of its Busang gold discovery in Indonesia.
Kenting Energy Services Inc. rose 0.25 to 9.05 after Precision Drilling Corp. amended its takeover offer
Bakery and foodstuffs maker George Weston Ltd. jumped 4.50 to close at 74.50, the TSE's top gainer.
Tokenized: ['toronto', "'", 's', 'key', 'stock', 'index', 'ended', 'higher', 'in', 'brisk', 'trading', 'on', 'thursday', ',', 'extending', 'wednesday', "'", 's', 'rally', 'despite', 'being', 'weighed', 'down', 'by', 'losses', 'on', 'wall', 'street', '.', 'the', 'ts', '##e', '300', 'composite', 'index', 'rose', '29', '.', '80', 'points', 'to', 'close', 'at', '58', '##28', '.', '62', ',', 'out', '##per', '##form', '##ing', 'the', 'dow', 'jones', 'industrial', 'average', 'which', 'slumped', '21', '.', '27', 'points', 'to', 'finish', 'at', '66', '##58', '.', '60', '.', 'toronto', 'added', 'to', 'wednesday', "'", 's', '55', '-', 'point', 'rally', 'while', 'investors', 'took', 'profits', 'in', 'new', 'york', 'after', 'the', 'dow', "'", 's', '92', '-', 'point', 'gains', ',', 'said', 'mm', '##s', 'international', 'analyst', 'katherine', 'beat', '##tie', '.', '"', 'that', 'shows', 'that', 'the', 'markets', 'are', 'very', 'fragile', ',', '"', 'beat', '##tie', 'said', '.', '"', 'they', '(', 'investors', ')', 'want', 'to', 'take', 'advantage', 'of', 'any', 'strength', 'to', 'sell', ',', '"', 'she', 'said', '.', 'toronto', 'was', 'also', 'bu', '##oy', '##ed', 'by', 'its', 'heavyweight', 'gold', 'group', 'which', 'jumped', 'nearly', '2', '.', '2', 'percent', ',', 'aided', 'by', 'firm', '##er', 'come', '##x', 'gold', 'prices', '.', 'the', 'key', 'june', 'contract', 'rose', '$', '1', '.', '00', 'to', '$', '344', '.', '30', '.', 'ten', 'of', 'toronto', "'", 's', '14', 'sub', '-', 'indices', 'posted', 'gains', ',', 'led', 'by', 'gold', '##s', ',', 'transportation', ',', 'forestry', 'products', 'and', 'consumer', 'products', '.', 'the', 'weak', 'side', 'included', 'conglomerate', '##s', ',', 'base', 'metals', 'and', 'utilities', '.', 'trading', 'was', 'heavy', 'at', '100', 'million', 'shares', 'worth', 'c', '$', '1', '.', '54', 'billion', '(', '$', '1', '.', '1', 'billion', ')', '.', 'advancing', 'stocks', 'outnumbered', 'declines', '55', '##6', 'to', '395', ',', 'with', '276', 'issues', 'flat', '.', 'among', 'hot', 'stocks', ',', 'br', '##e', '-', 'x', 'minerals', 'ltd', '.', 'rose', '0', '.', '13', 'to', '2', '.', '30', 'on', '5', '.', '0', 'million', 'shares', 'as', 'investors', 'continued', 'to', 'consider', 'the', 'via', '##bility', 'of', 'its', 'bus', '##ang', 'gold', 'discovery', 'in', 'indonesia', '.', 'kent', '##ing', 'energy', 'services', 'inc', '.', 'rose', '0', '.', '25', 'to', '9', '.', '05', 'after', 'precision', 'drilling', 'corp', '.', 'amended', 'its', 'takeover', 'offer', 'bakery', 'and', 'foods', '##tu', '##ffs', 'maker', 'george', 'weston', 'ltd', '.', 'jumped', '4', '.', '50', 'to', 'close', 'at', '74', '.', '50', ',', 'the', 'ts', '##e', "'", 's', 'top', 'gain', '##er', '.']
Token IDs: [101, 4361, 1005, 1055, 3145, 4518, 5950, 3092, 3020, 1999, 28022, 6202, 2006, 9432, 1010, 8402, 9317, 1005, 1055, 8320, 2750, 2108, 12781, 2091, 2011, 6409, 2006, 2813, 2395, 1012, 1996, 24529, 2063, 3998, 12490, 5950, 3123, 2756, 1012, 3770, 2685, 2000, 2485, 2012, 5388, 22407, 1012, 5786, 1010, 2041, 4842, 14192, 2075, 1996, 23268, 3557, 3919, 2779, 2029, 14319, 2538, 1012, 2676, 2685, 2000, 3926, 2012, 5764, 27814, 1012, 3438, 1012, 4361, 2794, 2000, 9317, 1005, 1055, 4583, 1011, 2391, 8320, 2096, 9387, 2165, 11372, 1999, 2047, 2259, 2044, 1996, 23268, 1005, 1055, 6227, 1011, 2391, 12154, 1010, 2056, 3461, 2015, 2248, 12941, 9477, 3786, 9515, 1012, 1000, 2008, 3065, 2008, 1996, 6089, 2024, 2200, 13072, 1010, 1000, 3786, 9515, 2056, 1012, 1000, 2027, 1006, 9387, 1007, 2215, 2000, 2202, 5056, 1997, 2151, 3997, 2000, 5271, 1010, 1000, 2016, 2056, 1012, 4361, 2001, 2036, 20934, 6977, 2098, 2011, 2049, 8366, 2751, 2177, 2029, 5598, 3053, 1016, 1012, 1016, 3867, 1010, 11553, 2011, 3813, 2121, 2272, 2595, 2751, 7597, 1012, 1996, 3145, 2238, 3206, 3123, 1002, 1015, 1012, 4002, 2000, 1002, 29386, 1012, 2382, 1012, 2702, 1997, 4361, 1005, 1055, 2403, 4942, 1011, 29299, 6866, 12154, 1010, 2419, 2011, 2751, 2015, 1010, 5193, 1010, 13116, 3688, 1998, 7325, 3688, 1012, 1996, 5410, 2217, 2443, 22453, 2015, 1010, 2918, 11970, 1998, 16548, 1012, 6202, 2001, 3082, 2012, 2531, 2454, 6661, 4276, 1039, 1002, 1015, 1012, 5139, 4551, 1006, 1002, 1015, 1012, 1015, 4551, 1007, 1012, 10787, 15768, 21943, 26451, 4583, 2575, 2000, 24673, 1010, 2007, 25113, 3314, 4257, 1012, 2426, 2980, 15768, 1010, 7987, 2063, 1011, 1060, 13246, 5183, 1012, 3123, 1014, 1012, 2410, 2000, 1016, 1012, 2382, 2006, 1019, 1012, 1014, 2454, 6661, 2004, 9387, 2506, 2000, 5136, 1996, 3081, 8553, 1997, 2049, 3902, 5654, 2751, 5456, 1999, 6239, 1012, 5982, 2075, 2943, 2578, 4297, 1012, 3123, 1014, 1012, 2423, 2000, 1023, 1012, 5709, 2044, 11718, 15827, 13058, 1012, 13266, 2049, 15336, 3749, 18112, 1998, 9440, 8525, 21807, 9338, 2577, 12755, 5183, 1012, 5598, 1018, 1012, 2753, 2000, 2485, 2012, 6356, 1012, 2753, 1010, 1996, 24529, 2063, 1005, 1055, 2327, 5114, 2121, 1012, 102]
No, it would not. There is a different parameter to allow padding:
transformers >= 3.0.0: padding (accepts True, 'max_length', and False as values)
transformers < 3.0.0: pad_to_max_length (accepts True or False as values)
add_special_tokens will add the [CLS] and [SEP] tokens (101 and 102 respectively).
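For example, with transformers >= 3.0.0, a minimal sketch (using a small max_length for readability) would be:

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
token_ids = tokenizer.encode(
    'a short sentence',
    add_special_tokens=True,   # adds [CLS] (101) and [SEP] (102)
    padding='max_length',      # pads with pad_token_id (0) up to max_length
    max_length=16,
)
print(token_ids)  # the trailing 0s are the padding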

multi-gpu inference tensorflow

I want to perform multi-GPU inference using TensorFlow/Keras.
This is my prediction code:
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
# Load weights trained on MS-COCO
model.load_weights(COCO_MODEL_PATH, by_name=True)
# COCO Class names
# Index of the class in the list is its ID. For example, to get ID of
# the teddy bear class, use: class_names.index('teddy bear')
class_names = ['BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane',
               'bus', 'train', 'truck', 'boat', 'traffic light',
               'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird',
               'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear',
               'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie',
               'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
               'kite', 'baseball bat', 'baseball glove', 'skateboard',
               'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup',
               'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
               'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
               'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed',
               'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote',
               'keyboard', 'cell phone', 'microwave', 'oven', 'toaster',
               'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors',
               'teddy bear', 'hair drier', 'toothbrush']
# Load a random image from the images folder
file_names = next(os.walk(IMAGE_DIR))[2]
image = skimage.io.imread(os.path.join(IMAGE_DIR, random.choice(file_names)))
# Run detection
results = model.detect([image], verbose=1)
# Visualize results
r = results[0]
Is there a way to run this model on multiple GPUs?
Thanks in advance.
Increase the GPU_COUNT as per the number of GPUs in the system and pass the new config when creating the model using modellib.MaskRCNN.
class InferenceConfig(coco.CocoConfig):
    GPU_COUNT = 1  # increase the GPU count based on the number of GPUs
    IMAGES_PER_GPU = 1

config = InferenceConfig()
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
https://github.com/matterport/Mask_RCNN/blob/master/samples/demo.ipynb
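With GPU_COUNT increased, Mask_RCNN computes BATCH_SIZE = GPU_COUNT * IMAGES_PER_GPU, and detect() asserts that it receives exactly BATCH_SIZE images per call. A hedged usage sketch (the image variables are placeholders):

class InferenceConfig(coco.CocoConfig):
    GPU_COUNT = 2       # e.g. a machine with two GPUs
    IMAGES_PER_GPU = 1  # BATCH_SIZE becomes 2 * 1 = 2

config = InferenceConfig()
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
model.load_weights(COCO_MODEL_PATH, by_name=True)

# detect() expects len(images) == BATCH_SIZE, so pass two images per call here
results = model.detect([image1, image2], verbose=1)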

Keras: Missing clear_session, set_session, and get_session?

I'm using Keras 2.2.0 and am trying to do something like the following:
import keras.backend as K
import tensorflow as tf  # needed for tf.Session below

K.clear_session()
sess = tf.Session()
K.set_session(sess)
...
with K.get_session() as sess:
However, I get errors saying AttributeError: 'module' object has no attribute 'clear_session'. So it seems this functionality is no longer in keras.backend?
For instance, if I do dir(keras.backend), I get:
['Function', 'NAME_SCOPE_STACK', 'Print', 'RandomStreams', 'T', 'T_softsign', '_BACKEND', '__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__', '_backend', '_config', '_config_path', '_epsilon', '_floatx', '_image_data_format', '_keras_base_dir', '_keras_dir', 'abs', 'absolute_import', 'all', 'any', 'arange', 'argmax', 'argmin', 'backend', 'batch_dot', 'batch_flatten', 'batch_get_value', 'batch_normalization', 'batch_set_value', 'bias_add', 'binary_crossentropy', 'cast', 'cast_to_floatx', 'categorical_crossentropy', 'clip', 'common', 'concatenate', 'constant', 'contextmanager', 'conv1d', 'conv2d', 'conv2d_transpose', 'conv3d', 'conv3d_transpose', 'cos', 'count_params', 'ctc_batch_cost', 'ctc_cost', 'ctc_create_skip_idxs', 'ctc_interleave_blanks', 'ctc_path_probs', 'ctc_update_log_p', 'cumprod', 'cumsum', 'defaultdict', 'depthwise_conv2d', 'division', 'dot', 'dropout', 'dtype', 'elu', 'epsilon', 'equal', 'eval', 'exp', 'expand_dims', 'eye', 'f', 'flatten', 'floatx', 'foldl', 'foldr', 'function', 'gather', 'get_uid', 'get_value', 'get_variable_shape', 'gradients', 'greater', 'greater_equal', 'hard_sigmoid', 'has_arg', 'identity', 'ifelse', 'image_data_format', 'image_dim_ordering', 'importlib', 'in_test_phase', 'in_top_k', 'in_train_phase', 'int_shape', 'is_keras_tensor', 'is_placeholder', 'is_sparse', 'is_tensor', 'json', 'l2_normalize', 'learning_phase', 'less', 'less_equal', 'local_conv1d', 'local_conv2d', 'log', 'logsumexp', 'map_fn', 'max', 'maximum', 'mean', 'min', 'minimum', 'moving_average_update', 'name_scope', 'ndim', 'normalize_batch_in_training', 'not_equal', 'np', 'one_hot', 'ones', 'ones_like', 'os', 'pattern_broadcast', 'permute_dimensions', 'placeholder', 'pool', 'pool2d', 'pool3d', 'pow', 'print_function', 'print_tensor', 'prod', 'py_all', 'py_any', 'py_slice', 'py_sum', 'random_binomial', 'random_normal', 'random_normal_variable', 'random_uniform', 'random_uniform_variable', 'relu', 'repeat', 'repeat_elements', 'reset_uids', 'reshape', 'resize_images', 'resize_volumes', 'reverse', 'rnn', 'round', 'separable_conv1d', 'separable_conv2d', 'set_epsilon', 'set_floatx', 'set_image_data_format', 'set_image_dim_ordering', 'set_learning_phase', 'set_value', 'shape', 'sigmoid', 'sign', 'sin', 'slice', 'softmax', 'softplus', 'softsign', 'sparse_categorical_crossentropy', 'spatial_2d_padding', 'spatial_3d_padding', 'sqrt', 'square', 'squeeze', 'stack', 'std', 'stop_gradient', 'sum', 'switch', 'sys', 'tanh', 'temporal_padding', 'th_sparse_module', 'theano', 'theano_backend', 'tile', 'to_dense', 'transpose', 'truncated_normal', 'update', 'update_add', 'update_sub', 'var', 'variable', 'zeros', 'zeros_like']
and I don't see any of those three in there.
How should I be writing this code in modern Keras?
Thanks!
EDIT: https://github.com/keras-team/keras/issues/11015
It seems this functionality is no longer available, and I may have to downgrade.
It might be that your backend is set to Theano (I believe clear_session is only available through the TensorFlow backend with Keras). You can change the backend setting in your keras.json to TensorFlow, and clear_session should then be available to you.
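A quick way to confirm which backend is active (keras.json normally lives at ~/.keras/keras.json, where the "backend" field can be set to "tensorflow"):

import keras.backend as K

# if this prints 'theano', clear_session/set_session/get_session will be missing;
# switch the "backend" entry in ~/.keras/keras.json to "tensorflow"
print(K.backend())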