Xcode 8 throws some errors after running the project [duplicate] - xcode8
This question already has answers here:
Hide strange unwanted Xcode logs
(14 answers)
Closed 6 years ago.
After running the project, the console prints these errors:
subsystem: com.apple.UIKit, category: HIDEventFiltered, enable_level: 0, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 1, privacy_setting: 2, enable_private_data: 0
subsystem: com.apple.UIKit, category: HIDEventIncoming, enable_level: 0, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 1, privacy_setting: 2, enable_private_data: 0
subsystem: com.apple.BaseBoard, category: MachPort, enable_level: 1, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 0, privacy_setting: 0, enable_private_data: 0
subsystem: com.apple.UIKit, category: StatusBar, enable_level: 0, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 1, privacy_setting: 2, enable_private_data: 0
subsystem: com.apple.libsqlite3, category: logging, enable_level: 0, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 1, privacy_setting: 2, enable_private_data: 0
subsystem: com.apple.SystemConfiguration, category: SCPreferences, enable_level: 0, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 0, privacy_setting: 2, enable_private_data: 0
Finally got the answer:
Click Edit Scheme -> choose "Run" on the left -> select the "Arguments" tab at the top right -> at the bottom right, add the environment variable as stated above:
Name: OS_ACTIVITY_MODE, Value: disable
Related
Issues with OFDM transmitter and receiver in GNU Radio
I am having some issues with GNU Radio when trying to use the OFDM transmitter and receiver. I am loosely following the example on the wiki; here is my flowgraph. I am struggling to find the correct values for the OFDM blocks and have tried multiple values for Occupied Carriers and Sync Word.

Current values:

Occupied Carriers: (list(range(-26, -21)) + list(range(-20, -7)) + list(range(-6, 0)) + list(range(1, 7)) + list(range(8, 21)) + list(range(22, 27)),)
Sync Word 1 & 2: (0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)
Pilot Carrier: ((-21, -7, 7, 21,),)

The program is not working, and it fails with the following error:

TypeError: __init__(): incompatible constructor arguments. The following argument types are supported:
    1. gnuradio.digital.digital_python.ofdm_carrier_allocator_cvc(fft_len: int, occupied_carriers: List[List[int]], pilot_carriers: List[List[int]], pilot_symbols: List[List[complex]], sync_words: List[List[complex]], len_tag_key: str = 'packet_len', output_is_shifted: bool = True)

Invoked with: 64; kwargs: occupied_carriers=([-26, -25, -24, -23, -22, -20, -19, -18, -17, -16, -15, -14, -13, -12, -11, -10, -9, -8, -6, -5, -4, -3, -2, -1, 1, 2, 3, 4, 5, 6, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 22, 23, 24, 25, 26],), pilot_carriers=((-21, -7, 7, 21),), pilot_symbols=(1, 1, 1, -1), sync_words=[(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], len_tag_key='packet_len'

This is new territory for me, so please ask if I can make any clarifications.
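A detail worth noting in the error message itself: the constructor wants every carrier argument as a list of lists, and the most visible mismatch in the invocation is pilot_symbols=(1, 1, 1, -1), which is flat rather than nested. A minimal sketch of consistently nested arguments, illustrative only (the all-zero sync words are carried over from the question and would still need real values):

from gnuradio import digital

fft_len = 64
occupied_carriers = (list(range(-26, -21)) + list(range(-20, -7))
                     + list(range(-6, 0)) + list(range(1, 7))
                     + list(range(8, 21)) + list(range(22, 27)),)
pilot_carriers = ((-21, -7, 7, 21),)
pilot_symbols = ((1, 1, 1, -1),)              # nested: one symbol tuple per pilot-carrier tuple
sync_words = [[0] * fft_len, [0] * fft_len]   # each sync word must have fft_len entries

alloc = digital.ofdm_carrier_allocator_cvc(
    fft_len,
    occupied_carriers=occupied_carriers,
    pilot_carriers=pilot_carriers,
    pilot_symbols=pilot_symbols,
    sync_words=sync_words,
    len_tag_key='packet_len')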
Python - Convert from Response Variable to Pandas Dataframe
I ran a LIWC analysis and it gives me the following results (below). I would like to turn the result into a pandas dataframe. If anyone can chip in, that would be wonderful. Thanks in advance :)
Best, David

resp = requests.post(url, auth=(api_key, api_secret), data=data)
resp1 = resp
print(resp.json())

{'plan_usage': {'call_limit': 1000, 'calls_made': 6, 'calls_remaining': 994, 'percent_used': 0.6, 'start_date': '2020-12-09T03:05:57.779556Z', 'end_date': '2020-12-23T03:05:57.779556Z'}, 'results': [{'response_id': 'd1382f42-5c28-4528-ab2e-81b80ba185e2', 'request_id': 'req-1', 'language': 'en', 'version': 'v1.0.0', 'summary': {'word_count': 57, 'words_per_sentence': 11.4, 'sentence_count': 5, 'six_plus_words': 0.2982456140350877, 'emojis': 0, 'emoticons': 0, 'hashtags': 0, 'urls': 0}, 'liwc': {'scores': {'analytical_thinking': 80.77394876079086, 'authentic': 38.8220872694557, 'clout': 50, 'emotional_tone': 97.58138119866139, 'dictionary_words': 0.8771929824561403, 'categories': {'achievement': 0, 'adjectives': 0.017543859649122806, 'adverbs': 0.03508771929824561, 'affect': 0.05263157894736842, 'affiliation': 0.017543859649122806, 'all_punctuation': 0.10526315789473684, 'anger_words': 0, 'anxiety_words': 0, 'apostrophes': 0, 'articles': 0.12280701754385964, 'assent': 0, 'auxiliary_verbs': 0.14035087719298245, 'biological_processes': 0, 'body': 0, 'causation': 0, 'certainty': 0, 'cognitive_processes': 0.05263157894736842, 'colons': 0, 'commas': 0.017543859649122806, 'comparisons': 0, 'conjunctions': 0.07017543859649122, 'dashes': 0, 'death': 0, 'differentiation': 0, 'discrepancies': 0.017543859649122806, 'drives': 0.03508771929824561, 'exclamations': 0, 'family': 0, 'feel': 0, 'female': 0, 'filler_words': 0, 'focus_future': 0, 'focus_past': 0, 'focus_present': 0.14035087719298245, 'friends': 0.017543859649122806, 'function_words': 0.543859649122807, 'health': 0, 'hear': 0, 'home': 0, 'i': 0.03508771929824561, 'impersonal_pronouns': 0.03508771929824561, 'informal_language': 0, 'ingestion': 0, 'insight': 0, 'interrogatives': 0.017543859649122806, 'leisure': 0.14035087719298245, 'male': 0, 'money': 0, 'motion': 0.05263157894736842, 'negations': 0, 'negative_emotion_words': 0, 'netspeak': 0, 'nonfluencies': 0, 'numbers': 0, 'other_grammar': 0.2807017543859649, 'other_punctuation': 0, 'parentheses': 0, 'perceptual_processes': 0.017543859649122806, 'periods': 0.08771929824561403, 'personal_concerns': 0.14035087719298245, 'personal_pronouns': 0.03508771929824561, 'positive_emotion_words': 0.05263157894736842, 'power': 0, 'prepositions': 0.10526315789473684, 'pronouns': 0.07017543859649122, 'quantifiers': 0.05263157894736842, 'question_marks': 0, 'quotes': 0, 'relativity': 0.17543859649122806, 'religion': 0, 'reward': 0.017543859649122806, 'risk': 0, 'sad_words': 0, 'see': 0.017543859649122806, 'semicolons': 0, 'sexual': 0, 'she_he': 0, 'social': 0.03508771929824561, 'space': 0.10526315789473684, 'swear_words': 0, 'tentative': 0.03508771929824561, 'they': 0, 'time': 0.017543859649122806, 'time_orientation': 0.14035087719298245, 'verbs': 0.19298245614035087, 'we': 0, 'work': 0, 'you': 0}}}, 'sallee': {'counts': {'emotions': {'admiration': 5, 'amusement': 0, 'anger': 0, 'boredom': 0, 'calmness': 0, 'curiosity': 0, 'desire': 0, 'disgust': 0, 'excitement': 0.375, 'fear': 0, 'gratitude': 2, 'joy': 6.375, 'love': 5, 'pain': 0, 'sadness': 0, 'surprise': 0}, 'goodfeel': 13.375, 'ambifeel': 0, 'badfeel': 0, 'emotionality': 13.375, 'sentiment': 13.375, 'non_emotion': None}, 'scores': {'emotions': {'admiration': 0.3333333333333333, 'amusement': 0, 'anger': 0, 'boredom': 0, 'calmness': 0, 'curiosity': 0, 'desire': 0, 'disgust': 0, 'excitement': 0.03614457831325301, 'fear': 0, 'gratitude': 0.16666666666666666, 'joy': 0.3893129770992366, 'love': 0.3333333333333333, 'pain': 0, 'sadness': 0, 'surprise': 0}, 'goodfeel': 0.2015065913370998, 'ambifeel': 0, 'badfeel': 0, 'emotionality': 0.2015065913370998, 'sentiment': 0.6541600137038615, 'non_emotion': 0.7984934086629002}, 'emotion_word_count': 4}}]}
js = resp.json()
df = pd.json_normalize(js['results'][0])

df.columns
Index(['response_id', 'request_id', 'language', 'version',
       'summary.word_count', 'summary.words_per_sentence',
       'summary.sentence_count', 'summary.six_plus_words', 'summary.emojis',
       'summary.emoticons',
       ...
       'sallee.scores.emotions.pain', 'sallee.scores.emotions.sadness',
       'sallee.scores.emotions.surprise', 'sallee.scores.goodfeel',
       'sallee.scores.ambifeel', 'sallee.scores.badfeel',
       'sallee.scores.emotionality', 'sallee.scores.sentiment',
       'sallee.scores.non_emotion', 'sallee.emotion_word_count'],
      dtype='object', length=150)

df.iloc[0]
response_id                  d1382f42-5c28-4528-ab2e-81b80ba185e2
request_id                                                  req-1
language                                                       en
version                                                    v1.0.0
summary.word_count                                             57
                                        ...
sallee.scores.badfeel                                           0
sallee.scores.emotionality                                  0.202
sallee.scores.sentiment                                     0.654
sallee.scores.non_emotion                                   0.798
sallee.emotion_word_count                                       4
Name: 0, Length: 150, dtype: object
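As a side note, pd.json_normalize also accepts the whole list of results and produces one row per result. A small sketch assuming the same js as above, useful once you make more than one API call per request:

df_all = pd.json_normalize(js['results'])  # one row per entry in js['results']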
KeyError: "None of [Index([...])] are in the [columns]"
I've got a numpy array with shape (3, 50):

data = np.array([[0, 3, 0, 2, 0, 0, 1, 2, 2, 0, 1, 0, 0, 0, 0, 0, 0, 2, 1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 1, 0, 0, 0, 0, 0, 1, 0, 0, 7, 0, 0, 0, 0, 1, 1, 2, 0, 0, 2], [0, 0, 0, 0, 0, 3, 0, 1, 6, 1, 1, 0, 0, 0, 0, 2, 0, 0, 1, 0, 1, 0, 3, 0, 0, 0, 0, 0, 0, 5, 2, 2, 2, 1, 0, 0, 1, 0, 1, 3, 2, 0, 0, 0, 0, 0, 2, 0, 0, 0], [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]])

and the following column names:

new_cols = [f'description_word_{i+1}_count' for i in range(50)]

I'm trying to add new columns to an already existing dataframe like this:

df[new_cols] = data

but I get the error:

KeyError: "None of [Index(['description_word_1_count', 'description_word_2_count',\n 'description_word_3_count', 'description_word_4_count',\n 'description_word_5_count', 'description_word_6_count',\n 'description_word_7_count', 'description_word_8_count',\n 'description_word_9_count', 'description_word_10_count',\n 'description_word_11_count', 'description_word_12_count',\n 'description_word_13_count', 'description_word_14_count',\n 'description_word_15_count', 'description_word_16_count',\n 'description_word_17_count', 'description_word_18_count',\n 'description_word_19_count', 'description_word_20_count',\n 'description_word_21_count', 'description_word_22_count',\n 'description_word_23_count', 'description_word_24_count',\n 'description_word_25_count', 'description_word_26_count',\n 'description_word_27_count', 'description_word_28_count',\n 'description_word_29_count', 'description_word_30_count',\n 'description_word_31_count', 'description_word_32_count',\n 'description_word_33_count', 'description_word_34_count',\n 'description_word_35_count', 'description_word_36_count',\n 'description_word_37_count', 'description_word_38_count',\n 'description_word_39_count', 'description_word_40_count',\n 'description_word_41_count', 'description_word_42_count',\n 'description_word_43_count', 'description_word_44_count',\n 'description_word_45_count', 'description_word_46_count',\n 'description_word_47_count', 'description_word_48_count',\n 'description_word_49_count', 'description_word_50_count'],\n dtype='object')] are in the [columns]"

I also don't know where the '\n' symbols in my column names come from. At the same time, creating a new dataframe from the data works fine:

new_df = pd.DataFrame(data=data, columns=new_cols)

Does anyone know what is causing the error?
Suppose you have a df like this:

df = pd.DataFrame({'person': [1, 1, 1], 'event': ['A', 'B', 'C']})

You can add the new columns like this:

import pandas as pd
import numpy as np

df = pd.DataFrame({'person': [1, 1, 1], 'event': ['A', 'B', 'C']})

# using the same `data` array and `new_cols` list as defined in the question
df[new_cols] = pd.DataFrame(data, index=df.index)

I think the problem is that you are using syntax that creates a single Series, when you actually need to create several Series at once: in other words, a DataFrame.
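An alternative sketch, under the same assumptions (df, data, and new_cols as above), that sidesteps the multi-column assignment entirely: wrap the array in its own DataFrame and concatenate along the columns.

word_counts = pd.DataFrame(data, columns=new_cols, index=df.index)  # shape (3, 50)
df = pd.concat([df, word_counts], axis=1)                           # 3 rows, 52 columns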
How to build a word embedding model using Tflearn?
Updated

I am working on a word embedding model for answer matching-score prediction using Tflearn. I have built a model over sentence vectors with tflearn's DNN classifier; now I have to add a word embedding layer to the DNN model. How do I do that? Thanks in advance.

"JVMdefines": enables a computer to run a Java program is converted as:

"JVMdefines": [[list([0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
enables a computer to run a Java program : list([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])]

My question: Is there any method by which the machine is able to analyze

enables a "machine" to run a Java program

That is, can it detect "computer" and "machine" as having the same meaning?
I would post a clarifying comment, but I do not have enough reputation to do so, so I will try to answer given the information you have presented in the original question...

Your problem seems unclear, but here is how you would do this for a binary classification problem in tflearn.

Step 1: Preprocessing

The first thing you need to do is to tokenize and transform your sentences into lists of integers:

"What kind of food do you like?" ---> [234, 64, 12, 5224, 43, 96, 23]

Then, most people pad their sequences so they all have the same length, cutting off the long ones or increasing the length of short ones by padding with 0's:

[234, 64, 12, 5224, 43, 96, 23] ---> [0, 0, 0, 0, ..., 234, 64, 12, 5224, 43, 96, 23]

Hint:

from tflearn.data_utils import pad_sequences
padded = pad_sequences(unpadded, maxlen=max_document_length, value=0.)

Step 2: Model Building

After you transform all the text you have into integer sequences, you can build the model. Note here that our input shape is [None, max_document_length]: None means optional size (it allows for a variable batch size), and max_document_length is the length of the sequences we padded previously.

import tensorflow as tf
import tflearn
from tflearn.layers.core import input_data

# Create our model
network = input_data(shape=[None, max_document_length], name='input')

Create the embedding matrix. Note that you push the embedding matrix to the CPU. The input_dim parameter is looking for an integer that represents the size of your vocabulary; output_dim is the size of your embedding.

with tf.device('/cpu:0'):
    network = tflearn.embedding(network, input_dim=vocabulary_size, output_dim=128)

# Pass embeddings into an LSTM layer (handles sequential problems)
network = tflearn.lstm(network, 512, dropout=0.8)

# Squish data into a fully connected layer, with 2 outputs for binary classification
network = tflearn.fully_connected(network, 2, activation='softmax')

# Perform regression to get the final answer
network = tflearn.regression(network, optimizer='rmsprop', learning_rate=0.001,
                             loss='categorical_crossentropy')

# Wrap the graph we just created in a tflearn DNN wrapper
model = tflearn.DNN(network)

# Run model.fit to actually train your model
model.fit(x_train, y_train, n_epoch=15, shuffle=True,
          validation_set=(x_val, y_val), show_metric=True, batch_size=batch_size)
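On the final question (detecting that "computer" and "machine" carry the same meaning): one-hot vectors like the ones posted cannot capture that, but learned word embeddings can, because words used in similar contexts end up with nearby vectors. A minimal sketch with gensim (a different library than tflearn, used here only to illustrate the idea; assumes gensim 4.x, and the two example sentences are made up):

from gensim.models import Word2Vec

# tiny made-up corpus; in practice you would train on your real documents
sentences = [
    ["jvm", "enables", "a", "computer", "to", "run", "a", "java", "program"],
    ["jvm", "enables", "a", "machine", "to", "run", "a", "java", "program"],
]
model = Word2Vec(sentences, vector_size=32, min_count=1, epochs=100)

# cosine similarity between the two word vectors; with a real corpus,
# "computer" and "machine" score high because they appear in similar contexts
print(model.wv.similarity("computer", "machine"))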
hmmlearn doesn't converge on a simple input
import numpy as np
from hmmlearn.hmm import MultinomialHMM

startprob_prior = np.array([0.5, 0.5])  # guess
transmat_prior = np.array([[0.9, 0.1], [0.3, 0.7]])  # guess

# data is binary, 0/1 with bursts of 1's
x = [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]  # data
x = np.array(x).reshape(-1, 1)  # make it the shape hmmlearn expects

hmm = MultinomialHMM(n_components=2, verbose=True, startprob_prior=startprob_prior, transmat_prior=transmat_prior)
hmm.fit(x)
print(hmm.monitor_.converged)  # returns True
print(hmm.transmat_)  # returns 2x2 matrix of NaN

Why doesn't it converge? The 1's clearly come in bursts. See issue 137.
The solution was to tell the model not to initialize the emission probabilities (model.init_params = 'st') and to set them manually through the private attribute emissionprob_. Now it seems to be working! In the plot produced below, red is the state and blue is the observation:

import numpy as np
import matplotlib.pyplot as plt
from hmmlearn.hmm import MultinomialHMM

start_probability = np.array([0.9, 0.1])  # guess
transition_probability = np.array([[0.9, 0.1], [0.1, 0.9]])
emission_probability = np.array([[0.9, 0.1], [0.1, 0.9]])

model = MultinomialHMM(n_components=2, verbose=True, n_iter=1000, tol=1e-3)
model.startprob_ = start_probability
model.transmat_ = transition_probability
model.emissionprob_ = emission_probability  # notice here the init is to the internal variable emissionprob_, not emissionprob
model.init_params = 'st'  # fit() re-initializes startprob_ and transmat_, but leaves emissionprob_ alone

# reuse the same bursty 0/1 observation sequence `x` from the question above
x = np.array(x).reshape(-1, 1)  # make it the shape hmmlearn expects
model.fit(x)
print(model.monitor_.converged)  # True
print(model.transmat_)      # learned 2x2 transition matrix (no more NaNs)
print(model.emissionprob_)  # learned 2x2 emission matrix
print(model.startprob_)     # learned start probabilities

logprob, estimated_states = model.decode(x, algorithm="viterbi")

plt.stem(x, label='observation')
plt.plot(estimated_states, label='hidden states', color='red')
plt.show()
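Once the fit has converged, a quick sanity check (a sketch; sample is part of hmmlearn's model API) is to generate synthetic observations from the fitted model and confirm they show the same bursty structure as the input:

X_gen, Z_gen = model.sample(500)  # 500 synthetic observations plus their hidden states
print(X_gen.ravel())              # should be mostly 0's with occasional bursts of 1's
print(Z_gen)                      # the hidden-state sequence that generated them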