Erroneous and inconsistent output from env.render() in OpenAI Gym Taxi-v3 in Google Colab

I am trying to set up the OpenAI Gym environment for Taxi-v3 in Google Colab, using the following code:
from IPython.display import clear_output
import gym
env = gym.make("Taxi-v3", render_mode = 'ansi').env
#env = gym.make("Taxi-v3", render_mode = 'ansi')
Then I have a function that shows the taxi position in the Colab cell:
def showStateVec(txR=3, txC=1, pxI=2, des=0):
    env.reset()
    # encode (taxi_row, taxi_col, passenger_index, destination) into a state id
    state = env.encode(txR, txC, pxI, des)
    env.s = state
    print("State ", env.s, list(env.decode(env.s)))
    p = env.render()
    print(p[0])
    # transition table for this state: action -> [(prob, next_state, reward, done)]
    for k, v in env.P[state].items():
        print(v)
When I call
# taxi at 3,1, passenger at 2, destination = 0
# note, moving to the WEST is not possible, the position does not change
showStateVec(3,1,2,0)
I get the following output (I have replaced the yellow box with 'x'). Evidently this is not correct; the taxi is shown somewhere else:
State 328 [3, 1, 2, 0]
+---------+
|R: |x: :G|
| : | : : |
| : : : : |
| | : | : |
|Y| : |B: |
+---------+
[(1.0, 428, -1, False)]
[(1.0, 228, -1, False)]
[(1.0, 348, -1, False)]
[(1.0, 328, -1, False)]
[(1.0, 328, -10, False)]
[(1.0, 328, -10, False)]
However, if I run the same command a second time, the yellow box moves elsewhere, even though the rest of the output is identical:
State 328 [3, 1, 2, 0]
+---------+
|R: | : :G|
| : | : : |
| : : : : |
| | : | : |
|Y| :x|B: |
+---------+
[(1.0, 428, -1, False)]
[(1.0, 228, -1, False)]
[(1.0, 348, -1, False)]
[(1.0, 328, -1, False)]
[(1.0, 328, -10, False)]
[(1.0, 328, -10, False)]
Here is the link to the Colab notebook where you can replicate the problem. I have also seen this and other solutions on Stack Overflow, but none seem to work.
What should I do to ensure that the taxi (or the yellow box representing it) is displayed exactly where the state says it should be? Please help.
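One hedged guess, not from the original post: the wrappers that gym.make applies around the base TaxiEnv may be serving a frame captured at reset() time (with the taxi at a random position) rather than rendering the manually assigned state. Operating on the unwrapped environment keeps the state assignment and the rendering on the same object. A minimal sketch, assuming gym 0.26+ semantics:
# Sketch under the assumption that wrapper-level rendering is the culprit:
# assign the state on the *unwrapped* environment and render it directly.
base = env.unwrapped
base.reset()
base.s = base.encode(3, 1, 2, 0)  # taxi at (3,1), passenger at 2, destination 0
base.lastaction = None            # clear any leftover "(South)"-style action label
print(base.render())              # in 'ansi' mode this returns a single string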

Related

Replacing unique array of strings in a row using pyspark

I am trying the following code, which replaces an empty list with the unique array of the column "apples_set" when the condition "all" is satisfied.
The column "apples_set" is of type Array[String].
The data frame looks like this:
apples_patterns.show()
+--------------------+-----------------+
| apples_logic_string|apples_set |
+--------------------+-----------------+
| "234" |["43","54"] |
| "65" |["95"] |
| "all" |[] |
| "76" |["84","67"] |
+--------------------+-----------------+
The code:
unique_all_apples = set(apples_patterns.agg(F.flatten(F.collect_set('apples_set'))).head()[0])  # noqa
error_patterns = apples_patterns.withColumn(
    'apples_set',
    F.when(F.col('apples_logic_string') == 'all', unique_all_apples)
     .otherwise(F.col('apples_set')))
The Error:
Traceback (most recent call last):
File "/myproject/datasets/apples_matching.py", line 24, in compute
apples_patterns = apples_patterns.withColumn('apples_set', F.when(F.col('apples_logic_string') == 'all',
File "/scratch/asset-install/1c9821b4f6adc95ac4d5f15ff109001b/miniconda38/lib/python3.8/site-packages/pyspark/sql/functions.py", line 1518, in when
jc = sc._jvm.functions.when(condition._jc, v)
File "/scratch/asset-install/1c9821b4f6adc95ac4d5f15ff109001b/miniconda38/lib/python3.8/site-packages/py4j/java_gateway.py", line 1321, in __call__
return_value = get_return_value(
File "/scratch/asset-install/1c9821b4f6adc95ac4d5f15ff109001b/miniconda38/lib/python3.8/site-packages/pyspark/sql/utils.py", line 111, in deco
return f(*a, **kw)
File "/scratch/asset-install/1c9821b4f6adc95ac4d5f15ff109001b/miniconda38/lib/python3.8/site-packages/py4j/protocol.py", line 326, in get_return_value
raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.sql.functions.when.
: java.lang.RuntimeException: Unsupported literal type class java.util.ArrayList [43,54,95,84,67]
The Py4JJavaError occurs because Spark cannot turn a plain Python collection (here, a set) into a column literal on its own. You can use the array function (see the array documentation). In your case you may use it like this:
F.array([F.lit(x) for x in unique_all_apples])
Sample code:
import pyspark.sql.functions as F

x = [("234", ["43", "54"]), ("65", ["95"]), ("all", []), ("76", ["84", "67"])]
apples_patterns = spark.createDataFrame(x, schema=["apples_logic_string", "apples_set"])

unique_all_apples = set(
    apples_patterns.agg(F.flatten(F.collect_set("apples_set"))).head()[0]
)

error_patterns = apples_patterns.withColumn(
    "apples_set",
    F.when(
        F.col("apples_logic_string") == "all",
        F.array([F.lit(x) for x in unique_all_apples]),
    ).otherwise(F.col("apples_set")),
)
And the output:
+-------------------+--------------------+
|apples_logic_string| apples_set|
+-------------------+--------------------+
| 234| [43, 54]|
| 65| [95]|
| all|[54, 95, 43, 67, 84]|
| 76| [84, 67]|
+-------------------+--------------------+
The easiest solution is to create another dataframe with one row that contains all distinct apples_set values (using explode followed by collect_set), and then join it to the original dataframe:
import spark.implicits._

val data = Seq(
  ("234", Seq("43", "54")),
  ("65", Seq("95")),
  ("all", Seq()),
  ("76", Seq("84", "67"))
)
val df = spark.sparkContext.parallelize(data).toDF("apples_logic_string", "apples_set")

val allDf = df.select(explode(col("apples_set")).as("apples_set"))
  .agg(collect_set("apples_set").as("all_apples_set"))
  .withColumn("apples_logic_string", lit("all"))

df.join(broadcast(allDf), Seq("apples_logic_string"), "left")
  .withColumn("apples_set", when(col("apples_logic_string").equalTo("all"), col("all_apples_set")).otherwise(col("apples_set")))
  .drop("all_apples_set")
  .show(false)
+-------------------+--------------------+
|apples_logic_string|apples_set |
+-------------------+--------------------+
|234 |[43, 54] |
|65 |[95] |
|all |[84, 95, 67, 54, 43]|
|76 |[84, 67] |
+-------------------+--------------------+

Converting from Spell Format to STS when each individual has multiple, separate spells

I am trying to convert data of this form to STS format in order to perform sequence analysis:
|Person ID |Spell |Start Month |End Month |Status (Economic Activity) |
| -------- |----- |------------|----------|---------------------------|
|1|1|300|320|4|
|1|2|320|360|4|
|2|1|330|360|4|
|3|1|270|360|7|
|4|1|280|312|4|
|4|2|312|325|4|
|4|3|325|360|6|
Does anyone know how I can deal with the issue of multiple spells per person and somehow combine each spell for a given individual?
You should have a look at TraMineR's excellent documentation. In particular, the user guide is very helpful. There you will find a section on the seqformat function, which is exactly what you are looking for: with from = "SPELL", it merges all spells belonging to the same id into a single sequence, so multiple spells per person are handled automatically.
library(TraMineR)

## Create spell data
data <-
  as.data.frame(
    matrix(
      c(1, 1, 300, 320, 4,
        1, 2, 320, 360, 4,
        2, 1, 330, 360, 4,
        3, 1, 270, 360, 7,
        4, 1, 280, 312, 4,
        4, 2, 312, 325, 4,
        4, 3, 325, 360, 6),
      ncol = 5, byrow = TRUE)
  )
names(data) <- c("id", "spell", "start", "end", "status")

## Convert from SPELL to STS format with TraMineR::seqformat
data.sts <-
  seqformat(data, from = "SPELL", to = "STS",
            id = "id", begin = "start", end = "end", status = "status",
            process = FALSE)

Euclidean distance between two dataframes

I have two dataframes. For simplicity, assume they each have only one entry:
+--------------------+
| entry |
+--------------------+
|[0.34, 0.56, 0.87] |
+--------------------+
+--------------------+
| entry |
+--------------------+
|[0.12, 0.82, 0.98] |
+--------------------+
How can I compute the Euclidean distance between the entries of these two dataframes? Right now I have the following code:
from pyspark.sql.functions import udf
from pyspark.sql.types import DoubleType
from scipy.spatial import distance
inference = udf(lambda x, y: float(distance.euclidean(x, y)), DoubleType())
inference_result = inference(a, b)
but I get the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/spark/python/pyspark/sql/udf.py", line 197, in wrapper
return self(*args)
File "/usr/lib/spark/python/pyspark/sql/udf.py", line 177, in __call__
return Column(judf.apply(_to_seq(sc, cols, _to_java_column)))
File "/usr/lib/spark/python/pyspark/sql/column.py", line 68, in _to_seq
cols = [converter(c) for c in cols]
File "/usr/lib/spark/python/pyspark/sql/column.py", line 68, in <listcomp>
cols = [converter(c) for c in cols]
File "/usr/lib/spark/python/pyspark/sql/column.py", line 56, in _to_java_column
"function.".format(col, type(col)))
TypeError: Invalid argument, not a string or column: DataFrame[embedding:
array<float>] of type <class 'pyspark.sql.dataframe.DataFrame'>. For column
literals, use 'lit', 'array', 'struct' or 'create_map' function.
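The traceback is the key: a PySpark UDF must be applied to columns of a single DataFrame, not to two DataFrame objects. A minimal sketch, not from the original post (the crossJoin and the aliased column names are assumptions that work for this one-row example):
from pyspark.sql import functions as F
from pyspark.sql.types import DoubleType
from scipy.spatial import distance

euclidean = F.udf(lambda x, y: float(distance.euclidean(x, y)), DoubleType())

# Put both vectors on the same row first (cheap here, since each frame has
# exactly one row), then apply the UDF to the two columns.
pairs = a.select(F.col("entry").alias("entry_a")).crossJoin(
    b.select(F.col("entry").alias("entry_b")))
pairs.withColumn("dist", euclidean("entry_a", "entry_b")).show()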

suggest_int() missing 1 required positional argument: 'high' error on Optuna

I have the following Optuna code to do hyperparameter tuning for an XGBoost classifier.
import optuna
from optuna import Trial, visualization
from optuna.samplers import TPESampler
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

def objective(trial: Trial, X_train, y_train, X_test, y_test):
    param = {
        "n_estimators": Trial.suggest_int("n_estimators", 0, 1000),
        'max_depth': Trial.suggest_int('max_depth', 2, 25),
        'reg_alpha': Trial.suggest_int('reg_alpha', 0, 5),
        'reg_lambda': Trial.suggest_int('reg_lambda', 0, 5),
        'min_child_weight': Trial.suggest_int('min_child_weight', 0, 5),
        'gamma': Trial.suggest_int('gamma', 0, 5),
        'learning_rate': Trial.suggest_loguniform('learning_rate', 0.005, 0.5),
        'colsample_bytree': Trial.suggest_discrete_uniform('colsample_bytree', 0.1, 1, 0.01),
        'nthread': -1
    }
    model = XGBClassifier(**param)
    model.fit(X_train, y_train)
    return cross_val_score(model, X_test, y_test).mean()

study = optuna.create_study(direction='maximize', sampler=TPESampler())
study.optimize(lambda trial: objective(trial, X_train, y_train, X_test, y_test), n_trials=50)
It keeps giving me the following error:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\envs\JaneStreet\lib\site-packages\optuna\_optimize.py", line 217, in _run_trial
value_or_values = func(trial)
File "<ipython-input-74-c1454daaa53e>", line 2, in <lambda>
study.optimize(lambda trial : objective(trial,X_train,y_train,X_test,y_test),n_trials= 50)
File "<ipython-input-73-4438e1db47ef>", line 4, in objective
"n_estimators" : Trial.suggest_int("n_estimators", 0, 1000),
TypeError: suggest_int() missing 1 required positional argument: 'high'
Thanks so much
The problem is that you are calling suggest_int on the class Trial as if it were a class/static method. suggest_int is a regular method and should be called on an object, in this case trial. Changing Trial.suggest_int to trial.suggest_int should get rid of the error.
What about the following? I just changed the parameters after objective and changed Trial to trial.
def objective(trial, X_train, y_train, X_test, y_test):
    param = {
        "n_estimators": trial.suggest_int("n_estimators", 0, 1000),
        'max_depth': trial.suggest_int('max_depth', 2, 25),
        'reg_alpha': trial.suggest_int('reg_alpha', 0, 5),
        'reg_lambda': trial.suggest_int('reg_lambda', 0, 5),
        'min_child_weight': trial.suggest_int('min_child_weight', 0, 5),
        'gamma': trial.suggest_int('gamma', 0, 5),
        'learning_rate': trial.suggest_loguniform('learning_rate', 0.005, 0.5),
        'colsample_bytree': trial.suggest_discrete_uniform('colsample_bytree', 0.1, 1, 0.01),
        'nthread': -1
    }
"n_estimators" : trial.suggest_int("n_estimators", 0, 1000, 20) where
0 is the starting range,
1000 is the ending range, and
20 is the step difference
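A side note beyond the original answers (worth checking against your installed version): newer Optuna releases deprecate suggest_loguniform and suggest_discrete_uniform in favor of suggest_float, so the two remaining suggestions would become:
'learning_rate': trial.suggest_float('learning_rate', 0.005, 0.5, log=True),
'colsample_bytree': trial.suggest_float('colsample_bytree', 0.1, 1, step=0.01),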

Unable to convert VGG-16 to IR

I have a truncated version of VGG-16 in .pb format. I am unable to convert it to IR using the OpenVINO Model Optimizer; I get the following error:
[ ANALYSIS INFO ] It looks like there is IteratorGetNext as input
Run the Model Optimizer with:
--input "IteratorGetNext:0[-1 224 224 3]"
And replace all negative values with positive values
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (): Graph contains 0 node after executing . It considered as error because resulting IR will be empty which is not usual
The command used:
python3 /opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo_tf.py --input_model model.pb
With *.meta:
python3 /opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo_tf.py --input_meta_graph model.meta --log_level DEBUG
[ 2020-06-11 10:59:34,182 ] [ DEBUG ] [ main:213 ] Placeholder shapes : None
'extensions.back.ScalarConstNormalize.RangeInputNormalize'>
| 310 | True | <class 'extensions.back.AvgPool.AvgPool'>
| 311 | True | <class 'extensions.back.ReverseInputChannels.ApplyReverseChannels'>
| 312 | True | <class 'extensions.back.split_normalizer.SplitNormalizer'>
| 313 | True | <class 'extensions.back.ParameterToPlaceholder.ParameterToInput'>
| 314 | True | <class 'extensions.back.GroupedConvWeightsNormalize.GroupedConvWeightsNormalize'>
| 315 | True | <class 'extensions.back.ConvolutionNormalizer.DeconvolutionNormalizer'>
| 316 | True | <class 'extensions.back.StridedSliceMasksNormalizer.StridedSliceMasksNormalizer'>
| 317 | True | <class 'extensions.back.ConvolutionNormalizer.ConvolutionWithGroupsResolver'>
| 318 | True | <class 'extensions.back.ReshapeMutation.ReshapeMutation'>
| 319 | True | <class 'extensions.back.ForceStrictPrecision.ForceStrictPrecision'>
| 320 | True | <class 'extensions.back.I64ToI32.I64ToI32'>
| 321 | True | <class 'extensions.back.ReshapeMutation.DisableReshapeMutationInTensorIterator'>
| 322 | True | <class 'extensions.back.ActivationsNormalizer.ActivationsNormalizer'>
| 323 | True | <class 'extensions.back.pass_separator.BackFinish'>
| 324 | False | <class 'extensions.back.SpecialNodesFinalization.RemoveConstOps'>
| 325 | False | <class 'extensions.back.SpecialNodesFinalization.CreateConstNodesReplacement'>
| 326 | True | <class 'extensions.back.kaldi_remove_memory_output.KaldiRemoveMemoryOutputBackReplacementPattern'>
| 327 | False | <class 'extensions.back.SpecialNodesFinalization.RemoveOutputOps'>
| 328 | True | <class 'extensions.back.blob_normalizer.BlobNormalizer'>
| 329 | False | <class 'extensions.middle.MulFakeQuantizeFuse.MulFakeQuantizeFuse'>
| 330 | False | <class 'extensions.middle.AddFakeQuantizeFuse.AddFakeQuantizeFuse'>
[ 2020-06-11 10:59:34,900 ] [ DEBUG ] [ class_registration:282 ] Run replacer <class 'extensions.load.tf.loader.TFLoader'>
[ INFO ] Restoring parameters from %s
[ WARNING ] From %s: %s (from %s) is deprecated and will be removed %s.
Instructions for updating:
%s
[ WARNING ] From %s: %s (from %s) is deprecated and will be removed %s.
Instructions for updating:
%s
[ FRAMEWORK ERROR ] Cannot load input model: Attempting to use uninitialized value metrics/accuracy/total
[[{{node _retval_metrics/accuracy/total_0_54}}]]
[ 2020-06-11 10:59:35,760 ] [ DEBUG ] [ main:328 ] Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1365, in _do_call
return fn(*args)
File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1350, in _run_fn
target_list, run_metadata)
File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value metrics/accuracy/total
[[{{node _retval_metrics/accuracy/total_0_54}}]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo/front/tf/loader.py", line 220, in load_tf_graph_def
outputs)
File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/util/deprecation.py", line 324, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/framework/graph_util_impl.py", line 330, in convert_variables_to_constants
returned_variables = sess.run(variable_names)
File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 956, in run
run_metadata_ptr)
File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1180, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1359, in _do_run
run_metadata)
File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1384, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value metrics/accuracy/total
[[{{node _retval_metrics/accuracy/total_0_54}}]]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 288, in apply_transform
for_graph_and_each_sub_graph_recursively(graph, replacer.find_and_replace_pattern)
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo/middle/pattern_match.py", line 58, in for_graph_and_each_sub_graph_recursively
func(graph)
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/extensions/load/loader.py", line 27, in find_and_replace_pattern
self.load(graph)
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/extensions/load/tf/loader.py", line 58, in load
saved_model_tags=argv.saved_model_tags)
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo/front/tf/loader.py", line 231, in load_tf_graph_def
raise FrameworkError('Cannot load input model: {}', e) from e
mo.utils.error.FrameworkError: Cannot load input model: Attempting to use uninitialized value metrics/accuracy/total
[[{{node _retval_metrics/accuracy/total_0_54}}]]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo/main.py", line 312, in main
ret_code = driver(argv)
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo/main.py", line 273, in driver
ret_res = emit_ir(prepare_ir(argv), argv)
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo/main.py", line 238, in prepare_ir
graph = unified_pipeline(argv)
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo/pipeline/unified.py", line 29, in unified_pipeline
class_registration.ClassType.BACK_REPLACER
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 334, in apply_replacements
apply_replacements_list(graph, replacers_order)
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 324, in apply_replacements_list
num_transforms=len(replacers_order))
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo/utils/logger.py", line 124, in wrapper
function(*args, **kwargs)
File "/opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 306, in apply_transform
raise FrameworkError('{}'.format(str(err))) from err
mo.utils.error.FrameworkError: Cannot load input model: Attempting to use uninitialized value metrics/accuracy/total
[[{{node _retval_metrics/accuracy/total_0_54}}]]
The problem is that models trained in TensorFlow can have some shapes left undefined. In your case, it looks like the batch dimension of the input is not defined. To fix it, add an additional argument to the command line: -b 1. The option sets the batch size to 1 and should fix this particular issue.
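For instance, combining it with the command from the question:
python3 /opt/intel/openvino_2020.3.194/deployment_tools/model_optimizer/mo_tf.py --input_model model.pb -b 1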
After that, I guess, you may encounter other issues, so I would leave the following link: Converting a TensorFlow Model.
There are some tips there about how to convert a TensorFlow model to IR.