How to customize Actor policy in Actor-Learner setup - tensorflow

I am following https://github.com/tensorflow/agents/tree/master/tf_agents/experimental/distributed/examples/sac to implement an Actor-Learner setup for DDQN. It is not clear to me how to adapt the actor policies that get their policy variables from the Reverb variable container. I would like to use a different epsilon-greedy policy for each actor (similar to Ape-X, page 6).
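For reference, the Ape-X schedule I have in mind gives each of the N actors its own epsilon; a minimal sketch of the formula from page 6 of the paper (the function name and defaults are my own reading of it):

# Per-actor exploration as in Ape-X (Horgan et al. 2018, p. 6):
# epsilon_i = epsilon ** (1 + (i / (N - 1)) * alpha), epsilon = 0.4, alpha = 7
def apex_epsilon(i, num_actors, base_epsilon=0.4, alpha=7.0):
    # assumes num_actors > 1; actor 0 gets base_epsilon ** 1
    return base_epsilon ** (1.0 + (i / (num_actors - 1)) * alpha)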
Actors are differentiated by FLAGS.task in the example. I tried to pass an EpsilonGreedyPolicy as the actor.Actor policy:
collect_eps_policy = EpsilonGreedyPolicy(collect_policy, epsilon=1/FLAGS.task)
env_step_metric = py_metrics.EnvironmentSteps()
collect_actor = actor.Actor(
    collect_env,
    collect_eps_policy,
    train_step,
    steps_per_run=configs["collectors"]["num_steps_per_collect"],
    metrics=actor.collect_metrics(configs["collectors"]["num_steps_per_collect"]),
    summary_dir=summary_dir,
    observers=[rb_observer, env_step_metric])
but that resulted in the following error:
File "/x/lib/python3.10/site-packages/tf_agents/policies/greedy_policy.py", line 58, in __init__
emit_log_probability=policy.emit_log_probability,
File "/x/python3.10/site-packages/tf_agents/policies/py_tf_eager_policy.py", line 246, in __getattr__
return getattr(self._policy, name)
AttributeError: '_UserObject' object has no attribute 'emit_log_probability'
In call to configurable 'GreedyPolicy' (<class 'tf_agents.policies.greedy_policy.GreedyPolicy'>)
In call to configurable 'EpsilonGreedyPolicy' (<class 'tf_agents.policies.epsilon_greedy_policy.EpsilonGreedyPolicy'>)
In call to configurable 'collect' (<function collect at 0x2b33a2607490>)
Is there a simple way to implement what I want to do?
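One direction I am considering, in case it helps frame the question: doing the epsilon-greedy mixing at the Python-policy level instead of wrapping with the TF EpsilonGreedyPolicy, which would sidestep the emit_log_probability lookup entirely. A rough, untested sketch (the class and its logic are mine, not from the example):

import numpy as np
from tf_agents.policies import py_policy, random_py_policy

class PerActorEpsilonGreedy(py_policy.PyPolicy):
    # Acts randomly with probability epsilon, otherwise delegates to the
    # (greedy) policy loaded from the variable container / saved model.
    # Note: the random policy's PolicyStep.info may not match the wrapped
    # policy's info_spec; that would need handling for a real run.
    def __init__(self, wrapped_policy, epsilon):
        super().__init__(wrapped_policy.time_step_spec, wrapped_policy.action_spec)
        self._wrapped = wrapped_policy
        self._epsilon = epsilon
        self._random = random_py_policy.RandomPyPolicy(
            wrapped_policy.time_step_spec, wrapped_policy.action_spec)

    def _action(self, time_step, policy_state):
        if np.random.uniform() < self._epsilon:
            return self._random.action(time_step)
        return self._wrapped.action(time_step, policy_state)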

Related

Is there a way I can access the attribute in an Attribute Error without parsing the string?

My Python version is 3.6.
I am trying to give a more helpful message on attribute errors in a CLI framework that I am building. I have the following code:
print(cli_config.test_exension_config.input_menu)
Which produces the error AttributeError: 'CLIConfig' object has no attribute 'test_exension_config'
Perfect. However, now I want to give a recommendation for the closest attribute match, as the attributes are dynamically created from a YAML file:
test_extension:
  input_menu: # "InputMenuConfig_instantiation_test"
  var:
So the closest attribute match would be test_extension_config.
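For the matching itself I am using difflib's get_close_matches, which already behaves as desired. For example:

from difflib import get_close_matches

# 'test_exension_config' is the misspelled lookup from above
print(get_close_matches('test_exension_config', ['test_extension_config', 'var']))
# ['test_extension_config']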
Below I catch the error and prepare to give a recommendation:
from difflib import get_close_matches

def __getattribute__(self, name) -> Any:
    try:
        return super().__getattribute__(name)
    except AttributeError as ae:
        # chance to handle the attribute differently
        attr = get_erroring_attr(ae)
        closest_match = next(iter(get_close_matches(attr, list(vars(self).keys()))), None)
        if closest_match:  # probably will have some threshold based on 'edit distance'
            return closest_match
        # if not, re-raise the exception
        raise ae
I just want to retrieve the erroring attribute's name. I can parse the args of AttributeError, but I wanted to know if there is another way to access the actual attribute name that is erroring without parsing the message.
In other words, in the last code block I have a method get_erroring_attr(ae) that takes in the AttributeError.
What would be the cleanest definition of def get_erroring_attr(ae) that will return the erroring attribute?
UPDATE:
So I did this and it works. I would just like to remove parsing as much as possible.
import re
import sys
import traceback
from difflib import get_close_matches

def __getattribute__(self, name) -> Any:
    try:
        return super().__getattribute__(name)
    except AttributeError as ae:
        # chance to handle the attribute differently
        attr = self.get_erroring_attr(ae)
        closest_match = next((match for match in get_close_matches(attr, list(vars(self).keys()))), None)
        if closest_match:  # probably will have some threshold based on 'edit distance'
            traceback.print_exc()
            print(CLIColors.build_error_string(f"ERROR: Did you mean {CLIColors.build_value_string(closest_match)}?"))
            sys.exit()
        # if not, re-raise the exception
        raise ae

def get_erroring_attr(self, attr_error: AttributeError):
    message = attr_error.args[0]
    _, error_attr_name = self.parse_attr_error_message(message)
    return error_attr_name

def parse_attr_error_message(self, attr_err_msg: str):
    parsed_msg = re.findall("'([^']*)'", attr_err_msg)
    return parsed_msg
Which produces the printed traceback followed by the "Did you mean ...?" recommendation.
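For what it's worth, on Python 3.10+ no parsing is needed at all: AttributeError carries the failing name directly as ae.name, so a version-aware helper could look like this (a sketch; on 3.6, as here, the message parsing remains the only option):

import re
import sys

def get_erroring_attr(attr_error: AttributeError) -> str:
    # Python 3.10+ exposes the failing attribute name directly.
    if sys.version_info >= (3, 10):
        return attr_error.name
    # Older versions only carry the formatted message; the attribute
    # name is the last quoted token in it.
    return re.findall(r"'([^']*)'", attr_error.args[0])[-1]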

DataFrame definition is lazy evaluation

I am new to Spark and learning it. Can someone help with the question below?
The quote in Spark: The Definitive Guide regarding DataFrame definition is: "In general, Spark will fail only at job execution time rather than DataFrame definition time - even if, for example, we point to a file that does not exist. This is due to lazy evaluation."
So I guess spark.read.format().load() is the DataFrame definition. On top of this created DataFrame we apply transformations and actions; load is a read API and not a transformation, if I am not wrong.
I tried pointing load at a "file that does not exist", thinking this is DataFrame definition, but I got the error below. According to the book it should not fail, right? I am surely missing something. Can someone help with this?
df = spark.read.format('csv') \
    .option('header', 'true') \
    .option('inferschema', 'true') \
    .load('/spark_df_data/Spark-The-Definitive-Guide/data/retail-data/by-day/2011-12-19.csv')
Error
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/hdp/current/spark2-client/python/pyspark/sql/readwriter.py", line 166, in load
return self._df(self._jreader.load(path))
File "/usr/hdp/current/spark2-client/python/lib/py4j-0.10.6-src.zip/py4j/java_gateway.py", line 1160, in __call__
File "/usr/hdp/current/spark2-client/python/pyspark/sql/utils.py", line 69, in deco
raise AnalysisException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.AnalysisException: u'Path does not exist: /spark_df_data/Spark-The-Definitive-Guide/data/retail-data/by-day/2011-12-19.csv;'
Why is the DataFrame definition referring to Hadoop metadata when it is lazily evaluated?
Up to here, the DataFrame is defined and the reader object is instantiated:
scala> spark.read.format("csv").option("header", true).option("inferschema", true)
res2: org.apache.spark.sql.DataFrameReader = org.apache.spark.sql.DataFrameReader@7aead157
When you actually call load:
res2.load("/spark_df_data/Spark-The-Definitive-Guide/data/retail-data/by-day/2011-12-19.csv")
and the file doesn't exist, that is execution time (it has to check the data source and then load the data from the CSV).
To get a DataFrame, Spark checks Hadoop metadata, i.e. it asks HDFS whether this file exists or not.
If it doesn't, you get:
org.apache.spark.sql.AnalysisException: Path does not exist: hdfs://203-249-241:8020/spark_df_data/Spark-The-Definitive-Guide/data/retail-data/by-day/2011-12-19.csv
In general:
1) Definition time: the RDD/DataFrame lineage is created but nothing is executed.
2) Execution time: when load is executed.
Conclusion: any transformation (definition time, in your terms) will not be executed until an action is called (execution time, in your terms).
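To make the two "times" concrete, a minimal PySpark sketch (the path and column name here are hypothetical): load already touches the file system to resolve the path, which is exactly where the AnalysisException above fires, while where is purely lazy and count is the action that runs the job.

# load resolves the path eagerly (the AnalysisException above fires here)
df = spark.read.format('csv').option('header', 'true').load('/data/existing.csv')

# pure definition: builds the lineage, reads no data
filtered = df.where("UnitPrice > 1.0")

# action: triggers the actual Spark job
print(filtered.count())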
Spark uses lazy evaluation. However, that doesn't mean it can't verify whether a file exists while loading it.
Lazy evaluation happens on the DataFrame object, and in order to create the DataFrame object Spark first needs to check whether the file exists.
Check the following code.
@scala.annotation.varargs
def load(paths: String*): DataFrame = {
  if (source.toLowerCase(Locale.ROOT) == DDLUtils.HIVE_PROVIDER) {
    throw new AnalysisException("Hive data source can only be used with tables, you can not " +
      "read files of Hive data source directly.")
  }

  DataSource.lookupDataSourceV2(source, sparkSession.sessionState.conf).map { provider =>
    val catalogManager = sparkSession.sessionState.catalogManager
    val sessionOptions = DataSourceV2Utils.extractSessionConfigs(
      source = provider, conf = sparkSession.sessionState.conf)

    val pathsOption = if (paths.isEmpty) {
      None
    } else {
      val objectMapper = new ObjectMapper()
      Some("paths" -> objectMapper.writeValueAsString(paths.toArray))
    }

Using try except block in Odoo server action

I'm defining a server action in Odoo 10. Within this action I am trying to use the following code:
for data in datas:
    try:
        inventory_level = int(data[context['inventory_level_column']].strip())
    except TypeError:
        continue
However, I receive an error:
ValueError: <type 'exceptions.NameError'>: "name 'TypeError' is not defined"
Is it not possible to catch errors within the context of an Odoo server action? Why is TypeError not defined?
The code written in a server action is passed through the safe_eval method. There the __builtins__ are stripped and replaced, so the built-in exception classes (including TypeError) are removed, which is why you get the NameError.
You can check this behaviour in odoo/tools/safe_eval.py in the definition of the safe_eval method: see globals_dict['__builtins__'] = _BUILTINS, where _BUILTINS does not contain this exception.
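A quick way to verify this from a plain Python shell (assuming an Odoo 10 checkout on the path; the exact contents of _BUILTINS vary between Odoo versions):

from odoo.tools.safe_eval import _BUILTINS

print('TypeError' in _BUILTINS)   # expected: False, hence the NameError
print('Exception' in _BUILTINS)   # expected: True, so `except Exception` works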
Exception is the parent class of all exception classes, so if you want to catch an exception, just specify the topmost parent class in the except clause; e will contain the error message.
for data in datas:
    try:
        inventory_level = int(data[context['inventory_level_column']].strip())
    except Exception as e:
        print e
        pass

Trippinin api Keyerror

I need some help getting started with this Trippinin API. If you have worked with this API, it would be very nice of you to help me get started! I don't understand what I should write in for day in data[...]:
import requests
import json

r = requests.get("http://api.v1.trippinin.com/City/London/Eat?day=monday&time=morning&limit=10&offset=2&KEY=58ffb98334528b72937ce3390c0de2b7")
data = r.json()

for day in data['city Name']:
    print(day['city Name']['weekday'] + ":")
The error:
Traceback (most recent call last):
File "C:\Users\Nux\Desktop\Kurs3\test.py", line 7, in <module>
for day in data['city Name']:
KeyError: 'city Name'
The error KeyError: 'X' means you are trying to access the key X in a dictionary, but it doesn't exist. In your case you're trying to access data['city Name']. Apparently, the information in data does not have the key city Name. That means either a) you aren't getting any data back, or b) the data isn't in the format you expected. In both cases you can validate (or invalidate) your assumptions by printing out the value of data.
To help debug this issue, add the following immediately after you assign a value to data:
print(data)
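If data turns out not to contain what you expect, it is also worth checking the HTTP status code; this is generic requests usage, nothing specific to this API:

print(r.status_code)        # anything other than 200 explains an empty/error payload
print(list(data.keys()))    # shows which top-level keys the response really has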

ComponentLookupError on Dexterity types during testing

I have a custom product with a number of Dexterity types, several of which are used by a setuphandler to create the site structure. This works without any issues outside of testing, but within tests it keeps failing:
Traceback (most recent call last):
[snip]
File "/opt/ctcc_plone/src/ctcc.model/ctcc/model/setuphandlers.py", line 52, in setupStructure
random = createSiteFolder(portal, 'ctcc.model.servicefolder', 'Randomisation', 'random')
File "/opt/ctcc_plone/src/ctcc.model/ctcc/model/setuphandlers.py", line 35, in createSiteFolder
return createContentInContainer(context, type, title=title, id=id)
File "/opt/ctcc_plone/eggs/plone.dexterity-1.1-py2.7.egg/plone/dexterity/utils.py", line 166, in createContentInContainer
content = createContent(portal_type, **kw)
File "/opt/ctcc_plone/eggs/plone.dexterity-1.1-py2.7.egg/plone/dexterity/utils.py", line 112, in createContent
fti = getUtility(IDexterityFTI, name=portal_type)
File "/opt/ctcc_plone/eggs/zope.component-3.9.5-py2.7.egg/zope/component/_api.py", line 169, in getUtility
raise ComponentLookupError(interface, name)
ComponentLookupError: (<InterfaceClass plone.dexterity.interfaces.IDexterityFTI>, 'ctcc.model.servicefolder')
I'm ensuring the package's profile is imported during setup:
class CTCCModelSandboxLayer(PloneSandboxLayer):
    defaultBases = (PLONE_FIXTURE,)

    def setUpZope(self, app, configurationContext):
        import ctcc.model
        self.loadZCML(package=ctcc.model)

    def setUpPloneSite(self, portal):
        self.applyProfile(portal, 'ctcc.model:default')
While they're listed as install requirements in the package's setup.py, I've also tried an explicit applyProfile on plone.app.dexterity, as well as quickInstallProduct, but for some reason the Dexterity FTIs don't appear to be registered at the time they're called.
I'm using Plone 4.1, Dexterity 1.1, and plone.app.testing 4.2.
As suggested by Mikko, I moved the setuphandler configuration out of the product's ZCML and into a GenericSetup import_steps.xml instead, allowing an explicit dependency on typeinfo to be specified:
<?xml version="1.0"?>
<import-steps>
  <import-step
      id="ctcc-setup"
      title="Additional CTCC setup"
      handler="ctcc.model.setuphandlers.setupVarious"
      version="20120731">
    <dependency step="typeinfo" />
  </import-step>
</import-steps>
Tests now run instead of failing during the applyProfile stage, and the tests of the site structure show it's being set up as expected.
Thanks again!