When I run the train.py script from https://github.com/tensorflow/models/tree/master/official/nlp, I get a 403 permission error.
python3 official/nlp/train.py --tpu=con-bert1 --experiment=bert/pretraining --mode=train --model_dir=gs://con_bioberturk/general/ --config_file=gs://con_bioberturk/bert_base.yaml --config_file=gs://con_bioberturk/pretrain.yaml --params_override="task.init_checkpoint=gs://con_bioberturk/bert-base-turkish-cased-tf/model.ckpt"
and my output is below:
I1115 07:49:02.847452 139877506112576 train_utils.py:368] Saving experiment configuration to gs://con_bioberturk/general/params.yaml
Traceback (most recent call last):
File "/usr/share/tpu/models/official/modeling/hyperparams/params_dict.py", line 349, in save_params_dict_to_yaml
yaml.dump(params.as_dict(), f, default_flow_style=False)
File "/usr/local/lib/python3.8/dist-packages/yaml/__init__.py", line 290, in dump
return dump_all([data], stream, Dumper=Dumper, **kwds)
File "/usr/local/lib/python3.8/dist-packages/yaml/__init__.py", line 278, in dump_all
dumper.represent(data)
File "/usr/local/lib/python3.8/dist-packages/yaml/representer.py", line 28, in represent
self.serialize(node)
File "/usr/local/lib/python3.8/dist-packages/yaml/serializer.py", line 55, in serialize
self.emit(DocumentEndEvent(explicit=self.use_explicit_end))
File "/usr/local/lib/python3.8/dist-packages/yaml/emitter.py", line 115, in emit
self.state()
File "/usr/local/lib/python3.8/dist-packages/yaml/emitter.py", line 220, in expect_document_end
self.flush_stream()
File "/usr/local/lib/python3.8/dist-packages/yaml/emitter.py", line 790, in flush_stream
self.stream.flush()
File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/lib/io/file_io.py", line 219, in flush
self._writable_file.flush()
tensorflow.python.framework.errors_impl.PermissionDeniedError: Error executing an HTTP request: HTTP response code 403 with body '{
"error": {
"code": 403,
"message": "Access denied.",
"errors": [
{
"message": "Access denied.",
"domain": "global",
"reason": "forbidden"
}
]
}
}
when initiating an upload to gs://con_bioberturk/general/params.yaml
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "official/nlp/train.py", line 82, in <module>
app.run(main)
File "/usr/local/lib/python3.8/dist-packages/absl/app.py", line 308, in run
_run_main(main, args)
File "/usr/local/lib/python3.8/dist-packages/absl/app.py", line 254, in _run_main
sys.exit(main(argv))
File "official/nlp/train.py", line 47, in main
train_utils.serialize_config(params, model_dir)
File "/usr/share/tpu/models/official/core/train_utils.py", line 370, in serialize_config
hyperparams.save_params_dict_to_yaml(params, params_save_path)
File "/usr/share/tpu/models/official/modeling/hyperparams/params_dict.py", line 349, in save_params_dict_to_yaml
yaml.dump(params.as_dict(), f, default_flow_style=False)
File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/lib/io/file_io.py", line 197, in __exit__
self.close()
File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/lib/io/file_io.py", line 239, in close
self._writable_file.close()
tensorflow.python.framework.errors_impl.PermissionDeniedError: Error executing an HTTP request: HTTP response code 403 with body '{
"error": {
"code": 403,
"message": "Access denied.",
"errors": [
{
"message": "Access denied.",
"domain": "global",
"reason": "forbidden"
}
]
}
}
'
Here are my settings:
TPU VM name: con-bert1
TPU software version: tpu-vm-tf-2.10.0-pod
The Cloud Storage bucket (con_bioberturk) and the TPU VM are in the same location
It looks like you need to add the service account that is currently active on your TPU VM to the IAM policy of the GCS bucket. Instructions here: https://github.com/google-research/text-to-text-transfer-transformer/issues/1003
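For example, a rough sketch of what granting that access might look like (the service-account address below is a placeholder; use whatever gcloud auth list reports as the active account on your TPU VM):
# On the TPU VM: check which service account is active
gcloud auth list
# Grant that account read/write access on the bucket
gsutil iam ch serviceAccount:YOUR-TPU-SA@YOUR-PROJECT.iam.gserviceaccount.com:roles/storage.objectAdmin gs://con_bioberturk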
If that fails, try running gcloud auth login --update-adc on your TPU VM to add your credentials.
Hope this resolves your issue.
Related
I am trying to export all AWX data to a JSON file with the following command. The command is part of a GitLab CI/CD pipeline, so a self-hosted GitLab runner executes it. I tried running the same command on another machine and it works fine there; the Python version is the same on both machines.
awx --conf.host http://{AWX_URL} --conf.token {AWX_TOKEN} --conf.insecure export -k --job-template > job_template.json;
DEBUG:awxkit.api.pages.page:get_page: /api/v2/workflow_job_templates/
DEBUG:awxkit.api.pages.page:set_page: <class 'awxkit.api.pages.workflow_job_templates.WorkflowJobTemplates'> /api/v2/workflow_job_templates/
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/awxkit/cli/__init__.py", line 25, in run
cli.parse_resource()
File "/usr/lib/python3.9/site-packages/awxkit/cli/client.py", line 152, in parse_resource
self.resource = parse_resource(self, skip_deprecated=skip_deprecated)
File "/usr/lib/python3.9/site-packages/awxkit/cli/resource.py", line 220, in parse_resource
response = command.handle(client, parser)
File "/usr/lib/python3.9/site-packages/awxkit/cli/resource.py", line 179, in handle
data = client.v2.export_assets(**kwargs)
File "/usr/lib/python3.9/site-packages/awxkit/api/pages/api.py", line 201, in export_assets
endpoint = getattr(self, resource)
File "/usr/lib/python3.9/site-packages/awxkit/api/pages/page.py", line 115, in __getattr__
raise AttributeError("{!r} object has no attribute {!r}".format(self.__class__.__name__, name))
AttributeError: 'ApiV2' object has no attribute 'execution_environments'
Downgrade the awxkit version to 17.1.0. Newer awxkit releases try to export resource types (such as execution_environments) that an older AWX server does not expose, which is most likely what triggers the AttributeError here; pinning awxkit to a version that matches the server avoids it.
pip install awxkit==17.1.0
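If the export runs inside a GitLab CI job, pinning the version in that job keeps the runner consistent with the machine where it already works. A minimal sketch (the job name, image, and variable names are placeholders, not taken from the original pipeline):
export_awx:
  image: python:3.9
  script:
    - pip install awxkit==17.1.0
    - awx --conf.host http://$AWX_URL --conf.token $AWX_TOKEN --conf.insecure export -k --job-template > job_template.json
  artifacts:
    paths:
      - job_template.json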
When I try to run the following .yml role I get an error with nsupdate.
I am using CentOS 7 and the machine is running BIND.
When I run nsupdate manually against either the original DNS server or the Ansible master I can update the records; only when I use the nsupdate module does it fail. Any help is appreciated, thanks!
tasks/main.yml
This is the part with the relevant code
- name: Add or modify ansible.example.org A to 192.168.1.1
  community.general.nsupdate:
    server: "10.0.0.40"
    zone: "ben.com."
    record: "ansible"
    value: "192.168.1.1"
  when: ansible_eth1.ipv4.address == '10.0.0.40'
The error:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: SyntaxError: invalid syntax
fatal: [10.0.0.40]: FAILED! => {"changed": false, "module_stderr": "Shared connection to 10.0.0.40 closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/root/.ansible/tmp/ansible-tmp-1624977590.7-4712-16053022547656/AnsiballZ_nsupdate.py\", line 102, in <module>\r\n _ansiballz_main()\r\n File \"/root/.ansible/tmp/ansible-tmp-1624977590.7-4712-16053022547656/AnsiballZ_nsupdate.py\", line 94, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/root/.ansible/tmp/ansible-tmp-1624977590.7-4712-16053022547656/AnsiballZ_nsupdate.py\", line 40, in invoke_module\r\n runpy.run_module(mod_name='ansible_collections.community.general.plugins.modules.nsupdate', init_globals=None, run_name='__main__', alter_sys=True)\r\n File \"/usr/lib64/python2.7/runpy.py\", line 176, in run_module\r\n fname, loader, pkg_name)\r\n File \"/usr/lib64/python2.7/runpy.py\", line 82, in _run_module_code\r\n mod_name, mod_fname, mod_loader, pkg_name)\r\n File \"/usr/lib64/python2.7/runpy.py\", line 72, in _run_code\r\n exec code in run_globals\r\n File \"/tmp/ansible_community.general.nsupdate_payload_xAhaGd/ansible_community.general.nsupdate_payload.zip/ansible_collections/community/general/plugins/modules/nsupdate.py\", line 189, in <module>\r\n File \"build/bdist.linux-x86_64/egg/dns/update.py\", line 21, in <module>\r\n File \"/usr/lib/python2.7/site-packages/dnspython-2.1.1.dev77+gf61a939-py2.7.egg/dns/message.py\", line 201\r\n s.write(f';{name}\\n')\r\n ^\r\nSyntaxError: invalid syntax\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
The relevant lines in the traceback:
File "build/bdist.linux-x86_64/egg/dns/update.py", line 21, in <module>
File "/usr/lib/python2.7/site-packages/dnspython-2.1.1.dev77+gf61a939-py2.7.egg/dns/message.py", line 201
s.write(f';{name}\n')
^
SyntaxError: invalid syntax
The problem was that it was probably running on the wrong interpreter on the host machine, like @Patrick mentioned.
I fixed it by adding host group vars like so:
[DNS_Master:vars]
ansible_python_interpreter=/usr/local/bin/python3.9
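For context, a minimal sketch of how that can look in the inventory file (the group and host entries are assumptions based on the host in the error output, not the original inventory):
[DNS_Master]
10.0.0.40

[DNS_Master:vars]
# Use a Python 3 interpreter so dnspython 2.x (which uses f-strings) can be imported
ansible_python_interpreter=/usr/local/bin/python3.9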
How to use Postman to test Odoo 14.0 controller methods that require authentication?
I used to have a simple request for authentication:
url: http://localhost:8014/web/session/authenticate
method: GET
headers: Content-Type: application/json
body:
{
    "jsonrpc": "2.0",
    "params": {
        "db": "v14pos",
        "login": "admin",
        "password": "admin"
    }
}
After sending the authentication request, Postman will set the session_id cookie, and it will work.
But in 14.0, even though the session_id cookie is set, I get the following error when trying to call a URL that requires authentication:
{
    "jsonrpc": "2.0",
    "id": null,
    "error": {
        "code": 200,
        "message": "Odoo Server Error",
        "data": {
            "name": "odoo.exceptions.AccessDenied",
            "debug": "Traceback (most recent call last):\n File \"/home/obi/src/vs/odoo14/addons/http_routing/models/ir_http.py\", line 450, in _dispatch\n cls._authenticate(func)\n File \"/home/obi/src/vs/odoo14/odoo/addons/base/models/ir_http.py\", line 132, in _authenticate\n raise AccessDenied()\nException\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \"/home/obi/src/vs/odoo14/odoo/http.py\", line 639, in _handle_exception\n return super(JsonRequest, self)._handle_exception(exception)\n File \"/home/obi/src/vs/odoo14/odoo/http.py\", line 315, in _handle_exception\n raise exception.with_traceback(None) from new_cause\nodoo.exceptions.AccessDenied: Access Denied\n",
            "message": "Access Denied",
            "arguments": [
                "Access Denied"
            ],
            "context": {}
        }
    }
}
This worked for me for version 11.0.
I noticed that the HTTP header in 14.0 includes the cookie in a different way:
Cookie: TWISTED_SESSION=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1c2VyX2luZm8iOnsiYW5vbnltb3VzIjp0cnVlfSwiZXhwIjoxNjAzNjM0NDM5fQ.pJs2oOjQYOQrFnolafUlNZ4Bg4OMJ_itRaZPEUoaLeE; frontend_lang=en_US; fileToken=dummy-because-api-expects-one; tz=Africa/Khartoum; session_id=d36df662e749f368c32dcbecc07bf578dd57de8a
What is TWISTED_SESSION? Is it causing the problem?
I found the solution, or rather the problem.
I had set the wrong value for auth in the controller method; it was:
@http.route('/route/', auth='auth', type='json')
And I changed it to:
@http.route('/route/', auth='user', type='json')
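For reference, a minimal sketch of a controller using the corrected decorator (the route path, method name, and return value are illustrative, not the original code):
from odoo import http
from odoo.http import request

class MyController(http.Controller):

    # auth='user' requires an authenticated session; Postman supplies it via the
    # session_id cookie set by /web/session/authenticate.
    @http.route('/route/', auth='user', type='json')
    def my_endpoint(self, **kwargs):
        return {'uid': request.env.uid}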
It seems that jsonschema version 3.0.1 does not accept a multi-stage schema using $refs (while it works with jsonschema version 2.6.0).
I have to make it work under several module versions simply because my code will be running on different computers with different environments.
I verified my jsons on https://www.jsonschemavalidator.net/ (thanks for this link found in another StackOverflow question).
I tried:
jsonschema -i myjson.json noRefs.schema.json --> 2.6.0 = OK, 3.0.1 OK
jsonschema -i myjson.json usingRefs.schema.json --> 2.6.0 = OK, 3.0.1 KO
Note:
Both *.schema.json worked on https://www.jsonschemavalidator.net/
File myjson.json :
{
    "TopProperty" : {
        "LowerProperty" : {"toto" : "plop"}
    }
}
File noRefs.schema.json :
{
    "type": "object",
    "properties": {
        "TopProperty": {"$ref": "#/schemaTopProperty"}
    },
    "schemaTopProperty": {
        "$id": "schemaTopProperty",
        "type": "object",
        "properties": {
            "LowerProperty": {
                "type": "object",
                "properties": {
                    "toto": {"type": "string"}
                }
            }
        }
    }
}
File usingRefs.schema.json :
{
    "type": "object",
    "properties": {
        "TopProperty": {"$ref": "#/schemaTopProperty"}
    },
    "schemaTopProperty": {
        "$id": "schemaTopProperty",
        "type": "object",
        "properties": {
            "LowerProperty": {
                "type": "object",
                "properties": {
                    "toto": {"$ref": "#/justAString"}
                }
            }
        }
    },
    "justAString": {
        "$id": "justAString",
        "type": "string"
    }
}
Error message received:
Traceback (most recent call last):
File "/usr/bin/jsonschema", line 11, in <module>
sys.exit(main())
File "/usr/lib/python2.7/site-packages/jsonschema/cli.py", line 67, in main
sys.exit(run(arguments=parse_args(args=args)))
File "/usr/lib/python2.7/site-packages/jsonschema/cli.py", line 78, in run
for error in validator.iter_errors(instance):
File "/usr/lib/python2.7/site-packages/jsonschema/validators.py", line 323, in iter_errors
for error in errors:
File "/usr/lib/python2.7/site-packages/jsonschema/_validators.py", line 274, in properties
schema_path=property,
File "/usr/lib/python2.7/site-packages/jsonschema/validators.py", line 339, in descend
for error in self.iter_errors(instance, schema):
File "/usr/lib/python2.7/site-packages/jsonschema/validators.py", line 323, in iter_errors
for error in errors:
File "/usr/lib/python2.7/site-packages/jsonschema/_validators.py", line 251, in ref
for error in validator.descend(instance, resolved):
File "/usr/lib/python2.7/site-packages/jsonschema/validators.py", line 339, in descend
for error in self.iter_errors(instance, schema):
File "/usr/lib/python2.7/site-packages/jsonschema/validators.py", line 323, in iter_errors
for error in errors:
File "/usr/lib/python2.7/site-packages/jsonschema/_validators.py", line 274, in properties
schema_path=property,
File "/usr/lib/python2.7/site-packages/jsonschema/validators.py", line 339, in descend
for error in self.iter_errors(instance, schema):
File "/usr/lib/python2.7/site-packages/jsonschema/validators.py", line 323, in iter_errors
for error in errors:
File "/usr/lib/python2.7/site-packages/jsonschema/_validators.py", line 73, in items
for error in validator.descend(item, items, path=index):
File "/usr/lib/python2.7/site-packages/jsonschema/validators.py", line 339, in descend
for error in self.iter_errors(instance, schema):
File "/usr/lib/python2.7/site-packages/jsonschema/validators.py", line 323, in iter_errors
for error in errors:
File "/usr/lib/python2.7/site-packages/jsonschema/_validators.py", line 274, in properties
schema_path=property,
File "/usr/lib/python2.7/site-packages/jsonschema/validators.py", line 339, in descend
for error in self.iter_errors(instance, schema):
File "/usr/lib/python2.7/site-packages/jsonschema/validators.py", line 323, in iter_errors
for error in errors:
File "/usr/lib/python2.7/site-packages/jsonschema/_validators.py", line 247, in ref
scope, resolved = validator.resolver.resolve(ref)
File "/usr/lib/python2.7/site-packages/jsonschema/validators.py", line 734, in resolve
return url, self._remote_cache(url)
File "/usr/lib/python2.7/site-packages/functools32/functools32.py", line 400, in wrapper
result = user_function(*args, **kwds)
File "/usr/lib/python2.7/site-packages/jsonschema/validators.py", line 744, in resolve_from_url
raise exceptions.RefResolutionError(exc)
jsonschema.exceptions.RefResolutionError: unknown url type: schemaTopProperty
Edit: my previous answer was incorrect.
TL;DR: You have two options:
Remove the $id properties from the definitions
Use #/ in the $id properties (Example: {"$id": "#/justAString"})
Details:
The issue is with the IDs. Up until draft-04, $ref and $id were treated at face value, nothing special, but starting with draft-06 they are URI references. So when descending into {"$id": "schemaTopProperty"}, resolving {"$ref": "#/justAString"} no longer looks for a fragment justAString at the root of the schema, but for /justAString under the schemaTopProperty host, which is a remote reference.
Hence my two solutions: either remove the $ids, which cause the definitions to be treated as URLs (hosts, in fact), or define the $ids as what they really are, fragments in the current schema.
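To illustrate the second option, here is what usingRefs.schema.json could look like with fragment-style $id values (a sketch of the fix described above, not taken from the original question):
{
    "type": "object",
    "properties": {
        "TopProperty": {"$ref": "#/schemaTopProperty"}
    },
    "schemaTopProperty": {
        "$id": "#/schemaTopProperty",
        "type": "object",
        "properties": {
            "LowerProperty": {
                "type": "object",
                "properties": {
                    "toto": {"$ref": "#/justAString"}
                }
            }
        }
    },
    "justAString": {
        "$id": "#/justAString",
        "type": "string"
    }
}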
I am new to scrapyd.
I have inserted the configuration below into my scrapy.cfg file.
[settings]
default = uk.settings
[deploy:scrapyd]
url = http://localhost:6800/
project=ukmall
[deploy:scrapyd2]
url = http://scrapyd.mydomain.com/api/scrapyd/
username = john
password = secret
If I run the command below:
$ scrapyd-deploy -l
I get:
scrapyd2 http://scrapyd.mydomain.com/api/scrapyd/
scrapyd http://localhost:6800/
To see all available projects:
scrapyd-deploy -L scrapyd
But it shows nothing on my machine.
Ref: http://scrapyd.readthedocs.org/en/latest/deploy.html#deploying-a-project
If I do:
$ scrapy deploy scrapyd2
anandhakumar@MMTPC104:~/ScrapyProject/mall_uk$ scrapy deploy scrapyd2
Packing version 1412322816
Traceback (most recent call last):
File "/usr/bin/scrapy", line 4, in <module>
execute()
File "/usr/lib/pymodules/python2.7/scrapy/cmdline.py", line 142, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "/usr/lib/pymodules/python2.7/scrapy/cmdline.py", line 88, in _run_print_help
func(*a, **kw)
File "/usr/lib/pymodules/python2.7/scrapy/cmdline.py", line 149, in _run_command
cmd.run(args, opts)
File "/usr/lib/pymodules/python2.7/scrapy/commands/deploy.py", line 103, in run
egg, tmpdir = _build_egg()
File "/usr/lib/pymodules/python2.7/scrapy/commands/deploy.py", line 228, in _build_egg
retry_on_eintr(check_call, [sys.executable, 'setup.py', 'clean', '-a', 'bdist_egg', '-d', d], stdout=o, stderr=e)
File "/usr/lib/pymodules/python2.7/scrapy/utils/python.py", line 276, in retry_on_eintr
return function(*args, **kw)
File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python', 'setup.py', 'clean', '-a', 'bdist_egg', '-d', '/tmp/scrapydeploy-VLM6W7']' returned non-zero exit status 1
anandhakumar@MMTPC104:~/ScrapyProject/mall_uk$
If I do this for another project, it shows:
$ scrapy deploy scrapyd
Packing version 1412325181
Deploying to project "project2" in http://localhost:6800/addversion.json
Server response (200):
{"status": "error", "message": "[Errno 13] Permission denied: 'eggs'"}
You'll only be able to list the spiders that have been deployed. If you haven't deployed anything yet, then to deploy your spider you simply use scrapy deploy:
scrapy deploy [ <target:project> | -l <target> | -L ]
vagrant@portia:~/takeovertheworld$ scrapy deploy scrapyd2
Packing version 1410145736
Deploying to project "takeovertheworld" in http://ec2-xx-xxx-xx-xxx.compute-1.amazonaws.com:6800/addversion.json
Server response (200):
{"status": "ok", "project": "takeovertheworld", "version": "1410145736", "spiders": 1}
Verify that the project was installed correctly by accessing the scrapyd API:
vagrant@portia:~/takeovertheworld$ curl http://ec2-xx-xxx-xx-xxx.compute-1.amazonaws.com:6800/listprojects.json
{"status": "ok", "projects": ["takeovertheworld"]}
I had the same error too. As @hugsbrugs said, it was because a folder inside the scrapy project had root rights. So, I did this:
sudo scrapy deploy scrapyd2
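Alternatively, instead of deploying with sudo, you could fix the ownership of the offending folder so the egg can be built as your normal user (a sketch; run it from the directory that contains scrapy.cfg and adjust the user/group to your own account):
# Reclaim ownership of the project tree, then deploy without sudo
sudo chown -R $USER:$USER .
scrapy deploy scrapyd2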