I have an Odoo 14 app running on a GKE cluster. Multiprocessing mode is enabled by setting workers = 2. I chose the default GKE ingress to route requests that match /longpolling/* to the longpolling port and everything else to the normal HTTP port.
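For reference, the relevant part of odoo.conf looks roughly like this (a sketch using the standard Odoo 14 options; the actual file isn't shown here, and proxy_mode is an assumption since the app sits behind a load balancer):

[options]
# workers > 0 enables multiprocessing; longpolling is then served by a separate gevent process
workers = 2
http_port = 8069
longpolling_port = 8072
# assumed: tells Odoo it is running behind a proxy / load balancer
proxy_mode = True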
Here's how I configure my GKE ingress.
ingress.tf
resource "kubernetes_ingress" "ingress_service" {
metadata {
name = "ingress-service"
annotations = {
"networking.gke.io/managed-certificates": "subdomain-company-com"
"networking.gke.io/v1beta1.FrontendConfig": "frontend-config" # for https redirection
}
}
spec {
rule {
host = "dev.company.com"
http {
path {
path = "/*"
backend {
service_name = kubernetes_service.core_service.metadata.0.name
service_port = kubernetes_service.core_service.spec.0.port.0.port
}
}
}
}
rule {
host = "dev.company.com"
http {
path {
path = "/longpolling/*"
backend {
service_name = kubernetes_service.core_service.metadata.0.name
service_port = kubernetes_service.core_service.spec.0.port.1.port
}
}
}
}
}
wait_for_load_balancer = true
}
The Kubernetes Service looks like this.
resource "kubernetes_service" "core_service" {
metadata {
name = "core-service"
annotations = {
"cloud.google.com/backend-config" = jsonencode({
"ports" = {
"longpolling" = "long-polling-be-config"
}
})
}
}
spec {
type = "NodePort"
selector = {
app = "core"
}
port {
name = "normal"
port = 8080
protocol = "TCP"
target_port = "8069"
}
port {
name = "longpolling"
port = 8081
protocol = "TCP"
target_port = "8072"
}
}
}
The custom backend config long-polling-be-config is as follows.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: long-polling-be-config
spec:
  healthCheck:
    checkIntervalSec: 30
    healthyThreshold: 5
    type: HTTP
    requestPath: /web/database/selector
    port: 8081
The health check for port 8069 is defined in the Kubernetes Deployment as follows.
readiness_probe {
  http_get {
    path = "/web/database/selector"
    port = "8069"
  }
  initial_delay_seconds = 15
  period_seconds = 30
}
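Backend health can be confirmed with standard kubectl commands, for example:

kubectl describe ingress ingress-service
# on a GKE ingress, the ingress.kubernetes.io/backends annotation lists each backend service and whether it is HEALTHY or UNHEALTHY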
The backend is healthy, but I am still seeing this error once every minute or so:
2021-07-10 12:41:24,014 15 INFO odoo werkzeug: 10.2.0.1 - - [10/Jul/2021 12:41:24] "POST /longpolling/poll HTTP/1.1" 200 - 1 0.001 0.010
2021-07-10 12:41:24,739 15 INFO ? werkzeug: 10.2.0.1 - - [10/Jul/2021 12:41:24] "GET /web/database/selector HTTP/1.1" 200 - 6 0.008 0.102
2021-07-10 12:41:29,106 15 INFO ? werkzeug: 10.2.0.1 - - [10/Jul/2021 12:41:29] "GET /web/database/selector HTTP/1.1" 200 - 6 0.009 0.076
2021-07-10 12:41:54,054 14 ERROR odoo odoo.http: Exception during JSON request handling.
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/odoo/addons/base/models/ir_http.py", line 237, in _dispatch
result = request.dispatch()
File "/usr/lib/python3/dist-packages/odoo/http.py", line 683, in dispatch
result = self._call_function(**self.params)
File "/usr/lib/python3/dist-packages/odoo/http.py", line 359, in _call_function
return checked_call(self.db, *args, **kwargs)
File "/usr/lib/python3/dist-packages/odoo/service/model.py", line 94, in wrapper
return f(dbname, *args, **kwargs)
File "/usr/lib/python3/dist-packages/odoo/http.py", line 347, in checked_call
result = self.endpoint(*a, **kw)
File "/usr/lib/python3/dist-packages/odoo/http.py", line 912, in __call__
return self.method(*args, **kw)
File "/usr/lib/python3/dist-packages/odoo/http.py", line 531, in response_wrap
response = f(*args, **kw)
File "/usr/lib/python3/dist-packages/odoo/addons/bus/controllers/main.py", line 35, in poll
raise Exception("bus.Bus unavailable")
Exception
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/odoo/http.py", line 639, in _handle_exception
return super(JsonRequest, self)._handle_exception(exception)
File "/usr/lib/python3/dist-packages/odoo/http.py", line 315, in _handle_exception
raise exception.with_traceback(None) from new_cause
Exception: bus.Bus unavailable
I followed the solution from this question, which uses an nginx reverse proxy. How can I replicate that with a GKE ingress?
Related
I am using the Gramex Redis cache service, but I sometimes see the error below. It happens randomly and is thrown in the Gramex console. Could you please help?
Here is the error log from the Gramex console:
E 19-Jul 07:23:59 gramex:cache 8000 gramex.cache.open: <class 'gramex.services.rediscache.RedisCache'> cannot cache <tornado.template.Template object at 0x7f5cc43c1290>
Traceback (most recent call last):
File "/home/star/conda/lib/python3.7/site-packages/tornado/web.py", line 1704, in _execute
result = await result
File "/home/star/conda/lib/python3.7/site-packages/tornado/gen.py", line 769, in run
yielded = self.gen.throw(*exc_info) # type: ignore
File "/home/star/conda/lib/python3.7/site-packages/gramex/handlers/filehandler.py", line 188, in get
yield self._get_path(self.root)
File "/home/star/conda/lib/python3.7/site-packages/tornado/gen.py", line 762, in run
value = future.result()
File "/home/star/conda/lib/python3.7/site-packages/tornado/gen.py", line 234, in wrapper
yielded = ctx_run(next, result)
File "/home/star/conda/lib/python3.7/site-packages/gramex/handlers/filehandler.py", line 244, in _get_path
raise HTTPError(FORBIDDEN, f'{self.file} not allowed')
tornado.web.HTTPError: HTTP 403: Forbidden (/home/star/conda/lib/python3.7/site-packages/gramex/favicon.ico not allowed)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/star/conda/lib/python3.7/site-packages/gramex/cache.py", line 168, in open
_cache[key] = cached
File "/home/star/conda/lib/python3.7/site-packages/gramex/services/rediscache.py", line 58, in __setitem__
value = pickle.dumps(value, pickle.HIGHEST_PROTOCOL)
TypeError: can't pickle _thread.RLock objects
Here is my gramex.yaml configuration:
cache:
  memory:
    default: false
  redis:
    type: redis
    # path: $REDIS_CACHE_HOST:$REDIS_CACHE_PORT:$REDIS_CACHE_DB
    path: $REDIS_CACHE_HOST:$REDIS_CACHE_PORT:$REDIS_CACHE_DB:password=$REDIS_CACHE_PASSWORD
    size: 0          # GB cache
    default: true
app:
  session:
    type: redis      # Persistent multi-instance data store
    path: $REDIS_HOST:$REDIS_PORT:0:password=$REDIS_PASSWORD  # Redis server
    expiry: 10       # Session cookies expiry in days
    purge: 86400     # Delete old sessions periodically (in seconds)
    domain: .mydomain.com
url:
  show_daypart_data:
    pattern: /$YAMLURL/show_daypart_data
    handler: FormHandler
    kwargs:
      cors: true
      methods: $http_methods
      headers:
        $request_headers
      frm_30:
        url: $BigQ_CONN
        credentials_path: $CREDENTIAL_PATH
        state: my_utilities.cache_query(handler, '30 mi')
        queryfunction: star.get_show_daypart_query(handler, 'frm_period')
        modify: star.modify_show_daypart_data(handler, data)
      default:
        _limit: 450000
      error: *API_ERROR
Here is the "state" python function -
def cache_query(handler, table):
    '''Runs a cache validation query'''
    args = handler.argparse(
        src={'default': 'any'},
        view={'default': 'net'})
    state = ''
    for tbl in table.split('+'):
        table_name = rds_tables[args.src][tbl]
        query = f"""SELECT CONCAT(year, '-', week) as week FROM {table_name}
            ORDER BY year DESC, week DESC limit 1"""
        try:
            val = REDIS_CACHE[str(query)]
            if not val:
                df = gramex.cache.query(query, db_engine)
                val = df.week.iloc[0]
                REDIS_CACHE[str(query)] = val
            state = '-' + val
        except Exception as error:
            app_log.error(f'Cache query failed ==> {error}')
            pass
    return state
This is an error in Gramex as of Jul 2022 (Release 1.81). It happens because the Gramex BaseHandler loads an error template with gramex.cache.open, but the Redis cache does not support caching such objects. So the custom error does not get displayed; instead, it reports the TypeError above.
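A hypothetical minimal reproduction of the underlying limitation (not Gramex code): any object whose attributes include a lock cannot be pickled, so it cannot be stored in a Redis-backed cache.

import pickle
import threading

class TemplateLike:
    # stand-in for a loaded template object that holds a lock internally
    def __init__(self):
        self.lock = threading.RLock()

pickle.dumps(TemplateLike())  # TypeError: cannot pickle '_thread.RLock' object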
Short of avoiding the Redis Cache, there's no workaround for this right now, unfortunately. We plan to fix this in Gramex.
I'm trying to connect to AWS ElastiCache (Redis in cluster mode) with TLS enabled. The library versions and Django cache settings are below.
Dependencies:
redis==3.0.0
redis-py-cluster==2.0.0
django-redis==4.11.0
Settings:
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': "redis://xxxxxxx.mc-redis-cache-v2.zzzzz.usw2.cache.amazonaws.com:6379/0",
        'OPTIONS': {
            'PASSWORD': '<password>',
            'REDIS_CLIENT_CLASS': 'rediscluster.RedisCluster',
            'CONNECTION_POOL_CLASS': 'rediscluster.connection.ClusterConnectionPool',
            'CONNECTION_POOL_KWARGS': {
                'skip_full_coverage_check': True,
                "ssl_cert_reqs": False,
                "ssl": True
            }
        }
    }
}
It doesn't seem to be a problem with the client class (provided by redis-py-cluster), since I'm able to access the cluster directly:
from rediscluster import RedisCluster
startup_nodes = [{"host": "redis://xxxxxxx.mc-redis-cache-v2.zzzzz.usw2.cache.amazonaws.com", "port": "6379"}]
rc = RedisCluster(startup_nodes=startup_nodes, ssl=True, ssl_cert_reqs=False, decode_responses=True, skip_full_coverage_check=True, password='<password>')
rc.set("foo", "bar")
rc.get('foo')
'bar'
But I'm seeing this error when the Django service tries to access the cache. Is there any configuration detail that I might be missing?
File "/usr/lib/python3.6/site-packages/django_redis/cache.py", line 32, in _decorator
return method(self, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/django_redis/cache.py", line 81, in get
client=client)
File "/usr/lib/python3.6/site-packages/django_redis/client/default.py", line 194, in get
client = self.get_client(write=False)
File "/usr/lib/python3.6/site-packages/django_redis/client/default.py", line 90, in get_client
self._clients[index] = self.connect(index)
File "/usr/lib/python3.6/site-packages/django_redis/client/default.py", line 103, in connect
return self.connection_factory.connect(self._server[index])
File "/usr/lib/python3.6/site-packages/django_redis/pool.py", line 64, in connect
connection = self.get_connection(params)
File "/usr/lib/python3.6/site-packages/django_redis/pool.py", line 75, in get_connection
pool = self.get_or_create_connection_pool(params)
File "/usr/lib/python3.6/site-packages/django_redis/pool.py", line 94, in get_or_create_connection_pool
self._pools[key] = self.get_connection_pool(params)
File "/usr/lib/python3.6/site-packages/django_redis/pool.py", line 107, in get_connection_pool
pool = self.pool_cls.from_url(**cp_params)
File "/usr/lib/python3.6/site-packages/redis/connection.py", line 916, in from_url
return cls(**kwargs)
File "/usr/lib/python3.6/site-packages/rediscluster/connection.py", line 146, in __init__
self.nodes.initialize()
File "/usr/lib/python3.6/site-packages/rediscluster/nodemanager.py", line 172, in initialize
raise RedisClusterException("ERROR sending 'cluster slots' command to redis server: {0}".format(node))
rediscluster.exceptions.RedisClusterException: ERROR sending 'cluster slots' command to redis server: {'host': 'xxxxxxx.mc-redis-cache-v2.zzzzz.usw2.cache.amazonaws.com', 'port': '6379'}
I also tried passing "ssl_ca_certs": "/etc/ssl/certs/ca-certificates.crt" in CONNECTION_POOL_KWARGS and setting the LOCATION scheme to rediss://, but still no luck.
You need to change ssl_cert_reqs=False to ssl_cert_reqs=None.
Here's the link to the redis-py repository that documents this:
https://github.com/andymccurdy/redis-py#ssl-connections
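For clarity, only the ssl_cert_reqs value inside the question's CONNECTION_POOL_KWARGS needs to change; roughly:

'CONNECTION_POOL_KWARGS': {
    'skip_full_coverage_check': True,
    # None (rather than False) is how redis-py expects certificate verification to be disabled
    'ssl_cert_reqs': None,
    'ssl': True
}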
I am using requests to fetch and parse some data scraped with Scrapy via ScrapyRT (real-time scraping).
This is how I do it:
# pass spider to requests parameters
params = {
    'spider_name': spider,
    'start_requests': True
}

# scrape items
response = requests.get('http://scrapyrt:9080/crawl.json', params)
print('RESPONSE JSON', response.json())
data = response.json()
As per the ScrapyRT documentation, with the start_requests parameter set to True, the spider automatically requests its start URLs and passes the responses to the parse method, which is the default callback for parsing responses.
start_requests
type: boolean
optional
Whether spider should execute Scrapy.Spider.start_requests method. start_requests are executed by default when you run Scrapy Spider normally without ScrapyRT, but this method is NOT executed in API by default. By default we assume that spider is expected to crawl ONLY url provided in parameters without making any requests to start_urls defined in Spider class. start_requests argument overrides this behavior. If this argument is present API will execute start_requests Spider method.
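For context, the spider being called is roughly shaped like this (a hypothetical sketch; only the spider name comes from the log below, the URL and fields are placeholders):

import scrapy

class PreciousTracksSpider(scrapy.Spider):
    name = 'precious_tracks'
    # placeholder start URL; with start_requests=true ScrapyRT runs start_requests()
    start_urls = ['https://example.com/tracks']

    def parse(self, response):
        # default callback; items yielded here come back in ScrapyRT's JSON response
        yield {'title': response.css('title::text').get()}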
But the setup is not working. Log:
[2019-05-19 06:11:14,835: DEBUG/ForkPoolWorker-4] Starting new HTTP connection (1): scrapyrt:9080
[2019-05-19 06:11:15,414: DEBUG/ForkPoolWorker-4] http://scrapyrt:9080 "GET /crawl.json?spider_name=precious_tracks&start_requests=True HTTP/1.1" 500 7784
[2019-05-19 06:11:15,472: ERROR/ForkPoolWorker-4] Task project.api.routes.background.scrape_allmusic[87dbd825-dc1c-4789-8ee0-4151e5821798] raised unexpected: JSONDecodeError('Expecting value: line 1 column 1 (char 0)',)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/celery/app/trace.py", line 382, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/celery/app/trace.py", line 641, in __protected_call__
return self.run(*args, **kwargs)
File "/usr/src/app/project/api/routes/background.py", line 908, in scrape_allmusic
print ('RESPONSE JSON',response.json())
File "/usr/lib/python3.6/site-packages/requests/models.py", line 897, in json
return complexjson.loads(self.text, **kwargs)
File "/usr/lib/python3.6/json/__init__.py", line 354, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.6/json/decoder.py", line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.6/json/decoder.py", line 357, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
The error was due to a bug in Twisted 19.2.0, a ScrapyRT dependency, which assumed the response to be of the wrong type.
Once I installed Twisted==18.9.0, it worked.
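In other words, pinning the dependency was enough:

pip install Twisted==18.9.0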
I am trying to automate the deployment of a RabbitMQ processing chain with Ansible.
The RabbitMQ user setup below works perfectly:
rabbitmq_user:
  user: admin
  password: supersecret
  read_priv: .*
  write_priv: .*
  configure_priv: .*
But the queue setup below crashes:
rabbitmq_queue:
  name: feedfiles
  login_host: 127.0.0.1
  login_user: admin
  login_password: admin
  login_port: 5672
The crash log looks like this:
{"changed": false, "module_stderr": "Shared connection to 127.0.0.1 closed.
", "module_stdout": "
Traceback (most recent call last):
File "/tmp/ansible_i8T24e/ansible_module_rabbitmq_queue.py", line 285, in <module>
main()
File "/tmp/ansible_i8T24e/ansible_module_rabbitmq_queue.py", line 178, in main
r = requests.get(url, auth=(module.params['login_user'], module.params['login_password']))
File "/usr/lib/python2.7/dist-packages/requests/api.py", line 70, in get
return request('get', url, params=params, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/api.py", line 56, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 488, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 609, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 487,in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=15672): Max retries exceeded with url: /api/queues/%2F/feedfiles (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fcfdaf8e7d0>: Failed to establish a new connection: [Errno 111] Connection refused',))
", "msg": "MODULE FAILURE", "rc": 1}
I have voluntarily removed the default guest/guest setup, which is why I am using the admin credentials.
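For context, the guest user was removed with a task along these lines (a sketch, not the exact task from my playbook):

rabbitmq_user:
  user: guest
  state: absent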
Any idea where the issue could come from?
EDIT:
Setting the "administrator" tag on the admin user doesn't help.
UPDATE: I am now running this command:
scrapyd-deploy <project_name>
And getting this error:
504 Connect to localhost:8123 failed: General SOCKS server failure
I am trying to deploy my Scrapy spider with scrapyd-deploy. The following is the command I use:
scrapyd-deploy -L <project_name>
I get the following error message:
Traceback (most recent call last):
File "/usr/local/bin/scrapyd-deploy", line 269, in <module>
main()
File "/usr/local/bin/scrapyd-deploy", line 74, in main
f = urllib2.urlopen(req)
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 410, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 523, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 448, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 382, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 531, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 404: Not found
The following is my scrapy.cfg file:
[settings]
default = <project_name>.settings
[deploy:<project_name>]
url = http://localhost:8123
project = <project_name>
eggs_dir = eggs
logs_dir = logs
items_dir = items
jobs_to_keep = 5
dbs_dir = dbs
max_proc = 0
max_proc_per_cpu = 4
finished_to_keep = 100
poll_interval = 5
http_port = 8123
debug = on
runner = scrapyd.runner
application = scrapyd.app.application
launcher = scrapyd.launcher.Launcher
[services]
schedule.json = scrapyd.webservice.Schedule
cancel.json = scrapyd.webservice.Cancel
addversion.json = scrapyd.webservice.AddVersion
listprojects.json = scrapyd.webservice.ListProjects
listversions.json = scrapyd.webservice.ListVersions
listspiders.json = scrapyd.webservice.ListSpiders
delproject.json = scrapyd.webservice.DeleteProject
delversion.json = scrapyd.webservice.DeleteVersion
listjobs.json = scrapyd.webservice.ListJobs
I am running Tor and Polipo, with the Polipo proxy on http://localhost:8123. I can perform a wget and download that page without any problems. The proxy is working correctly, I can connect to the internet, and so on. Please ask if you need more clarification.
Thanks!
urllib2.HTTPError: HTTP Error 404: Not found
The URL is not being reached.
Anything interesting in /var/log/polipo/polipo.log? What does tail -100 /var/log/polipo/polipo.log show?
Apparently this is because I forgot to run the main command. It is easy to miss because it is mentioned on the Overview page of the documentation, not on the Deployment page. The following is the command:
scrapyd
504 Connect to localhost:8123 failed: General SOCKS server failure
You're asking Polipo to connect to localhost:8123; Polipo passes the request to Tor, which returns a failure result that Polipo dutifully relays ("General SOCKS server failure").
url = http://localhost:8123
This is certainly not what you meant.
http_port = 8123
I'm also pretty sure you didn't want to run scrapyd on the same port as Polipo.
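In other words, point the deploy target at scrapyd itself, not at the proxy. Assuming scrapyd runs locally on its default port 6800 (and Polipo keeps 8123), the deploy section would look something like this:

[deploy:<project_name>]
url = http://localhost:6800/
project = <project_name>

# and in the scrapyd settings, pick a port that doesn't collide with Polipo
http_port = 6800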