Why am I getting a 503 from the Amadeus airport API? - amadeus

I am using the Amadeus Python 3 client, and I get a 503 when calling the method:
response = amadeus.reference_data.locations.airports.get(
    longitude=long,
    latitude=lat)
Stack trace:
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/amadeus/reference_data/locations/_airports.py", line 25, in get
return self.client.get(
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/amadeus/mixins/http.py", line 40, in get
return self.request('GET', path, params)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/amadeus/mixins/http.py", line 110, in request
return self._unauthenticated_request(
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/amadeus/mixins/http.py", line 126, in _unauthenticated_request
return self.__execute(request)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/amadeus/mixins/http.py", line 152, in __execute
response._detect_error(self)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/amadeus/mixins/parser.py", line 16, in _detect_error
self.__raise_error(error, client)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/amadeus/mixins/parser.py", line 67, in __raise_error
raise error
amadeus.client.errors.ServerError: [503]
Is the API down at the moment?

A 503 is a server-side error. If the API has a status page, check that. If not, reach out to whoever runs it, if that is known. Beyond that, only people with access to the servers can tell you why it is failing.
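In the meantime you can treat the 503 as transient and retry the call a few times. A minimal sketch, assuming the ServerError import path shown in the stack trace above and placeholder credentials:
from time import sleep
from amadeus import Client
from amadeus.client.errors import ServerError  # module path taken from the stack trace above

# Placeholder credentials -- use your own keys
amadeus = Client(client_id='YOUR_API_KEY', client_secret='YOUR_API_SECRET')

def nearest_airports(lat, long, retries=3, backoff=5):
    """Look up nearby airports, retrying a few times since a 503 is usually transient."""
    for attempt in range(retries):
        try:
            return amadeus.reference_data.locations.airports.get(
                longitude=long,
                latitude=lat)
        except ServerError:
            if attempt == retries - 1:
                raise  # still failing: the service itself is likely having trouble
            sleep(backoff * (attempt + 1))  # simple linear backoff before the next try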

Related

telethon.errors.rpcerrorlist.BotMethodInvalidError

This is my first time using Telethon. I changed the api_id and api_hash and then ran the program, but the following error was reported:
Traceback (most recent call last):
File "scraper.py", line 370, in
client.loop.run_until_complete(main())
File "/root/.miniconda3/envs/python38/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "scraper.py", line 41, in main
await init_empty()
File "scraper.py", line 187, in init_empty
async for dialog in client.iter_dialogs():
File "/root/.miniconda3/envs/python38/lib/python3.8/site-packages/telethon/requestiter.py", line 74, in anext
if await self._load_next_chunk():
File "/root/.miniconda3/envs/python38/lib/python3.8/site-packages/telethon/client/dialogs.py", line 53, in _load_next_chunk
r = await self.client(self.request)
File "/root/.miniconda3/envs/python38/lib/python3.8/site-packages/telethon/client/users.py", line 30, in call
return await self._call(self._sender, request, ordered=ordered)
File "/root/.miniconda3/envs/python38/lib/python3.8/site-packages/telethon/client/users.py", line 84, in _call
result = await future
telethon.errors.rpcerrorlist.BotMethodInvalidError: The API access for bot users is restricted. The method you tried to invoke cannot be executed as a bot (caused by GetDialogsRequest)
May I ask what modifications I need to make to run this program (https://github.com/edogab33/telegram-groups-crawler)? Do I need a Telegram group file? Can you provide a simple example? Thank you.
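For what it's worth, the error text itself points at the cause: GetDialogsRequest (used by iter_dialogs) cannot be executed by a bot account, so the client has to log in as a regular user. A minimal sketch of a user-mode login with Telethon, with placeholder credentials (not taken from the linked project):
from telethon import TelegramClient

api_id = 123456          # placeholder: your api_id from my.telegram.org
api_hash = 'YOUR_HASH'   # placeholder: your api_hash

# No bot_token here: starting the client this way logs in as a normal user
# (you are prompted for your phone number and a login code), which is what
# iter_dialogs / GetDialogsRequest requires.
client = TelegramClient('user_session', api_id, api_hash)

async def main():
    async for dialog in client.iter_dialogs():
        print(dialog.id, dialog.name)

with client:
    client.loop.run_until_complete(main())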

bot.sendAudio and bot.sendPhoto methods in telepot return { 'error code' : 400 , 'Bad Request: wrong HTTP URL specified'}

I am using the
telepot.Bot(bot_id).sendAudio(chat_id, file_url)
method, which is supposed to send the file, but it returns:
Traceback (most recent call last):
File "C:\Users\vinu\AppData\Local\Programs\Python\Python37\lib\site-packages\telepot\__init__.py", line 1158, in collector
callback(item)
File "bot.py", line 72, in handle
bot.sendAudio(chat_id, url)
File "C:\Users\vinu\AppData\Local\Programs\Python\Python37\lib\site-packages\telepot\__init__.py", line 556, in sendAudio
return self._api_request_with_file('sendAudio', _rectify(p), 'audio', audio)
File "C:\Users\vinu\AppData\Local\Programs\Python\Python37\lib\site-packages\telepot\__init__.py", line 496, in _api_request_with_file
return self._api_request(method, _rectify(params), **kwargs)
File "C:\Users\vinu\AppData\Local\Programs\Python\Python37\lib\site-packages\telepot\__init__.py", line 491, in _api_request
return api.request((self._token, method, params, files), **kwargs)
File "C:\Users\vinu\AppData\Local\Programs\Python\Python37\lib\site-packages\telepot\api.py", line 155, in request
return _parse(r)
File "C:\Users\vinu\AppData\Local\Programs\Python\Python37\lib\site-packages\telepot\api.py", line 150, in _parse
raise exception.TelegramError(description, error_code, data)
telepot.exception.TelegramError: ('Bad Request: wrong HTTP URL specified', 400, {'ok': False, 'error_code': 400, 'description': 'Bad Request: wrong HTTP URL specified'})
The same happened with sendPhoto, but I was able to send photos using python requests:
response = requests.post('https://api.telegram.org/bot/sendphoto', files=files)
I want to know either why the sendAudio() and sendPhoto() methods don't work, or the HTTP URL to use for sending audio.
With telepot, bot.sendPhoto, bot.sendVideo and bot.sendAudio work both with files and with URLs that point to a file.
In your case it seems the URL you used was incorrect; can you share it?
In my experience this can happen because the URL contains &amp; instead of &.
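Not part of the original answer, but as an illustration: a small sketch of sending audio from a URL with telepot, decoding any HTML-escaped ampersands first (the token, chat id and URL are placeholders):
import html
import telepot

bot = telepot.Bot('123456:ABC-PLACEHOLDER-TOKEN')  # placeholder bot token
chat_id = 123456789                                # placeholder chat id

# If the URL was scraped from an HTML page it may still contain &amp; entities;
# Telegram then rejects it with "Bad Request: wrong HTTP URL specified".
raw_url = 'https://example.com/track.mp3?user=1&amp;id=2'
file_url = html.unescape(raw_url)  # -> 'https://example.com/track.mp3?user=1&id=2'

# telepot accepts a plain HTTP(S) URL string as the audio argument
bot.sendAudio(chat_id, file_url)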

Google Oauth2 refresh token error using python-social-auth

I'm using python-social-auth and when I try to refresh my Google Oauth2 access token I get the following error:
[2017-02-15 14:41:00,089: ERROR/MainProcess] Task tasks.tasks.test_login[169e5810-489d-4134-af8f-db3b80629fd2] raised unexpected: HTTPError(u'400 Client Error: Bad Request for url: https://accounts.google.com/o/oauth2/token',)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 240, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 438, in __protected_call__
return self.run(*args, **kwargs)
File "/home/paulozullu/dev/workspaces/wopik/wopik/tasks/tasks.py", line 1928, in test_login
social.refresh_token(strategy)
File "/usr/local/lib/python2.7/dist-packages/social/storage/base.py", line 54, in refresh_token
response = backend.refresh_token(token, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/social/backends/oauth.py", line 418, in refresh_token
request = self.request(url, **request_args)
File "/usr/local/lib/python2.7/dist-packages/social/backends/base.py", line 225, in request
response.raise_for_status()
File "/usr/local/lib/python2.7/dist-packages/requests/models.py", line 909, in raise_for_status
raise HTTPError(http_error_msg, response=self)
HTTPError: 400 Client Error: Bad Request for url: https://accounts.google.com/o/oauth2/token
I use the following code to refresh the access token:
from social.apps.django_app.utils import load_strategy
w_user = WUser.objects.get(auth_user=A('username','xxxx'))
social = UserSocialAuth.objects.get(user_id=w_user.auth_user.id)
strategy = load_strategy()
social.refresh_token(strategy)
Am I doing something wrong?
I had the same problem when calling social.get_access_token(load_strategy()). If you don't want to implement Google sign-in manually, you can use this workaround, which forces the user to re-authenticate in order to refresh their tokens:
try:
    strategy = load_strategy()
    access_token = social.get_access_token(strategy)
except HTTPError as e:
    return HttpResponseRedirect(reverse('social:begin', kwargs={'backend': "google-oauth2"}))
As yilmazhuseyin pointed out above, the issue is that no refresh token is present. You need to pass access_type='offline' in the parameters for Google to return a refresh token. With python-social-auth in Django, this can be done by adding the following to settings.py:
SOCIAL_AUTH_GOOGLE_OAUTH2_AUTH_EXTRA_ARGUMENTS = {
    'access_type': 'offline',
}
More details can be found in Google OAuth 2.0 documentation.
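As a further illustration (not from the original answers), you can check whether a refresh token was actually stored before trying to refresh, since tokens authorised without access_type='offline' will not have one. A sketch, assuming the same UserSocialAuth object as in the question:
from social.apps.django_app.utils import load_strategy

def refresh_google_token(social):
    """Refresh the Google OAuth2 access token, but only if a refresh token was stored."""
    if not social.extra_data.get('refresh_token'):
        # No refresh token was issued: the user authorised the app without
        # access_type='offline', so they need to go through the consent flow again.
        return None
    strategy = load_strategy()
    social.refresh_token(strategy)
    return social.extra_data.get('access_token')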

Error occurred while adding a new consumer

I am getting the following error when I try to add a new consumer (queue) by issuing the control command on current_app imported from celery.
The details of the logged error are as follows:
reply = current_app.control.add_consumer(queue_name, destination = WORKER_PROCESSES, reply = True)
File "/opt/msx/python-env/lib/python2.7/site-packages/celery/app/control.py", line 232, in add_consumer
**kwargs
File "/opt/msx/python-env/lib/python2.7/site-packages/celery/app/control.py", line 307, in broadcast
limit, callback, channel=channel,
File "/opt/msx/python-env/lib/python2.7/site-packages/kombu/pidbox.py", line 300, in _broadcast
channel=chan)
File "/opt/msx/python-env/lib/python2.7/site-packages/kombu/pidbox.py", line 336, in _collect
with consumer:
File "/opt/msx/python-env/lib/python2.7/site-packages/kombu/messaging.py", line 396, in __enter__
self.consume()
File "/opt/msx/python-env/lib/python2.7/site-packages/kombu/messaging.py", line 445, in consume
self._basic_consume(T, no_ack=no_ack, nowait=False)
File "/opt/msx/python-env/lib/python2.7/site-packages/kombu/messaging.py", line 567, in _basic_consume
no_ack=no_ack, nowait=nowait)
File "/opt/msx/python-env/lib/python2.7/site-packages/kombu/entity.py", line 611, in consume
nowait=nowait)
File "/opt/msx/python-env/lib/python2.7/site-packages/librabbitmq/__init__.py", line 81, in basic_consume
no_local, no_ack, exclusive, arguments or {},
ChannelError: basic.consume: server channel error 404, message: NOT_FOUND - no queue '2795c73e-2b6a-34d6-bd1f-13de0d1e5497.reply.celery.pidbox' in vhost '/'
I don't understand the error: the queue name I am passing is different from the one mentioned in the logs.
Any help will be appreciated. Thanks.
Note: this issue started occurring after setting the MAX_TASK_PER_CHILD value. Is this related to the error?

Scrapyd with Polipo and Tor

UPDATE: I am now running this command:
scrapyd-deploy <project_name>
And getting this error:
504 Connect to localhost:8123 failed: General SOCKS server failure
I am trying to deploy my scrapy spider through scrapyd-deploy; the following is the command I use:
scrapyd-deploy -L <project_name>
I get the following error message:
Traceback (most recent call last):
File "/usr/local/bin/scrapyd-deploy", line 269, in <module>
main()
File "/usr/local/bin/scrapyd-deploy", line 74, in main
f = urllib2.urlopen(req)
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 410, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 523, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 448, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 382, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 531, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 404: Not found
The following is my scrapy.cfg file:
[settings]
default = <project_name>.settings
[deploy:<project_name>]
url = http://localhost:8123
project = <project_name>
eggs_dir = eggs
logs_dir = logs
items_dir = items
jobs_to_keep = 5
dbs_dir = dbs
max_proc = 0
max_proc_per_cpu = 4
finished_to_keep = 100
poll_interval = 5
http_port = 8123
debug = on
runner = scrapyd.runner
application = scrapyd.app.application
launcher = scrapyd.launcher.Launcher
[services]
schedule.json = scrapyd.webservice.Schedule
cancel.json = scrapyd.webservice.Cancel
addversion.json = scrapyd.webservice.AddVersion
listprojects.json = scrapyd.webservice.ListProjects
listversions.json = scrapyd.webservice.ListVersions
listspiders.json = scrapyd.webservice.ListSpiders
delproject.json = scrapyd.webservice.DeleteProject
delversion.json = scrapyd.webservice.DeleteVersion
listjobs.json = scrapyd.webservice.ListJobs
I am running Tor and Polipo, with the Polipo proxy at http://localhost:8123. I can perform a wget and download that page without any problems; the proxy is working correctly and I can connect to the internet. Please ask if you need more clarification.
Thanks!
urllib2.HTTPError: HTTP Error 404: Not found
The URL is not being reached.
Anything interesting in /var/log/polipo/polipo.log? What comes from tail -100 /var/log/polipo/polipo.log?
Apparently this was because I forgot to run the main command. It is easy to miss because it is mentioned on the Overview page of the documentation, not on the Deployment page. The following is the command:
scrapyd
504 Connect to localhost:8123 failed: General SOCKS server failure
You're asking Polipo to connect to localhost:8123; Polipo passes the request on to Tor, which returns a failure that Polipo dutifully reports back ("General SOCKS server failure").
url = http://localhost:8123
This is certainly not what you meant: it points scrapyd-deploy at the Polipo proxy rather than at scrapyd.
http_port = 8123
I'm also pretty sure you didn't want to run scrapyd on the same port as Polipo.
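As a sketch only (not from the original answer): the deploy url would normally point at scrapyd itself, which listens on port 6800 by default, leaving 8123 to Polipo; <project_name> stays a placeholder:
[deploy:<project_name>]
url = http://localhost:6800/
project = <project_name>

[scrapyd]
# keep scrapyd off Polipo's port; 6800 is scrapyd's default
http_port = 6800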