Uber Pool no_drivers_available - API

Has anyone had this issue? I've been using the Uber API for months to order rides, but last week something changed: every time I try ordering an Uber Pool, the v1.2/requests/estimate endpoint gives me an estimate and a fare_id, and when I book the ride I get an HTTP 200 success. But then, a few minutes later, I get the error no_drivers_available. This does not seem to happen with uberX. I've tried ordering rides in places that definitely have drivers (e.g. Manhattan), but I always get the same error after a few minutes.
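For reference, the 200 on booking only tells you the request was accepted; dispatch happens asynchronously, so the failure surfaces later when polling the request. This is roughly the polling loop I use after booking (Python sketch; the endpoint path and the set of terminal statuses are my reading of the v1.2 Ride Requests docs, so double-check them):

```python
import json
import time
import urllib.request

# Terminal ride-request statuses (assumption: based on my reading of the
# v1.2 Ride Requests docs; the exact set may differ).
TERMINAL_STATUSES = {
    "completed",
    "rider_canceled",
    "driver_canceled",
    "no_drivers_available",
}


def is_terminal(status):
    """True once the ride request has reached a final state."""
    return status in TERMINAL_STATUSES


def poll_request_status(request_id, token, interval=10, timeout=600):
    """Poll GET /v1.2/requests/{request_id} until a terminal status.

    The HTTP 200 on booking only means the request was accepted;
    dispatch is asynchronous, which is why no_drivers_available
    shows up minutes later.
    """
    url = f"https://api.uber.com/v1.2/requests/{request_id}"
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        req = urllib.request.Request(
            url, headers={"Authorization": f"Bearer {token}"}
        )
        with urllib.request.urlopen(req) as resp:
            status = json.load(resp).get("status")
        if is_terminal(status):
            return status
        time.sleep(interval)
    return "timeout"
```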

Related

Why am I getting alternating data from the Flight Cheapest Date Search endpoint?

When sending GET requests to the Test API for Flight Cheapest Date Search with Postman, I seem to be getting two different result sets on repeated requests.
GET https://test.api.amadeus.com/v1/shopping/flight-dates?origin=ATL&destination=SFO
E.g., for a search for the connection ATL-SFO, which is listed here, I'm alternately getting a 404 error ("no price result found") on one request and a full 390 KB list of results with the warning "maximum response size reached" on the next request.
Even for pairs that are supposedly not supported, like LAX-SFO, I'm alternately getting 500 errors and a full, sorted list of flights.
Is this documented behavior and is there something I can do on my side to get consistent results?
Thanks for raising the issue. We had an issue with one of the nodes used for this API. This has been fixed and the endpoint should work properly now.
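For anyone hitting similar one-bad-node flakiness in the meantime, a client-side retry with exponential backoff usually smooths over the alternating failures (sketch using stdlib urllib; during an incident like this even the 404s were transient, so you could add 404 to the retryable set at your own risk):

```python
import time
import urllib.error
import urllib.request

# Statuses treated as transient server-side failures worth retrying.
RETRYABLE = {500, 502, 503, 504}


def backoff_delays(attempts, base=1.0, cap=30.0):
    """Exponential backoff schedule: base, 2*base, 4*base, ... capped at cap."""
    return [min(base * 2 ** i, cap) for i in range(attempts)]


def get_with_retry(url, attempts=4):
    """GET url, retrying transient 5xx responses with backoff.

    When identical requests alternate between errors and full results
    (a single unhealthy backend node), a couple of retries is often
    enough to land on a healthy node.
    """
    last_err = None
    for delay in backoff_delays(attempts):
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code not in RETRYABLE:
                raise
            last_err = err
            time.sleep(delay)
    raise last_err
```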

Getting "Backend Error" once a day when loading data into BigQuery using insertAll

We are loading data every hour from a Postgres database into BigQuery using StreamSets Data Collector and the insertAll API. In general it works, but almost every morning (8 of the last 10 days) the job that runs around 6:00 AM Pacific fails, with all of the rows for ~25 tables coming back with a "Backend Error" message. The total amount of data being loaded is about 150 MB.
Is there any way to get more insight as to what would be causing these errors to throw during this time period? Are these Backend Errors considered "normal"?
This is using StreamSets Data Collector, so there isn't really any code to post. The stage sending in the rows is an HTTP Processor that uses https://bigquery.googleapis.com/bigquery/v2/projects/{projectid}/datasets/{dataset}/tables/{table}/insertAll
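For context, tabledata.insertAll reports per-row failures in the insertErrors field of its response, so a client can retry just the rows that failed with a retryable reason rather than failing the whole batch. A sketch of that selection (Python; the reason set here is my assumption based on Google's general troubleshooting guidance, which treats backendError as retryable):

```python
# Reasons we assume are transient and worth retrying (assumption:
# adapted from Google's troubleshooting guidance; adjust to taste).
RETRYABLE_REASONS = {"backendError", "internalError", "timeout"}


def rows_to_retry(insert_all_response):
    """Return indices of rows whose insert failed with a retryable reason.

    insertAll returns per-row failures in `insertErrors`; each entry
    carries the row `index` and a list of `errors`, each with a
    `reason`. Rows failing only with permanent reasons (e.g. `invalid`)
    should not be retried as-is.
    """
    retry = []
    for entry in insert_all_response.get("insertErrors", []):
        reasons = {err.get("reason") for err in entry.get("errors", [])}
        if reasons & RETRYABLE_REASONS:
            retry.append(entry["index"])
    return retry
```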

Finding statistical outliers in timestamp intervals with SQL Server

We have a bunch of devices in the field (various customer sites) that "call home" at regular intervals, configurable at the device but defaulting to 4 hours.
I have a view in SQL Server that displays the following information in descending chronological order:
DeviceInstanceId uniqueidentifier not null
AccountId int not null
CheckinTimestamp datetimeoffset(7) not null
SoftwareVersion string not null
Each time the device checks in, it will report its id and current software version which we store in a SQL Server db.
Some of these devices are in places with flaky network connectivity, which obviously prevents them from operating properly. There are also a bunch in datacenters where administrators regularly forget about them and change firewall/proxy settings, accidentally preventing outbound communication for the device. We need to proactively identify this bad connectivity so we can start investigating the issue before finding out from an unhappy customer... because even if the problem is 99% certainly on their end, they tend to feel (and as far as we are concerned, correctly) that we should know about it and be bringing it to their attention rather than vice versa.
I am trying to come up with a way to query all distinct DeviceInstanceId that have currently not checked in for a period of 150% their normal check-in interval. For example, let's say device 87C92D22-6C31-4091-8985-AA6877AD9B40 has, for the last 1000 checkins, checked in every 4 hours or so (give or take a few seconds)... but the last time it checked in was just a little over 6 hours ago now. This is information I would like to highlight for immediate review, along with device E117C276-9DF8-431F-A1D2-7EB7812A8350 which normally checks in every 2 hours, but it's been a little over 3 hours since the last check-in.
It seems relatively straightforward to brute-force this, looping through all the devices, examining the average interval between check-ins, seeing what the last check-in was, comparing that to the current time, etc... but there are thousands of these, and the device count grows larger every day. I need an efficient query to quickly generate this list of uncommunicative devices at least every hour... I just can't picture how to write that query.
Can someone help me with this? Maybe point me in the right direction? Thanks.
I am trying to come up with a way to query all distinct DeviceInstanceId that have currently not checked in for a period of 150% their normal check-in interval.
I think you can do:
-- CheckinTimestamp is datetimeoffset, so compare against sysdatetimeoffset()
-- rather than getdate(), which ignores the time zone offset.
select *
from (select DeviceInstanceId,
             datediff(second, min(CheckinTimestamp), max(CheckinTimestamp)) / nullif(count(*) - 1, 0) as avg_secs,
             max(CheckinTimestamp) as max_CheckinTimestamp
      from t
      group by DeviceInstanceId
     ) t
where max_CheckinTimestamp < dateadd(second, -avg_secs * 1.5, sysdatetimeoffset());

Changes in query behaviour

I have some queries that have run every day for several months with no problem. I haven't changed anything in the queries for a long while.
In the past few days some of them have started failing. The error message says something regarding certain fields: "Field 'myfield' not found.". These queries usually involve some sub-queries and window functions.
Example for the BQ guys:
On 2015-08-03 Job ID: job_EUWyK5DIFSxJxGAEC4En2Q_hNO8 run successfully
on the following days, same query, failed. Job IDs: (job_A9KYJLbQJQvHjh1g7Fc0Abd2qsc , job__15ff66aYseR-YjYnPqWmSJ30N8)
In addition, for some other queries running times extended from minutes to hours and sometime return "timeout".
My questions:
Was something changed in the BQ engine?
What should I do to make my queries run again?
Thanks
So the problem could be twofold:
An update to the query execution engine was rolled out during the week of August 3, 2015, as documented in the announcement.
If this is the case, you need to update your queries.
Some performance issues were detected lately, but in order to actually identify whether there is something wrong with your project, you need to file an issue request. I did the same in the past and it was fixed.

BigQuery completed job returns 404 on getting query results (immediately after)

We run a set of queries on a 2 hour interval which have been running for a week now without issues. Recently on 2015-06-04 00:00:26 UTC we had a job (job_OY8G2_I-F6dbXFW93GdB94wc_W0 ) marked as done, but we received a 404 HTTP exception when trying to get the query results.
I understand that results only last for 24 hours but in this case the query results are obtained right after the job status is 'DONE'.
bq wait job_OY8G2_I-F6dbXFW93GdB94wc_W0 also claims the job was a success.
Is this a situation we should code for (e.g. wait for job completion, then do a test query to make sure the results can be accessed before paginating through them, and resubmit the entire job on a 404)?
There was a brief period yesterday (June 3) where a small percentage of requests to BigQuery were rejected with a 404 response. It should have cleared up by about 8pm Pacific Time.
This was due to a problem with a configuration change that was caught before it rolled out widely, but it took a while to undo.
Pentium10, if you have seen something similar before, it is likely unrelated.
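On the defensive-coding question: a short retry loop around the results fetch covers this class of transient 404 without resubmitting the whole job. A minimal sketch, assuming your client surfaces the 404 as an exception (NotFound below is a placeholder name, not a real client class):

```python
import time


class NotFound(Exception):
    """Placeholder for an HTTP 404 from the results endpoint."""


def get_results_with_retry(fetch, attempts=5, delay=2.0):
    """Retry a results fetch that 404s right after the job reports DONE.

    `fetch` is any zero-argument callable (e.g. a wrapper around
    jobs.getQueryResults) that raises NotFound on a 404. Only if the
    retries are exhausted does it make sense to fall back to
    resubmitting the entire job.
    """
    for attempt in range(attempts):
        try:
            return fetch()
        except NotFound:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
```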