I am using the Fusion Tables API to insert/update data in my table. Last week I migrated my code to the new API version v1, as shown in this sample. But now, when I run the code, the following error is displayed:
400 Bad Request
{
"error" : "unauthorized_client"
}
com.google.api.client.auth.oauth2.TokenResponseException: 400 Bad Request
{
"error" : "unauthorized_client"
}
at com.google.api.client.auth.oauth2.TokenResponseException.from(TokenResponseException.java:105)
at com.google.api.client.auth.oauth2.TokenRequest.executeUnparsed(TokenRequest.java:303)
at com.google.api.client.auth.oauth2.TokenRequest.execute(TokenRequest.java:323)
at com.google.api.client.auth.oauth2.Credential.executeRefreshToken(Credential.java:607)
at com.google.api.client.auth.oauth2.Credential.refreshToken(Credential.java:526)
at com.google.api.client.auth.oauth2.Credential.intercept(Credential.java:287)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:836)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:412)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:345)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:463)
at com.prasanna.survey.pushapis.FusionPush.insertData(FusionPush.java:198)
at com.prasanna.survey.pushapis.FusionPush.main(FusionPush.java:96)
Java Result: 1
How can I debug this error?
This error typically occurs if you change the client_id of an application.
The reason is that you already have a stored access token for the Fusion Tables API that is based on the old client_id. When the client requests a refresh token (as you can see happening in the stack trace) with the new client_id, you get that very unhelpful error message.
The easiest way to handle this is to clear the existing credential from the system so that a fresh access token has to be obtained. You can do this programmatically with the Google API Java Client, or you can just remove the file from your system. On my Ubuntu machine it is located at ~/.credentials/<api-name>.json.
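If you go the programmatic route, a minimal sketch along these lines works with the client library's file-backed credential store (the directory below is an assumption; use whatever your application passed to FileDataStoreFactory when it built the authorization flow):
import com.google.api.client.auth.oauth2.StoredCredential;
import com.google.api.client.util.store.DataStore;
import com.google.api.client.util.store.FileDataStoreFactory;
import java.io.File;

// Sketch: wipe the cached OAuth tokens so the next run has to authorize from scratch.
// The directory is an assumption -- point it at whatever your app passed to FileDataStoreFactory.
File dataStoreDir = new File(System.getProperty("user.home"), ".credentials");
DataStore<StoredCredential> store =
        StoredCredential.getDefaultDataStore(new FileDataStoreFactory(dataStoreDir));
store.clear();            // removes every cached credential in the store
// store.delete("user");  // or remove just the one entry, keyed by the user id you authorized with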
Related
Our server-side OAuth validation via Google has started throwing a NullPointerException within GoogleTokenResponse.parseIdToken():
java.lang.NullPointerException:
at com.google.api.client.json.webtoken.JsonWebSignature$Parser.parse(JsonWebSignature.java:462)
at com.google.api.client.googleapis.auth.oauth2.GoogleIdToken.parse(GoogleIdToken.java:57)
at com.google.api.client.googleapis.auth.oauth2.GoogleTokenResponse.parseIdToken(GoogleTokenResponse.java:106)
This is new behavior that started today. There was no change to our server code (it has worked for months). The problem occurs only with credentials from one Android device -- I have another that works fine. Refreshing the client's server access token does not solve the problem.
The GoogleTokenResponse is created by GoogleAuthorizationCodeTokenRequest(); that call succeeds, and when I log the GoogleTokenResponse it looks valid:
{"access_token":"ya29.mwJvM...","expires_in":3600,"token_type":"Bearer"}
UPDATE: I tested some more and found that tokenResponse.getIdToken() is returning null, so I assume that's what's causing the NPE when I call parseIdToken().
What would cause getIdToken() to return null when GoogleAuthorizationCodeTokenRequest() apparently succeeded and there is an access token?
Final resolution: this issue appears to be triggered intermittently by the Google Play Services update in early 2016 that anonymized the Player ID. We were able to fix our problems by changing our server validation of the access token to a newer method instead of relying on the older getIdToken()/parseIdToken() methods. See the last UPDATE below for details.
After two days the Android device with this failure mysteriously started to work again. So the cause may be a transient error in the client's Google Play Services state which self-corrected. The fix occurred after a device reboot.
However, I'm not certain that was the cause. There are also Play Services changes rolling out to enable authentication without exposing the G+ user ID -- another explanation is that the server was not being given scope to retrieve the ID. (If that was the cause, then again the fix must have been deployed by Google, as we have not changed anything.)
We'll continue to monitor it, if anyone else runs into this add a comment please.
4/19/16: This problem has occurred on a different device. I am wondering if this is related to the Google Play auth changes described here: http://android-developers.blogspot.com/2016/01/play-games-permissions-are-changing-in.html?m=1
That explanation is a bit sparse, but it does say "The user_id returned by token info may no longer be present with the new model. And even if it is present, the value won’t be the same as the new player ID".
In this case the problem occurred after:
1) The device had previously authorized with Google Play Services in the old G+ style
2) App data was cleared, so re-auth was necessary
3) During re-auth, GPS prompted for the new GPS-only player ID (not real name), which makes me wonder if it switched that device to the new non-G+ ID
4) Then server calls to tokenResponse.getIdToken() returned null
I'm not yet sure what's happening, but I am researching two areas of concern:
1) Although the Google docs referenced above say "existing players ... will continue to get their Google+ ID", I'm wondering if this is managed per client. That would be a big problem, because we use that ID to store cloud state for a user across devices: if a user who originally set up their account before the new player ID then installed the app on a second device, they could sign in with Google Play but the two accounts would not match.
2) If this is the cause, then either our server code fails to work with the new non-G+ player ID, or there is a Google back-end bug when a device transitions between the two. This is still confusing, though, because our prior problem did self-correct after a couple of days, which implies the server code is fine -- but I'm sure hoping the alternate explanation of a bug in Google's back-end auth is wrong!
--- UPDATE
I think the issue is related to the new GPS anonymized Player ID changes. It has been hard to debug because it appears that Google's legacy server auth flow, which requires a non-null GoogleTokenResponse.getIdToken(), fails for a newly created GPS Player ID, but after 12-24 hours the problem seems to self-correct and the legacy Google auth calls begin to succeed, including returning a non-null getIdToken().
However, I tried implementing the new Player ID flow described in Step 7 of the Google info page above, which converts the access token (generated from a server auth code) to a Player ID via www.googleapis.com/games/v1/applications//verify/
This code successfully retrieves a Player ID from the accessToken even when getIdToken() returns null:
// URL: www.googleapis.com/games/v1/applications/<app_id>/verify/
URL url = new URL("https://www.googleapis.com/games/v1/applications/" + GPlayServicesAppId + "/verify/");
HttpURLConnection httpConnection = (HttpURLConnection) url.openConnection();
httpConnection.setRequestProperty("Authorization", "OAuth " + accessToken);
httpConnection.setRequestMethod("GET");
int responseCode = httpConnection.getResponseCode();
if (responseCode != HttpURLConnection.HTTP_OK) {
...
}
BufferedReader reader = new BufferedReader(new InputStreamReader(httpConnection.getInputStream()));
// Read the JSON response body into a string
StringBuilder responseJson = new StringBuilder();
for (String line; (line = reader.readLine()) != null; ) {
    responseJson.append(line);
}
reader.close();
// Example response: { "kind": "games#applicationVerifyResponse", "player_id": "11520..."}
I ran some tests, and as far as I can tell the new method works in all the cases where the older G+ getIdToken() method works, as well as fixing the cases where it doesn't, so I believe we can just switch to the new method in the code snippet above, and hopefully that will be reliable.
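For completeness, extracting player_id from that response can be done with the JSON factory already on the classpath for the Google API client (a sketch; responseJson is the string built in the snippet above):
import com.google.api.client.json.GenericJson;
import com.google.api.client.json.jackson2.JacksonFactory;

// Sketch: parse the verify response and pull out the player_id field.
GenericJson json = JacksonFactory.getDefaultInstance()
        .fromString(responseJson.toString(), GenericJson.class);
String playerId = (String) json.get("player_id");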
I'm trying to create a backend system with AWS API Gateway and Lambda.
In the past few days I created a PUT method for a new API resource, with an API key as a simple first security step. The PUT method invokes a Lambda function on AWS.
Then I deployed this API to a "prod" stage for some tests.
In the first days everything was working as expected: I made a call to the API with Postman and received all the data I was expecting.
But a couple of days ago I started always receiving a 429 "Too Many Requests" response. I also created a new stage, but nothing changed: the new stage, with the same version or a newer one, always gets the same error.
The API is not reaching any limit, because it is called 4 or 5 times per day, not per second (checked on CloudWatch). There is no loop; it is a single invocation.
I suppose there is no error on the Lambda side, because if I test the API in the AWS API Gateway console I get no error (and the Lambda was working well in the past; there have been no changes since that version). The error only shows up when I use an external client to test my API (in my case, Postman).
Can anyone help me solve this problem?
UPDATE: I've just created a POST method on the same resource, with the same parameters and the same Lambda. It is working. I wonder if the problem is related to PUT methods in general, or if within two days my POST method will also be affected by the same problem.
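For reference, the call our client (and Postman) makes is essentially the following (a sketch; the invoke URL, resource path, API key and payload are placeholders, and x-api-key is the standard header API Gateway checks when a method requires an API key):
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Sketch of the PUT call; the invoke URL, resource and key are placeholders.
URL url = new URL("https://<api-id>.execute-api.<region>.amazonaws.com/prod/<resource>");
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setRequestMethod("PUT");
connection.setRequestProperty("x-api-key", "<my-api-key>");
connection.setRequestProperty("Content-Type", "application/json");
connection.setDoOutput(true);
try (OutputStream out = connection.getOutputStream()) {
    out.write("{\"example\":\"payload\"}".getBytes(StandardCharsets.UTF_8));
}
int responseCode = connection.getResponseCode(); // this is where I see 429 instead of 200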
I had the same problem. I deleted and recreated the deployment, and that worked in my case.
Here is a link to the errors related to Amazon's API Gateway. The last paragraph has additional information on the 429 error you discussed above.
I had the same issue. I opened a case with AWS, and they suggested that I implement this DependsOn fix in the template file. Refer: Link
And it worked for me.
I am getting the following error from Google BigQuery while doing streaming inserts:
Error message: Signet::AuthorizationError: Unexpected status code: 500. Server message: { "error": "internal_failure" }
I can understand that there can be a few errors, but they don't get reflected in the console, as shown below:
As you can see in the image above, there are no 500 errors, yet in reality there were ten 500 internal_failure errors.
Can you tell me why these errors aren't reflected in the console, and how I can ensure they don't happen?
This looks like a failure to get your authentication token. That failure would occur before the client code even attempts to call the BigQuery API, so the console you are looking at is accurately representing traffic.
I suspect it is a failure on a request to https://accounts.google.com/o/oauth2/token. Perhaps monitoring outgoing HTTP requests could verify this? (For example, see "Getting error 500 when trying to obtain access token from authorization code" and "Internal_failure while getting refreshtoken using code?".)
Back to the BigQuery API: when it returns an HTTP error code 500, the error string will be one of "backendError" or "internalError". (For the curious: "backendError" is usually retriable, while "internalError" is likely a permanent failure.)
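For illustration, with the Java client (google-api-services-bigquery) the reason string can be read off a failed call like this; the same reason values appear in the raw error body whichever client library you use (bigquery, projectId, datasetId, tableId and insertRequest are placeholders):
import com.google.api.client.googleapis.json.GoogleJsonError;
import com.google.api.client.googleapis.json.GoogleJsonResponseException;

// Sketch: distinguish the two 500 reasons when a streaming insert fails.
try {
    bigquery.tabledata().insertAll(projectId, datasetId, tableId, insertRequest).execute();
} catch (GoogleJsonResponseException e) {
    GoogleJsonError details = e.getDetails();
    String reason = (details != null && details.getErrors() != null && !details.getErrors().isEmpty())
            ? details.getErrors().get(0).getReason()
            : "unknown";
    if ("backendError".equals(reason)) {
        // usually transient -- retry with backoff
    } else if ("internalError".equals(reason)) {
        // likely permanent -- log and investigate
    }
}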
I'm using the REST API to create an envelope, and then configure it using the sender view API call (/restapi/v2/accounts//envelopes//views/sender) to get the DocuSign UI. Creating the envelope and viewing it the first time using the sender view to bring up the DocuSign UI works fine.
The problem occurs if, instead of sending the envelope, I click 'Save as Draft'. When I try to go back to the envelope and view it again using the sender view, I get the following error with an HTTP status of 400:
{
"errorCode": "EDIT_LOCK_ENVELOPE_LOCKED",
"message": "The envelope is locked. The lock must be released before requesting the sender token for envelope, id = xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx."
}
The lock seems to wear off after a while (approx. 20 minutes). However, after it wears off I can only view the envelope once, and then the lock is reapplied. This error only happens on my demo account and not in production, so it seems like an account setting, but I can't figure out what or where the setting is.
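For what it's worth, the lock state can be inspected through the EnvelopeLocks resource (GET .../envelopes/{envelopeId}/lock) before requesting another sender view; a rough sketch, where the account/envelope IDs and the X-DocuSign-Authentication credentials are placeholders from my demo configuration:
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: query the current lock on an envelope (demo environment; IDs and credentials are placeholders).
URL lockUrl = new URL("https://demo.docusign.net/restapi/v2/accounts/" + accountId
        + "/envelopes/" + envelopeId + "/lock");
HttpURLConnection conn = (HttpURLConnection) lockUrl.openConnection();
conn.setRequestMethod("GET");
conn.setRequestProperty("X-DocuSign-Authentication",
        "{\"Username\":\"" + userName + "\",\"Password\":\"" + password
        + "\",\"IntegratorKey\":\"" + integratorKey + "\"}");
conn.setRequestProperty("Accept", "application/json");

int status = conn.getResponseCode();
InputStream body = (status == HttpURLConnection.HTTP_OK) ? conn.getInputStream() : conn.getErrorStream();
BufferedReader lockReader = new BufferedReader(new InputStreamReader(body));
StringBuilder lockJson = new StringBuilder();
for (String line; (line = lockReader.readLine()) != null; ) {
    lockJson.append(line);
}
lockReader.close();
// lockJson contains the lock details when the envelope is locked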
We have a bug logged on our side where Save Draft isn't correctly releasing the lock on the envelope. We should have a fix for this issue in our DEMO environment soon. The locking feature is currently only "ON" in our DEMO environment but not in our Production environments while we find and fix potential issues such as the one identified here. More information about locking is in our February service pack (PDF) release notes available here: https://www.docusign.com/support/releases.
According to Yodlee, when you add a site to a user, you are meant to check the status of the site refresh using getRefreshInfo from the RefreshInfo locator.
Whenever I attempt to use getRefreshInfo in a user context, Yodlee throws a 405 (Method Not Allowed) error. This happens even when using the sample Java code that uses the SOAP API.
The actual call to getRefreshInfo is what throws the error.
Please use the getSiteRefreshInfo API to check the status of the site refresh; this will help.
We'll get that document corrected.