I got the Google API client for Objective-C, for using the Cloud Storage module, from the path below.
Google API Objective-C Client
Unfortunately the API doesn't come with any sample code for Cloud Storage, so I tried to do it myself but haven't been successful. Here is what I am doing:
I enabled billing for Cloud Storage in the Google APIs Console.
I created a bucket named "ahs_test".
I created a client ID for installed applications.
OAuth 2.0 worked with the library available at the SVN path above. After that I wrote the code below and got the error message below.
Note that after reading the Google Cloud Storage documentation I am fairly sure I have to send "x-goog-project-id" in my request headers, but as far as I can see this API's code doesn't do anything like that. (I might be making a mistake somewhere, so I'm mentioning it in case it helps. Thanks in advance.)
// Code....
GTLServiceStorage *service = self.storageService;
GTLQueryStorage *query = [GTLQueryStorage queryForBucketsGetWithBucket:@"ahs_test"];
_fileListTicket = [service executeQuery:query
                      completionHandler:^(GTLServiceTicket *ticket,
                                          GTLStorageBuckets *bucketList,
                                          NSError *error) {
                      }];
// Error message I get (detailed output from the logger)
storage.buckets.get
2012-12-30 07:11:30 +0000
Request: POST https://www.googleapis.com/rpc?prettyPrint=false
Request headers:
Accept: application/json-rpc
Authorization: Bearer _snip_
Cache-Control: no-cache
Content-Type: application/json-rpc; charset=utf-8
User-Agent: com.example.DriveSample/1.0 google-api-objc-client/2.0 MacOSX/10.8 (gzip)
Request body: (128 bytes)
{
"jsonrpc" : "2.0",
"method" : "storage.buckets.get",
"id" : "gtl_3",
"params" : {
"bucket" : "ahs_test",
"max-results" : 150
},
"apiVersion" : "v1beta1"
}
Response: status 200
Response headers:
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Content-Encoding: gzip
Content-Length: 132
Content-Type: application/json; charset=UTF-8
Date: Sun, 30 Dec 2012 07:10:44 GMT
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Pragma: no-cache
Server: GSE
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
Response body: (168 bytes)
{
"error" : {
"message" : "Access Not Configured",
"data" : [
{
"reason" : "accessNotConfigured",
"message" : "Access Not Configured",
"domain" : "usageLimits"
}
],
"code" : 403
},
"id" : "gtl_3"
}
The Google API library uses the Cloud Storage JSON API, which isn't enabled by default. Please check to see if it is enabled in the Google APIs console.
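Once the JSON API is enabled, a quick way to rule out problems in the Objective-C client is to call the REST endpoint directly. Here is a minimal sketch in Python (the access token is a placeholder, and the v1beta1 URL shape is an assumption based on the JSON API docs of that era); note that, as far as I know, the x-goog-project-id header mentioned in the question belongs to Cloud Storage's XML API and shouldn't be needed for a JSON-API buckets.get:

# Minimal sanity check against the Cloud Storage JSON API (v1beta1), assuming
# a valid OAuth 2.0 access token with a devstorage scope. A 200 response
# means the JSON API is enabled and the token can see the bucket; an HTTP 403
# "Access Not Configured" error means the JSON API still isn't enabled.
import json
import urllib.request

ACCESS_TOKEN = "ya29.placeholder"  # placeholder: OAuth 2.0 access token

req = urllib.request.Request(
    "https://www.googleapis.com/storage/v1beta1/b/ahs_test",
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
)
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))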
I'm sending requests to the Google Safe Browsing API. I believe I'm following the documentation correctly, and I've tried regenerating my key. I'm sending the request below:
POST https://safebrowsing.googleapis.com/v4/threatMatches:find?key=AIxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx HTTP/1.1
User-Agent: Fiddler
Host: safebrowsing.googleapis.com
Content-Length: 511
{
"client": {
"clientId": "yourcompanyname",
"clientVersion": "1.5.2"
},
"threatInfo": {
"threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
"platformTypes": ["WINDOWS"],
"threatEntryTypes": ["URL"],
"threatEntries": [
{"url": "http://www.urltocheck1.org/"},
{"url": "http://malware.testing.google.test"},
{"url": "http://www.urltocheck2.org/"},
{"url": "http://www.urltocheck3.com/"}
]
}
}
I'm getting back an empty response, which is not what I expect given the URLs supplied, since I'm following their example.
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Date: Wed, 08 Sep 2021 15:05:59 GMT
Server: scaffolding on HTTPServer2
Cache-Control: private
X-XSS-Protection: 0
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
Alt-Svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000,h3-T051=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"
Accept-Ranges: none
Vary: Accept-Encoding
Content-Length: 3
{}
https://transparencyreport.google.com/safe-browsing/search?url=malware.testing.google.test
https://developers.google.com/safe-browsing/v4/lookup-api
You need to pass your API key, and the URL you check has to actually be flagged as malware: for a URL like "http://www.urltocheck1.org/" that isn't flagged, the API returns an empty response. Try the following URL with your code: https://testsafebrowsing.appspot.com/s/malware.html. You can also search for and test other known malware sites.
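To see a non-empty response, here is a minimal sketch of the same lookup against the Safe Browsing test page, using only the Python standard library (reading the API key from a GOOGLE_API_KEY environment variable is my own assumption):

# Look up the Safe Browsing test URL; unlike ordinary clean URLs, this one
# should come back with a "matches" array rather than an empty "{}".
import json
import os
import urllib.request

body = {
    "client": {"clientId": "yourcompanyname", "clientVersion": "1.5.2"},
    "threatInfo": {
        "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
        "platformTypes": ["WINDOWS"],
        "threatEntryTypes": ["URL"],
        "threatEntries": [
            {"url": "https://testsafebrowsing.appspot.com/s/malware.html"}
        ],
    },
}

req = urllib.request.Request(
    "https://safebrowsing.googleapis.com/v4/threatMatches:find?key="
    + os.environ["GOOGLE_API_KEY"],  # assumption: API key kept in this env var
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(json.load(urllib.request.urlopen(req)))  # expect a "matches" array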
My request to the Google Healthcare API is not working from the OAuth 2.0 Playground using the refresh-token option. I am getting "status": "PERMISSION_DENIED", even though the API has been enabled for many days. Here are the request and response details.
POST /v1alpha2/projects/<project_id>/locations/<location>/datasets?datasetId=<dataset_id> HTTP/1.1
Host: healthcare.googleapis.com
Content-length: 0
Content-type: application/json
Authorization: Bearer
HTTP/1.1 403 Forbidden
Content-length: 767
X-xss-protection: 0
X-content-type-options: nosniff
Transfer-encoding: chunked
Vary: Origin, X-Origin, Referer
Server: ESF
-content-encoding: gzip
Cache-control: private
Date: Fri, 12 Jul 2019 17:57:39 GMT
X-frame-options: SAMEORIGIN
Alt-svc: quic=":443"; ma=2592000; v="46,43,39"
Content-type: application/json; charset=UTF-8
{
"error": {
"status": "PERMISSION_DENIED",
"message": "Cloud Healthcare API has not been used in project <project_id> before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/healthcare.googleapis.com/overview?project=<project_ud> then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.",
"code": 403,
"details": [
{
"#type": "type.googleapis.com/google.rpc.Help",
"links": [
{
"url": "https://console.developers.google.com/apis/api/healthcare.googleapis.com/overview?project=<project_id>",
"description": "Google developers console API activation"
}
]
}
]
}
}
You are using the alpha endpoint of the Healthcare API, which has been decommissioned by Google. You can see how to transition to the beta API here: https://cloud.google.com/healthcare/docs/how-tos/transition-guide.
Also note that, in addition to the change in the request URL
/v1beta1/projects/<project_id>/locations/<location>/datasets
the response is now a long-running operation:
https://cloud.google.com/healthcare/docs/reference/rest/v1beta1/projects.locations.datasets.operations
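For reference, here is a minimal sketch of the v1beta1 flow using only the Python standard library (the access token and the placeholder project/location/dataset values are assumptions): create the dataset, then poll the long-running operation that the API returns.

# Create a dataset via the v1beta1 endpoint and poll the returned
# long-running operation until it reports done. ACCESS_TOKEN is assumed to
# be a valid OAuth 2.0 token; the angle-bracket values are placeholders.
import json
import time
import urllib.request

ACCESS_TOKEN = "ya29.placeholder"
PROJECT, LOCATION, DATASET = "<project_id>", "<location>", "<dataset_id>"
BASE = "https://healthcare.googleapis.com/v1beta1"
HEADERS = {
    "Authorization": "Bearer " + ACCESS_TOKEN,
    "Content-Type": "application/json",
}

create = urllib.request.Request(
    f"{BASE}/projects/{PROJECT}/locations/{LOCATION}/datasets?datasetId={DATASET}",
    data=b"{}", headers=HEADERS, method="POST",
)
operation = json.load(urllib.request.urlopen(create))

# The response names an operation (projects/.../operations/...); poll it.
while not operation.get("done"):
    time.sleep(2)
    poll = urllib.request.Request(f"{BASE}/{operation['name']}", headers=HEADERS)
    operation = json.load(urllib.request.urlopen(poll))

print(json.dumps(operation, indent=2))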
Right now the default Content-Type of my spring-data-rest (spring-boot 1.4.3.RELEASE) provided controllers is application/hal+json, which makes sense. If I use Chrome, I get application/hal+json for the root of my application, since Chrome sends an Accept header of "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8". However, the /profile (formerly /alps) URLs return text/html even though the response body is JSON, so the Content-Type doesn't match the body. If you specifically ask for only application/json, you get the correct response header.
Here is the incorrect case (it returns text/html even though the document/body returned is NOT text/html):
$ http --verbose "http://localhost:8080/v1/profile/eldEvents" "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"
GET /v1/profile/eldEvents HTTP/1.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: localhost:8080
User-Agent: HTTPie/0.9.2
HTTP/1.1 200
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: Origin, X-Requested-With, Content-Type, Accept, Location, X-Auth, Authorization
Access-Control-Allow-Methods: GET, HEAD, POST, PUT, DELETE, TRACE, OPTIONS, PATCH
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Location
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Content-Type: text/html;charset=UTF-8
Date: Fri, 03 Feb 2017 01:16:14 GMT
Expires: 0
Pragma: no-cache
Strict-Transport-Security: max-age=31536000 ; includeSubDomains
Transfer-Encoding: chunked
X-Application-Context: application
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
{
"alps" : {
"version" : "1.0",
"descriptors" : [ {
"id" : "eldEvent-representation",
"href" : "http://localhost:8080/v1/profile/eldEvents",
"descriptors" : [ {
"name" : "sequenceId",
"type" : "SEMANTIC"
}, {
...
I've cut out the rest of the response; you can see from the above that it is JSON data.
I believe the correct Content-Type for the above request should be something like "application/json".
If this is still relevant for you: I've solved this by manually overriding the response content type for all requests against /profile/* that have no content type defined.
import java.io.IOException;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.http.MediaType;
import org.springframework.stereotype.Component;
import org.springframework.util.AntPathMatcher;
import org.springframework.web.filter.OncePerRequestFilter;

@Component
public class ProfileContentTypeFilter extends OncePerRequestFilter
{
    private static final AntPathMatcher matcher = new AntPathMatcher();

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain)
            throws ServletException, IOException
    {
        if (request.getContentType() == null && matcher.match("/profile/*", request.getRequestURI()))
        {
            // Override the response content type for unspecified requests on profile endpoints
            response.setContentType(MediaType.APPLICATION_JSON_VALUE);
        }
        filterChain.doFilter(request, response);
    }
}
Referencing https://cloud.google.com/bigquery/docs/reference/rest/v2/tabledata/insertAll
The Google BigQuery API is returning 200, however the data is not being inserted into the table.
Request
POST https://www.googleapis.com/bigquery/v2/projects/***/datasets/***/tables/visits/insertAll?fields=insertErrors%2Ckind&key={YOUR_API_KEY}
{
"rows": [
{
"json": {
"hostId": "Value A",
"statusCode": "Value B"
}
},
{
"insertId": "inser-id-yo",
"json": {
"hostId": "Value A",
"statusCode": "Value B"
}
}
]
}
Response
Response: 200
cache-control: no-cache, no-store, max-age=0, must-revalidate
content-encoding: gzip
content-length: 69
content-type: application/json; charset=UTF-8
date: Fri, 16 Dec 2016 12:00:16 GMT
etag: "wWvNncJfeAdSHVaIWRpICxBS7AM/vyGp6PvFo4RvsFtPoIWeCReyIC8"
expires: Mon, 01 Jan 1990 00:00:00 GMT
pragma: no-cache
server: GSE
vary: Origin, X-Origin
{
"kind": "bigquery#tableDataInsertAllResponse"
}
My BigQuery table, however, is empty; no data is being inserted.
I know there are a number of SDKs, but I need to be able to do this via curl because I am working in a language for which Google has not developed an SDK.
For anyone else who has this problem: it's because BigQuery can have a delay of up to two hours. It took me a while to find the answer.
In my case it worked once I got rid of the line setInsertId(String.valueOf(System.currentTimeMillis())).
Please see: Data streaming insertAll api usage not equal to actually inserted rows
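A quick way to tell whether the streamed rows actually arrived while the table preview is still lagging is to run a query against the table, since queries read the streaming buffer. Here is a minimal sketch using only the Python standard library (the access token, project and dataset names are placeholders, and it relies on the jobs.query method's default SQL dialect):

# Count the rows in the target table through the BigQuery jobs.query REST
# method. Streamed rows are usually visible to queries within seconds, even
# when the table preview or copy/export hasn't caught up yet.
import json
import urllib.request

ACCESS_TOKEN = "ya29.placeholder"  # placeholder: OAuth 2.0 access token
PROJECT = "<project_id>"           # placeholder: project ID

req = urllib.request.Request(
    f"https://www.googleapis.com/bigquery/v2/projects/{PROJECT}/queries",
    data=json.dumps(
        {"query": "SELECT COUNT(*) FROM <dataset_id>.visits"}  # placeholder dataset
    ).encode("utf-8"),
    headers={
        "Authorization": "Bearer " + ACCESS_TOKEN,
        "Content-Type": "application/json",
    },
)
print(json.load(urllib.request.urlopen(req)).get("rows"))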
I'm using Hyper to send HTTP requests, but when multiple cookies are included in the response, Hyper combines them into one, which then breaks the parsing.
For example, here's a simple PHP script:
<?php
setcookie("hello", "world");
setcookie("foo", "bar");
Response using curl:
$ curl -sLD - http://local.example.com/test.php
HTTP/1.1 200 OK
Date: Sat, 24 Dec 2016 09:24:04 GMT
Server: Apache/2.4.25 (Unix) PHP/7.0.14
X-Powered-By: PHP/7.0.14
Set-Cookie: hello=world
Set-Cookie: foo=bar
Content-Length: 0
Content-Type: text/html; charset=UTF-8
However for the following Rust code:
let client = Client::new();
let response = client.get("http://local.example.com/test.php")
.send()
.unwrap();
println!("{:?}", response);
for header in response.headers.iter() {
println!("{}: {}", header.name(), header.value_string());
}
...the output will be:
Response { status: Ok, headers: Headers { Date: Sat, 24 Dec 2016 09:31:54 GMT, Server: Apache/2.4.25 (Unix) PHP/7.0.14, X-Powered-By: PHP/7.0.14, Set-Cookie: hello=worldfoo=bar, Content-Length: 0, Content-Type: text/html; charset=UTF-8, }, version: Http11, url: "http://local.example.com/test.php", status_raw: RawStatus(200, "OK"), message: Http11Message { is_proxied: false, method: None, stream: Wrapper { obj: Some(Reading(SizedReader(remaining=0))) } } }
Date: Sat, 24 Dec 2016 09:31:54 GMT
Server: Apache/2.4.25 (Unix) PHP/7.0.14
X-Powered-By: PHP/7.0.14
Set-Cookie: hello=worldfoo=bar
Content-Length: 0
Content-Type: text/html; charset=UTF-8
This seems really weird to me. I used Wireshark to capture the response, and there are two Set-Cookie headers in it. I also checked the Hyper documentation but got no clue.
I noticed that Hyper internally uses a VecMap<HeaderName, Item> to store the headers. So does it concatenate them into one? Then how should I split them into individual cookies afterwards?
I think that Hyper prefers to keep the cookies together in order to make it easier to do some extra stuff with them, like checking a cryptographic signature with CookieJar (cf. this implementation outline).
Another reason might be to keep the API simple. Headers in Hyper are indexed by type and you can only get a single instance of that type with Headers::get.
In Hyper, you'd usually access a header by using a corresponding type. In this case the type is SetCookie. For example:
if let Some(&SetCookie(ref cookies)) = response.headers.get() {
    for cookie in cookies.iter() {
        println!("Got a cookie. Name: {}. Value: {}.", cookie.name, cookie.value);
    }
}
Accessing the raw header value of Set-Cookie makes less sense, because then you'd have to reimplement proper parsing of quotes and cookie attributes yourself (cf. RFC 6265, section 4.1).
P.S. Note that in hyper 0.10 the cookie is no longer parsed, because the crate that was used for the parsing drags in the openssl dependency hell.