Cannot upload file through ShareFile API V3

Delphi 10 Seattle, Chilkat
Trying to migrate code from V1 to V3 of the ShareFile API.
I POST and receive the OAuth key successfully.
I POST and receive the ChunkUri from ShareFile successfully.
I extracted the ChunkUri from the response below by taking the text between "ChunkUri": and ,"IsResume":
{"Method":"Standard","ChunkUri":"https://sf-apiadapter-sharefile-useast.sharefile.com/service/apiadapter/api/standardupload?uploadid=cabd0d0f-df06-4986-ae9e-958514ea16f5&accountid=a9f1feea-c601-45f7-ad4d-12a08c7eba73&zoneid=zpfed2b3f5-fbbf-4ed5-9a58-f1bd888f01&zsid=ef&h=xNWf9apYIkqIxFEqKE5eLZSICPY6A5RtsKig08lVZl4%3D","IsResume":false,"ResumeIndex":0,"ResumeOffset":0,"ResumeFileHash":"","MaxNumberOfThreads":4,"odata.metadata":"https://ncgm.sf-api.com/sf/v3/$metadata#UploadSpecification/ShareFile.Api.Models.UploadSpecification#Element","odata.type":"ShareFile.Api.Models.UploadSpecification"}
While posting to the ChunkUri to upload a file, I receive the following response:
ChilkatLog:
SynchronousRequest:
DllDate: Feb 26 2020
ChilkatVersion: 9.5.0.82
UnlockPrefix: NCGRNG.CB4062022
Architecture: Little Endian; 32-bit
Language: Delphi DLL
VerboseLogging: 0
domain:
port: 443
ssl: True
httpRequest:
httpVersion: 1.1
verb: POST
path: /
contentType: multipart/form-data
charset: windows-1252
sendCharset: 0
mimeHeader: Expect: 100-continue
requestParams:
requestItem:
name: File1
contentType: application/octet-stream
fileOnDisk: xxx.pdf
numValueBytes: 27884
--requestItem
--requestParams
--httpRequest
Component successfully unlocked using purchased unlock code.
fullRequest:
a_synchronousRequest:
generateRequestHeader:
httpRequestGenStartLine:
genStartLine:
startLine: POST / HTTP/1.1
--genStartLine
--httpRequestGenStartLine
addCookies:
Not auto-adding cookies.
sendCookies: 1
cookieDir:
--addCookies
genMultipartFormData:
requestParam:
name: File1
filename: xxx.pdf
--requestParam
--genMultipartFormData
--generateRequestHeader
fullHttpRequest:
No domain
--fullHttpRequest
success: 0
--a_synchronousRequest
success: 0
--fullRequest
urlObject_loadUrl:
No domain in URL
url:
--urlObject_loadUrl
totalTime: Elapsed time: 15 millisec
Failed.
--SynchronousRequest
--ChilkatLog
What I have tried so far:
Added the "https://" prefix - no response received.
Added the host name as a query param before posting to the ChunkUri.
Tried other variations of the HTTP POST.
Need help figuring this out. Appreciate any help.
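For comparison, the failing log above shows an empty domain and a path of "/", so the full ChunkUri never makes it into the request. Below is a minimal sketch of the same multipart POST; Python requests is used purely for illustration, the field name File1 and content type are taken from the log, and chunk_uri is assumed to hold the full URL from the UploadSpecification.

import requests

with open("xxx.pdf", "rb") as f:
    # Multipart/form-data POST of the file to the full ChunkUri (scheme + host + path + query)
    files = {"File1": ("xxx.pdf", f, "application/octet-stream")}
    resp = requests.post(chunk_uri, files=files)

print(resp.status_code, resp.text)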

Related

Getting 503 error from Karate for API which works fine on Postman or Insomnia [duplicate]

This question already has an answer here:
Karate DSL: Getting connection timeout error
(1 answer)
Closed 2 years ago.
While performing a GET call for an API via Karate, I observe a DNS error when the proxy I'm using is commented out, but if I use the proxy it returns a 401 error.
Below is the code:
Feature file code:
Background:
  * url baseUrl
  * def someData = { user: '"myemailid"', 'ContentType': 'application/json', "Accept": "*/*" }
  * headers someData

Scenario: SomeScenario
  Given path '/clients'
  When method GET
  Then status 200
Karate config:
function() {
  karate.configure('proxy', 'ip address')
  var config = {
    baseUrl: 'some url'
  }
  return config;
}
Request sent to the server:
DEBUG com.intuit.karate - request:
1 > GET url
1 > Accept: */*
1 > Accept-Encoding: gzip,deflate
1 > Content-Type: application/json
1 > Host: scrbmapdk007182:8080
1 > Proxy-Connection: Keep-Alive
1 > User-Agent: Apache-HttpClient/4.5.5 (Java/1.8.0_141)
1 > user: "myemail"
The response is a 502 along with a DNS error.
Not sure where I'm going wrong, because it works via Postman. The request sent is the same as in Postman.
Read the docs: https://github.com/intuit/karate#configure
The proxy has to be in http: or https: URI form, including the port number if applicable:
karate.configure('proxy','http://myhost:80');
EDIT: for others landing here, besides the fact that an HTTP proxy may be in the picture - another place where Karate behaves a bit differently than Postman is that Karate does not auto-send an Accept header by default.

Communication between 2 API services on the same cluster not working

I'm running a Kubernetes cluster with 2 API services inside. Upon making an API call to my web-API located in the cluster, I want the call to be forwarded to my backend-API.
This is not happening!
Now I looked into the Ingress config of the backend-API and found its "outside address", so to say, and tried using that when forwarding API calls from the web-API, upon which I received this message (this was an https address):
THE API CALL RESPONSE WAS: StatusCode: 404, ReasonPhrase: 'Not Found', Version: 1.0, Content: System.Net.Http.HttpConnectionResponseContent, Headers:
{
Server: squid/3.1.23
Mime-Version: 1.0
Date: Fri, 22 Oct 2021 23:00:54 GMT
X-Squid-Error: ERR_DNS_FAIL 0
Content-Type: text/html
Content-Length: 3525
}
Then I did some reading and got convinced I should send the internal API call to the backend using its internal IP. So I exec'd into the container running the backend service and viewed the resolv.conf file, where I found two things: a nameserver entry followed by an IP address, and a search entry followed by 3 very long names.
So I used the IP address that came after nameserver, and upon sending an API call to it I got the following response:
http://10.233.0.3/uploadBlob
THE API CALL RESPONSE WAS: StatusCode: 403, ReasonPhrase: 'Forbidden', Version: 1.0, Content: System.Net.Http.HttpConnectionResponseContent, Headers:
{
Server: squid/3.1.23
Mime-Version: 1.0
Date: Sun, 24 Oct 2021 15:59:51 GMT
X-Squid-Error: ERR_ACCESS_DENIED 0
X-Cache: MISS from **************
X-Cache-Lookup: NONE from **************:8080
Via: 1.0 ************* (squid/3.1.23)
Connection: keep-alive
Content-Type: text/html
Content-Length: 3458
}
Note: I had to remove the addresses for security reasons, as I am not the owner of them.
Now I'm all out of ideas as to how I can make this internal call between these two services, and I would appreciate any help I could get.
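For what it's worth, the IP after nameserver in a pod's resolv.conf is normally the cluster DNS server, not the backend itself; inside the cluster a Service is usually reached by its DNS name. A minimal sketch, assuming a Service named backend-api in the default namespace listening on port 80 (both hypothetical) and using Python requests purely for illustration; the squid responses above also suggest an HTTP proxy is being picked up from the environment, so it is disabled here:

import requests

session = requests.Session()
session.trust_env = False   # ignore HTTP_PROXY/HTTPS_PROXY so the call is not routed through the squid proxy

# <service>.<namespace>.svc.cluster.local resolves via the cluster DNS listed in resolv.conf
resp = session.post("http://backend-api.default.svc.cluster.local/uploadBlob", data=b"...")
print(resp.status_code)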

API key auth for Ambassador

I'm trying to figure out how to create a simple API-key-protected proxy with Ambassador on k8s, yet can't seem to find any docs on this.
Specifically, I just want to set it up so it can take a request with an API-KEY header, authenticate it, and if the API-KEY is valid for some client, pass it on to my backend.
I suggest you do the following:
Create an authentication application: for each protected endpoint, this app will be responsible for validating the API key.
Configure Ambassador to redirect requests to this service: you just need to annotate your authentication app's service definition. Example:
---
apiVersion: v1
kind: Service
metadata:
  name: auth-app
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: AuthService
      name: authentication
      auth_service: "auth-app:8080"
      allowed_request_headers:
      - "API-KEY"
spec:
  type: ClusterIP
  selector:
    app: auth-app
  ports:
  - port: 8080
    name: auth-app
    targetPort: auth-app
Configure an endpoint in auth-app corresponding to the endpoint of the app you want to authenticate. Suppose you have an app with a Mapping like this:
apiVersion: ambassador/v1
kind: Mapping
name: myapp-mapping
prefix: /myapp/
service: myapp:8000
Then you need to have an endpoint "/myapp/" in auth-app. You will read your API-KEY header there. If the key is valid, return an HTTP 200 (OK). Ambassador will then send the original message to myapp. If auth-app returns anything other than an HTTP 200, Ambassador will return that response to the client.
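A minimal sketch of what such an auth-app endpoint could look like, using only the Python standard library purely for illustration; the hard-coded key set is hypothetical, and a real service would validate the API-KEY header against your client store:

from http.server import BaseHTTPRequestHandler, HTTPServer

VALID_KEYS = {"my-secret-key"}   # hypothetical key store

class AuthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Ambassador forwards the original request headers; accept or reject based on API-KEY.
        key = self.headers.get("API-KEY")
        self.send_response(200 if key in VALID_KEYS else 401)
        self.end_headers()
    do_POST = do_GET   # handle POSTs the same way

if __name__ == "__main__":
    HTTPServer(("", 8080), AuthHandler).serve_forever()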
Bypass the authentication for the apps that need it. For example, you might need a login app, responsible for providing an API key to the clients. You can bypass authentication for these apps using bypass_auth: true in the mapping:
apiVersion: ambassador/v1
kind: Mapping
name: login-mapping
prefix: /login/
service: login-app:8080
bypass_auth: true
Check this if you want to know more about authentication in Ambassador.
EDIT: According to this answer, it is good practice to use the header Authorization: Bearer {base64-API-KEY}. In Ambassador the Authorization header is allowed by default, so you don't need to pass it in the allowed_request_headers field.
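If you go that route, producing the header value is a one-liner; Python shown purely for illustration, and the key itself is hypothetical:

import base64

api_key = "my-api-key"
header_value = "Bearer " + base64.b64encode(api_key.encode()).decode()
# -> send as: Authorization: Bearer bXktYXBpLWtleQ==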
I settled on this quick and dirty solution after not finding a simple approach (that would not involve spinning up an external authentication service).
You can use Header-based Routing and only allow incoming requests with a matching header:value.
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: protected-mapping
  namespace: default
spec:
  prefix: /protected-path/
  rewrite: /
  headers:
    # Poorman's Bearer authentication
    # Ambassador will return a 404 error unless the Authorization header value is set as below on the incoming requests.
    Authorization: "Bearer <token>"
  service: app:80
Testing
# Not authenticated => 404
$ curl -sI -X GET https://ambassador/protected-path/
HTTP/1.1 404 Not Found
date: Thu, 11 Mar 2021 18:30:27 GMT
server: envoy
content-length: 0
# Authenticated => 200
$ curl -sI -X GET -H 'Authorization: Bearer eEVCV1JtUzBSVUFvQmw4eVRVM25BenJa' https://ambassador/protected-path/
HTTP/1.1 200 OK
content-type: application/json; charset=utf-8
vary: Origin
date: Thu, 11 Mar 2021 18:23:20 GMT
content-length: 15
x-envoy-upstream-service-time: 3
server: envoy
While you could technically use any header:value pair (e.g., x-my-auth-header: header-value) here, the Authorization: Bearer ... scheme seems to be the best option if you want to follow a standard.
Whether to base64-encode or not your token in this case is up to you.
Here's a lengthy explanation of how to read and understand the spec(s) in this regard: https://stackoverflow.com/a/56704746/4550880
It boils down to the following regex format for the token value:
[-a-zA-Z0-9._~+/]+=*
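A quick way to sanity-check a token against that format (Python shown purely for illustration; the first token is the one from the curl example above):

import re

TOKEN_RE = re.compile(r"^[-a-zA-Z0-9._~+/]+=*$")

print(bool(TOKEN_RE.match("eEVCV1JtUzBSVUFvQmw4eVRVM25BenJa")))   # True
print(bool(TOKEN_RE.match("not a valid token!")))                 # False (space and '!' are not allowed)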

Google Cloud MissingSecurityHeader error

While there is a thread about this problem on Google's FAQ, it seems like only two answers there have satisfied other users. I'm certain there is no proxy on my network, and I'm pretty sure I've configured boto, as I see credentials in the request.
Here's the capture from gsutil:
/// Output sanitized
Creating gs://64/...
DEBUG:boto:path=/64/
DEBUG:boto:auth_path=/64/
DEBUG:boto:Method: PUT
DEBUG:boto:Path: /64/
DEBUG:boto:Data: <CreateBucketConfiguration><LocationConstraint>US</LocationConstraint></\
CreateBucketConfiguration>
DEBUG:boto:Headers: {'x-goog-api-version': '2'}
DEBUG:boto:Host: storage.googleapis.com
DEBUG:boto:Params: {}
DEBUG:boto:establishing HTTPS connection: host=storage.googleapis.com, kwargs={'timeout':\
70}
DEBUG:boto:Token: None
DEBUG:oauth2_client:GetAccessToken: checking cache for key dc3
DEBUG:oauth2_client:FileSystemTokenCache.GetToken: key=dc3 present (cache_file=/tmp/o\
auth2_client-tokencache.1000.dc3)
DEBUG:oauth2_client:GetAccessToken: token from cache: AccessToken(token=ya29, expiry=2\
013-07-19 21:05:51.136103Z)
DEBUG:boto:wrapping ssl socket; CA certificate file=.../gsutil/third_party/boto/boto/cace\
rts/cacerts.txt
DEBUG:boto:validating server certificate: hostname=storage.googleapis.com, certificate ho\
sts=['*.googleusercontent.com', '*.blogspot.com', '*.bp.blogspot.com', '*.commondatastora\
ge.googleapis.com', '*.doubleclickusercontent.com', '*.ggpht.com', '*.googledrive.com', '\
*.googlesyndication.com', '*.storage.googleapis.com', 'blogspot.com', 'bp.blogspot.com', \
'commondatastorage.googleapis.com', 'doubleclickusercontent.com', 'ggpht.com', 'googledri\
ve.com', 'googleusercontent.com', 'static.panoramio.com.storage.googleapis.com', 'storage\
.googleapis.com']
GSResponseError: status=400, code=MissingSecurityHeader, reason=Bad Request, detail=A non\
empty x-goog-project-id header is required for this request.
send: 'PUT /64/ HTTP/1.1\r\nHost: storage.googleapis.com\r\nAccept-Encoding: iden\
tity\r\nContent-Length: 98\r\nx-goog-api-version: 2\r\nAuthorization: Bearer ya29\r\nU\
ser-Agent: Boto/2.9.7 (linux2)\r\n\r\n<CreateBucketConfiguration><LocationConstraint>US</\
LocationConstraint></CreateBucketConfiguration>'
reply: 'HTTP/1.1 400 Bad Request\r\n'
header: Content-Type: application/xml; charset=UTF-8^M
header: Content-Length: 232^M
header: Date: Fri, 19 Jul 2013 20:44:24 GMT^M
header: Server: HTTP Upload Server Built on Jul 12 2013 17:12:36 (1373674356)^M
It looks like you might not have a default_project_id specified in your .boto file.
It should look something like this:
[GSUtil]
default_project_id = 1234567890
Alternatively, you can pass the -p option to the gsutil mb command to manually specify a project. From the gsutil mb documentation:
-p proj_id Specifies the project ID under which to create the bucket.
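For example, using the project ID from the config snippet above and the bucket name from the question:
gsutil mb -p 1234567890 gs://64/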