Amazon S3 CORS works with HTTP but not HTTPS

I can get Amazon S3 to pass CORS headers with http, but not with https. How do I get it to work with both? What if we're using Akamai as a CDN?
Here's my bucket configuration:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>https://*</AllowedOrigin>
    <AllowedOrigin>http://*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
Here's my test. The only difference between the two requests below is that one uses http and the other uses https. Both resources load just fine in the browser, but I would like to use them in a CORS setting where the requesting page may be served over https.
pnore#mbp> curl -i -H "Origin: http://example.com" -H "Access-Control-Request-Method: GET" -H 'Pragma: no-cache' --verbose http://my.custom.domain/path/to/file/in/bucket | head -n 15
* Adding handle: conn: 0x7fee83803a00
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fee83803a00) send_pipe: 1, recv_pipe: 0
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* About to connect() to my.custom.domain port 80 (#0)
* Trying 23.23.23.23...
* Connected to my.custom.domain (23.23.23.23) port 80 (#0)
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0> GET /path/to/file/in/bucket HTTP/1.1
> User-Agent: curl/7.30.0
> Host: my.custom.domain
> Accept: */*
> Origin: http://example.com
> Access-Control-Request-Method: GET
> Pragma: no-cache
>
< HTTP/1.1 200 OK
< x-amz-id-2: random
< x-amz-request-id: random
< Access-Control-Allow-Origin: http://example.com
< Access-Control-Allow-Methods: GET
< Access-Control-Max-Age: 3000
< Access-Control-Allow-Credentials: true
< Vary: Origin, Access-Control-Request-Headers, Access-Control-Request-Method
< Last-Modified: Tue, 10 Jun 2014 15:34:38 GMT
< ETag: "random"
< Accept-Ranges: bytes
< Content-Type: video/webm
< Content-Length: 8981905
* Server AmazonS3 is not blacklisted
< Server: AmazonS3
< Date: Fri, 19 Jun 2015 21:31:22 GMT
< Connection: keep-alive
<
{ [data not shown]
HTTP/1.1 200 OK
x-amz-id-2: random
x-amz-request-id: random
Access-Control-Allow-Origin: http://example.com
Access-Control-Allow-Methods: GET
Access-Control-Max-Age: 3000
Access-Control-Allow-Credentials: true
Vary: Origin, Access-Control-Request-Headers, Access-Control-Request-Method
Last-Modified: Tue, 10 Jun 2014 15:34:38 GMT
ETag: "random"
Accept-Ranges: bytes
Content-Type: video/webm
Content-Length: 8981905
Server: AmazonS3
Date: Fri, 19 Jun 2015 21:31:22 GMT
...
pnore#mbp> curl -i -H "Origin: http://example.com" -H "Access-Control-Request-Method: GET" -H 'Pragma: no-cache' --verbose https://my.custom.domain/path/to/file/in/bucket | head -n 15
* Adding handle: conn: 0x7fd24380c000
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fd24380c000) send_pipe: 1, recv_pipe: 0
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* About to connect() to my.custom.domain port 443 (#0)
* Trying 23.23.23.23...
* Connected to my.custom.domain (23.23.23.23) port 443 (#0)
* TLS 1.2 connection using TLS_RSA_WITH_AES_256_CBC_SHA
* Server certificate: my.custom.domain
* Server certificate: GeoTrust SSL CA - G4
* Server certificate: GeoTrust Global CA
> GET /path/to/file/in/bucket HTTP/1.1
> User-Agent: curl/7.30.0
> Host: my.custom.domain
> Accept: */*
> Origin: http://example.com
> Access-Control-Request-Method: GET
> Pragma: no-cache
>
< HTTP/1.1 200 OK
< x-amz-id-2:
< x-amz-request-id:
< Last-Modified: Tue, 10 Jun 2014 15:34:38 GMT
< ETag: "random"
< Accept-Ranges: bytes
< Content-Type: video/webm
< Content-Length: 8981905
* Server AmazonS3 is not blacklisted
< Server: AmazonS3
< Date: Fri, 19 Jun 2015 21:31:29 GMT
< Connection: keep-alive
<
{ [data not shown]
HTTP/1.1 200 OK
x-amz-id-2:
x-amz-request-id:
Last-Modified: Tue, 10 Jun 2014 15:34:38 GMT
ETag: "random"
Accept-Ranges: bytes
Content-Type: video/webm
Content-Length: 8981905
Server: AmazonS3
Date: Fri, 19 Jun 2015 21:31:29 GMT
Connection: keep-alive
...
Note that the first request contains the desired Access-Control-Allow-Origin header, and the second does not.
I've also tried <AllowedOrigin>*</AllowedOrigin> and using different <CORSRule> blocks for each <AllowedOrigin>.
References I've checked:
Getting S3 CORS Access-Control-Allow-Origin to dynamically echo requesting domain
Amazon S3 CORS (Cross-Origin Resource Sharing) and Firefox cross-domain font loading
Aws S3 Bucket CORS configuration is not saving properly
http://blog.errorception.com/2014/11/enabling-cors-on-amazon-cloudfront-with.html
Correct S3 + Cloudfront CORS Configuration?
https://forums.aws.amazon.com/thread.jspa?messageID=377513
How to Configure SSL for Amazon S3 bucket
HTTPS for Amazon S3 static website
SSL on Amazon S3 as "static website"

I couldn't find documentation that explicitly mentions it, but it appears that the bucket's CORS configuration only honors one <AllowedOrigin> per <CORSRule> element. You are allowed up to 100 <CORSRule> entries in the configuration. Therefore, to get your configuration to support both http and https, create two <CORSRule> entries, like so:
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>https://*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
  <CORSRule>
    <AllowedOrigin>http://*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
FWIW, I have not tried it, but the configuration may also support a protocol-agnostic format, e.g. simply <AllowedOrigin>//*</AllowedOrigin>.
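Since it's easy to save a configuration that silently behaves differently from what you wrote, a quick local sanity check can confirm the shape of the XML before uploading it. This is a sketch of my own (the helper name and inline sample are not from the question); it counts AllowedOrigin entries per CORSRule, which should each be 1 if the one-origin-per-rule theory holds:

```python
import xml.etree.ElementTree as ET

# S3 CORS documents live in this namespace; findall() needs it spelled out.
NS = "{http://s3.amazonaws.com/doc/2006-03-01/}"

def origins_per_rule(config_xml: str) -> list:
    """Return the AllowedOrigin count for each CORSRule in the document."""
    root = ET.fromstring(config_xml)
    return [len(rule.findall(NS + "AllowedOrigin"))
            for rule in root.findall(NS + "CORSRule")]

two_rules = """<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>https://*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
  </CORSRule>
  <CORSRule>
    <AllowedOrigin>http://*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
  </CORSRule>
</CORSConfiguration>"""
```

For the split configuration above, `origins_per_rule(two_rules)` yields `[1, 1]`: two rules, one origin each.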

curl changes the URI in the authorization header for digest behind proxy

I am trying to use curl for an HTTP request.
I have to use it behind an enterprise proxy server. The remote host uses digest authentication.
I am using the following curl command.
curl -x "http://proxy_username:proxy_pass@proxyIp.xxx.xxx.xxx:8080" -L -X GET "https://remote-host.something.com:443/tomcat_servlet/UploadServlet" --digest -u digest_auth_user:digest_auth_pass -v -k
But I get 400 bad request from apache httpd. The full output from curl is
* Trying proxyIp.xxx.xxx.xxx:8080...
* Connected to proxyIp.xxx.xxx.xxx (proxyIp.xxx.xxx.xxx) port 8080 (#0)
* allocate connect buffer
* Establish HTTP proxy tunnel to remote-host.something.com:443
* Proxy auth using Basic with user 'proxy_username'
* Server auth using Digest with user 'digest_auth_user'
> CONNECT remote-host.something.com:443 HTTP/1.1
> Host: remote-host.something.com:443
> Proxy-Authorization: Basic <redacted>
> User-Agent: curl/7.83.1
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 200 Connection established
< Via:HTTP/1.1 s_proxy_nrt
<
* Proxy replied 200 to CONNECT request
* CONNECT phase completed
* schannel: disabled automatic use of client certificate
* ALPN: offers http/1.1
* ALPN: server did not agree on a protocol. Uses default.
* Server auth using Digest with user 'digest_auth_user'
> GET /tomcat_servlet/UploadServlet HTTP/1.1
> Host: remote-host.something.com
> User-Agent: curl/7.83.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 307 Temporary Redirect
< Server: Cisco Umbrella
< Date: Tue, 14 Feb 2023 02:52:03 GMT
< Content-Type: text/html
< Content-Length: 190
< Connection: keep-alive
< Set-Cookie: swg_https_a2bc=1; Path=/; Expires=Tue, 14-Feb-23 03:02:03 GMT; domain=remote-host.something.com; SameSite=None; Secure
< Location: https://remote-host.something.com/tomcat_servlet/UploadServlet?swg_a2bc=1
< Via: HTTP/1.1 s_proxy_nrt
<
* Ignoring the response-body
* Connection #0 to host proxyIp.xxx.xxx.xxx left intact
* Issue another request to this URL: 'https://remote-host.something.com/tomcat_servlet/UploadServlet?swg_a2bc=1'
* Found bundle for host: 0x1a0ed47d970 [serially]
* Re-using existing connection #0 with proxy proxyIp.xxx.xxx.xxx
* Connected to proxyIp.xxx.xxx.xxx (proxyIp.xxx.xxx.xxx) port 8080 (#0)
* Server auth using Digest with user 'digest_auth_user'
> GET /tomcat_servlet/UploadServlet?swg_a2bc=1 HTTP/1.1
> Host: remote-host.something.com
> User-Agent: curl/7.83.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 401 Unauthorized
< Date: Tue, 14 Feb 2023 02:52:03 GMT
< Content-Type: text/html; charset=iso-8859-1
< Content-Length: 381
< Connection: keep-alive
< Server: Apache/2.4.48 (Win64) OpenSSL/1.1.1k
< WWW-Authenticate: Digest realm="https_transfer", nonce="redacted", algorithm=MD5, qop="auth"
< Via: HTTP/1.1 m_proxy_nrt
<
* Ignoring the response-body
* Connection #0 to host proxyIp.xxx.xxx.xxx left intact
* Issue another request to this URL: 'https://remote-host.something.com/tomcat_servlet/UploadServlet?swg_a2bc=1'
* Found bundle for host: 0x1a0ed47d970 [serially]
* Re-using existing connection #0 with proxy proxyIp.xxx.xxx.xxx
* Connected to proxyIp.xxx.xxx.xxx (proxyIp.xxx.xxx.xxx) port 8080 (#0)
* Server auth using Digest with user 'digest_auth_user'
> GET /tomcat_servlet/UploadServlet?swg_a2bc=1 HTTP/1.1
> Host: remote-host.something.com
> Authorization: Digest username="digest_auth_user",realm="https_transfer",nonce="redacted",uri="/tomcat_servlet/UploadServlet?swg_a2bc=1",cnonce="redacted",nc=00000001,algorithm=MD5,response="redacted",qop="redacted"
> User-Agent: curl/7.83.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 400 Bad Request
< Date: Tue, 14 Feb 2023 02:52:03 GMT
< Content-Type: text/html; charset=iso-8859-1
< Content-Length: 226
< Connection: keep-alive
< Server: Apache/2.4.48 (Win64) OpenSSL/1.1.1k
< Via: HTTP/1.1 m_proxy_nrt
<
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>400 Bad Request</title>
</head><body>
<h1>Bad Request</h1>
<p>Your browser sent a request that this server could not understand.<br />
</p>
</body></html>
* Connection #0 to host proxyIp.xxx.xxx.xxx left intact
On the server side, I get the following in the httpd log.
[auth_digest:error] [pid 3052:tid 1928] [client xxx.xxx.xxx.xxx:xxx] AH01786: uri mismatch - </tomcat_servlet/UploadServlet?swg_a2bc=1> does not match request-uri </tomcat_servlet/UploadServlet>
Indeed, curl is carrying the query string it received from the proxy's redirect into the uri field of the Authorization header.
Settings of my httpd
<Location /tomcat_servlet>
ProxyPass http://localhost:8080/tomcat_servlet
ProxyPassReverse http://localhost:8080/tomcat_servlet
AuthType Digest
AuthName https_transfer
AuthUserFile ${SRVROOT}/conf/.htpasswd
Require valid-user
</Location>
How do I use curl in this situation? Or should I change some settings on the httpd side?
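For reference, the reason the uri field matters so much: with qop="auth", the client folds the request URI into the hashed response (RFC 2617), and Apache both compares the uri= field against the request line and recomputes the hash with it. If curl signs /tomcat_servlet/UploadServlet?swg_a2bc=1 while Apache expects /tomcat_servlet/UploadServlet, the digests can never match. A sketch of the computation (all credential values below are hypothetical placeholders):

```python
import hashlib

def _md5(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(user, realm, password, method, uri, nonce, nc, cnonce, qop="auth"):
    """RFC 2617 response with qop="auth": MD5(HA1:nonce:nc:cnonce:qop:HA2)."""
    ha1 = _md5(f"{user}:{realm}:{password}")   # the secret part
    ha2 = _md5(f"{method}:{uri}")              # binds the request URI into the hash
    return _md5(f"{ha1}:{nonce}:{nc}:{cnonce}:{qop}:{ha2}")

# Identical credentials, nonce, and counter -- but the two URIs produce
# different responses, so the server's recomputed digest cannot match.
with_query = digest_response("u", "https_transfer", "p", "GET",
                             "/tomcat_servlet/UploadServlet?swg_a2bc=1",
                             "n1", "00000001", "c1")
without_query = digest_response("u", "https_transfer", "p", "GET",
                                "/tomcat_servlet/UploadServlet",
                                "n1", "00000001", "c1")
```

This is why stripping the swg_a2bc=1 query (or teaching the server to accept it) is the crux of the problem, whichever side it gets fixed on.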

Index file for a subdirectory through CloudFront

I am trying to do a perfectly conventional thing: I am using
CloudFront / S3 to host a static website, but I also want to host
another website in a subdirectory. Following the instructions, I
believe I got S3 to work:
% curl -v http://mydomain.me.s3-website-us-west-1.amazonaws.com/c
> GET /c HTTP/1.1
> Host: mydomain.me.s3-website-us-west-1.amazonaws.com
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 302 Moved Temporarily
< x-amz-error-code: Found
< x-amz-error-message: Resource Found
< x-amz-request-id: 9BB13A73FFB4503E
< x-amz-id-2: 3JX26tNdHi1irPbFJS7E1BifwliygqRZsZIc/qZptjBqBjjmGL7YGK6xfG23GZR70R0Ou+3ZAiM=
< Location: /c/
< Content-Type: text/html; charset=utf-8
< Content-Length: 313
< Date: Tue, 01 Dec 2020 01:58:08 GMT
< Server: AmazonS3
So /c is redirecting to /c/, which I believe is correct, and that new location definitely serves correctly:
% curl -v http://mydomain.me.s3-website-us-west-1.amazonaws.com/c/
> GET /c/ HTTP/1.1
> Host: mydomain.me.s3-website-us-west-1.amazonaws.com
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 200 OK
< x-amz-id-2: BD0wdDnhonp7Y5i2b7mUDVbIXKYu4O52YPUKVQx5GDaLW5hmDzcrsF/EixdksCtkt/NK6Bg24hY=
< x-amz-request-id: 7F11B109218EF9ED
< Date: Tue, 01 Dec 2020 01:58:11 GMT
< Last-Modified: Tue, 01 Dec 2020 01:31:59 GMT
< x-amz-version-id: zSq5IxE3Ug8oG5SSW.lZsCYydp42.h.4
< ETag: "7999ccd49fe930021167ae6f8fe95eb6"
< Content-Type: text/html
< Content-Length: 36
< Server: AmazonS3
<
And it actually gives me my file. But when I try to go through CloudFront for /c:
% curl -v https://mydomain.me/c
> GET /c HTTP/2
> Host: mydomain.me
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/2 403
< content-type: application/xml
< date: Tue, 01 Dec 2020 01:59:43 GMT
< server: AmazonS3
< x-cache: Error from cloudfront
< via: 1.1 58b53da3f7d231b76d30fcffbf4945a1.cloudfront.net (CloudFront)
< x-amz-cf-pop: SFO20-C1
< x-amz-cf-id: PSjqsinkkfheUfhEPVYbbujMqemugFbrYxM-pQMIihMk3dpp2W4Bmw==
and it downloads the familiar S3 Access Denied response. For /c/, it is even weirder:
% curl -v https://mydomain.me/c/
> GET /c/ HTTP/2
> Host: mydomain.me
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/2 200
< content-type: application/x-directory; charset=UTF-8
< content-length: 0
< last-modified: Tue, 01 Dec 2020 01:30:44 GMT
< x-amz-version-id: 4L.jn6WG3emcGutRuwEZv_lE0aO07AGR
< accept-ranges: bytes
< server: AmazonS3
< date: Tue, 01 Dec 2020 02:00:31 GMT
< etag: "d41d8cd98f00b204e9800998ecf8427e"
< x-cache: RefreshHit from cloudfront
< via: 1.1 37d64bca4c93552139fb3a85c9c4a119.cloudfront.net (CloudFront)
< x-amz-cf-pop: SFO20-C1
< x-amz-cf-id: r5lS4QTmg07XhIXRlXsNJ4qcJaWXfj5Ik9fXZPY_dzLjED-A2MhBiA==
It "works", but it returns an empty file, which it says is a directory listing.
I have logging turned on, and that last one returns:
b5063beaaa3c80c2ad85635ddb1c5fac3da6b5510e9ef332c9e0df0c9abdd45a mydomain.me [01/Dec/2020:01:57:47 +0000] 73.202.134.48 b5063beaaa3c80c2ad85635ddb1c5fac3da6b5510e9ef332c9e0df0c9abdd45a 116EA2ED16AA56DE REST.GET.NOTIFICATION - "GET /mydomain.me?notification= HTTP/1.1" 200 - 115 - 15 - "-" "S3Console/0.4, aws-internal/3 aws-sdk-java/1.11.888 Linux/4.9.217-0.3.ac.206.84.332.metal1.x86_64 OpenJDK_64-Bit_Server_VM/25.262-b10 java/1.8.0_262 vendor/Oracle_Corporation" - noe+YUO+FeYaIukSpTTKl9npt1R0+uAr4Hqzx/mQge2bfhydBiiquR9EWG3iGanDRjK/EagN5Ss= SigV4 ECDHE-RSA-AES128-SHA AuthHeader s3-us-west-1.amazonaws.com TLSv1.2
CloudFront is running some Java library?
curl -v https://mydomain.me/c/index.html works fine.
I assume I have misconfigured CloudFront, but cannot figure out how. Any suggestions?
1. Click on the CloudFront Distribution ID.
2. Select the tab "Origins and Origin Groups".
3. Click the checkbox for the first item under "Origins" (assuming you only have one).
4. Click "Edit".
5. Change the "Origin Domain Name" to "mydomain.me.s3-website-us-west-1.amazonaws.com" (following your example).
6. Click "Yes, Edit".
I've done this a hundred times, I know this is a requirement, and it bites me every time!
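For context on why this fixes it: index documents and the /c -> /c/ redirect are features of the S3 *website* endpoint only. A CloudFront origin pointed at the plain bucket (REST) endpoint serves raw keys, which is why /c returned Access Denied and /c/ came back as a zero-byte application/x-directory object. A throwaway helper of my own (purely illustrative, not an AWS API) to check which kind of endpoint a distribution origin names:

```python
def is_website_endpoint(origin_domain: str) -> bool:
    """True for an S3 website endpoint, False for a REST endpoint.

    Website endpoints look like <bucket>.s3-website-<region>.amazonaws.com
    (some regions use a dot: s3-website.<region>); only they serve index
    documents and folder redirects. REST endpoints look like
    <bucket>.s3.<region>.amazonaws.com.
    """
    return ".s3-website" in origin_domain

print(is_website_endpoint("mydomain.me.s3-website-us-west-1.amazonaws.com"))  # True
print(is_website_endpoint("mydomain.me.s3.us-west-1.amazonaws.com"))          # False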

Cannot access API on-premises

I have installed Tyk (dashboard, gateway & pump) as Docker images on our local machine.
We created an API (System Management -> APIs -> Add New API) with the below-mentioned configuration via the Tyk Dashboard UI.
API-Name: My API
Listen Path: /test-api/
Target URL: http://httpbin.org/
Now the problem is that I am getting a "Not Found" error when I access the API.
Could someone help me to resolve this issue?
Request: curl -X GET http://api-dashboard:3000/test-api/get -v
Response: 404 (Not Found)
Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to api-dashboard (127.0.0.1) port 3000 (#0)
> GET /test-api/get HTTP/1.1
> Host: api-dashboard:3000
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Access-Control-Allow-Credentials: true
< Cache-Control: no-store, no-cache, private
< Strict-Transport-Security: max-age=63072000; includeSubDomains
< X-Content-Type-Options: nosniff
< X-Frame-Options: DENY
< Date: Wed, 24 Apr 2019 08:58:35 GMT
< Content-Length: 9
< Content-Type: text/plain; charset=utf-8
<
* Connection #0 to host api-dashboard left intact
You are calling the dashboard; you should be calling your gateway URL instead,
e.g. http://api-gateway:8080/test-api/get
The Tyk gateway's default port is 8080.

Invalid client type getting verification code for limited input devices with google oauth

I'm trying to add Google login for limited-input devices, as described here, for a Web application client type.
Using my client_id, I cannot get the verification code because I keep getting this error:
$> curl -d "client_id=887293527777-tf5uf5q5skss8sbktp1vpo67p2v5b7i7.apps.googleusercontent.com&scope=email%20profile" https://accounts.google.com/o/oauth2/device/code
{
"error" : "invalid_client",
"error_description" : "Invalid client type."
}
And with verbose output:
$> curl -d "client_id=887293527777-tf5uf5q5skss8sbktp1vpo67p2v5b7i7.apps.googleusercontent.com&scope=email%20profile" https://accounts.google.com/o/oauth2/device/code -vvv
* Trying 209.85.203.84...
* TCP_NODELAY set
* Connected to accounts.google.com (209.85.203.84) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* ALPN, server accepted to use h2
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
* subject: CN=accounts.google.com,O=Google Inc,L=Mountain View,ST=California,C=US
* start date: Feb 07 21:22:30 2018 GMT
* expire date: May 02 21:11:00 2018 GMT
* common name: accounts.google.com
* issuer: CN=Google Internet Authority G2,O=Google Inc,C=US
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x56450ea69cd0)
> POST /o/oauth2/device/code HTTP/1.1
> Host: accounts.google.com
> User-Agent: curl/7.51.0
> Accept: */*
> Content-Length: 104
> Content-Type: application/x-www-form-urlencoded
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
* We are completely uploaded and fine
< HTTP/2 401
< content-type: application/json; charset=utf-8
< x-content-type-options: nosniff
< cache-control: no-cache, no-store, max-age=0, must-revalidate
< pragma: no-cache
< expires: Mon, 01 Jan 1990 00:00:00 GMT
< date: Fri, 23 Feb 2018 14:29:41 GMT
< server: ESF
< x-xss-protection: 1; mode=block
< x-frame-options: SAMEORIGIN
< alt-svc: hq=":443"; ma=2592000; quic=51303431; quic=51303339; quic=51303338; quic=51303337; quic=51303335,quic=":443"; ma=2592000; v="41,39,38,37,35"
< accept-ranges: none
< vary: Accept-Encoding
<
{
"error" : "invalid_client",
"error_description" : "Invalid client type."
* Curl_http_done: called premature == 0
* Connection #0 to host accounts.google.com left intact
}
This is really annoying, given that they provide a curl example in their guide. I've tried from JavaScript too, but with no luck.
Edit: I can curl successfully using Other as the client type, so I don't think the problem is on my side, but Other is no good to me because I need to use Web application in order to set CORS.
That flow is only supported for client type "TVs and Limited Input devices". See https://developers.google.com/identity/sign-in/devices#get_a_client_id_and_client_secret

How do I force apache to deliver a file in chunked encoded format

I am verifying whether my application handles file content delivered in chunked-encoding mode. I am not sure what change to make to httpd.conf to force chunked encoding through Apache. Is it even possible with Apache, and if not, what would be an easier solution? I am using Apache 2.4.2 and HTTP/1.1.
By default, keep-alive is On in Apache, and I do not see the data as chunked when testing with Wireshark.
EDIT: Added more info:
The only way I managed to do this was by enabling the deflate module.
Then I configured my client to send an "Accept-Encoding: gzip, deflate" header, and Apache would compress and send the file back in chunked mode.
I had to enable the file type in the module, though:
AddOutputFilterByType DEFLATE image/png
See example:
curl --raw -v --header "Accept-Encoding: gzip, deflate" http://localhost/image.png | more
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET /image.png HTTP/1.1
> User-Agent: curl/7.35.0
> Host: localhost
> Accept: */*
> Accept-Encoding: gzip, deflate
>
< HTTP/1.1 200 OK
< Date: Mon, 13 Apr 2015 10:08:45 GMT
* Server Apache/2.4.7 (Ubuntu) is not blacklisted
< Server: Apache/2.4.7 (Ubuntu)
< Last-Modified: Mon, 13 Apr 2015 09:48:53 GMT
< ETag: "3b5306-5139805976dae-gzip"
< Accept-Ranges: bytes
< Vary: Accept-Encoding
< Content-Encoding: gzip
< Transfer-Encoding: chunked
< Content-Type: image/png
<
This resource produces chunked results http://www.httpwatch.com/httpgallery/chunked/
which is very useful for testing clients. You can see this by running
$ curl --raw -i http://www.httpwatch.com/httpgallery/chunked/
HTTP/1.1 200 OK
Cache-Control: private,Public
Transfer-Encoding: chunked
Content-Type: text/html
Server: Microsoft-IIS/7.5
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Mon, 22 Jul 2013 09:41:04 GMT
7b
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
2d
<html xmlns="http://www.w3.org/1999/xhtml">
....
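The 7b and 2d lines in that output are chunk sizes in hexadecimal, each followed by that many bytes of body and a CRLF, with a zero-size chunk terminating the stream. A minimal de-chunker (my own sketch, not part of the answer) shows how a client reassembles such a body:

```python
def dechunk(raw: bytes) -> bytes:
    """Reassemble a Transfer-Encoding: chunked body (trailers ignored)."""
    body = b""
    while True:
        size_line, _, raw = raw.partition(b"\r\n")
        size = int(size_line.split(b";")[0], 16)   # hex size; drop chunk extensions
        if size == 0:                              # terminating zero-size chunk
            break
        body += raw[:size]
        raw = raw[size + 2:]                       # skip chunk data plus its CRLF
    return body

# Classic example: two chunks, "Wiki" (0x4) and "pedia" (0x5).
print(dechunk(b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n"))  # b'Wikipedia'
```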
I tried this way to get HTTP chunked-encoded data on Ubuntu; it might help.
On the Apache server, create a file index.php in the directory where your index page lives (e.g. /var/www/html/; PHP must be installed) and paste in the content below:
<?php phpinfo(); ?>
Then curl the page as below:
root#ubuntu-16:~# curl -v http://10.11.0.230:2222/index.php
* Trying 10.11.0.230...
* Connected to 10.11.0.230 (10.11.0.230) port 2222 (#0)
> GET /index.php HTTP/1.1
> Host: 10.11.0.230:2222
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Wed, 01 Jul 2020 07:51:24 GMT
< Server: Apache/2.4.18 (Ubuntu)
< Vary: Accept-Encoding
< Transfer-Encoding: chunked
< Content-Type: text/html; charset=UTF-8
<
<!DOCTYPE html>
<html>
<body>
...
...
...