Getting the request URL with hash #content in Helidon SE - helidon

I am writing a Helidon SE webserver and I am not able to access the #content part of the request URL.
I tried the following:
ServerRequest request;
request.uri()
request.absoluteUri()
None of them gives me the hash (fragment) part of the request URL.
For example, given https://example.com/#code=xyz, I need to extract 'xyz'.

The fragment (anchor) is never sent to the server by the client (e.g. browser or curl).
curl -vvv http://localhost:8080/foo\?bob\=alice\#xyz
* Trying 127.0.0.1:8080...
* Connected to localhost (127.0.0.1) port 8080 (#0)
> GET /foo?bob=alice HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.79.1
> Accept: */*
See this answer, which goes into more detail:
Retrieving anchor link in URL for ASP.NET
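If you control the flow that produces that URL, the usual workaround is to deliver the value as a query parameter (e.g. https://example.com/?code=xyz), or to have client-side JavaScript read window.location.hash and forward it to the server. A query parameter does reach the server and can be read there; here is a minimal sketch, assuming Helidon SE 2.x (the class name and /callback route are just illustrative):

import io.helidon.webserver.Routing;
import io.helidon.webserver.WebServer;

public class CallbackServer {
    public static void main(String[] args) {
        Routing routing = Routing.builder()
                .get("/callback", (req, res) -> {
                    // A fragment such as #code=xyz never reaches the server;
                    // a query parameter such as ?code=xyz does, and is read here.
                    String code = req.queryParams().first("code").orElse("<missing>");
                    res.send("code=" + code);
                })
                .build();

        WebServer.builder(routing)
                .port(8080)
                .build()
                .start();
    }
}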

Related

PHP prevent header overwriting by Proxy

I want to access a PHP script hosted on dnsserver.icu via the proxy 207.154.231.211:8080 with curl.
The problem is that the proxy server seems to overwrite the HTTP 200 status code with a 302, making it impossible to reach the script.
curl -v dnsserver.icu gives the following output:
* Rebuilt URL to: dnsserver.icu/
* Trying 134.122.73.150...
* TCP_NODELAY set
* Connected to dnsserver.icu (134.122.73.150) port 80 (#0)
> GET / HTTP/1.1
> Host: dnsserver.icu
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Wed, 15 Apr 2020 20:05:18 GMT
< Server: Apache/2.4.29 (Ubuntu)
< Content-Length: 31
< Content-Type: text/html; charset=UTF-8
<
* Connection #0 to host dnsserver.icu left intact
whereas curl -v -x 207.154.231.211:8080 dnsserver.icu gives the unexpected result of:
* Rebuilt URL to: dnsserver.icu/
* Trying 207.154.231.211...
* TCP_NODELAY set
* Connected to 207.154.231.211 (207.154.231.211) port 8080 (#0)
> GET http://dnsserver.icu/ HTTP/1.1
> Host: dnsserver.icu
> User-Agent: curl/7.58.0
> Accept: */*
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 302 Found
< Location: http://206.189.153.135
< Date: Wed, 15 Apr 2020 20:08:37 GMT
< Connection: keep-alive
< Transfer-Encoding: chunked
<
* Connection #0 to host 207.154.231.211 left intact
The address in the Location header is also changing sometimes.
I have already experimented with different header configurations, but I couldn't get it to work. When I log every call to the PHP script, it doesn't look like the server is even reached through the proxy (no call logged). Furthermore, the Apache access log is empty.
Strangely, this is not the case for all domains. I'm able to access e.g. ip-api.com, google.com, and also less popular domains like proxyjudge.us (but not its IP equivalent, 45.33.35.141) through the proxy.
I have no idea what the reason for this behaviour is. Is there any 'trick' in terms of header settings or Apache configuration that makes it possible to also access dnsserver.icu through this proxy? Something I haven't thought of?
I appreciate any help.

Letting upstream handle cors requests

I'm trying to set up a service that already handles CORS requests, and I would like to keep it that way instead of handling CORS on the edge proxy.
Leaving the cors field blank didn't help at all.
Is there any way to achieve this with Ambassador?
Ambassador will not handle CORS in any way unless you set the cors parameter in a Mapping or Module config.
Even if that is set, the way Envoy handles CORS seems to be the behavior you are looking for.
Taking a look at the comment linked in this issue, https://github.com/envoyproxy/envoy/issues/300#issuecomment-296796675, we can see how Envoy chose to implement its CORS filter. Specifically:
Assign values to the CORS headers in the response: For each of the headers specified in Table 1 above:
a. let value be the option for the header config
b. if value is not defined, continue to the next header
c. else, write the response header for the specified config option
This means that Envoy will first take the values of the CORS headers set by the upstream service and only write the configured values for headers that are not already set in the response.
You can test this by creating a route to httpbin.org (which handles CORS itself) and setting the cors parameter in the Mapping.
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: cors-httpbin
spec:
  prefix: /httpbin/
  service: httpbin.org
  cors:
    origins:
    - http://foo.example
    methods:
    - POST
    - OPTIONS
The Mapping above should configure Envoy to set the access-control-allow-origin and access-control-allow-methods response headers to http://foo.example and POST, OPTIONS respectively. However, after sending a test request to this endpoint, we can see that we are instead getting very different CORS headers back in the response:
curl https://aes.example.com/httpbin/headers -v -H "Origin: http://bar.example.com" -H "Access-Control-Request-Method: GET" -X OPTIONS
* Trying 34.74.58.157:443...
* Connected to aes.example.com (10.11.12.100) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate: aes.example.com
* Server certificate: Let's Encrypt Authority X3
* Server certificate: DST Root CA X3
> OPTIONS /httpbin/headers HTTP/1.1
> Host: aes.example.com
> User-Agent: curl/7.69.0
> Accept: */*
> Origin: http://bar.example.com
> Access-Control-Request-Method: GET
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< date: Thu, 19 Mar 2020 13:25:48 GMT
< content-type: text/html; charset=utf-8
< content-length: 0
< server: envoy
< allow: HEAD, OPTIONS, GET
< access-control-allow-origin: http://bar.example.com
< access-control-allow-credentials: true
< access-control-allow-methods: GET, POST, PUT, DELETE, PATCH, OPTIONS
< access-control-max-age: 3600
< x-envoy-upstream-service-time: 33
<
* Connection #0 to host aes.example.com left intact
This is because the httpbin.org upstream is setting these headers in the response and so Envoy is defaulting to using them instead of forcing the CORS configuration we gave it. In this way, Envoy really acts as a default for CORS settings and allows upstreams to set more or less restrictive configurations as they see fit.
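For completeness, "letting the upstream handle CORS" simply means the upstream service writes the CORS headers on its own responses, as httpbin.org does above. A minimal sketch of such an upstream, using Java's built-in com.sun.net.httpserver purely for illustration (it is not part of the Ambassador setup in the question):

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class CorsUpstream {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            // The upstream sets its own CORS headers; per the behavior described
            // above, Envoy keeps these instead of overwriting them with the
            // Mapping's cors configuration.
            exchange.getResponseHeaders().add("Access-Control-Allow-Origin", "http://bar.example.com");
            exchange.getResponseHeaders().add("Access-Control-Allow-Methods", "GET, POST, OPTIONS");
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}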
This behavior can be confusing and caused me a lot of headaches trying to figure it out. I hope I helped clear it up for you.

Apache HTTP2 h2c mode not working properly

I would like to enable h2c mode on Apache so that I can use the HTTP/2 protocol. In my virtual host configuration I have included the line:
Protocols h2c http/1.1
I have also followed the advice to disable the prefork MPM, but it doesn't work as expected.
Currently I'm using Apache 2.4.29 on Ubuntu.
Case 1) curl requesting http2 upgrade
$ curl -vs --http2 http://domain1.com
* Rebuilt URL to: http://domain1.com/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to domain1.com (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> Host: domain1.com
> User-Agent: curl/7.58.0
> Accept: */*
> Connection: Upgrade, HTTP2-Settings
> Upgrade: h2c
> HTTP2-Settings: AAMAAABkAARAAAAAAAIAAAAA
>
< HTTP/1.1 101 Switching Protocols
< Upgrade: h2c
< Connection: Upgrade
* Received 101
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=28
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 200
< date: Sun, 00 Jan 1900 00:00:00 GMT
< server: Apache/2.4.29 (Ubuntu)
< last-modified: Fri, 29 Mar 2019 13:52:29 GMT
< etag: W/"2aa6-5853bfb4c71ac"
< accept-ranges: bytes
< content-length: 10918
< vary: Accept-Encoding
< content-type: text/html
<
.... [snip website code] ....
Case 2) curl directly using http2
$ curl -vs --http2-prior-knowledge http://domain1.com
* Rebuilt URL to: http://domain1.com/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to domain1.com (127.0.0.1) port 80 (#0)
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x5604f1cb1580)
> GET / HTTP/2
> Host: domain1.com
> User-Agent: curl/7.58.0
> Accept: */*
>
* http2 error: Remote peer returned unexpected data while we expected SETTINGS frame. Perhaps, peer does not support HTTP/2 properly.
As you can see, Case 1 is working as expected, but Case 2 is not returning the site. Why is this happening? Is it because Apache is restricting direct use of HTTP/2 without TLS?
I hope you can give me an answer, as I don't know why things are not working.
I think I have found the answer, and I think it is a bug in the latest Apache versions. If I only enable h2c in a single virtual host the error persists, but if I enable it on the default virtual host (000-default.conf) everything seems to work fine.
Another potential solution I have tested, and which works, is to enable the h2 and h2c protocols for every virtual host by modifying the mods-enabled/http2.load file:
LoadModule http2_module /usr/lib/apache2/modules/mod_http2.so
<IfModule http2_module>
Protocols h2 h2c http/1.1
</IfModule>
Either of the options above seems to make the system work as expected, both with protocol negotiation and with prior knowledge.

WSO2 create API for SCEP server HTTP GET POST

I have a SCEP (Simple Certificate Enrollment Protocol) endpoint which uses plain HTTP GET and POST with query parameters, for example:
http://localhost/scepserver/pkiclient.exe?operation=GetCACaps&message=CA
I am trying to implement this API in WSO2 API Manager with an endpoint pointing to my SCEP server. I tried to do it using "Design a New REST API", but it is not working, and I do not want to use JSON in the message payload.
How should I define an API for SCEP, with an example of calling the endpoint with query parameters?
EDIT:
Trying through curl:
curl -X GET 'http://10.30.9.145:8280/devscep/1/pkiclient.exe?operation=GetCACaps&message=CA' -v
Result:
* Hostname was NOT found in DNS cache
* Trying 10.30.9.145...
* Connected to 10.30.9.145 (10.30.9.145) port 8280 (#0)
> GET /devscep/1/pkiclient.exe?operation=GetCACaps&message=CA HTTP/1.1
> User-Agent: curl/7.38.0
> Host: 10.30.9.145:8280
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< activityID: 22588072245075117976472
< WWW-Authenticate: realm="WSO2 API Manager"
< Content-Type: application/soap+xml; charset=UTF-8
< Date: Fri, 14 Jul 2017 13:02:16 GMT
< Transfer-Encoding: chunked
<
* Connection #0 to host 10.30.9.145 left intact
<?xml version='1.0' encoding='UTF-8'?><soapenv:Envelope xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope"><soapenv:Body/></soapenv:Envelope>
In the resources section of the design page, you can define expected query parameters for each resource.
https://docs.wso2.com/display/AM210/Key+Concepts#KeyConcepts-APIresources

Custom Status Line Not Working in RESTLET

I am writing a REST application and I am using Restlet. My service has a PUT method. As part of the response, I would like to return a custom status line to the user.
For example:
200 - Successfully Created and Data processing in progress.
I tried to set the status as below:
@Put
public String storeItem(Representation entity) throws Exception {
    // Some processing
    Status st = new Status(420, null, "REASON_PHRASE", "Some description", null);
    setStatus(st);
    return "Some String Representation";
}
When I try to access the URL using curl, I get the following status line:
curl -v -X PUT "http://localhost:8080/extensible/data/process"
* About to connect() to localhost port 8080 (#0)
* Trying ::1... connected
* Connected to localhost (::1) port 8080 (#0)
> PUT /extensible/data/upload HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: localhost:8080
> Accept: */*
>
< HTTP/1.1 420 420
< Content-Type: application/json; charset=UTF-8
< Date: Wed, 22 Jan 2014 06:56:24 GMT
< Accept-Ranges: bytes
< Server: Restlet-Framework/2.0.1
< Vary: Accept-Charset, Accept-Encoding, Accept-Language, Accept
< Content-Length: 21
<
* Connection #0 to host localhost left intact
* Closing connection #0
The status line above is HTTP/1.1 420 420, but I expected HTTP/1.1 420 REASON_PHRASE.
What am I doing wrong?
Any help will be greatly appreciated.
My two cents about the design:
1. I think you need a pretty good reason to use a custom HTTP status, and I don't think this is the case.
A REST API is consumed by applications, and the applications that consume the API know that this particular PUT is part of an asynchronous process.
Therefore a simple 200 with the new id as data, or a link to the edit URL, should be enough.
The client application should notify the user if it decides to do so.
2. If you still think a custom status is the right way, you should consider using a 20* status rather than 420, as in the sketch below.
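For instance, a minimal sketch of that suggestion, assuming Restlet 2.x and its standard Status.SUCCESS_ACCEPTED constant (202 Accepted) instead of a custom code:

import org.restlet.data.Status;
import org.restlet.representation.Representation;
import org.restlet.resource.Put;
import org.restlet.resource.ServerResource;

public class ItemResource extends ServerResource {

    @Put
    public String storeItem(Representation entity) throws Exception {
        // Kick off the asynchronous processing here.
        // 202 Accepted tells the client the request was received and is still being processed.
        setStatus(Status.SUCCESS_ACCEPTED);
        return "Data accepted for processing";
    }
}

The standard reason phrase ("Accepted") should then be rendered correctly by the connector, sidestepping the custom reason phrase issue.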