Karate mock API: /__admin/stop endpoint not stopping mock service (standalone jar v1.3.0) - karate

I am running a mock service using the karate standalone jar version 1.3.0.
Mocking is working fine.
However, when I make a GET request to the /__admin/stop endpoint, the process logs that the endpoint has been called and closes the listen port, but does not exit.
Execution output and scripts follow.
on startup:
java -jar /opt/karate/karate.jar -m src/test/java/mocks/fs/john.feature -p 65080
16:17:16.671 [main] INFO com.intuit.karate - Karate version: 1.3.0
16:17:17.315 [main] INFO com.intuit.karate - mock server initialized: src/test/java/mocks/fs/john.feature
16:17:17.425 [main] DEBUG com.intuit.karate.http.HttpServer - server started: aad-9mpcfg3:65080
netstat on listen socket:
netstat -na | grep 65080
tcp6 0 0 :::65080 :::* LISTEN
submission of curl commands:
curl -v http://localhost:65080/john/transactionservice/ping
* Trying 127.0.0.1:65080...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 65080 (#0)
> GET /john/transactionservice/ping HTTP/1.1
> Host: localhost:65080
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< content-length: 49
< server: Armeria/1.18.0
< date: Mon, 28 Nov 2022 16:19:37 GMT
<
* Connection #0 to host localhost left intact
{"message":"this is the JOHN TransactionService"}
curl -v http://localhost:65080/__admin/stop
* Trying 127.0.0.1:65080...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 65080 (#0)
> GET /__admin/stop HTTP/1.1
> Host: localhost:65080
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 202 Accepted
< content-type: text/plain; charset=utf-8
< content-length: 12
< server: Armeria/1.18.0
< date: Mon, 28 Nov 2022 16:19:54 GMT
<
* Connection #0 to host localhost left intact
output after curl commands:
java -jar /opt/karate/karate.jar -m src/test/java/mocks/fs/john.feature -p 65080
16:17:16.671 [main] INFO com.intuit.karate - Karate version: 1.3.0
16:17:17.315 [main] INFO com.intuit.karate - mock server initialized: src/test/java/mocks/fs/john.feature
16:17:17.425 [main] DEBUG com.intuit.karate.http.HttpServer - server started: aad-9mpcfg3:65080
16:19:37.312 [armeria-common-worker-epoll-2-1] DEBUG com.intuit.karate - scenario matched at line 3: pathMatches('john/transactionservice/ping') && methodIs('get')
16:19:54.603 [armeria-common-worker-epoll-2-2] DEBUG com.intuit.karate.http.HttpServer - received command to stop server: /__admin/stop
At this point, netstat on the listen port shows the socket is closed, but the process continues to run:
ps -ef | grep karate
matt 15070 15069 1 16:17 pts/3 00:00:03 java -jar karate-1.3.0.jar -m src/test/java/mocks/fs/john.feature -p 65080
This is the test mock that I am using (src/test/java/mocks/fs/john.feature):
Feature: JOHN TransactionService mock
Scenario: pathMatches('john/transactionservice/ping') && methodIs('get')
* def response = {}
* set response.message = 'this is the JOHN TransactionService'
* def responseStatus = 200
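For completeness, the same mock can also be started and stopped from Java code instead of the standalone jar. This is only a sketch based on my reading of the Karate docs (I am assuming the com.intuit.karate.core.MockServer builder and its stop() method work as documented there); I have not wired it into this setup:
import com.intuit.karate.core.MockServer;

public class JohnMockRunner {

    public static void main(String[] args) {
        // start the mock from the same feature file on the same port
        MockServer server = MockServer
                .feature("src/test/java/mocks/fs/john.feature")
                .http(65080)
                .build();
        // ... exercise the mock at http://localhost:65080 ...
        // stop it explicitly instead of relying on /__admin/stop
        server.stop();
    }
}
If the standalone jar keeps running after /__admin/stop, controlling the lifecycle from code like this would at least give a deterministic way to shut the JVM down.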
Question:
Is there something else I should be doing to make the mock process stop? I think I'm following the guidance at https://github.com/karatelabs/karate/tree/master/karate-netty#stopping
Thank you in anticipation of responses.

Related

PHP prevent header overwriting by Proxy

I want to access a PHP script hosted on dnsserver.icu via the proxy 207.154.231.211:8080 using curl.
The problem is that the proxy server seems to overwrite the HTTP 200 status code with a 302, making it impossible to reach the script.
curl -v dnsserver.icu gives the following output:
* Rebuilt URL to: dnsserver.icu/
* Trying 134.122.73.150...
* TCP_NODELAY set
* Connected to dnsserver.icu (134.122.73.150) port 80 (#0)
> GET / HTTP/1.1
> Host: dnsserver.icu
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Wed, 15 Apr 2020 20:05:18 GMT
< Server: Apache/2.4.29 (Ubuntu)
< Content-Length: 31
< Content-Type: text/html; charset=UTF-8
<
* Connection #0 to host dnsserver.icu left intact
whereas curl -v -x 207.154.231.211:8080 dnsserver.icu gives the unexpected result of:
* Rebuilt URL to: dnsserver.icu/
* Trying 207.154.231.211...
* TCP_NODELAY set
* Connected to 207.154.231.211 (207.154.231.211) port 8080 (#0)
> GET http://dnsserver.icu/ HTTP/1.1
> Host: dnsserver.icu
> User-Agent: curl/7.58.0
> Accept: */*
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 302 Found
< Location: http://206.189.153.135
< Date: Wed, 15 Apr 2020 20:08:37 GMT
< Connection: keep-alive
< Transfer-Encoding: chunked
<
* Connection #0 to host 207.154.231.211 left intact
The address in the Location header is also changing sometimes.
I already experimented with different header configurations but I couldn't get it to work. When I log every call to the PHP script, it doesn't look like the server is even reached via the proxy (no call logged). Furthermore, the Apache access log is empty.
Strangely, this is not the case for all domains. I'm able to access e.g. ip-api.com, google.com, and also less popular domains like proxyjudge.us (but not its IP equivalent, 45.33.35.141) through the proxy.
I have no idea what the reason for this behaviour is. Is there any 'trick' in terms of header settings or Apache configuration that makes it possible to also access dnsserver.icu through this proxy? Something I haven't thought of?
I appreciate any help.

Apache HTTP2 h2c mode not working properly

I would like to enable h2c mode on Apache so I can use the HTTP/2 protocol. In my virtual host configuration I have included the line:
Protocols h2c http/1.1
I have also followed the advice to disable prefork, but it doesn't work as expected.
Currently I'm using Apache 2.4.29 on Ubuntu.
Case 1) curl requesting http2 upgrade
$ curl -vs --http2 http://domain1.com
* Rebuilt URL to: http://domain1.com/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to domain1.com (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> Host: domain1.com
> User-Agent: curl/7.58.0
> Accept: */*
> Connection: Upgrade, HTTP2-Settings
> Upgrade: h2c
> HTTP2-Settings: AAMAAABkAARAAAAAAAIAAAAA
>
< HTTP/1.1 101 Switching Protocols
< Upgrade: h2c
< Connection: Upgrade
* Received 101
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=28
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 200
< date: Sun, 00 Jan 1900 00:00:00 GMT
< server: Apache/2.4.29 (Ubuntu)
< last-modified: Fri, 29 Mar 2019 13:52:29 GMT
< etag: W/"2aa6-5853bfb4c71ac"
< accept-ranges: bytes
< content-length: 10918
< vary: Accept-Encoding
< content-type: text/html
<
.... [snip website code] ....
Case 2) curl directly using http2
$ curl -vs --http2-prior-knowledge http://domain1.com
* Rebuilt URL to: http://domain1.com/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to domain1.com (127.0.0.1) port 80 (#0)
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x5604f1cb1580)
> GET / HTTP/2
> Host: domain1.com
> User-Agent: curl/7.58.0
> Accept: */*
>
* http2 error: Remote peer returned unexpected data while we expected SETTINGS frame. Perhaps, peer does not support HTTP/2 properly.
As you can see, Case 1 is working as expected, but Case 2 is not returning the site. Why is this happening? Is it because Apache is restricting direct use of HTTP/2 without security (TLS)?
I hope you can give me an answer, as I don't know why this isn't working.
I think I have found the answer, and I think it is a bug in the latest Apache versions. If I only enable h2c in a virtual host, the error persists, but if I enable it on the default virtual host (000-default.conf), everything seems to work fine.
Another potential solution I have tested, and that is working, is to enable the h2 and h2c protocols for every virtual host by modifying the mods-enabled/http2.load file:
LoadModule http2_module /usr/lib/apache2/modules/mod_http2.so
<IfModule http2_module>
Protocols h2 h2c http/1.1
</IfModule>
Either of the above options seems to make the system work as expected, both with protocol negotiation and with prior knowledge.
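If it helps to verify the negotiation from code rather than from curl, here is a small Java 11+ sketch (domain1.com is the placeholder host from the examples above; as far as I know java.net.http only does the cleartext Upgrade over http://, so this exercises roughly Case 1, not prior knowledge):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class H2cCheck {

    public static void main(String[] args) throws Exception {
        // ask for HTTP/2; over plain http:// the client uses the h2c Upgrade mechanism
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://domain1.com/"))
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        // HTTP_2 here means the upgrade succeeded; HTTP_1_1 means it did not
        System.out.println("negotiated: " + response.version());
        System.out.println("status: " + response.statusCode());
    }
}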

Apache2 proxypass does NOT work for POST request

I'm using Apache2 to route incoming requests to a backend Python Flask web service and a Splunk web service; they are running on the same server. I need to use POST to upload my file.
Here's my apache proxy setup:
ProxyPass /api/ http://10.68.57.166:5000/api/
ProxyPass / http://10.68.57.166:8000/
The test shows the POST request has not been successfully routed. The GET request is routed fine, although the response says the method is not allowed, because my Flask service only allows POST and OPTIONS.
Please give me some thoughts on how to get this fixed; much appreciated.
Gent79 tmp $ curl -v -X POST http://10.68.57.166/api/upload
* Trying 10.68.57.166...
* Connected to 10.68.57.166 (10.68.57.166) port 80 (#0)
> POST /api/upload HTTP/1.1
> Host: 10.68.57.166
> User-Agent: curl/7.45.0
> Accept: */*
>
< HTTP/1.1 400 BAD REQUEST
< Date: Fri, 15 Jul 2016 02:28:28 GMT
< Server: Werkzeug/0.9.6 Python/2.7.9
< Content-Type: text/html
< Content-Length: 192
< Connection: close
<
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>400 Bad Request</title>
<h1>Bad Request</h1>
<p>The browser (or proxy) sent a request that this server could not understand.</p>
* Closing connection 0
Gent79 tmp $ curl -v -X GET http://10.68.57.166/api/upload
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 10.68.57.166...
* Connected to 10.68.57.166 (10.68.57.166) port 80 (#0)
> GET /api/upload HTTP/1.1
> Host: 10.68.57.166
> User-Agent: curl/7.45.0
> Accept: */*
>
< HTTP/1.1 405 METHOD NOT ALLOWED
< Date: Fri, 15 Jul 2016 02:28:41 GMT
< Server: Werkzeug/0.9.6 Python/2.7.9
< Content-Type: text/html
< Allow: POST, OPTIONS
< Content-Length: 178
<
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>405 Method Not Allowed</title>
<h1>Method Not Allowed</h1>
<p>The method is not allowed for the requested URL.</p>
* Connection #0 to host 10.68.57.166 left intact

Vagrant is not port forwarding with VirtualBox and PuPHPet

Problem
I cannot connect to my virtual machine's environment, despite using an accepted private network and forwarded ports.
Description
System
MAC OSX 10.9.3
Vagrant 1.6.3
VirtualBox 4.3.12
Vagrantfile file
Puphpet file
Cisco AnyConnect VPN
Using Private Company Network
ifconfig results
Virtual Host file for Web Project.
Upon vagrant up, I vagrant ssh into my VirtualBox VM. The following requests work as expected:
[06:04 PM]-[vagrant#precise64]-[~]
$ curl -v 192.168.56.101
* About to connect() to 192.168.56.101 port 80 (#0)
* Trying 192.168.56.101... connected
> GET / HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: 192.168.56.101
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Wed, 09 Jul 2014 18:05:05 GMT
< Server: Apache/2.4.9 (Ubuntu)
< Vary: Accept-Encoding
< Content-Length: 481
< Connection: close
< Content-Type: text/html;charset=UTF-8
<
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<html>
<head>
<title>Index of /</title>
</head>
<body>
<h1>Index of /</h1>
<table>
<tr><th valign="top"><img src="/icons/blank.gif" alt="[ICO]"></th><th>Name</th><th>Last modified</th><th>Size</th><th>Description</th></tr>
<tr><th colspan="5"><hr></th></tr>
<tr><th colspan="5"><hr></th></tr>
</table>
</body></html>
* Closing connection #0
[06:05 PM]-[vagrant#precise64]-[~]
$ curl -v playworldsystems.dev
* About to connect() to playworldsystems.dev port 80 (#0)
* Trying 192.168.56.101... connected
> GET / HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: playworldsystems.dev
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Wed, 09 Jul 2014 18:05:52 GMT
< Server: Apache/2.4.9 (Ubuntu)
< X-Powered-By: PHP/5.5.14-2+deb.sury.org~precise+1
< Vary: Accept-Encoding
< Content-Length: 122
< Connection: close
< Content-Type: text/html
<
<pre class='xdebug-var-dump' dir='ltr'><small>string</small> <font color='#cc0000'>'hello'</font> <i>(length=5)</i>
* Closing connection #0
</pre>
However, when I try both commands from within my HOST terminal, I receive the following error:
curl -v 192.168.56.101
* About to connect() to 192.168.56.101 port 80 (#0)
* Trying 192.168.56.101...
* Adding handle: conn: 0x7fc6f1000000
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fc6f1000000) send_pipe: 1, recv_pipe: 0
* Failed connect to 192.168.56.101:80; Operation timed out
* Closing connection 0
curl: (7) Failed connect to 192.168.56.101:80; Operation timed out
☁ ~ curl -v playworldsystems.dev
* Adding handle: conn: 0x7fdbe9803000
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fdbe9803000) send_pipe: 1, recv_pipe: 0
* About to connect() to playworldsystems.dev port 80 (#0)
* Trying 192.168.56.101...
* Failed connect to playworldsystems.dev:80; Operation timed out
* Closing connection 0
curl: (7) Failed connect to playworldsystems.dev:80; Operation timed out
Even after trying to cURL both port 6969 and port 8080, I still have no success. I've used random IP addresses as well as ports. I've tried altering my virtual host to other port numbers. Nothing seems to work. I will mention that when first starting out, this vagrant setup did work, but only twice. Each time it was working, I would vagrant suspend and vagrant up the next morning to find that my prior solution no longer worked.
Perhaps this fact is what makes this process so frustrating. I want to get off of MAMP for my current projects, but fear that it is either my work machine's settings that are interfering, or perhaps some other application or network related issue. I'm unsure what steps to take and am looking forward to any and all solutions.
A few tips/pointers:
Check that the firewall rules are not blocking the ports; if all else fails, just flush them for testing with "iptables -F".
Make sure the ports are forwarded correctly (https://docs.vagrantup.com/v2/networking/forwarded_ports.html), so what you want for HTTP is host: 8080, guest: 80.
Sometimes your IP address changes on your host (for example, acquiring a new DHCP lease or joining a new network: home/work/VPN).
Check that you can ping your VM guest from your host via the correct IP from the correct adapter. On the host run "ifconfig -a" (on the Mac there should be en0 for Ethernet and en1 for AirPort). If you can ping the VM guest, then you can hit the web server via port 8080 (http://mylocal.dev:8080/), provided of course that you edited your /etc/hosts to point your vhost to the above-mentioned IP address.

Custom Status Line Not Working in RESTLET

I am writing a REST application and I am using Restlet. My service has a PUT method. As part of the response, I would like to return a custom status to the user.
For example:
200 - Successfully created, and data processing in progress.
I tried to set the status as below:
@Put
public String storeItem(Representation entity) throws Exception {
    // Some processing
    Status st = new Status(420, null, "REASON_PHRASE", "Some description", null);
    setStatus(st);
    return "Some String Representation";
}
When I try to access the URL using curl, I get the following status line:
curl -v -X PUT "http://localhost:8080/extensible/data/process"
* About to connect() to localhost port 8080 (#0)
* Trying ::1... connected
* Connected to localhost (::1) port 8080 (#0)
> PUT /extensible/data/upload HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: localhost:8080
> Accept: */*
>
< HTTP/1.1 420 420
< Content-Type: application/json; charset=UTF-8
< Date: Wed, 22 Jan 2014 06:56:24 GMT
< Accept-Ranges: bytes
< Server: Restlet-Framework/2.0.1
< Vary: Accept-Charset, Accept-Encoding, Accept-Language, Accept
< Content-Length: 21
<
* Connection #0 to host localhost left intact
* Closing connection #0
The status line above is HTTP/1.1 420 420, but I expected a status line of HTTP/1.1 420 REASON_PHRASE.
What am I doing wrong?
Any help will be greatly appreciated.
My two cents about the design:
I think you need a pretty good reason to use a custom HTTP status, and I don't think this is the case.
A REST API is consumed by applications, and the application that consumes this API knows that this particular PUT is part of an asynchronous process.
Therefore a simple 200 with the new id as data, or a link to the edit URL, should be enough.
The client application can notify the user, if it decides to do so.
If you still think a custom status is the right way, you should consider using a 2xx code and not 420.
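To illustrate that suggestion, here is a rough sketch of what the handler could return instead (the resource class name, the id value, and the /extensible/data/process/{id} path are made-up placeholders; I'm assuming the usual Restlet 2.x ServerResource API with setStatus() and getResponse().setLocationRef()):
import org.restlet.data.Status;
import org.restlet.representation.Representation;
import org.restlet.resource.Put;
import org.restlet.resource.ServerResource;

public class DataProcessResource extends ServerResource {

    @Put
    public String storeItem(Representation entity) throws Exception {
        // ... store the item and kick off the asynchronous processing ...
        String newId = "123"; // placeholder for the id of the newly created item

        // a standard 202 Accepted (one of the 20* codes mentioned above):
        // the request was accepted and processing continues in the background
        setStatus(Status.SUCCESS_ACCEPTED);
        // give the client a URL it can poll (or edit) later
        getResponse().setLocationRef("/extensible/data/process/" + newId);
        return "Created " + newId + ", data processing in progress";
    }
}
This keeps the standard reason phrase on the status line and moves the "what happens next" information into the body and the Location header, which avoids the connector issue with custom reason phrases altogether.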