I'm writing some web services using Ocamlnet. More specifically, I'm using Netcgi_scgi in combination with Apache 2.
It looks like all of the functionality is directed at presenting me with an already-parsed view of the user input.
Is there any way for me to access the input stream directly so I can parse the raw POST data myself?
Also, is there a way for me to prevent the Netcgi code from automatically trying to parse the POST data? (I found that, the way I'm doing it, it fails and throws an exception if it doesn't receive the expected key-value format, so my handler never even gets called.)
I would ideally like to be able to support JSON-RPC, which calls for the POST data to be a JSON object with no argument name associated with it. (Well, the spec is not protocol specific so that isn't spelled out, but it seems to me to be the most reasonable interpretation. The old (1.2) spec explicitly shows it done that way.)
The easy way would be (using Nethttpd_plex) to allow the application/json MIME type:
Nethttpd_plex.nethttpd_factory
  ~config_cgi:Netcgi.({
    default_config with
      default_exn_handler = false;
      permitted_input_content_types =
        [ "application/json" ]
        @ default_config.permitted_input_content_types
  })
Then you will get the content of the request in a single parameter named BODY (provided the client sends the Content-Type: application/json header), so you can do something like:
let get_json_request (cgi : Netcgi.cgi_activation) =
  cgi # argument_value "BODY"
  |> Yojson.Basic.from_string

let generate_page (cgi : Netcgi.cgi_activation) =
  get_json_request cgi |> process_json_request
A little harder (but probably better) way would be to provide your own dyn_activation and dyn_handler for your handler, or to create a new service (look for default_dynamic_service in the ocamlnet source)... but I never went down that path.
Use cgi_argument:
let with_arg arg f =
  Io.unwind ~protect:(fun arg -> arg#finalize ()) f arg

let get_arg cgi name =
  try Some (with_arg (cgi#argument name) (fun arg -> arg#value))
  with Not_found -> None

let parse ?default ~validate ~parse cgi name =
  match default, get_arg cgi name with
  | None  , None -> argf "Missing parameter \"%s\"" name
  | Some v, None -> v
  | _     , Some p ->
    (try parse (Pcre.extract ~rex:validate ~full_match:false p)
     with Not_found -> argf "Invalid parameter \"%s\"" name)
References
Netcgi.cgi_argument
How to Write a Simple Web Application using Ocamlnet
I am doing a POC on karate-gatling to reuse my tests. I have referred to the documentation and installed the required versions. First of all, awesome work as usual; it was very easy to set up and get going.
I am calling a feature file from MySimulation.scala which contains three calls to abstract features, as follows:
* def tranRef = TransactionReferenceUtils.generateTransactionReferenceStartWith('09')
* set payloadR /transaction_reference = tranRef
POST API >> /sending/v1/dm
* call read('classpath:../InitiateAbstract.feature')
* match responseStatus == 200
GET API By Reference >> /sending/v1/dm?ref={ref}
* call read('classpath:../RetrieveByRefAbstract.feature') {ref: #(tranRef)}
* match responseStatus == 200
GET API By Id >> /sending/v1/dm/{id}
* call read('classpath:../RetrieveByIdAbstract.feature') {id: #(pmId)}
* match responseStatus == 200
The abstract features use the url keyword to invoke the APIs.
MySimulation.scala looks like this:
class MySimulation extends Simulation {

  val protocol = karateProtocol(
    "/sending/v1/dm?ref={ref}" -> Nil,
    "/send/v1/dm/{id}" -> Nil,
    "/sending/v1/dm" -> Nil
  )

  protocol.nameResolver = (req, ctx) => req.getUrlAndPath()

  val create = scenario("create").exec(karateFeature("classpath:com/mastercard/send/xb/Testcases/Rem1Shot/Remit1ShotWithFrwdFeesRetrieve.feature"))

  setUp(
    create.inject(rampUsers(2) during (5 seconds)).protocols(protocol)
  )
}
Now the issue is that, in the reports, the GET request with {id} and the POST request are aggregated, but the GET requests with ref are reported individually.
I have also tried using nameResolver with getUrlAndPath, but still no luck.
I am not sure if I am missing anything here.
Note:
There was another issue where I was not able to aggregate the GET request with id using the following protocol entries, but it is fine now that I include the full URI.
"/dm/{id}" -> Nil,
"/dm" -> Nil
For that GET request, pass a fake header and use it to control the nameResolver: https://github.com/intuit/karate/tree/master/karate-gatling#nameresolver
I would have expected /sending/v1/{dm} or something like that to work.
Note that in theory you can write some custom Scala code to parse the URL and do the name resolution. If you feel this should be made easier, submit a feature request, or better still, contribute code!
I'm new to Elm. Can anyone help me understand why this code returns the error "could not load the page"? I am pretty sure it has something to do with the returned data being JSON, and I have not yet figured out how to handle that.
Basically, I am new to Elm and want to take it a little further by working with more JSON data from free APIs. Can anyone lend a hand?
import Browser
import Html exposing (Html, text, pre)
import Http


-- MAIN

main =
  Browser.element
    { init = init
    , update = update
    , subscriptions = subscriptions
    , view = view
    }


-- MODEL

type Model
  = Failure
  | Loading
  | Success String

init : () -> (Model, Cmd Msg)
init _ =
  ( Loading
  , Http.get
      { url = "http://api.openweathermap.org/data/2.5/weather?q=naples&APPID=mykey"
      , expect = Http.expectString GotText
      }
  )


-- UPDATE

type Msg
  = GotText (Result Http.Error String)

update : Msg -> Model -> (Model, Cmd Msg)
update msg model =
  case msg of
    GotText result ->
      case result of
        Ok fullText ->
          (Success fullText, Cmd.none)

        Err _ ->
          (Failure, Cmd.none)


-- SUBSCRIPTIONS

subscriptions : Model -> Sub Msg
subscriptions model =
  Sub.none


-- VIEW

view : Model -> Html Msg
view model =
  case model of
    Failure ->
      text "I was unable to load your book."

    Loading ->
      text "Loading..."

    Success fullText ->
      pre [] [ text fullText ]
UPDATE -
This works in Ellie but does not compile locally using Elm 0.19:
Something is off with the body of the `init` definition:
39|> ( Loading
40|> , Http.get
41|> { url = "http://127.0.0.1:8080/test"
42|> , expect = Http.expectString GotText
43|> }
44|> )
The body is a tuple of type:
( Model, Json.Decode.Decoder a -> Http.Request a )
But the type annotation on `init` says it should be:
( Model, Cmd Msg )
-- TYPE MISMATCH ---------------------------------------------- src/Fizzbuzz.elm
The 1st argument to `get` is not what I expect:
40| , Http.get
41|> { url = "http://127.0.0.1:8080/test"
42|> , expect = Http.expectString GotText
43|> }
This argument is a record of type:
{ expect : b, url : String }
But `get` needs the 1st argument to be:
String
-- TOO MANY ARGS ---------------------------------------------- src/Fizzbuzz.elm
The `expectString` value is not a function, but it was given 1 argument.
42| , expect = Http.expectString GotText
^^^^^^^^^^^^^^^^^
Are there any missing commas? Or missing parentheses?
UPDATED - I made a change to try sending some JSON to Elm from my Go webserver to confirm some of the things I learned from your answers. Thanks.
You are trying to make a cross-domain request. The mechanism for doing this is called CORS (Cross-Origin Resource Sharing). To enable CORS requests, the server must explicitly allow requests from your domain using the Access-Control-Allow-Origin response header. The OpenWeather API does not send this header, so when the browser tries to request the data, it is blocked by the browser's security restrictions.
You'll need to do one of the following:
ask openweather to whitelist your domain for CORS response headers (unlikely to be possible)
proxy the openweather request from your own domain's servers
The latter is more likely to be possible, and it also means you'll be able to keep your API key secret. If you make the weather requests from the client, anyone who loads your web page will be able to see the API key in the requests their browser makes, as well as in the source code of your page.
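To make the proxy option concrete, here is a minimal sketch using only the Python standard library (purely illustrative; the same idea applies to the Go server mentioned in your update). The /weather route, the port, and the OPENWEATHER_KEY environment variable are assumptions for the example; the point is that the API key stays on the server, and the browser only ever talks to your own origin, so no CORS headers are needed.

# Minimal same-origin proxy sketch (illustrative only).
# Assumes an OPENWEATHER_KEY environment variable and a /weather?q=<city> route.
import os
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

API_KEY = os.environ["OPENWEATHER_KEY"]   # the key never reaches the browser
UPSTREAM = "http://api.openweathermap.org/data/2.5/weather"

class WeatherProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        parsed = urllib.parse.urlparse(self.path)
        if parsed.path != "/weather":
            self.send_error(404)
            return
        query = urllib.parse.parse_qs(parsed.query)
        city = query.get("q", ["naples"])[0]
        upstream_url = UPSTREAM + "?" + urllib.parse.urlencode(
            {"q": city, "APPID": API_KEY}
        )
        with urllib.request.urlopen(upstream_url) as resp:
            body = resp.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Serve the Elm app and this endpoint from the same origin to avoid CORS.
    HTTPServer(("127.0.0.1", 8080), WeatherProxy).serve_forever()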
The working example given by #5ndG uses an API that has an access-control-allow-origin response header that explicitly whitelists Ellie, which is why it works there. You can look at the requests and responses using your browser's dev tools to see that this is the case.
good luck!
Your code should handle a JSON response fine. It will just give you the JSON as a string. (Though you will probably want to decode the JSON so you can actually use it.)
I tried your code with a testing API I found on Google that returns some JSON: https://jsonplaceholder.typicode.com/todos/1
It works fine, displaying the JSON string as expected, so perhaps there is something wrong with your URL?
See this Ellie for the working example.
It might also be a good idea to not throw away the Http error in your update function; it provides useful information when debugging.
I am writing a trading program that needs to connect to MtGox (a Bitcoin exchange) through API v2, but I keep getting the following error:
URL: 1 https://data.mtgox.com/api/2/BTCUSD/money/bitcoin/address
HTTP Error 403: Forbidden.
Most of my script is a direct copy from here (that is a pastebin link). I just had to change it to work with Python 3.3.
I suspect that it has to do with the part of the script where I use base64.b64encode. In my code, I have to encode my strings as UTF-8 to use base64.b64encode:
url = self.__url_parts + '2/' + path
api2postdatatohash = (path + chr(0) + post_data).encode('utf-8')  # new way to hash for API 2, includes path + NUL
ahmac = base64.b64encode(str(hmac.new(base64.b64decode(self.secret), api2postdatatohash, hashlib.sha512).digest()).encode('utf-8'))
# Create header for auth-requiring operations
header = {
    "User-Agent": 'Arbitrater',
    "Rest-Key": self.key,
    "Rest-Sign": ahmac
}
However, in the other guy's script, he doesn't have to:
url = self.__url_parts + '2/' + path
api2postdatatohash = path + chr(0) + post_data  # new way to hash for API 2, includes path + NUL
ahmac = base64.b64encode(str(hmac.new(base64.b64decode(self.secret), api2postdatatohash, hashlib.sha512).digest()))
# Create header for auth-requiring operations
header = {
    "User-Agent": 'genBTC-bot',
    "Rest-Key": self.key,
    "Rest-Sign": ahmac
}
I'm wondering if that extra encoding is causing my header credentials to be incorrect. I think this is another Python 2 vs. Python 3 problem. I don't know how the other guy got away without converting to UTF-8, because the script won't run if you try to pass a str to b64encode or hmac. Do you see any problems with what I am doing? Is our code equivalent?
This line specifically seems to be the problem -
ahmac = base64.b64encode(str(hmac.new(base64.b64decode(self.secret),api2postdatatohash,hashlib.sha512).digest()).encode('utf-8'))
To clarify, hmac.new() creates an object on which you then call digest(). digest() returns a bytes object, such as:
b.digest()
b'\x92b\x129\xdf\t\xbaPPZ\x00.\x96\xf8%\xaa'
Now, when you call str on this, it turns into
b'\\x92b\\x129\\xdf\\t\\xbaPPZ\\x00.\\x96\\xf8%\\xaa'
So, see what happens there? The byte indicator is now part of the string itself, which you then call encode() on.
str(b.digest()).encode("utf-8")
b"b'\\x92b\\x129\\xdf\\t\\xbaPPZ\\x00.\\x96\\xf8%\\xaa'"
To fix this, since turning bytes into a string and back into bytes was unnecessary anyhow (besides being problematic), I believe this will work:
ahmac = base64.b64encode(hmac.new(base64.b64decode(self.secret),api2postdatatohash,hashlib.sha512).digest())
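As a self-contained illustration of the difference (the secret, path, and post data below are throwaway stand-ins, not real MtGox values), note how the str() round trip produces a different signature than base64-encoding the raw digest:

# Demonstration of why str() on a digest corrupts the signature (illustrative values only).
import base64
import hashlib
import hmac

secret = base64.b64encode(b"not a real MtGox secret")   # stand-in for self.secret
message = b"2/BTCUSD/money/ticker" + b"\x00" + b"nonce=1"

digest = hmac.new(base64.b64decode(secret), message, hashlib.sha512).digest()

good = base64.b64encode(digest)                          # raw digest, what the API expects
bad = base64.b64encode(str(digest).encode('utf-8'))      # the question's version

print(good != bad)                    # True: the two signatures differ
print(base64.b64decode(good) == digest)  # True: the raw digest round-trips cleanly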
I believe you are likely to find help in a related question of mine although it deals with the WebSocket API:
Authenticated call to MtGox WebSocket API in Python 3
Also, the HTTP 403 error seems to indicate that there is something fundamentally wrong with the request. Even if you threw the wrong authentication info at the API, you should have gotten an error message as a response, not a 403. My best guess is that you are using the wrong HTTP method, so check whether you are using the appropriate one (GET/POST).
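For reference, if the script builds its requests with Python 3's urllib.request, the HTTP method is determined by whether a data body is passed, so it is easy to end up with a GET where a POST was intended. A quick way to check what you are actually sending (the URL below is just a placeholder):

# urllib.request sends GET when no body is given and POST when data is supplied.
import urllib.parse
import urllib.request

url = "https://example.com/api/2/BTCUSD/money/info"    # placeholder, not a real endpoint
post_data = urllib.parse.urlencode({"nonce": "1"}).encode("utf-8")

get_req = urllib.request.Request(url)                   # no body -> GET
post_req = urllib.request.Request(url, data=post_data)  # body supplied -> POST

print(get_req.get_method())   # GET
print(post_req.get_method())  # POST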
I've read the suggestions for making your own data handler, for example:
def resource_cb(view, frame, resource, request, response):
    print request.get_uri()
    # get data using urllib with different user-agent...
    request.set_uri('data:....')

web_view.connect('resource-request-starting', resource_cb)
(from http://code.google.com/p/pywebkitgtk/wiki/HowDoI)
will let you download using a custom header/user agent. However, it sometimes complains if set_uri is given a string with a null character, or it gives an error like "** Message: console message: (http://url) #linenumber: SECURITY_ERR: DOM Exception 18: An attempt was made to break through the security policy of the user agent."
Is there a better way to set the browser user agent in pygtk code? This says you can add/remove/replace headers using SoupMessage; however, that documentation is missing...
This code sets a special user-agent:
http://nullege.com/codes/show/src%40p%40r%40PrisPy-HEAD%40PrisPy.py/33/webkit.WebView/python
webkit.WebSettings() allows user-agent switching, and a few other settings, but it seems it doesn't have the option to add other headers.
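For the record, here is a minimal sketch of the WebSettings approach with pywebkitgtk (the 'user-agent' property is the one exposed by WebKitGTK 1.x; the user-agent string and URL are just examples, and availability depends on your installed versions):

# Sketch: switch only the user agent via WebSettings (pywebkitgtk / WebKitGTK 1.x).
import gtk
import webkit

web_view = webkit.WebView()

settings = webkit.WebSettings()
settings.set_property('user-agent', 'MyBrowser/1.0 (example)')  # UA only, not arbitrary headers
web_view.set_settings(settings)

window = gtk.Window()
window.add(web_view)
window.show_all()

web_view.open('http://example.com/')
gtk.main()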