How to develop an API with hapi.js in CoffeeScript

I'm new to hapi.js and I would like to try it out. Is it possible to develop with hapi.js and CoffeeScript? Could you supply an example of how to set up hapi.js with CoffeeScript?

The same approach that works for any JavaScript library applies here. From CoffeeScript.org:
The golden rule of CoffeeScript is: "It's just JavaScript". The code compiles one-to-one into the equivalent JS, and there is no interpretation at runtime. You can use any existing JavaScript library seamlessly from CoffeeScript (and vice-versa). The compiled output is readable and pretty-printed, will work in every JavaScript runtime, and tends to run as fast or faster than the equivalent handwritten JavaScript.
If you know CoffeeScript, then you should be able to translate all examples into CoffeeScript by yourself. Otherwise, you should learn CoffeeScript first.

Copy and paste any example code into http://js2.coffee and it will show you what it would look like in CoffeeScript.
And yes, you can easily use CoffeeScript with hapi, like this:
Hapi = require 'hapi'

server = new Hapi.Server()
server.connection
  port: 3000

server.start ->
  console.log 'Server running at:', server.info.uri
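Since the question asks about building an API, here is a minimal sketch of adding a JSON route with the same pre-v17 hapi API used above (the path and payload are made up for illustration):

server.route
  method: 'GET'
  path: '/api/hello'
  handler: (request, reply) ->
    reply message: 'hello from hapi'

The indented object is CoffeeScript's implicit-braces syntax; it compiles to a plain server.route({...}) call, so any example from the hapi documentation translates the same way.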

Related

How to call validate.js and use it in a feature file (to verify a specific part of the response)?

I am trying to use https://github.com/validatorjs/validator.js, which is a library with some awesome validators out of the box.
While reading the Karate documentation I saw that there is a way to read and call .js files, so I thought there had to be a way to do this: https://intuit.github.io/karate/#schema-validation
I have gotten this far, but I get: ReferenceError: "isNumeric" is not defined in <eval> at line number 1
var validator = require('validator');
* def isNumeric = validator.isNumeric;
In a scenario:
And match each response/list/costs/numberX == '#? isNumeric(_)'
I feel I am really close...
Unfortunately, as of now Karate supports only ES5 (via Nashorn) and does not support JS module concepts such as the import or require keywords.
Personally I think this is a good thing: the more JS you use, the more unmaintainable your scripts become, and there is no good way to debug it. Note that Karate has syntax for "functional style" loops and transformations.
Also, I have found that in most of the cases where you think JS is needed, Karate's built-in schema validation is sufficient or a better choice.
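For example, the check from the question can be written without require() at all, by defining the predicate as an inline ES5 function (a sketch reusing the XPath from the question):

* def isNumeric = function(s){ return !isNaN(parseFloat(s)) && isFinite(s) }
And match each response/list/costs/numberX == '#? isNumeric(_)'

Where the value is already typed as a number, Karate's built-in '#number' marker does the same job with no JS at all.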
That said, we hope that when we switch to Graal (proposed, and a must for Java 13+) we will be able to use ES6+, and I am personally looking forward to the arrow notation for functions.

Rust IDE not detecting Result from error_chain, thinks I'm using the std::result::Result

I have an errors.rs file with error_chain! {}, which exports Result, ResultExt, Error and ErrorKind.
If I glob-import it with use self::errors::*, IntelliJ thinks that I'm using the default Result (std::result::Result, I think). However, if I explicitly import the types using use self::errors::{Result, ...}, everything works out hunky-dory.
I can tell because the standard result has two type params, but the error_chain one has only one.
In either case, it still compiles.
I'm using the standard Rust IntelliJ plugin, version 0.1.0.1991.
Help! Does anyone know how to get the plugin to understand what the macro is doing?
The IntelliJ-Rust plugin uses its own code parser. This allows it to leverage all the IntelliJ platform capabilities (code navigation, formatting, refactoring, inspections, quick documentation, markers and many others), but it requires implementing all the language features, which is not a simple task for Rust (you can find a more in-depth discussion of the Rust compiler's parser versus an IDE parser in this reddit post).
Macro expansion is probably the biggest language feature not supported by the plugin's parser at the moment. That is, the plugin sees the error_chain! call and can resolve it to its definition, but it doesn't expand it into the actual code, and hence doesn't know about the new Result type that shadows the one from the stdlib. Unfortunately, in some cases this leads to false-positive error messages like yours.
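To see why the shadowing trips the IDE up, here is a simplified, hand-written sketch of the relevant piece of what error_chain! generates (the real expansion contains much more):

mod errors {
    #[derive(Debug)]
    pub struct Error;  // stand-in for the generated error type

    // The generated alias takes ONE type parameter, unlike
    // std::result::Result<T, E>, which takes two.
    pub type Result<T> = ::std::result::Result<T, Error>;
}

use self::errors::*;

// rustc expands the macro and resolves `Result` to the one-parameter
// alias; an IDE that skips expansion falls back to std::result::Result
// and expects two type parameters.
fn might_fail() -> Result<()> {
    Ok(())
}

fn main() {
    println!("{}", might_fail().is_ok());
}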
I've converted this error annotation into an inspection, so in the next plugin version you'll be able to switch it off entirely or for a particular code block. Work on macro expansion is also in progress.

How to compile and use Kotlin code at runtime?

I'm trying to create a Kotlin Vert.x language-support module, and I need a way to compile Kotlin files and load the results with a ClassLoader. I've tried using the kotlin-compiler library and found the K2JVMCompiler class, but it seems to support only command-line-style arguments via its exec method. Is there a way to compile a Kotlin file at runtime (possibly without having to save and read .class files) and immediately load the generated classes, kind of like Groovy does? If not, do you have any useful suggestions for compiler arguments, or pretty much any advice?
This feels like an XY problem. You want to know how to compile Kotlin on the fly so that you can more easily use Vert.x by running from Kotlin source files instead of compiled code. But the recommended path for Vert.x is to write a simple bit of compiled code that deploys your verticle.
In the question, your link for language support says Vert.x 2 in the path ("vertx.io/vertx2/language_support.html"), which is different from how things are done in Vert.x 3. I think you are merging two thoughts into one: first, that Vert.x 3 wants you to run Java/Kotlin files from source (it doesn't really; that was a Vert.x 2 thing it moved away from for compiled languages), and second, that you need custom language support (you don't).
You should use Vert.x 3 by running compiled code. To do so, build your classes and run your own main() that deploys a verticle programmatically. Your code would be as simple as:
import io.vertx.core.Vertx

fun main(args: Array<String>) {
    val vertx = Vertx.vertx()
    vertx.deployVerticle(SomeVerticleOfMine())
}
Alternatively, the docs for running and deploying from the command-line say:
Vert.x will compile the Java source file on the fly before running it. This is really useful for quickly prototyping verticles and great for demos. No need to set-up a Maven or Gradle build first to get going!
And really, it is indeed just for prototyping and quick testing; it isn't any faster than letting your IDE do the same and running the compiled classes. You also then have the debugging features of the IDE, which are infinitely valuable.
For a few helper libraries for using Kotlin with Vert.x, have a look at these options:
Vert.x 3 module for Klutter - I am the author, one of my libraries
Vert.x 3 helpers for Kotlin - by Cy6erGn0m
Kovert, a REST framework for Vert.x 3 - I am the author, one of my libraries
Vert.x nubes - not Kotlin specific, but makes Vert.x-Web friendlier for JVM languages.
There is a full sample project running Vert.x + Kovert (specifically, start with the App class). You can look at the code of Kovert to do your own similar work of starting and running Vert.x nicely, with promises or however you wish. The Kovert docs link to code for starting Vert.x and for starting a verticle that uses Vert.x-Web, so there is more sample code you can read. It helps to know Injekt (a light-weight dependency registry), Kovenant (a promises library), and Klutter configuration injection to understand the complete sample.
One other quick note: Vert.x has codegen support for other languages, but since Kotlin can call all of the Java version directly, it does not need that support either.
Kotlin 1.1 comes with javax.script (JSR-223) support, which means you can use it as a scripting engine, similarly to JavaScript with Nashorn.
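As a hedged sketch of that route (assuming the Kotlin JSR-223 engine is on the classpath; in current versions it ships as org.jetbrains.kotlin:kotlin-scripting-jsr223):

import javax.script.ScriptEngineManager

fun main(args: Array<String>) {
    // Looks up the Kotlin engine by the .kts extension.
    val engine = ScriptEngineManager().getEngineByExtension("kts")
        ?: error("Kotlin script engine not found on the classpath")
    // Compiles and evaluates Kotlin source at runtime,
    // with no .class files written to disk.
    val result = engine.eval("listOf(1, 2, 3).map { it * it }.sum()")
    println(result) // 14
}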

Using Elm for front-end development and serving dynamic Elm pages through Haskell

I started with Elm yesterday and I really enjoy using it. Without any experience in front-end development I could build a nice-looking webpage in only 30 lines of code, which is amazing.
Now I really want to use it in a real-life example: I want to build a small blog.
But I need a way to communicate with Elm. For example, I need to query my database, get a list of blog entries [Blog], and then pass them to Elm.
I am not sure how I would do this. I was looking through the popular Haskell frameworks like Yesod, Snap, and Happstack, and the first thing that I found was http://hackage.haskell.org/package/snap-elm-0.1.1.2/docs/Snap-Elm.html
But it seems to be intended for serving static Elm files, and I need to pass arguments to it.
Is there any framework you would recommend that already has Elm support for serving dynamic Elm pages? And if not, how would you do it?
My idea was to use Elm just as a skeleton: generate a normal HTML file with Yesod, Snap, or Happstack and integrate that file into Elm. Would this be possible?
It would look something like this:
container 1000 1000 middle <| displayHtml "/pages/my_generated_html_page.html"
Edit:
My first hacky attempt was this:
tPage = plainText """
  <script src="http://code.jquery.com/jquery-1.10.1.min.js"></script>
  <script>
    $(function(){
      $("#includedContent").load("/home/maik/b.html");
    });
  </script>
  <div id="includedContent"></div>
  """
Unfortunately, I am not allowed to use script tags in Elm.
I recommend studying elm-lang.org's source code. The majority of it is pure Elm, but there are pages that are generated on the server side with Haskell.
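One common pattern, sketched here with Scotty only to keep the example short (Yesod, Snap, and Happstack can all do the same), is to treat Elm as the entire front end and have Haskell expose the blog entries as JSON that the Elm app fetches over HTTP. All names below are made up for illustration:

{-# LANGUAGE DeriveGeneric, OverloadedStrings #-}
import Web.Scotty
import Data.Aeson (ToJSON)
import GHC.Generics (Generic)

data Blog = Blog { postTitle :: String, postBody :: String } deriving Generic
instance ToJSON Blog

main :: IO ()
main = scotty 3000 $ do
  -- the page that loads the compiled elm.js
  get "/" (file "index.html")
  -- the Elm app requests this and decodes the JSON itself;
  -- in a real blog the list would come from the database
  get "/api/posts" (json [Blog "First post" "Hello from Haskell"])

This keeps the Haskell side free of HTML generation and avoids embedding script tags in Elm entirely.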

How to locally test cross-domain builds?

Using the Dojo toolkit, what is the proper way to locally test code that will be executed as cross-domain, without making an actual build?
As it appears, there are three possible options (each with its own drawbacks):
1. Using a local (non-xd) XMLHttpRequest dojo.require
This option does not really test the xd behavior, since it dojo.require[s] the js synchronously via XHR.
2. djConfig.debugAtAllCosts = true;
Although this option does load the required code asynchronously (via the script tag), it also pulls the code in via XHR, parses the dojo.require[s] inside it, and pulls those in too. This (using the loader_debug), again, is not what the loader_xd does. More info on this topic in a different question.
3. Creating a cross-domain build
This approach requires a build, which is not possible in the environment in which I'm running the code. (We're using our own on-the-fly build process, which includes only the js necessary for a particular page; this process is not suitable for development.)
Thus, my question: is there a way to use the loader_xd that does not require an xd build (which adds the xd prefix/suffix to every file)?
The second option (debugAtAllCosts) also makes me question the motivation for pre-parsing the dojo.require[s]. If the loader_xd will not (or rather cannot) pre-parse, why does the method created for testing/debugging do so?
peller has described the situation. If you wanted to just generate the .xd.js files for your modules, you could look at util/buildscripts/jslib/buildUtilXd.js and its buildUtilXd.xdgen() function.
It would take a bit of work to make your own script, but you could look at util/buildscripts/build.js for pointers.
I am hoping that in the future (maybe in the Dojo 2.x timeframe) we can switch to a loader that just uses script tags, with a module format that has a function wrapper around the module, written by the developer. That would allow the same module format to work in both the local and xd cases.
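That wrapper format is essentially what AMD later standardized (and what Dojo eventually adopted in 1.7); a sketch of the shape being described, with made-up module names:

// One function wrapper per module, written by the developer, so the
// loader can fetch it with a plain <script> tag, locally or cross-domain:
define("my/module", ["dojo/_base/lang"], function (lang) {
  return {
    greet: function (name) {
      return "hello, " + name;
    }
  };
});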
I don't think there's any way to do XD loading without building and deploying it. Your analysis of the various options seems about right.
debugAtAllCosts is there specifically to solve a debugging problem: most browsers, until recently, could not do anything intelligent with code brought in through eval. Even today, Firefox reports exceptions in the console as occurring at the eval site (bootstrap.js), with a line number offset from the eval rather than from the actual eval buffer, and normally that eval buffer is anonymous. Firebug was the first debugger to jump through some hoops to enhance the debugging experience, and it permits special metadata that Dojo's loader injects between the XHR and the eval to determine a file path for the source. WebKit/Safari have recently implemented this as well. I believe debugAtAllCosts pre-dates the XD loader.
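For what it's worth, that metadata convention survives today as the sourceURL comment, which debuggers use to attribute eval'd code to a file path. A minimal illustration (not Dojo's actual loader code):

var src = 'function greet(n) { return "hello, " + n; }';
// The trailing comment tells the debugger which "file" this eval buffer
// represents; Firebug and older engines used //@ instead of //#.
eval(src + "\n//# sourceURL=my/module.js");
console.log(greet("xd loader"));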