I have a Tornado web server that exposes some endpoints in its API.
I want to be able to document my handlers (endpoints) in code, including the description, parameters, an example, the response structure, etc., and afterwards generate interactive documentation that lets people "play" with my API: easily make requests and inspect the responses in a sandbox environment.
I know Swagger, and in particular their Swagger UI, is one of the best tools for that, but I'm confused about how it works. I understand that I need to feed the Swagger UI engine some .yaml that defines my API, but how do I generate it from my code?
Many of the GitHub libraries I found aren't good enough or only support Flask...
Thanks
To my understanding, Swagger UI depends on a Swagger specification.
So, it boils down to generating the Swagger specification in a clean and elegant manner.
Did you get a chance to look at apispec?
It appears to be an active project and has a plugin for Tornado.
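For completeness, here is a rough sketch of what that could look like. I have not pinned this to a specific apispec release, so treat the import path, TornadoPlugin, and spec.path as assumptions to verify (older releases used spec.add_path):

import tornado.web
from apispec import APISpec
from apispec_webframeworks.tornado import TornadoPlugin


class ItemHandler(tornado.web.RequestHandler):
    def get(self, item_id):
        """Get a single item.
        ---
        parameters:
          - name: item_id
            in: path
            type: integer
            required: true
        responses:
          200:
            description: The item data
        """


spec = APISpec(
    title="My API",
    version="1.0.0",
    openapi_version="2.0",
    plugins=[TornadoPlugin()],
)

# The plugin inspects the handler and the YAML block in its method docstring.
spec.path(urlspec=(r"/items/(?P<item_id>\d+)", ItemHandler))

# Dump the spec so Swagger UI can be pointed at it.
with open("swagger.yaml", "w") as f:
    f.write(spec.to_yaml())

You could then serve the generated file (or spec.to_dict() as JSON) from a small Tornado handler and point Swagger UI's url setting at it.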
Here's how we are doing it in our project. We made our own module and we are still actively developing this. For more info: https://pypi.org/project/tornado-swirl/
import tornado.ioloop
import tornado.web
import tornado_swirl as swirl

@swirl.restapi(r'/item/(?P<itemid>\d+)')
class ItemHandler(tornado.web.RequestHandler):
    def get(self, itemid):
        """Get Item data.

        Gets Item data from database.

        Path Parameter:
            itemid (int) -- The item id
        """
        pass

@swirl.schema
class User(object):
    """This is the user class.

    Your usual long description.

    Properties:
        name (string) -- required. Name of user
        age (int) -- Age of user
    """
    pass

def make_app():
    return swirl.Application(swirl.api_routes())

if __name__ == "__main__":
    app = make_app()
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()
We're successfully using Karate to automate tests for REST and SOAP web services. In addition, we have some legacy web services that are based on the Hessian web service protocol (http://hessian.caucho.com/).
Hessian calls are HTTP requests as well, so we would like to add them to our Karate test suites.
My first attempt was to use the Java interop feature, so the tests are implemented as Java code and the Java classes are called from within the feature files.
Example:
Scenario: Test offer purchase order
* def OfferPurchaseClient = Java.type('com.xyz.OfferPurchaseClient')
* def orderId = OfferPurchaseClient.createOrder('12345', 'xyz', 'max.mustermann@test.de')
* match orderId == '#number'
This approach is working, but I'm wondering if there is a more elegant way that would also make use of more of the Karate DSL's features.
I'm thinking about something like this (Dummy Code):
Scenario: Test offer purchase order
Given url orderManagementEndpoint
And path offerPurchase
And request serializeHessian(offerPurchase.json)
When method post
Then status 200
And match deserializeHessian(response).orderId == '#number'
Any recommendations/tips about how to implement such an approach?
I actually think you are already on the right track with the Java interop approach. The HTTP client is, in my honest opinion, a very small subset of Karate's capabilities; there are so many other aspects such as assertions, reports, parallel execution, even load testing, etc. So I wouldn't worry about not using request, method, status etc.
Maybe OfferPurchaseClient.createOrder() can return a JSON.
Or what I would do is consider an approach like OfferPurchaseClient.request(Map<String, Object> payload) - where payload can include all the parameters including the path and expected response code etc.
To put this another way, I'm suggesting that you write your own mini-framework and custom DSL around some Java code. If you want inspiration and ideas, please look at this discussion, it is an example of building a CLI testing tool around Karate: https://stackoverflow.com/a/62911366/143475
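For illustration, a scenario built around such a wrapper could read like this. OfferPurchaseClient.request() is hypothetical here; the idea is that the Hessian serialization, the endpoint path, and the HTTP call all live inside the Java class, while Karate passes a JSON payload (converted to a Map) and asserts on the returned Map:

Scenario: Test offer purchase order via the Hessian wrapper
    * def OfferPurchaseClient = Java.type('com.xyz.OfferPurchaseClient')
    * def payload = { path: 'offerPurchase', customerId: '12345', product: 'xyz', email: 'max.mustermann@test.de' }
    * def result = OfferPurchaseClient.request(payload)
    * match result.status == 200
    * match result.orderId == '#number'

Karate converts JSON to java.util.Map (and back) automatically at the interop boundary, so the feature files stay free of serialization details.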
One of the parameters for my API is security-related and tied to the environment the test runs on, so essentially it is dynamic.
Since this is security related, I have an internal REST API that provides this data.
I want to understand the most effective way of getting this data into a Karate feature.
I have tried two different ways:
1. Defined a Java util, invoked it with Java.type, and used def to hold the returned data in a variable.
2. Defined a util method as part of karate-config.js.
In karate-config.js:
function getSomeData(someValue) {
    return Java.type('xyz.abc.util.MyUtil');
}
In the feature file, read a JS file that invokes the REST call:
* def dataFromJS = read('classpath:com/xyz/util/js_that_invokes_rest.js')
I want to understand if there is a pattern for how this should be done, or if there is explicit support in Karate for doing this?
"I have an internal rest API"
Well, did you just forget that Karate is all about making REST requests!? :)
Please create a re-usable feature, make that REST call, define the variables that you need and now you can call it from other features.
Please refer to the documentation: https://github.com/intuit/karate#calling-other-feature-files
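As a rough sketch (the file name, internalAuthUrl, and the response field are made up; adjust to your internal API):

get-security-param.feature:

Feature: fetch the security-related parameter from the internal REST API

Scenario:
    Given url internalAuthUrl
    And path 'security-param'
    When method get
    Then status 200
    * def securityParam = response.value

And in any feature that needs it:

Background:
    * def result = call read('classpath:com/xyz/get-security-param.feature')
    * def securityParam = result.securityParam

If the value should be fetched only once per test run, you can make the call in karate-config.js via karate.callSingle() and put the result into the config.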
I have a Play Framework project that has reached beta/user testing.
For this testing we require test data to exist in the environment.
I am looking for a way to automate this via scripts.
The best way would be via calls to the API, passing correctly shaped data based on the models in the project (thus dependent on the project, not external).
Are there any existing SBT plugins that I could utilise that would be able to create the appropriate JSON and pass it to the API to set up the environment?
Why do you need a plugin for this? I think what you want to do is to have a set of JSON, then call the endpoints and see what the response from the back-end is. In the case of "setting up" based on a call that carries JSON, you could use FakeRequest in your tests:
val application = new GuiceApplicationBuilder().build()
val response = route(application, FakeRequest(POST, "/end-point")).get
contentAsString(response) must include("where is Json")
In your test you can also test the response from the back-end and the Json you are feeding it:
1. Create a set of JSON using Writes, based on a case class you are using in the back-end. You could also purposely create invalid JSON, one that misses a field, for example, or has an invalid structure.
2. Use table-driven testing and send a FakeRequest with the body/header containing your JSON; then check it against the expected results.
I'm on the move, when I get home, I can write an example code here.
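In the meantime, here is a rough sketch of that approach inside a test class (Item, the /items route, and the field names are made up; adapt them to your own models and routes):

import play.api.inject.guice.GuiceApplicationBuilder
import play.api.libs.json.{Json, Writes}
import play.api.test._
import play.api.test.Helpers._

case class Item(name: String, price: Double)
implicit val itemWrites: Writes[Item] = Json.writes[Item]

val application = new GuiceApplicationBuilder().build()

// Serialize the case class with its Writes, exactly as the back-end expects it.
val payload = Json.toJson(Item("widget", 9.99))
val request = FakeRequest(POST, "/items").withJsonBody(payload)

val response = route(application, request).get
// Also check status(response) against the expected code (e.g. OK or CREATED).
contentAsString(response) must include("widget")

For the table-driven part, the same request/assert block can be wrapped in a table or loop over a sequence of valid and deliberately invalid payloads.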
We are creating an online platform and exposing a Julia API via an embedded code editor. The user can access the API and run some analysis in our web app. I have a question related to controlling access to the API and objects.
The API right now contains a database handle and other objects that are exposed to the user and can be used to hack the internal system.
Below is the current architecture:
UserProgram.jl
function doanalysis()
    data = getdata()
    # some analysis on data
end

InternalProgram.jl
const client = MongoClient()
const collection = MongoCollection(client, "dbname", "collectionName")

function getdata()
    data = ... # some function to get data from the collection
    return data
end

# after parsing the user program
doanalysis()
To run the user analysis, we pass the user program as a command-line argument (using the ArgParse module) and run the internal program as follows:
$ julia InternalProgram.jl --file UserProgram.jl
With this architecture, the user potentially gets access to "client" and "collection" and can modify our internal databases.
Is there a better way to solve this problem without exposing the objects?
I hope someone has an answer to this.
You will be exposing yourself to multiple types of vulnerabilities - as a general rule, executing user-supplied code is a VERY BAD IDEA.
1/ Like you said, you'll potentially allow users to execute arbitrary code against your database.
2/ Your users will have access to all the power of Julia to do things on your server (download files they can later execute, for example, or access other servers and services on the machine [MySQL, email, etc.]). Depending on the level of access of the Julia process, think unauthorized access to your file system, installing key loggers, running spam servers, etc.
3/ Users will be able to pull in Julia packages and get you into a lot of trouble - for example, adding the Requests.jl package and executing DoS attacks on other servers.
If you really want to go this way, I recommend that you:
A/ set proper (minimal) permissions for the MongoDB user configured to be used in the app (ex: http://blog.mlab.com/2016/07/mongodb-tips-tricks-collection-level-access-control/)
B/ execute each user's code in a separate sandbox / container that exposes only the minimum necessary software (see the sketch below)
C/ have your containers running on a managed platform where tooling exists (firewalls) to monitor incoming and outgoing traffic (for example to block spam or DoS attacks)
In order to achieve B/ and C/, my recommendation is to use JuliaBox. I haven't used it myself, but it seems to be exactly what you need: https://github.com/JuliaCloud/JuliaBox
Once you get that running, you can also use https://github.com/JuliaWeb/JuliaWebAPI.jl
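To make the process-separation part of B/ a little more concrete, here is a minimal sketch. It assumes a hypothetical PublicAPI.jl that defines getdata() without exposing client or collection; on its own this is not a sandbox, it only becomes one when the child process runs inside a locked-down container with a minimally privileged database user (A/ and C/):

userfile = ARGS[1]

# Code the child process will execute: load only the narrow public API,
# then the user's file, then call its entry point.
child_code = """
include("PublicAPI.jl")    # hypothetical: defines getdata(), keeps client/collection private
include("$userfile")       # the user's analysis code
doanalysis()
"""

# --startup-file=no keeps the child from picking up local startup config.
# In production this command should run inside the container from B/.
run(`julia --startup-file=no -e $child_code`)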
I need some help figuring out the best way to proceed with creating a Rails 3 engine (or plugin, and/or gem).
Apologies for the length of this question...here's part 1:
My company uses an email service provider to send all of our outbound customer emails. They have created a SOAP web service and I have incorporated it into a sample Rails 3 app. The goal of creating an app first was so that I could then take that code and turn it into a gem.
Here's some of the background: The SOAP service has 23 actions in all and, in creating my sample app, I grouped similar actions together. Some of these actions involve uploading/downloading mailing lists and HTML content via the SOAP WS and, as a result, there is a MySQL database with a few tables to store HTML content and lists as a sort of "staging area".
All in all, I have 5 models to contain the SOAP actions (they do not inherit from ActiveRecord::Base) and 3 models that interact with the MySQL database.
I also have a corresponding controller for each model and a view for each SOAP action that I used to help me test the actions as I implemented them.
So...I'm not sure where to go from here. My code needs a lot of DRY-ing up. For example, the WS requires that the user authentication info be sent in the envelope body of each request. So that means each method in the model has the same auth info hard-coded into it, which is extremely repetitive; obviously I'd like for that to be cleaner. I also look back now through the code and see that the requests themselves are repetitive and could probably be consolidated.
All of that I think I can figure out on my own, but here is something that seems obvious yet I can't figure out: how can I create methods that can be used in all of my models (I'm thinking specifically of the user auth part of the equation)?
Here's part 2:
My intention from the beginning has been to extract my code and package it into a gem in case any of my ESP's other clients could use it (plus I'll be using it in several different apps). However, I'd like for it to be very configurable. There should be a default minimal configuration (i.e. just models that wrap the SOAP actions) created just by adding the gem to a Gemfile. However, I'd also like for there to be some tools available (like generators or Rake tasks) to get a user started. What I have in mind are options to create migration files, models, controllers, or views (or the whole nine yards if they want).
So, here's where I'm stuck on knowing whether I should pursue the plugin or engine route. I read Jordan West's series on creating an engine and I really like the thought of that, but I'm not sure if that is the right route for me.
So if you've read this far and I haven't confused the hell out of you, I could use some guidance :)
Thanks
Let's answer your question in parts.
Part One
Ruby's flexibility means you can share code across all of your models extremely easily. Are they extending any sort of class? If they are, simply add the methods to the parent object like so:
class SOAPModel
  def request(action, params)
    # Request code goes in here
  end
end
Then it's simply a case of calling request in your respective models. Alternatively, if you define it as a class method (def self.request), you can call it statically with SOAPModel.request. It's really up to you. Otherwise, if (for some bizarre reason) you can't touch a parent object, you could define the methods dynamically:
[User, Post, Message, Comment, File].each do |model|
  model.send :define_method, :request, proc { |action, params|
    # Request code goes in here
  }
end
It's Ruby, so there are tons of ways of doing it.
Part Two
Gems are more than flexible enough to handle your problem; both Rails and Rake are pretty smart and will look inside your gem (as long as it's in your environment file and Gemfile). Create a lib/generators directory and a name/name_generator.rb inside it, where name is the name of your generator. Then just run rails g name and you're there. The same goes for Rake tasks.
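For example, a bare-bones install generator inside the gem might look something like this (my_esp and the initializer template are placeholder names):

# lib/generators/my_esp/install/install_generator.rb
require 'rails/generators'

module MyEsp
  module Generators
    class InstallGenerator < Rails::Generators::Base
      source_root File.expand_path('../templates', __FILE__)

      desc "Copies a default MyEsp initializer into the host application"
      def copy_initializer
        template 'my_esp.rb', 'config/initializers/my_esp.rb'
      end
    end
  end
end

With a template file at lib/generators/my_esp/install/templates/my_esp.rb, the host app can then run rails g my_esp:install, and you can add further generators (for migrations, controllers, views) the same way.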
I hope that helps!