Is there an Objective-C version of Artifice?
If not, how would I design/develop/create it?
I think I might be able to help you here.
I have a Ruby library that is somewhat similar to Artifice, albeit more self-contained and built on top of Sinatra, called Mimic. I'm pretty happy with it and one of my favourite features is that as well as being configured using its Ruby DSL (or using the Sinatra API directly), it can be configured remotely or from any process that speaks HTTP. This means you can use it in your Objective-C tests and configure it from the tests too (rather than having, say, a set of external fixtures in a Ruby file).
In the name of eating my own dog food, I recently converted the acceptance tests for my Objective-C RestClient port, Resty, to use Mimic. The Mimic daemon is started up as part of the build process and my stubs are configured directly in the tests, using a thin Objective-C wrapper around the Mimic REST API.
As you can see, I strive very hard for test clarity!
Those tests use OCUnit but you can use this with Kiwi. In fact, the assertEventually macro in the above tests was the basis of the asynchronous testing support that I ported to Kiwi.
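To give a rough idea of what "configuring stubs from the tests over HTTP" looks like, here is a hypothetical sketch using plain Foundation calls rather than the wrapper; the daemon URL, port and payload shape are made up for illustration, so check the Mimic repository for the real endpoints:

    // Hypothetical sketch: registers a stubbed response with a running Mimic
    // daemon before exercising the code under test against it.
    #import <Foundation/Foundation.h>

    static void stubGetUser(void)
    {
        NSDictionary *stub = @{ @"method": @"GET",
                                @"path":   @"/users/1",
                                @"body":   @"{\"name\": \"Luke\"}" };

        // The port and path below are illustrative, not Mimic's actual API.
        NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:
            [NSURL URLWithString:@"http://localhost:11988/api/stubs"]];
        request.HTTPMethod = @"POST";
        [request setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
        request.HTTPBody = [NSJSONSerialization dataWithJSONObject:stub options:0 error:NULL];

        NSURLResponse *response = nil;
        [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:NULL];
        // The code under test is then pointed at the daemon's base URL and
        // exercised as normal; assertions run against its behaviour.
    }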
I've since extracted the Objective-C wrapper for Mimic from LRResty and moved it into the Mimic repository. You may want to check out the Resty project to see how my project and the tests are configured. If you have any questions, please ask.
One caveat: I haven't found a way of getting these tests to run successfully in Xcode 4, using the "Test" option, due to the way that it runs. In Xcode 3, I rely on Run Script build phases to start and stop the Mimic daemon, but because Xcode 4 doesn't run the tests as part of the build process this doesn't work. I've tried to accomplish something similar using pre/post test actions but unfortunately these are woefully inadequate due to various bugs.
Bonus tip: I find the Charles debugging proxy a massive help when working with web services, and you can use it with Mimic too; the Objective-C wrapper can be proxied through Charles so you can see exactly what is happening, both in terms of stub configuration and actual HTTP requests (Mimic can even be configured to return some helpful debugging data in the response headers).
Do let me know if you have any questions.
Is there any tool usually used in Common LISP tests to block all network requests and stub responses for specific URLs?
Just for reference, in Ruby we usually use:
https://github.com/bblimke/webmock
https://github.com/chrisk/fakeweb
(or even more powerful tools like https://github.com/vcr/vcr made on top of them)
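For illustration, a minimal WebMock stub in an RSpec example looks roughly like this (the URL and payload are placeholders):

    require "net/http"
    require "webmock/rspec"

    RSpec.describe "the client" do
      it "gets the stubbed payload instead of hitting the network" do
        stub_request(:get, "https://api.example.com/users/1").
          to_return(status: 200, body: '{"name":"Luke"}',
                    headers: { "Content-Type" => "application/json" })

        # Any HTTP call to that URL now receives the canned response;
        # unstubbed requests raise an error instead of going out on the wire.
        body = Net::HTTP.get(URI("https://api.example.com/users/1"))
        expect(body).to eq('{"name":"Luke"}')
      end
    end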
I know similar tools exist in Python (I remember this one: https://github.com/gabrielfalcao/HTTPretty) and I have found:
https://github.com/johanhaleby/stub-http
created for Clojure and described here:
Strategy for stubbing HTTP requests in Clojure tests
Is there anything similar to this? If not, how do you usually test code that opens connections and makes external requests? Do you only mock methods directly with tools like mockingbird and cl-mock, or is there anything I'm missing?
This is my first time playing with Cucumber and also creating a suite which tests an API. My question is: when testing the API, does it need to be running?
For example, I've got this in my head:
Start express server as background task
Then, when that has booted up (how would I know if that happened?), run the Cucumber tests.
I don't really know the best practices for this, which I think is the main problem here, sorry.
It would be helpful to see a .travis.yml file or a bash script.
I can't offer you a working example. But I can outline how I would approach the problem.
Your goal is to automate the verification of a REST API or something similar. That is, making sure that a web application responds in the expected way when given a specific request.
For some reason you want to use Cucumber.
The first thing I would like to mention is that Behaviour-Driven Development, BDD, and Cucumber are not testing tools. The purpose of BDD and Cucumber is to act as a communication tool between those who know what the system should do, those who write code to make it happen, and those who verify the behaviour. That's why the examples are written in an almost natural language.
How would I approach the problem then?
I would verify the vast majority of the behaviour by calling the methods that make up the API from a unit test or a Cucumber scenario. That is, verify that they work properly without a running server. And without a database. This is fast and speed is important. I would probably verify more than 90% of the logic this way.
I would verify the wiring by firing up a server and verify that it is possible to reach the methods verified in the previous step. This is slow so I would do as little as possible here. I would, if possible, fire up the server from the code used to implement the verification. I would start the server as a part of the test setup.
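As a sketch of that last point: with cucumber-js and Express you can start the app in a BeforeAll hook and only run scenarios once it is actually listening. File paths and the port below are illustrative, and in older cucumber-js releases the hooks are required from the "cucumber" package instead:

    // features/support/hooks.js
    const { BeforeAll, AfterAll } = require('@cucumber/cucumber');
    const app = require('../../src/app'); // your Express app, exported without .listen()

    let server;

    BeforeAll(async function () {
      // listen() invokes its callback once the server is accepting connections,
      // which answers "how would I know it has booted up?"
      await new Promise((resolve) => {
        server = app.listen(3000, resolve);
      });
    });

    AfterAll(async function () {
      await new Promise((resolve) => server.close(resolve));
    });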
This didn’t involve any external tools. It only involved your programming language and some libraries. The reason for doing it this way is that I want it to be as portable as possible. The fewer tools you use, the easier it gets to work with something.
It has happened that I have done some of the setup in my build tool and had it start a server before running the integration tests. This is usually more heavyweight and something I avoid if possible.
So, verify the behaviour without a server. Verify the wiring with a server. It is important to only verify the wiring in this step. The logic has been verified earlier, there is no need to repeat it.
Speed, as in a fast feedback loop, is very important. Building and testing the entire system should, in a good world, take seconds rather than minutes.
I have a working example if you're interested (running on travis).
I use docker-compose to launch the API & required components such as database, then I run cucumber-js tests against the running stack.
docker-compose is also used for local development & testing.
I've also released a library to help with writing Cucumber tests for APIs: https://github.com/ekino/veggies.
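For a rough idea of the shape of such a setup (image names, ports and service names here are made up), the compose file can look like this, after which the stack is started with docker-compose up -d and cucumber-js is pointed at the exposed port:

    # docker-compose.yml (illustrative)
    version: "3"
    services:
      db:
        image: postgres:9.6
      api:
        build: .
        depends_on:
          - db
        ports:
          - "3000:3000"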
Any strategies for doing this? We have a Rails codebase that is currently fully integrated (the same app that serves up the JS assets also does the back-end heavy lifting), but we are thinking of extracting the two out into separate services, each in their own git repository and running on separate servers.
I'm planning on unit/acceptance testing the API with a small ruby HTTP client that will also act as documentation for the API endpoints, and the JS front-end (Brunch.io, Backbone, Chaplin) will have unit/acceptance testing internally as well... but I feel like I should be writing cucumber tests that integrate the two together, right? Where do those cukes live? In which repo?
Appreciate any insight here. Thanks!
In a general sense, if you have code that is for both your server and your client, then the "right" place to keep it depends on which side of your app is more central or "heavier": the client or the server.
That being said, from the way you describe things in your question, it kind of sounds like you consider the Rails app "primary". For instance, you mention that you currently have "JS assets" integrated with/being served by your "Rails codebase" ... not Rails assets being served by your JS server ;-)
So that answers things on the theoretical level, but I also think it makes sense to put the code in your Rails codebase for a practical reason: Cucumber is a Ruby tool, not a JS one. You might use it to test some non-Ruby code, but ultimately it's being run by Ruby.
I don't know for sure, but I suspect you'll create headaches for yourself if you try to put your Cucumber specs in your JS codebase and then run them from your Rails codebase. Plus, that really tightly couples the two codebases: to run your tests you'd need both codebases on your test runner, whereas if you keep the Cucumber stuff in Rails-land your test runner only needs your Rails code, and it can run against a different server that hosts your JS code.
So ultimately it sounds to me like the Cucumber stuff belongs in Rails-land ... but going the other way (and storing it with your JS repo) doesn't seem horrible to me either, just potentially more problematic.
Greetings, we have a project with loads of beans, JSPs, etc. There is a desperate need for automated tests in our environment (we use Maven). Now, we can easily write tests for the database layer of the project and for the various security utilities we implemented, but the JSP pages remain untested.
I searched for utilities for server-side testing and Cactus seems the best option. However, according to its changelist, the last release is 1.8 and it was released more than two years ago!
So the question is: what happened to Cactus, is it still being developed or what? And what are the recent alternatives to Jakarta Cactus (if any exist)?
I've used a combination of Spring, JUnit and HttpClient with some success in recent projects.
Apache HttpClient provides a powerful and flexible API for constructing and sending HTTP requests into your application. It cannot replicate a web browser, say by running client-side scripts, but if there is sufficient content within the resulting HTTP responses (headers, URI, body), then you can use this information to traverse pages within the application and validate the behavior. You can post forms, follow redirects, process cookies and supply inputs to your application.
JUnit (junit.org) drives the tests, invoking a series of pages with HttpClient and can be deployed alongside the application, run standalone with ant/maven, or run separately inside your IDE.
Spring (springsource.org) is, of course, optional as you may not be using it for your project. I've found it useful to stub/mock out parts of the application, such that I can isolate specific areas, such as front-end controllers, through to the business logic, by substituting the DAOs to return specific data values. It provides an excellent Test Context Framework and specialized TestRunners that hook in well to testing frameworks like JUnit (or TestNG if you prefer).
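A stripped-down sketch of the HttpClient + JUnit part, where the URL, expected status and page content are placeholders for whatever your JSPs actually return:

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    import org.apache.http.client.methods.CloseableHttpResponse;
    import org.apache.http.client.methods.HttpGet;
    import org.apache.http.impl.client.CloseableHttpClient;
    import org.apache.http.impl.client.HttpClients;
    import org.apache.http.util.EntityUtils;
    import org.junit.Test;

    public class LoginPageIT {

        @Test
        public void loginPageRenders() throws Exception {
            try (CloseableHttpClient client = HttpClients.createDefault()) {
                HttpGet get = new HttpGet("http://localhost:8080/myapp/login.jsp");
                try (CloseableHttpResponse response = client.execute(get)) {
                    assertEquals(200, response.getStatusLine().getStatusCode());
                    String body = EntityUtils.toString(response.getEntity());
                    assertTrue(body.contains("<form")); // crude check that the page rendered
                }
            }
        }
    }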
Cactus served as a good server-side testing framework in the EJB 2 era, but it's not supported anymore.
You can use a combination of both Mock testing (fine-grained) and In-Container testing (coarse-grained) strategies to test your application completely.
Mock Testing Frameworks: Mockito, JMockit, EasyMock, etc.
Integration Testing Frameworks (Java EE): Arquillian, the embeddable container API, etc.
I prefer Mockito and Arquillian for server-side testing.
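As a tiny example of the fine-grained side with Mockito (the UserDao and GreetingService classes are made up for illustration):

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    public class GreetingServiceTest {

        @Test
        public void greetsUserByName() {
            // Replace the real DAO with a mock so no database or server is needed.
            UserDao dao = mock(UserDao.class);
            when(dao.findName(42L)).thenReturn("Alice");

            GreetingService service = new GreetingService(dao);
            assertEquals("Hello, Alice!", service.greet(42L));
        }
    }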
How about Arquillian? I haven't used it and it doesn't even have a stable version yet, but at least it's in active development.
You might want to try Selenium. That plus JBehave is a good combination, I'm finding. And the more support there is for both those projects, the less likely they are to go defunct (like Cactus).
My objective is to create an Apache module that will provide RESTful services (i.e., we have some legacy code that controls/queries some networking equipment and we would now like to expose that functionality as a RESTful service). I guess the flow might look something like this:
WebBrowser -- issues RESTful URI ---> [Apache (my_module)] ---> Interface to existing legacy code
I have been mucking around in various wikis, blogs, forums, articles, etc., but I just can't seem to understand how those RESTful URLs will get to (my_module) in Apache [you can tell I have never worked with web-server internals, much less modules, before]. I mean, do I have to edit the httpd.conf file and say something like: send all URLs that look like http://baseurl/restservices/... to my_module? If so, how do I do it?
Also, what will my_module actually get? Does it get the full HTTP request message, and does it have to parse it like a typical CGI program does?
Further, what is the best way for my_module to interact with my legacy code? E.g., open a TCP connection to it and send messages, writing a wrapper around the legacy code to interpret those messages? Or can my_module directly invoke the functions in my legacy code somehow, if I compiled my entire legacy code into an Apache module?
Thanks for any hints. If you know of a good tutorial, please point me to it. I'm looking for a high-level overview that will give me the architecture (the developers under me can then follow up on the nitty-gritty details).
I'd write an extension for PHP or Python and use mod_php / mod_wsgi
I think you are approaching this in the wrong way:
Apache modules are not really how you want to handle a URL if your requirements are quite basic. Depending on the language your legacy code is in, I would advise:
Binding its API into a Python or PHP module, and having that script called by Apache through normal means. It is also a lot simpler (in many cases) to glue a compiled language with C-style calling conventions to these scripting languages than to Apache itself.
It also has the advantage of adding an abstraction which allows you to layer additional logic in a scripting language on top of your core legacy code. You may also want to preprocess and validate data from the request before handing it to your legacy code.
Both PHP and Python also have RESTful frameworks and utilities.
If you do write an Apache module, then check out Writing Apache Modules with Perl and C
See:
Developing PHP Extensions in C, Extending Python in C or C++ ... also, if using Python, check out the WSGI stuff.
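If you do go down the module route, the answer to "how do the URLs reach my_module?" is a handler hook in the module plus a SetHandler directive in httpd.conf. Here is a minimal sketch against the Apache 2.x API; the module and handler names are made up and error handling is omitted:

    LoadModule restservices_module modules/mod_restservices.so

    <Location /restservices>
        SetHandler restservices
    </Location>

and the module itself:

    #include <string.h>
    #include "httpd.h"
    #include "http_config.h"
    #include "http_protocol.h"

    /* Apache calls every registered handler for each request; we claim only
     * those whose handler was set to "restservices" by the <Location> block. */
    static int restservices_handler(request_rec *r)
    {
        if (strcmp(r->handler, "restservices") != 0) {
            return DECLINED;
        }

        /* r already holds the parsed request: method, URI, headers, etc.,
         * so there is no CGI-style raw message parsing to do. */
        ap_set_content_type(r, "application/json");
        ap_rprintf(r, "{\"uri\": \"%s\"}", r->uri);
        return OK;
    }

    static void restservices_register_hooks(apr_pool_t *pool)
    {
        ap_hook_handler(restservices_handler, NULL, NULL, APR_HOOK_MIDDLE);
    }

    module AP_MODULE_DECLARE_DATA restservices_module = {
        STANDARD20_MODULE_STUFF,
        NULL, NULL, NULL, NULL, NULL,
        restservices_register_hooks
    };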
I'd agree with Aiden. Writing Apache modules is not for the faint-hearted and you definitely don't want to go there unless you absolutely must. You would need to be prepared to become very conversant with how Apache works.
If you still think you need to, then look at:
http://httpd.apache.org/apreq/
This is a library which uses the existing Apache Runtime libraries and provides higher-level functionality for dealing with POST data, cookies, etc. from C code hooked into Apache via a custom module.
The book Aiden mentions though is a bit dated. Better off getting:
The Apache Modules Book: Application Development with Apache