I have built a Cloud Function express API and I would like to know how to write tests for it.
To answer your question: of course you can write tests for Google Cloud Functions. I would even add that, like any other application, you should write tests.
You can take a look at the Cloud Functions documentation about Testing and CI/CD. In the Node.js part, it shows how to test Cloud Functions with Mocha as the test framework and Sinon as the mocking framework. Testing should be part of both your local development workflow and your CI pipeline, if you have one.
Basically, there are three types of tests:
unit tests
integration tests
system tests
In unit tests, you should mock your HTTP framework (Express) to test small parts of your code.
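For illustration, a unit test with Mocha and Sinon that stubs the Express request and response objects might look like this; the function name helloHttp and its greeting are assumptions for the sketch:

// test/index.test.js -- minimal sketch; assumes index.js exports an HTTP
// function helloHttp(req, res) that sends a greeting built from the body
const sinon = require('sinon');
const assert = require('assert');
const { helloHttp } = require('../index');

describe('helloHttp', () => {
  it('sends a greeting built from the request body', () => {
    const req = { body: { name: 'World' } };  // stubbed Express request
    const res = { send: sinon.stub() };       // stubbed Express response

    helloHttp(req, res);

    assert.ok(res.send.calledOnce);
    assert.strictEqual(res.send.firstCall.args[0], 'Hello World!');
  });
});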
In integration tests, you should mock any external dependency between your Function and other components (for example, if your Function writes data to a Cloud SQL database, then you should mock this Cloud SQL database).
In system tests, you should deploy your Function to a specific GCP environment (most probably an isolated project), to ensure that your Function interacts well with other GCP components, or starts when you trigger it.
Finally, there was previously a Cloud Functions Node.js emulator for testing Functions locally. It has been deprecated, but replaced by the Functions Framework, which can also be used to "spin up a local development server for quick testing".
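For example, assuming your HTTP function is exported as helloHttp from index.js, you can start such a local server and point your tests at it:

npm install --save-dev @google-cloud/functions-framework
npx functions-framework --target=helloHttp --port=8080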
My goal is to set up an environment where CircleCI would run my e2e tests on BrowserStack in different browsers.
My tests are assuming that there is a mock server running. (E.g. tests are checking whether a certain call to the mock server has been made or not.)
I learned that there is such a thing as local testing for BrowserStack, but whenever I try to start the mock server on port 65432 it says the port is already in use: Error: listen EADDRINUSE :::65432
I have an Express mock server running (on port 65432), and the tests are run by Nightwatch against a Selenium server.
So far I have only seen examples that run tests against pages living on the public internet (like google.com), but I would like to run my own mock server locally and run my tests against it.
Is there a way I could run a mock server and run my Nightwatch/Selenium tests against it, with everything driven by a CI tool running the tests on BrowserStack?
If you have an internal website (not accessible to the public) hosted on your machine (using a mock server such as Tomcat, Nginx, an Express mock server, etc.) and wish to run Selenium-based scripts to test that application on BrowserStack, then you can use the Local Testing feature.
You simply need to run the binary file that they provide on your local machine (where the internal website is accessible) and set the capability 'browserstack.local' to 'true'. The tests running on BrowserStack will then be able to access your internal website. I would recommend reviewing the documentation here. You can also check out the documentation on NightwatchJS-BrowserStack here.
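For illustration, the relevant part of a Nightwatch configuration might look like this; it is a sketch in which credentials come from environment variables and the hub address is BrowserStack's usual Selenium endpoint:

// nightwatch.conf.js excerpt -- sketch of running against BrowserStack
// with Local Testing enabled
module.exports = {
  test_settings: {
    default: {
      selenium_host: 'hub-cloud.browserstack.com',
      selenium_port: 80,
      desiredCapabilities: {
        browserName: 'chrome',
        'browserstack.user': process.env.BROWSERSTACK_USERNAME,
        'browserstack.key': process.env.BROWSERSTACK_ACCESS_KEY,
        'browserstack.local': true  // route test traffic through the Local binary
      }
    }
  }
};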
If you wish to trigger the tests using CircleCI, they provide a plug-in for that as well; read more on it here. The plug-in itself will handle Local Testing for you in that case.
For future readers: my problem was parallelism. I set 2 workers (child processes, basically) with the following object:
"test_workers": {
'enabled': true,
'workers': 2
}
I found this setup in one of the examples, which I can't find anymore. If you are running your Nightwatch tests with your own mock server, this can mess up the test suite, since every worker will try to spin up a mock server for its own tests, which will obviously fail.
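One possible workaround, besides disabling test_workers, is to make the mock server tolerate the port already being taken, so that only the first worker binds it and the others reuse it. A sketch:

// mock-server.js -- let only the first Nightwatch worker bind the port
const express = require('express');
const app = express();
app.get('/ping', (req, res) => res.send('pong'));

const server = app.listen(65432, () => console.log('mock server listening on 65432'));
server.on('error', (err) => {
  if (err.code === 'EADDRINUSE') {
    // another worker already started the mock server; just reuse it
    console.log('mock server already running, reusing it');
  } else {
    throw err;
  }
});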
Is there any way to automatically run regression/functional tests on NiFi flows using a Jenkins pipeline?
I searched for it without any success.
Thanks for your help.
With the recent releases of NiFi-1.5.0 and NiFi-Registry-0.1.0, the community has come together to produce a number of SDLC/CICD integration tools to make using things like Jenkins Pipeline easier.
There are both Python (NiPyAPI) and Java (NiFi-Toolkit-CLI) API wrappers being produced by a team of collaborators to allow scripted manipulation of NiFi flows across different environments.
Common functions include interaction with integrated version control, import/export of flows as JSON documents, deployment between environments, start/stop of flows, etc.
So we are working quickly towards supporting things like an integrated wrapper for declarative Jenkins Pipelines, and I would add that it is being done fully in a public codebase under the Apache license, so we (I am the lead NiPyAPI author) would welcome your collaboration.
We are in the process of transitioning towards SOA.
Our current goal is to try and ensure that more of the application is developed as "Services" (mainly to improve visibility of capability, promote re-use, and de-risk change). Some of those services will be exposed as web services, but many (probably the majority) will not, and will be for "internal" use only, to help reap some of the benefits of SOA.
For those "internal" services we are currently intending on implementing them as OSGi bundles; however we are struggling to understand how best to test them. Our goal is to enable the current System Test team to test all types of services and we have been investigating tools like SoapUI and SOA Test; however it's becoming clearer that we may face some challenges in testing our services implemented as OSGi bundles using tools like these; and indeed asking the test team to do so.
So we're looking for some advice on how best to test aspects of our capability designed to act as a "service", but implemented as an OSGi bundle instead of a web service.
What tools would people recommend, and is this the type of testing that is traditionally done by a developer during unit testing, or can it be done by a less technical tester applying the same basic principles of testing interfaces (i.e. inputs, processing, outputs)?
You could theoretically use a Remote Service Admin implementation (like Aries RSA or Eclipse ECF) to expose your internal services to the outside during testing, so that an external system test tool can access them.
I would not recommend letting an external team test your OSGi services, though. It is much better to test the services in your own build using an integration testing tool like Pax Exam. It lets you define which bundles and other configuration to install; it then boots up an OSGi framework with your setup and runs modified JUnit tests against it. The advantage is that such tests are quite realistic and still quite simple.
See here for some Pax Exam tests in Aries RSA or Apache Karaf.
The first example uses the Pax Exam forked container for very fast tests (<1 s per test), while the second uses the Apache Karaf container (~10 s per test) for tests that are very close to a production system.
So you get much faster feedback than with an external system test team that will always lag a bit behind your current development. It also allows you to establish the policy that each team member runs the tests locally before committing.
The application that I test consists of multiple integration components distributed across different platforms. The system integration test scenarios cover creating transactions from the GUI, validating data in the DB, checking remote paths for file creation, transferring files to other paths, etc. Can someone suggest a good automation framework (open source, if available) for writing automation scripts for such integration scenarios? I have worked with Selenium, primarily for GUI automation, but the current scenario involves automation beyond the GUI (DB queries, services, checking XML file creation in remote paths, etc.). Kindly suggest.
I think the other tasks you have mentioned can be automated through a server-side language such as Java, PHP, etc.
For DB queries, you can go with a Java abstraction library, or use a plain JDBC connection and queries.
For services testing: you can use a service testing framework; for Java, Jersey, etc.
For checking XML file creation: it can be validated through simple Java stream-handling functions.
And for remote path validation: it can be handled through a Java FTP library.
I'm working on a project with two applications: an Android app (client) and a REST service (server). My Android app consumes my REST service.
Both applications are tested separately to ensure they're doing their business as expected.
During server tests I prepare requests and check server responses.
During client tests I set up a simple HTTP mock server and test the client's requests against different mocked responses.
Now, this technique works pretty well. It gives me the flexibility I like. I can use different test frameworks and continuous integration environments. But there is one weak point: in both (client and server) test cases I specify the same API. I assume that e.g.
GET /foo-list.json
will return HTTP 200 with the JSON
[{
    "id": 1,
    "name": "foo1"
}, {
    "id": 2,
    "name": "foo2"
}]
So I repeat myself, and if I change the response format, my client tests won't fail.
My question is about good practices in testing this kind of scenario. How to make true integration tests without sacrificing flexibility of independent tests. Should I test client with mocked server or with a real instance of my rest service?
Please share your professional experience.
In your scenario you should continue to write unit tests to test individual classes, and integration tests to test the inter-operation between multiple application layers (e.g. business and database layers).
You ask:
"How to make true integration tests without sacrificing flexibility of independent tests"
All of your code should use abstractions, so that you can use dependency injection to unit test classes in complete isolation using mock dependencies. The use of mocks will ensure that these tests remain independent, i.e. not coupled to any other classes. Hence, taking this approach, the integration tests, which would use your final concrete classes, would not affect the unit tests, which use the mocked classes.
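To illustrate the idea (sketched in JavaScript for brevity; the pattern is the same in Java/Android), a class that receives its HTTP client through injection can be unit tested with a mock standing in for the real client. The names here (createFooService, httpClient) are invented for the sketch:

// fooService.js -- the HTTP client is injected rather than hard-coded
function createFooService(httpClient) {
  return {
    // fetch the foo list through whichever client was injected
    getFooList: () => httpClient.get('/foo-list.json'),
  };
}

// in a unit test, inject a mock client instead of the real one
const mockClient = { get: async () => [{ id: 1, name: 'foo1' }] };
const service = createFooService(mockClient);
service.getFooList().then((foos) => console.log(foos));  // [{ id: 1, name: 'foo1' }]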
Also:
"Should I test client with mocked server or with a real instance of my rest service?"
In addition to unit and integration tests, you should also perform client-server integration testing; I use automated acceptance testing for this. Using a test framework such as Cucumber (also check out calabash-android, which is written specifically to test mobile applications), you can write tests covering specific features and scenarios that interact with both the client (your Android application) and the server (your RESTful service). These client-server integration tests would start up and stop concrete instances of the client and server.
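For instance, a Cucumber feature for the /foo-list.json endpoint from the question might read something like this (the wording is invented for illustration):

Feature: Foo list
  Scenario: The app fetches the foo list from the service
    Given the REST service is running
    And the Android app is connected to it
    When the app requests GET /foo-list.json
    Then the response status is 200
    And the response contains the foos "foo1" and "foo2"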
Mocks are for unit testing. Your description of the tests with the mocks describes exactly that. You test the client and server as separate units.
Integration testing tests whether the units work well together. Since the interface is a REST interface, mocking makes no sense there; you have to test the real thing over HTTP.
See also What is the difference between integration and unit tests?
If your service is based on Java, I'd strongly recommend looking into the Spock framework for mocking any sort of calls that might be coming from the client. Since Spock is just an extension of JUnit, you might also be able to use it for Android (though, to be fair, I've never done Android development).
I'd say you want to do two things: integration testing and unit testing. Integration testing would attempt to bring up the Android application and cause it to make service calls, ensuring the contexts interact with each other nicely.
However, for your regular commits, I'd suggest unit testing that mocks away everything but the class under test. Spock makes this pretty easy to do, and since it's built on top of JUnit, all it takes is a jar.
There is no reason you can't run automated end-to-end tests against a real service instance. You can run a real service instance on the same test machine you use to run the unit tests, perhaps in the same container. You can configure a different URL for the server instance when running automated end-to-end tests.
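As a sketch (the endpoint is taken from the question above; the URL and port are assumptions), such an end-to-end test could read the server URL from the environment and hit the real instance:

// e2e sketch: run against a real service instance, URL taken from the environment
const assert = require('assert');
const baseUrl = process.env.API_BASE_URL || 'http://localhost:8080';

describe('GET /foo-list.json against the real service', () => {
  it('returns the foo list', async () => {
    const res = await fetch(`${baseUrl}/foo-list.json`);  // global fetch, Node 18+
    assert.strictEqual(res.status, 200);
    const foos = await res.json();
    assert.ok(Array.isArray(foos));
  });
});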
Why would you want to do the extra work of creating the mock service if you can run them against the real service?
I would only create a mock service if the service were an external one over which I had no control!