How is API testing different from regular testing?

I don't know much about testing, but I would like to get a clear picture of how API testing differs from other testing methods.

API testing does not involve a UI, whereas regular testing usually does.
API testing requires basic networking knowledge, such as what the GET, POST, PUT, etc. methods are used for.
API testing includes knowing what happens behind the UI elements. For example, if I press a button, what will the next function call be? We need to know what the 'button' element triggers.
In API testing only the API functions are tested, while in regular testing all elements of the application are tested.
Different tools are used for API testing; Postman is one of them.

In API testing we test back-end functionality, while in regular testing we check the UI plus functional behaviour.
API testing is helpful for testing core functionality. It helps us reduce risk.
Steps to test an API manually:
To test an API manually, we can use browser-based REST client plugins.
a) Install the POSTMAN (Chrome) / REST (Firefox) plugin
b) Enter the API URL
c) Select the REST method
d) Set the content-type header
e) Enter the request JSON (for POST)
f) Click Send
g) It will return the output response
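The same manual flow can also be scripted with any HTTP client. Below is a minimal TypeScript sketch of the steps above; the endpoint and request body are placeholders, not a real API.
// Script equivalent of the manual steps: pick the method, set the
// content-type header, send the request JSON, read the response.
async function createUser(): Promise<unknown> {
  const response = await fetch("https://api.example.com/users", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name: "test user", role: "tester" }),
  });
  console.log(response.status); // e.g. 201 Created
  return response.json();       // the output response
}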
Steps to start API Automation using REST

In general, API testing is meant to avoid repeating many similar manual actions in cases where we can easily measure the result.
For example, if a button in your app is not in the right place, that is hard to measure using code.
Another example: when you write a library for collections, you can express the check just once:
// CheckIntersect test case
const result = mylibrary.getIntersection([1, 2, 3, 4], [3, 4, 5, 6]);
if (JSON.stringify(result) !== JSON.stringify([3, 4])) {
  postTestError("CheckIntersect [1,2,3,4], [3,4,5,6]: " + result.toString());
}
In this case you can easily measure the result, and you don't have to fear that you won't even be able to find the problem in the code.

I would like to differentiate API testing vs. other testing instead of looking into technical details.
API: From a testing point of view the API is important because we can prepare independent test cases as separate files. This makes our test approach relatively simple.
For better understanding: "A web API is typically done as HTTP/REST; nothing is strictly defined, the output can be e.g. JSON/XML, and the input can be XML/JSON or plain data. There are no standards for anything, so there is no automatic calling and discovery."
An API is a simple interface over the HTTP protocol.
Other testing:
Other testing such as GUI, regression, unit, etc. is absolutely essential because any application has to be user friendly. The end user should be comfortable using it, and all components should perform their functionality with clarity. Functional and GUI testing can come down to ensuring that the look and feel and the functional usability of the application are acceptable to the user.
Conclusion:
An API can be:
developed by one company, used by another company, and hosted by a third company. Such involvement of several companies is a business case for independent testing of the API.
Example: a weather information API developed and tested by one party and accessed by many.

General testing means testing each and every feature of the application from the UI, such as web or mobile.
But
API testing verifies the JSON request to the server and the response from the server.
If the application uses an API, all of its content and features are based on the API response from the server.
For example, on the Facebook app profile screen, if the name is wrong,
you can check it from the UI (that is general testing). The same thing can be
checked from the API response from the server, like below.
{"name":"Dharma","friends":450}

Is Swagger UI like a sandbox? How does Swagger UI work?

I'm trying to understand how Swagger UI works and what kind of scenario it suits for.
Per my understanding, it's not a sandbox when integrating with Swagger UI. That means Swagger UI manipulates real data. Is that correct?
So usually I would have to build a dedicated test environment for Swagger UI, right? That doesn't sound right to me.
Even if there is a test environment for Swagger UI, it is open to lots of people. Does that mean the dummy data posted by someone will be saved forever and be visible to others?
I was expecting Swagger UI to behave like a sandbox that only saves and manipulates data for the current session. Once the user closes the session and re-opens it, it should be brand new.
I would like to know the typical scenario in which Swagger UI is used.
It's in the name, isn't it? It generates a UI for your API.
It does not spin up a demo environment for your API with the accompanying back-end, including a test database that contains dummy data and/or is periodically wiped.
If you want the latter, you'd have to build that yourself.
Does that mean the dummy data posted by someone will be saved forever and visible to others?
Given that your API uses authentication, you will need authentication in order for Swagger UI to make calls. If you do not separate authenticated users' data, then that's a problem in your API, not a problem of Swagger.
Consider that Swagger UI presents you with a visual way to call your API. Anything you can do with Swagger, you can do with any REST/HTTP client, and so can your potential consumers.
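To make that concrete, here is a rough TypeScript sketch of the kind of request Swagger UI issues when you click "Try it out"; the endpoint, payload, and token are placeholders.
// The same call Swagger UI would make on your behalf, done with a plain
// HTTP client. Everything below is illustrative, not a real API.
async function createWidget(): Promise<unknown> {
  const response = await fetch("https://api.example.com/v1/widgets", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer YOUR_TOKEN",
    },
    body: JSON.stringify({ name: "dummy widget" }),
  });
  return response.json();
}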

How to call the Google NLP API from a Google Chrome extension

My aim is to select some text on a web page, start a Google Chrome extension, and pass the text to a Google Cloud API (the Natural Language API in my case).
I want to do some sentiment analysis and then get the result back to mark/highlight positive sentences in green and negative ones in red.
I am new to this and do not know how to start.
The extension consists of a manifest, a popup, etc. How should I call an API from there that does natural language processing?
Should I create a Google Cloud application with an API key to call? In that case I would have to upload my credentials, right?
Sorry, I know this sounds a bit confusing, but I just don't know how to bring these two things together and would be more than happy about any help.
The best way to authenticate your app will depend on the specific needs and use cases of your application. You can see an overview of all the different methods here.
If you are not planning on identifying users or on using a back-end server that handles authentication (as I assume is your case), the best option would indeed be to use API keys. They do not identify the user, but they are enough for the Natural Language API.
To do this you will need to create an API key for the services you want and add the necessary restrictions to make the key as secure as possible. Detailed instructions on how to do this and how to use the key in a url can be found here.
The API call could be made from within the Chrome extension with any JavaScript method capable of performing POST requests. For example using XMLHttpRequest or the Fetch API. You can find an example of the parameters that need to be included in the request here.
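For illustration, a rough TypeScript sketch of such a request with the Fetch API; the API key is a placeholder, and you should verify the exact request shape against the current Natural Language API documentation.
// Sends the selected text to the analyzeSentiment endpoint using an API key.
const API_KEY = "YOUR_API_KEY"; // placeholder

async function analyzeSentiment(text: string): Promise<number> {
  const response = await fetch(
    "https://language.googleapis.com/v1/documents:analyzeSentiment?key=" + API_KEY,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        document: { type: "PLAIN_TEXT", content: text },
        encodingType: "UTF8",
      }),
    }
  );
  const result = await response.json();
  // documentSentiment.score runs roughly from -1.0 (negative) to 1.0 (positive).
  return result.documentSentiment.score;
}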
You may run into CORS issues when making the request directly from the extension. I recommend reading this answer, where a couple of workarounds for these issues are suggested.

What are the design patterns for REST API automation using Java?

I am new to a REST API automation project; as part of it I have learned to use Jayway REST Assured instead of the Jersey client. Using the protocol methods, getting a response to parse, and checking the required data is not the problem.
Now,
I want to explore more and set the project up like a pro, using well-structured Java classes or suitable class designs.
I want to use this project for load testing.
I want to learn parameterization, i.e., the first request's response may be the input to the fourth request (e.g. a login token used for subsequent requests).
I also want to know how to feed input data from external files.
Note: I have searched for sample projects on other channels, but they do not match my requirements; I spent time trying to understand those projects, but it went over my head and I could not understand their style of implementation :(
Well, there are several things that you need to learn/understand:
It is not really about design patterns; those you should learn in any case to consider yourself a good developer or software engineer in test.
REST Assured is indeed for API testing, yes, but it is not the best choice for load/performance testing; if anything, it is the last choice.
Having request-based dependencies sounds more like component or end-to-end testing, which is usually also the last choice and should be kept to a minimum.
Load/performance tools that you can choose and learn according to your preferences are (not a full list, just the ones I have used):
BlazeMeter
Gatling
JMeter
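Regarding the parameterization point in the question (the response of one request feeding a later one), the basic idea looks roughly like this. The sketch below uses TypeScript and fetch rather than REST Assured, and every URL and field name is a placeholder; the same flow can be expressed in REST Assured by extracting the token from the first response and reusing it in the next request.
// Chain two requests: extract a token from the login response and reuse it.
async function loginThenFetchOrders(): Promise<unknown> {
  const loginResponse = await fetch("https://api.example.com/login", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ user: "alice", password: "secret" }),
  });
  const { token } = await loginResponse.json();

  // Feed the extracted value into the subsequent request.
  const ordersResponse = await fetch("https://api.example.com/orders", {
    headers: { Authorization: "Bearer " + token },
  });
  return ordersResponse.json();
}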

How to integrate wit.ai with my own chatbot application

I would like to create my own web chatbot, and I'd like to integrate my app with wit.ai for natural language classification. I need to know how to integrate the wit.ai service (through an API call) with my application (any language in the back end). I am using C# on the front end. I have gone through the integration part posted on the wit.ai website, but I don't know how to connect it. Could anyone explain the integration details briefly?
I think the short answer is that it's similar to how you would call any other API from your application's server components. Wit exposes multiple APIs, such as message, speech, and converse, which you can call by passing the authorization token and the other payload, and then make use of the API response in your application (a raw call is sketched after the list below).
You can use the message API if you are only interested in extracting the intent and other attributes of the sentence.
Use speech for building voice-based applications.
Use converse if you want to build a somewhat smarter app. Currently you can only pass text to the converse API; hopefully they will introduce a voice option for this soon.
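For illustration, a raw call to the message API looks roughly like this (a TypeScript/fetch sketch; the server access token and API version parameter are placeholders, so check the current wit.ai HTTP API documentation).
// Calls the wit.ai /message endpoint directly with the Authorization header.
const WIT_TOKEN = "YOUR_SERVER_ACCESS_TOKEN"; // placeholder

async function classifyMessage(text: string): Promise<unknown> {
  const url = "https://api.wit.ai/message?v=20200513&q=" + encodeURIComponent(text);
  const response = await fetch(url, {
    headers: { Authorization: "Bearer " + WIT_TOKEN },
  });
  // The response describes the detected intent and other attributes of the text.
  return response.json();
}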
Now, to make things simpler, they have also provided SDKs in various languages, such as node-wit and pywit. So if you want to build your server-side logic on Node.js or Python, you can use these SDKs. The advantage is that you don't have to manage raw API calls; the SDK handles all of that. Another big advantage is that you can use the runActions method, which encapsulates the converse API and makes things simpler. If you want to build in Node.js, the messenger example is a good starting point. You can borrow all of its logic and concepts in your app and replace the FB-related calls with your custom bot. For Python you can look at the link below:
https://github.com/wit-ai/pywit/pull/55
Also, you can explore frameworks like Botkit if you plan to integrate wit with other chat platforms such as FB Messenger or Slack, as these frameworks provide more flexibility and make it easy to switch to a different chat platform in the future. But they don't seem to properly support wit's converse API.
You are specifically looking for integration details. Since you are using C# for the front-end app, naturally the best option would be to use C# for the back end as well, in which case you are left with calling the wit APIs directly from your back end, as I think there are no SDKs for C#. If you want to use an SDK in Node or Python, you will have to build a REST-based back end (for example) that can be invoked from your C# application. I am currently working on a Node.js app and integrating it with wit using node-wit. I can share some code once it's ready, but I don't know when I will be able to finish it. For bootstrapping my application I have used this node application. If you have some understanding of Node, you can look at the /server/controllers logic. Similar to that application, I have built a witController which uses runActions to interact with wit, and I call it from the front end when the user submits a message to the bot. The biggest challenge with runActions is to figure out a way to send the wit response back to your front end and get a follow-up response from the user. Wit sends the response via the send method, as you can see in node-wit's messenger example.
Hope this helps!

Integration testing: Mock external API vs. use external API sandbox

We're required to use the API of an external partner. The API is in good shape, and we have access to a sandbox environment we can use for automated testing.
We already test every single call of the external API using unit tests but are unsure regarding best practices for integration tests when it comes to complex operations on the side of the external partner.
Example: Every user of our service also got a user object at our external partner. When performing external API call X on this user object, we expect object Y to appear inside collection Z of this user (which we have to query using a different call).
What are best practices for testing cases like this?
Mock the external API as much as possible and rely on the unit tests to do their job? Advantages: tests run fast and independently of an internet connection. Disadvantages: mistakes in our mocks could lead to false positives.
Integrate the external API sandbox and run every integration test against it. Advantages: Close to real life API interactions. Disadvantages: Tests can only be run with an open internet connection and take more time.
Use a hybrid of mocked and sandbox data, set a boolean to switch between the internal (=mocked) and external (=sandbox) environment when required. Advantages: Reliable tests. Disadvantages: Could be a pain to set up.
Other best practices?
Thanks!
Related: How are integration tests written for interacting with external API? However, the answer "You don't. You have to actually trust that the actual API actually works." is not sufficient in our opinion.
[EDIT] We fear that integration testing only against our assumptions about how the external API should work (even if they are based on unit tests), and not against the actual API, will leave us with false positives. What we need is a test that verifies that our assumptions (mocks) are actually correct, not only in the context of unit tests but also in the context of complex operations with several steps.
Validation might be a good example: what if we mess up the integration code and send malformed data, or data that does not make any sense in the context we send it in because we missed a step? Our mock API, which does not validate (or only to a very limited extent), would still return valid data instead of passing on the error we would receive from the real API.
I believe there are two levels of verification we need to do when we interface with an external API:
API verification: verify that the API works according to its specs and/or our understanding of it.
App functionality verification: verify that our business logic works as expected against an API that passes API verification.
In our case, we use a mock API and run the API verification against both the real and the mock API.
The mock API allows us to isolate any runtime errors/exceptions to our app functionality only, so we don't blame any external party for our own issues.
The same API verification is executed against both the real and the mock API, to make sure that the real one works the way we expect and that the mock one mimics the real one correctly.
If, along the way, the external API changes, the API verification may turn red, triggering changes in the mock API. Changes in the mock API may make the app verification turn red, triggering changes in the app implementation. This way you (ideally) never miss a gap between the external API and the app implementation.
Another benefit of having a mock API plus API verification is that your developers can use it as documentation/a specification of how the API is supposed to work.
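As a minimal sketch of that idea, the same verification can be pointed at either the mock API or the real sandbox via a base URL; the endpoints, IDs, and response fields below are placeholders, and no particular test framework is assumed.
import { strict as assert } from "node:assert";

// The verification from the example: call X on a user, then expect Y to
// appear in collection Z (queried with a different call).
async function verifyCallXAddsYToZ(baseUrl: string): Promise<void> {
  const callX = await fetch(baseUrl + "/users/42/actions/x", { method: "POST" });
  assert.equal(callX.status, 200);

  const collectionZ = await fetch(baseUrl + "/users/42/collections/z");
  const items: Array<{ id: string }> = await collectionZ.json();
  assert.ok(items.some((item) => item.id === "y"), "expected object Y in collection Z");
}

// API verification: run the identical check against both environments, so the
// mock is kept faithful to the real sandbox.
async function main(): Promise<void> {
  await verifyCallXAddsYToZ("http://localhost:8080");           // mock API
  await verifyCallXAddsYToZ("https://sandbox.partner.example"); // real sandbox
}

main().catch(console.error);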