API testing - Intro
Jan 6, 2024 · ~7 min read
How APIs work and what they are:
APIs are pieces of code, accessible in various ways (often publicly), that apply some logic to whatever is given to them. A short example follows the definitions below.
- What is sent is called a payload.
- What is returned is called a response.
- A payload sent in the URL is generically called a URL parameter.
- The receiver of the payload is called the endpoint.
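To make the vocabulary concrete, here is a minimal sketch in Python using the requests library; the URL and field names are made up for illustration:

```python
import requests

# The endpoint is the receiver of the payload (hypothetical URL).
ENDPOINT = "https://api.example.com/users"

# A payload sent in the URL: a URL parameter.
url_params = {"page": 1}

# A payload sent in the body of the request.
body = {"name": "Ada", "email": "ada@example.com"}

# The call: parameters and body go out, a response comes back.
response = requests.post(ENDPOINT, params=url_params, json=body, timeout=5)

print(response.status_code)  # e.g. 201 if the user was created
print(response.json())       # the response payload, parsed from JSON
```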
Thank you for reading the TL;DR section. From here on we will go in-depth on how to debug them and how to think about testing APIs.
How to think:
APIs are not only things accessible over the internet through URLs. API refers to any collection of standardized names that, when called, do something. For example, the API of a programming language is the set of instructions the language ships with.
Gotchas:
When an API returns an error, it is not always the code itself that produced it. Something in front of the call can alter or block the incoming payload; something after the call can alter or block the response. So the error can be anywhere: in the code, in the payload, in the response, or in any hop along the chain of calls.
In a production environment the response may be cached for a while. So even if the code is fixed quickly, the response may still come back as an error under some circumstances, until the cache expires or is invalidated.
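One way to tell whether you are looking at a cached error is to inspect the caching headers on the response. A minimal sketch, assuming the server or CDN emits common headers such as Cache-Control and Age (the hit/miss header name varies by provider, and the URL is hypothetical):

```python
import requests

response = requests.get("https://api.example.com/users/42", timeout=5)

# Standard headers that hint the response came from a cache.
print(response.headers.get("Cache-Control"))  # e.g. "max-age=300"
print(response.headers.get("Age"))            # seconds the response sat in a shared cache

# Many CDNs add their own hit/miss header; the exact name varies by provider.
print(response.headers.get("X-Cache"))        # e.g. "HIT" or "MISS" on some CDNs
```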
Lifetime of a call:
The HTTP API endpoint is not the server.
- The request leaves your computer.
- Wanders over the internet searching for the target.
- Reaches the target.
- Is received by the programs that expect requests and is forwarded between such programs and application layers.
- Is sent to the program that holds the code of the endpoint.
- Is transformed into whatever structures that programming language builds from the payload.
- Is passed along to the function/method inside that program.
- An answer is returned.
- The answer makes it all the way back to your computer.
So each of those steps can fail independently, in its own way. If you see an error, first try to pinpoint which step is the problematic one. The sketch below shows one way to do that from the client side.
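Here is a sketch using Python's requests library; each exception maps roughly to one of the steps above, and the URL is hypothetical:

```python
import requests

def probe(url, payload):
    """Classify which step of the call's lifetime failed."""
    try:
        response = requests.post(url, json=payload, timeout=5)
        response.raise_for_status()  # turns 4xx/5xx statuses into exceptions
        return response.json()       # parsing the answer
    except requests.exceptions.ConnectionError:
        print("Never reached the target: DNS failure or connection refused.")
    except requests.exceptions.Timeout:
        print("Reached out but got no answer in time: network or overloaded server.")
    except requests.exceptions.HTTPError as err:
        print(f"The target answered with an error status: {err.response.status_code}.")
    except ValueError:
        print("Got an answer, but the body is not the JSON we expected.")

probe("https://api.example.com/users", {"name": "Ada"})
```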
How to debug:
- Check the payload. Is it what you expect? Is it what the endpoint expects? Is it what the endpoint expects in the format it expects?
- Check the response. Is it what you expect? Is it what the endpoint is supposed to return? Is it what the endpoint is supposed to return in the format it is supposed to return?
- Check the code. Is it what you expect? Is it what the endpoint is supposed to do? Is it what the endpoint is supposed to do in the way it is supposed to do it?
How to test:
Write tests for every possible payload. The purpose of those tests is to stress the endpoint under test. When coding the validations of the responses be very strict about the types of values received and the parameters.
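As an illustration of strict validation, here is a sketch in plain Python that rejects values which merely look right; the expected response shape is an assumption made for the example:

```python
def validate_user_response(data):
    # Be strict: isinstance(True, int) is True in Python, so exclude bools explicitly.
    assert isinstance(data, dict), "response must be a JSON object"
    assert set(data.keys()) == {"id", "name", "active"}, "unexpected or missing keys"
    assert isinstance(data["id"], int) and not isinstance(data["id"], bool), "id must be an integer"
    assert isinstance(data["name"], str) and data["name"], "name must be a non-empty string"
    assert isinstance(data["active"], bool), "active must be a boolean"

validate_user_response({"id": 7, "name": "Ada", "active": True})    # passes
validate_user_response({"id": "7", "name": "Ada", "active": True})  # fails: id is a string
```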
Here is a list of ideas for negative scenarios on top of the positive scenario (a parameterized test sketch follows the list):
- Including using the wrong format
- The wrong type
- The wrong value - in case of numbers use negative numbers, floats where integers are expected, numbers as strings, and empty strings
- The wrong length - in case of arrays use empty arrays, arrays with one element, arrays with more elements than expected. In case of numbers try values outside the interval. In case of strings use empty strings, strings with one character, strings with more characters than expected. In case of objects use empty objects, objects with one key, objects with more keys than expected.
- The wrong encoding
- The wrong order - in case of arrays try to send the elements in a different order than expected. In case of objects try to send the keys in a different order than expected.
- The wrong number of parameters - in case of functions try to send more parameters than expected, fewer parameters than expected, or no parameters at all. This might sound pointless, but as the tested code evolves, arguments will be added or removed, and these tests will catch the resulting errors.
- The wrong number of calls
- The wrong order of calls - especially if the calls are asynchronous, the order of the calls matters, or the app is message-based
- The wrong timing of calls - especially if the app is latency sensitive and other calls are expected to be made in a certain time frame
- The wrong frequency of calls - especially if the app is rate limited or there is a cache invalidation mechanism. Infrequent calls will always fall outside the cache window and put extra load on the system.
- The wrong size of calls - for example if the payload is a file, try to send a file that is too big or too small. If the payload is a string or a binary try to send a string that is too big or too small.
- Wrong body altogether - an app error instead of the normal payload
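Many of these payload scenarios can be driven from a single parameterized test. A sketch with pytest and requests; the endpoint and field names are assumptions:

```python
import pytest
import requests

ENDPOINT = "https://api.example.com/users"  # hypothetical endpoint under test

# Each case: a bad payload and the scenario it exercises.
NEGATIVE_PAYLOADS = [
    ({"age": -1},         "wrong value: negative number"),
    ({"age": "30"},       "wrong type: number as string"),
    ({"age": 3.5},        "wrong type: float where integer expected"),
    ({"tags": []},        "wrong length: empty array"),
    ({"name": ""},        "wrong length: empty string"),
    ({},                  "wrong number of parameters: none at all"),
    ({"age": 30, "x": 1}, "wrong number of parameters: extra field"),
]

@pytest.mark.parametrize("payload,scenario", NEGATIVE_PAYLOADS)
def test_endpoint_rejects_bad_payload(payload, scenario):
    response = requests.post(ENDPOINT, json=payload, timeout=5)
    # The endpoint should reject every bad payload with a client error, not a 500.
    assert 400 <= response.status_code < 500, scenario
```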
Build an API mock server that implements the following ideas. Point your application at the mock server and use the automation suite to trigger those responses (a consumer-side test sketch follows the list):
- Write a test for every possible response:
- Including using the wrong format
- The wrong type
- The wrong value
- The wrong length
- The wrong encoding
- The wrong order
- The wrong combination
- More parameters than expected in the model
- Without the optional parameters
- With some missing mandatory parameters
- Wrong body altogether - an app error instead of the normal payload
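On the consumer side, each broken response becomes a test asserting that the application fails gracefully. A sketch, assuming a hypothetical client-side parser get_user_name; the broken bodies mirror the scenarios above:

```python
import pytest

def get_user_name(response_body):
    """Hypothetical client-side parser for a /users response."""
    if not isinstance(response_body, dict) or not isinstance(response_body.get("name"), str):
        raise ValueError("malformed user response")
    return response_body["name"]

# Broken responses the mock server can be told to return.
BROKEN_RESPONSES = [
    {"name": 42},             # wrong type
    {"nickname": "Ada"},      # missing mandatory field
    "internal server error",  # wrong body altogether
]

@pytest.mark.parametrize("body", BROKEN_RESPONSES)
def test_client_rejects_broken_response(body):
    # The consumer must surface a clean, typed error instead of crashing elsewhere.
    with pytest.raises(ValueError):
        get_user_name(body)
```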
What is a mock server and how to build one:
- It is a normal application with very simple logic that exposes the same API endpoints as the real application.
- The mock server can be built in the same language as the real application or in a different language.
- Inside every endpoint you can put some logic in an IF statement that checks something on the payload and returns a different response based on the above scenarios. Let's say: if the user name starts with 'a', always call the real server or return the positive answer; for 'b', return values out of range; for 'c', a wrong type; and so on (a minimal sketch follows this list).
- Output the mock server logs to a file and use that file to build the automation suite. This way you can test the real server and the mock server in parallel and compare the results.
- Output the mock server logs to a logging system and build alerts based on them. This way you can develop alerts for your real server based on the mock server logs, thus being very proactive when it comes to the errors your customers will encounter in production.
- Deploy each consumer of this API on an environment that points to the mock server instead of the real server. This way you can test the consumer against the mock server or the real server in parallel, based on a config file. Your frontend tests will not have to be very complex or different; they will just contain A LOT of extra validations that you never dreamed of.
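Here is a minimal sketch of such a mock server in Python with Flask, using the username-prefix trick described above; the endpoint path and response shapes are assumptions:

```python
import logging
from flask import Flask, jsonify

app = Flask(__name__)
logging.basicConfig(filename="mock_server.log", level=logging.INFO)
log = logging.getLogger("mock")

@app.route("/users/<username>", methods=["GET"])
def get_user(username):
    log.info("GET /users/%s", username)  # these logs feed the automation suite and alerts
    # Route the scenario based on the first letter of the username.
    if username.startswith("a"):
        return jsonify({"id": 1, "name": username, "active": True})     # positive answer
    if username.startswith("b"):
        return jsonify({"id": -999, "name": username, "active": True})  # value out of range
    if username.startswith("c"):
        return jsonify({"id": "1", "name": username, "active": "yes"})  # wrong types
    return "internal error", 500  # wrong body altogether

if __name__ == "__main__":
    app.run(port=5001)
```

Pointing the consumer at http://localhost:5001 instead of the real server is then just a config change.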
Credits:
- Initial text: gabriel@qality.tech