A Practical Guide to Surviving AWS SAM
Part 7 — Testing
So far, we have experimented with building applications for an imaginary world where everything works on the first attempt, nothing ever changes, and nothing breaks. Well, sorry to disappoint you, but the real world looks a bit different, and there is an important aspect we haven’t touched yet: testing.
Loved or hated, seen as a waste of time or a savior, you should always add some level of testing for your application. Unit, integration, end-to-end, infrastructure, performance, penetration — there are plenty of angles to test an application and hopefully in the end you will have covered them all.
We will not focus on the theory of testing; instead, we will look at a couple of examples of how to perform unit/integration tests for serverless applications written with the SAM framework.
For this first part, we will not use SAM helper functions, just good old vanilla pytest.
As usual, you can find the source code on my GitHub page.
Let’s assume we have a simple application that exposes two APIs backed by two Lambda functions, one used to create an entry in a DynamoDB table and another to retrieve the entry created, plus a layer to build a response compliant with API Gateway requirements (even if HTTP APIs can handle it on their own).
Code for the first API could look something like this:
and code for the second API could look like this:
The code is very simple because I want to focus on the testing side. For production code, always use best coding practices: handle errors, use a logger, validate input, etc. So, let’s jump into some test cases.
The first test that we can perform is the happy path for data insertion in DynamoDB. Given the simplicity of the application, this boils down to invoking the lambda_handler function directly, and given how simple it is to spin up a DynamoDB table, we can interact directly with the table deployed on AWS, avoiding mocks or a local copy of the service (obviously, the table should be created with SAM). One point of attention: make sure the role you use to interact with AWS has enough permissions to read and write on DynamoDB, and that it is correctly configured locally.
This test reads a json file containing a sample API Gateway event, invokes the lambda_handler with it, checks the status code of the response, and finally verifies that the data has really been created in the AWS DynamoDB table as expected. One important thing to notice is the lambda_context parameter passed to the test: it is a fixture, a function executed before the test whose return value is injected as a test parameter; in our case, we use it to mock the Lambda context object. Fixtures are very handy, and we could also have used this construct to parametrize the construction of the Lambda event.
Another thing to pay attention to is how to tell Python where all the modules are. Once deployed, our Lambda knows where its code, dependencies, and layer are, but locally we have to do some configuration. Python finds modules through PYTHONPATH, a list of system paths where the interpreter looks for modules, and there are plenty of ways to structure your code and edit this list. For example, if you use PyCharm as your preferred IDE, you can mark a directory as a source root directly from the IDE by right-clicking the desired folder and selecting Mark Directory as --> Sources Root. For our structure, it is important to mark the tests and layer folders as source roots: the first to correctly resolve the Lambda code from the test case, and the second to resolve the shared code in the layer. Another alternative for resolving the Lambda code from the test case would have been relative imports with the .. notation. Finally, if you need more flexibility, you can edit the PYTHONPATH env var directly, or do it from code with the sys.path.append() method. And there are still plenty of other ways to do it.
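As a sketch of the sys.path route, a conftest.py at the project root could prepend the relevant folders before the tests import anything; the folder names below are assumptions about the project layout:

```python
# conftest.py (project root): a hypothetical alternative to marking
# folders as Sources Root in the IDE. pytest imports conftest.py
# first, so the Lambda and layer folders land on sys.path before
# the test modules try to import from them.
import os
import sys

ROOT = os.path.dirname(os.path.abspath(__file__))
for folder in ("create_item", "get_item", "layer"):
    sys.path.insert(0, os.path.join(ROOT, folder))
```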
The second test that we can perform is the happy path for data retrieval. We can proceed in three ways: mocking the DynamoDB response, setting up an entry in DynamoDB, or running a local copy of DynamoDB. Given how simple it is to use the real table, we can take an approach similar to the previous one, invoking the lambda_handler function directly and populating the table upfront, either through the DynamoDB SDK or by invoking our first function, which the previous test guarantees to be working. Here, again, the pytest fixture construct comes in handy.
This way, we can insert a test entry in the DynamoDB table, execute the tests, and clean up the table afterwards. The yield statement suspends the fixture until test execution completes, so the cleanup runs automatically only after the tests, while the scope="module" property makes pytest execute the fixture only once for all the tests in the module.
Another approach is to use the boto3 built-in stub capabilities. The idea is to replace the table client with a stub, in our case the client behind the table variable, and use the stubber to force a fake response for the get_item method. The built-in boto3 stubber also validates that the structure we provide is compliant with the real one. Then we only have to activate the stubber and invoke the usual lambda_handler which, once it reaches the get_item call, will return the stubbed fake response instead of hitting the DynamoDB table. With this approach, we are doing a real unit test, without touching third-party services.
We have only scratched the surface of the world of testing; in future chapters, we will see how to increase automation, along with other ways of testing our application.
More content at bip.xTech