Does your team deal with performance issues very late in the development cycle? Does this lead to a lot of unplanned work in your sprints? What if I told you that your team can validate performance-related hypotheses right within your sprints? This is what we have been practising on various teams, and in this talk I will share our experience.

**Problem Statement:** Performance testing has traditionally been an activity done in a staging or production environment (for the brave) by a team of expert performance testers. In my experience, this approach has several issues.

* Typically high cycle time between test runs (the time between making code changes and having those changes deployed and tested in the performance test environment), which means developers cannot experiment quickly.
* The test design may be disconnected from the system design, because the people testing the system may not have a deep understanding of the application architecture.
* Performance benchmarking and tuning become an afterthought, instead of being baked into the design and constantly validated during development.

**Solution:** Shift left your performance testing.

* Enable developers to run performance tests on their own machines, so that they get immediate feedback as they make code changes.
* Identify issues early and iterate over solutions quickly.
* Defer only a small subset of special scenarios to the expert team or higher environments.

**Talk is cheap, show me code**

I will share the lessons learned from applying the Shift Left principle to API performance testing, and how we codified the approach into a reusable open-source framework called [Perfiz](https://github.com/znsio/perfiz) so that any team can take advantage of it.

**Topics that will be covered**

* Challenges in running performance tests early in the development cycle
* A few examples of Shift Left in action
* Hypothesis invalidation strategy: a scientific approach to reducing dependence on higher environments
* Avoiding premature performance optimisations and moving to data-driven architecture decisions with rapid local performance testing
* What makes a good API performance testing framework, in the context of Shift Left?
    * It is containerised, and runs well on a local laptop, in higher environments, or in hybrid mode
    * It leverages existing API tests instead of duplicating them as load test scripts
    * It helps developers express load as a configuration DSL, without having to learn yet another tool
    * It is not just a load generator: it collects data and ships pre-set dashboards with basic metrics
    * It is code, not documentation
* What makes a good performance test report, in the context of Shift Left?
    * To begin with, it is a live monitoring dashboard, not an after-the-fact report
    * It is visual (graphs and plots) rather than tabular
    * It merges load data and application performance metrics into a single visual representation over a shared time-series x-axis, so that correlations are clear
* Perfiz demo: an open-source tool that embodies the above thought process
    * From API test to performance test suite in less than a minute, with just YAML configuration
    * Pre-built Grafana dashboards to get you started
    * Containerised setup with Docker, so there is nothing to install locally
    * Prometheus and other monitoring tool hooks to observe application performance
* Perfiz in higher environments
* Perfiz architecture overview, and how you can extend, adapt, and contribute back to the community
* "Shift Left" limitations: repeatability, "my machine vs. your machine", etc.
* How to turn your existing API tests into a performance test suite using only YAML configuration
* Monitoring application performance metrics with Prometheus and visualising load test results in Grafana to gain actionable insights
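To make the "load as a configuration DSL" idea concrete, here is a sketch of what such a YAML configuration could look like. This is illustrative only: the key names below are assumptions for the sake of the example, not Perfiz's exact schema (see the Perfiz README for the real configuration). The point is that an existing API test is reused as-is, and the load shape is described as data rather than as a separate load-test script.

```yaml
# Illustrative sketch only -- key names are assumptions, not Perfiz's exact schema.
simulations:
  - name: baskets-api-create
    # Reuse an existing functional API test as the load scenario
    apiTest: src/test/karate/baskets/create.feature
    loadPattern:
      # Ramp up gradually so you can watch the dashboards react
      - type: rampUsersPerSec
        from: 1
        to: 50
        duration: 2m
      # Hold steady state long enough to measure p95/p99 latency
      - type: constantUsersPerSec
        rate: 50
        duration: 10m
```

Because the load profile is plain data, a developer can tweak the ramp or the steady-state rate and re-run locally in seconds, which is exactly the fast feedback loop the Shift Left approach calls for.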
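The correlation described above, load data and application metrics on a shared time axis, typically comes down to plotting Prometheus queries like the following side by side in Grafana. The metric names here assume a Micrometer-instrumented JVM service; your application's metric names will differ.

```promql
# p95 server-side latency, assuming Micrometer-style histogram buckets
histogram_quantile(0.95, sum(rate(http_server_requests_seconds_bucket[1m])) by (le, uri))

# Request throughput per endpoint over the same window
sum(rate(http_server_requests_seconds_count[1m])) by (uri)
```

Plotting latency and throughput against the load generator's injected-users graph makes it immediately visible at which load level latency starts to degrade.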