Up until now, assessing whether application performance is “good” or “bad” has been a job for performance experts alone, mostly because they are the ones most familiar with an application’s performance history. That could change if everyone on the team (Dev and Ops) could easily compare, on a single graph, response times across all recent load tests: nightly builds, big releases, whatever.
With a tool set that finally delivered a richer performance history, the entire team could plainly see that Response Time from last night’s load test …
… either matched that of all previous releases … and it’s all GOOD
… or was radically slower than all previous releases, and performance is BAD.
There would be no lingering questions about when performance got so bad, because the entire team could always compare response times against every load test in CI/CD, including those from past releases.
This talk introduces the “applesToApples” solution, which provides metrics from the load generator, including response time, throughput, error counts, the number of concurrent load threads, and more: https://github.com/eostermueller/applesToApples
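The GOOD/BAD comparison above can be sketched in a few lines. This is a minimal, hypothetical example, not applesToApples code: the data, the `is_regression` helper, and the 25% threshold are all illustrative assumptions about how a team might flag a slow nightly build against its history.

```python
# Hypothetical sketch: flag a response-time regression between load tests.
# Numbers and threshold are illustrative, not applesToApples output.

def is_regression(history_ms, latest_ms, threshold=1.25):
    """Return True if the latest test's mean response time exceeds
    the historical mean by more than `threshold` (here, 25% slower)."""
    baseline = sum(history_ms) / len(history_ms)
    latest = sum(latest_ms) / len(latest_ms)
    return latest > baseline * threshold

# Mean response times (ms) from previous nightly load tests.
previous_tests = [210, 205, 215, 208]
good_night = [212, 209, 214]   # roughly matches history: GOOD
bad_night = [480, 455, 470]    # radically slower: BAD

print(is_regression(previous_tests, good_night))  # False
print(is_regression(previous_tests, bad_night))   # True
```

The point is that once the history is available on one graph (or in one data set), the good/bad call becomes a trivial comparison anyone on the team can make.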
Watch the video.