Overview of QA processes in Amazon’s Search Experience Team

Amazon is a fast-moving company, but one thing we don't neglect is our testing framework. Over the years we've built up a testing structure that lets us confidently push changes out to the live site. The Search Experience team focuses on improving the quality, speed, and accuracy of search results on Amazon.com, with an eye toward improving revenue.

Our team methodically tracks search metrics over time and runs search simulations on our Hadoop clusters to determine the optimal search algorithms. We are passionate about improving speed, quality, and relevance. Any irrelevant search result is treated as a critical defect on our team.
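To make the idea of a tracked relevance metric concrete, here is a minimal sketch of one common measure, precision at k (the fraction of the top-k results that are relevant). This is an illustrative example only: the function, the query, and the result IDs are hypothetical, not the team's actual metrics or data.

```python
def precision_at_k(results, relevant, k):
    """Fraction of the top-k search results that are judged relevant."""
    top_k = results[:k]
    if not top_k:
        return 0.0
    return sum(1 for r in top_k if r in relevant) / len(top_k)

# Hypothetical results for a single query, with a set of known-relevant IDs.
results = ["B001", "B002", "B003", "B004", "B005"]
relevant = {"B001", "B003", "B004"}

print(precision_at_k(results, relevant, 3))
```

A metric like this can be computed per query, aggregated across a large query set in a batch job, and plotted over time so that regressions in relevance show up as a dip in the trend line.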

All of our server-side automation is handled by Python scripts, and we have JavaScript and Perl test drivers that exercise functionality from the front end. This multi-language approach is required by the specific technologies Amazon uses, which I won't dwell on too much because they are trade secrets.

One thing we drive on our team is test automation. Because manual tests cost a lot of money for very little return, we shamelessly automate everything we can. We want every test to be automated, with the important metrics returned and visible on a dashboard, so that we can investigate problems before changes are pushed live.
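The pattern described above, an automated check that both gates a change and emits a metric for a dashboard, might look roughly like this sketch. Everything here is an assumption for illustration: the threshold, the function name, and the idea of printing a JSON record in place of a real metrics pipeline.

```python
import json
import time

# Hypothetical quality gate: any irrelevant result in the top 10 is a defect.
MAX_IRRELEVANT_AT_10 = 0

def run_search_check(query, results, relevant_ids):
    """Check a query's top-10 results and emit a dashboard-style metric record."""
    irrelevant = [r for r in results[:10] if r not in relevant_ids]
    record = {
        "query": query,
        "timestamp": int(time.time()),
        "irrelevant_at_10": len(irrelevant),
        "passed": len(irrelevant) <= MAX_IRRELEVANT_AT_10,
    }
    # A real pipeline would ship this record to a metrics store backing the
    # dashboard; here we simply print it as a JSON line.
    print(json.dumps(record))
    return record["passed"]

run_search_check("usb cable", ["B001", "B002", "B003"], {"B001", "B002", "B003"})
```

Wiring checks like this into the deployment pipeline is what lets problems surface on the dashboard before a change reaches the live site.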

In a later post I’ll mention a specific testing framework that is very useful.
