Why should you care how assays are developed?

24/03/2021


Thanks to high-throughput automation, delivering large-scale library screens is very much a matter of routine, but bottlenecks have an annoying habit of shifting. Has the pressure to keep pace now shifted upstream to your assay development and optimization laboratories? If so, maybe it’s time to look at the assay development workflow to see if any process enhancements can be made.

What’s easily addressable?

The process of choice and the tool of choice are easy places to start. A traditional One Factor At a Time (OFAT) approach to assay development is inherently slow, as the results of one experiment feed into the design of the next. Here, all variables except one are held constant in order to see how that single variable affects your system, as sketched below. Realistically, a maximum of around 10 different factors may be explored during any given project by an extremely conscientious scientist using manual hand pipettes and some basic liquid handling for bulk dispensing.
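
To make the scaling problem concrete, here is a minimal OFAT sketch in plain Python. The factor names, levels and the read-out function are purely illustrative assumptions, not a real assay protocol; each factor is swept in turn while everything else sits at a baseline value.

```python
# Hypothetical baseline conditions and factor levels (illustrative only).
baseline = {"enzyme_nM": 5, "substrate_uM": 10, "dmso_pct": 1, "incubation_min": 30}

levels = {
    "enzyme_nM": [1, 2, 5, 10],
    "substrate_uM": [5, 10, 20, 40],
    "dmso_pct": [0.5, 1, 2],
    "incubation_min": [15, 30, 60],
}

def run_assay(conditions):
    # Stand-in for the real plate read-out (signal window, Z' etc.);
    # this made-up formula is only here so the sketch runs end to end.
    return conditions["enzyme_nM"] * conditions["substrate_uM"] / (1 + conditions["dmso_pct"])

results = []
for factor, factor_levels in levels.items():
    for level in factor_levels:
        conditions = dict(baseline, **{factor: level})  # everything else stays at baseline
        results.append((factor, level, run_assay(conditions)))

# 14 runs, one sweep per factor, and every sweep assumes the baseline values
# chosen for the other factors were already the right ones.
print(len(results), "runs")
```

Because each sweep only ever changes one factor, the data say nothing about what happens when two factors move together, which is exactly where many assay surprises live.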

Whilst trusty hand-held pipettes offer great versatility (any volume of any liquid can be pipetted into any well at any time), the fact that this traditional tool of choice is ‘hand-held’ means that experiments are limited in scale and restricted to low-density microplates using relatively large and costly volumes of reagents. As a result, assays can take months to develop and, frustratingly, not all will ever achieve the desired assay quality/signal window or be feasible from a cost perspective.

Could Design of Experiments (DoE) help speed up the process and reduce costs?

Absolutely. Design of Experiments (DoE) is a statistical technique for planning experiments and analyzing the information they generate. It allows several experimental parameters to be varied systematically and simultaneously in order to obtain more information from fewer experiments than the traditional ‘linear’ process. From the resulting data, a mathematical model of the process under study is built. The model can be used to understand the influence of each experimental parameter on the outcome and to find the optimum combination of those parameters.
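
As a rough sketch of the idea, the snippet below (plain Python with NumPy; the two factors, their coded levels and the response values are illustrative assumptions, not real assay data) builds a two-level full factorial design, varies both factors simultaneously, and fits a simple least-squares model that estimates each factor’s main effect plus their interaction.

```python
import itertools
import numpy as np

# Two illustrative factors at two coded levels (-1 = low, +1 = high).
factors = {"enzyme": (-1, 1), "substrate": (-1, 1)}

# Full factorial design: every combination of levels (2^2 = 4 runs here;
# real designs add replicates and centre points).
design = list(itertools.product(*factors.values()))

# Stand-in responses; in practice these come from the plate reader.
response = np.array([0.8, 1.9, 1.5, 4.2])

# Fit the model y = b0 + b1*enzyme + b2*substrate + b12*enzyme*substrate
X = np.array([[1, e, s, e * s] for e, s in design])
coeffs, *_ = np.linalg.lstsq(X, response, rcond=None)

b0, b1, b2, b12 = coeffs
print(f"intercept={b0:.2f}, enzyme={b1:.2f}, substrate={b2:.2f}, interaction={b12:.2f}")
```

The fitted model can then be interrogated to find the combination of levels predicted to give the best signal window, and the interaction coefficient flags any effect that only shows up when two factors change together.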

In theory, this method should not only make the process of assay development and optimization significantly faster and more cost-effective, but it should also enable a more thorough evaluation of a greater number of assay variables, giving greater assay robustness. There is also the potential for unexpected “non-linear” interactions between components to be revealed. The upshot is that significant cost savings can be achieved, or flawed assays can be failed faster.
