Publication

Deep Q-learning for the selection of optimal isocratic scouting runs in liquid chromatography

Journal contribution - Journal article

An important challenge in chromatography is the development of adequate separation methods for complex mixtures. Accurate retention models can significantly simplify and expedite this development. The purpose of this study was to introduce reinforcement learning to chromatographic method development, by training a double deep Q-learning algorithm to select optimal isocratic scouting runs for generating accurate retention models. These scouting runs were fitted to the Neue-Kuss retention model, which was then used to predict retention factors under both isocratic and gradient conditions. The quality of these predictions was assessed against experimental data points by computing a mean relative percentage error (MRPE) between the predicted and actual retention factors. By providing the reinforcement learning algorithm with a reward whenever the scouting runs led to accurate retention models, and a penalty when the analysis time of a selected scouting run was too high (> 1 h), it was hypothesized that the algorithm should, over time, learn to select good scouting runs for compounds displaying a variety of characteristics. The reinforcement learning algorithm developed in this work was first trained on simulated data and then evaluated on experimental data for 57 small molecules, each run at 10 different fractions of organic modifier (0.05 to 0.90) and at four different linear gradients. The resulting retention models, mostly obtained from 3 isocratic scouting runs per compound, yielded MRPEs (3.77% for isocratic runs and 1.93% for gradient runs) comparable to those of retention models obtained by fitting the Neue-Kuss model to all 10 available isocratic data points (3.26% for isocratic runs and 4.97% for gradient runs) and of retention models obtained via a "chromatographer's selection" of three scouting runs (3.86% for isocratic runs and 6.66% for gradient runs). It was therefore concluded that the reinforcement learning algorithm learned to select optimal scouting runs for retention modeling: the 3 (out of 10) isocratic scouting runs it selected per compound were informative enough to capture the retention behavior of each compound.
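Two computational steps in the abstract lend themselves to a short illustration: fitting the Neue-Kuss retention model to a handful of isocratic scouting runs, and scoring the resulting model with an MRPE that feeds the agent's reward. The Python sketch below is illustrative only; it assumes the common Neue-Kuss parameterization k(phi) = k0 * (1 + S2*phi)^2 * exp(-S1*phi / (1 + S2*phi)), and the function names, starting values, reward magnitudes, and MRPE threshold are hypothetical, not taken from the paper.

import numpy as np
from scipy.optimize import curve_fit

def neue_kuss(phi, k0, s1, s2):
    # Neue-Kuss retention model: retention factor k as a function of the
    # organic-modifier fraction phi (common parameterization, assumed here).
    return k0 * (1.0 + s2 * phi) ** 2 * np.exp(-s1 * phi / (1.0 + s2 * phi))

def fit_retention_model(phi_scout, k_scout):
    # Fit the Neue-Kuss parameters to the selected isocratic scouting runs.
    # The starting values p0 are illustrative guesses, not from the paper.
    params, _ = curve_fit(neue_kuss, phi_scout, k_scout,
                          p0=(10.0, 20.0, 2.0), maxfev=10000)
    return params

def mrpe(k_pred, k_obs):
    # Mean relative percentage error between predicted and observed k.
    return 100.0 * np.mean(np.abs(k_pred - k_obs) / k_obs)

def reward(model_mrpe, analysis_time_h, mrpe_tol=5.0):
    # Hypothetical reward shaping mirroring the abstract: reward accurate
    # models, penalize scouting runs longer than one hour. Threshold and
    # magnitudes are placeholders.
    if analysis_time_h > 1.0:
        return -1.0
    return 1.0 if model_mrpe <= mrpe_tol else 0.0

# Illustrative usage with synthetic data: 3 scouting runs out of 10 fractions.
phi_all = np.linspace(0.05, 0.90, 10)          # the 10 modifier fractions
k_all = neue_kuss(phi_all, 50.0, 25.0, 3.0)    # synthetic "measured" k values
scout_idx = [1, 4, 8]                          # agent-selected scouting runs
params = fit_retention_model(phi_all[scout_idx], k_all[scout_idx])
print("MRPE over all runs: %.2f%%" % mrpe(neue_kuss(phi_all, *params), k_all))

In a sketch like this, the double deep Q-learning agent's action would be the choice of the next modifier fraction to scout, and the MRPE-based reward would steer it toward informative selections of scouting runs.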
Journal: Journal of Chromatography A
ISSN: 0021-9673
Volume: 1638
Year of publication: 2021
Keywords: Deep Q-learning, Machine learning, Method development, Reinforcement learning, Retention models
BOF key label: yes
IOF key label: yes
BOF publication weight: 3
Authors: Regional
Authors from: Higher Education
Accessibility: Open