My research projects are funded in collaboration with The Dow Chemical Company through a University Partnership Initiative (UPI) program. Many interesting problems can be reframed as nonlinear programming problems. In my research, I use nonlinear programming (NLP) techniques, usually based on maximum likelihood principles and collocation methods, to solve large-scale, constrained, nonlinear parameter estimation problems composed of differential and algebraic equations (DAEs). The problems I work on typically arise in the applied areas of reaction engineering and spectroscopy, where data are noisy and imperfect and reactions can occur over a wide range of time scales.
In situ optical spectroscopy techniques, such as Raman or infrared spectroscopy, have a wide variety of applications in both academia and industry. Because they are less intrusive than extracting a sample for analysis, these collection methods are safer and can provide more accurate information. However, decoupling spectral data into concentration profiles and pure component absorbance profiles is a nontrivial problem that requires substantial research effort and can be very challenging for multicomponent mixtures.
To obtain concentration estimates ( C ) from the spectroscopic information ( D ), self-modeling curve resolution techniques relate concentration estimates to pure species absorbance profiles ( S ) through Beer-Lambert's Law:

D = C S + E
where D is an ntp × nwp matrix of collected data with ntp time points and nwp measured wavelengths, C is an ntp × nc matrix of concentrations with nc components, S is an nc × nwp matrix of pure species absorbance profiles, and E is an ntp × nwp matrix of residuals.
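As a concrete illustration of this bilinear structure: if the concentration profiles C were known, the pure-component spectra S would follow from ordinary linear least squares at each wavelength. The sketch below uses entirely synthetic data for a hypothetical first-order A → B reaction; the rate constant, peak shapes, and noise level are all invented for illustration.

```python
import numpy as np

# Synthetic example (all numbers illustrative): a first-order reaction
# A -> B observed at ntp time points and nwp wavelengths.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 50)            # ntp = 50 time points
k = 1.2                                  # assumed rate constant
cA = np.exp(-k * t)                      # concentration of A
cB = 1.0 - cA                            # concentration of B
C = np.column_stack([cA, cB])            # ntp x nc concentration matrix

wl = np.linspace(0.0, 1.0, 80)           # nwp = 80 wavelengths
sA = np.exp(-((wl - 0.3) ** 2) / 0.01)   # Gaussian absorbance peaks
sB = np.exp(-((wl - 0.7) ** 2) / 0.02)
S_true = np.vstack([sA, sB])             # nc x nwp pure-component spectra

# Beer-Lambert's Law in matrix form: D = C S + E.
D = C @ S_true + 0.01 * rng.standard_normal((t.size, wl.size))

# With C known, S is recovered by linear least squares.
S_est, *_ = np.linalg.lstsq(C, D, rcond=None)
print(np.max(np.abs(S_est - S_true)))    # small, noise-driven error
```

In practice neither C nor S is known, which is what makes the curve resolution problem hard; this sketch only shows the matrix relationship itself.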
Once concentration estimates ( C ) are obtained, they can be used in the determination of reaction kinetics with some type of mechanistic model. Many methods exist for solving this problem and can be categorized as hard or soft modeling, typically relying on an iterative approach to fit the concentration data C and pure component absorbances S.
Alternatively, based on the work of Chen, Biegler, and Garcia-Munoz (2016), the problem of decoupling the spectral data and obtaining reaction kinetics can be solved simultaneously using nonlinear programming techniques. I investigate nonlinear extensions of this simultaneous approach for the deconvolution of Beer-Lambert's Law together with reaction kinetic information.
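A minimal sketch of the idea, again with synthetic first-order A → B data: rather than alternating between C and S, the kinetic parameter is optimized directly, with S eliminated by linear least squares at each trial value. This variable-projection shortcut is a deliberate simplification of the full simultaneous NLP, which would discretize the kinetic DAEs and solve for all unknowns at once; every number here is illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Synthetic data: first-order A -> B with an invented "true" rate constant.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 50)
wl = np.linspace(0.0, 1.0, 80)
k_true = 1.2
C_true = np.column_stack([np.exp(-k_true * t), 1.0 - np.exp(-k_true * t)])
S_true = np.vstack([np.exp(-((wl - 0.3) ** 2) / 0.01),
                    np.exp(-((wl - 0.7) ** 2) / 0.02)])
D = C_true @ S_true + 0.01 * rng.standard_normal((t.size, wl.size))

def residual(k):
    # Model concentrations implied by the kinetic model at this trial k.
    C = np.column_stack([np.exp(-k * t), 1.0 - np.exp(-k * t)])
    S, *_ = np.linalg.lstsq(C, D, rcond=None)   # optimal S for this k
    return np.sum((D - C @ S) ** 2)             # Frobenius-norm residual

res = minimize_scalar(residual, bounds=(0.1, 5.0), method="bounded")
print(res.x)  # recovered rate constant, close to k_true
```

The point of the sketch is that the kinetic parameter and the spectral deconvolution are resolved in one optimization, rather than by iterating between separate C and S fits.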
T. Krumpolc, D.W. Trahan, D.A. Hickman, and L.T. Biegler, “Kinetic Parameter Estimation with Nonlinear Mixed-Effects Models”, Chemical Engineering Journal, 2022, 136319, ISSN 1385-8947, https://doi.org/10.1016/j.cej.2022.136319.
The work involved investigating the use of nonlinear mixed-effects models to account for batch-to-batch variation in kinetic parameter estimation problems using data from multiple batch reactions. Multiple longitudinal batch experiments with time series data often exhibit correlated residuals, violating the common assumption that all batches are independent. Nonlinear mixed-effects models offer an alternative approach that accounts for the two types of random experimental variation arising from longitudinal experiments: the measurement error for each data point and the random batch-to-batch variation between experiments. A system described by Hickman et al. was the first case study examined: the physical system is modeled by differential equations describing a trickle-bed batch reactor in space and time, coupled with a well-stirred pot. Using orthogonal collocation on finite elements, the differential equations that describe the physical system can be discretized into a system of nonlinear equations. Nonlinear programming strategies then allow us to formulate and solve a nonlinear mixed-effects model. By accounting for the batch-to-batch variation between experiments, which can be attributed to physical effects such as deactivation of a catalyst over time, we estimate chemical kinetic parameter values with 95% confidence intervals for each parameter. Further, we apply Bayesian inference techniques for model discrimination to identify the most likely candidate model given the data.
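To illustrate the discretization step in isolation, the sketch below applies collocation at (assumed) 3-point Radau nodes on a single finite element to the scalar test ODE dc/dt = −k c, turning it into algebraic equations solved with a root finder. A real problem would chain many elements and hand the full equation system, together with the estimation objective, to an NLP solver; all constants here are illustrative.

```python
import numpy as np
from scipy.optimize import fsolve

# One finite element of length h for dc/dt = -k*c with c(0) = c0.
k, h, c0 = 1.5, 0.5, 1.0
# Normalized nodes: t = h*tau, with tau = 0 plus 3-point Radau points.
tau = np.array([0.0, 0.155051, 0.644949, 1.0])

# Lagrange-polynomial differentiation matrix via a Vandermonde system:
# c(tau) = sum_m p_m tau^m with p = V^{-1} c, so c'(tau_j) = (Vd V^{-1} c)_j.
V = np.vander(tau, increasing=True)
Vd = np.column_stack([np.zeros_like(tau)] +
                     [m * tau ** (m - 1) for m in range(1, tau.size)])
A = Vd @ np.linalg.inv(V)

def collocation_eqs(c_int):
    c = np.concatenate([[c0], c_int])    # prepend the known initial value
    # Enforce dc/dtau = h * f(c) at each collocation point, f(c) = -k*c.
    return A[1:] @ c - h * (-k * c[1:])

c_sol = fsolve(collocation_eqs, np.ones(3))  # solve the algebraic system
exact = c0 * np.exp(-k * h * tau[1:])        # analytic solution for comparison
print(np.max(np.abs(c_sol - exact)))
```

The same recipe applied to the reactor DAEs yields the large system of nonlinear equality constraints inside which the kinetic and mixed-effects parameters are estimated.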