The Schrödinger equation is the master equation of quantum chemistry. The founders of quantum mechanics realised that this equation underpins essentially the whole of chemistry, but they also recognised that its exact application was far too complicated to be feasible at the time. It was left to more than two generations of researchers to work out how to achieve this ambitious goal for molecular systems of ever-increasing size. This book focuses on non-mainstream methods for solving the molecular electronic Schrödinger equation. Each method is based on a set of core ideas, and this volume aims to explain these ideas clearly so that they become more accessible. By bringing together these non-standard methods, the book intends to inspire graduate students, postdoctoral researchers and academics to think of novel approaches. Is there a method out there that we have not thought of yet? Can we design a new method that combines the best of all worlds?
Sample Chapter(s)
Chapter 1: Intracule Functional Theory (243 KB)
https://doi.org/10.1142/9781848167254_fmatter
https://doi.org/10.1142/9781848167254_0001
Density functional theory (DFT) has become by far the most popular of the panoply of methods in quantum chemistry, and the reason for this is simple. Where other schemes had become bogged down in mind-numbingly expensive and detailed treatments of the electron correlation problem, DFT simply shrugged, pointed at the Hohenberg–Kohn theorem, and asserted that the correlation energy can be written as an integral of a certain function of the one-electron density. The only thing that irritated the wavefunction people more than the cavalier arrogance of that assertion was the astonishing accuracy of the energies that it yields.
Well, most of the time. Occasionally, DFT fails miserably and, although the reasons for its lapses are now understood rather well, it remains a major challenge to correct these fundamental deficiencies, while retaining the winsome one-electron foundation upon which DFT rests.
Does this mean that, for truly foolproof results, we have no option but to return to the bog of many-body theory? One might think so, at least from a cursory inspection of the current textbooks. But we feel differently, and in this chapter we present an overview of an attractive alternative that lies neither in the one-electron world of DFT, nor in the many-electron world of coupled-cluster theory. Our approach nestles in the two-electron “Fertile Crescent” that bridges these extremes, a largely unexplored land that would undoubtedly have been Goldilocks' choice.
We present results demonstrating that the new approach, Intracule Functional Theory, is capable of predicting the correlation energies of small molecules with an accuracy that rivals that of much more expensive post-Hartree–Fock schemes. We also show that it easily and naturally models van der Waals dispersion energies. However, its current versions struggle to capture static correlation energies, and this is an important area for future development.
Finally, we peer into the probable future of the field, speculating on the directions in which we and others are likely to take it. We conclude that, although the approach is conceptually attractive and has shown considerable promise, the investigations hitherto have scarcely scratched the surface and there are ample opportunities for fresh ideas from creative minds.
https://doi.org/10.1142/9781848167254_0002
A major problem of wavefunction-based electronic structure theory is the slow convergence of correlation energies with respect to the size of the one-particle basis set. The situation can be improved dramatically by incorporating terms in the wavefunction that depend explicitly on the interelectronic distances, and after a decade of intense development such explicitly correlated electronic structure theories are ready for widespread use. In this chapter I briefly summarise the essential elements of explicitly correlated methods and then present five thoughts on how the field might develop in the future.
https://doi.org/10.1142/9781848167254_0003
This chapter is concerned with the problem of strongly correlated electrons in quantum chemistry. We describe how a technique known as the density matrix renormalization group (DMRG) can tackle complicated chemical problems of strong correlation by capturing the local nature of the correlations. We analyse the matrix product state structure of the DMRG wavefunction that encodes one-dimensional aspects of locality. We also discuss the connection to the traditional ideas of the renormalization group. We finish with a survey of applications of the DMRG, its strengths and weaknesses in chemical applications, and its recent promising generalization to tensor network states.
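The matrix product structure mentioned above can be made concrete in a few lines. The sketch below is illustrative and not taken from the chapter (all names and dimensions are arbitrary): it evaluates a single amplitude of a random matrix product state by chaining the site matrices selected by the occupation numbers.

```python
import numpy as np

rng = np.random.default_rng(0)
L, d, D = 6, 2, 4      # sites, physical dimension, bond dimension (illustrative)

# One tensor per site with shape (d, D_left, D_right); open boundary conditions.
dims = [1] + [D] * (L - 1) + [1]
mps = [rng.standard_normal((d, dims[i], dims[i + 1])) for i in range(L)]

def amplitude(mps, occ):
    """Contract the chain of matrices selected by the occupations occ."""
    M = mps[0][occ[0]]
    for site, occupation in zip(mps[1:], occ[1:]):
        M = M @ site[occupation]
    return M[0, 0]

# The full state has d**L amplitudes; the MPS stores only O(L * d * D**2)
# numbers, which is the compression DMRG exploits when D can stay small.
occ = (0, 1, 1, 0, 1, 0)
print(amplitude(mps, occ))
```

For large systems this sidesteps the exponential d^L cost of the full amplitude tensor, provided the bond dimension D required for a given accuracy stays modest, which is precisely what the locality of correlations in one-dimensional-like problems guarantees.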
https://doi.org/10.1142/9781848167254_0004
For 50 years, progress towards the direct calculation of the ground-state two-electron reduced density matrix (2-RDM) was stymied by the inability to constrain the 2-RDM to represent an N-electron wavefunction. Recent advances in theory and optimization have realized three methods for the direct calculation of the 2-RDM: (i) the variational 2-RDM method, in which the 2-RDM is constrained explicitly through N-representability constraints known as positivity conditions, (ii) the parametric 2-RDM method, in which the 2-RDM is constrained implicitly through its parametrization as a functional of itself, and (iii) the solution of the contracted Schrödinger equation (CSE) or its anti-Hermitian part (ACSE), in which the p-RDMs for p > 2 are built from the 2-RDM by a cumulant-based reconstruction. Advantages of the 2-RDM methods include: (i) the treatment of strong electron correlation by the variational 2-RDM method, where traditional wavefunction methods would require as many as a billion times more determinants than are feasible on the largest supercomputers, (ii) the balanced description of single- and multi-reference correlation by the ACSE method, which matches or exceeds the accuracy of traditional multi-reference wavefunction-based methods at a lower computational scaling, and (iii) the combination of accuracy and efficiency in the parametric 2-RDM method, which approaches the accuracy of coupled cluster with single, double, and triple excitations at the computational cost of configuration interaction with single and double excitations. Collectively, the 2-RDM methods have been applied to studying strong electron correlation in acene chains and hydrogen lattices, resolving the energy barriers in bicyclobutane's ring opening, computing the conical intersections in methylene's triplet excited states, and examining hydroxyurea derivatives for treating sickle-cell anemia.
In this chapter we will discuss the theoretical foundations, practical advantages, and some recent applications of each 2-RDM method.
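As a minimal illustration of a positivity condition (not taken from the chapter), the sketch below builds the 2-RDM of a random two-electron state, for which D_{pq,rs} = C_{pq} C_{rs} holds in closed form, and verifies the D condition: viewed as a matrix over two-electron "geminals", the 2-RDM must have no negative eigenvalues, and its trace equals N(N-1).

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 4, 2    # spin orbitals and electrons (a minimal illustrative case)

# Two-electron wavefunction: |psi> = sum_{p<q} C[p,q] a+_p a+_q |vac>,
# encoded as an antisymmetric coefficient matrix C, normalized to 1.
A = rng.standard_normal((M, M))
C = A - A.T
C /= np.sqrt((C[np.triu_indices(M, 1)] ** 2).sum())

# 2-RDM of this state: D_{pq,rs} = <a+_p a+_q a_s a_r> = C[p,q] * C[r,s].
D = np.einsum('pq,rs->pqrs', C, C).reshape(M * M, M * M)

# D positivity condition: no negative eigenvalues; trace is N(N-1).
evals = np.linalg.eigvalsh(D)
print(evals.min(), np.trace(D))
```

In an actual variational 2-RDM calculation, the D, Q and G positivity conditions are imposed as constraints inside a semidefinite program for systems with more than two electrons, rather than merely checked after the fact as here.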
https://doi.org/10.1142/9781848167254_0005
By solving the Schrödinger equation one obtains the whole energy spectrum, both the bound and the continuum states. If the Hamiltonian depends on a set of parameters, these can be tuned to drive a transition from bound to continuum states. The behavior of systems near the threshold that separates bound states from continuum states is important in the study of such phenomena as the ionization of atoms and molecules, molecular dissociation, scattering collisions, and the stability of matter. In general, the energy is non-analytic as a function of the Hamiltonian parameters, or a bound state ceases to exist at the threshold energy. The overall goal of this chapter is to show how one can predict, generate and identify new classes of stable quantum systems using large-dimensional models and the finite size scaling approach. Within this approach, the finite size corresponds not to the spatial dimension but to the number of elements in a complete basis set used to expand the exact eigenfunction of a given Hamiltonian. This method is efficient and very accurate for estimating the critical parameters, {λi}, for the stability of a given Hamiltonian, H(λi). We present two ways of obtaining critical parameters using finite size scaling for a given quantum Hamiltonian: the finite element method and the basis set expansion method. The long-term goal of developing finite size scaling is to treat criticality from first principles at quantum phase transitions. In the last decade, considerable attention has been focused on a new class of phase transitions: transitions that occur at the absolute zero of temperature. These quantum phase transitions are driven by quantum fluctuations, a consequence of Heisenberg's uncertainty principle, and are tuned by parameters in the Hamiltonian. Finite size scaling might be useful in predicting the quantum critical parameters of systems undergoing quantum phase transitions.
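A self-contained illustration of the idea (a crude finite-difference stand-in for the finite element method discussed in the chapter, with all numerical choices illustrative): for a screened Coulomb potential -exp(-λr)/r, the critical screening strength at which the bound state merges into the continuum is estimated for a sequence of grid sizes, the "finite size" being the number of grid points used to represent the wavefunction.

```python
import numpy as np

def ground_energy(lam, n, R=40.0):
    """Lowest eigenvalue of -1/2 u'' - exp(-lam*r)/r u on an n-point radial grid."""
    h = R / (n + 1)
    r = h * np.arange(1, n + 1)
    H = (np.diag(np.full(n, 1.0 / h**2) - np.exp(-lam * r) / r)
         + np.diag(np.full(n - 1, -0.5 / h**2), 1)
         + np.diag(np.full(n - 1, -0.5 / h**2), -1))
    return np.linalg.eigvalsh(H)[0]

def critical_lambda(n, lo=0.5, hi=2.0, iters=40):
    """Bisect for the screening strength at which the ground state reaches E = 0."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if ground_energy(mid, n) < 0.0:
            lo = mid          # still bound: the threshold lies at stronger screening
        else:
            hi = mid
    return 0.5 * (lo + hi)

# The "finite size" here is the number of grid points (basis functions):
for n in (100, 200, 400):
    print(n, critical_lambda(n))
```

As n grows, the estimates settle near the known critical screening of roughly 1.19 atomic units for this potential; extrapolating such sequences of finite-basis results to the complete-basis limit is exactly what finite size scaling formalizes.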
https://doi.org/10.1142/9781848167254_0006
The generalized Sturmian method makes use of basis sets that are solutions to an approximate wave equation with a weighted potential. The weighting factors are chosen in such a way as to make all the members of the basis set isoenergetic. In this chapter we will show that when the approximate potential is taken to be that due to the attraction of the bare nucleus, the generalized Sturmian method is especially well suited for the calculation of large numbers of excited states of few-electron atoms and ions. Using the method we shall derive simple closed-form expressions that approximate the excited state energies of ions. The approximation improves with increasing nuclear charge. The method also allows automatic generation of near-optimal symmetry adapted basis sets, and it avoids the Hartree–Fock SCF approximation. Programs implementing the method may be freely downloaded from our website, sturmian.kvante.org [1].
https://doi.org/10.1142/9781848167254_0007
It is easy to prove that atomic and molecular orbitals must decay exponentially at long range. They should also possess cusps where an electron approaches another particle: a kink at which the ratio of the orbital's derivative to its value is fixed by the charge of the particle being approached (Kato's cusp condition).
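Both statements are easy to check numerically. The sketch below (illustrative, not from the chapter) compares a 1s Slater-type orbital with a single Gaussian at a nucleus of unit charge: the derivative-to-value ratio at the nucleus equals the Slater exponent, while the Gaussian is flat there.

```python
import numpy as np

zeta = 1.0  # 1s Slater exponent; Kato's cusp condition requires zeta = Z

def sto(r):
    return np.exp(-zeta * r)          # Slater-type orbital

def gto(r):
    return np.exp(-0.5 * r**2)        # a single Gaussian, for contrast

def cusp_ratio(f, r, h=1e-6):
    """Numerical -f'(r)/f(r); as r -> 0 this should equal the nuclear charge."""
    return -(f(r + h) - f(r - h)) / (2.0 * h * f(r))

r = 1e-4
print(cusp_ratio(sto, r))   # close to zeta = 1: the STO has the correct cusp
print(cusp_ratio(gto, r))   # close to 0: the Gaussian is flat at the nucleus
```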
Therefore, hydrogen-like or Slater-type orbitals are the natural basis functions in quantum molecular calculations. Over the past four decades, however, the difficulty of the resulting molecular integrals led computational chemists to seek alternatives. Consequently, Slater-type orbitals were replaced by Gaussian expansions in molecular calculations (even though Gaussians decay too rapidly and have no cusps). From the 1990s on, considerable effort on the Slater integral problem by several groups has led to efficient algorithms, which have served as the tools of new computer programs for polyatomic molecules.
The key ideas for integration (one-center expansion, the Gauss transform, the Fourier transform, the use of Sturmians, and elliptical-coordinate methods) are presented here, together with their advantages and disadvantages and the latest developments in the field.
Recent advances using symbolic algebra, pre-calculated and stored factors and the state-of-the art with regard to parallel calculations are reported.
At times, high accuracy is not required and at others speed is unimportant.
A recent approximation separating the variables of the Coulomb operator will be described, as well as its usefulness in molecular calculations.
There is a renewed interest in the use of Slater orbitals as basis functions for configuration interaction (CI) and Hylleraas-CI atomic and molecular calculations, and in density functional and density matrix theories. In a few special cases, e.g. three particles, symmetry conditions lead to simple explicit pair-correlated wave-functions.
Similarly, advantages of this basis are considerable for both absolute energy and fixed-node error in quantum Monte Carlo (QMC). The model correlated functions may be useful to build up Jastrow factors.
These considerations will be dealt with in the context of modern computer hardware and its rapid development.
https://doi.org/10.1142/9781848167254_0008
Quantum mechanics has provided chemistry with two general theories of bonding: valence bond (VB) theory and molecular orbital (MO) theory. VB theory is essentially a quantum mechanical formulation of the classical concept of the chemical bond wherein the molecule is regarded as a set of atoms held together by local bonds. This is a very appealing model as it represents the quantum mechanical translation of the classical basic concepts that are deeply rooted in chemistry, such as Lewis' structural formulas, chemical valency, hybrid orbitals, and resonance. MO theory, on the other hand, uses a more physics-related language and sprang up as a means to interpret the electronic spectra of molecules and to deal with excited states. However, with its canonical MOs delocalized over the entire molecule, this theory bears little relationship to the familiar language of chemists in terms of localized bonds, and this is probably the reason why it was initially eclipsed by VB theory, up to the mid-1950s. Then the situation reversed and MO theory took over, among other reasons because of efficient implementations, which provided the chemical community with computational software of ever-increasing speed and capabilities.
Nowadays, with the advent of modern computational ab initio VB methods and progress in computer and coding technologies, VB theory is coming of age. Indeed, from the 1980s onward, several methodological advances in VB theory have been made, allowing new and more accurate applications. Thus, dynamic correlation has been incorporated into VB calculations, so that at present sophisticated VB methods combine the accuracy of post-HF methods with the specific advantages of VB theory, such as the extreme compactness of wave functions that are readily interpreted in terms of Lewis structures, and the ability to calculate diabatic states, resonance energies and so on. Moreover, VB theory has recently been extended to handle species and reactions in solution, and is also capable of treating transition metal complexes. These newly developed tools have been used to verify and quantify fundamental concepts such as aromaticity, resonance energies and hybridization, and to develop new ideas and models in chemical reactivity that were not foreseen from the empirical VB model. The combination of the lucid insight inherent to VB theory and its new computational capabilities is discussed in this chapter. We hope that this chapter also makes a case for using these modern ab initio VB methods as routine tools in the service of chemistry.
https://doi.org/10.1142/9781848167254_0009
Quantum Monte Carlo (QMC) methods sample the wave function, in principle the exact one, instead of optimizing analytical functions as standard ab initio approaches do. They have emerged as a suitable alternative to ab initio methods for dealing with dynamical correlation in the description of the electronic structure of atoms and molecules.
Unlike standard quantum chemistry approaches, QMC enjoys several features that make it the method of choice for complicated systems. Among these features are its intrinsically parallelizable nature, the slow (~N³) growth of the computational cost with the number of particles N, the limited amount of memory required, and its ability to deal with substantially different systems within the same theoretical/algorithmic structure (e.g. it can easily be applied to both bosons and fermions, to systems containing electrons and positrons, and to the vibrational properties of molecules).
One of the strongest points of QMC is that it may use any kind of basis set, even uncommon ones, depending on the species. This allows QMC either to converge quickly to the exact answer for bosonic systems, or to recover easily 90–95% of the correlation energy in electronic species without the inverse-cubic convergence with respect to basis-set size that plagues more common ab initio methods. Thanks to these characteristics, different flavours of QMC have been applied to a wide set of species and problems, spanning a range that stretches from molecules as small as water up to pieces of bulk matter as large as silicon and germanium crystals, and that includes molecules such as porphyrins and C20.
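The sampling idea can be conveyed by a toy variational Monte Carlo run (not from the chapter; the trial function and all parameters are illustrative): for the hydrogen atom with trial wavefunction ψ = exp(-αr), Metropolis sampling of |ψ|² accumulates the average of the local energy E_L = (Hψ)/ψ.

```python
import numpy as np

rng = np.random.default_rng(42)
alpha = 0.9   # trial exponent; alpha = 1 gives the exact hydrogen 1s state

def local_energy(r):
    """E_L = (H psi)/psi for psi = exp(-alpha*r), hydrogen atom, atomic units."""
    return -0.5 * alpha**2 + (alpha - 1.0) / r

# Metropolis sampling of |psi|^2 = exp(-2*alpha*r) in three dimensions
x = np.ones(3)
samples = []
for step in range(40000):
    trial = x + rng.normal(scale=0.6, size=3)
    # symmetric proposal: accept with probability min(1, |psi(trial)|^2/|psi(x)|^2)
    if rng.random() < np.exp(-2.0 * alpha * (np.linalg.norm(trial) - np.linalg.norm(x))):
        x = trial
    if step >= 4000:                      # discard the equilibration phase
        samples.append(local_energy(np.linalg.norm(x)))

E = np.mean(samples)
print(E)   # variational estimate; stays above the exact -0.5 hartree
```

At α = 1 the local energy is constant at -0.5 hartree (zero variance); for α ≠ 1 the average, analytically α²/2 - α, lies above -0.5, illustrating both the variational principle and the zero-variance property that good trial functions exploit.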
Despite these strong points, QMC suffers from a relatively high computational cost, mainly due to the necessity of evaluating a reference wave function many times. A substantial reduction of this cost may come from central processing units (CPUs) particularly tailored to compute exponentials, polynomials and rational functions, from better algorithms that require fewer function evaluations, and from variance reduction techniques. In addition, more robust approaches to reducing the so-called "nodal error" would help improve the already appreciable accuracy afforded by QMC.
With the above issues addressed, several avenues would become possible to pursue on a routine basis. Among these, we foresee the calculation of intermolecular interactions for very large systems (e.g. parts of DNA with or without interacting species), the calculation of nuclear magnetic resonance (NMR) parameters for difficult systems, the automatic optimization of molecular structures and, even better, the chance of running molecular dynamics simulations à la Car–Parrinello using QMC-computed atomic forces. In these circumstances, the study of phase transitions, bulk matter, interfaces and large biological systems may reach a level of accuracy that is currently unattainable due to methodological limitations.
https://doi.org/10.1142/9781848167254_0010
This chapter first discusses real-space grid methods for solving the Kohn–Sham equations of density functional theory. These approaches possess advantages due to the relatively localized nature of the Hamiltonian operator on a spatial grid. This computational locality and the physical locality due to the decay of the one-particle density matrix allow for the development of low-scaling algorithms. The localized nature of the real-space representation leads to a drawback, however; iterative processes designed to update the wave functions tend to stall due to the long-wavelength components of the error. Multigrid methods aimed at overcoming the stalling are discussed. The chapter then moves in a different direction motivated both by 1) the relatively large computational and storage overheads of wave-function-based methods and 2) possible new opportunities for computing based on special-purpose massively parallel architectures. Potential alternative approaches for large scale electronic structure are discussed that employ ideas from quantum Monte Carlo and reduced density-matrix descriptions. Preliminary work on a Feynman–Kac method that solves directly for the one-particle density matrix using random walks in localized regions of space is outlined.
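The stalling and its multigrid cure can be seen in a toy 1D Poisson problem (illustrative, not from the chapter): weighted Jacobi sweeps barely reduce a smooth, long-wavelength residual, while a single two-grid cycle, using the same smoother plus a coarse-grid correction, reduces it sharply.

```python
import numpy as np

n = 63                        # fine-grid interior points (odd, so the coarse grid nests)
h = 1.0 / (n + 1)
x = h * np.arange(1, n + 1)
f = np.sin(3 * np.pi * x)     # smooth right-hand side of -u'' = f, u(0) = u(1) = 0

def A_apply(u):
    """Matrix-free second-difference approximation of -u''."""
    return (2.0 * u - np.r_[u[1:], 0.0] - np.r_[0.0, u[:-1]]) / h**2

def jacobi(u, f, sweeps):
    """Weighted Jacobi: damps rough error quickly, smooth error very slowly."""
    for _ in range(sweeps):
        u = u + (2.0 / 3.0) * (h**2 / 2.0) * (f - A_apply(u))
    return u

def two_grid(u, f):
    """One V-cycle on two levels: smooth, correct on the coarse grid, smooth."""
    u = jacobi(u, f, 3)
    r = f - A_apply(u)
    rc = 0.25 * r[0:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]   # full weighting
    nc = (n - 1) // 2
    hc = 2.0 * h
    Ac = (2.0 * np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / hc**2
    ec = np.linalg.solve(Ac, rc)       # exact coarse solve of the residual equation
    e = np.zeros(n)                    # linear interpolation back to the fine grid
    e[1::2] = ec
    e[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    e[0], e[-1] = 0.5 * ec[0], 0.5 * ec[-1]
    return jacobi(u + e, f, 3)

r_jacobi = np.linalg.norm(f - A_apply(jacobi(np.zeros(n), f, 7)))
r_twogrid = np.linalg.norm(f - A_apply(two_grid(np.zeros(n), f)))
print(r_jacobi, r_twogrid)    # the two-grid residual is far smaller
```

The Jacobi iteration reduces the smooth mode by only a fraction of a percent per sweep, which is the long-wavelength stalling described above; on the coarse grid that same mode is no longer smooth relative to the mesh, so the correction removes it almost entirely.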
https://doi.org/10.1142/9781848167254_0011
Over the years, computational physics and chemistry have served as an ongoing source of problems that demanded ever-increasing performance from hardware, as well as from the software that runs on top of it. Most of these problems can be translated into the solution of systems of linear equations: the very topic of numerical linear algebra. It might seem, then, that a set of efficient linear solvers could serve important scientific problems for years to come. We argue that dramatic changes in hardware design, precipitated by the shifting nature of the computer hardware marketplace, have had a continuous effect on software for numerical linear algebra. The extraction of a high percentage of peak performance continues to require the adaptation of software. If the history of this adaptive nature of linear algebra software is any guide, then the future will feature changes as well, changes aimed at harnessing the incredible advances of the evolving hardware infrastructure.
https://doi.org/10.1142/9781848167254_bmatter