The following is a non-comprehensive sample of the work being done across the Natural Sciences Division, listed by department. Research areas include foundational research on AI; using AI to do fundamental science research; and working with AI research and tools in the context of undergraduate and graduate education.
AI in natural sciences research
AI in natural sciences pedagogy + courses
Feature stories about AI in the Natural Sciences
AI in natural sciences research
Mathematical Sciences
The recent Joint Mathematics Meeting (JMM 2025) in Seattle attracted thousands of mathematicians. The theme was “We Decide our Future: Mathematics in the Age of AI.” Numerous UW faculty and UW-affiliated faculty presented in invited lectures, special sessions, and panels (see here for a list of all AI-related events at the JMM).
Applied Mathematics
Bamdad Hosseini operates an active research group focused on the mathematical foundations of machine learning algorithms and AI with a view towards their applications in scientific computing and natural sciences. His research is structured under two main thrusts: 1) the algorithmic and theoretical development of machine learning methods for simulation of physical systems governed by differential equations; and 2) the mathematical foundations of generative models and their applications in natural sciences and engineering.
Nathan Kutz is co-director of the AI Institute in Dynamic Systems, a National Artificial Intelligence Research Institute funded by the National Science Foundation (NSF), whose mission is to develop the next generation of advanced machine learning tools for controlling complex physical systems.
Tim Leung, director of the Applied Mathematics department’s Computational Finance & Risk Management program and member of the advisory board of the AI Finance Institute, works on AI for quantitative finance. One aspect of this work is AI-Enhanced Data Analysis, a novel multiscale approach to analyze financial data at different frequencies and generate signals and features for a broad spectrum of machine learning models, such as regression tree ensembles, support vector machines, and long short-term memory neural networks. AI-enhanced and machine learning models are useful for identifying patterns in financial price data and improving forecasting performance. Another area of Leung's research is Automated Scalable Trading Systems, namely the development of AI and machine learning (ML)-based algorithmic trading systems. The approach seeks to incorporate new data and market signals in changing market conditions while optimizing the trading strategies and minimizing risks.
Hong Qian works on the mathematical foundations and physical interpretations of data science, which he sees as the theoretical underpinning for AI.
Eric Shea-Brown and Adrienne Fairhall are co-directors of the UW’s Computational Neuroscience Center (CNC). The goal of the CNC is to "Decode Intelligence." Modern AI, which is powered by neural networks inspired by the brain, is at the core of their research and training efforts. The CNC is a hub connecting researchers across campus (in the College of Arts & Sciences, the School of Medicine, and the College of Engineering) and across the Pacific Northwest. The center has deep and highly active collaborations with the Allen Institute for Brain Science and regular interactions with Google DeepMind. The CNC houses the International Network on Biologically-Inspired Computing, the UW Swartz Center for Theoretical Neuroscience, and a current NIH T32 award. These programs open access to and advance understanding of the principles of neural computation for trainees at all levels. The CNC hosts a major national meeting on NeuroAI annually, recently in collaboration with the Allen Institute for Neural Dynamics, the French Consulate, and the NSF. The CNC explores the ethical and social implications of AI and neurotechnology through its Neuroscience, AI, and Society series.
Eli Shlizerman’s research is in the interdisciplinary area of NeuroAI, in which fundamental properties of neurobiological networks (Neuro) are investigated along with artificial neural networks (AI). Their research can be divided into three main pillars: 1. Development of AI methodology for data in neuroscience (AI for Neuro). 2. Investigation of how interpreting neurobiological mechanisms contributes to the design, understanding, and control of AI systems (Neuro for AI). 3. Proposing AI systems for multi-modal learning, capable of processing various modalities simultaneously, e.g., video, audio, text, and brain signals (Multi-modal AI).
Mathematics
Jarod Alper leads efforts on several activities centered around AI and Mathematics, including:
- The UW's eXperimental Lean Lab (XLL), a subsidiary of the highly successful Washington Experimental Mathematics Lab (WXML), provides a space for mathematicians at any level (faculty, postdocs, PhD students, undergraduates, high school students) to learn about formalizing mathematics with Lean and to collaborate with others on mathematics projects. He has led around 20 formalization projects with undergraduates through the existing structure of WXML.
- The Math AI Lab, encompassing a broader array of themes in Math AI: formalization, autoformalization, mathematical foundations of AI, using machine learning in math research, and the meaning of mathematics in the age of AI. Recent and current projects include tactic building for Lean as well as reinforcement learning targeted to proof generation.
- Alper is organizing a workshop in April 2025 at the Institute for Computational and Experimental Research in Mathematics (ICERM) on Autoformalization for the Working Mathematician. At the JMM, Alper delivered an invited address on Embracing AI and formalization: Experimenting with Tomorrow's Mathematical Tools.
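To give a flavor of what formalization involves (a toy illustration, not one of the lab's actual projects), here is how a simple statement and proof might look in Lean 4, using only the core library:

```lean
-- Toy example: the sum of two even numbers is even.
theorem even_add_even (m n : Nat)
    (hm : ∃ k, m = 2 * k) (hn : ∃ k, n = 2 * k) :
    ∃ k, m + n = 2 * k := by
  obtain ⟨a, ha⟩ := hm
  obtain ⟨b, hb⟩ := hn
  -- After rewriting with ha and hb, the goal 2 * a + 2 * b = 2 * (a + b)
  -- closes by the distributive law.
  exact ⟨a + b, by rw [ha, hb, Nat.mul_add]⟩
```

A proof assistant checks every step mechanically; projects like those in the XLL scale this process up to research-level mathematics.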
Sara Billey, PhD student Hermann Chau, and Pacific Northwest National Laboratory (PNNL) data scientist and mathematician Henry Kvinge are part of a team of machine learning (ML) experts and mathematicians exploring the use of AI and ML as a tool in algebraic combinatorics research. AI/ML may not be helpful for all mathematics problems, so it is important to select problems that can benefit from data-driven methods. They applied ML to problems related to permutations, tableaux, graphs, and binary matrices.
Dima Drusviyatskiy works on a variety of foundational problems in AI, including developing faster numerical algorithms for deep learning; showing that, in a variety of learning problems, the dynamics of the stochastic algorithm become deterministic when the dimension of the data and the number of samples are sufficiently large; and understanding the feature learning phenomenon in deep learning, which posits that gradient-based algorithms automatically perform nonlinear dimension reduction tailored to the problem at hand. Feature learning is widely believed to be the mechanism underlying the impressive performance of deep learning in practice.
Dan Mikulincer is doing foundational work on AI, recognizing that we do not currently have good theoretical foundations to explain what we empirically observe: deep learning algorithms producing systems like generative AI and ChatGPT. What we know suggests that those algorithms should fail in worst-case scenarios, so it is important to understand how AI paradigms behave outside of worst-case scenarios. Mikulincer is working on showing that worst-case scenarios for deep learning algorithms are very unstable when you add noise. Another thread of research deals specifically with the current algorithms for generative AI, which are based on stochastic interpolations that go back at least as far as Schrödinger in the 1930s. Mikulincer is trying to change the basic paradigms of the algorithm and consider other forms of stochastic interpolations. The hope is that, at least in some cases, we can come up with an interpolation method equipped with an ad hoc implementation algorithm. In that way, we can remove the black-box dependence on deep learning, increase explainability, and provide rigorous guarantees for the algorithms.
Stefan Steinerberger has been involved in theoretical analyses of various machine learning problems and has been studying mathematics that is partially motivated by techniques used in machine learning. He is also on the Editorial Board of the newly founded journal Mathematical Foundations of Machine Learning.
Cynthia Vinzant serves on the American Mathematical Society’s Committee on Science Policy, which is helping to guide the national mathematics community on policy regarding the intersection of math and AI.
Statistics
AI is at the core of the Department of Statistics' identity. Modern AI is data driven and based on statistical procedures, so, in a strong sense, everything that we do in Statistics is, directly or indirectly, foundational to AI. A number of Statistics faculty serve on the editorial boards of the main conferences where AI work is published, such as NeurIPS and ICML.
Zaid Harchaoui and Abel Rodriguez have been part of the leadership of the NSF funded Institute for the Foundations of Data Science (IFDS) since 2020. This effort includes faculty in Math, Statistics, Computer Science & Engineering, and Electrical and Computer Engineering (ECE), where the lead PI, Maryam Fazel, is appointed. Harchaoui is also part of the Institute for Foundations of Machine Learning (IFML), one of the NSF funded AI Institutes based at UT Austin.
Astronomy
Faculty in Astronomy have used AI to optimize the performance of the Rubin Observatory — initially in terms of the image quality but eventually the overall performance of the telescope and its maintenance. With LINCC Frameworks, the department is developing AI tools to apply to the images and catalogs coming from Rubin to search for unusual events within the universe.
Chemistry
Dan Fu has been developing deep learning models for hyperspectral imaging. In an ongoing collaborative project with Sheng Wang in Computer Science, they are developing super-resolution chemical imaging via a diffusion-based deep generative model. And in an ongoing collaborative project with Hao Yuan Kueh in Bioengineering, they work on label-free immune cell classification using deep learning.
David Ginger is developing neural networks to extract dynamical information from advanced microscopy data.
Munira Khalil, in collaboration with Niranjan Govind from PNNL, is using neural network potentials to calculate nuclear quantum effects (NQE) in molecular simulations. The inclusion of NQE is an important consideration for the accurate description of proton transfer.
Xiaosong Li is developing an AI framework to design high-performance conducting polymer systems, a reinforcement learning platform to optimize variational quantum computing algorithms and developing quantum machine learning algorithms.
Anne McCoy is developing Neural Network Potentials for Diffusion Monte Carlo (DMC) Simulations. The McCoy group has been refining the protocol for training and using neural network potential energy surfaces for Diffusion Monte Carlo (DMC) simulations. Using a neural network to evaluate potential energies increases the computational efficiency of these simulations, allowing them to utilize DMC as a method for studying the vibrational landscapes of larger, more complex systems.
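To illustrate the role a learned potential plays in DMC (a sketch, not the McCoy group's code): DMC propagates a population of random walkers whose branching rate depends on the potential energy, so the potential is evaluated millions of times per run; replacing an expensive ab initio evaluation with a trained neural network surrogate is what makes larger systems tractable. The toy below runs unguided DMC on an analytic harmonic potential, whose exact ground-state energy is 0.5 in natural units; a neural network potential would simply replace the function `V`.

```python
import math
import random

random.seed(0)

def V(x):
    # Harmonic oscillator potential; exact ground-state energy is 0.5
    # in natural units (hbar = m = omega = 1). In production DMC, this
    # call is where a neural network potential would be evaluated.
    return 0.5 * x * x

dt = 0.01          # imaginary-time step
target = 400       # target walker population
walkers = [random.gauss(0.0, 1.0) for _ in range(target)]
E_ref = 0.5        # reference energy, adjusted to keep the population stable
estimates = []

for step in range(4000):
    new_walkers = []
    for x in walkers:
        x += random.gauss(0.0, math.sqrt(dt))        # diffusion move
        w = math.exp(-(V(x) - E_ref) * dt)           # branching weight
        for _ in range(int(w + random.random())):    # stochastic birth/death
            new_walkers.append(x)
    walkers = new_walkers
    mean_V = sum(V(x) for x in walkers) / len(walkers)
    E_ref = mean_V + (1.0 - len(walkers) / target)   # population feedback
    if step >= 1000:                                 # discard equilibration
        estimates.append(E_ref)

E0 = sum(estimates) / len(estimates)  # ground-state energy estimate, near 0.5
```

The inner loop makes clear why a cheap, accurate surrogate for `V` pays off: every walker at every time step costs one potential evaluation.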
Physics
Accelerating AI algorithms for Data-driven Discovery (A3D3) is an NSF Institute ($16.5M over five years) led by Shih-Chieh Hsu that aims to construct the knowledge essential for real-time applications of artificial intelligence in three fields of science: high energy physics, multi-messenger astrophysics, and systems neuroscience. The institute, a collaboration across nine institutions, aims to develop customized AI solutions to process large datasets in real time, significantly enhancing the potential for discovery. Products include publications, talks, education, outreach, and software.
Shih-Chieh Hsu has also worked on the development of the low-latency AI inference platform hls4ml, the high-throughput AI high performance computing platform SuperSONIC, and a Foundation Model for Particle Physics.
Almost all stages of data analysis in the High Energy Physics group (Shih-Chieh Hsu, Anna Goussiou, Gordon Watts, and others) are done with AI tools (Deep Neural Networks, Graph Neural Networks, Adversarial Neural Networks, Transformers, Boosted Decision Trees, etc). The group has also started to use AI assistants in their work.
Researchers at the Center for Experimental Nuclear Physics and Astrophysics (CENPA) often use AI / ML techniques in their data analysis, usually for signal/background discrimination or related tasks.
Nuclear physics researchers have provided unique datasets for AI training purposes (e.g., https://arxiv.org/abs/2308.10856).
Arka Majumdar (joint with ECE) uses AI extensively in photonics design and in building optical devices to aid Artificial Neural Networks. Majumdar is also developing new brain-inspired AI algorithms, specifically using predictive coding.
Boris Blinov has pursued an effort to use machine learning for single photon pulse shaping.
Kai-Mei Fu (joint with ECE) has recently begun a research project using AI for Hamiltonian learning in a multi-spin defect system.
Jens Gundlach’s Biophysics group is working with David Baker (UW Biochemistry) on designing de novo protein nanopores. The design process of de novo proteins is all done with AI.
Researchers in the Physics department's Institute of Nuclear Theory (INT) are exploring applications of Machine Learning to nuclear physics, including 1) Bayesian uncertainty quantification via machine learning, and 2) Neural network quantum states and exploration of real time dynamics of neutrinos in astrophysical objects.
Researchers in the Physics Education group are working to leverage AI to help with reading research papers, and to summarize papers that they are writing as a developmental way to judge how the paper will be understood by a typical reader.
Biology
Briana Abrahms’ research team is using AI and animal-borne sensors to classify difficult-to-observe behaviors in free-ranging large carnivore species (African wild dogs, lions) in their wild habitats, with the goal of understanding how environmental change is impacting their behavior and ecology.
Carl Bergstrom contributed to a forthcoming Proceedings of the National Academy of Sciences (PNAS) piece — with Emily Bender (Linguistics), Jevin West (Information School), and a host of other authors — about opportunities and threats for science from large language models (LLMs). Bergstrom also is collaborating with philosophers and psychologists on a paper about how LLMs change the cost-benefit calculus of researchers and thus change the kinds of science being done.
Bing Brunton does foundational research in AI methods by advancing AI algorithms for modeling and understanding spatiotemporal data recorded from natural animal behavior. Specifically, her group is extending an approach known as hypernetworks to learn a mixture of dynamical systems that jointly explain neural and behavioral data. Brunton also has a significant portfolio of foundational research at the interface of neuroscience and AI, including:
- developing agent-based, normative models of how an insect-sized animal may localize sources of odors while flying in the air, using approaches from deep reinforcement learning (DRL).
- developing and maintaining computer vision tools based on AI to track and triangulate the movements of animals in 3D from multiple cameras. This toolkit, known as Anipose (http://anipose.readthedocs.io/), is now the most widely used open source 3D tracking toolkit in neuroscience.
- building neurally and biomechanically realistic models of animals (flies and rodents) that are capable of mimicking real animals' brains and behavior in virtual physics engines. These so-called digital twins of animals will be constrained by large-scale neural connectivity (i.e., the connectome), realistic whole-body models (skeletons, muscles, and sensors), and 3D behavioral tracking data; trained with deep reinforcement learning (DRL); and used to help design targeted biological experiments toward a comprehensive understanding of the neural basis of natural behavior.
- developing recurrent neural networks (RNNs) with feedback as a model of adaptation and learning in the mammalian cortex, specifically with a goal of understanding how designing brain-computer interfaces (BCIs) can be improved to facilitate decoding performance of movement intentions (e.g., to move a cursor on a screen by decoded brain activity alone).
Carl Bergstrom and Bing Brunton, along with Jevin West (Information School), are developing a training module for the National Institute of Neurological Disorders and Stroke (NINDS) on how to use LLMs in ways consistent with principles of scientific rigor.
Clemens Cabernard uses AI for Python coding help for data analysis and for AI-guided image segmentation, which is built into their image analysis software, Imaris.
Emily Carrington and her lab members use AI for writing code for data analysis and visualization and for assessing writing effectiveness. She has found this especially helpful for students with dyslexia and dyscalculia.
Psychology
The Center for Human Neuroscience uses AI for the image analysis of MRI brain scans which are fundamental to their work in human neuroscience.
Speech and Hearing Sciences
Adrian KC Lee uses ML to model interacting time series to discover cortical networks associated with Auditory Processing Dysfunction.
Yi Shen uses human-centered computing techniques to customize hearing aids and cochlear implants, addressing the unique communication needs of older adults with age-related hearing loss. Shen also develops deep-learning based algorithms to enhance speech communication in background noise.
Christina Zhao uses ML/AI tools to predict individual language outcomes and the presence of atypical development or language disorder from neural speech processing in infants, and to extract and validate neural signatures of speech processing based on subcortical and cortical recordings from human adults and infants.
Institute for Learning and Brain Science (I-LABS)
Faculty at I-LABS are engaged in multiple interdisciplinary projects using AI. This work often involves faculty across multiple units at UW as well as at other institutions. Some examples:
AI, Brain Science, Parent-Child Interactions combines I-LABS expertise in children’s social learning with AI methods to machine-segment, analyze, and quantify human interactions, with the aim of correlating this AI-generated ‘visual scene analysis’ with brain data.
Comparing Machines vs. Human Brains as Language Learners is a comparison between AI learning programs and the learning patterns of human infants. The initial finding compared human adults' brain activation patterns with the deep-layer representations exhibited by AI learning programs when the human brain and the AI system were given similar data input.
AI as a Tool to Integrate Multimodal and Behavioral Findings in Developmental Science incorporates machine learning analyses with the hypothesis that machine learning will uncover new results not detected by traditional neuroscience methods. The initial findings support the hypothesis: machine learning methods applied to brain data from 11-month-old infants identified significant correlations between aspects of the early brain measures and children’s later language abilities at the age of 6 years, findings that went beyond what the traditional neuroscience measures found.
World Values of Conversational AI and Consequences for Human-AI Interaction looks at how people increasingly interact with AI in their daily lives through smart assistants. These conversational AI “assistants” can manifest cultural values and beliefs that are built into the system, from traditionalism and conformity to social tolerance, which may or may not be in line with those held by the diverse public of AI users. This project is designed to test the culture clash that emerges when human users' underlying cultural values differ from the values implicitly held by the AI.
Social Intelligence Differences in Humans vs. Machines: A Matter of Morals and Values explores the differences between human and artificial intelligence in the ability to learn from social interactions. While human intelligence is characterized by the remarkable ability to learn from social interactions with other humans, attempts to endow machines and robots with “artificial social intelligence” (social AI) to learn from interactions with humans have failed to reach human levels, despite AI exceeding human capacities in other areas of intelligence.
AI in natural sciences pedagogy and courses
AI and pedagogy
The Department of Biology maintains an extensive AI Activities Hub for Educators, with a blend of various assignment styles and diverse formats to challenge and expand students' perspectives and creativity with biology course material. This site is maintained by Biology Instructional Coordinator Christine Savolainen, who is also a founding member of the Center for Teaching and Learning’s Advisory Council for Technology-Enhanced Teaching.
Sara Billey (Mathematics), PhD student Hermann Chau (Mathematics), and Pacific Northwest National Laboratory (PNNL) data scientist and mathematician Henry Kvinge have created an Algebraic Combinatorics Dataset Repository (ACD Repo) at PNNL, which contains mathematics datasets already set up for ML. It is aimed at those, including undergraduates and early-career researchers, who want to start exploring the intersection of AI and math research while building practical skills for their future.
Bamdad Hosseini (Applied Mathematics) runs the Mathematics of Machine Learning Journal Club, a study group for students and academics across campus to learn about recent ideas in the mathematical aspects of machine learning.
Bamdad Hosseini is also involved in a collaboration with the Math Science Upward Bound program at the UW, an outreach education program for underrepresented students in high schools in King County. Hosseini and his research team are involved in designing curriculum materials for a high school machine learning course for this program.
Courses with a focus on AI
Physics
Shih-Chieh Hsu has developed two new courses:
- PHYS 417: Neural Network Methods for Signals in Engineering and Physical Sciences
- ARTSCI 162 H: Exploring Quantum Universe With Artificial Intelligence
Mathematics
The Department of Mathematics is incorporating AI/formalization into their curriculum. Vasily Ilin taught an undergraduate topics course on Formalization in Spring 2024 and Jarod Alper is teaching an undergraduate topics course in 2025. Alper also plans to teach a graduate course on Mathematics of AI, which will be an overview course on the many themes of Math AI.
Sara Billey and PhD student Hermann Chau (both in Mathematics), and Pacific Northwest National Laboratory (PNNL) data scientist and mathematician Henry Kvinge have run an experimental summer course for undergraduates that focused on the role of AI in mathematics. One assignment had students train a neural network and interpret the network weights to discover the definition of several classical permutation statistics. Students were excited to make their own conjectures and prove them correct.
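As a hedged sketch of what such an assignment involves (illustrative only, not the actual course materials; the descent statistic and the adjacent-pair encoding are choices made here for concreteness), one can fit a linear model, the simplest possible "network", on encoded permutations and then try to read the definition of the statistic off the learned weights:

```python
import itertools
import random

def descents(p):
    # Classical permutation statistic: number of positions i with p[i] > p[i+1].
    return sum(1 for i in range(len(p) - 1) if p[i] > p[i + 1])

def features(p):
    # One indicator per (position, adjacent value pair):
    # x[i*n*n + j*n + k] = 1 iff p[i] = j and p[i+1] = k.
    n = len(p)
    x = [0.0] * ((n - 1) * n * n)
    for i in range(n - 1):
        x[i * n * n + p[i] * n + p[i + 1]] = 1.0
    return x

n = 4
data = [(features(p), float(descents(p))) for p in itertools.permutations(range(n))]

# Fit a linear model by stochastic gradient descent
# (standing in here for a small neural network).
random.seed(0)
w = [0.0] * ((n - 1) * n * n)
for epoch in range(500):
    random.shuffle(data)
    for x, y in data:
        err = sum(wi * xi for wi, xi in zip(w, x)) - y
        w = [wi - 0.1 * err * xi for wi, xi in zip(w, x)]

# Interpreting w (for instance, comparing weights on value pairs with j > k
# against those with j < k) is how students can rediscover the definition
# of the statistic the model has learned to predict.
```

The model fits the data exactly because the descent count is a linear function of the pair indicators; the pedagogical step is recognizing that fact from the trained weights.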
Statistics
Statistics hosts one of the few undergraduate courses on campus on ethics of data science and artificial intelligence, STAT 303: Introduction to the Ethics of Algorithmic Decision Making. This course was developed originally by Abel Rodriguez and is currently taught by Thomas Richardson. Other related UW courses are SOC 225, INFO 351, and CSE 480. STAT 303 may be the only one that includes both the humanistic/social/policy aspects and the technical aspects of the issues.
Biology
Carl Bergstrom (Biology) and Jevin West (Information School) have spent the past six months developing an online curriculum that is essentially a humanities course about how to thrive in a ChatGPT world. They take a dialectical approach, teaching how these tools work, when you should use them, when you shouldn’t, and what they’re doing to society.
In their Calling Bullshit course (BIOL 270), Carl Bergstrom and Jevin West spend two weeks talking about how generative AI is changing the world.
In BIOL 461, Bing Brunton teaches the use of AI and large language models (LLMs) via a series of in-class and writing exercises to explore the ethical use of LLMs in studying neurobiology, including how to ask questions, how to refine answers, and how to ensure the provenance of the answers (e.g., checking sources and citations).
Psychology
The Department of Psychology is in the process of developing multiple courses on AI & Psychology for non-majors, majors, and graduate students.
Feature stories about AI in the Natural Sciences

An Earful of AI
Hearing aid technology is improving all the time with the help of AI, thanks to researchers like Yi Shen, professor of speech & hearing sciences.