Title: Applications of AI Language Models for Biomedical Research
Abstract: Large language models (LLMs), trained on vast datasets, are opening new frontiers in biomedical research, especially when integrated with prompt engineering, parameter-efficient fine-tuning (PEFT), retrieval-augmented generation (RAG), reinforcement learning, and AI agents. In this presentation, I will highlight our work in applying these methods to diverse biomedical challenges. We developed RAG and prompt refinement strategies to improve gene relationship prediction and built AI agents for navigating knowledge bases. In protein modeling, we introduced S-PLM, a contrastive learning-based, 3D structure-aware protein language model that enhances sequence-based predictions. We trained our own large protein language model, Prot2Token, to handle various protein prediction tasks in a unified framework. Prompting protein language models also improved tasks such as signal peptide and targeting signal prediction. Additionally, we applied prompt-based learning to large single-cell RNA-seq models, leading to improved performance across multiple single-cell analysis tasks. We also developed a reinforcement learning approach that enhances cell line-specific subcellular localization prediction by optimizing protein-protein interaction networks, marking the first application of the Group Relative Policy Optimization (GRPO) framework in protein bioinformatics. Collectively, our results demonstrate the transformative potential of LLMs and AI agents in advancing biological discovery.
Bio: Dong Xu is Curators’ Distinguished Professor in the Department of Electrical Engineering and Computer Science, with appointments in the Christopher S. Bond Life Sciences Center and the Informatics Institute at the University of Missouri-Columbia. He obtained his Ph.D. from the University of Illinois at Urbana-Champaign in 1995 and did two years of postdoctoral work at the US National Cancer Institute. He was a Staff Scientist at Oak Ridge National Laboratory until 2003, when he joined the University of Missouri, where he served as Department Chair of Computer Science from 2007 to 2016. Over the past 30 years, he has conducted research in many areas of computational biology and bioinformatics, including single-cell data analysis, protein structure prediction and modeling, protein post-translational modifications, protein localization prediction, computational systems biology, biological information systems, and bioinformatics applications in humans, microbes, and plants. Since 2012, his research has focused on the interface between bioinformatics and deep learning. He has published more than 500 papers with more than 30,000 citations and an H-index of 91 according to Google Scholar. He was elected a Fellow of the American Association for the Advancement of Science (AAAS) in 2015 and a Fellow of the American Institute for Medical and Biological Engineering (AIMBE) in 2020.
Zoom link:
Title: Metric-Scale Robotic Skin Using E-Textile and MEMS Technologies
Abstract: The integration of multi-modal sensors into robotic skin is essential for ensuring safety and enabling collaborative work with humans. One of the main challenges in developing robotic skin is scalability: covering large, metric-scale surfaces while also reducing sensor costs and automating the assembly process. Advanced semiconductor-based microelectromechanical systems (MEMS) offer a pathway to low-cost, highly sensitive tactile sensors, bending sensors, and other mechanical sensing components. Meanwhile, electronic textile (e-textile) technologies enable the integration of large-area sensors and electronic components through automated manufacturing processes. In this talk, I will introduce textile-based capacitive touch sensor technology, large-area MEMS integration, and the incorporation of electronic components on metric-scale textiles. In addition, the development of highly sensitive tactile and bending sensors will be presented for robotic applications.
Bio: Seiichi Takamatsu is a Professor in the School of Systems Science and Industrial Engineering at ÌìÃÀ´«Ã½. He received his B.E., M.E., and Ph.D. degrees in Mechanical Informatics from the University of Tokyo, Japan, in 2003, 2005, and 2009, respectively. From 2009 to 2015, he worked as a researcher at AIST. Before joining ÌìÃÀ´«Ã½, he was an Associate Professor at the University of Tokyo. His research interests include hybrid electronics, wearable MEMS technologies, and meter-scale electronic textiles. He has published over 120 journal and conference papers on hybrid electronics and electronic textiles.
Zoom link:
Title: Autonomous Culvert Inspection Using Legged Robots
Abstract: We are working on culvert inspection in the Erie Canal using legged robots. Such inspection presents several challenges, including stable navigation of legged robots, inspection in low-light conditions, accurate fault detection, and detailed visual reconstruction for future inspections. In this talk, I’ll describe our approach to each of these problems, along with implications for general robot autonomy. CaRT is a context-aware adaptation filter that uses temporal sequence sampling to improve stability in legged locomotion. NightHawk jointly optimizes external light and camera parameters for optimal image capture in low-light conditions. VISION is a system that uses vision-language models (VLMs) with best-view planning to perform fault detection in culverts in a zero-shot manner. And EXPLORE is our adaptation of Gaussian Splatting for active 3D reconstruction of the culvert for both visual and structural reasoning. I’ll also briefly highlight one of our other projects.
Bio: I am an Associate Professor in Computer Science and Engineering at the University at Buffalo, State University of New York (UB). I received my Ph.D. in Computer Science from the University of Southern California in 2010 and was a Postdoctoral Fellow in the EECS Department at Harvard University from 2010 to 2013. My research spans the areas of mobile systems and robotics. Most recently, my group has worked on field robotics in the areas of infrastructure inspection and autonomous excavation. My research is supported by several grants from the National Science Foundation, DARPA, AFOSR, AFRL, ONR, and others, including the NSF Faculty Early Career Award, which I received in 2019. For my research, I received the IEEE Region 1 award for technological innovation (academic) in 2022 and was elevated to IEEE Senior Member in 2023. I am the founding director of the Center for Embodied Autonomy and Robotics, a university-wide center that brings together research, entrepreneurship, and outreach in robotics. I also co-direct the Master’s program in Robotics.
Zoom Link: