Xingfu Wu

  • Research Associate Professor of Computer Science

Education

  • Ph.D. in computer science, Beijing University of Aeronautics and Astronautics, Beijing, China
  • M.S. in mathematics, Beijing Normal University, Beijing, China
  • B.S. in mathematics, Beijing Normal University, Beijing, China

Research Interests

  • High-performance computing
  • Performance modeling and analysis
  • Energy and power modeling and analysis

Professional Affiliations & Memberships

Argonne National Laboratory

Awards

  • Best Paper Award, 14th IEEE International Conference on Computational Science and Engineering (CSE-2011), 2011.
  • Second Place, Beijing Science and Technology Advancement Awards, 1997

Publications

Monographs

  1. Xingfu Wu, Performance Evaluation, Prediction, and Visualization of Parallel Systems, Kluwer Academic Publishers, Boston, February 1999. (based on my Ph.D. dissertation)

Book Chapters and Refereed Journal Articles

  1. Xingfu Wu, Benchun Duan, and Valerie Taylor, Parallel Earthquake Rupture Simulations on Large-scale Multicore Cluster Systems (Book Chapter), in Handbook of Data Intensive Computing (Eds: B. Furht and A. Escalante), Springer-Verlag, 2011.
  2. Xingfu Wu, Valerie Taylor, Charles Lively, Hung-Ching Chang, Bo Li, Kirk Cameron, Dan Terpstra and Shirley Moore, MuMMI: Multiple Metrics Modeling Infrastructure (Book Chapter), in Tools for High Performance Computing 2013 (Eds: A. Knupfer, J. Gracia, W. E. Nagel, M. M. Resch), Springer, 2014.
  3. Xingfu Wu, Valerie Taylor, and Zhiling Lan, Performance and Power Modeling and Prediction Using MuMMI and Ten Machine Learning Methods, Concurrency and Computation: Practice and Experience, e7254, August 2022, https://doi.org/10.1002/cpe.7254.
  4. Xingfu Wu, Michael Kruse, Prasanna Balaprakash, Hal Finkel, Valerie Taylor, Paul Hovland, and Mary Hall, Autotuning PolyBench Benchmarks with LLVM Clang/Polly Loop Optimization Pragmas Using Bayesian Optimization, Concurrency and Computation: Practice and Experience, Volume 34, Issue 20, e6683, Nov. 2021, https://doi.org/10.1002/cpe.6683.
  5. Xingfu Wu and Valerie Taylor, Utilizing Ensemble Learning for Performance and Power Modeling and Improvement of Parallel Cancer Deep Learning CANDLE Benchmarks, Concurrency and Computation: Practice and Experience, e6516, July 2021, https://doi.org/10.1002/cpe.6516.

Refereed Conference Papers

  1. Kevin Huck, Xingfu Wu, Anshu Dubey, Antigoni Georgiadou, J. Austin Harris, Tom Klosterman, Matthew Trappett, and Klaus Weide, Performance Debugging and Tuning of Flash-X with Data Analysis Tools, SC2022 Workshop on Programming and Performance Visualization Tools (ProTools22), Nov. 2022.
  2. Xingfu Wu, Valerie Taylor, and Zhiling Lan, Performance and Energy Improvement of the ECP Proxy App SW4lite under Various Workloads, SC2021 Workshop on Memory-Centric High Performance Computing (MCHPC’21), Nov. 2021.
  3. Jaehoon Koo, Prasanna Balaprakash, Michael Kruse, Xingfu Wu, Paul Hovland, and Mary Hall, Customized Monte Carlo Tree Search for LLVM/Polly's Composable Loop Optimization Transformations, SC21 Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (PMBS’21), Nov. 2021.
  4. Xingfu Wu, Aniruddha Marathe, and Siddhartha Jana, End-to-End PowerStack Codesign for Energy Efficient HPC (position paper), 2021 DOE Advanced Scientific Computing Research (ASCR) Workshop on Reimagining Codesign (ReCoDe), March 16-18, 2021.
  5. Xingfu Wu, Michael Kruse, Prasanna Balaprakash, Hal Finkel, Valerie Taylor, Paul Hovland, and Mary Hall, Autotuning PolyBench Benchmarks with LLVM Clang/Polly Loop Optimization Pragmas Using Bayesian Optimization, SC20 Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (PMBS’20), Nov. 12, 2020, Atlanta, Georgia.
  6. Michael Kruse, Hal Finkel, and Xingfu Wu, Autotuning Search Space for Loop Transformations, SC20 Workshop on the LLVM Compiler Infrastructure in HPC, Nov. 12, 2020, Atlanta, Georgia.
  7. Xingfu Wu and Valerie Taylor, Utilizing Ensemble Learning for Performance and Power Modeling and Improvement of Parallel Cancer Deep Learning CANDLE Benchmarks, in the 2020 Cray User Group Conference, October 27, 2020.
  8. Xingfu Wu, Valerie Taylor, and Zhiling Lan, Performance and Power Modeling and Prediction Using MuMMI and Ten Machine Learning Methods, in the 2020 Cray User Group Conference, October 27, 2020.
  9. Xingfu Wu, Aniruddha Marathe, Siddhartha Jana, Ondrej Vysocky, Jophin John, Andrea Bartolini, Lubomir Riha, Michael Gerndt, Valerie Taylor, and Sridutt Bhalachandra, Toward an End-to-End Auto-tuning Framework in HPC PowerStack, Energy Efficient HPC State of Practice 2020 (EE HPC SOP 20), Sep. 14-17, 2020, Kobe, Japan.
  10. Xingfu Wu, Valerie Taylor, Justin M. Wozniak, Rick Stevens, Thomas Brettin, and Fangfang Xia, Performance, Energy, and Scalability Analysis and Improvement of Parallel Cancer Deep Learning CANDLE Benchmarks, in the 48th International Conference on Parallel Processing, Kyoto, Japan, August 5-8, 2019.
  11. Xingfu Wu, Valerie Taylor, and Zhiling Lan, Evaluating Runtime and Power Requirements of Multilevel Checkpointing MPI Applications on Four Different Parallel Architectures: An Empirical Study, in the 2018 Cray User Group Conference, Stockholm, Sweden, May 20-24, 2018.
  12. Xingfu Wu, Valerie Taylor, Justin M. Wozniak, Rick Stevens, Thomas Brettin, and Fangfang Xia, Performance, Power, and Scalability Analysis of the Horovod Implementation of the CANDLE NT3 Benchmark on the Cray XC40 Theta, in an SC18 workshop, Nov. 2018.
