Academic Exchange
Seminar: Towards Efficient Distributed Machine Learning: A Joint Algorithm and System Approach
Published: 2023-12-28
Title: Towards Efficient Distributed Machine Learning: A Joint Algorithm and System Approach
Time: 10:00 a.m., Wednesday, January 3, 2024
Venue: Xiaomi Studio, 1st floor, 437bwin必贏國際官網
Speaker: Wei Bao (包巍)
Nationality: China
Affiliation: The University of Sydney
Biography: Wei Bao received the PhD degree in Electrical and Computer Engineering from the University of Toronto, Canada, in 2016. He is currently a senior lecturer at the School of Computer Science, the University of Sydney. His research covers networking, edge computing, and distributed machine learning. He received Best Paper Awards at the ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems (MSWiM) in 2013 and 2019, and at the IEEE International Symposium on Network Computing and Applications (NCA) in 2016.
Abstract: Distributed machine learning is gaining popularity due to its advantages in flexibility, scalability, and privacy. However, geographically distributed data and heterogeneous devices inevitably cause high latency and wasted resources. In this talk, I will present our recent research progress in addressing these issues through joint algorithm and system designs. By simplifying AI models and developing faster-converging training algorithms, we achieve significant reductions in latency and resource consumption; by coordinating computing, communication, and other resources, we significantly accelerate both model training and inference. We also adapt our designs to diverse systems and fluctuating environments to maximise the potential of both algorithms and systems. I will cover several exemplary research papers, including efficient federated learning, collaborative video analytics, and the scheduling algorithms that support them, to demonstrate the effectiveness of our designs.
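For readers unfamiliar with the federated learning setting the abstract mentions, the sketch below shows the classic federated averaging (FedAvg) loop: clients train locally on private data, and a server averages their models weighted by dataset size. This is only an illustrative toy (least-squares clients, synthetic data, all function names invented here), not the speaker's actual algorithms or systems.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on least-squares loss."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fedavg(clients, rounds=20, dim=2):
    """Server loop: broadcast the global model, then average client models
    weighted by local dataset size (the standard FedAvg aggregation rule)."""
    w = np.zeros(dim)
    total = sum(len(y) for _, y in clients)
    for _ in range(rounds):
        updates = [local_sgd(w, X, y) for X, y in clients]
        w = sum(len(y) / total * u for (_, y), u in zip(clients, updates))
    return w

# Synthetic demo: three clients whose data share one underlying linear model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = fedavg(clients)  # converges toward true_w under these toy assumptions
```

Because the clients only exchange model parameters, never raw data, this structure gives the privacy benefit the abstract refers to; the communication rounds are also where the latency and resource costs the talk targets arise.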
Hosts: Dazhao Cheng (程大釗), Chuang Hu (胡創)