Title: Model Inspection for Driving
Time: April 19, 2023, 14:00
Venue: Room B404
Speaker: Dr. Patrick Pérez
Nationality: France
Affiliation: Valeo (a Fortune Global 500 company)
Bio: Patrick Pérez is Valeo VP of AI and Scientific Director of valeo.ai, an AI research lab focused on Valeo automotive applications, self-driving cars in particular. Before joining Valeo, Patrick Pérez was a researcher at Technicolor (2009-2018), Inria (1993-2000, 2004-2009) and Microsoft Research Cambridge (2000-2004). His research interests include multimodal scene understanding and computational imaging.
Patrick Pérez holds a master's degree from the École Centrale Paris and a Ph.D. in signal processing from the University of Rennes.
Patrick Pérez has published numerous articles in venues including Les Echos and the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). He has given talks at companies and universities such as Samsung, the University of Oxford, and the Collège de France, and delivered a keynote, joined a panel discussion, and served as a chair at the Czech-French National AI Symposium in Prague.
Abstract: From perception to decision, driving stacks rely heavily on trained models, which raises crucial reliability issues. Improving reliability can take many forms, most of them still at the research level. In this presentation, I will survey recent works at Valeo.ai aimed at "inspecting" a target model in various ways. We shall see how an auxiliary model can be learned, for instance, to predict the confidence of a recognition model's output, or to explain the decision of an end-to-end driving model. We shall also discuss the generation of counterfactual explanations for a vision-based driving model, to gain insight into its reasoning and possible biases.
Inviter: 玄躋峰 (Jifeng Xuan)