Talk title: Model Inspection for Driving
Time: 14:00, April 19, 2023
Venue: Room B404
Speaker: Dr. Patrick Pérez
Nationality: French
Affiliation: Valeo (a Fortune Global 500 company)
Bio: Patrick Pérez is Valeo VP of AI and Scientific Director of valeo.ai, an AI research lab focused on Valeo automotive applications, self-driving cars in particular. Before joining Valeo, Patrick Pérez was a researcher at Technicolor (2009-2018), Inria (1993-2000, 2004-2009) and Microsoft Research Cambridge (2000-2004). His research interests include multimodal scene understanding and computational imaging.
Patrick Pérez holds a master's degree from École Centrale Paris and a Ph.D. in signal processing from the University of Rennes.
Patrick Pérez has published articles in Les Echos and at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), among other venues. He has given talks at companies and universities including Samsung, the University of Oxford, and the Collège de France, and he delivered a keynote, took part in a panel discussion, and served as a chair at the Czech-French National AI Symposium in Prague.
Abstract: From perception to decision, driving stacks rely heavily on trained models, which raises crucial reliability issues. Improving reliability can take many forms, most of them still at the research level. In this presentation, I will survey recent works at Valeo.ai aimed at “inspecting” a target model in various ways. We shall see how an auxiliary model can be learned, for instance, to predict the confidence of a recognition model’s output, or to explain the decision of an end-to-end driving model. We shall also discuss the generation of counterfactual explanations for a vision-based driving model, to get insights into its reasoning and possible biases.
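
To make the first idea concrete, here is a minimal sketch of an auxiliary confidence model, in the spirit of failure-prediction work from valeo.ai such as ConfidNet. It trains a small head on the frozen recognition model's features to predict whether that model's prediction is correct; the backbone/classifier split, layer sizes, and the binary correctness target are illustrative assumptions, not the speaker's actual implementation (ConfidNet itself regresses the true-class probability rather than a binary label).

# Illustrative sketch only (PyTorch): learn an auxiliary model that predicts
# the confidence of a frozen recognition model's output.
import torch
import torch.nn as nn

class ConfidenceHead(nn.Module):
    """Auxiliary model: maps the target model's features to a confidence in [0, 1]."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats).squeeze(-1)

def train_step(target_model, conf_head, optimizer, images, labels):
    """One step: the target model stays frozen; only the auxiliary head learns."""
    with torch.no_grad():                      # never update the inspected model
        feats = target_model.backbone(images)  # hypothetical feature extractor
        logits = target_model.classifier(feats)
        correct = (logits.argmax(dim=1) == labels).float()  # 1 iff prediction is right
    conf = conf_head(feats)                    # predicted probability of being right
    loss = nn.functional.binary_cross_entropy(conf, correct)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

At test time, conf_head(feats) yields a per-input reliability score that can gate downstream driving decisions, which is the role the abstract assigns to the learned auxiliary model.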
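
The counterfactual part can be illustrated in a similarly simplified way. Valeo.ai's published approaches rely on generative models to produce realistic, semantically meaningful counterfactual scenes; the sketch below shows only the classic pixel-space formulation (in the style of Wachter et al.): optimize a minimal perturbation of the input until the driving model's decision flips to a chosen action. The model interface and all hyperparameters here are assumptions for illustration.

# Illustrative sketch only (PyTorch): gradient-based counterfactual explanation
# for a frozen vision-based driving model with discrete actions.
import torch

def counterfactual(model, x, target_action, steps=200, lr=0.05, dist_weight=0.1):
    """Return an input close to x for which `model` predicts `target_action`."""
    x_cf = x.clone().requires_grad_(True)        # counterfactual candidate
    opt = torch.optim.Adam([x_cf], lr=lr)
    target = torch.tensor([target_action])
    for _ in range(steps):
        logits = model(x_cf)                     # e.g. scores for {brake, go, turn}
        if logits.argmax(dim=1).item() == target_action:
            break                                # decision flipped: stop early
        flip_loss = torch.nn.functional.cross_entropy(logits, target)
        dist_loss = (x_cf - x).pow(2).mean()     # stay close to the original scene
        loss = flip_loss + dist_weight * dist_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x_cf.detach()

Comparing x_cf with x, for example inspecting which pixels or objects had to change to turn "brake" into "go", is what provides insight into the model's reasoning and possible biases.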
Host: Jifeng Xuan (玄跻峰)