Talk Title: On Numerical Cognition Ability of Deep Learning – a case study of general AI
Time: 9:00 a.m., December 15, 2023
Venue: Lecture Hall, 8th Floor
Speaker: Xiaolin Wu (武筱林)
Speaker Nationality: Canada
Affiliation: McMaster University

Speaker Bio: Xiaolin Wu received his bachelor's degree from Wuhan University in 1982 and his Ph.D. from the University of Calgary in 1988, both in computer science. He is currently a Professor in the Department of Electrical and Computer Engineering at McMaster University, Canada, a Distinguished Engineering Professor of the Faculty of Engineering, and a Senior Industrial Research Chair of the Natural Sciences and Engineering Research Council of Canada (NSERC). He previously served as a Research Professor in the Department of Computer Science of the New York University school of engineering (2001-2003), and as Associate Professor and later Professor of Computer Science at the University of Western Ontario, Canada. Professor Wu is an internationally renowned scholar and technical authority in high-fidelity image processing, coding, transmission, and reconstruction. He is listed among the world's top 1% scientists in the Stanford University academic-impact rankings, has an H-index of 66, and has produced a number of highly influential, pioneering research results. He is an IEEE Fellow, an executive member of the IEEE Industrial Signal Processing Committee, a standing member of the IEEE Multidimensional Signal Processing Technical Committee, and an Associate Editor of the IEEE Transactions on Image Processing. He has also served as an Associate Editor of the IEEE Transactions on Multimedia, and as a technical committee member and session chair of numerous IEEE image processing and signal processing conferences. Professor Wu's international honors include the UWO Distinguished Research Professorship, Denmark's Monsteds Research Award, Finland's Nokia International Research Award, and a VCIP Best Paper Award.
Abstract: Subitizing, or the sense of small natural numbers, is an innate cognitive function of humans and primates; it responds to visual stimuli prior to the development of any symbolic skills, language or arithmetic. Given the successes of deep learning (DL) in tasks of visual intelligence and the primitivity of number sense, a tantalizing question is whether DL can comprehend numbers and perform subitizing. Somewhat disappointingly, extensive cognitive-psychology-style experiments demonstrate that the example-driven, black-box DL cannot see through superficial variations in visual representations and distill the abstract notion of natural number, a task that children perform with high accuracy and confidence. The failure is apparently due to the learning method, not the connectionist CNN machinery itself: a recurrent neural network capable of subitizing does exist, which we construct by encoding a mechanism of mathematical morphology into the CNN convolutional kernels. Also, using subitizing as a test bed, we investigate ways to aid black-box DL with cognitive priors derived from human insight. Our findings are mixed and interesting, pointing both to the cognitive deficits of pure DL and to some measured successes in boosting DL with predetermined cognitive implements. This case study of DL in cognitive computing is meaningful, as numerosity represents a rudimentary level of human intelligence.
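To make the notion of numerosity concrete, the following is a minimal, hypothetical sketch that reads the numerosity of a binary scene off symbolically as its count of connected components. It is not the speaker's morphology-encoded recurrent network; the function name subitize and the use of scipy.ndimage.label are illustrative assumptions. The point is the invariance a subitizing model must exhibit: the answer stays the same under arbitrary changes of blob shape, size, and position.

# Hypothetical illustration only: numerosity of a binary image taken as its
# number of connected components, a symbolic stand-in for what subitizing computes.
# This is not the speaker's method; scipy.ndimage.label is assumed to be available.
import numpy as np
from scipy import ndimage

def subitize(binary_image: np.ndarray) -> int:
    """Return the numerosity (count of connected foreground blobs) of a binary scene."""
    # ndimage.label assigns a distinct integer label to every connected component
    # and returns the number of components found.
    _, num_components = ndimage.label(binary_image)
    return int(num_components)

# A toy 8x8 scene with three separated blobs of different shapes and sizes;
# the correct answer "3" is invariant to how the blobs look or where they sit.
scene = np.zeros((8, 8), dtype=bool)
scene[1:3, 1:3] = True   # a 2x2 square
scene[5, 1] = True       # a single pixel
scene[2:5, 5:7] = True   # a 3x2 rectangle
print(subitize(scene))   # prints 3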
Host: Bo Du (杜博)