[{"id":1,"membername":"唐振民(退休)","roletype":1,"tutortype":1,"isboss":3,"major":"机器人技术、无人系统","email":"tzm.cs@njust.edu.cn","college":"计算机科学与工程学院","school":"南京理工大学","marks":"\u003cp\u003e原总装军用“计算机与软件”专家组成员\u003c/p\u003e\u003cp\u003e原总装军用“核高基”专家组成员\u003c/p\u003e\u003cp\u003e装备发展部“人工智能”专家组成员\u003c/p\u003e\u003cp\u003e入选国防科工委“511”人才工程\u003c/p\u003e\u003cp\u003e国防科学技术进步奖 二等奖、三等奖\u003cbr/\u003e\u003c/p\u003e\u003cp\u003e教育部科学技术进步奖 一等奖\u003c/p\u003e\u003cp\u003e江苏省科学技术奖 二等奖、三等奖\u003c/p\u003e\u003cp\u003e主持国家“型号”项目,军口“核高基”项目,国家自然科学基金重大计划重点项目、面上项目等。\u003cbr/\u003e\u003c/p\u003e","imgname":"e8273c26-77c8-4cae-9e9f-47ee90bd64a3-removebg-preview.jpg","imgdownname":"files/members/f0458536-d628-4c18-980a-f1e3b139e64c.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/f0458536-d628-4c18-980a-f1e3b139e64c.jpg","userid":1,"username":"admin","createtime":"2020-03-05 22:59:38","updatetime":"2024-11-23 16:30:26","deletetime":"2021-12-26 12:10:04","flag":1,"index":1},{"id":18,"membername":"姚亚洲 | Yazhou Yao","roletype":1,"tutortype":1,"isboss":1,"major":"计算机视觉、多媒体技术、机器学习 | Computer Vision, Multimedia, Machine Learning","email":"yazhou.yao@njust.edu.cn","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"\u003cp style\u003d\"text-wrap: wrap;\"\u003e\u003cstrong\u003e奖励荣誉 | Honors:\u003c/strong\u003e\u003c/p\u003e\u003cp style\u003d\"text-wrap: wrap;\"\u003e1. 陆军装备部\u0026nbsp;\u003cspan style\u003d\"text-wrap-mode: wrap;\"\u003e“十五五”\u0026nbsp;\u003c/span\u003e“XX信息体系”专业组专家,2025\u003c/p\u003e\u003cp style\u003d\"text-wrap: wrap;\"\u003e2. 信息支援部队装备部 “十五五” “XX控制”专业组专家,2025\u003c/p\u003e\u003cp style\u003d\"text-wrap: wrap;\"\u003e3. 入选 国家高层次人才计划(青年项目),2021\u003c/p\u003e\u003cp style\u003d\"text-wrap: wrap;\"\u003e4. 入选 江苏省杰青,2024\u003c/p\u003e\u003cp style\u003d\"text-wrap: wrap;\"\u003e5. 国防科学技术进步奖,二等奖,2023\u003c/p\u003e\u003cp style\u003d\"text-wrap: wrap;\"\u003e6. 
兵器工业集团技术发明奖,二等奖,2025\u0026nbsp;\u003cbr/\u003e\u003c/p\u003e\u003cp style\u003d\"text-wrap: wrap;\"\u003e7. 装备发展部,第一届“智算杯”智能计算基础平台挑战赛:高性能体系结构1组,三等奖,2020\u003c/p\u003e\u003cp style\u003d\"text-wrap: wrap;\"\u003e8. 航天系统装备部,第三届“天智杯”人工智能挑战赛:亚米级SAR图像飞机目标细粒度智能识别赛道,季军,2023\u003c/p\u003e\u003cp style\u003d\"text-wrap: wrap;\"\u003e9.\u0026nbsp;第三届“计图”人工智能挑战赛:语义分割赛道,一等奖(冠军),2023\u003c/p\u003e\u003cp style\u003d\"text-wrap: wrap;\"\u003e10. 粤港澳大湾区国际算法算例大赛:遥感图像物体目标检测, 一等奖(冠军),2022\u003c/p\u003e\u003chr style\u003d\"text-wrap: wrap;\"/\u003e\u003cp style\u003d\"text-wrap: wrap;\"\u003e\u003c/p\u003e\u003cp style\u003d\"text-wrap: wrap;\"\u003e\u003cstrong\u003e科研项目 | Fundings:\u003c/strong\u003e\u003c/p\u003e\u003cp style\u003d\"text-wrap: wrap;\"\u003e\u003c/p\u003e\u003cp style\u003d\"text-wrap: wrap;\"\u003e\u003cstrong\u003e\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e一、人才类项目:\u003c/p\u003e\u003cp\u003e1. 中组部,国家高层次人才计划项目,2022.01-2024.12,200万,主持\u0026nbsp;\u003c/p\u003e\u003cp\u003e2. 江苏省自然科学基金,杰出青年基金,“无可靠人工标注条件下的图像数据理解与应用研究”,2024.07-2027.06,180万,主持\u0026nbsp;\u003c/p\u003e\u003cp\u003e二、国防类项目:\u003c/p\u003e\u003cp\u003e6. 国防科工局,技术基础重点项目,“精确制导武器图像识别XXX技术”,2025.01-2027.12,880万,主持\u003c/p\u003e\u003cp\u003e5. 装备发展部,重大专项课题(导引头“三化”项目),“导引头单体对地XX智能感知XXX技术”,2025.01-2025.12,202万,主持\u003c/p\u003e\u003cp\u003e4. 国防科工局,基础科研项目,“XX目标探测与识别解译技术”,2022.01-2024.12, 200万,主持\u0026nbsp;\u003c/p\u003e\u003cp\u003e3. 军委科技委,国防基础加强计划技术领域基金项目,“XX变化下的图像匹配识别技术”,2023.01-2024.12,90万,主持\u0026nbsp;\u003c/p\u003e\u003cp\u003e2. 装备发展部,“慧眼行动”项目,“智能化XX精确制导目标识别系统”,2021.06-2023.06,185万,主持\u0026nbsp;\u003c/p\u003e\u003cp\u003e1. 装备发展部,预研项目,“基于XXX的目标检测加速系统”,2021.01-2022.12,91万,主持\u0026nbsp;\u003c/p\u003e\u003cp\u003e\u003c/p\u003e\u003cp\u003e三、基金类项目:\u003c/p\u003e\u003cp\u003e3. 国家自然科学基金,面上项目,“面向真实开放场景的有限可靠标注细粒度识别技术研究”,2025.01-2028.12,50万,主持\u0026nbsp;\u003cbr/\u003e\u003c/p\u003e\u003cp\u003e2. 国家自然科学基金,青年基金,“基于含噪样本数据的细粒度图像识别技术研究”,2022.01-2024.12,30万,主持\u0026nbsp;\u003c/p\u003e\u003cp\u003e1. 
江苏省自然科学基金,青年基金,“基于无严格人工标注数据的细粒度识别技术研究”,2021.07-2024.06,20万,主持\u003c/p\u003e\u003chr style\u003d\"text-wrap: wrap;\"/\u003e\u003cp style\u003d\"text-wrap: wrap;\"\u003e\u003cstrong\u003e学术论文 | Publications:\u003c/strong\u003e\u003c/p\u003e\u003cp style\u003d\"text-wrap: wrap;\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color:#222222\"\u003e\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin: 3px 0;text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(34, 34, 34); letter-spacing: 0px;font-size:12px\"\u003e\u003c/span\u003e\u003c/strong\u003e\u003c/p\u003e\u003cp style\u003d\";padding: 0\"\u003e\u003cspan style\u003d\"font-size:16px\"\u003e\u003cspan style\u003d\"font-family: \u0026quot;Times New Roman\u0026quot;; color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e51. 
Xinhao Cai, Gensheng Pei, Zeren Sun,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e*\u003c/span\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Fumin Shen, Wenguan Wang, \u0026quot;Iris: Bringing Real-World Priors into Diffusion Model for Monocular Depth Estimation\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eCVPR\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2026\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e50. 
Mengmeng Sheng, Zeren Sun, Tao Chen, Jinshan Pan,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e*\u003c/span\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Fumin Shen, \u0026quot;Revisiting Learning with Noisy Labels: Active Forgetting and Noise Suppression\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eCVPR\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2026\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e49. 
Bo Zhou, Qiuxia Lai, Zeren Sun, Xiangbo Shu,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e*\u003c/span\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Wenguan Wang, \u0026quot;Learning 3D Representations for Spatial Intelligence from Unposed Multi-View Images\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eCVPR\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2026\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e48. 
Gensheng Pei, Xiruo Jiang, Xinhao Cai, Tao Chen,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e*\u003c/span\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Byeungwoo Jeon, \u0026quot;PEARL: Geometry Aligns Semantics for Training-Free Open-Vocabulary Semantic Segmentation\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eCVPR\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2026\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e47. 
Jianjian Yin, Tao Chen, Yi Chen, Gensheng Pei, Xiangbo Shu,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e*\u003c/span\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Fumin Shen, \u0026quot;PCA-Seg: Revisiting Cost Aggregation for Open-Vocabulary Semantic and Part Segmentation\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eCVPR\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2026\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e46. 
Zhenyu Yang, Gensheng Pei, Tao Chen, Yichao Zhou, Tianfei Zhou,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e*\u003c/span\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Fumin Shen, \u0026quot;Efficiency Follows Global-Local Decoupling\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eCVPR\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2026\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e45. 
Haowen Gu, Gensheng Pei, Zeren Sun, Mingwu Ren, Xiangbo Shu,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e*\u003c/span\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Fumin Shen, \u0026quot;MedFG-VQA: Low-Frequency Memory and Graph Attention for Lightweight Medical VQA\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eCVPR\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2026\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e44. 
Jianqiang Xu, Gensheng Pei, Huafeng Liu,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e*, \u0026quot;GSV2X: Geometry-Aware Uncertainty Modeling and Orthogonal Fusion for Robust Roadside Perception\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eCVPR\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2026\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e43. Meiqi Cao, Jiachao Zhang, Xin Jiang, Rui Yan,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Zechao Li, Xiangbo Shu, \u0026quot;Seeing Motion Through Polarity for Event-based Action Recognition\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eCVPR\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2026\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e41. 
Wenxuan Ge, Hongyu Qu, Rui Yan, Guo-Sen Xie,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Xiangbo Shu, Jinhui Tang, \u0026quot;Condensed Test-Time Adaptation of VLMs for Action Recognition\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eCVPR\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2026\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e40. Xun Jiang, Yufan Gu, Disen Hu, Yuqing Hou,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Fumin Shen, Heng Tao Shen, Xing Xu, \u0026quot;Multimodal Learning on Low-Quality Data with Conformal Predictive Self-Calibration\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eCVPR\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2026\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e39. 
Xinhao Cai, Liulei Li, Gensheng Pei, Tao Chen, Jinshan Pan,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e*, Wenguan Wang*, \u0026quot;Beyond Frequency: Scoring-Driven Debiasing for Object Detection via Blueprint-Prompted Image Synthesis\u0026quot;, International Conference on Learning Representations (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eICLR\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2026.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e38. Zeren Sun,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e*, Tongliang Liu, Zechao Li, Fumin Shen, and Jinhui Tang, \u0026quot;Jo-SNC: Combating Noisy Labels through Fostering Self- and Neighbor-Consistency\u0026quot;, IEEE Transactions on Pattern Analysis and Machine Intelligence (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eTPAMI\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2025.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e37. 
Xinhao Cai, Qiuxia Lai, Gensheng Pei, Xiangbo Shu,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e*, Wenguan Wang*, \u0026quot;Cycle-Consistent Learning for Joint Layout-to-Image Generation and Object Detection\u0026quot;, IEEE International Conference on Computer Vision (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eICCV\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2025.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e36. Mengmeng Sheng, Zeren Sun*, Tianfei Zhou, Xiangbo Shu, Jinshan Pan,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e*, \u0026quot;CA2C: A Prior-Knowledge-Free Approach for Robust Label Noise Learning via Asymmetric Co-learning and Co-training\u0026quot;, IEEE International Conference on Computer Vision (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eICCV\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2025.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e35. 
Meiqi Cao, Xiangbo Shu, Xin Jiang, Rui Yan,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Jinhui Tang, \u0026quot;Exploiting Frequency Dynamics for Enhanced Multimodal Event-based Action Recognition\u0026quot;, IEEE International Conference on Computer Vision (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eICCV\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2025.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e34. ZhiXuan Li, Binqian Xu, Xiangbo Shu, Jiachao Zhang,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Guo-Sen Xie, Jinhui Tang, \u0026quot;Tensor-aggregated LoRA in Federated Fine-tuning\u0026quot;, IEEE International Conference on Computer Vision (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eICCV\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2025.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e33. 
Bo Zhou, Liulei Li, Yujia Wang, Huafeng Liu,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e*, Wenguan Wang*, \u0026quot;UNIALIGN: Scaling Multimodal Alignment within One Unified Model\u0026quot;, IEEE Conference on Computer Vision and Pattern Recognition (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eCVPR\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2025.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e32. Gensheng Pei, Tao Chen, Yujia Wang, Xinhao Cai, Xiangbo Shu, Tianfei Zhou,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e*, \u0026quot;Seeing What Matters: Empowering CLIP with Patch Generation-to-Selection\u0026quot;, IEEE Conference on Computer Vision and Pattern Recognition (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eCVPR\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2025.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e31. 
Yang Shen, Peng Wang, Xiu-Shen Wei,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, \u0026quot;An Empirical Study on Training Paradigms for Deep Supervised Hashing\u0026quot;, International Journal of Computer Vision (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eIJCV\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2025.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e30. Hongyu Qu, Xiangbo Shu, Jianan Wei,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Wenguan Wang, Jinhui Tang, \u0026quot;OmniGaze: Reward-inspired Generalizable Gaze Estimation In The Wild\u0026quot;, Neural Information Processing Systems (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eNeurIPS\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2025.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e29. 
Binqian Xu, Haiyang Mei, Zechen Bai, Jinjin Gong, Rui Yan, Guo-Sen Xie,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Basura Fernando, Xiangbo Shu, \u0026quot;You Only Communicate Once: One-shot Federated Low-Rank Adaptation of MLLM\u0026quot;, Neural Information Processing Systems (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eNeurIPS\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2025.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e28. Yang Shen, Xiu-Shen Wei, Yifan Sun, YuXin Song, Tao Yuan, Jian Jin, He-Yang Xu,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Errui Ding, \u0026quot;Explanatory Instructions: Towards Unified Vision Tasks Understanding and Zero-shot Generalization\u0026quot;, International Conference on Machine Learning (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eICML\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2025.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e27. 
Mengmeng Sheng, Zeren Sun, Tao Chen, Shuchao Pang, Yucheng Wang,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e*, \u0026quot;Foster Adaptivity and Balance in Learning with Noisy Labels\u0026quot;, European Conference on Computer Vision (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eECCV\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2024.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e26. Tao Chen, XiRuo Jiang, Gensheng Pei, Zeren Sun, Yucheng Wang,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e*, \u0026quot;Knowledge Transfer with Simulated Inter-Image Erasing for Weakly Supervised Semantic Segmentation\u0026quot;, European Conference on Computer Vision (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eECCV\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2024.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e25. 
Ruhao Ma, Shuchao Pang, Bing Li, Yongbin Zhou,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, \u0026quot;Veil Privacy on Visual Data: Concealing Privacy for Humans, Unveiling for DNNs\u0026quot;, European Conference on Computer Vision (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eECCV\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2024.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e24. Xinhao Cai, Qiuxia Lai, Yuwei Wang, Wenguan Wang*, Zeren Sun,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e*, \u0026quot;Poly Kernel Inception Network for Remote Sensing Detection\u0026quot;, IEEE Conference on Computer Vision and Pattern Recognition (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eCVPR\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2024.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e23. 
Gensheng Pei, Tao Chen, Xiruo Jiang, Huafeng Liu, Zeren Sun,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e*, \u0026quot;VideoMAC: Video Masked Autoencoders Meet ConvNets\u0026quot;, IEEE Conference on Computer Vision and Pattern Recognition (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eCVPR\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2024.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e22. Mengmeng Sheng, Zeren Sun, Gensheng Pei, Tao Chen, Haonan Luo,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e*, \u0026quot;Enhancing Robustness in Learning with Noisy Labels: An Asymmetric Co-Training Approach\u0026quot;, ACM International Conference on Multimedia (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eACM MM\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2024.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e21. 
Meiqi Cao, Rui Yan, Xiangbo Shu, Guangzhao Dai,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Guosen Xie, \u0026quot;AdaFPP: Adapt-Focused Bi-Propagating Prototype Learning for Panoramic Activity Recognition\u0026quot;, ACM International Conference on Multimedia (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eACM MM\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2024.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e20. Haonan Luo, Guosheng Lin,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Fayao Liu, Zichuan Liu, and Zhenmin Tang, \u0026quot;Depth and Video Segmentation Based Visual Attention for Embodied Question Answering\u0026quot;, IEEE Transactions on Pattern Analysis and Machine Intelligence (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eTPAMI\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2022.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e19. 
Gensheng Pei,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao*\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Guo-Sen Xie, Fumin Shen, Zhenmin Tang, Jinhui Tang, \u0026quot;Hierarchical Feature Alignment Network for Unsupervised Video Object Segmentation\u0026quot;, European Conference on Computer Vision (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eECCV\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2022.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e18. Zeren Sun, Fumin Shen, Dan Huang, Qiong Wang, Xiangbo Shu,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e*, and Jinhui Tang, “PNP: Robust Learning from Noisy Labels by Probabilistic Noise Prediction”, IEEE Conference on Computer Vision and Pattern Recognition (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eCVPR\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2022.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e17. 
Zeren Sun,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e*, Xiu-Shen Wei*, Yongshun Zhang, Fumin Shen, Jianxin Wu, Jian Zhang, and Heng Tao Shen, “Webly Supervised Fine-Grained Recognition: Benchmark Datasets and An Approach”, IEEE International Conference on Computer Vision (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eICCV\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2021.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e16. Guo-Sen Xie, Jie Liu, Huan Xiong,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, and Ling Shao, “Few-Shot Semantic Segmentation with Cyclic Memory Network”, IEEE International Conference on Computer Vision (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eICCV\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2021.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e15.\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou 
Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Zeren Sun, Chuanyi Zhang, Fumin Shen, Qi Wu, Jian Zhang, Zhenmin Tang, “Jo-SRC: A Contrastive Approach for Combating Noisy Labels”, IEEE Conference on Computer Vision and Pattern Recognition (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eCVPR\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2021.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e14.\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e\u0026nbsp;Yazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Tao Chen, Guosen Xie, Chuanyi Zhang, Fumin Shen, Qi Wu, Zhenmin Tang, Jian Zhang, “Non-Salient Region Object Mining for Weakly Supervised Semantic Segmentation”, IEEE Conference on Computer Vision and Pattern Recognition (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eCVPR\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2021.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e13. 
Chuanyi Zhang,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e*, Xing Xu, Jie Shao, Jingkuan Song, Zechao Li, Zhenmin Tang, “Extracting Useful Knowledge from Noisy Web Images via Data Purification for Fine-Grained Recognition”, ACM International Conference on Multimedia (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eACM MM\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2021.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e12. Ji Zhang, Jingkuan Song,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Lianli Gao, “Curriculum-Based Meta-learning”, ACM International Conference on Multimedia (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eACM MM\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2021.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e11. 
Jingran Zhang, Xing Xu, Fumin Shen,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Jie Shao, Xiaofeng Zhu, “Video Representation Learning with Graph Contrastive Augmentation”, ACM International Conference on Multimedia (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eACM MM\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2021.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e10. Yifan Ren, Xing Xu, Fumin Shen,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Huimin Lu, “CAA: Candidate-Aware Aggregation for Temporal Action Detection”, ACM International Conference on Multimedia (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eACM MM\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2021.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e9. 
Guosen Xie, Li Liu, Fan Zhu, Fang Zhao, Zheng Zhang,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Jie Qin, Ling Shao, “Region Graph Embedding Network for Zero-Shot Learning”, European Conference on Computer Vision (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eECCV\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2020.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e8. Chuanyi Zhang,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e*, Xiangbo Shu, Zechao Li, Zhenmin Tang, Qi Wu, “Data-driven Meta-set Based Fine-Grained Visual Recognition”, ACM International Conference on Multimedia (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eACM MM\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2020.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e7. 
Zeren Sun, Xian-sheng Hua,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e*, Xiu-shen Wei, Guosheng Hu, Jian Zhang, “CRSSC: Salvage Reusable Samples from Noisy Data for Robust Learning”, ACM International Conference on Multimedia (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eACM MM\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2020.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e6. Benyi Hu, Renjie Song, Xiu-Shen Wei,\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e\u0026nbsp;Yazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Xiansheng Hua, and Yuehu Liu, “PyRetri: A PyTorch-based Library for Unsupervised Image Retrieval by Deep Convolutional Neural Networks”, ACM International Conference on Multimedia (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eACM MM\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2020.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e5.\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 
0px;\"\u003e\u0026nbsp;Yazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Xian-sheng Hua, Guanyu Gao, Zeren Sun, Zhibin Li, Jian Zhang, “Bridging the Web Data and Fine-Grained Visual Recognition via Alleviating Label Noise and Domain Mismatch”, ACM International Conference on Multimedia (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eACM MM\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2020.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e4. Zhibin Li, Jian Zhang, Yongshun Gong,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Qiang Wu, “Field-wise Learning for Multi-field Categorical Data”, Neural Information Processing Systems (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eNIPS\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2020.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e3. 
Haonan Luo, Guosheng Lin, Zichuan Liu, Fayao Liu, Zhenmin Tang, and\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, “SegEQA: Video Segmentation based Visual Attention for Embodied Question Answering”, IEEE International Conference on Computer Vision (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eICCV\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2019.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\";text-indent: 0;padding: 0;text-align: justify\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e2. Guo-Sen Xie, Li Liu, Xiao-Bo Jin, Fan Zhu, Zheng Zhang, Jie Qin,\u0026nbsp;\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eYazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, and Ling Shao, “Attentive Region Embedding Network for Zero-shot Learning”, IEEE Conference on Computer Vision and Pattern Recognition (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eCVPR\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2019.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin: 3px 0;text-indent: 0;padding: 0\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e1.\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 
0px;\"\u003e\u0026nbsp;Yazhou Yao\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e, Xiansheng Hua, Fumin Shen, Jian Zhang and Zhenmin Tang, “A Domain Robust Approach for Image Dataset Construction”, ACM Conference on Multimedia (\u003c/span\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003eACM MM\u003c/span\u003e\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(0, 0, 0); letter-spacing: 0px;\"\u003e), 2016.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:16px\"\u003e\u003c/span\u003e\u003c/p\u003e\u003chr/\u003e\u003cp style\u003d\"text-wrap: wrap;\"\u003e\u003cstrong\u003e学术兼职 | Academic:\u003c/strong\u003e\u003c/p\u003e\u003cp style\u003d\"text-wrap: wrap;\"\u003e\u003cspan style\u003d\"font-family:times new roman\"\u003eAssociate Editor: Pattern Recognition\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"text-wrap: wrap;\"\u003e\u003cspan style\u003d\"font-family:times new roman\"\u003eGuest Editor:\u0026nbsp;TMM\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"text-wrap: wrap;\"\u003e\u003cspan style\u003d\"font-family:times new roman\"\u003eArea Chair: ACM MM, ICME\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"text-wrap: wrap;\"\u003e\u003cspan style\u003d\"font-family:times new roman\"\u003ePC Members For Conferences: CVPR, ICCV, ECCV, ACM MM, NIPS, AAAI\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"text-wrap: wrap;\"\u003e\u003cspan style\u003d\"font-family:times new roman\"\u003eReviewers for Journals: TPAMI, TIP, TNNLS, TMM, TCSVT, TKDE\u003c/span\u003e\u003c/p\u003e","imgname":"2寸护照.jpg","imgdownname":"files/members/35f7bceb-6ff4-4282-8e2b-d30d5c3696e3.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/35f7bceb-6ff4-4282-8e2b-d30d5c3696e3.jpg","userid":1,"username":"admin","createtime":"2022-06-19 11:04:57","updatetime":"2026-03-05 
11:25:34","deletetime":"","flag":1,"index":2},{"id":91,"membername":"沈复民(长江学者)","roletype":1,"tutortype":3,"isboss":3,"major":"计算机视觉、多媒体技术、机器学习 | Computer Vision, Multimedia, Machine Learning","email":"","college":"计算机科学与工程学院","school":"电子科技大学","marks":"","imgname":"微信图片_20250317103939.jpg","imgdownname":"files/members/735e3dab-4134-4d6c-85e6-fe9a92ddab9d.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/735e3dab-4134-4d6c-85e6-fe9a92ddab9d.jpg","userid":1,"username":"admin","createtime":"2024-05-20 15:57:58","updatetime":"2025-06-24 11:47:59","deletetime":"","flag":1,"index":3},{"id":19,"membername":"孙泽人 | Zeren Sun","roletype":2,"tutortype":3,"isboss":2,"major":"计算机视觉、多媒体技术、标签噪声学习 | Computer Vision, Multimedia, Label Noise Learning","email":"zerens@njust.edu.cn","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"\u003cp\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85); font-variant-ligatures: normal; orphans: 2; white-space: pre-line; widows: 2; text-decoration-thickness: initial;\"\u003e\u003cstrong style\u003d\"\"\u003e\u003cspan style\u003d\"font-size:16px\"\u003e奖励荣誉 | Honors:\u003c/span\u003e\u003c/strong\u003e\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85); font-variant-ligatures: normal; font-weight: 700; orphans: 2; white-space: pre-line; widows: 2; text-decoration-thickness: initial;\"\u003e\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); font-variant-ligatures: normal; orphans: 2; widows: 2; text-decoration-thickness: initial; white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"font-size:16px\"\u003e1. 
入选江苏省333工程第三层次,2024\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); font-variant-ligatures: normal; orphans: 2; widows: 2; text-decoration-thickness: initial; white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"font-size:16px\"\u003e2. 入选江苏省卓越博士后计划,2022\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); font-variant-ligatures: normal; orphans: 2; widows: 2; text-decoration-thickness: initial; white-space: pre-line !important;\"\u003e3.\u0026nbsp;\u003cspan style\u003d\"color: rgb(85, 85, 85); white-space: pre-line;font-size:16px\"\u003e兵器工业集团技术发明奖,二等奖,2025\u0026nbsp;\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); font-variant-ligatures: normal; orphans: 2; widows: 2; text-decoration-thickness: initial; white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:16px\"\u003e4. 军委装备发展部,第一届“智算杯”智能计算基础平台挑战赛,三等奖,2020\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); font-variant-ligatures: normal; orphans: 2; widows: 2; text-decoration-thickness: initial; white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:16px\"\u003e5.\u0026nbsp;航天系统装备部,第三届“天智杯”人工智能挑战赛:亚米级可见光图像飞机目标细粒度智能识别赛道,优秀奖,2023\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); font-variant-ligatures: normal; orphans: 2; widows: 2; text-decoration-thickness: initial; white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:16px\"\u003e\u003cspan style\u003d\"box-sizing: border-box; white-space: pre-line; color: rgb(85, 85, 85);\"\u003e6. 
粤港澳大湾区国际算法算例大赛,“数据选择与标记校正算法设计”,三等奖\u003c/span\u003e\u003cspan style\u003d\"box-sizing: border-box; white-space: pre-line; color: rgb(85, 85, 85);\"\u003e,\u003c/span\u003e\u003cspan style\u003d\"box-sizing: border-box; white-space: pre-line; color: rgb(85, 85, 85);\"\u003e2022\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85); font-family: \u0026quot;Segoe UI\u0026quot;, \u0026quot;Lucida Grande\u0026quot;, Helvetica, Arial, \u0026quot;Microsoft YaHei\u0026quot;, FreeSans, Arimo, \u0026quot;Droid Sans\u0026quot;, \u0026quot;wenquanyi micro hei\u0026quot;, \u0026quot;Hiragino Sans GB\u0026quot;, \u0026quot;Hiragino Sans GB W3\u0026quot;, Roboto, Arial, sans-serif; font-size: 14px; font-variant-ligatures: normal; font-weight: 700; orphans: 2; white-space: pre-line; widows: 2; text-decoration-thickness: initial;\"\u003e\u003c/span\u003e\u003c/p\u003e\u003chr/\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); font-variant-ligatures: normal; orphans: 2; widows: 2; text-decoration-thickness: initial; font-weight: 700; white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:16px\"\u003e工作经历 | Work Experience:\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); font-variant-ligatures: normal; orphans: 2; widows: 2; text-decoration-thickness: initial; white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85); white-space: pre-line;font-size:16px\"\u003e2024.02 - 至今\u0026nbsp; \u0026nbsp; \u0026nbsp;:南京理工大学,副教授\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); font-variant-ligatures: normal; orphans: 2; widows: 2; text-decoration-thickness: initial; white-space: pre-line !important;\"\u003e\u003cspan 
style\u003d\"font-size:16px\"\u003e2021.12 - 2024.01:南京理工大学,博士后\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); font-variant-ligatures: normal; orphans: 2; widows: 2; text-decoration-thickness: initial; font-weight: 700; white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:16px\"\u003e学习经历 | Education Experience:\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); font-variant-ligatures: normal; orphans: 2; widows: 2; text-decoration-thickness: initial; white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:16px\"\u003e2016.09 - 2021.11:南京理工大学,博士,导师:唐振民\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); font-variant-ligatures: normal; orphans: 2; widows: 2; text-decoration-thickness: initial; white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:16px\"\u003e2014.09 - 2016.06:卡内基梅隆大学,MSRT(CMU-NUST双硕士联合培养项目),导师:Mel Siegel\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); font-variant-ligatures: normal; orphans: 2; widows: 2; text-decoration-thickness: initial; white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:16px\"\u003e2010.09 - 2014.06:南京理工大学,本科\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); font-family: \u0026quot;Segoe UI\u0026quot;, \u0026quot;Lucida Grande\u0026quot;, Helvetica, Arial, \u0026quot;Microsoft YaHei\u0026quot;, FreeSans, Arimo, \u0026quot;Droid Sans\u0026quot;, \u0026quot;wenquanyi micro hei\u0026quot;, \u0026quot;Hiragino Sans GB\u0026quot;, 
\u0026quot;Hiragino Sans GB W3\u0026quot;, Roboto, Arial, sans-serif; font-size: 14px; font-variant-ligatures: normal; orphans: 2; widows: 2; text-decoration-thickness: initial; font-weight: 700; white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;\"\u003e\u003c/span\u003e\u003c/p\u003e\u003chr/\u003e\u003cp\u003e\u003cstrong style\u003d\"box-sizing: border-box; color: rgb(85, 85, 85); font-variant-ligatures: normal; orphans: 2; white-space: pre-line; widows: 2; text-decoration-thickness: initial;\"\u003e\u003cspan style\u003d\"font-size:16px\"\u003e科研项目 | Fundings:\u003c/span\u003e\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e\u003cstrong style\u003d\"box-sizing: border-box; color: rgb(85, 85, 85); font-variant-ligatures: normal; orphans: 2; white-space: pre-line; widows: 2; text-decoration-thickness: initial;\"\u003e\u003cstrong style\u003d\"white-space: pre-line; box-sizing: border-box; color: rgb(85, 85, 85);\"\u003e\u003cspan style\u003d\"font-size:16px\"\u003e在研中:\u003c/span\u003e\u003c/strong\u003e\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"box-sizing: border-box; color: rgb(85, 85, 85); font-variant-ligatures: normal; orphans: 2; white-space: pre-line; widows: 2; text-decoration-thickness: initial;\"\u003e\u003cspan style\u003d\"box-sizing: border-box; white-space: pre-line; color: rgb(85, 85, 85); font-variant-ligatures: normal; orphans: 2; widows: 2; text-decoration-thickness: initial;font-size:16px\"\u003e3.\u0026nbsp;国防科技工业局,技术基础重点项目课题,\u0026quot;基于云架构XXX的智能识别与控制技术研究\u0026quot;,2025.01-2027.12,130万,主持\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"box-sizing: border-box; color: rgb(85, 85, 85); font-variant-ligatures: normal; orphans: 2; white-space: pre-line; widows: 2; text-decoration-thickness: initial;\"\u003e\u003cspan style\u003d\"box-sizing: border-box; white-space: pre-line; color: rgb(85, 85, 85); font-variant-ligatures: normal; orphans: 2; 
widows: 2; text-decoration-thickness: initial;font-size:16px\"\u003e2. 国家自然科学基金,青年基金,\u0026quot;真实标签噪声场景下的鲁棒图像识别方法研究\u0026quot;,2023.01-2025.12\u003cspan style\u003d\"color: rgb(85, 85, 85); white-space: pre-line;\"\u003e,\u003c/span\u003e30万,主持\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"box-sizing: border-box; color: rgb(85, 85, 85); font-variant-ligatures: normal; orphans: 2; white-space: pre-line; widows: 2; text-decoration-thickness: initial;\"\u003e\u003cspan style\u003d\"box-sizing: border-box; white-space: pre-line; color: rgb(85, 85, 85); font-variant-ligatures: normal; orphans: 2; widows: 2; text-decoration-thickness: initial;font-size:16px\"\u003e\u003cspan style\u003d\"box-sizing: border-box; color: rgb(85, 85, 85); white-space: pre-line;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;\"\u003e1. 江苏省自然科学基金,青年基金,\u003c/span\u003e\u003cspan style\u003d\"box-sizing: border-box;\"\u003e“开放场景下基于有限可靠标签的鲁棒图像识别研究”\u003c/span\u003e\u003c/span\u003e\u003cspan style\u003d\"box-sizing: border-box; color: rgb(85, 85, 85); white-space: pre-line;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;\"\u003e,2022.07-2025.06\u003cspan style\u003d\"color: rgb(85, 85, 85); white-space: pre-line;\"\u003e,\u003c/span\u003e20万,主持\u003c/span\u003e\u003c/span\u003e\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85);\"\u003e\u003cspan style\u003d\"white-space-collapse: preserve-breaks;\"\u003e\u003cstrong\u003e\u003cspan style\u003d\"font-size:16px\"\u003e已结题:\u003c/span\u003e\u003c/strong\u003e\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"box-sizing: border-box; color: rgb(85, 85, 85); font-variant-ligatures: normal; orphans: 2; white-space: pre-line; widows: 2; text-decoration-thickness: initial;\"\u003e\u003cspan style\u003d\"box-sizing: border-box; white-space: pre-line; color: rgb(85, 85, 85); font-variant-ligatures: normal; orphans: 
2; widows: 2; text-decoration-thickness: initial;\"\u003e\u003cspan style\u003d\"box-sizing: border-box; color: rgb(85, 85, 85); white-space: pre-line;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:16px\"\u003e1. 中国博士后科学基金,面上基金,“真实开放场景下基于不可靠图像标签的鲁棒图像识别方法研究”,已结题\u003cspan style\u003d\"color: rgb(85, 85, 85); white-space: pre-line;\"\u003e,\u003c/span\u003e8万,主持\u003c/span\u003e\u003c/span\u003e\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cstrong style\u003d\"box-sizing: border-box; color: rgb(85, 85, 85); font-family: \u0026quot;Segoe UI\u0026quot;, \u0026quot;Lucida Grande\u0026quot;, Helvetica, Arial, \u0026quot;Microsoft YaHei\u0026quot;, FreeSans, Arimo, \u0026quot;Droid Sans\u0026quot;, \u0026quot;wenquanyi micro hei\u0026quot;, \u0026quot;Hiragino Sans GB\u0026quot;, \u0026quot;Hiragino Sans GB W3\u0026quot;, Roboto, Arial, sans-serif; font-size: 14px; font-variant-ligatures: normal; orphans: 2; white-space: pre-line; widows: 2; text-decoration-thickness: initial;\"\u003e\u003c/strong\u003e\u003c/p\u003e\u003chr/\u003e\u003cp\u003e\u003cstrong style\u003d\"box-sizing: border-box; color: rgb(85, 85, 85); font-variant-ligatures: normal; orphans: 2; white-space: pre-line; widows: 2; text-decoration-thickness: initial;\"\u003e\u003cspan style\u003d\"font-size:16px\"\u003e学术论文 | Publications\u0026nbsp;\u003cstrong style\u003d\"white-space: pre-line; box-sizing: border-box; color: rgb(85, 85, 85);\"\u003e:\u003c/strong\u003e\u003c/span\u003e\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"box-sizing: border-box; font-variant-ligatures: normal; orphans: 2; white-space: pre-line; widows: 2; text-decoration-thickness: initial;\"\u003e\u003cstrong style\u003d\"\"\u003e\u003cspan style\u003d\"color: rgb(165, 165, 165);font-size:16px\"\u003e发表CCF-A类会议(含ECCV)与中科院一区期刊论文合计26篇,其中一作和通讯合计16篇\u003c/span\u003e\u003c/strong\u003e\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cstrong style\u003d\"box-sizing: border-box; 
color: rgb(85, 85, 85); font-family: \u0026quot;Segoe UI\u0026quot;, \u0026quot;Lucida Grande\u0026quot;, Helvetica, Arial, \u0026quot;Microsoft YaHei\u0026quot;, FreeSans, Arimo, \u0026quot;Droid Sans\u0026quot;, \u0026quot;wenquanyi micro hei\u0026quot;, \u0026quot;Hiragino Sans GB\u0026quot;, \u0026quot;Hiragino Sans GB W3\u0026quot;, Roboto, Arial, sans-serif; font-size: 14px; font-variant-ligatures: normal; orphans: 2; white-space: pre-line; widows: 2; text-decoration-thickness: initial;\"\u003e\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-family:arial, helvetica, sans-serif\"\u003e\u003cspan style\u003d\"font-size: 16px;\"\u003e\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"text-wrap-mode: nowrap;font-family:times new roman;font-size:16px\"\u003e26. Xinhao Cai, Gensheng Pei, \u003cstrong\u003eZeren Sun\u003c/strong\u003e, Yazhou Yao*, Fumin Shen, Wenguan Wang, \u0026quot;Iris: Bringing Real-World Priors into Diffusion Model for Monocular Depth Estimation\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2026\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"text-wrap-mode: nowrap;font-family:times new roman;font-size:16px\"\u003e25. Mengmeng Sheng, \u003cstrong\u003eZeren Sun*\u003c/strong\u003e, Tao Chen, Jinshan Pan, Yazhou Yao*, Fumin Shen, \u0026quot;Revisiting Learning with Noisy Labels: Active Forgetting and Noise Suppression\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2026\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"text-wrap-mode: nowrap;font-family:times new roman;font-size:16px\"\u003e24. 
Bo Zhou, Qiuxia Lai, \u003cstrong\u003eZeren Sun\u003c/strong\u003e, Xiangbo Shu, Yazhou Yao*, Wenguan Wang, \u0026quot;Learning 3D Representations for Spatial Intelligence from Unposed Multi-View Images\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2026\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"text-wrap-mode: nowrap;font-family:times new roman;font-size:16px\"\u003e23. Haowen Gu, Gensheng Pei, \u003cstrong\u003eZeren Sun\u003c/strong\u003e, Mingwu Ren, Xiangbo Shu, Yazhou Yao*, Fumin Shen, \u0026quot;MedFG-VQA: Low-Frequency Memory and Graph Attention for Lightweight Medical VQA\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2026\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin: 5px 0px; caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); white-space: normal;\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85);font-family:times new roman;;font-size:16px\"\u003e22.\u0026nbsp;\u003cstrong style\u003d\"\"\u003eZeren Sun\u003c/strong\u003e, Yazhou Yao*, Tongliang Liu, Zechao Li, Fumin Shen, and Jinhui Tang, \u0026quot;Jo-SNC: Combating Noisy Labels through Fostering Self- and Neighbor-Consistency\u0026quot;, IEEE Transactions on Pattern Analysis and Machine Intelligence (\u003cstrong style\u003d\"\"\u003eTPAMI\u003c/strong\u003e), 2025.\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin: 5px 0px; caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); white-space: normal;\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85);font-family:times new roman;;font-size:16px\"\u003e21. 
Junzhu Mao, \u003cstrong\u003eZeren Sun\u003c/strong\u003e, Yazhou Yao, Tianfei Zhou, Liqiang Nie, Xiansheng Hua, \u0026quot;NiCI-Pruning: Enhancing Diffusion Model Pruning via Noise in Clean Image Guidance\u0026quot;,\u0026nbsp;IEEE \u003cspan style\u003d\"color: rgb(85, 85, 85); font-family: \u0026quot;times new roman\u0026quot;; caret-color: rgb(0, 0, 0); text-wrap-mode: wrap;\"\u003eTransactions\u0026nbsp;\u003c/span\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85); font-family: \u0026quot;times new roman\u0026quot;; caret-color: rgb(0, 0, 0); text-wrap-mode: wrap;\"\u003eon\u0026nbsp;\u003c/span\u003eImage Processing (\u003cstrong\u003eTIP\u003c/strong\u003e), 2025.\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin: 5px 0px; caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); white-space: normal;\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85);font-family:times new roman;;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85); font-family: \u0026quot;times new roman\u0026quot;; caret-color: rgb(0, 0, 0); text-wrap-mode: wrap;\"\u003e20.\u0026nbsp;Junzhu Mao, \u003cstrong\u003eZeren Sun\u003c/strong\u003e, Yazhou Yao, Xiansheng Hua, Heng-Tao Shen, \u0026quot;Class Importance Consistency Matters: Efficient Model Pruning for Long-tailed Recognition Models\u0026quot;, IEEE Transactions on Multimedia (\u003cstrong\u003eTMM\u003c/strong\u003e), 2025.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin: 5px 0px; caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); white-space: normal;\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85);font-family:times new roman;font-size:16px\"\u003e19.\u0026nbsp;Mengmeng Sheng, \u003cstrong\u003eZeren Sun*\u003c/strong\u003e, Tianfei Zhou, Xiangbo Shu, Jinshan Pan, Yazhou Yao*, \u0026quot;CA2C: A Prior-Knowledge-Free Approach for Robust Label Noise Learning via Asymmetric Co-learning and Co-training\u0026quot;, IEEE International Conference on Computer Vision 
(\u003cstrong\u003eICCV\u003c/strong\u003e), 2025.\u0026nbsp;\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:16px;;font-family:times new roman\"\u003e18. Mengmeng Sheng, Shuai Yan, \u003cstrong style\u003d\"\"\u003eZeren Sun*\u003c/strong\u003e, Tao Chen, Huafeng Liu, Yazhou Yao, \u0026quot;Combating Noisy Labels in Knowledge Distillation for Efficient Edge Device Deployment\u0026quot;,\u0026nbsp;IEEE Transactions on Consumer Electronics (\u003cstrong\u003eTCE\u003c/strong\u003e), 2025.\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85);font-family:times new roman;font-size:16px\"\u003e17.\u0026nbsp;Mengmeng Sheng, \u003cstrong\u003eZeren Sun*\u003c/strong\u003e, Gensheng Pei, Tao Chen, Haonan Luo, Yazhou Yao*, \u0026quot;Enhancing Robustness in Learning with Noisy Labels: An Asymmetric Co-Training Approach\u0026quot;, ACM International Conference on Multimedia (\u003cstrong\u003eACM MM\u003c/strong\u003e), 2024.\u0026nbsp;\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin: 5px 0px; caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); white-space: normal;\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85);font-family:times new roman;font-size:16px\"\u003e16.\u0026nbsp;Mengmeng Sheng, \u003cstrong\u003eZeren Sun\u003cstrong style\u003d\"color: rgb(85, 85, 85); caret-color: rgb(0, 0, 0); text-wrap: wrap;\"\u003e*\u003c/strong\u003e\u003c/strong\u003e, Tao Chen, Shuchao Pang, Yucheng Wang, Yazhou Yao\u003cstrong style\u003d\"color: rgb(85, 85, 85); caret-color: rgb(0, 0, 0); text-wrap: wrap;\"\u003e*\u003c/strong\u003e, \u0026quot;Foster 
Adaptivity and Balance in Learning with Noisy Labels\u0026quot;, European Conference on Computer Vision (\u003cstrong\u003eECCV\u003c/strong\u003e), 2024.\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin: 5px 0px; caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); white-space: normal;\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85);font-family:times new roman;font-size:16px\"\u003e15.\u0026nbsp;Tao Chen, Xiruo Jiang, Gensheng Pei, \u003cstrong\u003eZeren Sun\u003c/strong\u003e, Yucheng Wang, Yazhou Yao, \u0026quot;Knowledge Transfer with Simulated Inter-Image Erasing for Weakly Supervised Semantic Segmentation\u0026quot;, European Conference on Computer Vision (\u003cstrong\u003eECCV\u003c/strong\u003e), 2024.\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin: 5px 0px; caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); white-space: normal;\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85);font-family:times new roman;font-size:16px\"\u003e14.\u0026nbsp;Gensheng Pei, Tao Chen, Xiruo Jiang, Huafeng Liu, \u003cstrong\u003eZeren Sun\u003cspan style\u003d\"color: rgb(85, 85, 85); caret-color: rgb(0, 0, 0); text-wrap: wrap;\"\u003e*\u003c/span\u003e\u003c/strong\u003e, Yazhou Yao*, \u0026quot;VideoMAC: Video Masked Autoencoders Meet ConvNets\u0026quot;, IEEE Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2024.\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin: 5px 0px; caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); white-space: normal;\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85);font-family:times new roman;font-size:16px\"\u003e13.\u0026nbsp;Xinhao Cai, Qiuxia Lai, Yuwei Wang, Wenguan Wang, \u003cstrong style\u003d\"\"\u003eZeren Sun\u003c/strong\u003e, Yazhou Yao*, \u0026quot;Poly Kernel Inception Network for Remote Sensing Detection\u0026quot;, IEEE Conference on Computer Vision and Pattern Recognition (\u003cstrong style\u003d\"\"\u003eCVPR\u003c/strong\u003e), 
2024.\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin: 5px 0px; caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); white-space: normal;\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85);font-family:times new roman;font-size:16px\"\u003e12.\u0026nbsp;Huafeng Liu, Mengmeng Sheng, \u003cstrong\u003eZeren Sun*\u003c/strong\u003e, Yazhou Yao*, Xian-Sheng Hua, and Heng-Tao Shen, \u0026quot;Learning with Imbalanced Noisy Data by Preventing Bias in Sample Selection\u0026quot;, IEEE Transactions on Multimedia (\u003cstrong\u003eTMM\u003c/strong\u003e), 2024.\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin: 5px 0px; caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); white-space: normal;\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85);font-family:times new roman;font-size:16px\"\u003e11.\u0026nbsp;Mengmeng Sheng, \u003cstrong\u003eZeren Sun\u003c/strong\u003e\u003cstrong\u003e*\u003c/strong\u003e, Zhenhuang Cai, Tao Chen, Yichao Zhou, Yazhou Yao*, \u0026quot;Adaptive Integration of Partial Label Learning and Negative Learning for Enhanced Noisy Label Learning\u0026quot;, AAAI Conference on Artificial Intelligence (\u003cstrong\u003eAAAI\u003c/strong\u003e), 2024.\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin: 5px 0px; caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); white-space: normal;\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85);font-family:times new roman;font-size:16px\"\u003e10.\u0026nbsp;Junzhu Mao, Yazhou Yao, \u003cstrong\u003eZeren Sun\u003c/strong\u003e, Xingguo Huang, Fumin Shen, and Heng-Tao Shen. 
\u0026quot;Attention Map Guided Transformer Pruning for Occluded Person Re-Identification on Edge Device\u0026quot;, IEEE Transactions on Multimedia (\u003cstrong\u003eTMM\u003c/strong\u003e), 2023.\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin: 5px 0px; caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); white-space: normal;\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85);font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85); caret-color: rgb(0, 0, 0); text-wrap-mode: wrap;\"\u003e9.\u0026nbsp;\u003c/span\u003e\u003cstrong style\u003d\"color: rgb(85, 85, 85); caret-color: rgb(0, 0, 0); text-wrap-mode: wrap;\"\u003eZeren Sun\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85); caret-color: rgb(0, 0, 0); text-wrap-mode: wrap;\"\u003e, Yazhou Yao, Xiu-Shen Wei, Fumin Shen, Huafeng Liu, and Xian-Sheng Hua. \u0026quot;Boosting Robust Learning via Leveraging Reusable Samples in Noisy Web Data\u0026quot;, IEEE Transactions on Multimedia (\u003c/span\u003e\u003cstrong style\u003d\"color: rgb(85, 85, 85); caret-color: rgb(0, 0, 0); text-wrap-mode: wrap;\"\u003eTMM\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85); caret-color: rgb(0, 0, 0); text-wrap-mode: wrap;\"\u003e), 2023.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin: 5px 0px; caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); white-space: normal;\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85);\"\u003e8. \u003c/span\u003e\u003cstrong style\u003d\"color: rgb(85, 85, 85);\"\u003eZeren Sun\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85);\"\u003e, Fumin Shen, Dan Huang, Qiong Wang, Xiangbo Shu, Yazhou Yao, and Jinhui Tang. 
\u0026quot;PNP: Robust Learning from Noisy Labels by Probabilistic Noise Prediction\u0026quot;, IEEE Conference on Computer Vision and Pattern Recognition (\u003c/span\u003e\u003cstrong style\u003d\"color: rgb(85, 85, 85);\"\u003eCVPR\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85);\"\u003e), 2022. (\u003c/span\u003e\u003cstrong style\u003d\"color: rgb(85, 85, 85);\"\u003eOral\u003c/strong\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85);\"\u003e)\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin: 5px 0px; caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); white-space: normal;\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85);font-family:times new roman;font-size:16px\"\u003e7. \u003cstrong\u003eZeren Sun\u003c/strong\u003e, Huafeng Liu, Qiong Wang, Tianfei Zhou, Qi Wu, and Zhenmin Tang. \u0026quot;Co-LDL: A Co-training-based Label Distribution Learning Method for Tackling Label Noise\u0026quot;, IEEE Transactions on Multimedia (\u003cstrong\u003eTMM\u003c/strong\u003e), 2022.\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin: 5px 0px; caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); white-space: normal;\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85);font-family:times new roman;font-size:16px\"\u003e6. \u003cstrong\u003eZeren Sun\u003c/strong\u003e, Yazhou Yao, Xiu-Shen Wei, Yongshun Zhang, Fumin Shen, Jianxin Wu, Jian Zhang, and Heng Tao Shen. \u0026quot;Webly Supervised Fine-Grained Recognition: Benchmark Datasets and An Approach\u0026quot;, IEEE International Conference on Computer Vision (\u003cstrong\u003eICCV\u003c/strong\u003e), 2021.\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin: 5px 0px; caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); white-space: normal;\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85);font-family:times new roman;font-size:16px\"\u003e5. Yazhou Yao#, \u003cstrong\u003eZeren Sun#*\u003c/strong\u003e, Chuanyi Zhang, Fumin Shen, Qi Wu, Jian Zhang, Zhenmin Tang. 
\u0026quot;Jo-SRC: A Contrastive Approach for Combating Noisy Labels\u0026quot;, IEEE Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2021.\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin: 5px 0px; caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); white-space: normal;\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85);font-family:times new roman;font-size:16px\"\u003e4. \u003cstrong\u003eZeren Sun\u003c/strong\u003e, Yazhou Yao, Jimin Xiao, Lei Zhang, Jian Zhang, Zhenmin Tang. \u0026quot;Exploiting Textual Queries for Dynamically Visual Disambiguation\u0026quot;, Pattern Recognition (\u003cstrong\u003ePR\u003c/strong\u003e), 2021.\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin: 5px 0px; caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); white-space: normal;\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85);font-family:times new roman;font-size:16px\"\u003e3. \u003cstrong\u003eZeren Sun\u003c/strong\u003e, Xian-Sheng Hua, Yazhou Yao, Xiu-Shen Wei, Guosheng Hu, Jian Zhang. \u0026quot;CRSSC: Salvage Reusable Samples from Noisy Data for Robust Learning\u0026quot;, ACM International Conference on Multimedia (\u003cstrong\u003eACMMM\u003c/strong\u003e), 2020. (\u003cstrong\u003eOral\u003c/strong\u003e)\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin: 5px 0px; caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); white-space: normal;\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85);font-family:times new roman;font-size:16px\"\u003e2. Yazhou Yao, Xian-Sheng Hua, Guanyu Gao, \u003cstrong\u003eZeren Sun\u003c/strong\u003e, Zhibin Li, Jian Zhang. 
\u0026quot;Bridging the Web Data and Fine-Grained Visual Recognition via Alleviating Label Noise and Domain Mismatch\u0026quot;, ACM International Conference on Multimedia (\u003cstrong\u003eACMMM\u003c/strong\u003e), 2020.\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin: 5px 0px; caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); white-space: normal;\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85);font-family:times new roman;;font-size:16px\"\u003e1. Yazhou Yao, \u003cstrong style\u003d\"\"\u003eZeren Sun\u003c/strong\u003e, Fumin Shen, Li Liu, Limin Wang, Fan Zhu, Lizhong Ding, Gangshan Wu, Ling Shao. \u0026quot;Dynamically Visual Disambiguation of Keyword-based Image Search\u0026quot;, International Joint Conference on Artificial Intelligence (\u003cstrong style\u003d\"\"\u003eIJCAI\u003c/strong\u003e), 2019.\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cstrong style\u003d\"box-sizing: border-box; color: rgb(85, 85, 85); font-family: \u0026quot;Segoe UI\u0026quot;, \u0026quot;Lucida Grande\u0026quot;, Helvetica, Arial, \u0026quot;Microsoft YaHei\u0026quot;, FreeSans, Arimo, \u0026quot;Droid Sans\u0026quot;, \u0026quot;wenquanyi micro hei\u0026quot;, \u0026quot;Hiragino Sans GB\u0026quot;, \u0026quot;Hiragino Sans GB W3\u0026quot;, Roboto, Arial, sans-serif; font-size: 14px; font-variant-ligatures: normal; orphans: 2; white-space: pre-line; widows: 2; text-decoration-thickness: initial;\"\u003e\u003c/strong\u003e\u003c/p\u003e\u003chr/\u003e\u003cp\u003e\u003cstrong style\u003d\"box-sizing: border-box; color: rgb(85, 85, 85); font-family: \u0026quot;Segoe UI\u0026quot;, \u0026quot;Lucida Grande\u0026quot;, Helvetica, Arial, \u0026quot;Microsoft YaHei\u0026quot;, FreeSans, Arimo, \u0026quot;Droid Sans\u0026quot;, \u0026quot;wenquanyi micro hei\u0026quot;, \u0026quot;Hiragino Sans GB\u0026quot;, \u0026quot;Hiragino Sans GB W3\u0026quot;, Roboto, Arial, sans-serif; font-variant-ligatures: normal; orphans: 2; white-space: pre-line; widows: 2; 
text-decoration-thickness: initial;\"\u003e\u003cspan style\u003d\"font-size:16px\"\u003e学术兼职 | Academic:\u003c/span\u003e\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:16px\"\u003e\u003cstrong style\u003d\"box-sizing: border-box; color: rgb(85, 85, 85); font-family: \u0026quot;Segoe UI\u0026quot;, \u0026quot;Lucida Grande\u0026quot;, Helvetica, Arial, \u0026quot;Microsoft YaHei\u0026quot;, FreeSans, Arimo, \u0026quot;Droid Sans\u0026quot;, \u0026quot;wenquanyi micro hei\u0026quot;, \u0026quot;Hiragino Sans GB\u0026quot;, \u0026quot;Hiragino Sans GB W3\u0026quot;, Roboto, Arial, sans-serif; font-variant-ligatures: normal; orphans: 2; white-space: pre-line; widows: 2; text-decoration-thickness: initial;\"\u003e\u003c/strong\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); font-variant-ligatures: normal; orphans: 2; widows: 2; text-decoration-thickness: initial; white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"font-size:16px\"\u003ePC Members For Conferences: CVPR, ICCV, ECCV, NeurIPS, ACM MM, AAAI, IJCAI\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); font-variant-ligatures: normal; orphans: 2; widows: 2; text-decoration-thickness: initial; white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"font-size:16px\"\u003eReviewers for Journals: TPAMI, TMM, TNNLS, TKDE, TCSVT\u003c/span\u003e\u003c/p\u003e","imgname":"戴眼镜证件照-白底.jpg","imgdownname":"files/members/f5f84492-fcd5-421f-a564-1a6b84dc53b6.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/f5f84492-fcd5-421f-a564-1a6b84dc53b6.jpg","userid":1,"username":"admin","createtime":"2022-06-19 11:39:30","updatetime":"2026-03-30 14:29:30","deletetime":"","flag":1,"index":4},{"id":21,"membername":"王琼 | Qiong 
Wang","roletype":2,"tutortype":2,"isboss":2,"major":"图像识别、图像分割、跨媒体检索","email":"wangq@njust.edu.cn","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"\u003cp\u003e\u003cspan style\u003d\"white-space: nowrap;\"\u003e\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cstrong\u003e奖励荣誉:\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e1. 军委装备发展部,第一届“智算杯”智能计算基础平台挑战赛,三等奖,2020\u003cbr/\u003e\u003c/p\u003e\u003cp\u003e2. 泰迪杯,第十届“泰迪杯”数据挖掘挑战赛,特等奖(冠军),2022\u003cbr/\u003e\u003c/p\u003e\u003cp\u003e3.\u0026nbsp;第三届“计图”人工智能挑战赛:语义分割赛道,一等奖(冠军),2023\u003c/p\u003e\u003cp\u003e4.\u0026nbsp;2024年获吴文俊人工智能科技进步奖二等奖\u003c/p\u003e\u003cp\u003e5.\u0026nbsp;2022年获中国指控学会科技进步奖一等奖\u003c/p\u003e\u003cp\u003e\u003cbr/\u003e\u003c/p\u003e\u003chr/\u003e\u003cp\u003e\u003cstrong\u003e\u003cspan style\u003d\"white-space: nowrap;\"\u003e工作经历:\u003c/span\u003e\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e2013.01 - 至今:\u0026nbsp; \u0026nbsp; 南京理工大学 计算机科学与工程学院,副教授\u003cbr/\u003e\u003c/p\u003e\u003cp\u003e2008.01-2013.05:南京理工大学 计算机科学与工程学院,讲师\u003c/p\u003e\u003cp\u003e2018.08-2019.08:纽卡斯尔大学 计算机系,访问学者\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"white-space: nowrap;\"\u003e\u003cstrong\u003e学习经历:\u003c/strong\u003e\u003c/span\u003e\u003c/p\u003e\u003cp\u003e2003.09 - 2008.01:南京理工大学 计算机科学与工程学院,模式识别与智能系统 博士学位\u003c/p\u003e\u003cp\u003e1999.09 - 2003.07:南京理工大学 计算机科学与工程学院,计算机科学与技术\u0026nbsp; \u0026nbsp; 学士学位\u003cbr/\u003e\u003c/p\u003e\u003chr/\u003e\u003cp\u003e\u003cspan style\u003d\"white-space: nowrap;\"\u003e\u003cstrong\u003e科研项目:\u003c/strong\u003e\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"text-wrap: nowrap;\"\u003e1.\u0026nbsp;军委科技委,国防基础加强计划技术领域基金项目,“面向XXX的高并发高速传输技术”,2024.01-2025.12,100万,主持\u003c/span\u003e\u003cbr/\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"text-wrap: 
nowrap;\"\u003e2.\u0026nbsp;装备发展部,共用技术项目子课题,“面向XX场景的嵌入式高性能智能计算技术”,2023.01-2025.12,60万,主持\u003c/span\u003e\u003cbr/\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"text-wrap: nowrap;\"\u003e3.\u0026nbsp;国防科技工业局,技术基础项目课题,“基于深度学习的XX科技可视化XX智能化处理技术研究”,2024.01-2024.12,30万,主持\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"white-space: nowrap;\"\u003e4.\u0026nbsp;战略支援部队航天系统部,天智杯转化应用项目,“遥感领域XXXX目标细粒度智能识别科目转化应用项目”,2023.08-2024.07,20万,主持\u003c/span\u003e\u003c/p\u003e\u003chr/\u003e\u003cp\u003e\u003cstrong\u003e学术论文 (\u003c/strong\u003e*通讯作者\u003cstrong\u003e):\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"white-space: normal;\"\u003e\u003cspan style\u003d\"white-space: normal;\"\u003e\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp\u003e1. Jianqiang Xu,\u0026nbsp;Chunying Song,\u0026nbsp;Chao Shi,\u0026nbsp;Huafeng Liu,\u0026nbsp;\u003cstrong\u003eQiong Wang\u003cspan style\u003d\"text-wrap-mode: wrap;\"\u003e*\u003c/span\u003e\u003c/strong\u003e: UncertainBEV: Uncertainty-aware BEV fusion for roadside 3D object detection.\u0026nbsp;Image Vis. Comput.\u0026nbsp;159:\u0026nbsp;105567\u0026nbsp;(2025)\u003c/p\u003e\u003cp\u003e2. Yin Tang, Rui Chen, Gensheng Pei,\u0026nbsp;\u003cstrong\u003eQiong Wang\u003cspan style\u003d\"text-wrap-mode: wrap;\"\u003e*\u003c/span\u003e\u003c/strong\u003e: PASS-SAM: Integration of Segment Anything Model for Large-Scale Unsupervised Semantic Segmentation. \u0026nbsp;Computational Visual Media, vol. 11, no. 3, pp. 669-674 (2025)\u003c/p\u003e\u003cp\u003e3. Tingting Li,\u0026nbsp;Gensheng Pei,\u0026nbsp;Xinhao Cai,\u0026nbsp;\u003cstrong\u003eQiong Wang\u003cspan style\u003d\"text-wrap-mode: wrap;\"\u003e*\u003c/span\u003e\u003c/strong\u003e,\u0026nbsp;Huafeng Liu,\u0026nbsp;Yazhou Yao:\u0026nbsp;Universal Organizer of Segment Anything Model for Unsupervised Semantic Segmentation.\u0026nbsp;ICME\u0026nbsp;2024:\u0026nbsp;1-6\u003c/p\u003e\u003cp\u003e4. 
Tao Chen, Yazhou Yao, Lei Zhang, \u003cstrong\u003eQiong Wang\u003c/strong\u003e, Guo-Sen Xie, Fumin Shen:\u0026nbsp;Saliency Guided Inter- and Intra-Class Relation Constraints for Weakly Supervised Semantic Segmentation. \u003cspan style\u003d\"text-wrap-mode: wrap;\"\u003e\u0026nbsp;IEEE Transactions on Multimedia (\u003c/span\u003e\u003cstrong style\u003d\"text-wrap-mode: wrap;\"\u003eTMM\u003c/strong\u003e\u003cspan style\u003d\"text-wrap-mode: wrap;\"\u003e)\u003c/span\u003e. 25: 1727-1737 (2023)\u003c/p\u003e\u003cp\u003e5. Chuanyi Zhang, Guosheng Lin, \u003cstrong\u003eQiong Wang\u003c/strong\u003e, Fumin Shen, Yazhou Yao, Zhenmin Tang:\u0026nbsp;Guided by Meta-Set: A Data-Driven Method for Fine-Grained Visual Recognition. \u003cspan style\u003d\"text-wrap-mode: wrap;\"\u003eIEEE Transactions on Multimedia (\u003c/span\u003e\u003cstrong style\u003d\"text-wrap-mode: wrap;\"\u003eTMM\u003c/strong\u003e\u003cspan style\u003d\"text-wrap-mode: wrap;\"\u003e)\u003c/span\u003e 25: 4691-4703 (2023)\u003c/p\u003e\u003cp\u003e6. Huafeng Liu,\u0026nbsp;Pai Peng,\u0026nbsp;Tao Chen,\u0026nbsp;\u003cstrong\u003eQiong Wang\u003c/strong\u003e,\u0026nbsp;Yazhou Yao,\u0026nbsp;Xian-Sheng Hua: FECANet: Boosting Few-Shot Semantic Segmentation With Feature-Enhanced Context-Aware Network.\u0026nbsp;\u003cspan style\u003d\"text-wrap-mode: wrap;\"\u003eIEEE Transactions on Multimedia (\u003c/span\u003e\u003cstrong style\u003d\"text-wrap-mode: wrap;\"\u003eTMM\u003c/strong\u003e\u003cspan style\u003d\"text-wrap-mode: wrap;\"\u003e)\u003c/span\u003e\u0026nbsp;25:\u0026nbsp;8580-8592\u0026nbsp;(2023)\u003c/p\u003e\u003cp\u003e7. Rui Chen, Tao Chen, \u003cstrong\u003eQiong Wang\u003cspan style\u003d\"text-wrap-mode: wrap;\"\u003e*\u003c/span\u003e\u003c/strong\u003e, Yazhou Yao:\u0026nbsp;Semi-Supervised Semantic Segmentation With Region Relevance. ICME 2023: 852-857\u003c/p\u003e\u003cp\u003e8. 
Tao Chen, \u003cstrong\u003eQiong Wang*\u003c/strong\u003e, Lei Zhang, Yazhou Yao, Guosen Xie, and Fumin Shen, “Saliency Guided Inter- and Intra-Class Relation Constraints for Weakly Supervised Semantic Segmentation”, IEEE Transactions on Multimedia (\u003cstrong\u003eTMM\u003c/strong\u003e), 2022.\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"white-space: normal;\"\u003e\u003cspan style\u003d\"white-space: normal;\"\u003e\u003cspan style\u003d\"white-space: normal;\"\u003e9. Chuanyi Zhang, \u003cstrong\u003eQiong Wang*\u003c/strong\u003e, Guosen Xie, Qi Wu, Fumin Shen and Zhenmin Tang.\u0026nbsp; \u0026quot;Robust Learning from Noisy Web Images via Data Purification for Fine-Grained Recognition\u0026quot;, IEEE Transactions on Multimedia \u003cspan style\u003d\"white-space: normal;\"\u003e(\u003cstrong\u003eTMM\u003c/strong\u003e)\u003c/span\u003e, 2022.\u0026nbsp;\u0026nbsp;\u003c/span\u003e\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"white-space: normal;\"\u003e10. Zeren Sun, Huafeng Liu, \u003cstrong\u003eQiong Wang*\u003c/strong\u003e, Tianfei Zhou, Qi Wu and Zhenmin Tang. \u0026quot;Co-LDL: A Co-training-based Label Distribution Learning Method for Tackling Label Noise\u0026quot;,\u0026nbsp;IEEE Transactions on Multimedia (\u003cstrong\u003eTMM\u003c/strong\u003e), 2022.\u003c/span\u003e\u003c/p\u003e\u003cp\u003e11. Chuanyi Zhang, Guosheng Lin, \u003cstrong\u003eQiong Wang*\u003c/strong\u003e, Fumin Shen, Yazhou Yao, and Zhenmin Tang, \u0026quot;Guided by Meta-set: A Data-driven Method for Fine-Grained Visual Recognition\u0026quot;, IEEE Transactions on Multimedia (\u003cstrong\u003eTMM\u003c/strong\u003e), 2022.\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"white-space: normal;\"\u003e12. Tao Chen, Shuihua Wang, \u003cstrong\u003eQiong Wang\u003c/strong\u003e, Zheng Zhang, Guosen Xie and Zhenmin Tang. 
“\u003c/span\u003e\u003cspan style\u003d\"white-space: normal;\"\u003eEnhanced Feature Alignment for Unsupervised Domain Adaptation of Semantic Segmentation\u003c/span\u003e\u003cspan style\u003d\"white-space: normal;\"\u003e”,\u0026nbsp; IEEE Transactions on Multimedia \u003cspan style\u003d\"white-space: normal;\"\u003e(\u003cstrong\u003eTMM\u003c/strong\u003e)\u003c/span\u003e, 2022.\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"white-space: normal;\"\u003e\u003cspan style\u003d\"white-space: normal;\"\u003e13. Tao Chen, Guosen Xie, Yazhou Yao, \u003cstrong\u003eQiong Wang\u003c/strong\u003e, Fumin Shen, Zhenmin Tang, and Jian Zhang, “Semantically Meaningful Class Prototype Learning for One-Shot Image Segmentation”, IEEE Transactions on Multimedia (\u003cstrong\u003eTMM\u003c/strong\u003e), 2021.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"white-space: normal;\"\u003e\u003cspan style\u003d\"white-space: normal;\"\u003e\u003cspan style\u003d\"white-space: normal;\"\u003e14. Zeren Sun, Fumin Shen, Dan Huang,\u0026nbsp;\u003c/span\u003e\u003cstrong style\u003d\"white-space: normal;\"\u003eQiong Wang\u003c/strong\u003e\u003cspan style\u003d\"white-space: normal;\"\u003e, Xiangbo Shu, Yazhou Yao, and Jinhui Tang, “PNP: Robust Learning from Noisy Labels by Probabilistic Noise Prediction”, IEEE Conference on Computer Vision and Pattern Recognition (\u003c/span\u003e\u003cstrong style\u003d\"white-space: normal;\"\u003eCVPR\u003c/strong\u003e\u003cspan style\u003d\"white-space: normal;\"\u003e), 2022.\u003c/span\u003e\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp\u003e15. Chen Wang, Yazhou Yao, \u003cstrong\u003eQiong Wang*\u003c/strong\u003e, Zhenmin Tang,\u0026nbsp;Local Self-Attention on Fine-grained Cross-media Retrieval, ACM Conference on Multimedia, Asia, 2021.\u003c/p\u003e\u003cp\u003e16. 
\u003cstrong\u003eQiong Wang\u003c/strong\u003e, Youdong Guo , Yazhou Yao, DBFC-Net: a uniform framework for fine-grained cross-media retrieval. Multimedia Systems, 2021.\u003c/p\u003e\u003cp\u003e\u003cstrong\u003e学术著作:\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e1. 《人工智能 - 智能机器人》,\u0026nbsp;电子工业出版社,陆建峰 王琼等,\u003cspan style\u003d\"white-space: normal;\"\u003e2020.6.\u003c/span\u003e\u003c/p\u003e","imgname":"1.jpg","imgdownname":"files/members/f2da1033-2d02-42fd-86e9-fc0634eb2da4.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/f2da1033-2d02-42fd-86e9-fc0634eb2da4.jpg","userid":1,"username":"admin","createtime":"2022-06-19 14:39:10","updatetime":"2025-09-21 21:38:35","deletetime":"","flag":1,"index":5},{"id":23,"membername":"陈涛 | Tao Chen","roletype":2,"tutortype":3,"isboss":2,"major":"计算机视觉、语义分割、弱监督学习|Computer Vision, Semantic Segmentation, Weakly Supervised Learning","email":"taochen@njust.edu.cn","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"\u003cp style\u003d\"box-sizing: border-box; font-weight: 700; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:18px\"\u003e奖励荣誉 | Honors:\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"font-size:18px\"\u003e1. 江苏省卓越博士后计划,2022\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85); white-space: pre-line;font-size:18px\"\u003e2. 
军委装备发展部,第一届“智算杯”智能计算基础平台挑战赛,三等奖,2020\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85); white-space: pre-line;font-size:18px\"\u003e3.\u0026nbsp;航天系统装备部,第三届“天智杯”人工智能挑战赛:亚米级SAR图像飞机目标细粒度智能识别赛道,优秀奖,2023\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"font-size:18px\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85); white-space: pre-line;\"\u003e4. 粤港澳大湾区国际算法算例大赛,“遥感图像物体目标检测”,一等奖(冠军)\u003c/span\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85); white-space: pre-line;\"\u003e,\u003c/span\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85); white-space: pre-line;\"\u003e2022\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"font-size:18px\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85); white-space: pre-line;\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85); font-family: \u0026quot;Segoe UI\u0026quot;, \u0026quot;Lucida Grande\u0026quot;, Helvetica, Arial, \u0026quot;Microsoft YaHei\u0026quot;, FreeSans, Arimo, \u0026quot;Droid Sans\u0026quot;, \u0026quot;wenquanyi micro hei\u0026quot;, \u0026quot;Hiragino Sans GB\u0026quot;, \u0026quot;Hiragino Sans GB W3\u0026quot;, Roboto, Arial, sans-serif; white-space: pre-line;\"\u003e5.\u0026nbsp;第三届“计图”人工智能挑战赛:语义分割赛道,\u003c/span\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85); font-family: \u0026quot;Segoe UI\u0026quot;, \u0026quot;Lucida Grande\u0026quot;, Helvetica, Arial, \u0026quot;Microsoft YaHei\u0026quot;, FreeSans, Arimo, \u0026quot;Droid Sans\u0026quot;, \u0026quot;wenquanyi micro hei\u0026quot;, 
\u0026quot;Hiragino Sans GB\u0026quot;, \u0026quot;Hiragino Sans GB W3\u0026quot;, Roboto, Arial, sans-serif; white-space: pre-line;\"\u003e二等奖(\u003c/span\u003e\u003cspan style\u003d\"box-sizing: border-box; white-space: pre-line; color: rgb(85, 85, 85); font-family: \u0026quot;Segoe UI\u0026quot;, \u0026quot;Lucida Grande\u0026quot;, Helvetica, Arial, \u0026quot;Microsoft YaHei\u0026quot;, FreeSans, Arimo, \u0026quot;Droid Sans\u0026quot;, \u0026quot;wenquanyi micro hei\u0026quot;, \u0026quot;Hiragino Sans GB\u0026quot;, \u0026quot;Hiragino Sans GB W3\u0026quot;, Roboto, Arial, sans-serif;\"\u003e亚军\u003c/span\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85); font-family: \u0026quot;Segoe UI\u0026quot;, \u0026quot;Lucida Grande\u0026quot;, Helvetica, Arial, \u0026quot;Microsoft YaHei\u0026quot;, FreeSans, Arimo, \u0026quot;Droid Sans\u0026quot;, \u0026quot;wenquanyi micro hei\u0026quot;, \u0026quot;Hiragino Sans GB\u0026quot;, \u0026quot;Hiragino Sans GB W3\u0026quot;, Roboto, Arial, sans-serif; white-space: pre-line;\"\u003e),2023\u003c/span\u003e\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003c/p\u003e\u003chr/\u003e\u003cp style\u003d\"margin-top: 0px; margin-bottom: 11px; box-sizing: border-box; font-weight: 700; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:18px\"\u003e工作经历 | Work Experience:\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin-top: 0px; margin-bottom: 11px; box-sizing: border-box; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"font-size:18px\"\u003e2025.11 - 至今:\u0026nbsp; \u0026nbsp; \u0026nbsp; \u0026nbsp; \u0026nbsp;南京理工大学,副教授\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin-top: 0px; margin-bottom: 11px; box-sizing: border-box; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"font-size:18px\"\u003e2022.04 - 2025.10:\u0026nbsp; 
南京理工大学,博士后\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin-top: 0px; margin-bottom: 11px; box-sizing: border-box; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:18px\"\u003e学习经历 | Education Experience:\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin-top: 0px; margin-bottom: 11px; box-sizing: border-box; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"font-size:18px\"\u003e2015.09 – 2022.03:南京理工大学,硕博,导师:唐振民\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin-top: 0px; margin-bottom: 11px; box-sizing: border-box; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"font-size:18px\"\u003e2011.09 – 2015.06:\u003cspan style\u003d\"color: rgb(85, 85, 85); white-space: pre-line;\"\u003e南京理工大学\u003c/span\u003e,本科\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin-top: 0px; margin-bottom: 11px; box-sizing: border-box; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"font-size:18px\"\u003e访学经历 | Visiting Experience\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin-top: 0px; margin-bottom: 11px; box-sizing: border-box; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"font-size:18px\"\u003e2018.11 – 2019.11:悉尼科技大学,联培,外导:张健\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"margin-top: 0px; margin-bottom: 11px; box-sizing: border-box; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"font-size:18px\"\u003e2014.08 – 2015.01:亚利桑那州立大学,本科插班生\u003c/span\u003e\u003c/p\u003e\u003chr/\u003e\u003cp style\u003d\"box-sizing: border-box; font-weight: 700; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:18px\"\u003e科研项目 | Fundings:\u003c/span\u003e\u003c/p\u003e\u003cp 
style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"font-size:18px\"\u003e3. 国家自然科学基金,青年基金,“面向弱监督语义分割的伪标签优化方法研究”,30万,主持\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"font-size:18px\"\u003e2. 江苏省自然科学基金,青年基金,“基于联合显著图的弱监督语义分割方法研究”,20万,主持\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"font-size:18px\"\u003e1.\u0026nbsp;中国电子科技集团智能科技研究院,横向项目,“图像识别插件”,2024.06-2024.07,33.5万,主持\u003c/span\u003e\u003c/p\u003e\u003chr/\u003e\u003cp style\u003d\"box-sizing: border-box; font-weight: 700; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:18px\"\u003e学术论文 | Publications:\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:16px;font-family:times new roman\"\u003e\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"text-wrap-mode: nowrap;font-family:times new roman;font-size:16px\"\u003e24.\u0026nbsp;Jianjian Yin, Xiruo Jiang, \u003cstrong\u003eTao Chen*\u003c/strong\u003e, Gensheng Pei, Yazhou Yao*, Fumin Shen, Heng-Tao Shen, \u0026quot;DepMatch: Boosting Semi-supervised Semantic Segmentation by Exploring Depth Difference Knowledge\u0026quot;, IEEE Transactions on Image Processing (\u003cstrong\u003eTIP\u003c/strong\u003e), 2026\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"text-wrap-mode: nowrap;font-family:times new 
roman;font-size:16px\"\u003e23. Gensheng Pei, Xiruo Jiang, Xinhao Cai, \u003cstrong\u003eTao Chen\u003c/strong\u003e, Yazhou Yao*, Byeungwoo Jeon, \u0026quot;PEARL: Geometry Aligns Semantics for Training-Free Open-Vocabulary Semantic Segmentation\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2026\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"text-wrap-mode: nowrap;font-family:times new roman;font-size:16px\"\u003e22. Jianjian Yin, \u003cstrong\u003eTao Chen\u003c/strong\u003e, Yi Chen, Gensheng Pei, Xiangbo Shu, Yazhou Yao*, Fumin Shen, \u0026quot;PCA-Seg: Revisiting Cost Aggregation for Open-Vocabulary Semantic and Part Segmentation\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2026\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"text-wrap-mode: nowrap;font-family:times new roman;font-size:16px\"\u003e21. Zhenyu Yang, Gensheng Pei, \u003cstrong\u003eTao Chen\u003c/strong\u003e, Yichao Zhou, Tianfei Zhou, Yazhou Yao*, Fumin Shen, \u0026quot;Efficiency Follows Global-Local Decoupling\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2026\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e20. 
Mengmeng Sheng, Zeren Sun, \u003cstrong\u003eTao Chen\u003c/strong\u003e, Jinshan Pan, Yazhou Yao*, Fumin Shen, \u0026quot;Revisiting Learning with Noisy Labels: Active Forgetting and Noise Suppression\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2026\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:16px;font-family:times new roman\"\u003e19.\u0026nbsp;Xinhao Cai, Liulei Li, Gensheng Pei, \u003cstrong\u003eTao Chen\u003c/strong\u003e, Jinshan Pan, Yazhou Yao*, Wenguan Wang*, \u0026quot;Beyond Frequency: Scoring-Driven Debiasing for Object Detection via Blueprint-Prompted Image Synthesis\u0026quot;, International Conference on Learning Representations (\u003cstrong\u003eICLR\u003c/strong\u003e), 2026.\u003cbr/\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:16px;font-family:times new roman\"\u003e18.\u0026nbsp;Gensheng Pei, \u003cstrong style\u003d\"\"\u003eTao Chen\u003c/strong\u003e, Yujia Wang, Xinhao Cai, Xiangbo Shu, Tianfei Zhou, Yazhou Yao*, \u0026quot;Seeing What Matters: Empowering CLIP with Patch Generation-to-Selection\u0026quot;, IEEE Conference on Computer Vision and Pattern Recognition (\u003cstrong style\u003d\"\"\u003eCVPR\u003c/strong\u003e), 2025.\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:16px;font-family:times new roman\"\u003e17.\u0026nbsp;Jianjian Yin, \u003cstrong\u003eTao Chen\u003c/strong\u003e*, Gensheng 
Pei, Yazhou Yao*, Liqiang Nie, Xiansheng Hua, \u0026quot;Semi-supervised Semantic Segmentation with Multi-Constraint Consistency Learning\u0026quot;, IEEE Transactions on Multimedia (\u003cstrong\u003eTMM\u003c/strong\u003e), 2025.\u0026nbsp;\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:16px;font-family:times new roman\"\u003e16.\u0026nbsp;Mengmeng Sheng, Zeren Sun, Gensheng Pei, \u003cstrong\u003eTao Chen\u003c/strong\u003e, Haonan Luo, Yazhou Yao, \u0026quot;Enhancing Robustness in Learning with Noisy Labels: An Asymmetric Co-Training Approach\u0026quot;, ACM International Conference on Multimedia (\u003cstrong\u003eACM MM\u003c/strong\u003e), 2024.\u0026nbsp;\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:16px;font-family:times new roman\"\u003e15.\u0026nbsp;Mengmeng Sheng, Zeren Sun, \u003cstrong\u003eTao Chen\u003c/strong\u003e, Shuchao Pang, Yucheng Wang, Yazhou Yao, \u0026quot;Foster Adaptivity and Balance in Learning with Noisy Labels\u0026quot;, European Conference on Computer Vision (\u003cstrong\u003eECCV\u003c/strong\u003e), 2024.\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:16px;font-family:times new roman\"\u003e14.\u0026nbsp;\u003cstrong\u003eTao Chen\u003c/strong\u003e, XiRuo Jiang, Gensheng Pei, Zeren Sun, Yucheng Wang, Yazhou Yao, \u0026quot;Knowledge Transfer with Simulated Inter-Image Erasing for Weakly Supervised Semantic Segmentation\u0026quot;, European Conference on Computer Vision 
(\u003cstrong\u003eECCV\u003c/strong\u003e), 2024.\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:16px;font-family:times new roman\"\u003e13.\u0026nbsp;Gensheng Pei, \u003cstrong\u003eTao Chen\u003c/strong\u003e, Xiruo Jiang, Huafeng Liu, Zeren Sun, Yazhou Yao*, \u0026quot;VideoMAC: Video Masked Autoencoders Meet ConvNets\u0026quot;, IEEE Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2024.\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:16px;font-family:times new roman\"\u003e12.\u0026nbsp;\u003cstrong\u003eTao Chen\u003c/strong\u003e, Yazhou Yao, Xingguo Huang, Zechao Li, Liqiang Nie and Jinhui Tang,\u0026nbsp;\u0026quot;Spatial Structure Constraints for Weakly Supervised Semantic Segmentation\u0026quot;, IEEE Transactions on Image Processing (\u003cstrong\u003eTIP\u003c/strong\u003e), 2024.\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:16px;font-family:times new roman\"\u003e11.\u0026nbsp;Mengmeng Sheng, Zeren Sun, Zhenhuang Cai, \u003cstrong\u003eTao Chen\u003c/strong\u003e, Yichao Zhou, Yazhou Yao*, \u0026quot;Adaptive Integration of Partial Label Learning and Negative Learning for Enhanced Noisy Label Learning\u0026quot;, AAAI Conference on Artificial Intelligence (\u003cstrong\u003eAAAI\u003c/strong\u003e), 2024.\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); 
white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:16px;font-family:times new roman\"\u003e10.\u0026nbsp;Gensheng Pei, Fumin Shen, Yazhou Yao, \u003cstrong\u003eTao Chen\u003c/strong\u003e, Xian-Sheng Hua, and Heng-Tao Shen, \u0026quot;Hierarchical Graph Pattern Understanding for Zero-Shot Video Object Segmentation\u0026quot;, IEEE Transactions on Image Processing (\u003cstrong\u003eTIP\u003c/strong\u003e), 2023.\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:16px;font-family:times new roman\"\u003e9.\u0026nbsp;Yin Tang#,\u003cstrong\u003e Tao Chen#\u003c/strong\u003e, Xiruo Jiang, Yazhou Yao, Guo-Sen Xie, and Heng-Tao Shen, \u0026quot;Holistic Prototype Attention Network for Few-shot Video Object Segmentation\u0026quot;, IEEE Transactions on Circuits and Systems for Video Technology (\u003cstrong\u003eTCSVT\u003c/strong\u003e), 2023.\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:16px;font-family:times new roman\"\u003e8. 
\u003cstrong\u003eTao Chen\u003c/strong\u003e, Yazhou Yao, and Jinhui Tang, \u0026quot;Multi-Granularity Denoising and Bidirectional Alignment for Weakly Supervised Semantic Segmentation\u0026quot;,\u0026nbsp;IEEE Transactions on Image Processing (\u003cstrong\u003eTIP\u003c/strong\u003e), 2023.\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:16px;font-family:times new roman\"\u003e7.\u0026nbsp;\u003cspan style\u003d\"color: rgb(85, 85, 85);\"\u003eYazhou Yao#,\u0026nbsp;\u003c/span\u003e\u003cstrong style\u003d\"color: rgb(85, 85, 85);\"\u003eTao Chen#\u003c/strong\u003e\u003cstrong\u003e\u003c/strong\u003e, Hanbo Bi, Xinhao Cai, Gensheng Pei, Guoye Yang, Zhiyuan Yan, Xian Sun, Xing Xu, and Hai Zhang, \u0026quot;Automated Object Recognition in High-resolution Optical Remote Sensing Imagery\u0026quot;, National Science Review (\u003cstrong\u003eNSR\u003c/strong\u003e), 2023 (\u003cspan style\u003d\"box-sizing: border-box; font-weight: 700; color: rgb(85, 85, 85); white-space: pre-line;\"\u003eImpact Factor\u003c/span\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85); white-space: pre-line;\"\u003e:23.178\u003c/span\u003e)\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:16px;font-family:times new roman\"\u003e\u003cspan style\u003d\"text-wrap: wrap;\"\u003e6.\u0026nbsp;\u003c/span\u003e\u003cspan style\u003d\"text-wrap: wrap;\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85); white-space: pre-line;\"\u003eHuafeng Liu\u003c/span\u003e\u003c/span\u003e\u003cspan style\u003d\"text-wrap: wrap;\"\u003e, Pai Peng, \u003cstrong\u003eTao Chen\u003cspan style\u003d\"color: rgb(85, 85, 85); white-space: 
pre-line;\"\u003e*\u003c/span\u003e\u003c/strong\u003e, Qiong Wang, Yazhou Yao*, and Xian-Sheng Hua, \u0026quot;FECANet: Boosting Few-Shot Semantic Segmentation with Feature-Enhanced Context-Aware Network\u0026quot;, IEEE Transactions on Multimedia (\u003c/span\u003e\u003cstrong style\u003d\"text-wrap: wrap;\"\u003eTMM\u003c/strong\u003e\u003cspan style\u003d\"text-wrap: wrap;\"\u003e), 2023.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box; white-space: nowrap;font-size:16px;font-family:times new roman\"\u003e5. \u003cstrong\u003eTao Chen\u003c/strong\u003e, Qiong Wang, Lei Zhang, Yazhou Yao*, Guosen Xie, and Fumin Shen, “Saliency Guided Inter- and Intra-Class Relation Constraints for Weakly Supervised Semantic Segmentation”, IEEE Transactions on Multimedia (\u003cstrong\u003eTMM\u003c/strong\u003e), 2022.\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85); white-space: pre-line;font-size:16px;font-family:times new roman\"\u003e4. \u003cstrong\u003eTao Chen\u003c/strong\u003e, Guosen Xie, Yazhou Yao*, Qiong Wang, Fumin Shen, Zhenmin Tang, and Jian Zhang, “Semantically Meaningful Class Prototype Learning for One-Shot Image Segmentation”, IEEE Transactions on Multimedia (\u003cstrong\u003eTMM\u003c/strong\u003e), 2021.\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box; white-space: nowrap;font-size:16px;font-family:times new roman\"\u003e3. 
Yazhou Yao#, \u003cstrong\u003eTao Chen#\u003c/strong\u003e, Guosen Xie, Chuanyi Zhang, Fumin Shen, Qi Wu, Zhenmin Tang, Jian Zhang, “Non-Salient Region Object Mining for Weakly Supervised Semantic Segmentation”, IEEE Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2021.\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box; white-space: nowrap;font-size:16px;font-family:times new roman\"\u003e2. \u003cstrong\u003eTao Chen\u003c/strong\u003e, Shui-Hua Wang, Qiong Wang, Zheng Zhang, Guosen Xie and Zhenmin Tang.\u0026quot;Enhanced Feature Alignment for Unsupervised Domain Adaptation of Semantic Segmentation\u0026quot;,\u0026nbsp;\u003cspan style\u003d\"color: rgb(85, 85, 85); white-space: pre-line;\"\u003eIEEE Transactions on Multimedia (\u003cstrong\u003eTMM\u003c/strong\u003e), 2021.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box; white-space: nowrap;\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85); white-space: pre-line;font-size:16px;font-family:times new roman\"\u003e1. \u003cstrong style\u003d\"\"\u003eTao Chen\u003c/strong\u003e, Jian Zhang, Guosen Xie, Yazhou Yao, Xiaoshui Huang, Zhenmin Tang. 
\u0026quot;Classification Constrained Discriminator For Domain Adaptive\u0026nbsp;\u003cspan style\u003d\"color: rgb(85, 85, 85);\"\u003eSemantic Segmentation\u0026quot;, IEEE International Conference on \u003cspan style\u003d\"color: rgb(85, 85, 85); white-space: pre-line;\"\u003eMultimedia\u0026nbsp; and Expo (\u003cstrong style\u003d\"\"\u003eICME\u003c/strong\u003e), 2020.\u003c/span\u003e\u003c/span\u003e\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003chr/\u003e\u003cp style\u003d\"box-sizing: border-box; font-weight: 700; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box;font-size:18px\"\u003e学术兼职 | Academic:\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"font-family:times new roman;;font-size:16px\"\u003ePC Members For Conferences: CVPR、 ICCV\u003cspan style\u003d\"color: rgb(85, 85, 85); white-space: pre-line;\"\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85); white-space: pre-line;\"\u003e、\u003c/span\u003e\u003c/span\u003eECCV\u003cspan style\u003d\"color: rgb(85, 85, 85); white-space: pre-line;\"\u003e、AAAI、\u003c/span\u003e\u0026nbsp;ACM MM\u003cspan style\u003d\"color: rgb(85, 85, 85); white-space: pre-line;\"\u003e、\u003c/span\u003e ICME\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"font-family:times new roman;;font-size:16px\"\u003eReviewers for Journals: TIP\u003cspan style\u003d\"color: rgb(85, 85, 85); white-space: 
pre-line;\"\u003e、\u003c/span\u003eTMM、TNNLS、TCSVT\u003c/span\u003e\u003c/p\u003e","imgname":"pic.jpg","imgdownname":"files/members/2ab55951-71cc-4063-a5f8-33574b25690f.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/2ab55951-71cc-4063-a5f8-33574b25690f.jpg","userid":1,"username":"admin","createtime":"2022-06-19 14:59:52","updatetime":"2026-03-09 19:21:51","deletetime":"","flag":1,"index":6},{"id":24,"membername":"刘华峰 | Huafeng Liu","roletype":2,"tutortype":3,"isboss":2,"major":"智能机器人、无人系统、系统工程","email":"liu.hua.feng@njust.edu.cn","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e奖励荣誉 | Honors:\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e1. 装备发展部,“强芯健魂 筑基智能”智能计算基础平台挑战赛(三等奖),2020, 北京\u0026nbsp;\u003cbr/\u003e\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e2. \u003cspan style\u003d\"white-space: normal;\"\u003e陆军装备部,\u003cspan style\u003d\"white-space: normal;\"\u003e“跨越险阻”地面无人平台挑战赛,\u003c/span\u003e\u003c/span\u003e2018年,北京\u0026nbsp;\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e3. \u003cspan style\u003d\"white-space: normal;\"\u003e陆军装备部,\u003c/span\u003e\u0026nbsp;“跨越险阻”地面无人平台挑战赛\u0026nbsp;,\u003cspan style\u003d\"white-space: normal;\"\u003e2016\u003c/span\u003e\u003cspan style\u003d\"white-space: normal;\"\u003e年,\u003c/span\u003e黑龙江塔河\u0026nbsp;\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e4. 
\u003cspan style\u003d\"white-space: normal;\"\u003e国家自然科学基金委,\u003c/span\u003e“中国智能车未来挑战赛2015”,\u003cspan style\u003d\"white-space: normal;\"\u003e2015\u003c/span\u003e\u003cspan style\u003d\"white-space: normal;\"\u003e年,\u003c/span\u003e江苏常熟 \u0026nbsp;\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e5. \u003cspan style\u003d\"white-space: normal;\"\u003e总装备部,\u003c/span\u003e“跨越险阻”地面无人平台挑战赛,\u003cspan style\u003d\"white-space: normal;\"\u003e2014\u003c/span\u003e\u003cspan style\u003d\"white-space: normal;\"\u003e年,\u003c/span\u003e北京\u0026nbsp;\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e6. \u003cspan style\u003d\"white-space: normal;\"\u003e国家自然科学基金委,\u003c/span\u003e“中国智能车未来挑战赛2014”,\u003cspan style\u003d\"white-space: normal;\"\u003e2014\u003c/span\u003e\u003cspan style\u003d\"white-space: normal;\"\u003e年,\u003c/span\u003e江苏常熟 \u0026nbsp;\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e7. 
\u003cspan style\u003d\"white-space: normal;\"\u003e国家自然科学基金委,\u003c/span\u003e“中国智能车未来挑战赛2013”,\u003cspan style\u003d\"white-space: normal;\"\u003e2013\u003c/span\u003e\u003cspan style\u003d\"white-space: normal;\"\u003e年,\u003c/span\u003e 江苏常熟\u0026nbsp;\u003c/span\u003e\u003c/p\u003e\u003chr/\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e工作经历 | Work Experience:\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e2022.04 - 至今: 南京理工大学,博士后\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e学习经历 | Education Experience:\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e2014年9月~2022年3月 南京理工大学 控制科学与工程 获工学博士学位(导师:唐振民教授)\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e2012年9月~2014年9月 南京理工大学 模式识别与智能系统 硕士(提前攻博)\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e2011年9月~2013年6月 南京理工大学 国际经济与贸易专业 获经济学第二学士学位\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e2007年9月~2011年6月 南京理工大学 网络工程专业 获工学士学位\u003c/span\u003e\u003c/p\u003e\u003chr/\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e科研项目 | Fundings:\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e1. 江苏省自然科学基金,青年基金,“基于含有噪声标签样本数据的图像识别方法研究”,20万,主持\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e2. 国家自然科学基金,青年基金,“噪声长尾交织条件下的鲁棒图像识别方法研究”,30万,主持\u003c/span\u003e\u003c/p\u003e\u003chr/\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e学术论文 | Publications:\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e1.\u0026nbsp;\u003cstrong style\u003d\"\"\u003e\u003cspan style\u003d\"text-decoration:underline;\"\u003eHuafeng Liu\u003c/span\u003e\u003c/strong\u003e, Haofeng Zhang, Jianfeng Lu, Zhenmin Tang. 
Exploiting Web Images for Fine-Grained Visual Recognition via Dynamic Loss Correction and Global Sample Selection. IEEE Transactions on Multimedia (\u003cstrong style\u003d\"\"\u003eTMM\u003c/strong\u003e), 24(2022): 1105-1115.\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e2.\u0026nbsp;\u003cstrong\u003e\u003cspan style\u003d\"text-decoration:underline;\"\u003eHuafeng Liu\u003c/span\u003e\u003c/strong\u003e, Chuanyi Zhang, Yazhou Yao, Xiushen Wei, Fumin Shen, Zhenmin Tang, Jian Zhang. Exploiting Web Images for Fine-Grained Visual Recognition by Eliminating Open-Set Noise and Utilizing Hard Examples. IEEE Transactions on Multimedia (\u003cstrong\u003eTMM\u003c/strong\u003e), 24(2022): 546-557.\u0026nbsp;\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e3.\u0026nbsp;\u003cstrong\u003e\u003cspan style\u003d\"text-decoration:underline;\"\u003eHuafeng Liu\u003c/span\u003e\u003c/strong\u003e, Pai Peng, Tao Chen, Qiong Wang, Yazhou Yao*, and Xian-Sheng Hua, \u0026quot;FECANet: Boosting Few-Shot Semantic Segmentation with Feature-Enhanced Context-Aware Network\u0026quot;, IEEE Transactions on Multimedia (\u003cstrong\u003eTMM\u003c/strong\u003e), 2023.\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e4.\u0026nbsp;\u003cstrong\u003e\u003cspan style\u003d\"text-decoration:underline;\"\u003eHuafeng Liu\u003c/span\u003e\u003c/strong\u003e, Mengmeng Sheng, Zeren Sun, Yazhou Yao*, Xian-Sheng Hua, and Heng-Tao Shen, \u0026quot;Learning with Imbalanced Noisy Data by Preventing Bias in Sample Selection\u0026quot;, IEEE Transactions on Multimedia (\u003cstrong\u003eTMM\u003c/strong\u003e), 2024.\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e5.\u0026nbsp;\u003cstrong\u003e\u003cspan style\u003d\"text-decoration:underline;\"\u003e刘华峰\u003c/span\u003e,\u003c/strong\u003e\u0026nbsp;陈静静, 李亮, 鲍秉坤, 李泽超, 刘家瑛, 聂礼强, 跨模态表征与生成技术, 
中国图形图像学报, 2023\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e6. Zeren Sun, \u003cstrong\u003e\u003cspan style\u003d\"text-decoration:underline;\"\u003eHuafeng Liu\u003c/span\u003e\u003c/strong\u003e, Qiong Wang, Tianfei Zhou, Qi Wu, Zhenmin Tang. Co-LDL: A Co-training-based Label Distribution Learning Method for Tackling Label Noise. IEEE Transactions on Multimedia (\u003cstrong\u003eTMM\u003c/strong\u003e), 24(2022): 1093-1104.\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e7. Zeren Sun, Yazhou Yao, Xiu-Shen Wei, Fumin Shen, \u003cstrong\u003e\u003cspan style\u003d\"text-decoration:underline;\"\u003eHuafeng Liu\u003c/span\u003e\u003c/strong\u003e, Xian-Sheng Hua. Boosting Robust Learning via Leveraging Reusable Samples in Noisy Web Data. IEEE Transactions on Multimedia \u003cspan style\u003d\"white-space: normal;\"\u003e(\u003c/span\u003e\u003cstrong style\u003d\"white-space: normal;\"\u003eTMM\u003c/strong\u003e\u003cspan style\u003d\"white-space: normal;\"\u003e)\u003c/span\u003e, 2022.\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e8. Chuanyi Zhang, Yazhou Yao, \u003cstrong style\u003d\"\"\u003e\u003cspan style\u003d\"text-decoration:underline;\"\u003eHuafeng Liu\u003c/span\u003e\u003c/strong\u003e, Guosen Xie, Xiangbo Shu, Tianfei Zhou, Zheng Zhang, Fumin Shen, Zhenmin Tang. Web-Supervised Network with Softly Update-Drop Training for Fine-Grained Visual Classification. 
Proceedings of the 34th AAAI Conference on Artificial Intelligence (\u003cstrong style\u003d\"\"\u003eAAAI\u003c/strong\u003e), 2020, 12781-12788.\u0026nbsp;\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e9.\u0026nbsp;Gensheng Pei, Tao Chen, Xiruo Jiang, \u003cstrong style\u003d\"\"\u003e\u003cspan style\u003d\"text-decoration:underline;\"\u003eHuafeng Liu\u003c/span\u003e\u003c/strong\u003e, Zeren Sun, Yazhou Yao*, \u0026quot;VideoMAC: Video Masked Autoencoders Meet ConvNets\u0026quot;, IEEE Conference on Computer Vision and Pattern Recognition (\u003cstrong style\u003d\"\"\u003eCVPR\u003c/strong\u003e), 2024.\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e10.\u0026nbsp;Bo Zhou, Liulei Li, Yujia Wang, \u003cstrong\u003e\u003cspan style\u003d\"text-decoration:underline;\"\u003eHuafeng Liu\u003c/span\u003e\u003c/strong\u003e, Yazhou Yao*, Wenguan Wang*, \u0026quot;UNIALIGN: Scaling Multimodal Alignment within One Unified Model\u0026quot;, IEEE Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2025.\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e11.\u0026nbsp;Jianqiang Xu, Gensheng Pei, \u003cstrong\u003e\u003cspan style\u003d\"text-decoration:underline;\"\u003eHuafeng Liu\u003c/span\u003e\u003c/strong\u003e, Yazhou Yao*, \u0026quot;GSV2X: Geometry-Aware Uncertainty Modeling and Orthogonal Fusion for Robust Roadside Perception\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2026.\u003c/span\u003e\u003c/p\u003e","imgname":"微信图片_20220705110130.jpg","imgdownname":"files/members/421f822f-63dd-4b06-aeff-6005123db16b.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/421f822f-63dd-4b06-aeff-6005123db16b.jpg","userid":1,"username":"admin","createtime":"2022-06-19 15:05:13","updatetime":"2026-02-21 
18:57:34","deletetime":"","flag":1,"index":7},{"id":98,"membername":"石朝侠 | Chaoxia Shi","roletype":2,"tutortype":2,"isboss":2,"major":"无人车自动驾驶;强化深度学习;地图创建;多机器人协同","email":"scx@njust.edu.cn","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"\u003cp\u003e团队获奖情况:\u003c/p\u003e\u003cp\u003e\u0026nbsp; 1.2019年参加第21届机器人及人工智能大赛,获得\u0026quot;室外无人车智能挑战赛\u0026quot;项目冠军。\u003c/p\u003e\u003cp\u003e\u0026nbsp; 2.2020年参加第22届机器人及人工智能大赛,获得\u0026quot;室外无人车智能挑战赛\u0026quot;项目冠军。\u003c/p\u003e\u003cp\u003e\u0026nbsp; 3.2021年参加第23届机器人及人工智能大赛,获得\u0026quot;室外无人车智能挑战赛\u0026quot;项目冠军。\u003c/p\u003e\u003cp\u003e\u0026nbsp; 4.2022年参加第24届机器人及人工智能大赛,获得\u0026quot;室外无人车智能挑战赛\u0026quot;项目二等奖。\u003c/p\u003e\u003cp\u003e\u0026nbsp; 5.2023年参加江苏省研究生科研创新实践大赛,获得\u0026quot;野外无人车\u0026quot;项目一等奖。\u003c/p\u003e\u003cp\u003e\u0026nbsp; 6.2025年参加第27届机器人及人工智能大赛,获得\u0026quot;无人室外场景\u0026quot;国赛一等奖和“人工智能创新赛”国赛一等奖各一项。\u003c/p\u003e\u003cp\u003e论文:\u003c/p\u003e\u003cp\u003e[1] Naeem Fizza, Chaoxia Shi*, Yanqing Wang. EPO-ImDDPG: Evolutionary Policy Optimization Approach with Improved DDPG for Multi-Agent Exploration in Dynamic Environment. 8th International Conference on Artificial Intelligence and Big Data, ICAIBD 2025, p 831-836, 2025\u003c/p\u003e\u003cdiv\u003e\u0026nbsp;[2] Ben Amarat Samia, Chaoxia Shi*, Yanqing Wang. Generative Adversarial Imitation Learning Method Based on TD3-SAC Hybrid Algorithm for Robot Motion Control. 8th International Conference on Artificial Intelligence and Big Data, ICAIBD 2025, p 846-852, 2025\u003c/div\u003e\u003cdiv\u003e\u0026nbsp;[3] Gao Yiming, Chaoxia Shi*,\u0026nbsp; Yanqing Wang. End-to-End Motion Planning Based on Visual Conditional Imitation Learning and Trajectory-guide[C]. The 2023 International Conference Artificial Intelligence Conference. 2024, Nanjing, China\u003c/div\u003e\u003cdiv\u003e\u0026nbsp;[4] Yujia Liu, Chaoxia Shi*, Yanqing Wang. 
Stable Monocular Visual Odometry based on Optical Flow Matching[C]. The 2023 International Conference Artificial Intelligence Conference. 2024, Nanjing, China\u003c/div\u003e\u003cdiv\u003e\u0026nbsp;[5] Guangyao Si, Chaoxia Shi*, Yanqing Wang. Enhanced and Pruned Motion Planning Based on Bird\u0026#39;s-Eye View. Communications in Computer and Information Science, v 2215 CCIS, p 154-164, 2024\u003c/div\u003e\u003cdiv\u003e\u0026nbsp;[6] Mingsu Yan, Chaoxia Shi*, Yanqing Wang. A Monocular Visual Odometry Combining Edge Enhance with Deep Learning, 2019 IEEE International Conference on Robotics and Biomimetics (ROBIO), 2019.12\u0026nbsp;\u003c/div\u003e\u003cdiv\u003e\u0026nbsp;[7] Chaoxia Shi, Yanqing Wang; Li He. Feature matching using Sequential evaluation on Sample Consensus, 2017 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC), Shenzhen, China,\u0026nbsp;\u003c/div\u003e\u003cdiv\u003e\u0026nbsp;[8] Chaoxia Shi,\u0026nbsp; Tianheng Liu. Motion planning by adding geometric constraint of roadside to beam curvature method, IEEE-CYBER 2013, Nanjing,China\u0026nbsp;\u003c/div\u003e\u003cdiv\u003e\u0026nbsp;[9] Chaoxia Shi; Yanqing Wang,\u0026nbsp; Jingyu Yang. Online topological map building and qualitative localization in large-scale environment[J], Robotics and Autonomous Systems, 2010,58(5):488-496(SCI)\u003c/div\u003e\u003cdiv\u003e\u0026nbsp;[10]Chaoxia Shi; Yanqing Wang, Jingyu Yang. 
A local obstacle avoidance method for mobile robots in partially known environment, Robotics and Autonomous Systems[J],2010,58(5):425-434(SCI)\u003c/div\u003e\u003cp\u003e\u003cbr/\u003e\u003c/p\u003e\u003cp\u003e\u003cbr/\u003e\u003c/p\u003e","imgname":"石朝侠.jpg","imgdownname":"files/members/3d1ec5d8-d0f3-4e3d-be52-ac3cc3e55a3c.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/3d1ec5d8-d0f3-4e3d-be52-ac3cc3e55a3c.jpg","userid":1,"username":"admin","createtime":"2025-09-18 19:44:16","updatetime":"2025-09-18 20:16:13","deletetime":"","flag":1,"index":8},{"id":22,"membername":"周翊超 | Yichao Zhou","roletype":5,"tutortype":3,"isboss":2,"major":"信号处理、机器学习","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"\u003cp\u003e\u003cstrong\u003e奖励荣誉 | Honors:\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e\u003c/p\u003e\u003cp\u003e1. 军委装备发展部,第一届“智算杯”智能计算基础平台挑战赛,三等奖,2020\u003cbr/\u003e\u003c/p\u003e\u003cp\u003e\u003c/p\u003e\u003chr/\u003e\u003cp\u003e\u003cstrong\u003e工作经历 | Work Experience:\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e2022.06 - 至今:\u0026nbsp; \u0026nbsp; 南京理工大学,副研究员\u003c/p\u003e\u003cp\u003e2019.07-2022.06:南京理工大学,博士后\u003c/p\u003e\u003cp\u003e2008.09-2012.01:航天科工集团空天防御研究院,工程师\u003c/p\u003e\u003cp\u003e\u003cstrong\u003e学习经历 | Education Experience:\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e2012.3-2018.3:南京理工大学,博士\u003c/p\u003e\u003cp\u003e2006.9-2008.6:南京理工大学,硕士\u003c/p\u003e\u003cp\u003e2002.9-2006.6:南京理工大学,本科\u003c/p\u003e\u003chr/\u003e\u003cp\u003e\u003cstrong\u003e学术论文 | Publications:\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e3. 
Zhenyu Yang, Gensheng Pei, Tao Chen, \u003cstrong\u003eYichao Zhou\u003c/strong\u003e, Tianfei Zhou, Yazhou Yao*, Fumin Shen, \u0026quot;Efficiency Follows Global-Local Decoupling\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2026\u003c/p\u003e\u003cp\u003e2.\u0026nbsp;Mengmeng Sheng, Zeren Sun, Zhenhuang Cai, Tao Chen, \u003cstrong\u003eYichao Zhou\u003c/strong\u003e, Yazhou Yao*, \u0026quot;Adaptive Integration of Partial Label Learning and Negative Learning for Enhanced Noisy Label Learning\u0026quot;, AAAI Conference on Artificial Intelligence (\u003cstrong\u003eAAAI\u003c/strong\u003e), 2024.\u003c/p\u003e\u003cp\u003e1. \u003cstrong\u003eYichao Zhou\u003c/strong\u003e, Zhisen Hu, Zuxing Xuan, Yangang Wang, Xiyuan Hu, “Synchronizing Detection and Removal of Smoke in Endoscopic Images With Cyclic Consistency Adversarial Nets”, IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2022.\u003c/p\u003e\u003chr/\u003e\u003cp\u003e\u003cstrong\u003e科研项目 | Fundings:\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e1. 北京无线电测量研究所,横向项目,“雷达组合面板状态识别应用软件”,60万,主持\u003c/p\u003e\u003cp\u003e2. 科技部,国家重点研发计划课题,“基于大数据贝叶斯方法的伪造视频人像量化检验关键技术研究”,840万,课题副组长,参与\u003c/p\u003e\u003cp\u003e3.\u0026nbsp;国家自然科学基金,面上项目,“融合多模态学习的视频中深度伪造人脸取证技术研究”,70万,课题骨干,参与\u003c/p\u003e\u003cp\u003e4. 
科技部,国家科技重大专项,“XX智能控制支撑软件系统”,3400万,参与\u003c/p\u003e","imgname":"7ac84353-1959-462c-abbc-0d6f6f181138-removebg-preview.jpg","imgdownname":"files/members/5e8e175b-8e80-447e-b0bc-e7468b9140af.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/5e8e175b-8e80-447e-b0bc-e7468b9140af.jpg","userid":1,"username":"admin","createtime":"2022-06-19 14:48:11","updatetime":"2026-02-21 18:58:17","deletetime":"","flag":1,"index":9},{"id":30,"membername":"裴根生 | Gensheng Pei","roletype":14,"tutortype":3,"isboss":2,"major":"视频分割、图像匹配、变化检测","email":"peigsh@njust.edu.cn","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"\u003cp\u003e\u003cstrong\u003e\u003cspan style\u003d\"font-size:18px\"\u003e奖励荣誉 | Honors:\u003c/span\u003e\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:16px\"\u003e1.\u0026nbsp;粤港澳大湾区国际算法算例大赛,“遥感图像物体目标检测”赛道, 一等奖(冠军),2022\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:16px\"\u003e2.\u0026nbsp;第三届“计图”人工智能挑战赛:语义分割赛道,冠军,2023\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:16px\"\u003e3.\u0026nbsp;航天系统装备部,第三届“天智杯”人工智能挑战赛:亚米级SAR图像飞机目标细粒度智能识别赛道,季军;可见光图像军事基地设施智能变化检测赛道,优秀奖,2023\u003c/span\u003e\u003c/p\u003e\u003chr/\u003e\u003cp\u003e\u003cspan style\u003d\"white-space: nowrap;\"\u003e\u003cstrong\u003e\u003cspan style\u003d\"font-family:times new roman;;font-size:18px\"\u003e学术论文 | Publications:\u003c/span\u003e\u003c/strong\u003e\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-family:times new roman;;font-size:16px\"\u003e19.\u0026nbsp;Jianjian Yin, Xiruo Jiang, Tao Chen, \u003cstrong\u003eGensheng Pei\u003c/strong\u003e, Yazhou Yao*, Fumin Shen, Heng-Tao Shen, \u0026quot;DepMatch: Boosting Semi-supervised Semantic Segmentation by Exploring Depth Difference Knowledge\u0026quot;, IEEE Transactions on Image Processing 
(\u003cstrong\u003eTIP\u003c/strong\u003e), 2026\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-family:times new roman;;font-size:16px\"\u003e18.\u0026nbsp;\u003cstrong\u003eGensheng Pei\u003c/strong\u003e, Xiruo Jiang, Xinhao Cai, Tao Chen, Yazhou Yao*, Byeungwoo Jeon, \u0026quot;PEARL: Geometry Aligns Semantics for Training-Free Open-Vocabulary Semantic Segmentation\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2026\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-family:times new roman;;font-size:16px\"\u003e17.\u0026nbsp;Xinhao Cai, \u003cstrong\u003eGensheng Pei\u003c/strong\u003e, Zeren Sun, Yazhou Yao*, Fumin Shen, Wenguan Wang, \u0026quot;Iris: Bringing Real-World Priors into Diffusion Model for Monocular Depth Estimation\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2026\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-family:times new roman;;font-size:16px\"\u003e16.\u0026nbsp;Jianjian Yin, Tao Chen, Yi Chen, \u003cstrong\u003eGensheng Pei\u003c/strong\u003e, Xiangbo Shu, Yazhou Yao*, Fumin Shen, \u0026quot;PCA-Seg: Revisiting Cost Aggregation for Open-Vocabulary Semantic and Part Segmentation\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2026\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-family:times new roman;;font-size:16px\"\u003e15.\u0026nbsp;Zhenyu Yang, \u003cstrong\u003eGensheng Pei\u003c/strong\u003e, Tao Chen, Yichao Zhou, Tianfei Zhou, Yazhou Yao*, Fumin Shen, \u0026quot;Efficiency Follows Global-Local Decoupling\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2026\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-family:times new 
roman;;font-size:16px\"\u003e14.\u0026nbsp;Haowen Gu, \u003cstrong\u003eGensheng Pei\u003c/strong\u003e, Zeren Sun, Mingwu Ren, Xiangbo Shu, Yazhou Yao*, Fumin Shen, \u0026quot;MedFG-VQA: Low-Frequency Memory and Graph Attention for Lightweight Medical VQA\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2026\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-family:times new roman;;font-size:16px\"\u003e13.\u0026nbsp;Jianqiang Xu, \u003cstrong\u003eGensheng Pei\u003c/strong\u003e, Huafeng Liu, Yazhou Yao*, \u0026quot;GSV2X: Geometry-Aware Uncertainty Modeling and Orthogonal Fusion for Robust Roadside Perception\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2026\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-family:times new roman;;font-size:16px\"\u003e12.\u0026nbsp;Xinhao Cai, Liulei Li, \u003cstrong\u003eGensheng Pei\u003c/strong\u003e, Tao Chen, Jinshan Pan, Yazhou Yao*, Wenguan Wang*, \u0026quot;Beyond Frequency: Scoring-Driven Debiasing for Object Detection via Blueprint-Prompted Image Synthesis\u0026quot;, International Conference on Learning Representations (\u003cstrong\u003eICLR\u003c/strong\u003e), 2026.\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-family:times new roman;;font-size:16px\"\u003e11.\u0026nbsp;Xinhao Cai, Qiuxia Lai, \u003cstrong\u003eGensheng Pei\u003c/strong\u003e, Xiangbo Shu, Yazhou Yao, Wenguan Wang, \u0026quot;Cycle-Consistent Learning for Joint Layout-to-Image Generation and Object Detection\u0026quot;, IEEE International Conference on Computer Vision (\u003cstrong\u003eICCV\u003c/strong\u003e), 2025.\u0026nbsp;\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-family:times new roman;;font-size:16px\"\u003e10.\u0026nbsp;\u003cstrong style\u003d\"\"\u003eGensheng Pei\u003c/strong\u003e, Tao Chen, 
Yujia Wang, Xinhao Cai, Xiangbo Shu, Tianfei Zhou, Yazhou Yao*, \u0026quot;Seeing What Matters: Empowering CLIP with Patch Generation-to-Selection\u0026quot;, IEEE Conference on Computer Vision and Pattern Recognition (\u003cstrong style\u003d\"\"\u003eCVPR\u003c/strong\u003e), 2025.\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e9.\u0026nbsp;Jianjian Yin, Tao Chen, \u003cstrong\u003eGensheng Pei\u003c/strong\u003e, Yazhou Yao, Liqiang Nie, Xiansheng Hua, \u0026quot;Semi-supervised Semantic Segmentation with Multi-Constraint Consistency Learning\u0026quot;, IEEE Transactions on Multimedia (\u003cstrong\u003eTMM\u003c/strong\u003e), 2025.\u0026nbsp;\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e8.\u0026nbsp;Mengmeng Sheng, Zeren Sun, \u003cstrong\u003eGensheng Pei\u003c/strong\u003e, Tao Chen, Haonan Luo, Yazhou Yao*, \u0026quot;Enhancing Robustness in Learning with Noisy Labels: An Asymmetric Co-Training Approach\u0026quot;, ACM International Conference on Multimedia (\u003cstrong\u003eACM MM\u003c/strong\u003e), 2024.\u0026nbsp;\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cstrong\u003e7. 
\u003c/strong\u003eTao Chen, XiRuo Jiang, \u003cstrong\u003eGensheng Pei\u003c/strong\u003e, Zeren Sun, Yucheng Wang, Yazhou Yao*, \u0026quot;Knowledge Transfer with Simulated Inter-Image Erasing for Weakly Supervised Semantic Segmentation\u0026quot;, European Conference on Computer Vision (\u003cstrong\u003eECCV\u003c/strong\u003e), 2024.\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cstrong\u003e6.\u0026nbsp;\u003c/strong\u003eWang Zhang, Tingting Li, Yuntian Zhang, \u003cstrong\u003eGensheng Pei\u003c/strong\u003e, Xiruo Jiang, Yazhou Yao, \u0026quot;LTFormer: A Light-weight Transformer-based Self-supervised Matching Network for Heterogeneous Remote Sensing Images\u0026quot;, Information Fusion (\u003cstrong\u003eI-Fusion\u003c/strong\u003e), 2024.\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cstrong\u003e5.\u0026nbsp;Gensheng Pei\u003c/strong\u003e, Tao Chen, Xiruo Jiang, Huafeng Liu, Zeren Sun, Yazhou Yao*, \u0026quot;VideoMAC: Video Masked Autoencoders Meet ConvNets\u0026quot;, IEEE Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2024.\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cstrong\u003e4.\u0026nbsp;Gensheng Pei\u003c/strong\u003e, Fumin Shen, Yazhou Yao, Tao Chen, Xian-Sheng Hua, and Heng-Tao Shen, \u0026quot;Hierarchical Graph Pattern Understanding for Zero-Shot Video Object Segmentation\u0026quot;, IEEE Transactions on Image Processing (\u003cstrong\u003eTIP\u003c/strong\u003e), 2023.\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cstrong\u003e3.\u0026nbsp;\u003c/strong\u003e\u003cspan style\u003d\"box-sizing: border-box; color: rgb(85, 85, 85);\"\u003e\u003cspan style\u003d\"box-sizing: 
border-box;\"\u003eYazhou Yao\u003c/span\u003e, Tao Chen, Hanbo Bi, Xinhao Cai, \u003cstrong\u003eGensheng Pei\u003c/strong\u003e, Guoye Yang, Zhiyuan Yan, Xian Sun, Xing Xu, and Hai Zhang, \u0026quot;\u003c/span\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85); white-space: pre-line;\"\u003eAutomated object recognition in high-resolution optical remote sensing imagery\u003c/span\u003e\u003cspan style\u003d\"box-sizing: border-box; color: rgb(85, 85, 85);\"\u003e\u0026quot;, National Science Review (\u003cspan style\u003d\"box-sizing: border-box;\"\u003e\u003cstrong\u003eNSR\u003c/strong\u003e\u003c/span\u003e), 2023 (\u003cspan style\u003d\"box-sizing: border-box;\"\u003e\u003cstrong\u003eImpact Factor\u003c/strong\u003e\u003c/span\u003e:23.178)\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:16px\"\u003e\u003cstrong\u003e2.\u003c/strong\u003e\u0026nbsp;\u003cstrong\u003eGensheng Pei\u003c/strong\u003e, Yazhou Yao, Fumin Shen, Dan Huang, Xingguo Huang, and Heng-Tao Shen, \u0026quot;Hierarchical Co-attention Propagation Network for Zero-Shot Video Object Segmentation\u0026quot;,\u0026nbsp;IEEE Transactions on Image Processing (\u003cstrong\u003eTIP\u003c/strong\u003e), 2023.\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-family:times new roman;;font-size:16px\"\u003e\u003cstrong style\u003d\"\"\u003e1. 
Gensheng Pei\u003c/strong\u003e, Yazhou Yao*, Guo-Sen Xie, Fumin Shen, Zhenmin Tang, Jinhui Tang, \u0026quot;Hierarchical Feature Alignment Network for Unsupervised Video Object Segmentation\u0026quot;, European Conference on Computer Vision (\u003cstrong style\u003d\"\"\u003eECCV\u003c/strong\u003e), 2022.\u003c/span\u003e\u003c/p\u003e","imgname":"3680c82b-33c8-4bf7-97ea-9b3c528d90bc-removebg-preview.jpg","imgdownname":"files/members/8aa36619-9616-4a8c-9998-d0648ac5a470.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/8aa36619-9616-4a8c-9998-d0648ac5a470.jpg","userid":1,"username":"admin","createtime":"2022-06-22 22:45:07","updatetime":"2026-03-09 19:26:20","deletetime":"","flag":1,"index":10},{"id":35,"membername":"毛君竹 | Junzhu Mao ","roletype":14,"tutortype":3,"isboss":2,"major":"网络压缩、机器学习","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); font-family: \u0026quot;Segoe UI\u0026quot;, \u0026quot;Lucida Grande\u0026quot;, Helvetica, Arial, \u0026quot;Microsoft YaHei\u0026quot;, FreeSans, Arimo, \u0026quot;Droid Sans\u0026quot;, \u0026quot;wenquanyi micro hei\u0026quot;, \u0026quot;Hiragino Sans GB\u0026quot;, \u0026quot;Hiragino Sans GB W3\u0026quot;, Roboto, Arial, sans-serif; font-size: 14px; white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box; white-space: nowrap;\"\u003e\u003cspan style\u003d\"box-sizing: border-box; font-weight: 700;\"\u003e\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cstrong\u003e奖励荣誉 | Honors:\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e3.\u0026nbsp;航天系统装备部,第三届“天智杯”人工智能挑战赛:亚米级可见光图像飞机目标细粒度智能识别赛道,优秀奖,2023\u003c/p\u003e\u003cp\u003e2. 粤港澳大湾区国际算法算例大赛,“遥感图像物体目标检测”赛道, 一等奖(冠军),2022\u003c/p\u003e\u003cp\u003e1. 
军委装备发展部,第一届“智算杯”智能计算基础平台挑战赛,三等奖,2020\u003c/p\u003e\u003chr/\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); font-family: \u0026quot;Segoe UI\u0026quot;, \u0026quot;Lucida Grande\u0026quot;, Helvetica, Arial, \u0026quot;Microsoft YaHei\u0026quot;, FreeSans, Arimo, \u0026quot;Droid Sans\u0026quot;, \u0026quot;wenquanyi micro hei\u0026quot;, \u0026quot;Hiragino Sans GB\u0026quot;, \u0026quot;Hiragino Sans GB W3\u0026quot;, Roboto, Arial, sans-serif; font-size: 14px; white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"box-sizing: border-box; white-space: nowrap;\"\u003e\u003cspan style\u003d\"box-sizing: border-box; font-weight: 700;\"\u003e学术论文 | Publications:\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); font-size: 14px; white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"font-family:times new roman\"\u003e\u003cspan style\u003d\"box-sizing: border-box;\"\u003e\u003cstrong\u003e4. 
\u003c/strong\u003e\u003cstrong\u003eJunzhu Mao\u003c/strong\u003e, Zeren Sun, Yazhou Yao*, Tianfei Zhou, Liqiang Nie, and Xiansheng Hua, \u0026quot;NiCI-Pruning: Enhancing Diffusion Model Pruning via Noise in Clean Image Guidance\u0026quot;, IEEE Transactions on Image Processing (\u003cstrong\u003eTIP\u003c/strong\u003e), 2025\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); font-size: 14px; white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"font-family:times new roman\"\u003e\u003cspan style\u003d\"box-sizing: border-box;\"\u003e\u003cstrong\u003e3\u003c/strong\u003e.\u0026nbsp;\u003cstrong\u003eJunzhu Mao\u003c/strong\u003e, Zeren Sun, Yazhou Yao, Xiansheng Hua, Heng-Tao Shen, \u0026quot;Class Importance Consistency Matters: Efficient Model Pruning for Long-tailed Recognition Models\u0026quot;, IEEE Transactions on Multimedia (\u003cstrong\u003eTMM\u003c/strong\u003e), 2025\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); font-size: 14px; white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"font-family:times new roman\"\u003e\u003cspan style\u003d\"box-sizing: border-box;\"\u003e\u003cstrong\u003e2\u003c/strong\u003e.\u0026nbsp;\u003cstrong\u003eJunzhu Mao\u003c/strong\u003e, Yang Shen, Jinyang Guo, Yazhou Yao*, Xiansheng Hua, and Hengtao Shen, \u0026quot;Prune and Merge: Efficient Token Compression for Vision Transformer with Spatial Information Preserved\u0026quot;, IEEE Transactions on Multimedia (\u003cstrong\u003eTMM\u003c/strong\u003e), 2024.\u003c/span\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"box-sizing: border-box; margin-top: 0px; margin-bottom: 11px; color: rgb(85, 85, 85); font-size: 14px; white-space: pre-line !important;\"\u003e\u003cspan style\u003d\"font-family:times new roman\"\u003e\u003cspan 
style\u003d\"box-sizing: border-box; font-weight: 700;\"\u003e1. \u003c/span\u003e\u003cstrong style\u003d\"\"\u003eJunzhu Mao\u003c/strong\u003e, Yazhou Yao, Zeren Sun, Xingguo Huang, Fumin Shen and Heng-Tao Shen, \u0026quot;Attention Map Guided Transformer Pruning for Occluded Person Re-Identification on Edge Device\u0026quot;, IEEE Transactions on Multimedia (\u003cstrong\u003eTMM\u003c/strong\u003e), 2023.\u0026nbsp;\u003c/span\u003e\u003c/p\u003e","imgname":"23a9081a-5ac4-4476-93c0-9107bb8ae118-removebg-preview.jpg","imgdownname":"files/members/29d265b7-c5fc-465a-845f-fa06c1766d13.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/29d265b7-c5fc-465a-845f-fa06c1766d13.jpg","userid":1,"username":"admin","createtime":"2022-06-22 22:56:06","updatetime":"2026-03-09 19:29:48","deletetime":"","flag":1,"index":11},{"id":32,"membername":"唐印","roletype":15,"tutortype":3,"isboss":4,"major":"视频分割","email":"tyeclipse@njust.deu.cn","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"\u003cp style\u003d\"text-wrap: wrap;\"\u003e\u003cspan style\u003d\"text-wrap: nowrap;\"\u003e\u003cstrong\u003e学术论文 | Publications:\u003c/strong\u003e\u003c/span\u003e\u003c/p\u003e\u003cp style\u003d\"text-wrap: wrap;\"\u003e1.\u003cstrong\u003e \u003c/strong\u003e\u003cstrong\u003eYin Tang\u003c/strong\u003e, Tao Chen, Xiruo Jiang, Yazhou Yao, Guo-Sen Xie, and Heng-Tao Shen, \u0026quot;Holistic Prototype Attention Network for Few-shot Video Object Segmentation\u0026quot;, IEEE Transactions on Circuits and Systems for Video Technology\u0026nbsp;(\u003cstrong\u003eTCSVT\u003c/strong\u003e), 
2023.\u003c/p\u003e","imgname":"98c58ef5-6ca8-4714-ad7a-859ff2f6d2a2-removebg-preview.jpg","imgdownname":"files/members/e773f06d-dd5a-476d-88e0-754681f22172.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/e773f06d-dd5a-476d-88e0-754681f22172.jpg","userid":1,"username":"admin","createtime":"2022-06-22 22:49:44","updatetime":"2024-03-21 09:30:14","deletetime":"","flag":1,"index":12},{"id":33,"membername":"蔡鑫浩","roletype":15,"tutortype":3,"isboss":4,"major":"计算机视觉、模式识别、机器学习","email":"xinhao@njust.edu.cn","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"\u003cp\u003e\u003cstrong\u003e\u003cspan style\u003d\"font-size:18px\"\u003e奖励荣誉 | Honors:\u003c/span\u003e\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e3.\u0026nbsp;航天系统装备部,第三届“天智杯”人工智能挑战赛:亚米级SAR图像飞机目标细粒度智能识别赛道,季军;可见光图像军事基地设施智能变化检测赛道,优秀奖,2023\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e2.\u0026nbsp;粤港澳大湾区国际算法算例大赛,“遥感图像物体目标检测”赛道, 一等奖(冠军),2022\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e1.\u0026nbsp;第十届“泰迪杯”数据挖掘挑战赛,特等奖(冠军),2022\u003c/span\u003e\u003c/p\u003e\u003chr/\u003e\u003cp\u003e\u003cspan style\u003d\"white-space: nowrap;\"\u003e\u003cstrong\u003e\u003cspan style\u003d\"font-size:18px\"\u003e学术论文 | Publications:\u003c/span\u003e\u003c/strong\u003e\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"white-space: nowrap;font-size:18px\"\u003e7.\u0026nbsp;Gensheng Pei, Xiruo Jiang, \u003cstrong\u003eXinhao Cai\u003c/strong\u003e, Tao Chen, Yazhou Yao*, Byeungwoo Jeon, \u0026quot;PEARL: Geometry Aligns Semantics for Training-Free Open-Vocabulary Semantic Segmentation\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2026\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan 
style\u003d\"white-space: nowrap;font-size:18px\"\u003e6.\u0026nbsp;\u003cstrong\u003eXinhao Cai\u003c/strong\u003e, Gensheng Pei, Zeren Sun, Yazhou Yao*, Fumin Shen, Wenguan Wang, \u0026quot;Iris: Bringing Real-World Priors into Diffusion Model for Monocular Depth Estimation\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2026\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"white-space: nowrap;font-size:18px\"\u003e5.\u0026nbsp;\u003cstrong\u003eXinhao Cai\u003c/strong\u003e, Liulei Li, Gensheng Pei, Tao Chen, Jinshan Pan, Yazhou Yao*, Wenguan Wang*, \u0026quot;Beyond Frequency: Scoring-Driven Debiasing for Object Detection via Blueprint-Prompted Image Synthesis\u0026quot;, International Conference on Learning Representations (\u003cstrong\u003eICLR\u003c/strong\u003e), 2026.\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"white-space: nowrap;font-size:18px\"\u003e4.\u0026nbsp;\u003cstrong\u003eXinhao Cai\u003c/strong\u003e, Qiuxia Lai, Gensheng Pei, Xiangbo Shu, Yazhou Yao, Wenguan Wang, \u0026quot;Cycle-Consistent Learning for Joint Layout-to-Image Generation and Object Detection\u0026quot;, IEEE International Conference on Computer Vision (\u003cstrong\u003eICCV\u003c/strong\u003e), 2025. 
\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"white-space: nowrap;font-size:18px\"\u003e3.\u0026nbsp;Gensheng Pei, Tao Chen, Yujia Wang, \u003cstrong\u003eXinhao Cai\u003c/strong\u003e, Xiangbo Shu, Tianfei Zhou, Yazhou Yao*, \u0026quot;Seeing What Matters: Empowering CLIP with Patch Generation-to-Selection\u0026quot;, IEEE Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2025.\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"white-space: nowrap;font-size:18px\"\u003e2.\u0026nbsp;\u003cstrong style\u003d\"\"\u003eXinhao Cai\u003c/strong\u003e, Qiuxia Lai, Yuwei Wang, Wenguan Wang, Zeren Sun, Yazhou Yao*, \u0026quot;Poly Kernel Inception Network for Remote Sensing Detection\u0026quot;, IEEE Conference on Computer Vision and Pattern Recognition (\u003cstrong style\u003d\"\"\u003eCVPR\u003c/strong\u003e), 2024.\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"white-space: nowrap;font-size:18px\"\u003e1. 
Yazhou Yao, Tao Chen, Hanbo Bi, \u003cstrong style\u003d\"\"\u003eXinhao Cai\u003c/strong\u003e, Gensheng Pei, Guoye Yang, Zhiyuan Yan, Xian Sun, Xing Xu, and Hai Zhang, \u0026quot;Automated object recognition in high-resolution optical remote sensing imagery\u0026quot;, National Science Review (\u003cstrong style\u003d\"\"\u003eNSR\u003c/strong\u003e), 2023 (\u003cstrong style\u003d\"\"\u003eImpact Factor\u003c/strong\u003e: 23.178)\u003c/span\u003e\u003c/p\u003e","imgname":"b511be97-530b-43b7-b0a5-8295e2f451d6-removebg-preview.jpg","imgdownname":"files/members/7651cd4e-ece6-4fa7-a78e-72749e1e8c89.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/7651cd4e-ece6-4fa7-a78e-72749e1e8c89.jpg","userid":1,"username":"admin","createtime":"2022-06-22 22:53:14","updatetime":"2026-02-26 22:05:02","deletetime":"","flag":1,"index":13},{"id":34,"membername":"盛猛猛","roletype":15,"tutortype":3,"isboss":4,"major":"计算机视觉、模式识别、机器学习","email":"shengmengmemg@njust.edu.cn","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"\u003cp\u003e\u003cstrong\u003e奖励荣誉 | Honors:\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e3.\u0026nbsp;航天系统装备部,第三届“天智杯”人工智能挑战赛:亚米级SAR图像飞机目标细粒度智能识别赛道,优秀奖,2023\u003c/p\u003e\u003cp\u003e2. 粤港澳大湾区国际算法算例大赛,“数据选择与标记校正算法设计”赛道, 三等奖,2022\u003c/p\u003e\u003cp\u003e1. 
第十届“泰迪杯”数据挖掘挑战赛,特等奖(冠军),2022\u003c/p\u003e\u003chr/\u003e\u003cp\u003e\u003cspan style\u003d\"box-sizing: border-box; color: rgb(85, 85, 85); white-space: pre-line;font-size:16px;\"\u003e\u003cstrong\u003e学术论文 | Publications:\u003c/strong\u003e\u003c/span\u003e\u003c/p\u003e\u003cp\u003e7.\u0026nbsp;\u003cstrong\u003eMengmeng Sheng\u003c/strong\u003e, Zeren Sun, Tao Chen, Jinshan Pan, Yazhou Yao*, Fumin Shen, \u0026quot;Revisiting Learning with Noisy Labels: Active Forgetting and Noise Suppression\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2026\u003c/p\u003e\u003cp\u003e6.\u0026nbsp;\u003cstrong\u003eMengmeng Sheng\u003c/strong\u003e, Zeren Sun, Tianfei Zhou, Xiangbo Shu, Jinshan Pan, Yazhou Yao, \u0026quot;CA2C: A Prior-Knowledge-Free Approach for Robust Label Noise Learning via Asymmetric Co-learning and Co-training\u0026quot;, IEEE International Conference on Computer Vision (\u003cstrong\u003eICCV\u003c/strong\u003e), 2025.\u0026nbsp;\u003c/p\u003e\u003cp\u003e5.\u003cstrong\u003e Mengmeng Sheng\u003c/strong\u003e, Shuai Yan, Zeren Sun, Tao Chen, Huafeng Liu, Yazhou Yao, \u0026quot;Combating Noisy Labels in Knowledge Distillation for Efficient Edge Device Deployment\u0026quot;,\u0026nbsp;IEEE Transactions on Consumer Electronics (\u003cstrong\u003eTCE\u003c/strong\u003e), 2025.\u003c/p\u003e\u003cp\u003e4.\u0026nbsp;\u003cstrong\u003eMengmeng Sheng\u003c/strong\u003e, Zeren Sun, Gensheng Pei, Tao Chen, Haonan Luo, Yazhou Yao*, \u0026quot;Enhancing Robustness in Learning with Noisy Labels: An Asymmetric Co-Training Approach\u0026quot;, ACM International Conference on Multimedia (\u003cstrong\u003eACM MM\u003c/strong\u003e), 2024.\u003cstrong\u003e\u0026nbsp;\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e3.\u003cstrong\u003e\u0026nbsp;Mengmeng Sheng\u003c/strong\u003e, Zeren Sun, Tao Chen, Shuchao Pang, Yucheng Wang, Yazhou Yao*, \u0026quot;Foster Adaptivity and Balance in Learning with 
Noisy Labels\u0026quot;, European Conference on Computer Vision (\u003cstrong\u003eECCV\u003c/strong\u003e), 2024.\u003c/p\u003e\u003cp\u003e2.\u003cstrong\u003e\u0026nbsp;\u003c/strong\u003eHuafeng Liu#, \u003cstrong\u003eMengmeng Sheng#\u003c/strong\u003e, Zeren Sun, Yazhou Yao*, Xian-Sheng Hua, and Heng-Tao Shen, \u0026quot;Learning with Imbalanced Noisy Data by Preventing Bias in Sample Selection\u0026quot;, IEEE Transactions on Multimedia (\u003cstrong\u003eTMM\u003c/strong\u003e), 2024.\u003c/p\u003e\u003cp\u003e1.\u0026nbsp;\u003cspan style\u003d\"font-size: medium; text-wrap: wrap;\"\u003e\u003cstrong\u003eMengmeng Sheng\u003c/strong\u003e, Zeren Sun, Zhenhuang Cai, Tao Chen, Yichao Zhou,\u0026nbsp;\u003c/span\u003e\u003cspan style\u003d\"font-size: medium; text-wrap: wrap;\"\u003eYazhou Yao\u003c/span\u003e\u003cspan style\u003d\"font-size: medium; text-wrap: wrap;\"\u003e, \u0026quot;\u003c/span\u003e\u003cspan style\u003d\"font-size: medium; text-wrap: wrap;\"\u003eAdaptive Integration of Partial Label Learning and Negative Learning for Enhanced Noisy Label Learning\u003c/span\u003e\u003cspan style\u003d\"font-size: medium; text-wrap: wrap;\"\u003e\u0026quot;,\u0026nbsp;\u003c/span\u003e\u003cspan style\u003d\"font-size: medium; text-wrap: wrap;\"\u003eAAAI Conference on Artificial Intelligence (\u003c/span\u003e\u003cstrong style\u003d\"font-size: medium; text-wrap: wrap;\"\u003eAAAI\u003c/strong\u003e\u003cspan style\u003d\"font-size: medium; text-wrap: wrap;\"\u003e), 2024.\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cbr/\u003e\u003c/p\u003e","imgname":"4e3ee49a-c066-42f8-aa0c-ed102b9411f4-removebg-preview.jpg","imgdownname":"files/members/1bdfaf05-5db5-4b34-b477-6bf04d4f15cf.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/1bdfaf05-5db5-4b34-b477-6bf04d4f15cf.jpg","userid":1,"username":"admin","createtime":"2022-06-22 22:54:58","updatetime":"2026-02-21 
19:02:35","deletetime":"","flag":1,"index":14},{"id":59,"membername":"周波","roletype":15,"tutortype":3,"isboss":4,"major":"无人系统、路径规划","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"\u003cp\u003e\u003cstrong\u003e学术论文 | Publications:\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:18px\"\u003e2.\u003cstrong\u003e\u0026nbsp;Bo Zhou\u003c/strong\u003e, Qiuxia Lai, Zeren Sun, Xiangbo Shu, Yazhou Yao*, Wenguan Wang, \u0026quot;Learning 3D Representations for Spatial Intelligence from Unposed Multi-View Images\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2026\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-family:times new roman;font-size:18px\"\u003e1.\u0026nbsp;\u003cstrong\u003eBo Zhou\u003c/strong\u003e, Liulei Li, Yujia Wang, Huafeng Liu, Yazhou Yao*, Wenguan Wang*, \u0026quot;UNIALIGN: Scaling Multimodal Alignment within One Unified Model\u0026quot;, IEEE Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2025.\u003c/span\u003e\u003c/p\u003e","imgname":"e8f565a1-e882-41e1-a263-74c5fa512592-removebg-preview.jpg","imgdownname":"files/members/58a08c53-883f-4ec5-b950-8145d7b5709a.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/58a08c53-883f-4ec5-b950-8145d7b5709a.jpg","userid":1,"username":"admin","createtime":"2022-09-04 23:05:27","updatetime":"2026-02-21 19:03:38","deletetime":"","flag":1,"index":15},{"id":60,"membername":"张旺","roletype":15,"tutortype":3,"isboss":4,"major":"无人系统、环境感知","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"\u003cp\u003e\u003cspan style\u003d\"box-sizing: border-box; white-space: pre-line; font-size: 
14px; color: rgb(34, 34, 34); font-family: \u0026quot;times new roman\u0026quot;, times, serif;\"\u003e\u003cstrong\u003e\u003cspan style\u003d\"color: rgb(85, 85, 85); font-family: \u0026quot;Segoe UI\u0026quot;, \u0026quot;Lucida Grande\u0026quot;, Helvetica, Arial, \u0026quot;Microsoft YaHei\u0026quot;, FreeSans, Arimo, \u0026quot;Droid Sans\u0026quot;, \u0026quot;wenquanyi micro hei\u0026quot;, \u0026quot;Hiragino Sans GB\u0026quot;, \u0026quot;Hiragino Sans GB W3\u0026quot;, Roboto, Arial, sans-serif; font-weight: 700; white-space: pre-line;\"\u003e学术论文 | Publications:\u003c/span\u003e\u003c/strong\u003e\u003c/span\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"box-sizing: border-box; white-space: pre-line; font-size: 14px; color: rgb(34, 34, 34); font-family: \u0026quot;times new roman\u0026quot;, times, serif;\"\u003e1.\u003cstrong\u003e Wang Zhang\u003c/strong\u003e, \u003cspan style\u003d\"box-sizing: border-box;\"\u003eTingting Li\u003c/span\u003e, Yuntian Zhang, Gensheng Pei, Xiruo Jiang, \u003c/span\u003e\u003cspan style\u003d\"box-sizing: border-box; white-space: pre-line; font-size: 14px; color: rgb(34, 34, 34); font-family: \u0026quot;times new roman\u0026quot;, times, serif;\"\u003eYazhou Yao\u003c/span\u003e\u003cspan style\u003d\"box-sizing: border-box; white-space: pre-line; font-size: 14px; color: rgb(34, 34, 34); font-family: \u0026quot;times new roman\u0026quot;, times, serif;\"\u003e, \u0026quot;LTFormer: A Light-weight Transformer-based Self-supervised Matching Network for Heterogeneous Remote Sensing Images\u0026quot;,\u0026nbsp;Information Fusion (\u003c/span\u003e\u003cspan style\u003d\"box-sizing: border-box; white-space: pre-line; font-size: 14px; font-weight: 700; color: rgb(34, 34, 34); font-family: \u0026quot;times new roman\u0026quot;, times, serif;\"\u003eI-Fusion\u003c/span\u003e\u003cspan style\u003d\"box-sizing: border-box; white-space: pre-line; font-size: 14px; color: rgb(34, 34, 34); font-family: \u0026quot;times new 
roman\u0026quot;, times, serif;\"\u003e), 2024.\u003c/span\u003e\u003c/p\u003e","imgname":"8626b633-0747-4460-94c1-941d1bb7d144-removebg-preview.jpg","imgdownname":"files/members/3e5e9dce-37f9-47dd-9412-9e838bddf443.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/3e5e9dce-37f9-47dd-9412-9e838bddf443.jpg","userid":1,"username":"admin","createtime":"2022-09-04 23:06:05","updatetime":"2024-10-19 10:29:44","deletetime":"","flag":1,"index":16},{"id":73,"membername":"段振亚","roletype":15,"tutortype":3,"isboss":4,"major":"噪声学习、机器学习","email":"duanzy@njust.edu.cn","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"段振亚.jpg","imgdownname":"files/members/8ea01e31-6a8c-4ff7-a655-07305bab5d24.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/8ea01e31-6a8c-4ff7-a655-07305bab5d24.jpg","userid":1,"username":"admin","createtime":"2023-11-11 16:05:43","updatetime":"2023-11-11 16:07:23","deletetime":"","flag":1,"index":17},{"id":74,"membername":"杨振宇","roletype":15,"tutortype":3,"isboss":4,"major":"计算机视觉、机器学习","email":"zhenyu_yang@njust.edu.cn","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"\u003cp\u003e\u003cspan style\u003d\"box-sizing: border-box; font-weight: 700; color: rgb(85, 85, 85); white-space: pre-line;font-size:18px\"\u003e学术论文 | Publications:\u003c/span\u003e\u003c/p\u003e\u003cp\u003e3.\u0026nbsp;\u003cstrong\u003eZhenyu Yang\u003c/strong\u003e, Gensheng Pei, Tao Chen, Yichao Zhou, Tianfei Zhou, Yazhou Yao*, Fumin Shen, \u0026quot;Efficiency Follows Global-Local Decoupling\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2026\u003c/p\u003e\u003cp\u003e2.\u0026nbsp;\u003cstrong\u003eZhenyu Yang\u003c/strong\u003e, Gensheng Pei, Tao Chen, Xia Yuan, 
Haofeng Zhang, Xiangbo Shu, Yazhou Yao, \u0026quot;Beyond Quadratic: Linear-Time Change Detection with RWKV\u0026quot;,\u0026nbsp;AAAI Conference on Artificial Intelligence (\u003cstrong\u003eAAAI\u003c/strong\u003e), 2026\u003c/p\u003e\u003cp\u003e1.\u003cstrong\u003e\u0026nbsp;Zhenyu Yang\u003c/strong\u003e, Gensheng Pei, Yazhou Yao, Tianfei Zhou, Lizhong Ding, Fumin Shen, \u0026quot;ChangeTitans: Towards Remote Sensing Change Detection with Neural Memory\u0026quot;,\u0026nbsp;IEEE Transactions on Geoscience \u0026amp; Remote Sensing (\u003cstrong\u003eTGRS\u003c/strong\u003e), 2025\u003c/p\u003e","imgname":"57abb8ed-94bc-4811-88df-e070453dc57f-removebg-preview.jpg","imgdownname":"files/members/e7b7a22c-6f84-4b57-8c98-bc5df7081fdb.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/e7b7a22c-6f84-4b57-8c98-bc5df7081fdb.jpg","userid":1,"username":"admin","createtime":"2023-11-11 16:06:22","updatetime":"2026-02-21 19:12:59","deletetime":"","flag":1,"index":18},{"id":79,"membername":"赵欣阳","roletype":15,"tutortype":3,"isboss":4,"major":"计算机视觉、机器学习","email":"zhaoxy@njust.edu.cn","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"\u003cp\u003e学术论文 | Publications:\u003c/p\u003e\u003cp\u003e\u003cstrong\u003e1\u003c/strong\u003e.\u0026nbsp;\u003cstrong\u003eXinyang Zhao\u003c/strong\u003e, Jian Jin, Yangyang Li, Yazhou Yao*, \u0026quot;Twofold Debiasing Enhances Fine-Grained Learning with Coarse Labels\u0026quot;, AAAI Conference on Artificial Intelligence (\u003cstrong\u003eAAAI\u003c/strong\u003e), 2025.\u003c/p\u003e","imgname":"491837e8-faf5-49fb-b0d6-d6f37b265f9c-removebg-preview.jpg","imgdownname":"files/members/a1a83c3e-c427-4325-8dae-061f3daf4732.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/a1a83c3e-c427-4325-8dae-061f3daf4732.jpg","userid":1,"username":"admin","createtime":"2023-11-12 
09:47:28","updatetime":"2024-12-22 11:30:07","deletetime":"","flag":1,"index":19},{"id":92,"membername":"尹健健","roletype":15,"tutortype":3,"isboss":4,"major":"计算机视觉","email":"JianJYin@njust.edu.cn","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"\u003cp\u003e\u003cstrong\u003e学术论文 | Publications:\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e4.\u0026nbsp;\u003cstrong\u003eJianjian Yin\u003c/strong\u003e, Xiruo Jiang, Tao Chen, Gensheng Pei, Yazhou Yao*, Fumin Shen, Heng-Tao Shen, \u0026quot;DepMatch: Boosting Semi-supervised Semantic Segmentation by Exploring Depth Difference Knowledge\u0026quot;,\u0026nbsp;IEEE Transactions on Image Processing (\u003cstrong\u003eTIP\u003c/strong\u003e), 2026\u003c/p\u003e\u003cp\u003e3.\u0026nbsp;\u003cstrong\u003eJianjian Yin\u003c/strong\u003e, Tao Chen, Yi Chen, Gensheng Pei, Xiangbo Shu, Yazhou Yao*, Fumin Shen, \u0026quot;PCA-Seg: Revisiting Cost Aggregation for Open-Vocabulary Semantic and Part Segmentation\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2026\u003c/p\u003e\u003cp\u003e2. \u003cstrong\u003eJianjian Yin\u003c/strong\u003e, Tao Chen, Gensheng Pei, Yazhou Yao, Liqiang Nie, Xiansheng Hua, \u0026quot;Semi-supervised Semantic Segmentation with Multi-Constraint Consistency Learning\u0026quot;, IEEE Transactions on Multimedia (\u003cstrong\u003eTMM\u003c/strong\u003e), 2025.\u0026nbsp;\u003c/p\u003e\u003cp\u003e1. 
\u003cstrong\u003eJianjian Yin\u003c/strong\u003e, Shuai Yan, Tao Chen, Yi Chen and Yazhou Yao*, \u0026quot;Class Probability Space Regularization for Semi-supervised Semantic Segmentation\u0026quot;,\u0026nbsp;Computer Vision and Image Understanding (\u003cstrong\u003eCVIU\u003c/strong\u003e), 2024.\u003c/p\u003e","imgname":"微信图片_20241123172741.jpg","imgdownname":"files/members/552e8362-709e-404e-828a-42c877e9fdb0.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/552e8362-709e-404e-828a-42c877e9fdb0.jpg","userid":1,"username":"admin","createtime":"2024-11-23 17:29:30","updatetime":"2026-03-09 19:26:09","deletetime":"","flag":1,"index":20},{"id":99,"membername":"姚雨蒙","roletype":15,"tutortype":3,"isboss":4,"major":"机器学习","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"姚雨蒙.jpg","imgdownname":"files/members/8c13e0c1-869d-40aa-a70a-8559cf3a2dfc.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/8c13e0c1-869d-40aa-a70a-8559cf3a2dfc.jpg","userid":1,"username":"admin","createtime":"2025-09-18 19:48:17","updatetime":"","deletetime":"","flag":1,"index":21},{"id":119,"membername":"王星","roletype":15,"tutortype":3,"isboss":4,"major":"机器学习","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"王星.jpg","imgdownname":"files/members/fb5d5852-abbd-4866-ab09-8dd86ebde25d.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/fb5d5852-abbd-4866-ab09-8dd86ebde25d.jpg","userid":1,"username":"admin","createtime":"2025-09-19 13:25:32","updatetime":"","deletetime":"","flag":1,"index":22},{"id":65,"membername":"王钰伟","roletype":16,"tutortype":3,"isboss":4,"major":"计算机视觉、模式识别","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing 
University of Science and Technology","marks":"\u003cp\u003e\u003cstrong\u003e\u003cspan style\u003d\"font-size:18px\"\u003e学术论文 | Publications:\u003c/span\u003e\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e\u003cspan style\u003d\"font-size:18px\"\u003e1.\u0026nbsp;Xinhao Cai, Qiuxia LAI, \u003cstrong\u003eYuwei Wang\u003c/strong\u003e, Wenguan Wang, Zeren Sun, Yazhou Yao*, \u0026quot;Poly Kernel Inception Network for Remote Sensing Detection\u0026quot;, IEEE Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2024.\u003c/span\u003e\u003c/p\u003e","imgname":"122106222797 王钰伟.jpg","imgdownname":"files/members/9d8cc060-dead-4cad-b897-5ced2ed5bf62.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/9d8cc060-dead-4cad-b897-5ced2ed5bf62.jpg","userid":1,"username":"admin","createtime":"2022-09-04 23:09:17","updatetime":"2025-06-24 11:40:56","deletetime":"","flag":1,"index":23},{"id":75,"membername":"宋春颖","roletype":16,"tutortype":3,"isboss":5,"major":"机器学习","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"cbab386b-95cd-459a-bf56-9ee7d008e414-removebg-preview.jpg","imgdownname":"files/members/7b45e720-a25a-4e92-b658-b46b205c8c70.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/7b45e720-a25a-4e92-b658-b46b205c8c70.jpg","userid":1,"username":"admin","createtime":"2023-11-11 16:12:03","updatetime":"2024-03-21 09:35:44","deletetime":"","flag":1,"index":24},{"id":76,"membername":"金舒文","roletype":16,"tutortype":3,"isboss":5,"major":"机器学习","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and 
Technology","marks":"","imgname":"微信图片_20231111161516.jpg","imgdownname":"files/members/96b89cd0-ed1c-4a65-938b-42f65c161225.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/96b89cd0-ed1c-4a65-938b-42f65c161225.jpg","userid":1,"username":"admin","createtime":"2023-11-11 16:18:13","updatetime":"","deletetime":"","flag":1,"index":25},{"id":77,"membername":"徐建强","roletype":16,"tutortype":3,"isboss":5,"major":"机器学习","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"\u003cp\u003e\u003cspan style\u003d\"box-sizing: border-box; font-weight: 700; color: rgb(85, 85, 85); white-space: pre-line;font-size:18px\"\u003e学术论文 | Publications:\u003c/span\u003e\u003c/p\u003e\u003cp\u003e1. \u003cstrong\u003eJianqiang Xu\u003c/strong\u003e, Gensheng Pei, Huafeng Liu, Yazhou Yao*, \u0026quot;GSV2X: Geometry-Aware Uncertainty Modeling and Orthogonal Fusion for Robust Roadside Perception\u0026quot;, IEEE/CVF Conference on Computer Vision and Pattern Recognition (\u003cstrong\u003eCVPR\u003c/strong\u003e), 2026\u003c/p\u003e","imgname":"b486b4f1-75d3-4584-b6aa-fcb8e8dc40a6-removebg-preview (1).jpg","imgdownname":"files/members/1054e290-f89b-4ee1-b43f-21f273824adf.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/1054e290-f89b-4ee1-b43f-21f273824adf.jpg","userid":1,"username":"admin","createtime":"2023-11-11 16:34:44","updatetime":"2026-02-21 19:16:38","deletetime":"","flag":1,"index":26},{"id":78,"membername":"彭世昱","roletype":16,"tutortype":3,"isboss":5,"major":"计算机视觉、机器学习","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and 
Technology","marks":"","imgname":"8aab7550-e367-4991-86f9-1b8e04dd0c54-removebg-preview.jpg","imgdownname":"files/members/0b70e3da-e402-43eb-8673-2a46a614c3c7.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/0b70e3da-e402-43eb-8673-2a46a614c3c7.jpg","userid":1,"username":"admin","createtime":"2023-11-11 16:36:29","updatetime":"2024-03-21 09:36:11","deletetime":"","flag":1,"index":27},{"id":80,"membername":"段宇浪","roletype":16,"tutortype":3,"isboss":5,"major":"计算机视觉、模式识别","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"段宇浪.jpg","imgdownname":"files/members/c3956551-bd8b-4a67-88a0-a5bd95034661.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/c3956551-bd8b-4a67-88a0-a5bd95034661.jpg","userid":1,"username":"admin","createtime":"2024-04-04 10:28:07","updatetime":"","deletetime":"","flag":1,"index":28},{"id":81,"membername":"刘瑞阳","roletype":16,"tutortype":3,"isboss":5,"major":"计算机视觉、多媒体技术、机器学习 | Computer Vision, Multimedia, Machine Learning","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"刘瑞阳.jpg","imgdownname":"files/members/d354dafa-750e-49fe-99a5-49658610e0e2.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/d354dafa-750e-49fe-99a5-49658610e0e2.jpg","userid":1,"username":"admin","createtime":"2024-04-04 10:28:39","updatetime":"","deletetime":"","flag":1,"index":29},{"id":83,"membername":"孙雪婷","roletype":16,"tutortype":3,"isboss":5,"major":"计算机视觉、多媒体技术、标签噪声学习 | Computer Vision, Multimedia, Label Noise Learning","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and 
Technology","marks":"","imgname":"孙雪婷.JPG","imgdownname":"files/members/d75b0da1-ec88-47aa-8623-659d9c1366b7.JPG","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/d75b0da1-ec88-47aa-8623-659d9c1366b7.JPG","userid":1,"username":"admin","createtime":"2024-04-04 10:29:57","updatetime":"","deletetime":"","flag":1,"index":30},{"id":84,"membername":"邢佳薇","roletype":16,"tutortype":3,"isboss":5,"major":"计算机视觉、模式识别","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"邢佳薇.jpg","imgdownname":"files/members/0b0a8da8-d41f-401a-bc50-9ab6320f133e.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/0b0a8da8-d41f-401a-bc50-9ab6320f133e.jpg","userid":1,"username":"admin","createtime":"2024-04-04 10:30:33","updatetime":"","deletetime":"","flag":1,"index":31},{"id":85,"membername":"徐家炜","roletype":16,"tutortype":3,"isboss":5,"major":"计算机视觉、模式识别","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"徐家炜.jpg","imgdownname":"files/members/31d2e0e5-a4f7-42b5-a98a-c0599085fb1e.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/31d2e0e5-a4f7-42b5-a98a-c0599085fb1e.jpg","userid":1,"username":"admin","createtime":"2024-04-04 10:31:04","updatetime":"","deletetime":"","flag":1,"index":32},{"id":86,"membername":"杨明","roletype":16,"tutortype":3,"isboss":5,"major":"计算机视觉、模式识别、机器学习","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"杨明.jpg","imgdownname":"files/members/a8cddbea-521e-49f7-ac81-52b6317e460b.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/a8cddbea-521e-49f7-ac81-52b6317e460b.jpg","userid":1,"username":"admin","createtime":"2024-04-04 
10:31:32","updatetime":"","deletetime":"","flag":1,"index":33},{"id":87,"membername":"姚瑶","roletype":16,"tutortype":3,"isboss":5,"major":"计算机视觉、多媒体技术、标签噪声学习 | Computer Vision, Multimedia, Label Noise Learning","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"姚瑶.jpg","imgdownname":"files/members/46b0d5fb-2288-4f5b-8af2-71104e56cd58.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/46b0d5fb-2288-4f5b-8af2-71104e56cd58.jpg","userid":1,"username":"admin","createtime":"2024-04-04 10:32:21","updatetime":"","deletetime":"","flag":1,"index":34},{"id":89,"membername":"张铝","roletype":16,"tutortype":3,"isboss":5,"major":"计算机视觉、多媒体技术、机器学习 | Computer Vision, Multimedia, Machine Learning","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"张铝.jpg","imgdownname":"files/members/3ef50f45-cf73-4d88-9111-57bd7f14ae9c.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/3ef50f45-cf73-4d88-9111-57bd7f14ae9c.jpg","userid":1,"username":"admin","createtime":"2024-04-04 10:34:26","updatetime":"","deletetime":"","flag":1,"index":35},{"id":90,"membername":"施少煌","roletype":16,"tutortype":3,"isboss":5,"major":"计算机视觉、模式识别","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"微信图片_20240407182042.jpg","imgdownname":"files/members/1e230974-6499-40a4-bbdc-d40bb683bc28.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/1e230974-6499-40a4-bbdc-d40bb683bc28.jpg","userid":1,"username":"admin","createtime":"2024-04-07 
18:21:10","updatetime":"","deletetime":"","flag":1,"index":36},{"id":100,"membername":"陈培玺","roletype":16,"tutortype":3,"isboss":5,"major":"无人系统","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"23级 陈培玺 石.png","imgdownname":"files/members/d106e139-bfed-4003-98dd-2abd36dd82d7.png","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/d106e139-bfed-4003-98dd-2abd36dd82d7.png","userid":1,"username":"admin","createtime":"2025-09-18 19:52:33","updatetime":"","deletetime":"","flag":1,"index":37},{"id":101,"membername":"王晨宇","roletype":16,"tutortype":3,"isboss":5,"major":"无人系统","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"23-王晨宇-石.jpg","imgdownname":"files/members/ceb76f28-38d0-4d00-8046-4a95997605da.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/ceb76f28-38d0-4d00-8046-4a95997605da.jpg","userid":1,"username":"admin","createtime":"2025-09-18 19:53:17","updatetime":"","deletetime":"","flag":1,"index":38},{"id":102,"membername":"边超","roletype":16,"tutortype":3,"isboss":5,"major":"无人系统","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"边超 石 研二.jpg","imgdownname":"files/members/dbf33501-674e-434a-8854-aaa490640086.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/dbf33501-674e-434a-8854-aaa490640086.jpg","userid":1,"username":"admin","createtime":"2025-09-18 19:54:08","updatetime":"","deletetime":"","flag":1,"index":39},{"id":103,"membername":"陈明华","roletype":16,"tutortype":3,"isboss":5,"major":"机器学习","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and 
Technology","marks":"","imgname":"陈明华.png","imgdownname":"files/members/da8c11a6-7f94-45f6-b30f-3b540e46609e.png","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/da8c11a6-7f94-45f6-b30f-3b540e46609e.png","userid":1,"username":"admin","createtime":"2025-09-18 19:54:40","updatetime":"","deletetime":"","flag":1,"index":40},{"id":104,"membername":"陈颖","roletype":16,"tutortype":3,"isboss":5,"major":"机器学习","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"陈颖.jpg","imgdownname":"files/members/9f2fe44c-ed87-4e9e-b8ca-12ab5a74c309.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/9f2fe44c-ed87-4e9e-b8ca-12ab5a74c309.jpg","userid":1,"username":"admin","createtime":"2025-09-18 19:55:03","updatetime":"","deletetime":"","flag":1,"index":41},{"id":105,"membername":"邓宁宁","roletype":16,"tutortype":3,"isboss":5,"major":"无人系统","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"邓宁宁 石 研二.jpg","imgdownname":"files/members/b191eba5-208b-4be7-acdc-d3db1a45fae2.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/b191eba5-208b-4be7-acdc-d3db1a45fae2.jpg","userid":1,"username":"admin","createtime":"2025-09-18 19:56:05","updatetime":"","deletetime":"","flag":1,"index":42},{"id":106,"membername":"丰晴","roletype":16,"tutortype":3,"isboss":5,"major":"机器学习","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"丰晴.jpg","imgdownname":"files/members/dfb80e86-fbef-4035-a7f6-0a0421692d6d.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/dfb80e86-fbef-4035-a7f6-0a0421692d6d.jpg","userid":1,"username":"admin","createtime":"2025-09-18 
20:00:03","updatetime":"","deletetime":"","flag":1,"index":43},{"id":107,"membername":"付嘉骏","roletype":16,"tutortype":3,"isboss":5,"major":"机器学习 | Machine Learning","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"付嘉骏.jpg","imgdownname":"files/members/064e8a41-927e-4e22-a025-c55fb4e9c131.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/064e8a41-927e-4e22-a025-c55fb4e9c131.jpg","userid":1,"username":"admin","createtime":"2025-09-18 20:01:00","updatetime":"","deletetime":"","flag":1,"index":44},{"id":108,"membername":"黄祥","roletype":16,"tutortype":3,"isboss":5,"major":"机器学习 | Machine Learning","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"黄祥 石.jpg","imgdownname":"files/members/906a60c9-6f6d-4ae5-93df-1e5569035403.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/906a60c9-6f6d-4ae5-93df-1e5569035403.jpg","userid":1,"username":"admin","createtime":"2025-09-18 20:01:48","updatetime":"","deletetime":"","flag":1,"index":45},{"id":109,"membername":"李娜","roletype":16,"tutortype":3,"isboss":5,"major":"机器学习 | Machine Learning","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"李娜.jpg","imgdownname":"files/members/30176d3f-13dc-485b-94d6-4d48f155d0f0.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/30176d3f-13dc-485b-94d6-4d48f155d0f0.jpg","userid":1,"username":"admin","createtime":"2025-09-18 20:02:18","updatetime":"","deletetime":"","flag":1,"index":46},{"id":110,"membername":"李世龙","roletype":16,"tutortype":3,"isboss":5,"major":"端到端自动驾驶、强化学习 | End-to-End Autonomous Driving, Reinforcement Learning","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and 
Technology","marks":"","imgname":"李世龙 石 研二.png","imgdownname":"files/members/219cac37-e9cc-4519-b68c-2165f92f6cd7.png","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/219cac37-e9cc-4519-b68c-2165f92f6cd7.png","userid":1,"username":"admin","createtime":"2025-09-18 20:03:16","updatetime":"","deletetime":"","flag":1,"index":47},{"id":111,"membername":"钱海源","roletype":16,"tutortype":3,"isboss":5,"major":"语义分割 | Semantic Segmentation","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"钱海源.jpg","imgdownname":"files/members/b33bd886-333b-4a9b-8a78-7cd16de8e586.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/b33bd886-333b-4a9b-8a78-7cd16de8e586.jpg","userid":1,"username":"admin","createtime":"2025-09-18 20:06:11","updatetime":"","deletetime":"","flag":1,"index":48},{"id":112,"membername":"张恭溥","roletype":16,"tutortype":3,"isboss":5,"major":"无人系统 | Unmanned Systems","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"石 研三 张恭溥.jpg","imgdownname":"files/members/7c6da272-4a5b-4b3e-85b9-d9b4a7db82de.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/7c6da272-4a5b-4b3e-85b9-d9b4a7db82de.jpg","userid":1,"username":"admin","createtime":"2025-09-18 20:06:56","updatetime":"","deletetime":"","flag":1,"index":49},{"id":113,"membername":"石金轩","roletype":16,"tutortype":3,"isboss":5,"major":"机器学习 | Machine Learning","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"石金轩.jpg","imgdownname":"files/members/b86b95ae-a4d7-4dda-8fd3-8fc4ffffbd13.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/b86b95ae-a4d7-4dda-8fd3-8fc4ffffbd13.jpg","userid":1,"username":"admin","createtime":"2025-09-18 
20:07:21","updatetime":"","deletetime":"","flag":1,"index":50},{"id":114,"membername":"孙靖辉","roletype":16,"tutortype":3,"isboss":5,"major":"机器学习 | Machine Learning","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"孙靖辉.png","imgdownname":"files/members/9a39295c-2fca-4f6d-b70f-f29bd6645e7a.png","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/9a39295c-2fca-4f6d-b70f-f29bd6645e7a.png","userid":1,"username":"admin","createtime":"2025-09-18 20:08:22","updatetime":"","deletetime":"","flag":1,"index":51},{"id":115,"membername":"吴照","roletype":16,"tutortype":3,"isboss":5,"major":"无人系统 | Unmanned Systems","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"吴照 研一 石.jpg","imgdownname":"files/members/bc31600a-cc90-4c81-9daa-1cee7e56e6fb.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/bc31600a-cc90-4c81-9daa-1cee7e56e6fb.jpg","userid":1,"username":"admin","createtime":"2025-09-18 20:08:47","updatetime":"","deletetime":"","flag":1,"index":52},{"id":116,"membername":"杨子建","roletype":16,"tutortype":3,"isboss":5,"major":"机器学习 | Machine Learning","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"杨子健.png","imgdownname":"files/members/471f0851-9dd2-4604-a12f-76ca4d0f4197.png","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/471f0851-9dd2-4604-a12f-76ca4d0f4197.png","userid":1,"username":"admin","createtime":"2025-09-18 20:09:11","updatetime":"2025-09-20 22:18:16","deletetime":"","flag":1,"index":53},{"id":117,"membername":"张忠诚","roletype":16,"tutortype":3,"isboss":5,"major":"机器学习 | Machine Learning","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and 
Technology","marks":"","imgname":"张忠诚.jpg","imgdownname":"files/members/35543021-0372-4858-9ad2-199d05fcc860.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/35543021-0372-4858-9ad2-199d05fcc860.jpg","userid":1,"username":"admin","createtime":"2025-09-18 20:09:53","updatetime":"","deletetime":"","flag":1,"index":54},{"id":118,"membername":"章亚宁","roletype":16,"tutortype":3,"isboss":5,"major":"机器学习 | Machine Learning","email":"","college":"计算机科学与工程学院 | School of Computer Science and Engineering","school":"南京理工大学 | Nanjing University of Science and Technology","marks":"","imgname":"章亚宁.jpg","imgdownname":"files/members/5a839320-f99f-4b0d-aac2-00c5d4c29246.jpg","imageaddress":"C:\\apache-tomcat-8.0.53\\webapps\\milab\\files\\members/5a839320-f99f-4b0d-aac2-00c5d4c29246.jpg","userid":1,"username":"admin","createtime":"2025-09-18 20:10:27","updatetime":"","deletetime":"","flag":1,"index":55}]