2025 9th International Conference on Electronic Information Technology and Computer Engineering
Invited Speakers


Prof. Shangce Gao, University of Toyama, Japan

Shangce Gao, Professor, University of Toyama, Japan (IEEE Senior Member; World's Top 2% Scientists)

Shangce Gao is a Chair Professor at the Artificial Intelligence Laboratory, Faculty of Engineering, University of Toyama, Japan. He earned his B.S. degree from Southeast University in 2005 and his Ph.D. from the University of Toyama in 2011. From April 2011, he served as an Associate Professor at Tongji University, and since April 2014, he has held a tenured Associate Professorship at the University of Toyama. In 2023, he was exceptionally promoted to Chair Professor, becoming the youngest person in Japan to receive such a distinction. His primary research interests include neural networks, evolutionary computation, and their applications. He has published over 200 SCI-indexed journal articles in leading venues such as IEEE TNNLS and TEVC, as well as more than 100 conference papers in premier forums including ACM SIGGRAPH and IJCAI. He serves as an Associate Editor and has been recognized as the Annual Best Editor for journals such as IEEE TNNLS and IEEE/CAA JAS.


Title: Towards Solving Big Problems with Ultra-Miniature Neuron Models

Abstract: Large-scale models exemplified by ChatGPT have achieved remarkable success in fields such as text processing and image generation, but the substantial energy consumption required for their training and deployment has raised widespread concern. This presentation focuses on biologically plausible, energy-efficient small-scale models, aiming to overcome current bottlenecks in deep learning related to fundamental mechanisms, models, and algorithms. Starting from macro-level evolutionary theory and micro-level single-neuron structures, we propose an innovative dendritic learning theory based on dendritic neuron models. By integrating intelligent optimization and learning algorithms, we implement evolutionary dendritic learning and examine its key performance metrics, including computational capacity, associative memory, learning efficiency, classification ability, and prediction accuracy. Finally, the proposed model is applied to large-scale, complex big-data analysis and mining tasks, demonstrating its potential and advantages in real-world applications.
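To give a flavor of the dendritic neuron models the abstract refers to, the sketch below shows one common layered structure (sigmoid synapses, multiplicative dendritic branches, a summing membrane, and a sigmoid soma). It is an illustrative toy only, not the speaker's implementation; all parameter names, values, and the fixed soma threshold of 0.5 are assumptions.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class DendriticNeuron:
    """Toy dendritic neuron: synaptic sigmoid gates feed multiplicative
    dendritic branches; the membrane sums the branches and the soma fires
    through a final sigmoid. Purely illustrative."""

    def __init__(self, n_inputs, n_branches, k=5.0, seed=0):
        rnd = random.Random(seed)
        self.k = k  # steepness of every sigmoid gate
        self.w = [[rnd.gauss(0, 1) for _ in range(n_inputs)]
                  for _ in range(n_branches)]  # synaptic weights
        self.q = [[rnd.gauss(0, 1) for _ in range(n_inputs)]
                  for _ in range(n_branches)]  # synaptic thresholds

    def forward(self, x):
        v = 0.0
        for wb, qb in zip(self.w, self.q):
            z = 1.0
            for xi, w, q in zip(x, wb, qb):
                z *= sigmoid(self.k * (w * xi - q))  # synaptic gate
            v += z  # membrane sums the branch outputs
        return sigmoid(self.k * (v - 0.5))  # soma output in (0, 1)

neuron = DendriticNeuron(n_inputs=3, n_branches=4)
out = neuron.forward([0.2, 0.7, 0.1])
```

In the dendritic-learning setting described in the talk, the weights and thresholds would be tuned by learning or evolutionary optimization rather than left at their random initial values.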




Prof. Qinglin Zhao, Macau University of Science and Technology, China

Qinglin Zhao, Professor, Macau University of Science and Technology

Prof. Qinglin Zhao, IEEE Senior Member and China CCF Member, is a Professor with the School of Computer Science and Engineering (SCSE) at the Macau University of Science and Technology (MUST), Macau. He received his B.S. degree from Hubei University, Wuhan, China in 1998, his M.S. degree from Huazhong University of Science and Technology, Wuhan in 2001, and his Ph.D. degree from the Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China in 2005. Following his doctoral studies, he worked as a Postdoctoral Researcher with The Chinese University of Hong Kong and the Hong Kong University of Science and Technology from May 2005 to August 2009, before joining SCSE, MUST in September 2009. His research interests encompass machine learning, blockchain and Web 3.0, Internet of Things, wireless communications and networking, edge computing, and quantum machine learning. Prof. Zhao has published more than 100 peer-reviewed papers, including 30 IEEE Transactions papers, and holds more than 30 patents, including 8 U.S. patents. He received the BOC Excellent Research Award from Macau University of Science and Technology in both 2011 and 2015. Currently, he serves as an Associate Editor for IEEE Transactions on Mobile Computing and IET Communications, General Chair of the 34th IEEE Wireless and Optical Communications Conference (WOCC 2025), and as an expert review committee member for the Guangdong Science and Technology Award. 


Title: Enabling Reliable Cross-Technology Communication through General and Offset-Resistant Physical-Layer Acknowledgement

Abstract: Cross-technology communication (CTC) enables direct communication among heterogeneous wireless devices (Bluetooth, WiFi, ZigBee) but suffers from inherent unreliability due to technology heterogeneity. Most existing CTC designs lack acknowledgment mechanisms, and the few available solutions are limited to specific technology pairs and vulnerable to sampling offsets. In this talk, I will present GOP-ACK, a general and offset-resistant physical-layer acknowledgment approach that addresses these critical limitations. GOP-ACK uses offset-resistant signal encoding that provides broad applicability across diverse CTC scenarios and robust detection under sampling offset conditions. I will discuss our practical design guidelines and demonstrate two implementation cases: ZigBee-to-BLE and ZigBee-to-WiFi feedback. Our experimental validation shows that GOP-ACK significantly outperforms existing approaches, making cross-technology communication more practical for IoT and heterogeneous wireless networks.
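The abstract's notion of offset-resistant detection can be illustrated generically: if the receiver slides a known acknowledgement pattern over the incoming samples and checks the normalized correlation at every candidate offset, an unknown sampling offset merely shifts the correlation peak instead of destroying it. The code below is a deliberately simplified toy, not GOP-ACK's actual encoding or detector; the pattern, threshold, and noise model are all assumptions.

```python
import random

# Illustrative +/-1 code sequence standing in for a physical-layer ACK pattern
ACK_PATTERN = [1, -1, 1, 1, -1, -1, 1, -1, 1, 1, -1, 1]

def detect_ack(samples, pattern=ACK_PATTERN, threshold=0.7):
    """Slide the known pattern over the samples; a normalized correlation
    peak above `threshold` at ANY offset counts as an ACK, so a sampling
    offset only moves the peak rather than breaking detection."""
    n = len(pattern)
    best = 0.0
    for off in range(len(samples) - n + 1):
        c = sum(p * s for p, s in zip(pattern, samples[off:off + n])) / n
        best = max(best, c)
    return best >= threshold

# Simulate an ACK arriving at an unknown offset with mild additive noise
rnd = random.Random(1)
rx = ([rnd.uniform(-0.2, 0.2) for _ in range(5)]
      + [p + rnd.uniform(-0.2, 0.2) for p in ACK_PATTERN]
      + [rnd.uniform(-0.2, 0.2) for _ in range(5)])
```

With per-sample noise bounded by 0.2, the correlation at the true offset stays at or above 0.8, so detection succeeds regardless of where the pattern lands in the buffer.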




Prof. Xianling Mao, Beijing Institute of Technology, China

Xianling Mao, Professor, Beijing Institute of Technology (National-Level Young Talent)

Xian-Ling Mao is a Professor and Ph.D. Supervisor at Beijing Institute of Technology. His main research focuses on information retrieval and data mining. He has published over 60 papers in top-tier conferences and journals, including SIGIR, WWW, SIGMOD, ACL, TOIS, and TKDE. He led the research and development of a series of scientific big-data mining systems, such as Hammer PDF. His work has been recognized with 2 Best Paper Awards and 3 provincial/ministerial-level Science and Technology Progress Awards.


Title: Automatic Evaluation Methods for Generated Content Quality

Abstract: Generative Artificial Intelligence is experiencing rapid development, and the evaluation of generated content quality has become crucial for the high-quality advancement of this field. This talk will systematically review existing methods and introduce our research group's related studies on this topic.





Prof. Huaqing Li, Southwest University, China

Huaqing Li, Professor, Southwest University (Deputy Dean of the Graduate School; National-Level Young Talent; IET Fellow; World's Top 2% Scientists)

Huaqing Li is Deputy Dean of the Graduate School of Southwest University and a Professor and doctoral supervisor at the College of Electronic and Information Engineering. He is a Young Changjiang Scholar, a Fellow of the Institution of Engineering and Technology (IET Fellow), an academic leader in Chongqing, principal investigator of the Outstanding Young Scientist Fund of Chongqing, and a top-notch young talent of Chongqing. He has been selected as a Highly Cited Chinese Researcher (Elsevier), one of the top 100,000 scientists in the world, and one of the World's Top 2% Scientists (Stanford University). He has proposed a series of theories and methods for the cooperative operation of multi-agent systems in adverse environments, published more than 70 papers in IEEE Transactions, undertaken three national projects, and formulated one standard. He has won the first and second prizes of the Chongqing Natural Science Award, the first prize of the Innovation Award of the Chongqing Association for Science and Technology, the second prize of the Innovation Award of the China Institute of Command and Control, and the second prize of the Natural Science Award of the Chinese Association of Automation.


Title: Design and Application of Distributed Composite Optimization Algorithm Based on Operator Splitting

Abstract: Distributed optimization theory and methods, as an effective way of making optimal decisions, have important research value and wide applications in the fields of large-scale complex data analysis, machine learning, optimal scheduling of energy systems, and game theory. Most existing distributed optimization algorithms are only applicable to optimization problems with a single objective function, and still face certain challenges in solving practical engineering problems with complex optimization structures and physical constraints. This talk will introduce the design ideas of fully distributed optimization algorithms based on operator splitting theory for the composite optimization problem with triple convex functions. Then, the convergence analysis framework of the algorithm is constructed using operator theory. Finally, the effectiveness of the proposed distributed approach is verified by several simulations.
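As background for the triple-composite setting the abstract mentions, one classical operator-splitting scheme for minimizing f(x) + g(x) + h(x) (f, g convex with cheap proximal operators, h smooth) is Davis-Yin three-operator splitting. The sketch below is a centralized toy on a one-dimensional problem, shown only to illustrate the iteration structure; it is not the speaker's distributed algorithm, and the specific f, g, h, step size, and iteration count are assumptions.

```python
def prox_l1(v, t):
    """Proximal operator of t*|x| (soft-thresholding)."""
    return max(v - t, 0.0) if v > 0 else min(v + t, 0.0)

def proj_box(v, lo, hi):
    """Proximal operator of the indicator of [lo, hi] (projection)."""
    return min(max(v, lo), hi)

def davis_yin(z=0.0, gamma=0.5, lam=0.5, iters=200):
    """Minimize f(x) + g(x) + h(x) with
       f = indicator of [0, 2]   (handled by projection),
       g = lam * |x|             (handled by soft-thresholding),
       h(x) = 0.5 * (x - 3)^2    (smooth, handled by its gradient).
    Each step touches each term through exactly one prox or gradient call."""
    xf = z
    for _ in range(iters):
        xg = prox_l1(z, gamma * lam)                      # prox of g
        grad_h = xg - 3.0                                 # gradient of h
        xf = proj_box(2 * xg - z - gamma * grad_h, 0.0, 2.0)  # prox of f
        z = z + xf - xg                                   # fixed-point update
    return xf

x_star = davis_yin()
```

For this instance the smooth term pulls toward 3 while the box constraint caps the solution at 2, so the iteration settles at x = 2; in the distributed setting of the talk, each agent would hold a local copy of such variables and reach consensus through network communication.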






Prof. Shuyin Xia, Chongqing University of Posts and Telecommunications, China

Shuyin Xia, Professor, Chongqing University of Posts and Telecommunications (Associate Dean of the School of Artificial Intelligence; Associate Dean of the Frontier Interdisciplinary Research Institute)

Shuyin Xia, Professor and Ph.D. Supervisor at Chongqing University of Posts and Telecommunications, serves as Associate Dean of the School of Artificial Intelligence, Associate Dean of the Frontier Interdisciplinary Research Institute, and Deputy Director of the Key Laboratory of Cyberspace Big Data Intelligent Computing, Ministry of Education. He has led multiple national-level key projects, including the National Key R&D Program, the NSFC Original Exploration Program, and the National Excellent Young Scientists Fund Project. Together with Professors Guoyin Wang and Xinbo Gao, he proposed and developed the Granular-ball Computing theory. His research achievements, published as first or corresponding author in top-tier AI journals and conferences such as IEEE TPAMI, TKDE, TNNLS, TCYB, ICML, AAAI, IJCAI, and ICDE, have earned him prestigious awards including the CCF Natural Science First Prize, the Wu Wenjun AI Science and Technology Progress First Prize, the Chongqing Natural Science First Prize, and the National Teaching Achievement Second Prize. His research focuses on Granular-ball Computing, Computer Vision, Machine Learning, Deep Learning, Evolutionary Computation, and Swarm Intelligence.


Title: Granular-ball Computing: An Efficient, Robust, and Explainable New Artificial Intelligence Theory

Abstract: Current artificial intelligence methods primarily rely on the finest-grained pixels/single-granularity representations, lacking the inherent efficiency, robustness, and explainable conceptual descriptive capabilities of the human brain's "large-scale first" cognition. To address this, Granular-ball Computing has been proposed and developed based on multi-granularity cognitive computing theory. This theory simulates the human brain's "coarse-to-fine" cognitive mechanism by generating representations from coarse to fine granularity. It uses granular-balls of varying sizes to cover data samples, achieving adaptive and efficient multi-granularity conceptual representations. Furthermore, it constructs novel computing paradigms based on granular-balls, delivering higher efficiency, robustness, and explainability compared to traditional AI methods. Currently, Granular-ball Computing has garnered significant attention not only from renowned domestic scholars but also from leading international researchers at institutions such as the University of Michigan, the Indian Institute of Technology, and the University of Alberta. This presentation introduces the theoretical advancements and latest developments in Granular-ball Computing, including Granular-ball Classifiers, Granular-ball Clustering, Granular-ball Graph Networks, Granular-ball Reinforcement Learning, Granular-ball Large Models, Granular-ball Evolutionary Computation, Granular-ball Open Continual Learning, Granular-ball Federated Learning, Granular-ball Rough Sets, Granular-ball Fuzzy Sets, Granular-ball Three-Way Decision, Granular-ball Superpixels, and Granular-ball NLP. These innovations demonstrate the broad applicability and transformative potential of Granular-ball Computing across diverse AI domains.
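The coarse-to-fine covering idea in the abstract can be sketched as a simple recursive procedure: start with one ball covering all samples, and split any ball whose label purity is too low (here via a minimal 2-means step). This is a toy illustration of the general idea, not the speakers' algorithm; the purity threshold, the splitting rule, and the demo data are assumptions.

```python
import random

def purity(ball):
    """Fraction of the majority label among (features, label) samples."""
    labels = [lab for _, lab in ball]
    return max(labels.count(l) for l in set(labels)) / len(ball)

def split(ball, seed=0):
    """Split a ball into two with a few Lloyd iterations of 2-means."""
    rnd = random.Random(seed)
    centers = [f for f, _ in rnd.sample(ball, 2)]
    parts = (ball, [])
    for _ in range(10):
        parts = ([], [])
        for f, lab in ball:
            d = [sum((a - b) ** 2 for a, b in zip(f, c)) for c in centers]
            parts[0 if d[0] <= d[1] else 1].append((f, lab))
        if not parts[0] or not parts[1]:  # degenerate split: just halve
            return [ball[:len(ball) // 2], ball[len(ball) // 2:]]
        centers = [tuple(sum(x) / len(p) for x in zip(*[f for f, _ in p]))
                   for p in parts]
    return [parts[0], parts[1]]

def granular_balls(samples, min_purity=0.95):
    """Coarse-to-fine covering: keep splitting any low-purity ball."""
    queue, balls = [list(samples)], []
    while queue:
        ball = queue.pop()
        if len(ball) <= 1 or purity(ball) >= min_purity:
            balls.append(ball)
        else:
            queue.extend(split(ball))
    return balls

# Two well-separated labeled clusters: one coarse ball per cluster suffices
data = ([((i * 0.1, 0.0), 'a') for i in range(10)]
        + [((5.0 + i * 0.1, 5.0), 'b') for i in range(10)])
balls = granular_balls(data)
```

The adaptivity shows up in the stopping rule: clean regions stay covered by large (coarse) balls, while mixed regions are refined into progressively smaller ones.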





Prof. Pengwei Hu, Xinjiang Technical Institute of Physics and Chemistry, Chinese Academy of Sciences, China

Pengwei Hu, Professor, Xinjiang Technical Institute of Physics and Chemistry, Chinese Academy of Sciences (National Overseas High-Level Young Talent)

Pengwei Hu received his Ph.D. in computer science from the Department of Computing, The Hong Kong Polytechnic University, Hong Kong, in 2019. From 2020 to 2022, he was a Lead Scientist with the Science and Technology Office, Merck KGaA. He is currently a Professor with the Xinjiang Technical Institute of Physics and Chemistry, Chinese Academy of Sciences, China. His research interests include organoids, machine learning, biomedical informatics, and biomedicine. He has published more than 100 papers at leading AI and bioinformatics conferences and journals.


Title: AI-Driven Analysis of Organoids for Exploring Miniature Twins

Abstract: As an emerging biomedical tool, organoid technology can generate miniature models with specific tissue or organ functions, which provides a powerful platform for disease research, drug screening and personalized medicine. The introduction of artificial intelligence has further improved the accuracy and efficiency of organoid research. Through computational vision and other technologies, accurate assessment and prediction of organoid function, morphology and maturity have been realized. This report introduces the latest application progress of artificial intelligence in organoid technology, including individual difference assessment, multi-omics analysis and open data platform construction, aiming to reveal the great potential of this interdisciplinary field and provide new ideas for future in vitro model research.




Prof. Lun Hu, Xinjiang Technical Institute of Physics and Chemistry, Chinese Academy of Sciences, China

Lun Hu, Professor, Xinjiang Technical Institute of Physics and Chemistry, Chinese Academy of Sciences (CAS High-Level Talent Program member; Xinjiang "Tianshan Talent" Leading Talent in Science and Technology Innovation)

Lun Hu (Senior Member, IEEE) received the B.S. degree in automation from Huazhong University of Science and Technology, Wuhan, China, in 2006, and the M.Sc. degree in information technology and the Ph.D. degree in computer science from The Hong Kong Polytechnic University, Hong Kong, China, in 2008 and 2015, respectively. He is currently a Professor of Computer Science with the Xinjiang Technical Institute of Physics & Chemistry of the Chinese Academy of Sciences, Urumqi, China. He has authored or co-authored over 100 papers in the areas of his interests. His research interests include graph machine learning and knowledge reasoning, as well as their applications in bioinformatics. Dr. Hu received the Best Paper Award at the International Conference on Intelligent Computing in 2023. He currently serves on the editorial boards of several international journals, including PLOS Computational Biology and Scientific Reports. His Google Scholar page is https://scholar.google.com/citations?hl=en&user=KshzgDIAAAAJ.


Title: Interpretable Identification of Cancer Genes across Biological Networks via Transformer-Powered Graph Representation Learning

Abstract: Graph representation learning has been leveraged to identify cancer genes from biological networks. However, its applicability is limited by insufficient interpretability and generalizability under integrative network analysis. Here we report the development of an interpretable and generalizable transformer-based model that accurately predicts cancer genes by leveraging graph representation learning and the integration of multi-omics data with the topologies of homogeneous and heterogeneous networks of biological interactions. The model allows for the interpretation of the respective importance of multi-omic and higher-order structural features. It achieved state-of-the-art performance in the prediction of cancer genes across biological networks (including networks of interactions between miRNA and proteins, transcription factors and proteins, and transcription factors and miRNA) in pan-cancer and cancer-specific scenarios, and predicted 57 cancer-gene candidates (including three genes that had not been identified by other models) among 4,729 unlabelled genes across 8 pan-cancer datasets. The model's interpretability and generalization may facilitate the understanding of gene-related regulatory mechanisms and the discovery of new cancer genes.