2024 3rd International Conference on Image Processing, Computer Vision and Machine Learning
Speakers



Prof. Hsiao-Hwa Chen, National Cheng Kung University

IEEE Fellow, IET Fellow, BCS Fellow, AAIA Fellow

Hsiao-Hwa Chen (S'89-M'91-SM'00-F'10) is currently a Distinguished Professor in the Department of Engineering Science, National Cheng Kung University, Taiwan. He received his BSc and MSc degrees from Zhejiang University, China, and his PhD degree from the University of Oulu, Finland, in 1982, 1985, and 1991, respectively. He has authored or co-authored over 500 technical papers in major international journals and conferences, six books, and more than ten book chapters in the area of communications. He served as TPC Chair for IEEE GLOBECOM 2019 and was the founding Editor-in-Chief of Wiley’s Security and Communication Networks journal. He is the recipient of the 2021 IEEE Systems Journal Best Paper Award and the 2016 IEEE Jack Neubauer Memorial Award. He served as Editor-in-Chief of IEEE Wireless Communications from 2012 to 2015 and was an elected Member-at-Large of the IEEE Communications Society from 2015 to 2016. He is a Fellow of IEEE, IET, BCS, and AAIA. His H-index is 85.

Speech title: 6G and Beyond Wireless Communications – Technical Issues, Challenges and Applications

Abstract: This talk will survey prospective enabling technologies for implementing sixth-generation (6G) and beyond wireless communication systems, and will identify and discuss the main technical issues, challenges, and applications of those technologies. The talk will begin with an introduction to the major technical requirements of upcoming 6G wireless communications and the background of its current development. It will then outline systems that apply high-frequency spectra to implement 6G and beyond wireless communications, followed by discussions of 6G network architecture, feasible physical-layer technologies, effective communication techniques, and other related technologies.



Prof. Jie Yang, Shanghai Jiao Tong University, China

Jie Yang received his bachelor’s degree in Automatic Control from Shanghai Jiao Tong University (SJTU) and, three years later, his master’s degree in Pattern Recognition and Intelligent Systems from the same university. In 1994, he received his Ph.D. from the Department of Computer Science, University of Hamburg, Germany. He is now a Professor and the Director of the Institute of Image Processing and Pattern Recognition at Shanghai Jiao Tong University. He has been the principal investigator of more than 30 national and ministry-level research projects in image processing, pattern recognition, data mining, and artificial intelligence. He has published six books and more than five hundred articles in national and international academic journals and conferences, with over 22,200 Google Scholar citations and an H-index of 80. To date, he has supervised 5 postdoctoral researchers, 46 doctoral students, and 70 master’s students, and has received six research achievement prizes from the Ministry of Education of China and the Shanghai municipality. Two Ph.D. dissertations he supervised were selected as “National Best Ph.D. Dissertation” in 2009 and 2017, and two were selected as “Shanghai Best Ph.D. Dissertation” in 2009 and 2010. He holds 48 patents.

Speech title: Training Deep Neural Networks in Tiny Subspaces

Abstract: Deep neural networks (DNNs) usually contain massive numbers of parameters, but this redundancy suggests that they could be trained in low-dimensional subspaces. In this talk, we propose Dynamic Linear Dimensionality Reduction (DLDR), which exploits the low-dimensional properties of the training trajectory. The reduction method is efficient, as supported by comprehensive experiments: optimizing DNNs in 40-dimensional spaces can achieve performance comparable to regular training over thousands or even millions of parameters. Since only a few variables need to be optimized, we develop an efficient quasi-Newton-based algorithm, obtain robustness to label noise, and improve the performance of well-trained models; these three follow-up experiments demonstrate the advantages of finding such low-dimensional subspaces.
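To make the subspace-training idea concrete, the sketch below is a minimal, hypothetical PyTorch illustration, not the speaker's DLDR or quasi-Newton implementation. It assumes a warm-up training run has already produced a list of flattened parameter snapshots, extracts a 40-dimensional basis from that trajectory via SVD, and then optimizes only the 40 subspace coordinates with plain SGD; all function names and hyperparameters are illustrative.

```python
# Sketch of subspace training in the spirit of DLDR (illustrative, not the authors' code).
import torch


def flatten_params(model):
    """Concatenate all model parameters into a single 1-D vector."""
    return torch.cat([p.detach().reshape(-1) for p in model.parameters()])


def assign_params(model, flat):
    """Copy a flat parameter vector back into the model, parameter by parameter."""
    offset = 0
    with torch.no_grad():
        for p in model.parameters():
            n = p.numel()
            p.copy_(flat[offset:offset + n].view_as(p))
            offset += n


def trajectory_basis(snapshots, k=40):
    """PCA of the training trajectory; snapshots is a list of flat parameter vectors (k <= len(snapshots))."""
    W = torch.stack(snapshots)                      # (num_snapshots, num_params)
    mean = W.mean(dim=0)
    _, _, Vh = torch.linalg.svd(W - mean, full_matrices=False)
    return mean, Vh[:k].T                           # mean: (d,), basis P: (d, k)


def train_in_subspace(model, loss_fn, loader, mean, P, epochs=5, lr=1.0):
    """Optimize only the k subspace coordinates w; full parameters are theta = mean + P @ w."""
    w = torch.zeros(P.shape[1], requires_grad=True)
    opt = torch.optim.SGD([w], lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                assign_params(model, mean + P @ w)  # map subspace coordinates to weights
            loss = loss_fn(model(x), y)
            grads = torch.autograd.grad(loss, list(model.parameters()))
            flat_grad = torch.cat([g.reshape(-1) for g in grads])
            opt.zero_grad()
            w.grad = P.T @ flat_grad                # chain rule: dL/dw = P^T dL/dtheta
            opt.step()
    with torch.no_grad():
        assign_params(model, mean + P @ w)
    return model
```

In this sketch the snapshots would be collected during a short ordinary training run, for example by appending flatten_params(model) once per epoch, and k = 40 mirrors the 40-dimensional subspace mentioned in the abstract.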

