2024 World AI IoT Congress

                                       RESEARCH KEYNOTE SERIES

V John Mathews

(Professor, Oregon State University, USA)

Bio: V John Mathews is a professor in the School of Electrical Engineering and Computer Science at Oregon State University. He received his Ph.D. and M.S. degrees in electrical and computer engineering from the University of Iowa, Iowa City, Iowa in 1984 and 1981, respectively, and the B.E. (Hons.) degree in electronics and communication engineering from the Regional Engineering College (now National Institute of Technology), Tiruchirappalli, India in 1980. Prior to 2015, he was with the Department of Electrical & Computer Engineering at the University of Utah. He served as the chairman of the ECE department at Utah from 1999 to 2003, and as the head of the School of Electrical Engineering and Computer Science at Oregon State from 2015 to 2017. His current research interests are in nonlinear and adaptive signal processing and the application of signal processing and machine learning techniques in neural engineering, biomedicine, and structural health management. Mathews is a Fellow of the IEEE. He has served in many leadership positions of the IEEE Signal Processing Society. He is a recipient of the 2008-09 Distinguished Alumni Award from the National Institute of Technology, Tiruchirappalli, India, the IEEE Utah Section's Engineer of the Year Award in 2010, and the Utah Engineers Council's Engineer of the Year Award in 2011. He was a distinguished lecturer of the IEEE Signal Processing Society for 2013 and 2014 and is the recipient of the 2014 IEEE Signal Processing Society Meritorious Service Award.

Title of Talk: Intuitive Control of Bionic Limbs for Amputees and People with Spinal Cord Injuries

Abstract: Recent technological innovations such as functional neuro-muscular stimulation (FNS) offer considerable benefits to paralyzed individuals. FNS can produce movement in paralyzed muscles by the application of electrical stimuli to the nerves innervating the muscles. The first part of this talk will describe how smooth muscle movements that track desired movements can be evoked using electrical stimulation via electrode arrays inserted into peripheral nerves. Animal experiments demonstrating the feasibility of the method will be described. The second part of this talk will describe efforts to interpret human motor intent from bioelectrical signals. Machine learning algorithms for accomplishing this objective will be presented. The decoded information can then be used to intuitively evoke desired movements of paralyzed muscles or control prosthetic devices in patients with limb loss, i.e., movements of the bionic limb can be evoked by the user's mind. Results of experiments involving human amputee subjects will be described and discussed.
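To make the decoding pipeline sketched in the abstract concrete, the following is a minimal, generic illustration of intent decoding from multichannel bioelectrical (e.g., EMG) signals: windowed RMS amplitude features followed by a nearest-centroid classifier. All names (`rms_features`, `NearestCentroidDecoder`) and the feature/classifier choices are illustrative assumptions, not the algorithms presented in the talk.

```python
import numpy as np

def rms_features(emg, window=200):
    """Per-window RMS amplitude of a multichannel signal.

    emg: array of shape (n_samples, n_channels).
    Returns an array of shape (n_windows, n_channels), one RMS
    value per channel per non-overlapping window.
    """
    n = (emg.shape[0] // window) * window
    w = emg[:n].reshape(-1, window, emg.shape[1])
    return np.sqrt(np.mean(w ** 2, axis=1))

class NearestCentroidDecoder:
    """Assign each feature vector to the class with the nearest mean."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack(
            [X[y == c].mean(axis=0) for c in self.classes_]
        )
        return self

    def predict(self, X):
        # Euclidean distance from each sample to each class centroid.
        d = np.linalg.norm(
            X[:, None, :] - self.centroids_[None, :, :], axis=2
        )
        return self.classes_[np.argmin(d, axis=1)]
```

In practice, far richer features (time-domain statistics, spectral features) and classifiers are used; this sketch only shows the overall decode-then-control structure.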


Robert Hiromoto

(Professor, University of Idaho, USA)

Bio: Robert Hiromoto is a Professor and former chair of the Computer Science Department at the University of Idaho (UI). His research focuses on computational algorithms and the design of wireless communication protocols. Dr. Hiromoto has extensive experience in high-performance and parallel computing. His most recent work has been on parallel graphics rendering architectures, a set-theoretic estimation approach to decryption, and the design of UAV communication protocols. Dr. Hiromoto was formerly a professor of computer science at the University of Texas at San Antonio (UTSA), and a staff member for more than 12 years in the Computer Research group at the Los Alamos National Laboratory. (Based on a document published on 15 April 2010.)


Garrison W. Cottrell

(Professor, University of California, San Diego, USA)

Bio: Dr. Cottrell is a Professor in the Computer Science & Engineering Department at UCSD and a member of the AI Group there. His research group, Gary's Unbelievable Research Unit, publishes unbelievable research. The group's work is strongly interdisciplinary, applying neural networks and other computational models to problems in cognitive science, artificial intelligence, engineering, and biology. He has used such models for tasks as disparate as modeling how children acquire words, studying how lobsters chew, and nonlinear data compression. Most recently he has worked on face and object recognition, visual salience and visual attention, and modeling early visual cortex.

Title of Talk: An anatomically-inspired model of the visual system using deep learning

Abstract: Convolutional Neural Networks (CNNs) are currently the best models we have of the ventral temporal lobe, the part of cortex engaged in recognizing objects. They have been effective at predicting the firing rates of neurons in monkey cortex, as well as fMRI and MEG responses in human subjects. They are based on several observations concerning the visual world: 1) pixels are most correlated with nearby pixels, leading to local receptive fields; 2) the statistics of image pixels are relatively stationary across the visual field, leading to replicated features; 3) objects do not change identity depending on their location in the image, leading to pooling of responses, which makes CNNs relatively translation invariant; and 4) objects are made of parts, leading to increasing receptive field sizes in deeper layers, so smaller parts are recognized in shallower layers and larger composites in later layers. Compared to the primate visual system, however, there are a couple of striking differences. CNNs have high resolution everywhere, whereas primates have a foveated retina: for humans, high resolution covers only about the size of your thumbnail at arm's length, and resolution drops off steeply toward the periphery. The mapping from the visual field to V1 is a log-polar transform. This has two main advantages: a change in scale becomes a left-right translation, and a rotation in the image plane becomes a vertical translation. When log-polar images are given as input to a standard CNN, scale and rotation invariance is obtained. However, translation invariance is lost, which primates make up for by moving their eyes about three times a second. We present results from a model with these constraints and show that, despite rotation invariance, the model captures the inverted face effect, while standard CNNs do not.
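The key property the abstract relies on, that rotation (and, with a log-radius axis, scale) of the input becomes a translation of the log-polar image, can be sketched in a few lines of numpy. The function name, output size, and nearest-neighbor sampling are illustrative assumptions, not the model used in the talk (note the axis convention differs from the abstract: here log-radius indexes rows and angle indexes columns, so rotation becomes a horizontal shift).

```python
import numpy as np

def log_polar(image, out_shape=(64, 64)):
    """Resample a square grayscale image onto a log-polar grid.

    Rows index log-radius, columns index angle, so scaling the
    input about its center shifts the output along rows and
    rotating it shifts the output along columns
    (nearest-neighbor sampling, anchored at the image center).
    """
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = min(cy, cx)
    n_r, n_theta = out_shape
    # Log-spaced radii from 1 pixel out to the image border.
    radii = np.exp(np.linspace(0.0, np.log(max_r), n_r))
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    # Cartesian sample coordinates for each (log-radius, angle) cell.
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return image[ys, xs]
```

With this convention, rotating the input by 90 degrees about its center circularly shifts the output by a quarter of the angle axis, which is the translation a standard CNN can then pool over.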

 

Important Deadlines

Full Paper Submission: 27th March 2024
Acceptance Notification: 17th April 2024
Final Paper Submission: 22nd April 2024
Early Bird Registration: 2nd May 2024
Presentation Submission: 2nd May 2024
Conference: 29 – 31 May 2024

Previous Conference

IEEE AIIoT 2022

Sister Conferences

IEEE CCWC 2022

IEEE UEMCON 2022

IEEE IEMCON 2022


Announcements


•    A Best Paper Award will be given for each track