2024 World AI IoT Congress

RESEARCH KEYNOTE SERIES

V John Mathews

(Professor, Oregon State University, USA)

Bio: V John Mathews is a professor in the School of Electrical Engineering and Computer Science at Oregon State University. He received his Ph.D. and M.S. degrees in electrical and computer engineering from the University of Iowa, Iowa City, Iowa in 1984 and 1981, respectively, and the B.E. (Hons.) degree in electronics and communication engineering from the Regional Engineering College (now National Institute of Technology), Tiruchirappalli, India in 1980. Prior to 2015, he was with the Department of Electrical & Computer Engineering at the University of Utah. He served as the chairman of the ECE department at Utah from 1999 to 2003, and as the head of the School of Electrical Engineering and Computer Science at Oregon State from 2015 to 2017. His current research interests are in nonlinear and adaptive signal processing and the application of signal processing and machine learning techniques in neural engineering, biomedicine, and structural health management. Mathews is a Fellow of IEEE. He has served in many leadership positions of the IEEE Signal Processing Society. He is a recipient of the 2008-09 Distinguished Alumni Award from the National Institute of Technology, Tiruchirappalli, India, the IEEE Utah Section’s Engineer of the Year Award in 2010, and the Utah Engineers Council’s Engineer of the Year Award in 2011. He was a distinguished lecturer of the IEEE Signal Processing Society for 2013 and 2014 and is the recipient of the 2014 IEEE Signal Processing Society Meritorious Service Award.

Title of Talk: Intuitive Control of Bionic Limbs for Amputees and People with Spinal Cord Injuries

Abstract: Recent technological innovations such as functional neuromuscular stimulation (FNS) offer considerable benefits to paralyzed individuals. FNS can produce movement in paralyzed muscles by the application of electrical stimuli to the nerves innervating the muscles. The first part of this talk will describe how smooth muscle movements that track desired movements can be evoked using electrical stimulation via electrode arrays inserted into peripheral nerves. Animal experiments demonstrating the feasibility of the method will be described. The second part of this talk will describe efforts to interpret human motor intent from bioelectrical signals. Machine learning algorithms for accomplishing this objective will be presented. The decoded information can then be used to intuitively evoke desired movements of paralyzed muscles or to control prosthetic devices in patients with limb loss; i.e., movements of the bionic limbs can be evoked by the user’s mind. Results of experiments involving human amputee subjects will be described and discussed.
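The motor-intent decoding step can be illustrated with a toy example. Below is a minimal sketch, not the speaker's method: synthetic multichannel EMG-like windows are reduced to classic time-domain features and fed to a linear classifier that predicts the intended movement class.

```python
# A minimal sketch using synthetic stand-in data -- not the speaker's method.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

def features(window):
    """Classic time-domain EMG features: per-channel RMS and waveform length."""
    rms = np.sqrt(np.mean(window ** 2, axis=1))
    wl = np.sum(np.abs(np.diff(window, axis=1)), axis=1)
    return np.concatenate([rms, wl])

# Synthetic data: 3 intended movements, each giving 4 channels a distinct
# activation pattern; 100 windows of 200 samples per movement.
X, y = [], []
for label, gains in enumerate([(1, .2, .2, .2), (.2, 1, .2, .2), (.2, .2, 1, 1)]):
    for _ in range(100):
        window = rng.normal(0.0, np.asarray(gains, dtype=float)[:, None], size=(4, 200))
        X.append(features(window))
        y.append(label)

# Train on even-indexed windows, test on odd-indexed ones.
clf = LinearDiscriminantAnalysis().fit(np.array(X[::2]), y[::2])
print("held-out accuracy:", clf.score(np.array(X[1::2]), y[1::2]))
```

In a real system, the predicted class (or a continuous regression output) would drive the stimulation pattern or the prosthetic controller in real time.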


Robert Hiromoto

(Professor, University of Idaho, USA)

Bio: Robert Hiromoto is a Professor and former Chair of the Computer Science Department at the University of Idaho (UI). His research focuses on computational algorithms and the design of wireless communication protocols. Dr. Hiromoto has extensive experience in high-performance and parallel computing. His most recent work has been on parallel graphics rendering architectures, a set-theoretic estimation approach to decryption, and the design of UAV communication protocols. Dr. Hiromoto was formerly a professor of computer science at the University of Texas at San Antonio (UTSA), and a staff member for more than 12 years in the Computer Research group at Los Alamos National Laboratory.


Garrison W. Cottrell

(Professor, University of California, San Diego, USA)

Bio: Dr. Cottrell is a Professor in the Computer Science & Engineering Department at UCSD and a member of the AI Group at UCSD. His research group, Gary’s Unbelievable Research Unit, publishes unbelievable research. The group's work is strongly interdisciplinary, using neural networks and other computational models to attack problems in cognitive science, artificial intelligence, engineering, and biology. He has had success applying them to such disparate tasks as modeling how children acquire words, studying how lobsters chew, and performing nonlinear data compression. Most recently, he has worked on face and object recognition, visual salience and visual attention, and modeling early visual cortex.

Title of Talk: An anatomically-inspired model of the visual system using deep learning

Abstract: Convolutional Neural Networks (CNNs) are currently the best models we have of the ventral temporal lobe – the part of cortex engaged in recognizing objects. They have been effective at predicting the firing rates of neurons in monkey cortex, as well as fMRI and MEG responses in human subjects. They are based on several observations concerning the visual world: 1) pixels are most correlated with nearby pixels, leading to local receptive fields; 2) statistics are stationary – the statistics of image pixels are relatively invariant across the visual field, leading to replicated features; 3) objects do not change identity depending on their location in the image, leading to pooling of responses, making CNNs relatively translation invariant; and 4) objects are made of parts, leading to increasing receptive field sizes in deeper layers, so smaller parts are recognized in shallower layers and larger composites in later layers. However, compared to the primate visual system, there are a couple of striking differences. CNNs have high resolution everywhere, whereas primates have a foveated retina: for humans, high resolution covers only about the size of your thumbnail at arm’s length, with a steep dropoff in resolution towards the periphery. The mapping from the visual field to V1 is a log-polar transform, which has two main advantages: scale becomes a left-right translation, and rotation in the image plane becomes a vertical translation. When such inputs are given to a standard CNN, scale and rotation invariance is obtained. However, translation invariance is lost, which we make up for by moving our eyes about three times a second. We present results from a model with these constraints and show that, despite its rotation invariance, the model is able to capture the inverted face effect, which standard CNNs do not.
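The log-polar geometry is easy to demonstrate. The sketch below is an illustration, not the authors' model code: it resamples an image onto a log-polar grid, under which rescaling the input becomes a shift along the log-radius axis and in-plane rotation becomes a shift along the angle axis, which is what lets a standard CNN trained on such inputs inherit scale and rotation invariance.

```python
# An illustrative log-polar resampler -- not the authors' model code.
import numpy as np

def log_polar(img, out_theta=64, out_rho=64):
    """Resample a square grayscale image about its center onto a
    (theta, log r) grid: rows index angle, columns index log-radius."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = min(cy, cx)
    rhos = np.exp(np.linspace(0.0, np.log(max_r), out_rho))     # log-spaced radii
    thetas = np.linspace(0.0, 2 * np.pi, out_theta, endpoint=False)
    tt, rr = np.meshgrid(thetas, rhos, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]

# Scaling the image by s multiplies every radius by s, i.e. adds log(s) to
# log(r): a pure horizontal shift of the output. Rotating the image adds a
# constant to theta: a pure (circular) vertical shift. A CNN's built-in
# translation tolerance therefore becomes scale/rotation tolerance here.
if __name__ == "__main__":
    img = np.zeros((128, 128))
    img[40:60, 70:90] = 1.0               # a bright patch off-center
    print(log_polar(img).shape)           # (64, 64): rows = angle, cols = log-radius
```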


M. Reza Alam

(Researcher, TAF Lab, University of California, Berkeley, USA)

Bio: Born in Yazd, a small historic city at the geographic center of Iran, Reza received his BSc in Mechanical Engineering and MSc in Applied Mechanics from Sharif University of Technology, Tehran, Iran. He then joined the Mechanical Engineering program at the Massachusetts Institute of Technology, Cambridge, MA. He received his Master of Science in Mechanical Engineering in 2005 and his Ph.D. in Mechanical Engineering in 2008, and then served as a Postdoctoral Associate (2008-2009) and Lecturer (2009-2011) at MIT. In July 2011, Reza joined the faculty of the University of California, Berkeley.

Title of Talk: Underwater Wireless Communication through a Swarm of Super-Agile Autonomous Underwater Vehicles

Abstract: Underwater wireless data communication is one of the most important outstanding problems in ocean engineering, hindering research expeditions and industrial developments. The reason is simple: electromagnetic waves, at any frequency, are heavily absorbed by water, and acoustic waves (sonar) have narrow bandwidths. The lack of underwater wireless communication has left our oceans mostly unexplored. In this presentation, I will delve into our endeavor to tackle this challenge using a swarm of small Autonomous Underwater Vehicles (AUVs) that relay a laser beam (the data carrier) from the seabed to the surface of the ocean. Our AUVs are lined up at distances less than the maximum range of the laser; each AUV receives the signal from the one below, amplifies it, and sends it to the next one above, until the signal reaches the station on the surface, where it can easily reach satellites (via RF) and hence anywhere in the world. To keep the communication line reliable, particularly against oceanic disturbances due to surface waves, internal waves, oceanic currents, and marine life, a supervised deep learner is designed to quantify patterns in the background disturbance, based on which an optimal network topology for distributing our drones is calculated such that the reliability of smooth communication is maximized.
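A back-of-the-envelope calculation shows why chain reliability is the crux. All numbers below are illustrative assumptions, not figures from the talk: a vertical chain of relays spaced below the laser's usable range, with end-to-end delivery requiring every hop to hold.

```python
# Illustrative relay-chain arithmetic; every number here is an assumption.
import math

depth_m = 4000.0                      # assumed seabed depth
laser_range_m = 80.0                  # assumed usable laser range in seawater
hop_spacing_m = 0.8 * laser_range_m   # keep each hop below the maximum range

n_hops = math.ceil(depth_m / hop_spacing_m)
n_auvs = n_hops - 1                   # relays strictly between seabed and surface

p_hop = 0.999                         # assumed chance one hop stays aligned
p_chain = p_hop ** n_hops             # every hop must hold for end-to-end delivery

print(f"hops: {n_hops}, relay AUVs: {n_auvs}, chain reliability: {p_chain:.3f}")
```

Under these assumptions, even 99.9% per-hop reliability compounds over roughly 63 hops to only about 94% end-to-end, which is why learning the disturbance patterns and optimizing the drone topology matters.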


Chandra Krintz

(Professor, University of California, Santa Barbara, USA)

Bio: Chandra Krintz is a Professor of Computer Science at the University of California, Santa Barbara (UCSB) and co-founder and Chief Scientist of AppScale Systems, Inc. She joined the UCSB faculty in 2001 after receiving her M.S. and Ph.D. degrees in Computer Science from the University of California, San Diego (UCSD). Chandra has led a number of different research projects that have advanced the state of the art in programming and distributed systems in ways that improve performance and reduce energy consumption, and that ease development and deployment of software. Recently, her work has focused on the intersection of IoT, edge and cloud computing, and data analytics, with applications in farming, ranching, and conservation science (cf. SmartFarm and WTB). Chandra has advised over 70 undergraduate and graduate students, has published numerous research articles on the implementation of programming languages in venues that include DEBS, SEC, ASPLOS, IoTDI, WWW, HotCloud, Cloud, PLDI, TPDS, OOPSLA, IC2E, and others, participates in efforts to broaden participation in computing, and is the progenitor of the AppScale project. Chandra’s efforts have been recognized with an NSF CAREER award, the CRA-W Anita Borg Early Career Award (BECA), the UCSB Academic Senate Distinguished Teaching Award, and recognition as the 2015 UCSB Sustainability Champion. Chandra is an IEEE and ACM senior member, has served as a member-at-large and vice chair of the ACM SIGPLAN Executive Committee, and serves as an associate editor of IEEE TCC and TPDS. She is currently the Computer Science Vice Chair of Graduate Affairs and served as the Vice Chair of Undergraduate Affairs from 2014 to 2017.

Title of Talk: Sustainable IoT Systems for Digital Agriculture

Abstract: The Internet of Things (IoT) has great potential for extending human perception and automating intelligent actuation and control of cyber-physical systems. However, its adoption and widespread use are nascent. A key reason for this is that IoT deployments are vastly heterogeneous, with widely varying resource constraints that make them very challenging to program, deploy, secure, and maintain. Further, many of the most compelling use cases, from climate resilience and disaster prediction and response to ecology and agriculture, require deployments that are remote, inaccessible, and hostile to electronics, that lack power infrastructure (forcing reliance on batteries), and that span trust domains (edge versus cloud).

Our research focuses on new approaches that simplify the development and management of IoT applications in these settings. In particular, we have designed a new programming system that makes AI/ML applications portable across the edge-cloud continuum, robust to faults and intermittent connectivity, and easier to deploy and manage. We enable this through a novel combination of a distributed serverless runtime, dataflow programming abstractions, and intelligent deployment optimizations. We design, test, and evaluate these advances using real applications for digital agriculture (as part of the UCSB SmartFarm project). We find that they simplify IoT software development and deployment while reducing energy consumption significantly.
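The dataflow style described above can be sketched in a few lines. The names and API below are hypothetical stand-ins, not the actual SmartFarm programming system: an application is declared as a graph of small operators, each of which a runtime could place on an edge or cloud node and execute as a serverless function.

```python
# A hypothetical sketch of the dataflow style -- names are illustrative,
# not the UCSB SmartFarm API.
from dataclasses import dataclass, field
from typing import Any, Callable, List

@dataclass
class Op:
    name: str
    fn: Callable[[Any], Any]
    placement: str = "auto"        # a runtime may pin an op to "edge" or "cloud"

@dataclass
class Dataflow:
    ops: List[Op] = field(default_factory=list)

    def then(self, name, fn, placement="auto"):
        self.ops.append(Op(name, fn, placement))
        return self

    def run(self, value):
        # A real serverless runtime would invoke each op as a function,
        # retry on faults, and buffer across intermittent connectivity;
        # here we simply apply the ops in order.
        for op in self.ops:
            value = op.fn(value)
        return value

# Soil-moisture readings: drop out-of-range samples at the edge, average,
# then decide in the cloud.
pipeline = (Dataflow()
    .then("denoise", lambda xs: [x for x in xs if 0.0 <= x <= 1.0], "edge")
    .then("mean", lambda xs: sum(xs) / len(xs), "edge")
    .then("decide", lambda m: "irrigate" if m < 0.2 else "skip", "cloud"))

print(pipeline.run([0.18, 0.21, 1.7, 0.19]))   # -> irrigate
```

Expressing the application as a placement-annotated graph, rather than as monolithic device code, is what lets a runtime move operators between edge and cloud without changing the program.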


 


Satyajit Chakrabarti

(President, IEM America Corporation, USA)

Bio: Prof. Satyajit Chakrabarti is a Professor, technologist, serial entrepreneur, and venture capitalist. He obtained his PhD in Nanotechnology from the National Institute of Technology and his Master's in Computer Science from the University of British Columbia. Prof. Chakrabarti manages companies in the fields of technology, healthcare, and education. He is an avid philanthropist and social entrepreneur, and runs nonprofits as well as two universities and five colleges in India in the fields of Engineering, Management, Hospitality, Law, and Healthcare, with over 10,000 students and over 100,000 alumni. His technology companies work with large multinational companies to deliver products and services in the fields of Artificial Intelligence, Internet of Things, Virtual and Augmented Reality, Cybersecurity, and Web Development. Prof. Chakrabarti is a passionate researcher with over 100 publications in the fields of Artificial Intelligence, IoT, and Data Science, and over 20 patents filed in various fields of technology. He is an avid investor in early-stage startups and a mentor and teacher to thousands of students across the globe.

Prof. Satyajit Chakrabarti is passionate about technology, innovation, entrepreneurship, clean energy, nature and sustainability, renewable energy, learning technologies and education, media, design, and arts and culture.

His expertise includes angel investment, venture capital and private equity, management consulting, strategic management, technology applications, and problem solving using innovative technologies.

Title of Talk: Latest AI, IoT, and Emerging Technology Use Cases in Industry 4.0

Abstract: With the popularity and adoption of emerging technologies in industry today, innovative products and services have a huge market. We have developed advanced IoT solutions for asset health monitoring, water level sensing, warehouse management, and industrial automation. Our IoT tech stack includes hardware components, connectivity protocols like MQTT, cloud platforms like AWS IoT Core, edge/fog computing architectures, and sophisticated user dashboards.
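As a concrete illustration of the connectivity layer, here is a minimal telemetry publisher over MQTT. The broker, topic, and payload are placeholder choices, not the company's deployment; an AWS IoT Core endpoint would additionally require TLS client certificates on port 8883.

```python
# A placeholder MQTT telemetry publisher -- broker, topic, and payload are
# illustrative, not a real deployment. Requires paho-mqtt >= 2.0.
import json
import random
import time

import paho.mqtt.client as mqtt

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2,
                     client_id="tank-level-sensor-01")
# For AWS IoT Core, one would instead call client.tls_set(...) with the
# device certificates and connect to the account endpoint on port 8883.
client.connect("test.mosquitto.org", 1883, keepalive=60)   # public test broker
client.loop_start()

for _ in range(3):                    # a real sensor node would loop forever
    reading = {"ts": time.time(), "level_cm": round(random.uniform(40, 60), 1)}
    client.publish("demo/warehouse/tank1/level", json.dumps(reading), qos=1)
    time.sleep(5)

client.loop_stop()
client.disconnect()
```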

We create immersive AR and VR experiences leveraging tools like Unity and Blender. Our AR capabilities span marker-based and markerless applications for industries like education, retail, and training. Our VR solutions encompass realistic virtual environments for gaming, simulations, and industrial use cases.

We also have expertise in Artificial Intelligence (AI), empowering businesses with predictive capabilities, computer vision, natural language processing, and intelligent decision-support systems.

Important Deadlines

Full Paper Submission: 3rd April 2024
Acceptance Notification: 19th April 2024
Final Paper Submission: 12th May 2024
Early Bird Registration: 3rd May 2024
Presentation Submission: 12th May 2024
Conference: 29–31 May 2024

Previous Conference

IEEE AIIoT 2022

Sister Conferences

IEEE CCWC 2022

IEEE UEMCON 2022

IEEE IEMCON 2022


Announcements


• Best Paper Award will be given for each track.