2021 World AI IoT Congress


RESEARCH KEYNOTE SERIES

Elisa Bertino

(Professor, Purdue University, USA)

Bio: Elisa Bertino is a professor of Computer Science at Purdue University. Prior to joining Purdue, she was a professor and department head at the Department of Computer Science and Communication of the University of Milan. She has been a visiting researcher at the IBM Research Laboratory (now Almaden) in San Jose, at the Microelectronics and Computer Technology Corporation, at Rutgers University, and at Telcordia Technologies. Her main research interests include security, privacy, database systems, distributed systems, and sensor networks. Her recent research focuses on digital identity management, biometrics, IoT security, security of 4G and 5G cellular network protocols, and policy infrastructures for managing distributed systems. Prof. Bertino has published more than 700 papers in major refereed journals and in the proceedings of international conferences and symposia. She has given keynotes, tutorials, and invited presentations at conferences and other events. She is a Fellow of ACM, IEEE, and AAAS. She received the 2002 IEEE Computer Society Technical Achievement Award “For outstanding contributions to database systems and database security and advanced data management systems”, the 2005 IEEE Computer Society Tsutomu Kanai Award for “Pioneering and innovative research contributions to secure distributed systems”, and the ACM 2019-2020 Athena Lecturer Award.

Title of Talk: IoT Security

Abstract: The Internet of Things (IoT) paradigm refers to the network of physical objects or “things” embedded with electronics, software, sensors, and connectivity that enable them to exchange data with servers, centralized systems, and/or other connected devices over a variety of communication infrastructures. IoT makes it possible to sense and control objects, creating opportunities for more direct integration between the physical world and computer-based systems. Furthermore, the deployment of AI techniques enhances the autonomy of IoT devices and systems. IoT will thus usher in automation in a large number of application domains, ranging from manufacturing and energy management (e.g., SmartGrid) to healthcare management and urban life (e.g., SmartCity). However, because of its fine-grained, continuous, and pervasive data acquisition and control capabilities, IoT raises concerns about security, privacy, and safety. Deploying existing solutions to IoT is not straightforward because of device heterogeneity, highly dynamic and possibly unprotected environments, and large scale. In this talk, after outlining key challenges in IoT security and privacy, we outline a security lifecycle approach to securing IoT data, and then focus on our recent work on security analysis for cellular network protocols and edge-based anomaly detection based on machine learning techniques. We will conclude with a brief discussion of our recent work on security- and safety-constrained autonomous IoT devices that use reinforcement learning techniques.
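To make the edge-based anomaly-detection theme concrete, the sketch below trains an isolation forest on benign IoT telemetry and flags out-of-distribution readings at the edge. This is a minimal illustration only, not the system described in the talk; the feature set, data, and thresholds are hypothetical assumptions.

```python
# Minimal sketch of edge-based anomaly detection for IoT telemetry.
# Illustrative only: features, data, and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Benign training telemetry: [packet_rate, payload_size, cpu_load]
benign = rng.normal(loc=[100.0, 512.0, 0.30],
                    scale=[10.0, 50.0, 0.05],
                    size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(benign)

def is_anomalous(reading):
    """Return True if a single telemetry reading looks anomalous."""
    return model.predict(np.asarray(reading).reshape(1, -1))[0] == -1

print(is_anomalous([102.0, 500.0, 0.31]))  # likely False (normal reading)
print(is_anomalous([950.0, 60.0, 0.95]))   # likely True  (anomalous burst)
```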


 

Bharat K. Bhargava

(Professor, Purdue University, USA)

Bio: Bharat Bhargava is a professor in the Department of Computer Science with a courtesy appointment in the School of Electrical & Computer Engineering at Purdue University. His recent research is on intelligent autonomous systems, data analytics, and machine learning, including cognitive autonomy, reflexivity, deep learning, and knowledge discovery. His earlier work on Waxed Prune with MIT and NGC built a prototype for privacy-preserving data dissemination across domains. He currently leads the NGC REALM consortium. He has graduated the largest number of Ph.D. students in the Computer Science department at Purdue and is active in supporting and mentoring minority students. In 2003, he was inducted into Purdue’s Book of Great Teachers. In 2017, he received the Helen Schleman Gold Medallion Award for supporting women at Purdue and the Focus Award for advancing technology for differently abled students.

Title of Talk: Real Application of Machine Learning (REALM): Situation Knowledge on Demand (SKOD)

Abstract: Extracting relevant patterns from heterogeneous data streams poses significant computational and analytical challenges. The challenge is to identify such patterns and push the corresponding content to interested users, according to mission needs, in real time. This research combines the best of database systems, knowledge representation, and machine learning to get the right data to the right user at the right time, with completeness and low noise. If a user’s need is unmet, queries evolve and are modified to come closer to satisfying mission needs, which may themselves be unclear. If a need is only partially met, our system connects newly arriving streaming data to the outstanding queries. Knowledge for further processing is kept in the form of queries (megabytes) rather than a full database (gigabytes). The project deals with multimedia data at peta- and zetta-scale. The research leads to a scalable, real-time, fault-tolerant, privacy-preserving architecture that consumes streams of multimodal data (e.g., video, text, sound) using publish/subscribe stream engines and RDBMS microservices. We use neural networks to extract relevant objects from video and latent semantic indexing techniques to model topics in unstructured text. We present a unique Situational Knowledge Query Engine that continuously builds a multimodal relational knowledge base constructed using SQL queries and pushes dynamic content to relevant users through triggers based on models of users’ interests. We analyze an extensive collection of Cambridge data (millions of Twitter tweets, 35+ structured datasets, 100+ hours of video traffic, and the needs of police, public works, and citizens). At present, data from the West Lafayette police is being analyzed to help identify suspicious activity and deal with disasters such as school shootings. We will continue to learn from NG researchers to demonstrate the feasibility of the proof of concept. The research has resulted in DARPA proposals, collaborations with Sandia, JPL, and multiple NGC IRADs, and many research papers and Ph.D. theses.
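As a rough illustration of one component mentioned above, the sketch below applies latent semantic indexing (TF-IDF plus truncated SVD) to a small text stream and projects documents into a low-dimensional topic space, against which standing user queries could be matched. It is a minimal sketch under assumed inputs, not the SKOD engine itself; the corpus and topic count are hypothetical.

```python
# Minimal sketch of latent semantic indexing (LSI) for topic modeling of
# streaming text. Illustrative only; corpus and parameters are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "road closure near main street due to water main repair",
    "suspicious vehicle reported outside the elementary school",
    "heavy traffic and accident on highway exit ramp",
    "police respond to report of shots fired downtown",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)

# Project documents into a small latent topic space.
lsi = TruncatedSVD(n_components=2, random_state=0)
topics = lsi.fit_transform(X)

# A standing user query could be transformed into the same latent space and
# matched against incoming documents (e.g., by cosine similarity), pushing
# content to interested users when the similarity exceeds a threshold.
print(topics.shape)  # (4, 2): one low-dimensional topic vector per document
```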


Songwu Lu

(Professor, University of California, Los Angeles, USA)

Bio: Dr. Songwu Lu is a Professor in the Computer Science Department at the University of California, Los Angeles, where he leads the Wireless Networking Group (WiNG). His research interests include wireless networking, mobile systems, cloud computing, and wireless and Internet security. He received his Ph.D. from the University of Illinois at Urbana-Champaign in 1999.


Heng Ji

(Professor, University of Illinois, USA)

Bio: Heng Ji is a professor in the Computer Science Department and an affiliated faculty member in the Electrical and Computer Engineering Department of the University of Illinois at Urbana-Champaign. She is an Amazon Scholar. She received her B.A. and M.A. in Computational Linguistics from Tsinghua University, and her M.S. and Ph.D. in Computer Science from New York University. Her research interests focus on Natural Language Processing, especially Multimedia Multilingual Information Extraction, Knowledge Base Population, and Knowledge-driven Generation. She was selected as a “Young Scientist” and a member of the Global Future Council on the Future of Computing by the World Economic Forum in 2016 and 2017. Her awards include the “AI’s 10 to Watch” Award from IEEE Intelligent Systems in 2013, an NSF CAREER award in 2009, Google Research Awards in 2009 and 2014, IBM Watson Faculty Awards in 2012 and 2014, Bosch Research Awards in 2014-2018, a Tencent AI Lab Rhino-Bird Gift Fund in 2019, and the ACL 2020 Best Demo Paper Award. She has served as Program Committee Co-Chair of many conferences, including NAACL-HLT 2018. She was elected secretary of the North American Chapter of the Association for Computational Linguistics (NAACL) for 2020-2021.

Title of Talk: Information Surgery

Abstract: For the first time in human history, with the creation of the Internet, the Web, and more recently modern social media (such as Twitter, YouTube, and Instagram), sharing information at scale has become accessible to all through the Internet of Things (IoT). This development has led to the generation of vast amounts of online information from a proliferation of different producers. But it has also facilitated the spread of false and inaccurate information that can inflict harm, as evidenced by the rise of information disorder. In recent years, generative neural network models in natural language processing and computer vision have become the frontier for malicious actors to controllably generate misinformation at scale. Such realistic-looking AI-generated “fake news” has been shown to easily deceive humans. In this talk I propose to extend research on Information Extraction to evaluate the veracity of news stories and change the consumption of news media around the world. Such a system would use multimedia multilingual information extraction as a basis to analyze media reports from across the world, identify fine-grained falsified information, fix it, and prioritize information for analyst review. I will present a new “Information Surgeon” model that takes full advantage of state-of-the-art multimedia joint knowledge extraction techniques to analyze fine-grained event, entity, and relation elements, and to determine whether these extracted knowledge elements align consistently across modalities and with background knowledge. We propose a novel probabilistic graphical neural network model to fuse the outputs from these indicators to detect misinformation and make the results highly explainable. A major challenge in performing knowledge-element-level misinformation detection is the lack of training data. Hence, we additionally propose a novel graph-to-text generation approach that generates noisy training data automatically through knowledge element manipulation. Experimental results show that our approach achieves 92%-95% detection accuracy, 16.8% (absolute) higher than the state-of-the-art approach.
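To give a flavor of the knowledge-element-manipulation idea for creating noisy training data, the toy sketch below perturbs one element of an extracted (subject, relation, object) triple and renders it back to text, yielding paired consistent/falsified examples. This is a hedged illustration under assumed inputs, not the paper’s generator; the triples, entity pool, and rendering template are hypothetical.

```python
# Toy sketch of knowledge element manipulation for generating noisy
# training data. Illustrative only; triples and entities are hypothetical.
import random

random.seed(0)

triples = [
    ("Company A", "acquired", "Company B"),
    ("Senator X", "visited", "City Y"),
]

entity_pool = ["Company C", "Senator Z", "City W"]

def manipulate(triple):
    """Swap the object of a triple with another entity to fabricate a claim."""
    subj, rel, obj = triple
    fake_obj = random.choice([e for e in entity_pool if e != obj])
    return (subj, rel, fake_obj)

def render(triple, label):
    """Render a triple back into a short text snippet with a veracity label."""
    subj, rel, obj = triple
    return {"text": f"{subj} {rel} {obj}.", "label": label}

# Original triples become "consistent" examples; manipulated ones "falsified".
data = [render(t, "consistent") for t in triples]
data += [render(manipulate(t), "falsified") for t in triples]
for example in data:
    print(example)
```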


Goutam Chattopadhyay

(Senior Research Scientist, NASA Jet Propulsion Laboratory, California, USA)

Bio: Goutam Chattopadhyay is a Senior Scientist at NASA’s Jet Propulsion Laboratory, California Institute of Technology, a Visiting Professor at the Division of Physics, Mathematics, and Astronomy at the California Institute of Technology, Pasadena, USA, a BEL Distinguished Visiting Chair Professor at the Indian Institute of Science, Bangalore, India, and an Adjunct Professor at the Indian Institute of Technology, Kharagpur, India. He received the Ph.D. degree in electrical engineering from the California Institute of Technology (Caltech), Pasadena, in 2000. He is a Fellow of the IEEE (USA) and IETE (India) and an IEEE Distinguished Lecturer.

His research interests include microwave, millimeter-wave, and terahertz receiver systems and radars, and development of space instruments for the search for life beyond Earth.

He has more than 350 publications in international journals and conference proceedings and holds more than twenty patents. He has also received more than 35 NASA technical achievement and new technology invention awards. He received the IEEE Region 6 Engineer of the Year Award in 2018 and the Distinguished Alumni Award from the Indian Institute of Engineering Science and Technology (IIEST), India, in 2017. He was the recipient of the best journal paper award of the IEEE Transactions on Terahertz Science and Technology in 2020 and 2013, the best paper award for antenna design and applications at the European Conference on Antennas and Propagation (EuCAP) in 2017, and the IETE Prof. S. N. Mitra Memorial Award in 2014.

Title of Talk: Mars Landing and Related Technical Challenges

Abstract: NASA’s Jet Propulsion Laboratory, which completed eighty years of its existence in 2016, builds instruments for NASA missions. Exploring the universe and our own planet Earth from space has been the mission of NASA. Robotic missions such as Voyager, which continues beyond our solar system, missions to Mars and other planets, astrophysics missions exploring stars and galaxies, and efforts to answer the question “Are we alone in this universe?” have been the driving force for NASA scientists for more than six decades.

Fundamental science questions drive the selection of NASA missions, and to answer some of these questions, NASA has made multiple trips to the red planet Mars. In its early history, Mars resembled our own planet in many ways. Landing on Mars is extremely challenging. In this presentation we will discuss those challenges and the technologies we developed to address them. We will also present an overview of the state-of-the-art instruments that we are currently developing and lay out the details of the science questions they will try to answer. Rapid progress on multiple fronts, such as commercial software for component and device modeling, low-loss circuits and interconnect technologies, cell phone technologies, and submicron-scale lithographic techniques, is making it possible for us to design and develop smart, low-power yet very powerful instruments that can even fit in a SmallSat or CubeSat. We will also discuss the challenges of future-generation instruments in addressing the needs of critical scientific applications.

The research described herein was carried out at the Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California, USA, under contract with the National Aeronautics and Space Administration.


Clifford S Stein

(Professor, Columbia University, USA)

Bio: Clifford Stein is a Professor of IEOR and of Computer Science at Columbia University. He is also the Associate Director for Research in the Data Science Institute. From 2008-2013, he was chair of the IEOR department. Prior to joining Columbia, he spent 9 years as an Assistant and Associate Professor in the Dartmouth College Department of Computer Science.

His research interests include the design and analysis of algorithms, combinatorial optimization, operations research, network algorithms, scheduling, algorithm engineering, data science, and parallel computing. Professor Stein has published many influential papers in the leading conferences and journals in his field and has held a variety of editorial positions, including at the journals ACM Transactions on Algorithms, Mathematical Programming, Journal of Algorithms, SIAM Journal on Discrete Mathematics, and Operations Research Letters. His work has been supported by the National Science Foundation and the Sloan Foundation. He is a Fellow of the Association for Computing Machinery (ACM). He is the winner of several prestigious awards, including an NSF CAREER Award, an Alfred P. Sloan Research Fellowship, and the Karen Wetterhahn Award for Distinguished Creative or Scholarly Achievement. He is also a co-author of Introduction to Algorithms, with T. Cormen, C. Leiserson, and R. Rivest. This book is currently the best-selling textbook in algorithms, has sold over 750,000 copies, and has been translated into 15 languages.

Title of Talk: Parallel Algorithms for Problems on Massive Graphs

Abstract: Large graphs model many important problems in data science. When the graph is too large to fit in the memory of one computer, standard sequential algorithms do not work, or are so slow as to be useless. We will survey some recent progress on efficient parallel algorithms whose performance scales nicely with the size of the graph, covering many of the well-known basic graph problems such as connectivity, spanning trees, shortest paths, and matchings.
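For readers unfamiliar with the flavor of such algorithms, the sketch below computes connected components by iterative label propagation, the kind of round-based computation that maps naturally onto massively parallel frameworks, since each round is an independent pass over the edges. It is a small sequential illustration of the idea, not one of the specific algorithms surveyed in the talk.

```python
# Minimal sketch of connected components via iterative label propagation.
# Each while-iteration corresponds to one parallel "round" in which all
# edges could be processed concurrently. Illustrative only.
def connected_components(num_nodes, edges):
    label = list(range(num_nodes))      # each node starts in its own component
    changed = True
    while changed:                      # repeat rounds until labels stabilize
        changed = False
        for u, v in edges:              # in a parallel setting: one task per edge
            low = min(label[u], label[v])
            if label[u] != low or label[v] != low:
                label[u] = label[v] = low
                changed = True
    return label

edges = [(0, 1), (1, 2), (3, 4)]
print(connected_components(5, edges))   # [0, 0, 0, 3, 3]: two components
```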

Important Deadlines

Full Paper Submission: 10th April 2021
Acceptance Notification: 15th April 2021
Final Paper Submission: 26th April 2021
Early Bird Registration: 25th April 2021
Presentation Submission: 2nd May 2021
Conference: 10th – 13th May 2021

Sister Conferences

IEEE CCWC 2021

IEEE UEMCON 2020

IEEE IEMCON 2020


Announcements

•    Conference Proceedings will be submitted for publication in the IEEE Xplore Digital Library.
•    A Best Paper Award will be given for each track.
•    Conference Record No.: 52608