Please visit the MSTCS YouTube Channel to view past seminars; videos are available for each seminar speaker.
Missouri S&T
Talk Title: From Generalized to Personalized: Leveraging LLMs for Conversational Search
Date: November 18th, 2024 Time: 10 AM Location: CS 220
Personalization is paramount for conversational search and recommendation. Despite significant advancements in Large Language Models (LLMs), these models often provide generalized recommendations that fail to capture the nuanced interests of individual users. To be effective, conversational agents must not only address users’ immediate queries but also adapt their responses based on the cumulative context of interactions. A major challenge in developing personalized conversational systems is the lack of large-scale datasets that reflect genuine user preferences and interactions.
In this talk, Dr. Chatterjee will first introduce a method for collecting extensive, multi-session, multi-domain, human-written personal conversations using LLMs. Following this, he will provide an overview of the TREC Interactive Knowledge Assistance track, which he co-leads with NIST in the US and which aims to advance research in personalized conversational search.
Dr. Shubham Chatterjee is an Assistant Professor of Computer Science at the Missouri University of Science and Technology in Rolla, Missouri. His research focuses on developing neural information retrieval models that leverage Knowledge Graph semantics to better understand and address users’ information needs. His work sits at the intersection of Information Retrieval and Natural Language Processing (NLP), employing Deep Learning to refine information access systems. Currently, Dr. Chatterjee is engaged in research on personalized conversational assistants, Large Language Models (LLMs), and their applications in search. Previously, he worked as a Postdoctoral Research Associate with Dr. Jeff Dalton in the Generalized Representation and Information Learning (GRILL) Lab, a leading research group in the School of Informatics at the University of Edinburgh. Prior to this, he was a Postdoctoral Research Associate with Dr. Jeff Dalton at the University of Glasgow, Scotland, as part of the prestigious Glasgow IR group. He also worked as a Postdoctoral Research Associate with Dr. Laura Dietz at the University of New Hampshire, Durham, where he completed his PhD under her supervision.
Missouri S&T
Talk Title: Judicious Parallelism for File Transfers in High-Performance Networks
Date: November 11th, 2024 Time: 10 AM Location: CS 220
Parallelism is key to efficiently utilizing high-speed research networks when transferring large volumes of data. However, the monolithic design of existing transfer applications requires the same level of parallelism for read, write, and network operations. This overburdens system resources: setting the parallelism level for the slowest component forces unnecessarily high parallelism on the other components, which increases overhead and causes unfair resource allocation among competing transfers. We introduce Marlin, a modular file transfer architecture that separates I/O and network operations so that parallelism can be adjusted independently for each component. Marlin adopts an online gradient descent algorithm to swiftly search the solution space and find the optimal level of parallelism for read, transfer, and write operations. Experimental results collected under various network settings show that Marlin can identify and use a minimum parallelism level for each component, improving fairness among competing transfers and CPU utilization. Finally, separating network transfers from write operations allows Marlin to outperform state-of-the-art solutions by more than 2x when transferring small datasets.
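The tuning idea described in the abstract, treating each component's measured throughput as a black-box objective and running online gradient descent (here, ascent) on its parallelism level, can be sketched as follows. This is an illustrative toy, not Marlin's implementation: the throughput model, its constants, and the finite-difference update rule are all invented for the example.

```python
def observed_throughput(p, capacity=40.0, per_thread=6.0, overhead=0.35):
    """Toy stand-in for a measured transfer rate: throughput grows with the
    parallelism level p until the component saturates, after which extra
    threads only add contention overhead. (All constants are invented.)"""
    useful = min(per_thread * p, capacity)
    waste = overhead * max(0.0, per_thread * p - capacity)
    return useful - waste

def tune_parallelism(start=16.0, steps=40, lr=0.2):
    """Online gradient ascent on throughput with respect to parallelism,
    estimating the gradient from two probe measurements per step."""
    p = start
    for _ in range(steps):
        grad = observed_throughput(p + 1) - observed_throughput(p - 1)
        p = max(1.0, p + lr * grad)   # never drop below one thread
    return round(p)

# The Marlin-style idea: tune read, network, and write parallelism
# independently (each against its own measured curve) instead of forcing
# one shared level for all three.
levels = {name: tune_parallelism() for name in ("read", "network", "write")}
```

In this toy, the tuner converges near the saturation point (7 threads) rather than the wastefully high starting level, which is the behavior the abstract attributes to Marlin's per-component search.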
Md Arifuzzaman is an Assistant Professor of Computer Science at Missouri University of Science and Technology. He completed his Ph.D. in Computer Science and Engineering at the University of Nevada, Reno, under the guidance of Dr. Engin Arslan. He also holds a Bachelor’s degree in Computer Science and Engineering from Bangladesh University of Engineering and Technology (BUET).
Dr. Arifuzzaman’s research interests encompass High-Performance Systems and Networking, Distributed Systems, and Quantum Networking. His work addresses critical challenges in optimizing the performance and scalability of large-scale systems, and his research contributions have been featured in prestigious journals and conferences such as IEEE TPDS, IEEE/ACM Supercomputing (SC), the ACM International Conference on Supercomputing (ICS), IEEE QCNC, IFIP TMA, and IEEE Cluster. His ongoing work continues to push the boundaries of systems and networking to meet the demands of emerging technologies.
Missouri S&T
Talk Title: Compiler Optimization for Irregular Memory Access Patterns in PGAS Programs
Date: November 4th, 2024 Time: 10 AM Location: CS 220
Applications that operate on large, sparse graphs and matrices exhibit fine-grained irregular memory access patterns, leading to both performance and productivity challenges on today’s distributed-memory systems. The Partitioned Global Address Space (PGAS) model attempts to address these challenges by combining the memory of physically distributed nodes into a logical global address space, simplifying how programmers enact communication in their applications. However, while the PGAS model can provide high developer productivity, the performance issues that arise from irregular memory accesses are still present. This work aims to bridge the gap between high productivity and high performance for irregular applications in the PGAS programming model.
To achieve that goal, we have designed and implemented COPPER, a framework that performs Compiler Optimizations for Productivity and PERformance. COPPER automatically performs static analysis to identify irregular memory access patterns to distributed data within parallel loops, and then applies code transformations to enact optimizations at runtime. These optimizations perform small message aggregation, adaptive prefetching, and selective data replication. Furthermore, they are applied without requiring user intervention, thereby improving performance without sacrificing developer productivity. We demonstrate the capabilities of COPPER by implementing it within the Chapel parallel programming language and show that the performance of several irregular applications across different hardware platforms can be improved by as much as 444x.
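The first of the optimizations named above, small message aggregation, is easy to illustrate: instead of issuing one network message per fine-grained remote update, buffer updates per destination node and flush them in bulk. The sketch below is a generic Python illustration, not COPPER's Chapel implementation; the `send` stand-in and the flush threshold are hypothetical.

```python
from collections import defaultdict

sent_messages = []

def send(dest, payload):
    """Stand-in for one network message to node `dest` (hypothetical API)."""
    sent_messages.append((dest, payload))

class Aggregator:
    """Buffer fine-grained remote updates per destination and flush them as
    one bulk message, amortizing the per-message network overhead."""
    def __init__(self, threshold=1024):
        self.threshold = threshold
        self.buffers = defaultdict(list)

    def remote_update(self, dest, update):
        buf = self.buffers[dest]
        buf.append(update)
        if len(buf) >= self.threshold:   # flush when the buffer fills
            self.flush(dest)

    def flush(self, dest):
        if self.buffers[dest]:
            send(dest, self.buffers.pop(dest))

    def flush_all(self):                  # called at loop/task boundaries
        for dest in list(self.buffers):
            self.flush(dest)

# 10,000 fine-grained updates to 4 nodes collapse into a dozen bulk messages.
agg = Aggregator(threshold=1024)
for i in range(10_000):
    agg.remote_update(i % 4, ("increment", i))
agg.flush_all()
```

The point of a compiler framework like COPPER is that this buffering is injected by code transformation, so the programmer still writes the simple one-update-at-a-time loop.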
Alan Sussman is a Professor in the Department of Computer Science at the University of Maryland, where he has been since 1992. Working with students and other researchers at Maryland and other institutions, he has published numerous conference and journal papers and received several best paper awards on various topics related to software tools for high performance parallel and distributed computing, and he has edited two books on teaching parallel and distributed computing. He is a founding member of the Center for Parallel and Distributed Computing Curriculum Development and Educational Resources (CDER). He is a subject area editor for the Parallel Computing journal and an associate editor for IEEE Transactions on Parallel and Distributed Systems. He recently completed a rotation as a program officer in the Office of Advanced Cyberinfrastructure at the National Science Foundation. He received his PhD in computer science from Carnegie Mellon University.
Missouri S&T
Talk Title: Advancing Web Application Security: From Database Synthesis to Race Condition Detection
Date: October 28th, 2024 Time: 10 AM Location: CS 220
Testing database-backed web applications is challenging because their behavior heavily depends on the data stored in databases. Without realistic data, it becomes difficult to reach vulnerable code, limiting the effectiveness of dynamic security testing methods. However, obtaining such test databases is often impractical due to privacy concerns. Our first approach, SynthDB (published at NDSS’23), addresses this by using program analysis to map the relationships between web application code and its database. SynthDB then reconstructs a database that enables the exploration of previously unreachable paths, without compromising database integrity. It achieves 62.9% code and 77.1% query coverage, outperforming existing database synthesis techniques by 14.0% and 24.2%, respectively.
Building on the insights from SynthDB, we tackle a specific yet critical problem in web applications: request race vulnerabilities. These can lead to data inconsistencies, unexpected behavior, and unauthorized access. Our approach, RaceDB (accepted in S&P’25), leverages the foundational work of SynthDB to enhance the detection of these vulnerabilities through two key innovations: Application-aware Request Detection (ARD) and replay-based verification. This allows RaceDB to efficiently isolate true races from false positives, surpassing state-of-the-art techniques and identifying 18 previously unknown vulnerabilities, 7 of which have been assigned CVEs.
Dr. Kyu Hyung Lee earned a PhD in Computer Science from Purdue University in 2014. Currently, he is an Associate Professor and Graduate Coordinator in the School of Computing at the University of Georgia, where he also serves as the Associate Director of the Institute for Cybersecurity and Privacy.
His research focuses on software security and cyber forensics, with a particular emphasis on using program analysis techniques to solve security challenges and enhance attack investigation methods. He has been recognized with several prestigious awards, including the Usenix Security Distinguished Paper Award, and has served on the program committees of major security conferences, such as Usenix Security, CCS, and NDSS.
Missouri S&T
Talk Title: Bitcoin
Date: October 21st, 2024 Time: 10 AM Location: CS 220
Missouri S&T
Talk Title: Intellectual Property Protection for Software
Date: October 14th, 2024 Time: 10 AM Location: CS 220
Software innovations are valuable to individuals, start-ups, and established businesses and can be protected by law as intellectual property (IP). Intellectual property is a type of intangible property created by the mind, such as inventions, works of art and literature, designs, names, or images. Software also fits into this category. The laws of most industrialized countries recognize four types of IP that can be protected: patents, copyright, trade secrets, and trademarks. Each type of IP impacts software.
Bob Bain prepares and prosecutes U.S. and international patents in electrical, computer, mechanical, and related fields. An experienced patent and intellectual property attorney, Bob guides his clients through complex intellectual property issues concerning freedom-to-operate, infringement, due diligence, and licensing to efficiently patent and safeguard his clients’ intellectual property. Bob also assists clients in drafting and negotiating software and technology agreements, including licenses, Software-as-a-Service (SaaS) agreements, and consulting and development agreements. Bob holds a B.S. in Electrical Engineering from Missouri S&T and a J.D. from the University of Missouri School of Law.
Missouri S&T
Talk Title: Security
Date: October 7th, 2024 Time: 10 AM Location: CS 220
Missouri S&T
Talk Title: Software History: What, Why, and How
Date: September 30th, 2024 Time: 10 AM Location: CS 220
This talk covers the relevance of software history to practitioners and students of technology and argues for including software history in computing programs. After more than 60 years of software development, students are not necessarily taught the lessons of that history, nor about the preexisting software and assumptions that may be used in modern software. As a result, students and software developers can be “caught” by scenarios where their applications unwittingly rely on those underlying assumptions and software. Students also may not appreciate the breadth of software and how different kinds of software have followed different historical paths. This talk identifies a number of scenarios where the history of software is relevant, such as secure systems and network protocol development, along with other specific examples of lessons learned from experience with previous software systems. It also covers the structure of a course in software history, the motivating context, the effort to assemble the course materials, and the impact so far on students taking the course as a stand-alone offering at multiple institutions.
Kim W. Tracy has over 30 years of experience in software development and computing education. He has worked on various software projects, including at Bell Labs and as the Chief Information Officer at Northeastern Illinois University. While at Bell Labs, he worked on a number of different products, including the 5ESS® telephone switch, and consulted for clients around the world. Tracy has taught computer science courses spanning traditional computer science, software engineering, cybersecurity, and information technology at multiple universities and currently teaches at Rose-Hulman Institute of Technology. He is involved in ABET accreditation and professional societies, and has authored textbooks on software history and object-oriented AI. Tracy holds senior memberships in ACM and IEEE and chairs the ACM History Committee.
Missouri S&T
Talk Title: Privacy
Date: September 23rd, 2024 Time: 10 AM Location: CS 220
Missouri S&T
Talk Title: IP
Date: September 16th, 2024 Time: 10 AM Location: CS 220
Missouri S&T
Talk Title: IP, Copyright, Patent
Date: September 9th, 2024 Time: 10 AM Location: CS 220
LECTURE CANCELED
Missouri S&T
Talk Title: Internet with free speech
Date: August 26th, 2024 Time: 10 AM Location: CS 220
Missouri S&T
Talk Title: Ethics in Computing
Date: August 19th, 2024 Time: 10 AM Location: CS 220
Missouri S&T
Talk Title: Towards Emotion-Aware Software Engineering: LLMs are Here to Help
Date: May 3rd, 2024 Time: 10 AM Location: CS 222
Missouri S&T
Talk Title: Road Safety: Opportunities and Challenges Towards the Safe System Approach
Date: April 29th, 2024 Time: 10 AM Location: CS 222
Annually, road traffic crashes claim the lives of 1.35 million individuals, making them the leading cause of mortality globally for those between the ages of 5 and 29, according to the World Health Organization. The Safe System approach promoted by the USDOT emphasizes proactive efforts over the traditional reactive approach of taking action after a crash happens. Artificial Intelligence (AI) based techniques, mostly machine learning (ML) methods, can effectively support and enhance proactive Safe System design for road users. This talk will elaborate on the use of AI/ML techniques within the context of road safety countermeasure design. We will also discuss the data and deployment-related challenges associated with the application of AI/ML. Finally, we will discuss ongoing research on commercial motor vehicle crash safety in Kansas.
Dr. Husain Aziz is an assistant professor of civil engineering at Kansas State University and leads the Transportation Infrastructure and Systems (TIS) research lab focusing on the modeling, simulation, and optimization of traffic flows in smart cities, shared-used mobility, and resilient transportation systems. He received his M.S. and Ph.D. from the University of Texas at Austin and Purdue University in 2010 and 2014, respectively, and his B.Sc. degree from Bangladesh University of Engineering and Technology in 2007. Before joining K-State Civil Engineering, Dr. Aziz held an R&D scientist position at the Oak Ridge National Laboratory of the U.S. Department of Energy (2014 – 2019). He also serves as the associate editor of the IEEE Transactions on Intelligent Transportation Systems and the Journal of Intelligent Transportation Systems (Taylor & Francis). He is a full member of the ASCE, ITE, and IEEE.
Missouri S&T
Talk Title: Strategic Reasoning in Machine Learning, with Implications for Security and Fairness
Date: April 8th, 2024 Time: 10 AM Location: CS 222
The practical success of machine learning has naturally led to a critical assessment of its underlying assumptions. One assumption that has received a great deal of scrutiny is that data is generated exogenously according to some fixed (albeit unknown) distribution. In other words, a typical model of data is as a mechanical, non-living entity, with no agency of its own. When people are involved in the process of generating data, however, human agency has a propensity to violate this assumption. In this talk, I will largely consider the case where people are strategic, manipulating information observable to learned models to their ends. Strategic manipulation of learning has a common mathematical abstraction in the literature: an actor changes their features, subject to a constraint on the magnitude of the change, to maximize prediction loss. I instantiate this in two settings: security and resource allocation. In the former, the strategic actor is an attacker who aims to achieve malicious goals (such as executing a malicious payload); in the latter, the actor is someone who simply wishes to obtain the resource, say, a loan. I will show that in a security setting, common intuition about the value of this simple threat model is inconsistent with evidence. In the case of resource allocation, I consider two issues that are qualitatively distinct from security: incentive compatibility and group fairness. In particular, I will discuss how one can achieve approximate incentive compatibility through auditing, and the phenomenon of fairness reversal that arises as a consequence of strategic manipulation of features. Finally, I will present the results of a human subjects experiment that studies perceptions of fairness of the information (features) used in low-stakes simulated employment decisions, highlighting the importance of the role one plays (employer or prospective worker), as well as the differences between explicitly expressed and implicit sentiments.
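The abstraction named in the abstract, an actor changing their features within a bounded magnitude to sway the model, has a simple closed form for linear scorers in the resource-allocation instantiation: under an L2 budget, the score-maximizing change moves straight along the weight vector. The sketch below is a generic illustration; the weights, threshold, and budget are invented for the example and are not from the talk.

```python
import math

def score(x, w, b):
    """Linear scorer f(x) = w·x + b; the decision is 'approve' when f(x) >= 0."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def best_response(x, w, b, eps):
    """Strategic agent's best response: within an L2 budget `eps` on the
    feature change, the score-maximizing move is `eps` along w / ||w||
    (closed form for a linear model)."""
    norm = math.sqrt(sum(wi * wi for wi in w))
    return [xi + eps * wi / norm for xi, wi in zip(x, w)]

# Invented example: an applicant just below the approval threshold.
w, b = [3.0, 4.0], -10.0
x = [1.0, 1.5]                       # truthful features, score = -1.0
x_gamed = best_response(x, w, b, eps=0.5)
# The gamed score rises by exactly eps * ||w|| = 0.5 * 5.0 = 2.5,
# flipping the decision with no change in the underlying attributes.
```

This is the mechanism behind phenomena like fairness reversal: groups differ in how much feature change the same budget buys them, so strategic responses can redistribute outcomes.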
Yevgeniy Vorobeychik is a Professor of Computer Science & Engineering at Washington University in Saint Louis. Previously, he was an Assistant Professor of Computer Science at Vanderbilt University. Between 2008 and 2010 he was a post-doctoral research associate at the University of Pennsylvania Computer and Information Science department. He received Ph.D. (2008) and M.S.E. (2004) degrees in Computer Science and Engineering from the University of Michigan, and a B.S. degree in Computer Engineering from Northwestern University. His work focuses on game theoretic modeling of security and privacy, trustworthy machine learning, and algorithmic and behavioral game theory. Dr. Vorobeychik received an NSF CAREER award in 2017, and was invited to give an IJCAI-16 early career spotlight talk. He also received several Best Paper awards, including one of 2017 Best Papers in Health Informatics. He was nominated for the 2008 ACM Doctoral Dissertation Award and received honorable mention for the 2008 IFAAMAS Distinguished Dissertation Award.
Missouri S&T
Talk Title: Human Genome Sequence Analysis Using Commodity Clusters
Date: March 11th, 2024 Time: 10 AM Location: CS 222
Human genome sequences are very large in size and require significant compute and storage resources for processing and analysis. Variant calling is a key task performed on an individual’s genome to identify different types of variants. Knowing these variants can lead to new advances in disease diagnosis and treatment. In this talk, I will present two approaches for accelerating variant calling pipelines on a large workload of human genomes using a commodity cluster. The first approach, called AVAH, relies only on the CPUs in the cluster; the second approach, called AVAH*, exploits graphics processing units (GPUs) in the cluster. I will also present the performance benefits of AVAH and AVAH* tested on a workload of publicly available human genomes using NSF-funded testbeds, namely, CloudLab and FABRIC.
Dr. Praveen Rao is an associate professor in the Department of Electrical Engineering & Computer Science at the University of Missouri (MU). His research interests are in the areas of big data management, data science, health informatics, and cybersecurity. His research, teaching, and outreach activities have been supported by the National Science Foundation (NSF), the National Endowment for the Humanities (NEH), the National Institutes of Health (NIH), the Centers for Disease Control and Prevention (CDC), the Air Force Research Lab (AFRL), the University of Missouri System, the University of Missouri Research Board, and companies. He serves as the Director of Graduate Studies for the Ph.D. in Informatics. While he was a faculty member at the University of Missouri-Kansas City (UMKC), he received the UMKC Award for Excellence in Mentoring Undergraduate Researchers, Scholars, and Artists and the UMKC N.T. Veatch Award for distinguished research and creativity. At MU, he recently received the MU EECS Senior Faculty Excellence in Teaching and Mentoring Award. He is a Senior Member of the ACM (2020) and IEEE (2015).
Missouri S&T
Talk Title: Intelligent Next-G Wireless & Quantum Communication Networks
Date: March 4th, 2024 Time: 10 AM Location: CS 222
Next-generation communication networks aim to support large numbers of users while providing high data rates, ultra-reliability, robust security, and seamless connectivity. The engineering and design of such networks rely heavily on tools from machine learning, signal processing, and optimization. In this talk I will outline my vision for leveraging these tools to realize these next-generation networks. Firstly, I will discuss my research on anomaly detection in cellular band and privacy-preserving spectrum sharing in the Citizens Broadband Radio Service (CBRS) band. I will advocate for increased spectrum availability by supporting the FCC's efforts to offer more bands for reliable sharing. Specifically, I will delve into how machine learning and signal processing can address challenges such as detecting overlapping radar pulses amidst high noise and interference power and identifying anomalous underlay transmissions. These advancements can relax transmission power restrictions and enhance service coverage and reliability for end users. Next, I will present my work on utilizing optimization and reinforcement learning to design intelligent routing algorithms for stochastic quantum networks. These networks can offer ultra-secure services at high data rates. By deploying intelligent routing algorithms, we can further enhance the efficiency of quantum network operations. Lastly, I will discuss my research on reconfigurable intelligent surfaces (RIS) and their role in engineering wireless environments for ensuring ultra-connectivity in challenging non-line-of-sight scenarios prevalent in 6G and beyond networks. Additionally, I will highlight my future research directions, which include leveraging digital twin technology, multimodal fusion, and explainable artificial intelligence to advance the capabilities and intelligence of these future communication networks, ultimately benefiting users and society.
Vini Chaudhary has been a postdoctoral research associate in the Department of Electrical and Computer Engineering at Northeastern University since October 2021. She received her Ph.D. in Electrical Engineering from the Indian Institute of Technology Delhi (IITD) in September 2021. Her research interests involve the application of machine learning, signal processing, and optimization to next-generation wireless and quantum communication systems, targeting spectrum learning, anomaly detection (cybersecurity), reconfigurable intelligent surface-aided directional transmissions, protocol design for the quantum internet, and green sensing in the Internet of Things. Dr. Chaudhary’s green sensing work received the ‘Distinction in Doctoral Research’ award from IITD. She has authored around 8 journal/magazine papers and 8 conference papers. She is an active member of IEEE, the Communications Society (ComSoc), and the Women in Communications Engineering (WICE) group. She is a recipient of the prestigious TCS fellowship (2016-2020) and served as an invited speaker at the TCS research café in July 2021.
Missouri S&T
Talk Title: Collaborative Autonomy in Connected Systems
Date: March 1st, 2024 Time: 10 AM Location: CS 222
Robotics has a profound impact on many aspects of our daily lives, such as intelligent transportation, intelligent home services, and unmanned space exploration. With the development of robotics, there is an inevitable question that we need to answer: How do we enable collaborative perception and cooperative behaviors in robots? In this talk, I will present my past research and future plans that aim to address this question. Specifically, I will discuss (1) how we coordinate robots to collaboratively understand environments in real-world settings, such as without GPS or under limited communication constraints, and (2) how we enable robots to adaptively cooperate with each other on complex tasks, such as collaborative task scheduling, connected autonomous driving, cooperative transportation, and navigation.
Peng Gao is a postdoctoral associate at the University of Massachusetts Amherst. Previously, he held a Postdoctoral Fellowship from the Maryland Robotics Center, advised by Dr. Ming C. Lin at the University of Maryland, College Park. He obtained his Ph.D. at the Colorado School of Mines in 2022, supervised by Dr. Hao Zhang in the Human-Centered Robotics Lab. He serves as an Associate Editor for IEEE Robotics and Automation Letters (RAL). In addition, he is the first author of the paper that received the Best Paper Award on Agri-Robotics at the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
The central theme of Dr. Gao’s research is collaborative autonomy to enable team awareness for multi-robot and human-autonomy teaming. This vision seeks to empower autonomous robots with the ability to collaboratively and autonomously comprehend both unstructured environments and their human and robotic teammates, thereby surpassing human-level collaboration for complex tasks in open-world settings.
Missouri S&T
Talk Title: Empowering Graph Intelligence via Natural and Artificial Dynamics
Date: February 29th, 2024 Time: 10 AM Location: CS 222
In the era of big data, relationships between entities have become far more complex than ever before. As a relational data structure, graphs (or networks) attract much research attention for dealing with this phenomenon. In the long run, graph research and AI development face two general challenges when adapting to the complexities of the real world. First, graph structure and attributes may evolve over time (i.e., time-evolving topological structures, time-evolving node/graph attributes/labels, etc.). The resulting problems include, but are not limited to, ignored temporal correlations between entities, overlooked causality, computational inefficiency, and poor generalization. Second, the initial state of a graph may be imperfect (e.g., connection errors, sampling noise, missing features, scarce labels, redundancy, privacy leakage, and lack of robustness or interpretability). The corresponding problems include, but are not limited to, non-robustness, indiscriminative representations, and poor generalization. Hence, this talk will concentrate on how to study (1) Natural Dynamics (e.g., leveraging spatial-temporal properties of graphs) and (2) Artificial Dynamics (e.g., augmenting and pruning graph components) for Graph Mining, Graph Representations, and Graph Neural Networks to achieve task performance upgrades in accuracy, efficiency, trustworthiness, and more.
Dongqi Fu is a final-year Ph.D. candidate in Computer Science at the University of Illinois Urbana-Champaign. He is interested in developing AI, machine learning, and data mining algorithms on graph data (i.e., non-IID, relational, non-grid, non-Euclidean data) to support various applications. Dongqi has been a research scientist intern at Meta AI and the IBM T.J. Watson Research Center, working on graph deep learning and its applications. His research has been recognized by many premier conferences (e.g., ICLR, KDD, WWW, SIGIR, and CIKM) and selective awards (e.g., 2023 Rising Star in Data Science by UChicago and UC San Diego, 2023 C.W. Gear Outstanding Graduate Student by UIUC, and 2022 Top 8% Reviewer by NeurIPS).
Missouri S&T
Talk Title: Optimal Data Acquisition and Machine Learning in Distributed Systems
Date: February 28th, 2024 Time: 10 AM Location: CS 222
Data is the fuel of AI/ML and usually resides in distributed devices/silos. With the growing adoption of AI/ML in distributed systems, one needs to ensure high-quality data acquisition for effective collaborative learning. In this talk, I will motivate the problem through a few examples, and delve into my work on two thrusts: (1) designing online game-theoretical mechanisms for high-quality data acquisition, and (2) designing algorithms for effective distributed machine learning. These efforts offer theoretical guarantees and empirical efficacy for effective distributed AI/ML systems. To conclude, I will discuss two exciting directions for this domain: (1) efficient split learning, and (2) efficient unsupervised learning at edge devices.
Chao Huang received his Ph.D. from the Chinese University of Hong Kong in 2021, where he was supervised by Dr. Jianwei Huang and Dr. Randall Berry. He is now a postdoctoral researcher affiliated with the Computer Science Department at the University of California, Davis, working with Dr. Xin Liu. His research interests are AI/ML and network economics, with a recent focus on distributed learning and online learning. Chao Huang's research work has been funded by NSF and USDA.
Missouri S&T
Talk Title: Security and Privacy in Emerging Wireless and IoT Systems
Date: February 26th, 2024 Time: 10 AM Location: CS 222
With the advancement of wireless sensing and pervasive computing, extensive research is being conducted across various application domains, including the Internet of Things (IoT), smart healthcare, and associated security challenges. My research delves into the concealed aspects of IoT and mobile systems security, focusing on identifying vulnerabilities and exploring attack surfaces within IoT devices and existing mobile systems, while also providing corresponding defense strategies. The first part of my presentation introduces a novel and universal camera state inference technique, WeakCamID, the first work to identify the vulnerability of current wireless security cameras without subscription plans. This technique allows an adversary to bypass such a camera without being recorded via passive WiFi sniffing. WeakCamID can be implemented with a single smartphone and requires neither professional equipment nor a connection to the same network as the target camera. The second part of my talk describes a new attack, Phantom-CSI, against liveness detection systems that use wireless signals to authenticate environmental human activities. The proposed attack can manipulate wireless signals to exhibit the same semantic information as that measured by a co-existing camera or microphone, thus allowing spoofed video or voice signals to bypass the wireless liveness detection system. Our implementation of this attack on wireless platforms (i.e., USRPs) validates its effectiveness and robustness. Finally, I will outline emerging research directions I aim to pursue, targeting the advancement of smart homes, smart cities, and smart living, while simultaneously enhancing the security and privacy of our digital world.
Qiuye He is currently a Ph.D. candidate in Computer Science at the University of Oklahoma, under the supervision of Dr. Song Fang. Her research interests include Cybersecurity, Mobile Sensing and Computing, and Smart Healthcare. During her Ph.D. study, she has published 7 papers in premier conferences and peer-reviewed journals, including ACM MobiSys, ACM CCS (2), RAID, EAI MobiQuitous, IEEE TMC, and IEEE TDSC. She holds a US patent and is a recipient of a Best Paper Award from EAI MobiQuitous 2022. She has also received the CS Alumni Graduate Fellowship from the University of Oklahoma. Her research has been widely covered by the press, including Sooner Magazine, KOCO 5 News, Fox News, and OU News. For more information, please visit: https://qiuye.info/.
Missouri S&T
Talk Title: How Much is My Data? Data Valuation in Data Markets
Date: February 21st, 2024 Time: 10 AM Location: CS 222
The power of big data largely stems from its many secondary uses, such as enabling machine learning models and data-driven decision-making applications. However, a significant challenge lies in incentivizing and facilitating large-scale data sharing and collaboration. Emerging data markets offer a promising solution by facilitating exchanges between data owners and buyers. Yet, these markets face a critical challenge: ensuring fair revenue distribution among data owners according to their contributions. The Shapley value, a concept from cooperative game theory, provides a method for data valuation that embodies desirable fairness properties. However, its real-world application is limited by computational complexities. To this end, my research focuses on developing principled and efficient methods to compute the Shapley value, utilizing practical assumptions applicable in real-world scenarios to make fair data valuation achievable on a large scale. In my talk, I will introduce two innovative methods to efficiently compute the Shapley value: (1) leveraging a novel assumption that is widely applicable, and (2) employing a game-decomposition approach. My presentation will conclude with a vision for integrating fairness, privacy, and security into a holistic framework for data valuation, marking a significant step forward in incentivizing data sharing and thus harnessing the true power of big data.
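As a toy illustration of the valuation problem described in the abstract (an illustrative sketch, not the speaker's methods: the three-owner setup and utility numbers are hypothetical), the exact Shapley value averages each owner's marginal contribution over all orderings of the coalition, which is precisely what makes naive computation exponential in the number of owners:

```python
from itertools import permutations

def shapley_values(players, utility):
    """Exact Shapley values: each player's average marginal contribution
    over all orderings of the full coalition."""
    values = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = set()
        for p in order:
            before = utility(frozenset(coalition))
            coalition.add(p)
            values[p] += utility(frozenset(coalition)) - before
    return {p: v / len(orderings) for p, v in values.items()}

# Hypothetical utility: model accuracy obtained from each subset of owners.
acc = {frozenset(): 0.0,
       frozenset("A"): 0.6, frozenset("B"): 0.5, frozenset("C"): 0.4,
       frozenset("AB"): 0.8, frozenset("AC"): 0.7, frozenset("BC"): 0.65,
       frozenset("ABC"): 0.9}

shares = shapley_values("ABC", lambda s: acc[s])
# Fairness (efficiency) property: shares sum to the full-coalition utility.
assert abs(sum(shares.values()) - 0.9) < 1e-9
```

With n owners there are n! orderings, which is why the efficient computation methods discussed in the talk are needed for large-scale data markets.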
Xuan Luo is a Ph.D. candidate at Simon Fraser University, under the guidance of Prof. Jian Pei. Her research interests lie in the broad area of data science, with a particular focus on fair data valuation in data markets and machine learning. She is also an expert in blockchain, with a focus on the scalability and interpretability of blockchain systems. Her research findings have been published in top venues such as SIGMOD, VLDB, SIGKDD, and Knowledge and Information Systems.
Missouri S&T
Talk Title: Why Functional Hardware Description Matters
Date: February 19th, 2024 Time: 10 AM Location: CS 222
Large Language Models (LLMs) represent a significant leap in language comprehension and generation. However, they often rely on backend information retrieval (IR) systems to ensure accuracy and factual grounding. My research focuses on enhancing these essential IR systems, particularly for web and conversational search applications.
In my talk, I will discuss my research on the development of advanced knowledge-enhanced neural IR models. These models leverage Knowledge Graph semantics and text mining techniques to improve the text comprehension capabilities of intelligent search systems. Moreover, I will discuss my ongoing research which integrates LLMs into search systems. This involves harnessing the sophisticated language processing abilities of LLMs to refine and improve search results. Additionally, I will discuss my current work on personalizing conversational search assistants. This research aims to create more intuitive and user-centric search experiences by tailoring responses to individual user preferences and contexts. Finally, I will share my vision for the future of this field over the next five years. This will include insights into potential developments and the goals I aim to achieve through my research, shaping the future of search and its integration with advanced language models.
Dr. Shubham Chatterjee is a Postdoctoral Research Associate working with Dr. Jeff Dalton in the Generalized Representation and Information Learning (GRILL) Lab, a leading research group in the School of Informatics at the University of Edinburgh. Dr. Chatterjee’s research focuses on developing neural information retrieval models that leverage Knowledge Graph semantics to enhance the way information access systems understand and address a user's information needs. His research intersects Information Retrieval and NLP domains, utilizing Deep Learning to refine information access systems. Dr. Chatterjee is currently involved in research on personalized conversational assistants, entity-oriented learned-sparse IR models, and other IR research in general.
Prior to this, Dr. Chatterjee worked as a Postdoctoral Research Associate with Dr. Jeff Dalton at the University of Glasgow, Scotland, where he was part of the prestigious Glasgow IR group. Before that, he worked as a Postdoctoral Research Associate with Dr. Laura Dietz at the University of New Hampshire, Durham, USA, where he also completed his PhD under Dr. Dietz's supervision.
Missouri S&T
Talk Title: Why Functional Hardware Description Matters
Date: February 16th, 2024 Time: 10 AM Location: CS 222
There is no such thing as high assurance without high assurance hardware. High assurance hardware is essential, because any and all high assurance systems ultimately depend on hardware that conforms to, and does not undermine, critical system properties and invariants. And yet, high assurance hardware development is stymied by the conceptual gap between formal methods and the hardware description languages used by engineers. This talk presents ReWire, a functional programming language providing a suitable foundation for formal verification of hardware designs, and a compiler for that language that translates high-level designs directly into working hardware. I will also discuss ReWire's role in constructing formally verified hardware accelerators for fully homomorphic encryption as part of DARPA's DPRIVE program, and I will present a brief overview of my research in emergent execution (a.k.a. weird machines).
Dr. William Harrison received his BA in Mathematics from Berkeley in 1986 and his doctorate in Computer Science from the University of Illinois at Urbana-Champaign in 2001. From 2000-2003, he was a post-doctoral research associate at the Oregon Graduate Institute in Portland, Oregon. Until recently, Dr. Harrison was an Associate Professor in the Electrical Engineering and Computer Science department at the University of Missouri. In December 2007, he received the CAREER award from the National Science Foundation's CyberTrust program. He has unusual experience and connections in Federal Cybersecurity research, having spent almost three of the last ten years in research positions at the National Security Agency and at Oak Ridge National Laboratory, and having collaborated closely with the US Naval Research Laboratory since 2007. Currently, he is a Senior Principal Research Scientist at Two Six Technologies, Inc., transitioning his research on high assurance hardware into practice. His research interests include all aspects of programming languages research (e.g., language-based computer security, semantics, design and implementation), reconfigurable computing, formal methods, and malware analysis.
Missouri S&T
Talk Title: Towards Embodied Visual Intelligence
Date: February 12th, 2024 Time: 10 AM Location: CS 222
This talk focuses on advancing artificial intelligence to emulate human visual intelligence. The research aims to develop universal, interpretable, and carbon-efficient AI models that mimic human vision's adaptability and efficiency. Key aspects include creating AI systems that are powerful yet environmentally sustainable, enhancing models' interpretability for better understanding and trust, and addressing the limitations of large-scale models with a focus on efficient network utilization. The overarching goal is to create AI that achieves human-like visual intelligence while considering social and environmental sustainability.
Cheng Han received his bachelor's degree from Tianjin University (TJU) in 2019 and his master's degree from the Pennsylvania State University (PSU) in 2021. He is currently a Ph.D. candidate in the Chester F. Carlson Center for Imaging Science at the Rochester Institute of Technology (RIT). His research interests include computer vision, network explainability, representation learning, and efficient learning. Specifically, his research focuses on Artificial Intelligence (AI) sustainability and explainability, where he designs, implements, deploys, and evaluates AI systems that empower communities and address environmental and social issues. He has published in flagship AI and machine learning conferences such as NeurIPS, ICCV, ICLR, and IJCAI.
Missouri S&T
Talk Title: Parallel Similarity Searches on Heterogeneous Architectures
Date: February 5th, 2024 Time: 10 AM Location: CS 222
In this talk, I will discuss accelerating similarity searches and related proximity search problems using GPU and hybrid CPU+GPU architectures. Similarity searches (or range queries) are fundamental database queries that are used in many machine learning algorithms that require information about nearby points/feature vectors in the data space. However, similarity searches have data-dependent workload characteristics, which make them challenging to parallelize efficiently on GPU architectures. I will discuss my work on parallel similarity searches and related algorithms and will describe several lessons learned over the past few years.
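To make the workload characterization above concrete, here is a minimal CPU-side sketch of a range query (synthetic data; an illustration, not Dr. Gowanlock's GPU implementation). The per-query work depends on local point density, which is the data-dependent behavior that complicates load balancing on GPUs:

```python
import numpy as np

def range_query(points, queries, eps):
    """Brute-force range query: for each query point, return the indices
    of all data points within distance eps. Result-set sizes vary with
    local density -- the data-dependent workload noted in the abstract."""
    results = []
    for q in queries:
        dists = np.linalg.norm(points - q, axis=1)
        results.append(np.nonzero(dists <= eps)[0])
    return results

rng = np.random.default_rng(0)
pts = rng.random((1000, 2))            # synthetic 2-D feature vectors
hits = range_query(pts, pts[:5], eps=0.1)
# Every query point is within eps of itself.
assert all(i in hits[i] for i in range(5))
```

Each query here is independent and embarrassingly parallel, but the uneven result sizes are what make an efficient GPU mapping nontrivial.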
Mike Gowanlock is an associate professor in the School of Informatics, Computing, and Cyber Systems at Northern Arizona University. He obtained his PhD in Computer Science from the University of Hawaii at Manoa in 2015, and was a postdoctoral associate at MIT Haystack Observatory from 2015-2017. Mike Gowanlock's research interests include parallel, high performance, and data-intensive computing, general purpose computing on graphics processing units, and astronomy.
Missouri S&T
Talk Title: Crop Darpan: Exploiting Data Science to Empower a Farmer to Diagnose Farm Problems
Date: January 26th, 2024 Time: 10 AM Location: CS 222
Despite the variety of traditional and IT-based agricultural information delivery approaches, the majority of Indian farmers do not receive actionable agricultural information in a timely and regular manner. The IT-based agricultural information delivery systems developed over the last two decades can be categorized as pull-based, push-based, and hybrid systems. Pull-based systems assume that farmers and other stakeholders pull the information themselves; call centers and several web portals fall under this category. Experience shows that the majority of poor and marginal farmers are not covered by such systems, as they are unable to pull actionable agricultural information due to low knowledge levels, communication gaps, and perceptual problems. Push-based systems vary in their degree of generalization versus personalization: radio, video, and proactive SMS- and voice-based services push generic information to farmers and so are ineffective.
To address the lab-to-land gap in agriculture, research efforts have been underway at IIIT Hyderabad since 2004 to build IT-based agro-advisory systems that deliver actionable agro-advice to every farm field in India in a timely manner. In this connection, we have built the eSagu system and a village-level eSagu system, and we are now investigating the Crop Darpan system. The Crop Darpan app is available for cotton and rice crops on the Google Play Store and Apple App Store. In this talk, I will share the experiences of building these systems and our future plans.
Prof. P. Krishna Reddy is a faculty member at IIIT Hyderabad, where he heads the Agricultural Research Center and is a member of the Data Sciences and Analytics Center research team. From 2013 to 2015, he served as Program Director, ITRA-Agriculture & Food, Information Technology Research Academy (ITRA), Government of India. From 1997 to 2002, he was a research associate at the Center for Conceptual Information Processing Research, Institute of Industrial Science, University of Tokyo. From 1994 to 1996, he was a faculty member in the Division of Computer Engineering, Netaji Subhas Institute of Technology, Delhi. During the summer of 2003, he was a visiting researcher at the Institute for Software Research International, School of Computer Science, Carnegie Mellon University, Pittsburgh, USA. He received his M.Tech. and Ph.D. degrees in computer science from Jawaharlal Nehru University, New Delhi, in 1991 and 1994, respectively.
His research areas include data mining, database systems, and IT for agriculture. He has published about 200 refereed research papers, including 35 journal papers, 4 book chapters, and 14 edited books. He is a steering committee member of the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD) and Database Systems for Advanced Applications (DASFAA) conference series, and has chaired the steering committee of the Big Data Analytics (BDA) conference series since 2017. He was proceedings chair of COMAD 2008, workshop chair of KDRS 2010, media and publicity chair of KDD 2015, and general chair of BDA 2017. As general chair, he organized both the 14th and 21st Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD 2010 and PAKDD 2021), the Third National Conference on Agro-Informatics and Precision Agriculture 2012 (AIPA 2012), both the 5th and 10th International Conference on Big Data Analytics (BDA 2017 and BDA 2022), and the 27th International Conference on Database Systems for Advanced Applications (DASFAA 2022). He has delivered several invited and panel talks at reputed conferences and workshops in India and abroad.
He has received several awards and recognitions, and has executed 16 research projects, raising research funding of about 80 million Indian rupees. Since 2004, he has been investigating how developments in IT can be extended to build efficient agricultural knowledge transfer systems. He developed the eSagu system, an IT-based farm-specific agro-advisory system that has been field-tested in hundreds of villages on about 50 field and horticultural crops. He also built the eAgromet system, an IT-based agro-meteorological advisory system that provides risk mitigation information to farmers, and conceptualized the notion of Virtual Crop Labs to improve applied skills for extension professionals. Most recently, he built the Crop Darpan system, a crop diagnostic tool for farmers, with funding support from the India-Japan Joint Research Laboratory Program. He has received two best paper awards. The eSagu system has earned several recognitions, including the CSI-Nihilent e-Governance Project Award in 2006 and the Manthan Award in 2008, and was a finalist for the Stockholm Challenge Award in 2008. He received the PAKDD Distinguished Service Award in 2021.
Missouri S&T
Talk Title: AI/ML-Driven Image Analytics for Scientific Discovery in Life and Material Sciences
Date: January 22nd, 2024 Time: 10 AM Location: CS 222
Scientific understanding and discovery in biology, medicine, basic life sciences, and material sciences heavily depend on imaging. Imaging data in these fields is growing exponentially in size and carries ever more information, thanks to the increasing power of imaging hardware, advances in biomarker research, rising adoption of imaging technologies, and the emergence of new applications. The advent of higher-resolution imaging modalities, such as electron microscopy, allows scientists to distinguish structures at the molecular level, leading not only to increased scientific understanding but also to opportunities in material discovery, drug discovery, and precision medicine.
The volume and complexity of imaging data in life and material sciences offer opportunities but also challenges, particularly given the critical and high-stakes nature of these fields. There is a growing need for robust, reliable, and precise automated image analytics to take full advantage of scientific and clinical data and to address the pressing needs of their applications. Artificial intelligence/machine learning (AI/ML) approaches have recently been revolutionizing the data sciences. These approaches offer new opportunities to address these needs, to drive the scientific discovery process in life and material sciences, and to empower clinical decision making.
In this talk, I will present a brief review of my lab’s interdisciplinary efforts at University of Missouri towards development of quantitative image data analytics methods to facilitate scientific understanding and discovery in life and material sciences, and to promote evidence-based medicine.
Filiz Bunyak Ersoy received her B.S. and M.S. degrees in Control and Computer Engineering from Istanbul Technical University, Turkey, and her Ph.D. degree in Computer Science from Missouri University of Science and Technology. She is an Assistant Professor of Computer Science at the University of Missouri-Columbia. Her research interests include image processing, computer vision, artificial intelligence, and machine learning for biomedical image analysis and visual surveillance, with special emphasis on segmentation, motion analysis, level set, and deep learning methods. She served as chair for the BIBM Workshop on Machine Learning Approaches in High Resolution Microscopy Imaging, co-chair for the ICPR/ICCV Workshop on Analysis of Aerial Motion Imagery, and program co-chair for the IEEE Applied Imagery Pattern Recognition Workshop. She serves on the editorial boards of the MDPI Sensors Journal and ACM Computing Surveys.
Missouri S&T
Talk Title: Robust Multi-view Visual Learning: A Knowledge Flow Perspective
Date: November 27th, 2023 Time: 10 AM Location: CS 220
Multi-view data are extensively accessible nowadays thanks to various types of features, viewpoints, and different sensors. For example, the most popular commercial depth sensor, Kinect, uses both visible light and near-infrared sensors for depth estimation; automatic driving uses both visual and radar/lidar sensors to produce real-time 3D information on the road; and face analysis algorithms prefer face images from different views for high-fidelity reconstruction and recognition. All of these tend to facilitate better data representation in different application scenarios. This talk covers most multi-view visual data representation approaches from two knowledge-flow perspectives, i.e., knowledge fusion and knowledge transfer, spanning from conventional multi-view learning to zero-shot learning, and from transfer learning to open-set domain adaptation.
Zhengming Ding received the B.Eng. degree in information security and the M.Eng. degree in computer software and theory from the University of Electronic Science and Technology of China (UESTC), China, in 2010 and 2013, respectively. He received the Ph.D. degree from the Department of Electrical and Computer Engineering, Northeastern University, USA, in 2018. He has been a faculty member in the Department of Computer Science, Tulane University since 2021. Prior to that, he was a faculty member in the Department of Computer, Information and Technology, Indiana University-Purdue University Indianapolis. His research interests include transfer learning, multi-view learning, and deep learning. He received the National Institute of Justice Fellowship during 2016-2018, and was the recipient of a best paper award (SPIE 2016) and a best paper candidate award (ACM MM 2017). He is currently an Associate Editor of the Journal of Electronic Imaging (JEI) and IEEE Transactions on Circuits and Systems for Video Technology (TCSVT).
Missouri S&T
Talk Title: FedOpt++: Federation Beyond Uniform Client Selection, Optimization Beyond Simple Minimization
Date: October 30th, 2023 Time: 10 AM Location: CS 220
Large-scale edge-based collection of training data in many machine learning (ML) applications calls for communication-efficient distributed optimization algorithms, such as those used in federated learning (FL), to process the data. Our work addresses some limitations in the current FL literature. First, previous analyses of FedAvg (the most commonly used FL algorithm) assume either full participation of clients or partial participation with a uniform sampling of clients. However, in practical FL systems, client availability often follows a natural cyclic pattern. We provide (to our knowledge) the first theoretical framework to analyze the convergence of FedAvg with cyclic client participation. Another limitation of FL is the paucity of work on problems beyond simple minimization settings. Minimax optimization can model several ML applications such as GANs, multi-agent games, and reinforcement learning. Stochastic gradient descent ascent (SGDA) is one of the most common algorithms for minimax optimization. However, its convergence is not well understood in nonconvex settings. We analyze SGDA (and its variants) for several classes of nonconvex-concave and nonconvex-nonconcave minimax problems. Further, we provide novel and tighter analysis of SGDA-like methods in the federated setting, in the process improving existing convergence and communication guarantees.
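The SGDA algorithm discussed above can be sketched on a toy saddle-point problem (a minimal illustration under simplifying assumptions, not the speaker's federated analysis): descend on the minimization variable, ascend on the maximization variable, with Gaussian noise standing in for minibatch stochasticity:

```python
import numpy as np

def sgda(grad_x, grad_y, x0, y0, lr=0.1, steps=500, noise=0.01, seed=0):
    """Stochastic gradient descent ascent: descend on x, ascend on y."""
    rng = np.random.default_rng(seed)
    x, y = x0, y0
    for _ in range(steps):
        gx = grad_x(x, y) + noise * rng.standard_normal()
        gy = grad_y(x, y) + noise * rng.standard_normal()
        x -= lr * gx   # gradient descent on the minimization variable
        y += lr * gy   # gradient ascent on the maximization variable
    return x, y

# Toy strongly-convex-strongly-concave objective:
# f(x, y) = 0.5*x**2 + x*y - 0.5*y**2, with saddle point at (0, 0).
x, y = sgda(lambda x, y: x + y, lambda x, y: x - y, x0=2.0, y0=-1.5)
```

On this well-behaved objective the iterates spiral into the saddle point; the nonconvex-concave and nonconvex-nonconcave settings analyzed in the talk are precisely where such simple convergence behavior breaks down.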
Pranay Sharma is a research scientist in the Department of Electrical and Computer Engineering (ECE) at Carnegie Mellon University. Pranay finished his Ph.D. in ECE from Syracuse University in 2021, where he worked on distributed localization and target tracking, as well as distributed optimization. Before that, he finished his B.Tech-M.Tech dual degree in Electrical Engineering from IIT Kanpur in 2013. His current research focuses broadly on distributed machine learning and optimization. Specifically, his work focuses on Federated Learning, minimax optimization, differential privacy, and reinforcement learning.
Missouri S&T
Talk Title: Data Mining in the Age of Big Data and AI
Date: October 23rd, 2023 Time: 10 AM Location: CS 220
In this talk I will delve into our recent endeavors on trajectory data mining, emphasizing its contemporary and evolving nature that stems from the incorporation of novel deep learning methods for addressing long-established problems. In the first part of the talk, I will present our recent work on trajectory-user linking (TUL), a trajectory classification problem aimed at connecting anonymous trajectories to their respective users. I will introduce TULHOR, a TUL model inspired by BERT, a popular language representation model. One of the key innovations of TULHOR lies in its utilization of a higher-order mobility flow data representation, facilitated by geographic area tessellation. This approach helps TULHOR overcome the problem of trajectory sparsity, generalize more effectively, and outperform several robust baselines. In the second part of the talk, I will present our recent work on trajectory dictionary construction, which aims to identify a compact set of fundamental trajectory building blocks known as pathlets. These pathlets have the potential to represent a vast array of trajectories and find applications in diverse tasks such as trajectory compression. I will introduce PathletRL, a deep reinforcement learning method employing Deep Q Networks (DQN) to approximate the utility of a pathlet dictionary using newly introduced metrics encompassing trajectory loss and representability. Notably, PathletRL can significantly reduce the size of the constructed dictionary, achieving reductions of up to 65.8% compared to alternative methods. Furthermore, our findings demonstrate that only half of the pathlets in the dictionary are necessary to reconstruct a substantial 85% of the original trajectory data.
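The geographic tessellation behind TULHOR's higher-order mobility representation can be sketched with a simple square grid (an illustrative simplification; the cell size and duplicate-collapsing rule here are assumptions, not the published design): each GPS point maps to a discrete cell ID, yielding a token sequence that a BERT-style model can consume:

```python
def tessellate(trajectory, cell_size=0.01):
    """Map a sequence of (lat, lon) points onto square grid cells,
    producing token-like cell IDs. Consecutive duplicates are collapsed
    so a slow-moving user does not repeat the same cell token."""
    tokens = []
    for lat, lon in trajectory:
        cell = (int(lat // cell_size), int(lon // cell_size))
        if not tokens or tokens[-1] != cell:
            tokens.append(cell)
    return tokens

traj = [(37.001, -122.001), (37.002, -122.002), (37.015, -122.001)]
tokens = tessellate(traj)   # first two points share a cell: two tokens
```

Coarsening the raw coordinates this way is one route around trajectory sparsity: many distinct point sequences collapse onto the same, much denser, cell vocabulary.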
Manos Papagelis is an Associate Professor of Electrical Engineering and Computer Science (EECS) at the Lassonde School of Engineering, York University, Canada. His research interests include data mining, graph mining, NLP, machine learning, big data analytics, and knowledge discovery. He holds a PhD in Computer Science from the University of Toronto, Canada, and an MSc and a BSc in Computer Science from the University of Crete, Greece. Prior to joining York University, he was a postdoctoral research fellow at the University of California, Berkeley, a research intern at Yahoo! Labs, Barcelona, and a research fellow at the Institute of Computer Science, FORTH, Greece. He has published widely in premier journals and conferences of his discipline, and he is the recipient of two best paper awards (IEEE MDM 2018; IEEE MDM 2020), an outstanding reviewer award (ACM CIKM 2017), and the EAAI journal 2005-2010 top cited article award. He has served as a program committee member for numerous ACM and IEEE conferences, is an associate editor of the journal Computational Intelligence, and is program committee co-chair of IEEE MDM 2024. He has lectured at the University of Toronto, the University of California, Berkeley, and York University, and he received the Lassonde Educator of the Year award (2021).
Missouri S&T
Talk Title: Protecting Human Users from Misused AI
Date: October 16th, 2023 Time: 10 AM Location: CS 220
Recent developments in machine learning and artificial intelligence have taken nearly everyone by surprise. The arrival of arguably the most transformative wave of AI did not bring us smart cities full of self-driving cars, or robots that do our laundry and mow our lawns. Instead, it brought us over-confident token predictors that hallucinate, deepfake generators that produce realistic images and video, and ubiquitous surveillance. In this talk, I’ll describe some of our recent efforts to warn, and later defend against some of the darker side of AI. In particular, I will tell the story of how our efforts to disrupt unauthorized facial recognition models led unexpectedly to Glaze, a tool to defend human artists against art mimicry by generative image models. I will share some of the ups and downs of implementing and deploying an adversarial ML tool to a global user base, and reflect on mistakes and lessons learned.
Ben Zhao is Neubauer Professor of Computer Science at University of Chicago. He completed his Ph.D. at U.C. Berkeley (2004), and B.S. from Yale (1997). He is a Fellow of the ACM, and a recipient of the NSF CAREER award, MIT Technology Review's TR-35 Award (Young Innovators Under 35), USENIX Internet Defense Prize, ComputerWorld Magazine's Top 40 Tech Innovators award, IEEE ITC Early Career Award, and Faculty awards from Google, Amazon, and Facebook. His work has been covered by media outlets including New York Times, CNN, NBC, BBC, MIT Tech Review, Wall Street Journal, Forbes, and New Scientist. He has published over 180 articles in areas of security and privacy, machine learning, networking, and HCI. He served as TPC (co-)chair for the World Wide Web conference (WWW 2016) and ACM Internet Measurement Conference (IMC 2018). He also serves on the steering committee for HotNets.
Missouri S&T
Talk Title: Zero Trust for Tactical Edge Network Environments
Date: October 2nd, 2023 Time: 10 AM Location: CS 220
Tactical edge network environments are critical for deploying applications in, e.g., military, disaster response, and industrial manufacturing settings. Given the dynamic, as well as Denied, Disrupted, Intermittent, and Limited Impact (DDIL), nature of these environments, a resource-aware security approach is essential to address edge resource constraints and enable real-time decision-making. The Zero Trust (ZT) security paradigm can be used to enforce strict access controls, continuously verify entities, and mitigate unauthorized access, tampering, and data integrity issues. However, ZT security principles are typically developed for unconstrained data center environments with reliable networking and abundant computing power, and must be transformed before they are suitable for a tactical edge network setting. In this talk, a risk-based ZT scale approach will be presented that tailors security measures to scenario-associated risk levels while keeping resource overheads low. Specifically, an approach will be presented that devises a Bayesian Network (BN) model to evaluate the risk of a communication request based on metrics indicating possible attacks. The talk will conclude with findings that demonstrate the effectiveness and adaptability of our risk-based ZT scale approach in ensuring secure and efficient operations within tactical edge network environments.
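The Bayesian risk evaluation described in the abstract can be illustrated with a deliberately tiny model (the priors, likelihoods, and metric names below are hypothetical; the actual BN structure and attack metrics are the speaker's contribution). Treating each observed metric as conditionally independent given the attack state, the posterior risk of a request follows from Bayes' rule:

```python
def risk_posterior(p_attack, likelihoods, evidence):
    """Posterior P(attack | evidence) for a naive-Bayes style risk model.
    likelihoods[m] = (P(m=True | attack), P(m=True | no attack))."""
    p_e_attack, p_e_benign = 1.0, 1.0
    for metric, observed in evidence.items():
        pa, pb = likelihoods[metric]
        p_e_attack *= pa if observed else 1 - pa
        p_e_benign *= pb if observed else 1 - pb
    joint_attack = p_attack * p_e_attack
    joint_benign = (1 - p_attack) * p_e_benign
    return joint_attack / (joint_attack + joint_benign)

# Hypothetical prior and likelihoods; a real BN would be learned or elicited.
lik = {"anomalous_traffic": (0.8, 0.1), "failed_auth": (0.7, 0.05)}
risk = risk_posterior(0.02, lik,
                      {"anomalous_traffic": True, "failed_auth": True})
```

A risk-based ZT scale could then map this posterior onto tiered security measures, spending stronger (costlier) verification only on high-risk requests.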
Prasad Calyam is the Greg L. Gilliom Professor of Cybersecurity in the Department of Electrical Engineering and Computer Science at University of Missouri-Columbia, and Director of the Center for Cyber Education, Research and Infrastructure (Mizzou CERI). His research and development areas of interest include: Cloud Computing, Machine Learning, Artificial Intelligence, Cyber Security, and Advanced Cyberinfrastructure. Previously, he was a research director at the Ohio Supercomputer Center/Ohio Academic Resources Network at The Ohio State University. He has published over 200 peer-reviewed papers in various conference and journal venues. As the Principal Investigator, he has successfully led teams of graduate, undergraduate and postdoctoral fellows in Federal, State, University and Industry sponsored R&D projects totaling over $30 Million. His research sponsors include: National Science Foundation (NSF), Department of Energy (DOE), National Security Agency (NSA), Department of State (DOS), Army Research Lab (ARL), VMware, Cisco, Raytheon-BBN, Dell, Verizon, IBM and others. His basic research and software on multi-domain network measurement and monitoring has been commercialized as ‘Narada Metrics’. He is a Senior Member of IEEE. He currently serves as an Associate Editor for IEEE Transactions on Network and Service Management.
Missouri S&T
Talk Title: Next Generation Cyberinfrastructure for Space Weather Analytics
Date: September 25th, 2023 Time: 10 AM Location: CS 220
Space weather refers to variations in conditions in the space and near-Earth environment that are a consequence of charged particles and electromagnetic radiation emitted from the sun. The sun is the main source of space weather. Sudden bursts of plasma and magnetic field structures from the sun's atmosphere called coronal mass ejections together with sudden bursts of radiation, or solar flares, all cause space weather effects here on Earth. Space weather can have a significant impact on our planet including power outages, communications disruptions, satellite damage, and astronaut safety. Space weather analytics is primarily concerned with understanding and predicting the complex and interconnected space weather phenomena.
While researchers take many different paths for analyzing the solar phenomena, data-intensive analytics applications are commonly used in recent decades. These approaches require a robust amalgamation of data modeling, predictive analytics and system design. In this talk, we will dive into how we are addressing the task of building the next generation of data and prediction cyberinfrastructure for space weather analytics. We will discuss the common characteristics of solar data, how they can be used under operational requirements of space weather analytics, and present our solar energetic particle event forecasting system.
Dr. Berkay Aydin is an Assistant Professor of Computer Science at Georgia State University (GSU), Atlanta. He received his PhD from GSU and his BS degree from Bilkent University. As a faculty member in the Data Mining group, his lab works on broad areas of data mining and knowledge discovery from solar big data, including computer vision, indexing, data management and integration, deep learning, frequent pattern mining, and time series mining. His research has been funded by NASA and NSF.
Web page link: https://www.berkayaydin.net/home
Missouri S&T
Talk Title: Free-Space Gesture Interaction Through Communication and Sensing Conversion
Date: September 18th, 2023 Time: 10 AM Location: CS 220
Recent advances in RF sensing have demonstrated that communication systems (e.g., WiFi, cellular, LoRa, Bluetooth) may provide not only connectivity but also sensing and environmental perception capabilities. Therefore, RF convergence – realizing sensing capabilities utilizing resources originally reserved for communication – has gained attention as a potential solution to better utilize the available spectrum. The proposed designs target architectures where sensing and communication are co-designed at the physical and Medium Access Control (MAC) layers. Supported by edge intelligence, this enables new possibilities for free-space (gesture) interaction, activity recognition or localization, and tracking in IoT-augmented smart computing environments. The talk will summarize recent developments in RF sensing and highlight challenges and opportunities for interaction in the context of HCI.
Stephan Sigg is an Associate Professor at Aalto University in the Department of Information and Communications Engineering, where he heads the Ambient Intelligence research group. Prior to Aalto University, he worked in Germany and in Japan. Professor Sigg’s research focuses on the design, analysis, and optimization of (randomized) algorithms in mobile and pervasive computing domains, in particular on (wireless) networking and security. In recent years he has worked on problems related to usable security, device-free human sensing, proactive context computing, distributed adaptive beamforming, communication and sensing convergence, and physical-layer function computation.
Missouri S&T
Talk Title: Computational Molecular Design in Plant Biotechnology at Bayer Crop Science
Date: September 11th, 2023 Time: 10 AM Location: CS 220
In the near future, the world will face many challenges. A growing population, as well as climate change, creates challenges to our food supply. In my talk, I will discuss how we are addressing these challenges at Bayer Crop Science. Specifically, I will discuss how we are driving innovation and new products in the Computational Molecular Design Team in the Data Science and Analytics Group. The Computational Molecular Design Team designs proteins for the insect control and herbicide tolerance pipeline, designs synthetic elements for expression, and works with the Biotransformation Group. Highlights from projects in each of these areas will be discussed during the talk.
Christy Taylor is the Computational Protein Design Lead and Science Fellow at Bayer Crop Science in St. Louis, MO. Christy graduated summa cum laude from Missouri University of Science and Technology with a B.S. degree in Chemistry. Christy received the NSF Predoctoral Fellowship and the Anna Fuller Cancer Research Predoctoral Fellowship for her Ph.D. studies. Christy received a Ph.D. in Biology at MIT with Dr. Amy Keating with her doctoral thesis titled “Redesigning Specificity in Miniproteins”. In Dr. Keating’s lab, Christy leveraged computational and experimental protein design strategies to study protein oligomerization and coiled coil proteins. Christy did her postdoctoral studies at Washington University in St. Louis with Dr. Garland Marshall. While in Dr. Marshall’s lab, Christy focused on computational chemistry projects around GPCRs. Christy was awarded the NIH National Research Service Award Postdoctoral Fellowship and the W.M. Keck Postdoctoral Fellowship in Molecular Medicine for her post-doctoral work. Wanting to learn more about computational biology, Christy took a staff scientist position at the Genome Institute at Washington University School of Medicine where she did comparative genomics of nematodes. Christy joined Monsanto in 2012 in the Chemistry Division where she did bioinformatics and small molecule research. In 2018, Christy transitioned over to the Computational Protein Design Team in the Biotechnology organization. Christy’s team designs proteins for insect control and herbicide tolerance in the major row crops. In 2022, Christy’s team expanded to also encompass synthetic element design and protein expression optimization. Christy has over 19 publications and 6 patents in the areas of bioinformatics, computational chemistry, protein design, agrochemicals and insect control.
At Monsanto and Bayer, she has received several awards including the Bayer Eclipse Award, Bayer Life Science Collaboration Competition Winner, Bayer Impact Award, Monsanto ICE (Inspire, Communicate, Execute) Award.
Missouri S&T
Talk Title: Knowledge Graphs and Large Language Models: Friends or Foes?
Date: August 21st, 2023 Time: 10 AM Location: CS 220
With the advent of large language models, previous state-of-the-art performance has been exceeded on a number of difficult AI challenges in the natural language processing community, including commonsense reasoning, question answering (and other rich forms of information retrieval), text summarization, and computational creativity. Knowledge graphs had been used to address some of these problems before. This talk will tackle the question of whether knowledge graphs are competitive or synergistic with applications of LLMs. In other words, are knowledge graphs obsolete, and should they be thought of as rivals to LLMs, or will they continue to play an important role in AI? I will view this question through both a practical and a theoretical perspective, drawing on experiences from academic and industry collaborations.
Mayank Kejriwal is a research assistant professor and research team leader at the University of Southern California. He holds joint appointments in the USC Information Sciences Institute and the Department of Industrial & Systems Engineering, and directs a group on Artificial Intelligence and Complex Systems. He is the author of four books, including an MIT Press textbook on knowledge graphs.
Missouri S&T
Talk Title: Some Thoughts and Possible Future Directions Regarding Software and Hardware Safety and Security
Date: May 1st, 2023 Time: TBD
Are there fundamental limitations on the safety, security, and timeliness of heterogeneous cyber-physical systems? Why are there so few safety+security engineers? Why is it difficult for security engineers to design for safety and for safety engineers to design for security? Why are security and safety so challenging to teach and learn in the first place? This talk will address these broad topics and suggest promising future directions for research while drawing from existing published work, preliminary ideas, and observations which have not yet been quantified.
Eugene Vasserman is an Associate Professor in the Department of Computer Science at Kansas State University, specializing in the security of distributed systems. He is also the director of the Kansas State University Center for Cybersecurity and Trustworthy Systems and runs the Cybersecurity degree program. He received a B.S. in Biochemistry and Neuroscience with a Computer Science minor from the University of Minnesota in 2003. His M.S. and Ph.D. in Computer Science are also from the University of Minnesota, in 2008 and 2010, respectively. His current research is chiefly in the area of privacy, anonymity, censorship resistance, and socio-technical aspects of security. His research has resulted in over 45 peer-reviewed publications in computer science, psychology, and education, with work spanning the gamut from medical cyber-physical systems, authorization with integrated break-glass capabilities, security vulnerabilities emergent from the BGP infrastructure of the internet, blockchains, energy depletion attacks in low-power systems, and secure hyper-local social networking, to privacy and censorship resistance on a global scale (systems capable of supporting up to a hundred billion users). He has collaborated with the U.S. Food and Drug Administration on medical device cybersecurity and contributed to FDA policies on building safety-focused cybersecurity into legacy and future medical devices and systems-of-systems. In 2013, he received the NSF CAREER award for work on secure next-generation medical systems. He contributed to the UL 2900 standardization process for cybersecurity of network-connectable devices, the AAMI interoperability working group, and the ANSI / AAMI / UL 2800 standards effort for medical device interoperability. He has served on numerous program committees including USENIX Security, ACSAC, PETS/PoPETs, USEC, ASIACCS, HotWiSec, WPES, and SecureComm.
Missouri S&T
Talk Title: GPT-4, Human Reasoning and Formal Language
Date: April 24th, 2023 Time: 10:00 AM Location: CS 209
Recent advances in Large Language Models have taken many by surprise, including both the general public and those who develop these very systems. These models have already demonstrated remarkable capabilities, particularly in their ability to follow complex instructions and provide highly plausible reasoning behind their generated output. Although the debate on the impact of such systems on our society has only just begun, I believe they can be used to uncover insights about the human mind that were previously unattainable. In this talk, along with my co-presenter GPT-4, I will share some preliminary experiments of a predominantly phenomenological nature. I will demonstrate how certain behaviors exhibited by GPT-4 have intriguing implications for human reasoning and problem-solving. Additionally, I will discuss some fundamental limitations of these systems and explore potential avenues for overcoming these challenges in the future.
Missouri S&T
Talk Title: Modern Organ Exchanges: Market Designs, Algorithms, and Opportunities
Date: April 17th, 2023 Time: 10 AM Location: CS 209
I will share experiences from working on organ exchanges for the last 18 years, ranging from market designs to new optimization algorithms to large-scale fielding of the techniques and even to computational policy optimization. In kidney exchange as originally conceived, patients with kidney disease obtained compatible donors by swapping their own willing but incompatible donors. I will discuss many modern generalizations of this basic idea. For one, I will discuss never-ending altruist donor chains that have become the main modality of kidney exchanges worldwide and have led to over 10,000 life-saving transplants. Since 2010, our algorithms have been running the national kidney exchange for the United Network for Organ Sharing, which has grown to include 80% of the transplant centers in the US. Our algorithms autonomously make the transplant plan each week for that exchange, and were used by two private exchanges before that. I will summarize the state of the art in algorithms for the batch problem, approaches for the dynamic problem where pairs and altruists arrive and depart, techniques that find the highest-expected-quality solution under the real challenge of unforeseen pre-transplant incompatibilities, algorithms for pre-match compatibility testing, and approaches for striking fairness-efficiency trade-offs. I will describe the FUTUREMATCH framework that combines these elements and uses data and supercomputing to optimize the policy from high-level human value judgments. The approaches therein may be able to serve as ways of designing policies for many kinds of complex real-world AI systems. I will also discuss the idea of liver lobe exchanges and cross-organ exchanges, and how they have started to emerge in practice.
Tuomas Sandholm is the Angel Jordan University Professor of Computer Science at Carnegie Mellon University. His research focuses on the convergence of artificial intelligence, economics, and operations research. He is Co-Director of CMU AI. He is the Founder and Director of the Electronic Marketplaces Laboratory. In addition to his main appointment in the Computer Science Department, he holds appointments in the Machine Learning Department, the Ph.D. Program in Algorithms, Combinatorics, and Optimization (ACO), and the CMU/UPitt Joint Ph.D. Program in Computational Biology. In parallel with his academic career, he was Founder, Chairman, first CEO, and CTO/Chief Scientist of CombineNet, Inc. from 1997 until its acquisition in 2010. During this period the company commercialized over 800 of the world’s largest-scale generalized combinatorial multi-attribute auctions, with over $60 billion in total spend and over $6 billion in generated savings. He is Founder and CEO of Optimized Markets, Inc., which is bringing a new optimization-powered paradigm to advertising campaign sales, pricing, and scheduling.
Sandholm has developed the leading algorithms for several general classes of game with his students and postdocs. The team that he leads is the multi-time world champion in computer heads-up no-limit Texas hold’em, which was the main benchmark and decades-open challenge problem for testing application-independent algorithms for solving imperfect-information games. Their AI Libratus became the first to beat top humans at that game. Then their AI Pluribus became the first and only AI to beat top humans at the multi-player game. That is the first superhuman milestone in any game beyond two-player zero-sum games. He is Founder and CEO of Strategic Machine, Inc., which provides solutions for strategic reasoning under imperfect information in a broad set of applications ranging from poker to other recreational games to business strategy, negotiation, strategic pricing, finance, cybersecurity, physical security, auctions, political campaigns, and medical treatment planning. He is also Founder and CEO of Strategy Robot, Inc., which focuses on defense, intelligence, and other government applications.
Among his honors are the Minsky Medal, McCarthy Award, Engelmore Award, Computers and Thought Award, inaugural ACM Autonomous Agents Research Award, CMU’s Allen Newell Award for Research Excellence, Sloan Fellowship, NSF Career Award, Carnegie Science Center Award for Excellence, and Edelman Laureateship. He is Fellow of the ACM, AAAI, INFORMS, and AAAS. He holds an honorary doctorate from the University of Zurich.
Missouri S&T
Talk Title: A Quantum Framework for Topological Data Analysis
Date: April 3rd, 2023 Time: 10 AM Location: CS 209
Topological data analysis (TDA) methods capture shape properties of data that can be useful for classification and clustering problems. TDA methods first extract topological features from the data – like the number of connected components, holes and voids – via persistent homology, tracking them across different scales or resolutions. Classical algorithms for computation of persistent homology have been around for some time, but they often have to deal with running time and memory requirements that grow exponentially with respect to the number of data points. In recent years, a few quantum algorithms for persistent homology have appeared, which could outperform their classical counterparts once quantum computers get larger and more tolerant to errors.
The topological features obtained this way are then displayed in persistence diagrams that show when each feature appears and disappears. These 2-dimensional diagrams are a great way to summarize large and high-dimensional data sets in order to apply machine learning algorithms for classification and clustering. But to do this one must define distances on the space of persistence diagrams and compute them; perhaps the best known are the Wasserstein and bottleneck distances. Something many of these distances have in common is the need to minimize a cost function over all the possible ways to match the points of two persistence diagrams. Quantum algorithms for optimization, like the Quantum Approximate Optimization Algorithm (QAOA) and the Quantum Alternating Operator Ansatz (QAOA+), are currently a very hot topic, so it comes as no surprise that there are quantum approaches for estimating the distance between persistence diagrams as well.
In this talk I will present some of these quantum algorithms and explain how one could use them to solve classification problems, as well as the advantages that they promise once quantum computers are more advanced.
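As an aside, the matching formulation behind these diagram distances can be made concrete in a few lines of code. The sketch below (not from the talk; function names are illustrative) computes the bottleneck distance between two tiny persistence diagrams by brute force: each point may be matched to a point of the other diagram or sent to the diagonal, and we minimize the largest single-pair cost over all matchings. This is only practical for very small diagrams, which is exactly why faster classical and quantum approaches are of interest.

```python
from itertools import permutations

def _linf(p, q):
    # L-infinity distance between two (birth, death) points
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def _diag_dist(p):
    # L-infinity distance from a point to the diagonal y = x
    return abs(p[1] - p[0]) / 2.0

def bottleneck(A, B):
    """Brute-force bottleneck distance between two small persistence
    diagrams A and B, each a list of (birth, death) pairs. Each side is
    padded with 'diagonal' slots so every point can also be matched to
    the diagonal; we minimize the maximum single-pair cost over all
    matchings (factorial time -- tiny diagrams only)."""
    n, m = len(A), len(B)
    size = n + m
    if size == 0:
        return 0.0

    def cost(i, j):
        if i < n and j < m:      # real point matched to real point
            return _linf(A[i], B[j])
        if i < n:                # A's point sent to the diagonal
            return _diag_dist(A[i])
        if j < m:                # B's point sent to the diagonal
            return _diag_dist(B[j])
        return 0.0               # diagonal slot matched to diagonal slot

    return min(max(cost(i, p[i]) for i in range(size))
               for p in permutations(range(size)))
```

For example, `bottleneck([(0, 2)], [(0, 2.2)])` matches the two points directly at cost 0.2, which beats sending both to the diagonal. Production libraries (e.g., GUDHI's `bottleneck_distance`) replace this enumeration with far more efficient matching algorithms.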
Bernardo Ameneyro was born in Mexico City in 1997 and later moved to Colima (on the Pacific coast of Mexico), where he obtained his BSc in Mathematics at the University of Colima in 2019. After that, he started a PhD program in Mathematics at the University of Tennessee in Knoxville. He is currently a GRA in the Maroulas Research Group at UTK, and his research focuses on quantum algorithms for topological data analysis.
Software development revolves around human interactions, where emotions profoundly influence developers' communication, decision-making, and collaborative dynamics. Nonetheless, traditional practices and tools frequently overlook these emotional facets. Our research harnesses the transformative potential of large language models (LLMs) in automating emotion-aware practices, ranging from precise emotion recognition to causal analysis of affective experiences within software engineering contexts. We demonstrate LLMs' capability to classify emotions in open-source software engineering texts, highlighting their efficacy while acknowledging their limitations in capturing nuanced sentiments. We propose techniques to enhance emotion classification performance. Going beyond classification, we utilize LLMs as zero-shot reasoners for uncovering the underlying causes of emotions in developer communication. This reveals their potential for triaging emotion escalation and yields actionable insights regarding developers' pain points and challenges.
Mia Mohammad Imran is an advanced Ph.D. candidate in Computer Science at Virginia Commonwealth University, where he works under the guidance of Dr. Kostadin Damevski. His research focuses on applying machine learning and natural language processing techniques to software engineering problems, with particular expertise in emotion recognition, language models, and developing robust AI systems. Mr. Imran has published 7 peer-reviewed papers at venues such as ICSE, ASE, and MSR. In addition to his academic work, he has three years of industry experience as a software engineer and an intern at Google.
Missouri S&T
Talk Title: Next Era of Computing for Planetary-Scale Applications
Date: March 10th, 2023 Time: 1:30 PM Location: CS 220
In the last two decades, we have seen exponential growth in hardware performance and capacity at every level of systems infrastructure, from microprocessors and accelerators to storage and networking. This has given rise to the planetary-scale services and connectivity that run highly resilient and secure applications, some of which have over a billion users worldwide. The slowdown of Moore’s law and the end of Dennard scaling have had a profound impact on the ability to grow datacenter capacity in a cost-efficient manner. In this talk, we will show that the current computing landscape requires new innovative thinking for the next era of computing. We will discuss several efforts to improve system capacity to match the exponential demand for Data, AI and Analytics requirements.
Dr. Sinharoy is currently a Lead Architect in Google’s Systems Infrastructure group with the mission to define the next generation System Architecture for Data Centers. Prior to joining Google, Dr. Sinharoy was an IBM Fellow for 10 years and served as the Chief Architect of several generations of POWER processors and systems. Dr. Sinharoy has published numerous conference and journal articles and has been granted over 200 patents in many areas of Computer Architecture. Dr. Sinharoy has been a keynote speaker in various IEEE conferences, such as Micro-42 and ARITH-25, and has taught at Rensselaer Polytechnic Institute and the University of North Texas. Dr. Sinharoy received his MS and PhD from Rensselaer Polytechnic Institute. He was an IBM Master Inventor and has been an IEEE Fellow since 2009.
Missouri S&T
Talk Title: Malware Threat Hunting and Threat Intelligence in Critical Infrastructure: Research Achievements and Plans Going Forward
Date: March 9th, 2023, 3:30 PM
Dr. Dehghantanha commences the presentation by providing an overview of his past research endeavors and emphasizing the most notable accomplishments. The research trajectory he showcases includes privacy, digital forensics, malware threat detection and analysis, as well as critical infrastructure security, with particular emphasis on utilizing fuzzy machine learning for malware threat hunting to attribute cyber threats. The presentation culminates with a discussion of his upcoming research plans, which involve the development of anti-forensics and anti-anti-forensics systems utilizing adversarial machine learning, and the presentation of preliminary results in this area.
Dr. Ali Dehghantanha, the director of the Cyber Science Lab in the School of Computer Science at the University of Guelph (UofG) in Ontario, Canada, is an Associate Professor and a Canada Research Chair in Cybersecurity and Threat Intelligence. He has a decade-long track record of working in various industrial and academic roles for prominent players in the fields of Cybersecurity and Artificial Intelligence. Prior to joining UofG, he held the position of a Senior Lecturer at the University of Sheffield in the UK and served as an EU Marie-Curie International Incoming Fellow at the University of Salford, also in the UK. His educational qualifications include an MSc. and Ph.D. in Security in Computing, and he holds several professional certifications, such as CISSP and CISM. His primary research interests encompass malware analysis and digital forensics, critical infrastructure security, and the application of AI in Cybersecurity.
Missouri S&T
Talk Title: Attribute-based Encryption Scheme for Secure Data Sharing
Date: March 2nd, 2023, 3:30 PM
Attribute-based encryption (ABE) is a prominent cryptographic tool for secure data sharing in the cloud because it can be used to enforce very expressive and fine-grained access control on outsourced data. Revocation in ABE remains a challenging problem, as most of the revocation techniques available today suffer from collusion attacks. The revocable ABE schemes that are collusion resistant require the aid of a semi-trusted manager to achieve revocation. More specifically, the semi-trusted manager needs to update the secret keys of non-revoked users following a revocation. This introduces computation and communication overhead, and also increases the overall security vulnerability. In this work, we propose a revocable ABE scheme that is collusion resistant and does not require any semi-trusted entity. In our scheme, the secret keys of the non-revoked users are never affected. Our decryption requires only one additional pairing operation compared to the baseline ABE scheme. We achieve this at the cost of a slight increase (compared to the baseline scheme) in the sizes of the secret key and the ciphertext. Experimental results show that our scheme outperforms comparable existing state-of-the-art schemes.
Sanjay K Madria is a Curators’ Distinguished Professor in the Department of Computer Science at the Missouri University of Science and Technology (formerly, University of Missouri-Rolla, USA). He has published over 285 journal and conference papers in the areas of cybersecurity, mobile and sensor computing, big data and cloud computing, and data analytics. He won five IEEE best paper awards in conferences such as IEEE MDM and IEEE SRDS. He is a co-author of a book (published with two of his PhD graduates) on Secure Sensor Cloud, published by Morgan and Claypool in Dec. 2018. He has graduated 20 PhD and 33 MS thesis students, with 8 current PhD students. NSF, NIST, ARL, ARO, AFRL, DOE, Boeing, CDC-NIOSH, ORNL, Honeywell, and others have funded his research projects totaling over $18M. He has been awarded a JSPS (Japanese Society for Promotion of Science) invitational visiting scientist fellowship and an ASEE (American Society of Engineering Education) fellowship. In 2012 and 2019, he was awarded an NRC Fellowship by the National Academies, US. He is an ACM Distinguished Scientist and has served as an ACM and IEEE Distinguished Speaker. He is also an IEEE Senior Member and an IEEE Golden Core Awardee.
Missouri S&T
Talk Title: Robust, Scalable, and Verifiable Approaches to Learning, Acquisition, and Decision-Making
Date: February 10th, 2023, 10:00 AM
Future applications of national importance, such as healthcare, critical infrastructure, transportation systems and smart cities, will increasingly rely on machine learning methods and AI solutions, including structured learning, supervised learning, and reinforcement learning. In this talk, we will discuss our research efforts in the Data Science and Machine Learning Lab (DSML) on understanding the fundamental limits of learning, data acquisition and decision making, as well as the design of scalable, robust and provable learning algorithms, and verifiable decision policies in uncertain dynamic environments. This work is motivated by numerous challenges in these domains, including large data volume and dimensionality, distributional uncertainties, data corruption, incomplete data, non-linearities, complex data structures, safety constraints, and sparse rewards.
The talk will highlight some of our major findings and results in this context, which include scalable algorithms that achieve super-linear speedups in processing big data, robust strategies with enhanced breakdown points and phase transitions, provable guarantees for the long-standing open problem of policy synthesis in constrained multichain MDPs, and fast reinforcement learning methods in the presence of sparse reward signals.
George K. Atia is an Associate Professor in the Department of Electrical and Computer Engineering with a joint appointment in the Department of Computer Science at the University of Central Florida (UCF), where he directs the Data Science and Machine Learning Laboratory (DSML). He was a Visiting Faculty at the Air Force Research Laboratory (AFRL) in 2019-2020. He received his PhD degree from Boston University in Electrical and Computer Engineering in 2009 and was a Postdoctoral Research Associate at the Coordinated Science Laboratory (CSL) at the University of Illinois at Urbana-Champaign (UIUC) from 2009 to 2012. His research focuses on robust and scalable machine learning, statistical inference, verifiable and explainable AI, sequential decision methods, and information theory. He is an Associate Editor for the IEEE Transactions on Signal Processing. Dr. Atia is the recipient of many awards, including the UCF Reach for the Stars Award and the CECS Research Excellence Award in 2018, the Dean's Advisory Board Fellowship and the Inaugural UCF Luminary Award in 2017, the NSF CAREER Award in 2016, and the Charles Millican Faculty Fellowship Award (2015-2017). His research has been funded by NSF, ONR, DARPA, and DOE.
Missouri S&T
Talk Title: Bridging the Digital Divide: Foundations of Next Generation Integrated Space-Air-Ground Communication Systems
Date: February 8th, 2023, 10:00 AM
Despite the revolution in communication technologies witnessed in the past decade, a significant portion of the earth's population still falls outside today's wireless broadband coverage. While people who live in under-developed, disaster-affected, or rural areas remain in "wireless darkness," communities in megacities also often experience below-par wireless connectivity due to increasing demand for wireless services. To provide high-speed, reliable wireless connectivity to those on the less-served side of the digital divide, as well as to crowded urban environments, airborne and satellite-based communication platforms can be deployed as a promising solution to boost the capacity and coverage of existing wireless cellular networks (e.g., 5G and beyond). In this talk, we address some of the fundamental challenges that face the realization of integrated space-air-ground communication systems by developing new frameworks that weave together concepts from communications, machine learning, and game theory. First, we focus on the problem of wireless-aware control and navigation for drones that are used as access points to provide connectivity to remote areas. In particular, we introduce a novel multiagent reinforcement learning solution with meta-learning capabilities that can be used to control the navigation of these aerial access points, allowing them to provide effective, on-demand coverage to distributed, dynamic, and unpredictable ground user requests. We show that, using value decomposition techniques and a meta-training mechanism, the proposed low-training-overhead control framework is guaranteed to converge to a locally optimal coverage for the users. We then turn to spectrum sharing in integrated space-air-ground systems. In particular, we develop a novel exchange market-based framework that allows the integrated system to efficiently exploit its spectral resources while optimizing its communication performance in terms of data rates. We then show that the optimal equilibrium allocation of resources can be found by using a lightweight, distributed solution that facilitates cooperative spectrum sharing in the integrated system and yields faster convergence than a baseline sub-gradient algorithm. We conclude the talk with an overview of future, exciting research directions.
Ye Hu (S'17) received her PhD from the Bradley Department of Electrical and Computer Engineering at Virginia Tech, Virginia, USA, in 2021, and was subsequently a postdoctoral research scientist in the Electrical Engineering Department at Columbia University, New York, USA. Her research interests include machine learning, game theory, cybersecurity, blockchain, unmanned aerial vehicles, cube satellites, and wireless communication. She is the recipient of the best paper award at IEEE GLOBECOM 2020 for her work on meta-learning for drone-based communications.
Missouri S&T
Talk Title: Using Multimodal Human Sensing and Machine Learning to Understand Human Behaviors and Improve Health
Date: February 6th, 2023
Today, technological advances have allowed us to model human subjects in ways never before possible (e.g., video, motion capture, wearable sensors, electronic health records, fMRI). Coupled with artificial intelligence (AI) and machine learning (ML) algorithms, we can find promising solutions to address pressing societal challenges, such as mental/physical health and well-being. In this talk, I will present our efforts to use multimodal human-centered data and machine learning to understand human behaviors and improve health, ranging from human sensing to model development and deployment. More specifically, I will first describe human sensing, which makes sense of humans using a variety of sensors. Following that, I will discuss the challenges in modeling and characterizing human behaviors and health from multimodal data. Then, I will introduce algorithms proposed to address these challenges, so that we can better understand human behaviors and activities, emotional states, health conditions, and more. Finally, I will present our recent work on using multimodal human-centered data to improve health.
Huiyuan Yang is currently a Postdoctoral Research Associate at Rice University, where he works with Dr. Akane Sano in the Computational Wellbeing Group and Digital Health Lab. He received his PhD in Computer Science from Binghamton University in 2021, his Master's degree in Electronics and Communication Engineering from the Chinese Academy of Sciences, Beijing, in 2014, and his B.E. from Wuhan University in 2011. His research is at the intersection of machine learning and multimodal human-centered data, using a variety of sensory data (e.g., video, motion capture, wearable sensors, EHR) to develop models and datasets, and thus understand human behavior, enhance human physical and cognitive performance, and improve health. His work has appeared in top-tier venues including CVPR, ICCV, ACM MM, and others. He has been a PC member/reviewer for ICLR, CVPR, KDD, IJCAI, AAAI, ACM MM, ACI, FG, and others, and an active reviewer for more than 25 journals. His dissertation won the Binghamton University Distinguished Dissertation Award (2021). He has co-organized three workshops and co-released several popular multimodal datasets, including the BU-EEG, 3DFAW, BP4D+, and SMILE datasets.
Missouri S&T
Talk Title: Computations on Complex Socio-Technical Systems
Date: February 3rd, 2023
We are surrounded by complex, adaptive, and evolving socio-technical systems. The expansion and interdependence of technology and social interaction give rise to a number of complex challenges from various perspectives in these systems. This talk builds on two interconnected threads of my research: (i) Social Computing and (ii) the Science of Science. In the first part of the talk, I will discuss how social media is misused, specifically focusing on the detection and mitigation of malicious content on social media platforms. I will present novel methods and techniques for identifying and combating these issues, in order to create a safer and more inclusive environment on social media for all users. In the second part of the talk, I will delve deeper into the fundamental dynamics and uncertainties underlying the inner workings of Science. I will focus on the evolving global landscape of grants and interdisciplinary publications to better understand team science and collaboration, the evolution of professional careers, and inequality in Science.
Suman Kalyan Maity is currently a postdoctoral research associate at the MIT Department of Brain and Cognitive Sciences (BCS) and is affiliated with the MIT Center for Research on Equitable and Open Scholarship (CREOS). He is also a visiting research fellow at the Center for Science of Science and Innovation (CSSI) and the Northwestern Institute on Complex Systems (NICO), Northwestern University. He received his PhD in Computer Science and Engineering from the Indian Institute of Technology Kharagpur, during which he received the prestigious IBM PhD Fellowship and the Microsoft Research India PhD Fellowship Award. His research interests lie in the interdisciplinary area of Human-centered Computing. He develops and applies methods from Natural Language Processing, Machine Learning, and Network Science to investigate the potential and opportunities within this interdisciplinary area, with a specific focus on issues related to social media and the Science of Science.
Missouri S&T
Talk Title: Quantum walk mixing with Chebyshev polynomials
Date: November 14, 2022
A quantum walk is the quantum analogue of a random walk. A long-standing open question is whether quantum walks can be used to quadratically speed up the mixing time of random walks. We describe a new approach to this question that builds on the close connection between quantum walks and Chebyshev polynomials. Specifically, we use the fact that the quantum walk dynamics can be understood by simply applying a Chebyshev polynomial to the random walk transition matrix. This decouples the problem from its quantum origin, and highlights connections to classical second-order methods and the use of Chebyshev polynomials in random walk theory, as in the Varopoulos-Carne bound. We illustrate the approach on the lattice: we prove a weak limit on the quantum walk dynamics, giving a different proof of the quadratically improved spreading behavior of quantum walks on lattices.
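As a small illustrative sketch (not material from the talk itself), applying a Chebyshev polynomial to a transition matrix is straightforward via the three-term recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x); the function name below is chosen here for illustration:

```python
import numpy as np

def chebyshev_of_matrix(P, k):
    """Apply the degree-k Chebyshev polynomial T_k to a matrix P
    using the recurrence T_{k+1} = 2 P T_k - T_{k-1},
    with T_0 = I and T_1 = P."""
    n = P.shape[0]
    T_prev, T_curr = np.eye(n), P.copy()
    if k == 0:
        return T_prev
    for _ in range(k - 1):
        T_prev, T_curr = T_curr, 2 * P @ T_curr - T_prev
    return T_curr

# A symmetric random-walk transition matrix (eigenvalues in [-1, 1]).
P = np.array([[0.5, 0.5],
              [0.5, 0.5]])

# Sanity check against the explicit formula T_3(x) = 4x^3 - 3x.
T3 = chebyshev_of_matrix(P, 3)
```

Because P's eigenvalues lie in [-1, 1], the eigenvalues of T_k(P) are cos(k arccos(lambda)), which is why Chebyshev polynomials capture the oscillatory, ballistic behavior of the walk.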
Simon Apers (he/him) received a PhD from Ghent University, Belgium, on the topic of quantum walks and quantum algorithms. Afterwards, Simon completed postdocs at CWI (Amsterdam) with Ronald de Wolf, INRIA (Paris) with Anthony Leverrier, and ULB (Brussels) with Jérémie Roland. Simon is currently a CNRS researcher at IRIF in Paris.
Missouri S&T
Talk Title: Federating Cyber-Physical Systems
Date: November 7th, 2022
It is predicted that, with the advances made in the Internet-of-Things, next-generation networking, cloud computing, and artificial intelligence, hundreds of thousands of islands of Cyber-Physical Systems (CPS), currently distributed across different countries, regions, and city locations worldwide, will be federated to generate unprecedented datasets which, when submitted to emerging artificial intelligence models (e.g., machine learning), will provide solutions to some of the key problems the world has so far been unable to solve. This is the case, for example, in the emerging field of virtual/soft sensing, where artificial intelligence techniques are used to complement Internet-of-Things (IoT) measurements and provide innovative, more practical, and more economical solutions to many of the issues the world currently faces. However, the digital divide between developed and developing nations, leading to an imbalance in cloud processing resources, is a key issue that might hamper the development of such CPS federations. This talk focuses on the design and performance evaluation of Federated Cyber-Physical Systems (FCPS), with the objective of showcasing how a federated cloud computing (FCC) model can be used to build data-intensive ecosystems. Building on a use case, the talk reveals that, through federation and cooperative sharing of cloud resources, a cloud computing system can be built and used to improve the quality of service delivered by data-intensive processing infrastructures across a continent. The details of a Federated Cloud Computing Infrastructure (FCCI) called "AFRICA 3.0", aimed at fostering the Fourth Industrial Revolution (4IR) on the African continent, will be presented and discussed during the talk.
Bigomokero Antoine Bagula received a Ph.D. degree (Tech. Dr.) in Communication Systems from the Royal Institute of Technology (KTH), Stockholm, Sweden, and two MSc degrees (Computer Engineering, Université Catholique de Louvain (UCL), Belgium, and Computer Science, University of Stellenbosch (SUN), South Africa). He is currently a full professor and head of the Department of Computer Science at the University of the Western Cape (UWC), where he also leads the Intelligent Systems and Advanced Telecommunication (ISAT) laboratory. He is also an extraordinary Professor at ESIS-Salama, where he is in charge of spearheading the institution's research agenda. Prof. Bagula is a well-published scientist in his research field. His current research interests include Data Engineering, including Big Data technologies, Cloud/Fog Computing, and Network Softwarization (e.g., NFV and SDN); the Internet of Things (IoT), including the Internet-of-Things in Motion and the Tactile Internet; Data Science, including Artificial Intelligence and Machine Learning with their applications in Big Data Analytics; and Next Generation Networks, including 5G/6G.
Missouri S&T
Talk Title: Searches with Quantum Walks
Date: October 31, 2022
Random walks serve as the basis of a number of algorithms, and this led to the idea that perhaps a quantum version of a random walk would be useful for finding quantum algorithms. Constructing a quantum walk is not completely straightforward, and there are three ways of doing it, two of which will be discussed in this talk. They have proved useful in search problems, where there is a distinguished vertex in a graph, and we would like to find it. It is also possible to search for other objects, such as extra edges or more general structures that break the symmetry of a graph. Most recently, it has been possible to use quantum walks to find paths between two special vertices, which is analogous to using them to find a path through a maze.
Mark Hillery is a professor of physics at Hunter College and the Graduate Center of the City University of New York. He received his Ph.D. from the University of California at Berkeley and went on to a postdoctoral position with a quantum optics group that was based both at the University of New Mexico and the Max Planck Institute for Quantum Optics. He then joined the City University of New York. He has worked in quantum optics and, since 1996, in quantum information. He had a long-time collaboration with a research group in Bratislava, Slovakia, which led to his being awarded the International Prize of the Slovak Academy of Sciences in 2019. For 15 years he was an associate editor of the journal Physical Review A. He is a fellow of both the American Physical Society and the Optical Society of America.
Missouri S&T
Talk Title: Neural Collapsed Representation in Deep Learning Classifiers
Date: October 24, 2022
In the past decade, the revival of deep neural networks has led to dramatic success in numerous applications ranging from computer vision to natural language processing to scientific discovery and beyond. Nevertheless, the practice of deep networks has been shrouded in mystery, as our theoretical understanding of the success of deep learning remains elusive.
In this talk, we will focus on the representations learned by deep neural network classifiers. In this setting, the recent work by Papyan et al. revealed an intriguing empirical phenomenon, called neural collapse, that persists across different neural network architectures and a variety of standard datasets. We will first provide a geometric analysis for understanding why neural collapse always happens on a simplified unconstrained feature model. We will then exploit these findings to understand the roles of different loss functions proposed in the literature for training deep neural networks. Among all the proposed loss functions, which one is the best to use is still a mystery because there seem to be multiple factors affecting the answer, such as the properties of the dataset, the choice of network architecture, and so on. Through the principles of neural collapse, we will show that all relevant losses produce equivalent features on training data and lead to largely identical performance on test data as well, provided that the network is sufficiently large and trained until convergence.
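As a hedged numerical illustration of the geometry neural collapse predicts (not code from the talk or from Papyan et al.), the limiting configuration of class means is a simplex equiangular tight frame (ETF): unit-norm vectors with all pairwise cosines equal to -1/(K-1). The function name below is chosen here for illustration:

```python
import numpy as np

def simplex_etf(K):
    """Return K vectors (as columns) forming a simplex equiangular
    tight frame in R^K: unit norms, pairwise cosine -1/(K-1)."""
    return np.sqrt(K / (K - 1)) * (np.eye(K) - np.ones((K, K)) / K)

K = 4
M = simplex_etf(K)
G = M.T @ M                            # Gram matrix of the class means
norms = np.diag(G)                     # every column has unit norm
off_diag = G[~np.eye(K, dtype=bool)]   # every pair has cosine -1/(K-1)
```

This maximally separated, perfectly symmetric arrangement is the structure toward which last-layer features and classifier weights converge in the neural collapse regime.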
Zhihui Zhu is currently an Assistant Professor with the Department of Computer Science and Engineering at the Ohio State University. He was an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Denver from 2020-2022 and a Post-Doctoral Fellow with the Mathematical Institute for Data Science, Johns Hopkins University, from 2018 to 2019. He received his Ph.D. degree in electrical engineering in 2017 from the Colorado School of Mines, where his research was awarded a Graduate Research Award. He is or has been an Action Editor of the Transactions on Machine Learning Research and an Area Chair for NeurIPS.
Missouri S&T
Talk Title: Data-Driven Smartphone Localization with Zero Infrastructure
Date: October 17, 2022
The early suppression of fires on ro-ro vessels requires rapid fire identification, as a fire of medium growth exponentially reaches 50 kW after only 1 minute. Fire patrol members (e.g., able seamen) are asked to act as first responders in such incidents. They do, however, lack the necessary digital technology for immediate localization, verification, and coordination with the bridge and other first responders. Indoor localization normally requires dense referencing systems (such as Wi-Fi, UWB, or Bluetooth antennas), but these technologies require expensive installation and maintenance. Satellite-based indoor localization is likewise obstructed by the bulky steel structures of vessels. In this work, we develop a ground-breaking localization technology that requires zero infrastructure, using computer vision on commodity smartphone devices attached to the gear of first responders. The developed solution comprises three steps: (i) Training, where vessel owners supply video recordings that are processed in a deep learning data center to produce an accurate computer vision machine learning model; (ii) Logging, where a mobile app allows referencing non-movable objects to the (x, y, deck) coordinates of a vessel; and (iii) Localization, where first responders localize on a digital map. Additionally, when a sparse communication network is available, first responders can share their location, emergency messages, and heat-scan images with nearby first responders and the bridge. Our proposed algorithm, coined Surface, is shown to be 80% and 90% accurate for localization and tracking scenarios, respectively, in both a remote study and an on-board study we carried out on a real ro-ro vessel. The overall Smart Alert System (SMAS) streamlines the lengthy fire verification, coordination, and reaction process in the early stages of a fire, improving fire safety.
My talk will conclude with a summary of other relevant work in the scope of our open-source indoor localization system, named Anyplace.
Demetris Zeinalipour is an Associate Professor of Computer Science at the University of Cyprus, where he leads the Data Management Systems Laboratory (DMSL). His primary research interests include Data Management in Computer Systems and Networks, particularly Mobile and Sensor Data Management; Big Data Management in Parallel and Distributed Architectures; Spatio-Temporal Data Management; Network and Telco Data Management; Crowd, Web 2.0, and Indoor Data Management; Data Privacy Management; and Data Management for Sustainability. He holds a Ph.D. in Computer Science from the University of California, Riverside (2005). Before his current appointment, he served the University of Cyprus as an Assistant Professor and Lecturer, and the Open University of Cyprus as a Lecturer. He has held visiting research appointments at Akamai Technologies, Cambridge, MA, USA; the University of Athens, Greece; the University of Pittsburgh, PA, USA; and the Max Planck Institute for Informatics, Saarbrücken, Germany. He is a Humboldt Fellow, a Marie-Curie Fellow, an ACM Distinguished Speaker (2017-2020), a Senior Member of ACM, a Senior Member of IEEE, and a Member of USENIX. He serves on the editorial boards of Distributed and Parallel Databases (Elsevier) and Big Data Research (Springer), is an SI Editor for ACM Transactions on Spatial Algorithms and Systems (ACM TSAS), and is an independent evaluator for the European Commission (Marie Skłodowska-Curie and COST actions), the Hong Kong RGC, and the Hellenic HFRI. He has an h-index of 30, over 3,800 citations, and an Erdős number of 3; he has received 12 international awards and honors (IEEE-MDM18, IEEE-MDM17, ACMD17, ACMS16, IEEES16, HUMBOLDT16, ACM-IEEE-IPSN14, EVARILOS14, APPCAMPUS13, IEEE-MDM12, MC07, CIC06) and delivered over 30 invited talks.
He is/was General Co-Chair for 4 events (IEEE MDM22, EDBT21, VLDB's DMSN11, ACM MobiDE10) and Program Co-Chair for 8 events (IEEE-MDM19, IEEE-MDM10, IEEE-ALIAS19, IEEE-MUSICAL16, IEEE-HuMoComP15, HDMS18, VLDB-DMSN10, ACM-MobiDE09). He has participated in over 20 projects funded by the US National Science Foundation, the European Commission, the Cyprus Research Promotion Foundation, the University of Cyprus, the Open University of Cyprus, and the Alexander von Humboldt Foundation, Germany. Finally, he has also been involved in industrial research and development projects (e.g., in Finland, Taiwan, and Cyprus) and has technically led several open-source mobile data management services (e.g., Vgate, Anyplace, Rayzit, and Smartlab) reaching thousands of users worldwide. More: https://www.cs.ucy.ac.cy/~dzeina/
Missouri S&T
Talk Title: Multimedia Big Data Analytics for Data Science and Introduction of dSAIC
Date: October 10, 2022
The idea of Data Science has been interpreted from many different perspectives across domains and articles. In this talk, Data Science is discussed as the theories, techniques, and tools designed for data-centered analysis, together with the methodology to apply them to real-world applications. The process of gathering multimedia data from an abundance of sources has been revolutionized: mobile devices, social networks, autonomous vehicles, smart household appliances, and the Internet have grown increasingly prominent. Correspondingly, new forms of multimedia data, such as text, numbers, tags, networking, signals, geo-tagged information, graphs, 3D/VR/AR, and sensor data, alongside traditional multimedia data (image, video, audio), have become easily accessible. Traditional methods of processing data have therefore become increasingly outdated, underscoring the demand for more advanced data-science techniques to process these large, heterogeneous datasets, which fluctuate in quality and semantics. Moreover, relying on a single modality severely limits our ability to handle multiple data sources simultaneously, so it becomes pivotal to consider multimodal frameworks that leverage multiple data sources to assist data analytics. With data science and big data strategies, we are able to efficiently capture, store, clean, analyze, mine, and visualize the exponential growth of multimedia data, which is responsible for the majority of daily Internet traffic. As the data science field grows in importance, the goal of the big data revolution is to bridge the gap between data availability and its effective utilization, enabling research, technological innovation, and even decision making. In this talk, I will discuss the research opportunities and challenges in multimedia big data for data science. A set of core techniques and applications will be discussed.
I will also discuss Data Science and Analytics Innovation Center (dSAIC) - a University of Missouri System-wide Center.
Dr. Shu-Ching Chen is the inaugural Executive Director of the Data Science and Analytics Innovation Center (dSAIC), a multi-university center based at the University of Missouri-Kansas City (UMKC). His main research interests include data science, multimedia big data, disaster information management, content-based image/video retrieval, and multimedia systems. He has authored and coauthored more than 370 research papers and four books. He has been the PI/Co-PI of many research grants from NSF, NOAA, DHS, NIH (Co-I), the Department of Energy, the Army Research Office, the Naval Research Laboratory (NRL), the Environmental Protection Agency (EPA), the Florida Office of Insurance Regulation, the Florida Department of Transportation, IBM, and Microsoft. Dr. Chen received the 2011 ACM Distinguished Scientist Award and best paper awards from the 2006 IEEE International Symposium on Multimedia and the 2016 IEEE International Conference on Information Reuse and Integration. He also received the best student paper award from the 2022 IEEE International Conference on Multimedia Information Processing and Retrieval, and the 2019 Service Award from the IEEE Computer Society's Technical Committee on Multimedia Computing. He was awarded the IEEE SMC Society's Outstanding Contribution Award in 2005 and the IEEE Most Active SMC Technical Committee Award in 2006. He has been a General Chair and Program Chair for more than 60 conferences, symposiums, and workshops. He is the Editor-in-Chief of IEEE Multimedia Magazine, the founding Editor-in-Chief of the International Journal of Multimedia Data Engineering and Management, and the Co-Chair of IEEE SMC's Technical Committee on Knowledge Acquisition in Intelligent Systems. He was the Chair of the IEEE Computer Society Technical Committee on Multimedia Computing and a steering committee member of IEEE Transactions on Multimedia. He is a fellow of IEEE, AAAS, SIRI, and AAIA.
Missouri S&T
Talk Title: Scaling HPC Applications Through Predictable and Reliable Data Reduction Methods
Date: April 15th, 2024 Time: 10 AM Location: CS 222
For scientists and engineers, high-performance computing (HPC) systems are one of the most powerful tools to solve complex computation and modeling problems, such as climate change, water management, cosmological survey, artificial intelligence, and vaccine and drug design. With the ever-increasing computing power, like the new generation of exascale (one exaflop or a billion billion floating-point operations per second) supercomputers, the gap between computing power and limited storage capacity and I/O bandwidth has become a major challenge for scientists and engineers. This talk introduces predictable and reliable data reduction techniques for scaling up and out HPC applications. It covers how we design and leverage data reduction techniques such as lossy compression to improve the performance of large-scale HPC and ML applications (e.g., scientific simulations and ML model training).
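As a minimal sketch of the core idea behind error-bounded lossy compression (a simplified illustration, not Dr. Jin's actual compressor), uniform quantization maps floating-point values to integer codes that entropy-code well, while guaranteeing a user-specified pointwise absolute error bound:

```python
import numpy as np

def quantize(data, abs_err):
    """Round each value to the nearest multiple of 2*abs_err and
    store the integer multiple; reconstruction error <= abs_err."""
    return np.round(data / (2 * abs_err)).astype(np.int64)

def dequantize(codes, abs_err):
    """Recover an approximation of the original data from codes."""
    return codes * (2 * abs_err)

rng = np.random.default_rng(0)
data = rng.normal(size=1000)           # stand-in for simulation output
eps = 1e-2                             # user-requested error bound
rec = dequantize(quantize(data, eps), eps)
max_err = np.max(np.abs(data - rec))   # never exceeds eps
```

Real scientific compressors add a prediction stage before quantization so that most codes cluster near zero, which is what makes the subsequent entropy coding effective; the guarantee on the reconstruction error is what makes the reduction "reliable" for downstream analysis.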
Sian Jin is an Assistant Professor in the Department of Computer and Information Sciences at Temple University. He received his Ph.D. in Computer Engineering from Indiana University in 2023, under the supervision of Prof. Dingwen Tao, and his bachelor's degree in Physics from Beijing Normal University in 2018. His research interests lie in high-performance computing (HPC) data reduction and lossy compression for improving the performance of scientific data analytics and management, as well as large-scale machine learning and deep learning. Within the past five years, he has published over 20 papers in top conferences and journals, including SC, PPoPP, VLDB, ICDE, EuroSys, HPDC, and ICS.