Project 1: Measuring the effects of daytime activity levels on sleep quality (led by Dr. Metsis)
The adverse effects of sleep deprivation, caused by various sleep disorders, are widely acknowledged by the research community. Traditional medicine focuses on diagnosing and treating such disorders via sleep studies. However, daily activities that affect sleep quality are often hard to monitor and take into consideration during diagnosis. There have been some preliminary efforts to study the effects of daily urban lifestyle and activities on sleep quality, but such efforts are limited because daytime activities cannot be easily monitored outside the lab, so they rely mainly on self-reports. In addition, current sleep monitoring standards do not allow for the long-term sleep monitoring that would quantify the effects of lifestyle changes on sleep quality. Dr. Metsis has a track record of developing non-invasive sleep monitoring technologies that can be used at home. In this project, we will develop methods for the collection and computational analysis of human activity data to assess the effects of daytime activity levels on sleep quality. The collected data will be analyzed for correlations between daily activity levels and sleep quality, using machine learning and standard sleep quality indexes.
Project Plan: In this project, students will:
- Develop a Java desktop application that extracts the activity data collected by a smartwatch over a period of time (using the device's available API) and provides some basic visualizations.
- Develop machine learning-based tools to analyze the data by segmenting the continuous collected signals into separate activities and classifying those activities into a predefined set of classes (e.g., walking, running, sitting, driving, sleeping) and intensity levels.
- Utilize existing tools developed by the PI and develop new sleep analysis tools to determine sleep quality from sensor data. This last item will build upon the Summer 2015 REU project, in which Dr. Metsis mentored an REU student in developing sleep analysis methods from polysomnography data.
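The segmentation-and-classification step in the second bullet could be approached along these lines; this is a minimal sketch in Python (the project itself targets Java), assuming fixed-size windows over a single synthetic accelerometer channel, hypothetical window size and labels, and a scikit-learn random forest as a stand-in classifier.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

WINDOW = 100  # samples per segment (hypothetical, e.g. 50 Hz x 2 s)

def segment(signal, window=WINDOW):
    """Split a continuous 1-D sensor stream into fixed-size windows."""
    n = len(signal) // window
    return signal[: n * window].reshape(n, window)

def features(windows):
    """Simple per-window statistics as classification features."""
    return np.column_stack([
        windows.mean(axis=1),
        windows.std(axis=1),
        np.abs(np.diff(windows, axis=1)).mean(axis=1),  # mean change rate
    ])

# Synthetic stand-in for labeled smartwatch data: low-variance "sitting"
# windows vs. high-variance "walking" windows.
rng = np.random.default_rng(0)
sitting = rng.normal(0.0, 0.05, size=(50, WINDOW))
walking = rng.normal(0.0, 1.0, size=(50, WINDOW))
X = features(np.vstack([sitting, walking]))
y = np.array([0] * 50 + [1] * 50)  # 0 = sitting, 1 = walking

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Classify an unseen continuous stream by segmenting it first.
new_stream = rng.normal(0.0, 1.0, size=5 * WINDOW)
pred = clf.predict(features(segment(new_stream)))
```

Real smartwatch data would of course be multi-channel and the activity set much richer; the point is only the segment-then-classify pipeline.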
Project 2: Modeling Tools and Risk Assessment for Smart Infrastructure (led by Dr. Guirguis)
Smart infrastructure will be more connected than ever, with various components sensing and sending critical information to decision-making entities. For example, smart meters in a neighborhood can sense various power loads and send this information to power generation and distribution components to adapt how power is generated, distributed, and consumed. While such connectivity offers efficient methods to operate these systems and the convenience to manage them, it also opens the door to cyber attacks. Cyber attacks on such infrastructures continue to be one of the major threats facing the vision of smart cities. One study showed that two weeks of power loss in Los Angeles County could cause an economic loss of $20.5B.
The goal of this project is to develop a general cloud-computing defense mechanism to mitigate the risk of attacks on smart infrastructure. There has been a great deal of work on developing check blocks that verify various components and signals in the system. This project will design Artificial Intelligence (AI) and Machine Learning (ML) algorithms that combine these checks through a game-theoretic approach to detect attacks. The defense will consider the worst-case adversary policy.
Project Plan: In this project, students will:
- Study the operation of some case studies (e.g., smart power meters) in terms of the information available and the decisions the defender and adversary can make. This includes plausible adversary models.
- Design AI/ML algorithms that process information (e.g., signals) to detect ongoing attacks.
- Implement these algorithms in the cloud (e.g., as containers) to ensure the correct and safe operation of the smart infrastructure.
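The worst-case (maximin) reasoning behind combining checks can be illustrated with a toy example. The detection-probability matrix below is entirely hypothetical: rows are defender check configurations, columns are adversary attack types, and the defender picks the configuration whose worst-case detection rate is best.

```python
import numpy as np

# Hypothetical detection probabilities: entry (i, j) is the probability
# that check configuration i detects attack type j.
detect = np.array([
    [0.9, 0.2, 0.4],   # check set A: strong vs. attack 0, weak vs. attack 1
    [0.3, 0.8, 0.5],   # check set B: strong vs. attack 1, weak vs. attack 0
    [0.6, 0.6, 0.6],   # check set C: balanced coverage
])

# Worst case for each pure defender strategy: the adversary mounts the
# attack that configuration is least likely to detect.
worst_case = detect.min(axis=1)

# Maximin choice: the configuration with the best worst-case detection.
best = int(worst_case.argmax())
```

Here the balanced configuration wins under worst-case play even though the specialized ones are better against particular attacks; a full treatment would allow mixed (randomized) defender strategies as well.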
Project 3: Computing with Words in Threat Detection Systems (led by Dr. Tamir)
Human reasoning often involves computation that uses natural language constructs and is referred to as Computing with Words (CWW). This is in contrast to the computing employed in numerous tasks of autonomous and intelligent computer-based control systems, where large amounts of “numbers” obtained from sensors, images, and videos are used to perform control operations via Computing with Numbers (CWN).
For example, consider the task of identifying human and vehicle activities, such as “two cars are diverging close to a school,” based on videos obtained from street surveillance cameras and drones. Numerous early alert systems can benefit from warnings concerning suspected events. Existing surveillance cameras capture these activities. Currently, however, there is no automated processing of vehicle behavior patterns that is capable of adequately informing early warning systems. In contrast, human beings would have “no problem” identifying behaviors such as “vehicle X is following vehicle Y” or “vehicle V is circling the secured area” from direct observation or from inspection of surveillance camera content. In fact, humans are accustomed to observing, completely understanding, and possibly even enjoying these types of activities in movies. In doing so, humans employ CWW.
The detection process can be divided into two stages: 1) data acquisition and 2) inference. The state of the art in Artificial Intelligence (AI) might approach the data acquisition task using an image processing-based Convolutional Neural Network (CNN) that can reliably detect objects of interest in videos. Several types of agent-based AI inference engines, including other types of artificial neural networks (ANNs), have been proposed as candidate solutions for the inference stage. However, evaluation of these approaches yields the observation that even semi-supervised automatic systems, including a “human in the loop,” might fail miserably in executing the reasoning part of the task, often due to a lack of training data.
Project Plan: In this project, we will study the state of the art in AI inference, concentrating on Automated Reasoning, State Space Search Techniques, Game Theory, and ANNs. Additionally, we will study the state of the art in Fuzzy Reasoning and CWW. Furthermore, the problem of activity detection will be explored, specified, and analyzed, and the effectiveness and usability of the CWW systems will be evaluated. As part of evaluating the proposed approach, we will employ the following process: we will adapt a CNN object detector that produces near real-time, high-recall object-detection results for vehicles in a city. Next, we will use a CWW-based inference system that is fed by the CNN detector and other sensors’ data, fuzzify these data, and use CWW and fuzzy inference to interpret these activities. The students will:
- Select a CWW inference approach.
- Implement a CWW system for identifying vehicle activities based on information obtained from surveillance cameras mounted on patrol drones. The implementation will include using OpenGL to accelerate the detection process via a GPU and to visualize vehicle activities.
- Produce synthetic data for training the CWN systems.
- Evaluate the utility of combined CWW-CWN systems for early alerts based on early threat detection.
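The fuzzification-and-inference step can be sketched with a single hypothetical CWW rule for the “two cars are diverging close to a school” style of event. The membership functions, thresholds, and the use of min as the fuzzy AND are all illustrative assumptions, not the project's actual design.

```python
def mu_close(distance_m, near=50.0, far=300.0):
    """Fuzzy membership for 'close to the school': fully true below
    `near` meters, fully false beyond `far`, linear in between
    (hypothetical thresholds)."""
    if distance_m <= near:
        return 1.0
    if distance_m >= far:
        return 0.0
    return (far - distance_m) / (far - near)

def mu_converging(closing_speed_mps, fast=10.0):
    """Fuzzy membership for 'converging': grows with closing speed,
    saturating at `fast` m/s (hypothetical)."""
    return max(0.0, min(1.0, closing_speed_mps / fast))

def alert_degree(distance_m, closing_speed_mps):
    """Rule: IF close AND converging THEN alert.
    The fuzzy AND is taken as the minimum of the memberships."""
    return min(mu_close(distance_m), mu_converging(closing_speed_mps))
```

The output is a graded alert level rather than a hard yes/no, which is what lets a CWW front end rank suspected events for a downstream early-warning system.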
Project 4: Vision-based Automated Vehicle Activity Alert System (led by Dr. Tesic)
In the data-rich operational world of smart connected cities, it is increasingly difficult to sift through the vast amount of information coming from the wide array of camera sensor feeds. Emergency responders will be able to pick up a live feed from CCTVs, cameras mounted on unmanned aerial systems (UAS), or cameras in the vicinity of an event. Typically, cameras are used as “eyes” of first responders AFTER they have been alerted of an incident (e.g., traffic accident, fire, theft, unusual activity).
In this project, we will focus on developing tools for automated activity alerts from existing camera feeds. The idea is to automate the alert system and, in conjunction with other available contextual information (e.g., location, information obtained through 911 or 311, web activity), use the existing camera system as an “eye” WHEN the event happens, increasing the efficiency of first responders. The problem is that camera sensors are cheap, but transforming the incoming video streams from surveillance cameras around the city into actionable items still demands expensive human processing. For object recognition in overhead low-resolution video datasets, the best algorithms struggle with objects that are small (a distant car) or with distorted views, such as sun glare on a parking lot seen from a UAV, whereas humans have no trouble recognizing cars in such videos. AI-powered computer vision allows us to design and train algorithms to reliably detect objects from overhead cameras.
The goal of the project is to detect unusual or unexpected activity in the camera feeds and alert the end user to review the data and take appropriate action. In this work, the students will address fundamental questions regarding the application of machine learning and computer vision algorithms and the associated data.
Project Plan: The goal of this project is to demonstrate the usability of state-of-the-art deep learning for smart city alert generation. The students will gather data from overhead city imagery, and:
- Adapt and deploy a machine vision system to identify objects, track them, and re-identify them in videos.
- Connect contextual data (location, camera info, user feedback) to improve the system.
- Test the activity alert algorithms.
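The identify-track-re-identify loop in the first bullet can be sketched as follows. This is a deliberately minimal illustration: it assumes axis-aligned boxes in (x1, y1, x2, y2) form and a greedy intersection-over-union (IoU) association with a hypothetical threshold; real systems use a learned appearance model for re-identification across cameras.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, threshold=0.3):
    """Greedily match existing tracks to new-frame detections by IoU.
    Returns (track_id, detection_index) pairs; unmatched detections
    would spawn new tracks (appearance-based re-ID not shown)."""
    matches, used = [], set()
    for t_id, t_box in tracks.items():
        best, best_iou = None, threshold
        for d_idx, d_box in enumerate(detections):
            if d_idx in used:
                continue
            score = iou(t_box, d_box)
            if score > best_iou:
                best, best_iou = d_idx, score
        if best is not None:
            matches.append((t_id, best))
            used.add(best)
    return matches
```

Running this per frame over detector output yields object trajectories, which is the input the activity-alert algorithms in the last bullet would consume.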
Project 5: Energy-aware smart parking for self-driving vehicles (led by Dr. Chen)
Recently, self-driving vehicles have attracted tremendous attention from all walks of life. They are part of smart transportation in smart cities. In a self-driving vehicle, the driver hands over control to the vehicle and is no longer responsible for monitoring the system and making decisions for the vehicle. One problem facing self-driving vehicles is where to find a parking place. We assume that a self-driving vehicle goes to a campus every day (for school or work). There are some garages on the campus with known locations, but the availability of each garage is not known. Our goal is to design algorithms to find a parking place so that we save as much gas as possible over a period of time.
Decision making under uncertainty is a challenge, and reinforcement learning provides a model for this dilemma. Reinforcement learning balances exploration and exploitation: exploration gathers new information that might lead to better decisions later, while exploitation makes the best decision given the current information. Through exploration and exploitation, we plan to develop algorithms that let the vehicle make wise decisions about where to go for parking.
Project Plan: Our plan for this project includes the following tasks:
- In the first two weeks, the students will learn some reinforcement learning concepts and algorithms.
- In the next three weeks, the students will work on designing parking decision algorithms using reinforcement learning techniques.
- In the remaining weeks, the students will conduct simulations to test and compare the performance of the proposed strategies.
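One simple way to frame the exploration-exploitation trade-off above is as a multi-armed bandit over garages, solved with an epsilon-greedy policy. The sketch below is one possible simulation setup, not the project's prescribed design: the availability probabilities, fuel costs, reward shape, and epsilon value are all hypothetical.

```python
import random

def choose_garage(values, epsilon=0.1):
    """Epsilon-greedy: explore a random garage with probability epsilon,
    otherwise exploit the garage with the best average reward so far."""
    if random.random() < epsilon:
        return random.randrange(len(values))
    return max(range(len(values)), key=lambda g: values[g])

def update(counts, values, garage, reward):
    """Incrementally update the running average reward for a garage."""
    counts[garage] += 1
    values[garage] += (reward - values[garage]) / counts[garage]

# Hypothetical campus: 3 garages. The agent does not know availability;
# it only knows each garage's fuel cost (a proxy for driving distance).
random.seed(0)
availability = [0.2, 0.9, 0.6]   # chance of finding a spot (hidden)
fuel_cost = [0.05, 0.30, 0.15]   # gas penalty for driving there (known)
counts = [0, 0, 0]
values = [0.0, 0.0, 0.0]

for day in range(2000):
    g = choose_garage(values)
    # Reward: 1 for finding a spot, minus the fuel spent getting there.
    reward = (1.0 if random.random() < availability[g] else 0.0) - fuel_cost[g]
    update(counts, values, g, reward)
```

Under these made-up numbers the middle garage has the best expected net reward (0.9 - 0.30 = 0.60), so after enough days the policy should favor it; the REU students would replace this bandit view with richer reinforcement-learning formulations that account for state (time of day, travel between garages, and so on).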