Full Name
Tara Javidi
Job Title
Jacobs Family Scholar, HDSI Fellow, and Professor, Electrical & Computer Engineering and Halicioglu Data Science Institute
Company
University of California at San Diego
Speaker Bio
My research area involves the stochastic analysis, design, and control of information collection, processing, and transfer in modern networked systems. This covers broad theoretical questions as well as the practical implementation of various solutions. In my research group, we often prove theorems, but we also build and test our theoretical findings when possible (graduate applicants, please refer to the link below!). In particular, the work can be broadly broken into the following areas:
1. Active Learning, Active Hypothesis Testing and Sequential Information Theory,
2. Stochastic Control and Optimization of Networks, and
3. AI and Learning-enabled Optimization of Wireless Communications and Networks
On the theoretical front, I am most concerned with the problem of sequential information acquisition and interactive learning, where the cost of data collection and/or labeling can be substantially reduced. The challenge is to deal with imperfect and noisy data as well as the dynamics of the data. Our objective has been to 1) develop algorithms that acquire the most informative features at the minimum cost and 2) design queries and data-collection schemes that account for the uncertainty and inconsistency of humans in the loop.
On the more practical front, I am interested in applying the algorithms we develop in three application domains: 1) next-generation wireless networks, 2) service drones, and 3) other decentralized learning and control systems. https://tjavidi.eng.ucsd.edu/
Speaking At
Abstract
In this talk, inspired by the sequential nonparametric tests of Shekhar and Ramdas, I will describe a simple strategy for constructing sequential goodness-of-fit tests for finite-state Markov chains. Given a stream of observations $X_1, X_2, \ldots$ taking values in a finite set $\mathcal{X}$, our goal is to design a scheme that decides whether or not the observations represent a trajectory of a (first-order) Markov chain with transition probability matrix $P$.
Formally, we construct a sequential level-$\alpha$ test of power one: a random stopping time at which we stop collecting observations and reject the null. Our main result shows that this task can be reduced to online prediction with log loss; in particular, the performance of the resulting test can be precisely characterized in terms of the regret of the prediction strategy. I will illustrate this by presenting an instance of our general test that uses the prediction strategy of Takeuchi, Kawabata, and Barron (2013). I will then conclude the talk with a discussion of ongoing work and important extensions.
This is based on joint work with Greg Fields and Shubhanshu Shekhar.
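To make the reduction concrete, below is a minimal sketch (in Python, not taken from the talk) of the standard wealth-process construction behind such tests: the log-wealth equals the null chain's cumulative log loss minus the sequential predictor's, so the test's behavior is governed by the predictor's regret. For simplicity, the sketch plugs in an add-1/2 (Krichevsky-Trofimov-style) transition predictor rather than the Takeuchi-Kawabata-Barron strategy used in the talk; the function names and interface are illustrative assumptions.

```python
import numpy as np

def sequential_gof_test(stream, P, alpha=0.05):
    """Sketch: sequential level-alpha goodness-of-fit test for a finite-state
    Markov chain with null transition matrix P (states labeled 0..k-1).

    Wealth_n = (sequential predictor's likelihood of the observed transitions)
               / (likelihood of the same transitions under the null chain P).
    Under the null, Wealth_n is a nonnegative martingale with mean 1, so by
    Ville's inequality rejecting when Wealth_n >= 1/alpha keeps the type-I
    error at most alpha.  The predictor below is a simple add-1/2
    (Krichevsky-Trofimov-style) estimate of each row of the transition matrix,
    used purely for illustration.
    """
    stream = iter(stream)
    k = P.shape[0]
    counts = np.full((k, k), 0.5)        # add-1/2 smoothed transition counts
    log_wealth = 0.0
    prev = next(stream)                  # first observation: no transition yet
    for n, x in enumerate(stream, start=2):
        q = counts[prev] / counts[prev].sum()      # predictor's conditional law
        log_wealth += np.log(q[x]) - np.log(P[prev, x])
        counts[prev, x] += 1.0
        prev = x
        if log_wealth >= np.log(1.0 / alpha):
            return n                     # stopping time at which we reject the null
    return None                          # stream exhausted without rejecting


# Illustration: data from a chain that differs from the null is rejected quickly.
rng = np.random.default_rng(0)
P_null = np.array([[0.9, 0.1], [0.1, 0.9]])
P_true = np.array([[0.5, 0.5], [0.5, 0.5]])

def simulate(P, x=0):
    while True:
        yield x
        x = rng.choice(len(P), p=P[x])

print(sequential_gof_test(simulate(P_true), P_null, alpha=0.05))
```

In this sketch, the predictor's per-step log loss minus the null's log loss is exactly the increment of the log-wealth, which is how the regret of the prediction strategy enters the analysis of the stopping time.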