Contact Information

Michelle X. Zhou (周雪)
Senior Manager, User Systems and Experience Research (USER)
IBM Research - Almaden, San Jose, CA, USA
+1 408-927-7000



A Short Bio

Dr. Michelle Zhou is a research senior manager at IBM Research – Almaden, where she manages the User Systems and Experience Research (USER) group. Prior to her current post, she worked at the IBM T. J. Watson Research Center for nine years, managing the Intelligent Multimedia Interaction group, before going on an international assignment from June 2008 to December 2009 at IBM Research – China, where she managed the Department of Intelligent User Interaction and Social Collaboration. Michelle received a Ph.D. in Computer Science from Columbia University. Her expertise is in the interdisciplinary areas of intelligent user interaction, smart visual analytics (2D/3D), and people-centric information management. She has published over 70 peer-reviewed, refereed articles and filed over twenty patents in these areas. Michelle is an ACM Distinguished Scientist and is active in several research communities, including intelligent user interfaces (IUI), information visualization and visual analytics, and multimedia (MM), where she has co-organized and co-chaired conferences and workshops and often serves on the technical program committees of key conferences in these areas. She was the general conference co-chair for ACM IUI 2007 and the technical program co-chair for ACM MM 2009 and ACM IUI 2010. She currently serves on the IUI steering committee and is on the editorial boards of three ACM journals: ACM Transactions on Intelligent Systems and Technology (TIST), ACM Transactions on Interactive Intelligent Systems (TiiS), and ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP).

Research Interests

Although my research interests have evolved over the years, I have always been interested in the interdisciplinary area of intelligent user interaction (IUI) and its applications to real-world problems. In particular, my past and present work/interests fall into three areas.

Smart Visualization

One picture is worth a thousand words. For thousands of years, people have used information graphics—visual representations of data—to comprehend and analyze information. However, creating high-quality visualizations is a daunting task, especially for ordinary people who are neither graphic artists nor computer scientists. To democratize the use of visualization, I have been investigating how to automate the design and generation of visualizations. As part of my thesis work, I developed a system called IMPROVISE, which uses an AI planning-based approach to automatically design and create a visual discourse, a connected series of animated visual illustrations for explaining complex information to users (e.g., patient briefings for health caregivers or network traffic analyses for network administrators). After joining the IBM T. J. Watson Research Center, my colleagues and I co-developed IMPROVISE+, which uses a case-based learning engine to automatically generate interactive visual responses from examples and to tailor the responses to highly dynamic user interaction situations and unanticipated information. IMPROVISE+ has been used in IBM's engagement with a U.S. government agency and in other IBM products and solutions.

With the advances in text mining and analytics, I have more recently been working on interactive visual text analytics, which combines state-of-the-art text analytics with novel interactive visualization to empower average business users to analyze massive amounts of textual data. My colleagues at IBM Research China and I developed an interactive visual text summarization system called TIARA, which combines topic modeling (e.g., LDA) with novel visual metaphors to help users examine what is inside a text collection (e.g., email, news, or emergency room patient records) and discover topic patterns and trends in such a collection. The core technology of TIARA went into three IBM analytics products released in 2010: IBM eDiscovery V2.2, IBM Content Analytics V2.2, and Cognos Consumer Insights. In addition to helping users detect patterns and make discoveries in massive amounts of text, my colleagues and I at Almaden are currently investigating how interactive visual text analytics can facilitate user decision making (e.g., making a purchase decision based on the visual text analysis of extensive consumer reviews, or voting on a proposition based on others' opinions).
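To make the topic-modeling building block concrete, here is a minimal sketch, assuming scikit-learn is available; the toy documents and parameter choices are my own invention, and this shows only the kind of raw material a system like TIARA layers interactive visualization on top of, not the system itself.

    # Illustrative sketch only: extract LDA topics from a toy text collection,
    # roughly the raw material an interactive visual text summary is built from.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    documents = [
        "patient reported chest pain and shortness of breath",
        "network traffic spiked after the firewall update",
        "follow-up visit scheduled to review blood pressure medication",
        "router logs show repeated connection timeouts overnight",
    ]

    # Bag-of-words representation of the collection.
    vectorizer = CountVectorizer(stop_words="english")
    doc_term = vectorizer.fit_transform(documents)

    # Fit a small LDA model (two topics for this toy example).
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    lda.fit(doc_term)

    # Print the top keywords per topic; a tool like TIARA would instead map
    # topics to visual layers and show how their strength evolves over time.
    terms = vectorizer.get_feature_names_out()
    for topic_id, weights in enumerate(lda.components_):
        top = [terms[i] for i in weights.argsort()[::-1][:5]]
        print(f"Topic {topic_id}: {', '.join(top)}")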

Mixed-initiative Human-Computer Interaction (HCI)

In his “Man-Computer Symbiosis,” Licklider wrote: “Man-computer symbiosis is an expected development in cooperative interaction between men and electronic computers. … In the anticipated symbiotic partnership, men will set the goals, formulate the hypotheses, determine the criteria, and perform the evaluations. Computing machines will do the routinizable work that must be done to prepare the way for insights and decisions in technical and scientific thinking.” I believe that the future of HCI is to facilitate such a man-computer symbiosis, in which both humans and machines can leverage their strengths and avoid their weaknesses. One such development is mixed-initiative interaction, where both users and systems can take initiative during a complex interaction process.

Part of my earlier work at IBM T. J. Watson was on developing a mixed-initiative intelligent information system (RealHunter™), where humans and computers work together collaboratively in an information-seeking process. In particular, users take the initiative to express their information requests (almost!) freely in context using multimodal input (e.g., natural language expressions and visual queries). To respond to a user’s request, the system automatically generates a multimedia response tailored to the context (e.g., the query context and retrieval results). To maximize the efficiency of the interaction (e.g., minimizing the number of steps taken to find the desired information), the system takes the initiative whenever needed (e.g., filling in the blanks when a user’s request is vague, or suggesting alternative information if the requested data is not found). More recently, I have become interested in the development of social recommender systems, another class of mixed-initiative interaction systems. Working with my colleagues at IBM Research China, we developed a system called Pharos, which automatically summarizes users’ online social behavior over time and presents users with a social map of a site (i.e., a Marauder’s Map of an online social site). Using the derived social map, users can easily learn about the site’s dynamics and also take the initiative to navigate the site and engage in social activities.

To enable computers to take sensible initiatives and push this class of systems into mainstream applications, I am particularly interested in developing novel and practical computational approaches to the problem. So far I have investigated optimization-based approaches and developed a suite of algorithms to address a range of fundamental challenges in this space (e.g., an algorithm for dynamically determining data content in response to a user’s data query in context, and a graph-matching algorithm for media allocation). Going forward, I am interested in exploring new interaction paradigms in which users can interact with complex system responses (e.g., system-derived text summarization results), and in the use of interactive machine learning to support adaptive, mixed-initiative human-computer interaction, where humans and computers can learn from each other.
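As a simplified illustration of the optimization flavor of this work, the sketch below poses media allocation as a small assignment problem and solves it with SciPy's Hungarian-algorithm routine; the items, media, and suitability scores are invented, and this is not the graph-matching algorithm used in RealHunter, only a minimal stand-in for the same idea.

    # Illustrative sketch only: allocate information items to media by solving
    # a bipartite assignment problem. All names and scores are made up.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    items = ["price trend", "property location", "agent contact info"]
    media = ["chart", "map", "text"]

    # suitability[i][j]: how well medium j conveys item i (higher is better).
    suitability = np.array([
        [0.90, 0.20, 0.40],  # a trend is best shown as a chart
        [0.10, 0.95, 0.30],  # a location is best shown on a map
        [0.20, 0.10, 0.80],  # contact info is best shown as text
    ])

    # linear_sum_assignment minimizes cost, so negate the scores to maximize.
    rows, cols = linear_sum_assignment(-suitability)
    for i, j in zip(rows, cols):
        print(f"{items[i]} -> {media[j]} (score {suitability[i, j]:.2f})")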

Opportunistic Social Computing

The use of social software (e.g., social networking, micro-blogging, and online forums) has penetrated the masses. I am very much interested in finding out how such phenomena will change our daily lives, as well as in their long-term impact on our world. In particular, I am interested in how social computing can bring us opportunistic information and collaboration partners whenever we need them, without subjecting ourselves to the “constant availability and instant intimacy” we accept today. For example, when I have a question regarding car repair, through social media I should be able to locate a person or group who has perhaps just had a car repaired and can give me the most accurate answer to my inquiry; if I want to find someone to share a rental car at a tourist destination, again through social media I should be able to find such a partner, someone I did not know before but who shares the same interest and need at that moment. To support these scenarios, I believe there are fundamental research issues to be addressed. They include, but are not limited to:

  • Understanding, modeling, and automatically deriving the social profile of a person, a community, or an organization based on their digital footprints (i.e., online behavior), as illustrated by the sketch after this list;
  • Using the derived social profiles to objectively reveal key characteristics of individuals and organizations; assess community/organization dynamics, value, and risks; predict the development or growth of individuals, communities, and organizations; and help establish opportunistic collaborations among individuals and organizations;
  • Monitoring social channels (e.g., Facebook and Twitter) and detecting which channels would be the most valuable sources for extracting social intelligence (e.g., knowledge about car repair or consumer complaints/needs);
  • Analyzing and mining social messages to distill useful insights (information or people) for opportunistic information sharing (e.g., sharing the extracted consumer complaints), knowledge acquisition (e.g., asking a target audience to voice their problems and suggest solutions), and crowd-sourced problem solving (e.g., soliciting and analyzing information submitted by a crowd at or near the scene of an accident for a crime investigation).
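As a toy illustration of the first research issue above (a deliberately crude sketch under my own assumptions, not a proposed solution), the snippet below derives a keyword-based interest profile per user from their posts; the users, posts, and interest lexicon are all invented.

    # Illustrative sketch only: build a crude interest profile per user by
    # counting topical keywords in their posts. Real social-profile modeling
    # is far richer; everything here is invented for illustration.
    from collections import Counter

    # Hypothetical lexicon mapping keywords to interest categories.
    LEXICON = {
        "engine": "car repair", "brake": "car repair", "mechanic": "car repair",
        "rental": "travel", "hotel": "travel", "flight": "travel",
    }

    posts_by_user = {
        "alice": ["just got my brake pads replaced", "my mechanic is great"],
        "bob": ["booked a flight and a rental car for the trip"],
    }

    def interest_profile(posts):
        """Count lexicon hits across a user's posts."""
        profile = Counter()
        for post in posts:
            for word in post.lower().split():
                if word in LEXICON:
                    profile[LEXICON[word]] += 1
        return profile

    for user, posts in posts_by_user.items():
        print(user, dict(interest_profile(posts)))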

Current Research

System U: Deep People Insights from Social Media for Hyper-Personalized Experience

My team at IBM Research and I have been developing System U, a system that can automatically derive an individual's personality traits, including one's motivators, fundamental needs, and emotional styles, from the individual's linguistic footprints online (e.g., tweets, blogs, and reviews). The derived traits can then be used to help individuals better understand themselves as well as others, in order to obtain or deliver hyper-personalized experiences (e.g., self-discovery and assessment, social engagements, and product or career recommendations). Not only can such traits be used to improve human-computer-human interaction, but they can also be used to facilitate human-computer interaction, since knowledge of one's psychological traits, including cognitive and emotional styles, may aid one's information tasks, e.g., information navigation and the visual perception of and interaction with information. Here is a YouTube video by Sandy Carter (@sandy_carter) on System U.
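To give a feel for what deriving traits from linguistic footprints can look like in its very simplest form, here is a dictionary-based sketch in the spirit of word-category counting; the trait names, word lists, and normalization are my own invented placeholders, and this is not System U's actual method.

    # Illustrative sketch only: score a piece of text against hypothetical
    # trait word lists (simple word-category counting). NOT System U's method;
    # the lexicons and normalization are invented for illustration.
    import re

    TRAIT_LEXICONS = {
        "achievement-oriented": {"win", "goal", "achieve", "improve", "best"},
        "sociable": {"friends", "party", "together", "we", "share"},
    }

    def trait_scores(text):
        """Return the fraction of words matching each trait lexicon."""
        words = re.findall(r"[a-z']+", text.lower())
        total = max(len(words), 1)
        return {
            trait: sum(w in lexicon for w in words) / total
            for trait, lexicon in TRAIT_LEXICONS.items()
        }

    sample = "We set a goal to improve together and share what we achieve."
    print(trait_scores(sample))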

Our work is just the beginning of a trend that uses big data and analytics to gain a deeper understanding of individuals and groups. As this line of work has both important scientific and societal implications, here is my take on how big data and analytics could help us, as individuals and as a world at large, rather than harm us. A more complete version is here.

System U in the Press

Current Professional Activities