Publications, Patents, and Panels
Publications
We appreciate the many benefits of all colleges in the California Community Colleges being able to share a common infrastructure for the student-facing college application process. CCCApply serves as a "gateway to the California Community Colleges": its existence eliminates the need for each college or district to independently procure and customize (or build and maintain) its own application system, and its annual improvements benefit colleges system-wide. But while all colleges in the system share the benefits of this common infrastructure, they also share its limitations. We focus on six key issues with CCCApply and OpenCCC today that affect prospective students and the colleges that serve them, and we offer recommendations for addressing each. We believe these recommendations will result in (a.) a more streamlined, cohesive experience for incoming students system-wide, and (b.) an enhanced ability for colleges to remotely support all of their prospective students more equitably through the entire application process.
Self-directed learners value the ability to make decisions about their own learning experiences. Educational systems can accommodate these learners by providing a variety of activities and study contexts among which learners may choose. When creating a software-based environment for these learners, system architects incorporate activities designed to be both effective and engaging. Once these activities are made available to students, researchers can evaluate them by analyzing observed usage and performance data, asking: Which of these activities are most engaging? Which are most effective? Answers to these questions enable a system designer to highlight and encourage those activities that are both effective and popular, to refine those that are either effective or popular, and to reconsider or remove those that are neither. In this paper, we discuss Grockit - a web-based environment offering self-directed learners a wide variety of activities - and use a mixed-effects logistic regression model to estimate the effectiveness of nine of these supplemental interventions on skill-grained learning.
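The analysis described above can be illustrated with a simplified sketch. The data, activity names, and coefficients below are hypothetical, and a plain logistic regression fit by gradient descent stands in for the paper's mixed-effects model; the point is only to show the shape of the question: does usage of a supplemental activity predict correct responses?

```python
# Simplified, hypothetical sketch: predict per-question correctness from
# counts of supplemental-activity usage. A plain logistic regression fit
# by stochastic gradient descent stands in for the mixed-effects model.
import math
import random

random.seed(0)

def simulate(n=2000):
    """Fake usage/performance records: (video views, flashcard reviews, correct?)."""
    rows = []
    for _ in range(n):
        video = random.randint(0, 5)   # hypothetical activity 1
        flash = random.randint(0, 5)   # hypothetical activity 2
        logit = -0.5 + 0.4 * video + 0.05 * flash
        p = 1 / (1 + math.exp(-logit))
        rows.append((video, flash, 1 if random.random() < p else 0))
    return rows

def fit(rows, lr=0.01, epochs=200):
    """Logistic regression via per-sample gradient ascent on log-likelihood."""
    w0 = w1 = w2 = 0.0
    for _ in range(epochs):
        for video, flash, y in rows:
            p = 1 / (1 + math.exp(-(w0 + w1 * video + w2 * flash)))
            err = y - p
            w0 += lr * err
            w1 += lr * err * video
            w2 += lr * err * flash
    return w0, w1, w2

w0, w1, w2 = fit(simulate())
print(f"intercept={w0:.2f}  video={w1:.2f}  flashcards={w2:.2f}")
```

In this toy setup the fitted coefficient for the "video" activity comes out clearly larger, which is the kind of evidence that would mark an activity as effective and worth highlighting.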
Web-based learning systems offer researchers the ability to collect and analyze fine-grained educational data on the performance and activity of students, as a basis for better understanding and supporting learning among those students. The availability of this data enables stakeholders to pose a variety of interesting questions, often specifically focused on some subset of students. As a system matures, the number of stakeholders, the number of interesting questions, and the number of relevant sub-populations of students all grow, adding complexity to the data analysis task. In this work, we describe an internal analytics system designed and developed to address this challenge with flexibility and scalability. We present several typical examples of analysis, discuss a few uncommon but powerful use-cases, and share lessons learned from the first two years of iteratively developing the platform.
Grockit provides a place for students to master new concepts and exercise what they learn through a set of study modes designed to accommodate a variety of learning styles and learner preferences. These include: (1.) small group study, which leverages the power of collaborative learning dynamics to provide students with a social learning network that can help motivate and assist them, (2.) individual study, which builds and uses a data-driven model of a student's abilities to provide that student with appropriate challenges for learning, and (3.) instructor-led classes, which draw on a teacher's domain knowledge and experience to provide a guided and structured path for groups of learners.
One unique characteristic of learning systems that support peer collaboration is that these systems have the potential to supplement or replace software-based representations of domain- and learner-models with the representations implicitly formed by peers. In order to realize this potential, a collaborative activity must sufficiently motivate peers to reflect on, collect, and communicate these mental models. Peer-assessment represents a class of activities that address this challenge by design. In this work, we describe a project, currently under development, in which peer-assessment is melded with peer-instruction to create a new learning activity for an existing collaborative learning platform. We present the rationale behind the design of the activity, focusing specifically on how it draws from and synthesizes the three modes of learning supported by the Grockit platform: adaptive individual study, live collaborative small-group study, and instructor-led skill-focused lessons. By treating teaching as a demonstration of learning, we illustrate how a single activity can peer-assess mastery and peer-assist learning.
While many web-based learning systems connect students asynchronously, fewer systems focus on facilitating synchronous interactions among learners. Given the value of real-time communication - the social and motivational benefits of having a cohort of peers and the ability for a student to get immediate answers to pressing questions - it is perhaps surprising that more systems do not support interaction synchronicity. We suggest that this is due, in part, to a mismatch between the hypertext document-oriented nature of the web and the social activity-oriented nature of learning, and we explore how several systems address this discrepancy. We discuss Grockit, a web-based learning environment that we designed to support both synchronous and asynchronous interactions, and share lessons learned from grappling with the choices enabled by this flexibility: Which interactions should be synchronous? Which should be asynchronous? Which should be a mix? What should that mix be?
The recent movement towards publishing open educational resources has increased the variety and quantity of learning materials available to students outside of the traditional classroom environment. Several core characteristics of the classroom environment, however, are difficult to offer through a web-based interface, including: (1) interaction and camaraderie among a cohort of peers, (2) the ability to get "real-time" answers to pressing questions, and (3) a motivating force to keep the student engaged over time. An online learning environment can approximate the value of peer cohorts and live question-answering by supporting (and encouraging) synchronous interactions among individuals studying a common topic. A learning system can motivate participation and collaboration by incorporating elements of game mechanics in the activity. We discuss Grockit, a recently-launched website that combines a virtual study group format with multi-player game dynamics to provide an engaging live collaborative learning environment for geographically-dispersed learners.
In classroom-based studies, peer tutoring has proved to be an effective learning strategy, both for the tutees and for their peer tutors. Today, the increasingly widespread availability of computers and internet access in the homes and after-school programs of students offers a new venue for peer learning. In seeking to translate the successes of peer-assisted learning from the classroom to the Internet, one major hurdle to overcome is that of motivation. When teachers are no longer supervising student activity and when participation itself becomes voluntary, peer tutoring protocols may stop being educationally productive. In order to successfully leverage these peer interactions, we must find a way to facilitate and motivate learning among a group of unsupervised peers. In this dissertation, we respond to this challenge by reconceptualizing the interactions among peers within the context of a different medium: that of games. In designing a peer-tutoring experience as a two-player game, we gain a valuable set of tools and techniques for affecting student participation, engagement, goals, and strategies.
Our contributions:
- We define a set of criteria for games -- the Teacher's Dilemma criteria -- that motivate players to challenge one another with problems of appropriate difficulty;
- We present three games that satisfy the Teacher's Dilemma criteria when played by rational players under idealized conditions;
- We demonstrate, using computer simulations of strategic dynamics, that game-play converges towards meeting these criteria over time under more realistic conditions;
- We design a suite of software that incorporates a Teacher's Dilemma game into several web-based activities for different learning domains;
- We collect data from thousands of students using these activities, and examine how the games affected game-play strategy and learning among these students.
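The convergence result in the contributions above can be illustrated with a toy simulation. Everything here is hypothetical: a tutee of fixed skill, a ten-point difficulty scale, and a Teacher's-Dilemma-style payoff that peaks when the tutee's chance of success is near 50%. Simple hill-climbing play then settles on an "appropriate" challenge.

```python
# Hypothetical sketch of a strategic-dynamics simulation: a tutor's
# reward peaks at "appropriate" difficulty, and greedy hill-climbing
# play converges to that difficulty over time.
import random

random.seed(1)

def p_correct(skill, difficulty):
    # Chance the tutee answers correctly falls as difficulty exceeds skill.
    return 1 / (1 + 2 ** (difficulty - skill))

def tutor_reward(p):
    # Teacher's-Dilemma-style payoff: maximized when the tutee's chance
    # of success is near 50% (too easy or too hard both score low).
    return 1 - abs(p - 0.5) * 2

def play(skill=5, rounds=500):
    difficulty = 1  # the tutor starts by posing trivial challenges
    for _ in range(rounds):
        candidate = min(max(difficulty + random.choice([-1, 1]), 1), 10)
        if (tutor_reward(p_correct(skill, candidate))
                >= tutor_reward(p_correct(skill, difficulty))):
            difficulty = candidate  # greedy move toward higher payoff
    return difficulty

final = play()
print("converged difficulty:", final)  # settles where success chance is ~50%
```

Here the payoff is maximized exactly where the tutee's success probability is one half (difficulty equal to skill), so a rational or hill-climbing tutor ends up posing appropriately difficult challenges without ever being told the tutee's skill directly.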
Games provide a promising mechanism for intelligent tutoring systems in that they offer means to influence motivation and structure interactions. We have designed and released several game-based tutoring systems in which students learn to identify the best game strategies to adopt, and, in doing so, create for each other increasingly productive learning environments. Here, we first detail the core game underlying our deployed systems, designed to leverage human intelligence in tutoring systems through the tutor's identification of "appropriate" challenges for their tutee. While this game works well for task domains in which problem difficulty is known, it cannot be applied to domains in which nothing is known about a problem beyond its correct solution. We introduce a second, more robust game capable of addressing this larger set of task domains. By incorporating player-generated probability estimates (in place of a difficulty metric), we show that a game can be designed to simultaneously elicit best-effort responses from tutees, honest statements of probability estimates from tutees, and appropriate challenges from tutors. We derive a set of constraints on the parameterized version of this game necessary for rational players to converge on this "Teacher's Dilemma" learning environment. Beyond providing a foundation for future tutoring systems, this work offers a new mechanism with which to simultaneously leverage and enhance the knowledge of peer learners.
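One standard way to make honest probability reports a player's best strategy - not necessarily the paper's exact payoff, but illustrative of the mechanism - is a proper scoring rule. The quadratic (Brier-style) sketch below shows that a tutee whose true belief in answering correctly is 0.7 maximizes expected score by reporting exactly 0.7.

```python
# Illustrative sketch (hypothetical payoff): under a quadratic
# (Brier-style) scoring rule, reporting one's true probability estimate
# maximizes expected score, so honesty is the rational strategy.

def expected_score(belief, report):
    # Score 1-(1-q)^2 if the answer is correct, 1-q^2 if incorrect;
    # take the expectation under the player's true belief.
    return belief * (1 - (1 - report) ** 2) + (1 - belief) * (1 - report ** 2)

belief = 0.7
reports = [i / 100 for i in range(101)]
best = max(reports, key=lambda q: expected_score(belief, q))
print("best report:", best)
```

Differentiating the expected score with respect to the report q gives 2(belief - q), which is zero exactly at q = belief, so the grid search recovers the honest report.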
Problem difficulty estimates play important roles in a wide variety of educational systems, including determining the sequence of problems presented to students and the interpretation of the resulting responses. The accuracy of these metrics is therefore important, as they can determine the relevance of an educational experience. For systems that record large quantities of raw data, these observations can be used to test the predictive accuracy of an existing difficulty metric. In this paper, we examine how well one rigorously developed - but potentially outdated - difficulty scale for American-English spelling fits the data collected from seventeen thousand students using our SpellBEE peer-tutoring system. We then attempt to construct alternate metrics that use collected data to achieve a better fit. The domain-independent techniques presented here are applicable when the matrix of available student-response data is sparsely populated or non-randomly sampled. We find that while the original metric fits the data relatively well, the data-driven metrics provide approximately 10% improvement in predictive accuracy. Using these techniques, a difficulty metric can be periodically or continuously recalibrated to ensure the relevance of the educational experience for the student.
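The recalibration idea can be sketched concretely. The words, probabilities, and the a-priori metric below are all invented; the sketch shows the general procedure: score a fixed difficulty metric and a data-driven, smoothed replacement by how well each predicts held-out responses, here via mean log-loss.

```python
# Hedged, hypothetical sketch: compare an a-priori difficulty metric
# against a data-driven one by predictive accuracy on held-out responses.
import math
import random

random.seed(2)

WORDS = ["cat", "rhythm", "necessary", "accommodate"]
TRUE_P = {"cat": 0.95, "rhythm": 0.4, "necessary": 0.6, "accommodate": 0.5}
# The "old" metric misjudges some words (e.g. it thinks "rhythm" is easy).
OLD_METRIC_P = {"cat": 0.9, "rhythm": 0.7, "necessary": 0.6, "accommodate": 0.6}

def draw(n):
    """Simulated student responses: (word, answered correctly?)."""
    return [(w, random.random() < TRUE_P[w]) for w in random.choices(WORDS, k=n)]

train, test = draw(4000), draw(2000)

# Data-driven metric: per-word accuracy with Laplace smoothing, which
# behaves sensibly even for sparsely observed words.
counts = {w: [1, 2] for w in WORDS}  # [correct + 1, attempts + 2]
for w, ok in train:
    counts[w][0] += ok
    counts[w][1] += 1
new_metric_p = {w: c / n for w, (c, n) in counts.items()}

def log_loss(metric, data):
    """Mean negative log-likelihood of observed responses under a metric."""
    eps = 1e-9
    total = 0.0
    for w, ok in data:
        p = metric[w] if ok else 1 - metric[w]
        total -= math.log(max(p, eps))
    return total / len(data)

old, new = log_loss(OLD_METRIC_P, test), log_loss(new_metric_p, test)
print(f"old metric log-loss: {old:.3f}   recalibrated: {new:.3f}")
```

Rerunning this fit as new responses arrive is one way a difficulty metric could be periodically or continuously recalibrated, as the abstract describes.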
In many intelligent tutoring systems, a detailed model of the task domain is constructed and used to provide students with assistance and direction. Reciprocal tutoring systems, however, can be constructed without needing to codify a full-blown model for each new domain. This provides various advantages: these systems can be developed rapidly and can be applied to complex domains for which detailed models are not yet known. In systems built on the reciprocal tutoring model, detailed validation is needed to ensure that learning indeed occurs. Here, we provide such validation for SpellBEE, a reciprocal tutoring system for the complex task domain of American-English spelling. Using a granular definition of response accuracy, we present a statistical study designed to assess and characterize student learning from collected data. We find that students using this reciprocal tutoring system exhibit learning at the word, syllable, and grapheme levels of task granularity.
Tutoring systems that engage each student as both a tutee and a tutor can be powerfully enhanced by motivating each tutor to try to appropriately challenge their tutee. The BEEweb platform is presented as a foundation upon which to build such systems, based upon the Reciprocal Tutoring protocol and the Teacher's Dilemma. Three systems that have recently been built on the BEEweb platform are introduced.
The task of monitoring success and failure in coevolution is inherently difficult, as domains need not have any external metric to measure performance. Past metrics and visualizations for coevolution have been limited to identification and measurement of success but not failure. We suggest circumventing this limitation by switching from "best-of-generation"-based techniques to "all-of-generation"-based techniques. Using "all-of-generation" data, we demonstrate one such technique - a population-differential technique - that allows us to profile and distinguish an assortment of coevolutionary successes and failures, including arms-race dynamics, disengagement, cycling, forgetting, and relativism.
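A minimal version of an "all-of-generation" population-differential comparison can be sketched as follows. The domain is a hypothetical toy (an individual is just a number, and bigger usually wins); the technique is the point: play every member of a later generation against every member of an earlier one, and read progress or forgetting off the mean outcome.

```python
# Minimal sketch of an "all-of-generation" population-differential
# comparison in a hypothetical toy domain.
import random

random.seed(3)

def beats(a, b):
    # Toy domain: an individual is a number; bigger usually wins,
    # with some noise so outcomes are not fully deterministic.
    return a + random.gauss(0, 0.5) > b

def population_differential(pop_now, pop_then, trials=5):
    """Mean outcome of pop_now vs. pop_then, centered at zero.

    > 0 indicates progress since the earlier generation; < 0 indicates
    regress (e.g. forgetting); ~ 0 indicates stasis or relativism.
    """
    wins = total = 0
    for a in pop_now:
        for b in pop_then:
            for _ in range(trials):
                wins += beats(a, b)
                total += 1
    return wins / total - 0.5

# Fake run: populations drift upward (an "arms race"), so a later
# generation should score positively against an earlier one.
generations = [[g + random.random() for _ in range(10)] for g in range(5)]
diff = population_differential(generations[4], generations[0])
print(f"differential vs. generation 0: {diff:+.2f}")
```

Because whole populations (not just best-of-generation champions) are compared, the same differential can also come out negative or cyclic, which is what lets the technique distinguish failures such as forgetting and cycling rather than only measuring success.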
Formalizing a student model for an educational system requires an engineering effort that is highly domain-specific. This model-specificity limits the ability to scale a tutoring system across content domains. In this work we offer an alternative, in which the task of student modeling is not performed by the system designers. We achieve this by using a reciprocal tutoring system in which peer-tutors are implicitly tasked with student modeling. Students are motivated, using the Teacher's Dilemma, to use these models to provide appropriately-difficult challenges. We implement this as a basic literacy game in a spelling-bee format, in which players choose words for each other to spell across the internet. We find that students are responsive to the game's motivational structure, and we examine the effect on participants' spelling accuracy, challenge difficulty, and tutoring skill.
Coevolutionary algorithms require no domain-specific measure of objective fitness, enabling these algorithms to be applied to domains for which no objective metric is known or for which known metrics are too expensive. But this flexibility comes at the expense of accountability. Past work on monitoring has focused on measuring success, but has ignored failure. This limitation is due to a common reliance on "best-of-generation" (BOG) based analysis, and we propose a population-differential analysis based on an alternate "all-of-generation" (AOG) framework that is not similarly limited.