Building a longitudinal learning analytics ecosystem in a new dental school

Over the past two years, I have been involved in designing and implementing a longitudinal learning analytics ecosystem within a newly established American dental school. The original goal of this work was not simply to collect assessment data, but to address a persistent challenge in competency-based education: how can we better measure students' progress over time as they develop clinical competence? This effort came with a practical constraint: longitudinal evaluation requires compiling and interpreting large volumes of performance data in a scalable, nimble way. In a fast-paced environment where patient care is the priority, the data needed to be presented in ways that meaningfully supported learners, faculty, and institutional leaders as they made real-time decisions. More than ten practices needed visibility into where each student stood at any point in time so they could responsibly assign cases aligned with demonstrated readiness. With that objective in mind, I led a team that worked tirelessly to build a solution to one of the most complex and distinctive challenges I have had the privilege to take on.

Building the Ecosystem: Designing the Evaluation Workflow

Before any longitudinal interpretation could occur, we needed a flexible and reliable way to capture performance data in a format that reflected the realities of clinical education. Hence, one of the earliest priorities in this work was not the dashboard itself, but the evaluation workflow that would feed it.

To address this, we used Qualtrics to design an encounter-based evaluation system. Instead of relying on a single static form, evaluators can access a personalized link, enter contextual details about the clinical interaction, and are guided to the specific rubric relevant to that encounter. From there, they assess performance using color-coded ratings and narrative feedback aligned to the clinically defined competency tasks.
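As a rough illustration of the link mechanics, the sketch below generates a personalized survey link by appending evaluator context as query parameters, which Qualtrics can capture as embedded data and use to route the evaluator to the appropriate rubric. The survey URL and field names (evaluatorId, practice) are illustrative assumptions, not our actual configuration.

```python
# Minimal sketch: building personalized Qualtrics links.
# SURVEY_URL and the embedded-data field names are hypothetical placeholders.
from urllib.parse import urlencode

SURVEY_URL = "https://example.qualtrics.com/jfe/form/SV_XXXXXXXX"

def personalized_link(evaluator_id: str, practice: str) -> str:
    """Append evaluator context as query parameters; Qualtrics can store
    these as embedded data and branch to the matching rubric block."""
    params = urlencode({"evaluatorId": evaluator_id, "practice": practice})
    return f"{SURVEY_URL}?{params}"

print(personalized_link("F1234", "Practice-A"))
```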

This structure allows evaluators (and students completing self-assessments) to focus only on the competencies demonstrated during a particular patient interaction, reducing unnecessary navigation while maintaining consistent data capture across multiple evaluations. By embedding flexibility at the point of data entry, the system supports efficient evaluation on the clinic floor and ensures that large volumes of assessment data can later be aggregated through a streamlined process built on the secure file transfer protocol (SFTP). SFTP enables encrypted transfer of files between Qualtrics and our internal systems, supporting the secure movement of sensitive evaluation data.
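For readers curious about the transfer step, here is a minimal sketch of the kind of SFTP pull involved, written with the paramiko library; the host, credentials, and directory names are placeholders rather than our production values, which rely on key-based authentication.

```python
# Minimal sketch: pulling evaluation exports over SFTP with paramiko.
# Host, credentials, and paths are illustrative, not production values.
from pathlib import Path
import paramiko

HOST, PORT = "sftp.example.edu", 22
USER, PASSWORD = "qualtrics_feed", "change-me"

Path("incoming").mkdir(exist_ok=True)  # local landing folder for exports

transport = paramiko.Transport((HOST, PORT))
transport.connect(username=USER, password=PASSWORD)
sftp = paramiko.SFTPClient.from_transport(transport)
try:
    # Download each CSV export dropped into the exchange folder.
    for name in sftp.listdir("/exports/evaluations"):
        if name.endswith(".csv"):
            sftp.get(f"/exports/evaluations/{name}", f"incoming/{name}")
finally:
    sftp.close()
    transport.close()
```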

From Data Capture to Interpretation: Visualizing Longitudinal Performance

Once the evaluation workflow was established, the next challenge was to ensure that the large volume of data we received could be processed effectively. Individual assessment events provided valuable information, but they did not offer sufficient visibility into patterns of progression across time, clinical sites, and evaluators.

Each academic quarter, the system generates more than one million data points from evaluator assessments and student self-evaluations across all practices. These records are consolidated into a structured dataset that begins with over 1,500 columns of information, which must be cleaned down to 18 standardized fields before they can be meaningfully interpreted.
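To give a sense of what that cleaning involves, the sketch below collapses a wide export into a tidy long format, keeping only the rubric cells an encounter actually touched. The column names here are hypothetical stand-ins for the real schema.

```python
# Minimal sketch: reshaping the wide Qualtrics export into a tidy long format.
# All column names are hypothetical stand-ins for the real schema.
from pathlib import Path
import pandas as pd

raw = pd.read_csv("incoming/evaluations_q1.csv")

# Identifier columns repeated on every row of the export (illustrative names).
id_cols = ["ResponseId", "StudentId", "EvaluatorId", "Practice", "EncounterDate"]

# Melt the 1,500+ rubric-specific columns into task/rating pairs, then drop
# the blanks created by branching (each encounter fills only a few columns).
tidy = (
    raw.melt(id_vars=id_cols, var_name="RubricTask", value_name="Rating")
       .dropna(subset=["Rating"])
)

Path("clean").mkdir(exist_ok=True)
tidy.to_csv("clean/evaluations_tidy.csv", index=False)
```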

To make this information usable, we developed a series of Power BI dashboards designed to transform large volumes of assessment data into clear visual representations of longitudinal performance. The dashboards provide users with an overview of competency progression, highlight tasks requiring improvement, and allow filtering by role, rubric, and individual learner. Color-coded indicators communicate levels of competency quickly, while longitudinal graphs display trends in performance across time.
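As a simplified view of the data shaping behind those longitudinal graphs, the sketch below maps color-coded ratings onto an assumed ordinal scale and averages them per student, task, and academic quarter. The actual trend logic lives in Power BI; the mapping and column names shown are illustrative assumptions.

```python
# Minimal sketch: building a longitudinal trend table from the tidy records.
# The color-to-score mapping is an assumed ordinal scale, not the real rubric.
import pandas as pd

tidy = pd.read_csv("clean/evaluations_tidy.csv", parse_dates=["EncounterDate"])

scale = {"Red": 1, "Yellow": 2, "Green": 3}  # illustrative mapping
tidy["Score"] = tidy["Rating"].map(scale)

# Average performance per student, rubric task, and academic quarter,
# which is the shape a dashboard trend line would consume.
trend = (
    tidy.assign(Quarter=tidy["EncounterDate"].dt.to_period("Q").astype(str))
        .groupby(["StudentId", "RubricTask", "Quarter"], as_index=False)["Score"]
        .mean()
)
print(trend.head())
```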

By organizing complex data into navigable visual layers, the dashboards enable faculty, students, and leaders to move from isolated evaluation events toward a shared understanding of performance patterns and overall curricular health. They serve as an interpretive bridge across multiple practices, allowing institutional leaders to gain insight into how quickly students progress in their academic journey.

Example of the longitudinal analytics dashboard used to visualize competency progression and performance trends across time.

From Interpretation to Action: What Changed

Once longitudinal performance data became visible through the dashboards, the most noticeable changes were not technical but conversational. The system did not simply provide information. It created a shared reference point that allowed different stakeholders to interpret performance using the same evidence. A collective story began to emerge.

For faculty, the dashboards made it easier to identify patterns that were previously difficult to detect. Questions that once relied on isolated impressions could now be explored using shared data. How are students in Practice A performing compared to those in Practice B? Are we collecting the right information to ensure competencies are being achieved? For practice managers, the dashboards provided visibility into progression needs and helped guide decisions about allocating resources to areas where students required additional support.

At the leadership level, discussions increasingly focused on evaluation consistency and curricular effectiveness. Leaders began asking whether evaluators across practices were sufficiently calibrated. What does it truly mean for a student to be marked green for indirect supervision? Do all stakeholders share the same understanding of performance levels and their implications for clinical readiness?

Students also experienced a shift in how they engaged within clinical settings. The system enabled them to build a documented portfolio of patient encounters and track their performance over time. For many second-year students, this represented a significant change. Some accumulated close to one hundred patient encounters during their first semester, giving them an unprecedented view of their own progression. The ability to review longitudinal data alongside evaluator comments allowed learners to compare self-assessments with clinical evaluations and better understand their development. This visibility supported more reflective conversations with supervisors about readiness, strengths, and areas requiring further practice.

Over time, the ecosystem shifted the role of assessment data from isolated documentation toward a shared resource that supports ongoing interpretation and decision making across the academic community.

A snapshot view of student progression was developed to enable a quick assessment of each student's standing in their experiential education.

Lessons Learned: Technology, Culture, and Organizational Change

Looking back on this work, one of the most important lessons is that building a learning analytics ecosystem is not primarily a technical task. The greatest challenges were related to people, shared understanding, and how the institution interpreted and responded to the information being presented. Whether stakeholders were comfortable with the story being told was secondary. The data revealed patterns, and it was the responsibility of leadership to respond and make the necessary adjustments.

While the system made performance data visible, it quickly became clear that visibility alone was not enough. Stakeholders needed a shared language to interpret what the data meant. Even simple indicators, such as color-coded competency levels, required ongoing conversations to ensure consistent understanding across practices. Calibration is not a one-time effort. It remains an ongoing process supported by continuous dialogue among faculty.

Transparency also introduced new considerations. Making performance patterns visible supported alignment and collaboration, but it required trust and careful communication. Over time, the system shifted from being seen as a reporting tool to becoming a shared resource for reflection and improvement.

Perhaps the most meaningful outcome has been the cultural shift around assessment data. What once existed as isolated documentation is now used as a common reference point to support conversations about progression, expectations, and curricular health across the academic community.

A snapshot of the Hygiene evaluations to date shows 1,864 patient encounters recorded after just one semester, illustrating the significant volume of data generated through clinical practice activities.
