Interactive technologies have become an important part of teaching and learning. However, the data that these systems generate is increasingly unstructured and complex, and therefore difficult to make sense of. Current computationally driven methods for classifying student contributions (e.g., latent semantic analysis or learning-based image classifiers) cannot operate on the multimodal artifacts (e.g., sketches, videos, or annotated images) that new technologies enable. We have developed and implemented a classification algorithm based on learners’ interactions with the artifacts they create. This new form of semi-automated concept classification, coined Collaborative Spatial Classification, leverages the spatial arrangement of artifacts to provide a visualization that generates summary-level data about idea distribution. This approach has two benefits. First, students learn to identify and articulate patterns and connections among classmates’ ideas. Second, the teacher receives a high-level view of the distribution of ideas, enabling them to decide how to shift their instructional practices in real time.
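The section does not detail the classification algorithm itself; as a rough illustration of how spatial arrangement could yield summary-level data about idea distribution, the minimal Python sketch below groups artifacts by their proximity on a shared canvas and reports the size of each group. The `Artifact` fields, the `radius` threshold, and the greedy proximity rule are all assumptions for illustration, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class Artifact:
    """A student artifact placed on a shared 2D canvas (hypothetical schema)."""
    artifact_id: str
    x: float
    y: float


def spatial_clusters(artifacts: List[Artifact], radius: float = 50.0) -> Dict[str, int]:
    """Greedily group artifacts whose positions fall within `radius` of an
    existing cluster centroid, then return each cluster's size.

    Cluster sizes act as a summary of how ideas are distributed, under the
    assumption that students place related artifacts near one another.
    """
    centroids: List[Tuple[float, float]] = []
    members: List[List[Artifact]] = []
    for art in artifacts:
        placed = False
        for i, (cx, cy) in enumerate(centroids):
            if (art.x - cx) ** 2 + (art.y - cy) ** 2 <= radius ** 2:
                members[i].append(art)
                n = len(members[i])
                # Update the running centroid of the cluster.
                centroids[i] = ((cx * (n - 1) + art.x) / n,
                                (cy * (n - 1) + art.y) / n)
                placed = True
                break
        if not placed:
            centroids.append((art.x, art.y))
            members.append([art])
    return {f"cluster_{i}": len(group) for i, group in enumerate(members)}


if __name__ == "__main__":
    canvas = [
        Artifact("sketch-1", 10, 12), Artifact("video-2", 18, 9),
        Artifact("image-3", 200, 210), Artifact("sketch-4", 205, 215),
        Artifact("sketch-5", 400, 40),
    ]
    # e.g. {'cluster_0': 2, 'cluster_1': 2, 'cluster_2': 1}
    print(spatial_clusters(canvas))
```

In practice, the resulting counts could feed the teacher-facing visualization described above, while the spatial grouping activity itself remains the students' work of arranging and connecting classmates' ideas.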