Invisible Integrated Assessment | AI for Human Improvement

Deep Learning APIs

Invisible Integrated Assessment is here!

Use Games, Simulations, Technology-Enhanced Items, and Authentic Performance Tasks to teach AND assess simultaneously!

NGSS NEWS: metacog announces the availability of its new open-ended, rubric-based scoring API and a new partnership with PhET Interactive Simulations:

AI Based Performance Scoring - Automated Competency Assessment

Testing the ability to do the real job at scale

Competency Assessment and Learning Converge Invisibly

Metacog is an analytics platform that integrates seamlessly and invisibly into existing (and new) digital challenges such as simulations, games, performance tasks, and delivery platforms. It captures and analyzes the event streams generated by user interactions with digital learning objects or devices, making a user's metacognitive processes visible. The event stream ingested by metacog is a complete record of the user's problem-solving trajectory during a learning activity, and is much richer than the final end-state answer.
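As a sketch of what such an event stream might look like (the field names and event types below are hypothetical, not metacog's actual schema), each interaction can be logged as a timestamped record, so the full problem-solving trajectory is preserved rather than just the final answer:

```python
import json
import time

def make_event(session_id, event_type, data):
    """Build one interaction event for a learner's event stream.
    Field names here are illustrative, not metacog's actual schema."""
    return {
        "session_id": session_id,
        "timestamp": time.time(),
        "type": event_type,      # e.g. "component_added", "answer_submitted"
        "data": data,            # free-form payload describing the interaction
    }

# A toy trajectory: every intermediate action is kept, not only the end state.
stream = [
    make_event("s-001", "sim_started", {"activity": "circuit-builder"}),
    make_event("s-001", "component_added", {"part": "resistor", "ohms": 220}),
    make_event("s-001", "component_added", {"part": "led"}),
    make_event("s-001", "answer_submitted", {"lit": True}),
]

print(json.dumps(stream, indent=2))
```

Analyzing the whole list, rather than only the last record, is what makes the learner's process (not just the outcome) assessable.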

Metacog integrates with your learning solutions through a series of APIs, so you can use only what you need. These APIs cover data ingest, retrieval, and visualization; automated rubric-based AI scoring (with score-and-forward capability); diagnostics (learner affect); and micro-credentialing.

Subject-matter experts can design open-ended performance tasks, define rubrics for scoring, and ask the metacog machine-learning engine to build and deploy scoring models - without the need for individual attention from a team of data scientists.
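One way to picture a machine-readable rubric that a subject-matter expert might author (the structure below is purely illustrative, not metacog's actual format) is as a set of scoring dimensions, each with ordered performance levels:

```python
# Hypothetical rubric for an open-ended performance task.
rubric = {
    "task": "design-a-circuit",
    "dimensions": [
        {"name": "planning",  "levels": ["none", "partial", "systematic"]},
        {"name": "iteration", "levels": ["none", "some", "evidence-driven"]},
    ],
}

def validate_rubric(r):
    """Check that every dimension has a name and at least two levels,
    so a scoring model has something meaningful to predict."""
    return all(d.get("name") and len(d.get("levels", [])) >= 2
               for d in r["dimensions"])

print(validate_rubric(rubric))  # True
```

A definition like this is all the expert supplies; building and deploying the scoring model itself is left to the machine-learning engine.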
“Multiple choice wires your brain to pick an answer rather than have an original thought”
— Neil deGrasse Tyson, Astrophysicist
The resulting scoring models, informed by research and developed using educational data mining, are adaptable, automatic, and scalable to millions of students (and beyond). They are validated via human-human (initially) and human-machine (ongoing) inter-rater reliability. Scores can then be retrieved through the Visualization API for display within your own visualizations, informing your subject-matter experts, learners, and instructors.
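Inter-rater reliability between two raters (human-human, or human-machine) is commonly quantified with a statistic such as Cohen's kappa, which measures agreement beyond what chance alone would produce. A minimal self-contained sketch, with made-up score labels:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed - expected) / (1 - expected), where
    'expected' is the chance agreement implied by each rater's label rates."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy example: a human scorer vs. a machine scorer on five responses.
human   = ["proficient", "basic", "proficient", "advanced", "basic"]
machine = ["proficient", "basic", "basic",      "advanced", "basic"]
print(round(cohens_kappa(human, machine), 3))
```

Kappa near 1 indicates the machine scores like a trained human; tracking it on an ongoing basis is what keeps an automated scorer honest.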

The metacog score-and-forward API lets you map scores into another format and push them downstream into your other enterprise learning systems (e.g., adaptive testing, lesson recommendation, learner competency portfolios, or HR compliance).
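To illustrate the kind of mapping score-and-forward implies (the point values and payload fields below are hypothetical, not any real downstream system's format), rubric levels might be translated into a points-based payload before being pushed to an LMS or gradebook:

```python
# Hypothetical translation table from rubric levels to gradebook points.
LEVEL_TO_POINTS = {"none": 0, "partial": 5, "systematic": 10}

def to_lms_payload(learner_id, scores):
    """Map per-dimension rubric levels into the point-based format
    a downstream learning system might expect."""
    return {
        "learner": learner_id,
        "total": sum(LEVEL_TO_POINTS[level] for level in scores.values()),
        "detail": {dim: LEVEL_TO_POINTS[level]
                   for dim, level in scores.items()},
    }

payload = to_lms_payload("u-42", {"planning": "systematic",
                                  "iteration": "partial"})
print(payload["total"])  # 15
```

The same pattern generalizes: swap the translation table and payload shape per destination system, and one scoring pipeline can feed many downstream consumers.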

Metacog powers integrated learning and assessment using authentic performance measures and lessons you design (including those using simulations, VR/AR, and gaming).