1,228 Mock Exam Results: What the Data Says About Edumentis and Online Admission Prep
Premium Edumentis users outperformed free-tier candidates by nearly 5 percentage points on our mock Admitere exam — a statistically significant result drawn from 1,228 test-takers. Explore the full breakdown by medical centre and subject.

The core question was straightforward: do Edumentis's AI-powered tools — adaptive practice questions, personalised explanations, and extended content libraries — translate into measurable gains on exam day compared to basic free-tier access?
Key findings:
- Premium users scored nearly 5 percentage points higher on average on our mock Admitere exam than free-tier candidates — a statistically significant gap across 1,228 participants that persisted even after filtering out abandoned submissions.
- Premium users were 21% less likely to fall into the bottom 30% of all scorers, and were disproportionately concentrated in the 71–85% performance band.
- Centre-level breakdowns showed gaps as large as +26.3 percentage points (UMF Craiova), pointing to the strongest impact in regions where independent study resources are harder to come by.
These findings are consistent with broader research on AI-assisted education, which reports significant gains in medical exam performance (SMD = 2.06, p < 0.00001) and medium-to-large effect sizes for adaptive learning systems (g = 0.70).
Introduction
Romanian medical school admissions are among the most competitive in the country. Each year, thousands of hopefuls sit the Admitere — a rigorous entrance exam covering Biology, Chemistry, and Physics, with each university defining its own format and difficulty. Edumentis was built to close that preparation gap: AI-driven practice, adaptive question selection, and mock exams designed to replicate the real thing as closely as possible.
But the bigger question is whether the platform actually delivers. On 14–15 March 2026, we ran our first major mock Admitere across several Romanian medical centres. The data gives us a concrete answer — and a clear direction for what comes next.
What We Did
We built centre-specific mock exams matching each university's format and syllabus. A total of 1,228 users took part, from long-time platform users to newcomers. We then compared mean scores between paid and free-tier participants using Welch's t-test, which accounts for unequal sample sizes and variances.
This is an observational study, not a randomised controlled trial. Users self-selected into plans, which means motivation and study habits could also play a role. We are upfront about this — not because the results need defending, but because honest data earns more trust than polished spin.
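For readers who want to check the arithmetic, the test statistic can be reproduced from the summary statistics reported in the tables below. This is a minimal, hand-rolled Welch's t-test sketch in Python (using a normal approximation for the p-value, which is adequate at these sample sizes); it is an illustration, not the analysis pipeline we ran.

```python
import math

def welch_from_summary(m1, s1, n1, m2, s2, n2):
    """Welch's t-test from summary statistics (mean, SD, n per group).

    Returns the t statistic, the Welch-Satterthwaite degrees of freedom,
    and a two-sided p-value via the normal approximation (reasonable
    when df is in the hundreds, as it is here).
    """
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2          # per-group squared standard errors
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
    return t, df, p

# Summary figures reported in this article (paid vs. free groups):
t, df, p = welch_from_summary(55.93, 32.86, 326, 51.01, 34.50, 902)
print(f"t = {t:.2f}, df = {df:.0f}, p = {p:.3f}")  # p lands close to the reported 0.023
```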
The Results
Overall Performance
On average, paid users scored nearly 5 percentage points higher. In a competitive admissions process where a handful of points can determine acceptance, that margin matters. The paid group also showed lower score variability (SD 32.86 vs. 34.50), suggesting that structured premium preparation leads to more predictable outcomes.
| Metric | Paid Users | Free Users | Difference |
|---|---|---|---|
| Participants | 326 (27%) | 902 (73%) | — |
| Mean Score | 55.93% | 51.01% | +4.92 pp |
| Mean Score (excl. zero) | 60.98% | 56.11% | +4.87 pp |
| Standard Deviation | 32.86 | 34.50 | Premium more consistent |
Is the Difference Real?
Yes. Two statistical tests confirm the gap is not noise:
| Sample | p-value | Significant? | Effect Size |
|---|---|---|---|
| All submissions (n = 1,228) | 0.023 | Yes (p < 0.05) | d = 0.14 |
| Excluding zero scores (n = 1,119) | 0.017 | Yes (p < 0.05) | d = 0.16 |
What does this mean in plain terms? If premium and free users truly performed the same, a gap this large would appear by chance only about 2.3% of the time. When zero-score submissions (abandoned attempts) are removed, the result gets even stronger, confirming that the gap is not caused by higher dropout rates in one group.
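The effect sizes in the table above can likewise be recomputed from the summary statistics alone. Below is a minimal sketch of Cohen's d with a pooled standard deviation, using the group figures reported in this article; it illustrates the calculation rather than reproducing our analysis code.

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    pooled_var = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled_var)

# Paid (n=326) vs. free (n=902) summary statistics from the results table:
d = cohens_d(55.93, 32.86, 326, 51.01, 34.50, 902)
print(f"d = {d:.2f}")  # matches the reported d = 0.14 for all submissions
```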
Putting the Numbers in Context
An effect size of d = 0.14–0.16 may look modest at first glance, but it falls in line with the effect sizes reported for the most rigorously evaluated platforms in education research. For context:
| Platform / Intervention | Effect Size | Source |
|---|---|---|
| Edumentis Premium (this study) | d = 0.14 – 0.16 | Internal data, March 2026 |
| DreamBox Math (ESSA-rated) | d = 0.10 – 0.16 | Evidence for ESSA |
| Khan Academy (panel study) | d = 0.12 – 0.22 | PNAS, 2024 |
| SAT coaching (meta-analysis) | d ≈ 0.15 | Bangert-Drowns et al. |
Edumentis's effect size sits squarely within the range of platforms carrying the strongest evidence ratings in education. Rather than overpromising dramatic score jumps, our data points to a reliable, reproducible advantage — the kind that holds up under scrutiny.
Where Edumentis Power Users Stand in the Rankings
Averages only tell part of the story. Breaking performance into score tiers paints a fuller picture:
| Score Bracket | Edumentis Users | Free Users |
|---|---|---|
| 0–30% (failing zone) | 24.1% | 31.6% |
| 31–50% | 8.5% | 11.2% |
| 51–70% | 17.0% | 15.7% |
| 71–85% (strong performance) | 29.6% | 19.3% |
| 86–100% (top tier) | 20.7% | 22.1% |
The sharpest contrast is in the 71–85% band, where premium users account for nearly 30% of their group versus just 19.3% of free users. At the bottom end, free users are substantially more likely to land in the failing zone (31.6% vs. 24.1%). In practical terms, Edumentis users are 21% less likely to end up in the bottom 30% of all participants compared to their share of the total pool.
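As an illustration of how the bands above are derived, here is a minimal Python sketch that buckets scores into the same brackets and tallies the distribution. The sample scores are hypothetical placeholders, not study data.

```python
from collections import Counter

def bracket(score):
    """Map a 0-100 score to the performance bands used in the table."""
    if score <= 30:
        return "0–30"
    elif score <= 50:
        return "31–50"
    elif score <= 70:
        return "51–70"
    elif score <= 85:
        return "71–85"
    return "86–100"

# Hypothetical example scores, for illustration only:
scores = [12, 45, 67, 78, 92, 81, 33, 70]
dist = Counter(bracket(s) for s in scores)
print(dist)  # counts per bracket, e.g. two scores in the 71–85 band
```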
Results by Medical Centre
The premium advantage appears at every centre, though the size of the gap varies:
| Centre | Paid Avg | Free Avg | Gap |
|---|---|---|---|
| UMF Craiova | 68.54% | 42.22% | +26.3 pp |
| UMF Oradea | 58.35% | 45.63% | +12.7 pp |
| UMF Târgu-Mureș | 76.42% | 66.26% | +10.2 pp |
| UMFCD Bio-Chimie | 57.93% | 49.25% | +8.7 pp |
| UMF Cluj | 74.64% | 70.43% | +4.2 pp |
| UMF Iași | 49.58% | 46.45% | +3.1 pp |
UMF Craiova shows the widest margin: paid users outscored free users by more than 26 percentage points. One plausible explanation is that Edumentis's adaptive tools fill a bigger void in regions where independent study resources are scarcer. Larger, more established centres such as Cluj and Iași likely have richer preparation ecosystems, which narrows the platform's marginal benefit.
Results by Subject
| Subject | Paid Avg | Free Avg | Gap |
|---|---|---|---|
| Chemistry | 76.88% | 73.20% | +3.7 pp |
| Physics | 70.64% | 69.43% | +1.2 pp |
| Biology | 73.31% | 72.84% | +0.5 pp |
Chemistry shows the clearest separation between paid and free users. This likely reflects the subject's reliance on problem-solving patterns that respond well to repeated AI-guided practice. Biology, by contrast, is more heavily memorisation-based, which may reduce the differentiating effect of platform access.
What This Means
This study offers early evidence that Edumentis's AI-driven platform is associated with real performance improvements in Romanian medical school mock exams. The premium-user advantage is statistically significant, consistent across all six medical centres, robust when abandoned attempts are excluded, and concentrated in exactly the score range that determines admissions outcomes.
The findings align with a growing body of research on AI-assisted learning, including meta-analyses showing meaningful effects of adaptive systems on cognitive outcomes — though the observational design means we cannot draw direct causal conclusions.
For students facing one of the most demanding exams of their lives, every point counts. Our data suggests that candidates who prepare with Edumentis's full toolkit walk into the Admitere with a genuine, measurable edge.
Sources referenced:
- Sage Journals, "Efficacy of AI-Enabled Adaptive Learning Systems," 2024. https://journals.sagepub.com/doi/abs/10.1177/07356331241240459
- PMC/BMC Medical Education, "Effectiveness of AI-assisted medical education," 2025. https://pmc.ncbi.nlm.nih.gov/articles/PMC12382127/