Open each panel to earn XP. Master the SACS standards for institutional and unit-level effectiveness.
Select your role to begin (+10 XP)
"The institution engages in ongoing, comprehensive, and integrated research-based planning and evaluation processes that (a) focus on institutional quality and effectiveness and (b) incorporate a systematic review of institutional goals and outcomes consistent with its mission."
Planning must feed back into actual changes. Data collection without plan revision fails this standard. The cycle must be demonstrably closed.
"The institution has a QEP that (a) has a topic identified through its ongoing planning processes; (b) has broad-based support; (c) focuses on improving specific student learning outcomes and/or student success; (d) commits resources; and (e) includes a plan to assess achievement."
Improvement Cycle
1. Emerge from Planning → 2. Build Broad Support → 3. Focus on Students → 4. Commit Resources → 5. Assess Achievement
Reviewer Questions
◆ Does the QEP genuinely emerge from planning—not bolted on?
◆ Is there genuine broad-based input, not just leadership buy-in?
◆ Are goals specific to student learning OR student success?
◆ Are sufficient resources (beyond just money) committed?
Key Concepts
QEP · broad-based support · student success · resources · assessment plan
Core Insight
Notification after the fact does not equal broad-based support. Faculty, staff, and students must have genuine input during development—not just receive an announcement.
"The institution identifies expected outcomes of its administrative support services and demonstrates the extent to which the outcomes are achieved."
Admin units need outcomes too. Efficiency targets, satisfaction rates, and financial goals all qualify. Link findings to resource decisions to show the cycle is active.
"The institution identifies, evaluates, and publishes goals and outcomes for student achievement appropriate to its mission, using multiple measures to document student success."
◆ Are goals, criteria, AND thresholds all clearly identified?
◆ Is this published publicly—not just on an internal intranet?
◆ Are multiple DISTINCT measures used (not the same thing measured differently)?
◆ Are thresholds appropriate for the institution's mission and student population?
Key Concepts
published publicly · multiple measures · thresholds · graduation rate
Core Insight
Three distinct obligations: identify goals with thresholds, evaluate outcomes, AND publish publicly. Behind a firewall does not count as public.
"The institution identifies expected outcomes, assesses the extent to which it achieves these outcomes, and provides evidence of seeking improvement based on analysis of the results for student learning outcomes for each of its educational programs."
◆ Specific and measurable—not "students will appreciate literature"
◆ Directly assessable with a defined method and rubric or measure
◆ Has a defined threshold of acceptable performance (e.g., 75% of students will score 3+ out of 4)
◆ Tied to curriculum so results can meaningfully drive changes
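The threshold criterion above is simple arithmetic, and making it explicit can keep an assessment report honest. Here is a minimal Python sketch of the check behind a statement like "75% of students will score 3+ out of 4"; the scores, threshold, and target rate are all hypothetical examples, not values from any real program.

```python
# Hypothetical rubric scores (1-4) for one cohort; purely illustrative.
scores = [4, 3, 2, 4, 3, 3, 1, 4, 3, 2]

THRESHOLD_SCORE = 3   # minimum acceptable rubric score
TARGET_RATE = 0.75    # e.g., "75% of students will score 3+ out of 4"

# Fraction of students at or above the rubric threshold.
rate = sum(s >= THRESHOLD_SCORE for s in scores) / len(scores)
met = rate >= TARGET_RATE

print(f"{rate:.0%} of students scored {THRESHOLD_SCORE}+ "
      f"-> threshold {'met' if met else 'NOT met'}")
```

Writing the target rate and the rubric cut-score as two separate numbers mirrors the standard's expectation that both the measure and the acceptable level of performance are defined in advance.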
Reviewer Questions
◆ Is the loop closed—do results lead to actual curriculum/pedagogy changes?
◆ Does improvement evidence exist even when outcomes are met?
◆ Are all programs assessed, or is there valid sampling?
◆ Are outcomes being continuously refreshed as programs evolve?
Key Concepts
closing the loop · measurable outcomes · thresholds · curriculum alignment
Core Insight
Collecting data is not enough. Results must drive documented action. "We are aware of the findings" fails this standard. See the Coach tab for how to handle maxed-out or too-broad outcomes.
"The institution identifies expected outcomes, assesses the extent to which it achieves these outcomes, and provides evidence of seeking improvement based on analysis of the results for student learning outcomes for collegiate-level general education competencies of its undergraduate degree programs."
Improvement Cycle
1. Define Gen Ed Competencies → 2. Map to Courses → 3. Assess Across Programs → 4. Improve Curriculum
Reviewer Questions
◆ Are gen ed competencies clearly defined at the institutional level?
◆ How are competencies assessed across different programs and disciplines?
◆ Do results drive curriculum mapping changes in the gen ed core?
◆ Is the assessment distinguishable from program-level assessment (8.2.a)?
Key Concepts
general education · competencies · curriculum mapping · undergraduate · cross-program
Core Insight
Gen ed outcomes must be assessed at the institutional level, not just within individual courses. The findings must demonstrably influence the gen ed curriculum structure.
"The institution identifies expected outcomes, assesses the extent to which it achieves these outcomes, and provides evidence of seeking improvement based on analysis of the results for academic and student services that support student success."
◆ Do support units have defined outcomes beyond satisfaction scores?
◆ Are findings used to make specific service improvements?
◆ Are outcomes tied to student success, not just service delivery?
◆ Even when scores are "acceptable," are deliberate actions documented?
Key Concepts
student support · service outcomes · improvement actions · student success link
Core Insight
"Scores are fine" is not a SACS-compliant response. Even positive findings require documented decisions about sustaining or improving services.
"The institution provides ongoing professional development opportunities for faculty members as teachers, scholars, and practitioners, consistent with the institutional mission."
Improvement Cycle
1. Assess Faculty Needs → 2. Plan Development → 3. Provide Opportunities → 4. Evaluate Impact → 5. Improve Program
Reviewer Questions
◆ Is faculty development systematic and comprehensive—not ad hoc?
◆ Are adjunct and part-time faculty included in development opportunities?
◆ Are development activities aligned with the institution's mission?
◆ Is there evidence faculty actually participate and benefit?
Faculty development must be systematic and ongoing—not just optional workshops. Adjunct faculty must also be included. Evidence of actual participation is required.
"The institution publishes and implements policies and procedures for the appointment, employment, and regular evaluation of faculty, regardless of contractual status."
◆ Are evaluations conducted at least every three years for all faculty?
◆ Do policies apply to tenured, non-tenured, and adjunct faculty?
◆ Are student evaluations used in combination with other methods (not alone)?
◆ Are evaluation criteria published and consistently applied?
Key Concepts
published policies · regular evaluation · all contractual statuses · multiple methods
Core Insight
Student evaluations alone are insufficient. Policies must apply to ALL faculty regardless of status and must be regularly implemented—not just exist on paper.
"The institution publishes and implements policies and procedures for the appointment, employment, and regular evaluation of all personnel."
Reviewer Questions
◆ Are personnel policies published and accessible to all employees?
◆ Do policies address rights, responsibilities, promotion, and grievance?
◆ Is there evidence policies are actually implemented consistently?
◆ Are workload and evaluation expectations clearly documented?
Key Concepts
published policies · implemented · all personnel · promotion · grievance
Core Insight
Policies must be published AND implemented. A well-written policy manual that isn't actually followed is not SACS compliant. Evidence of consistent application is essential.
"The institution has a clearly defined, comprehensive, and published mission statement that is specific to the institution and appropriate for higher education. The mission addresses teaching and learning and, where applicable, research and public service."
Reviewer Questions
◆ Is the mission clearly defined and specific to this institution?
◆ Does it address teaching, learning, and applicable research/service?
◆ Is it published and accessible to students, faculty, and the public?
◆ Do all programs and services align with and advance the mission?
Key Concepts
defined · comprehensive · published · specific · teaching and learning
Core Insight
The mission must drive institutional decisions—it's the foundation every other standard builds on. Everything from resource allocation to new programs must align with it.
🎮
Ready to practice?
Apply these standards to real institutional scenarios and earn XP.
Scenario-Based Practice
Apply the Standards
Real institutional situations. Choose the best SACS-aligned response. Earn XP based on your answer.
Completed: 0 / 6
Accreditation Feedback Coach
How Would You Respond?
SACS reviewers give real feedback. Work through how your unit should respond to the most common challenges to your assessment cycle.
Reviewer Feedback Received
"Your program has reported that 95%+ of students meet this student learning outcome for three consecutive assessment cycles. While commendable, this outcome is no longer providing meaningful evidence of continuous improvement. Please identify a new or refined outcome that will challenge the program to demonstrate further growth."
Your department chair asks: "We worked hard to achieve this. Why are we being penalized for success? What should we do next?"
A. Defend the outcome—95% is excellent and proves the program is working. Document this and move on.
B. Lower the threshold to 70% so there is room to show improvement again.
C. Retire this outcome and select a new, more challenging SLO that will push the program further—or add a second measurement method that reveals deeper levels of mastery.
D. Split the outcome into two more specific sub-competencies so performance variation becomes visible again.
Ways to Choose a New or Refined Student Learning Outcome
1. Go deeper on the same skill. If students "demonstrate basic written communication," shift to "demonstrate discipline-specific argumentation using primary sources and scholarly evidence."
2. Move up Bloom's Taxonomy. If the current outcome is at the "understand" or "apply" level, revise it to "analyze," "evaluate," or "create."
3. Add a performance context. "Students will solve quantitative problems" becomes "Students will solve novel quantitative problems in authentic professional contexts with minimal guidance."
4. Identify a gap from employer or alumni feedback. Review advisory board minutes or graduate surveys—what skills are graduates still lacking? Build a new SLO around that.
5. Add a second measurement method. Keep the outcome but add a direct measure (e.g., portfolio, capstone rubric) alongside the existing indirect one, revealing nuances the first measure missed.
Reviewer Feedback Received
"The stated student learning outcome 'Students will develop critical thinking skills' is too broad to be meaningfully assessed. As written, it is not possible to determine what constitutes achievement, what method would measure it, or what threshold represents success. Please refine this outcome to be specific and measurable."
How should your program rewrite this outcome to satisfy the reviewer?
A. Add a definition: "Students will develop critical thinking skills, defined as the ability to think carefully and logically."
B. Specify the behavior, context, and standard: "Students will evaluate the validity of research arguments by identifying methodological strengths and weaknesses using a 4-point rubric, with 75% scoring 3 or higher."
C. Replace it with a standardized test like the CLA+ that measures critical thinking broadly.
D. Break it into three separate outcomes: analysis, evaluation, and synthesis—each with its own measure and threshold.
How to Make a Student Learning Outcome Measurable
1. Use an action verb from Bloom's Taxonomy. Replace "understand," "appreciate," or "develop" with: analyze, evaluate, construct, design, argue, compare, demonstrate, apply.
2. Specify the context. Add where or how the skill is demonstrated: "in a capstone project," "using primary sources," "in a clinical simulation," "given a case study."
3. Define the standard. Add a measurable threshold: "with 80% accuracy," "scoring 3 or higher on a 4-point rubric," "as rated by two independent evaluators," "meeting professional licensure criteria."
4. Name the assessment method. "As measured by the capstone rubric," "as scored on the NCLEX examination," "as evaluated using the AAC&U VALUE rubric for Critical Thinking."
5. Test it with this question: "If I gave this outcome to two different assessors, would they agree on whether a student met it?" If not, it needs more specificity.
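The checklist above amounts to a completeness test on an outcome statement: verb, behavior, context, method, and standard must all be present, and the verb must not be vague. A small Python sketch can make that test concrete; the field names and the vague-verb list are my own illustrative assumptions, not SACS terminology.

```python
from dataclasses import dataclass

@dataclass
class LearningOutcome:
    # Illustrative fields mirroring the checklist; names are assumptions.
    action_verb: str   # Bloom's-level verb: "evaluate", "analyze", ...
    behavior: str      # what students will do
    context: str       # where/how the skill is demonstrated
    method: str        # the assessment instrument
    threshold: str     # the standard of acceptable performance

    def is_specific(self) -> bool:
        # Vague verbs a reviewer would reject; every other field must be filled in.
        vague_verbs = {"understand", "appreciate", "develop", "know"}
        return self.action_verb.lower() not in vague_verbs and all(
            [self.behavior, self.context, self.method, self.threshold]
        )

# The refined critical-thinking outcome from option B, decomposed into parts.
slo = LearningOutcome(
    action_verb="evaluate",
    behavior="the validity of research arguments",
    context="identifying methodological strengths and weaknesses",
    method="4-point rubric",
    threshold="75% scoring 3 or higher",
)
print(slo.is_specific())  # a quick completeness check, not a substitute for expert review
```

Passing this check does not make an outcome good—only complete; the two-assessor question in step 5 is still the real test.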
Reviewer Feedback Received
"The program's compliance certification documents that assessment data was collected and shared with faculty. However, there is no evidence that this data led to specific changes in curriculum, pedagogy, resources, or program design. SACS requires not just data collection but evidence that the results were used to seek improvement."
Your assessment coordinator says the faculty "definitely discussed the data at the annual retreat." What must you do to satisfy this feedback?
A. Submit the retreat agenda as evidence that data was discussed.
B. Document the specific action taken as a result of the data, the rationale connecting the finding to the action, and the timeline for reassessment.
C. Note that outcomes were met, so no changes were needed—and document this conclusion.
D. Conduct the assessment cycle again with a different tool to generate new data.
What "Closing the Loop" Actually Requires as Evidence
1. A specific action statement. Not "we discussed results" but "we revised the ENGL 301 rubric to add a section on source evaluation after 42% of students scored below threshold on that criterion."
2. A rationale linking finding to action. Explain how the data led to the specific decision—this shows the cycle is intellectual, not mechanical.
3. A timeline for follow-up. "We will reassess this outcome in Spring 2026 using the revised rubric to determine whether the change improved student performance."
4. Even when outcomes are met: Document a deliberate decision to sustain the practice—"Results met threshold; we will continue current pedagogy and reassess in two years."
5. Curriculum mapping updates. If findings reveal a coverage gap, show the updated curriculum map or course sequence change that addresses it.
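A closing-the-loop entry can be treated as a record with required fields—finding, action, rationale, and reassessment date—and checked for completeness before it goes into the compliance file. The sketch below uses the ENGL 301 example quoted above; the record structure itself is a hypothetical convention, not a SACS-prescribed format.

```python
# One closing-the-loop record; values reuse the illustrative ENGL 301 example.
loop_record = {
    "finding": "42% of students scored below threshold on source evaluation",
    "action": "Revised the ENGL 301 rubric to add a section on source evaluation",
    "rationale": "The criterion with the largest gap had no dedicated rubric row",
    "reassess": "Spring 2026, using the revised rubric",
}

# A record is reviewer-ready only when every required field is non-empty.
required = {"finding", "action", "rationale", "reassess"}
missing = required - {k for k, v in loop_record.items() if v}
print("complete" if not missing else f"incomplete: {sorted(missing)}")
```

Even a "results met threshold" cycle should produce a record like this, with the action field documenting the deliberate decision to sustain current practice.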
Reviewer Feedback Received
"For the past four assessment cycles, this program has consistently reported that fewer than 60% of students meet the threshold for the professional writing outcome—below the program's own stated acceptable level of 75%. Despite this persistent gap, no documented changes to curriculum or instruction have been implemented. This pattern raises serious concerns about the program's commitment to continuous improvement."
The department has been "planning to address this." What must the program demonstrate now?
A. Lower the threshold to 60% so the program is now "meeting" its own standard.
B. Document a specific intervention already implemented (or being implemented this term), with a clear timeline and plan to measure its effect in the next cycle.
C. Explain that writing is inherently difficult to assess and the outcome may not be appropriate for this discipline.
D. Request that this standard be reviewed by a different committee that might be more familiar with the discipline's norms.
When a Gap Persists: Required Steps
1. Implement something specific—now. A curricular change, a new assignment, a writing workshop, a prerequisite adjustment. The intervention must be documented and concrete.
2. Set a reassessment date. Commit to measuring whether the intervention worked in the next cycle. This shows good faith and a functioning improvement process.
3. If the threshold is truly wrong, justify the change. Lowering a threshold is acceptable if the program provides a reasoned argument tied to mission, discipline norms, or student population—not just to make a gap disappear.
4. Investigate root causes. Is the issue in prerequisite preparation? A single course? Adjunct instruction inconsistency? Alignment between the outcome and the courses that teach it? Document the diagnosis.
5. Never let a gap sit for more than two cycles without action. SACS views persistent inaction on documented gaps as evidence of a dysfunctional effectiveness process.
Reviewer Feedback Received
"The Financial Aid office has submitted satisfaction survey results showing an average score of 3.8 out of 5.0 for three consecutive years. While satisfaction data is valuable, this office has not identified specific expected outcomes tied to its core functions—processing accuracy, time-to-award, appeal resolution timelines, or compliance rates. SACS Standard 7.3 requires that administrative units identify and assess meaningful outcomes, not only satisfaction."
The Financial Aid director says, "We're not an academic program. What outcomes are we supposed to have?" How do you respond?
A. Explain that satisfaction surveys are the appropriate measure for a service unit and the scores are above average.
B. Work with the director to identify 3-5 process and quality outcomes specific to the unit's core functions, set targets, and build a measurement plan.
C. Request that Financial Aid be categorized as a student support service under 8.2.c instead of 7.3 to use different criteria.
D. Add a second satisfaction survey with more detailed questions to generate richer data.
Appropriate Outcomes for Administrative Units (Examples)
1. Financial Aid: 95% of complete applications will receive an initial aid offer within 10 business days; error rate on awards below 2%; 90% of appeals resolved within 30 days.
2. Facilities: Reduce energy consumption by 5% annually; 90% of work orders closed within SLA timeframe; capital projects completed within 5% of approved budget.
3. Registrar: 99% accuracy on transcripts; degree audits completed within 5 days of request; graduation clearance errors below 1%.
4. HR / People Operations: Time-to-hire for faculty positions under 90 days; onboarding completion rate above 95%; training compliance rate above 98%.
5. IT: System uptime above 99.5%; help desk tickets resolved within 24 hours for priority-1 issues; data breach incidents at zero.
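Administrative-unit outcomes like those above mix "at least" targets (completion rates) with "at most" targets (error rates), so a unit's tracking sheet needs to record the direction of each target, not just the number. Here is a minimal Python sketch of that check; the target and observed values are hypothetical illustrations in the spirit of the Financial Aid examples, not real data.

```python
# Hypothetical targets for a Financial Aid unit: (target value, direction).
# "min" means observed must be at least the target; "max" means at most.
targets = {
    "initial aid offer within 10 business days": (0.95, "min"),
    "award error rate": (0.02, "max"),
    "appeals resolved within 30 days": (0.90, "min"),
}
# Hypothetical observed values for one assessment cycle.
observed = {
    "initial aid offer within 10 business days": 0.97,
    "award error rate": 0.03,
    "appeals resolved within 30 days": 0.88,
}

def outcome_met(name: str) -> bool:
    """Compare an observed value to its target, honoring the direction."""
    target, direction = targets[name]
    value = observed[name]
    return value >= target if direction == "min" else value <= target

for name in targets:
    status = "met" if outcome_met(name) else "GAP -> document an action"
    print(f"{name}: {status}")
```

The point of the direction flag is that a 3% error rate against a 2% ceiling is a gap even though the number is "higher than target"—exactly the kind of outcome a satisfaction survey alone would never surface.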