7 Tips for Conducting Assessment Properly

Sebastián Pulido

Quick take: How can outcomes be assessed effectively? How can technology facilitate this process for faculty? How can outcomes assessment lead to higher quality in education? Conducting assessment properly can be challenging for instructors, but a few tips can help.

When approaching student grading and outcomes assessment, many questions arise as to how these processes can be conducted effectively and without interference. The purpose of outcomes assessment in education is to define outcomes, evaluate how they are delivered, and act on the results to improve. So, what are the best ways to approach evaluation at different levels within institutions? How does technology help? What are some of the risks and benefits?

E-Learn spoke with Dr. Karen Yoshino, a senior member of the Enterprise Consulting team at Blackboard, to gain a deeper understanding of the most relevant do’s and don’ts of assessment, especially in higher education. Dr. Yoshino helps higher education institutions meet their academic goals through student and program evaluation, data analytics, and research. Her consulting experience has given her a deep understanding of the higher education landscape, with a focus on competency-based education, student outcomes assessment, and accreditation.

1. Take outcomes assessment out of the classroom

For outcomes assessment, a simple question must be answered: did the program or institution deliver on the expected outcomes? To answer it, only mature student work needs to be assessed – typically a sample of papers from students who are about to graduate. Through careful artifact selection, institutions can use the same artifacts to score multiple outcomes. For example, a capstone project should be able to produce assessments for critical thinking, written communication, and information literacy.

Oftentimes, institutions argue that the evaluation process is redundant and try to tackle grading and assessment at the same time. This is misguided for several reasons. First, a grade typically combines competencies like writing, critical thinking and disciplinary content. You can begin to see how, in this scenario, outcomes assessment begins to interfere with classroom grading. Second, this approach generates unnecessary work if you consider multiple faculty across the institution evaluating all their students. In a separate, secondary outcomes evaluation process, by contrast, a small team of evaluators can divide a representative sample and complete the assessment for the whole institution in a few hours.
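To make the contrast concrete, here is a minimal sketch of that secondary process, assuming hypothetical artifact IDs, evaluator names, and a simple random sample; it illustrates the division of labor, not a prescribed workflow:

```python
import random

def assign_sample(artifact_ids, evaluators, sample_size, seed=42):
    """Draw a representative sample of artifacts and divide it
    evenly among a small team of evaluators."""
    rng = random.Random(seed)  # fixed seed keeps the sample reproducible
    sample = rng.sample(artifact_ids, sample_size)
    # Round-robin assignment: each evaluator scores a share of the sample.
    assignments = {name: [] for name in evaluators}
    for i, artifact in enumerate(sample):
        assignments[evaluators[i % len(evaluators)]].append(artifact)
    return assignments

# Hypothetical example: 500 capstone projects, four evaluators, and a
# sample of 60 artifacts scored against multiple outcome rubrics.
capstones = [f"capstone-{n}" for n in range(500)]
team = ["Evaluator A", "Evaluator B", "Evaluator C", "Evaluator D"]
print(assign_sample(capstones, team, sample_size=60))
```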



2. Keep assessment focused on how to improve delivery on the outcome

In the outcomes assessment process, graded student work is systematically collected; the population is then sampled, and the sample is scored with a common rubric. Faculty aggregate the rubric scores to evaluate program performance, then collectively develop changes to either the design or delivery of the curriculum to improve outcomes institution-wide.

An example of this process is an institution that has scheduled the evaluation of critical thinking for the upcoming year. Critical thinking is defined as: Learners identify issues, recognize context, take perspective, and evaluate assumptions, evidence, and implications related to specific issues and topics. After collecting student work, drawing a sample, and scoring it, the institution brings the faculty together to interpret the aggregated results.

When the faculty see that the data reflect a low score for “evaluate assumptions,” for instance, the conclusion would be that the institution is not delivering on that specific criterion. The faculty would then devise strategies to strengthen this skill and implement them across the institution. They might, for example, agree that in all courses a specific exercise will be introduced to develop students’ ability to evaluate underlying assumptions in a given scenario. This method could be used effectively in a program in scientific or social-scientific research. To disseminate new practices effectively, institutions should not tackle more than one improvement strategy per outcome per year.
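As a hedged illustration of that aggregation and interpretation step, the sketch below averages rubric scores per criterion and flags any criterion falling below a chosen threshold; the 1–4 scale, the scores, and the cutoff are assumptions for illustration only:

```python
from statistics import mean

# Hypothetical rubric scores (1-4 scale) for a sample of artifacts,
# keyed by the criteria in the critical thinking definition above.
scores = {
    "identify issues":      [3, 4, 3, 3, 4],
    "recognize context":    [3, 3, 4, 3, 3],
    "take perspective":     [3, 4, 3, 4, 3],
    "evaluate assumptions": [2, 1, 2, 2, 2],  # the lagging criterion
}

THRESHOLD = 2.5  # illustrative cutoff for "delivering on the criterion"

for criterion, values in scores.items():
    avg = mean(values)
    status = "needs improvement" if avg < THRESHOLD else "ok"
    print(f"{criterion:22} {avg:.2f}  {status}")
```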


3. Use standard rubrics with descriptions for each performance level to define each outcome

Evaluating learning outcomes relies on numbers, and these numbers come from rubrics, which are quantitative expressions of quality. The rubric evaluation process is indeed qualitative, but numeric values are applied to those qualitative judgments.

Test scores from comprehensive exams can be used, provided the test items and score values are aligned to specific criteria, in order to achieve sufficient granularity to understand how to improve programs. For example, if the average grade on a physics final exam is 75%, or a C, nothing can be done by way of analysis and program improvement, because it is impossible to know what exactly needs to be improved. In contrast, evaluation rubrics are clear as to what composes each criterion and make it possible to assign numbers to each one. With this granularity, more precise actions can be taken, because failures are localized. For example, institutions can consider implementing more critical thinking activities in their courses if the rubrics show that component is lagging.
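A small sketch with made-up numbers shows how a single average conceals exactly what a per-criterion breakdown reveals:

```python
# Made-up component scores (percentages) behind one cohort's exam results.
components = {
    "disciplinary content": 90,
    "problem setup":        85,
    "critical thinking":    55,  # the weak spot a single grade conceals
    "written explanation":  70,
}

# The single average grade: 75%, a C -- no actionable detail.
overall = sum(components.values()) / len(components)
print(f"Average exam grade: {overall:.0f}%")

# Per-criterion scores localize the failure and point to a fix.
for criterion, score in sorted(components.items(), key=lambda kv: kv[1]):
    print(f"  {criterion:22} {score}%")
```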

4. Survey results have a very limited and tangential role in outcomes assessment

Surveys are classified as indirect methods of assessment. The self-reporting nature of these instruments makes them subject to politics, emotions, and other factors that should not influence the processes or judgments about what is working and what is not. That said, certain survey items might provide information on why results turned out the way they did. For the most part, outcomes assessment uses direct methods, meaning tests and rubric evaluations.

5. Technology is a great opportunity schools should leverage

Generating outcomes assessments on paper is very burdensome, but there are now technological means to gather artifacts and samples, distribute them to evaluators, provide electronic rubrics, monitor the evaluation process, and generate reports. Technology and practice can present challenges, however. When an institution’s processes are relatively new and the institution is still unclear about what quality, measurement, and improvement look like, it may expect the technology to help answer those questions, which is not a realistic expectation. Institutions should identify best practices and set up their implementation process before deciding on which technology best suits their needs.
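As a rough sketch of how such a platform might stitch those steps together (gather, sample, distribute, score, report), assuming hypothetical function names rather than any vendor’s actual API:

```python
import random

def score_artifact(artifact, criterion):
    # Stand-in for a human evaluator applying an electronic rubric (1-4).
    return random.randint(1, 4)

def run_assessment_cycle(artifacts, evaluators, rubric, sample_size=20):
    """Illustrative end-to-end cycle: sample, distribute, score, report."""
    sample = random.sample(artifacts, sample_size)    # gather a sample
    assignments = {e: sample[i::len(evaluators)]      # distribute work
                   for i, e in enumerate(evaluators)}
    scores = {criterion: [] for criterion in rubric}
    for evaluator, batch in assignments.items():      # score the sample
        for artifact in batch:
            for criterion in rubric:
                scores[criterion].append(score_artifact(artifact, criterion))
    return {c: sum(v) / len(v) for c, v in scores.items()}  # report

report = run_assessment_cycle(
    artifacts=[f"paper-{n}" for n in range(200)],
    evaluators=["A", "B", "C"],
    rubric=["critical thinking", "written communication",
            "information literacy"],
)
print(report)
```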

6. Write down clear processes for staff and faculty to follow

Outcomes assessment can lead to a higher quality of education while relieving institutions of bureaucratic burdens, but the processes must be laid out clearly. For instance, a glossary of terms is in order to unify language. Governance oversight also has to be assured, together with the definition of key assessment elements and the templates to carry the process out.

Some institutions may be ready to conduct solid outcomes assessment without even knowing it. In these cases, the institution already has every outcome defined to maintain compliance with government requirements. With outcomes already defined, it is an easy transition to put that static list of outcomes to work through the assessment process.

7. Link improvement strategies to the assessment data

In many cases, data are presented together with a list of improvement strategies that have no relationship to the data. Often, programs will propose strategies based on a general and decontextualized idea of assessment, without considering whether those proposals match the reality shown by the data. This is a huge missed opportunity to improve the quality of education, and it is a recurrent scenario in many institutions. To manage this, leaders can develop processes to educate their communities about assessment and, more importantly, instruct them on how to use data to improve educational practices.
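One lightweight way to enforce that linkage is to require every proposed strategy to name the data point it responds to; in this sketch, the flagged criteria and the proposals are hypothetical:

```python
# Hypothetical aggregated results: criteria flagged as underperforming.
flagged = {"evaluate assumptions", "information literacy"}

# Each proposed strategy must declare which assessment result it targets.
proposals = [
    {"strategy": "assumption-analysis exercise in all courses",
     "targets": "evaluate assumptions"},
    {"strategy": "adopt a new textbook series",
     "targets": None},  # not tied to anything in the data
]

for p in proposals:
    if p["targets"] in flagged:
        print(f"accepted: {p['strategy']}")
    else:
        print(f"rejected (no link to assessment data): {p['strategy']}")
```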



Karen Yoshino, Senior Member of the Enterprise Consulting team at Blackboard.


