Catholic Distance Learning Network

Module 4 of 4 (Now available in Spanish; translation by Dr. Carlos Miranda)


In this module, we will examine transactive assessment, explore surveys, questionnaires, and ePortfolios, discuss goals and objectives, review Brookfield’s Critical Inquiry Questionnaire, and use a template to create a syllabus. 

ASSIGNMENTS/ACTIVITIES OVERVIEW (details are listed below)


This document contains all the information needed to successfully complete the fourth of four modules for the online certification. 




As late as the 1980s, assessment was focused on the presence of institutional resources – whether enough books of the right kind were available in the library, whether enough professors of sufficient diversity to make a viable faculty were available for teaching and research, and whether a sufficient budget was available to keep the school running – but by 1996, according to Bill Miller, Director of Accreditation and Institutional Evaluation at the Association of Theological Schools, "a greater emphasis [was placed] on analysis of the data and decision-making within the institution" (italics his). In any reaccreditation recommendation, Miller adds, the Commissioners "want to know whether the school understands what is happening. What is quality in a school? What [does a school] need to sustain and nurture it? What will [a school] do and how do[es it] know when [it's] successful in doing what [it] do[es]?" (See the Online Self-Study Workshop hosted by ATS.)

An answer lies in outcomes-based assessment, for we measure what we value. If our primary product is an educated student, then his or her performance outcomes become the "product" upon which we can ground an understanding of our program's quality. Programmatic evaluation, therefore, is substantively tied to student assessment.


Assessment is the "systematic, on-going, iterative process of monitoring learning in order to determine what we are doing well and what we must improve." Assessment involves "observing, describing, collecting, recording, scoring, and interpreting information." Assessment is effective when it "is student centered … congruent with instructional objectives … relevant … comprehensive … clear (in purpose, directions, expectations) … objective and fair … simulates 'end' behavior/product/performance … incites active responses … shows progress/development over time."

What's the difference between assessment and evaluation? "The term assessment refers to the systematic gathering of information about component parts of the thing to be evaluated. The evaluation process is broader than assessment and involves examining information about many components of the thing being evaluated and making judgments about its worth and effectiveness."

"What's the difference between assessment and grading? When instructors assess student performance, they're not placing value or judgment on it — that's evaluating or grading. They're simply reporting a student's profile of achievement."

"42.7% of all statistics are made up on the spot." From the comedy of Steven Wright

Why assess?

Educators know how to assess and know that assessing is important – but educators may not realize that if they were to improve their assessing skills, their teaching potential would also improve, which would help their students become better learners and thinkers. According to Frederikson and Collins, "Assessment should not simply monitor achievement or report scores. Whether we are assessing to report to others or for ourselves, …assessment should lead to instructional action."

One method to improve teacher assessing skills is to use technology to assess (e.g., using Excel to create a grade book, Word to create a quiz, or a website to create a rubric). The technology is readily available and can help predict results and increase both student interest in learning and teacher interest in teaching. Technology can lessen the burden of test giving, test taking, and grading, and it may offer new and imaginative ways to measure student performance. (See the 2010 Revised ATS General Accreditation Standards regarding the integration of appropriate technologies in theological teaching and learning.)
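As a simple illustration of the grade-book idea mentioned above, the same weighted-average arithmetic a spreadsheet performs can be sketched in a few lines of Python. The student names, categories, and weights here are hypothetical, not part of the module:

```python
# Minimal grade-book sketch: compute weighted course grades.
# Category names, weights, and scores are illustrative only.

weights = {"quizzes": 0.30, "discussion": 0.20, "final_project": 0.50}

scores = {
    "Student A": {"quizzes": 88, "discussion": 95, "final_project": 91},
    "Student B": {"quizzes": 74, "discussion": 82, "final_project": 80},
}

def course_grade(student_scores, weights):
    """Weighted average of category scores on a 0-100 scale."""
    return sum(student_scores[cat] * w for cat, w in weights.items())

for name, s in scores.items():
    print(f"{name}: {course_grade(s, weights):.1f}")
```

A spreadsheet formula such as `=SUMPRODUCT(...)` does the same computation; the point is only that the tool, whichever one is chosen, keeps the weighting explicit and repeatable.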

Assessment is changing!

As Bill Miller noted above, assessment changed from resource-based assessment to outcomes-based assessment by the mid-90s of the 20th century. Now that we're in the second decade of the 21st century, we can look back over the past decade and a half and see that outcomes-based assessment itself has undergone change – we are no longer as concerned with what a student knows about a given course of study as we are with how that student is able to practically apply it.

A call for this shift in concern was sounded as early as 1998 by, among others, Norton and Wiburg when they wrote in Teaching with Technology that "[a]ssessment strategies need to shift from the assessment of a student's knowledge about a subject to an assessment of a student's ability to reason, think critically, and solve problems." What they were really calling for was an applications-based assessment process derived not from gauging the student's ability to process data transmitted from the instructor but from the capacity of the student to apply that data in a meaningful way. The pursuit of demonstrations of such applications evolved in online teaching and learning into transaction-based activities that engaged students and instructors through the materials made available to them. The new teaching was, therefore, transactive rather than transmissive in nature, so the new assessment model had to also become transactive.

In transmissive teaching and learning environments, assessment was considered to transpire after instruction, usually in the form of tests over lecture notes, course readings, and class discussions. In transactive teaching and learning environments, assessment is formative and an important part of lesson planning and implementation.

Transactive assessment cannot be delayed; instead, activities and projects that demonstrate desired outcomes and measure skills and knowledge should be included from the start.

Under a transactive assessment model, students are not only co-producers of their own teaching and learning environments, but they also become co-assessors of their own outcomes. When students have a say in their own assessment, they take ownership, perform better, develop pride in what they do, and become better communicators who get in the habit of reflecting upon their own work. Students are more likely to meet the conditions for favorable completion of an assignment if they have a hand in developing the criteria for that assignment. Self-assessment helps students become focused learners who are able to think about what they have learned, question what is and isn't clear, and think critically to evaluate their work. This leads to students who become self-directed, active learners instead of passive listeners.


Survey Monkey and Formsite are two websites that allow you to create forms, evaluations, and surveys directly online. Both are free.

You can generate tests, evaluations, and surveys using Blackboard test and survey managers.


A transactive assessment model does not mean that students grade themselves. Grading is a form of evaluation meted out by the instructor based on a student's ability to respond against a given set of criteria provided in advance in the form of a grading rubric. Grades, furthermore, may reveal things that assist in transactive assessment, but they do not substitute for the process of transactive assessment. Among the things that grades may reveal are whether the content is presented properly, whether the lectures are relevant, whether the test material is applicable, and/or whether students are doing more than memorizing. In online teaching and learning environments, grading ought to be considered during all aspects of a lesson – such as its goals, objectives, planning stages, and method(s) of delivery – and in all aspects of course planning – such as textbook selection, lecture material, activities, projects, assignments, planning stages, and method(s) of delivery.

The grading process should, therefore, be ongoing as a factor of lesson and course evaluation, not just as a final product. (An example would be the development of a research project: instead of simply assigning a grade to the final product on its due date, teachers can create incremental due dates, factoring grades for various peer-reviewable components of the project. John Paul Heil and Anne Marie Kitz of Kenrick-Glennon Seminary in St. Louis accomplished something like this in their joint web-based exegetical research projects from fall 2002 to spring 2006.) If grading is included in all aspects of a lesson, educators can be assured that what they are doing is working and will know when and what adjustments need to be made for future classes.
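The incremental approach described above is, at bottom, simple arithmetic: each peer-reviewable component carries a share of the project grade, recorded at its own due date. A minimal sketch, with hypothetical component names and weights:

```python
# Incremental project grading sketch: grades are recorded at each
# component's due date rather than only on the final product.
# Component names, weights, and grades are illustrative only.

components = [
    ("proposal",      0.15, 92),   # (name, weight, grade earned)
    ("annotated_bib", 0.20, 85),
    ("draft",         0.25, 88),
    ("final_paper",   0.40, 90),
]

# Weights should sum to 1.0 so the result stays on the 0-100 scale.
assert abs(sum(w for _, w, _ in components) - 1.0) < 1e-9

project_grade = sum(w * g for _, w, g in components)
print(f"Project grade: {project_grade:.1f}")  # Project grade: 88.8
```

Because each component is graded when it is due, the instructor sees where students are struggling while there is still time to adjust the lesson, which is the point of treating grading as ongoing.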


(Educator created / Student created)

Assessment situations might include the following:


Good feedback practice…


Self-assessment is meaningful for students endeavoring to develop competencies in the production and dissemination of their own work, yet the skill has to be cultivated through encouragement and instruction. An assumption that precedes any effort of an instructor to assist students in the development of self-assessment strategies is that students are capable of doing so competently.

We have to make this assumption in their favor if we expect them to one day be able to operate their own ministries without teacher supervision, and it is better they learn such strategies under our supervision than otherwise.

The entire weight of Malcolm Knowles's writings on andragogy and adult learning also falls into play here. Knowles makes the following assumptions in The Adult Learner (6th edition):

If we accept these assumptions, then we have to also give our students an opportunity to build on them. --The Four Steps of the Student Learning Assessment Loop by William R. Myers (p. 20)


Grading rubrics are quick ways to show students what they are doing or not doing within a given assignment. Each rubric is tailored to the specific activity for which it is used. Below are two grading rubrics – the first was created for a discussion board prompt asking faculty to identify one activity in which they engage in transmissive evaluation and explain how they might change their evaluation method into one that is transactive while the second is specific to peer responses on a discussion board.

A rubric is helpful to both instructor and student: it is a simple way to set up grading criteria for assignments, and it can also be used as a self-assessment tool or checklist. "A rubric defines in writing" what is expected to earn a "specific grade."

Short Activity Assessment Rubric:

Criterion 1: Matching activity to transmissive assessment method
  1 point:  neither the activity nor the method to assess is clear from the description provided
  2 points: either the activity or the method to assess is clearly stated, but not both
  3 points: both the activity and the method to assess are clearly stated

Criterion 2: Matching activity to transactive assessment method (contrast)
  1 point:  neither the activity nor the method to assess is clear from the description provided, and no real explanation of the difference between the transmissive and transactive methods is provided
  2 points: either the activity or the method to assess is clearly stated, but not both, and the explanation of the difference between the transmissive and transactive methods is unclear
  3 points: both the activity and the method to assess are clearly stated, and the explanation of the difference between the transmissive and transactive methods is clear

Total Points

Student Response Assessment Rubric:

Criterion: Student responses
  1 point:  response added nothing new, such as simply agreeing with the original posting
  2 points: response added something new by way of parallel information or personal experience, but it did not indicate a clear understanding of the expressed viewpoint/summary
  3 points: response advanced the conversation, added parallel information or personal experience, and indicated a clear understanding of the expressed viewpoint/summary

Total points
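Rubrics like the two above are essentially lookup tables, so a rubric can be represented directly as data and scored mechanically. A minimal sketch (the level descriptions are abbreviated from the rubrics above; the criterion keys and the scoring function are hypothetical):

```python
# A rubric as data: each criterion maps a point level (1-3) to its description.
rubric = {
    "activity_to_method": {
        1: "neither the activity nor the assessment method is clear",
        2: "either the activity or the method is clearly stated, but not both",
        3: "both the activity and the method are clearly stated",
    },
    "transmissive_vs_transactive": {
        1: "no real explanation of the difference is provided",
        2: "the explanation of the difference is unclear",
        3: "the explanation of the difference is clear",
    },
}

def total_points(ratings, rubric):
    """Sum the point levels, validating each rating against the rubric."""
    for criterion, level in ratings.items():
        if level not in rubric[criterion]:
            raise ValueError(f"invalid level {level} for {criterion}")
    return sum(ratings.values())

score = total_points(
    {"activity_to_method": 3, "transmissive_vs_transactive": 2}, rubric
)
print(score)  # 5
```

Keeping the rubric as data rather than prose makes it easy to reuse the same criteria for self-assessment: students can look up the description for each level and decide where their own work falls before the instructor does.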

Visit RubiStar's website. While it is primarily designed for K-12 teachers, the demonstrations it provides might be helpful when creating a rubric for higher education. Once on the site, scroll down and click on one of the blue buttons for a rubric template, or on Create at the top. (You may have to register, but it is free.)


Portfolios are collections of student work over time (within a given course or across a number of courses within a given program of study). Each portfolio is a demonstration of student accomplishment in responding (more narrowly) to the course goals and (more broadly) to the program goals. In essence, the portfolio is hard evidence of the student's contribution to and experience of a given course or program and is considered a direct means of programmatic assessment.

An ePortfolio might be created using a PowerPoint template or an online application (sometimes for a fee); if not created electronically, a portfolio might be hard copies in a binder.

For a detailed description of the values and norms inherent in the use of portfolios, see Truman State's Portfolio Assessment program. While a portfolio provides meaningful data concerning a student's growth within a program, it is really geared to assess the teaching and learning environment within which that student has been formed.

The ePortfolio differs from the text-based portfolio in a couple of significant ways - one, in the way it is generated, and, two, in the way it is distributed. In the first case, the ePortfolio is developed on a web page, which enables it to take advantage of appropriate use of multimedia in the packaging of course content. Students can provide audio or video reflections on those areas of their work that they selected as representative of their experience within a course or program in light of the course or program goals they have to prove they have addressed. In addition to providing links to the work they upload to their sites, students may also provide interpretive contexts for external links to various entities online that supported their work. In the second case, the ePortfolio is universally accessible to those responsible for its review, and this includes a student's peers and a program's review board. The semi-public nature of the ePortfolio increases student attentiveness to its development as a self-evaluation designed to be used as an institutional assessment tool.

When we ask our students to develop ePortfolios, we are actually asking them to do two things - the first is to demonstrate a facility with the various technologies necessary to post materials online, and the second is to collate examples of their learning that evidence their responsiveness to the course (and, ultimately, program) goals.

At the beginning of a given course (or program), then, students have to be told that this portfolio will be a portion of their course (or program) requirement and that it will merely entail the gathering and packaging of evidence specific to the course goals. Students will pay much more attention to their course goals (and their relation to the program goals) if they have this in mind, and they'll actually be the ones collecting much of the assessment data for the instructor (or dean) to use within the context of course or institutional assessment. Students will also develop a stronger understanding of the relevance of any given course to the overall program in which they're involved and of the overall program to the vocation for which they're studying.
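Since an ePortfolio lives on a web page, its core structure can be as simple as an index that links each piece of student work to the course goal it evidences. A minimal sketch in Python that generates such a page (the entry titles, file names, and goals are entirely hypothetical):

```python
# Minimal ePortfolio index sketch: an HTML page linking selected work
# to the course goals it evidences. All titles, files, and goals are
# hypothetical placeholders.

entries = [
    ("Exegesis paper", "exegesis.pdf", "Goal 1: interpret primary texts"),
    ("Recorded homily", "homily.mp4", "Goal 3: communicate effectively"),
]

items = "\n".join(
    f'  <li><a href="{path}">{title}</a> ({goal})</li>'
    for title, path, goal in entries
)

page = (
    "<html><body><h1>ePortfolio</h1>\n<ul>\n"
    + items
    + "\n</ul></body></html>"
)
print(page)
```

Whether the page is hand-built, generated, or assembled in a commercial ePortfolio application, the useful discipline is the same: every artifact is explicitly tagged with the course or program goal it demonstrates.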


(A completed syllabus will be the final project for the course)

A syllabus is a formal table of contents for a course and will most likely be students’ first contact with you as an instructor (the welcome document will be the next).


Sometimes goals and objectives are used interchangeably; this depends on the syllabus criteria of the teaching institution. When writing goals and objectives in the syllabus, it is important to note that goals are where we want to be, and objectives are the methods or steps we should take to get there. Goals and objectives tell students what you want from them and what the course will offer. Use active, measurable verbs such as 'explain' and 'demonstrate,' not words like 'learn.'

Visit the WriteExpress site for a list of active verbs.

New terminology has evolved for writing objectives relative to technology:
'Navigating,' 'wrap text,' 'use the menu bar/tool palette,' 'import' or 'export' graphics/pictures, 'attach documents,' and 'cut, copy, paste' are some new additions to the action-verb list for writing realistic objectives pertaining to technology. The general rule, of course, is to choose words that are appropriate to the action when writing objectives that involve technology.

Think of what it is you want your students to do during the course. A general rule is to examine the activity or software that online students are using, decide on objectives and learning outcomes, choose a tech tool and method to assess, and then choose wording.


The standards used in teaching depend upon the teaching institution.

Macro standards like those provided by ISTE or by the Association of Theological Schools help orient our decisions about what goes into a given syllabus because, ultimately, we want to be able to demonstrate through the courses we teach that we are meeting the standards of the agencies that accredit us.

Given what you now know about how these standards can affect your course design, you will want that understanding reflected in the kind of syllabus you put together, derived not from superfluous technological integration but from a very real and positive response to the nature of the course you teach.


Personal experience: one fall, a single eight-week course of 16 students generated nearly 350 personal emails with questions to the instructor during the first week of class. Many could have been avoided if the instructor had posted clear instructions from the start (remember that each email should be answered personally).


Submit each of the following using the module 4 assignment post


Suggestions – use the module 4 comment post for the following:


Last Updated May 25, 2013.  © Catholic Distance Learning Network.
"The program was excellent. It was a very generous idea to make it available to students. It provides a clear understanding of the advantages and limitations of the online environment. Anyone interested in learning more about distance learning would benefit greatly." - Carlos Miranda, Ph.D. - certification completed in winter 2012 through the graduate program at Holy Apostles College & Seminary