Category Archives: Assessment and Data

Continuous Improvement – Beyond Busywork

Opening note:  “OPAR” and “OPARty” have become part of the lexicon of Academic Affairs at FSU.  “OPAR” refers to the Operational Plan and Assessment Report, while “OPARty” – an attempted play on words combining OPAR and party, as if this process could be fun – refers to the annual meeting in which each department’s OPAR is evaluated.  (See attached rubric.) OPARRubric The OPAR score becomes part of the CIR – another common acronym – the Continuous Improvement Report, which enables academic departments to earn additional funds based on their performance on 10 metrics.

The commitment to continuous improvement is one of the few non-negotiables in Academic Affairs.  I am willing to discuss and debate virtually any issue and entertain alternate solutions to any problems — except for my expectation that as individuals, departments, and a division we will strive to get better and better in all that we do.

Few would object to continuous improvement as an abstract ideal.  Like love and truth and justice and the Golden Rule – who will say they’re against it?

Affirming continuous improvement in the abstract is not the problem.  Operationalizing it — infusing it into the way we do business on a daily basis — is the challenge.  OPARs enable us to operationalize continuous improvement by bringing together the essential components of continuous improvement — goals, measures, and use of results — into an integrated plan.

Continuous improvement is always relative to specific goals which, for us, are derived from the university mission statement and strategic plan, namely, to improve student learning, increase the number of graduates, provide effective educational support, conduct research, serve the university and extended communities, achieve operational efficiency and fiscal sustainability, and increase external funding.  OPARs explain how each department will help accomplish one or more of these goals.

OPARs delineate how we will measure our progress.  It is one thing to talk about continuous improvement or to strive for it, but it is a qualitatively different thing to be able to empirically demonstrate and measure improvement.  To measure improvement, we must set meaningful targets and have reliable and relevant data.

Measuring progress, however, is a waste of time if we do nothing about what we learn from these measures.  Our OPARs enable us to develop and document the strategies we implement to use results for improvement.

After more than six years of OPARties – previously under the leadership of Dr. Marion Gillis-Olion and now under Dr. Rollinda Thomas – our use of OPARs as a vehicle for continuous improvement has vastly increased.

Yet, progress remains uneven. In 2009-10, the first year we developed OPARs, virtually all of us – including me – viewed OPARs as busywork, that is, meaningless activity with little connection to the work we do on a daily basis. When viewed as busywork, the OPAR is a set of templates to be filled with something – information, narrative, and numbers – whether it makes sense or not.

Most often, compliance with SACS was seen as the driver of this activity.

For many departments, OPARs have evolved into much more than busywork. These departments are marked by several important characteristics.

  1. The OPAR is reviewed periodically throughout the year and updated as needed. It is not developed at the beginning of the year and reopened a week or two before the OPARty just to have something to present at the OPARty.  It is discussed throughout the year; the implementation of strategies is monitored and documented.
  2. The OPAR is not a one- or two-person project. All the members of the department are engaged at some level in collecting and analyzing data and developing strategies following from the data.
  3. The OPAR provides an occasion for deep and careful reflection on big questions such as, “Why does our department exist?” “What is our role in achieving the missions and goals of the university?” “What should our students be learning to prepare them for success in their careers and personal lives and as responsible citizens?”  “How will we know if students have in fact learned what we say they have learned?”  “What do our assessment results reveal about student learning, and what will we do about it?”
  4. The OPAR provides an important occasion for clarifying common goals and discussing ways of achieving them. Any successful organization has a sense of common purpose that is shared by its members.  The OPAR serves to delineate that common purpose in departments that use it most effectively.
  5. The OPAR encourages creativity and innovation. Defining goals, developing assessment tools and measures, translating assessment results into the strategies for improvement, and other similar tasks require creative approaches and innovative solutions. We cannot do things in the same manner year after year and hope to improve.
  6. The OPAR generates a “hunger” for good data. These departments seek to obtain relevant, accurate, comprehensive data to measure improvement. Consider the example of critical thinking, one of the most commonly cited outcomes of higher education.  Departments committed to developing critical thinkers do not just talk about it; they develop meaningful assessments that can guide the improvement of these skills in our students.  The same can be said for all other learning outcomes we value. For departments committed to continuous improvement, data always initiate, rather than stop, dialogue.

Virtually all departments exhibit some of these attributes; very few exhibit all of them.  As we begin 2016-17, I encourage departments to consider the extent to which these attributes apply to them.  The most essential questions:  “Is your OPAR mainly busywork or a tool for continuous improvement?” “What will you do in response to your answer to this question?”

I hope this discussion itself promotes continuous improvement.

FSU Continuous Improvement Report Wins Innovation Award

On March 15, 2016, the American Council on Education awarded FSU the 2016 ACE/Fidelity Investments Award for Institutional Transformation. FSU competed in the category of institutions with enrollment of 5,000 or more.

The press release from ACE: FSU wins ACE Award

Here is the application we submitted: FayettevilleStateUniversity_ACEInnovationCompetition

Board of Trustees Report – March 2016: Making Sense of Graduation Rates

During the UNC Board of Governors meeting at FSU on March 4, several individuals made reference to retention and graduation rates.  I thought it would be helpful to give our own Board of Trustees a more comprehensive explanation of retention and graduation rates. I have attached the presentation with my notes.  (It is probably more than you care to know.)

As a starting point, consider this analogy.  Suppose you knew nothing whatsoever about golf and you overheard two people describing their most recent games.  One says he shot an 88, and the other says she shot a 79.  Who would you conclude had the better score?  It is a very weak analogy, I know; there are many differences between golf scores and retention and graduation rates. But my point is that when retention and graduation rates are discussed apart from any context, or with those who know little about how these rates are measured, the audience would probably be as confused about the meaning of these rates as the person overhearing the conversation about golf scores.

See the presentation here: AcademicAffairsUpdateWithNotesFinal