Utah Tuning Project

Randall S Davies and David Williams[*]

First publication online: 30 June 2015

Abstract: Tuning is a faculty-driven initiative designed to improve the quality of higher education by establishing transparent and fully assessable learning outcomes and proficiencies for degrees, discipline by discipline. Unlike many other initiatives in the United States, which function within an individual institution, the Utah Tuning Project involved all institutions of higher education within the state of Utah. The purpose of this paper is to document the findings from an evaluation of a multiyear project targeting four undergraduate degree programs involved in a tuning initiative. A summary of recommendations and best practices is provided, along with the challenges and benefits to individuals and programs engaged in this process.

Keywords: Tuning; learning outcomes; higher education; degree specifications; program development; program evaluation.

I. Introduction

Largely due to economic issues, higher education in the United States is currently in crisis.[1],[2] Expectations that society increase the number of individuals receiving a college education are often at odds with the financial support allocated to state-funded institutions through government budgets. Outcries over rising tuition costs (i.e., student debt) have sparked accountability concerns (e.g., completion rates). Public confidence in the ability of colleges and universities to adequately prepare graduates for their chosen careers has diminished greatly in recent years.[3],[4],[5] Prompted by accountability concerns, educators and invested stakeholders have suggested several ways learning in schools might be improved. And, for better or for worse, the relative autonomy teachers have had in determining what and how to teach has become increasingly constrained by state and federal mandates.[6],[7],[8]

Accountability mandates in public elementary and secondary schools are manifest in an increasing reliance on standardized testing.[9] Additionally, schools are under pressure to establish a common set of core standards to guide curriculum development and instruction.[10] At post-secondary institutions, accountability has typically focused on accreditation mandates, which require colleges and universities to establish learning outcomes.[11],[12] Faculty are then expected to assess students on the learning outcomes established for a course or degree;[13] this is intended to serve as evidence that graduates are prepared to enter the workplace and that a post-secondary education is worth the expenditure of time and money. Beyond creating learning outcomes, an additional concern for higher education, and the one addressed by the tuning initiative studied in this paper, is consistency among the learning outcomes established for similar degrees offered at different universities and colleges.

Funders of this project have asked whether universities and colleges offering a similar degree could agree on a common set of learning outcomes for students receiving that degree, regardless of where they receive it. Project administrators are also interested in identifying benefits that might be realized for students and institutions by engaging in such an endeavor. The purpose of this paper is to document the findings from an evaluation of a multiyear project targeting four undergraduate degree programs involved in a tuning initiative. A summary of recommendations and best practices is provided, along with the challenges and benefits to individuals and programs engaged in this process.

II. Utah Tuning Project Background

While tuning is an international phenomenon, the Utah Tuning Project was introduced to improve student learning by embedding tuning and tuning reference points (expressed in terms of learning outcomes and competencies at the subject-area level) within the academic culture and practices of those working at institutions of higher education across the state. In essence, the project endeavored to facilitate a systemic change at these institutions by clearly articulating a common set of expectations across the state for what students should know, understand, and be able to do upon completing a specific program of study or set of learning experiences. Most tuning projects in the United States tend to function within a single institution or program. The stated purpose of this initiative was to improve the quality of higher education by establishing transparent and fully assessable learning outcomes and proficiencies for degrees, discipline by discipline. The tuning project was designed to work in conjunction with other programs, including the Degree Qualifications Profile initiative and Utah’s Faculty Discipline Majors’ Meetings. The long-term objective of the Utah Tuning Project was that all disciplines would be tuned, and every student graduating with a degree in a tuned discipline would demonstrate mastery of all learning outcomes and competencies that the discipline’s tuning team had determined to be critical for work in that discipline.

III. Project Activities and Components

The central activity of this project is its tuning teams. For the past three years, four teams have operated in Utah: physics, history, elementary education, and general education mathematics. Physics and history had participated in an earlier pilot of the tuning process, so those two teams had already been functioning for two years before this project began. Tuning teams consisted of a representative from each of the public universities and colleges in Utah, in addition to some from private institutions. Because the physics and history teams had been tuning for over two years prior to this initiative, many of their members were well versed in the process; however, several team members representing the various institutions had only recently joined these teams, either replacing prior team members or representing institutions new to the tuning process. The elementary education (ELED) and general education mathematics (GE Math) teams began meeting in September of 2011. Eight institutions of higher education across Utah participated in this project to establish fully assessable, transparent learning outcomes and competencies for each degree or discipline (or set of learning experiences in the case of GE Math). Team members were expected to represent their institutions as liaisons and advocates for the tuning process in their departments.[14]

In addition to these tuning teams, a Utah Tuning Leadership Team was established consisting of five principal members and the external evaluators. The main function of this state coordinating committee was to facilitate and evaluate the success of tuning teams. The leadership team for the tuning project met regularly to discuss the progress of each team, plan next steps, and provide professional development opportunities aligned with the project goals (e.g., the Educated Persons Conferences held annually in Utah to address issues of learning in higher education). Each of the tuning teams had a chairperson with responsibilities for conducting team meetings and communicating with team members.

IV. Evaluation Activities and Context

The external evaluators for this project provided evaluation support using a developmental approach.[15] Developmental evaluation centers on situational sensitivity, responsiveness, and adaptation. It is particularly suited to this project given the socially complex nature of the initiative and the participants’ expectation to continually adapt and revise tuning to meet the changing needs and purposes of specific groups. For this purpose the evaluators served as participating members of the Utah Tuning Leadership Team, providing consultation and evaluation expertise to principal stakeholders in their efforts to accomplish the goals of the tuning project.

The role of the evaluators was to consult and to conduct targeted data collection and analysis. Their activities included observing (and at times participating in) tuning meetings and conferences; conducting surveys, interviews, and focus groups; and counseling with the coordinating group. The first two years of the evaluation focused on tuning team activities and accomplishments, with efforts concentrated on observing tuning team meetings as well as interviewing and surveying participants. During this last year the evaluators examined the ways tuning was being implemented at each institution and the extent to which it was being shared with department faculty. This was done through an online faculty survey supplemented by on-site focus groups with faculty from each of the four disciplines.

V. Summary of Evaluation Findings

The contextual analysis below summarizes observations, analyzes data, and shares insight gained from the tuning project. These descriptions are based on team meeting observations, interviews, focus groups with participating team members, and survey results from faculty over a three-year period.

1. Claims, Concerns, and Issues

Tuning is a socially complex process of cultural change, which takes place in diverse settings and contexts. Thus, although some aspects of the process were similar, some experiences of the four participating teams were unique. A common term participants used for the tuning process was muddling: team members tended to muddle first with understanding what the tuning process would entail before attempting to accomplish and implement it. The process by which each team eventually arrived at its goal, and the concerns each experienced, differed from those of the other teams. Based on observations and interviews, the following claims, concerns, and issues have been raised.

1.1. Understanding Tuning

One of the common challenges of the tuning initiative was that the term tuning is not particularly intuitive. As individuals were initially introduced to the term, many were confused about its meaning. Their confusion was exacerbated by the need to understand related terms and definitions (e.g., learning outcomes, competencies, etc.). In fact, the primary concern of the GE Math and ELED tuning group members at their first meeting was that they did not initially grasp the meaning and purpose of tuning, nor did they understand the expectations for their group. In contrast, the history and physics tuning groups were more familiar with tuning, as most members of these teams had previously been involved in a two-year pilot study. Much of the initial meeting time for both GE Math and ELED was spent discussing tuning and explaining why institutions might benefit from it. Most participants indicated that, while they initially did not know the meaning of the term tuning, they attended the meeting because they had been invited (or assigned) to represent their faculty. Several indicated that they assumed the tuning leadership team would explain the process and expectations to them.

When history students were interviewed as part of a focus group in the second year of the evaluation, they too could not define tuning, although they offered some rather interesting possible meanings. These students had studied at institutions where tuning was in progress, but faculty at many of these institutions rarely used the term tuning with students, preferring instead to discuss tuning in terms of what students were expected to know, understand, and be able to do upon graduating from the program.

Table 1 presents the results of a survey item assessing individual faculty members’ understanding of tuning. Nearly half (44%) of those who responded to the survey either indicated they did not know what was meant by tuning or did not provide an answer to the question. Several non-respondents (nine individuals who did not complete the survey) emailed to say they did not know enough about tuning to respond, tending to defer to the tuning team member representing their department.

Faculty who were categorized as having only a partial understanding of the tuning process included those who seemed to confuse tuning with other initiatives that promote development of learning outcomes. For example, the Degree Qualifications Profile (DQP) and Liberal Education and America’s Promise (LEAP) initiatives both encourage educators to develop learning outcomes. As part of the accreditation process at most universities, each department receives a mandate to establish learning outcomes. Most often faculty could relate tuning to learning outcomes but were unaware or unconcerned about where the learning outcomes came from, why they might be important, or how they differed from other initiatives. Based on survey and focus group results, many individual faculty members seemed to have misconceptions about tuning and its purpose. A notable issue common to all groups was the need for tuning education.

Table 1

Faculty Understanding of Tuning by Tuning Group

Knowledge of Tuning    ELED         GE Math      Physics      History      All Groups
Knowledgeable          15 (38%)     14 (32%)      8 (29%)     12 (27%)      49 (31%)
Partial                13 (32%)      7 (16%)     10 (36%)      9 (20%)      39 (25%)
Don't Know              3 (8%)       6 (14%)      5 (18%)     11 (24%)      26 (16%)
No Answer               9 (22%)     17 (38%)      5 (18%)     13 (29%)      44 (28%)
Total                  40           44           29           45           158

Note: Cell values are numbers of respondents, with column percentages in parentheses. Overall response rate for the faculty survey was 52% (158 of 304).

1.2. Disseminating tuning information to faculty

A related issue, also tied to tuning education and common among disciplines, was the challenge of exchanging tuning information between the tuning teams and the faculty at their respective institutions. Tuning team members eventually came to understand the process well (i.e., what tuning should accomplish and why it might be beneficial), and for the most part they were pleased with what they had accomplished. A large majority of tuning team members also indicated that they valued having participated in the tuning exercise with colleagues from across the state and found the collaborative process extremely beneficial. This reaction was less apparent among the typical faculty members at each institution who were not involved in tuning.

An expectation of each of the tuning team members was to act as a liaison between the tuning team and faculty in their respective departments. Efforts to ensure that they did so met with several obstacles. Tables 2 and 3 present results from the faculty survey regarding how often faculty discussed tuning and learning outcomes as a department.

When asked how often departments took time to discuss tuning, about a third of the respondents did not answer this question; however, 80% of those who did answer indicated they discussed the tuning initiative as a department three to four times per year at most. In comparison, 44% of those who answered these questions said they discussed department learning outcomes at least once per month. The elementary education departments were the most likely to discuss tuning and learning outcomes, probably because teacher preparation programs are highly regulated by outside accreditation entities. Due to external pressure from regulators, teacher preparation programs are required to make explicit links between expected learning outcomes, instruction, and assessment evidence. Physics was the discipline least likely to report having spent time as a department discussing tuning and department learning outcomes.

Another constraint on the frequency of these departmental discussions was the individual who controlled the department meeting agenda. The more influential the tuning team member (e.g., a department chair or senior faculty member), the more likely tuning seems to have been addressed in meetings. Additionally, departments with only one or two faculty members (e.g., at two-year preparatory colleges) reported little or no need to discuss tuning. An additional challenge for the GE Math group was opportunity: GE Math is not a department with official meetings, and members reported that the learning outcomes they developed tended to be of interest to a variety of departments to varying degrees.

Table 2

Faculty Discussion Regarding Tuning by Discipline

Tuning Discussions     ELED         GE Math      Physics      History      All Groups
Never                   2 (5%)       7 (16%)      3 (10%)      5 (11%)      17 (11%)
1-2 per Year            3 (8%)       9 (20%)      8 (28%)     13 (29%)      33 (21%)
3-4 per Year           14 (35%)      7 (16%)      5 (17%)      6 (13%)      32 (20%)
Monthly                 3 (8%)       6 (14%)      1 (3%)       8 (18%)      18 (11%)
2-3 a Month             1 (3%)       1 (2%)       0 (0%)       1 (2%)        3 (2%)
Weekly                  0 (0%)       0 (0%)       0 (0%)       1 (2%)        1 (1%)
No Response            17 (43%)     14 (32%)     12 (41%)     11 (24%)      54 (34%)
Total                  40           44           29           45           158

Note: Cell values are numbers of respondents, with column percentages in parentheses.

Table 3

Faculty Discussion Regarding Department Learning Outcomes by Discipline

Learning Outcomes Discussions    ELED         GE Math      Physics      History      All Groups
Never                             0 (0%)       1 (2%)       2 (7%)       0 (0%)        3 (2%)
1-2 per Year                      3 (8%)       6 (14%)      5 (17%)      7 (16%)      21 (13%)
3-4 per Year                      5 (13%)      9 (20%)      5 (17%)     15 (33%)      34 (22%)
Monthly                          10 (25%)      8 (18%)      3 (10%)      7 (16%)      28 (18%)
2-3 a Month                       4 (10%)      4 (9%)       2 (7%)       3 (7%)       13 (8%)
Weekly                            1 (3%)       1 (2%)       0 (0%)       2 (4%)        4 (3%)
No Response                      17 (43%)     15 (34%)     12 (41%)     11 (24%)      55 (35%)
Total                            40           44           29           45           158

Note: Cell values are numbers of respondents, with column percentages in parentheses.

1.3. Promoting faculty and institutional buy-in

In addition to tuning education, faculty and department feedback and buy-in were significant issues, the ones most commonly mentioned by almost all tuning team members. There was a general perception that faculty in their departments were slow to provide feedback and in some instances resisted or expressed apathy regarding this endeavor. This varied by discipline. The following issues and concerns were shared by faculty as reasons they valued or resisted tuning efforts.

1.3.1. Benefits and value of Tuning

Several participants made claims about the value of tuning in addition to those articulated in the official purposes of the Utah Tuning Project. These include (1) the benefit of meeting with colleagues from other institutions to network and discuss common interests and issues; (2) the perceived benefit of personal learning; and (3) the fact that tuning is aligned with and useful for meeting accreditation requirements.

The benefits of collaboration among institutions and personal learning were mentioned primarily by tuning team members. Even if they felt the tuning initiative was not warmly welcomed by other faculty in their departments, tuning team members consistently expressed the belief that engaging in dialogue and networking with peers from other institutions had been extremely beneficial, especially for those from two-year colleges. Faculty from each of the two-year programs acknowledged that they could not address all the learning outcomes the group established for the degree. However, they recognized the benefit of being able to communicate to their students the expectations they would encounter when transferring to four-year programs to complete their degrees.

Several individuals also noted the relationship between tuning and accreditation expectations. Many of the departments used the learning outcomes and assessment alignment efforts from the tuning initiative as part of their accreditation documentation. The elementary education group consistently mentioned the benefit of reframing the state’s Utah Effective Teaching Standards (UETS), written for practicing (in-service) teachers, and aligning them with expectations for teacher candidates (pre-service teachers). Many of the standards for classroom teachers could not be applied to student teachers during field and practicum experiences, since candidates were not given opportunities to develop expertise with them. Several faculty felt that one of the greatest benefits of tuning was expressed by the supervising teachers expected to assess teacher candidates’ performance: having the UETS tuned for teacher candidates has allowed supervising teachers to make better (more valid) assessments of candidates’ knowledge and ability.

1.4. Concerns regarding standardization

Many have expressed concern that tuning could become a subtle form of standardization. The official tuning statement asserted that establishing a common set of learning outcomes and expectations for degree completion does not mean institutions must standardize the way they provide services or assess students. Reaction to the standardization concern varied by discipline.

Many physics faculty seemed comfortable with standardization, which they believed fit nicely with the nature of science as a body of knowledge. Some participants said they accepted standardization as an important goal of tuning for their discipline. They saw value in having departments across the state teach a common set of science concepts, and they expected students who receive a physics degree to have a standardized foundation of basic knowledge. Many physics faculty members accepted standardization as a partial purpose of tuning and considered the state-sponsored Faculty Discipline Majors’ Meetings an appropriate venue to discuss issues of course credit transferability.

Elementary education participants were also more likely than those in most disciplines to accept standardization as part of the certification process for their students. Teacher preparation programs have long been regulated by state and other external organizations; they are expected to have clearly stated learning objectives along with assessments targeted at providing evidence that their students are prepared to teach. Although these participants accepted a common (or standardized) set of learning outcomes, they also believed their individual programs were not compelled to prepare and assess students in standardized ways. Faculty members in each of these programs felt they provided a unique benefit to their students, and members of each program indicated they felt they were doing a good job preparing their students, emphasizing that they had been doing so for several years. As mentioned above, they also felt that coming together as a group to discuss ways to accomplish the goals of tuning had been a highly beneficial endeavor.

For the GE Math group, standardization was less a concern than a challenge. The distinctive contexts of the various general education programs make tuning difficult: GE Math courses may not be completed as a sequence the way courses for degrees in other disciplines are, and they often function as service courses for various degrees. Standardization in these situations was seen as almost impossible; thus the learning outcomes for GE Math needed to be very general.

The most consistent pushback to tuning as standardization came from history faculty. Contentious debates regarding standards and standardization within the American Historical Association (AHA), the premier professional organization in this discipline in the United States, may have contributed to this negativity. The primary concern about tuning and standardization seems to center on the kinds of skills and knowledge historians require. While minor learning outcomes for this discipline might address recall of historical events or familiarity with historical facts, the primary skills and abilities focus on critical thinking and interpretation, along with the ability to present a persuasive, evidence-based historical argument that recognizes a range of divergent viewpoints. For many of these faculty members the nature of history as a discipline seems to reject the notion of standardization. Historians do not want to be told what to teach or how to teach it. Most seem to believe they are preparing their graduates effectively, providing a valuable set of skills required to function in a wide variety of fields. However, unlike physics course content, which is somewhat standardized, the content of specific courses taught for a history degree seems secondary to the foundational skills the program develops within an area of specialization.

1.5. Concerns regarding assessment

Most tuning group sessions identified assessment as a major concern. While uniform use of standardized testing was clearly not an option for any of the disciplines, most respondents indicated they were unsure how to assess each of their tuning learning outcomes effectively and efficiently. They often expressed concern that some important outcomes and expectations would be extremely difficult to measure. Many of the groups expressed a desire to share assessment ideas.

Some respondents expressed concern over a perceived expectation that departments would guarantee that each of their graduates would have all the important dispositions and abilities put forth by tuning members. Given external criticism regarding low graduation rates, few if any expressed willingness to withhold a degree from a student who had successfully completed required courses if the faculty felt he or she had not fully or adequately learned all that was expected.

1.6. Concerns regarding tuning as a grassroots initiative

Some have questioned the claim that tuning is a grassroots initiative. Many faculty respond initially to the concept of tuning by questioning the source and the motives behind it.

Tuning is meant to be a faculty-driven process, seeking and usually obtaining input from department faculty through their representatives. The Utah Tuning Leadership Team was careful to avoid prescribing outcomes or telling team members how to implement and assess them. However, some have mentioned that the learning outcomes developed by tuning teams need more faculty input. The same concern was voiced by history faculty regarding current national tuning efforts under the AHA, which have produced learning objectives similar to, but slightly different from, the Utah tuning outcomes (see http://historians.org/teaching-and-learning/current-projects/Tuning/history-discipline-core). While input was obtained from a wide variety of individuals across the country, a final, true consensus among all history faculty is unlikely; agreement is low on which learning outcomes and competencies are most important and on how to word specific outcomes.

Elementary education tuning groups face similar issues. Teacher education programs have many masters, principal among them being state regulatory bodies that ultimately issue teaching licenses to graduates. Each state establishes standards for these programs (e.g., UETS) and expects faculty to align their learning outcomes, curriculum, and assessments accordingly. While some faculty may disagree with these mandates, they must comply to the best of their ability.

1.7. Faculty implementation of Tuning

Another concern participants expressed was the implementation of tuning in the classroom. Faculty rarely talked about tuning directly with students. About 31% of respondents did not answer the survey question asking whether they had done anything different as a result of tuning, and another 26% indicated specifically that they had not. A further 12% simply indicated that they were already tuning; this group, like those who reported doing nothing different, generally meant that they were already sharing course and program learning outcomes with students verbally or on course syllabi. Sharing outcomes in this way, along with creating a department document outlining the expected learning outcomes for the program, appears to have been the primary implementation activity for most participants. Another 13% said they now share program learning outcomes with students.

A few respondents indicated that they implemented tuning in additional ways, which included aligning assessments with learning outcomes, adjusting instruction to address learning outcomes, and using learning outcomes to communicate to others what they expected of graduating students. The history faculty at some institutions also indicated they were trying to change the students’ perception of their degree by coaching them to express their qualifications in terms of learning and skills rather than courses taken. For example, faculty reported their attempts to focus student discussion on the broader goals of historical study, the skills and proficiencies developed in a course, the sequential “laddering” of skills, the importance of the capstone research project as a way to acquire and demonstrate skills and abilities, as well as the ways history proficiencies translate into success in further education, public sector work, and private sector employment.

VI. Conclusions and Recommendations

Overall, this initiative was well organized and in compliance with all the specified aspects of the grant. The Utah Tuning Leadership Team functioned well together and actively sought to facilitate the success of each of the Utah tuning teams. The discipline-specific teams met regularly, and each made progress toward its goals. However, while the project was a success in many ways, initiatives like tuning require considerable time and effort if they are to have a lasting impact.

Taking tuning forward successfully will require a long-term commitment to the concepts and principles of the initiative. Initial success in tuning requires agreement on learning outcomes for a discipline, along with changes to policies and practices at the individual institutions involved. Although the Utah Tuning Project has seen this level of success, lasting improvement will require a systemic change in the way faculty and students think about university training. Sustainable success will also require continued state-level support to coordinate and facilitate the collaborations the initiative requires. The following recommendations and best practices are presented as findings from this evaluation.

1. Project Support and Initiative Advocates

Buy-in is an essential ingredient for the success of any initiative. For a tuning project to be successful, support must be established at each level of the university system within the state. Advocates at each level must be willing to take on the challenge of tuning implementation. Without someone at the state level to rally support, facilitate meetings, and encourage cooperation, the individual institutions will be less likely to participate. At the university level, college administrators must provide individual departments with encouragement and incentives to participate. At the department level, an individual must advocate for the program and be willing to serve as a liaison between the tuning teams and department faculty.

An important aspect of the Utah Tuning Project’s success was support from individuals representing the Utah System of Higher Education on the tuning leadership team, who gave credibility to the initiative and increased initial support for the project when individuals from the various institutions were invited to participate. These individuals believed in the benefits of tuning and worked to make it successful. Because the state was involved, the leadership team was also able to integrate tuning into the agendas of Utah’s established state-level Faculty Discipline Majors’ Meetings, which are designed to bring together department representatives from various colleges across the state to discuss issues and articulate agreements. We also found it to be a best practice to have highly regarded individuals from the departments serve on the tuning team. A department liaison who is a prominent member of the faculty with the support of the department chair (or who is the department chair) is much more likely to be able to share tuning information, gather feedback, and garner faculty support.

2. Continued Tuning Education for Faculty

Another aspect critical to the success of this project is tuning education. Individual faculty and university administrators have to be educated in terms of what tuning is, including its benefits and challenges. Tuning participants must also be educated regarding what they are expected to do. Training cannot be a one-time event, as personnel often have competing obligations and expectations demanding their time and attention. To be successful, tuning must be integrated into department meetings and processes. It also needs to be explained to new hires and presented persuasively to faculty who hesitate to participate because they do not understand the potential benefits.

This project demonstrated clearly that as individuals learn about tuning (understanding what it is and what it is not), they tend to participate more fully. While each group had specific issues to deal with, all participants faced the challenge of gaining full collaboration, adequate input, feedback, and buy-in from their department colleagues. Obviously many faculty members at particular colleges did not understand tuning and as a result were less willing to participate. A best practice for educating faculty is to establish regular meetings to inform faculty about tuning and to discuss ways to implement tuning practices into classrooms.

3. Contextualized Adaptations by Discipline

Unfortunately, tuning is not a one-size-fits-all endeavor. Successful implementation of tuning will likely vary significantly among disciplines. For example, while the GE Math group had some success and its members benefited individually from tuning, they found it more difficult to implement tuning and secure buy-in from those at individual institutions who might benefit from tuning GE Math. The primary difficulty was that GE Math courses do not constitute a specific degree, and those involved are not organized in a department. For those in the elementary education group, tuning had to align with mandated learning outcomes from state regulatory departments and, in some cases, from more than one accreditation organization. Physics faculty agreed fairly quickly on learning outcomes but then struggled with assessment issues, as well as with differentiating between bachelor’s and master’s degrees. The history group had to contend with buy-in issues as well as with efforts by their national organization that would inevitably affect their own tuning efforts. A best practice for those trying to implement tuning is to avoid expecting that it will be implemented in just one way.

4. Summary of Conclusions

Many initiatives fail because participants are unable to sustain ongoing support after the introductory efforts. Success in tuning may initially be measured by agreement on learning outcomes delineating what students will know, understand, and be able to do once they complete their degree. However, long-term success requires a systemic change in the attitudes and actions of individual faculty members and students. The essence of that change requires individuals to focus on student learning rather than simply on completing course requirements for a degree. Sustainable success of tuning also requires a long-term commitment of support from state administrators and from the individual universities that stand to benefit from what tuning has to offer.

One crucial benefit of tuning identified in this evaluation was collaboration and communication among faculty at institutions offering similar degrees, particularly in aligning efforts and articulating expectations between four-year programs and two-year transfer programs. Additional benefits included refocusing students’ attention away from formal degree requirements to the specific knowledge and skills necessary for working in their chosen field; making learning outcomes more explicit and transparent for faculty and other stakeholders such as employers; and helping faculty better align their teaching efforts with the intended learning outcomes of a degree.

Throughout all tuning efforts, those implementing the initiative must understand that faculty will not likely implement tuning in just one way. Tuning can be a dynamic and messy process that will present itself differently depending on the discipline involved. For tuning to be effective, those implementing the initiative must also make a long-term commitment to encourage, educate, and regularly re-train stakeholders.

Bibliography

Arum, Richard, and Josipa Roksa. Academically Adrift: Limited Learning on College Campuses. Chicago, IL: University of Chicago Press, 2011.

Carnevale, Anthony P. “The Real Education Crisis Is Just over That Cliff.” The Chronicle of Higher Education, 2012, 1–5. http://chronicle.com/article/The-Real-Education-Crisis-Is/132167.

Common Core State Standards Initiative. About the Standards, 2011. http://www.corestandards.org/about-the-standards.

Educational Testing Services. Student Learning Outcomes in Higher Education, n.d. http://www.ets.org/education_topics/learning_outcomes.

Fischer, Karin. “Crisis of Confidence Threatens Colleges.” The Chronicle of Higher Education, 2010. http://chronicle.com/article/A-Crisis-of-Confidence/127530.

Hacker, Andrew, and Claudia Dreifus. Higher Education?: How Colleges Are Wasting Our Money and Failing Our Kids - and What We Can Do About It. New York, NY: St. Martin’s Griffin, 2010.

Legislative Analyst’s Office. Education Mandates: Overhauling a Broken System, 2010. http://www.lao.ca.gov/reports/2010/edu/educ_mandates/ed_mandates_020210.aspx.

Millett, Catherine M., David G. Payne, Carol A. Dwyer, Leslie M. Stickler, and Jon J. Alexiou. “A Culture of Evidence: An Evidence-Centered Approach to Accountability for Student Learning Outcomes.” Learning, 2008. http://www.ets.org/education_topics/learning_outcomes.

Patton, Michael Quinn. Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. New York, NY: The Guilford Press, 2011.

Petrides, Lisa A., Sara I. McClelland, and Thad R. Nodine. “Using External Accountability Mandates to Create Internal Change.” Planning for Higher Education 33, no. 1 (2004): 44–50.

Turley, Steve. “Professional Lives of Teacher Educators in an Era of Mandated Reform.” Teacher Education Quarterly 32, no. 4 (2005): 37–59.

Weiner, Wendy F. “Establishing a Culture of Assessment: Fifteen Elements of Assessment Success—how Many Does Your Campus Have?” Academe 32, no. 9 (2009): 37–59.

West, Martin R., and Paul E. Peterson. “The Politics and Practice of Accountability.” In No Child Left behind? The Politics and Practice of School Accountability, edited by Martin R. West and Paul E. Peterson, 1–20. Washington DC: Brookings Institution Press, 2003.

[*] Dr. Davies (Randy.Davies@byu.edu) and Dr. Williams (David_Williams@byu.edu) are professors of Instructional Psychology & Technology at Brigham Young University (USA).

[1] Anthony P. Carnevale, “The Real Education Crisis Is Just over That Cliff,” The Chronicle of Higher Education, 2012, 1–5, http://chronicle.com/article/The-Real-Education-Crisis-Is/132167.

[2] Karin Fischer, “Crisis of Confidence Threatens Colleges,” The Chronicle of Higher Education, 2010, http://chronicle.com/article/A-Crisis-of-Confidence/127530.

[3] Richard Arum and Josipa Roksa, Academically Adrift: Limited Learning on College Campuses (Chicago, IL: University of Chicago Press, 2011).

[4] Andrew Hacker and Claudia Dreifus, Higher Education?: How Colleges Are Wasting Our Money and Failing Our Kids - and What We Can Do About It (New York, NY: St. Martin’s Griffin, 2010).

[5] Catherine M. Millett et al., “A Culture of Evidence: An Evidence-Centered Approach to Accountability for Student Learning Outcomes,” Learning, 2008, http://www.ets.org/education_topics/learning_outcomes.

[6] Legislative Analyst’s Office, Education Mandates: Overhauling a Broken System, 2010, http://www.lao.ca.gov/reports/2010/edu/educ_mandates/ed_mandates_020210.aspx.

[7] Lisa A. Petrides, Sara I. McClelland, and Thad R. Nodine, “Using External Accountability Mandates to Create Internal Change,” Planning for Higher Education 33, no. 1 (2004): 44–50.

[8] Steve Turley, “Professional Lives of Teacher Educators in an Era of Mandated Reform,” Teacher Education Quarterly 32, no. 4 (2005): 37–59.

[9] Martin R. West and Paul E. Peterson, “The Politics and Practice of Accountability,” in No Child Left behind? The Politics and Practice of School Accountability, ed. Martin R. West and Paul E. Peterson (Washington DC: Brookings Institution Press, 2003), 1–20.

[10] Common Core State Standards Initiative, About the Standards, 2011, http://www.corestandards.org/about-the-standards.

[11] Educational Testing Services, Student Learning Outcomes in Higher Education, n.d., http://www.ets.org/education_topics/learning_outcomes.

[12] Wendy F. Weiner, “Establishing a Culture of Assessment: Fifteen Elements of Assessment Success—how Many Does Your Campus Have?,” Academe 32, no. 9 (2009): 37–59.

[13] Millett et al., “A Culture of Evidence: An Evidence-Centered Approach to Accountability for Student Learning Outcomes.”

[15] Michael Quinn Patton, Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use (New York, NY: The Guilford Press, 2011).

Copyright

Copyright for this article is retained by the Publisher. It is an Open Access material that is free for download, distribution, and/or reuse in any medium only for non-commercial purposes; provided any applicable legislation is respected, the original work is properly cited, and any changes to the original are clearly indicated.