1 Introduction

“Public engagement with science” (PES) programs appear to be growing more prevalent in American higher education institutions (HEIs) as a way to share institutions’ research and education with broader communities. These efforts aim to foster mutual benefits by enabling a more informed and engaged society while helping ensure research meets societal needs. Despite the importance of PES, many institutions and departments struggle to assess these programs because the programs lack explicit goals tied to specific audiences and are rarely evaluated. In this practice insight, we describe a strategic planning and evaluation approach to support effective PES that is currently underway at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC), an astrophysics research institute co-managed by Stanford University and the SLAC National Accelerator Laboratory.

While the concept and practice of strategic planning are not uncommon, what sets this work apart is our approach to integrating social science theory into the process. More specifically, while science communication theory helps elucidate important concepts (such as trust) for PES, the existing literature offers little guidance for PES practitioners about the use of practical and feasible tactics that can be implemented. The strategic planning and evaluation processes described here were designed to explore this space and to provide guidance for putting research into practice.

The absence of clearly defined, audience-specific behavioral goals and associated objectives1 makes assessing impact functionally impossible [Hon, 1998]. Without goals to define what “success” looks like — including both the intended audience and outcomes — PES programs are simply offered to “those interested and available” without the guidance of a strategic direction. A lack of clear goals (and associated objectives) also makes it difficult for audiences to identify activities that make sense for their own goals, potentially limiting people’s willingness to participate. For example, an adult astronomy enthusiast who attends an event to relax and meet other enthusiasts likely wants to know whether an event is aimed at adult rather than youth audiences, and whether there will be opportunities for substantive interaction.

Furthermore, many PES programs are not evaluated, and those that are rarely have audience-specific behavioral goals to anchor evaluation efforts. Although data are limited, it appears that only a small fraction of those involved in science communication use evaluation to guide PES program design, execution, and refinement, which has far-reaching consequences for discovery science as a field [Jensen, 2014; Pellegrini, 2021; Sörensen, Volk, Fürst, Vogler & Schäfer, 2024; Ziegler, Hedder & Fischer, 2021]. Without explicit goals and sufficient evaluation, it is challenging for HEIs to justify the funding for PES programs or make informed decisions about resource allocation. Statistics such as the number of attendees or clicks on YouTube provide key information on how well the programs have been advertised and the level of potential interest. However, these numbers do not speak to the quality of a program, the depth of engagement, or any positive changes it has produced. It is worth noting that HEIs receiving government funding are technically required to share their research findings and education outcomes with the public, who essentially fund the research and education at these HEIs. Finally, the absence of reliable impact data limits the potential for scaling successful initiatives and replicating effective strategies across different contexts.

On the theoretical side, however, a number of frameworks are available to guide research on and evaluation of the impact of PES programs. Two of the best known and most widely used come from the field of informal science learning, and both define a range of outcomes that are of common interest to those who design PES programs [Friedman, 2008; National Research Council, 2009], including knowledge or understanding, skill, behavior, interest, motivation, and identity. These frameworks can be useful for those who are newer to evaluation and who have not had the chance to consider broader categories of learning. Once a category is identified, the next step is to delve into the related social science literature(s) to make evidence-based choices about how to support and evaluate that type of learning. Identifying a social scientist as a partner in this process has been recommended [Peterman et al., 2021].

Addressing these issues requires HEIs to adopt more strategic and structured approaches to setting goals and evaluating the impact of their PES efforts. The work shared here was designed to fill this gap. In this practice insight, we discuss successful strategies, challenges, and potential applications of our strategic planning and evaluation approach that may help other HEIs improve the effectiveness and sustainability of their PES programs. We emphasize that we are not developing, modifying, or testing any science communication theories in this work. Instead, this article offers a practical process for applying one conceptual framework to PES practice; it provides a valuable guide for individual scientists, PES practitioners, and HEIs.

2 The goals-objectives-tactics-objectives-goals (GOTOG) framework

KIPAC’s strategic plan for PES is based on the “goals-objectives-tactics-objectives-goals” (GOTOG) framework described in Besley and Dudo [2022b] and Besley and Dudo [2022a, see Figure 1]. This framework starts with the differentiation between audience-specific behavioral goals and cognitive and affective objectives from the strategic communication literature [Hon, 1998] and is consistent with calls to recognize parallels between the needs of science communicators and strategic communicators in other spheres [Borchelt & Nielsen, 2014; VanDyke & Lee, 2020]. Put simply, in this framework, goals are about changing behavior (i.e., what people do) while objectives are about the way people think and feel about the behavior you are hoping to change. Communication choices related to the latter are informed by key concepts from both behavior change-related theory [Montano & Kasprzyk, 2015] and trust-related theory [Mayer, Davis & Schoorman, 1995; Besley, Lee & Pressgrove, 2021]. These literatures help identify the cognitive and affective objectives for PES that should affect the behavior a scientist is hoping to change (in others or in themselves). Whereas some approaches to communication strategy allow objectives to include short-term behaviors, the GOTOG approach treats goals as desired behavioral outcomes and objectives as desired cognitive or affective outcomes, consistent with social science theory that distinguishes between behaviors and behavioral predictors. An overarching goal in the GOTOG approach is more typically described as a vision.


Figure 1: The Goals-Objectives-Tactics-Objectives-Goals (GOTOG) framework of strategic communication, adapted from Besley and Dudo [2022b]. This framework outlines the identification of goals and objectives, implementation of tactics, and the corresponding evaluation of tactics, objectives, and goals.

Behavior change theory, in this regard, focuses on understanding what leads people — including scientists — to engage in largely intentional behaviors as a function of the degree to which they believe the behavior is beneficial or risky (i.e., their attitude towards the behavior), normative (i.e., common and/or expected), and within their capacity (i.e., they have the skills, resources, and/or authority to enact the behavior) [Fishbein & Ajzen, 2010; Montano & Kasprzyk, 2015]. For example, scientists who want youth to consider science careers likely need to ensure that those youth believe these careers will be beneficial, accepted by key others, and possible for themselves. Similarly, trust-related theory (e.g., the Integrative Model of Organizational Trust) focuses on what leads people to make themselves vulnerable. Trust is thus treated as a specific type of behavior, and willingness to trust appears to occur when people believe a trustee is trustworthy. Specific beliefs underlying trustworthiness include beliefs about ability (i.e., expertise), benevolence, and integrity [Mayer et al., 1995]. These beliefs represent potential communication objectives that a communicator could seek to foster through communication efforts, including tactical choices about behavior, content, style, channel, and source. Figure 2 lists some examples of goals, objectives, and tactics within this framework.


Figure 2: Key examples of tactics, objectives, and goals in the GOTOG framework, adapted from Besley and Dudo [2022b].

A feature of using established theory to differentiate between behavioral goals, affective and cognitive objectives, and tactics is that doing so enables evaluation at multiple levels. Everyone wants to achieve behavioral goals, but these should typically be seen as long-term outcomes that become more likely as positive interactions accumulate. Behavioral changes are rarely the result of single interventions. By extension, short-term evaluation measures are more useful when they focus on feelings and beliefs (i.e., objectives rather than goals) or on whether and how tactics were used, and used successfully.

For example, all things being equal, it would be reasonable to expect that college students who have multiple positive experiences with KIPAC scientists who seem knowledgeable and eager to help others would develop more positive beliefs about similar scientists’ expertise and benevolence (i.e., objectives). These beliefs should, in turn, shape the students’ willingness to trust such scientists (i.e., a behavioral goal). This suggests that it may be worth assessing such students’ immediate experiences even if it is not possible to assess their future beliefs or associated behaviors. Further, even if it were not possible for an evaluator to assess immediate perceptions, the reliance on theory suggests that evaluation could assess whether the scientists behaved in ways that might be expected to affect the audience’s perceptions of expertise and benevolence: whether they used tactics known to demonstrate expertise and caring, shared content related to outcomes, and adopted a communication style consistent with such perceptions.

Tracking the use of these kinds of literature-based tactics can provide evidence to demonstrate that a scientist or organization is striving toward objectives and behavioral goals. When framed in relation to Figure 1, scientists choose Priority Tactics for a PES activity because they are expected to help achieve the Priority Objectives that will ultimately result in changes in Behavioral Goals. Choosing to communicate in ways that demonstrate expertise and benevolence, for example, is a tactical choice that is expected to build trust.

Tracking when and how these types of tactics are used across multiple PES activities and initiatives can be a useful way to document the ways that a research organization is taking steps to achieve related objectives and goals. Cumulatively, these kinds of data can also be used to document an organization’s PES portfolio; this type of evaluation is often referred to as process evaluation or monitoring. We believe this type of evaluation is particularly important for documenting the use of evidence-based PES practice and thus it is a core component of our evaluation of PES strategy for KIPAC.

Outcomes evaluations are more common than process evaluations. In the context of the GOTOG framework, short-term outcomes are measured to document success in relation to PES objectives that focus on ways of thinking and feeling, and longer-term outcomes are documented in relation to goal behaviors. Building from a specific example in Figure 2, a scientist or research organization might measure short-term outcomes after a public lecture event to learn audience members’ perceptions of whether the scientists featured seemed honest and willing to listen, instead of measuring long-term outcomes related to turning to science and scientists for advice. These and each of the other outcomes/objectives in Figure 2 might then be used as early indicators of success as a research organization strives to achieve its longer-term audience-specific behavioral goals.

3 Strategic planning at KIPAC

Founded in 2003 by a gift from Fred Kavli and The Kavli Foundation, KIPAC is an independent joint laboratory located on the main Stanford campus and at the SLAC National Accelerator Laboratory. It comprises over 150 scientists with diverse interests in astrophysics and cosmology, spanning theory, computation, observations, and instrumentation.

The work shared here was enabled by funding from The Kavli Foundation to initiate and implement strategic PES across the Institute. Funding was also provided to a team of consultants to lead the strategic planning and evaluation process. The authors of this paper include the Outreach and Engagement Manager (OEM) who was hired to support this work at KIPAC, and members of the consultant team.


Figure 3: An overview of the key steps involved in the strategic engagement planning process.

The current PES team at KIPAC includes one full-time OEM, one staff member at 25% time, and 2–3 rotating student interns. Reporting to the KIPAC Director and Managing Director, the OEM leads the PES efforts (including five regular, year-round programs) on behalf of the institute, working closely with several dozen student and scientist volunteers.

The evaluation of PES programs at KIPAC started with strategic planning efforts that lasted for nearly a year. This phase of work focused on learning from existing practices and perspectives and then weaving those into KIPAC’s strategic plan for PES. The consultant team gathered information about existing engagement programs and assets through a website review, meetings with the KIPAC OEM, and meetings with the OEM and leaders from existing PES programs. Individual interviews were also conducted with engagement staff, scientists, and institute leaders to understand their personal engagement goals, and the goals they would and would not like to see KIPAC pursue as an organization. A key element of these interviews was challenging scientists to go beyond their initial answers that often focused on objectives (e.g., increase factual knowledge, evoke emotions such as awe/wonder) and identify real-world changes in goal behaviors they might like to see (e.g., what they believe might occur if they were able to increase literacy or evoke awe/wonder). A total of 26 KIPAC members participated in these interviews.

The results from the interviews and early meetings with the OEM were then shared with a group of more than 20 KIPAC staff and scientists during a workshop-style meeting. The purpose of the meeting was to prioritize the engagement goals that had been shared with the consultant team to date. Three goals were identified during the meeting, and these then became the focus of KIPAC’s engagement plan.

  • Strengthening the Role of Science in Society

    • Leveraging the public appeal of astronomy and physics to contribute to public trust and support of science.

  • Broadening Participation in Astronomy and Physics

    • Supporting pathways for girls, youth of color, and other underrepresented groups to choose STEM careers, and supporting pathways for diverse community members to fulfill their interests in astronomy and physics.

  • Building Better Scientists

    • Advancing professional development for scientists via public engagement training and participation.

Given these goals, the planning process also helped KIPAC identify four priority audiences for engagement: Bay Area youth, adults, decision-makers, and KIPAC scientists themselves. Each of the three goals was relevant for multiple audiences. In total, nine audience-specific behavioral goals were identified. For each audience-specific behavioral goal, the strategic plan included a GOTOG table that outlined priority objectives and suggested tactics for achieving those objectives, as well as the outputs and outcomes for evaluation. The metrics were broken down into outputs, which list examples of what happened to prepare for or lead a PES activity, and outcomes, which can be used to measure what difference a PES activity made. As noted above, the process evaluation and metrics focus on tactics, and the outcomes data focus on short-term progress in relation to objectives. Table 1 includes an example of one of these GOTOG tables, with a subset of objectives, tactics, and metrics. These tables were particularly useful in the next phase of the work, which involved aligning KIPAC’s PES activities and evaluation to the strategic plan.

Table 1: Example GOTOG table in KIPAC’s PES strategic plan that emphasizes building trust with adult audiences.
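To make the structure of these tables concrete for readers who maintain similar plans in spreadsheets or code, the sketch below encodes a simplified GOTOG table entry as a small Python data structure. This is a minimal illustration of the table’s shape, not KIPAC’s actual plan; the field names and example entries are our own hypothetical stand-ins, loosely paraphrasing the trust-with-adult-audiences emphasis of Table 1.

```python
from dataclasses import dataclass

@dataclass
class GotogEntry:
    """One GOTOG table entry: an audience-specific behavioral goal plus
    the objectives, tactics, and evaluation metrics attached to it."""
    audience: str
    behavioral_goal: str          # long-term behavior change (the "G")
    objectives: list[str]         # cognitive/affective outcomes (the "O")
    tactics: list[str]            # literature-based tactics (the "T")
    output_metrics: list[str]     # process evaluation: what was done
    outcome_metrics: list[str]    # short-term measures tied to objectives

# Hypothetical entry loosely modeled on Table 1.
adult_trust = GotogEntry(
    audience="Bay Area adults",
    behavioral_goal="Willingness to turn to scientists for advice",
    objectives=["Perceived expertise", "Perceived benevolence"],
    tactics=["Share what motivates the research",
             "Provide opportunities for substantive interaction"],
    output_metrics=["Count of events in which each tactic was used"],
    outcome_metrics=["Post-event ratings of scientists' caring/openness"],
)
```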

3.1 The alignment initiatives

The strategic plan culminated in a set of five recommended alignment initiatives intended to align KIPAC’s PES programs and activities with the strategic plan. The five alignment initiatives are: strategy alignment, training alignment, evaluation alignment, capacity alignment, and community alignment.2 This work takes time. Thus far, KIPAC has made progress on the first three alignment initiatives; the next sections report on this progress.

Strategy alignment required KIPAC to review the strategic plan and make decisions about whether and how to align all program activities to priority goals and objectives. For each of KIPAC’s main recurring programs, the first step was to confirm which of the nine audience-specific goals the program was seeking to achieve. With the program goals clearly defined, the consultant team and OEM used the corresponding GOTOG table(s) to identify existing program tactics that aligned well with the strategic plan and ones that did not, and proposed possible new tactics for implementation.

Additional practices were also added at this stage of the process. GOTOG provides the process used to structure the overall strategy, based on behavior change theory. Additional social science literatures are then used to support effective engagement itself via the SCRREE framework, which compiles six literature-based attributes of effective and ethical engagement [Garlick et al., 2024]: effective PES is strategic, cumulative, reciprocal, reflexive, equitable, and evidence-based. These attributes were used to identify additional strategies and evidence bases to support PES at KIPAC as part of the strategy alignment process.

The result was a document for each program that outlines tactics already in place (e.g., including a diverse range of speakers), tactics that were “easy wins” that could be implemented quickly (e.g., providing a speaker bio template), tactics that were of interest but would take longer to implement (e.g., partnering with community organizations), and tactics that KIPAC was not interested in pursuing at the time (e.g., hosting events at variable days/times).

Training alignment required KIPAC to revise its PES training and reflection activities to achieve its goals related to “building better scientists,” which, in turn, would contribute to the “science in society” and “broadening participation” goals. Because a major objective identified in the strategic plan was having scientists demonstrate trustworthiness, the consultant team and the OEM collaboratively created a set of guides for scientists about how they can communicate trustworthiness. These guides provide scientists with evidence-based tactics that they can consider in a range of contexts. For example, one guide details tactics for communicating trustworthiness through an online biography, such as adding information about what motivates their research, what they do to ensure that their research is reproducible, and how they ensure they are hearing from a range of voices when choosing and designing research projects. For practitioners looking for examples, all of our guides can be accessed here.

Furthermore, KIPAC organized a series of science communication training workshops for KIPAC members (primarily early-career scientists) who have been engaged in or are interested in PES work. The OEM worked closely with a professional trainer to make the sessions interactive, relevant, and practical, with key input from the consultant team to ensure the training materials were directly aligned with KIPAC’s strategic plan. The training topics for a 4-hour workshop included how to structure an engaging public lecture, how to tailor talks to specific audiences, and how to gain confidence in public speaking. Figure 5 in the appendix includes the workshop outline with subtopics, along with an example training agenda for “how to structure an engaging public lecture.”

Evaluation alignment required KIPAC to identify meaningful metrics for documenting both program planning and implementation as well as outcomes of PES programs and training according to the strategic plan. For several KIPAC programs, staff had developed and implemented post-program participant surveys prior to the strategic engagement process. Those surveys contained common questions (e.g., about participant satisfaction) as well as questions specific to the program and were routinely administered to participants following the program. One aspect of evaluation alignment involved revising those existing post-program surveys to directly measure the outcomes identified in the strategic plan. The evaluation alignment phase involved returning to the relevant GOTOG table(s) for the particular program and then comparing the existing program survey items to evaluation items in the table. We considered which survey items should be dropped and identified gaps that needed to be filled with new items to ensure we were measuring short-term outcomes related to the plan’s objectives.

For example, across various programs, new questions were added to measure goodwill (i.e., benevolence, caring), a key component of trustworthiness [Besley et al., 2021; McCroskey & Teven, 1999; Mayer et al., 1995]. Both the focus on goodwill and the items used to measure it were drawn from the existing academic literature, and the items had validity evidence to support their use. These questions asked program participants how much the scientists they interacted with seemed to care about helping others, how open-minded they seemed, and how willing they seemed to consider others’ points of view [Besley et al., 2021]. Post-program surveys were also amended to measure all other key outcomes from the strategic plan, including satisfaction and self-efficacy. The result was a set of post-program surveys designed to measure the outcomes of the strategic plan that the specific program is aiming to achieve. Because the programs share some but not all outcomes, each program’s survey shares some items with other surveys and includes unique items. These surveys are now routinely administered to a program’s participants following the event. Table 2 in the appendix details the survey questions and response scales for the KIPAC public lecture program.
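To illustrate how responses to items like these might be summarized, the snippet below averages hypothetical 5-point ratings of the three goodwill items into a per-respondent composite and an event-level mean. The item names, response scale, and data are assumptions for illustration only; they are not KIPAC’s actual instrument or results.

```python
from statistics import mean

# Hypothetical 5-point ratings (1 = not at all, 5 = very much) for the
# three goodwill items described above; one dict per respondent.
responses = [
    {"cares_about_helping": 5, "open_minded": 4, "considers_other_views": 4},
    {"cares_about_helping": 3, "open_minded": 4, "considers_other_views": 5},
]

def goodwill_composite(resp: dict) -> float:
    """Average the goodwill items into a single 1-5 composite score."""
    return mean(resp.values())

event_mean = mean(goodwill_composite(r) for r in responses)
print(f"Event-level mean goodwill composite: {event_mean:.2f}")  # 4.17
```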

To capture outputs that indicate whether and how tactics from the strategic plan were implemented, the consultant team further built a database to track all KIPAC PES events and the use of the training guides described above. The tracking database requires the OEM to submit basic information and statistics after each PES event has taken place, such as when and where it occurred, the number of in-person/virtual attendees, and which scientists and staff were involved. Further information relevant to the strategic plan and elements of SCRREE is also solicited in the documentation process, including whether and how KIPAC worked with a partner organization, what the public did during the activity, and the priority population(s) that the event was designed for.
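As a sketch of what one of these event records might contain, the dataclass below mirrors the fields described above (timing, location, attendance, people involved, partner organization, audience activity, priority populations, and tactics). The field names and example values are hypothetical; KIPAC’s actual database is implemented in Airtable, as described in the next section.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class PESEvent:
    """One PES event record for process evaluation/monitoring.
    Field names are illustrative, not KIPAC's actual schema."""
    description: str
    start: datetime
    end: datetime
    location: str
    attendees_in_person: int
    attendees_virtual: int
    people_involved: list[str]           # scientists and staff
    partner_organization: Optional[str]  # SCRREE: reciprocal partnerships
    audience_activity: str               # what the public did
    priority_populations: list[str]      # who the event was designed for
    tactics_used: list[str]              # evidence-based tactics logged

event = PESEvent(
    description="Hypothetical public lecture on dark matter",
    start=datetime(2024, 5, 10, 19, 0),
    end=datetime(2024, 5, 10, 20, 30),
    location="Stanford campus",
    attendees_in_person=120,
    attendees_virtual=85,
    people_involved=["Scientist A", "OEM"],
    partner_organization=None,
    audience_activity="Lecture followed by open Q&A",
    priority_populations=["Bay Area adults"],
    tactics_used=["Speaker bio shared in advance"],
)
```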

3.2 Next steps

Strategic plans are meant to guide action and to function as living documents that are revised over time in response to strategies in action. The alignment work described here is ongoing and iterative. To date, the strategic plan and related evaluation measures have guided the development of PES programs at KIPAC on a number of fronts under the direction of the OEM. The next step in the process is to raise awareness of the strategic plan and its related initiatives across the organization, creating broad recognition, buy-in, and adoption of the plan by the entire KIPAC community and, eventually, by individual members. These steps are critical for institutional implementation overall.

Even in a context such as KIPAC’s, with a dedicated OEM, there are many perspectives and demands to balance at research institutions. Prioritizing the integration of strategy and scaling up PES activities and practices must be balanced with these other demands, and thus takes time and patience. The different types of alignment strategies shared above have been a useful way to consider different areas of progress and to track the steps completed to date. The SCRREE framework has also been a useful reflection tool, in that it has allowed us to consider gaps and thus next steps in the continued development of KIPAC’s strategic PES.

The consultant team has developed a monitoring system to track KIPAC’s PES events along with the partner organizations and scientists involved. The relational database was built using Airtable; it allows KIPAC to view records for individual events, organizations, and scientists, as well as summary dashboards that aggregate key metrics (see Figure 4). The OEM began using the monitoring system in Spring 2024 by entering the key details of PES events after their completion (e.g., a description of the event, event start and end times, priority population(s), number of attendees, etc.). The OEM has found the data entry process a key step for creating a centralized record of PES details, and the dashboard greatly helps with visualizing the statistics and maintaining institutional knowledge. The OEM is already using the trustworthiness guides during coaching sessions with scientists to ensure that evidence-based tactics are being integrated into PES activities; the next step is for the OEM to begin logging the tactics used in the monitoring system so that we can track which tactics are being used overall. Over time, these data will help tell the story of the strategic public engagement that the institute has supported and will allow KIPAC to note the specific tactics that have been used the most.


Figure 4: Sample figures from KIPAC’s PES summary dashboard that tracks key statistics and engagement aspects of individual PES events.
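To show the kind of aggregation such a dashboard might perform, the snippet below tallies total attendance and the most frequently used tactics across a handful of hypothetical event records. The field names and data are illustrative assumptions; KIPAC’s actual dashboards are built in Airtable.

```python
from collections import Counter

# Hypothetical event records with fields like those described above.
events = [
    {"attendees": 120, "tactics_used": ["speaker bio shared in advance"]},
    {"attendees": 45, "tactics_used": ["speaker bio shared in advance",
                                       "hands-on activity"]},
    {"attendees": 60, "tactics_used": ["hands-on activity"]},
]

total_attendees = sum(e["attendees"] for e in events)
tactic_counts = Counter(t for e in events for t in e["tactics_used"])

print(f"Events logged: {len(events)}; total attendees: {total_attendees}")
print("Most-used tactics:", tactic_counts.most_common(2))
```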

Another next step for KIPAC is to work on the capacity alignment and community alignment initiatives. Capacity alignment requires KIPAC to review program plans and make decisions about program offerings and program calendars based on realistic staff capacity. Community alignment requires KIPAC to consider the communities that they have already begun developing relationships with and determine which communities they want to enhance relationships with to support the long-term goals of the strategic plan.

3.3 Challenges & solutions

As elaborated in this article, strategic planning is vital for effective PES, but integrating these plans into the culture of research organizations like KIPAC can be complex. A major issue is moving from raising awareness of the strategic plan to active adoption in practice. Specifically, scientists need data as compelling evidence to motivate action. Fortunately, we now have just enough evaluation data to demonstrate the importance and positive outcomes of our PES efforts driven by the strategic planning process. Regularly sharing these data, particularly for proposal writing purposes, will encourage scientists to fully integrate the plan (including the adoption of tactics) into their PES activities.

Another challenge is balancing the need for thorough evaluation against the practicality of data collection tools such as surveys. While it is important to measure outcomes based on the strategic plan, surveys must be kept concise and accessible to avoid survey fatigue and ensure high response rates. As an example, trust is a complicated construct made up of multiple dimensions [i.e., ability/expertise, goodwill/benevolence, and integrity; Mayer et al., 1995; Besley et al., 2021]. To balance intervention length and survey length, we are currently measuring only one dimension of trust (goodwill/benevolence) as part of our evaluation.

Furthermore, sustaining these strategic planning and evaluation efforts can be challenging, especially with the consultant team’s contract ending soon. To tackle this, we are designating the final year of the contract as a “transition” period, during which the OEM will take on more evaluation responsibilities, with the consultant team serving as a resource and developing sustainable evaluation infrastructure to ensure a smooth handover.

Lastly, sustaining the OEM position in the long term is critical, given how integral the role is to all PES-related efforts. We are actively pursuing funding opportunities to support this position, exploring research grants led by individual PIs as well as multi-institutional, collaborative grants that require a substantive PES component.

4 Summary & recommendations

In this paper, we have presented the framework upon which KIPAC has based its strategic plan for PES, and discussed the dimensions and implementation of such a plan. It is important to note that, while we used KIPAC as an example, the consultant team has implemented similar strategic planning processes with other research organizations; the planning and alignment processes shared here have successfully transferred across organization types. KIPAC is furthest along in its alignment to and implementation of its strategic plan, and we still have much to learn. We hope the initial lessons shared here will guide PES practitioners as they consider evidence-based effective practices.

We recognize that not every HEI or department has the funding, expertise, and staff available to build a strategic plan from scratch [Sörensen et al., 2024], but there are major ways that HEIs can be more strategic and intentional with their PES efforts to maximize their impact. Below, we summarize how similar work can be implemented more broadly to enhance PES and community engagement in higher education at three different levels.

  1. Scientist/Individual Level. Scientists are encouraged to identify personal goals for public engagement and align them with available PES programs at their institution when possible. For example, scientists and PES practitioners could choose one audience-specific behavioral goal from the GOTOG framework (Figure 2) and then consider the shorter-term objectives and tactics that are most likely to help them achieve that goal (Figure 1). We recognize that it is challenging for scientists to make evidence-based decisions according to the science communication literature; the series of Practice Briefs may be a good starting point, as it covers basic information about each element of the SCRREE framework.

    Evaluation data (typically provided by event organizers) can also be useful for practitioners and scientists to consider whether they have achieved the objectives they intended through the PES activities. Reflexive practitioners and scientists will also benefit from thinking about the ways their PES experiences are improving their PES practice and their science.

    Finally, personal goals and institution-level PES programs can build on each other. For example, scientists can propose expanding a PES activity in the “broader impacts” section of research proposals, using institutional resources and evaluation data to demonstrate feasibility.

  2. Program Level. Having clearly defined program-level goals enables program managers to prioritize and make strategic decisions about where to invest to be most effective. Since no individual, program, or institution has unlimited time and resources, it is important to focus on activities that align with the program goals and priority audiences. Clear goals help managers recognize when opportunities do not fit the overall direction, allowing them to say “no” and avoid overextending themselves or dedicating resources and efforts to activities with minimal impact.

    Assessment should not be treated as an “afterthought.” Too often, PES activities are designed based on intuition rather than on the use of evidence-based tactics. Tracking the use of tactics through outputs can be a useful way to guide the intentional design of PES activities; the use of evidence-based tactics, in particular, optimizes the likelihood of success for practitioners and audiences alike. For example, program organizers are encouraged to share lists of evidence-based tactics with the scientists they work with and note the tactics that the scientists tried to use as part of their engagement. Engaging and preparing the presenters can effectively support the evaluation of tactics, especially when the goals and objectives have been set.

    Program managers should also use program-level objectives to help identify evaluation measures that can be used to measure progress over time, both for individual events and for the program overall. When possible, we recommend using existing scales that have validity evidence rather than creating survey items from scratch. A number of repositories provide examples of existing scales, and many cluster scales by construct and age group.

  3. Institution Level. HEIs and departments are strongly encouraged to develop their own strategic plan and secure leadership support for PES. Strategic planning not only provides a unique opportunity to define desired outcomes with specific audience groups at an institutional level, but also serves as an internal reflection process to gather input from individual scientists, key stakeholders, and the leadership. Having individual voices reflected in the plan helps build a sense of community and ownership, which motivates scientists to adopt and follow the strategic directions. Leadership buy-in is crucial for emphasizing the importance of the plan and encouraging faculty and senior researchers to engage — from early discussions through the adoption of the plan and related PES activities, particularly in ways that benefit their groups. Such a plan also has the potential to guide and strengthen the “broader impacts” of individual grants, if those grants are written to align with and be supported by the institution-level strategy.

    A strategic plan is meant to be a living document that is reviewed and updated to reflect new insights, opportunities, and challenges. We recommend reviewing progress as an internal team on a quarterly basis to reflect together on what has been accomplished to date and on the next steps needed. Our team has been working to implement and reflect on the plan for two years. In so doing, we have made very minor updates to the plan thus far, mostly to add small details that were missed in the first draft. Looking ahead, we anticipate that a comprehensive review and edit of the plan may be warranted every five years, based on what has been accomplished, organizational priorities that may shift over time, and updates to the literature that should inform evidence-based strategy. By maintaining a dynamic and adaptable plan, an institution can respond to changing needs and priorities, ensuring sustained engagement and impact.

In closing, we also would like to advocate for more PES researchers and practitioners to partner together to apply and explore research in practice. Though not the focus of the current paper, applying the GOTOG framework to guide strategic planning and evaluation efforts has resulted in an additional focus on the evaluation of tactics as a critical component of the framework. We are also beginning to understand the benefits and limits of tracking the use of tactics in PES activity design and implementation. More work is needed to explore the use of conceptual PES frameworks in action. We hope this practice insight provides examples of how this kind of work might be done, and the learning that can be gained for PES practitioners and researchers alike.

Acknowledgments

We thank the anonymous referees for their constructive comments that have strengthened this paper. The work presented here is supported by The Kavli Foundation. XD acknowledges support from The Kavli Foundation and KIPAC through grant S-2021-GR-033, which enables her OEM position at KIPAC. JCB further acknowledges support from the USDA-AFRI program (MICL02468). Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funders.

A Survey questions and training outline


Figure 5: Outline (top) and example agenda (bottom) of an in-person science communication training session for KIPAC members. The training materials were co-developed by professional science communication trainer Thi Nguyen and the OEM, with key input from the consultant team, emphasizing the “trustworthiness” goals in the strategic plan. (Credit: Thi Nguyen).

Table 2: Post-event survey questions and response scales for the KIPAC public lecture program. The survey collects data across various dimensions, including summative (satisfaction, trustworthiness) and formative (topical, programmatic) data and demographics.

References

Besley, J. C. & Dudo, A. (2022a). Strategic communication as planned behavior for science and risk communication: a theory-based approach to studying communicator choice. Risk Analysis 42 (11), 2584–2592. doi:10.1111/risa.14029

Besley, J. C. & Dudo, A. (2022b). Strategic science communication: a guide to setting the right objectives for more effective public engagement. U.S.A.: Johns Hopkins University Press.

Besley, J. C., Lee, N. M. & Pressgrove, G. (2021). Reassessing the variables used to measure public perceptions of scientists. Science Communication 43 (1), 3–32. doi:10.1177/1075547020949547

Borchelt, R. E. & Nielsen, K. H. (2014). Public relations in science: managing the trust portfolio. In Routledge Handbook of Public Communication of Science and Technology (pp. 74–85). Routledge.

Davis-Becker, S. L. & Buckendahl, C. W. (2013). A proposed framework for evaluating alignment studies. Educational Measurement: Issues and Practice 32 (1), 23–33. doi:10.1111/emip.12002

Fishbein, M. & Ajzen, I. (2010). Predicting and changing behavior: the reasoned action approach. Psychology Press. doi:10.4324/9780203838020

Friedman, A. J. (Ed.) (2008). Framework for evaluating impacts of informal science education projects. Arlington, VA, U.S.A.: National Science Foundation.

Garlick, S., Besley, J., Peterman, K., Black-Maier, A., Downs, M., Ortiz Franco, E., … Templer, P. (2024). Six elements of effective public engagement with science. Frontiers in Ecology and the Environment. Submitted.

Hon, L. C. (1998). Demonstrating effectiveness in public relations: goals, objectives and evaluation. Journal of Public Relations Research 10 (2), 103–135. doi:10.1207/s1532754xjprr1002_02

Jensen, E. (2014). The problems with science communication evaluation. JCOM 13 (01), C04. doi:10.22323/2.13010304

Mayer, R. C., Davis, J. H. & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review 20 (3), 709–734. doi:10.5465/amr.1995.9508080335

McCroskey, J. C. & Teven, J. J. (1999). Goodwill: a reexamination of the construct and its measurement. Communication Monographs 66 (1), 90–103. doi:10.1080/03637759909376464

Montano, D. E. & Kasprzyk, D. (2015). Theory of reasoned action, theory of planned behavior and the integrated behavioral model. In K. Glanz (Ed.), Health behavior: theory, research and practice (5th ed., pp. 67–96). Wiley-Blackwell.

National Academies of Sciences, Engineering and Medicine (2016). Communicating science effectively: a research agenda. U.S.A.: The National Academies Press. Retrieved from https://nap.nationalacademies.org/read/23674/chapter/1

National Research Council (2009). Learning science in informal environments: people, places and pursuits. U.S.A.: The National Academies Press.

Pellegrini, G. (2021). Evaluating science communication: concepts and tools for realistic assessment. In M. Bucchi & B. Trench (Eds.), Routledge handbook of public communication of science and technology (3rd ed.). doi:10.4324/9781003039242

Peterman, K., Garlick, S., Besley, J., Allen, S., Fallon Lambert, K., Nadkarni, N. M., … Wong, J. (2021). Boundary spanners and thinking partners: adapting and expanding the research-practice partnership literature for public engagement with science (PES). JCOM 20 (07), N01. doi:10.22323/2.20070801

Sörensen, I., Volk, S. C., Fürst, S., Vogler, D. & Schäfer, M. S. (2024). “It’s not so easy to measure impact”: a qualitative analysis of how universities measure and evaluate their communication. International Journal of Strategic Communication 18 (2), 93–114. doi:10.1080/1553118x.2024.2317771

VanDyke, M. S. & Lee, N. M. (2020). Science public relations: the parallel, interwoven and contrasting trajectories of public relations and science communication theory and practice. Public Relations Review 46 (4), 101953. doi:10.1016/j.pubrev.2020.101953

Wilson, L., Ogden, J. & Wilson, C. (2023). Strategic communication for PR, social media and marketing (8th ed.). Kendall Hunt.

Ziegler, R., Hedder, I. R. & Fischer, L. (2021). Evaluation of science communication: current practices, challenges and future implications. Frontiers in Communication 6, 669744. doi:10.3389/fcomm.2021.669744

Notes

1. We recognize that key terms such as “goals” and “objectives” are used in many different ways in closely related fields such as science communication and public relations. For example, some textbooks allow both “goals” and “objectives” to include behaviors [e.g., Wilson, Ogden & Wilson, 2023], while some reports do not make any distinction between behavioral and non-behavioral outcomes [e.g., National Academies of Sciences, Engineering and Medicine, 2016]. In this article, we define these terms based on the goals-objectives-tactics-objectives-goals (GOTOG) framework, where “goals” refer to behavioral outcomes while “objectives” are treated as desired cognitive or affective outcomes (see details in the “GOTOG framework” section below).

2. While some evaluation literature exists on the concept of “alignment” [e.g., Davis-Becker & Buckendahl, 2013], the alignment processes discussed in this work are primarily grounded in practice rather than in the literature. In fact, we are testing these alignment processes as part of the implementation plan for this project.

About the authors

Xinnan Du is the Outreach and Engagement Manager at The Kavli Institute for Particle Astrophysics and Cosmology at Stanford University. She is an astronomer who pivoted into the education realm and has formal training and leadership experience in science communication and public outreach. She has worked with numerous higher education institutions and community partners on a wide range of educational outreach initiatives.

E-mail: xinnandu@stanford.edu X: @XinnanDu

Karen Peterman is the President of Catalyst Consulting Group, formerly Karen Peterman Consulting, Co. She has more than 20 years of experience evaluating and studying STEM education projects, especially those that take place in informal learning spaces.

E-mail: karenpetermanphd@gmail.com X: @CONSULTKPC

John C. Besley is the Ellis N. Brandt Professor of Public Relations at Michigan State University. He has authored more than a hundred articles, chapters, and reports on public opinion about science and scientists’ views about communication.

E-mail: jbesley@msu.edu X: @JohnBesley

Allison Black Maier is an evaluator and applied researcher who works with STEM education programs to help them collect, manage, and analyze data to assess their outcomes and impacts.

E-mail: allison@consultwithcatalyst.com