5 Evaluation to Refine Goals and Demonstrate Effectiveness
Pages 69-90



From page 69...
... . As part of its formal information-gathering process, the committee commissioned this paper, which provides an extensive review of social science theory on evaluation for informal science learning.
From page 70...
... WHY EVALUATE? Evaluation, if begun at the outset of planning, can make communication events more effective at meeting their intended goals.
From page 71...
... Although evaluation is often conducted by trained professionals using specialized techniques, anyone can use basic evaluative approaches to inform the design and development of communication activities and to learn about their impact. OVERARCHING CONSIDERATIONS The data described in Chapter 3 revealed that chemistry communication events in informal environments vary greatly in objectives, activities, content, and participants.
From page 72...
... In the logic model, the outcomes are often described as short term, occurring within a few years of the event; midterm, occurring 4 to 7 years after the activity; or long term, occurring many years after an event has commenced. Communication events are often designed to encourage long-term impacts, such as future science participation or additional learning through subsequent experiences.
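The outcome horizons described above amount to a simple classification rule over elapsed time. The sketch below encodes it; the function name and exact year cutoffs are illustrative assumptions, since the text gives only rough bands (short term within a few years, midterm 4 to 7 years, long term beyond).

```python
def outcome_horizon(years_after_event: float) -> str:
    """Classify a logic-model outcome by time elapsed since the event.

    Bands follow the text's rough description: short term (within a few
    years), midterm (4 to 7 years), long term (beyond). The precise
    cutoffs used here are assumptions, not from the report.
    """
    if years_after_event < 4:
        return "short term"
    if years_after_event <= 7:
        return "midterm"
    return "long term"

print(outcome_horizon(2))   # short term
print(outcome_horizon(5))   # midterm
print(outcome_horizon(15))  # long term
```

A real logic model would attach specific measurable outcomes to each band rather than just labeling the interval.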
From page 73...
... Two evaluation frameworks developed through research and practice in informal science learning (Friedman, 2008; NRC, 2009) provide guidance in developing outcomes that meet these criteria.
From page 74...
... report Learning Science in Informal Environments details the first framework, six interrelated strands of science learning that can "serve as a conceptual tool" for both designing and evaluating informal learning experiences. The strands encompass learning processes and outcomes and can be used for any type of evaluation -- front end, formative, or summative.
From page 75...
... First, the breadth of each set of categories makes aggregation challenging. Second, the Friedman framework meets NSF's need to assess the impact of its AISL program on society as a whole, but the NRC framework places emphasis on the learning process within individuals.
From page 76...
... A list of example evaluation questions that could be adapted to a range of specific projects is provided in Box 5-3. The evaluation questions serve multiple purposes: assisting in targeting the important outcomes of a project and helping determine if the project's design and implementation are effective.
From page 77...
... Summative Evaluation Questions (to assess activity outcomes) • Did participants demonstrate increases in any of the intended outcomes of the activity, such as increases in interest or engagement, content understanding, or identifying as a science learner, related to the topic or issues addressed?
From page 78...
... Premature attempts to assess a project's summative outcomes can be meaningless or, worse, can limit chances, through formative evaluation, to continue or improve a promising event. Evaluation Design Evaluation design is the manner in which an evaluation is structured to collect data to answer the questions about a communication event's intended outcomes.
From page 79...
... Overall, summative evaluation of participants in informal science learning events requires planning, persistence, and sometimes luck (NRC, 2009)
From page 80...
... Based on the consideration of such questions, professional evaluators and researchers have successfully used various assessment methods to measure each of the six informal science learning outcomes identified by the NRC (2009)
From page 81...
... Table 5-2 illustrates this alignment, presenting a range of chemistry communication events organized by outcome, scale of effort, and types of setting and activity. The table suggests measurement and data collection strategies that might provide evidence of whether, and to what extent, participants achieved the intended outcomes, and hence whether the project succeeded.
From page 82...
... Websites, videos, broadcasts, and other media-based resources:
• Data analytics, posts and comments, responses on linked surveys or online forums regarding why they participated, what they liked, etc.
• Data analytics, posts and comments, responses on linked surveys regarding what they learned
• Participant information seeking, registering on sites, liking pages or posts, reposting online, reposts (e.g., Twitter), information seeking, ...
• Follow-on interviews or surveys regarding postexperience activities, responses on brief surveys regarding ...
From page 83...
... Broader, systematic communication efforts
Public programming or performances:
• Responses in brief surveys or interviews regarding why they participated, what they liked, etc., appropriate science interest assessments from ATIS
• Content knowledge assessments such as the one used for The Amazing Nano Brothers Juggling Show
• Participant information seeking, verbal descriptions of plans or ambitions, and responses on brief surveys regarding what they might do differently based on participation
• Follow-on interviews or surveys regarding behaviors and attitudes specific to the communication goals, appropriate attitudinal assessments from ATIS or other sources
Ongoing programming in after-school programs, museums, or public settings:
• Responses in surveys or interviews regarding why they participated, what they liked, etc., appropriate science interest assessments from ATIS
• Content knowledge assessments carefully aligned with the experiences and objectives of the programming
• Responses in surveys or interviews regarding choice of activities, courses, or careers, appropriate science attitudinal assessments from ATIS or other sources
• Follow-on interviews or surveys regarding behaviors and attitudes specific to the communication goals, appropriate attitudinal and behavioral assessments from ATIS or other sources
NOTE: ATIS, Assessment Tools in Informal Science. SOURCE: Michalchik, 2013.
From page 84...
... Some forms of bias that are relevant to chemists and organizations conducting communication events include the following: • The social desirability factor. Participants, like all people, will lean toward telling a friendly interlocutor what the interlocutor wants to hear.
From page 85...
... Or, a chemist involved in an ongoing communication program may conduct a formative evaluation, and these findings should be summarized in writing to inform improvements in the program. EXAMPLES OF CHEMISTRY COMMUNICATION EVALUATION The communication efforts described in the following two cases illustrate how different the challenges of evaluating chemistry communication can be.
From page 86...
... EFFECTIVE CHEMISTRY COMMUNICATION IN INFORMAL ENVIRONMENTS • One hundred views could represent one person watching the video repeatedly from different computers or mobile devices, whereas one view could represent a teacher showing it once to hundreds of students. • Age and gender profiles rely on data that viewers provide, perhaps inaccurately.
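The caveat above — that raw view counts conflate repeat viewers with group screenings — can be made concrete with a small calculation. The per-view log format and field names below are invented for illustration; real platform analytics expose different, and usually coarser, data.

```python
# Hypothetical per-view log: (viewer_id, estimated_audience_size).
view_log = [
    ("viewer_a", 1), ("viewer_a", 1), ("viewer_a", 1),  # one person, three views
    ("teacher_x", 30),                                  # one view shown to a class
]

total_views = len(view_log)                         # what a raw view counter reports
unique_viewers = len({vid for vid, _ in view_log})  # distinct accounts/devices

# Estimate reach by counting each viewer's largest reported audience once.
audience = {}
for vid, size in view_log:
    audience[vid] = max(audience.get(vid, 0), size)
estimated_reach = sum(audience.values())

print(total_views, unique_viewers, estimated_reach)  # 4 2 31
```

The three numbers diverge sharply, which is the point of the caveat: none of them alone says how many people the video actually reached.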
From page 87...
... Example survey items are shown in Box 5-4. The preshow survey involved approaching individual attendees and asking them to participate in a brief survey about the show.
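A pre/post survey design of this kind is often summarized as a mean shift on each item. The sketch below shows the arithmetic; the Likert scores are fabricated placeholders, the samples are unmatched for simplicity, and a real evaluation would pair respondents and apply an appropriate significance test.

```python
from statistics import mean

# Fabricated 5-point Likert responses to one survey item, collected
# before and after the show (placeholder data, not from the report).
pre_scores = [2, 3, 3, 4, 2]
post_scores = [4, 4, 3, 5, 4]

shift = mean(post_scores) - mean(pre_scores)
print(f"mean shift: {shift:+.2f}")  # mean shift: +1.20
```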
From page 88...
... Teachers found that it reinforced classroom lessons and correlated well with science standards. The theatrical techniques supported learning potential by engaging participants.
From page 89...
... 2013. Communicating Chemistry in Informal Environments: Evaluating Chemistry Outreach Experiences.
From page 90...
... 2010. Surrounded by science: Learning science in informal environments.

