Defining the Measures for the ROI Evaluation

The Leader. The Organization. The Business.  

Define the Measures: How Will You Know When You Get There?

Business Results

Use four basic categories to measure business results: time, money, quality, and quantity. State the business results as goals, the critical processes for achieving them, and the time frame.

Money: Sales, Revenue, Profit, Discounts, Employee absenteeism, Employee retention.

Time: Project length, Production time, Downtime, Task time, First-to-market innovation, Approval time.

Quality: Meet quality standards, Defects, Prevention or relapse, Customer satisfaction, Meet environmental standards, and Positive media visibility.

Quantity: Production, Number of services, Service volume, Customers, Inventories, Market share.

Step 1 of the ROI Methodology

Start with Why: Align Programs with the Business. The “why” of programs is the business need, expressed as an exact business measure. Next, pinpoint one or more business measures already in the system that should improve due to the program.

Payoff Needs are opportunities for the organization to make or save money, avoid costs, or do a greater good. Identify payoff needs with the questions:

Is this program worth doing?

Is this a problem worth solving?

Is this an opportunity worth pursuing?

Will this new program add enough value to offset its cost?

These questions ensure a program aligns with the organization’s needs.

Of course, a program’s ultimate payoff is profit, cost savings, or cost avoidance.

The payoff is captured by level-5 ROI evaluation, which compares the program’s monetary benefits to its costs. It is expressed as a Benefit-Cost Ratio (BCR), ROI (net return as a percentage of costs), or payback period. ROI is a financial metric.
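As a minimal sketch of the arithmetic behind the three payoff expressions (the function name and all figures below are hypothetical, not from the source):

```python
def payoff_metrics(benefits: float, costs: float, months: int = 12):
    """Compute the three level-5 payoff expressions.

    benefits: total monetary program benefits over the evaluation period
    costs:    fully loaded program costs over the same period
    months:   length of the evaluation period, for the payback estimate
    """
    bcr = benefits / costs                  # Benefit-Cost Ratio
    roi = (benefits - costs) / costs * 100  # ROI as a % of net return
    payback = costs / (benefits / months)   # months needed to recover costs
    return bcr, roi, payback

# Hypothetical example: $375,000 in benefits against $300,000 in costs.
bcr, roi, payback = payoff_metrics(375_000, 300_000)
print(f"BCR = {bcr:.2f}, ROI = {roi:.0f}%, payback = {payback:.1f} months")
```

Note that a BCR of 1.25 and an ROI of 25% describe the same outcome: the BCR compares total benefits to costs, while ROI reports only the net gain as a percentage of costs.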

Business Needs link directly to payoff needs: monetary value is added by improving business measures, such as raising productivity and quality and increasing efficiency by saving time and reducing costs.

Business improvement measures are captured by level-4 Impact evaluation, focusing on the impact of implementing programs and processes expressed as improved business measures directly linked to the program or project. Business improvement measures are captured in Revenue, Productivity, Quality, Efficiency, Customer Satisfaction, Retention, Incidents of accidents, Jobs created, Graduation rates (for schools), and Infant mortality (for societal health policy).

Tangible Business Data Categories and Examples 

Output Variables (Quantity) 

Citizens Vaccinated, Units Produced, Income Increased, Items Assembled, Money Collected, Licenses Issued, New Accounts Generated, Forms Processed, Loans Approved, Inventory Turnover, Inspections Made, Applications Processed, Patients X-Rayed, Students Graduated, Permits Issued, Projects Completed, Jobs Secured, Productivity, Patients Discharged, Shipments.

Cost Variables (Money) 

Budget Variances, Unit Costs, Unemployment Costs, Fixed Costs, Variable Costs, Overhead Costs, Operating Costs, Education Costs, Accident Costs, Program Costs, Shelter Costs, Treatment Costs, Participant Costs, and Cost Per Day.

Time Variables

Length of Stay, Cycle Time, Equipment Downtime, Overtime, On-Time Shipments, Project Time, Processing Time, Supervisory Time, Time to Proficiency, Time to Graduate, Meeting Schedules, Repair Time, Time to Replace, Work Stoppages, Response Time, Late Times, Lost Time Days, Wait Time.

Quality Variables

Readmissions, Failure Rates, Dropout Rates, Scrap, Waste, Rejects, Error Rates, Rework Required, Complications, Shortages, Product Defects, Deviations from Standard, Product Failures, Inventory Adjustments, Infections, Incidents, Compliance Discrepancies, Agency Fines, Accidents.

Intangible Business Data Categories and Examples


Teamwork, Collaboration, Networking, Communication, Alliances, Decisiveness, Caring, and Compassion.

Work Climate, Work Satisfaction

Grievances, Discrimination Charges, Employee Complaints, Job Satisfaction, Organization Commitment, Employee Engagement, Employee Loyalty, Intent to Leave, and Stress.

Initiative, Innovation

Creativity, New Ideas, Suggestions, Trademarks, Copyrights, Patents, Process Improvements, and Partnerships.

Client Service

Client Complaints, Client Satisfaction, Client Dissatisfaction, Client Impressions, Client Loyalty, Client Retention, Client Value, and Client Lost.

Development, Advancement

Promotions, Capability, Intellectual Capital, Programs Completed, Certifications Held, Transfers, Performance Appraisal Ratings, Readiness, and Development Assignments.

Image, Reputation

Brand Awareness, Reputation, Impressions, Social Responsibility, Environmental Friendliness, Social Consciousness, Diversity, Inclusiveness, and External Awards.

Step 2 of the ROI Methodology

Make it Feasible: Select the Right Solution. First, determine how to improve the business measure by identifying the cause of the problem or exploring various approaches to address an opportunity. Then, to identify and implement the best solution to address the business need, address three questions:

First, what must change to influence the impact measure?

Second, what can enable this change?

Third, what is the best solution?

Performance Needs

Business impact measures reveal problem or opportunity areas and indicate where a business contribution is needed; the task is to seek the causes of problems (or the drivers of opportunities) and uncover solutions that influence the business need. For example, a proactive approach may examine the data and records; initiate a discussion with the client; use benchmarking from similar solutions; use evaluation as a hook to secure more information; involve others in the discussions; discuss disasters in other places; and discuss the consequences of not having business alignment.

Performance improvement is captured by level-3 Application and Implementation evaluation, focusing on the application and use of knowledge, skills, and competencies, including progress made and implementation success. Performance improvement is captured in terms of Behaviors used, Extent of use, Task completion, Frequency of use, Actions completed, Success with use, Barriers to use, Enablers to use, and Engagement.

Learning Needs

Uncovered performance needs often require a learning component to ensure all parties know what to do and how to do it as the performance is delivered. Learning sometimes becomes the principal solution, as in competency development, major technology changes, and system installations. For other programs, learning is a minor solution and often involves simply understanding the solution, such as a process, procedure, or policy. For example, when implementing a new ethics policy for an organization, the learning component requires understanding how the policy works and the participants’ role in it. In short, a learning solution is only sometimes needed, but all solutions have a learning component.

Learning is captured by level-2 Learning evaluation, which focuses on knowledge and skills gained, including developing concepts and using skills and competencies to drive program success. Learning is captured in terms of Skills, Learning, Knowledge, Capacity, Competencies, Confidence, and Contacts.

Preference Needs

The final needs analysis level is based on preferences, which drive the program requirements. For example, individuals prefer certain content, processes, schedules, or activities for the program’s structure. These preferences define how the particular program is implemented: whether the program solves a problem or takes advantage of an opportunity, this step defines how the solution will be implemented and how participants should perceive its value.

Preference is captured by level-1 Reaction and Planned Action evaluation, which focuses on reaction to the programs, including participants’ perceived value and planned action to make them successful. Preference is captured in terms of Relevance, Importance, Usefulness, Appropriateness, Intent to use, Motivational, and Recommended to others.

An additional level of data is level-0 Input evaluation with a measurement focus on input into programs, including indicators representing scope, volumes, times, costs, and efficiencies. This data is captured in specific measures of Types of programs, Number of programs, Number of people involved, Hours of involvement, and Costs.

MetaImpact Framework

An integrative framework that measures 4 Types of Impact:

Clear Impact measures change in stakeholder performance with a focus on objective criteria to track behavior and performance using metrics such as skill assessments, analytics, observation tools, and various KPIs.

High Impact measures change in stakeholder systems with a focus on inter-objective or systemic criteria to track organizational and market dynamics using metrics such as environmental impact assessments, financial impact assessments, input indicators, and various KPIs. Stakeholder systems include supply chains, cash flow, and customer engagement.

Wide Impact measures change in stakeholder relationships with a focus on intersubjective criteria to track the quality and quantity of relationships and their influence using metrics such as 360 assessments, relationship mapping, interviews, and social impact assessment.

Deep Impact measures change in stakeholder experience with a focus on subjective criteria to track somatic, emotional, and psychological dimensions of experience using metrics such as self-evaluations, psychometrics, satisfaction surveys, and happiness inventories.

Each type of impact can be expressed as types of capital, each of which can be measured as three types of data. 

First-person data involve subjective methods such as self-report and self-awareness practices and are used to qualitatively study how people feel, think, and experience the world. 

Second-person data involve intersubjective methods such as focus groups and interviews and are used to qualitatively study how people relate to and communicate with each other. 

Finally, third-person data involve objective methods such as observation and statistical analysis and are used to quantitatively study how people and systems behave and function.

The 10 Capitals and their measurement combine into 4 Bottom Lines.

The Profit bottom line focuses on the exterior dimensions of individual human beings and social and natural systems. It includes five capitals:

Clear Impact capitals

Health Capital ~ energized vitality and vigor

Human Capital ~ skillful doing of tasks

High Impact capitals

Manufactured Capital ~ innovate productive equipment and resources

Financial Capital ~ flexibly acquire resources

Natural Capital ~ enrich the environment and use resources sustainably.

The People bottom line focuses on individual human beings’ interior and exterior dimensions. It includes five capitals:

Deep Impact capitals

Knowledge Capital ~ creatively utilizing and arranging information

Psychological Capital ~ efficiently marshaling abilities

Spiritual Capital ~ protectively stewarding the planet and human dignity for future generations

Clear Impact capitals

Health & Human Capital

These five capitals capture the various forms of value associated with supporting human beings in their full multi-dimensionality; their rich interior life, physical wellbeing, and skills and behaviors.

The Planet bottom line focuses on the interior and exterior dimensions of the planet. It includes five capitals:

High Impact capitals

Manufactured & Financial & Natural Capital

Wide Impact capitals

Cultural Capital ~ embracing abundance in diversity

Social Capital ~ collaboratively relating with people

The Purpose bottom line focuses on the interior dimensions of People and the Planet. It includes five capitals:

Wide Impact capitals

Cultural & Social Capital

Deep Impact capitals

Knowledge & Psychological & Spiritual Capital

The Impact Conversation

Business Impact; Effectiveness Impact; Performance Impact; Learning Impact; Wellbeing Impact

“What do you want?” is equivalent to “What do you think your business needs (to change or improve)?”

Use measures, scales, and indicators to establish benefits and improvement. Use 

complex dynamic systems thinking for process design and internal organization;

dialectical thinking for strategy design and external market adaptation;

full capacity thinking and embodying for edge walking, innovative living, and actualizing potentials. 

The Business, The Organization, and The Leader form an Organized Whole

Business results are realized through individuals’ actions and interactions. These reveal the Performance Needs and the Application and Implementation of knowledge, skills, and competencies. Performance is filtered through Capacities, pointing to Learning Needs.

Effective enterprise is built through effective behavior. To achieve aimed business results, the organization (the people) must exhibit effective interpersonal behaviors in their key work relationships. This requires working more effectively with others and transforming ineffective patterns co-created in interactions into effective interactional patterns.

Measurable goals on the leader and team factors lists must be specific, observable, and repeatable. For example, the leader’s interpersonal behaviors include all the basics of good managing. Likewise, the team’s interactions include all the basics of good teamwork.

These two lists contain interactional verbs, actions directed toward someone. A specific goal clarifies which behaviors serve the goal. For example, the leader and team lists identify the critical actions to be taken daily to achieve business success. If the leader and the team pursue these behaviors during daily interactions, could someone watching a movie of these interactions identify the desired behaviors and accurately describe them? Could an observer recognize the behavioral expectations set forth by the leader?

The executive and the coach assess the frequency and quality of these interactions over time. For example, how many times do they see these behaviors in meetings? Are they consistently and effectively used in every meeting, whether the leader is present or not? Are expectations clear, and do team members seek the leader’s clarification?

Each situation is unique; therefore, the behaviors are customized and built from the ground up, convincing the client that this combination will make a difference.

Three Strategies for Exercising Role Authority and Achieving Alignment and Commitment

Strategy #1 Help leaders clarify the responsibilities of their Role Authority. 

Role authority is the power to create and protect boundaries; create goal, role, and decision clarity; insist on commitment; ensure implementation; manage performance; and create an environment of maximum team influence.

Do This #1: List which role authority jobs each leader must clarify.

Team Influence Power involves this:

The team can advocate for change that creates better processes, engagement, and results.

They get clear on direction rather than remaining passively compliant without understanding the implications of what’s needed. 

They share their concerns about the direction of the goals and decisions.

They declare their commitment level and what they need to fully commit.

They initiate problem-solving before and during implementation.

Strategy #2 Invite the leader to explore their internal reactions to using Authority. 

Responses to Role Authority. 

Do This #2: Write down what leaders told you about their reaction to authority or, if they didn’t tell you, your best guess.

Internal reaction to using Role Authority:

@1 uncomfortable with using authority

@10 comfortable with using authority

Action in response to using Role Authority:

@1 unclear and hands-off about goals and decisions – Low in clarity of direction and invitation to participate

@10 clarifies and communicates goals and decisions effectively – High in clarity of direction and invitation to participate

Strategy #3 Help leaders bring backbone and heart to effective alignment discussions

True Alignment requires that people are fully committed to delivering; are clear on expectations; articulate a range of reactions; identify their level of commitment; and work through the obstacles behind their reservations, with one-to-one focus on impediments to commitment.

Do This #3: Rate how well each leader does when they hold Alignment Meetings. 

Step 3 of the ROI Methodology

Expect Success: Plan for Results. Define success for the program by setting objectives at multiple levels (Reaction, Learning, Application, Impact, and maybe ROI), defining responsibilities of all stakeholders, and completing a data collection plan, ROI analysis plan, and evaluation project plan.

The data collection plan answers fundamental questions about data collection: What, How, Who, When, Where, and How Much?

The ROI analysis plan details how improvement in business measures will be isolated to the project and converted to monetary value and identifies cost categories, intangible benefits, and target audiences for communications.

The evaluation project plan details each step of the evaluation.

Developing Objectives

Reaction Objectives may be described as, “Participants are engaged, enjoy the programs, and see their experience as valuable.” Typical Reaction Objectives: At the end of the program, participants should rate each of the following statements at least a 4 out of 5 on a 5-point scale:

The program is relevant to the target audience’s needs.

The facilitators/organizers responded to my questions clearly.

The program is valuable to this mission, cause, or organization.

The program is important to my (our) success.

The program is motivational for me (us).

The program is practical.

The program contained new information.

The program represented a good use of my time.

I will recommend the program to others.

I will use the concepts and materials from this program.

Learning Objectives may be described as, “Participants are learning the latest information and skills to make this program successful.” Learning objectives can have three components:

Performance – what the participant or stakeholder will be able to do as a result of the program.

Conditions – the circumstances under which the participant or stakeholder will perform the various tasks and processes.

Criteria – the degree or level of proficiency necessary to perform a new task, process, or procedure that is part of the solution.

Typical Learning Objectives: After the program, participants will be able to:

Name the three pillars of the new (..) strategy in 3 minutes.

Identify the four conditions for a micro-finance loan.

Identify the six features of the new ethics policy.

Identify the five new technology trends presented at the conference.

List five benefits of healthy living.

List 7 out of 10 harmful effects of pollution.

Explain the five benefits of diversity in a workgroup in 5 minutes.

Successfully complete the leadership simulation in 15 minutes.

Demonstrate the use of each software routine in the standard time.

Demonstrate all five customer-interaction skills with a success rating of 4 out of 5.

Use problem-solving skills, given a specific problem statement.

Follow the steps to secure a job in 30 days.

Score 75 or better in 10 minutes on the new-product quiz on the first try.

Score at least 9 out of 10 on a (..) policy quiz.

Application Objectives may be described as, “Participants take action, use the content, and make important changes.” Typical Application Objectives: When the program is implemented:

Participants will be involved in five job interviews within one month.

Within 15 days, participants will apply for a micro-finance loan.

Ninety-five percent of high-potential employees will complete individual development plans within two years.

Participants will routinely use problem-solving skills when faced with a quality problem.

Customer service representatives will use all five interaction skills with at least half the customers within the next month.

At least 50% of participants will join a hiking/walking group in 20 days.

The average 360-degree assessment score will improve from 3.4 to 4.1 on a 5-point scale in 90 days.

At least 99% of software users will follow the correct sequence after three weeks.

Impact Objectives may be described as “Participants are driving important impact measures and having an impact in their work, community, or organization.” Typical Impact Objectives: After program completion, the following conditions should be met:

After nine months, grievances should be reduced from 12 per month to no more than two per month.

Turnover of high-potential employees should be reduced to 10% in nine months.

The average number of new accounts should increase from 300 to 350 per month in six months.

Unplanned absenteeism of call center associates should decrease by 20% within the next calendar year.

By the end of the year, the average number of product defects should decrease by 30%.

Operating expenses should decrease by 10% in the fourth quarter.

There should be a 10% increase in brand awareness among physicians during the next two years.

Product returns per month should decline by 15% in six months.

Payoff Objectives may be described as, “Participants and the organization have a positive return on the investment of their time and the resources for this program.”

Data Collection Plan for the Coaching for Business Impact Program

Objectives; Data and Measures; Data Collection Method; Data Sources; Timing; Responsibilities;

Objectives Level 1 Reaction and Planned Action

Relevance and importance to the job

Coach’s effectiveness

Recommendation to others

Objectives Level 2 Learning

Uncovering strengths, weaknesses

Translating feedback into action

Involving team members

Communicating effectively

Objectives Level 3 Application and Implementation

Complete and adjust the action plan

Identify barriers and enablers

Show improvements in skills

Objectives Level 4 Impact

Sales growth

Productivity, efficiency

Direct cost reduction

Retention of key staff

Customer satisfaction

Objectives Level 5 ROI

25% ROI

Measures and data: 4 out of 5 on a 1 to 5 rating scale; Checklist for the action plan; monetary values and business data for level 4 impact

Data collection method: Questionnaire, Action Plan

Data sources: Executives, Coach

Timing: Two months and six months after the engagement begins

Responsibilities: L&D staff

Comments: Executives are committed to providing data. They fully understand all data collection issues before engaging in the coaching assignment.

ROI Analysis Plan for the Coaching for Business Impact Program

Data items Level 4 Impact

Sales growth, Productivity or Operational efficiency, Direct cost reduction, Retention of key staff members, Customer satisfaction

Methods for isolating the effects of the project

Estimates from executives (the method is the same for all data items)

Methods for converting data to money

Standard values, Expert input, and Executive estimates (the three methods are the same for all data items)

Cost categories

Initial needs assessment, Coaching fees, Travel costs, Executive time, Administrative support, Administrative overhead, Telecom expenses, Facilities, Evaluation

Intangible benefits

Increased commitment, Reduced stress, Increased job satisfaction, Improved customer service, Improved teamwork, Improved communications

Communication targets for the final report

Executives, Coaches, Senior Executives, Coaching supplier firm, L&D staff, Learning and Development Council, Prospective participants for continuous business integration

Other influences and issues during the application

E.g., various initiatives will influence the impact measures, including our Six Sigma process, service excellence project, and our effort to become a great workplace.


Securing commitment from executives to provide accurate data promptly is important.

Evaluation Project Plan for the Coaching for Business Impact Program

Gantt Chart or simple table with monthly progress

Decide to conduct ROI study, Complete evaluation planning, Design instruments, Pilot test instruments, Collect data from group A, Collect data from (control) group B, Summarize data, Conduct analysis, Write and print reports, Communicate results, Initiate improvements, and Complete improvements.

Step 4 of the ROI Methodology

Make it Matter: Design for Input, Reaction, and Learning. Develop programs with relevant, meaningful content important to the individuals and the organization and something they will use. Provide participants with examples, activities, and exercises that reflect what the participants are learning, what they will do with what they’ve learned, and the impact it will have. Two types of outcome data are collected at this step: level-1 Reaction and level-2 Learning.

Program input (who’s involved) measures input into programs, including the number of programs, participants, audience, costs, and efficiencies.

Reaction (how participants perceive it) measures reaction to and satisfaction with the experience, ambiance, content, and perceived value of the learning.

Learning (what participants will learn) measures what participants learned in the program, such as information, knowledge, skills, and contacts (takeaways from the program).

Data Collection for Reaction and Learning Evaluation

Data are captured through various measurement processes, from formal testing to informal self-assessments.

Surveys and questionnaires determine the extent to which participants have acquired skills, knowledge, and information.

Facilitation assessments are ratings from facilitators or project leaders based on observations during the project.

Written tests and exercises measure changes in knowledge and skills.

Skill practices help assess the degree of applied learning and acquisition of problem-solving skills.

Performance demonstrations directly evaluate the ability to apply knowledge and skills.

Simulations enable the assessment of skills and knowledge acquisition.

Team assessments assess the extent of skills and knowledge acquisition.

Skill/confidence-building exercises are an interactive approach to capturing skill and knowledge levels.

Learning Topics

Skills, Knowledge, Competency, Capacity, Readiness, Confidence, Awareness, Networking, and Information.

Reaction and Planned Action Topics

Necessary, Intent to use, Relevant, Important to success, Recommended to others (these usually correlate with application and implementation).

Ready, Useful, Appropriate, Motivational, Rewarding, Leading-edge, Important, Enjoyable, Timely, Easy or difficult, Engaging, Environment, Good use of funds, Information, Facilities, Facilitator, Practical, Valuable, Overall evaluation.

Input Topics (This program must be:)

Conducted with at least 100 participants per month. (Parameter: Volume, Staffing)

Implemented as a pilot project only. (Scope)

For individuals testing positive for the virus. (Audience, Coverage)

Completed by 1 September. (Timing)

Completed in less than three months. (Duration)

Delivered at a cost of less than $1,000 per participant. (Budget, Efficiency)

Designed to cover all micro-financing options. (Content)

Implemented to support new revenue for the university. (Origin)

Implemented with blended learning. (Delivery)

Conducted in each West African country. (Location)

Implemented without disruption of family activities. (Disruption)

Implemented using virtual reality. (Technology)

Implemented with no more than 50% outsourcing. (Outsourcing)

Step 5 of the ROI Methodology

Make it Stick: Design for Application and Impact

Collect data to identify and enhance the enablers of the program’s success and identify and eliminate the barriers to the program’s success. Two types of data are collected at this step: level-3 Application and level-4 Impact.

Application and Implementation measures progress after the program: the use of information, knowledge, skills, and contacts. This is the connection to performance impact. Use surveys, questionnaires, observation, interviews, focus groups, action planning, and performance contracting as follow-up methods to collect data after the program has been implemented.

Impact measures the changes in business impact variables such as output, quality, time, and cost linked to the program. This is the connection to business impact. Use questionnaires, action planning, performance contracting, and performance monitoring as follow-up methods to collect data after implementing the program.

Action plans are developed by participants during the program and are implemented after the program is completed. Follow-up on action plans provides evidence of application and business impact success.

Performance contracts are developed by the participant, the participant’s manager, and the facilitator, who all agree on performance outcomes.

Performance monitoring is useful where various performance records and operational data are monitored for changes.

Surveys and questionnaires determine the extent to which participants have used the various aspects of the program, such as skills, knowledge, and information.

Observation captures actual skill application and use.

Interviews are conducted to determine how extensively the program is used.

Focus groups are conducted to determine the extent to which the program is used.

Questionnaire Topics for Application and Impact

Use of materials, guides, and technology

Change in behavior

Extent of use, Actions taken, Procedures followed

Task completion, Actions completed

Application, Frequency of use, Success with use of knowledge and skills

Barriers to, Enablers of, and Support for implementation and use


Improvements and accomplishments

Linkage with output measures (Sales growth, Productivity or Operational efficiency, Direct cost reduction, Retention of key staff members, Customer satisfaction)

Improvements linked to the program

Monetary impact of improvements

Perceived value of the investment

Confidence level of data supplied


Increasing Response Rates

To improve response rates for post-program data collection:

Provide advance communication about the follow-up data collection.

Review the instrument at the end of the formal session.

Communicate the reason for the evaluation and how the data will be used.

Indicate who will see the results.

Keep the instrument simple and brief.

Keep responses anonymous or at least confidential.

Communicate the time limit for submitting responses.

Have the introduction letter signed by a top executive.

Use local managers to distribute the instruments, show support, and encourage response.

Send a summary of the results to the target audience.

Let the participants know what actions will be taken with the data.

Break the instrument into parts (Reaction, Learning, Application, Impact).

Data Collection Issues for Application and Impact Evaluation

Sources of Information for Program Evaluation

Participants, Manager of participants, Direct reports of participants, Peer group, Internal staff, External group, and Organizational performance records.

Factors to Consider when Selecting Data Collection Methods

Time required for participants, Time required for participants’ Manager, Costs of the method, Amount of disruption of normal activities, Accuracy, Utility, Culture/Philosophy.

Factors to Consider when Determining the Timing of Follow-up

Availability of data, Ideal time for application (level 3), Ideal time for business impact (level 4), Convenience of collection, and Constraints on collection.

Sequence of Activities for Action Planning


Communicate the action plan requirement early.

Require one or more impact measures to be identified by participants.


Describe the action planning process.

Allow time to develop the plan.

Teach the action planning process.

Have the facilitator approve the action plan.

Require participants, with some assistance, to assign a monetary value to each proposed improvement.

If possible, require action plans to be presented to the group.

Explain the follow-up mechanism.

Require participants to provide improvement data.

Ask participants to isolate the effects of the program.

Ask participants to provide a level of confidence for estimates.

Collect action plans at the pre-determined follow-up time.

Summarize the data and calculate the ROI (optional).

Report results to sponsor and participants.

Use results to drive improvement.

Example Action Plan

Name: ____  Facilitator: ____  Follow-up Date: ____

Objective: ____  Evaluation Period: ____

Improvement Measure: ____  Current Performance: ____  Target Performance: ____

Action Steps…

    1. What is the unit of measure?
    2. What is the value (cost) of one unit?
    3. How did you arrive at this value?
    4. How much did the measure change during the evaluation period? (monthly value)
    5. What other factors could have caused this improvement?
    6. What percent of this change was caused by this program?
    7. What level of confidence do you place in the above information? (100% = certainty; 0% = no confidence)
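The arithmetic implied by questions 2–7 can be sketched as follows. This is a hedged illustration, not an official formula from the text: the function name and all figures are hypothetical, and it assumes the common practice of annualizing the monthly change and then discounting it by the isolation and confidence percentages.

```python
# Hypothetical sketch of the adjustment applied to action-plan data.

def adjusted_annual_value(unit_value, monthly_change, months_per_year,
                          isolation_pct, confidence_pct):
    """Annualize a measured change, then discount it by the share of the
    change attributed to the program and by the respondent's confidence."""
    annual_improvement = unit_value * monthly_change * months_per_year
    return annual_improvement * isolation_pct * confidence_pct

# Example: one unit of measure is worth $150 (Q2); the measure improved by
# 10 units per month (Q4); the participant credits 40% of the change to the
# program (Q6) with 80% confidence (Q7).
value = adjusted_annual_value(150, 10, 12, 0.40, 0.80)
print(value)  # 5760.0
```

Applying both percentages keeps the claim conservative: only the portion of the improvement the most credible source attributes to the program, further reduced by their stated confidence, enters the ROI calculation.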

Step 6 of the ROI Methodology

Make it Credible: Isolate the Program’s Effects. Identify the amount of impact directly connected to the program.

Without this step, there is no proof that the program is connected to a business measure.

Control groups are used to isolate the program's impact: one group receives the program while a similar group does not, and the performance of both groups is monitored over a parallel timeframe.

Supervisors or managers estimate the impact of the program on the output variables. Estimates are also adjusted for error.

Experts provide an estimate of the impact of the program on the performance variable based on previous studies.

Customers estimate how the program has influenced their decisions to purchase or use a product or service.

Collect estimates from the most credible source. Start with facts (actual improvement).
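The control-group technique described above can be sketched as a simple difference-in-changes calculation; the group figures here are hypothetical, and this assumes the two groups are well matched.

```python
# Minimal sketch of control-group isolation: the program's effect is the
# treated group's improvement minus the change observed in a similar,
# untreated control group over the same period.

def program_effect(exp_before, exp_after, ctrl_before, ctrl_after):
    exp_change = exp_after - exp_before
    ctrl_change = ctrl_after - ctrl_before   # change due to other factors
    return exp_change - ctrl_change          # change attributed to the program

# Both groups start at 100 units; the program group rises to 130 while the
# control group rises to 110 on its own.
print(program_effect(100, 130, 100, 110))  # 20
```

Only the 20-unit difference, not the full 30-unit improvement, is credited to the program; the control group's 10-unit change represents what would have happened anyway.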

Step 7 of the ROI Methodology

Make it Credible: Convert Data to Monetary Values. Convert the improvement in business measures to monetary value using techniques such as standard values, historical costs, external databases, or expert estimation.

Participants’ wages plus employee benefits are used to develop the monetary value of saved employee time. The time saved must be legitimate: the savings count only when the recovered time is used for other productive work.
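A hedged sketch of the time-savings conversion, assuming a fully loaded labor rate of wages plus a benefits factor; the rate and hours are hypothetical.

```python
# Converting legitimately saved time to money using a fully loaded labor
# rate (hourly wage plus an employee-benefits factor).

def value_of_time_saved(hourly_wage, benefits_factor, hours_saved):
    """Fully loaded hourly rate times productive hours recovered."""
    loaded_rate = hourly_wage * (1 + benefits_factor)
    return loaded_rate * hours_saved

# $30/hour wage, 35% benefits factor, 200 hours redirected to productive work
print(value_of_time_saved(30, 0.35, 200))  # 8100.0
```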

Step 8 of the ROI Methodology

Make it Credible: Identify Intangible Measures. Intangible benefits are project benefits that cannot be converted to money credibly with minimal resources, such as image, teamwork, and employee engagement. 

Intangibles of Positive Organizational Behavior

Positive Psychological Capital

Positive Strength

Communication Competence, Interpersonal Interdependence

Positive Emotion

Ethical Leadership

Political Skill

Forgiveness in Organizations

Self-Engagement at Work

Positive Core Self-Evaluations

Attitudinal and Behavioral Outcomes

Psychological Ownership

Organizational Commitment

Identification, Internalization

Organizational Citizenship Behavior

Job Satisfaction

Fairness Perception

Meaningfulness at Work; Task Variety, Task Identity, Task Significance, Job Challenge

Task Autonomy

Task Feedback

Group Cohesion

Role Stress; Role Conflict, Role Ambiguity

Job Characteristics Model: The Motivating Potential of a Job

A job’s five core dimensions affect employees’ psychological states, leading to certain personal and work outcomes:

Meaningfulness at Work consists of

(a) Task variety – The degree to which the job utilizes various employee skills and talents to accomplish diverse work activities.

(b) Task identity – The degree to which the job allows the employee to complete a work product from beginning to end with visible results.

(c) Task significance – The degree to which the job has a substantial impact on the lives or work of other people.

Autonomy – The degree to which the job provides substantial freedom, independence, and discretion to the individual in scheduling the work processes.

Feedback – The degree to which an individual can obtain direct and clear information regarding the effectiveness of performing work tasks from actual job outcomes (the work product itself) or from agents (e.g., managers, co-workers, clients, customers).

Effects of job characteristics on job satisfaction and organizational commitment vary with context, profession, and personal factors.

Intangibles of Positive Psychological Capacities

Self-efficacy, Hope, Optimism, Resilience

Cognitive: Creativity, Wisdom

Affective: Subjective Wellbeing, Flow, Humor

Social: Gratitude, Forgiveness, Emotional Intelligence, Spirituality

Higher-order: Authenticity, Courage

Identifying Intangibles

During the needs assessment, measures identified as directly connected to the program, but for which a decision is made not to convert them to monetary values, are listed as intangibles.

In the planning phase of the ROI study, intangible measures are often suggested as outcomes.

During the data collection, participants and other stakeholders may offer additional intangibles, usually unintended, that are connected to the program.

During the data analysis, when measures cannot be converted to monetary values credibly with minimum resources, they are listed as intangibles.

When should data be converted to money?

To decide whether or not to convert a measure to monetary value, use this four-part test:

Does an acceptable, standard monetary value exist for the measure? If yes, use it in the ROI calculation; if not, go to question two.

Can a method be used to convert the measure to money? If not, list it as an intangible; if yes, go to question three.

Can the conversion be accomplished with minimal resources? If not, list it as an intangible; if yes, go to question four.

Can the conversion process be described to an executive audience and secure buy-in in two minutes? If yes, use it in the ROI calculation; if not, list it as intangible.
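The four-part test above can be expressed as a small decision function. This is an illustrative sketch only; the boolean inputs stand in for the answers to the four questions, and the function name is hypothetical.

```python
# The four-part test for deciding whether to convert a measure to money.

def convert_to_money(has_standard_value, method_exists,
                     minimal_resources, executive_buy_in):
    if has_standard_value:                  # Q1: standard value exists
        return "use in ROI calculation"
    if not method_exists:                   # Q2: no conversion method
        return "list as intangible"
    if not minimal_resources:               # Q3: conversion too costly
        return "list as intangible"
    if executive_buy_in:                    # Q4: explainable in two minutes
        return "use in ROI calculation"
    return "list as intangible"

# A measure with no standard value but a cheap, credible conversion method
# that fails to win executive buy-in stays intangible.
print(convert_to_money(False, True, True, False))  # list as intangible
```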

Connecting the intangibles to the program

The most credible source (usually the participants) provides input about the program’s influence on the intangibles.

Rate as: Not Applicable. No Influence. Some Influence. Moderate Influence. Significant Influence. Very Significant Influence.

Intangible Measure (examples)

Image; Teamwork; Sustainability; Engagement; Stress; Risk; Work/Life Balance

Typical Intangibles

Agility, Ambiguity, Alliances, Awards,

Brand, Brand awareness, Burnout,

Capability, Capacity, Carbon emissions, Caring, Clarity, Client complaints, Client impressions, Client satisfaction, Collaboration, Communication, Compassion, Competencies, Complexity, Compliance, Conflict, Corporate social responsibility, Creativity, Culture, Customer service,

Decisiveness, Development, Diversity,

Emotional intelligence, Employee attitudes, Engagement, Environmental friendliness,

Food security,

Human life,

Image, Impressions, Inclusiveness, Initiative, Innovation, Intellectual capital, Intent to leave,

Job satisfaction,

Leadership effectiveness, Learning, Loyalty,

Mindfulness, Mindset,

Net promoter score, Networking,

Organizational commitment,

Partnering, Patient satisfaction, Performance appraisal ratings, Poverty, Process improvements,

Reputation, Risk,

Social capital, Social consciousness, Social responsibility, Stress, Sustainability,

Team effectiveness, Timeliness, Trust,

Work/Life Balance, Work climate, Work satisfaction

Some More Notes on Intangibles

Improved communication entails information flow without loss of information, with no noise added, and with insights added at the processing points or nodes.

Leadership involves direction, alignment, and commitment.

Clarity involves making cause-effect links explainable.

Sustainability will add tangible value to tangible benefits and intangible value to intangible benefits.

Competence is a broad category including knowledge, behavior, identity, critical thinking, ethics and morality.

In accounting terms: A benefit (asset) is recognized if it can make a difference in user decision, i.e., it is relevant (CON5 asset recognition criteria) and has a relevant attribute measurable with sufficient reliability.

For identifiable intangible assets, control over the asset’s future economic benefits (resulting from contractual or other legal rights) is possible. Such an asset is capable of being separated or divided from the entity and sold, transferred, licensed, rented, or exchanged, either individually or together with a related contract, asset, or liability.

{I would say that Non-identifiable intangibles include positive psychological capacities, attitudinal and behavioral outcomes, and intangibles of positive organizational behavior. Even extensively studied constructs such as engagement, job satisfaction, organizational commitment, and teamwork are contextual, contingent, non-repeatable, and thus non-identifiable.} 

Step 9 of the ROI Methodology

Make it Credible: Capture Costs of Project. Tabulate the fully-loaded costs, including all the direct and indirect costs.

Step 10 of the ROI Methodology

Make it Credible: Calculate Return on Investment. Divide the net program benefits by the program costs and express the result as a percentage. ROI is a financial metric representing the ultimate measure of project success.
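The three payoff metrics named earlier (the Benefit-Cost Ratio, ROI as a percentage of net return, and the payback period) can be computed as follows; the benefit and cost figures are hypothetical.

```python
# Illustrative calculation of the Level-5 payoff metrics.

def bcr(benefits, costs):
    return benefits / costs                      # benefit-cost ratio

def roi_pct(benefits, costs):
    return (benefits - costs) / costs * 100      # net return as a percentage

def payback_months(costs, monthly_benefits):
    return costs / monthly_benefits              # months to recover the cost

# A program producing $750,000 in monetary benefits against $300,000 in
# fully loaded costs, with benefits accruing evenly over a year:
benefits, costs = 750_000, 300_000
print(bcr(benefits, costs))                   # 2.5
print(roi_pct(benefits, costs))               # 150.0
print(payback_months(costs, benefits / 12))   # 4.8
```

Note the difference between the two headline numbers: the BCR compares total benefits to costs, while ROI compares only the net benefits (benefits minus costs) to costs, which is why a 2.5 BCR corresponds to a 150% ROI.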

Step 11 of the ROI Methodology

Tell the Story: Communicate Results to Key Stakeholders. First, properly identify the audiences and provide appropriate information. Then, report the results using the five levels of outcome data to tell the story.

Step 12 of the ROI Methodology

Optimize Results: Use Black Box Thinking to Increase Funding. Analyze the data to identify factors that will enhance future program results and attract increased funding.