Purpose

An organization-wide Performance and Quality Improvement system advances efficient, effective service delivery, effective management practices, and the achievement of strategic and program goals.

FOC
PQI 4: Performance and Outcomes Measures

The organization evaluates:
  1. the impact of services on clients; 
  2. quality of service delivery; and
  3. management and operations performance. 

Interpretation: COA expects data related to the standards in this section to be collected, aggregated, and reviewed at least quarterly. See PQI 6.

Interpretation: Organizations providing child welfare services are encouraged to integrate the Federal Child and Family Service Review (CFSR) Outcomes measures and Systemic Factors, particularly those identified in Performance Improvement Plans, into their overall PQI system and ongoing monitoring.

Rating Indicators
1
The organization's practices fully meet the standard as indicated by full implementation of the practices outlined in the PQI 4 standards.
2
Practices are basically sound but there is room for improvement as noted in the ratings for the PQI 4 Practice standards.
3
Practice requires significant improvement as noted in the ratings for the PQI 4 Practice standards.
4
Implementation of the standard is minimal or there is no evidence of implementation at all, as noted in the ratings for the PQI 4 Practice standards.

Table of Evidence

Self-Study Evidence
    • See PQI plan re: description of what is being measured. Response must address and include PQI 4.02, PQI 4.03, and PQI 4.04, and include:
      1. outcomes
      2. outputs
      3. data sources
      4. indicators
      5. targets
    • See response to Narrative Question #4
    • See also PQI outcomes/outputs documentation provided in the Service Narratives
    Networks Only
    • Networks provide network-specific performance measures

On-Site Evidence
    • Regulatory/licensing or other external reviews/reports (PQI 4.05)
    • For organizations seeking re-accreditation:
      1. Pre-Commission Review Report (PCR)
      2. Final Accreditation Report (FAR)
      3. Maintenance of Accreditation (MOA) Reports for the three most recent years

On-Site Activities
    • Interview:
      1. PQI personnel
      2. Relevant staff
      3. Other relevant stakeholders

  • PQI 4.01

    The organization identifies key outputs and outcomes, and related:
    1. measurement indicators;
    2. performance targets; and
    3. data sources, including data collection tools or instruments for each identified output and outcome.

    Interpretation: Although not required, a program specification model or logic model can be a useful tool to help staff think systematically about how the program can make a measurable difference. These models help to define the connection between the service population’s needs, required resources, program activities and interventions, and program outputs/desired outcomes.

    Interpretation: If an organization has not yet identified outputs and outcomes for all of its programs, it must, at minimum, do so for high-risk services such as protective services, foster care, residential treatment, etc. 

    Outputs are what the program delivers. Examples of program outputs include:
    • number of educational or clinical sessions provided
    • total number of clients served over a specified period of time
    • number of housing placements made
    Outcomes are the observable and measurable effects of a program’s activities on its service recipients. Examples include:
    • improved functioning as measured by the Children’s Functional Assessment Rating Scale (CFARS) 
    • number/percent of homeless & runaway youth who are reunited with family during the past quarter
    • reduction in criminal justice system involvement
    • improved family/community involvement
    Interpretation: For some programs, outcomes, outputs, indicators, tools, etc. may be established by contractual and/or funding requirements. For programs where this is the case, organizations are expected to go beyond simply reporting the required data and engage staff and other stakeholders to: 
    • review data that is important for their work or interest;
    • use the data to benchmark results with external organizations providing the same funded services;
    • use the data to improve services beyond required expectations; and 
    • compare data with additional or other data collected by the organization not covered by contractual requirements to improve services.
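
    As a purely illustrative sketch, and not a COA template or requirement, the relationship among an outcome, its measurement indicator, performance target, and data source described in PQI 4.01 could be recorded in a simple structure such as the following. The program name, values, and field names are hypothetical assumptions.

      # Illustrative only: a hypothetical record tying one program outcome to its
      # indicator, performance target, and data source (per the elements of PQI 4.01).
      from dataclasses import dataclass

      @dataclass
      class OutcomeMeasure:
          program: str      # program the measure belongs to
          outcome: str      # desired, observable effect on service recipients
          indicator: str    # how the outcome is operationalized and measured
          target: float     # performance target (here, a proportion)
          data_source: str  # tool or instrument used to collect the data

      # Hypothetical example based on the outcome examples listed above.
      example = OutcomeMeasure(
          program="Runaway and Homeless Youth Services",
          outcome="Youth are reunited with family",
          indicator="Percent of youth reunified with family during the quarter",
          target=0.60,
          data_source="Case record review / discharge data",
      )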

    Rating Indicators
    1
    The organization's practices reflect full implementation of the standard.
    2
    Practices are basically sound but there is room for improvement; e.g.,
    • The organization has not developed indicators or performance targets for some of its programs.
    3
    Practice needs significant improvement; e.g., 
    • At least one of the standard's elements is not being addressed at all; or 
    • Outputs and outcomes have not yet been identified for one of its high-risk programs.  
    4
    Implementation of the standard is minimal or there is no evidence of implementation at all.

  • PQI 4.02

    On an ongoing basis, each of the organization’s programs measures client outcomes, including at least two of the following areas:
    1. change in clinical status;
    2. change in functional status;
    3. health, welfare, and safety;
    4. permanency of life situation; 
    5. quality of life; 
    6. achievement of individual service goals; and 
    7. other outcomes as appropriate to the program or service population.

    Interpretation: When measuring client outcomes, organizations often adapt existing measurement tools or develop new measurement tools. However, organizations are encouraged to use standardized or recognized outcomes evaluation tools when such tools are available and appropriate.

    Standardized Tools
     
    A standardized tool is a tool that has been tested through a process that ensures that the tool is both valid and reliable. Validity indicates that the tool actually measures what it claims to measure. Reliability indicates that the results should be the same (or similar) regardless of who administers the tool or when it is administered. Additionally, the tool should be responsive, meaning that it is able to detect change over time in whatever is being assessed.
     
    Using a standardized tool increases the likelihood that the measurement process will give the organization a true picture of client progress and program impact. Using a standardized tool also makes it easier to compare a program’s results with the results of other programs using the same instrument.

    When standardized tools are not being used, the organization should clearly describe and document how client outcomes are evaluated and measured. See also the fourth PQI Narrative Question.

    Please note that standardized measurement tools have not yet been developed for many of the types of programs and services that COA accredits. 
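
    As a minimal sketch only, assuming a hypothetical standardized assessment in which lower scores indicate better functioning, pre/post scores might be aggregated into a simple outcome figure such as the percentage of clients whose scores improved by a chosen threshold. The function name, scores, and threshold below are illustrative assumptions, not part of the standard.

      # Illustrative only: aggregate hypothetical pre/post assessment scores into a
      # simple client-outcome figure (percent of clients whose scores improved).
      def percent_improved(pre_scores, post_scores, min_change=5):
          """Percent of clients whose score dropped by at least `min_change`
          points (assumes lower scores indicate better functioning)."""
          pairs = list(zip(pre_scores, post_scores))
          if not pairs:
              return 0.0
          improved = sum(1 for pre, post in pairs if (pre - post) >= min_change)
          return 100.0 * improved / len(pairs)

      # Hypothetical quarterly data for one program.
      pre = [62, 55, 71, 48, 66]
      post = [50, 53, 58, 47, 52]
      print(f"{percent_improved(pre, post):.0f}% of clients improved")  # 60%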
     
    Interpretation: Within the context of PQI 4.02, “clients” refers to the individuals or families who receive services from the organization. “Clients” can also refer to communities or other organizations in programs designed to effect change at that level, e.g., Community Change Initiatives (CCI) or Social Advocacy (SOC) programs.

    Interpretation: In an EAP, common outcomes include, for example, personal and/or workplace productivity and healthy workplace relationships.

    Rating Indicators
    1
    The organization's practices reflect full implementation of the standard.

    On an ongoing basis the organization collects data on at least two of the listed areas for each of its programs.
    2
    Practices are basically sound but there is room for improvement; e.g.,
    • The organization collects data on at least one of the listed areas for each of its programs.
    3
    Practice needs significant improvement; e.g., 
    • The organization has not begun collecting outcomes data for most of its programs; or 
    • Has not begun collecting outcomes data for one of its high-risk programs.
    4
    Implementation of the standard is minimal or there is no evidence of implementation at all.

  • PQI 4.03

    At least annually, the organization examines its service delivery processes to plan, manage, and evaluate the quality of its services, including:
    1. client satisfaction or surveys related to services provided;
    2. review of immediate and ongoing risks related to service delivery such as use of behavior management interventions; and
    3. evaluation of program methodology and service delivery processes including barriers to receiving or successfully completing services. 

    Interpretation: Element (2) is directly related to the quarterly risk management reviews addressed by RPM 2.02. If those reviews are not being conducted by PQI staff, COA expects that quarterly reports from those meetings will be reviewed by PQI staff at least annually for patterns and trends.

    Also regarding element (2), "immediate and ongoing risks related to service delivery" refers to risks such as medical issues, the use of service interventions, and others.

    Research Note: According to the Urban Institute, client surveys can be an indispensable source of outcome information. They provide a systematic means of gathering data on service outcomes from all or a portion of clients. Client surveys help organizations learn whether services are producing anticipated or desired results and, if not, provide clues for how to improve them. 

    Issues covered by a client survey should correspond to the key service outcomes an organization wishes to track. Because survey length generally affects response rates, issues not pertinent to improving outcomes should probably be limited. The goal is to develop the shortest possible list of questions consistent with the survey’s objective of assessing outcomes.  

    Rating Indicators
    1
    The organization's practices reflect full implementation of the standard.
    2
    Practices are basically sound but there is room for improvement; e.g.,
    • The organization is collecting data related to two of the three elements, at least annually, but planning has begun to address the missing element.
    3
    Practice needs significant improvement; e.g., 
    • The organization is not collecting data related to two of the standard’s elements.  
    4
    Implementation of the standard is minimal or there is no evidence of implementation at all.

  • PQI 4.04

    The organization collects and monitors data on management and operational performance to:
    1. strengthen and build organizational capacity;
    2. measure progress toward achieving its strategic goals and objectives;
    3. evaluate operational functions that influence the capacity to deliver services; and
    4. identify and mitigate risk.

    Interpretation: Examples of operations and management performance measures can include: 
    • Efficiency in the allocation and utilization of its human and financial resources in furthering or impeding the achievement of organizational objectives (HR 2);
    • Effectiveness of risk prevention measures (See RPM 2.01, RPM 2.02);
    • Staff retention/turnover and satisfaction (See HR 4.03, HR 4.04);
    • The cost of delivering a unit of service as compared to similar programs/the relationship of service delivery costs to the benefits derived by consumers of service (See FIN 5.06);
    • Costs v. benefits of fundraising efforts (See ETH 3.03);
    • Achievement of budgetary objectives (FIN 5);
    • Effectiveness of community education and outreach (See GOV 4.01); and
    • Efforts to diversify the governing body (See GOV 2.02, GOV 2.03).
    Interpretation for networks: Network management entities may also measure important network administrative processes, such as: 
    • The average length of time between receiving a clean claim and paying the claim; 
    • The proportion of services that are evidence-based or meet nationally recognized treatment guidelines developed by consensus groups; 
    • The effectiveness of network training; 
    • The satisfaction of stakeholders, such as high volume referral agents (e.g., judges, court workers, employee assistance agents); 
    • Penetration rates, or the proportion of the whole population eligible to be served by the network who actually receive services; and 
    • Results of retrospective case record reviews, including the percentage of cases in which a placement decision includes an appropriate application of clinical criteria.
    Above are several examples of operations and management performance measures that relate to specific COA standards. Review those examples and consider whether any such data is currently being collected, e.g., financial reviews of budget objectives, staff retention, staff turnover and satisfaction, or costs versus benefits of fundraising activities. Then identify an outcome or goal and evaluate how the organization is doing in some of these areas. If initial goals have not been met, develop an improvement plan.
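
    As a hedged illustration only, two of the operational measures mentioned above, staff turnover and the cost of delivering a unit of service, might be computed as follows. The figures and formulas are assumptions made for the sake of the example, not COA-prescribed calculations.

      # Illustrative only: two hypothetical management/operations measures
      # of the kind mentioned in the interpretation above.
      def turnover_rate(separations, avg_headcount):
          """Annual staff turnover as a percent of average headcount."""
          return 100.0 * separations / avg_headcount

      def cost_per_unit(total_program_cost, units_of_service):
          """Cost of delivering one unit of service (e.g., one clinical session)."""
          return total_program_cost / units_of_service

      # Hypothetical annual figures for one program.
      print(f"Turnover rate: {turnover_rate(6, 40):.1f}%")              # 15.0%
      print(f"Cost per session: ${cost_per_unit(250_000, 4_000):.2f}")  # $62.50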

    Rating Indicators
    1
    The organization's practices reflect full implementation of the standard. 
    2
    Practices are basically sound but there is room for improvement; e.g.,
    • The organization is collecting and monitoring data related to three of the four elements of the standard. 
    3
    Practice needs significant improvement; e.g., 
    • The organization is not collecting data related to two of the standard's elements.
    4
    Implementation of the standard is minimal or there is no evidence of implementation at all.

  • PQI 4.05

    Findings and recommendations from external review processes are integrated into the organization’s PQI system, including:
    1. licensing and other reviews related to federal, state, and local requirements; 
    2. government and other funder audits; 
    3. accreditation reviews; and 
    4. other reviews, where appropriate.

    Rating Indicators
    1
    The organization's practices reflect full implementation of the standard.
    2
    Practices are basically sound but there is room for improvement; e.g.,
    • The process for review of findings and recommendations can be improved; for example, findings are reviewed by management but are not integrated into the PQI improvement cycle when appropriate.
    3
    Practice needs significant improvement; e.g., 
    • There is evidence that the organization has not adequately addressed the findings or recommendations of at least one key external review; or 
    • It does not review or address findings in a timely manner and thus may be putting itself at risk of sanction.
    4
    Implementation of the standard is minimal or there is no evidence of implementation at all.