Managing Physician and Administrator Relations | Physicians Practice

I recently attended a Medical Group Management Association-sponsored conference on the “Business of Care Delivery.” Discussions focused on how physicians and administrators can work together to meet the complex challenges facing medical practices, now and in the future, with particular attention to leadership and management, collaboration, and improving the quality of care provided to the populations that we serve.

Unilateral decision maker vs. practice facilitator

The physician mind-set, at its core, is one of decision making, autonomy, and dealing with things now. These traits are not always congruent with the mind-set of the practice administrator, who is trained to coordinate a complex range of practice issues both inside and outside the office. That coordination requires a set of skills that are not taught in medical school.

So instead of discussing “managing the physician,” it seems more appropriate to address how the physician and administrator can work together. Typically, the entrepreneurial physician provides leadership to the entire group; in other cases it is the administrator who provides that leadership. Both must focus on the mission of the practice: providing quality patient care while maintaining the health of the business. Without this balance there will be no group of physicians to provide care to the patients.

We should strive to work together in any way possible. Physician and administrator decision making follow the same path; however, the time frame for action may be significantly different. The autonomous physician who is evaluating a patient will attempt to define the problem, seek information, consider alternatives, and decide the best course of action. Administrators will follow that same pathway; however, others will either be involved with the decision-making process or with the implementation. This then has a direct impact on the timing and expectations around the final outcome.

Clinical vs. administrative role: each is equally important

Recognizing the need for the tandem roles of physician leader and practice administrator is necessary for the practice to survive, let alone thrive, in the future. This requires effective communication, acceptance of both the individual and the role, and a clear focus on what lies ahead. The immediate question is not whether to join the local hospital or lead the formation of a larger group of physicians; it is more important today to recognize and accept each role.

Clear communication channels

Assuming that all practice leaders are on the same page is not always wise; you need an established channel for communication. It is essential to maximize everyone’s time and use every available mechanism for information sharing. Active listening ensures that all decision makers are aligned. And remember, management terminology may be foreign to physicians, so use terms and concepts that are easily understood. Beyond verbal communication, make sure that policies are documented in your practice manual to avoid confusion.

Ensuring that physicians and administrators work effectively together requires providing each with the necessary information to do their job well, understanding and respecting both roles, making sure adequate resources are available, and giving mutual respect. Administrators can facilitate these tasks by setting appropriate goals, providing adequate training, and developing benchmarks for all members of the practice.

Owen Dahl, FACHE, LSSMBB, is a nationally recognized medical practice management consultant and author of “Think Business! Medical Practice Quality, Efficiency, Profits.” He can be reached at 281 367 3364.

Positive Clinical And Commercial Updates From Avita Medical

(Editors’ note: This article contains graphic images of burn victims).

Business & Financial Update

Avita Medical Limited (OTCQX:AVMXY) reported total sales for the first half of fiscal year 2014, ended December 31, 2013, of AUD $1.37 million (1 AUD = 0.92 USD). Total sales were up 1.3% from the same period ended December 31, 2012. However, sales of ReCell Spray-On-Skin™ were up 35% in the quarter versus the quarter ended December 31, 2012, and up 11% for the six-month period ended December 31, 2013. Sales of ReCell grew nicely in the United Kingdom and Australia, up 52% and 29% for the quarter, respectively. Unit sales were up 34% in the UK, 17% in Germany, and 10% in Australia.


This increase reflects Avita’s sharpened focus on growing the commercial acceptance of ReCell in its key markets, including expanding commercial use of the product beyond burns and building on the strong brand name in Europe. The company is specifically shifting from general positioning to a targeted, indication-specific strategy aimed at innovative, thought-leading clinicians and potential early adopters of the novel platform. Significant effort is also being channeled toward gaining reimbursement in approved markets.

For example, in the UK, the National Institute for Health and Care Excellence (NICE) has accepted the company’s application for coding, published the Scoping Document for public comment, and asked the External Assessment Centre to complete additional work. As a result, Avita anticipates a UK reimbursement decision in mid-2014. In Germany, eight hospitals have submitted endorsements for reimbursement of ReCell in the treatment of burns and acute wounds, and 14 clinics have submitted for reimbursement for aesthetic and plastic surgery procedures. A decision on reimbursement in Germany is expected during the second half of calendar year 2014. Progress on reimbursement is also being made in Turkey, where an application has been submitted to the Social Security Institution (SGK). ReCell has been classified with a code and included on the ‘positive list’, with a final reimbursement decision expected shortly.

In Australia, the company has begun new marketing initiatives to expand reach into new regions of the country. Previously, Avita was only marketing ReCell in two hospitals in Western Australia with limited resources directed towards the reimbursement effort. Efforts to drive full reimbursement across the country are underway. Additionally, individual applications have been submitted to the New Zealand Accident Compensation Corporation (ACC). These submissions triggered the ACC to conduct a preliminary review for reimbursement of ReCell in scar/pigmentation improvements. We note the ACC has already approved the use of ReCell for scar reconstruction in burns and trauma victims on a case-by-case basis.

Despite progress in the above markets, France and Italy have been two weak spots for the company over the past year. Unit sales in France and Italy were down 32% and 64%, respectively, for the six-month period ended December 31, 2013 compared to the same period in 2012. In France, Avita is currently evaluating consultants to facilitate reimbursement efforts. In Italy, a diagnosis-related group (DRG) code is in place for burns procedures, and the company is working to establish reimbursement for chronic wounds.

China represents the most exciting revenue opportunity for the company. The company reports that progress is being made on three fronts: clinical, sales, and reimbursement. With respect to clinical and sales, the immediate and mid-term market opportunities in China are in pigmentation and plastics. Management anticipates another stocking order from its Chinese distributor in the March 2014 quarter. In parallel to these sales efforts, Avita is laying the foundation in the burns market by collaborating with key opinion leaders and seeking reimbursement. Although reimbursement in China is a complex process involving differing requirements at the provincial and municipal levels, the company recently reported a key milestone: price approval at the influential Peking Union Medical College Hospital (PUMCH) in Beijing. This approval can now be used as a reference at other hospitals in the country. In total, Avita now has five hospitals that have approved purchasing submissions; another seven hospitals have applications under review.

Revenue from the company’s Breath-A-Tech® and Funhaler® sales in Australia during the half-year ended December 31, 2013 was flat with the same period in 2012. The company is no longer investing in these products. In fact, it would not surprise us to see the company sell its respiratory franchise in Australia in the near term (investor presentation guides to third-quarter 2014). Other revenues totaled roughly AUD $0.4 million during the quarter. Total revenues were AUD $1.8 million, essentially flat with the same period in 2012.

Operating loss totaled AUD $1.9 million in the quarter, a 17% improvement over the corresponding quarter in 2012. For the six-month period ended December 31, 2013, the company reported a net operating loss of AUD $3.8 million. Cash and investments as of December 31, 2013 stood at AUD $6.8 million, which we view as sufficient to fund operations for the next twelve months. As noted above, selling the respiratory franchise could fetch the company AUD $1.5 to $2.0 million if executed. We remind investors that Avita Medical has no debt on the books.

Clinical Updates

We continue to be big fans of the company’s ReCell® Spray-On Skin™ system. ReCell is an autologous cell harvesting, processing, and delivery technology that uses a patient’s own skin cells to facilitate regeneration, enabling surgeons and clinicians to treat complicated skin defects, including chronic wounds, scars, burns, and depigmentation, and to aid in rejuvenation or reconstruction procedures. Clinical experience generated by the company shows the system can be used successfully to promote healing and the formation of new skin structure after a severe injury, such as a burn or scald. Data also show that ReCell can improve the appearance of acne scars, remove areas of discoloration, restore pigmentation in patients with vitiligo, and aid in the healing of chronic wounds such as venous leg ulcers and diabetic foot ulcers.


Burns / Scalds: We like the ReCell system for the treatment of burns and scalds because it is simple, fast, safe, and effective. It is the ideal product, in our view, for treating major burns or scalds in children, because the biopsy area is so small compared to split-thickness meshed skin grafting (STSG), the current standard of care. A 1 cm² biopsy with ReCell can treat a severe burn of up to 80 cm², with wound re-epithelialization rates similar to the gold-standard STSG. This is an enormous leap forward, in our view, considering that covering an 80 cm² burn or scald with an STSG might require a biopsy area as large as 40 cm². The ReCell procedure is also less painful and offers superior pigmentation and patient satisfaction on follow-up.
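The donor-site arithmetic above is easy to make concrete. A minimal sketch, using the 80:1 and 2:1 expansion ratios as the illustrative figures from this article rather than clinical constants:

```python
def biopsy_area_needed(wound_area_cm2, expansion_ratio):
    """Donor-site (biopsy) area required to cover a wound at a given expansion ratio."""
    return wound_area_cm2 / expansion_ratio

wound = 80.0  # cm^2, the severe-burn example used in the text

# ReCell: a 1 cm^2 biopsy can treat up to ~80 cm^2 (an 80:1 expansion)
recell_biopsy = biopsy_area_needed(wound, 80)   # -> 1.0 cm^2

# Conventional meshed STSG at ~2:1 needs a far larger donor site
stsg_biopsy = biopsy_area_needed(wound, 2)      # -> 40.0 cm^2
```

The 40x difference in donor-site area is the whole argument for ReCell in pediatric burns.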

Avita is currently conducting a U.S.-based Phase 3 / PMA trial (NCT01138917) with ReCell in acute burns and scalds. The trial is scheduled to enroll 106 patients; however, enrollment has been stalled at around 90 patients for the past year. Management notes that the entry criteria for the trial are too strict, specifically in prohibiting patients from using any other wound care products, which leads to push-back and skepticism from leading wound care and burn physicians. Instead, management has seen a significant number of compassionate use applications filed over the past year, with physicians petitioning the U.S. FDA for approval to use ReCell in combination with STSG. In fact, the FDA has received enough compassionate use applications on ReCell that the agency asked Avita Medical to file an investigational device exemption (IDE) application on ReCell for compassionate adjunct therapy use. From conversation with management, we expect filing of this IDE application is imminent.

Use of ReCell as an adjunctive therapy to STSG would open up a tremendous number of patients to the active Phase 3 / PMA study. At this point, we are unsure whether Avita will look to expand the current trial with a compassionate use cohort or seek to initiate a new Phase 3 study in just this indication. However, physicians report that using ReCell allows them to expand the mesh on an STSG from roughly 2:1 to more like 4:1 or even 6:1. This is an enormous leap forward for both physicians and patients, who can obtain “gold standard” efficacy with a smaller graft and less biopsy pain. For Avita, it is very positive news: instead of trying to replace the standard of care, the company can market ReCell as an enhancement to it. There is no tricky or difficult marketing message for a wound care or burn physician who has been using STSG for years; Avita can simply market ReCell as saving time and money, improving patient comfort, and improving outcomes. This is a very exciting development that the market has yet to pick up on. We suspect that when Avita announces the IDE approval in the next few weeks, shares will react positively.

Chronic Wounds: The chronic wound market remains the Holy Grail for Avita Medical. The market is large and growing rapidly as a result of the growing elderly and diabetic populations in the developed world. The market is dominated by negative pressure wound treatment, but advanced dressings and skin substitutes are coming on strong. Products like Shire’s Dermagraft and Organogenesis’ Apligraf have sales exceeding $150 million worldwide. We think ReCell compares well with these products from a cost and effectiveness standpoint, especially considering that the U.S. Centers for Medicare and Medicaid Services (CMS) has recently made significant changes to the skin substitute policy, bundling payment for the graft and the procedure together. This cap on reimbursement per procedure has created a massive shift away from the “too expensive” Dermagraft and Apligraf products to cheaper alternatives, like ReCell.

In August 2013, the company initiated the RESTORE clinical trial (NCT01743053) studying ReCell vs. standard of care in patients with chronic venous leg ulcers. Enrollment has expanded over the past few weeks to five active sites. Full enrollment should take place in early 2015, with top-line results from RESTORE expected around mid-2015. If successful, we expect a U.S. IDE submission in late 2015.

Vitiligo / Pigmentation: We note there is no true “standard of care” for vitiligo; treatment includes a number of moderately effective options. Many patients begin with vitamin B6 creams and makeup to hide the depigmentation of the skin. However, exposing the skin to UVB light is the most common treatment for vitiligo; UVA light can also be used. Patients can spot-treat themselves at home or, in severe cases, undergo treatment at a clinic or hospital. UVB / UVA phototherapy does work in some patients, but treatment can be long, expensive, and unpredictable, with potential risks including burns and skin freckling. Pharmaceutical immunomodulators such as Prograf, Advagraf, Protopic, and Elidel are also commonly used with varying degrees of success.

Avita recently completed a pilot study (NCT01640678) comparing the use of ReCell to CO2 laser abrasion plus UV therapy. The 10-person pilot study was conducted by Avita at The Netherlands Institute for Pigment Disorders and completed enrollment on August 21, 2013. Albert Wolkerstorfer, M.D., of the Netherlands Institute for Pigment Disorders, the study’s Principal Investigator, noted: “We are pleased to have completed enrollment in the study and believe ReCell offers promise as a potential therapy in the management of stable and segmental vitiligo patients.” If successful, the study could be an important driver of use of the product in vitiligo and other pigmentation defects. We expect Dr. Wolkerstorfer to submit a manuscript for publication this quarter; typical times to publication are six to twelve months. Given the previous data and case studies with ReCell in vitiligo, we expect this data to be very impressive. We expect a U.S. IDE submission in pigmentation later this year.

In Europe, ReCell has been granted a CE Mark with the following intended use: “Intended to be used to disaggregate cells from a patient’s split-thickness skin biopsy and to collect these cells for reintroduction to the patient. The cells can be used for autologous application to the prepared wound bed as determined by the physician.” The company is currently conducting several post-market studies to help drive reimbursement in specific indications such as burns and scalds, vitiligo, scars (NCT01476826), and aesthetics / reconstruction. We note that Avita is also investing in product development, with a next-generation product in planning that will allow for improved economics, including a razor / razorblade model, and potentially increased market uptake. A new ambient-stored kit is currently being rolled out in the UK, with the rest of Europe to follow.

Management Changes

Late last year, Managing Director and CEO Dr. William Dolphin stepped down from his role and was replaced in an interim capacity by current Chief Operating Officer and Chief Financial Officer, Mr. Timothy Rooney. The company has initiated an internal and external search program to identify a permanent CEO. The company’s Chairman, Mr. Dalton Gooding, also announced his resignation in November 2013. The Board elected Mr. Ian Macpherson as its interim Chairman and an executive recruitment firm has been engaged to identify a suitable candidate to be appointed as new Chairman.

We expect a new Chairman and CEO to be in place by May or June 2014. We expect to provide a larger update to investors following our initial conversation with the new Chairman and/or CEO. That being said, we continue to believe Avita is ripe for a take-out by a larger wound-care company, and will be watching closely for either a deal or new Chairman/CEO in the next few months.


Case study pictures tell the story. We simply cannot write an article on Avita Medical without including some of the impressive case study photos we pulled from peer-reviewed publications or through Google searches on ReCell Spray-on-Skin:

In Burns / Scalds:


In pigmentation / vitiligo:


In scars:


Avita’s market capitalization of only $35 million looks bafflingly low and highly attractive for long-term investment, in our view. Based on the projected sales opportunity for the ReCell device in potential indications such as burns and scalds, vitiligo, scars, and chronic wounds, we see the shares as meaningfully undervalued. We have built a discounted cash flow model to value the shares and arrive at a market capitalization of approximately $75 million, which equates to a target price of $4.00 per share. This company simply has too much going on with respect to clinical and commercial operations to remain undervalued for much longer.
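The article does not disclose the inputs behind its discounted cash flow model, but the mechanics can be sketched. The cash flows, discount rate, and growth rate below are hypothetical placeholders; only the $75 million / $4.00 endpoint comes from the text, and together those two figures imply a share count of roughly 18.75 million:

```python
def dcf_equity_value(free_cash_flows, discount_rate, terminal_growth):
    """Present value of an explicit forecast plus a Gordon-growth terminal value."""
    # Discount each explicit-period free cash flow back to today
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(free_cash_flows, start=1))
    # Terminal value grows the final cash flow in perpetuity, then discounts it back
    terminal = free_cash_flows[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    return pv + terminal / (1 + discount_rate) ** len(free_cash_flows)

# Share count implied by the article's own $75M market cap and $4.00 target
implied_shares = 75_000_000 / 4.00   # -> 18,750,000 shares
```

A sanity check on the function: a single year of $110 cash flow at a 10% discount rate with zero terminal growth values out to $1,100 ($100 present value plus a $1,100 terminal value discounted to $1,000).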


Editor’s Note: This article covers a stock trading at less than $1 per share and/or with less than a $100 million market cap. Please be aware of the risks associated with these stocks.

Disclosure: I have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

A crystal clear answer for pressure injury prevention and management

The Pressure Injury Prevention Program in the Hunter New England Local Health District has been a highly recognised model within the healthcare profession since its inception in 2008. The program’s clinical lead Margo Asimus, nurse manager Felicity Williams and project officer Pui Ling (Iris) Li explain the program’s aims and outcomes.

It is 2014 and pressure injuries remain a major problem for patients, families, health care professionals and organisations1. This adverse event harms patients, affecting their long-term wellbeing, while increasing length of stay2 and placing additional demands on the health care budget. An organisation willing to address this predominantly avoidable injury must first be prepared to measure the real extent of the problem. Only then can the necessary strategies and changes be implemented.

Pressure injury point prevalence studies have been performed in five states and territories across Australia: Victoria, Queensland, Tasmania, Western Australia and the ACT. In 2003, 2004 and 2006 the Victorian Quality Council (VQC) supported point prevalence studies, which resulted in a significant improvement in prevalence rates, from 26 per cent in 20033 to 17.6 per cent by 20064. Western Australia commenced state-wide auditing in 2007, which has continued through the WoundsWest program5. WA identified improvement strategies which also resulted in a reduction in hospital-acquired prevalence rates.

In an effort to drive continual improvement in hospital-acquired pressure injury prevalence rates, Queensland has imposed financial penalties for stage 3 ($30,000) and stage 4 ($50,000) pressure injuries, which have been classified as adverse events.

Jackson et al (2011)6 reviewed ICD-10 coder data for hospital-acquired conditions from 2006 to 2007 in public hospitals in Victoria and Queensland. Over 144 categories were determined, with pressure injuries in the top 10 of all adverse events (5th place) and over 2,873 cases identified in the 12-month period. The cost of a hospital-acquired pressure injury was calculated at an average of $8,435 per case, and the study estimated that an expenditure of $24,234,740 would be required in the health care budget to treat the complications of pressure injuries.
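The budget figure can be cross-checked against the case count and the per-case average. The product comes out slightly below the reported $24,234,740, presumably because the $8,435 average is itself rounded:

```python
cases = 2_873          # hospital-acquired pressure injuries identified (Jackson et al., 2011)
avg_cost_aud = 8_435   # reported average cost per case, AUD

total_cost = cases * avg_cost_aud   # -> 24,233,755 AUD
```

That is within about $1,000 of the study's stated $24,234,740 estimate.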

The increasing pressure injury prevalence, patient harm and escalating health care costs prompted an urgent organisational change to improve the quality and safety of care within a large health service in NSW1. A systematic approach to pressure injury prevention and management was implemented with outstanding achievements in several key areas.

Crystal Model: A Crystal Clear Solution for Pressure Injuries

The Crystal Model was developed by Hunter New England Local Health District (HNELHD) in NSW and has been implemented by the Pressure Injury Prevention Program (PIPP) at an executive level in HNELHD7. The program has focused on the prevention and management of pressure injuries since 2008, and the prevalence rate of hospital-acquired pressure injuries has decreased by 13.5 per cent over the six years. The Model has been recognised as best practice, winning the Clinical Excellence Commission Award at the 2009 NSW Health Baxter Awards.

There are nine components in the Crystal Model, which are interconnected in the prevention and management of pressure injuries: policy, surveillance, equipment, communication, documentation, wound management, education, paediatrics and community care. Each component has a leader, a clinician or manager who has a substantive role in the PIPP and promotes the development of resources and strategies.

Figure 1: Crystal Model Diagram
HNELHD Pressure Injury Prevention Program Crystal Model: a range of key aspects working together to stop pressure injuries

Crystal Model Components

Policy
The Pressure Injury Prevention and Management Policy includes an equipment algorithm and has been developed with reference to best practice guidelines. The policy interconnects key focus areas, strategies and evidence, and is reviewed regularly following recommendations from recent pressure injury prevalence studies.

Surveillance
Annual point prevalence surveys should be conducted to identify pressure injury prevalence, the sources and severity of the pressure injuries, patients’ pressure injury risk profiles, and current prevention and management strategies. Our project has demonstrated that annual surveillance provides evidence of the effectiveness of strategies implemented, informs the development of future strategies, and enables future benchmarking.

Equipment
Appropriate choice of pressure-redistributing equipment is one of the most important components of pressure injury prevention. In HNELHD, an equipment algorithm has been developed and implemented alongside an agreed and validated risk assessment tool8. The algorithm guides clinicians, through risk assessment, to the most appropriate pressure-redistributing device.

Communication
The pressure injury prevention program logo was established as a marketing strategy. The communication strategies include fact sheets, meetings, and a dedicated PIPP website to ensure that all information and resources about the Pressure Injury Prevalence Study are supplied promptly to managers, staff and patients. In addition, all study results, together with recommendations, are accessible on the PIPP website.

Documentation
Introduction and implementation of a validated pressure injury risk assessment tool (the Waterlow Risk Assessment8) and a Pressure Injury Notification Sticker enable Incident Information Management System (IIMS) and coder data to be compared. This information provides clinical units with direction to improve clinical care and documentation. Incidents of pressure injuries are expected to be reported to the organisation’s IIMS so that the root cause of the adverse event can be analysed, which then informs improvement strategies.

Education
An interactive online program has been developed and is available to all clinicians and students across HNELHD. The purpose of the program is to provide accessible, consistent training and to develop knowledge and critical thinking about pressure injury prevention, assessment and management. Completion requires successfully undertaking an online assessment, which is automatically recorded on the staff member’s learning record. The validity of knowledge acquisition and competence was evaluated during the initial pilot of the online education program.

Wound management
Evidence-based management of pressure injuries using wound management principles has been implemented and this information is provided within an e-learning program.

Paediatrics and Community
A PIPP community study has identified gaps in pressure injury prevention for clients living in the community9. Strategies have been introduced to prevent pressure injuries following patients’ discharge into community settings. The paediatric population has specific care needs that now form part of tailored care delivery to avoid pressure damage from device-related pressure.

Measuring the extent of the pressure injuries
Prevalence studies and clinical audits should be conducted to measure the extent of the organisation’s pressure injury problem and the effectiveness of the PIPP implemented. To demonstrate excellence in accreditation, systems that can capture, analyse and report data outcomes, such as the prevalence study, clinical coder audits and IIMS data management, should also be demonstrated.


Prevalence vs Incidence

The proportion of patients with a pressure injury within a particular population at a given time is known as pressure injury prevalence; the number of patients with a new pressure injury in a specified population during a period of time is known as incidence. A prevalence study reflects the magnitude of the problem, while an incidence study reveals the quality of care provided10.
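The two measures reduce to simple rates. A minimal sketch, with patient counts that are hypothetical and for illustration only:

```python
def point_prevalence(patients_with_pi, patients_surveyed):
    """Proportion of surveyed patients who have a pressure injury on the survey day."""
    return patients_with_pi / patients_surveyed

def incidence(new_pi_patients, population_at_risk):
    """Proportion of at-risk patients who develop a NEW pressure injury over a period."""
    return new_pi_patients / population_at_risk

# Hypothetical example: 12 of 150 surveyed inpatients have a pressure injury today,
# and 6 of 200 admissions over the month developed a new one in hospital.
prev = point_prevalence(12, 150)   # -> 0.08 (8 per cent)
inc = incidence(6, 200)            # -> 0.03 (3 per cent)
```

Note that prevalence counts all injuries present (including pre-existing ones), while incidence counts only new, in-period injuries, which is why incidence is the better marker of care quality.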

Data content

Apart from patient demographics, the source (hospital-acquired or pre-existing), severity and anatomical location of the pressure injuries, the following data can also be collected to assess the awareness of pressure injury prevention:

  • Compliance with risk and skin assessment and reassessment
  • Preventative action taken
  • Appropriateness of the support surface and the length of time to access pressure-redistributing equipment, especially for community clients
  • Compliance with IIMS reporting

The extent of data collected will also depend on the purpose of the study and the resources available, such as the number of surveyors, the number of eligible patients to be surveyed, the setting (inpatient or community), and who will manage the data. The quality of analysis can also be affected by available resources.

Planning and study methodology

At HNELHD it has been necessary to obtain ethics approval before conducting the district-wide annual study. The methodology of the study has been designed to ensure accurate, valid and consistent data collection, enabling trending, benchmarking and comparison with other studies.

The point prevalence study can be conducted, for example, by physical examination and medical record audit, and by extracting data from clinical record coding or IIMS. The methodology also sets out explicit inclusion and exclusion criteria, such as age, ward, and medical specialty. The methodology of a study in acute settings varies from that in primary care settings.

Validity of the study

To ensure the quality of data collected, standardised education and training and inter-rater testing are required for surveyors prior to the prevalence study11. An external assessor or independent surveyor is allocated to each survey team to mitigate bias.

HNELHD’s first point prevalence survey, conducted in 2008, was facilitated by district project team members leading the surveillance teams. Local ownership of annual surveillance has transitioned as surveyors were trained and assessed. PIPP district team members were able to support surveillance teams by providing education and validation regarding the methodology and process, ensuring that the survey is consistent and of appropriate quality.

Because a study with a higher consent rate is more likely to reflect the true prevalence rate of pressure injuries, all facilities in HNELHD are required to achieve a minimum 75% consent rate for eligible patients in prevalence studies.

Leadership has been a crucial component of the program’s success12. Executive sponsorship by the District Director of Nursing and Midwifery was vital in recognising the role of nurses in identifying risk and in preventing and managing pressure injuries. Executive sponsorship has engaged strategic and clinical leaders responsible for implementing standardised prevention and management, and the clinical governance unit is engaged and involved, enabling pressure injury prevention to interface with other district-wide programs. A range of forums has been used to convey a consistent message to staff at all levels of the organisation; these focus on clinical quality and patient care and include Senior Nurse Management, executive leadership, and Nurse Educator/Nurse Practitioner/Clinical Nurse Consultant forums. They provide an opportunity for the Pressure Injury project team to report on key results and trends emerging from prevalence studies and to engage with staff on strategies for improvement.

The interdisciplinary district project team consists of members from clinical nursing, allied health and management with medical staff contributing to sub-committee activities as key stakeholders. Membership is based on geographical location, clinical expertise and ability to influence and lead1. The team is responsible for the development of processes, policies, procedures and engaging content matter experts around current evidence based practice to prevent and manage pressure injuries.

To cascade a consistent message to 41 inpatient facilities and 44 community health centres, pressure injuries have been included as a key performance indicator (KPI) for clinical managers. This facilitates engagement and clarifies accountability for clinicians at all levels of the organisation in preventing adverse events. The pressure injury prevention and management e-learning program interfaces with the district learning management system (LMS). All clinicians who prescribe pressure-relieving/redistributing equipment are required to undertake the learning program, and managers have access to their staff's completion details. The online learning course is also required of everyone who participates as a surveyor.

Following the first district-wide point prevalence study, a mattress replacement scheme and an equipment algorithm were rolled out. Hiring of powered mattress systems was reduced, saving $500,000 in the first year. The reduction in preventable pressure injury prevalence and severity has also lowered the cost of services by reducing the use of consumables and length of stay.

The annual point prevalence surveillance is the key evaluation process for the implementation of pressure injury prevention and management strategies. Analysis of trends informs the priorities and direction of future strategies. Undertaking the survey in all 41 inpatient facilities requires the commitment of approximately 200 staff members.

Survey teams of three draw on all levels of nursing, from undergraduate students and local university academic staff to senior nursing leaders. Interdisciplinary engagement has grown, with increasing numbers of allied health staff participating in these teams.

The Pressure Injury project team analyses the data and prepares reports for the district, clusters, facilities and wards. Accompanying these are survey recommendations, individualised according to the results. The reports are pivotal to the communication process and are published on the intranet site so that all managers can compare their results with those of other services.

In recent years, alignment with the clinical governance unit has been strengthened by the inclusion of pressure injuries in the National Safety and Quality Health Service (NSQHS) Standards as Standard 8.13 The district uses a range of change management strategies to cascade strategies, articulate accountabilities and embed expected behaviours.

The Crystal model developed in HNELHD is an example of a systematic approach that engages both executive leadership and clinicians in the prevention and management of pressure injuries. Implementing best practice strategies has made a real difference to the quality and safety of care for the patients we serve.

Additional resources are now available to support organisations in developing programs to minimise the prevalence of pressure injuries. In 2012 the Pan Pacific Clinical Practice Guideline for the Prevention and Management of Pressure Injury8 was released; it can be freely accessed on the Australian Wound Management Association website. These guidelines have been acknowledged across Australia as a best practice resource for organisations preparing to meet the National standards, improve outcomes and reduce the incidence of this avoidable wound.

“Prevalence studies and clinical audits should be conducted to measure the extent of the organisation's pressure injury problem and the effectiveness of the PIPP implemented.”

Margo Asimus is a Nurse Practitioner – Wound Management and Clinical Lead (2008-2013) for the Pressure Injury Prevention Program in the Hunter New England Local Health District. She participated in the guideline development group for the Pan Pacific Clinical Practice Guideline for the Prevention and Management of Pressure Injury. Margo is Vice President of the Australian Wound Management Association (AWMA) and President of AWMA-NSW.

Felicity Williams is the Nurse Manager Professional Development for HNELHD. She has additional midwifery and management qualifications and broad experience across metropolitan, rural and remote nursing. She has had management responsibility for the Pressure Injury Prevention Program at HNELHD since 2008, along with its education component at local and state levels.

Pui Ling (Iris) Li currently works as a Project Officer. She has been involved in the Pressure Injury Prevention Program since 2009. Iris completed her Master's degree in Gerontic Nursing in 2005.


1. Asimus M, Maclellan L, Li P. Pressure ulcer prevention in Australia: the role of the nurse practitioner in changing practice and saving lives. Int Wound J. 2011;8(5):508–13.

2. Graves N, Birrell F, Whitby M. Effect of pressure ulcer on length of hospital stay. Infect Control Hosp Epidemiol. 2005;26(3):293–7.

3. Victorian Quality Council. VQC State-wide PUPPS Report 2003 [Internet]. Victoria: Department of Human Services; 2004 [cited 21/1/2014]. Available from: report.pdf

4. Victorian Quality Council VQC State-wide PUPPS Report 2006. [Internet]. Victoria; Department of Human Services; 2007 [cited 21/1/2014] Available from:

5. Strachan V, Prentice J, Newall N, Elmes R, Carville K, Santamaria N, Della P. WoundsWest Wound Prevalence Survey 2007 State-wide Report. Perth, Western Australia: Ambulatory Care Services, Department of Health; 2007.

6. Jackson T, Nghiem HS, Rowell D, Jorm C, Wakefield J. Marginal costs of hospital acquired diagnoses: information for priority setting for patient safety programs and research. J Health Serv Res Policy. 2011;16(3):141–6.

7. Rayner R, Asimus M, Li PL. Pressure Injury. In: Swanson T, Asimus M, McGuiness B editors. Wound Management for the Advanced Practitioner. 1st ed. London: Academic Press; In Press May 2014.

8. Australian Wound Management Association. Pan Pacific Clinical Practice Guideline for the Prevention and Management of Pressure Injury. Osborne Park, WA: Cambridge Media; 2012.

9. Asimus M, Li P. Pressure ulcers in home care settings: is it overlooked? Wound Practice and Research. 2011: 19(2): 88-97.

10. International guidelines. Pressure ulcer prevention: prevalence and incidence in context. A consensus document. Australia: IP Communications, 2014.

11. Prentice JL, Stacey MC, Lewin G. An Australian model for conducting pressure ulcer prevalence surveys. Primary Intention. 2003;11(2):87–109.

12. Studer Group. The Nurse Leader Handbook, Florida: First Starter Publishing; 2010.

13. Australian Commission on Safety and Quality in Health Care. Safety and Quality Improvement Guide Standard 8: Preventing and Managing Pressure Injuries (October 2012). Sydney. ACSQHC, 2012.

Evaluation of a Multistate Public Engagement Project on Pandemic Influenza


Summary: Program evaluation of public engagement processes is important in understanding how well these processes work and in building a knowledge base to improve future engagement efforts. This program evaluation examined a CDC initiative in six states to engage the public about pandemic influenza. Evaluation results indicated the six states were successful in engaging citizens in their processes, participants became more knowledgeable about the topic, citizens believed the process worked well, and projects were successful in influencing opinions about social values. Lessons learned from the evaluation included the importance of communicating evaluation expectations early in the process; creating a culture of evaluation through technical assistance; ensuring resources are available for on-site evaluation collaboration; and balancing the need for cross-site data with the interests of local projects to capture evaluation data relevant to each unique project.
Keywords: Public engagement, public health, communication, program evaluation, public policy, participatory model, participatory medicine.
Citation: Bulling D, DeKraai M. Evaluation of a multistate public engagement project on pandemic influenza. J Participat Med. 2014 Mar 8; 6:e5.
Published: March 8, 2014.
Competing Interests: The authors have declared that no competing interests exist.

The US Department of Health and Human Services, Centers for Disease Control and Prevention (CDC) funded public engagement initiatives in six states (Minnesota, Washington, Ohio, Massachusetts, Hawaii, and Nebraska). The purpose of these initiatives was to include citizens in values-based public policy development pertaining to pandemic influenza. In this paper, we describe the evaluation of this multistate project, and based on our experience in designing and implementing this evaluation, we share lessons learned that may be useful in evaluating public engagement processes in general.

Overview of the Evaluation

We used a participatory model to evaluate the project (see generally [1], [2], [3]). We chose this model because the participatory approach ensures that the needs of project sponsors and the local project implementers are incorporated as the project unfolds over time. This approach is particularly useful for complex projects that are collaborative in nature. [4][5] This project included collaborations between the funder (CDC) and each of the implementation teams at the state level along with project facilitators and the evaluation team. We believe that communication between the evaluation team and the funder should be clear, consistent, and collaborative over the life of the project. The evaluation team was available for planned and impromptu discussions with the state implementers, facilitators and the funder throughout the project, which enhanced the quality of the final product.

Evaluation Questions

We began our evaluation process by reviewing the original request for proposals and talking with the project sponsors to better understand the purpose and desired outcomes of the evaluation. From these discussions emerged key questions of interest to the CDC:

1. How successful was each project in attracting participation by sufficient numbers of citizens with a broad diversity of perspectives?
A rule of thumb for the CDC was to attract 100 individuals to each state citizen meeting. This number was not based on any statistical model of representativeness; rather, project sponsors considered this level of participation reasonable for communicating to policy makers a broad involvement of citizens within each state. This level of participation would also allow process facilitators to structure meetings that included both small-group and large-group discussions.

Project sponsors and facilitators were interested in recruiting a diversity of citizens representing multiple perspectives. While an exact replication of demographics within each community was not intended, it was a goal to attract citizens from different racial/ethnic groups, income levels, education backgrounds, age, gender, and profession. As a normative matter, commentators have asserted that involving a representative cross-section of the public to participate in deliberative forums is an ideal goal. Such representativeness is important to ensure all members of a community potentially affected by the policy matter at issue are provided a voice in the discussion. [6][7] Practitioners have also found that participants find greater satisfaction and value in participatory processes in which a wide diversity of viewpoints is shared. [8] Additionally, government sponsors of participatory processes benefit from listening to and receiving a broad – not narrow or selective – array of input. [9]

Recruitment of a representative cross-section can be challenging. Often, participatory forums can be dominated by special interest groups or others who represent a narrow personal or professional interest in a policy matter, rather than the interests of the community as a whole.[10] Research has also shown that some participatory forums tend to disproportionately attract individuals who are white, female, high-income, older, and have high educational levels. [11] Strategies to obtain more representative participants might involve using aggressive outreach and promotion efforts or oversampling techniques. Additionally, the use of a financial incentive can offset costs incurred through travel, daycare, or taking a day off from work, and attract individuals to participate in forums who are not motivated by personal or professional interests. [7]

2. How successful was the process in ensuring a sufficient level of citizen knowledge about pandemic influenza policy so they could engage in informed discussions?
One of the goals of the process was to ensure a sufficient level of participant knowledge so they could engage in informed dialogue about the issues. A process of education, or increase in knowledge among participants, is implicit in an effective deliberative experience. Thus, the increase in knowledge among participants and their perceptions of the value of the discussion experience are measurable indicators of a successful deliberative discussion. [12][13]

The evaluation allowed us to test assumptions for each state, including (1) the degree to which the process significantly increased the relevant knowledge of participants; (2) whether participants believed they had sufficient knowledge to engage in informed discussion and make reasoned recommendations; and (3) whether the process produced some equalization of knowledge among participants; in other words, while participants were likely to have varying levels of knowledge going into the deliberation, the process might close this knowledge gap, resulting in a more equitable discussion of the issues. Through the evaluation, we also examined whether the information was successfully conveyed to specific populations based on demographics.

3. Did the process result in a balanced, honest, and reasoned discussion of the issues and what would have improved the process?
Generally speaking, a deliberative experience is one in which participants carefully consider the pros and cons of a policy issue in a reasoned, informed, and balanced discussion. [14][15] A good deliberative experience involves listening to all sides of a debate, analysis of relevant information or evidence, and a discussion environment free of bias, peer pressure, or over-reliance on rhetoric. [7][16][17] A positive deliberative process may thus amount to a successful problem-solving experience, in which a solution to a policy question is arrived at through reasoned and informed discussion. [18] Other components of deliberative quality include a respectful discussion tone, transparency and clarity of meeting objectives and rules, equal and fair treatment of participants, and comfort with the meeting's physical location and environment. [8] Characteristics of a successful deliberation, such as exposure to different viewpoints, factual learning, and careful consideration of the issues, are likely to result in a shift in opinions or attitudes about the policy question at issue.

It is assumed that a well-facilitated meeting will result in a rich discussion of the issues in which multiple perspectives are considered and well-reasoned decisions or recommendations are made. To achieve this desired outcome, there are underlying assumptions about the process that can be tested through the evaluation including (1) whether the process is perceived to be fair by participants, (2) whether individual participants felt comfortable sharing their perspectives, (3) whether discussions were dominated by select individuals or groups, (4) how well discussions helped participants understand the trade-offs involved in policy decisions, (5) whether participants are satisfied with the outcome of the process, (6) the degree to which the process was perceived to be free from bias, and (7) whether all important points and perspectives were voiced.

4. How did the process affect citizen perceptions about pandemic influenza policy options and values underlying those goals or options?
One of the assumptions of public engagement and deliberative processes is that through understanding the issues, sharing perspectives, and gaining an appreciation of the trade-offs involved in policy decisions, participants change their opinions about the policies that should be implemented. If this were not the case, public input could be attained much more easily and less expensively through public polling. This deliberative aspect is considered value-added because the outputs will be more thoughtful and well reasoned. The evaluation could test this assumption by examining changes in perspectives about vaccine goals and the values relevant to those goals. In addition, we hypothesized that because participants had a chance to obtain similar knowledge about pandemic influenza and develop a greater depth of understanding about the policy options, their perspectives would be more similar after participation than before. In other words, the deliberative process would result in a convergence of beliefs among participants. We were also interested in whether there were differences among demographic groups in perspectives about policy choices.

5. Did the process affect citizen trust in government and support for policy decisions?
The primary goal of this public engagement process was to produce citizen and stakeholder perspectives for state-level policy makers to consider as they grapple with important decisions. The evaluation also tested whether the process had an impact on participant beliefs in other areas: specifically, whether participants had greater trust in government and greater willingness to support policy decisions by public officials who considered their input. The evaluation tested this assumption by assessing trust in various levels of government before and after the process.

6. Did the process empower citizens to participate effectively in policymaking work?
Another by-product of public engagement is that citizens might feel more empowered by participating in public dialogue about important issues and increasing their involvement in activities designed to improve society or their community (e.g., voting, volunteering, lobbying elected officials). [19] The evaluation tested this assumption by assessing changes in participant planned activities such as participating in civic activities and public policy generally.

7. How did decision makers use citizen information?
A key indicator of the success of a participatory process is the extent to which the process resulted in any significant policy impact. Identifying what impacts equate with success is, however, a subjective exercise. Arguably, the optimal goal of a participatory process is for the public to have a direct opportunity to make policy that reflects their preferences and priorities. However, successful impact can have other manifestations. Public participation can inform or improve decision-making; it can connect the public with each other and policymakers, build trust in government, provide opportunities for public education about policy issues, and foster healthy discourse and discussion in general. [20] In a minority of cases, policymakers can have less virtuous objectives behind sponsoring participatory processes, such as to placate select interests, manage public impression, or generate public acceptance of a pre-determined policy. [21]

Impact can be measured in a number of ways. The extent to which a participatory process does directly influence policy has been measured through policymaker perceptions of how public input improves or informs policy decisions. [22] Additionally, changes in citizen trust and confidence in government, or perceptions of government responsiveness, can indicate a positive impact in participant attitudes towards government. [11] Commentators have also argued that participating in robust, deliberative experiences about policy can increase political sophistication among participants, [23][24] and research has shown such an increase can indeed occur after citizens engage in deliberative forums, [25] or that participants’ policy opinions change in other ways. [26]

Once recommendations from the citizen engagement efforts are communicated, there is an assumption (or expectation) that decision makers will carefully consider this information as they make policy. Through the evaluation, we hoped to understand how information from the public engagement process was communicated to decision makers, how they weighed the citizen and stakeholder input in relation to various other information sources, and the extent to which public engagement input impacted policy decisions. Specifically, we planned to assess (1) how well decision makers understood the process, (2) whether decision makers read the report or outputs from the process, (3) whether public input from the process was part of the information considered in developing the policy, (4) whether public input became part of the evidence or justification for or against certain alternatives, and (5) whether public input affected the policy in a clearly defined way. We also planned to explore the expectations of decision makers regarding the public engagement process and the type of information resulting from the process that would be useful in making policy decisions.

8. How well did the process increase state and local capacity to engage the public on policy choices?
One of the goals of the project was to increase the capacity of states and local jurisdictions to involve the public in decision making on an ongoing basis and to sustain this capacity after the project. The CDC funded technical assistance to assist each state in designing public engagement processes, identifying and recruiting participants, forming teams to identify public policy objectives, developing agendas, incentivizing participation in public engagement processes, facilitating meetings, incorporating citizen input into the decision making process, and communicating results to citizens.

Evaluation Methods

We used a mixed methods evaluation design including both quantitative and qualitative information. The protocol was submitted to the University of Nebraska Institutional Review Board and determined to be program evaluation and not human subject research. There were five major components to the evaluation methodology: (1) a pre-post survey conducted at each citizen and stakeholder meeting to assess change in knowledge, opinions about social values, and trust in government, (2) a survey conducted after each public engagement meeting to assess perceptions about the process, (3) focus groups and individual interviews conducted with randomly selected participants immediately after the meetings to assess empowerment and perceptions about the process, (4) key informant interviews with state officials, facilitation contractors, and CDC representatives to assess changes in capacity for engaging the public in policy decisions and how the public input was used in policy development (after meetings had all been conducted), and (5) a review of documents in each state to assess the overall process and how information was conveyed to policy makers.

All surveys and interview questions went through a rigorous process of cognitive testing for comprehension and ease of administration. Responses for survey items were randomly ordered where possible to account for selection order bias; three versions of each survey were produced. A coding system was developed for pre-post surveys so that before and after measures could be matched by individual respondent. Qualitative data for this evaluation were drawn from 69 interviews comprising over 24 hours of audio data; five focus groups held after public engagement events; meeting summaries and notes from all six project sites; notes from contractor conference calls; evaluator observations of public engagement events; and material from two lessons-learned meetings held at the beginning and end of the project period. These data were used to help document each state's process of implementing its public engagement project. Initial codes used to analyze the focus group and interview data were derived from the evaluation questions. Additional codes emerged using the constant comparative technique [27] with the aid of the Atlas.ti qualitative analysis software program. Multiple coders reviewed the data and periodically met to resolve differences in code interpretation. This approach of comparing data and reaching consensus is part of Consensual Qualitative Research (CQR) and is consistent with the constant comparative technique. [28]
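The respondent-code matching described above can be sketched in a few lines. The codes and knowledge scores below are hypothetical, not drawn from the study's data; the point is only that respondents missing either wave drop out of the pre/post comparison:

```python
# Hypothetical respondent codes mapped to knowledge scores; a minimal
# sketch of matching pre- and post-surveys by an anonymous code.
pre_surveys = {"A17": 4, "B02": 5, "C33": 3}    # code -> pre-test score
post_surveys = {"A17": 6, "C33": 5, "D09": 7}   # D09 skipped the pre-test

# Only respondents present in both waves enter the pre/post analysis;
# B02 (no post-test) and D09 (no pre-test) are excluded.
matched = {code: (pre_surveys[code], post_surveys[code])
           for code in pre_surveys.keys() & post_surveys.keys()}
```

Matching on an anonymous code preserves respondent privacy while still allowing each person's change score to be computed.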

Evaluation Results

A comprehensive review of the evaluation results is beyond the scope of this paper; however, we will highlight the major findings. (The full evaluation reports can be found on the University of Nebraska Public Policy Center website.[29] )

1. How successful was each project in attracting participation by sufficient numbers of citizens with a broad diversity of perspectives?
The six states were successful in engaging sufficient numbers of citizens in dialogue about pandemic influenza policy issues; however, most states did not reach the goal of attracting 100 participants to meetings. Projects were successful in attracting a diversity of citizens to deliberations. Demographic characteristics of participants did not always match those of the broader communities in which the meetings were held, but in some cases this was intentional. For example, in Washington there was a concerted effort to partner with community groups who could reach out to specific minority populations. In several states the focus was on attracting certain sectors or groups within their communities rather than convening a representative sample; in Nebraska the focus was on Native Americans/American Indians. Males were underrepresented across all states, and older persons tended to be overrepresented. Most of the citizen meetings were representative of the broader community with respect to race and ethnicity; for meeting locations that were not representative, minority populations tended to be overrepresented. Participants also reflected a diversity of education levels, income levels, and whether they had children living at home. At all locations and across states, citizens on average agreed with the statement “Participants at this meeting represented a broad diversity of perspectives.” (See Figure 1.)

Figure 1. Perceptions of diversity by state (citizens).

2. How successful was the process in ensuring a sufficient level of citizen knowledge about pandemic influenza policy so they could engage in informed discussions?
For the most part, projects were successful in increasing the knowledge of citizens so they could engage in informed discussions about pandemic influenza. Knowledge increased in all states; however, the change was statistically significant in only four of them. Citizens generally believed they had enough knowledge to form well-informed opinions about decisions related to pandemic influenza. Contrary to expectations, however, the processes did not significantly level the playing field in terms of knowledge; participants were as varied in their level of knowledge at the end of the process as they were when they walked in the door. (See Table 1.)

Table 1. Participant knowledge by state.
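A significance claim about knowledge change typically rests on a paired test over the matched pre/post scores. The sketch below is illustrative only; the scores are invented, not the study's data, and it uses only the standard library rather than whatever tooling the evaluators actually used:

```python
import statistics

def paired_t(pre, post):
    """Paired t statistic for matched pre/post scores (df = n - 1).

    Compare |t| against the critical value from a t table to judge
    whether the mean knowledge change is statistically significant.
    """
    diffs = [b - a for a, b in zip(pre, post)]  # per-respondent change
    n = len(diffs)
    mean_d = statistics.fmean(diffs)
    sd_d = statistics.stdev(diffs)              # sample standard deviation
    return mean_d / (sd_d / n ** 0.5), n - 1

# Illustrative knowledge scores (0-10 scale) for eight matched respondents
pre = [4, 5, 3, 6, 5, 4, 7, 5]
post = [6, 7, 5, 7, 6, 6, 8, 7]
t, df = paired_t(pre, post)   # t ≈ 8.9 with df = 7
```

With df = 7, any |t| above 2.365 is significant at the 0.05 level (two-tailed), so this illustrative cohort would count as a significant knowledge gain; a state where |t| fell below that threshold would not.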

3. Did the process result in a balanced, honest, and reasoned discussion of the issues and what would have improved the process?
Participants in the public engagement processes generally thought the deliberative processes were high quality. Participants believed the discussions were fair to all participants, individuals were comfortable talking in the discussion, the process helped them better understand the types of trade-offs involved in policy decisions, and the process produced independent information and resulted in a valuable outcome (see Table 2).

Table 2. Perceptions of process by state (citizens).

4. How did the process affect citizen perceptions about pandemic influenza policy options and values underlying those goals or options?
The projects were generally successful in influencing opinions about social values and policy options related to pandemic influenza. Citizens' post-test ratings of the importance of social values were significantly different from their pre-test ratings. This indicates that, as part of the deliberative processes conducted in each state, citizens changed their opinions about social values after being exposed to an educational presentation and discussing policy options. This result is important because it demonstrates that deliberative processes provide a different quality of input than surveys or polls.

5. Did the process affect citizen trust in government and support for policy decisions?
Citizens did not significantly change their trust in various levels of government as a result of the process. However, participants tended to believe their input would be used by decision makers. Stakeholders and citizens expressed hope in interviews and focus groups that decision makers would use the information offered at the events when making policy-level decisions (see Table 3). There was no single expectation about how the information would be used, but many participants wanted to receive some sort of feedback from the project sponsors with that information. The presence of a decision maker at citizen events seemed to be proof to many that the information generated at the event was considered important by someone. Even when survey ratings of trust in officials were low, citizens' and stakeholders' comments in interviews about the official present at the event, or the office represented, were positive.

Table 3. Perceptions of process by state (citizens).

6. Did the process empower citizens to participate effectively in policymaking work?
To some extent the deliberative processes empowered citizens to participate effectively in public decision-making work. Citizens from all states reported in interviews and focus groups that they felt empowered and heard at the deliberation events. They were unsure of the impact their participation would have on decisions, but in almost every instance held out hope that the results of the deliberation would be considered when decisions were made. Almost all of the citizens interviewed enjoyed the deliberation events and appreciated the organization and facilitation. The seriousness of the event along with the presence of public officials led citizens to conclude their input would be taken into consideration, which was empowering. In one state, however, citizens perceived a public official as treating the event “casually,” which left them with a feeling that their input was not important. Conversely, in several states a public official traveled a great distance to attend and stayed for the entire event, which was noted by citizens as a sign their work was important.

Many of the citizens made comments about being empowered to serve as a conduit of information for their peers as a result of participating in the deliberative events. They may not have agreed with other discussants or with recommendations resulting from the event, but they generally believed they were better equipped to relay information to friends, family, neighborhoods or organizations as a result of participating in discussions. Empowerment to participate in public decision-making work seemed to emanate from different aspects of the events. For example, Nebraska tribal participants commented on the empowerment value of the information received at citizen gatherings and the value of the discussions at the stakeholder gathering. Citizens generally reported in interviews and focus groups they would consider attending another deliberation event on other topics as a result of their experience with this one.

2. How did decision makers use citizen information?
The state projects had some success in informing and assisting state and local decision makers involved in pending policy decisions related to pandemic influenza. Given the limited time period to assess this aspect, it is unclear how these deliberative processes will impact long-term decisions. Interviews with state level officials engaged in public health policy decisions revealed varying levels of immediate project impact with decision makers. Generally, the largest impact was personal and related to decision maker attendance at the event rather than from upward movement of a document or set of recommendations resulting from the event. In the limited time frame of the evaluation, states were still preparing final reports from the project and were not able to point to official documentation that reflected incorporation of citizen input in official state plans for pandemic preparation or response. This does not, however, tell the full story of how policy maker decisions were impacted. For example, one policy maker talked about the very real decisions that had to be made when the H1N1 outbreak occurred in the middle of the project; she said it was valuable to hear “real people wrestle with these issues while I was wrestling with it. It gave form and substance to conversations we need to have.” This sentiment was echoed by policy makers from every project who attended the project-sponsored deliberations. This influence was translated into operational decisions at the policy level that were not scripted by planning documents.

3. How well did the process increase state and local capacity to engage the public on policy choices?
There appeared to be some increase in state and local capacity to effectively engage the public in policy choices. The level of expertise in the public deliberation model envisioned by the CDC varied across the states receiving the cooperative agreement for this project. The project proposals contained a mix of traditional and innovative public information and engagement models. All jurisdictions receiving the awards were committed to engaging the public, but state project directors reported challenges reconciling their project designs with federal expectations to use a specific deliberative process with federal contractors as facilitation experts rather than the locally trusted contractors envisioned within their project proposals.

The states with prior experience using the model had less difficulty organizing and carrying out their projects than the states that had not been exposed to it prior to receiving funding via the cooperative agreement. All state project leads reported a temporary increase in capacity with the infusion of funds to support public engagement efforts. Although all states recognized value in engaging citizens and extracting focused input on issues, the time and cost of obtaining input using the deliberative model was perceived as prohibitive and not sustainable without additional funding to bolster capacity on an ongoing basis.

Lessons Learned

Many of the lessons learned from past public engagement projects have been associated with implementation of a process with citizens, stakeholders and policy makers. Evaluation lessons have resulted in recommendations to involve evaluators early in the process, create shared understanding of the importance of evaluation, clearly document the process to help explain evaluation results and involve policy makers early to track the impact of the public engagement process. The cross-site evaluation for pandemic influenza demonstration projects yielded four similar lessons learned that inform the role and function of evaluation and evaluators in multi-site public engagement projects.

1. Communicate cross-site or national evaluation expectations to project designers prior to their submission of project proposals.
State proposals for the pandemic influenza demonstration project included several types of public engagement models. Each project addressed policy issues important to the state or local organizers related to planning for pandemic influenza, but each varied in its approach to engaging the public. The cross-site evaluation was designed to answer broad questions to assess impact across all of the projects. The states submitting project proposals were not aware of the cross-site evaluation goals when they designed their projects, so many had included evaluation components of local interest. Once awarded, states were told they were expected to use a single evaluation contractor to ensure cross-site evaluation needs were met. Although project sites were interested in using the cross-site tools, they had to rethink their timelines and plans to incorporate them. The local/state partners who were testing innovative public engagement models were asked to incorporate the cross-site tools even though those tools were designed with the assumption that engagement would be in person rather than online or via other mediums. We believe the local/state partners would have been more accommodating of the cross-site evaluation if they had been able to consider how it fit while designing their project applications. Setting the expectation of participation in cross-site evaluation activities early helps project planners incorporate evaluation components in their designs.

2. Create an expectation that cross-site evaluators will provide technical assistance to local/state projects to ensure local evaluations are meaningful and compatible with cross-site evaluation needs.
Traditional evaluation usually means a neutral entity observes, collects data, and provides feedback to project organizers and sponsors about process and outcomes. In the pandemic influenza demonstration project, the evaluation could have been strengthened if the cross-site evaluators’ role had been expanded to include technical assistance for local/state projects as they developed local evaluation questions. The cross-site material was valuable, but in some cases not as meaningful to local/state policy makers as it could have been. The cross-site evaluators offered to add questions or data points to the instruments, but local teams were left with the responsibility of identifying the type of data they desired. In retrospect, this customization could have been stronger if cross-site evaluation team members had been able to provide more in-depth technical assistance to the project sites as they considered the process and outcome measures that were meaningful to their policy makers, as well as how the cross-site evaluation results could be used to strengthen their projects. The request for proposals for the overall demonstration project did not include a requirement for local evaluation personnel to unburden local/state projects by providing evaluation for them. However, the lesson learned was that cross-site evaluation would be more locally meaningful and effective if the evaluator’s role were expanded to include technical assistance to ensure local needs are adequately addressed.

3. Site visits by cross-site evaluators would increase applicability of results for local/state projects.
The pandemic influenza demonstration project began with a lessons-learned conference that brought successful state project applicants together with previous public engagement organizers so they could benefit from others’ experience. Cross-site evaluators were introduced to state project personnel at this forum. This was a good beginning, but in the future we recommend following up with an in-person site visit as early in the project as possible. Although telephone contact was helpful, we believe cross-site evaluation expectations and adaptations could have been made more meaningful to local/state projects if on-site consultation had been built into the overall design and the expectations of evaluators. Early on-site consultation gives evaluators an opportunity to communicate cross-site evaluation expectations, answer questions about the evaluation, and begin assisting projects with identification of local evaluation needs. This is recommended both where technical assistance is provided and where cross-site evaluation protocols are expected to be carried out by local organizers. On-site consultation would also be beneficial at the data collection stage and at the end, when results are being interpreted. Increased involvement of local/state project personnel in interpreting the results of cross-site and site-specific data strengthens the applicability of findings and is consistent with the participatory model of evaluation.

4. Balance flexible evaluation design with tools that capture cross-site data effectively.
This evaluation required flexibly balancing local and federal expectations and tools. The role of the cross-site evaluator is to err on the side of comparison across sites rather than customization to meet local needs. However, capturing the effectiveness of different models of public engagement required flexibility on the part of evaluators. For example, capturing change in participants’ knowledge requires evaluators to understand the knowledge targets of project organizers. Cross-site comparison of similar knowledge questions works only when the same material is presented or made available to participants at each site. The variability in projects, presenters, presentation medium, and style could only be documented, not controlled. Change in knowledge as a cross-site question may therefore be more effectively assessed by incorporating local knowledge targets rather than predetermining general knowledge questions.

The lessons learned from this evaluation can be of use to government planners as they consider how to structure cross-site evaluation components in future projects but they are also applicable to other planners and practitioners who want to incorporate evaluation in their work. For example, local public health agents may wish to use public engagement processes in neighborhoods related to a specific health issue and the methods they use may differ in each location to accommodate the culture of the area. Evaluation of the engagement processes across neighborhoods would be akin to the project we document here across states and the lessons learned could be of benefit to the public health community.



Copyright: © 2014 Denise Bulling and Mark DeKraai. Published here under license by The Journal of Participatory Medicine. Copyright for this article is retained by the author, with first publication rights granted to the Journal of Participatory Medicine. All journal content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 License. By virtue of their appearance in this open-access journal, articles are free to use, with proper attribution, in educational and other non-commercial settings.

JMIR–Evaluating User Experiences of the Secure Messaging Tool …


Original Paper

Evaluating User Experiences of the Secure Messaging Tool on the Veterans Affairs’ Patient Portal System

Jolie N Haun1,2, EdS, PhD; Jason D Lind1,3, MPH, PhD; Stephanie L Shimada4,5,6, PhD; Tracey L Martin7, RN, MSN; Robert M Gosline8; Nicole Antinori1, MBA; Max Stewart9; Steven R Simon9, MD, MPH

1Department of Veterans Affairs, HSR&D/RR&D Center of Innovation on Disability and Rehabilitation Research, James A Haley VA Medical Center, Tampa, FL, United States
2Department of Community & Family Health, University of South Florida, College of Public Health, Tampa, FL, United States
3Department of Anthropology, University of South Florida, Tampa, FL, United States
4Department of Veterans Affairs, Center for Healthcare Organization and Implementation Research (CHOIR) and National eHealth Quality Enhancement Research Initiative (QUERI) Coordinating Center, Edith Nourse Rogers Memorial VA Hospital, Bedford, MA, United States
5Department of Health Policy and Management, Boston University School of Public Health, Boston, MA, United States
6Division of Health Informatics and Implementation Science, Department of Quantitative Health Sciences, University of Massachusetts Medical School, Worcester, MA, United States
7Department of Veterans Affairs, VA New England Health Care System, Bedford, MA, United States
8Department of Veterans Affairs, James A Haley VA Medical Center, Tampa, FL, United States
9Department of Veterans Affairs, Center for Healthcare Organization and Implementation Research, Section of General Internal Medicine, VA Boston Healthcare System, Boston, MA, United States

Corresponding Author:

Jolie N Haun, EdS, PhD

Department of Veterans Affairs
HSR&D/RR&D Center of Innovation on Disability and Rehabilitation Research
James A Haley VA Medical Center
8900 Grand Oak Cir (151R)
Tampa, FL, 33637-1022
United States
Phone: 1 813 558 7622
Fax: 1 813 558 7616


Background: The United States Department of Veterans Affairs has implemented an electronic asynchronous “Secure Messaging” tool within a Web-based patient portal (ie, My HealtheVet) to support patient-provider communication. This electronic resource promotes continuous and coordinated patient-centered care, but to date little research has evaluated patients’ experiences and preferences for using Secure Messaging.
Objective: The objectives of this mixed-methods study were to (1) characterize veterans’ experiences using Secure Messaging in the My HealtheVet portal over a 3-month period, including system usability, (2) identify barriers to and facilitators of use, and (3) describe strategies to support veterans’ use of Secure Messaging.
Methods: We recruited 33 veterans who had access to and had previously used the portal’s Secure Messaging tool. We used a combination of in-depth interviews, face-to-face user-testing, review of transmitted secure messages between veterans and staff, and telephone interviews three months following initial contact. We assessed participants’ computer and health literacy during initial and follow-up interviews. We used a content-analysis approach to identify dominant themes in the qualitative data. We compared inferences from each of the data sources (interviews, user-testing, and message review) to identify convergent and divergent data trends.
Results: The majority of veterans (27/33, 82%) reported being satisfied with Secure Messaging at the initial interview; satisfaction ratings increased to 97% (31/32, 1 missing) during follow-up interviews. Veterans found Secure Messaging useful for communicating with their primary care team to manage health care needs (eg, health-related questions, test requests and results, medication refills and questions, managing appointments). Four domains emerged from interviews: (1) perceived benefits of using Secure Messaging, (2) barriers to using Secure Messaging, (3) facilitators for using Secure Messaging, and (4) suggestions for improving Secure Messaging. Veterans identified and demonstrated impediments to successful system use that can be addressed with education, skill building, and system modifications. Analysis of secure message content provided insights into reasons for use that were not disclosed by participants during interviews, specifically sensitive health topics such as erectile dysfunction and sexually transmitted disease inquiries.
Conclusions: Veterans perceive Secure Messaging in the My HealtheVet patient portal as a useful tool for communicating with health care teams. However, to maximize sustained utilization of Secure Messaging, marketing, education, skill building, and system modifications are needed. Data from this study can inform a large-scale quantitative assessment of Secure Messaging users’ experiences in a representative sample to validate qualitative findings.

(J Med Internet Res 2014;16(3):e75)


Keywords: veterans; secure messaging; patient-provider communication; Department of Veterans Affairs; usability testing; mixed methods; patient-centered care

Introduction

The Institute of Medicine (IOM) has identified patient-provider communication as a central component of improving quality of care and patient outcomes [1]. My HealtheVet is the Department of Veterans Affairs’ (VA) online patient portal and personal health record designed for veterans, active duty service members, and their dependents and caregivers. My HealtheVet provides veterans with tools (eg, Blue Button, VA immunization records, laboratory test results, prescription refills, VA appointments) to make informed decisions and manage their health care. “Secure Messaging” is an email-like electronic resource within My HealtheVet designed to promote continuity of patient-provider communication [2-4]. As VA further implements Patient Aligned Care Teams (PACT) as a model of the patient-centered medical home, secure messaging is emerging as a key mechanism of communication between veterans and their health care team members. Successful implementation of secure messaging is therefore a priority not only for VA but also for other health care systems in the United States that strive to adopt principles of the patient-centered medical home. Moreover, outside VA, providers are being incentivized through Stage 2 Meaningful Use requirements of the Medicare Electronic Health Record (EHR) Incentive Program to use secure messaging with at least 5% of their patients to communicate relevant health information [5].

Previous work has demonstrated the utility and value of providing patients access to their electronic health record [6-8]. Patients also value secure messaging to communicate electronically with their providers [2-4]. Effective use of secure messaging can improve patient self-care management, patient engagement, and utilization of health services. In addition to allowing an option for self-care management, this electronic tool holds potential for supporting clinical tasks including medication reconciliation [9]. Secure messaging supports system utilization benefits in addition to perceived benefits by patient and clinical team users. A recent study showed a 7-10% decrease in outpatient visits and a 14% reduction in telephone contacts as a result of secure messaging [10,11]. Houston et al reported that 95% of respondents felt email was a more efficient means of communication with their physicians than the telephone, and 77% noted being able to communicate adequately via email without a face-to-face appointment [4]. Patient use of secure messaging has been associated with improved outcomes for chronic conditions [10,12]. Zhou et al reported in a recent study that within a two-month period there were improvements in care as measured by the Healthcare Effectiveness Data and Information Set (HEDIS) [10]. Patients with diabetes using secure messaging improved on all measures recommended for testing and control of glucose, cholesterol, and blood pressure levels by an average of 2.4-6.5% compared with patients not using secure messaging. In the same study, rates of received health services improved in the secure messaging group compared to the control group [10]. These findings suggest that successful implementation of secure messaging may provide a viable cost-efficient means of patient-provider communication.

Implementing health information technology, such as secure messaging, requires systematic inquiry grounded in implementation science to identify barriers to and facilitators of user adoption and utilization. The Technology Acceptance Model (TAM) [13] and the Theory of Planned Behavior (TPB) [14] have been found to be useful in predicting adoption of technology. While secure messaging has been shown to promote continuous and coordinated patient-centered care, little research has evaluated patients’ experiences with and preferences for using secure messaging. In order to maximize sustained utilization of secure messaging, marketing, education, skill building, and minor system modifications may be needed. Evaluation of secure messaging users’ experiences using the TAM and TPB frameworks can increase our understanding of issues related to access, continuity, and coordination of care for veterans that will support adoption and long-term utilization of Secure Messaging in My HealtheVet.

Findings from the Secure Messaging evaluation research will inform efforts to transform care delivery both within and beyond the VA system. Thus, the aims of this study were to (1) characterize veterans’ beliefs, attitudes, and perceptions toward using the Secure Messaging tool, (2) describe the patterns of veterans’ use of Secure Messaging, (3) identify the barriers to and facilitators of using Secure Messaging, and (4) describe strategies for promoting facilitators and overcoming barriers to using Secure Messaging.

Study Design

This prospective, descriptive qualitative study used mixed methods to describe veterans’ experiences using Secure Messaging in the My HealtheVet portal. As an implementation study, the underlying objective was to understand veterans’ needs in order to promote increased access to and sustained utilization of the Secure Messaging tool. A combination of in-depth interviews, user-testing, a 3-month review of transmitted secure messages between veterans and staff, and 3-month follow-up phone interviews was used to characterize veterans’ Secure Messaging utilization. Demographic data as well as computer and health literacy measurements were collected through surveys and in-depth interviews at baseline and 3-month follow-up.

Setting and Participants

This two-site study was conducted at two large VA Medical Centers (VAMCs): the James A. Haley Veterans’ Hospital (Tampa, Florida) and the Veterans Affairs Boston Healthcare System (Boston, Massachusetts). We used administrative data to identify veterans at both VAMCs who had registered for My HealtheVet, completed the in-person process of authenticating their identity, and accessed the system to “opt in” to use Secure Messaging. This approach identified 3926 potential participants at Tampa and 924 at Boston. Next, randomization was used to create a contact list of 120 potential participants from each site list. All 240 potential participants were contacted and screened, and were purposively sampled based on their self-reported previous use of Secure Messaging. Participants were recruited until domain and theme saturation was reached.
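The two-stage selection described above (a fixed-size random contact list drawn from each site's eligible pool, followed by screening) can be sketched as follows. This is a minimal illustration, not the study's actual code; the ID format and random seed are assumptions.

```python
import random

def draw_contact_list(pool, n=120, seed=0):
    # Randomly draw a fixed-size contact list from a site's pool of
    # eligible participant IDs (seeded here only for reproducibility).
    return random.Random(seed).sample(pool, n)

# Pool sizes reported in the study: 3926 (Tampa) and 924 (Boston).
tampa_pool = [f"tampa-{i}" for i in range(3926)]
boston_pool = [f"boston-{i}" for i in range(924)]

contacts = draw_contact_list(tampa_pool) + draw_contact_list(boston_pool)
print(len(contacts))  # 240 potential participants to contact and screen
```

`random.sample` draws without replacement, so no veteran appears twice on a site's contact list.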

Veterans were eligible if they were independent Secure Messaging users, had no cognitive impairment that prevented use of a personal computer, and were able to provide informed consent. Based on qualitative sampling methods [15,16], saturation was anticipated to occur between 12 and 15 interviews; an over-recruitment strategy was used at each site to allow for attrition, resulting in 33 total participants. One participant was lost to follow-up for unknown reasons, resulting in a complete dataset of 32 participants. Veterans received up to US$50 for their participation: US$20 for the initial interview and user-testing, and an additional US$30 for allowing the researchers unrestricted access to review the content of their secure messages and for participating in the 3-month follow-up telephone interview. Participants provided informed consent upon arrival for the initial face-to-face interview and user-testing. This study was approved and regulated by the VA Central Institutional Review Board.

Data Collection Instruments


Data were collected using demographic and health literacy surveys, in-depth face-to-face interviews, Secure Messaging usability testing, prospective collection of the content of secure messages, and 3-month follow-up telephone interviews. All data, with exception of the Secure Messaging data, were collected at two time points: during a baseline in-person meeting and during a 3-month follow-up phone interview. Prospective Secure Messaging data were collected between the baseline and 3-month follow-up time points.

Participant Surveys and Assessments

During the initial research visit, veterans completed a 13-item demographic survey to ascertain age, gender, race/ethnicity, education level, income level, marital status, computer use, Internet use, My HealtheVet use, and Secure Messaging use. Health literacy was assessed using two validated instruments: (1) the Brief Health Literacy Screening Tool (BRIEF), and (2) the Rapid Estimate of Adult Literacy in Medicine (REALM). The BRIEF is a 4-item self-report screening tool that assesses health literacy skills [17]. The REALM assesses health literacy by having respondents read aloud three columns of 22 health-related terms [18].

Electronic health literacy was also assessed using two instruments: (1) the eHealth Literacy Scale (eHEALS), and (2) the Computer-Email-Web (CEW) Fluency Scale. The eHEALS is a 10-item measure of eHealth literacy developed to measure consumers’ knowledge, comfort, and perceived skills at finding, evaluating, and applying electronic health information to health problems [19]. The CEW Fluency Scale is a 21-item measure of common computer skills [20].


Interviews

Face-to-face semi-structured interviews were conducted by an experienced interviewer trained in the social sciences and focused on participants’ experiences using Secure Messaging. The interview guide was created following the Theory of Planned Behavior (TPB) framework to elicit beliefs and attitudes, subjective norms, perceived behavioral control, and behavioral intention toward Secure Messaging use. Other interview questions were developed based on the Technology Acceptance Model (TAM) and addressed the usefulness and ease of use of Secure Messaging. Interviews followed the guide but were open-ended in nature, allowing the interviewer flexibility to ask probing questions and to follow up on interesting topics and user experiences related to Secure Messaging.

Based on the initial interviews, a brief phone interview guide was developed to address Secure Messaging use during the 3-month period after the first interview. These interviews were conducted to assess recent Secure Messaging use: usefulness, expectations, barriers and facilitators, satisfaction, and suggestions for improvement.

Secure Messaging User-Testing

In-person Secure Messaging user-testing was conducted to prompt participants to complete a series of tasks they would normally encounter while using Secure Messaging. User tasks included navigating to the My HealtheVet site, logging in to Secure Messaging, setting user preferences, checking the Inbox, opening a secure message, opening and reading an attachment, and sending a secure message. Task completion, obstacles, and facilitators were recorded using a checklist, which directly corresponded to the user-testing tasks. Usability testing with each participant was conducted using Morae software [21,22], and allowed for the live, remote observation and video-recording of the user being tested (eg, recording of clicks, keystrokes, and other events) [23]. Participants were asked to “think aloud” and vocalize their thoughts, experiences, feelings, and opinions while interacting with the program as they used the Secure Messaging feature [24,25].
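A checklist of the kind described above reduces naturally to per-task completion rates. The sketch below is illustrative only: the task names paraphrase the text, and the participant results are invented.

```python
# Tasks from the user-testing checklist described above (paraphrased).
tasks = ["navigate to My HealtheVet", "log in", "set preferences",
         "check inbox", "open message", "open attachment", "send message"]

# results[participant] = set of tasks completed (hypothetical data).
results = {
    "p01": {"navigate to My HealtheVet", "log in", "check inbox",
            "open message", "send message"},
    "p02": set(tasks),
}

# Share of participants completing each task.
completion_rate = {
    task: sum(task in done for done in results.values()) / len(results)
    for task in tasks
}
print(completion_rate["open attachment"])  # 0.5
```

In a real analysis, the frequency counts and proportions reported in the Results section would be computed from such a tally across all 33 participants.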

Secure Messaging Content

Both outgoing and incoming secure messages were collected for each participant over a 3-month period following their provision of informed consent. Data included sender and recipient identification, date and time of delivery, subject header, category of message subject (eg, test, appointment, medication, general), and the verbatim content of the secure message text. We examined the quantity of messages, message content, exchange patterns, and timing of inbound and outbound messages between participants and their health care teams. This approach allowed for analysis of authentic user content and patterns to further inform the research findings.
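Exchange patterns of this kind reduce to simple tallies over the collected message records. The field names and sample records below are assumptions for illustration, not the study's data model.

```python
from collections import Counter

# Hypothetical message records mirroring the fields described:
# sender, subject category, and delivery timestamp.
messages = [
    {"sender": "veteran",   "category": "medication",  "sent": "2012-03-01T09:15"},
    {"sender": "care team", "category": "medication",  "sent": "2012-03-01T14:02"},
    {"sender": "veteran",   "category": "appointment", "sent": "2012-03-05T11:30"},
]

# Message volume by subject category (eg, test, appointment, medication, general).
by_category = Counter(m["category"] for m in messages)

# Direction of exchange: outbound (veteran-initiated) vs inbound (from the care team).
outbound = sum(m["sender"] == "veteran" for m in messages)
inbound = len(messages) - outbound

print(by_category["medication"], outbound, inbound)  # 2 2 1
```

Timing of exchanges (eg, care team response times) could be derived from the same records by pairing each outbound message with the next inbound reply.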

Data Management and Analysis

All data, including interviews and paper-based surveys gathered in this study were stored on a secure VA network. Audio recordings of all interviews were transcribed and subsequently analyzed using ATLAS.ti [26], qualitative data analysis software. Descriptive statistics from veteran surveys were managed using the statistical software suite SPSS version 21 (SPSS IBM, New York, USA). Data from Secure Messaging usability testing were captured using Morae recording software.

We used content analysis methods to analyze all interview data to identify domains and taxonomies related to participants’ experiences using Secure Messaging [15]. We used the semi-structured interview guide to organize and code interview text to develop thematic categories. Categories were grouped into taxonomic relationships and then compared and contrasted across coded categories. Coding schemas were developed by two research team members to create domains and taxonomies and evaluated for inter-rater reliability and credibility. Data were then categorized and interpreted, and barriers and facilitators were identified. Quantitative data were summarized with descriptive statistics to describe sample characteristics. Frequency counts and proportions provided a descriptive overview of the user-testing findings.


A total of 33 participants were recruited, of whom 32 provided complete data. One participant provided initial interview, user-testing, and secure message content data, but could not be reached for the follow-up phone interview.

Survey and Assessment Findings

The majority of participants were older white males (26/33, 79%) and ranged in age from 27 to 77 years, mean age 59.5 (SD 11.9). All participants had at least a high school education, and 64% (21/33) had an annual income of US$35,001 or more. Demographic characteristics are reported in Table 1.

Though skills varied, the majority of participants had adequate health literacy and eHealth competency skills. Study participants had higher levels of health literacy than the general veteran population [27]. Though comparative studies are not available for this population using these tools, the electronic health literacy scores on the eHEALS and the CEW produced similar findings. Instrument range, sample range, mean, and SDs are illustrated in Table 2.

At baseline, all participants (n=33, 100%) reported using a computer and the Internet more than once a week. Most participants (22/33, 67%) reported using Secure Messaging for at least the past six months (10/33, 30%) or longer (12/33, 36%), while the remaining participants reported using Secure Messaging for less than six months (11/33, 33%). The majority of participants (28/33, 85%) reported using Secure Messaging “at least once a month” (12/33, 36%) or “a few times a year” (16/33, 48%). Most veterans (27/33, 82%) reported being satisfied with Secure Messaging.

New and Established Patient Definitions (CMS vs. CPT®): The …

I get a lot of requests from readers of The Happy Hospitalist asking how to know if a patient is a new or established patient. Identifying the correct classification will prevent delays or denials of payment. Many evaluation and management (E/M) codes are by definition described as new or established. This lecture will attempt to explain various important clinical aspects related to this determination. Keep in mind that while the Centers for Medicare & Medicaid Services (CMS) uses Current Procedural Terminology (CPT®) codes, CMS definitions do not always agree with CPT® definitions. This discrepancy often leads to confusion for practitioners. I will attempt to provide some insight into these differences as well. I am a practicing clinical hospitalist with over ten years of experience, and I understand how complicated these E/M rules can be. I have written an extensive collection of CPT® and E/M lectures to help physicians and other non-physician practitioners (NPPs) navigate the complex rules of medical billing and coding. The Medicare Evaluation and Management Services Guide on page six defines qualified NPPs as nurse practitioners, clinical nurse specialists, certified nurse midwives, and physician assistants.


The CPT® definition of a new patient underwent subtle changes in 2012. Unfortunately, CMS did not change their definition to stay aligned with these changes. This difference in language has caused great confusion for many qualified healthcare practitioners trying to stay compliant with the complex rules and regulations of E/M. I encourage all readers to have a handy copy of the American Medical Association’s CPT® manual for quick and easy reference. How does the 2014 CPT® manual define a new patient?

A new patient is one who has not received any professional services from the physician/qualified health care professional or another physician/qualified health care professional of the exact same specialty and subspecialty who belongs to the same group practice, within the past three years.  

Let’s look at this definition a little closer. The 2014 CPT® manual defines professional services as those face-to-face services provided by physicians or other qualified health care professionals who may report an E/M service by a specific CPT® code. In other words, if you provided a service, such as interpreting an EKG, reading an echo, or calling in a prescription, but did not provide a billable face-to-face E/M encounter, the patient is still considered a new patient under the definition of professional services. The 2012 updated definition of a new patient also added the words “exact” and “and subspecialty.” Unfortunately, CMS did not change their definition to recognize this change in specialty determination.
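The 2012+ CPT® decision rule can be sketched as a small Python function. This is purely illustrative: the encounter-record fields (`date`, `face_to_face`, `specialty`, `subspecialty`, `group_tin`) are hypothetical, not any payer's actual data model, and the three-year lookback is approximated in days.

```python
from datetime import date, timedelta

def is_new_patient(encounters, visit_date, specialty, subspecialty, group_tin):
    """Return True if the patient is "new" under the 2012+ CPT definition.

    `encounters` is a hypothetical list of prior-encounter dicts with keys
    'date', 'face_to_face', 'specialty', 'subspecialty', and 'group_tin'.
    Only billable face-to-face services count as professional services;
    an EKG read or a phoned-in prescription alone does not.
    """
    cutoff = visit_date - timedelta(days=3 * 365)  # ~three-year lookback
    for e in encounters:
        if (e["face_to_face"]                      # must be face-to-face
                and e["date"] >= cutoff            # within the past three years
                and e["group_tin"] == group_tin    # same group practice (TIN)
                and e["specialty"] == specialty
                and e["subspecialty"] == subspecialty):  # CPT: exact same subspecialty
            return False  # established
    return True  # new
```

Under the older CMS wording, the subspecialty comparison would simply be dropped; which version applies depends on the patient's carrier.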



CMS provides insight into their definition of new versus established patients in several important resources.  These definitions are not the same as the updated 2012 CPT® definition.  First, a CMS definition of a new patient is provided in section 30.6.7 of Chapter 12 of the Medicare Claims Processing Manual (pdf page 52). From section A:

Definition of New Patient for Selection of E/M Visit Code 

Interpret the phrase “new patient” to mean a patient who has not received any professional services, i.e., E/M service or other face-to-face service (e.g., surgical procedure) from the physician or physician group practice (same physician specialty) within the previous 3 years. For example, if a professional component of a previous procedure is billed in a 3 year time period, e.g., a lab interpretation is billed and no E/M service or other face-to-face service with the patient is performed, then this patient remains a new patient for the initial visit. An interpretation of a diagnostic test, reading an x-ray or EKG etc., in the absence of an E/M service or other face-to-face service with the patient does not affect the designation of a new patient.

Page 7 of the Evaluation and Management Services Guide also provides definitions of new and established patients.
For purposes of billing for E/M services, patients are identified as either new or established, depending on previous encounters with the provider.

A new patient is defined as an individual who has not received any professional services from the physician/non-physician practitioner (NPP) or another physician of the same specialty who belongs to the same group practice within the previous three years.

An established patient is an individual who has received professional services from the physician/NPP or another physician of the same specialty who belongs to the same group practice within the previous three years.

Both definitions lack the updated CPT® language that includes the exact same specialty and subspecialty. This has led to great confusion when trying to determine whether a patient is new vs. established within the same group practice but of a different specialty or subspecialty. For patients not covered by Medicare, knowing how the insurance carrier reconciles this difference may prevent delays or denials of claims.


One must also know how to define a group practice to interpret the new and established patient rules. Medicare has defined a group practice in Chapter 5 of Medicare General Information, Eligibility, and Entitlement. Section 90.4 (pdf page 38) says:

A group practice is a group of two or more physicians and non-physician practitioners legally organized in a partnership, professional corporation, foundation, not-for-profit corporation, faculty practice plan, or similar association:
  • In which each physician who is a member of the group provides substantially the full range of services which the physician routinely provides (including medical care, consultation, diagnosis, or treatment) through the joint use of shared office space, facilities, equipment, and personnel;
  • For which substantially all of the services of the physicians who are members of the group are provided through the group and are billed in the name of the group and amounts so received are treated as receipts of the group;
  • In which the overhead expenses of and the income from the practice are distributed in accordance with methods previously determined by members of the group; and
  • Which meets such other standards as the Secretary may impose by regulation to implement §1877(h)(4) of the Social Security Act. The group practice definition also applies to health care practitioners.

One Medicare carrier further clarifies the definition of a group practice by stating that whether physicians are members of the same group is determined by the Tax Identification Number. They also offer an assortment of other clinically relevant scenarios in question-and-answer format. I encourage all readers to review them for their educational value.



By CPT® definition, not all E/M codes require the qualified practitioner to determine whether the patient is new or established. Which common E/M code groups are excluded from the new vs. established patient determination?

  • Initial observation care (99218-99220)
  • Subsequent observation care (99224-99226)
  • Observation care discharge services (99217)
  • Initial hospital care (99221-99223)
  • Subsequent hospital care (99231-99233)
  • Admission and Discharge Services same day (99234-99236)
  • Hospital discharge services (99238, 99239)
  • Critical care services (99291, 99292)
  • Emergency department services (99281-99285)
  • Initial nursing facility care (99304-99306)
  • Subsequent nursing facility care (99307-99310)
  • Inpatient consultations (99251-99255).  This code group is no longer recognized by CMS.
  • Office or other outpatient consultations (99241-99245).  This code group is no longer recognized by CMS.


By CPT® definition, some E/M codes require the practitioner to determine whether the face-to-face encounter involves a new patient or an established patient.

  • Office or other outpatient services new patient (99201-99205)
  • Office or other outpatient services established patient (99211-99215)
  • Domiciliary, rest home (eg, boarding home), or custodial care services new patient  (99324-99328)
  • Domiciliary, rest home (eg, boarding home), or custodial care services established patient (99334-99337)
  • Home services new patient (99341-99345)
  • Home services established patient (99347-99350)
  • Preventive medicine services new patient (99381-99387)
  • Preventive medicine services established patient (99391-99397)


Most E/M code groups used in the hospital do not require the practitioner to determine whether the patient is new or established in their group practice. However, one common hospital billing and coding scenario does require quite a bit of effort to determine the correct E/M code group. As a consultant caring for a Medicare patient in the hospital under ambulatory surgery center (ASC) or observation status, practitioners are directed to use the office or other outpatient service codes. This also applies to any other patient whose insurance does not accept consultation codes. Determining whether the patient is a new patient or an established patient is necessary to prevent delays or denials in payment.

A 42-year-old morbidly obese man with chronic lymphedema and a diagnosis of bilateral cellulitis is admitted to observation status by a hospitalist in a different group as a direct admission from the primary care physician’s office, with a request to consult an infectious disease specialist.

In this scenario, the hospitalist would use the attending physician initial observation code group 99218-99220 for the admission,  code group 99224-99226 for subsequent care visits and 99217 for the date of discharge.  However, the infectious disease (ID) consultant would first have to know whether the patient’s insurance carrier accepts consultation codes.  If they do, the initial encounter should be coded as an outpatient consultation (99241-99245).  All subsequent care visits should be coded as office or other outpatient services of an established patient (99211-99215).

However, if the patient’s insurance does not accept consultation codes, then the ID consulting specialist must determine whether the patient is a new patient or an established patient in their group practice.  If the ID specialist determines the patient is new, they should bill their initial encounter as an office or other outpatient service of a new patient using code group 99201-99205.   If they determine the patient is an established patient of their group practice, they should choose the office or other outpatient service established patient code group 99211-99215 as their initial and all subsequent care visits.  Here is another clinical example.
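The consultant's decision walk-through above can be condensed into a small sketch. The helper name and boolean flags are hypothetical; the code groups are the ones named in the text.

```python
def consult_code_group(accepts_consult_codes, is_new, is_initial_visit):
    """Hypothetical helper mirroring the observation-consult decision above.

    Returns the CPT code group the consulting specialist should choose,
    given whether the insurer recognizes consultation codes, whether the
    patient is new to the group practice, and whether this is the initial
    encounter or a subsequent visit.
    """
    if accepts_consult_codes:
        # Carrier recognizes consult codes: outpatient consultation codes
        # for the initial encounter, established office codes afterward.
        return "99241-99245" if is_initial_visit else "99211-99215"
    # Carrier (e.g., Medicare) does not recognize consult codes:
    # fall back to the new vs. established office visit decision.
    if is_initial_visit and is_new:
        return "99201-99205"
    return "99211-99215"
```

Note that when consult codes are not accepted and the patient is established, the same 99211-99215 group covers both the initial and all subsequent visits, exactly as described above.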

A healthy 37-year-old with stable seasonal allergies is admitted under ASC status by an orthopedic surgeon for shoulder surgery. The hospitalist is consulted for medical management.

What should the hospitalist bill? The hospitalist must follow the same decision analysis as the ID specialist did in the clinical example above.  Most hospitalists do not have their own office charts or EMR to reference when trying to determine if they have seen the patient in the last three years.  The only way to know for sure whether any other hospitalist or other physician of the same specialty or non-physician practitioner (NPP) working with the same specialty in the same group practice has seen the patient in the last three years is to search their hospital’s EHR for evidence of any prior H&P, consult note or other face-to-face E/M progress note visit that would qualify as a professional service.  Most doctors don’t have the time, energy, education or resources to figure all that out.  


Excluding consultation codes, choosing new vs. established codes in the office is straightforward. Either the patient is or isn’t a new patient based on the prevailing rules of the patient’s third-party payer. If the patient is a new patient, choose code group 99201-99205 for the initial encounter and 99211-99215 for subsequent visits, since the patient is now established. If the patient is not a new patient, established care codes should be used.

If consultation codes are allowed by the patient’s insurance company, then code group 99241-99245 should be used for the consult request.  Depending on whether a transfer of care is made or not, subsequent visits should be coded using either this same consult code group or the office established patient code group (99211-99215).

A 78-year-old Medicare patient is referred by the primary care physician to a cardiology group for chest pain. A stress test, ordered the day prior, was read as abnormal by a different cardiologist in the same group practice.

What should the cardiologist code for their initial E/M encounter? Based on the Medicare definition detailed above, the reading of a stress test does not constitute a professional service and should be ignored when determining whether the patient is new or established. In addition, Medicare does not recognize consult codes, and cardiology is a Medicare-recognized specialty. If the patient has received any professional services (an E/M service or other face-to-face service) from any cardiologist, or from an NPP working under the direction of a cardiologist, in the same group practice in the last three years, only the established patient clinic code group 99211-99215 can be used. If no cardiologist or NPP working with a cardiologist in this group practice has seen the patient in the last three years, then the patient is a new patient, and code group 99201-99205 should be used for the initial visit.


Qualified non-physician practitioners are considered part of the group practice and specialty for which they provide service, along with physicians in the same specialty and group practice. In fact, Medicare’s E/M Services Guide (on page 7, linked above) states quite clearly that non-physician practitioners are treated the same as physicians as providers of professional services over the three-year time frame. Even the CPT® definition bundles the physician with the qualified health care professional in its definition of new vs. established patients. In addition, the 2014 CPT® manual says:

When advanced practice nurses and physician assistants are working with physicians they are considered as working in the exact same specialty and exact same subspecialties as the physician.


If a physician or other qualified NPP is providing cross-cover care for another physician, how does this affect the new or established patient decision? This question has been answered by WPS, a Medicare contractor.

In the instance where a physician is on call for or covering for another physician, the patient’s encounter will be classified as it would have been by the physician who is not available.

The 2014 CPT® manual says:

In the  instance where a physician/qualified health care professional is on call for or covering for another physician/qualified health care professional, the patient’s encounter will be classified as it would have been by the physician/qualified health care professional who is not available.  

I find this guidance interesting, as it conflicts with the definition of a new patient. If one solo practitioner is providing coverage for another solo practitioner in a different group practice, they have different tax identification numbers. By their own admission, Medicare audits the new vs. established patient decision based on the tax identification number. Their computer algorithms may not be able to detect an on-call or cross-covering arrangement in which two physicians, whether of like specialty or not, in different groups with different tax identification numbers are providing coverage for each other.

When would this scenario occur?  Consider the hospital observation scenario where one physician is providing on call services for another physician and they are asked to consult on a Medicare observation patient being admitted to the hospital by another group practice.  It may be possible the on call physician has not seen the patient in the last three years but the patient’s normal physician has.  Should the cross-covering physician bill for a new patient encounter or an established patient encounter?  According to CMS and CPT® guidance, the on call physician should bill as if they were the patient’s normal physician.   However, if they choose to bill the E/M visit as a new patient encounter, it may be difficult for  computer algorithms to identify this coding error  due to the different tax identification numbers used by both physicians.  In fact, the covering physician wouldn’t even know whether the patient had professional services provided by the patient’s normal physician in the prior three years as they probably would not have access to their office records.


Consider the scenario where a family practice group has multiple sites of care all billing under the same tax identification number.   Each site has their own patient records that are not available at other clinic sites.  The patient is now being evaluated at a clinic site by a different physician or NPP who has never seen the patient and has no records available.   Should this patient be coded as a new patient or an established patient?  If a patient has been seen in the previous three years by any physician or NPP in the same group and specialty, regardless of which clinic site they went to and regardless of whether patient records are available, only established patient codes should be used.  CMS and CPT® rules do not provide exceptions to practice sites that do not have access to records.

Site of service also does not apply if the patient received professional services in the hospital or in the emergency department. Consider the scenario where Physician A provides inpatient hospital care for a patient. The patient has never been seen previously by Physician A or any other physician or NPP in the same specialty and same group practice as Physician A. The patient is discharged and fails to follow up as requested. Two years later, the patient calls the office of Physician A requesting to establish care in the clinic with Physician A. Because Physician A has provided professional services in the last three years, the patient is considered an established patient, regardless of which physician or NPP in the same specialty and group practice provides the care.


How should a physician or NPP code patients after they have left one group practice and joined another? Under a new group or solo practice, the physician would have a new tax identification number. However, the definition of a new patient says the patient cannot have received professional services in the last three years from the physician or qualified health care professional. Some payer algorithms may not be able to track the new vs. established patient decision for physicians or NPPs who change tax identification numbers; some may. The correct interpretation of this scenario is to bill established patient care codes if the physician or NPP has seen the patient for professional services in the last three years.

What if a physician changes groups and one of their established patients is seen in the new group for the first time by a physician or non-physician practitioner in the new group who has never seen the patient and has no records on the patient?  Since the patient is established to the physician new to the group, the patient is established to all physicians and qualified health care professionals in the group.  Established care codes should be used.


Physicians and other non-physician practitioners should be aware that Recovery Auditors, under contract with CMS, are specifically targeting improper payments involving new patient claims when the beneficiary does not meet the defined criteria for a new patient. Medicare Learning Network document MM8165 says:

As a result of overpayments for new patient E&M services that should have been paid as established patient E&M services, CMS will implement changes to the Common Working File (CWF) to prompt CMS contractors to validate that there are not two new patient CPTs being paid within a three year period of time.

Which codes will be checked?  This document further clarifies which codes the Recovery Auditors will be checking.

The new patient CPT codes that will be checked in these edits include 99201-99205, 99324-99328, 99341-99345, 99381-99387, 92002, and 92004. The edits will also check to ensure that a claim with one of these new patient CPT codes is not paid subsequent to payment of a claim with an established patient CPT code (99211-99215, 99334-99337, 99347-99350, 99391-99397, 92012, and 92014).

Given the desire of CMS to recoup overpayments and the complexity of the rules to follow, I encourage all physicians to be diligent in determining when their patient is a new patient or an established patient by CMS criteria.


What is the difference in relative value units (RVUs) between the new and established common outpatient clinic codes? The 2014 work RVU (wRVU) values of these common codes are listed below. I have provided a more detailed RVU and dollar analysis in each linked CPT® lecture. As you can see, the difference in work RVU value (and total RVU value) is quite significant for similar levels of service when comparing new vs. established care codes.


  • 99203 (new) wRVU = 1.42
  • 99213 (established) wRVU = 0.97


  • 99204 (new) wRVU = 2.43
  • 99214 (established) wRVU = 1.50


  • 99205 (new) wRVU = 3.17
  • 99215 (established) wRVU = 2.11
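To put the gap in perspective, the spread between each new/established pair can be computed directly from the wRVU values listed above:

```python
# 2014 work RVU values quoted in the list above
wrvu = {
    "99203": 1.42, "99213": 0.97,
    "99204": 2.43, "99214": 1.50,
    "99205": 3.17, "99215": 2.11,
}

# Absolute and percentage difference for each new vs. established pair
for new, est in [("99203", "99213"), ("99204", "99214"), ("99205", "99215")]:
    diff = wrvu[new] - wrvu[est]
    pct = 100 * diff / wrvu[est]
    print(f"{new} vs {est}: +{diff:.2f} wRVU ({pct:.0f}% more than established)")
```

The new-patient code carries roughly 46%, 62%, and 50% more work RVU than its established counterpart at levels 3, 4, and 5, respectively, which is why auditors scrutinize this decision.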


Department of Health and Human Services Office of Inspector General 2014 Work Plan
February 18, 2014

On January 31, the Department of Health and Human Services (HHS) Office of the Inspector General (OIG) released its annual Work Plan for Fiscal Year 2014 (Work Plan).  The Work Plan is a useful road map for focusing compliance efforts.

The OIG is the primary federal investigative organization for detecting fraud, waste, and abuse in programs HHS administers. In Fiscal Year 2013, the OIG attributed billions of dollars in recoveries to its investigative and audit efforts. While the Work Plan identifies hundreds of programs the OIG intends to review, the OIG makes it very clear that it will continue to direct most of its resources toward the Medicare and Medicaid programs, consistent with its historical “follow the money” approach. Within the Medicare and Medicaid areas alone, the Work Plan identifies forty-seven new audits for FY 2014.

Spending on inpatient and outpatient hospital services is the highest in the Medicare and Medicaid programs.  Hospital spending increased 4.9 percent to $882.3 billion in 2012 compared to 3.5-percent growth in 2011, according to the CMS Office of the Actuary.

Thus, the hospital sector is an area of first priority for the OIG, which has limited resources and is frequently forced to forgo the full scope or totality of Work Plan activities in favor of those with the highest rate of return for the government.

Hospital-related audit and review initiatives in the Work Plan include, but are not limited to, the following:

  • Outlier payment reconciliation.  The OIG will address the timeliness and appropriateness of outlier payment reconciliation.
  • New inpatient admission criteria.  Which individuals are admitted as outpatients for “observation” services and which are admitted as inpatients for clinical intervention has vexed policymakers, including Congress and CMS, which recently delayed application of the so-called “two midnights” rule that would define an inpatient stay as one expected to span at least two midnights. Inpatient care is more costly, and policymakers will likely thread this needle in the context of the OIG’s findings.
  • Defective medical devices.  In a new review, the OIG will examine additional costs associated with implantable and other devices that fail or function at suboptimal levels.
  • Hospital staff salaries reported on cost reports.  The review will assess whether and to what extent hospital staff salaries are appropriately included in a hospital’s operating costs and the potential savings to the Medicare program if limits were placed on what is considered “reasonable” compensation.  Currently, no limits exist on staff salary amounts reported on hospital cost reports.
  • Site of service comparisons.  The migration of like services between and among provider-based and freestanding clinics continues to concern policymakers and will be scrutinized by the OIG.
  • Swing-beds at Critical Access Hospitals.  The OIG will evaluate whether swing-bed patients in a critical access hospital could receive the same level of care at a skilled nursing facility, at a lower cost to the Medicare program.
  • Oversight of hospital privileging.  The OIG will focus on the accountability of the medical staff and the governing body in the credentialing process and for the quality of care provided to patients.

In addition to the above, the OIG will continue to monitor (1) billing practices for ventilator-dependent patients; (2) select inpatient and outpatient billing rules for evidence of overpayments; and (3) potentially duplicative graduate medical education (GME) payments.

New project areas in the hospital sector include, but are not limited to, review of (1) outpatient evaluation and management coding for new patients; (2) potentially duplicative payments made when cardiac catheterization and heart biopsies are performed simultaneously; (3) the appropriateness of indirect medical education (IME) payments; and (4) hospital emergency preparedness.

Outside of the hospital space, the OIG will conduct appropriateness reviews of Part B billing practices occurring in skilled nursing facilities (SNFs) in addition to conducting oversight of state survey and certification standards for such facilities. Additionally, the OIG Work Plan will:

  • Study the medical necessity of re-hospitalizations for SNF residents
  • Review the performance of the national background check program

ParenteBeard understands that it’s challenging for healthcare providers to make decisions regarding resource allocation, given the significant and growing compliance burden. We are committed to closely monitoring the shifting compliance priorities in Washington and at the state level and communicating them to you. Further, ParenteBeard can work with you to identify your key compliance risks, assist you in establishing controls to mitigate those risks, and conduct compliance reviews and audits so that when government auditors come knocking, you are prepared.

For more information on this topic, contact ParenteBeard Partner Mark Ross, healthcare practice leader, at 570.820.0311.