Health Serv Res. 2018 Jun; 53(3): 1682–1701.
Do State Continuing Medical Education Requirements for Physicians Improve Clinical Knowledge?
Jonathan L. Vandergrift, Bradley M. Gray, and Weifeng Weng
American Board of Internal Medicine, Philadelphia, PA
Abstract
Objective
To evaluate the effect of state continuing medical education (CME) requirements on physician clinical knowledge.
Data Sources
Secondary data for 19,563 general internists who took the Internal Medicine Maintenance of Certification (MOC) examination between 2006 and 2013.
Study Design
We took advantage of a natural experiment resulting from variations in CME requirements across states over time and applied a difference‐in‐differences methodology to measure associations between changes in CME requirements and physician clinical knowledge. We measured changes in clinical knowledge by comparing initial and MOC examination performance 10 years apart. We constructed difference‐in‐differences estimates by regressing examination performance changes against physician demographics, county and year fixed effects, state‐specific time trends, and state CME change indicators.
Data Collection
Physician data were compiled by the American Board of Internal Medicine. State CME policies were compiled from American Medical Association reports.
Principal Findings
More rigorous CME credit‐hour requirements (mostly implementing a new requirement) were associated with an increase in examination performance equivalent to a shift in examination score from the 50th to 54th percentile.
Conclusions
Among physicians required to engage in a summative assessment of their clinical knowledge, CME requirements were associated with an improvement in physician clinical knowledge.
Keywords: Continuing medical education, licensure, internal medicine
A state medical license is required to practice medicine in the United States. As Arrow highlighted in his seminal 1963 paper, asymmetries of information around quality are a distinctive characteristic of the health care industry, and state medical licensure is one way the industry has evolved to overcome these asymmetries (Arrow 1963). At its inception, state medical licensure addressed a physician's competence only at the beginning of his or her career. Recognizing that medical knowledge advances rapidly, states began mandating participation in some form of continuing medical education (CME) in 1971 as an ongoing requirement, so that licensure would indicate that physicians are keeping current with their medical knowledge (Marinopoulos et al. 2007; Bastian, Glasziou, and Chalmers 2010; Glynn et al. 2010; Vandergrift 2012). The implicit assumption underlying these requirements is that patients value better health outcomes and that physicians who keep abreast of changes in medicine and standards of care will deliver higher‐quality care and, in turn, better patient outcomes (Moore, Green, and Gallis 2009). While there has been a general trend toward the adoption of more stringent CME requirements by states, this has occurred unevenly over time, with 21 states implementing CME requirements by 1996 and all but six states requiring CME by 2013 (Patel et al. 2004; American Medical Association 2013).
Although there is evidence that well‐designed CME can have a lasting effect on a physician's medical knowledge, the overall evidence concerning CME effectiveness is mixed, and many studies evaluating CME are of low quality (Marinopoulos et al. 2007; Mazmanian, Davis, and Galbraith 2009; Cervero and Gaines 2015). Further, data suggest that physicians have a limited ability to self‐assess and so may select activities that do not address their most pressing deficiencies (Davis et al. 2006; Eva and Regehr 2008). Moreover, even if the CME that physicians engage in is effective, the market may already reward physicians who maintain a high level of clinical knowledge, so mandating CME might simply lead physicians to claim credit for activities they would have undertaken even without a requirement (Ehrenberg and Smith 2003; Gray et al. 2013). To date, however, no study has directly assessed the degree to which state CME policies lead to the improvements in medical knowledge that originally motivated their implementation.
The aim of our study was to address this gap in the literature by taking advantage of the natural experiment, alluded to above, that arose from states changing their CME requirements at different points in time. Specifically, we applied a difference‐in‐differences methodology to examine the association between changes in a physician's clinical knowledge and changes in state CME requirements. Our hypothesis was that physician knowledge would increase after states increased the stringency of their CME requirements, relative to states with stable CME requirements. We measured change in knowledge using data from a unique source: two certifying examinations taken 10 years apart by general internists who participated in the American Board of Internal Medicine (ABIM) Maintenance of Certification (MOC) program.
Background
Most physicians achieve two types of accreditation. First, they receive a state‐issued medical license, which is acquired early in residency and is required to legally practice medicine. The second is board certification, which is obtained after completing residency or fellowship training. Although voluntary, board certification is often required to obtain hospital privileges and for participation in many insurance networks (Cassel and Holmboe 2006; Freed, Dunham, and Singer 2009).
Physician Licensure and CME Requirements
In aggregate, CME is a large educational undertaking: in 2013, physicians recorded 11 million instances of participation in CME activities (Accreditation Council for Continuing Medical Education 2014). These activities are fairly diverse and include didactic courses, seminars, and ongoing activities (e.g., grand rounds), as well as individualized and self‐directed activities. CME providers are equally diverse and include health systems, physician membership organizations (e.g., American College of Physicians), and medical journals.
In general, state CME requirements specify a minimum number of credit‐hours to be completed over a certain period of time (e.g., 50 credit‐hours every 2 years). However, the minimum level of CME required varies between states, with terms for completion ranging from 1 to 4 years and credit‐hour‐per‐year requirements ranging from 15 to 50 hours (American Medical Association 2013). For the most part, state CME requirements allow physicians to choose which activities best meet their educational needs. That said, some states mandate that physicians complete CME on specific topics. These content mandates generally account for a small portion of the total CME hours required. For example, Pennsylvania requires that physicians complete 3 hours of patient safety and risk management CME as part of the 100 hours of CME required every 2 years (American Medical Association 2013).
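To make requirements defined over different terms comparable, they can be restated on a credit‐hours‐per‐year basis, the form used in the analyses below. As a simple worked illustration using the Pennsylvania example above:

$$\frac{100 \text{ credit-hours}}{2 \text{ years}} = 50 \text{ credit-hours per year},$$

which sits at the upper end of the 15 to 50 credit‐hour‐per‐year range across states.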
Internal Medicine Board Certification
Board certification is distinct from medical licensure. For one, medical licensure is undifferentiated, while certification distinguishes physicians by their specialty training. Second, licensure is governed by state statute, while board certification is a form of self‐regulation governed by a series of allied nongovernmental organizations, most of which are members of the American Board of Medical Specialties (ABMS). The largest of the ABMS boards, in terms of number of physicians, is the ABIM, which certifies physicians in general internal medicine as well as the internal medicine subspecialties (American Board of Medical Specialties 2014).
Physicians typically certify in general internal medicine by passing a certification examination after completing 3 years of residency training in an accredited program. Currently, the certification examination is composed of multiple‐choice questions that ask the examinee to select the appropriate course of action given a particular clinical case scenario or patient vignette. The vignettes and scenarios that comprise the question stem often present information such as patient demographics, symptoms, and laboratory values (Lipner and Lucey 2010). The actions that comprise the multiple‐choice answers address a variety of tasks physicians perform, such as making a diagnosis, ordering follow‐up or diagnostic tests, or recommending specific treatments or types of care (American Board of Internal Medicine 2015). Overall, the primary aim of these questions is to test diagnostic reasoning and clinical decision making rather than the ability to recall medical facts (Levinson and Holmboe 2011).
Once physicians have successfully certified in internal medicine, they must periodically complete ABIM's MOC program to remain certified. Although MOC evolved in 2014 to require more continuous effort, the ABIM's MOC program had generally required most physicians to engage in a four‐part assessment every 10 years. These four parts are based on the framework designed by the ABMS and include (1) maintaining professional standing and licensure; (2) engaging in lifelong learning and self‐assessment; (3) demonstrating clinical knowledge and judgment; and (4) improving one's medical practice (Miller 2005). To satisfy part 3 of the MOC framework, internists are required to pass a certification examination every 10 years. Until 2015, the internal medicine MOC examination was comparable to the initial certification examination, given that it was constructed using the same question pool and blueprint framework (American Board of Internal Medicine 2015).
Methods
Our physician measures were drawn from 27,683 general internists (i.e., internists who did not further subspecialize) who took the internal medicine certification examination between 1996 and 2003. Of these, 78 percent (n = 21,560) attempted the Internal Medicine (IM) MOC examination between 2006 and 2013. Of note, physicians attempting the MOC examination prior to 2006 were not asked to provide their practice location, so we could not use data from earlier years. Additional sequential exclusions included an unknown or missing practice location (n = 1,054, 5 percent), not being clinically active (n = 599, 3 percent), or not practicing in the United States (n = 344, 2 percent). After all exclusions, the analytic cohort included 19,563 general internal medicine physicians.
Information on state CME requirements is compiled and reported annually by the American Medical Association (AMA). We reviewed these reports to identify changes in CME requirements (either credit‐hours per year (cr/yr) or term for completion) between 2006 and 2013 among the 50 U.S. states and Washington, D.C. (see Tables S1 and S2 in Appendix SA2 for a summary of the state requirements; American Medical Association 2006, 2013).
Physician location, demographic, and practice characteristics were compiled using ABIM administrative data, and county‐level measures were drawn from the Area Health Resource File (U.S. Department of Health and Human Services 2016). Physician demographic and practice characteristics included the following: years in practice; physician sex; whether the physician changed states between residency and the MOC examination; medical training type; birth location; time spent in primary care, hospital care, or other clinical roles; practice size (solo, 2–10 physicians, 11–50 physicians, 51 or more physicians); and whether the physician works in an academic practice. Years in practice was measured as the number of years between the initial certification year and the year the physician attempted the MOC examination. Changing states was determined by comparing the location of the physician's residency program with his or her practice location when attempting the MOC examination. Medical training type was based on the type and location of medical school and was classified as U.S.‐based allopathic, U.S.‐based osteopathic, or international. Birth location was classified as either U.S. or international born. Practice size, academic practice status, and time in different clinical roles were based on questions asked during MOC examination registration. Other clinical roles included consultative, principal, or procedural care roles. County‐level measures included the percentage of persons below the poverty level; median household income; and the number of physicians per 1,000 people. Physicians‐per‐capita data were unavailable for physicians taking the examination in 2009 and were set to missing for observations from that year.
Our measure of knowledge change was the difference in a physician's performance on two examinations taken about 10 years apart: their MOC and initial certification examinations. The initial certification and MOC examination scores were each individually equated. Examination score equating ensures that scores can be used interchangeably by accounting for any differences in difficulty between examination administrations (Kolen and Brennan 2004). However, the initial and MOC examination scores cannot be equated to each other, primarily because the gap in time between the two examinations limits the number of common items available for equating. Because of this, the average change in scores across all physicians between the initial certification and MOC examinations can be partially due to differences in the difficulty of the two examinations as well as an overall change in medical knowledge. However, because each physician in the sample took both the initial certification and MOC examinations, any gap in examination difficulty is accounted for when comparing changes in performance between two groups of physicians who took both examinations. As such, the empirical approach described below accounts for this difference in examination difficulty by using the change in examination scores for a group unaffected by CME policy changes as a control when assessing associations with these policy changes. In addition, we converted the initial certification and MOC examination scores to Z‐scores prior to taking the difference to ensure that the overall mean difference in scores between examinations across the entire sample is zero.
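Schematically, the change measure for each physician i can therefore be written as the difference of two Z‐scores (the notation here is illustrative, not the study's own):

$$\Delta_i = \frac{m_i - \bar{m}}{s_m} - \frac{c_i - \bar{c}}{s_c},$$

where \(m_i\) and \(c_i\) are physician \(i\)'s MOC and initial certification examination scores, and \(\bar{m}, s_m\) and \(\bar{c}, s_c\) are the corresponding sample means and standard deviations. Because each component is standardized over the full sample, \(\Delta_i\) averages to zero across all physicians, as described above.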
Empirical Specification
Applying a difference‐in‐differences framework, we used linear regression to evaluate the change in examination performance before versus after changes in state CME requirements, in comparison with physicians in states with stable CME policies. To construct our difference‐in‐differences estimate of the association between changes in CME policies and changes in medical knowledge, we regressed the change in examination score against a set of control variables that measured: physician characteristics (e.g., gender, years in practice, practice type); county characteristics (e.g., per capita income); county indicator variables, which accounted both for non‐time‐varying unobserved state characteristics that might be related to a state's CME policy and for unobserved variables that might be correlated with changes in the geographic composition of physicians within states over time (e.g., physicians might locate in different parts of a state over time with different support for medical education); year indicator variables, which accounted for unobserved variables correlated with increases in examination scores common to all states (e.g., changes in federal policy that promote learning); a state‐specific linear time trend (year − 2005), which accounted for the possibility that there are unobserved differences between states in the trend in examination score changes that are unrelated to CME policy but might be correlated with the trend in the adoption of these policies (e.g., differences in initial medical training over time between states); and two CME policy change variables that model the effects of state CME policy changes. These two policy change variables were indicators that switched from zero to one after a state either (1) instituted CME or increased its credit‐hours‐per‐year requirement; or (2) decreased its CME term while holding credit‐hours per year constant. Both indicators were set to zero for states with stable CME policies. The coefficients on these policy change variables measure the difference in the change in examination score before versus after changes in CME requirements, compared with changes in examination scores over the same period for physicians residing in states without CME policy changes. To characterize the magnitude of the change in examination scores estimated in our linear regression, we computed the comparable difference in percentile performance on the MOC examination associated with that change (Lipsey et al. 2012).
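In equation form, this specification can be summarized schematically (the notation is illustrative) as

$$\Delta y_{ict} = \beta_1 \,\text{CrHrs}_{s(c),t} + \beta_2 \,\text{Term}_{s(c),t} + X_{it}'\gamma + Z_{ct}'\delta + \theta_c + \tau_t + \lambda_{s(c)}(t - 2005) + \varepsilon_{ict},$$

where \(\Delta y_{ict}\) is the change in standardized examination score for physician \(i\) in county \(c\) of state \(s(c)\) taking the MOC examination in year \(t\); \(X_{it}\) and \(Z_{ct}\) are the physician and county characteristics; \(\theta_c\) and \(\tau_t\) are county and year fixed effects; \(\lambda_{s(c)}(t - 2005)\) is the state‐specific linear trend; and CrHrs and Term are the indicators that switch from zero to one after a credit‐hours‐per‐year increase or a term decrease, respectively. The difference‐in‐differences estimates of interest are \(\beta_1\) and \(\beta_2\).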
We assessed the validity of our difference‐in‐differences specification in accounting for bias from unobserved variables by excluding measures of physician characteristics (e.g., physician sex, practice size, being in an academic practice) and county characteristics (e.g., median household income, physicians per capita). After these exclusions, the model was limited to the policy measures and the difference‐in‐differences controls designed to capture unobserved variables that could bias the association between CME policy changes and examination score changes. Finding no material difference in the estimated association between CME policy changes and changes in examination scores between this model and the base‐case model would indicate that the difference‐in‐differences controls accounted for these observed variables and, therefore, likely accounted for unobserved variables as well (Altonji, Elder, and Taber 2005).
All analyses were conducted with Stata 14 (StataCorp LP, College Station, TX) and accounted for correlated errors by applying a state‐level Huber–White cluster adjustment with 51 clusters in each regression model (Huber 1967; White 1980; Cameron and Miller 2015).
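For illustration only, a minimal Stata sketch of the base‐case and controls‐excluded models with state‐clustered standard errors might look like the following; variable names such as d_score, cr_increase, term_dec, and stateid are hypothetical placeholders, not the study's actual code.

```stata
* Hypothetical sketch of the estimation approach described above.
* State-specific linear trend centered at 2005.
gen trendyear = examyear - 2005

* Base-case model: physician and county controls plus the
* difference-in-differences controls (county and year fixed effects,
* state-specific trends), with Huber-White errors clustered on 51 states.
regress d_score cr_increase term_dec i.female yrs_practice ///
    i.practice_size i.academic i.training pct_poverty med_income md_per_1000 ///
    i.county i.examyear c.trendyear#i.stateid, vce(cluster stateid)

* Specification check: keep only the policy indicators and the
* difference-in-differences controls (Altonji, Elder, and Taber 2005).
regress d_score cr_increase term_dec ///
    i.county i.examyear c.trendyear#i.stateid, vce(cluster stateid)
```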
Results
Seven states changed the length of the term for completing CME credits and/or the number of credits required per year (see Figure 1 for the changes in state CME requirements and the percentage of physicians affected). In particular, Connecticut (none to 25 cr/yr), Washington, D.C. (none to 25 cr/yr), and Wyoming (none to 20 cr/yr) instituted a CME requirement in 2007, and Oregon (none to 30 cr/yr) instituted a CME requirement in 2011. California (from 4 to 2 years in 2011) and New Hampshire (from 3 to 2 years in 2009) reduced the term for completing CME (holding credit‐hours per year constant). Alabama reduced the term for completing CME in 2008 (from 2 to 1 year) and then increased its credit‐hours‐per‐year requirement in 2011 (from 12 to 25 cr/yr).
Figure 1. Percentage of Physicians Taking the MOC Examination Each Year in States after Changes to Their Continuing Medical Education Requirements
As shown in Table 1, increases in state credit‐hours‐per‐year requirements (mostly the institution of a new CME requirement) were associated with a 0.119 (p < .001; full regression results are reported in Table S3 in Appendix SA2) greater increase in the standardized MOC examination score. To place the magnitude of this association in context, it is equivalent to a change from the 50th to the 54th percentile on the MOC examination. A decrease in the term for completing CME was not associated with a significant change in examination score (0.061, p = .058). We were unable to distinguish whether the relationship observed between an increase in credit‐hours‐per‐year requirements and differences in examination score was due to a one‐time increase in medical knowledge or an increase in the trend in medical knowledge (all ps > .14).
Table 1
Association between Changes in State Continuing Medical Education (CME) Policy and Changes in Examination Performance
All columns report estimates for the change in standardized examination score.a

| Type of CME Change | Coef (SE)b | p‑Value | Percentile Shift in Performance |
|---|---|---|---|
| Credit‑hours‑per‑year increase | | | |
| Overallc | 0.119 (0.029) | <.001 | 50th to 54th |
| Leveld | 0.029 (0.100) | .769 | 50th to 51st |
| Sloped | 0.027 (0.018) | .143 | 50th to 51st |
| Term decrease | | | |
| Overallc | 0.061 (0.031) | .058 | 50th to 52nd |
| Leveld | 0.262 (0.225) | .249 | 50th to 60th |
| Sloped | −0.034 (0.033) | .309 | 50th to 49th |
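As a rough guide to reading the percentile shift column, a standardized effect d is often translated into a percentile change from the 50th percentile by assuming an approximately normal score distribution (Lipsey et al. 2012). For the overall credit‑hours estimate this back‑of‑the‑envelope check gives

$$100 \cdot \Phi(0.119) \approx 54.7,$$

that is, a move from the 50th to roughly the 54th–55th percentile, consistent with the shift reported in the table; the tabled values reflect the study's own conversion and need not match this normal approximation exactly.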
Was Our Natural Experiment Assumption Met?
Underlying our research design was the assumption that implementation of the CME requirements was unrelated to the composition and quality of physicians in a given state. We evaluated the assumption that CME changes were unrelated to physician composition by examining whether changes in CME requirements were associated with a selection of general internists who performed better on their initial, baseline certification examination. To do so, we fit a linear regression with the initial examination score as the dependent variable and, as independent variables, the two policy variables discussed above, county fixed effects, time‐varying measures, trend–state interactions, initial certification year, MOC examination year, and physician characteristics. We found no evidence that increasing the stringency of CME requirements selected for physicians who performed better on their initial baseline certification examination (credit‐hours‐per‐year increase: −0.068, SE = 0.096, p = .482; term decrease: −0.003, SE = 0.033, p = .921).
To test our difference‐in‐differences assumptions further, we also evaluated whether the underlying characteristics of our physician population were similar between states with stable versus changing CME requirements. Supporting our research design, initial examination scores appeared unrelated to whether there was a change in CME policy. There were no significant differences in baseline initial certification examination performance between physicians practicing in states with stable CME requirements and those practicing in states with changes in credit‐hours per year (Table 2, mean difference = 0.07, p = .09) or term (mean difference = 0.01, p = .83). Further, the states that made CME changes were geographically diverse, representing three of the four main census regions, and had varying levels of baseline initial examination performance, with average scores representing four of five quintiles (Figure 2). In addition, excluding all controls except for the difference‐in‐differences controls designed to capture unobserved variables that could bias the association between CME policy changes and examination score changes (i.e., retaining only the two policy change variables, county indicators, year indicators, and state–trend variables) yielded results that were not significantly different from those of the base‐case model (from 1,000 bootstrap replicates: −0.010 [SE = 0.019, p = .596] difference in the credit‐hours‐per‐year increase estimates; 0.004 [SE = 0.013, p = .730] difference in the term decrease estimates; see Table S4 in Appendix SA2 for results of the regression excluding controls). This is despite the fact that the excluded controls were, as a whole, significantly associated with changes in examination score (joint p < .001), suggesting that our findings were not sensitive to the inclusion of these control variables in the model.
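As an illustration of how such a comparison of coefficients from the full and controls‑excluded models could be bootstrapped with state‑level resampling, the following hedged Stata sketch reuses the hypothetical variable names introduced earlier; it is not the study's actual code.

```stata
* Hypothetical sketch: bootstrap the difference between the credit-hours
* coefficient from the base-case model and from the model retaining only
* the difference-in-differences controls, resampling whole states.
* $phys_controls and $cnty_controls are placeholder macros for the control lists.
capture program drop cme_coef_diff
program define cme_coef_diff, rclass
    quietly regress d_score cr_increase term_dec $phys_controls $cnty_controls ///
        i.county i.examyear c.trendyear#i.stateid
    local b_full = _b[cr_increase]
    quietly regress d_score cr_increase term_dec ///
        i.county i.examyear c.trendyear#i.stateid
    return scalar diff = `b_full' - _b[cr_increase]
end

bootstrap diff = r(diff), reps(1000) cluster(stateid): cme_coef_diff
```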
Figure 2. Changes in State Continuing Medical Education Requirements as Well as Variation in Mean Initial Certification Examination Score
Table 2
Physician Characteristics
| Characteristic | CME Policy Change:a Credit‑Hours‑per‑Year Increase | CME Policy Change:a Term Decrease | Total |
|---|---|---|---|
| Number of physicians, no. (%) | 1,043 (5)b | 2,911 (15)b | 19,563 (100) |
| Standardized baseline examination score, mean (SD) | 0.07 (0.98) | 0.00 (0.98) | 0.00 (1.0) |
| Years in practice, mean (SD) | 10.1 (0.9) | 10.0 (0.9) | 10.1 (0.9) |
| Female physicians, no. (%) | 504 (48) | 1,323 (45) | 8,690 (44) |
| Moved states between residency and MOC, no. (%) | 497 (48) | 1,591 (55) | 9,798 (50) |
| Practice size, no. (%) | | | |
| Solo | 123 (12) | 346 (12)e | 2,887 (15) |
| 2 to 10 physicians | 402 (39) | 754 (26)f | 7,645 (39) |
| 11 to 50 physicians | 285 (27) | 544 (19)d | 4,264 (22) |
| 51 or more physicians | 162 (16) | 1,092 (38)f | 3,466 (18) |
| Unknown | 71 (7) | 175 (6) | 1,301 (7) |
| Academic practice, no. (%) | 155 (15) | 329 (11) | 2,527 (13) |
| Percent time in clinical role, mean (SD) | | | |
| Primary care | 63.3 (39.8) | 61.2 (40.1) | 59.9 (39.8) |
| Hospital care | 23.7 (35.9) | 24.0 (35.7) | 24.5 (35.6) |
| Other | 13.1 (24.7) | 14.8 (26.1)d | 15.6 (26.9) |
| Born in the United States, no. (%) | 636 (61.1) | 1,285 (44.2) | 9,973 (51.1) |
| Medical training, no. (%) | | | |
| U.S. allopathic | 691 (66) | 1,832 (63)d | 11,157 (57) |
| U.S. osteopathic | 34 (3) | 97 (3)d | 800 (4) |
| International | 318 (31) | 982 (34)d | 7,606 (39) |
| County‑level measures, mean (SD) | | | |
| % Persons below poverty level | 14.2 (4.4) | 14.0 (4.3) | 14.4 (5.2) |
| Median household income, $ | 56,165 (11,671) | 61,986 (13,283)e | 55,346 (14,348) |
| Physicians per 1,000 peoplec | 4.4 (2.1) | 3.2 (1.6)d | 3.7 (2.7) |
We did not observe any significant differences in practice characteristics between physicians from states that changed their credit‐hours‐per‐year requirements and those from states with stable CME requirements. We also evaluated whether physicians changing states between residency and the MOC examination might modify the relationship observed between increases in credit‐hours‐per‐year requirements and changes in examination performance. To do this, we included an interaction between an indicator for a physician having changed states and our credit‐hours‐per‐year increase policy variable and found no evidence of a moderating effect (interaction b = −0.052, SE = 0.041, p = .211).
We did observe differences in practice characteristics among physicians from states that decreased their CME term. Physicians from these states were more likely to be U.S. allopathic physicians (p = .001) from large practices of 51 or more physicians (p < .001) and to practice in counties with higher median household incomes (p = .002) but fewer physicians per capita (p = .023). Physicians from these states also reported spending a lower percentage of their time in other clinical roles (p = .03).
Discussion
Taking advantage of a natural experiment that occurred when seven states changed their CME requirements during our study period, we applied a difference‐in‐differences framework to measure the association between changes in medical knowledge and changes in state CME policies. Indicating that CME policies were associated with an increase in medical knowledge, we found that physicians practicing in the five states that increased their credit‐hours‐per‐year requirements improved their examination performance between their initial certification examination and their MOC examination 10 years later more than physicians practicing in states with no CME policy change. In terms of clinical significance, the magnitude of this improvement was equivalent to an increase in relative MOC examination performance from the 50th to the 54th percentile. In contrast, we failed to find evidence that decreasing the term for completing CME in three states was associated with changes in examination performance.
This is the first study to demonstrate a link between state‐level CME requirements and improvements in clinical knowledge. One prior cross‐sectional study, using data from the mid‐1990s, examined patient outcomes in states with and without a CME requirement (Patel et al. 2004). Patel and colleagues observed little difference in quality‐of‐care measures or outcomes for patients with acute myocardial infarction treated in states with, versus without, a CME requirement. However, meaningful quality improvements arising from requiring CME may not be distinguishable in cross‐sectional, state‐level aggregate data, given the large geographic variation in health care quality and utilization consistently observed across the United States (Fuchs 2004; Rettenmaier and Wang 2012).
While we found that physicians subject to a CME requirement demonstrated a greater level of clinical knowledge in an educational setting, additional research will be required to determine whether these associations translate into better quality of care and patient outcomes (Moore, Green, and Gallis 2009). Prior research has found that higher MOC examination scores were associated with better performance on clinical process measures (Holmboe et al. 2008; Hess et al. 2012). That said, a main emphasis of the MOC examination is testing a physician's diagnostic skills (American Board of Internal Medicine 2015). Given that diagnostic errors are an important patient safety issue, future evaluations of CME policy should consider whether any increases in clinical knowledge associated with these policies might help improve physicians' diagnostic skills (Balogh, Miller, and Ball 2016). It will also be important to consider these findings in the context of how state CME requirements affect the overall costs (both CME and non‐CME) of a physician keeping his or her clinical knowledge current.
As described in the Methods section, our difference‐in‐differences specification was designed to account for a variety of unobserved factors that might bias comparisons between changes in CME policies and changes in examination scores. To bias the associations we report, a factor related to changes in examination performance would have to change only in states making policy changes, and the timing of this change would need to coincide with the policy change itself. In addition, this shift would need to be fairly abrupt, as steady changes in examination performance over time in each state are captured by our inclusion of a linear trend term for each state. Evidence that the difference‐in‐differences approach accounted for bias associated with unobserved variables comes from the robustness of our estimates when we excluded control variables for observed physician (e.g., being in a large or academic practice) and county (e.g., median household income, physicians per capita) characteristics (Altonji, Elder, and Taber 2005). As a whole, these characteristics were significantly associated with changes in examination score, and yet their exclusion resulted in no discernible change in the association between changes in CME policy and changes in examination scores.
One limitation of these data is that we utilized repeated physician cross sections to compare examination performance between states before and after changes in CME requirements. Therefore, the effects we observe may be partially driven by compositional changes in the physicians sitting for the MOC examination. For example, if physicians in states making CME changes performed poorly on their initial certification examination, or if the CME policy changes selected for lower‐performing physicians, physicians in states making CME policy changes might be expected to perform better on their MOC examination simply through regression to the mean. However, this is accounted for implicitly through our difference‐in‐differences specification. Further, we did not observe a significant difference in baseline performance between states with stable versus changing CME requirements, and we failed to find any evidence that changes in CME requirements selected for physicians who performed worse on their initial certification examination. In addition, we accounted for any nonsignificant differences that might influence our estimates by using the change in standardized examination scores between the MOC and initial certification examinations.
An additional limitation is that about half of the physicians in the sample trained in a different state from the one in which they currently practice, and we do not know when they relocated. Therefore, it is possible that some physicians moved to their current state in the year before taking the MOC examination. In effect, these physicians would individually experience a change in CME requirements that is either more or less stringent. That said, for this movement to introduce bias, it would need to mirror CME policies in a way that would not be captured by our difference‐in‐differences control variables, such as physicians moving out of a state because its CME policies became more stringent. We assume that most of this movement between states occurred soon after residency and that internal medicine physicians are less likely to relocate once they have established their professional practice. Supporting this assumption, data suggest that only about one in ten physicians relocates annually (SK&A 2016). Further, we failed to find a significant difference in the proportion of physicians who moved between states that did and did not change their CME requirements; changing states itself was unrelated to changes in examination performance; and we found no evidence that changing states moderated the relationship between increasing the credit‐hours‐per‐year CME requirement and changes in examination performance. Therefore, we presume that any effect on changes in examination performance associated with moving between states is minimal or balanced between physicians practicing in states that did and did not change their CME requirements and so is controlled for by our difference‐in‐differences specification.
It is also important to note that the format of examination administration changed between the initial and MOC examinations: the initial examination was a two‐day examination administered on paper, while the MOC examination was a single‐day examination administered by computer. In addition, a physician's skill in taking standardized tests may have declined, given that he or she likely had not taken any standardized tests in the prior 10 years. This contrasts with initial certification, when physicians may have recently taken a number of standardized tests, such as their licensing examination and in‐training examinations during residency (Boston 2015). Therefore, we assume that an increase in CME policy stringency does not improve a physician's ability to perform well on the computer‐based format of the MOC examination relative to the paper‐based format of the initial certification examination, or his or her ability to perform on standardized tests in general, holding the physician's clinical knowledge constant.
Considering the generalizability of these findings, we examined only general internists who participated in ABIM's MOC program. CME participation may have a larger effect on physicians who are participating in MOC. This is because CME activities are largely formative assessments, whereas the MOC examination is a summative assessment that can have a large impact on a physician's career if he or she is unable to pass. Having this impending summative assessment might therefore motivate physicians to engage more meaningfully in a CME activity, holding the number of CME hours constant, and so derive greater clinical knowledge enhancement. Another possibility is that the MOC examination might drive physicians to participate in CME activities more directly related to internal medicine topics than to other aspects of care.
In contrast to MOC amplifying the effects of CME, it is also reasonable to assume that our results may be an underestimate of the effects observed among physicians not participating in MOC. This is because the ABIM's MOC program itself is designed to help physicians keep up‐to‐date on their clinical knowledge and skills and so may crowd out the effects of CME requirements. That said, even if the study results are applicable only to ABIM‐certified general internists, it should be noted that this group represents about half of all adult primary care physicians and over 80 percent of these physicians participated in MOC (Petterson et al. 2012; American Board of Internal Medicine 2016). Therefore, if the associations we report only cover this population, they are still important for public policy.
In conclusion, these findings support the general CME requirements most U.S. states have implemented over the past 40 years and suggest that, at least among physicians who took the MOC examination, more rigorous CME credit‐hour requirements (mostly the implementation of a new requirement) are associated with an improvement in physician clinical knowledge equivalent to a change from the 50th to the 54th percentile on the MOC examination. A question for future research is the degree to which the increase in clinical knowledge associated with changes in CME policy translates into improvements in patient outcomes and, if so, whether these benefits are worth the cost in terms of physician time and CME fees.
Supporting information
Appendix SA1: Author Matrix.
Appendix SA2:
Table S1. Summary of State CME Requirements in 2006.
Table S2. Summary of State CME Requirements in 2013.
Table S3. Full Linear County Fixed Effects Regression Results with Change in Examination Score as the Dependent Variable.
Table S4. Linear County Fixed Regression Results Excluding All Controls Except for the Difference‐In‐Differences Controls Designed to Capture Unobserved Variables That Could Bias Association between CME Policy Changes and Examination Score Changes.
Acknowledgments
Joint Acknowledgment/Disclosure Statement: Financial and material support was provided by the American Board of Internal Medicine. The design and conduct of the study; the collection, management, analysis, and interpretation of the data; the preparation, review, and approval of the manuscript; and the decision to submit the manuscript for publication were all carried out by the authors independently of the American Board of Internal Medicine.
Disclosures: None.
Disclaimer: None.
References
- Accreditation Council for Continuing Medical Education. 2014. "Accreditation Council for Continuing Medical Education (ACCME®) 2013 Annual Report Executive Summary" [accessed on October 1, 2014]. Available at http://www.accme.org/sites/default/files/630_2013_Annual_Report_20140715.pdf
- Altonji, J. G., Elder T. E., and Taber C. R. 2005. "Selection on Observed and Unobserved Variables: Assessing the Effectiveness of Catholic Schools." Journal of Political Economy 113 (1): 151–84.
- American Board of Internal Medicine. 2015. "Internal Medicine Maintenance of Certification (MOC) Examination Blueprint" [accessed on February 29, 2015]. Available at http://www.abim.org/~/media/ABIM%20Public/Files/pdf/exam-blueprints/maintenance-of-certification/internal-medicine.pdf
- American Board of Internal Medicine. 2016. "Maintenance of Certification Program Completion Rates (February 2016)" [accessed on February 29, 2016]. Available at http://www.abim.org/~/media/ABIM%20Public/Files/pdf/statistics-data/maintenance-of-certification-completion-rates.pdf
- American Board of Medical Specialties. 2014. "2013–2014 ABMS Board Certification Report" [accessed on February 29, 2014]. Available at http://www.abms.org/media/84770/2013_2014_abmscertreport.pdf
- American Medical Association. 2006. "Continuing Medical Education for Licensure Reregistration." State Medical Licensure Requirements and Statistics 2006, 47–50.
- American Medical Association. 2013. "Continuing Medical Education for Licensure Reregistration." State Medical Licensure Requirements and Statistics 2013, 65–9.
- Arrow, K. J. 1963. "Uncertainty and the Welfare Economics of Medical Care." American Economic Review 53 (5): 941–73.
- Balogh, E. P., Miller B. T., and Ball J. R. 2016. Improving Diagnosis in Health Care. Washington, DC: National Academies Press.
- Bastian, H., Glasziou P., and Chalmers I. 2010. "Seventy‐Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up?" PLoS Medicine 7 (9): e1000326.
- Boston, J. 2015. "The Nature of the Beast — Resident‐in‐Training Exams" [accessed on October 31, 2015]. Available at http://knowledgeplus.nejm.org/the-nature-of-the-beast-resident-in-training-exams/
- Cameron, A. C., and Miller D. L. 2015. "A Practitioner's Guide to Cluster‐Robust Inference." Journal of Human Resources 50 (2): 317–72.
- Cassel, C., and Holmboe E. S. 2006. "Professional Standards in the USA: Overview and New Developments." Clinical Medicine 6 (4): 363–7.
- Cervero, R. M., and Gaines J. K. 2015. "The Impact of CME on Physician Performance and Patient Health Outcomes: An Updated Synthesis of Systematic Reviews." Journal of Continuing Education in the Health Professions 35 (2): 131–8.
- Davis, D. A., Mazmanian P. E., Fordis M., Van Harrison R., Thorpe K. E., and Perrier L. 2006. "Accuracy of Physician Self‐Assessment Compared with Observed Measures of Competence: A Systematic Review." Journal of the American Medical Association 296 (9): 1094–102.
- Ehrenberg, R. G., and Smith R. S. 2003. Modern Labor Economics: Theory and Public Policy, Chapter 9, 8th Edition. Boston, MA: Addison Wesley.
- Eva, K. W., and Regehr G. 2008. "'I'll Never Play Professional Football' and Other Fallacies of Self‐Assessment." Journal of Continuing Education in the Health Professions 28 (1): 14–9.
- Freed, G. L., Dunham K. M., and Singer D. 2009. "Health Plan Use of Board Certification and Recertification of Surgeons and Nonsurgical Subspecialists in Contracting Policies." Archives of Surgery 144 (8): 753–8.
- Fuchs, V. R. 2004. "More Variation in Use of Care, More Flat‐of‐the‐Curve Medicine." Health Affairs Suppl Variation: VAR104–7.
- Glynn, R. W., Chin J. Z., Kerin M. J., and Sweeney K. J. 2010. "Representation of Cancer in the Medical Literature—A Bibliometric Analysis." PLoS ONE 5 (11): e13902.
- Gray, B., Reschovsky J., Holmboe E., and Lipner R. 2013. "Do Early Career Indicators of Clinical Skill Predict Subsequent Career Outcomes and Practice Characteristics for General Internists?" Health Services Research 48 (3): 1096–115.
- Hess, B. J., Weng W., Holmboe E. S., and Lipner R. S. 2012. "The Association between Physicians' Cognitive Skills and Quality of Diabetes Care." Academic Medicine 87 (2): 157–63.
- Holmboe, E. S., Wang Y., Meehan T. P., Tate J. P., Ho S. Y., Starkey K. S., and Lipner R. S. 2008. "Association between Maintenance of Certification Examination Scores and Quality of Care for Medicare Beneficiaries." Archives of Internal Medicine 168 (13): 1396–403.
- Huber, P. J. 1967. "The Behavior of Maximum Likelihood Estimates under Non‐Standard Conditions." Paper presented at the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA.
- Kolen, M. J., and Brennan R. L. 2004. Test Equating, Scaling, and Linking. New York: Springer.
- Levinson, W., and Holmboe E. 2011. "Maintenance of Certification in Internal Medicine: Facts and Misconceptions." Archives of Internal Medicine 171 (2): 174–6.
- Lipner, R. S., and Lucey C. R. 2010. "Putting the Secure Examination to the Test." Journal of the American Medical Association 304 (12): 1379–80.
- Lipsey, M. W., Puzio K., Yun C., Hebert M. A., Steinka‐Fry K., Cole M. W., Roberts M., Anthony K. S., and Busick M. D. 2012. Translating the Statistical Representation of the Effects of Education Interventions into More Readily Interpretable Forms. Washington, DC: National Center for Special Education Research.
- Marinopoulos, S., Dorman T., Ratanawongsa N., Wilson L., Ashar B., Magaziner J., Miller R., Thomas P., Prokopowicz G., and Qayyum R. 2007. "Effectiveness of Continuing Medical Education." Evidence Report/Technology Assessment No. 149 (Prepared by the Johns Hopkins Evidence‐Based Practice Center under Contract No. 290‐02‐0018). AHRQ Publication No. 07‐E006. Agency for Healthcare Research and Quality.
- Mazmanian, P. E., Davis D. A., and Galbraith R. 2009. "Continuing Medical Education Effect on Clinical Outcomes: Effectiveness of Continuing Medical Education: American College of Chest Physicians Evidence‐Based Educational Guidelines." Chest 135 (3 Suppl): 49S–55S.
- Miller, S. H. 2005. "American Board of Medical Specialties and Repositioning for Excellence in Lifelong Learning: Maintenance of Certification." Journal of Continuing Education in the Health Professions 25 (3): 151–6.
- Moore, D. E., Jr., Green J. S., and Gallis H. A. 2009. "Achieving Desired Results and Improved Outcomes: Integrating Planning and Assessment throughout Learning Activities." Journal of Continuing Education in the Health Professions 29 (1): 1–15.
- Patel, M. R., Meine T. J., Radeva J., Curtis L., Rao S. V., Schulman K. A., and Jollis J. G. 2004. "State‐Mandated Continuing Medical Education and the Use of Proven Therapies in Patients with an Acute Myocardial Infarction." Journal of the American College of Cardiology 44 (1): 192–8.
- Petterson, S. M., Liaw W. R., Phillips R. L. Jr, Rabin D. L., Meyers D. S., and Bazemore A. W. 2012. "Projecting US Primary Care Physician Workforce Needs: 2010–2025." Annals of Family Medicine 10 (6): 503–9.
- Rettenmaier, A. J., and Wang Z. 2012. "Regional Variations in Medical Spending and Utilization: A Longitudinal Analysis of US Medicare Population." Health Economics 21 (2): 67–82.
- SK&A. 2016. "Healthcare Provider Move Rates, Market Insights Report, October 2016" [accessed on November 3, 2016]. Available at http://www.skainfo.com/page/infographic-provider-move-rates
- U.S. Department of Health and Human Services. 2016. "2015–2016 Area Health Resource File (AHRF)" [accessed on October 31, 2016]. Available at http://ahrf.hrsa.gov/
- Vandergrift, J. L. 2012. "Measuring the Evidence That Informs Clinical Treatment Recommendations." Journal of the National Comprehensive Cancer Network 10 (5): 678–80.
- White, H. 1980. "A Heteroskedasticity‐Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity." Econometrica 48 (4): 817–38.
Source: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5980292/