Tuesday, December 29, 2015

Top 10 Blog Posts of the Year

Writing the Dentrix Office Manager Blog is one of the most enjoyable things I do in my career.  Helping you all become Dentrix super-users is an honor I do not take lightly.  I get so excited when I receive your emails letting me know how my tips have helped your practice be more successful. 

Here are the top 10 topics you clicked on this year . . .

  1. Electronic signatures . . . a must have for going paperless    Since I wrote the blog on March 23, I have been receiving a lot of questions about integrating electronic signature devices, how they work in Dentrix, and which one I recommend. The good news is that I can answer all these questions and I am excited that the shift toward going paperless is growing.
  2. Oops! What to do when the wrong tooth was posted  Have you ever had a patient in your chair for his or her six-month checkup and, while you were perio charting, noticed that the wrong tooth is missing? After further investigation, you realize that the patient was in for an extraction three months ago, but the wrong tooth was set complete. You don’t know what to do, but the tooth chart needs to be fixed.
  3. Someone deleted the entire Batch Processor . . . how do I find all those claims?   Do you ever wonder if all your claims are being sent out? Has the batch been completely deleted and you are worried there were insurance claims on there? Do you want to know what reports to look at to manage your claims? If you answered “yes” to any of these questions . . . read on. 
  4. How can you prepare for the ICD-10 deadline?  The countdown to October 1, 2015 continues as we all wait to see if the implementation of ICD-10 really happens or not. Many of you have been using the ICD-9 code sets when billing medical insurance for procedures like TMJ, sleep apnea, and trauma. However, we are now seeing the use of diagnostic coding in the adoption of EHR and practices that are billing Medicaid. After October 1, we are going to see many more requirements for diagnostic coding. So how can you prepare?
  5. Best Practices for Your Team . . . weekly  Last week, we focused on what was important or the “best practice” for your daily protocol.  Today, we want to move to your weekly systems.  I teach my offices to practice on the business side of dentistry the same things you preach to your patients on the clinical side of dentistry . . .  “prevention will help keep you out of emergency situations.”  Living in a preventive rather than reactive frame of mind will bring tremendous benefits to your practice.
  6. Top 10 features in Dentrix you are not using   In 2003, my dental practice converted from an archaic DOS-based dental software to the impressive and robust Dentrix practice management software. To say the least, I was completely overwhelmed. Two years later, in 2005, I still felt like I hadn’t even scratched the surface of my new powerful software’s potential so I applied to become a Certified Dentrix Trainer in order to learn everything there was to know about the Dentrix program. What better way to learn than to get a certificate?
  7. Dentrix G6 launch . . . be amazed!   I had the pleasure of talking with Brad Royer, the Dentrix product manager, about the release of Dentrix G6.  The launch is scheduled for April 30, 2015, during the California Dental Association meeting.  Stop by the Dentrix booth during CDA to see all of the amazing features, but you can watch our interview now to hear about the highlights.
  8. Thinking of implementing electronic forms? This is a must read  Which one are you … “chartless” or “paperless?” It is interesting when I talk with dental practices how they use these two terms interchangeably. Most people don’t see a difference. But put yourself in the patient’s shoes and there is a huge difference. If your office sends the patient paper forms to fill out before their appointment, sends the patient a PDF attachment of your new patient forms, or sends your patients to your website where they have to download a PDF form, your office is “chartless,” not “paperless.”
  9. Stop giving the insurance companies so much power  Will whoever gave the insurance companies the power to do whatever they want please stand? Oh yeah … we did. We have given and continue to give them power when we don’t have proper documentation to justify our treatment plan or confirmation showing that our claims have been received. If we don’t manage our systems properly, we have to do what they tell us to do because we can’t prove anything.
  10. Delta EPOs and co-pay plans: how to set them up  Lately I have been seeing a few new insurance plans cropping up in the market. In Colorado, I have seen these new “EPO” Delta Dental plans that have a patient co-pay instead of the traditional coverage percentage. Then, when I was in Washington recently, the office showed me a plan where the patient pays a set dollar amount on certain procedures. These plans sound similar, but the setup is slightly different.

I am looking forward to an amazing 2016.  If you have a topic that you would like me to write about, email me directly at dayna@raedentalmanagement.com.

Monday, December 28, 2015

Secret Data

On replication in economics. Just in time for bar-room discussions at the annual meetings.
"I have a truly marvelous demonstration of this proposition which this margin is too narrow to contain." -Fermat
"I have a truly marvelous regression result, but I can't show you the data and won't even show you the computer program that produced the result" - Typical paper in economics and finance.
The problem 

Science demands transparency. Yet much research in economics and finance uses secret data. The journals publish results and conclusions, but the data and sometimes even the programs are not available for review or inspection.  Replication, even just checking what the author(s) did given their data, is getting harder.

Quite often, when one digs in, empirical results are nowhere near as strong as the papers make them out to be.

  • Simple coding errors are not unknown. Reinhart and Rogoff are a famous example -- which only came to light because they were honest and ethical and posted their data. 
  • There are data errors. 
  • Many results are driven by one or two observations, which at least tempers the interpretation of the results. Often a simple plot of the data, not provided in the paper, reveals that fact. 
  • Standard error computation is a dark art, producing 2.11 t statistics and the requisite two or three stars suspiciously often. 
  • Small changes in sample period or specification destroy many "facts."  
  • Many regressions involve a large set of extra right-hand variables, with no strong reason for inclusion or exclusion, and the result is often quite sensitive to those choices. Just which instruments you use and how you transform variables changes results. 
  • Many large-data papers difference, difference differences, add dozens of controls and fixed effects, and so forth, throwing out most of the variation in the data in the admirable quest for cause-and-effect interpretability. Alas, that procedure can load the results up on measurement errors, or slightly different and equally plausible variations can produce very different results. 
  • There is often a lot of ambiguity in how to define variables,  which proxies to use, which data series to use, and so forth, and equally plausible variations change the results.

I have seen many examples of these problems, in papers published in top journals. Many facts that you think are facts are not facts. Yet as more and more papers use secret data, it's getting harder and harder to know.

The solution is pretty obvious: to be considered peer-reviewed "scientific" research, authors should post their programs and data. If the world cannot see your lab methods, you have an anecdote, an undocumented claim, you don't have research. An empirical paper without data and programs is like a theoretical paper without proofs.

Rules

Faced with this problem, most economists jump to rules and censorship. They want journals to impose replicability rules, and refuse to publish papers that don't meet those rules. The American Economic Review has followed this suggestion, and other journals, such as the Journal of Political Economy, are following.

On reflection, that instinct is a bit of a paradox. Economists, when studying everyone else, by and large value free markets, demand as well as supply, emergent order, the marketplace of ideas, competition, entry, and so on, not tight rules and censorship. Yet in running our own affairs, the inner dirigiste quickly wins out. In my time at faculty meetings, there were few problems that many colleagues did not want to address by writing more rules.

And with another moment's reflection (much more below), you can see that the rule-and-censorship approach simply won't work.  There isn't a set of rules we can write that assures replicability and transparency, without the rest of us having to do any work. And rule-based censorship invites its own type I errors.

Replicability is a squishy concept -- just like every other aspect of evaluating scholarly work. Why do we think we need referees, editors, recommendation letters, subcommittees, and so forth to evaluate method, novelty, statistical procedure, and importance, but replicability and transparency can be relegated to a set of mechanical rules?

Demand

So, rather than try to restrict supply and impose censorship, let's work on demand.  If you think that replicability matters, what can you do about it? A lot:
  • When a journal with a data policy asks you to referee a paper, check the data and program file. Part of your job is to see that this works correctly. 
  • When you are asked to referee a paper, and data and programs are not provided, see if data and programs are on authors' websites. If not, ask for the data and programs. If refused, refuse to referee the paper. You cannot properly peer-review empirical work without seeing the data and methods. 
  • I don't think it's necessary for referees to actually do the replication for most papers, any more than we have to verify arithmetic. Nor, in my view, do we have to dot i's and cross t's on the journal's policy, any more than we pay attention to their current list of referee instructions. Our job is to evaluate whether we think the authors have done an adequate and reasonable job, as standards are evolving, of making the data and programs available and documented. Run a regression or two to let them know you're looking, and to verify that their posted data actually works. Unless of course you smell a rat, in which case, dig in and find the rat. 
  • Do not cite unreplicable articles. If editors and referees ask you to cite such papers, write back "these papers are based on secret data, so should not be cited." If editors insist, cite the paper as "On request of the editor, I note that Smith and Jones (2016) claim x. However, since they do not make programs / data available, that claim is not replicable."  
  • When asked to write a promotion or tenure letter, check the author's website or journal websites of the important papers for programs and data. Point out secret data, and say such papers cannot be considered peer-reviewed for the purposes of promotion. (Do this the day you get the request for the letter. You might prompt some fast disclosures!)  
  • If asked to discuss a paper at a conference, look for programs and data on authors' websites. If not available, ask for the data and programs. If they are not provided, refuse. If they are, make at least one slide in which you replicate a result, and offer one opinion about its robustness. By example, let's make replication routinely accepted. 
  • A general point: Authors often do not want to post data and programs for unpublished papers, which can be reasonable. However, such programs and data can be made available to referees, discussants, letter writers, and so forth, in confidence. 
  • If organizing a conference, do not include papers that do not post data and programs. If you feel that's too harsh, at least require that authors post data and programs for published papers and make programs and data available to discussants at your conference. 
  • When discussing candidates for your institution to hire, insist that such candidates disclose their data and programs. Don't hire secret data artists. Or at least make a fuss about it. 
  • If asked to serve on a committee that awards best paper prizes, association presidencies, directorships, fellowships or other positions and honors, or when asked to vote on those, check the authors' websites or journal websites. No data, no vote. The same goes for annual AEA and AFA elections. Do the candidates disclose their data and programs? 
  • Obviously, lead by example. Put your data and programs on your website. 
  • Value replication. One reason we have so little replication is that there is so little reward for doing it. So, if you think replication is important, value it. If you edit a journal, publish replication studies, positive and negative. (Especially if your journal has a replication policy!) When you evaluate candidates, write tenure letters, and so forth, value replication studies, positive and negative. If you run conferences, include a replication session. 
In all this, you're not just looking for some mess on some website, put together to satisfy the letter of a journal's policy. You're evaluating whether the job the authors have done of documenting their procedures and data rises to the standards of what you'd call replicable science, within reason, just like every other part of your evaluation.

Though this issue has bothered me a long time, I have not started doing all the above. I will start now.

Here, some economists I have talked to jump to suggesting a call for coordinated action. That is not my view.

I think this sort of thing can and should emerge gradually, as a social norm. If a few of us start doing this sort of thing, others might notice, think "that's a good idea," and feel empowered to start doing it too. The first person to do it will seem like a bit of a jerk. But after you read three or four tenure letters that say "this seems like fine research, but without programs and data we won't really know," you'll feel better about writing that yourself. Like "would you mind putting out that cigarette."

Also, the issues are hard, and I'm not sure exactly what the right policy is.  Good social norms will evolve over time to reflect the costs and benefits of transparency in all the different kinds of work we do.

If we all start doing this, journals won't need to enforce long rules. Data disclosure will become as natural and self-enforced a part of writing a paper as proving your theorems.

Conversely, if nobody feels like doing the above, then maybe replication isn't such a problem at all, and journals are mistaken in adding policies.

Rules won't work without demand

Journals are treading lightly, and rightly so.

Journals are competitive too. If the JPE refuses a paper because the author won't disclose data, the QJE publishes it, and the paper goes on to great acclaim, winning its author the Clark Medal and the Nobel Prize, then the JPE falls in stature and the QJE rises. New journals will spring up with more lax policies. Journals themselves are a curious relic of the print age. If readers value empirical work based on secret data, academics will just post their papers on websites, working paper series, SSRN, RePEc, blogs, and so forth.

So if there is no demand, why restrict supply? If people are not taking the above steps on their own -- and by and large they are not -- why should journals try to shove it down authors' throats?

Replication is not an issue about which we really can write rules. It is an issue -- like all the others involving evaluation of scientific work -- for which norms have to evolve over time and users must apply some judgement.

Perfect, permanent replicability is impossible. If replication is done with programs that access someone else's database, those databases change and access routines change. Within a year, if the programs run at all, they give different numbers. New versions of software give different results. The best you can do is to  freeze the data you actually use, hosted on a virtual machine that uses the same operating system, software version, and so on. Even that does not last forever. And no journal asks for it.
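To make the point concrete, here is a minimal sketch of what freezing the data you actually use might look like in practice (file names and details are hypothetical, and this is an illustration rather than a recommendation of any particular workflow): save the exact extract the paper uses, and record a checksum along with the software versions that produced the results.

```python
# Minimal sketch (hypothetical file names): freeze the exact data extract used
# in a paper and record the software environment that produced the results.
import hashlib
import json
import platform
import sys

import pandas as pd

# Data pulled from a vendor or government database that may change later.
df = pd.read_csv("raw_download.csv")

# Save the slice actually used, so later runs don't silently pick up revised data.
df.to_csv("frozen_extract.csv", index=False)

with open("frozen_extract.csv", "rb") as f:
    checksum = hashlib.sha256(f.read()).hexdigest()

manifest = {
    "sha256": checksum,
    "rows": len(df),
    "python": sys.version,
    "os": platform.platform(),
    "pandas": pd.__version__,
}

with open("replication_manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```

Even a record this crude lets a later reader check that the numbers came from the same extract and the same software versions, which is most of what goes stale within a year or two.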

Replication is a small part of a larger problem, data collection itself.  Much data these days is collected by hand, or scraped by computer. We cannot and should not ask for a webcam or keystroke log of how data was collected, or hand-categorized. Documenting this step so it can be redone is vital, but it will always be a fuzzy process.

In response to "post your data," authors respond that they aren't allowed to do so, and journal rules allow that response. You have only to post your programs, and then a would-be replicator must arrange for access to the underlying data.  No surprise, very little replication that requires such extensive effort is occurring.

And rules will never be enough.

Regulation invites just-within-the-boundaries games. Provide the programs, but with poor documentation. Provide the data with no headers. Don't write down what the procedures are. You can follow the letter and not the spirit of rules.

Demand invites serious effort towards transparency. I post programs and data. Judging by emails when I make a mistake, these get looked at maybe once every 5 years. The incentive to do a really good job is not very strong right now.

Poor documentation is already a big problem. My modal referee comment these days is "the authors did not write down what they did, so I can't evaluate it." Even without posting programs and data, the authors simply don't write down the steps they took to produce the numbers. The demand for such documentation has to come from readers, referees, citers, and admirers, and posting the code is only a small part of that transparency.

A hopeful thought: Currently, one way we address these problems is by endless referee requests for alternative procedures and robustness checks.  Perhaps these can be answered in the future by "the data and code are online, run them yourself if you're worried!"

I'm not arguing against rules, such as the AER has put in. I just think that they will not make a dent in the issue until we economists show by our actions some interest in the issue.

Proprietary data, commercial data, government data. 

Many data sources explicitly prohibit public disclosure of the data. Requiring disclosure of such secret data remains beyond current journal policies, or any policies that anyone imagines asking journals to impose. Journals can require that you post code, but then a replicator has to arrange for access to the data. That can be very expensive, or require a coauthor who works at the government agency. No surprise, such replication doesn't happen very often.

However, this is mostly not an insoluble problem, as there is almost never a fundamental reason why the data needed for verification and robustness analysis cannot be disclosed. Rules and censorship are not strong enough to change things. Widespread demand for transparency might well be.

To substantiate much research, and check its robustness to small variations in statistical method,  you do not need full access to the underlying data. An extract is enough, and usually the nature of that extract makes it useless for other purposes.

The extract needed to verify one paper is usually useless for writing other papers. The terms for using posted data could be: you cannot use this data to publish new original work, only for verification of and comment on the posted paper. This restriction is a lot easier to police than the current replication policies.

Even if the slice of data needed to check a paper's results cannot be public, it can be provided to referees or discussants, after signing a stack of non-use and non-disclosure agreements. (That is a less-than-optimal outcome of course, since in the end real verification won't happen unless people can publish verification papers.)

Academic papers take 3 to 5 years or more for publication. A 3 to 5 year old slice of data is useless for most purposes, especially the commercial ones that worry data providers.

Commercial and proprietary (bank) data sets are designed for paying customers who want up-to-the-minute data. Even CRSP data, a month old, is not much used commercially, because traders need up-to-the-minute data for trading.  Hedge fund and mutual fund data is used and paid for by people researching the histories of potential investments. Two-year-old data is useless to them -- so much so that getting the providers to keep old slices of data to overcome survivor bias is a headache.

In sum, the 3-5 year old, redacted, minimalist slice of data needed to substantiate the empirical work in an academic paper is in fact seldom a substantial threat to the commercial, proprietary, or genuine privacy interests of the data collectors.

The problem is fundamentally about contracting costs. We are in most cases secondary or incidental users of data, not primary customers. Data providers' legal departments don't want to deal with the effort of writing contracts that allow disclosure of data that is 99% useless but might conceivably be of value or cause them trouble.  Both private and government agency lawyers naturally adopt a CYA attitude by just saying no. 

But that can change.  If academics can't get a paper conferenced, refereed, read and cited with secret data,  if they can't get tenure, citations, or a job on that basis, the academics will push harder.  Our funding centers and agencies (NSF)  will allocate resources to hire some lawyers. Government agencies respond to political pressure.  If their data collection cannot be used in peer-reviewed research, that's one less justification for their budget. If Congress hears loudly from angry researchers who want their data, there is a force for change. But so long as you can write famous research without pushing, the apparently immovable rock does not move. 

The contrary argument is that if we impose these costs on researchers, then less research will be done, and valuable insights will not benefit society. But here you have to decide whether research based on secret data is really research at all. My premise is that, really, it is not, so the social value of even apparently novel and important claims based on secret data is not that large. 

Clearly, nothing of this sort will happen if journals try to write rules, in a profession in which nobody is taking the above steps to demand replicability. Only if there is a strong, pervasive, professional demand for transparency and replicability will things change.

Author's interest 

Authors often want to preserve their use of data until they've fully mined it. If they put in all the effort to produce the data, they want first crack at the results.

This valid concern does not mean that they cannot create redacted slices of data needed to substantiate a given paper. They can also let referees and discussants access such slices, with the above strict non-disclosure and agreement not to use the data.

In fact, it is usually in authors' interest to make data available sooner rather than later. Everyone who uses your data is a citation. There are far more cases of authors who gained fame and long citation counts from making data public early than there are of authors who jealously guarded data so they would get credit for the magic regression that would appear 5 or more years after data collection.

Yet this property right is up to the data collector to decide. Our job is to say "that's nice, but we won't really believe you until you make the data public, at least the data I need to see how you ran this regression." If you want to wait 5 years to mine all the data before making it public, then you might not get the glory of "publishing" the preliminary results. That's again why voluntary pressure will work, and rules from above will not work.

Service

One empiricist I talked to about these issues does not want to make programs public, because he doesn't want to deal with the consequent wave of emails from people asking him to explain bits of code, or claiming to have found errors in 20-year-old programs.

Fair enough. But this is another reason why a loose code of ethics is better than a set of rules for journals.

You should make a best faith effort to document code and data when the paper is published. You are not required to answer every email from every confused graduate student for eternity after that point. Critiques and replication studies can be refereed in the usual way, and must rise to the usual standards of documentation and plausibility.

Why replication matters for economics 

Economics is unusual. In most experimental sciences, once you collect the data, the fact is there or not. If it's in doubt, collect more data. Economics features large and sophisticated statistical analysis of non-experimental data. Collecting more data is often not an option, and not really the crux of the problem anyway. You have to sort through the given data in a hundred or more different ways to understand that a cause and effect result is really robust. Individual authors can do some of that -- and referees tend to demand exhausting extra checks. But there really is no substitute for the social process by which many different authors, with different priors, play with the data and methods.

Economics is also unusual in that the practice of redoing old experiments over and over, common in science, is rare in economics. When Ben Franklin stored lightning in a condenser, hundreds of other people went out to try it too, some discovering that it wasn't the safest thing in the world. They did not just read about it and take it as truth. A big part of a physics education is to rerun classic experiments in the lab. Yet it is rare for anyone to redo -- and question -- classic empirical work in economics, even as a student.

Of course everything comes down to costs. If a result is important enough, you can go get the data, program everything up again, and see if it's true.  Even then, the question comes, if you can't get x's number, why not?  It's really hard to answer that question without x's programs and data. But the whole thing is a whole lot less expensive and time consuming, and thus a whole lot more likely to happen, if you can use the author's programs and data.

Where we are 

The American Economic Review has a strong data and program disclosure policy. The JPE adopted the AER data policy. There is a good John Taylor blog post on replication and the history of the AER policy. The QJE has decided not to; I asked an editor about it and heard very sensible reasons. There is also a very good review article on data policies at journals by Sven Vlaeminck.

The AEA is running a survey about its journals, and asks some replication questions. If you're an AEA member, you got it. Answer it. I added to mine, "if you care so much about replication, you should show you value it by routinely publishing replication articles."

How is it working? The Report on the American Economic Review Data Availability Compliance Project concludes:
All authors submitted something to the data archive. Roughly 80 percent of the submissions satisfied the spirit of the AER’s data availability policy, which is to make replication and robustness studies possible independently of the author(s). The replicated results generally agreed with the published results. There remains, however, room for improvement both in terms of compliance with the policy and the quality of the materials that authors submit
However, Andrew Chang and Phillip Li disagree, in the nicely titled "Is Economics Research Replicable? Sixty Published Papers from Thirteen Journals Say 'Usually Not'":
We attempt to replicate 67 papers published in 13 well-regarded economics journals using author-provided replication files that include both data and code. ... Aside from 6 papers that use confidential data, we obtain data and code replication files for 29 of 35 papers (83%) that are required to provide such files as a condition of publication, compared to 11 of 26 papers (42%) that are not required to provide data and code replication files. We successfully replicate the key qualitative result of 22 of 67 papers (33%) without contacting the authors. Excluding the 6 papers that use confidential data and the 2 papers that use software we do not possess, we replicate 29 of 59 papers (49%) with assistance from the authors. Because we are able to replicate less than half of the papers in our sample even with help from the authors, we assert that economics research is usually not replicable. 
I read this as confirmation that replicability must come from a widespread social norm, demand, not journal policies.

The quest for rules and censorship reflects a world-view that once we get procedures in place, then everything published in a journal will be correct. Of course, once stated, you know how silly that is. Most of what gets published is wrong. Journals are for communication. They should be invitations to replication, not carved in stone truths.  Yes, peer-review sorts out a lot of complete garbage, but the balance of type 1 and type 2 errors will remain.

A few touchstones:

Mitch Petersen tallied up all papers in the top finance journals for 2001–2004. Out of 207 panel data papers, 42% made no correction at all for cross-sectional correlation of the errors. This is a fundamental error, one that typically understates standard errors by a factor of 5 or more. If firm i had an unusually good year, it's pretty likely firm j had a good year as well. Clearly, the empirical refereeing process is far from perfect, despite the endless rounds of revisions referees typically ask for. (Nowadays the magic wand "cluster" is waved over the issue. Whether it's being done right is a ripe topic for a similar investigation.)
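For readers who want to see what the correction involves, here is a minimal sketch using simulated data and Python's statsmodels (the variable names and data-generating process are my own illustration, not Petersen's): when a common year-level shock appears in both the regressor and the residual, plain OLS standard errors ignore the within-year correlation, and the naive t-statistic will typically come out noticeably larger than the clustered one.

```python
# Minimal sketch (simulated data): naive OLS standard errors vs. standard errors
# clustered by time period, the correction Petersen's survey is about.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_firms, n_years = 200, 10
firm = np.repeat(np.arange(n_firms), n_years)
year = np.tile(np.arange(n_years), n_firms)

# Year-level shocks make both the regressor and the residual correlated across
# firms within the same year; this is the cross-sectional correlation at issue.
x_year = rng.normal(size=n_years)
u_year = rng.normal(size=n_years)
x = x_year[year] + rng.normal(size=firm.size)
y = 0.5 * x + u_year[year] + rng.normal(size=firm.size)

panel = pd.DataFrame({"firm": firm, "year": year, "x": x, "y": y})

naive = smf.ols("y ~ x", data=panel).fit()  # ignores the within-year correlation
clustered = smf.ols("y ~ x", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["year"]}  # cluster by year
)

print(f"naive t-stat:     {naive.tvalues['x']:.2f}")
print(f"clustered t-stat: {clustered.tvalues['x']:.2f}")
```

The point of the sketch is not the particular numbers but the mechanics: the coefficient estimate is the same either way, and only the standard errors, and hence the stars, change.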

"Why Most Published Research Findings are False"  by John Ioannidis. Medicine, but relevant

A link on the controversy over replicability in psychology.

There will be a workshop on replication and transparency in economic research following the ASSA meetings in San Francisco.

I anticipate an interesting exchange in the comments. I especially welcome more links to and summaries of existing writing on the subject.

Update: "On the need for a replication journal" by Christian Zimmermann:
There is very little replication of research in economics, particularly compared with other sciences. This paper argues that there is a dire need for studies that replicate research, that their scarcity is due to poor or negative rewards for replicators, and that this could be improved with a journal that exclusively publishes replication studies. I then discuss how such a journal could be organized, in particular in the face of some negative rewards some replication studies may elicit.
But why is that better than a dedicated "replication" section of the AER, especially if the AEA wants to encourage replication? I didn't see an answer, though it may be a second best proposal given that the AER isn't doing it.

Update 2

A second blog post on this topic: Secret Data Encore.

Wednesday, December 23, 2015

Tax Oped

Source: Wall Street Journal
An Oped at the Wall Street Journal, "Here's what genuine tax reform looks like." With a new art style by WSJ. (Ungated via Hoover. I have to wait 30 days to post the whole thing.)

 I buried the lead, which I'll excerpt here:
"...Why is tax reform paralyzed? Because political debate mixes the goal of efficiently raising revenue with so many other objectives. Some want more progressivity or more revenue. Others defend subsidies and transfers for specific activities, groups or businesses. They hold reform hostage.

Wise politicians often bundle dissimilar goals to attract a majority. But when bundling leads to paralysis, progress comes by separating the issues. 
Thus, we should agree to first reform the structure of the tax code, leaving the rates blank. We will then separately debate rates, and the consequent overall revenue and progressivity.... we can agree on an efficient, simple and fair tax, and debate revenues and progressivity separately.

We should also agree to separate the tax code from the subsidy code. We agree to debate subsidies for mortgage-interest payments, electric cars and the like—transparent and on-budget—but separately from tax reform.

Negotiating such an agreement will be hard. But the ability to achieve grand bargains is the most important characteristic of great political leaders."
This is, I think, the most novel idea in the oped. All tax reform packages mix changes to the structure of the tax code with specific rates. Then, the wonkosphere goes on a witch hunt of who pays more and who pays less, and the attempt to fix pathological problems in the structure falls apart.

I think our politicians really could negotiate a tax code in which all the rates are left blank. Then, we have a separate debate about what those rates will be.  In fact, tax rates ought to change a lot more often than the tax code itself.

Similarly,  the key to removing the pernicious subsidies in the tax code is again to separate the issues. Taxes are for taxing, then we can debate subsidies.

We need to move from the equilibrium of "I have my subsidy/deduction/credit/special deal, so I won't complain about yours" to the equilibrium of "I gave up my subsidy/deduction/credit/special deal, so I'll make darn sure you give up yours too."


Sunday, December 20, 2015

2016 goal setting and planning . . . add this to your list


This week is filled with office holiday parties, baskets of holiday goodies being delivered from your specialists, and making sure you have your out-of-office emergency messages all set up just in case a patient needs to reach your doctor. When you return to your office after the holiday, you will be looking ahead to 2016: setting new goals, mapping out your appointment book for a more stress-free working day, and looking at new strategies to attract new patients. There is one thing I am asking you to add to your list of planning for 2016 … and that is to please bring your office up to HIPAA compliance.

When I am working with an office or providing a complimentary office assessment, I am still amazed at how many practices are out of compliance. You cannot continue to ignore the three requirements that were mandated by the HIPAA Security Rule in 2005. Yes, it has been more than 10 years since the HIPAA Security Rule was mandated and it is still grossly neglected. There are three questions I want you to ask your doctor or office manager.
  1. Where is your office’s risk assessment documented, and what are the results?
  2. Where is your office’s customized HIPAA Security Manual?
  3. When is your next annual office HIPAA training scheduled?

If you cannot get the answers to these three questions, you are out of compliance with the HIPAA Security Rule, which could result in your practice receiving a HIPAA violation. I realize you do not hear of many HIPAA violations, but I am trying to protect you. Becoming compliant is not difficult … just get it done.

I have written on this topic many times and preach it with all my clients. CLICK HERE for more information.

Tuesday, December 15, 2015

Institutions and experience

These are remarks I prepared for a symposium at Hoover in honor of George Shultz on his 95th birthday. Willie Brown was the star of the symposium, I think, preceded by a provocative and thoughtful speech by Bill Bradley.

Institutions and Experience

Our theme is “learning from experience.” I want to reflect on how we as a society learn from experience, with special focus on economic affairs. Most of these thoughts reflect things I learned from George, directly or indirectly, but in the interest of time I won’t bore you with the stories.

An English baron in 1342 tramples his farmers’ lands while hunting. The farmers starve. Then, insecure in their land, they don’t keep it up, they move away, and soon both baron and farmers are poor.

How does our society remember thousands of years of lessons like these? When, say, the EPA decides the puddle in your backyard is a wetland, or — I choose a tiny example just to emphasize how pervasive the issues are — when the City of Palo Alto wants to grab a trailer park, how does our society remember the hunter baron’s experience?

The answer: Experience is encoded in our institutions. We live on a thousand years of slow development of the rule of law, rights of individuals, property rights, contracts, limited government, checks and balances. By operating within this great institutional machinery, these “structures” as Senator Bradley called them last night, these “guardrails” as Kim Strassel called them in this morning’s Wall Street Journal, our society remembers the hunter baron’s experience of 1342, though each individual has forgotten it.


In particular, self-appointed technocrats — we economists — do not offer “advice” to benevolent “policymakers” to implement, though we often so flatter ourselves. Strong institutions of limited government defend against bad and transitory ideas.

Hayek told us how prices transmit information through an economy, information that no individual knows. In a similar manner, these institutions encode memories and wisdom that no individual remembers.

These great institutions do not operate on their own. They need maintenance, repair, continual improvement, and the incorporation of new experience. I am not arguing for mindless conservatism. Many of our legal structures have been, and continue to be, in need of fundamental changes.

But the mechanics who fix them, their operators, and us, their beneficiaries, need to be vaguely aware of how the machine works and why it is built the way it is. When institutions, structures, long standing traditions, rights, separations of power and so forth are abandoned or broken, when guardrails are smashed, the treasure trove of experience involved in their construction can be lost.

The Era of Forgetting

In this regard, I fear we live in an era of great forgetting.

Foreign policy increasingly seems unhinged from the simplest lessons of history, as well as from the carefully built institutions of the postwar order. Eisenhower and Roosevelt did not call a press conference, announce that the US was putting 5,000 soldiers on Omaha Beach, and promise the soldiers would be out by July. They set a goal, and promised to unleash whatever resources were needed to achieve it. As Senator Bradley reminded us, they knew that managing the peace is just as important as winning the war.

As John Taylor reminds us in his remarks today, monetary and financial policy has veered away from its traditional base in both domestic and international institutions and institutional limitations.

In economic and domestic affairs, the administration and its regulatory agencies are more and more telling people and businesses what to do, unconstrained by conventional rule-of-law restrictions and protections.

But what will happen on a change of administration? Will a new administration retreat, say we must restore rights and rule of law? Or will a new administration — once again — admire an expanded set of tools for ramming through its agenda, punishing political enemies, demanding cooperation of people and business, and set to work institutionally grabbing power for itself?

The temptation will be strong: To direct Lois Lerner’s successor to blackball different applications; to use campaign laws to persecute a different set of officials; to have its environmental, health care, and financial regulators demand the same tribute and that a different set of doors revolve; to wipe out its predecessor’s executive orders and issue new ones.

Or will it say, no, we eschew these methods, we will go back to respect and rebuild institutional limits, though it will take a long time and reduce our hold on power? Once the traditional restraints are broken, it’s awfully hard to go back.

The leading candidates have already promised which way they’re going. For example, Ms. Clinton, quoted by Kim Strassel, promises to use Treasury regulation to punish companies that legally reduce taxes by moving abroad. And Mr. Trump outrages the law and constitution daily.

Every society needs institutions to pass on its structures and traditions to the next generation. Grade for yourselves how well our schools and universities, even Stanford, are doing to pass on the lessons of limited government, rule of law, individual rights; the institutional wisdom of western democracy.

Our society’s premier institution for collecting, vetting, and passing on experience, science itself, is in trouble. The politicization of climate research is only the latest example.

Our policy debates are taking on a magical tone. Simple lessons of hundreds of years of experience, simple logic of cause and effect, and basic quantification, are disappearing.

Long experience tells us the simple steps that encourage economic activity: low, stable, and simple taxes; good public infrastructure; an efficient legal system; predictable, simple, and uncorrupt regulations; and a government that largely stays out of the way.

Long experience also warns us against many mistakes. For example, price and quantity controls induce scarcity, illegality, sclerosis, and poverty. It also teaches that grand plan after grand plan for government-directed growth or development has fallen apart.

But our policy debates chase ghosts instead. Rather than fix these humble and broken institutions, we are consumed by whether Ms. Yellen might pay banks a quarter of a percentage point more on their reserves. Action is regularly demanded over “bubbles,” “imbalances,” “reach for yield,” “risk premiums,” and so forth, as if anyone had any idea what these meant, let alone scientific understanding of what one should do about them.

Serious people and international institutions advocate that the road to prosperity is for the government to borrow money and deliberately waste it; to confiscate wealth by extortionate taxation; to welcome natural disasters for their stimulative rebuilding opportunities; to deliberately throw sand in the gears of productivity; almost magic recommendations that ignore centuries of experience.

(To clarify: yes, we should keep our minds open to new ideas. Quantum mechanics sounded like magic when introduced. I play with radical ideas too, such as the idea that higher interest rates lead to more, rather than less, inflation. The issue is, how quickly should new, revolutionary, everything-you-thought-you-knew-is-wrong ideas make their way to public policy? Too much economic policy jumps from "here's a cool idea I thought up on the plane" to "the US should spend a trillion bucks."  I do not advocate that the Fed should act on my latest paper!)

Our regulatory policy seems a parody of making the same mistakes over and over and refusing to learn the lessons.  The Dodd-Frank act is not a new idea. It simply tries, again and bigger, the same set of ideas that failed in crisis after crisis — guarantee debts, bail out banks, and add more regulators, in the vain hope of stopping increasingly large, politicized, too-big-to-fail, and hugely over-leveraged banks from ever losing money again. The ACA/Obamacare is not a new idea. It just adds layer after layer of the same health insurance and care regulations that failed before. This time price controls will surely work to lower costs without cutting supply or innovation — let’s forget the thousands of times they have failed.

And economics is relatively sensible. Magical beliefs pervade our political system’s discussion about terrorism, migration, or the environment. No, a high speed train will not fill California’s reservoirs, or stop terrorism or refugee migration.

There is a late Roman empire feeling in the air. Conventional limitations on action are ignored. People distrust the great institutions of their society, have neglected them, and now they have forgotten how those institutions work. People follow inspirational leaders, who use any tools at their disposal to crush enemies — only to be crushed in turn. New magical faiths sweep through. I fear that our grandchildren will walk among wondrous ruins like medieval villagers, having forgotten how to make concrete.

Optimism

But I learned an important lesson from George Shultz: Any time I start down this sort of line of thought, he says, "Stop being so grumpy!"  As Ronald Reagan famously put it, there must be a pony in here somewhere.  There is.

Our society also has self-correcting institutions. You’re sitting in one, and you’re part of that process today. We’re here. The ideas that define a free — and prosperous — society are alive. The memory of a rule of law structure is alive.

We still have a free press and, for now, relatively free speech, and most people still understand how important that is. The full potential of the regulatory and surveillance state to silence dissent has not yet been used. And in that press, and on the Internet, horror stories are adding up. People are getting sick of it.

Congress has noticed. There are good people who want to pass simple clear laws and bring back its rule.

For example, in November the House Judiciary Committee passed (WSJ commentary) a package of regulatory reforms. One is that, to be guilty of a crime, you must have some intent to violate the law. They can’t charge you after the fact under unknowable laws or regulations, use evidence, such as statistical discrimination programs, that you cannot see or challenge, and fine you millions or put you in jail without even claiming you intended any harm.

This principle of intent, “mens rea”, is a centuries-old bedrock of common law. It encodes a thousand years of experience. It is sad that Federal regulations forgot and trampled it. But it is great news that an effort to fix it is under way. A wider set of rights against regulators, a magna carta for the regulatory state, reestablishing the rights to know the rules ahead of time, to see and challenge evidence, to appeal, and to speedy judgment could well follow.

Financial regulators are seeing daily how ineffective the Dodd-Frank apparatus is. Slowly but surely, the realization that very simple capital standards can obviate this mess is making headway. You heard it from Senator Bradley last night.

I see hope on climate. There is a small but increasing alliance between environmentalists and free-marketers. The environmentalists think carbon is such a big problem, that they want policies that will actually do something about it. Free marketers are aghast at the waste and cronyism of energy policy. They are coming together on a deal: A simple straightforward carbon tax in place of wasting money and economic capacity on tax dodges, crony subsidies and ineffective regulations. Sure, there will be a big discussion on the rate, but any conceivable rate will be a big improvement for both environment and economy.

Similar grand bargains on taxes and entitlements are sitting before us, needing only a small amount of leadership and public pressure. The experience of 1982 and 1986 is not forgotten.

A hunger for monetary policy anchored in rules or at least strong institutional traditions and constraints is palpable, even producing bills in Congress. Those may not be perfectly crafted, and may not pass. But the force for rebuilding an institutional structure for monetary policy is there.

Collegiate humanities and social science education has passed from the fashionable to the ridiculous, so that the study of the successes of Western civilization, and not just its many sins, is returning.

I don’t yet hear “it’s your property, do what you want with it” from the Palo Alto zoning board, or the citizens who elect them, but who knows, that too is possible someday.

Even the widely reported disgust with government has a silver lining. People who distrust the government are less likely to vote for the next big personality promising big new programs. Instead, they might be more attracted to candidates who promise restraint and rule of law; to administer competently and to repair broken institutions.

Our society codes its experience into its institutions, in a grand edifice we call limited government and the rule of law. The old boat is rusty, but she’s not beyond hope. The bilge pumps are working. And we face no real external pressures. ISIS is the JV compared to the Visigoths, or to Germany, Japan, and the Soviet Union. A rich China should be a godsend, posing no more threat than a rich Europe and Canada. Silicon Valley is full of ideas and entrepreneurs waiting to unleash prosperity on the country. If only they can get the permits. If we fail, and the grand forgetting takes over instead, the fault will only be our own.

Tilting at Bubbles


Source: Wall Street Journal
The Wall Street Journal reports on the "Fed's Unsolved Puzzle: How to Deflate Bubbles" (That's the print version headline, much pithier than online.)

I thought I was reading The Onion. There it is, a graph marked "Asset Bubbles," measured, apparently, with interferometer precision.


I must have been asleep or something, since the last time I touched base with finance, mid-yesterday, we still didn't have an operational definition of "bubble," let alone a way of measuring one, beyond academics and Fed officials looking out their office windows and opining that prices seem awfully high (but not quite enough for them to put on a big short). Let alone any scientific understanding of what policies might calm such bogeymen. How does the Fed know a "bubble" from a "boom," an "irrational valuation" from a rational willingness to take risk in a slow but steady real economy?

And, much more importantly, when did it become the Fed's job to diagnose and prick its perceptions of asset price "bubbles?"

Yet here we read
Six years after the financial crisis ended, the central bank remained ill-equipped to quell the kind of dangerous asset bubbles that destabilized the savings-and-loan industry during the late 1980s, tech stocks in the 1990s and housing in the mid-2000s.
...financial bubbles have been root causes of the past three recessions
 Iowa farmland prices rose 28% between the fourth quarter of 2010 and the fourth quarter of 2011, igniting fears of a dangerous bubble
Apparently "bubbles" have made their way from Monday-morning quarterbacking to established and measurable facts. (To clarify, this is a news story not an editorial, and the reporters, Jon Hilsenrath and David Harrison, are just passing on what they hear. )
Commercial real-estate prices are soaring and Fed officials face the conundrum of what, if anything, to do.
Fed officials said afterward they saw they lacked clear-cut tools or a proper road map of regulatory measures to help stem the simulated booms.
Even though many Fed officials favor using regulatory powers over interest rates to stop bubbles, the U.S. was a “long way” from establishing a regulatory system that could achieve that, Mr. Dudley said in September. 
You're darn tootin' they face that conundrum. Because diagnosing the sources of, and controlling, asset and real estate prices is not, and never has been, part of the Fed's job.

The Fed has great power and independence. The price of that power and independence is a limited sphere of action. That limit is also wise. Once the Fed becomes the central planner of real estate prices, and the allocator of credit to control prices, it will neatly be sandwiched into a political role. Sellers and developers want more, and chant "prices are depressed, stimulate." Buyers want less and chant "pop this bubble" (but give me credit to buy). The only possible answer is, real estate prices are just not our business.

Central banks have always been severely limited, by statute and tradition, in what they can try to control and what tools they can use, in return for their independence. Traditionally, the central bank bought only short-term Treasuries, controlled only short-term interest rates, and limited its targets to inflation and employment.  Intervening in mortgage-backed security and long-term Treasury markets is already a stretch. Using interest rates to target asset prices is a stretch. Using regulatory power to allocate credit, to control real estate prices, is way, way beyond the Fed's mandate.

Memo to Fed:  There is already a chorus angry at how much you exceed your sphere now. You may regard them as ill-informed peasants with pitchforks, but they happen to occupy seats in Congress and they're writing bills. If you decide to judge whether the price of farmland in Iowa is a "bubble," and to use your regulatory powers to stifle credit to Iowa farmers with the goal of determining the just price of farmland, those peasants with pitchforks aren't going to take it quietly.

The Fed has neither authority, mandate, road map, nor regulatory measures, because controlling real estate prices is no more its job than controlling carbon emissions. Congress could change that, and give the Fed broad authority. But it has not done so.

To be fair, perhaps this is a natural extension. The Fed took on the job of propping up house, bond, and arguably stock prices in the recession, and there is not a huge outburst of complaint. Perhaps therefore it is entitled to tamp down house, bond, and stock prices in a boom, if it so desires. Oh wait, there is a huge outburst of complaint.
Mr. Rosengren [president of the Boston Fed] had noticed more building cranes in Boston.
“Given our low interest rates, given that it is an interest-sensitive sector, it is probably worthwhile to start thinking about at what point do we become concerned that is growing too rapidly,” he said.
The Fed’s low interest-rate policies have helped drive investors into such assets as commercial real estate as they search for higher returns.
Fed officials said afterward they saw they lacked clear-cut tools or a proper road map of regulatory measures to help stem the simulated booms. (Repeated, with emphasis) 
The vague rationale for intervention is that there is a difference between "boom" and "bubble," between asset prices that are high because of "real" valuations vs. "irrational" ones, between something like "supply" and "demand," and that somehow the Fed can tell the difference in real time, offset the bad and allow the good. But that all disappeared in the above paragraphs. Boom and bubble are now the same. And we're not even talking about national or "systemic" "bubbles" anymore. Now the Fed is supposed to worry about the price of farmland in Iowa.

This is how it's supposed to work: the Fed lowers interest rates, that raises asset values, and higher asset values induce people to invest, which is "stimulative." Q theory 101. How do we know it's "too much"?
Despite the action in commercial real estate, debt levels across the broader financial system are still modest. Overall U.S. financial sector debt— $15.2 trillion in the second quarter—was down 16% from the third quarter of 2008. Financial sector debt has fallen to 84% of economic output from 125%, a sign the economy is less prone to a financial crisis on the scale of 2008.
“Our quantitative measures indicate a subdued level of overall vulnerability in the U.S. financial system,” Fed economists said in an August research paper that sought to assess risks of banks and markets overheating.
Now we're getting somewhere. How are asset price gyrations a "risk" anyway? Answer: if and only if they make their way through debt to default and runs. The right answer to such worries is to make sure there isn't a lot of debt in the way, and let asset prices do whatever they want to do. Keep people from storing gas in the basement; don't try to stop them from ever lighting a candle. The project that the Fed will micro manage prices so nobody ever loses money again is hopeless.

And the bottom graph looks pretty darn good. So what is the worry? If there is no debt in the way, why must the Fed try to control prices?
Some of them, including Ms. George [president of the Federal Reserve Bank of Kansas City] said rates weren’t the right instrument to use against bubbles. She favored demanding banks hold more capital.
Excellent! (I presume she was misquoted, as banks issue capital, they don't hold it, but a minor quibble.)

The graph: I looked up the original here, in a nice paper titled "Mapping Heat in the U.S. Financial System." The paper does not pretend to define or measure "bubbles." It's a nice index number/visualization/forecasting exercise with many more pretty graphs.

All I want for Christmas is a new iWatch

Is a new iWatch on your Christmas list this year, or are you already wearing your iWatch and wondering what to do with it? I have a couple of suggestions for you . . . use it to help you manage your time and patients at your dental practice.

Imagine this . . . what if you could get a little buzz on your wrist from your iWatch letting you know that the patient in room #1 is ready for an exam, and you could check her medical alerts before you even walk into the room? What if you could then speak a command into your iWatch and it would automatically launch the patient’s most recent pano onto the monitor? Using these new apps designed around the efficiency of the dental practice gets me all excited . . . and it should get you excited too.

I have learned about two new products specifically designed around the iWatch that help you manage your time around the operatory. Below you will see a video interview I did with a company I spoke with at a recent dental meeting, and a link to another product that I absolutely love. If you are interested in this technology, research both products and schedule a demo of each one.
 
 
 
CLICK HERE to check out Simplifeye
CLICK HERE to check out OperaDDS

Senin, 14 Desember 2015

Luke Skywalker and ISIS

Via Marginal Revolution, I found "The Radicalization of Luke Skywalker" interesting.

Despised people -- terrorists; slaveholders; Republicans, to the New York Times -- think of themselves as good and worthy, though they do things we find unfathomably evil. Understanding how they see themselves is the first step to any sort of progress in world affairs. Understanding need not mean agreeing or condoning. The language we use -- "terrorist," "radicalize" -- puts them beyond comprehension; useful for ordering drone strikes but not for understanding why people sign up and how they might be turned. The analogy is admittedly strained, but seeing that we might have felt the same feelings that attract terrorists is an unsettling and useful experience.  Even if it's only a movie.

Kamis, 03 Desember 2015

Smith meet Jones

A while ago I wrote up a smorgasbord of policies that I thought could increase US economic growth, at least for a few decades, in "Economic Growth" (pdf, html here.) Noah Smith took me to task in a Bloomberg View column, complaining that I confused growth with levels,
...I want to focus on one bad argument that Cochrane uses. Most of the so-called growth policies Cochrane and other conservatives propose don't really target growth at all, just short-term efficiency. By pretending that one-shot efficiency boosts will increase long-term sustainable growth, Cochrane effectively executes a bait-and-switch.
As it turns out, the difference between "growth" and "level" effects, both in growth theory and in the data, is not so stark. Many economists vaguely remember something from grad school about permanent "growth" effects being different from, and much larger than, "level" effects. It turns out that the distinction is no longer so clear cut; "growth" is smaller and less permanent than you may have thought, and levels are bigger and longer lasting than you may have thought.

Along the way, I offer one quantitative exercise to help think just how much additional growth the US could get from the sort of free-market policies I outlined in the essay.

Part I Growth and Levels 

A quick reply: China.

China removed exactly the sort of "level" or "inefficiency" economic distortions that free-market economists like myself (and Adam Smith) recommend. What happened? Here is a plot of China's per capita GDP, relative to the US (From World Bank). In case you've been sleeping under a rock somewhere, China took off.
GDP per capita in China / US
(Note: This blog gets picked up in several places that mangle pictures and equations. If you're not seeing the above picture or later equations, come to the original.)

Now, in the "growth" vs "level," or "frontier" vs. "development" dichotomy, China experienced  a pure "level" effect. Its GDP increased by removing barriers to "short-term" efficiency, not by any of the "long-term" growth changes (more R&D, say) of growth theory.

But "temporary" "short-run" or "catch-up" growth can last for decades.  And it can be highly significant for people's well-being. From 2000 to 2014, China's GDP per capita grew by a factor of 7, from $955 per person to $7,594 per person, 696%, 14.8% annual compound growth rate (my, compounding does a lot). And they're still at 15% of the US level of GDP per person. There is a lot of "growth" left in this "level" effect!

Lots and lots of people, even "liberals" in Noah's other false dichotomy, use the word "growth" to describe what happened to China, and would not belittle policies that could make the same thing happen here.

Part II. How much better can the US do? 

But can liberalization policies have the same effect for us? Yes, you may say, China had scope for a big "catchup" growth effect. But the US is a "frontier" country. China can copy what we're doing. There is nobody for us to copy. Big increases in levels, which look like growth for a while, are over for us.

But are they? We know how much better China's economy can be, because we see the US. We see how much better North Korea's could be, because we see South Korea. (Literally, in this case.) How much better could the US be, really, if we removed all the distortions as in my growth essay?

To think about this issue, I made the following graph of GDP per capita versus the World Bank's
"Distance to Frontier" overall measure of government interference:
The distance to frontier score...shows the distance of each economy to the “frontier,” which represents the best performance observed on each of the indicators across all economies in the Doing Business sample since 2005.
The individual measures are things like
Starting a Business, Dealing with Construction Permits, Getting Electricity, Registering Property, Getting Credit, Protecting Minority Investors, Paying Taxes, Trading Across Borders, Enforcing Contracts, Resolving Insolvency
(I used GDP data for 2013, and distance for 2014. That gave the largest number of countries.)


The US is $52,000 per year and a distance score of 82. China is $7,000 and a score of 63. The diagonal line is an OLS regression fit.
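If you want to reproduce that kind of fit, here is a minimal sketch in Python. The arrays below are placeholders, not the actual World Bank sample; only the US and China points come from the numbers above. The "diagonal line" is then just a least-squares fit of log GDP per capita on the distance score:

```python
import numpy as np

# Placeholder data: (distance-to-frontier score, GDP per capita in US$).
# Only the US and China entries come from the text; the rest are
# illustrative stand-ins for the World Bank sample.
distance = np.array([82.0, 63.0, 40.0, 55.0, 70.0, 75.0, 90.0])
gdp_pc = np.array([52_000, 7_000, 1_500, 4_000, 12_000, 20_000, 45_000])

# OLS fit of log GDP per capita on the distance score
slope, intercept = np.polyfit(distance, np.log(gdp_pc), 1)

def fitted_gdp(score):
    """GDP per capita implied by the log-linear fit at a given score."""
    return np.exp(intercept + slope * score)

print(f"slope: {slope:.3f} log points per distance point")
print(f"fitted GDP per capita at a score of 82: ${fitted_gdp(82):,.0f}")
```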

The distance to frontier measure is highly correlated with GDP per capita. It tracks enormous variation in performance, from the abject poverty of $1,000 per year through the US and beyond.

The correlation would be stronger if not for the outliers. In red, Libya and Venezuela are arguably countries with temporarily higher GDP than the quality of their institutions will allow for long. In green, Rwanda and Georgia may have reasons for temporarily low GDP amid improving institutions. Cuba and North Korea are missing. Luxembourg and Kuwait have obvious stories. And I did not weight by population; large countries seem to be closer to the line.

Update: An attempt at nicer graph art. The countries are weighted by population. The dashed line is a weighted least squares fit, weighted by population. China is red, US is blue. Better?

One might dismiss the correlation a bit as reverse causation. But look at North vs. South Korea, East vs. West Germany, and the rise of China and India. It seems bad policies really can do a lot of damage. And the US and UK had pretty good institutions when their GDPs were much lower. (Hall and Jones 1999 control for endogeneity in this sort of regression by using instrumental variables.)

Too much growth commentary, I think, confounds "frontier" with "perfect." The US has good institutions, but not perfect ones. It takes forever to get a building permit in Libya. It takes 2 years or more to get one in Palo Alto. It could take 10 minutes. We are not completely uncorrupt. Our tax code is not perfect. Property rights in the US are not ironclad. A lawsuit might take 10 years in Egypt. But it still could take 3 years here. (Disclaimer: all made-up numbers.) And so forth.

So, the big question is, just how much greater "level" -- and how much China-like "growth" on the way -- could the US achieve by improving our good but imperfect institutions?

The Distance to Frontier measure is relative to the best country on each dimension in the World Bank sample. So a score of 100 is certainly possible. I labeled that by a hypothetical country, "Frontierland" (FRO) in the graph.

Perhaps we can do better. Even the best countries in the world are not perfect. Let's call the best possible institutions Libertarian Nirvana (LRN). How good could it be? If the US is currently 82, and the union of best current practices 100, let's consider the implications of a 110 guesstimate.

Country               Code   Distance   GDP/N      % > US   20-year growth
China                 CHN    61         $7,000
United States         USA    82         $53,000
Frontierland          FRO    100        $163,000   209%     5.6%
Libertarian Nirvana   LRN    110        $398,000   651%     14.8%

The table shows China and the US along with my hypothetical new countries. Frontierland generates $163,000 of GDP per capita, 209% better than the US. If it takes 20 years to adjust, that means 5.6% per year compound growth. Libertarian Nirvana generates $398,000 of GDP per capita, 651 percent better than the US, a level effect which if achieved in 20 years generates 14.8% compound annual growth along the way.
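
The growth column is just the level gap spread over an assumed 20-year adjustment, with continuous compounding. Here is a minimal sketch of that calculation in Python, using the Frontierland row from the table:

```python
import math

us_gdp = 53_000   # US GDP per capita from the table
horizon = 20      # assumed years of adjustment

def implied_growth(target_gdp, base_gdp=us_gdp, years=horizon):
    """Continuously compounded annual growth needed to move from
    base_gdp to target_gdp over the given number of years."""
    return math.log(target_gdp / base_gdp) / years

frontierland = 163_000
print(f"Frontierland: {implied_growth(frontierland):.1%} per year")  # about 5.6%
```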

These numbers seem big. But there are no black boxes here. You see the graph, I'm just fitting the line.  And China just did achieve nearly 20 years of 14% growth, and a 700% improvement.

In a sense, the numbers are conservative. The US is above the regression line in the graph. By the regression line, our GDP per capita should only be $33,000 per capita. I extrapolated the regression line, not the current state of the US.

Summary: It is surprising that bad policies, bad institutions, bad ease of doing business, can do quite so much damage. Harberger triangles just don't seem to add up to the difference between $1,000  and $53,000 GDP per capita. But the evidence -- especially the basically controlled experiments of the Koreas and Germanys -- is pretty strong.

The converse must therefore also be true. If bad institutions and policies can do so much damage, better ones may also be able to do a lot of good.

This is admittedly simplistic. Growth theory does distinguish between "ideas" produced by the "frontier" country, which are harder to improve on, and "misallocation" or "development," that is, using existing ideas more efficiently. Just as traditional macroeconomics thinks of aggregate demand easily raising GDP until we run into aggregate supply, there is a point of superb efficiency beyond which you can't go without more ideas. I don't know where that point is. But uniting the existing best practices around the world in Frontierland is surely a lower bound, and an extra 10 percent doesn't seem horribly implausible.

Lots of other new research suggests that level inefficiencies are sizeable. For example, Chang-Tai Hsieh and Pete Klenow measure misallocation -- the extent to which low-productivity plants should contract and high-productivity plants should expand, largely by just moving people around (yes, I'm simplifying). They report from this source "Full liberalization, by this calculation, would boost aggregate manufacturing TFP by 86%–115% in China, 100%–128% in India, and 30%–43% in the United States." And this is just from better matches. They're not even talking about policies that raise TFP at all plants, like removing regulatory barriers.

Likewise, Michael Clemens argues that opening borders -- again better matching skills and opportunities -- would roughly double world GDP. That too is (as far as I can tell) based only on "level" calculations, not the "scale" effects of better ideas that growth theory (below) would adduce. But you'd get a lot of "growth" on the way to doubling the level!

Part III. Smith, meet Jones; Growth effects are smaller than you thought

Conversely, it turns out that "growth" effects are vanishing from growth theory. Levels are all we have -- but big levels, that take decades of "transitory" growth to achieve.

The crucial references here are Chad Jones' 2005 "Growth and Ideas," 1995 "R&D-Based Models of Economic Growth," and 1999 "Sources of U.S. Economic Growth in a World of Ideas." My discussion will pretty freely plagiarize.

Suppose output is produced using labor \(L_Y\) and a stock of ideas \(A\) by \[ Y = A^\sigma L_Y \] New ideas are likewise produced from labor and old ideas, \[ \dot{A} = \delta L_A A^\phi \] where \(L_A\) is the number of people working on ideas, often (but too narrowly, in my view) called "researchers." To keep it simple, suppose a fraction \(s\) of the labor force works in research, \(L_A= s L\) and that population \(L\) grows at the rate \(n\). The classic Romer, Grossman and Helpman, and Aghion and Howitt models specify \(\phi = 1\). Then we have \[ \frac{\dot{A}}{A} = \delta s L \] and growth in output per capita is \[ g_Y \equiv \frac{\dot{Y}}{Y} -\frac{\dot{L}}{L} = \sigma \delta s L. \] Here you see the new growth theory promise: an increase in the fraction of the population doing research \(s\) can raise the permanent growth rate of output per capita! This is a "growth effect" as opposed to those boring old "level effects" of standard efficiency-improving microeconomics.

But here you also see the fatal flaw pointed out by Jones. The growth rate of output should increase with the level of population. As world population increased from 2 billion in 1927 to 7 billion today, growth should have increased from 2% to 7% per year, per capita. The growth rate of output per capita should itself be growing exponentially! Substituting, we should see \[ g_Y = \sigma \delta s L_0 e^{nt} \] The problem is deep. The model with \(\phi = 1\) gets all sorts of scale effects wrong. Not only has the population increased over the last century, the fraction engaged in R&D has increased dramatically. Integration, by which two economies merge and effectively double their populations, should double their growth rates. Yet frontier growth rates are quite steady, if anything declining since the 1970s.
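
To put numbers on that scale effect, here is a minimal sketch in Python. The parameter values are made up, chosen only so that a population of 2 billion delivers 2% per capita growth, as in the example above:

```python
# Per capita growth under phi = 1: g_Y = sigma * delta * s * L.
# Parameters are purely illustrative, chosen so that L = 2 billion
# gives 2% per capita growth, matching the example in the text.
sigma = 1.0     # effect of the idea stock on output
s = 0.05        # fraction of the population doing research
delta = 2e-10   # productivity of idea production

def per_capita_growth(population):
    """Per capita output growth implied by the phi = 1 model."""
    return sigma * delta * s * population

for pop in (2e9, 7e9):   # world population, 1927 vs. today
    print(f"L = {pop:.0e}: implied per capita growth = {per_capita_growth(pop):.0%}")
```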

Jones' solution is simple: How about \(\phi < 1\)? Let's think hard about returns to scale in idea production:
If \(\phi > 0\), then the number of new ideas a researcher invents over a given interval of time is an increasing function of the existing stock of knowledge. We might label this the standing on shoulders effect: the discovery of ideas in the past makes us more effective researchers today. Alternatively, though, one might consider the case where \(\phi < 0\), i.e. where the productivity of research declines as new ideas are discovered. A useful analogy in this case is a fishing pond. If the pond is stocked with only 100 fish, then it may be increasingly difficult to catch each new fish. Similarly, perhaps the most obvious new ideas are discovered first and it gets increasingly difficult to find the next new idea.
Or, maybe \(\phi=0\) is a useful benchmark: each hour of work produces the same number of new ideas. But  \(\phi=1\) is a strange case; each hour of effort produces the same increase in the growth rate of new ideas.

Solving the model for \(\phi \lt 1 \) the idea accumulation equation is \[ \frac{\dot{A}}{A} = \delta s L_0 e^{nt} A^{\phi-1} \] Let's look for a constant growth rate solution \(A_t = A_0e^{g_At}\), \[ g_A= \delta s L_0 e^{nt} A_0^{\phi-1} e^{(\phi-1){g_At}} \] This will only work if the exponents cancel, \[n+(\phi-1)g_A = 0 \] \[g_A = \frac{n}{1-\phi} \] The steady state output per capita growth is then \[ g_Y = \sigma g_A = \frac{\sigma n}{1-\phi}\] This change solves the problem: It's still an endogenous growth model, in which growth is driven by the accumulation of non-rivalrous ideas. There are still externalities, and doing more idea-creation might be a good idea itself. But now the model predicts a sensible steady growth in per-capita income.
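
Here is a minimal numerical sketch of those transition dynamics, again in Python with purely illustrative parameters: simulate the idea-accumulation equation with population growing at rate \(n\) and check that the growth rate of \(A\) settles down at \(n/(1-\phi)\):

```python
import math

# Purely illustrative parameters, not calibrated to anything
phi, delta, s, n = 0.5, 0.01, 0.1, 0.01
A, L = 1.0, 1.0
dt, steps = 0.1, 20_000   # 2,000 time units of simple Euler steps

for _ in range(steps):
    A_dot = delta * s * L * A ** phi   # idea accumulation
    growth_A = A_dot / A               # instantaneous growth rate of A
    A += A_dot * dt
    L *= math.exp(n * dt)              # population grows at rate n

print(f"simulated long-run growth of A: {growth_A:.4f}")
print(f"theory, n / (1 - phi):          {n / (1 - phi):.4f}")
```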

The model no longer has "growth effects." Jones:
Changes in research intensity no longer affect the long-run growth rate but, rather, affect the long-run level of income along the balanced-growth path (through transitory effects on growth). Similarly, changes in the size of the population affect the level of income but not its long-run growth rate. Finally, the long-run growth rate ...
On reflection, this distinction isn't really a big deal. The model behaves smoothly, for any finitely long period of time or data, as \(\phi\) approaches one. The "level" effects get larger, and the period of temporary "growth" in transition dynamics to a new level gets longer. Even a century's worth of steady growth can't easily distinguish between values of \(\phi\) a bit below one, and the limit \(\phi=1\) of permanent growth effects.

This should remind you of the great unit root debate. A model \(y_t = \phi y_{t-1} + \varepsilon_t\) with \( \phi=1\) has a unit root, and shocks have permanent effects. A model with \( \phi < 1\) is stationary, with only transitory responses to shocks. But \(\phi=0.99\) behaves for a century's worth of data almost exactly like \(\phi=1\). So the difference between "permanent" and "transitory", like the difference between "growth" and "level" really is not stark.
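
A minimal illustration of that point in Python: feed the same shocks through an AR(1) with \(\phi = 0.99\) and with \(\phi = 1\) for a century of observations and compare the paths; with only 100 data points, the near-unit-root and unit-root cases are hard to tell apart:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100                       # a century of annual observations
eps = rng.normal(size=T)      # the same shocks for both models

def ar1_path(phi, shocks):
    """Simulate y_t = phi * y_{t-1} + eps_t starting from y_0 = 0."""
    y = np.zeros(len(shocks))
    for t in range(1, len(shocks)):
        y[t] = phi * y[t - 1] + shocks[t]
    return y

stationary = ar1_path(0.99, eps)   # shocks die out, in principle
unit_root = ar1_path(1.00, eps)    # shocks are permanent

corr = np.corrcoef(stationary, unit_root)[0, 1]
print(f"correlation of the two simulated paths: {corr:.3f}")
```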

So where are we? There is no magic difference between permanent growth effects and one-time level increases. All we have are distortions that change the level of GDP per capita.

The big question remains: how bad are the distortions? Which ones have large effects, and which have tolerably small effects? Endogenous growth theory still suggests that distortions which interfere with idea production, including the embodiment of new ideas in productivity-raising businesses, will have much larger effects than, say, higher sales taxes on tacos. Just why is the correlation between bad government and bad economies so strong? My essay just suggested getting rid of all the distortions we could find.

Part IV. Needless politicization 

As I hope this extensive post shows, these questions are not political, and they are the subject of much deep current research.

Noah chooses to make this political. The quote again,
...I want to focus on one bad argument that Cochrane uses. Most of the so-called growth policies Cochrane and other conservatives propose don't really target growth at all, just short-term efficiency. By pretending that one-shot efficiency boosts will increase long-term sustainable growth, Cochrane effectively executes a bait-and-switch.
"Bad argument" may just mean that Noah is unaware of Jones' and related work. "Cochrane and other conservatives" is telling. Look at my profile. You don't find that word.  Open borders, drug legalization, and so forth are not well described as "conservative." I emailed Noah last time he used the word, so his inaccuracy is intentional.

"Pretending" "bait-and-switch" are unsubstantiated charges of intentional deception. And to call permanent increases in efficiency "short-term" is itself a bit of a stretch.

Even the New York Times, and many respectable "liberal" economists, use the word "growth" to describe what has happened in China and what "short-term" level effects could do for the US. From the Hillary Clinton campaign website,
Hillary understands that in order to raise incomes, we need strong growth, fair growth, and long-term growth. And she has a plan to get us there.

Strong growth
Provide tax relief for families. Hillary will cut taxes for hard-working families to increase their take-home pay...

Unleash small business growth. ..She’s put forward a small-business agenda to expand access to capital, provide tax relief, cut red tape, and help small businesses bring their goods to new markets.

...Hillary’s New College Compact will invest $350 billion so that students do not have to borrow to pay tuition at a public college in their state. ..

Boost public investment in infrastructure and scientific research. ... Hillary has called for a national infrastructure bank... She will call for reform that closes corporate tax loopholes and drives investment here, in the U.S. And she would increase funding for scientific research at agencies like the National Institutes of Health and the National Science Foundation.

Lift up participation in the workforce—especially for women...
No, that's not my essay, nor the Bush 4% growth website. The word "growth" is there all over the place, but only the scientific research might count as raising growth in the Noah Smith classification. Yet he does not include her among "conservative" economists displaying "bad arguments," "pretending," or "bait and switching."

Enough. Shoehorning interesting economics into partisan political "conservative" vs. "liberal" categories is not a useful way to understand the issues here.