Journalist/developer. Interactive editor @frontlinepbs. Builder of @HomicideWatch. Sinophile for fun. Past: @WBUR, @NPR, @NewsHour.

The bipartisan plan to end surprise ER bills, explained

There’s a bipartisan proposal in the Senate to end surprise ER billing.

Being an emergency room patient can be a pretty frightening experience. I know this because I’ve been spending the past year running a project on emergency room billing — and because a few months ago, I ended up an emergency room patient myself.

A few weeks after I had my baby, I developed an infection related to breastfeeding called mastitis. The infection gave me 104-degree fevers and painful swelling, all while I was trying to take care of a newborn. When my OB-GYN saw me, she sent me to the emergency room because my case had become especially bad.

Being in the emergency room was scary for all the reasons it’s scary to be that sick — the doctors were talking about admitting me to the hospital for more intensive treatment — but it was also scary because I spend a lot of time reading about surprise emergency room bills and I was worried about getting one myself.

I knew the hospital was in network with my insurance, but I had no idea whether the radiologist reading my ultrasound, who worked remotely, would accept my insurance. To be totally honest, I didn’t even ask if the radiologist was in network. I had a triple-digit fever, a screaming baby, and little ability to think through insurance billing. Besides, my doctor said I needed the scan to examine the abscess in my breast — who was I to turn it down on account of billing worries?

Instead, I went in for the scan and hoped for the best.

So far, things have turned out fine — I’ve only been billed a $150 copayment for my ER trip, and the mastitis cleared up a few days after the visit. But a lot of patients in a similar situation don’t end up fine, like the patient I wrote about who was left with a $7,924 bill after an oral surgery performed by an out-of-network doctor at his in-network emergency room, or the patient profiled by NPR and Kaiser Health News who ended up with an astronomical $108,951 bill for a similar situation.

Senators are proposing a solution to this problem — that doesn’t involve journalists writing about individual cases

When I came back from leave this week, I was especially interested to see a new legislative proposal from a bipartisan group of senators that would target this exact type of situation: surprise ER billing.

The policy proposal, which you can read here, essentially bars out-of-network doctors from billing patients directly for their care. Instead, they would have to seek payment from the insurance plan. This would mean that in the cases above, the out-of-network doctors couldn’t send those big bills to the patients, who’d be all set after paying their emergency room copays.

The doctors would instead have to work with patients’ insurance, which would pay the greater of the following two amounts:

  • The median in-network rate negotiated by health plans
  • 125 percent of the average amount paid to similar providers in the same geographic area

The Senate proposal would also require out-of-network doctors and hospitals to tell patients that they are out of network once their condition has stabilized, and give them the opportunity to transfer to an in-network facility.

This bill is notable in a few ways. For one, it’s a bipartisan health care bill in the Senate — you don’t see that very much these days! I’m told by one staffer familiar with the negotiations that this group has long shown an interest in bringing more transparency to health care prices, and surprise emergency bills seemed like a natural starting point.

“Given the reports of families with insurance getting surprised with massive bills, this would be an important first step in the work to bring more price transparency,” the staffer said.

The second is that it’s pretty good policy too! That’s the general feedback I got from Zack Cooper, an associate professor at Yale University, who, along with his colleague Fiona Scott Morton, has done a lot of pioneering research to uncover how frequently and where these surprise bills happen.

“It is fantastic that they’re doing something, and that it’s bipartisan,” he says. “It’s one of those areas where we can agree what is happening now is not good, and this gets us 80 percent of the way to fixing it.”

Cooper certainly has his quibbles with the Senate proposal. For one, it bases out-of-network reimbursement rates on current in-network rates. And those in-network fees are already really high. So while this bill might protect consumers, it probably won’t do a lot to tackle health care spending.

“My concern here is that in-network rates are already quite high, so we’re cementing that into the system,” he says. “The current world gives emergency physicians tremendous power in negotiating higher in-network rates.”

But on balance, Cooper sees the Senate proposal as a positive step — a big improvement on how things work today. He told me it’s one of those situations where he wouldn’t want to see the perfect be the enemy of the good, and where you actually have Democrats and Republicans working on a policy solution to make our health care system work just a tiny bit better.

Do you have an emergency room bill from the past five years? Share it with us! I’ve been running a year-long investigation into emergency room billing that relies on reader-submitted bills. Help us out and submit yours here.


This story appears in VoxCare, a newsletter from Vox on the latest twists and turns in America’s health care debate. Sign up to get VoxCare in your inbox along with more health care stats and news.

Join the conversation

Are you interested in more discussions around health care policy? Join our Facebook community for conversation and updates.


My 12 Rules to Live By


Anybody worth learning from has plenty they stand for.

I love hearing the rules of thumb, the standards, the conventional wisdom and the accrued learnings of these people. Similarly, I try to capture tightly phrased aphorisms and to hold myself accountable with plenty of direct and specific lists and resolutions.

So of course I was a sucker for the concept of ‘12 Rules for Life.’ It’s a book published early this year by Jordan Peterson that spiraled from popular to, fitting for today’s era, being engulfed in a strangely hyper-gendered debate. The book’s over-simplified approach of ordering one’s life with structure did gain positive feedback, including a podcast episode from Malcolm Gladwell. But because Peterson is aflame in lots of identity politics, I walked away from the book less interested in adding to that debate than with something else.

I spent the last several months taking notes of the many universal truths I held myself to, and recommended for others. It became a fun game for parties among friends and family: what are your 12 Rules to Live By?

Let me share.

I was down the shore for a weekend in July and created a first draft. In discussing mine with others, I dissected what I thought made for effective and powerful rules. My favorites are specific and mundane, universal enough to be familiar to anyone but not necessarily something everyone would agree with: they’re personal and symbolic of something far bigger.

Here are mine:

  1. Make a list and keep it. I believe in resolutions and deadlines and goals. I think they drive me forward more than anything else I do. It’s also how to develop habits. 
  2. Always use the white board. Forget the hesitation, and develop an idea together in as fast-moving and flexible an environment as possible. There’s no judging in brainstorming.
  3. Always pack a bathing suit. You can never be sure when an opportunity for fun will come up. A slightly related idea I herald but have not fully maintained myself: Never own a swimming pool but always have a friend with one.
  4. Never make a cocktail with a bottle of whiskey that cost more than $45. This came from a beloved uncle of mine and has deeply shaped my relationship to alcohol. A related goal of mine has been to always know how to make a drink with each of the five primary liquors.
  5. Ask the waitress what she would have. It’s a great approach to finding house recommendations, and a related pledge of mine is to always try the local street food.
  6. Never have an opinion before you could argue the opposite. Push yourself. If you’re angry about an issue, can you fully describe why someone might think the opposite?
  7. Never go to the bathroom without something to read. OK, so a playful one, but the point is to find opportunities to learn something new.
  8. Make the experiment smaller. I try to always remind myself that if we’re trying to learn something, there might be a smaller way to learn a lesson. It’s a question of not letting the perfect be the enemy of the good, and constantly shipping product.
  9. Never talk about a meeting until you’re outside the building. It’s a long story but let me say I’ve learned to be smarter about when and how I share news.
  10. Always fill the ice tray. Leadership and good teamwork takes place with small and quiet acts.
  11. Not making a decision is making a decision. I see it all the time: people who push back or avoid making a final call in the name of delay. Those delays are themselves decisions, whether or not they realize it.
  12. Demand only as much credit for a success as you would accept blame for its failure. It’s easy to speak up for the praise but hide away when the criticism comes. You have to lean into both equally — if not more for the blame first.

(Photo from Unsplash)


Voters Like A Political Party Until It Passes Laws


The stakes of the 2018 midterms seem huge. Democrats believe they can win back control of the U.S. House — maybe even the Senate — and move policy decidedly to the left. Then they’ll set their sights on the White House in 2020, dreaming about what they could accomplish with full control of government — and how much voters would reward them.

But perhaps Democrats shouldn’t get their hopes up too much. Republicans now control all branches of government but have scored only one major legislative victory, and they are facing a substantial backlash. The GOP’s current situation raises these questions: Can either political party maintain power while enacting its agenda? Or are governing majorities transient, with policy victories sowing the seeds for future electoral losses?

The evidence we have suggests the latter. When Democrats historically have tried to enact a spate of liberal policies, Republicans have made gains and public opinion has moved in a more conservative direction. Likewise, when Republicans have passed more conservative policies, Democrats have made gains, and public opinion has moved in a more liberal direction. It might not sound intuitive, but policy victories usually result in a mobilized opposition and electoral losses. Or, put another way, voters usually punish rather than reward parties that move policy to achieve their goals.

To measure this, I used an updated categorization of major laws since 1953 from “The Macro Polity,” which codes laws as liberal (expanding the scope of government responsibility), conservative (contracting the scope of government) or neither. For example, the Civil Rights Act of 1964 extended government protections, so it is coded as a liberal law; the 1981 Reagan tax cut downsized the federal tax code, so it is coded as a conservative law. This involves some judgment calls; if the authors of “The Macro Polity” thought a law was clearly liberal or conservative based on contemporary politics, they coded it to account for those realities. So even though abortion restrictions (or pro-life laws) can mean more government intervention, they are coded as conservative because they reflect conservative politics.

In the chart below, I plotted the relationship between the net number of liberal laws passed by each Congress and the change in the Democrats’ share of the popular vote in the House from the previous election. It is not a perfect relationship (the correlation is -0.47), and there were outliers, including the 1964 election following the Kennedy assassination and the 1974 election after Watergate.

But there were three cases that seemed to capture electoral fallout from high levels of liberal policymaking. Democrats last completely controlled the federal government in 2009-10 and used that control to enact a long list of policy priorities — only to be met with a massive electoral backlash in the 2010 midterms. Two other elections with the largest changes in partisan vote share from the prior election were in 1966, after Lyndon Johnson’s Great Society, and in 1994, following Bill Clinton’s initial legislative agenda.

It is not any easier for Republicans. They, too, have lost congressional seats and pushed public opinion to the left when they succeeded in shifting policy even a little to the right. Democrats have gained vote share after every Congress that passed more conservative than liberal laws. It’s notable that GOP-controlled governments haven’t tended to push overall policy that far to the right. Republican presidents have typically paired their conservative policies with liberal compromises — such as George W. Bush’s tax cuts along with a new health entitlement. The current Congress would be an outlier, even among those under Republican presidents, in pursuing no liberal laws.

So why do American politics seesaw back and forth? Some of what I’ve captured might be attributed to the well-known phenomenon of midterm loss: The party of the president tends to lose seats in a nonpresidential election. But it is unclear if midterm electoral backlashes are a certainty or a response to a president’s specific policy agenda. This can be hard to disentangle as new presidents often pursue big agendas in the hopes of shaping policy in their ideological mold.

For instance, after campaigning on health care helped President Obama win votes in 2008, voters in the 2010 election punished legislators who supported the Affordable Care Act because they came to view the policy and the legislators as too liberal. But backlashes may materialize even when large agenda items fail to pass, such as the response to Clinton’s health care proposal in the 1994 election or to Bush’s Social Security proposal in the 2006 election.

But there is a deeper problem: Neither party seems capable of sustaining a public majority to carry out its governing vision to completion. Today’s governing majorities are simply too narrow and short-lived to restructure government. The parties have recently swapped who commands the Oval Office and regularly compete for control of the House and Senate. Despite predictions of one-party dominance, both parties remain competitive.

This reflects the American public’s inconsistent views. Americans have long agreed with Republicans in broad symbolic terms while agreeing with Democrats in concrete policy terms. Politicians promise that they will win over converts with their policy success, but the public nearly always becomes more liberal during Republican presidencies — as it is doing now — and more conservative under Democratic rule (as it did under Obama).

Partisans tell themselves that this time will be different, that the final vanquishing of their opponents is just around the corner. But even maintaining a narrow majority for more than four years would be unprecedented of late — much less winning a long-term partisan war. Rather, the historical record suggests that the price for enacting a large ideological policy agenda may be losing the very power that made it possible.


The interesting ideas in Datasette


Datasette (previously) is my open source tool for exploring and publishing structured data. There are a lot of ideas embedded in Datasette. I realized that I haven’t put many of them into writing.

Publishing read-only data
Bundling the data with the code
SQLite as the underlying data engine
Far-future cache expiration
Publishing as a core feature
License and source metadata
Facet everything
Respect for CSV
SQL as an API language
Optimistic query execution with time limits
Keyset pagination
Interactive demos based on the unit tests
Documentation unit tests

Publishing read-only data

Datasette provides a read-only API to your data. It makes no attempt to deal with writes. Avoiding writes entirely is fundamental to a plethora of interesting properties, many of which are expanded on further below. In brief:

  • Hosting web applications with no read/write persistence requirements is incredibly cheap in 2018 - often free (both ZEIT Now and Heroku have generous free tiers). This is a big deal: even having to pay a few dollars a month is enough to disincentivise sharing data, since now you have to figure out who will pay and ensure the payments don’t expire in the future.
  • Being read-only makes it trivial to scale: just add more instances, each with their own copy of the data. All of the hard problems in scaling web applications that relate to writable data stores can be skipped entirely.
  • Since the database file is opened using SQLite’s immutable mode, we can accept arbitrary SQL queries with no risk of them corrupting the data.
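
To make that last point concrete, here is a minimal Python sketch of immutable mode, using SQLite’s immutable=1 URI parameter (the file and table names are hypothetical):

import sqlite3

# Open the database in SQLite's immutable mode: the library assumes the
# file will never change, skips locking entirely, and rejects all writes.
conn = sqlite3.connect("file:fixtures.db?immutable=1", uri=True)

try:
    conn.execute("insert into mytable values (1)")
except sqlite3.OperationalError as e:
    print(e)  # e.g. "attempt to write a readonly database"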

Any time your data changes, you need to publish a brand new copy of the whole database. With the right hosting this is easy: deploy a brand new copy of your data and application in parallel to your existing live deployment, then switch over incoming HTTP traffic to your API at the load balancer level. Heroku and Zeit Now both support this strategy out of the box.

Bundling the data with the code

Since the data is read-only and is encapsulated in a single binary SQLite database file, we can bundle the data as part of the app. This means we can trivially create and publish Docker images that provide both the data and the API and UI for accessing it. We can also publish to any hosting provider that will allow us to run a Python application, without also needing to provision a mutable database.

The datasette package command takes one or more SQLite databases and bundles them together with the Datasette application in a single Docker image, ready to be deployed anywhere that can run Docker containers.

SQLite as the underlying data engine

Datasette encourages people to use SQLite as a standard format for publishing data.

Relational databases are great: once you know how to use them, you can represent any data you can imagine using a carefully designed schema.

What about data that’s too unstructured to fit a relational schema? SQLite includes excellent support for JSON data - so if you can’t shape your data to fit a table schema you can instead store it as text blobs of JSON - and use SQLite’s JSON functions to filter by or extract specific fields.
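
A quick sketch of that pattern from Python, assuming a SQLite build with the JSON1 extension compiled in (as most modern builds have):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table docs (body text)")
conn.execute(
    "insert into docs values (?)",
    ['{"name": "datasette", "tags": ["sqlite", "json"]}'],
)
# json_extract pulls individual fields back out of the JSON text blob
name = conn.execute("select json_extract(body, '$.name') from docs").fetchone()[0]
print(name)  # datasette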

What about binary data? Even that’s covered: SQLite will happily store binary blobs. My datasette-render-images plugin (live demo here) is one example of a tool that works with binary image data stored in SQLite blobs.

What if my data is too big? Datasette is not a “big data” tool, but if your definition of big data is something that won’t fit in RAM that threshold is growing all the time (2TB of RAM on a single AWS instance now costs less than $4/hour).

I’ve personally had great results from multiple GB SQLite databases and Datasette. The theoretical maximum size of a single SQLite database is around 140TB.

SQLite also has built-in support for surprisingly good full-text search, and thanks to being extensible via modules has excellent geospatial functionality in the form of the SpatiaLite extension. Datasette benefits enormously from this wider ecosystem.

The reason most developers avoid SQLite for production web applications is that it doesn’t deal brilliantly with large volumes of concurrent writes. Since Datasette is read-only we can entirely ignore this limitation.

Far-future cache expiration

Since the data in a Datasette instance never changes, why not cache calls to it forever?

Datasette sends a far future HTTP cache expiry header with every API response. This means that browsers will only ever fetch data the first time a specific URL is accessed, and if you host Datasette behind a CDN such as Fastly or Cloudflare each unique API call will hit Datasette just once and then be cached essentially forever by the CDN.
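
Concretely, that is a response header along these lines (the exact value depends on how the instance is configured; one year is the conventional far-future setting):

Cache-Control: max-age=31536000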

This means it’s safe to deploy a JavaScript app using an inexpensively hosted Datasette-backed API to the front page of even a high traffic site - the CDN will easily take the load.

Zeit added Cloudflare to every deployment (even their free tier) back in July, so if you are hosted there you get this CDN benefit for free.

What if you re-publish an updated copy of your data? Datasette has that covered too. You may have noticed that every Datasette database gets a hashed suffix automatically when it is deployed:

https://fivethirtyeight.datasettes.com/fivethirtyeight-c9e67c4

This suffix is based on the SHA256 hash of the entire database file contents - so any change to the data will result in new URLs. If you query a previous suffix Datasette will notice and redirect you to the new one.
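
Here is a minimal sketch of deriving such a suffix - the function name and the 7-character truncation are my assumptions based on the example URL above, not Datasette’s internal code:

import hashlib

def db_hash_suffix(path, length=7):
    # Hash the entire database file in chunks; any change to the data
    # produces a different suffix, and therefore brand new URLs.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()[:length]

# e.g. "fivethirtyeight-" + db_hash_suffix("fivethirtyeight.db")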

If you know you’ll be changing your data, you can build your application against the non-suffixed URL. This will not be cached and will always 302 redirect to the correct version (and these redirects are extremely fast).

https://fivethirtyeight.datasettes.com/fivethirtyeight/alcohol-consumption%2Fdrinks.json

The redirect sends an HTTP/2 push header such that if you are running behind a CDN that understands push (such as Cloudflare) your browser won’t have to make two requests to follow the redirect. You can use the Chrome DevTools to see this in action:

[Screenshot: Chrome DevTools showing a redirect initiated by an HTTP/2 push]

And finally, if you need to opt out of HTTP caching for some reason, you can disable it on a per-request basis by including ?_ttl=0 in the URL query string - for example, if you want to return a random member of the Avengers it doesn’t make sense to cache the response:

https://fivethirtyeight.datasettes.com/fivethirtyeight?sql=select+*+from+[avengers%2Favengers]+order+by+random()+limit+1&_ttl=0

Publishing as a core feature

Datasette aims to reduce the friction for publishing interesting data online as much as possible.

To this end, Datasette includes a “publish” subcommand:

# deploy to Heroku
datasette publish heroku mydatabase.db
# Or deploy to Zeit Now
datasette publish now mydatabase.db

These commands take one or more SQLite databases, upload them to a hosting provider, configure a Datasette instance to serve them and return the public URL of the newly deployed application.

Out of the box, Datasette can publish to either Heroku or to Zeit Now. The publish_subcommand plugin hook means other providers can be supported by writing plugins.

License and source metadata

Datasette believes that data should be accompanied by source information and a license, whenever possible. The metadata.json file that can be bundled with your data supports these. You can also provide source and license information when you run datasette publish:

datasette publish now fivethirtyeight.db \
    --source="FiveThirtyEight" \
    --source_url="https://github.com/fivethirtyeight/data" \
    --license="CC BY 4.0" \
    --license_url="https://creativecommons.org/licenses/by/4.0/"

When you use these options Datasette will create the corresponding metadata.json file for you as part of the deployment.
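
Based on those options, the generated file should look something like this (my reconstruction, assuming the flags map directly to top-level metadata.json keys):

{
    "source": "FiveThirtyEight",
    "source_url": "https://github.com/fivethirtyeight/data",
    "license": "CC BY 4.0",
    "license_url": "https://creativecommons.org/licenses/by/4.0/"
}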

Facet everything

I really love faceted search: it’s the first tool I turn to whenever I want to start understanding a collection of data. I’ve built faceted search engines on top of Solr, Elasticsearch and PostgreSQL and many of my favourite tools (like Splunk and Datadog) have it as a core feature.

Datasette automatically attempts to calculate facets against every table. You can read more about the Datasette Facets feature here - as a huge faceted search fan it’s one of my all-time favourite features of the project. Now I can add SQLite to the list of technologies I’ve used to build faceted search!

Respect for CSV

CSV is by far the most common format for sharing and publishing data online. Almost every useful data tool has the ability to export to it, and it remains the lingua franca of spreadsheet import and export.

It has many flaws: it can’t easily represent nested data structures, escaping rules for values containing commas are inconsistently implemented and it doesn’t have a standard way of representing character encoding.

Datasette aims to promote SQLite as a much better default format for publishing data. I would much rather download a .db file full of pre-structured data than download a .csv and then have to re-structure it as a separate piece of work.

But interacting well with the enormous CSV ecosystem is essential. Datasette has deep CSV export functionality: any data you can see, you can export - including the results of arbitrary SQL queries. If your query can be paginated Datasette can stream down every page in a single CSV file for you.

Datasette’s sister-tool csvs-to-sqlite handles the other side of the equation: importing data from CSV into SQLite tables. And the Datasette Publish web application allows users to upload their CSVs and have them deployed directly to their own fresh Datasette instance - no command line required.

SQL as an API language

A lot of people these days are excited about GraphQL, because it allows API clients to request exactly the data they need, including traversing into related objects in a single query.

Guess what? SQL has been able to do that since the 1970s!

There are a number of reasons most APIs don’t allow people to pass them arbitrary SQL queries:

  • Security: we don’t want people messing up our data
  • Performance: what if someone sends an accidental (or deliberate) expensive query that exhausts our resources?
  • Hiding implementation details: if people write SQL against our API we can never change the structure of our database tables

Datasette has answers to all three.

On security: the data is read-only, using SQLite’s immutable mode. You can’t damage it with a query - INSERT and UPDATEs will simply throw harmless errors.

On performance: SQLite has a mechanism for canceling queries that take longer than a certain threshold. Datasette sets this to one second by default, though you can alter that configuration if you need to (I often bump it up to ten seconds when exploring multi-GB data on my laptop).
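
As a rough illustration of how a cutoff like that can be enforced from Python - a sketch of the general mechanism rather than Datasette’s exact implementation, with big_table as a stand-in name:

import sqlite3
import time

conn = sqlite3.connect("fixtures.db")

def time_limiter(limit_seconds):
    start = time.monotonic()
    # SQLite aborts the running query when this handler returns non-zero
    return lambda: 1 if time.monotonic() - start > limit_seconds else 0

# Invoke the handler every 1000 SQLite virtual machine instructions
conn.set_progress_handler(time_limiter(1.0), 1000)
try:
    conn.execute("select count(*) from big_table").fetchall()
except sqlite3.OperationalError:
    print("query cancelled: exceeded the one-second budget")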

On hidden implementation details: since we are publishing static data rather than maintaining an evolving API, we can mostly ignore this issue. If you are really worried about it you can take advantage of canned queries and SQL view definitions to expose a carefully selected forward-compatible view into your data.

Optimistic query execution with time limits

I mentioned Datasette’s SQL time limits above. These aren’t just there to avoid malicious queries: the idea of “optimistic SQL evaluation” is baked into some of Datasette’s core features.

Consider suggested facets - where Datasette inspects any table you view and tries to suggest columns that are worth faceting against.

The way this works is Datasette loops over every column in the table and runs a query to see if there are fewer than 20 unique values for that column. On a large table this could take a prohibitive amount of time, so Datasette sets an aggressive timeout on those queries: just 50ms. If the query fails to run in that time it is silently dropped and the column is not listed as a suggested facet.
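
A sketch of that loop - illustrative only, since the exact SQL and thresholds Datasette runs may differ, and the 50ms budget would be applied with a timeout mechanism like the one shown above:

import sqlite3

def suggest_facets(conn, table, columns):
    suggested = []
    for col in columns:
        try:
            distinct = conn.execute(
                f"select count(*) from (select distinct [{col}] from [{table}] limit 21)"
            ).fetchone()[0]
            if 1 < distinct < 20:
                suggested.append(col)
        except sqlite3.OperationalError:
            pass  # query ran over its 50ms budget: silently skip this column
    return suggested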

Datasette’s JSON API provides a mechanism for JavaScript applications to use that same pattern. If you add ?_timelimit=20 to any Datasette API call, the underlying query will only get 20ms to run. If it goes over you’ll get a very fast error response from the API. This means you can design your own features that attempt to optimistically run expensive queries without damaging the performance of your app.

Keyset pagination

SQL pagination using OFFSET/LIMIT has a fatal flaw: if you request page number 300 at 20 per page the underlying SQL engine needs to calculate and sort all 6,000 preceding rows before it can return the 20 you have requested.

This does not scale at all well.

Keyset pagination (often known by other names, including cursor-based pagination) is a far more efficient way to paginate through data. It works against ordered data. Each page is returned with a token representing the last record you saw, then when you request the next page the engine merely has to filter for records that are greater than that tokenized value and scan through the next 20 of them.

(Actually, it scans through 21. By requesting one more record than you intend to display you can detect if another page of results exists - if you ask for 21 but get back 20 or fewer you know you are on the last page.)
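
In SQL terms a page fetch looks something like this - a sketch with hypothetical table and column names:

import sqlite3

conn = sqlite3.connect("mydata.db")
PAGE_SIZE = 20

def next_page(last_id):
    # Ask for one extra row so we can tell whether another page exists
    rows = conn.execute(
        "select * from mytable where id > ? order by id limit ?",
        (last_id, PAGE_SIZE + 1),
    ).fetchall()
    has_next = len(rows) > PAGE_SIZE
    return rows[:PAGE_SIZE], has_next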

Datasette’s table view includes a sophisticated implementation of keyset pagination.

Datasette defaults to sorting by primary key (or SQLite rowid). This is perfect for efficient pagination: running a select against the primary key column for values greater than X is one of the fastest range scan queries any database can support. This allows users to paginate as deep as they like without paying the offset/limit performance penalty.

This is also how the “export all rows as CSV” option works: when you select that option, Datasette opens a stream to your browser and internally starts keyset-pagination over the entire table. This keeps resource usage in check even while streaming back millions of rows.

Here’s where Datasette gets fancy: it handles keyset pagination for any other sort order as well. If you sort by any column and click “next” you’ll be requesting the next set of rows after the last value you saw. And this even works for columns containing duplicate values: If you sort by such a column, Datasette actually sorts by that column combined with the primary key. The “next” pagination token it generates encodes both the sorted value and the primary key, allowing it to correctly serve you the next page when you click the link.
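
For a non-unique sort column, the WHERE clause tie-breaks on the primary key, along these lines (again a sketch with hypothetical names, not Datasette’s generated SQL):

# Resume after the (sortcol, id) pair encoded in the "next" token
sql = """
    select * from mytable
    where sortcol > :last_sort
       or (sortcol = :last_sort and id > :last_id)
    order by sortcol, id
    limit 21
"""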

Try clicking “next” on this page to see keyset pagination against a sorted column in action.

Interactive demos based on the unit tests

I love interactive demos. I decided it would be useful if every single release of Datasette had a permanent interactive demo illustrating its features.

Thanks to Zeit Now, this was pretty easy to set up. I’ve actually taken it a step further: every successful push to master on GitHub is also deployed to a permanent URL.

The database that is used for this demo is the exact same database that is created by Datasette’s unit test fixtures. The unit tests are already designed to exercise every feature, so reusing them for a live demo makes a lot of sense.

You can view this test database on your own machine by checking out the full Datasette repository from GitHub and running the following:

# Build the fixtures database and metadata file used by the unit tests
python tests/fixtures.py fixtures.db metadata.json
# Serve them with Datasette
datasette fixtures.db -m metadata.json

Here’s the code in the Datasette Travis CI configuration that deploys a live demo for every commit and every released tag.

Documentation unit tests

I wrote about the Documentation unit tests pattern back in July.

Datasette’s unit tests include some assertions that ensure that every plugin hook, configuration setting and underlying view class is mentioned in the documentation. A commit or pull request that adds or modifies these without also updating the documentation (or at least ensuring there is a corresponding heading in the docs) will fail its tests.
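
Here is a hedged sketch of what such an assertion can look like - the paths, the hook-discovery regex and the test itself are illustrative, not Datasette’s actual test suite:

import re
from pathlib import Path

import pytest

DOCS = "\n".join(p.read_text() for p in Path("docs").glob("**/*.rst"))
HOOKS = re.findall(r"def (\w+)\(", Path("datasette/hookspecs.py").read_text())

@pytest.mark.parametrize("hook", HOOKS)
def test_plugin_hook_is_documented(hook):
    assert hook in DOCS, f"{hook} is missing from the documentation"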

Learning more

Datasette’s documentation is in pretty good shape now, and the changelog provides a detailed overview of new features that I’ve added to the project. I presented Datasette at the PyBay conference in August and I’ve published my annotated slides from that talk. I was interviewed about Datasette for the Changelog podcast in May and my notes from that conversation include some of my favourite demos.

Datasette now has an official Twitter account - you can follow @datasetteproj there for updates about the project.


Practice, suck less


My “practice/suck less” diagram drawn on @devthomas’s chalkboard-painted podium in his 7th grade classroom. I was delighted to hear from several teachers who said they were now discussing it with their students. “This is now my teaching philosophy statement,” tweeted @toddpetersen, “in full.”

I’ve been thinking about how practice is its own skill — that once you learn to practice, you can transfer that skill to almost anything else.

A few years ago, I tweeted, “Lots of people decide to train for a marathon and just go out and do it. Why not choose to have better handwriting? Or play the piano?” And @aribraverman tweeted back, “Actually, training for a marathon, going out every day… has helped me be better/braver at being new at things…. Started to learn French, learned to ride a motorcycle. Running helped me not stress/expect to be perfect right away.”

There are other lessons that practice teaches. Here is Liz Danzico on learning to play music:

Learning to play music is a long exercise in learning to be kind to yourself. As your fingers stumble to keep up with your eyes and ears, your brain will say unkind things to the rest of you. And when this tangle of body and mind finally makes sense of a measure or a melody, there is peace. Or, more accurately, harmony. And like the parents who so energetically both fill a house with music and seek its quietude, both are needed to make things work. As with music, it takes a lifetime of practice to be kind to yourself. Make space for that practice, and the harmony will emerge.

Here is my not-so-classroom-friendly image of practice. (The piece is Schumann’s “Träumerei.”)


Incoming Calls

I wonder if that friendly lady ever fixed the problem she was having with her headset.
Public comments:

emdot (San Luis Obispo, CA): Real life, again.

JayM (Atlanta, GA): Hey! I am that one friend!