Practical Data for Nonprofits: Part 2 — All Data Comes from Somewhere
Oh, those timeless questions we all asked at a tender age: Why is the sky blue? Why is water wet? Where does data come from?
If you’re a RadioLab fan, you may have heard that the sky isn’t actually blue. I’m sure there are some podcasts out there about water not being wet. And data? Well, everybody knows that human beings take deliberate action to generate, capture, store, use, and interpret data. Right?
I start with this rule because everyone knows it, yet I don’t see that knowledge put into action in any meaningful way on a regular basis. Ignoring this “common knowledge” rule is what leads to extractive and exploitative data collection practices at worst, and horribly inefficient and ineffective practices at best. It leads to the assumption that we can instantaneously take oceans of stored bits and bytes and transform them into whatever meaning we wish to extract.
This is how we get lies, damned lies, and statistics. And a lot of really grumpy people who just want “better reports.”
We live in a world where it’s advantageous and/or convenient for certain people to pretend that there is no human or environmental cost to extracting, producing, and selling the raw resources they transform into whatever it is that generates their wealth. All corporations that produce tangible goods have to acknowledge, at minimum, the financial cost of extraction, production, and sales — it’s called a business model, and if you string lots of them together you get a supply chain.
Many nonprofits exist to fight the willful ignorance of the environmental and human costs of pursuing financial gain above all else. Many other nonprofits exist to deal with the human and environmental consequences of corporate and political decisions that leave millions of people in need of various types of assistance.
So it is unforgivable to me that the most neglected understanding within our own organizations, and between funders and organizations, is that as a sector we:
1. Extract data from our beneficiaries
2. Generate additional data by performing our daily job duties
3. Capture and store data that has been extracted and generated
4. Use data to varying extents — rarely to inform our own work or to provide direct benefit to beneficiaries, and more frequently to support internal management or performance oversight, to provide proof points to external entities about our program efficacy, or to provide information to researchers and analysts
5. Interpret data
And that there is a significant cost in time, energy, effort, and attention to each of these steps.
Invisible Work and Its Consequences
This doesn’t just apply to the nonprofit sector — our whole society is increasingly doing less work that involves transforming one physical thing into another. We are, instead, doing more invisible work of transforming one piece of information into an action or some other piece of information — transforming data into an insight; transforming a collection of data into a report; transforming patterns of data into a long-winded polemic on Medium.
This invisible work has many consequences, but the one I want to focus on for data literacy is the consequence that we behave rather pervasively as if data simply is. That it’s somehow out there just waiting to tell us truths and provide us with enlightened pathways to action.
If you, as a leader or a funder or a board member, believe that data just happens, you are at real risk of neglecting the operational reality that your organization or your grantees must extract, generate, capture, store, and use the data you require. The very visible consequences of this gap are:
- Projects that never get completed because they are undertaken without assessing what else project staff are already responsible for, and how much of their time it already consumes. This is the most common issue with “invisible work.”
- Employee burnout and high turnover from being overloaded, feeling like they’re in constant crisis mode, and having to do data entry instead of helping people
- Inefficient processes that are mostly workarounds, difficulty onboarding new staff, “shadow systems,” and other signals that the required systems aren’t a good fit for the work being done
All of which, of course, leads to people spending more time on administration tasks and less time delivering the organization’s mission.
Combating “the database as punishment for doing your job” syndrome
It doesn’t have to be this way. While most large data systems still fall far short of the simplicity and intuitiveness of apps on our phones, we can still apply our knowledge that All Data Comes From Somewhere to make things better in our organizations.
Very simply and practically, just being aware that everything you ask people to type into a computer has a cost in time and effort is often enough to start moving in a better direction. I like to think of all processes as having a time and energy budget that I’m drawing down every day. For technology, we often have a “good will” budget as well, and it is often very small — if people already hate the database, it’s really hard to get them to try something new in it, because the system they hate is already depleting the good will budget daily, with nothing left over.
Let’s take onboarding a new beneficiary as a common example. This task often requires that a staff member expend mental and emotional energy engaging with the person in a caring and empathetic manner, as well as time and energy searching for any existing records in the database, asking sensitive questions, and keying in the answers (or writing them down to key in later). We’re going to assume that none of these tasks is negotiable — they are all required parts of the job, for many different reasons that are all valid.
Let’s also assume that Funder A wants certain data about our clients that we already collect at intake, and Funder B is offering a large grant but wants some data we don’t currently collect.
To answer Funder B’s questions, we have to:
- Plan to modify our intake process to capture the new data. Whatever technical changes are necessary, someone inside or outside the organization will need to spend time and energy making the intake process accommodate the new data.
- Plan to modify our reporting processes to incorporate the new data. This should include making sure that anything relating to the new fields is very clearly defined — more on that in a later post.
- Account for the increased time and energy it will take to collect this data on intake.
- Account for a transition period: while this data is new, the time-and-energy cost will run higher than what we’ve just budgeted, because people are naturally slower at (and more stressed by) novel situations until those situations become familiar. We’ll have to dip into our good will reserves, if we have any.
- Account for the added burden on the beneficiary from whom we are extracting this data.
The simpler the ask — “just a couple of fields” — the more likely we are to skip these steps, as if the work required to fulfill the request were without cost of any sort. Soon we find we’ve overspent our staff’s time and energy budget, and possibly our budget of beneficiaries’ willingness to allow data extraction.
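To make the budget metaphor concrete, here is a purely illustrative back-of-the-envelope calculation; the numbers are hypothetical, so substitute your own. Suppose Funder B’s “couple of fields” add five minutes to each intake, and your staff complete 30 intakes a week. That’s 150 minutes (two and a half hours) of staff time per week, or roughly 130 hours a year, before counting the reporting changes, the slower-and-more-stressed transition period, or the extra minutes each beneficiary spends answering questions that don’t benefit them directly. Whether that cost is worth Funder B’s grant is a judgment call; the point is that someone should do this arithmetic before anyone says yes.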
If you are in a position of nonprofit leadership, it is likely you have strong financial literacy. You understand how to budget, manage revenues and cashflow, deal with shortfalls so they don’t become chronic, and manage a reserve for a rainy day. This most important rule of data literacy is no different: you have a budget of people’s time and energy that you can keep track of by making the work visible and practicing sound work planning. Because all data comes from somewhere, and that somewhere is your organization’s day-to-day work. That’s really all there is to it!
This article is part of a series on data literacy for nonprofit leaders. Its goal is to share terms and concepts that aid in making good technology decisions when you’re not a technology expert (or even if you’re a little bit tech-phobic).
See the overview here