Everyone’s heard of cloud computing. But for those of us from San Francisco, where we live much of our lives in low clouds, it’s fitting to discover that much of our data is also lost in a fog.
Imagine your business analysts. For them, most of your organization’s data is obscured. They can only see it when they are in close proximity to it or when it is so fresh that they remember exactly where it is:
Proximity: Analysts can identify valuable data within their immediate area of expertise, but the further data sits from that sphere of understanding, the harder it is for them to judge what it means or whether it has value.
Freshness: The more time your analysts spend studying the data for a specific project, the easier it is for them to see the data’s value. The fog lifts, but only for a while. Once the project is completed, their memory of the data begins to fade, and the data itself begins to change. The fog returns.
The problem is that proximity and freshness work for only a very small amount of data. Meanwhile, the variety, volume, and velocity of incoming data continue to grow, and organizations become overwhelmed trying to make sense of it all. As one of our customers recently put it, “We have 100 million fields of data. How can anyone find anything?” We agree.