In a recent webinar, I spoke about big data and its challenges, and whether applying Scrum would help in solving them. We discussed what big data means and how it differs from traditional data. Unlike traditional RDBMS data, big data involves billions of interacting data sets and requires a new technological approach for storage and processing.
The three V’s of big data are volume, velocity and variety. Are these three factors enough to drive the need, or is there a fourth V? Together they aggregate to provide Value, which may well be that fourth V. We discussed how Google’s tracker for the 2009 H1N1 flu epidemic handled enormous amounts of data, with search queries analyzed to provide accurate, real-time results. With data growing every day, several industries, especially financial services, telecommunications, energy, government and retail, are adopting big data solutions.
Big data is complex. Given that complexity, and given that empiricism is the foundation of Scrum, Scrum can help in solving big data challenges. Whether the technology is Hadoop’s distributed file system (HDFS) or another popular tool (a brief sketch of the processing model behind such tools follows the list below), the Scrum framework is a good fit for work that:
- involves a fair degree of complexity,
- requires innovation,
- requires invention,
- demands product differentiation,
- demands productivity, and
- needs a faster launch to market.
I say that big data needs all of the above.
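To make the technology side a little more concrete, here is a minimal, single-process sketch of the map/reduce pattern that tools built on HDFS (such as Hadoop) use to process data in parallel across many machines. This is plain Python for illustration only, not Hadoop’s actual API; the sample lines and function names are made up.

```python
from collections import defaultdict

# Hypothetical input: at scale, these lines would live in HDFS blocks
# spread across many machines rather than in a single RDBMS table.
lines = [
    "big data needs new tools",
    "scrum helps teams inspect and adapt",
    "big data and scrum",
]

def map_phase(line):
    """Map step: emit (word, 1) pairs independently per line.
    This independence is what lets a cluster run the step on many
    nodes at once."""
    for word in line.split():
        yield word, 1

def reduce_phase(pairs):
    """Reduce step: sum the counts for each word, after all pairs
    with the same key have been shuffled to the same reducer."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

all_pairs = (pair for line in lines for pair in map_phase(line))
print(reduce_phase(all_pairs))  # e.g. {'big': 2, 'data': 2, 'scrum': 2, ...}
```

The point of the sketch is the design choice, not the word counting: because each map call touches only its own line, the work can be split across an entire cluster, which is exactly the kind of approach traditional RDBMS processing does not offer at this scale.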