How big is big data? | Insurance Business America
At risk of becoming a buzzword, “big data” is one of the most common phrases heard in the risk management and insurance industries today, invoked liberally by companies, especially tech-oriented ones.
But what exactly is big data? And how does one differentiate it from ordinary, run-of-the-mill data?
Mark Tainton (pictured above), global head of analytics at Ventiv, defines big data as a term used to describe the massive collection of data, whether structured, semi-structured, unstructured, or raw.
“According to Gartner, big data comprises the Five Vs: velocity, volume, value, variety, and variability,” Tainton told Corporate Risk and Insurance. “Essentially, big data sets are so big and complex that it becomes very challenging to process them using traditional data processing methodologies versus the day-to-day processing of data when using Excel for a few thousand rows of data.”
CloudTweaks estimates that about 2.5 quintillion bytes of data are created every day. Around 80% of this data is unstructured and gathered from many different sources, such as weather sensors, social media posts, digital photos, manufacturing equipment and more.
“So, think of it this way – a cup of water is data, and a raging river is big data,” Tainton said.
According to Tainton, the role of big data varies across all industries, and its usage is growing exponentially. This is especially true in risk management, due to the presence of numerous risks and even more numerous factors influencing those risks. Thus, it is important for risk managers to identify trends from massive collections of data, but that is much easier said than done.
“Maryville University’s industry outlook for business data analytics states that data created and gathered is expected to reach 180 trillion gigabytes by 2025, and the ability of risk managers to assess and identify potential threats that face organizations will be strongly aided by big data and advanced analytics techniques,” he said.
To make sense of and process big data, technologies such as AI and machine learning are important tools for risk and insurance managers. According to Tainton, these technologies allow risk managers to predict the impact, severity and frequency of claims, as well as identify trends and potential risks such as fraud. AI also enables straight-through processing of claims, which reduces litigation costs.
Other benefits he mentioned include:
- Preventing recurring claims
- Creating opportunities to reduce insurance premiums
- Allowing proactive assessment and management of risks
- Predicting and forecasting risk scenarios for better risk management decision-making
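Tainton’s point about predicting claim frequency and severity, and flagging potential fraud, can be illustrated with a toy sketch. The claim figures below are invented for illustration only; a real deployment would apply machine learning models to far larger datasets:

```python
# Toy sketch (hypothetical data): estimating expected annual loss from
# historical claim frequency and severity, and flagging statistical
# outliers as potential fraud signals.
from statistics import mean, stdev

# Hypothetical claim amounts (USD) observed over one policy year
claims = [1200, 950, 1100, 8700, 1050, 990, 1150]

frequency = len(claims)             # claims per year
severity = mean(claims)             # average cost per claim
expected_annual_loss = frequency * severity

# Flag claims more than two standard deviations above the mean
mu, sigma = mean(claims), stdev(claims)
suspicious = [c for c in claims if c > mu + 2 * sigma]

print(f"Frequency: {frequency} claims/year")
print(f"Severity: ${severity:,.2f} per claim")
print(f"Expected annual loss: ${expected_annual_loss:,.2f}")
print(f"Potential fraud flags: {suspicious}")
```

Even this crude frequency-times-severity estimate shows the principle: the more historical data available, the more reliable the estimates become, which is why the scale of big data matters for risk managers.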
However, many risk managers are lagging in adopting analytics and big data capabilities. A 2017 study by Airmic found that more than 50% of its members consider their use of data limited, even as they recognize analytical literacy as a key capability for the modern risk manager.
“Adopting and leveraging big data is complex, and, as Simon Sinek states, you always need to start with ‘why?’” Tainton said. “Having a clear business use case and/or strategy for big data is critical as the associated costs can be high with relatively low returns without one.”
A report from ClearRisk found that organizations capitalize on only 50% of their data, on average, for decision-making, and that 70% of late adopters base their decisions on gut feeling or experience.
“Big data strategies will intrinsically become a part of the risk management fabric,” Tainton said. “The sooner risk managers adopt leveraging big data, the sooner they will be able to make superior risk-driven decisions.”