Bias among programmers and in data sets could leave your digital defences too weak, warns Aarti Borkar, vice-president at IBM Security, IBM’s cybersecurity arm.
Many businesses rely on artificial intelligence (AI) to meet the ever-evolving threat of hackers. Companies hope that by deploying these solutions, their digital defences can respond automatically to new online dangers.
However, Borkar believes it’s vital that companies ensure these solutions are free from bias. Speaking with CNBC, she warned that biased cybersecurity solutions may end up focusing on the wrong things and missing the real threats.
Borkar explained that bias can creep in at three points: the developers, the data and the program itself.
Starting with the people, Borkar argued that if all the developers of a cybersecurity solution come from the same background, they will tend to view the world in a similar way. If they do, they are bound to miss threats simply because they won’t see them. “That is when you start creating tunnel vision and echo chambers,” she added.
Similarly, every AI is only as good as the training data used to develop it. If this data set is biased, then the resulting program will only understand parts of the problem, leaving it blind to potential risks.
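To make that point concrete, here is a deliberately simplified sketch in Python. It is not IBM’s method or any real detection tool, and the event names are hypothetical; it only illustrates how a detector trained on a narrow data set never flags attack types it has not seen.

```python
# Toy illustration (not a real detector): a model "trained" only on one
# attack family inherits that bias and stays blind to everything else.

# Hypothetical, biased training data: every labelled attack is a port scan.
training_attacks = ["port_scan"] * 100
known_attack_types = set(training_attacks)

def is_flagged(event_type: str) -> bool:
    """Flag an event only if it matches an attack type seen in training."""
    return event_type in known_attack_types

# Live traffic includes a phishing event the model has never encountered.
for event in ["normal", "port_scan", "phishing"]:
    print(event, "->", "ALERT" if is_flagged(event) else "ok")

# The port scan is caught, but the unseen phishing event sails through
# unflagged -- the blind spot created by a biased data set.
```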
Borkar suggested that diversity within cybersecurity companies could lead to better products that ultimately protect clients more effectively. “It’s not like the bad guys are waiting for us to learn how to do this. So, the faster we get there, the better off (we are),” Borkar concluded.