Third-Party AI Tools Pose Increasing Risks for Organizations, Says MIT Sloan

A new report by MIT Sloan Management Review and Boston Consulting Group has found that third-party AI tools pose increasing risks for organizations. Based on an executive survey of more than 1,240 respondents representing companies in 59 industries and 87 countries, the report revealed that 78% of organizations use third-party AI tools, and more than half rely on third-party tools exclusively. It also found that more than half (55%) of all AI failures stem from third-party tools. A particular concern is "shadow AI," where company leadership may not even be aware of all the AI tools in use across the organization. The report outlined five ways for organizations to mitigate the risks of third-party AI tools, including developing a responsible AI framework and conducting regular audits of AI tools.

The findings underscore the need for organizations to develop responsible AI frameworks and to audit their AI tools regularly in order to mitigate the risks that third-party AI introduces.