Editor's note: If you are interested in this topic, be sure to catch Larry Pruss from SRM discussing AI at the Summit!
Larry will present “Unlocking the Power of AI in Financial Services” during the morning General Session on May 16.

Aspects of artificial intelligence have been introduced gradually into business settings for years, starting with robotic process automation and continuing into fraud detection systems, CRM tools, and other varieties of knowledge management. Given the exponential advancement of large language models and widely available generative AI tools like ChatGPT over the past 18 months, however, policy concerns over the potential misuse of AI technology have intensified just as rapidly.

In March, the European Parliament approved the AI Act, the most substantive legislative attempt to date to establish guardrails and rules of engagement. The EU has a track record of taking regulatory action on technology developments before the United States and other North American countries (think data privacy and open banking). And while the AI Act does not explicitly address financial services, it’s not much of a stretch to extrapolate how its broad strokes could form the model for the financial sector in the US and beyond. As we consult clients globally on the risks and rewards of artificial intelligence, we felt it necessary to share some of the finer points of the EU legislation with our audience.

Looking for Clues

Regulators invariably find themselves playing catch-up. Cast into a reactive role by nature, they rarely engage with industry players early enough to shape efficient, constructive frameworks around objectives that enjoy consensus support at their core. This dynamic can be frustrating for financial institutions, but it doesn’t alter their need to operate within its constraints.

It’s reasonable to assume the AI Act will serve as an EU “umbrella policy,” with the agencies overseeing each sector applying its themes to build specificity for their focus areas. The US rulebook won’t be far afield from the EU’s. For example, the AI Act’s ban on “social scoring systems that could lead to discrimination” certainly sounds applicable to credit scoring models, and less directly to concerns about bias being inadvertently programmed into algorithms or “learned” from large data sets. Likewise, controls on using techniques like facial recognition for remote biometric identification of individuals in public settings may impose boundaries on envisioned enhancements to some fraud detection systems.

The Inescapable Human Factor

Fundamentally, the AI Act’s declaration that “AI should be a human-centric technology” signals a premise on which innovators and regulators can hopefully find common ground. For centuries, it has been human nature to automate, generating efficiencies in performing routine (and increasingly complex) tasks. Yet the human factor remains at the center, whether in determining how to develop and apply these advancements or in managing the behaviors and temptations they invite. Look no further than fraud, where AI opens a new front on which perpetrators and defenders scramble to stay ahead of each other. One of the most significant risks of AI is its ability to destroy a firm’s reputation rapidly. That danger alone should be sufficient cause to maintain a human-centric approach to implementing these tools.

The Bottom Line

Business adoption of artificial intelligence has been more gradual and long-term than recent headlines suggest.
Nonetheless, the rapid ascent of generative AI and widely available tools like ChatGPT has created a clear inflection point, with regulators taking notice and stepping into action. Financial institutions should continue to explore opportunities to deploy AI technology judiciously while staying abreast of developments on the legislative and regulatory front. A partner like SRM can assist in decoding the implications of a watershed development like the EU’s AI Act. In a future blog, we’ll elaborate on the critical interplay between human and machine factors in AI business processes.

SRM is a DakCU Senior CAP Partner that has assisted a number of Dakota credit unions by delivering savings on product agreements and contract optimization. More than 700 financial institutions have selected SRM to advise in areas such as payments, digital banking, core processing, and operational efficiency, unlocking billions of dollars in value and improving clients’ competitive advantage through industry-leading subject matter expertise, a proprietary benchmark database, and proven negotiating skills. Contact Scott Eaton, VP Business Development, or George McDonald, DakCU’s Chief Officer of Strategic Services, or visit srmcorp.com for more information.