There’s been a lot of buzz about ChatGPT and the technology’s potential for applications in customer service, writing, and research, especially given the recent launch of GPT-4. It reminds me of the earlier days of artificial intelligence (AI) and the excitement surrounding its potential. In many ways, the excitement was well deserved, and the predictions about the ways companies might adopt AI were spot-on. Machine learning (ML) and AI help make more tailored ecommerce recommendations, supplement customer service teams with chatbots on places like LinkedIn, and keep us all a little safer on the road with lane guidance and emergency braking.
But as with many innovations, some of the hype went far beyond reality. Robots are not taking over in the classroom, and to my dismay, they are still unable to take on all of our more manual, tedious tasks at home or in the office. AI hasn’t ruined the classroom or replaced the need for humans to develop product strategies, design tools, and provide a human layer on top of those chatbots when more complex problems arise.
So when I started reading the hype surrounding ChatGPT, and now GPT-4, I was intrigued but skeptical.
After getting a chance to play with it, I have to admit it's impressive. Last week our CFO used it to add context to our financial projections, and the output was nearly flawless. There is tremendous potential in applications of this technology that I would love to see unfold.
That said, we’re still a long way from handing things over to AI while we all sit down on a beach somewhere.
When it comes to financial services, there are still a lot of things that neither ChatGPT nor GPT-4 can fix, at least not yet. This is because financial products involve a lot of risk. Financial institutions (FIs) are responsible not only for ensuring the safety of their clients' assets, but also for complying with legal obligations around know-your-customer (KYC) and anti-money laundering (AML) requirements. FIs also have a vested interest in minimizing risk, and consequently fraud, since lost funds come out of their profits. ChatGPT/GPT-4 are not yet prepared to meet these critical risk priorities. Here's why.
1. Compliance checks
Compliance is an essential part of any financial services company, and rightly so, since these businesses handle money on behalf of consumers and other businesses. AI can help monitor for suspicious activity. However, to ensure compliance with confidence, companies also need experts to evaluate changing rules, define strategies and oversee the compliance program so the company meets its requirements.
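To make the division of labor concrete, here is a minimal sketch of the kind of rules layer that flags activity for a human compliance analyst, rather than deciding anything on its own. The $10,000 figure reflects the US currency transaction report threshold; the 90% structuring heuristic and all names here are illustrative assumptions, not a real monitoring program.

```python
from dataclasses import dataclass

CTR_THRESHOLD = 10_000  # US currency transaction report threshold (USD)

@dataclass
class Transaction:
    customer_id: str
    amount_usd: float

def flag_for_review(txns):
    """Return customer IDs whose activity warrants human analyst review."""
    flagged = set()
    for t in txns:
        # A single large cash transaction must be reported.
        if t.amount_usd >= CTR_THRESHOLD:
            flagged.add(t.customer_id)
        # Amounts just under the threshold may indicate structuring
        # (illustrative heuristic only).
        elif t.amount_usd >= 0.9 * CTR_THRESHOLD:
            flagged.add(t.customer_id)
    return flagged
```

Note that the output is a review queue, not a verdict: deciding whether flagged activity is actually suspicious, and filing any required reports, stays with human experts.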
2. Making credit acceptance decisions
Data analytics has long been part of the credit underwriting process, but setting the policies that govern which data feeds those decisions requires human insight. FIs should evaluate their risk priorities to determine which credit thresholds are appropriate for their business. They can then use data from credit bureaus to evaluate whether a customer meets their credit policies.
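The point above can be sketched in code: the policy itself is a small set of human-chosen thresholds, and the automated part is simply applying them to bureau data. All thresholds and field names below are hypothetical examples, not recommended underwriting criteria.

```python
def credit_decision(bureau_report: dict, policy: dict) -> str:
    """Apply a human-defined credit policy to bureau data.

    Returns 'approve', 'review' (routed to a human underwriter),
    or 'decline'.
    """
    score = bureau_report["credit_score"]
    dti = bureau_report["debt_to_income"]
    if score < policy["min_score"] or dti > policy["max_dti"]:
        return "decline"
    if score < policy["review_below_score"]:
        return "review"  # borderline cases go to a person
    return "approve"

# Illustrative policy set by the risk team, not by the model.
policy = {"min_score": 620, "max_dti": 0.43, "review_below_score": 680}
```

The machine only executes the policy; choosing the numbers, and revisiting them as the business's risk appetite changes, is the human insight the section describes.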
3. Providing a seamless user experience
When opening an account, customers expect a seamless experience that can be completed in 10 minutes or less. To enable a hassle-free process without increasing their risk, FIs rely on tools like phone-based identity verification and document verification, which can automatically verify a customer's identity based on information entered during onboarding.
However, when addressing issues after opening an account, customers expect a more personal experience. While many FIs use chatbots to help customers answer basic questions, if a customer suspects they have been a victim of a social engineering scam, they expect to communicate directly with a bank representative to report the problem.
4. Designing new financial products
Developing new financial products requires in-depth knowledge of market trends, customer needs and regulations. It also includes making strategic decisions beyond what data alone can tell us. While ChatGPT/GPT-4 can provide insights and suggestions based on data analysis, it cannot replace the creativity and intuition of a human designer.
5. Addressing a crisis like a fraud attack
While ChatGPT/GPT-4 can help with customer interactions, quick questions and directions to support materials and documents, when a company experiences something like a high-speed fraud attack, it wants direct human expertise to guide it through the process.
The same goes for preventing fraud attacks. Fraud models are useful tools, but to truly keep pace with fraud, companies need AI/ML teams to ensure their policies are up to date, that they have the right datasets, and that they can update their workflows to handle attacks as they occur.
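As one example of a rule a fraud team tunes by hand, here is a minimal sketch of a velocity check: alert on accounts that make too many attempts in a short window. The window and limit are hypothetical parameters; the human work described above is precisely choosing and retuning them as attack patterns shift.

```python
from collections import defaultdict

def velocity_alerts(events, window_secs=60, max_attempts=5):
    """Alert on accounts exceeding max_attempts within window_secs.

    events: iterable of (timestamp_secs, account_id) pairs,
    assumed to arrive in time order.
    """
    recent = defaultdict(list)
    alerts = set()
    for ts, account in events:
        # Keep only attempts inside the sliding window.
        hits = [t for t in recent[account] if ts - t < window_secs]
        hits.append(ts)
        recent[account] = hits
        if len(hits) > max_attempts:
            alerts.add(account)
    return alerts
```

During a high-speed attack, a team might tighten `max_attempts` or shorten `window_secs` within minutes; that judgment call is what no static model, ChatGPT included, makes on its own.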
The future of ChatGPT and GPT-4
ChatGPT, GPT-4 and any future updates will be powerful tools that can help financial services companies in many ways. However, these products are unable to replace some of the more sophisticated, more nuanced parts of running a financial services company.
That said, companies that are able to strike the right balance between automation and human contact are best positioned to achieve long-term success by delivering value to their customers quickly and consistently.
Charles Hearn is Co-Founder and CTO at Alloy.