The infinite monkey theorem professes the idea that a monkey typing for an infinite amount of time would eventually generate the complete works of William Shakespeare, and OpenAI and ChatGPT have unleashed what appears to be a form of this.
ChatGPT, or more generally generative AI, is everything, everywhere, all at once. It feels like magic: ask a question about anything and get a clear answer. Imagine an image in your head and see it immediately visualized. Seemingly overnight, people started proclaiming generative AI either an existential threat to humanity or the most important technological advancement of all time.
In previous waves of technology, such as machine learning (ML), consensus emerged among experts about the capabilities and limitations of the technology. But with generative AI, the disagreement among even AI scientists is striking. A recently leaked memo from a Google researcher suggesting that early GenAI pioneers had "no moat" sparked a fiery debate about the nature of AI.
Just a few months ago, AI’s trajectory seemed to parallel previous trends such as the web, cloud, and mobile technology. Exaggerated by some and dismissed as “old news” by others, AI has had a variety of impacts in areas such as healthcare, automotive and retail. But the groundbreaking impact of interacting with an AI that appears to understand and respond intelligently has led to unprecedented user adoption; OpenAI attracted 100 million users within two months. This, in turn, has led to a frenzy of both zealous statements of support and vehement rebuttals.
Arguably, it is now clear that generative AI is on the cusp of driving major changes in enterprises at a pace much faster than previous technological shifts. As CIOs and other technology executives struggle to align their strategies with this unpredictable yet influential trend, a few principles can guide them through these evolving currents.
Create opportunities for AI experiments
Understanding the potential of AI can be overwhelming due to its extensive capabilities. To simplify this, focus on encouraging experimentation in concrete, manageable areas. Encourage the use of AI in areas such as marketing, customer service and other simpler applications. Prototype and test internally before committing to complete solutions, and keep workflows manageable so you can work through exceptions (such as AI hallucinations) as they arise.
Avoid lock-in, buy to learn
The speed of generative AI adoption means entering into long-term contracts with solution providers is more risky than ever. Traditional category leaders in HR, finance, sales, support, marketing and R&D could face a seismic shift due to the transformative potential of AI. In fact, our definitions of these categories could undergo a complete metamorphosis. Therefore, supplier relationships must be flexible, given the potentially catastrophic cost of locking into solutions that don’t evolve.
That said, the most effective solutions often come from people with deep domain expertise. A select group of these providers will seize the opportunities presented by AI in flexible and inventive ways, delivering returns far beyond those typically associated with enterprise application deployment. Engaging with potential revolutionaries can help address immediate practical needs within your business and illuminate the broad patterns of AI’s potential impact.
Today’s market-leading applications may not be able to run fast enough, so expect to see a wave of startups launched by veterans who have left their mothership.
Enable human + AI systems
Large Language Models (LLMs) will disrupt industries such as customer support that rely on humans to provide answers to questions. Therefore, integrating human + AI systems now will bring important benefits and create data for further improvement. Reinforcement learning from human feedback (RLHF) has been at the heart of accelerating the advancement of these models and will be critical to how well and how quickly such systems adapt to and impact business. Systems that produce data that can power future AI systems will be an asset to increase the pace of the creation of increasingly automated models and functions.
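To make the "systems that produce data" point concrete, here is a minimal Python sketch of a human + AI support workflow that logs each case where an agent edits an AI draft, turning those edits into preference pairs that could later feed fine-tuning or RLHF-style training. All class and field names are illustrative assumptions, not any specific product's API.

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackLog:
    """Collects (prompt, AI draft, human final) triples from a
    human + AI workflow. Records where the human edited the draft
    become preference pairs: the final answer is preferred over
    the model's draft. Names here are hypothetical."""
    records: list = field(default_factory=list)

    def record(self, prompt: str, ai_draft: str, human_final: str) -> None:
        self.records.append({
            "prompt": prompt,
            "ai_draft": ai_draft,
            "human_final": human_final,
            # Only edited responses carry a training signal.
            "edited": ai_draft.strip() != human_final.strip(),
        })

    def preference_pairs(self):
        """Return (prompt, preferred, rejected) tuples for edited records."""
        return [
            (r["prompt"], r["human_final"], r["ai_draft"])
            for r in self.records
            if r["edited"]
        ]

log = FeedbackLog()
log.record("Where is my order?", "It shipped.", "It shipped on May 2 via UPS.")
log.record("Can I get a refund?", "Yes, within 30 days.", "Yes, within 30 days.")
pairs = log.preference_pairs()
```

The design choice worth noting: the unedited record is deliberately excluded, because an unchanged draft tells you the model was already adequate, while an edit encodes exactly what the human preferred instead.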
This time, believe in a hybrid strategy
With cloud computing, I ridiculed hybrid on-premise and cloud strategies as mere cloudwashing; they were feeble attempts by traditional vendors to maintain relevance in a rapidly evolving landscape. The remarkable economies of scale and pace of innovation made it clear that any applications that attempted to bridge both domains were destined to become obsolete. The triumphs of the likes of Salesforce, Workday, AWS, and Google firmly dispelled the notion that a hybrid model would become the industry’s dominant paradigm.
As we enter the age of generative AI, the diversity of opinion among the deepest experts, coupled with the transformative potential of the information involved, suggests it may be premature, even dangerous, to commit entirely to public providers or any other single strategy.
With cloud applications, the shift was simple: we moved the environment in which the technology ran. We did not give our cloud providers unrestricted access to the sales and financial data within those applications. With AI, on the other hand, information becomes the product itself. Any AI solution thirsts for data and requires it to evolve and improve.
The battle between public and private AI solutions will depend heavily on the context and technical evolution of model architectures. Business and commercial endeavors, combined with the importance of real and perceived progress, warrant public consumption and partnerships, but in most cases the gen-AI future will be a hybrid – a mix of public and private systems.
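One way such a hybrid might look in practice is a simple routing layer that keeps sensitive prompts on a private deployment and sends everything else to a public provider. The sketch below is a deliberately naive illustration of that pattern; the sensitivity markers, deployment names, and keyword-matching rule are all hypothetical placeholders, not a recommended production design.

```python
# Illustrative hybrid router: sensitive data stays on a private model,
# everything else may use a public API. Markers and deployment names
# are hypothetical; a real system would use a proper classifier
# and data-governance policy, not keyword matching.

SENSITIVE_MARKERS = ("salary", "ssn", "revenue", "contract")

def route_request(prompt: str) -> str:
    """Return which deployment should handle this prompt."""
    lowered = prompt.lower()
    if any(marker in lowered for marker in SENSITIVE_MARKERS):
        return "private-model"   # self-hosted or VPC-deployed
    return "public-api"          # external provider

dest_a = route_request("Summarize Q3 revenue by region")
dest_b = route_request("Draft a blog post about our product launch")
```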
Validate the limitations of AI – repeatedly
The generative AI capable of writing an essay, making a presentation or setting up a website about your new product differs significantly from the predictive AI technology that drives autonomous vehicles or diagnoses cancer through X-rays. How you define and approach the problem is a critical first step that requires understanding the scope of the possibilities offered by different AI approaches.
Consider this example. If your company tries to use past production data to predict your ability to meet next quarter’s demand, you get structured data as input and a clear goal to assess the quality of the forecast. Conversely, you can instruct an LLM to analyze the company’s emails and prepare a two-page memo about the likelihood of meeting this quarter’s demand. These approaches seem to serve a similar purpose, but are fundamentally different in nature.
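The contrast above can be sketched in a few lines of Python. The first function stands in for predictive AI (structured numbers in, a scoreable number out); the second shows the generative framing (unstructured text in, a free-text request whose output is a narrative, not a measurable forecast). The moving-average placeholder, function names, and sample figures are all illustrative assumptions.

```python
# Hypothetical contrast between the two problem framings described above.

def predictive_forecast(past_units: list[float]) -> float:
    """Predictive framing: structured input, a numeric output whose
    quality can be scored against what actually happens. A trivial
    3-period moving average stands in for a real model."""
    recent = past_units[-3:]
    return sum(recent) / len(recent)

def generative_memo_prompt(emails: list[str]) -> str:
    """Generative framing: unstructured input, a free-text instruction.
    The output is a memo to be judged by a reader, not a prediction
    that can be scored against ground truth."""
    joined = "\n---\n".join(emails)
    return (
        "Read the internal emails below and write a two-page memo on the "
        "likelihood of meeting this quarter's demand:\n" + joined
    )

forecast = predictive_forecast([900.0, 1000.0, 1100.0])
prompt = generative_memo_prompt(["Supplier delay on part X.", "Line 2 back up."])
```

The difference in return types is the point: one approach yields a number you can validate against reality next quarter; the other yields text whose usefulness depends on human interpretation.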
The personification of AI makes it more recognizable, more fascinating or even more controversial. This can add value and facilitate tasks that reliable forecasts alone cannot handle. For example, asking the AI to come up with an argument as to why a prediction will or won’t come true can encourage new perspectives on questions with minimal effort. However, it should not be applied or interpreted in the same way as predictive AI models.
It is also important to anticipate that these boundaries may shift. The generative AI of the future could very well produce the first – or final – versions of the predictive models you will use for your production planning.
Demand that leadership repeat and learn together
Leadership is paramount in crises and rapidly changing environments. Experts will be needed, but hiring a management consulting firm to deliver a one-time snapshot of AI’s impact on your business will diminish, rather than build, your ability to navigate this change.
Because AI evolves so quickly, it attracts much more attention than most new technologies. Even for companies in industries outside of high-tech, C-suite executives regularly see AI demos and read about generative AI in the press. Make sure you update your C-suite regularly on new developments and potential impacts on core functions and business strategies, so they connect the right dots. Use demos and prototyping to demonstrate concrete relevance to your needs.
Meanwhile, CEOs should drive this level of engagement from their technology leaders, not only to scale learning across the organization, but also to assess the effectiveness of their leadership. This collective and iterative learning approach is a compass for navigating the dynamic and potentially disruptive landscape of AI.
For centuries, the quest for human flight remained grounded as inventors became fixated on mimicking the flapping wing designs of birds. The tide turned with the Wright brothers, who reformulated the problem and focused on fixed-wing designs and the principles of lift and control rather than mimicking bird flight. This paradigm shift led to the first successful human flight.
In the field of AI, a similar reframing is vital for every industry and function. Companies that see AI as a dynamic field ripe for exploration, discovery and adaptation will find their ambitions take off. Those that approach it with strategies that worked in past platform shifts (cloud, mobile) will be left watching the evolution of their industries from the ground.
Narinder Singh co-founded Appirio and is currently the CEO of LookDeep Health.