
AI-native tech startups can survive an economic nuclear winter




Recently I wrote a piece for VentureBeat in which I distinguished between companies that are AI-based at their core and those that simply use AI as a feature or a small part of their overall offering. To describe the former group of companies, I coined the term "AI-native."

As a technologist and investor, the recent market downturn made me think about which technologies are poised to survive an AI winter: one caused by a combination of reduced investment, temporarily depressed stock markets, a potential recession exacerbated by inflation, and even hesitation from customers who once dove into promising new technologies for fear of missing out (FOMO).

You can see where I’m going with this. I believe that AI-native companies are in a strong position to come out of a recession healthy and even grow. After all, many great companies are born during downturns – Instagram, Netflix, Uber, Slack, and Square are a few that come to mind.

But while an unheralded AI-native company could become the Google of the 2030s, it wouldn’t be right — or wise — to claim that all AI-native companies are destined for success.



In fact, AI-native companies need to be particularly careful and strategic about how they operate. Why? Because running an AI business is expensive: talent, infrastructure, and development processes all cost a lot, so efficiency is key to survival.

Do you want to tighten your belt? There’s an app for that

Efficiency isn’t always easy, but luckily there’s an AI ecosystem that’s been brewing long enough to provide good, useful solutions for your particular tech stack.

Let’s start with model training. It’s expensive because models keep getting bigger. Recently, Microsoft and Nvidia trained their Megatron-Turing Natural Language Generation (MT-NLG) model on 560 Nvidia DGX A100 servers, each containing 8 Nvidia A100 80GB GPUs – costing millions of dollars.
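For a sense of scale, the cluster size above translates into a back-of-the-envelope cost estimate. The GPU count comes from the figures above; the hourly rate and run length are hypothetical assumptions, not reported numbers:

```python
# Back-of-the-envelope training cost for an MT-NLG-scale run.
# Cluster size is from the article; the rate and duration are
# illustrative assumptions, not reported figures.
servers = 560
gpus_per_server = 8
total_gpus = servers * gpus_per_server  # 4,480 A100 80GB GPUs

assumed_rate_per_gpu_hour = 3.00  # USD, hypothetical cloud price
assumed_training_days = 30        # hypothetical run length

cost = total_gpus * assumed_training_days * 24 * assumed_rate_per_gpu_hour
print(f"{total_gpus} GPUs -> ~${cost:,.0f}")  # 4480 GPUs -> ~$9,676,800
```

Even with generous assumptions, a single large-model training run lands squarely in the "millions of dollars" range the article describes.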

Fortunately, with advances in hardware and software, costs are falling. And algorithmic and systems approaches such as MosaicML and Microsoft’s DeepSpeed create efficiencies in model training.

The next step is data labeling and development, which [spoiler alert] is also expensive. According to one company that wants to tackle this problem, “data labeling takes up 35 to 80% of project budgets.”

Now let’s talk about building models. It’s a tough job: it requires specialized talent, a lot of research, and endless trial and error. A major challenge in model building is that data is context-specific. Vendors have been filling this niche for a while now: Microsoft has Azure AutoML, AWS has SageMaker, and Google Cloud has AutoML. There are also libraries and collaboration platforms like Hugging Face that make modeling much easier than in previous years.
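As a toy illustration of what these AutoML services automate, consider a brute-force search over model configurations. The scoring function and search space below are hypothetical stand-ins for real train-and-validate runs:

```python
from itertools import product

# Toy sketch of what AutoML automates: trying many configurations
# and keeping the best. evaluate() is a fake stand-in for a real
# train-and-validate cycle; real services also search architectures
# and feature pipelines.
def evaluate(learning_rate, depth):
    # Pretend validation score that peaks at lr=0.1, depth=6.
    return 1.0 - abs(learning_rate - 0.1) - 0.01 * abs(depth - 6)

search_space = {
    "learning_rate": [0.01, 0.1, 0.3],
    "depth": [3, 6, 12],
}

best = max(
    (dict(zip(search_space, combo)) for combo in product(*search_space.values())),
    key=lambda cfg: evaluate(**cfg),
)
print(best)  # {'learning_rate': 0.1, 'depth': 6}
```

The value of the managed services is that this loop – plus data prep, parallel trials, and model selection – runs without a specialist babysitting it.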

Don’t just release models into the wild

Now that you’ve created your model, you need to deploy it. Today, this process is extremely slow, with two-thirds of models taking more than a month to go into production.

Automating the deployment process and optimizing for the broad range of hardware targets and cloud services supports faster innovation, allowing businesses to remain hyper-competitive and agile. End-to-end platforms such as Amazon SageMaker or Azure Machine Learning also offer deployment options. The big challenge here is that cloud services, endpoints and hardware are constantly moving targets: new iterations are released every year, making it difficult to optimize a model for an ever-changing ecosystem.
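To make the moving-target problem concrete, here is a minimal sketch of picking the best compiled variant of a model per hardware target from measured latencies. The target names, variant names, and latency numbers are all illustrative, not benchmarks of any real system:

```python
# Hypothetical benchmark table: (hardware target, compiled variant)
# -> measured latency in milliseconds. Every new GPU generation or
# runtime release adds rows and can change which variant wins.
benchmarks = {
    ("nvidia-a100", "tensorrt-fp16"): 4.2,
    ("nvidia-a100", "onnxruntime-fp32"): 9.8,
    ("aws-graviton3", "onnxruntime-fp32"): 21.5,
    ("aws-graviton3", "tvm-int8"): 12.7,
}

def best_variant(target):
    """Return the lowest-latency compiled variant for a hardware target."""
    candidates = {v: ms for (t, v), ms in benchmarks.items() if t == target}
    return min(candidates, key=candidates.get)

print(best_variant("nvidia-a100"))    # tensorrt-fp16
print(best_variant("aws-graviton3"))  # tvm-int8
```

Automated deployment tooling essentially keeps re-running this selection as the hardware and runtime ecosystem shifts underneath it.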

So your model is now in the wild. What now? Sit back and kick your feet up? Think again. Models break. Continuous monitoring and observability are the watchwords. WhyLabs, Arize AI, and Fiddler AI are among a few industry players who are tackling this head-on.
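As a minimal sketch of the kind of check such monitoring tools productionize, consider flagging drift when a feature's production mean strays too far from its training baseline. The data and threshold here are illustrative, not any vendor's actual method:

```python
from statistics import mean

# Toy drift check: flag a feature when its production mean moves
# more than `tolerance` (as a fraction of the baseline) away from
# the training mean. Real monitoring tools use richer statistics
# across many features, but the shape of the check is similar.
def drifted(training_values, production_values, tolerance=0.25):
    baseline = mean(training_values)
    shift = abs(mean(production_values) - baseline)
    return shift > tolerance * abs(baseline)

train = [10.2, 9.8, 10.1, 10.0, 9.9]
prod_ok = [10.3, 9.7, 10.2]   # close to the training distribution
prod_bad = [14.9, 15.3, 15.1] # the world has shifted under the model

print(drifted(train, prod_ok))   # False
print(drifted(train, prod_bad))  # True
```

When a check like this fires, the usual response is to retrain or recalibrate before prediction quality quietly degrades.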

Aside from technology, the cost of talent can also be a barrier to growth. Machine learning (ML) talent is rare and in high demand. Companies will need to rely on automation to reduce reliance on manual ML engineering and invest in technologies that fit into existing app development workflows so that more DevOps practitioners can join the ML game.

The AI-native company: solutions for all these components

When we talk about surviving a nuclear winter, it is the most hyper-competitive and adaptable companies that come out ahead, and agility is exactly what slow ML implementation undermines. The automation described above is not just one part of adaptability; it is what enables faster innovation, which today is limited by incredibly slow deployment times.

Fear not: AI will mature

Investors who have served their time and paid their dues in the venture capital world gain a different perspective: they have been through hype cycles with technologies never seen before. As the hype takes hold, investment dollars pour in, companies are formed, and new product development heats up. Often it is the quiet tortoise that ultimately beats the investment hares while humbly collecting users.

Inevitably there will be bubbles and busts, and after every bust (in which some companies fail), even the optimistic predictions for the new technology are usually surpassed in the end. Adoption becomes so widespread that the technology simply becomes the new normal.

As an investor, I am very confident that no matter which individual companies are dominant in the new AI landscape, AI will achieve much more than gain a foothold and unleash a wave of powerful smart applications.

Luis Ceze is a venture partner at Madrona Venture Group and CEO of OctoML.

