April 15, 2024
AI

What does 2024 hold for AI in Europe? Predictions from founders and investors

The last 12 months have been a year of AI frenzy, and while 2024 is likely to see some players move even faster, there will also be some crashes.

It has been a year of big models, with major advances like the release of OpenAI’s GPT-4 and Google DeepMind’s Gemini demonstrating the brute-force results of pouring ever more data and computing power (aka “compute”) into new AI models.

It has also been a year in which Europe has largely watched from the sidelines as Silicon Valley startups have secured large funding commitments from the likes of Amazon and Microsoft.

But 2024 could be the year when AI models for enterprise use cases become smaller and less dependent on the huge volumes of data needed for general-purpose models.

That’s a recurring theme in Sifted’s AI predictions for 2024, which features the views of some of Europe’s top AI founders and investors.

They also predict a major rise in autonomous “agents,” shifts in the debate over AI safety, and at least one casualty among Europe’s large language model (LLM) companies.

Here’s what 2024 has in store.

Amelia Armour, partner at Amadeus Capital Partners

More AI at the edge

Emerging high-performance but smaller AI models, which require minimal computing power, will be deployed at a faster rate at the edge of the network over the next year (on physical devices, for example). This will accelerate industrial automation and increase productivity, for example in autonomous robots in distribution centers.

Supercharged data centers

We can expect data centers to focus on network speeds to keep pace with increasing levels of AI computing while controlling their power needs and associated heat generation. Hardware innovations such as energy-efficient photonic chips, which use light for high-speed data movement, and more efficient cooling approaches are areas to watch for growth next year.

Peter Sarlin, CEO and co-founder of Finnish AI startup Silo AI

Efficient and open-source models will win in the enterprise

Most companies will conclude that smaller, cheaper, more aligned and more specialized models make more sense for the clear majority of LLM use cases, and open-source models will empower them to get there.

AI and software become one

Most of the use and value creation with LLMs will occur with models built into software products. This LLM adoption will happen across a wide range of existing vertical and horizontal software vendors. No artificial general intelligence (AGI) will be achieved; instead, LLMs will power new features in existing software products, and the focus will be on the jobs AI can get done.

Nathan Benaich, founder of Air Street Capital

Deepfakes will play a role in elections

So far, the evidence that deepfakes have had an impact on politics has been weak. However, with capabilities advancing rapidly, we believe there will be at least attempts to deploy them maliciously during the 2024 presidential election. Regardless of whether they actually affect the outcome, I predict this will spur at least one major regulatory investigation.

New financing structures for compute

I have long predicted that we will need to find a more sustainable way to fund compute-intensive startups. Large rounds at high prices with significant dilution risk becoming the norm. We’ve already started to see later-stage companies raising debt financing with their graphics processing units (GPUs) as collateral. I think at least one forward-thinking financial institution will try to normalize this.

A cooling of the debate on AI governance

I think the conversation about global governance is going to start running out of steam. While the UK AI Summit was a significant moment, I simply don’t think there is any real alignment between Western democracies and China beyond superficial opposition to Armageddon. I don’t foresee the follow-up meetings causing the same stir or prompting any concrete action from governments.

Rick Hao, Director of Speedinvest

AI safety will become an area of investment opportunity

AI safety entered the Overton window this year. As companies rush to adopt the latest capabilities, issues of transparency, trust and governance are becoming top of mind for more and more people. However, we predict that there will continue to be underinvestment in AI safety compared to capabilities. We have already made one bet in this space and will do more in 2024.

The pace of AI innovation will accelerate, leading to more striking breakthroughs

Everyone agrees that AI capabilities are advancing rapidly. However, we believe this advancement is itself accelerating, and that the pace of change is faster than most people expect. The materials science breakthrough published by Google DeepMind is a perfect example of the kind of surprise that frontier advances in AI continue to bring. In 2024, we can expect more of the same.

AI agents will come of age

We expect agency (the ability of AI to carry out actions autonomously, rather than simply responding to prompts) to remain a hot topic, especially when it comes to products. The more agentic a tool is, the more economically useful it will be, and we expect this trend to continue in 2024.

Dmitry Galperin, general partner of Runa Capital

Multimodal AI will make a splash

In 2024, the big thing in GenAI will be multimodal models, which can handle different types of content, such as sound, text, images and video, in a single user experience. While these advancements will captivate the general public, adopting the technology in business environments may face challenges around hallucinations, ethical concerns and security.

AI will have to become more reliable

AI observability, data quality assurance and protection will become critical factors in driving widespread adoption of AI. The substantial volume of data and parameters involved in AI training is likely to drive the adoption of novel computing technologies, ranging from photonic interconnects that eliminate the data-transfer bottlenecks inherent in the classical von Neumann architecture to the exploration of alternative architectures such as neuromorphic and analog computing.

Vanessa Cann, CEO and co-founder of German GenAI startup Nyonic

More specialized models

Current LLMs are limiting business use cases. They are too generic, essentially at the level of a high school graduate. In 2024, we will see more specialized models, trained on industry knowledge and jargon and tailored to different industry verticals and tasks, allowing companies to achieve much better results and paving the way for more complex use cases. This will become the foundation of the technological transformation of our economy and workforce.

Companies that use AI will get ahead

Companies that implement foundation models have a clear competitive advantage. With the immense increase in the power of foundation models that we will continue to see in 2024, companies can significantly improve their productivity, efficiency and speed of innovation; depending on the industry, recent studies predict improvements of up to 70% (McKinsey). It’s been a long time since a technology had such a profound impact on how businesses operate.

Rasmus Rothe, co-founder of Merantix

The environmental impact of AI will become a dominant topic

Frankly, the impact of AI on the environment has been one of the most overlooked topics during the boom of 2023. But I predict that will no longer hold true. The transformers that power AI models consume enormous amounts of energy to train and run. In 2024, as these systems become more embedded in our economy, the explosive growth of AI will come into tension with societal concerns around climate change and energy consumption. That will put economic and political pressure on AI companies to create better model architectures, ones that can be trained and run with less data and energy. We need greater efficiency not only for the environmental impact, but also to reduce costs for customers.

A major LLM company will shut down or go through some kind of fire sale

I’m not sure which one, but some high-flying company will go under due to its own computing costs, incorrect revenue forecasts or monetization models, or a collapse in demand as competitors that perform better in specific niches gain ground. Open-source AI will become stronger, and existing models are already robust enough for many applications and use cases. The fact is that there is immense competition in the LLM space from big tech companies with deep pockets, better distribution and more computing power. I expect some highly touted LLMs to cease to exist next year, or to be snapped up by others for a fraction of their current valuation.
