January 2025 — Looking back, looking forwards

Happy New Year! 2025 is shaping up to be another year of increasingly feeling like I live in the future.

Our newsletter is a bit longer than usual to pack in a summary of some of the most important tech updates of 2024, and a look forward to what we might expect in 2025. Make sure you click the robot links to see the associated videos - it’s worth it!

MISSION+ has also opened its office up to Fractional CTOs as a co-working space. We’ll see how that goes - perhaps we’ll do some small office events soon as well!

2024 Summary: Looking back, looking forwards

Looking back over the last year, it’s pretty crazy to see all the changes that we have now normalised and integrated into our lives. I will attempt to summarise some of the biggest changes I’ve seen and predict a bit of 2025, but I’ve probably missed a lot - it’s been an insane year.

Large Language Models

At the beginning of the year, one of the biggest limitations for AI large language models (LLMs) was the “context window” - the amount of text that could be processed in one go. In February, Google’s Gemini 1.5 Pro model smashed that ceiling - allowing hundreds of pages to be analysed.

Since then it’s just been a non-stop avalanche of foundational model improvements. From Claude 3.5, with an amazing ability to write code, to Meta’s Llama 3.2 which runs on your own infrastructure, we’ve seen the bar continuously raised. But things didn’t slow down for Christmas; oh no - we’ve seen the two best models of the year released in the last two weeks!

The first is OpenAI’s o3 model. This model “reasons”; it doesn’t just produce a single answer, but internally reviews multiple potential pathways and determines the one most likely to result in the correct answer (and reduce hallucinations). The second is DeepSeek V3 from a China-based lab. Performance is comparable to the best commercial models, but like Llama it is completely open! This means that it can be run on your own infrastructure - which is a “must” for many regulated companies that are concerned about data privacy.
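
For anyone wondering what “run it on your own infrastructure” actually looks like, here’s a minimal sketch using Hugging Face’s transformers library - the model ID is just an illustrative placeholder for whichever open-weight model (Llama, DeepSeek, etc.) your licence and hardware allow:

```python
# Minimal sketch: running an open-weight model on your own hardware with
# Hugging Face transformers. The model ID is illustrative - swap in whichever
# open model (Llama, DeepSeek, etc.) your infrastructure and licence allow.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B-Instruct",  # assumed example; gated, needs HF access
    device_map="auto",                          # use a local GPU if one is available
)

prompt = "Summarise our data-privacy policy in one sentence."
result = generator(prompt, max_new_tokens=100)
print(result[0]["generated_text"])
```

Nothing leaves your own servers, which is the whole point for those regulated companies.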

On the other hand, OpenAI’s previously biggest competitor, Mistral AI, kind of just… disappeared? I hope they re-emerge with something mind-blowing.

Multi-Modal Models

This year also saw models move beyond text alone to accept audio and image input as well. In our experiments, text can be retrieved more accurately from a picture of a complex table than from a pure-text representation of the same table! Truly incredible. This of course culminated in the release of Google’s Live API as part of Gemini 2.0, through which your AI assistant can see what you see on screen and comment or suggest accordingly. Pair programming ftw.
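
As a concrete example of the table trick, here’s roughly what that looks like with Google’s google-generativeai Python SDK - the model name and the screenshot filename are just illustrative assumptions:

```python
# Sketch: asking a multimodal model to read a table out of an image.
# Assumes the google-generativeai package; model name and filename are
# illustrative placeholders.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")            # placeholder
model = genai.GenerativeModel("gemini-1.5-flash")  # example model choice

table_image = Image.open("quarterly_results.png")  # hypothetical screenshot of a complex table
response = model.generate_content(
    [table_image, "Extract this table as CSV, preserving headers and merged cells."]
)
print(response.text)
```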

No summary of AI in 2024 is complete without mentioning Google’s NotebookLM. This tool takes in documents and outputs study guides, FAQs, or most commonly: podcasts. Personally I find podcasts incredibly annoying, but each to their own. NotebookLM’s podcast analysing human philosophy is suitably terrifying.

Coding Assistants

Writing code with an AI assistant isn’t that new, although the leading application Cursor certainly improved over the year (along with the launch of a competitor, Windsurf). Skipping the coding and going straight to building and integrated deployment came of age in 2024. Replit Agent started off a bit ropey, but has dramatically improved - I suspect it might win the battle if the trajectory continues. Lovable (formerly GPT Engineer) is the other main contender. Both provide a full environment to prompt, test the results, tweak, then launch - all within the same platform. Try them out, and you can share the same experience “Programmers are also human” had (joking, it’s gotten a lot better. Mostly).

AI Agents

So where will these models go in 2025? If we use the metaphor of a car, the foundational model is the engine, and the integrated tooling around the model is the accelerator, brake and steering wheel. At some point, fitting a bigger and bigger engine is pointless if the vehicle controls are not sensitive enough to channel that power into something useful.

For us, the accelerator and the steering wheel are the agents (AI-driven applications that have memory and access to their environment, and can make multi-step decisions to achieve an objective), and the braking system is the AI guardrail system we put around them. Right now, agentic frameworks are manifold and very fragmented. 2025 is likely to see front-runners emerge. Guardrail frameworks are even more in their infancy - that one might be 2026.
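
To make the car metaphor concrete, here’s a toy sketch in plain Python (deliberately not tied to any of those fragmented frameworks) of an agent loop: memory, a couple of tools for environment access, multi-step decisions, and a guardrail acting as the brakes:

```python
# Toy sketch of the car metaphor: an agent with memory, tools (its access to
# the environment), multi-step decisions, and a guardrail as the brakes.
# Entirely illustrative - not any specific agent framework.
from dataclasses import dataclass, field

def search_docs(query: str) -> str:
    return f"(stub) top result for '{query}'"

def send_email(to: str, body: str) -> str:
    return f"(stub) email sent to {to}"

TOOLS = {"search_docs": search_docs, "send_email": send_email}
ALLOWED_TOOLS = {"search_docs"}  # guardrail: the agent may read, but not act externally

@dataclass
class Agent:
    objective: str
    memory: list = field(default_factory=list)  # everything the agent has seen and done

    def decide_next_step(self):
        # A real agent would ask an LLM to choose the next tool given the
        # objective and its memory; here the plan is hard-coded for clarity:
        # gather information first, then try to act on it.
        if len(self.memory) == 0:
            return "search_docs", {"query": self.objective}
        if len(self.memory) == 1:
            return "send_email", {"to": "team@example.com", "body": "Findings attached."}
        return None  # objective considered met

    def run(self, max_steps: int = 5) -> list:
        for _ in range(max_steps):
            step = self.decide_next_step()
            if step is None:
                break
            tool_name, args = step
            if tool_name not in ALLOWED_TOOLS:  # the braking system
                self.memory.append((tool_name, "BLOCKED by guardrail"))
                continue
            self.memory.append((tool_name, TOOLS[tool_name](**args)))
        return self.memory

print(Agent(objective="2025 agent framework landscape").run())
```

Here the guardrail lets the research step through but blocks the outbound email - the braking system doing its job.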

Interestingly, the foundational model labs have realised that they need to play in this space as well, or become interchangeable engines. Things like Anthropic’s Model Context Protocol (for Claude) and Google’s Project Astra & Project Mariner show they hope to capture the whole value chain. However, my guess is the winner will be external - people want interchangeable engines rather than lock-in.

Video Generation

If 2023 was the year of progress in AI image generation, 2024 was the year for video. Hailuo AI is probably my favourite right now, but OpenAI’s Sora is finally publicly accessible and can’t be discounted. Luma Labs has also released lots of freely available tools, which I’m pretty sure is responsible for this monstrosity. China in general has produced a lot of GenAI video labs - better source data from TikTok and the like, or a disregard for controlled-IP input video?

I see an interesting progression from “artistic videos” to “physics representations”. See the demo videos from the Genesis platform, a combined project across multiple labs that models physics and creates interactive 3D worlds from what it dreams up. When I first saw the video I thought “meh, looks faked”. Then I saw the labs behind it:

Oh. Oh indeed. If this is real, then this is going to be huge for 2025.

AI Hardware

All this talk about AI segues neatly into the question of who is powering it all. Mostly, Nvidia.

It has been a good year for them. Their market cap is now equivalent to almost 12% of US GDP, and over 3% of global GDP. This has been driven mainly by demand for their GPUs for AI workloads. But will this change?

At its core, their dominance is due to the proprietary CUDA programming platform, which lets chips originally built for graphics processing (the “GP” in GPU) be repurposed for any parallel computing problem, like Monte Carlo simulations or machine learning/AI. CUDA is the dominant standard, but big tech is trying to displace it. July brought the Apple Intelligence model, which appears to have been trained on Google’s alternative TPU architecture. The other players aren’t ignoring this either, with Amazon announcing their LLM-focused Trainium chip at re:Invent this year, and Meta similarly releasing their MTIA chips, which use OpenAI-backed Triton instead of CUDA.
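
To see what “any parallel computing problem” means in practice, here’s an illustrative sketch: the same Monte Carlo estimate of pi, written once and run on the CPU with NumPy and on an Nvidia GPU with CuPy (a NumPy-compatible library that sits on top of CUDA) - assuming you have an Nvidia card and the cupy package installed:

```python
# Sketch: the same Monte Carlo pi estimate on CPU (NumPy) and on an Nvidia
# GPU (CuPy, which targets CUDA under the hood). Purely illustrative;
# the GPU path needs an Nvidia card and the cupy package.
import numpy as np

def estimate_pi(xp, n=10_000_000):
    # xp is the array module: numpy for CPU, cupy for GPU
    x = xp.random.random(n)
    y = xp.random.random(n)
    inside = (x * x + y * y) <= 1.0   # points landing inside the unit quarter-circle
    return 4.0 * float(inside.sum()) / n

print("CPU:", estimate_pi(np))

try:
    import cupy as cp
    print("GPU:", estimate_pi(cp))    # identical code, CUDA-backed arrays
except ImportError:
    print("CuPy not installed - GPU path skipped")
```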

An interesting nuance of the non-Nvidia approach is that training a model is one thing, but using the model (“inference”) is another. We are much less beholden to CUDA at the inference level, so as models mature and agentic usage becomes the battlefield, we’re likely to see Nvidia’s grip begin to soften. This is not financial advice 😉 💸.
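
As a tiny illustration of how portable inference has become, here’s a sketch using the llama-cpp-python bindings, which run quantised open models on ordinary CPUs (or Apple Silicon) with no CUDA in sight - the model file path is a placeholder for whichever GGUF file you’ve downloaded:

```python
# Sketch: LLM inference with no CUDA involved, via llama-cpp-python.
# The GGUF path is a placeholder for whichever quantised open model you
# have downloaded; this runs on a plain CPU (or Apple's Metal backend).
from llama_cpp import Llama

llm = Llama(
    model_path="models/my-open-model-q4.gguf",  # placeholder path
    n_gpu_layers=0,                             # force CPU-only inference
    n_ctx=2048,                                 # modest context window
)

output = llm("Q: Why is inference less tied to CUDA than training? A:", max_tokens=96)
print(output["choices"][0]["text"])
```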

The need for more and more powerful infrastructure requires bigger and bigger data centres to host it. These in turn need ever more energy to power them; indeed, Zuck recently said that energy, not compute, will be the bottleneck to progress. That explains the announcements from Meta, Google, Microsoft, Amazon and Oracle that they’re getting into nuclear power!

Humanoid Robotics

With all the focus on AI, the revolution in humanoid robotics has gone relatively unnoticed by many: Boston Dynamics’ Atlas is no longer the only game in town (although this Atlas video is flippin’ amazing). China has come out swinging with Unitree’s G1, EngineAI’s SE01 and Pudu’s D9 (although I find some of the videos a little too good to be true). Figure 02’s hands are very impressive, featuring 16 degrees of freedom. And who can forget Tesla’s Optimus, or as Musk puts it, “the biggest product ever of any kind”.

1X’s NEO is Europe’s contribution to the contest, but I don’t know - there’s something that screams “home intruder” about that tracksuit. Even that is less creepy than Clone Robotics’ Westworld-like artificial muscles. However, the original Atlas is discontinued - I’m not ashamed to admit that this goodbye video brought a tear to my eye.

I see 2025 continuing this rapid progress, but probably not impacting us day to day yet. Maybe the Unitree Go2 ‘dog’ (which is commercially available at only $1,600 (?!?!)) will be seen ‘in the wild’, but I can’t imagine it being used for much beyond novelty yet. How did China get so good so quickly? We made it the world’s factory - let’s not be surprised when they use those skills to build.

Space: The Final Frontier

2024 will surely be seen as an important year in the race towards colonising space. From Starship’s Super Heavy booster being caught by its launch tower (important because it allows for immediate refuelling and relaunch), to Blue Origin also managing to land a booster, to China’s most ambitious year yet, retrieving rock from the far side of the Moon: the odyssey towards multiplanetary life is on!

2025 Predictions

What other trends do I see in 2025? As AI application development gets adopted, the “haves” and the “have nots” will get further apart. I enjoy how Palantir co-founder Joe Lonsdale described this process (albeit 10 years ago!): two diverging lines with an elastic band connecting them. Eventually the tension gets too much, the band snaps, and the ‘have nots’ simply drop away. Something to think about.

On that note: in 2025, enterprises will continue to struggle to adopt AI - and will get stuck in PoC hell as I previously described. Those that embrace the risk and uncertainty and move forwards will be at a huge advantage, but it can’t just be lip service.

If you look back at the above, Google repeatedly comes up. They had a rocky start, but that’s now behind them. So a somewhat sad prediction might be that Google wins the foundational model war, Apple wins the consumer distribution war with Apple Intelligence, and Meta wins the open weight war. Big tech, yay ☹️.

My final observation is that the greatest medium-term impact will be a trend towards smaller, niche products. In a world where AI helps us build applications more quickly (and can rapidly create a usable, if shoddy, replica of any SaaS tool), our differentiation point will be understanding a specific business or domain. Expect team sizes to shrink, expectations to expand, and B2B products to become hyper-personalised. This has an impact on the junior-to-senior career path, but I’ll write about that another day.

I hope this summary of 2024 and extrapolation for 2025 was interesting. If I had to pick two takeaway themes, they would be “Google” and “China”. But who knows! We’ll do a review in December and look back 🙂. Until then, enjoy the breakneck pace of innovation.

Interesting Articles

  • Meet Willow, our state-of-the-art quantum chip - I’m not a quantum guy at all, but I found this blog post from Google Quantum AI about their latest chip super educational (even for someone with very little background).

  • MAS Artificial Intelligence (AI) Model Risk Management - interesting, but nothing particularly new. It focuses more on TradAI (e.g. fraud monitoring), and the GenAI section basically just says “yeah, damn this stuff is cool, but no real usage yet”. Even so, it’s super cool that this happens at the regulator/country level, full stop!

  • The Journey - My essay on The Journey that product builders must take to get their concepts from idea to successful reality. There are variations, but for most people the journey has a common theme…

Until next month,

Ned & the MISSION+ team

Enjoyed this? Share this with a friend who might too!