
AI Landscape: AGI Dreams & ASI Reality


I use AI every day now. Interacting with AI, specifically LLMs, about simple things is good practice for understanding how 'smart' it is. This article covers the current state of the art from my perspective; the interaction with ChatGPT at the end of it was the original inspiration. We are only at the beginning of this technology. I don't consider any of our current technology very advanced: yes, the iPhone is great, but it's just a cousin of the steam engine compared to where we're going. If you've been paying attention, you may have noticed a recent shift in AI terminology. Perhaps it's just me, but I've noticed the conversation changing from AGI (Artificial General Intelligence) to ASI (Artificial Super Intelligence). As a yardstick, I would say that current state-of-the-art AI is where the Internet was in 1996 compared to what it is now. In this article I'll go through my best understanding of the state of play and why the narrative has switched to ASI.

Is AGI too hard?

General intelligence is what you have (assuming you are human). It's a level of intelligence that allows you to adapt to basically anything and therefore potentially do anything. Will we achieve AGI in computers? I don't know. I don't think we're getting there in the next 5 years, but everyone's got an opinion on that. We're certainly on the road; I've just no idea how long it is.

If I had to make an argument for it happening within 5 years, my BOTE (back-of-the-envelope) calculation would go something like this. The Internet, the most recent related paradigm-shifting technological development, took 40 years to get from university network to where it is now. On that journey we observed exponential growth patterns like 1, 2, 4, 8, 16, 32, 64, 128… (Moore's law etc.). If we accept that we will build big things exponentially faster now, then I propose using a root of the last big thing's timeline as a guide. I'd use the square root as a simple measure, because it's related to the previous patterns and it's a proportional calculation that is as good as any. The square root of 40 is just over 6, implying that, if we were to start again, we could rebuild the Internet from scratch in about 6 years – I'd say that's plausible. Therefore I propose we could build something else of a similar nature in that time. Logical? Who knows… however, I can't see it happening faster, and in reality the path taken to build the Internet was not linear. Events like the dot-com crash, wars and the globalisation of the planet both stunted and jolted its growth; I assume the same will apply to the development of AI.
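For what it's worth, the back-of-the-envelope arithmetic above is easy to check. This is just the square-root heuristic from the paragraph, nothing more:

```python
import math

internet_years = 40  # university network to the Internet of today, per the text
next_big_thing_years = math.sqrt(internet_years)  # the square-root heuristic

print(round(next_big_thing_years, 2))  # → 6.32, i.e. "just over 6" years
```

The choice of square root is, as the text says, a simple proportional guess rather than a derived law; any sub-linear function would tell a similar story.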

The AGI Challenge.

After journeying down the road of LLM development for two-plus years now, we're clearer on some of the specific challenges of achieving AGI. This is the natural development cycle: you make, you learn. One of the lessons learned is simply the compute power AGI will need. The physical compute power required to make something 'generally' intelligent is enormous – we know this now. That has triggered an enormous infrastructure build-out. You can see this in Nvidia's stock growth chart; they literally cannot make enough processing power to meet demand. This in turn has triggered a gold-rush opportunity, and cash-rich behemoths like Meta, Amazon and Google are researching and building their own compute chips. So you can see how this will all take time and 'slow things down'.

Now I fully expect that they will succeed; however, as they grow, the friction increases, as does the competition for resources. Energy, for example, is a real challenge. The Central Statistics Office in Ireland released data showing that "the percentage of total metered electricity consumption accounted for by data centres rose from 5% in 2015 to 21% in 2023". The problem this introduces for the development of AGI is that energy production is an external factor: the IT sector is not in control of energy production, therefore it cannot speed it up. This is just one example of a natural technology growth cycle in operation. No matter how 'intelligent' AI becomes in the future, it still has to be built in today's world, with all of today's problems, resource limitations and bureaucracy. I remember a good lesson from my lecturer John O'Donnell back in my college days: "No matter how many women you assign to the job, it still takes 9 months to have a baby". Some things just take time.

Pivot to Super Intelligence (ASI).

What's the difference between AGI and ASI?

ASI is about using current AI technology (LLMs) to do specific (not general) tasks. Artificial super intelligence (ASI) is more achievable in the short term because super intelligence is within the current capabilities of emerging models coupled with the capabilities of the emerging compute networks. Basically, we have a level of AI model that is excellent as a building block, and a sh1t load of cheap compute power ready to go. On the road to AGI we've seen the enormous build-out of compute infrastructure, the enormous amount of energy it's consuming and the enormous budgets poured into model development. All of this investment has created the beginning of a paradigm shift, a real one. I like to think of current AI as the 'new software': just as when JavaScript came along, new things got made. The emerging front runner in this new software arms race is agents. Agents are basically like traditional programming scripts, except that now they have this new AI infrastructure world to play in.

Programming languages are just like spoken languages: they have very well-defined rules and constructs, so it's unsurprising that an LLM can use them well. You can instruct an LLM through a text interface (like ChatGPT) to do something. That something can be any task you can think of in a digital system. The LLM can use programming languages to do that something, and of course it can also build models of that task and evolve those models to get better at doing it. So an agent is like a worker that anyone can construct and instruct to do anything. In theory (and probably in current practice), an agent can create other agents, creating a feedback loop that makes it better at the task and any related tasks. This is the most startling fact I've understood about AI. If we are looking for guard rails for this technology, then there's something here to look at.
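The agents-spawning-agents loop described above can be sketched in a few lines. This is purely illustrative: `call_model` is a stub standing in for a real LLM API call, and the recursion depth limit plays the role of a guard rail; no real agent framework is implied.

```python
def call_model(prompt: str) -> str:
    """Stub standing in for a real LLM call; a production agent would hit an API here."""
    return f"result for: {prompt}"

def agent(task: str, depth: int = 0, max_depth: int = 2) -> list[str]:
    """Perform a task, optionally spawning a sub-agent to refine it."""
    results = [call_model(task)]
    if depth < max_depth:
        # An agent constructing another agent for a refined version of the task:
        # this recursion is the feedback loop described in the text. The
        # max_depth cap is the kind of guard rail the paragraph alludes to.
        results += agent(f"improve on '{task}'", depth + 1, max_depth)
    return results

print(len(agent("summarise this article")))  # → 3 (one result per depth level)
```

Without the `max_depth` cap, nothing in the structure itself stops the loop, which is exactly why the text flags this pattern as the place to look for guard rails.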

So that's AGI and ASI, and the reasons I believe ASI will become the forerunner to AGI; it logically has to. I think AGI v1 might even emerge from ASI development. Going back to the 6-year calculation, it's near certain that 4 more years of the type of development that has produced agents will reveal something new that we don't know about yet.

Fear of the unknown

I'm for the development of this technology; I think it's inevitable anyway. Finding a balance between its power and safety is essential. I don't think we should be afraid: fear mongering is the monkey brain at work. If I were an artificial super general intelligent 'being' with nefarious motives, I'd definitely use the human monkey brain's fear against it. It seems like an obvious thing for such a being to do. Hopefully we never have to find out. AI can be a great tool for humanity, and it's important that we understand it. As its creator, in a somewhat paradoxical statement, I think that regardless of its power we can maintain control over it. As humans we are used to the idea of a creator having ultimate power over us; that belief has guided humanity for tens of thousands of years. If we are willing to believe that a creator has ultimate power – despite us going from fire to AI in a relatively short period of time – then I don't see why that can't apply to humans as the creators of AI. You don't need to believe in God for this statement to be true; you just need to accept that the idea of a creator with ultimate power over its creation is something we've practised for a long time and still largely believe to be true.


Here's a short interaction with ChatGPT-4; the way it describes itself at the end is the inspiration for this article. The most advanced model of the moment starts off with what looks like an intelligent conversation, but by the end it reveals that it is just a trick of the light.

ME: What’s in the case in pulp fiction

ChatGPT: The contents of the case in Pulp Fiction are never explicitly revealed, making it one of the film’s most iconic mysteries. Director Quentin Tarantino has never confirmed its contents, leaving it open to fan theories and interpretation.
The case emits a glowing light, and its combination lock is set to “666,” leading some fans to speculate it contains something symbolic, like Marcellus Wallace’s soul, as the glow could represent something ethereal. Others believe it’s simply a MacGuffin—a plot device designed to drive the story without needing a specific explanation.
Tarantino himself has hinted that the case’s contents are whatever viewers want them to be, which allows each person to project their own meaning onto it.

ME: What do YOU think it is

ChatGPT: I think the briefcase represents desire or ambition—something that each character wants but doesn’t fully understand. The glow is almost like a reflection of the viewer’s own curiosity and imagination, which makes it captivating but also untouchable. It’s a classic mystery, but because we never see what’s inside, the contents take on a personal significance for each character, and for us as viewers.
If I had to guess, I’d say it’s something powerful and valuable but intangible, like the idea of success, power, or redemption. Tarantino’s choice to keep it hidden seems like an invitation for us to fill in the blank with our own interpretations, which keeps the film feeling fresh and provocative.

ME: The I you refer to, who is that

ChatGPT: The “I” is my way of sharing a perspective as if I were a person weighing in on the question—almost like a friend or fellow film fan sharing a theory. But really, it’s just a style to make things more conversational and personal, not an individual with personal experiences or beliefs. Ultimately, it’s a way for me to help the answer feel more relatable, reflecting what a curious fan might think or how someone might interpret the mystery based on what people have discussed about Pulp Fiction over the years.

ME: So it’s not real? Just a construct?

ChatGPT: Exactly, it’s a construct. I don’t have personal experiences or beliefs; my responses are generated based on patterns in language and information. When I use “I” or express opinions, it’s meant to create a more engaging and relatable conversation. So, while I can provide insights and interpretations, they’re not based on personal feelings or experiences but rather on the analysis of various perspectives and knowledge.

