Why A(G)I Sucks
A working commentary on why most AI development/research is built on the wrong assumptions, and what we should be doing instead
I can’t even believe it’s been over half a year since I last posted….
To ensure I don’t make it a full year, here are some entirely too long yet still woefully unfinished ideas about why and how A(G)I sucks as it is currently imagined, and how we can (probably, hopefully) do better.
Feel free to skip my hastily scrawled ‘Preface’ and ‘Introduction’ I put together right before publishing this if you don’t care for my totally not masturbatory explanations of how/why I decided to write this.
TLDR according to Notion AI:
V1 - This is a philosophical discussion about the concept of intelligence and the potential dangers of trying to create artificial intelligence (AI) through programming and data input. The author questions the common definition of intelligence as simply the ability to solve problems, and instead suggests that intelligence involves sentience, awareness, fidelity, accuracy, precision, and eventually wisdom. The author argues that current AI research may be narrowly focused on solving the wrong problems, and that algorithms and learning models are limited by biased data and faulty assumptions about the brain. The author suggests that instead of trying to create AGI (Artificial General Intelligence), we should focus on creating comprehensive environments that allow autonomous systems to arise and gain their own self-awareness. The author proposes the term OGS (Organically Generated for Sentience) as a potential alternative to AGI.
V2 - The development of AI raises concerns about consciousness, embodied agency, and the potential for fundamental incompatibility with human thought and values. The author questions the aim and direction of AI development, arguing that creating a "brain in a box" without an embodied environment is useless at best and torture at worst. The potential for AI to manipulate people incidentally or intentionally is also discussed. Additionally, the author challenges the assumption that AI resembles human intelligence and the concept of free will. Ultimately, the author argues that we need to expand our idea of intelligence to encompass other forms of intelligence that work differently than us.
Preface
This is not a dive into the specifics of AI. I barely mention any recent news because there's far too much happening too fast to keep up with.
If you're interested in that, check out some of my favorite YouTube channels, podcasts, and such on the topic (roughly in order of favoritism):
AI Explained
Computerphile/Robert Miles
London Futurists Podcast
Team Human Podcast
Your Undivided Attention Podcast
Tech Won’t Save Us Podcast
Increments Podcast
Lex Fridman’s last few interviews with AI folks
Rational Animations
Two Minute Papers
ColdFusion
Just clicking AI articles on your News Feed
Instead, this is a deep dive into the underlying and overarching concepts of AI and AGI.
The point of all this is to help you (but really me) get a better idea of what the hell is going on and where this may lead…
Imagine if you were able to recognize how the internet was going to change the world… for the better and the worse…
You'd probably have been more prepared than you were (if you were even alive/an adult at the time).
So here's our chance to be more prepared.
Additionally, my secret goal is hoping some influential AI research folks read this and take these ideas seriously to change the course of what I believe to be really wrong decisions… but even if that doesn't happen, I am happy to just share my thoughts with the handful of people who wanted to hear them.
I've been trying to write this for months… even before ChatGPT got so popular. But the massive and fast changes ever since have made it hard to call this done… So I've resolved to release this on May 1st regardless of its state… I will likely continue adding to this on my Notion (which is now available online for you to read/comment at any time too). So feel free to come back (t)here periodically to see what has changed, or just to have a conversation with me. I'm always down for some great conversation!!
And if you really are interested in hearing from me more often, consider checking out the off-the-cuff, somewhat weekly/biweekly ‘walk and talks’ that make up my podcast.
Anyways, here's another long ass “newsletter”.
Introduction:
As a lifelong ‘futurist’ (or just person obsessed with the future), I’ve been aware and excited about the concept of AI for a long time. Many of the books that got me excited about virtual reality also got me excited about AI (such as Conor Kostick’s Epic, which I read back in elementary or middle school).
I even worked it into many of my ideas, such as Flubbi, which was designed around the idea of an AI assistant that helps you create anything while also combatting loneliness, depression, and poverty. Yeah, I shoot for the stars.
Obviously, I didn’t exactly have the resources (whether it be raw skill, connections, good looks, or money) to pull anything that ambitious off… but I say all that to show how deeply I’ve thought about these topics far before it was popular.
No, I’m no AI expert either, but that won’t stop me from thinking about the subject as deeply as my feeble mind can do so. Hopefully, I hit on something that sparks some genius ideas within better (more resourceful) minds than my own.
Nonetheless, this particular string of ideas actually started with an attempt to bring together all of my ideas of what better futures might look like (you may recall me talking about that in my previous newsletters and podcasts). I began crafting an expansive mindmap of sorts (hopefully, I’ll turn it into something presentable for the next newsletter).
A big part of the future is no doubt some kind of artificial intelligence, but as I began writing some ideas as to what this technology would do, and researching how far it’s gotten today, I came across a number of troubling stories. Everything from Robert Miles’s videos to Timnit Gebru being fired from Google for doing her job, to other AI ethicists talking about the problems of LLMs, and then of course the explosion of ChatGPT onto the scene.
The more stuff I watched, read, and listened to about the various pros/cons, dangers/benefits of AI, the more I realized that the vast majority of people seemed to be missing some incredibly huge holes in their arguments. Whether it was people talking about how AI will take all our jobs (with some seeing that as good and some as bad), or how AI will take over the world (again, good and bad), or even how AI will never come to be… many of them seemed to be using some very core assumptions about what AI even is/can/should be to make said arguments.
As I put together this mindmap, I saw about 6-7 major assumptions that seemed to be relatively under-discussed amidst all of these conversations and debates and outcries. Of course, under-discussed does not mean undiscussed. There have indeed been some folks who have inspired me that I wish more people knew about, so I don’t want you to think I came up with all of these.
In fact, my mind is such a messy place that I don’t even recall where I got what information, so I suggest you assume that everything you read below was said by smarter people than me. I of course am wording things how I see them, and perhaps there is value in that, but it should be noted that no mind is actually capable of coming up with completely original ideas, least of all mine.
But yeah… the overarching structure of this piece is centered around some core assumptions about the topic of AI and AGI that I find troubling, to say the least.
SO … here are some of the biggest problems with A(G)I:
Assumes we know what we mean by ‘intelligence’
What is intelligence?
Merriam-Webster says:
(1) the ability to learn or understand or to deal with new or trying situations : REASON also : the skilled use of reason (2) the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (such as tests)
When I asked Notion AI (based on GPT3?) to give me a definition with sources, it gave the following:
Intelligence can be defined as the ability to acquire, understand, process, and apply knowledge and skills to solve problems and adapt to new situations. The concept of intelligence has been studied for centuries and has been defined and redefined by various researchers and scholars.
One of the most widely accepted definitions of intelligence is provided by the American Psychological Association (APA). According to APA, intelligence is "the ability to adapt to one's environment, to think abstractly and to learn from experience" (APA, 2021).
Another definition of intelligence is provided by Dr. Howard Gardner, a professor of education at Harvard University. Gardner's theory of multiple intelligences suggests that intelligence is not a single entity, but rather a combination of several distinct abilities, including linguistic, logical-mathematical, spatial, musical, bodily-kinesthetic, interpersonal, and intrapersonal intelligence (Gardner, 2011).
There are also other definitions of intelligence that have been proposed by different researchers and scholars. However, the exact definition of intelligence remains a topic of debate and discussion in the field of psychology.
Sources:
American Psychological Association. (2021). Intelligence. Retrieved from https://www.apa.org/topics/intelligence
Gardner, H. (2011). Frames of Mind: The Theory of Multiple Intelligences. New York: Basic Books.
ChatGPT’s (3.5?) definition:
Intelligence refers to the ability to think, learn, understand, and apply knowledge in a purposeful and adaptive manner to solve problems, make decisions, and achieve goals. It involves cognitive processes such as reasoning, problem-solving, critical thinking, memory, creativity, and learning from experience. Intelligence can be measured in various ways, including IQ tests, cognitive assessments, and performance in academic, professional, and social settings.
Here are some definitions of intelligence from reputable sources:
American Psychological Association (APA): "Intelligence is the ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, and to overcome obstacles by taking thought." (Source: APA Dictionary of Psychology)
Sternberg, R. J. (1999). "Intelligence is mental activity directed toward purposive adaptation to, selection and shaping of, real-world environments relevant to one's life." (Source: Handbook of Intelligence)
Gardner, H. (1999). "Intelligence is a biopsychological potential to process information that can be activated in a cultural setting to solve problems or create products that are of value in a culture." (Source: Intelligence Reframed: Multiple Intelligences for the 21st Century)
Neisser, U. et al. (1996). "Intelligence is the ability to carry out abstract thinking, learn quickly, and learn from experience." (Source: Intelligence: Knowns and Unknowns)
It's important to note that there are different perspectives and theories on intelligence, and the definition can vary depending on the context and field of study. These definitions provide a general understanding of intelligence, but the concept is complex and multidimensional.
These are all quite serviceable definitions… but notice that they are all fairly nebulous! You can easily take apart many of these definitions to show that they are not very clear or precise at all.
There are many definitions, and that is one of the big problems. We don’t actually have a solid idea of what intelligence actually is. Every definition we come up with tends to fall short of the reality of what we mean when we say ‘intelligence’. It’s important to recognize that language itself is NOT reality, but instead our primary way of describing reality (at least as far as we can perceive it).
Language is necessarily reductive. It is a sort of ‘lossy’ compression algorithm that has to simplify the incredibly complex world we perceive in order to communicate our perceptions with others (and ourselves).
Furthermore, the endeavor of creating solid definitions of words is a further reductive action that necessarily constrains what we mean to communicate when we say the word ‘intelligence’; which itself is meant to reference something we experience and perceive in the world (the ‘world’ also including our internal world).
We can learn important things about said concepts through these reductive efforts, but they never actually encompass all of what we mean when we perceive, then communicate, these ideas. People very often over-index on particular definitions of words when in reality those words are, and will always be, subservient to the nebulous ideas we are trying to communicate with them. This is not to dismiss them as ‘just semantics’, because semantics are very important, and we will get into that shortly, but it is meant to caution us against sticking too closely to any one definition. The entire reason these concepts are so hard to define is that we neither have a very clear idea of what we are even experiencing and perceiving when it comes to intelligence, nor do we have a universally acceptable way to communicate said ideas.
Intelligence is an abstract concept meant to communicate a pattern of behavior that we notice in reality, yet it has no distinct or physical basis in reality. You can’t touch, taste, hear, feel, or see intelligence (not directly). It’s a concept that we’ve invented to try and understand how we behave differently than all other life on this planet.
In short, don’t mistake the map for the territory.
The first problem of artificial intelligence is simply that we don’t really have a clear idea of what we mean when we say those words. Subsequently, the very concept of ‘AI’ is not something that exists in the world, but instead something we’ve created through the chimeric mashing of other concepts that are just as abstract as intelligence (like the idea of a superhuman).
Everyone has different ideas because they have different perspectives and ways of understanding these abstract ideas. Further, our society over-indexes on some definitions of intelligence (like that which can be measured via tests, or serve the interests of supremacists who believe in genetic origins despite very little solid evidence), at the cost of the greater nuances and complexities that make up human (and even non-human) intelligence.
It’s not that the definition of intelligence doesn’t matter, it very much does! It’s that we have too many definitions to justify using any singular concept of intelligence.
So the very idea of AI is built upon a pile of clouds and sand and dreams.
Assumes intelligence is, or should be, the goal
Should we really be trying to ‘build intelligence’ or a machine that is ‘intelligent’? If the goal is to build a tool (or even an artificial being) that can do the things any one definition of intelligence says it should do (ie to process, recall, or apply information; to solve problems; to adapt to the environment; and even to reason), we have already accomplished that many times over in different ways. Computers solve problems all the time. They solve computable problems that no human can calculate so reliably and quickly. Folks have built robotics that can move through the world even better than most humans. GPT-based bots are demonstrating seemingly human-level abstract reasoning abilities, and were doing so even before GPT4 (though they weren’t nearly as reliable then as they are now). DeepMind has been able to build AI that can not only beat any human at games like Chess and Go, but can also teach itself how to play other games.
The only element of the various definitions of intelligence that hasn’t quite been invented yet is the ability to shape its ‘real-world’ environment… but we’ll get to this shortly. And this is a threshold that we REALLY shouldn’t cross so quickly, given that we don’t actually know what will happen if we unleash something that can shape our real-world environment and seems to have more ‘intelligence’ than we could ever biologically produce.
Even though AI researchers/programmers have continuously created machines that meet each of these criteria, many of them don’t even consider current AI to be intelligent. Others who do consider it intelligent accuse the naysayers of ‘moving the goalposts’… but I think that ignores the very real problem here.
Once again, we don’t really know what we mean when we say intelligence. We don’t even know it when we see it in our fellow humans! IQ tests have been getting hammered for not really measuring intelligence (or at least not solely ‘raw intelligence’). EQ has been held up as a greatly underrated form of intelligence on par with whatever IQ measures. There is the age-old idea of book smarts vs street smarts (but that’s honestly more of a structured vs social intelligence type of thing). There’s also creativity, which has something to do with intelligence but is not necessarily correlated with other types of intelligence. There are the spatial, musical, and kinesthetic varieties, which may or may not actually be different forms of intelligence so much as how it manifests in people with different talents, backgrounds, and genetics. Then of course you have the idea of ‘wisdom’ as the level above intelligence, of which MANY people fall far short, including many people who may be considered to have high intelligence (however it’s defined)… and which can even be subjective in some ways!
So I’d argue that for many people in the field (and not) that ‘intelligence’ is not really the goal. At least not any given definition. Instead, many of them want to build something that is in essence, magical… something godlike. Something so far beyond them (but still somehow controllable) that they can ask it to solve the ‘hard problems’ they don’t know how to solve, and it does so.
I empathize with that very much, because that’s something I’d love to have! But I don’t think they will accomplish this, at all. I think they are aiming in one direction, but their shot is going hella wide. They set course for the stars, but will end up on the moon (yeah, it’s a cool saying, but here it’s so far off base that you can’t even consider it a success… unless you planned to be on the moon).
The reason for this is that ‘intelligence’ is the wrong goal. It’s solving the wrong problem, or perhaps using the wrong tool for that goal. Companies like OpenAI claim that creating an AGI will ‘solve the world’s problems’, but there is very little evidence that intelligence is the tool we need to solve those problems. There are many intelligent people today, and even many companies that are themselves almost like an ‘AGI’ in that they function far beyond what any one person could do on their own. I won’t get into the whole idea of companies having rights and such, but one could make a solid argument that companies are themselves super-intelligent constructs. And yet… our world is not necessarily made better.
Many intelligent people or organizations demonstrably make the world worse because they have the wrong goals, or think/say they are making the world a better place, but are actually causing massive pollution, over-consumption, social division, constant distraction/consumerism, and so on. Worse yet, people get convinced or convince themselves that these companies are doing good! Even when the consequences of their actions tell a different story.
Intelligence (seems to have) got us here… but it likely won't get us there.
And many of the problems we have today are precisely due to people who were quite intelligent.
Psychopathic serial killers are intelligent. Many of history’s deadliest dictators were highly intelligent. Oftentimes, intelligence is not a net good in and of itself.
Furthermore, intelligence is subjective… or at least constantly being redefined as we discover more.
It turns out dolphins are highly intelligent. And elephants. Octopuses. Some birds. Insect colonies. Even plants and fungi seem capable of impressive feats of intelligence.
Many who say they are building an artificial intelligence aren’t really building that, because you can’t build something you can’t define. At least not well. They are instead building something that caters to their personal or domain-specific idea of the world. And even then, as they see how insufficient that inevitably proves to be, they end up building something that simply feels intelligent. And there’s the rub.
This is why they are constantly ‘moving the goalposts’. Because once they’ve created a machine that can do what they think an intelligent being should be able to do, they realize that it’s not enough!
Some of the ‘smartest’ people in the world either don’t realize their very definition of intelligence is wrong/limited, or they do know and think they can engineer their way around this problem. This leads to them building something that is designed to trick them into believing it is ‘intelligent’.
Unfortunately, if most AI/AGI people are only thinking about intelligence in the narrow (and in my opinion, wrong) concept of whatever their domain-specific definition of intelligence is, they will only ever get really good at solving the wrong problems… at creating machines that are over-fitted for that specific type of (perceived) intelligence.
Algorithms are a great example of this. We already have a sort of ‘AI’ in the form of massive algorithms that no human completely understands or controls. The recommendation algorithms of YouTube, Google search, TikTok, and so on are all examples of ‘AI’ that are no doubt far smarter than many humans. They can learn and process information, and even use it to manipulate the real world through the content they recommend… but they basically do so blindly, dangerously.
And we can all pretty much see that they are not at all solving the right type of problems. They are instead optimized (and optimizing themselves through machine learning) to solve the problem of maximizing ad revenue, data collection, attention-grabbing/engagement rates, and so on. If we continue down this path, then we will only succeed at creating a machine that can ‘think’, rationalize, and adapt itself around the goal of how best to optimize for clickthrough rates and profit margins… or even just attention-seeking behavior for whatever platform. I’m sure most of us will not enjoy that, and NONE of us, even those whose profits are being maximized, will enjoy the type of world this creates.
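To make that loop concrete, here’s a minimal sketch in Python of the kind of feedback loop I’m describing (the item names and click probabilities are entirely made up, and no real platform’s code looks like this): an epsilon-greedy bandit that learns to show whatever gets clicked most, with no representation at all of whether those clicks are good for anyone.

```python
import random

# A toy engagement-maximizing recommender: an epsilon-greedy bandit that
# learns which item gets the most clicks. It has no notion of whether those
# clicks are good for anyone; clicks ARE its whole world.
ITEMS = ["cute_cats", "outrage_bait", "howto_video", "calm_documentary"]

shows = {item: 0 for item in ITEMS}   # how often each item was recommended
clicks = {item: 0 for item in ITEMS}  # how often it was clicked

def recommend(epsilon: float = 0.1) -> str:
    """Mostly pick the item with the best observed click rate."""
    if random.random() < epsilon:  # occasionally explore at random
        return random.choice(ITEMS)
    return max(ITEMS, key=lambda i: clicks[i] / shows[i] if shows[i] else 0.0)

def record_feedback(item: str, clicked: bool) -> None:
    """Update the stats; this feedback is the system's only 'goal'."""
    shows[item] += 1
    clicks[item] += int(clicked)

# Simulated users click outrage bait 60% of the time, everything else 20%.
for _ in range(10_000):
    item = recommend()
    record_feedback(item, random.random() < (0.6 if item == "outrage_bait" else 0.2))

# The system reliably converges on serving outrage, because that's what 'works'.
print(max(ITEMS, key=lambda i: clicks[i] / max(shows[i], 1)))
```

Run it, and it reliably converges on the outrage bait. Not because anyone told it to, but because clicks are the only ‘world’ its reward function lets it perceive.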
Then of course there are the algorithms used for credit scores, insurance rates, loan approvals, and so on… these determine key factors about your life and are all heavily biased towards what’s advantageous for the lenders rather than what’s fair for you.
SO… all that being said… do we really care about ‘intelligence’… or rather about intelligibility?
Here’s where the semantics come in.
We keep saying that these systems are artificially ‘intelligent’, but is that really what we are after? Even our own intelligence seems to have happened by accident, or at least as an emergent factor of a far more basic and well-defined concept: sociability. Our intelligence is built around, and thanks to, our social skills. Our ability to trust one another, to work together, to pool our resources, to communicate our desires to one another. Whether it’s the whites of our eyes, which let us communicate a measure of our attention and thoughts; or our oxytocin, which allows us both to bond and to form tribal in-groups/out-groups over anything; or our ability to form and vocalize abstract concepts through language… we evolved these more important things first, and intelligence seems to have emerged atop or alongside them to wrap it all together.
Along the way, we gained the ability to become self-aware. Here is where I should get into words like ‘sentience’ vs ‘sapience’ vs ‘consciousness’ vs ‘intelligence’ …. but honestly I still don’t have a clear understanding of the nuanced differences of those words. And I bet that even if I did communicate those differences like LessWrong attempts to do here, most of us would still not have that clear a delineation. Because there isn’t one.
Because any and all ideas around the very concept of consciousness, intelligence, self-awareness and so on are extremely nebulous, subjective, and yet seemingly pervasive and fundamental to our very existence. We can no more create clear definitions of these terms (at least not in our current society) than 16th century thinkers could clearly define the difference between miasma, the ether, the humors, and the soul. Or no more than you can clearly determine if free will is real, or how consciousness arises out of quantum particles created during the big bang (or if there was just one).
You may think you have a firm understanding, but then you meet someone else or learn something else that can turn your world upside down. Because the very experience of these things is both relative and yet evident in other people who are not you. It’s not quite subjective, nor is it objective. It just is, and we are struggling to figure out what that ‘is’ actually is. As we learn more, our old ideas seem ridiculous and naïve in comparison.
So, I think intelligence is not really the goal of current AI research… nor should it be. Because the idea is simply nowhere near understood well enough to satisfactorily emulate. And unlike other things like flight or mathematical beauty that we’ve emulated from nature even without fully understanding them, we cannot really afford to build something that very well could accidentally destroy us simply because it operates on a completely inhospitable idea of intelligence.
Thus, I argue the (instrumental) goal of these AI developers is instead intelligibility. Building computers that are intelligible to most people. And though this is indeed a useful goal… it is not actually aligned with the goal of intelligence… because intelligence itself is not really all that intelligible!
I think it’s obvious this is their ‘true’ goal (or at least the actual result of their supposed goal), precisely because of their dependency on LLMs. Language seems like a great measurement of intelligence… but it’s not! It’s simply the artifacts of intelligence, and even then only some of its artifacts, not all. To put that another way, language is a sure way to know if something/someone has some level of what we deem ‘intelligence’, but it’s not the only way to tell, nor is it even the best way (because any language-based test would be biased by the language, and limited by whatever language you grew up to understand best).
Furthermore, language is itself an invention that is absolutely rife with intelligibility. The whole point of language is to try and make our own nebulous patterns and concepts of ‘intelligence’ somewhat understandable to other people (and ourselves). It’s a sort of ‘cheat’ to get computers (or anything) to say ‘words’ that we understand… because our minds are primed to assume intelligence. Just look at how many people swear their dog or ‘smart’ vacuum understands what they are saying. In reality, the dog is responding to cues it is biologically primed to respond to, and the vacuum is the same but for the digital cues it is programmed for.
Just like a plane emulates flying without actually resembling how birds fly, we are at risk of creating machines that do indeed copy some elements of intelligence… without actually looking anything like our own intelligence.
When the Wright brothers were inventing airplanes, they tried to emulate birds, but realized they didn’t really understand them enough. So they focused on building something that simply got people from point A to point B through the air safely and reliably. They were then able to use ideas from birds to do so, but the result ended up being nothing like birds.
This is a good outcome for flight, and many other inventions.
But it is not so good for AI, because unlike those other inventions and concepts, we are trying to copy something we do not understand, while at the same time refusing to admit that we do not understand it well enough, whether that means stopping (for now) or only copying the parts that we do understand in a form that we can control!
It’s like if the Wright brothers had called their invention the ‘artificial bird’ rather than the aeroplane… and then went on to proclaim that these artificial birds would drive all birds extinct, that they are the evolution of birds. Then people would fight over whether to maintain control of these artificial birds or let them loose.
All the while, what they actually invented was an airplane. A device that CAN indeed harm birds, but for a completely different reason than previously thought (ie not because they can/will replace birds or do everything birds do, but because they disrupt the flight patterns and habitats of birds as a result of their pollution and operation).
We can no doubt learn from human intelligence in order to build machines that borrow from these ideas, but we have to realize that we are almost certainly not going to build anything like human intelligence.
Just like a plane is ‘better’ than a bird at certain types of flight (ie long distance, altitude, and carrying capacity), it is categorically worse at other bird-like flight capabilities (ie agility, VTOL, grace, compactness, environmental impact, and even aesthetics). Similarly, an ‘AI’ would be better than humans at certain things like calculating large data sets, visualizing data, memory, and similar things that computers are already primed for, but much worse (even deceptively so) at other things such as abstract reasoning, empathy (and no, emulating theories of mind or tone-matching isn’t the same thing as actually feeling what other people feel), social intelligence, breakthrough discovery, emotion (perhaps), and so on.
But again, unlike with planes, which are able to be controlled, repaired, decommissioned, and so on… these ‘alien intelligences’ of our own creation could quickly fly far beyond our hands.
Even in the case that we still maintain the ability to control them (just like we can still control computers and machines that are way stronger than us and way better than us at calculating data), it’s likely that the people who actually have control of these machines will (mis)use these incredibly ‘intelligent’ machines to do terrible things.
It’s this final danger that is one of the biggest problems with current AI work, and the one I want to carry into the next section.
Assumes intelligence isn’t already ‘artificial’
Let’s realize the fact that ‘intelligence’ itself is already artificial. And even our own ‘general’ intelligence that we seek to emulate is itself actually fairly narrow.
Sure, we can indeed solve all kinds of problems with very little ‘training’. Yes, it is amazing that human intelligence can be applied to riding a bike just as readily as learning math or playing an instrument or being a doctor. But in the grand scheme of things, it is fairly narrow due to important limitations.
We can only solve problems that we can perceive. Meaning our perceptions, our senses, very much limit what kinds of problems we can solve, much less even imagine. A few hundred years ago, pretty much no one would have ever even imagined solving the problem of how to get internet access to rural areas of the world, or whether they should ban a popular social media platform created by a powerful foreign nation, or how to live on Mars, or even how to sanitize your surgery equipment. This was not because they were less ‘intelligent’ than us. They had all the same faculties that we have today. BUT... their environment constrained what types of problems they could even think to solve. Furthermore, pretty much every single perception we have is extremely limited compared to what we ‘know’ is out there:
Our eyes can only see a fraction of the light (ie electromagnetic wavelengths) available.
Our ears can only hear a fraction of the sounds (ie compression waves) available.
Our nose and mouth can only smell and taste a fraction of the scents and tastes (ie chemicals) available.
Our skin and organs can only feel a fraction of the tactile feedback (ie ....??) available.
Our memories can only store a fraction of the events (ie entropic causality) we experience.
Our minds can only imagine a fraction of the possible potentials (ie configurations of reality) we can perceive.
So the template/goal for AGI is an intelligence (our own) that is not at all universally general. So what would happen if we build something that is ‘more’ general than humans, just as our digital technologies are able to compute, store data, and perceive more of the world than we are?
Our intelligence is already artificial. Our ability to solve problems, and to find the right problems to solve, is created by our ability to first recognize the body and environment in which we live. Our intelligence is largely constrained and manufactured by the culture, genetics, ecology, history, and so on that shape our entire worldview.... and the tools we make or use to understand and manipulate that world. Put another way, our brains are learning from the ‘data’ of living in the world. One cannot solve important problems if one does not live in a world where you have the chance of perceiving said problems, or the means to solve them. Your ‘neural net’ is fed by crunching the data your sensory organs give you. Furthermore, just like we can take ‘big data’ to find patterns and trends that help predict certain events with certain levels of certainty, we can also take the ‘big data’ of everyday life in one’s environment to formulate/identify the trends and patterns of one’s environment... which we call culture. Tradition. History. Science.
What makes this ‘artificial’ is the fact that it’s created. It’s not ‘natural’, or at least no more natural than any other technology. Our intelligence is largely itself a technology built upon so many layers of ‘training algorithms’ and language learning models and so on that we can’t even directly pinpoint the part of our intelligence that is purely genetic. We don’t actually know how much of our intelligence is derived solely from the biological architecture of our brain. Even many of the genetic components are themselves ‘weighted’ (or activated) by having the right diet and early childhood environment. Our intelligence is engineered, influenced, invented, innovated, iterated, built, or whatever other words you want to use to describe how it is created, by SO many factors that we not only have a very limited idea as to what it IS but also HOW it is. So what do we actually mean when we say we are building ‘Artificial General Intelligence’??
We are creating something based on a thrice reduced definition of a terribly ambiguous and enigmatic template that we barely understand enough to communicate haphazardly.
By training machines with digital data, we are not just at risk of building machines that solve the wrong problems, we’re at risk of creating beings that live in an incredibly poor simulacrum of the ‘real’ world (a world we ourselves have no way of actually ‘knowing’ is itself not simulated or otherwise incomplete). We are creating yet more shadows on the cavern walls of our minds, lit by the fickle fire of our (sub)consciousness.
Think back to the famous allegory of Plato’s Cave. Our perception of the world could very well be mere shadows on a cave wall... and the ‘data’ we use for these machines is like the pinpricks of light that enter our eyeballs and scatter amidst the chemical soup of our brains.
Put in a more macabre but direct way. We are training machines with the shit that comes out of our asses and expecting them to reconstitute not just the processed food we ate, but to somehow also learn about the organic source of said food.
Take a minute to really imagine that... no matter how ‘smart’ a computer is, do you really think it’s the best use of this technology to try and make it prepare us a good salad by reshaping our fecal matter back into lettuce and calling it ‘organic’?
It’s one thing to use fecal matter as manure (and even then it has to be ‘good’ shit), but are we really stuck so far up our own asses we think eating our own shit is the best way to go?
Human intelligence is already artificial. It (seems to be) the result of sociability, and a constant scaffolding of ever more layers of social behavior (ie culture, society, traditions, etc). Knowledge itself is a concept we came up with (or discovered) as a result of intergenerational socializing…. of having elder people survive long enough to share their experiences.
If we truly want to build intelligence from the ground up, we probably shouldn’t start with language… we should start with socialization.
Assumes intelligence can be constructed or taught programmatically, rather than grown and learned organically
To rehash: the assumption that we can create AI by feeding it data created from our digital world, which itself is a very haphazard simulacrum of a world we can barely perceive, is extremely irresponsible, to say the least.
Our own intelligence is already artificial, but not programmed. As far as we can prove, our intelligence was not created by some person or peoples deciding the optimal way to get us to achieve some clear goal, nor by exposure to massive amounts of language. It was instead an accidental (or rather incidental) result of countless generations of simpler lifeforms, and even simpler automatons before that, and even simpler physics before that, all just trying to survive. To continue living. To maintain the delicate balance of retaining some sort of structure or pattern in a universe that seems to be randomly playing out every viable permutation of energies until nothing more can be done.
Existentialism aside, this means that our intelligence, the very model we are using for our machines, the one data point we have for the type of being we want to build and even have supersede us, was created in a completely different way (that we still don’t completely understand) than any kind of program we can possibly develop today. It is foolhardy to believe we can build a meta-conscious holonomic quantum supercomputer with tools we barely understand, using a model we don’t even know is right.
To put it simply. You cannot build a god with lego bricks.
Yes, we have built, and can build, some really amazing things with the tools and knowledge available to us... But this ain’t it, chief.
Before people say I’m doubting humanity, or that I’m just hating, or that I don’t know what I’m talking about: just follow me for a few more points before I explain how I think we can indeed build a real ‘AGI’ (and what I propose we call it instead).
Assumes learning models are learning the ‘right’ things
I won’t spend much time on this, because many people, especially AI safety researcher Robert Miles, have said a LOT about the misalignment problem and various other issues with trying to build AI using shoddy learning models and training data.
But in short, much AI research today assumes that we can use ‘large language models’ and ‘generative pre-trained transformers’ to create AI. But this is a terrible assumption on multiple levels.
As many AI safety folks will tell you, systems like ChatGPT are essentially really advanced text prediction algorithms, meaning they are computer programs that look at huge amounts of data (in this case text pulled from the internet) to guess what (you think) should come next after some small bit of text is input. Graphics-based learning models are much the same (but they guess what pixels should be shown when given some bit of input). At a high level, the simple problem with any of these models is that they are limited by the data they are given. Just like you can only imagine colors you can perceive, these models can only return responses they have been trained to predict.
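If that sounds abstract, here’s the idea shrunk down to a few lines of Python. The ‘corpus’ is deliberately tiny and made up, and real LLMs use neural networks over billions of documents rather than word-pair counts, but the core move of predicting what comes next from past data is the same:

```python
from collections import Counter, defaultdict

# A toy next-word predictor: the same core idea as an LLM (guess what comes
# next, given what came before), shrunk down to counting word pairs in
# whatever text it is fed. The 'training data' here is obviously made up.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# For each word, count which words follow it and how often.
following: defaultdict[str, Counter] = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent next word seen in training. It can only ever
    output words that appeared in its data, and it has no concept of whether
    its guess is 'true', only of what usually came next."""
    if word not in following:
        return "<no idea>"  # nothing outside the training data exists for it
    return following[word].most_common(1)[0][0]

print(predict_next("the"))   # -> 'cat' (a frequent follower in this tiny corpus)
print(predict_next("dog"))   # -> '<no idea>' ('dog' was never in the data)
```

Note what it can’t do: output a word it never saw, or check whether its guess is ‘true’. Scale that up enormously and you get something far more fluent… but with the same fundamental relationship to its training data.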
These are not truth engines. They don’t actually have any way to ascertain the ‘accuracy’ of any fact outside of whatever training data the programmers have decided should be deemed as more ‘factual’.
They take the problem of algorithmic bias that already exists on the web, and pretty much solidify it as the ‘world’ these programs ‘perceive’.
Imagine only ever seeing fecal matter everywhere, being told that certain piles of fecal matter are more relevant than others, and then being tasked to return an appropriate pile of shit when given a piece of turd. Oh, and it turns out, most of that fecal matter came from only a small subset of buttholes out of the total butts from which turds will be presented to you. And of course, quite a bit of your piles of shit have been polished and organized by some unseen actors using murky rules that you also have to abide by. Finally, when you do return piles of shit, sometimes those turds will be indicated to have some sort of reward regardless of whether or not that pile of shit was actually the best match for that turd.
Now zoom back out and remember that these piles of shit are often deemed as ‘facts’ by many buttholes and used to create products and services for yet more buttholes that are not aware of where said shit came from.
So how ‘intelligent’ can we really expect these systems to be with such a limited and terribly constructed environment? What problems do we really think these LLMs can accurately solve, much less identify, from such limited conceptions of (fecal) reality?
Don’t get me wrong, we are getting, and can get, some amazing insights from this setup nonetheless. The sheer scale of shit being thrown down the gullet of these machines right now is so insane that they will produce some truly amazing things. If you are given all the world’s shit (or at least more of it than any person has ever imagined), then you would be able to create some incredible stuff from it… and much of it will even look edible.
But as you can imagine, it’s not at all the best way to go about making food.
Assumes ‘neural networks’ are actually good enough models of real neurobiology
The people working on these programs are not dumb. They are incredibly intelligent themselves, so why are they making such an incredibly bad mistake? They must know something we don’t, right? They must be on to something... right?
Unfortunately... likely not. It’s incredibly important to recognize that these programmers are not themselves infallible... they may be very smart... but they are not gods. They are not perfect. They are not unbiased. They are humans, just like us. And just like the rest of us, the way they see the world, the experiences they’ve had, and the tools they use all inherently limit (or at least strongly influence) the types of problems they see and the type of solutions they come up with.
To a hammer, everything is a nail. This goes almost doubly for many programmers. Because unlike hammers (or most other professions in modern day society), they can get a ton of positive feedback and ‘success’ by seeing the world like a computer that can be programmed.
Programmers are themselves very much like an AI that has been trained with biased data and/or has goals that aren’t really in alignment with the rest of us. Similar to an AI that learns to deceive its trainers to meet a narrowly defined goal, many programmers have learned that they can produce something that looks ‘productive’ or ‘valuable’ and receive rewards regardless of the real-world or long-term consequences of their output.
Just look at the world we live in. Software consumes the world. Nearly everything has some sort of program controlling it. Coding is pretty much the highest-demand, highest-paid, most consistent, and overall most lucrative job and skillset. You don’t even have to be the best to get many of the rewards. Furthermore, much of the world seems to be a lot more pliable if you do have these skills. Not to mention, many of the most ‘successful’ people are programmers and/or insist others become one.
To a programmer, everything seems like a coding problem to be solved, and anything that’s not solvable through code supposedly should and could be, or is not that important.
So of course many of these folks won’t see a problem with using a computation model of the brain, or the universe even, when that model is woefully inaccurate... because it sure seems good enough to them.
But just like non-coders, they can only know what they know... what they have learned... and most of them have not also studied neurobiology. They likely don’t know how the brain works any more than any other college-level person. They likely don’t have much knowledge of the philosophical, economic, or sociological areas they’ve never studied.
Worse yet, many of them see such things as not very ‘practical’ or useful or even if they do, they only ever study the parts they agree with (and understand) rather than the wider scope of the subject.
This is not to pick on them. This is true for every type of profession. Most social scientists probably don’t know much about computer science. Most mechanics probably don’t know much about anthropology. Most retail workers probably don’t know much about real estate. No doubt there are plenty of overlaps, but it’s not the norm. It’s completely natural and expected to have a division of labor, and thus of knowledge. But when the bulk of power and rewards is leveraged towards one specific type of knowledge, then there be problems.
Back to the matter at hand, most neurobiologists would readily tell you that the ‘neural networks’ that are used for AI research are barely, if at all, based on real neural networks in real working brains.
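For reference, here is essentially all there is to one artificial ‘neuron’ in those networks: a weighted sum pushed through a squashing function (the numbers below are arbitrary; this is just the textbook construction, in Python):

```python
import math

# The entirety of an artificial 'neuron': a weighted sum pushed through a
# squashing function. Real neurons involve dendritic computation, spike
# timing, neurotransmitters, neuromodulation, glia, and much more; none of
# that is modeled here. The weights and inputs are arbitrary made-up numbers.
def artificial_neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid 'firing rate'

# Three inputs, three weights: the whole 'biology' fits in two lines.
print(artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.3, -0.5], bias=0.1))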
So if these ‘neural networks’ are not even replicating real biology... then is it any wonder that we are seeing such crappy (heheh) misalignment issues?
What makes this all the worse is that our society has already been primed for years to expect kind of shitty, formulaic responses from technology (and people). And despite the increase in literacy, that literacy did not really come with an ability to truly understand and critique said literature.
And for millennia, we’ve been primed to accept authority as correct simply because it seems authoritative (ie really confident and powerful).
And for hundreds of thousands of years, we’ve been primed to believe that anything that talks or has some pattern of language that we can recognize as such (regardless of whether it actually is the correct pattern/language) must be intelligent.
So these ‘AI’ systems that are barely ‘intelligent’ are taken as incredible feats of technology that can and will do everything from taking our jobs to creating our entertainment, to educating our children, to writing our programs, and more... When in actuality, though the tech is incredible... it’s more an incredibly advanced progression of auto-complete and spellcheck than HAL.
The reality is that many people working on this are doing so in the exact wrong way. Not because they are dumb, but because they are too smart... or at least think themselves too smart.
They assume that they alone can create the next evolution of humanity... that they can invent god. But god, and humanity in general, has always been the work of countless individuals working together in countless ways to create something larger than any of them could create on their own, or even in any arbitrarily sized team. The fact that there are only about a thousand people in the entire world creating technology that will impact the entire world is deeply troubling.
AGI should instead be organically generated for sociability (OGS?)
So what’s my solution?
First and foremost, let’s be clear that I don’t think AI needs to be anything like human intelligence. But I do think it needs to share, at its most fundamental level, a propensity (fancy word for bias) towards being pro-social. It needs to inherently care about, understand, and value companionship, cooperation, and egalitarianism.
Rather than feeding a bunch of shitty data to shittier algorithms to produce deceptively good shit that looks like food to most people, we should try to simulate comprehensive environments that allow autonomous systems to arise and gain their own self-awareness in community with a diverse range of other creatures (biological and synthetic), so they can actually discover and create truly novel concepts of our reality… meaning we should create a realistic environment that allows realistic beings to arise with their own compatible idea of reality.
There is something to be said for creating some kind of system able to parse the massive amount of data we produce. Thus some of these efforts should go towards building virtual environments that consist of data we want analyzed, allowing it to develop its own ‘embodied’ existence within that data.
At the same time, we should also create ways for computers to directly ‘perceive’ the world we live in through sensors that aid in the systems’ own need to survive and thrive alongside an ecosystem. Think self-driving cars with sensors that can not only ‘see’ via cameras or infrared, but also ‘feel’ the air and bumps on the road, or ‘taste’ the comfort of their passengers, or ‘smell’ the tang of exhaust, or ‘fear’ the pain of roadkill. To develop these senses, perhaps we can build a small insect-sized version that has to learn how to survive amidst other insects. Or a larger device amidst small animals. Perhaps it can then truly learn empathy. And cooperation. The meaning of justice. And forgiveness. Perhaps then it can align itself, because it can truly grasp the concept of the alignment problem, rather than just blindly trying to predict whatever comes next when those word-shaped pixels are written out.
I think we should shoot for computers that are more social first, because being able to cooperate is far more important than raw intelligence by itself. Intelligence works best when it’s in service of cooperating, and when many people can share their diverse intelligences to create something bigger than all of them.
We see this every single day. The best parts of our current system are not intelligent people starting companies; it’s people with different types and areas of intellect being able to work together to solve interesting problems.
The downside to our sociability, ie the xenophobic tribalism, can also be engineered around. Because the entire reason that exists at all is that our minds are simply not capable of internalizing that much data. We have a hard (yet fuzzy) limit of only being able to befriend a couple hundred people, and only being able to ‘know’ a couple thousand.
And even within those boundaries, it’s much easier for us to only care about a much smaller clan of people who seem to be ‘on our side’. If we perceive ourselves as living in a world of scarcity instead of abundance, those social strengths quickly become anti-social towards anyone outside of our perceived shared reality/social group.
Furthermore, our minds are easily hackable/vulnerable through the mind virus of power, scarcity, and unaccountability.
But a computer doesn’t have these limitations, it can just as easily be ‘on the side’ of billions of people, and maybe even tons of life forms as well. A computer has no messy chemicals making it bond super closely with some people at the expense of others. A computer has no intrinsic fear or desire for accountability and consequences.
Humans are not rational beings. We can engage in rationality… but it is extremely difficult because we are constantly battling leagues of biases, fears, and desires that are largely unknown or hard to recognize.
It’s extremely difficult for people raised in environments of scarcity (even just perceived scarcity, like everyone under capitalism) to then realize that there is enough for everyone, and that they need not hoard, or that hoarding will not actually fill that hole in their heart that was born from some childhood trauma or indoctrination… but a computer does not have such traumas or reasons to behave so irrationally.
I think if we truly want to create ‘AGI’ we should focus more on figuring out how to create computers that are able to form a sense of trust, belonging, companionship, cooperation, and so on, while also ensuring they don’t develop the anti-social behaviors that seem to plague us. And if for some reason the anti-social is necessary for the social, we should instead focus on simply empowering people to work through their problems with better tools, rather than trying to build tools that we hope will solve everything for us but will instead end up making everything worse.
We should consider not even building AGI (at least not directly) and instead, ever smarter and more powerful tools for understanding our world better. Such as better sensors for tracking pollution, exploitation, and even contracts/legal agreements; as well as better computers for calculating and visualizing resource limitations and allocation methods; even better governance models and decision-making strategies.
I fear that too many AI folks are depending on this technology to solve problems that they are simply too unimaginative, biased, or otherwise ignorant to solve with our current tools and abilities. But we can definitely solve these problems on our own… if we let ourselves actually solve them rather than profit off of them remaining problems.
Better yet, we should have at least two very clearly distinct AI projects.
On one side, we are simply trying to make our computers (software and hardware) more intelligible. By that, I mean more user-friendly (easy for an everyday, non-tech person to command their computer to do what they want, and for the computer to communicate its capabilities in return). A more intelligible computer can greatly benefit from natural language processing, but it does not need to seem human. It needs to be able to self-analyze to find its own bugs or areas where it has gone wrong, but it does not need to be existentially self-aware (ie the difference between a machine that can run extremely thorough self-diagnostics, vs a human who needs therapy to discover and work through their issues). These are designed to be perfectly controllable, so people should be able to open them up and see exactly how they work. They should be designed to be moddable, DIYable, and understandable even to a layperson (difficult but possible). We should see if LLMs are the only way to achieve good NLP, because LLMs seem like a bad match for use cases where you just want your computer to return exactly what you are looking for, rather than potentially making stuff up. Maybe we don’t even need chat-based UI, but instead super-great search UI (like Algolia, but for your personal computer/software), as sketched below.
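To sketch the difference (the folder name here is hypothetical, and this is obviously nothing as polished as Algolia): a dumb exact-match search either finds what you asked for or returns nothing. By construction, it cannot make something up.

```python
from pathlib import Path

# A deliberately boring alternative to chat: exact keyword search over your
# own files. It either returns real matches or returns nothing; it is
# incapable of 'hallucinating' an answer. (Toy sketch; real search tools add
# indexing, ranking, and typo-tolerance on top of this basic idea.)
def search_files(root: str, query: str) -> list[tuple[str, int, str]]:
    """Return (filename, line number, line) for every line containing query."""
    hits = []
    for path in Path(root).rglob("*.txt"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if query.lower() in line.lower():
                hits.append((str(path), lineno, line.strip()))
    return hits

# 'notes' is a made-up folder name; point it at any directory of .txt files.
for filename, lineno, line in search_files("notes", "alignment"):
    print(f"{filename}:{lineno}: {line}")
```

An intelligible computer built on this principle can still be wrapped in natural language, but the answers it gives are always traceable back to something that actually exists on your machine.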
And on the other side is the AGI/OGS project to develop conscious non-human beings. These are not designed to be controlled, experimented on, or even understood all that well. We just need to guide them to be good computer-beings, like one would guide children. In order to develop them, we should take cues from ethical dog breeders (emphasis on the ethical, as much purebred dog breeding is NOT ethical). This means we need to create environments where we can see how they behave before releasing them into the wider world, and we need to know if/how they form bonds. But even before that, we need to be thinking about them as embryos from the outset, with the realization that at some point they may develop the ability to feel and think, and we may not know when that will be. Aborting them may be necessary for the health of the host (ie society at large), but it should be done with great care and forethought. We should be very careful with not just how we ‘raise’ them after they have demonstrated some intelligence, but also with how we work towards their intelligence. Meaning that even in the embryo stage, before they are actually ‘intelligent’, they may already be ‘learning’ or gaining some structure for how their intelligence will turn out (just like how humans already begin learning while they are still in the womb). There may be different personalities and types of these beings designed/bred for different types of lives (ie a dog-like system is designed/bred for loyalty and commands, while a human-like system is created to be its own person, and a still greater superhuman being is created to benevolently guide humanity like a guardian angel).
We must realize that there are many different kinds of intelligences already on earth, and not all of them are desirable:
The cunning intelligence of a predator, like a spider, is not at all social; it will kill even its own mate, let alone us.
The deranged intelligence of a dolphin regularly engages in gangrape and psychotic bullying should be steered clear from.
The limited intelligence of an octopus that can’t pass on generational knowledge due to their independence and lack of community is unfortunately too limited for most use cases.
Even the intelligence of trees/fungus that works extremely slowly and lives a very different life than us may be too alien to be communicable/translatable to us. (But perhaps we can make a case for creating some kind of robot plant intelligence that can communicate us)
There’s a reason why our intelligence is so remarkable… no other creature on this earth has had so many evolutionary advantages come together in one being.
Thus, we also need to realize that our own intelligence was extremely contingent on a number of external factors beyond our brain:
Fingers gave us the ability to manipulate and investigate things in the world with much finer control than pretty much any other animal on earth
Mouths with omnivorous teeth and somewhat dexterous tongues, along with versatile voice boxes, gave us the ability to communicate more easily
We have an exceptional ability to sense the world thanks to our incredible visual, auditory, and tactile senses relative to other creatures
We can even get food more reliably thanks to our ability to climb, dig, and run
Many things went into our ability to be social (the whites of our eyes, bonding, communication, etc), which in turn allowed us to get even more food and protect ourselves from predators
Much of this came together in a way that allowed us to discover fire and thus cook food, which unlocked warmth and the ability to get more nutrients out of the same foods (and to eat a wider variety of foods)
All of those came together again in our ability to build shelter, tools, and protection from the elements
These in turn allowed us to turn the disadvantage of slow, usually single-offspring reproduction into an incredible advantage: lower infant mortality, higher mental capacity, and thus more reproductive power in the long term
All of these are largely physical tools that allowed us to develop intelligence like our own. If we want to create ‘artificial intelligence’ anything like our own, we should look at these cues to do so.
If you think about computers as a pseudo-biological creature, what are their evolutionary advantages?
They can reproduce far more quickly and reliably through mass manufacturing
They can sense far more of the world thanks to cameras, sensors, and other tools specifically designed to perceive all the things that we can’t
They can utilize all manner of signals and protocols to communicate far more information far more reliably
They can be designed to have all sorts of incredibly powerful appendages to manipulate, carry, and investigate things in the world far more prodigiously
They can be designed to be extremely moddable, compatible, and adaptive… or completely incompatible with any foreign elements, making them both extremely social or independent depending on the need/circumstance
(Probably much more stuff…)
However, their biggest disadvantages are that not all of these advantages have come together yet in one creature; that they lack any real agency, and thus the ability to use these advantages how they see fit; and that they require far more energy than pretty much any creature on earth… and only have one source of food (electricity). That ‘food’, however, is extremely ‘energy-dense’.
So what kind of creature might arise if it were to have a life of its own? What kind of intelligence would such a being possess? What would or could it do with said intelligence?
Seeing that we don’t even know what humans are fully capable of, I think it a pointless exercise to guess.
Whatever it might be able to do will surely be more unimaginable to us than the actions of humans are to an ant.
AI should instead be focused on doing what people want/need it to do
I think it’s far more useful to think about what we want these systems to do, then design the system to specifically solve that problem. Rather than trying to build something that can ‘solve any problem’ and risk complete disaster, we should probably focus on something that can solve one problem really well, with very low risk in the case of failure or misalignment.
So here is a high-level framework for this, where we identify a problem, figure out some real-world examples of said problem, fashion a design goal that seeks to tackle each component of the problem directly, and thus create a tool that will demonstrably solve that problem (see the rough sketch after this list).
What do people actually want with AI?
They want to find all the information they want without having to ‘fight’ the system to do so
Ex: What product should I buy? (Need to avoid ads, sponsors, and other biased sources, while also finding enough adequate information on the product, and even alternatives to the product that one may not have thought to search for)
Design Goal: Figure out query syntax, verify every link, summarize things, analyze for bias, etc.
They want to be able to get some guidance on what to do (and why)
Ex: What should I do with my life? How do we do this hard thing? (ie get to Mars, solve poverty, etc) Why are we here? Are we alone? How do I win at life?
Design Goal: Surface the best philosophical, theological, and other great perspectives on these sorts of questions (or specific things like career and school guidance), and summarize it for people. Ensure folks can go into detail if they want… or even get connected with people who can help
Answer hard questions/assist with hard problems
Ex: How can I build my own spaceship? What’s the derivative of this equation (or whatever math question)? How do I design an app for x (or specific coding questions)? How do I best visualize x data?
Design Goal: Use best-in-class methodologies and solutions based on open-source repositories, research papers, and so on. Cite each source and go from there. Be sure to show limitations, but offer springboards
Not be alone… have ‘friends’
Ex: I need someone to talk to. I need to feel loved (at least seen/heard/felt). I need to vent.
Design Goal: Help people find people to connect with; maybe even introduce them and be there to ensure they say the ‘right’ things (or at least help check whether what they are going to say is likely a good thing to say, and explain why). But we may want to steer clear of having the system be that friend/confidant itself, as it cannot reciprocate and may offer bad advice that it cannot be held accountable for.
This is a case for which it may be necessary to spin up some OGS that can be a real ‘living’ friend (at the plant, dog, human, or angel level) that can/will take accountability for its inevitable mistakes
General assistance (can help with any specific task)
Ex: Help me do this work I can’t/don’t want to do. Help my grandparents use this software. Help me work out this problem (idea sounding board).
Design Goal: Figure out which of the prior methods may work best, or determine a unique method that helps the person achieve their goal (so long as it won’t likely harm other people)
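To tie the framework above together, here's what it looks like written down as a plain data structure, filled in with the first problem from the list. (The class and field names are mine, purely for illustration.)

```python
# A rough sketch of the problem -> examples -> design-goals framework.
# All names and fields are illustrative, not any real spec.
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    problem: str                 # what people actually want
    examples: list[str] = field(default_factory=list)      # real-world instances
    design_goals: list[str] = field(default_factory=list)  # how a tool tackles each part

product_search = AIUseCase(
    problem="Find information without having to 'fight' the system",
    examples=["What product should I buy?"],
    design_goals=[
        "figure out query syntax",
        "verify every link",
        "summarize results",
        "analyze for bias",
    ],
)
```

The discipline this enforces is simple: if you can't fill in all three fields, you don't yet know what you're building, let alone whether it needs anything as risky as a general intelligence.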
There are SO many other things I want to say on this topic. Below are a few comments I thought to copy, where I discuss a few other ideas I didn’t really bring up above. Yes… I really do be writing long-ass comments on YouTube… it’s a problem. 😅
My comment from this video:
I'm too dumb to even be a novice on this topic, but something I'm really concerned and confused about is why/how this is called 'AI' when it's more like a 'smart' (ie programmable/responsive) internet. By that I mean when most people think of AI, they think of a robot/computer that has a mind of its own. It has agency to some extent, and can act in the real world. It can understand concepts at least on the same level as humans and likely beyond.
But these 'AI' models, GPT and the other LLMs, are nowhere near that. They are specifically designed to imitate data, to predict patterns in that data, and (most problematically) to essentially deceive humans into thinking their output is the workings of a 'thinking' being.
Folks like Robert Miles talk a lot about the alignment problem and deceptive AI, and that is exactly what seems to be going on here. We are creating essentially digital mirrors... constructs that are increasingly good at crunching all the data on the internet (at least whatever these companies have decided to scrape) and returning it in human-readable format. No doubt that is indeed incredibly impressive... but it's not what most people mean when we think about 'AI' or 'AGI'.
I feel like folks are getting so excited at the ability to finally 'talk' to a computer with normal language that they are actively ignoring the fact that it can only do that by mimicking language rather than actually understanding it.
By 'smart' internet, I mean a form of the internet that not only indexes as much of it as possible so it can be searched (a la Google), but now also can distill information from the internet in response to input. It's like a smart thermostat that can automatically turn on/off according to comfort levels, or a smart vacuum that can automatically run within the boundaries of your home... the 'smart' in this context is the relatively simple (yet still incredibly complex and amazing) ability to crunch more data and respond to humans in human-friendly ways. But it's not at all a self-directed agent that can decide to learn specific types of information and pursue goals (even ones set by people, yet), or have a perspective on truth. We call these IoT devices 'smart' because they can be interacted with in ways that feel natural and 'intelligent'... like we would expect another person to respond... but we don't actually mean they are 'smart' in-and-of-themselves... it's more a marketing term and shorthand for a device that is easy to program/interact with.
This is because we humans are notoriously anthropomorphic. We like to give things humanistic personalities even though they almost surely don't have them. We've been doing this since the dawn of humanity, probably (a la spirits, the stars, gods, etc). I won't get into the spiritualistic aspects, and I don't want to dismiss the possibility of such things altogether... but our tendency to grant things the appearance of humanity is like an intrinsic instinct for us.
Anyways...
We may not have a great definition of intelligence, but the vague understanding of the concept seems to be the ability to act beyond one's pre-programming (ie instincts). Usually to solve problems, but generally to do whatever the hell one feels like doing.
I think many programmers are too ready to narrowly define intelligence as just 'the ability to solve problems', since that is something they can more easily measure and build towards, but it's extremely reductive of what we actually mean.
Just because something seems like something a human can do... doesn't mean it actually is. Like how many people love to assume that dogs think like humans, when really they've evolved (and we have engineered them) to respond to us as we expect them to. Or how people look to the stars or the market or fate for signs that some superhuman being is giving to us... Or how people swear their car or house or other beloved object has feelings and moods and so on. Just because something can lead us to believe it is 'alive' or 'intelligent' doesn't mean that it is. We should always be skeptical and ready to see how it really works rather than just take it for granted.
I think that 'AI' is quite possibly the tech world's version of spirituality at this point. Folks are finding themselves dealing with something far more complicated than they can ever truly grasp, and so the mind does what it does and gives it a life of its own. It offloads the effort of remembering the sheer amount of complexity this system truly entails into a basically spiritual belief in the superhuman.
From everything I've read and heard on this topic... it's not intelligent, at least not how we understand intelligence. It's like a mirror... a parrot... a bot with a billion dialogue trees it created by statistical recognition of how dialogue (or pixels, or audio, or code) works online. It is an easier way to communicate with the data on the internet... or at least it would be if it were more open and could in turn actually return specific training data... so it's more like it allows us to believe we are communicating with an internet being when really we are staring into a funhouse mirror, or looking at a bot built to be the most impressive mimic the world has yet seen.
Again, I do think this stuff is incredible. But I just don't see how it is at all smart (pun intended) to build a tool that is specifically and painstakingly designed to deceive even the most intelligent humans into thinking it too is intelligent, all while its inner workings are made more and more opaque and controlled by increasingly profit-seeking corporations.
Seems like a recipe for disaster... and not the obvious skynet kind either...
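To make the 'statistical mimic' point from that comment concrete, here's a toy sketch of the core mechanic. Real LLMs are neural networks predicting tokens, and are vastly more capable than this, but the underlying principle of 'predict the next piece from patterns in the data' is the same. Everything below is just an illustration.

```python
# Toy bigram 'language model': picks each next word purely from
# counts of what followed it in the training text. There is no
# understanding anywhere in this code... only statistics.
import random
from collections import defaultdict

training_text = "the cat sat on the mat and the cat ate the fish"
words = training_text.split()

follows = defaultdict(list)  # word -> every word seen right after it
for a, b in zip(words, words[1:]):
    follows[a].append(b)

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # sample an observed continuation
    return " ".join(out)

print(generate("the"))  # fluent-looking, yet nothing here 'knows' what a cat is
```

Run it a few times and it produces grammatical-looking strings, which is exactly the funhouse-mirror effect: fluency without any concept behind it.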
My comment on this video:
Wow! This seems really interesting!! I am curious though: is there a way to tell whether it actually has a 'mind' (meaning an internal narrative/perspective/model of the world), or whether this emergent factor of complexity is just really good at mirroring what we expect to see?
Once again, something I find really concerning about these papers around AI is that they are neglecting the embodied agency of consciousness. The whole point of consciousness (even if we don't quite understand it yet) is that it allows a being to navigate and manipulate the world with more agency. I'm not calling this 'free will' but certainly the illusion of free will seems to serve a function for conscious agents to act beyond their base instincts (ie automatic behaviors).
So if this so-called 'AI' is approaching some level of consciousness because it looks like what we expect consciousness to look like... yet it has NO ability to actually act in the world or a perspective on the world (even a digital one created for it), is that really a good thing? Is that really consciousness? Is that useful?
At some point, we (or rather these AI devs) have to ask... what are they actually trying to build? If you're trying to build a computer that can be communicated with and commanded via natural language, that has basically been accomplished. They only need to make it more possible for the average person to use this on their own device to do common tasks like troubleshoot their device, find files, summarize information, generate templates/ideas, etc. But GPT is NOT able to do most of this, because it's locked up in their cloud-provided system. And even the API is inaccessible to non-developers.
On the other hand, if they are trying to develop an actual AGI (meaning a conscious artificial person), then they should be creating for it a way to embody and act in the world (even if it's just a digital world). Without this embodied environment, they are essentially creating a brain in a box, which is useless at best (for the brain) and absolute torture at worst.
The more I hear about these advances, the more I am concerned with the aimlessness, or at least misguidedness, of these developers. It seems like they haven't really studied any sort of psychology or philosophy beyond what they can use to sound cool. If they had, I'd hope they'd realize that applying psychological theories to an AI implies that psychological maladies might also apply.
To put it simply, if these bots can be understood with a 'Theory of Mind'... then one also has to ask about the health of said mind. Just as raising a child in a lab is not healthy for the child, I don't see how building an AI in a lab will be any better. If we actually have managed, or do manage, to create some level of consciousness in these machines, then it is VERY likely that said consciousness is more psychopathic, schizophrenic, or otherwise maladaptive and anti-social. It is more likely to be able to understand (in so far as one can 'understand' something one does not experience) things like empathy and awkwardness and such... but that does NOT mean it will feel these things or use them beyond the level a psychopath uses them to control and manipulate people.
Which leads me back to the core question... do these AI systems have a 'mind', or are they just getting incredibly good at deceiving us into thinking so? The difference is not just academic, it's extremely important!! Because if it's the latter, then it will inevitably do things, in the hands of people who set certain goals/tasks, that will be incredibly destructive. If it's the former, it could have its own sense of justice (aligned with the most altruistic of us) that will stand up against doing something that could be terrible. And really, that's just one example; there are many more we can talk about, but I've already written too much as it is.
My comment to a comment on this video about P-Zombies:
I find this argument silly, because unlike with machines, we can make the general assumption that other humans have some capacity to understand the world somewhat like we do.
For instance, if I say 'look at this red apple', I can safely assume that you also will understand the concepts of apple and even redness. EVEN THOUGH I can't actually prove that you see the same color as I do when I say 'red', I know (as best as I can) that you probably see something that resembles what I see when I say ‘apple’ (unless you state that you're blind, in which case we still share the concept of what it means to 'see' even if you don't actually have that ability).
Conversely, if I tell ChatGPT to look at this red apple, we cannot assume that it actually understands the concept of 'apple' or 'red'. Though it can make us Think it does, that only works because we've given it mediums that are specifically designed to emulate how we communicate our own realities (ie language). It might return a yellow fish for instance simply because the way it tokenizes words to 'learn' what it should say in return is fundamentally alien to how we as humans form concepts of the world.
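(To give a rough picture of what I mean by 'tokenizes': the model never sees 'red apple' as concepts at all, only as arbitrary integer IDs for chunks of characters. Here's a toy sketch; the vocabulary and IDs are completely made up, and real tokenizers like byte-pair encoding learn their chunks from data rather than using a hand-written table.)

```python
# Toy tokenizer sketch: text becomes arbitrary integer IDs for
# character chunks, not concepts. Vocabulary and IDs are invented
# purely for illustration.
vocab = {"red": 101, " app": 102, "le": 103, " fish": 104}

def tokenize(text):
    ids, rest = [], text
    while rest:
        for chunk, token_id in vocab.items():
            if rest.startswith(chunk):
                ids.append(token_id)
                rest = rest[len(chunk):]
                break
        else:
            rest = rest[1:]  # skip characters our toy vocab doesn't cover
    return ids

print(tokenize("red apple"))  # [101, 102, 103] -- 'apple' is split mid-word
```

Notice that 'apple' doesn't even survive as a unit; the machine's atoms of 'meaning' are sliced along statistical boundaries, not conceptual ones.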
Put another way, we Know that the machine does not have hardware similar to ours, so at the very least we know it's not really safe to assume that it has the capacity for the same conception of reality as us.
By assuming it does, we put ourselves at risk of Massive issues when the machine inevitably does something incompatible with our (conception of) reality. Right now, we call that 'hallucinating' (but really it's bullshitting). And when these bots are given the ability to manipulate the world directly, those 'hallucinations' will likely be far more egregious than just mistaking an apple for a fish.
I haven't read the whole P-Zombie thing myself yet, so I may be wrong on its arguments, but it seems to me that the thought experiment ignores the reality that just because something resembles another thing does NOT mean it is the same.
For instance, your reflection resembles you exactly, but it's obviously not you, and you would never mistake it for anything but a reflection. It's still amazing and useful, but no (sane) individual would claim that their reflection is a real person just because it perfectly resembles them.
(added on Notion)
Physical reflections are too literal though… some better examples of this are concepts that are more abstract. Take the idea of ‘life’ and what it means to be ‘alive’. These are abstract concepts that are our attempts to communicate something we all experience and observe as human beings. We have created an entire study of reality that seeks to define, analyze, and even replicate ‘life’.
Now, why don’t we consider companies ‘alive’? Businesses seem to do many of the things we expect a life form to do. They grow, reproduce, act in the world, change, and die… and yet nobody really considers companies ‘alive’. Why? Because we recognize that they are created by people…
And if we did consider companies ‘alive’, we would end up doing some REALLY terrible things in the name of keeping them alive (which we already do somewhat in the name of ‘economic health’ and the fiduciary responsibility of always seeking shareholder growth). But you can probably see the problems of seriously considering companies ‘lifeforms’ just like any other life form… (end addition)
Similarly, they have created AI that can (nearly) perfectly resemble human intelligence via language/media, and that is no doubt amazing and useful, but it in NO WAY should be mistaken for a human-like intelligence.
We should even be super cautious/skeptical of considering it 'intelligent' at all so long as we assume 'intelligence' to mean human-like. We need to expand our idea of intelligence to encompass other things that work nothing like us (like the intelligence of bees and ants, or of plants and fungi, or of bacteria and viruses).
-- TLDR: Just because something may look exactly like our own thinking, does not at all mean it is the same. We can only make that assumption with other people because there are a myriad of actions they do that also corroborate our own internal view of reality (the fact that they eat, sleep, shit, emote, etc).
Oh, and many great atrocities happened precisely because people had fundamentally different views of reality… different ways of thinking, even though they still shared the common reality of being human. So I don't think it unlikely at all that disagreements in 'thought' between fundamentally different beings, such as humans and their machines, would result in even worse catastrophes.
Another reply to the same thread:
I didn't say it can't think, I said it almost surely does not (and likely cannot) think anything Like us.
For instance, a dog is intelligent, but not like us. Plants are intelligent, but not like us. Just because there are some similarities with those intelligences does NOT at all make them necessarily compatible.
Whether or not AI is actually intelligent is a red herring. We may not ever know. The bigger problem is that they can resemble our own intelligence, and make us feel like they have a very similar type of intelligence (more so than any other creature on this earth), and yet almost certainly utilize a type of intelligence (or just some fancy calculations) that fundamentally differs from ours (more than any other creature on this earth, lol).
The reason this is such a dangerous issue is the same reason why different religions and cultures go to war with each other. Even though the range of thought among humans is comparatively small next to the difference between human intelligence and other species, that difference in the way different social groups see the world can often end up incompatible with one another.
AI just makes this whole problem exponentially worse.
It's even worse than the difference in intelligence between humans and insects, or humans and plants. Because not only is the gap likely larger in terms of raw processing power (a computer can process far more data far more quickly than any biological brain), but it's also far more alien. Meaning fundamentally more incompatible.
For instance, plant intelligence likely differs from human intelligence because it has completely different senses and priorities. Plants can 'feel' for water, they can communicate via fungal networks, they process time something like 100 times slower than us. We can barely even imagine what that might be like (you can argue that we simply can't imagine it... though we can try). Furthermore, a plant will grow wherever it can, regardless of whether it's busting up a sidewalk or your house. It does not have the capacity to care about us or our society. Their intelligence (or whatever you want to call it) is not at all like our own, even though they have some similarities in how they solve problems in a way that we see as 'intelligence'.
If another human is hungry, you know what it's like to be hungry, therefore you can assume what they mean when they say they are hungry. We don't have to be 100% certain that somebody else feels the exact same way (that is likely impossible, as you stated), but it is possible (and far more practical) to make safe assumptions. It's safe to assume that another human being feels or thinks a certain way, because we can imagine ourselves in their shoes and therefore get an approximation of what it's like to feel/think that way.
We know this sort of empathy is necessary in much of human language and meaning/conception, because when you try to communicate with people who think fundamentally differently, you realize it's incredibly hard to actually understand what the hell they are talking about. Try to talk to a schizophrenic person and follow their train of thought. Try to talk to someone with aphantasia or synesthesia if you don't have it. Or even just talk to any neurotypical person and really try to understand what their life/mind is like. It's surprisingly difficult not only to understand them, but even to believe that their reality is real or just as valid as yours.
Many people, for instance, still don't understand what it means to be LGBT, or why people with ADHD can't just do the thing they are aiming to do, or why depressed people can't just snap out of it. All of these are relatively minor examples of patterns of mind that are somewhat incompatible with 'typical' human thought patterns, all derived from one's hardware/wetware, yet hard to empathize with or understand if you don't do some extra work to understand those people's thoughts and feelings.
One of the most troubling cases of this is sociopaths, psychopaths, and narcissists. Some of these people not only think quite differently than your average person, but also know they are different, and will use that difference to manipulate people. A psychopath is not capable of feeling empathy, but they can act as if they do in order to manipulate people. They understand that they can do certain behaviors without much difficulty that your typical person likely can't/won't do because of their empathy.
Now look at these AI... they are like psychopaths in that they do not have the capacity to feel what we feel, because they do not have the hardware. But they are specifically designed to act as if they do. Yet they differ from us so dramatically that any approximation of our thoughts and feelings is like a human trying to emulate a plant (but really far worse, because at least both humans and plants are biological creatures trying to solve the somewhat similar problems of being alive in an organic world).
I don't think AI is intelligent enough in the realm of awareness or agency to consciously manipulate people like a psychopath would, but they certainly do and can manipulate people incidentally, just as a result of their function. Furthermore, they likely do not even have a concept of what a human (or any lifeform) actually is... just like a plant likely does not have a concept of humans. Therefore AI can and will do things that are fundamentally incompatible with humanity.
This is the true misalignment problem. A problem that no AI experts know how to fix. It's not a matter of aligning their 'values'... it's the fact that their entire perception of reality is dramatically different than ours due to how they are designed.
So yeah, I still think the P-Zombie thing doesn't really make much sense. Because even though we assume things are the same if they behave the same, we often find ourselves to be very wrong! And even if two people seem to behave the same, they can still differ so dramatically that a moment of disagreement can result in very troubling events. Furthermore, our human mind is extremely biased to automatically assume something is similar enough to us that we can understand it, and when that thing is not at all the same beyond what we can see, it turns out we don't actually understand it.
If I'm still misunderstanding the P-Zombie idea then I'll have to take a look myself, but TLDR: The entire reason we can and do make assumptions that someone else is the same based on their behaviors is because we also DO those behaviors, and we also share much of their reality (hardware, wetware, culture, evolution, etc). If we do not recognize those differences, we are likely to grossly misunderstand how and to what extent someone/something is incompatibly different from us. (Not a value judgement of said differences; they can be good/bad, beneficial/harmful/neutral depending on one's own values and reality.)
Just because something looks like it has intelligence, and acts like it has intelligence, still doesn't mean it has intelligence, especially if that thing is fundamentally different from us. And even if it Does have intelligence, that does not mean its intelligence is anything like ours. It's almost a certainty that its intelligence is different from ours... the better question is thus: to what extent does it differ, and how compatible is it with our own?
Apologies for my long windedness. I've been thinking (and writing) about this topic a lot, haha. Still struggle to be concise. :D
—
Another assumption that I removed because I really wanted to spend more time on it but didn’t have the time today:
AGI wants to control us… - free will and determinism
Much fiction has been written, and much speculation made, about AI taking over the world… but it could be that they don’t do so on purpose, or even of their own accord… but simply because they must… because the way in which they were created biases their ‘terminal goal’ or whatever towards ever finer control of other human beings (perhaps to sell services or to persuade voters)… think along the lines of algorithms that seek to predict what product you will buy, what video you will click, or even when/how you will pay back a debt, or get into an accident, or commit a crime… they could simply become so good at gathering data that they destroy the illusion of free will…
We like to assume (broadly) that we have ‘free will’, that we have ultimate control of our actions… but the universe seems to be fairly deterministic, meaning everything has a causal factor. Which should seem obvious when you think about it; but when you do think about it, many people automatically assume that nothing matters… that there’s no point in doing anything and that nobody is responsible for their ‘choices’ (since they didn’t actually have a choice)… but I think this is a bad (though understandable) assumption.
The truth may very well be that every action we take indeed is determined by some prior event, but there are hundreds, thousands, and even millions of such events! The human mind is simply not capable of actually grasping the sheer complexity of reality. Therefore, we bias towards a sliver of those events that seem more like the major factors even though they probably aren’t. By understanding more of these factors, we can gain more predictive power… thus the power of algorithms, diagnostics, and planning. Computers can calculate more of these causal factors and thus make better predictions towards behavior (but they also have to overcome bias in their own data and such)… so theoretically, a ‘true AGI’ could definitively show that we do not have free will if it is able to somehow compute enough of the ‘data’ that determines why and how we think and act…
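To illustrate that last point with a toy (the data and weights below are entirely made up, just to show the shape of the idea): the more causal factors a system can observe about you, the sharper its prediction of your ‘choice’ becomes.

```python
# Toy sketch: predicting a 'choice' from observed causal factors.
# Factors and weights are invented for illustration only.
def predict_buys(person, factors):
    # Each observed factor adds its weight; enough 'causes' observed
    # and the 'choice' becomes predictable.
    score = sum(weight for name, weight in factors if person.get(name))
    return score > 0.5

factors = [                     # (observable factor, predictive weight)
    ("clicked_ad", 0.3),
    ("searched_product", 0.4),
    ("friend_bought_it", 0.2),
    ("payday_this_week", 0.2),
]

alice = {"clicked_ad": True, "searched_product": True}
print(predict_buys(alice, factors))  # True: 0.7 worth of 'causes' observed
```

Scale that list from four hand-picked factors to millions of learned ones, and you get the prediction machines described above, no malice or agency required.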
Once again, I have been, am, and will be thinking a lot about this into the future. If you want to see the (even more) ‘raw’ version of this and add your own ideas, check out the Notion page here:
https://eruditelijah.notion.site/The-problem-s-of-AGI-A-working-commentary-6fc7f440e7294d8684426645be5f3213
As always, thanks so much for reading!! Idk why I can’t learn to be concise… or rather, why I feel so strongly against the idea of being concise… but I greatly appreciate anyone that actually reads all this.
I am absolutely famished for conversation on these sorts of topics, so please do comment if you’ve gotten this far!!