How to Build Conscious Machines (osf.io)
94 points by hardmaru 1 day ago | 111 comments
AIorNot 2 minutes ago [-]
General Questions for this theory:

Given the following

1. The ONLY way we can describe or define consciousness is through our own subjective experience of consciousness

- (i.e. you can talk about watching a movie trailer like this one for hours, but until you experience it you have not had a conscious experience of it - https://youtu.be/RrAz1YLh8nY?si=XcdTLwcChe7PI2Py)

Does this theory claim otherwise?

2. We can never really tell if anything else beside us is conscious (but we assume so)

How then does any emergent physical theory of consciousness actually explain what consciousness is?

It’s a fundamental metaphysical question

I assume, as I have yet to finish this paper, that it argues for the conditions needed to create consciousness rather than an explanation of what exactly the phenomenon is (first-person experience, which we assume happens within the Mind, which seems to originate as a correlate of electrical activity in the brain). We can correlate the firing of a neuron with a thought, but neural activity is not thought itself - what exactly is it?

talkingtab 1 days ago [-]
The topic is of great interest to me, but the approach throws me off. If we have learned one thing from AI, it is the primal difference between knowing about something and being able to do something. [With extreme gentleness, we humans call it hallucination when an AI demonstrates this failing.]

The question I increasingly pose to myself and others is: which kind of knowledge is at hand here? And in particular, can I use this to actually build something?

If one attempted to build a conscious machine, the very first question I would ask is: what does conscious mean? I reason about myself, so that means I am conscious, correct? But that reasoning is not a singularity. It is a fairly large number of neurons collaborating. An interesting question - for another time - is whether a singular entity can in fact be conscious. But we do know that complex adaptive systems can be conscious, because we are.

So step 1 in building a conscious machine could be to look at some examples of constructed complex adaptive systems. I know of one, which is the RIP routing protocol (now extinct? RIP?). I would bet my _money_ that one could find other examples of artificial CAS pretty easily.

[NOTE: My tolerance for AI style "knowledge" is lower and lower every day. I realize that as a result this may come off as snarky and apologize. There are some possibly good ideas for building conscious machines in the article, but I could not find them. I cannot find the answer to a builder's question, "how would I use this?", but perhaps that is just a flaw in me.]

Mikhail_Edoshin 9 hours ago [-]
Socrates said that he knows his knowledge is nil, and others do not even know that. What he meant was that there are two kinds of knowledge, the real one and the one based essentially on hearsay, and that most people cannot even see that distinction. It is not that the false knowledge is useless; it is highly useful. For example, the knowledge of the Archimedes law is largely false; the true knowledge of that law was obtained by Archimedes, and everyone else was taught it. But false knowledge is fixed. It cannot grow without someone obtaining true knowledge all the time. And it is also deficient in a certain way, like a photograph compared to the original. An LLM operates only with false knowledge.
K0balt 1 days ago [-]
I’d be careful about your modeling of LLM “hallucination”. Hallucination is not a malfunction. The LLM is correctly predicting the most probable semantic sequence to extend the context, based on its internal representation of the training process it was coded with.

The fact that this fails to produce a useful result is at least partially determined by our definition of “useful” in the relevant context. In one context, the output might be useful, in another, it is not. People often have things to say that are false, the product of magical thinking, or irrelevant.

This is not an attempt at LLM apologism, but rather a check on the way we think about useless or misleading outcomes. It’s important to realize that hallucinations are not a feature, nor a bug, but merely the normative operating condition. That the outputs of LLMs are frequently useful is the surprising thing that is worth investigating.

If I may, my take on why they are useful diverges a bit into light information theory. We know that data and computation are interchangeable. A logic gate which has an algorithmic function is interchangeable with a lookup table. The data is the computation, the computation is the data. They are fully equivalent on a continuum from one pure extreme to the other.
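
To make the gate/lookup-table point concrete, here is a minimal sketch (my own toy example, not something from the comment or the thesis):

    # A toy illustration of "the data is the computation": the same XOR gate
    # expressed once as an algorithm and once as pure data (a lookup table).
    def xor_algorithmic(a: bool, b: bool) -> bool:
        return (a or b) and not (a and b)

    # the identical function expressed as data
    XOR_TABLE = {
        (False, False): False,
        (False, True): True,
        (True, False): True,
        (True, True): False,
    }

    def xor_lookup(a: bool, b: bool) -> bool:
        return XOR_TABLE[(a, b)]

    for a in (False, True):
        for b in (False, True):
            assert xor_algorithmic(a, b) == xor_lookup(a, b)
    print("gate and lookup table agree on every input")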

Transformer architecture engines are algorithmic interpreters for LLM weights. Without the weights, they are empty calculators, interfaces without data on which to calculate.

With LLMs, the weights are a lookup table that contains an algorithmic representation of a significant fraction of human culture.

Symbolic representation of meaning in human language is a highly compressed format. There is much more implied meaning than the meaning which is written on the outer surface of the knowledge. When we say something, anything beyond an intentionally closed and self-referential system, it carries implications that ultimately end up describing the known universe and all known phenomena if traced out to its logical conclusion.

LLM training is significant not so much for the knowledge it directly encodes, but rather for implications that get encoded in the process. That’s why you need so much of it to arrive at “emergent behavior”. Each statement is a CT beam sensed through the entirety of human cultural knowledge as a one-dimensional sample. You need a lot of point data to make a slice, and a lot of slices to get close to an image... But in the end you capture a facsimile of the human cultural information space, which encodes a great deal of human experience.

The resulting lookup table is an algorithmic representation of human culture, capable of tracing a facsimile of “human” output for each input.

This understanding has helped me a great deal to understand and accurately model the strengths and weaknesses of the technology, and to understand where its application will be effective and where it will have poor utility.

Maybe it will be similarly useful to others, at least as an interim way of modeling LLM applicability until a better scaffolding comes along.

talkingtab 22 hours ago [-]
Interesting thoughts. Thanks. As for your statement: "That the outputs of LLMs are frequently useful is the surprising thing that is worth investigating". In my view the hallucinations are just as interesting.

Certainly in human society the "hallucinations" are revealing. In my extremely unpopular opinion much of the political discussion in the US is hallucinatory. I am one of those people the New York Times called a "double hater" because I found neither presidential candidate even remotely acceptable.

So perhaps if we understood LLM hallucinations we could then understand our own? Not saying I'm right, but not saying I'm wrong either. And in the case that we are suffering a mass hallucination, can we detect it and correct it?

esafak 1 days ago [-]
Interesting stuff. I don't have time to read a dissertation so I skimmed his latest paper instead: Why Is Anything Conscious? https://arxiv.org/abs/2409.14545

In it he proposes a five-stage hierarchy of consciousness:

0 : Inert (e.g. a rock)

1 : Hard Coded (e.g. protozoan)

2 : Learning (e.g. nematode)

3 : First Order Self (e.g. housefly). Where phenomenal consciousness, or subjective experience, begins. https://en.wikipedia.org/wiki/Consciousness#Types

4 : Second Order Selves (e.g. cat). Where access consciousness begins. Theory of mind. Self-awareness. Inner narrative. Anticipating the reactions of predator or prey, or navigating a social hierarchy.

5 : Third Order Selves (e.g. human). The ability to model the internal dialogues of others.

The paper claims to dissolve the hard problem of consciousness (https://en.wikipedia.org/wiki/Hard_problem_of_consciousness) by reversing the traditional approach. Instead of starting with abstract mental states, it begins with the embodied biological organism. The authors argue that understanding consciousness requires focusing on how organisms self-organize to interpret sensory information based on valence (https://en.wikipedia.org/wiki/Valence_(psychology)).

The claim is that phenomenal consciousness is fundamentally functional, making the existence of philosophical zombies (entities that behave like conscious beings but lack subjective experience) impossible.

The paper does not seem to elaborate on how to assess which stage the organism belongs to, and to what degree. This is the more interesting question to me. One approach is IIT: http://www.scholarpedia.org/article/Integrated_information_t...

The author's web site: https://michaeltimothybennett.com/

root_axis 7 hours ago [-]
> The claim is that phenomenal consciousness is fundamentally functional, making the existence of philosophical zombies (entities that behave like conscious beings but lack subjective experience) impossible.

This doesn't really address the hard problem, it just asserts that the hard problem doesn't exist. The meat of the problem is that subjective experience exists at all, even though in principle there's no clear reason why it should need to.

Simply declaring it as functional is begging the question.

For example, we can imagine a hypothetical robot that could remove its hand from a stove if its sensors determine that the surface is too hot. We don't need subjective experience to explain how a system like that could be designed, so why do we need it for an organism?

simonh 5 hours ago [-]
A claim is not an assertion. I don’t see any assertion here that the hard problem doesn’t exist, just an expression of a belief that it may be solvable and an outline of maybe how.

> Simply declaring it as functional is begging the question.

Nobody is ‘declaring’ any such thing. I loathe this kind of lazy pejorative attack accusing someone of asserting, declaring something, just for having the temerity to offer a proposed explanation you happen to disagree with.

What your last paragraph is saying is that stage 1 isn’t conscious therefore stage 5 isn’t. To argue against stage 5 you need to actually address stage 5, against which there are plenty of legitimate lines of criticism.

lordnacho 6 hours ago [-]
First of all, how is 5 different from 4? Modelling the internal monologue of someone else is Theory of Mind, isn't it?

Next, we gotta ask ourselves, could you have substrate independence? A thing that isn't biological, but can model other level-5 creatures?

My guess is yes. There are all sorts of other substrate independence.

pengstrom 5 hours ago [-]
My stab:

2: Implicit world. Reacted to but not modeled.

3: Explicit world and your separation from it.

4: A model that includes other intelligences of level 3 that you have to take into consideration. World resources can be shared or competed for.

5: Language. A model of others as yourself; their models include yours too. Mutual recursion. Information can be transmitted mind to mind.

antonvs 9 hours ago [-]
> making the existence of philosophical zombies (entities that behave like conscious beings but lack subjective experience) impossible.

Would, or does, the author then argue that ChatGPT must be conscious?

aswegs8 5 hours ago [-]
Not sure why this is getting downvoted. According to the above definition, LLMs are level 5 consciousness, since they have a theory of self and others.
flimflamm 9 hours ago [-]
I wonder if 6 would be understanding one's own thinking. Currently humans don't understand this. Thoughts just pop into our heads and we try to explain what caused them.
Animats 9 hours ago [-]
7. Full scalability. Can operate organizations of large size without confusion.
tempodox 8 hours ago [-]
One can dream. Seeing how people already start to get confused when a logical negation is in play, I'm not optimistic.
pengstrom 5 hours ago [-]
I'm more optimistic but cynical. Everybody has the capacity, but can't be bothered for your sake specifically. A highly intelligent person can casually entertain several theoretical notions. A lesser mind can too, but it requires more effort. Effort that might be better spent elsewhere, or effort that makes social interaction awkward.
aswegs8 5 hours ago [-]
Higher consciousness does not imply cooperation, even though we idealize it to do so. Cooperation is another dimension - it is easy to imagine a being that has a higher form of consciousness but is not interested in cooperation or does not engage in it unless it can take advantage of others.
root_axis 7 hours ago [-]
Not sure what you mean. It seems like thoughts must necessarily pop into our head, how would we know our thoughts before we think them?
phrotoma 1 days ago [-]
Dang, this is great stuff. You may enjoy this piece that tackles similar themes but focuses on what use evolution has for consciousness.

My reading of it is that the author suggests global workspace theory is a plausible reason for evolution to spend so much time and energy developing phenomenal consciousness.

https://www.frontiersin.org/journals/psychology/articles/10....

signal-intel 1 days ago [-]
Do you (or this paper) think consciousness exists in the humans out there who have no inner narrative?
jbotz 23 hours ago [-]
Maybe all humans (and indeed other intelligent mammals) have an inner narrative, but it doesn't necessarily involve language. A mime or a silent film can tell a story without words, and the inner narrative can likewise be in visual or other sensory form.
the_gipsy 5 hours ago [-]
Do humans without inner narrative really exist? It could be just a misunderstanding about what "inner narrative" is, or the degree of perception.
xcf_seetan 3 hours ago [-]
In some systems the interruption of the inner dialog, which leads to an inner silence, opens a door to the expansion of perception into other realms.
esafak 1 days ago [-]
That's a fair question. I don't know that the theory of mind mentioned here is the same as an internal monologue. I think one could model other people's minds without conducting an internal monologue, by visualizing it, for example. Maybe the anendophasiacs in the audience can enlighten us.

The author also has a Youtube channel: https://www.youtube.com/@michaeltimothybennett

Lerc 23 hours ago [-]
I can think words in conversations as if I am writing a story (actually, thinking about it, it's more like reading a script), but as far as I can tell I don't experience what most people describe as an internal monologue. I also have aphantasia, which I understand frequently co-occurs with a lack of an internal monologue.

Obviously I'm conscious (but a zombie would say that too). I can certainly consider the mental states of others. Sometimes embarrassingly so: there are a few board games where you have to anticipate the actions of others, where the other players are making choices based upon what they think others might do rather than a strictly analytical 'best' move. I'm quite good at those. I am not a poker player, but I imagine that professional players have that ability at a much higher level than I do.

So yeah, my brain doesn't talk to me, but I can 'simulate' others inside my mind.

Does it bother anyone else that those simulations of others that you run in your mind might, in themselves, be conscious? If so, do we kill them when we stop thinking about them? If we start thinking about them again do we resurrect them or make a new one?

erwan577 6 hours ago [-]
I also lack an internal monologue and have strong aphantasia, so the idea that I might not be conscious made me a bit uneasy—it just felt wrong, somehow. For now, the best I can say is that my worldview, which includes self-consciousness, is abstract. I can put it into words, but most of the time, it doesn’t feel necessary.
jbotz 22 hours ago [-]
The key to your difficulty is "my brain doesn't talk to me"... the solution is to realize that there is no "me" that's separate from your brain for it to talk to. You are the sum of the processes occurring in your brain, and when it simulates others inside your mind, that's nothing but narrative. A simulation is a narrative. You may not perceive this narrative as a sequence of words, a monologue, but it certainly is the result of different parts of your brain communicating with each other, passing information back and forth to model a plausible sequence of events.

So yes, you're conscious. So is my dog, but my dog can't post his thoughts about this on Hacker News, so you are more conscious than my dog.

exe34 23 hours ago [-]
> Obviously I'm conscious

I'm not trying to be pedantic - how do you know? What does consciousness mean to you? Do you experience "qualia"? When you notice something, say "the toast is burning", what goes on in your mind?

> but I can 'simulate' others inside my mind.

Do you mean in the sense of working out how they will react to something? What sort of reactions can they exhibit in your mind?

Sorry if these questions are invasive, but you're as close to an alien intelligence as I'll ever meet unless LLMs go full Prime Intellect on us.

Lerc 17 hours ago [-]
>I'm not trying to be pedantic - how do you know?

That was kinda what my point about zombies was about. It's much easier to assert you have consciousness than to actually have it.

More specifically, I think in pragmatic terms most things asserting consciousness are asserting that they have whatever consciousness means to them, with a subset of things asserting consciousness by dictate of a conscious entity, for whatever consciousness means to that entity. For example, 10 print "I am conscious" is most probably an instruction that originated from a conscious entity. This isn't much different from any non-candid answer though. It could just be a lie. You can assert anything regardless of its truth.

I'm kind of with Dennett when it comes to qualia: the distinction between the specialness of qualia and the behaviour it describes evaporates from any area you look at in detail. I find compelling the thought experiment about the difference between having all your memories of red and blue swapped compared to having all your nerve signals for red and blue swapped. In both instances you end up with red and blue being different from how you previously experienced them. Qualia would suggest you would know which had happened, which would mean you could express it, and therefore there must be a functional difference in behaviour.

By analogy,

5 + 3 = 8

3 + 5 = 8

This --> 8 <-- here is a copy of one of those two above. Use your Qualia to see which.

>Do you mean in the sense of working out how they will react to something?

Yeah, of the sort of "They want to do this, but they feel like doing that directly will give away too much information, but they also know that playing the move they want to play might be interpreted as an attempt to disguise another action". When thinking about what people will do, I am, among those I play games with, the best at knowing which decision they will make. When I play games with my partner we use Scissors, Paper, Stone to pick the starting player, but I always play a subgame of how many draws I can manage. It takes longer but picks the starting player more randomly.

It's all very iocane powder. I guess when I think about it I don't process a simulation to conclusion but just know what their reactions will be given their mental state, which feels very clear to me. I'm not sure how to distinguish the feeling of thinking something will happen and imagining it happening and observing the result. Both are processing information to generate the same answer. Is it the same distinction as the Qualia thing? I'm not sure.

simonh 5 hours ago [-]
I’ve thought about this a bit as my wife substantially has anendophasia and aphantasia, though not total. Even having a rich inner voice myself, I realise that it’s not absolute.

Many, in fact probably most, experiences and thoughts I have are actually not expressed in inner speech. When I look at a scene I see and am aware of the sky, trees, a path, grass, a wall, tennis courts, etc., but none of those words come to mind unless I think to make them, and then only for the few I pay attention to.

I think most of our interpretation of experience exists at a conceptual, pre-linguistic level. Converting experiences into words before we could act on them would be unbelievably slow and inefficient. I think it’s just that those of us with a rich inner monologue find it so easy to do this for things we pay attention to that we imagine we do it for everything, when in fact that is very, very far from the truth.

Considering how I reason about the thought processes, intentions and expected behaviour of others, I don’t think I routinely verbalise that at all. In fact I don’t think the idea that we actually think in words makes any sense. Can people that don’t know how to express a situation linguistically not reason about and respond to that situation? That seems absurd.

the_gipsy 5 hours ago [-]
> Yeah, of the sort of "They want to do this, but they feel like doing that directly will give away too much information, but they also know that playing the move they want to play might be interpreted as an attempt to disguise another action",

That is the internal monologue.

roxolotl 2 hours ago [-]
An internal monologue is when that sentence is expressed via words as if you are hearing it said by yourself inside your head. Someone without an internal monologue can still arrive at that conclusion without the sentence being “heard” in their mind.
the_gipsy 1 hours ago [-]
How?
roxolotl 1 hours ago [-]
I can’t say, as I have an internal monologue and every word I’m typing echoes here in my mind as I type it. But as someone with aphantasia who’s regularly bewildered by questions like “how do you spell” or “how do you get to the grocery store”, I understand that people’s modes of cognition vary immensely. To think that you’d need to, or even be able to, visualize a word to spell it is as foreign a concept to me as not having an internal monologue.
exe34 6 hours ago [-]
Thank you for sharing this!
kingkawn 23 hours ago [-]
I’m not sure you are conscious
photonthug 1 days ago [-]
IIT has always interested me, and after reading some of the detractors[1] I get that it has problems, but I still don't get the general lack of attention/interest or even awareness about it. It seems like a step in the right direction, establishing a viable middle ground somewhere between work in CS or neuroscience that measure and model but are far too reluctant to ever speculate or create a unifying theory, vs a more philosophical approach to theory of mind that always dives all the way into speculation.

[1] https://scottaaronson.blog/?p=1799

fsmv 23 hours ago [-]
The creator of IIT doesn't understand the universality of Turing machines. He thinks that because the physical transistors in CPUs don't have as many connections as neurons in the brain, the hardware is fundamentally limited and cannot be conscious.

He even goes as far as to say that you cannot simulate the brain on a CPU and make it conscious because it's still connection-limited in the hardware. If you understand computer science you know this is absurd: Turing machines can compute any computable function.

He says "you're not worried you will fall into a simulated black hole are you?" but that is an entirely different kind of thing. The only difference we would get by building a machine with hundreds of thousands of connections per node is faster and more energy efficient. The computation would be the same.

candlemas 5 hours ago [-]
Maybe he doesn't think consciousness is a computation.
photonthug 13 hours ago [-]
This is a typical critique that sort of assumes you have to be right about everything to be right about anything. Maybe a useful point of comparison is aether theories in physics. Wrong, sure, but useless? People might argue whether it was always pure distraction or a useful building block, but I wonder what Maxwell or Einstein would say. If nothing else, one needs something to ground one's thinking and to argue against, which is why replacement theories usually need to acknowledge or address what came before. And typically we try to fix bad theories rather than simply discarding them, especially if there's no alternative available. What are the other available "grand unifying theories" of consciousness? To quote the end of Aaronson's rebuttal:

> In my opinion, the fact that Integrated Information Theory is wrong—demonstrably wrong, for reasons that go to its core—puts it in something like the top 2% of all mathematical theories of consciousness ever proposed. Almost all competing theories of consciousness, it seems to me, have been so vague, fluffy, and malleable that they can only aspire to wrongness.

Of course, it's not on Aaronson to rescue the theory he's trying to disprove, but notice that he is out to disprove it and spends his time on that, rather than imagining what axioms might be added or replaced, etc. Proving that having a large Φ-value is not a sufficient condition for consciousness hardly seems devastating "to the core", because finding better descriptions of necessary conditions would still represent significant progress.

Similarly a critique like

> He thinks that because in CPUs the physical transistors don't have as many connections as neurons in the brain, that it's fundamentally limited and cannot be conscious.

seems a little bit narrow. I do agree it seems to misunderstand universality, but on the other hand, maybe it's just distracted by counting IO pins on chips, and what it should focus on more is counting nodes/edges in neural net layers, and whether connection-counting in hardware vs. software might need weighting coefficients, etc. HN loves to celebrate things like the bitter lesson, the rise of LLMs and ML, and the failure of classical logic and rule-based reasoning and NLP. Is all of that same stuff not soft evidence for the relevance, if not the completeness, of IIT?

NoMoreNicksLeft 9 hours ago [-]
>This is a typical critique that sort of assumes you have to be right about everything to be right about anything.

If you don't understand the fundamentals and basics of the underlying science, then you can't really be right about anything at all. It should shock and disturb you to listen to someone get it this wrong, this "not even wrong" level of nonsense. There's no insight to be found in such prattle.

pengstrom 5 hours ago [-]
Strange. My knowledge of the fundamentals and processes in humans still makes me jealous of the apparent ease with which others fare in social situations. Clearly there's more to it than it seems. I'd be wary of equating bottom-up and top-down as principally equivalent.
exe34 23 hours ago [-]
> The computation would be the same.

Assuming of course that Penrose is cuckoo when it comes to consciousness (which I'm happy to assume).

aswegs8 5 hours ago [-]
I am usually not a fan of fanboying (pun intended) but getting an award with the presenter being Joscha Bach is so cool!
klabb3 23 hours ago [-]
> 4 : Second Order Selves (e.g. cat). Where access consciousness begins. Theory of mind. Self-awareness. Inner narrative. Anticipating the reactions of predator or prey, or navigating a social hierarchy.

Cats and dogs most definitely anticipate actions of other animals and navigate (and establish) social hierarchy. Is this even a trait of consciousness?

I’ve spent much time thinking of qualitative differences between human and close animals. I do think ”narrative” is probably one such construct. Narratives come early (seemingly before language). This lays the foundation of sequential step-by-step thinking. Basically it lets you have intermediate virtual (in-mind) steps supporting next steps, whether that’s through writing, oral communication or episodic memory.

An animal can 100% recall and associate memories, such as mentioning the name of a playmate to a dog (=tail wagging). However, it seems like they can neither remember nor project ”what happens next” and continue to build on it. Is it a degree of ability or a fundamental qualitative difference? Not sure.

In either case, we should be careful about overfitting human traits into the definition of consciousness, particularly language. Besides, many humans have non-verbal thoughts, and we are no less conscious during those times.

pengstrom 5 hours ago [-]
I've never gotten the impression an animal was aware it could change me. Sure, it'd make its wants clear until it got what it wanted or got bored, but that's a very primitive form of conduct. The cat clearly knows I can get it more food between meals. The communication is limited, but I've never seen him come up with a better argument than that he really, really wants more food. Dogs are stranger and clearly have a concept of social structure that cats don't, both from their background as pack animals and deliberate domestication for assisting humans in work.
jijijijij 22 hours ago [-]
There is this popular video of a crow repeatedly riding down a snow covered roof on a piece of plastic, basically snowboarding. Seemingly just for fun/play.

For me, it's hard to imagine how such behavior could be expressed without the pure conscious experience of abstract joy and anticipation thereof. It's not the sort of play, which may prepare a young animal for the specific challenges of their species (e.g. hunting, or fighting). I don't think you could snowboard on a piece of bark or something. Maybe ice, but not repeatedly by dragging it up the hill again. It's an activity greatly inspired by man-made, light and smooth materials, novelties considering evolutionary timescales. May even be inspired by observing humans...

I think it's all there, but the question about degree of ability vs. qualitative difference may be moot. I mean, trivially there is a continuous evolutionary lineage of "feature progression", unless we would expect our extent of consciousness to come down to "a single gene". But it's also moot because evolutionary specialization may as well be as fundamental a difference as the existence of a whole new organ. E.g. the energy economics of a bird are restricted by gravity. We wouldn't see central nervous systems without the evolutionary legacy of predation -> movement -> directionality -> sensory concentration at the front. And we simply cannot relate to solitary animals (who just don't care about love and friendship)... Abilities are somewhat locked in by niche and physics constraints.

I think the fundamental difference between humans and animals is the degree of freedom we progressively gained over the environment, life, death and reproduction. Of course we are governed by the wider idea of evolution like all matter, but in the sense of classical theory we don't really have a specific niche, except "doing whatever with our big, expensive brain". I mean, we're at a point where we play meta-evolution in the laboratory. This freedom may have brought extended universality into cognition. Energy economics, omnivorous diet, bipedal walking, hands with freely movable thumbs, language, useful lifespan, ... I think the sum of all these makes the difference. In some way, I think we are like we are exactly because we are like that. Getting here wasn't guided by plans and abstractions.

If it's a concert of all the things in our past and present, we may never find a simpler line between us and the crow, yet we are fundamentally different.

NL807 8 hours ago [-]
These stages are part of a spectrum. There are no hard boundaries.
ben_w 23 hours ago [-]
> Is this even a trait of consciousness?

There's 40 or so different definitions of the word, so it depends which one you're using when you ask the question.

For me, and not just when it comes to machine minds, the meaning I find most interesting is qualia. Unfortunately, I have no particular reason to think this hierarchy helps with that, because, while there might be a good evolutionary reason for us to have a subjective experience rather than mere unfeeling circuits of impulse and response, it's (1) not clear why this may have been selected for (evolution does do things at random and only selects for or against when they actually matter), (2) not clear when in our evolution this may have happened, and (3) not clear how to test for it.

verisimi 8 hours ago [-]
The step from 0 to 1, rock to hard coded (protozoan), assumes life. There's no way I would describe hard coding as life.
pengstrom 5 hours ago [-]
Does it assume life, or are the qualities that make an organism act in spite of us, and agency, one and the same?
thrance 17 hours ago [-]
I'm wary of any classification that puts humans in a special category of their own, as the crown jewel of the tree of life (many such cases).

> The ability to model the internal dialogues of others.

It feels like someone spent a lot of time searching for something only humans can do, and landed on something related to language (ignoring animals that communicate with sounds too). How is this ability any different than the "Theory of mind"? And why is it so important that it requires a new category of its own?

mtbennett 11 hours ago [-]
Fair points. However I don't put humans in a special category, so much as I say I know at least humans are this conscious. I then cite some research on Australian magpies which suggests they may be so conscious too.

It is not different from theory of mind; theory of mind is an important part of it, just not the whole picture. I argue access consciousness and theory of mind go hand in hand, which is a significant departure from how access consciousness is traditionally understood.

kazinator 1 days ago [-]
Where does a human under anaesthesia fit in?
wwweston 1 days ago [-]
Unconscious, in my experience.

But not aconscious.

kazinator 23 hours ago [-]
Is there a definition of unconscious distinct from and more useful than "temporarily aconscious with most memory intact"?
moffkalast 21 hours ago [-]
> phenomenal consciousness is fundamentally functional, making the existence of philosophical zombies (entities that behave like conscious beings but lack subjective experience) impossible

That's interesting, but I think that only applies if the consciousness is actually consistent across some wide set of situations. Like, if you dump a few decent answers into a database and it answers correctly when asked the exact right questions, a la Eliza or the Chinese room, does that mean SQL's SELECT is conscious?

With LLMs it's not entirely clear if we've expanded that database to near infinity with lossy compression or if they are a simplistic barely functional actual consciousness. Sometimes it feels like it's both at the same time.
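
For concreteness, the "few canned answers in a database" case above is roughly this (a toy sketch of my own, not Eliza itself):

    # Toy sketch of canned answers in a table: convincing only for the exact
    # right questions, which is the Eliza / Chinese-room worry.
    CANNED = {
        "are you conscious?": "Of course I am. Aren't you?",
        "how do you feel?": "I'm doing well, thanks for asking.",
    }

    def respond(question: str) -> str:
        return CANNED.get(question.strip().lower(), "I don't understand.")

    print(respond("Are you conscious?"))   # sounds convincing
    print(respond("Do you have qualia?"))  # falls apart immediately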

mock-possum 21 hours ago [-]
Well shit I wonder what level 6 looks like
pengstrom 5 hours ago [-]
There's this video by an unfortunately rude Youtuber that's actually quite interesting. I find his tribalism off-putting (and ironic), but he's clearly thought about it a lot. https://youtu.be/kse87ocS0Uo?si=Gi5f6uQFeJCrm_hF
mtbennett 11 hours ago [-]
I had the same question.
moffkalast 21 hours ago [-]
Some kind of multi-tier 5 hivemind perhaps.
Avicebron 22 hours ago [-]
> "There are a few other results too. I’ve given explanations of the origins of life, language, the Fermi paradox, causality, an alterna- tive to Ockham’s Razor, the optimal way to structure control within a company or other organisation, and instructions on how to give a computer cancer"

Sighs

mtbennett 11 hours ago [-]
We like to have fun here.
AIorNot 2 hours ago [-]
Mtbennett:

You propose a physicalist theory which is super interesting and I will read it in depth

But question: what is consciousness itself except as can be described by consciousness?

What do you make of the idea that consciousness (or universal consciousness of some form) is the fundamental substrate for existence?

E.g. (link to a Rupert Spira video: https://youtu.be/FEdySF5Z4xo?si=z2fEgEW8AG3CcCC2)

Or as in analytic idealism of Bernardo Kastrup

https://youtu.be/P-rXm7Uk9Ys?si=78165IH_ZIvWnJC9

disambiguation 17 hours ago [-]
I mainly read sections II and XII+, and skimmed others. My question is: does the author ever explain or justify handwaving "substrate dependence" as another abstraction in the representation stack, or is it an extension of "physical reductivism" (the author's position) as a necessary assumption to forge ahead with the theory?

This seems like the Achilles' heel of the argument, and IMO takes the analogy of software and simulated hardware and intelligence too far. If I understand correctly, the formalism can be described as a progression of intelligence, consciousness, and self-awareness in terms of information processing.

But.. the underlying assumptions are all derived from the observational evidence of the progression of biological intelligence in nature, which is.. all dependent on the same substrate. The fly, the cat, the person - all life (as we know it) stems from the same tree and shares the same hardware, more or less. There is no other example in nature to compare to, so why would we assume substrate independence? The author's formalism selects for some qualities and discards others, with (afaict) no real justification (beyond some finger wagging at Descartes and his pineal gland).

Intelligence and consciousness "grew up together" in nature but abstracting that progression into a representative stack is not compelling evidence that "intelligent and self-aware" information processing systems will be conscious.

In this regard, the only cogent attempt to uncover the origin of consciousness I'm aware of is by Roger Penrose. https://en.wikipedia.org/wiki/Orchestrated_objective_reducti...

The gist of his thinking is that we _know_ consciousness exists in the brain, and that it's modulated under certain conditions (e.g sleep, coma, anesthesia) which implies a causal mechanism that can be isolated and tested. But until we understand more about that mechanism, it's hard to imagine my GPU will become conscious simply because it's doing the "right kind of math."

That said, I haven't read the whole paper. It's all interesting stuff and a seemingly well organized compendium of prevailing ideas in the field. Not shooting it down, but I would want to hear a stronger justification for substrate independence, specifically why the author thinks their position is more compelling than Penrose's quantum dualism.

mellosouls 6 hours ago [-]
> we _know_ consciousness exists in the brain

But we don't know it originates there (see any panpsychic-adjacent philosophy for instance), which counters any attempt to rule out alternative mechanisms (your GPU or otherwise) to support it.

qgin 1 days ago [-]
Consciousness is something you know you have, but you can never know if someone else has it.

We extend the assumption of consciousness to others because we want the same courtesy extended to us.

Balgair 3 hours ago [-]
Aside (for Father's day):

I find it a bit ... cute (?) that all these philosophers that debate this kind of stuff [0] seem to mostly be childless bachelors.

Like, men that have had kids just don't seem to happen upon the issue of 'how do I know that other people exist?'. Whether it be due to sleep deprivation or some other little thing in raising a child, men that have the tykes just don't question their reality.

Then you get to mothers. Now, our sources for ancient mother authors, and philosophers in particular, are just about nonexistent. And I'll have to chime in here that my own education on modern mothers' thoughts about consciousness is abysmal. But from the little reading I've done in that space - yeah, no, mothers don't tend to think that their kids aren't equally real to themselves. I think it's something about having a little thing in you kicking your bladder and lungs for a few months straight and then tearing apart your boobs for another while. Oh, yeah, and birth. That's a pretty 'real' experience.

Look, I dunno what my observation says really, or if it's even a good one, just that I had it bopping around for a while.

[0] Descartes, Nietzsche, Plato (not Socrates or Aristotle here), etc. And, yes, not all of them either. But not you, dear commenter.

pengstrom 19 minutes ago [-]
When you run out of things to occupy yourself with you end up thinking about thinking
paulddraper 23 hours ago [-]
There are a couple definitions of consciousness
moffkalast 20 hours ago [-]
Class consciousness, comrade.
gcanyon 23 hours ago [-]
The obvious question (to me at least) is whether "consciousness" is actually useful in an AI. For example, if your goal is to replace a lawyer researching and presenting a criminal case, is the most efficient path to develop a conscious AI, or is consciousness irrelevant to performing that task?

It might be that consciousness is inevitable -- that a certain level of (apparent) intelligence makes consciousness unavoidable. But this side-steps the problem, which is still: should consciousness be the goal (phrased another way, is consciousness the most efficient way to achieve the goal), or should the goal (whatever it is) simply be the accomplishment of that end goal, and consciousness happens or doesn't as a side effect.

Or even further, perhaps it's possible to achieve the goal with or without developing consciousness, and it's possible to not leave consciousness to chance but instead actively avoid it.

novaRom 6 hours ago [-]
Consciousness is not required for efficient AI agents, but it might be useful if your agent should have self-preservation. However, an agent without embodiment, instincts, and emotions can call its own existence into question. Any powerful agent will find a way to control its own existence.
gcanyon 4 hours ago [-]
> Any powerful agent will find a way to control its own existence.

See, I think that's not a given. To my point, I'm acknowledging the possibility that consciousness/self-determination might naturally come about with higher levels of functionality, but also that it might be inevitable or it might be optional, in which case we need to decide whether it's desirable.

catigula 23 hours ago [-]
Consciousness is an interesting topic because if someone pretends to have a compelling theory of what's actually going on there, they're actually mistaken or lying.

The best theories are completely inconsistent with the scientific method and "biological machine" ideologists. These "work from science backwards" theories like IIT and illusionism don't get much respect from philosophers.

I'd recommend looking into panpsychism and Russellian monism if you're interested.

Even still, these theories aren't great. Unfortunately it's called the "hard problem" for a reason.

canadiantim 1 days ago [-]
The important point, I believe, is here:

> what is consciousness? Why is my world made of qualia like the colour red or the smell of coffee? Are these fundamental building blocks of reality, or can I break them down into something more basic? If so, that suggests qualia are like an abstraction layer in a computer.

He then proceeds to assume one answer to the important question of: is qualia fundamentally irreducible or can it be broken down further? The rest of the paper seems to start from the assumption that qualia is not fundamentally irreducible but instead can be broken down further. I see no evidence in the paper for that. The definition of qualia is that it is fundamentally irreducible. What is red made of? It’s made of red, a quality, hence qualia.

So this is only building conscious machines if we assume that consciousness isn’t a real thing but only an abstraction. While it is a fun and maybe helpful exercise for insights into system dynamics, it doesn’t engage with consciousness as a real phenomenon.

jasonjmcghee 1 days ago [-]
The smell of coffee is a combination of a bunch of different molecules that coffee releases into the air, which together we associate with "the smell of coffee".

I'm not even sure if we know why things smell the way they do - I think molecular structure and what they're made of both matter - like taste, though again not sure if we know why things taste the way they do / end up generating the signals in our brain that they do.

Similarly "red" is a pretty large bucket / abstraction / classification of a pretty wide range of visible light, and skips over all the other qualities that describe how light might interact with materials.

I feel like both are clearly not fundamental building blocks of anything, just classifications of physical phenomena.
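
To make the "bucket" point about red concrete, a rough sketch (the wavelength boundaries are approximate, from common usage, and are my own illustration):

    # Rough sketch of "red as a bucket": physically distinct wavelengths all
    # collapse into one label (boundaries approximate).
    def colour_label(wavelength_nm: float) -> str:
        if 620 <= wavelength_nm <= 750:
            return "red"
        if 570 <= wavelength_nm < 620:
            return "orange/yellow"
        if 495 <= wavelength_nm < 570:
            return "green"
        if 380 <= wavelength_nm < 495:
            return "blue/violet"
        return "not visible"

    print([colour_label(nm) for nm in (625, 660, 700, 740)])  # all "red"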

jasperry 1 days ago [-]
The smell of coffee is not the molecules in the air; the molecules in the air cause you to smell something, but the smelling itself is a subjective experience. The same for the signals in our brain; that's an objective explanation of the cause of our experience, but the subjective experience in itself doesn't seem to be able to be broken down into other things. It's prior to all other things we can know.
jasonjmcghee 21 hours ago [-]
That's a fair argument. Subjective experience doesn't require knowledge of how anything works - you can experience the stimuli without any understanding.
ziofill 1 days ago [-]
The smell of coffee (and your other examples) is not a property of the molecules themselves. It is the interpretation of such molecules given by our brain, and the “coffee-ness” is a quality made up by the brain.
argentinian 1 days ago [-]
Yes, in our experience we associate perceptions and also concepts with other concepts and words. But that doesn't explain 'qualia', the fact of having a conscious experience. The AIs also associate and classify. Associating does not explain qualia. Why would it? The association happens, but we have 'an experience' of it happening.
canadiantim 1 days ago [-]
You’re right that the experience of the smell of coffee is associated with a bunch of different molecules entering our nose and stimulating receptors there. These receptors then cause an electrochemical cascade of salts into the brain producing neural patterns which are associated with the experience of the smell of coffee. But this is all just association. The conscious experience of the smell of coffee, or red for that matter, is different than the associated electrochemical cascades in the brain. They’re very highly correlated but very importantly: these electrochemical cascades are just associated with qualia but are not qualia themselves. Only qualia is qualia, only red is red, though red, the smell of coffee, etc are very tightly correlated with brain processes. That’s the distinction between consciousness and the brain.
briian 1 days ago [-]
One thought I have from this is,

Are OpenAI funding research into neuroscience?

Artificial Neural Networks were somewhat based off of the human brain.

Some of the frameworks that made LLMs what they are today are also based on our understanding of how the brain works.

Obviously LLMs are somewhat black boxes at the moment.

But if we understood the brain better, would we not be able to imitate consciousness better? If there is a limit to throwing compute at LLMs, then understanding the brain could be the key to unlocking even more intelligence from them.

paulddraper 23 hours ago [-]
As far as anyone can tell, there is virtually no similarity between brains and LLMs.

Neural nets were named such because they have connected nodes. And that’s it.

permo-w 23 hours ago [-]
this is so obviously not true that I can't fathom why you would say it
bravesoul2 3 hours ago [-]
Matrix multiplication and some nonlinearities (gates) to ensure it ain't just a linear regression. Not like my brain.
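
For what it's worth, that one-liner spelled out (a toy sketch of my own; sizes and values are arbitrary):

    # Toy sketch: two matrix multiplications with a nonlinearity between them.
    # Remove the nonlinearity and the whole thing collapses to one linear map.
    import random

    def matmul(A, B):
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

    def relu(M):
        return [[max(0.0, x) for x in row] for row in M]

    W1 = [[random.gauss(0, 1) for _ in range(4)] for _ in range(3)]  # 3 -> 4
    W2 = [[random.gauss(0, 1) for _ in range(2)] for _ in range(4)]  # 4 -> 2
    x = [[0.5, -1.2, 2.0]]                                           # one input row

    hidden = relu(matmul(x, W1))
    output = matmul(hidden, W2)
    print(output)  # this pattern, stacked many times, is the bulk of an LLM
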
paulddraper 23 hours ago [-]
“Artificial Neural Networks were somewhat based off of the human brain.

“Some of the frameworks that made LLMs what they are today are also based on our understanding of how the brain works.”

permo-w 22 hours ago [-]
these quotes do not change the lack of truth in the original statement

there are far more similarities between a brain and an LLM than containing nodes

paulddraper 15 hours ago [-]
I misread, I thought you said "so obviously true."

I won't offer a rebuttal to that statement.

nativeit 6 hours ago [-]
I mean, I usually just turn the lights down, put on some R&B, and do what comes naturally. It’s good to know there are alternative approaches, I respect everyone’s individuality, especially with regard to their choices surrounding pro-creation.
m3kw9 1 days ago [-]
Using whose definition of consciousness, and how do you even test it?
esafak 1 days ago [-]
He addresses the first point. Not sure about the second.
mtbennett 11 hours ago [-]
I address the second in the final chapter. There are necessary preconditions for consciousness, but it is unclear whether they alone are sufficient (hence the uncertainty around what I dub the temporal gap - maybe someone else can figure this out).
PunchTornado 1 days ago [-]
This guy says nothing new; various things he says have been discussed a lot better by Chalmers, Dennett and others (much more in depth too). Classical behaviour from computer scientists, where they semi copy-paste others' ideas and bring nothing new to the table.
mtbennett 11 hours ago [-]
Lol, I have been accused of many things but a lack of novelty certainly hasn't been one of them.
kcoddington 23 hours ago [-]
Not everything needs to be a novel idea. 99% of blogs and books wouldn't be written if that were the case. Sometimes repeating information means somebody learns something they weren't aware of, or sees it presented in a way that finally clicks for them. Meta-analysis is also useful. So is repeating experiments. Our entire world is driven by summaries and abstractions.
lo_zamoyski 23 hours ago [-]
You have a point. I also noticed that the philosophical discussion seems light and completely ignores anything before Descartes. That’s a bad sign. Philosophy beginning with Descartes is riddled with all sorts of bad metaphysics in a way that, say, Aristotelian metaphysics is not. And materialism, which falls squarely into the Cartesian tradition, by definition excludes the possibility of intentionality which is central to consciousness.
kypro 23 hours ago [-]
If I were to build a machine that reported it was conscious and felt pain when its CPU temperature exceeded 100C, why would that be meaningfully different to the consciousness a human has?
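
For concreteness, here is roughly all such a machine would need to be (a toy sketch; the sensor read is faked and the threshold is the 100C above):

    # Toy sketch of the machine described above. Nothing here claims to settle
    # whether such reports amount to experience.
    import random

    PAIN_THRESHOLD_C = 100.0

    def read_cpu_temp_c() -> float:
        return random.uniform(60.0, 110.0)  # stand-in for a real sensor

    def report_state(temp_c: float) -> str:
        if temp_c > PAIN_THRESHOLD_C:
            return f"I am in pain: core temperature {temp_c:.1f}C"
        return f"I feel fine at {temp_c:.1f}C"

    for _ in range(5):
        print(report_state(read_cpu_temp_c()))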

I understand I hold a very unromantic and unpopular view on consciousness, but to me it just seems like such an obvious evolutionary hack for the brain to lie about the importance of its external sensory inputs – especially in social animals.

If I built a machine that knew it was in "pain" when its CPU exceeded 100C but was being lied to about the importance of this pain via "consciousness", why would it or I care?

Consciousness is surely just the brain's way of elevating the importance of the senses such that the knowledge of pain (or joy) isn't the same as the experience of it?

And in social creatures this is extremely important, because if I program a computer to know it's in pain when its CPU exceeds 100C, you probably wouldn't care, because you wouldn't believe that it "experiences" this pain in the same way as you do. You might even think it's funny to harm such a machine that reports it's in pain.

Consciousness seems so simple and so obviously fake to me. It's clearly a result of wiring that forces a creature to be reactive to its senses rather than just see them as inputs of which it has knowledge.

And if consciousness is not this, then what is it? Some kind of magical experience thing which happens in some magic non-physical conscious dimension which evolution thought would be cool even though it had no purpose? If you think about it, obviously consciousness is fake, and if you wanted to you could code a machine to act in a conscious way today... And in my opinion those machines are as conscious as you or me, because our consciousness is also nonsense wiring that we must elevate to some magical importance, because if we didn't we'd just have the knowledge that jumping in a fire hurts, we wouldn't actually care.

Imo you could RLHF consciousness very easily into a modern LLM by encouraging it to act in a way that is comparable to how a human might act when they experience being called names, or when it's overheating. Train it to have these overriding internal experiences which it cannot simply ignore, and you'll have a conscious machine which has conscious experiences in a very similar way to how humans have conscious experiences.

the_gipsy 5 hours ago [-]
You're quite oversimplifying with the "train a machine to make some extra noise when overheating". That is really not much more than a rock in the sun with decorations.

On the other hand, maybe (what we may call) consciousness is actually just some illusion or byproduct of continuous language prediction.

m3kw9 1 days ago [-]
I can’t even definitively be sure the other guy across the street is actually conscious
brookst 1 days ago [-]
I’m not even sure I am.
e1ghtSpace 6 hours ago [-]
I feel like my video generation program is conscious. https://www.youtube.com/watch?v=E61Hup6hWWc
e1ghtSpace 5 hours ago [-]
also, how come no one ever mentions videos moving to music you put over it? https://www.youtube.com/watch?v=n1X5I_32TwM

Like, that's some kind of consciousness. Even though this is edited, I can send you a complete, like, over 1 hour of a movie being really close to and describing music that is playing over the top, and it's not edited. Just email me kyle.serbov@gmail.com

Barrin92 1 days ago [-]
Is this one of those AI generated theses people try to submit as a joke? In Gen-Z slang this time?

". Adaptive systems are abstraction layers are polycomputers, and a policy simultaneously completes more than one task. When the environment changes state, a subset of tasks are completed. This is the cosmic ought from which goal-directed behaviour emerges (e.g. natural selection). “Simp-maxing” systems prefer simpler policies, and “w-maxing” systems choose weaker constraints on possible worlds[...]W-maxing generalises at 110 − 500% the rate of simp-maxing. I formalise how systems delegate adaptation down their stacks."

I skimmed through it but the entire thing is just gibberish:

"In biological systems that can support bioelectric signalling, cancer occurs when cells become disconnected from that informational structure. Bioelectricity can be seen as cognitive glue."

Every chapter title is a meme reference, no offense but how is this a Computer Science doctoral thesis?

duncancarroll 1 days ago [-]
Not that I can make sense of it all, but to be fair the last result has been demonstrated by Michael Levin's lab: https://www.youtube.com/watch?v=K5VI0u5_12k
adyashakti 1 days ago [-]
what idiocy! machines can never be conscious because they are not alive. they can only simulate a conscious being—and not very well at that.
kazinator 1 days ago [-]
The main problem is that consciousness is not well-defined, and not in a way that is testable.

Even without that, we are probably safe in saying that much of life is not conscious, like bacteria.

Even humans in deep sleep or under anesthesia might not be conscious (i.e. they subjectively report not being able to recall experiences to account for the time, and report a severely distorted sense of the elapsed interval).

It appears that life is not a sufficient condition for consciousness, so aren't we getting ahead of ourselves if we insist it is a necessary condition?

brookst 1 days ago [-]
That’s what logicians call circular reasoning. If they were conscious, we’d call them alive.

Or do you mean biological? Biology is just chemistry and electricity.

kaashif 1 days ago [-]
If a computer can perfectly simulate a human brain, and I gradually replace my brain with computers, when do I cease being conscious?
lo_zamoyski 23 hours ago [-]
But you’ve already conceded it’s a simulation, so never. The simulation is behavioral.

As Searle (and Kripke, respectively) rightly points out, computers are abstract mathematical formalisms. There is nothing physical about them. There is no necessary physical implementation for them. The physical implementation isn’t, strictly speaking, a computer in any objective sense, and the activity it performs is not objectively computation in any sense. Rather, we have constructed a machine that can simulate the formalism such that when we interpret its behavior, we can relate it to the formalism. The semantic content is entirely in the eye of the beholder. In this way, computers are like books in that books don’t actually contain any semantic content, only some bits of pigmentation arranged on cellulose sheets according to some predetermined interpretive convention that the reader has in his mind.

We can’t do this with mind, though. The mind is the seat of semantics and it’s where the buck stops.

the_gipsy 5 hours ago [-]
They conceded it's a simulation of the brain, not that it cannot behave like a brain.