The Ontelligency Research.
A Quest to Understand the Principles of Consciousness, Intelligence, Agency, and Tinker Around with It.
13th April 2024
An Intuitive Understanding of the Transformer Mechanism:
Tokens and their positions in space and what they mean:
Think of it like this: every word and number you can think of is placed in a vast volume of space with a huge number of dimensions (for simplicity, let's imagine three dimensions).
During the learning phase, words and numbers with similar meanings or contexts get clustered together in this space. The context, or inherent meaning, of a word is therefore defined by its position relative to other words that mean roughly the same thing. There is no absolute context, only relative context: "dark" is partly defined by how far it sits from "light", something like that. It's uncertain whether this is how our brains process information, but there is a good possibility that contextual understanding works this way. Contextual understanding seems unique to living beings; as far as we know, no other arrangement of matter understands context at all.
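To make the "relative context" idea concrete, here is a minimal sketch in Python using toy 3-dimensional vectors. All the numbers are invented for illustration (a trained model learns these positions from data); only the relative distances matter.

```python
import numpy as np

# Toy 3-dimensional "embeddings" -- the numbers are made up for illustration;
# a trained model learns these positions during the learning phase.
embeddings = {
    "dark":   np.array([ 0.9, -0.1,  0.2]),
    "light":  np.array([-0.8,  0.0,  0.3]),
    "dim":    np.array([ 0.7, -0.2,  0.1]),
    "banana": np.array([ 0.1,  0.9, -0.5]),
}

def cosine_similarity(a, b):
    """Relative closeness of two tokens in the embedding space."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "dark" sits near "dim" and far from "light" -- its meaning comes from
# relative position, not from any absolute coordinate.
for word in ("dim", "light", "banana"):
    print(f"dark vs {word}: {cosine_similarity(embeddings['dark'], embeddings[word]):+.2f}")
```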
Vectors, their directions, and the relationship between the tokens:
Now that we have placed these tokens in this infinite volume of space, how do they relate to each other?
Each vector can be read as a relationship, and that relationship may itself be assigned a token, like a word or something. Based on where a vector points in the space and where it points from, it defines the relationship between, let's say, two tokens.
A complex mathematical function of these tokens and the vectors between them then produces a probabilistic result for what the next token may be.
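Here is a rough sketch of that "complex mathematical function" in Python, assuming the standard scaled dot-product attention followed by a softmax over a vocabulary. The matrices are random placeholders rather than learned weights, so the output distribution is meaningless except as a shape; it only illustrates the mechanics.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d_model, vocab_size, seq_len = 8, 50, 4

# Token vectors for a short context (their positions in the space).
x = rng.normal(size=(seq_len, d_model))

# Projections that would normally be learned; random here purely for illustration.
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
W_out = rng.normal(size=(d_model, vocab_size))

# Scaled dot-product attention: each token asks "which other tokens point
# in a direction relevant to me?" and mixes in their values accordingly.
Q, K, V = x @ W_q, x @ W_k, x @ W_v
weights = softmax(Q @ K.T / np.sqrt(d_model))
mixed = weights @ V

# Project the last position onto the vocabulary: a probabilistic guess
# at what the next token may be.
next_token_probs = softmax(mixed[-1] @ W_out)
print("most likely next token id:", int(next_token_probs.argmax()))
```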
I think this gives a very basic and abstract understanding of what is going on under the hood of transformer models.
12th April 2024
The Consciousness Experiment: Understanding Human Perception?
Continuing from March 20th, 2024's chain of thoughts.
Thought Experiment: Suppose a simulated "cognitive architecture model" ends up asking the same questions, in the same patterns, that humans do (highly predictable yet relatable), including things unique to humans that were never programmed into the model. Would that be an indicator of what we may declare "successfully simulated consciousness", or at least a basic simulation of how humans perceive the world? Thought processes like self-reflection, or questioning the first principles behind its own intentions?
Mind you, this is not about emulating human capabilities or going beyond them. This is about understanding the human thought-process pipeline. If the principles behind transformers resemble how the brain processes data and context, perhaps we are simply missing the right architecture for how the brain uses that component.
This is a very abstract architecture for how we might simulate the whole pipeline (a rough code sketch follows after the outline):
Hardcoded Intents: long-term goals such as survival, food, or multiplication, or short-term ones such as "find cake and eat it". These are hardcoded. No rational reasoning is fed into the subsequent layers; those layers may or may not be expected to make sense of the hardcoded intents, and they do not have open access to them. They just receive commands in some form (text or something like it). I am not sure how we receive these commands; we often "feel" that we should eat, and we rarely consciously decide when it comes to these intents. This is a question to ponder over more.
Output: Intents
Converter Algorithm: maps hardcoded intents to an end goal relevant to the environment. More often than not, the connection between an environmental desire and these hardcoded intents is rarely rationalised, like the step from "I am craving a cake" to "eat for better survival". I don't find these thoughts as rational as, let's say, cooking a meal; this is some sort of passive thinking.
Input: Intents
Output: Relevant environmental desires or objectives
High-level Intelligence: supposed to make sense of the end-goal objectives and make sure they get done.
Input: Desires or Objectives
Output: Getting the action done
Intermediate processes: contextual understanding of the objective (a lot of questioning), planning, implementation of the actions, and completion of the objective.
Components of intelligence: Question everything to understand the context better.
But at any given point,
Does it question the intents behind the actions it does?
Does it question its purpose?
Does it ask itself why it is doing this?
Does it ask itself who programmed its intents?
Does it self reflect?
If it does, we may have a skeletal architecture of consciousness.
I may be wrong, the goal is to be less wrong.
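To pin the skeleton down, here is a hypothetical sketch of the three layers in Python. Every class, rule, and string in it is an assumption invented for illustration, not a working cognitive model.

```python
from dataclasses import dataclass

@dataclass
class Intent:
    """Hardcoded drive; the layers below never see its rationale."""
    name: str          # e.g. "survival" or "find cake and eat"
    horizon: str       # "long-term" or "short-term"

def converter(intent: Intent, environment: str) -> str:
    """Passive, rarely-rationalised mapping from a drive to an
    environment-relevant objective (purely illustrative rules)."""
    if intent.name == "survival" and "cake" in environment:
        return "obtain and eat the cake"
    return f"pursue '{intent.name}' in this environment"

def high_level_intelligence(objective: str) -> list[str]:
    """Question the objective, plan, and act; it only sees the objective,
    never the hardcoded intent behind it."""
    plan = [
        f"understand the context of: {objective}",
        f"plan steps towards: {objective}",
        f"execute and verify: {objective}",
    ]
    # The self-reflection check from the outline: does the system ever turn
    # its questioning on the intent itself? Here it cannot, because the
    # Intent object was never passed down -- which is exactly the point.
    plan.append("self-reflection: why am I doing this at all?")
    return plan

intent = Intent(name="survival", horizon="long-term")
objective = converter(intent, environment="a kitchen with a cake on the table")
for step in high_level_intelligence(objective):
    print(step)
```

The design choice worth noting is that the Intent object is never handed to high_level_intelligence; the self-reflection question at the end has nothing to inspect, which mirrors the "no open access to intents" constraint above.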
I think this is crucial because the self-awareness we claim to have is often just our intelligence asking ourselves about things we don't yet make sense of; we don't yet understand our own underlying mechanisms, and our intelligence does not possess inherent control over our other significant cognitive functions.
More questions to ponder over for implementation:
Approaches to RL for simulating agents with intents (a toy environment sketch follows after this list).
Embodied AI Agents.
Defining High Level Intelligence and checking AI substitutes for implementation.
Defining various experiments:
Finalising a cognitive architecture
Defining environments
Hardcoding intents suitable to different environments
Does it arrive at the same "self-reflection" part?
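As a starting point for the RL question above, here is a toy, entirely made-up "find cake and eat" environment in Python, where the hardcoded intent shows up only as a reward signal the agent can never inspect or rationalise.

```python
import random

class CakeWorld:
    """1-D corridor of length 10; the cake sits at a random cell.
    The hardcoded intent ("find cake and eat") is expressed only as
    reward -- the agent is never told why eating cake matters."""
    def __init__(self, length=10):
        self.length = length
        self.reset()

    def reset(self):
        self.agent = 0
        self.cake = random.randrange(1, self.length)
        return self.agent

    def step(self, action):  # action: -1 (left) or +1 (right)
        self.agent = max(0, min(self.length - 1, self.agent + action))
        done = self.agent == self.cake
        reward = 1.0 if done else -0.01   # small step cost, big payoff
        return self.agent, reward, done

# A deliberately dumb random policy, just to show the shape of the loop
# a proper RL agent would learn inside.
env = CakeWorld()
state, total, done = env.reset(), 0.0, False
while not done:
    state, reward, done = env.step(random.choice([-1, 1]))
    total += reward
print("episode return:", round(total, 2))
```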
April 5th, 2024
Definition of Better Intelligence, Context?:
When we say we want to reproduce intelligence, we are essentially saying that we have to create systems that interpret the given environment (mixed data) as accurately and fast as possible before solving the chosen problem.
So the abstracted function should go like this:
Better Intelligence = how well the context of the problem is understood / how long it takes to understand it
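A tiny worked example of that ratio with made-up numbers, just to show that the definition rewards both depth and speed of understanding:

```python
def intelligence_score(understanding_quality: float, seconds_to_understand: float) -> float:
    """Made-up metric: quality of contextual understanding per unit time."""
    return understanding_quality / seconds_to_understand

# Agent A understands the problem slightly better but slowly;
# Agent B is a little less accurate but much faster.
print(intelligence_score(0.9, 10.0))  # 0.09
print(intelligence_score(0.8, 2.0))   # 0.40 -> "more intelligent" by this definition
```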
Other types of intelligence, like artistic creativity or linguistic ability, are no less important, but I don't think the progress of technology depends on them. I think the core accelerator of human civilisation is solving unsolved problems, and technological advancement single-handedly plays a huge role in differentiating a decade from its predecessors. Coming back to contextual understanding: digging deeper lands us at the question of what we mean by understanding the data or environment. I think it's about having the right context for the given information. Current narrow artificial intelligence is excellent at classification, and frameworks like Reinforcement Learning are teaching bots to solve unseen problems. But what's the guarantee that these agents actually understand their assigned problem? Checking for agency in the bots, i.e. seeing whether they can use the context obtained from the environment and data to do something creative with it, or asking them to develop tools or something novel: is that a good indicator of contextual understanding? Advanced contextual intelligence is maybe synonymous with generative intelligence; to paint the picture, you must understand the colours, right? Does this imply that the Transformer architecture is the closest we've ever gotten to reproducing actual intelligence?
The merits of Transformer models are already proven by their real-world utility; they are used extensively in video generation models and increasingly versatile use cases beyond that.
March 20th, 2024
What is Consciousness?:
Is consciousness simply an emergent property of stacked complex mathematical functions/neurons/neural networks, or is it a separate fundamental physical phenomenon (akin to magnetism)? Is intelligence merely a component of consciousness, or is it a byproduct of one of them? Before we attempt to "create" consciousness, we must accurately define what consciousness is.
If we assume that our brains are inherently wired towards a goal, wouldn't this give underlying meaning to consciousness? What if current state-of-the-art LLMs lack this aspect called "intent"? What if we are consciously unaware of our hardcoded "intents"?
Intent (Survival & Multiplication)
->
Consciousness (Intermediate processing unit not necessarily aligned with truth, designed to perform according to the best interests of the system)
->
Intelligence (Complex Problem Solving, must align with truth for efficiency)
I recognize that I may be incorrect; the aim is to be less wrong. According to this perspective, consciousness would lack meaning without "intent." Yes, the intent of living beings may seem irrational based on our current understanding, but so too does the purpose of the universe.
Are these "intents" hardcoded in simplest of the single cellular organisms as well? How and why does it know that it has to multiply or seek food, if they don't even have a neural network? So "intent" clearly created higher level cognition (Intelligence) here because living beings only got smarter as time progressed.
Is there a possibility that intelligence is just an abstraction level over something fundamental that is going on deep down? Something that constitutes the "Intent" or the mysterious Consciousness?
Nothing Personal, Just Unfiltered Thought Processes.