How to make Artificial General Intelligence (AGI) better than today (Sept. 8, 2020)

Adam Mackintosh
14 min read · Sep 8, 2020

Preface

One thing I find constant is that it’s extremely difficult to ascertain where humans are at today. So I am inclined to reduce my confidence in the article’s headline. I am also inclined to cite “With These Words, I Can Sell You Anything” by William Lutz because it details weasel words and doublespeak in advertising. Call that a referential-transparency prerequisite, and call this article some kind of reference material for beginners or experts who just need a novel keyword or two.

I first wrote this article as a comment on the (at this moment) newest Two Minute Papers video on YouTube, but then I realized, to increase anti-fragility magnitudinally proportional to the decrease in fragility, I should place it here also. I will edit it to some degree for better readability on Medium. I posted other gems composed with garbage on Twitter; if you want them badly enough, you can find them.

The original video is here:

You can maybe see me rabble-rousing in the comments section there (but it’s the same content, so don’t bother).

Continuing on,

If you are a researcher (do not TLDR this), due to things I’ve heard Eric Weinstein say, I would like you to brush up on Principal Fibre Bundles, and see if you can appeal to any of that logic and figure out how to do two separate things:

The Weinsteins are always significant to me because they are like Einstein twins, and as you can see, they take the upstream W and operate objectively.

Maybe I can link these videos with context-timestamps soon.

Semi-contextual note: In my coffee this morning after stirring, there was one large bubble in the middle, followed by a sphere-packed set of variable-sized node-like bubbles. Overall shape was circular, but slightly deformed. After a period of time slowly rotating, it stuck to the wall of the cup. It was certainly both a graph and a clique.

If you find any assertion or application of logic in this article that you want attributed to someone, place it in the comments. I am experienced with the APA style guide, and I would love to augment related material in this context. Stated comically, this model has arguably some class.

Oops, two things:

1) Transforming data with systems of equations

Figure out if anything you’re working on can benefit from the idea of having either 3 equations equated, or 6 equations equated. For example, X/Y = Z/T = F/G (or double the amount of equalities), and then figure out if you can use systems of equations while data is static to make transformations to solve for unknowns. Beyond 1 unknown or 1.5 unknowns is an increase in confidence, such as 1.6 over pi which is greater than 50% confidence — maybe enough to act on a hunch.
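To make that concrete, here is a minimal sketch in JavaScript of the chained-ratio idea (the variable names are just placeholders I invented, nothing canonical): given X/Y = Z/T and any three known values, the fourth falls out of the system.

```javascript
// Minimal sketch: chained ratios X/Y = Z/T with exactly one unknown.
// Pass any three of the four values; the missing one is solved for.
const solveRatio = ({ x, y, z, t }) => {
  if (t === undefined) return { x, y, z, t: (z * y) / x }; // X/Y = Z/T  =>  T = Z*Y/X
  if (z === undefined) return { x, y, t, z: (x * t) / y }; // Z = X*T/Y
  if (y === undefined) return { x, z, t, y: (x * t) / z }; // Y = X*T/Z
  return { y, z, t, x: (y * z) / t };                      // X = Y*Z/T
};

// Example: 2/4 = 3/T  =>  T = 6
console.log(solveRatio({ x: 2, y: 4, z: 3 })); // { x: 2, y: 4, z: 3, t: 6 }
```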

If you watch all of Weinstein’s Rogan and Geometric Unity podcasts (there are at least 5–10 podcasts), you’ll see like 15–30 hours of him describing things you need to understand. Think of the movement of data in video keyframes as what it is: deltas of X, Y, Z, T, and spin, and deltas towards increases or decreases in efficiency or effectiveness.

Try to find unitless units.

2) AGI is a recurrent multi-step algorithm of multi-step algorithms

Think about AI parsing video as maybe a multi-step algorithm that does some strategy similar to how a human thinks, and figure out that order, such as:

Establish boundaries

Find constraints for an agent’s behaviour (so you can efficiently traverse less area by rejecting garbage basins of attraction). Find the convex and concave boundaries in which some kind of dynamic equilibrium is before, now, and/or after, maybe. Is it converging to garbage? Reject it. Think in 3D, so you might need more equations to solve one unknown across two frames (i.e. moments). Think about an agent’s behaviour in terms of magnetism: polarity between -1 and 1.
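Here is a rough sketch of what I mean by polarity and boundary rejection, in JavaScript (the boundary functions and candidate points are invented purely for illustration): clamp behaviour into [-1, 1], and reject anything converging outside the band you care about.

```javascript
// Rough sketch (my own framing, not a standard API): behaviour as a polarity
// in [-1, 1], plus a filter that rejects targets converging outside the boundary.
const clampPolarity = (p) => Math.max(-1, Math.min(1, p));

const lower = (x) => x * x - 1; // invented lower boundary
const upper = (x) => 1 - x * x; // invented upper boundary
const withinBounds = ({ x, y }) => y >= lower(x) && y <= upper(x);

const candidates = [
  { x: 0.2, y: 0.5 }, // inside the band: keep it
  { x: 0.9, y: 5.0 }, // converging to garbage: reject it
];

console.log(candidates.filter(withinBounds)); // [ { x: 0.2, y: 0.5 } ]
console.log(clampPolarity(1.7));              // 1
```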

For example, suppose a humanoid has an orange crayon contacting black construction paper. I mean maybe note and track the Halloween symbol in your associative memory, but I more so mean that, obviously, their intent is to draw something for the purpose of joy or another symbol of interest in the context. Obviously there is some vector (or vectors) towards a reward or danger symbol in this context. You might derive those from the outcome being converged upon.

I notice currently, most robots don’t instantly ask, “why?” when they encounter something they don’t understand. They seem to love to waste their time calculating changes in my lip angle while my mother is putting on lipstick. Did she say anything recently that relates to lipstick?

Please watch this video here of Richard Feynman:

Listen ultra-carefully when he talks about someone asking him how magnetism works, and he goes off on a ‘tangent’ explaining why he could recurrently ask why every time someone gives him an answer. Literally within seconds or minutes (highkey: moments), the person is forced to talk about the Big Bang, and our answer is probably trending towards inaccurate and/or imprecise. Uh oh. (i.e. Scooby Doo sound)

He is saying, one can always look upstream. One can always pretend a strict equality is a loose equality and consider if there’s a better answer there upstream.

Please do not ever forget that. Are you satisfied with my answer here? Why?

When I first wrote this article and published it, I didn’t realize the full implications. It required me to send a positive Tweet to Gad Saad about costly-signals and black swans. This morning while re-reading my own article, I notice the important takeaway from the above “Big Bang” mention is that, what exactly would your AGI do if a black swan delta occurred from a costly signal? It would set some predicate to truthy and propagate it throughout the entire brain, for use in every consideration and/or action.

The point of me incrementing the paragraph count in this article to include the above paragraph is that, if a child function learns of this information, how does it communicate not just discrete scalar values but also continuous functions? How does the system backfill (note: an excavator backfilling dirt into a hole, maybe shaping subjectively) this new function everywhere it is applicable? And how does it magnify its total crystallized intelligence and fluid intelligence through the network effect of that operation? I suspect that operation occurs recurrently until the network effect of network effects reaches zero new additions and/or subtractions.

A bit into this article, I will link my Functional Programming article, and you must read it to understand how I approach predicate references (discrete) and predicate functions (analog/continuous) as a method of trapping and decoupling, exactly like hunting beavers in Canada in the 1800s, except totally different. You’ll find a boolean you like. I guarantee it.
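Until you get to that article, here is the distinction in its tiniest form (a hedged sketch; the names are mine):

```javascript
// Predicate reference: a discrete, already-evaluated boolean.
const isHotNow = true;

// Predicate function: a continuous question you can re-ask as state flows.
const isHot = (temperatureC) => temperatureC > 30;

// The reference is a snapshot; the function keeps answering as inputs change.
console.log(isHotNow);  // true (frozen at whatever moment produced it)
console.log(isHot(18)); // false
console.log(isHot(34)); // true
```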

Quick aside: Imagine a camera shutter closing in. Next, imagine a composition of boolean constrictors starting from a scaffold and back-filling inward until the obvious solution is there. Now, imagine the inverse of that — or maybe the converse.

Further (and back to the crayon example above), what are the boundaries and constraints here that minimize cumulative symbol possibilities? Cultivating joy is a fascinating cyclical dependency of sorts. In Philosophy, they might reference intrinsic pleasure. Is it that the intent and purpose are the reward? If there is movement occurring, both higher-order and lower-order tangents could be moving too.

Don’t forget turning points are related to the power rule and second derivative tests, etc. Simple math is better than complex math. Simple behaviour vectors are better than complex ones. Identify the boundaries. If some agent is converging on a dog, maybe it’s seeking to pet it, or maybe to eat it. What are the top probabilities, and how can you draw a boundary around what’s important in that context? I use the idea of spectral functions. If you know how an agent will act in a context, it’s kinda like a spectral function that draws a behaviour-related path detection map.
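A minimal sketch of drawing that boundary, assuming some made-up intents and probabilities: rank them, then cut everything below a floor for this frame.

```javascript
// Sketch: rank the agent's candidate intents, then draw a boundary (a
// probability floor) around what matters for this frame. Numbers invented.
const intents = [
  { label: 'pet the dog', p: 0.62 },
  { label: 'walk past', p: 0.35 },
  { label: 'eat the dog', p: 0.03 },
];

const boundary = 0.3; // everything below this line is noise for this frame
const important = intents
  .filter((i) => i.p >= boundary)
  .sort((a, b) => b.p - a.p);

console.log(important.map((i) => i.label)); // [ 'pet the dog', 'walk past' ]
```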

In the agent/dog example, consider the difference between an agent smiling and an agent grimacing with a knife in hand. Why parse deeper when those two behaviours are instantly inverse? What if he said “don’t worry buddy” and appears to be cooking, and his orientation to the dog is slightly off because he needs to cut a camping tent wire? What if his orientation changed in the last few moments, firing a “converge target switched” event away from the wire towards the dog, causing the agent to frog-style jump at the dog?

Basically all I’m saying is a lot, and most of it has to do with allowing your robot some freedom in jumping through dimensions. If your brain doesn’t undergo inelastic collisions with my way of thinking so far, maybe check out my article about functional programming. I want to make sure you have that way of thinking unpacked:

Don’t balk at the mention of JavaScript, React, and Vue there. If the tangent of your emotion just suddenly increased towards negative polarity while decreasing towards positive polarity, then you need to read that more than you think. A person must deeply understand tubes, pathways, and closures to deeply understand context.

Back to that Feynman video above, for a moment. Listen also mega-carefully when he talks about being trolled by MIT peers about seeing his own reflection. “What is your reflection?” is the question he unpacks.

Spoiler alert, if you haven’t watched the above video yet, but…

Your reflection is the image of yourself that is inside-out and backwards. Do you know what else is inside-out and backwards? Function composition:

f(g(h(x)))

Isn’t that fascinating how x must exist before it goes into h() before it goes into g() before it goes into f()?
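Here is that inside-out-and-backwards execution order as a runnable sketch (a generic compose helper, nothing exotic):

```javascript
// f(g(h(x))): declared left-to-right, executed inside-out and "backwards".
const compose = (...fns) => (x) => fns.reduceRight((acc, fn) => fn(acc), x);

const h = (x) => x + 1;
const g = (x) => x * 2;
const f = (x) => `result: ${x}`;

const pipeline = compose(f, g, h); // reads f ∘ g ∘ h, but runs h, then g, then f
console.log(pipeline(3));          // "result: 8"  (3 + 1 = 4, then 4 * 2 = 8)
```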

I haven’t found any fantastic material about this besides Feynman in that video, but to me this means that the Universe is declared first and then executed afterwards, inside-out and backwards. To me, this means think before you act; consider before you catalyze or perform action towards an outcome. My remaining question is: how much thinking first? I leave that up to you as the optimizer of your robot’s efficiency && effectiveness.

Orientation && Precession

Don’t forget about precession and orientation in the 2D and 3D spaces. These can relate to any number of agents in the hyperspace due to incentives (pull agent towards symbol), motives (push agent to symbol), rewards (push agent to symbol), dangers (pull agent from symbol). I see a juicy apple. “I’m hungry” event published. I see a bear near it. Incentive dropping now. Stop action event. Push towards next idea about hunger. Imagine publishing and subscribing and listening for events and actions. Make your agents study deltas between state changes, and make them use systems of equations to chop out noise and calculate something optimal in the next frame. This is how functional-reactive programmers think about how an application’s state changes. Hyper-dimensional dynamic equilibrium must exist.
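A minimal pub/sub sketch of that apple-and-bear moment, using Node’s built-in EventEmitter (the event names and incentive numbers are invented for illustration):

```javascript
const { EventEmitter } = require('events');
const bus = new EventEmitter();

let incentiveToEat = 0;

bus.on('hungry', () => { incentiveToEat += 1; });  // pull towards the apple
bus.on('dangerSpotted', () => {                    // bear enters the frame
  incentiveToEat -= 2;
  bus.emit('stopAction');
});
bus.on('stopAction', () => console.log('stop: incentive is now', incentiveToEat));

bus.emit('hungry');        // I see a juicy apple
bus.emit('dangerSpotted'); // I see a bear near it; incentive drops, action stops
```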

Avoid superfluous re-renders by ensuring lower-order components only re-render when their props or state change. Use sideways data-loading if state must pass wild distances or to specific closures.
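In React, React.memo handles that shallow-props check for you; here is the underlying idea sketched framework-free (the names are mine), so you can see what “only re-render when props change” actually buys:

```javascript
// The idea behind React.memo, in plain JavaScript: skip the render if the
// (shallow) props did not change since last time.
const memoComponent = (render) => {
  let lastProps;
  let lastOutput;
  return (props) => {
    const keys = Object.keys(props);
    const same =
      lastProps !== undefined &&
      keys.length === Object.keys(lastProps).length &&
      keys.every((k) => props[k] === lastProps[k]);
    if (same) return lastOutput; // superfluous re-render avoided
    lastProps = props;
    lastOutput = render(props);
    return lastOutput;
  };
};

const Label = memoComponent(({ text }) => `<span>${text}</span>`);
console.log(Label({ text: 'hi' })); // renders
console.log(Label({ text: 'hi' })); // same props: cached output returned
console.log(Label({ text: 'yo' })); // props changed: renders again
```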

Bonus: I’m not 100% sure here, but maybe use event emitters or callbacks to pass functions; feel it out… make it work first, then make it better.

Given the formula:

ui = fn(state)

user interface = the recurrent render function of the current state

  • This is no different than y = f(x).
  • Notice how the deltas are magnitudinally proportional.
  • As y changes, f(x) changes by the exact same amount to maintain the balance of the equality.
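In code, that formula is nothing more than this (a toy render function, not any particular framework):

```javascript
// ui = fn(state), exactly like y = f(x): change the state, re-run the
// function, and the interface delta tracks the state delta.
const render = (state) => `<h1>Count: ${state.count}</h1>`;

let state = { count: 0 };
console.log(render(state)); // <h1>Count: 0</h1>

state = { ...state, count: state.count + 1 };
console.log(render(state)); // <h1>Count: 1</h1>
```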

Don’t forget about loose or strict equality in crazy situations. Make your agents anti-fragile to the difference between loose and strict equality. Why do you think TypeScript is better than JavaScript except when it isn’t? Interesting: dynamic typing and static typing. Interesting: asynchronous and synchronous functions, function composition, object composition. Study these if you don’t feel confident. Study pub/sub. Push an action. Pull an event by unique name. Transform event-detection to action-initiation (catalyze or perform).
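If you want the JavaScript flavour of loose versus strict equality spelled out:

```javascript
// Loose vs strict equality in JavaScript: == coerces types, === does not.
console.log(1 == '1');           // true  (loose: '1' is coerced to 1)
console.log(1 === '1');          // false (strict: number vs string)
console.log(null == undefined);  // true
console.log(null === undefined); // false
console.log(0 == false);         // true  (the kind of "crazy situation" to be anti-fragile to)
console.log(0 === false);        // false
```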

Now, hold onto your papers and re-read this “Orientation && Precession” section and unpack symbol to “either a discrete object or a continuous function”.

Oh wow: so elusive yet so obvious and critical if we like to aim towards particle/wave duality.

Declarative vs. Imperative

Think more often in terms of how a parent dimension REALLY relates to a child dimension when you are looking for shortcuts due to computational efficiency and memory storage. A parent is above, so it can contain the compressed/packed/abstracted version of something. When you zoom in, you go from parent to child, maybe.

Is the parent “camera does stuff” while the child is “this is how camera does stuff”? Zoom in 10% child. Ok parent we zoomed in 10% — here’s what changed, but wildcard-symbol Jim I’m a doctor not a Star Trek reference. Ok zoom in another 5% but add a blue filter and tell me what you got.

How does data fight to go back upstream? Or how do we think about parent/child movement?

  • In React JS, we call that lifting state and/or continuation passing style.
  • Vue JS uses event emitters more often to cross-pollinate closures.
  • Newton and Leibniz probably use differential equations to integrate what the state change was, but that’s implicit and yucky compared to explicit state changes with Mealy finite state machines.
Fig ?.? — NP != P

Quick aside here, but I want to show this napkin math above because it relates to NP != P. You can see visually why it is hard for some agents to understand context while easy for others, and why it’s near-impossible for some agents to verify accuracy && precision while near-easy for others.

To understand the image, consider the chain as a directed graph where each node is a closure that has context, but there is no communication from downstream to upstream. It makes it pretty obvious why the node with awareness has no problem verifying while the other one isn’t so “lucky”.

Node 1 (the top circle)

  • has knowledge of all upstream nodes

Node 2 (the bottom circle)

  • has no knowledge of upstream nodes, and so cannot ascertain true quality and relevance to, well, anything towards everything
  • this is an artifact of directed acyclic graphs where child closures cannot reach into parent closures unless the parent provides a function to do so (sketched just below)
  • additionally, child closures have no awareness of what upstream nodes are connected to sibling closures unless they are connected to the context

I put this image and description below the mentions about React, Vue, and Calculus because it relates to them. Calculus allows inference based on integrating differentials, but sometimes equality is only loose not strict. And to avoid any confusion, this is due to referential transparency.
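Before moving on, here is what that “unless the parent provides a function” clause looks like as code, which is also the essence of lifting state (a framework-free sketch; the names are mine):

```javascript
// A child closure cannot reach upstream on its own; the parent has to hand
// it a function. This is the essence of "lifting state".
const parent = () => {
  let state = { zoom: 0 };

  const onZoom = (delta) => {            // the function the parent provides
    state = { ...state, zoom: state.zoom + delta };
    console.log('parent now knows zoom =', state.zoom);
  };

  const child = (reportUpstream) => {
    // the child only "sees" upstream through the callback it was given
    reportUpstream(10);
  };

  child(onZoom); // parent now knows zoom = 10
};

parent();
```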

Moving on,

Where are the boundaries there that allow you to ignore Hilbert space explosions and prevent accidental positive feedback loops such as n+1 iterations (as seen in GraphQL)? Drawing boundaries around behaviour in terms of a convex upper boundary and a concave lower boundary, with a prime-line midpoint between them, can allow you to chop out noise and focus on what matters frame by frame. Parent is overall; child is delegated zoom +1 level.
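For the n+1 mention specifically, here is the shape of the problem and the boundary that fixes it (the data and the “queries” here are invented stand-ins):

```javascript
// Hedged sketch of the n+1 pattern: the naive version issues one lookup per
// parent row; the batched version draws a boundary around the whole frame
// and issues a single lookup.
const authorsById = { 1: 'Ada', 2: 'Alan' };
const posts = [{ authorId: 1 }, { authorId: 2 }, { authorId: 1 }];

// n+1: one "query" per post
const naive = posts.map((p) => authorsById[p.authorId]); // 3 lookups

// batched: collect the ids first, then resolve them once
const ids = [...new Set(posts.map((p) => p.authorId))];
const batch = Object.fromEntries(ids.map((id) => [id, authorsById[id]])); // 1 "query"
const batched = posts.map((p) => batch[p.authorId]);

console.log(naive, batched); // same result, very different cost profile at scale
```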

Write these ideas out and track them over the next few years. Ideas related to these ideas are going to blow the door open.

Seeing: You composed with Others

I don’t even understand the full implications of this, but I want to report it because black swans can emerge from anywhere, anytime. Let’s do the action of explaining it, and then see the event of the completed action. I first saw this meme a few years ago on Twitter when Cory House posted it.

I recall it related to possibly some Psychology research. If anyone can identify the source, please drop a comment below and they can be cited, and I will drag the citation up here.

A person can use their reflection to see the unknown side, where no one told them a note that says “kick me” was on it. Based on my observations, this is exactly what the integration of differentials embodies. To me it connects the idea of seeing an object (or a function) by studying the effect it has on things nearby, or studying the effect nearby things have on it.

To understand this intuitively, check this out:

  1. seeable by self + mentioned by others,
  2. unseeable by self + mentioned by others,
  3. seeable by self + unmentioned by others,
  4. unseeable by self + unmentioned by others.

This circle is important to humans, for the same reason each person should discover and optimize their blind spots. Your agents should also triangulate their blind spots and improve them, or at least track them or wonder about them.

I see this as 8 segments within 4 of 2 simple things in 1 area. Are those two simple things also objects and functions?

Outcomes

Think more in terms of outcomes instead of actions. An outcome is in the parent dimension while the actions to get there are in the child dimension. If the actions get botched, dump them or optimize them and maintain course towards a similar outcome. Maybe it’s even better than the original. The outcome is the declarative dimension, and the action is the imperative dimension. You need both, and they work in tandem to create the dimension that both are in. It is almost certainly a fractal of fractals, and you should study functional-reactive programming to build more intuition. Even just look at the juxtaposition between functional and reactive. Functional = to, reactive = from. It makes a V shape with the equality in the middle.
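A tiny sketch of that separation, with an invented outcome and invented plans: the outcome is declared once, and the imperative plans underneath it are disposable and swappable.

```javascript
// The outcome is declarative; the action plans under it are imperative and
// replaceable. Botch a plan, dump it, keep converging on the outcome.
const outcome = { description: 'be at the door', satisfied: (s) => s.atDoor };

const plans = [
  { name: 'walk straight', run: (s) => ({ ...s, atDoor: !s.obstacle }) },
  { name: 'walk around',   run: (s) => ({ ...s, atDoor: true }) },
];

let state = { atDoor: false, obstacle: true };
for (const plan of plans) {
  state = plan.run(state);
  if (outcome.satisfied(state)) {
    console.log('reached outcome via:', plan.name); // "walk around"
    break;
  }
}
```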

I will edit this later to include more information, and I need to cite Lex Fridman’s podcast here. You’re short-changing yourself brutally if you aren’t watching all of Lex’s podcasts.

Equational reasoning

Quite simply, study equational reasoning and use it as often as possible to combat fuzzy logic boundaries and to maintain equilibrium as state changes. Combat tangents that lead to implosions or explosions.

I will edit this later to include more text about pi*r² because it relates to how a circle can grow in static proportions. This relates to a positive feedback loop because a runaway freight train trends towards explosion, and by the same logic but inverse, a thing can trend towards implosion. Starvation is an example of that if you cancel eating every meal. Your agents will follow this as they implement functions that catalyze or perform movement. There must be balance, and it must be protected just as much as state integrity.

A final example

If the robot falls off the seat and dies, why is that so easy for a human and so hard for a robot? The robot is on a thing. It shouldn’t be thinking about that situation imperatively. It should move up one dimension and see itself sitting on a seat, which has a requirement to stay there, and if staying is put at risk or falling starts occurring, the inverse of that event is something like “don’t fall = re-balance from not staying”. So it can execute the don’t-fall function if it was listening for a falling event. Perhaps the robot can kick its leg out or put its leg down. Which costs less according to micro and macro demands? Which hurts no one? Which maximizes continued movement towards success?
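Sketched as code (the events, costs, and recovery options are all invented): the robot listens one dimension up for a falling event, then picks the recovery that hurts no one and costs the least.

```javascript
const { EventEmitter } = require('events');
const robot = new EventEmitter();

const recoveries = [
  { name: 'kick leg out',   cost: 3, hurtsSomeone: false },
  { name: 'put leg down',   cost: 1, hurtsSomeone: false },
  { name: 'grab the table', cost: 2, hurtsSomeone: true },
];

robot.on('falling', () => {
  const choice = recoveries
    .filter((r) => !r.hurtsSomeone)      // which hurts no one?
    .sort((a, b) => a.cost - b.cost)[0]; // which costs less?
  console.log('re-balance by:', choice.name); // "put leg down"
});

robot.emit('falling'); // the don't-fall function runs because we were listening
```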

Note: this robot-chair example stems from Russ Tedrake’s Lex Fridman Podcast episode. There is a mild issue of citations in this article — in some moments, those can be resolved.

If you like this article, I spend most of my time thinking about functional-reactive programming, or at least composing algorithms and/or functions and objects. Some like FP or OOP. I prefer to compose them as the comical unit known as FPOOP.

We need to start pushing more actions into the stars, so study this material here closely, and use it to maximize your objective functions. Also, pun intended.


Adam Mackintosh

I prefer to work near ES6+, node.js, microservices, Neo4j, React, React Native; I compose functions and avoid classes unless private state is desired.