How to make Artificial General Intelligence (AGI) better than today (Sept. 8, 2020)

Preface

One constant I find is that it’s extremely difficult to ascertain where humans are at today, so I am inclined to reduce my confidence in this article’s headline. I am also inclined to cite “With These Words I Can Sell You Anything” by William Lutz because it details weasel words and doublespeak in advertising. Call that a referential-transparency prerequisite, and call this article some kind of reference material for beginners or experts who just need a novel keyword or two.

1) Transforming data while using systems of equations

Figure out if anything you’re working on can benefit from the idea of having either 3 equations equated, or 6 equations equated. For example, X/Y = Z/T = F/G (or double the number of equalities), and then figure out if you can use systems of equations while data is static to make transformations that solve for unknowns. Going beyond 1 unknown or 1.5 unknowns is an increase in confidence, such as 1.6 over pi (roughly 51%), which is greater than 50% confidence, maybe enough to act on a hunch.
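
Here is a minimal sketch of what I mean, assuming a chain of equated ratios like the one above, with hypothetical variable names; it simply cross-multiplies to solve for one unknown at a time:

// Illustrative sketch only: solving one unknown from a chain of equated ratios,
// e.g. X/Y = Z/T = F/G, using plain cross-multiplication.
// Variable names are hypothetical placeholders.

// If X/Y = Z/T and X, Y, Z are known, then T = (Z * Y) / X.
const solveForT = (X, Y, Z) => (Z * Y) / X;

// If X/Y = F/G and X, Y, G are known, then F = (X * G) / Y.
const solveForF = (X, Y, G) => (X * G) / Y;

// Example: 2/4 = 3/T = F/10
console.log(solveForT(2, 4, 3));  // 6, because 2/4 === 3/6
console.log(solveForF(2, 4, 10)); // 5, because 2/4 === 5/10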

Try to find unitless units.

2) AGI is a recurrent multi-step algorithm of multi-step algorithms

Think about AI parsing video as maybe a multi-step algorithm that follows a strategy similar to how a human thinks, and figure out that order, such as:

Establish boundaries

Find constraints for an agent’s behaviour (so you can efficiently traverse less area by rejecting garbage basins of attraction). Find the convex and concave boundaries in which some kind of dynamic equilibrium existed before, exists now, and/or will exist after, maybe. Is it converging to garbage? Reject it. Think in 3D, so you might need more equations to solve one unknown across two frames (i.e., moments). Think about an agent’s behaviour in terms of magnetism, a polarity between -1 and 1.
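
As a sketch under those assumptions, treat the agent’s “polarity” toward a symbol as a value clamped to [-1, 1], and reject trajectories whose frame-to-frame deltas grow instead of shrink (that is, they are heading into a garbage basin); the helper names here are hypothetical:

// Clamp an agent's polarity toward a symbol to the range [-1, 1].
const clampPolarity = (value) => Math.max(-1, Math.min(1, value));

// Deltas between consecutive frames; shrinking deltas suggest convergence.
const isConverging = (frames) => {
  const deltas = frames.slice(1).map((v, i) => Math.abs(v - frames[i]));
  return deltas.every((d, i) => i === 0 || d <= deltas[i - 1]);
};

console.log(clampPolarity(2.7));                 // 1
console.log(isConverging([0.9, 0.5, 0.3, 0.2])); // true  -> keep exploring this basin
console.log(isConverging([0.1, 0.4, 0.9]));      // false -> reject it as garbage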

Please do not ever forget that. Are you satisfied with my answer here? Why?

When I first wrote this article and published it, I didn’t realize the full implications. It required me to send a positive Tweet to Gad Saad about costly signals and black swans. This morning, while re-reading my own article, I noticed that the important takeaway from the above “Big Bang” mention is this: what exactly would your AGI do if a black swan delta occurred from a costly signal? It would set some predicate to truthy and propagate it throughout the entire brain, for use in every consideration and/or action.
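
A minimal sketch of that propagation idea, assuming a tiny publish/subscribe bus and a hypothetical blackSwanDetected predicate; every subscriber reads the same flag afterwards:

// Hedged sketch: a black-swan predicate flips to truthy and is broadcast
// to every subscriber, so all later considerations/actions share the flag.
const subscribers = [];
let blackSwanDetected = false; // the predicate

const subscribe = (fn) => subscribers.push(fn);
const publishBlackSwan = () => {
  blackSwanDetected = true;
  subscribers.forEach((fn) => fn(blackSwanDetected));
};

subscribe((flag) => console.log('planning module sees black swan:', flag));
subscribe((flag) => console.log('motor module sees black swan:', flag));
publishBlackSwan();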

f(g(h(x)))

Isn’t it fascinating how x must exist before it goes into h(), before that result goes into g(), before that result goes into f()?
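
To make that ordering concrete, here is a minimal right-to-left composition sketch; the functions f, g, and h are hypothetical stand-ins:

// x flows into h, then into g, then into f.
const compose = (...fns) => (x) => fns.reduceRight((acc, fn) => fn(acc), x);

const h = (x) => x + 1;
const g = (x) => x * 2;
const f = (x) => `result: ${x}`;

console.log(compose(f, g, h)(3)); // "result: 8", i.e. f(g(h(3)))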

Orientation && Precession

b) Don’t forget about precession and orientation in the 2D and 3D spaces. These can relate to any number of agents in the hyperspace due to incentives (pull agent towards symbol), motives (push agent to symbol), rewards (push agent to symbol), and dangers (pull agent from symbol). I see a juicy apple. An “I’m hungry” event is published. I see a bear near it. Incentive is dropping now. A stop-action event fires. Push towards the next idea about hunger.

Imagine publishing and subscribing and listening for events and actions. Make your agents study the deltas between state changes, and make them use systems of equations to chop out noise and calculate something optimal in the next frame. This is how functional-reactive programmers think about how an application’s state changes. Hyper-dimensional dynamic equilibrium must exist.
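
As a sketch of that apple/bear flow under assumed event names and weights (all hypothetical), expressed as published events and state deltas:

// Published events and state deltas for the apple/bear scenario.
const events = [];
const publish = (type, payload) => events.push({ type, payload });

let state = { hunger: 0.8, incentive: 0 };

publish('SEE_FOOD', { item: 'apple' });
state = { ...state, incentive: state.hunger };          // incentive pulls toward the symbol

publish('SEE_DANGER', { item: 'bear' });
state = { ...state, incentive: state.incentive - 0.9 }; // danger pulls away from the symbol

// Study the delta between state changes before committing to the next action.
if (state.incentive <= 0) publish('STOP_ACTION', { nextIdea: 'other food source' });

console.log(events.map((e) => e.type)); // ['SEE_FOOD', 'SEE_DANGER', 'STOP_ACTION']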

ui = fn(state)

user interface = the recurrent render function of the current state

  • This is no different than y = f(x).
  • Notice how the deltas are magnitudinally proportional.
  • As y changes, f(x) changes by the exact same amount to maintain the balance of the equality.
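
To make the ui = fn(state) point concrete, here is a minimal sketch where the UI is re-derived from state every time state changes, exactly like y = f(x); the markup and state shape are hypothetical:

// The UI is a pure function of the current state.
const render = (state) => `<p>count: ${state.count}</p>`; // ui = fn(state)

let state = { count: 0 };
console.log(render(state)); // "<p>count: 0</p>"

state = { ...state, count: state.count + 1 }; // the delta in state...
console.log(render(state)); // ...produces a proportional delta in the output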

Declarative vs. Imperative

Think more often in terms of how a parent dimension REALLY relates to a child dimension when you are looking for shortcuts for computational efficiency and memory storage. A parent is above, so it can contain the compressed/packed/abstracted version of something. When you zoom in, you go from parent to child, maybe.
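
A hedged sketch of that compression idea, assuming a simple tree where the parent keeps only a summary (here, a sum) of its children; node shapes and names are hypothetical:

// The parent stores the packed/abstracted view; the detail lives one level down.
const child = (value) => ({ value });
const parent = (children) => ({
  summary: children.reduce((acc, c) => acc + c.value, 0), // compressed view
  children,                                               // zoomed-in detail
});

const node = parent([child(2), child(3), child(5)]);
console.log(node.summary);           // 10 -> cheap, read from the parent
console.log(node.children[2].value); // 5  -> zoom in only when needed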

  • Vue.js uses event emitters more often to cross-pollinate closures.
  • Newton and Leibniz would probably use differential equations to integrate what the state change was, but that’s implicit and yucky compared to explicit state changes with Mealy finite state machines.
Fig ?.? — NP != P

Node 1 (the top circle)

  • has knowledge of all upstream nodes

Node 2 (the bottom circle)

  • has no knowledge of upstream nodes, and so cannot ascertain true quality and relevance to, well, anything relative to everything
  • this is an artifact of directed acyclic graphs where child closures cannot reach into parent closures unless the parent provides a function to do so (see the sketch after this list)
  • additionally, child closures have no awareness of what upstream nodes are connected to sibling closures unless they are connected to the context
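
Here is a minimal closure sketch of that point, with hypothetical names; the child scope can only see the parent’s state when the parent hands it a function that closes over that state:

// A child scope cannot reach into its parent unless the parent provides an accessor.
const makeParent = () => {
  const parentSecret = 'upstream knowledge';
  const readParent = () => parentSecret; // the function the parent chooses to provide

  const makeChild = (accessor) => ({
    // Without `accessor`, this child has no awareness of parentSecret,
    // nor of anything connected to its siblings.
    peek: () => (accessor ? accessor() : 'no knowledge of upstream nodes'),
  });

  return { blessedChild: makeChild(readParent), isolatedChild: makeChild(null) };
};

const { blessedChild, isolatedChild } = makeParent();
console.log(blessedChild.peek());  // "upstream knowledge"
console.log(isolatedChild.peek()); // "no knowledge of upstream nodes"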

Seeing: You composed with Others

I don’t even understand the full implications of this, but I want to report it because black swans can emerge from anywhere, anytime. Let’s do the action of explaining it, and then see the event of the completed action. I first saw this meme a few years ago on Twitter when Cory House posted it.

  1. unseeable by self + mentioned by others,
  2. seeable by self + unmentioned by others,
  3. unseeable by self + unmentioned by others.

Outcomes

Think more in terms of outcomes instead of actions. An outcome is in the parent dimension while the actions to get there are in the child dimension. If the actions get botched, dump them or optimize them and maintain course towards a similar outcome. Maybe it’s even better than the original. The outcome is the declarative dimension, and the action is the imperative dimension. You need both, and they work in tandem to create the dimension that both are in. It is almost certainly a fractal of fractals, and you should study functional-reactive programming to build more intuition. Even just look at the juxtaposition between functional and reactive. Functional = to, reactive = from. It makes a V shape with the equality in the middle.
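
A hedged sketch of that separation, with hypothetical names: the outcome is declared once, while the imperative actions underneath can be dumped and replaced without changing course toward it:

// Declarative: "be at position 10 or beyond". Imperative: how we get there.
const outcome = (state) => state.position >= 10;

const walk = (state) => ({ ...state, position: state.position + 1 });
const run = (state) => ({ ...state, position: state.position + 3 });

let state = { position: 0 };
let step = walk;
while (!outcome(state)) {
  state = step(state);
  if (state.position === 4) step = run; // the original actions were too slow: swap them
}
console.log(state.position); // >= 10, same outcome, different actions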

Equational reasoning

Quite simply, study equational reasoning and use it as often as possible to combat fuzzy logic boundaries and to maintain equilibrium as state changes. Combat tangents that lead to implosions or explosions.
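
A minimal sketch of equational reasoning with referentially transparent functions; because each call can be replaced by its value, you can substitute equals for equals without changing the program:

// Two pure functions; the function names are hypothetical.
const double = (x) => x * 2;
const inc = (x) => x + 1;

// double(inc(3))
// = double(4)   -- substitute equals for equals
// = 8
console.log(double(inc(3)) === 8); // true, so the rewrite above is safe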

A final example

If the robot falls off the seat and dies, why is staying seated so easy for a human and so hard for a robot? The robot is on a thing. It shouldn’t be thinking about that situation imperatively. It should move up one dimension and see itself sitting on a seat, which carries a requirement to stay there; if staying is at risk or falling starts occurring, the inverse of that event is something like “don’t fall = re-balance from not staying”. So it can execute the don’t-fall function if it was listening for a falling event. Perhaps the robot can kick its leg out or put its leg down. Which costs less according to micro and macro demands? Which hurts no one? Which maximizes continued movement towards success?
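
A sketch of that listener idea under assumed event names and handlers (all hypothetical): the “don’t fall” response is declared once at the stay-seated level, and the imperative details are chosen per event:

// Tiny event emitter: the robot subscribes to 'falling' and re-balances.
const listeners = {};
const on = (event, fn) => {
  listeners[event] = listeners[event] || [];
  listeners[event].push(fn);
};
const emit = (event, payload) => (listeners[event] || []).forEach((fn) => fn(payload));

const rebalance = ({ tilt }) =>
  console.log(tilt > 0 ? 'kick the left leg out' : 'put the right leg down');

on('falling', rebalance);       // declared once, at the "stay seated" level
emit('falling', { tilt: 0.3 }); // imperative detail chosen when the event fires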

Adam Mackintosh

I prefer to work near ES6+, node.js, microservices, Neo4j, React, React Native; I compose functions and avoid classes unless private state is desired.