

Is Artificial Intelligence what we make, or is it what makes itself?
There are three kinds of AI. Or, better:
there are three steps. ANI, AGI, ASI.
Narrow, General, Super.
We humans can deal only with the first
two Artificial Intelligences,
but the last one is their evolution.

This article, now public, was written for the 5th Commission of The Future Makers Association.


There are three types of Artificial Intelligence: the first one is ANI, the second AGI and the third ASI.


  • ANI, Artificial Narrow Intelligence


Artificial Narrow Intelligence, also known as Weak AI, is an AI that specializes in one area. There's AI that can beat the world chess champion at chess, but that's the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it'll shut off.

Today we see plenty of ANI systems: IBM Watson. Google's search algorithm. Amazon suggesting what to buy, Facebook suggesting a friend to add, LinkedIn a company to follow, Google begging you to join G+… oh wait – is it still online? Email spam filters. Cars with electronic systems (like the Volkswagens that cheated emission tests). Phones (maps, Siri, Pandora and so on). You find ANI systems in the financial market, where a single glitch once wiped out something like a trillion dollars. When a plane lands, no human decides which gate it should go to. Just like no human decided how much you were asked to pay (see: evil AI).

They should not be considered anything but tiny little bricks that one day will take us to AGI or ASI. Our ANIs "are like amino acids in Earth's primordial ooze".
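The "one area" point is easy to see in code. Below is a minimal sketch of a toy ANI, a keyword-based spam filter in Python; the keyword list and threshold are made up for illustration, not taken from any real product. It can flag messages all day long, but ask it anything else and it has literally nothing to offer.

```python
# A toy ANI: a keyword-based spam filter. It does exactly one narrow task.
# The keyword set and threshold below are illustrative assumptions.
SPAM_WORDS = {"winner", "free", "prize", "click"}

def looks_like_spam(message: str, threshold: int = 2) -> bool:
    """Count suspicious words; flag the message once it crosses the threshold."""
    words = message.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?") in SPAM_WORDS)
    return hits >= threshold

print(looks_like_spam("You are a winner! Click for your free prize"))  # True
print(looks_like_spam("Lunch tomorrow at noon?"))                      # False
```

However well it filters, nothing in this program generalizes: that gap between one task and all tasks is exactly the jump from ANI to AGI.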


  • AGI, Artificial General Intelligence


AKA Strong AI, or Human-level AI. A computer that can compete with a human across a long list of tasks (and that can increase its own abilities). It involves the ability to reason, think abstractly, comprehend complex ideas, learn and solve problems. Easy, right?


First we need to understand how the brain works. Then copy it. Then add tons of computational firepower. Then mix and blend it all together and hope that something other than Clippy pops out of Word when you open it.

Another way to think about it is Coherent Extrapolated Volition, a term coined by Eliezer Yudkowsky, a research fellow of the Machine Intelligence Research Institute (Berkeley, CA), who thought we should find a way to program AI so that it would act in our best interests – doing what we want it to do, not what we tell it to do.


  • ASI, Artificial Superintelligence


ASI is the OverLord. The Intelligence that no man (or woman, to be fair and equal) can match. Something way smarter than us, that knows everything or comes close. Plus, it's friendly and attractive (because it knows humans are biased toward anything they find attractive). It can solve any healthcare problem. With its long sight it can see solutions we cannot foresee, decades before we even get to think about them. Woah!

Incidentally, it's what we fear the most, because (since we're speaking of "hot air" right now) we can't – and won't be able to – control it. It will (probably) have the ability to rewrite its own code. What if it finds out humankind is ruining the planet? What if it's just very anxious and doesn't like us? What if we can't understand it?

What if – and I'm sadly a bit confident about this – he/she/it is the last step of human growth and evolution?



