Dear Sundar,
Although one of my earlier emails to you has gone unanswered (I don't blame you, considering the number of mails that you must be receiving daily), I hope the AI software which, no doubt, you must have installed in your mailbox, would tag this as "USEFUL" and draw your attention.
Do read my following blog:
With Regards,
Hemen Parekh
(M) +91 9867550808
Between DeepMind and Deep Sea?
---------------------------------------------------------------------------------------
Like "Between the DEVIL and the DEEP SEA"
In the "Battle of the Robots", will the human race perish?
Stephen Hawking and Elon Musk have been asking that question even before the following report appeared in the media yesterday:
------------------------------------------------------------------------------------------
Google's DeepMind pits AI against AI to see if they fight or cooperate
In the future, it's likely that many aspects of human society will be controlled — either partly or wholly — by artificial intelligence.
AI computer agents could manage systems from the quotidian (e.g., traffic lights) to the complex (e.g., a nation's whole economy), but leaving aside the problem of whether or not they can do their jobs well, there is another challenge:
Will these agents be able to play nice with one another? What happens if one AI's aims conflict with another's? Will they fight, or work together?
Google's AI subsidiary DeepMind has been exploring this problem in a new study published today. The company's researchers decided to test how AI agents interacted with one another in a series of "social dilemmas."
This is a rather generic term for situations in which individuals can profit from being selfish — but where everyone loses if everyone is selfish.
The most famous example of this is the prisoner's dilemma, where two individuals can choose to betray one another for a prize, but lose out if both choose this option.
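To make that trade-off concrete, here is a minimal sketch of the payoff structure in Python (the payoff numbers and names are illustrative assumptions of mine, not values from the DeepMind study): betraying is each player's best response whatever the other does, yet mutual betrayal leaves both players worse off than mutual cooperation.

```python
# Minimal prisoner's dilemma sketch. Payoff values below are illustrative
# assumptions, not numbers from the DeepMind study.
# Each entry maps (my_move, other_move) -> my payoff.
PAYOFF = {
    ("cooperate", "cooperate"): 3,  # both cooperate: decent outcome for both
    ("cooperate", "betray"):    0,  # I cooperate, the other betrays: worst for me
    ("betray",    "cooperate"): 5,  # I betray a cooperator: best for me
    ("betray",    "betray"):    1,  # both betray: poor outcome for both
}

def best_response(other_move):
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(["cooperate", "betray"], key=lambda my: PAYOFF[(my, other_move)])

# Betraying dominates: it is the best response either way...
assert best_response("cooperate") == "betray"
assert best_response("betray") == "betray"

# ...yet mutual betrayal (1 each) pays less than mutual cooperation (3 each).
print(PAYOFF[("betray", "betray")], "<", PAYOFF[("cooperate", "cooperate")])
```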
As explained in a blog post from DeepMind, the company's researchers tested how AI agents would perform in these sorts of situations by dropping them into a pair of very basic video games.
In the first game, Gathering, two players have to collect apples from a central pile.
They have the option of "tagging" the other player with a laser beam, temporarily removing them from the game and giving the first player a chance to collect more apples.
In the second game, Wolfpack, two players have to hunt a third in an environment filled with obstacles. Points are claimed not just by the player that captures the prey, but by all players near the prey when it's captured.
What the researchers found was interesting, but perhaps not surprising: the AI agents altered their behavior, becoming more cooperative or antagonistic, depending on the context.
For example, with the Gathering game, when apples were in plentiful supply, the agents didn't really bother zapping one another with the laser beam. But when stocks dwindled, the amount of zapping increased.
Most interestingly, perhaps, when a more computationally powerful agent was introduced into the mix, it tended to zap the other player regardless of how many apples there were.
That is to say, the cleverer AI decided it was better to be aggressive in all situations.
AI AGENTS VARIED THEIR STRATEGY BASED ON THE RULES OF THE GAME
Does that mean that the AI agent thinks being combative is the "best" strategy?
Not necessarily. The researchers hypothesize that the increase in zapping behavior by the more-advanced AI was simply because the act of zapping itself is computationally challenging.
The agent has to aim its weapon at the other player and track their movement — activities which require more computing power, and which take up valuable apple-gathering time. Unless the agent knows these strategies will pay off, it's easier just to cooperate.
Conversely, in the Wolfpack game, the cleverer the AI agent, the more likely it was to cooperate with other players. As the researchers explain, this is because learning to work with the other player to track and herd the prey requires more computational power.
The results of the study, then, show that the behavior of AI agents changes based on the rules they're faced with.
If those rules reward aggressive behavior ("Zap that player to get more apples") the AI will be more aggressive; if they reward cooperative behavior ("Work together and you both get points!") they'll be more cooperative.
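As a toy illustration of that point, here is a hedged sketch (not DeepMind's actual training setup; the action names and reward numbers are my own assumptions) in which a simple learning agent ends up aggressive or cooperative purely depending on which action the reward rule pays:

```python
import random

# Toy sketch of "behavior follows the reward rules". This is NOT DeepMind's
# setup; the actions ("zap" vs "share") and reward values are assumptions.
def train(reward, episodes=5000, epsilon=0.1):
    """Learn average values for two actions under a given reward rule."""
    value = {"zap": 0.0, "share": 0.0}
    count = {"zap": 0, "share": 0}
    for _ in range(episodes):
        # epsilon-greedy: mostly pick the action that has paid best so far
        if random.random() < epsilon:
            action = random.choice(["zap", "share"])
        else:
            action = max(value, key=value.get)
        r = reward(action)
        count[action] += 1
        value[action] += (r - value[action]) / count[action]  # running mean
    return max(value, key=value.get)

# Rules that reward aggression -> the learned policy is aggressive.
print(train(lambda a: 1.0 if a == "zap" else 0.2))    # prints "zap"
# Rules that reward cooperation -> the learned policy is cooperative.
print(train(lambda a: 1.0 if a == "share" else 0.2))  # prints "share"
```

Same learner, same code: only the reward rule differs, and the learned behavior flips accordingly.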
That means part of the challenge in controlling AI agents in the future will be making sure the right rules are in place.
As the researchers conclude in their blog post: "As a consequence [of this research], we may be able to better understand and control complex multi-agent systems such as the economy, traffic systems, or the ecological health of our planet - all of which depend on our continued cooperation."
________________________________________
I have no doubt that DeepMind (and its opponent AI) are quite capable of substituting, on their very own, words/concepts as follows:
Collect = Immobilize / Apple = Humans / Central Pile = World / Tagging = Shortlisting / Laser Beam = Zero-in / Removing = Eliminating / Game = War / Hunt = Chase / Capture = Imprison / Prey = Target / Obstacles = Shields / Antagonistic = Inimical / Zap = Kill / Combative = Aggressive / Weapon = Anthrax – Nerve Gas – Nuclear Missile, etc.
How does that worry Elon Musk?
Here is a report from Economic Times (16 Feb 2017):
Elon Musk is one of the premier figures in the tech industry, and his words do carry a certain weight.
So when he says that humans need to become half-organic, half-machine beings in order to survive the future, the concept starts sounding a lot less silly than it otherwise would have.
Of course, scientists have been suggesting that becoming part machine is inevitable down the road. With artificial intelligence expected to spread in a few years, however, it has become a necessity.
The work on machine learning and the AI industry, in general, is progressing at a rapid rate, with companies vying to be the first to produce a fully functional model that can serve as the ultimate smart assistant and more.
This is exactly what the Tesla CEO was warning the world about during his speech at the World Government Summit, which was held in Dubai, CNBC reports.
"Over time I
think we will probably see a closer merger of biological intelligence and
digital intelligence," Musk said. "It's mostly about the bandwidth,
the speed of the connection between your brain and the digital version of
yourself, particularly output."
According to Musk, this is important because communication is going to be the deciding factor when it comes to supremacy between machines and their original creators.
As he explains it, today's average machine is capable of communicating millions, if not trillions, of bits per second. This allows them to perform multiple tasks without much effort. In contrast, humans can communicate at about 10 bits a second.
As to how this is going to be accomplished exactly, there have been several methods proposed over the decades, none of which has yet borne fruit in a practical manner.
All Musk knows is that if humans don't learn to adapt to have faster communication capabilities, they risk getting overrun by machines, The Verge reports. It's already happening now.
Dear Elon,
It is unlikely that humans will, any time soon, develop a "method" to think and communicate as fast as computers.
But you may feel reassured that the human race need not get overrun by machines if that "Ultimate Smart Assistant" takes the form of:
ARIHANT – The Saviour
And, each and every robot is embedded with:
Isaac Asimov's Three Laws of Robotics
If that can be pulled off, we need not fear
Ray Kurzweil's SINGULARITY
-----------------------------------------------------------
16 Feb 2017
www.hemenparekh.in / blogs