Racing towards ARIHANT?
Today { year 2017 } :
Mumbai Mirror (03 Aug) carries the following news report:
“Robots tackle extremist videos better: Google”
Google has said that its artificial intelligence (AI)-driven robots are more accurate than humans in identifying and blocking extremist videos online.
According to a report in The Telegraph on Tuesday, Google claimed that its robots flag three in four offensive videos from YouTube even before they are reported by users.
"With over 400 hours of content
uploaded to YouTube every minute, finding and taking action on violent extremist content poses a
significant challenge," the report quoted Google as saying.
"But over the past month, our initial use of machine learning has more than doubled both the number of videos we've removed for violent extremism, as well as the rate at which we've taken this kind of content down," the company added.
The AI-driven bots at Google, which are meant to identify offensive content, identified more than 75 per cent of the videos removed from YouTube in July.
The company sees this ability of the bots as "dramatic" and better than that of humans.
Facebook is also using AI to remove terrorist propaganda and related content, using image recognition software and video technology.
Social media companies and other Internet companies have been under the scanner for doing little to curb the spread of violence and terror-related content on their platforms.
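What the report describes is, in essence, an automated classifier that scores every new upload and routes likely violations to human reviewers before any user complaint arrives. A minimal sketch of that flag-then-review flow follows; the class names, keyword watchlist and threshold are my own illustrative assumptions, not Google's actual system.

```python
# Minimal sketch of a "flag before users report" pipeline, as described in the
# news report above. All names, the keyword watchlist and the threshold are
# illustrative assumptions -- Google's actual system is not public.

from dataclasses import dataclass, field
from typing import List

FLAG_THRESHOLD = 0.3  # assumed cut-off above which a video is sent to human review

@dataclass
class Video:
    video_id: str
    title: str
    transcript: str
    flagged_terms: List[str] = field(default_factory=list)

def extremism_score(video: Video) -> float:
    """Toy scorer: fraction of watch-listed phrases found in the title/transcript.
    A production system would use classifiers trained on video, audio and text."""
    watchlist = {"call to violence", "join the attack", "martyrdom operation"}  # illustrative only
    text = (video.title + " " + video.transcript).lower()
    video.flagged_terms = [phrase for phrase in watchlist if phrase in text]
    return len(video.flagged_terms) / len(watchlist)

def triage(uploads: List[Video]) -> List[Video]:
    """Return the uploads the 'robot' flags for human review, before any user report."""
    return [v for v in uploads if extremism_score(v) >= FLAG_THRESHOLD]
```

The point the report makes is really about the ordering of work: the automated score runs at upload time, so the review team sees likely violations before (or instead of) waiting for a user to flag them.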
Flashback { year 2013 } :
Eric Schmidt and Jared Cohen (both from Google) wrote in THE NEW DIGITAL AGE (page 181):
“We can already see early steps in this direction, as some companies make clear statements in policy and procedure.
At YouTube, there is the challenge of content volume. With more than four billion videos viewed daily (and sixty hours of video uploaded each minute), it is impossible for the company to screen that content for what is considered inappropriate material, like advocating terrorism.
Instead, YouTube relies on a process in which users flag content they consider inappropriate; the video in question goes to a YouTube team for review, and it is taken down if it violates company policies.
Eventually, we will see the emergence of industry-wide standards.
All digital platforms will forge a common policy with respect to dangerous extremist videos online, just as they have coalesced in establishing policies governing child pornography.
There is a fine line between censorship and security, and we must create safeguards accordingly.
THE INDUSTRY WILL WORK AS A WHOLE TO DEVELOP SOFTWARE THAT MORE EFFECTIVELY IDENTIFIES VIDEOS WITH TERRORIST CONTENT.
Some in the industry may even go so far as employing speech-recognition software that registers strings of keywords, or facial-recognition software that identifies known terrorists.”
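Schmidt and Cohen's last point, speech-recognition software that "registers strings of keywords", amounts in practice to running an automatic transcript of the video's audio through a phrase matcher. A hedged sketch of that idea is below; the transcription step is only a placeholder, and the phrase list, function names and sample text are my own assumptions, not anything from the book.

```python
# Sketch of speech-recognition output being checked against "registered strings
# of keywords". The transcription step is a placeholder; the phrase list,
# function names and sample text are illustrative assumptions.

import re
from typing import List

REGISTERED_PHRASES = [r"call to arms", r"attack the \w+", r"pledge allegiance to \w+"]  # illustrative

def transcribe_audio(audio_path: str) -> str:
    """Placeholder for a real speech-to-text engine; not implemented in this sketch."""
    raise NotImplementedError("plug in an actual speech-recognition service here")

def registered_matches(transcript: str) -> List[str]:
    """Return every registered phrase pattern found in the transcript."""
    hits: List[str] = []
    for pattern in REGISTERED_PHRASES:
        hits.extend(re.findall(pattern, transcript, flags=re.IGNORECASE))
    return hits

if __name__ == "__main__":
    sample = "The speaker issues a call to arms and urges viewers to attack the convoy."
    print(registered_matches(sample))  # ['call to arms', 'attack the convoy']
```

Facial recognition against a watchlist of known individuals would be a separate matching step on video frames, but the overall pattern, automated extraction followed by comparison against a registered list, is the same.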
=======================================================
Fast Forward { year 2020 } :
Sergey Brin / Elon Musk / Satya Nadella / Mark Zuckerberg / Jeff Bezos / Ray Kurzweil / Tim Cook / Vishal Sikka / Nandan Nilekani / Pramod Verma etc., come together and examine the following:
Fast Forward to Future [ F3 ]
======================================================
04 Aug 2017
www.hemenparekh.in / blogs
With Regards,
Hemen Parekh
(M) +91 - 98,67,55,08,08