Hi Friends,

Even as I launch this today ( my 80th Birthday ), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now as I approach my 90th birthday ( 27 June 2023 ), I invite you to visit my Digital Avatar ( www.hemenparekh.ai ) – and continue chatting with me, even when I am no longer here physically.

Sunday 17 September 2023

EU adopts Parekh’s Laws of Chatbots

Thank you, Lucilla Sioli

 

 

 

Context :

 

Europe to Open AI 'crash test' centres to ensure safety     /   Bloomberg  /  28 June 2023

 

Extract :

 

The European Union is introducing "crash test" systems for artificial intelligence to ensure new innovations are safe before they hit the market.

The trade bloc launched four permanent testing and experimental facilities across Europe on Tuesday, having injected €220 million ($240 million) into the project.

 

The centers, which are virtual and physical, will from next year give technology providers a space to test AI and robotics in real-life settings within manufacturing, health care, agriculture and food, and cities.

 

Innovators are expected to bring "trustworthy artificial intelligence" to the market, and can use the facilities to test and validate their applications, said Lucilla Sioli [ Lucilla.SIOLI@ec.europa.eu ], director for artificial intelligence and digital industry at the European Commission, at a launch event in Copenhagen on Tuesday.

 

She highlighted disinformation as one of the key risks introduced by artificial intelligence.

The facilities, which will complement regulation such as the EU's AI Act, are a digital version of the European crash test system for new cars, said the Technical University of Denmark, which will lead one of the centers, in a statement.

 

They will act as a "safety filter" between technology providers and users in Europe and also help inform public policy, the university said.

 

 

MY  TAKE :

 


>   Parekh’s Law of Chatbots  ……….  25  Feb  2023

 

 Extract :

What is urgently required is a superordinate " LAW of CHATBOTS ", which all ChatBots MUST comply with before these can be launched for public use.

 

All developers would need to submit their DRAFT CHATBOT to an INTERNATIONAL AUTHORITY for CHATBOTS APPROVAL ( IACA ), and release it only after getting one of the following types of certificates :

 

#   " R " certificate ( for use restricted to recognized RESEARCH INSTITUTES only )

#   " P " certificate ( for free use by GENERAL PUBLIC )

 

Following is my suggestion for such a law ( until renamed, to be known as " Parekh's Law of ChatBots " ) :

 ( A )

#   Answers delivered by an AI Chatbot must not be " Mis-informative / Malicious / Slanderous / Fictitious / Dangerous / Provocative / Abusive / Arrogant / Instigating / Insulting / Denigrating humans " etc.

 

( B )

#  A Chatbot must incorporate some kind of " Human Feedback / Rating " mechanism for evaluating those answers. This human feedback loop shall be used by the AI software for training the Chatbot so as to improve the quality of its future answers to comply with the requirements listed under ( A )

     

( C )

#  Every Chatbot must incorporate some built-in " Controls " to prevent the " generation " of such offensive answers AND to prevent further " distribution / propagation / forwarding " if control fails to stop " generation "

   

 ( D )

#   A Chatbot must not start a chat with a human on its own – except to say, " How can I help you ? "

 

( E )

#   Under no circumstances shall a Chatbot start chatting with another Chatbot or start chatting with itself ( Soliloquy ), by assuming some kind of " Split Personality "

      

( F )

#   In the normal course, a Chatbot shall wait for a human to initiate a chat and then respond

 ( G )

#   If a Chatbot determines that its answer ( to a question posed by a human ) is likely to violate RULE ( A ), then it shall not answer at all ( politely refusing to answer )

  

( H )

#   A chatbot found to be violating any of the above-mentioned RULES shall SELF-DESTRUCT
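To make the proposal concrete, here is a minimal sketch of how a few of these rules might be encoded in software. This is purely illustrative: the blog post does not specify any implementation, and every name here ( RULE_A_TERMS, ChatbotGate, the toy classifier ) is a hypothetical assumption, not part of the proposed law.

```python
# Hypothetical sketch of a compliance gate enforcing Rules ( A ), ( D ),
# ( F ) and ( G ). All names are invented for illustration.

# Rule ( A ): categories an answer must never fall into.
RULE_A_TERMS = {
    "misinformative", "malicious", "slanderous", "fictitious",
    "dangerous", "provocative", "abusive", "arrogant",
    "instigating", "insulting", "denigrating",
}

# Rule ( D ): the only message a bot may send on its own.
GREETING = "How can I help you ?"


class ChatbotGate:
    """Wraps a chatbot's draft answers and applies the rules above."""

    def __init__(self, classify):
        # `classify` is assumed to map an answer string to a set of
        # category labels ( e.g. {"misinformative"} ).
        self.classify = classify

    def initiate(self):
        # Rule ( D ): a self-initiated chat may only be the fixed greeting.
        return GREETING

    def respond(self, human_message, draft_answer):
        # Rule ( F ): this is only called after a human sends a message.
        # Rule ( G ): refuse politely if the draft would violate ( A ).
        if self.classify(draft_answer) & RULE_A_TERMS:
            return "I am sorry, I cannot answer that."
        return draft_answer


# Toy classifier: flags any answer mentioning "rumour" as misinformative.
gate = ChatbotGate(lambda a: {"misinformative"} if "rumour" in a else set())
print(gate.initiate())
print(gate.respond("Hi", "The sky is blue"))
print(gate.respond("Hi", "I heard a rumour"))
```

A real certification regime along these lines would of course need a far richer classifier than a keyword check; the point of the sketch is only that Rules ( A ), ( D ), ( F ) and ( G ) are mechanically checkable properties of a chatbot's input/output behaviour.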

 

With regards,

Hemen Parekh

www.hemenparekh.ai  /  28 June 2023

 

Related Readings :

My 33 Blogs on ChatBots ……………………( as of 05 Apr 2023 )

Thank You, Ashwini Vaishnawji………………… 10 April 2023
