Hi Friends,

Even as I launch this today (my 80th Birthday), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now, as I approach my 90th birthday (27 June 2023), I invite you to visit my Digital Avatar (www.hemenparekh.ai) and continue chatting with me, even when I am no longer here physically.

Thursday, 21 September 2023

YOUR PETITION FOR REGULATING AI DEVELOPMENTS

Dear Friends,

 

Congratulations on this very timely initiative.

 

In this context, you might be interested in my following "chat" with ChatGPT.

 

I urge you to forward this proposal to all 1,100 signatories of the petition.

 

I would appreciate your acknowledgement of this email.

 

With regards,

 

Hemen Parekh

 

www.hemenparekh.ai  /  www.hemenparekh.in

 

hcp@RecruitGuru.com  /  Mumbai, India

 

(M) 091-9867550808

 

 

(Source: https://myblogepage.blogspot.com/2023/04/ai-new-wmd-history-repeats-itself.html):

 

 

"Parekh's Law of Chatbots" is vindicated – by ChatGPT:

 

My poser to ChatGPT:

 

In the following blog, I have tried to compare the actual past development of the Atom / Hydrogen Bomb with a probable future development of Conversational AI (Chatbots).

Then I examined the petition and found that it contains a vaguely similar mechanism for the future regulation of AI to the one I proposed in "Parekh's Law of Chatbots".

Please tabulate these parallel developments (WMD and AI) and comment on my proposal's feasibility.

 

ChatGPT replied:

 

It is interesting to draw a parallel between the development of WMDs (Weapons of Mass Destruction) and the development of AI, specifically chatbots.

In the past, the development of WMDs such as the Atom/Hydrogen bomb was not regulated, leading to significant risks for humanity. Similarly, the development of AI, especially chatbots, is rapidly advancing without a regulatory framework, and some experts warn about its potential risks.

The Future of Life Institute, backed by Elon Musk, Steve Wozniak, and DeepMind engineers, has recently called for a pause in developing advanced AIs until a regulatory framework is established. OpenAI founder Sam Altman and Microsoft founder Bill Gates have also expressed concerns about AI's risks.

The call for a pause in AI development is similar to Parekh's Law of Chatbots, which proposes that chatbots should be regulated to prevent harm to humans.

The law states that chatbots should be designed with human safety in mind, and their development and deployment should be regulated by an international body.

The comparison between the development of WMDs and AI is intriguing, as both pose significant risks to humanity if not regulated appropriately.

It is crucial to have a regulatory framework to ensure that AI is developed and used safely.

Parekh's Law of Chatbots is a step in the right direction, and it is feasible to implement such regulation as long as all stakeholders, including AI developers, governments, and the public, work together to achieve this goal.
