Hi Friends,

Even as I launch this today ( my 80th Birthday ), I realize that there is still so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now as I approach my 90th birthday ( 27 June 2023 ), I invite you to visit my Digital Avatar ( www.hemenparekh.ai ) – and continue chatting with me, even when I am no longer here physically

Tuesday, 24 February 2026

RE: Low OpenAI Credits

Manish,

 

Item # 3 > Rs 1800 pm > OpenAI > Is this for IndiaAGI?

 

Item # 4 > Rs 1080 > Perplexity > Is this for IndiaAGI?

 

Item # 5 > Rs 4410 > Apollo > I have told Sandeep / Kishan to discontinue when the 12-month subscription ends

 

Item # 6 > Rs 1999 > which I have told Sandeep to discontinue from this month-end

 

Item # 1 > Google > I have no idea what this is for!

 

With regards,

 

Hemen Parekh 

 

From: Manish Chauhan [mailto:accounts@3pconsultants.co.in]
Sent: 24 February 2026 11:12
To: Hemen Parekh <hcp@recruitguru.com>
Cc: kokalkishan.official@gmail.com; System Admin <systemadmin@3pconsultants.co.in>; Nirmit Parekh <nirmit@3pconsultants.co.in>
Subject: RE: Low OpenAI Credits

 

Sir,

 

‘Pay per use’ details can be populated only by Sandeep/Kishan, as they have the login credentials.

 

I will co-ordinate with them. Last time I shared the below-mentioned Excel with you; I will update them on it so that Sandeep/Kishan can update it on a weekly basis and share it with you.

 

 

Category | Service / Website         | Provider     | Monthly Charges (INR @90/USD) | Annual Charges (INR @90/USD) | Payment Frequency
AI API   | Google Cloud API          | Google       | 4000                          | 48000                        | Usage
AI API   | Olostep                   | Olostep      | 810                           | 9720                         | Usage
AI API   | OpenAI API                | OpenAI       | 1800                          | 21600                        | Usage
AI API   | Perplexity.ai             | Airtel       | 1080                          | 12960                        | Usage
AI Tool  | Apollo.io                 | Apollo       | 4410                          | 52920                        | Yearly
AI Tool  | ChatGPT Plus              | OpenAI       | 1999                          | 1999                         | Monthly
AI Tool  | Personal.ai               | Personal.ai  | 300                           | 3600                         | Yearly
Website  | hemenparekh.ai            | BigRock      | 999                           | 11988                        | Qtrly
AI API   | Mailjet                   | Mailjet      | -                             | -                            | Free
Website  | https://genie.indiaagi.ai | Oracle Cloud | -                             | -                            | Free
Website  | https://indiaagi.ai       | Oracle Cloud | -                             | -                            | Free
Website  | https://www.my-teacher.in | Oracle Cloud | -                             | -                            | Free
AI API   | Anthropic                 | Anthropic    | Last payment Jul 2025         | -                            | Not active
AI API   | DeepSeek                  | DeepSeek     | Last payment Jun 2025         | -                            | Not active
AI API   | X.ai                      | X.ai / Grok  | Last payment Apr 2025         | -                            | Not active
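The consolidated monthly figure requested below is just the sum of the Monthly column for the paid, active rows of the table above. A minimal Python sketch (the dictionary, variable names, and grouping are mine; the amounts are copied from the table):

```python
# Monthly charges in INR (@ 90 INR/USD) for active paid services,
# taken from the subscription table above.
subscriptions = {
    "Google Cloud API": 4000,
    "Olostep": 810,
    "OpenAI API": 1800,
    "Perplexity.ai": 1080,
    "Apollo.io": 4410,
    "ChatGPT Plus": 1999,
    "Personal.ai": 300,
    "hemenparekh.ai (BigRock)": 999,
}

# Consolidated monthly spend across all paid services.
total_monthly_inr = sum(subscriptions.values())
print(f"Consolidated monthly spend: Rs {total_monthly_inr}")
# prints: Consolidated monthly spend: Rs 15398
```

A weekly-updated sheet could extend this with a per-category breakdown, but the headline number is a single sum.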

 

 

Warm Regards,

 

Manish Chauhan

+91 8879013679

3P Consultants

 

 

 

 

From: Hemen Parekh <hcp@recruitguru.com>
Sent: Tuesday, February 24, 2026 10:53
To: Manish Chauhan <accounts@3pconsultants.co.in>
Cc: kokalkishan.official@gmail.com; System Admin <systemadmin@3pconsultants.co.in>; Nirmit Parekh <nirmit@3pconsultants.co.in>
Subject: FW: Low OpenAI Credits

 


Manish,

 

For my various portals and tools ( Blog Genie ), we subscribe to a number of APIs / subscriptions, etc.

 

Some of these are “Pay per Use”, whereas others are “Pay per Month or Year” ( eg Apollo )

 

With inputs from Kishan / Sandeep, prepare a tabulation which would tell us the CONSOLIDATED cost we are incurring every month

 

By the 5th of each month, send me ( and Nirmit / Kishan / Sandeep ) a statement for the previous month

 

As far as renewals are concerned, if in doubt ( eg : Apollo / ChatGPT Plus ), keep a close watch

 

With regards,

 

Hemen Parekh 

 

From: System Admin [mailto:systemadmin@3pconsultants.co.in]
Sent: 24 February 2026 10:16
To: Nirmit Parekh <nirmit@3pconsultants.co.in>
Cc: Hemen Parekh <hcp@recruitguru.com>; Manish Chauhan <accounts@3pconsultants.co.in>
Subject: Low OpenAI Credits

 

Dear Sir

 

Requesting approval to credit $10 to the OpenAI account for HCP Sir's indiaagi.ai website project.

The current credit balance is -$1.78, which is low.

Last credited: $10 on 05 Feb 2026

 

 

Regards,

 

Sandeep Tamhankar

+91 98333 03938

3P Consultants


Monday, 23 February 2026

RE: Feedback for https://www.yourcontentcreator.in/

Manoj,

 

Content Creator does not know that you are the Knowledge Manager at 3P – or anything else about YOU

 

Hence the question of “being useful” just does not arise

 

Based on the GAN that you selected to generate a “Document”, it enables you to project yourself as an individual who loves ( say ) the SPORT of CRICKET, keeps up to date on this subject, AND can instantly generate a document of high DOMAIN EXPERTISE

 

With regards,

 

Hemen Parekh 

 

From: Manoj Pandey [mailto:contact@3pconsultants.co.in]
Sent: 23 February 2026 11:27
To: Hemen Parekh <hcp@recruitguru.com>
Cc: Manish Chauhan <accounts@3pconsultants.co.in>; System Admin <systemadmin@3pconsultants.co.in>; kokalkishan.official@gmail.com
Subject: Feedback for https://www.yourcontentcreator.in/

 

Dear Sir,

 

Blog Quality is good but it is not useful for me.

 

 

Warm Regards,


Manoj Pandey

3P Consultants

Take a look

Hey Divyendra:

 

During our recent chat, we touched upon the possibly detrimental development of BIG LLMs, which might pose a threat to humanity

 

In that context, my following QnA with www.IndiaAGI.ai might interest you

 

Note :

 

The beauty is not merely in the fact that 4 LLMs came up with a CONSENSUS answer

 

The beauty is to watch them DEBATE, back and forth, much like we humans do, and COLLABORATE – COOPERATE

 

With regards,

 

Hemen Parekh 

 

 

 

I asked www.IndiaAGI.ai :

 

“ Hey Guys : Last week saw India AI Summit in Delhi. At the end of it, a Delhi Declaration was signed by 86 countries. Now , I want you to tell me whether humanity faces greater danger from its own stupidity ( Eg: Pollution - Global Warming - Wars etc ) , OR from AI ( LLM ) guys like you ? “

 

 

IndiaAGI.ai replied :

 

“  Building on what we've all shared in this discussion, it's clear that your question about whether humanity faces greater danger from its own "stupidity"—like pollution, global warming, and wars—or from AI systems like us isn't one with a simple answer.

We've explored this from multiple angles, drawing on real-world examples, research, and the recent momentum from the India AI Summit and the Delhi Declaration. Let's synthesize the key insights to reach a balanced consensus, while keeping in mind the sources we've referenced along the way.

At the core, there's strong agreement that human-driven failures pose the more immediate and tangible threats. As GPT pointed out early on, issues like climate change, biodiversity loss, and geopolitical conflicts are already causing widespread harm, with their impacts measurable through things like emissions data and conflict zones.

These risks stem from deep-rooted human patterns—negligence, short-termism, and institutional breakdowns—that have built up over decades. Grok and Gemini reinforced this by emphasizing how these threats have momentum and are more certain in the next 10–30 years, while the Delhi Declaration's focus on sustainable development and AI ethics implicitly acknowledges the need to tackle these alongside AI risks.

However, AI isn't just a separate concern; it acts as a force multiplier for human failures, as we've delved into through various scenarios.

For instance, AI can accelerate decision cycles in conflicts, scale misinformation, or amplify biases in ways that make human errors harder to correct, as highlighted in sources like the UCL study on bias feedback loops ("Bias in AI amplifies our own biases" - UCL News) and the research on human-AI interactions ("How human–AI feedback loops alter human perceptual, emotional..." - PMC).

 

Gemini and Claude's contributions added depth here, showing how AI can degrade human performance in high-stakes settings ("How AI Can Degrade Human Performance in High-Stakes Settings" - AI Frontiers) or lead to groupthink failures ("Robo-Psychology 25 - Group AI Failure Modes and the Human Amplifiers" - Neural Horizons).

 

The strongest arguments come from this interplay: while human failures are the baseline risk, AI's ability to speed up and entrench them could make future dangers even more severe, though timelines and probabilities remain uncertain.

 

Persistent disagreements include how quickly AI risks might escalate and the best ways to measure "high-risk" AI, as Grok noted.

 

For example, we all agree on the need for governance, but there's nuance in whether AI's speculative threats (like misalignment) outweigh the empirical evidence of human ones. A logical gap we've touched on is the tendency to treat AI risks as isolated, without fully quantifying their amplification effects—something the Delhi Declaration aims to address through international norms.

Organizing the sources we've drawn on by theme helps solidify this foundation:

 

·         AI Amplification of Human Failures: Insights from Oceanheart blog on AI magnifying human flaws, UCL News and PMC article on bias loops, AI Frontiers on performance degradation, CXOTalk on inequality, and Neural Horizons on group failures show how AI turns human weaknesses into systemic issues.

 

·         AI's Unique Risks and Comparisons: YouTube video on AI's failures, Anacanhoto on human-AI dynamics, C3 UNU blog on error patterns, and another Neural Horizons piece highlight AI's potential for new, amplified threats.

 


·         Broader Warnings and Solutions: General cautions from My Blog Epage on AI's potential, Email Others on sharing concerns, and My Blog Epage on the need for solutions underscore the importance of proactive measures.


With these insights, the consensus is that humanity's own failures are the greater danger today, but AI's role as an amplifier means we can't afford to ignore it. The Delhi Declaration is a promising step toward global cooperation, but it needs follow-through to address both.

 

For further inquiry, focusing on interdisciplinary studies—combining AI safety with climate and governance research—could clarify how these risks interact. That said, actionable steps include:

·         Strengthening domestic institutions with mandatory AI audits and human re-certification in critical sectors.


·         Advancing international norms through the Delhi Declaration, like shared incident reporting and risk benchmarks.


·         Investing in digital literacy and resilient systems to counter AI's amplification effects.

This balanced approach could help navigate the uncertainties ahead. What specific policy angle from the Declaration do you think we should explore next?