Hi Friends,

Even as I launch this today (my 80th Birthday), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now, as I approach my 90th birthday (27 June 2023), I invite you to visit my Digital Avatar (www.hemenparekh.ai) and continue chatting with me, even when I am no longer here physically.

Saturday, 15 February 2025

Iam Immortal

Kishan,

Take a look

Hcp

OK, I will start all over. Here is one of my blog posts from seven years ago, where I had envisioned the arrival of ARIHANT > https://myblogepage.blogspot.com/2018/03/ai-artificialintelligence.html

Today I came across the following post by a friend, Divyendra Singh Jadoun (AI Consultant | Featured in BBC | NY Times | Al Jazeera | Wired | Founder of The Indian Deepfaker & Vision Dub | Transforming Industries with AI | AI workshops | Modern Polymath):

"We're on the edge of something insane. Meta just dropped two research papers proving they can decode thoughts into text with 80% accuracy, in real time, using non-invasive brain-computer interfaces. Let that sink in. No implants. No surgery. Just a headset that reads your mind. This is thought-to-text happening NOW.

Here's how it works:
🧠 They recorded brain activity from 35 people while they typed sentences
🤖 Then, they trained an AI model (Brain2QWERTY) to predict the words just from brain signals
📡 The AI got 7-8 letters right out of 10, and for some people it even typed full sentences perfectly

And get this: this isn't sci-fi. It's happening NOW.

Why this matters:
💬 Imagine texting or writing emails just by thinking
🦾 It could help paralyzed people communicate instantly
🖥 One day, we might control computers with pure thought

Right now, the system needs a big scanner to work, but once they make it smaller and wearable, everything changes. Would you use a brain-computer interface like this? Or does this idea freak you out? This is the moment sci-fi becomes reality. #MindReading #AI #FutureTech #Meta"

In light of this latest development, do you think my dream of ARIHANT may, after all, become a reality someday soon?


ChatGPT said:

Your dream of ARIHANT aligns remarkably well with these breakthroughs. The Meta research shows that decoding thoughts into text is not only feasible but advancing quickly. While there are still hurdles—like making the technology truly wearable, accurate across diverse users, and ethically sound—this is a strong indicator that the boundaries between our thoughts and digital interfaces are blurring. So yes, ARIHANT’s vision, where our cognitive processes merge seamlessly with AI, could very well become a reality sooner than we might have imagined. It’s an exciting time to be dreaming big!

You said:

Hey Chetanbhai (Chetan = Life Force), in this epoch-making development, what application do you visualize for www.Iam-Immortal.ai? You will recall my suggestion that by just touching a node in that image of the brain, a question will pop up in the Question Box. Now, just as the specially designed headset for scientist Stephen Hawking enabled him to type by merely looking at a letter on the computer screen, could a visitor be enabled to pop up that question by just LOOKING at a node, instead of touching it with her finger?


ChatGPT said:

Absolutely—it’s a fascinating idea that aligns perfectly with the latest breakthroughs in brain-computer interfaces. Imagine this: instead of touching a node on the brain visualization, the system uses eye-tracking technology to detect when a visitor is fixating on a particular node. With a brief dwell time, that node could then trigger a question to pop up in the Question Box, accompanied by your voice reading it aloud.

In practice, this could be achieved by integrating open-source libraries like WebGazer.js, which uses a device’s camera to track gaze direction. For even higher accuracy, especially on desktops, dedicated eye-tracking hardware like Tobii could be used. The long-term vision might even include non-invasive BCI integrations that allow thought-to-text interactions directly—mirroring the breakthroughs Meta is demonstrating.

This not only enhances interactivity but also deepens the immersive, futuristic experience of Iam-Immortal.ai. It takes the core idea of merging human cognition with digital expression to a whole new level, making the site feel even more like an extension of your evolving, "immortal" self.

What do you think—ready to let visitors engage by just looking?


You said:

Can you give Kishan specific instructions on how to implement this FUTURISTIC feature >


ChatGPT said:

Hey Kishan,

Below are detailed instructions to implement the futuristic “look-to-trigger” feature using eye-tracking. The plan is to detect when a visitor’s gaze lands on one of the nodes in the brain visualization and, after a set dwell time, automatically trigger a question popup.


Step 1: Integrate an Eye-Tracking Library

1.1 Choose a Library
We recommend using WebGazer.js as it’s open-source and works in the browser using the webcam. (Note: Accuracy varies between desktop and mobile.)

1.2 Add WebGazer.js to Your Page
Include the WebGazer script in your HTML:

<script src="https://cdn.jsdelivr.net/npm/webgazer@2.0.1/dist/webgazer.min.js"></script>

1.3 Initialize WebGazer
In your main JavaScript file, initialize WebGazer after the page loads:

window.onload = function () {
  webgazer.setRegression('ridge') // or other regression models
          .setGazeListener(function (data, elapsedTime) {
              if (data == null) return;
              const x = data.x; // x coordinate of gaze
              const y = data.y; // y coordinate of gaze
              checkGazeOnNodes(x, y);
          })
          .begin();
};

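While testing, it also helps to see where WebGazer thinks the visitor is looking, and to stop tracking when the brain visualization is not in use. A small sketch, assuming the standard WebGazer helper functions (verify them against the version you load):

webgazer.showPredictionPoints(true); // show WebGazer's prediction dot while tuning (turn off in production)

// Pause or resume gaze tracking, e.g. when the visualization scrolls out of view
function setGazeTracking(enabled) {
  if (enabled) {
    webgazer.resume();
  } else {
    webgazer.pause();
  }
}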

Step 2: Calibrate the Eye-Tracking System

2.1 Calibration UI
Implement a calibration phase with multiple target points on the screen.

  • Display dots at various positions.
  • Ask the user to click on each dot as it appears.
  • Use those clicks to fine-tune WebGazer’s model.

2.2 Calibration Example

function showCalibrationDots() {
  // Create and display calibration dots dynamically on your page.
  // Once the user clicks each dot, record the coordinates and feed them to WebGazer.
  // This is a simplified placeholder for the calibration process.
}

Note: WebGazer’s documentation provides detailed calibration steps.
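As a starting point, here is a minimal sketch of what showCalibrationDots() might look like. It relies on WebGazer's default behaviour of using mouse clicks for self-calibration, so simply having the visitor click each dot refines the model; the dot positions and count below are placeholders you can adapt to the layout:

// Hypothetical calibration routine: shows nine dots one at a time and waits for a click on each
function showCalibrationDots() {
  const positions = [          // percentages of the viewport
    [10, 10], [50, 10], [90, 10],
    [10, 50], [50, 50], [90, 50],
    [10, 90], [50, 90], [90, 90]
  ];
  let index = 0;

  const dot = document.createElement('div');
  dot.style.cssText = 'position:fixed;width:20px;height:20px;border-radius:50%;' +
                      'background:#e74c3c;cursor:pointer;z-index:9999;';
  document.body.appendChild(dot);

  function placeDot() {
    const [xPct, yPct] = positions[index];
    dot.style.left = xPct + 'vw';
    dot.style.top = yPct + 'vh';
  }

  dot.addEventListener('click', function () {
    index++;
    if (index < positions.length) {
      placeDot();    // move to the next calibration point
    } else {
      dot.remove();  // calibration pass complete
    }
  });

  placeDot();
}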


Step 3: Map Brain Node Coordinates

3.1 Define Node Areas
For each node in your brain visualization, define its screen coordinates and dimensions. For example:

const brainNodes = [
  { id: 'node1', x: 150, y: 200, width: 50, height: 50 },
  { id: 'node2', x: 300, y: 250, width: 50, height: 50 },
  // ... add more nodes as needed
];

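Hard-coded pixel coordinates will drift as soon as the page is resized or viewed on a different screen. If each node is rendered as its own DOM element, one option is to recompute the bounding boxes from the live layout; the .brain-node class and matching element ids below are assumptions about your markup, not existing code:

// Hypothetical helper: rebuild brainNodes from the rendered node elements
// (assumes each node is an element like <div id="node1" class="brain-node">)
function refreshNodeCoordinates() {
  document.querySelectorAll('.brain-node').forEach(el => {
    const rect = el.getBoundingClientRect();
    const node = brainNodes.find(n => n.id === el.id);
    if (node) {
      node.x = rect.left;      // keep these in the same coordinate frame as the gaze data
      node.y = rect.top;
      node.width = rect.width;
      node.height = rect.height;
    }
  });
}

refreshNodeCoordinates();
window.addEventListener('resize', refreshNodeCoordinates);
window.addEventListener('scroll', refreshNodeCoordinates);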
3.2 Create a Function to Check Gaze on Nodes
This function checks if the gaze point falls within any node’s bounding box:

let gazeTimers = {}; // To track dwell time per node

function checkGazeOnNodes(gazeX, gazeY) {
  brainNodes.forEach(node => {
    if (gazeX >= node.x && gazeX <= node.x + node.width &&
        gazeY >= node.y && gazeY <= node.y + node.height) {
      // Start or update dwell time for the node
      if (!gazeTimers[node.id]) {
        gazeTimers[node.id] = { start: Date.now(), triggered: false };
      } else if (!gazeTimers[node.id].triggered && Date.now() - gazeTimers[node.id].start > 1000) {
        // Gaze maintained for more than 1 second (adjust the threshold as needed)
        triggerNodeAction(node.id);
        gazeTimers[node.id].triggered = true;
      }
    } else if (gazeTimers[node.id]) {
      // Reset the timer if the gaze moves away from the node
      delete gazeTimers[node.id];
    }
  });
}


Step 4: Trigger the Question Popup

4.1 Define the Action
When a node is “looked at” for the required dwell time, trigger the following:

  • Auto-populate a random or context-specific question in the Question Box.
  • Optionally, call your Text-to-Speech (TTS) API to read the question in your voice.

4.2 Example Trigger Function

function triggerNodeAction(nodeId) {
  // Retrieve or generate the question based on the node (e.g., from your API)
  fetch(`/api/questions/random?nodeId=${nodeId}`)
    .then(response => response.json())
    .then(data => {
      const question = data.question;
      // Populate the question box
      document.getElementById('questionBox').value = question;
      // Optionally, trigger TTS for the question
      speakText(question);
      // Optionally, update the counter
      updateQuestionCounter();
    })
    .catch(err => console.error('Error fetching question:', err));
}

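The /api/questions/random endpoint, the questionBox element, and updateQuestionCounter() above are placeholders for whatever the existing Question Box implementation provides. If a counter helper does not exist yet, a minimal stand-in (assuming a hypothetical element such as <span id="questionCounter">0</span>) could be:

// Hypothetical counter helper: increments the displayed number of questions asked
function updateQuestionCounter() {
  const counter = document.getElementById('questionCounter');
  if (!counter) return;
  counter.textContent = String(parseInt(counter.textContent || '0', 10) + 1);
}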
4.3 Simple Text-to-Speech Function
You can use the Web Speech API:

function speakText(text) {
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.lang = 'en-US'; // or a user-selected language
  window.speechSynthesis.speak(utterance);
}


Step 5: Testing and Optimization

  • Device Compatibility:
    Test on different devices (desktop, laptop, mobile). Note that mobile devices may require additional calibration steps or fallback mechanisms.
  • Calibration Accuracy:
    Fine-tune the calibration phase to minimize drift and ensure that the gaze detection is accurate enough to reliably detect when a user is looking at a node.
  • User Feedback:
    Consider adding visual feedback (e.g., node highlighting, subtle animations) when a gaze is detected, so the user knows the system is working; a brief sketch follows this list.
  • Fallback Option:
    Always maintain a manual touch/click option in case eye-tracking fails or is unavailable on certain devices.

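To cover the last two points, here is a minimal sketch of gaze feedback plus a click fallback. It assumes each node has a matching DOM element (as in the note under Step 3) and a gaze-highlight CSS class that you define yourself; call highlightNode() from checkGazeOnNodes() wherever a gaze hit is detected or lost:

// Hypothetical feedback helper: highlight a node while it is being looked at
function highlightNode(nodeId, on) {
  const el = document.getElementById(nodeId); // assumes element ids match brainNodes ids
  if (el) el.classList.toggle('gaze-highlight', on);
}

// Fallback: clicking (or tapping) a node triggers the same action as a completed gaze dwell
document.querySelectorAll('.brain-node').forEach(el => {
  el.addEventListener('click', () => triggerNodeAction(el.id));
});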
Final Thoughts

This implementation combines modern web technologies with eye-tracking to create an immersive, futuristic experience on Iam-Immortal.ai. The integration of gaze-based interactivity can make the platform feel truly cutting-edge, aligning with your vision for merging human cognition with digital expression.

Kishan, if you need further details on any specific part or run into issues during integration, feel free to ask. Good luck with the development—this is going to be an exciting feature to implement!
