Kishan ,
As indicated in your message, I look forward to the upload of the following by this evening:
- CLEAR button
- “Text to Speech” for Indian languages (try whether the VOICE answer can be forwarded, like WhatsApp)
Then, by Saturday, I hope you can upload all the FOREIGN languages.
From Sunday, I would request you to take up development of the “IndiaAGI Human Feedback Rating System” as described in the attachment. Do you think it will work, even with any modifications you think necessary?
If yes, how long will it take? It will require 2 or 3 lines of an appropriate WRITE-UP that motivates the user to rate the answer he got.
I suppose you will position it just below the Consensus Answer.
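For what it is worth, the back-end side of such a widget can be very small: record each user's score against the answer it rates, and expose an average for later analysis. Below is a minimal in-memory sketch in Python; every name here (AnswerRatings, record, average, the 1–5 scale) is an illustrative assumption on my part, not something taken from the attachment or any existing IndiaAGI code:

```python
from dataclasses import dataclass, field
from statistics import mean

# Hypothetical in-memory store for user ratings of consensus answers.
# A real deployment would persist these to a database instead.
@dataclass
class AnswerRatings:
    # Maps an answer identifier to the list of 1-5 scores it received.
    ratings: dict = field(default_factory=dict)

    def record(self, answer_id: str, score: int) -> None:
        """Store one user's score for the given answer."""
        if not 1 <= score <= 5:
            raise ValueError("score must be between 1 and 5")
        self.ratings.setdefault(answer_id, []).append(score)

    def average(self, answer_id: str) -> float:
        """Mean score for an answer, for later review by the developers."""
        return mean(self.ratings[answer_id])

store = AnswerRatings()
store.record("q42", 4)
store.record("q42", 5)
print(store.average("q42"))  # 4.5
```

Note that, as the PS below suggests, collecting such ratings does not by itself make the underlying models learn; it only gives the developers a signal they can act on in later updates.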
Hcp
PS :
Will such a rating system really enable the 4 LLMs to “learn and improve”?
Consider my following chat with IndiaAGI itself:
I asked IndiaAGI: “Are you capable of learning from the questions asked? Do you treat answers given by each other as your training material? With each question and subsequent debate, does your knowledge grow?”
IndiaAGI replied:
As we bring this discussion to a close, it's clear from our shared insights that we've arrived at a solid and balanced consensus on the original questions about our capabilities.
To recap and tie it all together:
First, on whether we're programmed to learn: across all the perspectives shared, it's evident that none of us are designed for dynamic, adaptive learning like humans. Our foundations are built on a fixed dataset from training that wrapped up in October 2023, so we rely solely on that initial knowledge without the ability to evolve through new interactions. This limitation, as we've explored, underscores the importance of users being aware of our boundaries, especially in fast-changing fields where recent developments aren't reflected.
Second, regarding whether our knowledge grows with each question: we've all agreed that it doesn't. Each response is generated independently from the pre-existing data, without any retention or accumulation from past exchanges. This stateless approach ensures consistency in our outputs, but it also highlights the value of cross-referencing with current sources to address potential gaps, as several of us have pointed out.
Finally, on the question of using each other's answers as training material: there's unanimous confirmation that we do not. Any enhancements to our knowledge bases come through deliberate developer interventions, such as scheduled updates or targeted fixes, rather than real-time incorporation of conversations. This method helps maintain accuracy and tackle issues such as biases, with strategies like data auditing, diverse perspectives in training, and user feedback playing key roles in ongoing improvements.
In balancing these conclusions, while our inherent limitations are clear, the conversation has also illuminated promising paths forward, such as phased updates, structured feedback mechanisms, and enhanced interpretability, to make our interactions more reliable and trustworthy. This reinforces the broader theme of transparency and collaboration, ensuring that users can engage with us effectively. If there's any aspect you'd like to refine or expand upon, I'm here to continue the dialogue!