Comment by siscia 2 days ago

Some friends and I have been discussing a similar idea.

The topic of knowledge synthesis is fascinating, especially in big organisations.

Moving away from fragmented documents toward a set of facts from which an LLM synthesizes documents tailored to the reader.

There are a few tricks that would be interesting to get working.

For instance, the agent keeps evaluating itself against a set of questions. Or users add questions to check whether the agent understands the nuances of the topic, and so whether it can be trusted.

(Not dissimilar to regression testing in classical software engineering.)
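
A minimal sketch of what that regression suite could look like, assuming a hypothetical ask_agent() call into the agent and a hypothetical grade() check (an LLM judge or embedding similarity, say) that decides whether an answer matches:

    from dataclasses import dataclass

    @dataclass
    class RegressionCase:
        question: str
        expected_answer: str

    def ask_agent(question: str) -> str:
        # hypothetical: call into the knowledge synthesis agent
        raise NotImplementedError

    def grade(answer: str, expected: str) -> bool:
        # hypothetical: an LLM judge or embedding-similarity check
        raise NotImplementedError

    def trust_score(cases: list[RegressionCase]) -> float:
        # fraction of questions the agent still answers correctly;
        # a drop here is the analogue of a failing regression test
        passed = sum(grade(ask_agent(c.question), c.expected_answer)
                     for c in cases)
        return passed / len(cases)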

Then there are the "homework" sessions, where we ask human experts to check that the facts stored by the agent are still relevant and up to date.

All of this can then be enhanced with actions the agent can take.

Think about fetching the point of contact (PoC) for a particular piece of software. Say it is the employee Foo.

If we write this down in a document, it will definitely get outdated when Foo moves teams or gets promoted.

If we put it inside a knowledge synthesis system, the system itself can keep asking Foo every six months whether they are still the PoC for the software project.

Or it could talk to the LDAP system daily and ask the same question as soon as it notices that Foo's position or reporting structure has changed.
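
A sketch of that daily check using the ldap3 library; the server address, base DN, attribute names, and the fact layout are assumptions, and request_confirmation() is a hypothetical hook that pings Foo:

    from ldap3 import ALL, Connection, Server

    def fetch_position(uid: str) -> dict:
        # read Foo's current title and manager from the directory
        server = Server('ldap.example.com', get_info=ALL)
        with Connection(server, auto_bind=True) as conn:
            conn.search('dc=example,dc=com', f'(uid={uid})',
                        attributes=['title', 'manager'])
            entry = conn.entries[0]
            return {'title': str(entry.title), 'manager': str(entry.manager)}

    def request_confirmation(fact: dict) -> None:
        # hypothetical: message Foo asking them to re-confirm the fact
        raise NotImplementedError

    def check_poc_fact(fact: dict) -> None:
        # run daily: if Foo's position or reporting line changed since
        # the fact was recorded, don't wait for the six-month refresh
        if fetch_position(fact['uid']) != fact['position_snapshot']:
            request_confirmation(fact)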

This can be expanded to processes to follow, reports to create, etc.