Looking for some help. I’ve built an agent for an author to critique a book manuscript, and I need the agent to use the entire manuscript (150,000 words) for context and continuity when critiquing specific chapters.
This means the user needs to be able to upload updated versions of the whole manuscript through a user input block and have it chunked and vectorized as source data. I don’t see a way to do this.
I do have a workaround where the user loads the manuscript at runtime and a function calls an external embedding service to chunk and vectorize it, but that’s less than ideal: the user has to re-upload the entire file every time they interact with the agent.
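For clarity, here’s roughly what that runtime workaround looks like as a minimal Python sketch. The `embed_text` function is just a stand-in for whatever external embedding service I call, and the chunking parameters are arbitrary, so treat this as an illustration rather than the actual implementation:

```python
# Stand-in for the external embedding call; swap in whatever
# embedding service/SDK you actually use.
def embed_text(chunks: list[str]) -> list[list[float]]:
    raise NotImplementedError("call your embedding service here")


def chunk_manuscript(text: str, max_words: int = 800, overlap: int = 100) -> list[str]:
    """Split the manuscript into overlapping word-window chunks."""
    words = text.split()
    step = max_words - overlap
    return [
        " ".join(words[start:start + max_words])
        for start in range(0, len(words), step)
    ]


def index_manuscript(path: str) -> list[tuple[str, list[float]]]:
    """Chunk and embed the uploaded manuscript for this session only."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    chunks = chunk_manuscript(text)
    vectors = embed_text(chunks)       # one embedding per chunk
    return list(zip(chunks, vectors))  # in-memory index, rebuilt every session
```

The pain point is that this index only lives for the session, so the whole chunk/embed cycle repeats on every interaction.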
I’m using Gemini 2.5 for its giant 1M-token context window, but the current setup still seems problematic.
Ideally, the manuscript would live as a chunked and vectorized source data file, and the user would use the user input block to update it only as needed.
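If there’s no native way to do this, one fallback I’ve considered is caching the chunk embeddings myself, keyed by a hash of the file, so re-embedding only happens when the manuscript actually changes. A rough sketch of that idea is below; it reuses the `chunk_manuscript` and `embed_text` helpers from the sketch above (the `manuscript_index` module name is just a placeholder), and the cache file path is arbitrary:

```python
import hashlib
import json
import pathlib

import numpy as np

# Helpers from the sketch above; the module name is a placeholder.
from manuscript_index import chunk_manuscript, embed_text

CACHE = pathlib.Path("manuscript_index.json")


def load_or_build_index(manuscript_path: str) -> dict:
    """Re-embed only when the manuscript content has actually changed."""
    text = pathlib.Path(manuscript_path).read_text(encoding="utf-8")
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()

    if CACHE.exists():
        cached = json.loads(CACHE.read_text())
        if cached.get("digest") == digest:
            return cached  # manuscript unchanged: reuse the stored vectors

    chunks = chunk_manuscript(text)
    index = {"digest": digest, "chunks": chunks, "vectors": embed_text(chunks)}
    CACHE.write_text(json.dumps(index))
    return index


def top_chunks(index: dict, query_vector: list[float], k: int = 8) -> list[str]:
    """Cosine-similarity lookup over the cached chunk vectors."""
    vecs = np.array(index["vectors"], dtype=float)
    q = np.array(query_vector, dtype=float)
    sims = vecs @ q / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    best = np.argsort(-sims)[:k]
    return [index["chunks"][i] for i in best]
```

But this still lives outside the agent’s own source data, which is what I’m trying to avoid.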
Ideas/guidance?
Thanks!