You will need to feed the conversation log back to the model every time you query it, so you'd be limited by the model's context length.
With a 100k-token context model you'd be able to keep a chat log of about 70,000–100,000 words, which is roughly the length of a normal book.
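For a rough sense of what that looks like in practice, here's a minimal sketch of the "resend the whole log each turn" loop. Everything here is illustrative: `query_model` is a hypothetical stand-in for a real API call, the 0.75-words-per-token figure is just a common English-text rule of thumb (which is where ~75,000 words for 100k tokens comes from), and drop-the-oldest-turn is only one possible truncation strategy.

```python
CONTEXT_LIMIT_TOKENS = 100_000  # e.g. a 100k-context model
WORDS_PER_TOKEN = 0.75          # rough rule of thumb for English text

def estimate_tokens(text: str) -> int:
    """Crude token estimate from word count; real tokenizers differ."""
    return int(len(text.split()) / WORDS_PER_TOKEN)

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    return "(model reply goes here)"

chat_log: list[str] = []

def chat(user_message: str) -> str:
    chat_log.append(f"User: {user_message}")
    # The full log is resent on every query, so it must fit the context window.
    prompt = "\n".join(chat_log)
    while estimate_tokens(prompt) > CONTEXT_LIMIT_TOKENS and len(chat_log) > 1:
        chat_log.pop(0)  # drop the oldest turn once the window is exceeded
        prompt = "\n".join(chat_log)
    reply = query_model(prompt)
    chat_log.append(f"Assistant: {reply}")
    return reply
```

Once the log outgrows the window, the model simply never sees the truncated turns again, which is why the context length is the hard cap on how much conversation history it can "remember".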
Would love it if instead of proving LLMs are conscious, we proved that none of us are. Or, I guess, I wouldn't, since I wouldn't be conscious.