I recently built an AI tool that breaks down US Common Application essays and gives feedback. I've been testing it with different essays, comparing the AI's feedback to real feedback given by a professional.
When I spot inconsistencies or limitations in the AI's feedback, what should I do to correct them, given that I also have the professional's feedback to compare against? (My goal is to get the AI's feedback close to the professional's in clarity and creativity, not generic feedback.)
I've tried adding example feedback to the "send message" block's prompt, but I worry it might confuse the AI if I add too many examples. Any better ideas?
Adjusting the prompt by adding sections like examples, overall feedback, and so on is definitely the way to go. In most cases, a more detailed prompt means higher-quality output.
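Here's a rough sketch of how you might structure the block prompt with a couple of few-shot examples taken from the professional's feedback. The section headings and the {{essay_text}} placeholder are just illustrative, not anything built into the tool, so adapt them to however your block passes in the essay:

```
## Role
You are reviewing a US Common Application essay and giving section-by-section feedback.

## What good feedback looks like (from a professional reviewer)
Example 1
Essay excerpt: <short excerpt from a past essay>
Professional feedback: <the professional's feedback on that excerpt>

Example 2
Essay excerpt: <short excerpt from a past essay>
Professional feedback: <the professional's feedback on that excerpt>

## Essay to review
{{essay_text}}

## Instructions
Match the tone and specificity of the example feedback: concrete, tied to clarity and creativity, never generic.
```

On the worry about too many examples: in my experience two or three short, high-contrast examples usually work better than a long list. If you want to include more, trim each one down to the essay excerpt plus the one or two sentences of feedback that best show the style you're after.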
For your second question, it depends on your agent. The System Prompt applies to the entire workflow and will have the biggest impact during chat interactions. Task prompts in blocks like Generate Text only affect that specific block. You can read more here: