I’m using the standard LLM integration (not Realtime) with gpt-4o via AgentSession, but images attached via ImageContent at the end of a turn still aren’t reaching OpenAI. Separately, is there anything special I need to do to keep images from radically bloating the chat context over time, or does OpenAI’s API mitigate this on its own? For example, will OpenAI’s service compress older images in the chat history down to a text description, so that full-resolution images don’t pile up over many turns?
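For reference, this is roughly how I’m attaching the image at the end of the user’s turn, a simplified sketch where `_latest_frame` is a placeholder for wherever the frame actually comes from in my app:

```python
from livekit.agents import Agent
from livekit.agents.llm import ChatContext, ChatMessage, ImageContent


class VisionAgent(Agent):
    def __init__(self) -> None:
        super().__init__(instructions="You can see images the user shares.")
        # Placeholder: populated elsewhere from the user's video track
        self._latest_frame = None

    async def on_user_turn_completed(
        self, turn_ctx: ChatContext, new_message: ChatMessage
    ) -> None:
        # Attach the most recent frame to the user's message before the LLM call
        if self._latest_frame is not None:
            new_message.content.append(ImageContent(image=self._latest_frame))
            self._latest_frame = None
```

If the answer is that nothing on OpenAI’s side prunes or summarizes old images, I’m assuming I’d need to strip older ImageContent entries from the chat context myself, but I’d like to confirm that before building it.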