Inconsistent Intent Recognition for Rapid Short Commands
While working with the Building-Simple-and-Efficient-Chatbots-Demo, I noticed that the chatbot struggles to interpret user commands correctly when the conversation mimics fast-paced game interactions. To test this, I put together a short sequence of commands inspired by quick-reaction games (similar in tempo to mr flip), where users switch actions rapidly; a sketch of the sequence is below.
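For context, the sequence looked roughly like this (an illustrative sketch rather than the exact dataset I used):

```python
# Roughly the kind of sequence I sent: short, game-style commands that
# switch intent on every turn, plus similar one-word phrases.
test_commands = ["jump", "retry", "score?", "restart"]
```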
When I send commands quickly—such as “jump,” “retry,” “score?”, “restart,” and similar short phrases—the chatbot often misinterprets the user’s intention. It tends to rely too strongly on the previous context instead of focusing on the most recent message. As a result, it sometimes assigns an incorrect intent, falls back to the default response even when the command is valid, or generates answers meant for earlier messages rather than the current one.
To reproduce the issue, I ran the demo with its default configuration, then sent a series of short imperative messages in quick succession. When the same messages were sent more slowly, the chatbot behaved correctly. The inconsistency appears only when the pace increases, which suggests that the model may not be properly resetting its context between rapid messages.
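Since I tested by hand, the timing varied from run to run, but the harness below is a minimal sketch of what I did. `send_message` is a placeholder because I am not sure which function the demo exposes for a single user turn, and the delay values are illustrative rather than measured:

```python
import time

def send_message(message: str) -> str:
    """Placeholder: wire this to however the demo accepts a single user
    message and returns a reply (the real entry point is unknown to me)."""
    raise NotImplementedError

def run_sequence(commands, delay_seconds):
    """Send each command in order, pausing `delay_seconds` between turns,
    and collect (command, reply) pairs."""
    replies = []
    for command in commands:
        replies.append((command, send_message(command)))
        time.sleep(delay_seconds)
    return replies

commands = ["jump", "retry", "score?", "restart"]  # same short commands as above

# Fast pace: this is where the intents start getting mixed up.
fast_replies = run_sequence(commands, delay_seconds=0.2)

# Slow pace: the same commands are handled correctly.
slow_replies = run_sequence(commands, delay_seconds=3.0)
```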
The expected behavior is that the chatbot interprets each short command based solely on the newest user input. Instead, the bot mixes past and current context whenever the pace of the conversation picks up.
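To make that expectation concrete, this is roughly the check I have in mind, written as a pytest-style test; `classify_intent` is a hypothetical stand-in for whatever intent-classification step the demo performs, since I have not dug into that part of the code:

```python
def classify_intent(message: str, history: list[str]) -> str:
    """Placeholder for the demo's intent step; the real name and signature
    are unknown to me."""
    raise NotImplementedError

def test_latest_message_determines_intent():
    history = ["jump", "retry", "score?"]
    # Whatever has accumulated in the history, "restart" should map to the
    # same intent it gets with no prior context at all.
    assert classify_intent("restart", history) == classify_intent("restart", [])
```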
I am running the latest version of the demo on a local Ubuntu 22.04 environment with the default lightweight model. Improving how the bot handles rapid context changes, especially for short commands, would significantly enhance its reliability. I am happy to share the test logs or the small dataset I used if that would help with debugging.