Agent that refuses to run commands without human approval
As AI tools become more common in production, so, sadly, will high-profile incidents like the one mentioned.
Fewshell is a terminal agent specifically designed to avoid this.
There is no setting to enable command auto-approval. This is by design: the user never has to second-guess or worry about having accidentally enabled it.
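The core idea can be sketched as a small human-in-the-loop gate. This is a hypothetical illustration, not Fewshell's actual code; `confirm_and_run` and its `ask` parameter are made up for this sketch:

```python
import subprocess

def confirm_and_run(cmd: list[str], ask=input):
    """Run cmd only after an explicit human 'y'.

    Hypothetical sketch: there is deliberately no flag or setting
    that skips the prompt. `ask` is injectable only for testing.
    """
    answer = ask(f"Run {' '.join(cmd)!r}? [y/N] ").strip().lower()
    if answer != "y":  # anything other than an explicit yes is a refusal
        print("Refused.")
        return None
    return subprocess.run(cmd, capture_output=True, text=True)
```

The point of the design is that the approval check is unconditional code, not a configurable policy, so there is nothing to accidentally leave switched on.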
Originally I intended to build an AI mobile terminal to make typing shell commands easier. But with so many mobile-enabled 'claw' agents already available, I decided to make Fewshell the opposite of an autonomous agent.
Please star it if you like it, and let me know what you think. Happy to answer questions.
About me: I'm an ex-Amazon Sr. SDE for Alexa AI, and I currently work in AI safety research on agentic RLVR. I use this tool to run and check on my lab experiments.