The tool that stops 10x more AI slop than anything else my team has tried. Open source, FREE, and growing

Source: DEV Community
AI slop doesn't crash your app. It passes your tests, your linters, your type checks. It looks like code a competent person wrote. Then three weeks later you're staring at a service full of abstractions that exist for no reason, functions that do the same thing behind slightly different signatures, and variable names that technically make sense but communicate nothing to the next person reading them. That's the version of slop nobody talks about. The kind that compounds.

## How this started

I've been writing software for a long time. IC, staff, principal, EM, director, now CTO. When AI coding assistants became part of our daily workflow, I started doing something that felt almost paranoid at first: after an agent finished implementing something, I'd spin up separate Claude Code agents to review the output, each with a different focus area. One looking at architecture, one at security, one at quality.

It worked. Way better than single-pass review. But it was completely ad-hoc. I'd manually
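The multi-agent review loop described above can be sketched roughly like this. This is a minimal illustration, not the actual tool: the focus-area prompts are made up for the example, and it assumes the `claude` CLI accepts a non-interactive prompt via `-p` (adjust `run_claude` to whatever agent invocation your setup uses).

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# One independent reviewer per focus area; prompts here are illustrative.
FOCUS_AREAS = {
    "architecture": "Review this diff for unnecessary abstractions and duplicated responsibilities.",
    "security": "Review this diff for security issues: injection, authz gaps, secrets handling.",
    "quality": "Review this diff for naming, near-duplicate functions, and dead code.",
}

def build_prompt(focus: str, diff: str) -> str:
    """Combine a focus-area instruction with the code under review."""
    return f"{FOCUS_AREAS[focus]}\n\n---\n{diff}"

def run_claude(prompt: str) -> str:
    # Assumption: `claude -p` runs one non-interactive prompt and prints the reply.
    result = subprocess.run(["claude", "-p", prompt], capture_output=True, text=True)
    return result.stdout

def parallel_review(diff: str, run_agent=run_claude) -> dict[str, str]:
    """Run one reviewer per focus area concurrently and collect their reports."""
    with ThreadPoolExecutor() as pool:
        futures = {f: pool.submit(run_agent, build_prompt(f, diff)) for f in FOCUS_AREAS}
        return {focus: fut.result() for focus, fut in futures.items()}
```

The point of keeping the agents separate rather than stuffing all three concerns into one prompt is that each reviewer stays narrowly focused, which is exactly what made the multi-pass approach beat single-pass review.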