Killing a product you built is harder than it looks. It looks like giving up. It is usually the opposite.

Most build diaries celebrate what worked. This one is about what didn’t — and the specific moment when I decided to stop. These are not failures I’m embarrassed about. They are the most expensive lessons I’ve paid for, and the only waste would be not writing them down.
1. ScheduleKit (2019) — a scheduling automation tool
What it was: A SaaS tool for automating team scheduling and leave management for small companies (10-50 people). Built in response to a pain I had as a team lead: managing leave requests in Slack was chaos.

The build: Four months of weekends. Next.js frontend, Node backend, Google Calendar integration, Slack bot. I was proud of the code. It was clean, well-tested, and genuinely solved the problem I had.

The signal it wasn’t working:
- Signed up 11 companies in beta
- 9 of them stopped using it within 6 weeks
- The 2 who kept using it were using it in ways I hadn’t built for
2. A Daily Habit Tracking App (2021) — unnamed
What it was: A minimal habit tracker. I was frustrated with existing apps being too gamified, too complex, or too ugly. I wanted something calm and private: no streaks, no badges, no social features.

The build: Six weeks. React Native for iOS and Android. Local storage only — no server, no account. Three screens. Very clean.

The signal it wasn’t working:
- Shipped to the App Store (free)
- Got 340 downloads in the first month from a single mention in a newsletter
- Had 12 active users after 60 days
- I was not one of them
3. A Startup Feature — Bayesian Skill Matching (2022)
What it was: This one wasn’t my product — it was a feature I championed at a company I was consulting for. A Bayesian model for matching engineers to tasks based on their skill signal from code reviews, PR history, and ticket resolution. I was convinced this would be the most impactful thing the engineering team could build. I wrote the proposal. I got it approved. I led the build.

The signal it wasn’t working:
- Shipped to internal alpha users (engineers in the company)
- The model worked technically — predictions were measurably better than random assignment
- Engineers hated it. Not “didn’t prefer it.” Hated it.
- The principal complaint: “It makes me feel like a resource, not a person.”
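The diary doesn’t describe the model itself, so as a purely illustrative sketch of the general approach: per-engineer, per-skill Beta-Bernoulli posteriors updated from binary outcome signals (say, a PR merging without rework), with tasks assigned to the engineer with the highest posterior mean. The class and signal names here are my assumptions, not the feature that shipped.

```python
from collections import defaultdict

class SkillModel:
    """Illustrative Bayesian skill estimate: one Beta(a, b) posterior per
    (engineer, skill), updated from binary outcome signals such as
    'PR merged without rework'. Not the production model from the post."""

    def __init__(self, alpha=1.0, beta=1.0):
        # Beta(1, 1) prior = uniform belief over skill level in [0, 1]
        self.params = defaultdict(lambda: [alpha, beta])

    def observe(self, engineer, skill, success):
        # Conjugate update: success increments alpha, failure increments beta
        a, b = self.params[(engineer, skill)]
        self.params[(engineer, skill)] = [a + 1, b] if success else [a, b + 1]

    def mean(self, engineer, skill):
        # Posterior mean of the Beta distribution: a / (a + b)
        a, b = self.params[(engineer, skill)]
        return a / (a + b)

    def match(self, engineers, skill):
        # Assign the task to the engineer with the highest posterior mean
        return max(engineers, key=lambda e: self.mean(e, skill))
```

The uniform prior keeps engineers with little history near 0.5, so a long-tenured engineer with a strong record outranks a newcomer with a short lucky streak — which is exactly the behavior that, as the complaint above suggests, can make people feel scored rather than seen.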
4. The Productivity Dashboard (2023)
What it was: An aggregated personal productivity dashboard — pulling in data from Toggl, Notion, GitHub, and a custom sleep tracker. A single screen showing energy, output, and focus quality across the week.

The build: Three months. Vercel, Next.js, a tangle of OAuth integrations, Recharts for visualization. Genuinely one of the most satisfying builds I’ve done technically.

The signal it wasn’t working: It worked. I used it for four months. Then I noticed I was checking the dashboard more than I was working. The act of tracking had become the activity, not the signal.

When I killed it: I deleted the dashboard on a Tuesday morning when I realized I’d spent 45 minutes reviewing my productivity metrics instead of doing the work they were supposed to be measuring.

What it taught me: Measurement and action are not the same thing. Some systems you build as products; some you build to learn something and then stop. The dashboard taught me my peak focus windows (Tuesday/Wednesday mornings) and my energy patterns. I use that knowledge. I don’t need the dashboard anymore. This is the kill I’m most at peace with. It worked, it was useful, and it ended when it had taught me what it could.

The Pattern Across All Four
Looking at what I killed, the common threads:
- I built before I validated — ScheduleKit and the habit app both came from “I have this problem” without “and others have this exact problem in this exact form.”
- I confused technical success with product success — The skill matching feature worked perfectly by the metric I’d defined. It failed by the metric that mattered.
- I was slow to act on clear signals — In each case, I knew something was wrong 2-4 weeks before I killed it. I spent that time adding features instead of accepting the evidence.
See the Thinki.sh build diary for a case study of a product that survived these same forces. The pre-mortem framework is how I now surface kill conditions before I start.
