Discussion about this post

Alex Beal

Great article! A lot of it resonated with my own experience trying to stay afloat.

Something that happened recently is that I read Anthropic’s article on Claude Code best practices (https://www.anthropic.com/engineering/claude-code-best-practices). Before reading it, I thought I was a heavy user of AI, especially for work. I’m constantly asking AI questions to help me think through problems and double-check my understanding, and I’ve been trying my best to get value from tools like Cursor. However, that article made me realize there’s a long tail of AI usage, and I’m like a little baby. For example, it wasn’t until I read the article that I started using a .cursorrules file, which Cursor automatically adds to the context window. But how useful it is! On one project, Cursor’s agent kept invoking the wrong Python package manager, causing enough disruption that I questioned whether I was truly benefiting. But duh! Just put it in the .cursorrules file! Spending five minutes adding explicit instructions about the right package manager saved Cursor from being nearly unusable.
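
For concreteness, here’s the kind of thing I mean. This is a made-up sketch rather than my actual file, and the package manager named here is purely for illustration:

```
# .cursorrules (illustrative sketch)
# Dependency management
- This project uses Poetry. Do not call pip or pipenv directly.
- Add dependencies with `poetry add <package>`; run tools with `poetry run <command>`.
```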

I’ve also started using a more structured flow for prompting Cursor, essentially investing more effort upfront (a rough sketch of the prompts follows the list):

• First, I inform the model that I’m providing context for a task and paste in everything: design documents, chat logs, emails, everything.

• Next, I describe the specific task in as much detail as possible and ask if the AI has questions.

• Then, I ask it to propose a design and a set of steps before writing any code.

• Finally, I ask it to carry out the steps one by one while I monitor its output closely.

• I iterate, being very specific about the changes I want made.
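
Concretely, the prompts look something like this (heavily paraphrased, and every task detail below is made up for illustration):

```
1. "Here is background for a task: [design doc] [chat log] [email thread]. Don't write any code yet."
2. "The task: add retry logic to the sync job described above. What questions do you have?"
3. "Propose a design and a numbered list of implementation steps. Still no code."
4. "Implement step 1 only."
5. "Step 1 touched the wrong module; revert that part and make the change in the client wrapper instead."
```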

Additionally, in languages like Python, where type checking and linting are optional, I’ve found it valuable to set up these tools and integrate them with Cursor. Since the model sees the warnings they produce, it can try to fix them. I also instruct Cursor through .cursorrules to run these tools after making changes.
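
As a rough sketch, assuming mypy for type checking and ruff for linting (my choice; any equivalent tools work the same way), the configuration lives in pyproject.toml:

```toml
# pyproject.toml (illustrative)
[tool.mypy]
strict = true

[tool.ruff.lint]
select = ["E", "F", "I"]  # pycodestyle errors, pyflakes, import sorting
```

and the .cursorrules file gets a line along the lines of “after editing Python files, run `ruff check .` and `mypy .` and fix anything they report.”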

I’m developing a better intuition for when the model is getting stuck. Sometimes, I just need to eject and read the documentation myself, but I’m increasingly using Cursor’s feature to point at external webpages. If I locate the correct documentation, simply dropping a link can sometimes get it unstuck (depending on what exactly it's getting stuck on).

The best practices article I mentioned goes even further. Related to your point about never waiting for a response, it recommends checking out multiple copies of a repository (or using git worktrees) so multiple instances of Claude can simultaneously work on independent tasks. The article even discusses using multiple Claude instances for the same task: one writes code while another reviews it. I haven’t reached that level of vibe coding yet, but I’m making a conscious effort to keep experimenting.
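
Mechanically, the worktree part at least is simple (the paths and branch names here are made up):

```
# one worktree per independent task, each opened in its own agent session
git worktree add ../myproject-auth-fix -b auth-fix
git worktree add ../myproject-docs-pass -b docs-pass
git worktree list
git worktree remove ../myproject-docs-pass   # once that task is merged
```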

Which reminds me, I still need to make time to set up MCP servers for GitHub and other services. It seems like these could simplify adding context.

On a more personal note, I’ve found integrating AI into my workflow surprisingly challenging. Some days it’s just so tempting to go back to how I used to do things: stare at the code and think really hard. But the pace of change has been so rapid that the anxiety of being left behind and the need to keep up have felt more urgent than ever.
