My thoughts on working with AI have evolved a lot over the last year or so that I’ve been actively using it.
I’ve gone from “this is an interesting toy, but it doesn’t seem very useful” to “wow, people are using this all the time and it gets important things wrong” to “actually if you’re smart about how you use it this can do some really useful things” to “I’m using this more and more every day, it seems unlikely that’s going to change.”
Now I can see more and more of the places where not using AI, at least a little, is hurting me. Or at least slowing me down. In some cases that’s fine. I’m typing this by hand right now, and I know Claude could probably do it in about 7 seconds from a bulleted list of the points I want to make. The end result might even be better, in the sense that Claude is generally better at making easy-to-read content than I am.
I type like I speak, and I speak like a college professor. That’s not meant to be a good thing. But it is what it is and it’s who I am.
All that being said, I think I need to revisit my “100% Human Generated” policy. It’s still true, at least as of the time I’m writing this, but I think I’m missing out on an opportunity to at least collaborate with the machine. The sentiment that went into that policy is still absolutely core to my position: this blog is a craft to be honed, not a task to be automated.
I gave a version of these thoughts to Claude, and asked it to help me craft a new policy. Over several rounds of iteration, fixing parts I didn’t like and mulling over suggestions I hadn’t considered, we arrived at something that I think represents a better way to handle the challenges I’m facing.
Human-Led, Collaborative Content Policy
This blog remains fundamentally human-driven. All topics, ideas, and creative direction come from my own artistic sensibility and experiences. I sometimes collaborate with AI language models in my creative process, similar to working with a thoughtful writing partner who helps me refine and articulate my ideas.
When and How I Collaborate with AI:
- Refinement and focus: Sometimes I share my rough thoughts with an AI to help extract key ideas or sharpen my message
- Editorial dialogue: AI might help me restructure or clarify my existing ideas
- Creative exploration: Occasionally, through conversation with AI, we develop phrasings or explanations that effectively capture what I wanted to express
What Remains Purely Human:
- All topic choices and creative direction
- The initial ideas and perspectives being expressed
- The decision of what to publish and when
- The overall voice and style of the blog
Transparency: When I collaborate with AI on a post in any substantial way, I'll acknowledge that collaboration and specify which AI I worked with. I believe in being honest about the role AI plays in my creative process while maintaining my commitment to human creativity and authentic expression.
As mentioned previously, I worked on that with Claude. Claude 3.5 Sonnet, to be precise, although in general I don’t think I’m going to specify exact versions. That gets into the weeds, and it’s also somewhat meaningless given how frontier labs routinely update their models in meaningful ways without changing the name.
I’m curious to see what people think about this new policy, and I’m open to feedback before formally enacting it. Am I making some kind of huge tactical error by letting AI into my workflow? Are there lines I should be drawing that I’m neglecting entirely?
Let me know.