Perspectives on the Future of DevRel
I was at Google Cloud Next last week and someone asked me a simple question: “What is the future of DevRel?” That’s something I’ve been thinking about a lot over the last twelve months, and despite spending many hours on it, I didn’t have a great answer at the time. I’d like to tackle that question now, with a bit more sleep and a bit less caffeine, so I can hopefully be more coherent.
The simple truth is that I don’t know what the future of DevRel is. I have some ideas, but we’re all on an AI journey together, and sometimes it seems like the tech changes every day. In that world I’m a bit suspicious of anyone who claims to know the future, unless they have a time machine. I am comfortable, though, talking about trends I’m seeing in how I personally work and in what I’m hearing from the folks I talk to. One caveat upfront: I can guarantee my sample is biased. I’d love to hear from others whose experiences differ.
The Tale of the Reluctant Gopher
One thing I’m noticing in my own work is that AI-assisted development makes it much easier to experiment with new tools, languages, and frameworks. I put off learning Go (Golang) for years. While I valued the strengths of the language, I never had a problem where I absolutely, 100% needed the power of Go, so I couldn’t justify dedicating the time to learn the language and become a Gopher.
Now, writing Go is as easy as writing any other language, so I’m preferentially picking Go for new projects. Why not? I can get high performance, native concurrency, and high readability with no additional friction. Additionally, because I find Go very readable, I’m still comfortable doing code reviews on the code I generate for side projects. Are my code reviews as detailed as someone who knows the language well? Probably not. But the code that’s being generated isn’t a mystery to me either.
Goals Not Syntax
Another thing I’m noticing, both in myself and in a lot of demos from my colleagues, is that the way we’re working with AI is changing. When I started using AI for development assistance, I was asking questions about the argument order of specific methods in LangChain and about what values an enum could take in a library I was trying to integrate into a demo.
This week I asked Gemini how to fix the images on social media links to my blog. I didn’t ask about X/Twitter cards or Open Graph or SEO. I just asked how to make the links to my blog look better. It was able to identify the issue and fix it. A lot of folks refer to this change in prompting style as “Spec-Driven Development,” but I’m still not sure how I feel about that term. I did end up having to nudge Gemini to use the Jekyll SEO Tag plugin rather than adding the necessary fields itself, but that’s my design preference for using a maintained library rather than building integrations from scratch. The generated code worked in both cases.
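For anyone curious what that fix looks like, the Jekyll SEO Tag plugin handles the Open Graph and X/Twitter card markup for you. A minimal setup (field values here are placeholders for your own site) looks roughly like this in `_config.yml`:

```yaml
# _config.yml — enable the jekyll-seo-tag plugin
# (also add `gem "jekyll-seo-tag"` to your Gemfile)
plugins:
  - jekyll-seo-tag

# Fields the plugin uses to build social card metadata
title: My Blog
description: Notes on DevRel and AI
twitter:
  card: summary_large_image
```

Then the plugin emits all the meta tags when you place `{% seo %}` inside the `<head>` of your layout. That single tag is what replaces hand-writing each Open Graph field yourself.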
No Really, Solve My Problem
One more trend I’m seeing is that folks want learning personalized to them. Either they want explanations that relate to their existing knowledge and strengths, or they want knowledge presented in the context of their existing project. So instead of getting a generic explanation of how concurrency works in Go, a web dev may want that explanation framed in contrast to JavaScript. Or someone may want an LLM’s answer to take into account the libraries they’ve already imported into a project, rather than pulling in a completely new one.
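To make that concrete, here’s the kind of contrast-based explanation I mean: a minimal Go sketch (the function name and placeholder work are mine, not from any real tutorial) where goroutines plus a `WaitGroup` play roughly the role `Promise.all` plays for a JavaScript developer.

```go
package main

import (
	"fmt"
	"sync"
)

// fetchAll runs one goroutine per input and gathers the results.
// For a JS dev: each goroutine is like kicking off an async task,
// and wg.Wait() is like awaiting Promise.all on the whole batch.
func fetchAll(urls []string) []string {
	results := make([]string, len(urls))
	var wg sync.WaitGroup
	for i, u := range urls {
		wg.Add(1)
		go func(i int, u string) {
			defer wg.Done()
			// Placeholder for real work (an HTTP fetch, etc.).
			results[i] = "fetched:" + u
		}(i, u)
	}
	wg.Wait()
	return results
}

func main() {
	fmt.Println(fetchAll([]string{"a", "b"}))
}
```

Each goroutine writes to its own slot in the slice, so no mutex is needed; that detail (index-based writes instead of appends) is exactly the sort of thing a personalized explanation can flag for someone coming from single-threaded JavaScript.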
I regularly ask Gemini to explain things to me, or, more accurately, I’ll say “tell me more…”. At this point I pretty much expect that its explanation will take any additional context into account. Usually that’s my codebase, but sometimes it’s information I’ve shared about myself and my previous knowledge. When it was teaching me the Agent Development Kit (ADK), I asked it to use Go and told it I knew JavaScript. It explained critical Go concepts to me based on how they compared, or didn’t, to similar things in JavaScript. For me at least, it seems natural that answers would be customized to me and the things I’m currently working on.
What Does This Mean for DevRel
So what does this mean for DevRel? While I do think some things will change pretty significantly, I think a lot of what we already do is still relevant. Since it is easier for folks to try new languages and frameworks, we need to ensure the communities we belong to are welcoming to newcomers, especially newcomers who may not have followed a traditional path to this particular technology. I think my team is probably tired of me saying it, but everyone is welcome at my developer (and builder) party.
I also think we need to focus a larger percentage of our content around higher level tasks. Some DevRel teams do this really well already. I’ve always appreciated that the Firebase docs have great walk-throughs for common multi-product tasks. If folks are asking higher level questions we need to make sure that the tutorials and documentation exist to help achieve those higher level goals, especially when the solution is multi-product.
Finally, every day I feel more and more urgency to have great evaluation tools. If folks are expecting personalized answers, I want confidence that those answers are correct. I was a huge proponent of Test-Driven Development (Red, Green, Refactor for life). I liked TDD because it gave me confidence my code was right, or at least right within the context of the tests that I wrote. Evaluation seems to be the unit and integration testing of the AI age. And I want really great evaluation tools so I can be confident that personalized answers are correct.
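To show what I mean by evaluation feeling like the unit testing of the AI age, here’s a deliberately tiny sketch (the types and the fake model are hypothetical, not any real evaluation framework): a table of prompts paired with facts an answer must contain, run in the same red/green loop as TDD.

```go
package main

import (
	"fmt"
	"strings"
)

// evalCase pairs a prompt with a substring the answer must contain.
// Real evaluation suites use richer checks (semantic similarity,
// LLM-as-judge), but the red/green spirit is the same.
type evalCase struct {
	prompt   string
	mustHave string
}

// runEvals scores an answer function against the cases, just as a
// test runner scores code against assertions.
func runEvals(answer func(string) string, cases []evalCase) (passed, failed int) {
	for _, c := range cases {
		if strings.Contains(answer(c.prompt), c.mustHave) {
			passed++
		} else {
			failed++
		}
	}
	return passed, failed
}

func main() {
	// fakeModel stands in for a call to a real LLM.
	fakeModel := func(prompt string) string {
		return "Goroutines are lightweight threads managed by the Go runtime."
	}
	p, f := runEvals(fakeModel, []evalCase{
		{prompt: "Explain goroutines", mustHave: "Go runtime"},
	})
	fmt.Printf("passed=%d failed=%d\n", p, f)
}
```

The point isn’t the string matching, which is far too crude for production; it’s that once answers are personalized, a shared, repeatable scorecard like this is the only way to stay confident they’re still correct.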
Conclusion
I’m curious: what trends are you seeing in your communities? Are folks embracing personalized learning, or do they prefer traditional documentation and tutorials? Are your colleagues trying new tools more readily now that the time needed to experiment is lower, or are folks sticking with what they know well? Please let me know in the comments.
AI Usage Note: Google's image generation models were used to create the social card image for this post. Gemini was used to correct spelling and grammar and for minor improvements to readability.