This is an adapted and translated transcript of Kenrick’s panelist remarks from the Dicoding Developer Conference 2026 in Bandung, Indonesia.
- On Technical Wisdom: The ultimate differentiator for talent today is knowing the right tool for the right problem. Not everything can (or should) be solved with an LLM; traditional machine learning is often better suited for specific use cases.
- On Data Strategy: Your AI is only as good as your data. The high success rate of AI coding agents at Google (generating 75% of new code) isn't just because of the model; it is due to phenomenal context engineering and highly curated internal data.
- On The Future of Developers: Writing code is easy; architecture is hard. With the rapid rise of AI and "vibe coding," the role of the Software Engineer will inevitably shift toward that of a Software Architect. Focus on the fundamentals (infrastructure, data pipelines, security).
The Transcript
Moderator:
Kenrick, you have a dual perspective as a practitioner at a tech giant like Google Cloud and as an instructor at Dicoding. From an industry standpoint, what is the skill that acts as the "differentiator" between candidates who just know about AI and those who are truly sought after by the industry?
Kenrick:
I think the main differentiator between someone who just knows about AI and someone who truly understands it comes down to something called technical wisdom. Simply put, it’s knowing the right tool for the right problem.
As a Customer Engineer at Google Cloud, I receive client requests daily, including for AI. But a pattern I’ve noticed over the last two years is that while AI requests are skyrocketing, what they actually mean is LLMs. Requests for traditional machine learning models have almost disappeared. This feels strange to me, because people assume LLMs can do everything.
But an LLM is a Large Language Model. For tasks that require linguistic intelligence, an LLM is absolutely the best tool. However, for specific use cases like credit scoring, fraud detection, and so on, I believe traditional machine learning models are still the best tools. Not everything has to use an LLM.
Now, knowing the right problem is a different story. We all know AI can hallucinate. Usually, when AI hallucinates, we blame the model, so we try to "upgrade" to a better one—for example, moving from Gemini Flash to Gemini Pro. However, I actually think that's a trade-off, not an upgrade. By changing to a Pro model, you are increasing latency and cost. Often, the root problem lies in the data, not the model.
Because today, data strategy is AI strategy. Your AI is only as good as your data.
Let me give you a simple example. Today, 75% of new code in Google's codebase is generated by AI (Sundar, 2026). I also work with the Google codebase, and our AI coding agent is indeed incredibly good. If you think about it, 75% is a remarkably high figure.
Why is it considered so high? Because Google's codebase has a lot of internal abstractions. If you take a random, general coding agent or LLM out there and plug it into Google's codebase, it simply won't work. It is nearly impossible for a general AI to understand how to code inside our environment because those external agents are trained on external context.
For instance, if you are using Node.js and want to build a slider (carousel) feature, an external coding agent will likely suggest installing a third-party package like Splide and generate the code for it. At Google, you can't just do that. Third-party packages have to be heavily curated by internal teams before they can be used. So, I don't think it's just that the model is inherently better; it’s that the data is better. The context engineering is better. The model is trained specifically on the complex context of Google's internal codebase, which is why the agent's output is so effective.
So, I think that is what separates those who just know AI from those who truly understand how to use it: knowing the right tool for the right problem.
Moderator:
If you had to give one piece of practical advice for the people here who are currently learning but feel intimidated by the rapid pace of AI development, what is one concrete step they should take to stay relevant?
Kenrick:
I completely agree with Gabriella (Gaby) on this, especially regarding "vibe coding." I actually heard someone shouting "vibe coding! vibe coding!" from the far left side of the room earlier. Vibe coding is incredibly helpful for us. But I think we need to look at it from a different perspective.
From my perspective, Software Engineers will gradually become Software Architects. Writing code is actually easy. Why do we even have "sample code"? Sample code is just a program written by someone else that we duplicate into our own codebase, right? So it’s just duplicating patterns; it's easy. The difficult part is software architecture, abstraction, code structuring, and knowing which parts should be separated into a new function. That is the hard part, and I believe that is the work humans need to do.
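A tiny sketch of the kind of structuring decision Kenrick is describing. The pricing scenario and all names here are invented for illustration; the point is that deciding a rule deserves its own named function is the human architectural call, while the code itself is easy to write (or generate):

```javascript
// Before: the discount rule is buried inline in the loop,
// so it cannot be tested or reused on its own.
function checkoutTotalInline(items) {
  let total = 0;
  for (const item of items) {
    total += item.price * item.qty * (item.qty >= 10 ? 0.9 : 1.0);
  }
  return total;
}

// After: the bulk-discount rule is separated into a named function.
// The behavior is identical, but the rule now has a home: it can be
// changed, tested, and reused independently of the checkout logic.
function bulkDiscount(qty) {
  return qty >= 10 ? 0.9 : 1.0; // 10% off for orders of 10 or more
}

function checkoutTotal(items) {
  return items.reduce(
    (sum, item) => sum + item.price * item.qty * bulkDiscount(item.qty),
    0
  );
}
```

Both versions compute the same totals; the "architecture" work is noticing that the discount is a separate concept, not writing the arithmetic.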
When I do vibe coding, I challenge the ideas first. What I usually do is tell the coding agent, "Do not implement the code before I explicitly tell you to implement." I do this because I want to brainstorm the idea and challenge the agent, iterating until we reach an agreed-upon solution; only then do we implement it.
One more thing: go back to the fundamentals. AI updates will keep coming. Just 2 or 3 days ago we had Google Cloud Next, where they announced around 100 to 200 new AI updates. But the fundamentals remain the most important: infrastructure, data analytics, data pipelines, security, and networking. Later on, whatever new AI updates arrive, you will be able to implement them on top of that solid foundation.
Moderator:
Do either of the speakers have any final thoughts to add?
Kenrick:
I think events like this are a great opportunity to connect with each other. And thank you, Gaby, for sharing those career opportunities. Please do network with the people around you, because, who knows, one day they might just help you.