Digital Garden

A collection of my stories, thoughts and writings!

NPM Trusted Publishing: The “Weird” 404 Error and the Node.js 24 Fix

Why your perfectly configured GitHub Action is failing with “Access token expired” — and how to fix it in seconds.

tl;dr

The Problem: Node.js 22 ships with npm v10.
The Fix: Update your workflow to use Node.js 24 (LTS), which ships with npm v11.

If you’ve recently switched to npm trusted publishing, you’re making the right move. It eliminates the need for long-lived secrets, simplifies key management, and automatically generates provenance attestations for your packages.

But if you are migrating an existing workflow, you might hit a wall. You configured the trust relationship on npmjs.com. You set up your OIDC permissions in GitHub Actions. You pushed your release. And then… red text. Specifically, a confusing combination of “Access token expired” and a 404 Not Found.

Here is the log that had me scratching my head:

```
npm notice Publishing to https://registry.npmjs.org/ with tag latest and public access
npm notice publish Signed provenance statement with source and build information from GitHub Actions
npm notice publish Provenance statement published to transparency log
npm notice Access token expired or revoked. Please try logging in again.
npm error code E404
npm error 404 Not Found - PUT https://registry.npmjs.org/@scope/package - Not found
npm error 404 '@scope/package@0.1.4' is not in this registry.
```

The Investigation

The error message is gaslighting you.

“Access token expired”: This feels impossible. With trusted publishing, the token is generated on the fly via OIDC. It can’t be “expired” — it was created 3 seconds ago.
“404 Not Found”: Also confusing. Is the registry down? Did I type the package name wrong?

Naturally, I went through the standard troubleshooting checklist:

Workflow Filename: Did I match the filename in npm settings exactly to publish.yml? (Yes.)
Permissions: Did I include id-token: write? (Yes.)
Environment: Did I accidentally set an Environment in npm but not in YAML? (No.)

Everything looked perfect. My workflow was running on Node.js 22 (maintenance LTS), which seemed like the safe, standard choice.

```yaml
- name: Setup Node.js
  uses: actions/setup-node@v4
  with:
    node-version: "22.x" # <--- The culprit
    registry-url: "https://registry.npmjs.org"
```

The Root Cause

The issue isn’t your configuration; it’s the npm CLI version. Buried in the documentation is a recent requirement: trusted publishing requires npm CLI version 11.5.1 or later.

Here is the problem:

Node.js 22 ships with npm v10.
Node.js 24 (LTS) ships with npm v11.

Because Node 22 uses npm v10, the CLI doesn’t support the latest OIDC handshake protocols required by the registry. When the handshake fails, the registry treats you as an anonymous user. Anonymous users can’t PUT (publish), resulting in the misleading 404 Not Found.

The Fix

Change your Node version to 24 or above.

```yaml
- name: Setup Node.js
  uses: actions/setup-node@v4
  with:
    node-version: "24.x" # Upgraded from 22.x
    registry-url: "https://registry.npmjs.org"
```

Once I swapped the version, the OIDC handshake worked instantly, provenance was generated, and the package was published successfully.

Summary

If you are seeing E404 and “Access token expired”, you have probably spent the last hour meticulously checking your workflow filenames. If those look correct, stop staring at them. You aren’t crazy; the error message is just misleading. The issue is likely your tools, not your typos.

1. Verify your filename one last time (just to be sure).
2. Check your Node version.
3. If you are on Node 22 or older, bump it to Node 24.

Happy publishing!
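Bonus Tip

If you want to see exactly which npm version your runner is using, or you are stuck on an older Node for unrelated reasons, you can check (and upgrade) the CLI directly. This is a minimal sketch, not part of the original debugging session: upgrading npm globally changes only the CLI, not the Node runtime, and it should satisfy the 11.5.1 requirement, though bumping to Node 24 remains the simpler fix.

```sh
node --version              # e.g. v22.x, which bundles npm v10
npm --version               # trusted publishing needs >= 11.5.1
npm install -g npm@latest   # workaround: upgrade just the npm CLI
npm --version               # should now report v11 or later
```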

Stop Writing Boilerplate: A Node.js SDK for Google’s Agent Development Kit

Disclaimer: This project (google-adk-client) is a personal, open-source initiative. It is not an official Google product, and it is not supported by Google. The views and opinions expressed in this article and in the project are my own and do not necessarily reflect those of my employer.

tl;dr

Introducing google-adk-client, a free, open-source Node.js client SDK for the Google Agent Development Kit (ADK).
It solves a real problem: it saves you from writing repetitive, boilerplate code to connect your app to an ADK agent service.
Key Feature: It includes seamless, out-of-the-box connectors for the Vercel AI SDK, making it incredibly easy to build conversational UIs with the useChat hook.
Get it: Find the project on GitHub and install it via npm.

As AI development shifts from monolithic models to sophisticated, multi-agent systems, the Google Agent Development Kit (ADK) has emerged as a powerful, production-ready framework for the job. It provides the core foundation for building robust, modular, and scalable AI agents.

But while the ADK provides the server-side power, a critical gap remains: connecting your application to it. Developers are still left writing repetitive, error-prone boilerplate code. This involves manually implementing API clients, handling specific streaming formats like Server-Sent Events (SSE), and wrestling with integration into modern UI frameworks. This isn’t just tedious; it’s a significant drain on development time and a common source of bugs.

To solve this, I’m excited to launch google-adk-client: a free, open-source Node.js client designed to eliminate this repetitive work and dramatically improve the developer experience.

The Problem: Repetitive Integration Logic

If you’ve worked with the ADK, this might sound familiar. You need to:

Create a client to communicate with your deployed ADK agent’s REST API.
Implement logic to handle the SSE stream for real-time, conversational responses.
Transform that stream into the specific format required by your frontend library of choice, like the popular Vercel AI SDK and its useChat hook.
Ensure everything is strongly typed to avoid runtime errors.

Doing this for every new project is inefficient. The goal of google-adk-client is to solve this problem once and for all with a simple, robust, and reusable library.

Core Features of google-adk-client

The library is built around a few core principles: simplicity, strong typing, and seamless integration with the tools you already use.

1. The AdkClient Core

The heart of the library is the AdkClient class. It provides a simple, configurable interface for all Google ADK agent API endpoints, abstracting away the underlying fetch calls. Initialization is straightforward:

```ts
// src/lib/adk.ts
import { AdkClient } from "@kentandrian/google-adk";

export const client = new AdkClient({
  // The base URL of your deployed Google ADK agent
  baseUrl: "https://my-adk-agent.example.com",
  // A unique identifier for the end-user
  userId: "user-12345",
  // (Optional) An identifier for your application
  appName: "my-amazing-app",
});
```

With this single client instance, you have access to the entire ADK API surface, including:

Session Management: client.sessions.create(), client.sessions.list(), etc.
Running Agents: client.run() for single responses and client.runSse() for streaming.
Artifacts: client.artifacts.listNames(), client.artifacts.getContents(), etc.
Evaluation: client.evaluation.createSet() and more.

2. Seamless Vercel AI SDK Integration

This is the killer feature. The Vercel AI SDK has become a standard for building conversational UIs in React and Next.js. google-adk-client makes the integration completely seamless with two powerful connectors.

Server-Side Connector for Next.js API Routes

This is the recommended approach for most applications. You create a simple API route in your Next.js app that securely communicates with your ADK agent. The createAdkAiSdkStream function handles the entire stream transformation for you. Your API route can be as clean as this (a smoke-test sketch for it follows at the end of this post):

```ts
// src/app/api/chat/route.ts
import { AdkClient } from "@kentandrian/google-adk";
import { createAdkAiSdkStream } from "@kentandrian/google-adk/ai-sdk";
import { CoreMessage } from "ai";

export async function POST(req: Request) {
  const { messages, data } = await req.json();
  const { sessionId } = data; // You can pass the session ID from the client

  const client = new AdkClient({
    baseUrl: process.env.ADK_AGENT_URL!,
    userId: "some-user-id", // Replace with actual user authentication
  });

  // 1. Call the ADK agent with the message history
  const adkResponse = await client.runSse(sessionId, messages as CoreMessage[]);

  // 2. Transform the ADK SSE stream into the Vercel AI SDK format
  return createAdkAiSdkStream(adkResponse);
}
```

Who Is This For?

I built this tool for anyone working within the Google ADK ecosystem:

Frontend Developers building web apps that need a reliable way to connect to an ADK agent.
Backend Developers using Node.js to orchestrate services and interact with the ADK API.
AI Engineers who want to provide an easy-to-use client for the agents they build, accelerating adoption by other teams.

Get Started Today!

This library is designed to be a community-driven tool. My focus is on providing robust, well-tested, and clearly documented code to help accelerate your development.

You can find the project, along with complete documentation and examples, on GitHub.

⭐️ GitHub Repository: KenTandrian/google-adk-client

Installation is as simple as:

```sh
npm install @kentandrian/google-adk
```

I welcome any feedback, suggestions, and contributions from the community. Feel free to give the repository a star if you find it useful, open an issue, or submit a pull request.

Let’s build better AI agents, faster, together! 🚀
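As promised above, here is a hypothetical smoke test for the API route. It is a sketch, not part of the library: it assumes a local Next.js dev server on port 3000 and an existing ADK session (“session-123” is a placeholder). The -N flag keeps curl from buffering the streamed response.

```sh
# Exercise the /api/chat route with a sample request.
curl -N http://localhost:3000/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{ "role": "user", "content": "Hello, agent!" }],
    "data": { "sessionId": "session-123" }
  }'
```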

Google Cloud Networking: Hybrid Connectivity with Hub and Spoke Topology

Google Cloud Networking: Hybrid Architecture with Hub and Spoke Topology

In today’s hybrid IT landscape, businesses need seamless connections between on-premises infrastructure and cloud resources. One of the most popular networking architectures is the hub-and-spoke topology, which centralizes network control while granting secure access to various cloud and on-premises environments. This article guides you through an implementation of this approach for hybrid connectivity within Google Cloud, highlighting its advantages for managing complex network configurations.

The architecture diagram

Lab Design

In this section, we will walk through the steps required to build the architecture. These are the main steps:

1. Create projects for the hub, spoke, and simulated on-premises environments.
2. Set up custom VPC networks in each project, with one subnetwork in each network.
3. Set up firewall rules.
4. Set up VPC Network Peering between the hub and spoke networks.
5. Set up HA VPN between the on-premises and hub networks.
6. Create VMs for testing.
7. Set up DNS managed zones in the hub and spoke networks.
8. Set up a custom DNS server in the simulated on-premises environment using BIND.
9. Set up DNS forwarding between the on-premises and hub networks.
10. Test the architecture.

Step 1: Project Set-up

Let’s start by exporting several variables that we will use throughout the lab. You can skip this step if you have your projects ready. Note that project IDs must be globally unique, so you will need to come up with your own project IDs.

```sh
# TODO: change these project IDs
export HUB_PROJECT_ID="dns-hub"
export SPOKE_PROJECT_ID="dns-spoke"
export ONPREM_PROJECT_ID="dns-onprem"

export REGION="asia-southeast2"
export HUB_NETWORK_NAME="hub-network"
export HUB_SUBNET_NAME="hub-subnet"
export SPOKE_NETWORK_NAME="spoke-network"
export SPOKE_SUBNET_NAME="spoke-subnet"
export ONPREM_NETWORK_NAME="onprem-network"
export ONPREM_SUBNET_NAME="onprem-subnet"
```

Now, let’s create 3 new projects for the architecture: one each for the hub, spoke, and simulated on-premises environments.

```sh
# Create simulated on-premise project
gcloud projects create $ONPREM_PROJECT_ID \
  --name="On-premise Project"

# Create hub project
gcloud projects create $HUB_PROJECT_ID \
  --name="Hub Project"

# Create spoke project
gcloud projects create $SPOKE_PROJECT_ID \
  --name="Spoke Project"
```

Attach these projects to your billing account. The commands below link the 3 projects to the same billing account.

```sh
# TODO: change to your billing account ID
export BILLING_ACCOUNT_ID="0X0X0X-0X0X0X-0X0X0X"

gcloud billing projects link $ONPREM_PROJECT_ID \
  --billing-account=$BILLING_ACCOUNT_ID
gcloud billing projects link $HUB_PROJECT_ID \
  --billing-account=$BILLING_ACCOUNT_ID
gcloud billing projects link $SPOKE_PROJECT_ID \
  --billing-account=$BILLING_ACCOUNT_ID
```

Then, let’s enable some APIs in these projects.

```sh
gcloud services enable compute.googleapis.com config.googleapis.com \
  --project=$ONPREM_PROJECT_ID
gcloud services enable compute.googleapis.com dns.googleapis.com \
  --project=$HUB_PROJECT_ID
gcloud services enable compute.googleapis.com dns.googleapis.com \
  --project=$SPOKE_PROJECT_ID
```

Step 2: VPC Networks

Next, we will create 3 VPC networks, one in each project.
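Before creating them, you can optionally confirm that the API enablement from Step 1 went through. This check is not part of the original lab flow, just a quick sanity check (enablement can take a minute to propagate; repeat for the other two projects):

```sh
# Each command should list the API before you continue.
gcloud services list --enabled --project=$HUB_PROJECT_ID \
  --filter="config.name:compute.googleapis.com"
gcloud services list --enabled --project=$HUB_PROJECT_ID \
  --filter="config.name:dns.googleapis.com"
```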
```sh
# Create VPC network and subnetwork in on-premise project
gcloud compute networks create $ONPREM_NETWORK_NAME \
  --project=$ONPREM_PROJECT_ID \
  --subnet-mode="custom"
gcloud compute networks subnets create $ONPREM_SUBNET_NAME \
  --project=$ONPREM_PROJECT_ID \
  --network=$ONPREM_NETWORK_NAME \
  --range=10.10.0.0/24 \
  --region=$REGION

# Create VPC network and subnetwork in hub project
gcloud compute networks create $HUB_NETWORK_NAME \
  --project=$HUB_PROJECT_ID \
  --subnet-mode="custom"
gcloud compute networks subnets create $HUB_SUBNET_NAME \
  --project=$HUB_PROJECT_ID \
  --network=$HUB_NETWORK_NAME \
  --range=10.11.0.0/24 \
  --region=$REGION

# Create VPC network and subnetwork in spoke project
gcloud compute networks create $SPOKE_NETWORK_NAME \
  --project=$SPOKE_PROJECT_ID \
  --subnet-mode="custom"
gcloud compute networks subnets create $SPOKE_SUBNET_NAME \
  --project=$SPOKE_PROJECT_ID \
  --network=$SPOKE_NETWORK_NAME \
  --range=10.12.0.0/24 \
  --region=$REGION
```

Step 3: Firewall Rules

Now, let’s set up firewall rules to allow SSH and ICMP.

```sh
gcloud compute firewall-rules create onprem-network-allow-ssh-icmp \
  --project=$ONPREM_PROJECT_ID \
  --network=$ONPREM_NETWORK_NAME \
  --allow=tcp:22,icmp \
  --description="Allow SSH and ICMP to VMs" \
  --direction=INGRESS

gcloud compute firewall-rules create hub-network-allow-ssh-icmp \
  --project=$HUB_PROJECT_ID \
  --network=$HUB_NETWORK_NAME \
  --allow=tcp:22,icmp \
  --description="Allow SSH and ICMP to VMs" \
  --direction=INGRESS

gcloud compute firewall-rules create spoke-network-allow-ssh-icmp \
  --project=$SPOKE_PROJECT_ID \
  --network=$SPOKE_NETWORK_NAME \
  --allow=tcp:22,icmp \
  --description="Allow SSH and ICMP to VMs" \
  --direction=INGRESS
```

Step 4: VPC Network Peering

To connect the hub and spoke networks, we will use VPC Network Peering. The peering connection must be created twice: once from the hub network and once from the spoke network.

```sh
gcloud compute networks peerings create hub-to-spoke \
  --project=$HUB_PROJECT_ID \
  --network=$HUB_NETWORK_NAME \
  --peer-project=$SPOKE_PROJECT_ID \
  --peer-network=$SPOKE_NETWORK_NAME \
  --export-custom-routes

gcloud compute networks peerings create spoke-to-hub \
  --project=$SPOKE_PROJECT_ID \
  --network=$SPOKE_NETWORK_NAME \
  --peer-project=$HUB_PROJECT_ID \
  --peer-network=$HUB_NETWORK_NAME \
  --import-custom-routes
```

Step 5: HA VPN Connection

The hub network connects to the on-premises network using a highly available (HA) VPN connection. Cloud Interconnect would also work here if you need larger bandwidth.

Step 5.1: Create VPN Gateways

We will create 2 VPN gateways, one each in the hub and on-premises networks.

```sh
gcloud compute vpn-gateways create hub-vpn-gw1 \
  --project=$HUB_PROJECT_ID \
  --region=$REGION \
  --network=$HUB_NETWORK_NAME

gcloud compute vpn-gateways create onprem-vpn-gw1 \
  --project=$ONPREM_PROJECT_ID \
  --region=$REGION \
  --network=$ONPREM_NETWORK_NAME
```

Step 5.2: Create Cloud Routers

Before creating the Cloud Router resources, set 2 ASNs (Autonomous System Numbers) to be used by the routers. In this example, we will use 65001 for the hub router and 65002 for the on-premises router.
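(Optional) Before bringing up the VPN, you can confirm that the peering from Step 4 shows as ACTIVE on both sides. This check is not part of the original lab flow, but it catches mistakes early: an inactive peering means the hub and spoke will never exchange routes.

```sh
# The state column should read ACTIVE on both sides.
gcloud compute networks peerings list \
  --project=$HUB_PROJECT_ID \
  --network=$HUB_NETWORK_NAME
gcloud compute networks peerings list \
  --project=$SPOKE_PROJECT_ID \
  --network=$SPOKE_NETWORK_NAME
```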
```sh
# Set up Google ASN for both routers
export ASN_HUB=65001
export ASN_ONPREM=65002

# Create Cloud Routers
gcloud compute routers create hub-router1 \
  --project=$HUB_PROJECT_ID \
  --region=$REGION \
  --network=$HUB_NETWORK_NAME \
  --asn=$ASN_HUB \
  --advertisement-mode=CUSTOM \
  --set-advertisement-groups=ALL_SUBNETS \
  --set-advertisement-ranges=10.12.0.0/24="Spoke network subnet"

gcloud compute routers create onprem-router1 \
  --project=$ONPREM_PROJECT_ID \
  --region=$REGION \
  --network=$ONPREM_NETWORK_NAME \
  --asn=$ASN_ONPREM
```

Note that the Cloud Router in the hub network must advertise the subnets from the spoke network. Otherwise, the on-premises network and the spoke network will not be able to communicate, even though DNS queries are resolved.

Step 5.3: Create VPN Tunnels

Let’s create 2 VPN tunnels from each network. For organizations with the “Restrict VPN Peer IPs” organization policy set to “Deny All”, this step might fail. To handle that issue, you will need to allow the specific VPN peer IPs in the organization policy.

```sh
# TODO: Create 2 shared secrets
export SHARED_SECRET_1=[shared-secret-1]
export SHARED_SECRET_2=[shared-secret-2]

# VPN Gateways
export ONPREM_GW="projects/$ONPREM_PROJECT_ID/regions/$REGION/vpnGateways/onprem-vpn-gw1"
export HUB_GW="projects/$HUB_PROJECT_ID/regions/$REGION/vpnGateways/hub-vpn-gw1"

# Create 2 tunnels in hub network
gcloud compute vpn-tunnels create hub-tunnel0 \
  --project=$HUB_PROJECT_ID \
  --region=$REGION \
  --peer-gcp-gateway=$ONPREM_GW \
  --ike-version=2 \
  --shared-secret=$SHARED_SECRET_1 \
  --router=hub-router1 \
  --vpn-gateway=hub-vpn-gw1 \
  --interface=0

gcloud compute vpn-tunnels create hub-tunnel1 \
  --project=$HUB_PROJECT_ID \
  --region=$REGION \
  --peer-gcp-gateway=$ONPREM_GW \
  --ike-version=2 \
  --shared-secret=$SHARED_SECRET_2 \
  --router=hub-router1 \
  --vpn-gateway=hub-vpn-gw1 \
  --interface=1

# Create 2 tunnels in on-premise network
gcloud compute vpn-tunnels create onprem-tunnel0 \
  --project=$ONPREM_PROJECT_ID \
  --region=$REGION \
  --peer-gcp-gateway=$HUB_GW \
  --ike-version=2 \
  --shared-secret=$SHARED_SECRET_1 \
  --router=onprem-router1 \
  --vpn-gateway=onprem-vpn-gw1 \
  --interface=0

gcloud compute vpn-tunnels create onprem-tunnel1 \
  --project=$ONPREM_PROJECT_ID \
  --region=$REGION \
  --peer-gcp-gateway=$HUB_GW \
  --ike-version=2 \
  --shared-secret=$SHARED_SECRET_2 \
  --router=onprem-router1 \
  --vpn-gateway=onprem-vpn-gw1 \
  --interface=1
```

Step 5.4: Create BGP Peering for Each Tunnel

We will create 4 router interfaces and attach 1 BGP peer to each of them.
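To make the next block easier to follow, here is the link-local addressing plan it uses. The specific /30 pairs are just a convention chosen for this lab; any addresses from 169.254.0.0/16 work, as long as both ends of a tunnel share a /30:

```sh
# BGP addressing plan (one /30 per tunnel; hub = .1, on-premises = .2)
# hub-tunnel0 <-> onprem-tunnel0: 169.254.0.1 <-> 169.254.0.2
# hub-tunnel1 <-> onprem-tunnel1: 169.254.1.1 <-> 169.254.1.2
```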
```sh
# Router interface and BGP peer for tunnel0 in hub network
gcloud compute routers add-interface hub-router1 \
  --interface-name if-hub-tunnel0-to-onprem \
  --ip-address 169.254.0.1 \
  --mask-length 30 \
  --vpn-tunnel hub-tunnel0 \
  --region $REGION \
  --project $HUB_PROJECT_ID

gcloud compute routers add-bgp-peer hub-router1 \
  --peer-name bgp-hub-tunnel0-to-onprem \
  --interface if-hub-tunnel0-to-onprem \
  --peer-ip-address 169.254.0.2 \
  --peer-asn $ASN_ONPREM \
  --region $REGION \
  --project $HUB_PROJECT_ID

# Router interface and BGP peer for tunnel1 in hub network
gcloud compute routers add-interface hub-router1 \
  --interface-name if-hub-tunnel1-to-onprem \
  --ip-address 169.254.1.1 \
  --mask-length 30 \
  --vpn-tunnel hub-tunnel1 \
  --region $REGION \
  --project $HUB_PROJECT_ID

gcloud compute routers add-bgp-peer hub-router1 \
  --peer-name bgp-hub-tunnel1-to-onprem \
  --interface if-hub-tunnel1-to-onprem \
  --peer-ip-address 169.254.1.2 \
  --peer-asn $ASN_ONPREM \
  --region $REGION \
  --project $HUB_PROJECT_ID

# Router interface and BGP peer for tunnel0 in on-premise network
gcloud compute routers add-interface onprem-router1 \
  --interface-name if-onprem-tunnel0-to-hub \
  --ip-address 169.254.0.2 \
  --mask-length 30 \
  --vpn-tunnel onprem-tunnel0 \
  --region $REGION \
  --project $ONPREM_PROJECT_ID

gcloud compute routers add-bgp-peer onprem-router1 \
  --peer-name bgp-onprem-tunnel0-to-hub \
  --interface if-onprem-tunnel0-to-hub \
  --peer-ip-address 169.254.0.1 \
  --peer-asn $ASN_HUB \
  --region $REGION \
  --project $ONPREM_PROJECT_ID

# Router interface and BGP peer for tunnel1 in on-premise network
gcloud compute routers add-interface onprem-router1 \
  --interface-name if-onprem-tunnel1-to-hub \
  --ip-address 169.254.1.2 \
  --mask-length 30 \
  --vpn-tunnel onprem-tunnel1 \
  --region $REGION \
  --project $ONPREM_PROJECT_ID

gcloud compute routers add-bgp-peer onprem-router1 \
  --peer-name bgp-onprem-tunnel1-to-hub \
  --interface if-onprem-tunnel1-to-hub \
  --peer-ip-address 169.254.1.1 \
  --peer-asn $ASN_HUB \
  --region $REGION \
  --project $ONPREM_PROJECT_ID
```

Step 5.5: Validate Connection

Now, let’s check whether the tunnels are up and running. Run the commands below and see if they return “Tunnel is up and running.”

```sh
gcloud compute vpn-tunnels describe hub-tunnel0 \
  --project $HUB_PROJECT_ID \
  --region $REGION \
  --format "get(detailedStatus)"

gcloud compute vpn-tunnels describe hub-tunnel1 \
  --project $HUB_PROJECT_ID \
  --region $REGION \
  --format "get(detailedStatus)"

gcloud compute vpn-tunnels describe onprem-tunnel0 \
  --project $ONPREM_PROJECT_ID \
  --region $REGION \
  --format "get(detailedStatus)"

gcloud compute vpn-tunnels describe onprem-tunnel1 \
  --project $ONPREM_PROJECT_ID \
  --region $REGION \
  --format "get(detailedStatus)"
```

Step 6: Virtual Machines for Testing

Let’s create 3 VM instances, one in each project. These VM instances will be used for DNS lookup tests and ping tests. For organizations with the “Shielded VMs” organization policy enforced, this step might fail. To handle that issue, you will need to turn off the enforcement at the project level.
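One last connectivity check before creating the VMs: you can also inspect the BGP sessions directly. This is an extra check beyond the tunnel status above, not part of the original walkthrough; the output is verbose, so look for the bgpPeerStatus section, where each peer should be reported as Established with routes learned from the other side.

```sh
gcloud compute routers get-status hub-router1 \
  --project $HUB_PROJECT_ID \
  --region $REGION

gcloud compute routers get-status onprem-router1 \
  --project $ONPREM_PROJECT_ID \
  --region $REGION
```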
```sh
# VM instance for hub network
gcloud compute instances create hub-vm \
  --project=$HUB_PROJECT_ID \
  --zone=${REGION}-a \
  --machine-type=e2-medium \
  --network=$HUB_NETWORK_NAME \
  --subnet=$HUB_SUBNET_NAME \
  --tags=client-vm \
  --metadata enable-oslogin=TRUE \
  --no-address

# VM instance for spoke network
gcloud compute instances create spoke-vm \
  --project=$SPOKE_PROJECT_ID \
  --zone=${REGION}-a \
  --machine-type=e2-medium \
  --network=$SPOKE_NETWORK_NAME \
  --subnet=$SPOKE_SUBNET_NAME \
  --tags=client-vm \
  --metadata enable-oslogin=TRUE \
  --no-address

# VM instance for on-premise network
gcloud compute instances create onprem-vm \
  --project=$ONPREM_PROJECT_ID \
  --zone=${REGION}-a \
  --machine-type=e2-medium \
  --network=$ONPREM_NETWORK_NAME \
  --subnet=$ONPREM_SUBNET_NAME \
  --tags=client-vm \
  --metadata enable-oslogin=TRUE \
  --no-address
```

Grab the internal IPs of each VM. We will use them in the next steps.

Step 7: DNS Managed Zones

Now, we will set up DNS managed zones in the hub network and the spoke network.

Step 7.1: Create private DNS zones

```sh
# Create private DNS zone "cloud.local" in hub network
gcloud dns managed-zones create cloud-local-zone \
  --dns-name="cloud.local." \
  --description="Private DNS zone for resources in hub network" \
  --project=$HUB_PROJECT_ID \
  --networks=$HUB_NETWORK_NAME \
  --visibility=private

# Create private DNS zone "spoke.cloud.local" in spoke network
gcloud dns managed-zones create spoke-local-zone \
  --dns-name="spoke.cloud.local." \
  --description="Private DNS zone for resources in spoke network" \
  --project=$SPOKE_PROJECT_ID \
  --networks=$SPOKE_NETWORK_NAME \
  --visibility=private
```

Step 7.2: Create DNS peering zones

Next, let’s configure DNS peering between the hub network and the spoke network. The spoke network peers using the “local.” DNS name so that it can resolve both the “cloud.local” and “site.local” DNS names.

```sh
# Create peering DNS zone "spoke.cloud.local." in hub network
gcloud dns managed-zones create spoke-peering-zone \
  --dns-name="spoke.cloud.local." \
  --description="Private DNS peering zone to spoke network" \
  --project=$HUB_PROJECT_ID \
  --networks=$HUB_NETWORK_NAME \
  --target-project=$SPOKE_PROJECT_ID \
  --target-network=$SPOKE_NETWORK_NAME \
  --visibility=private

# Create peering DNS zone "local." in spoke network
gcloud dns managed-zones create hub-peering-zone \
  --dns-name="local." \
  --description="Private DNS peering zone to hub network" \
  --project=$SPOKE_PROJECT_ID \
  --networks=$SPOKE_NETWORK_NAME \
  --target-project=$HUB_PROJECT_ID \
  --target-network=$HUB_NETWORK_NAME \
  --visibility=private
```

Step 7.3: Add DNS records

```sh
# Create test.cloud.local record
cat > test-cloud-record.yml <<EOF
kind: dns#resourceRecordSet
name: test.cloud.local.
rrdatas:
- [INTERNAL_IP_OF_HUB_VM]
ttl: 300
type: A
EOF

# Import the record to cloud local zone
gcloud dns record-sets import -z=cloud-local-zone \
  --project=$HUB_PROJECT_ID \
  --delete-all-existing test-cloud-record.yml

# Create test.spoke.cloud.local record
cat > test-spoke-record.yml <<EOF
kind: dns#resourceRecordSet
name: test.spoke.cloud.local.
rrdatas:
- [INTERNAL_IP_OF_SPOKE_VM]
ttl: 300
type: A
EOF

# Import the record to spoke local zone
gcloud dns record-sets import -z=spoke-local-zone \
  --project=$SPOKE_PROJECT_ID \
  --delete-all-existing test-spoke-record.yml
```

Step 8: Custom DNS Server

We will use BIND 9 as the custom DNS server; it is currently available in the Google Cloud Marketplace.
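Before launching it, you can quickly confirm that the records from Step 7.3 were imported. This is a sanity check I am adding here, not one of the original steps; each zone should list the new A record alongside its NS and SOA records.

```sh
gcloud dns record-sets list -z=cloud-local-zone \
  --project=$HUB_PROJECT_ID
gcloud dns record-sets list -z=spoke-local-zone \
  --project=$SPOKE_PROJECT_ID
```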
Step 8.1: Launch the DNS Server

Here are the steps to set up the DNS server on a Google Compute Engine VM:

1. Go to the product page in the Google Cloud Marketplace. You can also search for “DNS Server - BIND DNS Server on Ubuntu 20.04 LTS”.
2. Click “Get Started” and agree to the “Terms and agreements”.
3. Click “Launch” and fill in the details. Make sure that you select a zone in the region that you used for the on-premises network.
4. Click “Deploy”.

For organizations with the “Define trusted image projects” organization policy enabled, you should allow images from “projects/mpi-cloud-infra-services-publi” in the policy. For organizations with the “Define allowed external IPs for VM instances” organization policy set to “Deny All”, you should allow this particular VM to use an external IP in the policy.

Step 8.2: Sign in and add DNS record

1. After the deployment is completed, go to the Google Compute Engine page and SSH into “dns-server-vm”.
2. Run “sudo passwd” and set a new password for the “root” user.
3. Grab the external IP of “dns-server-vm”. Go to [EXTERNAL_IP]:10000 to access Webmin. Sign in using “root” as the user and the new password.
4. In the left navigation bar, click “Refresh Modules” to load the BIND DNS Server module.
5. Go to “Servers”, and click “BIND DNS Server”.
6. Click “Create master zone”. Set the “Domain name / Network” to “site.local” and “Email address” to your own email address. Click “Create”.
7. Click on the newly created master zone name and click “Address” to add a new A record. Set the “Name” to “test” and the “Address” to the internal IP address of “onprem-vm” in the on-premises project.
8. Click on the “Apply configuration” button in the top-right corner of the page.

Step 8.3: Make the on-premises network use the new DNS server

We will set the internal IP of the new DNS server as the alternative DNS server of the on-premises network. This is a workaround, since we are simulating the on-premises environment in a Google Cloud project.

```sh
export ONPREM_DNS_SERVER_INT_IP=[internal-ip-of-dns-server-vm]

gcloud dns policies create forward-to-bind9 \
  --description="Forward DNS queries to BIND server" \
  --project=$ONPREM_PROJECT_ID \
  --networks=$ONPREM_NETWORK_NAME \
  --private-alternative-name-servers=$ONPREM_DNS_SERVER_INT_IP \
  --enable-logging
```

Step 9: DNS Forwarding

Now, we need to set up DNS forwarding to forward DNS queries from the on-premises network to the hub DNS server and vice versa.

Step 9.1: Hub to on-premises forwarding

First, let’s set up outbound DNS forwarding from the hub network to the on-premises DNS server.

```sh
export ONPREM_DNS_SERVER_EXT_IP=[external-ip-of-dns-server-vm]

# Create outbound forwarding DNS zone "site.local"
gcloud dns managed-zones create site-forwarding-zone \
  --dns-name="site.local." \
  --description="Private DNS zone to forward to on-premise DNS server" \
  --project=$HUB_PROJECT_ID \
  --networks=$HUB_NETWORK_NAME \
  --forwarding-targets=$ONPREM_DNS_SERVER_EXT_IP \
  --visibility=private
```

Step 9.2: On-premises to hub forwarding

Next, let’s set up inbound DNS forwarding from the on-premises network to the hub DNS server.

```sh
gcloud dns policies create hub-inbound-policy \
  --description="DNS inbound policy from onprem-network to hub-network" \
  --project=$HUB_PROJECT_ID \
  --networks=$HUB_NETWORK_NAME \
  --enable-inbound-forwarding \
  --enable-logging
```

Now, go to Cloud DNS → DNS Server Policies and select “hub-inbound-policy”. Go to the “In Use By” tab and grab the “inbound query forwarding IP”. We will use this IP to set up forwarding in the BIND DNS server.

Now, go back to Webmin and follow these steps:

1. On the “BIND DNS Server” page, go to “Edit Config File”.
2. Select “/etc/bind/named.conf.options” in the file selector.
3. Change the config file to this:

```
acl good-clients {
    35.199.192.0/19;
};

options {
    directory "/var/cache/bind";
    dnssec-validation no;
    allow-recursion { good-clients; };
    listen-on-v6 { any; };
    forwarders { [inbound-query-forwarding-ip]; };
};
```

Remember to change “[inbound-query-forwarding-ip]” to the IP from the “hub-inbound-policy” DNS server policy. Click on the green “Save” button in the bottom-left corner, then click on the “Apply configuration” button in the top-right corner of the page to save the settings.

Step 10: Validation

Step 10.1: Cloud NAT

Since our test VMs don’t have external IPs, they cannot connect to the internet by default. Therefore, we need to configure Cloud NAT to enable outbound connections to the internet.

```sh
gcloud compute routers nats create hub-nat \
  --router=hub-router1 \
  --project=$HUB_PROJECT_ID \
  --region=$REGION \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges \
  --enable-logging

gcloud compute routers nats create onprem-nat \
  --router=onprem-router1 \
  --project=$ONPREM_PROJECT_ID \
  --region=$REGION \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges \
  --enable-logging

# For spoke network, we need to create a Cloud Router first
gcloud compute routers create spoke-router1 \
  --project=$SPOKE_PROJECT_ID \
  --region=$REGION \
  --network=$SPOKE_NETWORK_NAME

gcloud compute routers nats create spoke-nat \
  --router=spoke-router1 \
  --project=$SPOKE_PROJECT_ID \
  --region=$REGION \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges \
  --enable-logging
```

Step 10.2: Test!

Now that everything is set up, let’s test the architecture by SSHing into each VM in the three projects (hub-vm, spoke-vm, onprem-vm) and running these commands:

```sh
# Install dnsutils package
sudo apt install dnsutils

# Run DNS lookups
nslookup test.cloud.local
nslookup test.spoke.cloud.local
nslookup test.site.local
```

If the VPN connection and VPC peering are set up correctly, you should also be able to ping the other VMs through their DNS names, like this:

```sh
ping test.cloud.local
```

Teardown

To clean up all the resources that we have created, run these commands:

```sh
# Delete Cloud NAT
gcloud compute routers nats delete hub-nat \
  --project=$HUB_PROJECT_ID \
  --region=$REGION \
  --router=hub-router1
gcloud compute routers nats delete spoke-nat \
  --project=$SPOKE_PROJECT_ID \
  --region=$REGION \
  --router=spoke-router1
gcloud compute routers nats delete onprem-nat \
  --project=$ONPREM_PROJECT_ID \
  --region=$REGION \
  --router=onprem-router1

# Delete Cloud Router in spoke-network
gcloud compute routers delete spoke-router1 \
  --project=$SPOKE_PROJECT_ID \
  --region=$REGION

# Delete DNS forwarding and peering zones
gcloud dns managed-zones delete site-forwarding-zone \
  --project=$HUB_PROJECT_ID
gcloud dns managed-zones delete spoke-peering-zone \
  --project=$HUB_PROJECT_ID
gcloud dns managed-zones delete hub-peering-zone \
  --project=$SPOKE_PROJECT_ID

# Delete DNS private zones
gcloud dns record-sets delete test.cloud.local. \
  -z=cloud-local-zone \
  --project=$HUB_PROJECT_ID \
  --type=A
gcloud dns managed-zones delete cloud-local-zone \
  --project=$HUB_PROJECT_ID
gcloud dns record-sets delete test.spoke.cloud.local. \
  -z=spoke-local-zone \
  --project=$SPOKE_PROJECT_ID \
  --type=A
gcloud dns managed-zones delete spoke-local-zone \
  --project=$SPOKE_PROJECT_ID

# Delete DNS server policies
gcloud dns policies update hub-inbound-policy \
  --networks="" \
  --project=$HUB_PROJECT_ID
gcloud dns policies delete hub-inbound-policy \
  --project=$HUB_PROJECT_ID
gcloud dns policies update forward-to-bind9 \
  --networks="" \
  --project=$ONPREM_PROJECT_ID
gcloud dns policies delete forward-to-bind9 \
  --project=$ONPREM_PROJECT_ID

# Delete VM instances
gcloud compute instances delete hub-vm \
  --project=$HUB_PROJECT_ID \
  --zone=${REGION}-a
gcloud compute instances delete spoke-vm \
  --project=$SPOKE_PROJECT_ID \
  --zone=${REGION}-a
gcloud compute instances delete onprem-vm \
  --project=$ONPREM_PROJECT_ID \
  --zone=${REGION}-a

# Delete BGP peering and interfaces
gcloud compute routers remove-bgp-peer hub-router1 \
  --peer-name=bgp-hub-tunnel0-to-onprem \
  --project=$HUB_PROJECT_ID \
  --region=$REGION
gcloud compute routers remove-bgp-peer hub-router1 \
  --peer-name=bgp-hub-tunnel1-to-onprem \
  --project=$HUB_PROJECT_ID \
  --region=$REGION
gcloud compute routers remove-interface hub-router1 \
  --interface-name=if-hub-tunnel0-to-onprem \
  --project=$HUB_PROJECT_ID \
  --region=$REGION
gcloud compute routers remove-interface hub-router1 \
  --interface-name=if-hub-tunnel1-to-onprem \
  --project=$HUB_PROJECT_ID \
  --region=$REGION
gcloud compute routers remove-bgp-peer onprem-router1 \
  --peer-name=bgp-onprem-tunnel0-to-hub \
  --project=$ONPREM_PROJECT_ID \
  --region=$REGION
gcloud compute routers remove-bgp-peer onprem-router1 \
  --peer-name=bgp-onprem-tunnel1-to-hub \
  --project=$ONPREM_PROJECT_ID \
  --region=$REGION
gcloud compute routers remove-interface onprem-router1 \
  --interface-name=if-onprem-tunnel0-to-hub \
  --project=$ONPREM_PROJECT_ID \
  --region=$REGION
gcloud compute routers remove-interface onprem-router1 \
  --interface-name=if-onprem-tunnel1-to-hub \
  --project=$ONPREM_PROJECT_ID \
  --region=$REGION

# Delete VPN tunnels
gcloud compute vpn-tunnels delete hub-tunnel0 \
  --project=$HUB_PROJECT_ID \
  --region=$REGION
gcloud compute vpn-tunnels delete hub-tunnel1 \
  --project=$HUB_PROJECT_ID \
  --region=$REGION
gcloud compute vpn-tunnels delete onprem-tunnel0 \
  --project=$ONPREM_PROJECT_ID \
  --region=$REGION
gcloud compute vpn-tunnels delete onprem-tunnel1 \
  --project=$ONPREM_PROJECT_ID \
  --region=$REGION

# Delete Cloud Routers in on-premise and hub networks
gcloud compute routers delete hub-router1 \
  --project=$HUB_PROJECT_ID \
  --region=$REGION
gcloud compute routers delete onprem-router1 \
  --project=$ONPREM_PROJECT_ID \
  --region=$REGION

# Delete VPN gateways
gcloud compute vpn-gateways delete hub-vpn-gw1 \
  --project=$HUB_PROJECT_ID \
  --region=$REGION
gcloud compute vpn-gateways delete onprem-vpn-gw1 \
  --project=$ONPREM_PROJECT_ID \
  --region=$REGION

# Delete VPC peering
gcloud compute networks peerings delete hub-to-spoke \
  --project=$HUB_PROJECT_ID \
  --network=$HUB_NETWORK_NAME
gcloud compute networks peerings delete spoke-to-hub \
  --project=$SPOKE_PROJECT_ID \
  --network=$SPOKE_NETWORK_NAME

# Delete firewall rules
gcloud compute firewall-rules delete onprem-network-allow-ssh-icmp \
  --project=$ONPREM_PROJECT_ID
gcloud compute firewall-rules delete hub-network-allow-ssh-icmp \
  --project=$HUB_PROJECT_ID
gcloud compute firewall-rules delete spoke-network-allow-ssh-icmp \
  --project=$SPOKE_PROJECT_ID
```

To delete the BIND DNS server, go to Solutions → Solution deployments, select the deployment, and click “Delete”.
After that, you can continue deleting the VPC networks and projects:

```sh
# Delete on-premise VPC network
gcloud compute networks subnets delete $ONPREM_SUBNET_NAME \
  --project=$ONPREM_PROJECT_ID \
  --region=$REGION
gcloud compute networks delete $ONPREM_NETWORK_NAME \
  --project=$ONPREM_PROJECT_ID

# Delete hub VPC network
gcloud compute networks subnets delete $HUB_SUBNET_NAME \
  --project=$HUB_PROJECT_ID \
  --region=$REGION
gcloud compute networks delete $HUB_NETWORK_NAME \
  --project=$HUB_PROJECT_ID

# Delete spoke VPC network
gcloud compute networks subnets delete $SPOKE_SUBNET_NAME \
  --project=$SPOKE_PROJECT_ID \
  --region=$REGION
gcloud compute networks delete $SPOKE_NETWORK_NAME \
  --project=$SPOKE_PROJECT_ID

# Delete projects
gcloud projects delete $ONPREM_PROJECT_ID
gcloud projects delete $HUB_PROJECT_ID
gcloud projects delete $SPOKE_PROJECT_ID
```

Further Reading

To ensure successful DNS and VPC peering connections between hub and spoke networks, check out this article: Transit Network.
A visual guide on how to set up a BIND DNS server in Google Cloud: YouTube.
Infrastructure as Code: Terraform code is available in this GitHub repository.