I’ve been studying for the Google Cloud Associate Cloud Engineer (ACE) exam, but there’s a big difference between watching a video course and actually shipping code. Today, I decided to bridge that gap. I took a local WordPress stack, containerized it, automated the build, and deployed it to Google Cloud.

It didn’t go smoothly. In fact, it broke at almost every stage. And that’s exactly how I learned.

Prerequisites

To follow this lab, I used:

  • Local: Ubuntu Server, Docker Engine & Vim
  • Cloud: A GCP Project with Billing enabled
  • Tooling: Google Cloud SDK (gcloud) installed

The Architecture

The goal was to move from a stateful local stack to a stateless serverless architecture.

graph LR
    User((User)) -->|HTTPS| Service[Cloud Run Service<br/>WordPress :80]
    subgraph Google Cloud Project
        Service -->|Unix Socket| Proxy[Auth Proxy]
        Proxy -->|TCP :3306| DB[(Cloud SQL MySQL)]
    end

Hurdle 1: The “Mirror” Trap

After getting my CD pipeline green via GitHub Actions, I tried to deploy my image directly from Docker Hub.

I encountered this error immediately:

Error

Image 'mirror.gcr.io/droid_eleven/my-custom-wordpress:v1' not found.

The Issue

Google Cloud Run attempts to route Docker Hub requests through mirror.gcr.io to cache images. This mirror works great for official images (like ubuntu), but it fails for custom or private user repositories because it can’t authenticate on your behalf.

The Fix (The Standard Pattern)

Instead of fighting the mirror, the robust solution is to push the image to Google Artifact Registry, which is native to the platform.

1. One-time Setup (Create Registry & Auth)

gcloud artifacts repositories create devops-repo \
  --repository-format=docker \
  --location=us-central1
 
# Configure Docker to trust Google's registry
gcloud auth configure-docker us-central1-docker.pkg.dev

2. Tag & Push to Google

docker tag droideleven/my-custom-wordpress:v1 \
  us-central1-docker.pkg.dev/MY_PROJECT/devops-repo/blog:v1
 
docker push us-central1-docker.pkg.dev/MY_PROJECT/devops-repo/blog:v1

3. Deploy

gcloud run deploy my-blog \
  --image=us-central1-docker.pkg.dev/MY_PROJECT/devops-repo/blog:v1 \
  --allow-unauthenticated \
  --region=us-central1

Hurdle 2: The Port Mismatch

Once the image downloaded, the deployment crashed again with a health check failure:

Error

The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable.

The Issue

Cloud Run expects containers to listen on the port it passes in via the PORT environment variable, which is 8080 by default. However, the standard WordPress image listens on Port 80. Google was knocking on the door at 8080, getting no answer, and killing the container on the assumption it was broken.

The Fix

I didn’t need to rebuild the image. I simply edited the Cloud Run revision settings to change the Container Port from 8080 to 80.
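The same change can be made from the CLI instead of the console; the `--port` flag tells Cloud Run which container port to send traffic (and health checks) to. A minimal sketch, assuming the service and region from earlier:

```shell
# Point Cloud Run's traffic and health checks at port 80,
# where the stock WordPress image actually listens.
gcloud run services update my-blog \
  --region=us-central1 \
  --port=80
```

Either way, Cloud Run rolls out a new revision with the updated port; no image rebuild is required.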


Hurdle 3: The “Stateless” Reality (The Database)

I successfully deployed the app, but I couldn’t just run a database inside the container like I did on my laptop. Cloud Run containers are ephemeral—if the container crashes or scales down, the data inside it vanishes.

I had to spin up a managed Cloud SQL instance.

Step 1: Create the Database

I used the CLI to spin up a small MySQL 5.7 instance:

gcloud sql instances create my-sql-server \
  --database-version=MYSQL_5_7 \
  --tier=db-f1-micro \
  --region=us-central1 \
  --root-password=supersecret
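One detail the later deploy assumes is that a database named `wordpress` exists inside the instance. The official WordPress image will usually create it on first run when connecting as root, but creating it explicitly avoids a failed install if permissions are tighter:

```shell
# Create the application database inside the new Cloud SQL instance
gcloud sql databases create wordpress \
  --instance=my-sql-server
```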

Security Note

In a real production environment, never pass passwords in plain text on the CLI; they end up in your shell history and process listings. Use GCP Secret Manager and reference the secret version in your deploy command.
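As a sketch of that pattern (the secret name `db-password` is my own placeholder): you store the password once in Secret Manager, then have Cloud Run inject it at runtime instead of baking it into the deploy command.

```shell
# One-time: store the password in Secret Manager
echo -n "supersecret" | gcloud secrets create db-password --data-file=-

# At deploy time: mount the latest secret version as an env var
gcloud run services update my-blog \
  --region=us-central1 \
  --set-secrets=WORDPRESS_DB_PASSWORD=db-password:latest
```

Note that the service's runtime service account also needs the Secret Manager Secret Accessor role on that secret, or the revision will fail to start.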

Step 2: Connect via Auth Proxy

This was the trickiest part. You don’t use an IP address to connect Cloud Run to Cloud SQL; you use a Unix Socket created by the Cloud SQL Auth Proxy. This keeps the connection secure without exposing the database to the public internet.

I redeployed the service with the necessary connection flags:

gcloud run deploy my-blog \
  --image=us-central1-docker.pkg.dev/MY_PROJECT/devops-repo/blog:v1 \
  --add-cloudsql-instances=MY_PROJECT:us-central1:my-sql-server \
  --set-env-vars "WORDPRESS_DB_HOST=localhost:/cloudsql/MY_PROJECT:us-central1:my-sql-server,WORDPRESS_DB_USER=root,WORDPRESS_DB_PASSWORD=supersecret,WORDPRESS_DB_NAME=wordpress"

Hurdle 4: The “Mixed Content” Bug

When the site finally loaded, it looked terrible. No CSS, no formatting—just raw Times New Roman text.

The Issue

This is the classic Mixed Content problem.

  1. The user connects to the Cloud Run URL via HTTPS.
  2. Cloud Run terminates TLS and talks to the container via plain HTTP.
  3. WordPress sees the HTTP traffic and generates all of its CSS and asset URLs as http://.
  4. The browser blocks these “insecure” styles on a secure page.

The Fix

I injected a PHP snippet via an environment variable to force WordPress to recognize the SSL connection:

gcloud run services update my-blog \
  --set-env-vars WORDPRESS_CONFIG_EXTRA="if (strpos(\$_SERVER['HTTP_X_FORWARDED_PROTO'], 'https') !== false) \$_SERVER['HTTPS']='on';"

Retrospective: What I’d Do Differently

This lab was a great exercise in “lifting and shifting” a Docker stack, but is Cloud Run the best home for WordPress?

  • Pros: It scales to zero (costs nothing when no one visits).
  • Cons: Plugins and themes that write to disk don’t work well, because the container filesystem is in-memory and ephemeral: anything written vanishes when the instance restarts.

Next Steps: In a production environment, I would mount a Cloud Storage FUSE bucket to handle user uploads (images/media), or I might switch to Google Compute Engine if I needed a simpler, stateful VM approach.
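For the uploads idea, Cloud Run now supports Cloud Storage volume mounts (Cloud Storage FUSE under the hood), so a bucket can be attached without rebuilding the image. A hedged sketch, with the bucket name as a placeholder:

```shell
# Mount a GCS bucket over wp-content/uploads so media survives restarts
gcloud run services update my-blog \
  --region=us-central1 \
  --add-volume=name=uploads,type=cloud-storage,bucket=MY_UPLOADS_BUCKET \
  --add-volume-mount=volume=uploads,mount-path=/var/www/html/wp-content/uploads
```

FUSE-backed storage is fine for media files but noticeably slower than local disk, so it suits uploads rather than the whole WordPress install.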

For now, the resources are deleted, but the architecture is documented.