Back to Blogs

GCP: Migrating Cloud SQL Between Projects

Introduction

There are many enterprise scenarios where you need to clone a Google Cloud SQL instance into an entirely separate GCP project: bootstrapping a mirrored staging environment, running disaster recovery drills, or isolating client data behind dedicated project boundaries. Cross-project database migration is a fundamental Cloud Engineering capability.

While Google Cloud does not currently support direct "one-click" instance cloning across project boundaries, you can achieve the same result with a straightforward two-step process: take a backup of the source instance, then restore it into an identically configured target instance in the destination project.

The Challenge: Navigating Cross-Project Boundaries

The core challenge lies in GCP's rigid Identity and Access Management (IAM) perimeter. Cloud SQL backups are fundamentally tied to the lifecycle of their source instance and project. You cannot simply point a new instance in a different project at a bucket containing a database dump without carefully choreographing permissions and API calls.

Furthermore, minimizing downtime during this process requires executing these operations swiftly, often through automation, to ensure the destination database perfectly matches the state of the source at the time of the cutover.

Common Pitfall: Many engineers attempt to execute a `pg_dump` or `mysqldump` and pipe it over the network to the new project. For multi-terabyte databases, this approach is catastrophically slow, prone to network interruptions, and locks tables for unacceptable durations.

[Insert Image: High-level architecture diagram illustrating the IAM boundaries between Project A (Source) and Project B (Target), highlighting the cross-project Cloud SQL Admin API interaction.]

The Solution/Process: The Native Restoration Strategy

This guide walks through how to avoid network-bound dumps and instead clone Cloud SQL instances across projects using GCP's native backup-and-restore mechanism, the Google Cloud CLI (gcloud), and narrowly scoped Service Accounts.

Phase 1: Secure the Source Cloud SQL Backup

Because cross-project cloning is not a native one-click feature, the first step is to trigger an on-demand backup of the source instance:

gcloud sql backups create --instance=[SOURCE_INSTANCE] --project=[SOURCE_PROJECT]

Once it completes, list the instance's backups to retrieve the ID of the backup you just created; you will need it for the restore:

gcloud sql backups list --instance=[SOURCE_INSTANCE] --project=[SOURCE_PROJECT]
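For scripted migrations, the backup ID can be captured into a shell variable instead of copied by hand. A minimal sketch, using gcloud's built-in --filter, --sort-by, and --format flags (the field names id, status, and windowStartTime come from the Cloud SQL BackupRun resource); replace the bracketed placeholders before running:

```shell
# Capture the ID of the most recent successful backup into a variable.
BACKUP_ID=$(gcloud sql backups list \
  --instance=[SOURCE_INSTANCE] \
  --project=[SOURCE_PROJECT] \
  --filter="status=SUCCESSFUL" \
  --sort-by=~windowStartTime \
  --limit=1 \
  --format="value(id)")
echo "Restoring from backup run ${BACKUP_ID}"
```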

Pro-Tip: Never use a scheduled automated backup ID for a live migration cutover. Always manually execute a fresh create command to ensure your target database reflects the absolute latest transaction state before you route traffic to it.

Phase 2: Provision the Target Cloud SQL Instance

Before restoring the backup, you need a target Cloud SQL instance provisioned in the destination project to receive the data. Its configuration should match the source, most importantly the database version:

gcloud sql instances create [TARGET_INSTANCE] \
    --database-version=[SOURCE_INSTANCE_DATABASE_VERSION] \
    --cpu=2 --memory=8GiB --region=us-west1 \
    --root-password=[PASSWORD] \
    --project=[TARGET_PROJECT]

CRITICAL: Make sure that the database version (e.g., POSTGRES_14) exactly matches that of the source instance! The restoration will hard-fail otherwise.
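Rather than guessing, you can look the source version up and reuse it verbatim. A small sketch, assuming the databaseVersion field exposed by gcloud sql instances describe:

```shell
# Read the source instance's database version and reuse it when
# creating the target, so the two cannot drift apart.
DB_VERSION=$(gcloud sql instances describe [SOURCE_INSTANCE] \
  --project=[SOURCE_PROJECT] \
  --format="value(databaseVersion)")
echo "Provisioning target with --database-version=${DB_VERSION}"
```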

Phase 3: Cross-Project IAM Configuration

To automate this cross-boundary transfer, use a dedicated Service Account (SA). The SA lives in the target project but needs read access granted in the source project.

1. Create the Service Account in the TARGET_PROJECT:

gcloud iam service-accounts create [SERVICE_ACCOUNT] --project=[TARGET_PROJECT]

2. Grant Viewer permissions in the SOURCE_PROJECT so the target SA can see and read the source backup:

gcloud projects add-iam-policy-binding [SOURCE_PROJECT] \
   --member="serviceAccount:[SERVICE_ACCOUNT]@[TARGET_PROJECT].iam.gserviceaccount.com" \
   --role="roles/cloudsql.viewer"

3. Grant Cloud SQL Admin permissions in the TARGET_PROJECT so the SA is authorized to restore the backup over the new instance:

gcloud projects add-iam-policy-binding [TARGET_PROJECT] \
   --member="serviceAccount:[SERVICE_ACCOUNT]@[TARGET_PROJECT].iam.gserviceaccount.com" \
   --role="roles/cloudsql.admin"
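Note that a plain `gcloud auth print-access-token` returns a token for your currently logged-in user, not for this SA. To actually call the API as the Service Account, one option is impersonation; this sketch assumes your user holds roles/iam.serviceAccountTokenCreator on the SA:

```shell
# Mint a short-lived access token as the service account.
SA_EMAIL="[SERVICE_ACCOUNT]@[TARGET_PROJECT].iam.gserviceaccount.com"
TOKEN=$(gcloud auth print-access-token --impersonate-service-account="${SA_EMAIL}")
# Use ${TOKEN} in the Authorization header of the Phase 4 restore call.
```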

Phase 4: Executing the Cross-Project Restore

Now you will restore the source project's backup directly into the target project's instance. Because this step has no dedicated gcloud command, you call the Cloud SQL Admin REST API directly, authenticating with an access token:

curl -X POST \
 -H "Authorization: Bearer $(gcloud auth print-access-token)" \
 -H "Content-Type: application/json" \
 "https://sqladmin.googleapis.com/v1/projects/[TARGET_PROJECT]/instances/[TARGET_INSTANCE]/restoreBackup" \
 --data-raw '{
   "restoreBackupContext": {
   "backupRunId": "[SOURCE_BACKUP_ID]",
     "project": "[SOURCE_PROJECT]",
     "instanceId": "[SOURCE_INSTANCE]"
   }
 }'
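The restore runs asynchronously, so a script should wait for it to finish before cutting traffic over. A sketch that polls the target instance's operation log, assuming the RESTORE_VOLUME operation type and DONE status values from the Cloud SQL Admin API:

```shell
# Poll until the most recent restore operation on the target completes.
while true; do
  STATUS=$(gcloud sql operations list \
    --instance=[TARGET_INSTANCE] \
    --project=[TARGET_PROJECT] \
    --filter="operationType=RESTORE_VOLUME" \
    --sort-by=~startTime --limit=1 \
    --format="value(status)")
  [ "${STATUS}" = "DONE" ] && break
  echo "Restore status: ${STATUS:-PENDING}; retrying in 15s..."
  sleep 15
done
```

Alternatively, if you capture the operation name from the REST response, gcloud sql operations wait blocks on that specific operation.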

[Insert Image: A sequential diagram visualizing the REST API call flow: Authenticating as the SA, hitting the restoreBackup endpoint, and the GCP internal plane migrating the data.]

Key Takeaways

  • Native one-click instance cloning across projects is not currently supported; the process relies on a backup from Project A restored into Project B.
  • Avoid traditional logical dumps (like pg_dump) for large migrations due to network latency and excessive downtime.
  • The Target Instance must be provisioned with the exact equivalent Database Version as the Source Instance.
  • Strict IAM choreography is required: A Service Account in the target project needs cross-project Cloud SQL Viewer permissions to read the source data payload.

Conclusion

By combining Cloud SQL's native Backup and Restore API with precise cross-project IAM bindings, you can rapidly and securely clone large databases across strict project boundaries. Whether executed manually for a one-off migration or scripted for recurring disaster recovery drills, this approach avoids slow logical dumps and dramatically reduces cutover downtime. To avoid losing data, freeze writes to the source before taking the final backup, since transactions committed afterwards will not be in the restored instance.

Further Reading