BigQuery Multi-Project Service Account

Introduction

This guide walks through the process of configuring a Google Cloud service account to access multiple BigQuery projects. This setup is particularly useful for data engineering workflows that require querying or manipulating data across different projects within your organization's Google Cloud environment.

Prerequisites

Before you begin, make sure you have:

  • A Google Cloud account with access to at least two projects

  • Permission to create service accounts in your primary project (for example, the Service Account Admin role)

  • Permission to grant IAM roles in each additional project

Step-by-Step Configuration

1. Create a Service Account

First, you'll need to create a service account in your primary Google Cloud project:

  1. Navigate to the Google Cloud Console and select your primary project from the project dropdown

  2. Go to IAM & Admin > Service Accounts

  3. Click Create Service Account

  4. Enter the following details:

    • Service account name: bq-multi-project-sa (or your preferred name)

    • Service account ID: This will auto-generate based on the name

    • Description: "Service account for accessing multiple BigQuery projects"

  5. Click Create and Continue

2. Assign Roles in the Primary Project

Assign the necessary BigQuery roles to your service account in the primary project:

  1. On the "Grant this service account access to project" screen, click Add Role and add the following roles:

    • BigQuery Data Editor

    • BigQuery Job User

  2. Click Continue

  3. Click Done to complete the service account creation

3. Create and Download the Service Account Key

Generate a key file for authentication:

  1. From the Service Accounts list, click on your newly created service account

  2. Navigate to the Keys tab

  3. Click Add Key > Create new key

  4. Select JSON as the key type

  5. Click Create

  6. The key file will automatically download to your computer

  7. Store this key file securely, as it grants access to your Google Cloud resources; never commit it to version control
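Once the key is downloaded, you can use it to query tables across projects. The sketch below builds the fully-qualified table reference BigQuery expects; the commented-out section shows the client calls, which require the google-cloud-bigquery package and a real key file. All project, dataset, table, and filename values here are placeholders.

```python
# Sketch: build a cross-project query using the key from step 3.
# Project, dataset, table, and key-file names are placeholders.

def qualified_table(project: str, dataset: str, table: str) -> str:
    """Return the fully-qualified `project.dataset.table` reference."""
    return f"`{project}.{dataset}.{table}`"

def count_rows_sql(project: str, dataset: str, table: str) -> str:
    """SQL counting rows in a table that may live in another project."""
    return f"SELECT COUNT(*) AS n FROM {qualified_table(project, dataset, table)}"

sql = count_rows_sql("raw-marketing-project-123", "google_ads", "campaigns")
print(sql)

# With google-cloud-bigquery installed, the query runs like this:
#
#   from google.cloud import bigquery
#   from google.oauth2 import service_account
#
#   creds = service_account.Credentials.from_service_account_file(
#       "bq-multi-project-sa.json"
#   )
#   # Jobs run (and are billed) in the project passed here, even when
#   # the query reads tables that live in another project.
#   client = bigquery.Client(project="primary-project-id", credentials=creds)
#   rows = client.query(sql).result()
```

Note that the project passed to the client is where query jobs execute and are billed; the service account only needs Job User there, plus data access in each source project.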

4. Grant Access to Additional Projects

Now, you need to grant this service account access to your additional BigQuery projects:

  1. Navigate to the Google Cloud Console

  2. Select the second project where you want to grant access

  3. Go to IAM & Admin > IAM

  4. Click Grant Access

  5. In the "New principals" field, enter the service account email (it should look like bq-multi-project-sa@your-primary-project-id.iam.gserviceaccount.com)

  6. Click Add another role and add the following roles:

    • BigQuery Data Editor

    • BigQuery Job User

  7. Click Save

Repeat steps 1-7 for each additional project that the service account needs to access.
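If you have many projects, the console clicks above get tedious. The sketch below generates the equivalent gcloud commands for each (project, role) pair; the service account email and project IDs are placeholders, and the printed commands should be run in a shell where gcloud is authenticated with sufficient permissions.

```python
# Sketch: print the gcloud commands that replicate steps 1-7 for each
# additional project. Email and project IDs below are placeholders.
SA_EMAIL = "bq-multi-project-sa@primary-project-id.iam.gserviceaccount.com"
PROJECTS = ["raw-marketing-project-123", "raw-sales-project-456"]
ROLES = ["roles/bigquery.dataEditor", "roles/bigquery.jobUser"]

def grant_commands(sa_email: str, projects: list, roles: list) -> list:
    """One add-iam-policy-binding command per (project, role) pair."""
    return [
        f"gcloud projects add-iam-policy-binding {project} "
        f"--member=serviceAccount:{sa_email} --role={role}"
        for project in projects
        for role in roles
    ]

for cmd in grant_commands(SA_EMAIL, PROJECTS, ROLES):
    print(cmd)
```

roles/bigquery.dataEditor and roles/bigquery.jobUser are the IAM identifiers for the BigQuery Data Editor and BigQuery Job User roles used throughout this guide.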

Setting Up dbt™ with BigQuery to Read and Write Across Different Projects

With our BigQuery service account now configured for multi-project access, we need to set up our dbt project to leverage these cross-project capabilities.

This configuration will enable our data transformation workflows to read source data from one BigQuery project and write the transformed results to another project, all while maintaining a clean, maintainable codebase.

What We'll Configure

In this section, we will:

  1. Configure source definitions to explicitly reference external source projects

  2. Establish proper model configurations to control where transformed data is written

1. Source Definitions with Explicit Project References

When working with data from different BigQuery projects, you must specify the source project ID in your source definitions.

sources:
  - name: marketing_data
    database: raw-marketing-project-123  # Source project ID goes here
    schema: google_ads
    tables:
      - name: campaigns
      - name: ad_groups
  
  - name: sales_data
    database: raw-sales-project-456  # Another source project ID
    schema: transactions
    tables:
      - name: orders
      - name: line_items

The database parameter in BigQuery corresponds to the project ID, so {{ source('marketing_data', 'campaigns') }} compiles to raw-marketing-project-123.google_ads.campaigns.

2. Control Where dbt™ Models are Written

To specify which project and schema a model should be written to, use the config block at the top of your model file. In BigQuery, database maps to the target project and schema maps to the target dataset.

{{
  config(
    materialized = 'table',
    schema = 'marketing_analytics',
    database = 'analytics-target-project-789',
    partition_by = {
      "field": "date_day",
      "data_type": "date"
    }
  )
}}

WITH campaign_data AS (
  SELECT * FROM {{ source('marketing_data', 'campaigns') }}
)

-- Transform and aggregate your data
SELECT
  date_day,
  campaign_id,
  campaign_name,
  SUM(impressions) AS total_impressions,
  SUM(clicks) AS total_clicks
FROM campaign_data
GROUP BY 1, 2, 3

Set Default Project-Level Destinations

For larger projects, set default destinations by model category in your dbt_project.yml file. With config-version: 2, the + prefix marks a key as a configuration rather than a folder name.

dbt_project.yml
name: 'project_analytics'
version: '1.0.0'
config-version: 2

...

# Project-specific database destinations
models:
  cross_project_analytics:
    # Default project for all models
    +database: analytics-target-project-789

    # Override for specific model categories
    staging:
      +database: staging-project-567
      +schema: stg
      +materialized: view

    marts:
      marketing:
        +database: marketing-analytics-project-345
        +schema: marketing

      finance:
        +database: finance-analytics-project-901
        +schema: finance
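One caveat: by default, dbt's generate_schema_name macro prepends the target schema from your profile to any custom schema (producing, for example, dbt_prod_marketing instead of marketing). To write to the literal schema names configured above, override the macro in your project's macros directory, following the pattern from dbt's documentation:

```sql
-- macros/generate_schema_name.sql
-- Use the custom schema name as-is instead of prefixing the target schema.
{% macro generate_schema_name(custom_schema_name, node) -%}
    {%- if custom_schema_name is none -%}
        {{ target.schema }}
    {%- else -%}
        {{ custom_schema_name | trim }}
    {%- endif -%}
{%- endmacro %}
```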
