Firebase Functions Cost Optimisation: A Practical Guide for Developers


Firebase is a powerful serverless platform, but without proper optimisation, Cloud Functions and Firestore can become costly. Many developers unknowingly incur high charges due to unnecessary function invocations, inefficient database queries, and excessive memory usage. This guide provides actionable steps to optimise Firebase costs while maintaining performance.

Understanding Firebase Billing for Cloud Functions

Before diving into optimisation strategies, let’s understand how Firebase bills for Cloud Functions. Firebase charges based on three main factors:

  • Invocation Count – Every function execution incurs a cost, whether it completes successfully or fails.
  • Execution Time – The longer a function runs, the higher the cost; execution is billed in 100-millisecond increments.
  • Memory Usage – Firebase charges for the memory allocated to a function, not the memory it actually uses, so higher allocations mean higher costs.
  • Networking – Outbound traffic, such as external API calls and data sent outside Google's network, is priced per GB of egress.
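To make the rounding and memory factors concrete, here is a small plain-JavaScript sketch of the billable compute for a single invocation, using the 100-millisecond rounding and allocated-memory (GB-second) model described above. It is illustrative only and uses no real Firebase prices:

```javascript
// Sketch: billable compute for one invocation under the model above.
// Duration is rounded UP to the next 100 ms; memory is what you
// allocated, not what you used.
function billableGbSeconds(durationMs, memoryMb) {
  const roundedMs = Math.ceil(durationMs / 100) * 100; // billed in 100 ms increments
  return (memoryMb / 1024) * (roundedMs / 1000);       // GB-seconds
}

// A 230 ms run on a 256MB function is billed as 300 ms at 0.25 GB:
console.log(billableGbSeconds(230, 256)); // 0.075 GB-seconds
```

Note how a function that finishes in 101 ms is billed the same as one that takes 200 ms, which is why shaving execution time pays off.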

1. Minimise Function Invocations

Optimising Cloud Function triggers to eliminate unnecessary invocations is the most direct way to cut costs: every invocation you avoid saves both execution time and resource charges.

Use Precise Database Triggers

  • Firestore: Avoid broad onWrite triggers, which fire on any document change, leading to excessive function calls. Instead, use more precise triggers like onUpdate or onDelete.
// Inefficient: Triggers on ANY change
exports.anyWrite = functions.firestore
  .document('users/{userId}')
  .onWrite((change, context) => { /* ... */ });

// Optimised: Triggers only when 'status' changes
exports.statusUpdate = functions.firestore
  .document('users/{userId}')
  .onUpdate((change, context) => {
    if (change.before.data().status !== change.after.data().status) {
      // Perform necessary actions
    }
  });

Debounce or Throttle Function Calls to Avoid Excessive Invocations

Often with Firebase, you may find that a user triggers a function multiple times when only one execution is actually needed. Whilst client-side debouncing is a useful guard against this, we can also implement debouncing and throttling on the server side.

What is Debouncing? Debouncing ensures that a function executes only after a specific period of inactivity. This prevents excessive executions from multiple rapid changes.

What is Throttling? Throttling limits the number of times a function can be executed within a certain timeframe, preventing overuse of resources.
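As a language-agnostic sketch (plain JavaScript, independent of Firebase), debounce and throttle boil down to a few lines:

```javascript
// Debounce: run fn only after `waitMs` of silence since the last call.
function debounce(fn, waitMs) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Throttle: run fn at most once per `intervalMs`.
function throttle(fn, intervalMs) {
  let last = 0;
  return (...args) => {
    const now = Date.now();
    if (now - last >= intervalMs) {
      last = now;
      fn(...args);
    }
  };
}
```

Wrapping an expensive handler in either of these collapses bursts of rapid calls into one execution, which is exactly the behaviour we want to reproduce server-side with Firestore flags below.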

Why? Rapid consecutive updates can trigger multiple function invocations, increasing costs. Implementing a flag in Firestore helps to debounce redundant calls.

exports.onDataChange = functions.firestore
  .document('myCollection/{docId}')
  .onUpdate(async (change, context) => {
    const docRef = change.after.ref;
    const docData = change.after.data();

    // Ignore the invocations caused by toggling the flag itself,
    // otherwise resetting the flag would re-trigger processing forever
    if (change.before.data().isProcessing !== docData.isProcessing) {
      return null;
    }

    if (docData.isProcessing) {
      return null; // Skip duplicate execution
    }

    await docRef.update({ isProcessing: true });

    try {
      // Processing logic
    } finally {
      await docRef.update({ isProcessing: false });
    }
  });

2. Use Concurrency to Avoid Cost Spikes

Concurrency in Firebase Functions allows a single function instance to handle multiple requests simultaneously, improving performance and efficiency by minimizing cold starts and optimizing resource utilization. This leads to faster response times, better scaling, and increased resilience to traffic spikes.

  • Why? Without concurrency, each function instance handles only one request at a time, so sudden traffic spikes force Firebase to spin up many new instances, leading to excessive cold starts and abrupt cost increases.
  • Fix: Set a higher concurrency limit (available on 2nd-generation Cloud Functions) so a single instance can serve multiple requests, reducing the need for Firebase to create additional instances. This results in better resource utilisation and prevents cost spikes during high-demand periods.
const { onRequest } = require('firebase-functions/v2/https');

// Concurrency is a 2nd-gen feature; 1st-gen runWith() does not support it
exports.myFunction = onRequest({ concurrency: 80 }, (req, res) => { /* ... */ });
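The effect on instance counts is easy to see with some back-of-the-envelope arithmetic (a plain-JavaScript sketch, not a Firebase API):

```javascript
// Instances needed to serve a given number of simultaneous requests.
function instancesNeeded(concurrentRequests, concurrencyPerInstance) {
  return Math.ceil(concurrentRequests / concurrencyPerInstance);
}

console.log(instancesNeeded(400, 1));  // 400: one request per instance (no concurrency)
console.log(instancesNeeded(400, 80)); // 5: with a concurrency limit of 80
```

Fewer instances means fewer cold starts during a spike, which is where the savings come from.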

3. Optimise Memory Usage

Split Functions to Reduce Memory Overhead

  • Why? Bundling all logic and dependencies into a single function increases memory usage and slows execution times. Splitting functions into smaller, more specialised functions helps reduce memory overhead and improves scalability.
  • Fix: Instead of having one large function that includes all business logic, break it into smaller functions that handle specific tasks. This ensures that each function only loads the dependencies it actually needs.

Example: Splitting Functions for Efficiency

Instead of loading all dependencies for every function, separate logic into different functions to optimise memory use.

const functions = require('firebase-functions');

// Inefficient: A single function handling multiple tasks with all dependencies
const admin = require('firebase-admin');
admin.initializeApp();
const { sendEmail } = require('./emailService');
const { processOrder } = require('./orderService');

exports.mainFunction = functions.https.onRequest(async (req, res) => {
  await sendEmail(req.body);
  await processOrder(req.body);
  res.send('Processed!');
});

// Optimised: Separate functions that each load only what they need
exports.sendEmailFunction = functions.https.onRequest(async (req, res) => {
  const { sendEmail } = require('./emailService');
  await sendEmail(req.body);
  res.send('Email Sent!');
});

exports.processOrderFunction = functions.https.onRequest(async (req, res) => {
  const { processOrder } = require('./orderService');
  await processOrder(req.body);
  res.send('Order Processed!');
});
  • Result: Each function only loads the dependencies it needs, reducing memory footprint and improving execution speed.

Monitor Memory Usage and Right-Size Instances

  • Why? Firebase charges based on allocated memory, not actual usage. Monitoring memory consumption helps identify over-allocated resources, leading to cost savings.
  • Fix: Use Firebase’s Cloud Monitoring tools to track memory usage over time and adjust function configurations accordingly.

Track Memory Usage with Cloud Monitoring

Google Cloud’s built-in monitoring allows developers to see memory usage trends and identify inefficiencies.

gcloud functions describe myFunction --format="value(availableMemoryMb)"

Right-Size Your Function Instances

By default, Firebase assigns 256MB of memory to Cloud Functions, but many workloads don't need this much; consider reducing the allocation for lighter tasks. Bear in mind that memory settings also affect cold start behaviour, so balance the allocation against actual usage patterns to minimise both cold starts and unnecessary costs.

// Reduce memory allocation for lightweight tasks
exports.optimizedFunction = functions.runWith({ memory: '128MB' })
  .https.onRequest((req, res) => { /* ... */ });

Regularly review memory usage metrics and adjust allocations to balance performance and cost savings.
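One way to act on those metrics is to pick the smallest standard tier that leaves some headroom above observed peak usage. The tier list and the 20% headroom in this sketch are illustrative assumptions, not Firebase requirements:

```javascript
// Standard-ish memory tiers (MB); adjust to the tiers your platform offers.
const TIERS_MB = [128, 256, 512, 1024, 2048, 4096, 8192];

// Smallest tier with `headroom` above the observed peak usage;
// falls back to the largest tier if nothing fits.
function rightSize(peakUsageMb, headroom = 1.2) {
  const needed = peakUsageMb * headroom;
  return TIERS_MB.find((t) => t >= needed) ?? TIERS_MB[TIERS_MB.length - 1];
}

console.log(rightSize(90));  // 128: a 90MB peak fits comfortably in the smallest tier
console.log(rightSize(300)); // 512: 300MB peak needs more than 256MB with headroom
```

Re-run this calculation whenever your monitoring shows peak usage drifting, rather than setting memory once and forgetting it.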

  • Why? Firebase charges for memory allocation, not just actual usage. Inefficient memory management can lead to higher-than-necessary costs.
  • Fix: Reduce memory footprint by streaming data instead of loading entire datasets, clearing unused variables, and leveraging garbage collection where possible.

Use Streaming Instead of Loading Large Datasets

Fetching and processing large datasets in memory can increase execution time and cost.

// Inefficient: Loads entire dataset into memory
const snapshot = await firestore.collection('users').get();
snapshot.forEach(doc => {
  processUser(doc.data());
});

// Optimised: Stream documents to process one at a time
const stream = firestore.collection('users').stream();
for await (const doc of stream) {
  processUser(doc.data());
}

Minimise Unused Dependencies

Avoid importing unnecessary libraries that increase memory allocation and execution time.

// Inefficient: Loads the entire Firebase Admin SDK
const admin = require('firebase-admin');
admin.initializeApp();

// Optimised: Load only the necessary services
const { initializeApp } = require('firebase-admin/app');
const { getFirestore } = require('firebase-admin/firestore');
initializeApp();
const firestore = getFirestore();

Why? Longer load and execution times increase billing costs, and cold starts slow down response times. Only import the modules you actually need, and where applicable consider using Firestore's REST API instead of gRPC to avoid loading the heavier gRPC dependencies.

4. Reduce Execution Time & Cold Starts

Carefully Consider Excessive Triggers

  • Why? Excessive triggers can lead to multiple function invocations, increasing execution time and cost. Functions that unnecessarily fan out (i.e., loop through and trigger multiple new functions) or that update a database and subsequently trigger more function executions can cause excessive cold starts.
  • Fix: Minimise cascading triggers by structuring database updates efficiently and ensuring functions do not create unintended feedback loops.

Avoid Function Fanning Out

Fanning out occurs when a function loops through multiple records and triggers a new function for each entry. This can create significant cold starts.

// Inefficient: Triggers multiple functions, causing cold starts
exports.fanOutTrigger = functions.firestore.document('orders/{orderId}')
  .onWrite(async (change, context) => {
    const ordersRef = firestore.collection('orders');
    const snapshot = await ordersRef.get();
    snapshot.forEach(doc => {
      triggerAnotherFunction(doc.data());
    });
  });

// Optimised: Batch processing instead of individual triggers
exports.batchedProcessing = functions.firestore.document('orders/{orderId}')
  .onWrite(async (change, context) => {
    const ordersRef = firestore.collection('orders');
    const snapshot = await ordersRef.get();
    const batch = firestore.batch();
    snapshot.forEach(doc => {
      batch.update(doc.ref, { processed: true });
    });
    await batch.commit();
  });

The shame is that fanning out can be an elegant pattern; unfortunately, it's rarely cost-efficient because of the cold-start overhead of all the extra invocations.

Prevent Trigger Loops

Functions that update Firestore documents can trigger other functions unintentionally, leading to recursive loops and excessive costs. For example, imagine a function that updates a lastUpdated timestamp in Firestore every time a document changes. If another function is triggered by updates to that document and modifies it again, this can create an endless loop of updates and function executions. This kind of runaway execution can rapidly inflate costs and consume resources unnecessarily. In fact, this is a leading cause of runaway costs in Firebase: https://flamesshield.com/blog/how-to-prevent-firebase-runaway-costs/

// Inefficient: Updates the document in a way that re-triggers the function
exports.triggerLoop = functions.firestore.document('users/{userId}')
  .onUpdate(async (change, context) => {
    await change.after.ref.update({ lastUpdated: Date.now() }); // This re-triggers the function
  });

// Optimised: Skip the invocation caused by our own timestamp write
exports.preventTriggerLoop = functions.firestore.document('users/{userId}')
  .onUpdate(async (change, context) => {
    const before = change.before.data();
    const after = change.after.data();
    if (before.lastUpdated !== after.lastUpdated) return null; // Prevent loop
    await change.after.ref.update({ lastUpdated: Date.now() });
  });

Reduce Cold Starts

Cold starts occur when Firebase needs to initialise a function from scratch, adding delay and increasing execution time (and, as a result, cost).

You can avoid this by using minInstances to keep a function warm and avoid global initialisations. However, note that setting minInstances is only effective for functions with consistent traffic. If a function is rarely invoked, setting minInstances may lead to unnecessary costs without providing tangible performance benefits.

Keep Warm vs. Concurrency: Which is Better?

  • Keep Warm (minInstances) ensures that at least one function instance is always running, reducing cold starts. This is useful for APIs with steady but infrequent traffic, ensuring consistent response times. However, it incurs a constant cost, even when no requests are being handled.
  • Concurrency allows a single function instance to handle multiple requests simultaneously, reducing the number of instances required during high-traffic periods. This helps prevent cost spikes during bursts of traffic but may not eliminate cold starts if traffic is sporadic.

Comparison:

| Feature              | Keep Warm (minInstances)       | Concurrency (concurrency)           |
|----------------------|--------------------------------|-------------------------------------|
| Best for             | Low but steady traffic         | High and variable traffic           |
| Cold start reduction | Yes                            | Partial (only during high load)     |
| Cost impact          | Higher baseline cost           | Lower cost but may scale up rapidly |
| Scalability          | Limited (predefined instances) | High (dynamically scales)           |
  • When to Use Keep Warm? If your function is used consistently and you want predictable performance.
  • When to Use Concurrency? If your function handles bursty traffic and needs efficient scaling.

By balancing these two approaches, you can optimise Firebase costs while maintaining responsiveness.
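As a toy illustration (plain JavaScript, not a Firebase API), the decision guidance above can be encoded as a tiny helper:

```javascript
// Heuristic from the comparison above: bursty traffic favours concurrency,
// low-but-steady traffic favours keeping an instance warm.
function chooseScalingStrategy({ steady, bursty }) {
  if (bursty) return 'concurrency';
  if (steady) return 'minInstances';
  return 'defaults'; // infrequent traffic: accept cold starts, pay nothing at idle
}

console.log(chooseScalingStrategy({ steady: true, bursty: false })); // 'minInstances'
console.log(chooseScalingStrategy({ steady: false, bursty: true })); // 'concurrency'
```

In practice the two options are not mutually exclusive; profiling your traffic pattern first is what makes either knob worth paying for.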

exports.myFunction = functions.runWith({ minInstances: 1 })
  .https.onRequest((req, res) => { /* ... */ });

5. Reducing Invocation Time

Why It Matters

  • The longer a function runs, the higher the cost. Since Firebase bills execution time in 100-millisecond increments, even slight inefficiencies can add up.
  • Reducing function execution time improves responsiveness and lowers billing costs.

Optimise Code Execution

  • Avoid unnecessary computations inside the function.
  • Use asynchronous operations efficiently to prevent blocking execution.
// Inefficient: Blocking execution (a synchronous heavy task ties up the instance)
exports.slowFunction = functions.https.onRequest(async (req, res) => {
  const result = computeHeavyTask();
  res.send(result);
});

// Optimised: Use async operations
exports.fastFunction = functions.https.onRequest(async (req, res) => {
  const result = await computeHeavyTaskAsync();
  res.send(result);
});

Reduce Firestore and Database Reads

  • Excessive Firestore reads slow down function execution and increase costs.
  • Fetch only necessary data and use indexing to speed up queries.

// Inefficient: Fetches entire collection
const users = await firestore.collection('users').get();

// Optimised: Use selective fields and indexing
const users = await firestore.collection('users').select('name', 'email').get();

Use Caching to Avoid Redundant Work

  • Store frequently accessed data in memory or a caching service to reduce processing time.
// Simple in-memory cache: persists per instance, reset on cold starts
let cache = {};

exports.cachedFunction = functions.https.onRequest(async (req, res) => {
  if (cache.data) {
    res.send(cache.data);
    return;
  }

  cache.data = await fetchData();
  res.send(cache.data);
});
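Note that an in-memory cache like the one above lives only as long as the instance and never expires. A minimal TTL wrapper (a hand-rolled sketch, not a Firebase or library API) keeps entries from going stale:

```javascript
// Minimal TTL cache: entries expire `ttlMs` after they are set.
function createTtlCache(ttlMs) {
  const store = new Map();
  return {
    get(key) {
      const entry = store.get(key);
      if (!entry || Date.now() - entry.at > ttlMs) return undefined;
      return entry.value;
    },
    set(key, value) {
      store.set(key, { value, at: Date.now() });
    },
  };
}

// Usage: cache expensive results for one minute per instance.
const ttlCache = createTtlCache(60_000);
ttlCache.set('data', { users: 42 });
// ttlCache.get('data') returns the value until the minute is up, then undefined
```

For data shared across instances, a service like Memorystore or a Firestore document with a timestamp plays the same role; the per-instance map is only a first line of defence.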

Final Takeaways

  • Reduce unnecessary function invocations using precise triggers, debouncing, and batching.
  • Minimise cold starts by optimising dependencies, setting minInstances, and using caching.
  • Optimise memory usage with efficient data handling, streaming, and manual garbage collection.
  • Optimise database usage with efficient queries, indexing, and selective field retrieval.
  • Reduce network costs by caching, compressing responses, and minimising API calls.
  • Continuously monitor function performance and cost metrics.

By following these best practices, Firebase developers can significantly reduce costs while maintaining high performance. However, cost optimisation is an ongoing process, and it's essential to continuously monitor Firebase usage, adapt to new features, and refine strategies as the platform evolves. 🚀

Ready to Get Started?

Don't get landed with a $7,000 bill. Get started with Flames Shield today.