Making API Calls to a Salesforce server using a Static IP from a serverless environment in GCP.
Prerequisites:
- Basic understanding of cloud deployments.
- A Google Cloud Console project.
- Knowledge of deploying Node.js app on GCP App Engine.
- Knowledge of deploying Infrastructure on GCP using Terraform.
- Setting up an Nginx Reverse Proxy Server.
This article illustrates how we set up a GCP infrastructure that enabled our application deployed on App Engine to make API calls to a server that required a static IP for whitelisting.
We needed to deploy this infrastructure because App Engine is a highly scalable, fully managed serverless platform, but it does not provide static IP addresses for outbound API calls.
We had an application deployed on App Engine that needed to make API calls to a Salesforce server residing at an on-premises location. The requirement was that the Salesforce server be reached only from a whitelisted static IP address. The API was a traditional SOAP API request which returned data specific to each user that logged into the application.
We needed three significant components to make the SOAP API requests successfully.
- A server that was set up as a reverse proxy that routed requests to the correct destination.
- A NAT Gateway with a Static External IP address that made it possible to make API requests over the internet.
- A VPC Connector that allowed communication between our serverless application and our VPC network.
The final architecture would look like this:
The Compute VM, Cloud NAT and VPC connector resided in the VPC network we created in the us-central1 region.
We used Terraform and Google Cloud Foundation Toolkit scripts to deploy our infrastructure. Terraform is one of the most popular Infrastructure as Code (IaC) tools, and using it has many advantages, which are extensively listed here.
The significant advantage for us was maintaining a single repository to deploy infrastructure to both the Dev and Production environments, which saved a lot of time and effort. We followed the steps below to create the architecture shown above.
Configuring the Network:
The first step was to create the VPC network and the sub-network where the NAT Gateway and Compute VM would reside. Below is the HCL in the network.tf file that creates a VPC network and an associated sub-network:
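The original HCL is not reproduced here, so the following is a minimal sketch of what network.tf could contain. The resource names, CIDR range, and variable names are illustrative assumptions, not the article's actual values:

```hcl
# network.tf — minimal sketch; names and the CIDR range are illustrative.
resource "google_compute_network" "vpc_network" {
  name                    = "proxy-network"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "vpc_subnetwork" {
  name          = "proxy-subnet"
  ip_cidr_range = "10.10.0.0/24"
  region        = "us-central1"
  network       = google_compute_network.vpc_network.id

  # Lets VMs without external IPs reach Google APIs and services.
  private_ip_google_access = true
}
```

Disabling `auto_create_subnetworks` gives a custom-mode VPC, so the subnet is created explicitly in the region the article uses (us-central1).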
Creating the Cloud NAT Router:
Since our Compute VM had to be an internal machine with no access to the internet and no external IP address, we had to configure a Cloud NAT Router that would make the SOAP API request via the internet to the Salesforce server.
Using Cloud NAT, instead of having the Compute VM make the requests directly over the internet, gave us a two-fold advantage:
- We eliminated the need for individual VMs to have their own external IP addresses and be publicly reachable, and instead controlled access to the VMs using Google's Identity-Aware Proxy service.
- We had a single external IP address attached to the Cloud NAT gateway, which was whitelisted on the Salesforce server.
Below is the HCL in the network.tf file that creates the Cloud NAT with a Router:
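Since the original snippet is not reproduced here, below is a hedged sketch of the Cloud Router and NAT configuration. The resource names are assumptions; the `nat_ips` parameter, which the article highlights, receives a reserved static external address:

```hcl
# network.tf (continued) — sketch of the Cloud Router and NAT gateway.
# Reserve the static external IP that will be whitelisted on the
# Salesforce side.
resource "google_compute_address" "nat_static_ip" {
  name   = "nat-static-ip"
  region = "us-central1"
}

resource "google_compute_router" "nat_router" {
  name    = "nat-router"
  region  = "us-central1"
  network = google_compute_network.vpc_network.id # defined in network.tf
}

resource "google_compute_router_nat" "nat_gateway" {
  name                               = "nat-gateway"
  router                             = google_compute_router.nat_router.name
  region                             = "us-central1"
  nat_ip_allocate_option             = "MANUAL_ONLY"
  nat_ips                            = [google_compute_address.nat_static_ip.self_link]
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}
```

`MANUAL_ONLY` allocation is what forces Cloud NAT to use the reserved address rather than auto-allocating an ephemeral one.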
The parameter nat_ips is where we assign the Static External IP address to the Cloud NAT Gateway.
Instantiating the Compute VM:
After a lot of discussions and brainstorming within our team, we decided to set up an Nginx Reverse Proxy server on a Compute VM of machine type e2-small (2 vCPUs, 2 GB memory) for our Dev environment.
We chose an e2-standard-2 (2 vCPUs, 8 GB memory) for our Production environment, which was big enough to handle the daily load of about 1500–2000 users logging into our application per hour.
This machine needed to have a fixed internal IP address that our application deployed on App Engine would call to make the SOAP API request. Below is the HCL in the main.tf file that instantiates the said Compute VM:
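The parameter names the article mentions (`startup_script`, `static_ips`) suggest the Cloud Foundation Toolkit VM modules rather than the raw `google_compute_instance` resource. A sketch under that assumption, with illustrative names, paths, and the internal IP as placeholders:

```hcl
# main.tf — sketch using the Cloud Foundation Toolkit VM modules.
module "instance_template" {
  source       = "terraform-google-modules/vm/google//modules/instance_template"
  project_id   = var.project_id
  region       = "us-central1"
  subnetwork   = google_compute_subnetwork.vpc_subnetwork.self_link # from network.tf
  machine_type = "e2-small" # e2-standard-2 in Production

  # Bash script that installs and configures Nginx as a reverse proxy,
  # so the proxy is active as soon as the instance is provisioned.
  startup_script = file("${path.module}/scripts/nginx-proxy.sh")
}

module "compute_instance" {
  source            = "terraform-google-modules/vm/google//modules/compute_instance"
  region            = "us-central1"
  subnetwork        = google_compute_subnetwork.vpc_subnetwork.self_link
  instance_template = module.instance_template.self_link

  # Fixed internal IP that the App Engine application will call.
  static_ips = ["10.10.0.10"]
}
```

Note that the instance template deliberately attaches no external IP; outbound traffic leaves through the Cloud NAT gateway configured above.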
The parameter startup_script contains the bash script that sets up the VM as an Nginx reverse proxy, so the reverse proxy server is active as soon as the instance is provisioned.
The parameter static_ips is where we assign the Static Internal IP address to the Compute VM.
Configuring the VPC Connector:
We also needed to configure a Serverless VPC Connector that acted as a bridge between our Application deployed on App Engine and the Compute VM that was set up in a VPC network on GCP. VPC Connector is a Google-managed service that enables you to connect from a serverless environment on GCP directly to your VPC network. Below is the HCL in the network.tf file that configures the VPC Connector:
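As with the other snippets, the original HCL is not shown here; a minimal sketch, assuming an illustrative connector name and an unused /28 range, could look like this:

```hcl
# network.tf (continued) — sketch of the Serverless VPC Access connector.
resource "google_vpc_access_connector" "appengine_connector" {
  name          = "appengine-connector"
  region        = "us-central1"
  network       = google_compute_network.vpc_network.name # from network.tf
  ip_cidr_range = "10.8.0.0/28" # must be an unused /28 that does not overlap existing subnets
}
```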
To enable our Node.js application to route its API calls through this VPC Connector, the only addition needed was the configuration below in our app.yaml deployment file.
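The app.yaml addition takes the following shape, with the PROJECT_ID and VPC_CONNECTOR_NAME placeholders left as in the article:

```yaml
# app.yaml — route outbound traffic from App Engine through the connector.
vpc_access_connector:
  name: projects/PROJECT_ID/locations/us-central1/connectors/VPC_CONNECTOR_NAME
```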
We replaced PROJECT_ID with our project id and VPC_CONNECTOR_NAME with the name of the VPC connector we created above.
In this way, we could successfully make the SOAP API requests to the Salesforce server from our serverless application. One can follow a similar procedure to make REST API requests from the serverless application if the server being hit needs a Static IP for whitelisting.
The use of Terraform scripts allowed us to swiftly get this infrastructure up and running and also avoided a lot of rework in deploying the same infrastructure to different environments individually.
The only drawback of this approach is that it does not scale automatically as traffic to the application increases; we would need to add instances manually to handle the additional load.
In the next article, I will describe how we overcame this drawback by slightly tweaking the architecture to auto-scale based on the traffic using an Internal TCP Load Balancer and a Managed Instance Group.
The link to all the HCL scripts and a step-by-step guide to setting up the infrastructure can be accessed here.