A common requirement in our custom applications is some sort of an API. One reason is to allow others to talk to our application; another is for parts of our application to talk to each other. The latter scenario includes building a single-page application (SPA) where a client-side app written in Angular needs to talk to the back end, or a microservice-based application where different services need to talk to one another.
Most of us are familiar with building some sort of a REST API that receives requests and responds with JSON, a human-readable format. If your application (either client-side or server-side) is written in JavaScript, working with JSON is a piece of cake. Other languages have great support for working with JSON as well, such as the popular JSON.NET library for .NET projects. One aspect of this approach is that you have to use or create some sort of HTTP server to host the REST API. There are self-hosting options & techniques, but most of us just go with the HTTP server option.
I’m not knocking this approach… it’s not hard to implement, there’s tons of support for it and it works great. However, there is another option. Instead of creating a REST API service, you can use remote procedure calls (RPC). There are popular implementations out there, like Apache Thrift, with support for just about every language you can think of, including Node.js, JavaScript, C#, Java and so many others.
In this post, I want to show you another option that I’ve started using: gRPC with protocol buffers. Personally, I like this option because:
- there’s a tight coupling between the client and server with defined contracts
- it’s easy to create both the server and the client
- it’s crazy fast because all communication is encoded in a binary format and sent across the wire, and finally…
- you don’t have to write a lot of the plumbing you do with a REST API on the client / server, like HTTP requests, responses, headers, etc.
Overview - Protocol Buffers (protobuf)
Most of us recall using XML to serialize message requests, but XML payloads were big, bloated and slow to parse. Then JSON became a much more popular option because it was smaller, easier to work with and much faster to parse. However, it wasn’t fast enough. Google came up with a new format for its own needs, communication between index servers. This new technology was dubbed protocol buffers and, according to the Google overview of Protocol Buffers:
Protocol buffers were designed to solve many of these problems:
- New fields could be easily introduced, and intermediate servers that didn’t need to inspect the data could simply parse it and pass through the data without needing to know about all the fields.
- Formats were more self-describing, and could be dealt with from a variety of languages (C++, Java, etc.)
- In addition to being used for short-lived RPC (Remote Procedure Call) requests, people started to use protocol buffers as a handy self-describing format for storing data persistently (for example, in Bigtable).
- Server RPC interfaces started to be declared as part of protocol files, with the protocol compiler generating stub classes that users could override with actual implementations of the server’s interface.
The latest version of Protocol Buffers (commonly referred to as protobuf), version 3, greatly simplifies the language and makes it available to many more languages, including Java, Python, JavaScript, Objective-C & C#.
Why Consider Protobuf over XML or JSON?
Good question. When you serialize / encode a protobuf, it’s converted to a binary format, which means it’s significantly smaller than even JSON. In addition, it’s much faster than JSON or XML to parse and encode. Like working with JSON, you can create strongly typed objects to make working with them easier, but protobufs also have some advantages over the other formats:
- less ambiguous with explicit data types
- smaller (3-10 times smaller than XML)
- faster (20-100 times faster than XML)
Protobuf Implementation
So what does a protobuf look like? It’s a data definition, like a schema, that defines what the data looks like. Here’s an example of one from a project I’m working on:
package ratekeeper;

message Meter {
  string meterId = 1;
  string publicCloud = 2;
  string displayName = 3;
  string category = 4;
  string subcategory = 5;
}
That definition is pretty self-explanatory. There’s another piece to it, but before we get to that, let’s take a quick detour through the binary encoding and then look at another bit of tech.
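This sketch isn’t from the project, and the protobufjs library isn’t used anywhere else in this post; it just shows roughly what encoding that Meter message to binary looks like in Node.js and how the payload size compares to JSON (the meter values and path are made up):

import * as protobuf from 'protobufjs';

// load the .proto definition (path is a placeholder)
protobuf.load('[relative-path-to]/ratekeeper.proto').then((root: protobuf.Root) => {
  const Meter: protobuf.Type = root.lookupType('ratekeeper.Meter');

  // hypothetical values, purely for illustration
  const payload: any = {
    meterId: 'd83fa551-8030-416a-b443-306aef06a5d2',
    publicCloud: 'azure',
    displayName: 'Compute Hours',
    category: 'Virtual Machines',
    subcategory: 'Standard_D14 VM (Windows)'
  };

  // encode to the protobuf binary wire format & compare to the JSON encoding
  const encoded: Uint8Array = Meter.encode(Meter.create(payload)).finish();
  console.log('protobuf bytes:', encoded.length);
  console.log('json bytes:', Buffer.byteLength(JSON.stringify(payload)));
});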
Overview - gRPC
In the opening of this post, I mentioned how building REST APIs with JSON is pretty common. RPC is another option. RPC stands for remote procedure call and isn’t anything new; it basically makes calling a remote service as familiar as calling a local method or function.
So what is gRPC? For 15 years, Google has been using its own implementation of RPC called Stubby, a framework designed to handle “internet-scale of tens of billions of requests per second”. In late August 2016, they released this technology to the world as an open-source project called gRPC.
gRPC uses protocol buffers by default as both the definition language and the message format, but you can swap in something else if you like (such as JSON). Just like with a custom REST API, you are left to create the server & client implementations yourself. The steps are simple:
- create a service definition as a protocol buffer
- create a server implementation of the definition
- create client(s) that call the server
gRPC supports a few different styles of communication. Like a normal REST-based API, you can do the classic request-response. Another style is streaming: the server can stream large responses back to the requestor, the client can stream big requests to the server, or you can do bidirectional streaming. The sketch below shows what each style looks like in a service definition.
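To make those styles concrete, here’s a hedged sketch of how each would be declared in a .proto file; the service and message names here are made up for illustration and aren’t part of the RateKeeper service used in the rest of this post:

syntax = "proto3";

package examples;

// throwaway messages, just for this sketch
message MeterQuery {
  string meterId = 1;
}

message MeterReading {
  string meterId = 1;
  double value = 2;
}

message UploadSummary {
  int32 count = 1;
}

service MeterStreamingExamples {
  // classic request-response
  rpc GetReading(MeterQuery) returns (MeterReading) {}
  // server streaming: one request, a stream of responses
  rpc WatchReadings(MeterQuery) returns (stream MeterReading) {}
  // client streaming: a stream of requests, one summary response
  rpc UploadReadings(stream MeterReading) returns (UploadSummary) {}
  // bidirectional streaming: both sides stream
  rpc SyncReadings(stream MeterQuery) returns (stream MeterReading) {}
}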
A bunch of companies have already jumped to adopt gRPC & protobufs, including Netflix, Square, CoreOS, Cisco & Juniper Networks, to name a few. Recently the Google Cloud Platform Podcast interviewed the CTO of CoreOS about their experience and why they switched to gRPC.
gRPC Implementation
The first step is to define the service. Let’s extend the protobuf (*.proto) definition from above and add an RPC call to read a meter:
service RateKeeperService {
  rpc GetMeter(GetMeterRequest) returns (GetMeterResponse) {}
}

message GetMeterRequest {
  string meterId = 1;
}

message GetMeterResponse {
  Meter meter = 1;
}
The service section defines a single RPC method, GetMeter, for the gRPC server. It accepts a GetMeterRequest object and returns a GetMeterResponse object. You can see these two objects above, defined as messages with properties just like the Meter in the first snippet. The request object accepts a string for the ID of the meter to look up; the response returns an instance of the Meter.
Creating the gRPC Server
The next step is to implement the server, and that is dependent upon the language you elect to use for your gRPC server… the quickstarts have examples for all the supported languages, including Python, Go, Ruby, Node.js (JavaScript), C#, Objective-C & PHP. I’ll go into more detail on that in my next post when I show you how I do it with TypeScript in Node.js, but for now, here’s what it looks like to spin up a server for the above example:
Assuming the following protobuf:
syntax = "proto3";

package ratekeeper;

service RateKeeperService {
  rpc GetMeter(GetMeterRequest) returns (GetMeterResponse) {}
}

message GetMeterRequest {
  string meterId = 1;
}

message GetMeterResponse {
  Meter meter = 1;
}

message Meter {
  string meterId = 1;
  string publicCloud = 2;
  string displayName = 3;
  string category = 4;
  string subcategory = 5;
}
Here’s a simple server implementation written in TypeScript:
import * as path from 'path';
let grpc: any = require('grpc');
import { Logger, IPublicCloudMeter, IRateKeeperGetMeterCall, IRateKeeperServer, PublicCloudEnum } from 'voyager-shared';

export class RateKeeperServer {

  public static getMeter(call: IRateKeeperGetMeterCall, callback: any): void {
    // build a meter that matches the Meter message defined in the protobuf
    let meter: IPublicCloudMeter = <IPublicCloudMeter>{
      meterId: call.request.meterId,
      publicCloud: PublicCloudEnum.MICROSOFT_AZURE,
      category: 'Virtual Machines',
      subcategory: 'Standard_D14 VM (Windows)',
      displayName: 'Compute Hours'
    };
    // first argument is the error (null = success), second is the GetMeterResponse
    callback(null, { meter: meter });
  }

  public start(host: string = '0.0.0.0', port: number = 50051): void {
    let listenAddress: string = host + ':' + port;
    let server: any = this._createServer();
    server.bind(listenAddress, grpc.ServerCredentials.createInsecure());
    server.start();
  }

  private _createServer(): any {
    // load proto
    let protoPath: string = path.join('[relative-path-to]/ratekeeper.proto');
    let proto: any = grpc.load(protoPath).ratekeeper;
    // create & define service
    let server: any = new grpc.Server();
    server.addProtoService(proto.RateKeeperService.service, <IRateKeeperServer>{
      getMeter: RateKeeperServer.getMeter
    });
    return server;
  }

}
To get this started, I have a file index.ts that I start up with $ node index.js. The TypeScript to start the server is simply:
import { RateKeeperServer } from './RateKeeperServer';
let grpcServer: RateKeeperServer = new RateKeeperServer();
grpcServer.start();
How does it work? When the server’s start() method is called, it creates a new instance of the server, binds it to a host & port… in this case 0.0.0.0:50051… and then calls the gRPC server’s start() method. Creating the server involves loading the protobuf definition file first, and then defining the implementation of its methods. Here you see the GetMeter RPC defined in the protobuf file maps to the static getMeter() method. That method creates an instance of a Meter object that matches the signature of the Meter message defined in the protobuf. When complete, it calls a callback function, passing in null for the error and an object that matches the protobuf’s GetMeterResponse message type, which has a single Meter property.
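One thing the sample handler glosses over is the error path. Here’s a hedged sketch (my addition, not from the project) of what a handler with the same shape could do when the meter isn’t found, using the status codes exposed by the grpc package; the in-memory store is made up:

import { IPublicCloudMeter, IRateKeeperGetMeterCall } from 'voyager-shared';
let grpc: any = require('grpc');

// hypothetical in-memory store standing in for a real lookup
const meterStore: Map<string, IPublicCloudMeter> = new Map();

export function getMeterOrNotFound(call: IRateKeeperGetMeterCall, callback: any): void {
  const meter: IPublicCloudMeter | undefined = meterStore.get(call.request.meterId);
  if (!meter) {
    // the first callback argument is the error; grpc.status holds the standard gRPC status codes
    callback({
      code: grpc.status.NOT_FOUND,
      message: 'no meter with id ' + call.request.meterId
    });
    return;
  }
  // success: null error plus an object matching GetMeterResponse
  callback(null, { meter: meter });
}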
Creating the gRPC client
Having a server isn’t enough! We want to be able to call it. That code is pretty simple as well. What follows is the TypeScript implementation, but keep in mind that because the client & server are separate, the client can be written in any language.
import * as path from 'path';
let grpc: any = require('grpc');
import {
  IPublicCloudMeter, IProtobufTimestamp, IRateKeeperClient, IRateKeeperGetMeterResponse
} from 'voyager-shared';

// load the same protobuf definition the server uses
let protoPath: string = path.join('[relative-path-to]/ratekeeper.proto');
let proto: any = grpc.load(protoPath).ratekeeper;

// create a client connected to the server
let connectionString: string = '0.0.0.0:50051';
let client: IRateKeeperClient =
  new proto.RateKeeperService(connectionString, grpc.credentials.createInsecure());

// call the GetMeter RPC with a request object & a callback
client.getMeter(
  { meterId: 'd83fa551-8030-416a-b443-306aef06a5d2' },
  (error: any, response: IRateKeeperGetMeterResponse) => {
    if (error) {
      console.error(error);
      process.exit(1);
    } else {
      console.log('meter:', (<IPublicCloudMeter>response.meter));
    }
  } // callback
); // client.getMeter()
First, create an instance of the client by loading the protobuf definition and creating a connection to the server. Note that, as the createInsecure() method implies, the connection can be made with or without transport security (TLS)… there are a few authentication options supported by gRPC.
Then, call the method on the server by simply calling client.getMeter(), passing in the request object & the callback to execute.
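On the credentials front, both snippets above use createInsecure(). If you want the TLS variant the earlier note hints at, here’s a hedged sketch with the same grpc package; the certificate file paths are made up:

import * as fs from 'fs';
let grpc: any = require('grpc');

// client side: trust the CA that signed the server's certificate
let clientCredentials: any = grpc.credentials.createSsl(
  fs.readFileSync('certs/ca.pem')
);

// server side: present a certificate / private key pair to clients
let serverCredentials: any = grpc.ServerCredentials.createSsl(
  null, // root certs are only needed when requiring client certificates
  [{
    private_key: fs.readFileSync('certs/server.key'),
    cert_chain: fs.readFileSync('certs/server.pem')
  }]
);

// these would replace grpc.credentials.createInsecure() in the client and
// grpc.ServerCredentials.createInsecure() in the server's bind() call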
Conclusion
As I said, in the next post I’ll go into more detail on how I’m using gRPC in a project. That project is a microservice-based application with multiple containers that talk to each other, all implemented using TypeScript & Node.js. I chose gRPC over custom HTTP REST APIs because gRPC is more performant and because I’ve found it much quicker to implement, with less work required to write the code that consumes the server. Plus, I like the strong contracts between the client & server. If you send data that doesn’t conform to the protobuf, or if you try to send data back that doesn’t conform, it fails.
Also, don’t take this post as me saying you should stop writing REST APIs and instead use gRPC… I’m just showing you another option.
One More Thing…
You may be thinking “hey wait a minute… not everyone is using gRPC but virtually all developers are familiar with REST APIs… I don’t want my API to be some special case that requires them to retool.” Very good point! While you can create clients that talk to the servers using JavaScript just like you would if you were working with REST APIs, sometimes it’s easier to use what people are familiar with.
Someone has already thought of this too! Google also created a gRPC ecosystem that includes a bunch of cool open source projects. One of these is grpc-gateway, a gRPC-to-JSON proxy generator that produces a reverse-proxy server translating a RESTful JSON API into gRPC. So you can write your APIs using gRPC for internal communication between your components, but also host a thin wrapper that lets clients who want to use familiar REST API calls do that as well.
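For a sense of what that looks like, grpc-gateway is driven by HTTP annotations in the .proto file itself. Here’s a hedged sketch of how the GetMeter RPC from this post might be annotated; the /v1/meters route is made up, and the annotations import comes from the grpc-gateway / googleapis tooling:

syntax = "proto3";

package ratekeeper;

import "google/api/annotations.proto";

service RateKeeperService {
  rpc GetMeter(GetMeterRequest) returns (GetMeterResponse) {
    // grpc-gateway generates a reverse proxy that maps this RPC to
    // GET /v1/meters/{meterId} and translates JSON <-> protobuf
    option (google.api.http) = {
      get: "/v1/meters/{meterId}"
    };
  }
}

message GetMeterRequest {
  string meterId = 1;
}

message GetMeterResponse {
  Meter meter = 1;
}

message Meter {
  string meterId = 1;
  string publicCloud = 2;
  string displayName = 3;
  string category = 4;
  string subcategory = 5;
}

From a file like that, the grpc-gateway protoc plugin generates the reverse-proxy server; your gRPC server implementation doesn’t have to change.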