Archive for May 2016

Using Kubernetes for Oauth2_proxy

In this tutorial we are going to set up a Kubernetes minion server that combines a basic guestbook app with oauth2_proxy.


Kubernetes is an open-source container orchestration and management system. Ultralinq has made a decision to move towards a microservices architecture, and Kubernetes will be one of our principal tools as we make this change.

Let’s get started.

This tutorial assumes you have a functioning Kubernetes cluster. I did this in AWS.


  • Kubernetes
  • Docker
  • AWS
  • kubectl CLI tool
  • kube-aws CLI tool
  • oauth2_proxy


This tutorial is based on the Kubernetes guestbook example. I have made a few adjustments, available in my git repository, but for the most part you can follow along with their documentation.

1. Set up Redis Deployment and Service

  •  Set Up Redis Master Deployment .yaml file
  •  Set Up Redis Master Service .yaml file
  •  Create the Redis Master Service and the Redis Master Deployment.
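For reference, a minimal pair of manifests in the style of the guestbook example might look like the sketch below (the names, labels, and image follow the guestbook's conventions; adjust them for your project):

```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: redis
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
```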

2. Check Services, Deployments, and Pods

  •  Using kubectl, check your services, pods, and deployments. You can also check the logs of a single pod; instructions for this are in the Kubernetes guestbook README.
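The checks above boil down to a handful of kubectl commands (the pod name is a placeholder):

```
kubectl get services
kubectl get deployments
kubectl get pods
# Check the logs of a single pod by name:
kubectl logs <pod-name>
```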

3. Repeat Steps 1 & 2 for the Redis Slave Service and Deployment, as well as the Frontend Service and Deployment.

4. Create an additional External Elastic Load Balancer in AWS (or somewhere else).

  • This allows external traffic into our minion.
  • I’ll let you figure this one out.
  • PS: There is an appendix in the guestbook docs on this.

5. Set up Oauth2_proxy Service

  • I assume you know something about oauth2_proxy. If not, read the documentation.
  • The .yaml files for oauth2_proxy can be found in my git repository.
  • Set up an oauth2proxy_service.yaml file as follows:
apiVersion: v1
kind: Service
metadata:
  name: oauth2-proxy
  labels:
    app: oauth2-proxy
    tier: backend
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 4180
  selector:
    app: oauth2-proxy
    tier: backend

6. Create the Oauth2_proxy Service
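Creating the service is a one-liner once the .yaml file is in place (assuming the filename used above):

```
kubectl create -f oauth2proxy_service.yaml
```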

7. Set up Oauth2_proxy Deployment

  • I assume you know something about oauth2_proxy. If not, read the documentation.
  • Set up an oauth2proxy_deployment.yaml file as follows:

 apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
   name: oauth2-proxy
 spec:
   replicas: 1
   template:
     metadata:
       labels:
         app: oauth2-proxy
         tier: backend
     spec:
       containers:
       - name: oauth2-proxy
         # This image exposes the proxy on port 4180
         image: estelora/oauth2_proxy
         volumeMounts:
         - mountPath: /etc/nginx/conf.d
           name: nginx-volume
         ports:
         - containerPort: 4180
         # Command sets up the oauth2-proxy command line args.
         # Please set these variables for your project.
         command:
         - oauth2_proxy
         # Here is an example of service discovery.
         - --upstream=http://frontend
         - --client-secret="google-client-secret"
         - --redirect-url=
         - --cookie-secret="secret-chocolate-chip-cookie"
         # This variable stays the same - this is an internal IP
         - --http-address=
       volumes:
       # The volume source was not shown in the original; emptyDir is a placeholder.
       - name: nginx-volume
         emptyDir: {}
  •  The port is set to 4180 in the container, service, and deployment.
  •  The upstream shows Kubernetes’ service discovery – the internal address is http://frontend.

8. Create the Oauth2_proxy Deployment

9. Debug Network Issues as necessary!

  •  You can adjust your network with .yaml on the Kubernetes side.
  •  You can use kubectl logs, or ssh into a node and inspect its Docker containers.
  • Adjust firewalls for both your services and external load balancers.
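A few kubectl commands I find useful when debugging (pod and service names here are placeholders):

```
# Describe a service to see its endpoints and load balancer status
kubectl describe service oauth2-proxy
# Tail a pod's logs
kubectl logs -f <pod-name>
# Open a shell inside a running container
kubectl exec -it <pod-name> -- /bin/sh
```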

Kubernetes Glossary

  • Deployment: set of parameters for the desired state of pods
  • Minion: a server that performs work, configures networking for containers, and runs tasks assigned to containers.
  • Service: internal load balancer.
  • Node: provisioned hardware (in this case, a VM in the cloud).
  • Pod: container or group of containers that support each other to run tasks.
  • Service Discovery: lets you hard-code stable internal host names (such as http://frontend) into your code instead of IP addresses.
  • Containers: a Docker container or Google container.

A visual

I made a little diagram to give a picture of how it works. The number of pods shown is not the same as in the actual files. Kubernetes is an amazing tool, but understanding all the pieces takes longer than spinning up the servers does. 😉


How to Log Variables in Nginx

What is Nginx?

Nginx (pronounced ‘engine-x’) is a flexible HTTP server that can be configured as a reverse proxy, a mail proxy, and a generic TCP/UDP proxy server. It was created by Igor Sysoev, a Russian software engineer. As of April 2016, according to Netcraft, it served or proxied about a fourth of the web’s busiest sites.

Although Nginx is popular, it is tricky to get from initial setup to a complete, working configuration. I frequently use Nginx, and I would describe it as flexible yet finicky.

Logging as a Problem-Solving Tool

Setting up any kind of infrastructure is a black-box development process, which, in my opinion, requires a guess-and-check problem-solving approach. Logs are one of the principal sources of information in the check step, and are essential when refining Nginx.

Error.log and Access.log

This post focuses on access.log.

1. error.log

Default location: /var/log/nginx/error.log
  • Can only be configured by severity level.
  • The severity levels are: warn, error, crit, and alert.
  • warn is the lowest (most verbose) of these levels, and alert is the highest (most severe).
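The severity is set as the second argument to the error_log directive, for example:

```
error_log /var/log/nginx/error.log warn;
```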

2. access.log

Default location: /var/log/nginx/access.log
  • Writes information about client requests as they are processed by the server.
  • The default setting for this log is ‘combined’ format.
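For reference, the ‘combined’ format is predefined by Nginx and is equivalent to:

```
log_format combined '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';
```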

Setting Variables for access.log

You can log any set of Nginx variables.

Common Variables [listed alphabetically]:
  • $host
  • $http_host
  • $http_referer
  • $remote_addr
  • $request
  • $scheme
  • $server_name
  • $time_local
  • $upstream_addr

Set the Log Format in nginx.conf


log_format <name of format> '"$variable1" "$variable2" "$variable3"';


log_format debug '"[$time_local]" "$host" 
"$http_host"  "$server_name"  "$request" 
"$remote_addr" "$upstream_addr:"';

Set Configurations at the Server Level in sites-enabled directory


access_log </path/to/access.log> <name of format>;

Example in Server Block:

 server {
  access_log /var/log/nginx/access.log debug;
 }

Happy logging!

Run the command below to see your access.log in real time:

tail -f /var/log/nginx/access.log



Efficient Authentication: Configure Oauth2_proxy for Multiple Upstreams

What is Oauth2_proxy?

Authentication is necessary for security: some resources should only be reachable with a username and password. A common solution is BasicAuth, which gets the job done but is not the most elegant approach: it is easy to implement, but it must be repeated across multiple applications, whereas oauth2_proxy can be configured in one place and applied to many applications (upstreams). An upstream is the URL a user attempts to reach before being prompted to authenticate with a third party.
oauth2_proxy, an open-source reverse proxy from bitly, provides authentication by email, domain, or group through a third-party authentication provider, such as GitHub or Google. It is written in Go. This proxy is a great solution for my company because it helps us securely manage the authentication of multiple users across multiple applications (upstreams) with a single application. I recently created a prototype with a Google API, oauth2_proxy, and Nginx for internal applications used by the product team. It will most likely be scaled up in the future as a Kubernetes cluster.

Assumptions to Get Started

I made all external web traffic come through port 8080 and created an elastic load balancer in Amazon Web Services for HTTPS with a TCP passthrough of 443. Below is an example configuration for your reference:

1. Oauth2_proxy Configuration

Here are the configuration settings for oauth2_proxy, which can be set via the command line.

--cookie-domain="" \
--cookie-name="oauth2_proxy" \
--cookie-secret="<cookie-secret>" \
--client-id="<client-id>" \
--client-secret="<client-secret>" \
--redirect-url="" \
--email-domain="" \
--request-logging=true \
--pass-host-header=true \
--tls-cert="/path/to/my.crt" \
--tls-key="/path/to/my.key"

2. Nginx Configurations

You need to create several files in Nginx to set this up.

Sites-enabled Folder: google-auth

 upstream google-auth {
    # oauth2_proxy address goes here, e.g. server 127.0.0.1:4180;
 }

 server {
    rewrite_log on;
    listen *:8080;
    server_name ~^auth.(?<domain>internal.*)$;
    location = /oauth2/callback {
       proxy_pass https://google-auth;
    }
    location ~/(?<sub>[^/]+)(?<remaining_uri>.*)$ {
       rewrite ^ https://$sub.$domain$remaining_uri;
    }
 }

 server {
    listen *:8080;
    server_name ~^(.+).internal.*;
    location = /oauth2/start {
       proxy_pass https://google-auth/oauth2/start?rd=%2F$1;
    }
    location / {
       proxy_pass https://google-auth;
       proxy_set_header Host $host;
    }
 }

 Sites-enabled Folder: upstreams

 upstream app2 {
    server <app2 private IP>;
 }

 upstream app1 {
    server <app1 private IP>;
 }

 server {
    access_log /var/log/nginx/default.log oauth;
    listen 8081;
    server_name "";
    add_header Content-Type text/plain;
 }

 # app1
 server {
    access_log /var/log/nginx/app1.log oauth;
    listen 8081;
    server_name app1.internal.*;
    location / {
       proxy_pass http://app1;
       proxy_redirect off;
    }
 }

 # app2
 server {
    access_log /var/log/nginx/app2.log oauth;
    listen 8081;
    server_name app2.internal.*;
    location / {
       proxy_pass http://app2;
       proxy_redirect off;
    }
 }

Nginx.conf: add this for logging

Because logs make life better 🙂

 rewrite_log on;
 log_format oauth '"[$time_local]" host: "$host" host_header:
   "$http_host" server_name: "$server_name" "$request"
   remote address: "$remote_addr" to upstream: "$upstream_addr:"';
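With this format, a request in the log ends up looking something like the line below (the host names, addresses, and timestamp are made up for illustration):

```
"[17/May/2016:10:03:12 +0000]" host: "app1.internal.example.com" host_header:
  "app1.internal.example.com" server_name: "app1.internal.*" "GET / HTTP/1.1"
  remote address: "10.0.1.25" to upstream: "10.0.2.14:80:"
```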