Using Kubernetes for Oauth2_proxy

In this tutorial we are going to set up a Kubernetes minion server that combines a basic guestbook app with oauth2_proxy.


Kubernetes is an open-source container orchestration and management system. Ultralinq has made a decision to move towards a microservices architecture, and Kubernetes will be one of our principal tools as we make this change.

 Let’s get started.

This tutorial assumes you have a functioning Kubernetes cluster. I did this in AWS.

Your Toolbox

  • Kubernetes
  • Docker
  • AWS
  • kubectl CLI tool
  • kube-aws CLI tool
  • oauth2_proxy


This tutorial is based on the Kubernetes guestbook example. I have added a few adjustments to this in a git repository, but for most of it, you can follow along with their documentation.

1. Set up Redis Deployment and Service

  •  Set Up the Redis Master Deployment .yaml file
  •  Set Up the Redis Master Service .yaml file
  •  Create the Redis Master Service and the Redis Master Deployment.

2. Check Services, Deployments, and Pods

  •  Using kubectl, check your services, pods, and deployments. You can also check the logs of a single pod. The instructions on how to do this are in the Kubernetes guestbook readme.
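For reference, these checks are each a single kubectl command (the pod name below is a placeholder — substitute one of your own pod names):

```shell
kubectl get services
kubectl get deployments
kubectl get pods
kubectl logs <pod-name>
```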

3. Repeat Steps 1 & 2 for the Redis Slave Service and Deployment, as well as the Frontend Service and Deployment.

4. Create an additional External Elastic Load Balancer in AWS (or somewhere else).

  • This allows external traffic into our minion.
  • I’ll let you figure this one out.
  • PS: There is an appendix in the guestbook docs on this.

5. Set up Oauth2_proxy Service

  • I assume you know something about oauth2_proxy. If not, read the documentation.
  • The .yaml files for oauth2_proxy can be found in my git repository.
  • Set up an oauth2proxy_service.yaml file as follows:
apiVersion: v1
kind: Service
metadata:
  name: oauth2-proxy
  labels:
    app: oauth2-proxy
    tier: backend
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 4180
  selector:
    app: oauth2-proxy
    tier: backend

6. Create the Oauth2_proxy Service

7. Set up Oauth2_proxy Deployment

  • I assume you know something about oauth2_proxy. If not, read the documentation.
  • Set up an oauth2proxy_deployment.yaml file as follows:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: oauth2-proxy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: oauth2-proxy
        tier: backend
    spec:
      containers:
      - name: oauth2-proxy
        image: estelora/oauth2_proxy
        ports:
        # This sets the port at 4180
        - containerPort: 4180
        volumeMounts:
        - mountPath: /etc/nginx/conf.d
          name: nginx-volume
        # Command sets up the oauth2-proxy command line args.
        # Please set these variables for your project.
        command:
        - oauth2_proxy
        # Here is an example of service discovery.
        - --upstream=http://frontend
        - --client-secret="google-client-secret"
        - --redirect-url=
        - --cookie-secret="secret-chocolate-chip-cookie"
        # This variable stays the same - this is an internal IP
        - --http-address=
      volumes:
      # volume definition assumed here; adjust for your own setup
      - name: nginx-volume
        emptyDir: {}
  •  The port is set to 4180 in the container, service, and deployment.
  •  The upstream shows Kubernetes’ service discovery – the internal address is http://frontend.

8. Create the Oauth2_proxy Deployment
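Steps 6 and 8 are each a single kubectl command, assuming the .yaml file names used above:

```shell
kubectl create -f oauth2proxy_service.yaml
kubectl create -f oauth2proxy_deployment.yaml
```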

9. Debug Network Issues as necessary!

  •  You can adjust your network with .yaml on the Kubernetes side.
  •  You can use kubectl logs and SSH into a Docker container in your cluster.
  • Adjust firewalls for both your services and external load balancers.

Kubernetes Glossary

  • Deployment: set of parameters for the desired state of pods
  • Minion: a server that performs work, configures networking for containers, and runs tasks assigned to containers.
  • Service: internal load balancer.
  • Node: provisioned hardware (in this case, a VM in the cloud).
  • Pod: container or group of containers that support each other to run tasks.
  • Service Discovery: lets you refer to other services inside your Kubernetes minion by stable internal host names (such as http://frontend) rather than by IP address.
  • Containers: a Docker container or google container.

A visual

I made a little diagram to give a picture of how it works. The number of pods shown is not the same as in the actual files. Kubernetes is an amazing tool, but understanding all the pieces takes longer than the time Kubernetes needs to spin up servers for you. 😉



How to Log Variables in Nginx

What is Nginx?

Nginx (pronounced 'engine-x') is a flexible HTTP server that can be configured as a reverse proxy, mail proxy, and generic TCP/UDP proxy server. It was created by Igor Sysoev, a Russian software engineer. As of April 2016, according to Netcraft, it served or proxied about a fourth of the web's busiest sites.

Although Nginx is popular, it is tricky to get from initial setup to a complete configuration. I frequently use Nginx, and I would describe it as flexible yet finicky.

Logging as a Problem-Solving Tool

Setting up any kind of infrastructure is a black-box development process, which, in my opinion, requires a guess-and-check problem-solving approach. Logs are one of the principal sources of information in the check process, and are essential when refining Nginx.

Error.log and Access.log

This post focuses on access.log.

1. error.log

Default location: /var/log/nginx/error.log
  • Can only be configured by severity level.
  • The severity levels are: warn, error, crit, and alert.
  • Of these, warn is the least severe (and most verbose) level, and alert is the most severe.
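For example, using the default path from above, logging everything at warn severity and above looks like this in nginx.conf:

```nginx
error_log /var/log/nginx/error.log warn;
```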

2. access.log

Default location: /var/log/nginx/access.log
  • Writes information about client requests as they are processed by the server.
  • The default setting for this log is ‘combined’ format.

Setting Variables for access.log

You can log any set of Nginx variables.

Common Variables [listed alphabetically]:
  • $host
  • $http_host
  • $http_referer
  • $remote_addr
  • $request
  • $scheme
  • $server_name
  • $time_local
  • $upstream_addr

Set the Log Format in nginx.conf


log_format <name of format> '"$variable1" "$variable2" "$variable3"';


log_format debug '"[$time_local]" "$host"
"$http_host"  "$server_name"  "$request"
"$remote_addr" "$upstream_addr"';
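To see what a format like this produces, here is a quick shell sketch. The log line below is fabricated sample data, not real traffic; the trick is that because every field is wrapped in double quotes, you can split a line on the quote character:

```shell
# A made-up access.log line in the quoted debug format above (sample data only):
line='"[12/Apr/2016:10:05:12 +0000]" "example.com" "example.com" "example.com" "GET / HTTP/1.1" "203.0.113.7" "10.0.0.5:8081"'

# Splitting on the double-quote character puts the Nth quoted field in awk column 2*N,
# so column 4 is the second quoted field: $host.
host_field=$(echo "$line" | awk -F'"' '{print $4}')
echo "$host_field"
```

The same split works for any field position, which makes quoted formats easy to grep and tally.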

Set Configurations at the Server Level in sites-enabled directory


access_log </path/to/access.log> <name of format>;

Example in Server Block:

server {
    access_log /var/log/nginx/access.log debug;
}

Happy logging!

Run the command below to see your access.log in real time:

tail -f /var/log/nginx/access.log




Efficient Authentication: Configure Oauth2_proxy for Multiple Upstreams

What is Oauth2_proxy?

Authentication is necessary for security. Some resources should only be reachable with a username and password. A common solution to this is BasicAuth, which gets the job done, but it is not the most elegant solution: BasicAuth is not difficult to implement, but it must be repeated across multiple applications, whereas oauth2_proxy can be set up in one place and applied to many applications (upstreams). An upstream is the URL a user attempts to reach before getting prompted for authentication with a third party.

Oauth2_proxy, an open-source reverse proxy by Bitly, provides authentication by email, domain, or group with a third-party authentication provider such as GitHub or Google. It is written in Go. This proxy is a great solution for my company because it helps us securely manage the authentication of multiple users across multiple applications (upstreams) with a single application. I recently created a prototype with a Google API, Oauth2_proxy, and Nginx for internal applications used by the product team. It will most likely be scaled up in the future as a Kubernetes cluster.

Assumptions to Get Started

I made all external web traffic come through port 8080 and created an Elastic Load Balancer in Amazon Web Services for HTTPS with a TCP passthrough on 443. Below is an example configuration for your reference:

1. Oauth2_proxy Configuration

Here are the configuration settings for Oauth2_proxy, which can be set via the command line.

--cookie-domain="" \
--cookie-name="oauth2_proxy" \
--cookie-secret="<cookie-secret>" \
--client-id="<client-id>" \
--client-secret="<client-secret>" \
--redirect-url="" \
--email-domain="" \
--request-logging=true \
--pass-host-header=true \
--tls-cert="/path/to/my.crt" \
--tls-key="/path/to/my.key"
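The cookie secret just needs to be a hard-to-guess random string. One way to generate one on a Linux box (an assumption on my part — any good source of randomness works):

```shell
# 32 random bytes, base64-encoded, make a reasonably strong cookie secret
cookie_secret=$(head -c 32 /dev/urandom | base64 | tr -d '\n')
echo "$cookie_secret"
```

Paste the output into --cookie-secret and keep it out of version control.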

2. Nginx Configurations

You need to create several files in Nginx to set this up.

Sites-enabled Folder: google-auth

upstream google-auth {
    server <oauth2_proxy private IP>; # the address where oauth2_proxy listens
}

server {
    rewrite_log on;
    listen *:8080;
    server_name ~^auth.(?<domain>internal.*)$;
    location = /oauth2/callback {
        proxy_pass https://google-auth;
    }
    location ~/(?<sub>[^/]+)(?<remaining_uri>.*)$ {
        rewrite ^ https://$sub.$domain$remaining_uri;
    }
}

server {
    listen *:8080;
    server_name ~^(.+).internal.*;
    location = /oauth2/start {
        proxy_pass https://google-auth/oauth2/start?rd=%2F$1;
    }
    location / {
        proxy_pass https://google-auth;
        proxy_set_header Host $host;
    }
}

 Sites-enabled Folder: upstreams

upstream app2 {
    server <app2 private IP>;
}

upstream app1 {
    server <app1 private IP>;
}

server {
    access_log /var/log/nginx/default.log oauth;
    listen 8081;
    server_name "";
    add_header Content-Type text/plain;
}

# app1
server {
    access_log /var/log/nginx/app1.log oauth;
    listen 8081;
    server_name app1.internal.*;
    location / {
        proxy_pass http://app1;
        proxy_redirect off;
    }
}

# app2
server {
    access_log /var/log/nginx/app2.log oauth;
    listen 8081;
    server_name app2.internal.*;
    location / {
        proxy_pass http://app2;
        proxy_redirect off;
    }
}
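After changing files in sites-enabled, it is worth letting Nginx check the syntax before reloading (this assumes nginx is installed as a system service):

```shell
sudo nginx -t && sudo service nginx reload
```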

Nginx.conf: add this for logging

Because logs make life better 🙂

 rewrite_log on;
 log_format oauth '"[$time_local]" host: "$host" host_header:
   "$http_host" server_name: "$server_name" "$request"
   remote address: "$remote_addr" to upstream: "$upstream_addr"';



Happy Path Chef: How to Set up VSFTPD with Chef on CentOS 6

Automated Infrastructure vs. Manual Infrastructure

There are many tutorials about how to configure a VSFTPD server by hand. However, manual infrastructure is neither easy to replicate nor easy to document. If you create a server with Chef or another infrastructure-as-code tool, you can reproduce it in a fraction of the time it would take to do by hand. Organizations' infrastructure must change quickly while remaining flexible and reliable to meet market demands, so automated infrastructure is becoming a necessity.

FTP-like servers allow you to receive and view files from anywhere. This tutorial is a walkthrough of the first iteration of a (much more) secure VSFTPD server for QA testing and validation. The server is meant to serve as a building block for more automated testing, and it makes any manual testing much faster and more reliable. The final version of this server should be a huge timesaver for our QA Team 🙂


Your Goals

  • Set up a VSFTPD server with client-side access.
  • Login as a specific user.
  • Upload, download, and view files.

Your Toolbox

  • FTP Client application. (I like Cyberduck; the UI is nice.)
  • Vagrant and Virtualbox.

Your Cookbooks

You will need the following cookbooks for this server:

Put these in your chef/cookbooks directory.

Getting Started

The first step is to get the cookbooks installed on our CentOS 6 machine.

Go to your local machine, and in your local chef directory, enter:

$ EDITOR=vim knife node edit vsftpd-demo

This will open the node definition so you can set up an initial run_list that installs your cookbooks.

Set the run_list with our two cookbooks as shown below:



Save the node file in vim.

On your virtual machine, enter:

$ sudo chef-client

Check your Progress in Cyberduck

This server currently functions with anonymous authentication. For security purposes, a user besides anonymous should be able to log into their home directory to view, upload, and download files. We need to adjust the vsftpd configuration to get past this insecure, anonymous version of FTP.

Configure VSFTPD with Attributes

Navigate to vsftpd/attributes/default.rb in your text editor or IDE. This is not the configuration file itself, but it is where Chef gets the information to create the configuration file.

To get all these things ready from the vsftpd side, set the default['vsftpd']['config'] section of the attributes/default.rb file in your cookbook like this:

default['vsftpd']['config'] = {
    'port_enable' => 'YES',
    'anonymous_enable' => 'NO',
    'local_enable' => 'YES',
    'chroot_local_user' => 'YES',
    'write_enable' => 'YES',
    'ascii_upload_enable' => 'YES',
    'ascii_download_enable' => 'YES',
    'local_umask' => '022',
    'dirmessage_enable' => 'YES',
    'connect_from_port_20' => 'YES',
    'listen' => 'YES',
    'background' => 'YES',
    'pam_service_name' => 'vsftpd',
    'userlist_enable' => 'YES',
    'tcp_wrappers' => 'YES',
    'pasv_enable' => 'YES',
    'pasv_max_port' => '50744',
    'pasv_min_port' => '50624',
    'pasv_address' => "#{node['ipaddress']}"
}


Save the file and upload the new cookbook with:

knife cookbook upload vsftpd

Run this from inside the chef directory on your local machine.

Check Yourself in Cyberduck

Open Cyberduck and try logging in as vagrant, with the password ‘vagrant’. Make sure you set the ‘Connect’ dropdown to ‘Active’. We are making an active FTP server for the purposes of this demo. 

Now you can view, upload and download files as the vagrant user, which is a step closer to our requirements.

Create a User

Although the vagrant user is a specific user, it isn't a secret user. The standard vagrant password is easy to guess and easy to find. Let's create another user with Chef.

Add the following lines to the recipes/default.rb file:

user 'ftpdemo' do
  supports :manage_home => true
  comment 'FTP Demo User'
  home '/home/ftpdemo'
  shell '/bin/bash'
  password 'ftpdemopass' # on Linux this should be a shadow-style hash, not plaintext
end

directory '/home/ftpdemo' do
  owner 'ftpdemo'
  group 'ftpdemo'
  mode '0755'
  action :create
end

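One caveat worth hedging on: on Linux, Chef's user resource expects the password attribute to be a shadow-style hash rather than plaintext. Assuming openssl is installed, you can generate an MD5-crypt hash for the demo password like this:

```shell
# Generate a shadow-style hash for the demo password (output begins with $1$)
hash=$(openssl passwd -1 ftpdemopass)
echo "$hash"
```

Paste the resulting hash into the password attribute of the user resource.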

On your local machine, run:

$ knife cookbook upload vsftpd

Run sudo chef-client again in the virtual machine.

Check yourself, again

Open Cyberduck and log in as user ‘ftpdemo’.

You should be able to upload, download and view files as user ftpdemo.

Check in the browser at 'ftp://<ipaddress>'. Log in again as ftpdemo. You should see the same files. We're wrapping things up at this point.

Create a Chef Role

Now, we want to integrate these two recipes into a role. This makes replication in the future much simpler.

Create a file in the roles directory called vsftpd-demo.rb

It should look like this:

name "vsftpd-demo"

description "role to install and configure basic VSFTPD daemon for demo"

# run_list reconstructed here; adjust it to match your cookbook names
run_list "recipe[vsftpd]"

env_run_lists "_default" => []

Upload the new role with this command: 

 knife role from file vsftpd-demo.rb

Edit the node again with EDITOR=vim knife node edit vsftpd-demo on your local machine.

Change the run list to:


Run sudo chef-client in the virtual machine.

Cyberduck will allow you to upload, download, and view files, while the browser should only allow you to download and view files.

Congratulations! You just Cheffed FTP!



What is DevOps?

DevOps, a portmanteau of 'development' and 'operations', is a concept that refers to anything empowering organizations to deliver high-quality products to production while maintaining flexibility within a reliable system.

Traditionally, a DevOps engineer role is seen as a blend of a Systems Administrator (Ops) and Software Developer (Dev) role. A DevOps engineer is often involved in all parts of the software / infrastructure lifecycle – from planning, to coding, to testing, to deployment, to operations.

Although DevOps is often thought of as only a movement within IT, it can be applied in organizations that produce hardware, software, and hardware/software hybrids.

I think of DevOps as a process of collaboration across the traditional IT roles (Development, Operations, Quality Assurance, and Security) to deliver functional solutions that improve entire systems. Common examples of this would be infrastructure-as-code, monitoring systems, and automated testing processes. The DevOps movement arose from software companies of huge scale, but it is common in very small organizations as well.

Enabling & Improving World-Class Systems

World-class systems successfully implement DevOps, whether an organization calls it that or something else. A world-class system must meet three requirements:

  1. Highly scalable
    A world-class system must be prepared to scale up to billions of users, and have an infrastructure that can be adjusted to support that. It doesn't necessarily need to have billions of users today, but it must be prepared to scale that large.
  2. Provide superior value
    Every system delivers a great idea that solves some kind of problem. A world-class system delivers a great product to the end user. The product may be tangible (a package from Amazon arriving at a doorstep) or intangible (a great article from Wikipedia that helps someone learn something new).

  3. Flawless experience of the product for users / customers
    In order to reach a certain level of scalability, a system must keep its users. Every experience a user or customer has with a product must satisfy them so they come back again. This does not necessarily mean a beautiful user interface, but an experience that keeps users coming back. Reddit and Pinterest look very different, but both retain a great many users.

    No one loves using a website where they feel frustrated by constant errors, or where they cannot update their account information because services are down, again. Very few users or customers would stay loyal to a company that charges them incorrectly for services or products, or leaks their personal information.

Why DevOps?

A DevOps culture allows a system to change quickly, stay reliable, and still deliver high quality to the end user. Successful implementation of DevOps saves organizations time, money, and morale while empowering them to focus on the things that matter most to them. At Ultralinq, we are seeking to create a DevOps culture on our engineering team whilst moving towards greater efficiency, collaboration, and quality.
