Archive for Infrastructure

Using Kubernetes for Oauth2_proxy

In this tutorial we are going to set up a Kubernetes minion server that combines a basic guestbook app with oauth2_proxy.


Kubernetes is an open-source container orchestration and management system. Ultralinq has made a decision to move towards a microservices architecture, and Kubernetes will be one of our principal tools as we make this change.

 Let’s get started.

This tutorial assumes you have a functioning Kubernetes cluster. I did this in AWS.


Your Toolbox

  • Kubernetes
  • Docker
  • AWS
  • kubectl CLI tool
  • kube-aws CLI tool
  • oauth2_proxy


This tutorial is based on the Kubernetes guestbook example. I have added a few adjustments to this in a git repository, but for most of it, you can follow along with their documentation.

1. Set up Redis Deployment and Service

  •  Set Up Redis Master Deployment .yaml file
  •  Set Up Redis Master Service .yaml file
  •  Create the Redis Master Service and the Redis Master Deployment.

2. Check Services, Deployments, and Pods

  •  Using kubectl, check your services, pods, and deployments. You can also check the logs of a single pod. The instructions on how to do this are in the Kubernetes guestbook README.
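As a sketch, the checks above look like this with kubectl (run against your own cluster):

```shell
# List the services, deployments, and pods in the cluster
kubectl get services
kubectl get deployments
kubectl get pods
```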

3. Repeat Steps 1 & 2 for the Redis Slave Service and Deployment, as well as the Frontend Service and Deployment.

4. Create an additional External Elastic Load Balancer in AWS (or somewhere else).

  • This allows external traffic into our minion.
  • I’ll let you figure this one out.
  • PS: There is an appendix in the guestbook docs on this.

5. Set up Oauth2_proxy Service

  • I assume you know something about oauth2_proxy. If not, read the documentation.
  • The .yaml files for oauth2_proxy can be found in my git repository.
  • Set up an oauth2proxy_service.yaml file as follows:
apiVersion: v1
kind: Service
metadata:
  name: oauth2-proxy
  labels:
    app: oauth2-proxy
    tier: backend
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 4180
  selector:
    app: oauth2-proxy
    tier: backend

6. Create the Oauth2_proxy Service
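Assuming the file name from the previous step, the service is created with:

```shell
# Create the oauth2_proxy service from its .yaml definition
kubectl create -f oauth2proxy_service.yaml
```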

7. Set up Oauth2_proxy Deployment

  • I assume you know something about oauth2_proxy. If not, read the documentation.
  • Set up an oauth2proxy_deployment.yaml file as follows:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: oauth2-proxy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: oauth2-proxy
        tier: backend
    spec:
      containers:
      - name: oauth2-proxy
        # This image serves the proxy on port 4180
        image: estelora/oauth2_proxy
        volumeMounts:
        - mountPath: /etc/nginx/conf.d
          name: nginx-volume
        ports:
        - containerPort: 4180
        # Command sets up the oauth2-proxy command line args.
        # Please set these values for your project.
        command:
        - oauth2_proxy
        # Here is an example of service discovery.
        - --upstream=http://frontend
        - --client-secret="google-client-secret"
        - --redirect-url=
        - --cookie-secret="secret-chocolate-chip-cookie"
        # This value stays the same - this is an internal IP
        - --http-address=
      volumes:
      # Back the nginx-volume mount with a volume source that fits your setup
      - name: nginx-volume
        emptyDir: {}
  •  The port is set to 4180 in the container, service, and deployment.
  •  The upstream shows Kubernetes’ service discovery – the internal address is http://frontend.
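The --cookie-secret value shown is just a placeholder. A quick way to generate a real random secret (assuming openssl is installed) is:

```shell
# Print a random 32-byte, base64-encoded string to use as the cookie secret
openssl rand -base64 32
```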

8. Create the Oauth2_proxy Deployment

9. Debug Network Issues as necessary!

  •  You can adjust your network with .yaml on the Kubernetes side.
  •  You can use kubectl logs, and open a shell inside a Docker container in your cluster.
  • Adjust firewalls for both your services and external load balancers.
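A couple of concrete examples for the debugging bullets above (the pod name is a hypothetical placeholder; copy a real one from kubectl get pods):

```shell
# Stream the logs of a single pod
kubectl logs -f frontend-440143558-51nsk

# Open a shell inside a running container (instead of ssh-ing into it)
kubectl exec -it frontend-440143558-51nsk -- /bin/sh
```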

Kubernetes Glossary

  • Deployment: set of parameters for the desired state of pods
  • Minion: a server that performs work, configures networking for containers, and runs tasks assigned to containers.
  • Service: internal load balancer.
  • Node: provisioned hardware (in this case, a VM in the cloud).
  • Pod: container or group of containers that support each other to run tasks.
  • Service Discovery: lets you refer to other services by stable internal host names (like http://frontend) instead of hard-coded IPs.
  • Container: a Docker container or similar container image.

A visual

I made a little diagram to give a picture of how it works. The number of pods shown is not exactly what the actual files specify. Kubernetes is an amazing tool, but understanding all the pieces takes longer than it takes Kubernetes to spin up servers for you. 😉


How to Log Variables in Nginx

What is Nginx?

Nginx (pronounced ‘engine-x’) is a flexible HTTP server that can also be configured as a reverse proxy, mail proxy, or generic TCP/UDP proxy server. It was created by Igor Sysoev, a Russian software engineer. As of April 2016, according to Netcraft, it served or proxied about a fourth of the web’s busiest sites.

Although Nginx is popular, it can be tricky to get from initial setup to a complete, working configuration. I frequently use Nginx, and I would describe it as flexible yet finicky.

Logging as a Problem-Solving Tool

Setting up any kind of infrastructure is a black-box development process, which, in my opinion, requires a guess-and-check problem-solving approach. Logs are one of the principal sources of information in the check step, and are essential when refining Nginx.

Error.log and Access.log

This post focuses on access.log.

1. error.log

Default location: /var/log/nginx/error.log
  • Can only be configured by severity level.
  • The severity levels include: warn, error, crit, and alert.
  • warn is the lowest severity of these (it logs the most); alert is the highest (it logs the least).

2. access.log

Default location: /var/log/nginx/access.log
  • Writes information about client requests as they are processed by the server.
  • The default setting for this log is ‘combined’ format.

Setting Variables for access.log

You can log any set of Nginx variables.

Common Variables [listed alphabetically]:
  • $host
  • $http_host
  • $http_referer
  • $remote_addr
  • $request
  • $scheme
  • $server_name
  • $time_local
  • $upstream_addr

Set the Log Format in nginx.conf


log_format <name of format> '"$variable1" "$variable2" "$variable3"';


log_format debug '"[$time_local]" "$host"
"$http_host" "$server_name" "$request"
"$remote_addr" "$upstream_addr"';

Set Configurations at the Server Level in sites-enabled directory


access_log </path/to/access.log> <name of format>;

Example in Server Block:

server {
  access_log /var/log/nginx/access.log debug;
  # ... the rest of your server block ...
}

Happy logging!

Run the command below to see your access.log in real time:

tail -f /var/log/nginx/access.log
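Once lines are flowing, you can slice them with standard shell tools. This sketch uses a made-up combined-format line and awk to pull out the client address and status code:

```shell
# A sample combined-format access.log line (hypothetical values)
line='203.0.113.5 - - [10/Oct/2016:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "curl/7.47.0"'

# Field 1 is the client address, field 9 is the status code
echo "$line" | awk '{print $1, $9}'
# prints: 203.0.113.5 200
```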



Happy Path Chef: How to Set up VSFTPD with Chef on CentOS 6

Automated Infrastructure vs. Manual Infrastructure

There are many tutorials about how to configure a VSFTPD server by hand. However, manual infrastructure is neither easy to replicate nor easy to document. If you create a server with Chef or another infrastructure-as-code tool, you can reproduce it in a fraction of the time it would take to do by hand. Organizations’ infrastructure must change quickly while remaining flexible and reliable to meet market demands, and that makes automated infrastructure a necessity.

FTP-like servers allow you to receive and view files from anywhere. This tutorial is a walkthrough of the first iteration of a (much more) secure VSFTPD server for QA testing and validation. The server is a building block for more automated testing, and it makes any manual testing faster and more reliable. The final version of this server should be a huge timesaver for our QA Team 🙂


In this walkthrough, you will:

  • Set up a VSFTPD server with client-side access.
  • Login as a specific user.
  • Upload, download, and view files.

Your Toolbox

  • FTP Client application. (I like Cyberduck, the UI is nice)
  • Vagrant and Virtualbox.

Your Cookbooks

You will need the following cookbooks for this server:

Put these in your chef/cookbooks directory.

Getting Started

The first step is to get the cookbooks installed on our CentOS 6 machine.

Go to your local machine, and in your local chef directory, enter:

$ EDITOR=vim knife node edit vsftpd-demo

This will allow you to set up an initial run list to install your cookbooks.

Let’s set up an initial run_list with our two cookbooks. Set the run_list as shown below:



Save the file in vim.

On your virtual machine, enter:

$ sudo chef-client

Check your Progress in Cyberduck

This server functions with anonymous authentication. For security purposes, a user besides anonymous should be able to log into their home directory to view, upload, and download files. We need to adjust the vsftpd configuration to move past this insecure, anonymous version of FTP.

Configure VSFTPD with Attributes

Navigate to vsftpd/attributes/default.rb in your text editor or IDE. This is not the configuration file itself, but it is where Chef gets the information to create the configuration file.

To get all these things ready on the vsftpd side, set the default['vsftpd']['config'] section of the attributes/default.rb file in your cookbook like this:

default['vsftpd']['config'] = {
    'port_enable' => 'YES',
    'anonymous_enable' => 'NO',
    'local_enable' => 'YES',
    'chroot_local_user' => 'YES',
    'write_enable' => 'YES',
    'ascii_upload_enable' => 'YES',
    'ascii_download_enable' => 'YES',
    'local_umask' => '022',
    'dirmessage_enable' => 'YES',
    'connect_from_port_20' => 'YES',
    'listen' => 'YES',
    'background' => 'YES',
    'pam_service_name' => 'vsftpd',
    'userlist_enable' => 'YES',
    'tcp_wrappers' => 'YES',
    'pasv_enable' => 'YES',
    'pasv_max_port' => '50744',
    'pasv_min_port' => '50624',
    'pasv_address' => "#{node['ipaddress']}"
}

Save the file and upload the new cookbook from inside the chef directory of your local machine:

knife cookbook upload vsftpd

Check Yourself in Cyberduck

Open Cyberduck and try logging in as vagrant, with the password ‘vagrant’. Make sure you set the ‘Connect’ dropdown to ‘Active’. We are making an active FTP server for the purposes of this demo. 

Now you can view, upload and download files as the vagrant user, which is a step closer to our requirements.

Create a User

Although the vagrant user is a specific user, it isn’t a secret one. The standard vagrant password is easy to guess and easy to find. Let’s create another user with Chef.

Add the following lines to the recipes/default.rb file:

user 'ftpdemo' do
  # Create the user first so the directory resource below can set its owner
  supports :manage_home => true
  comment 'FTP Demo User'
  home '/home/ftpdemo'
  shell '/bin/bash'
  password 'ftpdemopass'
end

directory '/home/ftpdemo' do
  owner 'ftpdemo'
  group 'ftpdemo'
  mode '0755'
  action :create
end

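A note on the password attribute: on Linux, Chef’s user resource expects a hashed (shadow) password rather than plain text. One way to generate a SHA-512 shadow hash (assuming OpenSSL 1.1.1 or later) is:

```shell
# Produce a SHA-512 shadow hash for the demo password; output begins with $6$
openssl passwd -6 'ftpdemopass'
```

Paste the resulting hash into the password attribute instead of the literal string.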

On your local machine, run:

$ knife cookbook upload vsftpd

Run sudo chef-client again in the virtual machine.

Check yourself, again

Open Cyberduck and log in as user ‘ftpdemo’.

You should be able to upload, download and view files as user ftpdemo.

Check in the browser at ‘ftp://<ipaddress>’ and log in again as ftpdemo. You should see the same files. We’re wrapping things up at this point.

Create a Chef Role

Now, we want to integrate these two recipes into a role. This makes replication in the future much simpler.

Create a file in the roles directory called vsftpd-demo.rb

It should look like this:

name "vsftpd-demo"

description "role to install and configure basic VSFTPD daemon for demo"

run_list "recipe[vsftpd]"

env_run_lists "_default" => []
Upload the new role with this command: 

 knife role from file vsftpd-demo.rb

Edit the node again with EDITOR=vim knife node edit vsftpd-demo on the local machine.

Change the run list to:

"run_list": [
  "role[vsftpd-demo]"
]

Run sudo chef-client in the virtual machine.

Cyberduck will allow you to upload, download, and view files, while the browser should only allow you to download and view files.

Congratulations! You just Cheffed FTP!