Installing the Antrea CNI Plugin and Its Flow Tables

"Antrea" is a CNI plugin for Kubernetes developed as open source by VMware. Its defining feature is that it uses Open vSwitch (OVS) as the data path. According to the post that announced the project in 2019, this brings the following benefits:

  • Performance … OVS outperforms iptables, and the gap widens as the number of rules grows
  • Portability … OVS is supported on Linux, Windows, and other operating systems
  • Operability … existing OVS tooling and expertise can be reused for troubleshooting and monitoring
  • Flexibility and extensibility … OVS makes it easy to integrate new features

In this post, we build a Kubernetes cluster with Antrea as the CNI plugin and look at how the overlay network and network policies are put together.

The setup used here is:

  • 3x AWS EC2 instances (one master, two worker nodes)
  • kubeadm
  • Antrea (v0.9.1)

1. Creating the Kubernetes cluster

Since managed services such as EKS and GKE do not yet support an Antrea-based overlay network, we build our own Kubernetes cluster on EC2 instances. The instances are t2.medium, and the security group is configured to allow SSH from outside as well as all traffic between the nodes.

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

Once the master node has been initialized, save the KUBECONFIG as instructed by the output. Then run the sudo kubeadm join... command shown on screen on each worker node.

At this point, run kubectl get node and confirm that three nodes are listed and that their status is Not Ready.

Question: Why is each node "Not Ready"?

Answer: When a node's kubelet is started with CNI networking enabled ( --network-plugin=cni ), the CNI binaries and configuration files must be present in specific directories. On a kubeadm-built cluster, the binaries exist under /opt/cni/bin/, but there is no CNI configuration file in /etc/cni/net.d/ yet, so the node never becomes Ready.
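A quick way to see this on a node is to list both directories (a minimal sketch; on a fresh kubeadm node, /etc/cni/net.d/ is expected to be empty or missing):

```shell
# List the kubelet's CNI directories; "(missing)" marks a directory
# that does not exist yet.
for d in /opt/cni/bin /etc/cni/net.d; do
  echo "== $d =="
  ls "$d" 2>/dev/null || echo "(missing)"
done
```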

2. Installing Antrea

Installing Antrea is as simple as applying its manifest:

kubectl apply -f https://github.com/vmware-tanzu/antrea/releases/download/v0.9.1/antrea.yml

This creates the Kubernetes objects Antrea needs. The official documentation covers the architecture in detail, but the main objects are the "Antrea Controller", which acts as the controller, and the "Antrea Agent", which is deployed to every node as a DaemonSet and acts as the CNI plugin.

Now run kubectl get node again and confirm that every node's status is Ready.

Question: Why did each node become "Ready"?

Answer: When an antrea-agent pod is placed on a node, an initContainer named "install-cni" runs first. It places Antrea's configuration file in /etc/cni/net.d/ and Antrea's binary in /opt/cni/bin/. The kubelet then judges the CNI plugin ready for use and reports "Ready" to the API server.

As the diagram shows, antrea-agent also creates the Open vSwitch bridge, along with an interface named antrea-gw0 for host connectivity and one named antrea-tun0 for the overlay.

3. Creating Pods and testing connectivity

Create a namespace for the test.

$ kubectl create ns antrea-demo
namespace/antrea-demo created

Create the Nginx servers.

$ kubectl get -n antrea-demo pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE              NOMINATED NODE   READINESS GATES
nginx-6799fc88d8-gbhs2   1/1     Running   0          29s   192.168.2.4   ip-172-16-1-153   <none>           <none>
nginx-6799fc88d8-hrktw   1/1     Running   0          29s   192.168.1.4   ip-172-16-1-174   <none>           <none>

Create a client and confirm that it can connect to the Nginx servers.

$ kubectl run -n antrea-demo access --image=busybox -- sleep infinity
pod/access created
ubuntu@ip-172-16-1-14:~$ kubectl get pod/access -n antrea-demo -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP            NODE              NOMINATED NODE   READINESS GATES
access   1/1     Running   0          27s   192.168.1.6   ip-172-16-1-174   <none>           <none>
$ kubectl exec -it access -n antrea-demo -- sh
/ # wget -q -O - 192.168.2.4
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
* snipped *
</html>
/ # 

Question: There is no route for the Pod IP range (192.168.0.0/16) outside the cluster (in this case, in the AWS VPC). So why can the Pods reach each other?

Answer: Traffic from a Pod arrives at Open vSwitch and is then encapsulated in Geneve. The packet that leaves the node therefore has "source: the eth0 address of the host running the client Pod, destination: the eth0 address of the host running the Nginx Pod", so the VPC can route it normally.
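As a side note, Geneve encapsulation adds a fixed overhead to every packet: an outer IPv4 header, a UDP header (Geneve uses UDP port 6081), the 8-byte Geneve base header (assuming no options), and the inner Ethernet header. The arithmetic, as a sketch:

```shell
# Geneve encapsulation overhead, assuming IPv4 and no Geneve options:
outer_ipv4=20; udp=8; geneve=8; inner_eth=14
echo "overhead: $((outer_ipv4 + udp + geneve + inner_eth)) bytes"
echo "pod MTU on a 1500-byte link: $((1500 - outer_ipv4 - udp - geneve - inner_eth))"
```

This is why pod interfaces end up with a reduced MTU (1450 on a standard 1500-byte network) when Geneve encapsulation is in use.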

Now let's walk through, step by step, how this communication is realized. See the diagram below for the current state of the cluster.

$ kubectl exec -n kube-system antrea-agent-ct6tx -c antrea-ovs -- ovs-vsctl -- --columns=name,ofport list Interface
name                : access-b675ad
ofport              : 7

name                : antrea-gw0
ofport              : 2

name                : nginx-67-cb2eac
ofport              : 5

name                : antrea-tun0
ofport              : 1

name                : coredns--6b79cb
ofport              : 3
$ kubectl exec -n kube-system antrea-agent-jcldl -c antrea-ovs -- ovs-vsctl -- --columns=name,ofport list Interface
name                : antrea-tun0
ofport              : 1

name                : nginx-67-b5d4c8
ofport              : 5

name                : coredns--7be79d
ofport              : 3

name                : antrea-gw0
ofport              : 2

From the client Pod to the external network

table=0 and table=10 classify the input port, and table=70 prepares the packet to enter the overlay tunnel:

  • TTL adjustment
  • Setting the overlay tunnel destination … the destination node's eth0 address (172.16.1.153) is stored in NXM_NX_TUN_IPV4_DST[] as 0xac100199
  • Storing 0x1 in NXM_NX_REG1

table=105 performs connection tracking, and table=110 then outputs the packet to the port number that table=70 stored in REG1; here that is 1, so the packet leaves via antrea-tun0.
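The value stored in NXM_NX_TUN_IPV4_DST is simply the destination node's IP address written in hexadecimal, one octet per two hex digits; you can reproduce it by hand:

```shell
# 172.16.1.153, octet by octet, as hex: ac.10.01.99
printf '0x%02x%02x%02x%02x\n' 172 16 1 153   # -> 0xac100199
```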

On the external network

When the packet goes out onto the external network, it is encapsulated in Geneve.

From the external network to the destination Pod

table=0 classifies the packet as arriving from antrea-tun0, and table=70 rewrites the destination MAC address based on the destination IP address and adjusts the TTL. table=80 then prepares the packet for output to the port the Pod is attached to:

  • Storing 0x5 in NXM_NX_REG1
  • Storing 0x1 in NXM_NX_REG0[16]

table=105 performs connection tracking just as on the sending side, and table=110 outputs the packet to the port that table=80 stored in REG1; here that is 5, so the packet leaves via the veth pair the Nginx Pod is attached to.

4. Applying a network policy

Restricting unnecessary traffic between Pods is important both to prevent problems and to minimize the damage if a problem does occur.

The network policies Antrea provides can filter at OSI L3 and L4, so they can be thought of as a conventional firewall. Here, following the zero-trust best practice, we deny all traffic first and then allow only what is needed.

First, configure the namespace so that Pod-to-Pod traffic within it is denied.

$ kubectl create -f - <<EOF
> kind: NetworkPolicy
> apiVersion: networking.k8s.io/v1
> metadata:
>   name: default-deny
>   namespace: antrea-demo
> spec:
>   podSelector:
>     matchLabels: {}
> EOF
networkpolicy.networking.k8s.io/default-deny created

Next, allow traffic from Pods with the label run=access to Pods with the label app=nginx.

$ kubectl create -f - <<EOF
> kind: NetworkPolicy
> apiVersion: networking.k8s.io/v1
> metadata:
>   name: nginx-access
>   namespace: antrea-demo
> spec:
>   podSelector:
>     matchLabels:
>       app: nginx
>   ingress:
>     - from:
>       - podSelector:
>           matchLabels:
>             run: access
> EOF
networkpolicy.networking.k8s.io/nginx-access created

As shown below, the client can still reach Nginx without any problem.

$ kubectl exec -n antrea-demo access -- wget -q -O - 192.168.1.3
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Now, let's look at how this network policy is implemented. The following is an excerpt from the Open vSwitch flow table on the node running the Nginx Pod, the destination of the access.

Traffic entering from antrea-tun0 at table=0 is handed the output interface information in table=70 and table=80, exactly as before the network policy was applied. table=90 is the table that implements the network policy: a conjunction matches both the register that was set alongside the output interface in the earlier tables and the source address, here the busybox Pod's IP, and on a match the packet is resubmitted to table=105. From there on, the flow is the same as without a network policy: connection tracking in table=105 and output to the egress interface in table=110. If, for example, the source does not match the conjunction, the priority=0 drop rule in table=90 discards everything.
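To inspect the policy table yourself, you can dump table=90 from the agent on the destination node (a sketch; the agent pod name is taken from the earlier output, and br-int is Antrea's OVS bridge name):

```shell
# Dump the network-policy flows on the node running the Nginx pod;
# requires a running cluster, so this is illustrative only.
kubectl exec -n kube-system antrea-agent-jcldl -c antrea-ovs -- \
  ovs-ofctl dump-flows br-int table=90
```

Flows matching one half of the conjunction carry conjunction(id, k/2) actions, while the flow matching the combined conj_id resubmits to table=105, and the priority=0 entry is the drop.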


Networking, with Pod IPs, Service IPs, and, depending on the CNI, overlays and network policies, is one of the parts that makes learning Kubernetes clusters hard. Network policy in particular is implemented with iptables by many CNIs, and understanding that flow for troubleshooting has always taken real effort. By using Open vSwitch, Antrea lets you inspect routing and network policy in one place, which should lower the initial learning cost. Octant can reportedly visualize this in a GUI as well, so I will try Octant in the next write-up.

How Kubernetes delivers packets inter-node – AWS EKS

In this post, I explore the network side of AWS EKS: which CNI plugin it uses by default, and how we can replace it with Calico.

Default EKS CNI plugin – Amazon VPC CNI plugin

By default, AWS EKS uses the Amazon VPC CNI plugin. It doesn't use any encapsulation to carry inter-node traffic, so overhead is kept to a minimum. However, as the name suggests, it only works with an AWS VPC and is hence AWS-specific. Let's take a look at how it works!

1. Initial State

The picture below depicts the initial state of an EKS cluster. Some pods are already deployed in the kube-system namespace, and a DaemonSet deploys an aws-node CNI plugin pod on every node.

2. Deploy a Pod

Let's deploy a pod and see what happens. I deployed an Nginx deployment with 3 replicas, and the result is shown below. Notably, a few things happen once a container is deployed on a node, all done by the Amazon VPC CNI plugin:

  • a secondary IP address is added onto the VPC ENI from the same VPC range
  • a veth pair is created, and a route entry pointing at that veth is added for the secondary IP address allocated to the pod.

3. Expose a deployment as a service

To access these pods, let's expose them as a service. In this demo, the Nginx pods, which listen on port 80, are exposed as a service on port 8080. Note that the AWS CNI plugin is not involved here; it is kube-proxy that modifies the node's netfilter rules.

4. Packets on the wire

Let’s launch a busybox container and try HTTP connection to the Nginx service.

  1. The container sends out the request packet addressed to the service (destination: NGINX_SERVICE_IP, destination_port: NGINX_SERVICE_PORT).
  2. The packet's destination is then rewritten according to the netfilter rules. As a result, the service IP address (in this case 10.100.122.140), which is unknown to the AWS VPC, never leaves the originating node.
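The shape of those kube-proxy rules can be sketched as below. These are sample lines, not captured from the demo cluster: the chain suffixes and <POD_IP> are placeholders, while KUBE-SERVICES / KUBE-SVC-* / KUBE-SEP-* are kube-proxy's standard chain names.

```shell
# Illustrative kube-proxy NAT rules for the Nginx service (sample output,
# not real cluster state):
cat <<'EOF'
-A KUBE-SERVICES -d 10.100.122.140/32 -p tcp -m tcp --dport 8080 -j KUBE-SVC-XXXX
-A KUBE-SVC-XXXX -j KUBE-SEP-YYYY
-A KUBE-SEP-YYYY -p tcp -m tcp -j DNAT --to-destination <POD_IP>:80
EOF
```

On a live node, sudo iptables-save -t nat | grep 10.100.122.140 would show the real entries.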

Replacing EKS CNI plugin with Calico

Amazon VPC CNI plugin works great in AWS EKS, however, it does not provide network policy by default. If you want to have a network policy, you can use calico to work with Amazon VPC CNI plugin by following this instruction on AWS.

In this demo, I will replace the CNI plugin with Calico. There is no particular reason to do so here other than to show how it works. You may find this useful if you want to avoid vendor lock-in, but you should carefully check compatibility before using it in production. Nevertheless, let's get started!

1. Initial State

If you follow the guide on Calico, the initial state of the cluster looks like below. Note that “calico-node” is deployed instead of aws-node here.

Another item to note is that an interface called vxlan.calico is created on each node. As you can see in the routing table, this VXLAN endpoint is used for communication with pods that reside on other nodes.

2. Deploy a pod

A veth pair is created, an IP address is allocated to the pod, and the routing table is updated with that specific IP address. Contrary to the Amazon VPC CNI plugin, these IP addresses are known only within the node, and nothing is modified in the AWS VPC.

3. Expose a service

This is the same as what we saw with the Amazon VPC CNI plugin: kube-proxy modifies netfilter to translate the service IP to the actual pod IP. We can see 10.100.235.49 is allocated as the service IP here.

4. Packets on the wire

Let's check the HTTP request from the busybox pod on 172.16.64.2. When it sends a request to the Nginx service (10.100.235.49:8080), the destination is first translated to the actual pod IP and port (172.16.38.4:80); the flow is the same as in the previous demo up to this point. Now, however, the packet is sent to the vxlan.calico interface, encapsulated in VXLAN, and sent over the wire. So, as the picture below shows, we only see VXLAN packets on the wire.

VXLAN does not provide encryption, though, so we can pass the “-d” option to see what is going on inside the packet. In this case, we can see the HTTP exchange happening between the actual pod IP addresses.

AWS Client VPN authentication with Gsuite

Following up on the previous post “How to run AWS Client VPN with Multi Factor Authentication”, this post covers another variation: authenticating clients with a different ID provider, this time GSuite. Most of the setup is the same as in the previous post, so this should be quite quick!

TLDR;

At this moment, it seems AWS Client VPN cannot use GSuite as an ID provider directly. To work around this, I used AWS SSO as an intermediary to glue GSuite and AWS Client VPN together. Similar to Okta, it offloads the authentication to GSuite, and you can use MFA for authentication.

Connect AWS SSO and GSuite

The first half of this post is based on the AWS guide “How to use G Suite as an external identity provider for AWS SSO” and walks through the integration between GSuite and AWS SSO. If you already have this set up, please skip to the next section.

1. Initial setup of AWS SSO and GSuite SAML APP

Follow the guide until “Manage Users and Permissions” and set up AWS SSO and SAML Application.

  1. Set up External Identity Provider in AWS SSO
  2. Set up SAML Application on GSuite

2. User registration on AWS SSO

At this moment, this integration doesn’t support SCIM, so the administrator needs to add users to AWS SSO manually. This can be automated using ssosync, but for simplicity we follow manual registration in this post.

The registration itself is quite simple, though. Click “Add User” in AWS SSO, and fill in the required fields so that the information matches the user’s GSuite account.

Click “Next”, then “Add User”. Since this user information is only required for AWS Client VPN authentication, we don’t need to grant any other permissions for now.

Connect AWS SSO and AWS Client VPN

1. Create an application on AWS SSO

Next, we configure AWS SSO to provide ID information for AWS Client VPN.

In AWS SSO dashboard, click “Application”, and “Add a new Application”, then select “Add a custom SAML 2.0 application”.

On the next page, enter any name you prefer, and:

  • Download AWS SSO metadata
  • In Application metadata, use below values:
    • Application ACS URL: http://127.0.0.1:35001
    • Application SAML audience: urn:amazon:webservices:clientvpn

Click “Save Changes”

Go to “Attribute mappings”, and create mappings as below:

  • Subject
    • string: ${user:subject}
    • format: emailAddress
  • NameID
    • string: ${user:email}
    • format: basic

Click “Save”.

Next, go to “Assigned users”, and select users who you want to grant access to.

2. Add AWS SSO as an ID provider

In IAM, click “Identity Providers” and “Create Provider”.

In Provider Type, select “SAML”. Name the provider, and select the metadata file you downloaded during the previous step.

Click “Next”, and “Create” to register AWS SSO as an ID provider.

3. Add AWS Client VPN Endpoint

The last step is almost exactly the same as in the previous post, except that we specify the AWS SSO IdP instead of Okta. Since the provider cannot be changed once the endpoint is created, you need to create a new endpoint even if you already have one created with Okta.

In VPC, go to “AWS Client VPN Endpoint” and “Create Client VPN Endpoint”. Use whatever parameters you prefer; the only difference is “SAML provider ARN”, where you need to select the AWS SSO provider you created in the previous step.

Once created, associate this endpoint to the subnet of your choice:

Then authorize which networks are accessible. We use “allow all” for the entire VPC segment (10.0.0.0/16) here for demo purposes, but in production you should carefully manage this access list.
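The same authorization can also be done from the CLI (a sketch; the endpoint ID is a placeholder):

```shell
# Authorize all users to reach the whole VPC range; in production,
# scope --target-network-cidr down to what clients actually need.
aws ec2 authorize-client-vpn-ingress \
  --client-vpn-endpoint-id cvpn-endpoint-0123456789abcdef0 \
  --target-network-cidr 10.0.0.0/16 \
  --authorize-all-groups
```

A /16 covers 65,536 addresses, which is why a tighter CIDR is preferable in production.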

Connection Test

Now, everything is set up and ready to test. Download the client configuration from AWS management console, and load it to the AWS client VPN software on your local machine.

Once connected, your default browser opens and directs you to the Google authentication page.

Once you provide all the credentials correctly, you are now connected to AWS.

You can then access your internal server via its internal IP address, as well as any other resources.


AWS is adding features constantly, and a direct integration may become possible in the future. Please do let me know if you know of such a feature 🙂

MFA Client VPN for GCP using pfSense

In the previous post, “AWS Client VPN with multi-factor authentication”, I introduced how to deploy a client VPN in AWS using an AWS managed service. This time I introduce how to do “nearly” the same thing in GCP. GCP, however, doesn’t have its own managed client VPN service, and the deployment steps are quite different.

What you can achieve after reading this post

  • Basic setup of pfSense in GCP to act as an openvpn server

What is the expected result

  • easy user management on pfSense
  • multi-factor-authenticated client VPN to connect to GCP
  • full tunnel (once a client is connected to the VPN, all traffic passes through GCP, even internet access)

Walkthrough chart

  1. Launch pfSense in GCP.
  2. pfSense basic setup.
  3. FreeRADIUS installation and setup on pfSense
  4. Authentication server setup on pfSense
  5. OpenVPN setup on pfSense
  6. OpenVPN client exporter install
  7. Firewall rule to allow OpenVPN
  8. Add remote user(s)
  9. Setup client vpn software and test

1. Launch pfSense in GCP

There is no pfSense image available in the GCP marketplace, so you need to download the image from the official site and create an image in GCP.

Once you have created the image, use it to launch an instance. Most of the parameters can be left at their defaults, but be sure to make your network settings as in the image below:

On the command line, it would look like this:

$ gcloud compute --project <project_name> disks create "pfsense" --size "20" --zone "us-central1-a" --source-snapshot "pfsense-245p1" --type "pd-standard"

$ gcloud beta compute --project=<project_name> instances create pfsense --zone=us-central1-a --machine-type=n1-standard-1 --subnet=default --address=<allocated_global_ip> --can-ip-forward --tags=openvpn-server --disk=name=pfsense,device-name=pfsense,mode=rw,boot=yes,auto-delete=yes

Next, we need to connect to the serial interface of this instance to finish the pre-configuration. Select the pfSense instance and click “Edit”, then check “Enable connecting to serial ports”. Once checked and saved, you should be able to connect to serial port 1.

You will be asked several questions; answer as in the image below.

Soon after, you will be greeted with the pfSense menu. Select 8) Shell and type ifconfig vtnet0 mtu 1460; otherwise you will not be able to access the web interface, and the connection will be quite unstable.

As a last step, we configure the GCP firewall to allow access to the web GUI. Go to Firewall and allow tcp/443 (https) and udp/1194 (openvpn) for the target tag you allocated to pfSense. On the command line, it would look like below:

gcloud compute --project=<project_name> firewall-rules create allow-openvpn-server --direction=INGRESS --priority=1000 --network=default --action=ALLOW --rules=tcp:443,udp:1194 --source-ranges=0.0.0.0/0 --target-tags=openvpn-server

2. pfSense Basic Setup

Navigate to the pfSense URL at https://<your_external_ip>/, and you should be greeted with the pfSense setup wizard.

Once you finish the wizard, you are in the pfSense console!


3. FreeRADIUS installation and setup on pfSense

Go to the Package Manager and search for “freeradius”. Click “Install” and it will install FreeRADIUS along with all its dependencies.

Navigate to “Services” > “FreeRADIUS” to open freeradius configuration page.

We have two parts to configure here:

  • Interfaces … specifies on which interfaces the RADIUS server serves requests. We will create one interface for authentication and another for accounting.
  • NAS/Clients … specifies which clients the server accepts authentication requests from.

First, go to Interfaces and create two interfaces as shown below:

Next, create a client as shown in the image below. Do note the client shared secret; it will be used later.


4. Authentication server setup on pfSense

Next, we need to create authentication settings so that pfSense sends a request to the RADIUS server upon client connection.

Navigate to “System” > “User management” > “Authentication Server”. Fill in the fields as shown below. The shared secret is the one you used to set up the RADIUS server:


5. OpenVPN setup on pfSense

The authentication system is ready, so now we configure the VPN server. Navigate to “VPN” > “OpenVPN” > “Servers”, and click “+Add”.

Fill in the fields as shown in the image below:

Most of the parameters can be kept at their defaults; the ones you need to change are listed below:

  • Server mode: Remote Access (User Auth)
  • Backend for authentication: <authentication server you created in previous step>
  • IPv4 Tunnel Network: ANY network you are not using in your production network
  • Redirect IPv4 Gateway … all traffic, including internet browsing, will pass through this pfSense once the client is connected. This can be useful if the client needs a single static global IP for other access. If you don’t need this, uncheck it, and you can specify which subnets are pushed to the client.

6. OpenVPN client exporter install

Configuring clients one by one is a tedious task. pfSense has a package called OpenVPN Client Export, which can be installed through the package manager.

Once the installation succeeds, you will see a new “Client Export” tab in OpenVPN. In “Hostname Resolution”, select Other and fill in your global IP address in “Host Name”, then click “Save as default”. Once saved, click “inline configuration” and it will download the ovpn configuration file.

7. Add Firewall rule to allow OpenVPN

We need to create the rules listed below:

  • a firewall rule to allow OpenVPN connections to the firewall from the internet
  • a firewall rule to allow OpenVPN clients to connect to other destinations

For the first rule, create the firewall rule as below:

For the second rule, it depends on the usage. I made an extremely generous rule here that allows any communication. This means all traffic, even traffic to the internet, will be allowed.

8. Add remote user(s)

Now it’s time to add users. Navigate to “Services” > “FreeRADIUS”, and add users.

  • Username … user name of your choice
  • Password … <blank>
  • One-Time Password … checked
  • OTP Auth Method … Google-Authenticator
  • Init-Secret … Click “Generate OTP Secret”
  • PIN … PIN of your choice
  • QR Code … Click Generate QR Code, and ask user to scan this code to register this OTP on their phone.

9. Setup client vpn software and test

Once you set up the client with the file you downloaded in the last step, you can use the credentials to connect to the VPN. Please note the password is the PIN followed by the OTP.

If everything has been set up correctly, you should be able to communicate with internal resources as well as connect to the internet through GCP.

How to run AWS Client VPN with Multi Factor Authentication

In the previous post, I introduced AWS Client VPN with Simple AD. In May 2020, AWS introduced SAML federation. In this post, I will walk through the simplest deployment of AWS Client VPN with SAML federation.

What you can achieve after reading this post

  • Basic setup of Okta to integrate with AWS Client VPN
  • Basic setup of AWS Client VPN using SAML federation

What is the expected result

  • easy user management in Okta, not in AWS or a separate AD
  • Multi-factor authentication on AWS Client VPN
  • Managed client VPN access to your VPC environment

Walkthrough chart

  1. Generate certs and keys using easy-rsa, and register them on ACM
  2. Deploy AWS resources as in here
  3. Setup Okta to integrate with AWS Client VPN
  4. Deploy AWS Client VPN Endpoint
  5. Install AWS-provided client onto PC and test
  6. Delete all test resources.

1. Generate Certificate and Keys

You need to generate a certificate and keys for the server side of the client VPN. You can follow the official steps here.

git clone https://github.com/OpenVPN/easy-rsa.git
cd easy-rsa/easyrsa3
./easyrsa init-pki
./easyrsa build-ca nopass
./easyrsa build-server-full lab_server nopass
mkdir ~/temp_folder
cp pki/ca.crt ~/temp_folder/
cp pki/issued/lab_server.crt ~/temp_folder/
cp pki/private/lab_server.key ~/temp_folder/
cd ~/temp_folder/

Once they are generated, register them with AWS Certificate Manager (ACM). Please note you need to register them in the same region where you are going to have your VPN connection.

aws acm import-certificate --certificate file://lab_server.crt --private-key file://lab_server.key --certificate-chain file://ca.crt --region us-east-1

If it returns an ARN, you have successfully registered the certificate and key on ACM.


2. Deploy test AWS resources

I have prepared Terraform files here for you to set up the lab resources. Applying the configuration creates the resources below in your environment:

  • 1x VPC
  • 1x t2.micro EC2 instance with preloaded web server on ubuntu18.04

Please change the necessary parameters, especially those in the providers.tf file, to fit your needs.


3. Setup Okta to integrate with AWS Client VPN

If you don’t have Okta, you can start free trial here.

First, create an AWS Client VPN integration. Click “Application”, then select “Create New App”.

Then change the settings of the AWS Client VPN app as in the image below:

This step is optional, but if you like to have MFA, add the rule.

Next, create a user in Okta. You need to assign the AWS Client VPN app to this user, either individually or via a group.

In AWS, go to IAM and configure Okta as an identity provider.


4. Deploy AWS Client VPN Endpoint

In the Create Client VPN Endpoint wizard, you need to specify an IPv4 CIDR, which should be different from your existing VPC’s.

  • Server certificate ARN … select the ARN you received in step 1.
  • Authentication Options … select “Use user-based authentication” > “Federated authentication”
  • SAML provider ARN … select the identity provider ARN (Okta) you created in the previous step.
  • Enable split-tunnel … enable.

Once the endpoint is created, it needs to be associated with a subnet. Select the VPN endpoint and click “Associate”. Note that you will be charged once you associate the endpoint with a subnet.

  • VPC … VPC you want to use this VPN Endpoint in
  • Subnet … Subnet you want to use this VPN Endpoint in

Now it’s associated with the subnet. The last step is to authorize access to the network resources from VPN clients. You can fine-tune user access to specific resources based on user groups in Okta, but I simply chose “Allow access to all users” for now.


5. Install AWS-provided client onto PC and test

You can download the configuration file from the AWS console.

Download the AWS-provided VPN client from here and install it on your PC. Previously I used Tunnelblick, but it does not seem to work with federation as of June 2020.

After you install the AWS-provided VPN client, follow the manual to import the downloaded VPN config.

Once you click “Connect”, the default web browser automatically pops up and displays the Okta authentication page.

If you didn’t enable MFA, you will be connected to AWS now. If you do have MFA enabled in Okta, it will prompt you to either:

  1. Set up MFA on the spot if this is the user’s first time connecting
  2. Enter MFA token

If everything goes fine, you will see the message “Authentication details received, processing details. You may close this window at any time”, and you should be able to access the internal web server directly from your PC.

Secure and Easy Remote Work with Sophos UTM

I’m going to walk through how to set up a remote-access VPN on Sophos UTM. This post is intended for a minimum deployment and might not be all that scalable, but the baseline is as below:

  • Clientless – no need to install client software on PC
  • Secure – Multifactor authentication
  • Affordable – no need for extra service nor device

As the demand for remote access increases, IT needs to set up an environment quickly and still cost-effectively. Sophos UTM is one of the least expensive UTMs on the market that is ready for enterprise use.

In summary, the setup follows the steps below:

  1. configure users
  2. configure OTP
  3. (optional)configure user portal
  4. configure HTML5 VPN

First, you need to create the users. These usernames are used by remote users to log in to the portal.

We use a TOTP-based token this time for multi-factor authentication (MFA). You just need to enable it.

We need to create an HTML5 VPN portal entry for each user in this case. First, add a “network definition” for the user’s PC at the office; I’m using an IP address here, but alternatively you can use a DNS name. Second, add the remote user you created at step 1 to “Allowed users” so that only that user can access the PC. And that’s all for the Sophos UTM setup.

Ask users to access the URL “https://”, where they should log in with the remote username and password you created at step 1.

Once users log in to the portal, it should prompt them to register an OTP. Users can use any TOTP-based application; in my case I used Google Authenticator, which is available for free via the Play Store/App Store. Scan the QR code, and it should now prompt for the PIN.

Once done, users need to log in again, but this time the password is “the password created at step 1” + “the PIN on the TOTP app” (e.g. secretpassword123456). Users should now be able to see the user portal.

Click “HTML5 VPN Portal”, then click the name of the PC to connect to.

It pops up another window showing the PC’s desktop. Ask users not to shut down the PC; they should simply log off or close the window.


Some UTMs require a separate subscription for clientless VPN (e.g. Palo Alto), while Sophos UTM comes with most of these functions built into the box.

Please drop me a message if you encounter any problems. Thanks for reading!