Deploying Twingate to AWS EKS
by Keith Hubner

Please note: this guide involves creating resources that will add cost to your AWS account.

This guide assumes you have already deployed a private EKS cluster. For more information on setting this up, please see the official AWS documentation.

Creating the connector

Click “Deploy Connector” on one of the existing connectors, or set up a new one by clicking the add button. Then select Manual as the deployment method:

deploy connector deployment method

Scroll down and click “Generate Tokens”; this will require you to re-authenticate.

generate tokens

Copy the tokens and keep them in a safe place; you will need to enter these values later.

Deploying the connector to AWS

The location of the connector is very important: it needs outbound internet access to communicate with the Twingate service, as well as access to the EKS control plane. In this guide, I will be using one of the subnets in the VPC that was created as part of the EKS cluster creation process.

We will be deploying the Twingate connector as a container on AWS ECS (Fargate). If you don’t already have one, you will need to create an ECS cluster by following these steps.
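If you prefer the CLI to the console steps linked above, an ECS cluster can be created with a single command. This is a sketch: the cluster name twingate-ecs matches the example used later in this guide, and the region is a placeholder.

```shell
# Create an empty ECS cluster to host the Fargate service.
# "twingate-ecs" matches the --cluster value used later; replace <REGION>.
aws ecs create-cluster --cluster-name twingate-ecs --region <REGION>
```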

The instructions below are taken and modified from the Twingate documentation.

First, we need to create the task definition file. The easiest way to do this is with the AWS CLI, either on your local desktop or via AWS CloudShell.

You will need to replace the following:

  • Your TENANT_URL (this is the URL used to access Twingate)
  • The NAME you want to give the connector; for visibility, it’s recommended that this matches your connector name in Twingate
cat > taskdef.json << EOF
{
  "requiresCompatibilities": ["FARGATE"],
  "containerDefinitions": [
    {
      "name": "NAME",
      "image": "twingate/connector:1",
      "memory": 2048,
      "cpu": 1024,
      "environment" : [
        { "name" : "TENANT_URL", "value" : "https://<YOUR TWINGATE SUBDOMAIN>"},
        { "name" : "ACCESS_TOKEN", "value" : "eyJ0eXAiOiJEQVQiLCJh..."},
        { "name" : "REFRESH_TOKEN", "value" : "suoodqhy0niwjzpY_ki8..."}
      ]
    }
  ],
  "volumes": [],
  "networkMode": "awsvpc",
  "placementConstraints": [],
  "family": "twingate-connector-<NAME>",
  "memory": "2048",
  "cpu": "1024"
}
EOF

Then we can use this file to create the task definition:

Remember to replace the region with your own value.

aws ecs register-task-definition --region [REGION] --cli-input-json file://taskdef.json --output json
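To confirm the registration succeeded, you can list the revisions in the task definition family. The family name below follows the example above; remember to replace the region.

```shell
# List registered revisions for the connector's task definition family.
aws ecs list-task-definitions --family-prefix twingate-connector --region [REGION]
```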

Next we can use the task definition to launch the connector as a Fargate service. Again, you will need to replace the following values:

  • The name of the service within your cluster, where we have used twingate-connector below.
  • The name of the task definition you created above, where we have used twingate-connector-<NAME> below.
  • The subnet ID within your VPC where you would like to launch the service.
  • The security group ID you would like to apply to the connector.
  • The name of the ECS cluster you are launching the service within.
  • The region that the ECS cluster and task definition exist within.
aws ecs create-service --service-name twingate-connector --desired-count 1 --launch-type "FARGATE" --task-definition twingate-connector-<NAME> --network-configuration "awsvpcConfiguration={subnets=[subnet-mysubnet],securityGroups=[sg-mysg]}" --cluster twingate-ecs --region <REGION>

You will need to ensure the subnet you deploy this service into has outbound internet access.

When you run this command, you should see that the service has been created and after a few moments, the container is in a running state:

container state
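The same check can be done from the CLI. This sketch uses the example cluster and service names from this guide; substitute your own values and region.

```shell
# Show how many connector tasks the service is currently running.
aws ecs describe-services --cluster twingate-ecs --services twingate-connector \
  --region <REGION> --query 'services[0].runningCount'
```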

All being well, you should now also see that the connector in the Twingate admin portal is showing as connected:

connector state

Adding the EKS resource

Now that we have a connection between our connector in AWS and the Twingate service, we can go ahead and add the resource to Twingate.

From the network page, click the “Add Resource” button:

add resource

Then we can add a label and the name of the DNS endpoint for the cluster:

You can find the DNS endpoint for your cluster from the cluster information page.

dns endpoint
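If you prefer the CLI, the same endpoint can be retrieved with aws eks. The cluster name and region below are placeholders. The update-kubeconfig command also merges credentials for the cluster into your local kubeconfig, which you will need for the testing section that follows.

```shell
# Print the cluster's API server endpoint (the DNS name to add as a resource).
aws eks describe-cluster --name <CLUSTER_NAME> --region <REGION> \
  --query 'cluster.endpoint' --output text

# Merge access credentials for the cluster into your local kubeconfig.
aws eks update-kubeconfig --name <CLUSTER_NAME> --region <REGION>
```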

Then select which groups you would like to grant access to the resource:

select groups

Testing access

Once you have added the resource, you can test access. To verify that we don’t have access outside of the Twingate connection, run the following command:

Ensure the Twingate client is currently logged out and closed.

kubectl get nodes

You should see that you are unable to connect to your private cluster:

Unable to connect to the server: dial tcp connect: network is unreachable
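To separate a kubectl configuration problem from a genuine network one, you can also probe the API endpoint directly. The endpoint URL below is a hypothetical example; substitute your cluster's DNS endpoint.

```shell
# Probe the EKS API server's health-check path directly.
# Hypothetical endpoint URL -- replace with your cluster's DNS endpoint.
ENDPOINT="https://ABCD1234.gr7.eu-west-1.eks.amazonaws.com"

# -k skips TLS verification (EKS uses a private CA); --max-time bounds the wait.
if curl -sk --max-time 5 "$ENDPOINT/healthz" > /dev/null; then
  STATUS="reachable"
else
  STATUS="unreachable"
fi
echo "API endpoint is $STATUS"
```

With the Twingate client disconnected this should report unreachable; after logging in, it should flip to reachable.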

Then open your Twingate client and log in. Run the same command again, and all being well you should get information back from the cluster:

NAME                                STATUS   ROLES   AGE   VERSION
eks-nodepool1                       Ready    agent   38m   v1.22.6

You now have secure access to your Kubernetes API without requiring any public access.
