Infrastructure Access
Twingate enables engineers and DevOps to manage and automate secure access to technical infrastructure, both on-premises and in the cloud.
Benefits of using Twingate
- Enhanced security. Twingate enables access to infrastructure without requiring it to be publicly exposed on the internet via a jump server, bastion host, or other endpoint.
- Fast deployment. Get up and running in under 15 minutes by deploying the Twingate connector on one of the devices in the network, without needing to redefine network settings or set up a VPN server.
- Programmatic configuration. Twingate supports Terraform and Pulumi and offers an Admin API to automate access controls.
- Granular permissions. Define access to individual resources with custom policies and groups, enabling narrow access permissions.
- Enable CI/CD workflows. Define narrow access permissions to internal infrastructure for automated services (e.g. Jenkins, CircleCI) hosted in the cloud.
- Unified access. Easily access multiple clouds or multiple environments (e.g. staging) at the same time.
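To illustrate the programmatic configuration point, the sketch below builds a request for Twingate's Admin API, which is a GraphQL endpoint. The endpoint URL pattern, the `X-API-KEY` header name, and the `remoteNetworks` query are assumptions based on common usage; confirm them against the Admin API documentation for your tenant.

```python
import json

# Hedged sketch: build (but do not send) an Admin API request that lists
# Remote Networks. The URL pattern, header name, and query shape are
# assumptions -- verify against the Admin API docs before relying on them.
def build_remote_networks_query(network: str, api_key: str) -> dict:
    """Return the URL, headers, and JSON body for a remoteNetworks query."""
    query = """
    query {
      remoteNetworks(first: 10) {
        edges { node { id name } }
      }
    }
    """
    return {
        "url": f"https://{network}.twingate.com/api/graphql/",
        "headers": {"X-API-KEY": api_key, "Content-Type": "application/json"},
        "body": json.dumps({"query": query}),
    }

req = build_remote_networks_query("acme", "example-token")
print(req["url"])
```

The same payload could be sent with any HTTP client; building it separately keeps credentials and transport concerns out of the query definition.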
Detailed Walkthroughs
Enhanced Security
- Access control for non-production environments
- Ephemeral access via Command Line Interface
- How to add Multi-Factor Authentication to all protocols (SSH, RDP, SQL, z/OS, etc.)
- Accessing Private Resources for Vendors & Contractors
Working with Private DNS
Enabling machine-to-machine communication & SaaS CI/CD workflows
- How to secure machine-to-machine communication using Service Accounts
- How to connect CircleCI and GitHub Actions to Private Resources
- How to connect GitHub Codespaces to Private Resources
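As a sketch of the CI/CD pattern above, a GitHub Actions job can authenticate to Twingate with a Service Account key before running steps that need private Resources. The action name and `service-key` input below follow Twingate's published GitHub Action, but treat them, the secret name, and the internal URL as assumptions to confirm against the walkthrough:

```yaml
# Hypothetical workflow steps: connect the runner via a Service Account key
# stored as a repository secret, then reach a private endpoint.
steps:
  - uses: twingate/github-action@v1
    with:
      service-key: ${{ secrets.TWINGATE_SERVICE_KEY }}
  - run: curl http://internal.example.com/health  # reachable only via Twingate
```

Because the Service Account is scoped to specific Resources, the runner gains access only to what the pipeline actually needs.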
Programmatic configuration
General access
- How to access multiple VPCs concurrently with a single network connection
- Access control for non-production environments
- Remotely access a coworker's development server
- Access private resources in Azure
- AWS Workspace
- Windows Start Before Logon & Using Active Directory with Twingate
Twingate & Kubernetes
Twingate can be deployed in Google Kubernetes Engine (GKE), Amazon EKS, and similar Kubernetes and microK8s deployments to address various use cases.
Your intended use case will dictate the best way to deploy Twingate Connectors or Clients, as well as the way you’d want to structure Resource definition and Group membership.
The following use cases are covered below:
- Managing a Kubernetes cluster using kubectl
- Exposing a service in a Kubernetes cluster
- Providing access to a specific service in a Kubernetes cluster that isn’t exposed
- Providing Kubernetes Pods access to private Resources in a separate environment
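For the kubectl use case, once the cluster's API server address is reachable as a Twingate Resource, no special tooling is required on the client side; the address below is a placeholder:

```shell
# Assuming the API server's private address has been added as a Resource
# and your kubeconfig already points at it, standard commands work unchanged:
kubectl get nodes

# Or target the private endpoint explicitly (placeholder address):
kubectl --server=https://10.0.0.5:6443 get nodes
```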
Notes on Connector deployment and Remote Network configuration:
- A Remote Network in Twingate defines a logical set of Resources that are accessible from the Connectors deployed in that Remote Network. If you are deploying a Connector within a K8s cluster (e.g. using Twingate's Helm Chart), it most likely makes sense to define a new Remote Network for Resource addresses that exist within your K8s cluster.
- If you're deploying Connectors using Helm, update the Twingate Helm Chart manually on a regular basis: upgrading a Connector does not automatically update the Helm Chart itself.
- For use cases where only public K8s service access is required, Twingate Connectors can also be deployed on a Linux host, either using Docker or as a native systemd service.
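A Helm-based Connector deployment along the lines described above might look like the following. The chart repository URL, chart name, and value keys are assumptions; verify them against the current chart's documentation before use:

```shell
# Hedged sketch: deploy a Connector into the cluster with Twingate's Helm
# chart. Network name and tokens below are placeholders.
helm repo add twingate https://twingate.github.io/helm-charts
helm repo update
helm upgrade --install twingate-connector twingate/connector \
  --set connector.network="acme" \
  --set connector.accessToken="$TWINGATE_ACCESS_TOKEN" \
  --set connector.refreshToken="$TWINGATE_REFRESH_TOKEN"
```

Running `helm upgrade --install` keeps the command idempotent, which also makes it a natural fit for the periodic chart updates mentioned above.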
See the Connector section for more detailed information.