Automating Infrastructure with Ansible

Run Ansible playbooks over Twingate SSH. No SSH keys in your inventory. No bastion ProxyCommand hacks. Every task recorded.

Ansible connects to hosts over SSH. That’s its whole model: open an SSH session, run a module, close the session. This makes SSH key management a major operational headache for Ansible at scale. Keys get scattered across inventories, bastion ProxyCommand configurations get fragile, and nobody can tell you which key authenticated which playbook run last Thursday.

Twingate removes the key management problem entirely. Ansible calls the system ssh binary, the Twingate Client routes the connection, and the Gateway handles certificate-based authentication tied to the operator’s identity. Your inventory files contain hostnames, not key paths. Your ansible.cfg doesn’t need ProxyCommand entries. And every task Ansible runs is captured in a session recording linked to whoever triggered the playbook.

[ Ansible ] ── ssh ── Twingate Client ── Connector ──> [ SSH Gateway ] ──> [ Target Hosts ]
<====== Twingate Auth & Certificate ======>
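Because Ansible shells out to the system ssh binary, no special connection plugin is required. If your environment overrides the SSH executable, you can pin it back to the system OpenSSH in ansible.cfg — a sketch, assuming the usual /usr/bin/ssh path (this is the default, so the setting is normally unnecessary):

```ini
[ssh_connection]
# Point Ansible at the system OpenSSH binary so the Twingate Client
# can route the connection. Shown only for environments that
# override the default.
ssh_executable = /usr/bin/ssh
```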

Configure Ansible

Step 1: Create ansible.cfg

Create an ansible.cfg file in your project directory:

[defaults]
inventory = ./inventory.ini
host_key_checking = False
interpreter_python = auto_silent

Setting host_key_checking = False skips the fingerprint prompt on first connection. If you prefer to verify host keys, enable Auto-sync SSH Server Configuration in the Twingate Client instead. The Client populates ~/.ssh/known_hosts with the SSH CA’s public key, which lets your SSH client trust all Gateway-issued certificates without TOFU prompts.
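With Auto-sync enabled, the Client writes a single CA entry rather than per-host keys. A cert-authority line in ~/.ssh/known_hosts looks roughly like this — the host pattern and key material below are illustrative; the real entry is written by the Twingate Client:

```
# Trust any host certificate signed by this CA for *.int hosts
# (hypothetical pattern; actual key material comes from the Client)
@cert-authority *.int ssh-ed25519 AAAA...example-ca-public-key
```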

Step 2: Define your inventory

Create an inventory.ini file. The hostnames must match the SSH Resources configured in the Twingate Admin Console:

[backend]
api-backend-1.int
api-backend-2.int

[database]
postgres.int

Group your hosts in whatever way makes sense for your infrastructure. Ansible groups don’t need to match Twingate Groups, but aligning them simplifies reasoning about who can run what.
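Standard INI inventory features such as child groups and group variables work unchanged. As a sketch, a hypothetical "production" parent group spanning both groups above:

```ini
# inventory.ini — "production" is an illustrative parent group
[production:children]
backend
database

[production:vars]
ansible_user=deploy
```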

Step 3: Verify connectivity

Test that Ansible can reach your hosts through Twingate:

ansible api-backend-1.int -m ping

You should see:

api-backend-1.int | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

Test an entire group:

ansible backend -m ping

This isn’t an ICMP ping. Ansible’s ping module opens an SSH connection to each host, runs a small Python script, and confirms the round trip worked. If this succeeds, your Ansible-to-Twingate-to-host path is working.

Run a playbook

Create a playbook file called playbook.yml:

- name: Basic server setup
  hosts: backend
  become: true
  tasks:
    - name: Ensure curl is installed
      apt:
        name: curl
        state: present

    - name: Check uptime
      command: uptime
      register: uptime_result

    - name: Print uptime
      debug:
        msg: "{{ uptime_result.stdout }}"

Run it:

ansible-playbook playbook.yml

Ansible opens SSH sessions to each host in the backend group, installs curl if missing, checks uptime, and prints the result. Every one of those SSH sessions goes through the Twingate Gateway with certificate-based auth. No SSH key file referenced anywhere.
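Ansible’s standard check mode also works unchanged over Twingate, which is useful for previewing what a playbook would do before committing to it:

```shell
# Dry run: report what would change without modifying hosts,
# and show diffs where modules support them
ansible-playbook playbook.yml --check --diff
```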

Transfer files

Ansible’s copy module transfers files over the same SSH connection. Create a playbook:

- name: Copy file to server
  hosts: api-backend-1.int
  tasks:
    - name: Copy config file
      copy:
        src: ./app-config.yml
        dest: /etc/myapp/config.yml

Run it:

ansible-playbook playbook.yml

For larger transfers, Ansible uses SFTP or SCP under the hood (configurable via transfer_method in ansible.cfg). Both work transparently through Twingate.
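The transfer_method setting lives in the [ssh_connection] section of ansible.cfg. A sketch forcing SFTP:

```ini
[ssh_connection]
# Force SFTP for file transfers; the default "smart" tries
# sftp first and falls back automatically
transfer_method = sftp
```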

Session recording

Every SSH session Ansible opens is recorded by the Gateway. This means every apt install, every file copy, every command module invocation is captured in an asciicast recording tied to the identity of whoever ran the playbook.

For a playbook that touches 20 hosts, you get 20 individual session recordings, each linked to the authenticated operator. This is useful for compliance audits where you need to prove exactly what automation did, on which server, and who triggered it.

Recordings are exported to stdout on the Gateway and stay on your infrastructure. Forward them to your SIEM, archive in S3, or replay in a browser. See the SSH session recording guide for details on configuring forwarding.

Best practices

Align inventory groups with access policies. If your Twingate Groups separate production from staging, mirror that in your Ansible inventory. This makes it obvious when a playbook targets hosts the operator might not have access to.

Scope become carefully. Twingate authenticates the SSH session, but become: true escalates to root on the target host. Limit become to tasks that actually need it rather than applying it at the play level.

Use service accounts for scheduled automation. For playbooks triggered by CI/CD or cron, use a Twingate Service Account with a headless Client. Apply time-based access policies to limit when the service account can connect. This keeps automated runs auditable without sharing human credentials.
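As a sketch, a nightly run on a host with a headless Client might be scheduled like this — the user, paths, and schedule below are all hypothetical:

```shell
# /etc/cron.d/nightly-ansible — hypothetical schedule and paths.
# Runs as the automation user whose Twingate Service Account is active.
0 2 * * * deploy /usr/local/bin/ansible-playbook /opt/automation/playbook.yml >> /var/log/ansible-nightly.log 2>&1
```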

Pin the remote_user in your inventory. If your SSH Resources map to a specific username, set ansible_user in the inventory file or group vars rather than relying on the local system username:

[backend:vars]
ansible_user=deploy

Troubleshooting

ansible ping returns UNREACHABLE

Symptom: ansible api-backend-1.int -m ping fails with UNREACHABLE.

Cause: The Twingate Client isn’t running, you don’t have access to the SSH Resource, or the hostname doesn’t match the Resource name.

Fix:

  • Verify the Twingate Client is running and connected.
  • Check that you have access to the SSH Resource in the Admin Console.
  • Confirm the hostname in your inventory matches the Resource address exactly.
  • Test the raw SSH connection: ssh api-backend-1.int. If this fails, the issue is with Twingate, not Ansible.

Playbook hangs on “Gathering Facts”

Symptom: The playbook starts but stalls at the “Gathering Facts” step.

Cause: The SSH connection opens but Python isn’t available on the target, or the interpreter_python setting is wrong.

Fix:

  • Add gather_facts: false to your play temporarily to confirm SSH connectivity works.
  • Verify Python is installed on the target host.
  • Confirm interpreter_python = auto_silent is set in ansible.cfg.
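The fixes above can be combined into a minimal debug play that skips fact gathering and, via the raw module, avoids Python on the target entirely (play and task names are illustrative):

```yaml
# Hypothetical debug play: confirms the SSH path works without
# requiring Python on the target host
- name: Connectivity check without facts
  hosts: backend
  gather_facts: false
  tasks:
    - name: Run a command over plain SSH (no Python needed)
      raw: uptime
```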

Permission denied errors with become

Symptom: Tasks fail with “Permission denied” when using become: true.

Cause: The SSH user doesn’t have sudo privileges on the target host.

Fix:

  • Verify the SSH user can run sudo on the target: ssh api-backend-1.int sudo whoami.
  • If sudo requires a password, set become_ask_pass = true in ansible.cfg or pass --ask-become-pass on the command line.
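If password-based sudo is the norm across your fleet, the ansible.cfg equivalent of --ask-become-pass is a one-line setting:

```ini
[privilege_escalation]
# Prompt once for the sudo password at playbook start
become_ask_pass = True
```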

Next steps

Join us in the community subreddit to share your Ansible setup or ask questions.

Last updated 13 minutes ago