Simplifying Argo Tunnels (Part 1)
As you have probably noticed, I am a big fan of the features and tools that Cloudflare offer: using cloudflared to secure your DNS queries, making use of what is effectively a free dynamic DNS service, or even their Spectrum service, which lets you proxy basically any port for a specific service.
One thing I have touched on lightly in the past is using Argo Tunnels. This service lets you run a small daemon locally that connects outbound from your network to Cloudflare, allowing you to expose an internal application via their network without punching holes in your firewall. Traditionally this would have required a VPN, but by coupling a tunnel with Access you can lock your apps down to only the users or groups that should have access.
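To make that a little more concrete, the simplest possible use of the client, once you have authenticated with cloudflared login, is a single ad-hoc tunnel for one application, along the lines of the command below (the hostname and origin URL are placeholders for illustration):

cloudflared tunnel --hostname app.example.com --url http://localhost:8080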
Basically, the only things you need to use this service are outbound connectivity to the Cloudflare IP ranges on TCP port 7844 and the Argo Tunnel client, which is the same cloudflared client used for DNS over HTTPS.
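If your network has locked-down egress, it is worth confirming that port 7844 is actually reachable before blaming the client. A rough sanity check might look something like the following; the edge hostname shown is one Cloudflare has documented for Argo Tunnel (check their current documentation), and the ufw rule is only an illustration for hosts that filter their own outbound traffic:

# check that the Argo Tunnel edge is reachable on TCP 7844
nc -zv region1.argotunnel.com 7844
# if outbound traffic is filtered on the host itself, open the port (ufw example)
sudo ufw allow out proto tcp to any port 7844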
The client itself has changed quite a bit since I first started using it. Traditionally, if you wanted to surface multiple internal applications from a single endpoint you had to run multiple instances of the client. This had its drawbacks: each instance required its own configuration file, which could become a little messy at times. So how did this work in the past? Using a systemd template unit file, you reference a specific configuration file to build each instance.
ubuntu@dns1:~$ cat /etc/systemd/system/cfd@.service
[Unit]
Description=Argo Tunnel (%i)
After=network.target
[Service]
TimeoutStartSec=0
Type=notify
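# %i is replaced with the instance name, e.g. "service1" when started as cfd@service1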
ExecStart=/usr/bin/cloudflared --config /etc/cloudflared/%i.yml --origincert /etc/cloudflared/cert.pem --no-autoupdate
Restart=always
RestartSec=5s
[Install]
WantedBy=multi-user.target
ubuntu@dns1:~$
In the above example, I would place the below configuration in a file called /etc/cloudflared/service1.yml
ubuntu@dns1:/etc/cloudflared$ cat service1.yml
hostname: service1.seamoo.se
url: http://192.168.1.1
logfile: /var/log/cloudflared-service1.log
no-tls-verify: yes
ubuntu@dns1:/etc/cloudflared$
Upon doing this, I could simply start the service and enable it to run at boot by issuing:
systemctl start cfd@service1
systemctl enable cfd@service1
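One detail worth noting: if the template unit has only just been created, systemd needs a daemon-reload before it will recognise the instance, and checking on a running instance is just a matter of asking systemd about it. A second application means repeating the whole pattern with another configuration file (service2.yml below is purely illustrative):

systemctl daemon-reload
systemctl status cfd@service1
journalctl -u cfd@service1

# a second app means another config file and another instance
systemctl start cfd@service2
systemctl enable cfd@service2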
Simple enough? Sure, but this doesn’t scale well. You end up with many instances of the cloudflared client running, each logging to a different log file and reading from a different configuration file. It easily becomes a bit of a mess, and I will happily admit I made many mistakes wondering “What did I call that service again, and what did it actually point to?”
Roll in Named Tunnels in Part 2