A while ago, I learned about inlets, a tool by Alex Ellis. It allows you to expose an endpoint on your internal network via a tunnel to an exit node. To reach your internal website, you navigate to the public IP and port of the exit node. Something like this:
Internet user --> public IP:port of exit node -- tunnel --> your local endpoint
On both the exit node and your local network, you need to run inlets. Let’s look at an example. Suppose I want to expose my Magnificent Image Classifier 😀 running on my local machine to the outside world. The classifier is actually just a container you can run as follows:
docker run -p 9090:9090 -d gbaeke/nasnet
The container image is big, so it will take a while to start. Once the container is running, navigate to http://localhost:9090 to see the UI. You can upload a picture to classify it.
So far so good. Now you need an exit node with a public IP. I deployed a small Azure B-series Linux VM (B1s; 7 euros/month). SSH into that VM and install the inlets CLI (yeah, I know piping a script to sudo sh is dangerous 😏):
curl -sLS https://get.inlets.dev | sudo sh
Now run the inlets server (from instructions here):
export token=$(head -c 16 /dev/urandom | shasum | cut -d" " -f1)
inlets server --port=9090 --token="$token"
The first line just generates a random token. You can use any token you want or even omit a token (not recommended). The second command runs the server on port 9090. It’s the same port as my local endpoint but that is not required. You can use any valid port.
TIP: the Azure VM had a network security group (NSG) configured, so I had to add an inbound rule allowing TCP port 9090.
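If you prefer the Azure CLI over the portal for that NSG change, `az vm open-port` does it in one command. A sketch, where the resource group and VM names are placeholders for your own setup:

```shell
# Open inbound TCP port 9090 on the NSG attached to the exit-node VM.
# "my-rg" and "my-exit-node" are placeholder names.
az vm open-port --resource-group my-rg --name my-exit-node --port 9090
```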
Now that the server is running, let’s run the client. Install inlets like above or use brew install inlets on a Mac and run the following commands:
export REMOTE="IP OF EXIT NODE:9090"
export TOKEN="TOKEN FROM SERVER"
inlets client \
  --remote=$REMOTE \
  --upstream=http://127.0.0.1:9090 \
  --token $TOKEN
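If you want the client to keep running and survive reboots, one option is a systemd unit. A minimal sketch, assuming the inlets binary lives at /usr/local/bin/inlets; the remote address and token values are placeholders:

```ini
# /etc/systemd/system/inlets-client.service (sketch; values are placeholders)
[Unit]
Description=inlets client tunnel
After=network.target

[Service]
ExecStart=/usr/local/bin/inlets client --remote=EXIT_NODE_IP:9090 --upstream=http://127.0.0.1:9090 --token=YOUR_TOKEN
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Enable it with sudo systemctl enable --now inlets-client.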
The inlets client will establish a web sockets connection to the inlets server on the exit node. The --upstream option is used to specify the local endpoint. In my case, that's the classifier container (nasnet-go).
I can now browse to the public IP and port of the inlets server to see the classifier UI. The inlets server shows the proxied requests in its logs.
I think inlets is a fantastic tool that is useful in many scenarios. I have used ngrok in the past, but it imposes limits unless you pay to remove them. Inlets, on the other hand, is fully open source and not limited in any way. Be sure to check out the inlets GitHub page, which has lots more details. Highly recommended!