Now that Azure Container Apps (ACA) is generally available, it is time for a quick guide. These quick guides show how to work with a service from the command line and illustrate its main features.
Prerequisites
- All commands are run from bash in WSL 2 (Windows Subsystem for Linux 2 on Windows 11)
- Azure CLI, logged in to an Azure subscription with the Owner role (use az login)
- ACA extension for the Azure CLI: az extension add --name containerapp --upgrade
- Microsoft.App namespace registered: az provider register --namespace Microsoft.App; this namespace has been used since March
- If you have never used Log Analytics, also register Microsoft.OperationalInsights: az provider register --namespace Microsoft.OperationalInsights
- jq, curl, sed, git
With that out of the way, let’s go… 🚀
Step 1: Create an ACA environment
First, create a resource group, Log Analytics workspace, and the ACA environment. An ACA environment runs multiple container apps and these apps can talk to each other. You can create multiple environments, for example for different applications or customers. We will create an environment that will not integrate with an Azure Virtual Network.
RG=rg-aca
LOCATION=westeurope
ENVNAME=env-aca
LA=la-aca # log analytics workspace name
# create the resource group
az group create --name $RG --location $LOCATION
# create the log analytics workspace
az monitor log-analytics workspace create \
--resource-group $RG \
--workspace-name $LA
# retrieve workspace ID and secret
LA_ID=`az monitor log-analytics workspace show --query customerId -g $RG -n $LA -o tsv | tr -d '[:space:]'`
LA_SECRET=`az monitor log-analytics workspace get-shared-keys --query primarySharedKey -g $RG -n $LA -o tsv | tr -d '[:space:]'`
# check workspace ID and secret; if empty, something went wrong
# in previous two steps
echo $LA_ID
echo $LA_SECRET
# create the ACA environment; no integration with a virtual network
az containerapp env create \
--name $ENVNAME \
--resource-group $RG \
--logs-workspace-id $LA_ID \
--logs-workspace-key $LA_SECRET \
--location $LOCATION \
--tags env=test owner=geert
# check the ACA environment
az containerapp env list -o table
Step 2: Create a front-end container app
The front-end container app accepts requests that allow users to store some data. Data storage will be handled by a back-end container app that talks to Cosmos DB.
The front-end and back-end use Dapr. This does the following:
- Name resolution: the front-end can find the back-end via the Dapr Id of the back-end
- Encryption: traffic between the front-end and back-end is encrypted
- Simplify saving state to Cosmos DB: using a Dapr component, the back-end can easily save state to Cosmos DB without getting bogged down in Cosmos DB specifics and libraries
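Under the hood, Dapr name resolution works through a local sidecar API: each app talks to its own sidecar, which routes the call to the target app by Dapr Id. A minimal sketch of what that looks like (assuming the default Dapr HTTP port 3500; the sample app actually uses the Dapr SDK rather than raw HTTP):

```shell
# the front-end reaches the back-end via the back-end's Dapr Id,
# addressed through the front-end's own local Dapr sidecar
DAPR_PORT=3500        # default Dapr HTTP port (assumption)
BACKEND_DAPR_ID=backend
INVOKE_URL="http://localhost:${DAPR_PORT}/v1.0/invoke/${BACKEND_DAPR_ID}/method/savestate"
echo $INVOKE_URL
# from inside the front-end container, this would call the back-end:
# curl -X POST -d '{"key": "mykey", "data": "123"}' $INVOKE_URL
```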
Check the source code on GitHub at https://github.com/gbaeke/super-api. For example, look at the code that saves to Cosmos DB.
For a container app to use Dapr, two parameters are needed:
- --enable-dapr: enables the Dapr sidecar container next to the application container
- --dapr-app-id: provides a unique Dapr Id to your service
APPNAME=frontend
DAPRID=frontend # could be different
IMAGE="ghcr.io/gbaeke/super:1.0.5" # image to deploy
PORT=8080 # port that the container accepts requests on
# create the container app and make it available on the internet
# with --ingress external; the envoy proxy used by container apps
# will proxy incoming requests to port 8080
az containerapp create --name $APPNAME --resource-group $RG \
--environment $ENVNAME --image $IMAGE \
--min-replicas 0 --max-replicas 5 --enable-dapr \
--dapr-app-id $DAPRID --target-port $PORT --ingress external
# check the app
az containerapp list -g $RG -o table
# grab the resource id of the container app
APPID=$(az containerapp list -g $RG | jq .[].id -r)
# show the app via its id
az containerapp show --ids $APPID
# because the app has an ingress type of external, it has an FQDN
# let's grab the FQDN (fully qualified domain name)
FQDN=$(az containerapp show --ids $APPID | jq .properties.configuration.ingress.fqdn -r)
# curl the URL; it should return "Hello from Super API"
curl https://$FQDN
# container apps work with revisions; you are now at revision 1
az containerapp revision list -g $RG -n $APPNAME -o table
# let's deploy a newer version
IMAGE="ghcr.io/gbaeke/super:1.0.7"
# use update to change the image
# you could also run the create command again (same as above but image will be newer)
az containerapp update -g $RG --ids $APPID --image $IMAGE
# look at the revisions again; the new revision uses the new
# image and 100% of traffic
# NOTE: in the portal you would only see the last revision because
# by default, single revision mode is used; switch to multiple
# revision mode and check "Show inactive revisions"
az containerapp revision list -g $RG -n $APPNAME -o table
Step 3: Deploy Cosmos DB
We will not get bogged down in Cosmos DB specifics and how Dapr interacts with it. The commands below create an account, database, and collection. Note that I switched the write replica to eastus because of capacity issues in westeurope at the time of writing. That’s ok. Our app will write data to Cosmos DB in that region.
uniqueId=$RANDOM
LOCATION=eastus # changed because of capacity issues in westeurope at the time of writing
# create the account; will take some time
az cosmosdb create \
--name aca-$uniqueId \
--resource-group $RG \
--locations regionName=$LOCATION \
--default-consistency-level Strong
# create the database
az cosmosdb sql database create \
-a aca-$uniqueId \
-g $RG \
-n aca-db
# create the collection; the partition key is set to a
# field in the document called partitionKey; Dapr uses the
# document id as the partition key
az cosmosdb sql container create \
-a aca-$uniqueId \
-g $RG \
-d aca-db \
-n statestore \
-p '/partitionKey' \
--throughput 400
Step 4: Deploy the back-end
The back-end, like the front-end, uses Dapr. However, the back-end uses Dapr to connect to Cosmos DB and this requires extra information:
- a Dapr Cosmos DB component
- a secret with the connection string to Cosmos DB
Both the component and the secret are defined at the Container Apps environment level via a component file.
# grab the Cosmos DB documentEndpoint
ENDPOINT=$(az cosmosdb list -g $RG | jq .[0].documentEndpoint -r)
# grab the Cosmos DB primary key
KEY=$(az cosmosdb keys list -g $RG -n aca-$uniqueId | jq .primaryMasterKey -r)
# update variables, IMAGE and PORT are the same
APPNAME=backend
DAPRID=backend # could be different
# create the Cosmos DB component file
# it uses the ENDPOINT above + database name + collection name
# IMPORTANT: scopes is required so that you can scope components
# to the container apps that use them
cat << EOF > cosmosdb.yaml
componentType: state.azure.cosmosdb
version: v1
metadata:
- name: url
value: "$ENDPOINT"
- name: masterkey
secretRef: cosmoskey
- name: database
value: aca-db
- name: collection
value: statestore
secrets:
- name: cosmoskey
value: "$KEY"
scopes:
- $DAPRID
EOF
# create Dapr component at the environment level
# this used to be at the container app level
az containerapp env dapr-component set \
--name $ENVNAME --resource-group $RG \
--dapr-component-name cosmosdb \
--yaml cosmosdb.yaml
# create the container app; the app needs an environment
# variable STATESTORE with a value that is equal to the
# dapr-component-name used above
# ingress is internal; there is no need to connect to the backend from the internet
az containerapp create --name $APPNAME --resource-group $RG \
--environment $ENVNAME --image $IMAGE \
--min-replicas 1 --max-replicas 1 --enable-dapr \
--dapr-app-port $PORT --dapr-app-id $DAPRID \
--target-port $PORT --ingress internal \
--env-vars STATESTORE=cosmosdb
Step 5: Verify end-to-end connectivity
We will use curl to call the /call endpoint on the front-end. The endpoint expects the following JSON:
{
"appId": <DAPR Id to call method on>,
"method": <method to call>,
"httpMethod": <HTTP method to use e.g., POST>,
"payload": <payload with key and data field as expected by Dapr state component>
}
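Filled in with the values used below, the request body looks like this (note that payload is itself a string containing JSON, so its inner quotes are escaped):

```
{
  "appId": "backend",
  "method": "savestate",
  "httpMethod": "POST",
  "payload": "{\"key\": \"mykey\", \"data\": \"123\"}"
}
```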
As you have noticed, both container apps use the same image. The app was written in Go and implements both the /call and /savestate endpoints. It uses the Dapr SDK to interface with the Dapr sidecar that Azure Container Apps has added to our deployment.
To make the curl commands less horrible, we will use jq to generate the JSON to send in the payload field. Do not pay too much attention to the details. The important thing is that we save some data to Cosmos DB and that you can use Cosmos DB Data Explorer to verify.
# create some string data to send
STRINGDATA="'$(jq --null-input --arg appId "backend" --arg method "savestate" --arg httpMethod "POST" --arg payload '{"key": "mykey", "data": "123"}' '{"appId": $appId, "method": $method, "httpMethod": $httpMethod, "payload": $payload}' -c -r)'"
# check the string data (double quotes should be escaped in payload)
# payload should be a string and not JSON, hence the quoting
echo $STRINGDATA
# call the front end to save some data
# in Cosmos DB data explorer, look for a document with id
# backend||mykey; content is base64 encoded because
# the data is not json
echo curl -X POST -d $STRINGDATA https://$FQDN/call | bash
# create some real JSON data to save; now we need to escape the
# double quotes and jq will add extra escapes
JSONDATA="'$(jq --null-input --arg appId "backend" --arg method "savestate" --arg httpMethod "POST" --arg payload '{"key": "myjson", "data": "{\"name\": \"geert\"}"}' '{"appId": $appId, "method": $method, "httpMethod": $httpMethod, "payload": $payload}' -c -r)'"
# call the front end to save the data
# look for a document id backend||myjson; data is json
echo curl -v -X POST -d $JSONDATA https://$FQDN/call | bash
Step 6: Check the logs
Although you can use the Log Stream option in the portal, let’s use the command line to check the logs of both containers.
# check frontend logs
az containerapp logs show -n frontend -g $RG
# I want to see the dapr logs of the container app
az containerapp logs show -n frontend -g $RG --container daprd
# if you do not see log entries about our earlier calls, save data again
# the log stream does not show all logs; log analytics contains more log data
echo curl -v -X POST -d $JSONDATA https://$FQDN/call | bash
# now let's check the logs again but show more earlier logs and follow
# there should be an entry method with custom content; that's the
# result of saving the JSON data
az containerapp logs show -n frontend -g $RG --tail 300 --follow
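Because the log stream only keeps recent entries, the full history is best queried in the Log Analytics workspace. A sketch of doing that from the CLI (assuming the default ContainerAppConsoleLogs_CL table for console logs; LA_ID comes from step 1):

```shell
# query the Log Analytics workspace directly for frontend logs;
# ContainerAppConsoleLogs_CL is assumed to be the console log table
QUERY='ContainerAppConsoleLogs_CL
| where ContainerAppName_s == "frontend"
| project TimeGenerated, Log_s
| order by TimeGenerated desc
| take 20'
# skip the call gracefully when the Azure CLI is not on the PATH
if command -v az >/dev/null 2>&1; then
  az monitor log-analytics query \
    --workspace "$LA_ID" \
    --analytics-query "$QUERY" \
    -o table
fi
```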
Step 7: Use az containerapp up
In the previous steps, we used a pre-built image stored in GitHub Container Registry. As a developer, you might want to quickly go from code to deployed container to verify that it all works in the cloud. The command az containerapp up lets you do that. It can do the following things automatically:
- Create an Azure Container Registry (ACR) to store container images
- Send your source code to ACR and build and push the image in the cloud; you do not need Docker on your computer
- Alternatively, you can point to a GitHub repository and start from there; below, we first clone a repo and start from local sources with the --source parameter
- Create the container app in a new environment or use an existing environment; below, we use the environment created in previous steps
# clone the super-api repo and cd into it
git clone https://github.com/gbaeke/super-api.git && cd super-api
# checkout the quickguide branch
git checkout quickguide
# bring up the app; container build will take some time
# add the --location parameter to allow az containerapp up to
# create resources in the specified location; otherwise it uses
# the default location used by the Azure CLI
az containerapp up -n super-api --source . --ingress external --target-port 8080 --environment env-aca
# list apps; super-api has been added with a new external Fqdn
az containerapp list -g $RG -o table
# check ACR in the resource group
az acr list -g $RG -o table
# grab the ACR name
ACR=$(az acr list -g $RG | jq .[0].name -r)
# list repositories
az acr repository list --name $ACR
# more details about the repository
az acr repository show --name $ACR --repository super-api
# show tags; az containerapp up uses numbers based on date and time
az acr repository show-tags --name $ACR --repository super-api
# make a small change to the code; ensure you are still in the
# root of the cloned repo; instead of Hello from Super API we
# will say Hi from Super API when curl hits the /
sed -i s/Hello/Hi/g cmd/app/main.go
# run az containerapp up again; a new container image will be
# built and pushed to ACR and deployed to the container app
az containerapp up -n super-api --source . --ingress external --target-port 8080 --environment env-aca
# check the image tags; there are two
az acr repository show-tags --name $ACR --repository super-api
# curl the endpoint; should say "Hi from Super API"
curl https://$(az containerapp show -g $RG -n super-api | jq .properties.configuration.ingress.fqdn -r)
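When you are done experimenting, you can remove everything this guide created in one go; deleting the resource group also removes the ACA environment, both container apps, Cosmos DB, the ACR that az containerapp up created, and the Log Analytics workspace (this is irreversible):

```shell
# delete the resource group and everything in it
RG=rg-aca
# skip gracefully when the Azure CLI is not on the PATH
if command -v az >/dev/null 2>&1; then
  az group delete --name "$RG" --yes --no-wait
fi
```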
Conclusion
In this quick guide (well, maybe not that quick 😉) you have seen how to create an Azure Container Apps environment, add two container apps that use Dapr, and use az containerapp up for a great inner-loop dev experience.
I hope this was useful. If you spot errors, please let me know. Also check the quick guides on GitHub: https://github.com/gbaeke/quick-guides