Serverless Microservices: Building an orchestrated workflow with Azure Durable functions

In this article, we will be implementing an “orchestrated” flow, or interaction if you will, as discussed in the previous article.

So, this will be quite a technical and lengthy post, so better get your coffee refill before we start.

What you will see in this article:

  • Using an HTTP client for communication between microservices
  • Having a request-reply mechanism using HTTP
  • Using the Azure Durable Functions orchestrator for … well, orchestrating
  • How to structure your Azure functions code
  • Dependency Injection in Azure functions

Also, the usual disclaimer: the idea in this article is not to model a shopping experience correctly (which, I have to admit, is quite ridiculous in this example), but to have a “use case” on which we can show how we might implement an Azure Durable Functions orchestrated flow.

As usual, you can also access the full source code for this post on GitHub.

Good, let’s get started.

The setup and assumptions

Starting from the aforementioned solution here, we will be adding a new project to the solution. We are doing it like this for tutorial reasons; the two projects will not share anything, and the general recommendation is to have separate solutions and git repositories per microservice.

We will call this new service “OrderService”, shocking I know!

Now, as discussed in the previous post, we need to implement a “standard”-ish order flow. Apart from the requirements from the previous post, we will start with the following assumptions, to make this easier for us and not overly long and cumbersome:

  • A user can only order if they are signed in and have an address saved in the system.
  • This will be handled by the front end, so we have nothing to do about it here.

Now, since we have resolved all of this, let’s get to the code.

The code

So, to get the ball rolling, we first need to define an entry point into our shiny new service.

To do this, we will be adding a new HTTP trigger, “CreateOrder”. This will be a “POST” and will require the cart ID and also the user ID.

        [FunctionName("CreateOrder")]
        public static async Task<IActionResult> RunAsync(
            [HttpTrigger(AuthorizationLevel.Function, "post", Route = "order/start/{sessionId}")] HttpRequestMessage req,
            Guid sessionId,
            [DurableClient] IDurableClient client)
        {
            if (sessionId == Guid.Empty)
                return new BadRequestObjectResult("SessionId is required");

            var started = await client.StartNewAsync(nameof(NewOrderOrchestrator), new StartOrderContext
            {
                CorrelationId = Guid.NewGuid().ToString(),
                ShoppingCartId = sessionId.ToString(),
                UserId = Guid.NewGuid().ToString() // this should come from the body of the POST request, we will skip it
            });

            return new AcceptedResult(started, started);
        }

The keen-eyed will notice the “StartOrderContext” object that we use to start the orchestrator; having these types of DTOs or wrappers might be a good idea if you are working with more complex flows. Here is the class definition.

    public class StartOrderContext
    {
        public string CorrelationId { get; set; }
        public string ShoppingCartId { get; set; }
        public string UserId { get; set; }
    }

Now that we have this, we should start working on the Orchestrator itself. 

    public class NewOrderOrchestrator
    {
        [FunctionName(nameof(NewOrderOrchestrator))]
        public async Task Run(
            [OrchestrationTrigger] IDurableOrchestrationContext context,
            ExecutionContext executionContext)
        {
            var inputData = context.GetInput<StartOrderContext>();
            inputData.UserId = context.NewGuid().ToString(); // context.NewGuid() is replay-safe, unlike Guid.NewGuid()

            // step one, get the cart
            context.SetCustomStatus(OrderStatus.GettingCart.ToString());
            var cartItems = await context.CallActivityWithRetryAsync<List<CartItem>>(nameof(GetShoppingCartActivity),
                new RetryOptions(TimeSpan.FromSeconds(5), 3),
                Guid.Parse(inputData.ShoppingCartId));

            context.SetCustomStatus(OrderStatus.GettingUserDetails.ToString());
            var userData = await context.CallActivityWithRetryAsync<UserData>(nameof(GetUserDetailsActivity),
                new RetryOptions(TimeSpan.FromSeconds(5), 3),
                Guid.Parse(inputData.UserId));

            context.SetCustomStatus(OrderStatus.ReservingItems.ToString());
            var wareHouseStocksAvailableAndReserved = await context.CallActivityWithRetryAsync<CheckAndReserveItemsActivity.ReservationResult>(
                nameof(CheckAndReserveItemsActivity),
                new RetryOptions(TimeSpan.FromSeconds(5), 3),
                cartItems);

            context.SetCustomStatus(OrderStatus.CalculatingShippingCost.ToString());
            var calculateShippingCost = await context.CallActivityWithRetryAsync<CalculateShippingCostActivity.ShippingCost>(
                nameof(CalculateShippingCostActivity),
                new RetryOptions(TimeSpan.FromSeconds(5), 3),
                new CalculateShippingCostActivity.ActivityTrigger
                {
                    ReservationResult = wareHouseStocksAvailableAndReserved,
                    UserData = userData
                });

            context.SetCustomStatus(OrderStatus.CalculatingFinalCost.ToString());
            var finalCost = await context.CallActivityWithRetryAsync<CalculateFinalCostActivity.FinalCost>(
                nameof(CalculateFinalCostActivity),
                new RetryOptions(TimeSpan.FromSeconds(5), 3),
                new CalculateFinalCostActivity.ActivityTrigger
                {
                    ReservationResult = wareHouseStocksAvailableAndReserved,
                    ShippingCost = calculateShippingCost
                });

            // here is a more interesting part:
            // let's say we are using an external payment provider that gives us a webhook
            context.SetCustomStatus(OrderStatus.WaitingForPayment.ToString());
            var paymentSuccess = await context.CallSubOrchestratorAsync<bool>(nameof(PaymentOrchestrator), finalCost);

            if (paymentSuccess)
            {
                // we would have another activity here ... but it is the same as the other interaction calls, so we skip it 🙂
                context.SetCustomStatus(OrderStatus.Completed.ToString());
            }
            else
            {
                // maybe cancel the product reservation ...
                // do whatever is required in this case ...
                context.SetCustomStatus(OrderStatus.PaymentFailed.ToString());
            }
        }
    }

So, let’s start to break down the orchestrator here. 

First things first: as a general rule of thumb, you should aim to have as little logic in the orchestrator itself as possible; any time you need anything more than a simple boolean comparison, try to use an activity.

Looking at the code of the orchestrator, you might see that we are using the custom status functionality. I think this is one of the most awesome things, and it is really underutilized.

In order to do things the dotnet way, we created an enum with all the possible states of the orchestrator. This is used only for observability and has no impact whatsoever on the function framework; from Azure's point of view, this is just a string.

    public enum OrderStatus
    {
        GettingCart,
        GettingUserDetails,
        ReservingItems,
        CalculatingShippingCost,
        CalculatingFinalCost,
        WaitingForPayment,
        Completed,
        PaymentFailed
    }

So, apart from the custom status, most of the orchestrator is made up of a series of activities and custom status updates.

Activity close-up

The first thing we need to do is to get the contents of the cart. To do this, we will have a dedicated activity, “GetShoppingCartActivity”, that looks like this:

    public class GetShoppingCartActivity
    {
        private readonly IShoppingCartClient _client;

        public GetShoppingCartActivity(IShoppingCartClient client)
        {
            _client = client;
        }

        [FunctionName(nameof(GetShoppingCartActivity))]
        public async Task<List<CartItem>> Run([ActivityTrigger] Guid id)
        {
            try
            {
                return await this._client.GetCartItems(id);
            }
            catch (Exception e)
            {
                // log the failure and rethrow, so the orchestrator's retry policy can kick in
                throw;
            }
        }
    }

Now, looking at this, it seems rather simple, and the reason behind this is the usage of the “client” of the Shopping Cart service.

In order to achieve this, we used the HTTP client factory and DI to create a typed client for the other microservice. In this case this might be a bit overkill, but if there were multiple interactions between the services, I would highly recommend this approach.

So in order to achieve this, we have the following interface defined:

    public interface IShoppingCartClient
    {
        Task<List<CartItem>> GetCartItems(Guid id);
    }

And here is the implementation:

    public class ShoppingCartClient : IShoppingCartClient
    {
        private readonly HttpClient _client;
        private readonly AppConfig _config;

        public ShoppingCartClient(HttpClient client, AppConfig config)
        {
            _client = client;
            _config = config;
        }

        public async Task<List<CartItem>> GetCartItems(Guid id)
        {
            var request = new HttpRequestMessage(HttpMethod.Get, $"{id}");
            using (var response = await this._client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead))
            {
                var content = await response.Content.ReadAsStringAsync();
                return JsonConvert.DeserializeObject<List<CartItem>>(content);
            }
        }
    }

    public class CartItem
    {
        public Guid ProductId { get; set; }
        public int Count { get; set; }
    }

Now, this again looks quite simple, since it only contains the request logic, so in theory you could think of it as an “HTTP repository”. The main advantage of using this approach, apart from the DI, is the way you set it up in the start-up, and the fact that you get an already configured HttpClient. What do I mean? Well, first let’s look at the ServiceRegistrations.cs file, where we register this ShoppingCartClient.

    private static IServiceCollection AddHttpClients(this IServiceCollection serviceCollection)
    {
        serviceCollection.AddHttpClient<IShoppingCartClient, ShoppingCartClient>((provider, client) =>
        {
            // add the required headers, keys, tokens and other specific HTTP client stuff
            var config = provider.GetService<AppConfig>();
            client.BaseAddress = new Uri(config.CartServiceUrl);
        });

        return serviceCollection;
    }

So, as you can see here, you only have one place where you set up the base URL, the headers and the auth scheme. Using this approach, you can easily configure everything based on environment variables (you know, CI/CD goodness).
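As an illustration of that environment-variable-driven setup, the registration of the config object might look something along these lines. To be clear, this is only a sketch: the shape of AppConfig, the `CartServiceUrl` setting name and the fallback URL are assumptions for illustration, not the exact code from the repo.

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

// hypothetical sketch: build AppConfig from environment variables at startup,
// so the same binary can run in dev/staging/prod with different CI/CD variables
public class AppConfig
{
    public string CartServiceUrl { get; set; }
}

public static class ConfigRegistration
{
    public static IServiceCollection AddAppConfig(this IServiceCollection services)
    {
        // "CartServiceUrl" would be defined as an app setting in Azure,
        // which the Functions runtime surfaces as an environment variable
        var config = new AppConfig
        {
            CartServiceUrl = Environment.GetEnvironmentVariable("CartServiceUrl")
                             ?? "http://localhost:7071/api/cart/" // local-dev fallback (assumed)
        };

        return services.AddSingleton(config);
    }
}
```

With this in place, the typed-client registration above can simply resolve AppConfig from the provider, and the deployed environment decides where the Shopping Cart service lives.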

All the other activities in this example are mostly mocks. They look more or less like this:

    public class CalculateShippingCostActivity
    {
        [FunctionName(nameof(CalculateShippingCostActivity))]
        public async Task<ShippingCost> Run([ActivityTrigger] ActivityTrigger input)
        {
            // here we would make a call to a dedicated Shipping Cost Calculator that would do this;
            // things like product weight and dimensions from the product catalog might be important,
            // apart from the address of the user
            await Task.Delay(3000); // simulate the wait time for a regular HTTP call

            return new ShippingCost
            {
                Cost = 13,
                Currency = "USD"
            };
        }

        public class ActivityTrigger
        {
            public CheckAndReserveItemsActivity.ReservationResult ReservationResult { get; set; }
            public UserData UserData { get; set; }
        }

        public class ShippingCost
        {
            // In a real project, I'd suggest that you always use a Money value type
            public double Cost { get; set; }
            public string Currency { get; set; }
        }
    }

Now, I usually like to use nested types for activities; I think it keeps things quite organized for short-lived usage. Depending on the complexity, a lot of times I also use tuples. If you feel or see that these structures get more usage, you could create a DTO / ViewModel namespace and place the “possibly” common ones there. Also, the delay here is to simulate a “real HTTP call”.

The next interesting thing is the way we would “integrate” the “payment service”.

SubOrchestrator close-up

We have arrived at one of the most interesting parts of this whole orchestrator: the payment integration suborchestrator. Basically, here we are starting a new orchestrator as a “child process”.

The idea here is to show a way to integrate with an external asynchronous system. The flow described goes something along the following lines: you send a request to the external system, the external system responds with 202 Accepted, and then, when they finish processing, they respond to a callback URL / webhook. We will look more into that in a few minutes; first, here is the suborchestrator’s structure:

    public class PaymentOrchestrator
    {
        [FunctionName(nameof(PaymentOrchestrator))]
        public async Task<bool> Run(
            [OrchestrationTrigger] IDurableOrchestrationContext context,
            ExecutionContext executionContext)
        {
            var input = context.GetInput<CalculateFinalCostActivity.FinalCost>();

            // here we need to call our payment provider
            var timeout = TimeSpan.FromSeconds(60); // here we should have a bigger time window... but demo..
            var deadline = context.CurrentUtcDateTime.Add(timeout);
            using var cts = new CancellationTokenSource();
            var timeoutTask = context.CreateTimer(deadline, cts.Token);
            var paymentSuccess = context.WaitForExternalEvent("payment-request-success");
            var paymentFail = context.WaitForExternalEvent("payment-request-fail");

            // send the request for payment
            await context.CallActivityAsync(nameof(StartPaymentProcessActivity),
                new StartPaymentProcessActivity.ActivityTrigger
                {
                    Cost = input.Cost,
                    OrchestratorInstanceId = context.InstanceId,
                });

            // wait for the winner - paid, not paid, or timed out
            var winner = await Task.WhenAny(timeoutTask, paymentSuccess, paymentFail);

            if (winner != timeoutTask)
                cts.Cancel(); // cancel the durable timer so the orchestration can complete cleanly

            return winner == paymentSuccess;
        }
    }

In order to make this whole magic take flight, we need a small state machine with three end states: paid, rejected, or timed out. The timeout is here to be sure that we don’t end up with a hanging service request, and since we reserved the products, we might want to cancel the reservation earlier (I imagine that in a real system a reservation would have a TTL if no payment was confirmed, but I digress…).

So, along with the timeoutTask, we also have a paymentSuccess and a paymentFail task; these are only declared here. Please note that at this point the tasks are not yet awaited. Next, we make the call to the external system and await the result in the form of Task.WhenAny. Based on which of the tasks resolves first, we have a response. In our case, both fail and timeout are treated the same.

“WaitForExternalEvent” is implemented quite simply: we create a trigger, and there, well… we raise the event. In this example, I created an endpoint for each of the events, success and fail:

    public static class PaymentAcceptedTrigger
    {
        [FunctionName(nameof(PaymentAcceptedTrigger))]
        public static async Task<IActionResult> RunAsync(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "order/{id}/payment-accepted")]
            HttpRequest req,
            string id,
            [DurableClient] IDurableOrchestrationClient client)
        {
            await client.RaiseEventAsync(id, "payment-request-success");
            return new OkObjectResult("Thank you!");
        }
    }

What needs to be noted is that we require the orchestrator instance ID when we raise the event; here, it is provided as a URL parameter. So you need to be a bit careful about this when you build your communication with an external system, but this, in essence, is quite a common pattern (using a sort of correlation ID).
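The StartPaymentProcessActivity itself is not shown above; a minimal sketch of how it might pass the instance ID along as a correlation ID could look like the following. Note that the payment provider API, the callback URL layout, and the host name are all assumptions made up for illustration; only the Cost and OrchestratorInstanceId fields follow from the orchestrator code above.

```csharp
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Newtonsoft.Json;

public class StartPaymentProcessActivity
{
    private readonly HttpClient _client;

    public StartPaymentProcessActivity(HttpClient client)
    {
        _client = client; // a typed client registered the same way as ShoppingCartClient
    }

    [FunctionName(nameof(StartPaymentProcessActivity))]
    public async Task Run([ActivityTrigger] ActivityTrigger input)
    {
        // the orchestration instance id is embedded in the callback URLs, so the
        // payment provider "echoes" our correlation id back when it hits the webhook
        var payload = new
        {
            amount = input.Cost,
            successCallback = $"https://our-function-app/api/order/{input.OrchestratorInstanceId}/payment-accepted",
            failCallback = $"https://our-function-app/api/order/{input.OrchestratorInstanceId}/payment-rejected"
        };

        // hypothetical provider endpoint; it would answer 202 Accepted and call us back later
        var content = new StringContent(JsonConvert.SerializeObject(payload), Encoding.UTF8, "application/json");
        var response = await _client.PostAsync("payments", content);
        response.EnsureSuccessStatusCode();
    }

    public class ActivityTrigger
    {
        public double Cost { get; set; }
        public string OrchestratorInstanceId { get; set; }
    }
}
```

The important part is not the provider call itself, but that the instance ID travels out with the request and comes back through the webhook routes.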

* If you decide to get the code from GitHub: to make this easier to test, when you get to this step the orchestrator will print the ID to the console, so you can copy and paste it into the REST testing tool you prefer.

But why ?

So, by now you must be wondering: although this looks quite cool (hopefully), it seems to be a bit over-engineered, so why should you bother doing this? How is this different from, let’s say, doing it in a regular API controller?

To this, I could say, “Great question!”

(Awkward silence…)

(Cleaning up the glasses …)

(Taking a sip of the finest brandy, while looking at the fireplace … )

Well… leaving the dramatic pause aside, I think there are several advantages, apart from the clear CVDD upside (CV-driven development).

One thing we got for free is the status, as you might recall. I also added the following trigger to our flow:

        public static async Task<IActionResult> RunAsync(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "order/{id}/status")] HttpRequestMessage req,
            [DurableClient] IDurableClient client,
            string id)
        {
            var status = await client.GetStatusAsync(id);
            return new OkObjectResult(status.CustomStatus);
        }

Using this trigger, you could poll the status of the flow, or even have it pushed to a queue. Now, for our example this doesn’t provide a lot of value, but if you build a flow with multiple steps that spans several days or even weeks, then this provides an amazing amount of value.
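For a long-running flow, a consumer could simply poll that status endpoint in a loop. A rough sketch, assuming the local Functions host on its default port (the host name, port, and polling interval are assumptions depending on your setup):

```shell
# poll the custom status of a given orchestration instance every 5 seconds;
# INSTANCE_ID is the id returned by the CreateOrder trigger
INSTANCE_ID="<paste-the-orchestration-instance-id-here>"
while true; do
  curl -s "http://localhost:7071/api/order/${INSTANCE_ID}/status"
  echo ""   # newline after each status, for readability
  sleep 5
done
```

In a real integration, you would more likely push status changes to a queue or notify the front end than busy-poll like this, but it is handy while experimenting.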

Another cool thing, which I think I mentioned in the first article of the “series”, is that each call from the orchestrator to an activity or suborchestrator is event sourced, which means that, in essence, you have access to the whole audit trail of what happened, including inputs and outputs. All of this can be explored in the TaskHub, in the table storage section of the configured storage account.

There is also scaling, but I think this is quite self-evident.

The end

Well, if you got this far, thank you for taking the time and hopefully you found out at least something interesting.

And keep in mind that serverless offerings are not a silver bullet; if used wrongly or inappropriately, they can generate a ridiculous amount of cost. (This seems like a good idea for a future post…)
