At Microsoft experiences ’16, InfoQ FR spoke with Florent Dotto and Thomas Sontheimer of Brennus Analytics about their technical architecture, and in particular their stack on Azure: R, DocumentDB, Java, and API Apps.


Hello Florent, Thomas. Could you first explain the product to us, before we talk about the technical architecture?


Florent: Brennus Analytics is developing a dynamic price-optimization SaaS solution for manufacturers, based on data analysis: from the choice of a pricing policy by customer segment and product, to the calculation of the optimal price for each quote request received. We use artificial intelligence technology to continuously learn customer behavior from historical commercial data, and to optimize the price of each offer in real time, taking its specific costs into account. All of this guarantees the manufacturer an optimal pricing policy that stays in line with market developments.

Can you describe the overall use case of the product, and how the architecture follows from it?

Florent: The use case of the product is to provide manufacturers with a solution their sales teams can use to respond to requests for quotes. When a sales team receives a request for a quote from a customer, it has to set a price to offer. Our system integrates into the pricing process of those sales teams, and the added value is that the calculated price takes the manufacturer's historical data into account: we model customer behavior, that is, the probability that the customer accepts the deal at a given price, and that is what allows us to calculate an optimal price maximizing the expected margin of the deal.


So the problem is this: the manufacturer uploads their data, or we retrieve it onto our servers; we work with that data to model customer behavior; we also compute the manufacturer's margins and costs; and once we have all of that, we run an optimization to cross the two curves and produce a result. That result then has to be displayed, so this is a web app; to simplify, we standardized on a single browser, Chrome. The issue we had initially was that we didn't know whether manufacturers would accept a SaaS model, so at first we told ourselves the solution had to be deployable both on Azure, since we had chosen to work with them, and on-premises at the manufacturer. That was the first consideration.
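The pricing step described here, crossing a customer acceptance-probability curve with a margin curve to find the price maximizing expected margin, can be sketched as follows. This is purely illustrative: a logistic acceptance model and a simple grid search are assumed, and all names and parameters are hypothetical; Brennus's actual learned models are not public.

```java
public class PriceOptimizer {

    // Hypothetical acceptance model: probability that the customer accepts
    // price p. A logistic shape around a reference price is assumed here;
    // in the real product this would be learned from historical quote data.
    static double acceptProbability(double price, double refPrice, double sensitivity) {
        return 1.0 / (1.0 + Math.exp(sensitivity * (price - refPrice)));
    }

    // Expected margin of a quote at a given price, given its specific cost:
    // this is the "crossing of the two curves" (margin x acceptance probability).
    static double expectedMargin(double price, double cost, double refPrice, double sensitivity) {
        return (price - cost) * acceptProbability(price, refPrice, sensitivity);
    }

    // Grid search over candidate prices for the one maximizing expected margin.
    static double optimalPrice(double cost, double refPrice, double sensitivity,
                               double lo, double hi, double step) {
        double best = lo;
        double bestValue = Double.NEGATIVE_INFINITY;
        for (double p = lo; p <= hi; p += step) {
            double v = expectedMargin(p, cost, refPrice, sensitivity);
            if (v > bestValue) {
                bestValue = v;
                best = p;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Cost 80, reference price 100, moderate price sensitivity.
        double p = PriceOptimizer.optimalPrice(80.0, 100.0, 0.1, 80.0, 140.0, 0.5);
        System.out.printf("optimal price ~ %.1f%n", p);
    }
}
```

The trade-off is visible in the shape of `expectedMargin`: raising the price increases the margin term but lowers the acceptance probability, so the maximum sits between the cost and the point where the customer walks away.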


The second is that, in the end, we didn't want to manage physical infrastructure, because for us that has more disadvantages than advantages. And Azure lets you avoid exactly that, so we might as well use it.

How long ago did you start?

The thinking started about 10 months ago, and we began implementing about 8 months ago, around February 2016.


How are the different parts of the system structured?

The app consists of the following:

  • a frontend deployed on App Service (Web App); implemented with Angular2, using Swagger to generate the REST API client code,
  • a REST API deployed stand-alone on App Service (API App); implemented with Sparkjava for the service, Jackson for JSON, and JWT for authentication,
  • an analysis core deployed on a VM; currently implemented in R and exposed as an API with FastRWeb, we are working on replacing it with a version integrating AMAS technology, with Sparkjava serving the API,
  • a DocumentDB database with support for the MongoDB protocol, which the REST API and the analysis core access through Jongo.
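The interview doesn't detail how the JWT authentication mentioned above works. As a hedged sketch of the kind of check such a REST API performs on each request, here is HS256 signature verification of a `header.payload.signature` token using only the JDK. In a real Sparkjava service this would typically be delegated to a JWT library inside a `before`-filter; the secret, claims, and method names below are all illustrative.

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class JwtCheck {

    // HMAC-SHA256 over the signing input, as used by JWT's HS256 algorithm.
    static byte[] hmacSha256(byte[] secret, String data) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret, "HmacSHA256"));
            return mac.doFinal(data.getBytes(StandardCharsets.US_ASCII));
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Build a token: base64url(header) + "." + base64url(payload) + "." + signature.
    static String signHs256(String headerB64, String payloadB64, byte[] secret) {
        byte[] sig = hmacSha256(secret, headerB64 + "." + payloadB64);
        return headerB64 + "." + payloadB64 + "."
                + Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
    }

    // Verify the signature; a constant-time comparison avoids timing leaks.
    static boolean verifyHs256(String token, byte[] secret) {
        String[] parts = token.split("\\.");
        if (parts.length != 3) return false;
        try {
            byte[] expected = hmacSha256(secret, parts[0] + "." + parts[1]);
            byte[] given = Base64.getUrlDecoder().decode(parts[2]);
            return MessageDigest.isEqual(expected, given);
        } catch (IllegalArgumentException badBase64) {
            return false;
        }
    }

    public static void main(String[] args) {
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        byte[] key = "demo-secret".getBytes(StandardCharsets.UTF_8);
        String header = enc.encodeToString("{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        String payload = enc.encodeToString("{\"sub\":\"sales-user\"}".getBytes(StandardCharsets.UTF_8));
        String token = signHs256(header, payload, key);
        System.out.println(verifyHs256(token, key));                                        // true
        System.out.println(verifyHs256(token, "wrong-key".getBytes(StandardCharsets.UTF_8))); // false
    }
}
```

A stateless check like this is what makes the API App tier easy to scale out: any instance can validate a request with nothing but the shared secret, no session store required.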

Thomas: And in terms of evolution, we initially leaned more toward IaaS to use Azure: deploying virtual machines, then an Angular web app that called directly into the VMs we had deployed, hosting both the API and, in the back end behind it, a compute server running R.


Little by little, we have evolved this architecture to take maximum advantage of Azure, pushing as much as possible into Platform as a Service. So our API, originally deployed in a virtual machine, is in the process of being migrated to an API App, which lets us expose our application's entire API to the front end without having to manage a virtual machine: no operating-system maintenance, no updates, that kind of thing. It also takes care of redundancy.


You are taking advantage of more and more managed services; we just talked about the API. Are there other examples of things you have migrated like this, or plan to migrate at some point?


Thomas: No, it's mainly that point. Beyond it, we will use more and more features of these services. On the front-end side, for example, deployment slots: rather than deploying several web applications, the idea is to deploy the same web application and switch between slots, for smoother deployments. Otherwise, we still plan to keep an R server for everything analytical. The idea will be to deploy our technology, which is developed in Java and still requires relatively significant resources, so a priori we won't be able to deploy it on App Services.


We think this part will always stay on some kind of virtual machine. And what Azure offers here is interesting for us: the possibility of a gateway between the virtual network on which the virtual machines are deployed and the App Service that ultimately provides the API. That gives us a secure HTTPS connection on our entry point, the API deployed as an App Service; beyond that, everything happens on a virtual network, which is secure and which we don't have to manage ourselves.

