Developing Business Central Extensions/Apps in Team

I picked up a new challenge these days: for one of our (quite big) customers, we need to develop a solution based on extensions. In short: ready for the future, easy to upgrade. If I explained the case in one paragraph, you'd say "this is not NAV" – although when you really look deep into it, it's an ideal scenario for extensions, obviously in combination with Business Central.

The project

In short, let’s define the project like this:

  • Multiple extensions to be made:
    • A W1 base extension with shared functionality and frameworks
    • For each country (like Belgium) a separate extension, specifically for local functionality
  • The timespan is 2 years
  • We are working with 5 developers at the same time on this project, spread over 3 countries. Good developers – which means not really infrastructure-minded.
  • Test-driven development in an agile approach

So, one project, multiple extensions, multiple developers spread over multiple countries (not time zones, luckily), dependencies, different symbols (W1 and BE), … .

Why this blogpost?

Well, we came to the conclusion that "developing an extension" is one thing. One app, usually one developer, some local development environment – it's all very manageable. But when you try to do development in a team, it's a whole different story. And we have faced (and are still facing) quite some points that need attention. You will see this is a train of thoughts – not a complete list – and not all points have answers yet. More than enough reasons to write some follow-up posts in the future ;-).

So why this post? Just to share my experience in this story, and maybe also to get feedback from this great community we have ;-). Any feedback is always appreciated – the community is here to make everyone and the product better!

The focus of this post is not on "how to develop in a team", but rather on "what I think we need to take into consideration when you do".


CI/CD

If you have never heard about "Continuous Integration", "Continuous Delivery" and/or "Continuous Deployment" – now is the time to dive into it :-). It's a mature way of handling software these days – and very new for our precious NAV world. Let there be no doubt: I too realize there is no way to work in a team without a decent implementation of CI/CD, so we have set up a system that automates quite a lot.

  • Git with a remote in VSTS takes care of the "continuous integration" of the code of the team members.
  • Branch for each “unit of work”
  • We have a master-branch policy, which means we work with pull requests – validated by build definitions – to pull our team members' changes into the master branch, which also forces us to review the code.
  • These build definitions do quite a lot (analyze, compile, test-deploy, manage manifest, …)
  • We have release pipelines to get our stuff to test-environments (continuous delivery/deployment)

And a lot more – part of which I will address in the upcoming points. Again, this post is not about CI/CD, but you'll see that the concept solves a lot of points – and also introduces some new challenges.

Object Numbers

A challenge you'll face quite soon is "how to manage object numbers". As a team, each member has his own branch, and within his branch he will start creating tables, codeunits, … . But if you just use that awesome "auto numbering" that comes out of the box with the AL language, you'll end up with the same object numbers in different branches when you start merging. Compile errors will cause many builds to fail!

You can't just change the app.json to influence the autonumbering either, because after a while things will not compile anymore.

So, in a way, you need to abandon the nice autonumbering, and assign object number ranges to people/branches/workitems.
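To sketch what that looks like (with hypothetical numbers – the actual range is whatever you assign to a developer/branch): you pin the range in that branch's app.json, so the autonumbering can only hand out numbers inside it. Depending on your version of the AL tooling, this is a single idRange object or an idRanges array:

```json
{
  "idRanges": [
    { "from": 50120, "to": 50139 }
  ]
}
```

A developer who tries to create an object outside his assigned range then gets a compile error right away, instead of a silent clash at merge time.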

We created a dashboard app to manage all the things that (we think) can't be managed by VSTS – which also includes object numbers and field numbers.

“I need that field you are creating – can you save your object?”

I was a big fan of a central development database. Besides the fact that you couldn't develop in one object with multiple developers at the same time, it mostly had advantages. So easy. At all times, you had an overview of all the developments everyone was doing.

Well, I guess I need to grow up now, because we are not developing in a database anymore; we are developing in files, distributed over local systems/branches, integrated by VSTS.

So, what if you need a field (or any piece of development for that matter) that was created in another branch, but not pushed to master just yet? Well, there are two ways: you start merging the two branches (which I wouldn't do in a million years), or you pull the change to master, so the other branch can merge with master and continue development with those needed fields.

Does the field already exist?

An extra challenge, quite similar to the above, is the fact that you don't have an actual overview of all the fields that are being created. Basically a list of all fields, for all tables, across all branches in development.

As said, we are being agile, so for functional and technical people, it's very nice to be able to check which fields are already created that you can use in the next sprint. You can't just open a table and see what fields are there – there might be some in extension objects that are not even pushed to master yet.

We will create dedicated functionality to "push" our fields from all branches to our dashboard, so that we always have a nice up-to-date list to filter and analyse the fields we have at hand ;-).

Breaking schema changes during development phase

You'd say: during development, the schema is not important. If you want to delete a field, just delete it. But in the world of extensions these days, that means you will delete all the data that comes with your extension. Any breaking change requires the schema to be recreated. And I can tell you, you'll end up with breaking changes quite soon:

  • Renumber a field
  • Rename a field
  • Delete a field
  • Change datatype

For a development database, I don't care, but for a test database – for User Acceptance Testing, or even just functionality testing – it can be devastating to lose the data.

We realized quite soon in our development cycle that data became important .. and then we are not "just" in the development phase anymore. When the app ends up in the test system (during the release pipeline), it should be a matter of upgrading, not a matter of "delete data and recreate the schema".

So the only thing I think we can do is to handle the app as a released/live app from the very first moment users start testing and data becomes important. That means: the schema shouldn't change anymore, and if you do have to change the schema, it's a matter of using obsolete fields and creating upgrade scripts – just like you would do in live!
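A minimal sketch of what that looks like in AL (with made-up object names and numbers; the ObsoleteState/ObsoleteReason properties and an upgrade codeunit are the mechanisms the language gives you for this):

```al
tableextension 50100 "My Customer Ext." extends Customer
{
    fields
    {
        // Shipped with the wrong datatype - don't delete it, obsolete it ...
        field(50100; "My Old Code"; Code[10])
        {
            ObsoleteState = Pending;
            ObsoleteReason = 'Replaced by "My New Code" (50101)';
        }
        // ... and introduce the replacement under a new number
        field(50101; "My New Code"; Code[20]) { }
    }
}

codeunit 50101 "My Upgrade"
{
    Subtype = Upgrade;

    trigger OnUpgradePerCompany()
    var
        Customer: Record Customer;
    begin
        // Carry the data over from the obsolete field to the new one
        if Customer.FindSet(true) then
            repeat
                Customer."My New Code" := Customer."My Old Code";
                Customer.Modify();
            until Customer.Next() = 0;
    end;
}
```

This way the data survives the "schema change", at the cost of dragging the obsolete field along.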

Well, we will be "in development" for about a full year, and during that year, people need to test, users need to perform UATs, … and basically this means that if we make wrong assumptions in the analysis of our design and architecture (and in an agile world, that's not uncommon), we might end up with (lots of) obsolete tables and/or fields.

As you might have noticed – I don’t really have a comfortable way of handling this just yet… working on it!


Dependencies

Dependencies are a nice ability we have with extensions. But I can tell you, when you are developing the base app and the dependent app at the same time, in team, it does bring some challenges as well.

In a way, we are all dependent on different symbols, as we want to run our app in multiple countries. In my view, it's a good idea to include in your build process a workflow that tests whether your app would deploy on those countries as well. That's why our build looks something like:

  • Compile
  • Run code analysis
  • Create app
  • Deploy on W1
  • Deploy on BE
  • Deploy on CountryX

Thanks to this, we already caught an error in development where we added a field that already existed in the BE database. It compiled fine against W1, but it didn't against BE.

On top of that, you might create two apps, where one is dependent on the other. In that case, you also need your build process to test that dependency at all times. A simple compile of the dependent app is easy, but when you change the base app, you should also verify that your dependent app still compiles. In our scenario, a change to the base app results in a build of all apps that are dependent on it.
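In practice, the dependent app declares the base app in its app.json, something like this (hypothetical id, name and publisher – and note that older versions of the AL tooling call the property "appId" instead of "id"):

```json
{
  "dependencies": [
    {
      "id": "00000000-0000-0000-0000-000000000001",
      "name": "W1 Base Extension",
      "publisher": "MyCompany",
      "version": "1.0.0.0"
    }
  ]
}
```

The build server can read this section to figure out which dependent apps to rebuild whenever the base app changes.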

Distributed development environment

With C/SIDE, lots of partners implemented a "centralized development environment", which is quite unorthodox, but C/SIDE allowed it, and it was super easy. At all times, the developments were in one database: one overview, one "thing" to maintain.

With AL, we are "forced" to do it the right way. This is positive, for sure, but it's different. Now, code will have to be "continuously integrated" (which is indeed the "CI" part) – which means: merged. You don't "just" check out (and reserve) an object; you commit and merge, work with branches, … . All good, but different.

We use Git on VSTS, and work with pull requests to pull new code into the master branch, which means we have introduced code review along with this. Good stuff!


Docker

But this distributed environment also brings some challenges, which Docker can solve. Everyone needs to work in his own isolated development environment – you can't be deploying your apps to the same NST as your colleague.

Docker solves this, as it's easy to create a local container to start developing against – but it also comes with the assumption that people are able to work with Docker.

We have seen this is a difficult one – lots of developers (including me) are not infrastructure-minded. And in that case, "Docker" becomes a lot more difficult.

We decided to go for a managed approach: when a developer creates a branch, we spin up a new environment for him to do his deployments. Thing is, we wanted to make this as easy as possible, with the possibility to work remotely, with only one entry point. With Docker, that gets more difficult, as we can't just depend on a server instance name (which in a Docker container is always "NAV"); we need to do some routing for every container we spin up. It's not just one IP with multiple services; it's always a different IP with one service.

The alternative is that every developer manages his own Docker container on his own VM or laptop. But then, all of a sudden, we'd have to support the local development environments of developers on their laptops – I don't see that as feasible, actually. At least not yet ;-).


Testing

With tests, we have one big "challenge": it's not possible to run the complete default test suite just yet. So for the moment, in our setup, running the complete default tests is not implemented.

But that doesn't mean we can't implement our own tests – and that's something we do. And we also execute them with every single build, triggered by the pull requests.

On top of that, we should test all the dependencies as well – if you change something in the base app, it's obvious that some dependent app might have failing tests. That's another reason to always rebuild the dependent apps as well. Keep that in mind ;-).


Translations

As you probably know, the way we do translations has grown up as well. Developers don't need to be linguists anymore ;-). Translation is done through xlf files, "outside" the code.

But this also means we need to manage that in our process.

All team members will be creating their own branched xlf file – which will conflict every single time you try to merge branches. So the best thing is to handle translations totally outside the build scope. At this point, I put the xlf files in .gitignore. What I haven't done yet is implement a workflow for handling translations, because we simply don't need it yet.
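For completeness: the xlf file only gets generated when the translation feature is switched on in app.json – a sketch of that fragment:

```json
{
  "features": [ "TranslationFile" ]
}
```

Combined with a line like `Translations/*.xlf` in .gitignore (the exact path depends on your project layout), every developer can still generate the file locally, but it never ends up in a merge conflict.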

That's it for now .. I hope this post at least opened some eyes, confirmed some concerns, or even helped you solve some of these points. In any case, Belgium just lost their semi-final at the World Cup – so I'm signing off and going to be grumpy now ..





    • Jakub Vanak on July 11, 2018 at 4:10 am

    Great post, thanks for your time and sharing the experience!!! I have just passed through the blog post very quickly and I will read it tonight carefully.

    Just one quick note. Docker 🙂 I would recommend using the Traefik reverse proxy, which can perfectly route all requests to a specific container. I did it in my home network, and just doing something like "" (for modern dev) and "" (web client), I was able to access every single instance remotely over the Internet. And Traefik does everything automatically (it supports Docker very well). I have some configs that are the result of some testing and tweaking. No problem to share them.

    Have a nice day and keep blogging 🙂

    • Karim Mahrady on September 7, 2018 at 12:50 pm

    Hi Waldo,

    First, thanks for sharing your experience.

    We are having the exact same challenges on an existing project.

    What do you think about creating one extension for all the tables?



      • waldo on September 10, 2018 at 8:48 am

      Depends on the use case – why would you want to do that?

    • Kristof on December 11, 2018 at 10:38 am

    Hi Waldo!

    Nice post! I was just curious what tweaks have been made in the meantime? Found better ways to work in teams?
    Like many others, we have quite a journey in front of us, moving an old solution (2500 – 3500 objects – don't ask) to AL. And right now it's a small team, but we want to ramp up soon. So once again, thanks for your post. It gave us quite some good starting points to arrange upfront before going "AL(l) in".

    So, anything you want to share with us about what turned out to be a good idea and what not? 🙂


      • waldo on December 11, 2018 at 12:03 pm

      Working on it 😉

    • Konstantin on January 15, 2019 at 9:06 pm

    Hi Waldo,
    Did you find any good solution to use Docker Containers centralized, to eliminate all technical issues?


      • waldo on January 16, 2019 at 4:56 pm

      No, not yet, I'm afraid. We already had a solution to simply provision and remove databases/NSTs and such – so we haven't looked into Docker on enterprise level just yet…

    • Jakub Vanak on January 17, 2019 at 9:33 am

    Hello Konstantin,

    we use Docker Swarm to create services.

    We are still finishing the platform, but in the end, it's quite easy to destroy the whole platform and recreate it again. We use a hybrid solution (Win+Linux nodes). We run all proxy-related stuff on Linux nodes (because you can find many more Docker images for networking stuff on Linux) and NAV (and now SQL) on Windows nodes.

    The advantage of using Swarm (or theoretically Kubernetes as well, but for Windows containers I still prefer to stay with Docker Swarm today) is that you can easily add nodes and distribute the workload seamlessly.

    On the proxy level, we managed to use Traefik to proxy all HTTP services, with SSL provided out of the box by Let's Encrypt (we just set a few params and don't care about it anymore, as we use a wildcard certificate automatically updated by LE via ACME).

    And for TCP (RTC) we use HAProxy, with some other containers doing autodiscovery on the Swarm level, so we don't need to update anything manually.

    We also provide C/SIDE access (yes, you can access C/SIDE via the Internet :)) and it's done by TCP over HTTP. We use Chisel to make a tunnel between your PC and the container, and this is automatically routed again by Traefik.

    It looks complicated and it is. But once you set it up, you can spin up the same config on different places without any effort as you just deploy all the stack in one step.

    • marknitek on February 6, 2019 at 7:40 am

    Some time passed by, do you have any updates to this? Have you found a solution for translation/xlf by now?

      • waldo on February 6, 2019 at 9:12 am

      Yes, it’s high on my todo list, trust me 😉

    • gualter on February 7, 2019 at 11:25 am

    Nice post, we are all in the same fight :). Something is on my mind and I don't know what the best approach is. Let's take an example: your project is finished, you have one extension, you install it on the live environment, and after some time the client asks for a new functionality.
    Do you create a new project in Git on VSTS and then a new extension, or do you add the functionality to the previous extension (and use the same project in Git)?
    – If you go for a new extension, you might end up managing multiple extensions and Git projects (if the client asks for more functionalities in the future).
    – If you go for one extension only and upgrade the versions, it will be very clean and easy to maintain, but you might want to use some functionality already in there for another client.

    What do you think about this?

      • waldo on February 7, 2019 at 11:48 am

      Well – I would always go for updating and upgrade-routines.
      Not sure what you mean with “might want to use some functionality already in there for other client”

    • gualter on February 7, 2019 at 12:26 pm

    I get your point, and that's what I'm thinking of doing too.
    Related to the other comment: what I was trying to say was one extension with 10 functionalities, where you know one of the functionalities will be used multiple times in other projects.
    Thanks for the feedback!

    • José on July 1, 2019 at 6:49 am

    Thank you very much for the post. Even though I see a lot of issues that still need resolving or another spin, I appreciate the effort put into it.

