Information Overload

Sorry, I’ve been away from my blog for a while (again). I tend to plow head-down into the mountain of work that I have, and my blog tends to end up lower on the priority scale…

It’s interesting how things take an unforeseen turn. I’ve been working towards making a demo of the SF. I have a pretty solid plan, but I need a few other things in place before that can happen. In support of that, I’ve been expanding the models and SF’s capabilities. And it’s been growing… Like really growing.

By the numbers

The scale and quality of the code generated by the SF are increasing at a geometric rate: I estimate that the number of lines of code I produce doubles every 3-6 months.

In the beginning, many years ago, I was happy when I got 20 files out of a model. Now I expect extensive functionality; looking at the project I was working on yesterday, I get an average of 700 files and 30K LOC out of each model.

I had a peek in my database and pulled some numbers. Since my last post in July:

  • my gensets went from 47 to 115;
  • I created over 774 new templates, bringing the total to 1869 templates organized in 59 projects;
  • and I now have 109 different models.

As a result, I have a lot more information to manage.

It’s an evolution thing

Not only do I produce more lines, but I also have more options. All those gensets generate different code, which allows me to create more diverse functionality. Now, I can:

  • generate synchronous or asynchronous code;
  • generate fully functional services, clients (for different languages), API gateways, and even a default set of user interfaces;
  • target different databases, with backup and restore functionality, access control, and authentication with JWT;
  • generate unit test code;
  • and organize the output differently.
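As a purely illustrative sketch of how those choices combine (every option name here is hypothetical; none of them come from the SF's actual configuration), a single build request could look like:

```python
# Hypothetical illustration of the kinds of options listed above;
# these keys and values are made up for the example, not taken from the SF.
build_options = {
    "concurrency": "async",          # synchronous or asynchronous code
    "outputs": ["service", "client", "api-gateway", "default-ui"],
    "database": "postgresql",        # one of several target databases
    "features": ["backup-restore", "access-control", "jwt-auth"],
    "unit_tests": True,              # also emit unit test code
    "output_layout": "per-module",   # organize the output differently
}

print(len(build_options["outputs"]), "output kinds requested")
```

Even this toy version makes the combinatorial growth obvious: each new option multiplies the number of distinct outputs a single model can produce.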

I have plans to support a lot more than that, which means more gensets and models. That will significantly increase:

  • the number of templates and gensets;
  • the number of models;
  • the number of results;
  • and the amount of generated code.

I have made significant changes to my tooling so I can build those functionalities faster and more efficiently. At the same time, this means I have more choices, but also an ever-increasing amount of information to manage.

Another layer

I started to see a significant issue when mapping models to functionality at the build-definition level. For each model, I have to pick the generation activities to execute from a list of 155 gensets (and growing) that compete with, and sometimes depend on, one another. That is quickly becoming unmanageable, especially since I expect that list to eventually grow into the 1000s…

I need to introduce an extra layer to manage that complexity. I will call this layer Blueprints. A Blueprint will group the set of generation activities used to generate a specific output.

This should simplify the build definition: I can associate multiple models with a specific Blueprint, instead of with multiple gensets, to get the desired output.
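A minimal sketch of the idea, assuming a Blueprint is essentially a named, dependency-aware group of gensets (all class and genset names below are hypothetical; the SF's real internals aren't shown here):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Genset:
    """A single generation activity (hypothetical stand-in for the SF's gensets)."""
    name: str
    depends_on: tuple[str, ...] = ()

@dataclass
class Blueprint:
    """Groups the gensets needed to produce one specific kind of output."""
    name: str
    gensets: list[Genset]

    def ordered(self) -> list[Genset]:
        """Resolve dependencies so a genset runs after the ones it needs."""
        done: list[Genset] = []
        names_done: set[str] = set()
        pending = list(self.gensets)
        while pending:
            progressed = False
            for g in list(pending):
                if set(g.depends_on) <= names_done:
                    done.append(g)
                    names_done.add(g.name)
                    pending.remove(g)
                    progressed = True
            if not progressed:
                raise ValueError(f"circular genset dependency in {self.name}")
        return done

# Build definition: each model is bound to one Blueprint, not to N gensets.
async_service = Blueprint("async-service", [
    Genset("entities"),
    Genset("async-service", depends_on=("entities",)),
    Genset("jwt-auth", depends_on=("async-service",)),
    Genset("unit-tests", depends_on=("async-service",)),
])

build = {"Customer.model": async_service, "Order.model": async_service}
for model, bp in build.items():
    print(model, "->", [g.name for g in bp.ordered()])
```

The point of the extra layer: the build definition names one Blueprint per model, and the competing/dependent genset bookkeeping moves inside the Blueprint where it is written once.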


My previous post (in July) was about struggling to manage the amount of information the SF requires and generates. Since then, the amount of data has more than quadrupled.

While I have changed the tooling I use to manage that information, it is now time to address some of those challenges at the engine level.

I am constantly amazed by what the SF can achieve, and I’m looking forward to taking it to the Next Level…

And, yes! There is a demo coming…
