I’m currently working on an interesting problem: the interaction between micro-services.
I already have a solution, but it breaks down in some cases, and it has reached the point where I really need it to work reliably.
This is a problem that is interesting at the best of times. In this case, though, I need to make it generic enough that it will work in ALL cases.
And, to make things even more interesting, I need to integrate this into the Software Factory (SF), so it gets generated with every service I write.
To ensure service decoupling, I use proxies. A proxy is an object that represents an entity living in a separate service (i.e., a separate bounded context). The client service uses proxies as if they were its own entities, without worrying about where they come from. This is akin to the translation and anti-corruption layers in DDD.
The issue with proxies is that you need to keep them up to date with their original entities, which are managed separately.
The proxies we use are based on a core structure; every item has the same basic properties: Id, Name, Description, Source, and a few more working properties. The core structure of entities is essentially the same, minus the Source: Id, Name, Description.
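As a rough sketch of those two core structures (the names are illustrative; the real SF-generated types are not shown in this post), a proxy is simply the entity core plus a record of where it came from:

```python
from dataclasses import dataclass

# Hypothetical names -- stand-ins for the actual SF-generated types.

@dataclass
class EntityCore:
    """Core structure shared by all entities in the owning service."""
    id: str
    name: str
    description: str

@dataclass
class ProxyCore(EntityCore):
    """A proxy mirrors the entity core and also records its origin service."""
    source: str
```

The only structural difference between the two is the Source property, which is why a single flat DTO can serve both sides.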
We use a message bus to communicate between services. When an entity is modified, it raises an event that is sent on the message bus. Client services listen for those specific events and update their proxies accordingly.
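A minimal sketch of that flow, assuming a toy in-process bus in place of the real distributed message bus (event and handler names are made up for illustration):

```python
from collections import defaultdict

class MessageBus:
    """Toy in-process stand-in for the distributed message bus."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._handlers[event_type]:
            handler(payload)

# Client side: keep the local proxy store in sync with incoming events.
proxies = {}

def on_customer_updated(dto):
    proxies[dto["id"]] = dto  # overwrite the local proxy with the event data

bus = MessageBus()
bus.subscribe("CustomerUpdated", on_customer_updated)

# Owning service side: raise the event when the entity changes.
bus.publish("CustomerUpdated", {"id": "42", "name": "Acme", "description": "..."})
```

The interesting design question, covered below, is what the event payload should contain.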
Initially, I thought I could get away with a single generic DTO for every proxy: ProxyDto. Every entity update event sends a generic ProxyDto along with it. Then, if the client service needs additional information, it can call the original service to fetch the entity and update the proxy.
That already causes a couple of issues. In about 50% of cases, I need to perform a second service call, and those calls are expensive because they return the complete object hierarchy, which is not always required. Also, if the call fails, I need to remember to retry later; otherwise, the proxy system falls out of sync.
Making a call to the original services is almost unavoidable. I usually do this at startup to ensure the proxies are in sync. But, making a call as part of an event handler is something we would rather avoid. There probably is a better way.
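The retry concern alone adds real machinery. A minimal sketch of what "remember to try again later" implies, with `fetch` and `store` injected so the policy stays generic (this is an illustrative helper, not the SF implementation):

```python
import time

def sync_proxy(entity_id, fetch, store, retries=3, delay=0.1):
    """Fetch the flat entity from the owning service and update the local proxy.

    Returns True on success. On repeated failure it gives up and returns
    False, at which point the caller must queue entity_id for a later retry
    or the proxy falls out of sync.
    """
    for attempt in range(retries):
        try:
            store(entity_id, fetch(entity_id))
            return True
        except ConnectionError:
            time.sleep(delay * (2 ** attempt))  # simple exponential backoff
    return False
```

Every event handler that makes a remote call needs some variant of this, which is one more reason to keep remote calls out of the handlers entirely.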
Another issue is that, in many cases, the entities are not just flat data; they are sometimes complex hierarchical structures, which makes the service call payloads much larger.
In addition, I thought I could save work by only raising events on root entities and ignoring changes to their children. By calling the service every time, I would get a complete snapshot of the entity. However, this introduces coupling: the client service needs to be aware of the entity’s structure to find all the information it needs.
This also means that, since changes to child entities do not generate events of their own, they need to raise an event on their parents so that clients can be notified of the change. Entire parent structures then have to be reloaded by the client.
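To make the problem with that design concrete, here is a sketch of the bubbling it forces (names are illustrative; this is the rejected approach, not the SF-generated code):

```python
class Bus:
    """Minimal event recorder standing in for the message bus."""
    def __init__(self):
        self.events = []
    def publish(self, event_type, payload):
        self.events.append((event_type, payload))

class Parent:
    def __init__(self, id):
        self.id = id

class Child:
    """Rejected design: a child change raises its parent's event."""
    def __init__(self, parent, bus):
        self.parent, self.bus = parent, bus
    def rename(self, name):
        self.name = name
        # The child has no event of its own; clients only ever see
        # "ParentUpdated" and must reload the whole parent structure.
        self.bus.publish("ParentUpdated", {"id": self.parent.id})
```

The client receives no hint about which child changed, so every child edit costs a full reload of the parent hierarchy.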
In short, while this would work in theory, there are edge case issues, and the generated code was getting a lot more complicated than it needed to be.
Figuring out the solution
In a typical development environment, this would be handled on a case-by-case basis: we would manually choose how to handle each situation, then document it as a solution to reuse in similar cases.
When generating code, however, you need to develop solutions that work in ALL cases.
The first part is figuring out the requirements.
These are all the potential scenarios:
- Flat entity – Flat proxy
- Hierarchical entity – Flat proxy
- Hierarchical entity – Hierarchical proxy
Also, there are things that I am reluctant to do:
- Send complete hierarchical entities on the message bus (performance, security, encapsulation issues)
- Have the client containing the proxy know about the remote entity’s structure (breaks the separation of concerns)
So the change I am looking at doing is:
- Replace the generic ProxyDto with a flat version of each entity.
- Generate events individually for every entity that could be used as a proxy, including children (closer to an event-sourcing approach).
- Fix the proxy interface endpoints to return flat DTOs of all entities that could be used as a proxy.
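The change can be sketched as follows: each entity type, child or root, gets its own flat DTO and raises its own event, and children carry a reference to their parent instead of being embedded in it. All names here are hypothetical examples, not the SF-generated types:

```python
from dataclasses import dataclass, asdict

# Illustrative flat DTOs -- one per entity type, including children.

@dataclass
class OrderDto:
    id: str
    name: str
    description: str

@dataclass
class OrderLineDto:
    id: str
    name: str
    description: str
    order_id: str  # reference to the parent, not the embedded parent itself

def order_line_updated_event(line: OrderLineDto) -> dict:
    """Each entity raises its own event carrying only its flat DTO."""
    return {"type": "OrderLineUpdated", "payload": asdict(line)}
```

Because the payload is flat and complete, the client can update its proxy without a follow-up service call and without knowing anything about the remote entity's hierarchy.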
The next step is to manually implement the changes on a sample and generate micro-services so I can identify the target state.
Lastly, I can use git to compare the previous state with the new one and identify the changes I made.
Implementing the solution
That is where the challenge of the SF comes in.
First, I need to identify which template is used to generate each file that I modified. Then I need to know where the specific code is generated; is it in the main template or a sub-template? If I create a new file, I need to know if this can be generated by an existing template or if I need to create a new template.
Then I need to define which information from my model is needed to make the change, check whether that information is already available in the template, and, if not, make it available.
Then I need to define the scope of the change in the template. Where does the change fit? Should I segregate it in a sub-template, so it has less impact on the rest of the template?
Then I need to change the existing template, or create a new one if the file is new.
Then I need to go into the Genset to ensure that the template’s conditions are correct and add new templates to the generation process.
I do not have the typical code completion and syntax verification on T4 templates, so I usually have to fix all the T4 compilation errors first. Then I run the SF to try to generate the code. Once the SF runs successfully, I verify that the generated code matches what I expect and update the templates accordingly.
Once everything is to my liking, I can re-generate ALL my micro-services and deploy the new solution.
That is a lot of work! However, what is remarkable about the SF is that once you implement a solution, you can re-use it repeatedly for free…
In this case, I’m only doing this to support the next version of the SF. So I re-generated 14 micro-service projects (2,962 files, 87,453 LOC), and the changes to the solution affected exactly 1,600 files. Not bad for 3 days of work…
Is there a better way?
I’m sure there is a better way to manage micro-services interaction!
At this point, however, this will have to be sufficient. The objective really is to get the next version of the SF online. Then I will have to rebuild all of this from new models anyway.
However, if you know of a better way of doing this, I would be very interested. Please let me know in the comments…