Those who know the history of Flowable know that it is a fork of the Activiti open source project, made by the team that originally created Activiti. This article isn’t going to go into the reasons for the fork, but rather look at the improvements that have been made since then. We get asked a lot about the differences between the two open source codebases (as well as other forks of Activiti), so this is an attempt to call out 10 of the many advances made since the fork – but if you want to be really sure, the beautiful thing about open source is that you can look at the code and the activity around it and see for yourself.
Strategically, this one is huge. Introducing a CMMN engine to the toolkit adds a whole new set of dimensions for modeling intelligent business automation. Most significantly, this engine is a completely native implementation of the CMMN semantics, so it’s not piggybacking off the BPMN engine. Where there’s commonality with BPMN execution, such as creating tasks for users, shared services between the engines provide it. Combining CMMN, BPMN and DMN as part of a single solution is becoming the norm. In real-world implementations, we’ve found CMMN to be a powerful way to model problems that are very human- or event-driven. Modelers solving complex automations have also found it a sophisticated way of describing the overall end-to-end business activity, managing the different processes and their relevance in addressing the need at any point in time.
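To give a flavor of how the native CMMN engine and those shared services fit together, here’s a minimal sketch using the standard Flowable 6.x Java APIs; the case definition key "reviewCase", the variable and the assignment details are hypothetical, and it assumes the case model is already deployed.

```java
import org.flowable.cmmn.api.CmmnRuntimeService;
import org.flowable.cmmn.api.CmmnTaskService;
import org.flowable.cmmn.api.runtime.CaseInstance;
import org.flowable.task.api.Task;

public class StartCaseExample {

    // A minimal sketch: "reviewCase" is a hypothetical case definition key,
    // assumed to be deployed already.
    public void startCase(CmmnRuntimeService cmmnRuntimeService, CmmnTaskService cmmnTaskService) {
        CaseInstance caseInstance = cmmnRuntimeService.createCaseInstanceBuilder()
                .caseDefinitionKey("reviewCase")
                .variable("customerId", "C-1001")
                .start();

        // Human tasks created by the case surface through the shared task API,
        // the same Task type that BPMN user tasks use
        Task task = cmmnTaskService.createTaskQuery()
                .caseInstanceId(caseInstance.getId())
                .singleResult();
        cmmnTaskService.complete(task.getId());
    }
}
```

The same Task object and query concepts are used whether a task originates from a CMMN case or a BPMN process, which is what “shared services between the engines” means in practice.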
Many solutions these days use events as the backbone for the interaction between microservices, systems and people, be it Kafka/Confluent, RabbitMQ, AWS SQS or ActiveMQ/JMS. The ready-to-go integration in Flowable is both highly scalable and extensible. It’s possible to abstract away from the underlying event implementation and just work with business events that contain process and case variables, so the low-level implementation can change without affecting the case or process models that describe their effects. There’s even an internal event mechanism now, which provides the benefits of event-driven automation without needing an external framework. It’s incredibly useful for event orchestration using BPMN, and even more powerful when coupled with CMMN to provide contextual sensitivity to event-driven behavior.
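As a rough sketch of what this looks like in code – the builder calls below follow the event registry API introduced around Flowable 6.5, so treat the exact method names and packages as an approximation to check against your version – a channel and a business event can be registered programmatically and then referenced from process or case models. The channel key, event key and field names are hypothetical.

```java
import org.flowable.eventregistry.api.EventRepositoryService;
import org.flowable.eventregistry.api.model.EventPayloadTypes;

public class EventRegistryExample {

    // A minimal sketch: "orders-inbound" and "orderReceived" are hypothetical names.
    public void registerOrderEvent(EventRepositoryService eventRepositoryService) {

        // Inbound channel: where events arrive (here a JMS destination); swapping the
        // adapter (Kafka, RabbitMQ, ...) doesn't change the models that consume the event.
        eventRepositoryService.createInboundChannelModelBuilder()
                .key("orders-inbound")
                .resourceName("orders.channel")
                .jmsChannelAdapter("orders")
                .eventProcessingPipeline()
                .jsonDeserializer()
                .detectEventKeyUsingJsonField("eventKey")
                .jsonFieldsMapDirectlyToPayload()
                .deploy();

        // Business event: correlation and payload are expressed in terms that map
        // directly onto case and process variables.
        eventRepositoryService.createEventModelBuilder()
                .key("orderReceived")
                .resourceName("orderReceived.event")
                .correlationParameter("customerId", EventPayloadTypes.STRING)
                .payload("productNumber", EventPayloadTypes.STRING)
                .deploy();
    }
}
```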
This major change was started while the team was still working on Activiti, but it wasn’t completed until after the fork to Flowable. There’s a history to the different generations of Java BPM engines that describes the evolution from introducing an abstract process virtual machine to optimizing it out again in the current generation of BPMN execution. This capability opens up many options for executing BPMN that would be near impossible with the previous generation of engines, while also allowing performance enhancements. One of those capabilities is significant enough to be called out on its own…
Sounds painful, but it’s a very powerful way of allowing process fragments to be introduced on-demand into a running process instance – either by a user or automatically, for example AI-driven. You can deploy a process model and start any number of instances of it; each of them will follow the steps defined in the common model. With dynamic injection, individual tasks or even complete process models can be inserted at any point into a running instance, and that one instance will continue as if the model had originally included the inserted process. This means a simple, basic process can be modeled without having to account for every possible exceptional situation, relying instead on a human or a machine-learned system to decide to inject processes that handle different circumstances.
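A minimal sketch of what that can look like through the Java API: the DynamicBpmnService and its user task builder exist in Flowable 6.x, though the builder class sits in an impl package and details may vary by version, and the task id, name and assignee here are hypothetical.

```java
import org.flowable.engine.DynamicBpmnService;
import org.flowable.engine.impl.dynamic.DynamicUserTaskBuilder;

public class DynamicInjectionExample {

    // A minimal sketch: injects an extra user task into one running process instance.
    // Other instances of the same process definition are unaffected.
    public void injectFraudCheck(DynamicBpmnService dynamicBpmnService, String processInstanceId) {
        DynamicUserTaskBuilder taskBuilder = new DynamicUserTaskBuilder();
        taskBuilder.id("injectedFraudCheck");       // hypothetical task id
        taskBuilder.name("Additional fraud check");
        taskBuilder.assignee("fraud-team");

        dynamicBpmnService.injectUserTaskInProcessInstance(processInstanceId, taskBuilder);
    }
}
```

Similar operations exist for injecting a whole embedded sub-process, so a complete fragment modeled separately can be dropped into a running instance in the same way.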
You just need to look at GitHub and the open source forums to see how active the development is on the core Flowable engines. The number of contributions coming from the community is also continually growing. There are people around the planet using Flowable for pet projects or for powering global, mission-critical solutions, and many will help answer questions from newbies and experienced people alike.
Maximizing performance and minimizing database size were some of the key drivers for this capability. The original approach of keeping history in the same database, and having to trade off the level of audit history you need against managing database size, was clearly just a first step. Like others, we’ve added pruning capabilities to remove history as it ages. But by also adding a completely different approach, Flowable is able to offer faster throughput and to pass transactional history to external systems (typically NoSQL) for warehousing or analytics. More topically, we’ve used it to feed historic data to machine learning systems that then feed back into process and case execution.
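For instance, async history can be switched on when building the engine; a minimal sketch, assuming Flowable 6.x (the custom handler that would forward the history jobs to an external store such as a NoSQL database is omitted here):

```java
import javax.sql.DataSource;

import org.flowable.engine.ProcessEngine;
import org.flowable.engine.impl.cfg.StandaloneProcessEngineConfiguration;

public class AsyncHistoryConfigExample {

    // A minimal sketch: with async history enabled, historic data is written by async
    // jobs instead of inside the runtime transaction, keeping runtime throughput high.
    public ProcessEngine buildEngine(DataSource dataSource) {
        StandaloneProcessEngineConfiguration configuration = new StandaloneProcessEngineConfiguration();
        configuration.setDataSource(dataSource);
        configuration.setDatabaseSchemaUpdate("true");
        configuration.setAsyncHistoryEnabled(true);
        configuration.setAsyncExecutorActivate(true);
        return configuration.buildProcessEngine();
    }
}
```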
The database used by Flowable doesn’t need to be relational. As long as the data source supports transactions, Flowable can use it. While most people do use relational databases with Flowable, some are looking to run it on non-relational databases. The world of databases can always change, so by providing abstract data sources, Flowable will always be ready to exploit advances, such as with CockroachDB. It also gives people options to use whatever data sources they want, in the way they want. There’s an experimental integration with MongoDB to illustrate the point.
We have looked hard at how data is stored and queried, and how it performs at high throughput and scale. This has resulted in a number of improvements in how data is represented for active processes and jobs. One of the bottlenecks had been the use of history data sources to find information about previous steps of live instances. Flowable now keeps all the information about active instances separate from history, so no matter how large your history grows, runtime performance stays optimal. Similarly, jobs of different types previously shared the same store, but now hold their data separately to ensure the fastest possible querying of job state.
DMN is an open standard for describing decision-making through business rules. With Flowable you can link multiple sets of business rules together to form higher-level decisions: Decision Requirement Diagrams (DRDs) allow you to model multiple decision tables connected as dependencies. Instead of using a process to define the aggregation of business rule outcomes, a single DRD can be used.
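A minimal sketch of evaluating a deployed decision from Java, assuming the standard Flowable 6.x DMN API (the service and builder names should be checked against your Flowable version); the decision key "determineDiscount" and the input variable are hypothetical:

```java
import java.util.List;
import java.util.Map;

import org.flowable.dmn.api.DmnRuleService;

public class ExecuteDecisionExample {

    // A minimal sketch: evaluates a deployed decision with one input variable
    // and returns the matching rule outcomes.
    public List<Map<String, Object>> determineDiscount(DmnRuleService dmnRuleService) {
        return dmnRuleService.createExecuteDecisionBuilder()
                .decisionKey("determineDiscount")
                .variable("orderAmount", 1500)
                .execute();
    }
}
```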
For all the Activiti-based engines, the execution of tasks after a parallel gateway is not actually parallel: all the flows are serialized. Thanks to its new architecture, Flowable has been able to fix this and can execute parallel flows in a truly parallel way. Not only that, it can execute blocking tasks, such as making REST calls, highly efficiently (with minimal threads, for the technically minded). This isn’t so problematic when dealing with parallel human tasks, but it is critically important when working with microservices or events running in parallel. And of course, transactional coherence is fully maintained.
Flowable turns the power up to 11. Behind all of these advantages is the team that’s continually innovating and driving open source BPM and Intelligent Business Automation forward as the technology landscape and demands change around us. These are the people who have evolved the engines radically, but without revolutions: no breaks in continuity, no loss of capability. Consistent interfaces and schemas across releases mean that moving from the old Activiti-based architecture to Flowable is simple – even with in-flight processes. You only get that when the team behind it has a deep understanding of BPM engine implementation and of the needs of the businesses and organizations using it. That, and an uncompromising commitment, openness and enthusiasm for creating the very best software.
As AI gains prominence as a pivotal technology and enterprises increasingly seek to leverage its capabilities, we are actively exploring diverse avenues for integrating AI into process automation.
In the past few months, this has culminated in a clear understanding of the strengths and weaknesses of Generative AI (GenAI), and of where it makes sense to integrate with it and – perhaps more importantly – where it doesn’t.
Tools like ChatGPT can handle a variety of business tasks, automating nearly everything – and it’s true, GenAI really can do a wide range of tasks that humans currently do. So why not let business users work directly with AI? And what about Agentic AI?