
Event Handlers: Unsuitable for Data Integrity

Before-Commit (BC) event handlers trigger immediately before an object is committed to the Database Transaction Buffer. While this makes them a "last line of defense" for specific technical requirements, such as enforcing object uniqueness (e.g., ensuring a System ID is unique across the entire database), they are not a suitable mechanism for comprehensive data integrity.

This page outlines why relying on event handlers for comprehensive data integrity is an architectural risk.

7.1. The Risk of "Silent Failure"

A significant risk in event-driven logic is the interpretation of the Boolean return value of an event handler. If a Before-Commit microflow returns false without the "Raise an error" flag being explicitly set, the Mendix Runtime cancels the commit for that specific object only.

This behavior creates a state divergence: the calling microflow proceeds as if persistence had succeeded, while the database remains in its previous state. While this can be mitigated by ensuring the "Raise an error" flag is always checked, the potential for human error makes it an unreliable foundation for data integrity. Relying on a configuration flag to prevent a transaction from desynchronizing is architecturally fragile compared to using service-level validation, where failures are explicit and blocking by default.
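This divergence can be modeled in a few lines. The sketch below is not the Mendix API; `commit`, `beforeCommit`, and the in-memory `database` list are hypothetical stand-ins that mimic how a handler returning false without the "Raise an error" flag cancels the commit while the caller sees a normal return.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class SilentFailureDemo {
    static final List<String> database = new ArrayList<>();

    // Simulated runtime: runs the Before-Commit handler, then persists
    // only on 'true'. Without raiseError, a 'false' result is swallowed.
    static void commit(String obj, Predicate<String> beforeCommit, boolean raiseError) {
        if (!beforeCommit.test(obj)) {
            if (raiseError) {
                throw new IllegalStateException("Before-Commit rejected: " + obj);
            }
            return; // silent cancellation: the caller sees a normal return
        }
        database.add(obj);
    }

    public static void main(String[] args) {
        Predicate<String> rejectEmpty = s -> !s.isEmpty();

        commit("order-1", rejectEmpty, false); // persisted
        commit("", rejectEmpty, false);        // silently dropped

        // The caller "succeeded" twice, but only one object was persisted.
        System.out.println(database.size()); // prints 1
    }
}
```

Both calls return normally, so the calling logic has no signal that the second object never reached the database: exactly the state divergence described above.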

7.2. Absence of User Feedback and "Pink Field" Support

Even when the Event Handler is configured to raise an exception, it lacks the necessary communication channel to the User Interface:

  • Validation Feedback Suppression: Event Handlers cannot trigger "Validation Feedback" actions (the pink highlights under input fields).
  • Context Loss: If configured to "Raise an error," the user is met with a generic system exception (identical to Option 1 in section 6.1). The transaction rolls back to ensure integrity, but the user is left with no clue which field failed or why, often losing all their session data in the process.

This forces the transaction into a fault state that must be managed by the parent Microflow. Using triggers in this way does not simplify the architecture; it merely shifts the burden of error handling and rollback management elsewhere in the stack.
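The difference between the two feedback channels can be sketched as follows. This is a hypothetical illustration, not the Mendix API: `validate` stands in for microflow-context validation that can map errors to specific input widgets, while `beforeCommit` stands in for an event handler whose only escalation path is a generic exception.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FeedbackDemo {
    // Microflow-context validation: knows which field failed and why,
    // so the UI can render targeted "pink field" messages.
    static Map<String, String> validate(String name, String email) {
        Map<String, String> feedback = new LinkedHashMap<>();
        if (name.isBlank())       feedback.put("Name", "Name is required.");
        if (!email.contains("@")) feedback.put("Email", "Enter a valid email address.");
        return feedback; // empty map = valid; keys identify input widgets
    }

    // Event-handler style: the only channel is an exception carrying no
    // field association, which surfaces as a generic system error.
    static void beforeCommit(String name, String email) {
        if (name.isBlank() || !email.contains("@")) {
            throw new IllegalStateException("Commit rejected"); // which field? unknown
        }
    }
}
```

The first approach hands the UI a field-to-message mapping; the second hands the user a stack trace.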

7.3. Failure of Atomic Batch Validation

The most critical limitation regarding Atomicity is that Before-Commit triggers fire on an object-by-object basis, not at the transaction level. This architecture makes it impossible to validate a collection of related objects as a single, cohesive unit.

  • Incomplete Transactions: If a validation failure occurs on the final object in a set, the preceding objects may already be staged in the Database Transaction Buffer. While a raised error may trigger a rollback, the validation logic itself was executed in isolation for each object.
  • Lack of Context: A trigger on "Object A" cannot inherently "see" the state of "Object B" in the same commit batch. True data integrity requires the entire Microflow Context to be validated as one unit before any data is sent to the buffer. Because triggers cannot enforce this "all-or-nothing" rule across different entities, they cannot guarantee a clean transition from one consistent database state to another.
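The gap between per-object and transaction-level rules can be shown with a small sketch. The names here (`perObjectValid`, `batchValid`, the order-lines scenario) are hypothetical; the point is that a rule expressible only over the whole batch, such as "line amounts must sum to the order total", cannot be stated as a per-object trigger.

```java
import java.util.List;

public class BatchValidationDemo {
    // Per-object rule: each line amount must be positive. It cannot
    // express "all lines together must match the order total".
    static boolean perObjectValid(int lineAmount) {
        return lineAmount > 0;
    }

    // Transaction-level rule: validates the collection as one unit,
    // before anything is sent to the buffer.
    static boolean batchValid(List<Integer> lines, int orderTotal) {
        return lines.stream().allMatch(a -> a > 0)
            && lines.stream().mapToInt(Integer::intValue).sum() == orderTotal;
    }

    public static void main(String[] args) {
        List<Integer> lines = List.of(40, 30, 40);
        // Every line passes the per-object check in isolation...
        boolean allPerObject = lines.stream().allMatch(BatchValidationDemo::perObjectValid);
        // ...but the batch is inconsistent with a header total of 100.
        System.out.println(allPerObject + " " + batchValid(lines, 100)); // prints "true false"
    }
}
```

Each object is individually valid, yet the set as a whole violates the invariant: the "all-or-nothing" rule triggers cannot enforce.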

7.4. Conclusion on Event Handlers

Because of these limitations, a clear distinction should be made in microflow architecture:

  • Use Event Handlers for Data Augmentation: They are ideal for "side-effect" logic that does not require user intervention, such as setting timestamps or writing audit entries. In these cases, the microflow should always return true to avoid silent failures.
  • Avoid Event Handlers for Data Validation: Business rules and integrity checks should be handled within the Microflow Context prior to any commit actions. This ensures that the developer maintains control over user feedback and transaction atomicity.
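The recommended order of operations can be sketched as a validate-then-commit pattern. This is a hypothetical illustration (`validateAll`, `save`, and the in-memory `database` list are assumptions, not platform API): every check runs over the full working set first, and persistence happens only if all of them pass.

```java
import java.util.ArrayList;
import java.util.List;

public class ValidateThenCommitDemo {
    static final List<String> database = new ArrayList<>();

    // Validate the entire working set in the microflow context first.
    static List<String> validateAll(List<String> objects) {
        List<String> errors = new ArrayList<>();
        for (String o : objects) {
            if (o.isBlank()) errors.add("Blank object is not allowed.");
        }
        return errors;
    }

    // All-or-nothing: nothing is committed unless every check passed,
    // and a failure leaves control over user feedback with the developer.
    static boolean save(List<String> objects) {
        if (!validateAll(objects).isEmpty()) {
            return false; // show feedback to the user; database untouched
        }
        database.addAll(objects);
        return true;
    }
}
```

Because validation completes before any commit action runs, a failure blocks the whole set, preserving both user feedback and transaction atomicity.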