UI testing
A UI test (user interface test) is executed from the perspective of an end user, interacting directly with the user interface (UI) to test the behavior of the application as a whole.
UI tests are the only way to check specific aspects of the user's experience that occur directly in the browser. These include:
- (Conditional) visibility: When elements on a page are shown or hidden based on factors like the user's role, the specific data presented, or the state of an object.
- User flow: The sequence of navigating from one page to the next.
- Client-side behavior: The way widgets behave when executing logic directly in the browser (such as Nanoflows or custom JavaScript widgets).
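A conditional-visibility check is usually driven by an explicit specification of what each role should see; the UI test then logs in as each role and compares the page against that specification. A minimal sketch in Python, with hypothetical role and element names (your application's actual roles and widget names will differ):

```python
# Hypothetical visibility rules: which elements each user role should see.
# In a real project these come from the functional design / risk analysis.
VISIBILITY_RULES = {
    "administrator": {"btnDelete", "btnEdit", "btnView"},
    "employee": {"btnEdit", "btnView"},
    "guest": {"btnView"},
}


def expected_visible(role):
    """Return the set of elements that should be visible for a role.

    A UI test would log in as the given role, locate each element on the
    page, and assert that its visibility matches this specification.
    Unknown roles should see nothing.
    """
    return VISIBILITY_RULES.get(role, set())
```

Keeping the rules in one table makes the test cases easy to review against the requirements, and adding a role or element is a one-line change.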
1. Functional testing via the UI
UI tests can be used to test the entire functionality of an application (functional or end-to-end testing). However, relying on the UI layer for functional testing is often inefficient.
If an application is not built with isolated components and does not clearly separate unit logic from process/integration logic (a sign of low structural quality), the development team is forced to test the application primarily through slow, end-to-end functional tests run via the UI.
The Menditect Testability Framework promotes moving testing efforts down the Testing Pyramid. This means developers should prioritize writing fast, reliable Unit and Component Tests for the underlying business logic, and minimize the number of slower, more brittle UI/Functional tests at the top layer.
Note that back-end processes - such as scheduled events or processes driven by external API calls - cannot be tested via the UI, as they are not triggered by the user interface. These processes must be validated using API testing or other server-side methods.
2. Creating a UI test
While the specific setup may vary, a comprehensive UI test generally follows this pattern:
- Define the business risks: Based on a formal risk assessment, determine which functions are the most critical and therefore require testing via the UI.
- Define coverage and test techniques: Based on the risk assessment and overall test strategy, determine the necessary test coverage goals and the specific techniques (like boundary testing) that must be applied.
- Design the logical test cases: Based on the coverage and techniques identified, design the logical steps for each test case.
- Determine the required test data: Analyze the logical test cases to design the necessary data sets.
- Design the test scripts: Based on the required test data and the logical test cases, design and structure the corresponding test scripts in a test framework. It is recommended to create and clean up the test data as part of the test scripts, so that test execution does not depend on the state of the database.
- Implement the scripts: Once the scripts are designed, they are implemented into the automated test tool. This requires the tester to create a direct link between the scripts and the specific elements (like buttons or fields) in the Mendix application. This linking process is called the locator strategy. Setting up robust and non-fragile locators is a technical task dependent on the application's structure and the test tool's capabilities.
- Execute the scripts: Run the tests to confirm that:
- the test setup (tooling and test environment) is working properly,
- the locators are correct (meaning the test tool can find the page elements),
- the test itself is repeatable (e.g., a rerun does not fail due to data issues such as uniqueness-constraint violations).
- Add asserts: Once the script executes correctly, assertions (checks) must be added to verify the expected results. It is important to confirm that the assertions themselves are working correctly before relying on them.
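The boundary-testing technique mentioned in the coverage step can be sketched as a small generator that turns a field's valid range into concrete test cases. The field name and range below are hypothetical:

```python
def boundary_values(minimum, maximum):
    """Classic boundary-value cases for a numeric input field:
    just below, on, and just above each boundary of the valid range."""
    return [
        minimum - 1,  # invalid: below the lower boundary
        minimum,      # valid: on the lower boundary
        minimum + 1,  # valid: just above the lower boundary
        maximum - 1,  # valid: just below the upper boundary
        maximum,      # valid: on the upper boundary
        maximum + 1,  # invalid: above the upper boundary
    ]


# Hypothetical example: an "order quantity" field that accepts 1..100.
quantity_cases = boundary_values(1, 100)
```

Each generated value becomes one logical test case: the script enters the value in the UI and asserts either acceptance or a validation message.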
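Making a script repeatable often comes down to the test data: if the script creates its own records with unique keys, a rerun can never collide with data left behind by a previous run. A minimal sketch, with hypothetical field names:

```python
import uuid


def unique_test_record(prefix="uitest"):
    """Build a test record payload with a unique key per run, so a rerun
    never violates a uniqueness constraint in the database.

    The field names are illustrative; use your application's entity
    attributes. Cleanup (deleting records created under the prefix)
    should run in the test's teardown step.
    """
    run_id = uuid.uuid4().hex[:8]
    return {
        "username": f"{prefix}-{run_id}",
        "email": f"{prefix}-{run_id}@example.com",
    }
```

The shared prefix also makes cleanup simple: teardown can delete every record whose key starts with it, regardless of which run created it.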
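A simple way to keep locators non-fragile is to derive them from stable widget names and centralize them in one map, so a renamed widget requires a single change. The sketch below assumes Mendix renders widget names as `mx-name-<name>` CSS classes (verify this against the markup your Mendix version generates); the widget names themselves are hypothetical:

```python
def mendix_locator(widget_name):
    """Build a CSS selector for a Mendix widget from its name.

    Assumption: the Mendix client adds an 'mx-name-<widgetName>' class
    to rendered widgets. Check your app's generated HTML before relying
    on this convention.
    """
    return f".mx-name-{widget_name}"


# Central locator map: test scripts refer to logical names only,
# never to raw selectors scattered through the code.
LOCATORS = {
    "save_button": mendix_locator("btnSave"),
    "customer_name": mendix_locator("txtCustomerName"),
}
```

A test tool such as Selenium or Playwright would then resolve `LOCATORS["save_button"]` with its CSS-selector lookup, keeping the scripts independent of page-structure details.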
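The final step, adding asserts, benefits from collecting all mismatches instead of stopping at the first one, and from confirming that the check can actually fail before trusting it. A minimal sketch with hypothetical field values:

```python
def verify_expected_results(actual, expected):
    """Compare actual page values against expected values field by field.

    Returns a list of human-readable mismatch descriptions; an empty
    list means all assertions passed. Reporting every mismatch at once
    makes a failing UI test easier to diagnose than a single assert.
    """
    mismatches = []
    for field, want in expected.items():
        got = actual.get(field)
        if got != want:
            mismatches.append(f"{field}: expected {want!r}, got {got!r}")
    return mismatches
```

To confirm the assertion itself works, first run it against deliberately wrong data and check that it reports a mismatch; only then rely on a green result.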