Repository and contributions here
As software engineers, we believe it is self-evident that the success of most endeavors in the field depends on the practices of the team. Choosing the right tools, architecture, and technological solution is important, but it is our day-to-day practices, the times when we do not move mountains but only ever so slightly advance our project, that make the difference.
Automation projects have long feedback loops overall, and success or failure is measured over vast expanses of time. For this reason, it is important to take a long-term approach to implementation rather than chase quick wins. This costs extra development time initially, but it pays off in debugging, maintenance, handoff procedures, and further projects that reuse pieces of code.
It is for these reasons that, regardless of the kind of organization undertaking the implementation, the project should be approached with certain long-term considerations in mind.
Does this feel like preaching to the choir? These are the end goals of any organization, and these goals and phrases have been used and over-used across time and space.
But we are having this conversation because, over the past decade, practices have emerged that underpin the meteoric rises of some technology organizations, companies that did not exist two decades ago and are now market leaders in their respective fields. These practices go by different names, but they form the same broad category of approaches to developing software and IT projects. They usually revolve around test harnesses that fix solutions in place, automated testing frameworks, projects that are always "incomplete but ready to run" rather than "complete sub-systems but impossible to run", continuous monitoring of systems and their constituent components, and so on.
Test suites are there to ensure that once a solution is found, future changes will not impact the work already completed. Test harnesses that run automatically ensure that adding new functionality, or altering existing work, does not have unintended consequences: that is, fixes for one case that break other cases if not tested properly.
While there is no book on good practices in RPA (yet), we can infer how such practices translate.
The applications being automated offer a wide range of functionality. A process usually spans multiple applications and interacts with each one through a limited subset of those functionalities: from a high-level perspective, operations such as initializing the application, logging in, searching, extracting data, and updating records.
At first glance, the "natural flow" of a process might be: `Initialize` Application A, `Login`, and `Search for invoice #1234`. From the invoice screen, `Extract provider name` into the clipboard and switch to Application B. In this application, perform `Initialize`, `Login`, and `Search for the provider` extracted from Application A. After the provider is found, check whether the invoice was paid. Switch back to Application A and `Update invoice status`.
A different way of viewing this process is as a series of interactions grouped under the application in which they happen, as sketched below.
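As an illustration, here is a minimal sketch in Python, standing in for the .xaml workflows; all class and method names are hypothetical:

```python
# Hypothetical Python stand-in for the .xaml workflows, grouped by application.
class AppA:
    def initialize(self): ...
    def login(self): ...
    def search_invoice(self, invoice_id): ...
    def extract_provider_name(self): ...
    def update_invoice_status(self, status): ...

class AppB:
    def initialize(self): ...
    def login(self): ...
    def search_provider(self, provider): ...
    def is_invoice_paid(self, invoice_id): ...

def process_invoice(invoice_id):
    # The process file only composes per-application interactions.
    a, b = AppA(), AppB()
    a.initialize(); a.login()
    a.search_invoice(invoice_id)
    provider = a.extract_provider_name()
    b.initialize(); b.login()
    b.search_provider(provider)
    paid = b.is_invoice_paid(invoice_id)
    a.update_invoice_status("paid" if paid else "unpaid")
```

Each application exposes only the limited subset of functionality the process needs, and the process file merely composes those pieces.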
Source files should have associated test files, following some naming convention. For example, an `init-and-login.xaml` should have a corresponding `test_init-and-login.xaml`. A good folder structure to use would look like this:
```
some-process
│   README.md
│   main.xaml
│   run-all-tests.xaml
├── app-one
│   ├── tests (unit tests and static assets used for tests)
│   │   ├── assets
│   │   │       (static files used for testing selectors and functionality)
│   │   ├── test_nav.xaml
│   │   ├── test_init-and-login.xaml
│   │   └── ...
│   └── src (the actual library implementation)
│       ├── nav.xaml
│       ├── init-and-login.xaml
│       └── ...
├── app-two
│   ├── tests
│   │   ├── test_nav.xaml
│   │   ├── test_init-and-login.xaml
│   │   └── ...
│   └── src
│       ├── nav.xaml
│       ├── init-and-login.xaml
│       └── ...
└── Framework (REFramework and other helper workflows)
    ├── src
    │   ├── GetAppCredentials.xaml
    │   └── ReadCsvAsDict.xaml
    └── tests
        └── test_ReadCsvAsDict.xaml
```
Continuously testing workflows during development increases delivery velocity. It is advisable to test specific operations individually, rather than as part of a longer process. In this sense, we would want to test some "add record" functionality once we are already in the desired window, rather than as part of a longer sequence comprising:

`login -> navigate -> search -> open records form -> add record`
This helps keep the feedback loop short.
Moreover, placing the sequence inside REFramework while developing, and testing it as such, would make the feedback loop even longer: REFramework initialization can be quite heavy, ensuring connectivity, getting credentials, and communicating with Orchestrator. We would prefer to have the results of a workflow execution in seconds rather than minutes.
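To make the isolation concrete, here is a minimal sketch using pytest as a stand-in for a `test_*.xaml` workflow; the `RecordsForm` stub and its methods are hypothetical, and the fixture is assumed to bring the application directly to the records form:

```python
import pytest

class RecordsForm:
    """Hypothetical stand-in for the application's records form."""
    def __init__(self):
        self._records = set()
    def add_record(self, name):
        self._records.add(name)
    def has_record(self, name):
        return name in self._records
    def close(self):
        pass

@pytest.fixture
def records_form():
    # Assumed to bring the app straight to the records form, skipping
    # login -> navigate -> search -> open records form.
    form = RecordsForm()
    yield form
    form.close()  # leave the system as we found it

def test_add_record(records_form):
    records_form.add_record("Jane Doe")
    assert records_form.has_record("Jane Doe")
```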
Implemented consistently, a large series of tests ensuring proper functionality will aggregate into a `run-all-tests.xaml` workflow, which can be executed before committing code to the source control repository.
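As one possible wiring, here is a sketch of a Git pre-commit hook in Python; the UiRobot.exe install path and the `execute --file` syntax are assumptions to verify against your UiPath version:

```python
#!/usr/bin/env python
# Sketch of a Git pre-commit hook (saved as .git/hooks/pre-commit).
# The UiRobot.exe path and the "execute --file" syntax are assumptions;
# verify both against your UiPath installation and version.
import subprocess
import sys

UIROBOT = r"C:\Program Files\UiPath\Studio\UiRobot.exe"  # assumed install path

result = subprocess.run([UIROBOT, "execute", "--file", "run-all-tests.xaml"])
sys.exit(result.returncode)  # any non-zero exit code aborts the commit
```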
By their nature, most workflows are stateful. They rely on the application being automated being in a specific state: on a specific page or screen, with certain controls present, activated, or deactivated. It is also obvious that some interactions between the automation and the underlying application will change the state of that application. This becomes quite problematic when there is no test environment to test in.
Tests rely on the state of the applications being automated, or on the state of the system: users present in the database, emails in the inbox, documents on a shared drive.
Testing strategies rely on the principle of failing fast and obviously. Failures should be caught in Dev and/or UAT, not in production. A major fallacy is the belief that applications behave deterministically: there are innumerable applications that behave inconsistently, with controls that move and popups that appear and disappear unpredictably. When dealing with such applications, the workflows that are likely to fail should be tested in loops of tens or hundreds of iterations, as in the sketch below. This kind of testing strategy will surface outliers and edge cases.
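A sketch of such a loop test, with a hypothetical `run_search` standing in for the workflow under test; the simulated popup merely models the kind of intermittent failure being hunted:

```python
import random

def run_search():
    # Hypothetical stand-in for invoking the workflow under test.
    # The simulated popup models an intermittent UI failure.
    if random.random() < 0.02:
        raise RuntimeError("unexpected popup stole the focus")

ITERATIONS = 200
failures = []
for i in range(ITERATIONS):
    try:
        run_search()
    except Exception as exc:
        failures.append((i, exc))

print(f"{len(failures)} failures out of {ITERATIONS} iterations")
for i, exc in failures:
    print(f"  iteration {i}: {exc}")
```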
While automated tests strive to isolate workflows, it is usually the case that more than one functionality gets tested. It is not possible to test a hypothetical Search function without also testing both `Initialize application` and `Close application`. Testing is thus incremental, with more "advanced" tests relying on the correct implementation of more "basic" workflows.
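A sketch of this layering in Python, with hypothetical stubs: the Search test reuses a context manager, so Initialize and Close are exercised on every run:

```python
from contextlib import contextmanager

class App:
    def search(self, query):
        return [query]  # stub

def initialize_application():
    return App()  # stub for the "basic" Initialize workflow

def close_application(app):
    pass  # stub for the "basic" Close workflow

@contextmanager
def running_app():
    app = initialize_application()
    try:
        yield app
    finally:
        close_application(app)  # always runs, so Close is exercised too

def test_search():
    # Testing Search necessarily exercises Initialize and Close as well.
    with running_app() as app:
        assert app.search("invoice #1234")
```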
A workflow's annotation can document its contract, for example:

```
Assumptions: The file is accessible and the format is the expected one
Arguments:
    in_strFilePath (In): String containing the file path
    out_dictData (Out): IDictionary<String, String> containing the extracted data from the file
    in_sstrPassword (In): SecureString containing the file password
Throws:
    ArgumentException(InvalidFileFormat): if the Excel file is in an unrecognized format
    ArgumentException(FileNotFound): if the file is not in the expected location
    ArgumentException(InvalidPassword): if the supplied password is not correct
Outcome:
    Success: the data is presented in the output argument
    Failure: Throw
```
Windows, browsers, and terminal sessions should be opened or found only once and then passed as arguments to invoked workflows, rather than attaching the window at the beginning of each workflow. This minimizes the risk of inadvertently attaching to a wrong window, especially when more than one window of the same process may be open at the same time. The pattern looks like this (see the sketch after the list):

- Opening a browser
- Capturing the browser as an object
- Using the object as a reference to the opened browser in subsequent workflows
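By analogy, a minimal Selenium sketch in Python; the URL, selector, and function names are placeholders. The browser is opened once, captured as an object, and passed to subsequent functions:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def search_member(driver, member_id):
    # Placeholder URL; the function receives the browser, it never attaches one.
    driver.get(f"https://example.com/members/{member_id}")

def extract_member_name(driver):
    return driver.find_element(By.CSS_SELECTOR, "h1.member-name").text

driver = webdriver.Chrome()        # opened exactly once
try:
    search_member(driver, "1234")  # the same object is passed everywhere
    name = extract_member_name(driver)
finally:
    driver.quit()
```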
Workflows should be atomic in their intended functionality and small enough that their complete functionality can be expressed in one or two sentences. If this is not the case, seams should be sought so that the workflow can be broken up further into sub-components.
Do’s:
Don’ts:
More complex workflows should be broken up into reusable pieces of code. These complex workflows should strive to leave the overall system in the exact state it was in before execution (windows that are open, pages or screens in apps). State here refers exclusively to the presentation layer, not to data persistence, which can obviously change.
For example, rather than implementing the whole process in one file, the individual interactions are placed in different files and merely invoked from the main implementation xaml. The overarching process then becomes a series of invocations of the atomic xaml files:
```
search-for-member.xaml
        |
open-member-page.xaml
        |
extract-member-info.xaml
        |
nav.xaml (with target=home page)
```
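A sketch of the same chain in Python, with a hypothetical `App` wrapper standing in for workflow invocation; the try/finally guarantees the state-restoring navigation runs even when a step fails:

```python
class App:
    """Hypothetical wrapper that invokes .xaml workflows by name."""
    def invoke(self, workflow, *args, **kwargs):
        print(f"invoking {workflow} args={args} kwargs={kwargs}")
        return {}

def get_member_info(app, member_id):
    try:
        app.invoke("search-for-member.xaml", member_id)
        app.invoke("open-member-page.xaml")
        return app.invoke("extract-member-info.xaml")
    finally:
        # Runs even if a step above throws: the app always ends up back on
        # the home page, the presentation state it started from.
        app.invoke("nav.xaml", target="home page")

get_member_info(App(), "1234")
```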
```
some-process
│   main.xaml
├── app-one
│   └── src (the actual library implementation)
│       ├── search-user.xaml
│       ├── update-profile.xaml
│       └── ...
└── app-two
    └── src
        ├── nav.xaml
        ├── get-member-data.xaml
        └── ...
```
In this structure, there should be no calls from `app-one\src\update-profile.xaml` to `app-two\src\get-member-data.xaml`. All data transfer between applications should be handled by the overall process implementation file, such as `main.xaml`, as sketched below.
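A minimal sketch of that rule in Python, with `main` standing in for `main.xaml` and the functions standing in for the respective .xaml files (all names hypothetical):

```python
# app-one and app-two pieces never call each other; main moves the data.
def search_user(user):            # stands in for app-one\src\search-user.xaml
    return {"member_id": "1234", "name": user}

def get_member_data(member_id):   # stands in for app-two\src\get-member-data.xaml
    return {"id": member_id, "status": "active"}

def main():                       # stands in for main.xaml
    profile = search_user("Jane Doe")
    # main, not app-one, hands the extracted value over to app-two:
    member = get_member_data(profile["member_id"])
    print(member)

if __name__ == "__main__":
    main()
```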