23 Mar 2017 by mallyanitin
The typical architecture representations are diagrams, abstracts, views, and perspectives, documented formally or informally.
However, a modern architect does not stop there. These representations start at version 0.1 and get iterated through testing, measuring, and validation.
Example: Interface definitions (a closely observed architectural artifact) go through several versions.
How to test architecture?
First, inspect the architecture to determine what needs to be tested.
- Test the concepts/assumptions: Searchability is a quality attribute for an image archive. Do a spike to test the assumptions about what to store and how. E.g. Spike: storing an image's metadata as tags on the image file is sufficient and efficient for the query use-cases. Alternative: store the tags as a JSON document in a NoSQL DB with indexes for efficient search. Alternative: store the tags as a JSON document in Elasticsearch or Apache Solr. The spike forces sampling of the search use-cases and validates that the selected approach will indeed meet them with the desired performance and cost attributes.
- Test the interface definition: Interface granularity and availability are quality attributes. Create a mock API and push consumers to consume it, to surface implicit or unsaid requirements. Iterate the APIs quickly until the details are fleshed out; this helps meet the DOR (Definition of Ready) for the interface definition before significant coding is done to implement the interface and integration issues surface.
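The search spike above can be sketched as a few lines of code: sample the query use-cases against a naive in-memory tag store before committing to a NoSQL DB or Elasticsearch. The image names and tags below are hypothetical.

```python
# Minimal search spike: validate the sampled query use-cases against the
# simplest possible tag store. All image names and tags are hypothetical.

images = {
    "beach.jpg":  {"tags": {"sea", "summer", "family"}},
    "alps.jpg":   {"tags": {"mountain", "snow", "winter"}},
    "picnic.jpg": {"tags": {"summer", "family", "park"}},
}

def search(required_tags):
    """Return images whose tags contain all required tags (AND semantics)."""
    required = set(required_tags)
    return sorted(name for name, meta in images.items()
                  if required <= meta["tags"])

# Sampled use-case: find all summer family photos.
print(search(["summer", "family"]))  # → ['beach.jpg', 'picnic.jpg']
```

If this naive approach cannot satisfy the sampled use-cases at the required scale, the spike has done its job: it points you to the NoSQL or Elasticsearch/Solr alternative with evidence in hand.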
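A mock API for the interface test can be as small as a function returning canned payloads, so consumers can integrate before the real implementation exists. The endpoint shape, field names, and data below are hypothetical.

```python
# Minimal mock API sketch: canned responses let consumers integrate early
# and surface unsaid requirements. All names and fields are hypothetical.
import json

CANNED_IMAGES = {
    "img-001": {"id": "img-001", "tags": ["summer", "family"], "size_kb": 245},
}

def get_image_metadata(image_id):
    """Mock of GET /images/{id}/metadata: returns (status, JSON body)."""
    if image_id in CANNED_IMAGES:
        return 200, json.dumps(CANNED_IMAGES[image_id])
    return 404, json.dumps({"error": "image not found"})

status, body = get_image_metadata("img-001")
```

Each time a consumer finds the mock insufficient (a missing field, an unexpected error case), the interface definition gets a new version; that churn is cheap now and expensive after implementation.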
How to measure architecture?
Three things to look out for: coupling, cohesion, and complexity. These days, people also put naming into this bucket.
If developers measure coupling and cohesion of classes, architects measure coupling and cohesion of services (*) and teams. The goal is to minimise coupling and maximise cohesion.

High coupling between services (e.g. a shared database, shared cache data, direct dependencies) is indicative of low modularity. High coupling between teams developing services (continuously in meetings to discuss interfaces) is indicative of potential integration issues. In some cases microservices may be a good answer; in others they may be a very bad idea! High cohesion around a single responsibility (including the deployment, startup, and shutdown lifecycle) is a great thing!

Coupling cannot be completely avoided; dependencies are a reality. For the sanity of the system, it is necessary to track coupling and cohesion, if not numerically then visually. Couplings from platform services to application services are a RED flag. If the build system can generate an inventory of services and the service graph, that graph can be used to measure coupling and cohesion.
* A service could be a RESTful service, a UI component in the UI container, or a product in a solution.
Complexity is a different beast. It is hard to measure, but it can certainly be felt. A good indicator of complexity is teams trying to push it out of their scope to someone else. Example:
Platform Architect: Failure retry is an application concern.
Application Architect: Best effort and guaranteed delivery are QoS levels that the platform service should provide. I want to fire and forget.
Platform Architect: OK. That will introduce complexities in my service that I had not planned for. In that case, I can only provide an asynchronous interface and will notify you on completion.
Application Architect: Huh. Can you design the interface to offer both synchronous and asynchronous modes? The asynchronous model will create complexities in my workflow.
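The interface shape being negotiated above can be sketched as one operation in two modes, with the platform service owning the retry complexity in both. This is an illustrative sketch with hypothetical names, not a prescribed design.

```python
# Hypothetical delivery interface: the same operation in a synchronous
# and an asynchronous form, with retries owned by the platform service.

def deliver_sync(message, retries=3):
    """Block until the message is delivered or retries are exhausted."""
    for attempt in range(1, retries + 1):
        if _try_send(message):
            return {"status": "delivered", "attempts": attempt}
    return {"status": "failed", "attempts": retries}

def deliver_async(message, on_done):
    """Fire and forget: the platform retries, then notifies via callback."""
    on_done(deliver_sync(message))

def _try_send(message):
    # Stand-in transport; a real service would call the wire protocol here.
    return True
```

Either way, the complexity lands somewhere; the point of the dialogue is that where it lands should be a deliberate, consistent decision, not the outcome of a tug-of-war.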
In such areas, I personally think consistency is important. We should not have one platform service make retry an application concern while another takes on the burden.
This is a place where the system architect must step in to define guidelines. Each case is different, and it must be dealt with case by case.
If you can feel complexity, it can be visualised and addressed (if not numerically measured). This can be done by enumerating RISKS.
Testing and measuring are time-tested practices for validation. This is architectural governance.