Governance, Risk and Compliance–Semantic Computer Systems Development

What if something is unknowable?

By John R. Coyne, Semantic Systems Architect

An old adage says “It’s not what you don’t know that will hurt you, it’s what you know that isn’t so.”

Essentially, the difference between dealing with complexity and complication is one of the unknowable versus the knowable. In old-school computer systems development, modeling typically comprises linear processes of formal reductionism: individual elements or components of data and process flows are modeled, with decision junctions switching directions or bridging the familiar "swim lanes."

As anyone familiar with the process knows, this kind of modeling can get very complicated very quickly, especially when, after months of discovery, one encounters the "yeah, but" anomaly that upends the equation that has been set up. Part of this has to do with the inherent non-linearity of actual operations in the real world. In the days of predictable outcomes, when simple behavior models underwent simple modifications as simple changes took place, our attempts at orderly discovery of workflows were easy. These models usually operated in a single framework or context of activity: the factory floor, the accounts department, the typing pool, and so on.

The keyword, of course, is "simple." But advances in technology, increased transaction speeds, multi-dimensional interests and Web-scale interactions have made single-framework models and Business Process Modeling Notation (BPMN) tools not only redundant, but inappropriate for dealing with complexity. BPMN deals in reductionism and is therefore perfectly suited to defining complicated processes; in other words, the knowable. But it starts from the premise that something is knowable.

The hubris with which systems are addressed today states: "If I can know the state between A and B, then B to C and C to D, then I can trace functions A through N, map them, and the system becomes knowable, definable and hence controllable." Usually, today's systems deal with single frameworks or contexts of operations. However, like life, business throws the occasional curve. And that curve usually comes from a framework not previously considered.

It's the unknown risks, or "what you know that isn't so," that can cause the most damage

These curveballs are, for most businesses, equated to the unknowable, and the unknowable equates to risk. The appetite for risk is usually a factor of "known risk," but it is the unknown risks, or "what you know that isn't so," that cause the most damage. This can be seen in the recent systemic collapse of financial institutions, which caused an avalanche of unintended consequences resulting not just in financial problems, but in social upheaval, personal catastrophe and even sovereign collapse.

The linear approach trap raises the question of which approach helps to detect the unknown risk, along with the proverbial “what you know that isn’t so.”

After 40 or so years of continuous research and development in systems design and programming tools in the artificial intelligence arena, a level of maturity has evolved that facilitates the development of systems that deal with complexity. As a result, we can now build systems that address the complex (the unknowable), not merely the complicated. One outcome has been the separation of the relationships between objects and concepts from the flow of activity between and across them.

No “if, then, else” statement required

An example from the financial services industry illustrates the simplicity of the concept in the context of seller and buyer: a mortgage (an object) requires (a relationship) top credit (another object or concept). No "if, then, else" statement is required. The process of determining whether the goal of obtaining a mortgage is to be met is dropped into an inference engine that determines the goal and the requirements for its achievement. It discovers the dynamic activities that go into achieving the goal should the "top credit" requirement be met, or stops the activities should the requirement not be met.
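The object/relationship/object idea can be sketched in a few lines of Python. Everything here is a hypothetical illustration, not any particular inference engine's API: the relationship is declared as data, and a generic resolver checks the goal against it, with no "if, then, else" branch encoding the business rule itself.

```python
# Minimal sketch: the business rule lives in data as a
# (subject, relationship, object) triple, not in control flow.
facts = {
    ("mortgage", "requires", "top credit"),
}

# Attributes the applicant is known to hold (assumed input).
applicant_attributes = {"top credit"}

def goal_met(goal, attributes, knowledge):
    """Resolve a goal by looking up its declared requirements."""
    requirements = [obj for (subj, rel, obj) in knowledge
                    if subj == goal and rel == "requires"]
    return all(req in attributes for req in requirements)

print(goal_met("mortgage", applicant_attributes, facts))  # True: proceed
print(goal_met("mortgage", set(), facts))                 # False: stop
```

Adding a new requirement means adding a new triple to the knowledge base; the resolver itself never changes, which is the point of separating relationships from activity flow.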

Now add in the complexity of regulatory controls and minority rights, and the computer systems to support the production of the paperwork. Then add the various underwriting and risk models to be addressed and the mitigation of the risk by breaking the product (the mortgage) up into interest-rate derivatives, and cross-border jurisdictions, etc. In this way, a simple transaction becomes a complex web of inter-framework activity.  (And if you don’t believe that, try ascertaining who actually owns your mortgage!)

To be sure, the world is more complicated.  Change is happening at an exponential rate.  But what can be done?

Start by trying something different for a change.

Looking at Governance, Risk and Compliance (GRC) and using the simple concept (object)/relationship/concept model, we can begin by modeling topics of governance (risk, risk appetite, policies) and external regulations (compliance). Initially, we can start with topics at a high level. Duty of care (Topic A) is the topic we will focus on for the time being; Topic B could be policy and risk tolerance.

The regulatory and policy models are designed at a gross level. A first pass at interfacing to the sub-systems and data in the legacy environment is achieved through a service-oriented architecture (SOA) approach. This is a non-invasive and non-destructive method of creating new systems without disturbing day-to-day business. Once again using the financial services industry as an example, these legacy systems may include point solutions for anti-money laundering, suspicious activity reporting or liquidity coverage ratio requirements. The point of the model is not to replace them, but to assure that they are doing the correct systemic job.

Exposure to risks will be uncovered very quickly. In this case, Topic A has two factors that do not satisfy the goal of the regulation. These become knowable, definable and fixable (at whatever layer of detail). Topic B has one missing variable. But the chain reaction moves the non-compliant nature of the problem up to the topic. Now you know that you cannot fully satisfy the "duty of care" topic (A) and cannot fully satisfy your internal policy.
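The chain reaction described here can be sketched as a simple roll-up, with hypothetical topic and factor names standing in for real regulatory content: any unsatisfied factor propagates non-compliance up to its topic.

```python
# Hypothetical sketch: a topic is compliant only if every factor
# beneath it is satisfied, so one gap bubbles up to the topic level.
topics = {
    "A: duty of care": {
        "disclosure given": False,   # gap
        "risk explained": False,     # gap
        "consent recorded": True,
    },
    "B: policy/risk tolerance": {
        "exposure within limit": False,  # gap
    },
}

for topic, factors in topics.items():
    gaps = [name for name, satisfied in factors.items() if not satisfied]
    status = "compliant" if not gaps else f"NON-COMPLIANT ({len(gaps)} gap(s))"
    print(f"{topic}: {status}")
```

Running this reports Topic A as non-compliant with two gaps and Topic B with one, mirroring the example above; each gap is now named, and therefore knowable and fixable.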

Not satisfying a regulatory requirement with all its ramifications (fines, imprisonment, loss of public trust) may well be more important than not meeting only one trace line in your governance policy.  Alternatively, they may be related (more on this later).  But now you know what you have to do.  As the model increases in complexity, it will expose more gaps, but as these gaps emerge, they will, of course, become knowable and therefore fixable.

The question is whether this same approach is viable for dealing with multiple frameworks.

While this is a powerful start, it is indeed only dealing with a single framework.

Discovering relatedness and interdependency

Each framework has been modeled, and the behavior of each is well-known. The name of each topic is standardized in a business, data and/or process ontology. In the case of the above example, Topic A refers to duty of care. Since we are not running a process, but only the relationships among things, we can run our models against our inference engine and discover that there is a linkage among all three frameworks.

In framework one, the duty of care may have been to apprise the buyer of all the risks related to the product being sold and mapped to a regulation dealing with consumer protection (which is fully discoverable in the model’s knowledge base).

The second framework may concern stakeholder protection. In this case, the policy decision may be a risk tolerance or risk exposure relationship, such as "This is a $30 million mortgage, and it has put us over the risk coverage limit we set for the month." This is mapped to an internal policy, and also mapped to regulations regarding the permissible acceptance or denial criteria.

The third framework is the operations and technology framework, and the duty of care here may be the protection and privacy of the data used in the decisions, its transmittal and traversal across and between networks.
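Because topic names are standardized in the shared ontology, the linkage across the three frameworks above can be discovered by simple set intersection. The framework and topic names below are illustrative assumptions, not a real ontology:

```python
# Hypothetical sketch: each framework declares the ontology topics
# it touches; intersecting them exposes the shared dependency.
frameworks = {
    "consumer protection":     {"duty of care", "product risk disclosure"},
    "stakeholder protection":  {"duty of care", "risk exposure limit"},
    "operations & technology": {"duty of care", "data privacy in transit"},
}

# Topics that every framework depends on.
shared = set.intersection(*frameworks.values())
print(shared)  # {'duty of care'}
```

No process is executed here; the interdependency falls out of the declared relationships alone, which is why a gap in "duty of care" in one framework signals exposure in all three.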

By finding both an interrelatedness and an interdependency between frameworks, we can now determine something we did not know in the past, and might never have known until it was too late, something essential to both external and internal compliance.
