John Coyne Named Chairman and CEO by Virtual Piggy Inc., A Children’s Online Privacy Protection Act E-Commerce Regulatory Compliance Platform

LOS ANGELES, May 11, 2016 /PRNewswire/ — Virtual Piggy Inc. (OTC BB: VPIG) announced the appointment of John R. Coyne as Chief Executive Officer, effective April 18, 2016.  Coyne, 63, will also serve as Chairman of Virtual Piggy Inc.’s Board of Directors.

“Virtual Piggy’s Oink is the world’s only payment system capable of compliantly serving 80 million children under the age of 17 in the United States alone who influence over $160 billion in spending,” Coyne said.  “It’s an untapped opportunity.”

Attracting substantial and high-profile investor interest during its initial rollout, Virtual Piggy is the provider of Oink, an award-winning financial payment delivery platform that enables online businesses to ensure compliance with the Children’s Online Privacy Protection Act (COPPA) and similar international children’s privacy laws.  During his initial months on the job, Coyne will focus on simplifying and updating Virtual Piggy’s business models by leveraging its current infrastructure and enhancing its operating environment for mobile payments to enable more flexibility to meet rapidly changing e-commerce technology trends.

One of the world’s leading experts in real-time regulatory and compliance oversight, Coyne brings over 30 years’ experience in the technology, financial services, media and defense industries.  He has founded and operated start-up businesses, as well as consulted for Fortune 500 companies.  His notable achievements include creating the most advanced reusable financial software library in the United States for Systemhouse (a Canadian services company ultimately purchased by MCI).  Coyne also led the technology team for NYNEX in the Bell Atlantic/NYNEX merger (which became Verizon) and was the key technologist consulting for Netscape that led to Citibank’s entry into internet banking and the creation of the brand “eCiti.”

More recently, he has advised European banking institutions, including the Bank of England and Dutch financial regulators.  Coyne holds multiple patents, as well as current applications for patents in software, electronics, bio-tech and green energy.

“Oink will be ubiquitous for payment use anytime, anywhere and anyhow, including peer-to-peer financial transfers,” Coyne added.

Based in Los Angeles, California, Virtual Piggy Inc. holds three technology patents: US Patent Nos. 8,762,230, 8,650,621 and 8,812,395.


Semantic Technology Can Bolster CyberSecurity, John Coyne Tells NAIC

As the National Association of Insurance Commissioners (NAIC) prepares to review insurance industry input on its draft Principles for Effective Cybersecurity Insurance Regulatory Guidance, two prominent technologists explained in their own comments that the inherent porousness of insurers’ computer systems and the need to overhaul their internal IT operations can factor as strongly in overall cybersecurity as external products, protocols or methods.

“Insurers worldwide are grappling with modernizing their legacy computers, a sizable majority of which are sited on IBM mainframes,” explained Don Estes, Chief Technology Officer for REDpill Systems Inc., which recently modernized systems at Aviva Insurance.  “These were constructed prior to the existence of security ‘hacking’ as we know it. Their very foundations mistakenly assume no significant problem, or an invulnerable mainframe.”

Worse, these systems may be inadvertently modernized onto platforms that are decidedly vulnerable.  To the extent that replacement applications are influenced by the past, insurers need to be on guard, Estes warned.  His solution: security must be “baked” into the new design, not simply assumed to be present.

“We are on the eve of a new kind of software development—one based on Semantic Structures augmented by artificial intelligence, not just the Java and Oracle thinking that is the de facto standard today,” Estes added.

REDpill Chief Innovation Officer John Coyne, a leading authority on Semantic Technology Architecture, explained that Semantic Modeling provides a powerful solution to cybersecurity threats by adding a layer of artificial intelligence into the security process that can provide real-time oversight.  Similar to current methods used mostly by early-adopter governments to secure their fundamental data and processes, Semantic Modeling speeds delivery of security integration features and solves for complexity that often stultifies lesser security integration efforts, he pointed out.

Coyne, who counts AIG, Prudential and other financial and regulatory institutions among his clients, has found that many internal insurance applications are large, enterprise-wide systems–highly valuable in day-to-day operations–but with massive exposure to cyber-risk.

“Using Semantic Architecture, we have updated insurers’ current legacy systems with a non-invasive and non-destructive method of insertion of Semantic Structure at critical vulnerability points, so that the preservation of value is enhanced and system security is evergreen,” Coyne related.  “The bonus is that these new methods also help facilitate compliance with increasingly onerous insurance regulatory standards governing cyber-security underwriting products, thus adding an extra layer of confidence for stakeholders.”

In his own comments to the NAIC, Coyne recognized the Federal Identity, Credential, and Access Management (FICAM) standards as a formidable architecture that inherits proven trust models with separations of concern that facilitate enterprise adoption.  Specifically for the insurance industry, he recommended a corresponding delivery and performance-enhancing method based on ontological standards and knowledge-based systems supported by artificial intelligence techniques that uniquely provide assurance of integrity for the insurer and its customers.

REDpill Launches Computer Software Development Technology Suite Backed By Market Breakthrough Guarantee, John Coyne Notes

(USA, Belgium) April 7, 2015—In the growing world of legacy computer modernization, augmentation and “greenfield” application construction, a cross-continental technology partnership introduced a breakthrough product suite today in REDpill Systems—a trinity of software development products offering a heretofore unheard-of guarantee to anyone seeking to update aging Java, COBOL or any third-generation systems:

“One hundred percent Business Rules Extraction and Zero Defects in the outcome, with no errors or omissions,” said REDpill Chief Innovation Officer John Coyne.  “It’s a strong statement that’s certainly raised a lot of eyebrows—from CIOs, to lawyers, to the naysayers and competition we’ve left behind.  But yes, it’s really true and we provide our customers with the forensic proof to back it up.”

Indeed, REDpill’s formative success has been manifest with substantial early adopters like two major U.S. federal regulatory agencies, a European military administration department and, most recently, one of Europe’s largest insurance companies.

REDpill’s three modules, “Revelation,” “Synchronizer” and “Transformation” coordinate in a proprietary manner to produce the guaranteed Zero Defects outcome, with minimal enterprise disruption during any phase of development—from a fresh start, to a modernization or augmentation.

“REDpill actually eliminates the need to ‘code’ systems forever through an artificial intelligence technique,” REDpill Chief Scientist Michel vanden Bossche said.

Those opting to build wholly new greenfield systems benefit from REDpill’s novel use of Semantic Web 3.0 Technology built into an easy-to-use platform, yielding a faster, more flexible and more durable product than those built with perishable programming styles that degrade into “legacy” almost immediately.

“Many schools of thought point out that 90 percent of time and cost occurs in the final phase of modernization, which until now has typically stalled at the 80 percent level,” explained REDpill Chief Technology Officer Don Estes.

REDpill’s FutureProof© Transformation module eliminates the “legacy problem” by utilizing an ontology-driven Business Architecture Semantic Engine, enabling fully defined and executed workflows, application of subject matter expertise and establishment of processing goals without programming code interpretations.

With three U.S. offices and one in Brussels, Belgium, REDpill will expand throughout 2015.

“A handful of large-scale early adopters already have us in their competitive arsenal,” Coyne said.  “Given the breakthrough nature of REDpill, using it is now the only means to ensure any modernization or greenfield job is actually done right—guaranteed.”

Media Inquiries, Please Contact:  Helen Farrell, Executive Vice President at +1 412 552 8207 or helen.farrell@redpillsystems.com

Governance, Risk and Compliance–Semantic Computer Systems Development

What if something is unknowable?

By John R. Coyne, Semantic Systems Architect

An old adage says “It’s not what you don’t know that will hurt you, it’s what you know that isn’t so.”

Essentially, the difference between dealing with complexity and complication is one of the unknowable versus the knowable.  Typically, in old-school computer systems development, modeling comprises linear processes of formal reductionism that model individual elements or components of data and process flows, with the typical decision junction switching directions or bridging old-school-style “swim lanes.”

As anyone familiar with the process knows, this kind of modeling can get very complicated very quickly, especially when, after months of discovery, one encounters the “yeah, but” anomaly to the equation that has been set up.  Part of this has to do with the inherent non-linearity of actual operations in the real world.  In the days of predictable outcomes, when simple behavior models encountered simple modifications as simple changes took place, our attempts at orderly discovery of workflows were easy.  These models usually operated in a single framework or context of activity: the factory floor, the accounts department, the typing pool, and so on.

The keyword, of course, is “simple.”  But advances in technology, increased transaction speeds, multi-dimensional interests and Web-scale interactions have made single-framework models and Business Process Modeling Notation (BPMN) tools not only redundant, but inappropriate for dealing with complexity.  BPMN deals with reductionism and is therefore perfectly suited to defining complicated processes; in other words, the knowable.  But it starts with the premise that something is knowable.

The hubris with which systems are addressed today states: “If I can know the state between A and B, then B to C and C to D, then I can trace A to N functions, map them, and the system can be knowable, definable and hence controllable.”  Usually, today’s systems deal with single frameworks or contexts of operations.  However, like life, business throws the occasional curve.  And that curve usually comes from a framework not previously considered.

It’s the unknown risks, or “what you know that isn’t so,” that can cause the most damage

These curveballs are, for most businesses, equated to the unknowable, while the unknowable equates to risk.  The appetite for risk is usually a factor of “known risk,” but it is the unknown risks, or “what you know that isn’t so,” that cause the most damage.  This can be seen in the recent systemic collapse of financial institutions, which caused an avalanche of unintended consequences resulting not just in financial problems, but in social upheaval, personal catastrophe and even sovereign collapse.

The linear approach trap raises the question of which approach helps to detect the unknown risk, along with the proverbial “what you know that isn’t so.”

After 40 or so years of continuous research and development in systems design and programming tools in the artificial intelligence arena, a level of maturity has evolved that facilitates the development of systems that deal with complexity, that is, with the unknowable rather than the merely complicated.  One outcome has been the separation of the relationships between objects and concepts from the flow of activity between and across them.

No “if, then, else” statement required

An example from the financial services industry illustrates the simplicity of the concept in the context of the seller and buyer: a mortgage (an object) requires (a relationship) top credit (another object or concept).  There is no “if, then, else” statement required.  The process of determining whether the goal of obtaining a mortgage is to be met is dropped into an inference engine that determines the goal and the requirements for its achievement.  It discovers the dynamic activities that go into achieving the goal should the “top credit” requirement be met, or stops the activities should the goal not be met.
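
To make the idea concrete, here is a minimal, purely illustrative Python sketch (not any particular product’s implementation): the requirement is declared as data, and a small goal-checking routine stands in for the inference engine that decides whether the activities proceed or stop.

```python
# Toy illustration of "a mortgage requires top credit".
# Requirements are declared as data; a small goal-checking routine
# stands in for the inference engine.

REQUIREMENTS = {
    # the subject on the left requires every item in the set on the right
    "mortgage": {"top credit"},
}

def goal_met(goal, known_facts):
    """Return True if every requirement of the goal is satisfied."""
    missing = REQUIREMENTS.get(goal, set()) - known_facts
    if missing:
        print(f"Goal '{goal}' stopped: missing {sorted(missing)}")
        return False
    print(f"Goal '{goal}' met; downstream activities may proceed.")
    return True

# Buyer with top credit: the activities to obtain the mortgage proceed.
goal_met("mortgage", {"top credit"})
# Buyer without top credit: the engine stops the activity.
goal_met("mortgage", set())
```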

Now add in the complexity of regulatory controls and minority rights, and the computer systems to support the production of the paperwork. Then add the various underwriting and risk models to be addressed and the mitigation of the risk by breaking the product (the mortgage) up into interest-rate derivatives, and cross-border jurisdictions, etc. In this way, a simple transaction becomes a complex web of inter-framework activity.  (And if you don’t believe that, try ascertaining who actually owns your mortgage!)

To be sure, the world is more complicated.  Change is happening at an exponential rate.  But what can be done?

Start by trying something different for a change.

Looking at Governance, Risk and Compliance (GRC) and using the simple concept (object)/relationship/concept model, we can begin by modeling topics of governance (risk, risk appetite, policies) and external regulations (compliance).  Initially, we can start with topics at a high level.  Duty of care (Topic A) is the topic we will focus on for the time being.  Topic B could be policy and risk tolerance.

The regulatory and policy models are designed at a gross level.  A first pass at interfacing to the sub-systems and data in the legacy environment is achieved through a service-oriented architecture (SOA) approach. This is a non-invasive and non-destructive method of creating new systems without disturbing day-to-day business.  Once again using the financial services industry as an example, these legacy systems may include point solutions for anti-money laundering, suspicious activity reporting or liquid coverage ratio requirements. The point of the model is not to replace them, but to assure that they are doing the correct systemic job.
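
As a hypothetical sketch of that non-invasive approach, the following Python fragment wraps an existing point solution behind a thin service interface that the governance model can call. The class and method names are invented for illustration, and the legacy screening logic is only a placeholder.

```python
# Hypothetical sketch: expose a legacy point solution (here, anti-money
# laundering) behind a thin service interface so the compliance model can
# query it without modifying the legacy system itself.

from dataclasses import dataclass

@dataclass
class ServiceResult:
    subsystem: str
    compliant: bool
    detail: str

class LegacyAMLSystem:
    """Stand-in for an existing anti-money-laundering point solution."""
    def screen(self, account_id):
        return account_id != "ACCT-SUSPECT"   # placeholder logic only

class AMLService:
    """Non-invasive wrapper: the legacy system is called, never changed."""
    def __init__(self, legacy):
        self._legacy = legacy

    def check(self, account_id):
        ok = self._legacy.screen(account_id)
        return ServiceResult("AML", ok, f"screened {account_id}")

# The governance model consumes the service, not the legacy internals.
print(AMLService(LegacyAMLSystem()).check("ACCT-1001"))
```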

Exposure to risks will be uncovered very quickly. In this case, topic A has two factors that do not satisfy the goal of the regulation. These become knowable, definable and fixable (at whatever layer of detail). Topic B has one missing variable.  But the chain reaction moves the non-compliant nature of the problem up to the topic.  Now you know that you cannot fully satisfy the “duty of care” topic (A) and cannot fully satisfy your internal policy.
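
A toy Python rendering of that chain reaction might look like this. The topics and factors are hypothetical; the point is only that a single failed trace line marks the whole topic non-compliant.

```python
# Illustrative sketch: any factor (trace line) that fails its check makes
# the topic it belongs to non-compliant. Topics and factors are invented.

TOPICS = {
    "A: duty of care": {
        "risk disclosure delivered": False,    # gap found by the model
        "suitability check recorded": False,   # gap found by the model
        "complaint process published": True,
    },
    "B: policy and risk tolerance": {
        "monthly exposure within limit": True,
        "board-approved policy on file": False,  # one missing variable
    },
}

for topic, factors in TOPICS.items():
    gaps = [name for name, satisfied in factors.items() if not satisfied]
    status = "compliant" if not gaps else "NOT compliant: " + ", ".join(gaps)
    print(f"Topic {topic}: {status}")
```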

Not satisfying a regulatory requirement with all its ramifications (fines, imprisonment, loss of public trust) may well be more important than not meeting only one trace line in your governance policy.  Alternatively, they may be related (more on this later).  But now you know what you have to do.  As the model increases in complexity, it will expose more gaps, but as these gaps emerge, they will, of course, become knowable and therefore fixable.

The question is whether this same approach is viable for dealing with multiple frameworks.

While this is a powerful start, it is indeed only dealing with a single framework.

Discovering relatedness and interdependency

Each framework has been modeled, and the behavior of each is well-known.  The name of the topic is, for instance, standardized in a business, data and/or process ontology.  In the case of the above example, topic A refers to duty of care.  Since we are not running a process, but simply evaluating the relationships among things, we can run our models against our inference engine and discover that there is a linkage among all three frameworks.

In framework one, the duty of care may have been to apprise the buyer of all the risks related to the product being sold, mapped to a regulation dealing with consumer protection (which is fully discoverable in the model’s knowledge base).

The second framework may concern stakeholder protection.  In this case, the policy decision may be a risk tolerance or risk exposure relationship, such as “This is a $30 million mortgage, and it has put us over the risk coverage limit we set for the month.”  This is mapped to an internal policy, and also mapped to regulations regarding the permissible acceptance or denial criteria.

The third framework is the operations and technology framework, and the duty of care here may be the protection and privacy of the data used in the decisions, its transmittal and traversal across and between networks.

We can now determine something we did not know in the past, and may never have known until it was too late, by finding both an interrelatedness and an interdependency between frameworks that is essential to both external and internal compliance.
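
For illustration only, a few lines of Python show how a shared, standardized topic name exposes that interdependency once each framework declares the topics it touches. The framework and topic names simply echo the examples above.

```python
# Sketch of discovering interrelatedness across frameworks: each framework
# declares which standardized topics it touches; any topic shared by more
# than one framework is a cross-framework dependency.

from collections import Counter

FRAMEWORKS = {
    "consumer protection": {"duty of care", "risk disclosure"},
    "stakeholder protection": {"duty of care", "risk exposure limits"},
    "operations and technology": {"duty of care", "data privacy"},
}

topic_counts = Counter(t for topics in FRAMEWORKS.values() for t in topics)
shared = [topic for topic, count in topic_counts.items() if count > 1]
print("Cross-framework topics:", shared)   # -> ['duty of care']
```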

Semantic Computer System Development Programming–A Primer

A Primer on Programming–The Basics, History, Design & Components for Non-Technical Business Executives

By John R. Coyne, Semantic Computing Consultant

In traditional programming and the Systems Development Lifecycle, a process of gathering information from users to describe needs is translated into a systems analysis, confirmed and then codified—thus producing a System Design.

Then, an architecture or framework to support the system is created. (Figure: Service Oriented Architecture)

This will include:

  • Infrastructure
  • Software
  • Choice of programming language
  • Operating system
  • Data elements
    • These are called from time to time and potentially modified

This architecture then becomes the support system for the system design and all its components.

Programmers perform two fundamental functions:

  1. They express the users’ needs in terms of statements of computer functions.
  2. Embedded in those computer functions are the methods that the computer will need to perform in order to execute them. These are:
    • descriptions of data to be used
    • networks to traverse
    • security protocols to use
    • infrastructure for processing

(Summarized at the most abstract level, these could be described as: Transport, Processing and Memory)

This intricate association of descriptions of 1) what the system should do, and 2) how it will do it relies on the programmer and system designer to perform their tasks with precision.

In many cases both will rely on third-party software, the most common of which is a proprietary database.

These proprietary databases come with tools that make their use more convenient.   (That is because these databases are complex and, without the tools, the systems designers would have to have intimate knowledge of how the internals of the database systems work.)

Thus, the abstraction allows the systems builder to concentrate on what the user wants, versus what the database system needs to perform its functions.

In the early days of computing, programmers would have to make specifications of the data they needed, test the data and merge or link other data types. Now, databases come with simple tools like “SQL” that allow programmers to simply ask for the data they want. The database system does the rest.
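
A small, self-contained example using Python’s built-in sqlite3 module illustrates this declarative style: the programmer states what data is wanted, and the database system decides how to retrieve it. (The table and rows are invented for the example.)

```python
# Declarative data access: state WHAT you want; the database handles HOW.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, region TEXT)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?)",
    [("Acme Ltd", "EU"), ("Globex", "US"), ("Initech", "EU")],
)

# 'Ask for the data you want' -- no loops, pointers or file handling needed.
for (name,) in conn.execute("SELECT name FROM customers WHERE region = 'EU'"):
    print(name)
```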

Programs written in programming languages are also abstractions.


How Computer Programming Developed

In the early days of programming, programs were written in machine language, which was an arcane art blending both engineering and systems knowledge. Later, assembler languages were developed as a first level of abstraction. These were known as second-generation languages. Even these languages required specialized skills. The next leap came with third-generation languages, the most common of which was COBOL (short for Common Business Oriented Language), which was developed so that people without engineering skills could program a computer.

With machine languages, there was no translation function; the programmer wrote in terms the computer system could execute directly. (With assembler languages there was a modest translation step, but they are so similar to machine language that little translation is needed.)

In third-generation languages, the concept of a “compiler” was created. The compiler takes a computer language that is easy to program in and translates it to a language the computer can use for processing the requirements. During this generation of programming, many third-party tools were developed to aid systems designers in the delivery of their systems and thus a whole industry was born.

Not surprisingly, computers became more complex and, over time, so did the systems that people wanted designed. This complexity drove systems to become almost impossible to understand in their entirety. Eventually, instead of changing them, systems designers simply appended new programs to the older systems and created what is sometimes termed “spaghetti code.”

Eventually, something had to change. Now, after years of research based on artificial intelligence techniques, new tools have emerged that enable a new generation of programming that allows the computer to determine the best resources it needs to do what is requested of it. The science in this is not important. What IS important is that now, the original process of determining what the user wants can be separated from how it gets done.

In semantic modeling, no programming takes place. Rather, a modeler interviews subject matter experts to determine what they want to happen, the best way for it to happen and the best expected results.

Semantic modeling is constructed much like an English sentence (which is one reason for the term “semantic”). There is a subject and a predicate (or relationship) to an object of the sentence. Like the building of a story or report, these “sentences” are connected to one another to create a system. Also as in writing a report, “sentences” may be used over and over again to reduce the amount of repetitive work. In semantic modeling, these “sentence structures” are called concepts. Concepts are the highest level of abstraction in the program’s “story.”

Like a sentence, the requirements of the system can be structured in near English grammar-level terms.

For instance:

“A Passport (subject) requires (predicate) citizenship (object).”

(The concept that we are dealing with could be “international travel.” This demonstrates the linkages between coding “sentences.”)

“International travel – requires – a passport.” and thus, as has been seen, “A passport – requires – citizenship.”

To expound on our grammatical analogy for programming the system, the same terms delineating a “passport” could be used for “checking into a hotel”:

“Hotel – requires – proof of identity.”

(Identity as a concept can re-use the “passport” sentence.)

“Passport – is a form of – identity.”

(Thus, the speed of development is greatly improved because of the re-usability.)
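
The passport and hotel “sentences” above can be rendered, purely for illustration, as subject/predicate/object triples in a few lines of Python. The helper function is hypothetical, but it shows how the same triple is reused by more than one concept, which is where the re-usability gain comes from.

```python
# Toy rendering of the "sentences" above as subject-predicate-object triples.

TRIPLES = [
    ("international travel", "requires", "passport"),
    ("passport", "requires", "citizenship"),
    ("hotel check-in", "requires", "proof of identity"),
    ("passport", "is a form of", "proof of identity"),
]

def requirements_of(subject):
    """List the objects a subject is linked to by the 'requires' predicate."""
    return [obj for subj, pred, obj in TRIPLES
            if subj == subject and pred == "requires"]

print(requirements_of("international travel"))  # ['passport']
print(requirements_of("passport"))              # ['citizenship']
```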

Also like a sentence, the terms can be graphically represented as a hierarchy—much like sentence deconstruction (diagramming) we learned in high school.

Notice that the terms do not describe how such information is to be found, what order of precedence they have, or how the system is to process such statements. In the “separation of concerns,” the new semantic systems use another mechanism to process the data known as an inference engine.

The inference engine is a logic tool that determines what is needed to accomplish the semantic concepts. The goal of the inference engine is to solve the computing requirements.
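
As a rough sketch only, and reusing the same triple representation as the previous example, the following Python function imitates what an inference engine does: given a goal, it works backward through the “requires” links to decide whether the goal can be satisfied from the known facts, with no hand-written control flow describing the order of the checks.

```python
# Minimal backward-chaining sketch of goal resolution over triples.

TRIPLES = [
    ("international travel", "requires", "passport"),
    ("passport", "requires", "citizenship"),
]

def resolve(goal, facts):
    """A goal is satisfied if it is a known fact, or if all of its
    'requires' links can themselves be satisfied."""
    if goal in facts:
        return True
    needs = [obj for subj, pred, obj in TRIPLES
             if subj == goal and pred == "requires"]
    return bool(needs) and all(resolve(need, facts) for need in needs)

print(resolve("international travel", {"citizenship"}))  # True
print(resolve("international travel", set()))            # False
```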

Of course, like the databases that have been built, semantic systems come with tools that allow the business user and modeler to describe what the system should be doing, without the need for intimate knowledge of expert systems or artificial intelligence techniques. They simply model.  Like the aforementioned SQL statement, the computer takes care of satisfying the system requirements.

Underneath all this is the usual figurative plumbing found in computer programming. There are networks to be traversed, data to be called and transformed, reports to write, and computers to process the requests. Today, all of these are simply well-understood services, supported by a whole industry of third-party suppliers with proprietary products, an even greater universe of engineers supporting open standards, and even free software available to do these tasks.