Our patent portfolio is, of course, publicly available. However, understanding how the patents cover systems and methods of cyber risk assessment, quantification, financial valuation and compliance can be difficult. Here we provide a series of videos and short explanations of how such patent protection is implemented within software applications. Where you notice a similarity to other software products you have viewed, we advise that you consult your organization's Office of the General Counsel and Chief Risk Officer as to the potential for patent infringement litigation to be instigated against your organization for direct or indirect infringement of our intellectual property through the use of another product.


Each video sample has been assigned a reference number. Each patent claim has similarly been assigned a reference number that correlates the wording of the patent claim language to how it may be applied within a software application. The mode of implementation shown is only indicative; there may be many alternative methods of implementing the protected cyber risk management methods, and you are advised to consult your organization's legal department and Chief Risk Officer where there is doubt.

B1 & B1A:

In n-ORM and PAE the threat data files can be loaded from any location: locally, via a network, or from a webserver online, for example. The threat databases currently used are MariaDB and MySQL, but any may be utilised.
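As a concrete illustration, the loading step might look like the following minimal Python sketch, using the standard-library sqlite3 module as a stand-in for MariaDB/MySQL. The table and column names here are invented for illustration and are not those used by the actual applications:

```python
import sqlite3

# Illustrative sketch: load threat records from a SQL database.
# sqlite3 stands in for MariaDB/MySQL; any SQL store could back this.
def load_threats(conn):
    cur = conn.execute(
        "SELECT threat_id, category, target, severity FROM threats"
    )
    return cur.fetchall()

# Populate a small in-memory database with hypothetical records.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE threats (threat_id TEXT, category TEXT, target TEXT, severity INTEGER)"
)
conn.executemany(
    "INSERT INTO threats VALUES (?, ?, ?, ?)",
    [("T-001", "malware", "web-server", 3), ("T-002", "dos", "gateway", 5)],
)
rows = load_threats(conn)
print(len(rows))  # 2
```

Because the query layer is standard SQL, swapping the backing database only changes the connection line, which is consistent with the statement above that any database may be utilised.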

The same threat data is used by both applications and the modelling is undertaken by each application type running different models.

This is due to:

  1. the requirements of transparency of calculation imposed by financial regulators (prior to the last financial crisis, "black box" types of risk models were permitted in the calculation of financial risk exposure);
  2. the requirement for all organizations to assess and quantify their cyber risk exposure for internal cyber risk management programmes.


B2, B2A & B2B:

The applications take observed threat data and actual incidents as inputs to the models utilised. The parameters used by the models can be manually altered in order to fit the requirements of the user. The models used vary, as do the model parameters, in order to predict future threat activity and profiles within a specified time period.

The output figure is presented to the user as both a value within a table as well as graphically as a bar chart for ease of use and for non-technical users to quickly assimilate the model findings. Various parameters can be altered to account for recency of threat data versus long-term trend analysis for example.
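The trade-off between recency of threat data and long-term trend analysis can be sketched as a simple weighting scheme. In this hypothetical example, recent weekly counts are given exponentially larger weights; the half-life parameter is an assumption for illustration, not a value taken from the applications:

```python
# Illustrative sketch: weight recent weekly threat counts more heavily
# than older ones when estimating activity for the next period.
# half_life_weeks is an invented tuning parameter.
def weighted_rate(weekly_counts, half_life_weeks=4):
    decay = 0.5 ** (1 / half_life_weeks)
    # weekly_counts[0] is the most recent week; older weeks decay.
    weights = [decay ** age for age in range(len(weekly_counts))]
    total = sum(w * c for w, c in zip(weights, weekly_counts))
    return total / sum(weights)

# A recent spike (40) pulls the recency-weighted estimate above
# the plain long-term mean of the same data.
recent_heavy = weighted_rate([40, 10, 10, 10])
long_term = sum([40, 10, 10, 10]) / 4  # 17.5
```

Altering the half-life parameter moves the estimate between these two extremes, which is the kind of parameter adjustment described above.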




The financial losses for one or more scenarios are displayed in a chart as a summary value at risk, a residual risk exposure value resulting from changes to parameters, and the implementation cost of those mitigating changes.

This allows users to run different configurations, mitigation actions and changes to technology topologies in order to run what-if scenarios and evaluate the cost-benefit of each mitigation action. Where the residual value remains too high for the risk appetite of an organization after a mitigating action has been calculated, additional mitigation actions may be assessed until the exposure value fits within the organization's risk appetite.
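The what-if evaluation described above can be sketched as follows. All action names, loss reductions, costs and the risk appetite figure are invented for illustration:

```python
# Hedged sketch: compare residual value at risk after each candidate
# mitigation action against the organization's risk appetite, alongside
# the implementation cost of that action.
def evaluate(baseline_var, actions, risk_appetite):
    results = []
    for name, var_reduction, cost in actions:
        residual = baseline_var - var_reduction
        within_appetite = residual <= risk_appetite
        results.append((name, residual, cost, within_appetite))
    return results

# Hypothetical mitigation actions: (name, VaR reduction, cost).
actions = [
    ("patch-programme", 300_000, 50_000),
    ("network-segmentation", 550_000, 120_000),
]
report = evaluate(baseline_var=1_000_000, actions=actions,
                  risk_appetite=500_000)
# Only the second action brings residual exposure within appetite.
```

Each tuple in the report pairs a residual exposure with its implementation cost, supporting exactly the cost-benefit comparison the text describes.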

Alternatively, the residual risk exposure may then be transferred by way of cyber re/insurance, ILS, or captive risk transfer. The value for the risk carrier is clear and the mode of calculation is both transparent for the risk carrier as well as having multiple model outputs to price the risk from.

Additionally, through the mapping of systems, processes and categories and the values input by the user organization, the risk carrier can better understand the actual residual risk exposure and may mandate certain underwriting criteria being met in order to offer cover.


B4:

The main portfolio page for each scenario provides a tabular readout with four colour-coded columns for ease of use, these representing data from observations; externally sourced data; user input data; and predicted data.

For each period of concern, the data acquisition system sets time blocks of one week, broken into daily and hourly counts. The threat data system records, for each period: "Observation Start"; "Observation End"; Threat ID; Category of Threat; Target of the Threat; and Severity of the Threat.
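The record layout and weekly bucketing described above might be sketched as follows; the field names follow the labels in the text, while the class and helper function themselves are illustrative:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative record layout using the labels described in the text.
@dataclass
class ThreatRecord:
    observation_start: datetime
    observation_end: datetime
    threat_id: str
    category: str
    target: str
    severity: int

def weekly_counts(records, period_start):
    """Bucket records into one-week blocks counted from period_start."""
    counts = {}
    for r in records:
        week = (r.observation_start - period_start) // timedelta(weeks=1)
        counts[week] = counts.get(week, 0) + 1
    return counts

start = datetime(2024, 1, 1)
records = [
    ThreatRecord(start + timedelta(days=1), start + timedelta(days=1, hours=2),
                 "T-001", "malware", "web-server", 3),
    ThreatRecord(start + timedelta(days=9), start + timedelta(days=9, hours=1),
                 "T-002", "dos", "gateway", 5),
]
print(weekly_counts(records, start))  # {0: 1, 1: 1}
```

The same bucketing can be run with daily or hourly block sizes by changing the timedelta, matching the daily and hourly counts mentioned above.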

The models therefore utilise actual threats posed to an individual organization, as opposed to a generalised approach to assigning a threat data value. Each organization's operations, applications and topology are proprietary and, as such, a generalised approach to modelling such threats is invalid both for internal risk management purposes and for risk carriers seeking to understand their actual exposure.

Predicted future threats are accounted for by the models through the use of both actual observed threat data and externally sourced data, as now required under ISO 27001/2:2022.


For each scenario, the models use the actual observed threats that are assigned the labels listed in B4 above. The threat data acquisition system utilises Quantar's patented method from 2005 to ensure that the threat data utilised by the models has not been subject to manipulation by an external party such as an attacker, a malicious insider or another actor. The system remains directly unaddressable in order to prevent malicious attacks, whilst remaining able to update threat databases in order to correctly identify threats. The system and method have been of interest to the military for use in mobile missile guidance systems requiring the same covert operation whilst also updating mapping and target positioning.


The threat data acquisition system runs on FreeBSD and Ubuntu, although these may be combined onto a single server.

The method of acquiring threat data from inbound network traffic was developed between 1999 and 2002, when the first patent covering the system and method was filed.

In the video the IDS is Snort and the threat database is MySQL; however, the method is agnostic, and more recent developments have used Suricata and MariaDB for cost efficiencies.

B6A & B6B:

The applications have various functionalities both to automatically identify target systems and to manually map relationships between business processes and technology. The mapping of hardware and software against processes and categories has been created as a sub-application. This enables all organizations to define their own degree of granularity for the mapping process. Inputs are normally available from business continuity planning data.

This sub-application can be imported into and exported from the parent application and can assist in the ongoing development of continuity and resilience planning programmes. Linking is through simple drag-and-drop functionality to ensure personnel of all profiles are able to complete the task of mapping their individual systems, processes and categories. For large-scale operations this becomes more critical, with thousands of processes and hundreds of applications being utilised within various sectors.
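The underlying relationship data produced by such a mapping exercise can be sketched as a simple structure linking processes to categories and supporting systems. The process, category and system names below are hypothetical examples, not taken from the applications:

```python
# Illustrative mapping of business processes to a category and the
# systems that support them, as a drag-and-drop exercise might produce.
mapping = {
    "payments": {"category": "core-banking",
                 "systems": ["db-server", "api-gateway"]},
    "payroll": {"category": "back-office",
                "systems": ["hr-app", "db-server"]},
}

def processes_affected(system, mapping):
    """Return the business processes that depend on a given system."""
    return [p for p, m in mapping.items() if system in m["systems"]]

print(processes_affected("db-server", mapping))  # ['payments', 'payroll']
```

Traversing the mapping in this direction is what lets an outage on one system be translated into impacted processes, and from there into downtime and loss values.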




Once the mapping of the relationships between systems, processes and categories has been completed, the models utilise the actual observed threat data of an organization, in conjunction with external data, to determine expected downtime values and display these for the end user. The scenarios created may be altered for the same mapping and threat data, with changes made to evaluate the impact upon expected downtime and/or threat event frequency, for example by changing the period of observed threats.
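At its simplest, an expected downtime value combines an event frequency derived from observed threat data with a mean outage duration per event. The rates and durations below are invented for illustration:

```python
# Illustrative expected-downtime calculation: event frequency from
# observed threat data times mean outage duration, over a period.
def expected_downtime_hours(event_rate_per_week, mean_outage_hours, weeks):
    return event_rate_per_week * mean_outage_hours * weeks

# e.g. 0.5 successful events/week, 4h mean outage, 12-week period
print(expected_downtime_hours(0.5, 4.0, 12))  # 24.0
```

Changing the observation period used to estimate the event rate changes the downtime value, which is the scenario variation described above.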


Although cyber risk assessment, quantification and valuation may be viewed as addressing a purely electronic form of threat, the reality is that outages from malicious insiders, external attacks on critical infrastructure and naturally occurring phenomena such as flooding are just as likely to impact technology and operations. As such, the application caters for these and can also be used as a proxy for any user-defined inputs required by an organization according to its operating environment.

Losses from these physical and user-defined threats are added to the electronic threat losses to give an overall combined loss value for the organisation. Using this physical threat functionality, users can add any data the organisation wishes to the model inputs. Examples in a post-Covid world might include legal claims, reputational damage or other elements, providing an overall enterprise risk value.

B9, B10 & B12:

The main portfolio page, as described in B4 above, contains the data within a given period relating to actual threats posed to the organization. This includes system-acquired data, user-defined data and externally sourced data. For user-defined data, clearly the intention is to experience as few successful hacks, attacks and virus or malware infections as possible.

However, the limits placed upon inspecting inbound network traffic to eliminate every threat remain an issue of capacity, load, speed, volume and value. As such, security within the security perimeter, plus manual interventions, is used to disinfect as required. This is a method used by many organizations where the data flow is critical for operations and is a form of competitive advantage.

There are numerous functionalities available to a user, including the ability to filter the risk value results by process, location or page (scenario). Additionally, whilst the values are output both as a table and as a bar graph for ease of use, there is also an optional historical trend output.

The graph trace over a period of time can be used to demonstrate the effectiveness of the organization's ability to control and manage cyber risks. This can be regarded as the security maturity level of the organization and can be used for capital budgeting for security in future periods. The trace will also be of use to risk carriers in determining both the risk appetite of the organization and its ability to control cyber risks. Subsequent to Quantar's patented method, the security maturity level has become an agreed measure among a large number of risk carriers (see ref: https://www.jbs.cam.ac.uk/wp-content/uploads/2020/08/crs-cyber-data-schema-v1.0.pdf).

The E.U. GDPR was one of the first major pieces of legislation relating to data use, transfers and storage where the law in effect presumes guilt until an entity can demonstrate both proof of compliance and the intent to comply with the law. Using the applications and systems, allied to the traces relating to an organization's historic threat data, can assist in proving intent to comply with an increasing number of data laws that carry the same burden.


Allied to the graphical and tabular outputs illustrating cyber risk exposure are the historic trend outputs. As stated in B10 above, the application can be utilised as part of an overall GRC programme in addition to cyber risk management.

Since Quantar originally developed the applications for banking and re/insurance regulatory compliance requiring auditable documentary proof of how risk values are determined, there is an audit trail of every change to the application. The function provides the data required by regulators and auditors in evaluating cyber financial risk exposure of an organization.

Allied to this data provision and auditable documentation, the primary application is secured with physical encryption to ensure only users issued with the relevant hardware may use and configure it for further audit use.



The Predictive Analytics Engine application utilises the same actual observed threat data as Network Operational Risk Manager, but has additional functionalities and utilises different models. It facilitates drill-down capability for specific types of threat, allowing a user to identify a specific historic threat trend line as opposed to the overall risk trend. This can assist where attacks of a certain type are more frequently identified, in order to allocate resources to that particular pain point.

Additional functionality is provided through model parameter change options, allowing for observation according to the fit method, curve type, and temporal weighting of the model parameters.
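As an indication of what such a parameterised fit involves, the following pure-stdlib sketch fits a least-squares line through weekly attack counts, with optional temporal weights favouring recent observations. The engine's actual fit methods and curve types are not described here, so this is only indicative:

```python
# Illustrative weighted least-squares linear fit over weekly counts.
# weights (if given) implement temporal weighting; curve type here is
# fixed to a straight line for simplicity.
def weighted_linear_fit(ys, weights=None):
    n = len(ys)
    xs = range(n)
    w = weights or [1.0] * n
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, xs)) / sw
    my = sum(wi * y for wi, y in zip(w, ys)) / sw
    cov = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
    var = sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs))
    slope = cov / var
    return slope, my - slope * mx

# Four weeks of hypothetical counts with a rising trend.
slope, intercept = weighted_linear_fit([10, 12, 15, 19])
```

Swapping the fit function or supplying non-uniform weights corresponds to the fit method, curve type and temporal weighting options mentioned above.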

Time blocks are again specified so that risk frequency can be evaluated for the management of cyber risks and the capability of the organization to effectively control digital threats consistently. Risk carriers are again able to utilise such trend data in support of their underwriting and in developing underwriting criteria.

B15 & B15A:

The threat data is used to model the threats posed to the organization, displaying the result as a computed loss distribution. A red/amber/green (RAG) warning system is employed for ease of use, enabling non-technical personnel to rapidly identify compliance with the organization's risk appetite. The RAG system parameters are user-defined according to requirements.

The metrics are output in a tabular format, with data relating to the expected loss, root semi-variance high and low (to identify long-tailed distributions), values at given confidence levels, and conditional expected shortfalls for those confidence levels.
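Some of these metrics can be sketched from a sample of simulated losses: expected loss, value at risk (VaR) at a confidence level, and the conditional expected shortfall beyond it. The loss sample below is a toy input, not model output, and the empirical quantile method here is one of several possibilities:

```python
import statistics

# Hedged sketch of loss-distribution metrics from a sorted sample:
# expected loss, empirical VaR at a confidence level, and the mean of
# losses at or beyond VaR (conditional expected shortfall).
def risk_metrics(losses, confidence=0.95):
    losses = sorted(losses)
    n = len(losses)
    var_index = min(int(confidence * n), n - 1)
    var = losses[var_index]
    tail = losses[var_index:]
    return {
        "expected_loss": statistics.mean(losses),
        "var": var,
        "expected_shortfall": statistics.mean(tail),
    }

m = risk_metrics([100, 120, 90, 400, 110, 95, 105, 130, 900, 115],
                 confidence=0.8)
```

The expected shortfall is always at least the VaR, which is why the pair of figures helps flag long-tailed distributions in the way the text describes.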

This output can be utilised by the chief risk officer’s department in defining appropriate risk management and risk transfer options to reduce exposure to an acceptable level. For risk carriers, the output can be used to determine re/insurance pricing, attachment points, overall limits, etc.

Users have the ability to change the model and output parameters by defining the plot scale and risk tenor, the latter setting the period for prediction up to one year ahead. Whilst the current risk exposure level may be manageable and at an acceptable level, the ability to predict into future periods enables capital budgeting to be better aligned with the cyber risk management needs of the organization.

The application also outputs audit reports of the organization's risk values for regulatory compliance reporting and for annual audits, as proof of the effectiveness of the organization's IT general controls. This avoids detailed individual IT control audits, reducing audit costs, as well as removing auditors' doubt as to FSLI impacts and enabling reasonable-assurance sign-off of annual accounts.



The period for reviewing the historic data trend analysis is user-defined within the model parameters, enabling shorter-term snapshots for evaluation where there is a change in technology topology or business process, M&A activity, or other organizational change. Alternatively, for an auditable period, the time period for historic threat data analysis may be selected for up to one year.

Since audit periods may cover a partial calendar period, the application allows for the common audit timelines utilised by the Big Four firms. Additionally, such periods may be used for risk transfer where a period of cover falls outside a standardised time period.


The Predictive Analytics Engine application outputs two sets of tabulated predictive data: one for attack rate forecasts and one for expected losses in future periods of up to one year.

As with the historic threat data, the time periods are separated into quarterly periods for both audit and risk transfer purposes, as well as for internal risk management use. With the predicted future loss values projected up to one year in advance, it is possible for the organization to evaluate its best cost-benefit and capital budgeting options. It enables strategic decisions relating to risk appetite, security investments and risk transfer options to be considered specifically, or within the overall risk management framework.

Additionally, such data is key for risk carriers to correctly evaluate and value risk exposure in determining underwriting criteria, pricing, limits and hedging options for their clients. For investors within the ILS segment, such data is also crucial in determining participation and for reinsurers the layer, or tranche they are willing to cover. Where captives/sidecars are used, the data may be used for trigger and attachment point determination.


The model parameters may be adapted according to the individual organization's needs. This applies to the core model output functions. The number of iterations for Monte Carlo simulations will depend upon need and the computing power available to the user. A 10,000-iteration setting, for example, is capable of being run on an average laptop. Running far greater numbers of iterations clearly requires increased capacity.
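The shape of such a simulation can be sketched as follows; the frequency and severity distributions, and their parameters, are illustrative assumptions only and not those of the actual engine:

```python
import random

# Toy Monte Carlo sketch: each iteration draws a number of loss events
# and a severity per event, producing a sample of annual loss totals.
# Distributions and parameters are invented for illustration.
def simulate(iterations, sev_mean=50_000, seed=1):
    rng = random.Random(seed)
    totals = []
    for _ in range(iterations):
        # crude frequency draw: up to 4 events, each with 50% chance
        events = sum(1 for _ in range(4) if rng.random() < 0.5)
        # exponential severity draw per event
        totals.append(sum(rng.expovariate(1 / sev_mean)
                          for _ in range(events)))
    return totals

losses = simulate(10_000)  # runs comfortably on an average laptop
```

The iteration count is the only capacity-sensitive parameter here; raising it narrows sampling error on tail metrics at a linear cost in runtime.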

The PDF monitor settings will have a significant effect on the model outputs and should therefore only be available to personnel with specific knowledge and skill sets. The simulation configuration parameters may also be utilised for "what-if" forecasting by an organization, risk carrier or auditor in determining future risk exposure for a given set of assumptions.

B19 & B20:

The summary page within Network Operational Risk Manager contains the value at risk posed to the organization, expressed as a monetary value in the currency selected by the user. To facilitate acceptance up to a defined limit, the application has an acceptable variance percentage function that is set according to the risk appetite of the organization.

At the initial baseline stage, the value is either accepted and the value set within the system as the baseline acceptable risk exposure, or the application will mandate a change be made to bring the risk value down to a level that the user accepts.

After the baseline value is input, any variance beyond the percentage increase set within the model requires a mitigation action to be made to the organization's systems, processes or categories. The application is effectively locked until such time as the risk exposure value falls to the acceptance level.
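The baseline-and-variance gate described above reduces to a simple check; the threshold values here are illustrative, not defaults from the application:

```python
# Illustrative sketch: once a baseline exposure is accepted, a new
# exposure value is acceptable only if it stays within the configured
# percentage variance of that baseline.
def within_appetite(baseline, current, max_variance_pct):
    limit = baseline * (1 + max_variance_pct / 100)
    return current <= limit

baseline = 1_000_000
ok = within_appetite(baseline, 1_040_000, max_variance_pct=5)
blocked = within_appetite(baseline, 1_100_000, max_variance_pct=5)
# ok is True (inside the 5% variance); blocked is False, so a
# mitigation action would be required before proceeding.
```

In the application, a False result is what triggers the lock until mitigation brings the exposure value back to the acceptance level.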

This operates in conjunction with the audit trail functionality and the encryption of the application to serve as auditable and documented proof of regulatory compliance and the intent to comply with various data laws.

The application has an in-built RAG warning system for rapid assimilation of the organization's status. In this video, the bar is mauve, indicating a move from a safe green status towards a red exceedance level. This can be utilised by auditors and risk carriers in evaluating how the organization manages its risk exposure and what changes are made internally to control and manage digital threat risk exposures.