Getting started

PEALT input consists of text statements delimited with \n (NEWLINE) characters. The input is divided into sections by the keywords POLICIES, POLICY_SETS, CONDITIONS, DOMAIN_SPECIFICS (optional), and ANALYSES. We illustrate this with the Car Rental Risks example provided in the tool interface. It specifies five policies:

POLICIES
% policy capturing risk of financial loss dependent on type of rented car
b1 = max ((isLuxuryCar 150000) (isSedan 60000) (isCompact 30000)) default 50000
% policy capturing trust in rentee dependent on type of his or her driving license
% trust score for 'hasOtherLicense' contains non-deterministic uncertainty and so is in [0.3,0.5]
b2 = min ((hasUSLicense 0.9) (hasUKLicense 0.6) (hasEULicense 0.7) (hasOtherLicense 0.4 [-0.1,0.1])) default 0
% policy that captures potential risk dependent on type of intended car usage
% this policy happens not to be used in the conditions below
b3 = max ((someOffRoadDriving 0.8) (onlyCityUsage 0.4) (onlyLongDistanceUsage 0.2) (mixedUsage 0.25)) default 0.3
% policy that accumulates some signals that may serve as additional trust indicators
b4 = + ((accidentFreeForYears 0.05*x) (speaksEnglish 0.05) (travelsAlone -0.2) (femaleDriver 0.1)) default 0
% the next policy is just defining a 'constant' -1, to be used as sub-expression in a policy set
b_minOne = + () default -1

Note the use of "%" to insert comment lines into PEALT input. The predicate names 'isLuxuryCar' etc. occurring in policies do not have to be explicitly declared as predicates; their type is implicitly inferred from their occurrence in a rule.

The scores within rules may be constants (e.g. 30000 in policy b1 above), negative numbers (e.g. -0.2 in policy b4 above), variables, or constants multiplied with variables (e.g. 0.05*x in policy b4 above). All variables within score expressions of rules are implicitly assigned the type Real. Additionally, a score expression s may be followed by an interval [l,u] where l and u are constant real numbers such that l <= 0.0 <= u. For example, in policy b2 we have the rule (hasOtherLicense 0.4 [-0.1,0.1]). The meaning of this score expression is that it can be any number 0.4 + z where z is in the interval [-0.1,0.1]. Such an interval therefore models non-deterministic uncertainty of score expressions.

If you use the same predicate more than once in the same policy, in rules whose scores carry an interval, you should disambiguate these names so that, e.g., b1 = +((q 0.4 [-0.1,0.2]) (q 0.0 [0.0,0.1])) default 0.0 becomes b1 = +((q 0.4 [-0.1,0.2]) (q_copy1 0.0 [0.0,0.1])) default 0.0. Not doing this means that the non-determinism in [-0.1,0.2] and in [0.0,0.1] will be resolved to the same value, which may lead to unexpected analysis results. Of course, any constraints you formulate for q will have to be cloned for q_copy1 as well. Fortunately, we only seem to need this when q models 'true' in the composition of tree-like models -- so no complicated constraints need cloning.

If we want to compose a policy with a real number, e.g. to write an expression b4 - 1, PEALT requires us to write the real number -1 as a policy expression and to compose these policies with +. This is an artifact of the language definition and its parsing. Therefore, we use b_minOne = + () default -1 as a means of defining such a value. In general, we may define a real number r as a policy b_r = op () default r where the choice of operator op is irrelevant as long as it is any allowed operator for policies (max, min, +, and *).
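
For instance, each of the following declarations (with names invented here for illustration) defines the constant 0.5:

% the choice of operator is irrelevant when the rule set is empty
b_half1 = max () default 0.5
b_half2 = min () default 0.5
b_half3 = + () default 0.5
b_half4 = * () default 0.5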

The intuitive meaning of a policy is that we first collect the score expressions of all predicates that are true in that policy. If that set is empty, the policy returns its default score. Otherwise, it returns the result of applying its operator to that set of score expressions. For example, if isLuxuryCar is true and isSedan and isCompact are false, then policy b1 evaluates to 150000. A second worked example, for a policy with a variable score, is sketched below.
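
Suppose, for instance, that in policy b4 only the predicates accidentFreeForYears and speaksEnglish are true, with x = 4 (this walkthrough is our own, not part of the tool's sample input):

% only accidentFreeForYears (with x = 4) and speaksEnglish are true
% b4 collects the score expressions {0.05*x, 0.05} = {0.2, 0.05}
% its operator is +, so b4 evaluates to 0.2 + 0.05 = 0.25

Let us see next how we can use policies to declare policy sets: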

POLICY_SETS
% policy set that 'converts' the trust expressed in b2 into risk
pSet0 = +(b2,b_minOne)
% policy set that multiplies risk with potential financial loss
pSet1 = *(b1,pSet0)
% casting policy b4 into a policy set
pSet_b4 = b4

It is an artifact of the PEALT input language that we cannot write a policy name within a policy set expression directly. Rather, we either have to cast it into a policy set (e.g. as done in declaring pSet_b4 as policy b4) or we have to apply the policy set composition operators (min, max, +, and *) to policies (e.g. in pSet0) or to a mix of policies and policy sets (e.g. in pSet1). The input language does not support the application of these operators to more than two arguments at a time; combinations of more than two arguments therefore have to be chained, as sketched below.
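
For example, to combine three policies b1, b2, and b3 with +, we might write something like the following (pSet_b1b2 and pSet_b1b2b3 are names invented here for illustration):

POLICY_SETS
% chain binary applications of + to combine three policies
pSet_b1b2 = +(b1,b2)
pSet_b1b2b3 = +(pSet_b1b2,b3)

Let us see how we can use policy sets to declare conditions: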

CONDITIONS
% condition that the risk aware potential financial loss is below a certain bound
cond1 = pSet1 <= 50000
% condition that the accumulated trust is above a certain threshold
cond2 = 0.4 < pSet_b4
% condition that insists that two previous conditions have to hold
cond3 = cond1 && cond2
% variant of condition cond2 with a higher threshold
cond4 = 0.6 < pSet_b4
% variant of condition cond3 for that higher threshold
cond5 = cond1 && cond4

The definition of conditions follows conventions very similar to those for defining policy sets. The composition operators are negation (!), conjunction (&&), disjunction (||), and the comparison operators (< and <=). All of these operators have to be applied one at a time when defining conditions. For example, in order to define a condition cond as !isLuxuryCar && (0.4 < pSet_b4) we have to declare something similar to

% cast predicate into condition
isLuxuryCar_c = isLuxuryCar
% negate condition
c1 = !isLuxuryCar_c
% create atomic condition
c2 = 0.4 < pSet_b4
% form desired conjunction
cond = c1 && c2

by inventing suitable intermediate condition variables such as isLuxuryCar_c, c1, and c2. We realize that this may make specifications more verbose than necessary. But it simplified the front end of our input processing, and it means that vacuity checking (discussed below) applies to all sub-conditions declared in this manner. Let us now show how we may add domain-specific knowledge or assumptions:

DOMAIN_SPECIFICS
% real x models the number of years driven without accident, has to be non-negative and is 'truncated' at value 10
(assert (and (<= 0 x) (<= x 10)))
% capturing a company policy: luxury cars must not be used for off road driving
(assert (or (not isLuxuryCar) (not someOffRoadDriving)))
% capturing that the different types of rental cars are mutually exclusive
(assert (and (implies isLuxuryCar (and (not isSedan) (not isCompact))) (implies isSedan (and (not isLuxuryCar) (not isCompact))) (implies isCompact (and (not isSedan) (not isLuxuryCar)))))
% capturing that cars that are only used in cities are not used in a mixed sense
(assert (implies onlyCityUsage (not mixedUsage)))
% capturing that cars used only for long distance driving are not used in a mixed sense
(assert (implies onlyLongDistanceUsage (not mixedUsage)))
% capturing domain constraints (or company policy?) that city driving cannot happen off road
(assert (implies onlyCityUsage (not someOffRoadDriving)))
% capturing that cars used only for long distance driving are not driven off road
(assert (implies onlyLongDistanceUsage (not someOffRoadDriving)))

In section DOMAIN_SPECIFICS, we can enter any input that is legal syntax for the Z3 input language (and not for the PEALT input language). This section allows for an easy manner of formulating domain knowledge or domain assumptions and of generating code for that (essentially just a cut and paste of that code). But this simplicity brings some issues that we want to highlight. In particular, a name that occurs in a condition or in domain-specific code, but not in any rule of any policy, is not implicitly typed, and so has to be declared explicitly in this section:

CONDITIONS
c1 = seniorCitizen
...
DOMAIN_SPECIFICS
% seniorCitizen not yet declared as Boolean, and does not occur in any rules of any policies above
(declare-const seniorCitizen Bool)
% define meaning of being a senior citizen
(assert (= seniorCitizen (< 65 age)))
...
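
Note that the numeric variable age used above would similarly need an explicit declaration, unless it occurs in a score expression of some policy or is declared in one of the elided parts of this input. A sketch of such a declaration, assuming age is modeled as a real:

(declare-const age Real)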

Once we have declared conditions, be it with or without domain-specific content, we can declare analyses for these conditions. PEALT supports six analysis types that all reduce to satisfiability checks:

ANALYSES
% is condition cond3 satisfiable?
name1 = satisfiable? cond3
% is condition cond3 always true? this would suggest a specification error
name2 = always_true? cond3
% is condition cond3 always false? this also would suggest a specification error
name3 = always_false? cond3
% is condition cond5 satisfiable?
name4 = satisfiable? cond5
% are conditions cond3 and cond5 equivalent?
name5 = equivalent? cond3 cond5

In section ANALYSES, we can declare any number of analyses, using the same naming mechanisms as for policies, policy sets, and conditions. The supported analyses are satisfiable?, always_true?, and always_false? (which take one condition), and equivalent?, different?, and implies? (which take two conditions).
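
For instance, an implication analysis over two conditions (used in the output examples further below) would be declared as follows, where name6 is a name we invent here:

% does condition cond2 logically imply condition cond1?
name6 = implies? cond2 cond1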

What's in a name?

PEALT has a global name space for its input. It has some tolerance towards reusing the same name in different sections; e.g., you may declare a policy b1 and then declare a condition with the same name b1. We do not recommend this, if only because it makes models and scenarios hard to read and appreciate.

The legal names of predicates, score variables, policies, policy sets, conditions, and analyses all follow the same grammar:

IDENT : ('a'..'z' | 'A'..'Z')('a'..'z' | 'A'..'Z' | '0'..'9' | '_')*
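
For example, here are some names that this grammar accepts and rejects (our own illustrations):

% legal identifiers: a letter first, then letters, digits, or underscores
%   b4, b_minOne, isLuxuryCar, q_copy1
% illegal identifiers:
%   4b (must not start with a digit)
%   b-minOne ('-' is not an allowed character)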

Vacuity checking

The analyses always_true? and always_false? capture what is known as vacuity checking. Knowing that a condition is always true, or always false, may suggest that there is a specification error in the condition; this would, for example, be the case if the condition were a temporal logic property meant to check the equivalence of two hardware circuits. In the setting of PEALT, vacuities may well be specification errors. But they may alternatively establish desired invariants, for example, that the expected risk of some monetary losses is always below a constant threshold. It is up to the specifier to decide whether such vacuities are desired or indicate specification issues.

We think that checking vacuities is so important in our context that we offer a settings option under which all declared conditions (which include all sub-conditions, due to the manner in which conditions are declared) are checked automatically for these two types of vacuities. Below, we see this report for the Car Rental Risks example when vacuity checking is set to ON:

Vacuity check on all conditions declared in CONDITIONS section above

Conditions that are always true: cond1
Conditions that are always false:

This output establishes that the monetary loss, weighted with its perceived risk, never exceeds 50000, and that no declared condition is always false. Should the tool not be able to decide any of the vacuity analyses, those outcomes will be reported separately as "may always be true" and "may always be false", respectively.

Three analysis buttons for each code generation option

Each of the code generation options, "General scores" and "Non-negative, constant scores", has three ways of displaying analysis output: a report of the results of all analyses in the format described below; a display of the Z3 code that PEALT generates for these analyses; and a display of that raw Z3 code together with the raw Z3 output obtained from running it.

We should stress that the first of these ways is the preferred one. The latter two expose users to explicit Z3 code and Z3 output, and so are chiefly of use to researchers who want to modify or understand the inner workings of this tool. We will therefore only describe the output of the first, preferred way of reporting analysis findings.

We note that "all analyses" refers to all analyses declared under keyword ANALYSES. If automatic vacuity checking is set ON, more analyses will be performed but only analyses declared under keyword ANALYSES will have explicit output as discussed below. In particular, vacuity checks that are also declared under that keyword would be performed twice, once for the compact vacuity reporting, and once for the detailed analysis output reporting. Also, the independent certification of analyses, discussed below, applies only to analyses declared under the keyword ANALYSES.

Reporting format for analysis output

Each analysis declared under keyword ANALYSES has its output reported in the same manner. There are two types of output, depending on whether the outcome of the Z3 analysis is SAT, or one of UNSAT and UNKNOWN. In the latter case, the output will be given in a form similar to this example:

Result of analysis [name4 = implies? cond2 cond1]

cond2 is pSet1 > 0.6 and cond1 is pSet1 > 0.5

cond2 implies cond1

Output of analysis [name4] is Unsat: so no certification performed and no specialized policies reported.

The above was generated using the Social Network Access Control example, and the outcome was UNSAT. The first line recalls the condition(s) involved in the analysis. The next line summarizes the finding, here that condition cond2 logically implies condition cond1. The last line lists the analysis outcome (here UNSAT) and explains that such an outcome won't trigger additional reporting.

Let us now discuss the output format when the analysis outcome is SAT. Below, we see the report for analysis name2 for the Car Rental Risks example, when vacuity checking is set OFF:

Result of analysis [name2 = always_true? cond3]

cond3 is (pSet1 <= 50000.0) && (pSet_b4 > 0.4)

cond3 is NOT always true, for example, in the scenario in which:
onlyLongDistanceUsage is False
speaksEnglish is True
isLuxuryCar is False
isCompact is False
hasOtherLicense is True
onlyCityUsage is False
isSedan is False
travelsAlone is True
accidentFreeForYears is True
mixedUsage is True
femaleDriver is True
someOffRoadDriving is False
b_minOne_score is -1.0
b2_hasOtherLicense_U is 0.0
b1_score is 50000.0
b3_score is 1.0/4.0
b2_score is 2.0/5.0
x is 9.0
b4_score is 2.0/5.0

Certification of analysis [name2] succeeded.

Policy scores statically inferred in this certification process:
b1 has score 50000
b3 has score 0.25
b4 has score 0.4
b_minOne has score -1

Policies in analysis [name2] specialised with respect to the above scenario:
b1 = max () default 50000
b2 = min (([hasOtherLicense] 0.4) (hasUSLicense? 0.9) (hasUKLicense? 0.6) (hasEULicense? 0.7)) default 0
b4 = + (([accidentFreeForYears speaksEnglish travelsAlone femaleDriver] 0.4)) default 0
b_minOne = + () default -1

As before, this output recalls the condition(s) analyzed. Then it gives an explanation, here of why a condition is not always true, and next reports a scenario that supports this claim. This is followed by a statement of whether the independent certification of that scenario on the PEALT condition(s) succeeded or failed. Such certification is meant to be agnostic of any Z3 code that was generated for this analysis.

Success of certification may trigger additional reports: which predicates this certification additionally had to set to false, and which real variables it additionally had to set to 0.0. Such refinements reflect the tradeoff between the compositionality of this certification process (which uses Kleene's strong 3-valued propositional logic) and its precision. The certified scenario is the one that includes any additional 0.0 variable values or false predicates if such refinements did occur. This was not the case above.

In the next reporting zone, we see all policies whose scores this certification statically had to infer and use, based on the certified scenario. We emphasize that this process does not make use of the scores for variables of form p_score found in the scenario. Rather, it verifies that any such reported value equals the score of p whenever that score can be statically inferred.

Finally, we see a report of all policies that are inspected in this certification process, showing how these policies partially evaluate in the certified scenario: scores that are outcomes of such policies are shown in red; and red brackets group all true predicates within a policy (where applicable, such as in b2 above) and report their aggregated score according to the certified scenario. If no predicates are true, no rules are shown but the default score is shown in red as the policy outcome. Predicates shown in green end in a question mark, indicating that the truth values of these predicates did not matter in the certification of the scenario.
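
To decode this notation for the specialised policy b2 shown above (the annotations are ours):

% b2 = min (([hasOtherLicense] 0.4) (hasUSLicense? 0.9) (hasUKLicense? 0.6) (hasEULicense? 0.7)) default 0
%   [hasOtherLicense]   -- red brackets: the true predicates, with aggregated score 0.4
%   hasUSLicense? etc.  -- green with question mark: truth value did not matter
%   b2's score in this scenario is therefore 0.4, matching b2_score in the scenario above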

It is instructive to look at the above analysis name2 again after vacuity checking has been set to ON. We then see the report of a different scenario that is still certified successfully, but in which no green predicates are shown in the specialised policies. This phenomenon is an artifact of the incremental analysis that Z3 uses, and that our code generator invokes with the push/pop constructs of Z3. We here list the output when vacuity checking is set to ON:

Result of analysis [name2 = always_true? cond3]

cond3 is (pSet1 <= 50000.0) && (pSet_b4 > 0.4)

cond3 is NOT always true, for example, in the scenario in which:
isLuxuryCar is False
isCompact is False
hasUKLicense is False
hasEULicense is False
hasUSLicense is True
accidentFreeForYears is True
mixedUsage is True
onlyLongDistanceUsage is False
speaksEnglish is True
hasOtherLicense is False
onlyCityUsage is False
isSedan is False
travelsAlone is False
femaleDriver is True
someOffRoadDriving is False
b1_score is 50000.0
b3_score is 1.0/4.0
b2_score is 9.0/10.0
b4_score is 2.0/5.0
b2_hasOtherLicense_U is 0.0
b_minOne_score is -1.0
x is 5.0

Certification of analysis [name2] succeeded.

Policy scores statically inferred in this certification process:
b1 has score 50000
b2 has score 0.9
b3 has score 0.25
b4 has score 0.4
b_minOne has score -1

Policies in analysis [name2] specialised with respect to the above scenario:
b1 = max () default 50000
b2 = min (([hasUSLicense] 0.9)) default 0
b4 = + (([accidentFreeForYears speaksEnglish femaleDriver] 0.4)) default 0
b_minOne = + () default -1

Analysis options and syntax restrictions

The input language for PEALT gives users two analysis options: "General scores", which places no restrictions on the score expressions of an input, and "Non-negative, constant scores", which assumes that all scores are non-negative constants and that scores within * policies lie in the unit interval [0,1].

The editor and runner interface of the tool offers buttons for generating random or example PEALT input. Amongst those buttons, the ones colored in light blue generate input that should only be analyzed by clicking the light blue button "General scores", whereas those colored in dark blue generate input that may be analyzed by clicking either "General scores" or "Non-negative, constant scores". Users should take care to only use the latter option when the input is known to meet the above score constraints.

Using option "Non-negative, constant scores" may yield more scalable analyses for policies that have multiplication as composition operator. But for PEALT input that has negative scores, non-constant scores, or scores outside of the unit interval in * policies, it may produce errors. Fortunately, such errors are likely to be identified: for example, during parsing if scores contain variables, or during certification if * policies contain scores not in [0,1]. We cannot vouch for the correctness of a "successful certification" if option "Non-negative, constant scores" was used on PEALT input that violates these input assumptions (e.g. input with a * policy whose scores fall outside the unit interval [0,1]). Users need to ensure that they choose the semantically correct option; if in doubt, option "General scores" is always correct.
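
A minimal sketch (our own) of input that meets these constraints, with all scores constant and non-negative, and the scores of the * policy within the unit interval:

POLICIES
% * policy: constant scores within [0,1]
b1 = * ((q1 0.5) (q2 0.25)) default 1.0
% non-* policies may use any non-negative constant scores
b2 = max ((q3 0.7) (q4 2.5)) default 0.0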

Note that we can still express conditions cond of the form pSet <= th under the "Non-negative, constant scores" option. But we then have to state this as

cond_aux = th < pSet
cond = !cond_aux

Referring to policy scores in scores

The Fire Fault Tree example of PEALT illustrates a powerful feature of its input language. For a declared policy p, we can write p_score for a variable that represents its score, and we can therefore use that variable in score expressions within rules. Of course, this usage assumes that option "General scores" is chosen. Some care should be exercised when using this feature. Say that a policy q depends on policy p if variable p_score occurs in policy q. The certification process may not terminate when this dependency relation contains a cycle. In the presence of cycles, PEALT will still produce raw Z3 code and raw Z3 output in the third way of using that option. Moreover, the certification process may then even terminate, in which case its success can be trusted, since it will have removed any circularity in dependencies by making false some predicates that occur in rules with scores containing p_score type variables. Let us illustrate this with an example:

POLICIES
b1 = +((q1 b2_score) (q2 0.5)) default 0.0
b2 = max((q3 b1_score) (q4 0.4)) default 1.0
POLICY_SETS
pSet1 = min(b1,b2)
CONDITIONS
cond1 = 0.0 < pSet1
ANALYSES
name1 = satisfiable? cond1

There is a circularity in policies b1 and b2 above. But when we analyze this with option "General scores", we arrive at the following output:

cond1 is pSet1 > 0.0

cond1 is satisfiable, for example, in the scenario in which:
q1 is True
q2 is False
q4 is True
b1_score is 2.0/5.0 = 0.4
b2_score is 2.0/5.0 = 0.4

Certification of analysis [name1] succeeded.
Additional predicates set to false in this certification process are: Set(q3)

Policy scores statically inferred in this certification process:
b1 has score 0.4
b2 has score 0.4

Policies in analysis [name1] specialised with respect to the above scenario, extended with false predicates from
Set(q3):
b1 = + (([q1] 0.4)) default 0
b2 = max (([q4] 0.4)) default 1

Our certification process could make sense of this by refining the predicate q3, for which the found scenario did not report a truth value, to false. However, if we were to add to the above PEALT input the domain-specific assumption that both q1 and q3 need to be true:

DOMAIN_SPECIFICS
(assert (and q1 q3))

then our certification process will simply not terminate! It now has to statically infer scores for both policies, each of which refers indirectly to the very scores that need to be statically inferred, an infinite regress. This is why we said above that variables of form p_score should be used with care.
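
Acyclic uses of this feature are unproblematic. A minimal sketch of our own, where b_base and b_top are names invented here:

POLICIES
b_base = + ((q1 0.5)) default 0.0
% b_top refers to the score of b_base; this dependency relation has no cycle
b_top = + ((q2 b_base_score)) default 0.0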

Failed certifications

When a certification fails, PEALT will only output a message to that effect, with some diagnostic information for PEALT implementers. Fortunately, this occurs rarely, and when it does it can be attributed to two potential causes that also have work-arounds: a known bug in Z3's incremental analysis (which our code generator invokes through Z3's push/pop constructs), and Z3 models that do not define values for all variables (our certification process then assumes such variables to be 0.0). We now discuss these causes and their work-arounds:

Here is an example PEALT input for which certification fails (for version PealApp-lift-assembly-3.2.0.jar):

POLICIES
b0 = min ((q14 0*vl) (q0 0.7850)) default 0.8919
b1 = * ((q6 0.6819) (q7 0.7271)) default 0.5390
b2 = * ((q12 0.3504) (q0 0.4032)) default 3*vp
b3 = + ((q3 0.3078) (q7 0.1332)) default 0.7163
b4 = max ((q3 0.0948 [-0.1435,0.4347]) (q11 0.1327)) default 0.8418
b5 = * ((q3 0.3235) (q9 2*v0 [-0.3561,0.7747])) default 0*vy
b6 = max ((q14 0.5613) (q4 0.6564)) default 0.6351
b7 = + ((q8 vx) (q2 0.9709)) default 0.5696 [-0.8854,0.0560]
b8 = * ((q14 0.7031) (q5 0.6469)) default 0.3753
b9 = min ((q11 0.3598) (q7 0.4311)) default 0.0868
b10 = * ((q10 0.1041) (q12 0.6119)) default 0.8983
b11 = + ((q1 0.4650) (q4 0.6019)) default 0.3478
b12 = min ((q3 3*vu [-0.2797,0.6717]) (q9 2*vu)) default 0.0223
b13 = max ((q5 0.9802) (q2 0.9236)) default 0.8039
b14 = min ((q6 2*vk) (q13 0.4516)) default 2*vl
b15 = max ((q13 0.3230) (q12 0.9485)) default 0.6186
b16 = + ((q1 0.8562 [-0.9047,0.6472]) (q11 3*vb)) default 0.6468
b17 = + ((q4 2*vl) (q11 0.8956)) default 0.5290
b18 = min ((q6 0.0261) (q4 0.8287)) default 0.9865
b19 = max ((q0 0.0663) (q10 0.5418)) default 0.7368
POLICY_SETS
p0_1 = min(b0,b1)
p2_3 = min(b2,b3)
p4_5 = min(b4,b5)
p6_7 = min(b6,b7)
p8_9 = min(b8,b9)
p10_11 = min(b10,b11)
p12_13 = min(b12,b13)
p14_15 = min(b14,b15)
p0_3 = max(p0_1,p2_3)
p4_7 = max(p4_5,p6_7)
p8_11 = max(p8_9,p10_11)
p12_15 = max(p12_13,p14_15)
p0_7 = +(p0_3,p4_7)
p8_15 = +(p8_11,p12_15)
p0_15 = *(p0_7,p8_15)

p16_17 = min(b16, b17)
p18_19 = +(b18, b19)
p0_15_0 = min(p0_15,p16_17)
p0_15_1 = max(p0_15_0,p18_19)
CONDITIONS
cond1 = 0.50 < p0_15_1
cond2 = 0.60 < p0_15_1
ANALYSES
analysis2 = always_false? cond2
analysis3 = always_false? cond2

We need to use option "General scores" as there are variables in some score expressions. Running both analyses, we get the following certification results:

Certification of analysis [analysis2] failed.
...
Certification of analysis [analysis3] failed.
Additional predicates set to false in this certification process are: Set(q13)

So both certifications failed, and only the second one refined a predicate (q13). To see that this failure is attributable to the aforementioned Z3 bug, let us turn vacuity checking OFF under Settings, comment out the line declaring analysis3, and run this again. Now we obtain that certification succeeds (the generated code won't use the push construct, as only one analysis is executed), by refining one score variable vy to 0.0:

Certification of analysis [analysis2] succeeded.
Variables not defined in the Z3 model but are assumed to be 0 in this certification process are: Set(vy)

If we instead comment out analysis2 and execute only analysis3, we likewise get that certification succeeds by refining only the variable vy to 0.0.