1 Motivation
The quaranta giorni, or forty-day isolation, imposed by the Venetians was, as the name implies, a measure applied to incoming ships [venice] which evolved into containment practices to handle recurrent epidemics (why 40 days? Because the bubonic plague had a 37-day period from infection to death). At the time of writing, owing to ubiquitous world travel, COVID-19 ‘quarantines’ or lockdowns keep millions of people around the world confined mostly to the home for months before an ‘easing of measures’ gradually reopens society.
The 2020 lockdowns in response to the COVID-19 pandemic are enforced in different ways. Argentine and Spanish lockdowns are strictly policed, with citizens required to make written application for outings. In contrast, in north-western Europe and the United States, lockdowns are not as strict, with Denmark and the United Kingdom entrusting their citizens not to infringe lockdown rules, and liberal Sweden choosing not to lock down in an official sense, instead practicing a small number of restrictive measures.
The manner of lockdown, and how to exit one, are problems that are short of informed solutions. It is useful to frame the generic lockdown problem as requiring an innovation capable of overcoming the following trade-off: generally, the longer and more extensive a lockdown, the more effectively society can prepare hospital facilities and control the spread of the disease, but the greater the negative factors: loss of personal freedoms; damage to the economy; poorer personal psychology; social unrest; abusive relations; undetected crime; a higher incidence of other disease because people are too scared to visit the emergency room or doctor; and negative effects on the care of both the vulnerable and the elderly.
2 Solution
The idea is to enable lockdown whilst minimizing many of its negative consequences, e.g., to personal psychology, economics and health. Indeed, it would be nice if life could continue as normal while in lockdown, a seemingly contradictory statement. Inspiration comes from a crude approach taken by governments to ensure that those who venture outdoors are fewer in number. Panama [panama] used the last number of an identity document to assign two hour time slots to venture away from the home for essentials and, as Spain eased measures, it allowed certain age groups to go out at different times of the day. The open literature also explores changing a general lockdown to a number of partial ones [delockdown].
The solution presented here is for citizens in lockdown to enter into a smartphone, handset, tablet or computer a schedule of the places that they wish to visit on that day, or on future dates, together with a rough idea of the part of the day they would prefer for such outings to take place. A method of optimization, in this proof of concept a Genetic Programming [koza:book] method, takes these requests and simulates the outings by means of an infection model to discover a nearly optimal allocation of precise time slots for visits, one that reduces the likely hospitalization and death numbers. It then communicates the time allocations to citizens on their devices, so that they carry out the journeys more safely than at times of their own choosing. The solution is proactive.
Specifically, it is useful to discuss the problem within the implementation of this proof of concept. A data file captures all requests as shown on the left side of Figure 1. The visit requests on each of three consecutive days can be many, and each is denoted by a three-symbol key. Requests are separated by the colon symbol. The first letter gives a broad time of day requested for the visit, limited to:
M wishes for the visit to take place during ‘morning’ hours;  
P wishes for the visit to take place during ‘afternoon’ post meridiem hours;  
N wishes for the visit to take place during ‘night’ hours;  
A anytime: does not mind at what hour on this day. 
and the second letter is the desired place of visit, limited here to six types of establishment:
F may represent a ‘supermarket’ selling food;  
C may represent a sports ‘club’;  
P could represent a ‘park’;  
D may stand for ‘doctor’ a doctor’s surgery centre;  
R may stand for ‘restaurant’;  
S could represent a ‘social’ establishment; 
the third symbol can be 1 or 2, because there are only two of each type of establishment, for a total of 12 establishments available for visits.
For example, one such request is from Person ID 20, who wants to go to Doctor’s Surgery 2 at any time of day. The day is divided into eight two-hour slots. After analysis of all requests, by working with an infection model, the computer-generated optimization minimizes the total risk of COVID-19 infection, hospitalization and death. It allocates the time slots to the requests, and the solution is communicated back to citizens. In this case, Person ID 20 is allocated a time slot within the constraint that they imposed, as shown on the right side of Figure 1. There are other data fields, such as age and health, on the left side of the figure that are discussed further in this text.
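The three-symbol key scheme just described can be sketched as a small parser. The function and dictionary names below are illustrative only, not part of the proof-of-concept code:

```python
# Hypothetical parser for the three-symbol visit-request keys described above.
# Key format (as given in the text): time preference (M/P/N/A), establishment
# type (F/C/P/D/R/S), establishment number (1 or 2); requests are separated
# by colons, e.g. "AD2:MF1".

TIMES = {"M": "morning", "P": "afternoon", "N": "night", "A": "anytime"}
PLACES = {"F": "supermarket", "C": "club", "P": "park",
          "D": "doctor", "R": "restaurant", "S": "social"}

def parse_requests(line: str):
    """Split a colon-separated request string into (time, place, unit) tuples."""
    out = []
    for key in line.split(":"):
        t, p, n = key[0], key[1], int(key[2])
        assert t in TIMES and p in PLACES and n in (1, 2), key
        out.append((TIMES[t], PLACES[p], n))
    return out
```

For instance, Person ID 20’s request above would appear as the key AD2 in the data file.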
3 Data set for the proof of concept numerical experiments
The proof of concept simulations cover three consecutive days, denoted Monday, Tuesday and Wednesday, involving 1704 visits as requested by 282 people. A day consists of eight two-hour visitation slots available for scheduling the visits. Details of the data, for the purpose of approximate reproduction of results, are as in Table 1. The data is entirely fabricated, but common sense governed its choices: older people carry out relatively fewer outings than the young, and are in a poorer state of health.
The degree of health of a person is a number that ranges from 1 to 10. In any future real-world application of this research, the general health of a person, which is a measure of their immune response to the pandemic and is assumed to abate the probability of hospitalization or death, could be gathered from patient records. As seen in later sections, this level of health combines with infection and plays a pivotal role in determining which solutions are better than others. The idea is to reduce hospitalizations and fatalities.
age  20  30  40  50  60  70  80  total 
number of people  35  65  49  43  27  43  20  282 
average health  9.49  9.08  8.51  7.27  7.86  5.45  4.10  
minimum health  9  8.1  7  5.1  4.2  2.1  1.3  
maximum health  10  10  9.9  9.9  10  9  7  
variance health  0.09  0.29  0.65  2.37  2.29  3.31  3.69  
Monday visits  586  
average  2.74  2.08  2.41  2.23  2.26  1.30  1.2  
minimum  1  1  1  1  1  1  1  
maximum  5  5  4  4  4  3  3  
variance  0.53  0.69  0.65  0.46  0.85  0.30  0.26  
Tuesday visits  572  
average  2.57  2.06  2.37  2.23  2.19  1.26  1.15  
minimum  1  1  1  1  1  1  1  
maximum  4  5  4  4  4  3  2  
variance  0.53  0.67  0.68  0.50  0.89  0.24  0.13  
Wednesday visits  546  
average  2.6  1.88  2.06  2.12  1.85  1.30  1.75  
minimum  1  1  1  1  1  1  1  
maximum  4  4  3  3  4  3  4  
variance  0.47  0.54  0.79  0.47  0.94  0.40  0.69 
4 Infection models used in this proof of concept
The solution must simulate the infection process cumulatively and longitudinally in time, as people go from place to place, in order to optimize the allocation of time slots. This requires as a component a model of COVID-19 infection. The proof of concept develops a model of person-to-person transmission based on simple probabilities of meetings between people in a confined location; it is necessarily a very basic model, but one that illustrates the potential of the solution.
Two models are presented: a partial infection probability developed for this work, and a simple standard probability model. Both are simple and would need to be improved by epidemiologists for real-world application.
4.1 A partial infection probability model
The idea of a ‘partially infected’ individual is developed here because it will only be possible in some average sense to know who might be infected (if we knew precisely who was infected, they could be isolated and the solution would become unnecessary). The idea is to assign a ‘partial infection level’ to all members of a taxonomy class of interest, and then simulate how they infect others and further infect each other.
If an infected individual is in close proximity to a susceptible individual, then the possibility exists of transmission of the disease from the former to the latter. Without loss of generality, the assumption is made that every such encounter results in infection transmission. An infection probability can then be based on a count of such encounters between the susceptible individual and the number of infected who may come into contact with them in a fixed interval of time that denotes the duration of a visit, e.g., an hour or two. More sophisticated relations should consider this time dimension and might model it with Poisson distributions, but such complications are ignored for the purpose of this presentation.
If the location of the encounters is, for example, a store of a certain physical size, divided into a number of sub-locations available to visitors that together comprise the walkable area of that store, sub-units of area small enough that people positioned within one come into close proximity of each other, then a simple probability count over such encounters between the susceptible individual and one or more infected yields the infection probability. A property of this relation is convergence to one as the number of infected grows large, meaning that a non-infected person will surely come into contact with one or more infected at that store. Moreover, doubling the number of infected increases the probability of infection but never doubles it.
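The precise relation does not survive in this copy, but a simple counting form with exactly the stated properties (convergence to one as the number of infected grows, and less-than-doubling when the infected count doubles) is p = 1 − (1 − 1/m)^n for n infected and m sub-locations. The sketch below assumes this form:

```python
def infection_probability(n_infected: int, m_sublocations: int) -> float:
    """Chance that one susceptible shares a sub-location with >= 1 infected,
    assuming each infected occupies one of m sub-locations uniformly at
    random (an assumed form consistent with the properties in the text)."""
    return 1.0 - (1.0 - 1.0 / m_sublocations) ** n_infected
```

Under this form, with 10 sub-locations a single infected person gives probability 0.1, and the probability tends to one as the infected count grows while never doubling when the count doubles.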
In driving a contact-based infection model, simulations must assume that a certain number of persons are infected a priori at the start of the simulation. This presents a challenge because it requires assumptions about who is infected and why, as well as a huge number of stochastic computations and an averaging of results. Each person is different and not a clear member of a taxonomy class (an idea that will not be unfamiliar to mathematical biologists and epidemiologists [taxonomy]); for example, people may be of the same age but of different health levels granting them less resistance to the infection. The computations would therefore necessitate carefully balanced assumptions about who should be assumed to be infected to drive them.
The partial infection model presented here, while not itself a perfect solution, does not have this onerous requirement and simplifies the computational effort for this proof of concept study: it can take an idea of how infected the members of the population are and assign a probability of infection to all members of a taxonomy grouping in order to drive the infection simulation.
The concept of a partially infected individual is a modelling tool. Each person is represented by a vector of size two whose components, recording the degrees of susceptibility and infection, sum to one. For example, a person who is forty percent infected has an infected component of 0.4 and a susceptible component of 0.6, and another who is one percent infected has an infected component of 0.01 and a susceptible component of 0.99. Consider the meeting at the store of a number of persons, some of whom have some degree of partial infection. The resulting infection pressure, that is, the resultant partial infection probability of the encounter brought about by the infection contributions of the partially or fully infected persons, can be obtained for two cases:
is the maximum out of all persons. With the exception of the owner of this maximum, each of the other partially or fully infected persons is multiplied by it and by multiplicative numerical constants soon to be discussed. These product terms are summed to obtain the probability of infection for the encounter. When every participant carries some infection, the number of such product terms is one fewer than the number of participants. However, when fully susceptible persons are present, the maximum is considered to be taken from one of them, and the number of product terms in the sum then equals the number of partially or fully infected participants. A formula to compute the infection pressure, with every participant infected or partially infected, is developed as
(1) 
and for the case with fully susceptible persons present as
(2) 
The constants emanate from simple overlap counts in probability trees, as in Appendix B.2 and this author’s earlier technical communication [inha2020]. The products are deliberately arranged, or ordered, so that the largest constant corresponds to the largest infection contribution. This represents the worst infection case, which subsumes all others.
Once the probability of infection is calculated, the partial infection of all participants in the encounter is updated in readiness for the next visit as follows:
(3) 
4.1.1 A worked out example
As an example, assume and . The four participants are partially infected as follows,
which approaches the situation of three fully infected and one susceptible. Participant has the highest component of S, thus . The probability of infection is computed as
This number is very close to . It is smaller because the infected do not contribute a complete level of infection. The updated partial infections for all four participants in readiness for subsequent encounters are:
4.1.2 A second worked out example
A second example with has seven participants but only three are infected . In this case, and . Assume the infected are as follows:
The probability of infection is computed as
This number approaches since there is a fully infected participant and two other partially infected participants. The updated partial infections for all seven participants in readiness for subsequent encounters are:
4.2 Standard probability model
The infection probability follows a similar simple ‘probability count’ philosophy as in the previous section, but is obtained with a Monte Carlo process that generates the probabilities. Each evaluation involves trials for one fully susceptible and twenty fully infected persons, and it counts the occasions on which the susceptible is co-located with any infected. It does this on 40 separate occasions; once these forty trials are computed for the one susceptible and the twenty infected, the results are as in Figure 2. The procedure checks whether a susceptible and one or more infected shared an encounter among those forty samplings. It is, however, fashioned in a more elaborate way to try to account for differing times spent in different sub-areas.
The random sampling is made to fit into 7200 seconds of time (two hours), but the random number indicating the location is chosen to weight certain locations more heavily. This crudely captures the fact that some areas of the store or establishment are more popular than others, for example browsing a magazine rather than simply walking through some area of the store. Random numbers are clustered to represent three types of area: one where the person spends 2 seconds, another where the person spends 30 seconds, and yet another where the person spends 102 seconds in the selected area. The procedure is outlined in Appendix A.
The constants in the figure are calculated prior to the run and give the probability of a susceptible getting infected depending on how many people in the store are infected. All visits are of the same duration, so no attempt is made to model the probability of infection through time with a Poisson process. This model of infection transmission is once again a very simple one. Beyond a certain number of infected, the probability of infection is assumed to be equal to one. Note that a lower or higher rate of infection can be obtained by changing the sampling from 40 to a smaller or larger number.
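The Monte Carlo estimation of these constants can be sketched as follows. The exact weighting scheme for the 2-, 30- and 102-second areas is not specified, so the popularity weights here are an assumption; only the overall shape (40 sampled instants, co-location counting over repeated trials) follows the text:

```python
import random

# Hedged sketch of the Monte Carlo estimate of the infection constants.
# The dwell times (2, 30, 102 seconds) are folded into per-sub-location
# popularity weights; this particular weight assignment is an assumption.

def estimate_infection_prob(n_infected, n_sublocations=200,
                            samples=40, trials=2000, seed=1):
    rng = random.Random(seed)
    # Some sub-locations are "stickier" (more popular) than others.
    weights = [102 if i % 10 == 0 else 30 if i % 3 == 0 else 2
               for i in range(n_sublocations)]
    hits = 0
    for _ in range(trials):
        met = False
        for _ in range(samples):  # sampled instants within the visit
            s_pos = rng.choices(range(n_sublocations), weights)[0]
            inf_pos = {rng.choices(range(n_sublocations), weights)[0]
                       for _ in range(n_infected)}
            if s_pos in inf_pos:  # susceptible co-located with any infected
                met = True
                break
        hits += met
    return hits / trials
```

The estimated probability rises with the number of infected present, which is the tabulated quantity the run consults.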
At any meeting place and time, three types of person can participate in an encounter: infected, susceptible and immune (recovered). No action is taken if fewer than two people participate, if all participants are immune, infected or susceptible, or, equivalently, if there are no infected or no susceptible present. Otherwise, the aforementioned infection probability is selected for the number of infected, that real number is multiplied by the number of susceptible, and the result is truncated to obtain the integer number that will become infected. In no particular order, that many susceptible are labelled as infected.
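The encounter rules above can be sketched directly; the list-of-states representation and the probability table are illustrative stand-ins for whatever structures the proof-of-concept code uses:

```python
import math

def encounter_step(people, p_table):
    """One meeting at a place/time. `people` is a list of states:
    'I' infected, 'S' susceptible, 'R' immune. `p_table[k]` is the
    pre-computed infection probability for k infected; beyond the
    tabulated range the probability is assumed to be one, per the text.
    Mutates `people` and returns the number of newly infected."""
    if len(people) < 2:
        return 0
    n_inf = people.count('I')
    n_sus = people.count('S')
    if n_inf == 0 or n_sus == 0:  # covers all-immune/all-infected/all-susceptible
        return 0
    p = p_table.get(n_inf, 1.0)
    new_infected = math.floor(p * n_sus)  # truncate to an integer
    # Label that many susceptible as infected, in no particular order.
    changed = 0
    for i, state in enumerate(people):
        if state == 'S' and changed < new_infected:
            people[i] = 'I'
            changed += 1
    return new_infected
```

For example, one infected meeting three susceptible with a tabulated probability of 0.5 yields one new infection (0.5 × 3 truncated).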
5 Solution method
The method adopted to optimize the allocation of requested visits is a Genetic Programming (GP) scheme developed by this author to discover a set of precise numerical constants that serve as coefficients of a polynomial representing the direct solution of the one-dimensional homogeneous convection-diffusion equation [howard:2001:gecco]. This and other publications [Howard:2011convdiff] [Baber:2009:MSG] demonstrate that standard Genetic Programming trees are capable of computing very precise real numbers when and if needed. When GP trees are evaluated, they deliver a vector of real numbers of self-determined, variable length.
Table 2 lists the functions and terminals that comprise the GP tree. They manipulate two pointers into the variable-length output vector of numbers. They also make use of two working memories, two real numbers. Terminals are small and large numbers that become arithmetically manipulated. Certain functions write to the variable-length result vector, shrinking or expanding it as the GP tree evaluates.
GP function name  description  GP function name  description 

Constant(1)  values (127,126,…,127,128)  sConstant(1)  values (0,1,…,255)/255 
AddNumber(2)  = L + R  SubtractNumber(2)  = L  R 
MultiplyNumber(2)  = L * R  DivideNumber(2)  if ( R 1.0e9) R=1 
AverageNumber(2)  = (L + R) / 2  = L / R  
SubRecord(2)  = L  ZeroRecord(2)  = L and increment 
if () decrease  if ()  
(there must be at least one element)  0.00001  
WriteRecord(2)  = R and increment  AddRecord(2)  = L and increment 
if ()  if ()  
= L  and = L  
GetMem1(2)  =  SetMem1(2)  = L and = R 
GetMem2(2)  =  SetMem2(2)  = R and = ( + L)/2 
The result of evaluating a GP tree is a variable-length vector of real numbers. These numbers can be of any size, positive or negative. A subroutine then operates on these numbers to bound them as positive real numbers between 0.0 and 1.0 [howard:2001:gecco]. For example, if a number in the vector is 21.27625 it becomes 0.27625, and if a number is 29.00000 it becomes 0.0001.
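The two worked examples suggest that the bounding subroutine keeps the fractional part and maps whole numbers to a small positive floor value; the treatment of negative inputs (absolute value first) is an assumption in this sketch:

```python
import math

def bound_unit(x: float) -> float:
    """Map any real number into (0, 1], following the examples in the text:
    21.27625 -> 0.27625 (keep the fractional part); 29.0 -> 0.0001
    (a whole number maps to a small positive floor). Taking the absolute
    value of negative inputs first is an assumption."""
    frac = abs(x) - math.floor(abs(x))
    return frac if frac > 0.0 else 0.0001
```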
How is this vector of real numbers used? Consider the 1704 visitation requests of Table 1. The vector of real numbers is usually much smaller. It is consulted from left to right, as when a child reads a word letter by letter. Consider an outing request AD2, e.g., person 20 on Monday in Figure 1 wishing to go to Doctor’s Surgery 2 at any time of the day. The day is divided into eight two-hour slots of time. As each number ranges from 0.0 to 1.0, suppose the number consulted is 0.2763. This number indicates prescription of the third slot of time, since 0.2763 × 8 = 2.21, which rounds up to slot 3. The allocation for that visit complete, the next real number in the variable-length vector is consulted to deal with the next visit request.
As the variable-length vector of numbers is usually shorter than the total number of visits requested, when the last element of the vector is reached the procedure cycles back to the first number and continues until allocations for all 1704 visitations are dealt with, yielding allocations of time as in Figure 1. Excellent solutions are achieved with small to moderately sized vectors.
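The decoding step can be sketched as follows. The mapping of a number to a slot by rounding up its product with the number of available slots matches the 0.2763 → slot 3 example; the slot windows for M/P/N requests (morning = slots 1–2, afternoon = slots 3–5, night = slots 6–8) are inferred from the timings given in the results section and are an assumption:

```python
import math

# Assumed slot windows per time-of-day code; 'A' (anytime) may use all 8.
WINDOWS = {"M": (1, 2), "P": (3, 5), "N": (6, 8), "A": (1, 8)}

def allocate(requests, vector):
    """Cycle through `vector` (one bounded number per request), mapping each
    number to a slot inside the window allowed by the request's code."""
    slots = []
    for i, code in enumerate(requests):
        lo, hi = WINDOWS[code]
        x = vector[i % len(vector)]       # wrap around a short vector
        n = hi - lo + 1
        k = max(1, math.ceil(x * n))      # e.g. 0.2763 * 8 -> 2.21 -> slot 3
        slots.append(lo + min(k, n) - 1)
    return slots
```

With this decoding, a short vector is reused cyclically across all 1704 requests, exactly as described above.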
Practical use of the method may require a small number of additional strategies. On some problems, for a small number of initial generations, the Darwinian fitness is set to the vector size; this stimulates the production of large output vectors. When a desired variable-length output size is reached by all members of the population, the fitness is reset: from then onwards it is the problem’s Darwinian fitness, typically a measure of solution error. The procedure is often unnecessary but can become useful when the required solution complexity or dimension is very high. Once seeded in this way, the solution vector grows or shrinks as crossover and mutation operations on the GP tree create new and improved GP trees.
As the method makes use of GP trees and standard GP, all that we know about standard Genetic Programming, including modularization options such as ADFs [koza:gp2] and subtree encapsulation [roberts:2001:EuroGP], is applicable. As will be revealed in the numerical experiments, the approach works, discovers good solutions, and is quick on a portable computer, the compiled C++ Visual Studio 2019 executable delivering solutions in seconds for the proof of concept experiments.
5.1 Measures of success and error
Regardless of the infection model used, the procedure is similar. There exists a taxonomy of persons by some parameter(s), for example age group. Information about the likely degree of infection for different taxonomy classes is input to the computations. As time progresses and visits take place, people get infected. At the end of the three days, account is taken of who is infected and of their prior health level. This determines how many people in the proof of concept study will become infected a few weeks hence and how many may die. There are many assumptions, but if correctly taken they drive Genetic Programming to discover allocations that reduce hospitalizations and fatalities. Central to Darwinian fitness are such calculations of who, and how many, will perish or fall very ill and use the Intensive Care Unit (ICU).
5.1.1 Darwinian fitness for the partial infection probability model
COVID-19 testing is unlikely to become universal and frequent for all citizens, but limited testing and other measurements are sufficient to indicate the degree of infection likelihood for sectors, or partitions, of the population. The proof of concept uses age as the taxonomy. Figure 3 reveals how this works. If testing reveals that more people of age groups A or B have COVID-19 than of age groups C or D, then percentages are entered at the start of the numerical experiments. For example, 20=0.03; 30=0.01 means that the 20-year-olds have a suspected level of COVID-19 infection of three percent, while the 30-year-olds are suspected to have one percent.
Each simulation proceeds from Monday to Wednesday and, for each day, from morning to night (8 time slots). At each time slot, all twelve establishments are considered and partial infection calculations update the partial infection level of all participants. At the end of each day, a calculation determines who should go into self-isolation and participate no longer in the simulation. The rules identifying those persons are presented on the left side of Table 3.
At the very end of the simulation, another procedure calculates the total number hospitalized in the Intensive Care Unit (ICU): those who eventually recover but who nevertheless put pressure on the health service. The procedure also calculates those who unfortunately pass away. The rules determining these two numbers are presented on the right side of Table 3. Note the assumption made here that both the state of health and the age of the subject correlate similarly but are treated as separate causal factors.
selfisolation conditions  outcome rules  

age  infected  health  age  infected  health  actions  outcomes 
20  0.97  20  0.95  7.0–10.0  none  immune  
0.95–0.97  0.0–7.0  3.0–7.0  ICU  recovered  
0.0–3.0  ICU  death  
30  0.95  30  0.9  8.0–10.0  none  immune  
0.92–0.95  0.0–7.0  4.0–8.0  ICU  recovered  
0.0–4.0  ICU  death  
40  0.92  40  0.85  8.0–10.0  none  immune  
0.87–0.92  0.0–7.0  4.0–8.0  ICU  recovered  
0.0–4.0  ICU  death  
50  0.85  50  0.8  8.0–10.0  none  immune  
0.80–0.85  0.0–7.0  4.0–8.0  ICU  recovered  
0.0–4.0  ICU  death  
60  0.75  60  0.75  9.0–10.0  none  immune  
0.70–0.75  0.0–7.0  5.0–9.0  ICU  recovered  
0.0–5.0  ICU  death  
70  0.65  70  0.7  9.5–10.0  none  immune  
0.60–0.65  0.0–7.0  7.5–9.5  ICU  recovered  
0.0–7.5  ICU  death  
80  0.65  80  0.65  8.5–10.0  ICU  recovered  
0.60–0.65  0.0–7.0  0.0–8.5  ICU  death 
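One reading of the flattened Table 3 for the youngest age group can be sketched as a lookup. This interpretation of the columns (self-isolate above an infection threshold, or in a lower infection band when health is poor; outcomes driven by health bands once sufficiently infected) is an assumption, and the other age groups follow the same pattern with shifted thresholds:

```python
def outcome_age20(infection: float, health: float):
    """Outcome rules for the 20-year-old group as read from Table 3;
    treating infection below the 0.95 threshold as no outcome is an
    assumption about how the table is applied."""
    if infection < 0.95:
        return ("none", "susceptible")
    if health > 7.0:
        return ("none", "immune")
    if health > 3.0:
        return ("ICU", "recovered")
    return ("ICU", "death")

def self_isolates_age20(infection: float, health: float) -> bool:
    """Self-isolation: infection above 0.97, or in 0.95-0.97 with health <= 7."""
    return infection > 0.97 or (0.95 < infection <= 0.97 and health <= 7.0)
```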
The GP Darwinian fitness function is the measure of solution goodness. It combines the ICU count and the death count, weighted by some desirability constant. It is by choice a negative quantity, because the evolution is set to maximize this quantity, with a perfect score of zero implying no hospitalizations and no deaths:
(4) 
All of the numerical results in this proof of concept use a value of the desirability constant reflecting the desire to reduce fatalities.
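Equation (4) does not survive in this copy, but a form consistent with the surrounding text (negative, maximized, zero only when there are no ICU cases and no deaths, deaths weighted by a desirability constant) is assumed in this sketch:

```python
def darwinian_fitness(icu: int, deaths: int, c: float) -> float:
    """Assumed form of the fitness: negative combination of ICU count and
    death count, with deaths weighted by a desirability constant c > 1
    to reflect the stated priority on reducing fatalities."""
    return -(icu + c * deaths)
```

Maximizing this quantity drives the allocations toward fewer hospitalizations and, more strongly, fewer deaths.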
An alternative fitness function (not explored in this proof of concept) might compute the final average rate of infection: e.g., taking the final average probability of infection of the 20- and 30-year-olds, that of the 40-, 50- and 60-year-olds, and that of the 70- and 80-year-olds, a putative fitness measure weights these averages to reflect that the mortality of the young from COVID-19 is low and that of the elderly is very high. However, the small worked example of Figure 4 (and the figures) suffices to show that a lower average partial infection does not necessarily translate into smaller ICU and death counts.
5.1.2 Darwinian fitness for the standard probability model
In this type of model, persons can only become fully infected. Hence, the rules to determine the ICU and death counts are different and apply only to those infected. Additionally, the decision to self-isolate depends on the number of days the person has been infected and on their health level. Note that some of the people in the input data file are already fully infected.
Table 4 shows the decision schema for self-isolating persons as well as for deciding the ICU and death counts after the simulation is complete. Note that the assumption is again made that both the state of health and the age of the subject correlate similarly but are treated as separate causal factors. The fitness measure for these computations is also Equation 4.
only for persons who become infected  
selfisolation conditions  outcome rules  
age  days infected  health  age  health  actions  outcomes 
20  20  none  immune  
ICU  recovered  
ICU  death  
30  30  none  immune  
ICU  recovered  
ICU  death  
40  40  none  immune  
ICU  recovered  
ICU  death  
50  50  none  immune  
ICU  recovered  
ICU  death  
60  60  none  immune  
ICU  recovered  
ICU  death  
70  70  none  immune  
ICU  recovered  
ICU  death  
80  80  ICU  recovered  
ICU  death 
6 Results
The numerical experiments compare the solution to three uninformed round-robin allocations. The first sends all those: (a) requesting ‘morning’ or ‘any time’ visits to the first morning time slot, 8:00–10:00 hrs; (b) requesting ‘afternoon’ visits to the first afternoon slot, 12:00–14:00 hrs; and finally (c) requesting ‘night’ visits to the first night slot, 18:00–20:00 hrs.
The second of these uninformed allocations is a round robin of two. For case (a) it sends half the requests to the 8:00–10:00 hrs slot and the other half to the 10:00–12:00 hrs slot. For case (b) it sends half to the first afternoon slot, 12:00–14:00 hrs, and the other half to the last, 16:00–18:00 hrs. For case (c) it again sends half the requests to the first night slot, 18:00–20:00 hrs, and the other half to the last, 22:00–24:00 hrs.
There is also a third uninformed round-robin allocation for comparison, a round robin of three. However, as the morning has only two time slots, it sends two thirds of the case (a) requests to the first slot and the remaining third to the second. For cases (b) and (c) it uses all three time slots, distributing the visitation requests evenly. Comparison could be made to other allocations, but these are broadly representative of what would transpire in the real world without the benefit of the solution, under partial or no lockdown. As expected, the round robin of one results in the highest number of hospitalizations and deaths for the model problem and the round robin of three in the lowest, as revealed in the comparative experiments presented in this section.
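The three baselines can be sketched as a single cycling allocator. Slot indices follow the timings above (slot 1 = 8:00–10:00 hrs, …, slot 8 = 22:00–24:00 hrs); grouping ‘A’ requests with the morning, and the particular two-thirds split for RR3 mornings, follow the description in the text:

```python
# Candidate slots per preference class for each round-robin baseline.
# RR2 uses first and last slots of the afternoon/night windows; RR3's
# morning list [1, 1, 2] sends two thirds of requests to the first slot.
RR_SLOTS = {
    1: {"M": [1], "A": [1], "P": [3], "N": [6]},
    2: {"M": [1, 2], "A": [1, 2], "P": [3, 5], "N": [6, 8]},
    3: {"M": [1, 1, 2], "A": [1, 1, 2], "P": [3, 4, 5], "N": [6, 7, 8]},
}

def round_robin(requests, k):
    """Cycle each preference class through its RR-k candidate slots."""
    counters = {}
    out = []
    for code in requests:
        slots = RR_SLOTS[k][code]
        i = counters.get(code, 0)
        out.append(slots[i % len(slots)])
        counters[code] = i + 1
    return out
```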
Each experiment follows common GP practice, executing a large number of parallel independent runs (PIRs). Each PIR differs in its initial random seed (taken from the PC clock timer), seeding the population randomly and differently for each PIR. PIRs can have different population sizes of GP trees. All runs use 80 percent crossover and 20 percent mutation to generate new GP trees. This is a steady-state GP with a tournament of four individuals to select a strong mate for crossover, whereby two GP trees swap branches, or the tournament winner simply mutates, and with a kill tournament of two to choose the weaker GP tree to replace. The maximum possible tree size for GP is set at 2000, and if this is exceeded the shorter side of the crossover swap is taken. The maximum variable-length vector size is set at 10,000 but is never remotely approached: excellent solutions have vector sizes of between 10 and 100. It is an ‘elitist’ GP, for it does not destroy the fittest solution; i.e., solutions do not have a lifespan inside a PIR. All PIRs implement standard ‘vanilla’ GP with no parsimony pressure or any other non-standard approach. This proof of concept study does not employ explicit reuse: ADFs [koza:gp2] or subtree encapsulation [roberts:2001:EuroGP].
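The steady-state loop just described (tournament of four for the parent, kill tournament of two, 80/20 crossover/mutation, elitism) can be sketched as follows. A toy real-vector genome stands in for GP trees, so the tree machinery, function set and size limits are not modelled:

```python
import random

def steady_state_gp(fitness, init, pop_size=50, steps=2000, seed=0):
    """Sketch of a steady-state, elitist GP loop with a toy genome."""
    rng = random.Random(seed)
    pop = [init(rng) for _ in range(pop_size)]
    fit = [fitness(g) for g in pop]

    def tournament(size, best=True):
        idx = rng.sample(range(pop_size), size)
        return (max if best else min)(idx, key=lambda i: fit[i])

    for _ in range(steps):
        a = tournament(4)                      # strong parent
        if rng.random() < 0.8:                 # 80% crossover: swap tails
            b = tournament(4)
            cut = rng.randrange(1, len(pop[a]))
            child = pop[a][:cut] + pop[b][cut:]
        else:                                  # 20% mutation
            child = list(pop[a])
            child[rng.randrange(len(child))] += rng.gauss(0, 0.1)
        victim = tournament(2, best=False)     # kill tournament of two
        best_i = max(range(pop_size), key=lambda i: fit[i])
        if victim != best_i:                   # elitism: never kill the fittest
            pop[victim], fit[victim] = child, fitness(child)
    return max(pop, key=fitness)
```

For example, maximizing the negative sum of squares drives the genome toward the zero vector.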
When a better solution emerges during the execution of a PIR, it is separately stored. A PIR typically produces up to around ten such solutions, of which two are highly fit and interesting. As subsequent PIRs produce more solutions, all are ranked by fitness in a global list from which the user can select one to inspect. For each solution, full details can be inspected: the participants in all visitations, infection levels, the numbers self-isolating, in ICU and sadly passed away, as well as the variable-length vector of real numbers that constitutes the solution, the visitation schedule, and the identity, age and level of infection of all participants recovered, in ICU or deceased. For the partial infection experiments it also computes partial infections by group (young, middle-aged and elderly) every four hours. The comprehensive result panes in the figures in this section merit closer inspection. As PIRs accumulate, a Pareto front of non-dominated solutions can emerge involving the two criteria, ICU admissions and deaths.
6.1 Numerical experiments with partial infection
Table 5 shows the clear superiority of the solution over the uninformed round-robin allocation schemes. The superiority is so marked that there is no need to check against other possible random allocation schemes, and it stands to reason that the Genetic Programming solution is always superior.
Note that with more sub-locations the problem becomes easier, because the chances of meetings between people are lower. This holds for both the round-robin allocations and the Genetic Programming solutions. It also appears that the superiority of the Genetic Programming solutions over the round-robin allocations correlates with a higher number of sub-locations, probably because the Genetic Programming has more degrees of freedom with which to discover a better solution. Therefore, even if the intuitive idea is that with a smaller density of people there should be less need for the solution, this is not always the case. Note that in those cases the Genetic Programming managed to discover an allocation with no deaths. Perhaps the counterintuitive conclusion can be drawn that the solution is needed more when the problem seems simpler. Close inspection of the figures referred to in Table 5 reveals that the system of PIRs delivers a number of Pareto non-dominated solutions to choose from. There are also solutions that achieve an identical result in terms of hospitalizations and deaths but which are quite different; one is then able to inspect the age and health of the fatalities, a tertiary factor that could come into play when selecting among the many equivalent solutions. Finally, it can also be discerned that the average partial infection level and its final values, as shown in the figures, do not always correlate with lower error and higher Darwinian fitness.
parameters  , or  fittest GP solutions  
input  s  figures in text  figures in text  
4  70  52  : see [inha2020]  7.70  9  7  Figure 12  
20 = 0.01; 40 = 0.03;  44  23  : Figure 11  7.70  9  7  see [inha2020]  
50 = 0.02  8.15  14  5  see [inha2020]  
6  7  8  : Figure 13  0.0  0  0  Figure 14  
4  72  52  : see [inha2020]  20.05  35  12  Figure 16  
20 = 0.05; 30 = 0.05;  64  41  : see [inha2020]  20.50  40  10  see [inha2020]  
40 = 0.03; 50 = 0.05;  57  34  : Figure 15  
60 = 0.01  6  69  49  : see [inha2020]  1.05  3  0  Figure 18 
52  29  : see [inha2020]  1.65  1  2  see [inha2020]  
32  17  : Figure 17  1.70  3  1  see [inha2020] 
6.2 Numerical experiments with full infection
A different set of computations, further from the real-world problem, is included here for completeness. It pertains to knowing who is infected a priori and optimizing what could have happened had their visits been scheduled differently. This could be useful for a retrospective study. Alternatively, such computations could potentially address the same real-world problem, but only by carrying out a plethora of experiments with different seeds of infected individuals, assumed infected according to some taxonomic knowledge of the level of infection in the population, and then averaging the results in some way to prescribe safer allocations of visits, as an alternative to the experiments with the partial infection model presented in the previous section.
A set of 15 persons, shown in Figure 6, out of the 282 in the proof of concept data described in section 3, come already fully infected a priori to drive the computations^{5}^{5}5Note that in the partial infection model experiments the field ‘Immunity?’ is completely ignored, but not here; its values are: 0 for susceptible; 1 for already infected; and 2 for already immune or recovered.. This is about 5.3 % of all people, and they undertake 105 visits, or 6.2 % of the 1704 total visits that appear in the proof of concept dataset. Individuals with good health levels are chosen as already infected. For completeness, the figure also shows six people who have immunity and therefore cannot become infected in the computations; they are included in the computations but play no active role.
The optimization problem posed to GP is envisaged to be challenging. Hence, the infection model is implemented at different values of the contagion parameter (see section 4.2 and appendix A) to allow the solution room to make gains, but also to understand the dynamics under various levels of contagion (perhaps representing adherence to social distancing and use of masks). There are eight combinations of the events that can befall a person:

person comes to the simulation already infected and does not use the Intensive Care Unit (ICU);

person comes to the simulation already infected and does make use of the ICU (adds to the ICU count);

person comes to the simulation already infected, passes through the ICU or not, and dies (adds to the death count);

person comes to the simulation immune (has had the disease before and has recovered) and stays that way;

person comes to the simulation susceptible and does not get infected;

person comes to the simulation susceptible, gets infected but does not use the ICU;

person comes to the simulation susceptible, gets infected and does make use of the ICU (adds to the ICU count);

person comes to the simulation susceptible, gets infected, goes or not to the ICU, and dies (adds to the death count);
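For illustration, the eight combinations can be encoded as a small classifier. The parameter names and return strings below are assumptions for this sketch, not identifiers from the paper's implementation.

```python
# Illustrative encoding of the eight outcome combinations listed above.

def outcome(initial_state, infected, icu, died):
    """Classify a trajectory; initial_state is 'infected', 'immune' or
    'susceptible'; infected/icu/died say what happened during the run."""
    if initial_state == "immune":
        return "immune throughout"                    # case 4
    if initial_state != "infected" and not infected:
        return "susceptible, never infected"          # case 5
    tag = "already infected" if initial_state == "infected" else "newly infected"
    if died:
        return tag + ", died"       # cases 3 and 8: adds to the death count
    if icu:
        return tag + ", used ICU"   # cases 2 and 7: adds to the ICU count
    return tag + ", no ICU"         # cases 1 and 6

print(outcome("susceptible", True, True, False))   # -> newly infected, used ICU
```

Cases 2, 3, 7 and 8 are the ones that add to the ICU and death tallies used to score solutions.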
Figure 7 illustrates how to interpret the relevant part of the result figures pointed to in Table 6. As there are generally no cases of already infected persons ending in ICU or dying, only six result summaries show, as in the figure.
The results of all numerical experiments that use the full infection model are given in Table 6. The difficulty with this infection model is that it multiplies the probability of infection by the number of susceptible present and then simply truncates the result to the lower integer. Moreover, it simply goes down the list of susceptible, infecting the first ones it sees up to that integer number. This gives the solution opportunities to play games. For example, if there is only one susceptible and nine infected then, for any infection probability below one, the truncation yields zero and the susceptible will not become infected.
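The truncation weakness just described can be sketched as follows; this is an illustrative reconstruction, not the paper's code, and the function and variable names are assumptions.

```python
# Minimal sketch of the full infection model's truncation behaviour: the
# infection probability p is multiplied by the number of susceptible
# present and truncated to the lower integer; the first that many
# susceptible, in list order, become infected.

import math

def infections_at_location(p, susceptible):
    """Return the susceptible who become infected at this location."""
    count = math.floor(p * len(susceptible))
    return susceptible[:count]   # first `count` in list order get infected

# One susceptible among many infected: floor(p * 1) == 0 for any p < 1,
# so the lone susceptible escapes infection entirely.
print(infections_at_location(0.9, ["P1"]))                     # -> []
print(infections_at_location(0.5, ["P1", "P2", "P3", "P4"]))   # -> ['P1', 'P2']
```

This list-order behaviour is what lets the evolved schedules "play games" with the model.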
Notwithstanding the weaknesses of such computations and their erroneous assumptions, the following can be said about the Table 6 figures. As the contagion parameter is increased, the GP search becomes much harder, and the best solutions come from bigger GP populations run for longer (larger numbers of generations). Also, the advantage of the solution over the round robin assignments is less valuable than at lower values. This is to be expected because at high contagion the disease is far more contagious and there is not much room for optimization. Note that at the highest setting all round robin solutions have the same death and ICU counts, possibly indicating little room for schedule optimization by the solution.
parameters  , or  fittest GP solutions  
(Monte Carlo)  figures in text  figures in text  
5  70  53  : see [inha2020]  1.05  3  0  Figure 20 
49  26  : see [inha2020]  1.35  2  1  Figure 21  
31  17  : Figure 19  
10  71  53  : see [inha2020]  21.00  34  14  Figure 23 
69  52  : see [inha2020]  21.30  33  15  see [inha2020]  
64  40  : Figure 22  21.65  34  15  see [inha2020]  
30  71  53  : see [inha2020]  44.60  68  32  Figure 25 
71  53  : see [inha2020]  44.90  67  33  see [inha2020]  
71  53  : Figure 24 
7 Conclusions
Large improvements in both lower mortality and lower use of the ICU are available with the solution for the proof of concept data described in section 3, in computations with both infection models; the comparisons against round robin allocations in Table 5 and Table 6 clearly show it. The two parameters of the respective numerical experiments have a similar but inverse influence: the former gives the number of possible sublocations that can be occupied, while the latter is the number of checks performed inside the Monte Carlo calculation that detects copresence. However, even with few sublocations and many checks, the worst possibilities, the solution keeps outperforming round robin by a significant margin. With many sublocations and few checks the solution really shines and outperforms round robin by a very considerable margin.
It augurs well because, with good cleaning at locations, washing of hands and even moderate social distancing, the rate of infection is expected to be low (corresponding, in a sense, to more sublocations and fewer Monte Carlo checks). Of course, a more serious real-world model needs to be considered in further research. Such a model would need to consider:

is it correct that a difference in infection rates exists between taxa? if so, what is this taxonomy and how to construct it? is it possible to discover this taxonomy through COVID19 testing and other data collection?

is it possible to develop a reasonably accurate infection model for certain stores and places that people wish to, or need to, frequent?

the infection model must also include a dynamic related to object contamination and transmission through contact with surfaces and objects;

travel by public transport to such locations also needs to be accounted for by the model;

estimated time at the location should be taken into consideration, perhaps via a Poisson-distribution-like infection and contamination function;

speed of motion of different types of people through a location of visit may be another characteristic affecting infection rates;

what is the maximum number of requests for outings by people that makes the solution very effective? what level of demand makes it less effective?

is the round robin a fair reflection of how people wish to go out in the unrestricted normal case? are there invariant principles gathered from mobile phone roaming data that could inform how and when people go out?

what is the effect of noncompliance on the solution? could the solution account for it and still be gainful?

the solutions arrange into Pareto nondominated sets involving the two criteria: what is the difference between these sets, and also between equivalent solutions with the same scores? is there a difference between such solutions of further interest?

the solution is designed to work even with very little idea or precision about the rates of infection and contamination as it aims for an improvement rather than precise values. How valid are these assumptions?

the solution outperforms round robin, but how does it fare in terms of infection rate against strict lockdowns, or against exiting lockdown with the phased lockdown strategies suggested in [delockdown]?

can economic, psychological and other benefits of the solution be quantified to understand the cost benefit analysis of adopting it versus strict lockdown policies?

will people be willing to undergo numerous lockdowns waiting for the availability of a vaccine if the solution is adopted?
Genetic Programming is known to scale well with problem size. However, even if millions of people were considered there would be a degree of clustering that could be treated differently by different discovered solutions. The partial infection model is developed here for the first time; if there is something similar in the literature then this author is not aware of it. It must be tested and developed further. It is possible that the pessimistic approach of multiplying the worst-case terms is not entirely reasonable. Lastly, how well infections can be avoided will depend on the number of visitations, the number of establishments visited, the number of people and the frequency of visits. In this research Genetic Programming was handed a tough challenge: the number of visitations was of the order of 2000, the time slots per day were very few, and the number of establishments was small.
In general, it is said that washing one’s hands is far more effective than social distancing; it is probably true that contamination is more important than person to person transmission. Contamination can easily be incorporated into the model. Although the model is incipient, it addresses a question that few, if any, have considered. Most research is involved in exploiting data sources to predict infection levels or explain the disease. This contribution is different in nature because its deployment could generate data and would also need only tendency-of-disease data. It is also markedly different from contact tracing approaches, which are reactive; the work described here and its implementation would be proactive, but it could also inform and be informed by contact tracing.
Appendix A Pseudocode for the Monte Carlo routine
The following pseudocode produces the numbers of Figure 2:
do m, 20 times (iterate over 20 infected)
    nInfection[m] = 0 (counts the encounters for this many infected)
end iteration index m
do 100,000 times (use a big number for good accuracy)
    do m, 20 times
        bMet[m] = false (reset the meeting flags each iteration)
    end iteration index m
    do k, q times (typically q is between 10 and 40)
        do m, 21 times (person 0 is the susceptible, 1 to 20 the infected)
            i = random number; range (1:7200)
            if i < 600 (300 2x2 m2 areas, 2 seconds spent in each: corridors traversed quickly)
                j = i
                trial[k][m] = j/2
            else if i < 2100 (50 2x2 m2 areas, 30 seconds spent in each: shelves)
                j = i - 600
                trial[k][m] = 300 + j/30
            else (50 2x2 m2 areas, 102 seconds spent in each: popular shelves)
                j = i - 2100
                trial[k][m] = 350 + j/102
            end if
        end iteration index m
    end iteration index k
    do k, q times (did infected meet susceptible in any k trial)
        do m, 20 times (check 20 infected)
            if trial[k][m] == trial[k][0] (0 is the susceptible individual)
                do k2, from m to 20 (a meeting with infected m counts for every group size >= m)
                    bMet[k2] = true
                end iteration index k2
            end if
        end iteration index m
    end iteration index k
    do m, 20 times (iterate over 20 infected)
        if bMet[m] == true (was there any meeting?)
            nInfection[m] = nInfection[m] + 1 (adds to the global count)
        end if
    end iteration index m
end Monte Carlo iteration
do m, 20 times (iterate over 20 infected)
    dInfectProbability[m] = ((double)nInfection[m])/100000.0
end iteration index m
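For checking the logic, here is a runnable Python sketch of the routine (not the paper's implementation). It resets the meeting flags every iteration and counts a meeting with infected m towards every group size of at least m; q = 20 and 5,000 iterations are assumptions chosen for speed here, where the text uses q between 10 and 40 and 100,000 iterations.

```python
# Person 0 is susceptible, persons 1..20 infected; probs[m] estimates the
# chance that the susceptible shares an area with at least one of the
# first m infected during a 7200-second visit over 400 areas.

import random

Q = 20            # co-location checks per visit (assumption; text: 10 to 40)
ITERS = 5_000     # Monte Carlo iterations (reduced from the text's 100,000)
N_INFECTED = 20

def area_at_second(i):
    """Map a random second i in [0, 7200) to one of 400 area indices."""
    if i < 600:                          # 300 corridor areas, 2 s each
        return i // 2
    if i < 2100:                         # 50 shelf areas, 30 s each
        return 300 + (i - 600) // 30
    return 350 + (i - 2100) // 102       # 50 popular-shelf areas, 102 s each

def monte_carlo(seed=0):
    rng = random.Random(seed)
    n_infection = [0] * (N_INFECTED + 1)
    for _ in range(ITERS):
        met = [False] * (N_INFECTED + 1)          # reset flags each iteration
        trial = [[area_at_second(rng.randrange(7200))
                  for _ in range(N_INFECTED + 1)] for _ in range(Q)]
        for k in range(Q):
            for m in range(1, N_INFECTED + 1):
                if trial[k][m] == trial[k][0]:    # infected m met the susceptible
                    for m2 in range(m, N_INFECTED + 1):
                        met[m2] = True            # counts for all group sizes >= m
        for m in range(1, N_INFECTED + 1):
            if met[m]:
                n_infection[m] += 1
    return [n / ITERS for n in n_infection]

probs = monte_carlo()
print(probs[1], probs[20])   # probability rises with the number of infected
```

The estimated probabilities are nondecreasing in the number of infected present by construction, mirroring the saturating encounter curve of Figure 2.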
random number  interval  number of areas  visit time  size of area

0–600  600  300  2 secs  2x2 m squares
601–2100  1500  50  30 secs  2x2 m squares
2101–7200  5100  50  102 secs  2x2 m squares
0–7200  7200  400  7200 secs  1600 m2 walk surface
Appendix B Simple infection models for the computations
b.1 When chance encounters underlie infection
When a susceptible individual meets an infected individual there is a possibility of transmission of the disease^{6}^{6}6as contrasted with place contamination and contact with things, and the abating of infection by the washing of hands. This is usually expressed as an infection probability p(n), where n is the number of infected individuals present at the location of the encounters between the susceptible individual and the infected individuals. Sophisticated relations can be determined as a function of the number of encounters between a susceptible and one or more infected that also incorporate time of exposure modelled as a Poisson process [bib:grahammedley]^{7}^{7}7Personal communication: ‘the risk of transmission increases nonlinearly with the number of infected and with time. Suppose I go to the garden centre for one hour and there is one other person, and it gives me a rate, p, of being infected each hour we are there. So the probability of me being infected at the end of the hour is (1 − exp(−p*1)), and if we are there 2 hours: (1 − exp(−p*2)). If there are n people there then the time I spend contacting each person drops, so the risk per person drops. A reasonable assumption is that the risk with n people is [a sublinear function of n]. This gives you a contamination function.’. An important characteristic of such relations is that as the number of infected individuals grows, p(n) increases but not linearly, so that for example one can expect p(2n) < 2p(n).
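The exposure relation quoted in the footnote can be sketched as follows. The exact crowd dependence is elided in the source, which only says the risk grows sublinearly with n, so the square-root growth used here is purely an illustrative assumption.

```python
# Sketch of the footnote's exposure model: with an hourly infection rate p
# near one other person, the probability of infection after t hours is
# 1 - exp(-p*t). With n people present the per-person rate drops; the
# sqrt(n) total rate below is an ASSUMPTION standing in for the source's
# unspecified sublinear function of n.

import math

def infection_probability(p, t, n):
    """Probability of infection after t hours among n infected people."""
    if n == 0:
        return 0.0
    total_rate = p * math.sqrt(n)   # sublinear: doubling n does not double risk
    return 1.0 - math.exp(-total_rate * t)

# Longer exposure and larger crowds raise the risk, but sublinearly:
print(infection_probability(0.1, 1, 1))   # one other person, one hour
print(infection_probability(0.1, 1, 4))   # four people: less than double
```

Any such candidate contamination function can be swapped into the model, as the next paragraph notes.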
This proof of concept assumes a trivially simple infection function based on counting colocations between the susceptible and the infected. Colocations increase with the number of infected, but their ratio to the total possible encounters never exceeds one; that is, as n grows large, p(n) tends to 1. The essence of this behaviour can be crudely emulated with Monte Carlo or, as discussed here, by counting on the simple probability trees of Figure 8. Dependency on time of exposure is ignored by assuming all visits to places are of similar duration. The Genetic Programming approach that uses such infection functions is, however, general and able to incorporate any candidate infection function.
From this figure, first consider the case of two individuals only: P1 and P2. Individual P1 is susceptible while P2 is infected. The number of encounters between these two, the red boxes at the P2 level in the figure, where both individuals share a same location, is four, and the total number of possible outcomes is sixteen. Hence, the opportunity for an encounter taking place, and therefore for infection, assuming each individual has an equal presence at each of the four locations and spends the same amount of time at any location visited, is p(1) = 4/16 = 0.25. Next consider the case where three individuals participate, with P1 susceptible and P2 and P3 both infected. Now we recognize opportunities for all three individuals to exist at the same location, and also opportunities for P1 to share a location with either infected P2 or P3. If we count such opportunities we arrive at p(2) = 28/64 = 0.4375, and we verify that p(2) < 2p(1). For the particular scenario of Figure 8 the coarse emulation leads to a simple relation for the probability of encounter with any number n of infected: p(n) = 1 − (3/4)^n. For rather large n it tends to one.
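The counting argument above can be checked by brute-force enumeration over the four locations; this is a verification sketch, not the paper's code.

```python
# Enumerate all placements of one susceptible (index 0) and n infected
# over 4 locations, and count the placements where the susceptible shares
# a location with at least one infected. The counted ratio matches
# p(n) = 1 - (3/4)**n.

from itertools import product

def encounter_probability(n, locations=4):
    hits = sum(
        1
        for placement in product(range(locations), repeat=n + 1)
        if placement[0] in placement[1:]   # susceptible co-located with an infected
    )
    return hits / locations ** (n + 1)

print(encounter_probability(1))   # 4/16  = 0.25
print(encounter_probability(2))   # 28/64 = 0.4375
```

The enumeration reproduces exactly the 4/16 and 28/64 ratios obtained from the probability trees.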
b.2 Introducing: partially infected people
We now introduce what may be an unusual idea. The real-world objective of this initiative is to leverage COVID19 testing. Imagine that upon testing for COVID19 the incidence is found to be higher in age groups 20 and 50. It is unlikely that COVID19 testing will be undertaken by all people all of the time. Moreover, tests are still unreliable. Hence, we need to work with partial knowledge. An option to explore might be to work directly with the probable levels of infection informed by the testing for different groups of people, organized in some taxonomy: for example, consider the probability of infection to be 5 percent and 8 percent respectively in those age groups and zero in the others^{8}^{8}8without loss of generality, this work assumes that age group confers on persons variable risks of already having the infection.. This motivates the use of a ‘partial infection’, but why? If we knew who was or was not infected we would let them out or keep them in isolation. However, as we cannot test every single person, it is useful to assign to them an uncertainty, a probability level^{9}^{9}9alternatively, fuzzy set membership could be used. In such a case, we must consider that any of the people in Figure 8, namely P1, P2 or P3, may be partially infected (and partially susceptible).
Figure 8
shows the encounters between persons, i.e., two people sharing one location (1, 2, 3 or 4) at the same moment in time, thus coming into contact with each other. We are especially interested in two encounters, P1–P2 and P1–P3, as we can assume that person P1 is susceptible while the other two are infected. Note that under the permutations assigning susceptible and infected roles there is no need to count encounters P2–P3, as these would represent two infected; neither are encounters between two susceptible of interest. Each of P1–P2 and P1–P3 results in 16 encounters, but 4 encounters are shared, so the union of these thirty two encounters, which determine the probability of infection, gives the twenty eight encounters in the figure: only 28 out of the 64 possible combinations of three locations are of interest. This ratio gives a crude probability of infection of 28/64 = 0.4375. We now consider permutations of susceptible and infected components of partially infected persons in such basic probability-tree count calculations, as will become apparent in the examples below^{10}^{10}10Out of interest, the number of non-encounters (three different locations for the three persons) is twenty four: 123, 124, 132, 134, 142, 143, 213, 214, 231, 234, 241, 243, 312, 314, 321, 324, 341, 342, 412, 413, 421, 423, 431, 432..

b.2.1 An example involving three people
As an example, consider the interactions between three people with partial infection:
To arrive at an infection probability we need to consider contributions from the following six possibilities^{11}^{11}11there is no need to consider SSS or III: ISI, IIS, ISS, SII, SSI, SIS. Here are the numerical contributions from each term:
Note that where the encounters overlap, with all three persons in the same location, preference is given to the largest infection; hence the third term, which multiplies the overlapping cases by four (see Figures 8, 9 and 10). The total probability of infection is the maximum of ISI, IIS and SII, or 0.0155. Now we compute the new partial infections for our three participants:
b.2.2 A second example involving three people
Here is a second example, consider the interactions between three people with partial infection:
which approaches the situation of one susceptible person, P3, meeting two infected, P1 and P2. Again, to arrive at an infection probability we consider contributions from the six possibilities ISI, IIS, ISS, SII, SSI and SIS. Here are the numerical contributions from each term:
The total probability of infection is the maximum of ISI, IIS and SII, or 0.4189. This number is close to the pure-encounter value 0.4375. Now we compute the new partial infections for our three participants:
Note that, in spite of P3 already carrying a small probability of infection, its new infection level stays below the pure-encounter value. This is because the others were not one hundred percent infected and the result was close to, but less than, 0.4375.
b.2.3 A third example involving three people
Here is a third example, consider the interactions between three people with partial infection:
which approaches the situation of the interaction of the noninfected with a fully infected person. Again, to arrive at an infection probability we consider contributions from the six possibilities ISI, IIS, ISS, SII, SSI and SIS. Here are the numerical contributions from each term:
The total probability of infection is the maximum of ISI, IIS and SII, or 0.2493. This number is very close to the pure value p(1) = 0.25. Now we compute the new partial infections for our three participants:
Notice that P1 and P3 end with an infection level slightly above this; this is because those people were already one percent infected.
b.2.4 A first example involving four people
An example involving four people is illustrated with the help of Figure 9; consider the interactions between four individuals with partial infection:
Again to arrive at an infection probability we consider contributions from the following: SIII, ISII, IISI, IIIS. Here are the numerical contributions from each term:
The total probability of infection is the maximum of SIII, ISII, IISI and IIIS, or 0.2560. It is a higher value than in the three-person examples because all participants contribute a significant level of infection. Now we compute the new partial infections for our four participants:
b.2.5 A second example involving four people
In this example we approach the situation of four infected persons; consider the interactions between four individuals with partial infection:
Once again we consider contributions from the following: SIII, ISII, IISI, IIIS:
Note from the previous examples that if we simply identify the participant with the highest susceptible component, then that participant indicates the highest contribution. In this case that is P1, so SIII should be highest; it is, at 0.5708. This number is very close to the pure-encounter value 148/256 = 0.578125. It is a high value because all participants contribute a significant level of infection. Now we compute the new partial infections for our four participants:
b.2.6 Algorithm for modelling partially infected people
For n partially infected people meeting at L possible locations, e.g., L = 4, there are n − 1 multiplicative constants. In the first three examples, with n = 3, they are 16 and 12, which sum to 28. In the last two examples, with n = 4, they are 64, 48 and 36, which sum to 148. For n = 2 there is only one constant: 4. The constants then get divided by 4^n (16, 64, 256, …).
Figures 8 and 9 illustrate that the sum of these constants divided by 4^n gives the infection probability for the pure case of an encounter of a fully susceptible individual with n − 1 infected, e.g., 28/64 = 0.4375 and 148/256 = 0.578125. With n = 2 the ratio is 4/16 = 0.25, and the only considerations in such a case are the cases SI and IS for the P1–P2 encounter. A formula to obtain the multiplicative constants c_k, where k = 1, …, n − 1, is:

c_k = 3^(k−1) × 4^(n−k)
This can be understood pictorially at the top right of Figure 10. When division by 4^n is considered, the multiplicative ratios in the above examples, such as 16/64 and 48/256, are given by:

c_k / 4^n = 3^(k−1) / 4^k
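The constants and ratios can be generated programmatically; this sketch, with a hypothetical function name, reproduces the sums quoted in the text under the constant formula c_k = 3^(k−1) × 4^(n−k).

```python
# Generate the multiplicative constants for n partially infected people
# at L = 4 locations. Their sum divided by L**n recovers the
# pure-encounter probability 1 - (3/4)**(n-1).

def constants(n, L=4):
    return [3 ** (k - 1) * L ** (n - k) for k in range(1, n)]

print(constants(3))                        # [16, 12]     -> sum 28,  28/64  = 0.4375
print(constants(4))                        # [64, 48, 36] -> sum 148, 148/256 = 0.578125
print([c / 4 ** 3 for c in constants(3)])  # descending ratios 16/64, 12/64
```

Dividing each constant by 4^n gives the descending factors 1/4, 3/16, 9/64, … independent of n.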
These factors can be seen to be in descending order. If n people participate in an encounter but the number of partially infected people is less than n, then we assume a fully susceptible individual (infection level zero) will meet with the partially infected individuals, and the corresponding constants are used in the formula. The calculation follows this ten step procedure (pseudocode):
1. precompute a large number of factors at the start of the run.  
Arriving at each visit at a location and time:  