diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzedil" "b/data_all_eng_slimpj/shuffled/split2/finalzzedil" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzedil" @@ -0,0 +1,5 @@ +{"text":"\\section{The Anamorphosis Theorem\\label{sec:appendix1}}\nHere we derive a simple and general relation that connects multi-terminal resistance values for an anisotropic conductor and those for a reshaped isotropic conductor of conductivity $\\sigma'=(\\sigma _{xx}\\sigma _{yy})^{1\/2}$. This relation, thereafter referred to as {\\it the anamorphosis (reshaping) theorem}, is valid for homogeneous samples of an arbitrary shape with the contacts placed in an arbitrary fashion, either at the sample boundary or in the interior. The theorem stipulates that the multiterminal resistance values $R_{ij,kl}$, defined as the potential difference on the contacts $i$ and $j$ induced by a unit current fed through contacts $k$ and $l$, remains unchanged upon sample reshaping:\n\\be\\label{eq:R'=R}\nR'_{ij,kl}=R_{ij,kl}\n.\n\\ee\nHere $R_{ij,kl}=V_{ij}\/I_{kl}$, $R'_{ij,kl}=V'_{ij}\/I'_{kl}$ where the unprimed and primed quantities correspond to the original and reshaped sample, respectively. Eq.\\eqref{eq:R'=R} provides a simple and model-independent relation between different transport measurement results. Unlike the van der Pauw theorem\\cite{VanderPauw1958} which connects multiterminal resistance values obtained for the same sample by a permutation of current and voltage contacts, the result in Eq.\\eqref{eq:R'=R} relates the values obtained for different samples albeit with no contact permutation.\n\nTo establish the result in Eq.\\eqref{eq:R'=R}, we recall that the conductivity of an anisotropic 2D conductor, in the absence of magnetic field, is described by a second rank tensor. Without loss of generality, we can pick the axes of our coordinate system to be aligned with the principal conductivity axes, writing\n\\be\n\\sigma = \\left[\\begin{array}{cc}\\sigma _{xx} & 0 \\\\ 0 & \\sigma _{yy} \\end{array}\\right]\n.\n\\ee\nIn these coordinates, the equations describing ohmic transport read\n\\be\\label{eq:j-E}\nj _{x}=\\sigma_{xx} E _{x}\n,\\quad\nj _{y}=\\sigma_{yy} E _{y}\n\\ee\nwhere $\\vec E=(E _{x},E _{y})=-\\nabla\\phi$. 
\nHere the spatial dependence of the current $\\vec j$, the electric field $\\vec E$, and the potential $\\phi$ is obtained from a solution of a boundary value problem posed through the current continuity relation \n\\be\\label{eq:div j=0}\n\\p_i j_i=0\n.\n\\ee\nThis equation, which holds inside the sample material, must be solved together with the boundary condition specifying fixed potential values at the contacts $\\phi_i$, $i=1...4$, and the tangential-current condition $\\vec l\\times \\vec j=0$ at the nonconducting parts of the boundary (here $\\vec l$ is a vector tangential to the boundary).\n\nIn order to map the anisotropic problem to an isotropic one we introduce an anisotropic scaling transformation\n\\be\\label{eq:rescaling}\nx =\\eta x',\\quad\ny=\\eta^{-1} y'\n,\\quad\n\\eta=(\\sigma _{xx}\/\\sigma _{yy})^{1\/4} \n\\ee\nWe note that under this transformation \nthe continuity relation given in \nEq.\\eqref{eq:div j=0} continues to hold provided the current components are redefined as\n\\be\nj _{x}=\\eta j' _{x}\n,\\quad\nj _{y}=\\eta^{-1} j' _{y}\n.\n\\ee\nCrucially, $\\vec j'$ satisfies the current-field relations for an isotropic conductor of conductivity value $\\sigma'$: \n\\be\nj' _{x}= \\sigma' E' _{x}\n,\\quad\nj' _{y}= \\sigma' E' _{y}\n,\\quad\n\\vec E'=(E' _{x},E' _{y})=-\\nabla'\\phi\n\\ee\nwhere \n$\\nabla'=(\\eta^{-1}\\p_{x'},\\eta \\p_{y'})$.\n\nNext, it is straightforward to check that the reshaping transformation in Eq.\\eqref{eq:rescaling} leaves the tangential-current boundary condition $\\vec l\\times \\vec j=0$ unchanged. Indeed, upon rescaling in Eq.\\eqref{eq:rescaling} the tangential vector changes as $\\vec l'=(\\eta l _{x}, \\eta^{-1}l _{y})$, yielding $\\vec l'\\times \\vec j'=0$ (our tangential vectors $\\vec l$ and $\\vec l'$ are not normalized). We therefore conclude that the new transport equations are identical to those for an isotropic material of conductivity $\\sigma'$. \n\nLastly, we note that the definitions of voltages and currents at the contacts remain unchanged upon rescaling: $V'_k=V_k$ and \n\\be\nI'_k=\\int d \\vec l'\\times\\vec j'=\\int d\\vec l\\times\\vec j =I_k\n,\n\\ee \nwhere the integral is taken along the boundary of the $k$th contact before and after rescaling. As a result, the multiterminal resistance values for the transformed geometry, in which both the sample and the contacts are reshaped, are identical to those in the original anisotropic sample:\n\\[\nR'_{ij,kl}=\\frac{V'_{ij}}{I'_{kl}}=\\frac{V_{ij}}{I_{kl}}=R_{ij,kl}\n,\n\\]\nwhich proves the reshaping theorem, Eq.\\eqref{eq:R'=R}.\n\nAs a sanity check, consider a conducting strip of width $W$ parallel to the $x$ axis which carries a constant uniform current $I$. The voltage drop between probes on the strip edge separated by a distance $L$ is then given by $V=(L\/W\\sigma_{xx})I$. After the reshaping transformation, Eq.\\eqref{eq:rescaling}, the new strip width and the new distance between voltage probes are $W'=\\eta W$ and $L'=\\eta^{-1} L$. Accounting for the isotropic conductivity $\\sigma'$ of the reshaped conductor, we find that the voltage remains unchanged upon reshaping:\n\\be\nV'=\\frac{L'}{W'\\sigma'}I=\\frac{L}{W\\sigma_{xx}}I=V\n,\n\\ee\nwhich is in full accord with our anamorphosis theorem.\n\n\nNext, we consider currents and potentials for a nonlocal measurement in the same strip. The corresponding contact placement is illustrated in Fig.\\ref{SI_fig1}. 
Performing the reshaping transformation to a strip with an isotropic conductivity $\\sigma'$, as above, we arrive at a problem which has been solved elsewhere.\\cite{Abanin2011} In this case, the nonlocal voltage induced at a distance $L'$ from the current leads is given by\n\\be\\label{eq:R_NL_2011}\nV=\\frac{\\rho'}{\\pi}\\ln\\lp \\frac{\\cosh\\pi\\frac{L'}{W'}+1}{\\cosh\\pi\\frac{L'}{W'}-1} \\rp I\n\\ee\nwhere $\\rho'=1\/\\sigma'$. This relation can also be written as\n\\be\\label{eq:R_NL_coth}\nV=\\frac{2\\rho' I}{\\pi}\\ln\\coth\\lp\\frac{\\pi L'}{2W'}\\rp\n.\n\\ee\nFor $L'\\gg W'$ this gives\n\\be\nV\\approx \\frac{4\\rho'}{\\pi}e^{-\\pi\\frac{L'}{W'}} I\n.\n\\ee\nThese expressions, according to the relation \\eqref{eq:R'=R} proven above, \nalso describe the nonlocal response in an anisotropic system, provided that the corresponding lengthscales are related via $W'=\\eta W$, $L'=\\eta^{-1}L$. This provides a derivation of the expression for nonlocal resistance used in the main text. \n\n \\begin{figure}[!htb]\n \\includegraphics[width=0.8\\columnwidth]{Strip2}\n \\caption{Schematic of an anisotropic conductor modelled as an infinite strip of width $W$. Principal axes of the conductivity tensor $\\lp\\sigma_{xx},\\sigma_{yy}\\rp$ are aligned with the $x$ and $y$ axes. Current $I$ is applied between source ($S$) and drain ($D$) electrodes. Voltage probes $V_1$ and $V_2$ are placed at a distance $L$ away from the $S-D$ pair.}\n\t\\label{SI_fig1}\n \\end{figure}\n\n\\section{Direct derivation of the nonlocal response for a strip geometry\\label{sec:appendix2}}\n\nHere we derive the nonlocal response for a strip geometry by a direct method that does not rely on the reshaping theorem. We proceed by combining the current-field relations, Eq.\\eqref{eq:j-E}, with the continuity equation $\\p_i j_i=0$, which gives an anisotropic Laplace's equation for the potential \n\\be\\label{eqn:laplace_anisotropic}\n\\lp \\sigma_{xx}\\frac{\\partial^2 }{\\partial x^2}+\\sigma_{yy}\\frac{\\partial^2 }{\\partial y^2} \\rp \\varphi(x,y)= 0\n\\ee\nThis equation must be solved in the strip \n\\be\\label{eq:strip}\n-\\infty<x<\\infty\n,\\quad\n0<y<W\n,\n\\ee\nsubject to the boundary conditions discussed in Sec.\\ref{sec:appendix1}. Inverting the resulting expression for the nonlocal resistance in favor of the anisotropy leads to the Lambert function $\\mathbb{W}_0$ (the subscript $0$ denotes the principal real branch of the function). The Lambert function is easily accessible from most mathematics software products, including OriginPro\\footnote{In OriginPro the Lambert \nfunction can be accessed as described at the following link: http:\/\/www.originlab.com\/doc\/LabTalk\/ref\/LambertW-func}, MATLAB\\footnote{A good reference on\nhow to use the Lambert \nfunction in MATLAB products is provided here: http:\/\/uk.mathworks.com\/help\/symbolic\/lambertw.html} and Wolfram Mathematica\\footnote{For a description of $\\mathbb{W}_0$ and its use in Mathematica products, see http:\/\/mathworld.wolfram.com\/LambertW-Function.html}.\nThis procedure gives two independent estimates of the anisotropy value $A=\\sigma_{xx}\/\\sigma_{yy}$. Notably, despite the fact that in our measurements $R_{\\text{nl}}^{(x)}$ and $R_{\\text{nl}}^{(y)}$ typically differ by orders of magnitude, these Lambert-function relations yield nearly identical $\\gamma$ values for the Hall bars oriented along different crystal axes on the same device. Similar values are also found for measurements on different devices. 
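\nThese formulas are straightforward to evaluate numerically. The snippet below is a minimal Python sketch (not part of the original analysis; the conductivity values and geometry are hypothetical) that evaluates the nonlocal resistance of Eq.\\eqref{eq:R_NL_coth} for the reshaped strip and illustrates the principal real branch $\\mathbb{W}_0$, which in Python is provided by scipy.special.lambertw:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import lambertw\n\n# hypothetical principal conductivities and strip geometry\nsigma_xx, sigma_yy = 4.0, 1.0\nL, W = 3.0, 1.0\n\neta = (sigma_xx / sigma_yy) ** 0.25         # rescaling factor of Eq. (eq:rescaling)\nrho_p = 1.0 / np.sqrt(sigma_xx * sigma_yy)  # rho' = 1/sigma'\nLp, Wp = L / eta, W * eta                   # reshaped lengths L' = L/eta, W' = eta W\n\n# nonlocal resistance per unit current, Eq. (eq:R_NL_coth)\nR_nl = (2 * rho_p / np.pi) * np.log(1.0 / np.tanh(np.pi * Lp / (2 * Wp)))\nprint(R_nl)\n\n# principal real branch W_0 inverts u*exp(u) = v, i.e. u = W_0(v)\nv = 2.5\nu = lambertw(v, k=0).real\nprint(np.isclose(u * np.exp(u), v))         # True\n\\end{verbatim}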
\n\n\\bibliographystyle{artemnl}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\t\n\tThe advancements in the development of reliable and low-cost sensors, capable of measuring different structural response quantities (e.g. accelerations, displacements, strains, temperatures, loads), have led to vast scientific and practical developments in the field of Structural Health Monitoring (SHM) over the last four decades \\cite{Worden_introduction}. Techniques for processing the raw measurement data and obtaining indicators of structural ``health\" have been made readily available \\cite{Limongelli}. However, despite the advancements in the field, SHM still remains predominantly applied within the research community \\cite{Worden_book} and has not yet translated to extensive application on real-world structures and infrastructure systems. One main reason for this is that the effect and the potential benefit from the use of SHM systems can only be appraised on the basis of the decisions that are triggered by monitoring data. Key open-ended questions include \\cite{VOI_ID}: How can information obtained from an SHM system provide optimal decision support? What is the Value of Information (VoI) from SHM systems? How can it be maximized?\n\t\n\tPreposterior Bayesian decision analysis can be employed as a formal framework for quantifying the VoI \\cite{Raiffa}; it adequately incorporates the uncertainties related to the structural performance and the associated costs, the monitoring measurements, etc. A VoI analysis provides the necessary mathematical framework for quantifying the benefit of an SHM system prior to its installation. In the civil and infrastructure engineering context, the computation of the VoI has mainly been considered in relation to optimal inspection planning for deteriorating structural systems \\cite{Straub_Faber, Jesus, Vereecken}. Recent works \\cite{Pozzi, Zonta, Straub_VOI, Thons, Konakli, Andriotis, Zhang, Iannacone} use the VoI concept in an attempt to quantify the value of SHM on idealized structural systems within a Bayesian framework. All works to date, however, adopt rather simplified assumptions regarding the type of information offered by the SHM system. They thus largely rely on hypothetical likelihood functions or observation models, which render these demonstrations, although insightful, not easily transferable to realistic applications. A first attempt towards modeling the entire SHM process and the monitoring information has been made by the authors in \\cite{Kamariotis}, which is formalized and extended herein.\n\t\n\tInstallation of a continuous monitoring system on a structure allows for continuous measurement of the dynamic response of the structure (e.g. accelerations, strains). During operation, a precise measurement of the acting loads, which are usually distributed along the structure (e.g. wind, traffic), is a challenging task. Output-only operational modal analysis (OMA) \\cite{SSI, Au_OMA} techniques have been developed to alleviate the burden of the absence of acting load measurements. Using an OMA procedure, one can identify the system eigenfrequencies and mode shapes of typical structures excited by unmeasured ambient (broadband) loads. This is beneficial, since the operation of the structure is not obstructed, as it would be in the case of forced vibration testing. 
\n\t\n\tFurther to data acquisition and system identification, model updating forms a popular subsequent step toward modeling the system performance on the basis of the monitoring information. This process is also referred to as establishing a digital twin via model updating \\cite{Wright}. Bayesian model updating (BMU) using identified modal data has proved successful in identifying damage on a global or local level within a structure \\cite{Vanik, Papadimitriou, Beck_MCMC, Simoen, Yuen, Behmanesh}. These methods hold significant promise for application with actual full-scale structures \\cite{Ntotsios, Moaveni, Argyris}. The vast majority of studies are focused on investigating how the BMU framework performs in detecting, localizing and quantifying different types of artificially created damage given some fixed set of modal data. A few recent studies are concerned with BMU using vibrational data obtained in a continuous fashion from SHM systems \\cite{Behmanesh,Simoen_progressive, Ierimenti}. However, no studies are available that systematically quantify the benefit of BMU using continuous SHM data towards driving optimal, informed maintenance decision-making.\n\t\n\tThis work embeds a sequential implementation of the BMU framework within a preposterior Bayesian decision analysis, to quantify the VoI from long-term vibrational data obtained from an SHM system. We employ a numerical benchmark for continuous monitoring under operational variability \\cite{bench19} to test and demonstrate the approach. The numerical benchmark serves as a tool to create continuous reference monitoring data from a two-span bridge system subject to different types (scour, corrosion) of deterioration at specific hotspots over its lifespan. The benchmark is used as a simulator for extracting dynamic response data, i.e. simulated measurements (accelerations), corresponding to a typical deployment of accelerometers on the structure. Acceleration measurements are provided as input to an output-only OMA algorithm, which identifies the system's modal characteristics. We implement Bayesian model and structural reliability updating methods in a sequential setting for incorporating the continuous OMA-identified modal data within a decision-making framework. This proposed procedure follows the roadmap to quantifying the benefit of SHM presented in \\cite{VOI_ID}. We employ a simple heuristic-based approach for the solution of the life-cycle optimization problem in the preposterior Bayesian decision analysis. The resulting optimal expected total life-cycle costs are computed in the preposterior case, and compared against the optimal expected total life-cycle costs obtained in the case of only prior knowledge, thus enabling the quantification of the VoI of SHM.\n\t\n\t\\section{VoI from SHM analysis}\n\t\\label{sec:VoI_workflow}\n\tThe monitoring of a structural system through deployment of an appropriately designed SHM system is a viable means to support decision-making related to infrastructure maintenance actions. But is gathering this information worth it? Preposterior Bayesian decision analysis provides the necessary formal mathematical framework for quantifying the VoI of an SHM system. A concise representation of such an analysis with the use of an influence diagram (ID) has been introduced in \\cite{VOI_ID}. 
An adaptation of this ID for the purposes of the VoI analysis that we propose and apply to a simulated SHM benchmark study in this paper is offered in Figure \\ref{fig: framework}.\n\t\\begin{figure}[ht]\n\t\t\\centerline{\n\t\t\t\\includegraphics[width=0.9\\textwidth]{figures\/VoI_SHM_ID.pdf}\n\t\t}\n\t\t\\caption{Influence diagram of the SHM process for a preposterior Bayesian decision analysis to quantify the VoI.}\n\t\t\\label{fig: framework}\n\t\\end{figure}\n\t\n\tInfluence diagrams build upon Bayesian networks (BN), which offer a concise graphical tool to model Bayesian probabilistic inference problems, and extend these through the addition of decision and utility nodes to model decision-making under uncertainty \\cite{Jensen}. \n\tIn the ID of Figure \\ref{fig: framework}, green oval nodes model uncertain parameters and models\/processes related to the structural system, the orange square node models the decision on the SHM system, while the orange oval node models the monitoring data that is extracted via the use of a specific SHM system. This data can be used to learn the structural condition via Bayesian updating to then inform the decision on maintenance\/repair actions (red square node). Finally, the grey diamond-shaped nodes represent the different costs that enter into the process. The box [t+1] shows that this ID represents a decision process over the lifetime of the structure. The blue text bubbles introduce the different computational methods that are incorporated in the different parts of the process. The large number of these bubbles highlights the modeling and computational challenges associated with a full VoI analysis.\n\t\n\tIn this paper, for the first time in the existing literature, we avoid overly simplistic assumptions in key parts of the modeling of the preposterior Bayesian decision analysis for quantifying the VoI from SHM, although some parts of the process are still modeled in a simplified way. The main contribution lies in the modeling of the SHM data. As can be seen in Figure \\ref{fig: framework}, we employ continuous SHM information over the lifetime of a deteriorating structural system in the form of acceleration time series, which are subsequently processed by an OMA procedure that identifies the modal eigenfrequencies and mode shapes. These SHM modal data are then used within a BMU procedure to sequentially identify the structural condition (see Section \\ref{sec:Bayes}). The way in which the SHM data sets are sampled within a preposterior Bayesian decision analysis with the use of the benchmark structural model is described in detail in Section \\ref{subsec: Synthetic}. We treat the modeling of the structural performance node of the ID, as well as the incorporation of the monitoring information within a reliability updating, in a realistic and computationally efficient manner (see Section \\ref{sec: SR}). To provide a computationally viable solution to the VoI analysis, we adopt a rather simplified modeling of the action decision node, and we perform the life-cycle optimization with the use of heuristics (see Section \\ref{sec:LCC}).\n\t\n\tThe solution of the preposterior Bayesian decision analysis leads to monitoring-informed optimization of the repair action, which in turn leads to the computation of the optimal expected total life-cycle cost in the case of having an SHM system installed. If the adopted SHM strategy is to implement no SHM system, then life-cycle optimization is conducted on the basis of prior information only. 
The VoI is then implicitly quantified as the difference between the optimal expected total life-cycle costs in the prior and preposterior cases.\n\t\n\t\\section{Bayesian model updating}\n\t\\label{sec:Bayes}\n\tIn this section, the Bayesian model updating framework with the use of OMA-identified modal data is presented. The Bayesian formulation presented here corresponds to the state-of-the-art formulation \\cite{Simoen, Yuen, Moaveni}.\n\t\n\t\\subsection{Bayesian formulation}\n\tWe consider deterioration that leads to local stiffness reductions. The random variables (RVs) describing the uncertainty within the employed deterioration models are $\\boldsymbol{\\theta} \\in {\\rm I\\!R}^d$, with $d$ being the total number of RVs. The goal of the Bayesian inverse problem is to infer the deterioration model parameters $\\boldsymbol{\\theta}$ given noisy OMA-identified modal data. These are the modal eigenvalues $\\widetilde{\\lambda}_m = (2\\pi \\widetilde{f}_m)^2$, which can be identified quite accurately, and\/or mode shape vector components $\\boldsymbol{\\widetilde{\\Phi}}_{m} \\in {\\rm I\\!R}^{N_{s}}$ at the $N_s$ degrees of freedom (DOF) that correspond to the sensor locations, where $m=1,...,N_m$, with $N_m$ being the number of identified modes. An accurate identification of the mode shape displacements requires the deployment of a relatively large number of sensors. Given a fairly good representation of the mode shape vector, one can then derive other modal characteristics, such as the mode shape curvatures $\\boldsymbol{\\widetilde{K}}_{m} \\in {\\rm I\\!R}^{N_{s}}$, which have been shown to be more sensitive to local damage \\cite{Pandey}. If only the eigenvalue data is at hand, damage can be detected on a global level, while damage localization requires the existence of spatial information, in the form of mode shape (or mode shape curvature) data.\n\n\tConsider a linear finite element (FE) model, which is parameterized through the parameters $\\boldsymbol{\\theta}$ of the deterioration models. The goal of the Bayesian probabilistic framework is to estimate the parameters $\\boldsymbol{\\theta}$, and their uncertainty, such that the FE model predicted modal eigenvalues $\\lambda_{m}(\\boldsymbol{\\theta})$ and mode shapes $\\boldsymbol{\\Phi}_{m}(\\boldsymbol{\\theta})$, or mode shape curvatures $\\boldsymbol{K}_{m}(\\boldsymbol{\\theta})$, best match the corresponding SHM modal data.\n\t\n\tUsing Bayes' theorem, the posterior probability density function $\\pi_{\\text{pos}}$ of the deterioration model parameters $\\boldsymbol{\\theta}$ given an identified modal data set $[\\boldsymbol{\\widetilde{\\lambda}},\\boldsymbol{\\widetilde{\\Phi}}]$ is computed via equation (\\ref{Bayes}); it is proportional to the likelihood function $L(\\boldsymbol{\\theta}; \\boldsymbol{\\widetilde{\\lambda}},\\boldsymbol{\\widetilde{\\Phi}})$ multiplied by the prior PDF of the model parameters $\\pi_{\\text{pr}}(\\boldsymbol{\\theta})$. 
The proportionality constant is the so-called model evidence $Z$ and requires the solution of a $d$-dimensional integral, shown in equation (\\ref{Evidence}).\n\t\\begin{equation}\n\t\t\\pi_{\\text{pos}}(\\boldsymbol{\\theta} \\mid \\boldsymbol{\\widetilde{\\lambda}},\\boldsymbol{\\widetilde{\\Phi}}) \\propto L(\\boldsymbol{\\theta}; \\boldsymbol{\\widetilde{\\lambda}},\\boldsymbol{\\widetilde{\\Phi}}) \\pi_{\\text{pr}}(\\boldsymbol{\\theta})\n\t\t\\label{Bayes}\n\t\t\\end{equation}\n\t\\begin{equation}\n\t\tZ= \\int_{\\Omega_{\\boldsymbol{\\theta}}}L(\\boldsymbol{\\theta}; \\boldsymbol{\\widetilde{\\lambda}},\\boldsymbol{\\widetilde{\\Phi}}) \\pi_{\\text{pr}}(\\boldsymbol{\\theta}) d\\boldsymbol{\\theta}\n\t\t\\label{Evidence}\n\t\t\\end{equation}\n\t\n\tThe model updating procedure contains significant uncertainties, which should be taken into account within the Bayesian framework. According to \\cite{Simoen}, these are classified into i) measurement uncertainty, including random measurement noise and variance or bias errors induced in the SSI procedure, and ii) model uncertainty. In \\cite{Behmanesh} the existence of inherent variability emerging from changing environmental conditions is highlighted. The combination of all the above uncertainties is called the total prediction error in the literature \\cite{Simoen, Behmanesh}. In order to construct the likelihood function, the eigenvalue and mode shape (similarly for mode shape curvature) prediction errors for a specific mode $m$ are defined as in equations (\\ref{eigenvalue_error}) and (\\ref{modeshape_error}).\n\t\\begin{equation}\n\t\t\\eta_{\\lambda_{m}} = \\widetilde{\\lambda}_{m} - \\lambda_{m}(\\boldsymbol{\\theta}) \\in {\\rm I\\!R}\n\t\t\\label{eigenvalue_error}\n\t\t\\end{equation}\n\t\\begin{equation}\n\t\t\\boldsymbol{\\eta}_{\\boldsymbol{\\Phi}_m} = \\gamma _{m} \\boldsymbol{\\widetilde{\\Phi}}_{m} - \\boldsymbol{\\Phi}_{m}(\\boldsymbol{\\theta}) \\in {\\rm I\\!R}^{N_{s}}\n\t\t\\label{modeshape_error}\n\t\t\\end{equation}\n\twhere $\\gamma_{m}$ is a normalization constant, which is computed as in equation (\\ref{gamma}). $\\boldsymbol{\\Gamma}$ is a binary matrix for selecting the FE degrees of freedom that correspond to the sensor locations.\n\t\\begin{equation}\n\t\t\\gamma_{m} = \\frac{{\\boldsymbol{\\widetilde{\\Phi}}_{m}}^T \\boldsymbol{\\Gamma}\\boldsymbol{\\Phi}_{m}}{\\left \\| \\boldsymbol{\\widetilde{\\Phi}}_{m} \\right \\|^2}\n\t\t\\label{gamma}\n\t\t\\end{equation}\n\tThe probabilistic model of the eigenvalue prediction error is a zero-mean Gaussian random variable with a standard deviation assumed to be proportional to the measured eigenvalues:\n\t\\begin{equation}\n\t\t\\eta_{\\lambda_{m}} \\sim \\mathcal{N}\\left(0, c_{\\lambda m}^2 \\widetilde{\\lambda}_{m}^2 \\right)\n\t\t\\label{eigenvalue_error_std}\n\t\t\\end{equation}\n\tAll the $N_s$ mode shape prediction error components in the vector $\\boldsymbol{\\eta}_{\\boldsymbol{\\Phi}_m}$ are assigned a zero-mean Gaussian random variable with the same standard deviation, assumed proportional to the $L_2$-norm of the measured mode shape vector. 
A multivariate Gaussian distribution is used to model this error:\n\t\\begin{equation}\n\t\t\\begin{split}\n\t\t&\\boldsymbol{\\eta}_{\\boldsymbol{\\Phi}_m} \\sim \\mathcal{N}(\\boldsymbol{0}, \\boldsymbol{\\Sigma}_{\\boldsymbol{\\Phi}_{m}}) \\\\\n\t\t\\boldsymbol{\\Sigma}_{\\boldsymbol{\\Phi}_{m}} &= \\text{diag}\\left(c_{\\Phi m}^2 \\left \\| \\gamma_{m} \\boldsymbol{\\widetilde{\\Phi}}_{m} \\right \\|^2\\right)\n\t\t\\end{split}\n\t\t\\end{equation}\n\tThe factors $c_{\\lambda m}$ and $c_{\\Phi m}$ can be regarded as assigned coefficients of variation, and their chosen values reflect the total prediction error. In practical applications, usually very little (if anything) is known about the structure or the magnitude of the total prediction error. At the same time, even if the assumption of an uncorrelated zero-mean Gaussian model for the errors has computational advantages and can be justified by the maximum entropy principle, the choice of the magnitude of the factors $c_{\\lambda m}$ and $c_{\\Phi m}$ clearly affects the results of the Bayesian updating procedure. It appears that most published works do not properly justify this particular choice of the magnitude of the error. \n\t\n\tAssuming statistical independence among the $N_m$ identified modes, the likelihood function for a given modal data set can be written as in equation (\\ref{likelihood}).\n\t\\begin{equation}\n\t\tL\\left(\\boldsymbol{\\theta}; \\boldsymbol{\\widetilde{\\lambda}}, \\boldsymbol{\\widetilde{\\Phi}}\\right)= \\prod_{m=1}^{N_m}\\mathcal{N}\\left(\\eta_{\\lambda_{m}}; 0, c_{\\lambda m}^2 \\widetilde{\\lambda}_{m}^2 \\right)\n\t\t\\mathcal{N}(\\boldsymbol{\\eta}_{\\boldsymbol{\\Phi}_m} ; \\boldsymbol{0}, \\boldsymbol{\\Sigma}_{\\boldsymbol{\\Phi}_{m}})\n\t\t\\label{likelihood}\n\t\t\\end{equation}\n\tThe benefit of SHM is that the sensors can provide data in a continuous fashion, therefore resulting in an abundance of measurements received almost continually. Assuming independence among $N_t$ modal data sets obtained at different time instances, the likelihood can be expressed as: \n\t\\begin{equation}\n\t\tL\\left(\\boldsymbol{\\theta}; \\boldsymbol{\\widetilde{\\lambda}}_{1}...\\boldsymbol{\\widetilde{\\lambda}}_{N_t},\\boldsymbol{\\widetilde{\\Phi}}_{1}...\\boldsymbol{\\widetilde{\\Phi}}_{N_t} \\right) = \\prod_{t=1}^{N_t}\\prod_{m=1}^{N_m}\\mathcal{N}\\left(\\widetilde{\\lambda}_{t_m}- \\lambda_{t_m}(\\boldsymbol{\\theta}); 0, c_{\\lambda m}^2 \\widetilde{\\lambda}_{t_m}^2\\right)\n\t\t\\mathcal{N}\\left(\\gamma _{t_m} \\boldsymbol{\\widetilde{\\Phi}}_{t_m} - \\boldsymbol{\\Phi}_{t_m}(\\boldsymbol{\\theta}); \\boldsymbol{0}, \\boldsymbol{\\Sigma}_{\\boldsymbol{\\Phi}_{t_m}} \\right)\n\t\t\\label{time_likelihood}\n\t\t\\end{equation}\n\twhere the index $t_m$ indicates the modal data of mode $m$ identified at time instance $t$. The formulation in equation \\eqref{time_likelihood} allows for sequential implementation of the Bayesian updating process.\n\tAt any time step $t_i$ when new data becomes available, the distribution of the parameters given all the data up to time $t_i$, $\\pi_{\\text{pos}}(\\boldsymbol{\\theta} \\mid \\boldsymbol{\\widetilde{\\lambda}}_{1:i},\\boldsymbol{\\widetilde{\\Phi}}_{1:i})$, or the one-step-ahead predictive distributions for time $t_{i+1}$ can be obtained. The inclusion of data in a continuous fashion can increase the level of accuracy of the Bayesian model updating procedure. However, one should be aware that the assumption of independence in equation \\eqref{time_likelihood} typically does not hold. This could be addressed by a hierarchical modeling of $\\boldsymbol{\\theta}$ \\cite{Behmanesh}.
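\n\t\n\tTo make the structure of the likelihood concrete, the following minimal Python sketch (a simplified stand-in, not the implementation used in this work) evaluates the logarithm of equation (\\ref{likelihood}) for a single modal data set. The function model is a hypothetical placeholder for the parameterized FE model or its surrogate, assumed to return the predicted eigenvalues and mode shape components at the sensor DOFs, i.e. with the selection matrix $\\boldsymbol{\\Gamma}$ already applied; summing the returned value over the $N_t$ data sets gives the logarithm of equation (\\ref{time_likelihood}).\n\\begin{verbatim}\nimport numpy as np\n\ndef log_likelihood(theta, lam_t, Phi_t, model, c_lam=0.02, c_phi=0.02):\n    # lam_t: (N_m,) identified eigenvalues; Phi_t: (N_m, N_s) mode shapes\n    lam, Phi = model(theta)\n    logL = 0.0\n    for m in range(len(lam_t)):\n        # eigenvalue term: zero-mean Gaussian with std c_lam * lam_t[m]\n        s_lam = c_lam * lam_t[m]\n        logL += -0.5 * np.log(2 * np.pi * s_lam ** 2) - 0.5 * ((lam_t[m] - lam[m]) / s_lam) ** 2\n        # mode shape term with the normalization constant gamma_m\n        gam = (Phi_t[m] @ Phi[m]) / np.linalg.norm(Phi_t[m]) ** 2\n        eta = gam * Phi_t[m] - Phi[m]\n        s2 = c_phi ** 2 * np.linalg.norm(gam * Phi_t[m]) ** 2\n        logL += -0.5 * len(eta) * np.log(2 * np.pi * s2) - 0.5 * (eta @ eta) / s2\n    return logL\n\\end{verbatim}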
\n\t\n\t\\subsection{Solution methods}\n\t\\label{subsec: Bayesian solution}\n\tThe solution of the Bayesian updating problem in the general case involves the solution of the $d$-dimensional integral for the computation of the model evidence. Analytic solutions to this integral are available only in special cases, otherwise numerical integration or sampling methods are deployed. The two solution methods that we employ within this work are the Laplace asymptotic approximation and an adaptive Markov Chain Monte Carlo (MCMC) algorithm.\n\t\\subsubsection{Laplace approximation}\n\t\\label{subsubsec: Laplace}\n\tA detailed presentation of this method can be found in \\cite{Katafygiotis, Papadimitriou}. The main idea is that for globally identifiable cases \\cite{Katafygiotis}, and for a large enough number of experimental data, the posterior distribution can be approximated by a multivariate Gaussian distribution $\\mathcal{N}(\\boldsymbol{\\mu}, \\boldsymbol{\\Sigma})$. The mean vector $\\boldsymbol{\\mu}$ is set equal to the most probable value, or maximum a posteriori (MAP) estimate, of the parameter vector, which is obtained by minimizing the negative log-posterior:\n\t\\begin{equation}\n\t\t\\boldsymbol{\\mu} = \\boldsymbol{\\theta}_{MAP} = \\underset{\\boldsymbol{\\theta}}{\\argmin}(-\\operatorname{ln}\\pi_{\\text{pos}}(\\boldsymbol{\\theta} \\mid\\boldsymbol{\\widetilde{\\lambda}},\\boldsymbol{\\widetilde{\\Phi}})) = \\underset{\\boldsymbol{\\theta}}{\\argmin}(-\\operatorname{ln}L(\\boldsymbol{\\theta}; \\boldsymbol{\\widetilde{\\lambda}},\\boldsymbol{\\widetilde{\\Phi}}) -\\operatorname{ln}\\pi_{\\text{pr}}(\\boldsymbol{\\theta}))\n\t\t\\label{MAP}\n\t\t\\end{equation}\n\tand the covariance matrix $\\boldsymbol{\\Sigma}$ is equal to the inverse of the Hessian of the negative log-posterior evaluated at the MAP estimate. When new data becomes available, the new posterior distribution has to be approximated. The MAP estimate of the previous time step is used as the initial point for the optimization at the current time step, to facilitate a faster convergence of the optimization algorithm.\n\t\n\t\\subsubsection{MCMC sampling}\n\tFor more accurate estimates of the posterior distributions than those obtained with the Laplace approximation, one can resort to MCMC sampling methods. Among the multiple available MCMC algorithms, here we employ the adaptive MCMC algorithm from \\cite{Haario}, in which the adaptation is performed on the covariance matrix of the proposal PDF. Whenever new data becomes available, the MCMC algorithm has to be rerun to obtain the new posterior distribution. The posterior mean of the parameters estimated via MCMC at the previous time step is used as the seed of the new Markov chain, which allows the chain to converge faster.
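\n\t\n\tAs an illustration of the first of the two solution methods, the following minimal Python sketch (with hypothetical helper names) computes the Laplace approximation: the MAP estimate is obtained with a general-purpose optimizer, and the covariance matrix is taken as the inverse of a finite-difference Hessian of the negative log-posterior at the MAP. The callable \\texttt{neg\\_log\\_post} is assumed to be assembled from the likelihood and the prior of the previous section; for the sampling-based alternative, the adaptive MCMC algorithm of \\cite{Haario} is used in its place.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import minimize\n\ndef laplace_posterior(neg_log_post, theta_init, h=1e-4):\n    res = minimize(neg_log_post, theta_init, method='Nelder-Mead')\n    mu = res.x                    # MAP estimate, Eq. (MAP)\n    d = len(mu)\n    I = np.eye(d) * h\n    H = np.zeros((d, d))          # Hessian of the negative log-posterior\n    for i in range(d):\n        for j in range(d):\n            H[i, j] = (neg_log_post(mu + I[i] + I[j]) - neg_log_post(mu + I[i] - I[j])\n                       - neg_log_post(mu - I[i] + I[j]) + neg_log_post(mu - I[i] - I[j])) / (4 * h ** 2)\n    return mu, np.linalg.inv(H)   # posterior mean and covariance\n\\end{verbatim}\n\tIn the sequential setting, \\texttt{theta\\_init} is set to the MAP estimate of the previous time step, as described above.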
\n\t\n\t\\section{Structural reliability of a deteriorating structural system and its updating}\n\t\\label{sec: SR}\n\tEstimation of the structural reliability, and the use of vibrational data to update it, is instrumental for the framework that we present here. A detailed review of the ideas presented in this section can be found in \\cite{Melchers, Straub1}.\n\t\n\t\\subsection{Structural reliability analysis for a deteriorating structural system}\n\t\\label{subsec: SR_prior}\n\tIn its simplest form, a failure event at time $t$ can be described in terms of a structural system capacity $R(t)$ and a demand $S(t)$. Both $R$ and $S$ are random variables. With $D(\\boldsymbol{\\theta},t)$ we define a parametric stochastic deterioration model. Herein we assume that the structural capacity $R(t)$ can be separated from the demand $S(t)$, and that the capacity is deterministic and known for a given deterioration $D(\\boldsymbol{\\theta},t)$, hence we write $R\\left(D(\\boldsymbol{\\theta},t)\\right)$. More details on how this deterministic curve can be obtained for specific cases are given in Section 6, which contains the numerical examples. Therefore, at a time $t$ the structural capacity includes the effect of the deterioration process. The uncertain demand acting on the structure is here modeled by the distribution of the maximum load in a one-year time interval. The cumulative distribution function (CDF) of this distribution is denoted $F_{s_{max}}$. Such a modeling choice simplifies the estimation of the structural reliability, which, as will become clear in what follows, is vital within a computationally expensive VoI analysis framework.\n\t\n\tWe discretize time into annual intervals $j=1,..,T$, where the $j$-th interval represents $t \\in (t_{j-1}, t_j]$. For the type of problems that we are considering, the time-variant reliability problem can be replaced by a series of time-invariant reliability problems \\cite{Straub1}. $F_j^*$ is defined as the event of failure in interval $(t_{j-1}, t_j]$. For a given value of the deterioration model parameters $\\boldsymbol{\\theta}$ and time $t_j$, the capacity $R\\left(D(\\boldsymbol{\\theta},t_j)\\right)$ is fixed, and the conditional interval probability of failure is defined as:\n\t\\begin{equation}\n\t\t\\text{Pr}(F_j^* \\mid \\boldsymbol{\\theta}, t_j)= 1 -F_{s_{max}}\\left(R\\left(D(\\boldsymbol{\\theta},t_j)\\right)\\right)\n\t\t\\end{equation}\n\tWe define $\\text{Pr}[F(t_i)] = \\text{Pr}(F_1^*\\cup F_2^*\\cup...F_i^*)$ as the accumulated probability of failure up to time $t_i$.\n\tOne can compute $\\text{Pr}[F(t_i)]$ through the conditional interval probabilities $\\text{Pr}(F_j^*\\mid \\boldsymbol{\\theta}, t_j)$ as:\n\t\\begin{equation}\n\t\t\\text{Pr}[F(t_i)\\mid \\boldsymbol{\\theta}] =1-\\prod_{j=1}^{i}[1 - \\text{Pr}(F_j^*\\mid \\boldsymbol{\\theta}, t_j)]\n\t\t\\label{conditional_accumulated}\n\t\t\\end{equation}\n\tFollowing the total probability theorem, the unconditional accumulated probability of failure is:\n\t\\begin{equation}\n\t\t\\text{Pr}[F(t_i)] = \\int_{\\Omega_{\\boldsymbol{\\theta}}}\\text{Pr}[F(t_i)\\mid \\boldsymbol{\\theta}] \\pi_{\\text{pr}}(\\boldsymbol{\\theta})d\\boldsymbol{\\theta}\n\t\t\\label{accumulated}\n\t\t\\end{equation}\n\tThe solution to the above integral is approximated using Monte Carlo simulation (MCS). 
We draw samples from the prior distribution $\\pi_{\\text{pr}}(\\boldsymbol{\\theta})$ of the uncertain deterioration model parameters and the integral in (\\ref{accumulated}) is approximated by:\n\t\\begin{equation}\n\t\t\\text{Pr}[F(t_i)] \\approx \\frac{1}{n_{\\text{MCS}}} \\sum_{k=1}^{n_{\\text{MCS}}} \\text{Pr}[F(t_i)\\mid \\boldsymbol{\\theta}^{(k)}]\n\t\t\\label{MCS}\n\t\t\\end{equation}\n\tHaving computed the probabilities $\\text{Pr}[F(t_i)]$, one can compute the hazard function $h(t_i)$ for the different time intervals $t_i$, which expresses the failure rate of the structure conditional on survival up to time $t_{i-1}$:\n\t\\begin{equation}\n\t\th(t_i) = \\frac{\\text{Pr}[F(t_i)] - \\text{Pr}[F(t_{i-1})]}{1 - \\text{Pr}[F(t_{i-1})]}\n\t\t\\label{hazard_prior}\n\t\t\\end{equation}
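\n\tThe following minimal Python sketch illustrates this computation with a hypothetical power-law deterioration model and hypothetical capacity and demand models, evaluating equations (\\ref{conditional_accumulated}), (\\ref{MCS}) and (\\ref{hazard_prior}) for all prior samples at once; replacing the prior samples by posterior samples yields the updated quantities of the next subsection.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\nT, n_mcs = 50, 10000\n\n# prior samples of a hypothetical deterioration model D(t) = A * t**B\ncv_A = 0.5\ns2 = np.log(1 + cv_A ** 2)\nA = rng.lognormal(np.log(8e-4) - 0.5 * s2, np.sqrt(s2), n_mcs)\nB = rng.normal(2.0, 0.3, n_mcs)\n\ndef capacity(D):\n    # hypothetical deterministic capacity curve R(D(theta, t))\n    return 10.0 / (1.0 + D)\n\ndef F_smax(r):\n    # hypothetical Gumbel CDF of the annual maximum demand\n    mu, beta = 1.0, 0.2\n    return np.exp(-np.exp(-(r - mu) / beta))\n\nt = np.arange(1, T + 1)\nD = A[:, None] * t[None, :] ** B[:, None]        # D(theta, t)\npf_int = 1.0 - F_smax(capacity(D))               # Pr(F_j* | theta, t_j)\npf_acc = 1.0 - np.cumprod(1.0 - pf_int, axis=1)  # Pr[F(t_i) | theta]\nPF = pf_acc.mean(axis=0)                         # Pr[F(t_i)], MC estimate\n\nPF0 = np.concatenate(([0.0], PF[:-1]))\nh = (PF - PF0) / (1.0 - PF0)                     # hazard function h(t_i)\n\\end{verbatim}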
\n\t\n\t\\subsection{Structural reliability updating using SHM modal data}\n\t\\label{subsec:SR_updating}\n\tThe goal of SHM is to identify structural damage. Monitoring data can be employed in order to identify the parameters $\\boldsymbol{\\theta}$ of the deterioration models and obtain their posterior distribution, as shown in Section \\ref{sec:Bayes}. Consequently this leads to the updating of the accumulated probability of failure at time $t_i$, which can now be conditioned on data $\\boldsymbol{Z}_{1:i-1}$ obtained up to time $t_{i-1}$. \n\t\\begin{equation}\n\t\t\\text{Pr}[F(t_i)\\mid \\boldsymbol{Z}_{1:i-1}] = \\text{Pr}(F_1^*\\cup F_2^*\\cup...F_i^*\\mid \\boldsymbol{Z}_{1:i-1})\n\t\t\\label{conditional}\n\t\t\\end{equation}\n\tThe accumulated probability of failure up to time $t_i$ conditional on modal data obtained up to time $t_{i-1}$ is:\n\t\\begin{equation}\n\t\t\\text{Pr}[F(t_i)\\mid \\boldsymbol{Z}_{1:i-1}] =\\\\\n\t\t\\int_{\\Omega_{\\boldsymbol{\\theta}}}\\text{Pr}[F(t_i)\\mid \\boldsymbol{\\theta}] \\pi_{\\text{pos}}(\\boldsymbol{\\theta}| \\boldsymbol{\\widetilde{\\lambda}}_{1:i-1}, \\boldsymbol{\\widetilde{\\Phi}}_{1:i-1})d\\boldsymbol{\\theta}\n\t\t\\label{conditional_integral}\n\t\t\\end{equation}\n\tIn (\\ref{conditional_integral}), one needs to integrate over the posterior distribution of the parameters $\\boldsymbol{\\theta}$. As described in Section \\ref{subsec: Bayesian solution}, two different methods for obtaining samples from this posterior distribution at each time step are implemented. If an adaptive MCMC algorithm is used, at every step of the sequential updating we obtain the desired posterior distribution of the parameters in the form of correlated MCMC samples. If the posterior distributions are approximated by multivariate Gaussian distributions using the Laplace approximation, independent posterior samples can be drawn from this approximate posterior density. Using $n_{\\text{pos}}$ samples $\\boldsymbol{\\theta}^{(k)}$ from either MCMC or the asymptotic approximation, the integral in equation (\\ref{conditional_integral}) can be approximated:\n\t\\begin{equation}\n\t\t\\text{Pr}[F(t_i)\\mid \\boldsymbol{Z}_{1:i-1}]\n\t\t\\approx \\frac{1}{n_{\\text{pos}}} \\sum_{k=1}^{n_{\\text{pos}}} \\text{Pr}[F(t_i)\\mid \\boldsymbol{\\theta}^{(k)}]\n\t\t\\label{accumulated_posterior}\n\t\t\\end{equation}\n\tThe hazard function conditional on the monitoring data can then be obtained as:\n\t\\begin{equation}\n\t\th(t_i \\mid \\boldsymbol{Z}_{1:i-1}) = \\frac{\\text{Pr}[F(t_i) \\mid \\boldsymbol{Z}_{1:i-1}] - \\text{Pr}[F(t_{i-1}) \\mid \\boldsymbol{Z}_{1:i-1}]}{1 - \\text{Pr}[F(t_{i-1})\\mid \\boldsymbol{Z}_{1:i-1}]}\n\t\t\\label{hazard_posterior}\n\t\t\\end{equation}\n\t\n\t\\section{Life-cycle cost with SHM}\n\t\\label{sec:LCC}\n\t\n\t\\subsection{Life-cycle optimization based on heuristics}\n\tThe VoI is the difference in life-cycle cost between the cases with and without an SHM system. To calculate the life-cycle cost we optimize the maintenance strategy. A strategy $S$ is a set of policies that determine which action to take at any time step $t_i$, conditional on all the information at hand up to that time \\cite{Jensen, Elizabeth}. One may define policies based on simple decision rules, also called heuristics, which may emerge from basic engineering understanding. \n\t\n\tA detailed presentation of the use of heuristics in optimal inspection and maintenance planning can be found in \\cite{Jesus, Elizabeth}. With the use of heuristics, the space of solutions to the decision problem is drastically reduced, but the problem is solved only approximately. Here, we utilize a simple heuristic for maintenance decisions. The simple heuristic chosen in this work is the following: Perform a repair action whenever the estimate of the hazard function (the conditional failure rate) is larger than a predefined threshold $h_{thres}$. The use of the hazard function as a decision criterion for condition assessment and maintenance planning is a popular choice in the literature \\cite{Elingwood}. The parameter $w = h_{thres}$ describing the heuristic is a parameter of the strategy $S$. For simplicity, we assume herein that performing a repair action results in replacing the damaged components and bringing them back to the initial state, and that no failure will occur once a repair action has been performed. In this way, after a repair action, the computation of the total life-cycle cost stops. This modeling choice is a simplification, but it allows for a viable computation of the VoI herein.\n\t\n\tThe total life-cycle cost $C_{\\text{tot}}$ is here taken as the total cost of maintenance and the risk of failure costs over the lifetime of the structure. The initial cost is not included in $C_{\\text{tot}}$, because it is the same with or without SHM and therefore cancels out when calculating the VoI.\n\t\n\tWith the use of heuristics, solving the decision problem boils down to finding the optimal value of the heuristic parameter $w$ which minimizes the expected total cost, i.e. 
to the solution of the optimization problem: \n\t\\begin{equation}\n\t\tw^* = \\underset{w}{\\argmin} \\boldsymbol{\\text{E}}[C_{\\text{tot}} \\mid w]\n\t\t\\label{optimization}\n\t\t\\end{equation}\n\t\n\t\\subsection{Computation of the expected total life-cycle cost in the prior case}\n\t\\label{subsec: Prior_LCC}\n\tIn the prior case, where only the prior deterioration model is available, the expectation in equation (\\ref{optimization}) is with respect to the system state, i.e. the deterioration model parameters $\\boldsymbol{\\theta}$. \n\tThe total cost of maintenance and risk is the sum of the repair costs and the risk of failure costs over the lifetime of the bridge, $C_{\\text{tot}}(w, \\boldsymbol{\\theta}) = C_{\\text{R}}(w) + C_{\\text{F}}(w, \\boldsymbol{\\theta})$, therefore the expected total life-cycle cost for a given heuristic parameter $w$ is: \n\t\\begin{equation}\n\t\t\\boldsymbol{\\text{E}}_{\\boldsymbol{\\theta}}[C_{\\text{tot}} \\mid w] = \\boldsymbol{\\text{E}}_{\\boldsymbol{\\theta}}[C_{\\text{R}}(w) \\mid w] + \n\t\t\\boldsymbol{\\text{E}}_{\\boldsymbol{\\theta}}[C_{\\text{F}}(w, \\boldsymbol{\\theta}) \\mid w] \n\t\t\\label{cost_breakdown}\n\t\t\\end{equation}\n\t\n\tThe first part of the right hand side of equation (\\ref{cost_breakdown}) can be computed in the following way. We draw samples $\\boldsymbol{\\theta}^{(k)}, k=1,..,n_{\\text{MCS}}$, from the prior distribution $\\pi_{\\text{pr}}(\\boldsymbol{\\theta})$ and use them to compute the accumulated probability of failure via equation (\\ref{MCS}), and subsequently compute the hazard function with equation (\\ref{hazard_prior}). When the hazard function exceeds the threshold, i.e. when $h(t_i) \\geq w$, then we define $t_{\\text{repair}}(w) = t_{i-1}$ as the time that the repair takes place. The time of repair is thus a function of our chosen heuristic. 
Hence the expected total cost of repair over the lifetime is given as:\n\t\\begin{equation}\n\t\t\\boldsymbol{\\text{E}}_{\\boldsymbol{\\theta}}[C_{\\text{R}}(w) \\mid w] = \\hat{c}_R\\gamma(t_{\\text{repair}}(w))\n\t\t\\label{cost_repair}\n\t\t\\end{equation}\n\twhere $\\hat{c}_R$ is the fixed cost of the repair, and $\\gamma(t) = \\frac{1}{(1+r)^t}$ is the discounting function, $r$ being the annually compounded discount rate.\n\t\n\tThe risk of failure over the lifetime can be computed via MCS, using the samples $\\boldsymbol{\\theta}^{(k)}$, $k=1,..,n_{\\text{MCS}}$, that were drawn from the prior distribution $\\pi_{\\text{pr}}(\\boldsymbol{\\theta})$, with the following formula:\n\t\\begin{equation}\n\t\t\\boldsymbol{\\text{E}}_{\\boldsymbol{\\theta}}[C_{\\text{F}}(w, \\boldsymbol{\\theta}) \\mid w] \\approx \\frac{1}{n_{\\text{MCS}}} \\sum_{k=1}^{n_{\\text{MCS}}} C_{\\text{F}}(w, \\boldsymbol{\\theta}^{(k)})\n\t\t\\label{MC_for_risk}\n\t\t\\end{equation}\n\twhere:\n\t\\begin{equation}\n\t\tC_{\\text{F}}(w, \\boldsymbol{\\theta}^{(k)}) = \\sum_{i=1}^{t_{repair}(w)}\\hat{c}_F\\gamma(t_i) \\{\\text{Pr}[F(t_i)\\mid \\boldsymbol{\\theta}^{(k)}]- \\text{Pr}[F(t_{i-1})\\mid \\boldsymbol{\\theta}^{(k)}]\\}\n\t\t\\label{cost_failure}\n\t\t\\end{equation}\n\tand $\\hat{c}_F$ is the fixed cost of the failure event.\n\t\n\tFollowing the solution of the optimization problem in (\\ref{optimization}), the expected total life-cycle cost associated with the optimal decision in the prior case without any monitoring data is $\\boldsymbol{\\text{E}}_{\\boldsymbol{\\theta}}[C_{\\text{tot}} \\mid w_0^*]$.\n\t\n\t\\subsection{Computation of the expected total life-cycle cost in the preposterior case}\n\t\\label{subsec: Preposterior_cost}\n\tThe goal of a preposterior analysis is to act as a decision tool on whether collecting SHM data is beneficial, and to quantify the VoI of an SHM system, prior to its installation. Therefore this type of analysis is performed before any actual SHM data are obtained. Instead, the SHM monitoring data histories must be sampled over the lifetime from the prior distribution of the uncertain deterioration model parameters $\\boldsymbol{\\theta}$, as will be explained shortly. A sampled monitoring data history vector $\\boldsymbol{Z}= [\\boldsymbol{Z}_1,...,\\boldsymbol{Z}_{n_T}]$ contains the OMA-identified modal data at fixed time instances over the structure's lifetime. The rate at which this data is sampled should be chosen on the basis of the problem at hand. In this case, we explore a slowly evolving deterioration process, and further ignore dependence on temperature effects. Therefore, for this investigation we employ one set of OMA-identified modal data per year. 
Under the currently assumed independence from environmental and operational conditions (EOCs), this sparse sampling assumption is further justified by the observation that increasing the number of data sets used in the BMU process seemingly decreases the parameter estimation uncertainty, albeit not properly reflecting the full variability of the updated parameters \\cite{Behmanesh, Vanik}.\n\t\n\tIn a preposterior analysis, the expectation in equation (\\ref{optimization}) is over both the system state $\\boldsymbol{\\theta}$ and the monitoring outcomes $\\boldsymbol{Z}$.\n\t\\begin{equation}\n\t\t\\boldsymbol{\\text{E}}_{\\boldsymbol{\\theta}, \\boldsymbol{Z}}[C_{\\text{tot}} \\mid w] = \\int_{\\Omega_{\\boldsymbol{\\theta}}} \\int_{\\Omega_{\\boldsymbol{Z}}} C_{\\text{tot}}(w, \\boldsymbol{\\theta}, \\boldsymbol{z})f_{\\boldsymbol{\\Theta}, \\boldsymbol{Z}}(\\boldsymbol{\\theta},\\boldsymbol{z} )d\\boldsymbol{z}d\\boldsymbol{\\theta}\n\t\t\\label{posterior_case}\n\t\t\\end{equation}\n\tThe total cost of maintenance and risk is again the sum of the repair cost and the risk of failure cost over the lifetime of the structure, both of which now also depend on the monitoring outcomes $\\boldsymbol{Z}$, $C_{\\text{tot}}(w, \\boldsymbol{\\theta}, \\boldsymbol{Z}) = C_{\\text{R}}(w, \\boldsymbol{Z}) + C_{\\text{F}}(w, \\boldsymbol{\\theta}, \\boldsymbol{Z})$.\n\t\n\tThe integral in equation (\\ref{posterior_case}) is computed with crude MCS. We draw samples from the uncertain deterioration model parameters $\\boldsymbol{\\theta}$, which correspond to a deterioration history over the lifetime, as given by the deterioration model equation $D(\\boldsymbol{\\theta},t)$. For each of these histories, we generate noisy acceleration measurements every year, feed them into an SSI algorithm, and obtain one vector of monitoring modal data $\\boldsymbol{Z}$ (one identified modal data set per year). In this way we are jointly sampling the system state space and monitoring data space, and equation (\\ref{posterior_case}) is approximated as:\n\t\\begin{equation}\n\t\t\\boldsymbol{\\text{E}}_{\\boldsymbol{\\theta}, \\boldsymbol{Z}}[C_{\\text{tot}} \\mid w] =\\frac{1}{n_{MCS}}\\sum_{k = 1}^{n_{MCS}}[C_{\\text{R}}(w, \\boldsymbol{z}^{(k)}) + C_{\\text{F}}(w, \\boldsymbol{\\theta}^{(k)}, \\boldsymbol{z}^{(k)})]\n\t\t\\label{post_case_MCS}\n\t\t\\end{equation}\n\tFor each of the sampled system states and corresponding monitoring data, we compute the updated hazard rate as given by equation (\\ref{hazard_posterior}), and when $h(t_i \\mid \\boldsymbol{z}^{(k)}_{1:i-1}) \\geq w$, then $t_{\\text{repair}}(w, \\boldsymbol{z}^{(k)}) = t_{i-1}$.\n\t\n\tThe cost of repair is:\n\t\\begin{equation}\n\t\tC_{\\text{R}}(w, \\boldsymbol{z}^{(k)}) = \\hat{c}_R\\gamma(t_{\\text{repair}}(w,\\boldsymbol{z}^{(k)}))\n\t\t\\label{cost_repair_post}\n\t\t\\end{equation}\n\t\n\tThe risk of failure is:\n\t\\begin{equation}\n\t\tC_{\\text{F}}(w, \\boldsymbol{\\theta}^{(k)},\\boldsymbol{z}^{(k)} ) = \\sum_{i=1}^{t_\\text{repair}(w, \\boldsymbol{z}^{(k)})}\\hat{c}_F\\gamma(t_i) \\{\\text{Pr}[F(t_i)\\mid \\boldsymbol{\\theta}^{(k)}]- \n\t\t\\text{Pr}[F(t_{i-1})\\mid \\boldsymbol{\\theta}^{(k)}]\\}\n\t\t\\label{cost_failure_post}\n\t\t\\end{equation}\n\tComparing equations (\\ref{cost_failure}) and (\\ref{cost_failure_post}), it is evident that adoption of the same samples of $\\boldsymbol{\\theta}$ in both the prior and preposterior analyses leads to an identical estimate of the risk of failure for the two analyses up to the time of the repair. The only difference between the prior and preposterior cases is the resulting $t_\\text{repair}(w, \\boldsymbol{z}^{(k)})$.\n\t\n\tSolving equation (\\ref{optimization}), we obtain the optimal expected total life-cycle cost given the monitoring data, $\\boldsymbol{\\text{E}}_{\\boldsymbol{\\theta, Z}}[C_{\\text{tot}} \\mid w_{mon}^*]$.
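\n\t\n\tThe following minimal Python sketch summarizes this computation for a given threshold $w$ (all inputs are hypothetical placeholders; the updated hazard curves are assumed to have been precomputed with the sequential framework of Sections 3 and 4). The prior-case cost of Section \\ref{subsec: Prior_LCC} is recovered by passing the same prior hazard curve for every sample.\n\\begin{verbatim}\nimport numpy as np\n\ndef expected_total_cost(w, hazard, pf_cond, c_R=1.0, c_F=100.0, r=0.02):\n    # hazard[k, i]  : h(t_{i+1} | z_{1:i}) for monitoring history k\n    # pf_cond[k, i] : Pr[F(t_{i+1}) | theta_k]\n    n, T = hazard.shape\n    disc = 1.0 / (1.0 + r) ** np.arange(T + 1)  # discounting gamma(t)\n    total = 0.0\n    for k in range(n):\n        exceed = np.flatnonzero(hazard[k] >= w)\n        # repair at t_{i-1} when h(t_i | .) first exceeds w, else never\n        t_rep = exceed[0] if exceed.size else T\n        cost = c_R * disc[t_rep] if exceed.size else 0.0\n        pf = np.concatenate(([0.0], pf_cond[k]))\n        for i in range(1, t_rep + 1):           # discounted risk up to repair\n            cost += c_F * disc[i] * (pf[i] - pf[i - 1])\n        total += cost\n    return total / n\n\\end{verbatim}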
\n\t\t\n\t\\subsection{Summary of the proposed methodology to calculate the VoI}\n\t\\label{subsec: VOI}\n\tThe proposed procedure for the VoI analysis consists of the following steps:\n\t\\begin{enumerate}\n\t \\item Choose a prior stochastic deterioration model describing the structural condition over the lifetime of the structure. Define a decision analysis time discretization, maintenance\/repair actions, costs of actions, and the cost of the failure event. Choose a heuristic parameter $w$ (threshold on the hazard rate) for a heuristic-based solution of the decision problem.\n\t \\item Draw Monte Carlo samples $\\boldsymbol{\\theta}$ of the stochastic deterioration model parameters. \n\t \\item Perform a prior decision analysis:\n\t \\begin{itemize}\n\t \\item Use the prior $\\boldsymbol{\\theta}$ samples to estimate the lifetime accumulated probability of failure $\\text{Pr}[F(t_i)]$ and the corresponding hazard rate $h(t_i)$.\n\t \\item Solve the LCC optimization problem to obtain the optimal value of the heuristic parameter $w_0^*$ and the corresponding optimal $t_{\\text{repair}}$. Obtain the optimal expected LCC in the prior case: $\\boldsymbol{\\text{E}}_{\\boldsymbol{\\theta}}[C_{\\text{tot}}(\\boldsymbol{\\theta},w) \\mid w_0^*]$.\n\t \\end{itemize}\n\t \\item Perform a preposterior decision analysis:\n\t \\begin{itemize}\n\t \\item For each individual prior sample $\\boldsymbol{\\theta}$ realization and given value of the heuristic parameter $w$ do the following:\n\t \\begin{enumerate}[label=(\\alph*)]\n\t \\item Sample the corresponding noisy acceleration time series data for every year over the lifetime of the structure. Feed the accelerations into the SSI algorithm to identify the structure's modal data vectors $\\boldsymbol{Z}$.\n\t \\item Perform a posterior Bayesian analysis: BMU to sequentially learn the posterior distributions of $\\boldsymbol{\\theta}$ and subsequently obtain an updated estimate of the accumulated probability of failure $\\text{Pr}[F(t_i)\\mid \\boldsymbol{Z}_{1:i-1}]$ and the hazard rate $h(t_i \\mid \\boldsymbol{Z}_{1:i-1})$.\n\t \\item Find the time to perform the repair action for this specific deterioration and monitoring data realization, conditional on a value of the heuristic parameter $w$. \n\t \\end{enumerate}\n\t \\item Solve the LCC optimization problem to obtain the optimal value of the heuristic parameter $w_{mon}^*$ which minimizes the expected LCC in the preposterior case $\\boldsymbol{\\text{E}}_{\\boldsymbol{\\theta, Z}}[C_{\\text{tot}}(\\boldsymbol{\\theta},\\boldsymbol{Z},w) \\mid w_{mon}^*]$.\n\t \\end{itemize}\n\t \\item Compute the VoI (a schematic code sketch follows this list).\n\t \\begin{equation}\n\t VoI= \\boldsymbol{\\text{E}}_{\\boldsymbol{\\theta}}[C_{\\text{tot}}(\\boldsymbol{\\theta},w) \\mid w_0^*] - \\boldsymbol{\\text{E}}_{\\boldsymbol{\\theta, Z}}[C_{\\text{tot}}(\\boldsymbol{\\theta},\\boldsymbol{Z},w) \\mid w_{mon}^*]\n\t \\label{VOI}\n\t \\end{equation}\n\t\\end{enumerate}
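\n\tIn compact form, and reusing the cost routine sketched in the previous section, the whole procedure reduces to two one-dimensional optimizations over the heuristic parameter (a schematic sketch; the hazard and failure-probability arrays are hypothetical precomputed inputs):\n\\begin{verbatim}\ndef voi(w_grid, haz_prior, haz_mon, pf_cond):\n    # haz_prior: prior hazard curve replicated for all samples\n    # haz_mon  : sample-specific updated hazard curves\n    C0 = min(expected_total_cost(w, haz_prior, pf_cond) for w in w_grid)\n    Cmon = min(expected_total_cost(w, haz_mon, pf_cond) for w in w_grid)\n    return C0 - Cmon\n\\end{verbatim}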
\n\t\n\t\\subsection{Value of Partial Perfect Information}\n\t\\label{subsec: PI}\n\tThe case of partial perfect information corresponds to a hypothetical situation in which the SHM system provides perfect information on the condition of the structure. This means that there is no uncertainty about the parameters $\\boldsymbol{\\theta}$ of the deterioration model, and the optimal decision is found conditional on this perfect knowledge of $\\boldsymbol{\\theta}$. Because the SHM system cannot provide any information about the load acting on the structure, which is here modeled by an uncertain Gumbel random variable, one uses the term ``partial''.\n\t\n\tThe value of partial perfect information is given by:\n\t\\begin{equation}\n\t\tVPPI= \\underset{w}{\\min} \\boldsymbol{\\text{E}}_{\\boldsymbol{\\theta}}[C_{\\text{tot}}(\\boldsymbol{\\theta},w)] - \\boldsymbol{\\text{E}}_{\\boldsymbol{\\theta}}\\{\\underset{w}{\\min}[C_{\\text{tot}}(\\boldsymbol{\\theta},w)\\mid \\boldsymbol{\\theta}]\\}\n\t\t\\label{VPPI}\n\t\t\\end{equation}\n\t\n\tThe first term on the right-hand side of equation (\\ref{VPPI}) is the optimal expected total life-cycle cost in the prior case, exactly as presented in Section \\ref{subsec: Prior_LCC}. In the second term, first the optimal heuristic is found conditional on exact knowledge of $\\boldsymbol{\\theta}$, and then the expected value of the total life-cycle costs associated with these optimal decisions is computed. The difference between the two corresponds to the value of information that one would obtain in the case of perfect monitoring and perfect decision-making with the chosen heuristic.\n\t\n\tThe VPPI provides an upper limit on the value that the VoI can attain. Since it can be computed much more easily than the VoI, the VPPI can provide a first estimate of the maximum investment that should be made in SHM systems. We therefore advocate that a VPPI computation should always be performed first.
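\n\t\n\tIn code, equation (\\ref{VPPI}) reads as follows (a minimal sketch; cost is a hypothetical routine returning the total life-cycle cost $C_{\\text{tot}}(\\boldsymbol{\\theta},w)$ for one sampled deterioration history):\n\\begin{verbatim}\nimport numpy as np\n\ndef vppi(cost, thetas, w_grid):\n    # first term: optimize w for the expected cost (prior case)\n    prior = min(np.mean([cost(th, w) for th in thetas]) for w in w_grid)\n    # second term: optimize w for each known theta, then average\n    perfect = np.mean([min(cost(th, w) for w in w_grid) for th in thetas])\n    return prior - perfect\n\\end{verbatim}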
\n\t\n\t\\section{Numerical investigations}\n\t\\label{sec: Numerical_Investigations}\n\t\\subsection{Numerical benchmark: Continuously monitored bridge system subject to deterioration}\n\tWe consider the two-span bridge model of Figure~\\ref{f:benchmark}, with its reference behavior \\cite{bench19} simulated by an FE model of isoparametric plane stress quadrilateral elements. This benchmark structure has been developed as part of the TU1402 COST Action and serves for verification of analysis methods and tools for SHM. 200 elements are employed to mesh the $x$ direction, and 6 elements are used along the height ($y$ direction). The beam dimensions form configurable parameters of the benchmark and are set as: height $h$ = 0.6m, width $w$ = 0.1m, while the lengths are $L_1$ = 12m for \n\tthe first span and $L_2$ = 13m for the second span. A linear elastic material with Young's modulus $E$ = 30GPa, Poisson ratio $\\nu$ = 0.2, and material density $\\rho$ = 2000 kg\/m\\textsuperscript{3} is assigned. Elastic boundaries in both directions are assumed for all three support points, in the form of translational springs with $K_x$ = 10\\textsuperscript{8} N\/m and $K_y$ = 10\\textsuperscript{7} N\/m.\n\t\n\tIt is assumed that the simulated two-span bridge is continuously monitored using a set of sensors measuring vertical acceleration, whose locations correspond to predefined FE nodes. A distributed Gaussian white noise excitation $F(x)$ is used as the load acting on the bridge, to simulate the unknown ambient excitation. A dynamic time history analysis of the model, for a given realization of the load, results in the measured vertical acceleration signals at the assigned sensor locations. \n\t\\begin{figure}[ht]\n\t\t\\centerline{\n\t\t\t\\includegraphics[width=\\textwidth]{figures\/benchmark_4.pdf}\n\t\t}\n\t\t\\caption{Benchmark model}\n\t\t\\label{f:benchmark}\n\t\\end{figure}\n\n\t\\subsection{Deterioration modeling}\n\tA prior model describing structural deterioration is a prerequisite for a VoI analysis. A detailed presentation of probabilistic deterioration models for life-cycle performance assessment of structures can be found in \\cite{Elingwood, Frangopol, Biondini}. For time-dependent reliability assessment purposes, the use of simple empirical models, which are still flexible enough to model different kinds of deterioration mechanisms, can be adopted \\cite{Elingwood}.\n\tWithin this work, we use a simple rate equation of the form:\n\t\\begin{equation}\n\t\tD(t) = A t^B\n\t\t\\label{deterioration}\n\t\t\\end{equation}\n\tto model structural deterioration, where $D(t)$ is the unit-less deterioration parameter (loss of stiffness) entering the assumed damage model, and $A, B$ are random variables driving the uncertainty in this model. Parameter $A$ models the deterioration rate, while parameter $B$ is related to the nonlinearity effect in terms of a power law in time. We consider herein the following two case studies related to structural deterioration of the bridge structure.\n\t\n\t\\subsubsection{Bridge system subject to scour}\n\tWe assume that the middle elastic support (pier) of the bridge structure is subjected to gradual deterioration, simulating the case of scour \\cite{Prendergast}. Damage is introduced as a progressive reduction of the stiffness in the $y$-direction of the spring $K_y^{(2)}$ at the middle elastic support of the bridge (Figure \\ref{f:benchmark}). The evolution of the stiffness reduction of the vertical spring support over the lifespan of the bridge is described by employing the damage model of equation (\\ref{stifness_reduction}), where $K_{y,0}^{(2)}$ is the initial undamaged value, and $D(t)$ is the stiffness reduction described by equation (\\ref{deterioration}). We consider a lifespan of $T$=50 years for the structure. The uncertain parameters of the deterioration model are summarized in Table \\ref{table:Parameters_scour}. The mean and coefficient of variation of the parameters $A$ and $B$ are chosen to reflect a significant a priori uncertainty. They result in a 10\\% probability that $D(t=50)>9$ at the end of the lifespan.\n\t\\begin{equation}\n\tK_y^{(2)}(t) = \\frac{K_{y,0}^{(2)}}{(1+D(t))} = \\frac{K_{y,0}^{(2)}}{(1+At^{B})}\n\t\\label{stifness_reduction}\n\t\\end{equation}\n\t\n\t\\begin{table}[!ht]\n\t\t\\caption{Parameters of the stochastic deterioration model for scour.}\n\t\t\\footnotesize\n\t\t\\centering\n\t\t\\begin{tabular}{cccc}\\hline\n\t\t\tParameter&Distribution&Mean&CV\\\\\\hline\n\t\t\tA&Lognormal&7.955$\\times$10\\textsuperscript{-4}&0.5\\\\\n\t\t\tB&Normal&2.0&0.15\\\\\\hline\n\t\t\\end{tabular}\n\t\t\\label{table:Parameters_scour} \n\t\\end{table}\n\t\n\t\\subsubsection{Bridge system subject to corrosion deterioration}\n\tAs a second separate case study, we assume that the bridge structure is subjected to gradual deterioration from corrosion in the middle of both midspans (elements in black in Figure \\ref{f:benchmark}). At both locations, damage is introduced as a progressive reduction of the stiffness at the bottom two elements of the FE mesh. 
\n\t\n\t\\subsubsection{Bridge system subject to corrosion deterioration}\n\tAs a second separate case study, we assume that the bridge structure is subjected to gradual deterioration from corrosion in the middle of both midspans (elements in black in Figure \\ref{f:benchmark}). At both locations, damage is introduced as a progressive reduction of the stiffness at the bottom 2 elements of the FE mesh. For the deterioration hotspots at the left and right midspans, the evolution of the elements' stiffness reduction over the lifespan of the bridge is described by employing the damage model of equation (\\ref{Youngs_modulus_reduction_left}), where $E^{(0)}$ is the initial undamaged value of the Young's modulus, and $D_1(t)$, $D_2(t)$ are the deterioration models (reduction of stiffness) employed for each location, as described by equation (\\ref{deterioration}). The random variables of the deterioration models are summarized in Table \\ref{table:Parameters}. According to \\cite{Elingwood}, for this simple empirical deterioration model, a value of $B$=0.5 corresponds to diffusion-controlled damage processes. Therefore, the mean values of $B_1$ and $B_2$ have been chosen equal to 0.5. The mean and coefficient of variation of the four uncertain parameters are chosen so that they result in a 1\\% probability that $D(t=50)>9$ at the end of the lifespan.\n\n\t\\begin{equation}\n\t\tE_j(t) = \\frac{E^{(0)}}{(1+D_j(t))} = \\frac{E^{(0)}}{(1+A_jt^{B_j})},\\quad j = 1, 2\n\t\t\\label{Youngs_modulus_reduction_left}\n\t\t\\end{equation}\n\t\n\t\\begin{table}[ht]\n\t\t\\caption{Parameters of the stochastic deterioration model for corrosion.}\n\t\t\\footnotesize\n\t\t\\centering\n\t\t\\begin{tabular}{cccc}\\hline\n\t\t\tParameters&Distribution&Mean&CV\\\\\\hline\n\t\t\t$A_1, A_2$&Lognormal&0.506&0.4\\\\\n\t\t\t$B_1, B_2$&Normal&0.5&0.15\\\\\\hline\n\t\t\\end{tabular}\n\t\t\\label{table:Parameters} \n\t\\end{table}\n\t\n\t\\subsection{Synthetic monitoring data creation}\n\t\\label{subsec: Synthetic}\n\tFor the purpose of the VoI analysis framework presented in this paper, for every deterioration time instance at which we want to simulate a monitoring data set obtained from the deployed SHM system, the corresponding stiffness reduction is implemented in the FE benchmark model, a dynamic time history analysis is run, and the ``true\" vertical acceleration signals $\\ddot{x}$ at the sensor locations (FE nodes) are obtained. The noise-free acceleration time series data set is contaminated with Gaussian white noise of 2\\% root mean square noise-to-signal ratio, simulating a sensor measurement error. Subsequently, the noisy accelerations $\\tilde{\\ddot{x}}$ are fed into an output-only operational modal analysis (OMA) scheme. Specifically, the stochastic subspace identification (SSI) \\cite{SSI} algorithm is used to identify a set of the lower eigenvalues (squares of natural frequencies) and mode shapes. The data creation process can be seen in Figure \\ref{f:synthetic}.\n\t\n\t\\begin{figure}\n\t\t\\centerline{\n\t\t\t\\includegraphics[width=\\textwidth]{figures\/synthetic_data_creation.pdf}\n\t\t}\n\t\t\\caption{Process for generating the SHM data}\n\t\t\\label{f:synthetic}\n\t\\end{figure}
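\n\t\n\tThe contamination step of the above pipeline can be sketched as follows (a minimal illustration with our own variable names; \\texttt{acc} holds the noise-free acceleration time series, one column per sensor):\n\\begin{verbatim}\nimport numpy as np\n\ndef add_measurement_noise(acc, ratio=0.02, seed=None):\n    # acc: (n_steps, n_sensors) noise-free accelerations.\n    # Adds Gaussian white noise with the given RMS noise-to-signal\n    # ratio, channel by channel.\n    rng = np.random.default_rng(seed)\n    rms = np.sqrt(np.mean(acc**2, axis=0))\n    return acc + rng.normal(size=acc.shape) * (ratio * rms)\n\\end{verbatim}\nThe noisy signals are then passed to the SSI-based OMA step, which we treat here as a black box yielding the identified eigenvalues and mode shapes.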
\n\t\n\t\\subsection{Continuous Bayesian model updating}\n\t\\label{subsec: BMU results_chapter_6}\n\tInitially, we demonstrate how the Bayesian framework performs in learning the parameters of the deterioration model on the basis of the available SHM modal data. In this work, the model predicting the eigenvalues and mode shapes for the Bayesian updating process is the same FE model as the one described in Section \\ref{subsec: Synthetic} for the creation of the noise-contaminated synthetic data. Despite the addition of artificial noise, the adoption of the same model constitutes a so-called inverse crime \\cite{inverse_crime}. This is a built-in feature of standard preposterior analysis.\n\t\n\tWe draw samples of the deterioration parameters, defining the evolution of sample deterioration curves. For each of these deterioration curves we create one monitoring history, i.e., we generate one set of OMA-identified modal data every year over the fifty years of the lifetime. In this simple example, the structural properties are not assumed to be influenced by environmental (temperature, humidity) and operational (non-stationary effects due to traffic) variability. For this reason, we assume it suffices to utilize one set of modal property estimates per year. Using this data, we employ the sequential Bayesian deterioration model updating framework of Section \\ref{sec:Bayes}. \n\t\n\tThe sequential Bayesian analysis framework requires a substantial number of evaluations of the likelihood function, implying multiple forward runs of the FE model. Within a VoI framework, the Bayesian analysis must be performed numerous times. For this reason, a VoI analysis can quickly become intractable. To enable the VoI analysis, we employ simple surrogate models to replace the structural FE model, which are described in the following two subsections.\n\t\n\t\\subsubsection{Bridge system subject to scour deterioration - Global damage identification}\n\t In this assumed damage scenario, we are interested in identifying damage on a global scale, for which use of the OMA-identified eigenvalue data alone may be sufficient. The benefit is that eigenvalue data can be successfully identified from an OMA procedure, even when only a rather small number of accelerometers is employed on the structure. The sensor placement that we assume here is the one corresponding to Figure \\ref{f:benchmark_3}, with twelve employed sensors. This configuration is selected on the basis of engineering judgment, with a view to identifying the type of damage (local stiffness reduction) considered herein. Using the SSI algorithm, we identify the lower $N_m=6$ modes, which we then use for the updating. \n\t\n\t\\begin{figure}\n\t\t\\centerline{\n\t\t\t\\includegraphics[width=\\textwidth]{figures\/benchmark_3.pdf}\n\t\t}\n\t\t\\caption{Bridge system subject to scour damage}\n\t\t\\label{f:benchmark_3}\n\t\\end{figure}\n\t\n\tWe employ a surrogate model to replace the structural FE model for facilitating the Bayesian updating procedure. To this end, we create a fine uniform grid of values for $D(t)$; for each of these, we execute a modal analysis using the FE model and store the output eigenvalues. Eventually, we replace the modal analysis run of the structural FE model with a simple nearest neighbor lookup in the precomputed database.
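\n\t\n\tThis lookup-table surrogate can be sketched as follows (our own minimal implementation, assuming the grid of $D$ values and the corresponding FE eigenvalues have already been precomputed):\n\\begin{verbatim}\nimport numpy as np\n\nclass EigenvalueLookup:\n    def __init__(self, d_grid, eigvals):\n        # d_grid: (n,) grid of deterioration values D\n        # eigvals: (n, n_modes) FE eigenvalues precomputed on the grid\n        self.d_grid, self.eigvals = d_grid, eigvals\n\n    def __call__(self, d):\n        # Nearest-neighbor lookup replacing a full FE modal analysis.\n        i = int(np.argmin(np.abs(self.d_grid - d)))\n        return self.eigvals[i]\n\\end{verbatim}\nFor a sufficiently fine grid, the lookup error is negligible compared to the assumed prediction error, while each likelihood evaluation becomes practically free.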
\n\t\n\tFor illustrating the data sampling and updating process, we assume a scenario where the underlying ``true\" deterioration model corresponds to parameter values $A^*$=9.85$\\times$10\\textsuperscript{-4} and $B^*$ = 2.28. The ``true\" deterioration curve can be seen in black in all the subfigures of Figure \\ref{f:updating_scour}.\n\t\n\tFigure \\ref{f:parameters_learnt_scour} demonstrates how the distribution of the deterioration model parameters is updated, by comparing the prior PDF of $A$ and $B$ with the posterior PDF of $A$ and $B$ at year 25 and year 50. For this analysis, both factors $c_{\\lambda m}$ and $c_{\\Phi m}$ are assumed equal to $0.02$, i.e. we assume that the total prediction error causes up to two percent deviation from the nominal model-predicted values. 5000 MCMC samples are used for the Bayesian analysis at each time step. The posterior PDFs are obtained via a kernel density estimation using the 5000 posterior MCMC samples of the parameters. It is observed that, using one SHM data set per year, the uncertainty in the deterioration model parameters gradually decreases: the PDFs become narrower and peak around the underlying ``true'' values for which the data was created.\n\t\n\tFigure \\ref{f:updating_scour} contains the following: The mean estimated deterioration model together with its $90\\%$ credible interval in the prior case, obtained via MCS from the prior distribution of the uncertain parameters, is plotted in the left panel in green. In red we plot the posterior predictive mean models together with their $90\\%$ credible intervals, which are estimated with posterior MCMC samples using monitoring data up to three different time instances. For example, in the second column, we use the monitoring data of the first ten years to obtain the posterior distribution $\\pi_{\\text{pos}}(\\boldsymbol{\\theta} \\mid \\boldsymbol{\\widetilde{\\lambda}}_{1:10},\\boldsymbol{\\widetilde{\\Phi}}_{1:10})$, and then we use the posterior MCMC samples to predict the evolution of the deterioration model over the structural lifetime. We observe that already the data obtained during the first few years of the deterioration process (up to year 10) help in shifting the mean posterior model towards the underlying ``true'' deterioration curve; however, the posterior uncertainty in the estimation is still relatively large. The posterior uncertainty is reduced significantly as more SHM modal data gradually become available (year 25, year 50).\n\t\n\t\\begin{figure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/posterior_a_25.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/posterior_b_25.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/posterior_a_50.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/posterior_b_50.pdf} \n\t\t\\end{subfigure}\n\t\t\\caption{Prior PDF and posterior PDF at years 25 and 50 for the deterioration model parameters ($c_{\\lambda m}=c_{\\Phi m}=0.02$)}\n\t\t\\label{f:parameters_learnt_scour}\n\t\\end{figure}\n\t\\begin{figure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/prior_model_scour.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/year_10_updating.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/year_25_updating.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/year_50_updating.pdf} \n\t\t\\end{subfigure}\n\t\t\\caption{Sequential Bayesian learning of the scour deterioration model}\n\t\t\\label{f:updating_scour}\n\t\\end{figure}\n\n\t\n\t\\subsubsection{Bridge system subject to corrosion deterioration - Damage detection and localization}\n\t\\label{subsubsec: Updating_corrosion}
\n\t\n\t\\begin{figure}\n\t\t\\centerline{\n\t\t\t\\includegraphics[width=\\textwidth]{figures\/benchmark_2.pdf}\n\t\t}\n\t\t\\caption{Bridge system subject to corrosion damage in two locations}\n\t\t\\label{f:benchmark_2}\n\t\\end{figure}\n\t\n\tIn the assumed scenario with two potential corrosion damage locations, the employed Bayesian model updating framework should be able to both detect and localize damage. Therefore, both eigenvalue and mode shape displacement data should become available. As discussed in Section \\ref{sec:Bayes}, a relatively large number of sensors is required for an accurate measurement and representation of the mode shape displacements. The sensor placement that we assume here is the one corresponding to Figure \\ref{f:benchmark_2}, with 24 equally distributed accelerometers. By using a finite difference scheme, we can also obtain the mode shape curvatures, which are used instead of the mode shapes in the likelihood function; this appears to enhance the localization capabilities of the framework. Also in this case, we identify the lower $N_m=6$ modes.
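\n\t\n\tThe curvature computation can be sketched as follows (a minimal illustration using a second-order central difference; \\texttt{phi} is a mode shape sampled at the uniformly spaced sensor positions):\n\\begin{verbatim}\nimport numpy as np\n\ndef mode_shape_curvature(phi, dx):\n    # Central difference approximation of the curvature at the\n    # interior sensor locations; dx is the sensor spacing.\n    return (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2\n\\end{verbatim}\nSince curvature is the second spatial derivative, local stiffness changes tend to show up more sharply in the curvature than in the displacement mode shape itself.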
\n\t\n\tTo define a surrogate model, we create a two-dimensional grid of values for $D_1(t), D_2(t)$; for each of the grid points we run a modal analysis with the FE model and store the output eigenvalues and mode shape vectors. Eventually, we employ the following surrogates: For each of the eigenvalues, we fit a two-dimensional polynomial regression response surface model. For the mode shape displacement vector data, we replace the run of the structural FE model with a simple nearest neighbor lookup in the precomputed two-dimensional database. \n\t \n \tFor illustration purposes, we draw one sample $\\boldsymbol{\\theta}^*$, which corresponds to the underlying ``true\" deterioration parameter values $A_1^*=0.65$, $B_1^*=0.55$, $A_2^*=0.42$ and $B_2^*=0.48$ (the ``true\" deterioration curves can be seen in black in all the subfigures of Figure \\ref{f:updating}).\n\n\tFigures \\ref{f:parameters_learnt_1}, \\ref{f:parameters_learnt_2}, \\ref{f:parameters_learnt_3} demonstrate how the distribution of the deterioration models' parameters is updated, by comparing the prior PDFs with the posterior PDFs at three different time instances. Both factors $c_{\\lambda m}$ and $c_{\\Phi m}$ in the likelihood function are assumed equal to $0.02$. In Figure \\ref{f:updating}, we compare the underlying ``true\" deterioration model with the deterioration model estimated using MCS in the prior case, and with the ones estimated with 5000 posterior MCMC samples at three different time instances.\n\t\n\t\\begin{figure}[!ht]\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/posterior_a1_10.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/posterior_b1_10.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/posterior_a2_10.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/posterior_b2_10.pdf} \n\t\t\\end{subfigure}\n\t\t\\caption{Prior PDF and posterior PDF at year 10 for the deterioration models' parameters ($c_{\\lambda m}=c_{\\Phi m}=0.02$)}\n\t\t\\label{f:parameters_learnt_1}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/posterior_a1_25.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/posterior_b1_25.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/posterior_a2_25.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/posterior_b2_25.pdf} \n\t\t\\end{subfigure}\n\t\t\\caption{Prior PDF and posterior PDF at year 25 for the deterioration models' parameters ($c_{\\lambda m}=c_{\\Phi m}=0.02$)}\n\t\t\\label{f:parameters_learnt_2}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/posterior_a1_50.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/posterior_b1_50.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/posterior_a2_50.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/posterior_b2_50.pdf} \n\t\t\\end{subfigure}\n\t\t\\caption{Prior PDF and posterior PDF at year 50 for the deterioration models' parameters ($c_{\\lambda m}=c_{\\Phi m}=0.02$)}\n\t\t\\label{f:parameters_learnt_3}\n\t\\end{figure}\n\t\n\tSection \\ref{sec:Bayes} discusses the fact that the choice of the magnitude of the factors $c_{\\lambda m}$ and $c_{\\Phi m}$ for constructing the likelihood function can often be somewhat arbitrary, since usually very little is known about the magnitude of the total prediction error. Figure \\ref{f:parameters_learnt_cov_5} demonstrates how crucial this choice can be for the results of the Bayesian updating, by repeating the analysis for $c_{\\lambda m}=c_{\\Phi m}=0.05$.
Comparing Figure \\ref{f:parameters_learnt_cov_5} to Figure \\ref{f:parameters_learnt_3} (both at year 50), it can be clearly observed that the posterior distribution of the deterioration model parameters that one learns is significantly affected by the choice of these factors.\n\t\n\t\\begin{figure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/prior_D1.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/year_10_updating_D1.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/year_25_updating_D1.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/year_50_updating_D1.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/prior_D2.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/year_10_updating_D2.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/year_25_updating_D2.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/year_50_updating_D2.pdf} \n\t\t\\end{subfigure}\n\t\t\\caption{Sequential Bayesian learning of the two corrosion deterioration models}\n\t\t\\label{f:updating}\n\t\\end{figure}\n\t\n\t\\begin{figure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/posterior_a1_50_cov_5.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/posterior_b1_50_cov_5.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/posterior_a2_50_cov_5.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.24\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/posterior_b2_50_cov_5.pdf} \n\t\t\\end{subfigure}\n\t\t\\caption{Prior PDF and posterior PDF at year 50 for the deterioration models' parameters ($c_{\\lambda m}=c_{\\Phi m}=0.05$)}\n\t\t\\label{f:parameters_learnt_cov_5}\n\t\\end{figure}\n\t\n\t\\subsection{Time-dependent structural reliability and its updating using monitoring data}\n\n\tThe uncertain demand acting on the structure is modeled by the maximum load in a one-year time interval, which follows a Gumbel distribution (left panel of Figure \\ref{f:capacity_Pr_F_scour}). The parameters of the Gumbel distribution are chosen such that the probability of failure in the initial undamaged state is equal to $10^{-6}$ and the coefficient of variation of the random load is $20\\%$. \n\t\n\t\\subsubsection{Bridge system subject to scour deterioration}\n\tThe deterministic capacity curve $R(D(\\boldsymbol{\\theta},t))$ of the damaged structure for any realization of the scour deterioration $D(\\boldsymbol{\\theta},t)$ can be seen in the middle panel of Figure \\ref{f:capacity_Pr_F_scour}.
To determine this curve, we consider that, when scour damage occurs at the middle support, the critical quantity that increases is the normal stress at the middle of the second, slightly longer, midspan. We create a fine one-dimensional grid of possible values as input for $D(\\boldsymbol{\\theta},t)$; for each of those, we run a static analysis of our model and evaluate the loss of load-bearing capacity of the structure relative to the initial undamaged state. \n\t\\begin{figure}\n\t\t\\begin{subfigure}{.33\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/Gumbel.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.33\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/capacity_curve.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.33\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.\\linewidth]{figures\/prior_PrF_hazard_rate_scour.pdf} \n\t\t\\end{subfigure}\n\t\t\\caption{Left: CDF of the Gumbel distribution for the load, with location $a_n = 0.0509$ and scale $b_n= 0.297$. Middle: structural capacity as a function of the scour deterioration. Right: time-dependent structural reliability curves estimated with the prior model in the scour deterioration case.}\n\t\t\\label{f:capacity_Pr_F_scour}\n\t\\end{figure}\n\n\tIn the right panel of Figure \\ref{f:capacity_Pr_F_scour} we plot the time-dependent accumulated probability of failure and the hazard function in the prior case, together with the 95\\% credible intervals, estimated using $10^4$ prior samples. Because of the skewness of the assumed prior deterioration model, the mean estimated curves are not contained within the 95\\% credible intervals.
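\n\t\n\tA simplified sketch of such a prior estimate of the accumulated probability of failure is given below. This is our own illustration, under the additional assumption of independent annual maximum loads; \\texttt{D} denotes the deterioration model and \\texttt{R} the capacity curve, both in the normalized units of Figure \\ref{f:capacity_Pr_F_scour}:\n\\begin{verbatim}\nimport numpy as np\n\ndef accumulated_pf(theta_samples, D, R, a_n=0.0509, b_n=0.297, T=50):\n    # Gumbel CDF of the annual maximum load.\n    cdf = lambda x: np.exp(-np.exp(-(x - a_n) / b_n))\n    pf = []\n    for t in range(1, T + 1):\n        # Probability of surviving every year 1..t, averaged over theta.\n        surv = [np.prod([cdf(R(D(th, tau))) for tau in range(1, t + 1)])\n                for th in theta_samples]\n        pf.append(1.0 - np.mean(surv))\n    return np.array(pf)\n\\end{verbatim}\nThe hazard function can then be approximated from the accumulated probability of failure, e.g. as $h(t)\\approx[P_F(t)-P_F(t-1)]\/[1-P_F(t-1)]$ for yearly intervals.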
\n\t\n\t\\subsubsection{Bridge system subject to corrosion deterioration}\n\tWhen damage (stiffness reduction) occurs in the elements at the bottom of each midspan, the quantities that increase critically are the normal stresses at the top of each midspan. We create a two-dimensional grid of possible values of the two corrosion deteriorations $D_1$ and $D_2$. For each of those possible combinations, we run a static analysis with our model and evaluate the loss of load-bearing capacity of the bridge structure relative to the undamaged state. Eventually, we fit a two-dimensional polynomial regression response surface that describes $R(\\boldsymbol{D}(\\boldsymbol{\\theta},t))$; it can be seen in the left panel of Figure \\ref{f:SR_corrosion_example}. \n\t\n\tAs presented in Section \\ref{subsec:SR_updating}, learning the parameters of the deterioration models, and the reduction of the uncertainty in their estimation through the sequential acquisition of SHM modal data, affects the estimation of the time-dependent structural reliability. In Figure \\ref{f:SR_corrosion_example}, we plot in green the accumulated probability of failure and the hazard rate of the bridge structure in the case of using the prior deterioration model, and we compare them with the red plots of the accumulated probability of failure and the hazard rate conditional on the continuous monitoring data (1 data set per year), which correspond to the underlying ``true'' deterioration models described by $A_1^*=0.65$, $B_1^*=0.55$, $A_2^*=0.42$ and $B_2^*=0.48$. The prior estimates are obtained with 5000 Monte Carlo samples following equations (\\ref{MCS}), (\\ref{hazard_prior}). The posterior estimates are obtained via equations (\\ref{accumulated_posterior}), (\\ref{hazard_posterior}) using 5000 MCMC samples at each time step. The $95\\%$ credible intervals are computed using the Monte Carlo prior samples in the prior case, and the MCMC posterior samples in the posterior case. It is observed that the uncertainty in the estimation of the structural reliability is reduced in the posterior case. This reduction of the uncertainty and the updated estimate of the structural reliability form the basis for the VoI analysis.\n\t\n\t\\begin{figure}[ht]\n\t\t\\begin{subfigure}{.33\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=0.90\\linewidth]{figures\/capacity_response_surface_2.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.33\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.05\\linewidth]{figures\/prior_filtering_PrF_5000_MCMC.pdf} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{.33\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\includegraphics[width=1.05\\linewidth]{figures\/CORRECT_hazard_post_prior_5000_MCMC.pdf} \n\t\t\\end{subfigure}\n\t\t\\caption{Left: polynomial regression response surface for the structural capacity as a function of the corrosion deterioration. Middle and right: time-dependent structural reliability curves (accumulated probability of failure and hazard rate) in the prior\/posterior corrosion case.}\n\t\t\\label{f:SR_corrosion_example}\n\t\\end{figure}\n\t\n\t\\subsection{VoI analysis}\n\t\\label{subsubsec: VOI_results}\n\tThe VoI is computed with equation (\\ref{VOI}), following the Bayesian preposterior decision analysis framework presented in Section \\ref{sec:LCC}. The expected total life-cycle costs, $\\boldsymbol{\\text{E}}_{\\boldsymbol{\\theta}}[C_{\\text{tot}} \\mid w]$ in the prior case and $\\boldsymbol{\\text{E}}_{\\boldsymbol{\\theta}, \\boldsymbol{Z}}[C_{\\text{tot}}|w]$ in the preposterior case, are both computed with MCS. In the preposterior case, as already explained in Section \\ref{subsec: Preposterior_cost}, the system state space $\\boldsymbol{\\theta}$ and the monitoring data space $\\boldsymbol{Z}$ are jointly sampled.\n\tFor each full history of modal data, one sequential Bayesian posterior analysis has to be performed, which is a costly procedure by itself. It is clear that such an analysis can be very computationally expensive; therefore, some considerations on the available computational budget, and how to distribute it, have to be made in advance. The computational cost of the VoI analysis is driven by the number of MCS samples used in the expected life-cycle cost computation, by the corresponding synthetic monitoring data creation, and by the computational cost of the method employed for the sequential Bayesian updating.\n\n\tFor our investigation, we assume $\\hat{c}_F = 10^7$\\euro{}; for the repair cost $\\hat{c}_R$, we investigate the different ratios $\\frac{\\hat{c}_R}{\\hat{c}_F}=[10^{-1},10^{-2},10^{-3}]$ and calculate the VoI for each of them. The discount rate is taken as $r=2\\%$.\n\t\n\tThe stochastic life-cycle optimization problem of equation (\\ref{optimization}) is solved through an exhaustive search among a large discrete set of values of the heuristic parameter (the threshold at which a repair is performed).
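\n\t\n\tIn code form, this exhaustive search and the subsequent VoI evaluation can be sketched as follows (a minimal illustration with our own names; \\texttt{cost\\_prior} and \\texttt{cost\\_prepost} would wrap the MCS estimates of the expected total life-cycle cost for a given threshold $w$ in the prior and preposterior case, respectively):\n\\begin{verbatim}\nimport numpy as np\n\ndef optimize_heuristic(expected_cost, w_grid):\n    # Exhaustive search over a discrete grid of repair thresholds.\n    costs = np.array([expected_cost(w) for w in w_grid])\n    i = int(np.argmin(costs))\n    return w_grid[i], costs[i]\n\ndef value_of_information(cost_prior, cost_prepost, w_grid):\n    _, c0 = optimize_heuristic(cost_prior, w_grid)    # prior optimum\n    _, c1 = optimize_heuristic(cost_prepost, w_grid)  # preposterior optimum\n    return c0 - c1\n\\end{verbatim}\nAs a consistency check against the results reported below: for the scour case at cost ratio $10^{-3}$, the optimal expected costs of 5924 (prior) and 1109 (preposterior) yield a VoI of $5924-1109=4815$, cf. Tables \\ref{table:LCC_scour}, \\ref{table:LCC_SHM_scour} and \\ref{table:VOI_scour}.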
\n\t\n\t\\subsubsection{VoI results for bridge system subject to scour deterioration}\n\t\n\tFor this example, we draw 1000 samples of $\\boldsymbol{\\theta}$, which are used in both the prior and the preposterior analysis. In the preposterior case, for each $\\boldsymbol{\\theta}$ sample we create one continuous set of identified modal data $\\boldsymbol{Z}_{1:50}$. For the 1000 different sequential Bayesian analyses that have to be performed, we employ the adaptive MCMC algorithm. For the estimation of the different posterior accumulated probabilities of failure in equation (\\ref{accumulated_posterior}), 2000 posterior MCMC samples are used. \n\n\t\n\tTables \\ref{table:LCC_scour} and \\ref{table:LCC_SHM_scour} summarize the results of the life-cycle optimization, documenting the optimal value of the heuristic parameter $w^*$ and the optimal expected total life-cycle costs that correspond to $w^*$. Table \\ref{table:LCC_scour} also documents the optimal time for a repair action in the prior case. This is not documented in Table \\ref{table:LCC_SHM_scour}, since in the preposterior case there is no single optimal $t_{repair}$ value; $t_{repair}$ varies with each sample $\\boldsymbol{\\theta}$ and the corresponding monitoring history. \n\t\n\tTable \\ref{table:VOI_scour} documents the resulting VoI values that we obtain with the 1000 samples via equation (\\ref{VOI}) for the three different cost ratios, while Table \\ref{table:VPPI_scour} reports the VPPI values obtained via equation (\\ref{VPPI}), related to the hypothetical case when we learn the condition of the structure perfectly from the SHM system. We also include the CV of the mean VoI and VPPI estimates, which quantifies the uncertainty in the estimates obtained via MCS. In cost ratio cases for which the optimal action in the prior case is not to perform any repair, the VoI estimate has quite a large variability. This is because only a few samples in the preposterior analysis lead to a different optimal $t_{repair}$ than in the prior case, which is an indication that a larger number of Monte Carlo samples or more efficient sampling techniques (e.g. importance sampling) should be used to reduce the variance. It is important to note that equation (\\ref{VPPI}) for computing the VPPI can easily be evaluated even for a very large number of MC samples, which would reduce the variability of the estimates shown here. \n\t\n\tFor all the cost ratio cases, the VoI is positive, which indicates a potential benefit of installing an SHM system on the deteriorating bridge structure. It is interesting to compare the obtained VoI values to the VPPI values.
We observe that in this example the VoI from SHM extracted via Bayesian model updating is close to optimal, as it provides almost the full VPPI value.\n\t\n\t\\begin{table}\n\t\t\\caption{Results of preposterior Bayesian decision analysis for the scour example}\n\t\t\\begin{subtable}{.5\\textwidth}\n\t\t\t\\footnotesize\n\t\t\t\\centering\n\t\t\t\\caption{Life-cycle optimization in the prior case.}\n\t\t\t\\begin{tabular}{cccc}\n\t\t\t\t$\\frac{\\hat{c}_R}{\\hat{c}_F}$&$w_0^*$&$\\boldsymbol{\\text{E}}[C_{\\text{tot}}|w_0^*]$&$t_{repair}$\\\\\\hline\n\t\t\t\t$10^{-1}$&$\\ge2\\times10^{-3}$&45395&no repair\\\\\n\t\t\t\t$10^{-2}$&$\\ge2\\times10^{-3}$&45395&no repair\\\\\n\t\t\t\t$10^{-3}$&2$\\times10^{-5}$&5924 &year 31\\\\\\hline\n\t\t\t\\end{tabular}\n\t\t\t\\label{table:LCC_scour} \n\t\t\\end{subtable}\n\t\t\\begin{subtable}{.5\\textwidth}\n\t\t\n\t\t\n\t\t\t\\footnotesize\n\t\t\t\\centering\n\t\t\t\\caption{Life-cycle optimization in the preposterior case.} \n\t\t\t\\begin{tabular}{ccc}\n\t\t\t\t$\\frac{\\hat{c}_R}{\\hat{c}_F}$&$w_{mon}^*$&$\\boldsymbol{\\text{E}}[C_{\\text{tot}}|w_{mon}^*]$\\\\\\hline\n\t\t\t\t$10^{-1}$&2.1$\\times10^{-2}$&12552\\\\\n\t\t\t\t$10^{-2}$&1.2$\\times10^{-3}$&3125\\\\\n\t\t\t\t$10^{-3}$&9.9$\\times10^{-5}$&1109\\\\\\hline\n\t\t\t\\end{tabular}\n\t\t\t\\label{table:LCC_SHM_scour} \n\t\t\\end{subtable}\n\t\n\t\t\\begin{subtable}{.5\\textwidth}\n\t\t\t\\footnotesize\n\t\t\t\\centering\n\t\t\t\\caption{Value of information (VoI)}\n\t\t\t\\begin{tabular}{cc}\n\t\t\t\t$\\frac{\\hat{c}_R}{\\hat{c}_F}$&VoI (CV)\\\\\\hline\n\t\t\t\t$10^{-1}$&32843 (0.34)\\\\\n\t\t\t\t$10^{-2}$&42270 (0.30)\\\\\n\t\t\t\t$10^{-3}$&4815 (0.02)\\\\\\hline\n\t\t\t\\end{tabular}\n\t\t\t\\label{table:VOI_scour} \n\t\t\\end{subtable}\n\t\t\\begin{subtable}{.5\\textwidth}\n\t\t\t\\footnotesize\n\t\t\t\\centering\n\t\t\t\\caption{Value of partial perfect information (VPPI)}\n\t\t\t\\begin{tabular}{cc}\n\t\t\t\t$\\frac{\\hat{c}_R}{\\hat{c}_F}$&VPPI (CV)\\\\\\hline\n\t\t\t\t$10^{-1}$&35013 (0.23)\\\\\n\t\t\t\t$10^{-2}$&42717 (0.21)\\\\\n\t\t\t\t$10^{-3}$&4918 (0.02)\\\\\\hline\n\t\t\t\\end{tabular}\n\t\t\t\\label{table:VPPI_scour} \n\t\t\\end{subtable}\n\t\\end{table}\n\t\n\t\\subsubsection{VoI results for bridge system subject to corrosion deterioration}\n\t\\label{subsubsec: VoI_results_corrosion}\n\t\n\tFor this second example, we draw 2000 samples of $\\boldsymbol{\\theta}$, which are used in both the prior and the preposterior analysis. In the preposterior case, for each $\\boldsymbol{\\theta}$ sample we create one continuous set of identified modal data $\\boldsymbol{Z}_{1:50}$. For the 2000 different sequential Bayesian analyses that have to be performed, we employ the Laplace approximation method of Section \\ref{subsubsec: Laplace}, which introduces an approximation error in the posterior solution, especially in the initial years when the data set is still small, but is computationally much faster than an MCMC solution. For the estimation of the posterior accumulated probability of failure in equation (\\ref{accumulated_posterior}), 10000 samples are drawn from the approximate multivariate Gaussian posterior distribution.\n\t\n\tThe computed VoI and VPPI estimates can be seen in Table \\ref{table:VOI_corrosion}. We observe that the VoI is 0 for the cost ratio $\\frac{\\hat{c}_R}{\\hat{c}_F}=10^{-1}$, which means that one does not obtain any benefit from the data of the SHM system.
This is related to the fact that, for this cost ratio, the optimal decision is to not perform a repair action in the lifespan of the bridge, in both the prior and all the preposterior samples, since at all time steps the cost of a repair is much larger than the risk of failure cost. For the cost ratio $\\frac{\\hat{c}_R}{\\hat{c}_F}=10^{-2}$, we observe that the VoI from SHM extracted via Bayesian model updating is not optimal, as it does not provide the full VPPI value but $51\\%$ of it, while for the cost ratio $\\frac{\\hat{c}_R}{\\hat{c}_F}=10^{-3}$ it only provides $22\\%$ of the VPPI value.\n\t\n\t\\begin{table}\n\t\\caption{Results of preposterior Bayesian decision analysis for the corrosion example}\n\t\\begin{subtable}{.5\\textwidth}\n\t\t\\footnotesize\n\t\t\\centering\n\t\t\\caption{Life-cycle optimization in the prior case.}\n\t\t\\begin{tabular}{cccc}\n\t\t\t$\\frac{\\hat{c}_R}{\\hat{c}_F}$&$w_0^*$&$\\boldsymbol{\\text{E}}[C_{\\text{tot}}|w_0^*]$&$t_{repair}$\\\\\\hline\n\t\t\t$10^{-1}$&$\\ge2.8\\times10^{-4}$&$26792$&no repair\\\\\n\t\t\t$10^{-2}$&$\\ge2.8\\times10^{-4}$&$26792$&no repair\\\\\n\t\t\t$10^{-3}$&$2\\times10^{-5}$&$9308$&year $8$\\\\\\hline\n\t\t\\end{tabular}\n\t\t\\label{table:LCC} \n\t\\end{subtable}\n\t\\begin{subtable}{.5\\textwidth}\n\t\n\t\n\t\t\\footnotesize\n\t\t\\centering\n\t\t\\caption{Life-cycle optimization in the preposterior case.}\n\t\t\\begin{tabular}{ccc}\n\t\t\t$\\frac{\\hat{c}_R}{\\hat{c}_F}$&$w_{mon}^*$&$\\boldsymbol{\\text{E}}[C_{\\text{tot}}|w_{mon}^*]$\\\\\\hline\n\t\t\t$10^{-1}$&$\\ge9\\times10^{-3}$&$26792$\\\\\n\t\t\t$10^{-2}$&$7.5\\times10^{-4}$&$25334$\\\\\n\t\t\t$10^{-3}$&$1.83\\times10^{-5}$&$9200$\\\\\\hline\n\t\t\\end{tabular}\n\t\t\\label{table:LCC_SHM} \n\t\\end{subtable}\n\n\t\\begin{subtable}{.5\\textwidth}\n\t\t\\footnotesize\n\t\t\\centering\n\t\t\\caption{Value of information (VoI)}\n\t\t\\begin{tabular}{cc}\n\t\t\t$\\frac{\\hat{c}_R}{\\hat{c}_F}$&VoI (CV)\\\\\\hline\n\t\t\t$10^{-1}$&0\\\\\n\t\t\t$10^{-2}$&1458 (0.42)\\\\\n\t\t\t$10^{-3}$&108 (0.08)\\\\\\hline\n\t\t\\end{tabular}\n\t\t\\label{table:VOI} \n\t\\end{subtable}\n\t\\begin{subtable}{.5\\textwidth}\n\t\t\\footnotesize\n\t\t\\centering\n\t\t\\caption{Value of partial perfect information (VPPI)}\n\t\t\\begin{tabular}{cc}\n\t\t\t$\\frac{\\hat{c}_R}{\\hat{c}_F}$&VPPI (CV)\\\\\\hline\n\t\t\t$10^{-1}$& 132 (0.30)\\\\\n\t\t\t$10^{-2}$& 2871 (0.22)\\\\\n\t\t\t$10^{-3}$& 497 (0.05)\\\\\\hline\n\t\t\\end{tabular}\n\t\t\\label{table:VPPI} \n\t\\end{subtable}\n\t\\label{table:VOI_corrosion}\n\t\\end{table}\n\n\tAt a glance, looking at the VPPI values, one could claim that for both constructed examples the upper limit to how much value the SHM information can have for supporting the single repair decision is small; note that this value does not yet account for the cost of the SHM system itself. In principle, if the cost of obtaining this information, i.e. the cost of the SHM system (installation, maintenance, etc.), is higher than the VPPI, then no further investigation into a VoI analysis would make sense.
The resulting VPPI and VoI estimates for these simplified example case studies are also affected by the fact that only a single repair action case is explored.\n\n\t\\subsubsection{VoI results - Sensor placement study}\n\tThe purpose of this section is to demonstrate that the presented VoI analysis can be employed as a formal decision analysis tool for performing various parametric studies related to different choices in designing the SHM system and performing the Bayesian model updating procedure.\n\t\n\tOne critical choice when designing an SHM system is the number and position of the sensors to be employed on the structure. One could employ the proposed VoI analysis to perform optimal sensor placement studies for a deteriorating structural system. Each sensor arrangement choice will result in a VoI value, and the arrangement which leads to the highest VoI would be the preferred one.\n\t\n\tHerein, we demonstrate this with the use of the second example of the bridge system subject to corrosion deterioration at two locations. For the decision problem, we now fix the cost of failure to $\\hat{c}_F = 10^7$\\euro{} and the cost of repair to $\\hat{c}_R = 3.5\\times10^4$\\euro{}. We consider the following two different arrangements of the sensors: i) 24 uniformly distributed accelerometers along the structure, ii) 12 uniformly distributed accelerometers along the structure. In both cases, the VoI analysis is performed by drawing 1000 samples of $\\boldsymbol{\\theta}$.\n\t\n\tThe results in Table \\ref{table:VOI_corrosion_parameter_study} make evident that, when the structure is subjected to deterioration at two different damage locations, the number of sensors, and consequently the quality of the mode shape displacement or curvature information that one obtains, clearly affects the BMU results and therefore leads to a notable difference in the heuristic-based life-cycle optimization and in the resulting VoI.\n\t\n\t\\begin{table}[ht]\n\t\\caption{Parametric study for the effect of the number of sensors on VoI result}\n\t\\begin{subtable}{1\\textwidth}\n\t\t\\footnotesize\n\t\t\\centering\n\t\t\\caption{Life-cycle optimization in the prior case.}\n\t \\begin{tabular}{cc}\n\t $w_0^*$&$t_{repair}^*$\\\\\\hline\n\t $6.1\\times10^{-5}$&21\\\\\\hline\n\t \\end{tabular}\n\t \\vspace{5pt}\n\t\t\\label{table:LCC_prior_parametric} \n\t\\end{subtable}\n\t\\begin{subtable}{.5\\textwidth}\n\t\t\\footnotesize\n\t\t\\centering\n\t\t\\caption{Life-cycle optimization in the preposterior case.}\n\t\t\\begin{tabular}{cc}\n\t\t\tsensors&$w_{mon}^*$\\\\\\hline\n\t\t\t24&$1\\times10^{-4}$\\\\\n\t\t\t12&$3.2\\times10^{-4}$\\\\\\hline\n\t\t\\end{tabular}\n\t\t\\label{table:LCC_opt_parametric} \n\t\\end{subtable}\n\t\\begin{subtable}{.5\\textwidth}\n\t\t\\footnotesize\n\t\t\\centering\n\t\t\\caption{Effect of number of sensors on the VoI}\n\t \\begin{tabular}{c|ccc}\n\t\t VPPI (CV)&sensors&VoI (CV)&$\\frac{\\text{VoI}}{\\text{VPPI}}$\\\\\\hline\n\t\t 7681 (2.6\\%)&24&4614 (5.3\\%)&60\\%\\\\\n\t\t &12&2711 (15\\%)&35\\%\\\\\\hline\n\t \\end{tabular}\n\t\t\\label{table:VOI_parametric} \n\t\\end{subtable}\n\t\\label{table:VOI_corrosion_parameter_study}\n\t\\end{table}\n\t\n\t\\section{Concluding remarks}\n\tThis paper investigates the quantification of the VoI yielded via adoption of SHM systems acting in long-term prognostic mode for cases of deterioration. A preposterior Bayesian decision analysis for quantifying the VoI, specifically tailored for application on the employed numerical benchmark structural model, is presented.
The modeling of the acquired SHM data is done in a realistic way, following a state-of-the-art operational modal analysis procedure. The data is used within a Bayesian model updating framework, implemented in a sequential setting, to continuously update the uncertain structural condition, which subsequently leads to an updated estimate of the structural reliability. A heuristic-based solution to the simplified decision problem is provided for finding the optimal time to perform a single repair action, which might be needed during the lifetime of the structure. We discuss specific computational aspects of a VoI calculation. The VoI analysis requires integration over the monitoring data, which is here modeled in a realistic way, adding an extra computationally expensive layer to the analysis. In addition to the VoI solution, an upper limit to the VoI is also provided through the value of partial perfect information, which relates to the hypothetical situation of perfect knowledge of the system condition. It should be noted that the resulting VoI estimates are affected by the fact that only a single repair action case is explored. In the present exemplary analysis, we do not take into account dependence on varying environmental effects (e.g. dependence on temperature). The VoI analysis and results presented herein focus on demonstrating, for the first time, a VoI analysis on the full SHM chain, from data acquisition to utilization of a structural model for the purpose of updating and reliability calculation. \n\t\n\t\\section*{Declaration of competing interest}\n\tThe authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.\n\t\n\t\\section*{Acknowledgments}\n\tThe authors would like to gratefully acknowledge the support of the TUM Institute for Advanced Study through the Hans Fischer Fellowship.\n\t\n\t","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Background}\n\\label{sec:background}\n\nOur work is built upon prior literature in inverse differential kinematics and null space projection based redundancy resolution.\n\n\\subsection{Inverse Differential Kinematics}\nLet $\\textit{\\textbf{q}} \\in\\mathbb{R}^n$ denote the joint configuration of a robot with\n$n$ degrees of freedom. Let $\\textit{\\textbf{x}} \\in\\mathbb{R}^m$ denote the vector of task variables\nin a suitably defined $m$-dimensional task space.\nThe first-order \\textit{differential kinematics} is usually expressed as\n\\begin{equation}\\label{eq:differen}\n \\dot{\\textit{\\textbf{x}}} =\\textbf{\\text{J}}(\\textit{\\textbf{q}}) \\dot{\\textit{\\textbf{q}}}\n\\end{equation}\nwhere $\\dot{\\textit{\\textbf{x}}}$, $\\dot{\\textit{\\textbf{q}}}$ are vectors of task and joint velocities, respectively, and $\\textbf{J}(\\textit{\\textbf{q}})$ is the $m\\times n$ Jacobian matrix.\nThe dependence on $\\textit{\\textbf{q}}$ is omitted hereafter for notation compactness.\n\nFor a redundant robot one has $n > m$, \ni.e. the robot has an $(n\\!-\\!m)$-dimensional \\textit{redundancy space} for subtasking.
\nThen the general \\textit{inverse differential kinematics} solution of Eq.~\\ref{eq:differen} is usually expressed as \n\\begin{equation}\\label{eq:inverseKinematics}\n \\dot{\\textit{\\textbf{q}}} =\\textbf{J}^{+} \\dot{\\textit{\\textbf{x}}} +(\\textbf{I}- \\textbf{J}^{+}\\textbf{J} ) \\dot{\\textit{\\textbf{q}}}_0 \n\\end{equation}\n\n\\noindent where $\\textbf{J}^{+} \\in \\mathbb{R}^{n\\times m}$ is the pseudoinverse matrix of $\\textbf{J}$. $\\textbf{N}(\\textbf{J})=\\textbf{I}-\\textbf{J}^{+}\\textbf{J} \\in \\mathbb{R}^{n \\times n}$ is an operator projecting any arbitrary joint velocity $\\dot{\\textit{\\textbf{q}}}_0 \\in \\mathbb{R}^n$ into the {null space} of $\\textbf{J}$, i.e. the robot redundancy space.\n\n\\subsection{Null Space Projection based Redundancy Resolution}\n\nThe projection of $\\dot{\\textit{\\textbf{q}}}_0$ onto the null space ensures no effect on the primary task. Under this premise, the early works \\cite{hanafusa1981analysis,maciejewski1985obstacle,nakamura1987task} have proposed the control framework of redundancy resolution with task priority, \nwhich essentially consists of computing a $\\dot{\\textit{\\textbf{q}}}_0$ that suitably enforces a secondary task\nin the null space of the primary task.\n\nWith reference to Eq.~\\ref{eq:inverseKinematics}, the inverse kinematics solution considering two orders of task priority (indexed by 1 and 2 for the primary and secondary task, respectively) can then be expressed as \n\\begin{equation}\\label{eq:controlLaw_0}\n \\dot{\\textit{\\textbf{q}}} = \\textbf{J}^{+}_{1}\\dot{\\textit{\\textbf{x}}_{1}} + (\\textbf{I}- \\textbf{J}^{+}_1\\textbf{J}_{1} ) [\\textbf{J}_2(\\textbf{I}-\\textbf{J}^{+}_1\\textbf{J}_1)]^{+} ( \\dot{\\textit{\\textbf{x}}}_2 -\\textbf{J}_2\\textbf{J}^{+}_1 \\dot{\\textit{\\textbf{x}}}_1 )\n\\end{equation}\n\n\\noindent where $\\dot{\\textit{\\textbf{x}}}_1$, $\\dot{\\textit{\\textbf{x}}}_2$ and $\\textbf{J}_1$, $\\textbf{J}_2$ are the task velocities and Jacobian matrices of the primary and secondary task, respectively.\n\nAs illustrated in Fig.~\\ref{fig:overview},\nwe build our control framework upon Eq.~\\ref{eq:controlLaw_0}: we model a virtual dynamic secondary task for subtasks and then deploy it in the null space of the primary task, such that all subtasks can be suitably executed as well as possible without disturbing\nthe primary task.\n\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\scriptsize\n\t\\def\\svgwidth{1\\columnwidth}\n\t\\import{.\/background\/figures\/}{overview2.pdf_tex}\n\n\t\\caption{Overview of the Approach.}\n\t\\label{fig:overview}\n\\end{figure}
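\n\nFor concreteness, Eq.~\\ref{eq:controlLaw_0} can be implemented directly with the Moore--Penrose pseudoinverse. The sketch below is our own minimal illustration, not part of the original formulation:\n\\begin{verbatim}\nimport numpy as np\n\ndef two_priority_qdot(J1, x1_dot, J2, x2_dot):\n    # Task-priority inverse kinematics with null space projection.\n    J1p = np.linalg.pinv(J1)\n    N1 = np.eye(J1.shape[1]) - J1p @ J1  # null-space projector of J1\n    J2t = J2 @ N1                        # secondary Jacobian in that space\n    return (J1p @ x1_dot\n            + N1 @ np.linalg.pinv(J2t) @ (x2_dot - J2 @ J1p @ x1_dot))\n\\end{verbatim}\nThe secondary task only acts through the projector $\\textbf{I}-\\textbf{J}^{+}_1\\textbf{J}_1$, so any component of it that conflicts with the primary task is filtered out.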
\n\\section{Appendix}\n\\label{sec:app} \n\n\\subsection{Winner-Take-All Based Updating Algorithm}\\label{subsec:WTA}\n\n\\begin{algorithm}[!h]\n\\label{alg:Framwork} \n\\caption{$\\dot{\\textbf{\\text{A}}} = \\textbf{W}(\\textbf{\\text{A}},\\textbf{S},\\textbf{\\text{P}})$} \n\\hspace*{0.02in} {\\bf Input:} \n$\\textbf{\\text{A}},\\textbf{S},\\textbf{\\text{P}}$\\\\\n\\hspace*{0.02in} {\\bf Output:} \n$\\dot{\\textbf{\\text{A}}}$\n\\begin{algorithmic}[1]\n\\State $\\dot{\\textbf{\\text{A}}}=\\textbf{\\text{P}}\\times \\textbf{S} $\n\\For{each row $\\dot{\\textbf{\\text{A}}}_i$ in $\\dot{\\textbf{\\text{A}}}$}\n \n \\State $\\omega = \\text{argmax}(\\dot{\\textbf{\\text{A}}}_i)$ \n \\If{$\\alpha_{i\\omega}=\\gamma$}\n \\State $\\dot{\\textbf{\\text{A}}}_i\\leftarrow \\bf{0}$\n \n \\Else\n \\State $v = \\text{argmax}(\\dot{\\textbf{\\text{A}}}_i- \\{ \\dot{\\alpha}_{i\\omega} \\} )$\n \\State $z = (\\dot{\\alpha}_{i\\omega}+\\dot{\\alpha}_{iv})\/2$\n \\State $\\dot{\\textbf{\\text{A}}}_i \\leftarrow \\dot{\\textbf{\\text{A}}}_i-z$\n \\State $s\\leftarrow0$\n \\For{$j\\leq l$} \n \\If{$\\dot{\\alpha}_{ij}>0$ and ${\\alpha}_{ij}\\neq 0$ } \n \\State $s\\leftarrow s+\\dot{\\alpha}_{ij}$\n \\EndIf\n \\EndFor\n \\State $\\dot{\\alpha}_{i\\omega} \\leftarrow \\dot{\\alpha}_{i\\omega}-s$\n \\EndIf\n\\EndFor\n\\State \\Return $\\dot{\\textbf{\\text{A}}}$\n\\end{algorithmic}\n\\end{algorithm}\n\nAlg.~1 first yields a preliminary updating rate by multiplying the priority matrix $\\textbf{\\text{P}}$ with the task status matrix $\\textbf{S}$ (line 1). Then, for each row $\\dot{\\textbf{\\text{A}}}_i$, if the entry $\\alpha_{i\\omega}$ in $\\textbf{\\text{A}}_i$ corresponding to the greatest update in $\\dot{\\textbf{\\text{A}}}_i$ is already saturated at its maximum value $\\gamma$, then $\\textbf{\\text{A}}_i$ will not be updated, which is achieved by setting $\\dot{\\textbf{\\text{A}}}_i$ to $\\textbf{0}$ (lines 3--5).\n\nOtherwise, the algorithm first lowers $\\dot{\\textbf{\\text{A}}}_i$ to a baseline by subtracting the average of the two largest entries (lines 7--9). This ensures that only one updating rate in $\\dot{\\textbf{\\text{A}}}_i$ is positive, i.e. only one weight in $\\textbf{\\text{A}}_i$ will increase.\nThen, in order to ensure that the updating rates sum to 0, we calculate the sum of the currently effective updating rates and subtract it from the maximum updating rate (lines 10--14):\n\n\\[\\begin{array}{l}\ns = {{\\dot \\alpha }_{i\\omega }} + T\\\\\nT = \\sum\\limits_{j \\ne \\omega \\,\\wedge\\, {{\\dot \\alpha }_{ij}} > 0 \\,\\wedge\\, {\\alpha _{ij}} \\ne 0} {{{\\dot \\alpha }_{ij}}} \n\\end{array}\\]\n\nHere $s$ represents the sum of all valid updating rates ${{\\dot \\alpha }_{ij}}$ in the row. After the update of line 14,\n\n\\[{{\\dot \\alpha }_{i\\omega }} \\leftarrow {{\\dot \\alpha }_{i\\omega }} - s = {{\\dot \\alpha }_{i\\omega }} - ({{\\dot \\alpha }_{i\\omega }} + T) = - T\\]\n\n\\[{{\\dot \\alpha }_{i\\omega }} + \\sum\\limits_{j \\ne \\omega \\,\\wedge\\, {{\\dot \\alpha }_{ij}} > 0 \\,\\wedge\\, {\\alpha _{ij}} \\ne 0} {{{\\dot \\alpha }_{ij}}} = - T + T = 0\\]\n\nThat is, the sum of all valid updating rates is 0. Therefore, after the matrix $\\textbf{\\text{A}}$ is updated, the sum of its entries remains unchanged. This ensures that the total weight will not be cleared. \n\n\\subsection{Weight Convergence and System Stability}\nThis section presents a detailed proof that our approach converges each weight in $\\textbf{\\text{A}}$ to a stable state along both the redundancy and the subtask dimension.\n\nSuppose two elementary subtasks $f_p$ and $f_q$, where $f_p$ is being (or has been) activated, i.e. $\\bar f_p \\simeq 1$, and by contrast $f_q$ is idle, i.e. $\\bar f_q \\simeq 0$. \nWe aim to prove that the weight transition can always be correctly achieved for both subtasks, such that they can be suitably performed in due course. We develop our proof along the redundancy and the subtask space separately.
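\n\nFor reference in the derivations below, Alg.~1 can be condensed into the following sketch (our own minimal re-implementation; \\texttt{A}, \\texttt{S}, \\texttt{P} denote the weight, task status, and priority matrices):\n\\begin{verbatim}\nimport numpy as np\n\ndef wta_update(A, S, P, gamma):\n    A_dot = P @ S                     # preliminary updating rates (line 1)\n    for i, row in enumerate(A_dot):\n        w = int(np.argmax(row))       # winner of row i\n        if A[i, w] == gamma:          # winner already saturated\n            row[:] = 0.0\n            continue\n        v = int(np.argmax(np.delete(row, w)))  # runner-up\n        v = v + 1 if v >= w else v    # map back into the full row\n        row -= 0.5 * (row[w] + row[v])  # only the winner stays positive\n        valid = (row > 0) & (A[i] != 0) # currently effective rates\n        row[w] -= row[valid].sum()      # zero-sum correction\n    return A_dot\n\\end{verbatim}\nThe zero-sum correction is exactly what keeps the total weight of each row invariant, as shown above.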
\n\n\\vspace{2mm}\n\\noindent\\textbf{I. Weight Transition along Redundancy}: Assume an $i$-th redundancy is available for subtasking $f_p$ and $f_q$.\nIf, in the winner-take-all process, the winner is $f_p$, then\n\\begin{equation}\n\\begin{split}\n \\Delta \\dot{\\alpha}_{i-pq} &:= \\dot{\\alpha}_{ip}-\\dot{\\alpha}_{iq}\\\\\n &= \\textbf{W}(\\textbf{\\text{A}}, \\textbf{S}, \\textbf{\\text{P}})_{ip}-\\textbf{W}(\\textbf{\\text{A}}, \\textbf{S}, \\textbf{\\text{P}})_{iq}\n \\geq 0\n\\end{split}\n\\end{equation}\nand the weight will transition from $f_q$ to $f_p$, and vice versa.\\\\\n\nOnce a winner has emerged and the maximum update value still belongs to the winner, the weights of all non-winners are 0; the weights then remain stable, and there is no mutual transition (Alg.~1, lines 4--5).\nIf there is a weight transition, the following relationship holds for every $i$ that is not the winner:\n\\begin{equation} \n \\frac{\\textbf{W}(\\textbf{\\text{A}}, \\textbf{S}, \\textbf{\\text{P}})_{ip}-\\textbf{W}(\\textbf{\\text{A}}, \\textbf{S}, \\textbf{\\text{P}})_{iq}} {(\\textbf{\\text{P}}\\textbf{S})_{ip}-(\\textbf{\\text{P}}\\textbf{S})_{iq}} \\geq 1\n\\end{equation}\nIn lines 7 to 9 of Alg.~1, the same value $z$ is subtracted from all elements, so the relative distance between elements remains the same. Since neither $f_q$ nor $f_p$ is the winner, lines 10--14 do not act on them.\n\nThen the relative updating difference between $f_p$ and $f_q$ is\n\\begin{equation*}\n \\begin{split}\n \\Delta \\dot{\\alpha}_{i-pq} &:= \\dot{\\alpha}_{ip}-\\dot{\\alpha}_{iq} \\\\\n &= {\\textbf{W}(\\textbf{\\text{A}}, \\textbf{S}, \\textbf{\\text{P}})_{ip}-\\textbf{W}(\\textbf{\\text{A}}, \\textbf{S}, \\textbf{\\text{P}})_{iq}}\n \\geq (\\textbf{\\text{P}}\\textbf{S})_{ip}-(\\textbf{\\text{P}}\\textbf{S})_{iq}\\\\\n &= \\prod_{u=0}^{i-1}(1-\\alpha_{up})\\prod_{v=0}^{p-1}(1-\\alpha_{iv})\\prod_{u\\neq i}(\\gamma-\\alpha_{up})\\bar{f}_p\\\\\n & - \\prod_{u=0}^{i-1}(1-\\alpha_{uq})\\prod_{v=0}^{q-1}(1-\\alpha_{iv})\\prod_{u\\neq i}(\\gamma-\\alpha_{uq})\\bar{f}_q\n \\end{split}\n\\end{equation*}\n\n\\vspace{2mm}\n\\noindent Specifically, there are four cases:\n\n\\vspace{1.5mm}\n\\noindent\\textbf{Case One}: Suppose neither $f_p$ nor $f_q$ is occupying a redundancy, i.e. $\\alpha_{up}\\simeq \\alpha_{uq} \\simeq 0$, $\\forall u\\neq i$. Then we have \n\\begin{equation*}\n \\begin{split}\n 0<&\\prod_{u=0}^{i-1}(1-\\alpha_{up})\\prod_{u\\neq i}(\\gamma-\\alpha_{up})\\\\\n \\simeq&\\prod_{u=0}^{i-1}(1-\\alpha_{uq})\\!\\prod_{u\\neq i}(\\gamma-\\alpha_{uq}) \\simeq \\gamma^{n-m-1}\n \\end{split}\n\\end{equation*}\nDenote $c=\\gamma^{n-m-1}>0$ (a constant); then we have \n\\begin{equation*}\n \\dot{\\alpha}_{i-pq} \\geq (\\textbf{\\text{P}}\\textbf{S})_{ip}-(\\textbf{\\text{P}}\\textbf{S})_{iq} \\simeq c(\\prod_{v=0}^{p-1}(1-\\alpha_{iv})\\bar{f}_p-\\prod_{v=0}^{q-1}(1-\\alpha_{iv})\\bar{f}_q)\n\\end{equation*}\n\n\\noindent\\textbf{(1)}. If $p<q$, we have\n\\begin{equation*}\n \\dot{\\alpha}_{i-pq} \\geq c\\prod_{v=0}^{p-1}(1-\\alpha_{iv})(\\bar{f}_p-\\prod_{v=p}^{q-1}(1-\\alpha_{iv})\\bar{f}_q)\n\\end{equation*}\nwhich is nonnegative whenever $\\bar{f}_p\\geq\\bar{f}_q$; that is, a higher weight will eventually be transited to the active subtask $f_p$.\n\n\\noindent\\textbf{(2)}. If $p>q$, similarly, we have \n\\begin{equation*}\n \\dot{\\alpha}_{i-pq} \\geq c\\prod_{v=0}^{q-1}(1-\\alpha_{iv})(\\prod_{v=q}^{p-1}(1-\\alpha_{iv})\\bar{f}_p-\\bar{f}_q)\n\\end{equation*}\nwhich indicates, similarly, that a higher weight will eventually be transited to $f_p$, as $\\bar{f}_p$ and $\\bar{f}_q$ vary in accordance with their task status. It also suggests, however, that since $f_q$ precedes $f_p$ by index, $\\dot{\\alpha}_{i-pq}\\geq0$ is not guaranteed until $\\bar{f}_q=0$.
That is, the weight of $f_p$ will not increase as fast as that of $f_q$ until $f_q$ is completed, since $f_q$ has a higher indexing priority.\n\n\\vspace{1.5mm}\n\\noindent \\textbf{Case Two}: Suppose only $f_p$ is occupying a redundancy, i.e. $\\exists u \\neq i, \\alpha_{up} =\\gamma$. \nThen $\\dot{\\alpha}_{ip} =0$ and therefore $\\dot{\\alpha}_{i-pq} = \\dot{\\alpha}_{ip}-\\dot{\\alpha}_{iq} = 0 -\\dot{\\alpha}_{iq}\\leq0$. That is, a relatively faster weight increase will be given to $f_q$. This is in compliance with the fact that $f_p$ has already been allocated a redundancy and therefore its weight will not increase. A higher weight will accordingly be transited to $f_q$. \n\n\\vspace{1.5mm}\n\\noindent\\textbf{Case Three}: Suppose only $f_q$ is occupying a redundancy. Similarly, we can prove that $\\dot{\\alpha}_{i-pq} \\geq0$ holds, which is consistent with the fact that a higher weight is supposed to transit to $f_p$.\n\n\\vspace{1.5mm}\n\\noindent\\textbf{Case Four}: Suppose both subtasks are holding redundancies. Then $\\dot{\\alpha}_{ip} =\\dot{\\alpha}_{iq} = 0$ and therefore $\\dot{\\alpha}_{i-pq} = 0$, i.e. there is no relative difference between their updating rates, which is consistent with the fact that subtasks that have been (or are being) executed will not compete for redundancy, so there is no weight transition between them.\n\n\\vspace{2mm}\n\\noindent\\textbf{II.$\\;$Weight Transition along Subtask}: Suppose the subtask $f_p$ has been allocated to a $u$-th redundancy, i.e. $\\exists u, \\alpha_{up} =\\gamma$. Then at any other $\\omega$-th redundancy, it satisfies\n$\\dot{\\alpha}_{\\omega p}\\leq0, \\forall \\omega \\neq u$. That is, once a subtask has been allocated a certain redundancy, the weights of the subtask at other redundancies will not increase, which exactly meets the constraint that an assigned subtask should not jump back and forth.\n\nTo sum up, our approach converges the weights along both the redundancy and the subtask space. Since each subtask controller is stable by design, the entire system can be executed stably once the convergence is achieved.\n\\section{Conclusion and Future Work}\n\\label{sec:conclusion} \n\nThis work has addressed constrained redundancy resolution problems in which multiple constraints or subtasks, in addition to a primary task, have to compete for insufficient redundancies. The proposed approach, based on subtask merging and null space projection,\nresolves redundancy insufficiency by dynamically allocating redundancies to subtasks in compliance with task status and priority. \nTwo real-robot case studies with solid and substantial results have shown that our approach \ncan be a promising solution \nfor suitably handling complex robot applications characterized by dynamic and unstructured environments.\nBased on the results, our future work will focus on (1) further modulating and smoothing redundancy shifts to reduce their effect on task execution (e.g. at around 15s in Fig.~\\ref{subfig:joint}, the joint difference fluctuates briefly due to a redundancy shift),\nand (2) introducing a certain level of predictive capability to the weight updating strategy, such as to proactively predict and accommodate changes of task status, e.g. the occurrence of an emergency.
\n\n\\section{Experiment Results}\n\\label{sec:experiment} \n\n\nThis section presents two test cases, followed by experimental results, to show the performance of our approach.\n\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\tiny\n\n\t\\includegraphics[width=3.5in]{experiment\/figures\/04.png}\n\t\\caption{The traditional approach: The robot collides with itself at the elbow joint (the blue line) at around 13s, as the self-collision subtask is not treated during the whole process due to redundancy insufficiency. Each solid line represents a relevant joint for self-collision avoidance, while the dotted line in the same color represents its joint-collision limit. }\n\t\\label{fig4:SelfColli}\n\\end{figure}\n\n\n\\subsection{Experimental Cases}\n\n\\noindent\\textbf{I. Drink-Serving}: As introduced previously in Fig.~\\ref{fig1:intro}, the first test case is about a mobile robot serving drinks along a desired path. We implement this test case on a real six-DOF UR16e robot manipulator mounted on an omnidirectional mobile platform. Therefore, the robot has in total nine DOFs. The primary task of serving drinks requires six DOFs and thus leaves three DOFs as redundancies for subtasking.\nThe subtasks in this case involve:\n\\begin{itemize}\n \\item A three-dimensional \\textit{obstacle-avoidance} subtask, e.g. avoiding the walking human, which can be split into three elementary obstacle-avoidance subtasks.\n \\item A three-dimensional \\textit{self-collision} avoidance subtask, e.g. avoiding the collision between the manipulator and the platform, which can be split into three elementary self-collision avoidance subtasks.\n\\end{itemize}\n\nIdeally, both subtasks should be performed simultaneously along with the primary task. However, due to the lack of sufficient redundancies, the six elementary subtasks have to compete for three redundancies during runtime.\n\n\\vspace{2mm}\n\\noindent\\textbf{II. Circle-Drawing}: As illustrated in Fig.~\\ref{fig4:maintas}, the second case is about a manipulator drawing a circle along a desired end-effector path. We implement this test case using the same robot as in the first case, but with the mobile platform fixed at a certain location. Therefore, the robot has in total six DOFs. The primary task of circle drawing requires three DOFs and thus leaves three DOFs as redundancies for subtasking. The subtasks in this case involve:\n\n\\begin{itemize}\n \\item A three-dimensional \\textit{singularity-avoidance} subtask, which can be split into three elementary singularity-avoidance subtasks. \n \\item A one-dimensional \\textit{wrist-limit} subtask, which simply constrains the wrist joint to a desired angle.\n\\end{itemize}\n\nTherefore, there are four elementary subtasks competing for three redundancies in this case. \n\n\\subsection{Experimental Results}\nWe test our approach (Eq.~\\ref{eq:controlLaw_1}) on both cases and compare it with the traditional approach (Eq.~\\ref{eq:controlLaw_0}). Briefly, given a case, \n\\begin{itemize}\n \\item The traditional approach first assigns the number of required DOFs to the primary task. Then it allocates the remaining redundancies to as many subtasks as it can \\textit{and} keeps this redundancy allocation fixed. \n \\item The subtask-merging based approach (our approach), as explained in Sec.~\\ref{sec:method}, first assigns the required DOFs to the primary task.
Then it dynamically allocates the remaining redundancies to all elementary subtasks generated from subtask unitization in due course.\n\\end{itemize}\n\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\scriptsize\n\t\\def0.98\\columnwidth{1\\columnwidth}\n\t\\import{experiment\/figures\/}{t2.pdf_tex}\n\t\\caption{Our approach: \n\n All subtasks are suitably performed. Three redundancies are first shifted to the obstacle-avoidance subtask to avoid the walking human at around 5s, and then given back to the three elementary self-collision subtasks to avoid potential self-collision at around 9s. }\n\t\\label{fig5:NonSelfColli}\n\\end{figure}\n\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\tiny\n\t\\def0.98\\columnwidth{1\\columnwidth}\n\n\t\\includegraphics[width=3.5in]{experiment\/figures\/05.png}\n\t\\caption{Redundancies shift dynamically among elementary subtasks: \n Our approach dynamically allocates three redundancies to six elementary subtasks in due course. Each subfigure corresponds to a redundancy, where the three dotted coloured lines correspond to the weights of the three elementary obstacle-avoidance subtasks, and the three solid coloured lines correspond to the weights of the three elementary self-collision subtasks, i.e. the update of $\\textbf{\\text{A}}$.}\n\t\\label{fig5:WeightMatrix}\n\\end{figure}\n\n\n\\vspace{2mm}\n\\noindent\\textbf{I. Experimental Results of Drink-Serving:}\nFig.~\\ref{fig4:SelfColli} and~\\ref{fig5:NonSelfColli} show the results generated respectively by the traditional and our approach during the whole process of the self-collision avoidance subtask. \nFig.~\\ref{fig5:WeightMatrix} shows the redundancy shift among the six elementary subtasks (i.e. the evolution of the weights in $\\textbf{\\text{A}}$) generated by our approach. \n\n\\begin{figure*}[!th]\n\t\\begin{center}\t\n\t\t\\mbox{\n\t\t\t\\subfigure[Performance on the Singularity-avoidance Subtask ]{{\n \t\\centering\n \t\\tiny\n \t\\def0.98\\columnwidth{1\\columnwidth}\n\t\t\t\t\t\t\\import{experiment\/figures\/}{t7.pdf_tex}\n \t\\label{subfig:sin}\n\t\t\t\t\t}}\n\t\t\t\\subfigure[Performance on the Wrist-Limit Subtask]{{\n \t\\centering\n \t\\tiny\n \t\\def0.98\\columnwidth{1\\columnwidth}\n\t\t\t\t\t\t\\import{experiment\/figures\/}{t8.pdf_tex}\n \t\\label{subfig:joint}\n\t\t\t\t\t}}\t\t\t\t\t\n\t\t}\n\t\t\\caption{Both approaches perform well in the singularity-avoidance subtask, \nwhile our approach outperforms by properly addressing the wrist-limit subtask as well.}\n\t\t\\label{fig:tc2}\n\t\\end{center}\n\\end{figure*}\n\nIn this case, as shown in Fig.~\\ref{fig4:SelfColli}, the traditional approach allocates three redundancies to the obstacle-avoidance subtask, and then leaves the self-collision subtask untreated since there is no more redundancy available. As a result,\neven though the moving human is successfully avoided during the whole course (as the obstacle-avoidance subtask is taking all redundancies), the robot collides with itself at the elbow joint at around 13s and locks its manipulator thereafter for mechanical safety, i.e. the robot fails to execute the case.\n\nInstead, as shown in Fig.~\\ref{fig5:NonSelfColli} and~\\ref{fig5:WeightMatrix}, our approach dynamically allocates the three redundancies to the six elementary subtasks, and therefore all subtasks are suitably performed in due course. 
Specifically, the three redundancies are initially taken by the self-collision subtask; therefore, the relative difference between each joint and its corresponding joint-collision limit (illustrated by the red double-arrowed line segments in Fig.~\\ref{fig5:NonSelfColli}) increases in this phase (0s-5s). \nAs the human enters the robot's sensing range for obstacle avoidance from around 5s, the redundancies are shifted to the three elementary obstacle-avoidance subtasks to keep the robot away from the walking human.\nAs a result, the joint differences for self-collision decrease (but not to zero) until around 9s, when the redundancies are shifted back to the three elementary self-collision subtasks. Accordingly, the joint differences increase again to avoid potential self-collisions. All of the above redundancy shifts can be directly observed in Fig.~\\ref{fig5:WeightMatrix}.\n\nRemarkably, Fig.~\\ref{fig5:NonSelfColli} and~\\ref{fig5:WeightMatrix} also show that redundancy shifts need not happen simultaneously, even for the same subtask. That is, our approach allocates redundancies directly to one-dimensional elementary subtasks rather than to their corresponding high-level multi-dimensional subtasks. This is \nthanks to the subtask unitization introduced in Sec.~\\ref{subsec:a}, which greatly improves the redundancy availability and utilization. \nFor example, from around 8s to 10s in Fig.~\\ref{fig5:WeightMatrix}, the second redundancy is shifted to an elementary self-collision subtask, while the other two redundancies are still occupied by two elementary obstacle-avoidance subtasks. \nBoth figures also suggest that redundancy shifts are performed swiftly (mostly within 1s) and smoothly by our approach.\n\n\\vspace{2mm}\n\\noindent\\textbf{II. Experimental Results of Circle-Drawing:}\n\nFig.~\\ref{fig:tc2} shows the results for the second case on the singularity-avoidance and wrist-limit subtasks, generated by the traditional and our approach respectively. Both approaches perform well in the singularity-avoidance subtask (Fig.~\\ref{subfig:sin}),\nwhile the traditional approach underperforms in the wrist-limit subtask due to redundancy insufficiency (Fig.~\\ref{subfig:joint}). \n\nFig.~\\ref{fig:2shitf} shows the redundancy shifts among the four elementary subtasks (i.e. the evolution of $\\textbf{\\text{A}}$) generated by our approach. Specifically, from 0s to around 9s, two elementary singularity-avoidance subtasks and the wrist-limit subtask are performed. Then at around 9s, the second redundancy is shifted from one elementary singularity-avoidance subtask to the other, i.e. a redundancy shift happens between two elementary subtasks unitized from the same high-level subtask. \nThis further proves that our approach allocates redundancies at the elementary-subtask level. Such a redundancy shift is in fact due to the change of task status, i.e. a (nearly) completed subtask gives its redundancy to a still-active subtask. \n\nRemarkably, \nFig.~\\ref{fig4:maintas} shows that the primary task is performed well by both approaches, i.e. the primary task is not affected by the execution of subtasks.\nThis is thanks to the null space projection technique applied by both approaches.\n\n\n\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\tiny\n\t\\def0.98\\columnwidth{1\\columnwidth}\n\n\t\\includegraphics[width=3.5in]{experiment\/figures\/07.png}\n\t\\caption{Three redundancies shift dynamically among four elementary subtasks in due course. 
The three solid lines correspond to the three elementary singularity-avoidance subtasks. The dotted line corresponds to the wrist-limit subtask. }\n\t\\label{fig:2shitf}\n\\end{figure}\n\n\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\tiny\n\t\\def0.98\\columnwidth{1\\columnwidth}\n\t\\import{experiment\/figures\/}{t5_2.pdf_tex}\n\t\\caption{The manipulator performs a primary task of drawing a circle along a desired end-effector path.\n\tBoth the traditional and our approach perform well in executing the primary task. }\n\t\\label{fig4:maintas}\n\\end{figure}\n\n\n\n\n\n\\section{Introduction}\n\\label{sec:introduction} \n\nRedundant robots have been gaining popularity in the robotics community by virtue of their increased dexterity, versatility and adaptability~\\cite{hanafusa1981analysis,siciliano1990kinematic,chiaverini2016redundant}.\nHowever, except for a few highly specialized systems, most redundant robots still underperform due to a lack of sufficient redundancies, especially when operating in unstructured or dynamic environments like households or warehouses, characterized by the occurrence of multiple additional subtasks.\nTake the drink-serving task illustrated in Fig.~\\ref{fig1:intro} as an example. Even though the mobile robot is already equipped with nine degrees of freedom (DOF), as the robot carries a tray upright to serve drinks, only three DOFs will be left as redundancies.\nHowever, besides the primary serving task, the robot is frequently confronted with a large number of additional constraints or subtasks, e.g. obstacle, walking-human and singularity avoidance, which may actually require far more redundancies than the remaining ones. That is, the robot may not be able to deal with all subtasks simultaneously due to the lack of redundancies for subtasking.\n\nWe focus on constrained scenarios of \\textit{redundancy resolution} problems~\\cite{maciejewski1985obstacle,nakamura1987task,ficuciello2015variable} like this, where a redundant robot is supposed to carry out a primary task accompanied by multiple additional subtasks \\textbf{but} subject to redundancy insufficiency. \n\nA straightforward engineering way out of the above redundancy dilemma is to introduce more kinematic redundancies into the robot's mechanical structure, which is apparently too expensive to be a repeatable solution. The majority of prior works on redundancy resolution, either via optimization~\\cite{khatib1986real,ge2002dynamic,flacco2015discrete} or task augmentation~\\cite{sciavicco1988solution,zanchettin2011general,benzaoui2010redundant}, however,\nfundamentally rest on the premise that the robot can provide sufficient redundancies,\ni.e. that all subtasks can be performed simultaneously with the required redundancies.\n\nRather, we have noticed that in fact not all of the aforementioned subtasks have to be performed simultaneously or synchronously,\nthanks to task features and environment characteristics\\footnote{It is acknowledged that there exist cases where subtasks need to be performed strictly simultaneously. For such cases, the aforementioned engineering augmentation is the only solution.}. For example, a whole-course obstacle-avoidance subtask can actually be idle during most of the runtime until some obstacle appears within a certain threshold region, and therefore can be deferred from taking redundancy. 
Such characteristics give rise to the potential of asynchronicity among subtasks, which essentially accommodates most practical robot applications characterized by dynamic and unstructured environments. \n\nThis leads to a lightweight but effective solution:\nthe robot can dynamically allocate redundancies to subtasks\naccording to some common rules like task urgency, activeness and importance. \nFor example, in Fig.~\\ref{fig1:intro}, as the robot carries out the primary drink-serving task, if a human moves closer to the robot (Fig.~\\ref{fig1:subfig:human}), the human-avoidance subtask gains an increasing and ultimately dominating priority for taking all redundancies, while all other subtasks are temporarily frozen since no more redundancy is available. As the human walks away, the robot will eventually release (part of) the redundancies,\n until some other subtask takes them, e.g. the self-collision avoidance subtask (Fig.~\\ref{fig1:subfig:self}). \n\n \\begin{figure}[!t]\n \n \\begin{center}\t\n \\mbox{\n \\hspace{-3.5mm}\t\t\t\n \\subfigure[Human avoidance subtask]{\n \\label{fig1:subfig:human} \n \\includegraphics[height=0.36 \\columnwidth, angle=0]{.\/figures\/human.jpg}}\n \\subfigure[Self-collision avoidance subtask]{{\n \\label{fig1:subfig:self} \n \\includegraphics[height=0.36 \\columnwidth, angle=0]{.\/figures\/selfco.jpg}}}\t\n }\n \\caption{A drink-serving robot dynamically allocates relatively insufficient redundancies to accomplish multiple subtasks while serving drinks. (a) The human avoidance subtask takes redundancies as the human walks close. (b) The self-collision subtask takes redundancies once the human is far away.}\n \\label{fig1:intro}\n \\end{center}\n \\end{figure}\n\nIn this work, we borrow ideas from asynchronous time-division multiplexing (ATDM) and propose an approach to subtask management for redundant robots subject to redundancy insufficiency. Our approach unfolds as follows: we first unitize all multi-dimensional subtasks to be executed along with the primary task \ninto a set of one-dimensional \\textit{elementary subtasks}. This step allows us to greatly improve the redundancy availability by deploying subtasks in a more fine-grained and compact manner. We then manage the elementary subtasks by fusing them into a virtual multi-dimensional \\textit{secondary task} w.r.t. the primary task. \nWe propose a novel \\textit{subtask merging operator} and an efficient updating strategy to dynamically modulate the secondary task in compliance with the \\textit{task status} and \\textit{soft priority} derived heuristically.\nWith this approach, all subtasks can be suitably performed in due course.\n\nOur control framework is built upon previous work on task-priority based redundancy resolution~\\cite{hanafusa1981analysis,maciejewski1985obstacle,nakamura1987task}, which guarantees that the low-level tasks executed in the null space do not interfere with the high-level tasks. We integrate our subtask merging strategy into the null space projection technique to derive a general control framework of subtask management for redundant robots subject to redundancy insufficiency. \nIn this framework, the primary task is perfectly performed using the required number of DOFs, while all other subtasks are suitably carried out as a virtual dynamic secondary task using the remaining, insufficient redundancies, but without affecting the primary task. \n\n\n\nThe paper is organized as follows. 
Sec.~\\ref{sec:relatedwork} and~\\ref{sec:background} review and recapitulate prior related work. Sec.~\\ref{sec:method} presents the details of our approach to managing multiple subtasks subject to redundancy insufficiency. Sec.~\\ref{sec:experiment} introduces two case studies\nwith experimental results to verify the performance of our approach. Sec.~\\ref{sec:conclusion} concludes this paper and outlines our future work.\n
\\bibliographystyle{IEEEtran}\n\n\\section{Method}\n\\label{sec:method} \n\nThis section presents our approach to managing multiple subtasks subject to redundancy insufficiency (Fig.~\\ref{fig:overview}).\n\n\\subsection{Subtask Unitization}\\label{subsec:a}\n\nWe first split and unitize all multi-dimensional subtasks to be executed along with the primary task into a set of one-dimensional \\textit{elementary subtasks}. \nFor example, the obstacle avoidance of a mobile robot can be unitized into three elementary subtasks of \\textit{x}-direction, \\textit{y}-direction and \\textit{yaw}-rotation obstacle avoidance. \nIn this manner, the subtasks can be unitized into a set of elementary subtasks expressed as\n\\begin{equation}\\label{eq:ds}\n \\dot{x}_{\\text{s}i} = f_i(\\boldsymbol{\\xi}_{i})\\in \\mathbb{R} \\quad i=1,2,...,l\n\\end{equation}\n\\noindent where $l$ is the total number of elementary subtasks, $\\boldsymbol{\\xi}_{i}$ is a vector of all related parameters (i.e. the real robot state), and $\\dot{x}_{\\text{s}i}$ is the desired velocity of the \\textit{i}-th elementary subtask. \nEach elementary subtask expressed in the form of Eq.~\\ref{eq:ds} needs to ensure global stability by construction. Note that the number of elementary subtasks can be less than or equal to the number of redundancies (i.e. $n-m \\geq l$), in which case the robot can provide sufficient redundancies for subtasking. We focus on the opposite case ($n-m\\! < \\!l$), where the subtasks have to compete for redundancies due to insufficiency.\n\nThe subtask unitization allows our approach to deploy elementary subtasks in a more fine-grained and compact manner, and therefore improves the overall redundancy utilization and availability. Stacking all elementary subtasks together yields a \\textit{subtask vector}\n\\begin{equation}\\label{eq:merged}\n\\begin{split}\n \\dot{\\textbf{\\textit{x}}}_\\text{s}=[\\dot{x}_{\\text{s}1}\\: \\dot{x}_{\\text{s}2}\\:...\\:\\dot{x}_{\\text{s}l}]^\\text{T}\n =[f_1\\;f_2\\;...\\;f_l]^\\text{T}\n\\end{split}\n\\end{equation}\nNote that we associate an implicit order of elementary subtask priority by index in $\\dot{\\textbf{\\textit{x}}}_\\text{s}$, i.e. the smaller the index $i$, the higher the priority of its corresponding elementary subtask. \n\n\nSuppose the first-order differential kinematics of the $i$-th elementary subtask is expressed as\n\\begin{equation}\\label{eq:subtaskJo}\n \\dot{x}_{\\text{s}i} = \\textbf{J}_{\\text{s}i}(\\textit{\\textbf{q}})\\dot{\\textit{\\textbf{q}}} \n\\end{equation}\n\\noindent where $\\textbf{J}_{\\text{s}i} \n\\in \\mathbb{R}^{1\\times n}$ is its Jacobian matrix.\nSubstituting Eq.~\\ref{eq:subtaskJo} into Eq.~\\ref{eq:merged} yields\n\\begin{equation}\\label{eq:elmentary}\n \\dot{\\textbf{\\textit{x}}}_\\text{s} = \\textbf{J}_{\\text{sub}}(\\textit{\\textbf{q}})\\dot{\\textit{\\textbf{q}}}\n\\end{equation}\n\\noindent where $\\textbf{J}_{\\text{sub}} = [\\textbf{J}_{\\text{s}1}^{\\text{T}} \\;\\textbf{J}_{\\text{s}2}^{\\text{T}} \\;... 
\\;\\textbf{J}_{\\text{s}l}^{\\text{T}}]^{\\text{T}} \\in \\mathbb{R}^{l\\times n}$ is the merged Jacobian matrix of the elementary subtask set.\n\n\\subsection{Merging Subtasks into A Dynamic Secondary Task}\\label{subsec:merge}\nWe then build a \\textit{virtual secondary task} $\\dot{\\textit{\\textbf{x}}}_2$ from the set of elementary subtasks $\\dot{\\textbf{\\textit{x}}}_\\text{s}$ in line with Eq.~\\ref{eq:controlLaw_0}\n\\begin{equation}\\label{eq:secondary_formal}\n \\begin{split}\n \\dot{\\textit{\\textbf{x}}}_2 &= \\textbf{\\text{H}}(\\dot{\\textbf{\\textit{x}}}_\\text{s}) \\\\\n \\end{split}\n\\end{equation}\nwhere ${\\textbf{H}(\\cdot)}$ is an operator dynamically allocating the $n-m$ robot redundancies to the $l$ elementary subtasks $\\dot{\\textbf{\\textit{x}}}_\\text{s}$ during runtime.\n\n\\vspace{2mm}\n\\noindent\\textbf{Multi-Subtask Merging Matrix}: In order to construct the operator ${\\textbf{H}(\\cdot)}$, we first define a \\text{multi-subtask merging matrix} \n\\begin{equation}\n \\! \\textbf{\\text{A}}(t):=\\!\n \\begin{bmatrix}\n \\alpha_{11}(t) & \\alpha_{12}(t) &\\!\\dotsm &\\!\\alpha_{1l}(t) \\\\ \\!\n \\alpha_{21}(t) & \\alpha_{22}(t) &\\!\\dotsm &\\!\\alpha_{2l}(t) \\\\ \\!\n \\vdots &\\vdots &\\!\\ddots &\\!\\vdots\\\\ \\!\n \\alpha_{(n-m)1}(t) & \\alpha_{(n-m)2}(t) &\\!\\dotsm &\\!\\alpha_{(n-m)l}(t) \\!\n \\end{bmatrix}\\!\n\\end{equation}\nwhere each entry $\\alpha_{ij}$ denotes the time-varying weight of the $i$-th redundancy to be allocated to the $j$-th elementary subtask.\nEach row satisfies\n$\\sum_{j=1}^l\\alpha_{ij}\\! = \\!\\gamma$, where $\\gamma \\in [0.5, 1]$ is the upper bound for the entries in $\\textbf{\\text{A}}$. The dependence on $t$ is omitted hereafter for notational compactness.\nThe matrix is initialized with\n\\begin{equation*}\n \\textbf{\\text{A}}_0:=\\;\\left[\\gamma \\cdot {\\bf{I}}_{\\left( {n - m} \\right) \\times \\left( {n - m} \\right)} \\quad {\\bf{0}}_{\\left( {n - m} \\right)\\times \\left( {l - n + m} \\right)}\\right]\n\\end{equation*}\nwhich implies that the $\\!n\\!-\\!m$ robot redundancies will initially be allocated to the first $n-m$ elementary subtasks in $\\dot{\\textbf{\\textit{x}}}_\\text{s}$, in keeping with the aforementioned implicit indexing task priority.\n\n\\vspace{2mm}\n\\noindent\\textbf{Virtual Secondary Task}:\nThe virtual secondary task $\\dot{\\textit{\\textbf{x}}}_2$ is then defined as a weighted contribution of the $l$ subtasks as\n\\begin{equation*}\\label{eq:secondary}\n \\dot{\\textit{\\textbf{x}}}_2 \\! = \\!\\textbf{\\text{H}}(\\dot{\\textbf{\\textit{x}}}_\\text{s})\\!=\\! (1\/{\\gamma})\\cdot\\textbf{\\text{A}}_{(n-m)\\times l}\\dot{\\textit{\\textbf{x}}}_{\\text{s}(l \\times1)}\\!= (1\/\\gamma)\\sum_{j=1}^l{\\boldsymbol{\\alpha}_{j}\\dot{x}_{\\text{s}j} }\n \n\\end{equation*}\n\\begin{equation}\\label{eq:secondary2}\n \\begin{split}\n \\! & = \\! (1\/\\gamma)\\!\n \\left[\n \\sum_{j=1}^l\\alpha_{1j}\n \\dot{x}_{\\text{s}j} \\:\\sum_{j=1}^l\\alpha_{2j}\n \\dot{x}_{\\text{s}j} \\: ...\n \\:\\sum_{j=1}^l\\alpha_{(n-m)j}\n \\dot{x}_{\\text{s}j} \n \\right]^{\\text{T}} \\!\\\\\n \\!& = \\! [\\dot{x}_{21} \\:\\:\\dot{x}_{22} \\:\\:\\dots \\:\\:\\dot{x}_{2(n-m)}]^{\\text{T}}\\!\n \\end{split}\n\\end{equation}\nwhere $\\gamma$ acts as a normalizing factor. Eq.~\\ref{eq:secondary} also implies that at the \\textit{i}-th redundancy, the merging matrix $\\textbf{\\text{A}}$ dynamically\nallocates a virtual task $\\dot{x}_{2i}$, characterized by a weighted sum of the $l$ elementary subtasks. 
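\n\nTo make the merging step concrete, the following minimal Python sketch (an illustration only: the dimensions, the value of $\\gamma$ and the placeholder values of $\\dot{\\textbf{\\textit{x}}}_\\text{s}$ and $\\textbf{J}_{\\text{sub}}$ are hypothetical and not taken from our experiments) assembles $\\textbf{\\text{A}}_0$ and evaluates Eq.~\\ref{eq:secondary2}:\n\\begin{verbatim}\nimport numpy as np\n\n# Hypothetical dimensions: n = 9 joint DOFs, m = 6 DOFs for the\n# primary task, l = 6 elementary subtasks, n - m = 3 redundancies.\nn, m, l = 9, 6, 6\ngamma = 0.8  # assumed upper bound for the entries of A\n\n# A_0: allocate each redundancy to one of the first n - m\n# elementary subtasks (implicit indexing priority).\nA = np.hstack([gamma * np.eye(n - m),\n               np.zeros((n - m, l - (n - m)))])\n\n# Placeholder subtask velocities and stacked (l x n) Jacobian.\nx_s = np.zeros(l)\nJ_sub = np.zeros((l, n))\n\n# Virtual secondary task and the merged Jacobian used below.\nx_2 = (1.0 \/ gamma) * A @ x_s    # shape (n - m,)\nJ_2 = (1.0 \/ gamma) * A @ J_sub  # shape (n - m, n)\n\\end{verbatim}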
\n\n\\vspace{2mm}\n\\noindent\n\\textbf{Null Space Control}:\nSubstituting Eq.~\\ref{eq:elmentary} and~\\ref{eq:secondary} into \nEq.~\\ref{eq:differen} yields\n\\vspace{-1mm}\n\\begin{equation}\\label{eq:mergedJaco}\n \\dot{\\textit{\\textbf{x}}}_2 = \\textbf{J}_2(\\textit{\\textbf{q}})\\dot{\\textit{\\textbf{q}}}\n =(1\/\\gamma)\\textbf{\\text{A}}\\textbf{J}_{\\text{sub}}(\\textit{\\textbf{q}})\\dot{\\textit{\\textbf{q}}}\n\\end{equation}\nwhere $\\textbf{J}_2 = (1\/\\gamma)\\textbf{\\text{A}}\\textbf{J}_{\\text{sub}}$ is the (merged) Jacobian matrix of the virtual secondary task. \nThen\nsubstituting Eq.~\\ref{eq:secondary} and~\\ref{eq:mergedJaco} into Eq.~\\ref{eq:controlLaw_0} yields our law of redundancy resolution subject to insufficiency \n\\begin{equation*}\\label{eq:controlLaw_2}\n \\dot{\\textit{\\textbf{q}}} \\!=\\!\\textbf{J}^{+}_1\\dot{\\textit{\\textbf{x}}}_1\\!+\\!\n \\textbf{N}_1\\textbf{J}^{\\text{T}}_{\\text{sub}}\\textbf{A}^{\\text{T}}(\\textbf{A}\\textbf{J}_{\\text{sub}}\\textbf{N}_1\\textbf{J}^{\\text{T}}_{\\text{sub}}\\textbf{A}^{\\text{T}})^{-1}(\\textbf{\\text{A}}\\dot{\\textbf{\\textit{x}}}_\\text{s} \\!-\\! \\textbf{\\text{A}}\\textbf{J}_{\\text{sub}}\\textbf{J}^{+}_1\\dot{\\textit{\\textbf{x}}}_1) \\!\\!\\!\\!\\\\\n\\end{equation*}\n\\begin{equation}\\label{eq:controlLaw_1}\n\\textbf{N}_1\\!=\\textbf{I}-\\textbf{J}_1^{+}\\textbf{J}_1 \\in \\mathbb{R}^{n \\times n} \\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\:\\:\n\\end{equation}\nwhich plays a fundamental role in our control framework. The next subsection explains how our algorithm dynamically modulates $\\dot{\\textit{\\textbf{x}}}_2$ to manage subtasks under this framework.\n\n\\subsection{Update of the Merging Matrix}\\label{subsec:c}\nWith reference to Eq.~\\ref{eq:secondary}$-$\\ref{eq:controlLaw_1}, the dynamic control of multiple subtasks \nrelies essentially on the update of $\\textbf{\\text{A}}$. We formulate an updating strategy to proactively modulate the updating rate of $\\textbf{\\text{A}}$ by incorporating \\textit{task status} and \\textit{soft priority} derived heuristically. \n\n\\vspace{2mm}\n\\noindent\n\\textbf{Task Status Matrix}:\nWe define a \\text{task status matrix} $\\textbf{S}$ to modulate the updating rate in compliance with task status\n\\begin{equation}\n \\textbf{S} = \\text{diag}(\\bar{f}_1,\\bar{f}_2,\\dots,\\bar{f}_l)\n\\end{equation}\nwhere $\\bar{f}_i\\in[0,1]$ quantifies the activation status of the $i$-th elementary subtask $\\dot{x}_{\\text{s}i}$ with a normalized scalar.\nSpecifically, if $\\dot{x}_{\\text{s}i}$ arrives at a stable state, then $\\dot{x}_{\\text{s}i}= 0$ and $\\bar{f}_i=0$. That is, the \\textit{i}-th elementary subtask has been completed and there is no need to assign redundancy to it. On the contrary, if $ \\bar{f}_i\\rightarrow1$, it indicates that the \\textit{i}-th elementary subtask is still active and is therefore waiting to be allocated a redundancy.\n\n\nHere we specify $\\bar{f}_i$ with the normalizing function \n\\begin{equation}\\label{eq:normalize}\n \\bar{f}_i=1\/(1+e^{k_i(d_i+\\dot{x}_{\\text{s}i})})+1\/(1+e^{k_i(d_i-\\dot{x}_{\\text{s}i})})\n\\end{equation}\n\\noindent where ${k_i}$ and ${d_i}$ are the response slope and sensitivity range of the normalizing function. \nNote that one can come up with other definitions of task status, e.g. one considering the task amplitude. 
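\nAs a small illustration (a sketch only; the parameter values below are hypothetical), the normalizing function of Eq.~\\ref{eq:normalize} translates directly into code:\n\\begin{verbatim}\nimport numpy as np\n\ndef task_status(x_dot_s, k=100.0, d=0.1):\n    # Normalized activation status (Eq. normalize): close to 0\n    # once the desired velocity x_dot_s settles inside the\n    # sensitivity range (-d, d), close to 1 while the subtask\n    # is still active; k controls the response slope.\n    return (1.0 \/ (1.0 + np.exp(k * (d + x_dot_s)))\n            + 1.0 \/ (1.0 + np.exp(k * (d - x_dot_s))))\n\\end{verbatim}\n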
We treat all subtasks equally and focus on whether an elementary subtask is completed.\n\n\n\n\n\n\n\\vspace{2mm}\n\\noindent\n\\textbf{Soft Priority Matrix}:\nWe derive a \\textit{soft priority matrix} $\\textbf{\\text{P}}$ to proactively modulate the updating rate \n\\begin{equation}\n \\! \\textbf{\\text{P}}(t):=\n \\begin{bmatrix}\n p_{11}(t) & p_{12}(t) &\\!\\dotsm & \\!p_{1l}(t)\\! \\\\\n p_{21}(t) & p_{22}(t) &\\!\\dotsm &\\! p_{2l}(t)\\! \\\\\n \\vdots &\\vdots &\\!\\ddots &\\! \\vdots\\\\\n p_{(n-m)1}(t) & p_{(n-m)2}(t) &\\!\\dotsm &\\! p_{(n-m)l}(t) \\!\n \\end{bmatrix}\n\\end{equation}\nwhere each entry $p_{ij}\\in (0,1)$ assigns a certain value of \\textit{soft priority} that modulates the updating rate of the weight $\\alpha_{ij}$.\nThe soft priority is derived\nby the following rules\\footnote{Note that a \\text{dummy row} $\\alpha_{0j}=0,\\;j=1,2,\\dots,l$ and a \\text{dummy column} $\\alpha_{i0}=0,\\;i=1,2,\\dots,(n-m)$ are added into $\\textbf{\\text{A}}$ for the sake of notational simplicity. $\\alpha_{0j}=0$ implies that the dummy \\text{0}-th redundancy will not be assigned to any subtask. $\\alpha_{i0}=0$ implies that the dummy \\text{0}-th task will not occupy any redundancy.}\n\\begin{equation}\\label{eq:rules}\n \\begin{split}\n p_{ij} = \\prod_{u=0}^{i-1}(1-\\alpha_{uj})\\prod_{v=0}^{j-1}(1-\\alpha_{iv})\\prod_{u\\neq i}(\\gamma-\\alpha_{uj})\\\\\n \\end{split}\n\\end{equation}\nfor $i=1,2,\\dots,(n-m)$ and $j=1,2,\\dots,l$.\nEach entry $p_{ij}$ extracts implicit soft-priority information from $\\textbf{\\text{A}}$ by explicitly considering the weight distribution over its corresponding redundancy ($i$-th row) and elementary subtask ($j$-th column):\n\\begin{itemize}\n \\item The term $\\prod_{u=0}^{i-1}(1-\\alpha_{uj})$ indicates that the updating rate of $\\alpha_{ij}$ is affected by the weight distribution (for the \\textit{j}-th elementary subtask) over the $i-1$ redundancies previous to the current \\textit{i}-th one. Specifically, given the \\textit{j}-th elementary subtask, \n if its weight at any redundancy (denoted as the \\textit{u}-th) previous to the current \\textit{i}-th one is close to $\\gamma$ (i.e. $\\alpha_{uj} \\rightarrow \\gamma$), it is more likely to be assigned to the \\textit{u}-th redundancy. Therefore, the weight at the current \\textit{i}-th redundancy will be relatively reduced so as to proactively quit the competition for the $j$-th elementary subtask. On the contrary, if its weight at any previous redundancy is close to zero, the weight at the current redundancy will be relatively raised to proactively improve the chance of winning.\n \\item The term $\\prod_{v=0}^{j-1}(1-\\alpha_{iv})$ indicates, symmetrically,\n that the updating rate of $\\alpha_{ij}$ is affected by the weight distribution (at the $i$-th redundancy) over the $j-1$ elementary subtasks previous to the current \\textit{j}-th one. This term decides whether the \\textit{j}-th elementary subtask should proactively quit or stay in the competition for the \\textit{i}-th redundancy.\n \n \n \\item The term $\\prod_{u\\neq i}(\\gamma-\\alpha_{uj})$ acts as a redundancy keeper by rejecting or zeroing out the weight update at $\\alpha_{ij}$ if the $j$-th elementary subtask has been allocated to any redundancy (denoted as the $u$-th, and therefore $\\alpha_{uj}=\\gamma$) other than the current $i$-th one. 
This guarantees that the \\textit{j}-th elementary subtask will be kept at a redundancy once allocated, and therefore will not jump back and forth among different redundancies.\n\n \n\\end{itemize}\n\nThe soft priority derived above is consistent with the aforementioned indexing priority by explicitly considering the weight distribution over previous redundancies and subtasks. \nIt proactively tunes the updating rate\nand therefore leads to a faster convergence of the merging matrix $\\textbf{\\text{A}}$.\nSuch a prioritizing strategy is aimed at improving the efficiency of redundancy resolution, such that all elementary subtasks can be suitably performed in due course. Note that one can come up with other prioritizing strategies in accordance with the context~\\cite{penco2020learning,modugno2016learning,di2019handling}. \n\n\n\n\n\\vspace{2mm}\n\\noindent\\textbf{Updating the Merging Matrix}:\n %\nWe define the updating rate $\\dot{\\textbf{\\text{A}}}$ as a combined effect of the task status $\\textbf{S}$ and the soft priority $\\textbf{\\text{P}}$, \nand formulate it based on the winner-take-all strategy\\footnote{A detailed explanation of the algorithm and a proof of weight convergence are provided here: \\url{https:\/\/github.com\/AccotoirDeCola\/WinnerTakeAll.} } ${\\textbf{W}(\\cdot)}$ \n\\begin{equation}\\label{eq: update_rate}\n \\dot{\\textbf{\\text{A}}} = \\textbf{W}(\\textbf{\\text{P}},\\textbf{S},\\textbf{\\text{A}})\n\\end{equation}\nThen the subtask merging matrix $\\textbf{\\text{A}}$ is updated as follows\n\\begin{equation}\\label{eq: update}\n{{\\bf{A}}_{{{t + 1}}}} = \\max ({\\bf{0}},\\min ({\\gamma\\bf{E}},{{\\bf{A}}_{{t}}}{{ + }}{{\\bf{\\dot A}}_{{t}}}\\Delta {{t}}))\n\\end{equation}\nwhere $\\bf{E}$ is an all-ones matrix and $\\Delta{t}$ is the update interval.\n\n\n\n\\section{Related Work}\\label{sec:relatedwork}\n\nOur work lies at the intersection of \\textit{inverse~kinematic control}, \\textit{redundancy resolution} and \\textit{prioritized multitasking}.\n\n\nVery early works on redundant robots derived the fundamental solution \nto redundancy resolution by using the Jacobian pseudoinverse to find the instantaneous relationship between the joint and task velocities.\nLater extensive investigations have essentially been developed, explicitly or implicitly, from the Jacobian pseudoinverse via either optimization or task augmentation. Typically, redundancy resolution via optimization incorporates additional subtasks or objectives by minimizing certain task-oriented criteria~\\cite{ficuciello2015variable,khatib1986real}. For example, obstacle avoidance is enforced by minimizing a function of artificial potential defined over the obstacle region in configuration space~\\cite{ge2002dynamic}. 
The task augmentation approaches address additional subtasks by augmenting an integrated task vector containing all subtasks, where extended or augmented Jacobians are formulated to enforce the additional tasks~\\cite{sciavicco1988solution,zanchettin2011general,benzaoui2010redundant}.\n \n\n\n\n\nThe majority of frequently applied approaches to redundancy resolution are fundamentally based on the null space projection strategy~\\cite{sadeghian2013dynamic,ott2015prioritized,dietrich2018hierarchical}.\nIn compliance with the dynamically consistent task hierarchy of this line of work, additional subtasks are performed only in the null space of a certain higher-priority task, typically by successive null space projections~\\cite{ott2015prioritized,dietrich2012integration} or augmented null space projections~\\cite{slotine1991general, sentis2005synthesis}. We also build our control law upon this technique by performing all subtasks in the null space of the primary task. The aforementioned \nJacobian-pseudoinverse centered approaches, however, mostly work under the premise of sufficient redundancies for multitasking, the absence of which is precisely the major challenge motivating and addressed by our work.\n\nOur work is also related to prioritized multitask control, which mainly focuses on addressing task incompatibility by defining a suitable evolution of task priorities~\\cite{lober2015variance, dehio2019dynamically,penco2020learning}. Typically, priorities are given to safety-critical tasks such as balancing if conflict or incompatibility occurs~\\cite{modugno2016learning,di2019handling}. Different from this line of studies, our work mainly focuses on the issue of insufficient robot redundancy, where all subtasks have to compete for redundancy even in the absence of task incompatibility.\n\n\n\\section{Introduction}\n\n\\subsection{Stochastic reconstruction in a nutshell}\n\nIn recent years, the reconstruction theorem from Martin Hairer's regularity structure theory \\cite{Hairer} has gained popularity in the mathematical community, not just within the said theory, but also as a standalone theorem (see \\cite{reconProof}, \\cite{caravenna}).\n\nIt answers the following question: Given a family of distributions $(F_x)_{x\\in\\mathbb R^d}$, is it possible to find a single distribution $f$ which around any point $x\\in\\mathbb{R}^d$ locally looks like $F_x$? A simple example of such a problem is the following: Given an $\\alpha$-H\u00f6lder continuous function $g$ with $\\alpha>0$ and a distribution $h$ which is $\\beta$-H\u00f6lder continuous with $\\beta<0$, where $\\alpha+\\beta>0$, how can we define the product $g\\cdot h$ if $g$ is not smooth? \\cite{RS} and \\cite{BCD} have used paraproducts to show that a natural choice called the Young product does exist for $\\alpha+\\beta>0$. \n\nAs it turns out, it is easy to find a local approximation for this product: Around any point $x\\in\\mathbb{R}$, $g(y)$ looks like its Taylor approximation $g(y)\\approx Tg_x(y):= \\sum_{k=0}^n \\frac{g^{(k)}(x)}{k!}(y-x)^k$ and the distribution $(Tg_x\\cdot h)(\\psi) := h(Tg_x\\cdot \\psi)$ is well defined since $Tg_x\\cdot \\psi$ is a smooth test function. This allows us to locally approximate our product as $g\\cdot h \\approx Tg_x\\cdot h$ around $x$. 
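\nTo get a first feeling for how well these local approximations fit together, note that (taking $n=0$ for simplicity, so that $Tg_x = g(x)$) the approximations around two base points $x$ and $y$ differ by\n\\begin{equation*}\n(Tg_x\\cdot h-Tg_y\\cdot h)(\\psi) = (g(x)-g(y))\\,h(\\psi)\n\\end{equation*}\nfor any test function $\\psi$, so the mismatch between nearby approximations is controlled by the H\u00f6lder continuity of $g$. The following paragraphs make this intuition precise.\n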
The only ingredient missing to find $g\\cdot h$ is a way to reconstruct the distribution from its local approximations.\n\nTo return to the general setting, let us consider a family of distributions $(F_x)_{x\\in\\mathbb{R}^d}$ and let us formalize what it means to ``locally look like $F_x$'' around $x$: Given a smooth, compactly supported mollifier $\\psi$, one can use the rescaled function $\\psi^\\epsilon_x$, which approaches the Dirac distribution $\\delta_x$ as $\\epsilon\\to 0$. (One can think of $\\psi^\\epsilon$ as a mollified Dirac delta.) Since $\\psi^\\epsilon_x$ is highly concentrated around $x$, we can analyze the local behaviour of a distribution $f$ around a point $x\\in\\mathbb{R}^d$ by analysing $f(\\psi^\\epsilon_x)$. To be precise, the rigorous condition to ``locally look like $F_x$'' is given by\n\n\\begin{equation}\\label{caravenna_result}\n\\abs{(f-F_x)(\\psi_x^\\epsilon)}\\lesssim \\epsilon^\\lambda\n\\end{equation}\n\n\\noindent for some $\\lambda>0$. In particular, $\\abs{(f-F_x)(\\psi_x^\\epsilon)}\\to 0$ for $\\epsilon\\to 0$ at rate $\\lambda$, which justifies the expression ``$f(x) =F_x(x)$''.\n\nCaravenna and Zambotti found a condition called coherence in \\cite{caravenna}, which ensures \\eqref{caravenna_result} and is given as follows: $(F_x)_{x\\in\\mathbb{R}^d}$ is called coherent if\n\n\\begin{equation*}\n\\abs{(F_x-F_y)(\\psi_x^\\epsilon)}\\lesssim \\epsilon^\\alpha(\\abs{x-y}+\\epsilon)^{\\gamma-\\alpha}\n\\end{equation*}\n\n\\noindent for some $\\alpha\\in\\mathbb{R}$ (typically negative) and some $\\gamma>0$. This quite technical condition can be interpreted as follows: If $F_x,F_y$ are distributions of negative regularity, we expect $(F_x-F_y)(\\psi_x^\\epsilon)$ to explode for $\\epsilon\\to0$, which it indeed does, at rate $-\\alpha$. However, if we link $\\abs{x-y}$ with $\\epsilon$, i.e. take $\\abs{x-y}\\approx \\epsilon$, we get the convergence $\\abs{(F_x-F_y)(\\psi_x^\\epsilon)}\\lesssim\\epsilon^\\gamma\\to 0$ as $\\epsilon\\to 0$. This indicates a form of continuity, i.e. $F_x$ and $F_y$ look very similar as $x$ tends to $y$, which can be used to construct some form of limit distribution ``$f(x) \\approx F_x(x)$''.\n\nUnder this assumption, Caravenna and Zambotti proved the following theorem:\n\n\\begin{theorem}[\\cite{caravenna}]\nLet $(F_x)_{x\\in\\mathbb{R}^d}$ be a coherent germ for some $\\gamma>0$. Then there exists a unique distribution $f$ with\n\n\\begin{equation*}\n\\abs{(f-F_x)(\\psi_x^\\epsilon)}\\lesssim \\epsilon^\\gamma.\n\\end{equation*}\n\\end{theorem}\n\n\\begin{remark}\nThey also showed that it is possible to find such a distribution $f$ for $\\gamma<0$. However, this distribution is no longer unique. Also, note that for $\\gamma<0$, the bound $\\epsilon^\\gamma$ does explode, so the condition is much weaker. Thus, we will concentrate on the $\\gamma>0$ case.\n\\end{remark}\n\n\\noindent At this point, the reconstruction theorem is a purely analytic tool. This is surprising, as it is often applied in SPDEs, see for example \\cite{gPam}, \\cite{Hairer}, \\cite{roughVol},\\cite{brault} and many more. 
In all those examples, the reconstruction theorem only uses the analytic properties of the stochastic processes, neglecting their stochastic properties.\n\nThis naturally leads to the question of whether there exists a stochastic version of the reconstruction theorem, especially if one considers its close connection to the sewing lemma \\cite{gubinelli}, \\cite{pradelle}: In rough paths theory, rough integrals can be constructed with the sewing lemma (see \\cite{lyons}, \\cite{roughPaths}), whereas in regularity structure theory the reconstruction theorem is used to construct the corresponding products. In \\cite{le}, Khoa L\u00ea showed the existence of a stochastic version of the sewing lemma.\n\n\\[\\begin{tikzcd}\n\\fbox{\\parbox{3cm}{\\centering sewing lemma}} \\arrow[r] \\arrow[d] & \\fbox{\\parbox{3cm}{\\centering reconstruction theorem}}\\arrow[d] \\\\\n\\fbox{\\parbox{3cm}{\\centering stochastic sewing lemma}}\\arrow[r] & \\fbox{\\parbox{3cm}{\\centering stochastic reconstruction theorem}}\n\\end{tikzcd}\\]\n\n\nIn this paper, we show that L\u00ea's approach can be generalized to the multidimensional setting of the reconstruction theorem: Consider a family of random distributions $(F_x)_{x\\in\\mathbb{R}^d}$, i.e. $F_x$ maps test functions $\\psi$ onto random variables in some $L_p$ space, $2\\le p<\\infty$, which is somehow adapted to a filtration $(\\mathcal F_y)_{y\\in\\mathbb R}$ (this will be made rigorous in Section \\ref{mainResult}). We can enrich the coherence property by a condition on the conditional expectation of $F_x$ as follows:\n\n\\begin{align*}\n\\norm{(F_x-F_y)(\\psi_x^\\epsilon)}_{L_p}\\lesssim \\epsilon^\\alpha(\\abs{x-y}+\\epsilon)^{\\gamma-\\alpha} \\\\\n\\norm{E^{\\mathcal F_{y_1}}(F_x-F_y)(\\psi_x^\\epsilon)}_{L_p}\\lesssim \\epsilon^\\alpha(\\abs{x-y}+\\epsilon)^{\\tilde \\gamma-\\alpha},\n\\end{align*}\n\n\\noindent where $y = (y_1,\\dots)$ and for some $\\tilde \\gamma > \\gamma$, so the conditional expectation is better behaved than the germ itself. We then hope that $(F_x)_{x\\in\\mathbb{R}^d}$ can ``borrow regularity'' from its conditional expectation.\n\nKhoa L\u00ea showed in \\cite{le} a technique, based on the Doob-Meyer decomposition, which allows precisely that: He managed to borrow up to $\\frac 12$ regularity from the conditional expectations of stochastic processes, under suitable adaptedness assumptions. This indicates that we should assume $\\tilde\\gamma = \\gamma+\\frac 12$. \n\nThe main technique he used in his paper is the following: Consider a sequence of random variables $(Z_n)_{n\\in\\mathbb N}\\subset L^p$ for some $2\\le p<\\infty$, which is adapted to some filtration $(\\mathcal F_{n+1})_{n\\in\\mathbb N}$ (i.e. $Z_n$ is $\\mathcal F_{n+1}$-measurable). Then one can use the Doob-Meyer decomposition \\cite{doob} and the Burkholder-Davis-Gundy inequality \\cite{BDG} to show\n\n\\begin{equation}\\label{BDG}\n\\norm{\\sum_{i=1}^N Z_i}_{L_p} \\lesssim\\sum_{i=1}^N\\norm{E^{\\mathcal F_i}Z_i}_{L_p} + \\left(\\sum_{i=1}^N \\norm{Z_i-E^{\\mathcal F_i}Z_i}_{L_p}^2\\right)^{\\frac 12}.\n\\end{equation}\n\n\\noindent Under the assumption that the conditional expectations $E^{\\mathcal F_i}Z_i$ are somehow better behaved than $Z_i$ itself, this is a strictly better bound than the naive bound $\\sum_{i=1}^N\\norm{Z_i}_{L_p}$. 
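\nTo see where \\eqref{BDG} comes from, decompose $Z_i = E^{\\mathcal F_i}Z_i + (Z_i-E^{\\mathcal F_i}Z_i)$: the first sum is bounded by the triangle inequality, while the terms $Z_i-E^{\\mathcal F_i}Z_i$ form a sequence of martingale differences (each is $\\mathcal F_{i+1}$-measurable with vanishing $\\mathcal F_i$-conditional expectation), so the Burkholder-Davis-Gundy inequality, combined with Minkowski's inequality in $L_{p\/2}$, bounds their sum by the square function term on the right-hand side.\n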
For all details, see \\cite{le}.\n\nHowever, the higher dimension of our germ enables us to use this technique repeatedly: That is, if we have suitable filtrations $\\mathcal F^{(i)}_{y_i}$ for $i=1,\\dots,e\\le d$ and\n\n\\begin{equation*}\n\\norm{E^{\\mathcal F^{(i)}_{y_i}}(F_x-F_y)(\\psi_x^\\epsilon)}_{L_p}\\lesssim \\epsilon^\\alpha(\\abs{x-y}+\\epsilon)^{\\tilde \\gamma-\\alpha}\n\\end{equation*}\n\n\\noindent holds for all directions $i = 1,\\dots, e$, we can borrow $\\frac 12$ regularity from all of those directions, resulting in $\\gamma = \\tilde{\\gamma}-\\frac e2$. We call $e$ the stochastic dimension of $(F_x)$. We say that $(F_x)$ is stochastically $\\gamma$-coherent if it fulfills certain adaptedness properties (see Definition \\ref{stochDimension} for details), and for $i=1,\\dots,e\\le d$\n\n\\begin{align*}\n\\norm{(F_x-F_y)(\\psi_x^\\epsilon)}_{L_p}\\lesssim \\epsilon^\\alpha(\\abs{x-y}+\\epsilon)^{\\gamma-\\frac e2-\\alpha} \\\\\n\\norm{E^{\\mathcal F^{(i)}_{y_i}}(F_x-F_y)(\\psi_x^\\epsilon)}_{L_p}\\lesssim \\epsilon^\\alpha(\\abs{x-y}+\\epsilon)^{\\gamma-\\alpha}.\n\\end{align*}\n\n\\noindent Under this condition, we can formulate our main result:\n\n\\begin{theorem}[Stochastic reconstruction]\nAssume the adaptedness condition of Definition \\ref{stochDimension} and that $(F_x)_{x\\in\\mathbb{R}^d}$ is stochastically $\\gamma$-coherent for some $\\gamma>0$ with stochastic dimension $e\\le d$. Then, there exists a unique random distribution $f$ which fulfills\n\n\\begin{align*}\n\\norm{(f-F_x)(\\psi_x^\\epsilon)}_{L_p}&\\lesssim \\epsilon^{\\gamma-\\frac e2}\\\\\n\\norm{E^{\\mathcal F^{(i)}_{x_i}}(f-F_x)(\\psi_x^\\epsilon)}_{L_p}&\\lesssim \\epsilon^{\\gamma}\n\\end{align*}\n\n\\noindent for $i=1,\\dots,e$.\n\\end{theorem}\n\n\n\\subsection{The structure of this paper}\n\nThis paper is structured as follows: In Section \\ref{prelim}, we formally introduce distributions and scalings, as well as the wavelet techniques needed for the proofs. This section mainly follows \\cite{Hairer} and \\cite{meyer}.\n\nSection \\ref{mainResult} then presents the main result and proves the stochastic reconstruction theorem. It should be noted that while our setting closely resembles the setting of Caravenna and Zambotti \\cite{caravenna}, our proofs are closer to Hairer's original proof, as his wavelet techniques are better suited to applying L\u00ea's stochastic techniques.\n\nIn Section \\ref{sectionSewing}, we show that in one dimension the stochastic reconstruction theorem becomes the stochastic sewing lemma, under weak additional assumptions needed for the distributional framework.\n\nThe last three sections of this paper are dedicated to simple examples of how the stochastic reconstruction theorem can be applied. In Section \\ref{sectionGMM}, we consider a Gaussian martingale measure $W_t(A)$ and a stochastic process $X(t,x)$, and show how the Walsh type integral \\cite{walshOriginal}\n\n\\begin{equation*}\n\\int_0^\\infty \\int_{\\mathbb{R}^d} X(s,x) W(ds,dx)\n\\end{equation*}\n\n\\noindent can be reconstructed as a product between the process $X$ and the distribution $W(ds, dx)$. This closely resembles the Young product in the deterministic case, described at the very start of this paper. 
Thus, this highlights how the integral can be seen as a stochastic version of the Young product of a function $f$ of some H\u00f6lder regularity $\\alpha>0$ and a distribution $g$ of H\u00f6lder regularity $\\beta<0$.\n\nSince the above integral only assumes martingale properties in one dimension, this case only yields stochastic dimension $e=1$. To give an example with $e>1$, we show a similar reconstruction for integration against white noise in Section \\ref{sectionWN} and reconstruct the integral\n\n\\begin{equation*}\n\\int_{\\mathbb{R}^d} X(z)\\xi(dz),\n\\end{equation*}\n\n\\noindent which can be found in \\cite{KPZ}.\n\nFinally, in Section \\ref{sectionDis}, we discuss our results and where the stochastic reconstruction theorem might lead in the future.\n\n\\section{Preliminaries}\\label{prelim}\n\n\\subsection{Distributions and H\u00f6lder continuity}\n\nA property which is typical for the theory of SPDEs is that most solutions are too irregular to be viewed as functions and need to be seen as distributions. Let us quickly introduce those: Let $C_c^\\infty$ be the space of smooth, compactly supported test functions, and let $C_c^r$ be the space of compactly supported, $r$ times continuously differentiable functions. We equip those spaces with the following $C_c^r$ norms: For any multiindex $k=(k_1,\\dots,k_d)$ and any $k_1+\\dots+k_d$ times continuously differentiable function $\\psi$, let $\\psi^{(k)} := \\frac{\\partial^{k_1+\\dots+k_d}}{\\partial x_1^{k_1}\\dots\\partial x_d^{k_d}}\\psi$ be a partial derivative of $\\psi$. Then, for some $r\\in\\mathbb N$, we set\n\n\\begin{equation*}\n\\norm{\\psi}_{C_c^r} = \\sum_{k_1+\\dots+k_d\\le r} \\norm{\\psi^{(k)}}_\\infty.\n\\end{equation*}\n\n\\noindent This allows us to define the space of distributions as follows:\n\n\\begin{definition}\n\nFor any $r\\in\\mathbb N$, a \\emph{distribution} is a linear map $f:C_c^r \\to\\mathbb{R}$, such that for any compact set $K$ there is a constant $C(K)$, such that for any $\\psi\\in C_c^r$ with support in $K$, it holds that\n\n\\begin{equation*}\n\\abs{f(\\psi)}\\le C(K)\\norm{\\psi}_{C_c^r}.\n\\end{equation*}\n\n\\noindent Further, for any $p<\\infty$, we call a linear map $f:C_c^r\\to L_p(\\Omega)$ a \\emph{random distribution} if it fulfills \n\n\\begin{equation*}\n\\norm{f(\\psi)}_{L_p}\\le C(K)\\norm{\\psi}_{C_c^r}.\n\\end{equation*}\n\\end{definition}\n\n\\noindent We use the notation $\\abs{f(\\psi)}\\lesssim \\norm{\\psi}_{C_c^r}$, where we allow the constants hidden in $\\lesssim$ to depend on the underlying compact sets.\n\nWe need to extend the notion of H\u00f6lder continuity to distributions. This requires us to measure the local behavior of distributions, which can be achieved with the help of \\emph{localizations} of test functions. To this end, we call the family of maps $(S^\\lambda)_{\\lambda\\ge 0}$ a \\emph{scaling} associated to the vector $s\\in\\mathbb R^d$ with $s_i>0$ for all $i=1,\\dots,d$, if it is given by\n\n\\begin{align*}\nS^\\lambda :\\mathbb R^d&\\to\\mathbb{R}^d \\\\\n(x_1,\\dots,x_d)&\\mapsto (\\lambda^{s_1} x_1,\\dots, \\lambda^{s_d} x_d).\n\\end{align*}\n\n\\noindent The rank of $S$ is given by $\\abs{S} = \\abs{s} := s_1+\\dots+s_d$. If $\\mathbb{R}^d$ has a scaling $S$ assigned to it, we also equip the space with the seminorm\n\n\\begin{equation*}\n\\abs{x} = \\abs{x_1}^{\\frac 1{s_1}}+\\dots+\\abs{x_d}^{\\frac 1{s_d}},\n\\end{equation*}\n\n\\noindent which is often referred to as a \\emph{homogeneous norm}, although it is not a norm in the strict sense. 
Note that it has the property that $\\abs{S^\\lambda x} = \\sum_{i=1}^d\\abs{\\lambda^{s_i}x_i}^{\\frac 1{s_i}} = \\lambda\\abs{x}$.\n\nTo give two short examples, the canonical scaling $x\\mapsto \\lambda x$ is simply given by the $S$ associated to $s=(1,\\dots,1)$, while the parabolic scaling is associated to $s=(2,1,\\dots,1)$.\n\nWith this in mind, we can now take a look at the localization of a test function: Let $\\psi\\in C_c^r$ be a test function, $x\\in\\mathbb{R}^d$ and $\\lambda\\in(0,1)$. Then, the \\emph{localized} test function $\\psi_x^\\lambda$ (under the scaling $S$) is given by\n\n\\begin{equation*}\n\\psi_x^\\lambda(y) := \\frac 1{\\lambda^{\\abs{S}}} \\psi\\left(S^{\\frac 1\\lambda}(y-x)\\right).\n\\end{equation*}\n\n\\noindent In the case that $S$ is the canonical scaling associated to $(1,\\dots,1)$, this is simply the classical notion of localized test functions:\n\n\\begin{equation*}\n\\psi_x^\\lambda(y) :=\\frac 1{\\lambda^d}\\psi\\left(\\frac{y-x}{\\lambda}\\right).\n\\end{equation*}\n\n\\noindent The localization has the following properties that we are interested in:\n\n\\begin{itemize}\n\\item Assume $supp(\\psi)\\subset[-C,C]^d$. Then the support of $\\psi^\\lambda_x$ lies in $[x_1-\\lambda^{s_1} C,x_1+\\lambda^{s_1} C]\\times\\dots\\times [x_d-\\lambda^{s_d} C,x_d+\\lambda^{s_d} C]$, and\n\\item it holds that $\\int_{\\mathbb{R}^d}\\abs{\\psi(y)}dy = \\int_{\\mathbb{R}^d}\\abs{\\psi_x^\\lambda(y)} dy$.\n\\end{itemize}\n\n\\noindent These properties justify thinking of $\\psi_x^\\lambda$ as the density of a measure on $\\mathbb{R}^d$, which is highly concentrated around $x$ for small $\\lambda$. Indeed, as $\\lambda\\to 0$, this converges to the Dirac measure $\\delta_x$: For all continuous functions $f:\\mathbb{R}^d\\to \\mathbb{R}$\n\n\\begin{equation}\\label{dirac}\n\\scalar{f,\\psi_x^\\lambda}\\xrightarrow{\\lambda\\to 0} f(x),\n\\end{equation}\n\n\\noindent where $\\scalar{f,g} = \\int_{\\mathbb{R}^d} f(x) g(x) dx$ is simply the $L_2(\\mathbb{R}^d)$ scalar product.\n\nFor a true distribution $g$ (i.e. there is no function $\\tilde g$ such that $g(\\psi) = \\scalar{\\tilde g,\\psi}$ for all $\\psi\\in C_c^r$), $g(\\psi_x^\\lambda)$ is bound to diverge as $\\lambda\\to 0$. (If it were to converge, we could set $\\tilde g(x) = g(\\delta_x) = \\lim_{\\lambda\\to 0}g(\\psi_x^\\lambda)$.) The speed of divergence is going to give us a way to define the regularity of $g$, which will help us to define H\u00f6lder spaces of negative regularity. \n\nLet us first introduce H\u00f6lder continuity for positive regularities: For an $\\alpha\\in (0,1]$ and a compact set $K\\subset \\mathbb{R}^d$, let\n\n\\begin{equation*}\nC^{\\alpha}(K) := \\{f\\in C^0(\\mathbb{R}^d)~|~\\exists C>0\\forall x,y\\in K: \\abs{f(x)-f(y)}\\le C\\abs{x-y}^\\alpha\\},\n\\end{equation*}\n\n\\noindent and we call $C^\\alpha := \\bigcap_{K\\subset \\mathbb{R}^d} C^{\\alpha}(K)$ the set of (locally) $\\alpha$-H\u00f6lder continuous functions, where the intersection is taken over all compact sets $K$ in $\\mathbb{R}^d$. We also refer to this space simply as the space of H\u00f6lder-continuous functions (and drop the ``locally''), as every function and distribution in this paper will only be locally H\u00f6lder continuous. 
In keeping with the $\\lesssim$ notation, we write $\\abs{f(x)-f(y)}\\lesssim\\abs{x-y}^\\alpha$ for $f\\in C^\\alpha$.\n\nFor random processes, we define the space of $\\alpha$-H\u00f6lder continuous processes by\n\n\\begin{equation*}\nC^\\alpha(L_p(\\Omega)) := \\{X:\\mathbb{R}^d\\to L_p(\\Omega)~\\vert~ \\norm{X(x)-X(y)}_{L_p}\\lesssim \\abs{x-y}^\\alpha\\}.\n\\end{equation*}\n\n\\noindent As mentioned above, for $\\alpha<0$, we measure the H\u00f6lder regularity of a distribution $f$ by measuring the speed of divergence of $f(\\psi_x^\\lambda)$ for $\\lambda\\to 0$. More specifically, we define the space $C^\\alpha$ for negative $\\alpha$ as follows:\n\n\\begin{definition}\n\nLet $\\alpha<0$, and $r\\in\\mathbb N$ with $r >\\abs\\alpha$. A distribution $f:C_c^r\\rightarrow\\mathbb{R}$ is in the space $C^\\alpha$, if for every test function $\\psi\\in C_c^r$ with $\\norm{\\psi}_{C_c^r}=1$ and compact support in $[-1,1]^d$, and for each compact set $K\\subset\\mathbb{R}^d$, there is some constant $C(K)$, such that for all $x\\in K$ and $\\lambda\\in(0,1]$ the following holds:\n\n\\begin{equation*}\n\\abs{f(\\psi_x^\\lambda)}\\le C(K) \\lambda^\\alpha.\n\\end{equation*}\n\n\\noindent For a random distribution, we say that $f\\in C^\\alpha(L_p(\\Omega))$, if it holds that\n\n\\begin{equation*}\n\\norm{f(\\psi_x^\\lambda)}_{L_p}\\le C(K) \\lambda^\\alpha.\n\\end{equation*}\n\\end{definition}\n\n\n\\subsection{Wavelet Basis}\n\nOne of the most powerful tools we will use is the wavelet basis. We introduce this tool following Section 3.1 of \\cite{Hairer}; a more detailed discussion of the topic can be found in \\cite{meyer}.\n\nIntuitively, a wavelet basis is given by a smooth, compactly supported function $\\phi$, such that any test function $\\psi$ can be decomposed in the form\n\n\\begin{equation}\\label{wavelet}\n\\psi = \\lim_{n\\to\\infty} \\sum_{k\\in\\Delta_n} b_{n,k} \\phi_{k}^n,\n\\end{equation}\n\n\\noindent where the limit holds in $L_2(\\mathbb{R}^d)$, $\\phi_k^n$ is a certain localized version of $\\phi$ around $k$, and $\\Delta_n$ is a mesh which gets increasingly fine as $n\\to\\infty$. This allows us to analyze arbitrary test functions $\\psi$, using only the local properties of $\\phi_k^n$. Such a wavelet can be constructed using the following theorem of Daubechies, which can be found in \\cite{debauchies}:\n\n\\begin{theorem}[Daubechies]\\label{theoWave1}\n\nFor any $r\\in\\mathbb N$, there exists a function $\\phi\\in C_c^r(\\mathbb{R})$, such that\n\n\\begin{enumerate}\n\\item For all $z\\in\\mathbb Z$, it holds that\n\n\\begin{equation*}\n\\scalar{\\phi,\\phi(\\cdot-z)} = \\delta_{0,z}.\n\\end{equation*}\n\n\\item $\\phi$ is self-replicating, i.e. there exist constants $(a_k)_{k\\in\\mathbb Z}$ such that\n\n\\begin{equation*}\n\\frac 12\\phi(x\/2) = \\sum_{k\\in\\mathbb Z} a_k\\phi(x-k).\n\\end{equation*}\n\\end{enumerate}\n\\end{theorem}\n\n\\noindent Due to the compact support of $\\phi$, it holds that $a_k = 0$ for all but finitely many $k$. In Daubechies' setting, it does not matter where the support of $\\phi$ lies, as long as it is compact. However, the stochastic theory requires us to separate the number line $\\mathbb{R}$ into the ``future'' $[0,\\infty)$ and the ``past'' $(-\\infty,0]$. In particular, we will only be interested in functions $\\phi$ with compact support in the future, i.e. $supp(\\phi)\\subset[0,R]$ for some $R>0$. Such a wavelet can easily be constructed by replacing $\\phi$ with $\\phi(\\cdot-x)$ for a large enough $x>0$.
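To keep a concrete (albeit non-smooth) example in mind: for $r=0$, the Haar scaling function $\\phi = 1_{[0,1)}$ fulfills both properties. Indeed, $\\scalar{\\phi,\\phi(\\cdot-z)} = \\delta_{0,z}$ for all $z\\in\\mathbb Z$, and\n\n\\begin{equation*}\n\\frac 12\\phi(x\/2) = \\frac 12\\phi(x)+\\frac 12\\phi(x-1),\n\\end{equation*}\n\n\\noindent i.e. $a_0 = a_1 = \\frac 12$ and $a_k = 0$ otherwise. Its support already lies in the ``future'' $[0,1]$, and $a_k = 0$ for all $k<0$.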
Further observe that, by choosing $x$ large enough, we can also ensure that $a_k=0$ for all $k<0$ in property ii).\n\nProperty $i)$ ensures that the recentered wavelets $\\phi_z := \\phi(\\cdot-z)$, $z\\in\\mathbb Z$ form an orthonormal system in $L_2(\\mathbb{R})$. Since this theory works in $L_2$, our usual $L_1$ localization is less useful than the following $L_2$ localization: Let $\\Delta_n := 2^{-n}\\mathbb Z$. Then, for all $k\\in\\Delta_n$, we define\n\n\\begin{equation*}\n\\phi_k^n(x) := 2^{n\\frac 12}\\phi(2^{n}(x-k)) = 2^{-n\\frac 12}\\phi_k^{2^{-n}}(x).\n\\end{equation*}\n\n\\noindent With this notation, it is indeed well understood that we get a representation in the form of \\eqref{wavelet}: For any $\\psi\\in L_2(\\mathbb{R})$, it holds that\n\n\\begin{equation}\\label{reihe}\n\\lim_{n\\to\\infty} \\sum_{k\\in\\Delta_n} \\scalar{\\psi,\\phi_k^n}\\phi_k^n = \\psi,\n\\end{equation}\n\n\\noindent where the limit holds in $L_2(\\mathbb{R})$.\n\nLet $V_n := span\\{\\phi_k^n ~\\vert~ k\\in\\Delta_n\\}$. It follows from the self-replication property of $\\phi$ that $V_{n-1} \\subset V_n$. Equation \\eqref{reihe} gives us that $L_2(\\mathbb{R}) = \\overline{\\bigcup_{n\\in\\mathbb N} V_n}$, so we can really think of $(\\phi_k^n)_{k\\in\\Delta_n}$ as ``almost an orthonormal basis'' of $L_2$ for sufficiently large $n$.\n\nOne of the strongest properties of this wavelet basis is that we also have a precise description of the ``missing part'' $\\psi-\\sum_{k\\in\\Delta_n} \\scalar{\\psi,\\phi_k^n}\\phi_k^n$, given by the following theorem from \\cite{meyer}:\n\n\\begin{theorem}[Meyer]\\label{theoWave2}\n\nThere is a function $\\hat\\phi\\in C_c^r$, such that\n\n\\begin{enumerate}\n\n\\item $\\hat V_n := span\\{\\hat\\phi_k^n~\\vert~k\\in\\Delta_n\\}$ is the orthogonal complement of $V_{n}$ in $V_{n+1}$, i.e. $\\hat V_n = V_{n}^{\\perp V_{n+1}}$.\n\n\\item Let $\\hat\\phi_k^n := 2^{-n\\frac 12}\\hat\\phi_k^{2^{-n}}$. Then, $(\\hat \\phi_k^n)_{n\\in\\mathbb N,k\\in\\Delta_n}$ is a family of orthonormal functions:\n\n\\begin{equation*}\n\\scalar{\\hat\\phi_k^n,\\hat\\phi_l^m} = \\delta_{n,m}\\delta_{k,l}.\n\\end{equation*}\n\n\\item $\\hat\\phi$ eliminates all polynomials of degree at most $r$: For any monomial $x^m$ with $m\\le r$, it holds that\n\n\\begin{equation*}\n\\scalar{x^m,\\hat\\phi} = 0.\n\\end{equation*}\n\n\\end{enumerate}\n\n\\end{theorem}\n\n\n\\begin{remark}\nProperties i) and ii) give us that for all $n\\in\\mathbb N$,\n\n\\begin{equation*}\n\\{\\phi_k^n~\\vert~k\\in\\Delta_n\\}\\cup\\{\\hat\\phi_l^m~\\vert~m\\ge n,l\\in\\Delta_m\\}\n\\end{equation*}\n\n\\noindent forms an orthonormal basis of $L_2$. In particular, each $\\psi\\in L_2$ can be represented via\n\n\\begin{equation*}\n\\psi = \\sum_{k\\in\\Delta_n}\\scalar{\\psi,\\phi_k^n}\\phi_k^n+\\sum_{m=n}^\\infty\\sum_{l\\in\\Delta_m}\\scalar{\\psi,\\hat\\phi_l^m}\\hat\\phi_l^m,\n\\end{equation*}\n\n\\noindent where the right-hand side should be seen as an $L_2$ limit. This representation should be interpreted as $\\phi$ approximating $\\psi$, and $\\hat\\phi$ ``filling in the details''.\n\\end{remark}\n\n\\noindent How do we extend this wavelet basis to $\\mathbb{R}^d$ equipped with a scaling $S$? The trick is to use $d$ copies of $\\phi$ or $\\hat\\phi$, respectively, and to multiply them to get a multidimensional wavelet.
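In $d=2$, for instance, this yields the scaling function $\\phi(y_1)\\phi(y_2)$ together with the three wavelets $\\phi(y_1)\\hat\\phi(y_2)$, $\\hat\\phi(y_1)\\phi(y_2)$ and $\\hat\\phi(y_1)\\hat\\phi(y_2)$. The general construction is as follows: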
Let $S$ be associated to $s\\in\\mathbb R^d_+$, and let $\\Delta_n^S\\subset\\mathbb{R}^d$ be the rescaled mesh defined by\n\n\\begin{equation*}\n\\Delta_n^S = \\left\\{\\sum_{i=1}^d 2^{-ns_i}l_i\\vec e_i ~\\bigg\\vert~ l_i\\in\\mathbb Z\\right\\},\n\\end{equation*}\n\n\\noindent where $\\vec e_i = (0,\\dots,0,1,0,\\dots,0)$ is the $i$-th unit vector of $\\mathbb{R}^d$. We usually drop the index $S$ on $\\Delta_n^S$, since the scaling will be fixed in the later sections of this paper. We can define $\\phi$ as a map on $\\mathbb{R}^d$ via\n\n\\begin{equation*}\n\\phi(y) = \\prod_{i=1}^d \\phi(y_i)\n\\end{equation*}\n\n\\noindent to get a wavelet on $\\mathbb{R}^d$. Its $L_2$-localizations\n\n\n\\begin{align*}\n\\phi_k^n = 2^{-n\\frac{\\abs{S}}{2}}\\phi_k^{2^{-n}},\n\\end{align*}\n\n\\noindent where $k\\in\\Delta_n$, form a wavelet basis for $L_2(\\mathbb{R}^d)$ which respects $S$.\n\n\\begin{comment}\n\\begin{remark}\nThis leads to the same expressions as directly defining\n\n\\begin{equation*}\n\\phi_k^n(x) = \\prod_{i=1}^d 2^{n\\frac {s_i}2}\\phi(2^{ns_i}(x_i-k_i))\n\\end{equation*}\n\n\\noindent for any $k\\in\\Delta_n$, which is the definition used in \\cite{Hairer}. However, this definition hides away the fact that $\\phi_y^n$ is a wavelet basis on $\\mathbb{R}^d$ which can be fully constructed from the localizations of a single function $\\phi\\in C_c^r(\\mathbb{R}^d)$.\n\\end{remark}\n\\end{comment}\n\nWe need to find a description of $V_{n}^{\\perp V_{n+1}}$, similar to before. While this is no longer possible with a single function $\\hat\\phi$ and its rescaled versions, it is possible to find a finite set $\\Phi$, such that\n\n\\begin{equation*}\nV_{n}^{\\perp V_{n+1}} = span\\{\\hat\\phi_k^n ~\\vert~ k\\in\\Delta_n,\\hat\\phi\\in\\Phi\\}.\n\\end{equation*}\n\n\\noindent In fact, $\\Phi$ is given by all possible functions of the form $\\hat\\phi(x_1,\\dots,x_d) = \\prod_{i=1}^d \\tilde \\phi_i(x_i)$, where each $\\tilde\\phi_i(x_i)$ is either $\\phi(x_i)$ or $\\hat\\phi(x_i)$ and at least one $\\tilde\\phi_i(x_i)$ is given by $\\hat\\phi(x_i)$ (for reference, see \\cite{meyer}). Since $\\hat\\phi$ eliminates polynomials of degree at most $r$, this implies that for all $\\hat\\phi\\in\\Phi$, $\\scalar{\\hat\\phi,p} = 0$ for all multivariate polynomials $p$ of degree at most $r$. \n\nHaving to deal with a set $\\Phi$ instead of a single function $\\hat\\phi$ is usually just a technical detail. As it turns out, the summation over $\\Phi$ only adds a constant to the estimates in Section \\ref{mainResult}. At no point does it help or obstruct our arguments in a meaningful way.\n\nOne can show that $\\phi$ and all $\\hat\\phi\\in\\Phi$ fulfill the properties of Theorems \\ref{theoWave1} and \\ref{theoWave2}, which will be stated in detail in Summary \\ref{summary}.\n\nOne of the most practical properties, which we are going to use extensively, is that both $\\scalar{\\phi_k^n,\\psi}$ and $\\scalar{\\hat\\phi_k^n,\\psi}$ go to $0$ as $n \\to\\infty$ at well-known rates. This should not be surprising, as \\eqref{dirac} implies that $\\phi_k^{2^{-n}}\\to\\delta_k$ for $n\\to\\infty$. Thus, $\\phi_k^n = 2^{-n\\frac {\\abs{S}}2}\\phi_k^{2^{-n}}$ should go to $0$ at rate $2^{-n\\frac {\\abs{S}}2}$. What is surprising is that $\\hat\\phi$ converges at a faster rate, thanks to property iii) of Theorem \\ref{theoWave2}: \n\n\\begin{lemma}\\label{lemma1}\nLet $\\psi\\in C_c^r$.
Then, for all $\\hat\\phi\\in\\Phi$, $n\\in\\mathbb N, y\\in\\Delta_n,z\\in\\mathbb{R}^d$ and $2^{-n}\\le\\lambda\\le 1$, it holds that\n\n\\begin{align}\n\\abs{\\scalar{\\phi_y^n,\\psi_z^\\lambda}} &\\lesssim 2^{-n\\frac {\\abs{S}}2}\\lambda^{-\\abs{S}}\\norm{\\psi}_{C_c^r} \\label{LemIneq1}\\\\\n\\abs{\\scalar{\\hat\\phi_y^n,\\psi_z^\\lambda}} &\\lesssim 2^{-n\\frac {\\abs{S}}2-n\\tilde r}\\lambda^{-\\abs{S}-\\tilde r}\\norm{\\psi}_{C_c^r},\\label{LemIneq2}\n\\end{align}\n\n\\noindent where $\\tilde r = r\\cdot\\min_{i=1,\\dots,d}(s_i)$.\n\\end{lemma}\n\n\\begin{proof}\nTo show \\eqref{LemIneq1}, observe that\n\n\\begin{align*}\n\\abs{\\scalar{\\phi_y^n,\\psi_z^\\lambda}} &\\le 2^{n\\frac {\\abs{S}}2}\\lambda^{-\\abs{S}} \\int \\abs{\\phi\\left(S^{2^{n}}(x-y)\\right)\\psi\\left(S^{\\frac 1\\lambda}(x-z)\\right)}dx \\\\\n&= 2^{-n\\frac {\\abs{S}}2} \\lambda^{-\\abs{S}} \\int\\abs{\\phi(u-S^{2^{n}}y)\\psi\\left(S^{\\frac 1{2^n\\lambda}}u - S^{\\frac 1\\lambda}z\\right)}du \\\\&\\le 2^{-n\\frac {\\abs{S}}2} \\lambda^{-\\abs{S}}\\sup_u\\abs{\\psi(u)}\\norm{\\phi}_{L_1} \\\\&\\lesssim 2^{-n\\frac {\\abs S}2} \\lambda^{-\\abs S} \\norm{\\psi}_{C_c^r}.\n\\end{align*}\n\n\\noindent To see \\eqref{LemIneq2}, let $T^r_{x_0}$ be the $r$-th Taylor approximation of $\\psi$ at the point $x_0$. Recall that\n\n\\begin{equation*}\n\\abs{\\psi(x_0+u)-T^r_{x_0}(x_0+u)}\\lesssim \\norm{u}_1^r\\norm{\\psi^{(r)}}_{\\infty},\n\\end{equation*}\n\n\\noindent where $\\norm{u}_1 = \\abs{u_1}+\\dots+\\abs{u_d}$ is the 1-norm of the vector $u$. Let $\\tilde s = \\min_{i=1,\\dots,d}(s_i)$ and observe that $\\frac{1}{2^n\\lambda}\\le 1$ implies\n\n\\begin{align*}\n \\norm{S^{\\frac 1{2^n\\lambda}}u}_1 &= \\abs{\\frac{1}{2^{ns_1}\\lambda^{s_1}}u_1}+\\dots+\\abs{\\frac{1}{2^{ns_d}\\lambda^{s_d}}u_d}\\\\\n & = \\left(\\frac{1}{2^n\\lambda}\\right)^{\\tilde s}\\left(\\abs{\\frac{1}{2^{ns_1-\\tilde s}\\lambda^{s_1-\\tilde s}}u_1}+\\dots+\\abs{\\frac{1}{2^{ns_d-\\tilde s}\\lambda^{s_d-\\tilde s}}u_d}\\right)\\\\\n &\\le \\left(\\frac{1}{2^n\\lambda}\\right)^{\\tilde s}\\norm{u}_1.\n\\end{align*}\n\n\\noindent With this in mind, we can use $\\int \\hat\\phi(u) T^r_{x_0}(u) du = 0$ to get\n\n\\begin{align*}\n\\abs{\\scalar{\\hat\\phi_y^n,\\psi_z^\\lambda}} &\\le 2^{-n\\frac {\\abs S}2}\\lambda^{-\\abs S}\\int\\abs{\\hat\\phi(u-S^{2^n}y)}\\underbrace{\\abs{\\psi\\left(S^{\\frac 1{2^n\\lambda}}u-S^{\\frac 1\\lambda}z\\right)-T_{-S^{\\frac 1\\lambda}z}^r\\left(S^{\\frac 1{2^n\\lambda}}u-S^{\\frac 1\\lambda}z\\right)}}_{\\lesssim \\norm{S^{\\frac 1{2^n\\lambda}}u}_1^r\\norm{\\psi^{(r)}}_{\\infty}\\lesssim 2^{-n\\tilde r}\\lambda^{-\\tilde r}\\norm{u}_1^r\\norm{\\psi^{(r)}}_{\\infty}}du \\\\&\\lesssim 2^{-n\\frac {\\abs S}2-n\\tilde r}\\lambda^{-{\\abs S}-\\tilde r} \\int\\abs{\\hat\\phi(u-S^{2^{n}}y)}\\norm{u}_1^r\\norm{\\psi^{(r)}}_{\\infty}du \\\\&\\lesssim 2^{-n\\frac {\\abs S}2-n\\tilde r}\\lambda^{-\\abs S-\\tilde r}\\norm{\\psi}_{C_c^r}.\n\\end{align*}\n\n\\end{proof}\n\n\\noindent For the reader's convenience, we summarize the conditions that our constructed wavelets $\\phi,\\hat\\phi$ fulfill:\n\n\\begin{summary}\\label{summary}\n\n$\\Delta_n\\subset\\mathbb{R}^d$ denotes the rescaled mesh, given by\n\n\\begin{equation*}\n\\Delta_n = \\left\\{\\sum_{i=1}^d 2^{-ns_i}l_i\\vec e_i ~\\bigg\\vert~ l_i\\in\\mathbb Z\\right\\}.\n\\end{equation*}\n\n\\noindent For any $r\\in\\mathbb N$, there exists a wavelet basis defined by a function $\\phi \\in C_c^r(\\mathbb{R}^d)$ and a finite set $\\Phi\\subset C_c^r(\\mathbb{R}^d)$, such that
the following properties hold:\n\n\\begin{enumerate}\n\\item There is an $R>0$, such that $supp(\\phi), supp(\\hat\\phi)\\subset [0,R]^d$ for all $\\hat\\phi\\in\\Phi$.\n\n\\item There exist coefficients $(a_k)_{k\\in\\mathbb Z^d}$ with $a_k=0$ if any entry of the vector $k$ is negative, such that\n \n\\begin{equation}\n2^{-\\frac {\\abs S}2} \\phi(S^{1\/2}x) = \\sum_{k\\in\\Delta_1} a_k\\phi_k(x). \\label{ak}\n\\end{equation}\n\n\\noindent Note that the compact support of $\\phi$ immediately implies that only finitely many $a_k$ are non-zero.\n\n\\item For all $\\hat\\phi\\in\\Phi$, denote \n\n\\begin{align*}\n\\phi^n_y &:= 2^{-n\\frac {\\abs S}2} \\phi_y^{2^{-n}} \\\\ \\hat\\phi^n_y &:= 2^{-n\\frac {\\abs S}2} \\hat\\phi_y^{2^{-n}}.\n\\end{align*}\n\nWe further denote the spaces spanned by these localized functions by\n\n\\begin{align*}\nV_n &:= \\overline{span\\{\\phi_{y}^n \\vert y\\in\\Delta_n\\}}\\\\\n\\hat V_n &:= \\overline{span\\{\\hat\\phi_{y}^n \\vert y\\in\\Delta_n,\\hat\\phi\\in\\Phi\\}}\n\\end{align*}\n\nand the projections onto these spaces by\n\n\\begin{align*}\nP_n f &:= \\sum_{y\\in\\Delta_n} \\scalar{f,\\phi_y^n}\\phi_y^n \\\\\n\\hat P_n f &:= \\sum_{y\\in\\Delta_n}\\sum_{\\hat\\phi\\in\\Phi} \\scalar{f,\\hat\\phi_y^n}\\hat\\phi_y^n.\n\\end{align*}\n\n\\item $\\hat V_n$ is the orthogonal complement of $V_{n}$ in the space $V_{n+1}$:\n\n\\begin{equation*}\n\\hat V_n = V_{n}^{\\perp V_{n+1}}.\n\\end{equation*}\n\n\n\\item For all $n\\in\\mathbb N$,\n\n\\begin{equation*}\n\\{\\phi_y^n ~\\vert~ y\\in\\Delta_n\\}\\cup\\{\\hat\\phi^m_y~\\vert~\\hat\\phi\\in\\Phi, y\\in\\Delta_m,m\\ge n\\}\n\\end{equation*}\n\nis an orthonormal basis of $L_2(\\mathbb R^d)$. In particular, $L_2(\\mathbb R^d) = V_n\\oplus\\bigoplus_{m\\ge n} \\hat V_m$ and it holds that\n\n\\begin{align*}\n\\psi &= P_n\\psi+\\sum_{m=n}^\\infty\\hat P_m\\psi\\\\ &=\\sum_{k\\in\\Delta_n}\\scalar{\\psi,\\phi_k^n}\\phi_k^n+\\sum_{m=n}^\\infty\\sum_{l\\in\\Delta_m}\\sum_{\\hat\\phi\\in\\Phi}\\scalar{\\psi,\\hat\\phi_l^m}\\hat\\phi_l^m\n\\end{align*}\n\nfor any $\\psi\\in L_2(\\mathbb{R}^d)$.\n\n\\item The first part of the above decomposition converges to $\\psi$ in $L_2(\\mathbb{R}^d)$, i.e.\n\n\\begin{equation*}\n\\psi = \\lim_{n\\to\\infty}\\sum_{k\\in\\Delta_n}\\scalar{\\psi,\\phi_k^n}\\phi_k^n.\n\\end{equation*}\n\n\\item For all $\\hat\\phi\\in\\Phi, n\\in\\mathbb N, y\\in\\Delta_n,z\\in\\mathbb{R}^d$ and $2^{-n}\\le\\lambda\\le 1$, the following inequalities hold:\n\n\\begin{align*}\n\\abs{\\scalar{\\phi_y^n,\\psi_z^\\lambda}} &\\lesssim 2^{-n\\frac {\\abs{S}}2}\\lambda^{-\\abs{S}}\\norm{\\psi}_{C_c^r} \\\\\n\\abs{\\scalar{\\hat\\phi_y^n,\\psi_z^\\lambda}} &\\lesssim 2^{-n\\frac {\\abs{S}}2-n\\tilde r}\\lambda^{-\\abs{S}-\\tilde r}\\norm{\\psi}_{C_c^r},\n\\end{align*}\n\nwhere $\\tilde r = r\\cdot\\min_{i=1,\\dots,d}(s_i)$.\n\\end{enumerate}\n\n\\end{summary}\n\n\\section{Stochastic Reconstruction}\\label{mainResult}\n\nThe goal of this section is to state and prove our main result, the stochastic reconstruction theorem. Let $(F_z)_{z\\in\\mathbb{R}^d}$ be a family of distributions, which we call a \\emph{germ}. Recall that Martin Hairer showed in \\cite{Hairer} that the reconstructing sequence\n\n\\begin{equation}\\label{sequence1}\nf_n(\\psi) := \\sum_{y\\in\\Delta_n}F_y(\\phi_y^n)\\scalar{\\phi_y^n,\\psi}\n\\end{equation}\n\n\\noindent converges to a limiting distribution whenever the germ $(F_z)_{z\\in\\mathbb R^d}$ fulfills a property called $\\gamma$-coherence for a $\\gamma > 0$.
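For the reader's convenience, let us recall this condition in the formulation of \\cite{caravenna}, adapted to our notation: a germ $(F_z)_{z\\in\\mathbb R^d}$ is called \\emph{$\\gamma$-coherent} if there is an $\\alpha\\in\\mathbb R$ such that, for all test functions $\\psi$ with $\\norm{\\psi}_{C_c^r}=1$ and suitable compact support, and all $x,y$ in a compact set,\n\n\\begin{equation*}\n\\abs{(F_y-F_x)(\\psi_y^\\epsilon)}\\lesssim \\epsilon^{\\alpha}(\\abs{x-y}+\\epsilon)^{\\gamma-\\alpha}.\n\\end{equation*}\n\n\\noindent Definition \\ref{defCoherence} below is the stochastic analogue of this bound.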
(Note that Hairer did not explicitly mention the coherence property. It was introduced by Caravenna and Zambotti in \\cite{caravenna}. However, Hairer's proof works for \\eqref{sequence1} under the coherence assumption of Caravenna-Zambotti.)\n\n\\noindent Our analysis of the stochastic case led us to the introduction of a new concept, which we call \\emph{stochastic dimension}. To motivate this, consider space-time white noise $\\xi(t,x)$ in $1+1$ dimensions and an $\\alpha$-H\u00f6lder continuous (for now deterministic) function $X\\in C^\\alpha(\\mathbb{R}^2)$ for some $\\alpha>0$. We show in Section \\ref{sectionWN} that, for the germ $(F_{(t,x)})_{(t,x)\\in\\mathbb{R}^2}$ defined by\n\n\\begin{equation}\\label{whiteNoise}\nF_{(t,x)}(\\psi) := X(t,x)\\xi(\\psi) = X(t,x) \\int\\int \\psi(s,y)\\xi(s,y) dsdy,\n\\end{equation}\n\n\\noindent the reconstructing sequence $f_n(\\psi)$ converges in $L_p(\\Omega)$ for $p<\\infty$:\n\n\\begin{equation*}\nf_n(\\psi) \\xrightarrow{n\\to\\infty} \\int\\int X(s,y)\\psi(s,y)\\xi(s,y) dsdy,\n\\end{equation*}\n\n\\noindent where the right-hand side is a Walsh-type integral against white noise, see \\cite{walshOriginal} for reference. It is also not hard to see that $F_{(t,x)}$ is only $(-1+\\alpha)^-$-coherent (which reflects $\\xi$ having regularity $(-1)^-$, if we use the canonical scaling $s = (1,1)$, and $X$ having regularity $\\alpha$). This indicates that it is indeed possible to use stochastic machinery to improve the reconstruction theorem. Furthermore, there is a neat interpretation of the $\\gamma$ being slightly bigger than $-1$: Stochastic sewing allowed martingale properties to improve the required regularity by $\\frac 12$. The white noise from the example, however, has martingale properties in both directions: time and space. It is thus possible to apply the technique from the stochastic sewing lemma twice, improving the required regularity twice by $\\frac 12$. This allows the reconstruction up to $\\gamma > -1$.\n\nBefore we continue, let us make two quick remarks:\n\n\\begin{remark}\nThis also works for a stochastic process $X(t,x)$ which is adapted to the natural filtrations of the white noise in time and space (introduced below) and which is $\\alpha$-H\u00f6lder continuous in the following sense:\n\n\\begin{equation*}\n\\norm{X(t,x)-X(s,y)}_{L_p}\\lesssim \\abs{(t,x)-(s,y)}^\\alpha.\n\\end{equation*}\n\\end{remark}\n\n\\begin{remark}\n\nIt should be noted that for the above germ, Caravenna-Zambotti's reconstructing sequence, given by\n\n\\begin{equation*}\nf_n(\\psi) = \\int_{\\mathbb{R}^d} F_z(\\rho_z^{\\epsilon_n})\\psi(z)dz,\n\\end{equation*}\n\n\\noindent converges to the same limit. We suspect that the main result of this paper can also be achieved using their mollification techniques.\n\\end{remark}\n\n\\noindent This motivates us to split the dimensions $1,\\dots,d$ into two types of directions: those in which we have certain stochastic properties, and those which lack such properties. We call the number of directions with such properties the \\emph{stochastic dimension} of the germ. \n\nWhat is the correct stochastic setting for this process? Let us again take a look at the example \\eqref{whiteNoise}: $(F_{(t,x)})_{(t,x)\\in\\mathbb{R}^2}$ is a random germ over the probability space $(\\Omega,\\mathcal F,P)$, where $\\mathcal F = \\sigma(\\xi(\\psi)|\\psi\\in L_2(\\mathbb{R}^2))$ is the $\\sigma$-algebra generated by the white noise.
This $\\sigma$-algebra comes with a natural filtration, which one can write informally as ``$\\mathcal F_t = \\{\\xi(s,x)|s\\le t\\}$''. The formal definition for this is of course given by \n\n\\begin{equation*}\n\\mathcal F_t = \\sigma(\\xi(\\psi)\\vert supp(\\psi)\\subset (-\\infty,t]\\times\\mathbb{R}).\n\\end{equation*}\n\n\\noindent Now fix $s,t\\in\\mathbb R$ and a test function $\\psi$ with support in $(-\\infty,s]\\times\\mathbb{R}$. It then holds that $\\xi(\\psi)$ is $\\mathcal F_s$-measurable, and we assume that $X(\\cdot,x)$ is adapted to $(\\mathcal F_t)_{t\\in\\mathbb{R}}$, i.e. $X(t,x)$ is $\\mathcal F_t$-measurable for all $x\\in\\mathbb{R}$. It immediately follows that $F_{t,x}(\\psi) = X(t,x)\\xi(\\psi)$ is $\\mathcal F_{\\max(s,t)}$-measurable.\n\n\nOf course, a similar construction can be made in the spatial direction $x$ with respect to the filtration $\\mathcal F_x^{\\text{spatial}} := \\sigma(\\xi(\\psi)\\vert supp(\\psi)\\subset\\mathbb{R}\\times(-\\infty,x])$. This gives us two different filtrations, and our germ is adapted to both in the sense that for $t,x\\in\\mathbb{R}$ and $\\psi$ with support in $(-\\infty,s]\\times (-\\infty,y]$, $F_{t,x}(\\psi)$ is $\\mathcal F_{\\max(s,t)}$- and $\\mathcal F_{\\max(x,y)}^{\\text{spatial}}$-measurable.\n\nWe want to generalize this adaptedness property: Let $(\\Omega,\\mathcal F,P)$ be a complete probability space, and let $(F_z)_{z\\in\\mathbb{R}^d}$ be a random germ over that space, i.e. for $z\\in\\mathbb{R}^d$, let $F_z:C_c^r\\to L_p(\\Omega)$ be a continuous, linear map for some $p<\\infty$. As in the example above, consider $e$ filtrations $(\\mathcal F_t^{(i)})_{t\\in\\mathbb{R}}$, $i=1,\\dots,e$. We make the following assumptions on these filtrations:\n\n\\begin{itemize}\n\\item {\\bf Completeness:} We assume that each $\\mathcal F^{(i)}_t$ is complete. This guarantees that any modification of an $\\mathcal F^{(i)}_t$-measurable random variable $X$ is still $\\mathcal F^{(i)}_t$-measurable.\n\n\\item {\\bf Right-continuity:} We assume that, for $i=1,\\dots, e$, $(\\mathcal F_t^{(i)})_{t\\in\\mathbb{R}}$ is a right-continuous filtration. This will allow the limits in the proofs to come to have the correct adaptedness properties.\n\n\\item {\\bf Orthogonality:} In the proof of Theorem \\ref{stochastic_reconstruction}, we will iteratively condition our germs onto $\\sigma$-algebras $\\mathcal F^{(i)}_t$ with different $i$. For this to make sense, we need to assume that conditioning in one direction does not change measurability along the other directions. To be precise, we assume that if a random variable $X$ is $\\mathcal F_{z_i}^{(i)}$-measurable for some $1\\le i\\le e$ and $z_i\\in\\mathbb{R}$, then $E^{\\mathcal F_{z_j}^{(j)}}X$ is still $\\mathcal F_{z_i}^{(i)}$-measurable for all $1\\le j\\le e, j\\neq i$ and all $z_j\\in\\mathbb{R}$.\n\\end{itemize}\n\n\\noindent We denote by $\\pi_i$ the projection onto the $i$-th coordinate. With this notation, we can now define the term stochastic dimension: \n\n\\begin{definition}\\label{stochDimension}\nLet $(F_x)_{x\\in\\mathbb{R}^d}$ be a random germ (i.e. a family of random distributions) in $d$ dimensions, and let $e\\le d$.
Let $ (\\mathcal F^{(i)}_z)_{z\\in\\mathbb{R}}, i=1,\\dots,e$ be filtrations of some $\\sigma$-algebra $\\mathcal F$ which fulfill the above three properties of completeness, right-continuity and orthogonality.\n\n\\noindent\nWe say that $(F_x)$ is of \\emph{stochastic dimension} $e$ if, for $i=1,\\dots,e$ and any test function $\\psi$ with support $supp(\\psi)\\subset \\mathbb{R}^{i-1}\\times(-\\infty,y]\\times\\mathbb{R}^{d-i}$, it holds that \n\n\\begin{equation}\\label{measurability}\nF_x(\\psi) \\text{ is } \\mathcal F^{(i)}_{\\max(\\pi_i(x),y)}\\text{-measurable}.\n\\end{equation}\n\n\\noindent We further say that a random distribution $f$ has stochastic dimension $e$, if for $i=1,\\dots,e$ and any test function $\\psi$ with support $supp(\\psi)\\subset \\mathbb{R}^{i-1}\\times(-\\infty,y]\\times\\mathbb{R}^{d-i}$,\n\n\\begin{equation*}\nf(\\psi)\\text{ is } \\mathcal F^{(i)}_{y}\\text{-measurable}.\n\\end{equation*}\n\\end{definition}\n\n\\begin{remark}\nAs most readers have undoubtedly noticed, we use a two-directional setting in the sense that we always use $t\\in\\mathbb{R}$ instead of $t\\in[0,\\infty)$. In one dimension, our example \\eqref{whiteNoise} becomes a two-sided Brownian motion, not a classical one.\n\nSince this whole theory relies heavily on recentering and scaling operations, the origin does not play any special role, which makes this two-directional setting more natural. However, one can use the locality of this theory to achieve a one-sided setting, by restricting oneself to compact sets $K \\subset [0,\\infty)\\times\\mathbb{R}^{d-1}$ in Definition \\ref{defCoherence} and Theorem \\ref{stochastic_reconstruction}.\n\\end{remark}\n\n\\noindent Since we work with a scaling $S$ associated to some $s\\in\\mathbb{R}^d_+$, the relevant quantity is not the number $e$ of stochastic directions itself, but rather the scaled stochastic dimension\n\n\\begin{equation*}\nE := \\sum_{i=1}^e s_i\\le \\abs S.\n\\end{equation*}\n\n\\noindent With the definition of the stochastic dimension in mind, we can now formulate a stochastic version of the coherence property. Since we aim to get $L_p$-convergence, let us fix some $2\\le p <\\infty$ and define stochastic coherence as follows:\n\n\\begin{definition}\\label{defCoherence}\nLet $(F_x)_{x\\in\\mathbb{R}^d}$ be a stochastic germ with stochastic dimension $e\\le d$ and let $E$ be as above. We call $(F_x)$ \\emph{stochastically $\\gamma$-coherent} if there is an $\\alpha\\in\\mathbb R$ such that the following holds for all test functions $\\psi\\in C_c^r(\\mathbb{R}^d)$ with $\\norm{\\psi}_{C_c^r} = 1$ and $supp(\\psi)\\subset [0,\\tilde R]^e\\times [-\\tilde R,\\tilde R]^{d-e}$ for some $\\tilde R>0$, for the stochastic directions $i=1,\\dots,e$, for any compact set $K\\subset\\mathbb{R}^d$ and for any $x,y\\in K$ with $\\pi_i(x)\\le \\pi_i(y)$:\n\n\n\\begin{align}\n\\norm{E^{\\mathcal F^{(i)}_{\\pi_i(x)}}(F_x-F_y)(\\psi_y^\\epsilon)}_{L_p} &\\lesssim \\epsilon^{\\alpha}(\\abs{x-y}+\\epsilon)^{\\gamma-\\alpha} \\nonumber\\\\\n\\norm{(F_x-F_y)(\\psi_y^\\epsilon)}_{L_p} &\\lesssim \\epsilon^{\\alpha}(\\abs{x-y}+\\epsilon)^{\\gamma-\\frac E2-\\alpha},\\label{coherence}\n\\end{align}\n\n\\noindent where the constant appearing in $\\lesssim$ is allowed to depend on the compact set $K$ and the radius $\\tilde R$.\n\n\\end{definition}\n\n\\begin{remark}\nIn the usual construction of stochastic integrals, it is far more natural to use left-side approximations than two-sided ones.
Thus, \\eqref{whiteNoise} is only a useful approximation if the support of $\\psi$ is in $[t,\\infty)\\times[x,\\infty)$. We therefore only consider test functions with positive support in the stochastic directions in the definition above and in Theorem \\ref{stochastic_reconstruction}. It is, however, possible to get a two-sided stochastic reconstruction, which will be discussed in Section \\ref{twosided}. \n\\end{remark}\n\n\\begin{remark}\nIt is possible to allow $\\alpha = \\alpha(K)$ to depend on the compact set $K$. In this case, one simply has to replace $\\alpha$ with $\\alpha(\\tilde K)$ in the proofs, for some fattening $\\tilde K$ of $K$ which depends on the support of the wavelet.\n\\end{remark}\n\n\\noindent Note that each stochastic direction $i$ reduces the coherence required of $(F_x)$ in \\eqref{coherence} by $\\frac {s_i}2$, resulting in a total reduction of $\\frac E2$ under the corresponding scaling. In particular, if the scaling is just the canonical scaling $s=(1,\\dots,1)$, we get a reduction by $\\frac e2$. Further note that, with respect to each stochastic direction $i$, the support of $\\psi_y^\\epsilon$ lies completely to the right of the time $\\pi_i(x)$ we condition on.\n\nWe say that a random distribution $\\tilde f$ is a modification of $f$ if, for all test functions $\\psi\\in C_c^r$, $f(\\psi) = \\tilde f(\\psi)$ almost surely. With this in mind, we can formulate our main result:\n\n\\begin{theorem}[Stochastic Reconstruction]\\label{stochastic_reconstruction}\nLet $(F_x)$ be a random germ with stochastic dimension $e\\le d$, which is stochastically $\\gamma$-coherent for some $\\gamma > 0$. Then, there is a unique random distribution $f$ (up to modifications) with stochastic dimension $e$ with regard to the same filtrations $\\mathcal F^{(i)}$ as our germ, such that the following holds for any test function $\\psi\\in C_c^r$ with $\\norm{\\psi}_{C_c^r}=1$ whose support satisfies $supp(\\psi)\\subset[2R,2R+\\tilde R]^e\\times[-\\tilde R,\\tilde R]^{d-e}$ for some $\\tilde R>0$:\n\n\\begin{align}\n\\norm{(f-F_x)(\\psi_x^\\lambda)}_{L_p}&\\lesssim \\lambda^{\\gamma-\\frac E2} \\label{unique1}\\\\\n\\norm{E^{\\mathcal F^{(i)}_{\\pi_i(x)}}(f-F_x)(\\psi_x^\\lambda)}_{L_p}&\\lesssim \\lambda^\\gamma,\\label{unique2}\n\\end{align} \n\n\\noindent where the constant in $\\lesssim$ again depends on the compact set $K\\subset \\mathbb{R}^d$ with $x\\in K$ and on $\\tilde R$.\n\\end{theorem}\n\n\\noindent Note that the support of $\\psi$ does not just have to be positive: it has to be far enough away from zero to ``fit in'' the wavelet $\\phi$ twice between zero and the support of $\\psi$. (Recall that the support of the wavelet lies in $[0,R]^d$.)\n\n\\begin{remark}\nThe restrictions on $\\psi$ might seem strange to the reader. Indeed, it seems that this theorem only makes a statement about test functions with (strictly) positive support in the stochastic directions. However, this is not the case, since the localizations $\\psi_y^\\epsilon$ can have negative support in all directions, depending on $y$. The support condition only serves to separate the point of localization of $\\psi$ from the index of the germ $F_y$.
Another way to formulate the result is to use a $\\psi$ with arbitrary compact support and to take a look at $F_{(y-S^\\lambda C)}(\\psi_y^\\lambda)$ for a certain vector $C$ depending on the support of $\\psi$ and on which directions are supposed to be stochastic.\n\\end{remark}\n\n\\noindent The proof will be split into three parts, which we show in separate lemmas:\n\n\\begin{enumerate}\n\\item Lemma \\ref{lem1} will show the $L_p$ convergence of the reconstructing sequence: For \\\\$f^{(n)} := \\sum_{x\\in\\Delta_n} F_x(\\phi_x^n)\\phi_x^n$, it holds that\n\n\\begin{equation*}\nf^{(n)}(\\psi)\\xrightarrow{L_p} f(\\psi).\n\\end{equation*}\n\n\\item Lemma \\ref{lem2} will show that the limit $f$ fulfills \\eqref{unique1} and \\eqref{unique2}.\n\n\\item Lemma \\ref{lem3} will show that there is at most one distribution (up to modifications) fulfilling \\eqref{unique1} and \\eqref{unique2}, and thus the lemma shows the uniqueness of the reconstruction. \n\\end{enumerate}\n\n\\begin{lemma}\\label{lem1}\nLet $(F_x)$ be as in Theorem \\ref{stochastic_reconstruction}. Further, define the reconstructing sequence by\n\n\\begin{equation*}\nf^{(n)}(y) := \\sum_{x\\in\\Delta_n} F_x(\\phi_x^n)\\phi_x^n(y).\n\\end{equation*}\n\n\\noindent Then, for any test function $\\psi\\in C_c^r$ the sequence $f^{(n)}(\\psi) = \\scalar{f^{(n)},\\psi}$ converges in $L_p(\\Omega)$. We denote the limit by $f(\\psi)$, and it holds that $f$ is a random distribution with stochastic dimension $e$ with regard to the filtrations $\\mathcal F^{(i)}$, $i=1,\\dots, e$.\n\\end{lemma}\n\n\\begin{proof}\nOur strategy for this proof is to write $f^{(n)}$ as the telescopic sum $f^{(n)} = f^{(0)} +\\sum_{m=0}^{n-1}(f^{(m+1)}-f^{(m)})$ and to show the convergence of this series by finding a $c>0$ such that $\\norm{(f^{(n+1)}-f^{(n)})(\\psi)}_{L_p}\\lesssim 2^{-nc}$ for any test function $\\psi\\in C_c^r$. This implies that $f^{(n)}(\\psi)$ is Cauchy, and thus convergent in $L_p(\\Omega)$.\n\nWe first analyze the behavior of $f^{(n)}$ tested against wavelets $\\psi = \\phi_x^n$ with $x\\in\\Delta_n$. Using the definition of $f^{(n)}$ and \\eqref{ak}, we get\n\n\\begin{align*}\n(f^{(n+1)}-f^{(n)})(\\phi_x^n) &= \\sum_{z\\in\\Delta_{n+1}}F_z(\\phi_z^{n+1})\\scalar{\\phi_z^{n+1},\\phi_x^{n}}-\\sum_{y\\in\\Delta_n}F_y(\\phi_y^n)\\scalar{\\phi_y^n,\\phi_x^n}\\\\ \n&=\\sum_{z\\in\\Delta_{n+1}}\\sum_{k\\in\\Delta_{n+1}}F_z(\\phi_z^{n+1})\\scalar{\\phi_z^{n+1},a_k^{(n+1)}\\phi_{x+k}^{n+1}} -F_x(\\phi_x^n) \\\\\n&= \\sum_{k\\in\\Delta_{n+1}} a_k^{(n+1)} F_{x+k}(\\phi_{x+k}^{n+1}) - \\sum_{k\\in\\Delta_{n+1}} a_k^{(n+1)} F_{x}(\\phi_{x+k}^{n+1})\\\\\n&=\\sum_{k\\in\\Delta_{n+1}} a_k^{(n+1)}(F_{x+k}-F_x)(\\phi_{k+x}^{n+1}).\n\\end{align*}\n\n\\noindent Note that $a_k^{(n+1)}\\neq 0$ only for $\\abs{k}\\le dR2^{-n}$. This also implies that the sum is finite. Thus,\n\n\\begin{align}\n\\norm{(f^{(n+1)}-f^{(n)})(\\phi_x^n)}_{L_p} &\\le \\sum_{k\\in\\Delta_{n+1}} \\abs{a_k^{(n+1)}}\\norm{(F_{x+k}-F_x)(\\phi_{k+x}^{n+1})}_{L_p} \\nonumber\\\\ &\\stackrel{\\text{coherence}}{\\le} \\sum_{k\\in\\Delta_{n+1}} \\abs{a_k^{(n+1)}}2^{-\\frac{(n+1)\\abs S}{2} - \\alpha(n+1)}(\\abs k +2^{-n-1})^{\\gamma-E\/2-\\alpha} \\nonumber\\\\ \n&\\lesssim 2^{-(n+1)(\\gamma-\\frac{E-\\abs S}{2})} \\lesssim 2^{-n\\gamma-n\\frac{\\abs S-E}{2}}, \\label{ineq1}\n\\end{align}\n\n\\noindent where we used in the last line that $a_k^{(n+1)}\\neq 0$ implies $\\abs{k}\\le dR2^{-n}$.
Also note in the first line that $\\pi_i(k) \\ge 0$ for all non-zero terms and that the support of $\\phi$ is positive, which justifies using the coherence property. Analogously, for $i=1,\\dots,e$, it follows that\n\n\\begin{equation}\\label{ineq2}\n\\norm{E^{\\mathcal F^{(i)}_{\\pi_i(x)}}(f^{(n+1)}-f^{(n)})(\\phi_x^n)}_{L_p} \\lesssim 2^{-n\\gamma-n\\frac {\\abs S}2}.\n\\end{equation}\n\n\\noindent Observe that $(f^{(n+1)}-f^{(n)})(\\phi_x^n)$ is $\\mathcal F^{(i)}_{\\hat S_i(x)}$-measurable for $\\hat S_i(x) := \\pi_i(x) +(2^{-ns_i}+2^{-(n+1)s_i})R$.\n\nWe want to bound $\\norm{(f^{(n+1)}-f^{(n)})(\\psi)}_{L_p}$ for an arbitrary test function $\\psi$ with compact support. To do so, we split $\\psi$ into\n\n\\begin{equation*}\n\\psi = P_n\\psi + \\sum_{m\\ge n} \\hat P_m\\psi,\n\\end{equation*}\n\n\\noindent and bound both terms individually. Since finding a bound for $(f^{(n+1)}-f^{(n)})(P_n\\psi)$ is quite lengthy, let us formulate this step as a separate lemma:\n\n\\begin{lemma}\\label{LemmaIteration}\nUnder the assumptions of Lemma \\ref{lem1}, it holds that for any $x_0$ in some compact set $K\\subset\\mathbb{R}^d$ and $1\\ge\\lambda\\ge 2^{-n}$:\n\n\\begin{equation*}\n\\norm{(f^{(n+1)}-f^{(n)})(P_n\\psi_{x_0}^\\lambda)}_{L_p} \\lesssim 2^{-n\\gamma}\\lambda^{-\\frac E2}\\norm{\\psi}_{C_c^r}.\n\\end{equation*}\n\n\\noindent Here the constant in $\\lesssim$ is allowed to depend on $K$. In particular,\n\n\\begin{equation*}\n\\norm{(f^{(n+1)}-f^{(n)})(P_n\\psi)}_{L_p} \\lesssim 2^{-n\\gamma}\\norm{\\psi}_{C_c^r}.\n\\end{equation*}\n\\end{lemma}\n\n\\begin{proof}\nRecall that\n\n\\begin{equation*}\n(f^{(n+1)}-f^{(n)})(P_n\\psi_{x_0}^\\lambda) = \\sum_{x\\in \\Delta_n}\\underbrace{(f^{(n+1)}-f^{(n)})(\\phi_x^n)\\scalar{\\phi_x^n,\\psi_{x_0}^\\lambda}}_{=:g_x}.\n\\end{equation*}\n\n\\noindent Our strategy will be to use \\eqref{BDG} on the outermost sum and then iterate this step over all stochastic directions. To this end, we define\n\n\\begin{equation*}\n\\Delta_n^{a,b} := \\left\\{\\sum_{i=a}^b 2^{-ns_i}l_i\\vec e_i ~\\bigg\\vert~ l_i\\in\\mathbb Z\\right\\}\n\\end{equation*} \n\n\\noindent to be the rescaled mesh in the variables $a$ to $b$. Unfortunately, naively applying \\eqref{BDG} does not work for our sum: If we set $Z_y := \\sum_{x\\in\\Delta_n^{2,d}} g_{(y,x)}$, so that $(f^{(n+1)}-f^{(n)})(P_n\\psi_{x_0}^\\lambda) = \\sum_{y\\in\\Delta_n^{1,1}} Z_y$, we note that $Z_y$ is $\\mathcal F^{(1)}_{\\hat S_1(y)}$-measurable. \\eqref{BDG} would now allow us to condition $Z_y$ onto $\\mathcal F^{(1)}_{\\hat S_1(y)-2^{-ns_1}}$, which results in a random variable which we do not control.\n\nInstead, we are going to split the sum into $\\lceil C\\rceil^e$ many sums, with $C := \\max_{i=1,\\dots,e}(1+2^{-s_i})R$, in which the nets have a bigger mesh, to get the right conditioning. To this end, let $0\\le r_i\\le \\lceil C\\rceil-1, i=1,\\dots, e$ be natural numbers and set\n\n\\begin{equation*}\nr_n^{(k)} := S^{2^{-n}}(r_{k+1},\\dots,r_{e},0,\\dots,0) \\in\\Delta_n^{k+1,d}\n\\end{equation*} \n\n\\noindent for $k = 0,\\dots, e$.
We define the rescaled and shifted net for each such $r_n^{(k)}$ by\n\n\\begin{equation*}\n\\Delta_n^{k+1,d}(r^{(k)}_n) = \\left\\{\\sum_{i=k+1}^d \\lceil C\\rceil\\cdot 2^{-ns_i}l_i\\vec e_i+r_n^{(k)}~\\bigg\\vert~ l_i\\in\\mathbb Z\\right\\}\n\\end{equation*} \n\n\\noindent and split \n\n\\begin{equation*}\n(f^{(n+1)}-f^{(n)})(P_n\\psi_{x_0}^\\lambda) = \\sum_{r_n^{(0)}} \\sum_{x\\in\\Delta_n^{1,d}(r_n^{(0)})}g_x\n\\end{equation*}\n\n\\noindent into $\\lceil C\\rceil^e$ many sums, where the first sum runs over all possible $r_n^{(0)}$. It suffices to find a bound for the sum over $x$, as the sum over $r_n^{(0)}$ will merely add the constant factor $\\lceil C\\rceil^e$. Using \\eqref{BDG}, we now do indeed get\n\n\\begin{align}\n\\norm{\\sum_{x\\in\\Delta_n^{1,d}(r_n^{(0)})}g_x}_{L_p} \\lesssim &\\sum_{y\\in\\Delta_n^{1,1}(r_1)}\\norm{\\sum_{x\\in\\Delta_n^{2,d}(r_n^{(1)})}E^{\\mathcal F^{(1)}_y} g_{(y,x)}}_{L_p}\\nonumber\\\\&+\\left(\\sum_{y\\in\\Delta_n^{1,1}(r_1)}\\norm{\\sum_{x\\in\\Delta_n^{2,d}(r_n^{(1)})}g_{(y,x)}-E^{\\mathcal F^{(1)}_y}g_{(y,x)}}_{L_p}^2\\right)^{\\frac 12}.\\label{iteration}\n\\end{align}\n\n\\noindent We want to iterate this step over all stochastic directions. To this end, we introduce the notation\n\n\\begin{align*}\ng_x &:= (f^{(n+1)}-f^{(n)})(\\phi_x^n)\\scalar{\\phi_x^n,\\psi_{x_0}^\\lambda} &&x\\in\\Delta_n^{1,d}\\\\\ng_{y,x} &:= g_{(y,x)}-E^{\\mathcal F^{(1)}_y}g_{(y,x)} &&x\\in\\Delta_n^{2,d}, y\\in\\Delta_n^{1,1} \\\\\ng_{y_1,y_2,x} &= g_{y_1,(y_2,x)}-E^{\\mathcal F^{(2)}_{y_2}}g_{y_1,(y_2,x)} &&x\\in\\Delta_n^{3,d}, y_1\\in\\Delta^{1,1}_n, y_2\\in\\Delta_n^{2,2}\\\\ &\\vdots \\\\\ng_{y_1,\\dots,y_e,x} &:= g_{y_1,\\dots,y_{e-1},(y_e,x)}-E^{\\mathcal F^{(e)}_{y_e}}g_{y_1,\\dots,y_{e-1},(y_e,x)} &&x\\in\\Delta_n^{e+1,d}, y_1\\in\\Delta_n^{1,1},\\dots,y_e\\in\\Delta_n^{e,e},\n\\end{align*}\n\n\\noindent and find bounds for $\\norm{\\sum_{x\\in\\Delta_n^{i+1,d}(r_n^{(i)})}g_{y_1,\\dots,y_i,x}}_{L_p}$ using backwards induction over $i=0,\\dots,e$. Recall that \\eqref{LemIneq1} gives us\n\n\\begin{equation}\n\\abs{\\scalar{\\phi_x^n,\\psi_{x_0}^\\lambda}}\\lesssim 2^{-\\frac{n\\abs S}2}\\lambda^{-\\abs S}\\norm{\\psi}_{C_c^r}.\n\\end{equation} \n\n\\noindent Together with \\eqref{ineq1}, this implies \n\n\\begin{align}\n\\norm{g_x}_{L_p} &\\le \\norm{(f^{(n+1)}-f^{(n)})(\\phi_{x}^n)}_{L_p}\\abs{\\scalar{\\phi_x^n,\\psi_{x_0}^\\lambda}} \\nonumber\\\\\n&\\lesssim 2^{-n\\gamma-n\\frac{\\abs S-E}{2}}2^{-n\\frac {\\abs S}2}\\lambda^{-\\abs S}\\norm{\\psi}_{C_c^r} = 2^{-n\\gamma-n\\abs S+n\\frac E2}\\lambda^{-\\abs S}\\norm{\\psi}_{C_c^r},\\label{g1}\n\\end{align}\n\n\\noindent and analogously, using \\eqref{ineq2},\n\n\\begin{equation}\\label{g2}\n\\norm{E^{\\mathcal F^{(1)}_y}g_{(y,x)}}_{L_p} \\lesssim 2^{-n\\gamma-n\\abs S}\\lambda^{-\\abs S}\\norm{\\psi}_{C_c^r}.\n\\end{equation}\n\n\\noindent Observe that for any $i=1,\\dots,e$, $g_{y_1,\\dots,y_i,x}$ is just a finite linear combination of $g_{(y_1,\\dots,y_i,x)}$ conditioned on different $\\sigma$-algebras. By the contraction properties of the conditional expectation, \\eqref{g1} and \\eqref{g2} thus hold for any $g_{y_1,\\dots,y_i,x}, i=0,\\dots, e$.\n\nWe claim that the following holds for $i=0,\\dots,e$:\n\n\\begin{equation*}\n\\norm{\\sum_{x\\in\\Delta_n^{i+1,d}(r_n^{(i)})} g_{y_1,\\dots,y_i,x}}_{L_p}\\lesssim 2^{-n\\gamma-n\\frac {E_i}2}\\lambda^{-\\frac {E+E_i}{2}}\\norm{\\psi}_{C_c^r},\n\\end{equation*}\n\n\\noindent where $E_i := \\sum_{j=1}^i s_j$. As said above, we show this claim by backwards induction.
For $i=e$, note that $g_x\\neq 0$ implies $\\scalar{\\phi_x^n,\\psi_{x_0}^\\lambda}\\neq 0$, which is only true for $\\cong 2^{ns_j}\\lambda^{s_j}$ many $x$ in every coordinate $j = 1,\\dots,d$. Thus, using \\eqref{g1}, we conclude that\n\n\\begin{equation}\\label{g3}\n\\sum_{x\\in\\Delta_n^{e+1,d}(r_n^{(e)})}\\norm{g_{y_1,\\dots,y_e,x}}_{L_p} \\lesssim 2^{-n\\gamma-n\\abs S+n\\frac E2}\\lambda^{-\\abs S}\\cdot2^{n(\\abs S-E)}\\lambda^{\\abs S-E}\\norm{\\psi}_{C_c^r} = 2^{-n\\gamma-n\\frac E2}\\lambda^{-E}\\norm{\\psi}_{C_c^r}.\n\\end{equation}\n\n\\noindent For the induction step, observe that\n\n\\begin{align*}\n\\sum_{x\\in\\Delta_n^{i+1,d}(r_n^{(i)})}\\norm{E^{\\mathcal F^{(i)}_{y_i}}g_{y_1,\\dots,y_{i-1},(y_i,x)}}_{L_p} &\\lesssim 2^{-n\\gamma-n\\abs S}\\lambda^{-\\abs S}\\cdot2^{n(\\abs S-E_i)}\\lambda^{\\abs S-E_i}\\norm{\\psi}_{C_c^r} \\\\&= 2^{-n\\gamma -nE_i} \\lambda^{-E_i}\\norm{\\psi}_{C_c^r}\n\\end{align*}\n\n\\noindent holds, and that summing over the squares and taking the square root has the effect of multiplying by a factor of $2^{\\frac {s_in}2}\\lambda^{\\frac {s_i}2}$:\n\n\\begin{equation*}\n\\left(\\sum_{k=1}^{\\lfloor 2^{s_in}\\lambda^{s_i}\\rfloor} a^2\\right)^{\\frac 12} \\le 2^{\\frac {s_in}2}\\lambda^{\\frac{s_i}{2}}a.\n\\end{equation*}\n\n\\noindent Now, using \\eqref{iteration}, we get\n\n\\begin{align*}\n\\norm{\\sum_{x\\in\\Delta_n^{i+1,d}(r_n^{(i)})} g_{y_1,\\dots,y_i,x}}_{L_p} &\\lesssim \\underbrace{\\sum_{y_{i+1}\\in\\Delta_n^{i+1,i+1}(\\pi_{i+1}(r_n))}\\sum_{x\\in\\Delta_n^{i+2,d}(r_n^{(i+1)})} \\norm{E^{\\mathcal F^{(i+1)}_{y_{i+1}}} g_{y_1,\\dots,y_i,(y_{i+1},x)}}_{L_p}}_{\\lesssim 2^{-n\\gamma-nE_i}\\lambda^{-E_i}\\norm{\\psi}_{C_c^r}} \\\\&\\qquad+ \\bigg(\\sum_{y_{i+1}\\in\\Delta_n^{i+1,i+1}(\\pi_{i+1}(r_n))}\\underbrace{\\norm{\\sum_{x\\in\\Delta_n^{i+2,d}(r_n^{(i+1)})}g_{y_1,\\dots,y_{i+1},x}}_{L_p}^2}_{\\lesssim (2^{-n\\gamma-n\\frac{E_{i+1}}2}\\lambda^{-\\frac{E+E_{i+1}}{2}}\\norm{\\psi}_{C_c^r})^2}\\bigg)^{\\frac 12} \\\\\n&\\lesssim 2^{-n\\gamma-n\\frac {E_i}2}\\lambda^{-\\frac{E+E_i}{2}}\\norm{\\psi}_{C_c^r}.\n\\end{align*}\n\n\\noindent This shows the claim. Therefore, $i = 0$ implies\n\n\\begin{equation*}\n\\norm{\\sum_{x\\in\\Delta_n}g_x}_{L_p} \\lesssim \\sum_{r_n^{(0)}}\\norm{\\sum_{x\\in\\Delta_n^{1,d}(r_n^{(0)})}g_x}_{L_p} \\lesssim 2^{-n\\gamma}\\lambda^{-\\frac E2}\\norm{\\psi}_{C_c^r},\n\\end{equation*}\n\n\\noindent which finishes the proof of the lemma.\n\n\\end{proof}\n\n\\noindent It remains to bound the norm of $(f^{(n+1)}-f^{(n)})\\left(\\sum_{m\\ge n}\\hat P_m\\psi\\right)$. Since $f^{(n+1)}-f^{(n)}\\in V_{n+1}\\perp \\hat V_m$ for any $m\\ge n+1$, it follows that\n\n\\begin{equation*}\n(f^{(n+1)}-f^{(n)})\\left(\\sum_{m\\ge n}\\hat P_m\\psi\\right) = (f^{(n+1)}-f^{(n)})(\\hat P_n \\psi),\n\\end{equation*}\n\n\\noindent so it suffices to bound this term. To do so, let us formulate a slight variation of Lemma 4.12 from \\cite{caravenna}:\n\n\\begin{lemma}\nLet $(F_x)_{x\\in\\mathbb{R}^d}$ be a stochastically $\\gamma$-coherent germ. Then, for any compact subset $K\\subset\\mathbb{R}^d$, there is a $\\beta =\\beta(K)\\in \\mathbb{R}$ such that $(F_x)_{x\\in\\mathbb{R}^d}$ fulfills the homogeneity bound\n\n\\begin{equation}\\label{homogeneity}\n\\norm{F_x(\\psi_x^\\lambda)}_{L_p}\\lesssim \\lambda^\\beta\n\\end{equation}\n\n\\noindent for all $x\\in K$ and $\\lambda\\in(0,1]$.\n\\end{lemma}\n\n\\begin{proof}\nObserve that any stochastically $\\gamma$-coherent germ is $(\\gamma-\\frac E2)$-coherent (if one replaces the absolute value with the $L_p$-norm).
Thus, we can apply Lemma 4.12 from \\cite{caravenna} to get the result.\n\\end{proof}\n\n\\noindent Using this lemma together with the fact that, for any $\\hat\\phi\\in\\Phi$,\n\n\\begin{align*}\n\\sum_{y\\in\\Delta_{n+1}}\\abs{\\scalar{\\phi_y^{n+1},\\hat\\phi_x^n}} &\\le \\sum_{y\\in\\Delta_{n+1}}2^{(n+1\/2)\\abs S}\\int\\abs{\\phi(S^{2^{n+1}}(z-y))\\hat\\phi(S^{2^n}(z-x))}dz \\\\\n&= \\sum_{k\\in\\mathbb Z^d}2^{-\\frac{\\abs S}2}\\int\\abs{\\phi(u-k)\\hat\\phi(S^{\\frac 12}u-S^{2^n}x)}du\n\\end{align*}\n\n\\noindent is bounded by a constant independent of $n$, it follows that\n\n\\begin{align*}\n\\norm{f^{(n+1)}(\\hat\\phi_x^n)}_{L_p} &\\le \\sum_{y\\in\\Delta_{n+1}} \\norm{F_y(\\phi_y^{n+1})}_{L_p} \\abs{\\scalar{\\phi_y^{n+1},\\hat\\phi_x^n}}\\\\\n&\\stackrel{\\eqref{homogeneity}}{\\lesssim} 2^{-n\\frac {\\abs S}2-n\\beta}\\sum_{y\\in\\Delta_{n+1}}\\abs{\\scalar{\\phi_y^{n+1},\\hat\\phi_x^n}}\\\\\n&\\lesssim 2^{-n\\frac {\\abs S}2-n\\beta}.\n\\end{align*}\n\n\\noindent Thus, using $f^{(n)}\\in V_n\\perp \\hat V_n$, we see that\n\n\\begin{align}\n\\norm{ (f^{(n+1)}-f^{(n)})(\\hat P_n\\psi)}_{L_p} &\\le \\sum_{x\\in\\Delta_n}\\sum_{\\hat\\phi\\in\\Phi}\\norm{(f^{(n+1)}-f^{(n)})(\\hat\\phi_x^n)}_{L_p}\\abs{\\scalar{\\hat\\phi_x^n,\\psi}} \\nonumber\\\\& = \\sum_{x\\in\\Delta_n}\\sum_{\\hat\\phi\\in\\Phi}\\norm{f^{(n+1)}(\\hat\\phi_x^n)}_{L_p}\\abs{\\scalar{\\hat\\phi_x^n,\\psi}} \\nonumber\\\\ &\\lesssim 2^{-n\\beta-n\\tilde r}\\norm{\\psi}_{C_c^r}, \\label{boundHatPhi}\n\\end{align}\n\n\\noindent where we used $\\sum_{x\\in\\Delta_n}\\abs{\\scalar{\\hat\\phi_x^n,\\psi}}\\lesssim 2^{n\\frac {\\abs S}2-n\\tilde r}\\norm{\\psi}_{C_c^r}$, which follows from \\eqref{LemIneq2} together with the fact that the number of non-zero summands is of order $2^{n\\abs S}$. We can now conclude that\n\n\\begin{align*}\n\\norm{(f^{(n+1)}-f^{(n)})(\\psi)}_{L_p} &\\le \\norm{(f^{(n+1)}-f^{(n)})(P_n\\psi)}_{L_p} +\\norm{(f^{(n+1)}-f^{(n)})(\\hat P_n\\psi)}_{L_p} \\\\&\\lesssim 2^{-n\\cdot\\min(\\gamma,\\beta+\\tilde r)}\\norm{\\psi}_{C_c^r}.\n\\end{align*}\n\n\\noindent By choosing $\\tilde r$ large enough, it follows that $\\beta+\\tilde r>0$. Thus, the exponent is positive, which implies the convergence of the sequence. It also follows that the limit is bounded in terms of $\\norm{\\psi}_{C_c^r}$, which shows that $f$ is indeed a random distribution.\n\nTo show that $f$ has stochastic dimension $e$, simply observe that $f^{(n)}(\\psi) = \\sum_{x} F_x(\\phi_x^n)\\scalar{\\phi_x^n,\\psi}$ is $\\mathcal F^{(i)}_{y+2^{-ns_i}R}$-measurable for any $\\psi$ with support in $\\mathbb{R}^{i-1}\\times(-\\infty,y]\\times\\mathbb{R}^{d-i}$. The right-continuity of $\\mathcal F^{(i)}$ therefore implies that the limit $f(\\psi)$ is $\\mathcal F^{(i)}_y$-measurable.\n\n\\end{proof}\n\n\\noindent We can now continue with the second step of the proof:\n\n\\begin{lemma}\\label{lem2}\nLet $(F_x)$ be as in Theorem \\ref{stochastic_reconstruction}, and let $f$ be the limit achieved in Lemma \\ref{lem1}. Let $\\psi$ be a smooth test function with compact support in $[2R,2R+\\tilde R]^e\\times [-\\tilde R,\\tilde R]^{d-e}$. Then, \\eqref{unique1} and \\eqref{unique2} hold, i.e.
for $i=1,\\dots,e$:\n\n\\begin{align*}\n\\norm{(f-F_x)(\\psi_x^\\lambda)}_{L_p}&\\lesssim \\lambda^{\\gamma-\\frac E2} \\\\\n\\norm{E^{\\mathcal F^{(i)}_{\\pi_i(x)}}(f-F_x)(\\psi_x^\\lambda)}_{L_p}&\\lesssim \\lambda^\\gamma.\n\\end{align*} \n\\end{lemma}\n\n\\begin{proof}\nUsing the fact that $P_n+\\sum_{m=n}^\\infty\\hat P_m = id$, as well as $f^{(n+1)}-f^{(n)}\\in V_{n+1} = V_n\\oplus \\hat V_n$, it follows that\n\n\\begin{align*}\nf-F_x &= f^{(n)}+\\sum_{m=n}^\\infty (f^{(m+1)}-f^{(m)}) -F_x\\\\\n &= f^{(n)} +\\sum_{m=n}^\\infty (P_m+\\hat P_m)(f^{(m+1)}-f^{(m)}) - \\left(P_n+\\sum_{m=n}^\\infty\\hat P_m\\right) F_x\\\\\n&=\\underbrace{(f^{(n)}-P_nF_x)}_{(I)} + \\underbrace{\\sum_{m=n}^\\infty \\hat P_m(f^{(m+1)}-f^{(m)}-F_x)}_{(II)} + \\underbrace{\\sum_{m=n}^\\infty P_m(f^{(m+1)}-f^{(m)})}_{(III)},\n\\end{align*}\n\n\\noindent where $n$ is chosen in such a way that $2^{-n}\\le\\lambda\\le 2^{-n+1}$. We will find bounds for these three terms individually.\n\nRegarding $(I)$, observe that\n\n\\begin{equation*}\n(I)(\\psi_x^\\lambda) = \\sum_{y\\in\\Delta_n}(F_y-F_x)(\\phi_y^n)\\scalar{\\phi_y^n,\\psi_x^\\lambda}.\n\\end{equation*}\n\n\\noindent We use \\eqref{LemIneq1} to see $\\abs{\\scalar{\\phi^n_y,\\psi_x^\\lambda}}\\lesssim 2^{-n\\frac {\\abs S}2}\\lambda^{-\\abs S} \\lesssim 2^{n\\frac {\\abs S}2}$, since $\\lambda \\ge 2^{-n}$. Also, $\\scalar{\\phi^n_y,\\psi_x^\\lambda}$ is only non-zero for finitely many $y$, their number being bounded independently of $n$. This implies\n\n\\begin{equation*}\n\\sum_{y}\\abs{\\scalar{\\phi_y^n,\\psi_x^\\lambda}} \\lesssim 2^{n\\frac {\\abs S}2}.\n\\end{equation*}\n\n\\noindent Further note that $\\scalar{\\phi_y^n,\\psi_x^\\lambda} \\neq 0$ implies that the supports of $\\phi_y^n$ and $\\psi_x^\\lambda$ overlap. Thus, for $i=1,\\dots, e$, we get $\\pi_i(y) \\ge \\pi_i(x)+\\lambda^{s_i} 2R-2^{-ns_i}R \\ge \\pi_i(x)$. This allows us to use stochastic coherence, and we get:\n\n\\begin{align*}\n\\norm{(I)(\\psi_x^\\lambda)}_{L_p} &\\le \\sum_{y\\in\\Delta_n}\\norm{(F_y-F_x)(\\phi_y^n)}_{L_p}\\abs{\\scalar{\\phi_y^n,\\psi_x^\\lambda}} \\\\&\\lesssim \\lambda^{\\gamma-\\frac E2}2^{-n\\frac {\\abs S}2} \\sum_{y}\\abs{\\scalar{\\phi_y^n,\\psi_x^\\lambda}} \\\\\n&\\lesssim \\lambda^{\\gamma-\\frac E2} \\\\\n\\norm{E^{\\mathcal F^{(i)}_{\\pi_i(x)}} (I)(\\psi_x^\\lambda)}_{L_p} &\\le \\sum_{y\\in\\Delta_n}\\norm{E^{\\mathcal F^{(i)}_{\\pi_i(x)}}(F_y-F_x)(\\phi_y^n)}_{L_p}\\abs{\\scalar{\\phi_y^n,\\psi_x^\\lambda}} \\\\ &\\lesssim \\lambda^\\gamma 2^{-n\\frac {\\abs S}2} \\sum_{y}\\abs{\\scalar{\\phi_y^n,\\psi_x^\\lambda}} \\\\&\\lesssim \\lambda^\\gamma.
\n\\end{align*}\n\n\\noindent To analyze $(II)$, we take a look at how $\\hat P_m$ acts on all terms involved:\n\n\\begin{align*}\n\\hat P_mf^{(m)} &= \\sum_{y\\in\\Delta_m}\\sum_{z\\in\\Delta_m}\\sum_{\\hat\\phi\\in\\Phi} F_z(\\phi_z^m)\\scalar{\\phi_z^m,\\hat\\phi_y^m}\\hat\\phi_y^m = 0 \\\\\n\\hat P_mf^{(m+1)} &= \\sum_{y\\in\\Delta_m}\\sum_{z\\in\\Delta_{m+1}}\\sum_{\\hat\\phi\\in\\Phi} F_z(\\phi_z^{m+1})\\scalar{\\phi_z^{m+1},\\hat\\phi_y^m}\\hat\\phi_y^m \\\\\n\\hat P_mF_x &= \\sum_{y\\in\\Delta_m}\\sum_{z\\in\\Delta_{m+1}}\\sum_{\\hat\\phi\\in\\Phi} F_x(\\phi_z^{m+1})\\scalar{\\phi_z^{m+1},\\hat\\phi_y^m}\\hat\\phi_y^m,\n\\end{align*}\n\n\\noindent where the equation for $\\hat P_m F_x$ is of course only rigorous when read as\n\n\\begin{equation*}\n\\hat P_mF_x (\\psi) = \\sum_{y\\in\\Delta_m}\\sum_{z\\in\\Delta_{m+1}}\\sum_{\\hat\\phi\\in\\Phi} F_x(\\phi_z^{m+1})\\scalar{\\phi_z^{m+1},\\hat\\phi_y^m}\\scalar{\\hat\\phi_y^m,\\psi}.\n\\end{equation*}\n\nUsing these and the notation $\\hat P_m f^{(m)}(\\psi) = \\scalar{\\hat P_m f^{(m)},\\psi}$, we get\n\n\\begin{align*}\n\\hat P_m(f^{(m+1)}-f^{(m)}-F_x)(\\psi_x^\\lambda) = \\sum_{y\\in\\Delta_m}\\sum_{z\\in\\Delta_{m+1}}\\sum_{\\hat\\phi\\in\\Phi}(F_z-F_x)(\\phi_z^{m+1})\\scalar{\\phi_z^{m+1},\\hat\\phi_y^m}\\scalar{\\hat\\phi_y^m,\\psi_x^\\lambda}.\n\\end{align*}\n\n\\noindent Observe that the only non-zero terms fulfill, for $i=1,\\dots,e$, $\\pi_i(z)\\ge \\pi_i(y)-2^{-ms_i}R \\ge \\pi_i(x)+\\lambda^{s_i}\\cdot2R-2\\cdot 2^{-ms_i}R\\ge \\pi_i(x)$, so we can use stochastic coherence. Note that the number of $z$ such that $\\scalar{\\phi_z^{m+1},\\hat\\phi_y^{m}}\\neq 0$ for any fixed $y$ is of order $1$, and the number of $y$ such that $\\scalar{\\hat\\phi_y^m,\\psi_x^\\lambda}\\neq 0$ is of order $2^{m\\abs S}\\lambda^{\\abs S}$. Using \\eqref{LemIneq2}, $\\abs{\\scalar{\\phi_z^{m+1},\\hat\\phi_y^m}}\\lesssim 1$ as well as stochastic coherence, we get:\n\n\\begin{align*}\n\\norm{\\hat P_m(f^{(m+1)}-f^{(m)}-F_x)(\\psi_x^\\lambda)}_{L_p} &\\le \\sum_{y\\in\\Delta_m}\\sum_{z\\in\\Delta_{m+1}}\\sum_{\\hat\\phi\\in\\Phi}\\underbrace{\\norm{(F_z-F_x)(\\phi_z^{m+1})}_{L_p}}_{\\lesssim 2^{-m\\frac {\\abs S}2 -m\\alpha}\\lambda^{\\gamma-\\frac E2-\\alpha}}\\\\&\\qquad\\qquad\\qquad\\qquad\\times\\underbrace{\\abs{\\scalar{\\phi_z^{m+1},\\hat\\phi_y^m}}}_{\\lesssim 1}\\underbrace{\\abs{\\scalar{\\hat\\phi_y^m,\\psi_x^\\lambda}}}_{\\lesssim 2^{-m\\frac {\\abs S}2-m\\tilde r}\\lambda^{-{\\abs S}-\\tilde r}} \\\\&\\lesssim 2^{m\\abs S}\\lambda^{\\abs S} 2^{-m\\frac {\\abs S}2-m\\alpha}\\lambda^{\\gamma-\\frac E2-\\alpha}2^{-m\\frac {\\abs S}2-m\\tilde r}\\lambda^{-\\abs{S}-\\tilde r} \\\\\n&= 2^{-m\\alpha-m\\tilde r}\\lambda^{\\gamma-\\frac E2-\\alpha-\\tilde r}.\n\\end{align*}\n\n\\noindent This allows us to find the following bound for $(II)$:\n\n\\begin{align*}\n\\norm{(II)(\\psi_x^\\lambda)}_{L_p} \\lesssim \\sum_{m=n}^\\infty 2^{-m\\alpha-m\\tilde r}\\lambda^{\\gamma-\\frac E2-\\alpha-\\tilde r} \\lesssim \\lambda^{\\gamma-\\frac E2},\n\\end{align*}\n\n\\noindent where we used $2^{-n}\\cong \\lambda$ and that $\\tilde r$ is chosen large enough that $\\alpha+\\tilde r>0$. Analogously, it further holds that\n\n\\begin{equation*}\n\\norm{E^{\\mathcal F^{(i)}_{\\pi_i(x)}}(II)(\\psi_x^\\lambda)}_{L_p}\\lesssim \\lambda^\\gamma.\n\\end{equation*}\n\n\\noindent It remains to bound $(III)$.
By Lemma \\ref{LemmaIteration}, it follows that\n\n\\begin{align*}\n\\norm{(III)(\\psi_x^\\lambda)}_{L_p} &\\le \\sum_{m=n}^\\infty \\norm{P_m(f^{(m+1)}-f^{(m)})(\\psi_x^\\lambda)}_{L_p} \\\\\n&\\lesssim \\sum_{m=n}^\\infty 2^{-m\\gamma}\\lambda^{-\\frac E2} \\\\\n&\\lesssim 2^{-n\\gamma}\\lambda^{-\\frac E2}\\\\\n&\\cong \\lambda^{\\gamma-\\frac E2}.\n\\end{align*}\n\n\\noindent For the conditional expectation of $(III)$, we use \\eqref{ineq2} and \\eqref{LemIneq1} and see\n\n\\begin{align*}\n\\norm{E^{\\mathcal F^{(i)}_{\\pi_i(x)}}(III)(\\psi_x^\\lambda)}_{L_p} &= \\norm{\\sum_{m=n}^\\infty E^{\\mathcal F^{(i)}_{\\pi_i(x)}}P_m(f^{(m+1)}-f^{(m)})(\\psi_x^\\lambda)}_{L_p}\\\\&\\lesssim \\sum_{m=n}^\\infty 2^{-m\\gamma-m\\frac{\\abs S}{2}}\\sum_{y\\in\\Delta_m}\\abs{\\scalar{\\phi_y^m,\\psi_x^\\lambda}}\\\\\n&\\lesssim \\sum_{m=n}^\\infty 2^{-m\\gamma-m\\frac{\\abs S}{2}}2^{-m\\frac {\\abs S}2}\\lambda^{-\\abs S}2^{m\\abs S}\\lambda^{\\abs S} \\\\\n&\\lesssim 2^{-n\\gamma}\\\\\n&\\cong \\lambda^{\\gamma},\n\\end{align*}\n\n\\noindent where we used in the third line that the number of $y$ with $\\scalar{\\phi_y^m,\\psi_x^\\lambda}\\neq 0$ is of order $2^{m\\abs S}\\lambda^{\\abs S}$.\n\nAs all three terms fulfill the necessary bounds, the lemma is proven.\n\n\\end{proof}\n\n\\noindent It remains to show that \\eqref{unique1} and \\eqref{unique2} uniquely characterize the reconstructed distribution $f$. We show this in the following lemma, and thus finish the proof of Theorem \\ref{stochastic_reconstruction}:\n\n\\begin{lemma}\\label{lem3}\nThere is at most one random distribution $f$ (up to modifications) with stochastic dimension $e$, which fulfills \\eqref{unique1} and \\eqref{unique2}.\n\\end{lemma}\n\n\\begin{proof}\n\nLet $f,g$ fulfill \\eqref{unique1} and \\eqref{unique2}, and let $\\mu$ be any test function with support $supp(\\mu)\\subset [0,\\tilde R]^d$. Let $r = (2R,\\dots,2R,0,\\dots,0)$, so that $\\mu_r = \\mu(\\cdot-r)$ has the right support to apply \\eqref{unique1} and \\eqref{unique2}. It then follows that\n\n\\begin{align*}\n\\norm{(f-g)(\\mu_x^\\lambda)}_{L_p} &= \\norm{(f-g)((\\mu_r)_{x-S^\\lambda r}^\\lambda)}_{L_p}\\\\\n&\\le \\norm{(f-F_{x-S^\\lambda r})((\\mu_r)_{x-S^\\lambda r}^\\lambda)}_{L_p} + \\norm{(g-F_{x-S^\\lambda r})((\\mu_r)_{x-S^\\lambda r}^\\lambda)}_{L_p} \\\\\n&\\lesssim \\lambda^{\\gamma-\\frac E2},\n\\end{align*}\n\n\\noindent and analogously, for $i=1,\\dots,e$:\n\n\\begin{align*}\n\\norm{E^{\\mathcal F^{(i)}_{\\pi_i(x)-\\lambda^{s_i} 2R}}(f-g)(\\mu_x^\\lambda)}_{L_p} \\lesssim \\lambda^\\gamma. \n\\end{align*}\n\n\\noindent In particular, this holds for our wavelets $\\phi,\\hat\\phi$. Now, let $\\psi$ be any smooth test function with compact support. It then follows that for any $n\\in\\mathbb N$:\n\n\\begin{align}\n(f-g)(\\psi) &= \\underbrace{\\sum_{y\\in\\Delta_n}(f-g)(\\phi_y^n)\\scalar{\\phi_y^n,\\psi} \\nonumber}_{(I)}\\\\&\\qquad+\\underbrace{\\sum_{m=n}^\\infty \\sum_{y\\in\\Delta_m}\\sum_{\\hat\\phi\\in\\Phi} (f-g)(\\hat\\phi_y^m)\\scalar{\\hat\\phi_y^m,\\psi}}_{(II)}.\\label{aslim}\n\\end{align}\n\n\\noindent Observe that only $\\cong 2^{n\\abs S}$ many summands of $(I)$ are non-zero, while the inner sum of $(II)$ has $\\cong 2^{m\\abs S}$ many non-zero summands.
Let us first analyze $(II)$:\n\n\\begin{align*}\n\\norm{\\sum_{y\\in\\Delta_m}\\sum_{\\hat\\phi\\in\\Phi} (f-g)(\\hat\\phi_y^m)\\scalar{\\hat\\phi_y^m,\\psi}}_{L_p} &\\le \\sum_{y\\in\\Delta_m}\\sum_{\\hat\\phi\\in\\Phi} \\underbrace{\\norm{(f-g)(\\hat\\phi_y^m)}_{L_p}}_{\\lesssim 2^{-m\\frac{\\abs S-E}{2}-m\\gamma}}\\underbrace{\\abs{\\scalar{\\hat\\phi_y^m,\\psi}}}_{\\lesssim 2^{-m\\frac {\\abs S}2-m\\tilde r}} \\\\&\\lesssim 2^{-m(\\gamma -\\frac E2 + \\tilde r)}.\n\\end{align*}\n\n\\noindent Since $\\tilde r$ can be chosen big enough that $\\gamma-\\frac E2+\\tilde r>0$, we conclude\n\n\\begin{align*}\n\\norm{\\sum_{m=n}^\\infty \\sum_{y\\in\\Delta_m}\\sum_{\\hat\\phi\\in\\Phi} (f-g)(\\hat\\phi_y^m)\\scalar{\\hat\\phi_y^m,\\psi}}_{L_p} &\\lesssim \\sum_{m=n}^\\infty 2^{-m(\\gamma-\\frac E2+\\tilde r)} \\\\&\\lesssim 2^{-n(\\gamma-\\frac E2+\\tilde r)} \\xrightarrow{n\\to\\infty}0.\n\\end{align*}\n\n\\noindent For $(I)$, we will use the same technique as in the proof of the existence of the limit. This will require us to split $\\Delta_n$ into $(\\lceil 3 R\\rceil)^e$ many nets $\\Delta_n^{i+1,d}(r_n^{(i)})$, similar to before. Of course, this only adds a factor $(\\lceil 3 R\\rceil)^e$ to the inequality and does not change the underlying argument. We thus suppress this notation. \n\nIteratively define\n\n\\begin{align*}\ng_y &:= (f-g)(\\phi_y^n)\\scalar{\\phi_y^n,\\psi} \\\\\ng_{y_1,y} &:= g_{(y_1,y)}-E^{\\mathcal F^{(1)}_{S_1(y_1)}}g_{(y_1,y)} \\\\\n&\\vdots \\\\\ng_{y_1,\\dots,y_e,y} &:= g_{y_1,\\dots,y_{e-1},(y_e,y)}-E^{\\mathcal F^{(e)}_{S_e(y_e)}}g_{y_1,\\dots,y_{e-1},(y_e,y)},\n\\end{align*}\n\n\\noindent where $S_i(y_i) := y_i-2^{-ns_i}\\cdot 2R$. Observe that \n\n\\begin{equation*}\n\\norm{g_y}_{L_p} = \\underbrace{\\norm{(f-g)(\\phi_y^n)}_{L_p}}_{\\lesssim 2^{-n\\gamma-n\\frac{\\abs S-E}{2}}}\\underbrace{\\abs{\\scalar{\\phi_y^n,\\psi}}}_{\\lesssim 2^{-n\\frac {\\abs S}2}} \\lesssim 2^{-n\\gamma-n\\abs S+n\\frac E2},\n\\end{equation*}\n\n\\noindent and analogously\n\n\\begin{equation*}\n\\norm{E^{\\mathcal F^{(1)}_{S_1(y_1)}}g_{(y_1,y)}}_{L_p} \\lesssim 2^{-n\\gamma-n\\abs S}.\n\\end{equation*}\n\n\\noindent Of course, the same bounds apply to all $g_{y_1,\\dots,y_i,y}$.
We claim that for $i=0,\\dots, e$, it holds that\n\n\\begin{equation*}\n\\norm{\\sum_{y\\in\\Delta_n^{i+1,d}}g_{y_1,\\dots,y_i,y}}_{L_p} \\lesssim 2^{-n\\gamma-n\\frac {E_i}2}.\n\\end{equation*}\n\n\\noindent This especially implies that \n\n\\begin{equation*}\n\\norm{\\sum_{y\\in\\Delta_n}(f-g)(\\phi_y^n)\\scalar{\\phi_y^n,\\psi}}_{L_p} = \\norm{\\sum_{y\\in\\Delta_n}g_y}_{L_p}\\lesssim 2^{-n\\gamma}\\xrightarrow{n\\to\\infty} 0.\n\\end{equation*}\n\n\\noindent We show the claim inductively: for $i=e$, it holds that only $\\cong 2^{n(\\abs S-E)}$ many terms of the following sum are non-zero, which implies\n\n\\begin{equation*}\n\\norm{\\sum_{y\\in\\Delta_n^{e+1,d}}g_{y_1,\\dots,y_e,y}}_{L_p} \\lesssim 2^{n(\\abs S-E)}\\cdot 2^{-n\\gamma-n\\abs S+n\\frac E2} = 2^{-n\\gamma-n\\frac E2}.\n\\end{equation*}\n\n\\noindent If we assume that the claim holds for $i+1$, the stochastic machinery gives us\n\n\\begin{align*}\n\\norm{\\sum_{y\\in\\Delta_n^{i+1,d}}g_{y_1,\\dots,y_i,y}}_{L_p} &\\lesssim \\sum_{y_{i+1}\\in\\Delta_n^{i+1,i+1}}\\sum_{y\\in\\Delta_n^{i+2,d}} \\norm{E^{\\mathcal F^{(i+1)}_{S_{i+1}(y_{i+1})}}g_{y_1,\\dots,y_i,(y_{i+1},y)}}_{L_p} \\\\\n&\\hspace{50pt} +\\bigg(\\sum_{y_{i+1}\\in\\Delta_n^{i+1,i+1}}\\norm{\\sum_{y\\in\\Delta_n^{i+2,d}}g_{y_1,\\dots,y_{i+1},y}}_{L_p}^2\\bigg)^{\\frac 12} \\\\\n&\\lesssim 2^{n(\\abs S-E_i)}2^{-n\\gamma-n\\abs S} + 2^{\\frac {ns_{i+1}}2}2^{-n\\gamma-n\\frac {E_{i+1}}2} \\\\\n&\\lesssim 2^{-n\\gamma-n\\frac {E_i}2}.\n\\end{align*}\n\n\\noindent Thus, we have shown the claim, and conclude $(f-g)(\\psi) = 0$, since $n$ can be chosen arbitrarily large.\n\\end{proof}\n\n\n\\subsection{Two-sided Reconstruction}\\label{twosided}\n\nThe theorem we showed so far is modeled after left-sided approximations of stochastic integrals. However, it is also possible (and closer to the original reconstruction theorem) to have a two-sided reconstruction, i.e. to use test functions $\\psi$ with support in $[-\\tilde R,\\tilde R]^d$ for some $\\tilde R>0$. In this setting, it no longer makes sense to condition $F_x(\\psi_x^\\lambda)$ on $\\mathcal F^{(i)}_{\\pi_i(x)}$, since the conditioning would not have any effect on test functions with purely negative support. Instead, we are going to condition it on the left boundary of the support of $\\psi_x^\\lambda$, i.e. on $\\mathcal F^{(i)}_{\\pi_i(x)-\\lambda^{s_i}\\tilde R}$.\n\n\\begin{definition}[two-sided stochastic coherence]\n\nLet $(F_x)_{x\\in\\mathbb{R}^d}$ be a stochastic germ with stochastic dimension $e\\le d$. We call $(F_x)$ \\emph{two-sided stochastically $\\gamma$-coherent} if there is an $\\alpha\\in\\mathbb R$ such that the following holds for all $\\psi\\in C_c^r$ with $\\norm{\\psi}_{C_c^r}=1$ and support in $[-\\tilde R,\\tilde R]^d$ for some $\\tilde R>0$, for the stochastic directions $i=1,\\dots,e$, for any compact set $K\\subset \\mathbb{R}^d$ and for any $x,y\\in K$:\n\n\n\\begin{align*}\n\\norm{E^{\\mathcal F^{(i)}_{\\pi_i(y)-\\lambda^{s_i} \\tilde R}}(F_x-F_y)(\\psi_y^\\lambda)}_{L_2} &\\lesssim \\lambda^{\\alpha}(\\abs{x-y}+\\lambda)^{\\gamma-\\alpha}\\\\\n\\norm{(F_x-F_y)(\\psi_y^\\lambda)}_{L_2} &\\lesssim \\lambda^{\\alpha}(\\abs{x-y}+\\lambda)^{\\gamma-\\frac E2-\\alpha},\n\\end{align*}\n\n\\noindent where the constant in $\\lesssim$ is allowed to depend on the compact set $K$ and $\\tilde R$.\n\\end{definition}\n\n\\noindent Note that this definition also does not have the condition that $\\pi_i(x)\\le \\pi_i(y)$, but rather uses arbitrary $x,y$. 
Further note that there is no cheap way to get from the one-sided coherence property to the two-sided one, or vice versa: If one replaces $\\psi$ with a shifted function $\\psi_{\\vec r}$ to get a test function with purely positive support, and substitutes $y$ with $\\tilde y = y-S^\\lambda\\vec r$, such that $\\psi_y^\\lambda = (\\psi_{\\vec r})_{\\tilde y}^\\lambda$, one gets $F_{\\tilde y + S^\\lambda \\vec r}$ instead of $F_{\\tilde y}$. This highlights the main difference between the two versions: In the one-sided reconstruction, $F_y$ only gets tested against the future, while in the two-sided version, $F_y$ sits ``in the middle'' of the support of the test function. This makes the two-sided version closely connected to Stratonovich integration. Since we are interested in the martingale properties, the one-sided version (which is closely connected to It\u00f4 integration) is more natural.\n\nUsing this property, one can show the following theorem:\n\n\\begin{theorem}[two-sided stochastic reconstruction]\\label{two-sided stochastic_reconstruction}\nLet $(F_x)$ be a random germ with stochastic dimension $e\\le d$, which is two-sided stochastically $\\gamma$-coherent for some $\\gamma > 0$. Then, there is a unique (up to modifications) random distribution $f$ with stochastic dimension $e$ with respect to the same filtrations as the germ $(F_x)$, such that the following holds for a test function $\\psi$ with support in $[-\\tilde R,\\tilde R]^d$ for some $\\tilde R>0$:\n\n\\begin{align*}\n\\norm{(f-F_x)(\\psi_x^\\lambda)}_{L_2}&\\lesssim \\lambda^{\\gamma-\\frac E2} \\\\\n\\norm{E^{\\mathcal F^{(i)}_{\\check S_i(x)}}(f-F_x)(\\psi_x^\\lambda)}_{L_2}&\\lesssim \\lambda^\\gamma,\n\\end{align*} \n\n\\noindent where $\\check S_i(x) := \\pi_i(x)-\\lambda^{s_i}(\\tilde R+4R)$, and the constant in $\\lesssim$ is allowed to depend on the compact set $K\\subset\\mathbb{R}^d$ with $x\\in K$.\n\\end{theorem}\n\n\\begin{proof}\nThe proof is essentially the same as for the one-sided version. However, one can use the more usual wavelets with supports fulfilling $supp(\\phi),supp(\\hat\\phi)\\subset [- R, R]^d$.\n\\end{proof}\n\n\\begin{remark}\nNote that this theorem has a similar fattening in the uniqueness property: We do not condition on the left boundary $\\pi_i(x)-\\lambda^{s_i}\\tilde R$, but we leave a ``gap'' in which enough wavelets fit.\n\\end{remark}\n\n\\section{Stochastic reconstruction is stochastic sewing in 1 dimension}\\label{sectionSewing}\n\nAs we discussed in the introduction, sewing and reconstruction are closely related to one another. It is well known (although we are not aware of a reference that writes this down in a concise way) that for a two-parameter process $(A(s,t))_{s,t\\in[0,T]}$ which fulfills the conditions of the classical sewing lemma, the germ given by\n\n\\begin{equation*}\nF_s(t) = \\frac{\\partial}{\\partial t}A(s,t)\n\\end{equation*}\n\n\\noindent fulfills the conditions of the reconstruction theorem (as formulated in \\cite{caravenna}). Here, $\\frac{\\partial}{\\partial t}$ should be seen as a distributional derivative, so the rigorous way to write it should be\n\n\\begin{equation*}\nF_s(\\psi) = -\\int_{-\\infty}^{\\infty} A(s,t)\\psi'(t) dt\n\\end{equation*}\n\n\\noindent for any $\\psi\\in C_c^r$. 
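\n\nBefore going on, it may help to see the sewing mechanism at work numerically. The following minimal sketch (in Python; the It\u00f4 germ $A(s,t)=W_s(W_t-W_s)$, the grid sizes and the seed are illustrative assumptions, not taken from any reference implementation) builds the sewing limit as Riemann-type sums over dyadic partitions and compares it with the exact It\u00f4 integral $\\int_0^T W_t dW_t = (W_T^2-T)\/2$:\n\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)      # illustrative seed\nN, T = 2**16, 1.0                   # finest dyadic grid (illustrative)\ndt = T \/ N\n# Brownian path on the finest grid\nW = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), N))])\n\ndef sewing_sum(level):\n    # Riemann-type sum of A(t_i, t_{i+1}) over a dyadic partition,\n    # with the Ito germ A(s, t) = W_s (W_t - W_s)\n    Ws = W[np.arange(0, N + 1, N \/\/ 2**level)]\n    return np.sum(Ws[:-1] * np.diff(Ws))\n\nexact = 0.5 * (W[-1]**2 - T)        # Ito integral of W against dW\nfor level in (4, 8, 12, 16):\n    print(level, abs(sewing_sum(level) - exact))\n\\end{verbatim}\n\n\\noindent The error typically decreases as the partition is refined, while replacing the germ by the Stratonovich-type choice $A(s,t)=\\frac 12(W_s+W_t)(W_t-W_s)$ makes the very same sums converge to $W_T^2\/2$ instead, illustrating how strongly the limit depends on the germ.\n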
It further holds that the following diagram commutes:\n\n\\[\\begin{tikzcd}\n\\fbox{\\parbox{2cm}{\\centering $A(s,t)$}} \\arrow[r,\"{\\frac{\\partial}{\\partial t}}\"] \\arrow[d,\"{\\text{sewing}}\"] & \\fbox{\\parbox{3cm}{\\centering $F_s(t) = \\frac{\\partial}{\\partial t} A(s,t)$}}\\arrow[d,\"\\text{reconstruction}\"] \\\\\n\\fbox{$I(t)$}\\arrow[r,\"{\\frac{\\partial}{\\partial t}}\"] & \\fbox{$f = \\dot I$}\n\\end{tikzcd}\\]\n\n\\noindent Thus, sewing can indeed be seen as the one-dimensional case of reconstruction. By a result of \\cite{brault}, Lemma 3.10, we can reverse the time derivative and regain $I(t)$ as \"$f(1_{[0,t]})$\", which is of course only rigorous as the function $z$ from Brault's lemma, so the arrows in the middle can be reversed.\n\nGiven all this, it should come as little surprise that the statement stays true in the stochastic case: indeed, the stochastic reconstruction theorem is nothing other than the distributional version of Khoa Le's stochastic sewing lemma in one dimension. We recall said stochastic sewing lemma \\cite{le}:\n\n\\begin{theorem}[stochastic sewing lemma]\nLet $2\\le p\\le \\infty$, and let $(A(s,t))_{s,t\\in[0,T]}$ be a two-parameter process in $L_p(\\Omega)$, which is adapted to some complete, right-continuous filtration $\\mathcal F_t$ in the sense that $A(s,t)\\in\\mathcal F_t$ for all $0\\le s\\le t\\le T$. Suppose that there is a $\\gamma>0$ such that for all $0\\le s\\le u\\le t\\le T$, it holds that\n\n\\begin{align*}\n\\norm{A(s,t)-A(s,u)-A(u,t)}_{L_p}&\\lesssim \\abs{t-s}^{\\frac 12+\\gamma} \\\\\n\\norm{E^{\\mathcal F_s}(A(s,t)-A(s,u)-A(u,t))}_{L_p}&\\lesssim \\abs{t-s}^{1+\\gamma}.\n\\end{align*}\n\n\\noindent Then, there is a unique (up to modifications) process $(I(t))_{t\\in[0,T]}$ in $L_p(\\Omega)$, which is adapted to $\\mathcal F_t$, has $I(0) = 0$ and fulfills for all $0\\le s\\le t\\le T$\n\n\\begin{align*}\n\\norm{I(t)-I(s)-A(s,t)}_{L_p}&\\lesssim \\abs{t-s}^{\\frac 12+\\gamma} \\\\\n\\norm{E^{\\mathcal F_s}(I(t)-I(s)-A(s,t))}_{L_p}&\\lesssim \\abs{t-s}^{1+\\gamma}.\n\\end{align*}\n\\end{theorem}\n\n\\noindent Note that we can easily extend $A(s,t)$ to be a process over $\\mathbb{R}^2$, by setting $A(s,t) = A(0,t)$ for $s\\le 0, t\\in[0,T]$ and $A(s,t) = A(s,T)$ for $t\\ge T,s\\in[0,T]$. If both parameters satisfy $s,t\\notin[0,T]$, we simply set $A(s,t) = A(0,T)$ if $s\\le 0$ and $t\\ge T$, and $A(s,t) = 0$ if both are less than $0$ or greater than $T$. This leads to a process $I(t)$, which is the same for $t\\in[0,T]$ and constant outside of this interval.\n\nWe want to set $F_s(t) = \\frac{\\partial}{\\partial t}A(s,t)$, as above. To do so, we need to assume that almost surely $A(s,\\cdot)$ is locally integrable. In practice, most candidates for $A$ are almost surely continuous or at least piecewise continuous, so this additional assumption is reasonable to make. Under this assumption, we define the germ\n\n\\begin{equation*}\nF_s(\\psi) = -\\int_{-\\infty}^{\\infty} A(s,t)\\psi'(t) dt\n\\end{equation*}\n\n\\noindent as before. Then, one can show the following:\n\n\\begin{theorem}\nLet $A(s,t)$ fulfill the assumptions of the stochastic sewing lemma, and assume $A(s,\\cdot)$ is in $L_1^{loc}(\\mathbb{R})$ almost surely. Then, $(F_s)_{s\\in\\mathbb{R}}$ is stochastically $\\gamma$-coherent with stochastic dimension $1$. Let $I(t)$ be the process we get from the sewing lemma applied to $A(s,t)$, and let $f$ be the distribution gained from Theorem \\ref{stochastic_reconstruction} applied to $(F_x)$. 
It holds that\n\n\\begin{equation*}\nf(\\psi) = -\\int_{-\\infty}^\\infty I(t)\\psi'(t) dt,\n\\end{equation*}\n\n\\noindent for all $\\psi\\in C_c^r$.\n\\end{theorem}\n\n\\begin{proof}\nLet us check the coherence property: Observe that\n\n\\begin{align*}\n(F_s-F_u)(\\psi) = \\int_{-\\infty}^\\infty (A(s,t)-A(s,u)-A(u,t))\\psi'(t) dt,\n\\end{align*}\n\n\\noindent where we used that $A(s,u)\\int_{-\\infty}^\\infty \\psi'(t) dt = 0$ since $\\psi$ is compactly supported. It follows that for all $s\\le u$ and $\\psi$ with compact, positive support\n\n\\begin{align*}\n\\norm{(F_s-F_u)(\\psi_u^\\lambda)}_{L_2} &\\le \\sup_{t\\in supp(\\psi_u^\\lambda)}\\norm{A(s,t)-A(s,u)-A(u,t)}_{L_2}\\int_{-\\infty}^\\infty \\abs{(\\psi_u^\\lambda)'(t)} dt \\\\&\\lesssim (\\abs{u-s}+\\lambda)^{\\frac 12+\\gamma}\\lambda^{-1},\n\\end{align*}\n\n\\noindent and analogously\n\n\\begin{equation*}\n\\norm{E^{\\mathcal F_s}(F_s-F_u)(\\psi_u^\\lambda)}_{L_2} \\lesssim (\\abs{u-s}+\\lambda)^{1+\\gamma}\\lambda^{-1}.\n\\end{equation*}\n\n\\noindent Thus, $(F_s)_{s\\in\\mathbb R}$ is indeed stochastically $\\gamma$-coherent and there exists a reconstruction $f$. It remains to show that $f=\\dot I$. To this end, we calculate\n\n\\begin{align*}\n(F_s-\\dot I)(\\psi) = -\\int_{-\\infty}^\\infty (A(s,t) - (I(t)-I(s)))\\psi'(t) dt,\n\\end{align*}\n\n\\noindent where we again used that $I(s) \\int\\psi'(t) dt = 0$. It follows that for all test functions with strictly positive support\n\n\\begin{align*}\n\\norm{(F_s-\\dot I)(\\psi_s^\\lambda)}_{L_2} &\\le \\sup_{t\\in supp(\\psi_s^\\lambda)}\\norm{A(s,t)-(I(t)-I(s))}_{L_2}\\int_{-\\infty}^\\infty \\abs{(\\psi_s^\\lambda)'(t)}dt \\\\\n&\\lesssim \\lambda^{-\\frac 12+\\gamma},\n\\end{align*}\n\n\\noindent and analogously,\n\n\\begin{equation*}\n\\norm{E^{\\mathcal F_s}(F_s-\\dot I)(\\psi_s^\\lambda)}_{L_2}\\lesssim \\lambda^\\gamma.\n\\end{equation*}\n\n\\noindent By the uniqueness of the reconstruction, this concludes the proof.\n\\end{proof}\n\n\\section{Gaussian Martingale Measure}\\label{sectionGMM}\n\nAs an application of stochastic reconstruction, we show that integration against Gaussian martingale measures can be seen as a product of a process in $C^\\alpha$ and a distribution in $C^\\beta$, similar to the Young product between distributions presented in the introduction. The martingale properties of the measure will allow us to do this reconstruction up to $\\alpha+\\beta>-\\frac 12$, in comparison to the classical assumption $\\alpha+\\beta>0$.\n\nWe begin by introducing the notion of martingale measures, which can be found in \\cite{walshOriginal} or \\cite{davar}.\n\n\\subsection{Gaussian martingale measures and Walsh-type Integration}\n\nLoosely speaking, a Gaussian martingale measure is a Gaussian family $(W_t(A))_{t\\ge 0, A\\in \\mathcal A_K}$ for some subset $\\mathcal A_K\\subset B(\\mathbb{R}^d)$, such that\n\n\\begin{itemize}\n\\item $W_t(A)$ is a martingale in $t$, and\n\\item $W_t(A)$ is a Gaussian measure in $A$.\n\\end{itemize}\n\n\\noindent To get a Gaussian measure (or a Gaussian family in general), we need to clarify what its covariance is. This is given through the notion of a covariance measure:\n\n\\begin{definition}\nLet $K$ be a symmetric, positive definite and $\\sigma$-finite signed measure on $(\\mathbb{R}^d\\times\\mathbb{R}^d,B(\\mathbb{R}^d)\\otimes B(\\mathbb{R}^d))$. 
Then, $K$ is called a \\emph{covariance measure} if there is a symmetric, positive definite, and $\\sigma$-finite measure $\\abs{K}$, such that for all $A,B\\in B(\\mathbb{R}^d)$:\n\n\\begin{equation*}\n\\abs{K(A\\times B)}\\le\\abs{K}(A\\times B).\n\\end{equation*}\n\\end{definition}\n\n\\noindent We set \n\n\\begin{equation*}\n\\norm{f}_K^2 := \\int_{\\mathbb{R}^d\\times\\mathbb{R}^d} f(x)f(y) K(dx,dy),\n\\end{equation*}\n\n\\noindent and define $\\norm{f}_{\\abs K}$ analogously. We further set $\\mathcal A_K := \\{A\\in B(\\mathbb{R}^d)~\\vert~K(A\\times A)<\\infty\\}$. This allows us to formally define the Gaussian martingale measure $W$ as follows:\n\n\\begin{definition}\nLet $K$ be a covariance measure. A \\emph{Gaussian martingale measure} is a family of centered Gaussian variables\n\n\\begin{equation*}\n(W_t(A)~\\vert~t\\ge 0,A\\in\\mathcal A_K),\n\\end{equation*}\n\n\\noindent such that\n\n\\begin{enumerate}\n\\item $W_0(A) = 0$.\n\\item For all $A\\in \\mathcal A_K$, $(W_t(A))_{t\\ge 0}$ is a continuous martingale with respect to the filtration $\\sigma(W_s(A)~\\vert~0\\le s\\le t,A\\in\\mathcal A_K)$.\n\\item Its covariance is given by\n\n\\begin{equation*}\nE(W_s(A)W_t(B)) = (s\\land t) K(A\\times B).\n\\end{equation*}\n\\end{enumerate}\n\\end{definition}\n\n\\noindent We also denote the closure of the above filtration as\n\n\\begin{equation*}\n\\mathcal F_t := \\overline{\\sigma(W_s(A)~\\vert~0\\le s\\le t,A\\in\\mathcal A_K)}.\n\\end{equation*}\n\n\\noindent The construction of the integral over such a martingale measure is rather straightforward: We call processes of the form\n\n\\begin{equation*}\nH(s,x) = \\sum_{k=1}^{n-1}\\sum_{l=1}^{L_k} h_{k,l} 1_{(t_k,t_{k+1}]}(s) 1_{A_{k,l}}(x)\n\\end{equation*}\n\n\\noindent with $h_{k,l}\\in L_{\\infty}(\\mathcal F_{t_k})$ and $A_{k,l}\\in\\mathcal A_K$ \\emph{elementary processes}, and define the integral over such processes by\n\n\\begin{equation*}\n\\int_0^\\infty\\int_{\\mathbb{R}^d} H(s,x) W(ds,dx) := \\sum_{k=1}^{n-1}\\sum_{l=1}^{L_k} h_{k,l}(W_{t_{k+1}}(A_{k,l})-W_{t_k}(A_{k,l})).\n\\end{equation*} \n\n\\noindent We want to extend this definition to a certain $L_2$ space. To do this, let $\\mathcal P$ be the predictable $\\sigma$-algebra generated by the elementary processes. 
We set\n\n\\begin{equation*}\nL_2(W) := \\left\\{H \\text{ predictable}, E\\left(\\int_0^\\infty \\norm{H(s,\\cdot)}_{\\abs K}^2ds\\right)<\\infty \\right\\}.\n\\end{equation*}\n\n\\noindent The following result can be found in \\cite{walshOriginal}:\n\n\\begin{lemma}\nThe set of elementary processes is dense in $L_2(W)$.\n\\end{lemma}\n\n\\noindent Our notion of an integral extends to the space $L_2(W)$, as the following theorem states:\n\n\\begin{theorem}[Walsh Integral]\n\nThe integral given above can be extended uniquely to a continuous, linear map\n\n\\begin{align*}\nL_2(W)&\\rightarrow M^2_0 \\\\\nH&\\mapsto \\int_0^t\\int_{\\mathbb{R}^d} H(s,x) W(ds,dx),\n\\end{align*}\n\n\\noindent where $M^2_0$ is the set of square-integrable martingales starting in 0.\n\n\\end{theorem}\n\n\\noindent The proof can be found in \\cite{walshOriginal}.\n\nBy a simple approximation argument, one can show that the \\emph{It\u00f4-isometry} holds for all $H\\in L_2(W)$ and $t\\ge 0$:\n\n\\begin{equation*}\nE\\left(\\left(\\int_0^t\\int_{\\mathbb{R}^d} H(s,x) W(ds,dx)\\right)^2\\right) = E\\left(\\int_0^t \\norm{H(s,\\cdot)}_K^2ds\\right).\n\\end{equation*}\n\n\\noindent With this, we can tackle the main result of this section:\n\n\\subsection{Reconstruction of the Walsh-type Integral}\n\nLet $W$ be a Gaussian martingale measure, and let $X(t,x)$ be a predictable process. We want to show that the distribution associated with the Walsh-type integral\n\n\\begin{equation*}\n\\psi\\mapsto \\int_0^\\infty\\int_{\\mathbb{R}^d} \\psi(s,x)X(s,x)W(ds,dx)\n\\end{equation*}\n\n\\noindent can be constructed with the stochastic reconstruction theorem. Note that the integral is a martingale in the time variable $t$, making our stochastic dimension $e=1$. \n\nWe need a local approximation for the integral, which we get from the constant approximation $X(s,x)\\approx X(t,y)$ for $(s,x)$ close to $(t,y)$. This motivates using the germ $F_{t,y} = X(t,y)W$, i.e.\n\n\\begin{equation*}\nF_{t,y}(\\psi) := X(t,y)\\int_0^\\infty \\int_{\\mathbb{R}^d} \\psi(s,x)W(ds,dx).\n\\end{equation*}\n\n\\begin{remark}\nWhile this allows us to formally define the germ on any test function, it is only really useful on test functions with support to the right-hand side of $t$, so that $X(t,y)\\int_0^\\tau\\int_{\\mathbb{R}^d}\\psi(s,x)W(ds,dx)$ is a martingale. Our left-sided version of the reconstruction theorem mirrors this property.\n\\end{remark}\n\n\\noindent For simplicity, we restrict ourselves to $L_2(\\Omega)$, but one can always use Gaussianity to extend our result to $L_p$ spaces for more general $p$. The regularity of $X$ is governed by its H\u00f6lder-continuity, i.e. we assume $X\\in C^\\alpha(L_2(\\Omega))$ for some $\\alpha>0$, where we use the classical scaling $s=(1,1,\\dots,1)$. It is however not clear a priori how to measure the regularity of $W$. To this end, let $K$ be the covariance measure of $W$. We say that $K$ has a scaling $\\delta>0$ if it holds that\n\n\\begin{equation*}\nK(\\lambda dx,\\lambda dy) = \\lambda^\\delta K(dx,dy).\n\\end{equation*}\n\n\\noindent The following lemma shows that $W$ as a distribution is $-d+\\frac 12+\\frac\\delta 2$-H\u00f6lder continuous if its covariance measure has the scaling $\\delta$:\n\n\\begin{lemma}\\label{lemma4}\nLet $K$ have scaling $\\delta$, as above. It holds that:\n\n\\begin{equation*}\n\\int_0^\\infty\\norm{\\phi_{t,y}^\\lambda(s,\\cdot)}_K^2 ds \\lesssim \\lambda^{2\\alpha}\n\\end{equation*}\n\n\\noindent for $\\alpha = -d+\\frac 12+\\frac\\delta2$. 
We say that $K$ is of homogeneity $\\alpha.$\n\\end{lemma}\n\n\\begin{proof}\nA straightforward calculation shows:\n\n\\begin{align*}\n\\int_0^\\infty \\int_{\\mathbb{R}^d\\times\\mathbb{R}^d} \\phi_{t,y}^\\lambda(s,x_1)&\\phi_{t,y}^\\lambda(s,x_2)K(dx_1,dx_2)ds \\\\& = \\lambda^{-2d}\\int_0^\\infty \\int_{\\mathbb{R}^d\\times\\mathbb{R}^d} \\phi_{t,y}(s\/\\lambda,x_1\/\\lambda)\\phi_{t,y}(s\/\\lambda,x_2\/\\lambda)K(dx_1,dx_2)ds \\\\\n&= \\lambda^{-2d}\\int_0^\\infty \\int_{\\mathbb{R}^d\\times\\mathbb{R}^d} \\phi_{t,y}(r,v_1)\\phi_{t,y}(r,v_2)K(\\lambda dv_1,\\lambda dv_2)\\lambda dr \\\\\n&= \\lambda^{-2d+1+\\delta} \\int_0^\\infty \\int_{\\mathbb{R}^d\\times\\mathbb{R}^d} \\phi_{t,y}(r,v_1)\\phi_{t,y}(r,v_2)K(dv_1,dv_2) dr \\\\&= \\lambda^{-2d+1+\\delta}\\int_0^\\infty\\norm{\\phi_{t,y}(s,\\cdot)}_K^2 ds \\\\&= \\lambda^{-2d+1+\\delta}\\int_0^\\infty\\norm{\\phi(s,\\cdot)}_K^2 ds.\n\\end{align*}\n\\end{proof}\n\n\\begin{remark}\\label{remark}\nUsing the It\u00f4-isometry, this implies that\n\n\\begin{equation*}\n\\norm{\\int_{\\mathbb{R}_+\\times\\mathbb{R}^d}\\phi_{t,y}^\\lambda(s,x)W(ds,dx)}_{L_2} = \\left(\\int_0^\\infty\\norm{\\phi_{t,y}^\\lambda(s,\\cdot)}_K^2 ds\\right)^{\\frac 12}\\lesssim \\lambda^\\alpha.\n\\end{equation*}\n\n\\noindent Thus, the distribution $\\psi\\mapsto \\int_{\\mathbb{R}_+\\times\\mathbb{R}^d}\\psi(s,x)W(ds,dx)$ is indeed in $C^\\alpha$.\n\\end{remark}\n\n\\noindent With this result, we can reconstruct the Walsh integral:\n\n\\begin{theorem}\\label{Walsh}\nLet $K$ be of homogeneity $\\alpha$, and let $X\\in C^\\beta(L_2(\\Omega))$ for some $\\beta\\in(0,1)$. Let $(F_{t,y})$ be as above. If $\\alpha+\\beta>-\\frac 12$, then $(F_{t,y})$ fulfills the assumption of Theorem \\ref{stochastic_reconstruction}, and thus there exists a distribution $f$ fulfilling \\eqref{unique1} and \\eqref{unique2}. It further holds that $f$ is given by the Walsh-type integral, i.e.\n\n\\begin{equation}\\label{walshRec}\nf(\\psi) = \\int_0^{\\infty}\\int_{\\mathbb{R}^d} X(s,x)\\psi(s,x)W(ds,dx),\n\\end{equation}\n\n\\noindent for all $\\psi\\in C_c^r$, $r=\\max(\\lfloor-\\alpha\\rfloor+1,1)$.\n\\end{theorem}\n\n\\begin{remark}\nObserve that by Lemma \\ref{lemma4}, Theorem \\ref{Walsh} automatically applies to all $K$ with a scaling $\\delta$ such that\n\n\\begin{equation*}\n\\beta+\\frac \\delta2 > d-1.\n\\end{equation*}\n\\end{remark}\n\n\\begin{proof}\nWe first show that the assumptions of Theorem \\ref{stochastic_reconstruction} are fulfilled: Let $t\\le s$, and let $\\psi$ be a test function with support in $[0,1]\\times[-1,1]^d$. It holds that:\n\n\\begin{align*}\nE^{\\mathcal F_{t}} &(F_{t,y}-F_{s,x})(\\psi_{s,x}^\\lambda) \\\\\n&=E^{\\mathcal F_{t}} \\left((X(t,y)-X(s,x)) E^{\\mathcal F_s}\\int_0^\\infty\\int_{\\mathbb{R}^d} \\psi_{s,x}^\\lambda(u,v) W(du,dv)\\right) = 0,\n\\end{align*}\n\n\\noindent since $\\int_0^\\tau\\int_{\\mathbb{R}^d}\\psi_{s,x}^\\lambda(u,v) W(du,dv)$ is a martingale in $\\tau$ and $s$ is on the edge of the support of $\\psi_{s,x}^\\lambda$. It remains to show that\n\n\n\\begin{equation*}\n\\norm{(F_{t,y}-F_{s,x})(\\psi_{s,x}^\\lambda)}_{L_2} \\lesssim \\lambda^{\\tilde\\alpha}(\\abs{(s,x)-(t,y)}+\\lambda)^{\\gamma-\\frac 12-\\tilde\\alpha}\n\\end{equation*}\n\n\\noindent for some $\\tilde\\alpha\\in\\mathbb{R}$ and $\\gamma > 0$. 
We choose $\\gamma = \\alpha+\\beta+\\frac 12$ and observe the following:\n\n\\begin{align}\n\\norm{(F_{t,y}-F_{s,x})(\\psi_{s,x}^\\lambda)}_{L_2}^2 &= \\norm{(X(t,y)-X(s,x)) \\int_0^\\infty\\int_{\\mathbb{R}^d} \\psi_{s,x}^\\lambda(u,v) W(du,dv)}_{L_2}^2 \\nonumber\\\\\n&\\le \\int_0^\\infty\\int_{\\mathbb{R}^d\\times\\mathbb{R}^d} \\underbrace{\\norm{(X(t,y)-X(s,x))}_{L_2}^2}_{\\lesssim \\abs{(t,y)-(s,x)}^{2\\beta}} \\psi_{s,x}^\\lambda(u,v)\\psi_{s,x}^\\lambda(u,w) K(dv,dw) du \\nonumber\n\\\\&\\lesssim \\abs{(t,y)-(s,x)}^{2\\beta} \\int_0^\\infty \\norm{\\psi_{s,x}^\\lambda(u,\\cdot)}_K^2 du \\nonumber\n\\\\&\\lesssim \\abs{(t,y)-(s,x)}^{2\\beta}\\lambda^{2\\alpha}. \\label{calc}\n\\end{align}\n\n\\noindent Thus, using $\\tilde \\alpha = \\alpha$, it follows that\n\n\\begin{align*}\n\\norm{(F_{t,y}-F_{s,x})(\\psi_{s,x}^\\lambda)}_{L_2} \\lesssim \\lambda^{\\tilde \\alpha} (\\abs{(t,y)-(s,x)}+\\lambda)^{\\alpha+\\beta-\\tilde\\alpha} = \\lambda^{\\tilde \\alpha} (\\abs{(t,y)-(s,x)}+\\lambda)^{\\gamma-\\frac 12-\\tilde\\alpha}.\n\\end{align*}\n\n\\noindent This shows that there is a unique reconstruction $f$. To find $r$, note that the proof of Theorem \\ref{stochastic_reconstruction} requires $\\tilde r = r$ to be strictly greater than the absolute value of the homogeneity of the germ, which is $\\alpha$ according to Remark \\ref{remark}, and $r\\ge \\gamma-\\frac 12 > -\\frac 12$. It follows that $r=\\max(\\lfloor-\\alpha\\rfloor+1,1)$.\n\nIt remains to show \\eqref{walshRec}. To this end, assume that the support of $\\psi$ is in $[C,C+1]\\times[-1,1]^d$ for some $C>0$. We need to show that \\[\\tilde f(\\psi) := \\int_0^{\\infty}\\int_{\\mathbb{R}^d} X(s,x)\\psi(s,x)W(ds,dx)\\] fulfills \\eqref{unique1} and \\eqref{unique2}. Note that \\eqref{unique2} is again obvious due to the martingale properties of $\\int_0^t\\int_{\\mathbb{R}^d} X(s,x)\\psi(s,x)W(ds,dx)$, so it remains to show \\eqref{unique1}. Using the same calculation as in \\eqref{calc}, we see that\n\n\\begin{align*}\n\\norm{(\\tilde f- F_{t,y})(\\psi_{t,y}^\\lambda)}_{L_2} &= \\norm{\\int_0^\\infty\\int_{\\mathbb{R}^d} (X(s,x)-X(t,y))\\psi_{t,y}^\\lambda(s,x)W(ds,dx)}_{L_2} \\\\\n&\\lesssim \\sup_{(s,x)\\in supp(\\psi_{t,y}^\\lambda)} \\underbrace{\\norm{X(s,x)-X(t,y)}_{L_2}}_{\\lesssim \\lambda^\\beta} \\left(\\int_0^\\infty \\norm{\\psi_{t,y}^\\lambda(s,\\cdot)}_K^2 ds\\right)^{\\frac 12} \\\\&\\lesssim \\lambda^{\\alpha+\\beta} = \\lambda^{\\gamma-\\frac 12}.\n\\end{align*}\n\n\\noindent Thus, $\\tilde f = f$ up to modifications, which finishes the proof.\n\\end{proof}\n\n\\section{Integration against white noise}\\label{sectionWN}\n\nWe would like to present an example with a non-trivial stochastic dimension. To do so, let $\\xi$ be white noise over $\\mathbb{R}^d$: Consider the Gaussian family $(\\xi(\\psi))_{\\psi\\in L_2(\\mathbb{R}^d)}$, where each $\\xi(\\psi)$ is a centered Gaussian random variable with covariance\n\n\\begin{equation}\\label{covarianceNew}\nE(\\xi(\\psi)\\xi(\\phi)) = \\int_{z\\in\\mathbb{R}^d} \\psi(z)\\phi(z) dz,\n\\end{equation}\n\n\\noindent which is also referred to as $E(\\xi(x)\\xi(y)) = \\delta(x-y)$, and we define the integral of a deterministic function against white noise through\n\n\\begin{equation*}\n\\int \\psi(z)\\xi(dz) := \\xi(\\psi).\n\\end{equation*}\n\n\\noindent Note that $\\xi$ as a map from $L_2(\\mathbb{R}^d)\\to L_2(\\Omega)$ is continuous thanks to \\eqref{covarianceNew}, so $\\xi$ is an $L_2(\\Omega)$-valued distribution. 
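\n\nFor readers who prefer a concrete picture: on a grid of mesh $h$, white noise can be modelled by i.i.d. centered Gaussians $\\xi_z$ with variance $h^{-d}$ per cell, so that $\\xi(\\psi)\\approx\\sum_z \\xi_z\\psi(z)h^d$ reproduces the covariance~\\eqref{covarianceNew} up to discretization error. The following minimal sketch (in Python, for $d=1$; the mesh, test functions, seed and sample size are illustrative assumptions) checks this numerically:\n\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\nh, L = 1e-3, 1.0                        # mesh and box size (illustrative)\nx = np.arange(0.0, L, h)\npsi = np.exp(-((x - 0.30) \/ 0.05)**2)   # illustrative test functions\nphi = np.exp(-((x - 0.35) \/ 0.05)**2)\n\nM, cov = 20000, 0.0                     # Monte Carlo samples\nfor _ in range(M):\n    xi = rng.normal(0.0, 1.0 \/ np.sqrt(h), x.size)  # Var = 1\/h per cell\n    cov += (np.sum(xi * psi) * h) * (np.sum(xi * phi) * h)\ncov \/= M\nprint(cov, np.sum(psi * phi) * h)       # both approximate <psi, phi>\n\\end{verbatim}\n\n\\noindent In the same discretization, the identity $\\norm{\\xi(\\psi_x^\\lambda)}_{L_2(\\Omega)}=\\norm{\\psi_x^\\lambda}_{L_2(\\mathbb{R}^d)}$ is immediate; this is exactly the scaling used below.\n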
Our goal is to construct the iterated integrals\n\n\\begin{equation}\\label{whiteNoiseInt}\n\\int_{z_1\\le\\dots\\le z_n\\in\\mathbb{R}^d} \\psi(z_1,\\dots,z_n)\\xi(dz_1)\\dots\\xi(dz_n),\n\\end{equation}\n\n\\noindent from the Wiener chaos (\\cite{nualart}, \\cite{janson}), where we use the notation $z_1\\le z_2$ iff for all $i=1,\\dots,d$ it holds that $z_{1,i}\\le z_{2,i}$. Ordering the variables $z_i$ like this will guarantee that we have the right adaptedness properties in each step of the integration. To be more specific, let us fix our filtrations: The natural choice for a $\\sigma$-algebra on our space is given by $\\mathcal F = \\sigma(\\xi(\\psi)|\\psi\\in L_2(\\mathbb{R}^d))$ and for each direction $i=1,\\dots,d$, we get a natural filtration which can be sloppily written as ``$\\mathcal F^{(i)}_{t} = \\sigma(\\xi(z)| \\pi_i(z)\\le t)$''. For a more rigorous definition, we set\n\n\\begin{equation*}\n\\mathcal F^{(i)}_{t} = \\sigma\\left(\\xi(\\psi)\\big| supp(\\psi)\\subset\\mathbb{R}^{i-1}\\times(-\\infty,t]\\times\\mathbb{R}^{d-i}\\right).\n\\end{equation*}\n\n\\noindent With these in mind, our approach to define the integral \\eqref{whiteNoiseInt} will be to set $X_0 = \\psi$ and recursively define\n\n\\begin{equation*}\nX_{k+1}(z_1,\\dots,z_{n-k-1}) = \\int_{z_{n-k}\\le z_{n-k-1}} X_k(z_1,\\dots,z_{n-k}) \\xi(d z_{n-k}).\n\\end{equation*}\n\n\\noindent It follows that $X_{k}(z_1,\\dots,z_{n-k}) \\in \\mathcal F^{(i)}_{\\pi_i(z_{n-k})}$ for all $i=1,\\dots, d$, so the problem reduces to reconstructing the integral\n\n\\begin{equation}\\label{ItoInt}\n\\int_{z\\in\\mathbb{R}^d} X(z)\\xi(dz)\n\\end{equation}\n\n\\noindent for a stochastic process $X$, which is adapted in all directions $i=1,\\dots,d$, i.e. $X(z)\\in \\mathcal F^{(i)}_{\\pi_i(z)}$. Let us quickly think about the amount of regularity $X$ will need to have: $\\xi$ as a distribution fulfills\n\n\\begin{equation*}\n\\norm{\\xi(\\psi_x^\\lambda)}_{L_2(\\Omega)} = \\norm{\\psi_x^\\lambda}_{L_2(\\mathbb{R}^d)} \\lesssim \\lambda^{-\\frac d2},\n\\end{equation*}\n\n\\noindent and is thus of regularity $-d\/2$. Since white noise has martingale properties in all of its directions, we expect it to have the stochastic dimension $e=d$. Thus, we get an increase in regularity of $d\/2$, and it should suffice for $X$ to have any regularity $\\alpha>0$. This also matches our intuition that \\eqref{ItoInt} should make sense for any continuous process $X$ as an It\u00f4 integral over some compact interval.\n\nTo reconstruct the integral, we use the same germ as in Section \\ref{sectionGMM}, given by\n\n\\begin{equation*}\nF_z(\\psi) := X(z)\\xi(\\psi).\n\\end{equation*}\n\n\\noindent Thanks to the adaptedness of $X$, it follows that $(F_z)_{z\\in\\mathbb{R}^d}$ has stochastic dimension $d$ with respect to the filtrations $\\mathcal F^{(i)}$. The following result shows that $(F_z)_{z\\in\\mathbb R^d}$ can indeed be reconstructed:\n\n\\begin{theorem}\\label{theoWhiteNoise}\nLet $X$ be an adapted process in the above sense ($X(z)\\in\\mathcal F_{\\pi_i(z)}^{(i)}$ for all $i=1,\\dots,d$), with $X\\in C^\\alpha(L_2(\\Omega))$ for some $\\alpha>0$. Then $(F_z)_{z\\in\\mathbb{R}^d}$ is a stochastically $\\alpha$-coherent germ with stochastic dimension $e=d$. It follows that there is a unique distribution $f$ fulfilling \\eqref{unique1} and \\eqref{unique2}.\n\\end{theorem}\n\n\\begin{proof}\nIt is clear that $(F_z)_{z\\in\\mathbb R^d}$ is of stochastic dimension $d$, so we only need to check \\eqref{coherence}. 
Let $\\psi\\in C_c^r$ be a test function with compact support in $[0,\\infty)^d$, and let $z,y\\in\\mathbb{R}^d$ with $\\pi_i(z)\\le\\pi_i(y)$ for some $i\\in\\{1,\\dots,d\\}$. Since $supp(\\psi_y^\\lambda)\\subset\\mathbb{R}^{i-1}\\times(\\pi_i(y),\\infty)\\times\\mathbb{R}^{d-i}$, it follows that $X(z)-X(y)\\in\\mathcal F^{(i)}_{\\pi_i(y)}$ is independent of $\\xi(\\psi_y^\\lambda)$. Thus,\n\n\\begin{equation*}\n\\norm{(F_z-F_y)(\\psi_y^\\lambda)}_{L_2} = \\norm{X(z)-X(y)}_{L_2}\\norm{\\xi(\\psi_y^\\lambda)}_{L_2} \\lesssim \\abs{z-y}^\\alpha\\lambda^{-\\frac d2}.\n\\end{equation*}\n\n\\noindent Further observe that the martingale properties of $\\xi$ imply\n\n\\begin{align*}\nE^{\\mathcal F_{\\pi_i(z)}^{(i)}}(F_z-F_y)(\\psi_y^\\lambda) = E^{\\mathcal F_{\\pi_i(z)}^{(i)}}\\left((X(z)-X(y))\\underbrace{E^{\\mathcal F_{\\pi_i(y)}^{(i)}}\\xi(\\psi_y^\\lambda)}_{=0}\\right)=0.\n\\end{align*}\n\n\\noindent Thus, the germ fulfills \\eqref{coherence} and can be reconstructed.\n\\end{proof}\n\n\\noindent The distribution $f$ can be seen as the product between $X$ and $\\xi$, so we write it as $f = X\\cdot\\xi$. As one would suspect, it gets locally approximated by $(X\\cdot \\xi)(\\psi)\\approx X(z)\\int_{y\\in\\mathbb{R}^d}\\psi(y)\\xi(dy)$. We want to show that $(X\\cdot\\xi)(\\psi)$ coincides with the classical definition of the integral $\\int_{z\\in\\mathbb{R}^d} X(z)\\psi(z)\\xi(dz)$, so let us quickly recall this definition, following \\cite{KPZ}. For a more detailed construction, one can look up \\cite{nualart} or \\cite{janson}.\n\nWe call processes of the form\n\n\\begin{equation*}\nX(t,x) = \\sum_{i=1}^m X_i 1_{(a_i,b_i]}(t)\\psi_i(x)\n\\end{equation*}\n\n\\noindent \\emph{simple processes}, where for $i=1,\\dots, m$ it holds that $a_i< b_i$, $\\psi_i\\in C_c^\\infty$ and $X_i$ is a bounded, $\\mathcal F^{(1)}_{a_i}$-measurable random variable. For simple processes, it is straightforward to define the integral against white noise as\n\n\\begin{equation*}\n\\int X(z) \\xi(dz) := \\sum_{i=1}^m X_i \\int_{(t,x)\\in\\mathbb{R}^d}1_{(a_i,b_i]}(t)\\psi_i(x)\\xi(dt,dx),\n\\end{equation*}\n\n\\noindent where the integral $\\int_{(t,x)\\in\\mathbb{R}^d}1_{(a_i,b_i]}(t)\\psi_i(x)\\xi(dt,dx)$ is well defined, since $1_{(a_i,b_i]}(t)\\psi_i(x)$ is non-random. Let us denote the set of simple processes by $\\mathcal S$ and the $\\sigma$-field generated by $\\mathcal S$ by $\\mathcal P$, and let $L_2(\\mathbb{R}^d\\times\\Omega,\\mathcal P)$ be the set of square-integrable, predictable processes. It is well known (if not, see \\cite{KPZ}, Lemma 2.1) that the set of simple processes is dense in the set of predictable, square-integrable processes and that the map\n\n\\begin{align*}\n\\int \\cdot ~\\xi(dz) : \\mathcal S&\\to L_2(\\Omega)\\\\\nX& \\mapsto \\int X(z)\\xi(dz)\n\\end{align*} \n\n\\noindent is continuous, so it can be extended by a simple approximation argument to $L_2(\\mathbb{R}^d\\times\\Omega,\\mathcal P)$.\n\nWe want to show that our reconstructed product $X\\cdot \\xi$ is equivalent to the integral described above, i.e. 
for all $\\psi\\in C_c^\\infty$:\n\n\\begin{equation*}\n(X\\cdot \\xi)(\\psi) = \\int_{z\\in\\mathbb{R}^d} X(z)\\psi(z)\\xi(dz).\n\\end{equation*}\n\n\\noindent To show this, we make the following observations: For an $\\mathcal F^{(1)}_{t}$-measurable random variable $X\\in L_2(\\Omega)$ and a test function $\\psi$ with support in $[t,\\infty)\\times\\mathbb{R}^{d-1}$, it holds that\n\n\\begin{equation}\\label{stepwise}\n\\int_{z\\in\\mathbb{R}^d}X\\psi(z)\\xi(dz) = X\\int_{z\\in\\mathbb{R}^d}\\psi(z)\\xi(dz) = F_y(\\psi),\n\\end{equation}\n\n\\noindent if $X(y) = X$. This simply follows from approximating $\\psi$ with piecewise constant functions, and applying the definition of our integral. We also recall the It\u00f4-isometry: For all $X\\in L_2(\\mathbb{R}^d\\times\\Omega,\\mathcal P)$,\n\n\\begin{equation}\\label{WienerIto}\n\\norm{\\int_{z\\in\\mathbb{R}^d}X(z)\\xi(dz)}_{L_2(\\Omega)}^2 = \\int_{\\mathbb{R}^d}\\norm{X(z)}_{L_2(\\Omega)}^2 dz,\n\\end{equation}\n\n\\noindent which can be found as equation (126) in \\cite{KPZ}. With this in mind, we can show:\n\n\\begin{theorem}\nLet $X$ be as in Theorem \\ref{theoWhiteNoise}. Then, for any $\\psi\\in C_c^\\infty$, it holds that almost surely\n\n\\begin{equation*}\n(X\\cdot\\xi)(\\psi) = \\int_{z\\in\\mathbb{R}^d}X(z)\\psi(z)\\xi(dz).\n\\end{equation*}\n\\end{theorem}\n\n\\begin{proof}\nSince $X$ is adapted and continuous, it follows that it is predictable, and due to $\\psi$ being compactly supported, $X\\psi$ is square-integrable. Thus, $\\int_{z\\in\\mathbb{R}^d}X(z)\\psi(z)\\xi(dz)$ is well defined. We show that $\\int_{z\\in\\mathbb{R}^d}X(z)\\psi(z)\\xi(dz)$ fulfills \\eqref{unique1} and \\eqref{unique2}; the equality then follows from the uniqueness of the reconstruction. To do this, let $\\psi$ have strictly positive support. Then \\eqref{stepwise} implies that $F_z(\\psi_z^\\lambda) = \\int_{y\\in\\mathbb{R}^d}X(z)\\psi_z^\\lambda(y)\\xi(dy)$, and we can use \\eqref{WienerIto} to calculate\n\n\\begin{align*}\n\\norm{\\int_{y\\in\\mathbb{R}^d}X(y)\\psi_z^\\lambda(y)\\xi(dy)-F_z(\\psi_z^\\lambda)}_{L_2} &= \\norm{\\int_{y\\in\\mathbb{R}^d}(X(y)-X(z))\\psi_z^\\lambda(y)\\xi(dy)}_{L_2} \\\\\n&=\\left(\\int_{\\mathbb{R}^d}\\norm{X(y)-X(z)}_{L_2(\\Omega)}^2\\abs{\\psi_z^\\lambda(y)}^2 dy\\right)^{\\frac 12} \\\\\n&\\le \\sup_{y\\in supp(\\psi_z^\\lambda)}\\norm{X(y)-X(z)}_{L_2(\\Omega)}\\norm{\\psi_z^\\lambda}_{L_2(\\mathbb{R}^d)} \\\\\n&\\lesssim \\lambda^{\\alpha}\\lambda^{-\\frac d2}.\n\\end{align*}\n\n\\noindent It further holds that\n\n\\begin{equation*}\nE^{\\mathcal F^{(i)}_{\\pi_i(z)}}\\left(\\int_{y\\in\\mathbb{R}^d}X(y)\\psi_z^\\lambda(y)\\xi(dy)-F_z(\\psi_z^\\lambda)\\right) = E^{\\mathcal F^{(i)}_{\\pi_i(z)}}\\int_{y\\in\\mathbb{R}^d}(X(y)-X(z))\\psi_z^\\lambda(y)\\xi(dy) = 0,\n\\end{equation*}\n\n\\noindent since $\\int_{-\\infty}^\\tau\\int_{x\\in\\mathbb{R}^{d-1}}X(t,x)\\xi(dt,dx)$ is a martingale in $\\tau$ for all predictable processes $X$, and we can use any direction $i=1,\\dots,d$ to construct $\\int_{z\\in\\mathbb{R}^d} X(z)\\xi(dz)$. Thus, $\\psi\\mapsto\\int_{z\\in\\mathbb{R}^d}X(z)\\psi(z)\\xi(dz)$ fulfills \\eqref{unique1} and \\eqref{unique2}, which shows the claim.\n\\end{proof}\n\n\\section{Discussion and outlook}\\label{sectionDis}\n\nSo far, we have demonstrated that a stochastic reconstruction theorem is possible and is indeed a generalization of the stochastic sewing lemma. The theorem also offers us a new perspective on well-known constructions like the integrals against martingale measures. 
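\n\nAs a quick numerical sanity check of the theorem just proven, the It\u00f4-isometry \\eqref{WienerIto} for the adapted product can be tested in $d=1$, where $\\xi$ is the distributional derivative of a Brownian motion $W$ and $X(t)=W_t$ is adapted. A minimal sketch (in Python; the grid, horizon, test function and sample size are illustrative assumptions):\n\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(2)\nN, T, M = 2000, 1.0, 5000      # grid, horizon, samples (illustrative)\ndt = T \/ N\nt = np.arange(N) * dt\npsi = np.sin(np.pi * t)**2     # illustrative smooth test function\n\nlhs = 0.0\nfor _ in range(M):\n    dW = rng.normal(0.0, np.sqrt(dt), N)        # white-noise increments\n    W = np.concatenate([[0.0], np.cumsum(dW)])  # X(t) = W_t, adapted\n    lhs += np.sum(W[:-1] * psi * dW)**2         # left-point (Ito) sum\nlhs \/= M\nrhs = np.sum(t * psi**2) * dt  # E int W_t^2 psi^2 dt = int t psi(t)^2 dt\nprint(lhs, rhs)                # agree up to Monte Carlo error\n\\end{verbatim}\n\n\\noindent The left-point (adapted) evaluation is what kills the cross terms in the expansion of the square; an anticipating evaluation point would spoil the identity.\n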
\n\nHowever, all those applications are merely reconstructing already well-known concepts, and we would like to outline a genuinely new application that we have in mind: In the new paper \\cite{roughSDE}, Peter Friz, Antoine Hocquet and Khoa L\u00ea use the stochastic sewing lemma to build a ``hybrid'' theory, allowing them to analyze several applications of rough stochastic PDEs, which combine rough and stochastic analysis. We conjecture that the stochastic reconstruction theorem might allow a similar theory for SPDEs.\n\nTo get more specific, consider a gPAM equation perturbed by a white noise in time:\n\n\\begin{equation}\\label{outlook}\ndu -\\Delta udt = f(u) dB + g(u) dW,\n\\end{equation}\n\n\\noindent where $dW$ is a space white noise, and $B_t$ is a Brownian motion in time. Since $W$ depends on space and $B$ depends on time, it is not possible to combine them into a single higher-dimensional Brownian motion, so \\eqref{outlook} lies outside the scope of the classical approach.\n\nThe most natural way to approach such an equation would most certainly be a regularity structure approach, similar to the one in \\cite{gPam}, as it offers a natural way to treat both noises at the same time. However, this approach requires a rather high regularity of the coefficients $f,g$ since it is based on a renormalization approach. For coefficients of lower regularity, one might still be able to exploit the stochastic properties of $B$ and $W$ to formulate a weak solution theory, but this would lie outside the scope of renormalization methods.\n\nIn this case, the stochastic reconstruction theorem might allow us to get the best of both worlds: It could allow one to use regularity structure techniques in situations which were previously reachable only by weak solution theories. To illustrate our point, consider the simplified system of equations\n\n\\begin{align}\ndv -\\Delta vdt &= f_0 dB + g_0 dW \\label{simplifiedSys}\\\\\ndu -\\Delta u dt &= f(v) dB +g(v) dW\\nonumber\n\\end{align}\n\n\\noindent in $1+2$ dimensions. This system can trivially be solved through convolution with the heat kernel $K$:\n\n\\begin{align*}\nv(t,x) &= f_0 B_t + g_0(K*dW)(t,x)\\\\\nu(t,x) &= (K*(f(v)dB))(t,x) + (K*(g(v)dW))(t,x).\n\\end{align*}\n\nAs long as $f,g\\in C^{\\epsilon}$ for any $\\epsilon>0$, this is perfectly well defined in a weak sense. However, $\\epsilon\\le 1$ does not allow one to use renormalization methods. The regularity of $dB$ and $dW$ is $-1^-$ in parabolic scaling, and the regularity of $f(v), g(v)$ is most definitely smaller than $1$, with the exact regularities depending on the H\u00f6lder continuity of $f$ and $g$. Thus, both products $f(v)dB$ and $g(v) dW$ are ill-defined, and require us to subtract constants of the form $f'(v^{\\epsilon})C_\\epsilon, g'(v^{\\epsilon})C_\\epsilon$. Thus, we need at least $f,g\\in C^{1+\\epsilon}$ to achieve a solution in the renormalized sense.\n\nWe believe that the stochastic reconstruction theorem may combine the advantages of both approaches. Indeed, if one looks at the simplified system \\eqref{simplifiedSys}, one can model $dB$ and $dW$ as abstract symbols {\\color{blue}$\\Xi_1, \\Xi_2$} and get an abstract solution for $u$ given by $\\bl{u} = f(\\bl{v})\\bl{\\Xi_1} + g(\\bl{v})\\bl{\\Xi_2}$. In the last step, the symbols $f(\\bl{v})\\bl{\\Xi_1}$ and $g(\\bl{v})\\bl{\\Xi_2}$ can then easily be reconstructed with the technique given in Section \\ref{sectionWN}, which coincides with the It\u00f4 and Walsh integrals needed for the weak solution, respectively. 
The regularity needed for this is only $-1+\\epsilon$, since $dB$ and $dW$ both have rescaled stochastic dimensions of $E = 2$, so $f,g\\in C^\\epsilon$ suffices for this approach.\n\n\n\\section*{Acknowledgments}\n\nI want to thank Peter Friz for suggesting this topic to me, and for many helpful discussions throughout the process of writing this paper. I further want to thank Khoa L\u00ea and Philipp Forstner for their helpful remarks, and Carlo Bellingeri for our mathematical discussions.\n\nThis project was funded by the DFG international research training group 2544.\n\n\\printbibliography\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nMany extra-dimensional models have four-dimensional (4D) brane-like defects \non the compact space, such as orbifold fixed points \nor solitonic objects~\\cite{ArkaniHamed:1998rs}-\\cite{Randall:1999vf}. \nWe can freely introduce 4D terms localized \nat the branes\\footnote{\nHere we do not consider branes spread over other dimensions. \nThe word~``brane'' is understood as the ``3-brane'' in this paper. \n}~\\cite{Goldberger:2001tn,Davoudiasl:2002ua,delAguila:2006atw}. \nSuch brane-localized terms are induced by quantum effect \neven if they are absent at tree level~\\cite{Mirabelli:1997aj,Georgi:2000ks}. \nThey change the Kaluza-Klein (KK) spectrum \nand deform the profiles of the mode \nfunctions~\\cite{Cacciapaglia:2005da,Dudas:2005vn,Maru:2010ap}. \nIn particular, the brane-localized mass terms are often introduced \nin order to remove unwanted modes from the 4D effective \ntheory~\\cite{Agashe:2006at,Hosotani:2007qw,Hosotani:2008tx}. \nIn five-dimensional (5D) theories, the effects of such brane masses \ncan be translated into the change of the boundary conditions \nfor the bulk fields. \nThis is because the branes in 5D can be regarded as the boundaries of the extra dimension. \nIn this case, \nlarge brane masses can make zero-modes of the bulk fields heavy enough \nup to half of the compactification scale. \n\nIn contrast, the branes are no longer the boundaries of the extra compact space \nin higher-dimensional theories. \nSince effects of the brane terms spread over higher-dimensional space \nand are diluted, they are expected to be smaller than those in the 5D case. \nTherefore, it is important to check whether the brane mass can make \nunwanted modes heavy enough or not. \nIn this paper, we evaluate the lightest mass eigenvalue of \na six-dimensional (6D) theory in the presence of the brane-localized mass term. \nThe authors of Ref.~\\cite{Dudas:2005vn} discussed a closely related issue \nin the case of the $T^2\/Z_2$ compactification \nwhose torus moduli parameter is $\\tau=i$, and obtained \nthe result that the inverse of the lightest \nmass eigenvalue has a logarithmic dependence on the cutoff scale. \nHere we generalize their setup and consider a generic torus whose moduli parameter \nis arbitrary. \nThen we can explicitly see the relation to the well-known results in the 5D theories \nby squashing or stretching the torus. \nBesides, we are interested in a different parameter region \nfrom that discussed in Ref.~\\cite{Dudas:2005vn}. \nWe mainly focus on the limit of a large brane mass, \nin which the dependence of the mass eigenvalues on the brane mass is negligible, \nand evaluate the ratio of the lightest mass to the compactification scale \nby numerical calculations.\n\nThe paper is organized as follows. 
\nAfter explaining the setup in the next section, \nwe will see the dependences of the lightest mass eigenvalue \non the cutoff scale of the theory and on the brane mass in Sec.~\\ref{MassMatrix}. \nIn Sec.~\\ref{ap_expr}, we find an approximate expression of the lightest mass \nas a function of the torus moduli parameter~$\\tau$, \nand estimate its ratio to the compactification scale. \nSec.~\\ref{summary} is devoted to the summary. \nWe provide a brief review of the case of a 5D theory in Appendix~\\ref{5Dcase}, \nand discuss theories with fermion or vector field in Appendix~\\ref{other_cases}. \n\n\n\n\\section{Setup} \\label{setup}\nWe consider a 6D theory of a complex scalar field~$\\phi$ as a simple example.~\\footnote{\nCases of fermion and vector fields are briefly discussed in Appendix~\\ref{other_cases}. \n}\nThe Lagrangian is given by\n\\be\n \\cL = -\\der^M\\phi^*\\der_M\\phi-\\cm^2\\abs{\\phi}^2\\dlt(x^4)\\dlt(x^5)+\\cdots, \n \\label{cL}\n\\ee\nwhere $M=0,1,2,\\cdots,5$, and the ellipsis denotes interaction terms, \nwhich are irrelevant to the following discussion. \nThe brane mass parameter~$\\cm$ is a real dimensionless constant. \nThe extra dimensions are compactified on a torus~$T^2$.\\footnote{\nThe spectrum in the case of $T^2\/Z_N$ compactification ($N=2,3,4,6$) \ncan easily be obtained by thinning out the spectrum on $T^2$. \n} \nThe background metric is assumed to be flat, for simplicity. \nFor the coordinates of the extra dimensions, it is convenient to use a complex \n(dimensionless) coordinate~$z\\equiv\\frac{1}{2\\pi R}(x^4+ix^5)$, \nwhere $R>0$ is one of the radii of $T^2$. \nThe torus is defined by identifying points in the extra dimensions as\n\\be\n z \\sim z+n_1+n_2\\tau, \\;\\;\\; (n_1, n_2 \\in \\mathbb{Z})\n\\ee\nwhere $\\tau$ is a complex constant that satisfies $\\Im\\tau>0$. \n\nThe Lagrangian~(\\ref{cL}) is then rewritten as\n\\be\n \\cL = -\\der^\\mu\\phi^*\\der_\\mu\\phi-\\frac{1}{2(\\pi R)^2}\\brc{\n \\abs{\\der_z\\phi}^2+\\abs{\\der_{\\bar{z}}\\phi}^2+\\cm^2\\abs{\\phi}^2\\dlt^{(2)}(z)}+\\cdots, \n \\label{cL2}\n\\ee\nwhere $\\mu=0,1,2,3$, and we have used that\n\\be \n \\dlt(x^4)\\dlt(x^5) = \\frac{1}{2(\\pi R)^2}\\dlt^{(2)}(z). \n\\ee\n\nWe can expand $\\phi$ as \n\\be\n \\phi(x^\\mu,z) = \\sum_{n,l=-\\infty}^\\infty f_{n,l}(z)\\phi_{n,l}(x^\\mu), \n \\label{KKexpand}\n\\ee\nwhere \n\\be\n f_{n,l}(z) = \\frac{1}{2\\pi R\\sqrt{\\Im\\tau}}\\exp\\brc{\n \\frac{2\\pi i}{\\Im\\tau}\\Im\\brc{(n+l\\bar{\\tau})z}} \n\\ee\nare normalized as\n\\bea \n &&\\int_{T^2}dx^4dx^5\\;\\abs{f_{n,l}\\brkt{\\frac{x^4+ix^5}{2\\pi R}}}^2 \n = 2(\\pi R)^2\\int\\dr^2z\\;\\abs{f_{n,l}(z)}^2 \\nonumber\\\\\n \\eql (2\\pi R)^2\\Im\\tau\\int_0^1 dw_1\\int_0^1 dw_2\\;\n \\abs{f_{n,l}(w_1+\\tau w_2)}^2 = 1, \n\\eea\nand satisfy \n\\be\n \\der_z\\der_{\\bar{z}}f_{n,l} = -\\tl{\\lmd}_{n,l}^2f_{n,l}, \\;\\;\\;\\;\\;\n \\tl{\\lmd}_{n,l} = \\frac{\\pi\\abs{n+l\\tau}}{\\Im\\tau}. \\label{wocm}\n\\ee\nThis corresponds to the KK expansion \nin the absence of the brane-localized mass term. \nThe KK masses are given by $\\tl{m}_{n,l}\\equiv \\tl{\\lmd}_{n,l}\/(\\pi R)$. \n\nSince the 6D theory is non-renormalizable, it should be regarded as \nan effective theory valid only below the cutoff scale~$\\Lmd$. \nHere we relabel the KK modes by using the KK label~$a=0,1,2,\\cdots$ \ndefined in such a way that \n\\be\n 0 = \\tl{m}_0 < \\tl{m}_1 \\leq \\tl{m}_2 \\leq \\cdots \\leq \\tl{m}_{N_\\Lmd}\n < \\Lmd \\leq \\tl{m}_{N_\\Lmd+1} \\leq \\cdots. 
\n\\ee\nThe correspondence of the labels~$(n,l)$ and $a$ depends on the value of $\\tau$, \nas shown in Tables~\\ref{relabelKK:1} and \\ref{relabelKK:2}. \n\\begin{table}[t]\n\\begin{center}\n\\begin{tabular}{|c||c|c|c|c|c|c|c|c|}\n \\hline \\rule[-2mm]{0mm}{7mm} \n $a$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\\\ \\hline\n $(n,l)$ & (0,0) & $(1,0)$ & $(0,1)$ & $(0,-1)$ & $(-1,0)$ & $(1,1)$ & $(1,-1)$ & \n $(-1,1)$ \\\\ \\hline \n $\\tl{\\lmd}_a$ & 0 & 3.14 & 3.14 & 3.14 & 3.14 & 4.44 & 4.44 & 4.44 \n \\\\\\hline\\hline \n $a$ & 8 & 9 & 10 & 11 & 12 & 13 & 14 & $\\cdots$ \\\\ \\hline\n $(n,l)$ & $(-1,-1)$ & (2,0) & (0,2) & $(0,-2)$ & $(-2,0)$ & (2,1) & $(2,-1)$ & \n $\\cdots$ \\\\ \\hline\n $\\tl{\\lmd}_a$ & 4.44 & 6.28 & 6.28 & 6.28 & 6.28 & 7.02 & 7.02 & $\\cdots$ \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\caption{Relabeling the KK modes in the case of $\\tau=i$}\n\\label{relabelKK:1}\n\\begin{center}\n\\begin{tabular}{|c||c|c|c|c|c|c|c|c|}\n \\hline \\rule[-2mm]{0mm}{7mm} \n $a$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\\\ \\hline\n $(n,l)$ & (0,0) & $(2,-1)$ & $(-2,1)$ & $(1,0)$ & $(-1,0)$ & $(1,-1)$ & $(-1,1)$ & \n $(3,-1)$ \\\\ \\hline \n $\\tl{\\lmd}_a$ & 0 & 3.20 & 3.20 & 4.10 & 4.10 & 4.69 & 4.69 & 5.68 \n \\\\\\hline\\hline \n $a$ & 8 & 9 & 10 & 11 & 12 & 13 & 14 & $\\cdots$ \\\\ \\hline\n $(n,l)$ & $(-3,1)$ & $(4,-2)$ & $(-4,2)$ & $(3,-2)$ & $(-3,2)$ & (2,0) & (0,1) \n & $\\cdots$ \\\\ \\hline\n $\\tl{\\lmd}_a$ & 5.68 & 6.41 & 6.41 & 6.90 & 6.90 & 8.21 & 8.21 & $\\cdots$ \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\caption{Relabeling the KK modes in the case of $\\tau=2\\exp(\\pi i\/8)$}\n\\label{relabelKK:2}\n\\end{table}\nThe number of the KK excited modes below $\\Lmd$, \\ie, $N_\\Lmd$, grows as\n\\be\n N_\\Lmd \\propto \\Lmd^2, \n\\ee\nexcept for regions: $\\arg\\tau\\simeq 0,\\pi$, $\\abs{\\tau}\\ll 1$ and $\\abs{\\tau}\\gg 1$, \nin which the spacetime approaches 5D and thus $N_\\Lmd\\propto\\Lmd$. \n\nThen, (\\ref{KKexpand}) is rewritten as\n\\be\n \\phi(x^\\mu,z) = \\sum_{a=0}^\\infty f_a(z)\\phi_a(x^\\mu). \n \\label{KKexpand2}\n\\ee\nPlugging (\\ref{KKexpand2}) into (\\ref{cL2}) and performing the $d^2z$-integral, \nwe obtain the 4D Lagrangian: \n\\be\n \\cL^{\\rm (4D)} = -\\sum_a\\der^\\mu\\phi_a^*\\der_\\mu\\phi_a\n -\\sum_{a,b}M_{ab}^2\\phi_a^*\\phi_b+\\cdots, \n\\ee\nwhere\n\\bea\n M_{ab}^2 \\defa \\frac{\\tl{\\lmd}_a^2}{\\pi^2R^2}\\dlt_{ab}\n +\\cm^2f_a^*(0)f_b(0) \\nonumber\\\\\n \\eql \\tl{m}_a^2\\dlt_{ab}+\\frac{\\cm^2}{4\\pi^2R^2\\Im\\tau} \n \\label{def:M_ab}\n\\eea\nis the mass matrix of our theory. \n\n\n\\section{Cutoff dependence} \\label{MassMatrix}\nSince the theory is valid below $\\Lmd$, we only consider \nthe KK modes~$\\phi_a$ ($a=0,1,\\cdots,N_\\Lmd$). \nThen, the mass squared eigenvalues, which are denoted as \n$\\brc{m_0^2,m_1^2,\\cdots,m_{N_\\Lmd}^2}$, are obtained as \neigenvalues of the finite matrix~$M_{ab}^2$ ($a,b=0,1,\\cdots,N_\\Lmd$). \n\nSince $\\tl{m}_{n,l}^2=\\tl{m}_{-n,-l}^2$, all the nonzero modes have degenerate modes \nwhen the brane mass is absent. \nEspecially, $\\tl{m}_1^2=\\tl{m}_2^2$. \nThis means that $M_{ab}^2$ has the eigenvalue~$\\tl{m}_1^2$ \nwith the eigenvector~$(0,1,-1,0,0,\\cdots,0)$. \nIn fact, this is the second smallest eigenvalue of $\\tl{M}_{ab}^2$. \nNamely, the mass of the first KK excited mode~$m_1$ is independent of $\\cm$ \nand $\\Lmd$: \n\\be\n m_1 = \\tl{m}_1 = \\frac{1}{R\\Im\\tau}\\cdot\\min_{(n,l)\\neq (0,0)}\\abs{n+l\\tau}. 
\n \\label{expr:m1}\n\\ee\nThus we take $m_1$ as the compactification scale throughout the paper. \n\nPlots in Fig.~\\ref{Lmd-dep} show the $\\Lmd$-dependence of the lightest eigenvalue~$m_0$ \nin the cases of $\\tau=\\exp(\\frac{\\pi i}{120}),\\exp(\\frac{2\\pi i}{3})$, \nand $50\\exp(\\frac{2\\pi i}{3})$, with $\\cm=10.0$, in units of $m_1$. \n\\begin{figure}[th]\n\\begin{center}\n\\includegraphics[width=7.5cm]{Lmd-dep-1-2}\n\\includegraphics[width=7.5cm]{Lmd-dep-1-160}\n\\includegraphics[width=7.5cm]{Lmd-dep-50-160}\n\\end{center}\n\\caption{The lightest mass eigenvalue~$m_0$ as a function of $\\Lmd$ \nin the case of $\\tau=\\exp\\brkt{\\pi i\/120}$, $\\exp\\brkt{2\\pi i\/3}$ \nand $50\\exp\\brkt{2\\pi i\/3}$ and $\\cm=10.0$. \nThe solid lines represent the function~(\\ref{expr:lmd0}) \nwith the parameters~$(\\alp_1,\\alp_2,\\alp_3,\\alp_4)=(11.9,4.01,0.0728,0.466)$,\n$(1.82,3.69,0.270,0.142)$ and \n$(12.4,4.72,0.0701,0.470)$, respectively. }\n\\label{Lmd-dep}\n\\end{figure}\nThe right end of the horizontal axis in each plot corresponds to \nthe value of $\\Lmd$ such that $N_\\Lmd\\simeq 4000$. \nFor a given value of $\\cm$, the ratio~$m_0\/m_1$ can be approximated by \n\\be\n \\frac{m_0}{m_1} \\simeq \\brkt{\\alp_1+\\alp_2\\ln\\frac{\\Lmd}{m_1}\n+\\alp_3\\frac{\\Lmd}{m_1}}^{-1}+\\alp_4, \n \\label{expr:lmd0}\n\\ee\nwhere $\\alp_i$ ($i=1,2,3,4$) are real constants. \nThe solid lines in Fig.~\\ref{Lmd-dep} represent the fitting functions \nof the form~(\\ref{expr:lmd0}). \nThe constant~$\\alp_4$ is the asymptotic value of $m_0\/m_1$ in the limit of $\\Lmd\\to\\infty$:\n\\be\n \\lim_{\\Lmd\\to\\infty}\\frac{m_0(\\Lmd)}{m_1} = \\alp_4. \\label{alp4}\n\\ee\nThe horizontal axes in Fig.~\\ref{Lmd-dep} denote the asymptotic lines \nthat the curves approach. \nTypically, $m_0$ approaches the limit value \nmuch more slowly than in the 5D case \n(see Fig.~\\ref{Lmd-dep:5D} in Appendix~\\ref{5D:num}).\nThus the cutoff dependence of the spectrum cannot be neglected \neven when $\\Lmd\/m_1=\\cO(100)$. \nThis cutoff dependence becomes smaller \nwhen $\\arg\\tau\\simeq 0,\\pi$ or $\\abs{\\tau}\\ll 1$ or $\\abs{\\tau}\\gg 1$. \nThis is because the torus is squashed or stretched in such cases, \nand the spacetime approaches 5D. \nIn fact, as we can see from Fig.~\\ref{Lmd-dep}, \n\\be\n \\frac{m_0(15m_1)}{m_1} \\simeq \\lim_{\\Lmd\\to\\infty}\\frac{m_0(\\Lmd)}{m_1} \\times\n \\begin{cases} 1.40 & (\\tau=e^{2\\pi i\/3}) \\\\ 1.07 & \n(\\tau=e^{\\pi i\/120},\\; 50e^{2\\pi i\/3}) \n \\end{cases}. \n\\ee\nNote that the curves for $\\Lmd<40m_1$ in the top-left and bottom plots \nare almost the same \nas that of the 5D case (shown in Fig.~\\ref{Lmd-dep:5D}). \nThe cusp at $\\Lmd=40m_1$ indicates that the field begins to feel the width of \nthe squashed torus or the smaller cycle of the long thin torus. \n\nIn the following, we focus on the limit value~(\\ref{alp4}). \nFig.~\\ref{vsc} shows its dependence on the brane mass~$\\cm$. \nThe unit here is taken as $1\/(\\pi R)$. \n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=8cm]{lmd0vsc-1-160.eps}\n\\end{center}\n\\caption{The lightest mass eigenvalue~$m_0$ as a function of $\\cm$ \nin the case of $\\tau=e^{2\\pi i\/3}$. \nThe solid line represents (\\ref{lmd0:smallc}). 
}\n\\label{vsc}\n\\end{figure}\nFor small values of $\\cm$, the lightest mass eigenvalue~$m_0$ \nis approximated as \n\\be\n m_0 \\simeq \\sqrt{M_{00}^2} = \\frac{\\cm}{2\\pi R\\sqrt{\\Im\\tau}}, \n \\label{lmd0:smallc}\n\\ee\nwhich is plotted as the solid line in Fig.~\\ref{vsc}. \nThis is because the brane mass can be treated as a perturbation in this region, \nand the mixing among the KK modes induced by it is negligible. \nAs the brane mass grows, this mixing effect becomes significant, \nand $m_0$ saturates and is almost independent of $\\cm$ when $\\cm\\simgt 5$. \nThis situation is the same as in the 5D case (see Fig.~\\ref{vsc:5D} in Appendix~\\ref{5D:anl}). \nIn the following discussion, we take $\\cm=10.0$ as a representative value of $\\cm\\gg 1$. \n\n\n\\section{Approximate expression} \\label{ap_expr}\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=7cm]{lmd0vstht-02.eps}\n\\includegraphics[width=7cm]{lmd0vstht-08.eps}\n\\includegraphics[width=7cm]{lmd0vstht-1.eps}\n\\includegraphics[width=7cm]{lmd0vstht-5.eps}\n\\end{center}\n\\caption{The lightest mass eigenvalue~$m_0$ as a function of $\\arg\\tau$ \nfor various values of $\\abs{\\tau}$. \nThe solid lines represent the approximate expression~(\\ref{ap:m_0}). }\n\\label{lmdvstht}\n\\end{figure}\nFig.~\\ref{lmdvstht} shows the dependence of $m_0$ on $\\arg\\tau$ \nfor various values of $\\abs{\\tau}$. \nHere we will find an approximate expression of $m_0$ as a function of $\\tau$. \n\nFirst, we should note that the mass eigenvalues~$m_a$ are \nfunctions of $\\cm$ and $\\tau$, and should satisfy \n\\be\n m_a\\brkt{\\cm;-\\frac{1}{\\tau}} = \\abs{\\tau}m_a(\\cm;\\tau), \n \\label{cond1_for_lmd0}\n\\ee\nsince the theory is defined on the torus. \nBesides, from (\\ref{wocm}) and (\\ref{def:M_ab}), we also find that\n\\be\n m_a(\\cm;-\\bar{\\tau}) = m_a(\\cm;\\tau). \n \\label{cond2_for_lmd0}\n\\ee\n\nAs mentioned in the previous section, \nthere are two limits in which the spacetime approaches 5D, \\ie, \n$\\arg\\tau\\to 0,\\pi$ (squashed torus) and $\\abs{\\tau}\\to 0,\\infty$ (stretched torus). \nIn these cases, the low-lying KK masses in the absence of the brane mass \nare approximately expressed as follows. \n\\begin{description}\n\\item[$\\bdm{\\abs{\\tau}\\gg 1}$] \n\\be\n \\tl{m}_a = \\tl{m}_{n(a),0} \n \\simeq \\frac{\\abs{n(a)}}{R\\Im\\tau}, \\label{apex1:ma}\n\\ee\nwhere $a\\simlt 2\\abs{\\tau}$, \nand $n(a)\\equiv (-1)^a{\\rm floor}\\brkt{\\frac{a+1}{2}}$. \n\n\\item[$\\bdm{\\tht\\equiv\\arg\\tau\\ll 1}$] \n\\be\n \\tl{m}_{n,l} = \\frac{1}{R}\\brc{\\frac{(n+l\\abs{\\tau}\\cos\\tht)^2}\n{\\abs{\\tau}^2\\sin^2\\tht}+l^2}^{1\/2}\n \\simeq \\frac{1}{R}\\brc{\\frac{1}{\\tht^2}\\brkt{\\frac{n}{\\abs{\\tau}}+l}^2+l^2}^{1\/2}. \n\\ee\nEspecially when $\\abs{\\tau}$ is a rational number, \\ie, $\\abs{\\tau}=p\/q$ \n($p$ and $q$ are relatively prime integers and $q>0$), \nthe light masses are approximated as \n\\be\n \\tl{m}_a = \\tl{m}_{n(a)p,-n(a)q} \\simeq \\frac{\\abs{n(a)q}}{R}, \n \\label{apex2:ma}\n\\ee\nwhere $a\\simlt\\frac{2}{\\tht}\\min(1,\\abs{\\tau^{-1}\\pm 1})$. \n\\end{description}\nAs for the cases of $\\abs{\\tau}\\ll 1$ and of $\\pi-\\arg\\tau\\ll 1$, \napproximate expressions of $m_a$ are obtained from (\\ref{apex1:ma}) and (\\ref{apex2:ma}) \nby using (\\ref{cond1_for_lmd0}) and (\\ref{cond2_for_lmd0}), respectively. 
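\n\nBefore turning to the effective 5D description, let us record how the numerical values of $m_0$ shown in Figs.~\\ref{Lmd-dep}--\\ref{lmdvstht} are obtained in practice: one collects all KK masses~$\\tl{m}_{n,l}$ below the cutoff and diagonalizes the truncated matrix~(\\ref{def:M_ab}) directly. A minimal sketch of this procedure (in Python; the values of $\\tau$, $\\cm$, the lattice truncation and the number of retained modes are illustrative) reads:\n\n\\begin{verbatim}\nimport numpy as np\n\ntau = 2.0 * np.exp(2j * np.pi \/ 3)   # torus modulus (illustrative)\nc, R = 10.0, 1.0                     # brane mass; R sets the units\nIm = tau.imag\n\n# KK masses m_{n,l} = |n + l tau| \/ (R Im tau); the lattice cutoff\n# nmax must comfortably exceed the mode cutoff used below\nnmax = 60\nlevels = np.sort(np.array([abs(n + l * tau) \/ (R * Im)\n                           for n in range(-nmax, nmax + 1)\n                           for l in range(-nmax, nmax + 1)]))\nm = levels[:2001]                    # keep ~2000 modes below the cutoff\n\n# M^2_{ab} = m_a^2 delta_{ab} + c^2\/(4 pi^2 R^2 Im tau): rank-one shift\nM2 = np.diag(m**2) + c**2 \/ (4 * np.pi**2 * R**2 * Im)\neig = np.linalg.eigvalsh(M2)\nm0, m1 = np.sqrt(eig[0]), m[1]       # lightest mass and m_1\nprint(m0 \/ m1)\n\\end{verbatim}\n\n\\noindent Since the brane term is a rank-one perturbation of a diagonal matrix, the same spectrum also follows from the secular equation $1+\\frac{\\cm^2}{4\\pi^2R^2\\Im\\tau}\\sum_a(\\tl{m}_a^2-m^2)^{-1}=0$, but the direct diagonalization above is transparent enough for our purposes.\n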
\nFrom the limiting spectra above, we identify the effective radius of $S^1$ as\n\\be\n R_{\\rm eff} = \\begin{cases} R\\Im\\tau & (\\abs{\\tau} \\gg 1) \\\\\n R\\Im\\tau\/\\abs{\\tau} & (\\abs{\\tau} \\ll 1) \\\\\n R\/q & (\\arg\\tau \\ll 1 \\;\\;\\mbox{or}\\;\\; \\pi-\\arg\\tau\\ll 1) \\end{cases}. \n\\ee\nUsing this, the low-lying KK masses~$m_a$ can be expressed as\n(see Appendix~\\ref{5D:anl}) \n\\be\n m_a \\simeq \\frac{\\abs{n(a)}}{R_{\\rm eff}} \n\\ee\nin the absence of the brane mass, or as solutions of \n\\be\n m_a \\simeq \\frac{\\hat{c}_{\\rm eff}^2}{2}\\cot(\\pi R_{\\rm eff}m_a) \n \\label{cond3_for_lmd0}\n\\ee\nin its presence, where the ``effective 5D brane mass''~$\\hat{c}_{\\rm eff}$ is defined as \n\\be\n \\hat{c}_{\\rm eff}^2 \\equiv \\frac{R_{\\rm eff}\\cm^2}{2\\pi R^2\\Im\\tau}, \n\\ee\nwhich is identified from the condition that (\\ref{lmd0:smallc}) is reproduced \nin the small-$\\cm$ limit. \nWhen $\\cm$ is sufficiently large, the right-hand side of (\\ref{cond3_for_lmd0}) \nremains finite only if $\\cot(\\pi R_{\\rm eff}m_a)\\to 0$, so the solutions are\n\\be\n m_a \\simeq \\frac{\\abs{n(a)+\\frac{1}{2}}}{R_{\\rm eff}}. \n\\ee\nIn particular, the lightest mass eigenvalue is\n\\be\n m_0 \\simeq \\frac{1}{2R_{\\rm eff}} \\simeq \\frac{m_1}{2}. \n \\label{cond4_for_lmd0}\n\\ee\n\nTaking into account the properties~(\\ref{cond1_for_lmd0}), \n(\\ref{cond2_for_lmd0}) and (\\ref{cond4_for_lmd0}), \nwe find an approximate expression of $m_0$ that fits Fig.~\\ref{lmdvstht}:\n\\be\n m_0^{\\rm (ap)} = \\frac{\\sqrt{\\sin\\brc{\\arcsin(\\tl{\\lmd}_1^2\\Im\\tau)}}}\n {2\\pi R\\sqrt{\\Im\\tau}}. \\label{ap:m_0}\n\\ee\nThis is plotted as the solid lines in Fig.~\\ref{lmdvstht}. \n\nFinally, we evaluate the ratio of $m_0$ \nto the compactification scale~$m_1$. \nAs Fig.~\\ref{rtovstht} shows, this ratio is much smaller than the 5D value, $1\/2$, \nexcept in the extreme cases \nin which the spacetime is 5D-like. \n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=7cm]{rtovstht-02.eps}\n\\includegraphics[width=7cm]{rtovstht-08.eps}\n\\includegraphics[width=7cm]{rtovstht-1.eps}\n\\end{center}\n\\caption{The ratio of the lightest mass eigenvalue~$m_0$ \nto the compactification scale~$m_1$. \nThe solid lines represent the ratio of (\\ref{ap:m_0}) to (\\ref{expr:m1}). }\n\\label{rtovstht}\n\\end{figure}\nTypically, $m_0$ is lighter than $m_1$ by one order of magnitude. \nNamely, the brane-localized mass cannot make the would-be zero mode \nas heavy as the compactification scale. \nThis is an important fact for model building. \n
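\nAs a cross-check of this effective 5D description, the condition~(\\ref{cond3_for_lmd0}) can be solved numerically. The following is a minimal sketch (the helper name and the unit~$R_{\\rm eff}=1$ are ours, not the code used for the figures); it brackets the lightest root between $0$ and $1\/(2R_{\\rm eff})$, where the difference of the two sides of (\\ref{cond3_for_lmd0}) changes sign: \n\\begin{verbatim}\n# Sketch: lightest solution of m = (c_eff^2\/2) cot(pi R_eff m).\nimport numpy as np\nfrom scipy.optimize import brentq\n\ndef lightest_root(c_eff_sq, R_eff=1.0):\n    f = lambda m: m - 0.5 * c_eff_sq \/ np.tan(np.pi * R_eff * m)\n    eps = 1e-9   # f < 0 just above m = 0, f > 0 just below 1\/(2 R_eff)\n    return brentq(f, eps, 0.5 \/ R_eff - eps)\n\nprint(lightest_root(0.01))   # ~ sqrt(0.01\/(2 pi)): perturbative regime\nprint(lightest_root(100.0))  # ~ 0.5\/R_eff: saturated regime\n\\end{verbatim}\nFor a small effective brane mass the root reproduces the perturbative behavior~(\\ref{lmd0:smallc}), while for a large one it approaches $1\/(2R_{\\rm eff})$ as in (\\ref{cond4_for_lmd0}). \n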
\n\\section{Summary and comments} \\label{summary}\n\\subsection{Summary}\nWe have evaluated the mass eigenvalues of a 6D theory compactified on a torus \nin the presence of a brane-localized mass term. \nIn particular, we have focused on the lightest mode, which becomes massless \nin the zero-brane-mass limit. \n\nFrom the numerical calculations, \nwe confirmed that the lightest mass eigenvalue~$m_0$ \nhas a non-negligible dependence on the cutoff scale~$\\Lmd$ \neven when $\\Lmd$ is larger than the compactification scale \nby two orders of magnitude. \nThis indicates that $m_0$ is sensitive to the internal structure of \nthe brane when the brane has a finite size. \nThis is consistent with the results in Ref.~\\cite{Dudas:2005vn}. \n\nWe found an approximate expression for $m_0$ that is valid for a large brane mass. \nIt clarifies the dependence on the size and the shape of the torus, \nand reduces to the known result in the 5D case when the torus is squashed or \nstretched. \n\nIn contrast to the 5D case, $m_0$ is much smaller than \nthe compactification scale unless the torus is squashed or stretched. \nTheir ratio is typically $\\cO(0.1)$. \nThis is because the effects of the brane term are spread out \nover the codimension-two compact space and diluted. \nHence we should be careful in model building, \nespecially when brane mass terms are introduced \nin order to decouple unwanted modes. \n\nAlthough we have not discussed it in this paper, \nthe brane mass also deforms the profiles of the mode functions. \nThey can be obtained by calculating the eigenvectors of $M_{ab}^2$ \nin (\\ref{def:M_ab}). \nThe main effect of the brane mass on the mode functions is \nto push them away from the position of the brane; \nnamely, it drives their absolute values at the brane toward zero. \n\n\n\\subsection{On more general setups}\nWe have discussed a theory of a scalar field because it is the simplest case. \nHowever, the properties of the spectrum clarified in the text are also found \nfor fermion and vector fields, as shown in Appendix~\\ref{other_cases}. \nHence our results are valid in a wider class of 6D theories. \n\nBesides, we have assumed that the bulk mass is zero and the brane squared mass \nis positive. \nIn the presence of the bulk mass~$M_{\\rm bk}$, \nthe mass matrix~(\\ref{def:M_ab}) becomes \n\\be\n M_{ab}^2 = \\brkt{M_{\\rm bk}^2+\\tl{m}_a^2}\\dlt_{ab}+\\frac{c^2}{4\\pi^2R^2\\Im\\tau}, \n\\ee\nwhere $\\tl{m}_{n,l}=\\abs{n+l\\tau}\/(R\\Im\\tau)$. \nSince $M_{\\rm bk}^2$ enters as a multiple of the identity, \nthe bulk mass just raises the whole spectrum, \nshifting every eigenvalue as $m_a^2\\to m_a^2+M_{\\rm bk}^2$. \nHowever, if we allow a tachyonic brane mass, \\ie, $c^2<0$, \na light mode may appear below the compactification scale. \nIf $\\abs{c}^2$ is large enough, $m_0^2$ becomes negative \nand thus $\\vev{\\phi}=0$ is no longer the vacuum. \nIn such a case, $\\phi$ has a nontrivial background that depends on \nthe extra-dimensional coordinates~$z$ and $\\bar{z}$, and we have to expand $\\phi$ \naround it in order to obtain the mass matrix~$M_{ab}^2$. \nFinding such a nontrivial background is not an easy task. \nWe do not discuss this issue further here, but make one comment on it. \nNote that the smallest diagonal \nelement~$M_{00}^2=M_{\\rm bk}^2+c^2\/(4\\pi^2R^2\\Im\\tau)$ provides an upper bound \non $m_0^2$, since by the variational principle the Rayleigh quotient of \nthe unit vector along the corresponding mode bounds the smallest eigenvalue \nfrom above. \nThus, $2\\pi R\\sqrt{\\Im\\tau}M_{\\rm bk}>\\abs{c}$ must be satisfied \nin order to avoid the vacuum instability of $\\vev{\\phi}=0$. \nIn other words, no matter how large $M_{\\rm bk}$ is, \nthere is a value of $\\abs{c}$ that leads to a tachyonic mass eigenvalue. \nThis indicates that the effect of the brane mass on the spectrum does not saturate, \nin contrast to the non-tachyonic case. \nThe mode functions are attracted toward the brane by the tachyonic brane mass. \n\nWe considered the scalar field with the periodic boundary condition. \nTwisted boundary conditions are also allowed, but they just raise the mass spectrum. \nThis can be understood from the fact that imposing twisted boundary conditions is \nequivalent to introducing a non-vanishing background gauge field coupled to \nthe scalar field with the periodic boundary conditions. \nSuch a background gauge field plays the same role as the bulk scalar mass~$M_{\\rm bk}$ \nmentioned above. \n\nWe have also assumed that the spacetime is flat, \nthat no background magnetic fluxes exist,\\footnote{\nIntroducing background fluxes leads to a multiplicity \nof modes at each KK level. \nThe mass matrix~(\\ref{def:M_ab}) then becomes larger, \nand the calculation of its eigenvalues takes much more time, \nso a more efficient method would be needed in that case. \n}\nand that there is only one brane, for simplicity. \nIt is an interesting and useful extension to relax these assumptions. \nThis will be discussed in separate papers. \n
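\nAs an illustration of the bulk- and brane-mass effects discussed in this subsection, the earlier truncated-matrix sketch can be extended as follows (again with our own helper name and units; this is only an illustration, not the code used in the paper). Adding $M_{\\rm bk}^2$ to the diagonal shifts the whole spectrum, and a tachyonic $c^2<0$ with $\\abs{c}>2\\pi R\\sqrt{\\Im\\tau}M_{\\rm bk}$ drives the lowest eigenvalue of $M^2$ negative: \n\\begin{verbatim}\n# Variant of the earlier sketch with a bulk mass M_bk and a possibly\n# tachyonic brane mass c_sq = c^2 (c_sq < 0 allowed); returns the\n# lowest eigenvalue of M^2, which is negative in the unstable regime.\nimport numpy as np\n\ndef lowest_eigensq(c_sq, M_bk, tau, Lam, R=1.0):\n    im = tau.imag\n    lmax = int(np.ceil(Lam * R))\n    nmax = int(np.ceil(Lam * R * (im + abs(tau))))\n    mt = np.array(sorted(abs(n + l * tau) \/ (R * im)\n                         for n in range(-nmax, nmax + 1)\n                         for l in range(-lmax, lmax + 1)\n                         if abs(n + l * tau) \/ (R * im) < Lam))\n    M2 = np.diag(mt**2 + M_bk**2) + c_sq \/ (4 * np.pi**2 * R**2 * im)\n    return np.linalg.eigvalsh(M2)[0]\n\\end{verbatim}\n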
\n\n\n\\subsection*{Acknowledgements}\nThe author would like to thank Yukihiro Fujimoto for valuable information. \nThis work was supported in part by \nGrant-in-Aid for Scientific Research (C) No.~25400283 \nfrom the Japan Society for the Promotion of Science (Y.S.). \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}