diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzdylw" "b/data_all_eng_slimpj/shuffled/split2/finalzzdylw" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzdylw" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nSecondary cosmic ray (CR) particles of an extensive air shower (EAS) approaching towards the ground as a thin disk through the atmosphere from the direction of their parent primary CR particle at the speed of light. After the first interaction point, the disk begins to form, and continues to grow, and then starts attenuating after the depth of shower maximum. The transverse and longitudinal momenta imparted on the shower particles emerging from their parent particles via the hadronic interactions would cause the lateral and longitudinal spreads for these particles in an EAS. consequently the periphery of successive iso-density contours get shortened about the EAS axis, which is analogous to an inclined inverted truncated cone.\n\\section{Cone model: An elliptic lateral density function}\nThe evolution of a conical shower profile of an inclined EAS is shown in the Fig. \\ref{Cone_Prof}. The geometric correction is done through the projection of the horizontal elliptic surface to the shower plane. Corresponding equi-density contours are shown in Fig. \\ref{2D_IsoDen}(a), (b). The projected electron density in the shower plane ($\\rho_s$) is\n\\begin{equation}\n\\label{eq:Geom_Corr}\n\\rho_s(r_s) = {\\rho_g(r_g)}\/{ \\cos\\Theta}\n\\end{equation}\nAn exponential fall of the density of the shower electrons results from the EAS attenuation with a factor $ e ^ {-\\eta \\cdot AB}$, where $\\eta$ is the attenuation length [1]. Electron density in the ground plane $(\\rho_g)$ would be \n\\begin{equation}\n\\label{eq:Attn_Dens}\n\\rho_g(r_g)= \\cos\\Theta \\cdot \\rho_s(r_s) \\cdot e^{-\\eta \\cdot AB}\n\\end{equation}\n\\begin{figure}[htbp]\n\t\\begin{minipage}{18pc}\n\t\t\\includegraphics[trim=0.8cm 0.8cm 0.8cm 1.5cm, clip=true, totalheight=0.22\\textheight, angle=0]{Conical_Geometry.pdf}\n\t\t\\caption{\\label{Cone_Prof}Conical shower profile.}\n\t\\end{minipage}\\hspace{1pc}%\n\t\\begin{minipage}{18pc}\n\t\n\t\t\\includegraphics[trim=0.6cm -0.2cm 0.6cm -2.2cm, clip=true, width=0.16\\textheight]{Density_IsoLines_Ground.eps}\n\t\t\\includegraphics[trim=0.6cm -0.2cm 0.6cm -2.2cm, clip=true, width=0.16\\textheight]{Density_IsoLines_Shower.eps}\n\t\t\\caption{\\label{2D_IsoDen} 2-dimensional iso-density contours in the ground and shower plane.}\n\t\\end{minipage}\n\\end{figure}\n\nA characteristic function (CF) to describe the exponential behaviour of LDD with $r_s$ is proposed as follows,\n\\begin{equation}\n\\label{eq:CF}\n\\rho(r_s) \\simeq c \\cdot e^{-\\alpha (\\frac{r_s}{r_0})^\\kappa}\n\\end{equation}\nFinally the gap length parameter is given by,\n\\begin{equation}\n\\label{eq:Xc_1}\nx_C = 6813 y_R^{2-\\kappa} r_0^\\kappa \\eta (\\alpha \\kappa)^{-1}\n\t\\frac{\\tan \\Theta}{\\cos(\\Theta+\\sigma)} \n\t\\cdot\n\t\\frac{\\cos\\sigma}{H-r_s \\sin \\Theta}\n\\end{equation}\nSince $x_C>0$, the shower attenuation does shift the center of the ellipse towards the early part of the shower. 
Let us write the above equation as $x_C = 2 A_f y_R \\tan\\Theta$, where $A_f$ stands for,\n\\begin{equation}\n\\label{eq:Af}\nA_f = 6813 r_0^\\kappa \\eta (\\alpha \\kappa)^{-1}\n\\cdot\n\\frac{\\cos\\sigma}{2\\cos(\\Theta+\\sigma)(H-r_s \\sin \\Theta)}\n\\end{equation}\nThe modified length of semi-minor axis of an equi-density ellipse is,\n\\begin{equation}\n\\label{eq:yR}\n\ty_R = -2 A_f r_g \\cos\\beta_g \\tan\\Theta \\cdot \\frac{\\cos^2(\\Theta + \\sigma)}{\\cos^2\\sigma}\n\t+ r_g\\sqrt{1-\\cos^2\\beta_g \\sin^2(\\Theta+\\sigma)}\n\\end{equation}\nWe are familiar with most commonly used LDF as NKG function in CR air shower physics.\nThe polar angle dependent elliptic-LDF (ELDF) can be found from the NKG type Symmetric-LDF (SLDF) by making a substitution for the variable $r_s$ by $y_R$, and the equation for the ELDF finally takes the following structure. \n\\begin{equation\n\\label{eq:ELDF}\n\t\\rho(r_g,\\beta_g)=\\cos\\Theta\\cdot C(s_\\perp)N_e \\cdot (y_R\/r_0)^{s_\\perp-2} (1+y_R\/r_0)^{s_\\perp-4.5}\n\\end{equation}\nWhere, $C(s_\\perp)=\\frac{\\Gamma(4.5-s_\\perp)}{2\\pi r_0^2\\Gamma(s_\\perp)\\Gamma(4.5-2s_\\perp)}$ is the normalization factor, while $s_{\\perp}$, $r_0$ and $N_e$ respectively are called the age parameter, the moli\\`{e}re radius and shower size.\n\\section{Results and discussions}\nThe MC simulation code \\textit{CORSIKA} of version 7.69 with the hadronic interaction models QGSJet-01c and UrQMD is used. The reconstructed polar electron densities obtained using SLDF, ELDF including the projection and ELDF including both the attenuation and projection [2], at a core distance 50 m for an average 100 PeV proton shower with $\\Theta =50^o$. These are shown in Fig. \\ref{Polar_Dens_ELDF}, and the result reconfirms that the ELDF with GL is more appropriate for reconstruction of non-vertical EASs.\nIn the Fig. \\ref{LDD_PI}, the polar averaged LDD for P and Fe initiated showers are approximated by the CF (Eq. \\ref{eq:CF}). From the best possible fitting of the simulated data the parameter $\\alpha$ picks values 4.3 and 3.7 while $\\kappa$ takes 0.36 and 0.43 respectively for P and Fe.\nIn the Fig. \\ref{Iso_dens}, the center of the equi-density ellipse experiences a translation from \\textit{O} to \\textit{C} $(\\overline{OC} \\sim 9.75~m)$ solely due to attenuation of EAS electrons. On the other hand, the model predicted GL is about $6.35$~m, evaluated using the Eq. \\ref{eq:Xc_1}. \nThe GL parameter exhibits sensitivity to P and Fe initiated showers significantly for low-end values of $\\rho_e$ (Fig. \\ref{XcYr_PI}). GL is found to increase with energy of CRs (Fig. \\ref{XcYr_E}). The elongation of the iso-density curve with increasing $\\Theta$ is evident from the values of GL for different zenith angles (Fig. \\ref{XcYr_Z}). The model predicted values for GL which are shown by the dotted and short dashed lines (Fig. \\ref{XcYr_PI}-\\ref{XcYr_Z}) are in good agreement with the simulated data. We have studied the dependence of the GL on $\\Theta$ for a fixed electron density (Fig. \\ref{GL_Z_IsoD}) as well as at fixed $y_R$ value (Fig. \\ref{GL_Z_yR}), which shows a mass sensitivity of primary CR. A correlation between the mean GL with primary energy ($E$) corresponding to $\\Theta = 50^o$ and $\\rho_e = 1.5~m^{-2}$ for P and Fe induced EASs, is depicted in Fig. 
\\ref{GL_E}.\n\\begin{figure}[h]\n\t\\begin{minipage}[b]{2in}\n\t\t\\includegraphics[trim=0.6cm 0.6cm 1.cm 1.05cm, clip=true, totalheight=0.17\\textheight]{PDD_R50m_ELDF.eps}\n\t\t\\caption{\\label{Polar_Dens_ELDF}Ground plane polar density distribution.}\n\t\\end{minipage}\\hspace{1pc}%\n\t\\begin{minipage}[b]{2in}\n\t\t\\includegraphics[trim=0.6cm 0.6cm 0.9cm 0.8cm, clip=true, totalheight=0.17\\textheight]{CF_PI.eps}\n\t\t\\caption{\\label{LDD_PI}Electron LDD fitted by CF.}\n\t\\end{minipage}\\hspace{1pc}%\n\t\\begin{minipage}[b]{2in}\n\t\t\\includegraphics[trim=0.6cm 0.6cm 0.6cm 1.05cm, clip=true, totalheight=0.17\\textheight]{IsoDen_Curve.eps}\n\t\t\\caption{\\label{Iso_dens}Formation of GL from equi-density ellipse.}\n\t\\end{minipage}\n\\end{figure}\n\\begin{figure}[h]\n\t\\begin{minipage}[b]{2in}\n\t\t\\includegraphics[trim=0.6cm 0.6cm 0.6cm 1.05cm, clip=true, totalheight=0.17\\textheight]{XcYr_PI.eps}\n\t\t\\caption{\\label{XcYr_PI}$x_C$ vs $y_R$ for P and Fe initiated showers.}\n\t\\end{minipage}\\hspace{1pc}%\n\t\\begin{minipage}[b]{2in}\n\t\t\\includegraphics[trim=0.6cm 0.6cm 0.6cm 1.05cm, clip=true, totalheight=0.17\\textheight]{XcYr_E.eps}\n\t\t\\caption{\\label{XcYr_E}$x_C$ vs $y_R$ for two energies.}\n\t\\end{minipage}\\hspace{1pc}%\n\t\\begin{minipage}[b]{2in}\n\t\t\\includegraphics[trim=0.6cm 0.6cm 0.6cm 1.05cm, clip=true, totalheight=0.17\\textheight]{XcYr_Z.eps}\n\t\t\\caption{\\label{XcYr_Z}$x_C$ vs $y_R$ at two zenith angles.}\n\t\\end{minipage}\n\\end{figure} \n\\begin{figure}[h]\n\t\\begin{minipage}[b]{2in}\n\t\t\\includegraphics[trim=0.6cm 0.6cm 0.6cm 1.05cm, clip=true, totalheight=0.17\\textheight]{GL_Z_IsoD.eps}\n\t\t\\caption{\\label{GL_Z_IsoD}Mass sensitivity of GL from its variation with $\\Theta$ at fixed $\\rho_e$.}\n\t\\end{minipage}\\hspace{0.8pc}%\n\t\\begin{minipage}[b]{2in}\n\t\t\\includegraphics[trim=0.6cm 0.6cm 0.6cm 1.05cm, clip=true, totalheight=0.17\\textheight]{GL_Z_yR.eps}\n\t\t\\caption{\\label{GL_Z_yR}Mass sensitivity of GL from its variation with $\\Theta$ at fixed $y_R$.}\n\t\\end{minipage}\\hspace{0.8pc}%\n\t\\begin{minipage}[b]{2in}\n\t\\includegraphics[trim=0.6cm 0.6cm 0.6cm 1.05cm, clip=true, totalheight=0.17\\textheight]{GL_E.eps}\n\t\\caption{\\label{GL_E} Mass sensitivity of GL from its variation with $E$.}\n\\end{minipage}\n\n\\end{figure}\n\nApplication of ELDF has been incarnated in terms of the local age parameter (LAP) in Fig. \\ref{LAP_PI}.\nThe analytical expression for the LAP [3] between two adjacent radial distances $[r_i, r_j]$ is:\n\\begin{equation\n\\label{eq:LAP}\ns_{ij}=\\frac{ln(F_{ij}X_{ij}^2Y_{ij}^{4.5})}{X_{ij}Y_{ij}} \n\\end{equation}\nHere, $F_{ij}=\\rho(y_R(i))\/\\rho(y_R(j))$,\n$X_{ij} = y_R(i)\/y_R(j)$, \n$Y_{ij} = (\\frac{y_R(i)}{r_0}+1)\/(\\frac{y_R(j)}{r_0}+1)$\nand $r_0$ is the moli\\`{e}re radius obtained from the best fit value of LDD by CF. A correlation between the mean LAP with primary energy for P and Fe induced EASs are given in Fig. 
\\ref{MPAL_PI}.\n\\begin{figure}[htbp]\n\t\\begin{center}\n\t\t\\begin{minipage}[b]{2.5in}\n\t\t\t\\includegraphics[trim=0.6cm 0.6cm 0.6cm 1.05cm, clip=true, totalheight=0.17\\textheight]{LAP_PI.eps}\n\t\t\t\\caption{\\label{LAP_PI}Variation of LAP with $r_g$.}\n\t\t\\end{minipage}\\hspace{1pc}%\n\t\t\\begin{minipage}[b]{2.5in}\n\t\t\t\\includegraphics[trim=0.6cm 0.6cm 0.6cm 1.05cm, clip=true, totalheight=0.17\\textheight]{MLAP_PI.eps}\n\t\t\t\\caption{\\label{MPAL_PI} Mean LAP versus $E$.}\n\t\t\\end{minipage}\n\t\\end{center}\n\\end{figure}\n\\section{Conclusions \\& future outlook}\nIn this work a modeling of the atmospheric attenuation effect on the LDD of electrons is made considering the conical shower profile. \nThe magnitude of the GL that determines the attenuation power of shower particles for a non-vertical EAS, possesses a clear primary CR mass dependence. The ELDF has been used to the simulated electron densities to estimate the LAP, which manifests different radial variation. The variation of mean LAP with primary energy\/shower size clearly shows sensitivity to CR mass composition. \n\nThere is a scope to judge the high energy hadronic interaction model dependence of our results in future. An analysis of simulation data considering the LDD of muons and also the combined LDD of electrons and muons are in progress.\n\n\n\\section*{Acknowledgments}\n\\noindent \nRKD acknowledges the financial support under grant. no. 1513\/R-2020. from the University of North Bengal.\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n\n\n\n\n\n\nIn the so-called unitary limit of a quantum Bose or Fermi gas, the\nscattering length $a$ diverges. This occurs at a fixed point of the \nrenormalization group, thus these systems provide interesting examples of \ninteracting, scale-invariant theories with dynamical exponent $z=2$, \ni.e. non-relativistic. \nThey can be realized experimentally by tuning the scattering\nlength to $\\pm \\infty$ using a Feshbach resonance.\n(See for instance \\cite{Experiment1,Experiment2} and references\ntherein.) They are also thought to occur at the surface of\nneutron stars. These systems have also attracted much theoretical \ninterest\\cite{Leggett,Nozieres,Ohashi,Ho,HoMueller,Astrakharchik,Nussinov,Perali,LeeShafer,Wingate,Bulgac,Drummond,Burovski,Nishida,Nikolic}. \nThere have even been some proposals to use the AdS\/CFT correspondence \nto learn about these models\\cite{Son,Maldacena,Herzog,Adams}. \n\n\nBecause of the scale-invariance, the only length scales in the problem\nare based on the density $n^{1\/d}$ where $d$ is the spatial dimension, \nand the thermal wavelength $\\lambda_T = \\sqrt{2\\pi\/mT}$. Equivalently,\nthe only energy scales are the chemical potential $\\mu$ and the temperature $T$. \nThe problem is challenging since there is no small paramater to expand in\nsuch as $n a^3$. Any possible critical point must occur at a \nspecific value of $x=\\mu\/T$. This can be translated into \nuniversal values for $n_c \\lambda_T^3$, or for fermions \nuniversal values for $T_c\/T_F$ where $\\epsilon_F = k_B T_F$ is the Fermi energy. \nFor instance the critical point of an ideal Bose gas is the simplest example,\nwhere $n_c \\lambda_T^3 = \\zeta (3\/2) = 2.61$. \n\n\n\nThe present work is the sequel to \\cite{PyeTon2}, where we used the S-matrix \nbased formulation\nof the quantum statistical mechanics developed in\\cite{LeClairS,PyeTon}. 
\nThis approach is very well-suited to the problem because in the unitary limit\nthe S-matrix $S=-1$, and kernels in the integral equations simplify. \nIn fact, this approach can be used to develop an expansion in $1\/a$. \nThe main formulas for the 2 and 3 dimensional cases of both bosons and fermions\nwere presented, however only the 2-dimensional case was analyzed in detail\nin \\cite{PyeTon2}. \nHere we analyze the 3-dimensional case. \n\n\n\nThe models considered are the simplest \nmodels of non-relativistic bosons or fermions with quartic\ninteractions. The bosonic model is defined by the action\nfor a complex scalar field $\\phi$. \n\\begin{equation}\n\\label{bosonaction}\nS = \\int d^3 {\\bf x} dt \\( i \\phi^\\dagger \\partial_t \\phi - \n\\frac{ |\\vec{\\nabla} \\phi |^2}{2m} - \\frac{g}{4} (\\phi^\\dagger \\phi)^2 \\)\n\\end{equation}\nFor fermions, due to the fermionic statistics, \none needs at least a 2-component field \n$\\psi_{\\uparrow , \\downarrow} $:\n\\begin{equation}\n\\label{fermionaction}\nS = \\int d^3 {\\bf x} dt \\( \\sum_{\\alpha=\\uparrow, \\downarrow} \ni \\psi^\\dagger_\\alpha \\partial_t \\psi_\\alpha - \n\\frac{|\\vec{\\nabla} \\psi_\\alpha|^2}{2m} - \\frac{g}{2} \n\\psi^\\dagger_\\uparrow \\psi_\\uparrow \\psi^\\dagger_\\downarrow \\psi_\\downarrow \\) \n\\end{equation}\nIn both cases, positive $g$ corresponds to repulsive interactions. \nThe bosonic theory only has a $U(1)$ symmetry. The fermionic theory\non the other hand has the much larger SO(5) symmetry. \nThis is evident from the work\\cite{Kapit} which considered a relativistic\nversion, since the same arguments apply to a non-relativistic kinetic term. \nThis is also clear from the work\\cite{Nikolic} which considered \nan $N$-component version with Sp(2N) symmetry, and noting that\nSp(4) = SO(5). \n\n\n\nThe interplay between the scattering length, the bound state, \nand the renormalization group fixed point was discussed \nin detail from the point of view of the\nS-matrix in \\cite{PyeTon2}. In 3 spatial dimensions the fixed point occurs\nat negative coupling $g_*= - 4 \\pi^2 \/m\\Lambda$, where $\\Lambda$ is \nan ultra-violet cut-off. For $g$ less than $g_*$, there is a bound\nstate that can Bose-Einstein condense (BEC), and for the fermionic case, \nthis is referred to as the BEC side. As $g_*$ is approached from this side,\nthe scattering length goes to $+\\infty$. The bound state disappears at\n$g_*$. When $g$ approaches $g_*$ from above, the scattering length \ngoes to $-\\infty$, and this is referred to as the BCS side. In this paper we\nwork on the BCS side of the cross-over since the bound state does not have\nto be incorporated into the thermodynamics. On this side the interactions\nare effectively attractive. \n\n\nTheoretical studies have mainly focussed on the fermionic case, \nand for the most part at zero temperature, which is appropriate for a large Fermi energy.\n The bosonic case has been less studied, since \na homogeneous \nbosonic gas with attractive interactions is thought to be unstable against\nmechanical collapse, and the collapse occurs before any kind of BEC. \nThe situation is actually different for harmonically trapped gases, where BEC can occur\\cite{trapped}. \nHowever studies of the homogeneous bosonic case were based on a small, \nnegative scattering length\\cite{Stoof,MuellerBaym,Thouless,YuLiLee},\nand it is not clear that the conclusions reached there can be extrapolated \nto the unitary limit. 
Since the density of\ncollapse is proportional to $1\/a$\\cite{MuellerBaym}, extrapolation to infinite scattering length \nsuggests that the gas collapses at zero density, which seems unphysical, since the gas could in\nprinciple be stabilized at finite temperature by thermal pressure. \n One can also point out that in the van der Waals gas, the\ncollapse is stabilized by a finite size of the atoms, which renders\nthe compressibility finite. In the unitary limit, there is nothing to play\nsuch a role. \n In the sequel we will present evidence that the unitary \nBose gas undergoes BEC when $n \\lambda_T^3 \\approx 1.3$. \n This lower value\nis consistent with the attractive interactions. We also estimate \nthe critical exponent describing how the compressibility diverges at the critical point. \n\n\n\n\nIn the next section we define the scaling functions that determine the free energy\nand density, and derive expressions for the energy and entropy per particle, specific heat per\nparticle, and compressibility. In section III the formulation of the unitary limit in\n\\cite{PyeTon2} is summarized. The two-component fermion case is analyzed in section IV.\nEvidence is presented for a critical point with $T_c\/T_F \\approx 0.1$, which is consistent with\nlattice Monte Carlo simulations. Bosons are analyzed in section V, where we present evidence\nfor BEC in this strongly interacting gas. Motivated by the conjectured lower bound\\cite{Kovtun} \nfor\nthe ratio of the viscosity to entropy density $\\eta\/s > \\hbar\/4\\pi k_B$ for relativistic systems,\nwe study this ratio for both the fermionic and bosonic cases in section VI.\n Our results for fermions are\nconsistent with experiments, with $\\eta\/s > 4.72$ times the conjectured lower bound.\nFor bosons, this ratio is minimized at the critical point where \n$\\eta\/s > 1.26$ times the bound. \n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Scaling functions in the unitary limit.} \n\n\n\nThe scale invariance in the unitary limit implies some\nuniversal scaling forms\\cite{Ho}. In this section we \ndefine various scaling functions with a meaningful normalization\nrelative to free particles. \n\nFirst consider a single species of bosonic or fermionic particle with\nmass $m$ at chemical potential $\\mu$ and temperature $T$. \n The free energy density\nhas the form\n\\begin{equation}\n\\label{freeenergy}\n\\CF = - \\zeta (5\/2) \\( \\frac{mT}{2\\pi} \\)^{3\/2} T \\, c(\\mu\/T) \n\\end{equation}\nwhere the scaling function $c$ is only a function of $x\\equiv \\mu\/T$.\n($\\zeta$ is Riemann's zeta function.) \nThe combination $\\sqrt{mT\/2\\pi} = 1\/\\lambda_T$, where \n$\\lambda_T$ is the thermal wavelength. For a single free boson or fermion:\n\\begin{equation}\n\\label{cfreelim}\n\\lim_{\\mu\/T \\to 0} ~~ c_{\\rm boson} = 1, ~~~~~~~\n c_{\\rm fermion} = 1- \\inv{2 \\sqrt{2}}\n~~~~~~({\\rm free ~ particles}). \n\\end{equation}\nIt is also convenient to define the scaling function $q$, which is a measure\nof the quantum degeneracy, in terms of the density as follows: \n\\begin{equation}\n\\label{nhat}\nn \\lambda_T^3 = q\n\\end{equation}\nThe two scaling functions $c$ and $q$ are of course related since\n$n= - \\partial \\CF \/ \\partial \\mu$, which leads to \n\\begin{equation}\n\\label{qcprime}\nq = \\zeta (5\/2) c'\n\\end{equation} \nwhere $c'$ is the derivative of $c$ with respect to $x$. 
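As a simple check of eq. (\\ref{eq:qcprime}), in the ideal gas limit quoted in section III one has $c = s\\,{\\rm Li}_{5\/2}(sz)\/\\zeta(5\/2)$ and $q = s\\,{\\rm Li}_{3\/2}(sz)$, where $z=e^{x}$ is the fugacity. Using $\\partial_x = z\\,\\partial_z$ and $z\\,\\partial_z {\\rm Li}_{\\nu}(sz) = {\\rm Li}_{\\nu-1}(sz)$, one finds\n\\begin{equation}\nc' = \\frac{s\\,{\\rm Li}_{3\/2}(sz)}{\\zeta(5\/2)} = \\frac{q}{\\zeta(5\/2)} ,\n\\end{equation}\nconsistent with the relation above.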
\nHenceforth $g'$ will always denote the derivative of $g$ with respect to $x$.\nThe expressions for $c$ and $q$ for free theories will be implicit\nin the next section. \n\n\nAlso of interest are several energy per particle scaling functions. \nAt a renormalization group fixed point, the energy density\nis related to the free energy in the same way as for free particles:\n\\begin{equation}\n\\label{energyden}\n\\frac{E}{V} = -\\frac{3}{2} \\CF \n\\end{equation}\nwhere $V$ is the volume. For a free fermion, in the zero \ntemperature limit, the energy per particle $E\/N \\to \\frac{3}{5} \\mu$.\nThe Fermi energy is \n\\begin{equation}\n\\label{EF}\n\\epsilon_F =\\inv{m} \\( 3 \\pi^2 n\/\\sqrt{2} \\)^{2\/3} \n\\end{equation}\nThe above definition can also be used for bosons. \nSince $\\epsilon_F = \\mu$ in the zero temperature free fermionic gas, this leads\nus to define the scaling function $\\xi$:\n\\begin{equation}\n\\label{xi}\n\\xi (x) = \\frac{5}{3} \\frac{E}{N \\epsilon_F} = \\frac{5 \\zeta(5\/2)}{3}\n\\( \\frac{6}{\\pi} \\)^{1\/3} \\, \\frac{c}{q^{5\/3}}\n\\end{equation}\nIn the limit $T\\to 0$, i.e. $x\\to \\infty$, $\\xi \\to 1$ for a free\nfermion. \n\nA different energy per particle scaling function, $\\tilde{\\xi}$, is meaningful\nas $x\\to 0$:\n\\begin{equation}\n\\label{xitilde}\n\\frac{E}{N T} = \\frac{3 (2 \\sqrt{2} -1 ) \\zeta(5\/2) }{2(2 \\sqrt{2} -2) \n\\zeta(3\/2)} \n \\, \\tilde{\\xi} (x) \n\\end{equation}\nWith the above normalization $\\tilde{\\xi} = 1$ for a free fermion in the limit\n$x\\to 0$. In terms of the above scaling functions:\n\\begin{equation}\n\\label{xitildesc}\n\\tilde{\\xi} (x) = \\frac{ \\zeta (3\/2) \n( 2 \\sqrt{2} -2)}{(2 \\sqrt{2} -1)} \\, \\frac{c}{q} \n\\end{equation}\n\nThe entropy density is $s = - \\partial \\CF \/ \\partial T$, and the entropy per particle\ntakes the form\n\\begin{equation}\n\\label{sn}\n\\frac{s}{n} = \\zeta (5\/2) \n\\( \\frac{ 5 c\/2 - x c'}{q} \\) \n\\end{equation}\n\n \n\n\nNext consider the \n specific heat per particle at constant volume and particle number,\ni.e. constant density. One needs $\\partial x\/ \\partial T$ at constant density.\nUsing the fact that $n \\propto T^{3\/2} q$, at constant density\n$q \\propto T^{-3\/2}$. This gives\n\\begin{equation}\n\\label{CV.1}\nT \\( \\frac{\\partial x} {\\partial T} \\)_n = -\\frac{3}{2} \\frac{ q}{q'} \n\\end{equation}\nThe specific heat per particle is then:\n\\begin{equation}\n\\label{CV.2}\n\\frac{C_V}{N} = \\inv{N} \\( \\frac{\\partial E}{\\partial T} \\)_{N,V} =\n\\frac{\\zeta(5\/2)}{4} \\( 15 \\frac{c}{q} - 9 \\frac{c'}{q'} \\) \n\\end{equation}\n\nThe isothermal compressibility is defined as \n\\begin{equation}\n\\label{comp.1}\n\\kappa = - \\inv{V} \\( \\frac{ \\partial V}{\\partial p} \\)_T \n\\end{equation}\nwhere the pressure $p= - \\CF$. Since $n=N\/V$ and $N$ is kept fixed, \n\\begin{equation}\n\\label{comp.2}\n\\kappa = - n \\( \\frac{ \\partial n^{-1} }{\\partial p} \\)_T = \\inv{n T} \\frac{ q'}{q} =\n\\inv{T} \\( \\frac{mT}{2\\pi} \\)^{3\/2} \\frac{q'}{q^2} \n\\end{equation}\n\n\nFinally the equation of state can be expressed parametrically \nas follows. Given $n$ and $T$, one uses eq. (\\ref{nhat}) to find\n$x$ as a function of $n,T$. 
The pressure can then be written as\n\\begin{equation}\n\\label{eqnstate} \np = \\( \\frac{\\zeta(5\/2) c(x(n,T))}{q(x(n,T))} \\) n T\n\\end{equation}\n\n\n\nIn order to compare with numerical simulations and experiments,\nit will be useful to plot various quantities as a function of $q$ or \n$T\/T_F$:\n\\begin{equation}\n\\label{TTF}\n\\frac{T}{T_F} = \\( \\frac{4}{3 \\sqrt\\pi q} \\)^{2\/3} \n\\end{equation}\n\n\n\n\n\\section{Two-body scattering approximation}\n\n\n\nThe main features of the two-body scattering approximation developed in\n\\cite{PyeTon} are the following. Consider again first a single component\ngas. The filling fractions, or \noccupation numbers, are parameterized in terms of a pseudo-energy\n$\\varepsilon ({\\bf k} )$:\n\\begin{equation}\n\\label{fill}\nf({\\bf k} ) = \\inv{ e^{\\beta \\varepsilon ({\\bf k} ) } -s }\n\\end{equation}\nwhich determine the density:\n\\begin{equation}\n\\label{dens}\nn = \\int \\frac{d^3 {\\bf k}}{(2\\pi)^3} ~ \\inv{ e^{\\beta \\varepsilon ({\\bf k} ) } -s }\n\\end{equation}\nwhere $s=1,-1$ corresponds to bosons, fermions respectively\nand $\\beta = 1\/T$. \nThe consistent summation of 2-body scattering leads to \nan integral equation for the \n pseudo-energy $\\varepsilon ({\\bf k})$. It is convenient to define the\nquantity:\n\\begin{equation}\n\\label{ydef}\ny ({\\bf k} ) = e^{-\\beta (\\varepsilon({\\bf k}) - \\omega_{\\bf k} + \\mu )}\n\\end{equation}\nwhere $\\omega_{\\bf k} = {\\bf k}^2 \/ 2m$. Then $y$ satisfies the integral\nequation \n\\begin{equation}\n\\label{yinteq}\ny ({\\bf k} ) = 1 + \\beta \\int \\frac{d^3 {\\bf k}'}{(2\\pi)^3} \\, \nG({\\bf k} - {\\bf k}' ) \\frac{y({\\bf k}' )^{-1}}{e^{\\beta \\varepsilon ({\\bf k}')} -s}\n\\end{equation}\nThe free energy density is then\n\\begin{equation}\n\\label{freefoam2}\n\\CF = -T \\int \\frac{d^3 {\\bf k}}{ (2\\pi)^3} \\[ \n- s \\log ( 1- s e^{-\\beta \\varepsilon } ) \n-\\inv{2} \\frac{ (1-y^{-1} )}{e^{\\beta \\varepsilon} -s} \\] \n\\end{equation}\n\nThe kernel has the following structure:\n\\begin{equation}\n\\label{Gstructure}\nG = - \\frac{i}{\\mathcal{I}} \\log ( 1 + i \\mathcal{I} {\\cal M}}\t\\def\\CN{{\\cal N}}\t\\def\\CO{{\\cal O})\n\\end{equation}\nwhere ${\\cal M}}\t\\def\\CN{{\\cal N}}\t\\def\\CO{{\\cal O}$ is the scattering amplitude and $\\mathcal{I}$ represents\nthe available phase space for two-body scattering. The argument\nof the $\\log$ can be identified as the S-matrix function. \nIn the unitary limit, \n\\begin{equation}\n\\label{CMunit}\n{\\cal M}}\t\\def\\CN{{\\cal N}}\t\\def\\CO{{\\cal O} = \\frac{2i}{\\mathcal{I}} = \\frac{16 \\pi i}{m |{\\bf k} - {\\bf k}'|},\n\\end{equation}\nand the S-matrix equals $-1$. The kernel becomes\n\\begin{equation}\n\\label{kernel}\nG({\\bf k} - {\\bf k}' ) = \\mp \\frac{8 \\pi^2}{m |{\\bf k} - {\\bf k}'|} , \n\\end{equation} \nwhere the $-$ sign corresponds to $g$ being just below the fixed point\n$g_*$, where the scattering length $a\\to +\\infty$ on the BEC side,\nwhereas the $+$ sign corresponds to $a\\to - \\infty$ on the BCS side. \nAs explained in the Introduction, we work on the BCS side.\n\nThe angular integrals in eq. (\\ref{yinteq}) are easily performed. 
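Explicitly, since the kernel (\\ref{kernel}) depends only on $|{\\bf k} - {\\bf k}'|$, the required angular average is elementary,\n\\begin{equation}\n\\int \\frac{d\\Omega'}{4\\pi}\\, \\inv{|{\\bf k}-{\\bf k}'|} = \\frac{(|{\\bf k}|+|{\\bf k}'|) - \\big| |{\\bf k}|-|{\\bf k}'| \\big|}{2\\,|{\\bf k}|\\,|{\\bf k}'|} = \\inv{\\max (|{\\bf k}|,|{\\bf k}'|)} ,\n\\end{equation}\nwhich is the origin of the step functions appearing in the equation below.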
\nDefining the dimensionless variable $\\kappa = {\\bf k}^2 \/ 2mT$, \nthe integral equation becomes \n\\begin{equation}\n\\label{ykappa}\ny (\\kappa) = 1 + 4 \\int_0^\\infty d\\kappa' \\[ \n\\Theta (\\kappa - \\kappa') \\sqrt{\\kappa'\/\\kappa} + \n\\Theta (\\kappa' - \\kappa) \\] \\frac{z}{e^{\\kappa'} - s z y(\\kappa')} \n\\end{equation}\nwhere $z = e^{\\mu\/T}$ is the fugacity and $\\Theta(\\kappa)$ is the standard\nstep function equal to 1 for $\\kappa > 0$, zero otherwise. \n\n\nFinally comparing with the definitions in the last section \nthe scaling function for the density and free energy are\n\\begin{equation}\n\\label{nhatsc}\nq (x) = \\frac{2}{\\sqrt\\pi} \\int_0^\\infty d\\kappa \\sqrt{\\kappa} \n \\frac{ y(\\kappa) z }{e^\\kappa - s y(\\kappa) z} \n\\end{equation}\nand \n\\begin{equation}\n\\label{cscale}\nc = \\frac{2}{\\sqrt{\\pi} \\zeta (5\/2) } \n\\int_0^\\infty d\\kappa \\sqrt{\\kappa} \\( -s \\log \\( 1- s z y(\\kappa)\n e^{-\\kappa} \\) \n- \\inv{2} \\frac{ z ( y(\\kappa) - 1 ) }{e^\\kappa - s zy(\\kappa)} \\)\n\\end{equation}\nThe ideal, free gas limit corresponds to $y=1$ \nwhere $q= s {\\rm Li}_{3\/2} (s z)$ and $c= s {\\rm Li}_{5\/2} (sz)\/ \\zeta(5\/2)$,\nwhere ${\\rm Li}$ is the polylogarithm. The BEC critical point of the\nideal gas occurs at $\\mu=0$, i.e. $q=\\zeta(3\/2)$. \n\n\n\n\nConsider now two-component fermions with the action (\\ref{fermionaction}). \nHere the phase space factor $\\mathcal{I}$ is doubled and since $G\\propto 1\/\\mathcal{I}$, \nthe kernels have an extra $1\/2$:\n\\begin{equation}\n\\label{Gbosferm}\nG_{\\rm fermi} = \\inv{2} G_{\\rm bose} \n\\end{equation}\nDue to the SU(2) symmetry, the two-component fermion reduces to\ntwo identical copies of the above 1-component expressions, with the\nmodification (\\ref{Gbosferm}). \n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Analysis of fermions}\n\n\n\n Recall that for 2 component fermions,\nthe $4$ is replaced by $2$ in eq. (\\ref{ykappa}), and by the SU(2) \nsymmetry, the system reduces to two identical copies of the one-component \nequations of the last sections; below we present results for \na single component, it being implicit that the free energy and density\nare doubled. \n\nThe integral equation for $y(\\kappa)$, eq. (\\ref{ykappa}), can be\nsolved numerically by iteration.\n One first substitutes \n$y_0 = 1$ on the right hand side and this gives the approximation\n$y_1$ for $y$. One then substitutes $y_1$ on the right hand side\nto generate $y_2$, etc. For regions of $z$ where there are no\ncritical points, this procedure converges rapidly, and as little\nas 5 iterations are needed. For fermions, as one approaches \nzero temperature, i.e. $x$ large and positive, more iterations are needed\nfor convergence. The following results are based on 50 iterations. \n\nWhen $z\\ll 1$, $y \\approx 1$, and the properties of the free ideal \ngas are recovered, since the gas is very dilute. There are solutions\nto eq. (\\ref{ykappa}) for all $-\\infty < x < \\infty$. ($x=\\mu\/T$). \nThe scaling function $c$, and it's comparision with a free theory, \nare shown in Figure \\ref{cF} as a function of $x$.\n The corrections to the free \ntheory become appreciable when $x>-2$. At $x=0$:\n\\begin{equation}\n\\label{czero}\nc(0) = 0.880, ~~~~~~\\tilde{\\xi} = 0.884, \n\\end{equation}\ncompared to the free gas values of $c(0) = 0.646$ and $\\tilde{\\xi} =1$. 
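\n\nThe iterative scheme described above is straightforward to implement. A minimal sketch in Python is given below for the fermionic case ($s=-1$, prefactor 2); the grid spacing, the cut-off in $\\kappa$ and the number of iterations are illustrative choices rather than the exact settings used for the results quoted here.\n\\begin{verbatim}\nimport numpy as np\n\nZETA_52 = 1.3414872573   # Riemann zeta(5/2)\n\ndef solve_y(x, s=-1, prefactor=2.0, kappa_max=20.0, n=2000, n_iter=50):\n    # Solve eq. (ykappa) by iteration, starting from y = 1.\n    z = np.exp(x)                                   # fugacity\n    kappa = np.linspace(1e-6, kappa_max, n)\n    dk = kappa[1] - kappa[0]\n    # kernel: sqrt(kappa'/kappa) for kappa' < kappa, 1 otherwise\n    kern = np.where(kappa[None, :] < kappa[:, None],\n                    np.sqrt(kappa[None, :] / kappa[:, None]), 1.0)\n    y = np.ones_like(kappa)\n    for _ in range(n_iter):\n        occ = z / (np.exp(kappa) - s * z * y)\n        y = 1.0 + prefactor * dk * kern @ occ\n    return kappa, y, z\n\ndef scaling_functions(kappa, y, z, s=-1):\n    # q(x) and c(x) of eqs. (nhatsc) and (cscale).\n    dk = kappa[1] - kappa[0]\n    occ = z * y / (np.exp(kappa) - s * z * y)\n    q = 2.0 / np.sqrt(np.pi) * dk * np.sum(np.sqrt(kappa) * occ)\n    integrand = (-s * np.log(1.0 - s * z * y * np.exp(-kappa))\n                 - 0.5 * z * (y - 1.0) / (np.exp(kappa) - s * z * y))\n    c = 2.0 / (np.sqrt(np.pi) * ZETA_52) * dk * np.sum(np.sqrt(kappa) * integrand)\n    return q, c\n\\end{verbatim}\nFor instance, \\texttt{scaling\\_functions(*solve\\_y(0.0))} should approach, up to discretization error, the values of $c(0)$ and $q(0)$ quoted above.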
\n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm} \n\\psfrag{x}{$x=\\mu\/T$}\n\\psfrag{y}{$c$}\n\\psfrag{a}{$\\rm free$}\n\\includegraphics[width=10cm]{cF.eps} \n\\end{center}\n\\caption{$c(x)$ and its equivalent for a free theory as a function of\n$x=\\mu\/T$.} \n\\vspace{-2mm}\n\\label{cF} \n\\end{figure} \n\n\nThe scaling function $q$ for the density is shown as function of \n$x$ in Figure \\ref{qF}. Note that the density in the interacting case\nis always higher than for a free gas, due to the attractive interactions.\nAt $x=0$, $q(0) = 1.18$, whereas for a free gas $q=0.765$. \nAt low temperatures and high densities, $\\mu\/T \\gg 1$, the\noccupation numbers resemble that of a degenerate Fermi gas,\nas shown in Figure \\ref{fF}.\n\n \n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm} \n\\psfrag{x}{$x=\\mu\/T$}\n\\psfrag{y}{$q$}\n\\psfrag{a}{$\\rm free$}\n\\includegraphics[width=10cm]{qF.eps} \n\\end{center}\n\\caption{$q(x)$ and its equivalent for a free theory as a function of\n$x=\\mu\/T$.} \n\\vspace{-2mm}\n\\label{qF} \n\\end{figure} \n\n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm} \n\\psfrag{x}{$\\kappa = \\beta {\\bf k}^2 \/2m$}\n\\psfrag{y}{$f$}\n\\psfrag{a}{$x=15$}\n\\includegraphics[width=10cm]{fF.eps} \n\\end{center}\n\\caption{The occupation numbers as a function of $\\kappa$ for $x=5,10,15$.} \n\\vspace{-2mm}\n\\label{fF} \n\\end{figure} \n\n\n\n\nWhereas $c$ and $q$ are nearly featureless, other quantities \nseem to indicate a critical point, or phase transition, at \nlarge density. For instance, the entropy per particle \ndecreases with decreasing temperature up to $x < x_c \\approx 11.2$,\nas shown in Figure \\ref{snF}. Beyond this point the entropy per particle\nhas the unphysical behavior of increasing with temperature. \nA further indication that the region $x>x_c$ is unphysical is\nthat the specific heat per particle becomes negative, as shown in\nFigure \\ref{CVNF}. When $x\\ll 0$, $C_V\/N$ approaches the \nclassical value $3\/2$. This leads us to suggest a phase transition,\nat $x=x_c$, corresponding to the critical temperature \n$ T_c \/T_F \\approx 0.1$. As we will show, our analysis of\nthe viscosity to entropy-density ratio suggests a higher $T_c\/T_F$. \nThere have been numerous estimates of $T_c\/T_F$ based on various\napproximation schemes, mainly using Monte Carlo methods on the lattice\n\\cite{Perali, LeeShafer,Wingate,Bulgac,Drummond,Burovski}, \nquoting results for $T_c\/T_F$ between $0.05$ and $0.23$. The work\n\\cite{LeeShafer} puts an upper bound $T_c \/ T_F < 0.14$,\nand the most recent results of Burovski et. al. quote $T_c\/T_F =0.152(7)$.\nOur result is thus consistent with previous work. \nThe equation of state at this point follows from eq. (\\ref{eqnstate}):\n\\begin{equation}\n\\label{eqstF}\np = 4.95 n T\n\\end{equation}\n\n\n \n\n\n\n\n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm} \n\\psfrag{x}{$x=\\mu\/T$}\n\\psfrag{y}{$s\/n$}\n\\includegraphics[width=10cm]{snF.eps} \n\\end{center}\n\\caption{Entropy per fermionic particle as a function of $x$.} \n\\vspace{-2mm}\n\\label{snF} \n\\end{figure} \n\n\n\n\n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm} \n\\psfrag{x}{$x=\\mu\/T$}\n\\psfrag{y}{$C_V\/N$}\n\\includegraphics[width=10cm]{CVNF.eps} \n\\end{center}\n\\caption{Specific heat per particle as a function of $x$ for fermions.} \n\\vspace{-2mm}\n\\label{CVNF} \n\\end{figure} \n\n\n\nThe energy per particle, normalized to the Fermi energy $\\epsilon_F$,\ni.e. 
$E\/N \\epsilon_F = 3 \\xi \/5$, and the entropy per particle, \nare shown in Figures \\ref{xiF},\\ref{snFTTF} as a function of $T\/T_F$, where\n$k_B T_F = \\epsilon_F$. At high temperatures it matches that of a free\nFermi gas, in agreement with the Monte Carlo simulations in\n\\cite{Bulgac,Burovski}. Note that there is no sign of \npair-breaking at $T^*\/T_F = 0.5$ predicted in \\cite{Perali},\nand this also agrees with the Monte Carlo simulations.\n However at low temperatures in the vicinity\nof $T_c$, the agreement is not as good. This suggests our approximation\nis breaking down for very large $z$, i.e. the limit of zero temperature. \nThe same conclusion is reached by examining $\\mu\/\\epsilon_F$,\ndisplayed in Figure \\ref{muepF}, since the zero temperature \nepsilon expansion and Monte Carlo give\n $\\mu\/\\epsilon_F \\approx 0.4 - 0.5$\\cite{Son,Burovski}. \n\n\n\n\n\n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm} \n\\psfrag{X}{$T\/T_F$}\n\\psfrag{Y}{$\\frac{E}{N \\epsilon_F}$}\n\\includegraphics[width=10cm]{xiF.eps} \n\\end{center}\n\\caption{Energy per particle normalized to $\\epsilon_F$ as a function of\n$T\/T_F$.} \n\\vspace{-2mm}\n\\label{xiF} \n\\end{figure} \n\n\n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm} \n\\psfrag{X}{$T\/T_F$}\n\\psfrag{Y}{$s\/n$}\n\\includegraphics[width=10cm]{snFTTF.eps} \n\\end{center}\n\\caption{Entopy per particle as a function of\n$T\/T_F$.} \n\\vspace{-2mm}\n\\label{snFTTF} \n\\end{figure} \n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm}\n\\psfrag{X}{$T\/T_F$}\n\\psfrag{Y}{$\\mu \/ \\epsilon_F$}\n\\includegraphics[width=10cm]{muepF.eps} \n\\end{center}\n\\caption{Chemical potential normalized to $\\epsilon_F$ as a function of\n$T\/T_F$.} \n\\vspace{-2mm}\n\\label{muepF} \n\\end{figure} \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Analysis of bosons} \n\n\n\n\n\n\nFor bosons we again solved the integral equation (\\ref{ykappa}) \nby iteration, starting from $y=1$. Since the occupation numbers decay\nquickly as a function of $\\kappa$, we introduced a cut-off $\\kappa < 10$. \nFor $x$ less than approximately $-2$, the gas behaves nearly classically. \n\nThe main feature of the solution to the integral equation is that for\n$x>x_c \\equiv -1.2741$, there is no solution that is smoothly connected to\nthe classical limit $x\\to -\\infty$. Numerically, when there is no solution\nthe iterative procedure fails to converge. \nThe free energy scaling function is plotted in Figure \\ref{cB}. \nNote that $c<1$, where $c=1$ is the free field value. \n We thus take the physical region to \nbe $x< x_c$. We find strong evidence that the gas undergoes BEC \nat $x=x_c$. In Figure \\ref{epofx}, we plot $\\varepsilon ({\\bf k}=0 )$ as a function of\n$x$, and ones sees that it goes to zero at $x_c$. This implies the occupation\nnumber $f$ diverges at ${\\bf k} =0$ at this critical point. \nOne clearly sees this behavior in Figure \\ref{fB}. 
\n\n\n\n\n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm}\n\\psfrag{x}{$x=\\mu\/T $}\n\\psfrag{y}{$c$}\n\\psfrag{a}{$\\rm ideal$}\n\\includegraphics[width=10cm]{cB.eps} \n\\end{center}\n\\caption{The free-energy scaling function $c$ as a function of $\\mu\/T$ compared \nto the ideal gas case.} \n\\vspace{-2mm}\n\\label{cB} \n\\end{figure} \n\n\n\n\n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm}\n\\psfrag{x}{$x=\\mu\/T$}\n\\psfrag{y}{$\\varepsilon ({\\bf k}=0)\/T$}\n\\psfrag{a}{$x_c$}\n\\includegraphics[width=10cm]{epofx.eps} \n\\end{center}\n\\caption{The pseudo-energy $\\varepsilon$ at ${\\bf k} =0$ as a function of $x=\\mu\/T$.} \n\\vspace{-2mm}\n\\label{epofx} \n\\end{figure} \n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm}\n\\psfrag{x}{$\\kappa = \\beta {\\bf k}^2 \/2m $}\n\\psfrag{y}{$f$}\n\\psfrag{a}{$x_c$}\n\\includegraphics[width=10cm]{fB.eps} \n\\end{center}\n\\caption{The occupation number $f(\\kappa)$ for $x=-1.275$ and $x_c = -1.2741$.} \n\\vspace{-2mm}\n\\label{fB} \n\\end{figure} \n\n\nThe compressibility is shown in Figure \\ref{compressB}, and diverges at\n$x_c$, again consistent with BEC. \nWe thus conclude that there is a critical point at $x_c$ which a \nstrongly interacting, scale invariant version of the ideal BEC. \nIn terms of the density, the critical point is:\n\\begin{equation}\n\\label{xcb}\nn_c \\lambda_T^3 = 1.325, ~~~~~~~~~( \\mu\/T = x_c = -1.2741 )\n\\end{equation}\nThe negative value of the chemical potential is consistent with the\neffectively attractive interactions. The above should be compared with \nthe ideal BEC of the free theory, where $x_c = 0$ and \n$n_c \\lambda_T^3 = \\zeta (3\/2) = 2.61$, which is higher by a factor of 2. \nAt the critical point the equation of state is \n\\begin{equation}\n\\label{eqstB}\np = 0.318 n T\n\\end{equation}\ncompared to $p = 0.514 nT$ for the free case. ($0.514 = \\zeta(5\/2)\/ \\zeta (3\/2)$). \n\nA critical exponent $\\nu$ characterizing the diverging compressibility can be defined as \n\\begin{equation}\n\\label{expnu}\n\\kappa \\sim (T-T_c)^{-\\nu}\n\\end{equation}\nA log-log plot of the compressibility verses $T-T_c$ shows an approximately\nstraight line, and we obtain $\\nu \\approx 0.69$. This should be compared with\nBEC in an ideal gas, where $\\nu \\approx 1.0$. Clearly the unitary gas version \nof BEC is in a different universality class. \n\n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm}\n\\psfrag{x}{$x=\\mu\/T $}\n\\psfrag{y}{$\\kappa T (mT)^{3\/2}$}\n\\includegraphics[width=10cm]{compressB.eps} \n\\end{center}\n\\caption{The compressibility $\\kappa $ as a function of $\\mu\/T$.} \n\\vspace{-2mm}\n\\label{compressB} \n\\end{figure} \n\n\n\n\n\n\n\n\n\n\n\nThe energy per particle scaling function $\\tilde{\\xi}$ at the critical point is\n$\\tilde{\\xi} (x_c) = 0.281$ compared to $0.453 = (2 \\sqrt{2} -2)\/(2 \\sqrt{2} -1)$ \nfor the free case. The entropy per particle and specific heat per particle\nare plotted in Figures \\ref{snB}, \\ref{CVNB} as a function of $T\/T_c$. \nAt large temperatures, as expected $C_V\/N = 3\/2$, i.e. the classical value. \nIt increases as $T$ is lowered, however in contrast to the ideal gas case,\nit then begins to decrease as $T$ approaches $T_c$. 
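\n\nFor completeness, the exponent in eq. (\\ref{expnu}) is obtained from a straight-line fit in log-log coordinates. A minimal sketch (assuming the compressibility has already been tabulated at temperatures above $T_c$; the function name and arguments are illustrative) is\n\\begin{verbatim}\nimport numpy as np\n\ndef fit_nu(T, kappa, T_c):\n    # kappa ~ (T - T_c)^(-nu)  =>  log kappa = -nu log(T - T_c) + const\n    t = np.log(np.asarray(T) - T_c)\n    k = np.log(np.asarray(kappa))\n    slope, _ = np.polyfit(t, k, 1)   # linear fit in log-log space\n    return -slope\n\\end{verbatim}\nwhere \\texttt{T} and \\texttt{kappa} are arrays of equal length.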
\n\n\n\n\n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm}\n\\psfrag{x}{$T\/T_c$}\n\\psfrag{y}{$s\/n$}\n\\includegraphics[width=10cm]{snB.eps} \n\\end{center}\n\\caption{The entropy per particle as a function of $T\/T_c$.} \n\\vspace{-2mm}\n\\label{snB} \n\\end{figure} \n\n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm}\n\\psfrag{x}{$T\/T_c$}\n\\psfrag{y}{$C_V\/N $}\n\\includegraphics[width=10cm]{CVNB.eps} \n\\end{center}\n\\caption{The specific heat per particle as a function of $T\/T_c$.} \n\\vspace{-2mm}\n\\label{CVNB} \n\\end{figure} \n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Entropy to viscosity ratio}\n\n\nConsider first a single component gas. \nIn kinetic theory, the shear viscosity can be expressed as \n\\begin{equation}\n\\label{shear.1}\n\\eta = \\inv{3} n \\bar{v} m \\ell_{\\rm free} \n\\end{equation}\nwhere $\\bar{v}$ is the average speed and $\\ell_{\\rm free} $ is the mean\nfree path. The mean free path is \n$\\ell_{\\rm free} = 1\/(\\sqrt{2} n \\sigma)$ where $\\sigma$ is the total\ncross-section. (The $\\sqrt{2}$ comes from the ratio of the mean speed\nto the mean relative speed\\cite{Reif}.) In the unitary limit\nthe S-matrix $S=-1$, which implies the scattering amplitude in\neq. (\\ref{CMunit}). This leads to \n\\begin{equation}\n\\label{shear.2}\n\\sigma = \\frac{ m^2 |{\\cal M}}\t\\def\\CN{{\\cal N}}\t\\def\\CO{{\\cal O}|^2}{4\\pi} \n= \\frac{16 \\pi}{|{\\bf k}|^2}\n\\end{equation}\nwhere $|k|$ is the momentum of one of the particles in the center of mass\nframe, i.e. $|{\\bf k}_1 - {\\bf k}_2| = 2 |{\\bf k}|$. This gives\n\\begin{equation}\n\\label{shear.3}\n\\eta = \\frac{m^3 \\bar{v}^3}{48 \\sqrt{2} \\pi} \n\\end{equation}\n\nSince the equation (\\ref{energyden}) is the same relation \nbetween the pressure and energy of a free gas, and the pressure\nis due to the kinetic energy, this implies\n\\begin{equation}\n\\label{shear.4}\n\\inv{2} m \\bar{v}^2 = E\/N = \\frac{3}{2} \\frac{c}{c'} T \n\\end{equation}\nSince the entropy density $s= - \\partial \\CF \/ \\partial T$, one finally has\n\\begin{equation}\n\\label{etas}\n\\frac{\\eta}{s} = \\frac{\\sqrt{3\\pi}}{8 \\zeta (5\/2)} \n \\( \\frac{c}{c'} \\)^{3\/2} \\inv{ 5 c \/2 - x c' } \n\\end{equation}\n\n\nFor two-component fermions, the available phase space $\\mathcal{I}$ is doubled.\nAlso, spin up particles only scatter with spin down. This implies\n$\\eta$ is $8$ times the above expression. Since the entropy density is doubled,\nthis implies that $\\eta\/s$ is $4$ times the expression eq. (\\ref{etas}). \n\n\nThe ratio $\\eta\/s$ for fermions as a function of $T\/T_F$ is shown in\nFigure \\ref{etasF}, and is in good agreement both quantitatively and\nqualitatively with the experimental data \nsummarized in \\cite{Schafer}. The lowest value occurs at $x=2.33$, \nwhich corresponds to $T\/T_F = 0.28$, and \n\\begin{equation}\n\\label{etasFlim}\n\\frac{\\eta}{s} > 4.72 \\frac{\\hbar}{4 \\pi k_B}\n\\end{equation}\nThe experimental data\nhas a minimum that is about $6$ times this bound. \nIn the free fermion theory the minimum occurs at $\\mu\/T \\approx 2.3$,\nwhich gives $\\eta\/s > 7.2 \\hbar \/ 4 \\pi k_B$. \n\n\n\n \n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm} \n\\psfrag{X}{$T\/T_F$}\n\\psfrag{Y}{$\\eta \/ s$}\n\\includegraphics[width=10cm]{etasF.eps} \n\\end{center}\n\\caption{The viscosity to entropy-density ratio as a function of\n$T\/T_F$ for fermions. 
The horizontal line is $1\/4\\pi$.} \n\\vspace{-2mm}\n\\label{etasF} \n\\end{figure} \n\n\n\n\n\n\n\nFor bosons, the ration $\\eta\/s$ is plotted in Figure \\ref{etasB} as\na function of $T\/T_c$. One sees that it has a minimum at the critical point,\nwhere \n\\begin{equation}\n\\label{etasBlim}\n\\frac{\\eta}{s} > 1.26 \\frac{\\hbar}{4 \\pi k_B}\n\\end{equation}\n Thus the bosonic gas \nat the unitary critical point is a more perfect fluid than that of fermions. \nOn the other hand, the ideal Bose gas at the critical point has a lower value:\n\\begin{equation}\n\\label{etasBfree}\n\\frac{\\eta}{s} \\Bigg\\vert_{\\rm ideal} = \\frac{ \\sqrt{ 3 \\pi \\zeta (5\/2)}}\n{20 \\zeta (3\/2)^{3\/2}} = 0.53 \\frac{\\hbar}{4 \\pi k_B}\n\\end{equation}\n\n\n\n\\begin{figure}[htb] \n\\begin{center}\n\\hspace{-15mm} \n\\psfrag{x}{$T\/T_c$}\n\\psfrag{y}{$\\eta \/ s$}\n\\includegraphics[width=10cm]{etasB.eps} \n\\end{center}\n\\caption{The viscosity to entropy-density ratio as a function of\n$T\/T_c$ for bosons. The horizontal line is $1\/4\\pi$.} \n\\vspace{-2mm}\n\\label{etasB} \n\\end{figure} \n\n\n\n\n\n\n\n\n\n\n\n\\section{Conclusions}\n\n\n\nWe presented a novel analytic treatment of unitary Bose and Fermi gases\nat finite temperature and chemical potential \nusing a new formulation of statistical mechanics\nbased on the exact, zero temperature, 2-body scattering. \nOur results appear to be consistent with lattice Monte Carlo methods. \nAll of the thermodynamic functions, such as entropy per particle, energy\nper particle, specific heat, compressibility, and viscosity \nare readily calculated once one numerically solves the integral equation\nfor the pseudo-energy. \n\nFor fermions, our 2-body approximation is good if the temperatures are not\ntoo low. We estimated $T_c\/T_F \\approx 0.1$, where the critical \npoint occurs at $\\mu\/T \\approx 11.2$. For bosons we presented\nevidence for a strongly interacting version of BEC at the critical \npoint $n \\lambda_T^3 \\approx 1.3$, corresponding to \n$\\mu \/ T = -1.27$. \n\n\n\n\n\n\n\\section{Acknowledgments}\n\n\nWe wish to thank Erich Mueller for helpful discussions. \n This work is supported by the National Science Foundation\nunder grant number NSF-PHY-0757868. \n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n\\IEEEPARstart{T}o perform diagnosis and prognosis of cardiovascular disease (CVD) medical experts depend on the reliable quantification of cardiac function \\cite{white1987left}. Cardiac magnetic resonance imaging (CMRI) is currently considered the reference standard for quantification of ventricular volumes, mass and function \\cite{grothues2002comparison}. Short-axis CMR imaging, covering the entire left and right ventricle (LV resp. RV) is routinely used to determine quantitative parameters of both ventricle's function. This requires manual or semi-automatic segmentation of corresponding cardiac tissue structures for end-diastole (ED) and end-systole (ES). \n\nExisting semi-automated or automated segmentation methods for CMRIs regularly require (substantial) manual intervention caused by lack of robustness. Manual or semi-automatic segmentation across a complete cardiac cycle, comprising \\num{20} to \\num{40} phases per patient, enables computation of parameters quantifying cardiac motion with potential diagnostic implications but due to the required workload, this is practically infeasible. 
Consequently, segmentation is often performed at end-diastole and end-systole precluding comprehensive analysis over complete cardiac cycle. \n\nRecently\\cite{litjens2017survey, leiner2019machine}, deep learning segmentation methods have shown to outperform traditional approaches such as those exploiting level set, graph-cuts, deformable models, cardiac atlases and statistical models \\cite{petitjean2011review, peng2016review}. However, recent comparison of a number of automatic methods showed that even the best performing methods generated anatomically implausible segmentations in more than 80\\% of the CMRIs \\cite{bernard2018deep}. Such errors do not occur when experts perform segmentation. To achieve acceptance in clinical practice these shortcomings of the automatic approaches need to be alleviated by further development. This can be achieved by generating more accurate segmentation result or by development of approaches that automatically detect segmentation failures. \n\nIn manual and automatic segmentation of short-axis CMRI, largest segmentation inaccuracies are typically located in the most basal and apical slices due to low tissue contrast ratios \\cite{suinesiaputra2015quantification}. To increase segmentation performance, several methods have been proposed \\cite{tan2018fully, zheng20183, savioli2018automated, bai2018automated}. Tan et al. \\cite{tan2018fully} used a convolutional neural network (CNN) to regress anatomical landmarks from long-axis views (orthogonal to short-axis). They exploited the landmarks to determine most basal and apical slices in short-axis views and thereby constraining the automatic segmentation of CMRIs. This resulted in increased robustness and performance. Other approaches leverage spatial \\cite{zheng20183} or temporal \\cite{savioli2018automated, bai2018automated} information to increase segmentation consistency and performance in particular in the difficult basal and apical slices.\n\nAn alternative approach to preventing implausible segmentation results is by incorporating knowledge about the highly constrained shape of the heart. Oktay et al. \\cite{oktay2017anatomically} developed an anatomically constrained neural network (NN) that infers shape constraints using an auto-encoder during segmentation training. Duan et al. \\cite{duan2019automatic} developed a deep learning segmentation approach for CMRIs that used atlas propagation to explicitly impose a shape refinement. This was especially beneficial in the presence of image acquisition artifacts. Recently, Painchaud et al. \\cite{painchaud2019cardiac} developed a post-processing approach to detect and transform anatomically implausible cardiac segmentations into valid ones by defining cardiac anatomical metrics. Applying their approach to various state-of-the-art segmentation methods the authors showed that the proposed method provides strong anatomical guarantees without hampering segmentation accuracy. \n\n\\begin{figure*}[t]\n\t\\center\n\t\\includegraphics[width=6in, height=3.5in]{figures\/overview_approach.pdf}%\n\t\n\t\\caption{Overview of proposed two step approach. Step 1 (left): Automatic CNN segmentation of CMR images combined with assessment of segmentation uncertainties. Step 2 (right): Differentiate tolerated errors from segmentation failures (to be detected) using distance transform maps based on reference segmentations. Detection of image regions containing segmentation failures using CNN which takes CMR images and segmentation uncertainties as input. 
Manual corrected segmentation failures (green) based on detected image regions.}\n\t\\label{fig_overview_method}\n\\end{figure*}\n\n\nA different research trend focuses on detecting segmentation failures, i.e. on automated quality control for image segmentation. These methods can be divided in those that predict segmentation quality using image at hand or corresponding automatic segmentation result, and those that assess and exploit predictive uncertainties to detect segmentation failure. \n\nRecently, two methods were proposed to detect segmentation failures in large-scale cardiac MR imaging studies to remove these from subsequent analysis\\cite{alba2018automatic, robinson2019automated}. Robinson et al. \\cite{robinson2019automated} using the approach of Reverse Classification Accuracy (RCA) \\cite{valindria2017reverse} predicted CMRI segmentation metrics to detect failed segmentations. They achieved good agreement between predicted metrics and visual quality control scores. Alba et al. \\cite{alba2018automatic} used statistical, pattern and fractal descriptors in a random forest classifier to directly detect segmentation contour failures without intermediate regression of segmentation accuracy metrics. \n\nMethods for automatic quality control were also developed for other applications in medical image analysis. Frounchi et al. \\cite{frounchi2011automating} extracted features from the segmentation results of the left ventricle in CT scans. Using the obtained features the authors trained a classifier that is able to discriminate between consistent and inconsistent segmentations. To distinguish between acceptable and non-acceptable segmentations Kohlberger el al. \\cite{kohlberger2012evaluating} proposed to directly predict multi-organ segmentation accuracy in CT scans using a set of features extracted from the image and corresponding segmentation. \n\nA number of methods aggregate voxel-wise uncertainties into an overall score to identify insufficiently accurate segmentations. For example, Nair et al. \\cite{nair2018exploring} computed an overall score for target segmentation structure from voxel-wise predictive uncertainties. The method was tested for detection of Multiple Sclerosis in brain MRI. The authors showed that rejecting segmentations with high uncertainty scores led to increased detection accuracy indicating that correct segmentations contain lower uncertainties than incorrect ones. Similarly, to assess segmentation quality of brain MRIs Jungo et al. \\cite{jungo2018uncertainty} aggregated voxel-wise uncertainties into a score per target structure\nand showed that the computed uncertainty score enabled identification of erroneous segmentations.\n\nUnlike approaches evaluating segmentation directly, several methods use predictive uncertainties to predict segmentation metrics and thereby evaluate segmentation performance \\cite{roy2019bayesian, devries2018leveraging}. For example, Roy et al. \\cite{roy2019bayesian} aggregated voxel-wise uncertainties into four scores per segmented structure in brain MRI. The authors showed that computed scores can be used to predict the Intersection over Union and hence, to determine segmentation accuracy. \nSimilar idea was presented by DeVries et al. \\cite{devries2018leveraging} that predicted segmentation accuracy per patient using an auxiliary neural network that leverages the dermoscopic image, automatic segmentation result and obtained uncertainties. 
The researchers showed that a predicted segmentation accuracy is useful for quality control.\n\nWe build on our preliminary work where automatic segmentation of CMR images using a dilated CNN was combined with assessment of two measures of segmentation uncertainties \\cite{sander2019towards}. For the first measure the multi-class entropy per voxel (entropy maps) was computed using the output distribution. For the second measure Bayesian uncertainty maps were acquired using Monte Carlo dropout (MC-dropout) \\cite{gal2016dropout}. In \\cite{sander2019towards} we showed that the obtained uncertainties almost entirely cover the regions of incorrect segmentation i.e. that uncertainties are calibrated. In the current work we extend our preliminary research in two ways. First, we assess impact of CNN architecture on the segmentation performance and calibration of uncertainty maps by evaluating three existing state-of-the-art CNNs. Second, we employ an auxiliary CNN (detection network) that processes a cardiac MRI and corresponding spatial uncertainty map (Entropy or Bayesian) to automatically detect segmentation failures. We differentiate errors that may be within the range of inter-observer variability and hence do not necessarily require correction (tolerated errors) from the errors that an expert would not make and hence require correction (segmentation failures). Given that overlap measures do not capture fine details of the segmentation results and preclude us to differentiate two types of segmentation errors, in this work, we define segmentation failure using a metric of boundary distance. In \\cite{sander2019towards} we found that degree of calibration of uncertainty maps is dependent on the loss function used to train the CNN. Nevertheless, in the current work we show that uncalibrated uncertainty maps are useful to detect local segmentation failures. \t\nIn contrast to previous methods that detect segmentation failure per-patient or per-structure\\cite{roy2019bayesian, devries2018leveraging}, we propose to detect segmentation failures per image region. We expect that inspection and correction of segmentation failures using image regions rather than individual voxels or images would simplify correction process. To show the potential of our approach and demonstrate that combining automatic segmentation with manual correction of the detected segmentation failures per region results in higher segmentation performance we performed two additional experiments. In the first experiment, correction of detected segmentation failures was simulated in the complete data set. In the second experiment, correction was performed by an expert in a subset of images. Using publicly available set of CMR scans from MICCAI 2017 ACDC challenge \\cite{bernard2018deep}, the performance was evaluated before and after simulating the correction of detected segmentation failures as well as after manual expert correction.\n\n\\section{Data}\n\nIn this study data from the MICCAI \\num{2017} Automated Cardiac Diagnosis Challenge (ACDC) \\cite{bernard2018deep} was used. The dataset consists of cardiac cine MR images (CMRIs) from 100 patients uniformly distributed over normal cardiac function and four disease groups: dilated cardiomyopathy, hypertrophic cardiomyopathy, heart failure with infarction, and right ventricular abnormality. Detailed acquisition protocol is described by Bernard et al.~\\cite{bernard2018deep}. 
Briefly, short-axis CMRIs were acquired with two MRI scanners of different magnetic strengths (\\num{1.5} and \\num{3.0} T). Images were made during breath hold using a conventional steady-state free precession (SSFP) sequence. CMRIs have an in-plane resolution ranging from \\num{1.37} to \\SI{1.68}{\\milli\\meter} (average reconstruction matrix \\num{243} $\\times$ \\num{217} voxels) with slice spacing varying from \\num{5} to \\SI{10}{\\milli\\meter}. Per patient 28 to 40 volumes are provided covering partially or completely one cardiac cycle. Each volume consists of on average ten slices covering the heart. Expert manual reference segmentations are provided for the LV cavity, RV endocardium and LV myocardium (LVM) for all CMRI slices at ED and ES time frames. To correct for intensity differences among scans, voxel intensities of each volume were scaled to the [\\num{0.0}, \\num{1.0}] range using the minimum and maximum of the volume. Furthermore, to correct for differences in-plane voxel sizes, image slices were resampled to \\num{1.4}$\\times\\SI{1.4}{\\milli\\meter}^2$. \n\n\\begin{figure*}[!t]\n\t\\captionsetup[subfigure]{justification=centering}\n\t\\centering\n\n\t\\subfloat[]{\\includegraphics[width=5in]{figures\/qualitative\/auto\/patient099_slice02_ES_emap_auto.pdf}%\n\t\t\\label{fig_seg_qual_example1}}\n\t\n\t\\subfloat[]{\\includegraphics[width=5in]{figures\/qualitative\/auto\/patient097_slice00_ES_emap_auto.pdf}%\n\t\t\\label{fig_seg_qual_example2}}\n\t\n\t\\caption{Examples of automatic segmentations generated by different segmentation models for two cardiac MRI scans (rows) at ES at the base of the heart.}\n\t\\label{fig_seg_qualitative_results}\n\\end{figure*}\n\n\\section{Methods}\n\nTo investigate uncertainty of the segmentation, anatomical structures in CMR images are segmented using a CNN. To investigate whether the approach generalizes to different segmentation networks, three state-of-the-art CNNs were evaluated. For each segmentation model two measures of predictive uncertainty were obtained per voxel. Thereafter, to detect and correct local segmentation failures an auxiliary CNN (detection network) that analyzes a cardiac MRI was used. Finally, this leads to the uncertainty map allowing detection of image regions that contain segmentation failures. Figure~\\ref{fig_overview_method} visualizes this approach.\n\n\n\\subsection{Automatic segmentation of cardiac MRI}\n\nTo perform segmentation of LV, RV, and LVM in cardiac MR images i.e. \\num{2}D CMR scans, three state-of-the-art CNNs are trained. Each of the three networks takes a CMR image as input and has four output channels providing probabilities for the three cardiac structures (LV, RV, LVM) and background. Softmax probabilities are calculated over the four tissue classes. Patient volumes at ED and ES are processed separately. During inference the \\num{2}D automatic segmentation masks are stacked into a \\num{3}D volume per patient and cardiac phase. After segmentation, the largest \\num{3}D connected component for each class is retained and volumes are resampled to their original voxel resolution. Segmentation networks differ substantially regarding architecture, number of parameters and receptive field size. To assess predictive uncertainties from the segmentation models \\textit{Monte Carlo dropout} (MC-dropout) introduced by Gal \\& Ghahramani \\cite{gal2016dropout} is implemented in every network. 
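\n\nAs an illustration of the post-processing step described above, in which only the largest 3D connected component of each class is retained, a minimal sketch using NumPy and SciPy is given below; the function name and the assumption that the segmentation is stored as an integer label volume (0 = background and 1, 2, 3 = the three tissue classes) are illustrative.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy import ndimage\n\ndef keep_largest_component(mask, num_classes=4):\n    # Remove disconnected fragments: keep the largest 3D component per class.\n    cleaned = np.zeros_like(mask)\n    for c in range(1, num_classes):              # skip background\n        binary = (mask == c)\n        labelled, n = ndimage.label(binary)\n        if n == 0:\n            continue\n        sizes = ndimage.sum(binary, labelled, range(1, n + 1))\n        cleaned[labelled == (np.argmax(sizes) + 1)] = c\n    return cleaned\n\\end{verbatim}\n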
The following three segmentation networks were evaluated: Bayesian Dilated CNN, Bayesian Dilated Residual Network, and Bayesian U-net.\n\n\\vspace{1ex}\n\\noindent \\textbf{Bayesian Dilated CNN (DN)}: The Bayesian DN architecture comprises a sequence of ten convolutional layers. Layers \\num{1} to \\num{8} serve as feature extraction layers with small convolution kernels of size \\num{3}$\\times$\\num{3} voxels. No padding is applied after convolutions. The number of kernels increases from \\num{32} in the first eight layers to \\num{128} in the final two fully connected classification layers, implemented as \\num{1}$\\times$\\num{1} convolutions. The dilation level is successively increased between layers \\num{2} and \\num{7} from \\num{2} to \\num{32}, which results in a receptive field for each voxel of \\num{131}$\\times$\\num{131} voxels, or \\num{18.3}$\\times$ $\\SI{18.3}{\\centi\\meter}^2$. All trainable layers except the final layer use rectified linear activation functions (ReLU). To enhance generalization performance, the model uses batch normalization in layers \\num{2} to \\num{9}. In order to convert the original DN~\\cite{wolterink2017automatic} into a Bayesian DN, dropout is added as the last operation in all but the final layer, and \\num{10} percent of a layer's hidden units are randomly switched off. \n\n\\vspace{1ex}\n\\noindent \\textbf{Bayesian Dilated Residual Network (DRN)}: The Bayesian DRN is based on the original DRN from Yu et al. \\cite{yu2017dilated} for image segmentation. More specifically, the DRN-D-22~\\cite{yu2017dilated} is used, which consists of a feature extraction module with output stride eight followed by a classifier implemented as a fully convolutional layer with \\num{1}$\\times$\\num{1} convolutions. The output of the classifier is upsampled to full resolution using bilinear interpolation. The convolutional feature extraction module comprises eight levels where the number of kernels increases from \\num{16} in the first level to \\num{512} in the two final levels. The first convolutional layer in level \\num{1} uses \\num{16} kernels of size \\num{7}$\\times$\\num{7} voxels and zero-padding of size \\num{3}. The remaining trainable layers use small \\num{3}$\\times$\\num{3} voxel kernels and zero-padding of size \\num{1}. Levels \\num{2} to \\num{4} use strided convolutions with stride \\num{2}. To further increase the receptive field, convolutional layers in levels \\num{5}, \\num{6} and \\num{7} use a dilation factor of \\num{2}, \\num{4} and \\num{2}, respectively. Furthermore, levels \\num{3} to \\num{6} consist of two residual blocks. All convolutional layers of the feature extraction module are followed by batch normalization, a ReLU function and dropout. Adding dropout and switching off \\num{10} percent of a layer's hidden units converts the original DRN~\\cite{yu2017dilated} into a Bayesian DRN.\n\n\\vspace{1ex}\n\\noindent \\textbf{Bayesian U-net (U-net)}: The standard architecture of the U-net~\\cite{ronneberger2015u} is used. The network is fully convolutional and consists of a contracting, bottleneck and expanding path. The contracting and expanding paths each consist of four blocks, i.e. resolution levels, which are connected by skip connections. The first block of the contracting path contains two convolutional layers using a kernel size of \\num{3}$\\times$\\num{3} voxels and zero-padding of size \\num{1}. 
Downsampling of the input is accomplished by employing a max pooling operation in blocks \\num{2} to \\num{4} of the contracting path and the bottleneck, using a kernel of size \\num{2}$\\times$\\num{2} voxels and stride \\num{2}. Upsampling is performed by a transposed convolutional layer in blocks \\num{1} to \\num{4} of the expanding path using the same kernel size and stride as the max pooling layers. Each downsampling and upsampling layer is followed by two convolutional layers using \\num{3}$\\times$\\num{3} voxel kernels with zero-padding of size \\num{1}. The final convolutional layer of the network acts as a classifier and uses \\num{1}$\\times$\\num{1} convolutions to reduce the number of output channels to the number of segmentation classes. The number of kernels increases from \\num{64} in the first block of the contracting path to \\num{1024} in the bottleneck. In contrast, the number of kernels in the expanding path successively decreases from \\num{1024} to \\num{64}. In deviation from the standard U-net, instance normalization is added to all convolutional layers in the contracting path and ReLU non-linearities are replaced by LeakyReLU functions, because this was found to slightly improve segmentation performance. In addition, to convert the deterministic model into a Bayesian neural network, dropout is added as the last operation in each block of the contracting and expanding path, and \\num{10} percent of a layer's hidden units are randomly switched off.\n\n\\subsection{Assessment of predictive uncertainties} \\label{uncertainty_maps}\nTo detect failures in segmentation masks generated by CNNs during testing, spatial uncertainty maps of the obtained segmentations are generated. For each voxel in the image two measures of uncertainty are calculated. First, a computationally cheap and straightforward measure of uncertainty is the entropy of the softmax probabilities over the four tissue classes generated by the segmentation networks. Using these, normalized entropy maps $\\bf E \\in [0, 1]^{H\\times W}$ (e-map) are computed, where $H$ and $W$ denote the height and width of the original CMRI, respectively.\n\nSecond, by applying MC-dropout during testing, $T$ samples of the softmax probabilities are obtained per voxel. As an overall measure of uncertainty, the mean standard deviation of the softmax probabilities per voxel over all tissue classes $C$\\label{ref:maximum_variance} is computed\n\n\n\\begingroup\n\\small\n\\begin{align}\n\\textbf{B} (I)^{(x, y)} &= \\frac{1}{C} \\sum_{c=1}^{C} \\sqrt{\\frac{1}{T-1} \\sum_{t=1}^{T} \\big(p_t(I)^{(x, y, c)} - \\hat{\\mu}^{(x, y, c)} \\big)^2 } \\; ,\n\\end{align}\n\\endgroup\n\nwhere $\\textbf{B}(I)^{(x, y)} \\in [0, 1]$ denotes the normalized value of the Bayesian uncertainty map (b-map) at position $(x, y)$ in \\num{2}D slice $I$, $C$ is equal to the number of classes, $T$ is the number of samples and $p_t(I)^{(x, y, c)}$ denotes the softmax probability at position $(x, y)$ in image $I$ for class $c$. 
The predictive mean per class $\\hat{\\mu}^{(x, y, c)}$ of the samples is computed as follows:\n\n\\begingroup\n\\small\n\\begin{align}\n\\hat{\\mu}^{(x, y, c)} &= \\frac{1}{T} \\sum_{t=1}^{T} p_t(I)^{(x, y, c)} \\; .\n\\end{align}\n\\endgroup\n\nIn addition, the predictive mean per class is used to determine the tissue class per voxel.\n\n\\subsection{Calibration of uncertainty maps} \nIdeally, incorrectly segmented voxels as defined by the reference labels should be covered by higher uncertainties than correctly segmented voxels. In such a case, the spatial uncertainty maps are perfectly calibrated. \\textit{Risk-coverage curves} introduced by Geifman et al.~\\cite{geifman2017selective} visualize whether incorrectly segmented voxels are covered by higher uncertainties than those that are correctly segmented. Risk-coverage curves convey the effect of avoiding segmentation of voxels above a specific uncertainty value on the reduction of segmentation errors (i.e. risk reduction), while at the same time quantifying the fraction of voxels that remains automatically segmented (i.e. coverage). \n\nTo generate risk-coverage curves, first, each patient volume is cropped based on a minimal enclosing parallelepiped bounding box that is placed around the reference segmentations to reduce the number of background voxels. Note that this is only performed to simplify the analysis of the risk-coverage curves. Second, voxels of the cropped patient volume are ranked based on their uncertainty value in descending order. Third, to obtain uncertainty threshold values per patient volume, the ranked voxels are partitioned into \\num{100} percentiles based on their uncertainty value. Finally, per patient volume, each uncertainty threshold is evaluated by computing a coverage and a risk measure. Coverage is the percentage of voxels in a patient volume at ED or ES that is automatically segmented. Voxels in a patient volume above the threshold are discarded from automatic segmentation and would be referred to an expert. The number of incorrectly segmented voxels per patient volume is used as a measure of risk. Using bilinear interpolation, risk measures are computed per patient volume for coverages between $[0, 100]$ percent.\n\n\n\\subsection{Detection of segmentation failures}\n\nTo detect segmentation failures, uncertainty maps are used, but direct application of uncertainties is infeasible because many correctly segmented voxels, such as those close to anatomical structure boundaries, have high uncertainty. Hence, an additional patch-based CNN (detection network) is used that takes a cardiac MR image together with the corresponding spatial uncertainty map as input. For each patch of \\num{8}$\\times$\\num{8} voxels the network generates a probability indicating whether it contains a segmentation failure. In the following, the terms patch and region are used interchangeably.\n\nThe detection network is a shallow Residual Network (S-ResNet) \\cite{he2016deep} consisting of a feature extraction module with output stride eight followed by a classifier indicating the presence of a segmentation failure. The first level of the feature extraction module consists of two convolutional layers. The first layer uses \\num{16} kernels of \\num{7}$\\times$\\num{7} voxels and zero-padding of size \\num{3}, and the second layer \\num{32} kernels of \\num{3}$\\times$\\num{3} voxels and zero-padding of \\num{1} voxel. 
Levels \\num{2} to \\num{4} each consist of one residual block that contains two convolutional layers with \\num{3}$\\times$\\num{3} voxel kernels and zero-padding of size \\num{1}. The first convolutional layer of each residual block uses a strided convolution with stride \\num{2} to downsample the input. All convolutional layers of the feature extraction module are followed by batch normalization and a ReLU function. The number of kernels in the feature extraction module increases from \\num{16} in level \\num{1} to \\num{128} in level \\num{4}. The network is a \\num{2}D patch-level classifier and requires that the size of the two input slices is a multiple of the patch-size. \\label{patch_size} The final classifier consists of three fully convolutional layers, implemented as \\num{1}$\\times$\\num{1} convolutions, with \\num{128} feature maps in the first two layers. The final layer has two channels followed by a softmax function which indicates whether the patch contains a segmentation failure. Furthermore, to regularize the model, dropout layers ($p=0.5$) were added between the residual blocks and the fully convolutional layers of the classifier.\n\n\\begin{table*}\n\t\\caption{Segmentation performance of different combinations of model architectures, loss functions and evaluation modes (without or with MC dropout enabled during testing) in terms of Dice coefficient (top) and Hausdorff distance (bottom) (mean $\\pm$ standard deviation). Each combination comprises a block of two rows. A row in which column \\textit{Uncertainty map for detection} indicates e- or b-map shows results for the combined segmentation and detection approach. Numbers accentuated in black\\/bold are ranked first in the segmentation-only task whereas numbers accentuated in red\\/bold are ranked first in the combined segmentation \\& detection task. The last row states the performance of the winning model in the ACDC challenge (on \\num{100} patient images) \\cite{isensee2017automatic}. Numbers with an asterisk indicate statistically significant differences at the \\num{5}\\% level w.r.t. the segmentation-only approach. 
Best viewed in color.}\n\t\\label{table_overall_segmentation_performance}\n\t\\centering\n\t\\tiny\n\t\\subfloat[Dice coefficient]{\n\t\t\\begin{tabular}{| C{1.6cm} | C{0.8cm} | R{1.7cm} R{1.7cm} R{1.7cm} | R{1.7cm} R{1.7cm} R{1.7cm} | }\n\t\t\t\\hline\n\t\t\t& \\textbf{Uncertainty} & \\multicolumn{3}{c|}{\\textbf{End-diastole}} & \\multicolumn{3}{c|}{\\textbf{End-systole}} \\\\\n\t\t\t\\textbf{Model} & \\textbf{map for detection} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} \\\\ \n\t\t\t\\hline\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tDN-Brier & & \\phantom{x}0.962$\\pm$0.02 & \\phantom{x}0.928$\\pm$0.04) & \\phantom{x}0.875$\\pm$0.03) & \\phantom{x}0.901$\\pm$0.11 & \\phantom{x}0.832$\\pm$0.10) & \\phantom{x}0.884$\\pm$0.04 \\\\\n\t\t\t& \\textbf{e-map} & *0.965$\\pm$0.01 & *0.949$\\pm$0.02 & *0.885$\\pm$0.03 & *0.937$\\pm$0.06 & *0.905$\\pm$0.05 & *0.909$\\pm$0.03 \\\\ \n\t\t\t\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tDN-Brier+MC & & \\phantom{x}0.961$\\pm$0.02 & \\phantom{x}0.922$\\pm$0.04 & \\phantom{x}0.875$\\pm$0.04 & \\phantom{x}0.912$\\pm$0.08 & \\phantom{x}0.839$\\pm$0.11 & \\phantom{x}0.882$\\pm$0.04 \\\\ \n\t\t\t& \\textbf{b-map} & *0.966$\\pm$0.01 & *0.950$\\pm$0.01 & *0.886$\\pm$0.03 & \\textbf{\\textcolor{red}{*0.942}}$\\pm$0.03 & *0.916$\\pm$0.04 & *0.912$\\pm$0.03 \\\\ \n\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tDN-soft-Dice & & \\phantom{x}0.960$\\pm$0.02 & \\phantom{x}0.921$\\pm$0.04 & \\phantom{x}0.870$\\pm$0.04 & \\phantom{x}0.909$\\pm$0.08 & \\phantom{x}0.812$\\pm$0.12 & \\phantom{x}0.879$\\pm$0.04 \\\\\n\t\t\t& \\textbf{e-map} & *0.965$\\pm$0.01 & *0.945$\\pm$0.02 & *0.879$\\pm$0.04 & *0.938$\\pm$0.03 & *0.891$\\pm$0.06 & *0.905$\\pm$0.03 \\\\ \n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tDN-soft-Dice+MC & & \\phantom{x}0.958$\\pm$0.02 & \\phantom{x}0.913$\\pm$0.05 & \\phantom{x}0.868$\\pm$0.04 & \\phantom{x}0.907$\\pm$0.07 & \\phantom{x}0.818$\\pm$0.12 & \\phantom{x}0.875$\\pm$0.04 \\\\\n\t\t\t& \\textbf{b-map} & *0.964$\\pm$0.01 & *0.944$\\pm$0.02 & *0.877$\\pm$0.04 & *0.939$\\pm$0.03 & *0.900$\\pm$0.05 & *0.904$\\pm$0.03 \\\\ \t\t\t\n\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tDRN-CE & & \\phantom{x}0.961$\\pm$0.02 & \\phantom{x}0.929$\\pm$0.03 & \\phantom{x}0.878$\\pm$0.03 & \\phantom{x}0.912$\\pm$0.06 & \\phantom{x}0.850$\\pm$0.09 & \\phantom{x}0.891$\\pm$0.03 \\\\ \n\t\t\t& \\textbf{e-map} & \\phantom{x}0.964$\\pm$0.01 & *0.943$\\pm$0.02 & *0.886$\\pm$0.03 & *0.937$\\pm$0.03 & *0.899$\\pm$0.04 & *0.908$\\pm$0.03 \\\\\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tDRN-CE+MC & & \\phantom{x}0.961$\\pm$0.02 & \\phantom{x}0.926$\\pm$0.03 & \\phantom{x}0.877$\\pm$0.03 & \\phantom{x}0.913$\\pm$0.06 & \\phantom{x}0.847$\\pm$0.10 & \\phantom{x}0.890$\\pm$0.03 \\\\ \n\t\t\t& \\textbf{b-map} & *0.965$\\pm$0.01 & *0.948$\\pm$0.01 & *0.887$\\pm$0.03 & *0.939$\\pm$0.03 & *0.911$\\pm$0.04 & *0.909$\\pm$0.03 \\\\ \n\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\n\t\t\t\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tDRN-soft-Dice & & \\phantom{x}0.964$\\pm$0.01 & \\phantom{x}\\textbf{0.937}$\\pm$0.02 & \\phantom{x}0.888$\\pm$0.03 & \\phantom{x}\\textbf{0.919}$\\pm$0.06 & \\phantom{x}0.856$\\pm$0.09 & \\phantom{x}\\textbf{0.900}$\\pm$0.03 \\\\ \n\t\t\t& \\textbf{e-map} & 
\\phantom{x}0.967$\\pm$0.01 & *0.945$\\pm$0.02 & \\phantom{x}0.893$\\pm$0.03 & \\phantom{x}0.934$\\pm$0.04 & *0.892$\\pm$0.06 & *0.911$\\pm$0.03 \\\\ \n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tDRN-soft-Dice+MC & & \\phantom{x}0.963$\\pm$0.02 & \\phantom{x}0.935$\\pm$0.03 & \\phantom{x}0.886$\\pm$0.03 & \\phantom{x}0.921$\\pm$0.06 & \\phantom{x}\\textbf{0.857}$\\pm$0.09 & \\phantom{x}0.899$\\pm$0.03 \\\\ \n\t\t\t& \\textbf{b-map} & \\phantom{x}0.967$\\pm$0.01 & *0.947$\\pm$0.02 & \\phantom{x}0.893$\\pm$0.03 & *0.938$\\pm$0.03 & *0.907$\\pm$0.04 & *0.912$\\pm$0.03 \\\\\n\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tU-net-CE & & \\phantom{x}0.962$\\pm$0.02 & \\phantom{x}0.923$\\pm$0.05 & \\phantom{x}0.878$\\pm$0.03 & \\phantom{x}0.907$\\pm$0.07 & \\phantom{x}0.840$\\pm$0.08 & \\phantom{x}0.885$\\pm$0.03 \\\\ \n\t\t\t& \\textbf{e-map} & \\phantom{x}0.966$\\pm$0.01 & *0.946$\\pm$0.02 & *0.890$\\pm$0.03 & *0.935$\\pm$0.04 & *0.901$\\pm$0.06 & *0.909$\\pm$0.03 \\\\ \n\t\t\t\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tU-net-CE+MC & & \\phantom{x}0.962$\\pm$0.02 & \\phantom{x}0.926$\\pm$0.04 & \\phantom{x}0.879$\\pm$0.03 & \\phantom{x}0.909$\\pm$0.07 & \\phantom{x}0.849$\\pm$0.07 & \\phantom{x}0.887$\\pm$0.03 \\\\ \n\t\t\t& \\textbf{b-map} & \\phantom{x}0.967$\\pm$0.01 & \\textbf{\\textcolor{red}{*0.954}}$\\pm$0.02 & *0.893$\\pm$0.03 & *0.940$\\pm$0.04 & \\textbf{\\textcolor{red}{*0.920}}$\\pm$0.04 & \\textbf{\\textcolor{red}{*0.914}}$\\pm$0.03 \\\\\n\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tU-net-soft-Dice & & \\phantom{x}\\textbf{0.965}$\\pm$0.02 & \\phantom{x}0.928$\\pm$0.04 & \\phantom{x}0.888$\\pm$0.03 & \\phantom{x}0.914$\\pm$0.08 & \\phantom{x}0.844$\\pm$0.09 & \\phantom{x}0.896$\\pm$0.03 \\\\ \n\t\t\t& \\textbf{e-map} & \\phantom{x}0.968$\\pm$0.01 & *0.943$\\pm$0.03 & *0.898$\\pm$0.03 & \\phantom{x}0.930$\\pm$0.05 & *0.886$\\pm$0.07 & *0.911$\\pm$0.03 \\\\ \n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tU-net-soft-Dice+MC & & \\phantom{x}\\textbf{0.965}$\\pm$0.02 & \\phantom{x}0.929$\\pm$0.04 & \\phantom{x}\\textbf{0.889}$\\pm$0.03 & \\phantom{x}0.911$\\pm$0.10 & \\phantom{x}0.845$\\pm$0.09 & \\phantom{x}0.897$\\pm$0.03 \\\\ \n\t\t\t& \\textbf{b-map} & \\phantom{x}\\textbf{\\textcolor{red}{0.968}}$\\pm$0.01 & *0.948$\\pm$0.03 & \\textbf{\\textcolor{red}{*0.900}}$\\pm$0.03 & \\phantom{x}0.928$\\pm$0.09 & *0.895$\\pm$0.06 & \\textbf{\\textcolor{red}{*0.914}}$\\pm$0.03 \\\\\n\t\t\t\n\t\t\t\\hdashline\n\t\t\tIsensee et al. 
& & \\phantom{x}0.966& \\phantom{x}0.941 & \\phantom{x}0.899 & \\phantom{x}0.924 & \\phantom{x}0.875\t& \\phantom{x}0.908 \\\\\n\t\t\t\n\t\t\t\\hline\n\t\t\t\n\t\t\\end{tabular}\n\t\t\\label{table_seg_perf_dsc}\n\t} \n\t\\vspace{13ex}\n\t\\centering\n\t\\tiny\n\t\\subfloat[Hausdorff Distance]{\n\t\t\\begin{tabular}{| C{1.6cm} | C{0.8cm} | R{1.7cm} R{1.7cm} R{1.7cm} | R{1.7cm} R{1.7cm} R{1.7cm} | }\n\t\t\t\\hline\n\t\t\t& \\textbf{Uncertainty}& \\multicolumn{3}{c|}{\\textbf{End-diastole}} & \\multicolumn{3}{c|}{\\textbf{End-systole}} \\\\\n\t\t\t\\textbf{Model} & \\textbf{map for detection} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} \\\\ \n\t\t\t\\hline\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tDN-Brier & & \\phantom{x}6.7$\\pm$3.1 & \\phantom{x}13.5$\\pm$5.9 & \\phantom{x}10.2$\\pm$6.9 & \\phantom{x}10.7$\\pm$7.7 & \\phantom{x}16.7$\\pm$6.8 & \\phantom{x}12.3$\\pm$5.8 \\\\\n\t\t\t& \\textbf{e-map} & *5.7$\\pm$2.7 & *11.7$\\pm$5.2 & *\\phantom{x}8.3$\\pm$5.9 & *\\phantom{x}8.0$\\pm$6.5 & *14.2$\\pm$5.6 & *\\phantom{x}9.7$\\pm$5.0 \\\\\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tDN-Brier+MC & & \\phantom{x}6.9$\\pm$3.3 & \\phantom{x}13.1$\\pm$5.2 & \\phantom{xx}9.9$\\pm$5.9 & \\phantom{xx}9.9$\\pm$5.7 & \\phantom{x}15.0$\\pm$6.1 & \\phantom{x}12.0$\\pm$5.2 \\\\\n\t\t\t& \\textbf{b-map} & *5.5$\\pm$2.6 & *10.6$\\pm$5.1 & *\\phantom{x}7.4$\\pm$4.2 & *\\phantom{x}7.5$\\pm$6.0 & *12.6$\\pm$5.6 & *\\phantom{x}8.8$\\pm$4.0 \\\\\n\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tDN-soft-Dice & & \\phantom{x}7.1$\\pm$3.5 & \\phantom{x}14.8$\\pm$6.8 & \\phantom{x}11.0$\\pm$6.6 & \\phantom{x}10.2$\\pm$5.6 & \\phantom{x}17.7$\\pm$7.8 & \\phantom{x}12.9$\\pm$6.2 \\\\\n\t\t\t& \\textbf{e-map} & *5.6$\\pm$2.8 & *12.6$\\pm$5.5 & *\\phantom{x}8.6$\\pm$4.6 & *\\phantom{x}8.0$\\pm$5.0 & *14.6$\\pm$5.9 & *\\phantom{x}9.6$\\pm$4.5 \\\\\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tDN-soft-Dice+MC & & \\phantom{x}7.7$\\pm$3.9 & \\phantom{x}14.4$\\pm$6.0 & \\phantom{x}10.5$\\pm$4.9 & \\phantom{x}10.1$\\pm$5.3 & \\phantom{x}17.2$\\pm$8.0 & \\phantom{x}12.5$\\pm$5.3 \\\\\n\t\t\t& \\textbf{b-map} & *6.3$\\pm$3.4 & *11.5$\\pm$4.0 & *\\phantom{x}8.6$\\pm$4.8 & *\\phantom{x}7.8$\\pm$4.6 & *13.6$\\pm$4.9 & *\\phantom{x}9.6$\\pm$4.7 \\\\\n\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tDRN-CE & & \\phantom{x}5.5$\\pm$2.6 & \\phantom{x}11.7$\\pm$5.4 & \\phantom{xx}8.2$\\pm$6.2 & \\phantom{xx}9.1$\\pm$6.4 & \\phantom{x}13.7$\\pm$5.6 & \\phantom{xx}8.9$\\pm$5.3 \\\\\n\t\t\t& \\textbf{e-map} & *4.5$\\pm$1.9 & *\\phantom{x}9.0$\\pm$4.5 & *\\phantom{x}6.3$\\pm$4.1 & *\\phantom{x}6.2$\\pm$4.4 & *11.1$\\pm$5.3 & \\textbf{\\textcolor{red}{*\\phantom{x}6.7}}$\\pm$4.2 \\\\\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tDRN-CE+MC & & \\phantom{x}5.6$\\pm$2.6 & \\phantom{x}11.9$\\pm$5.5 & \\phantom{xx}8.0$\\pm$5.9 & \\phantom{xx}8.7$\\pm$5.5 & \\phantom{x}13.5$\\pm$5.9 & \\phantom{xx}\\textbf{8.5}$\\pm$4.5 \\\\\t\t \n\t\t\t& \\textbf{b-map} & \\textbf{\\textcolor{red}{*4.2}}$\\pm$1.6 & \\textbf{\\textcolor{red}{*\\phantom{x}8.1}}$\\pm$3.7 & \\textbf{\\textcolor{red}{*\\phantom{x}6.1}}$\\pm$4.2 & \\textbf{\\textcolor{red}{*\\phantom{x}5.4}}$\\pm$3.6 & \\textbf{\\textcolor{red}{*10.1}}$\\pm$5.5 & *\\phantom{x}6.8$\\pm$3.8 
\\\\\n\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\n\t\t\t\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tDRN-soft-Dice & & \\phantom{x}\\textbf{5.5}$\\pm$2.8 & \\phantom{x}11.9$\\pm$6.1 & \\phantom{xx}\\textbf{7.7}$\\pm$5.9 & \\phantom{xx}8.5$\\pm$5.0 & \\phantom{x}13.5$\\pm$5.5 & \\phantom{xx}8.9$\\pm$5.1 \\\\\n\t\t\t& \\textbf{e-map} & *4.6$\\pm$2.2 & *\\phantom{x}9.4$\\pm$4.5 & \\phantom{xx}6.7$\\pm$4.7 & *\\phantom{x}6.7$\\pm$4.4 & *11.6$\\pm$5.4 & *\\phantom{x}7.0$\\pm$3.3 \\\\\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tDRN-soft-Dice+MC & & \\phantom{x}5.7$\\pm$3.2 & \\phantom{x}\\textbf{11.5}$\\pm$5.1 & \\phantom{xx}8.0$\\pm$5.5 & \\phantom{xx}\\textbf{8.3}$\\pm$4.5 & \\phantom{x}\\textbf{13.3}$\\pm$5.1 & \\phantom{xx}8.9$\\pm$5.1 \\\\\n\t\t\t& \\textbf{b-map} & *4.5$\\pm$2.2 & *\\phantom{x}9.3$\\pm$4.5 & *\\phantom{x}6.3$\\pm$4.0 & *\\phantom{x}6.2$\\pm$4.1 & *10.4$\\pm$5.0 & *\\phantom{x}7.0$\\pm$3.4 \\\\\n\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tU-net-CE & & \\phantom{x}6.4$\\pm$4.3 & \\phantom{x}15.7$\\pm$8.6 & \\phantom{xx}9.0$\\pm$6.0 & \\phantom{xx}9.7$\\pm$5.3 & \\phantom{x}17.0$\\pm$7.7 & \\phantom{x}12.7$\\pm$8.2 \\\\\n\t\t\t& \\textbf{e-map} & *4.9$\\pm$3.9 & *12.2$\\pm$8.1 & *\\phantom{x}7.1$\\pm$5.6 & *\\phantom{x}6.1$\\pm$3.2 & *12.6$\\pm$6.5 & *\\phantom{x}8.4$\\pm$6.3 \\\\\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tU-net-CE+MC & & \\phantom{x}6.2$\\pm$4.2 & \\phantom{x}15.3$\\pm$8.4 & \\phantom{xx}8.8$\\pm$5.8 & \\phantom{xx}9.2$\\pm$5.0 & \\phantom{x}16.5$\\pm$7.6 & \\phantom{x}12.0$\\pm$8.0 \\\\\n\t\t\t& \\textbf{b-map} & *4.3$\\pm$1.6 & *\\phantom{x}9.9$\\pm$6.6 & *\\phantom{x}6.7$\\pm$4.8 & *\\phantom{x}5.4$\\pm$2.8 & *10.3$\\pm$4.7 & *\\phantom{x}7.6$\\pm$6.2 \\\\\n\t\t\t\n\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\n\t\t\t\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tU-net-soft-Dice & & \\phantom{x}6.1$\\pm$3.9 & \\phantom{x}14.1$\\pm$7.6 & \\phantom{x}10.6$\\pm$8.4 & \\phantom{xx}9.2$\\pm$7.1 & \\phantom{x}16.3$\\pm$7.5 & \\phantom{x}12.6$\\pm$9.6 \\\\\n\t\t\t& \\textbf{e-map} & *4.6$\\pm$2.3 & *11.3$\\pm$7.2 & *\\phantom{x}7.5$\\pm$5.5 & *\\phantom{x}7.3$\\pm$6.5 & *13.7$\\pm$7.6 & *\\phantom{x}9.8$\\pm$8.0 \\\\\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tU-net-soft-Dice+MC & & \\phantom{x}6.2$\\pm$3.9 & \\phantom{x}14.1$\\pm$7.7 & \\phantom{x}10.5$\\pm$8.7 & \\phantom{xx}9.0$\\pm$7.0 & \\phantom{x}15.8$\\pm$7.5 & \\phantom{x}12.1$\\pm$9.2 \\\\\n\t\t\t& \\textbf{b-map} & *4.5$\\pm$2.1 & *10.4$\\pm$7.2 & *\\phantom{x}7.6$\\pm$7.0 & *\\phantom{x}7.3$\\pm$6.9 & *12.9$\\pm$6.6 & *\\phantom{x}9.8$\\pm$8.4 \\\\\n\t\t\t\\hdashline\n\t\t\tIsensee et al. & & \\phantom{x}7.1 & \\phantom{x}14.3 & \\phantom{xx}8.9 & \\phantom{xx}9.8 & \\phantom{x}16.3 & \\phantom{x}10.4 \\\\\n\t\t\t\\hline\n\t\t\t\n\t\t\\end{tabular}\n\t\t\\label{table_seg_perf_hd}\n\t} \n\t\n\\end{table*} \n\n\n\\section{Evaluation}\\label{evaluation}\n\nAutomatic segmentation performance, as well as performance after simulating the correction of detected segmentation failures and after manual expert correction was evaluated. For this, the \\num{3}D Dice-coefficient (DC) and \\num{3}D Hausdorff distance (HD) between manual and (corrected) automatic segmentation were computed. 
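\n\nFor reference, the following sketch illustrates how the \\num{3}D DC and HD can be computed per anatomical structure from binary segmentation masks. It uses NumPy and SciPy and is a simplified illustration rather than the exact evaluation code; the voxel spacing argument is assumed to be given in millimeters.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.ndimage import binary_erosion\nfrom scipy.spatial.distance import directed_hausdorff\n\ndef dice_3d(reference, prediction):\n    # 3D Dice coefficient between two boolean masks of one structure.\n    intersection = np.logical_and(reference, prediction).sum()\n    denom = reference.sum() + prediction.sum()\n    return 2.0 * intersection / denom if denom > 0 else 1.0\n\ndef hausdorff_3d(reference, prediction, spacing):\n    # Symmetric Hausdorff distance (mm) between the surfaces of two\n    # non-empty boolean 3D masks; spacing = (slice, row, column) in mm.\n    def surface_points(mask):\n        mask = mask.astype(bool)\n        surface = mask & ~binary_erosion(mask)\n        return np.argwhere(surface) * np.asarray(spacing)\n    r, p = surface_points(reference), surface_points(prediction)\n    return max(directed_hausdorff(r, p)[0], directed_hausdorff(p, r)[0])\n\\end{verbatim}\n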
Furthermore, the following clinical metrics were computed for manual and (corrected) automatic segmentation: left ventricle end-diastolic volume (EDV); left ventricle ejection fraction (EF); right ventricle EDV; right ventricle ejection fraction; and left ventricle myocardial mass. Following Bernard et al.~\\cite{bernard2018deep}, for each of the clinical metrics three performance indices were computed using the measurements based on manual and (corrected) automatic segmentation: Pearson correlation coefficient; mean difference (bias and standard deviation); and mean absolute error (MAE).\n\nTo evaluate the detection performance of the automatic method, precision-recall curves for the identification of slices that require correction were computed. A slice is considered positive in case it contains at least one image region with a segmentation failure. In clinical practice, identification of slices that contain segmentation failures might ease manual correction of automatic segmentations. To further evaluate detection performance, the detection rate of segmentation failures was assessed at the voxel level. More specifically, sensitivity as a function of the number of false positive regions was evaluated, because manual correction is presumed to be performed at this level.\n\nFinally, after simulation and manual correction of the automatically detected segmentation failures, segmentation was re-evaluated and the significance of the differences between the DCs, HDs and clinical metrics was tested with a Mann--Whitney U test.\n\n\n\\section{Experiments}\n\nFor stratified four-fold cross-validation, the dataset was split into training (75\\%) and test (25\\%) sets. The splitting was done on a patient level, so there was no overlap in patient data between training and test sets. Furthermore, patients were randomly chosen from each of the five patient groups w.r.t. disease. Each patient has one volume for the ED and one for the ES time point. \n\n\\subsection{Training segmentation networks} \\label{training_segmentation}\n\nDRN and U-net were trained with a patch size of \\num{128}$\\times$\\num{128} voxels, which is a multiple of the output stride of their contracting path. In the training of the dilated CNN (DN), samples of \\num{151}$\\times$\\num{151} voxels were used. Zero-padding to \\num{281}$\\times$\\num{281} was performed to accommodate the \\num{131}$\\times$\\num{131} voxel receptive field that is induced by the dilation factors. Training samples were randomly chosen from the training set and augmented by \\num{90} degree rotations of the images. All models were initially trained with three loss functions: soft-Dice\\cite{milletari2016v} (SD); cross-entropy (CE); and Brier loss\\cite{brier1950verification}. However, for the evaluation of the combined segmentation and detection approach, for each model architecture the two best performing loss functions were chosen: soft-Dice for all models; cross-entropy for DRN and U-net; and Brier loss for DN. 
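\n\nFor illustration, the three training criteria can be written down directly in terms of the reference labels and the softmax probability maps, as in the following NumPy sketch of the per-class quantities. The exact per-class definitions are given next; how the per-class terms are aggregated into the actual training losses is not shown, and all names are illustrative.\n\\begin{verbatim}\nimport numpy as np\n\ndef per_class_criteria(reference_onehot, probs, eps=1e-12):\n    # reference_onehot, probs: arrays of shape (C, H, W) holding one-hot\n    # reference labels and softmax probabilities for one image.\n    soft_dice, cross_entropy, brier = [], [], []\n    for r_c, p_c in zip(reference_onehot, probs):\n        soft_dice.append((r_c * p_c).sum() / (r_c.sum() + p_c.sum() + eps))\n        cross_entropy.append(-(r_c * np.log(p_c + eps)).sum())\n        brier.append(((r_c - p_c) ** 2).sum())\n    return np.array(soft_dice), np.array(cross_entropy), np.array(brier)\n\\end{verbatim}\n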
For completeness, we provide the equations for all three loss functions used.\n\n\n\\begingroup\n\\small\n\\begin{align}\n\\text{soft-Dice}_{c} = \\frac{\\sum_{i=1}^{N} R_{c}(i) \\; A_{c}(i) }{\\sum_{i=1}^{N} R_{c}(i) + \\sum_{i=1}^{N} A_{c}(i)} \\; ,\n\\end{align}\n\\endgroup\nwhere $N$ denotes the number of voxels in an image, $R_{c}$ is the binary reference image for class $c$ and $A_{c}$ is the probability map for class $c$.\n\n\\begingroup\n\\small\n\\begin{align}\n\\begin{split}\n\\text{Cross-Entropy}_{c} &= - \\; \\sum_{i=1}^{N} t_{ic} \\; \\log \\; p(y_i=c|x_i) \\; , \\\\& \\text{ where } t_{ic} = 1 \\text{ if } y_{i}=c, \\text{ and \\num{0} otherwise.}\n\\end{split}\n\\end{align}\n\\endgroup\n\n\\begingroup\n\\small\n\\begin{align}\n\\begin{split}\n\\text{Brier}_{c} &= \\sum_{i=1}^{N} \\big(t_{ic} - p(y_i=c|x_{i}) \\big)^2 \\; , \\\\ &\\text{ where } t_{ic} = 1 \\text{ if } y_{i}=c, \\text{ and \\num{0} otherwise.}\n\\end{split}\n\\end{align}\n\\endgroup\nHere, $N$ denotes the number of voxels in an image and $p$ denotes the probability for a specific voxel $x_i$ with corresponding reference label $y_i$ for class $c$.\n\nChoosing the Brier loss to train the DN model instead of CE was motivated by our preliminary work, which showed that segmentation performance of the DN model was best when trained with the Brier loss~\\cite{sander2019towards}.\n\nAll models were trained for 100,000 iterations. DRN and U-net were trained with a learning rate of \\num{0.001}, which decayed by a factor of \\num{0.1} after every 25,000 steps. Training DN used the snapshot ensemble technique~\\cite{huang2017snapshot}, where after every 10,000 iterations the learning rate was reset to its original value of \\num{0.02}.\n\nAll three segmentation networks were trained using mini-batch stochastic gradient descent with a batch size of \\num{16}. Network parameters were optimized using the Adam optimizer \\cite{kingmadp}. Furthermore, models were regularized with weight decay to increase generalization performance. \n\n\\subsection{Training detection network}\\label{label_training_detection}\n\nTo train the detection model, a subset of the errors made by the segmentation model is used. Segmentation errors that presumably are within the range of inter-observer variability and therefore do not necessarily require correction (tolerated errors) are excluded from the set of errors that need to be detected and corrected (segmentation failures). To distinguish between tolerated errors and the set of segmentation failures $\\mathcal{S}_I$, the Euclidean distance of an incorrectly segmented voxel to the boundary of the reference target structure is used. For each anatomical structure, a \\num{2}D distance transform map is computed that provides for each voxel the distance to the anatomical structure boundary. To differentiate between tolerated errors and the set of segmentation failures $\\mathcal{S}_I$, an acceptable tolerance threshold is applied. A stricter threshold is used for errors located inside the anatomical structure than for those located outside, because automatic segmentation methods have a tendency to undersegment cardiac structures in CMRI. Hence, in all experiments the acceptable tolerance threshold was set to three voxels (equivalent to on average \\SI{4.65}{\\milli\\meter}) and two voxels (equivalent to on average \\SI{3.1}{\\milli\\meter}) for segmentation errors located outside and inside the target structure, respectively. 
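\n\nA minimal sketch of the distance-based part of this definition is given below for a single structure in one slice, using SciPy's Euclidean distance transform. Boolean \\num{2}D masks are assumed and all names are illustrative; the additional cluster-size and slice-position rules described next are not included.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.ndimage import distance_transform_edt\n\ndef candidate_failures(reference, prediction, tol_out=3, tol_in=2):\n    # Segmentation errors farther from the reference boundary than the\n    # tolerance (in voxels) are kept as candidate segmentation failures.\n    errors = reference ^ prediction\n    dist_outside = distance_transform_edt(~reference)  # 0 inside\n    dist_inside = distance_transform_edt(reference)    # 0 outside\n    outside_fail = errors & ~reference & (dist_outside > tol_out)\n    inside_fail = errors & reference & (dist_inside > tol_in)\n    return outside_fail | inside_fail\n\\end{verbatim}\n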
Furthermore, a segmentation error only belongs to $\\mathcal{S}_I$ if it is part of a \\num{2}D \\num{4}-connected cluster of minimum size \\num{10} voxels. This value was found in preliminary experiments by evaluating values $\\{1, 5, 10, 15, 20\\}$. However, for apical slices all segmentation errors are included in $\\mathcal{S}_I$ regardless of fulfilling the minimum size requirement, because in these slices anatomical structures are relatively small and manual segmentation is prone to large inter-observer variability~\\cite{bernard2018deep}. Finally, segmentation errors located in slices above the base or below the apex are always included in the set of segmentation failures.\n\nUsing the set $\\mathcal{S}_I$, a binary label $t_j$ is assigned to each patch $P_j^{(I)}$ indicating whether $P_j^{(I)}$ contains at least one voxel belonging to $\\mathcal{S}_I$, where $j \\in \\{1 \\dots M \\}$ and $M$ denotes the number of patches in a slice $I$. \n\nThe detection network is trained by minimizing a weighted binary cross-entropy loss:\n\n\\begingroup\n\\small\n\\begin{equation} \\label{eq_detection_loss}\n\\mathcal{L}_{DT} = - \\sum_{j \\in P^{(I)}} \\Big( w_{pos} \\; t_j \\log p_j + (1 - t_j) \\log (1 - p_j) \\Big) \\; ,\n\\end{equation}\n\\endgroup\n\nwhere $w_{pos}$ represents a scalar weight, $t_j$ denotes the binary reference label and $p_j$ is the softmax probability indicating whether a particular image region $P_j^{(I)}$ contains at least one segmentation failure. The average percentage of regions in a patient volume containing segmentation failures ranges from \\num{1.5} to \\num{3} percent depending on the segmentation architecture and loss function used to train the segmentation model. To train a detection network, $w_{pos}$ was set to the ratio of the average percentage of negative samples to the average percentage of positive samples.\n\nEach fold was trained using spatial uncertainty maps and automatic segmentation masks generated while training the segmentation networks. Hence, there was no overlap in patient data between training and test set across segmentation and detection tasks. In total, \\num{12} detection models were trained and evaluated, resulting from the different combinations of \\num{3} model architectures (DRN, DN and U-net), \\num{2} loss functions (DRN and U-net with CE and soft-Dice, DN with Brier and soft-Dice) and \\num{2} uncertainty maps (e-maps, b-maps).\n\n\n\\begin{table*}\n\t\\caption{Segmentation performance of different combinations of model architectures, loss functions and evaluation modes (without or with MC dropout (MC) enabled during testing) in terms of clinical metrics: left ventricle (LV) end-diastolic volume (EDV); LV ejection fraction (EF); right ventricle (RV) EDV; RV ejection fraction; and LV myocardial mass. Quantitative results compare clinical metrics based on reference segmentations with 1) automatic segmentations and 2) simulated manual correction of automatic segmentations using spatial uncertainty maps. $\\rho$ denotes the Pearson correlation coefficient, \\textit{bias} denotes the mean difference between the two measurements (mean $\\pm$ standard deviation) and \\textit{MAE} denotes the mean absolute error between the two measurements. Each combination comprises a block of two rows. A row in which column \\textit{Uncertainty map for detection} indicates e- or b-map shows results for the combined segmentation and detection approach. Numbers accentuated in black\\/bold are ranked first in the segmentation-only task. 
Numbers in red indicate statistical significant at \\num{5}\\% level w.r.t. the segmentation-only approach for the specific clinical metric. Best viewed in color.}\n\t\\label{table_cardiac_function_indices}\n\t\\tiny\n\t\\centering\n\t\\begin{tabular}{| C{1.6cm} | C{1.cm} | C{0.3cm} c C{0.3cm} | C{0.2cm} C{0.8cm} C{0.3cm} | C{0.3cm} C{0.8cm} C{0.3cm} | C{0.3cm} C{0.8cm} C{0.3cm} | C{0.3cm} C{0.8cm} C{0.3cm} |}\n\t\t\\hline\n\t\t& \\multicolumn{1}{c|}{\\thead{\\textbf{Uncertainty} \\\\ \\textbf{map for} \\\\ \\textbf{detection}}} & \\multicolumn{3}{c}{\\textbf{LV$_{EDV}$}} & \\multicolumn{3}{c}{\\textbf{LV$_{EF}$}} & \\multicolumn{3}{c}{\\textbf{RV$_{EDV}$}} & \\multicolumn{3}{c}{\\textbf{RV$_{EF}$}} & \\multicolumn{3}{c|}{\\textbf{LVM$_{Mass}$}} \\\\\n\t\n\t\n\t\t\\textbf{Method} & & \\textbf{$\\rho$} & \\multicolumn{1}{l}{\\textbf{bias$\\pm\\sigma$}} & \\textbf{MAE} & \\textbf{$\\rho$} & \\textbf{bias$\\pm \\sigma$} & \\textbf{MAE} & \\textbf{$\\rho$} & \\textbf{bias$\\pm \\sigma$} & \\textbf{MAE} & \\textbf{$\\rho$} & \\textbf{bias$\\pm \\sigma$} & \\textbf{MAE} & \\multicolumn{1}{c}{\\textbf{$\\rho$}} & \\textbf{bias$\\pm \\sigma$} & \\textbf{MAE} \\\\\n\t\t\\hline\n\t\t\\rowcolor{LightGreen}\n\t\tDN-Brier & & 0.997 & \\phantom{x}\\textbf{0.0$\\pm$6.1} & 4.5 & 0.892 & \\phantom{x}2.2$\\pm$\\phantom{x}9.2 & 4.2 & 0.977 & \\textbf{-0.2$\\pm$11.8} & \\phantom{x}8.5 & 0.834 & 5.3$\\pm$10.3 & \\phantom{x}8.5 & 0.984 & -2.7$\\pm$\\phantom{x}9.0 & \\phantom{x}\\textbf{7.0} \\\\\n\t\t& e-map & 0.997 & \\phantom{x}0.0$\\pm$5.5 & 4.0 & 0.982 & \\phantom{x}0.1$\\pm$\\phantom{x}3.8 & 2.2 & 0.992 & \\phantom{x}0.0$\\pm$\\phantom{x}6.9 & \\phantom{x}5.2 & 0.955 & 1.9$\\pm$\\phantom{x}5.5 & \\phantom{x}4.1 & 0.986 & -2.1$\\pm$\\phantom{x}8.4 & \\phantom{x}6.6 \\\\\n\t\t\n\t\t\\hdashline[1pt\/2pt]\n\t\t\\rowcolor{LightCyan}\n\t\tDN-Brier+MC & & 0.997 & \\phantom{x}1.6$\\pm$6.0 & 4.4 & 0.921 & \\phantom{x}1.1$\\pm$\\phantom{x}7.9 & 3.9 & 0.975 & \\phantom{x}6.7$\\pm$12.4 & \\phantom{x}9.6 & 0.854 & 3.5$\\pm$\\phantom{x}9.9 & \\phantom{x}7.7 & 0.984 & \\phantom{x}0.7$\\pm$\\phantom{x}9.2 & \\phantom{x}7.1 \\\\\n\t\t& b-map & 0.998 & \\phantom{x}1.0$\\pm$5.3 & 3.9 & 0.991 & \\phantom{x}0.0$\\pm$\\phantom{x}2.7 & 1.9 & 0.993 & \\phantom{x}3.2$\\pm$\\phantom{x}6.7 & \\phantom{x}5.7 & 0.975 & 0.8$\\pm$\\phantom{x}4.0 & \\phantom{x}3.0 & 0.987 & \\phantom{x}0.1$\\pm$\\phantom{x}8.3 & \\phantom{x}6.5 \\\\\n\t\t\\hdashline[1pt\/2pt]\n\t\t\\rowcolor{LightGreen}\n\t\tDN-soft-Dice & & 0.996 & \\phantom{x}1.2$\\pm$6.5 & 4.9 & 0.918 & \\phantom{x}1.5$\\pm$\\phantom{x}8.0 & 3.9 & 0.972 & \\phantom{x}\\textbf{0.2$\\pm$13.0} & \\phantom{x}9.6 & 0.802 & 7.2$\\pm$11.3 & 10.2 & 0.982 & -4.5$\\pm$\\phantom{x}9.6 & \\phantom{x}8.5 \\\\\n\t\t& e-map & 0.997 & \\phantom{x}1.0$\\pm$5.5 & 4.2 & 0.989 & \\phantom{x}0.2$\\pm$\\phantom{x}3.0 & 2.2 & 0.990 & \\phantom{x}0.2$\\pm$\\phantom{x}7.6 & \\phantom{x}5.9 & \\textcolor{red}{0.940} & \\textcolor{red}{3.3$\\pm$\\phantom{x}6.2} & \\textcolor{red}{\\phantom{x}5.2} & 0.983 & -4.3$\\pm$\\phantom{x}9.3 & \\phantom{x}8.2 \\\\\n\t\t\n\t\t\\hdashline[1pt\/2pt]\n\t\t\\rowcolor{LightCyan}\n\t\tDN-soft-Dice+MC & & 0.996 & \\phantom{x}3.2$\\pm$7.1 & 5.6 & 0.958 & \\phantom{x}0.4$\\pm$\\phantom{x}5.7 & 3.6 & 0.964 & \\phantom{x}8.1$\\pm$14.9 & 12.3 & 0.827 & 4.8$\\pm$11.0 & \\phantom{x}8.9 & 0.978 & -0.7$\\pm$10.7 & \\phantom{x}8.3 \\\\\n\t\t& b-map & 0.997 & \\phantom{x}2.2$\\pm$5.6 & 4.4 & 0.988 & -0.2$\\pm$\\phantom{x}3.1 & 2.2 & 0.990 & \\phantom{x}4.0$\\pm$\\phantom{x}7.7 & 
\\phantom{x}7.0 & 0.959 & 1.8$\\pm$\\phantom{x}5.1 & \\phantom{x}4.1 & 0.982 & -1.4$\\pm$\\phantom{x}9.5 & \\phantom{x}7.6 \\\\\n\t\t\n\t\t\\hdashline[5pt\/5pt]\n\t\t\\rowcolor{LightGreen}\n\t\tDRN-CE & & 0.997 & -0.2$\\pm$5.5 & 4.1 & 0.968 & \\phantom{x}1.2$\\pm$\\phantom{x}5.0 & 3.5 & 0.976 & \\phantom{x}1.5$\\pm$12.1 & \\phantom{x}8.5 & 0.870 & 1.3$\\pm$\\phantom{x}9.2 & \\phantom{x}6.9 & 0.980 & \\phantom{x}\\textbf{0.6$\\pm$10.2} & \\phantom{x}7.8 \\\\\n\t\t& e-map & 0.998 & \\phantom{x}0.2$\\pm$4.5 & 3.5 & 0.992 & \\phantom{x}0.2$\\pm$\\phantom{x}2.5 & 1.9 & 0.988 & \\phantom{x}1.4$\\pm$\\phantom{x}8.5 & \\phantom{x}6.2 & 0.952 & 0.8$\\pm$\\phantom{x}5.6 & \\phantom{x}4.2 & 0.985 & \\phantom{x}0.4$\\pm$\\phantom{x}8.7 & \\phantom{x}6.8 \\\\\n\t\t\n\t\t\\hdashline[1pt\/2pt]\n\t\t\\rowcolor{LightCyan}\n\t\tDRN-CE+MC & & \\textbf{0.998} & \\phantom{x}1.0$\\pm$4.9 & \\textbf{3.9} & 0.972 & \\phantom{x}0.8$\\pm$\\phantom{x}4.6 & 3.1 & 0.973 & \\phantom{x}4.8$\\pm$12.8 & \\phantom{x}9.4 & 0.876 & \\textbf{0.4$\\pm$\\phantom{x}9.1} & \\phantom{x}\\textbf{6.6} & 0.981 & \\phantom{x}1.9$\\pm$\\phantom{x}9.9 & \\phantom{x}7.6 \\\\\n\t\t& b-map & 0.998 & \\phantom{x}0.7$\\pm$4.6 & 3.6 & 0.992 & -0.1$\\pm$\\phantom{x}2.5 & 1.8 & 0.992 & \\phantom{x}2.9$\\pm$\\phantom{x}6.9 & \\phantom{x}5.7 & 0.967 & 0.6$\\pm$\\phantom{x}4.6 & \\phantom{x}3.4 & 0.987 & \\phantom{x}1.2$\\pm$\\phantom{x}8.3 & \\phantom{x}6.6 \\\\\n\t\t\\hdashline[1pt\/2pt]\n\t\t\n\t\t\\rowcolor{LightGreen}\n\t\tDRN-soft-Dice & & \\textbf{0.998} & \\phantom{x}0.8$\\pm$5.1 & 4.0 & 0.976 & \\phantom{x}0.2$\\pm$\\phantom{x}4.4 & 3.0 & 0.980 & \\phantom{x}\\textbf{0.2$\\pm$11.0} & \\phantom{x}\\textbf{7.5} & \\textbf{0.882} & 3.1$\\pm$\\phantom{x}8.7 & \\phantom{x}6.8 & 0.984 & -3.5$\\pm$\\phantom{x}9.1 & \\phantom{x}7.5 \\\\\n\t\t& e-map & 0.998 & \\phantom{x}0.7$\\pm$4.4 & 3.5 & 0.987 & -0.1$\\pm$\\phantom{x}3.1 & 2.2 & 0.987 & \\phantom{x}0.1$\\pm$\\phantom{x}9.1 & \\phantom{x}6.4 & 0.938 & 1.9$\\pm$\\phantom{x}6.3 & \\phantom{x}4.9 & 0.986 & -3.5$\\pm$\\phantom{x}8.7 & \\phantom{x}7.1 \\\\\n\t\t\n\t\t\\hdashline[1pt\/2pt]\n\t\t\\rowcolor{LightCyan}\n\t\tDRN-soft-Dice+MC & & \\textbf{0.998} & \\phantom{x}1.8$\\pm$5.1 & \\textbf{3.9} & \\textbf{0.979} & -0.3$\\pm$\\phantom{x}4.1 & 2.9 & 0.977 & \\phantom{x}3.5$\\pm$11.7 & \\phantom{x}8.1 & 0.868 & 1.7$\\pm$\\phantom{x}9.5 & \\phantom{x}6.8 & 0.983 & -1.4$\\pm$\\phantom{x}9.5 & \\phantom{x}7.4 \\\\\n\t\t& b-map & 0.998 & \\phantom{x}1.7$\\pm$4.7 & 3.7 & 0.990 & -0.2$\\pm$\\phantom{x}2.9 & 2.1 & 0.989 & \\phantom{x}2.3$\\pm$\\phantom{x}8.1 & \\phantom{x}5.8 & 0.959 & 0.8$\\pm$\\phantom{x}5.2 &\\phantom{x}3.8 & 0.986 & -1.3$\\pm$\\phantom{x}8.5 & \\phantom{x}6.8 \\\\\n\t\t\n\t\t\\hdashline[5pt\/5pt]\n\t\t\\rowcolor{LightGreen}\n\t\tU-net-CE & & 0.995 & -4.7$\\pm$7.2 & 6.1 & 0.954 & \\phantom{x}4.1$\\pm$\\phantom{x}6.0 & 5.1 & 0.963 & -7.6$\\pm$15.2 & 12.1 & 0.870 & 5.6$\\pm$\\phantom{x}9.0 & \\phantom{x}8.1 & 0.971 & -8.5$\\pm$12.2 & 11.5 \\\\\n\t\t& e-map & 0.998 & -3.2$\\pm$4.8 & 4.4 & 0.992 & \\phantom{x}1.7$\\pm$\\phantom{x}2.6 & 2.4 & 0.987 & -4.1$\\pm$\\phantom{x}9.1 & \\phantom{x}6.7 & 0.957 & 2.6$\\pm$\\phantom{x}5.2 & \\phantom{x}4.1 & 0.983 & -5.7$\\pm$\\phantom{x}9.3 & \\phantom{x}8.2 \\\\\n\t\t\n\t\t\\hdashline[1pt\/2pt]\n\t\t\\rowcolor{LightCyan}\n\t\tU-net-CE+MC & & 0.995 & -4.3$\\pm$7.2 & 5.9 & 0.958 & \\phantom{x}3.8$\\pm$\\phantom{x}5.8 & 4.9 & 0.968 & -4.8$\\pm$14.1 & 10.7 & 0.867 & 5.0$\\pm$\\phantom{x}9.1 & \\phantom{x}7.9 & 0.972 & -8.1$\\pm$12.0 & 11.1 \\\\\n\t\t& 
b-map & 0.997 & -3.5$\\pm$5.5 & 4.9 & 0.990 & \\phantom{x}1.6$\\pm$\\phantom{x}2.9 & 2.6 & 0.992 & -1.8$\\pm$\\phantom{x}7.0 & \\phantom{x}4.9 & 0.974 & 1.6$\\pm$\\phantom{x}4.1 & \\phantom{x}3.3 & 0.981 & -6.8$\\pm$10.0 & \\phantom{x}9.4 \\\\\n\t\t\\hdashline[1pt\/2pt]\n\t\t\n\t\t\\rowcolor{LightGreen}\n\t\tU-net-soft-Dice & & 0.997 & -2.0$\\pm$6.0 & 4.5 & 0.853 & \\phantom{x}3.6$\\pm$10.9 & 5.0 & 0.968 & -1.0$\\pm$14.1 & 10.0 & 0.782 & 4.8$\\pm$11.6 & \\phantom{x}9.0 & \\textbf{0.985} & -7.7$\\pm$\\phantom{x}8.8 & \\phantom{x}9.2 \\\\\n\t\t& e-map & 0.997 & -1.7$\\pm$5.3 & 4.1 & 0.969 & \\phantom{x}1.9$\\pm$\\phantom{x}4.9 & 3.3 & 0.981 & -0.1$\\pm$10.9 & \\phantom{x}7.5 & 0.919 & 3.3$\\pm$\\phantom{x}7.0 & \\phantom{x}5.9 & 0.984 & -6.6$\\pm$\\phantom{x}9.0 & \\phantom{x}8.7 \\\\\n\t\t\n\t\t\\hdashline[1pt\/2pt]\n\t\t\\rowcolor{LightCyan}\n\t\tU-net-soft-Dice+MC & & 0.997 & -1.8$\\pm$5.9 & 4.4 & 0.941 & \\phantom{x}3.0$\\pm$\\phantom{x}6.7 & 4.4 & 0.969 & \\phantom{x}0.6$\\pm$13.9 & \\phantom{x}9.8 & 0.792 & 4.4$\\pm$11.3 & \\phantom{x}8.7 & \\textbf{0.985} & -7.2$\\pm$\\phantom{x}8.9 & \\phantom{x}8.9 \\\\\n\t\t& b-map & 0.997 & -1.5$\\pm$5.3 & 4.1 & 0.979 & \\phantom{x}1.1$\\pm$\\phantom{x}4.1 & 2.9 & 0.985 & \\phantom{x}1.2$\\pm$\\phantom{x}9.4 & \\phantom{x}6.5 & 0.939 & 2.9$\\pm$\\phantom{x}6.2 & \\phantom{x}4.9 & 0.984 & -5.9$\\pm$\\phantom{x}9.0 & \\phantom{x}8.5 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\\end{table*}\n\n\nThe patches used to train the network were selected randomly (\\nicefrac{2}{3}), or were forced (\\nicefrac{1}{3}) to contain at least one segmentation failure by randomly selecting a scan containing segmentation failure, followed by random sampling of a patch containing at least one segmentation failure. During training the patch size was fixed to \\num{80}$\\times$\\num{80} voxels. To reduce the number of background voxels during testing, inputs were cropped based on a minimal enclosing, rectangular bounding box that was placed around the automatic segmentation mask. Inputs always had a minimum size of \\num{80}$\\times$\\num{80} voxels or were forced to a multiple of the output grid spacing of eight voxels in both direction required by the patch-based detection network. The patches of size \\num{8}$\\times$\\num{8} voxels did not overlap. In cases where the automatic segmentation mask only contains background voxels (scans above the base or below apex of the heart) input scans were center-cropped to a size of \\num{80}$\\times$\\num{80} voxels. \n\nModels were trained for 20,000 iterations using mini-batch stochastic gradient descent with batch-size \\num{32} and Adam as optimizer\\cite{kingmadp}. Learning rate was set to \\num{0.0001} and decayed with a factor of \\num{0.1} after \\num{10,000} steps. Furthermore, dropout percentage was set to \\num{0.5} and weight decay was applied to increase generalization performance.\n\n\\begin{figure*}[t]\n\t\\center\n\t\\subfloat[]{\\includegraphics[width=3.4in, height=1.7in]{figures\/dt\/all_models_froc_voxel_detection_rate.pdf}%\n\t\t\\label{fig_froc_voxel_detection}}\n\t\\subfloat[]{\\includegraphics[width=3.4in, height=1.7in]{figures\/dt\/all_models_slices_prec_recall.pdf}%\n\t\t\\label{fig_prec_rec_slice_detection}}\n\t\n\t\\caption{Detection performance of segmentation failures generated by different combination of segmentation architectures and loss functions. (a) Sensitivity for detection of segmentation failures on voxel level (y-axis) as a function of number of false positive image regions (x-axis). 
(b) Precision-recall curve for detection of slices containing segmentation failures (where AP denotes average precision). Results are split between entropy and Bayesian uncertainty maps. Each figure contains a curve for the six possible combinations of models (three) and loss functions (two). SD denotes soft-Dice and CE cross-entropy, respectively.}\n\t\\label{fig_dt_perf_all_models}\n\\end{figure*}\n\n\\subsection{Segmentation using correction of the detected segmentation failures}\n\nTo investigate whether correction of detected segmentation failures increases segmentation performance, two scenarios were investigated. In the first scenario, manual correction of the detected failures by an expert was simulated for all images at ED and ES time points of the ACDC dataset. For this purpose, in image regions that were detected to contain segmentation failures, predicted labels were replaced with reference labels. In the second scenario, manual correction of the detected failures was performed by an expert in a random subset of \\num{50} patients of the ACDC dataset. The expert was shown CMRI slices for ED and ES time points together with corresponding automatic segmentation masks for the RV, LV and LV myocardium. Image regions detected to contain segmentation failures were indicated in the slices, and the expert was only allowed to change the automatic segmentations in these indicated regions. Annotation was performed following the protocol described in~\\cite{bernard2018deep}. Furthermore, the expert was able to navigate through all CMRI slices of the corresponding ED and ES volumes.\n\n\\section{Results}\n\nIn this section we first present results for the segmentation-only task, followed by a description of the combined segmentation and detection results. \n\n\\subsection{Segmentation-only approach} \\label{results_seg_only}\n\nTable~\\ref{table_overall_segmentation_performance} lists quantitative results for the segmentation-only and the combined segmentation and detection approach in terms of Dice coefficient and Hausdorff distance. These results show that DRN and U-net achieved similar Dice coefficients and outperformed the DN network for all anatomical structures at end-systole. Differences in the achieved Hausdorff distances among the methods are present for all anatomical structures and for both time points. The DRN model achieved the lowest and the DN network the highest Hausdorff distance.\n\nTable~\\ref{table_cardiac_function_indices} lists results of the evaluation in terms of clinical metrics. These results reveal noticeable differences between models for the ejection fraction (EF) of the left and right ventricle. We can observe that U-net trained with the soft-Dice and the Dilated Network (DN) trained with Brier or soft-Dice loss achieved considerably lower accuracy for LV and RV ejection fraction compared to DRN. Overall, the DRN model achieved the highest performance for all clinical metrics.\n\n\\noindent \\textbf{Effect of model architecture on segmentation}: Although quantitative differences between models are small, qualitative evaluation reveals that automatic segmentations differ substantially between the models. Figure~\\ref{fig_seg_qualitative_results} shows that, especially in regions where the models perform poorly (apical and basal slices), the DN model more often produced anatomically implausible segmentations compared to the DRN and U-net. 
This seems to be correlated with the performance differences in Hausdorff distance.\n\n\\noindent \\textbf{Effect of loss function on segmentation}: The results indicate that the choice of loss function only slightly affects the segmentation performance. DRN and U-net perform marginally better when trained with soft-Dice compared to cross-entropy whereas DN performs better when trained with Brier loss than with soft-Dice. For DN this is most pronounced for the RV at ES.\n\nA considerable effect of the loss function on the accuracy of the LV and RV ejection fraction can be observed for the U-net model. On both metrics U-net achieved the lowest accuracy of all models when trained with the soft-Dice loss.\n\n\\noindent \\textbf{Effect of MC dropout on segmentation}: The results show that enabling MC-dropout during testing seems to result in slightly improved HD while it does not affect DC. \n\\begin{table}\n\t\\caption{Average precision and percentage of slices with segmentation failures generated by Dilated Network (DN), Dilated Residual Network (DRN) and U-net when trained with soft-Dice (SD), CE or Brier loss. Per patient, average precision of detected slices with failure using e- or b-maps (\\num{2}$^{nd}$ and \\num{3}$^{rd}$ columns). Per patient, average percentage of slices containing segmentation failures (reference for detection task) (\\num{4}$^{th}$ and \\num{5}$^{th}$ columns).}\n\t\\label{table_evaluation_slice_detection}\n\t\\begin{tabular}{l C{1.4cm} C{1.4cm} C{1.4cm} C{1.4cm} }\n\t\t\\textbf{Model} & \\multicolumn{2}{c}{\\textbf{Average precision}} & \\multicolumn{2}{c}{ \\thead{\\textbf{\\% of slices} \\\\ \\textbf{with segmentation failures} }} \\\\\n\t\t& e-map & b-map & e-map & b-map \\\\\n\t\t\\hline\n\t\tDN-Brier & 84.0 & 83.0 & 53.7 & 52.4 \\\\\n\t\tDN-SD & 87.0 & 85.0 & 58.3 & 58.1 \\\\\n\t\t\\hdashline\n\t\tDRN-CE & 75.0 & 69.0 & 39.5 & 39.4 \\\\\n\t\tDRN-SD & 67.0 & 67.0 & 34.9 & 33.7\\\\\n\t\t\\hdashline\n\t\tU-net-CE & 81.0 & 75.0 & 54.8 & 52.5 \\\\\n\t\tU-net-SD & 76.0 & 76.0 & 46.7 & 45.5 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\\end{table}\n\n\\subsection{Detection of segmentation failures}\n\n\\noindent \\textbf{Detection of segmentation failures on voxel level}: To evaluate detection performance of segmentation failures on voxel level Figure~\\ref{fig_froc_voxel_detection} shows average voxel detection rate as a function of false positively detected regions. This was done for each combination of model architecture and loss function exploiting e- (Figure~\\ref{fig_froc_voxel_detection}, left) or b-maps (Figure~\\ref{fig_froc_voxel_detection}, right). These results show that detection performance of segmentation failures depends on segmentation model architecture, loss function and uncertainty map. \n\nThe influence of (segmentation) model architecture and loss function on detection performance is slightly stronger when e-maps were used as input for the detection task compared to b-maps. Detection rates are consistently lower when segmentation failures originate from segmentation models trained with soft-Dice loss compared to models trained with CE or Brier loss. 
Overall, detection rates are higher when b-maps were exploited for the detection task compared to e-maps.\n\n\\begin{table*}\n\t\\caption{Comparing performance of segmentation-only approach (auto-only) with combined segmentation and detection approach for two scenarios: simulated correction of detected segmentation failures (auto$+$simulation); and manual correction of detected segmentation failures by an expert (auto$+$expert). Automatic segmentations were obtained from a U-net trained with cross-entropy. Evaluation was performed on a subset of \\num{50} patients from the ACDC dataset. Scenarios are compared against segmentation-only approach (auto-only) in terms of (a) Dice Coefficient (b) Hausdorff Distance and (c) Clinical metrics. Results obtained from simulated manual correction represent an upper bound on the maximum achievable performance. Detection network was trained with e-maps. Number with asterisk indicates statistical significant at \\num{5}\\% level w.r.t. the segmentation-only approach. Best viewed in color.}\n\t\\label{table_manual_corr_performance}\n\t\\centering\n\t\\small\n\t\\subfloat[\\textbf{Dice coefficient:} Mean $\\pm$ standard deviation for left ventricle (LV), right ventricle (RV) and left ventricle myocardium (LVM).]{\n\t\t\\begin{tabular}{| C{2.cm} | R{1.7cm} R{1.7cm} R{1.7cm} | R{1.7cm} R{1.7cm} R{1.7cm} | }\n\t\t\t\\hline\n\t\t\t& \\multicolumn{3}{c|}{\\textbf{End-diastole}} & \\multicolumn{3}{c|}{\\textbf{End-systole}} \\\\\n\t\t\t\\textbf{Scenario} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} \\\\ \n\t\t\t\\hline\n\t\t\tauto-only & \\phantom{x}0.964$\\pm$0.02 & \\phantom{x}0.927$\\pm$0.04 & \\phantom{x}0.883$\\pm$0.03 & \\phantom{x}0.916$\\pm$0.05 & \\phantom{x}0.854$\\pm$0.08 & \\phantom{x}0.886$\\pm$0.04 \\\\ \n\t\t\tauto$+$simulation & \\phantom{x}0.967$\\pm$0.01 & *0.948$\\pm$0.03 & *0.894$\\pm$0.03 & *0.939$\\pm$0.03 & *0.915$\\pm$0.04 & *0.910$\\pm$0.03 \\\\ \n\t\t\n\t\t\tauto$+$expert & \\phantom{x}0.965$\\pm$0.02 & \\phantom{x}0.940$\\pm$0.03 & \\phantom{x}0.885$\\pm$0.03 & \\phantom{x}0.927$\\pm$0.04 & \\phantom{x}0.868$\\pm$0.07 & \\phantom{x}0.894$\\pm$0.03 \\\\ \n\t\t\t\n\t\t\t\t\\hline\n\t\t\t\n\t\t\\end{tabular}\n\t\t\\label{table_manual_seg_perf_dsc}\n\t} \n\t\n\t\\centering\n\t\n\t\\subfloat[\\textbf{Hausdorff Distance:} Mean $\\pm$ standard deviation for left ventricle (LV), right ventricle (RV) and left ventricle myocardium (LVM).]{\n\t\t\\begin{tabular}{| C{2.cm} | R{1.7cm} R{1.7cm} R{1.7cm} | R{1.7cm} R{1.7cm} R{1.7cm} | }\n\t\t\t\\hline\n\t\t\t& \\multicolumn{3}{c|}{\\textbf{End-diastole}} & \\multicolumn{3}{c|}{\\textbf{End-systole}} \\\\\n\t\t\t\\textbf{Scenario} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} \\\\ \n\t\t\t\\hline\n\t\t\tauto-only & \\phantom{x}5.6$\\pm$3.3 & \\phantom{x}15.7$\\pm$9.7 & \\phantom{x}8.5$\\pm$6.4 & \\phantom{x}9.2$\\pm$5.8 & \\phantom{x}16.5$\\pm$8.8 & \\phantom{x}13.4$\\pm$10.5 \\\\\n\t\t\tauto$+$simulation & \\phantom{x}4.5$\\pm$2.1 & *\\phantom{x}9.0$\\pm$4.6 & *5.9$\\pm$3.4 & *5.2$\\pm$2.5 & *10.3$\\pm$3.7 & *\\phantom{x}6.6$\\pm$2.9 \\\\\n\t\t\n\t\t\tauto$+$expert & \\phantom{x}4.9$\\pm$2.8 & *\\phantom{x}9.8$\\pm$4.3 & \\phantom{x}7.3$\\pm$4.3 & 
\\phantom{x}7.2$\\pm$3.3 & *12.5$\\pm$4.7 & *\\phantom{x}8.3$\\pm$3.5 \\\\\n\t\t\t\n\t\t\t\t\\hline\n\t\t\\end{tabular}\n\t\t\\label{table_manual_seg_perf_hd}\n\t} \n\t\n\t\\subfloat[\\textbf{Clinical metrics:} a) Left ventricle (LV) end-diastolic volume (EDV) b) LV ejection fraction (EF) c) Right ventricle (RV) EDV d) RV ejection fraction e) LV myocardial mass. Quantitative results compare clinical metrics based on reference segmentations with 1) automatic segmentations; 2) simulated manual correction and 3) manual expert correction of automatic segmentations using spatial uncertainty maps. $\\rho$ denotes the Pearson correlation coefficient, \\textit{bias} denotes the mean difference between the two measurements (mean $\\pm$ standard deviation) and \\textit{MAE} denotes the mean absolute error between the two measurements.]{\n\t\t\\label{table_manual_cardiac_function_indices}\n\t\t\\small\n\t\t\\begin{tabular}{| C{2.cm} | C{0.5cm} C{0.8cm} C{0.55cm} | C{0.5cm} C{0.8cm}C{0.55cm} | C{0.5cm} C{0.8cm} C{0.55cm} | C{0.5cm} C{0.8cm} C{0.55cm} | C{0.5cm} C{0.8cm} C{0.55cm} |}\n\t\t\t\\hline\n\t\t\t& \\multicolumn{3}{c}{\\textbf{LV$_{EDV}$}} & \\multicolumn{3}{c}{\\textbf{LV$_{EF}$}} & \\multicolumn{3}{c}{\\textbf{RV$_{EDV}$}} & \\multicolumn{3}{c}{\\textbf{RV$_{EF}$}} & \\multicolumn{3}{c|}{\\textbf{LVM$_{Mass}$}} \\\\\n\t\t\t\n\t\t\t\\textbf{Scenario} & \\textbf{$\\rho$} & \\textbf{bias $\\pm\\sigma$} & \\textbf{MAE} & \\textbf{$\\rho$} & \\textbf{bias $\\pm\\sigma$} & \\textbf{MAE} & \\textbf{$\\rho$} & \\textbf{bias $\\pm\\sigma$} & \\textbf{MAE} & \\textbf{$\\rho$} & \\textbf{bias $\\pm\\sigma$} & \\textbf{MAE} & \\multicolumn{1}{c}{\\textbf{$\\rho$}} & \\textbf{bias $\\pm\\sigma$} & \\textbf{MAE} \\\\\n\t\t\t\\hline\n\t\t\t\n\t\t\tauto-only & 0.995 & -4.4 $\\pm$7.0 & 5.7 & 0.927 & 5.0 $\\pm$7.1 & 5.8 & 0.962 & -6.4 $\\pm$16.2 & 11.9 & 0.878 & 5.8 $\\pm$8.7 & 8.0 & 0.979 & -6.4 $\\pm$10.6 & 9.5 \\\\\n\t\t\t\n\t\t\tauto$+$simulation & 0.998 & -3.9 $\\pm$5.2 & 4.8 & 0.989 & 2.3 $\\pm$2.9 & 2.9 & 0.984 & -3.7 $\\pm$10.4 & 6.8 & 0.954 & 2.7 $\\pm$5.5 & 4.5 & 0.983 & -5.5 $\\pm$9.6 & 8.1 \\\\\n\t\t\n\t\t\t\n\t\t\tauto$+$expert & 0.996 & -4.3 $\\pm$6.5 & 5.5 & 0.968 & 2.7 $\\pm$4.8 & 4.3 & 0.976 & -3.2 $\\pm$12.9 & 8.3 & 0.883 & 5.1 $\\pm$8.6 & 7.7 & 0.980 & -6.2 $\\pm$10.2 & 9.1 \\\\\n\t\t\t\t\\hline\n\t\t\\end{tabular}\n\t}\n\t\n\\end{table*}\n\n\\vspace{1ex}\n\\noindent \\textbf{Detection of slices with segmentation failures}: To evaluate detection performance w.r.t. slices containing segmentation failures precision-recall curves for each combination of model architecture and loss function using e-maps (Figure~\\ref{fig_prec_rec_slice_detection}, left) or b-maps (Figure~\\ref{fig_prec_rec_slice_detection}, right) are shown. The results show that detection performance of slices containing segmentation failures is slightly better for all models when using e-maps. Furthermore, the detection network achieves highest performance using uncertainty maps obtained from the DN model and the lowest when exploiting e- or b-maps obtained from the DRN model. Table~\\ref{table_evaluation_slice_detection} shows the average precision of detected slices with segmentation failures per patient, as well as the average percentage of slices that do contain segmentation failures (reference for detection task). The results illustrate that these measures are positively correlated i.e. that precision of detected slices in a patient volume is higher if the volume contains more slices that need correction. 
On average the DN model generates cardiac segmentations that contain more slices with at least one segmentation failure compared to U-net (ranks second) and DRN (ranks third). A higher number of detected slices containing segmentation failures implies an increased workload for manual correction.\n\n\\subsection{Calibration of uncertainty maps} \\label{result_eval_quality_umaps}\n\nFigure~\\ref{fig_risk_cov_comparison} shows risk-coverage curves for each combination of model architectures, uncertainty maps and loss functions (Figure~\\ref{fig_risk_cov_comparison} left: CE or Brier loss, Figure~\\ref{fig_risk_cov_comparison} right: soft-Dice). The results show an effect of the loss function on slope and convergence of the curves. Segmentation errors of models trained with the soft-Dice loss are less frequently covered by higher uncertainties than models trained with CE or Brier loss (steeper slope and lower minimum are better). This difference is more pronounced for e-maps. Models trained with the CE or Brier loss only slightly differ concerning convergence and their slopes are approximately identical. In contrast, the curves of the models trained with the soft-Dice differ regarding their slope and achieved minimum. Comparing e- and b-map of the DN-SD and U-net-SD models the results reveal that the curve for b-map has a steeper slope and achieves a lower minimum compared to the e-map. For the DRN-SD model these differences are less striking. In general for a specific combination of model and loss function the risk-coverage curves using b-maps achieve a lower minimum compared to e-maps.\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\includegraphics[width=4.7in, height=2.8in]{figures\/risk_cov\/cov_risk_curve_both_seg_errors.pdf}\n\t\n\t\\caption{Comparison of risk-coverage curves for different combination of model architectures, loss functions and uncertainty maps. Results are separated for loss functions (left cross-entropy and Brier, right soft-Dice loss). \\num{100}\\% coverage means that none of the voxels is discarded based on its uncertainty whereas a coverage of \\num{0}\\% denotes the scenario in which all predictions are replaced by their reference labels. Note, all models were trained with two different loss functions (1) soft-Dice (SD) for all models (2) cross-entropy (CE) for DRN and U-net and Brier loss for DN.}\n\t\\label{fig_risk_cov_comparison}\n\\end{figure*}\n\n\\begin{table*}\n\t\\caption{Effect of number of Monte Carlo (MC) samples on segmentation performance in terms of (a) Dice coefficient (DC) and (b) Hausdorff Distance (HD) (mean $\\pm$ standard deviation). Higher DC and lower HD is better. 
Abbreviations: Cross-Entropy (CE), Dilated Residual Network (DRN) and Dilated Network (DN).} \n\t\\label{table_seg_perf_per_samples}\n\t\\small\n\t\\centering\n\t\\subfloat[Dice coefficient]{\n\t\t\\begin{tabular}{|c C{1.5cm} C{1.5cm} c|}\n\t\t\t\\hline\n\t\t\t\\textbf{\\thead{Number of \\\\ MC samples}} & DRN-CE & U-net-CE & DN-soft-Dice\\\\\n\t\t\t\\hline\n\t\t\t1 & 0.894$\\pm$0.07 & 0.896$\\pm$0.07 & 0.871$\\pm$0.09 \\\\\n\t\t\t3 & 0.900$\\pm$0.07 & 0.901$\\pm$0.07 & 0.883$\\pm$0.08 \\\\\n\t\t\t5 & 0.902$\\pm$0.07 & 0.901$\\pm$0.07 & 0.887$\\pm$0.08 \\\\\n\t\t\t7 & 0.903$\\pm$0.07 & 0.901$\\pm$0.07 & 0.888$\\pm$0.08 \\\\\n\t\t\t10 & 0.904$\\pm$0.06 & 0.902$\\pm$0.07 &0.890$\\pm$0.08 \\\\\n\t\t\t20 & 0.904$\\pm$0.07 & 0.902$\\pm$0.07 & 0.890$\\pm$0.08 \\\\\n\t\t\t30 & 0.904$\\pm$0.07 & 0.902$\\pm$0.07 & 0.891$\\pm$0.08 \\\\\n\t\t\t60 & 0.904$\\pm$0.07 & 0.902$\\pm$0.07 & 0.891$\\pm$0.08 \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t}\n\t\\subfloat[Hausdorff Distance]{\n\t\t\\begin{tabular}{|c C{1.5cm} C{1.5cm} c|}\n\t\t\t\\hline\n\t\t\t\\textbf{\\thead{Number of \\\\ MC samples}}& DRN-CE & U-net-CE & DN-soft-Dice \\\\\n\t\t\t\\hline\n\t\t\t1 & 9.88$\\pm$5.76 & 11.79$\\pm$8.23 & 13.54$\\pm$7.14 \\\\\n\t\t\t3 & 9.70$\\pm$6.13 & 11.40$\\pm$7.78 & 12.71$\\pm$6.79 \\\\\n\t\t\t5 & 9.54$\\pm$6.07 & 11.37$\\pm$7.81 & 12.06$\\pm$6.29 \\\\\n\t\t\t7 & 9.38$\\pm$5.86 & 11.29$\\pm$7.86 & 12.08$\\pm$6.38 \\\\\n\t\t\t10 & 9.38$\\pm$5.91 & 11.24$\\pm$7.71 & 11.85$\\pm$6.34 \\\\\n\t\t\t20 & 9.37$\\pm$5.83 & 11.27$\\pm$7.79 & 11.90$\\pm$6.52 \\\\\n\t\t\t30 & 9.39$\\pm$5.91 & 11.32$\\pm$7.93 & 11.90$\\pm$6.48 \\\\\n\t\t\t60 & 9.39$\\pm$5.93 & 11.22$\\pm$7.83 & 11.89$\\pm$6.56 \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\t\n\t}\n\\end{table*}\n\n\\subsection{Correction of automatically identified segmentation failures} \\label{results_combined_approach}\n\n\\textbf{Simulated correction:} The results listed in Table~\\ref{table_overall_segmentation_performance} and \\ref{table_cardiac_function_indices} show that the proposed method consisting of segmentation followed by simulated manual correction of detected segmentation failures delivers accurate segmentation for all tissues over ED and ES points. Correction of detected segmentation failures improved the performance in terms of DC, HD and clinical metrics for all combinations of model architectures, loss functions and uncertainty measures. Focusing on the DC after correction of detected segmentation failures the results reveal that performance differences between evaluated models decreased compared to the segmentation-only task. This effect is less pronounced for HD where the DRN network clearly achieved superior results in the segmentation-only and combined approach. The DN performs the least of all models but achieves the highest absolute DC performance improvements in the combined approach for RV at ES. Overall, the results in Table~\\ref{table_overall_segmentation_performance} disclose that improvements attained by the combined approach are almost all statistically significant ($p \\leq 0.05$) at ES and frequently at ED (\\num{96}\\% resp. \\num{83}\\% of the cases). Moreover, improvements are in \\num{99}\\% of the cases statistically significant for HD compared to \\num{81}\\% of the cases for DC.\n\nResults in terms of clinical metrics shown in Table~\\ref{table_cardiac_function_indices} are inline with these findings. 
We observe that segmentation followed by simulated manual correction of detected segmentation failures resulted in considerably higher accuracy for LV and RV ejection fraction. Achieved improvements for clinical metrics are only statistically significant ($p \\leq 0.05$) in one case for RV ejection fraction.\t\n\nIn general, the effect of correction of detected segmentation failures is more pronounced in cases where the segmentation-only approach achieved relatively low accuracy (e.g. DN-SD for RV at ES). Furthermore, performance gains are largest for RV and LV at ES and for ejection fraction of both ventricles.\n\n\n\\begin{figure*}[!t]\n\t\\captionsetup[subfigure]{justification=centering}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[width=6in]{figures\/qualitative\/sim_corr\/patient018_slice07_ES_bmap_acorr.pdf}%\n\t\t\\label{fig_qual_result_sim_corr_example1}}\n\t\n\t\\subfloat[]{\\includegraphics[width=6in]{figures\/qualitative\/sim_corr\/patient070_slice05_ED_emap_acorr.pdf}%\n\t\t\\label{fig_qual_result_sim_corr_example2}}\n\t\n\t\\subfloat[]{\\includegraphics[width=6in]{figures\/qualitative\/sim_corr\/patient081_slice01_ES_emap_acorr.pdf}%\n\t\t\\label{fig_qual_result_sim_corr_example3}}\n\t\\caption{Three patients showing results of combined segmentation and detection approach consisting of segmentation followed by simulated manual correction of detected segmentation failures. First column shows MRI (top) and reference segmentation (bottom). Results for automatic segmentation and simulated manual correction respectively achieved by: Dilated Network (DN-Brier, \\num{2}$^{nd}$ and \\num{5}$^{th}$ columns); Dilated Residual Network (DRN-soft-Dice, \\num{3}$^{rd}$ and \\num{6}$^{th}$ columns); and U-net (soft-Dice, \\num{4}$^{th}$ and \\num{7}$^{th}$ columns).}\n\t\\label{fig_seg_detection_qualitative_results}\n\\end{figure*}\n\n\\begin{figure*}[!t]\n\t\\captionsetup[subfigure]{justification=centering}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[width=3.3in]{figures\/qualitative\/man_corr\/with_region_contour\/patient048_slice02_mcorr.pdf}%\n\t\t\\label{fig_qual_result_man_corr_example1} \\hspace{3ex}}\n\t\\subfloat[]{\\includegraphics[width=3.3in]{figures\/qualitative\/man_corr\/with_region_contour\/patient091_slice01_mcorr.pdf}%\n\t\t\\label{fig_qual_result_man_corr_example2}}\n\t\n\t\\subfloat[]{\\includegraphics[width=3.3in]{figures\/qualitative\/man_corr\/with_region_contour\/patient075_slice06_mcorr.pdf}%\n\t\t\\label{fig_qual_result_man_corr_example3} \\hspace{3ex}}\n\t\\subfloat[]{\\includegraphics[width=3.3in]{figures\/qualitative\/man_corr\/with_region_contour\/patient006_slice01_mcorr.pdf}%\n\t\t\\label{fig_qual_result_man_corr_example4}} \n\t\n\t\\caption{Four patients showing results of combined segmentation and detection approach consisting of segmentation followed by manual expert correction of detected segmentation failures. Expert was only allowed to adjust the automatic segmentations in regions where the detection network predicted segmentation failures (orange contour shown in 2$^{nd}$ column). Automatic segmentations were generated by a U-net trained with the cross-entropy loss. Segmentation failure detection was performed using entropy maps.}\n\t\\label{fig_qualitative_results_man_corr}\n\\end{figure*}\n\nThe best overall performance is achieved by the DRN model trained with cross-entropy loss while exploiting entropy maps in the detection task. 
Moreover, the proposed two-step approach attained slightly better results using Bayesian maps than entropy maps. \n\n\\vspace{1ex}\n\\textbf{Manual correction}: Table~\\ref{table_manual_corr_performance} lists results for the combined automatic segmentation and detection approach followed by \\textit{manual} correction of detected segmentation failures by an expert. The results show that this correction led to improved segmentation performance in terms of DC, HD and clinical metrics. Improvements in terms of HD are statistically significant ($p \\leq 0.05$) in \\num{50} percent of the cases and are most pronounced for RV and LV at end-systole. \n\nQualitative examples of the proposed approach are visualized in Figures~\\ref{fig_seg_detection_qualitative_results} and \\ref{fig_qualitative_results_man_corr} for simulated and manual correction of the automatically detected segmentation failures, respectively. For the illustrated cases, (simulated) manual correction of detected segmentation failures leads to increased segmentation performance.\nOn average, manual correction of automatic segmentations took less than \\num{2} minutes for the ED and ES volumes of one patient, compared to the \\num{20} minutes typically needed by an expert for the same task.\n\n\n\\section{Ablation Study}\n\nTo demonstrate the effect of different hyper-parameters of the method, a number of experiments were performed. They are detailed below.\n\n\\subsection{Impact of number of Monte Carlo samples on segmentation performance}\n\nTo investigate the impact of the number of Monte Carlo (MC) samples on segmentation performance, validation experiments were performed for all three segmentation architectures (Dilated Network, Dilated Residual Network and U-net) using $T$ $\\in \\{1, 3, 5, 7, 10, 20, 30, 60\\}$ samples. Results of these experiments are listed in Table~\\ref{table_seg_perf_per_samples}. We observe that segmentation performance started to converge with as few as \\num{7} samples. Performance improvements from an increased number of MC samples were largest for the Dilated Network. Overall, using more than \\num{10} samples did not increase segmentation performance. Hence, in the presented work $T$ was set to \\num{10}.\n\n\\subsection{Effect of patch-size on detection performance}\n\nThe combined segmentation and detection approach detects segmentation failures at the region level. To investigate the effect of patch-size on detection performance, three different patch-sizes were evaluated: \\num{4}$\\times$\\num{4}, \\num{8}$\\times$\\num{8}, and \\num{16}$\\times$\\num{16} voxels. The results are shown in Figure~\\ref{fig_grid_compare}. We can observe in Figure~\\ref{fig_fn_grid_froc_voxel_detection} that larger patch-sizes result in a lower number of false positive regions. This is likely because larger patch-sizes yield fewer regions per image. Furthermore, Figure~\\ref{fig_fn_grid_prec_rec_slice_detection} reveals that slice detection performance is only slightly influenced by patch-size. To ease manual inspection and correction by an expert, it is desirable to keep the region size, i.e. the patch-size, small; a minimal sketch of this patch-level grouping is given below.
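As an illustration of this region-level set-up, the following minimal NumPy sketch groups a binary map of segmentation failures into non-overlapping patches and marks a patch as positive as soon as it contains at least one failure voxel (function and variable names are illustrative and the labelling rule is an assumption; the sketch is not taken from the released implementation):

\\begin{verbatim}
import numpy as np

def patch_reference(failure_mask, patch_size=8):
    # failure_mask: 2D boolean array (H x W) marking segmentation-failure
    # voxels of one slice; H and W are assumed to be multiples of patch_size.
    h, w = failure_mask.shape
    patches = failure_mask.reshape(h // patch_size, patch_size,
                                   w // patch_size, patch_size)
    # a patch is positive if it contains at least one failure voxel
    return patches.any(axis=(1, 3))  # shape (H/patch_size, W/patch_size)
\\end{verbatim}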
Therefore, in the experiments a patch-size of \\num{8}$\\times$\\num{8} voxels was used.\n\n\\begin{figure*}[t]\n\t\\center\n\t\\subfloat[]{\\includegraphics[width=3.4in, height=1.7in]{figures\/dt\/compare_grid_froc_voxel_detection_rate.pdf}%\n\t\t\\label{fig_fn_grid_froc_voxel_detection}}\n\t\\subfloat[]{\\includegraphics[width=3.4in, height=1.7in]{figures\/dt\/compare_grid_slices_prec_recall.pdf}%\n\t\t\\label{fig_fn_grid_prec_rec_slice_detection}} \n\t\n\t\\caption{Detection performance for three different patch-sizes specified in voxels. (a) Sensitivity for detection of segmentation failures on voxel level (y-axis) versus number of false positive image regions (x-axis). (b) Precision-recall curve for detection of slices containing segmentation failures (where AP denotes average precision). Results are split between entropy and Bayesian uncertainty maps. In the experiments patch-size was set to \\num{8}$\\times$\\num{8} voxels.}\n\t\\label{fig_grid_compare}\n\\end{figure*}\n\n\\subsection{Impact of tolerance threshold on number of segmentation failures}\n\nTo investigate the impact of the tolerance threshold separating segmentation failures and tolerable segmentation errors, we calculated the ratio of the number of segmentation failures and all errors i.e. the sum of tolerable errors and segmentation failures. Figure~\\ref{fig_threshold_compare} shows the results. We observe that at least half of the segmentation failures are located within a tolerance threshold i.e. distance of two to three voxels of the target structure boundary as defined by the reference annotation. Furthermore, the mean percentage of failures per volume is considerably lower for the Dilated Residual Network (DRN) and highest for the Dilated Network. This result is inline with our earlier finding (see Table~\\ref{table_evaluation_slice_detection}) that average percentage of slices that do contain segmentation failures is lowest for the DRN model.\n\n\\section{Discussion}\n\nWe have described a method that combines automatic segmentation and assessment of uncertainty in cardiac MRI with detection of image regions containing segmentation failures. The results show that combining automatic segmentation with manual correction of detected segmentation failures results in higher segmentation performance.\nIn contrast to previous methods that detected segmentation failures per patient or per structure, we showed that it is feasible to detect segmentation failures per image region. In most of the experimental settings, simulated manual correction of detected segmentation failures for LV, RV and LVM at ED and ES led to statistically significant improvements. These results represent the upper bound on the maximum achievable performance\tfor the manual expert correction task. Furthermore, results show that manual expert correction of detected segmentation failures led to consistently improved segmentations. However, these results are not on par with the simulated expert correction scenario. This is not surprising because inter-observer variability is high for the presented task and annotation protocols may differ between clinical environments. 
Moreover, qualitative results of the manual expert correction reveal that manual correction of the detected segmentation failures can prevent anatomically implausible segmentations (see Figure~\\ref{fig_qualitative_results_man_corr}).\nTherefore, the presented approach can potentially simplify and accelerate the correction process and has the capacity to increase the trustworthiness of existing automatic segmentation methods in daily clinical practice.\n\n\nThe proposed combined segmentation and detection approach was evaluated using three state-of-the-art deep learning segmentation architectures. The results suggest that our approach is generic and applicable to different model architectures. Nevertheless, we observe noticeable differences between the different combinations of model architectures, loss functions and uncertainty measures. In the segmentation-only task the DRN clearly outperforms the other two models in the evaluation of the boundary of the segmented structure. Moreover, qualitative analysis of the automatic segmentation masks suggests that the DRN generates anatomically implausible and fragmented segmentations less often than the other models. We assume that clinical experts would prefer such segmentations although they are not always perfect. Furthermore, even though DRN and U-net achieve similar performance in terms of DC, we assume that less fragmented segmentation masks would increase the trustworthiness of the methods. \n\n\\begin{figure*}[t]\n\t\\center\n\t\\subfloat[]{\n\t\t\\includegraphics[width=3.in]{figures\/dt\/tp_per_threshold_out_struc.pdf}%\n\t\t\\label{fig_threshold_struc_out_compare}\n\t}\n\t\\subfloat[]{\n\t\t\\includegraphics[width=3.in]{figures\/dt\/tp_per_threshold_in_struc.pdf}%\n\t\t\\label{fig_threshold_struc_in_compare}\n\t}\n\t\\caption{Mean percentage of the segmentation failures per volume (y-axis) in the set of all segmentation errors (tolerable errors$+$segmentation failures) depending on the tolerance threshold (x-axis). The red, dashed vertical line indicates the threshold value that was used throughout the experiments. Results are split between segmentation errors located (a) outside and (b) inside the target structure. Each figure contains a curve for U-net, Dilated Network (DN) and Dilated Residual Network (DRN) trained with the soft-Dice (SD) loss. Segmentation errors located in slices above the base or below the apex are always included in the set of segmentation failures and therefore, they are independent of the applied tolerance threshold.}\n\t\\label{fig_threshold_compare}\n\\end{figure*}\n\nIn agreement with our preliminary work we found that uncertainty maps obtained from a segmentation model trained with the soft-Dice loss have a lower degree of uncertainty calibration than those obtained from models trained with one of the other two loss functions (cross-entropy and Brier)\\cite{sander2019towards}. Nevertheless, the results of the combined segmentation and detection approach showed that a lower degree of uncertainty calibration only slightly deteriorated the detection performance of segmentation failures for the larger segmentation models (DRN and U-net) when exploiting uncertainty information from e-maps. Hendrycks and Gimpel \\cite{hendrycks2016baseline} showed that softmax probabilities generated by deep learning networks have poor direct correspondence to confidence. However, in agreement with Geifman et al. \\cite{geifman2017selective} we presume that probabilities and hence corresponding entropies obtained from the softmax function are ranked consistently, i.e.
entropy can potentially be used as a relative uncertainty measure in deep learning. In addition, we detect segmentation failures per image region and therefore our approach does not require perfectly calibrated uncertainty maps. Furthermore, results of the combined segmentation and detection approach revealed that the detection performance for segmentation failures using b-maps is almost independent of the loss function used to train the segmentation model. In line with Jungo et al. \\cite{jungo2019assessing} we assume that enabling MC-dropout during testing and computing the mean softmax probabilities per class leads to better calibrated probabilities and hence better calibrated b-maps. This assumption is in agreement with Srivastava et al.~\\cite{srivastava2014dropout}, where a CNN with dropout enabled at test time is interpreted as an ensemble of models.\n\nQuantitative evaluation in terms of Dice coefficient and Hausdorff distance reveals that the proposed combined segmentation and detection approach leads to a significant performance increase. However, the results also demonstrate that the correction of the detected failures allowed by the combined approach does not lead to a statistically significant improvement in clinical metrics. This is not surprising because state-of-the-art automatic segmentation methods are not expected to lead to large volumetric errors \\cite{bernard2018deep} and standard clinical measures are not sensitive to small segmentation errors. Nevertheless, errors of the current state-of-the-art automatic segmentation methods may lead to anatomically implausible segmentations \\cite{bernard2018deep} that may cause distrust in clinical application. Besides increasing the trustworthiness of current state-of-the-art segmentation methods for cardiac MRI, improved segmentations are a prerequisite for advanced functional analysis of the heart, e.g. motion analysis\\cite{bello2019deep} and very detailed morphology analysis, such as that of myocardial trabeculae in adults\\cite{meyer2020genetic}.\n\nFor the ACDC dataset used in this manuscript, Bernard et al.\\cite{bernard2018deep} reported inter-observer variability ranging from \\num{4} to \\SI{14.1}{\\milli\\meter} (equivalent to on average \\num{2.6} to \\num{9} voxels). To define the set of segmentation failures, we employed a strict tolerance threshold on a distance metric to distinguish between tolerated segmentation errors and segmentation failures (see Ablation study). A tolerance threshold stricter than the reported inter-observer variability was used because the thresholding is performed in \\num{2}D, while the evaluation of the segmentation is done in \\num{3}D; the large slice thickness in cardiac MR could lead to a discrepancy between the two. As a consequence of this strict threshold, the results listed in Table~\\ref{table_evaluation_slice_detection} show that almost all patient volumes contain at least one slice with a segmentation failure. This might render the approach less feasible in clinical practice. Increasing the threshold decreases the number of segmentation failures and slices containing segmentation failures (see Figure~\\ref{fig_threshold_compare}) but also lowers the upper bound on the maximum achievable performance. Therefore, to show the potential of our proposed approach we chose to apply a strict tolerance threshold. Nevertheless, we realize that although manual correction of detected segmentation failures leads to increased segmentation accuracy, the precision-recall performance of the detection network is limited (see Figure~\\ref{fig_dt_perf_all_models}) and hence should be a focus of future work.
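To make the role of the tolerance threshold concrete, the sketch below separates tolerated errors from segmentation failures in a single \\num{2}D slice using a Euclidean distance transform of the reference mask. It is illustrative only and not taken from the released implementation; the default threshold value is an assumption, and the rule that errors in slices above the base or below the apex are always counted as failures is omitted:

\\begin{verbatim}
import numpy as np
from scipy.ndimage import distance_transform_edt

def segmentation_failures(auto_seg, ref_seg, tolerance=3.0):
    # auto_seg, ref_seg: 2D boolean masks of one structure in one slice.
    # Mis-segmented voxels farther than `tolerance` voxels from the
    # reference boundary are marked as failures; closer errors are tolerated.
    errors = auto_seg != ref_seg
    # approximate distance (in voxels) of every voxel to the reference boundary
    dist_inside = distance_transform_edt(ref_seg)
    dist_outside = distance_transform_edt(~ref_seg)
    dist_to_boundary = np.maximum(dist_inside, dist_outside)
    return errors & (dist_to_boundary > tolerance)
\\end{verbatim}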
\n\nThe presented patch-based detection approach combined with (simulated) manual correction can in principle lead to stitching artefacts in the resulting segmentation masks. A voxel-based detection approach could potentially solve this. However, voxel-based detection methods are more challenging to train due to the very small number of voxels in an image belonging to the set of segmentation failures.\n\nEvaluation of the proposed approach for \\num{12} possible combinations of segmentation models (three), loss functions (two) and uncertainty maps (two) resulted in an extensive number of experiments. Nevertheless, future work could extend evaluation to other segmentation models, loss functions or combination of losses. Furthermore, our approach could be evaluated using additional uncertainty estimation techniques e.g. by means of ensembling of networks \\cite{lakshminarayanan2017simple} or variational dropout \\cite{kingma2015variational}. In addition, previous work by Kendall and Gal~\\cite{kendall2017uncertainties}, Tanno et al. \\cite{tanno2019uncertainty} has shown that the quality of uncertainty estimates can be improved if model (epistemic) and data (aleatoric) uncertainty are assessed simultaneously with separate measures. The current study focused on the assessment of model uncertainty by means of MC-dropout and entropy which is a combination of epistemic and aleatoric uncertainty. Hence, future work could investigate whether additional estimation of aleatoric uncertainty improves the detection of segmentation failures. \n\nFurthermore, to develop an end-to-end approach future work could incorporate the detection of segmentation failures into the segmentation network. Besides, adding the automatic segmentations to the input of the detection network could increase the detection performance. \n\nFinally, the proposed approach is not specific to cardiac MRI segmentation. Although data and task specific training would be needed the approach could potentially be applied to other image modalities and segmentation tasks.\n\n\\section{Conclusion}\n\nA method combining automatic segmentation and assessment of segmentation uncertainty in cardiac MR with detection of image regions containing local segmentation failures has been presented. The combined approach, together with simulated and manual correction of detected segmentation failures, increases performance compared to segmentation-only. The proposed method has the potential to increase trustworthiness of current state-of-the-art segmentation methods for cardiac MRIs. \n\n\\section*{Data and code availability}\nAll models were implemented using the PyTorch\\cite{paszke2017automatic} framework and trained on one Nvidia GTX Titan X GPU with \\num{12} GB memory. The code to replicate the study is publicly available at \\href{https:\/\/github.com\/toologicbv\/cardiacSegUncertainty}{https:\/\/github.com\/toologicbv\/cardiacSegUncertainty}.\n\n\n\\section*{Acknowledgements}\n\nThis study was performed within the DLMedIA program (P15-26) funded by Dutch Technology Foundation with participation of PIE Medical Imaging.\n\n\\section*{Author contributions statement}\n\nJ.S., B.D.V. and I.I. designed the concept of the study. J.S. conducted the experiments. J.S., B.D.V. and I.I. wrote the manuscript. All authors reviewed the manuscript. \n\n\\section*{Additional information}\n\n\\textbf{Competing interests}: The authors declare that they have no competing interests. 
\n\n\\ifCLASSOPTIONcaptionsoff\n\\newpage\n\\fi\n\n\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n\n\\section*{Introduction}\n\t\n\t\\IEEEPARstart{T}o perform diagnosis and prognosis of cardiovascular disease (CVD) medical experts depend on the reliable quantification of cardiac function \\cite{white1987left}. Cardiac magnetic resonance imaging (CMRI) is currently considered the reference standard for quantification of ventricular volumes, mass and function \\cite{grothues2002comparison}. Short-axis CMR imaging, covering the entire left and right ventricle (LV resp. RV) is routinely used to determine quantitative parameters of both ventricle's function. This requires manual or semi-automatic segmentation of corresponding cardiac tissue structures for end-diastole (ED) and end-systole (ES). \n\t\n\tExisting semi-automated or automated segmentation methods for CMRIs regularly require (substantial) manual intervention caused by lack of robustness. Manual or semi-automatic segmentation across a complete cardiac cycle, comprising \\num{20} to \\num{40} phases per patient, enables computation of parameters quantifying cardiac motion with potential diagnostic implications but due to the required workload, this is practically infeasible. Consequently, segmentation is often performed at end-diastole and end-systole precluding comprehensive analysis over complete cardiac cycle. \n\t\n\tRecently\\cite{litjens2017survey, leiner2019machine}, deep learning segmentation methods have shown to outperform traditional approaches such as those exploiting level set, graph-cuts, deformable models, cardiac atlases and statistical models \\cite{petitjean2011review, peng2016review}. However, recent comparison of a number of automatic methods showed that even the best performing methods generated anatomically implausible segmentations in more than 80\\% of the CMRIs \\cite{bernard2018deep}. Such errors do not occur when experts perform segmentation. To achieve acceptance in clinical practice these shortcomings of the automatic approaches need to be alleviated by further development. This can be achieved by generating more accurate segmentation result or by development of approaches that automatically detect segmentation failures. \n\t\n\tIn manual and automatic segmentation of short-axis CMRI, largest segmentation inaccuracies are typically located in the most basal and apical slices due to low tissue contrast ratios \\cite{suinesiaputra2015quantification}. To increase segmentation performance, several methods have been proposed \\cite{tan2018fully, zheng20183, savioli2018automated, bai2018automated}. Tan et al. \\cite{tan2018fully} used a convolutional neural network (CNN) to regress anatomical landmarks from long-axis views (orthogonal to short-axis). They exploited the landmarks to determine most basal and apical slices in short-axis views and thereby constraining the automatic segmentation of CMRIs. This resulted in increased robustness and performance. Other approaches leverage spatial \\cite{zheng20183} or temporal \\cite{savioli2018automated, bai2018automated} information to increase segmentation consistency and performance in particular in the difficult basal and apical slices.\n\t\n\tAn alternative approach to preventing implausible segmentation results is by incorporating knowledge about the highly constrained shape of the heart. Oktay et al. \\cite{oktay2017anatomically} developed an anatomically constrained neural network (NN) that infers shape constraints using an auto-encoder during segmentation training. 
Duan et al. \\cite{duan2019automatic} developed a deep learning segmentation approach for CMRIs that used atlas propagation to explicitly impose a shape refinement. This was especially beneficial in the presence of image acquisition artifacts. Recently, Painchaud et al. \\cite{painchaud2019cardiac} developed a post-processing approach to detect and transform anatomically implausible cardiac segmentations into valid ones by defining cardiac anatomical metrics. Applying their approach to various state-of-the-art segmentation methods the authors showed that the proposed method provides strong anatomical guarantees without hampering segmentation accuracy. \n\t\n\t\\begin{figure}[t]\n\t\t\\center\n\t\t\\includegraphics[width=6in, height=3.5in]{figures\/overview_approach.pdf}%\n\t\t\n\t\t\\caption{Overview of proposed two step approach. Step 1 (left): Automatic CNN segmentation of CMR images combined with assessment of segmentation uncertainties. Step 2 (right): Differentiate tolerated errors from segmentation failures (to be detected) using distance transform maps based on reference segmentations. Detection of image regions containing segmentation failures using CNN which takes CMR images and segmentation uncertainties as input. Manual corrected segmentation failures (green) based on detected image regions.}\n\t\t\\label{fig_overview_method}\n\t\\end{figure}\n\t\n\t\n\tA different research trend focuses on detecting segmentation failures, i.e. on automated quality control for image segmentation. These methods can be divided in those that predict segmentation quality using image at hand or corresponding automatic segmentation result, and those that assess and exploit predictive uncertainties to detect segmentation failure. \n\t\n\tRecently, two methods were proposed to detect segmentation failures in large-scale cardiac MR imaging studies to remove these from subsequent analysis\\cite{alba2018automatic, robinson2019automated}. Robinson et al. \\cite{robinson2019automated} using the approach of Reverse Classification Accuracy (RCA) \\cite{valindria2017reverse} predicted CMRI segmentation metrics to detect failed segmentations. They achieved good agreement between predicted metrics and visual quality control scores. Alba et al. \\cite{alba2018automatic} used statistical, pattern and fractal descriptors in a random forest classifier to directly detect segmentation contour failures without intermediate regression of segmentation accuracy metrics. \n\t\n\tMethods for automatic quality control were also developed for other applications in medical image analysis. Frounchi et al. \\cite{frounchi2011automating} extracted features from the segmentation results of the left ventricle in CT scans. Using the obtained features the authors trained a classifier that is able to discriminate between consistent and inconsistent segmentations. To distinguish between acceptable and non-acceptable segmentations Kohlberger el al. \\cite{kohlberger2012evaluating} proposed to directly predict multi-organ segmentation accuracy in CT scans using a set of features extracted from the image and corresponding segmentation. \n\t\n\tA number of methods aggregate voxel-wise uncertainties into an overall score to identify insufficiently accurate segmentations. For example, Nair et al. \\cite{nair2018exploring} computed an overall score for target segmentation structure from voxel-wise predictive uncertainties. The method was tested for detection of Multiple Sclerosis in brain MRI. 
The authors showed that rejecting segmentations with high uncertainty scores led to increased detection accuracy, indicating that correct segmentations contain lower uncertainties than incorrect ones. Similarly, to assess segmentation quality of brain MRIs, Jungo et al. \\cite{jungo2018uncertainty} aggregated voxel-wise uncertainties into a score per target structure\n\tand showed that the computed uncertainty score enabled identification of erroneous segmentations.\n\t\n\tUnlike approaches evaluating segmentation directly, several methods use predictive uncertainties to predict segmentation metrics and thereby evaluate segmentation performance \\cite{roy2019bayesian, devries2018leveraging}. For example, Roy et al. \\cite{roy2019bayesian} aggregated voxel-wise uncertainties into four scores per segmented structure in brain MRI. The authors showed that the computed scores can be used to predict the Intersection over Union and hence to determine segmentation accuracy. \n\tA similar idea was presented by DeVries et al. \\cite{devries2018leveraging}, who predicted segmentation accuracy per patient using an auxiliary neural network that leverages the dermoscopic image, the automatic segmentation result and the obtained uncertainties. The researchers showed that the predicted segmentation accuracy is useful for quality control.\n\t\n\tWe build on our preliminary work in which automatic segmentation of CMR images using a dilated CNN was combined with assessment of two measures of segmentation uncertainty \\cite{sander2019towards}. For the first measure the multi-class entropy per voxel (entropy maps) was computed using the output distribution. For the second measure Bayesian uncertainty maps were acquired using Monte Carlo dropout (MC-dropout) \\cite{gal2016dropout}. In \\cite{sander2019towards} we showed that the obtained uncertainties almost entirely cover the regions of incorrect segmentation, i.e. that the uncertainties are calibrated. In the current work we extend our preliminary research in two ways. First, we assess the impact of the CNN architecture on segmentation performance and on the calibration of the uncertainty maps by evaluating three existing state-of-the-art CNNs. Second, we employ an auxiliary CNN (detection network) that processes a cardiac MRI and the corresponding spatial uncertainty map (entropy or Bayesian) to automatically detect segmentation failures. We differentiate errors that may be within the range of inter-observer variability and hence do not necessarily require correction (tolerated errors) from errors that an expert would not make and hence require correction (segmentation failures). Given that overlap measures do not capture fine details of the segmentation results and preclude differentiating these two types of segmentation errors, in this work we define segmentation failures using a boundary distance metric. In \\cite{sander2019towards} we found that the degree of calibration of the uncertainty maps depends on the loss function used to train the CNN. Nevertheless, in the current work we show that uncalibrated uncertainty maps are useful to detect local segmentation failures.\n\tIn contrast to previous methods that detect segmentation failures per patient or per structure\\cite{roy2019bayesian, devries2018leveraging}, we propose to detect segmentation failures per image region. We expect that inspection and correction of segmentation failures using image regions rather than individual voxels or whole images would simplify the correction process.
To show the potential of our approach and demonstrate that combining automatic segmentation with manual correction of the detected segmentation failures per region results in higher segmentation performance we performed two additional experiments. In the first experiment, correction of detected segmentation failures was simulated in the complete data set. In the second experiment, correction was performed by an expert in a subset of images. Using publicly available set of CMR scans from MICCAI 2017 ACDC challenge \\cite{bernard2018deep}, the performance was evaluated before and after simulating the correction of detected segmentation failures as well as after manual expert correction.\n\t\n\t\\section*{Data}\n\t\n\tIn this study data from the MICCAI \\num{2017} Automated Cardiac Diagnosis Challenge (ACDC) \\cite{bernard2018deep} was used. The dataset consists of cardiac cine MR images (CMRIs) from 100 patients uniformly distributed over normal cardiac function and four disease groups: dilated cardiomyopathy, hypertrophic cardiomyopathy, heart failure with infarction, and right ventricular abnormality. Detailed acquisition protocol is described by Bernard et al.~\\cite{bernard2018deep}. Briefly, short-axis CMRIs were acquired with two MRI scanners of different magnetic strengths (\\num{1.5} and \\num{3.0} T). Images were made during breath hold using a conventional steady-state free precession (SSFP) sequence. CMRIs have an in-plane resolution ranging from \\num{1.37} to \\SI{1.68}{\\milli\\meter} (average reconstruction matrix \\num{243} $\\times$ \\num{217} voxels) with slice spacing varying from \\num{5} to \\SI{10}{\\milli\\meter}. Per patient 28 to 40 volumes are provided covering partially or completely one cardiac cycle. Each volume consists of on average ten slices covering the heart. Expert manual reference segmentations are provided for the LV cavity, RV endocardium and LV myocardium (LVM) for all CMRI slices at ED and ES time frames. To correct for intensity differences among scans, voxel intensities of each volume were scaled to the [\\num{0.0}, \\num{1.0}] range using the minimum and maximum of the volume. Furthermore, to correct for differences in-plane voxel sizes, image slices were resampled to \\num{1.4}$\\times\\SI{1.4}{\\milli\\meter}^2$. \n\t\n\t\\begin{figure}[!t]\n\t\t\\captionsetup[subfigure]{justification=centering}\n\t\t\\centering\n\t\n\t\t\\subfloat[]{\\includegraphics[width=5in]{figures\/qualitative\/auto\/patient099_slice02_ES_emap_auto.pdf}%\n\t\t\\label{fig_seg_qual_example1}}\n\t\n\t\t\\subfloat[]{\\includegraphics[width=5in]{figures\/qualitative\/auto\/patient097_slice00_ES_emap_auto.pdf}%\n\t\t\\label{fig_seg_qual_example2}}\n\t\t\n\t\t\\caption{Examples of automatic segmentations generated by different segmentation models for two cardiac MRI scans (rows) at ES at the base of the heart.}\n\t\t\\label{fig_seg_qualitative_results}\n\t\\end{figure}\n\t\n\t\\section*{Methods}\n\t\n\tTo investigate uncertainty of the segmentation, anatomical structures in CMR images are segmented using a CNN. To investigate whether the approach generalizes to different segmentation networks, three state-of-the-art CNNs were evaluated. For each segmentation model two measures of predictive uncertainty were obtained per voxel. Thereafter, to detect and correct local segmentation failures an auxiliary CNN (detection network) that analyzes a cardiac MRI was used. Finally, this leads to the uncertainty map allowing detection of image regions that contain segmentation failures. 
Figure~\\ref{fig_overview_method} visualizes this approach.\n\t\n\t\n\t\\subsection*{Automatic segmentation of cardiac MRI}\n\t\n\tTo perform segmentation of LV, RV, and LVM in cardiac MR images i.e. \\num{2}D CMR scans, three state-of-the-art CNNs are trained. Each of the three networks takes a CMR image as input and has four output channels providing probabilities for the three cardiac structures (LV, RV, LVM) and background. Softmax probabilities are calculated over the four tissue classes. Patient volumes at ED and ES are processed separately. During inference the \\num{2}D automatic segmentation masks are stacked into a \\num{3}D volume per patient and cardiac phase. After segmentation, the largest \\num{3}D connected component for each class is retained and volumes are resampled to their original voxel resolution. Segmentation networks differ substantially regarding architecture, number of parameters and receptive field size. To assess predictive uncertainties from the segmentation models \\textit{Monte Carlo dropout} (MC-dropout) introduced by Gal \\& Ghahramani \\cite{gal2016dropout} is implemented in every network. The following three segmentation networks were evaluated: Bayesian Dilated CNN, Bayesian Dilated Residual Network, Bayesian U-net.\n\t\n\t\\vspace{1ex}\n\t\\noindent \\textbf{Bayesian Dilated CNN (DN)}: The Bayesian DN architecture comprises a sequence of ten convolutional layers. Layers \\num{1} to \\num{8} serve as feature extraction layers with small convolution kernels of size \\num{3}$\\times$\\num{3} voxels. No padding is applied after convolutions. The number of kernels increases from \\num{32} in the first eight layers, to \\num{128} in the final two fully connected classification layers, implemented as \\num{1}$\\times$\\num{1} convolutions. The dilation level is successively increased between layers \\num{2} and \\num{7} from \\num{2} to \\num{32} which results in a receptive field for each voxel of \\num{131}$\\times$\\num{131} voxels, or \\num{18.3}$\\times$ $\\SI{18.3}{\\centi\\meter}^2$. All trainable layers except the final layer use rectified linear activation functions (ReLU). To enhance generalization performance, the model uses batch normalization in layers \\num{2} to \\num{9}. In order to convert the original DN~\\cite{wolterink2017automatic} into a Bayesian DN dropout is added as the last operation in all but the final layer and \\num{10} percent of a layer's hidden units are randomly switched off. \n\t\n\t\\vspace{1ex}\n\t\\noindent \\textbf{Bayesian Dilated Residual Network (DRN)}: The Bayesian DRN is based on the original DRN from Yu et al. \\cite{yu2017dilated} for image segmentation. More specifically, the DRN-D-22\\cite{yu2017dilated} is used which consists of a feature extraction module with output stride eight followed by a classifier implemented as fully convolutional layer with \\num{1}$\\times$\\num{1} convolutions. Output of the classifier is upsampled to full resolution using bilinear interpolation. The convolutional feature extraction module comprises eight levels where the number of kernels increases from \\num{16} in the first level, to \\num{512} in the two final levels. The first convolutional layer in level \\num{1} uses \\num{16} kernels of size \\num{7}$\\times$\\num{7} voxels and zero-padding of size \\num{3}. The remaining trainable layers use small \\num{3}$\\times$\\num{3} voxel kernels and zero-padding of size \\num{1}. Level \\num{2} to \\num{4} use a strided convolution of size \\num{2}. 
To further increase the receptive field convolutional layers in level \\num{5}, \\num{6} and \\num{7} use a dilation factor of \\num{2}, \\num{4} and \\num{2}, respectively. Furthermore, levels \\num{3} to \\num{6} consist of two residual blocks. All convolutional layers of the feature extraction module are followed by batch normalization, ReLU function and dropout. Adding dropout and switching off \\num{10} percent of a layer's hidden units converts the original DRN~\\cite{yu2017dilated} into a Bayesian DRN.\n\t\n\t\\vspace{1ex}\n\t\\noindent \\textbf{Bayesian U-net (U-net)}: The standard architecture of the U-net~\\cite{ronneberger2015u} is used. The network is fully convolutional and consists of a contracting, bottleneck and expanding path. The contracting and expanding path each consist of four blocks i.e. resolution levels which are connected by skip connections. The first block of the contracting path contains two convolutional layers using a kernel size of \\num{3}$\\times$\\num{3} voxels and zero-padding of size \\num{1}. Downsampling of the input is accomplished by employing a max pooling operation in block \\num{2} to \\num{4} of the contracting path and the bottleneck using a convolutional kernel of size \\num{2}$\\times$\\num{2} voxels and stride \\num{2}. Upsampling is performed by a transposed convolutional layer in block \\num{1} to \\num{4} of the expanding path using the same kernel size and stride as the max pooling layers. Each downsampling and upsampling layer is followed by two convolutional layers using \\num{3}$\\times$\\num{3} voxel kernels with zero-padding size \\num{1}. The final convolutional layer of the network acts as a classifier and uses \\num{1}$\\times$\\num{1} convolutions to reduce the number of output channels to the number of segmentation classes. The number of kernels increases from \\num{64} in the first block of the contracting path to \\num{1024} in the bottleneck. In contrast, the number of kernels in the expanding path successively decreases from \\num{1024} to \\num{64}. In deviation to the standard U-net instance normalization is added to all convolutional layers in the contracting path and ReLU non-linearities are replaced by LeakyReLU functions because this was found to slightly improve segmentation performance. In addition, to convert the deterministic model into a Bayesian neural network dropout is added as the last operation in each block of the contracting and expanding path and \\num{10} percent of a layer's hidden units are randomly switched off.\n\t\n\t\\subsection*{Assessment of predictive uncertainties} \\label{uncertainty_maps}\n\tTo detect failures in segmentation masks generated by CNNs in testing, spatial uncertainty maps of the obtained segmentations are generated. For each voxel in the image two measures of uncertainty are calculated. First, a computationally cheap and straightforward measure of uncertainty is the entropy of softmax probabilities over the four tissue classes which are generated by the segmentation networks. Using these, normalized entropy maps $\\bf E \\in [0, 1]^{H\\times W}$ (e-map) are computed where $H$ and $W$ denote the height and width of the original CMRI, respectively.\n\t\n\tSecond, by applying MC-dropout in testing, softmax probabilities with a number of samples $T$ per voxel are obtained. 
As an overall measure of uncertainty the mean standard deviation of softmax probabilities per voxel over all tissue classes $C$\\label{ref:maximum_variance} is computed\n\t\n\t\n\t\\begingroup\n\t\\small\n\t\\begin{align}\n\t\\textbf{B} (I)^{(x, y)} &= \\frac{1}{C} \\sum_{c=1}^{C} \\sqrt{\\frac{1}{T-1} \\sum_{t=1}^{T} \\big(p_t(I)^{(x, y, c)} - \\hat{\\mu}^{(x, y, c)} \\big)^2 } \\; ,\n\t\\end{align}\n\t\\endgroup\n\t\n\twhere $\\textbf{B}(I)^{(x, y)} \\in [0, 1]$ denotes the normalized value of the Bayesian uncertainty map (b-map) at position $(x, y)$ in \\num{2}D slice $I$, $C$ is equal to the number of classes, $T$ is the number of samples and $p_t(I)^{(x, y, c)}$ denotes the softmax probability at position $(x, y)$ in image $I$ for class $c$. The predictive mean per class $\\hat{\\mu}^{(x, y, c)}$ of the samples is computed as follows:\n\t\n\t\\begingroup\n\t\\small\n\t\\begin{align}\n\t\\hat{\\mu}^{(x, y, c)} &= \\frac{1}{T} \\sum_{t=1}^{T} p_t(I)^{(x, y, c)} \\; .\n\t\\end{align}\n\t\\endgroup\n\t\n\tIn addition, the predictive mean per class is used to determine the tissue class per voxel.\n\t\n\t\\subsection*{Calibration of uncertainty maps} \n\tIdeally, incorrectly segmented voxels as defined by the reference labels should be covered by higher uncertainties than correctly segmented voxels. In such a case the spatial uncertainty maps are perfectly calibrated. \\textit{Risk-coverage curves} introduced by Geifman et al.\\cite{geifman2017selective} visualize whether incorrectly segmented voxels are covered by higher uncertainties than those that are correctly segmented. Risk-coverage curves convey the effect of avoiding segmentation of voxels above a specific uncertainty value on the reduction of segmentation errors (i.e. risk reduction) while at the same time quantifying the voxels that were omitted from the classification task (i.e. coverage). \n\t\n\tTo generate risk-coverage curves first, each patient volume is cropped based on a minimal enclosing parallelepiped bounding box that is placed around the reference segmentations to reduce the number of background voxels. Note that this is only performed to simplify the analysis of the risk-coverage curves. Second, voxels of the cropped patient volume are ranked based on their uncertainty value in descending order. Third, to obtain uncertainty threshold values per patient volume the ranked voxels are partitioned into \\num{100} percentiles based on their uncertainty value. Finally, per patient volume each uncertainty threshold is evaluated by computing a coverage and a risk measure. Coverage is the percentage of voxels in a patient volume at ED or ES that is automatically segmented. Voxels in a patient volume above the threshold are discarded from automatic segmentation and would be referred to an expert. The number of incorrectly segmented voxels per patient volume is used as a measure of risk. Using bilinear interpolation risk measures are computed per patient volume between $[0, 100]$ percent.\n\t\n\t\n\t\\subsection*{Detection of segmentation failures}\n\t\n\tTo detect segmentation failures uncertainty maps are used but direct application of uncertainties is infeasible because many correctly segmented voxels, such as those close to anatomical structure boundaries, have high uncertainty. Hence, an additional patch-based CNN (detection network) is used that takes a cardiac MR image together with the corresponding spatial uncertainty map as input. 
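As a minimal illustration of how the two spatial uncertainty maps defined above can be obtained from the $T$ MC-dropout forward passes, consider the following NumPy sketch (names are illustrative and not taken from the released implementation; computing the e-map from the predictive mean and normalizing it by $\\log_2 C$ are simplifying assumptions):

\\begin{verbatim}
import numpy as np

def uncertainty_maps(probs):
    # probs: softmax outputs of T MC-dropout forward passes for one slice,
    # shape (T, C, H, W) with C tissue classes (here C = 4).
    C = probs.shape[1]
    mean_p = probs.mean(axis=0)                    # predictive mean per class
    # e-map: multi-class entropy of the mean softmax output, scaled to [0, 1]
    e_map = -(mean_p * np.log2(mean_p + 1e-12)).sum(axis=0) / np.log2(C)
    # b-map: mean over classes of the per-voxel sample standard deviation
    # of the T samples (1/(T-1) normalization, as in the equation above)
    b_map = probs.std(axis=0, ddof=1).mean(axis=0)
    seg = mean_p.argmax(axis=0)                    # tissue class per voxel
    return e_map, b_map, seg
\\end{verbatim}

In the reported experiments $T$ was set to \\num{10} (see the ablation study on the number of MC samples).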
For each patch of \\num{8}$\\times$\\num{8} voxels the network generates a probability indicating whether it contains segmentation failure. In the following, the terms patch and region are used interchangeably.\n\t\n\tThe detection network is a shallow Residual Network (S-ResNet) \\cite{he2016deep} consisting of a feature extraction module with output stride eight followed by a classifier indicating the presence of segmentation failure. The first level of the feature extraction module consists of two convolutional layers. The first layer uses \\num{16} kernels of \\num{7}$\\times$\\num{7} voxels and zero-padding of size \\num{3} and second layer \\num{32} kernels of \\num{3}$\\times$\\num{3} voxels and zero-padding of \\num{1} voxel. Level \\num{2} to \\num{4} each consist of one residual block that contains two convolutional layers with \\num{3}$\\times$\\num{3} voxels kernels with zero-padding of size \\num{1}. The first convolutional layer of each residual block uses a strided convolution of \\num{2} voxels to downsample the input. All convolutional layers of the feature extraction module are followed by batch normalization and ReLU function. The number of kernels in the feature extraction module increases from \\num{16} in level \\num{1} to \\num{128} in level \\num{4}. The network is a \\num{2}D patch-level classifier and requires that the size of the two input slices is a multiple of the patch-size. \\label{patch_size} The final classifier consists of three fully convolutional layers, implemented as \\num{1}$\\times$\\num{1} convolutions, with \\num{128} feature maps in the first two layers. The final layer has two channels followed by a softmax function which indicates whether the patch contains segmentation failure. Furthermore, to regularize the model dropout layers ($p=0.5$) were added between the residual blocks and the fully convolutional layers of the classifier.\n\t\t\n\t\\begin{table*}\n\t\t\\caption{Segmentation performance of different combination of model architectures, loss functions and evaluation modes (without or with MC dropout enabled during testing) in terms of Dice coefficient (top) and Hausdorff distance (bottom) (mean $\\pm$ standard deviation). Each combination comprises a block of two rows. A row in which column \\textit{Uncertainty map for detection} indicates e- or b-map shows results for the combined segmentation and detection approach. Numbers accentuated in black\/bold are ranked first in the segmentation only task whereas numbers accentuated in red\/bold are ranked first in the combined segmentation \\& detection task. The last row states the performance of the winning model in the ACDC challenge (on \\num{100} patient images) \\cite{isensee2017automatic}. Number with asterisk indicates statistical significant at \\num{5}\\% level w.r.t. the segmentation-only approach. 
Best viewed in color.}\n\t\t\\label{table_overall_segmentation_performance}\n\t\t\\centering\n\t\t\\tiny\n\t\t\\subfloat[Dice coefficient]{\n\t\t\t\\begin{tabular}{| C{1.6cm} | C{0.8cm} | R{1.7cm} R{1.7cm} R{1.7cm} | R{1.7cm} R{1.7cm} R{1.7cm} | }\n\t\t\t\t\\hline\n\t\t\t\t& \\textbf{Uncertainty} & \\multicolumn{3}{c|}{\\textbf{End-diastole}} & \\multicolumn{3}{c|}{\\textbf{End-systole}} \\\\\n\t\t\t\t\\textbf{Model} & \\textbf{map for detection} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} \\\\ \n\t\t\t\t\\hline\n\t\t\t\t\\rowcolor{LightGreen}\n\t\t\t\tDN-Brier & & \\phantom{x}0.962$\\pm$0.02 & \\phantom{x}0.928$\\pm$0.04) & \\phantom{x}0.875$\\pm$0.03) & \\phantom{x}0.901$\\pm$0.11 & \\phantom{x}0.832$\\pm$0.10) & \\phantom{x}0.884$\\pm$0.04 \\\\\n\t\t\t\t& \\textbf{e-map} & *0.965$\\pm$0.01 & *0.949$\\pm$0.02 & *0.885$\\pm$0.03 & *0.937$\\pm$0.06 & *0.905$\\pm$0.05 & *0.909$\\pm$0.03 \\\\ \n\t\t\t\t\n\t\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightCyan}\n\t\t\t\tDN-Brier+MC & & \\phantom{x}0.961$\\pm$0.02 & \\phantom{x}0.922$\\pm$0.04 & \\phantom{x}0.875$\\pm$0.04 & \\phantom{x}0.912$\\pm$0.08 & \\phantom{x}0.839$\\pm$0.11 & \\phantom{x}0.882$\\pm$0.04 \\\\ \n\t\t\t\t& \\textbf{b-map} & *0.966$\\pm$0.01 & *0.950$\\pm$0.01 & *0.886$\\pm$0.03 & \\textbf{\\textcolor{red}{*0.942}}$\\pm$0.03 & *0.916$\\pm$0.04 & *0.912$\\pm$0.03 \\\\ \n\t\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightGreen}\n\t\t\t\tDN-soft-Dice & & \\phantom{x}0.960$\\pm$0.02 & \\phantom{x}0.921$\\pm$0.04 & \\phantom{x}0.870$\\pm$0.04 & \\phantom{x}0.909$\\pm$0.08 & \\phantom{x}0.812$\\pm$0.12 & \\phantom{x}0.879$\\pm$0.04 \\\\\n\t\t\t\t& \\textbf{e-map} & *0.965$\\pm$0.01 & *0.945$\\pm$0.02 & *0.879$\\pm$0.04 & *0.938$\\pm$0.03 & *0.891$\\pm$0.06 & *0.905$\\pm$0.03 \\\\ \n\t\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightCyan}\n\t\t\t\tDN-soft-Dice+MC & & \\phantom{x}0.958$\\pm$0.02 & \\phantom{x}0.913$\\pm$0.05 & \\phantom{x}0.868$\\pm$0.04 & \\phantom{x}0.907$\\pm$0.07 & \\phantom{x}0.818$\\pm$0.12 & \\phantom{x}0.875$\\pm$0.04 \\\\\n\t\t\t\t& \\textbf{b-map} & *0.964$\\pm$0.01 & *0.944$\\pm$0.02 & *0.877$\\pm$0.04 & *0.939$\\pm$0.03 & *0.900$\\pm$0.05 & *0.904$\\pm$0.03 \\\\ \t\t\t\n\t\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\t\t\n\t\t\t\t\\rowcolor{LightGreen}\n\t\t\t\tDRN-CE & & \\phantom{x}0.961$\\pm$0.02 & \\phantom{x}0.929$\\pm$0.03 & \\phantom{x}0.878$\\pm$0.03 & \\phantom{x}0.912$\\pm$0.06 & \\phantom{x}0.850$\\pm$0.09 & \\phantom{x}0.891$\\pm$0.03 \\\\ \n\t\t\t\t& \\textbf{e-map} & \\phantom{x}0.964$\\pm$0.01 & *0.943$\\pm$0.02 & *0.886$\\pm$0.03 & *0.937$\\pm$0.03 & *0.899$\\pm$0.04 & *0.908$\\pm$0.03 \\\\\n\t\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightCyan}\n\t\t\t\tDRN-CE+MC & & \\phantom{x}0.961$\\pm$0.02 & \\phantom{x}0.926$\\pm$0.03 & \\phantom{x}0.877$\\pm$0.03 & \\phantom{x}0.913$\\pm$0.06 & \\phantom{x}0.847$\\pm$0.10 & \\phantom{x}0.890$\\pm$0.03 \\\\ \n\t\t\t\t& \\textbf{b-map} & *0.965$\\pm$0.01 & *0.948$\\pm$0.01 & *0.887$\\pm$0.03 & *0.939$\\pm$0.03 & *0.911$\\pm$0.04 & *0.909$\\pm$0.03 \\\\ \n\t\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightGreen}\n\t\t\t\tDRN-soft-Dice & & \\phantom{x}0.964$\\pm$0.01 & \\phantom{x}\\textbf{0.937}$\\pm$0.02 & \\phantom{x}0.888$\\pm$0.03 & \\phantom{x}\\textbf{0.919}$\\pm$0.06 & 
\\phantom{x}0.856$\\pm$0.09 & \\phantom{x}\\textbf{0.900}$\\pm$0.03 \\\\ \n\t\t\t\t& \\textbf{e-map} & \\phantom{x}0.967$\\pm$0.01 & *0.945$\\pm$0.02 & \\phantom{x}0.893$\\pm$0.03 & \\phantom{x}0.934$\\pm$0.04 & *0.892$\\pm$0.06 & *0.911$\\pm$0.03 \\\\ \n\t\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightCyan}\n\t\t\t\tDRN-soft-Dice+MC & & \\phantom{x}0.963$\\pm$0.02 & \\phantom{x}0.935$\\pm$0.03 & \\phantom{x}0.886$\\pm$0.03 & \\phantom{x}0.921$\\pm$0.06 & \\phantom{x}\\textbf{0.857}$\\pm$0.09 & \\phantom{x}0.899$\\pm$0.03 \\\\ \n\t\t\t\t& \\textbf{b-map} & \\phantom{x}0.967$\\pm$0.01 & *0.947$\\pm$0.02 & \\phantom{x}0.893$\\pm$0.03 & *0.938$\\pm$0.03 & *0.907$\\pm$0.04 & *0.912$\\pm$0.03 \\\\\n\t\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\t\t\t\t\t\n\t\t\t\t\\rowcolor{LightGreen}\n\t\t\t\tU-net-CE & & \\phantom{x}0.962$\\pm$0.02 & \\phantom{x}0.923$\\pm$0.05 & \\phantom{x}0.878$\\pm$0.03 & \\phantom{x}0.907$\\pm$0.07 & \\phantom{x}0.840$\\pm$0.08 & \\phantom{x}0.885$\\pm$0.03 \\\\ \n\t\t\t\t& \\textbf{e-map} & \\phantom{x}0.966$\\pm$0.01 & *0.946$\\pm$0.02 & *0.890$\\pm$0.03 & *0.935$\\pm$0.04 & *0.901$\\pm$0.06 & *0.909$\\pm$0.03 \\\\ \n\n\t\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightCyan}\n\t\t\t\tU-net-CE+MC & & \\phantom{x}0.962$\\pm$0.02 & \\phantom{x}0.926$\\pm$0.04 & \\phantom{x}0.879$\\pm$0.03 & \\phantom{x}0.909$\\pm$0.07 & \\phantom{x}0.849$\\pm$0.07 & \\phantom{x}0.887$\\pm$0.03 \\\\ \n\t\t\t\t& \\textbf{b-map} & \\phantom{x}0.967$\\pm$0.01 & \\textbf{\\textcolor{red}{*0.954}}$\\pm$0.02 & *0.893$\\pm$0.03 & *0.940$\\pm$0.04 & \\textbf{\\textcolor{red}{*0.920}}$\\pm$0.04 & \\textbf{\\textcolor{red}{*0.914}}$\\pm$0.03 \\\\\n\t\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightGreen}\n\t\t\t\tU-net-soft-Dice & & \\phantom{x}\\textbf{0.965}$\\pm$0.02 & \\phantom{x}0.928$\\pm$0.04 & \\phantom{x}0.888$\\pm$0.03 & \\phantom{x}0.914$\\pm$0.08 & \\phantom{x}0.844$\\pm$0.09 & \\phantom{x}0.896$\\pm$0.03 \\\\ \n\t\t\t\t& \\textbf{e-map} & \\phantom{x}0.968$\\pm$0.01 & *0.943$\\pm$0.03 & *0.898$\\pm$0.03 & \\phantom{x}0.930$\\pm$0.05 & *0.886$\\pm$0.07 & *0.911$\\pm$0.03 \\\\ \n\t\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightCyan}\n\t\t\t\tU-net-soft-Dice+MC & & \\phantom{x}\\textbf{0.965}$\\pm$0.02 & \\phantom{x}0.929$\\pm$0.04 & \\phantom{x}\\textbf{0.889}$\\pm$0.03 & \\phantom{x}0.911$\\pm$0.10 & \\phantom{x}0.845$\\pm$0.09 & \\phantom{x}0.897$\\pm$0.03 \\\\ \n\t\t\t\t& \\textbf{b-map} & \\phantom{x}\\textbf{\\textcolor{red}{0.968}}$\\pm$0.01 & *0.948$\\pm$0.03 & \\textbf{\\textcolor{red}{*0.900}}$\\pm$0.03 & \\phantom{x}0.928$\\pm$0.09 & *0.895$\\pm$0.06 & \\textbf{\\textcolor{red}{*0.914}}$\\pm$0.03 \\\\\n\t\t\t\t\n\t\t\t\t\\hdashline\n\t\t\t\tIsensee et al. 
& & \\phantom{x}0.966& \\phantom{x}0.941 & \\phantom{x}0.899 & \\phantom{x}0.924 & \\phantom{x}0.875\t& \\phantom{x}0.908 \\\\\n\t\t\t\t\n\t\t\t\t\\hline\n\t\t\t\t\n\t\t\t\\end{tabular}\n\t\t\t\\label{table_seg_perf_dsc}\n\t\t} \n\t\t\\vspace{13ex}\n\t\t\\centering\n\t\t\\tiny\n\t\t\\subfloat[Hausdorff Distance]{\n\t\t\t\\begin{tabular}{| C{1.6cm} | C{0.8cm} | R{1.7cm} R{1.7cm} R{1.7cm} | R{1.7cm} R{1.7cm} R{1.7cm} | }\n\t\t\t\t\\hline\n\t\t\t\t& \\textbf{Uncertainty}& \\multicolumn{3}{c|}{\\textbf{End-diastole}} & \\multicolumn{3}{c|}{\\textbf{End-systole}} \\\\\n\t\t\t\t\\textbf{Model} & \\textbf{map for detection} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} \\\\ \n\t\t\t\t\\hline\n\t\t\t\t\\rowcolor{LightGreen}\n\t\t\t\tDN-Brier & & \\phantom{x}6.7$\\pm$3.1 & \\phantom{x}13.5$\\pm$5.9 & \\phantom{x}10.2$\\pm$6.9 & \\phantom{x}10.7$\\pm$7.7 & \\phantom{x}16.7$\\pm$6.8 & \\phantom{x}12.3$\\pm$5.8 \\\\\n\t\t\t\t& \\textbf{e-map} & *5.7$\\pm$2.7 & *11.7$\\pm$5.2 & *\\phantom{x}8.3$\\pm$5.9 & *\\phantom{x}8.0$\\pm$6.5 & *14.2$\\pm$5.6 & *\\phantom{x}9.7$\\pm$5.0 \\\\\n\t\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightCyan}\n\t\t\t\tDN-Brier+MC & & \\phantom{x}6.9$\\pm$3.3 & \\phantom{x}13.1$\\pm$5.2 & \\phantom{xx}9.9$\\pm$5.9 & \\phantom{xx}9.9$\\pm$5.7 & \\phantom{x}15.0$\\pm$6.1 & \\phantom{x}12.0$\\pm$5.2 \\\\\n\t\t\t\t& \\textbf{b-map} & *5.5$\\pm$2.6 & *10.6$\\pm$5.1 & *\\phantom{x}7.4$\\pm$4.2 & *\\phantom{x}7.5$\\pm$6.0 & *12.6$\\pm$5.6 & *\\phantom{x}8.8$\\pm$4.0 \\\\\n\t\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightGreen}\n\t\t\t\tDN-soft-Dice & & \\phantom{x}7.1$\\pm$3.5 & \\phantom{x}14.8$\\pm$6.8 & \\phantom{x}11.0$\\pm$6.6 & \\phantom{x}10.2$\\pm$5.6 & \\phantom{x}17.7$\\pm$7.8 & \\phantom{x}12.9$\\pm$6.2 \\\\\n\t\t\t\t& \\textbf{e-map} & *5.6$\\pm$2.8 & *12.6$\\pm$5.5 & *\\phantom{x}8.6$\\pm$4.6 & *\\phantom{x}8.0$\\pm$5.0 & *14.6$\\pm$5.9 & *\\phantom{x}9.6$\\pm$4.5 \\\\\n\t\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightCyan}\n\t\t\t\tDN-soft-Dice+MC & & \\phantom{x}7.7$\\pm$3.9 & \\phantom{x}14.4$\\pm$6.0 & \\phantom{x}10.5$\\pm$4.9 & \\phantom{x}10.1$\\pm$5.3 & \\phantom{x}17.2$\\pm$8.0 & \\phantom{x}12.5$\\pm$5.3 \\\\\n\t\t\t\t& \\textbf{b-map} & *6.3$\\pm$3.4 & *11.5$\\pm$4.0 & *\\phantom{x}8.6$\\pm$4.8 & *\\phantom{x}7.8$\\pm$4.6 & *13.6$\\pm$4.9 & *\\phantom{x}9.6$\\pm$4.7 \\\\\n\t\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightGreen}\n\t\t\t\tDRN-CE & & \\phantom{x}5.5$\\pm$2.6 & \\phantom{x}11.7$\\pm$5.4 & \\phantom{xx}8.2$\\pm$6.2 & \\phantom{xx}9.1$\\pm$6.4 & \\phantom{x}13.7$\\pm$5.6 & \\phantom{xx}8.9$\\pm$5.3 \\\\\n\t\t\t\t& \\textbf{e-map} & *4.5$\\pm$1.9 & *\\phantom{x}9.0$\\pm$4.5 & *\\phantom{x}6.3$\\pm$4.1 & *\\phantom{x}6.2$\\pm$4.4 & *11.1$\\pm$5.3 & \\textbf{\\textcolor{red}{*\\phantom{x}6.7}}$\\pm$4.2 \\\\\n\t\t\t\t\\hdashline[1pt\/2pt]\n\n\t\t\t\t\\rowcolor{LightCyan}\n\t\t\t\tDRN-CE+MC & & \\phantom{x}5.6$\\pm$2.6 & \\phantom{x}11.9$\\pm$5.5 & \\phantom{xx}8.0$\\pm$5.9 & \\phantom{xx}8.7$\\pm$5.5 & \\phantom{x}13.5$\\pm$5.9 & \\phantom{xx}\\textbf{8.5}$\\pm$4.5 \\\\\t\t \n\t\t\t\t& \\textbf{b-map} & \\textbf{\\textcolor{red}{*4.2}}$\\pm$1.6 & \\textbf{\\textcolor{red}{*\\phantom{x}8.1}}$\\pm$3.7 & \\textbf{\\textcolor{red}{*\\phantom{x}6.1}}$\\pm$4.2 & \\textbf{\\textcolor{red}{*\\phantom{x}5.4}}$\\pm$3.6 & 
\\textbf{\\textcolor{red}{*10.1}}$\\pm$5.5 & *\\phantom{x}6.8$\\pm$3.8 \\\\\n\t\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightGreen}\n\t\t\t\tDRN-soft-Dice & & \\phantom{x}\\textbf{5.5}$\\pm$2.8 & \\phantom{x}11.9$\\pm$6.1 & \\phantom{xx}\\textbf{7.7}$\\pm$5.9 & \\phantom{xx}8.5$\\pm$5.0 & \\phantom{x}13.5$\\pm$5.5 & \\phantom{xx}8.9$\\pm$5.1 \\\\\n\t\t\t\t& \\textbf{e-map} & *4.6$\\pm$2.2 & *\\phantom{x}9.4$\\pm$4.5 & \\phantom{xx}6.7$\\pm$4.7 & *\\phantom{x}6.7$\\pm$4.4 & *11.6$\\pm$5.4 & *\\phantom{x}7.0$\\pm$3.3 \\\\\n\t\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightCyan}\n\t\t\t\tDRN-soft-Dice+MC & & \\phantom{x}5.7$\\pm$3.2 & \\phantom{x}\\textbf{11.5}$\\pm$5.1 & \\phantom{xx}8.0$\\pm$5.5 & \\phantom{xx}\\textbf{8.3}$\\pm$4.5 & \\phantom{x}\\textbf{13.3}$\\pm$5.1 & \\phantom{xx}8.9$\\pm$5.1 \\\\\n\t\t\t\t& \\textbf{b-map} & *4.5$\\pm$2.2 & *\\phantom{x}9.3$\\pm$4.5 & *\\phantom{x}6.3$\\pm$4.0 & *\\phantom{x}6.2$\\pm$4.1 & *10.4$\\pm$5.0 & *\\phantom{x}7.0$\\pm$3.4 \\\\\n\t\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightGreen}\n\t\t\t\tU-net-CE & & \\phantom{x}6.4$\\pm$4.3 & \\phantom{x}15.7$\\pm$8.6 & \\phantom{xx}9.0$\\pm$6.0 & \\phantom{xx}9.7$\\pm$5.3 & \\phantom{x}17.0$\\pm$7.7 & \\phantom{x}12.7$\\pm$8.2 \\\\\n\t\t\t\t& \\textbf{e-map} & *4.9$\\pm$3.9 & *12.2$\\pm$8.1 & *\\phantom{x}7.1$\\pm$5.6 & *\\phantom{x}6.1$\\pm$3.2 & *12.6$\\pm$6.5 & *\\phantom{x}8.4$\\pm$6.3 \\\\\n\t\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightCyan}\n\t\t\t\tU-net-CE+MC & & \\phantom{x}6.2$\\pm$4.2 & \\phantom{x}15.3$\\pm$8.4 & \\phantom{xx}8.8$\\pm$5.8 & \\phantom{xx}9.2$\\pm$5.0 & \\phantom{x}16.5$\\pm$7.6 & \\phantom{x}12.0$\\pm$8.0 \\\\\n\t\t\t\t& \\textbf{b-map} & *4.3$\\pm$1.6 & *\\phantom{x}9.9$\\pm$6.6 & *\\phantom{x}6.7$\\pm$4.8 & *\\phantom{x}5.4$\\pm$2.8 & *10.3$\\pm$4.7 & *\\phantom{x}7.6$\\pm$6.2 \\\\\n\t\t\t\t\n\t\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightGreen}\n\t\t\t\tU-net-soft-Dice & & \\phantom{x}6.1$\\pm$3.9 & \\phantom{x}14.1$\\pm$7.6 & \\phantom{x}10.6$\\pm$8.4 & \\phantom{xx}9.2$\\pm$7.1 & \\phantom{x}16.3$\\pm$7.5 & \\phantom{x}12.6$\\pm$9.6 \\\\\n\t\t\t\t& \\textbf{e-map} & *4.6$\\pm$2.3 & *11.3$\\pm$7.2 & *\\phantom{x}7.5$\\pm$5.5 & *\\phantom{x}7.3$\\pm$6.5 & *13.7$\\pm$7.6 & *\\phantom{x}9.8$\\pm$8.0 \\\\\n\t\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\t\n\t\t\t\t\\rowcolor{LightCyan}\n\t\t\t\tU-net-soft-Dice+MC & & \\phantom{x}6.2$\\pm$3.9 & \\phantom{x}14.1$\\pm$7.7 & \\phantom{x}10.5$\\pm$8.7 & \\phantom{xx}9.0$\\pm$7.0 & \\phantom{x}15.8$\\pm$7.5 & \\phantom{x}12.1$\\pm$9.2 \\\\\n\t\t\t\t& \\textbf{b-map} & *4.5$\\pm$2.1 & *10.4$\\pm$7.2 & *\\phantom{x}7.6$\\pm$7.0 & *\\phantom{x}7.3$\\pm$6.9 & *12.9$\\pm$6.6 & *\\phantom{x}9.8$\\pm$8.4 \\\\\n\t\t\t\t\\hdashline\n\t\t\t\tIsensee et al. & & \\phantom{x}7.1 & \\phantom{x}14.3 & \\phantom{xx}8.9 & \\phantom{xx}9.8 & \\phantom{x}16.3 & \\phantom{x}10.4 \\\\\n\t\t\t\t\\hline\n\t\t\t\t\n\t\t\t\\end{tabular}\n\t\t\t\\label{table_seg_perf_hd}\n\t\t} \n\t\t\n\t\\end{table*} \n\t\n\t\n\t\\section*{Evaluation}\\label{evaluation}\n\t\t\n\tAutomatic segmentation performance, as well as performance after simulating the correction of detected segmentation failures and after manual expert correction was evaluated. For this, the \\num{3}D Dice-coefficient (DC) and \\num{3}D Hausdorff distance (HD) between manual and (corrected) automatic segmentation were computed. 
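\n\tAs a rough illustration of how these two metrics can be obtained from binary \\num{3}D masks, the following sketch uses NumPy and SciPy's Euclidean distance transform; it is a simplified example under assumed inputs (boolean arrays, explicit voxel spacing), not the exact implementation used in this work.\n\t\\begin{verbatim}\nimport numpy as np\nfrom scipy import ndimage\n\ndef dice_coefficient(ref, auto):\n    # Overlap-based agreement between reference and automatic mask.\n    ref, auto = ref.astype(bool), auto.astype(bool)\n    return 2.0 * np.logical_and(ref, auto).sum() / (ref.sum() + auto.sum())\n\ndef hausdorff_distance(ref, auto, spacing=(1.0, 1.0, 1.0)):\n    # Largest distance from a foreground voxel of one mask to the\n    # nearest foreground voxel of the other mask (symmetric maximum).\n    ref, auto = ref.astype(bool), auto.astype(bool)\n    dist_to_ref = ndimage.distance_transform_edt(~ref, sampling=spacing)\n    dist_to_auto = ndimage.distance_transform_edt(~auto, sampling=spacing)\n    return max(dist_to_auto[ref].max(), dist_to_ref[auto].max())\n\\end{verbatim}\n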
Furthermore, the following clinical metrics were computed for manual and (corrected) automatic segmentation: left ventricle end-diastolic volume (EDV); left ventricle ejection fraction (EF); right ventricle EDV; right ventricle ejection fraction; and left ventricle myocardial mass. Following Bernard et al.\\cite{bernard2018deep}, three performance indices were computed for each clinical metric using the measurements based on manual and (corrected) automatic segmentation: Pearson correlation coefficient; mean difference (bias and standard deviation); and mean absolute error (MAE).\n\n\tTo evaluate the detection performance of the automatic method, precision-recall curves for the identification of slices that require correction were computed. A slice is considered positive if it contains at least one image region with a segmentation failure. Identifying slices that contain segmentation failures might ease manual correction of automatic segmentations and thereby support accurate segmentation in daily clinical practice. To further evaluate detection performance, the detection rate of segmentation failures was assessed at the voxel level. More specifically, sensitivity as a function of the number of false positive regions was evaluated, because manual correction is presumed to be performed at this level.\n\t\n\tFinally, after simulated and manual correction of the automatically detected segmentation failures, segmentation was re-evaluated and the significance of the differences between the DCs, HDs and clinical metrics was tested with a Mann--Whitney U test.\n\t\n\n\t\n\t\\section*{Experiments}\n\t\n\tFor stratified four-fold cross-validation, the dataset was split into training (75\\%) and test (25\\%) sets. The splitting was done on a patient level, so there was no overlap in patient data between training and test sets. Furthermore, patients were randomly chosen from each of the five patient groups defined by disease. Each patient has one volume for the ED and the ES time point, respectively. \n\t\n\t\\subsection*{Training segmentation networks} \\label{training_segmentation}\n\t\n\tDRN and U-net were trained with a patch size of \\num{128}$\\times$\\num{128} voxels, which is a multiple of the output stride of their contracting paths. For training the dilated CNN (DN), samples of \\num{151}$\\times$\\num{151} voxels were used. Zero-padding to \\num{281}$\\times$\\num{281} was performed to accommodate the \\num{131}$\\times$\\num{131} voxel receptive field induced by the dilation factors. Training samples were randomly chosen from the training set and augmented by \\num{90} degree rotations of the images. All models were initially trained with three loss functions: soft-Dice\\cite{milletari2016v} (SD); cross-entropy (CE); and Brier loss\\cite{brier1950verification}. However, for the evaluation of the combined segmentation and detection approach, the two best performing loss functions were chosen for each model architecture: soft-Dice for all models; cross-entropy for DRN and U-net; and Brier loss for DN.
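\n\tAs an illustration of the training-sample extraction described above, the random cropping and \\num{90} degree rotation augmentation could look as follows; this is a sketch with assumed \\num{2}D array inputs, not the exact pipeline used in this work.\n\t\\begin{verbatim}\nimport numpy as np\n\ndef sample_training_patch(image, labels, patch_size=128, rng=np.random):\n    # Random crop of patch_size x patch_size voxels ...\n    h, w = image.shape\n    y = rng.randint(0, h - patch_size + 1)\n    x = rng.randint(0, w - patch_size + 1)\n    img = image[y:y + patch_size, x:x + patch_size]\n    lbl = labels[y:y + patch_size, x:x + patch_size]\n    # ... followed by a random rotation by a multiple of 90 degrees.\n    k = rng.randint(4)\n    return np.rot90(img, k), np.rot90(lbl, k)\n\\end{verbatim}\n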
For completeness, we provide the equations for all three loss functions used.\n\t\n\t\n\t\\begingroup\n\t\\small\n\t\\begin{align}\n\t\t\\text{soft-Dice}_{c} = \\frac{2\\sum_{i=1}^{N} R_{c}(i) \\; A_{c}(i) }{\\sum_{i=1}^{N} R_{c}(i) + \\sum_{i=1}^{N} A_{c}(i)} \\; ,\n\t\\end{align}\n\t\\endgroup\n\twhere $N$ denotes the number of voxels in an image, $R_{c}$ is the binary reference image for class $c$ and $A_{c}$ is the probability map for class $c$.\n\t\n\t\\begingroup\n\t\\small\n\t\\begin{align}\n\t\t\\text{Cross-Entropy}_{c} = - \\; \\sum_{i=1}^{N} t_{ic} \\; \\log \\; p(y_i=c|x_i) \\; , \\text{ where } t_{ic} = 1 \\text{ if } y_{i}=c, \\text{ and \\num{0} otherwise.}\n\t\\end{align}\n\t\\endgroup\n\t\n\t\\begingroup\n\t\\small\n\t\\begin{align}\n\t\t\\text{Brier}_{c} = \\sum_{i=1}^{N} \\big(t_{ic} - p(y_i=c|x_{i}) \\big)^2 \\; , \\text{ where } t_{ic} = 1 \\text{ if } y_{i}=c, \\text{ and \\num{0} otherwise.}\n\t\\end{align}\n\t\\endgroup\n\t\n\tHere, $N$ denotes the number of voxels in an image and $p$ denotes the probability for a specific voxel $x_i$ with corresponding reference label $y_i$ for class $c$.\n\t\n\tChoosing the Brier loss instead of CE to train the DN model was motivated by our preliminary work, which showed that the segmentation performance of the DN model was best when trained with the Brier loss\\cite{sander2019towards}.\n\t\n\tAll models were trained for 100,000 iterations. DRN and U-net were trained with a learning rate of \\num{0.001}, which decayed with a factor of \\num{0.1} after every 25,000 steps. Training of the DN used the snapshot ensemble technique~\\cite{huang2017snapshot}, where after every 10,000 iterations the learning rate was reset to its original value of \\num{0.02}.\n\t\n\tAll three segmentation networks were trained using mini-batch stochastic gradient descent with a batch size of \\num{16}. Network parameters were optimized using the Adam optimizer \\cite{kingmadp}. Furthermore, models were regularized with weight decay to increase generalization performance. \n\t\n\t\\subsection*{Training detection network}\\label{label_training_detection}\n\t\n\tTo train the detection model, a subset of the errors made by the segmentation model is used. Segmentation errors that are presumably within the range of inter-observer variability and therefore do not necessarily require correction (tolerated errors) are excluded from the set of errors that need to be detected and corrected (segmentation failures). To distinguish between tolerated errors and the set of segmentation failures $\\mathcal{S}_I$, the Euclidean distance of an incorrectly segmented voxel to the boundary of the reference target structure is used. For each anatomical structure a \\num{2}D distance transform map is computed that provides for each voxel the distance to the anatomical structure boundary. To differentiate between tolerated errors and the set of segmentation failures $\\mathcal{S}_I$, an acceptable tolerance threshold is applied. A stricter threshold is used for errors located inside than outside the anatomical structure, because automatic segmentation methods have a tendency to undersegment cardiac structures in CMRI. Hence, in all experiments the acceptable tolerance threshold was set to three voxels (equivalent to on average \\SI{4.65}{\\milli\\meter}) and two voxels (equivalent to on average \\SI{3.1}{\\milli\\meter}) for segmentation errors located outside and inside the target structure, respectively.
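\n\tA rough sketch of this thresholding step is given below, assuming binary reference and automatic masks as NumPy arrays and using SciPy's Euclidean distance transform; the additional cluster-size and slice-level criteria described next are omitted, and this is not the exact implementation used in this work.\n\t\\begin{verbatim}\nimport numpy as np\nfrom scipy import ndimage\n\ndef candidate_failures(ref, auto, tol_outside=3, tol_inside=2):\n    ref, auto = ref.astype(bool), auto.astype(bool)\n    errors = ref ^ auto  # all incorrectly segmented voxels\n    # Distance (in voxels) from background voxels to the nearest reference\n    # voxel, and from reference voxels to the nearest background voxel.\n    dist_out = ndimage.distance_transform_edt(~ref)\n    dist_in = ndimage.distance_transform_edt(ref)\n    outside = errors & ~ref & (dist_out > tol_outside)\n    inside = errors & ref & (dist_in > tol_inside)\n    return outside | inside\n\\end{verbatim}\n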
Furthermore, a segmentation error only belongs to $\\mathcal{S}_I$ if it is part of a \\num{2}D \\num{4}-connected cluster of minimum size \\num{10} voxels. This value was found in preliminary experiments by evaluating the values $\\{1, 5, 10, 15, 20\\}$. However, for apical slices all segmentation errors are included in $\\mathcal{S}_I$ regardless of whether they fulfill the minimum size requirement, because in these slices anatomical structures are relatively small and manual segmentation is prone to large inter-observer variability~\\cite{bernard2018deep}. Finally, segmentation errors located in slices above the base or below the apex are always included in the set of segmentation failures.\n\t\n\tUsing the set $\\mathcal{S}_I$, a binary label $t_j$ is assigned to each patch $P_j^{(I)}$, indicating whether $P_j^{(I)}$ contains at least one voxel belonging to the set $\\mathcal{S}_I$, where $j \\in \\{1 \\dots M \\}$ and $M$ denotes the number of patches in a slice $I$. \n\t\n\tThe detection network is trained by minimizing a weighted binary cross-entropy loss:\n\t\n\t\\begingroup\n\t\\small\n\t\\begin{equation} \\label{eq_detection_loss}\n\t\t\\mathcal{L}_{DT} = - \\sum_{j \\in P^{(I)}} \\big[ w_{pos} \\; t_j \\log p_j + (1 - t_j) \\log (1 - p_j) \\big] \\; ,\n\t\\end{equation}\n\t\\endgroup\n\t\n\twhere $w_{pos}$ represents a scalar weight, $t_j$ denotes the binary reference label and $p_j$ is the softmax probability indicating whether a particular image region $P_j^{(I)}$ contains at least one segmentation failure. The average percentage of regions in a patient volume containing segmentation failures ranges from \\num{1.5} to \\num{3} percent, depending on the segmentation architecture and loss function used to train the segmentation model. To train a detection network, $w_{pos}$ was set to the ratio of the average percentage of negative samples to the average percentage of positive samples.\n\t\n\tEach fold was trained using spatial uncertainty maps and automatic segmentation masks generated while training the segmentation networks. Hence, there was no overlap in patient data between training and test sets across the segmentation and detection tasks. In total \\num{12} detection models were trained and evaluated, resulting from the different combinations of \\num{3} model architectures (DRN, DN and U-net), \\num{2} loss functions (DRN and U-net with CE and soft-Dice, DN with Brier and soft-Dice) and \\num{2} uncertainty maps (e-maps, b-maps).\n\t\n\t\n\t\\begin{table}\n\t\t\\caption{Segmentation performance of different combinations of model architectures, loss functions and evaluation modes (without or with MC dropout (MC) enabled during testing) in terms of clinical metrics: left ventricle (LV) end-diastolic volume (EDV); LV ejection fraction (EF); right ventricle (RV) EDV; RV ejection fraction; and LV myocardial mass. Quantitative results compare clinical metrics based on reference segmentations with 1) automatic segmentations and 2) simulated manual correction of automatic segmentations using spatial uncertainty maps. $\\rho$ denotes the Pearson correlation coefficient, \\textit{bias} denotes the mean difference between the two measurements (mean $\\pm$ standard deviation) and \\textit{MAE} denotes the mean absolute error between the two measurements. Each combination comprises a block of two rows. A row in which column \\textit{Uncertainty map for detection} indicates e- or b-map shows results for the combined segmentation and detection approach.
Numbers accentuated in black\/bold are ranked first in the segmentation only task. Numbers in red indicate statistical significant at \\num{5}\\% level w.r.t. the segmentation-only approach for the specific clinical metric. Best viewed in color.}\n\t\t\\label{table_cardiac_function_indices}\n\t\t\\tiny\n\t\t\\begin{tabular}{| C{1.6cm} | C{1.cm} | C{0.3cm} c C{0.3cm} | C{0.2cm} C{0.8cm} C{0.3cm} | C{0.3cm} C{0.8cm} C{0.3cm} | C{0.3cm} C{0.8cm} C{0.3cm} | C{0.3cm} C{0.8cm} C{0.3cm} |}\n\t\t\t\\hline\n\t\t\t& \\multicolumn{1}{c|}{\\thead{\\textbf{Uncertainty} \\\\ \\textbf{map for} \\\\ \\textbf{detection}}} & \\multicolumn{3}{c}{\\textbf{LV$_{EDV}$}} & \\multicolumn{3}{c}{\\textbf{LV$_{EF}$}} & \\multicolumn{3}{c}{\\textbf{RV$_{EDV}$}} & \\multicolumn{3}{c}{\\textbf{RV$_{EF}$}} & \\multicolumn{3}{c|}{\\textbf{LVM$_{Mass}$}} \\\\\n\t\t\n\t\t\n\t\t\t\\textbf{Method} & & \\textbf{$\\rho$} & \\multicolumn{1}{l}{\\textbf{bias$\\pm\\sigma$}} & \\textbf{MAE} & \\textbf{$\\rho$} & \\textbf{bias$\\pm \\sigma$} & \\textbf{MAE} & \\textbf{$\\rho$} & \\textbf{bias$\\pm \\sigma$} & \\textbf{MAE} & \\textbf{$\\rho$} & \\textbf{bias$\\pm \\sigma$} & \\textbf{MAE} & \\multicolumn{1}{c}{\\textbf{$\\rho$}} & \\textbf{bias$\\pm \\sigma$} & \\textbf{MAE} \\\\\n\t\t\t\\hline\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tDN-Brier & & 0.997 & \\phantom{x}\\textbf{0.0$\\pm$6.1} & 4.5 & 0.892 & \\phantom{x}2.2$\\pm$\\phantom{x}9.2 & 4.2 & 0.977 & \\textbf{-0.2$\\pm$11.8} & \\phantom{x}8.5 & 0.834 & 5.3$\\pm$10.3 & \\phantom{x}8.5 & 0.984 & -2.7$\\pm$\\phantom{x}9.0 & \\phantom{x}\\textbf{7.0} \\\\\n\t\t\t& e-map & 0.997 & \\phantom{x}0.0$\\pm$5.5 & 4.0 & 0.982 & \\phantom{x}0.1$\\pm$\\phantom{x}3.8 & 2.2 & 0.992 & \\phantom{x}0.0$\\pm$\\phantom{x}6.9 & \\phantom{x}5.2 & 0.955 & 1.9$\\pm$\\phantom{x}5.5 & \\phantom{x}4.1 & 0.986 & -2.1$\\pm$\\phantom{x}8.4 & \\phantom{x}6.6 \\\\\n\t\t\t\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tDN-Brier+MC & & 0.997 & \\phantom{x}1.6$\\pm$6.0 & 4.4 & 0.921 & \\phantom{x}1.1$\\pm$\\phantom{x}7.9 & 3.9 & 0.975 & \\phantom{x}6.7$\\pm$12.4 & \\phantom{x}9.6 & 0.854 & 3.5$\\pm$\\phantom{x}9.9 & \\phantom{x}7.7 & 0.984 & \\phantom{x}0.7$\\pm$\\phantom{x}9.2 & \\phantom{x}7.1 \\\\\n\t\t\t& b-map & 0.998 & \\phantom{x}1.0$\\pm$5.3 & 3.9 & 0.991 & \\phantom{x}0.0$\\pm$\\phantom{x}2.7 & 1.9 & 0.993 & \\phantom{x}3.2$\\pm$\\phantom{x}6.7 & \\phantom{x}5.7 & 0.975 & 0.8$\\pm$\\phantom{x}4.0 & \\phantom{x}3.0 & 0.987 & \\phantom{x}0.1$\\pm$\\phantom{x}8.3 & \\phantom{x}6.5 \\\\\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tDN-soft-Dice & & 0.996 & \\phantom{x}1.2$\\pm$6.5 & 4.9 & 0.918 & \\phantom{x}1.5$\\pm$\\phantom{x}8.0 & 3.9 & 0.972 & \\phantom{x}\\textbf{0.2$\\pm$13.0} & \\phantom{x}9.6 & 0.802 & 7.2$\\pm$11.3 & 10.2 & 0.982 & -4.5$\\pm$\\phantom{x}9.6 & \\phantom{x}8.5 \\\\\n\t\t\t& e-map & 0.997 & \\phantom{x}1.0$\\pm$5.5 & 4.2 & 0.989 & \\phantom{x}0.2$\\pm$\\phantom{x}3.0 & 2.2 & 0.990 & \\phantom{x}0.2$\\pm$\\phantom{x}7.6 & \\phantom{x}5.9 & \\textcolor{red}{0.940} & \\textcolor{red}{3.3$\\pm$\\phantom{x}6.2} & \\textcolor{red}{\\phantom{x}5.2} & 0.983 & -4.3$\\pm$\\phantom{x}9.3 & \\phantom{x}8.2 \\\\\n\t\t\t\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tDN-soft-Dice+MC & & 0.996 & \\phantom{x}3.2$\\pm$7.1 & 5.6 & 0.958 & \\phantom{x}0.4$\\pm$\\phantom{x}5.7 & 3.6 & 0.964 & \\phantom{x}8.1$\\pm$14.9 & 12.3 & 0.827 & 4.8$\\pm$11.0 & \\phantom{x}8.9 & 0.978 & -0.7$\\pm$10.7 & \\phantom{x}8.3 \\\\\n\t\t\t& b-map & 0.997 & 
\\phantom{x}2.2$\\pm$5.6 & 4.4 & 0.988 & -0.2$\\pm$\\phantom{x}3.1 & 2.2 & 0.990 & \\phantom{x}4.0$\\pm$\\phantom{x}7.7 & \\phantom{x}7.0 & 0.959 & 1.8$\\pm$\\phantom{x}5.1 & \\phantom{x}4.1 & 0.982 & -1.4$\\pm$\\phantom{x}9.5 & \\phantom{x}7.6 \\\\\n\t\t\t\n\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tDRN-CE & & 0.997 & -0.2$\\pm$5.5 & 4.1 & 0.968 & \\phantom{x}1.2$\\pm$\\phantom{x}5.0 & 3.5 & 0.976 & \\phantom{x}1.5$\\pm$12.1 & \\phantom{x}8.5 & 0.870 & 1.3$\\pm$\\phantom{x}9.2 & \\phantom{x}6.9 & 0.980 & \\phantom{x}\\textbf{0.6$\\pm$10.2} & \\phantom{x}7.8 \\\\\n\t\t\t& e-map & 0.998 & \\phantom{x}0.2$\\pm$4.5 & 3.5 & 0.992 & \\phantom{x}0.2$\\pm$\\phantom{x}2.5 & 1.9 & 0.988 & \\phantom{x}1.4$\\pm$\\phantom{x}8.5 & \\phantom{x}6.2 & 0.952 & 0.8$\\pm$\\phantom{x}5.6 & \\phantom{x}4.2 & 0.985 & \\phantom{x}0.4$\\pm$\\phantom{x}8.7 & \\phantom{x}6.8 \\\\\n\t\t\t\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tDRN-CE+MC & & \\textbf{0.998} & \\phantom{x}1.0$\\pm$4.9 & \\textbf{3.9} & 0.972 & \\phantom{x}0.8$\\pm$\\phantom{x}4.6 & 3.1 & 0.973 & \\phantom{x}4.8$\\pm$12.8 & \\phantom{x}9.4 & 0.876 & \\textbf{0.4$\\pm$\\phantom{x}9.1} & \\phantom{x}\\textbf{6.6} & 0.981 & \\phantom{x}1.9$\\pm$\\phantom{x}9.9 & \\phantom{x}7.6 \\\\\n\t\t\t& b-map & 0.998 & \\phantom{x}0.7$\\pm$4.6 & 3.6 & 0.992 & -0.1$\\pm$\\phantom{x}2.5 & 1.8 & 0.992 & \\phantom{x}2.9$\\pm$\\phantom{x}6.9 & \\phantom{x}5.7 & 0.967 & 0.6$\\pm$\\phantom{x}4.6 & \\phantom{x}3.4 & 0.987 & \\phantom{x}1.2$\\pm$\\phantom{x}8.3 & \\phantom{x}6.6 \\\\\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tDRN-soft-Dice & & \\textbf{0.998} & \\phantom{x}0.8$\\pm$5.1 & 4.0 & 0.976 & \\phantom{x}0.2$\\pm$\\phantom{x}4.4 & 3.0 & 0.980 & \\phantom{x}\\textbf{0.2$\\pm$11.0} & \\phantom{x}\\textbf{7.5} & \\textbf{0.882} & 3.1$\\pm$\\phantom{x}8.7 & \\phantom{x}6.8 & 0.984 & -3.5$\\pm$\\phantom{x}9.1 & \\phantom{x}7.5 \\\\\n\t\t\t& e-map & 0.998 & \\phantom{x}0.7$\\pm$4.4 & 3.5 & 0.987 & -0.1$\\pm$\\phantom{x}3.1 & 2.2 & 0.987 & \\phantom{x}0.1$\\pm$\\phantom{x}9.1 & \\phantom{x}6.4 & 0.938 & 1.9$\\pm$\\phantom{x}6.3 & \\phantom{x}4.9 & 0.986 & -3.5$\\pm$\\phantom{x}8.7 & \\phantom{x}7.1 \\\\\n\t\t\t\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tDRN-soft-Dice+MC & & \\textbf{0.998} & \\phantom{x}1.8$\\pm$5.1 & \\textbf{3.9} & \\textbf{0.979} & -0.3$\\pm$\\phantom{x}4.1 & 2.9 & 0.977 & \\phantom{x}3.5$\\pm$11.7 & \\phantom{x}8.1 & 0.868 & 1.7$\\pm$\\phantom{x}9.5 & \\phantom{x}6.8 & 0.983 & -1.4$\\pm$\\phantom{x}9.5 & \\phantom{x}7.4 \\\\\n\t\t\t& b-map & 0.998 & \\phantom{x}1.7$\\pm$4.7 & 3.7 & 0.990 & -0.2$\\pm$\\phantom{x}2.9 & 2.1 & 0.989 & \\phantom{x}2.3$\\pm$\\phantom{x}8.1 & \\phantom{x}5.8 & 0.959 & 0.8$\\pm$\\phantom{x}5.2 &\\phantom{x}3.8 & 0.986 & -1.3$\\pm$\\phantom{x}8.5 & \\phantom{x}6.8 \\\\\n\t\t\t\n\t\t\t\\hdashline[5pt\/5pt]\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tU-net-CE & & 0.995 & -4.7$\\pm$7.2 & 6.1 & 0.954 & \\phantom{x}4.1$\\pm$\\phantom{x}6.0 & 5.1 & 0.963 & -7.6$\\pm$15.2 & 12.1 & 0.870 & 5.6$\\pm$\\phantom{x}9.0 & \\phantom{x}8.1 & 0.971 & -8.5$\\pm$12.2 & 11.5 \\\\\n\t\t\t& e-map & 0.998 & -3.2$\\pm$4.8 & 4.4 & 0.992 & \\phantom{x}1.7$\\pm$\\phantom{x}2.6 & 2.4 & 0.987 & -4.1$\\pm$\\phantom{x}9.1 & \\phantom{x}6.7 & 0.957 & 2.6$\\pm$\\phantom{x}5.2 & \\phantom{x}4.1 & 0.983 & -5.7$\\pm$\\phantom{x}9.3 & \\phantom{x}8.2 \\\\\n\t\t\t\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tU-net-CE+MC & & 0.995 & -4.3$\\pm$7.2 & 5.9 & 
0.958 & \\phantom{x}3.8$\\pm$\\phantom{x}5.8 & 4.9 & 0.968 & -4.8$\\pm$14.1 & 10.7 & 0.867 & 5.0$\\pm$\\phantom{x}9.1 & \\phantom{x}7.9 & 0.972 & -8.1$\\pm$12.0 & 11.1 \\\\\n\t\t\t& b-map & 0.997 & -3.5$\\pm$5.5 & 4.9 & 0.990 & \\phantom{x}1.6$\\pm$\\phantom{x}2.9 & 2.6 & 0.992 & -1.8$\\pm$\\phantom{x}7.0 & \\phantom{x}4.9 & 0.974 & 1.6$\\pm$\\phantom{x}4.1 & \\phantom{x}3.3 & 0.981 & -6.8$\\pm$10.0 & \\phantom{x}9.4 \\\\\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\n\t\t\t\\rowcolor{LightGreen}\n\t\t\tU-net-soft-Dice & & 0.997 & -2.0$\\pm$6.0 & 4.5 & 0.853 & \\phantom{x}3.6$\\pm$10.9 & 5.0 & 0.968 & -1.0$\\pm$14.1 & 10.0 & 0.782 & 4.8$\\pm$11.6 & \\phantom{x}9.0 & \\textbf{0.985} & -7.7$\\pm$\\phantom{x}8.8 & \\phantom{x}9.2 \\\\\n\t\t\t& e-map & 0.997 & -1.7$\\pm$5.3 & 4.1 & 0.969 & \\phantom{x}1.9$\\pm$\\phantom{x}4.9 & 3.3 & 0.981 & -0.1$\\pm$10.9 & \\phantom{x}7.5 & 0.919 & 3.3$\\pm$\\phantom{x}7.0 & \\phantom{x}5.9 & 0.984 & -6.6$\\pm$\\phantom{x}9.0 & \\phantom{x}8.7 \\\\\n\t\t\t\n\t\t\t\\hdashline[1pt\/2pt]\n\t\t\t\\rowcolor{LightCyan}\n\t\t\tU-net-soft-Dice+MC & & 0.997 & -1.8$\\pm$5.9 & 4.4 & 0.941 & \\phantom{x}3.0$\\pm$\\phantom{x}6.7 & 4.4 & 0.969 & \\phantom{x}0.6$\\pm$13.9 & \\phantom{x}9.8 & 0.792 & 4.4$\\pm$11.3 & \\phantom{x}8.7 & \\textbf{0.985} & -7.2$\\pm$\\phantom{x}8.9 & \\phantom{x}8.9 \\\\\n\t\t\t& b-map & 0.997 & -1.5$\\pm$5.3 & 4.1 & 0.979 & \\phantom{x}1.1$\\pm$\\phantom{x}4.1 & 2.9 & 0.985 & \\phantom{x}1.2$\\pm$\\phantom{x}9.4 & \\phantom{x}6.5 & 0.939 & 2.9$\\pm$\\phantom{x}6.2 & \\phantom{x}4.9 & 0.984 & -5.9$\\pm$\\phantom{x}9.0 & \\phantom{x}8.5 \\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{table}\n\t\n\t\n\tThe patches used to train the network were selected randomly (\\nicefrac{2}{3}), or were forced (\\nicefrac{1}{3}) to contain at least one segmentation failure by randomly selecting a scan containing segmentation failure, followed by random sampling of a patch containing at least one segmentation failure. During training the patch size was fixed to \\num{80}$\\times$\\num{80} voxels. To reduce the number of background voxels during testing, inputs were cropped based on a minimal enclosing, rectangular bounding box that was placed around the automatic segmentation mask. Inputs always had a minimum size of \\num{80}$\\times$\\num{80} voxels or were forced to a multiple of the output grid spacing of eight voxels in both direction required by the patch-based detection network. The patches of size \\num{8}$\\times$\\num{8} voxels did not overlap. In cases where the automatic segmentation mask only contains background voxels (scans above the base or below apex of the heart) input scans were center-cropped to a size of \\num{80}$\\times$\\num{80} voxels. \n\t\n\tModels were trained for 20,000 iterations using mini-batch stochastic gradient descent with batch-size \\num{32} and Adam as optimizer\\cite{kingmadp}. Learning rate was set to \\num{0.0001} and decayed with a factor of \\num{0.1} after \\num{10,000} steps. 
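\n\tFor illustration, the weighted loss of Eq.~\\ref{eq_detection_loss} used for this training can be written compactly as in the following PyTorch-style sketch; the tensor shapes and the clamping for numerical stability are assumptions, not part of the original description.\n\t\\begin{verbatim}\nimport torch\n\ndef detection_loss(p, t, w_pos):\n    # p: predicted failure probabilities per patch, t: binary patch labels,\n    # w_pos: scalar weight for the (rare) positive patches.\n    p = p.clamp(1e-6, 1.0 - 1e-6)\n    loss = -(w_pos * t * torch.log(p) + (1.0 - t) * torch.log(1.0 - p))\n    return loss.sum()\n\\end{verbatim}\n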
Furthermore, the dropout probability was set to \\num{0.5} and weight decay was applied to increase generalization performance.\n\t\n\t\\begin{figure}[t]\n\t\t\\center\n\t\t\\subfloat[]{\\includegraphics[width=3.4in, height=1.7in]{figures\/dt\/all_models_froc_voxel_detection_rate.pdf}%\n\t\t\t\\label{fig_froc_voxel_detection}}\n\t\t\\subfloat[]{\\includegraphics[width=3.4in, height=1.7in]{figures\/dt\/all_models_slices_prec_recall.pdf}%\n\t\t\t\\label{fig_prec_rec_slice_detection}}\n\t\t\n\t\t\\caption{Detection performance of segmentation failures generated by different combinations of segmentation architectures and loss functions. (a) Sensitivity for detection of segmentation failures on voxel level (y-axis) as a function of the number of false positive image regions (x-axis). (b) Precision-recall curve for detection of slices containing segmentation failures (where AP denotes average precision). Results are split between entropy and Bayesian uncertainty maps. Each figure contains a curve for the six possible combinations of models (three) and loss functions (two). SD denotes soft-Dice and CE cross-entropy, respectively.}\n\t\t\\label{fig_dt_perf_all_models}\n\t\\end{figure}\n\n\t\\subsection*{Segmentation using correction of the detected segmentation failures}\n\t\n\tTo investigate whether correction of detected segmentation failures increases segmentation performance, two scenarios were evaluated. In the first scenario, manual correction of the detected failures by an expert was simulated for all images at ED and ES time points of the ACDC dataset. For this purpose, predicted labels were replaced with reference labels in image regions that were detected to contain segmentation failures. In the second scenario, manual correction of the detected failures was performed by an expert in a random subset of \\num{50} patients of the ACDC dataset. The expert was shown CMRI slices for ED and ES time points together with corresponding automatic segmentation masks for the RV, LV and LV myocardium. Image regions detected to contain segmentation failures were indicated in the slices and the expert was only allowed to change the automatic segmentations in these indicated regions. Annotation was performed following the protocol described in\\cite{bernard2018deep}. Furthermore, the expert was able to navigate through all CMRI slices of the corresponding ED and ES volumes.\n\n\t\n\t\\section*{Results}\n\t\n\tIn this section we first present results for the segmentation-only task, followed by a description of the combined segmentation and detection results. \n\t\n\t\\subsection*{Segmentation-only approach} \\label{results_seg_only}\n\t\n\tTable~\\ref{table_overall_segmentation_performance} lists quantitative results for the segmentation-only and the combined segmentation and detection approach in terms of Dice coefficient and Hausdorff distance. These results show that DRN and U-net achieved similar Dice coefficients and outperformed the DN network for all anatomical structures at end-systole. Differences in the achieved Hausdorff distances among the methods are present for all anatomical structures and for both time points. The DRN model achieved the lowest and the DN network the highest Hausdorff distance.\n\t\n\tTable~\\ref{table_cardiac_function_indices} lists results of the evaluation in terms of clinical metrics. These results reveal noticeable differences between models for the ejection fraction (EF) of the left and right ventricle.
We can observe that U-net trained with the soft-Dice loss and the Dilated Network (DN) trained with Brier or soft-Dice loss achieved considerably lower accuracy for LV and RV ejection fraction compared to DRN. Overall, the DRN model achieved the highest performance for all clinical metrics.\n\n\t\\noindent \\textbf{Effect of model architecture on segmentation}: Although quantitative differences between models are small, qualitative evaluation discloses that automatic segmentations differ substantially between the models. Figure~\\ref{fig_seg_qualitative_results} shows that especially in regions where the models perform poorly (apical and basal slices), the DN model more often produced anatomically implausible segmentations compared to the DRN and U-net. This seems to be correlated with the performance differences in Hausdorff distance.\n\t\n\t\\noindent \\textbf{Effect of loss function on segmentation}: The results indicate that the choice of loss function only slightly affects the segmentation performance. DRN and U-net perform marginally better when trained with soft-Dice compared to cross-entropy, whereas DN performs better when trained with Brier loss than with soft-Dice. For DN this is most pronounced for the RV at ES.\n\t\n\tA considerable effect of the loss function on the accuracy of the LV and RV ejection fraction can be observed for the U-net model. On both metrics, U-net achieved the lowest accuracy of all models when trained with the soft-Dice loss.\n\t\n\t\\noindent \\textbf{Effect of MC dropout on segmentation}: The results show that enabling MC dropout during testing seems to result in slightly improved HD while it does not affect DC. \n\n\t\\begin{table}\n\t\t\\caption{Average precision and percentage of slices with segmentation failures generated by Dilated Network (DN), Dilated Residual Network (DRN) and U-net when trained with soft-Dice (SD), CE or Brier loss. Per patient, average precision of detected slices with failures using e- or b-maps (\\num{2}$^{nd}$ and \\num{3}$^{rd}$ columns). Per patient, average percentage of slices containing segmentation failures (reference for the detection task) (\\num{4}$^{th}$ and \\num{5}$^{th}$ columns).}\n\t\t\\label{table_evaluation_slice_detection}\n\t\t\\begin{tabular}{l C{1.4cm} C{1.4cm} C{1.4cm} C{1.4cm} }\n\t\t\t\\textbf{Model} & \\multicolumn{2}{c}{\\textbf{Average precision}} & \\multicolumn{2}{c}{ \\thead{\\textbf{\\% of slices} \\\\ \\textbf{with segmentation failures} }} \\\\\n\t\t\t& e-map & b-map & e-map & b-map \\\\\n\t\t\t\\hline\n\t\t\tDN-Brier & 84.0 & 83.0 & 53.7 & 52.4 \\\\\n\t\t\tDN-SD & 87.0 & 85.0 & 58.3 & 58.1 \\\\\n\t\t\t\\hdashline\n\t\t\tDRN-CE & 75.0 & 69.0 & 39.5 & 39.4 \\\\\n\t\t\tDRN-SD & 67.0 & 67.0 & 34.9 & 33.7\\\\\n\t\t\t\\hdashline\n\t\t\tU-net-CE & 81.0 & 75.0 & 54.8 & 52.5 \\\\\n\t\t\tU-net-SD & 76.0 & 76.0 & 46.7 & 45.5 \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{table}\n\t\n\t\\subsection*{Detection of segmentation failures}\n\t\n\t\\noindent \\textbf{Detection of segmentation failures on voxel level}: To evaluate the detection performance for segmentation failures at the voxel level, Figure~\\ref{fig_froc_voxel_detection} shows the average voxel detection rate as a function of the number of false positively detected regions. This was done for each combination of model architecture and loss function, exploiting e-maps (Figure~\\ref{fig_froc_voxel_detection}, left) or b-maps (Figure~\\ref{fig_froc_voxel_detection}, right). 
These results show that detection performance of segmentation failures depends on segmentation model architecture, loss function and uncertainty map. \n\t\n\tThe influence of (segmentation) model architecture and loss function on detection performance is slightly stronger when e-maps were used as input for the detection task compared to b-maps. Detection rates are consistently lower when segmentation failures originate from segmentation models trained with soft-Dice loss compared to models trained with CE or Brier loss. Overall, detection rates are higher when b-maps were exploited for the detection task compared to e-maps.\n\t\t\n\\begin{table*}\n\t\\caption{Comparing performance of segmentation-only approach (auto-only) with combined segmentation and detection approach for two scenarios: simulated correction of detected segmentation failures (auto$+$simulation); and manual correction of detected segmentation failures by an expert (auto$+$expert). Automatic segmentations were obtained from a U-net trained with cross-entropy. Evaluation was performed on a subset of \\num{50} patients from the ACDC dataset. Scenarios are compared against segmentation-only approach (auto-only) in terms of (a) Dice Coefficient (b) Hausdorff Distance and (c) Clinical metrics. Results obtained from simulated manual correction represent an upper bound on the maximum achievable performance. Detection network was trained with e-maps. Number with asterisk indicates statistical significant at \\num{5}\\% level w.r.t. the segmentation-only approach. Best viewed in color.}\n\t\\label{table_manual_corr_performance}\n\t\\centering\n\t\\small\n\t\\subfloat[\\textbf{Dice coefficient:} Mean $\\pm$ standard deviation for left ventricle (LV), right ventricle (RV) and left ventricle myocardium (LVM).]{\n\t\t\\begin{tabular}{| C{2.cm} | R{1.7cm} R{1.7cm} R{1.7cm} | R{1.7cm} R{1.7cm} R{1.7cm} | }\n\t\t\t\\hline\n\t\t\t& \\multicolumn{3}{c|}{\\textbf{End-diastole}} & \\multicolumn{3}{c|}{\\textbf{End-systole}} \\\\\n\t\t\t\\textbf{Scenario} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} \\\\ \n\t\t\t\\hline\n\t\t\tauto-only & \\phantom{x}0.964$\\pm$0.02 & \\phantom{x}0.927$\\pm$0.04 & \\phantom{x}0.883$\\pm$0.03 & \\phantom{x}0.916$\\pm$0.05 & \\phantom{x}0.854$\\pm$0.08 & \\phantom{x}0.886$\\pm$0.04 \\\\ \n\t\t\tauto$+$simulation & \\phantom{x}0.967$\\pm$0.01 & *0.948$\\pm$0.03 & *0.894$\\pm$0.03 & *0.939$\\pm$0.03 & *0.915$\\pm$0.04 & *0.910$\\pm$0.03 \\\\ \n\t\t\n\t\t\tauto$+$expert & \\phantom{x}0.965$\\pm$0.02 & \\phantom{x}0.940$\\pm$0.03 & \\phantom{x}0.885$\\pm$0.03 & \\phantom{x}0.927$\\pm$0.04 & \\phantom{x}0.868$\\pm$0.07 & \\phantom{x}0.894$\\pm$0.03 \\\\ \n\t\t\t\n\t\t\t\\bottomrule\n\t\t\t\n\t\t\\end{tabular}\n\t\t\\label{table_manual_seg_perf_dsc}\n\t} \n\n\t\\centering\n\n\t\\subfloat[\\textbf{Hausdorff Distance:} Mean $\\pm$ standard deviation for left ventricle (LV), right ventricle (RV) and left ventricle myocardium (LVM).]{\n\t\t\\begin{tabular}{| C{2.cm} | R{1.7cm} R{1.7cm} R{1.7cm} | R{1.7cm} R{1.7cm} R{1.7cm} | }\n\t\t\t\\hline\n\t\t\t& \\multicolumn{3}{c|}{\\textbf{End-diastole}} & \\multicolumn{3}{c|}{\\textbf{End-systole}} \\\\\n\t\t\t\\textbf{Scenario} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & \\multicolumn{1}{l|}{\\textbf{LVM}} & \\multicolumn{1}{l}{\\textbf{LV}} & \\multicolumn{1}{l}{\\textbf{RV}} & 
\\multicolumn{1}{l|}{\\textbf{LVM}} \\\\ \n\t\t\t\\hline\n\t\t\tauto-only & \\phantom{x}5.6$\\pm$3.3 & \\phantom{x}15.7$\\pm$9.7 & \\phantom{x}8.5$\\pm$6.4 & \\phantom{x}9.2$\\pm$5.8 & \\phantom{x}16.5$\\pm$8.8 & \\phantom{x}13.4$\\pm$10.5 \\\\\n\t\t\tauto$+$simulation & \\phantom{x}4.5$\\pm$2.1 & *\\phantom{x}9.0$\\pm$4.6 & *5.9$\\pm$3.4 & *5.2$\\pm$2.5 & *10.3$\\pm$3.7 & *\\phantom{x}6.6$\\pm$2.9 \\\\\n\t\t\n\t\t\tauto$+$expert & \\phantom{x}4.9$\\pm$2.8 & *\\phantom{x}9.8$\\pm$4.3 & \\phantom{x}7.3$\\pm$4.3 & \\phantom{x}7.2$\\pm$3.3 & *12.5$\\pm$4.7 & *\\phantom{x}8.3$\\pm$3.5 \\\\\n\t\t\t\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\label{table_manual_seg_perf_hd}\n\t} \n\n\t\\subfloat[\\textbf{Clinical metrics:} a) Left ventricle (LV) end-diastolic volume (EDV) b) LV ejection fraction (EF) c) Right ventricle (RV) EDV d) RV ejection fraction e) LV myocardial mass. Quantitative results compare clinical metrics based on reference segmentations with 1) automatic segmentations; 2) simulated manual correction and 3) manual expert correction of automatic segmentations using spatial uncertainty maps. $\\rho$ denotes the Pearson correlation coefficient, \\textit{bias} denotes the mean difference between the two measurements (mean $\\pm$ standard deviation) and \\textit{MAE} denotes the mean absolute error between the two measurements.]{\n\t\t\\label{table_manual_cardiac_function_indices}\n\t\t\\small\n\t\t\\begin{tabular}{| C{2.cm} | C{0.5cm} C{0.8cm} C{0.55cm} | C{0.5cm} C{0.8cm}C{0.55cm} | C{0.5cm} C{0.8cm} C{0.55cm} | C{0.5cm} C{0.8cm} C{0.55cm} | C{0.5cm} C{0.8cm} C{0.55cm} |}\n\t\t\t\\hline\n\t\t\t& \\multicolumn{3}{c}{\\textbf{LV$_{EDV}$}} & \\multicolumn{3}{c}{\\textbf{LV$_{EF}$}} & \\multicolumn{3}{c}{\\textbf{RV$_{EDV}$}} & \\multicolumn{3}{c}{\\textbf{RV$_{EF}$}} & \\multicolumn{3}{c|}{\\textbf{LVM$_{Mass}$}} \\\\\n\n\t\t\t \\textbf{Scenario} & \\textbf{$\\rho$} & \\textbf{bias $\\pm\\sigma$} & \\textbf{MAE} & \\textbf{$\\rho$} & \\textbf{bias $\\pm\\sigma$} & \\textbf{MAE} & \\textbf{$\\rho$} & \\textbf{bias $\\pm\\sigma$} & \\textbf{MAE} & \\textbf{$\\rho$} & \\textbf{bias $\\pm\\sigma$} & \\textbf{MAE} & \\multicolumn{1}{c}{\\textbf{$\\rho$}} & \\textbf{bias $\\pm\\sigma$} & \\textbf{MAE} \\\\\n\t\t\t\\hline\n\t\t\n\t\t auto-only & 0.995 & -4.4 $\\pm$7.0 & 5.7 & 0.927 & 5.0 $\\pm$7.1 & 5.8 & 0.962 & -6.4 $\\pm$16.2 & 11.9 & 0.878 & 5.8 $\\pm$8.7 & 8.0 & 0.979 & -6.4 $\\pm$10.6 & 9.5 \\\\\n\t\t\t \n\t\t\t auto$+$simulation & 0.998 & -3.9 $\\pm$5.2 & 4.8 & 0.989 & 2.3 $\\pm$2.9 & 2.9 & 0.984 & -3.7 $\\pm$10.4 & 6.8 & 0.954 & 2.7 $\\pm$5.5 & 4.5 & 0.983 & -5.5 $\\pm$9.6 & 8.1 \\\\\n\t\t\t\n\t\t\t \n\t\t\t auto$+$expert & 0.996 & -4.3 $\\pm$6.5 & 5.5 & 0.968 & 2.7 $\\pm$4.8 & 4.3 & 0.976 & -3.2 $\\pm$12.9 & 8.3 & 0.883 & 5.1 $\\pm$8.6 & 7.7 & 0.980 & -6.2 $\\pm$10.2 & 9.1 \\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t}\n\t\n\\end{table*}\n\t\n\t\\vspace{1ex}\n\t\\noindent \\textbf{Detection of slices with segmentation failures}: To evaluate detection performance w.r.t. slices containing segmentation failures precision-recall curves for each combination of model architecture and loss function using e-maps (Figure~\\ref{fig_prec_rec_slice_detection}, left) or b-maps (Figure~\\ref{fig_prec_rec_slice_detection}, right) are shown. The results show that detection performance of slices containing segmentation failures is slightly better for all models when using e-maps. 
Furthermore, the detection network achieves highest performance using uncertainty maps obtained from the DN model and the lowest when exploiting e- or b-maps obtained from the DRN model. Table~\\ref{table_evaluation_slice_detection} shows the average precision of detected slices with segmentation failures per patient, as well as the average percentage of slices that do contain segmentation failures (reference for detection task). The results illustrate that these measures are positively correlated i.e. that precision of detected slices in a patient volume is higher if the volume contains more slices that need correction. On average the DN model generates cardiac segmentations that contain more slices with at least one segmentation failure compared to U-net (ranks second) and DRN (ranks third). A higher number of detected slices containing segmentation failures implies an increased workload for manual correction.\n\t\n\t\\subsection*{Calibration of uncertainty maps} \\label{result_eval_quality_umaps}\n\t\n\tFigure~\\ref{fig_risk_cov_comparison} shows risk-coverage curves for each combination of model architectures, uncertainty maps and loss functions (Figure~\\ref{fig_risk_cov_comparison} left: CE or Brier loss, Figure~\\ref{fig_risk_cov_comparison} right: soft-Dice). The results show an effect of the loss function on slope and convergence of the curves. Segmentation errors of models trained with the soft-Dice loss are less frequently covered by higher uncertainties than models trained with CE or Brier loss (steeper slope and lower minimum are better). This difference is more pronounced for e-maps. Models trained with the CE or Brier loss only slightly differ concerning convergence and their slopes are approximately identical. In contrast, the curves of the models trained with the soft-Dice differ regarding their slope and achieved minimum. Comparing e- and b-map of the DN-SD and U-net-SD models the results reveal that the curve for b-map has a steeper slope and achieves a lower minimum compared to the e-map. For the DRN-SD model these differences are less striking. In general for a specific combination of model and loss function the risk-coverage curves using b-maps achieve a lower minimum compared to e-maps.\n\t\n\t\\begin{figure}[t]\n\t\t\\centering\n\t\t\\includegraphics[width=4.7in, height=2.8in]{figures\/risk_cov\/cov_risk_curve_both_seg_errors.pdf}\n\t\t\n\t\t\\caption{Comparison of risk-coverage curves for different combination of model architectures, loss functions and uncertainty maps. Results are separated for loss functions (left cross-entropy and Brier, right soft-Dice loss). \\num{100}\\% coverage means that none of the voxels is discarded based on its uncertainty whereas a coverage of \\num{0}\\% denotes the scenario in which all predictions are replaced by their reference labels. Note, all models were trained with two different loss functions (1) soft-Dice (SD) for all models (2) cross-entropy (CE) for DRN and U-net and Brier loss for DN.}\n\t\t\\label{fig_risk_cov_comparison}\n\t\\end{figure}\n\t\n\t\\begin{table}\n\t\t\\caption{Effect of number of Monte Carlo (MC) samples on segmentation performance in terms of (a) Dice coefficient (DC) and (b) Hausdorff Distance (HD) (mean $\\pm$ standard deviation). Higher DC and lower HD is better. 
Abbreviations: Cross-Entropy (CE), Dilated Residual Network (DRN) and Dilated Network (DN).} \n\t\t\\label{table_seg_perf_per_samples}\n\t\t\\small\n\t\t\\subfloat[Dice coefficient]{\n\t\t\t\\begin{tabular}{|c C{1.5cm} C{1.5cm} c|}\n\t\t\t\t\\hline\n\t\t\t\t\\textbf{\\thead{Number of \\\\ MC samples}} & DRN-CE & U-net-CE & DN-soft-Dice\\\\\n\t\t\t\t\\hline\n\t\t\t\t1 & 0.894$\\pm$0.07 & 0.896$\\pm$0.07 & 0.871$\\pm$0.09 \\\\\n\t\t\t\t3 & 0.900$\\pm$0.07 & 0.901$\\pm$0.07 & 0.883$\\pm$0.08 \\\\\n\t\t\t\t5 & 0.902$\\pm$0.07 & 0.901$\\pm$0.07 & 0.887$\\pm$0.08 \\\\\n\t\t\t\t7 & 0.903$\\pm$0.07 & 0.901$\\pm$0.07 & 0.888$\\pm$0.08 \\\\\n\t\t\t\t10 & 0.904$\\pm$0.06 & 0.902$\\pm$0.07 &0.890$\\pm$0.08 \\\\\n\t\t\t\t20 & 0.904$\\pm$0.07 & 0.902$\\pm$0.07 & 0.890$\\pm$0.08 \\\\\n\t\t\t\t30 & 0.904$\\pm$0.07 & 0.902$\\pm$0.07 & 0.891$\\pm$0.08 \\\\\n\t\t\t\t60 & 0.904$\\pm$0.07 & 0.902$\\pm$0.07 & 0.891$\\pm$0.08 \\\\\n\t\t\t\t\\hline\n\t\t\t\\end{tabular}\n\t\t}\n\t\t\\subfloat[Hausdorff Distance]{\n\t\t\t\\begin{tabular}{|c C{1.5cm} C{1.5cm} c|}\n\t\t\t\t\\hline\n\t\t\t\t\\textbf{\\thead{Number of \\\\ MC samples}}& DRN-CE & U-net-CE & DN-soft-Dice \\\\\n\t\t\t\t\\hline\n\t\t\t\t1 & 9.88$\\pm$5.76 & 11.79$\\pm$8.23 & 13.54$\\pm$7.14 \\\\\n\t\t\t\t3 & 9.70$\\pm$6.13 & 11.40$\\pm$7.78 & 12.71$\\pm$6.79 \\\\\n\t\t\t\t5 & 9.54$\\pm$6.07 & 11.37$\\pm$7.81 & 12.06$\\pm$6.29 \\\\\n\t\t\t\t7 & 9.38$\\pm$5.86 & 11.29$\\pm$7.86 & 12.08$\\pm$6.38 \\\\\n\t\t\t\t10 & 9.38$\\pm$5.91 & 11.24$\\pm$7.71 & 11.85$\\pm$6.34 \\\\\n\t\t\t\t20 & 9.37$\\pm$5.83 & 11.27$\\pm$7.79 & 11.90$\\pm$6.52 \\\\\n\t\t\t\t30 & 9.39$\\pm$5.91 & 11.32$\\pm$7.93 & 11.90$\\pm$6.48 \\\\\n\t\t\t\t60 & 9.39$\\pm$5.93 & 11.22$\\pm$7.83 & 11.89$\\pm$6.56 \\\\\n\t\t\t\t\\hline\n\t\t\t\\end{tabular}\t\n\t\t}\n\t\\end{table}\n\t\n\t\\subsection*{Correction of automatically identified segmentation failures} \\label{results_combined_approach}\n\t\n\t\\textbf{Simulated correction:} The results listed in Table~\\ref{table_overall_segmentation_performance} and \\ref{table_cardiac_function_indices} show that the proposed method consisting of segmentation followed by simulated manual correction of detected segmentation failures delivers accurate segmentation for all tissues over ED and ES points. Correction of detected segmentation failures improved the performance in terms of DC, HD and clinical metrics for all combinations of model architectures, loss functions and uncertainty measures. Focusing on the DC after correction of detected segmentation failures the results reveal that performance differences between evaluated models decreased compared to the segmentation-only task. This effect is less pronounced for HD where the DRN network clearly achieved superior results in the segmentation-only and combined approach. The DN performs the least of all models but achieves the highest absolute DC performance improvements in the combined approach for RV at ES. Overall, the results in Table~\\ref{table_overall_segmentation_performance} disclose that improvements attained by the combined approach are almost all statistically significant ($p \\leq 0.05$) at ES and frequently at ED (\\num{96}\\% resp. \\num{83}\\% of the cases). Moreover, improvements are in \\num{99}\\% of the cases statistically significant for HD compared to \\num{81}\\% of the cases for DC.\n\t\n\tResults in terms of clinical metrics shown in Table~\\ref{table_cardiac_function_indices} are inline with these findings. 
We observe that segmentation followed by simulated manual correction of detected segmentation failures resulted in considerably higher accuracy for LV and RV ejection fraction. Achieved improvements for clinical metrics are only statistically significant ($p \\leq 0.05$) in one case for RV ejection fraction.\t\n\t\n\tIn general, the effect of correction of detected segmentation failures is more pronounced in cases where the segmentation-only approach achieved relatively low accuracy (e.g. DN-SD for RV at ES). Furthermore, performance gains are largest for RV and LV at ES and for ejection fraction of both ventricles.\n\t\n\n\t\\begin{figure}[!t]\n\t\t\\captionsetup[subfigure]{justification=centering}\n\t\t\\centering\n\t\t\\subfloat[]{\\includegraphics[width=6in]{figures\/qualitative\/sim_corr\/patient018_slice07_ES_bmap_acorr.pdf}%\n\t\t\t\\label{fig_qual_result_sim_corr_example1}}\n\t\t\n\t\t\\subfloat[]{\\includegraphics[width=6in]{figures\/qualitative\/sim_corr\/patient070_slice05_ED_emap_acorr.pdf}%\n\t\t\t\\label{fig_qual_result_sim_corr_example2}}\n\t\t\n\t\t\\subfloat[]{\\includegraphics[width=6in]{figures\/qualitative\/sim_corr\/patient081_slice01_ES_emap_acorr.pdf}%\n\t\t\t\\label{fig_qual_result_sim_corr_example3}}\n\t\t\\caption{Three patients showing results of combined segmentation and detection approach consisting of segmentation followed by simulated manual correction of detected segmentation failures. First column shows MRI (top) and reference segmentation (bottom). Results for automatic segmentation and simulated manual correction respectively achieved by: Dilated Network (DN-Brier, \\num{2}$^{nd}$ and \\num{5}$^{th}$ columns); Dilated Residual Network (DRN-soft-Dice, \\num{3}$^{rd}$ and \\num{6}$^{th}$ columns); and U-net (soft-Dice, \\num{4}$^{th}$ and \\num{7}$^{th}$ columns).}\n\t\t\\label{fig_seg_detection_qualitative_results}\n\t\\end{figure}\n\t\n\t\\begin{figure}[!t]\n\t\t\\captionsetup[subfigure]{justification=centering}\n\t\t\\centering\n\t\t\\subfloat[]{\\includegraphics[width=3.3in]{figures\/qualitative\/man_corr\/with_region_contour\/patient048_slice02_mcorr.pdf}%\n\t\t\t\\label{fig_qual_result_man_corr_example1} \\hspace{3ex}}\n\t\t\\subfloat[]{\\includegraphics[width=3.3in]{figures\/qualitative\/man_corr\/with_region_contour\/patient091_slice01_mcorr.pdf}%\n\t\t\t\\label{fig_qual_result_man_corr_example2}}\n\t\t\n\t\t\\subfloat[]{\\includegraphics[width=3.3in]{figures\/qualitative\/man_corr\/with_region_contour\/patient075_slice06_mcorr.pdf}%\n\t\t\t\\label{fig_qual_result_man_corr_example3} \\hspace{3ex}}\n\t\t\\subfloat[]{\\includegraphics[width=3.3in]{figures\/qualitative\/man_corr\/with_region_contour\/patient006_slice01_mcorr.pdf}%\n\t\t\t\\label{fig_qual_result_man_corr_example4}} \n\t\t\n\t\t\\caption{Four patients showing results of combined segmentation and detection approach consisting of segmentation followed by manual expert correction of detected segmentation failures. Expert was only allowed to adjust the automatic segmentations in regions where the detection network predicted segmentation failures (orange contour shown in 2$^{nd}$ column). Automatic segmentations were generated by a U-net trained with the cross-entropy loss. Segmentation failure detection was performed using entropy maps.}\n\t\t\\label{fig_qualitative_results_man_corr}\n\t\\end{figure}\n\t\n\n\tThe best overall performance is achieved by the DRN model trained with cross-entropy loss while exploiting entropy maps in the detection task. 
Moreover, the proposed two step approach attained slightly better results using Bayesian maps compared to entropy maps. \n\t\n\t\\vspace{1ex}\n\t\\textbf{Manual correction}: Table~\\ref{table_manual_corr_performance} lists results for the combined automatic segmentation and detection approach followed by \\textit{manual} correction of detected segmentation failures by an expert. The results show that this correction led to improved segmentation performance in terms of DC, HD and clinical metrics. Improvements in terms of HD are in \\num{50} percent of the cases statistically significant ($p \\leq 0.05$) and most pronounced for RV and LV at end-systole. \n\t\n\tQualitative examples of the proposed approach are visualized in Figures~\\ref{fig_seg_detection_qualitative_results} and \\ref{fig_qualitative_results_man_corr} for simulated correction and manual correction of the automatically detected segmentation failures, respectively. For the illustrated cases (simulated) manual correction of detected segmentation failures leads to increased segmentation performance.\n\tOn average manual correction of automatic segmentations took less than \\num{2} minutes for ED and ES volumes of one patient compared to \\num{20} minutes that is typically needed by an expert for the same task.\n\t\n\t\n\t\\section*{Ablation Study}\n\t\n\tTo demonstrate the effect of different hyper-parameters in the method, a number of experiments were performed. In the following text these are detailed.\n\t\n\t\\subsection*{Impact of number of Monte Carlo samples on segmentation performance}\n\n\tTo investigate the impact of the number of Monte Carlo (MC) samples on the segmentation performance validation experiments were performed for all three segmentation architectures (Dilated Network, Dilated Residual Network and U-net) using $T$ $\\in \\{1, 3, 5, 7, 10, 20, 30, 60\\}$ samples. Results of these experiments are listed in Table~\\ref{table_seg_perf_per_samples}. We observe that segmentation performance started to converge using \\num{7} samples only. Performance improvements using an increased number of MC samples were largest for the Dilated Network. Overall, using more than \\num{10} samples did not increase segmentation performance. Hence, in the presented work $T$ was set to \\num{10}.\t\n\t\t\n\t\\subsection*{Effect of patch-size on detection performance}\n\t\n\tThe combined segmentation and detection approach detects segmentation failures on region level. To investigate the effect of patch-size on detection performance three different patch-sizes were evaluated: \\num{4}$\\times$\\num{4}, \\num{8}$\\times$\\num{8}, and \\num{16}$\\times$\\num{16} voxels. The results are shown in Figure~\\ref{fig_grid_compare}. We can observe in Figure~\\ref{fig_fn_grid_froc_voxel_detection} that larger patch-sizes result in a lower number of false positive regions. The result is potentially caused by the decreasing number of regions in an image when using larger patch-sizes compared to smaller patch-sizes. Furthermore, Figure~\\ref{fig_fn_grid_prec_rec_slice_detection} reveals that slice detection performance is only slightly influenced by patch-size. To ease manual inspection and correction by an expert, it is desirable to keep region-size i.e. patch-size small. 
Therefore, in the experiments a patch-size of \\num{8}$\\times$\\num{8} voxels was used.\n\t\n\t\\begin{figure}[t]\n\t\t\\center\n\t\t\\subfloat[]{\\includegraphics[width=3.4in, height=1.7in]{figures\/dt\/compare_grid_froc_voxel_detection_rate.pdf}%\n\t\t\t\\label{fig_fn_grid_froc_voxel_detection}}\n\t\t\\subfloat[]{\\includegraphics[width=3.4in, height=1.7in]{figures\/dt\/compare_grid_slices_prec_recall.pdf}%\n\t\t\t\\label{fig_fn_grid_prec_rec_slice_detection}} \n\t\t\n\t\t\\caption{Detection performance for three different patch-sizes specified in voxels. (a) Sensitivity for detection of segmentation failures on voxel level (y-axis) versus number of false positive image regions (x-axis). (b) Precision-recall curve for detection of slices containing segmentation failures (where AP denotes average precision). Results are split between entropy and Bayesian uncertainty maps. In the experiments patch-size was set to \\num{8}$\\times$\\num{8} voxels.}\n\t\t\\label{fig_grid_compare}\n\t\\end{figure}\n\t\n\t\\subsection*{Impact of tolerance threshold on number of segmentation failures}\n\t\n\tTo investigate the impact of the tolerance threshold separating segmentation failures and tolerable segmentation errors, we calculated the ratio of the number of segmentation failures and all errors i.e. the sum of tolerable errors and segmentation failures. Figure~\\ref{fig_threshold_compare} shows the results. We observe that at least half of the segmentation failures are located within a tolerance threshold i.e. distance of two to three voxels of the target structure boundary as defined by the reference annotation. Furthermore, the mean percentage of failures per volume is considerably lower for the Dilated Residual Network (DRN) and highest for the Dilated Network. This result is inline with our earlier finding (see Table~\\ref{table_evaluation_slice_detection}) that average percentage of slices that do contain segmentation failures is lowest for the DRN model.\n\t\n\t\\section*{Discussion}\n\t\n\tWe have described a method that combines automatic segmentation and assessment of uncertainty in cardiac MRI with detection of image regions containing segmentation failures. The results show that combining automatic segmentation with manual correction of detected segmentation failures results in higher segmentation performance.\n\tIn contrast to previous methods that detected segmentation failures per patient or per structure, we showed that it is feasible to detect segmentation failures per image region. In most of the experimental settings, simulated manual correction of detected segmentation failures for LV, RV and LVM at ED and ES led to statistically significant improvements. These results represent the upper bound on the maximum achievable performance\tfor the manual expert correction task. Furthermore, results show that manual expert correction of detected segmentation failures led to consistently improved segmentations. However, these results are not on par with the simulated expert correction scenario. This is not surprising because inter-observer variability is high for the presented task and annotation protocols may differ between clinical environments. 
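\n\t\n\tTo make the region-level formulation mentioned above concrete, the grouping of voxels flagged as segmentation failures into patches that are offered to the expert can be sketched as follows (an illustrative example with placeholder names and threshold, not the exact implementation):\n\t\\begin{verbatim}\nimport numpy as np\n\ndef flag_failure_regions(failure_mask, patch=8, min_voxels=1):\n    # failure_mask: binary (H, W) map of voxels predicted to be\n    # segmentation failures; the slice is tiled into patch x patch\n    # regions and a region is marked for inspection if it contains\n    # at least min_voxels flagged voxels\n    H, W = failure_mask.shape\n    rows = np.arange(0, H, patch)\n    cols = np.arange(0, W, patch)\n    counts = np.add.reduceat(\n        np.add.reduceat(failure_mask, rows, axis=0), cols, axis=1)\n    return counts >= min_voxels\n\n# toy usage: one small cluster of flagged voxels marks a single region\nmask = np.zeros((64, 64), dtype=int)\nmask[10:13, 17:22] = 1\nregions = flag_failure_regions(mask)\n\\end{verbatim}\n\tKeeping the regions small, as argued above, limits the area the expert has to inspect and correct per detected failure.\n\t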
Moreover, qualitative results of the manual expert correction reveal that manual correction of the detected segmentation failures can prevent anatomically implausible segmentations (see Figure~\\ref{fig_qualitative_results_man_corr}).\n\tTherefore, the presented approach can potentially simplify and accelerate correction process and has the capacity to increase the trustworthiness of existing automatic segmentation methods in daily clinical practice.\n\n\t\n\t\n\tThe proposed combined segmentation and detection approach was evaluated using three state-of-the-art deep learning segmentation architectures. The results suggest that our approach is generic and applicable to different model architectures. Nevertheless, we observe noticeable differences between the different combination of model architectures, loss functions and uncertainty measures. In the segmentation-only task the DRN clearly outperforms the other two models in the evaluation of the boundary of the segmented structure. Moreover, qualitative analysis of the automatic segmentation masks suggests that DRN generates less often anatomically implausible and fragmented segmentations than the other models. We assume that clinical experts would prefer such segmentations although they are not always perfect. Furthermore, even though DRN and U-net achieve similar performance in regard to DC we assume that less fragmented segmentation masks would increase trustworthiness of the methods. \n\t\n\t\\begin{figure}[t]\n\t\t\\center\n\t\t\\subfloat[]{\n\t\t\t\\includegraphics[width=3.in]{figures\/dt\/tp_per_threshold_out_struc.pdf}%\n\t\t\t\\label{fig_threshold_struc_out_compare}\n\t\t}\n\t\t\\subfloat[]{\n\t\t\t\\includegraphics[width=3.in]{figures\/dt\/tp_per_threshold_in_struc.pdf}%\n\t\t\t\\label{fig_threshold_struc_in_compare}\n\t\t}\n\t\t\\caption{Mean percentage of the segmentation failures per volume (y-axis) in the set of all segmentation errors (tolerable errors$+$segmentation failures) depending on the tolerance threshold (x-axis). Red, dashed vertical line indicates threshold value that was used throughout the experiments. Results are split between segmentation errors located (a) outside and (b) inside the target structure. Each figure contains a curve for U-net, Dilated Network (DN) and Dilated Residual Network (DRN) trained with the soft-Dice (SD) loss. Segmentation errors located in slices above the base or below the apex are always included in the set of segmentation failures and therefore, they are independent of the applied tolerance threshold.}\n\t\t\\label{fig_threshold_compare}\n\t\\end{figure}\n\t\n\tIn agreement with our preliminary work we found that uncertainty maps obtained from a segmentation model trained with soft-Dice loss have a lower degree of uncertainty calibration compared to when trained with one of the other two loss functions (cross-entropy and Brier)\\cite{sander2019towards}. Nevertheless, the results of the combined segmentation and detection approach showed that a lower degree of uncertainty calibration only slightly deteriorated the detection performance of segmentation failures for the larger segmentation models (DRN and U-net) when exploiting uncertainty information from e-maps. Hendrycks and Gimpel \\cite{hendrycks2016baseline} showed that softmax probabilities generated by deep learning networks have poor direct correspondence to confidence. However, in agreement with Geifman et al. 
\\cite{geifman2017selective} we presume that probabilities and hence corresponding entropies obtained from softmax function are ranked consistently i.e. entropy can potentially be used as a relative uncertainty measure in deep learning. In addition, we detect segmentation failures per image region and therefore, our approach does not require perfectly calibrated uncertainty maps. Furthermore, results of the combined segmentation and detection approach revealed that detection performance of segmentation failures using b-maps is almost independent of the loss function used to train the segmentation model. In line with Jungo et al. \\cite{jungo2019assessing} we assume that by enabling MC-dropout in testing and computing the mean softmax probabilities per class leads to better calibrated probabilities and b-maps. This assumption is in agreement with Srivastava et al.~\\cite{srivastava2014dropout} where a CNN with dropout used at testing is interpreted as an ensemble of models.\n\t\n\tQuantitative evaluation in terms of Dice coefficient and Hausdorff distance reveals that proposed combined segmentation and detection approach leads to significant performance increase. However, the results also demonstrate that the correction of the detected failures allowed by the combined approach does not lead to statistically significant improvement in clinical metrics. This is not surprising because state-of-the-art automatic segmentation methods are not expected to lead to large volumetric errors \\cite{bernard2018deep} and standard clinical measures are not sensitive to small segmentation errors. Nevertheless, errors of the current state-of-the-art automatic segmentation methods may lead to anatomically implausible segmentations \\cite{bernard2018deep} that may cause distrust in clinical application. Besides increase in trustworthiness of current state-of-the-art segmentation methods for cardiac MRIs, improved segmentations are a prerequisite for advanced functional analysis of the heart e.g. motion analysis\\cite{bello2019deep} and very detailed morphology analysis such as myocardial trabeculae in adults\\cite{meyer2020genetic}.\n\t\n\tFor the ACDC dataset used in this manuscript, Bernard et al.\\cite{bernard2018deep} reported inter-observer variability ranging from \\num{4} to \\SI{14.1}{\\milli\\meter} (equivalent to on average \\num{2.6} to \\num{9} voxels). To define the set of segmentation failures, we employed a strict tolerance threshold on distance metric to distinguish between tolerated segmentation errors and segmentation failures (see Ablation study). Stricter tolerance threshold was used because the thresholding is performed in \\num{2}D, while evaluation of segmentation is done in \\num{3}D. Large slice thickness in cardiac MR could lead to a discrepancy between the two. As a consequence of this strict threshold results listed in Table~\\ref{table_evaluation_slice_detection} show that almost all patient volumes contain at least one slice with a segmentation failure. This might render the approach less feasible in clinical practice. Increasing the threshold decreases the number of segmentation failures and slices containing segmentation failures (see Figure~\\ref{fig_threshold_compare}) but also lowers the upper bound on the maximum achievable performance. Therefore, to show the potential of our proposed approach we chose to apply a strict tolerance threshold. 
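\n\t\n\tAs an illustration of how such a tolerance threshold acts in \num{2}D, segmentation errors can be split into tolerable errors and segmentation failures using a distance transform of the reference mask (a sketch with placeholder names and threshold; slices above the base or below the apex, which are always counted as failures, are not handled here):\n\t\\begin{verbatim}\nimport numpy as np\nfrom scipy.ndimage import distance_transform_edt\n\ndef split_errors(auto_mask, ref_mask, tol=2):\n    # auto_mask, ref_mask: binary 2D arrays with the automatic and the\n    # reference segmentation of one structure; error voxels further than\n    # tol voxels from the reference boundary count as failures\n    errors = auto_mask != ref_mask\n    dist_out = distance_transform_edt(ref_mask == 0)  # outside the structure\n    dist_in = distance_transform_edt(ref_mask == 1)   # inside the structure\n    dist_to_boundary = np.where(ref_mask == 1, dist_in, dist_out)\n    failures = errors & (dist_to_boundary > tol)\n    tolerable = errors & ~failures\n    return tolerable, failures\n\\end{verbatim}\n\tRaising tol in this sketch mirrors the effect shown in Figure~\ref{fig_threshold_compare}: fewer error voxels qualify as segmentation failures.\n\t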
Nevertheless, we realize that although manual correction of detected segmentation failures leads to increased segmentation accuracy the performance of precision-recall is limited (see Figure~\\ref{fig_dt_perf_all_models}) and hence, should be a focus of future work. \n\t\n\tThe presented patch-based detection approach combined with (simulated) manual correction can in principle lead to stitching artefacts in the resulting segmentation masks. A voxel-based detection approach could potentially solve this. However, voxel-based detection methods are more challenging to train due to the very small number of voxels in an image belonging to the set of segmentation failures.\n\t\n\tEvaluation of the proposed approach for \\num{12} possible combinations of segmentation models (three), loss functions (two) and uncertainty maps (two) resulted in an extensive number of experiments. Nevertheless, future work could extend evaluation to other segmentation models, loss functions or combination of losses. Furthermore, our approach could be evaluated using additional uncertainty estimation techniques e.g. by means of ensembling of networks \\cite{lakshminarayanan2017simple} or variational dropout \\cite{kingma2015variational}. In addition, previous work by Kendall and Gal~\\cite{kendall2017uncertainties}, Tanno et al. \\cite{tanno2019uncertainty} has shown that the quality of uncertainty estimates can be improved if model (epistemic) and data (aleatoric) uncertainty are assessed simultaneously with separate measures. The current study focused on the assessment of model uncertainty by means of MC-dropout and entropy which is a combination of epistemic and aleatoric uncertainty. Hence, future work could investigate whether additional estimation of aleatoric uncertainty improves the detection of segmentation failures. \n\t\n\tFurthermore, to develop an end-to-end approach future work could incorporate the detection of segmentation failures into the segmentation network. Besides, adding the automatic segmentations to the input of the detection network could increase the detection performance. \n\t\n\tFinally, the proposed approach is not specific to cardiac MRI segmentation. Although data and task specific training would be needed the approach could potentially be applied to other image modalities and segmentation tasks.\n\t\n\t\\section*{Conclusion}\n\t\n\tA method combining automatic segmentation and assessment of segmentation uncertainty in cardiac MR with detection of image regions containing local segmentation failures has been presented. The combined approach, together with simulated and manual correction of detected segmentation failures, increases performance compared to segmentation-only. The proposed method has the potential to increase trustworthiness of current state-of-the-art segmentation methods for cardiac MRIs. \n\t\n\t\\section*{Data and code availability}\n\tAll models were implemented using the PyTorch\\cite{paszke2017automatic} framework and trained on one Nvidia GTX Titan X GPU with \\num{12} GB memory. The code to replicate the study is publicly available at \\href{https:\/\/github.com\/toologicbv\/cardiacSegUncertainty}{https:\/\/github.com\/toologicbv\/cardiacSegUncertainty}.\n\t\n\n\t\\input{main_article.bbl}\n\t\n\t\\section*{Acknowledgements}\n\t\n\tThis study was performed within the DLMedIA program (P15-26) funded by Dutch Technology Foundation with participation of PIE Medical Imaging.\n\t\n\t\\section*{Author contributions statement}\n\t\n\tJ.S., B.D.V. and I.I. 
designed the concept of the study. J.S. conducted the experiments. J.S., B.D.V. and I.I. wrote the manuscript. All authors reviewed the manuscript. \n\t\n\t\\section*{Additional information}\n\t\n\t\\textbf{Competing interests}: The authors declare that they have no competing interests. \n\t\n\t\n\\end{document}","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nEntropy production plays a fundamental role in both classical and quantum thermodynamics: by being related to the second law at a fundamental level, it makes it possible to identify and quantify the irreversibility of physical phenomena \\cite{DeGrootMazur}. This intimate connection has raised a great deal of interest in relation to the theory of open quantum systems, where one is concerned with the dynamics of a system interacting with the infinitely many environmental degrees of freedom \\cite{Breuer-Petruccione}. In this scenario, a plethora of genuine quantum effects is brought about and a general and exhaustive theory of entropy production is still missing \\cite{Mauro_thermo2018}.\n\nThe second law of thermodynamics can be expressed in the form of a lower bound to the entropy change $\\Delta S$ undergone by the state of a given system that exchanges an amount of heat $Q$ when interacting with a bath at temperature $T$, that is\n\\begin{align}\n\\Delta S \\ge \\int \\frac{\\delta Q}{T} \\ .\n\\end{align}\nThe strict inequality holds if the process that the system is undergoing is irreversible. One can thus define the entropy production $\\Sigma$ as\n\\begin{align}\n\\label{eq:entropy_prod_def}\n\\Sigma \\equiv \\Delta S - \\int \\frac{\\delta Q}{T} \\ge 0 \\ .\n\\end{align}\nFrom \\Cref{eq:entropy_prod_def}, one can obtain the following expression involving the rates~\\cite{Landi2013, PhysRevLett.118.220601}:\n\\begin{align}\n\\frac{{\\rm{d}} S}{{\\rm{d}} t} = \\Pi(t) - \\Phi(t) \\ ,\n\\end{align}\nwhere $\\Pi(t)$ is the entropy production rate and $\\Phi(t)$ is the entropy flux from the system to the environment: at any time $t$, in addition to the entropy that is flowing from the system to the environment, there might thus be a certain amount of entropy intrinsically produced by the process and quantified by $\\Pi(t)$. \n\nEntropy production is an interesting quantity to monitor in the study of open quantum systems, since this is the context where irreversibility is unavoidably implied. The issue has been addressed with the aim of obtaining a meaningful characterisation and measure of the irreversibility of the system dynamics \\cite{Mauro_thermo2018}. In particular, it has been recently shown that the entropy production of an open quantum system can be split into different contributions: one is classically related to population imbalances, while the other is a genuine quantum contribution due to coherences \\cite{Landi:19, POLKOVNIKOV:2011, PhysRevE.99.042105}. This fundamental result holds whenever the system dynamics is described either by a map microscopically derived through the Davies approach or by a finite map encompassing thermal operations~\\cite{Landi:19}. \nMost of these works, though, are solely focused on the Markovian case, in which information flows monotonically from the system to the environment. Under this hypothesis, the open dynamics is formally described by a quantum dynamical semigroup; this is essential to mathematically prove that the entropy production is a non-negative quantity \\cite{Spohn1978, Alicki_1979, PhysRevA.68.032105}. 
\nMoreover, whenever the quantum system undergoing evolution is composite (i.e., multipartite) beside system-environment correlations, also inter-system correlations will contribute to the overall entropy production. A full account of the role of such correlations (entanglement above all) on the entropy balance is not known.\n\n\nHowever, a strictly Markovian description of the dynamics does not encompass all possible evolutions. There might be circumstances in which there is no clear separation of time-scales between system and environment: this hampers the application of the Born-Markov approximation \\cite{Breuer-Petruccione}. In some cases, a backflow of information going from the environment to the system is observable, usually interpreted as a signature of a quantum non-Markovian process \\cite{Rivas:14, Breuer:16}. From a thermodynamical perspective the non-negativity of the entropy production rate is not always guaranteed, as there might be intervals of time in which it attains negative values. It has been argued that this should not be interpreted as a violation of the second law of thermodynamics \\cite{Benatti2017}, but it should call for a careful use of the theory, in the sense that -- in the entropy production balance -- the role of the environment cannot be totally neglected. This idea can be justified in terms of the backflow of information that quantum non-Markovianity entails: the system retrieves some of the information that has been previously lost because of its interaction with the surroundings.\n\nIn this paper, we investigate the way initial correlations affect the entropy production rate in an open quantum system by considering the case of non-Markovian Brownian motion. \nWe focus on the case of an uncoupled bipartite system connected to two independent baths. The rationale behind this choice is related to the fact that any interaction between the two oscillators would likely generate, during the evolution, quantum correlations between the two parties. In general, the entanglement dynamically generated through the interaction would be detrimental to the transparency of the picture we would like to deliver, as it would be difficult to isolate the contribution to $\\Pi(t)$ coming from the initial inter-system correlations. To circumvent this issue, in our study we choose a configuration where the inter-system dynamics is trivial (two independent relaxation processes) but the bipartite state is initially correlated. \\textit{De facto}, the entanglement initially present in the state of our ``medium'' acts as an extra knob which can be tuned to change the rate of entropy production, thus steering the thermodynamics of the open system that we consider.\n\nThe paper is organised as follows. In \\Cref{sec:Gauss} we introduce a closed expression of the entropy production rate for a system whose dynamics is described in terms of a differential equation in the Lyapunov equation. In \\Cref{sec:QBM} we introduce the model we would like to study: a system of two uncoupled harmonic oscillators, described by a non-Markovian time-local master equation. We also discuss the spectral properties of the two local reservoirs. This minimal, yet insightful, setting allows us to investigate -- both numerically and analytically -- how different initial states can affect the entropy production rate. 
We investigate this relation in depth in \\Cref{sec:main}, where, by resorting to a useful parametrisation for two-mode entangled states, we focus on the role of the purity of the total two-mode state and on the link between the entanglement we input in the initial state and the resulting entropy production. In \\Cref{sec:Markovian_Limit} we assess whether our results survive when we take the Markovian limit. Finally, in \\Cref{sec:conclusions}, we summarise the evidence we get and we eventually draw our conclusions.\n\n\\section{Entropy production rate for Gaussian systems}\n\\label{sec:Gauss}\n\nWe restrict our investigation to the relevant case of Gaussian systems.\n\\cite{Ferraro:05, Serafini:17, Carmichael}. This choice dramatically simplifies the study of our system dynamics, since the evolution equations only involve the finite-dimensional covariance matrix (CM) of the canonically conjugated quadrature operators. \nAccording to our notation, the CM $\\boldsymbol{\\sigma}$, defined as\n\\begin{align}\n\\label{eq:cov_def}\n\\sigma_{ij} =\\braket{\\{X_i, X_j \\}} - 2\\braket{X_i}\\braket{X_j} ,\n\\end{align}\nsatisfies the Lyapunov equation\n\\begin{align}\n\\label{eq:Lyapunov}\n\\dot{\\boldsymbol{\\sigma}} = \\mathbf{A} \\boldsymbol{\\sigma} + \\boldsymbol{\\sigma} {\\mathbf{A}^{\\rm{T}}} + \\mathbf{D} ,\n\\end{align}\nwhere $\\mathbf{A}$ and $\\mathbf{D}$ are the drift and the diffusion matrices, respectively, and $\\mathbf{X} = \\{q_1,p_1,\\ldots, q_N,p_N\\}^{\\rm T}$ is the vector of quadratures for $N$ bosonic modes. \nIn particular, the CM representing a two-mode Gaussian state can always be brought in the standard form \\cite{Ferraro:05, Serafini:17}:\n\\begin{align}\n\\label{eq:sigma_sf}\n\\boldsymbol{\\sigma} = \\begin{pmatrix}\na & 0 & c_{+} & 0 \\\\\n0 & a & 0 & c_{-} \\\\\nc_{+} & 0 & b & 0 \\\\\n0 & c_{-} & 0 & b \n \\end{pmatrix},\n\\end{align}\nwhere the entries $a, \\ b$, and $c_{\\pm}$ are real numbers. Furthermore, a necessary and sufficient condition for separability of a two-mode Gaussian state is given by the Simon criterion \\cite{PhysRevA.72.032334}:\n\\begin{align}\n\\label{eq:uncertainty}\n\\tilde{\\nu}_{-} \\ge 1, \n\\end{align}\nwhere $\\tilde{\\nu}_{-}$ is the smallest symplectic eigenvalue of the partially transposed CM $\\tilde{\\boldsymbol{\\sigma}} = \\boldsymbol{P\\sigma P}$, being $\\mathbf{P} = {\\rm diag} (1,1,1,-1)$. This bound expresses in the phase-space language the Peres-Horodecki PPT (Positive Partial Transpose) criterion for separability \\cite{Peres:96, Horodecki:97, Simon:14, PhysRevA.72.032334}.\n\nTherefore, the smallest symplectic eigenvalue encodes all the information needed to quantify the entanglement for arbitrary two-modes Gaussian states. For example, one can measure the entanglement through the violation of the PPT criterion \\cite{PhysRevLett.90.027901}. Quantitatively, this is given by the logarithmic negativity of a quantum state $\\varrho$, which -- in the continuous variables formalism -- can be computed considering the following formula \\cite{PhysRevA.65.032314, PhysRevA.72.032334}:\n\\begin{align}\n\\label{eq:log_neg}\nE_{\\mathcal{N}} ( \\varrho) = {\\rm max} \\left [ 0, - \\ln{\\tilde{\\nu}_{-}}\\right ] .\n\\end{align}\nGiven the global state $\\varrho$ and the two single-mode states $\\varrho_{i} = {\\rm Tr}_{j \\ne i} \\varrho$, global $\\mu \\equiv {\\rm Tr} \\varrho^2$ and the local $\\mu_{1,2} \\equiv {\\rm Tr} \\varrho_{1,2}^2$ purities can be used to characterise entanglement in Gaussian systems. 
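\nAs a concrete illustration of \\Cref{eq:uncertainty,eq:log_neg}, the smallest symplectic eigenvalue of the partially transposed CM and the corresponding logarithmic negativity can be evaluated numerically from the standard form of \\Cref{eq:sigma_sf} (a minimal sketch; the function name is illustrative and the two-mode squeezed-vacuum check is only indicative):\n\\begin{verbatim}\nimport numpy as np\n\ndef log_negativity(a, b, cp, cm):\n    # build the standard-form two-mode CM with entries (a, b, c+, c-);\n    # the symplectic eigenvalues of the partially transposed CM are the\n    # moduli of the eigenvalues of i Omega sigma_pt\n    sigma = np.diag([a, a, b, b]).astype(float)\n    sigma[0, 2] = sigma[2, 0] = cp\n    sigma[1, 3] = sigma[3, 1] = cm\n    P = np.diag([1.0, 1.0, 1.0, -1.0])  # partial transposition\n    omega = np.kron(np.eye(2), np.array([[0.0, 1.0], [-1.0, 0.0]]))\n    sigma_pt = P @ sigma @ P\n    nu_min = float(np.min(np.abs(np.linalg.eigvals(1j * omega @ sigma_pt))))\n    return max(0.0, -np.log(nu_min))\n\n# check: a two-mode squeezed vacuum with squeezing r gives E_N = 2 r\nr = 0.5\nE = log_negativity(np.cosh(2 * r), np.cosh(2 * r),\n                   np.sinh(2 * r), -np.sinh(2 * r))\n\\end{verbatim}\nThe same routine returns $\\tilde{\\nu}_{-} \\ge 1$, and hence a vanishing negativity, for any separable two-mode Gaussian state.\n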
It has been shown that two different classes of extremal states can be identified: states of maximum negativity for fixed global and local purities (GMEMS) and states of minimum negativity for fixed global and local purities (GLEMS) \\cite{PhysRevLett.92.087901}.\n\nMoreover, the continuous variables approach provides a remarkable advantage: the open quantum system dynamics can be remapped into a Fokker-Planck equation for the Wigner function of the system. This formal result enables us to carry out our study of the entropy production using a different approach based on phase-space methods, instead of resorting to the usual approach based on von Neumann entropy. The harmonic nature of the system we would like to consider makes our choice perfectly appropriate to our study and, as we will show in \\Cref{sec:main,sec:Markovian_Limit}, well suited to systematically scrutinise inter-system correlations. Our analysis is thus based on the Wigner entropy production rate \\cite{PhysRevLett.118.220601}, defined as\n\\begin{align}\n\\Pi(t) \\equiv - \\partial_t K(W(t) || W_{\\rm eq}),\n\\end{align}\nwhere $K(W|| W_{\\rm eq})$ is the Wigner relative entropy between the Wigner function $W$ of the system and its expression for the equilibrium state $W_{\\rm eq}$.\n\nFurthermore, we are in a position to use the closed expressions for $\\Phi(t)$ and $\\Pi(t)$ coming from the theory of classical stochastic processes \\cite{Brunelli, Landi2013}. In particular, it has been shown that the entropy production rate $\\Pi(t)$ can be expressed in terms of the matrices $\\mathbf{A},\\mathbf{D},\\boldsymbol{\\sigma}$ as~\\cite{Brunelli}\n\\begin{equation}\n\\label{eq:entropy_prod_rate}\n\\begin{aligned}\n\\Pi(t) &= \\frac{1}{2} {\\rm{Tr}} [ \\boldsymbol{\\sigma}^{-1} \\mathbf{D}] + 2 {\\rm{Tr}} [ \\mathbf{A}^{{\\rm irr}}] \\\\ \n&+ 2 {\\rm{Tr}} [ \\left (\\mathbf{A}^{{\\rm irr}}\\right )^{{\\rm T}} \\mathbf{D}^{-1} \\mathbf{A}^{{\\rm irr}} \\boldsymbol{\\sigma}] \\ ,\n\\end{aligned}\n\\end{equation}\nwhere $\\mathbf{A}^{{\\rm irr}}$ is the irreversible part of matrix $\\mathbf{A}$, given by\n$\\mathbf{A}^{{\\rm irr}} = \\left ( \\mathbf{A} + \\mathbf{E} \\mathbf{A} \\mathbf{E}^{{\\rm T}}\\right )\/2$,\nwhere $\\mathbf{E} = {\\rm diag}(1,-1,1,-1)$ is the symplectic representation of the time-reversal operator.\n\n\\section{Quantum Brownian motion}\n\\label{sec:QBM}\nWe study the relation between the preparation of the initial state and the entropy production rate considering a rather general example: the quantum Brownian motion \\cite{Breuer-Petruccione, Weiss1999}, also known as the Caldeira-Leggett model \\cite{CaldeiraLeggett1983a}. More specifically, we consider the case of a harmonic oscillator interacting with a bosonic reservoir made of independent harmonic oscillators. The study of such a paradigmatic system has been widely explored in both the Markovian \\cite{Breuer-Petruccione, CaldeiraLeggett1983a} and non-Markovian \\cite{PhysRevD.45.2843} regimes using the influence functional method: in this case, one can trace out the environmental degrees of freedom exactly. \nOne can also solve the dynamics of this model using the open quantum systems formalism \\cite{Breuer-Petruccione, Rivas2012}, where the Brownian particle represents the system, while we identify the bosonic reservoir with the environment. 
\nThe usual approach relies on the following set of assumptions, which are collectively known as the Born-Markov approximation~\\cite{Breuer-Petruccione}:\n\\begin{enumerate}\n\\item The system is weakly coupled to the environment.\n\\item The initial system-environment state is factorised.\n\\item \\label{Born-Markov3} It is possible to introduce a separation of the timescales governing the system dynamics and the decay of the environmental correlations.\n\\end{enumerate}\nHowever, we aim to solve the dynamics in a more general scenario, without resorting to assumption \\ref{Born-Markov3}. We are thus considering the case in which, although the system-environment coupling is weak, non-Markovian effects may still be relevant. Under such conditions, one can derive a time-local master equation for the reduced dynamics of the system \\cite{PhysRevA.67.042108, Int1}.\n\nMore specifically, we consider a system consisting of two quantum harmonic oscillators, each of them interacting with its own local reservoir (see \\Cref{fig:system}). Each of the two reservoirs is modelled as a system of $N$ non-interacting bosonic modes. \nIn order to understand the dependence of the entropy production upon the initial correlations, we choose the simplest case in which the two oscillators are identical, i.e., characterised by the same bare frequency $\\omega_0$ and the same temperature $T$, and they are uncoupled, so that only the initial preparation of the global state may entangle them. The Hamiltonian of the global system thus reads (we consider units such that $\\hbar = 1$ throughout the paper)\n\\begin{align}\n\\label{eq:Hamiltonian}\nH & = \\sum_{j=1,2} \\omega_0 \\, a_{j}^{\\dagger} a_j + \\sum_{j=1,2} \\sum_k \\omega_{jk} \\, b_{jk}^{\\dagger} b_{jk} \\nonumber \\\\\n& + \\alpha \\sum_{j=1,2} \\sum_k \\left ( \\frac{a_j + a_j^{\\dagger}}{\\sqrt{2}}\\right ) \\left ( g_{jk}^{*} b_{jk} + g_{jk} b_{jk}^{\\dagger} \\right ),\n\\end{align}\nwhere $a_j^{\\dagger}$ ($a_j$) and $b_{jk}^{\\dagger}$ ($b_{jk}$) are the system and reservoir creation (annihilation) operators, respectively, while $\\omega_{1k}$ and $\\omega_{2k}$ are the frequencies of the reservoir modes. The dimensionless constant $\\alpha$ represents the coupling strength between each of the two subsystems and its local bath, while the constants $g_{jk}$ quantify the coupling between the $j-$th oscillator ($j=1,2$) and the $k-$th mode of its respective reservoir. These quantities therefore appear in the definition of the spectral density (SD)\n\\begin{align}\n\\label{eq:SD-def}\nJ_{j} (\\omega) = \\sum_{k} |g_{jk}|^2 \\, \\delta (\\omega - \\omega_{jk}) \\ .\n\\end{align}\nIn what follows, we will consider the case of symmetric reservoirs, i.e., $J_1(\\omega) = J_2(\\omega) \\equiv J(\\omega)$. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.83\\linewidth]{system.pdf}\n\\caption{\\small{System of two uncoupled quantum harmonic oscillators interacting with their local reservoirs. The latter are characterised by the same temperature $T$ and the same spectral properties. 
The two parties of the systems are initially correlated and we study their dynamics under the secular approximation so that non-Markovian effects are present.}}\n\\label{fig:system}\n\\end{figure}\n\nWe would also like to work in the secular approximation by averaging over the fast oscillating terms after tracing out the environment: unlike the rotating-wave approximation, in this limit not all non-Markovian effects are washed out \\cite{PhysRevA.67.042108}.\n\nUnder these assumptions, the dynamics of this system is governed by a time-local master equation, that in the interaction picture reads as\n\\begin{equation}\n\\label{eq:ME_sec}\n\\begin{aligned}\n\\dot{\\rho} (t) = &- \\frac{\\Delta(t) + \\gamma(t)}{2} \\sum_{j=1,2} \\left ( \\{\\adag_j a_j, \\rho\\} - 2 a_j \\rho \\adag_j \\right ) \\\\\n& - \\frac{\\Delta(t) - \\gamma(t)}{2} \\sum_{j=1,2} \\left (\\{a_j \\adag_j, \\rho \\}- 2 \\adag_j \\rho a_j \\right ),\n\\end{aligned}\n\\end{equation}\nwhere $\\rho$ is the reduced density matrix of the global system, while the time dependent coefficients $\\Delta(t)$ and $\\gamma(t)$ account for diffusion and dissipation, respectively. \nThe coefficients in \\Cref{eq:ME_sec} have a well-defined physical meaning: $(\\Delta(t) + \\gamma(t))\/2$ is the rate associated with the incoherent loss of excitations from the system, while $(\\Delta(t) - \\gamma(t))\/2$ is the rate of incoherent pumping.\n\n\nThe coefficients $\\Delta(t)$ and $\\gamma(t)$ are ultimately related to the spectral density $J(\\omega)$ as\n\\begin{align}\n\\label{eq:QBM_delta}\n\\Delta(t) \\equiv \\frac{1}{2} \\int_{0}^{t} \\kappa (\\tau) \\cos{(\\omega_0 \\tau}) \\, d \\tau,\n\\end{align}\n\\begin{align}\n\\label{eq:QBM_gamma}\n\\gamma (t) \\equiv \\frac{1}{2} \\int_{0}^{t} \\mu (\\tau) \\sin{(\\omega_0 \\tau}) \\, d \\tau,\n\\end{align}\nwhere $\\kappa(\\tau)$ and $\\mu(\\tau)$ are the noise and dissipation kernels, respectively, which -- assuming reservoirs in thermal equilibrium -- are given by \n\n\n\\begin{equation}\n\\label{eq:kappamu}\n\\left[\n\\begin{matrix}\n\\kappa(\\tau)\\\\\n\\mu(\\tau)\n\\end{matrix}\\right]=\n2 \\alpha^2 \\int\\limits_{0}^{\\mathcal{1}} J (\\omega) \\left[\\begin{matrix}\n\\cos{(\\omega \\tau)} \\coth{\\left(\\frac{\\beta}{2} \\omega \\right)}\\\\\n\\sin(\\omega\\tau)\\end{matrix}\\right] d \\, \\omega,\n\\end{equation}\nwhere $\\beta = (k_B T)^{-1}$ is the inverse temperature and $k_B$ the Boltzmann constant.\n\nMoreover, it can be shown that the dynamics of a harmonic system that is linearly coupled to an environment can be described in terms of a differential equation in the Lyapunov form given by \\Cref{eq:Lyapunov} \\cite{Serafini:17}. We can indeed notice that in \\Cref{eq:Hamiltonian} the interaction between each harmonic oscillator and the local reservoir is expressed by a Hamiltonian that is bilinear (i.e., quadratic) in the system and reservoir creation and annihilation operators. Hamiltonians of this form lead to a master equation as in \\Cref{eq:ME_sec}, where the dissipators are quadratic in the system creation and annihilation operators $\\adag_j, a_j$. Under these conditions, one can recast the dynamical equations in the Lyapunov form in \\Cref{eq:Lyapunov} \\cite{Ferraro:05}, where the matrices $\\mathbf{A}$ and $\\mathbf{D}$ are time-dependent, due to non-Markovianity. 
Indeed, we get $\\mathbf{A} = -\\gamma(t) \\mathbbm{1}_4$ and $\\mathbf{D}= 2 \\Delta(t) \\mathbbm{1}_4$ (here $ \\mathbbm{1}_4$ is the $4 \\times 4$ identity matrix).\n\nThe resulting Lyapunov equation can be analytically solved, giving the following closed expression for the CM at a time $t$:\n\\begin{align}\n\\label{eq:sigma_t}\n\\boldsymbol{\\sigma}(t) = \\boldsymbol{\\sigma}(0) e^{-\\Gamma(t)} + 2 \\Delta_{\\Gamma}(t) \\mathbbm{1}_4,\n\\end{align}\nwith\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:del_gamma}\n\\Gamma(t) \\equiv 2 \\int_{0}^{t} d \\tau \\, \\gamma(\\tau)\\,\\,\n\\text{and}\\,\\,\n\\Delta_{\\Gamma} (t) \\equiv e^{-\\Gamma(t)} \\int_{0}^{t} d \\tau \\, \\Delta(\\tau) e^{\\Gamma(\\tau)}.\n\\end{aligned}\n\\end{equation}\nMoreover, a straightforward calculation allows us to determine the steady state of our two-mode system. By imposing $\\dot{\\boldsymbol{\\sigma}} \\equiv 0$ in \\Cref{eq:Lyapunov}, one obtains that the system relaxes towards a diagonal state with associated CM ${\\boldsymbol{\\sigma}}_{\\infty} \\equiv \\Delta(\\infty)\/\\gamma(\\infty) \\mathbbm{1}_4$. By plugging ${\\boldsymbol{\\sigma}}_{\\infty}$ in \\Cref{eq:entropy_prod_rate}, we find $\\Pi_\\infty\\equiv\\lim_{t\\rightarrow \\infty}\\Pi(t)=0$, showing a vanishing entropy production at the steady state. This can also be justified by noticing that, as $t \\to \\infty$, we approach the Markovian limit. Therefore, the Brownian particles, exclusively driven by the interaction with their local thermal baths, will relax toward the canonical Gibbs state with a vanishing associated entropy production rate \\cite{Breuer-Petruccione, Spohn1978, Landi:19}.\n\n\n\\subsection{Choice of the spectral density} \nIn order to obtain a closed expression for the time-dependent rates $\\Delta(t)$ and $\\gamma(t)$, one has to assume a specific form for the spectral density $J(\\omega)$, which -- to generate an irreversible dynamics -- is assumed to be a continuous function of the frequency $\\omega$. In quite a general way, we can express the SD as\n\\begin{align}\nJ(\\omega) = \\eta \\ \\omega_c^{1-\\epsilon} \\ \\omega^\\epsilon \\; f(\\omega, \\omega_c),\n\\end{align}\nwhere $\\epsilon>0$ is known as the Ohmicity parameter and $\\eta > 0$. Depending on the value of $\\epsilon$, the SD is said to be Ohmic ($\\epsilon=1$), super-Ohmic ($\\epsilon>1$), or sub-Ohmic ($\\epsilon<1$). The function $f(\\omega, \\omega_c)$ represents the SD cut-off and $\\omega_c$ is the cut-off frequency. Such a function is introduced so that $J(\\omega)$ vanishes for $\\omega \\to 0$ and $\\omega \\to \\infty$. We focus on two different functional forms for $f(\\omega, \\omega_c)$, namely, the Lorentz-Drude cut-off $f(\\omega, \\omega_c) \\equiv \\omega_c^2 \/ (\\omega_c^2 + \\omega^2)$ and the exponential cut-off $f(\\omega, \\omega_c) \\equiv e^{-\\omega\/\\omega_c}$.\nIn particular, we choose an Ohmic SD with a Lorentz-Drude cut-off\n\\begin{align}\n\\label{eq:SD_Ohm_LD}\nJ(\\omega) = \\frac{2 \\omega}{\\pi} \\frac{\\omega_c^2}{\\omega_c^2 + \\omega^2},\n\\end{align}\nwhere $\\eta \\equiv 2 \/ \\pi$. 
Note that this choice is mathematically convenient, but is inconsistent from a physical point of view, as it implies instantaneous dissipation, as acknowledged in Refs.~\\cite{PhysRevD.45.2843, PazZurek2001}.\nWe also consider the following SDs\n\\begin{align}\n\\label{eq:SD_exp}\nJ(\\omega) = \\omega_c^{1-\\epsilon} \\ \\omega^\\epsilon \\; e^{-\\omega\/\\omega_c},\n\\end{align}\nwith $\\epsilon=1, \\, 3, \\,1\/2$ and $\\eta \\equiv 1$, as the coupling strength is already contained in the constant $\\alpha$. In all these cases, the time-dependent coefficients $\\Delta(t)$ and $\\gamma(t)$ can be evaluated analytically~\\cite{PhysRevA.80.062324}. \n\n\n\\section{Initial correlations and entropy production}\n\\label{sec:main}\nWe can now use our system to claim that initial correlations shared by the non-interacting oscillators do play a role in the entropy production rate. We do this by employing a parametrisation that covers different initial preparations~\\cite{PhysRevA.72.032334}. The entries of the matrix given by \\Cref{eq:sigma_sf} can be expressed as follows\n\\begin{equation}\n\\label{eq:sigma_a_b}\na = s +d, \\qquad b = s-d\n\\end{equation}\nand\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:sigma_c}\nc_{\\pm} = \\frac{ \\sqrt{\\left (4d^2 + f \\right )^2{-}4g^2} \\pm \\sqrt{\\left (4s^2 + f \\right )^2{-}4g^2} }{4 \\sqrt{s^2 -d^2}},\n\\end{aligned}\n\\end{equation}\nwith $f = (g^2 +1)(\\lambda -1) \/2- (2d^2+g)(\\lambda +1)$. This allows us to parametrise the CM using four parameters: $s, d, g, \\lambda$. The local purities are controlled by the parameters $s$ and $d$ as $\\mu_1 = (s+d)^{-1}$ and $\\mu_2 = (s-d)^{-1}$, while the global purity is $\\mu=1\/g$. Furthermore, in order to ensure legitimacy of a CM, the following constraints should be fulfilled\n\\begin{align}\n\\label{eq:constraints}\ns \\ge 1, \\quad | d | \\le s -1, \\quad g \\ge 2| d | + 1.\n\\end{align}\nOnce the three aforementioned purities are given, the remaining degree of freedom required to determine the negativities is controlled by the parameter $\\lambda$, which encompasses all the possible entangled two-modes Gaussian states. The two classes of extremal states are obtained upon suitable choice of $\\lambda$. For $\\lambda=-1$ ($\\lambda = +1$) we recover the GLEMS (GMEMS) mentioned in~\\Cref{sec:Gauss}. \n\nTo show a preview of our results, we start with a concrete case shown in \\Cref{fig1}. We prepare the system in a pure ($g=1$) symmetric ($d=0$) state, and investigate the effects of initial correlations on $\\Pi(t)$ by comparing the value taken by this quantity for such an initial preparation with what is obtained by considering the covariance matrix associated with the tensor product of the local states of the oscillators, i.e., by forcefully removing the correlations among them. Non-Markovian effects are clearly visible in the oscillations of the entropy production and lead to negative values of $\\Pi(t)$ in the first part of the evolution. This is in stark contrast with the Markovian case, which entails non-negativity of the entropy production rate. Crucially, we see that, for a fixed initial value of the local energies, the presence of initial correlations enhances the amount of entropy produced at later times, increasing the amplitude of its oscillations. 
We also stress that both curves eventually settle to zero (on a longer timescale than shown in \\Cref{fig1}) as argued in Sec.~\\ref{sec:QBM}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.83\\linewidth]{correlations.pdf}\n\\caption{\\small{Entropy production rate in a system of two non-interacting oscillators undergoing the non-Markovian dynamics described in~\\Cref{sec:QBM}. We compare the behaviour of the entropy production rate resulting from a process where the system is initialised in a state with no initial correlations (solid line) to what is obtained starting from a correlated state (dashed-dotted line). The latter case refers to the preparation of a system in a pure ($g=1$), symmetric ($d=0$) squeezed state ($\\lambda = 1$). The former situation, instead, corresponds to taking the tensor product of the local states. In this plot we have taken $s=2$ and an Ohmic SD with Lorentz-Drude cut-off. The system parameters are $\\alpha = 0.1 \\omega_0$, $\\omega_c = 0.1 \\omega_0$, $\\beta = 0.1 \\omega_0^{-1}$.}}\n\\label{fig1}\n\\end{figure}\n\nWe now move to a more systematic investigation of $\\Pi(t)$ and its dependence on the specific choice of $s, d, g, \\lambda$.\nIn order to separate the contributions, we first study the behaviour of $\\Pi(t)$ when we vary one of those parameters, while all the others are fixed. We can first rule out the contribution of thermal noise by considering the case in which the reservoirs are in their vacuum state. Such zero-temperature limit can be problematic, as some approaches to the quantification of entropy production fail to apply in this limit~\\cite{PhysRevLett.118.220601}. In contrast, phase-space methods based on the R\\'{e}nyi-$2$-Wigner entropy allow to treat such a limit without pathological behaviours associated with such {\\it zero-temperature catastrophe}~\\cite{PhysRevLett.118.220601}. This formal consistency is preserved also in the case of a system whose dynamics is described by \\Cref{eq:ME_sec}, as shown in \\Cref{fig_vacuum}.\nWe take $T=0$ and choose an Ohmic SD with an exponential cut-off -- given by \\Cref{eq:SD_exp} -- with $\\epsilon=1$. \nThe map describing the dynamics converges to a stationary state characterised by a vanishing $\\Pi(t)$, although the oscillations are damped to zero more slowly, as non-Markovian effects are more persistent in the presence of zero-temperature reservoirs. Furthermore, we notice that the differences between different initial states are most pronounced in correspondence of the first peak: this suggests that the maximum value for the entropy production can be reasonably chosen as an apt figure of merit to distinguish the differences due to state preparation. \nSupported by this evidence, we adopt the value of the first maximum of $\\Pi(t)$ as an indicator of the irreversibility generated in the relaxation dynamics by different initial preparations.\n\nIn the inset of \\Cref{fig_vacuum} we show the logarithmic negativity given by \\Cref{eq:log_neg}. The interaction with zero-temperature reservoirs does not cause detrimental effects to entanglement, as the latter is preserved over time \\cite{PhysRevLett.100.220401,PhysRevA.75.062119, PhysRevA.80.062324}.\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.83\\linewidth]{Ohm_vacuum_g.pdf}\n\\caption{\\small{Entropy production rates corresponding to independent zero-temperature reservoirs. 
We consider preparations of the initial global state corresponding to different values of parameter $g$ (related to the global purity of the state), while fixing $s=2$, $d=0$, and $\\lambda = 1$. In the inset we plot the logarithmic negativity for the same choice of parameters: entanglement persists over time up to the reach of a steady state of the dynamics. We have taken an Ohmic SD with an exponential cut-off with $\\alpha = 0.1 \\omega_0$, $\\omega_c = 0.1 \\omega_0$.}}\n\\label{fig_vacuum}\n\\end{figure}\n\n\nWe also address the case of finite-temperature reservoirs and an Ohmic SD with Lorentz-Drude cut-off given by \\Cref{eq:SD_Ohm_LD}~\\footnote{the analysis can easily be extended to the exponential cut-off in \\Cref{eq:SD_exp} for the Ohmic ($\\epsilon=1$), super-Ohmic ($\\epsilon=3$) and sub-Ohmic ($\\epsilon=1\/2$) case}. \nWe thus fix $s, d, \\lambda$ and let $g$ vary to explore the role played by the global purity. \\Cref{fig2} shows that, by increasing $g$ -- i.e., by reducing the purity of the global state -- $\\Pi(t)$ decreases: an initial state with larger purity lies far from an equilibrium state at the given temperature of the environment and is associated with a larger degree of initial entanglement [cf. inset of \\Cref{fig2} and the analysis reported in \\Cref{sec:entanglement}], which translates in a larger entropy production rate. Furthermore, our particular choice of the physical parameters leads to the observation of ``entanglement sudden death''~\\cite{PhysRevLett.100.220401, PhysRevA.80.062324}: an initial state with non-null logarithmic negativity completely disentangles in a finite time due to interaction with environment, the disentangling time being shortened by a growing $g$ [cf. inset of \\Cref{fig2}].\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.83\\linewidth]{Ohm_LD_g.pdf}\n\\caption{\\small{Entropy production rates corresponding to different preparations of the initial global state. We consider different values of parameter $g$, while taking $s=2$, $d=0$, and $\\lambda = 1$. In the inset we plot the logarithmic negativity for the same choice of the parameters. The dynamics of the system has been simulated using an Ohmic SD with a Lorentz-Drude cut-off. The system parameters are $\\alpha = 0.1 \\omega_0$, $\\omega_c = 0.1 \\omega_0$, $\\beta = 0.1 \\omega_0^{-1}$.}}\n\\label{fig2}\n\\end{figure}\n\nSimilarly, we can bias the local properties of the oscillators by varying $d$ and, in turn, $g= 2d +1$, while keeping $s, \\lambda$ fixed: in \\Cref{fig3} we can observe that, when the global energy is fixed, the asymmetry in the local energies -- and purities $\\mu_1$ and $\\mu_2$ -- reduces the entropy production rate. In the inset we show that, by increasing the asymmetry between the two modes, the entanglement takes less time to die out. These results are consistent with the trends observed in \\Cref{fig2}.\nIndeed, a bias in the local energies would make the reduced state of one of the two oscillators more mixed, and thus less prone to preserve the entanglement that is initially set in the joint harmonic state. Such imbalance would give different weights to the two local dissipation processes, thus establishing an effective preferred local channel for dissipation. 
In turn, this would result in a lesser weight to the contribution given by correlations.\n\n\\begin{figure}[b]\n\\centering\n\\includegraphics[width=0.83\\linewidth]{Ohm_LD_d.pdf}\n\\caption{\\small{Entropy production rates corresponding to different values of $d$ and $g=2d+1$ in the parametrisation of the initial state (we have taken $s=4$ and $\\lambda = 1$). In the inset, the behaviour of the logarithmic negativity is shown. In this figure, we take an Ohmic SD with a Lorentz-Drude cut-off and $\\alpha = 0.1 \\omega_0$, $\\omega_c = 0.1 \\omega_0$, $\\beta = 0.1 \\omega_0^{-1}$.}}\n\\label{fig3}\n\\end{figure}\n\nWe conclude our analysis in this Section by exploring the parameter space in a more systematic way by fixing the global energy $s$ and randomly choosing the three parameters left, provided that the constraints in \\Cref{eq:constraints} are fulfilled. We see from \\Cref{fig4} that the curve for $\\Pi(t)$ comprising all the others is the one corresponding to unit global purity, i.e., $g=1$, and $d=0$, $\\lambda=1$ (dashed line). The globally pure state is indeed the furthest possible from a diagonal one: the rate at which entropy production varies is increased in order to reach the final diagonal state ${\\boldsymbol{\\sigma}}_{\\mathcal{1}}$. \n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.83\\linewidth]{Ohm_LD_pure.pdf}\n\\caption{\\small{Entropy production rates $\\Pi(t)$ (absolute value) as a function of time. The initial CM is parametrised by fixing $s$ ($s=10$ in the figure) and randomly choosing $d, g, \\lambda$ such that they are uniformly distributed in the intervals $[0,s-1]$, $[2d+1,d+10]$ and $[-1,1]$ respectively. The figure reports $N_{R} = 1000$ different realisations of the initial state. The dashed line corresponds to the globally pure state ($g=1$) with $d=0$, $\\lambda=1$. All the plots are obtained considering an Ohmic SD with a Lorentz-Drude cut-off and $\\alpha = 0.1 \\omega_0$, $\\omega_c = 0.1 \\omega_0$, $\\beta = 0.1 \\omega_0^{-1}$.}}\n\\label{fig4}\n\\end{figure}\n\n\\subsection{Dependence on the initial entanglement}\n\\label{sec:entanglement}\n\nWe now compare the trends corresponding to different choices of the parameters characterising the initial state. As non-Markovian effects are reflected in oscillating behaviour of the entropy production, we can contrast cases corresponding to different initial preparations by looking at the maximum and the minimum values $\\Pi_{\\rm max}$ and $\\Pi_{\\rm min}$ that the entropy production rate assumes for each choice of the parameters. Taking into account the evidence previously gathered, in the simulations reported in this Section we fix the minimum value for $g$, i.e., $g=2d+1$, and $\\lambda = +1$ as significant for the points that we want to put forward. In fact, with such choices we are able to parametrise the initial state with a minimum number of variables, while retaining the significant features that we aim at stressing. We can further assume, without loss of generality, $d \\ge 0$: this is simply equivalent to assuming that the first oscillator is initially prepared in a state with a larger degree of mixedness than the second one, i.e., $\\mu_1 \\le \\mu_2$. In this case, we can express $d$ in terms of the smallest symplectic eigenvalue of the partially transposed CM $\\tilde{\\nu}_{-}$. 
Therefore, taking into account the constraints given by \\Cref{eq:constraints}, one has that $d = -\\frac{1}{2} ( \\tilde{\\nu}_{-}^2 - 2s \\tilde{\\nu}_{-} +1)$.\nWe already mentioned in \\Cref{sec:QBM} that we are able to derive a closed expression for the CM at any time $t$, given by \\Cref{eq:sigma_t}. We can further notice that the positive and negative peaks in the entropy production rate are attained at short times. We can thus perform a Taylor expansion of $\\Delta(t)$ in \\Cref{eq:del_gamma} to obtain\n\\begin{equation}\n\\begin{aligned}\n\\Delta_{\\Gamma} (t) = \n[1- \\Gamma(t)] \\int_{0}^{t} d \\tau \\Delta(\\tau) + \\int_{0}^{t} d \\tau \\; \\Gamma(\\tau) \\Delta(\\tau) + \\mathcal{O} (\\alpha^4) \\ .\n\\end{aligned}\n\\end{equation}\nAs $\\Delta(t) \\propto \\alpha^2$ and $\\Gamma(t) \\propto \\alpha^2$, we can retain only the first term consistently with the weak coupling approximation we are resorting to. Therefore, we can recast \\Cref{eq:sigma_t} in a form that is more suitable for numerical evaluations, namely\n\\begin{align}\n\\label{eq:sigma_t_wc}\n\\boldsymbol{\\sigma}(t) = \\left [ 1 - \\Gamma(t) \\right ] \\boldsymbol{\\sigma}(0) + \\left [ 2 \\int_{0}^{t} d \\tau \\Delta(\\tau) \\right ] \\mathbbm{1}_4 \\ .\n\\end{align}\nBy substituting \\Cref{eq:sigma_t_wc} into \\Cref{eq:entropy_prod_rate}, we get the analytic expression for the entropy production rate, given in \\Cref{app:a} for the sake of completeness but whose explicit form is not crucial for our analysis here. \n\nIn this way, all the information about the initial state is encoded in the value of $\\tilde{\\nu}_{-}$ while $s$ is fixed. Note that this expression holds for any SD: once we choose the latter, we can determine the time-dependent coefficients $\\Delta(t)$ and $\\gamma(t)$ and thus the entropy production rate $\\Pi(t)$. We can then compute the maximum of the entropy production rate and study the behaviour of $\\Pi_{\\rm max}$ and $\\Pi_{\\rm min}$ as functions of the entanglement negativity $E_{\\mathcal{N}}$ at $t=0$. In \\Cref{fig5} we compare numerical results to the curve obtained by considering the analytical solution discussed above and reported in \\Cref{eq:entropy_prod_rate_general}. Remarkably, we observe a monotonic behaviour of our chosen figure of merit with the initial entanglement negativity: the more entanglement we input at $t=0$ the higher the maximum of the entropy production rate is. We can get to the same conclusion (in absolute value) when we consider the negative peak $\\Pi_{\\rm min}$.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.83\\linewidth]{Ohm_LD_in_neg.pdf}\n\\caption{\\small{Maximum and minimum of the entropy production rate $\\Pi_{\\rm max}$ and $\\Pi_{\\rm min}$ as functions of the entanglement negativity at $t=0$. We take $s=4$, $g=2d +1$, $\\lambda = 1$, while $0 \\le d \\le 3$. We compare the numerical results (triangles and circles) to the analytical solution in \\Cref{eq:entropy_prod_rate_general} (solid line). We have used an Ohmic SD with a Lorentz-Drude cut-off and $\\alpha = 0.1 \\omega_0$, $\\omega_c = 0.1 \\omega_0$, $\\beta = 0.01 \\omega_0^{-1}$.}}\n\\label{fig5}\n\\end{figure}\n\nThe monotonic behavior highlighted above holds regardless of the specific form of the spectral density. 
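\nIn practice, once a SD (and hence $\\Delta(t)$ and $\\gamma(t)$) is chosen, $\\Pi_{\\rm max}$ and $\\Pi_{\\rm min}$ can be extracted by evaluating \\Cref{eq:entropy_prod_rate} on the weak-coupling CM of \\Cref{eq:sigma_t_wc} over a grid of times; a schematic sketch is the following, with the rate functions left as placeholders to be supplied for the chosen SD:\n\\begin{verbatim}\nimport numpy as np\n\ndef entropy_production_rate(sigma, gamma_t, delta_t):\n    # entropy production rate for A = -gamma(t) 1 and D = 2 Delta(t) 1,\n    # for which the irreversible part of A coincides with A itself\n    A_irr = -gamma_t * np.eye(4)\n    D = 2.0 * delta_t * np.eye(4)\n    return (0.5 * np.trace(np.linalg.inv(sigma) @ D)\n            + 2.0 * np.trace(A_irr)\n            + 2.0 * np.trace(A_irr.T @ np.linalg.inv(D) @ A_irr @ sigma))\n\ndef sigma_weak_coupling(sigma0, t, gamma_fun, delta_fun, n=400):\n    # weak-coupling CM at time t, with Gamma(t) and the time integral\n    # of Delta evaluated by simple quadrature\n    tau = np.linspace(0.0, t, n)\n    Gamma = 2.0 * np.trapz(gamma_fun(tau), tau)\n    Dint = np.trapz(delta_fun(tau), tau)\n    return (1.0 - Gamma) * sigma0 + 2.0 * Dint * np.eye(4)\n\n# scanning t over a grid and taking the largest (smallest) value of the\n# rate yields Pi_max (Pi_min) for a given initial CM sigma0\n\\end{verbatim}\n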
In \\Cref{fig6} we study $\\Pi_{\\rm max}$ against the smallest symplectic eigenvalue $\\tilde{\\nu}_{-}$ of the partially transposed CM for the various spectral densities we have considered, finding evidence of a power law of the form $\\Pi_{\\rm max} \\propto \\tilde{\\nu}_{-}^{\\delta}$, where the exponent $\\delta$ depends on the reservoir's spectral properties.\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.83\\linewidth]{Exponents.pdf}\n\\caption{\\small{Plot of $\\Pi_{\\rm max}$ against the smallest symplectic eigenvalue of the partially transposed CM at $t=0$ (logarithmic scale) for different SDs (as stated in the legend). The initial state is prepared using the same parametrisation chosen in \\Cref{fig5}, while we have taken $\\alpha = 0.1 \\omega_0$, $\\omega_c = 0.1 \\omega_0$, $\\beta = 0.1 \\omega_0^{-1}$.}}\n\\label{fig6}\n\\end{figure}\n\n\n\n\\section{Markovian limit}\n\\label{sec:Markovian_Limit}\nWe are now interested in assessing whether the analytical and numerical results gathered so far bear dependence on the non-Markovian character of the dynamics. With this in mind, we explore the Markovian limit, in which the problem is fully amenable to an analytical solution, which can also be used to validate our numerical results. Such a limit is obtained by simply choosing an Ohmic SD with a Lorentz-Drude regularisation -- \\Cref{eq:SD_Ohm_LD} -- and taking the long time and high temperature limits, i.e., $\\omega_0 t \\gg 1$ and $\\beta^{-1} \\gg \\omega_0$. This yields the time-independent coefficients\n\\begin{align}\n\\frac{\\Delta(t) + \\gamma(t)}{2}& \\longrightarrow \\gamma_M \\left ( \\bar{n}(\\omega_0) + 1\\right ) , \\\\\n\\frac{\\Delta(t) - \\gamma(t)}{2}& \\longrightarrow \\gamma_M \\, \\bar{n}(\\omega_0) \\ ,\n\\end{align}\nwhere $\\bar{n}(\\omega_0) = (e^{\\beta \\omega_0} -1)^{-1}$ is the average number of excitations at a given frequency $\\omega_0$, whereas $\\gamma_M \\equiv 2 \\alpha^2 \\omega_c^2 \\omega_0 \/ (\\omega_c^2 + \\omega_0^2)$. \nTherefore, \\Cref{eq:ME_sec} reduces to a master equation describing the dynamics of two uncoupled harmonic oscillators undergoing Markovian dynamics, for which we take $\\mathbf{A} = -\\gamma_M \\mathbbm{1}_4$ and $\\mathbf{D}= 2 \\gamma_M (2 \\bar{n}(\\omega_0) +1) \\mathbbm{1}_4$ in \\Cref{eq:Lyapunov}.\n\nWorking along the same lines as in the non-Markovian case, we study the behavior of $\\Pi(t)$ by suitably choosing the parameters encoding the preparation of the initial state. For example, in \\Cref{fig7} we plot the entropy production rate as a function of time for different values of $g$. The limiting procedure gives back a coarse-grained dynamics monotonically decreasing toward the thermal state, to which there corresponds a non-negative entropy production rate, asymptotically vanishing in the limit $t \\to \\infty$. Moreover, the memoryless dynamics leads to a monotonic decrease of the entanglement negativity, as shown in the inset of \\Cref{fig7}.\nIn this case, the globally pure state ($g=1$, dashed line in \\Cref{fig8}) still plays a special role: all the curves corresponding to a global purity smaller than unity (i.e., $g>1$) remain below it.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.83\\linewidth]{Markov_g.pdf}\n\\caption{\\small{Entropy production rates corresponding to different preparations of the initial global state in the Markovian limit. 
We have taken different values of $g$ (thus varying the global purity of the state of the system) with $s=2$, $d=0$, $\\lambda = 1$, $\\alpha = 0.1 \\omega_0$, $\\omega_c = 0.1 \\omega_0$, $\\beta = 0.01 \\omega_0^{-1}$.}}\n\\label{fig7}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.83\\linewidth]{Markov_pure.pdf}\n\\caption{\\small{Entropy production rate $\\Pi(t)$ plotted against the dimensionless time $\\omega_0 t$ in the Markovian regime. The initial CM is parametrised by setting $s=10$ and randomly sampling (in a uniform manner) $d, g, \\lambda$ from the intervals $[0,s-1]$, $[2d+1,d+10]$ and $[-1,1]$, respectively. We present $N_{R} = 100$ different realisations of the initial state. The dashed line represents the state with unit global purity ($g=1$) and $d=0, \\; \\lambda=1$. For the dynamics, we have taken $\\alpha = 0.1 \\omega_0$, $\\omega_c = 0.1 \\omega_0$, $\\beta = 0.01 \\omega_0^{-1}$.}}\n\\label{fig8}\n\\end{figure}\n\nThe Markovian limit provides a useful comparison in terms of integrated quantities. In this respect, we can study what happens to the entropy production $\\Sigma = \\int_{0}^{+\\infty} \\Pi(t) dt$. Although the non-Markovian dynamics entails a negative entropy production rate during certain intervals of time, the overall entropy production is larger than the quantity we would get in the corresponding Markovian case, as can be noticed in \\Cref{fig9}. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.83\\linewidth]{sigma.pdf}\n\\caption{\\small{Entropy production in the non-Markovian case (solid line) as a function of $E_{\\cal N}(t=0)$, compared with its counterpart achieved in the corresponding Markovian limit (dashed line). We have taken $s=2$, $d=0$, $\\lambda = 1$, $\\alpha = 0.1 \\omega_0$, $\\omega_c = 0.1 \\omega_0$, $\\beta = 0.01 \\omega_0^{-1}$.}}\n\\label{fig9}\n\\end{figure}\n\nFinally, we can study the dependence of the Markovian entropy production rate on the initial entanglement. Note that, in this limit, \\Cref{eq:entropy_prod_rate} yields an analytic expression for the entropy production rate at a generic time $t$, which we write explicitly in \\Cref{eq:entropy_prod_rate_Markov}. From our numerical inspection, we have seen that the entropy production rate is maximum at $t=0$, so that\n\\begin{align}\n\\label{eq:entropy_prod_rate_Markov_max}\n\\Pi_{\\rm max} \\equiv \\Pi(0) & = - 8 \\gamma_M + 4 s \\, \\gamma_M \\tanh\\left (\\frac{\\beta \\omega_0}{2} \\right ) \\nonumber \\\\ \n& +\\frac{4 s \\, \\gamma_M \\coth \\left (\\frac{\\beta \\omega_0}{2} \\right )}{ (2 s -\\tilde{\\nu}_{-}) \\tilde{\\nu}_{-}} \\ .\n\\end{align} \n\nIf we fix the parameter $s$ and plot $\\Pi_{\\rm max}$ against $\\tilde{\\nu}_{-}$, we can contrast analytical and numerical results [cf.~\\Cref{fig10}]. We can draw the same conclusion as in the non-Markovian case: the more entanglement we input, the higher the entropy production rate.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.83\\linewidth]{Markov_nu.pdf}\n\\caption{\\small{Markovian limit: maximum entropy production rate $\\Pi_{\\rm max}$ as a function of the minimum symplectic eigenvalue of the partially transposed CM at $t=0$. We have taken $s=4$, $g=2d +1$, $\\lambda = 1$, $0 \\le d \\le 3$, $\\alpha = 0.1 \\omega_0$, $\\omega_c = 0.1 \\omega_0$, $\\beta = 0.1 \\omega_0^{-1}$. 
We compare the curve obtained numerically (triangles) to the analytical trend (solid line) found through \\Cref{eq:entropy_prod_rate_Markov_max}.}}\n\\label{fig10}\n\\end{figure}\n\n\\section{Conclusions}\n\\label{sec:conclusions}\nWe have studied -- both numerically and analytically -- the dependence of the entropy production rate on the initial correlations between the components of a composite system. We have considered two non-interacting oscillators exposed to the effects of local thermal reservoirs. By using a general parametrisation of the initial state of the system, we have systematically explored different physical scenarios. We have established that correlations play an important role in the rate at which entropy is intrinsically produced during the process. Indeed, we have shown that, when the system is prepared in a globally pure state, we should expect a higher entropy production rate. This is the case -- regardless of the spectral density chosen -- for initial entangled states of the oscillators: larger initial entanglement is associated with higher rates of entropy production, which turns out to be a monotonic function of the initial degree of entanglement. \nRemarkably, our analysis takes into full consideration signatures of non-Markovianity in the open system dynamics. \n\nIt would be interesting, and indeed very important, to study how such conclusions are affected by the possible interaction between the constituents of our system, a situation that is currently at the focus of our investigations, as well as non-Gaussian scenarios involving either non-quadratic Hamiltonians or spin-like systems.\n\n\\acknowledgements\nWe thank R. Puebla for insightful discussions and valuable feedback about the work presented in this paper.\nWe acknowledge support from the H2020 Marie Sk{\\l}odowska-Curie COFUND project SPaRK (Grant nr.~754507), the H2020-FETPROACT-2016 HOT (Grant nr.~732894), the H2020-FETOPEN-2018-2020 project TEQ (Grant nr.~766900), the DfE-SFI Investigator Programme (Grant 15\/IA\/2864), COST Action CA15220, the Royal Society Wolfson Research Fellowship (RSWF\\textbackslash R3\\textbackslash183013) and International Exchanges Programme (IEC\\textbackslash R2\\textbackslash192220), the Leverhulme Trust Research Project Grant (Grant nr.~RGP-2018-266). \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n{H\\,{\\scshape ii}~regions}, that are an outcome of the photoionization of newly forming high-mass stars ($\\textup{M} \\gtrsim 8~M_{\\odot}$), not only play a crucial role in understanding processes involved in high-mass star formation but also reveal the various feedback mechanisms at play on the surrounding ambient interstellar medium (ISM) and the natal molecular cloud. Numerous observational and theoretical studies of {H\\,{\\scshape ii}~regions} have been carried out in the last two decades. However, dedicated multiwavelength studies of star-forming complexes add to the valuable observational database that provide a detailed and often crucial insight into the intricacies involved.\n \nIn this work, we study the massive star-forming region IRAS 17149$-$3916. This region is named RCW 121 in the catalog of $\\rm H{\\alpha}$ emission in the Southern Milky Way \\citep{1960MNRAS.121..103R}. The mid-infrared (MIR) dust bubble, S6, from the catalog of \\citet{2006ApJ...649..759C} is seen to be associated with this complex. IRAS 17149$-$3916 has a bolometric luminosity of $\\sim~9.4 \\times 10^4~L_{\\odot}$ \\citep{2006A&A...447..221B}. 
In the literature, several kinematic distance estimates are found for this complex. The near and far kinematic distance estimates range between 1.6 -- 2.2 and 14.5 -- 17.7~kpc, respectively \\citep{{1997MNRAS.291..261W},{2004ApJS..154..553S},{2006A&A...447..221B},{2010ApJ...716.1478W}}. In a recent paper, \\citet{2014MNRAS.437..606T} use the spectral classification of the candidate ionizing star along with near-infrared (NIR) photometry to place this complex at 2~kpc. This is in agreement with the near kinematic distance estimates and conforms to the argument of \\citet{2006A&A...447..221B} for assuming the near kinematic distance of 2.1~kpc based on the height above the Galactic plane. Based on the above discussion, we assume a distance of 2.0~kpc in this work. \n\nThis star-forming region has been observed as part of several radio continuum surveys at 2.65~GHz \\citep{1969AuJPA..11...27B}, 4.85~GHz \\citep{1994ApJS...91..111W}, and more recently at 18 and 22.8~GHz by \\citet{2013A&A...550A..21S}. Using NIR data, \\citet{2006AJ....131..951R} detect a cluster of young massive stars associated with this IRAS source. These authors also suggest IRS-1, the brightest source in the cluster, to be the likely ionizing star of the {H\\,{\\scshape ii}~region} detected at radio wavelengths. \n\\citet{2008A&A...486..807A} probed the $^{12}\\rm CO$ molecular gas in the region. Based on this observation, they conclude that RCW 121 and RCW 122 are possibly unrelated star-forming regions belonging to a giant molecular complex, while negating the speculation of these being triggered by Wolf-Rayet stars located in the HM~1 cluster. In the most recent work on this source, \\citet{2014MNRAS.437..606T} re-visit the cluster detected by \\citet{2006AJ....131..951R}. \nThese authors also detect three bright {\\it Herschel} clumps, the positions of which coincide with three of the 1.2~mm clumps of \\citet{2006A&A...447..221B}. \n\nIntroducing the IRAS 17149$-$3916 complex, in Fig.~\\ref{fig_intro}, we show the colour-composite image of the associated region. The 5.8\\,$\\rm \\mu m$ IRAC band, which is mostly a dust tracer \\citep{2010ApJ...716.1478W}, displays an almost closed, elliptical ring emission morphology. The extent of the bubble, S6, as estimated by \\citet{2006ApJ...649..759C}, traces this ring. The cold dust component, revealed by the \\textit{Herschel} 350\\,$\\rm \\mu m$ emission, is distributed along the bubble periphery with easily discernible cold dust clumps. Ionized gas sampled in the SuperCosmos $\\rm H{\\alpha}$ survey \\citep{2005MNRAS.362..689P} fills the south-west part of the bubble and extends beyond it towards the south. MIR emission at 21\\,$\\rm \\mu m$ is localized towards the south-west rim of the bubble. This emission is seen to spatially correlate with the central, bright region of ionized gas (see Fig.~\\ref{radioim})\nand is generally believed to be due to the Ly$\\alpha$ heating of dust grains to temperatures of around 100~K \\citep{1991MNRAS.251..584H}.\n\nIn this paper, we present an in-depth multiwavelength study of this star-forming region. In discussing the investigation carried out, we present the radio observations and the related data reduction procedure followed in Section \\ref{obs-data}. This section also briefly discusses the salient features of the archival data used in the study. Section \\ref{results} presents the results obtained for the associated ionized gas and dust environment. 
Section \\ref{discussion} delves into the detailed discussion and interpretation related to the observed morphology of the ionized gas, investigation of the pillar structures, dust clumps in the realm of triggered star formation, and the nature of the detected dust clumps and cores. Section \\ref{conclusion} highlights the main results obtained in this study. \n \n\\begin{figure*}\n\\centering\n\\includegraphics[scale=0.5]{Fig1.eps}\n\\caption{Colour-composite image towards IRAS 17149$-$3916 with the MSX 21\\,$\\rm \\mu m$ (red), SuperCosmos $\\rm H_{\\alpha}$ (blue) and IRAC 5.8 \\,$\\rm \\mu m$ (green) are shown overlaid with the contours of the \\textit{Herschel} 350\\,$\\rm \\mu m$ map. The contour levels are 600, 700, 1000, 1500, 2000, 2500, 4000, and 6000~MJy\/sr. The ellipse shows the position and the extent of the bubble, S6, as identified by \\citet{2006ApJ...649..759C}.}\n\\label{fig_intro}\n\\end{figure*}\n\n\\section{Observation, data reduction and archival data} \\label{obs-data}\n\\subsection{Radio Continuum Observation } \\label{radio_obs}\nTo probe the ionized component of IRAS 17149$-$3916, we have carried out low-frequency radio continuum observations of the region at 610 and 1280~MHz with the Giant Meterwave Radio Telescope (GMRT), Pune, India. GMRT offers a hybrid configuration of 30 antennas (each of diameter 45~m) arranged in a Y-shaped layout. The three arms contain 18 evenly placed antennas and provide the largest baseline of $\\sim 25$~km. The central $\\rm 1\\,km^2$ region houses 12 antennas that are randomly oriented with shortest possible baseline of $\\sim 100$\\,m.\nA comprehensive overview of GMRT systems can be found in \\citet{1991ASPC...19..376S}.\nThe target was observed with the full array for nearly full-synthesis to maximize the {\\it uv} coverage which is required to detect the extended, diffuse emission. Observations were carried out during August 2014 at 610 and 1280\\,MHz with a bandwidth of 32 MHz over 256 channels. In the full-array mode, the resolution is $\\sim$5 and 2~arcsec and the largest detectable spatial scale is $\\sim$17 and 7~arcmin at 610 and 1280~MHz, respectively. Radio sources 3C 286 and 3C 48 were selected as the primary flux calibrators. The phase calibrator, 1714-252, was observed after each 40-min scan of the target to calibrate the phase and amplitude variations over the full observing run. The details of the GMRT radio observations and constructed radio continuum maps are listed in Table \\ref{radio_tab}.\n\nAstronomical Image Processing System (AIPS) was used to reduce the radio continuum data where we follow the procedure detailed in \\citet{2017MNRAS.472.4750D} and \\citet{2019MNRAS.485.1775I}. The data sets are carefully examined to identify bad data (non-working antennas, bad baselines, RFI, etc.), employing the tasks, {\\tt TVFLG} and {\\tt UVPLT}. Following standard procedure, the gain and bandpass calibration is carried out after flagging bad data. Subsequent to bandpass calibration, channel averaging is done while keeping bandwidth smearing negligible. Continuum maps at both frequencies are generated using the task {\\tt IMAGR}, adopting wide-field imaging technique to account for w-term effects. Several iterations of self-calibration (phase-only) are performed to minimize phase errors and improve the image quality. Subsequently, primary beam correction was carried out for all the generated maps. 
\n\nIn order to obtain a reliable estimate of the flux density, contribution from the Galactic diffuse emission needs to be accounted for. This emission follows a power-law spectrum with a steep negative spectral index of $-2.55$ \\citep{1999A&AS..137....7R} and hence has a significant contribution at the low GMRT frequencies (especially at 610~MHz). This results in the increase in system temperature, which becomes particularly crucial when observing close to the Galactic plane as is the case with our target, IRAS 17149$-$3916. The flux calibrators lie away from the Galactic plane and for such sources at high Galactic latitudes, the Galactic diffuse emission would be negligible. This makes it essential to quantify the system temperature correction to be applied in order to get an accurate estimate of the flux density. Since measurement of the variation in the system temperature of the antennas at GMRT are not automatically implemented during observations, we adopt the commonly used Haslam approximation discussed in \\citet{2015MNRAS.451...59M} and implemented in \\citet{2019MNRAS.485.1775I}.\n\nThe sky temperature, ${T_{\\rm sky}}$, at frequency $\\nu$ for the location of our source is determined using the equation\n\\begin{equation}\nT_{\\rm sky,\\nu} = T_{\\rm sky}^{408}\\bigg(\\frac{\\nu}{408~\\textrm{MHz}}\\bigg)^\\gamma\n\\end{equation}\n\\noindent\nwhere $\\gamma = -2.55$ is the spectral index of the Galactic diffuse emission and $\\it{T_{\\rm sky}^{\\rm 408}}$ \nis the sky temperature at 408~MHz obtained from the all-sky 408~MHz survey of \\citet{1982A&AS...47....1H}. Using this method, we estimate the scaling factors of 2.2 and 1.24 at 610 and 1280~MHz, respectively, which are used to rescale and obtain the final maps.\n\\begin{table}\n\\caption{Details of the GMRT radio continuum observations.} \n\\begin{center}\n\\label{radio_tab}\n\\begin{tabular}{lll}\n\\\\ \n\\hline \\hline\n & 610 MHz & 1280 MHz \\\\\n\\hline\nDate of Obs. & 8 August 2014 & 14 August 2014 \\\\\nFlux Calibrators & 3C286, 3C48 & 3C286, 3C48 \\\\\nPhase Calibrators & 1714-252 & 1714-252 \\\\\nOn-source integration time & $\\sim$4~h & $\\sim$4~h \\\\\nSynth. beam & 10.69\\arcsec$\\times$6.16\\arcsec & 4.41\\arcsec$\\times$2.24\\arcsec \\\\\nPosition angle. (deg) & 7.04 & 6.37 \\\\\n{\\it rms} noise (mJy\/beam) & 0.41 & 0.07 \\\\\nInt. Flux Density (Jy) & $14.1 \\pm 1.4$ & $12.6 \\pm 1.3$ \\\\\n{\\small (integrated within $3\\sigma$ level)} & & \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\subsection{Other available data}\\label{data_archive}\n\\subsubsection{Near-infrared data from 2MASS} \\label{data_2mass}\nNIR ($\\rm JHK_s$) data for point sources around our region of interest has been obtained from the Two Micron All Sky Survey (2MASS) Point Source Catalog (PSC). Resolution of 2MASS images is $\\sim$5.0 arcsec. We select those sources for which the ``read-flag'' values are 1 - 3 to ensure a sample with reliable photometry. This data is used to study the stellar population associated with IRAS 17149$-$3916.\n\n\\subsubsection{Mid-infrared data from Spitzer} \\label{data_spitzer}\n\nThe MIR images towards IRAS 17149$-$3916 are obtained from the archives of the Galactic Legacy Infrared Midplane Survey Extraordinaire (GLIMPSE) survey of the {\\it Spitzer} Space Telescope. We retrieve images towards the region in the four Infrared Array Camera (IRAC; \\citealt{2004ApJS..154...10F}) bands (3.6, 4.5, 5.8, 8.0\\,$\\rm \\mu m$). 
These images have an angular resolution of $\\lesssim 2$ arcsec with a pixel size of $\\sim 0.6$ arcsec. We utilize these images in our study to present the morphology of the MIR emission associated with the region.\n\n\\subsubsection{Far-infrared data from Herschel} \\label{data_herschel}\nThe far-infrared (FIR) data used in this paper have been obtained from the {\\it Herschel} Space Observatory archives. Level 2.5 processed 70 - 500\\,$\\rm \\mu m$ images from Spectral and Photometric Imaging Receiver (SPIRE; \\citealt{2010A&A...518L...3G}) and JScanam images from the Photodetector Array Camera and Spectrometer (PACS; \\citealt{2010A&A...518L...2P}), that were observed as part of the {\\it Herschel} infrared Galactic plane Survey (Hi-GAL; \\citealt{2008A&A...481..345M}), were retrieved. We use the FIR data to examine cold dust emission and investigate the cold dust clumps in the regions.\n\n\\subsubsection{Atacama Large Millimeter Array archival data} \\label{data_alma}\nWe make use of the 1.4\\,mm (Band~6) continuum maps obtained from the archives of {\\it Atacama Large Millimeter Array (ALMA)} to identify the compact dust cores associated with IRAS 17149$-$3916. These observations were made in 2017 (PI: A.Sanchez-Monge \\#2016.1.00191.S) using the extended 12m-Array configuration towards four pointings, S61, S62, S63, and S64. Each of these pointings sample different regions of the IRAS 17149$-$3916 complex. The retrieved maps have an angular resolution of 1.4\\,arcsec $\\times$ 0.9\\,arcsec and a pixel scale of 0.16\\,arcsec.\n\n\\subsubsection{Molecular line data from MALT90 survey}\n\nThe Millimeter Astronomy Legacy Team 90 GHz survey (MALT90; \\citealt{{2011ApJS..197...25F},{2013PASA...30...57J}}) was carried out using the Australia Telescope National Facility (ATNF) Mopra 22-m telescope with an aim to characterize molecular clumps associated with {H\\,{\\scshape ii}~regions}. The survey dataset contains molecular line maps of more than 2000 dense cores lying in the plane of the Galaxy, the corresponding sources of which were taken from the ATLASGAL 870\\,$\\rm \\mu m$ continuum survey. The MALT90 survey covers 16 molecular line transitions lying near 90 GHz with a spectral resolution of $0.11$ km s$^{-1}$ and an angular resolution of 38\\,arcsec. In this study we use the optically thin $\\rm N_2 H^+$ line spectra to carry out the virial analysis of the detected dust clumps associated with this complex.\n\n\\section{Results}\n\\label{results}\n\\subsection{Ionized gas} \\label{ionized}\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.3]{Fig2a.eps}\n\\includegraphics[scale=0.3]{Fig2b.eps}\n\\caption{{\\it Top}: 610\\,MHz GMRT map of the region associated with IRAS 17149$-$3916 overlaid with the 3$\\sigma$ ($\\sigma = 0.6\\,\\rm mJy\\,beam^{-1}$) level contour in light blue. White contours correspond to the 18\\,GHz emission mapped with ATCA \\citep{2013A&A...550A..21S}, with contour levels starting from 3$\\sigma$ and increasing in steps of 8$\\sigma$ ($\\sigma = 1.2\\times 10^{-2}\\,\\rm Jy\\,beam^{-1}$). {\\it Bottom:} Same as the top panel but for 1280\\,MHz GMRT ($\\sigma = 0.1\\,\\rm mJy\\,beam^{-1}$) and 22.8\\,GHz ATCA map with the contour levels starting from 3$\\sigma$ and increasing in steps of 5$\\sigma$ ($\\sigma = 1.2\\times 10^{-2}\\,\\rm Jy\\,beam^{-1}$). The GMRT maps presented here have been convolved to a resolution of 12\\,arcsec for 610~MHz and 5\\,arcsec for 1280~MHz. 
The beam sizes of the GMRT and ATCA maps are shown in the lower left- and right-hand corners, respectively.}\n\\label{radioim}\n\\end{figure}\nWe present the first low-frequency radio maps of the region associated with IRAS 17149$-$3916 obtained using the GMRT. The continuum emission mapped at 610 and 1280~MHz is shown in Fig.~\\ref{radioim}. The ionized gas reveals an interesting, large-extent cometary morphology, where the head lies towards the west and a fan-like diffuse tail opens towards the east. The tail has a north-south extension of $\\sim$ 6 arcmin. The bright radio emission near the head displays a `horseshoe'-shaped structure that opens towards the north-east and mostly traces the south-western portion of the dust ring structure presented in Fig.~\\ref{fig_intro}. This is enveloped within the extended and faint, diffuse emission. In addition, there are several discrete emission peaks seen at both frequencies. The {\\it rms} noise and the integrated flux density values estimated are listed in Table \\ref{radio_tab}. For the latter, the flux density is integrated within the respective 3$\\sigma$ contours. The quoted errors are estimated following the method discussed in \\citet{2018A&A...612A..36D}.\n\nAlso included in the figure are contours showing the high-frequency ATCA observations at 18 and 22.8~GHz, from \\citet{2013A&A...550A..21S}. These snapshot ($\\sim$ 10~min) ATCA maps sample only the brightest region towards the head and the emission at 18~GHz is seen to be more extended. The GMRT and ATCA maps reveal the presence of several distinct peaks which are likely to be externally ionized, density-enhanced regions, thus suggesting a clumpy medium. \nThe fact that some of these peaks could also be internally ionized by newly formed massive stars cannot be ruled out. Hence, further detailed study is required to understand the nature of these radio peaks. \nA careful search of the SIMBAD\/NED database rules out any possible association with background\/foreground radio sources in the line of sight. Comparing with Fig.~\\ref{fig_intro}, the ionized emission traced in the $\\rm H{\\alpha}$ image agrees well with the GMRT maps. \\citet{2006AJ....131..951R} and \\citet{2014MNRAS.437..606T} present the ionized emission mapped in the Br$\\gamma$ line, which is localized to the central part and mostly correlates with the bright emission seen in the GMRT maps. 
\n\nAssuming the radio emission at 1280~MHz to be optically thin and emanating\nfrom a homogeneous, isothermal medium, we derive several physical parameters of the detected {H\\,{\\scshape ii}~region} using the following expressions from \\citet{2016A&A...588A.143S},\\\\\n\n{\\it Lyman continuum photon flux ($N_{\\rm Ly}$):}\n\n\\begin{equation}\n\\begin{split}\n\\left( \\frac{N_{\\rm Ly}}{\\rm s^{-1}}\\right) = 4.771 \\times 10^{42} \\left(\\frac{F_\\nu}{\\rm Jy}\\right) \\left( \\frac{T_{\\rm e}}{\\rm K}\\right)^{-0.45} \\\\\n\\times \\left( \\frac{\\nu}{\\rm GHz}\\right)^{0.1} \\left( \\frac{D}{\\rm pc}\\right) ^{2}\n\\end{split}\n\\label{Lyman_flux}\n\\end{equation}\n\n{\\it Electron number density ($n_{\\rm e}$):}\n\\begin{equation}\n\\begin{split}\n\\left ( \\frac{n_{\\rm e}}{\\rm cm^{-3}}\\right) = 2.576 \\times 10^6 \\left(\\frac{F_\\nu}{\\rm Jy}\\right)^{0.5} \\left( \\frac{T_{\\rm e}}{\\rm K}\\right)^{0.175} \\left( \\frac{\\nu}{\\rm GHz}\\right)^{0.05} \\\\\n\\times \\left( \\frac{\\theta_{\\rm source}}{\\rm arcsec}\\right)^{-1.5} \\left( \\frac{D}{\\rm pc}\\right) ^{-0.5}\n\\end{split}\n\\label{e_no_density}\n\\end{equation}\n\n{\\it Emission measure (EM):}\n\\begin{equation}\n\\begin{split}\n\\left( \\frac{\\rm EM}{\\rm pc\\ cm^{-6}}\\right) = 3.217 \\times 10^7 \\left(\\frac{F_\\nu}{\\rm Jy}\\right) \\left( \\frac{T_{\\rm e}}{\\rm K}\\right)^{0.35} \\\\\n\\times \\left( \\frac{\\nu}{\\rm GHz}\\right)^{0.1} \\left( \\frac{\\theta_{\\rm source}}{\\rm arcsec}\\right)^{-2}\n\\end{split}\n\\label{emission_measure}\n\\end{equation}\nwhere, $F_{\\nu}$ is the integrated flux density of the ionized region, $T_{\\rm e}$ is the electron temperature, $\\nu$ is the frequency, $\\theta_{\\rm source}$ is the angular diameter of the {H\\,{\\scshape ii}~region} and D is the distance to this region. $T_{\\rm e}$ is taken to be 5000~K from the radio recombination line estimates by \\citet{1987A&A...171..261C}. Approximating the emission region to an ellipse, the angular source size ($\\theta_{\\rm source}$) is taken to be the geometric mean of the axes of the ellipse and is estimated to be 6.25~arcmin (3.6~pc). The derived physical parameters are listed in Table \\ref{radio-physical-param}. \n\n\\begin{table}\n\\caption{Derived physical parameters of the {H\\,{\\scshape ii}~region} associated with IRAS 17149$-$3916.}\n\\begin{center}\n\\begin{tabular}{ccccc}\n\\hline\nSize & log $N_{\\rm Ly}$ & EM & $n_{\\rm e}$ & Spectral Type \\\\\n(pc) & & (cm$^{-6}$pc) & (cm$^{-3}$) & \\\\\n\\hline \n\n3.6 & 48.73 & 5.8$\\times$10$^{5}$ & 1.3$\\times$10$^{2}$ & O6.5V -- O7V \\\\ \\hline\n\\end{tabular}\n\\label{radio-physical-param}\n\\end{center}\n\\end{table}\n \nIf we assume a single star to be ionizing the H\\,{\\scshape ii}~region\\ and compare the Lyman-continuum photon flux obtained from the 1280~MHz map with the parameters of O-type stars presented in \\citet[Table 1;][]{2005A&A...436.1049M}, we estimate its spectral type to be O6.5V -- O7V. \nThis can be considered as a lower limit as the emission at 1280~MHz could be optically thick as well. In addition, one needs to account for dust absorption of Lyman continuum photons, which can be significant as shown by many studies \\citep[e.g.][]{2011A&A...525A.132P}. \nThe estimated spectral type suggests a mass range of $\\sim 20 - 40~ M_{\\odot}$ for the ionizing star \\citep{2005A&A...436.1049M}. \n\nTo decipher the nature of the ionized emission, we determine the spectral index, $\\alpha$ which is defined as $F_{\\nu} \\propto \\nu^{\\alpha}$. 
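As a quick consistency check of Table~\\ref{radio-physical-param}, substituting the 1280~MHz values $F_{\\nu} = 12.6$~Jy, $T_{\\rm e} = 5000$~K, $\\nu = 1.28$~GHz, and $D = 2000$~pc into equation~\\ref{Lyman_flux} gives\n\\[\nN_{\\rm Ly} \\simeq 4.771 \\times 10^{42} \\times 12.6 \\times 5000^{-0.45} \\times 1.28^{0.1} \\times 2000^{2} \\simeq 5.4 \\times 10^{48}~{\\rm s^{-1}} ,\n\\]\ni.e. $\\log N_{\\rm Ly} \\simeq 48.7$, consistent with the tabulated value.\n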
The flux density, $F_{\\nu}$, is calculated from the GMRT radio maps. For this, we generate two new radio maps at 610 and 1280~MHz by setting the $\\it uv$ range to a common value of $0.14 - 39.7 $~k$\\lambda$. This ensures similar spatial scales being probed at both frequencies. Further, the beam size for both the maps is set to $\\rm 12~arcsec \\times 12~arcsec$. $F_{\\nu}$ is obtained by integrating within the area defined by the 3$\\sigma$ contour of the new 610~MHz map. \nThe integrated flux density values are estimated to be $\\rm 13.7 \\pm 1.3 \\,Jy\\,$, $\\rm 12.1 \\pm 1.2 \\,Jy\\,$ at 610 and 1280~MHz, respectively. \nThese yield a spectral index of $-0.17 \\pm 0.19$. Similar values are obtained for the central, bright radio emission as well. Within the quoted uncertainties, the average spectral index is fairly consistent with optically thin, free-free emission as expected from {H\\,{\\scshape ii}~regions} which are usually dominated by thermal emission. Spectral index estimate of $-0.1$, consistent with optically thin thermal emission, is also obtained by combining the GMRT flux density values with the available single dish measurements at 2.65~GHz \\citep{1969AuJPA..11...27B} and 4.85~GHz \\citep{1994ApJS...91..111W}.\n\n\\subsection{The dust environment}\n\\label{mir-dust}\n\\begin{figure*}\n\\centering\n\\includegraphics[scale=0.4]{Fig3.eps}\n\\caption{Dust emission associated with IRAS 17149$-$3916 from the {\\it Spitzer}-GLIMPSE and the {\\it Herschel}-Hi-GAL surveys.} \n\\label{iremission}\n\\end{figure*}\nThe warm and the cold dust emission associated with IRAS 17149$-$3916 unravels interesting morphological features like a bubble, pillars, filaments, arcs, and clumps, that strongly suggest this to be a very active star forming complex where the profound radiative and mechanical feedback of massive stars on the surrounding ISM is clearly observed. Fig.~\\ref{iremission} compiles the MIR and FIR emission towards IRAS 17149$-$3916 from the GLIMPSE and Hi-GAL surveys. Apart from the stellar population probed at the shorter wavelengths, the diffuse emission seen in the IRAC-GLIMPSE images would be dominated by the emissions from polycyclic aromatic hydrocarbons (PAHs) excited by the UV photons in the photodissociation regions \\citep{2012A&A...542A..10A,2012ApJ...760..149P}. Close to the hot stars, there would also be significant contribution from thermally emitting warm dust that is heated by stellar radiation \\citep{2008ApJ...681.1341W}. In {H\\,{\\scshape ii}~regions}, emission from dust heated by trapped Ly$\\alpha$ photons \\citep{1991MNRAS.251..584H}, would also be present in these IRAC bands. In the wavelength regime of the 21~$\\rm \\mu m$ MSX, the emission is either associated with stochastically heated Very Small Grains (VSGs) or thermal emission from hot big grains (BGs). As we go to the FIR Hi-GAL maps, cold dust emission dominates and shows up as distinct clumps and filamentary structures where, emission in the 70~$\\rm \\mu m$ band is dominated by the VSGs and the longer wavelength bands like 250~$\\rm \\mu m$ band trace emissions from the BGs \\citep{2012ApJ...760..149P}.\n\n\\subsubsection*{Dust temperature and column density maps}\nTo understand the nature of cold dust emission, we generate the dust temperature and the molecular hydrogen column density maps following the procedure detailed in \\citet{2018A&A...612A..36D} and \\citet{2019MNRAS.485.1775I} and briefly stated here. 
A pixel-by-pixel modified blackbody modelling of the observed spectral energy distribution is carried out. As discussed in these two papers, the 70\\,$\\rm \\mu m$ data is not used because there would be an appreciable contribution from the warm dust component, rendering a single modified blackbody fit inaccurate. Thus, we have the FIR emission at 160, 250, 350, and 500\\,$\\rm \\mu m$ mostly on the Rayleigh-Jeans part to constrain the model given by\n\\begin{equation}\nF_{\\nu}-I_{\\rm bg} = B_{\\nu}(\\nu,T_{\\rm d})~\\Omega~(1-{\\rm e}^{-\\tau_{\\nu}}) \n\\label{MBB-Eqn}\n\\end{equation}\nwhere, $F_{\\nu}$ is the observed flux density, $I_{\\rm bg}$ is the background flux density, $B_{\\nu}(\\nu,T_{\\rm d})$ is the Planck function at the dust temperature $\\rm T_{\\rm d}$, $\\Omega$ is the solid angle subtended by a pixel (all maps are convolved to a common resolution of 35.7~arcsec and regridded to a common pixel size of $\\rm 14~arcsec\\times14~arcsec$). The background flux is estimated from a nearby region relatively free of clumpy and bright emission.\nThe optical depth $\\tau_{\\nu}$ in Eqn. \\ref{MBB-Eqn} can be expressed as\n\\begin{equation}\n\\tau_{\\nu} = \\mu_{\\rm H_2}~ N({\\rm H_2})~ m_{\\rm H}~ \\kappa_{\\nu}\n\\end{equation}\nwhere, $\\mu_{\\rm H_2}$ is the mean molecular weight which is taken as 2.8 \\citep{2008A&A...487..993K}, $N({\\rm H_2})$ is the column density, $m_{\\rm H}$ is the mass of the hydrogen atom and $\\kappa_{\\nu}$ ($\\rm cm^2\\,g^{-1}$) is the dust opacity which is given as \\citep{1983QJRAS..24..267H}\n\\begin{equation}\n\\kappa_{\\nu} = 0.1\\left ( \\frac{\\nu}{1200~{\\rm GHz}} \\right )^{\\beta} \n\\label{kappa}\n\\end{equation}\nHere, $\\beta$ denotes the dust emissivity spectral index and a typical value of 2, estimated in several star-forming regions, is assumed. \nIn fitting the modified blackbody to the observed flux densities, $N({\\rm H_2})$ and $T_{\\rm d}$ are kept as free parameters. \n\\begin{figure*}\n\\centering\n\\includegraphics[scale=0.32]{Fig4a.eps}\n\\includegraphics[scale=0.32]{Fig4b.eps}\n\\includegraphics[scale=0.32]{Fig4c.eps}\n\\caption{Column density (a), dust temperature (b), and chi-square ($\\chi^{2}$) (c) maps of the region associated with the H\\,{\\scshape ii}\\ region. The 610~MHz ionized emission is overlaid as magenta and gray contours on the column density and dust temperature maps, respectively. The contour levels are 0.01, 0.03, 0.06, 0.12, and 0.2~Jy\/beam. The retrieved clump apertures are shown on the column density (in blue) and dust temperature (in black) maps following the nomenclature as discussed in the text. The black contour in (a) shows the area integrated for estimating $n_0$ (refer Section \\ref{clumps-CC}.)}\n\\label{cdtchi}\n\\end{figure*}\nThe column density and dust temperature maps generated are shown in Fig.~\\ref{cdtchi}. The goodness of the fits for each pixel can be seen in the $\\chi^{2}$ map where the maximum $\\chi^{2}$ value is seen to be $\\sim 8$.\nThe column density map presents a triangular morphology with three distinct, bright and dense regions.\nA network of broad filaments is also seen in the map. The dust temperature map is relatively patchy with regions of higher temperature within the radio nebula. A region with warm temperature is seen to be located towards the south-east of IRAS 17149$-$3916, the signature of which can be seen in the {\\it Herschel} maps shown in Fig.~\\ref{iremission}. The western side of the {H\\,{\\scshape ii}~region} shows comparatively cold temperatures. 
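For illustration, a minimal per-pixel sketch of the fit described above can be written in a few lines of Python using {\\tt scipy}; the maps are assumed to be already background-subtracted, convolved, and regridded as stated, and the brightness values below are illustrative placeholders rather than values taken from our maps:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import curve_fit\n\n# cgs constants; mu_H2 = 2.8 and beta = 2 as adopted in the text\nh, k_B, c = 6.626e-27, 1.381e-16, 2.998e10\nm_H, mu_H2, beta = 1.674e-24, 2.8, 2.0\n\ndef greybody(nu, logN_H2, T_d):\n    # modified blackbody surface brightness in MJy/sr at frequency nu [Hz]\n    kappa = 0.1 * (nu / 1.2e12)**beta            # dust opacity [cm^2 g^-1]\n    tau = mu_H2 * 10.0**logN_H2 * m_H * kappa    # optical depth\n    B_nu = 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T_d))\n    return 1.0e17 * B_nu * (1.0 - np.exp(-tau))  # cgs -> MJy/sr\n\n# one pixel: background-subtracted brightness at 160-500 micron (placeholders)\nwav = np.array([160.0, 250.0, 350.0, 500.0])     # micron\nnu = c / (wav * 1.0e-4)                          # Hz\nI_nu = np.array([6300.0, 2650.0, 1050.0, 350.0]) # MJy/sr, illustrative only\n\npopt, pcov = curve_fit(greybody, nu, I_nu, p0=[22.0, 20.0])\nlogN_fit, T_fit = popt                           # free parameters of the fit\n\\end{verbatim}\n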
Furthermore, the filamentary features seen in the column density map are mostly revealed as distinct low temperature lanes. \n\n\\subsubsection*{Dust clumps and cores}\\label{dust_clump}\nThe FIR and the column density maps show the presence of dust clumps. These clumps are identified using the \\textit{Herschel} 350\\,$\\rm \\mu m$ map and the {\\it Dendrogram}\\footnote{\\url{https:\/\/dendrograms.readthedocs.io\/en\/stable\/}} algorithm. Using this algorithm, we identify the smallest structures, called the `leaves', in the 350\\,$\\rm \\mu m$ map, which in this case are the cold dust clumps. The key input parameters for the identification of the clumps are (1) {\\it min\\_value} = $3\\sigma$ and (2) {\\it min\\_delta} = $\\sigma$, where $\\rm \\sigma (= 191.2\\,MJy\\,sr^{-1})$ is the {\\it rms} level of the 350\\,$\\rm \\mu m$ map. An additional parameter, {\\it min\\_pix = N}, is also used, which is the minimum number of pixels required for a `leaf' to be considered an independent entity. To ensure that the clumps are resolved, the value of {\\it N} is chosen to be 7\\,pixels, the beam area of the 350\\,$\\rm \\mu m$ map. Setting these parameters, we extract three cold dust clumps. The central panel of Fig.~\\ref{dustclumps} shows the 350\\,$\\rm \\mu m$ map overlaid with the retrieved apertures of the three detected clumps labelled 1, 2, and 3. The physical parameters of the detected clumps are listed in Table \\ref{clump-param}. These are derived from the 350\\,$\\rm \\mu m$, column density and dust temperature maps. The peak positions are determined from the 350\\,$\\rm \\mu m$ map. The clump radii, $r=(A\/\\pi)^{0.5}$, where $A$ is the enclosed area within the retrieved clump apertures. \n\\begin{figure*}\n\\centering\n\\includegraphics[scale=0.5]{Fig5.eps}\n\\caption{ The central panel is the \\textit{Herschel} 350\\,$\\rm \\mu m$ map overlaid with the {\\it Dendrogram} retrieved clump apertures in black. White circles represent four pointings of the {\\it ALMA} observations, S61, S62, S63, and S64. The 1.4\\,mm {\\it ALMA} map towards each pointing is placed at the corners of the 350\\,$\\rm \\mu m$ image. The apertures of the dense cores extracted using the {\\it Dendrogram} algorithm are overlaid and labelled on these maps. The beam sizes of the 1.4\\,mm maps are given towards the lower left hand corner of each image.}\n\\label{dustclumps}\n\\end{figure*}\n\\begin{table*}\n\\caption{Physical parameters of the detected dust clumps associated with IRAS 17149$-$3916}\n\\begin{center}\n\\centering \n\\scalebox{0.85}{\n\\begin{tabular}{ccccccccccc}\n\\hline\nClump & \\multicolumn{2}{c}{Peak position} & Mean $T_{\\rm d}$ & $\\Sigma N({\\rm H_2})$ & Radius & Mass & Mean $N({\\rm H_2})$ & No. 
density $(n_{\\textup{H}_{2}})$ & \\multicolumn{1}{l}{$M_{\\rm vir}$} & \\multicolumn{1}{l}{$\\alpha_{\\rm vir}$} \\\\\n & RA (J2000) & DEC (J2000) & (K) & $(\\times 10^{23}$ cm$^{-2})$ & (pc) & ($M_{\\odot}$) & $(\\times 10^{22}$ cm$^{-2}$) & $(\\times 10^{4}$ cm$^{-3}$) & ($M_{\\odot}$) & \\multicolumn{1}{l}{} \\\\ \\hline\n1 & 17 18 24.32 & -39 19 28.04 & 27.8 & 6.1 & 0.3 & 250 & 5.5 & 5.0 & 452 & 1.75 \\\\\n2 & 17 18 18.77 & -39 19 04.54 & 25.0 & 7.1 & 0.3 & 292 & 5.4 & 5.1 & 600 & 2.15 \\\\\n3 & 17 18 22.57 & -39 18 36.44 & 27.3 & 2.8 & 0.2 & 117 & 3.6 & 4.6 & 450 & 4.08 \\\\ \\hline\n\\end{tabular}}\n\\label{clump-param}\n\\end{center}\n\\end{table*}\nTo estimate the masses of the detected clumps, we utilize the column density map and use the following expression\n\\begin{equation}\nM_{\\rm clump} =\\mu_{\\rm H_2}~ \\Sigma N({\\rm H_2})~A_{\\rm pixel}~m_{\\rm H}\n\\label{mass_eqn}\n\\end{equation}\nwhere, $\\mu_{\\rm H_2}$ is the mean molecular weight taken as 2.8, $\\Sigma N({\\rm H_2})$ is the integrated column density over the clump area, $A_{\\rm pixel}$ is the pixel area in $\\rm cm^2$ and $m_{\\rm H}$ is the mass of hydrogen atom. The number density is determined using the expression $n_{\\rm H_2} = 3\\,N({\\rm H_2})\/4r$. The peak positions of the Clumps 1, 2, and 3 agree fairly well with Clumps III, I, and II, respectively, detected by \\citet{2014MNRAS.437..606T} using {\\it Herschel} maps. Comparing the masses presented in Table \\ref{clump-param}, the estimates given in \\citet{2014MNRAS.438.2716T} are higher by a factor of 8, 1.5, and 3 for the Clumps 1, 2, and 3, respectively. \n\n{\\it ALMA} continuum data enables investigation of the detected clumps at high resolution. In Fig.~\\ref{dustclumps}, we show {\\it ALMA} dust continuum maps at 1.4\\,mm towards the four pointings marked and labelled as S61, S62, S63, and S64 where the first three pointings lie mostly within three clumps and S64 lies outside towards the east. Using the same {\\it Dendrogram} algorithm, several cores are identified. The key input parameters to the {\\it Dendrogram} algorithm are, {\\it min\\_value} = $3\\sigma$, {\\it min\\_delta} = $\\sigma$, and {\\it min\\_pix = N}, where $\\sigma$ is the {\\it rms} level and $N(=60)$ is the beam area of the 1.4\\,mm maps. In order to avoid detection of spurious cores, we only retain those with peak flux density greater than $5\\sigma$. Applying these constraints, seven cores are identified towards S61 and S62 each, one towards S63 and two towards S64. \n\nTo further study these dense cores, we estimate their physical parameters. Adopting the formalism described by \\citet{2018ApJ...853..160C} and assuming the emission at 1.4\\,mm to be optically thin, the masses are estimated using the following equation\n\n\\begin{eqnarray}\n M & = &\n \\displaystyle 0.0417 \\, M_{\\odot}\n \\left( {\\textrm e}^{0.533 (\\lambda \/ {1.3\\, \\textrm {mm}})^{-1}\n (T \/ {20\\, \\textrm {K}})^{-1}} - 1 \\right) \\left( \\frac{F_{\\nu}}{\\textrm {mJy}} \\right) \\nonumber \\\\\n & & \\displaystyle\n \\times \\left( \\frac{\\kappa_{\\nu}}{0.00638\\,\\textrm{cm}^2\\,\\textrm{g}^{-1}} \\right)^{-1}\n \\left( \\frac{d}{\\textrm {kpc}} \\right)^2\n \\left( \\frac{\\lambda}{1.3\\, \\textrm {mm}} \\right)^{3} \n \\label{core_mass}\n\\end{eqnarray}\nHere, $F_\\nu$ is the integrated flux density of each core, $d$ is the distance to the source and $\\lambda$ is the wavelength. 
The opacity, $\\kappa_\\nu$, is estimated using equation~\\ref{kappa} with the dust emissivity spectral index $\\beta$ fixed at 2.0. For cores detected in S61, S62, and S63, mean dust temperatures of the respective clumps are taken. For S64, the mean dust temperature for the region covering the S64 pointing is used. The effective radius, $r=(A\/\\pi)^{0.5}$, of each core is also estimated, where $A$ is the area enclosed within each core aperture.\nThe identified cores with the retrieved apertures are shown in Fig.~\\ref{dustclumps} and the estimated physical parameters are listed in Table \\ref{alma-cores}. The uncertainties related to the missing flux effect are not taken into account in deriving these parameters since they may not be significant, given that the largest recoverable scale is quoted to be $\\sim$10~arcsec for this ALMA dataset, which is appreciably larger than the typical sizes of the detected cores. Barring the largest detected core, which has an angular size of $\\sim$7~arcsec, the average size of the cores is 3~arcsec.\n\\begin{table*}\n\\caption{Parameters of the detected dust cores associated with IRAS 17149$-$3916}\n\\begin{center}\n\\centering \n\\begin{tabular}{ccccccc}\n\\hline\n & Core & \\multicolumn{2}{c}{Peak position} & Flux density & Radius & Mass \\\\\n & & RA (J2000) & DEC (J2000) & (mJy) & (pc) & ($M_\\odot$) \\\\\n\\hline\nS61 & 1 & 17 18 23.03 & -39 19 12.95 & 9.7 & 0.01 & 1.7 \\\\\n & 2 & 17 18 23.75 & -39 19 18.47 & 62.2 & 0.02 & 10.7 \\\\\n & 3 & 17 18 23.45 & -39 19 19.53 & 32.8 & 0.02 & 5.7 \\\\\n & 4 & 17 18 24.17 & -39 19 22.35 & 11.2 & 0.01 & 1.9 \\\\\n & 5 & 17 18 24.34 & -39 19 25.09 & 106.7 & 0.02 & 18.4 \\\\\n & 6 & 17 18 24.76 & -39 19 19.50 & 4.5 & 0.01 & 0.8 \\\\\n & 7 & 17 18 24.93 & -39 19 28.12 & 31.8 & 0.02 & 5.5 \\\\\n\\hline \nS62 & 1 & 17 18 20.38 & -39 18 54.32 & 46.0 & 0.02 & 9.4 \\\\\n & 2 & 17 18 20.09 & -39 18 54.43 & 9.8 & 0.01 & 2.0 \\\\\n & 3 & 17 18 19.64 & -39 18 56.07 & 7.9 & 0.02 & 1.6 \\\\\n & 4 & 17 18 19.63 & -39 19 04.70 & 2.1 & 0.01 & 0.4 \\\\\n & 5 & 17 18 18.82 & -39 19 04.94 & 3.1 & 0.01 & 0.6 \\\\\n & 6 & 17 18 18.35 & -39 19 01.65 & 19.2 & 0.02 & 3.9 \\\\\n & 7 & 17 18 18.42 & -39 19 05.37 & 8.2 & 0.01 & 1.7 \\\\\n\\hline \nS63 & 1 & 17 18 23.48 & -39 18 40.70 & 357.7 & 0.03 & 62.3 \\\\\n\\hline \nS64 & 1 & 17 18 29.98 & -39 19 11.22 & 7.0 & 0.02 & 1.1 \\\\\n & 2 & 17 18 29.10 & -39 18 54.11 & 1.9 & 0.01 & 0.3 \\\\\n\n\\hline\n\\end{tabular}\n\\label{alma-cores}\n\\end{center}\n\\end{table*}\n\n\\subsection{Molecular line observation of identified clumps}\nWe use the optically thin $\\rm N_2 H^+$ line emission to determine the $V_{\\textup{LSR}}$ and the line width, $\\Delta V$, of the clumps. The line spectra are extracted by integrating over the retrieved apertures of the clumps as shown in Fig.~\\ref{n2hplus_spectra}. The $\\rm N_2 H^+$ spectra have seven hyperfine structures and the {\\tt hfs} method of {\\tt CLASS90} is used to fit the observed spectra. The line parameters retrieved from the spectra are listed in Table \\ref{n2hplus_fit_parameters}. 
The $V_{\\textup{LSR}}$ determined agrees well with the value of $\\rm 13.7\\, km s^{-1}$ obtained from $\\rm CS(2-1)$ observations of the region by \\citet{1996A&AS..115...81B} \n\\begin{figure*}\n\\centering\n\\includegraphics[scale=0.2]{Fig6a.eps}\n\\includegraphics[scale=0.2]{Fig6b.eps}\n\\includegraphics[scale=0.2]{Fig6c.eps}\n\\caption{Spectra of the optically thin N$_{2}$H$^{+}$ line emission extracted over the three identified clumps associated with IRAS 17149$-$3916. The red curves are the {\\tt hfs} fit to the spectra. The estimated $V_{\\textup{LSR}}$ is denoted by the dashed blue line and the location of the hyperfine components by magenta lines.}\n\\label{n2hplus_spectra}\n\\end{figure*}\n\\begin{table}\n\\centering\n\\caption{The retrieved $\\rm N_2 H^+$ line parameters, $V_{\\textup{LSR}}$, $\\Delta V$, $T_{\\rm mb}$ and $\\int T_{\\textup{mb}} {\\rm dV}$ for the identified clumps associated with IRAS 17149$-$3916.}\n\\begin{tabular}{ccccc}\n\\hline\nClump & $V_{\\textup{LSR}}$ & $\\Delta V$ & $T_{\\rm mb}$ & $\\int T_{\\textup{mb}} {\\rm dV}$ \\\\\n & (km s$^{-1}$) & (km s$^{-1}$) & (K) & (K km s$^{-1}$) \\\\ \\hline\n1 & -13.0 & 3.2 & 1.5 & 5.4 \\\\\n2 & -13.8 & 3.7 & 1.6 & 6.8 \\\\\n3 & -13.8 & 3.9 & 0.8 & 3.1 \\\\ \\hline\n\\end{tabular}\n\\label{n2hplus_fit_parameters}\n\\end{table}\n\n\\section{Discussion}\n\\label{discussion}\n\\subsection{Understanding the morphology of the ionized gas}\nAs seen, the GMRT maps reveal the large extent and prominent cometary morphology of the {H\\,{\\scshape ii}~region} associated with IRAS 17149$-$3916 which was earlier discussed as a roundish {H\\,{\\scshape ii}~region} by \\citet{2014MNRAS.437..606T}. In this section, we attempt to investigate the likely mechanism for this observed morphology. \nThe widely accepted models to explain the formation of cometary {H\\,{\\scshape ii}~regions} are (1) the bow-shock model (e.g. \\citealt{1992ApJ...394..534V}), (2) the champagne-flow model (e.g. \\citealt{1979A&A....71...59T}), and (3) the mass loading model (e.g. \\citealt{1997ApJ...476..166W}). However, subsequent studies, like those conducted by \\citet{2003ApJ...596..344C} and \\citet{2006ApJS..165..283A}, find \nthe `hybrid' models, that are a combination of these, to better represent the observed morphologies. \n\nThe bow-shock model assumes a wind-blowing, massive star moving supersonically through a dense cloud. Whereas, the champagne-flow model invokes a steep density gradient encountered by the expanding {H\\,{\\scshape ii}~region} around a newly formed stationary, massive star possibly located at the edge of a clump. Here, the ionized gas flows out towards regions of minimum density. In comparison, the model proposed by \\citet{1997ApJ...476..166W} invokes the idea of strong stellar winds mass loading from the clumpy molecular cloud and the cometary structure unfolds when a gradient in the geometrical distribution of mass loading centres are introduced. In this model the massive, young star is considered to be stationary as in the case of the champagne-flow model. \n\nWhile observation of ionized gas kinematics is required to understand the origin of the observed morphology, in the discussion that follows, we discuss a few aspects based on the\nradio, column density, and FIR maps of the region associated with IRAS 17149$-$3916 along with the identification of E4 as the likely ionizing star (refer Section \\ref{ionizing_star}). 
Following the simple analytic expressions discussed in \\citet{2018A&A...612A..36D}, we derive a few shock parameters to probe the bow-shock model. Taking the spectral type of E4 to be O6.5V -- O7V as estimated from the radio flux density, and assuming it to move at a typical speed of $\\rm 10~km\/s$ through the molecular cloud, we calculate the `stand-off' distance to range between $\\rm 2.6~arcsec (0.02~pc) - 3.1~arcsec (0.03~pc)$. This is defined as the distance from the star at which the shock occurs and where the momentum flux of the stellar wind equals the ram pressure of the surrounding cloud. The theoretically estimated value is significantly less than the observed distance of $\\sim \\rm 84~arcsec (0.8~pc)$ between E4 and the cometary head. Taking viewing angle into consideration would decrease the theoretical estimate thus widening the disparity further. Based on the above estimations, it is unlikely that the bow-shock model would explain the cometary morphology. To confirm further, we determine the trapping parameter which is the inverse of the ionization fraction. As the ionizing star moves supersonically through the cloud, the swept off dense shells trap the {H\\,{\\scshape ii}~region} within it and its expansion is eventually inhibited by the ram pressure. Trapping becomes more significant when recombinations far exceed the ionizing photons. Studies of a large number of cometary {H\\,{\\scshape ii}~regions} show the trapping parameter to be much greater than unity \\citep{1991ApJ...369..395M}. For IRAS 17149$-$3916, we estimate the value to lie in the range $3.2 - 3.5$ which indicates either weak or no bow shock. Similar interpretations are presented in \\citet{2018A&A...612A..36D} and \\citet{2016MNRAS.456.2425V}. The trapping parameters obtained by these authors lie in the range 1.2 -- 4.3.\n\nTo investigate the other models, namely the champagne-flow and clumpy\/mass loading wind models, we compare the observed spatial distribution of the dust component and the ionized gas. The FIR and column density maps presented in Section \\ref{mir-dust} show a complex morphology of pillars, arcs, filaments in the region with detected massive clumps towards the cometary head. The steep density gradient towards the cometary head is evident. Without the ionized and molecular gas kinematics information, it is difficult to invoke the champagne-flow model. However, the maps do show the presence of clumps towards the cometary head which could act as potential mass loading centres and thus support the clumpy cloud model. Further observations and modelling are essential before one can completely understand the mechanisms at work.\n\n\\subsection{Ionizing massive star(s)}\\label{ionizing_star}\n\\citet{2006AJ....131..951R} have studied the associated stellar population towards IRAS 17149$-$3916 in the NIR. Using the colour-magnitude diagram, these authors show the presence of a cluster of massive stars within the infrared nebula and suggest IRS-1 to be the likely ionizing source. In a later study, \\citet{2014MNRAS.437..606T} have supported this view citing the spectroscopic classification of IRS-1 as O5 -- O6 by \\citet{2005A&A...440..121B} and consistency with the Lyman continuum photon flux estimated from the radio observations by \\citet{2013A&A...550A..21S}. 
\n\\begin{figure*}\n\\centering\n\\includegraphics[width=9cm,height=6.6cm]{Fig7a.eps}\n\\includegraphics[width=8cm,height=6cm]{Fig7b.eps}\n\\includegraphics[width=9cm,height=7cm]{Fig7c.eps}\n\\caption{(a) J vs J-H colour magnitude diagram of the sources (cyan dots) associated with IRAS 17149$-$3916 and located within the 3$\\sigma$ radio contour. The nearly vertical solid lines represent the ZAMS loci with 0, 15, and 30 magnitudes of visual extinction corrected for the distance. The slanting lines show the reddening vectors for spectral types B0 and O5. \n(b) J-H vs H-K colour-colour diagram for sources (magenta dots) in the same region as (a). The cyan and black curves show the loci of main sequence and giants, respectively, and are taken from \\citet{1983A&A...128...84K} and \\citet{1988PASP..100.1134B}. The locus of classical T Tauri adopted from \\citet{1997AJ....114..288M} is shown as long dashed line. The locus of Herbig AeBe stars shown as short dashed line is adopted from \\citet{1992ApJ...393..278L}. The parallel lines are the reddening vectors where cross marks indicate intervals of 5 mag of visual extinction. The colour-colour plot is divided into three regions, namely, `F', `T', and `P' (see text for more discussion). The interstellar reddening law assumed is from \\citet{1985ApJ...288..618R}. The magnitudes, colours and various loci plotted in both the diagrams are in the \\citet{1988PASP..100.1134B} system. \nThe identified early type (earlier than B0) and YSOs candidates are highlighted as blue and red stars, respectively. Of these, the ones located towards the central, bright radio emission are labelled as `E' and `Y', respectively. (c) An enlarged view of the bottom left portion of (b) showing the position of spectral types on the main sequence locus. The location of the source E4 and the errors on the colour are also shown.}\n\\label{NIR-CC-CM}\n\\end{figure*}\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.2]{Fig8.eps}\n\\caption{Panel a: {\\it Spitzer} 5.8\\,$\\rm \\mu m$ image in grey scale overlaid by 610~MHz radio contours. The contour levels are 0.002, 0.01, 0.03, 0.06, 0.12, and 0.2~Jy\/beam. The YSOs and massive stars detected towards the central ionized region are shown in red and blue coloured stars, respectively. Panel b: This shows the zoom-in view of central part of the H\\,{\\scshape ii}\\ region.}\n\\label{EY-Spa-dis}\n\\end{figure}\nWhile the spectral type estimated from the GMRT radio emission at 1280~MHz is consistent with that of IRS-1, we investigate the stellar population within the radio emission for a better understanding. \nIn Figs.~\\ref{NIR-CC-CM}(a) and (b), we plot the NIR colour-magnitude and colour-colour diagrams, respectively, of 2MASS sources located within the 3$\\sigma$ radio contour. \nFig.~\\ref{NIR-CC-CM}(c) shows an enlarged view of the bottom left portion of Fig.~\\ref{NIR-CC-CM}(b) to highlight the location of the star E4 with respect to the main sequence locus.\nFollowing the discussion given in \\citet[Fig. 7;][]{2006A&A...452..203T} the colour-colour plot is classified into `F', `T', and `P' regions. The `F' region is occupied by mostly field stars or Class III sources, the `T' region is for T-Tauri stars (Class II YSOs) and protostars (Class I YSOs) populate the `P' region.\nAs seen from the figures, there are sixteen sources earlier than spectral type B0 and eighteen identified YSOs. The sample of identified YSOs fall in the `T' region, the sources of which are believed to be Class II objects with NIR excess. 
Sources that lie towards the central bright, radio emitting region are labelled in the figures with prefixes of `E' for the sources earlier than B0 and `Y' for the YSOs. As indicated in Fig.~\\ref{NIR-CC-CM}(b), early type sources E1 and E2 are also the identified YSOs, Y1 and Y3, respectively. The coordinates and NIR magnitudes of these selected sources are listed in Table~\\ref{EY-sources}. Fig.~\\ref{EY-Spa-dis} shows the spatial distribution of the above sources with respect to the radio and 5.8\\,$\\rm \\mu m$ emission.\n\nAs seen from the above analysis, in addition to the presence of possible discrete radio sources that could be internally ionized, several massive stars are also identified from the NIR colour-magnitude and colour-colour diagrams. Hence, it is likely that ionization in this {H\\,{\\scshape ii}~region} is the result of this cluster of massive stars. However, the observed symmetrical, cometary morphology of the ionized emission strongly suggests that the ionization is mostly dominated by a single star. \nAs seen in Fig.~\\ref{NIR-CC-CM}(a), out of the early type sources that lie towards the central, bright radio emission, the colour and magnitude of the source E4 is consistent with a spectral type of $\\sim$O6.\nA careful scrutiny of the Fig.~\\ref{NIR-CC-CM}(b) show that early type stars E1 and E2 are embedded Class II sources and hence unlikely to be the main driving source. Sources E3, E5, E6 are possibly reddened giants or field stars. The location of early type star, E4 (which is the source IRS-1) in the colour-colour diagram (see enlarged view shown in Fig.~\\ref{NIR-CC-CM}c) agrees fairly well with the spectral type estimate of $\\sim$O6 obtained from the colour-magnitude diagram and strongly advocates it as the dominating exciting source. This is consistent with the identification of IRS-1 as the ionizing star in previous studies. Spatially also, the location of E4 clearly suggests its role in the formation of the network of pillar like structures observed (see Section \\ref{pillars}). As mentioned earlier, the spectral type of E4, estimated from NIR spectroscopy, is in good agreement with the radio flux. Location wise, however, it is 30~arcsec away from the radio peak. This offset could be attributed to density inhomogeneity or clumpy structure of the surrounding, ambient ISM. Supporting this scenario of E4 being the dominant player, are the interesting pillar like structures revealed in the MIR images discussed in the next section. 
\n\\begin{table*}\n\\caption{Early type and YSOs detected within the central, bright, radio emission of IRAS 17149$-$3916}\n\\begin{center}\n\\centering\n\\scalebox{0.85}{\n\\begin{tabular}{cccccc}\n\\hline\nSource & \\multicolumn{2}{c}{Coordinates} & J & H & K \\\\\n & RA (J2000) & DEC (J2000) & & & \\\\ \\hline\n & & {\\it Early-type sources} & & & \\\\[1mm]\nE1 (Y1) & 17 18 22.21 & -39 18 42.24 & 11.364 & 10.039 & 8.957 \\\\\nE2 (Y3) & 17 18 22.85 & -39 18 22.45 & 15.356 & 12.086 & 10.086 \\\\\nE3 & 17 18 25.11 & -39 18 46.41 & 15.398 & 12.486 & 11.361 \\\\\nE4 & 17 18 25.45 & -39 19 08.61 & 8.654 & 8.208 & 7.927 \\\\\nE5 & 17 18 25.68 & -39 18 26.65 & 11.790 & 9.617 & 8.606 \\\\\nE6 & 17 18 25.94 & -39 18 00.89 & 7.715 & 7.191 & 7.021 \\\\ \\hline\n & & {\\it YSOs} & & & \\\\[1mm]\nY1 (E1) & 17 18 22.21 & -39 18 42.24 & 11.364 & 10.039 & 8.957 \\\\\nY2 & 17 18 22.28 & -39 18 12.69 & 15.064 & 14.161 & 13.578 \\\\\nY3 (E2) & 17 18 22.85 & -39 18 22.45 & 15.356 & 12.086 & 10.086 \\\\\nY4 & 17 18 22.87 & -39 18 58.45 & 14.413 & 13.346 & 12.393 \\\\\nY5 & 17 18 23.28 & -39 19 07.77 & 12.993 & 11.906 & 11.135 \\\\\nY6 & 17 18 24.44 & -39 19 11.04 & 14.324 & 12.958 & 12.166 \\\\\nY7 & 17 18 25.14 & -39 19 25.95 & 14.741 & 13.657 & 12.664 \\\\\nY8 & 17 18 25.28 & -39 18 24.76 & 12.818 & 11.372 & 10.420 \\\\\nY9 & 17 18 28.30 & -39 19 49.71 & 14.677 & 13.449 & 12.680 \\\\\n\\hline\n\\end{tabular}}\n\\label{EY-sources}\n\\end{center}\n\\end{table*}\n\n\\subsection{Triggered star formation}\\label{pillars}\n\n\\subsubsection*{Pillar Structures}\nIn Fig.~\\ref{pillar-structure}, we illustrate the identification of pillar structures in the IRAC 8\\,$\\rm \\mu m$ image. The MIR emission presents a region witnessing a complex interplay of the neutral, ambient ISM with the ionizing radiation of newly formed massive star(s). The boxes labelled `A' and `B' show prominent pillar structures, the orientation of these are clearly pointed towards E4. This strongly suggests E4 as the main sculptor of the detected pillars. Furthermore, it also supports the identification of E4 as the main ionizing source of the H\\,{\\scshape ii}~region\\ . \n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.15]{Fig9.eps}\n\\caption{(a) 8.0\\,$\\rm \\mu m$ map of IRAS 17149$-$3916 from the {\\it Spitzer}-GLIMPSE survey. Blue arrows highlight the pillar structures identified within Boxes `A' and `B'. The white `+' mark shows the position of E4 (IRS-1). (b) A zoom-in on `A' at IRAC 5.8\\,$\\rm \\mu m$. (c) 1280\\,MHz map covering the pillar `A'.}\n\\label{pillar-structure}\n\\end{figure}\nOne of the mechanisms widely accepted to explain the formation of these pillars is the radiative driven implosion (RDI) \\citep{1994A&A...289..559L}. Here, pre-existing clouds exposed to newly forming massive star(s) are sculpted into pillars by slow photoevaporation caused due to strong impingement of ionizing radiation. The other being the classical collect and collapse model of triggered star formation proposed by \\citet{1977ApJ...214..725E}. Under this framework, the expanding {H\\,{\\scshape ii}~region} sweeps up the surrounding material, creating dense structures that could eventually form pillars in their shadows. \n\nFigs.~\\ref{pillar-structure}(b) and (c) show the zoomed in IR and radio view of pillar `A'. Clearly seen is a slightly elongated and bright radio source at the pillar head. To ascertain the nature of the bright radio source, we estimate few physical parameters using the 1280~MHz GMRT map. 
Using the 2D fitting tool of the {\\small CASA} viewer, we fit a 2D Gaussian and determine the deconvolved size and flux density of this source to be $\\rm 19.05~arcsec \\times 8.07~arcsec$ ($\\theta_{\\rm source} = 12.4~\\rm arcsec$; 0.12~pc) and $445\\,\\rm mJy$, respectively. Inserting these values in equations \\ref{Lyman_flux}, \\ref{e_no_density}, and \\ref{emission_measure}, we get $\\log N_{\\rm Ly} = 47.30$, $n_{\\rm e} = 3.9\\times10^3~\\rm cm^{-3}$ and EM $= 1.8\\times10^6~\\rm pc~cm^{-6}$. The ionized mass ($M_{\\rm ion} = \\frac{4}{3}\\pi r^{3} n_{\\rm e} m_{\\rm p}$, where $r$ is the radius of the source and $m_{\\rm p}$ is the proton mass) is also calculated to be $\\rm 0.09~M_{\\odot}$. \nThe estimated values of these physical parameters lie between the typical values for compact and UC{H\\,{\\scshape ii}~regions} \\citep{2002ASPC..267...81K,2005A&A...433..205M,2021A&A...645A.110Y}.\nHence, this radio source could well represent an intermediate evolutionary stage between a compact and an UC{H\\,{\\scshape ii}~region}, thus indicating a direct signature of triggered star formation at the tip of the pillar. An alternative picture for the bright radio emission at the head of the pillar could be external ionization by the ionizing front emanating from E4. Such externally ionized, tadpole-shaped structures have been studied in the Cygnus region, where the ionized front heads point towards the central, massive Cygnus OB2 cluster \\citep{2019A&A...627A..58I}. In support of the former scenario of the UC{H\\,{\\scshape ii}~region}, a bright and compact 5.8\\,$\\rm \\mu m$ emission region is seen that is co-spatial with the radio emission. This compact IR emission is seen in all IRAC bands and {\\it Herschel} images. While it is not listed as an IRAC point source in the GLIMPSE catalog, it is included in the PACS 70~$\\rm \\mu m$ point source catalog \\citep{2017arXiv170505693M}. There also exists a 2MASS counterpart (within $\\sim$3~arcsec), but it has been excluded from the YSO identification procedure owing to poor-quality photometry in one or more 2MASS bands. It is thus likely that the compact IR emission sampled in the {\\it Spitzer}-GLIMPSE and {\\it Herschel} images is the massive YSO powering the UC{H\\,{\\scshape ii}~region}.\n\nSeveral studies (e.g. \\citealt{2010ApJ...712..797B,2017MNRAS.470.4662P}) have shown evidence of star formation at the pillar tips in the form of jets, outflows, YSO populations, etc. \nThe driving mechanism for this triggered star formation, RDI, is initiated when the propagating ionizing front traverses the pillar head, creating a shell of ionized gas known as the ionized boundary layer (IBL). If the pressure of the IBL exceeds the internal pressure of the neutral gas within the pillar head, then shocks are driven into it. This leads to compression and subsequent collapse of the clump, leading to star formation. However, to comment further on the detected UC{H\\,{\\scshape ii}~region} at the tip of pillar `A' and link its formation to RDI, one needs to conduct a pressure balance analysis using molecular line data as discussed in \\citet{2017MNRAS.470.4662P} and \\citet{2013A&A...556A.105O}. These authors have used $\\rm ^{13}CO$ transitions for their analysis, which is not possible in our case as CO molecular line data with adequate spatial resolution are not available for this region of interest. Furthermore, attempting any study with the detected MALT90 transitions is difficult given the limited spatial resolution. 
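\nAs an aside, the ionized mass estimate quoted above for the compact radio source can be cross-checked with a short numerical sketch. The Python snippet below is purely illustrative; the distance of $\\sim$2~kpc used to convert the deconvolved angular size into a linear size is an assumed value for this illustration and may differ marginally from the distance adopted elsewhere in this work.\n\\begin{verbatim}\nimport math\n\n# Illustrative cross-check of the ionized gas parameters (assumed inputs).\npc_cm = 3.086e18          # parsec in cm\nm_p   = 1.6726e-24        # proton mass in g\nM_sun = 1.989e33          # solar mass in g\n\n# Deconvolved angular size from the 2D Gaussian fit (arcsec)\ntheta_maj, theta_min = 19.05, 8.07\ntheta_src = math.sqrt(theta_maj * theta_min)   # geometric mean, ~12.4 arcsec\n\n# Linear size assuming a distance of ~2 kpc (assumption for illustration)\ndistance_pc = 2000.0\nsize_pc = distance_pc * math.radians(theta_src \/ 3600.0)   # ~0.12 pc\nr_cm = 0.5 * size_pc * pc_cm                                # source radius, cm\n\n# Ionized mass M_ion = (4\/3) pi r^3 n_e m_p, with n_e from the radio analysis\nn_e = 3.9e3                                                 # cm^-3\nM_ion = (4.0 \/ 3.0) * math.pi * r_cm**3 * n_e * m_p \/ M_sun\nprint(round(theta_src, 1), round(size_pc, 2), round(M_ion, 2))\n# -> ~12.4 arcsec, ~0.12 pc, ~0.09 Msun, consistent with the values quoted above\n\\end{verbatim}\n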
In a recent study, \\citet{2020MNRAS.493.4643M} have carried out detailed hydrodynamical simulations to study pillar formation in turbulent clouds. As discussed by these authors, star formation triggered in pillar heads can be explained without invoking the RDI mechanism. Gravitational collapse of pre-existing clumps can lead to star formation without the need for ionizing radiation to play any significant role. From their simulations, they conclude that compressive turbulence driven in {H\\,{\\scshape ii}~regions}, which competes with the reverse process of photoevaporation of the neutral gas, ultimately dictates the triggering of star formation in these pillars. Further high-resolution studies are required to understand the nature of the compact radio and IR emission at the head of pillar `A'.\n\n\\subsubsection*{Dust clumps and the collect and collapse mechanism}\n\\label{clumps-CC}\nThe detection of clumps and the signature of fragmentation into cores are evident from the FIR and sub-mm dust continuum maps presented in Fig.~\\ref{dustclumps}. Investigating the collect and collapse hypothesis is necessary to determine whether the dust clumps are a result of accumulated swept-up material or are pre-existing entities. Towards this, we carry out a simple analysis and evaluate a few parameters such as the dynamical age ($t_{\\rm dyn}$) of the {H\\,{\\scshape ii}~region} and the fragmentation time ($t_{\\rm frag}$) of the cloud. \n\nAssuming that the {H\\,{\\scshape ii}~region} associated with IRAS 17149$-$3916 expands in a homogeneous cloud, the dynamical timescale can be estimated using the classical expressions from \\citet{1978ppim.book.....S} and \\citet{1980pim..book.....D},\n\\begin{equation}\nt_{\\rm dyn} = \\frac{4}{7}\\frac{R_{\\rm St}}{a_{\\rm Hii}}\\left [ \\left ( \\frac{R_{\\rm if}}{R_{\\rm St}} \\right )^{7\/4} - 1 \\right ] \n\\end{equation}\nwhere, $a_{\\rm Hii}$ is the isothermal sound speed, assumed to be 10~km s$^{-1}$, and $R_{\\rm if} = 1.8$~pc is the radius of the {H\\,{\\scshape ii}~region} determined from the geometric mean of an ellipse visually fit to encompass the ionized emission in the 610~MHz GMRT map. $R_{\\rm St}$ is the Str\\\"{o}mgren radius, given by the following equation\n\\begin{equation}\nR_{\\rm St} = \\left ( \\frac{3 N_{\\rm Ly}}{4 \\pi n^{2}_{0} \\alpha_{\\rm B}} \\right )^{1\/3} \n\\end{equation}\nwhere, $N_{\\rm Ly}$ is the Lyman continuum flux and $n_{0}$ is the initial particle density of the ambient gas. To derive $n_{0}$, we assume that the dense, bright region seen in the column density map is swept-up material due to the expansion of the {H\\,{\\scshape ii}~region} and that it was initially homogeneously distributed within the radius of the {H\\,{\\scshape ii}~region}. To estimate the mass of this swept-up material, we use equation \\ref{mass_eqn} and the column density map (see Section \\ref{dust_clump}). Integrating within the black contour shown in Fig.~\\ref{cdtchi}(a), the mass is estimated to be $1645~M_{\\odot}$. Taking the observed estimate of the radius, we calculate $n_{0}$ to be $9.5 \\times 10^{2}$ cm$^{-3}$. $\\alpha_{\\rm B}$ is the coefficient of radiative recombination and is determined using the expression \\citep{1980pim..book.....D}:\n\\begin{equation}\n\\alpha_{\\rm B} = 2.6\\times10^{-13} \\left ( \\frac{10^{4}\\textup{K}}{T_{\\rm e}} \\right )^{0.7} \\textup{cm}^{3} \\: \\textup{s}^{-1}\n\\end{equation}\nwhere, $T_{\\rm e} = 5000$~K is the electron temperature. 
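\nThe numerical evaluation of these expressions, carried out in the next paragraph, can be traced with the minimal Python sketch below. The Lyman continuum rate adopted here, $\\log N_{\\rm Ly} \\approx 48.6$, is an assumed value representative of an O6.5V--O7V star and is used only for illustration; the exact value used in this work follows from the radio analysis.\n\\begin{verbatim}\nimport math\n\n# Illustrative evaluation of the Stromgren radius and dynamical age\npc_cm = 3.086e18             # parsec in cm\nyr_s  = 3.156e7              # year in s\n\nN_Ly  = 10**48.6             # Lyman continuum rate, s^-1 (assumed, ~O6.5V-O7V)\nn_0   = 9.5e2                # initial ambient density, cm^-3\nT_e   = 5000.0               # electron temperature, K\na_HII = 10.0e5               # isothermal sound speed, 10 km\/s in cm\/s\nR_if  = 1.8 * pc_cm          # observed radius of the HII region, cm\n\nalpha_B = 2.6e-13 * (1.0e4 \/ T_e)**0.7                            # cm^3 s^-1\nR_St = (3.0 * N_Ly \/ (4.0 * math.pi * n_0**2 * alpha_B))**(1.0 \/ 3.0)\nt_dyn = (4.0 \/ 7.0) * (R_St \/ a_HII) * ((R_if \/ R_St)**1.75 - 1.0)\n\nprint(round(R_St \/ pc_cm, 2), round(t_dyn \/ (yr_s * 1.0e6), 2))\n# -> R_St ~ 0.44 pc and t_dyn ~ 0.2-0.3 Myr for these assumed inputs\n\\end{verbatim}\n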
Using the above parameters and the Lyman continuum flux corresponding to the spectral type of the ionizing source (O6.5V -- O7V), we estimate $t_{\\rm dyn}$ to be $\\sim$0.2~Myr.\n\nUsing the formalism discussed in \\citet{1994MNRAS.268..291W}, we next estimate the fragmentation time scale of the cloud, which can be written as\n\\begin{equation}\nt_{\\rm frag} = 1.56 \\: a^{7\/11}_{{\\rm s},0.2}\\: \\mathit{N^{-1\/11}_{{\\rm Ly},\\textup{49}}}\\: n^{-5\/11}_{0,3} \\: \\textup{Myr}\n\\end{equation}\nwhere, $a_{\\rm s} = a_{\\rm s,0.2} \\times 0.2$ km s$^{-1}$ is the speed of sound in the shocked layer and is taken as 0.3 km s$^{-1}$ \\citep{2017MNRAS.472.4750D}. $N_{\\rm Ly} = N_{\\rm Ly,\\textup{49}} \\times 10^{49}$ s$^{-1}$ is the ionizing photon flux and $n_{0} = n_{0,3}\\times 10^{3}$ cm$^{-3}$ is the initial particle density of the ambient gas. Plugging these values into the above expression, we estimate $t_{\\rm frag}$ to be $\\sim$2.2~Myr.\nComparing the estimates of the two time scales involved, it is seen that the fragmentation time scale is more than a factor of 10 larger than the dynamical time scale of the {H\\,{\\scshape ii}~region}. This essentially indicates that if the clumps detected are the result of swept-up material due to expansion of the {H\\,{\\scshape ii}~region}, then the shell has not had enough time to fragment, thus making the collect and collapse process highly unlikely here. Such a scenario has been invoked by \\citet{2012A&A...544A..39J} for the dust bubble N22. In contrast, \\citet{2017MNRAS.472.4750D}, in their investigation of the bubble CS51, found support for the collect and collapse hypothesis. Thus, further studies, as indicated earlier, are required to probe the RDI process not only with regard to the pillar structures but also to the detected clumps. \n\n\\subsection{Nature of the detected dust clumps and cores}\n\\subsubsection*{Virial analysis of the dust clumps}\nHere, we investigate the gravitational stability of the identified dust clumps associated with IRAS 17149$-$3916. This would enable us to determine whether these clumps are gravitationally bound or not. The virial mass, $M_{\\rm vir}$, of a dust clump is the amount of mass that can be supported against self-gravity purely by thermal and non-thermal gas motions. It is given by \\citet{2016MNRAS.456.2041C} as\n\\begin{equation}\nM_{\\rm vir} = \\frac{5\\ r\\ \\Delta V^2}{8\\ {\\rm ln}(2)\\ a_1\\ a_2\\ G} \\sim 209\\ \\frac{1}{a_1\\ a_2} \\left(\\frac{\\Delta V}{\\rm km\\ s^{-1}} \\right)^2\\ \\left(\\frac{r}{\\rm pc}\\right) M_{\\odot}\n\\end{equation}\nIn the above equation, $\\Delta V$ is the line width of the optically thin $\\rm N_2 H^+$ line, $r$ is the radius of the clump taken from Table \\ref{clump-param}, and the constant $a_1$ accounts for the correction for a power-law density distribution and is given by $a_1 = (1-p\/3)\/(1-2p\/5)$ for $p< 2.5$ \\citep{1992ApJ...395..140B}, where we adopt $p=1.8$ \\citep{2016MNRAS.456.2041C}. The constant $a_2$ accounts for the shape of the clump, which we assume to be spherical, and hence take $a_2$ as 1.\nWe also calculate the virial parameter, $\\alpha_{\\rm vir} = M_{\\rm vir}\/M_{\\rm clump}$. The estimated values of $M_{\\rm vir}$ and $\\alpha_{\\rm vir}$ are listed in Table \\ref{clump-param}. \nAs discussed in \\citet{2013ApJ...779..185K} and \\citet{2019ApJ...878...10T}, $\\alpha_{\\rm vir} = 2$ sets a lower threshold for gas motions to prevent collapse in the absence of magnetic fields and\/or external pressure. 
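\nAs a worked illustration of the virial expression above, the short sketch below evaluates $M_{\\rm vir}$ and $\\alpha_{\\rm vir}$ for representative inputs ($\\Delta V \\sim 3.3$~km\\,s$^{-1}$, $r \\sim 0.25$~pc, $M_{\\rm clump} \\sim 200~M_{\\odot}$); these numbers are illustrative stand-ins, whereas the values actually used are those listed in Table~\\ref{clump-param}.\n\\begin{verbatim}\n# Illustrative virial-mass estimate (representative, not tabulated, inputs)\np  = 1.8                                        # density power-law index\na1 = (1.0 - p \/ 3.0) \/ (1.0 - 2.0 * p \/ 5.0)    # ~1.43 for p = 1.8\na2 = 1.0                                        # spherical clump\n\ndV      = 3.3        # N2H+ line width, km\/s (illustrative)\nr_pc    = 0.25       # clump radius, pc (illustrative)\nM_clump = 200.0      # clump mass, Msun (illustrative)\n\nM_vir = 209.0 \/ (a1 * a2) * dV**2 * r_pc        # Msun\nalpha_vir = M_vir \/ M_clump\nprint(round(a1, 2), round(M_vir), round(alpha_vir, 1))\n# -> a1 ~ 1.43; M_vir of a few hundred Msun; alpha_vir of order unity\n\\end{verbatim}\n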
The virial parameter estimate for Clump 1 is $< 2$, indicating that it is gravitationally bound and hence likely to collapse. However, Clump 2 is marginally above this threshold, and Clump 3, which shows a signature of star formation in the form of an UC{H\\,{\\scshape ii}~region}, has a higher virial parameter value of 4.1. Similar values of $\\alpha > 2$ have been observed for protostellar and prestellar dense cores by \\citet{2019ApJ...878...10T}. These authors have used the $\\rm C^{18}O$ line and discuss the contribution from turbulence as a primary factor that would significantly affect the line width and hence overestimate the virial mass. While turbulence gets dissipated in the densest regions of molecular clouds and the $\\rm N_2 H^+$ line used here is a dense gas tracer, it is likely that the resolution of the MALT90 survey does not probe the inner dense cores and that the observed velocity dispersion is influenced by the outer, more turbulent regions. High-resolution molecular line observations are thus essential to probe the nature of the clumps. \n\n\\subsubsection*{Clump fragmentation and the detected cores}\nThe {\\it ALMA} 1.4~mm continuum map (Fig.~\\ref{dustclumps}) is seen to resolve the identified dust clumps into a string of cores, with masses ranging between $0.3 - 62.3~M_{\\odot}$ and radii $\\sim 0.01$~pc, thus indicating a scenario of hierarchical fragmentation. If we assume that fragmentation of the clumps is governed by thermal Jeans instability, then the initially homogeneous gas clump has a Jeans length and mass given by \\citet{2019ApJ...886..102S}\n\\begin{equation}\n \\lambda_J = \\sigma_{\\rm th} \\left ( \\frac{\\pi}{G\\rho} \\right )^{1\/2}\n\\end{equation}\nand \n\\begin{equation}\n M_J = \\frac{4\\pi\\rho}{3}\\left ( \\frac{\\lambda_J}{2} \\right )^3 = \\frac{\\pi^{5\/2}}{6}\\frac{\\sigma_{\\rm th}^3}{\\sqrt{G^3\\rho}}\n\\end{equation}\nwhere $\\rho$ is the mass density, $G$ the gravitational constant, and $\\sigma_{\\rm th}$ the thermal velocity dispersion (the isothermal sound speed), which is given by\n\\begin{equation}\n \\sigma_{\\rm th} = \\left ( \\frac{k_B T}{\\mu m_{\\rm H}} \\right )^{1\/2}\n\\end{equation}\nwhere $k_B$ is the Boltzmann constant and $\\mu$ the mean molecular weight. \nAs the thermal velocity dispersion will be dominated by $\\rm H_2$ and He, we consider $\\mu = 2.37$ \\citep{{2008A&A...487..993K},{2014MNRAS.439.3275W},{2019ApJ...886..102S}}. Using the clump parameters tabulated in Table \\ref{clump-param}, we estimate $\\lambda_J$ and $M_J$ of the clumps, which are listed in Table \\ref{clump-Jeans}. If turbulence drives the fragmentation instead, then the turbulent Jeans length and mass for each clump are derived by replacing the thermal velocity dispersion with the clump velocity dispersion estimated from the observed line width of the dense gas tracer $\\rm N_2 H^+$, which is a good approximation for the turbulent line width. The calculated $\\lambda_{\\rm turb}$ and $M_{\\rm turb}$ values are given in Table \\ref{clump-Jeans}. 
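\nTo make the entries of Table~\\ref{clump-Jeans} easier to trace, a minimal Python sketch of the Jeans estimates is given below. The gas temperature of $\\sim$25~K and the representative clump mass, radius, and turbulent dispersion are assumptions for illustration only; the tabulated values are computed from the actual clump parameters of Table~\\ref{clump-param}.\n\\begin{verbatim}\nimport math\n\n# Illustrative Jeans analysis of a clump (assumed representative inputs)\nG     = 6.674e-8             # cm^3 g^-1 s^-2\nk_B   = 1.381e-16            # erg K^-1\nm_H   = 1.6726e-24           # g\npc_cm = 3.086e18\nM_sun = 1.989e33\n\nT, mu         = 25.0, 2.37   # gas temperature (assumed) and mean molecular weight\nM_clump, r_pc = 200.0, 0.25  # illustrative clump mass (Msun) and radius (pc)\nsigma_turb    = 1.5e5        # turbulent dispersion, ~1.5 km\/s (illustrative)\n\nrho = M_clump * M_sun \/ ((4.0 \/ 3.0) * math.pi * (r_pc * pc_cm)**3)  # g cm^-3\nsigma_th = math.sqrt(k_B * T \/ (mu * m_H))                           # cm s^-1\n\ndef jeans(sigma):\n    lam  = sigma * math.sqrt(math.pi \/ (G * rho))                    # Jeans length, cm\n    mass = (math.pi**2.5 \/ 6.0) * sigma**3 \/ math.sqrt(G**3 * rho)   # Jeans mass, g\n    return round(lam \/ pc_cm, 2), round(mass \/ M_sun, 1)\n\nprint(jeans(sigma_th))      # thermal:   ~(0.15 pc, ~5 Msun)\nprint(jeans(sigma_turb))    # turbulent: ~(0.7 pc, several hundred Msun)\n\\end{verbatim}\n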
\n\\begin{table*}\n\\caption{Hierarchical fragmentation of the clumps associated with IRAS 17149$-$3916}\n\\begin{center}\n\\centering\n\\begin{tabular}{ccccccc}\n\\hline\nClump & $\\sigma_{\\rm th}$ & $\\sigma^a$ & $\\lambda_J$ & $M_J$ & $\\lambda_{\\rm turb}$ & $M_{\\rm turb}$ \\\\\n & ($\\rm km\\,s^{-1}$) & ($\\rm km\\,s^{-1}$) & (pc) & ($M_{\\odot}$) & (pc) & ($M_{\\odot}$) \\\\ \n\\hline\n1 & 0.3 & 1.4 & 0.2 & 6.8 & 0.8 & 543 \\\\\n2 & 0.3 & 1.6 & 0.2 & 5.3 & 0.9 & 807 \\\\\n3 & 0.3 & 1.7 & 0.1 & 5.6 & 0.8 & 819 \\\\\n\\hline\n\\end{tabular}\n\\label{clump-Jeans}\n\\end{center}\n$^a$ $\\sigma = \\Delta V\/\\sqrt{8\\,{\\rm ln}2}$; $\\Delta V$ being the $\\rm N_2 H^+$ line width.\n\\end{table*}\nThe turbulent Jeans masses are $\\sim 80 - 150$ times larger than the thermal Jeans masses. \nComparing with the derived core masses, it is seen that 11 out of the 15 detected cores ($\\sim 73\\%$) in the three clumps have masses less than the thermal Jeans mass. This suggests that the observed cores are consistent with the prediction of Jeans fragmentation without invoking turbulence, indicating that turbulence does not play a significant role in the fragmentation process. Similar results are obtained by \\citet{2019ApJ...886..102S}, who studied 70\\,$\\rm \\mu m$ dark massive clumps in early stages using {\\it ALMA} data. As discussed by these authors, the majority of the detected cores having masses less than the thermal Jeans mass supports the competitive accretion and hierarchical fragmentation frameworks. The four cores whose masses exceed the Jeans mass (the `super-Jeans' cores) are suitable candidates for forming high-mass stars. However, further high-resolution observations are essential to completely understand the fragmentation process, if any, at the core level. \n\n\\begin{figure*}\n\\centering \n\\includegraphics[scale=0.6]{Fig10.eps}\n\\caption{Masses of the dense cores identified from the 1.4\\,mm {\\it ALMA} maps and the cold dust clumps identified using the 350\\,$\\rm \\mu m$ \\textit{Herschel} map of IRAS 17149$-$3916 are plotted as a function of their effective radii, depicted by circles and `$\\star$'s, respectively. The shaded area corresponds to the low-mass star-forming regime that does not satisfy the condition $M > 870\\,M_\\odot(r\/\\rm pc)^{1.33}$ \\citep{2010ApJ...723L...7K}. Black-dashed lines indicate the surface density thresholds of 0.05 and 1\\,$\\rm g\\,cm^{-2}$ defined by \\citet{2014MNRAS.443.1555U} and \\citet{2008Natur.451.1082K}, respectively. The red lines represent the surface density thresholds of 116\\,$M_\\odot\\,\\rm pc^{-2}$ ($\\sim 0.024\\,\\rm g\\,cm^{-2}$) and 129\\,$M_\\odot\\,\\rm pc^{-2}$ ($\\sim 0.027\\,\\rm g\\,cm^{-2}$) for active star formation proposed by \\citet{2010ApJ...724..687L} and \\citet{2010ApJ...723.1019H}, respectively.}\n\\label{alma_cores_mass_radius}\n\\end{figure*}\nIn Fig.~\\ref{alma_cores_mass_radius}, we plot the estimated masses and radii of the identified clumps and cores. The plot also compiles several surface density thresholds proposed by various studies to identify clumps\/cores with efficient and active star formation \\citep{2010ApJ...724..687L,2010ApJ...723.1019H,2014MNRAS.443.1555U}. In addition, criteria for these to qualify as high-mass star-forming ones are also included. All the detected clumps and cores associated with IRAS 17149$-$3916 are seen to lie in the active star-forming regime. 
The three clumps satisfy the empirical mass-radius criterion, $M > 870\\,M_\\odot(r\/\\rm pc)^{1.33}$, defined by \\citet{2010ApJ...723L...7K}, and hence are likely to harbour massive star formation. At the core scale ($\\rm < 0.1~pc$), \\citet{2008Natur.451.1082K} have proposed a theoretical surface density threshold of $\\rm 1\\,g\\,cm^{-2}$, below which cores would be devoid of high-mass star formation. From the figure, we see that there are four cores (2 in Clump 1, 1 each in Clumps 2 and 3) which have masses $\\gtrsim 10~M_{\\odot}$ and lie above this surface density limit. These are the `super-Jeans' cores discussed above. \nHigh-resolution molecular line observations are essential to shed better light on the nature of the cores and the gas kinematics involved, and for an accurate determination of physical parameters like temperature and mass.\n\n\\subsection{Conclusion}\n\\label{conclusion}\nUsing multiwavelength data, we have carried out a detailed analysis of the region associated with IRAS 17149$-$3916. The important results of this study are summarized below.\n\n\\begin{enumerate}\n\\item Using the GMRT, we present the first low-frequency radio continuum maps of the region, observed at 610 and 1280~MHz. The {H\\,{\\scshape ii}~region}, previously believed to be nearly spherical, displays a large-extent cometary morphology. The origin of this morphology is not explained by the bow shock model. The presence of dense clumps towards the cometary head indicates either the champagne flow or the clumpy cloud model, but further observations of the ionized gas kinematics are essential to understand the observed morphology. \n\n\\item The integrated flux densities yield an average spectral index value of $-0.17\\pm0.19$, consistent with thermal {\\it free-free} emission. If powered by a single massive star, the estimated Lyman continuum photon flux suggests an exciting star of spectral type O6.5V -- O7V. \n\n\\item NIR colour-magnitude and colour-colour diagrams show the presence of a cluster of massive stars (earlier than spectral type B0) located within the bright, central radio emitting region. MIR and FIR images show complex and interesting features like a bubble, pillars, clumps, filaments, and arcs, revealing the profound radiative and mechanical feedback of massive stars on the surrounding ISM. \n\n\\item The spatial location of the source E4 (IRS-1) and the orientation of the observed pillar structures with respect to it strongly suggest it as the dominant driving source for the cometary {H\\,{\\scshape ii}~region}. This view finds support from the position of E4 in the colour-magnitude and colour-colour diagrams. Further, its spectral type estimate from the literature agrees well with that estimated for the exciting source of the {H\\,{\\scshape ii}~region} from the GMRT data. \n\n\\item The column density map reveals the presence of dust clumps towards the cometary head, while the dust temperature map appears relatively patchy, with regions of higher temperature within the radio nebula. The dust clumps identified using the \\textit{Herschel} 350\\,$\\rm \\mu m$ map have masses ranging between $\\sim$100 - 300~$\\rm M_\\odot$ and radii $\\sim$0.2 - 0.3~pc. Virial analysis using the $\\rm N_2 H^+$ line shows that the south-east clump (\\#1) is gravitationally bound. 
For the other two clumps (\\#2 and 3), the line widths possibly have contributions from turbulence, thus rendering larger values of the virial parameter.\n\n\\item A likely compact\/UC{H\\,{\\scshape ii}~region} is seen at the tip of a pillar structure oriented towards the source E4, thus suggesting evidence of triggered star formation under the RDI framework. In addition, the detected dust clumps are investigated to probe the collect and collapse model of triggered star formation. The estimated dynamical time scale is seen to be smaller by a factor of $\\sim$10 compared to the fragmentation time scale of the clumps, thus rendering the collect and collapse mechanism highly unlikely to be at work here. \n\n\\item The {\\it ALMA} 1.4~mm dust continuum map probes the dust clumps at higher resolution and reveals the presence of 17 compact dust cores with masses and radii in the ranges $0.3 - 62.3~M_\\odot$ and 0.01 -- 0.03~pc, respectively. The largest and most massive core is located within Clump 3. The estimated core masses are consistent with thermal Jeans fragmentation and support the competitive accretion and hierarchical fragmentation scenario. \n\n\\item Four `super-Jeans' fragments are detected; these are suitable candidates for forming high-mass stars, and their mass and radius estimates satisfy the various thresholds defined in the literature for potential high-mass star-forming cores.\n\n\\end{enumerate}\n\n\\section*{Acknowledgements}\nWe would like to thank the referee for comments and suggestions which helped in improving the quality of the manuscript. We thank the staff of the GMRT who made the radio observations possible. GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. The authors would like to thank Dr. Alvaro S\\'{a}nchez-Monge for providing the FITS images of the radio maps presented in \\citet{2013A&A...550A..21S}. CHIC acknowledges the support of the Department of Atomic Energy, Government of India, under the project 12-R\\&D-TFR-5.02-0700. This work is based [in part] on observations made with the {\\it Spitzer} Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. This publication also made use of data products from {\\it Herschel} (ESA space observatory). This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center\/California Institute of Technology, funded by NASA and the NSF. This work makes use of the ATLASGAL data, which is a collaboration between the Max-Planck-Gesellschaft, the European Southern Observatory (ESO) and the Universidad de Chile. This paper makes use of the following ALMA data: ADS\/JAO.ALMA\\#2016.1.00191.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI\/NRAO and NAOJ. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. \n\n\\addcontentsline{toc}{section}{Acknowledgements}\n\\section*{Data Availability}\nThe original data underlying this article will be shared on reasonable request to the corresponding author.\n\n\n\\bibliographystyle{mnras}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}