diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzcmkq" "b/data_all_eng_slimpj/shuffled/split2/finalzzcmkq" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzcmkq" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nCoronal mass ejections (CMEs), flares and jets are the major forms of eruptions in solar activities, and the physical mechanisms of their trigger and driver are an important research topic in solar physics. Numerous observational studies have reported that these eruptive activities frequently occur in solar active regions, and it is generally believed that the core structure of the pre-eruptive field is in the form of either a twisted flux tube, i.e., a magnetic flux rope (MFR) or a strongly sheared magnetic arcade \\citep{Green2011PhotosphericFC, Patsourakos2013}. The entire pre-eruption configuration consists of the core field (either an MFR or a sheared arcade) and an envelope field (overlying field) that confines the core field, while eruptions occur when some kind of instabilities destabilize their force balance \\citep{Archontis2012}.\n\nIt is currently accepted that solar active regions are formed by magnetic flux emergence, the process of magnetic fields generated by solar dynamo entering the solar atmosphere from the depths of the convection zone, which is also considered to be one of the key mechanisms in producing solar eruptive activity \\citep{DG2015EAR, ChenP2011}. Although the emerging magnetic field has been thought to be sufficient in itself to generate an eruption \\citep{Dmoulin2002WhatIT, Nindos2003}, in many cases it acts as a trigger for a pre-existing eruptive configuration \\citep{Feynman1995TheIO, Williams2005EruptionOA}. In a stable pre-eruption configuration, the upward magnetic pressure of the internal flux rope is in equilibrium with the downward tension of the envelope field \\citep{Archontis2012, Leake2013}. When a new flux emerges in the vicinity of the pre-existing eruption configuration, their interaction causes magnetic reconnection that could reduce the tension of the envelope field and lead to the eruption \\citep{Chen2000}. There are two possible ways of reconnection operating in this process, which are tether-cutting \\citep{Moore1992} and breakout reconnection \\citep{Antiochos1999}.\nIn other cases, the pre-eruption configuration is associated with the ideal instability. Continuous flux emergence may push the magnetic configuration higher, and when the envelope field decays too fast with height, the MFR will run into the torus instability and erupt \\citep{Kliem2006}. Flux emergence can also increases the degree of twist of the MFR, and when a certain value is exceeded, it triggers kink instability and an eruption \\citep{Anzer1967, Torok2004}.\n\nSince without a direct observational probe of the dynamics of magnetic flux emergence from below the solar surface (i.e., the photosphere), many efforts have been devoted to numerical magnetohydrodynamic (MHD) simulations of the flux emergence. As pioneered by the early work of Shibata and colleagues \\citep{Shibata1989}, a large number of works of flux emergence simulation (FES) have been carried out, in particular, for mimicking simulations of a twisted flux tube emergence into the solar atmosphere \\citep{ Morenoinsertis1996TheRO, Fan2001, Magara2001a, Arber2001a, Manchester2004Emergence, Archontis2004, Murray2006, Leake2006, Toriumi2010, Cheung2014, Syntelis2017, Toriumi2019, Fan2021}. 
These simulations have successfully reproduced some of the observed phenomena, such as the vortical motion of the emerging polarities on the photosphere and the sigmoid-shaped coronal MFR, and these comparative results confirm the reliability of MHD simulations. \n\nSimulating a flux tube emerging from the convection zone into the corona (and further studying how it erupts) requires the numerical model to incorporate the highly stratified solar plasma including all the different layers from below the surface, to the photosphere, the chromosphere, the transition region, and the corona, which have physical behaviors rather different from each other. Therefore, a major challenge in self-consistent simulations of magnetic flux emergence through to its eruption is to resolve the multiple spatial and temporal scales in a single model. For example, near the photosphere, the scale heights (of gas pressure) are only about one hundred kilometers, and the gas density varies by more than eight orders of magnitude within a few megameters, while in the corona the scale height is tens of megameters, i.e., nearly three orders of magnitude larger than the photospheric one. On the contrary, the time scales of evolution in the photosphere and below, in which the magnetic field is controlled mainly by the plasma, are much longer than those in the corona, in which the plasma is controlled by the magnetic field. Thus, most of the current 3D FESs choose to use relatively small computational domains of a few tens of megameters in the three spatial directions and short time durations of, typically, a few hours. The burden of computational resources would be too heavy if one wanted to simulate the long-term (e.g., days) evolution of an active-region-sized domain (e.g., hundreds of megameters). \n\nThe motivation of this paper is to develop a new numerical model of magnetic flux emergence by using our AMR--CESE--MHD code~\\citep{Jiang2010}, in particular, utilizing the features of adaptive mesh refinement \\citep[AMR,][]{BERGER198964}. The technique of AMR has developed rapidly in computational fluid dynamics and is becoming a standard tool for treating problems with multiple orders of spatial or temporal scales, which fits the FES problem well. By automatically adapting the computational mesh to the solution of the governing partial differential equations (PDEs), methods based on AMR can assign more mesh points to regions demanding high resolution (e.g., high-gradient regions) and, at the same time, give fewer mesh points to less interesting regions (low-gradient regions), thereby providing the required spatial resolution while minimizing memory requirements and CPU time. Although many classical numerical MHD solvers based on either finite-difference or finite-volume methods have been used in previous FESs, such as the ZEUS--3D code \\citep{Stone2008}, the modified Lax-Wendroff method \\citep{Magara2001a, Toriumi2010}, and the Lagrangian remap scheme \\citep[Lare3d,][]{Arber2001a}, few of these FESs have been implemented with AMR. There are only two simulations that used AMR \\citep{Cheung2006, Martinez-Sykora2015}, but both of these early simulations only studied the evolution of the flux tube below the photosphere, and in \\cite{Cheung2006} the simulation is carried out in 2.5D rather than 3D. 
On the other hand, the CESE method is distinct from the classical numerical methods of the finite-difference or finite-volume schemes, as it is mathematically much simpler, requiring no Riemann solver or eigen-decomposition, yet can achieve higher accuracy at equivalent grid points, which is also desirable for the FES. The AMR--CESE--MHD code has achieved many excellent results in other simulations, such as in analysis of the fundamental initiation mechanism of solar eruptions \\citep{Jiang2021,Bian2022}, data-driven modeling of active region evolution and eruptions \\citep{Jiang2016NC, Jiang2021b, Jiang2022DatadrivenMO}, and solar wind modelling \\citep{Feng2012SoPh}.\n\nIn this paper, we report the first step of applying the AMR--CESE--MHD code to FES, by simulating the emergence of a twisted flux tube in a simply stratified solar atmosphere from the convection zone to the corona. In the following, Section \\ref{sec:method} describes the details of the model and numerical methods. In Section \\ref{sec:ressult}, we show the process and key features of the 3D magnetic flux emergence, which is overall consistent with previous FESs. In Section \\ref{sec:sum}, we summarize and give an outlook for future study based on the new FES model.\n\n\n\\section{Model}\n\\label{sec:method}\n\n\\subsection{Initial conditions}\n\\label{sec:init}\nThe initial settings of our model are similar to those used in typical\nsimulations of the emergence of a twisted flux tube from below the\nphotosphere to the corona, and in particular the parameters are mostly\nclose to the values used in \\citet{Fan2009}. The simulation volume is\na Cartesian box of $-14.4$~Mm $\\le x \\le 14.4$~Mm, $-14.4$~Mm\n$\\le y \\le 14.4$~Mm and $0 \\le z \\le 28.8$~Mm, where the $z$ axis\nis the height with $z=0$ denoting the lower boundary, which lies at a depth of $4.5$~Mm below the photosphere.\n\nThe initial conditions consist of a plasma in hydrostatic equilibrium stratified\nby solar gravity with a characteristic temperature profile from the\ntop layer of the convection zone to the corona, which is given by a\npiece-wise function of height\n\n\\begin{equation}\\label{eq:T}\n T(z) = \\left\\{ \\begin{array} {r@{\\quad:\\quad}l}\n T_{\\rm ph} -\\frac{\\gamma -1}{\\gamma}\\frac{g_0}{R} (z-z_{\\rm ph}) & z \\le z_{\\rm ph} \\\\\n T_{\\rm ph} & z_{\\rm ph} < z \\le z_{\\rm ch} \\\\\n T_{\\rm ph}\\left( \\frac{T_{\\rm cor}}{T_{\\rm ph}}\n \\right)^{\\frac{z-z_{\\rm ch}}{z_{\\rm cor}-z_{\\rm ch}}} & z_{\\rm ch} < z \\le z_{\\rm cor}\n \\end{array} \\right.\n\\end{equation}\n\nThe grid blocks are refined where the criteria $\\ldots > 1\\times 10^{-3}$ and $p_{B}\/p > 1\\times 10^{-3}$ are satisfied. {Figure}~\\ref{Fig:AMR} shows the evolution of the distribution of blocks (each containing $8^3$ cells) with these criteria applied during the simulation. We use three levels of AMR with the highest resolution of $45$~km and the lowest resolution of $180$~km. We employ the PARAMESH software package~\\citep{MacNeice2000} to manage the AMR procedure and the parallel computing.\n\n\nSince the spatial resolutions and the wave speeds of blocks within the\ncomputational domain vary significantly, the timesteps computed using\na fixed Courant number $ C \\sim 1$,\n$\\Delta t = C \\Delta_{\\rm c}\/w$, where $w$ is the maximal wave\nspeed in the block, will also vary significantly. A simple way is to\nuse a uniform timestep for all the blocks, which is defined as\n$\\Delta t_{\\rm g} = C \\Delta_{\\min}\/w_{\\max}$, where\n$\\Delta_{\\min}$ is the highest resolution and $w_{\\max}$ is the maximal\nwave speed in the entire computational domain. 
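\nTo make these definitions concrete, the following minimal sketch (with purely hypothetical block resolutions and wave speeds standing in for the simulation data) evaluates the per-block timesteps and the local Courant numbers that result from enforcing the uniform global timestep on every block:\n\\begin{verbatim}\nimport numpy as np\n\n# Hypothetical per-block cell sizes (km) and maximal wave speeds (km\/s),\n# e.g., fine coronal blocks (fast) vs. coarse subsurface blocks (slow).\ndelta_c = np.array([45.0, 90.0, 180.0])\nw = np.array([200.0, 50.0, 8.0])\nC = 1.0  # fixed Courant number\n\ndt_block = C * delta_c \/ w               # per-block timestep, dt = C*Delta_c\/w\ndt_global = C * delta_c.min() \/ w.max()  # uniform timestep, dt_g = C*Delta_min\/w_max\n\n# Local Courant number under the global timestep, C_l = w*dt_g\/Delta_c,\n# which is much smaller than unity on coarse and slow blocks.\nC_local = w * dt_global \/ delta_c\nprint(dt_block, dt_global, C_local)\n\\end{verbatim}\n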
However, using this uniform timestep will\nincrease significantly the numerical diffusion on the coarser blocks\nand in the low-wave-speed areas, which is especially evident when contrasting the\nwave speeds (mainly the sound speed) in the photosphere and in the\ncorona, since the local timestep $\\Delta t$ is much larger than the\nglobal one $\\Delta t_{\\rm g}$, or in other words, the local Courant\nnumber defined as\n${C}_{\\rm l} = w\\Delta t_{\\rm g}\/\\Delta_{\\rm c} $ is much smaller than unity. This problem is especially serious for the CESE scheme, which is\nsensitive to the local Courant number. To overcome this problem, we\nuse time marching with a block-based variable timestep, in which\ndifferent timesteps are used for different blocks, with the timesteps\ndefined as $\\Delta t = {C} \\Delta_{\\rm c}\/w_{\\max}$ and thus directly\nproportional to the resolutions of the blocks. Furthermore, we use the\nCourant-number-insensitive (CNIS) approach~\\citep{Chang2005}, which can reduce\nthe numerical dissipation substantially in the case that the local\nCourant number is small.\n\n\\section{Result}\n\\label{sec:ressult}\n\n\\subsection{General evolution}\n\\label{sec:overview}\n\n\\begin{figure*}[htbp]\n \\center\n \\includegraphics[width=18cm]{fig4-eps-converted-to}\n \\caption{(a-l) Three perspective views of the 3D structure and evolution of the magnetic flux tube during the emergence. (m-p) The iso-surface of the flux tube with magnetic field strength $B = 0.1B_0$. }\n \\label{Fig:Bline}\n\\end{figure*}\n\nThe whole process of the subsurface twisted magnetic flux tube emerging into the atmosphere is consistent with previous simulations \\citep{Fan2001, Fan2009, Manchester2004Emergence, Archontis2004, Magara2004, Murray2006, Leake2006, Leake2013, Syntelis2017}. The middle section of the flux tube starts to rise upward from the convection zone due to the magnetic buoyancy caused by its density deficit, while the two ends of the tube sink slightly because of artificial anti-buoyancy. The middle part of the tube continues to rise and expand with height until its apex touches the surface. Then the accumulation of the magnetic field under the surface triggers the magnetic buoyancy instability, allowing part of the flux to enter the photosphere\/chromosphere and expand rapidly in the corona.\n\n\\begin{figure*}[htbp]\n \\center\n \\includegraphics[width=18cm]{fig5-eps-converted-to}\n \\caption{(a-c) The evolution of the $z$-component of the magnetic field ($B_z$, color) and the tangent velocity (arrows) on the surface. (d-f) The distribution of the shear angle $\\theta$ of the emerging magnetic field on the central vertical plane ($x = 0$). (g-i) The yellow and black lines are the same as in Fig.~\\ref{Fig:Bline}, and the blue and red lines are the coronal MFR at the position of minimal $\\theta$ on the central vertical plane.}\n \\label{Fig:theta}\n\\end{figure*}\n\n{Figure}~\\ref{Fig:Bline}(a-l) shows three perspective views of the 3D structure and evolution of the magnetic flux tube during the emergence. The black line in these panels, which represents the axis of the initial flux tube, is obtained by tracing the O-point ($B_{\\theta}$ minimum) on a vertical cross section of the flux tube at different times. Here the cross section is selected to be the right $x$ boundary, since at its two ends the flux tube evolves much more slowly and is more regular than its middle part that emerges into the atmosphere. 
The yellow lines are the field lines through four points evenly distributed on this cross section at a small radial distance of $0.02 L_s$ from the O-point. Note that the two ends of the flux tube also expand and evolve (but very slightly) during the emergence of its central portion. Therefore, these field lines are not exactly the same set of field lines in the different panels (or times). Nevertheless, they are a good approximation of the same set of field lines and can reflect the topology of the magnetic field and its evolution. The horizontal slice in each panel represents the solar surface and the color indicates the $z$-component of the magnetic field ($B_z$).\n\nThe first column of {Figure}~\\ref{Fig:Bline} is the snapshot at $t=10$, when the middle of the flux tube has bulged into an $\\Omega$-shape, and then at $t=15$ (the second column in {Figure}~\\ref{Fig:Bline}), the front of the $\\Omega$-shaped flux has emerged into the atmosphere with a simple arcade configuration, and the central axis field line (black line) takes a weakly forward S-shape. As time goes on, the emerging flux rapidly expands into the higher corona while the magnetic field structure becomes more complex, and eventually more flux emerges, forming a strongly reversed S-shaped, i.e., sigmoid-shaped, magnetic structure.\n\n\\begin{figure}[htbp]\n \\center\n \\includegraphics[width=8.5 cm]{fig6-eps-converted-to}\n \\caption{The evolution of magnetic energy ($E_{B}$, top panel), magnetic energy flux ($F_{EB}$, middle panel) and unsigned magnetic flux ($F_{Bu}$, bottom panel). The solid black line in the top panel indicates the total magnetic energy and the dashed line indicates the magnetic energy injected through the surface, which is the sum of the shear term and the emergence term of $E_{B}$. The blue and red lines denote the shear term and the emergence term, respectively.}\n \\label{Fig:energy}\n\\end{figure}\n\n{Figure}~\\ref{Fig:Bline}(m-p) shows the iso-surface of the flux tube with magnetic field strength $B = 0.1B_0$. At $t=10$, the apex of the convex part of the flux tube has reached the height of the surface. Then at $t=15$, part of the magnetic field has entered the atmosphere in a flattened spherical shape, which indicates that the lateral expansion of the emerging flux is faster than the vertical expansion. With the emergence of flux, the coronal magnetic field also expands wider and higher, eventually forming a ``mushroom\" shape.\n\n\\subsection{Vortical and shearing motion}\n\nThe gradual separation of the two photospheric magnetic polarities as the flux tube emerges can be observed on the horizontal slice (surface) in {Figure}~\\ref{Fig:Bline}. {Figure}~\\ref{Fig:theta}(a-c) shows the evolution of the tangent velocity on this slice. These snapshots reveal counterclockwise vortical and shearing motion in each polarity as the flux emerges. It has been suggested that this vortical motion is caused by the difference in the degree of twist $q$ between the subsurface flux tube and the emerged field \\citep{Fan2009, Longcope2000AMF}. 
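\nFor reference, for a cylindrically symmetric flux tube with axial field $B_x(r)$ and azimuthal field $B_{\\theta}(r)$, the degree of twist is conventionally parameterized as\n\\begin{equation}\n q = \\frac{B_{\\theta}}{r B_x},\n\\end{equation}\ni.e., the rotation angle of a field line about the axis per unit axial length (we quote the standard parameterization used in, e.g., \\citet{Fan2009}; the initial tube in our setup is assumed to follow the same convention).\n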
The expansion and stretching of the emerged flux in the corona causes its $q$ to decrease rapidly, and the vortical motion of the two polarities transports the twist of the subsurface flux tube into the atmosphere until the $q$-value equilibrates.\n\nDuring the evolution of the coronal magnetic field, the combined effect of the vortical motion of the two polarities and the shearing flow distorts the field lines of the emerging flux, turning it from the initial forward S-shape to a reverse S-shape. The photospheric shearing flow squeezes the bottom of the coronal magnetic field toward its middle, and it has been suggested that magnetic reconnection occurs directly under the sheared field to produce a coronal MFR \\citep{Fan2009}. {Figure}~\\ref{Fig:theta}(d-f) shows the distribution of the shear angle $\\theta$ (indicating the angle between the magnetic field and the $y-z$ plane) of the emerging magnetic field on the central vertical plane ($x = 0$). We find that the distorted magnetic field gradually separates from the magnetic field that remains below the photosphere, eventually forming a coronal magnetic structure with a sigmoid-shaped MFR as its inner core. The newly formed coronal MFR at $t=26$ is shown in {Figure}~\\ref{Fig:theta}(g-i) (blue and red lines). \n\nThe shearing motion of the polarities provides an important way for the magnetic energy to enter the atmosphere through the photosphere, along with the direct upward injection of magnetic field. To quantify the different contributions from these effects, we calculated the total magnetic field energy above the photospheric surface ($z = 0.39$) as well as the Poynting flux through the surface for the shear term and the vertical injection term (or emergence term), respectively, using the formulas derived in \\citet{Kusano2002} and \\citet{SolarPh2003SoPh}. As can be seen in {Figure}~\\ref{Fig:energy}, the total magnetic energy above the photosphere first increases quite fast in time from $t=10$ to $15$, in agreement with the fast increase of the unsigned magnetic flux through the photosphere. After that, the growth of the total magnetic energy slows down and eventually saturates near the end of the simulation (the top panel of {Figure}~\\ref{Fig:energy}). The mismatch between the total magnetic energy and the injected magnetic energy means that the contribution by the dissipation and reconnection of magnetic fields is significant at the later phase, accounting for 22.4 $\\%$ of the injected magnetic energy. In the middle panel of {Figure}~\\ref{Fig:energy}, the early injection of magnetic energy is contributed mainly by the emergence term, which however decays quickly after $t=13$, and afterwards the shear term dominates. At the later stage of emergence, i.e., when the unsigned magnetic flux has nearly saturated, the emergence term has decreased to a value close to and even below zero at the end of the simulation. This suggests that a small submergence of the magnetic energy occurs. The shear term also decays, but at a much slower rate than that of the emergence term. The net contribution of these two terms eventually stabilizes the total magnetic energy. This is consistent with the simulation of \\citet{Magara2003}.\n\n\\subsection{Two-step emergence}\n\nOur simulation agrees with many previous simulations in that the emergence of flux from the convection zone to the corona experiences a two-step process, known as the ``two-step emergence\" mode \\citep{Matsumoto1993, Magara2001}. 
The first step is the rise of the flux tube in the convection zone by magnetic buoyancy. During this period, the rising speed of the flux tube initially increases and then decelerates as the flux approaches the surface. The second step is the evolution of the emerging field into the atmosphere. \\citet{Toriumi2010} tested the effect of the amount of flux and the initial field strength on the flux emergence in two-dimensional numerical simulations, dividing the results into ``two-step emergence\", ``direct emergence\" and ``failed emergence\". Direct emergence means that the rise of the flux tube is not reduced before breaking through the photospheric surface. Failed emergence means that the flux tube eventually fragments in the convection zone and cannot enter the atmosphere. The work of \\citet{Murray2006} shows that the twist degree $q$ is also a factor affecting the emergence of the flux tube, with larger values of $q$ favoring emergence. \\citet{Toriumi2011} point out that the critical value for failed emergence is $q = 0.05$ in 2D simulations.\n\n{Figure}~\\ref{Fig:trace} shows the evolution of the height (top panel) and velocity (bottom panel) of the apex of the flux tube and the two O-points in the convection zone and corona on the central vertical plane ($x = 0$). Here the apex (black line) is defined to be the highest point where the magnetic field strength $B$ is greater than $0.1B_0$. The evolution of the magnetic flux tube on its middle section is more complicated than that at the boundary, since there are multiple positions of very small $B_{\\theta}$ generated during emergence, and thus the center of the magnetic flux tube on this plane cannot be defined in the same way as in {Figure}~\\ref{Fig:Bline}. We take the location with the largest $B_x$ among the minimal-$B_{\\theta}$ positions on this cross section as the O-point of the flux tube in the convection zone, while the highest position with minimal $B_{\\theta}$ is taken as the O-point of the coronal MFR; they are denoted as the ${\\rm O_{\\rm con}}$ and ${\\rm O_{\\rm cor}}$ points, respectively.\n\nThe velocity at the apex of the flux tube (black line) in the bottom panel of {Figure}~\\ref{Fig:trace} undergoes a process of increase, then decrease, and again increase. The position where the velocity decreases is near the solar surface ($z = 0.39$); thus our simulation belongs to the ``two-step emergence\" as defined in \\citet{Toriumi2010}. The difference in position between the red and black lines in the top panel of {Figure}~\\ref{Fig:trace} can also reflect the initial slow rise, the flux pileup near the photosphere, and the rapid expansion of the upper part of the flux tube in the corona. {Figure}~\\ref{Fig:Bline} (m-p) shows that the emerging magnetic field exhibits a significant horizontal expansion, which is one of the key features of the ``two-step emergence\". However, in the top panel of {Figure}~\\ref{Fig:trace}, the ``pileup\" of the apex of the flux tube (black line) near the surface is not obvious, and we attribute this to the relatively large $q$ and $B_0$.\n\nIn the first step of emergence, the buoyancy of the flux tube is suppressed near the photosphere due to the convective stability of the stratification there, which has a much smaller temperature gradient than that required for convective instability \\citep{Cheung2007}. 
Consequently, more and more magnetic flux, with its frozen-in plasma, rises from below and accumulates near the photosphere, eventually resulting in an unstable configuration in which the heavy plasma (as supported by the magnetic pressure gradient) overlies the lighter flux tube. Such an unstable configuration is subject to the magnetic buoyancy instability \\citep{Matsumoto1993}. \\citet{Archontis2004} and \\citet{Hood2012b} have given the following critical condition for this instability\n\n\\begin{figure}[htbp]\n \\center\n \\includegraphics[width=8cm]{fig7-eps-converted-to}\n \\caption{The evolution of the height (top panel) and velocity (bottom panel) of the apex of the flux tube and the two O-points in the convection zone and corona on the central vertical plane ($x = 0$). The dashed line in the top panel indicates the height of the surface ($z = 0.39$), and the dashed line in the bottom panel indicates $v_z = 0$.}\n \\label{Fig:trace}\n\\end{figure}\n\n\\begin{figure}[htbp]\n \\center\n \\includegraphics[width=8cm]{fig8-eps-converted-to}\n \\caption{The criterion of the magnetic buoyancy instability (red line) for the front of the tube at each moment. The black line describes the variation of the magnetic field strength of the flux tube with height, and the blue line is the stratification effect of the atmosphere. }\n \\label{Fig:criterion}\n\\end{figure}\n\n\\begin{figure}[htbp]\n \\center\n \\includegraphics[width=8.5cm]{fig9-eps-converted-to}\n \\caption{The evolution of the magnetic field lines of the flux tube. The transparent horizontal slice represents the solar surface. }\n \\label{Fig:centerline}\n\\end{figure}\n\n\t\\begin{equation}\n \\label{eq:crit}\n\t-H_p \\dfrac{\\partial }{\\partial z} \\left( \\log B \\right) > -\\dfrac{\\gamma}{2} \\beta \\delta + k^2_{\\parallel} \\left(1+ \\dfrac{k^2_z}{k^2_{\\perp}}\\right),\n\t\\end{equation}\nwhere $H_p$, $z$, $B$, $\\gamma$ and $\\beta$ denote the local pressure scale height at the photosphere, the height, the magnetic field strength, the ratio of the specific heats and the ratio of the plasma pressure to the magnetic pressure, respectively. $\\delta$ is the superadiabatic index, which is $-0.4$ for a strong stabilization of the atmosphere. $k_{\\parallel}$, $ k_{\\perp}$, $k_{z}$ are the three components of the local perturbation wave vector. The left side of the equation describes the variation of the magnetic field strength of the flux tube with height, the first term on the right side indicates the stratification effect of the atmosphere, and the second term indicates the effect of the perturbation. This criterion helps us to determine the time of appearance of the flux tube on the surface, since the magnetic flux can only emerge across the surface when the criterion is satisfied. We calculated the criterion for the front of the tube at each moment and plotted the result in {Figure}~\\ref{Fig:criterion}. The perturbation term of the equation is not shown separately in the figure since it is a small quantity; it has already been included in the criterion (red line). We find that {Equation}~(\\ref{eq:crit}) is met at $t=12$, indicating that the buoyancy instability is triggered at around this moment, and indeed the magnetic flux first appears above the photosphere between $t=11$ and $12$. 
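\nAs a minimal numerical illustration of how the criterion of {Equation}~(\\ref{eq:crit}) can be evaluated (the 1D profiles and perturbation wavenumbers below are hypothetical placeholders for the actual simulation data):\n\\begin{verbatim}\nimport numpy as np\n\n# Hypothetical vertical profiles near the photosphere (placeholders).\nz = np.linspace(-0.5, 0.5, 101)        # height\nB = 2.0 * np.exp(-30.0 * z)            # field strength vs. height\nbeta = 10.0 * np.exp(2.0 * z)          # plasma beta vs. height\nH_p, gamma, delta = 0.1, 5.0 \/ 3.0, -0.4\nk_par, k_perp, k_z = 0.5, 1.0, 1.0     # placeholder wavenumbers\n\n# Left-hand side of the criterion: -H_p * d(log B)\/dz\nlhs = -H_p * np.gradient(np.log(B), z)\n# Right-hand side: stabilizing stratification term + perturbation term.\nrhs = -0.5 * gamma * beta * delta + k_par**2 * (1.0 + k_z**2 \/ k_perp**2)\n\nunstable = lhs > rhs   # True where the buoyancy instability criterion is met\nprint(z[unstable])\n\\end{verbatim}\n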
It is worth noting that the actual height of the solar surface is lifted up by the rising flux tube; thus at $t=11$ the magnetic flux tube exceeds the initial height of the photosphere but is still suppressed by the stability of the stratification.\n\n\\subsection{Partial emergence }\n\\label{sec:pe}\nOur simulation also agrees with the existing theory that the magnetic flux tube in the convection zone can only partially emerge into the atmosphere, and the field lines on the central vertical plane ($x = 0$) behave as described in \\cite{Leake2013}, i.e., the concave-upward part can expand into the corona, while the concave-downward part under the original tube axis remains mostly trapped under the surface. To give more details, {Figure}~\\ref{Fig:centerline} shows the evolution of the field lines traced from $17$ points within $0.04 L_s$ of the O-point on the cross section at the right $x$ boundary. These points are the O-point itself and $4$ points distributed uniformly along each of the positive and negative $y$ and $z$ directions from the O-point. The black line in each panel is obtained by tracing the O-point on the right $x$ boundary. The red lines indicate the field lines in the central part of the tube while the yellow lines indicate the outer field lines. \n\n\\begin{figure}[htbp]\n \\center\n \\includegraphics[width=8cm]{fig10-eps-converted-to}\n \\caption{(a-b) Two perspectives of the field lines traced at 20 points uniformly distributed in the height range 0.3$L_s$ to 0.6$L_s$ on the central vertical line at $t = 26$. (c) The same field lines with the slice $y=0$ added; the color indicates the current density ($J$). (d) The current sheets on the central vertical plane ($x = 0$). The cyan line is the streamline of electric current of the current sheet. }\n \\label{Fig:straline}\n\\end{figure}\n\n\\begin{figure}[htbp]\n \\center\n \\includegraphics[width=8cm]{fig11-eps-converted-to}\n \\caption{The evolution of $B_x$ at the ${\\rm O_{\\rm con}}$-point. }\n \\label{Fig:Bxtime}\n\\end{figure}\n\nSimilar to the simulation of \\citet{Magara2004}, the outer field lines of the emerging flux tube spread out in a wide fan after breaking through the surface, and some field lines even bend downward, such as the yellow field lines at $t = 24$ ({Figure}~\\ref{Fig:centerline}(c)). The lateral expansion of the inner field lines is restrained by the adjacent twisted field lines, which makes the inner field lines tend to expand vertically. With time, the inner field rises into the higher corona to form the MFR, while remaining well connected to the convection-zone flux tube.\n\n\\begin{figure*}[htbp]\n \\center\n \\includegraphics[width=18cm]{fig12-eps-converted-to}\n \\caption{The evolution of the current sheet iso-surface $J=8000$; the transparent horizontal slice represents the solar surface. }\n \\label{Fig:J}\n\\end{figure*}\n\n\\begin{figure}[htbp]\n \\center\n \\includegraphics[width= 8.5cm]{fig13-eps-converted-to}\n \\caption{(a) The distribution of the current sheets on the central vertical plane ($x = 0$). (b) The iso-surface of the current sheet $J=300$. }\n \\label{Fig:J0}\n\\end{figure}\n\\subsection{Current sheet }\n\\label{sec:cs}\n\n{Figure}~\\ref{Fig:straline}(a-c) shows the field lines traced at $20$ points that are uniformly distributed in the height range $0.3L_s$ to $0.6L_s$ on the central vertical line $(x, y) = (0, 0)$ at $t = 26$. The green lines are the field lines above the black line (same as in {Figure}~\\ref{Fig:centerline}), which have a reverse S-shape in the corona, with the middle part concave downward. 
The red lines indicate the magnetic field between the surface and the black line, and the blue lines are the field lines that do not fully emerge.\n\\citet{Archontis2004} pointed out that the plasma moves along the field lines towards their lower parts, and the heavy plasma gathered there increases the plasma $\\beta$, pulls the field lines toward the surface (becoming the structure of the red lines in {Figure}~\\ref{Fig:centerline}) and reduces the magnetic field gradient, which can cause the convective stability of the stratification to increase. That restrains further emergence of the flux tube in the middle region between the two polarities, resulting in the subsurface field lines in this region not breaking through the photospheric surface.\n\nAlthough the further emergence of the flux tube is suppressed, the magnetic field can still enter the atmosphere through the motion of the coronal MFR footpoints, which creates field-line structures like the top two blue lines in {Figure}~\\ref{Fig:straline}. The blue and red field lines constitute an X-shaped magnetic field structure in the middle of the two polarity concentration regions, which induces a transverse current sheet. This current sheet is in contact with the current sheet of the original subsurface magnetic flux tube, forming a ring current sheet ({Figure}~\\ref{Fig:straline}(c)), and {Figure}~\\ref{Fig:straline}(d) shows the streamline of electric current (cyan line) of the ring current sheet. We found that its induced magnetic field is in the same direction as that of the original flux tube, i.e., it reduces the tendency of the original magnetic field to decay. \n\n\n{Figure}~\\ref{Fig:Bxtime} shows the evolution of $B_x$ with time at the O-point of the convection-zone flux tube, where $B_x$ hardly decreases after $t = 20$, which is significantly different from the rapid decrease in the earlier period. This implies that, in the absence of an overlying coronal field, the axial direct current is enhanced during the flux emergence and no return current is observed \\citep[for more details on the study of current sheets in simulations with an overlying field see][]{Torok2014}.\n{Figure}~\\ref{Fig:J} shows the evolution of the current sheet iso-surface $J=8000$, where the transparent horizontal slice represents the solar surface. We found that in the second step of flux emergence, the evolution of the current sheet is divided into two stages. The first stage is before $t = 20$, when the rapid emergence of part of the flux causes the subsurface current density to decrease. In the second stage, the current sheet starts to reform, with the subsurface current sheet reforming more rapidly. We believe that the formation of the subsurface current sheet is due to the suppression by the heavy plasma in the middle of the two polarity concentration regions, which causes the convection-zone magnetic field to accumulate heavily under the surface. The current sheet above the surface is due to the X-shaped magnetic field structure in {Figure}~\\ref{Fig:straline}. These two current sheets eventually form a cavity configuration.\n\n\nIn addition, the red field lines in {Figure}~\\ref{Fig:straline} are pulled by the shear flow along the polarity reversal line and by the heavy plasma, causing the sides of the coronal magnetic field to squeeze toward the middle and forming a vertical current sheet, as shown in {Figure}~\\ref{Fig:J0}. 
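\nFor reference, the current density visualized in these figures is obtained from the curl of the magnetic field, $\\mathbf{J} \\propto \\nabla \\times \\mathbf{B}$; a minimal finite-difference sketch (with a hypothetical field array, omitting units and the $\\mu_0$ factor) is:\n\\begin{verbatim}\nimport numpy as np\n\ndef current_density(B, dx, dy, dz):\n    # J = curl(B) on a uniform grid via 2nd-order central differences;\n    # B has shape (3, nx, ny, nz); the mu_0 factor is omitted.\n    Bx, By, Bz = B\n    Jx = np.gradient(Bz, dy, axis=1) - np.gradient(By, dz, axis=2)\n    Jy = np.gradient(Bx, dz, axis=2) - np.gradient(Bz, dx, axis=0)\n    Jz = np.gradient(By, dx, axis=0) - np.gradient(Bx, dy, axis=1)\n    return np.stack([Jx, Jy, Jz])\n\nB = np.random.rand(3, 32, 32, 32)   # placeholder field\nJ_mag = np.linalg.norm(current_density(B, 1.0, 1.0, 1.0), axis=0)\n\\end{verbatim}\n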
In the real case, the resistivity in the corona is extremely low and reconnection is difficult to trigger; as a result, the oppositely directed magnetic fields on the two sides get closer and closer, forming a thinner and thinner current sheet that accumulates more and more energy. Once reconnection occurs, a rapid eruption might be produced in the same way as shown in \\citet{Jiang2021}, in which a continuously sheared bipolar arcade initiates an eruption by tether-cutting reconnection.\n\n\\section{Summary}\n\\label{sec:sum}\n\nIn this paper we have implemented the FES using the AMR--CESE--MHD code and achieved results consistent with many previous FESs of similar configurations that used different numerical codes. The AMR--CESE--MHD method is unique in that it is algorithmically much simpler than traditional numerical MHD solvers but can achieve higher accuracy at equivalent grid points. Further aided by the AMR, it can handle well the drastic variations, over many orders of magnitude, of both the spatial and temporal scales in a computational domain that includes the convection zone and the different layers of the solar atmosphere. The computational cost is moderate, with around 31 hours on 480 CPUs at 3~GHz. \n\nThe simulation follows the whole process of the rise into the corona of a twisted flux tube that is initially placed in the convection zone. Driven by the magnetic buoyancy, the central part of the tube rises until it reaches the photospheric layer. At this position, the reduced gradient of the background temperature produces a stratification stabilization effect, which inhibits the further rise of the flux tube, and the magnetic flux starts to pile up near the surface. When the accumulated magnetic field is sufficient to trigger the magnetic buoyancy instability, the upper part of the flux tube begins to emerge into the solar atmosphere and expands rapidly. The emerged magnetic field also suppresses the emergence of the following magnetic field, making only a portion of the original flux tube emerge.\n\nDuring the evolution of the emerging magnetic field in the corona, vortical and shearing motions of the magnetic polarities on the photosphere play an important role in transporting the magnetic energy and non-potentiality into the atmosphere. To store this energy, the coronal magnetic field has also been reshaped from the simple arcade of the early emergence phase into a sigmoid configuration (containing a weakly twisted rope). Due to the strong lateral expansion of the coronal field, the entire 3D profile of the coronal field resembles the shape of a ``mushroom''. \n\nIn addition, we also analyze the formation of the current sheet. The shear flow of the photospheric layer squeezes the sides of the coronal magnetic field toward the middle, and the oppositely directed magnetic fields (as seen on the central cross section) get closer and closer, leading to the formation of a vertical current sheet. We also found that below this vertical current sheet, the horizontal current sheet on the surface forms a cavity structure with the current sheet in the convection zone, and the presence of the toroidal current increases the magnetic field in the convection zone, which may lead to the re-emergence of the magnetic field \\citep{Syntelis2017}.\n\nThe present work developed a framework for numerical experiments of magnetic flux emergence and its role in producing solar eruptions, which will be the focus of our future works. 
For example, with an ultra-high accuracy MHD simulation, \\citet{Jiang2021} established a fundamental mechanism behind solar eruption initiation: a bipolar field driven by slow shearing motion on the photosphere can form an internal current sheet in a quasi-static way, which is followed by fast magnetic reconnection (in the current sheet) that triggers and drives the eruption. However, their model domain includes only the corona by assuming the lower layers of the atmosphere below the coronal base (i.e., the photosphere and chromosphere) as a line-tied boundary surface, and the surface driving velocity is also specified in an ad-hoc way. This inspires us to perform higher-resolution FESs to investigate whether the same mechanism can also operate to produce eruptions during the evolution of the emerging flux in the corona, with the shearing motion at the photosphere generated in a more self-consistent way. In another study, \\citet{Bian2022} showed that with continuous shearing along the same polarity inversion line (PIL), the fundamental mechanism can effectively produce homologous CMEs by recurring formation and disruption of the internal current sheet. Such homologous eruptions will also be investigated with longer-term FESs to verify whether a second emergence will occur after the first emergence of the same flux tube. FESs also have other important applications in studying solar eruptions, in particular, exploring the key parameters that can be used to predict eruptions. One such application has been shown by \\citet{Pariat2017}, who used the FESs of \\citet{Leake2013} and found that the ratio of the magnetic helicity of the current-carrying magnetic field to the total relative helicity can potentially be used for eruption prediction. This merits further studies using our FES model.\n\n\n\n\n\n\\acknowledgments This work is jointly supported by the National Natural Science Foundation of China (NSFC 42174200 and 41731067), the Fundamental Research Funds for the Central Universities (HIT.OCEF.2021033), and the Shenzhen Science and Technology Program (RCJC20210609104422048 and JCYJ20190806142609035). The computational work was carried out on TianHe-1(A), National Supercomputer Center in Tianjin, China.\n\n\\bibliographystyle{apj}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\n\n\\section{Introduction}\nNeural network quantization can effectively compress the size and runtime overhead of a network by reducing the bit-width of the network. \nUsing an equal bit-width for the entire network, a.k.a. fixed-precision quantization, is sub-optimal because different layers typically exhibit different sensitivities to quantization \\cite{wang2019haq,cai2020rethinking}. \nIt forces the quantization-insensitive layers to work at the same bit-width as the quantization-sensitive ones, missing the opportunity to further reduce the average bit-width of the whole network.\n\nMixed-precision quantization has thus become the focus of network quantization research, with its finer-grained quantization allowing different bit-widths for different layers. \nIn this way, the quantization-insensitive layers can use much lower bit-widths than the quantization-sensitive layers, thus providing more flexible accuracy-efficiency trade-off adjustment than fixed-precision quantization. \nFiner-grained quantization also means an exponentially larger search space to search from. 
\nSuppose we have an $L$-layer network and each layer has $n$ optional bit-widths for weights and activations; the resulting search space is $n^{2L}$. \n\nMost of the prior works are search-based. HAQ \cite{wang2019haq} and AutoQ \cite{lou2019autoq} utilize deep reinforcement learning (DRL) to search the bit-widths by modeling the bit-width determination problem as a Markov Decision Process. \nHowever, due to the exploration-exploitation dilemma, most existing DRL-based methods require a significant amount of time to finish the search process.\nDNAS \cite{wu2018mixed} and SPOS \cite{guo2020single} apply Neural Architecture Search (NAS) algorithms to achieve a differentiable search process. \nAs a common drawback of NAS, the search space needs to be greatly and manually limited in order to make the search process feasible; otherwise the search time can be quite high. \nIn a word, the search-based approach is very time-consuming due to the need to evaluate the searched policy on the training set for multiple rounds (\emph{e.g.,} 600 rounds in \cite{wang2019haq}).\n \nDifferent from these search-based approaches, some studies aim to define some ``critics'' to judge the quantization sensitivity of each layer. \nHAWQ \cite{dong2019hawq} and HAWQ-v2 \cite{dong2019hawq2} employ second-order information (Hessian eigenvalues or trace) to measure the sensitivity of layers and leverage it to allocate bit-widths. \nMPQCO \cite{chen2021towards} proposes an efficient approach to compute the Hessian matrix and formulates a Multiple-Choice Knapsack Problem (MCKP) to determine the bit-width assignment.\nAlthough these approaches reduce the searching time as compared to the search-based methods, they have the following defects: \\\n\textit{(1) Biased approximation.} HAWQ and HAWQv2 approximate the Hessian information on the \emph{full-precision} (unquantized) network to measure the relative sensitivity of layers. \nThis leads to not only an approximation error in these measurements themselves, but more importantly, an inability to perceive the existence of quantization operations.\nA full-precision model is a far cry from a quantized model. \nWe argue that using the information from the full-precision model to determine the bit-width assignment of the quantized model is seriously biased and results in a sub-optimal searched MPQ policy.\n\\\n\textit{(2) Limited search space.} \nMPQCO approximates its objective function with a second-order Taylor expansion. \nHowever, the inherent problem in its expansion makes it impossible to quantize the activations with mixed-precision, which significantly limits the search space. 
\nA limited search space means that a large number of potentially excellent MPQ policies cannot be accessed during the search, making sub-optimal performance more likely because so many MPQ policies are abandoned.\nMoreover, MPQCO needs to assign the bit-widths of activations manually, which requires expert involvement and leaves considerable room for improving search efficiency.\n\n\nTo tackle these problems, we propose to allocate bit-widths for each layer according to \emph{end-to-end learned importance indicators}.\nSpecifically, we reveal that the learnable scale factors in each layer's quantization function (\emph{i.e.,} quantizer), initially used to adjust the quantization mappings in classic quantization-aware training (QAT) \citep{esser2019learned,jung2019learning}, can be used as importance indicators to distinguish whether one layer is more quantization-sensitive than another. \nAs we will discuss later, they can perceive the numerical error transfer process and capture layers' characteristics in the quantization process (\emph{i.e.,} rounding and clamping) during QAT, resulting in a significant difference between the values for quantization-sensitive and insensitive layers. \nSince these indicators are learned end-to-end in QAT, errors that might arise from the approximation-based methods are avoided.\nMoreover, the two detached indicators of each layer, for weights and activations, allow us to explore the whole search space without limitation (\emph{e.g.,} MPQ for weights only). \n\nBesides, an $L$-layer network with $n$ optional bit-widths for each layer's weights and activations has $M=2 \times L \times n$ importance indicators. \nSeparately training these $M$ indicators requires $M$ training processes, which is time-prohibitive for deep networks and large-scale datasets.\nTo overcome this bottleneck, we propose a joint scheme to parallelize these $M$ training processes in a single QAT pass. \nThat reduces the number of indicator training processes by $M\times$.\n\nThen, based on these obtained layer-wise importance indicators, we transform the original iterative MPQ search problem into a one-time Integer Linear Programming (ILP) based mixed-precision search to determine bit-widths for each layer automatically.\nFor example, a sensitive layer (\emph{i.e.,} larger importance) will receive a higher bit-width than an insensitive (\emph{i.e.,} smaller importance) layer. \nBy this means, the time-consuming iterative search is eliminated, since we no longer need to use training data during the search. \nA concise comparison of our method and existing works is shown in Table \ref{tab:overall_comparision}.\n\n\n\begin{table}[t]\n\centering\n\setlength{\tabcolsep}{0.19mm}\n\caption{A comparison of our method and existing works. Iterative search avoiding can significantly reduce the MPQ policy search time. Unlimited search space can provide more potentially excellent MPQ policies. Quantization-aware search can avoid the biased approximation on the full-precision model. Fully automatic bit-width assignment can effectively save human effort and also reduce the MPQ policy search time. 
$^*$: MPQCO can only provide quantization-aware search for weights.}\n\begin{tabular}{c|c|c|c|c|c|c}\n\hline\n\small\nMethod & AutoQ & DNAS & HAWQ & HAWQv2 & MPQCO & Ours \\ \hline\nIterative search avoiding & No & No & Yes & Yes & Yes & Yes \\ \hline\nUnlimited search space & Yes & No & Yes & Yes & No & Yes \\ \hline\nQuantization-aware search & Yes & Yes & No & No & Partial yes$^*$ & Yes \\ \hline\nFully automatic bit-width assignment & Yes & Yes & No & Yes & No & Yes \\ \hline\n\end{tabular}\n\label{tab:overall_comparision}\n\end{table}\n\nTo summarize, our contributions are the following:\n\begin{itemize}\n \n \n \item \n \n We demonstrate that a small number of learnable parameters (\emph{i.e.,} the scale factors in the quantizer) can act as importance indicators to reflect the relative contribution of layers to performance in quantization. \n These indicators are learned end-to-end without performing time-consuming fine-tuning or approximating quantization-unaware second-order information.\n \n \n \item \n \n We transform the original \emph{iterative} MPQ search problem into a \emph{one-time} ILP problem by leveraging the learned importance of each layer, increasing time efficiency exponentially without limiting the bit-width search space. In particular, we achieve about a 330$\times$ MPQ policy search speedup compared to AutoQ on ResNet50, while preventing a 1.7\% top-1 accuracy drop.\n \item \n Extensive experiments are conducted on a wide range of models to demonstrate the state-of-the-art results of our method. \n The accuracy gap between the full-precision and quantized models of ResNet50 is further narrowed to only 0.6\%, while the model size is reduced by 12.2$\times$.\n \n \n \n \n \n\end{itemize}\n\n\n\section{Related Work}\n\n\subsection{Neural Network Quantization}\n\n\subsubsection{Fixed-Precision Quantization}\nFixed-precision quantization \cite{cai2017deep,zhou2017incremental,zhou2016dorefa,baskin2021nice} focuses on using the same bit-width for all (or most of) the layers. \nIn particular, \cite{zhang2018lq} introduces a learnable quantizer, and \cite{choi2018pact} uses a learnable upper bound for activations. \cite{esser2019learned,jung2019learning} propose to use learnable scale factors (or quantization intervals) instead of hand-crafted ones.\n\n\subsubsection{Mixed-Precision Quantization} \nTo achieve a better balance between accuracy and efficiency, many mixed-precision quantization methods that search the optimal bit-width for each layer have been proposed. \n\n\textit{Search-Based Methods.} \nSearch-based methods aim to sample the vast search space of bit-width assignments more effectively and obtain higher performance with fewer evaluations. \n\cite{wang2019haq} and \cite{lou2019autoq} exploit DRL to determine the bit-widths automatically at the layer and kernel levels. \nAfter that, \cite{uhlich2019mixed} determines the bit-width by parametrizing the quantizer with the step size and dynamic range.\nFurthermore, \cite{habi2020hmq} repurposes the Gumbel-Softmax estimator into a smooth estimator of a pair of quantization parameters.\nIn addition, many NAS-based methods have emerged recently \cite{wu2018mixed,yu2020search,cai2020rethinking,guo2020single}. 
\nThey usually organize the MPQ search problem as a directed acyclic graph (DAG) and make the problem solvable by common optimization methods (\emph{e.g.,} stochastic gradient descent) through differentiable NAS-based algorithms.\n\n\textit{Criterion-Based Methods.}\nDifferent from exploration approaches,\n\cite{dong2019hawq} introduces a method to automatically find the mixed-precision settings based on the second-order sensitivity of the model. \n\cite{dong2019hawq2} selects the bit-width based on the Pareto frontier.\nFurthermore, \cite{chen2021towards} reformulates the problem as an MCKP and proposes a greedy search algorithm to solve it efficiently. \nThe achievement of criterion-based methods is that they reduce search costs greatly, but they cause a biased approximation or a limited search space, as we discussed above. \n\n\n\n\n\n\n\n\subsection{Indicator-Based Model Compression} \label{sec:indicator_based_compression}\nMeasuring the importance of layers or channels using learned (\emph{e.g.,} scaling factors of batch normalization layers) or approximated indicators is seen as a promising direction thanks to its excellent efficiency and performance.\nEarly pruning work \cite{lecun1990optimal} uses second-derivative information to make a trade-off between network complexity and accuracy.\n\cite{liu2017learning} prunes the unimportant channels according to the corresponding BN-layer scale factors.\n\cite{chen2021bn} sums the scale factors of the BN layer to decide which corresponding convolution layer to choose in the NAS search process. \nHowever, quantization is inherently different from these studies due to the presence of numerical precision transformation.\n\n\n\n\n\section{Method}\nIn this section, we first review the conventional QAT and its quantizer. \nNext, we discuss and demonstrate empirically that the scale factors in the quantizer can act as importance indicators that indicate the quantization sensitivity of each layer.\nAlso, we propose a joint training method that obtains all importance indicators at once to avoid unnecessary training sessions.\nFinally, based on these learned importance indicators, we reformulate the MPQ search problem as a one-time ILP problem, thus eliminating the inefficient iterative evaluation on the whole training set.\n\subsection{Understand the Role of Scale Factor in Quantizer}\n\label{sec:understand_factor}\nQuantization maps continuous values to discrete values. The uniform quantization function (a.k.a. quantizer) under $b$ bits in QAT maps the input $float32$ activations and weights to quantized values in $[0, 2^{b}-1]$ and $[-2^{b-1}, 2^{b-1}-1]$, respectively. The quantization function $Q_b(\cdot)$ that quantizes the input values $v$ to quantized values $v^q$ can be expressed as follows:\n\begin{equation} \nv^q=Q_b(v;s)= round(clip(\frac{v}{s},min_b,max_b)) \times s,\n\label{eq:preliminary}\n\end{equation}\nwhere $min_b$ and $max_b$ are the minimum and maximum quantization values \cite{bhalgat2020lsq+,esser2019learned}. For activations, $min_b=0$ and $max_b=2^{b}-1$. For weights, $min_b=-2^{b-1}$ and $max_b=2^{b-1}-1$. \n$s$ is a learnable scalar parameter used to adjust the quantization mappings, called the \emph{step-size scale factor}. For a network, each layer has two distinct scale factors, in the weights quantizer and the activations quantizer, respectively. \n\nTo understand the role of the scale factor, we consider a toy quantizer example under $b$ bits and omit the $clip(\cdot)$ function. 
\nNamely,\n\begin{equation} \nv^q=round(\frac{v}{s}) \times s=\hat{v^q} \times s,\n\label{eq:preliminary_example}\n\end{equation}\nwhere $\hat{v^q}$ is the quantized integer value on the discrete domain. \n\nObviously, for two continuous values $v_i$ and $v_j$ ($v_i \neq v_j$), their quantized integer values $\left|\hat{v^q_i}-\hat{v^q_j}\right|=0$ if and only if $0 < \left|v_i-v_j\right| \leq \frac{s}{2}$. \nThus $s$ actually controls the distance between two adjacent quantized values.\nA larger $s$ means that more different continuous values are mapped to the same quantized value.\n\n\subsection{From Accuracy to Layer-wise Importance}\nSuppose we have an $L$-layer network with full-precision parameter tensor $\mathcal{W}$, and each layer has $n$ optional bit-widths $\mathcal{B}=\{b_0, ..., b_{n-1}\}$ for its activations and weights, respectively. \nThe bit-width combination of weights and activations for layer $l$ is $(b^{(l)}_{w}, b^{(l)}_{a})$ with $b^{(l)}_{w} \in \mathcal{B}$ and $b^{(l)}_{a} \in \mathcal{B}$. \nThus $\mathcal{S}=\{(b^{(l)}_w,b^{(l)}_a)\}_{l=0}^L$ is the bit-width combination for the whole network, and we use $\mathcal{W_S}$ to denote the quantized parameter tensor. All possible $\mathcal{S}$ construct the search space $\mathcal{A}$.\nMixed-precision quantization aims to find the appropriate bit-width combination (\emph{i.e.,} the searched MPQ policy) $\mathcal{S^*} \in \mathcal{A}$ for the whole network to maximize the validation accuracy $\mathcal{ACC}_{val}$, under certain constraints $C$ (\emph{e.g.,} model size, BitOps, etc.). \nThe objective can be formalized as follows:\n\n\begin{subequations}\label{eq:mp_original}\n\begin{align}\n\mathcal{S^*}=\mathop{\arg\max}\limits_{\mathcal{S} \thicksim \Gamma(\mathcal{A})} \mathcal{ACC}_{val}(f(\textbf{x}; \mathcal{S}, \mathcal{W_\mathcal{S}}), \textbf{y}) \nonumber \ \ \nonumber \tag{\ref{eq:mp_original}}\n\end{align} \n\begin{alignat}{2}\n\text{s.t.} \quad & \mathcal{W_\mathcal{S}}=\mathop{\arg\min}_\mathcal{W}\mathcal{L}_{train}(f(\textbf{x}; \mathcal{S}, \mathcal{W}), \textbf{y}) \\\n& \quad \quad \quad BitOps(\mathcal{S}) \leq C \n\end{alignat}\n\end{subequations}\nwhere $f(\cdot)$ denotes the network, $\mathcal{L}(\cdot)$ is the loss function of the task (\emph{e.g.,} cross-entropy), \textbf{x} and \textbf{y} are the input data and labels, and $\Gamma(\mathcal{A})$ is the prior distribution of $\mathcal{S} \in \mathcal{A}$. For simplicity, we omit the data symbols of the training set and validation set, and the parameter tensor of the quantizer. This optimization problem is combinatorial and intractable, since it has an extremely large discrete search space $\mathcal{A}$. As discussed above, although it can be solved by DRL or NAS methods, the time cost is still very high. This is due to the need to evaluate the goodness of a specific quantization policy $\mathcal{S}$ on the training set to obtain the metric $\mathcal{L}_{train}$ iteratively to guide the ensuing search. As an example, AutoQ \cite{lou2019autoq} needs more than 1000 GPU-hours to determine a final quantization strategy $\mathcal{S^*}$ \cite{chen2021towards}.\n\nTherefore, we focus on \emph{replacing the iterative evaluation on the training set with some once-obtained importance score of each layer}. 
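\nTo preview how such once-obtained scores remove the iterative search, the sketch below poses the bit-width assignment as a small integer program (a simplified illustration rather than our exact formulation: the importance values, size costs, and budget are hypothetical, and the open-source PuLP package is assumed as the solver):\n\begin{verbatim}\nfrom pulp import LpProblem, LpMaximize, LpVariable, lpSum, value\n\nlayers, bits = range(4), [2, 4, 8]\n# Hypothetical learned importance of giving bit-width b to layer l.\nimp = {(l, b): (l + 1) * b for l in layers for b in bits}\nsize = {(l, b): 1000 * b for l in layers for b in bits}  # toy size cost\nbudget = 16000\n\nprob = LpProblem(\"mpq\", LpMaximize)\nx = {(l, b): LpVariable(f\"x_{l}_{b}\", cat=\"Binary\")\n     for l in layers for b in bits}\nprob += lpSum(imp[l, b] * x[l, b] for l in layers for b in bits)\nfor l in layers:  # exactly one bit-width per layer\n    prob += lpSum(x[l, b] for b in bits) == 1\nprob += lpSum(size[l, b] * x[l, b] for l in layers for b in bits) <= budget\n\nprob.solve()\npolicy = {l: b for l in layers for b in bits if value(x[l, b]) == 1}\nprint(policy)  # higher-importance layers tend to get higher bit-widths\n\end{verbatim}\n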
\nIn this way, the layer-wise importance scores indicate the impact of quantization between and within each layer on the final performance, thus avoiding time-consuming iterative accuracy evaluations. \nUnlike the approximated Hessian-based approaches \cite{dong2019hawq,dong2019hawq2}, which cannot perceive quantization operations or which limit the search space \cite{chen2021towards}, we propose to \emph{learn the importance in Quantization-Aware Training}.\n\n\subsection{Learned Layer-wise Importance Indicators}\nTo learn the importance of layers end-to-end, there are two options, depending on whether the probe is inserted outside or inside the quantizer.\n\n\textit{The scale factor of the BN layer.} \nA straightforward method is to apply the scale factors (a.k.a. $\gamma$) of BN layers, as in prior pruning \cite{lecun1990optimal,liu2017learning} and NAS \cite{chen2021bn} studies. In this way, the summed scale factors of a BN layer implicitly indicate the importance of the previous convolution layer. However, the affine transformation of the BN layer is applied after the quantization process, which means it cannot directly capture the variation caused by quantization.\n\n\textit{The scale factor of the quantizer.} \nQuantization mapping is critical for a quantized layer since it decides how to use the confined quantization levels, and an improper mapping is harmful to the performance \cite{jung2019learning}.\nAs shown in Equation \ref{eq:preliminary}, during QAT, the scale factor of the quantizer in each layer is trained to adjust the corresponding quantization mapping \emph{properly} at a specific bit-width. \nThis means that it can naturally capture certain quantization characteristics that describe the layers, since the quantization mapping it controls is optimized directly by the task loss. \n \n\n\begin{figure}[h]\n\centering\n\includegraphics[scale=0.5]{fig\/motivation_vis.pdf}\n\caption{Illustration of the distributions of two example layers, both under 2-bit quantization. \nThe grey dashed lines separate the different quantization levels (\emph{i.e.,} $2^2=4$ quantized values \{$q_0$, $q_1$, $q_2$, $q_3$\} for 2 bits).\nFor example, the continuous values in the green and red regions are quantized to the same quantization levels $q_1$ and $q_2$, respectively.\nThe first layer has less variance, resulting in a smaller scale factor value. \nIn contrast, the second layer has large variance; thus a wider range of continuous values is quantized to the same quantization level.\n}\n\label{fig:motivation_vis}\n\end{figure}\nAs an example, we consider two layers with well-trained $s$ and weights (\emph{i.e.,} both in a local minimum after quantization-aware training).\nAs shown in Figure \ref{fig:motivation_vis}, 2-bit quantization has 4 quantized values, dividing the original continuous distribution into the corresponding quantization levels. \nAs we discussed in \cref{sec:understand_factor}, the continuous values in a uniform range fall into the same quantization level, and the specific range is controlled by the scale factor $s$ of this layer.\nFor the first layer, since it has less variance after training, a smaller scale factor is given.\nBut for the second layer, its large variance makes the scale factor also increase.\nIn other words, for the second layer, a wider range of different continuous values shares the same quantization level. 
\nFor example, while the green regions in layer 1 and layer 2 are both quantized to the value $q_1$, the green region of layer 2 covers a much broader continuous range.\nIn extreme cases, this extinguishes the inherent differences between the original continuous values, thus reducing the expressiveness of the quantized model \\cite{park2020profit}.\nThe only way to improve performance is to give more quantization levels to those layers that have large scale factors, namely, to increase their bit-width.\n\nTherefore, the numerically significant difference in the scale factors of heterogeneous layers can properly assist us in judging the sensitivity of a layer.\nMoreover, the operation involving the scale factor takes place in the quantizer, which allows it to be directly aware of quantization. \nLast but not least, each layer has two quantizers, one for activations and one for weights, which means that we can obtain the importance of weights and activations separately. In contrast, we cannot get the importance of weights through the BN layer, since it only acts on activations.\n\n\\subsubsection{Feasibility Verification}\nDespite the success of indicator-based methods for model compression \\cite{chen2021bn,lecun1990optimal,liu2017learning} in avoiding a time-consuming search process, to the best of our knowledge, there is no literature demonstrating that end-to-end learned importance indicators can be used for quantization. To verify that the scale factors of the quantizer can be used for this purpose, we conduct a contrast experiment with MobileNetv1 \\cite{howard2017mobilenets} on ImageNet \\cite{deng2009imagenet} as follows. \n\nIn the MobileNet family, it is well known that the depth-wise convolutions (DW-convs) have fewer parameters than the point-wise convolutions (PW-convs); thus, the DW-convs are generally more susceptible to quantization than the PW-convs \\cite{habi2020hmq,park2020profit}. \nTherefore, we separately quantized the DW-conv and PW-conv of each of the five DW-PW pairs in MobileNetv1 to observe how the scale factors of the quantizer and the accuracy vary. \nSpecifically, we quantized each layer in MobileNetv1 to 2 or 4 bits to observe the accuracy degradation. Each time we quantized only \\emph{one layer} to low bits while the other layers were neither quantized nor updated, \\emph{i.e.,} we quantized 20 $(5 \\times 2 \\times 2)$ networks independently. If the accuracy of layer $l_i$ degrades more than that of layer $l_j$ when the quantization level changes from 4 bits to 2 bits, then $l_i$ is \\emph{more sensitive} to quantization than $l_j$. In addition, the input channels and output channels of these five DW-PW pairs are all 512; namely, we used the same number of I\/O channels to control the variables. \n\\begin{figure}[t]\n\\setlength{\\belowcaptionskip}{-0.4cm}\n\\centering\n\\includegraphics[scale=0.5]{fig\/mbv1_separate_importance.pdf}\n\\caption{Results of the contrast experiment of MobileNetv1 on ImageNet. ``$\\bullet$'' and ``$\\star$'' respectively indicate that the DW-conv layer or the PW-conv layer is quantized. Different colors indicate that different layers are quantized. Large labels indicate that the quantization bit-width is set to 4 bits, and small labels that it is set to 2 bits.}\n\\label{fig:mbv1_separate_importance}\n\\end{figure}\nThe results of this controlled-variable experiment are shown in Figure \\ref{fig:mbv1_separate_importance}. 
\nBased on the results, we can draw the following conclusions: \n\nWhen the quantization bit-width decreases from 4 to 2 bits, the accuracy degradation of the PW-convs is much lower than that of the DW-convs, which is consistent with the prior knowledge that DW-convs are very sensitive.\nMeanwhile, the values of the scale factors of all PW-convs are markedly smaller than those of the DW-convs under the same bit-width. \nThis indicates that the scale factor values of sensitive layers are larger than those of insensitive layers, which means the scale factor's value can adequately reflect the quantization sensitivity of the corresponding layer.\nNamely, a layer with a large scale factor value is more important than one with a small scale factor. \n\n\\subsubsection{Initialization of the Importance Indicators}\nInitializing the scale factors with the statistics \\cite{bhalgat2020lsq+,esser2019learned} of each layer results in different initializations across layers. \nTo erase this initialization difference, we verify whether the factors still show numerical differences under a scheme that uses the same initialization value for every layer. \nThat is, for each layer, we empirically initialize each importance indicator of bit-width $b$ as $s_b = 0.1 \\times \\frac{1}{b}$, since we observed that the value of the factor is usually quite small ($\\leq$ 0.1) and increases as the bit-width decreases.\n\nAs shown in Figure \\ref{fig:5epochs_resnet18_uniformly_initialization}, after some early training instability, the scale factors still show significant differences at the end of training. \nThat means the scale factor can still function consistently when using the same initialization for each layer. \nNevertheless, we find that initialization with statistics \\cite{bhalgat2020lsq+,esser2019learned} speeds up and stabilizes the training process compared to the identical-value scheme, so we still use the statistics-based initialization in our experiments.\n\\begin{figure}[!htb]\n\\begin{tabular}{cc}\n\\hspace{-0.2cm}\n\\begin{minipage}[t]{0.25\\linewidth}\n \\includegraphics[width = 1\\linewidth]{fig\/layer2.1.conv1.pdf}\n\\end{minipage}\n\\hspace{-0.15cm}\n\\begin{minipage}[t]{0.25\\linewidth}\n \\includegraphics[width = 1\\linewidth]{fig\/layer2.1.conv2.pdf}\n\\end{minipage}\n\\hspace{-0.15cm}\n\\begin{minipage}[t]{0.25\\linewidth}\n \\includegraphics[width = 1\\linewidth]{fig\/layer3.1.conv1.pdf}\n\\end{minipage}\n\\hspace{-0.15cm}\n\\begin{minipage}[t]{0.25\\linewidth}\n \\includegraphics[width = 1\\linewidth]{fig\/layer3.1.conv2.pdf}\n\\end{minipage}\n\\end{tabular}\n\\caption{The importance values of four layers of ResNet18.}\n\\label{fig:5epochs_resnet18_uniformly_initialization}\n\\end{figure}\n\n\\subsection{One-time Training for Importance Derivation}\n\\label{sec:one_time_importance_training}\nSuppose the bit-width options for weights and activations are $\\mathcal{B}=\\{b_0, ..., b_{n-1}\\}$; then there are $M = 2 \\times L \\times n$ importance indicators for an $L$-layer network. Training these $M$ indicators separately would require $M$ training sessions, which induces huge extra training costs. \nTherefore, we propose a joint training scheme to obtain the importance indicators of all layers for all $n$ bit-width options $\\mathcal{B}$ in a single training session. \n\nSpecifically, we use a bit-specific importance indicator instead of the original notation $s$ in Equation \\ref{eq:preliminary} for each layer. 
That is, for the weights and activations of layer $l$, we use the notation $s_{w, i}^{(l)}$ and $s_{a, j}^{(l)}$ for the importance indicators of $b_i \\in \\mathcal{B}$ for weights and $b_j \\in \\mathcal{B}$ for activations. In this way, $n$ different importance indicators can exist for each layer in a single training session. \nIt is worth noting that the importance indicator parameters are only a tiny fraction of the overall network parameters and thus do not incur much GPU memory overhead.\nFor example, for ResNet18 with 5 bit-width options per layer, we have $M=2\\times19\\times5=190$ indicators, while the whole network has more than 30 million parameters. \n\n\n\nAt each training step $t$, we first perform $n$ forward and backward propagations corresponding to the $n$ bit-width options (\\emph{i.e.,} using the same bit-width $b_k \\in \\mathcal{B}$, $k=0,..,n-1$, for every layer in the $k$-th pass); and, inspired by one-shot NAS \\cite{guo2020single,chu2021fairnas}, we then introduce one random bit-width assignment pass over the layers to make sure that different bit-widths in different layers can communicate with each other. \nWe define the above procedure as an atomic operation of importance indicator updating, in which the gradients are calculated $n+1$ times but the importance indicators are not updated during the execution of the operation. \nAfter that, we aggregate the above gradients and use them to update the importance indicators (see the sketch below). \n\nWe show in Figure \\ref{fig:layerwise_importance_for_resnet} all the layer importance indicators obtained by this method in a single training session.\nWe observe that the top layers always show higher importance values, indicating that these layers need to be allocated higher bit-widths.\n\nInterestingly, we also find that training the importance indicators only (\\emph{i.e.,} freezing the network weights during training) produces final experimental results almost identical to training the entire network. This may be because evaluating the layer importance does not need to rely on the weights or on the accuracy associated with them. \n\n\\begin{figure}[!htb]\n\\hspace{-0.3cm}\n\\subfigure[ResNet18]{\n\\begin{minipage}[t]{0.26\\linewidth}\n \\includegraphics[width = 1\\linewidth]{fig\/r18_weight_probe.pdf}\n\\end{minipage}\n\\hspace{-0.3cm}\n\\begin{minipage}[t]{0.26\\linewidth}\n \\includegraphics[width = 1\\linewidth]{fig\/r18_acts_probe.pdf}\n\\end{minipage}\n}\n\\hspace{-0.4cm}\n\\subfigure[ResNet50]{\n\\begin{minipage}[t]{0.26\\linewidth}\n \\includegraphics[width = 1\\linewidth]{fig\/r50_weight_probe.pdf}\n\\end{minipage}\n\\hspace{-0.3cm}\n\\begin{minipage}[t]{0.26\\linewidth}\n \\includegraphics[width = 1\\linewidth]{fig\/r50_act_probe.pdf}\n\\end{minipage}\n}\n\\caption{The importance indicators of weights and activations for ResNet18 and ResNet50.}\n\\label{fig:layerwise_importance_for_resnet}\n\\end{figure}\n\n\n\\subsection{Mixed-Precision Quantization Search Through Layer-wise Importance}\nNow, we consider using these learned importance indicators to allocate bit-widths for each layer automatically. \nSince these indicators reflect each layer's contribution to the final performance under a certain bit-width, we no longer need iterative accuracy evaluations to assess a bit-width combination. 
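\nBefore turning to the search itself, we sketch one atomic update step of the one-time importance training in \\cref{sec:one_time_importance_training}. This is an illustrative PyTorch-style fragment rather than our exact implementation: \\texttt{set\\_bits} and \\texttt{num\\_layers} are hypothetical hooks that switch the active bit-specific quantizers, and the optimizer may be restricted to the indicator parameters, consistent with the frozen-weight observation above.\n\\begin{verbatim}\nimport random\n\ndef atomic_importance_step(model, batch, loss_fn,\n                           optimizer, bit_options):\n    # n uniform-bit-width passes plus one random\n    # per-layer assignment; gradients accumulate and\n    # the indicators are updated once at the end.\n    x, y = batch\n    optimizer.zero_grad()\n    for b in bit_options:                    # n passes\n        model.set_bits([b] * model.num_layers)\n        loss_fn(model(x), y).backward()\n    rand_bits = [random.choice(bit_options)  # +1 pass\n                 for _ in range(model.num_layers)]\n    model.set_bits(rand_bits)\n    loss_fn(model(x), y).backward()\n    optimizer.step()                         # update once\n\\end{verbatim}\n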
\n\nAs shown in Figure \\ref{fig:mbv1_separate_importance}, the DW-convs always have a higher importance score than the PW-convs, and the importance score rises as the bit-width is reduced; hence the DW-convs should be quantized to a higher bit-width than the PW-convs, \\emph{e.g.,} 2 bits for PW-convs and 4 bits for DW-convs. \nFor layer $l$, we use a binary variable $x_{i,j}^{(l)}$ to indicate whether the bit-width combination $(b^{(l)}_w, b^{(l)}_a)=(b_i, b_j)$, \\emph{i.e.,} $b_i$ bits for weights and $b_j$ bits for activations, is selected. \nUnder the given constraint $C$, our goal is to minimize the summed importance-indicator values over all layers. \nBased on that, we reformulate the mixed-precision search as a simple ILP problem, Equation \\ref{eq:search_ilp}:\n\\begin{subequations}\\label{eq:search_ilp}\n\\begin{align}\n\\mathop{\\arg\\min}\\limits_{\\{x^{(l)}_{i,j}\\}_{l=0}^L} \\sum_{l=0}^L\\sum_i\\sum_j(s_{a, j}^{(l)}+\\alpha \\times s_{w, i}^{(l)}) \\times x^{(l)}_{i,j} \\ \\ \\tag{\\ref{eq:search_ilp}}\n\\end{align} \n\\begin{alignat}{2}\n\\text{s.t.} \n\\quad & \\sum_i\\sum_j x^{(l)}_{i,j}=1 \\label{subeq:sumed_x} \\\\\n& \\sum_l\\sum_i\\sum_j BitOps(l, b_i, b_j)\\, x^{(l)}_{i,j} \\leq C \\label{subeq:bitops_c} \\\\\n\\text{vars} \n\\quad & x^{(l)}_{i,j} \\in \\{0,1\\} \n\\end{alignat}\n\\end{subequations}\nwhere Equation \\ref{subeq:sumed_x} ensures that exactly one bit-width combination is selected for each layer $l$, and Equation \\ref{subeq:bitops_c} constrains the total BitOps of the network by $C$, with $BitOps(l, b_i, b_j)$ denoting the BitOps of layer $l$ under the combination $(b_i, b_j)$. Depending on the deployment scenario, this constraint can be replaced with others, such as compression rate.\n$\\alpha$ is a hyper-parameter used to form a linear combination of the weight and activation importance indicators.\nTherefore, the final bit-width combination $\\mathcal{S^*}$ of the whole network can be obtained by solving Equation \\ref{eq:search_ilp}.\n\nPlease note that, since Equation \\ref{eq:search_ilp} does not involve any training data, we no longer need to perform iterative evaluations on the training set as in previous works, so the MPQ policy search time is reduced by orders of magnitude. \nWe solve this ILP with the Python library PuLP \\cite{mitchell2011pulp}; the elapsed time of the solver for ResNet18 is 0.06 seconds on an 8-core Apple M1 CPU. \nFor more details about the MPQ policy search efficiency, please refer to \\cref{sec:MPQ Policy Search Efficiency}.\n\n\\section{Experiments}\nIn this section, we conduct extensive experiments with the networks ResNet18\/50 \\cite{he2016deep} and the lightweight network MobileNetv1 \\cite{howard2017mobilenets} on ImageNet \\cite{deng2009imagenet} classification. \nWe compare our method with fixed-precision quantization methods, including PACT \\cite{choi2018pact}, PROFIT \\cite{park2020profit}, and LQ-Net \\cite{zhang2018lq}, and with layer-wise mixed-precision quantization methods AutoQ \\cite{lou2019autoq}, HAQ \\cite{wang2019haq}, SPOS \\cite{guo2020single}, DNAS \\cite{wu2018mixed}, BP-NAS \\cite{yu2020search}, MPDNN \\cite{uhlich2019mixed}, HAWQ \\cite{dong2019hawq}, HAWQv2 \\cite{dong2019hawq2}, and MPQCO \\cite{chen2021towards}.\n\n\\subsection{Experimental Setups}\nFor each layer, we use the bit-width options $\\mathcal{B}=$\\{2,3,4,5,6\\} for its weights and activations. We use the pre-trained model as initialization and keep the first and last layers at 8 bits. \nBased on the method in \\cref{sec:one_time_importance_training}, we train 5 epochs for ResNet18 and MobileNet and 1 epoch for ResNet50, both with a 0.01 learning rate (LR). 
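\nAs a concrete illustration of how Equation \\ref{eq:search_ilp} maps onto PuLP, consider the following minimal sketch. The importance values, BitOps estimates, and the function name are hypothetical placeholders, and in practice one would also check the solver status before trusting the assignment.\n\\begin{verbatim}\nimport pulp\n\ndef search_policy(s_w, s_a, bitops, bits, C, alpha):\n    # s_w[l][i], s_a[l][j]: learned indicators of layer l\n    # at weight bit-width bits[i], activation bits[j];\n    # bitops[l][i][j]: BitOps of layer l for that pair.\n    L, n = len(s_w), len(bits)\n    prob = pulp.LpProblem('mpq', pulp.LpMinimize)\n    x = pulp.LpVariable.dicts(\n        'x', (range(L), range(n), range(n)), cat='Binary')\n    prob += pulp.lpSum(\n        (s_a[l][j] + alpha * s_w[l][i]) * x[l][i][j]\n        for l in range(L) for i in range(n) for j in range(n))\n    for l in range(L):  # exactly one bit pair per layer\n        prob += pulp.lpSum(\n            x[l][i][j] for i in range(n) for j in range(n)) == 1\n    prob += pulp.lpSum(  # total BitOps budget\n        bitops[l][i][j] * x[l][i][j]\n        for l in range(L) for i in range(n) for j in range(n)) <= C\n    prob.solve()\n    return {l: [(bits[i], bits[j]) for i in range(n)\n                for j in range(n)\n                if x[l][i][j].value() > 0.5][0]\n            for l in range(L)}\n\\end{verbatim}\nReplacing the BitOps budget with a model-size budget only changes the last constraint, mirroring the flexibility noted above.\n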
We then extract the layer-wise importance indicators and apply the mixed-precision search under different constraints according to Equation \\ref{eq:search_ilp}. \nThe hyper-parameter $\\alpha$ for ResNet18, ResNet50, and MobileNetv1 is 3.0, 2.0, and 1.0, respectively.\nAfter searching, we quantize the models with the searched policies and fine-tune them for 90 epochs, using the cosine LR scheduler and the SGD optimizer with a 0.04 LR and $2.5 \\times 10^{-5}$ weight decay; the first 5 epochs are used for warm-up. \n\n\\begin{table}[t]\n\\centering\n\n\\caption{Results for ResNet18 on ImageNet with BitOps constraints. ``W-bits'' and ``A-bits'' indicate the bit-widths of weights and activations, respectively. ``MP'' means mixed-precision quantization. ``Top-1\/Quant'' and ``Top-1\/FP'' indicate the top-1 accuracy of the quantized and \\textbf{F}ull-\\textbf{P}recision models. ``Top-1\/Drop'' = ``Top-1\/Quant'' $-$ ``Top-1\/FP''.}\n\\setlength{\\tabcolsep}{0.25mm}\n\\centering\n\\begin{tabular}{c|cccccc}\n\\hline\n\n\\quad Method \\quad & \\quad W-bits & \\quad A-bits & \\quad Top-1\/Quant & \\quad Top-1\/FP & \\quad Top-1\/Drop & \\quad BitOps (G) \\\\ \\hline\n\nPACT & 3 & 3 & 68.1 & 70.4 & -2.3 & 23.09 \\\\\nLQ-Net & 3 & 3 & 68.2 & 70.3 & -2.1 & 23.09 \\\\\nNice & 3 & 3 & 67.7 & 69.8 & -2.1 & 23.09 \\\\\nAutoQ & 3MP & 3MP & 67.5 & 69.9 & -2.4 & - \\\\\nSPOS & 3MP & 3MP & 69.4 & 70.9 & -1.5 & 21.92 \\\\\nDNAS & 3MP & 3MP & 68.7 & 71.0 & -2.3 & 25.38 \\\\\n\\hdashline\nOurs & 2.5MP & 3MP & 68.7 & 69.6 & -0.9 & \\textbf{19.81} \\\\\nOurs & 3MP & 3MP & 69.0 & 69.6 & \\textbf{-0.6} & 23.07 \\\\\nOurs & 3MP & 3MP & \\textbf{69.7} & 70.5 & -0.8 & 23.07 \\\\\n\\hline\nPACT & 4 & 4 & 69.2 & 70.4 & -1.2 & 33.07 \\\\\nLQ-Net & 4 & 4 & 69.3 & 70.3 & -1.0 & 33.07 \\\\\nNice & 4 & 4 & 69.8 & 69.8 & 0 & 33.07 \\\\\nSPOS & 4MP & 4MP & 70.5 & 70.9 & -0.4 & 31.81 \\\\\nMPDNN & 4MP & 4MP & 70.0 & 70.2 & -0.2 & - \\\\\nAutoQ & 4MP & 4MP & 68.2 & 69.9 & -1.7 & - \\\\\nDNAS & 4MP & 4MP & 70.6 & 71.0 & -0.4 & 33.61 \\\\\nMPQCO & 4MP & 4MP & 69.7 & 69.8 & -0.1 & - \\\\\n\\hdashline\nOurs & 4MP & 4MP & 70.1 & 69.6 & \\textbf{0.5} & 33.05 \\\\\nOurs & 4MP & 4MP & \\textbf{70.8} & 70.5 & 0.3 & 33.05 \\\\\n\n\\hline\n\\end{tabular}\n\\label{tab:resnet18_result}\n\\end{table}\n\n\n\n\n\\subsection{Mixed-Precision Quantization Performance Effectiveness}\nTo verify the state-of-the-art (SOTA) accuracy we achieve, we compare our method with existing SOTA quantization methods on ResNet18, ResNet50, and MobileNetv1.\n\\subsubsection{ResNet18}\nIn Table \\ref{tab:resnet18_result}, we show the results of three BitOps (computation cost) constrained MPQ schemes, \\emph{i.e.,} 2.5W3A at 19.81G BitOps, 3W3A at 23.07G BitOps, and 4W4A at 33.05G BitOps.\n\nFirst, we examine the 3-bit level (\\emph{i.e.,} 23.07G BitOps) results. \nWe achieve the \\emph{smallest} absolute top-1 accuracy drop among all methods. \nPlease note that the accuracy of our initialization full-precision (FP) model is only 69.6\\%, which is about 1\\% lower than that of some MPQ methods such as SPOS and DNAS. \nTo make a fair comparison, we also provide a result initialized with a higher-accuracy FP model (\\emph{i.e.,} 70.5\\%). \nIn this case, the accuracy of the quantized model improves by 0.7\\% and reaches 69.7\\%, which surpasses all existing methods; in particular it exceeds DNAS by 1.0\\%, even though DNAS uses a 71.0\\% FP model as initialization. 
\nIt is noteworthy that the 2.5W3A (\\emph{i.e.,} 19.81G BitOps) result is provided to demonstrate that our method causes less accuracy drop even under a much stricter BitOps constraint.\n\nSecond, at the 4-bit level (\\emph{i.e.,} 33.05G BitOps), we also achieve the highest top-1 accuracy among prior arts, whether fixed-precision or mixed-precision quantization methods. \nA result initialized with a higher-accuracy FP model is also provided for a fair comparison.\n\n\\subsubsection{ResNet50}\nIn Table \\ref{tab:resnet50_result}, we show results that not only perform a BitOps-constrained MPQ search but also set a model size constraint (\\emph{i.e.,} a 12.2$\\times$ compression rate). \n\\begin{table}[!h]\n\\centering\n\\caption{Results for ResNet50 on ImageNet with BitOps and compression rate constraints. ``W-C'' means the weight compression rate; the size of the original full-precision model is 97.28 MB. ``Size'' means the quantized model size (MB). \n}\n\\setlength{\\tabcolsep}{0.25mm}\n\\begin{tabular}{c|ccccccc}\n\\hline\nMethod & W-bits & A-bits & \\quad Top-1\/Quant & \\quad Top-1\/FP & \\quad Top-1\/Drop & W-C & Size (MB)\\\\ \\hline\nPACT & 3 & 3 & 75.3 & 76.9 & -1.6 & 10.67$\\times$ & 9.17\\\\\nLQ-Net & 3 & 3 & 74.2 & 70.3 & -2.1 & 10.67$\\times$ & 9.17\\\\\nDeepComp& 3MP & 8 & 75.1 & 76.2 & -1.1 & 10.41$\\times$ & 9.36 \\\\ \nHAQ & 3MP & 8 & 75.3 & 76.2 & -0.9 & 10.57$\\times$ & 9.22 \\\\\nBP-NAS & 4MP & 4MP & 76.7 & 77.5 & -0.8 & 11.1$\\times$ & 8.76 \\\\\nAutoQ & 4MP & 3MP & 72.5 & 74.8 & -2.3 & - & - \\\\\nHAWQ & MP & MP & 75.5 & 77.3 & -1.8 & 12.2$\\times$ & 7.99\\\\\nHAWQv2 & MP & MP & 75.8 & 77.3 & -1.5 & 12.2$\\times$ & 7.99\\\\\nMPQCO & 2MP & 4MP & 75.3 & 76.1 & -0.8 & 12.2$\\times$ & 7.99\\\\\n\\hdashline\nOurs & 3MP & 4MP & \\textbf{76.9} & 77.5 & \\textbf{-0.6} & \\textbf{12.2$\\times$} & \\textbf{7.97}\\\\\n\\hline\n\\end{tabular}\n\\label{tab:resnet50_result}\n\\end{table}\n\nWe can observe that our method achieves much better performance than PACT, LQ-Net, DeepComp, and HAQ, with a much smaller model size (\\emph{i.e.,} more than 9 MB vs. 7.97 MB).\nIn addition, the accuracy degradation of our method is smaller than that of the criterion-based methods HAWQ, HAWQv2, and MPQCO, which indicates that our quantization-aware search and unrestricted search space are necessary for discovering a well-performing MPQ policy.\n\n\\subsubsection{MobileNetv1}\n\\begin{table*}[!htb]\n\\centering\n\\begin{minipage}[t]{8cm}\n\\centering\n\\footnotesize\n\\makeatletter\\def\\@captype{table}\\makeatother\\caption{Results for MobileNetv1 on ImageNet with BitOps constraints. ``W-b'' and ``A-b'' mean the weight and activation bit-widths. ``Top-1'' and ``Top-5'' represent the top-1 and top-5 accuracy of the quantized model, respectively. 
``B (G)'' means BitOps (G).}\n\\begin{tabular}{cccccc}\n\\hline\nMethod & W-b & A-b & Top-1 & Top-5 & B (G) \\\\ \\hline\nPROFIT & 4 & 4 & 69.05 & 88.41 & 9.68 \\\\\nPACT & 6 & 4 & 67.51 & 87.84 & 14.13 \\\\\nHMQ & 3MP & 4MP & 69.30 & - & - \\\\\nHAQ & 4MP & 4MP & 67.45 & 87.85 & - \\\\\nHAQ & 6MP & 4MP & 70.40 & 89.69 & - \\\\\n\n\\hdashline\nOurs & 3MP & 3MP & \\textbf{69.48} & \\textbf{89.11} & \\textbf{5.78}\\\\\nOurs & 4MP & 4MP & \\textbf{71.84} & \\textbf{90.38} & \\textbf{9.68} \\\\ \\hline\n\\label{tab:mobilenetv1_result}\n\\end{tabular}\n\\end{minipage}\\hspace{4mm}\n\\begin{minipage}[t]{6.5cm}\n\\centering\n\\footnotesize\n\\makeatletter\\def\\@captype{table}\\makeatother\\caption{Weight-only quantization results for MobileNetv1 on ImageNet.\n``W-b'' means the weight bit-width. ``S (M)'' means the quantized model size (MB).}\n \\begin{tabular}{ccccc}\n \n\\hline\nMethod & W-b & Top-1 & Top-5 & S (M) \\\\ \\hline\nDeepComp& 3MP & 65.93 & 86.85 & 1.60 \\\\\nHAQ & 3MP & 67.66 & 88.21 & 1.58 \\\\\nHMQ & 3MP & 69.88 & - & \\textbf{1.51} \\\\\n\\hdashline\nOurs & 3MP & \\textbf{71.57} & \\textbf{90.30} & 1.79 \\\\ \\hline\nPACT & 8 & 70.82 & 89.85 & 4.01 \\\\\nDeepComp& 4MP & 71.14 & 89.84 & 2.10 \\\\\nHAQ & 4MP & 71.74 & 90.36 & \\textbf{2.07} \\\\\nHMQ & 4MP & 70.91 & - & 2.12 \\\\\n\\hdashline\nOurs & 4MP & \\textbf{72.60} & \\textbf{90.83} & 2.08\\\\ \\hline\n\\label{tab:mobilenetv1_weight_only_result}\n\\end{tabular}\n \\end{minipage}\n \n\\end{table*}\nIn Table \\ref{tab:mobilenetv1_result}, we show the results under two BitOps constraints: a 3-bit level (5.78G BitOps) and a 4-bit level (9.68G BitOps).\nEspecially at the 4-bit level, we achieve a meaningful accuracy improvement (up to 4.39\\%) compared to other MPQ methods.\n\nIn Table \\ref{tab:mobilenetv1_weight_only_result}, we show the weight-only quantization results.\nWe find that the accuracy of our 1.79 MB model even surpasses that of the 2.12 MB HMQ model.\n\n\n\n\n\\subsection{Mixed-Precision Quantization Policy Search Efficiency}\n\\label{sec:MPQ Policy Search Efficiency}\nHere, we compare the efficiency of our method with other SOTA MPQ algorithms under an unrestricted search space (\\emph{i.e.,} MPQ for both weights and activations instead of weight-only MPQ, and layer-wise instead of block-wise MPQ).\n\nThe time consumption of our method consists of three parts: \n\\emph{1)} importance indicator training, \n\\emph{2)} MPQ policy search, and \n\\emph{3)} quantized model fine-tuning.\nThe last part is necessary for all MPQ algorithms, while searching the MPQ policy is the biggest bottleneck (\\emph{e.g.,} AutoQ needs more than 1000 GPU-hours to determine the final MPQ policy); thus we mainly focus on the first two parts.\n\n\\subsubsection{Comparison with SOTAs on ResNet50}\nThe first part leverages the joint training technique (see \\cref{sec:one_time_importance_training}) to obtain the importance indicators for all layers and their corresponding bit-widths, and it only needs to be done once. \nIt takes about 50 minutes to train the network (using 50\\% of the training set) on 4 NVIDIA A100 GPUs (\\emph{i.e.,} 3.3 GPU-hours). \nThe second part solves the ILP problem. It consumes 0.35 seconds on a six-core Intel i7-8700 (at 3.2 GHz) CPU, which is negligible. \n\nHence, supposing we have $z$ different devices with diverse computing capabilities to deploy to, our method consumes $50+0.35 \\times \\frac{1}{60} \\times z$ minutes to finish the whole MPQ search process. 
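\nAs a back-of-the-envelope illustration of this cost model (the constants are the figures quoted above; the helper is purely illustrative):\n\\begin{verbatim}\ndef total_search_minutes(z):\n    # 50 min of one-time indicator training on 4 GPUs,\n    # plus about 0.35 s of ILP solving per device.\n    return 50 + 0.35 \/ 60 * z\n\n# z = 1   ->  about 50.01 minutes\n# z = 100 ->  about 50.58 minutes\n\\end{verbatim}\nEven for a large fleet of target devices, the total cost stays close to the one-time 50-minute training.\n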
\n\n\\textbf{Compared with the search-based approach}, AutoQ \\cite{lou2019autoq} needs 1000 GPU-hours to find the final MPQ policy for a single device, which means it needs $1000z$ GPU-hours to search MPQ policies for these $z$ devices. \nThus we achieve about $\\mathbf{330z\\times}$ speedup and obtain a higher accuracy model simultaneously. \n\n\\textbf{Compared with the criterion-based approach}, \nHAWQv2 \\cite{dong2019hawq2} takes 30 minutes on 4 GPUs to approximate the Hessian trace. \nThe total time consumption of HAWQv2 for these $z$ devices is $30+c \\times \\frac{1}{60} \\times z$ minutes, and $c$ is the time consumption for solving a Pareto frontier based MPQ search algorithm with less than 1 minute.\nThus if $z$ is large enough, our method has almost the same time overhead as HAWQv2.\nIf $z$ is small, \\emph{e.g.,} $z=1$, our method only needs a one-time additional 20-minute investment for the cold start of first part, but resulting in a significant accurate model (\\emph{i.e.,} 1.1\\% top-1 accuracy improvement).\n\n\n\\subsection{Ablation Study}\n\\begin{table}[h]\n\\centering\n\\caption{Ablation study for MobileNetv1 on ImageNet.}\n\\begin{tabular}{c|ccccc}\n\\hline\nMethod & W-bits & A-bits & Top-1\/Quant & Top-5\/Quant & BitOps \\\\ \\hline\nOurs & 3MP & 3MP & 69.48 & 89.11 & 5.78 \\\\\nOurs & 4MP & 4MP & 71.84 & 90.38 & 9.68 \\\\\nOurs-R & 4MP & 4MP & 65.25 & 86.15 & 9.68 \\\\\n\\hline\n\\end{tabular}\n\\label{tab:ablation_study}\n\\end{table}\nIn Figure \\ref{fig:mbv1_separate_importance} and its analysis, we empirically verify that the layers with bigger scale factor values are more sensitive to quantization when their quantization bit-width is reduced.\nBased on this observation, we propose our ILP-based MPQ policy search method. \nHowever, an intuitive question is \\emph{what if we reverse the correlation between scale factors and sensitivity}. \nNamely, what if we gave the layers with smaller scale factor values more bit-widths instead of fewer bit-widths. \nAnd, what if we gave the layers with bigger scale factor values fewer bit-widths instead of more bit-widths. \n\nThe result is shown in Table \\ref{tab:ablation_study}, we use ``Ours-R'' to denote the result of reversed bit-width assignment manner; ``Ours'' results come from Table \\ref{tab:mobilenetv1_result} directly to represent the routine (not reversed) bit-width assignment manner. \n\nWe observe that ``Ours-R'' has 6.59\\% top-1 accuracy lower than our routine method under the same BitOps constraint. \nMore seriously, it has 4.23\\% absolute accuracy gap between ``Ours-R'' (with 4-bits level constrainted, \\emph{i.e.,} 9.68 BitOps) and a 3-bits level (\\emph{i.e.,} 5.78G BitOps) routine result.\nSuch a colossal accuracy gap demonstrates that our ILP-based MPQ policy search method is reasonable.\n\n\\subsection{Bit-width Assignment Visualization}\nWe visualize the bit-width assignment for MobileNet and ResNet50 in Figure \\ref{fig:bit_width_assignment} to understand the behavior of our method.\nWe can observe that the top layers tend to be assigned more bit-widths due to their extraction of low-level features. \nIn particular, in MobileNet, DW-convs are assigned higher bit-width due to their sensitivity to quantization. 
\nThus, we can conclude that our method achieves not only better performance but also a reasonable bit-width assignment through the learned importance indicators.\n\n\\begin{figure}[!htb]\n\\subfigure{\n\\begin{minipage}[t]{0.48\\linewidth}\n\\centering\n \\includegraphics[scale=0.3]{fig\/mbv1_avg4.pdf}\n \n\\end{minipage}\\hspace{2mm} \\\\\n\\begin{minipage}[t]{0.48\\linewidth}\n\\centering\n \\includegraphics[scale=0.3]{fig\/resnet50_bitwidth.pdf}\n\\end{minipage}\n}\n\\caption{Bit-width assignments for MobileNet and ResNet50.}\n\\label{fig:bit_width_assignment}\n\\end{figure}\n\\section{Conclusion}\nIn this paper, we propose a novel MPQ method that leverages parameters unique to quantization, namely the scale factors in the quantizer, as importance indicators to assign the bit-width for each layer. \nWe empirically demonstrate the association between these importance indicators and the quantization sensitivity of layers. \nWe conduct extensive experiments to verify the effectiveness of using these learned importance indicators to represent the contribution of a layer under a specific bit-width to the final performance, as well as to demonstrate the rationality of the bit-width assignment obtained by our method. \nFor example, on ResNet50, compared to the search-based method AutoQ, our method saves 330$\\times$ MPQ policy search time for a single device. \nCompared to the criterion-based methods HAWQv2 and MPQCO, our method improves top-1 accuracy on ImageNet by 1.1\\% and 1.6\\%, respectively. \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\nThe comprehensive formulation for loop quantum cosmology (LQC) in the spatially flat-isotropic model has been constructed \\cite{Ashtekar:2006uz,Ashtekar:2006rx}. With a massless scalar field serving as the \\emph{emergent time}, the result shows that the quantum evolution is \\emph{deterministic across the deep Planck regime} and in the backward evolution of the states which are semiclassical at late times, \\emph{the big bang is replaced by a big bounce}. Based on the same principles, the construction was further improved by a more direct implementation of the underlying physical ideas of loop quantum gravity (LQG) \\cite{Ashtekar:2006wn}. In the improved dynamics, \\emph{the big bounce occurs precisely when the matter density enters the Planck regime}, regardless of the value of the momentum $p_\\phi$ of the scalar field.\n\nBoth the precursor strategy (``$\\mu_o$-scheme'') and the improved strategy (``$\\mubar$-scheme'') were applied and reconstructed for the Bianchi I model to include anisotropy \\cite{Chiou:2006qq}. The analytical investigation shows that the state in the kinematical Hilbert space associated with the classical singularity is \\emph{completely decoupled} in the difference evolution equation, indicating that the classical singularity is resolved in the quantum evolution and the big bounce may take place when any of the area scales undergoes the vanishing behavior.\n\nWhile a thorough numerical investigation remains to be done to draw a definite conclusion about the details of the quantum evolution in the Bianchi I model, this paper studies its effective dynamics with LQC discreteness corrections. 
Not only does the result affirm the anticipations in \\cite{Chiou:2006qq} but more intuitive pictures are also obtained in this semiclassical approach, giving an insight into how and why the big bounces take place.\n\nIn accordance with the formulation in \\cite{Chiou:2006qq}, this paper focus specifically on the model with a massless scalar field. In the context of effective dynamics with LQC discreteness corrections, the similar analysis for more generic Bianchi I models with the inclusion of arbitrary matter with equation of state $w<+1$ is also investigated in \\cite{Chiou:2007sp}, which gives the similar results for the occurrence of big bounces with only difference in detail. With arbitrary matter sources, however, the equations of motion are very complicated and a proper approximation has to be used. By contrast, in the special case of a massless scalar field, the equations of motion can be solved analytically and therefore the underlying physics is more transparent.\n\nThis paper is organized as follows. In \\secref{sec:classical dynamics}, the classical dynamics of the Bianchi I cosmology with a massless scalar source is solved in terms of Ashtekar variables in Hamiltonian formulation. The effective dynamics with LQC corrections in $\\mubar$-scheme is constructed and solved in \\secref{sec:mubar dynamics}. Its phenomenological ramifications are discussed in \\secref{sec:discussion}. As a comparison to the $\\mubar$-scheme, the effective dynamics in $\\mu_o$-scheme is also included in \\appref{sec:muzero dynamics}.\n\n\\section{Classical Dynamics}\\label{sec:classical dynamics}\nThe spacetime metric of Bianchi type I is given as:\n\\begin{equation}\nds^2=-dt^2+a_1^2(t)dx^2+a_2^2(t)dy^2+a_3^2(t)dz^2.\n\\end{equation}\nIn terms of Ashtekar variables, the phase space of Bianchi I models is given by the diagonal triad\nvariables $p_I$ and diagonal connection variables $c_I$ for\n$I=1,2,3$, which satisfy the canonical relation:\n\\begin{equation} \\{c_I,p_J\\}=8\\pi G\\gamma\\,\\delta_{IJ}.\n\\end{equation}\nThe triad variables $p_I$ are\nrelated with the length scale factors $a_I$ via:\n\\begin{equation}\\label{eqn:p and a}\np_1=a_2a_3,\\qquad\np_2=a_1a_3,\\qquad p_3=a_1a_2.\n\\end{equation}\nIn the presence of a massless scalar field $\\phi(\\vec{x},t)=\\phi(t)$,\n(which is independent of the spatial coordinates with homogeneity assumed),\nthe classical dynamics is govern\nby the Hamiltonian constraint:\n\\begin{eqnarray}\\label{eqn:cl Hamiltonian}\n&&C=C_{\\rm grav}+C_\\phi\\\\\n&=&-\\frac{\\big(c_2p_2c_3p_3+c_1p_1c_3p_3+c_1p_1c_2p_2\\big)}{8\\pi G\\gamma^2\\sqrt{{p_1p_2p_3}}}\n+\\frac{p_\\phi^2}{2\\sqrt{p_1p_2p_3}},\\nonumber\n\\end{eqnarray}\nwhere $p_\\phi$ is the conjugate momentum of $\\phi$ and has the canonical relation with $\\phi$:\n\\begin{equation}\n\\{\\phi,p_\\phi\\}=1.\n\\end{equation}\n\nWe can simplify the Hamiltonian by choosing the lapse function $N=\\sqrt{p_1p_2p_3}$\nand thus introducing the new time variable $dt'=(p_1p_2p_3)^{-1\/2}dt$.\nThe rescaled Hamiltonian constraint is given by\n\\begin{equation}\\label{eqn:cl rescaled Hamiltonian}\nH=-\\frac{\\left(c_2p_2c_3p_3+c_1p_1c_3p_3+c_1p_1c_2p_2\\right)}{8\\pi G\\gamma^2}\n+\\frac{p_\\phi^2}{2}.\\\\\n\\end{equation}\n\nThe equations of motion are governed by the Hamilton's equations:\n\\begin{eqnarray}\n\\label{eqn:cl eom 1}\n\\frac{dp_\\phi}{dt'}&=&\\{p_\\phi,H\\}=0\\quad\\Rightarrow\\quad\np_\\phi\\ \\text{is constant}\\\\\n\\label{eqn:cl eom 2}\n\\frac{d\\phi}{dt'}&=&\\{\\phi,H\\}=p_\\phi,\\\\\n\\label{eqn:cl 
eom 3}\n\\frac{dc_1}{dt'}&=&\\{c_1,H\\}=8\\pi G\\gamma\\,\\frac{\\partial\\, H}{\\partial p_1}\\nonumber\\\\\n&=&-\\gamma^{-1}c_1\\left(c_2p_2+c_3p_3\\right),\\\\\n\\label{eqn:cl eom 4}\n\\frac{dp_1}{dt'}&=&\\{p_1,H\\}=-8\\pi G\\gamma\\,\\frac{\\partial\\, H}{\\partial c_1}\\nonumber\\\\\n&=&\\gamma^{-1}p_1\\left(c_2p_2+c_3p_3\\right),\n\\end{eqnarray}\nand so on for $c_2$, $c_3$, $p_2$, $p_3$ in the cyclic manner.\nIn addition to Hamilton's equations, the constraint that the Hamiltonian must vanish yields\n\\begin{eqnarray}\\label{eqn:cl eom 5}\n&&H(c_I,p_I)=0\\quad\n\\Rightarrow\\\\\np_\\phi^2&=&\\frac{1}{4\\pi G\\gamma^2}\n\\big(c_2p_2c_3p_3+c_1p_1c_3p_3+c_1p_1c_2p_2\\big).\\nonumber\n\\end{eqnarray}\n\nCombining \\eqnref{eqn:cl eom 3} and \\eqnref{eqn:cl eom 4} gives\n\\begin{eqnarray}\\label{eqn:const Ki}\n\\frac{d}{dt'}(p_Ic_I)=0,\\quad\\Rightarrow\\quad\np_Ic_I=8\\pi G\\gamma\\hbar\\,{\\cal K}_I,\n\\end{eqnarray}\nwhere ${\\cal K}_I$ are dimensionless constants, which will be used to\nparameterize the solutions of evolution.\nTaking \\eqnref{eqn:const Ki} into \\eqnref{eqn:cl eom 5}, we have\n\\begin{equation}\\label{eqn:p_ph and K}\np_\\phi^2=16\\pi G\\hbar^2\n\\left\\{{\\cal K}_2{\\cal K}_3+{\\cal K}_1{\\cal K}_3+{\\cal K}_1{\\cal K}_2\\right\\}\n\\end{equation}\nor equivalently\n\\begin{equation}\\label{eqn:K}\n{\\cal K}_\\phi^2=2\n\\left({\\cal K}_2{\\cal K}_3+{\\cal K}_1{\\cal K}_3+{\\cal K}_1{\\cal K}_2\\right),\n\\end{equation}\nif we define\n\\begin{equation}\\label{eqn:def of p_ph}\np_\\phi:=\\hbar\\sqrt{8\\pi G}\\,{\\cal K}_\\phi.\n\\end{equation}\n\nPutting \\eqnref{eqn:const Ki} into \\eqnref{eqn:cl eom 4} gives\n\\begin{equation}\n\\frac{1}{p_1}\\frac{dp_1}{dt'}={8\\pi G \\hbar}\\left({\\cal K}_2+{\\cal K}_3\\right).\n\\end{equation}\nBy referring to \\eqnref{eqn:cl eom 2}, this leads to\n\\begin{equation}\\label{eqn:cl diff eq 2}\n\\frac{1}{p_I}\\frac{dp_I}{d\\phi}=8\\pi G\\hbar\\,\\frac{{\\cal K}_2+{\\cal K}_3}{p_\\phi}\n=\\sqrt{8\\pi G}\\,\\Big(\\frac{1-\\kappa_I}{\\kappa_\\phi}\\Big),\n\\end{equation}\nwhere we scale the parameters ${\\cal K}_I={\\cal K}\\kappa_I$, ${\\cal K}_\\phi={\\cal K}\\kappa_\\phi$ such that\n\\begin{equation}\\label{eqn:para constraint}\n\\kappa_1+\\kappa_2+\\kappa_3=1,\n\\qquad\n\\kappa_1^2+\\kappa_2^2+\\kappa_3^2+\\kappa_\\phi^2=1.\n\\end{equation}\nRegarding $\\phi$ as the \\emph{emergent time}, the solutions of evolution are given by\n\\begin{equation}\\label{eqn:cl sol 1}\np_I(\\phi)=p_I(\\phi_0)\\,e^{\\sqrt{8\\pi G}\\big(\\frac{1-\\kappa_I}{\\kappa_\\phi}\\big)(\\phi-\\phi_0)},\n\\end{equation}\nor equivalently\n\\begin{equation}\\label{eqn:cl sol 2}\na_I(\\phi)=a_I(\\phi_0)\\,e^{\\sqrt{8\\pi G}\\,\\frac{\\kappa_I}{\\kappa_\\phi}(\\phi-\\phi_0)}.\n\\end{equation}\n\nThe classical Bianchi I model with a massless scalar field admits both ``Kasner-like''\n(two of $\\kappa_I$ positive and the other negative) and ``Kasner-unlike''\n(all $\\kappa_I$ positive) solutions.\nThe Kasner-like solution, which has two expanding directions and one contracting direction (say $\\kappa_\\phi>0$),\neventually encounters the ``Kasner-like singularity''\n(a given regular cubical cell stretches as an infinitely long line) in the far past\nand the ``planar collapse'' (a regular cubical cell stretches as an infinitely large plane)\nin the far future. 
On the other hand, the Kasner-unlike solution, with all directions\nexpanding, encounters the ``Kasner-unlike singularity''\n(a regular cubical cell vanishes to a point) in the far past and no\nplanar collapse.\n\nWe will see that with LQC discreteness corrections, both Kasner-like and Kasner-unlike singularities are resolved and replaced by the \\emph{big bounces},\nwhereas the planar collapse remains the destiny even as one of the three diagonal directions\napproaches an infinitely small length scale.\n\n\\section{Effective Dynamics in $\\mubar$-Scheme}\\label{sec:mubar dynamics}\nIn LQC, the connection variables $c_I$ do not exist and should be replaced by\nholonomies. In the effective theory, to capture the quantum corrections, following the procedures used in the isotropic case \\cite{Taveras:IGPG preprint,Singh:2005xg}, we take the\nprescription to replace $c_I$ by $\\sin(\\mubar_Ic_I)\/\\mubar_I$, introducing discreteness\nvariables $\\mubar_I$. In the improved strategy ($\\mubar$-scheme) used in\nBianchi I LQC \\cite{Chiou:2006qq}, $\\mubar_I$ are not fixed constants but given by\n\\begin{equation}\n\\mubar_1=\\sqrt{\\frac{\\Delta}{p_1}}\\,,\\quad\n\\mubar_2=\\sqrt{\\frac{\\Delta}{p_2}}\\,,\\quad\n\\mubar_3=\\sqrt{\\frac{\\Delta}{p_3}}\\,,\n\\end{equation}\nwhere $\\Delta=\\frac{\\sqrt{3}}{2}(4\\pi\\gamma\\Pl^2)$ is the \\emph{area gap} in the full theory of LQG.\n\nImposing this prescription plus the loop quantum correction to the inverse triad on \\eqnref{eqn:cl Hamiltonian}, we have the effective Hamiltonian constraint to the leading order:\n\\begin{eqnarray}\\label{eqn:qm Hamiltonian original}\nC_{\\rm eff}&=&f(p_1)f(p_2)f(p_3)\\frac{p_\\phi^2}{2}\n-\\frac{f(p_1)f(p_2)f(p_3)}{8\\pi G \\gamma^2}\\\\\n&&\\quad\\times\n\\left\\{\n\\frac{\\sin(\\mubar_2c_2)\\sin(\\mubar_3c_3)}{\\mubar_2\\mubar_3}p_2p_3+\n\\text{cyclic terms}\n\\right\\},\\nonumber\n\\end{eqnarray}\nwhere $f(p_I)$ is the eigenvalue of the inverse triad operator $\\widehat{1\/\\sqrt{p_I}}$. The loop quantization gives the quantum corrections:\n\\begin{equation}\nf(p_I)\\sim\n\\left\\{\n\\begin{array}{cr}\n\\frac{1}{\\sqrt{{p_I}}}\\left(1+{\\cal O}(\\Pl^2\/p_I)\\right) & \\text{for}\\ p_I\\gg\\Pl^2\\\\\n\\propto{p_I}^n\/\\Pl^{2n+1} & \\text{for}\\ p_I\\ll\\Pl^2\n\\end{array}\n\\right.\n\\end{equation}\nwith the Planck length $\\Pl:=\\sqrt{G\\hbar}$ and a positive $n$. The corrections to $f(p_I)$ are significant only in the Planckian region in the vicinity of $p_I=0$. From now on, we will ignore the quantum corrections to $f(p_I)$ by simply taking its classical function $f(p_I)=p_I^{-1\/2}$. 
[We will see that in the backward evolution the big bounce takes place well before the discreteness correction on the inverse triad operator becomes considerable, and it is the ``non-locality'' effect (i.e., using the holonomies) that accounts for the occurrence of the big bounce.]\n\nWith $f(p_I)=p_I^{-1\/2}$, by choosing $dt'=(p_1p_2p_3)^{-1\/2}dt$, the Hamiltonian constraint \\eqnref{eqn:qm Hamiltonian original} can be rescaled as\n\\begin{eqnarray}\\label{eqn:qm Hamiltonian}\n&&H_\\mubar=\\frac{p_\\phi^2}{2}\\\\\n&&\\quad\n-\\frac{1}{8\\pi G \\gamma^2}\n\\left\\{\n\\frac{\\sin(\\mubar_2c_2)\\sin(\\mubar_3c_3)}{\\mubar_2\\mubar_3}p_2p_3+\n\\text{cyclic terms}\n\\right\\}.\\nonumber\n\\end{eqnarray}\nAgain, the equations of motion are given by Hamilton's equations and\nthe constraint that the Hamiltonian must vanish:\n\\begin{eqnarray}\n\\label{eqn:qm eom 1}\n\\frac{dp_\\phi}{dt'}&=&\\{p_\\phi,H_\\mubar\\}=0\\quad\\Rightarrow\\quad\np_\\phi\\ \\text{is constant}\\\\\n\\label{eqn:qm eom 2}\n\\frac{d\\phi}{dt'}&=&\\{\\phi,H_\\mubar\\}=p_\\phi,\\\\\n\\label{eqn:qm eom 3}\n\\frac{dc_1}{dt'}&=&\\{c_1,H_\\mubar\\}=8\\pi G\\gamma\\,\\frac{\\partial\\, H_\\mubar}{\\partial p_1}\\nonumber\\\\\n&=&-\\gamma^{-1}\n\\left(\\frac{3\\sin(\\mubar_1c_1)}{2\\mubar_1}-\\frac{c_1\\cos(\\mubar_1c_1)}{2}\\right)\\nonumber\\\\\n&&\\quad\\ \\times\n\\left(\\frac{\\sin(\\mubar_2c_2)}{\\mubar_2}p_2\n+\\frac{\\sin(\\mubar_3c_3)}{\\mubar_3}p_3\\right),\\\\\n\\label{eqn:qm eom 4}\n\\frac{dp_1}{dt'}&=&\\{p_1,H_\\mubar\\}=-8\\pi G\\gamma\\,\\frac{\\partial\\, H_\\mubar}{\\partial c_1}\\nonumber\\\\\n&=&\\gamma^{-1}p_1\\cos(\\mubar_1c_1)\\nonumber\\\\\n&&\\quad\\ \\times\n\\left(\\frac{\\sin(\\mubar_2c_2)}{\\mubar_2}p_2\n+\\frac{\\sin(\\mubar_3c_3)}{\\mubar_3}p_3\\right),\n\\end{eqnarray}\nand\n\\begin{eqnarray}\\label{eqn:qm eom 5}\n&&H_\\mubar(c_I,p_I)=0\\quad\n\\Rightarrow\\qquad p_\\phi^2=\\\\\n&&\\frac{1}{4\\pi G\\gamma^2}\n\\left\\{\n\\frac{\\sin(\\mubar_2c_2)\\sin(\\mubar_3c_3)}{\\mubar_2\\mubar_3}p_2p_3+\n\\text{cyclic terms}\n\\right\\}.\\nonumber\n\\end{eqnarray}\n[Note that in the classical limit $\\mubar_Ic_I\\rightarrow0$, we have\n$\\sin(\\mubar_Ic_I)\/\\mubar_I\\rightarrow c_I$,\n$\\cos(\\mubar_Ic_I)\\rightarrow1$ and therefore\n\\eqnref{eqn:qm eom 3}--\\eqnref{eqn:qm eom 5} reduce to their\nclassical counterparts \\eqnref{eqn:cl eom 3}--\\eqnref{eqn:cl eom 5}.]\n\nBy \\eqnref{eqn:qm eom 3} and \\eqnref{eqn:qm eom 4}, we have\n\\begin{eqnarray}\\label{eqn:qm dpc\/dt'}\n&&\\left(\\frac{3\\sin(\\mubar_Ic_I)}{2\\mubar_I}-\\frac{c_I\\cos(\\mubar_Ic_I)}{2}\\right)\\frac{dp_I}{dt'}\n+p_I\\cos(\\mubar_Ic_I)\\frac{dc_I}{dt'}\\nonumber\\\\\n&&=\\frac{d}{dt'}\\left[p_I\\frac{\\sin(\\mubar_Ic_I)}{\\mubar_I}\\right]=0,\n\\end{eqnarray}\nwhich gives\n\\begin{equation}\\label{eqn:qm pc}\np_I\\frac{\\sin(\\mubar_Ic_I)}{\\mubar_I}\n=8\\pi G\\gamma\\hbar\\,{\\cal K}_I.\n\\end{equation}\\\\\nTaking \\eqnref{eqn:qm pc} into \\eqnref{eqn:qm eom 5} again\ngives the same constraints on the constant parameters as in\n\\eqnref{eqn:p_ph and K} or \\eqnref{eqn:K}.\n\nSubstituting \\eqnref{eqn:qm pc} into \\eqnref{eqn:qm eom 4} yields\n\\begin{equation}\\label{eqn:qm diff eq 1}\n\\frac{1}{p_1}\\frac{dp_1}{dt'}\n=8\\pi G \\hbar\\,\\cos(\\mubar_1c_1)({\\cal K}_2+{\\cal K}_3).\n\\end{equation}\nBy regarding $\\phi$ as the emergent time via \\eqnref{eqn:qm eom 2} and expressing $\\cos x=\\pm\\sqrt{1-\\sin^2 x}$,\n\\eqnref{eqn:qm diff eq 1} then gives\n\\begin{equation}\\label{eqn:qm diff eq
2}\n\\frac{1}{p_I}\\frac{dp_I}{d\\phi}=\n\\pm\\sqrt{8\\pi G}\\,\\Big(\\frac{1-\\kappa_I}{\\kappa_\\phi}\\Big)\n\\left[1-\\frac{\\varrho_I}{\\varrho_{I\\!,\\,{\\rm crit}}}\\right]^{1\/2},\n\\end{equation}\nwhere we define the \\emph{directional density}:\n\\begin{equation}\n\\varrho_I:=\\frac{p_\\phi^2}{p_I^3}\n\\end{equation}\nfor the $I$-direction and its critical value is given by the \\emph{Planckian matter density}\n$\\rho_{\\rm Pl}$ times a numerical factor:\n\\begin{equation}\n\\varrho_{I\\!,\\,{\\rm crit}}:=\\left(\\frac{\\kappa_\\phi}{\\kappa_I}\\right)^2\\rho_{\\rm Pl},\n\\qquad\n\\rho_{\\rm Pl}:=(8\\pi G \\gamma^2\\Delta)^{-1}.\n\\end{equation}\n\n\n\\begin{widetext}\n\n\\begin{figure}\n\\begin{picture}(400,150)(0,0)\n\n\\put(-60,0){\n\\begin{picture}(460,150)(0,0)\n\n\n\\put(10,133){(a)}\n\\put(175,133){(b)}\n\\put(345,133){(c)}\n\n\n\\resizebox{\\textwidth}{!}\n{\\includegraphics{fig1.eps}}\n\n\\end{picture}\n}\n\n\\end{picture}\n\\caption{$\\kappa_1=\\kappa_2=\\kappa_3=1\/3$, $\\kappa_\\phi=\\sqrt{2\/3}$; $p_1(\\phi_0)=1\\times10^4\\Pl^2$,\n$p_2(\\phi_0)=2\\times10^4\\Pl^2$, $p_3(\\phi_0)=3\\times10^4\\Pl^2$;\nand $p_\\phi=2\\times10^3\\hbar\\sqrt{8\\pi G}$ (i.e., ${\\cal K}\\kappa_\\phi=2\\times10^3$).\nThe\n{red} lines are for $p_1$, $a_1$, $\\varrho_1$;\n{green} for $p_2$, $a_2$, $\\varrho_2$;\nand\n{blue} for $p_3$, $a_3$, $\\varrho_3$. The values of\n$\\varrho_{1,\\,{\\rm crit}}$, $\\varrho_{2,\\,{\\rm crit}}$\nand $\\varrho_{3,\\,{\\rm crit}}$ are pointed by the arrow(s) in (c). (The Barbero-Immirzi parameter is set to $\\gamma=1$.)}\\label{fig:fig1}\n\\end{figure}\n\n\\begin{figure}\n\\begin{picture}(400,150)(0,0)\n\n\\put(-60,0){\n\\begin{picture}(460,150)(0,0)\n\n\n\\put(10,133){(a)}\n\\put(175,133){(b)}\n\\put(345,133){(c)}\n\n\n\\resizebox{\\textwidth}{!}\n{\\includegraphics{fig2.eps}}\n\n\\end{picture}\n}\n\n\\end{picture}\n\\caption{$\\kappa_1=1\/3$, $\\kappa_2=1\/5$, $\\kappa_3=7\/15$, $\\kappa_\\phi=\\sqrt{142}\/15$; $p_1(\\phi_0)=1\\times10^4\\Pl^2$,\n$p_2(\\phi_0)=2\\times10^4\\Pl^2$, $p_3(\\phi_0)=3\\times10^4\\Pl^2$;\nand $p_\\phi=2\\times10^3\\hbar\\sqrt{8\\pi G}$ (i.e., ${\\cal K}\\kappa_\\phi=2\\times10^3$).}\\label{fig:fig2}\n\\end{figure}\n\n\\begin{figure}\n\\begin{picture}(400,150)(0,0)\n\n\\put(-60,0){\n\\begin{picture}(460,150)(0,0)\n\n\n\\put(10,133){(a)}\n\\put(175,133){(b)}\n\\put(345,133){(c)}\n\n\n\\resizebox{\\textwidth}{!}\n{\\includegraphics{fig3.eps}}\n\n\\end{picture}\n}\n\n\\end{picture}\n\\caption{$\\kappa_1=1\/2$, $\\kappa_2=3\/4$, $\\kappa_3=-1\/4$, $\\kappa_\\phi=1\/\\sqrt{8}$; $p_1(\\phi_0)=p_2(\\phi_0)=p_3(\\phi_0)\n=1\\times10^4\\Pl^2$;\nand $p_\\phi=2\\times10^3\\hbar\\sqrt{8\\pi G}$ (i.e., ${\\cal K}\\kappa_\\phi=2\\times10^3$).}\\label{fig:fig3}\n\\end{figure}\n\n\\begin{figure}\n\\begin{picture}(400,150)(0,0)\n\n\\put(-60,0){\n\\begin{picture}(460,150)(0,0)\n\n\n\\put(10,133){(a)}\n\\put(175,133){(b)}\n\\put(345,133){(c)}\n\n\n\n\\resizebox{\\textwidth}{!}\n{\\includegraphics{fig4.eps}}\n\n\\end{picture}\n}\n\n\\end{picture}\n\\caption{$\\kappa_1=1\/2$, $\\kappa_2=3\/4$, $\\kappa_3=-1\/4$, $\\kappa_\\phi=1\/\\sqrt{8}$; $p_1(\\phi_0)=3\\times10^4\\Pl^2$,\n$p_2(\\phi_0)=2\\times10^4\\Pl^2$, $p_3(\\phi_0)=1\\times10^4\\Pl^2$;\nand $p_\\phi=2\\times10^3\\hbar\\sqrt{8\\pi G}$ (i.e., ${\\cal K}\\kappa_\\phi=2\\times10^3$).}\\label{fig:fig4}\n\\end{figure}\n\n\\end{widetext}\n\n\n\\section{Discussion}\\label{sec:discussion}\nAs opposed to the classical equation \\eqnref{eqn:cl diff eq 2}, in which $p_I$ continues to decrease toward 
the classical singularity in the backward evolution, the effective equation in \\eqnref{eqn:qm diff eq 2} flips sign exactly at the moment when $\\varrho_I$ approaches its critical value $\\varrho_{I\\!,\\,{\\rm crit}}$.\nNote that by \\eqnref{eqn:p and a} $p_I$ can be regarded as the \\emph{area} scale factors.\nTherefore, with the LQC discreteness corrections, \\eqnref{eqn:qm diff eq 2} shows that the singularities (both Kasner-like and Kasner-unlike) are resolved and replaced by the big bounces in the backward evolution when any of the area scales undergoes the vanishing behavior. Across the bounces, the equation of motion again comes closer and closer to the classical counterpart. Hence, the semiclassicality is retained on both asymptotic sides of the evolution.\n\nFurthermore, the detailed evolutions of $p_I$ are decoupled in different diagonal directions and evolve independently of one another once the initial conditions ($p_I(\\phi_o)$, $p_\\phi$ and $\\kappa_I$) are specified. Thus, the bounces occur up to three times, once in each direction, whenever each of the directional densities $\\varrho_I$ approaches its critical value.\nAs expected, in $\\mubar$-scheme, the critical values $\\varrho_{I\\!,\\,{\\rm crit}}$ are in the Planck regime of ${\\cal O}(\\hbar\\Pl^{-4})$ and \\emph{independent of the value of $p_\\phi$} ($\\varrho_{I\\!,\\,{\\rm crit}}$ depend on $p_\\phi$ only through the ratio $\\kappa_\\phi\/\\kappa_I\\equiv{\\cal K}_\\phi\/{\\cal K}_I$).\\footnote{In \\appref{sec:muzero dynamics}, the old precursor strategy ($\\mu_o$-scheme) is presented and it shows that the critical value of $\\varrho_I$ can be made arbitrarily small by increasing $p_\\phi$.}\nNote that $\\varrho_I$ have the same dimension as the matter density $\\rho:=p_\\phi^2\/(2p_1p_2p_3)$ and $\\varrho_I$ play the same role as $\\rho$ does in the isotropic case, signaling the occurrence of big bounces.\n\nOn the other hand, the planar collapse is \\emph{not} resolved but one of the length scale factors $a_I$ continues the vanishing behavior in the Kasner-like case. This is\nexpected since the classical solutions \\eqnref{eqn:const Ki} and \\eqnref{eqn:cl sol 1} yield $\\mubar_Ic_I\\rightarrow0$ (and $\\muzero_Ic_I\\rightarrow0$ in $\\mu_o$-scheme) toward the planar collapse and therefore the quantum corrections become more and more negligible (in both schemes).\n\nFor given initial conditions, the differential equation \\eqnref{eqn:qm diff eq 2} can be solved numerically. The behaviors of $p_I(\\phi)$, $a_I(\\phi)$ and $\\varrho_I(\\phi)$ are depicted in parts (a), (b) and (c) respectively in \\figref{fig:fig1} and \\figref{fig:fig2} for Kasner-unlike solutions and in \\figref{fig:fig3} and \\figref{fig:fig4} for Kasner-like solutions.\n\nThe fact that smallness of $p_I$ (not of $a_I$) is an indication of the occurrence of big bounces seems to support the suggestion that ``area is more fundamental than length in LQG'', although whether this is simply a technical artifact or reflects some deep physics is still not clear. (See Section VII.B of \\cite{Rovelli:1997yv} for some comments on this aspect and \\cite{Rovelli:1993vu} for more details.)\nMeanwhile, as the length operator has been shown to have a\ndiscrete spectrum \\cite{Thiemann:1996at}, the fact that the\nvanishing of the length scale factor in the planar collapse is not stopped seems to contradict the discreteness of the length spectrum. 
Whether we miss some important ingredients when imposing the fundamental discreteness of LQG in the LQC construction or indeed area is more essential than length remains an open question for further investigation.\n\nIt is also noteworthy that \\eqnref{eqn:qm diff eq 2} remains invariant if we rescale $p_\\phi\\rightarrow l^3 p_\\phi$ and $p_I\\rightarrow l^2p_I$ at the same time. This is reminiscent of the idea as suggested in \\cite{Rovelli:1990ph,Rovelli:1992vv} that area is measurable only if the surface is \\emph{coupled with the material reference}. The scaling invariance, however, breaks down in the full LQC theory since the quantum evolution is governed by a difference equation \\cite{Chiou:2006qq}, in which the step size of difference introduces an additional scale in the deep Planck regime.\\footnote{Therefore, the semiclassicality is retained in the full quantum theory only for large $p_\\phi$ and $p_I$. Accordingly, we put big values of $p_\\phi$ and $p_I(\\phi_o)$ in the figures to make sense of the semiclassical approach for the effective dynamics. The figures are trivially rescaled under the scaling.}\n\nMeanwhile, related to the above observation, the physical meaning of the directional densities $\\varrho_I$ can be interpreted as the (inverse of) area scales, again, \\emph{measured by the reference of the matter content}. The big bounces take place whenever one of the area scales becomes very small by the reference of the matter momentum. It is then attempting to regard not only $\\phi$ as the ``internal clock'' (emergent time) but also $p_\\phi$ as the ``internal rod'' --- namely, the measurement of both temporal and spatial geometries makes sense only in the presence of matter content. This observation may support the ideas of the relational interpretation of quantum mechanics with real rods and clocks such as studied in \\cite{Gambini:2006ph} (see also \\cite{Rovelli:1990ph,Rovelli:1992vv}), although the link is far from clear. If this concept is taken seriously, in return, we might be able to further improve the $\\mubar$-scheme to better reflect the underlying physics of LQG such that the difference equation of evolution in the full LQC theory also respects the scaling invariance mentioned above.\n\n\n\n\\begin{acknowledgements}\nThe author would like to thank Abhay Ashtekar, Golam Hossain, Tomasz Pawlowski, Parampreet Singh for useful discussions and especially Kevin Vandersloot for sharing his private notes and important ideas. This work was supported in part by the NSF grant PHY-0456913.\n\\end{acknowledgements}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\nFor people who are blind or visually impaired, tactile graphics are essential resources to learn and explore non-textual information such as maps, graphs, and pictorial diagrams. \nStudents with visual impairments often use tactile graphics to understand an abstract concept in physics, the structure of molecules in chemistry, or a human brain model in biology classes.\nAlthough such tactile representations are useful in presenting spatial and visual information, the form to associate the visual and textual information has been limited;\ntactile graphics can only present a fixed and limited amount of information to tactile readers~\\cite{brock2015interactivity}. 
\nBraille labels and keys are often used to annotate the content~\\cite{BANA}, but this annotation is not accessible for many blind users, as approximately 90 percent of blind people in the United States cannot read braille~\\cite{national2009braille, blindstatics2015}.\n\nTo overcome these limitations, recent work has focused on enhancing traditional tactile graphics with interactive tactile graphics, which leverage multimodal interaction to provide more accessible and adaptable information associated with the visual information.\nThese interactive graphics can detect touch input and annotate content with an audio description~\\cite{baker2014tactile, miele2006talking} or haptic feedback~\\cite{yu2001haptic}, which allows readers to explore the tactile content more efficiently than classical tactile maps or diagrams~\\cite{brock2015interactivity, landau2003merging}.\nHowever, the tactile representations of these systems are still static, and they cannot render dynamic content in tactile form~\\cite{jacobson1998navigating}.\nThrough our formative study, we found that the current way to present dynamic tactile representations is largely limited to small and expensive refreshable braille displays, which can significantly limit both potential applications and widespread adoption.\n\nThis paper proposes {\\it dynamic tactile markers}, a new approach to enhancing tactile graphics with multiple reconfigurable tactile elements.\nDynamic tactile markers are movable, self-adjusting physical elements that can render dynamic information on top of a traditional tactile graphic such as swell paper or thermoformed plastic.\nIn contrast to prior approaches, which utilize audio or haptic feedback, our approach aims to enhance {\\it tactile} feedback, which can provide real-time affordances and physical guides for blind or low vision users to explore spatial information.\n\nWe explore four application scenarios where the dynamic tactile markers help blind users accomplish the following tasks: (1) find locations on a map, (2) read and analyze dynamic data, (3) locate and identify specific features on tactile graphics, and (4) draw through dynamic assistance. In these scenarios, the dynamic tactile markers can be used as a tangible data point for data visualization, a location or navigating path on a map (Figure~\\ref{fig:system-concept}), or spatial reference points for guided drawings.\n\nThis approach is motivated by a formative study in which we asked four blind participants about the needs and challenges of current tactile representations. \nFrom the study we learned that high cost and small working space size are the major limitations for accessing the technology, which suggests an opportunity for a new type of interaction with tactile graphics.\nBased on these findings and the consideration of different actuation techniques, we explore electromagnetic actuation as a low-cost and scalable design for enhancing existing tactile graphics with dynamic tactile markers.\n\nTo demonstrate this concept, we present FluxMarker, a software and hardware prototype that actuates magnetic elements on top of a static tactile graphic. 
\\changes{FluxMarker can move multiple small magnets to a grid of possible locations by using an array of electromagnetic coils.} The coils are fabricated with standard printed circuit board (PCB) manufacturing techniques, which enables low-cost fabrication (40 USD for a 16x16 grid with 15cm x 15cm dimensions, and 500 USD for a 160x160 grid with 150cm x 150cm dimensions). With a modular design, the size of the display easily scales up without a significant increase in cost or fabrication complexity, while allowing independent control of multiple magnets.\n\nWe evaluated our prototype with six people with visual impairments to investigate the plausibility of the application scenarios identified during our formative work. \nWe found that all participants were able to use the FluxMarker to identify specific features on the tactile graphics faster than when they did not have a reference point, although they wanted to have the markers move along paths to guide them between landmarks. They were also interested in using the markers to create raised lines around specific tactile elements so that they could feel the boundaries and the contained tactile information. Our participants also noted the possibility for the system to annotate graphics in real-time, which would help them understand their data sets, interpret tactile graphics while teachers present the same information visually during lectures, and navigate in situ. Finally, our participants confirmed that the FluxMarker would help people, in particular young students, learn to draw. \n\n\nIn summary, our contributions are as follows: \n\\begin{itemize}\n\\item An approach to enhancing tactile graphics with dynamic tactile markers.\n\\item A design of low-cost, scalable actuated tangible markers, informed by a formative study with four blind people.\n\\item A hardware prototype of the PCB-manufactured electromagnetic coils and its technical evaluation.\n\\item A user evaluation study with four blind participants and two low vision participants, which illustrates the potential benefits of dynamic tactile markers in four application scenarios. \n\\end{itemize}\n\n\n\\section{Related Work}\n\n\\subsection{Interactive Tactile Graphics}\nAlthough the benefits of tactile graphics are well documented, there are several limitations. \nThe first limitation of a tactile graphic is its finite capacity to hold information~\\cite{brock2015interactivity}. It is difficult to add information, such as captions or annotations, without making a tactile graphic overly complicated~\\cite{tatham1991design}. Take, for example, a tactile map with roads, intersections, and several landmarks. It would not be feasible to add a tactile label to every map feature. Researchers have been exploring the use of other modalities to augment a tactile graphic with additional information. Two of the most promising modalities are sound~\\cite{baker2014tactile, miele2006talking} and haptics~\\cite{yu2001haptic}. \nSound has been applied to annotate the content of a tactile graphic to give a text-to-speech description based on QR codes~\\cite{baker2014tactile}, object recognition~\\cite{fusco2015tactile}, or touch input~\\cite{miele2006talking}. \nHaptic-tactile maps~\\cite{rice2005design, zeng2010audio} can generate force feedback based on user interaction. 
\nCompared with traditional tactile graphics, these interactive graphics can improve the efficiency of content exploration and facilitate learning~\\cite{brock2015interactivity, landau2003merging}. \nHowever, sound and haptics have their own limitations:\nthey limit users' ability to obtain quick overviews of spatial information through two-handed interaction~\\cite{mcgookin2010clutching} and to use the hands as marking or reference points for spatially comparing different parts of the graphic~\\cite{rice2005design}.\nIn contrast, our proposed dynamic markers are designed to improve {\\it tactile} feedback by providing physical guides and affordances directly on a tactile graphic for blind users to explore and comprehend spatial information.\n\n\n\\subsection{Dynamic Tactile Graphics}\nAnother limitation of a tactile graphic is its static content and the high cost of production~\\cite{jacobson1998navigating}.\nAlthough recent work has demonstrated tools to automate the design of tactile graphics~\\cite{brown2012viztouch, jayant2007automated, vstampach2016automated}, once created, a static tactile graphic cannot be easily modified.\nA dynamic tactile graphic can enable updating of its content in response to users' inputs. HyperBraille~\\cite{prescher2010tactile} is a commercially available refreshable braille display that has one of the largest touch-sensitive pin-matrix displays (7200 pins arranged in 60 rows). \nResearchers have demonstrated interactive systems that leverage such commercially available refreshable displays to produce a dynamic tactile map with geographic annotation~\\cite{schmitz2012interactively, zeng2012atmap}.\nHowever, the cost of a dynamic tactile display like HyperBraille is prohibitive, ranging from 2,000 USD for an 18-character display to 50,000 USD for a half page of braille. \n\nRecently, a wide variety of novel actuator technologies has been proposed, including electromagnetic actuators~\\cite{yeh2007mechanism}, piezo-electric actuators~\\cite{cho2006development, volkel2008tactile}, electroactive polymers~\\cite{chakraborti2012compact}, hydraulic and pneumatic actuation~\\cite{lee2005micromachined, russomanno2015design}, and shape memory alloys~\\cite{taylor1998sixty}. \nHowever, piezo-electric actuators are still the only technology found in commercially available devices~\\cite{russomanno2015design}, and the cost of a single piezo-powered braille cell is approximately 100 USD, bringing the cost of even a single-line refreshable braille display to over 1,000 USD~\\cite{runyan2010eap}. \nTo enable dynamic updating of tactile content, we explore an alternative approach: instead of developing another refreshable braille display, we augment a static tactile graphic.\nOur hybrid approach allows a blind user to interact with the tactile content dynamically, while allowing size and resolution to scale without a significant increase in cost and fabrication complexity. 
\n\n\\subsection{Tangible Interaction}\nOne emerging form of dynamic tactile graphics is enabled by tabletop tangible user interfaces.\nTabletop tangible user interfaces were first created to allow users to interact with digital information by moving or actuating physical objects~\\cite{ishii1997tangible, pangaro2002actuated, patten2007mechanical}, and these systems have been applied to many domains, including urban planning~\\cite{underkoffler1999urp}, remote collaboration~\\cite{follmer2013inform}, education~\\cite{horn2009comparing}, and data visualization~\\cite{le2016zooids}.\nRecently, researchers have investigated ways to use tangible interfaces for assistive applications~\\cite{mcgookin2010clutching, schneider2000constructive}.\nFor example, Tangible Graph Builder~\\cite{mcgookin2010clutching} is specifically designed to allow visually impaired users to access graph and chart-based data through a tangible interface.\nTangible Reels~\\cite{ducasse2016tangible} helps visually impaired users to construct a tangible map on their own with sucker pads and retractable reels.\nThese devices allow visually impaired people to dynamically create tactile maps and retrieve specific information related to points and links.\nInspired by this work, we explore how {\\it actuated} tangible objects can enhance exploration of and interaction with tactile graphics for visually impaired users.\n\n\n\\section{Formative Study}\n\nWe conducted a formative study with four blind individuals (male: 2, female: 2) to understand their current uses of tactile graphics, challenges they encounter, and opportunities where dynamic tactile markers may be helpful for them.\n\\changes{The age of the participants ranged from 22 to 28 (Mean=25.75, SD=2.6).}\nAll participants were students (one undergraduate and three graduate students) in various fields (biology and neuroscience, astrophysics, and computer science) at a local university.\nWe chose students as our main target users because tactile graphics are heavily used in education, particularly in STEM fields.\nDuring a 30-minute semi-structured interview, we focused on three aspects: (1) current use of tactile graphics, (2) challenges and limitations in the current use of tactile graphics, and (3) opportunities for an alternative approach to enhancing tactile representations.\nNext we present our findings.\n\n\\subsection{Current Uses}\nWe first asked the participants when and why they use tactile graphics. All participants had used tactile graphics for their coursework or research. For example, P2 said that she uses tactile graphics to access visual material in the textbooks of her biology and neuroscience classes. She uses a tactile representation of the brain model to spatially understand the functionality of each anatomical region. A particularly important use scenario is data exploration and analysis. P4 was involved in a space grant project that sends a balloon with instruments to high altitude to collect data. P4 mentioned that a tactile graphic would be a good medium to represent the data in an accessible way for analysis.\n\n\\subsection{Challenges}\nParticipants reported several challenges in using a \\emph{static} tactile graphic for data analysis and learning resources. First, they found it difficult to understand changes in data. P4 attributed this difficulty to the lack of dynamic elements in tactile representations. 
In addition, when there is too much information, a graphic can be too complex to interpret\\changes{~\\cite{brock2015interactivity,tatham1991design}}. P2 commented that {\\it ``P2: When you try to add all the information on a single tactile graphic, this can be too complex.''} \n\nParticipants also reported several limitations in the current form of a \\emph{dynamic} tactile representation. First, the current devices for displaying dynamic tactile graphics are very costly. All participants commented on this cost issue as a hindrance for wider adoption. P2 said that {\\it ``P2: I don't have any of these [refreshable braille displays]. I want, but the cost of thousands of dollar is just too expensive for me.''} Second, the size is too small\\changes{~\\cite{swaminathan2016linespace}}. P1 mentioned that the small size of these displays makes it difficult to use for data analysis applications: {\\it ``P1: braille display can show the 40 characters or maybe 80. That's about it. [...] I know there is an effort to make 4 lines or 5 lines of the braille display, but I'm not sure how successful they are. These can be very expensive.''} P3 mentioned that these [refreshable braille displays] are only designed for reading text, not showing data. {\\it ``P3: It's too small and can't express, for example, weather map, or complicated graph of 5000 data point.''} \n\nIn summary, a static tactile graphic lacks the ability to represent changes and can be overly complex, whereas a dynamic tactile graphic is costly and too small. Neither is ideal for supporting data analysis and interactive information retrieval.\n\n\\subsection{Opportunities}\nThe key opportunity we identified from the formative study is to consider a hybrid method that combines both static and dynamic tactile graphics. P2, who did not own a refreshable display, mentioned that {\\it ``It would be cool if it can dynamically label the part or change the texture, so that it can keep the tactile graphic simple but as accurate as possible.''} In other words, only a part or a few parts of a graphic need to be dynamic, while the rest of the content remains static. \n\nAnother opportunity worth noting is a common desire to understand changes in data analysis. P1 mentioned that {\\it ``I'm actually more talking about the dynamics over time, say, ... how much snow falls over time, earthquake data, or global warming data, anything that are changing over time. I don't know what any of these look like in the real world.''} \n\n\n\\subsection{Design Requirements}\n\nOur formative study inspired us to develop FluxMarker{}, a technique for controlling a set of dynamic tactile markers that move around a static tactile graphic to support data exploration and analysis. 
Informed by our findings, we identified the following design requirements for FluxMarker{}:\n\\begin{enumerate}\n\\item \\textbf{Support:} It needs to support a range of traditional static tactile graphics.\n\\item \\textbf{Dynamic Update:} It needs to dynamically update its location in response to user inputs.\n\\item \\textbf{Multiple Markers:} It needs to be capable of controlling multiple markers independently.\n\\item \\textbf{Perceptibility:} Its location as well as changes in location need to be perceivable by users via hands.\n\\item \\textbf{Scalability:} Its cost needs to scale well as the display area increases, preferably linearly.\n\\end{enumerate} \n\n\n\\section{Dynamic Tactile Markers}\nTo address the limitations of current tactile graphics, we propose {\\it dynamic tactile markers}, a new approach that uses movable, self-adjusting physical elements to dynamically render points of information on top of a traditional tactile graphic. The markers are magnets that are manipulated above a bed of electromagnetic coils, whose movements are controlled by software. The magnetic, dynamic tactile markers create real-time, adjustable tactile reference points, which can easily reconfigure the tactile content and enrich the spatial information.\nWhile dynamic tactile markers can be applied more generally in any tangible user interface, we specifically explore the design space of augmented tactile graphics for people with visual impairments. This section describes the interaction design and use scenarios that led us to investigate the design of a system to render dynamic markers. \n\n\n\\subsection{Interaction and Application Scenarios}\nThe main goal of dynamic tactile markers is to provide real-time tactile affordances on an otherwise static tactile graphic in order to direct a user's attention to specific features of the graphic.\nThis type of interaction is especially important when users explore spatial content.\n\\changes{In contrast to a refreshable braille display, the hybrid approach combining a static tactile diagram with dynamic markers keeps the display content persistent without losing the user's spatial memory~\\cite{swaminathan2016linespace}.\nThis enables users to easily recognize the position of the marker by referring to the static outline as a constant reference, while allowing context-dependent content, such as a location on a map or data points of a graph, to be updated based on the user's needs.}\nHere we describe four application scenarios where dynamic tactile markers can be useful for people with visual impairments.\n\n\n\\subsubsection{Location Finding and Feature Identification}\nTactile maps provide blind people with a means to explore geographical information.\nFor example, a tactile map of a campus will display a layout of buildings and braille labels associated with each building. \nHowever, finding a particular location is often a tedious task;\nunlike a sighted person, who can scan a map and quickly identify a specific location, blind users usually explore the map sequentially and must orient themselves to the whole graphic before finding a specific location. \nMoreover, although the information is often labeled with braille, reading braille takes time and is inaccessible for those who cannot read braille. Audio feedback can help to orient users to the name or feature of the current location; however, this technique makes it difficult to orient oneself to specific locations on the page. 
\n\nDynamic tactile markers can help to identify a spatial location quickly. For example, responding to ``Where is the nearest coffee shop?'' on a local area map or ``Where is the Black Sea?'' on a geographical world map, the dynamic tactile markers can move around on the tactile map, and the blind user can use their hands to quickly skim the map to identify the location of the marker (Figure~\\ref{fig:application-map}). They can quickly find the marker position relative to their current location or an outline of surrounding areas on an existing tactile map. In this way, they do not lose contact with spatial reference points or the spatial memory they have developed. \nIn addition, responding to the query, ``How can I get to this place?'', other markers can instantly draw the tactile path by aligning dots on the map. \nOnce the user has found the location or route, the dynamic tactile markers can be reset and cleared from the tactile graphic. \n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=3in]{figures\/application-map.png}\n\\caption{FluxMarker's Location Finding and Feature Identification Application}\n~\\label{fig:application-map}\n\\vspace{-0.2in}\n\\end{figure}\n\n\\changes{\nSimilar to location finding, the dynamic tactile marker can be used to locate a specific feature on a tactile map based on a user's question. \nFor example, a student with visual impairments is given a brain model to use in her biology class. She can ask ``which region of the brain has a memory function?'', and the dynamic tactile marker can point out the hippocampus by positioning the marker within that region of the organ. This interaction is similar to, but distinct from, existing interactive tactile graphics, which describe the feature of each region in response to the user's pointing, whereas the dynamic tactile marker points out a location in response to the user's question. In another scenario, a student is in a lecture, and the professor is presenting a graphical representation of a cell via PowerPoint, and uses a laser pointer to identify the cell nucleus for the sighted students. The student with visual impairment, who has a tactile version of the graphic, can ask the dynamic tactile marker to move to the corresponding location.\n}\n\n\\subsubsection{Data Analysis and Physicalization}\nData analysis is one of the most challenging tasks for people with visual impairments. Because visualized data is not accessible to blind users, they often find it difficult to interact with the data.\nDynamic tactile markers can help blind users make sense of data through data physicalization~\\cite{jansen2015opportunities}.\n\nOne advantage of using dynamic tactile markers is the ability to update the data for a different context.\nFor example, a blind user who wants to analyze the temperature of a city over time might want to know the pattern throughout the year, the maximum temperature, and the minimum temperature of the city. \nTwelve dynamic tactile markers can position themselves as a plot representing the temperature data for each month.\nBy touching the data points and referring to the scale, which can be provided on static embossed paper, the user can find the maximum and minimum temperature of the city.\nWhile understanding the pattern of the data can be challenging with an audio representation alone, with dynamic tactile markers she can comprehend the pattern of the graph by recognizing spatial positions. 
If she wants to analyze the temperature data of a different city, she can just ask ``render the data point'' with the city name.\nThen, the dynamic tactile markers can be repositioned to render the requested data points.\n\n\n\\subsubsection{Guided Drawing Assistant}\nIn addition to supporting the interpretation of content or the analysis of data, dynamic tactile markers can also support students in creating their own tactile graphic representations. Many students with visual impairments have limited exposure to drawing or making their own representations of information, due in part to the lack of educational practices and materials~\\cite{hayhoe2014reducing}. The dynamic tactile marker can help blind users make their own tactile representations by providing reference points for the drawing. For example, when a blind user is trying to draw a hexagon, six dynamic tactile markers would appear, marking the reference points of each corner of the shape. The user can touch the markers with the nondominant hand to position themselves, and then draw a line from one point to the next (Figure~\\ref{fig:concept-draw}). Alternatively, the tactile markers can form a nearly solid edge that the user can trace alongside. Guided drawing can be particularly useful for creating tactile graphics in conjunction with inexpensive physical tactile drawing boards \nsuch as the Sensational Art Board~\\footnote{http:\/\/www.sensationalbooks.com\/products.htmlblackboard}, the inTact Sketchboard~\\footnote{http:\/\/www.easytactilegraphics.com\/}, and 3D printing Doodle Pens~\\footnote{http:\/\/the3doodler.com\/}. \n\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=3in]{figures\/application-drawing.png}\n\\caption{FluxMarker's ``Guided Drawing Assistant'' Application.}\n~\\label{fig:concept-draw}\n\\vspace{-0.2in}\n\\end{figure}\n\n\n\n\n\\section{Design Considerations}\nMany different actuation techniques can enable dynamic tactile markers, but an appropriate design should meet the design requirements that we identified through our formative study. \nIn order to ensure that our design meets these requirements, we evaluated a variety of actuation approaches that have been proposed in different areas such as tangible user interfaces, robotics, and accessibility.\nThese actuation methods include mechanical actuators (e.g., DC motors, servo motors, stepper motors), piezoelectric actuators (e.g., piezo-elastomers, piezo-electric linear motors, ultrasonic motors), electrostatic actuation, magnetic actuation, electromagnetic actuation, pneumatic and hydraulic actuation, and material-based actuation (e.g., shape memory alloys).\nFour primary considerations surfaced while conducting this evaluation: cost, scalability, fabrication complexity, and compliance.\nGiven these considerations, we decided to explore electromagnetic coils that actuate a passive magnet as a marker.\nThis section describes the design rationale behind our decision. \n\n\n\\subsection{Cost}\nOne of the most important considerations is the cost of fabrication. \nAlthough mechanical actuation with motors and linear actuators is the most straightforward design choice, these parts are expensive.\nFor example, coordinated self-positioning robots like the Zooids system~\\cite{le2016zooids} could be used as dynamic tactile markers, but parts and assembly cost 50 USD per robot, so increasing the number of markers is costly. 
\nIn contrast, components that can be fabricated with existing PCB manufacturing techniques are inexpensive~\\cite{strasnick2017shiftio},\nfor example, electromagnetic~\\cite{pelrine2012diamagnetically, strasnick2017shiftio} or electrostatic actuation~\\cite{karagozler2009stress}.\nPiezo-electric actuation, such as ultrasonic motors and piezo-electric linear actuators, can also be integrated on a PCB~\\footnote{http:\/\/pcbmotor.com\/}, but the fabrication process requires piezoceramic materials and a specialized manufacturing process, which increases the cost of fabrication. \nAnother low-cost actuation method is pneumatic or hydraulic actuation, as the parts are relatively inexpensive.\n\n\n\\subsection{Size and Scalability}\nAs we found through the formative interviews, display size is another important consideration. \nExisting approaches that use one actuator for each pixel of a dynamic tactile display, including refreshable braille~\\cite{hyperbraille} or raised-pin displays~\\cite{follmer2013inform, leithinger2010relief, poupyrev2004lumen}, do not scale well.\nFor example, a 10x10 raised-pin display that uses either mechanical or piezo-electric linear actuation requires only 100 actuators.\nHowever, a 100x100 display requires 10,000 individually actuated pins.\nEven with relatively low-cost actuators, the cost grows in proportion to the number of pixels, i.e., quadratically in the linear display size (e.g., using 5 USD servo motors, a 100x100 pixel display would cost at least 50,000 USD).\n\nIn contrast, PCB-manufactured electromagnetic actuation scales relatively well because many coils can be aligned on a PCB.\nFor example, in our design an 8x8 array of coils can be aligned on a 10cm x 10cm PCB, costing only 0.50 USD.\nWhile the cost of a printed circuit board increases with the size of the board, the increase is modest, and the transistors needed to drive high currents for electromagnetic actuation are also inexpensive compared with mechanical or piezo-electric components.\n\n\\subsection{Fabrication Complexity}\nIn addition to cost and scalability, we value simplicity in the fabrication and control mechanisms, which allows the larger accessibility community to quickly adapt, replicate, and test our design.\nAs mentioned above, pneumatic and hydraulic actuation methods are also promising approaches. Researchers have proposed using a fluidic logic circuit to switch the pressure of pneumatic actuators and control the state of each pixel in a refreshable braille display~\\cite{russomanno2015design}. The complexity of design and fabrication of hydraulic actuation can be alleviated with advanced 3D printing technology~\\cite{maccurdy2016printable}, but it is still difficult to design complex fluidic circuits that can control multiple pixels individually. In short, the fabrication and control mechanisms of such pneumatically actuated devices remain a challenge.\n\nIn contrast, using electromagnetic coils leverages commercially available PCB manufacturing for fabrication, and a standard circuit design for the control mechanism.\nThus, we chose to develop an electromagnetic actuation technique that meets all the considerations we identified previously, while allowing a simple control and fabrication process. \n\n\n\\section{System Design \\& Implementation}\n\nTo instantiate the concept of the dynamic tactile marker, we present FluxMarker{}, a software and hardware system that actuates magnetic markers with low-cost, scalable electromagnetic coil arrays (Figure~\\ref{fig:system-pcb}). 
\nThe hardware system comprises markers, coils, circuits, a controller, and a corresponding GUI. \nIn the following section, we describe the specifications of the elements we used to construct the system.\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=3in]{draft-figures\/system-pcb.png}\n\\caption{Overview of PCB Coil Board}\n~\\label{fig:system-pcb}\n\\vspace{-0.2in}\n\\end{figure}\n\n\\subsection{Hardware}\n\n\\subsubsection{System Design}\n\n\\textbf{Coil Design:} \nFluxMarker{} consists of passive magnetic markers and arrays of electromagnetic coils.\nThe electromagnetic coil arrays can be fabricated with standard PCB manufacturing techniques.\nWe use a two-layer printed circuit board, and each layer contains a set of micro-coils with horizontal and vertical offsets.\nThe coils are identical rectangles arranged in a tiled pattern (see Figure~\\ref{fig:system-pcb}).\n\n\nRunning current through a coil generates a local magnetic field within its area, so each coil can attract only a single magnet located within that area.\nIf the PCB had only one layer, there would be no way to move the magnet from the center of one coil to the next because the magnet would lie beyond the range of the second coil.\nThus, the patterns of coils on the top and bottom layers are offset so that their effective areas overlap.\nFigure~\\ref{fig:system-move} illustrates the movement of the magnet. \nThe microcontroller switches a sequence of coils on and off to move the magnet across the coils.\nAs the top layer and bottom layer are offset both horizontally and vertically, the magnet travels in a zig-zag path from one coil (on the top layer) to the next (on the bottom layer) rather than in a straight line.\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=3in]{figures\/system-module.png}\n\\caption{Modular and Scalable Design}\n~\\label{fig:system-module}\n\\end{figure}\n\nThe coil arrays are fabricated with standard PCB manufacturing, so the size of each array is limited by the capabilities of the PCB factory.\nTo address this, we designed our electromagnetic coil arrays as a scalable module. \nEach 16 x 16 magnetic coil array board is a module of a fixed size (e.g., 15cm x 15cm).\nModular boards can be soldered together side to side as tiles, allowing the overall size of the coil array to be as large as desired (Figure~\\ref{fig:system-module}).\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=3in]{figures\/system-move.png}\n\\caption{Diagram of the Dynamic Marker's Movement Across the Coil Board}\n~\\label{fig:system-move}\n\\vspace{-0.2in}\n\\end{figure}\n\n\n\\textbf{Circuit Design:} \nSwitching current to each coil turns its magnetic field on and off.\nThe standard approach to switching the current is to use a single MOSFET transistor for each coil, but this increases the complexity of the circuit design, as it requires several I\/O lines to drive each MOSFET transistor.\nInstead, we use a multiplexing technique with a diode array to more efficiently control and drive many coils in an array.\nConsider a 4x4 array of coils where each coil is connected to a diode (Figure~\\ref{fig:system-matrix}). \nSimilar to an LED matrix display, only one row of coils can be on at any time. By switching through each row quickly (e.g., 10-100ms), a coil at any position can be activated. 
For example, setting only row A as HIGH and the other rows (B, C, and D) as LOW, while setting columns 1 and 3 as LOW and the other columns (2 and 4) as HIGH, will turn on only coils (A, 1) and (A, 3). \nNext, if we set row B as HIGH and the other rows (A, C, and D) as LOW, and set columns 1 and 4 as LOW and the other columns (2 and 3) as HIGH, we can turn on (B, 1) and (B, 4).\nIn this way we can control 16 coils using only 8 (4 + 4) I\/O pins on the microcontroller.\nThis design decreases the complexity of the circuit and reduces the required number of microcontroller I\/O pins as well as MOSFETs, which cost more than diodes.\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=2in]{draft-figures\/system-matrix.png}\n\\caption{Multiplex Coil Matrix}\n~\\label{fig:system-matrix}\n\\end{figure}\n\nWhile LEDs can be switched with a relatively low current (e.g., 20mA) directly supplied by the microcontroller, an electromagnetic coil requires a higher current (e.g., 0.5-1A).\nThus, we use half-bridge MOSFET transistor switches to amplify and control the current to each coil.\nThe half-bridges are made from a push-pull pair of P-channel and N-channel power MOSFET transistors.\nOne terminal of each coil is tied to a P-channel MOSFET transistor, and the other terminal is tied to an N-channel MOSFET transistor. \nThe gates of both MOSFET transistors are controlled by an I\/O line from the microcontroller, and the source voltage comes from an external 9V power supply.\n\n\\textbf{Controller Design:} In this scheme, each half-bridge uses two I\/O pins of the microcontroller, so the number of I\/O pins on the microcontroller limits the number of available transistor switches (e.g., the Arduino microcontroller has only 14 digital I\/O pins).\nTo further reduce the required number of I\/O pins, we use daisy-chained shift registers.\nEach shift register switches multiple MOSFET transistors with serial-in\/parallel-out data transmission. By using a chain of shift registers, any number of transistors can be controlled using only a few microcontroller pins.\n\nBy generating a local magnetic field, each coil attracts a magnetic marker located in its range. \nTo move the marker from one point to another, the program computes the shortest path and then switches coils on and off sequentially along this path to move the magnet.\nAs a blind or low vision user interacts with a marker, the system keeps the coil energized so that the marker cannot accidentally be pushed aside. \n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=3in]{figures\/system-gui.png}\n\\caption{GUI Software}\n~\\label{fig:system-gui}\n\\vspace{-0.2in}\n\\end{figure} \n\n\\subsubsection{Implementation}\n\n\\textbf{Markers:} We use N50 neodymium disc magnets (2mm diameter and 3mm thickness) to act as tactile markers that are dynamically actuated by electromagnetic forces. We added a laser-cut square cap to stabilize the orientation so that the magnets do not flip over. In our prototype, each magnetic marker costs approximately 0.20 USD.\n\n\\textbf{Coils:} Our current prototype comprises a 16x16 grid of coils and is 15cm x 15cm in size. Each coil has 22 turns and measures 15.85mm in width and height. The trace width and spacing of each line in the coils are 0.1524mm (0.006 inches), the minimum trace width and separation offered by our PCB manufacturer, chosen to maximize the number of turns in each coil. 
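\n\nTo make this control scheme concrete, the following Arduino-style C++ sketch illustrates the row-scanning and path-stepping logic described above. It is a minimal illustration only: the pin assignments, helper names, masking conventions, and dwell time are placeholder assumptions, not the exact firmware of our prototype.\n\\begin{verbatim}\n\/\/ Hypothetical pin assignments for the daisy-chained shift registers.\nconst int LATCH_PIN = 15; \/\/ storage (latch) clock\nconst int CLOCK_PIN = 14; \/\/ serial clock\nconst int DATA_PIN  = 13; \/\/ serial data\n\n\/\/ Push one column byte and one row byte into the chain (order depends\n\/\/ on the wiring); the parallel outputs gate the half-bridge MOSFETs.\nvoid writeCoils(uint8_t rowMask, uint8_t colMask) {\n  digitalWrite(LATCH_PIN, LOW);\n  shiftOut(DATA_PIN, CLOCK_PIN, MSBFIRST, colMask);\n  shiftOut(DATA_PIN, CLOCK_PIN, MSBFIRST, rowMask);\n  digitalWrite(LATCH_PIN, HIGH); \/\/ latch the new outputs\n}\n\n\/\/ Energize coil (row, col): its row is driven HIGH and its column LOW,\n\/\/ matching the matrix convention described in the text.\nvoid activateCoil(int row, int col) {\n  writeCoils(1 << row, ~(1 << col));\n}\n\n\/\/ Move a marker by switching coils on and off sequentially along a\n\/\/ precomputed path; alternating top\/bottom-layer coils produce the\n\/\/ zig-zag motion described above.\nvoid moveMarker(const int path[][2], int steps) {\n  for (int i = 0; i < steps; ++i) {\n    activateCoil(path[i][0], path[i][1]);\n    delay(50); \/\/ dwell per coil, on the order of 10-100 ms\n  }\n}\n\nvoid setup() {\n  pinMode(LATCH_PIN, OUTPUT);\n  pinMode(CLOCK_PIN, OUTPUT);\n  pinMode(DATA_PIN, OUTPUT);\n}\n\nvoid loop() {\n  const int path[][2] = {{0, 0}, {0, 1}, {1, 1}};\n  moveMarker(path, 3); \/\/ example zig-zag segment\n  delay(2000);\n}\n\\end{verbatim}\nIn the full system, the coil sequence and timing are supplied by the web server described below rather than hard-coded on the microcontroller.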
\n\n\\textbf{Circuits and Microcontrollers:} We use an ESP8266 microcontroller, which switches the current to the coils using 8-bit HC595N shift registers. Each shift register can drive 8 coils, each switched by a half-bridge MOSFET pair.\nA 1N4001 diode attached to each coil prevents reverse current flow in the diode array. We use FQP27P06 and IRF740 for P-channel and N-channel MOSFETs, respectively. A 9V AC-DC line voltage adapter powers the MOSFETs.\n\n\\textbf{Fabrication Cost:}\nOur prototype costs approximately 40 USD, including the cost of 32 MOSFETs, 128 diodes, 4 shift registers, a printed circuit board, and a microcontroller.\nWe estimate the total parts cost of a 160x160 coil grid at 500 USD (MOSFETs: 128 USD, diodes: 147 USD, shift registers: 20 USD, PCB: 200 USD, and microcontroller: 15 USD).\nFor our prototype hardware system, we assembled these parts manually, but this process can be automated with PCB assembly machines.\n\n\n\\subsection{Software}\nWe developed software to support the task of specifying the locations of dynamic tactile markers and controlling their movements. This software consists of a web-based graphical user interface (GUI) and a web server that communicates with the display hardware. Using our GUI, the task of creating a hybrid tactile graphic is as follows. \nFirst, a sighted tactile graphic designer will specify the static elements of the graphic by drawing lines and polygons or by importing an existing graphical file. \nSecond, the designer will specify a spatial configuration of markers (e.g., the locations of coffee shops on a local tactile map, or the position of the hippocampus in a human brain model). \nThird, the designer will specify the input commands associated with this particular configuration (e.g., a voice command such as ``show me the nearest coffee shops'' or ``which region of the brain is the hippocampus?''). \nFinally, if the designer wants to specify a sequence of such spatial configurations, the system supports the creation of a step-by-step guide or a drawing aid (Figure~\\ref{fig:system-gui}).\n\nThe main functions of the web server component of our software are to compute the display logic given a particular marker configuration and to communicate this logic to the display hardware. The communication is through a wireless HTTP (HyperText Transfer Protocol) connection. On the hardware side, the ESP8266 microcontroller enables wireless communication with its built-in WiFi module. Each coil in the display matrix has a unique ID. The web server can send messages to individual coils to turn them on or off. The display logic specifies the sequence and timing of these messages in order to move markers to desired locations, and the software tracks the history of each marker position. Once the task is finished, the system moves the markers to the corner of the display, away from the tactile graphic. The microcontroller program is written in C++ and the control GUI is written in JavaScript.\n\n\\section{User Study}\nThe goal of the user study was to use the prototype to assess the use case scenarios we identified during our formative and background research, and to probe users about other possible applications of the FluxMarker{}. 
In particular, we observed how the tool supported participants' ability to find specific locations within a tactile graphic; supported their ability to relate content knowledge to elements on a tactile graphic; engaged them in drawing tasks; and affected their perceptions of how information can be communicated with tactile graphics and of their interactions with assistive technologies. \n\n\\subsection{Participants}\nSix people with visual impairments participated in the user study (3 male, 3 female); three participants were also part of the earlier formative study. \nP1 (male), P2 (male), and P3 (female) identified as being totally blind. One female participant identified as being legally blind with a little bit of light perception (P4). One male (P5) and one female (P6) participant identified as having a visual impairment, but had functional vision through the use of assistive technologies. Figure~\\ref{fig:participant-chart} summarizes the characteristics of participants in terms of frequency of use of tactile graphics, familiarity with science graphics, familiarity with tactile maps, and braille fluency. \n \n\\subsection{Method}\nWe conducted a 45-minute session with each participant. During each session we presented an overview of the research, introduced the prototype and described how it worked in conjunction with the graphics, and then showed the participant two embossed tactile graphics from the local university's accessible media lab so that they would have basic familiarity with the graphics. The graphics included (A) an embossed tactile map of Eastern Europe and Russia and (B) an embossed tactile graphic representing a sectional view of a human brain. At the beginning of the session we provided the participants with the context in which these graphics might be used and provided time for them to explore the graphics. We then asked the participants to (C) draw a hexagon on a piece of trace paper, in order to observe their familiarity with drawing without any aids. Figure~\\ref{fig:userstudy} shows examples of user study sessions.\n\nTo observe how the participants used the FluxMarker{}, we laid the embossed tactile graphics on top of the display and asked the participants to read graphics A and B and perform a series of tasks with the aid of the dynamic markers. \\changes{When evaluating FluxMarker with Graphic A, we asked participants to first find a region on the map without the aid of the marker, and then find a specific point on the graphic as marked with the FluxMarker. Subsequent to finding the marker, we asked participants to identify other geographic features. This allowed us to observe how each participant used the marker as a reference point throughout their search. We performed the same order of operations with Graphic B, although that graphic was more detailed and the representation of the brain had fewer ``regions'' and more represented features. We asked participants to identify these features in relation to each other.} We also asked participants to follow a moving marker to draw a hexagon on a piece of trace paper. While the participants were performing the tasks, we answered any questions that arose. We observed their actions and recorded their commentary. After these activities we conducted a 10-minute semi-structured interview where we asked about their experience with the FluxMarker{} in relation to their first and third experience with the graphics. We also asked for feedback about the prototype and their view of its current and future applications. 
To analyze the data, we reviewed the video of the sessions, captured questions and comments that arose during the testing, and identified themes that arose from the interview questions.\n\n\n\n\\subsection{Findings \\& Discussion}\n\n\n\n\n\\subsubsection{Applications}\nIn order to assess the applications of FluxMarker{}, we asked participants to use the tactile map, tactile graphic, and drawing paper to perform a task with the tool. Each participant performed those tasks in slightly different ways, and provided unique feedback and new ideas about the effectiveness of the tool. \n\n\\textbf {Spatial Navigation:}\nWhen viewing the tactile map, P1 rapidly scanned the display area with two hands and found the boundaries of the countries represented on the map without guidance; he said that he loves geography and is good at geometry. He immediately started looking for the Black Sea, at which point we used the FluxMarker{} to help him locate the sea. He found the marker within seconds and noticed that it was positioned in the middle of the sea. P1 compared his experience with this tool to working with a teacher of the visually impaired (TVI), who might manually place a simple magnet or sticker on the map to mark a location. He suggested that we use the FluxMarker{} to guide someone to follow a path in order to discover a landmark, in this case {\\it ``the Yangtze River''} if this map also included China. At the end of the user study he said {\\it ``The best application I could see this being used for is to have the marker move with the user following along, so that the teacher could trace a path out for me in real-time.''}\n\nP3 also identified real-time mapping as an important application of the tool. She suggested {\\it ``If you could have a tactile map, where then you could locate two buildings [using the markers], and then figure out pathways between those markers [which represent the buildings], you could then start populating the map with landmarks using these markers.''} P5 and P6, both low vision, explored the tactile graphic visually and did not have any ideas for how this tool would support them with navigation. \n\n\\textbf{Feature Identification and Locating:}\nIn addition to using the markers to identify specific landmarks or geolocations, P3 wanted the markers to form a raised line around specific regions of the tactile graphic to make the boundaries more prominent. When she viewed the tactile map with the marker, the marker was located in the middle of an empty space. She asked, {\\it ``I was wondering, is the marker in the middle [of the country]?''} She then suggested that the dynamic markers would be more beneficial if they could outline the boundary of the country or entity one was trying to find.\n\nWhen using the dynamic markers to explore the map, P2 started brainstorming other possible applications. He provided the scenario of using Google to find restaurants with four stars, and then using the FluxMarker{} to automatically populate the locations of the restaurants to narrow his decision about where to go.\n\nWhen using the markers to find Kazakhstan on the tactile map, P4 indicated that the markers provide a sense of independence. {\\it ``It works better than having another person poking at the spot. 
Even if you know where their finger is, and you start taking time to explore around, they might think you are lost--which you are not--and try to show you around.''} She also mentioned that if an instructor was talking about a specific location on a graphic or map, it would be easier to keep up if the marker was in the corresponding position on the display. {\\it ``This would be useful if it was synced up with a lecture and graphics, or even if it was synced with an instructor's laser pointer; if it was tracking what was up on the board, and I could follow along, that would be amazing.''} \n\nP6 elaborated on this concept, indicating that as somebody with low vision, she has a hard time following lectures that have slides. {\\it ``I usually ask the presenter to give audio cues when they are changing slides so I can follow along with the slides in front of me, but they usually forget to do that. Or if they have annotated graphics and they forget to describe that...then this tool would be very helpful. If this could be used to help track those animations on a print out of their slides. I think this would be great for low vision.''} \n\n\\textbf{Data Analysis and Visualization:}\nP3 mentioned wanting to be able to rapidly zoom into a specific area of a graphic and see the details in higher resolution. She envisioned that the user could specify a region of a graphic, and FluxMarker{} could dynamically represent the zoomed-in region in higher resolution next to the original view. P2 suggested that FluxMarker{} could be used to display incoming financial data from Wall Street to show how the market fluctuates. \n\nP4, an astrophysicist, remarked that while FluxMarker{} is not yet useful for her work, she enthusiastically recommended connecting FluxMarker{} to Excel so that she could create diagrams of her data in real-time using multiple markers. {\\it ``If they [the markers] were more stable and you had an ordinary piece of tactile graphic paper, you could add the markers on top of that and make a line graph. That would be very useful. That would absolutely be helpful to me in my professional career. One of the things that happens when I am writing a paper is preparing my data in Excel, I can't get any feedback from that graph. If I had something like this plugged into my computer, I could see if the graph matches my numbers. I could actually check my own work before publishing. That is a big deal!''}\n\n\\textbf{Drawing or Tracing Guide:}\nAll participants found the task of following the dynamic marker to draw a hexagon to be slow and troublesome, as they already had strategies for how to draw the shape. For example, when drawing a hexagon freehand, P4 first drew a square and then used her spatial understanding to add additional sides. She remarked that it was not different from following a real person and that it did not provide the tactile feedback that a drawing board would. P2 remarked that the FluxMarker{} would be difficult to use to draw organic shapes due to the orthogonal layout of the coil grid. He also remarked that it would be more useful if the markers could be closer together. However, P1 pointed out that it could provide a sense of independence for people who want to practice drawing on their own; P3 thought that the FluxMarker{} would be a good resource for young children learning to draw. \n\n\\subsubsection{Technical Evaluation}\n\nThe user study provided us with an opportunity to evaluate the technical characteristics of the FluxMarker{}. 
Here we discuss users' comments with respect to support, perceptibility, and scalability: \n\n\\textbf{Support:} \nThe design of the physical prototype enabled the user to directly place embossed tactile graphics on the PCB display. During the user study, we found that the markers moved well along embossed paper graphics, but swell paper was too thick for the current level of magnetization to maintain contact with the display. Throughout the user study the participants remarked that the markers felt loose and that they moved too easily when touched; they wanted them to have a stronger magnetic connection to the PCB display. P4 remarked, {\\it ``If I was taking a test and was in a hurry looking around, and moved the marker, it would slow me down.''} We noted that in some cases this made participants hesitant to freely explore the graphics.\n\n\n\n\\textbf{Perceptibility:} \nP4 and P5 commented explicitly on the markers themselves. P4 noted that the marker stands out in comparison to the rest of the page. P5 wanted to make sure that they would not cover any important graphic content. P1, P3, and P4 each remarked on the heat of the magnets and underlying PCB display. When looking for the marker P4 said, {\\it ``In some ways I think the heat is a good indicator of where the marker is going to be...because the marker is a very specific spot, but the heat is a region.''} The heat she was referring to is the result of current flowing through the coil, which turned out to be a useful side effect for P4. In contrast, P2 found the heat less favorable and noted that if we increase the resolution of the markers, the heating problem would need to be addressed.\n\n\\textbf {Scalability:}\nAll participants were impressed by the low cost of the prototype and the prospect of a larger display size. P1 mentioned that if the resolution were lower (the markers more spread out), a mechanism would be needed to help users find their starting reference point. Without this it would be laborious to find the marker. \nP3 was satisfied with the current size of the display since she could explore the whole display in a short amount of time. \n\n\n\\section{Limitations \\& Future Work}\n\nOur system, while promising, has several limitations that require future research.\n\\changes{\nFirst, our user study is largely based on qualitative findings, rather than quantitative measures from controlled experiments.\nAlthough the focus of our study was to explore the application space with a proof-of-concept prototype, we acknowledge the importance of a more in-depth and rigorous user study of the various aspects of the system.\nFor example, our system has a GUI component to enable designers to specify dynamic marker configurations, but we have not tested this formally.\nIn future work, we will recruit other stakeholders, including family members, teachers of the visually impaired, and professional tactile transcribers, to gain insight into content creation involving dynamic markers.\n\nSecond, our system currently does not detect the positions of the markers.\nInstead, it assumes the initial positions of the markers are known and tracks their movements based on users' commands. 
\nWe are interested in developing a closed-loop system which can detect markers' locations by using camera-based tracking or reed switch arrays, and then feed location information back to the system in real-time.\nSuch a system can improve the accuracy and robustness of marker movement.\n\nFinally, the current system uses dynamic markers only as {\\it output} for information, but the markers can also be used by users to provide {\\it input}.\nFor example, one participant was excited about potential use scenarios where he could use tactile markers to find nearby restaurants and then ask context-aware questions, such as opening hours, ratings, or the restaurant's menu, by touching the marker, just as with Google Maps.\nIn these scenarios, tactile markers should be enhanced to respond to real-time information requests.\nIn future work, we are interested in integrating touch sensing or gesture tracking mechanisms into the system to open up a new interaction model for visually impaired users and explore the possible design space of dynamic tactile markers.\n}\n\n\n\n\n\\section{Acknowledgments}\nWe would like to thank the participants for their time and helpful feedback.\nWe also would like to thank Shohei Aoki and Ron Pelrine for their technical advice and feedback.\nThis research was supported by the NSF CAREER award 1453771 and the Nakajima Foundation.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Methods summary}\n\n\nIn our experimental setup\\cite{Frohlich2011}, we prepare a quantum degenerate Fermi gas of $^{40}$K atoms in a 50\/50 mixture of the two lowest hyperfine states $|F=9\/2,m_F=-9\/2\\rangle$ and $|F=9\/2,m_F=-7\/2\\rangle$. We confine the quantum gas to two dimensions in a deep optical lattice formed by a standing wave laser field, preparing approximately 30 layers. The interaction strength between spin-up and spin-down particles is tuned at a Feshbach resonance near 202.1\\,Gauss. The photoemission measurement couples the $|F=9\/2,m_F=-7\/2\\rangle$ state to the weakly interacting state $|F=9\/2,m_F=-5\/2\\rangle$ using a radiofrequency photon of frequency $\\Omega$ with negligible momentum transfer. We measure the momentum distribution of the transferred atoms in a time-of-flight experiment and average the absorption signal azimuthally to obtain $A(k,\\Omega)$, where $k=\\sqrt{k_x^2+k_y^2}$.\n\n\\section*{Methods}\n\n\\subsection{Experimental setup}\nWe evaporatively cool a 50\/50 spin mixture of $^{40}$K atoms in the $|F=9\/2,m_F=-9\/2\\rangle\\equiv |-9\/2\\rangle$ and $|F=9\/2,m_F=-7\/2\\rangle\\equiv |-7\/2\\rangle$ states of the hyperfine ground state manifold\\cite{Frohlich2011}. After reaching quantum degeneracy in a crossed-beam optical dipole trap with approximately 70000 atoms per spin state, we turn on an optical lattice potential in order to prepare two-dimensional Fermi gases\\cite{Gunter2005,Martiyanov2010,Frohlich2011,Dyke2011}. The optical lattice is formed by a horizontally propagating, retro-reflected laser beam of wavelength $\\lambda=1064$\\,nm, focussed to a waist of 140\\,$\\mu$m. We increase the laser power over a time of 200\\,ms to reach a final potential depth of up to $V_{lat}=83\\,E_{rec}$, which is calibrated by intensity modulation spectroscopy. $E_{rec}=h^2\/(2 m \\lambda^2)$ is the recoil energy. The trapping frequency along the strongly confined direction is $\\omega=2 \\pi \\times 78.5$\\,kHz. 
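\nAs a consistency check (our own estimate, not part of the original analysis), the axial frequency follows from the harmonic expansion of a single lattice well, $V(z)=V_{lat}\\sin^2(2\\pi z\/\\lambda)\\approx V_{lat}(2\\pi z\/\\lambda)^2$, which gives\n\\begin{equation}\n\\hbar\\omega \\simeq 2\\sqrt{V_{lat}E_{rec}} = 2\\sqrt{83}\\,E_{rec} \\approx h\\times 80\\,\\mathrm{kHz}\n\\end{equation}\nusing $E_{rec}\\approx h\\times 4.4$\\,kHz for $^{40}$K at $\\lambda=1064$\\,nm, slightly above the measured $78.5$\\,kHz, as expected from the anharmonicity of the sinusoidal potential.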
After loading the optical lattice, we adiabatically reduce the power of the optical dipole trap such that the atoms are confined only by the Gaussian intensity envelope of the lattice laser beams. The radial trapping frequency of the two-dimensional gases is $\\omega_\\perp=2\\pi\\times 127$\\,Hz for $V_{lat}=83\\,E_{rec}$, and we confine on the order of $10^3$ atoms per two-dimensional gas at the center of the trap. Along the axial direction we populate approximately 30 layers of the optical lattice potential with an inhomogeneous peak density distribution. Approximately two thirds of the 2D layers with the highest density dominate the measured signal, and their relevant energy scales $E_F$, $E_B$, and $\\Delta^2\/2E_F$ are more than an order of magnitude larger than the trap level spacing $\\hbar\\omega_\\perp$. Therefore, finite particle number effects do not influence the measured signal. After evaporation, we adiabatically increase the interaction strength by lowering the magnetic field, at a rate of up to 0.25\\,G\/ms, to a value near the Feshbach resonance at 202.1\\,G. We apply a radio-frequency pulse near 47\\,MHz with a Gaussian amplitude envelope with a full width at half maximum of 230\\,$\\mu$s to transfer atoms from the $|-7\/2\\rangle$ state to the $|F=9\/2,m_F=-5\/2\\rangle\\equiv |-5\/2\\rangle$ state. Atoms in the $|-5\/2\\rangle$ state have a two-body s-wave scattering length of 130 Bohr radii with the $|-7\/2\\rangle$ state and 250 Bohr radii with the $|-9\/2\\rangle$ state\\cite{Stewart2008}. We turn off the optical lattice 100\\,$\\mu$s after the radiofrequency pulse, switch off the magnetic field, and apply a magnetic field gradient to achieve spatial splitting of the three spin components in a Stern-Gerlach experiment. For each run, the magnetic field is calibrated by driving spin rotation of an imbalanced mixture with an rf pulse on the $|-9\/2\\rangle$\/$|-7\/2\\rangle$ transition. The magnetic field accuracy deduced from these measurements is $<3$\\,mG. We measure the temperature by ballistic expansion of a weakly interacting gas, and the quoted numbers refer to the average of $T\/T_F$ across the whole sample.\n\n\n\\subsection{Determination of the energy threshold $E_{th}$ of the energy distribution curve}\nWe fit our data with a double-peak fitting function comprising a Gaussian for the atomic signal and a modified Gumbel function $f(\\Omega)=\\alpha \\exp[-(\\Omega-\\Omega_0)\/b-a \\exp(-(\\Omega-\\Omega_0)\/(a b))]$ for the pairing peak. The parameter $\\Omega_0$ measures the peak position and the parameters $a$ and $b$ measure skewness and width. For our further analysis, we only use the peak position $\\Omega_0$, which does not depend on the line shape function used. From this fit we determine the maximum of the molecular peak, $\\nu_{max}=\\Omega_0$, and the minimum between the atomic and molecular peaks, $\\nu_{min}$. Between $\\nu_1=\\nu_{max}$ and $\\nu_2=\\nu_{min}-2$\\,kHz we fit the data with a linear function and determine the zero-crossing of the linear extrapolation as the energy threshold $E_{th}$. We correct the result for our spectral resolution of $1.5$\\,kHz, determined from the width of the Gaussian fits. 
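\nTo illustrate the final step of this procedure, the short sketch below performs the linear fit and zero-crossing extrapolation on hypothetical samples of the flank of the pairing peak. It is our own C++ illustration, not the analysis code used for this work, and the sample values are placeholders.\n\\begin{verbatim}\n#include <iostream>\n#include <vector>\n\n\/\/ Least-squares line s = m*nu + c through the flank samples between\n\/\/ nu_1 and nu_2; the energy threshold is the zero crossing -c\/m,\n\/\/ before the spectral-resolution correction.\ndouble thresholdFromLinearFit(const std::vector<double>& nu,\n                              const std::vector<double>& s) {\n  double sx = 0, sy = 0, sxx = 0, sxy = 0;\n  const int n = nu.size();\n  for (int i = 0; i < n; ++i) {\n    sx += nu[i]; sy += s[i];\n    sxx += nu[i] * nu[i]; sxy += nu[i] * s[i];\n  }\n  const double m = (n * sxy - sx * sy) \/ (n * sxx - sx * sx);\n  const double c = (sy - m * sx) \/ n;\n  return -c \/ m;\n}\n\nint main() {\n  \/\/ Placeholder flank samples (frequency in kHz, signal in arb. units).\n  std::vector<double> nu = {10.0, 11.0, 12.0, 13.0};\n  std::vector<double> s = {0.05, 0.25, 0.45, 0.65};\n  std::cout << \"E_th (kHz, uncorrected): \"\n            << thresholdFromLinearFit(nu, s) << std::endl;\n  return 0;\n}\n\\end{verbatim}\n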
The data are normalized to the two-body binding energy in vacuum, which we obtain from the transcendental equation\\cite{Bloch2008}\n\\begin{equation}\nl_z\/a=\\int_0^\\infty \\frac{du}{\\sqrt{4 \\pi u^3}} \\left(1- \\frac{\\exp(-E_B u\/(\\hbar \\omega))}{\\sqrt{(1-\\exp(-2 u))\/(2 u)}}\\right).\n\\end{equation}\nHere, $l_z=\\sqrt{\\hbar\/(m\\omega)}$ and $a$ is the three-dimensional scattering length, computed using the following parameters of the Feshbach resonance: $B_0=202.1$\\,G, $\\Delta B=7$\\,G, and $a_{BG}=174\\,a_B$, where $a_B$ is the Bohr radius.\n\n\n\\subsection{Thermal singlet model}\nWe model our data on the BEC side of the resonance with a thermal ensemble of singlet pairs\\cite{Gaebler2010}. The expression for the wave function of the bound state in two dimensions is $\\psi_B(r)=\\sqrt{2\/a_{2D}}K_0(r\/a_{2D})$, in which $K_0(x)$ is the modified Bessel function, and for the scattering state is $\\psi_q(r) = J_0(qr) -\\frac{i f(q)}{4} H^{(1)}_0 (qr)$, in which $J_0(x)$ is the Bessel function of the first kind and $H^{(1)}_0(x)$ is the Hankel function of the first kind\\cite{Petrov2001}. $f(q)$ is the scattering amplitude between the state $|-7\/2\\rangle$ and the final state $|-5\/2\\rangle$. We compute the momentum-resolved rf spectrum for the dissociation from the bound state to the scattering state, averaging over a thermal distribution of the center-of-mass momenta of the initial pairs using Monte-Carlo sampling. From the momentum-resolved rf spectrum we calculate the effective mass $m^*$ and the wave vector $k^*$ using the same fitting routines as for the experimental data. This model of tightly bound pairs in the normal state includes the correct short-range physics but neglects many-body pairing, interactions between atoms and between pairs, as well as quantum statistical effects. Therefore, we do not expect quantitative agreement in the strongly interacting regime or on the BCS side of the resonance.\n\n\n\\vspace{0.5 cm}\n\n We thank A. Georges, C. Kollath, D. Pertot, D. Petrov, M. Randeria, W. Zwerger, and M. Zwierlein for discussions. The work has been supported by {EPSRC} (EP\/G029547\/1), Daimler-Benz Foundation (B.F.), Studienstiftung, and DAAD (M.F.).\n\n The authors declare that they have no competing financial interests.\n\n The experimental setup was devised and constructed by M.F., B.F., E.V., and M.K.; data taking was performed by M.F., B.F., E.V., and M.Kos.; data analysis was performed by M.F., B.F., and M.Kos.; numerical modelling was performed by B.F.; and the manuscript was written by M.K. with contributions from all coauthors.\n\n Correspondence and requests for materials should be addressed to M.K.~(email: mk540@cam.ac.uk).\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}